# Anisotropic body compliance facilitates robotic sidewinding in complex environments

**Authors:** Velin Kojouharov, Tianyu Wang, Matthew Fernandez, Jiyeon Maeng, Daniel I. Goldman
**Published:** 2023-09-24T02:59:42Z
**Link:** http://arxiv.org/abs/2309.13532v1
###### Abstract
Sidewinding, a locomotion strategy characterized by the coordination of lateral and vertical body undulations, is frequently observed in rattlesnakes and has been successfully reconstructed by limbless robotic systems for effective movement across diverse terrestrial terrains. However, the integration of compliant mechanisms into sidewinding limbless robots remains less explored, posing challenges for navigation in complex, rheologically diverse environments. Inspired by a notable control simplification via mechanical intelligence in lateral undulation, which offloads feedback control to passive body mechanics and interactions with the environment, we present an innovative design of a mechanically intelligent limbless robot for sidewinding. This robot features a decentralized bilateral cable actuation system that resembles organismal muscle actuation mechanisms. We develop a feedforward controller that incorporates programmable body compliance into the sidewinding gait template. Our experimental results highlight the emergence of mechanical intelligence when the robot is equipped with an appropriate level of body compliance. This allows the robot to 1) locomote more energetically efficiently, as evidenced by a reduced cost of transport, and 2) navigate through terrain heterogeneities, all achieved in an open-loop manner, without the need for environmental awareness.
## I Introduction
Sidewinding serves as the primary locomotion strategy for several desert-dwelling viper species [1, 2, 3, 4], and for other taxa navigating granular surfaces [5, 6]. During sidewinding, snakes generate vertical and lateral undulations in the body, i.e., propagating two waves in the vertical and horizontal planes simultaneously, following a two-wave template [7, 8]. This coordinated body movement leads to the formation of alternating body lifting and static contact, providing traction on the substrate for robust locomotion.
Sidewinding is of great interest for limbless robots (snake robots) to replicate [9, 10, 11, 12]. Unlike lateral undulation, which requires drag anisotropy to generate thrust, typically achieved with wheels [13, 14] or scales [15], sidewinding is a form of locomotion capable of producing translational movement under isotropic friction condition. In sidewinding, instead of maintaining consistent body contact with the substrate as in lateral undulation, adjusting the coordination between vertical and horizontal waves enables the body to establish and break contact with the substrate. This feature facilitates the design and planning of contact patterns for effective and robust locomotion [16, 17]. However, research on robotic sidewinding has predominantly focused on homogeneous substrates, while negotiating obstacles during sidewinding remains less explored and challenging.
As compliant body behaviors have been observed in sidewinder rattlesnakes during obstacle negotiation in previous research, it is hypothesized that, in addition to the modulation of gait parameters, robotic sidewinders require body compliance to navigate obstacle-rich environments [18]. Previously, a serially linked (joint actuated) limbless robot was used to model the sidewinding rattlesnakes with the method of amplitude modulation [18] but ultimately failed to replicate the compliant behaviors exhibited by the snakes when faced with an array of obstacles, due to the lack of sensing capability. This further motivated the idea that compliance is key to sidewinding through obstacle-rich terrains.
While compliant sidewinding remains less explored, one major approach to achieving compliant lateral undulatory locomotion in limbless robots is through "computational intelligence," which involves real-time tuning of the body shape in response to obstacles based on proprioceptive sensory feedback (e.g., vision [19, 20], contact sensing [21, 22], and joint torque sensing [23, 24]). Recent studies
Fig. 1: Mechanically intelligent limbless robot, inspired by sidewinding snakes, capable of performing sidewinding locomotion in diverse, rheologically complex terrestrial environments. (**A**) The sidewinding behavior observed in rattlesnakes. (**B**) The sidewinding locomotion of the robot on granular media. (**C**) A diagram of sidewinding motion. Gray areas in the body indicate static contact with the substrate, and white areas represent body segments lifted and in motion. Gray rectangles denote tracks. The red arrow shows the center of mass direction of motion. Reproduced from [8]. (**D**) A diagram of the vertical and horizontal waves propagating from head to tail in sidewinding, characterized by a \(\pi/2\) phase difference. Grey areas denote static contact. Reproduced from [8].
have shown that "physical intelligence" (PI) or "mechanical intelligence" (MI) [25] can be another means of achieving compliant limbless locomotion in complex environments. This approach offloads the complexity of computation and control onto passive body mechanics [26, 27, 28]. Specifically, our prior work [28] introduced a bilaterally actuated, cable-driven limbless robot inspired by organismal muscle actuation mechanisms. Through a control scheme for programmable body compliance, we showed that MI simplifies locomotion control for lateral undulation in complex terrestrial terrains.
Inspired by the control simplification achieved through MI in lateral undulation, we hypothesize that MI can similarly enhance obstacle navigation in sidewinding. To validate our hypothesis, we devised a novel 3D cable-driven limbless robot for sidewinding and developed a control scheme for variable body compliance. Through robophysical experiments, we compared the robot's sidewinding performance across varying levels of body compliance and observed that MI emerges when the robot is programmed with an appropriate degree of body compliance, facilitating the negotiation of heterogeneities. Further, by measuring the cost of transport, we demonstrated that MI improves sidewinding energy efficiency.
## II Robot Design and Control
To test our hypothesis, we designed a modular limbless robot. The robot consists of a series of 12 modules connected by 11 passive hinge joints (total length 1.31 m). There are two types of joints on the robot: vertical bending joints and lateral bending joints, each with one rotational degree of freedom in its respective plane. The combination of these two joint types allows the robot to simultaneously propagate waves in the horizontal and vertical planes, which is necessary to produce a sidewinding gait. The vertical and lateral joints are evenly spaced along the body: joints 3, 6, and 9 are vertical bending joints, and the remaining 8 are lateral bending joints (Fig. 2A). The larger number of lateral bending joints allows the robot to achieve much higher curvature in the horizontal plane than in the vertical plane, similar to what has been observed in sidewinding rattlesnakes [8]. This gives the robot an advantage in replicating the snake's gaits compared to previous sidewinding limbless robots that use alternating vertical and lateral bending modules [16, 7].
### _Module Components_
Aside from the orientation of the joints (vertical vs. lateral bending), all modules are built identically (length of 10 cm and diameter of 7.5 cm). Each module has a 3D-printed PLA case that houses one DYNAMIXEL 2XL430-W250-T (ROBOTIS), which packages 2 independently controlled servo motors. Each servo motor has a pulley (9.5 mm inner diameter) that is spooled with a non-elastic fishing line (Rikimura) which has negligible shape memory and deformation response to stretching. The other end of each of the two lines is attached to the following case.
### _Bilateral Cable Actuation_
A majority of existing limbless robots employ a "joint actuation" mechanism which actuates each joint in the spine with a rotary motor [29, 30, 31, 32]. Alternatively, bilateral cable actuation has recently been used in the design of limbless robots as a way to introduce compliance [27, 28]. The sidewinding robot presented in this work features a decentralized bilateral actuation mechanism, i.e., each joint is actuated with two independently controlled cables. Thus, the robot moves through coordinating the shortening and lengthening of each cable.
### _Power and Communication_
The robot is powered by a DC power supply with 11.1 V and receives control signals transmitted from a PC via U2D2 (ROBOTIS). Each servo motor is connected in series with internal wiring running through the joints, resulting in minimal electrical harnessing along the robot's body. The power and communication lines are tied together to create the tether for the robot.
### _Sidewinding Gait Template_
To implement a sidewinding gait on our robot, we used a two-wave template that is widely used in sidewinding
Fig. 2: Detailed mechanical design of a bilateral cable-driven limbless robot for sidewinding. **(A)** Computer Aided Design (CAD) representation of the robot. The design features 8 lateral bending joints (cyan) and 3 vertical bending joints (pink) **(B)** Picture of the robot with zoomed-in view of 2 joints – one vertical bending and one lateral bending. **(C)** Picture and labeled schematic of a single robot module.
robots [7, 8, 18],
\[\alpha_{H}(i,t)=A_{H}\sin\left(2\pi\xi_{H}\frac{i}{N_{H}}-2\pi\omega t\right),\qquad \alpha_{V}(i,t)=A_{V}\sin\left(2\pi\xi_{V}\frac{i}{N_{V}}-2\pi\omega t-\frac{\pi}{2}\right), \tag{1}\]
where subscripts \(H\) and \(V\) refer to horizontally and vertically oriented motors, respectively; \(\alpha\) represents the joint angle; \(i\) is the joint index; \(t\) is time; \(A\), \(\xi\), and \(\omega\) are the amplitude, spatial frequency, and temporal frequency of the corresponding wave; and \(N\) is the total number of joints in the corresponding plane.
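As a concrete illustration, the sketch below evaluates the two-wave template of Eq. 1 for the joint layout described in Section II (8 lateral and 3 vertical joints). The temporal frequency value and the use of NumPy are illustrative assumptions, not parameters reported above.

```python
import numpy as np

def sidewinding_template(t, A_H=np.deg2rad(75), A_V=np.deg2rad(25),
                         xi_H=1.0, xi_V=1.0, omega=0.1,
                         N_H=8, N_V=3):
    """Joint-angle targets from the two-wave template (Eq. 1).

    Returns horizontal and vertical joint angles (rad) at time t, with the
    vertical wave lagging the horizontal wave by pi/2.
    """
    i_H = np.arange(N_H)   # indices of the lateral bending joints
    i_V = np.arange(N_V)   # indices of the vertical bending joints
    alpha_H = A_H * np.sin(2*np.pi*xi_H*i_H/N_H - 2*np.pi*omega*t)
    alpha_V = A_V * np.sin(2*np.pi*xi_V*i_V/N_V - 2*np.pi*omega*t - np.pi/2)
    return alpha_H, alpha_V

# Example: joint angles half-way through a gait cycle (omega * t = 0.5).
alpha_H, alpha_V = sidewinding_template(t=5.0)
```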
To accurately form a joint angle \(\alpha\) as defined in Eq. 1, we need to adjust the lengths of the left and right cables around the joint so that they both are shortened. Since the deformation of cables in the robot is negligible, the lengths of the left cable (\(\mathcal{L}^{l}\)) and right cable (\(\mathcal{L}^{r}\)) are determined by the robot's geometry as shown in Fig. 3, following
\[\mathcal{L}^{l}(\alpha)=2\sqrt{L_{1}^{2}+L_{2}^{2}}\cos\left[-\frac{\alpha}{2}+\tan^{-1}\left(\frac{L_{1}}{L_{2}}\right)\right],\qquad \mathcal{L}^{r}(\alpha)=2\sqrt{L_{1}^{2}+L_{2}^{2}}\cos\left[\frac{\alpha}{2}+\tan^{-1}\left(\frac{L_{1}}{L_{2}}\right)\right]. \tag{2}\]
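A minimal sketch of Eq. 2, assuming the joint geometry parameters \(L_{1}\) and \(L_{2}\) of Fig. 3 are known; the numeric values below are placeholders rather than measurements of the robot.

```python
import numpy as np

def cable_lengths(alpha, L1, L2):
    """Geometric left/right cable lengths for a suggested joint angle (Eq. 2)."""
    r = np.sqrt(L1**2 + L2**2)
    theta = np.arctan2(L1, L2)           # tan^{-1}(L1 / L2)
    length_left = 2 * r * np.cos(-alpha / 2 + theta)
    length_right = 2 * r * np.cos(alpha / 2 + theta)
    return length_left, length_right

# Placeholder geometry (metres); the actual L1, L2 come from the module CAD.
l_left, l_right = cable_lengths(np.deg2rad(30), L1=0.02, L2=0.04)
```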
### _Programmable Body Compliance_
Based on Eq. 2, we can implement accurate body postures in the sidewinding gait on our robot. As mentioned previously, bilateral actuation allows us to program body compliance by loosening the cables in a coordinated manner. Extending the generalized compliance variable (\(G\)) defined in [28] to our sidewinding robot, we quantify the body compliance of the robot using \(G\). The cable length control scheme in this work is then given by
\[L_{H,i}^{l}(\alpha_{H,i})=\begin{cases}\mathcal{L}_{H,i}^{l}(\alpha_{H,i}),&\text{if }\alpha_{H,i}\leq-\gamma\\ \mathcal{L}_{H,i}^{l}[-\min(A,\gamma)]+l_{0}\cdot[\gamma+\alpha_{H,i}],&\text{if }\alpha_{H,i}>-\gamma\end{cases} \tag{3}\]
\[L_{H,i}^{r}(\alpha_{H,i})=\begin{cases}\mathcal{L}_{H,i}^{r}(\alpha_{H,i}),&\text{if }\alpha_{H,i}\geq\gamma\\ \mathcal{L}_{H,i}^{r}[\min(A,\gamma)]+l_{0}\cdot[\gamma-\alpha_{H,i}],&\text{if }\alpha_{H,i}<\gamma\end{cases}\]
\[L_{V,i}^{l}(\alpha_{V,i})=\mathcal{L}_{V,i}^{l}(\alpha_{V,i}),\qquad L_{V,i}^{r}(\alpha_{V,i})=\mathcal{L}_{V,i}^{r}(\alpha_{V,i}),\]
where superscripts \(l\) and \(r\) refer to left and right, respectively; \(\gamma\) is shorthand for \((2G-1)A_{H}\); and \(l_{0}\) is a design parameter that we fix throughout this work at 41.8 mm/rad. Following Eq. 3, the robot can achieve three representative compliance states with varied \(G\) (Fig. 4): 1) bidirectionally non-compliant (\(G=0\)), where each joint angle strictly follows the trajectory suggested by Eq. 1; 2) directionally compliant (\(G=0.5\)), where the joints are only allowed to be perturbed to form a larger angle than suggested; and 3) bidirectionally compliant (\(G=1\)), where the joints are allowed to be perturbed in both directions, in an anisotropic way regulated by Eq. 3. For a detailed discussion of this length control scheme we refer to [28]. Note that in this work, we only enable programmable compliance on the horizontal joints, whereas the vertical joints remain non-compliant (\(G=0\)) at all times.
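The cable-length control of Eq. 3 for a single horizontal joint can be sketched as below, reusing the `cable_lengths` function from the previous snippet. Here \(A\) is taken to be the horizontal amplitude \(A_{H}\), and \(l_{0}\) is the 41.8 mm/rad design parameter; treating it per joint in this form is our own simplification.

```python
L0 = 0.0418  # slack gain l_0 = 41.8 mm/rad, converted to m/rad

def compliant_cable_lengths(alpha_H, G, A_H, L1, L2):
    """Left/right cable lengths for a horizontal joint under compliance G (Eq. 3).

    cable_lengths(alpha, L1, L2) is the geometric length function of Eq. 2
    (defined in the previous sketch). For G = 0 both cables track the geometric
    lengths (non-compliant); larger G leaves slack so external forces can
    perturb the joint away from the suggested angle.
    """
    gamma = (2 * G - 1) * A_H
    # Left cable: taut only when the joint is bent far enough in its direction.
    if alpha_H <= -gamma:
        left = cable_lengths(alpha_H, L1, L2)[0]
    else:
        left = cable_lengths(-min(A_H, gamma), L1, L2)[0] + L0 * (gamma + alpha_H)
    # Right cable: mirror condition.
    if alpha_H >= gamma:
        right = cable_lengths(alpha_H, L1, L2)[1]
    else:
        right = cable_lengths(min(A_H, gamma), L1, L2)[1] + L0 * (gamma - alpha_H)
    return left, right
```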
## III Results
### _Flat Terrain Experiment_
Motivated by previous work showing that body compliance can improve the efficiency of lateral undulation in diverse environments [28], we started by testing the robot's sidewinding performance on flat terrain with varied values of the generalized compliance \(G\). In this experiment, we fixed the parameters in Eq. 1 as \(A_{H}=75^{\circ},\xi_{H}=1,A_{V}=25^{\circ},\xi_{V}=1\), with which the robot's body shape approximates that observed in rattlesnakes (see the supplementary video) [8]. We quantify performance using locomotion speed and mechanical cost of transport, quantities that are commonly used to study both biological and robotic locomotion [33, 34, 35].
We set up a similar experiment shown in Fig. 5 by running the sidewinding gait on the robot on a flat surface with
Fig. 4: Schematic of representative compliant robot states under varied generalized compliance \(G\): bidirectionally non-compliant (\(G=0\)), where a joint does not admit force in either direction so that the joint angle follows the template trajectory (dashed line); directionally compliant (\(G=0.5\)) where a joint only admits force that bends the joint further (to form a larger joint angle as shown by yellow region); and bidirectionally compliant (\(G=1\)), where a joint admits force in both directions in an anisotropic manner (to form either a smaller or larger joint angle as shown by yellow region). Reproduced from [28].
Fig. 3: Geometry of an individual joint for the calculation of cable lengths to form the suggested joint angle \(\alpha\). Reproduced from [28].
Coulomb friction (\(\mu\approx 0.7\)). We varied the generalized compliance of the lateral bending joints from \(G=0\) (fully rigid) to \(G=1.5\) (very compliant) in increments of 0.25. We ran three trials for each \(G\) value, and in each trial the robot sidewound for two gait cycles. We attached 13 markers evenly along the robot's body and recorded the robot's motion using an OptiTrack motion tracking system. We then averaged the markers' displacements to calculate the displacement of the robot's center of geometry. To calculate the mechanical cost of transport (\(c_{\text{mt}}\)), we used the equation \(c_{\text{mt}}=W/(mgd)\), where \(W\) is the work done by the cables, estimated from the torque sensor readings of the servo motors, \(mg\) is the robot's weight, and \(d\) is the displacement.
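For reference, a rough sketch of how \(c_{\text{mt}}\) could be computed from logged servo torques, pulley angles, and tracked marker positions. The positive-work approximation \(W=\sum\tau\,\Delta\theta\) and all variable names are our own assumptions, not the exact processing pipeline used here.

```python
import numpy as np

def cost_of_transport(torques, pulley_angles, marker_xyz, mass, g=9.81):
    """Mechanical cost of transport c_mt = W / (m * g * d).

    torques:       (T, n_motors) servo torque readings [N*m]
    pulley_angles: (T, n_motors) pulley angles [rad]
    marker_xyz:    (T, n_markers, 3) tracked marker positions [m]
    """
    # Positive mechanical work done by the cables: sum of tau * d(theta).
    d_theta = np.diff(pulley_angles, axis=0)
    work = np.sum(np.clip(torques[:-1] * d_theta, 0, None))
    # Displacement of the center of geometry (mean of all markers).
    centroid = marker_xyz.mean(axis=1)
    displacement = np.linalg.norm(centroid[-1] - centroid[0])
    return work / (mass * g * displacement)
```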
We found that, unlike in lateral undulation, body compliance can decrease the mechanical cost of transport when sidewinding in open, hard-ground environments. While the fully rigid body results in a slightly higher displacement (0.476 m/cycle) compared to the \(G=1\) robot (0.400 m/cycle), the work done by the pulleys at \(G=1\) is far less, resulting in a consistent decrease in the mechanical cost of transport as the generalized compliance increases. The value \(G=1\) was the local minimum of the cost of transport. Beyond \(G=1\), the robot can no longer maintain the desired contact pattern for effective sidewinding, resulting in much lower displacements per body cycle (at \(G=1.5\), the robot translates only 0.351 m/cycle). This result gave us the basis for selecting the generalized compliance values used in later experiments. Given that sidewinding efficiency tends to break down beyond \(G=1\), the following experiments compare three \(G\) values: 0, 0.5, and 1.
### _Obstacle Terrain Experiment Setup_
To verify our hypothesis that mechanical intelligence induced by body compliance can enhance obstacle navigation in sidewinding, we set up a model heterogeneous environment for the robot: a level pegboard base (\(L=2.4\) m, \(W=1.2\) m) with a row of obstacles (5 cm diameter PVC pipes), as depicted in Fig. 6A. In this series of experiments, we fixed the parameters in Eq. 1 as \(A_{H}=75^{\circ},\xi_{H}=1,A_{V}=25^{\circ},\xi_{V}=1\). These parameters were selected so that the ratio of the robot's displayed wavelength to the obstacle spacing roughly matches that observed in rattlesnakes (\(\sim\)0.8, see the supplementary video) [8]. Further, the robot is wrapped with a mesh skin (4 cm ID expandable sleeving, McMaster-Carr) to create a smoother contact surface between the robot and the environment.
### _Experiment with Varied Obstacle Spacing_
A total of 15 sets of trials were conducted: 5 different obstacle spacings (60, 65, 70, 75, and 80 cm), each with 3 different generalized compliance values (\(G=0,G=0.5,G=1\)). Given that the attack angle and initial condition of the body could affect the robot's ability to traverse the obstacles, we selected five different initial positions and orientations from which to start the gait for each set of trials. The criterion for success was for the entire body to clear the line connecting the centers of the obstacles. If the robot failed to clear this center line after 10 gait cycles, or if it became jammed between two obstacles, the trial was classified as a failure. For every set of trials, the traverse probability is the percentage of successful outcomes over the five initial positions.
Our experimental results indicate that, across different obstacle spacings, a more compliant body led to a higher traverse probability (Fig. 6C). Moreover, the robot with anisotropic bidirectional compliance outperforms the others because it allows the body joints to comply with the obstacles in different directions. We observed that in the bidirectionally compliant robot 1) interactions with the obstacles led to less drastic deviations from the robot's initial trajectory, and 2) body compliance allowed the robot to deform and squeeze through obstacle gaps tighter than its undeformed body. In contrast, the two primary failure modes observed with the non-compliant robot were: 1) the robot was not able to deform its body enough to squeeze between two obstacles, or 2) because the robot cannot absorb the impact of obstacle collisions, it rapidly reorients its body into an undesirable position, causing it to jam. Both failure modes are mitigated by increased compliance. Fig. 6F shows the average reorientation angle in successful trials for different \(G\) values. With \(G=0\), the average reorientation angle was \(115.5\pm 14.6\) degrees, with \(G=0.5\) it was \(69.6\pm 55.2\) degrees, and with \(G=1\) it was \(55.1\pm 44.5\) degrees. The average reorientation angle was lower for the more compliant robot because it locally deforms its body to mitigate harsh obstacle contacts that
Fig. 5: Sidewinding locomotion speed (red) and mechanical cost of transport \(c_{\text{mt}}\) (blue) as a function of body compliance \(G\). Locomotion speed is measured by the averaged center of mass displacement normalized by the body length of the robot over a gait cycle. Mechanical cost of transport is a unit-less quantity calculated by the work done by cables divided by the product of the robot’s weight and distance traveled. Error bars represent standard deviations. The inset shows a time lapse of the bilaterally compliant (\(G=1\)) robot sidewinding on hard ground.
cause reorientation. Further, across all trials, the robot with bidirectional compliance (\(G=1\)) had a lower average number of cycles to traverse (\(3.59\pm 1.73\) cycles in successful trials) compared to both the directionally compliant robot (\(3.96\pm 2.17\) cycles in successful trials) and the non-compliant robot (\(5.07\pm 2.61\) cycles in successful trials), as shown in Fig. 6E. Overall, increased body compliance helps to prevent and mitigate reorientation due to obstacle interaction and decreases the number of cycles needed for the robot to traverse the obstacle array.
Note that while body compliance shows advantages across experiments with varied obstacle spacings, by far the highest traverse probabilities occurred at 70 and 75 cm obstacle spacing, the same obstacle-spacing ratio observed in the biological experiments. We hypothesize that compliance alone is not sufficient for obstacle-rich environments when sidewinding. Instead, choosing "appropriate" gait parameters based on the heterogeneities present in the environment is also important. On the other hand, appropriate gait parameters alone cannot guarantee traversal, as the traverse probability for the \(G=0\) trials consistently remained below 20%. Thus, our results indicate that in order to achieve effective locomotion within complex environments, a sidewinding robot needs the synergy of computational intelligence (to select appropriate parameters) and mechanical intelligence (for passive body mechanics and compliant body-environment interactions).
### _Experiment with Varied Gait Parameters_
To further validate that the effect of body compliance is not exclusive to specific gait parameter choices, we varied the spatial frequency and amplitude of the horizontal wave and ran experiments at the 70 cm obstacle spacing. Without loss of generality, we chose \(A_{H}=82.5^{\circ},75^{\circ},67.5^{\circ}\) and \(\xi_{H}=1.1,1,0.9\), respectively, while \(A_{V}\) and \(\xi_{V}\) remained unchanged. As in the previous tests, each of these experiments was repeated with 5 different initial conditions, and we compared the robot's performance with no compliance (\(G=0\)) and with anisotropic bidirectional compliance (\(G=1\)).
Remarkably, the bidirectionally compliant robot produced traverse probabilities larger than 60% for all parameter combinations, as shown in Fig. 6D, whereas the non-compliant robot failed to get through in every trial for all three gait parameter combinations. This result suggests that with an appropriate level of body compliance \(G\), robot performance can remain robust over a wider range of parameters. Even without an "optimal" choice of gait parameters for a particular environment, body compliance can help facilitate effective locomotion.
### _Natural Terrain Experiment_
Lastly, we conducted a series of open-loop outdoor experiments to examine the potential applications of sidewinding with anisotropic bidirectional compliance in complex natural
Fig. 6: Robot performance when sidewinding through an array of obstacles. (**A**) Diagram of the experimental setup. Obstacle spacing \(d\), robot initial condition, robot wavelength \(\lambda\) and the generalized compliance parameter \(G\) were varied for different experiments. (**B**) Time-lapse photos of (i) a failure (\(G=0\)) and (ii) a success (\(G=1\)). Success is the entire robot body passing the center line intersecting the obstacles. (**C**) The traverse (success) probability of the robot for different (\(G\)) values across different obstacle spacing (normalized by the robot’s wavelength). (**D**) The traverse (success) probability of the robot for different (\(G\)) values with different robot wavelengths and fixed obstacle spacing of 70cm (the axis is obstacle spacing normalized by the robot’s wavelength). We tested three different gaits with \(A_{H}=82.5,75,67.5\) deg and \(\xi_{H}=1.1,1.0,0.9\), respectively, which are noted by their corresponding wavelengths of the robot body shape \(\lambda=79,91,104\) cm. (**E**) The average traverse time (in number of cycles) to traverse through the obstacles for each successful trial, sorted by \(G\) value. (**F**) The average robot reorientation angle (in degrees) for each successful trial, sorted by \(G\) value.
terrains. We tested the robot in two different terrains: 1) pine straw with small ferns and 2) coarse granular media. These environments imitate what the robot could encounter during future applications such as planetary exploration, environmental monitoring, and open-field search-and-rescue tasks. Each of the trials was performed with bidirectional compliance (\(G=1\)) in the horizontal bending joints and non-compliant vertical bending joints. Similar to the observations in indoor experiments, bidirectional compliance allowed for effective negotiation of irregularities, as the robot body is more likely to deform and deflect from the harsh contact with surrounding obstacles. Our outdoor experiments demonstrated the robot's locomotion capability and potential for practical applications.
## IV Discussion and Conclusion
Obstacle negotiation in complex, natural environments remains challenging for limbless robots. Prior research on limbless robot locomotion has attempted to tackle these challenges through a variety of methods, relying on online gait parameter tuning based on precise real-time proprioceptive sensory feedback of the environment [19, 20]. These methods often require high onboard computational capabilities or sufficient prior knowledge of the environment for effective locomotion. In contrast, recent work in lateral undulation has focused on offloading the computational complexity needed for obstacle negotiation to mechanically intelligent, compliant robot design [28].
In this work, we focused on introducing compliance to sidewinding to simplify the control needed in complex terrains. By incorporating compliance into the robot, we simplify the control process, enabling the robot to sidewind effectively with open-loop control over a range of heterogeneities in the environment. Our approach utilizes a traveling wave template for both vertical and horizontal waves that exhibits low sensitivity to variations in wave parameters. We observed across the various robot sidewinding experiments that introducing compliance achieves both more energetically efficient locomotion on hard ground and improved navigation through heterogeneities in both lab and outdoor terrains. We hypothesize that when sidewinding through obstacle-rich environments, compliance in the lateral wave helps minimize the effect of harsh robot-environment interactions, allowing the robot to either 1) squeeze through obstacles or 2) brush past them without large changes in body orientation. The robot's ability to exploit its compliance to improve open-loop sidewinding performance across these various terrains makes it mechanically intelligent.
Note that in this work, the generalized compliance parameter (\(G\)) was only varied in the lateral joints, not the vertical joints. Sidewinding requires careful coordination of the horizontal and vertical waves along the body to establish and break contact with the substrate. Implementing the same compliance strategy in the vertical joints as in the lateral joints negatively affected the robot's ability to sidewind. We hypothesize that this is because the contact pattern determined by the suggested gait is disturbed by unwanted ground contact introduced by vertical compliance: instead of remaining above the ground, the vertical bending joints "droop down". However, better compliance strategies for the vertical wave may exist that preserve the contact pattern while reducing energy consumption.
This work also builds a strong framework for designing multi-modal compliant limbless robots capable of multiple modes of limbless locomotion (e.g., sidewinding, lateral undulation, etc.). Previous work has shown that compliance can improve obstacle negotiation in highly obstacle-dense terrains when using lateral undulation [28]. This work suggests that the same bilateral actuation strategy can be used to aid sidewinding in both open and obstacle-rich environments. By designing a robot capable of exploiting body compliance to be mechanically intelligent in both sidewinding and lateral undulation, we can get closer to creating agile, robust, and capable limbless robots for real-world applications such as search-and-rescue, planetary exploration, and inspection.
More generally, modeling the mechanics and interactions involved in biological limbless locomotion is challenging, making limbless robots good tools (as "robophysical" models) for revealing fundamental principles underlying limbless locomotion [36, 37, 28, 38, 7]. To this end, this robot has the potential to serve as a model to study snake sidewinding. With a bilaterally cable-driven robot, we can systematically test locomotor performance with varied gait parameters and levels of body compliance, which is impossible to carry out with animals. Through comparisons across robotic and biological systems, this robot can help us learn about sidewinding snakes' kinematics, dynamics, and even physiology, deepening our understanding of their locomotion in complex terrains.
Fig. 7: The robot demonstrates its capability of sidewinding in complex natural environments with bidirectional compliance (\(G=1\)). (**A**) Time-lapse images of the robot traversing a pine straw and fern environment. (**B**) Time-lapse images of the robot traversing a coarse granular media environment.
# Fake News Detectors are Biased against Texts Generated by Large Language Models

**Authors:** Jinyan Su, Terry Yue Zhuo, Jonibek Mansurov, Di Wang, Preslav Nakov
**Published:** 2023-09-15T18:04:40Z
**Link:** http://arxiv.org/abs/2309.08674v1
###### Abstract
The spread of fake news has emerged as a critical challenge, undermining trust and posing threats to society. In the era of Large Language Models (LLMs), the capability to generate believable fake content has intensified these concerns. In this study, we present a novel paradigm to evaluate fake news detectors in scenarios involving both human-written and LLM-generated misinformation. Intriguingly, our findings reveal a significant bias in many existing detectors: they are more prone to flagging LLM-generated content as fake news while often misclassifying human-written fake news as genuine. This unexpected bias appears to arise from distinct linguistic patterns inherent to LLM outputs. To address this, we introduce a mitigation strategy that leverages adversarial training with LLM-paraphrased genuine news. The resulting model yielded marked improvements in detection accuracy for both human and LLM-generated news. To further catalyze research in this domain, we release two comprehensive datasets, GossipCop++ and PolitiFact++, thus amalgamating human-validated articles with LLM-generated fake and real news.
Footnote †: Equal contribution.
## 1 Introduction
_In an age of universal deceit, telling the truth is a revolutionary act._
-- George Orwell
The dissemination of false information can cause chaos, hatred, and trust issues, and can eventually hinder the development of society as a whole (Wasserman and Madrid-Morales, 2019). Among them, fake news is often used to manipulate certain populations and had a catastrophic impact on multiple events, such as Brexit (Bastos and Mercea, 2019), the COVID-19 pandemic (van Der Linden et al., 2020), and the 2022 Russian assault on Ukraine (Mbah and Wasum, 2022). To spread such fake news, adversaries conventionally will deploy propaganda techniques and manually write the fake news (Huang et al., 2022).
Creating convincing disinformation manually is a labor-intensive and time-consuming process, which may limit the scale and speed at which such content can be produced. This makes it less efficient and desirable for adversaries who aim for widespread and rapid dissemination of false information (Zellers et al., 2019). With the development of language models like GPT-2 (Radford et al., 2019) and BART (Lewis et al., 2019), more and more adversaries tend to utilize these models to automate fake news curation, resulting in a surge in the amount of fake news (Weidinger et al., 2021). The recent advances in large language models (LLMs) have exacerbated the situation, as their increased capabilities can generate more convincing and nuanced disinformation at an unprecedented scale (Pan et al., 2023; Zhuo et al., 2023). For instance, the emergence and application of LLMs (Brown et al., 2020; Touvron et al., 2023; Li et al., 2023) like GPT-3 and Chat-GPT have markedly impacted the media landscape. From January 1, 2022, to April 1, 2023, there was a dramatic surge in synthetic articles, especially on misinformation news websites (Hanley and Durumeric, 2023). Relative to the previous year, there was an increase of 79.4% in the production of synthetic news articles on mainstream websites. However, this pales compared to the astounding 342% increase seen on misinformation-oriented sites over the same period.
With the increasing concerns that humans are likely deceived or misled by LLM-generated fake news, there is an urgent need to study how the era of LLMs can affect fake news detection. Previous works have only trained fake news detectors to detect human-written or language-model-generated
fake news (Figueira and Oliveira, 2017; Zellers et al., 2019; Schuster et al., 2020). Compared to these studies, we consider a more realistic scenario where the detectors must identify both human-written and LLM-generated fake news. Intuitively, we add the same amount of LLM-generated fake news as human-written to the training and test sets. Different from Zellers et al. (2019) and Pagnoni et al. (2022) aiming to defend against synthetic fake news via specific designs, our goal is to examine the performance of generic fake news detectors in detecting naturally written fake news by LLMs and humans. To synthesize the natural fake news via LLMs, we design a systematic framework to instruct LLMs with identifiable structures. We choose ChatGPT as the backbone model, as it is one of the most representative instruction-tuned LLMs that can generate human-like context.
Through our experiments on various fake news detectors like BERT, RoBERTa and ELECTRA (Khan et al., 2021), we surprisingly find that they can detect LLM-generated fake news better than human-written fake news, in contrast to previous concerns about the challenges of identifying LLM-generated fake news (Pan et al., 2023). To further understand this finding, we additionally paraphrase human-written real news via ChatGPT and evaluate whether the detectors can correctly identify both LLM-paraphrased and human-written real news. We find that fake news detectors perform much worse on LLM-paraphrased real news than on human-written real news. Based on these observations, we conclude that **fake news detectors are biased towards LLM-generated texts** and tend to classify them as fake news regardless of their truthfulness.
To mitigate such biases, we first study whether fake news detectors may take 'shortcuts' to learn the LLM-generated fake news. Inspired by content-based features of news articles Horne and Adali (2017); Norregaard et al. (2019), we analyze the News Landscape (NELA) features and provide several hypotheses based on the statistical evidence. We demonstrate that the bias can be mitigated by training on selected features with two regression detectors. We further propose a debiasing technique for fake news detectors that leverages adversarial training with LLM-paraphrased real news. We show that our approach can effectively mitigate the biases and narrow the performance gap between LLM-generated and human-written texts.
Our contributions can be summarized as follows:
* We introduce a new and realistic setting for evaluating fake news detectors. In this scenario, detectors must identify both human-written and LLM-generated fake news. This reflects real-world situations more accurately, considering the increasing usage of LLMs in disseminating disinformation. Testing detectors against human and LLM-generated content allows us to assess their resilience and effectiveness in an evolving fake news landscape.
* Our analysis uncovers surprising findings. Despite existing concerns about the ability of fake news detectors to identify LLM-generated fake news, we find these detectors demonstrate a bias. They disproportionately classify LLM-generated content as fake news, even when it is truthful.
* We delve deeper into these observations, suggesting potential explanations for the detected bias via content-based NELA features. We propose that these detectors may learn 'shortcuts', identifying fake news based on unique linguistic features of LLM-generated texts.
* On the basis of this bias, we develop a mitigation technique leveraging adversarial training (Bai et al., 2021) with LLM-paraphrased real news. This strategy effectively reduces biases, enhancing the performance of fake news detectors on both human-written and LLM-generated content.
* We also provide two new datasets, GossipCop++ and PolitiFact++, for the research community. Along with the original human-written news articles, these datasets contain 4,084 and 97 high-quality LLM-synthesized fake news articles, respectively. We believe they can serve as benchmarks and valuable resources for further research into developing and evaluating fake news detectors.
## 2 Related Work
### Fake News Synthesis
There has been a focus in prior research on using deep learning to produce misinformation with the aim of facilitating the spread of machine-generated fake news. Zellers et al. (2019) leverage GPT-2 (Radford et al., 2019) to pre-train on a large-scale
news corpus and show that the generator effectively synthesizes fake news. Later, Huang et al. (2023) improve the controllability of the synthesized fake news by conditioning the generation on knowledge elements, including entities, relations and events, extracted from the original news article. Shu et al. (2021) enhance the factuality of the generated article by introducing a fact retriever that fetches relevant information from external corpora. Mosallanezhad et al. (2022) exploit adversarial reinforcement learning to generate topic-preserving fake news articles. These studies have developed methods for generating fake news that is hard for humans to distinguish from real news. More recently, Huang et al. (2023) incorporated propaganda techniques to synthesize fake news via data augmentation Feng et al. (2021); Zhuo et al. (2023). However, these approaches require costly designs to synthesize the text. In this work, we utilize large language models to synthesize fake news via prompting. Compared to the prior studies, we need no model training while guaranteeing the quality of the synthesized fake news.
### Fake News Detection
Previous works on fake news detection have mainly explored two directions: content-based and knowledge-based detection Manzoor et al. (2019). For content-based detection, researchers have studied how well the pre-trained classifiers can detect machine-generated text Su et al. (2023). Zellers et al. (2019) show that finetuning RoBERTa can detect synthesized fake news with 95% accuracy and that the performance transfers across decoding strategies and to smaller generators. Ippolito et al. (2020) find that the best-performing detectors are those that deceive humans because decoding strategies must balance fluency with lexical and syntactic novelty. Different from content-based detection, knowledge-based detection emphasizes auxiliary knowledge for news verification. These methods typically utilize external knowledge about entity relationships or social knowledge about online posts for fake news detection. While existing methods have demonstrated the usefulness of heterogeneous social relations and external information Shu et al. (2021); Sheng et al. (2021), they either do not model the interactions between the news content and different types of knowledge data or model them at a coarse-grained (e.g., sentence) level, which limits their performance. In this study, we focus on content-based detection and use a series of representative pre-trained detectors to detect both large-language-model-generated and human-written fake news.
## 3 Task Definition
Neural fake news detection, an ever-evolving domain, has witnessed significant shifts with the emergence of LLMs. It is imperative to understand the dataset compositions and the challenges that arise now that LLMs have emerged. Therefore, we outline the task definitions across two eras, namely the _Pre-LLM Era_ and the _LLM Era_.
### Pre-LLM Era: Traditional Neural Fake News Detection
In the era of Pre-LLM, the training dataset conventionally contains two types of data, human-written real news (\(\mathcal{D}_{HR}\)) and fake news (\(\mathcal{D}_{HF}\)),
\[\mathcal{D}_{HR}=\{(x_{1}^{HR},y_{1}^{HR}),(x_{2}^{HR},y_{2}^{HR}),\dots,(x_{N }^{HR},y_{N}^{HR})\} \tag{1}\]
\[\mathcal{D}_{HF}=\{(x_{1}^{HF},y_{1}^{HF}),(x_{2}^{HF},y_{2}^{HF}),\dots,(x_{N }^{HF},y_{N}^{HF})\} \tag{2}\]
where \(x_{i}\) represents the \(i^{th}\) news article, \(y_{i}\) denotes the label for \(x_{i}\), with \(y_{i}\in\{0,1\}\) (0 for real, 1 for fake) and \(N\) is the total number of articles in each dataset.
Historically, adversarial attempts to fabricate fake news predominantly stemmed from humans, leading to a dataset composition reflecting this reality. Hence, the neural fake news detector \(M(x;\theta,\mathcal{D})\) is tailored to discern between authentic human-written real news and fake news, training on \(\mathcal{D}_{HR}\) and \(\mathcal{D}_{HF}\) with the following loss function:
\[Loss(\theta)=\sum_{i=1}^{N}\mathcal{L}(M(x_{i};\theta,\mathcal{D}_{HR}\cup \mathcal{D}_{HF}),y_{i}), \tag{3}\]
where \(\mathcal{L}\) is a typical binary cross-entropy loss.
### LLM Era: Advanced Fake News Detection
The introduction of LLMs ushered in an era of amplified complexities, resulting in the importance of additional training on LLM-generated fake news (\(\mathcal{D}_{MF}\)):
\[\mathcal{D}_{MF}=\{(x_{1}^{MF},y_{1}^{MF}),(x_{2}^{MF},y_{2}^{MF}),\dots,(x_{N}^{MF},y_{N}^{MF})\}, \tag{4}\]
where \(x_{i}^{MF}\) represents the \(i^{th}\) LLM-generated news article, \(y_{i}^{MF}\) denotes the label for \(x_{i}^{MF}\), with \(y_{i}^{MF}\in\{0,1\}\) (1 in this case, since the articles are fake), and \(N\) is the total number of articles in the dataset.
In this contemporary setting, the prolific capabilities of LLMs manifest in their ability to craft narratives that rival human-written content in quality and authenticity. The detectors trained solely on traditional datasets may inadvertently overlook the nuances of LLM-generated content. Therefore, in this setting, the fake news detectors will be trained on the combination of \(\mathcal{D}_{HR}\), \(\mathcal{D}_{HF}\) and \(\mathcal{D}_{MF}\),
\[Loss(\theta^{\prime})=\sum_{i=1}^{N}\mathcal{L}\big(M(x_{i};\theta^{\prime},\mathcal{D}_{HR}\cup\mathcal{D}_{HF}\cup\mathcal{D}_{MF}),y_{i}\big). \tag{5}\]
This model ensures holistic and robust detector training. By integrating both human and LLM-generated fabrications, fake news detectors are better equipped to navigate the multifaceted challenges of the current fake news paradigm.
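A minimal PyTorch sketch of the LLM-era objective in Eq. 5: the detector is trained on the union \(\mathcal{D}_{HR}\cup\mathcal{D}_{HF}\cup\mathcal{D}_{MF}\) with a binary cross-entropy loss. The random feature vectors and the linear detector below are toy stand-ins for tokenized articles and the transformer classifiers used later.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the three corpora; in practice x would be a tokenised news
# article, and labels follow the paper's convention (0 = real, 1 = fake).
torch.manual_seed(0)
d_hr = [(torch.randn(16), 0.0) for _ in range(8)]   # D_HR: human-written real
d_hf = [(torch.randn(16), 1.0) for _ in range(8)]   # D_HF: human-written fake
d_mf = [(torch.randn(16), 1.0) for _ in range(8)]   # D_MF: LLM-generated fake

train_set = d_hr + d_hf + d_mf                      # Eq. 5: train on the union
detector = nn.Linear(16, 1)                         # placeholder for M(x; theta')
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)

for epoch in range(3):
    for x, y in train_set:
        loss = criterion(detector(x).squeeze(), torch.tensor(y))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```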
## 4 Prompting Large Language Models to Generate Fake News
### ChatGPT As A Fake News Generator
To generate fake news using LLMs, we first elucidate the strategies an adversary might plausibly leverage to fabricate such deceptive content.
Economically, ChatGPT presents a compelling proposition. Unlike its counterparts, such as GPT-3 Brown et al. (2020), interfacing with ChatGPT via its web API or iOS application incurs no direct financial costs, positioning it as an economical vector for potential misinformation campaigns.
On the technical spectrum, there exists a range of open-source LLMs, notably LLaMA Touvron et al. (2023). Engaging these models for fake news synthesis necessitates profound technical expertise, given their substantial computational demands for local deployment. Beyond mastering their operational dynamics, effective deployment also hinges on specific hardware provisions, with a pronounced emphasis on GPUs, to realize their full potential.
In light of these factors, ChatGPT is delineated as the prime LLM for our investigative foray into fake news generation. A salient limitation of naive prompting, however, is the emergence of identifiable structures in the generated content. Such structures, characterized by recurrent formatting patterns or predictable metadata placements, can betray the machine-generated nature of the content, undermining its deceptive intent. Recognizing the impracticality of manual scrutiny over extensive datasets to negate these patterns, we introduce a refined methodology: _Structured Mimicry Prompting_ (SMP). SMP employs a tailored prompting paradigm to discretely process the core narrative and the article's title, as depicted in Figure 1. This strategic approach enables LLMs to emulate the nuance and depth inherent to authentic misleading narratives.
### Fake News Detection Datasets in LLM Era
When selecting the data source to construct our datasets in the LLM era, we consider the following two criteria. First, the news articles must be human-written and have been widely used in the Pre-LLM era. This ensures that the seed human fake news in SMP has high quality. Second, the news events described in the articles must be important to the general audience. Motivated by these two criteria, we repurpose the fake news data repository FakeNewsNet Shu et al. (2020) as our data source. FakeNewsNet contains two datasets, PolitiFact and GossipCop. To improve the data quality and ease the fake news generation, we filter out news articles that do not contain titles or descriptions. By adopting the SMP prompting technique via ChatGPT, we compose 97 and 4,084 LLM-generated fake news for PolitiFact and GossipCop, respectively. Combining the original datasets, we propose two new datasets, PolitiFact++ and GossipCop++.
In order to verify the effectiveness of SMP, we use the MAUVE metric Pillutla et al. (2021) to compute the distribution similarity between the human-written fake news and the LLM-generated ones. By naively prompting ChatGPT with "Generate a fake news article with a title" on PolitiFact and GossipCop, we collect all the generated outputs and compute the MAUVE scores. We find that the MAUVE scores for PolitiFact and GossipCop are 3.1% and 1.2%, respectively. By utilizing SMP with ChatGPT, we observe that the MAUVE scores for PolitiFact and GossipCop are 72.5% and 71.8%, respectively, indicating that the LLM-generated fake news is much more closely aligned with the human-written fake news, compared to the outputs produced with naive prompting.
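For reference, a sketch of this MAUVE computation, assuming the open-source `mauve-text` package; the article lists below are placeholders for the human-written and SMP-generated fake news.

```python
import mauve  # pip install mauve-text

# Placeholder article lists; p_text would hold human-written fake news and
# q_text the SMP-generated counterparts.
human_fake = ["Human-written fake article 1 ...", "Human-written fake article 2 ..."]
llm_fake = ["SMP-generated fake article 1 ...", "SMP-generated fake article 2 ..."]

out = mauve.compute_mauve(
    p_text=human_fake,
    q_text=llm_fake,
    max_text_length=256,   # truncation length for the GPT-2 featuriser (assumed)
    device_id=0,           # GPU index (assumed available)
    verbose=False,
)
print(f"MAUVE: {out.mauve:.3f}")  # higher values indicate more similar distributions
```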
## 5 Experiment Setup
In our experiments, we aim to (1) systematically study the performance of fake news detectors in the LLM era, (2) examine the issues of these fake news detectors, and (3) mitigate these identified issues.
### Datasets
We use PolitiFact++ and GossipCop++ as the training and test dataset, respectively, which are
| Dataset | Model | Size | Acc (HR) | Acc (HF) | Acc (MF) | Acc (All) | F1 | Recall | Precision | AUROC |
|---|---|---|---|---|---|---|---|---|---|---|
| GossipCop++ | RoBERTa | Large | 80.91 | **77.97** | **99.88** | 84.91 | 85.50 | **88.92** | 82.32 | 94.33 |
| GossipCop++ | RoBERTa | Base | 85.56 | 69.65 | 99.76 | 85.13 | 85.06 | 84.70 | 85.43 | 92.75 |
| GossipCop++ | BERT | Large | 87.39 | 70.13 | 99.39 | 86.08 | 85.89 | 84.76 | 87.05 | 92.67 |
| GossipCop++ | BERT | Base | 87.45 | 66.59 | 99.27 | 85.19 | 84.85 | 82.93 | 86.86 | 91.56 |
| GossipCop++ | ELECTRA | Large | 80.17 | 71.73 | 99.88 | 82.99 | 83.45 | 85.80 | 81.23 | 91.84 |
| GossipCop++ | ELECTRA | Base | 80.05 | 63.53 | 99.63 | 83.81 | 83.44 | 81.58 | 85.39 | 90.83 |
| GossipCop++ | ALBERT | Large | 92.96 | 56.43 | 98.53 | 85.22 | 83.98 | 77.48 | 91.67 | 90.16 |
| GossipCop++ | ALBERT | Base | 85.68 | 59.24 | 97.92 | 82.13 | 81.47 | 78.58 | 84.58 | 88.72 |
| GossipCop++ | DeBERTa | Large | 92.59 | 67.56 | 99.88 | **88.16** | **87.61** | 83.72 | **91.87** | **94.38** |
| GossipCop++ | DeBERTa | Base | **93.02** | 57.41 | 98.41 | 85.47 | 84.28 | 77.91 | 91.78 | 91.33 |
| PolitiFact++ | RoBERTa | Large | 30.41 | **68.04** | **100.00** | 57.22 | 66.26 | **84.02** | 54.70 | 79.32 |
| PolitiFact++ | RoBERTa | Base | 86.56 | 61.86 | 100.00 | 74.74 | 76.21 | 80.93 | 72.02 | 83.30 |
| PolitiFact++ | BERT | Large | 48.97 | 35.61 | 98.97 | 62.63 | 77.62 | 76.59 | 59.92 | 74.02 |
| PolitiFact++ | BERT | Base | 69.59 | 38.14 | 100.00 | 69.33 | 69.25 | 69.07 | 69.43 | 78.25 |
| PolitiFact++ | ELECTRA | Large | 63.92 | 62.89 | 100.00 | 72.68 | 74.88 | 81.44 | 69.30 | 83.94 |
| PolitiFact++ | ELECTRA | Base | 82.47 | 50.52 | 100.00 | 78.78 | 78.07 | 75.26 | 81.11 | 87.04 |
| PolitiFact++ | ALBERT | Large | **90.72** | 29.90 | 98.97 | 77.58 | 74.18 | 64.43 | 87.41 | 86.09 |
| PolitiFact++ | ALBERT | Base | 75.26 | 40.21 | 100.00 | 72.68 | 71.96 | 70.10 | 73.91 | 81.80 |
| PolitiFact++ | DeBERTa | Large | 70.62 | 53.61 | 100.00 | 73.71 | 74.50 | 76.80 | 72.33 | 82.43 |
| PolitiFact++ | DeBERTa | Base | 90.21 | 43.30 | 100.00 | **80.93** | **78.98** | 71.65 | **87.97** | **88.00** |

Table 1: Performance metrics of various fake news detectors on the GossipCop++ and PolitiFact++ datasets. **HR**: Human-written Real news. **HF**: Human-written Fake news. **MF**: LLM-generated Fake news.
Figure 1: SMP: Prompting LLMs to generate fake news articles.
proposed in Section 4.1. We show the details of the two datasets in Table 2.
### Fake News Detectors
We choose five widely adopted language models as fake news detectors: RoBERTa Liu et al. (2019), BERT Kenton and Toutanova (2019), ELECTRA Clark et al. (2019), ALBERT Lan et al. (2020), and DeBERTa He et al. (2020), each with Large and Base variants. These language models have demonstrated superior performance in classifying fake news articles. We train these models on A100 GPUs and use the same default hyperparameters as Pagnoni et al. (2022), with a learning rate of 1e-6 and training for 10 epochs.
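A sketch of this fine-tuning setup using the Hugging Face `transformers` Trainer; the toy dataset construction and the batch size are illustrative assumptions, while the learning rate and number of epochs follow the values stated above.

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

model_name = "roberta-large"  # any of the five detector families can be swapped in
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy stand-in for GossipCop++/PolitiFact++: fields "text" and "label" (0 real, 1 fake).
train_ds = Dataset.from_dict({"text": ["a real article ...", "a fake article ..."],
                              "label": [0, 1]})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=512)

train_ds = train_ds.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="detector",
    learning_rate=1e-6,             # hyperparameters reported above
    num_train_epochs=10,
    per_device_train_batch_size=8,  # assumed; not stated in the paper
    logging_steps=10,
)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```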
## 6 Results and Analysis
In this section, we present the results of our investigation and discuss the findings according to each research question (RQ).
### RQ1: How well can fake news detectors perform on PolitiFact++ and GossipCop++?
To evaluate the performance of selected fake news detectors, we report the accuracy of each part of the data (human-written real news, human-written fake news, and LLM-generated fake news), F1 scores, recalls, precisions and AUROCs in Table 1. Notably, DeBERTa variants outperform other models, registering an F1 score of 87.61 on GossipCop++ and 78.98 on PolitiFact++. A deeper dive into the accuracy metrics reveals a pronounced disparity in detecting human-written versus LLM-generated fake news. Remarkably, the detectors exhibit near-perfect accuracy in identifying LLM-generated fake news, yet falter significantly with human-written fake news. Among the evaluated models, RoBERTa-Large demonstrates a more consistent ability to classify fake news, outperforming its counterparts in detecting both human-written and LLM-generated fake news. Nonetheless, even for RoBERTa-Large, a discernible gap persists, with accuracy discrepancies exceeding 20% and 30% on GossipCop++ and PolitiFact++, respectively. These findings suggest an inherent bias in fake news detectors towards machine-generated content, particularly those crafted by LLMs. A plausible explanation is that detectors might exploit certain patterns or'shortcuts' inherent to LLM-generated content, thereby skewing their detection capabilities.
### RQ2: Why are fake news detectors biased towards LLM-generated news?
To comprehend the observed bias in fake news detectors towards content generated by LLMs, we embarked on an in-depth analysis of content-based features. Drawing inspiration from prior work on news veracity detection Horne and Adali (2017), we computed News Landscape (NELA) features. These features, derived from the NELA toolkit, encapsulate six dimensions of news content: style, complexity, bias, affect, moral, and event. We computed these features for both GossipCop++ and PolitiFact++. Employing Tukey's pairwise test Tukey (1949), we discerned significant feature disparities among human-written fake news, LLM-generated fake news, and human-written real news.
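A sketch of this per-feature analysis, assuming the NELA features have already been extracted into a table with one row per article and a `group` column taking values HR, HF, and MF; the file name is hypothetical.

```python
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical CSV: one row per article, a "group" column in {"HR", "HF", "MF"},
# and one column per NELA feature (style, complexity, bias, affect, moral, event).
df = pd.read_csv("gossipcop_nela_features.csv")
feature_cols = [c for c in df.columns if c != "group"]

for feat in feature_cols:
    result = pairwise_tukeyhsd(endog=df[feat], groups=df["group"], alpha=0.05)
    print(f"=== {feat} ===")
    print(result.summary())  # shows which of HF-HR, HF-MF, HR-MF differ significantly
```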
Our analysis, as presented in Table 7, reveals that most of the NELA features differ significantly between human-written and LLM-generated fake news. Moreover, the divergence between LLM-generated fake news and human-written real news is more pronounced than between human-written fake and real news. This underscores the relative ease of detecting LLM-generated fake news, shedding light on the bias observed in RQ1. The NELA features for PolitiFact++ are detailed in Appendix 7.
To further understand the influence of these features on detection performance, we evaluated two regression models: logistic regression and decision tree. These models were chosen to explore the potential for countering biases in detecting LLM-generated fake news. For GossipCop++, we retained NELA features that exhibited no significant disparity between human-written and LLM-generated fake news. For PolitiFact++, given the paucity of such NELA features, we also incorporated features that significantly differentiated human-written fake news from real news.
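A sketch of the debiased feature-based detectors: only NELA features whose HF-vs-MF comparison is not significant are retained, and the two simple classifiers are fit on them. The group-pair bookkeeping, the feature file, and the decision-tree depth are illustrative assumptions, not the paper's exact procedure.

```python
import itertools
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("gossipcop_nela_features.csv")   # hypothetical file, as above
feature_cols = [c for c in df.columns if c != "group"]

# Retain features that do NOT separate human-written (HF) from LLM-generated (MF) fake news.
keep = []
for feat in feature_cols:
    res = pairwise_tukeyhsd(endog=df[feat], groups=df["group"], alpha=0.05)
    pairs = list(itertools.combinations(res.groupsunique, 2))  # (HF,HR), (HF,MF), (HR,MF)
    rejected = dict(zip(pairs, res.reject))
    if not rejected[("HF", "MF")]:
        keep.append(feat)

X = df[keep].values
y = (df["group"] != "HR").astype(int).values      # 0 = real, 1 = fake
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for clf in (LogisticRegression(max_iter=1000), DecisionTreeClassifier(max_depth=8)):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, clf.score(X_te, y_te))
```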
Table 4 presents the results of both models. Notably, the debiased logistic regression model for GossipCop++ exhibits a decrease in accuracy for LLM-generated fake news (from 95.79% to 86.51%) but an increase in accuracy for human-written fake news (from 47.33% to 53.89%). Similar trends are observed for the PolitiFact++ dataset.
A shift in performance dynamics emerges: while the proficiency in identifying LLM-generated fake news wanes, the performance in detecting human-written fake news increases. This shift can be attributed to the prior models' propensity to capitalize on features intrinsic to LLM-generated content. However, a slight decline in overall detection efficacy, especially for human-written real news, necessitates scrutiny. Our efforts to debias might inadvertently overlook pivotal features crucial for discerning genuine from fabricated content. This underscores the importance of judicious feature selection and a profound understanding of dataset biases. It is pivotal to recognize that stellar performance on a specific subset might veil underlying biases. The overarching challenge lies in crafting models that harmonize precision with fairness. Overreliance on distinct characteristics of LLM-generated fake news could compromise a model's broader applicability.
### RQ3: How can we mitigate bias in fake news detectors?
Our analysis in Section 6.2 revealed a pronounced bias in detectors, which tend to overfit the unique features of LLM-generated fake news. To address this issue, we introduce an adversarial training-inspired strategy, augmenting our training set with high-quality LLM-generated real news. To this end, after manual filtering, we obtained 132 and 8,168 paraphrased real news articles for GossipCop++ and PolitiFact++, respectively. By employing ChatGPT to generate paraphrased content resembling genuine news articles, we aim to foster a detector that is adept across diverse news content rather than being narrowly focused on a specific subset. This section details our methodology and assesses the quality of the LLM-generated real news relative to its source.
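A sketch of the paraphrasing step using the OpenAI Python client; the prompt wording and model name are assumptions rather than the exact configuration used, and the outputs still require manual filtering.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def paraphrase_real_news(article: str) -> str:
    """Ask the model to rewrite a real article while preserving its facts and style."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": ("You rewrite news articles. Preserve every fact and the "
                         "journalistic style; change only wording and sentence structure.")},
            {"role": "user", "content": article},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content

# augmented_real = [paraphrase_real_news(a) for a in human_written_real_news]
# The generated articles are then manually filtered before being added to training.
```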
#### 6.3.1 Quality Assessment of LLM-Generated Real News
To ascertain the quality and authenticity of LLM-generated news, we embarked on a rigorous evaluation. We randomly sampled 100 pairs from the two datasets respectively, each pairing a human-authored article with its LLM-generated counterpart. Our goal is to generate real news that captures the essence of the original while being indistinguishable from human-authored content. Two authors, familiar with the research context yet objective, were annotators for the human evaluation components. We employed the following metrics to critically evaluate the LLM-generated content:
1. **Semantic Consistency with SimCSE**: The metric, leveraging the SimCSE model Gao et al. (2021), calculates the cosine similarity between embeddings of the original and LLM-generated news. A higher score signifies strong semantic alignment, ensuring the core narrative is retained.
2. **Readability Assessment**: The metric measures the text's comprehensibility. Annotators need to compare the original and LLM-generated news, rating their clarity and understandability on a set scale.
3. **Authenticity Perception**: The metric evaluates the content's perceived credibility. Annotators compared both news versions, assessing their perceived authenticity.
4. **Stylistic Alignment**: Annotators need to evaluate the stylistic consistency of the LLM-generated content with traditional news writing standards. They compared the LLM-generated news with standard articles, rating their stylistic congruence.
Table 3: Comparison of content-based features across Human-written Fake news (HF), LLM-generated Fake news (MF), and Human-written Real news (HR) for the GossipCop++ dataset. The table showcases differences in style, complexity, bias, affect, morale, and event features. The colour intensity represents the significance of the difference (\(p\) value), with darker shades indicating higher significance.
Table 6 shows that LLM-generated real news scores align closely with those of original news across all metrics. The Semantic Consistency score, as measured by SimCSE, underscores the significant semantic congruence between the LLM-generated and original news articles. This is further corroborated by the readability, authenticity perception, and stylistic alignment scores. The non-significant \(p\)-values emphasize that LLM-generated content is virtually indistinguishable from human-authored news. Additionally, the robust Cohen's Kappa scores [10] highlight the consistency in evaluations, attesting to the high quality and authenticity of the LLM-generated news.
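A sketch of the automatic part of this evaluation, assuming a public SimCSE checkpoint for the semantic-consistency score and hypothetical annotator rating lists for the agreement calculation:

```python
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.metrics import cohen_kappa_score

name = "princeton-nlp/sup-simcse-roberta-large"   # a public SimCSE checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name).eval()

def simcse_similarity(text_a: str, text_b: str) -> float:
    batch = tokenizer([text_a, text_b], padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        emb = model(**batch).pooler_output        # supervised SimCSE uses the pooled [CLS]
    emb = torch.nn.functional.normalize(emb, dim=-1)
    return float(emb[0] @ emb[1])

# Inter-annotator agreement on the human-rated metrics (hypothetical rating lists):
# kappa = cohen_kappa_score(annotator_1_ratings, annotator_2_ratings)
```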
#### 6.3.2 Mitigating Bias in Detectors
Building upon our earlier findings of biases in fake news detectors, we sought to devise a mitigation strategy. The overarching goal was to ensure that the detectors generalize well across diverse news types rather than being overly attuned to LLM-generated content.
Table 4: Performance metrics of logistic regression and decision tree models on the GossipCop++ and PolitiFact++ datasets. **HR**: Human-written Real news. **HF**: Human-written Fake news. **MF**: LLM-generated Fake news.
Table 5: Performance comparison of various models on the GossipCop++ and PolitiFact++ datasets before and after debiasing. The 'Difference' column highlights the performance change post-debiasing. **HF**: human-written fake news. **MR**: LLM-generated real news.
Our debiasing approach draws inspiration from adversarial training (Bai et al., 2021). In essence, we aimed to challenge the model during its training phase, compelling it to focus on the intrinsic features of fake news rather than specific idiosyncrasies of LLM-generated content. The methodology encompassed:
1. Validating the quality of LLM-generated real news to ensure it mirrors human-written content.
2. Augmenting the training regimen to incorporate a broader spectrum of news sources.
We conducted experiments on the GossipCop++ and PolitiFact++ datasets, and the results are reported in Table 5. The RoBERTa-Large model, when tested on the GossipCop++ dataset, exhibited a 6.49 percentage point enhancement in detecting human-written fake news and a significant 66.46 percentage point improvement for LLM-generated real news. This trend of improvement is evident across most models. However, an exception is the RoBERTa-Base model on the PolitiFact++ dataset, which saw a 3.09 percentage point decline for human-written fake news, but still achieved a substantial 59.79 percentage point increase for LLM-generated real news. The decline in the performance, particularly for human-written fake news, might be attributed to the model's sensitivity to the nuances of the dataset or the inherent challenges posed by the PolitiFact++ dataset.
In summation, our adversarially-inspired debiasing strategy has demonstrated its efficacy in bolstering the generalization capabilities of fake news detectors. The empirical results underscore the viability of our approach in the quest for more robust and universally adept fake news detection systems.
## 7 Conclusion
In this study, we introduced a novel paradigm for fake news detection, factoring in both human-written and LLM-generated news articles. Our investigations uncovered an unexpected bias: detectors frequently misclassify truthful LLM outputs as fake. Delving deeper, we identified potential linguistic 'shortcuts' these detectors take. Our mitigation strategy, founded on adversarial training with LLM-paraphrased real news, effectively reduced this bias. We further contributed by offering two enriched datasets, GossipCop++ and PolitiFact++, enhancing the scope for future research in this domain.
## Limitations
The datasets, GossipCop++ and PolitiFact++, while expansive, represent specific genres of news and might not encompass the entire spectrum of news content. The types of news included are influenced by the culture, language, and region from which they originate. Consequently, the biases and nuances we identify may be particular to these datasets and not universally applicable. Our identification of bias towards LLM-generated content might seem deterministic, suggesting that all detectors will inevitably be biased against LLM outputs. However, it is crucial to understand that the bias emerges from the training data and model architectures we used. Different configurations might produce varied results. The mitigation strategy, while effective in our tests, is not a one-size-fits-all solution. Its efficacy is contingent on the nature of the bias and the specific LLMs in play. Lastly, the linguistic 'shortcuts' and identified NELA features as potential reasons for the bias are based on our observations and analysis. While they offer a plausible explanation, they might not capture the entirety of the model's decision-making process. Different models or a change in training data might lead to different sets of influential features. Future research can delve deeper into these intricacies to provide a more comprehensive understanding.
|
2309.06659 | Beyond English: Centering Multilingualism in Data Visualization | Information visualization and natural language are intricately linked.
However, the majority of research and relevant work in information and data
visualization (and human-computer interaction) involve English-speaking
populations as both researchers and participants, are published in English, and
are presented predominantly at English-speaking venues. Although several
solutions can be proposed such as translating English texts in visualization to
other languages, there is little research that looks at the intersection of
data visualization and different languages, and the implications that current
visualization practices have on non-English speaking communities. In this
position paper, we argue that linguistically diverse communities abound beyond
the English-speaking world and offer a richness of experiences for the
visualization research community to engage with. Through a case study of how
two non-English languages interplay with data visualization reasoning in
Madagascar, we describe how monolingualism in data visualization impacts the
experiences of underrepresented populations and emphasize potential harm to
these communities. Lastly, we raise several questions towards advocating for
more inclusive visualization practices that center the diverse experiences of
linguistically underrepresented populations. | Noëlle Rakotondravony, Priya Dhawka, Melanie Bancilhon | 2023-09-13T01:17:10Z | http://arxiv.org/abs/2309.06659v2 | # Beyond English: Centering Multilingual in Data Visualization
###### Abstract
Information visualization and natural language are intricately linked. However, the majority of research and relevant work in information and data visualization (and human-computer interaction) involve English-speaking populations as both researchers and participants, are published in English, and are presented predominantly at English-speaking venues. Although several solutions can be proposed such as translating English texts in visualization to other languages, there is little research that looks at the intersection of data visualization and different languages, and the implications that current visualization practices have on non-English speaking communities. In this position paper, we argue that linguistically diverse communities abound beyond the English-speaking world and offer a richness of experiences for the visualization research community to engage with. Through a case study of how two non-English languages interplay with data visualization reasoning in Madagascar, we describe how monolingualism in data visualization impacts the experiences of underrepresented populations and emphasize potential harm to these communities. Lastly, we raise several questions towards advocating for more inclusive visualization practices that center the diverse experiences of linguistically underrepresented populations.
Human-centered computing; Visualization; Visualization design and evaluation methods
## 1 Introduction
Visualizing data is fundamentally about communicating information to viewers. To this end, designers and researchers often incorporate elements such as descriptive text and symbols to more effectively communicate the information being represented. For instance, including human-readable text in natural language to explain a data visualization is a nearly universal practice in visualization research and design communities [15]. With the majority of human-computer interaction researchers, including those working in information visualization, being based in English-speaking WEIRD countries [19], English is used extensively as the language of choice in data visualization research. However, researchers and designers outside of predominantly English-speaking countries have long been using data visualizations to communicate information across the globe, in a variety of non-English speaking cultures.
As online and hybrid conferences have widened access to the visualization research community in recent years, it has become more apparent that existing practices within visualization research and design (such as monolingualism) may contribute towards excluding underrepresented and under-served groups from the visualization research community. Despite ongoing efforts to foreground inclusion and equity in our research and design practices as a community, we have often neglected language as a factor that may influence who gets to participate in and benefit from visualization research.
Hence, in this paper, we ask:
* What is the impact of monolingual, English-speaking visualization research practices on underrepresented communities, such as individuals from non-English speaking cultures?
* How do monolingual visualization practices further the exclusion of already underrepresented and under-served communities in visualization research?
We focus specifically on language used for descriptive text, human-readable captions, and communication with data visualizations. We illustrate our discussion with a case study on the use of two non-English languages and their interactions with data visualization in Madagascar, where multilingualism is a result of colonization, imposing distinct languages for colloquial communications and for formal instruction. We argue that simple measures, such as translating English text in data visualizations to local languages, are accessible but fundamentally ignore the broader issue at hand -- that monolingual research practices hide how the lived experiences of viewers may influence the ways in which they interact with visualizations. We end with a call to action to the visualization research community to examine our current practices for exclusionary effects and provide suggestions for potential research directions to increase multilingualism in visualization research.
## 2 Related Work
We describe previous research within information visualization and human-computer interaction (HCI) with a focus on critical data visualization and HCI.
### Critical Data Visualization and HCI
Scholars working in critical data visualization and HCI advocate for equitable research practices when working with marginalized and underrepresented populations. Within the context of information visualization, Dörk _et al._ [12] outline a number of ways in which researchers can question how their values and assumptions pervade their existing research practices. Specifically, they suggest research practices that prioritize disclosure, contingency, plurality and empowerment [12].
On the HCI side, D'Ignazio and Klein propose a data feminism framework for researchers and practitioners to question and disrupt inequitable research practices when working with marginalized populations [11]. They urge researchers to challenge assumptions about gender binaries, power structures, and the objectivity of data while incorporating context and the diverse lived experiences of the people behind the data [11]. In their call to action to the HCI community, Ogbonnaya-Ogburu _et al._ propose ways to adapt critical race theory learnings to HCI research. They elucidate how racism is pervasive in existing research practices, and advocate for race-conscious research practices [24].
Work within critical visualization has also started considering the ethics of unquestioned research practices on audiences. For instance, Correll highlights potential side-effects of standard visualizations such as viewers feeling alienated from the people whose data is being
represented [8]. Meanwhile, in their study of attitudes towards data in rural Pennsylvania, Peck _et al._[29] found that one-size-fits-all approaches in visualization research tend to neglect certain demographic populations and that individuals' personal lived experiences strongly influence how they relate to data visualizations.
## 3 The Integration of Visualization & Text Should Consider Language
When it comes to communicating information, the language surrounding a visualization has been shown to be as important as the chart itself, if not more so. Several studies have shown that integrating text and visualization can improve recall [5]. People have also been reported to prefer formats that contain text [3, 32]. The integration of text and charts has been examined across a number of applications, including high-stakes tasks such as Bayesian reasoning, which is notoriously difficult for most people.
Bancilhon _et al._ found evidence that integrating text and visualization reduced cognitive load when solving a Bayesian task compared to when showing either representation alone [4]. However, Ottley _et al._ have shown that text, visualization, and a combination of the two formats elicit the same speed and accuracy in Bayesian tasks [27]. When examining eye tracking patterns during a study task, they found that participants likely identified critical information more effectively using visualization, but extracted information more effectively from text [26]. These mixed findings show that the integration of text and chart is not straightforward. While several other applications have shown that integration techniques can improve recall and accuracy [35], techniques such as linking and hovering have shown no measurable improvement in the context of Bayesian reasoning [22]. More research needs to be conducted to examine the best techniques to integrate the two such that their respective benefits are optimized.
While format and layout can impact reasoning when integrating text and charts, semantics can also play a role. Stokes _et al._ found that texts that describe statistical or relational components of a chart lead to more takeaways than texts that describe elemental or other components [33]. Other studies have shown that the title of a visualization can influence its interpretation more than the visualization itself [16].
A majority of the studies examining the effects of integrating chart, text, and language have been conducted in English. A study by Rakotondravony _et al._ showed an example of how the verbalization of quantitative probability through visualizations can vary across French, Arabic, English, German, and Mandarin [30]. However, further studies are still needed to build evidence on how text interacts with visualization in under-represented languages. Moreover, when evaluating reasoning using charts, important consideration must be given to individual differences in visual literacy, which can be influenced by language and culture. There is a lack of research on developing valid and inclusive visualization literacy assessments. Pandey _et al._, who developed a shortened version of the visualization literacy test, posit that more research needs to be done to adapt the MINI-VLAT to different languages [28].
In emerging fields such as AI and NLP, there is an increasing amount of research on text and visualization integration. Several researchers have investigated ways to assist the visually impaired or people with low visualization literacy via techniques such as chart summarization [23]. Other researchers like Shen _et al._ have examined how to automate chart generation using natural language and conducted a survey of natural language interfaces for data visualization [31]. The lack of consideration for language persists across these fields, where most of the development and evaluation is done in English.
## 4 Madagascar: A case study of language and data visualization in a non-English speaking context
In this section, we illustrate the interplay between language and data visualization in a non-English speaking context. We describe how the affordances of language can impact the expressiveness of data visualizations and challenge the different sub-areas of visualization research. The case of Madagascar that we highlight is likely common to countries and communities speaking more than one official language, and for which the spoken colloquial languages differ from the language of instruction. In Madagascar, Malagasy is the country's national language. It is spoken and understood across the country in its different dialectal variations [6]. In the post-colonial era, French was instituted as the language of instruction, and was reaffirmed as a _de jure_ official language by the country's constitution in 2007. In public schools through grade five, Malagasy is the language of instruction for all subjects, whereas in high school it is used only for history and the Malagasy language. Instruction for science and the other subjects is delivered in French.
While the majority of people have been in contact with the French language in their primary years, most rarely, if ever, practice it after school. As of 2022, it is estimated that 26.5% of the Malagasy population older than 10 years speak French with at least colloquial fluency [20]. Beyond being the predominant language of primary and higher education, French is also used and referenced for most technology-related topics on the island1.
Footnote 1: Le Français langue d’enseignement, accessed: 2023-07-10
### _Communicating With Data Visualization_
Data visualization today is about communicating (quantitative) information through static or interactive visuals and graphs. In line with existing observations of the performance of audiences from bilingual settings in mathematical reasoning [2, 34] and in HCI [13, 25], the prevalence of French in Madagascar, especially in science education and technology, can influence statistical or quantitative reasoning with technological instruments, such as interactive interfaces, even among a highly educated audience.
In an exploratory study of how French and Malagasy are used to discuss data and data visualization in Madagascar, Andrianarivony _et al._ [1] asked native speakers to describe their approach for completing basic visualization reading tasks such as finding a specific value, and to verbalize their takeaways from a data visualization in either Malagasy or French only. The stimulus combines a line chart and a bar chart depicting the evolution of Madagascar's GDP and the recorded number of social conflicts in the country. Participants in the study held at least a higher education degree and were fluent in both languages.
Results highlighted the ease with which participants used French to talk about how they interacted with the charts, explicitly naming the different elements of the visuals (such as _bar chart_, _line chart_, ...), the actions that they took to explore them (such as _clicking_, _zooming_, ...), and their observations about the data (such as _increase_, _decrease_, ...). On the other hand, participants responding in Malagasy had difficulty verbalizing their analysis of both the chart and its underlying data, essentially in finding the necessary terminology, which they often already knew in French. While knowledge of French appears to be helpful, the lack of linguistic tools for verbalizing data and visualizations in their native language excludes the approximately 74% of the Malagasy population who do not speak French from benefiting from visual and graph-based communications. It also reflects the understudy of non-English languages in visualization and interaction studies.
### _Data Visualization Literacy_
Beyond challenging the use of data visualization for communication, a lack of multilingualism in research also directly impacts
visualization literacy. As visualization literacy gains interest among researchers, visualization literacy assessment tests constitute useful instruments to help develop future design and methodologies that help audiences better read and make sense of data visualizations. However, most approaches in the research literature are in English, and assume underlying characteristics of the test takers that are not inclusive, especially for those from non-WEIRD contexts.
For the majority of potential visualization users in Madagascar, limited knowledge of English restricts access to off-the-shelf assessment tests. Test items extensively refer to data, chart elements, and the use of statistical analyses. They require fluency in chart naming conventions, mathematics, and data reasoning, and their contents are sourced from materials that do not necessarily align with educational systems and curricula outside the country in which they were developed. For example, for VLAT [18], test-takers are expected to possess graph reading and interpretation skills comparable to the K-12 curriculum. In Madagascar, students are extensively exposed to graph reading at a much later stage in their course of studies. This difference in the curriculum underpinning data visualization literacy and its tests might therefore impact participants' performance, potentially leading to incorrect conclusions.
While translation from English constitutes a good alternative to the lack of multilingualism in data visualization, more aspects of data visualizations beyond text translation still need to be considered and culturally adapted to ensure adequate evaluations. As the study by Peck _et al._ [29] showed, the way individuals relate to data and visualizations influences their engagement with a visual representation. While we hypothesize that adapting the western-centered visualization literacy test items to local contexts can increase test takers' interest in the data, more evidence and studies are still needed to understand how such engagement may impact their visualization literacy scores.
In this illustrative case of Madagascar, we argued how a lack of multilingualism in data and information visualization research can exclude underrepresented communities from participating in research studies. Similar to Madagascar, the described duality in language is shared among many other post-colonial countries, where the inherited or imposed language of instruction does not align with the colloquial language, creating, among other challenges, an unspoken divide between who can benefit from data and information visualization and who cannot. While we acknowledge that this issue is at its core systemic, we argue that the information visualization research community can actively contribute to closing the gap.
## 5 What is the impact of monolingualism and what can we do about it?
Data visualization and text, hence language, are intrinsically linked [15]. The interplay of visualizations and integrated text is often used to communicate critical information that can impact decision-making at different levels, and to promote social good. Reflecting on the case in Sect. 4, a direct harm that the lack of multilingualism in our research practices can pose is the exclusion of marginalized populations.
### The Exclusion of Marginalized Populations
Data about marginalized and underrepresented populations is often incomplete or nonexistent, or requires additional care for the privacy and anonymity of the people in the dataset. Available data often focus on their experiences of exclusion and suffering. For instance, anthropographics (human-shaped visualizations) of marginalized and underrepresented populations are widely used within English-based information visualization research in charitable giving settings or in "humanitarian" visualizations [7, 21]. However, marginalized and underrepresented populations are rarely involved in visualization research projects, even ones directly impacting them [9]. In the case of English-based anthropographics, individuals belonging to the populations being represented (refugees, victims of violence) may not even be able to meaningfully interact with these visualizations due to language barriers. Hence, it is essential that visualization research projects about marginalized and underrepresented populations find creative ways to involve said groups of people in the research process [10].
Moreover, the groups that are historically underrepresented in research are mostly from non-WEIRD countries or countries with under-developed resources for research. When their particular needs are not reflected in research carried out in WEIRD countries for use globally, any claim for _generalizability_ of results perpetuates the imposition of culturally non-adapted approaches, misleadingly labelled as development efforts. For instance, as data visualization literacy is a fundamental skill and the basis of the development of methods for teaching data visualization, the exclusion of marginalized groups' lived experiences can result in materials that are inadequate for local education systems beyond WEIRD countries. This inadequacy is often more prevalent in countries where the languages of instruction are not the native and spoken languages for communication [34].
Figure 1: Stimulus used in Andrianarivony _et al._’s exploratory study investigating how bilingual audiences use their native and secondary languages with visualizations in Madagascar [1]. Image courtesy of the authors.
### _Call to Action_
We propose actions for the information visualization community towards increasing the involvement of linguistically underrepresented populations in research. We emphasize that our propositions are preliminary ideas intended to start a community conversation around our existing research practices, and that several of these ideas will require additional long-term collective thinking and active support before implementation.
**Participatory Design and Visualization:** Although information visualization research projects lean heavily on the interaction side, as a community, we often engage with research participants through short studies and in laboratory settings. This is in part due to the nature of our field of research being largely quantitative. However, as rich, qualitative studies become more common in visualization research, we can consider investigating how the lived experiences of viewers influence how they interact with data visualizations. For instance, future work can explore the (side) effects of using English in visualizations for underrepresented communities by directly involving said communities in visualization design workshops and investigating their interactions with multilingual visualizations. Additional research opportunities could focus on data visualization literacy in multiple languages as well as in countries with specific linguistic cultures.
**Leveraging Emerging ML Technologies:** With the rapid development of ML technologies, the information visualization community has already started speculating about the potential of incorporating these technologies within existing visualization workflows. In terms of natural language, access to LLMs in a variety of languages and faster, on-demand language translation services can support linguistically underrepresented populations who may not have access to human translators or where there are few alternatives to English-based visualizations. We imagine research opportunities around automated visualization tools with LLM-powered translation features. Similarly, we imagine visualization design and literacy tools that provide human-readable descriptions and captions in languages other than English, that combine multiple languages in a single interface, as well as ones that include expertise from experts outside of data visualization (such as linguists) to address the challenges of learning additional languages.
**Expanding Access to the Visualization Community:** The COVID-19 pandemic saw the visualization community adapt to remote and hybrid conferences, which opened up access to communities who previously could not attend these conferences due to a number of financial, geographical and border-crossing limitations. Despite the many lessons learnt running successful remote and hybrid conferences during this time, a return to _"normal"_, in-person only conferences can erase much of the progress around conference accessibility made during the pandemic.
We envision smaller satellite conferences and remote workshops, in local languages, in under-served countries to increase researcher diversity within information visualization research and to support linguistic diversity. For instance, within the HCI community, several geographically-bound conferences such as NordicCHI and AfriCHI already take place throughout the year. Although the information visualization community is smaller, we imagine a future with more remote opportunities to be in community with visualization researchers from underrepresented countries through workshop or paper tracks for specific geographic regions and dedicated mentorship networks.
Additionally, there has been recent interest within the information visualization community to examine our publication practices. For instance, Hao and colleagues [14] examined 32 years of publications and citations at IEEE VIS venues and found that the majority of cited and referenced papers were limited to the same subfields of Computer Science. Building on this work, as a community, we may reflect on our citational practices, as Kumar and Karusala [17] urge HCI researchers: Why do we cite the way we do in information visualization research? Who do we cite (and not cite) and why? Where does the majority of our knowledge in information visualization research come from and why?
Lastly, several challenges may arise when implementing actions towards inclusive visualization research that go beyond simple translations. For instance, collaborations can be costly in both material and intellectual resources. Conducting studies online requires participants to have access to the necessary technologies. Some training and study configurations require in-person collaboration, for which long-distance, international travel is necessary. International mobility, beyond issues related to its environmental sustainability, comes with many barriers to collaboration between researchers across countries and may particularly impact those from the Global South with limited border-crossing privileges.
Moreover, conducting data visualization research in languages other than English requires peer-reviewers who understand the language. While there are certainly experienced reviewers fluent in non-English languages, the community still needs to reflect on and discuss the best practices for including under-represented languages in research while being mindful of the potential of creating isolated findings that cannot be communicated widely.
Despite these challenges, we strongly advocate for the visualization community to start this important conversation on increasing linguistic diversity in our research practices.
## 6 Conclusion
We discussed how monolingual practices in the data visualization community create a divide between who gets included in or benefits from visualization research. We emphasize the importance of text and language in data visualization by highlighting how their interplay helps promote access to existing data visualizations, and the inclusion of traditionally underrepresented populations in data visualization research. Reflecting on an exploratory study in Madagascar, where the lack of linguistic tools for data and visualizations impacts how people interact with charts and verbalize their takeaways, we highlight the potential harms that the exclusion of under-studied languages can cause -- especially when communicating data about and to marginalized and underrepresented populations. We conclude this position paper by calling the community's attention to several discussion avenues towards increasing the involvement of underrepresented populations in visualization research through efforts such as participatory design, leveraging emerging ML technologies and expanding global access to the visualization research community. Our hope with this work is that data visualizations can be created, designed, used and accessed by all, especially those who make up the majority of the global population.
|
2308.16763 | Ladder-of-Thought: Using Knowledge as Steps to Elevate Stance Detection | Stance detection aims to identify the attitude expressed in a document
towards a given target. Techniques such as Chain-of-Thought (CoT) prompting
have advanced this task, enhancing a model's reasoning capabilities through the
derivation of intermediate rationales. However, CoT relies primarily on a
model's pre-trained internal knowledge during reasoning, thereby neglecting the
valuable external information that is previously unknown to the model. This
omission, especially within the unsupervised reasoning process, can affect the
model's overall performance. Moreover, while CoT enhances Large Language Models
(LLMs), smaller LMs, though efficient operationally, face challenges in
delivering nuanced reasoning. In response to these identified gaps, we
introduce the Ladder-of-Thought (LoT) for the stance detection task.
Constructed through a dual-phase Progressive Optimization Framework, LoT
directs the small LMs to assimilate high-quality external knowledge, refining
the intermediate rationales produced. These bolstered rationales subsequently
serve as the foundation for more precise predictions - akin to how a ladder
facilitates reaching elevated goals. LoT achieves a balance between efficiency
and performance. Our empirical evaluations underscore LoT's efficacy, marking a
16% improvement over GPT-3.5 and a 10% enhancement compared to GPT-3.5 with CoT
on stance detection task. | Kairui Hu, Ming Yan, Joey Tianyi Zhou, Ivor W. Tsang, Wen Haw Chong, Yong Keong Yap | 2023-08-31T14:31:48Z | http://arxiv.org/abs/2308.16763v2 | # Ladder-of-Thouight: Using Knowledge as Steps to Elevate Stance Detection
###### Abstract
Stance detection aims to identify the attitude expressed in a document towards a given target. Techniques such as Chain-of-Thought (CoT) prompting have advanced this task, enhancing a model's reasoning capabilities through the derivation of intermediate rationales. However, CoT relies primarily on a model's pre-trained internal knowledge during reasoning, thereby neglecting the valuable external information that is previously unknown to the model. This omission, especially within the unsupervised reasoning process, can affect the model's overall performance. Moreover, while CoT enhances Large Language Models (LLMs), smaller LMs, though efficient operationally, face challenges in delivering nuanced reasoning. In response to these identified gaps, we introduce the **L**adder-**of-**T**hought (LoT) for the stance detection task. Constructed through a dual-phase Progressive Optimization Framework, LoT directs the small LMs to assimilate high-quality external knowledge, refining the intermediate rationales produced. These bolstered rationales subsequently serve as the foundation for more precise predictions - akin to how a ladder facilitates reaching elevated goals. LoT achieves a balance between efficiency and performance. Our empirical evaluations underscore LoT's efficacy, marking a \(16\%\) improvement over GPT-3.5 and a \(10\%\) enhancement compared to GPT-3.5 with CoT on stance detection task.
Kairui Hu\({}^{\star}\), Ming Yan\({}^{\star}\), Joey Tianyi Zhou\({}^{\star}\), Ivor W. Tsang\({}^{\star}\), Wen Haw Chong\({}^{\dagger}\), Yong Keong Yap\({}^{\dagger}\)\({}^{\star}\) Centre for Frontier AI Research (CFAR), A*STAR, Singapore
\({}^{\star}\) Institute of High Performance Computing (IHPC), A*STAR, Singapore
\({}^{\dagger}\) DSO National Laboratories, Singapore
Stance Detection, Ladder-of-Thought, Language Model, Knowledge Infusion
## 1 Introduction
Stance detection is the task of discerning the stance towards a specific target in a provided document. This task can be challenging given the breadth of topics and the depth of reasoning required to make accurate predictions. Nevertheless, the landscape of stance detection has evolved significantly with the success of Pre-trained Language Models (PLMs). These PLMs, when fine-tuned for downstream tasks, demonstrate a remarkable improvement in performance [1, 2].
Leveraging the capabilities of LMs, prompt-based techniques have further enhanced the performance, especially when LLMs such as GPT-3.5 are equipped with meticulously designed prompts [3]. The Chain-of-Thought (CoT) prompting stands as a prominent prompting strategy, enabling LMs to produce coherent and systematic reasoning rationales, which in turn improves the subsequent prediction accuracy [4]. However, CoT has a discernible limitation: it mainly relies on the model's internal, pre-existing knowledge when generating these rationales [5]. External knowledge, which is often dynamic, evolving, and abundant in domain-specific insights, remains unexploited [6]. Given CoT's reliance on the model's pre-trained knowledge, its unsupervised intermediate reasoning process may inevitably produce less reliable rationales, affecting the model's overall performance [5, 6, 7, 8].
The integration of external background knowledge is paramount for optimizing models' stance detection capabilities [9]. Predictions can be compromised in the absence of this auxiliary information, particularly when limited by the model's intrinsic knowledge. Table 1 serves as a testament: despite ChatGPT's utilization of CoT [3], smaller models like BERT can outperform it in stance detection tasks when supplemented with external knowledge from Wikipedia [9].
Moreover, the expansive architecture of LLMs like GPT-3.5 brings concerns about efficiency. On the other hand, smaller LMs, though more operationally efficient, often compromise on the reasoning capability due to their compactness [4, 7]. And while CoT provides performance gain in LLMs, it does not effectively benefit the smaller-sized models [4]. This underscores the need for enhancing the reasoning prowess of smaller models without bloating their size.
To address these challenges, we propose **L**adder-**o**f-**T**hought (LoT), a novel methodology that leverages external knowledge as steps to elevate stance detection. LoT operates on a Progressive Optimization Framework. The "ladder" in LoT represents this Progressive Optimization process. The initial phase absorbs external information, guiding the model to generate more reliable intermediate knowledge as rationales. This intermediate knowledge acts as "steps" to progressively elevate the model's comprehensive understanding capability, culminating in robust stance detection. Tailored for smaller LMs, LoT strikes a harmonious balance between efficiency and performance. It facilitates the seamless integration of ample external knowledge and cultivates profound reasoning capabilities. The architecture of LoT is illustrated in Fig. 1.

| **Paradigms** | **Knowledge** | **Sizes** | **Reasoning** | **Performance** |
|---|---|---|---|---|
| WS-BERT | External | 340M | Weak | 74.5 |
| CoT | Internal | 175B | Strong | 68.9 |
| **LoT** (ours) | External | 780M | Strong | 79.2 |

Table 1: Comparison of different stance detection paradigms.
Our main contributions are summarized as follows:
* We propose **L**adder-**o**f-**T**hought (LoT), a novel method for stance detection. By enriching smaller LMs with external knowledge, LoT effectively facilitates the generation of more reliable intermediate rationales, consequently enhancing the prediction performance.
* We demonstrate that LoT outperforms existing methods, achieving state-of-the-art results while maintaining efficiency.
## 2 Methodology
### Task Definition:
**Stance Detection**: Stance detection involves identifying the stance of an opinionated document concerning a specific target. Formally, consider a set \(D=\{(x_{i}=(d_{i},t_{i}),y_{i})\}_{i=1}^{n}\) representing \(n\) instances. Here, \(x_{i}\) encapsulates a document \(d_{i}\) and a target \(t_{i}\). The task is to deduce the stance label, \(y_{i}\), which can be categorized as {positive, negative, neutral}.
### External Knowledge Retrieval
To increase the reliability of the generated intermediate rationales in LoT, we integrate external knowledge to enhance the generation in a supervised manner. Specifically, a web retrieval process fetches pertinent external information for each target \(t_{i}\) from Google Search. By extending beyond the traditional realms of Wikipedia and diving into the wider web, we access a plethora of diverse and dynamic information [10]. This shift aligns with the emerging trend of exploring beyond the boundaries of Wikipedia-based research [11, 12, 10].
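The retrieval step can be sketched as below; `google_search` is a hypothetical helper standing in for whichever search API is used in practice (the paper only specifies that results come from Google Search), and the snippet count and truncation length are assumptions.

```python
def retrieve_external_knowledge(target: str, max_snippets: int = 5) -> str:
    """Fetch background text about a stance target from a web search."""
    # `google_search` is a hypothetical helper returning a list of result snippets.
    snippets = google_search(target, num_results=max_snippets)
    # Keep the retrieved knowledge compact so it later fits alongside the document.
    return " ".join(snippets)[:1500]

# knowledge = {t: retrieve_external_knowledge(t) for t in set(targets)}
```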
### Ladder-of-Thought (LoT) Architecture:
The Ladder-of-Thought (LoT) architecture enhances stance detection, enabling smaller models to reason more effectively. LoT draws its metaphor from the construction of a ladder, where the process of Progressive Optimization forms the framework of the ladder, and the reliable intermediate knowledge, fortified with external insights, serves as the integral "steps". These pivotal steps empower the model to reach heightened insights and deeper comprehension, facilitating more accurate predictions. LoT is developed through a dual-phase Progressive Optimization Framework:
1. **Phase 1 - Generation Fine-tuning**: In this foundational phase, the pre-trained model \(M_{0}\) is fine-tuned with the retrieved knowledge. This transfers the external insights to the model, guiding it to generate more robust intermediate knowledge that subsequently aids in downstream stance predictions. The resulting model \(M_{1}\) facilitates the generation of more enriched and reliable intermediate rationales, denoted as \(k_{intermediate\_i}\).
2. **Phase 2 - Prediction Fine-tuning**: Phase-2 utilizes the enhanced knowledge generated from Phase-1 to expertly discern stance labels. By concatenating the document, target, and the generated knowledge, we construct an enhanced input representation, \(x_{enhanced\_i}\). \(M_{1}\) is then fine-tuned with this enhanced input, culminating in the final model \(M_{2}\). Given the knowledge-infused input, \(M_{2}\) can conduct stance prediction \(y_{i}\).
Figure 1: The Overview of Ladder-of-Thought Architecture
The Ladder-of-Thought (LoT) architecture employs a Progressive Optimization Framework to enhance the stance detection model step-by-step. Leveraging the concept of cognitive evolution, LoT signifies a novel paradigm for model training. In particular, phase-1 is the foundation of LoT, infusing the model with core knowledge, reminiscent of grounding a student in fundamental theories. In Phase-2, this grounded rationale is utilized to guide the model towards more nuanced stance detection. The optimization from \(M_{0}\) to \(M_{2}\) via \(M_{1}\) reflects the LoT philosophy: evolving model capabilities through deliberate optimization, striking a balance between computational efficiency and reasoning depth.
For a detailed step-by-step procedure of the Progressive Optimization, refer to Algorithm 1.
```
Input:  Document matrix D = {d_1, d_2, ..., d_n},
        Target vector T = {t_1, t_2, ..., t_n},
        Pre-trained model M_0
Output: Stance prediction vector Y = {y_1, y_2, ..., y_n}

function LoT(D, T, M_0):
    # Phase-1: Generation Fine-tuning
    for i = 1 to n:
        k_i <- WebRetrieval(t_i)
    M_1 <- GenerationFinetune({k_1, k_2, ..., k_n}, M_0)
    for i = 1 to n:
        k_intermediate_i <- M_1(d_i, t_i)
        x_enhanced_i <- IntegrateInputs(d_i, k_intermediate_i, t_i)
    # Phase-2: Prediction Fine-tuning
    M_2 <- PredictionFinetune({x_enhanced_1, x_enhanced_2, ..., x_enhanced_n}, M_1)
    for i = 1 to n:
        y_i <- M_2(x_enhanced_i)
    return Y
```
**Algorithm 1** Progressive Optimization Algorithm
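A minimal sketch of the two phases with FLAN-T5 and Hugging Face Transformers follows; it is not the authors' code, the prompt templates and learning rate are assumptions, and updates are shown unbatched for clarity.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)  # learning rate is an assumption
model.train()

def seq2seq_step(source: str, target: str) -> None:
    """One (unbatched) teacher-forced update: encode `source`, supervise with `target`."""
    inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=512)
    labels = tokenizer(target, return_tensors="pt", truncation=True, max_length=256).input_ids
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

PROMPT = "Generate background knowledge. Target: {t}. Document: {d}"  # assumed template

# Phase-1 (Generation Fine-tuning): M_0 -> M_1, supervised by the retrieved knowledge k.
for d, t, k in phase1_data:            # hypothetical iterable of (document, target, knowledge)
    seq2seq_step(PROMPT.format(t=t, d=d), k)

# Phase-2 (Prediction Fine-tuning): M_1 -> M_2, supervised by the stance label.
for d, t, y in phase2_data:            # hypothetical iterable of (document, target, stance)
    with torch.no_grad():
        gen = model.generate(**tokenizer(PROMPT.format(t=t, d=d), return_tensors="pt"),
                             max_new_tokens=128)
    k_intermediate = tokenizer.decode(gen[0], skip_special_tokens=True)
    x_enhanced = f"Detect the stance. Target: {t}. Document: {d}. Knowledge: {k_intermediate}"
    seq2seq_step(x_enhanced, y)        # y is one of "positive", "negative", "neutral"
```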
## 3 Experiment
### Dataset and Evaluation Metric
The VAried Stance Topics (VAST) dataset [13] is a classic zero-shot and few-shot stance detection dataset. It encompasses a broad spectrum of topics: 4,003 for training, 383 for development, and 600 for testing. Unlike other stance detection datasets such as P-Stance [14], which has only 2 targets, or SemEval-2016 [15] with 4 targets, VAST covers a large number of targets spanning various domains. Following the preceding studies [13, 9], the macro average of the F1-score is used as the evaluation metric.
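For reference, the macro-averaged F1 can be computed with scikit-learn over the three stance labels (variable names below are placeholders):

```python
from sklearn.metrics import f1_score

# y_true and y_pred are hypothetical lists of gold and predicted stance labels.
macro_f1 = f1_score(y_true, y_pred,
                    labels=["positive", "negative", "neutral"], average="macro")
```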
### Baselines and Models
We employ FLAN-T5-Large, the 780M parameter version of FLAN-T5, as our backbone. We compare our model with the following baselines: TGA-Net [13], BERT, BERT-GCN [16], CKE-Net [2], WS-BERT-Single [9], DQA [3], StSQA [3]. The first five methods are based on BERT and its variants. DQA is based on ChatGPT with direct input-output (IO) prompting, while StSQA employs CoT on ChatGPT, prompting ChatGPT in a step-by-step manner.
### Result
The overall results of our model and the baselines are reported in Table 2.
Compared to the baseline FLAN-T5, which achieves an F1 score of 73.6, LoT achieves a remarkable improvement with an F1 score of 79.2. This highlights the efficacy of our LoT. Furthermore, compared to the ChatGPT-based DQA, which operates on an expansive architecture and achieves an F1 score of 62.3, our LoT demonstrates not just superior performance but tangible efficiency with significantly fewer parameters. This compact model size promises better deployment possibilities in real-world scenarios where computational resources can be a constraint.
Compared to StSQA with an F1 score of \(68.9\), our LoT also outperforms this CoT-enhanced ChatGPT approach. This result showcases that despite CoT amplifying internal reasoning, our LoT can absorb high-quality external knowledge, facilitating a more accurate prediction.
### Ablation Study
The foundational structure of LoT is built on the dual-phase Progressive Optimization framework. As all implementations involve the Prediction Fine-tuning, our focus lies in understanding the efficacy of two specific aspects of LoT: Generation Fine-tuning and the enhanced intermediate knowledge. We conduct an ablation study to evaluate their individual and comprehensive impact. In addition to the baseline and the complete LoT implementation, we introduce two intermediate settings for comprehensive comparison:
**CoT**: Following the principle of CoT, this configuration skips Generation Fine-tuning, directly utilizing the pre-trained model to produce intermediate knowledge and perform the subsequent prediction. This offers insights into the impact of the raw knowledge that is directly prompted from the pre-trained model on prediction performance.
| **Methods** | **Models** | **F1 Scores** |
|---|---|---|
| TGA-Net | BERT | 66.5 |
| BERT | BERT | 68.4 |
| BERT-GCN | BERT | 69.2 |
| CKE-Net | BERT | 70.1 |
| WS-BERT-Single | BERT | 74.5 |
| DQA | GPT-3.5 | 62.3 |
| StSQA | GPT-3.5 | 68.9 |
| Baseline FLAN-T5 | FLAN-T5 | 73.6 |
| **LoT** (Ours) | FLAN-T5 | **79.2** |

Table 2: Performance comparison on the VAST dataset.
form the subsequent prediction. This offers insights into the impact of the raw knowledge that is directly prompted from the pre-trained model on prediction performance.
**Phase1-Only**: Focusing exclusively on Phase-1 Fine-tuning, this configuration omits the subsequent integration of the generated knowledge during Phase-2 fine-tuning. The objective is to evaluate the direct influence of Phase-1 Fine-tuning and determine if it enhances the model's intrinsic knowledge.
Table 3 showcases the results of the ablation study.
The Baseline achieves an F1 score of 73.4, representing the performance without any additional enhancements.
By comparison, the CoT configuration slightly decreases to 73.1. This aligns with our prior discussion that small models may not benefit from CoT due to their limited reasoning capabilities [4]. Although directly prompting for intermediate knowledge yields some rationales, their quality is compromised. The unsupervised nature of these intermediate outputs may introduce potential noise. Hence, introducing CoT might inadvertently add complexity to the models, distracting them from accurate prediction. This underscores the significance of a supervised fine-tuning phase to enhance the reliability in knowledge generation.
The Phase1-Only configuration achieves an F1 score of 74.2, surpassing our baseline. This score suggests that Generation Fine-tuning can effectively enhance the model's inherent knowledge base. By supplementing the model with external information, even without explicitly leveraging the generated knowledge during predictions, we can still witness an improvement over the baseline performance. This underscores that enriching the foundational knowledge of the model can inherently bolster its capabilities in stance detection.
With our LoT configuration, the model reaches an F1 score of 79.2, showcasing a remarkable performance improvement over both the baseline and the other configurations. This substantial increase underlines the benefits of our overall Progressive Optimization Framework in LoT.
### Overfitting in Progressive Optimization
In our Progressive Optimization Framework, overfitting presents a notable challenge. If the model undergoes excessive training during Generation Fine-tuning (Phase-1), it could become overly specialized for the initial task, leading to a detrimental impact on its performance during the subsequent predictions. Achieving the ideal equilibrium between these phases is crucial. We investigate the influence of training epochs in Phase-1 on the subsequent prediction accuracy in Phase-2. The outcomes are depicted in Figure 2.
The findings suggest that the optimal performance is achieved at around 2 epochs, with a subsequent decline in performance as the number of epochs increases. This juncture signifies the ideal balance: it facilitates the generation of high-quality intermediate knowledge without an excessive reliance on Phase-1. While Phase-1 aims to enhance the model's reasoning for Phase-2, it is important to avoid overemphasizing the former phase at the expense of the latter. Our results highlight the importance of a strategic equilibrium, ensuring that each phase complements the other, ultimately constructing a robust and effective Progressive Optimization Framework.
## 4 Conclusion
In this research, we introduce the Ladder-of-thought (LoT). This method effectively enhances the smaller LMs' reasoning abilities with a dual-phase Progressive Optimization Framework. LoT enables the model to efficiently absorb high-quality external knowledge, thereby crafting more reliable intermediate rationales that facilitate accurate predictions. Our empirical evaluations demonstrate the efficacy of LoT, highlighting its superiority over the existing methods. LoT showcases that even smaller LMs, with the right guidance, can outperform LLMs like ChatGPT in stance detection. LoT is also applicable to other downstream tasks, and we aim to explore further in future works.
| Models | Gen Fine-tuning | Gen Knowledge | F1 |
|---|---|---|---|
| Baseline | – | – | 73.4 |
| CoT | – | ✓ | 73.1 |
| Phase1-Only | ✓ | – | 74.2 |
| LoT | ✓ | ✓ | 79.2 |

Table 3: Ablation study on LoT.
Figure 2: Effect of Phase-1 Training epochs on the overall prediction accuracy. |
2309.11443 | Signature Activation: A Sparse Signal View for Holistic Saliency | The adoption of machine learning in healthcare calls for model transparency
and explainability. In this work, we introduce Signature Activation, a saliency
method that generates holistic and class-agnostic explanations for
Convolutional Neural Network (CNN) outputs. Our method exploits the fact that
certain kinds of medical images, such as angiograms, have clear foreground and
background objects. We give theoretical explanation to justify our methods. We
show the potential use of our method in clinical settings through evaluating
its efficacy for aiding the detection of lesions in coronary angiograms. | Jose Roberto Tello Ayala, Akl C. Fahed, Weiwei Pan, Eugene V. Pomerantsev, Patrick T. Ellinor, Anthony Philippakis, Finale Doshi-Velez | 2023-09-20T16:17:26Z | http://arxiv.org/abs/2309.11443v1 | # Signature Activation: A Sparse Signal View for Holistic Saliency
###### Abstract
The adoption of machine learning in healthcare calls for model transparency and explainability. In this work, we introduce Signature Activation, a saliency method that generates holistic and class-agnostic explanations for Convolutional Neural Network (CNN) outputs. Our method exploits the fact that certain kinds of medical images, such as angiograms, have clear foreground and background objects. We give theoretical explanation to justify our methods. We show the potential use of our method in clinical settings through evaluating its efficacy for aiding the detection of lesions in coronary angiograms.
of pneumonia on COVID-19 patients with chest CT scans (Harmon et al., 2020). These types of explanation methods can broadly be described as Class-Discriminative and Class Agnostic.
**Class-Discriminative Saliency Methods.** There are a large number of approaches for generating class saliency activation maps. Methods such as backpropagation (Simonyan et al., 2014), Integrated Gradients (Sundararajan et al., 2017), CAM (Zhou et al., 2015), Grad-CAM (Selvaraju et al., 2017), and Grad-CAM++ (Chattopadhay et al., 2018) operate by taking gradients with respect to a particular class. In addition to issues with using gradients (such as noise and gradient saturation), these offer only partial explanations related to a particular class. Gradient-free algorithms such as Ablation-CAM (Desai and Ramaswamy, 2020) and Score-CAM (Wang et al., 2020) have better stability because they do not use gradients. However, they too are designed to focus on information related to a specific classification. They are also more computationally expensive than the approaches that use gradients. Our approach, Signature Activation, is computationally efficient whilst being gradient free; unlike these class-discriminative approaches, it also includes a more complete array of pixels that the model uses for classification.
**Class-Agnostic Saliency Methods.** Non-class-discriminative methods such as CNN-Fixations (Mopuri et al., 2019) and Eigen-CAM (Muhammad and Yeasin, 2020) look more holistically at what the model fixates on for producing the final decision, rather than focusing on pixels relevant only to a particular class. Nonetheless, CNN-Fixations faces significant challenges, not only in terms of substantial computational costs, but also due to the complex and potentially prohibitive manipulation required when working with pre-trained models. On the other hand, Eigen-CAM is constrained by its ability to abstract only linear representations of the data, thereby limiting its capability to capture more complex relationships. As previously mentioned, Signature Activation is computationally efficient, requires only access to the layers of the model, and is able to capture non-linear relationships of the data.
## 3 Background
It is well known that CNNs learn hierarchical representations of the training data (LeCun et al., 2015), with deeper layers abstracting higher semantical features of the data. Previous saliency methods such as Grad-CAM, Grad-CAM++, Score-CAM, and Eigen-CAM have exploited the semantic information captured in the last convolutional layers to generate attribution heatmaps, either through backpropagation, maximizing the score of the highest predicted class, or through Principal Component Analysis. These methods utilize the learned filters of the last convolutional layer to generate class activation maps. These maps are later superimposed over the original image to highlight the regions of interest for the classification. For developing Signature Activation, we also seek to utilize the last convolutional layer to generate heatmaps, while employing the properties of the forward pass.
## 4 Problem Setting and Classifier: Invasive Coronary Angiograms
Invasive coronary angiograms, or simply coronary angiograms, are a diagnostic procedure that involves the insertion of a catheter into the body often through the wrist or groin, threading it up to the coronary artery where a contrast dye is injected into the patient's bloodstream. As the dye travels through the heart's arteries it makes them visible under X-ray imaging (Kern et al., 2019). This allows physicians to identify any blockages and lesions that may be impairing the heart's performance.
When AI tools are used to assist with lesion detection, explanations can help calibrate trust in the classification. An effective explanation for a model trained to identify coronary lesions should mirror the diagnostic process undertaken by a clinician. It must convincingly demonstrate the model's capacity to meticulously examine the entirety of the arterial structure, accurately distinguishing it from surrounding anatomy. Furthermore, it should substantiate the model's capability to pinpoint the lesion's location.
For example, in a lesion detector classifier, knowing if the model is able to detect the lesions is critical. It is also necessary to understand whether the model is evaluating only the regions of interest, without fixating on the background or learning human annotations in the image (Kaufman et al., 2011; Yagis et al., 2021). In our work, we trained two binary lesion classifiers, for the right and the left coronary artery respectively, through transfer learning with the MoviNets architecture for video classification (Kondratyuk et al., 2021).

Figure 1: One frame taken from a Left Coronary Angiogram. On the left, the frame is presented without modifications. On the right, the background is suppressed by computing the image signature of the original image and then reconstructing it through the Inverse Discrete Cosine Transform. The background is effectively suppressed.
## 5 Failures of Traditional Explanation Methods
A common approach to problems like the lesion detection task above is to create class-discriminative saliency maps. These explanations highlight the areas of an image that contribute the most towards the output probability for a given category of interest. They are usually computed with respect to the label that receives the highest probability. The generated explanation can allow the end user to understand what features of a particular class are the most relevant in the decision making of the model. Saliency maps can also aid in the debugging and validation of models (Adebayo, 2022; Montavon et al., 2018). However, class-discriminative methods do not provide an explanation that considers the whole context of what the model detects during the forward pass, and are therefore not faithful to how the model arrives at its decision. A neural network classifier \(f\) outputs the probabilities for every class when evaluating the image during the forward pass computation \(f(I)\), whereas class-discriminative methods only focus on the direction of the steepest change with respect to the loss for one particular class. With gradient methods this entails computing \(\frac{\partial L_{c}(f)}{\partial I}\), involving the backpropagation operation, which is not used to generate the output probabilities.
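To make the contrast concrete, the sketch below computes exactly this class-tied quantity, a vanilla gradient saliency map for the top-scoring class, using PyTorch. The randomly initialised ResNet-18 and the random input are placeholders of our own, not the classifiers discussed in this paper.

```
# Sketch of a class-discriminative (gradient-based) saliency map: d(score_c)/d(I).
# Model and input are placeholders; real use would load a trained classifier.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)   # stand-in input image I

scores = model(image)            # forward pass: scores for *all* classes
c = int(scores.argmax(dim=1))    # the explanation is tied to one class only
scores[0, c].backward()          # backpropagation of the single class score

saliency = image.grad.abs().max(dim=1).values            # (1, 224, 224) attribution map
```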
One set of issues arises because, by design, these explanations are developed to highlight regions of importance for just a particular label. By only focusing on one class at a time, the explanation does not capture the full spectrum of all the possible regions of relevance, hence limiting its scope of interpretation. This leads to a potential lack of comprehensive understanding of other classes and the relationships between them, which could impact overall model robustness and performance.
Let us consider a specific case: suppose we have a classifier that only learns to detect moderate to severe lesions. Suppose that we input an angiogram of a patient with both a mild and a moderate lesion in different locations. A class-discriminative explanation that highlights the most relevant features for this patient will highlight the location of the moderate lesion only, even though a mild lesion is also present. The healthcare provider may then dismiss the mild lesion as a potential object of concern, or miss it entirely. Even if the classifier had multiple outputs for mild, moderate, and severe lesions, the above problem could still happen. For example, if the moderate lesion had the highest class probability, then an interpretation based on that class would emphasize only the moderate lesion in the image, ignoring other regions that might also be impacting the decision process (probabilities associated with the mild lesion). In instances where an image displays occurrences of multiple classes, it is important to know if the model also detects the other objects or regions of interest, as most classifiers output all the class probabilities instead of just the highest one. As models can also make mistakes, an explanation that only provides evidence for a falsely positive classification can also lead to unneeded interventions for the patient, such as a coronary stent. Such confirmation biases are well-documented in the literature (Adebayo et al., 2018).
Besides leaving out information that might support other classes, the sparsity induced by class-discriminative saliency maps can hide information about what the model is using for even the highest probability class. Specifically, cardiologists performing coronary angiograms often look at the whole vessel to determine the presence and location of the lesion when the dye fills up the artery. Our trained models do the same. However, in Figure 3, we can appreciate how Grad-CAM only highlights patches where the lesions are located--even though other parts are being used. Indeed, our proposed method faithfully suggests that the model fixates on the entire vessel to output its decision.
In Figure 2, we show another example of this effect, based on a ConvNeXtLarge architecture (Liu et al., 2022) pre-trained on ImageNet (Russakovsky et al., 2015) explaining an image with both "tiger cat" and "remotes". Grad-CAM only highlights patches of the image where the tiger cats are located, while ImageNet contains labels for both "tiger cat" and "remote" and the classifier outputs probabilities for both. In contrast, our proposed method detects not only the cats but also highlights the remotes, displaying features relevant for both. It also shows that the model detects the cats' paws, which are not considered as regions of importance by Grad-CAM, although a human could consider them as relevant features of a cat. It differentiates the points of attention the model focuses on while not underscoring the background.
## 6 A Novel Explanation Method: Signature Activation
As noted in Section 3, none of the former algorithms exploit the properties of the forward pass to generate their attribution mask, and their explanations are therefore unfaithful to the decision-making process of the model. We develop a novel saliency method that takes advantage of the foreground-background sparsity of images and of how that sparsity is propagated through the forward pass of the CNN. Our intuition draws from the fact that Papyan et al. (2017) prove that "the forward pass is guaranteed to recover an estimate of the underlying representations of an input signal, assuming these are sparse in a local sense" (p. 3).
Natural images tend to have a clear foreground-background separation. The subjects or objects remain the primary area of interest in most computer vision tasks, with the scene usually being relegated to the background (Liu et al., 2018). Let \(I\in\mathbb{R}^{n\times n}\) be a black and white image. Assuming the signal can be decomposed as foreground \(f\in\mathbb{R}^{n\times n}\) and background \(b\in\mathbb{R}^{n\times n}\), we can write \(I\) as
\[I=f+b. \tag{1}\]
The problem of separating the foreground and background, known as _figure-ground segmentation_, is a well-studied concept in both the cognitive science and computer vision communities (Frintrop et al., 2010). The above decomposition can be illustrated in the context of coronary angiograms. In an angiogram there is a clear background and foreground, where the foreground \(f\) represents the artery highlighted by the dye and the background \(b\) corresponds to the _gray_ area where there are no highlighted vessels. In Figure 1 we can appreciate an instance where the background in a Left Coronary Angiogram is suppressed.
Classical computer vision distinguished objects from images via digital image processing techniques such as contour detection, feature extraction, and spectral analysis (Gonzalez & Woods, 2018). The success of Neural Networks for object detection and classification tasks has shifted most of the efforts of the computer vision community towards CNNs and Transformer based architectures (Girshick et al., 2013; Redmon et al., 2016; Dosovitskiy et al., 2021). However the ground work and theory of image processing can still be applied to better understand and generate faithful explanations for CNNs.
Following Hou et al. (2012) we assume that an image \(I\in\ \mathbb{R}^{n\times n}\) can be decomposed as Equation 1 where \(f\) is sparse in the standard spatial basis and \(b\) is sparsely supported in the basis of the Discrete Cosine Transform (DCT) (Ahmed et al., 1974), i.e. \(b=Cx\) where \(C\) is an orthogonal matrix where each column corresponds to a DCT basis vector and \(x\) is sparse. We seek to isolate the foreground from the background. In order to do so we introduce the following tool:
**Definition 6.1**.: Let \(I\in\mathbb{R}^{n\times n}\) be a black and white image. The image signature of \(I\), denoted as \(\text{Sig}(I)\), is defined as the sign of the coefficients of the Discrete Cosine Transform:
\[\text{Sig}(I)=\text{sign}(\text{DCT}(I)).\]
When looking at only the signs of the DCT coefficients, we essentially extract a form of high-level, structural information about the image. By computing the Inverse Discrete Cosine Transform (IDCT) of the image signature, we obtain a reconstructed image that approximately isolates the support of \(f\). Based on the Uniform Uncertainty Principle (Candes & Tao, 2006), Hou et al. (2012) proved in Proposition 1 (see Appendix A for the precise statement of the theorem) that, in expectation, the cosine similarity between \(\text{IDCT}(\text{Sig}(f))\) and \(\text{IDCT}(\text{Sig}(I))\) is greater than \(0.5\). A cosine similarity of -1 indicates no resemblance between the inputs, 0 means decorrelation between the inputs, and 1 signals total agreement between the inputs. This translates to having, in expectation, a fair extraction of the foreground by computing \(\text{IDCT}(\text{Sig}(I))\). Notice that the above might not occur for images that do not have an object of interest or a clear foreground-background separation.
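A small sketch of Definition 6.1 and of the background suppression shown in Figure 1 is given below, assuming a 2-D grayscale image stored as a float array; the DCT normalisation and the squaring of the reconstruction are our own choices and are not prescribed by the definition.

```
# Sketch: image signature Sig(I) = sign(DCT(I)) and foreground extraction via IDCT.
import numpy as np
from scipy.fft import dctn, idctn

def image_signature(img: np.ndarray) -> np.ndarray:
    return np.sign(dctn(img, norm="ortho"))

def suppress_background(img: np.ndarray) -> np.ndarray:
    # IDCT of the signature concentrates energy on the (spatially sparse) foreground.
    recon = idctn(image_signature(img), norm="ortho")
    return recon * recon        # squared reconstruction, as in the map of Section 6.1

frame = np.random.default_rng(0).random((64, 64))   # stand-in grayscale frame
foreground_map = suppress_background(frame)
```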
Given that angiograms have a direct focal object of attention and backdrop, the suppression of the background through the image signature is effective, as can be seen in Figure 1. The main driving question for the presented method was: do CNNs only look at objects of interest during the forward pass? Papyan et al. (2017) showed that the forward pass approximately solves a multi-layer convolutional sparse coding model (ML-CSC). As the forward pass computation approximately estimates the sparse local patches that make up the image signal, we hypothesize that both the spatial sparsity of the foreground and the sparsity of the DCT of the background are preserved throughout the learned patches. As the deeper layers carry higher semantic information regarding the images, we aim to recover the learned foreground of the model with respect to the deeper convolutional layers.

Figure 3: One frame taken from a Right Coronary Angiogram with a lesion, with two different heatmaps. On the left, the heatmap overlay corresponds to the features that are highlighted by Grad-CAM. On the right, the heatmap overlay corresponds to the features that are highlighted by Signature Activation. Both produced with MoviNets.

Figure 2: Image that contains instances from two classes from ImageNet, Tiger Cat and Remote, with two different heatmaps. On the left, the heatmap overlay corresponds to the features that are highlighted by Grad-CAM, with the highest probability corresponding to _tiger cat_. On the right, the heatmap overlay corresponds to the features that are highlighted by Signature Activation. Both produced with the second-to-last convolutional layer of ConvNeXtLarge. Signature Activation detects both of the remotes as well as the cats.
Based on our hypothesis we present the Signature Activation method: a gradient-free, non-linear method to generate activation maps that encapsulate the nature of the forward pass, i.e., heatmaps that are faithful to the decision-making process of the model and that highlight the learned regions of fixation of the model. Although previous explanation methods have worked with varying success, they do not explicitly consider the inherent properties of the input images. By disregarding the rest of the classes, class-discriminative explanations could potentially dismiss certain parts of the foreground as background, incurring information loss. Due to the transparency and precision required to deploy machine learning algorithms in clinical settings, it is desirable to exploit the structure of the inputs while approximating the inner workings of the models. With the clear separation of foreground and background in the angiograms, and the sparsity of the foreground being preserved through the CNN with an added emphasis on the lesion due to the learned classification task, we seek to faithfully extract the learned foreground through the last convolutional layer.
### Definition of our method
Let \(I\in\mathbb{R}^{n\times n}\) be an image and \(A^{k}\in\mathbb{R}^{l\times l\times c}\) the activation of the \(k\)-th convolutional layer of a CNN model after the forward pass for the input image \(I\). For an accurate model we will suppose that \(A^{k}\) abstracts the foreground and the background of the image in order to take the classification decision. For a given channel \(s\) we assume that it can be decomposed as
\[A^{k}_{s}=f_{s}+b_{s}\]
where \(f_{s}\) carries the relevant or foreground information and \(b_{s}\) consists of the background or irrelevant information for the classification. As the sparsity is preserved through the forward pass, \(f_{s}\) will remain sparsely supported in the standard spatial basis and \(b_{s}\) sparsely supported in the basis of the DCT. The signature for the activation \(A^{k}\) at the channel \(s\) is given by
\[\text{Sig}(A^{k}_{s})=\text{sign}(\text{DCT}(A^{k}_{s})). \tag{2}\]
The image signature is applied to the channels of the last convolutional layer and the results are aggregated over the channels. Following the principles of visual saliency outlined by Itti et al. (1998), an activation map is produced as follows:
\[\text{M}(I)=B*\left(\tfrac{1}{S}\sum_{s=1}^{S}[\text{IDCT}(\text{Sig}(A^{k}_{s }))]\circ[\text{IDCT}(\text{Sig}(A^{k}_{s}))]\right),\]
where \(B\) represents a bilateral filter (Tomasi and Manduchi, 1998), \(*\) is the convolution operation, and \(\circ\) is the Hadamard product. After obtaining \(\text{M}(I)\), the map is resized to match the dimensions of the original image. One of the key differences from most of the outlined methods is that the above approach does not rely on the input's class. In the case of multi-class classification, the proposed approach allows detecting whether multiple objects, for which the training data possess labels, are present in the given input. This signals that the model might need debugging if the objects of interest are not being highlighted in the activation map.
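A minimal sketch of this computation is shown below, assuming the last convolutional activation is available as an array of shape \((l, l, c)\); the bilateral filter parameters, the normalisation, and the resize step are illustrative choices of ours rather than values given in the text.

```
# Sketch of the Signature Activation map M(I) from the channels of the last conv layer.
import numpy as np
import cv2
from scipy.fft import dctn, idctn

def signature_activation(acts: np.ndarray, out_hw: tuple) -> np.ndarray:
    """acts: activation of shape (l, l, c); out_hw: (height, width) of the input image."""
    l, _, channels = acts.shape
    acc = np.zeros((l, l), dtype=np.float32)
    for s in range(channels):
        recon = idctn(np.sign(dctn(acts[..., s], norm="ortho")), norm="ortho")
        acc += recon * recon                  # Hadamard product of IDCT(Sig) with itself
    acc /= channels
    acc = cv2.bilateralFilter(acc, d=5, sigmaColor=0.1, sigmaSpace=5.0)  # smoothing B
    heat = cv2.resize(acc, (out_hw[1], out_hw[0]), interpolation=cv2.INTER_CUBIC)
    return (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)        # for overlaying

heatmap = signature_activation(np.random.rand(7, 7, 512).astype(np.float32), (224, 224))
```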
Figure 4: One frame from a Right Coronary Angiogram and different masks generated by different methods. Signature Activation detects the whole vessel and provides an approximate area for where the lesion is located.
## 7 Results
In this section, we provide empirical evidence that: 1) our method works better than other methods at Weakly Supervised Object Localization, 2) our method passes the sanity checks for saliency maps, and 3) our method proves effective in aiding the detection of lesions in coronary angiograms. The implementation of our method is available at: [https://github.com/dak/signature-activation](https://github.com/dak/signature-activation).
### Signature Activation outperforms other methods in Weakly Supervised Object Localization
Discriminative regions might only focus on certain aspects of the located objects and not the objects themselves. As Signature Activation approximates the entire fixation of the model, it tends to create maps that encapsulate the entire objects, as can be seen in Figure 3. To assess this claim and evaluate the fidelity of Signature Activation, we performed Weakly Supervised Object Localization (WSOL) over the explanations in the multi-class setting. The task consists of creating bounding boxes based on the heatmaps and analyzing how well they match the ground truth object locations. We utilized the ConvNeXtLarge architecture, as its performance competes with modern vision transformers, and the ILSVRC 2017 (Russakovsky et al., 2015) validation set consisting of \(50,000\) images with their corresponding bounding boxes, computing the Intersection over Union (IoU) as a measure of overlap. The IoU is a ratio that measures the overlap between two bounding boxes. It is defined as the area of overlap between the two bounding boxes divided by the area of union of the two bounding boxes. The experiment was designed as follows: utilizing Grad-CAM, Grad-CAM++, Eigen-CAM, and Signature Activation, we computed heatmaps with each method over the 50,000 images of the ILSVRC 2017 validation set with the ConvNeXtLarge architecture. For each heatmap we generated bounding boxes that matched the number of ground truth bounding boxes by thresholding from \(0.01\) to \(1\) over the binary mask values until the number of boxes matched the number of ground truth boxes. If the IoU measure was greater than 0.5, it was considered a positive instance and negative otherwise. As the explanations created through the saliency methods are not originally designed to detect objects, we measured the Error Rate over the number of total predictions. The results are reported in Table 1.
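The IoU check behind this protocol can be written compactly as below; the corner-format boxes \((x_1, y_1, x_2, y_2)\) and the example coordinates are our own assumptions, and the threshold sweep over the heatmap values is omitted.

```
# Sketch of the overlap test used in the WSOL evaluation: a predicted box counts as a
# positive instance when its IoU with a ground-truth box exceeds 0.5.
def iou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

predicted, ground_truth = (10, 10, 60, 60), (20, 15, 70, 65)
positive = iou(predicted, ground_truth) > 0.5
```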
An error can be produced for multiple reasons: it can be that the saliency map generates more or fewer patches than there are true bounding boxes, it can also be the case that the patches are only located in discriminative regions that do not encapsulate the entire objects, making the IoU smaller than 0.5, or it can be that the generated bounding boxes are simply capturing background information. As can be seen in Table 1, Signature Activation outperforms the rest of the methods at detecting the objects over the images. We hypothesize, by observing Table 1 and looking at manual examples such as in Figure 5, that given the discriminative nature of both Grad-CAM and Grad-CAM++, the highlighted regions only correspond to relevant features and not the complete objects, which makes the generated bounding boxes miss the ground truth ones.
| Method | Error Rate |
|---|---|
| Grad-CAM | 0.9244 |
| Grad-CAM++ | 0.9421 |
| Eigen-CAM | 0.6733 |
| Signature | 0.6407 |

Table 1: Average Error Rate for the task of generating bounding boxes based on the heatmaps generated with Grad-CAM, Grad-CAM++, Eigen-CAM, and Signature Activation. A lower Error Rate indicates better results for the used method.

Figure 5: Image that contains instances from two classes from ImageNet, Tiger Cat and Boxer Dog, with two different heatmaps. On the left, the heatmap overlay corresponds to the features that are highlighted by Grad-CAM++, with the highest probability corresponding (incorrectly) to _bullmastiff_. On the right, the heatmap overlay corresponds to the features that are highlighted by Signature Activation. Both produced with the second-to-last convolutional layer of ConvNeXtLarge. Signature Activation detects the two subjects of importance.

### Signature Activation passes the saliency Sanity Checks

As discussed by Adebayo et al. (2018), some saliency methods are independent of the architecture and are not fit to generate explanations. Starting from the output layer, the Cascading Randomization test examines the robustness of the saliency method by progressively randomizing layers of the neural network. A method that relies on meaningful features should produce different saliency maps as more layers are randomized. The Independent Randomization test, on the other hand, scrutinizes the method's sensitivity to randomization by randomizing only one layer's weights at a time while preserving the rest of the architecture as the original one. If the saliency map changes significantly under this condition, it suggests that the method is indeed capturing information from the model's parameters. As can be observed in Figure 6, Signature Activation passes both the Cascading Randomization and the Independent Randomization tests proposed by Adebayo et al. (2018).
### Angiogram Evaluation
A cardiologist manually inspected 790 coronary angiograms with their corresponding masks produced by Signature Activation in order to validate the efficacy of the method. As can be seen in Figures 4 and 7, explanations generated with Signature Activation highlight all the relevant regions of the artery. The regions of topmost intensity (highlighted in red) tend to align with the location of the lesion. This potentially signals that the method not only detects the vessel completely but also locates the lesion, just like class-discriminative methods. The explanation more closely mimics the process by which a cardiologist makes a diagnosis, by looking at the artery in its entirety.
## 8 Discussion and Broader Impact
Signature Activation offers a significant shift in approach compared to other prevalent discriminative methods, providing a more holistic view of what the model visualizes during the forward pass. By exploiting the sparsity of the images and the model, this method bypasses the need for intricate architectural manipulation and offers a class-agnostic perspective into the model's decision-making process. The proposed method also leverages the DCT, which enables it to be computationally efficient, thus addressing one of the major challenges faced by many existing methods.
Nevertheless, like all explanation methods, it is not without its limitations. Its local nature means that it may not provide a complete explanation of the entire model. Moreover, its utility might be restricted in tasks where the images lack a clear object of interest.
Despite these caveats, the potential broader impact of this method, especially in clinical settings, could be substantial. In healthcare, the ability to generate more comprehensive and holistic explanations of model predictions could significantly enhance the interpretability of AI-driven diagnostic tools. This could, in turn, lead to increased trust and acceptance from clinicians, who could leverage these insights to make more informed decisions. Moreover, the method's emphasis on exploiting image and model sparsity could potentially align well with other medical imaging tasks apart from angiograms, where relevant features often have a clear foreground-background separation.
## 9 Conclusion
In this work, we provide reasons for why class-discriminative explanations for neural network models provide narrow views of a model's decision-making process. We propose Signature Activation, a class-agnostic method to generate visual explanations for CNNs that aims to provide a more holistic view of what affects a model's predictions. We provide theoretical justifications for our method as well as empirical evaluations: we verified the fidelity of our explanations through a Weakly Supervised Object Localization test, and checked that they pass the saliency Sanity Checks. We also explore the potential use of our method in clinical decision making by analyzing a use case in the detection of lesions in Invasive Coronary Angiograms.
## Acknowledgements
This material is based upon work supported by the National Science Foundation under Grant No. IIS-1750358. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Figure 6: Sanity checks for Signature Activation for two images. The first row contains the images corresponding to the Cascading Randomization test and the second corresponds to the Independent Randomization test. The first and second image in both rows are the same: the first is the original image and the second is the original Signature Activation map. The changes can be seen from left to right, going from the deeper to the shallower layers. By looking at the example we can see how the activation maps change greatly from the original one.
|
2301.13476 | An investigation of challenges encountered when specifying training data
and runtime monitors for safety critical ML applications | Context and motivation: The development and operation of critical software
that contains machine learning (ML) models requires diligence and established
processes. Especially the training data used during the development of ML
models have major influences on the later behaviour of the system. Runtime
monitors are used to provide guarantees for that behaviour. Question / problem:
We see major uncertainty in how to specify training data and runtime monitoring
for critical ML models and by this specifying the final functionality of the
system. In this interview-based study we investigate the underlying challenges
for these difficulties. Principal ideas/results: Based on ten interviews with
practitioners who develop ML models for critical applications in the automotive
and telecommunication sector, we identified 17 underlying challenges in 6
challenge groups that relate to the challenge of specifying training data and
runtime monitoring. Contribution: The article provides a list of the identified
underlying challenges related to the difficulties practitioners experience when
specifying training data and runtime monitoring for ML models. Furthermore,
interconnection between the challenges were found and based on these
connections recommendation proposed to overcome the root causes for the
challenges. | Hans-Martin Heyn, Eric Knauss, Iswarya Malleswaran, Shruthi Dinakaran | 2023-01-31T08:56:40Z | http://arxiv.org/abs/2301.13476v1 | An investigation of challenges encountered when specifying training data and runtime monitors for safety critical ML applications+
###### Abstract
**[Context and motivation]** The development and operation of critical software that contains machine learning (ML) models requires diligence and established processes. Especially the training data used during the development of ML models have major influences on the later behaviour of the system. Runtime monitors are used to provide guarantees for that behaviour. **[Question / problem]** We see major uncertainty in how to specify training data and runtime monitoring for critical ML models and by this specifying the final functionality of the system. In this interview-based study we investigate the underlying challenges for these difficulties. **[Principal ideas/results]** Based on ten interviews with practitioners who develop ML models for critical applications in the automotive and telecommunication sector, we identified 17 underlying challenges in 6 challenge groups that relate to the challenge of specifying training data and runtime monitoring. **[Contribution]** The article provides a list of the identified underlying challenges related to the difficulties practitioners experience when specifying training data and runtime monitoring for ML models. Furthermore, interconnection between the challenges were found and based on these connections recommendation proposed to overcome the root causes for the challenges.
Keywords: artificial intelligence · context · data requirements · machine learning · requirements engineering · runtime monitoring
## 1 Introduction
With constant regularity, unexpected and undesirable behaviour of machine learning (ML) models is reported in academia [24, 26, 51, 9, 52], in the press, and by NGOs3. These problems become especially apparent, and reported upon, when ML models violate ethical principles. Racial, religious, or gender biases are introduced through a lack of insight into the (sometimes immensely large
set of) training data and missing runtime checks for example in large language models such as GPT-3 [1], or facial recognition software based on deep learning [36]. Unfortunately, improving the performance of deep learning models often requires an exponential growth in training data [3]. Data requirements can help in preventing unnecessarily large and biased datasets [48]. Due to changes in the environment, ML models can become "stale", i.e., the context changes so significantly that the performance of the model decreases below acceptable levels [5]. Runtime monitors collect performance data and indicate the need for re-training of the model with updated training data. However, these monitors need to be specified at design time. Data requirements can support the specification of runtime monitor [7]. The lack of specifications becomes specifically apparent with ML models that are part of _critical_ software 4 because it is not possible to establish traceability from system requirements (e.g., functional safety requirements) to requirements set on the training data and the runtime monitoring [35].
Footnote 4: We define critical software as software that is safety, privacy, ethically, and/or mission critical, i.e., a failure in the software can cause significant injury or the loss of life, invasion of personal privacy, violation of human rights, and/or significant economic or environmental consequences [31].
#### Scope and research questions
The purpose of this study is to highlight current challenges experienced by practitioners in specifying training data and runtime monitoring for ML in safety critical software.
The paper contributes a practitioner's point of view on the challenges reported in academic literature. The aim is to identify starting-points for a future engineering research on the use of runtime monitors for critical ML systems. The following research questions guided this study:
**RQ1:** What are challenges encountered by practitioners when specifying training data for ML models in safety critical software?
**RQ2:** What are challenges encountered by practitioners when specifying runtime monitors, especially in relation to fulfilling safety requirements?
Figure 1: Overview of identified challenge categories
Figure 1 shows the main themes we found in answering the research questions. Concerning RQ1, the interviewees reported on several problems: the data selection process is nontransparent and guidelines especially towards defining suitable measures for data variety are missing. There are no clear context definitions that help in defining data needs, and current safety standards provide little guidance. Concerning RQ2, we found that the problem of defining suitable metrics and the lack of guidance from safety standards also inhibits the ability to specify runtime monitors. Furthermore, practitioners reported on challenges regarding explainability of ML decisions, and the processing and memory overhead caused by runtime monitors in safety critical embedded systems.
The remaining sections of this paper are structured as follows: Section 2 outlines and argues for the research methods of this study; Section 3 presents the results and answers to the research questions; Section 4 discusses the findings, provides recommendations to practitioners and for further research, identifies related literature, elaborates on threats to validity, and provides a conclusion.
## 2 Research Method
We applied a qualitative interview-based survey with open-ended semi-structured interviews for data collection. Following the suggestions of Creswell and Creswell [13] the qualitative study was conducted in four steps: Preparation of interviews, data collection through interviews, data analysis, and result validation.
#### Preparations of interviews
Based on the a-priori formulated research questions, two of the researchers of this study created an interview guide5 which was validated and improved by the remaining two researchers. The interview guide contains four sections of questions: the first section includes questions about the interviewees' current role, background and previous experiences. The second section focuses on questions that try to understand challenges when specifying and selecting training data for ML models and how training data affect the performance of these models. The third section investigates challenges when ML models are incorporated in critical systems and how they affect the ability to specify training data. The fourth section concentrates on the run time monitoring aspect of the ML model and contains questions on challenges when specifying runtime monitors.
Footnote 5: The interview guide is available at [https://doi.org/10.7910/DVN/WJ8TKY](https://doi.org/10.7910/DVN/WJ8TKY).
#### Sampling strategy
We chose the participants for this study purposefully using a maximum variation strategy [14]. We were able to recruit interviewees from five different companies, ranging from a local start-up to a multinational world leading communication company. An overview is given in Table 1.
A selection criterion for the companies was that they must work with safety-critical systems and ML. Within the companies we tried to find interview candidates with different roles and work experiences to obtain a view beyond the developers' perspective. Besides function developers and ML model developers, we were
interested in interviewing requirement engineers and product / function owners because they represent key roles in deriving system or function specifications. We provided the companies with a list of roles that we identified beforehand as interesting for interviewing6. Additionally, we interviewed two researchers from academia who participate in a joint industry EU Horizon 2020 project called VEDLIoT7. Both researchers worked also with ML models in industry before. Therefore, they could provide insights into both the academic and the industry perspective. A list of the ten interviewees for this study is provided in Table 2.
Footnote 6: The list included functional safety experts, requirement engineers, product owners or function owners, function or model developers, and data engineers.
Footnote 7: Very efficient deep learning in the Internet of Things
#### Data collection through interviews
All interviews were conducted remotely using either the conference software Zoom or Microsoft Teams and took between 60 - 90 minutes. The a-priori defined interview guide was only available to the interviewers and was not distributed to the participants beforehand. Each participant was interviewed by two interviewers who alternated in asking questions and observing. At the start of each interview, the interviewers provided some background information about the study's purpose. Then, the interview guide was followed. However, as we encouraged discussions with the interviewees, we allowed deviations from the interview guide by asking additional questions, or changing the order of the questions when it was appropriate [30]. All interviews were recorded and semi-automatically transcribed. The interviewers manually checked and anonymised the results.
| Company | Area of operations | Employees | Countries |
|---|---|---|---|
| 1 | Telecommunication networks | >10.000 | World |
| 2 | Automotive OEM | >10.000 | World |
| 3 | Automatic Driving | >1.000 | Europe |
| 4 | Industrial camera systems | >1000 | USA |
| 5 | Deep Learning optimisation for IoT | ≥100 | Sweden |

Table 1: Companies participating in the study
| Interviewee | Role | Experience |
|---|---|---|
| A | Researcher (Academic) | Functional Safety for ADAS |
| B | Function developer | Sensor and perception systems |
| C | Principal engineer | ML model integration |
| D | ML model developer | Distributed and edge systems |
| E | Function owner | ADAS perception functions |
| F | Function developers | Automatic driving systems |
| G | Data Scientist | Distributed systems |
| H | Requirement Engineer | Perception systems |
| I | Researcher (Academic) | Neural Network development |
| J | Functional Safety Manager | Sensor systems |

Table 2: Participants of the study
#### Data analysis
The data analysis followed suggestions by Saldana [41] and consisted of two cycles of coding and validation of the themes through a workshop and member checking.
_First coding cycle:_ Attribute coding was used to extract information about the participants' role and previous experiences. Afterwards, the two interviewers independently applied structural coding to collect phrases in the interviews that represent topics relevant to answering the research questions. The researchers compared the individually assigned codes and applied descriptive coding with the aim of identifying phrases that describe common themes across the interviews.
_Theme validation:_ In a focus group, the identified themes were presented and discussed. Thirteen researchers from both industry and academia in the VEDLIoT project participated. Three of the participants also were interviewed for this study. The aim of the focus group was to reduce bias in the selection of themes and to identify any additional themes that the researchers might have missed.
_Second coding cycle:_ After the themes were identified and validated, the second coding cycle was used to map the statements of the interviewees to the themes, and consequently identify the answers to the research questions. The second cycle was conducted by the two researchers who did not conduct the first cycle coding in order to reduce confirmation bias. The mapping was then confirmed and agreed upon by all involved researchers.
#### Result validation
Member checking, as described in [14, Ch. 9] was used to validate the identified themes that answer RQ 1 and RQ 2. Additionally, we presented the results in a 60 minute focus group to an industry partner and allowed for feedback and comments on the conclusions we drew from the data.
## 3 Results
During the first coding cycle, structural coding resulted in 117 statements for RQ1 and 77 statements for RQ2. Through descriptive coding, preliminary themes were found. The statements and preliminary themes were discussed during a workshop. Based on the feedback from the workshop, the 117 statements for RQ1 were categorised into eight final challenge themes and three challenge categories relating to the challenge of specifying training data. Similarly, the 77 original statements for RQ2 were categorised into 13 final challenge themes in five challenge categories relating to the challenge of specifying runtime monitoring. A total of six challenge categories emerged for both RQs, out of which two categories contain challenges relating to both training data and runtime monitoring specification, and three challenge themes are based on statements from both RQs. The categories and final challenge themes are listed in Table 3.
### Answer to RQ1: Challenges practitioners experience when specifying training data
The interviewees were asked to share their experiences in selecting training data, the influence of the selection of training data on the system's performance and
safety, and any experiences and thoughts on defining specifications for training data for ML. Based on the interview data, we identified three challenge groups related to specifying training data: missing guidelines for data selection, unclear design domain, and unsuitable safety standards.
#### 3.1.1 Missing guidelines for data selection
Four interviewees reported on a lack of guidelines and processes related to the selection of training data. A reason can be that data selection is based on "grown habits" that are not properly documented. Unlike conventional software development, the training of ML is an iterative process of discovering the necessary training data based on experience and experimentation. Requirements set on the data are described as disconnected and unclear for the data selection process. For example, one interviewee stated that if a requirement is set that images shall contain a road, it remains unclear what specific properties this road should have. Six interviewees described missing requirements on the data variety and missing completeness criteria as a reason for the disconnection of requirements from data selection.
[title= "How much of it (the data) should be in darkness? How much in rainy conditions, and how much should be in snowy situations?" - Interview F "For example, we said that we shall collect data under varying weather conditions. What does that mean?" - Interview B Another interviewee stated that it is not clear how to measure variety, which could be a reason why it is difficult to define requirements on data variety.
\begin{table}
\begin{tabular}{l l c c c} \hline \hline
**ID** & **Challenge Theme** & **Data.** & **Monitor.** & **Related Literature** \\ \hline
**1** & **Lack of explainability about ML decisions** & & ✓ & \\
 & 1.1 No access to inner states of ML models & & ✓ & [18] \\
 & 1.2 No failure models for ML models & & ✓ & [51] \\
 & 1.3 IP protection & & ✓ & \\
**2** & **Missing conditions for runtime checks** & & ✓ & \\
 & 2.1 Unclear metrics and/or boundary conditions & & ✓ & [11, 21, 43] \\
 & 2.2 Unclear measure of confidence & & ✓ & [17, 34] \\
**3** & **Missing guidelines for data selection** & ✓ & ✓ & \\
 & 3.1 Disconnection from requirements & ✓ & & [16, 42] \\
 & 3.2 Grown data selection habits & ✓ & & [20, 33] \\
 & 3.3 Unclear completeness criteria & ✓ & & [49] \\
 & 3.4 Unclear measure of variety & ✓ & ✓ & [45, 50] \\
**4** & **Overhead for monitoring solution** & & ✓ & \\
 & 4.1 Limited resources in embedded systems & & ✓ & [38] \\
 & 4.2 Meeting timing requirements & & ✓ & \\
 & 4.3 Reduction of true positive rate & & ✓ & \\
**5** & **Unclear design domain** & ✓ & & \\
 & 5.1 Design domain depends on available data & ✓ & & [6] \\
 & 5.2 Uncertainty in context & ✓ & & [22] \\
**6** & **Unsuitable safety standards** & ✓ & ✓ & \\
 & 6.1 Focus on processes instead of technical solution & ✓ & ✓ & [10] \\
 & 6.2 No guidelines for probabilistic effects in software & ✓ & & [28, 43] \\
 & 6.3 Safety case only through monitoring solution & & ✓ & [31, 46] \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Challenge groups (bold) and themes found in the interview data. Data.: Challenges related to specifying training data (RQ1). Monitor.: Challenges related to specifying runtime monitoring (RQ2).
"What [is] include[d] in variety of data? Is there a good measure of variety?" - Interview A
#### 3.1.2 Unclear design domain
Three interviewees describe uncertainty in the design domain as a reason why it is difficult to specify training data. If the design domain is unclear, it will be challenging to specify the necessary training data.
"We need to understand for what context the training data can be used." - Interview J"ODD [(Operational Design Domain)]? Yes, of course it translates into data requirements." - Interview F
#### 3.1.3 Unsuitable safety standards
Because we were specifically investigating ML in safety critical applications, we asked the participants whether they find guidance in safety standards for specifying training data. Five interviewees stated that current safety standards used in their companies do not provide suitable guidance for the development of ML models. While, for example, ISO 26262 provides guidance on how to handle probabilistic effects in hardware, no such guidance is provided for software-related probabilistic faults.
"The ISO 26262 gives guidance on the hardware design; [...] how many faults per hour [are acceptable] and how you achieve that. For the software side, it doesn't give any failure rates or anything like that. It takes a completely process oriented approach [...]." - Interview C
One interviewee mentioned that safety standards should place more emphasis on data selection to prevent faults in the ML model due to insufficient training.
"To understand that you have the right data and that the data is representative, ISO 26262 is not covering that right now which is a challenge." - Interview H
### Answer to RQ2: Challenges practitioners experience when specifying runtime monitors
We asked the interviewees about the role of runtime monitoring for the systems they develop, their experience with specifying runtime monitoring, and the relation of runtime monitoring to fulfilling safety requirements on the system. We identified five challenge groups related to runtime monitoring: lack of explainability about ML decisions, missing conditions for runtime checks, missing guidelines for data selection, overhead for monitoring solution, and unsuitable safety standards.
#### 3.2.1 Lack of explainability about ML decisions
A reason why it is difficult to specify runtime monitors for ML models is the inability to produce failure models for ML. In normal software development, causal cascades describe how a fault in a software component propagates through the system and eventually leads to a failure. This requires the ability to break down the ML model into smaller components and analyse their potential failure behaviour. However, four interviewees reported that they can only see the ML model as a "black-box" with no access to the inner states of the ML model. As a consequence, there is no insight into the failure behaviour of the ML model.
"[Our insight is] limited because it's a black box. We can only see what goes in and then what comes out to the other side. And so if there is some error in the behavior, then we don't know if it's because [of a] classification error, planning error, execution error?" - Interview F
The opacity of ML models is not necessarily due to technological limitations; it can also result from constraints imposed by the protection of intellectual property (IP).
"Why is it a black box? That's not our choice. That's because we have a supplier and they don't want to tell us [all details]." - Interview F
#### 3.2.2 Missing conditions for runtime checks
A problem when specifying runtime monitors is the need to find suitable monitoring conditions. This requires the definition of metrics, goals, and boundary conditions. Five interviewees report that they face challenges when defining these metrics for ML models.
"What is like a confidence score, accuracy score, something like that? Which score do you need to ensure [that you] classified [correctly]?" - Interview F
In particular, the relation between correct behaviour of the ML model and its measure of confidence is unclear, which impedes the specification of runtime monitoring.
"We say confidence, that's really important. But what can actually go wrong here?" - Interview J
#### 3.2.3 Missing guidelines for data selection
The inability to specify the meaning of data variety also relates to missing conditions for runtime checks. For example, runtime monitors can be used to collect additional training data, but without a measure of data variety it is difficult to find the required data points.
#### 3.2.4 Overhead for monitoring solution
An often overlooked problem seems to be the (processing) overhead induced by a monitoring solution. Especially in the automotive sector, many software components run on embedded computing devices with limited resources.
"You don't have that much compute power in the car, so the monitoring needs to be very light in its memory and compute footprint on the device, maybe even a separate device that sits next to the device." - Interview F
Due to the limited resources in embedded systems, monitoring solutions can also compromise the system's timing requirements. Additionally, one interviewee reported concerns regarding a reduction of the ML model's performance.
"[...] the true positive rate is actually decreasing when you have to pass it through this second opinion goal. It's good from a coverage and safety point of view, but it reduces the overall system performance." - Interview F
#### 3.2.5 Unsuitable safety standards
Current safety standards are largely unsuitable for ML model development. Therefore, safety is often ensured through non-ML monitoring solutions. Interviewees reported that relying only on the monitors for safety criticality is not a good solution:
"[...] so the safety is now moved from the model to the monitor instead, and it shouldn't be. It should be the combination of the two that makes up safety." - Interview B
One reason is that freedom from interference between a non-safety-critical component (the ML model) and a safety-critical component (the monitor) must be ensured, which can complicate the system design.
"**And then especially if you have mixed critical systems [it] means you have ASIL [(Automotive Safety Integrity Level)] and QM [(Quality Management)] components in your design [and] you want to achieve freedom from interference in your system. You have to think about safe communication and memory protection.**" - Interview J
## 4 Discussion and Conclusion
The results reveal connections between the challenges. Not all theme groups relate exclusively to one of the two challenges. For example, themes in the groups _unsuitable safety standards_ and _missing guidelines for data selection_ relate to both challenges of specifying training data and runtime monitoring. Furthermore, we identified cause-effect relations between different themes and across different groups of themes. For example, _IP protection_ is a cause of the _inability to access the inner states_ of ML models and of the inability to create _failure models for ML models_. We based this assessment on a semantic analysis of the words used in the statements relating to these themes. For example, Interviewee F stated that:
"**That neural network is something [of a] black box in itself. You don't know why it do[es] things. Well, you cannot say anything about its inner behavior**" - Interview F
Later in the interview, the same participant states:
"**Why is it a black box? That's not our choice. That's because we have a supplier and they don't want to tell us [all details].**" - Interview F
Figure 2 illustrates the identified cause-effect relations, relations between the themes, and how the different themes relate to the challenges.
### Recommendations to practitioners and for further research
The identified root causes of the challenges described by the participants allowed us to formulate the recommendations listed in Table 4. Because these recommendations address root causes described by the participants of the interview study, we consider them a useful first step towards solving the challenges related to specifying training data and runtime monitoring.
### Related Literature
_The problem of finding the "right" data:_ For acquiring data, data scientists have to rely on data mining with little to no quality checking and potential biases [4]. Biased datasets are a common cause for erroneous or unexpected behaviour of ML models in critical environments, such as in medical diagnostic [8], in the juridical system [19, 37], or in safety-critical applications [15, 46].
There are attempts to create "unbiased" datasets. One approach is to manually curate the dataset, such as in the FairFace dataset [29], the CASIA-SURF CeFaA dataset [32], or Fairbatch [40]. An alternative route is to use data augmentation techniques to "rebalance" the dataset [27, 45]. However, it has been shown that using a supposedly balanced dataset during training is not sufficient to avoid bias [20, 49, 50], because it is often unclear which features in the data need to be balanced. Approaches for curating or manipulating the dataset require information on the target domain, i.e., one needs to set requirements on the dataset depending on the desired operational context [6, 16, 22]. But deriving a data specification for ML is not common practice [25, 33, 42].
Figure 2: Connection between the identified challenge themes. Enclosed themes have been identified as causes for the surrounding themes. Furthermore, dotted lines indicate relations between different themes.
_The problem of finding the "right" runtime monitor:_ Through clever test strategies, some uncertainty can be eliminated in regards to the behaviour of the model [11]. However, ML components are often part of systems of systems and their behaviour is hard to predict and analyse at design time [47]. DevOps principles from software engineering give promising ideas on how to tackle remaining uncertainty at runtime [34]. As part of the operation of the model, runtime models that "augment information available at design-time with information monitored at runtime" help in detecting deviations from the expected behaviour [17]. These runtime models for ML can take the form of model assertions, i.e., checking of explicitly defined attributes of the model at runtime [28]. However, the authors state that "bias in training sets are out of scope for model assertion". Another model based approach can be the creation of neuron activation patterns for runtime monitoring [12]. Other approaches treat the ML model as "black-box", and only check for anomalies and drifts in the input data [39] the output [43], or both [18]. However, similar to the aforementioned challenges when specifying data for ML, runtime monitoring needs an understanding on how to "define, refine, and measure quality of ML solutions" [23], i.e., in relation to non-functional requirements one needs to understand which quality aspects are relevant, and how to measure them [21]. Most commonly applied safety standards emphasise processes and traceability to mitigate systematic mistakes during the development of critical systems. Therefore, if the training data and runtime monitoring cannot be specified, a traceability between safety goals and the deployed system cannot be established [10].
\begin{table}
\begin{tabular}{l p{0.85\linewidth}} \hline \hline
**ID** & **Recommendation** \\ \hline
**I** & **Avoid restrictive IP protection.** IP protection is a cause for the inability of accessing the inner states of the ML models (black-box model). This causes a nontransparent measure of confidence, and an inability to formulate failure models. To our knowledge, no studies have yet been performed on the consequences of IP protection of ML models on the ability to monitor and reason (e.g., in a safety case) for the correctness of ML model decisions. \\ \hline
**II** & **Relate measures of confidence to actual performance metrics.** For runtime monitoring, the measure of confidence is often used to evaluate the reliability of the ML model’s results. But without understanding and relating that measure to clearly defined performance metrics of the ML model first, the measure of confidence provides little insight for runtime monitoring. In general, defining suitable metrics and boundary conditions should become an integral part of RE for machine learning as it affects both the ability to define data requirements and runtime monitoring requirements. \\ \hline
**III** & **Overcome grown data selection habits.** Grown data selection habits have been mentioned as a reason for a lack of clear completeness criteria and a disconnection from requirements. Based on our results, we argue that more systematic data selection processes need to be established in companies. This would allow for a better connection of the data selection process to requirement engineering and it creates a traceability between system requirements, completeness criteria and data requirements. Additionally, it might also reduce the amount of data needed for training, and therefore cost of development. \\ \hline
**IV** & **Balance hardware limitation in embedded systems.** Runtime monitoring causes a processing and memory overhead that can compromise timing requirements and reduce the ML model’s performance. Today, safety criticality of systems with ML is mostly ensured through monitoring solutions. By decomposing the safety requirements instead onto both the monitoring and the ML model, the monitors might become more resource efficient, faster, and less constraining in regards to the decisions of the ML model. However, safety requirements on the ML models might trigger requirements on the training data. \\ \hline \hline \end{tabular}
\end{table}
Table 4: Recommendations for practitioners and suggestions for further research
For many researchers and practitioners, runtime verification and monitoring is a promising road to assuring safety and robustness for ML in critical software [2, 11]. However, runtime monitoring also creates a processing and memory overhead that needs to be considered especially in resource-limited environments such as embedded devices [38].
The related work has been mapped to the challenges identified in the interview study in Table 3.
### Threats to validity
A lack of rigour (i.e., degree of control) in the study design can cause confounding, which can manifest as bias in the results [44]. The following mechanisms in this study tried to reduce confounding: The interview guide was peer-reviewed by an independent researcher, and a test session of the interview was conducted. To reduce personal bias, at least two authors were present during all interviews, and the authors took turns in leading the interviews. To confirm the initial findings from the interview study and reduce the risk of researchers' bias, a workshop was organised which was also visited by participants who were not part of the interview study. Another potential bias can arise from the sampling of participants. Although we applied purposeful sampling, we still had to rely on the contact persons of the companies to provide us with suitable interview candidates. We could not directly see a list of employees and choose the candidates ourselves. Regarding generalisability of the findings, the limited number of companies involved in the study can pose a threat to external validity. However, two of the companies are world-leading companies in their fields, which, in our opinion, gives them a deep understanding and experience of the discussed problems. Furthermore, we included companies from a variety of different fields to establish better generalisability. Finally, our data includes only results valid for the development of safety-critical ML models. We assume that the findings are applicable also to other forms of criticality, such as privacy-critical, but we cannot conclude on that generalisability based on the available data.
### Conclusion
This paper reported on an interview-based study that identified challenges related to specifying training data needs and runtime monitoring for safety critical ML models. Through interviews conducted at five companies we identified 17 challenges in six groups. Furthermore, we performed a semantic analysis to identify the underlying root causes. We saw that several underlying challenges affect both the ability to specify training data and runtime monitoring. For example, we concluded that restrictive IP protection can cause an inability to access and understand the inner states of a ML model. Without insight into the ML model's state, the measure of confidence cannot be related to actual performance metrics. Without clear performance metrics, it is difficult to define the necessary degree of variety in the training data. Furthermore, grown data selection habits impede proper requirements engineering for training data. Finally, safety requirements should be
distributed across both the ML model, which can induce requirements on the training data, and the runtime monitors, which can reduce the overhead of the monitoring solution. These recommendations will serve as a starting point for further engineering research.
|
2307.16858 | Small constant uniform rectifiability | We provide several equivalent characterizations of locally flat, $d$-Ahlfors
regular, uniformly rectifiable sets $E$ in $\mathbb{R}^n$ with density close to
$1$ for any dimension $d \in \mathbb{N}$ with $1 \le d \le n-1$. In particular,
we show that when $E$ is Reifenberg flat with small constant and has Ahlfors
regularity constant close to $1$, then the Tolsa alpha coefficients associated
to $E$ satisfy a small constant Carleson measure estimate. This estimate is
new, even when $d = n-1$, and gives a new characterization of chord-arc domains
with small constant. | Cole Jeznach | 2023-07-31T17:16:42Z | http://arxiv.org/abs/2307.16858v2 | # Small-constant uniform rectifiability
###### Abstract.
We provide several equivalent characterizations of locally flat, \(d\)-Ahlfors regular, uniformly rectifiable sets \(E\) in \(\mathbb{R}^{n}\) with density close to \(1\) for any dimension \(d\in\mathbb{N}\), \(1\leq d<n\). In particular, we show that when \(E\) is Reifenberg flat with small constant and has Ahlfors regularity constant close to \(1\), then the Tolsa \(\alpha\) coefficients associated to \(E\) satisfy a small-constant Carleson measure estimate. This estimate is new, even when \(d=n-1\), and gives a new characterization of chord-arc domains with small constant.
Key words and phrases: uniform rectifiability, square function estimates, chord-arc surfaces.

2020 Mathematics Subject Classification: Primary: 28A75.

C. Jeznach was partially supported by the Simons Collaborations in MPS grant 563916, and NSF DMS grant 2000288. The author would like to thank his advisors Max Engelstein and Svitlana Mayboroda for many helpful conversations regarding the paper, and their unending support. He also would like to thank Guy David for several helpful conversations regarding the main result.
At their core, though, uniformly rectifiable sets have many equivalent geometric characterizations, all of which quantify (in some sense) the \(d\)-rectifiability of \(E\) at different points \(x\in E\) and scales \(r>0\).
Just to list two such examples, a \(d\)-Ahlfors regular set \(E\subset\mathbb{R}^{n}\) is \(d\)-uniformly rectifiable if and only if the Tolsa \(\alpha_{\mathcal{H}^{d}|_{E}}\) numbers satisfy the Carleson measure estimate [10]
\[\sup_{x\in E,\,r>0}r^{-d}\int_{0}^{r}\int_{E\cap B(x,r)}\alpha_{\mathcal{H}^{d}|_{E}}(y,t)^{2}\;\frac{d\mathcal{H}^{d}(y)\,dt}{t}\leq M_{1}, \tag{1.1}\]
for some uniform \(M_{1}>0\). Here the \(\alpha_{\mathcal{H}^{d}|_{E}}(x,r)\) are bounded coefficients which measure the distance from \(E\) to the space of \(d\)-planes in the ball \(B(x,r)\) (see Definition 1.5), and so the estimate (1.1) says that for most balls \(B(x,r)\) centered on \(E\), this distance is quantitatively small in a precise sense. Of course, in the estimate above, one could take different coefficients (e.g., the so-called \(L^{1}\) beta coefficients, \(\beta_{1}\)) and still obtain a characterization [11]. In terms of a slightly more concrete definition, it turns out that (1.1) is equivalent to \(E\) having "big pieces of Lipschitz images of subsets of \(\mathbb{R}^{d}\)", which is to say the following: there is some uniform \(M_{2}>0\) so that for each \(x\in E\) and every \(r>0\), one can find a Lipschitz mapping \(\rho:B_{d}(0,r)\subset\mathbb{R}^{d}\to\mathbb{R}^{n}\) with Lipschitz norm \(\leq 1+M_{2}\) so that
\[\mathcal{H}^{d}(E\cap B(x,r)\cap\rho(B_{d}(0,r)))\geq(1+M_{2})^{-1}\mathcal{H} ^{d}(B_{d}(0,r)). \tag{1.2}\]
There are many other interesting geometric and analytic characterizations of uniformly rectifiable sets, and we refer the reader to [11, 11] where this is pursued. The goal of the current paper is to take on a systematic study of the quantitative relationship between such constants \(M_{1}\) and \(M_{2}\) in the _small-constant_ regime: if \(M_{1}\) is sufficiently small, does it mean that \(M_{2}\) also is, and can this be made quantitative?
In this paper, we show that this is indeed the case. In fact, we show that the estimate (1.1) with small constant \(M_{1}\) (along with good Ahlfors regularity control) characterizes a certain class of Ahlfors regular sets \(E\subset\mathbb{R}^{n}\) of any dimension \(1\leq d\leq n-1\) that have very good approximations by very flat Lipschitz graphs (Theorem 1.9). This approximation property is even stronger than the "big pieces of Lipschitz images of subsets of \(\mathbb{R}^{d}\)" property mentioned above. We call such sets uniformly rectifiable of dimension \(d\) with small constant \(\delta>0\) (see Definition 1.7). Moreover our result is quantitative in that (1.1) holds with \(M_{1}=\delta^{\theta}\) for some dimensional constant \(\theta\in(0,1)\) depending only on \(n\) and \(d\) whenever \(E\subset\mathbb{R}^{n}\) is uniformly rectifiable of dimension \(d\) with small constant \(\delta\), and a converse holds as well. This quantitative Carleson measure estimate serves as an important tool in the upcoming work in [10], where the authors study the regularity of the Poisson kernel associated to a degenerate elliptic operator outside of Ahlfors regular sets of high co-dimension in \(\mathbb{R}^{n}\). In addition, our method of proof brings with it several other characterizations. In particular, we relate the constant \(M_{1}\) to the control of the oscillation of the tangent planes to \(E\) and the Reifenberg flatness of \(E\).
These two other characterizations are largely motivated by the work of Semmes [12, 13] (and later Blatt [14]) on chord-arc surfaces with small constant as well as Kenig and Toro [17, 18] in their study of the Poisson kernel regularity for chord-arc domains with small (and vanishing) constant. In particular, the work of Kenig and Toro showed that chord-arc domains with small constant in many ways serve as an appropriate
substitute for \(C^{1}\) domains in the study of boundary value problems for elliptic PDE below the Lipschitz threshold. It turns out that under a global assumption of Reifenberg flatness of a domain \(\Omega\), the Poisson kernel \(k\) associated to the Laplace operator on \(\Omega\) satisfies \(\log k\in\operatorname{VMO}(\partial\Omega)\) if and only if the domain \(\Omega\) is a chord-arc domain with vanishing constant [16]. This result is the proper analogue (and converse) of the earlier result of Jerison and Kenig, which says that \(\log k\in\operatorname{VMO}(\partial\Omega)\) for \(C^{1}\) domains (though, in general \(\log k\) need not be continuous or even bounded for such domains) [14]. Since then, chord-arc domains with small constant have continued to be an important geometric object in the study of quantitative properties of solutions to elliptic PDE on rough domains [17, 18, 19, 20, 21, 22], free boundary problems for elliptic measure [1, 1, 1, 16, 20, 23], and even have corresponding analogues and importance in other PDE settings [14, 15, 21], and we can only scratch the surface here on the plethora of theory devoted to the study of PDE on such domains.
Since chord-arc domains with small constant \(\Omega\) have rich PDE properties, there has been much interest in understanding and providing alternative geometric characterizations of such domains. Roughly speaking, these are domains whose boundaries locally separate space in two, are Ahlfors regular, and are bilaterally well-approximated by hyperplanes. In addition, these domains have unit normal with small BMO-norm (see Definition 1.13 for a more precise statement) [16]. It is known that such domains also have good Lipschitz graph approximations, and thus their boundaries are closely related to uniformly rectifiable sets of dimension \((n-1)\) with small constant. In fact, when \(\Omega\) is a domain satisfying some underlying topological assumptions, we shall use our results to give an alternative characterization of \(\Omega\) being a chord-arc domain with small constant using the Carleson measure estimate (1.1) on \(\partial\Omega\) (see Theorem 1.15).
Before rigorously stating the main result, we remark that the relationship between some of the defining characteristics of chord-arc domains with small constant (such as Reifenberg flatness, oscillation of the unit normal, and Lipschitz graph approximations) has been studied and exists in the literature in varying contexts (in the co-dimension one case for chord-arc surfaces and chord-arc domains in [11, 16] and [18], and in any co-dimension for smooth embedded hypersurfaces [15], for example). Still, in our main result for uniformly rectifiable sets \(E\subset\mathbb{R}^{n}\) of dimension \(d\) and small constant \(\delta>0\), we provide proofs that hold for general Ahlfors regular sets of any co-dimension, and we do not impose any topological assumptions on the set \(\mathbb{R}^{n}\setminus E\) a priori. In any case, the characterization in terms of the small constant Carleson measure estimate (1.1) is new in any dimension and co-dimension. In addition, our techniques provide a systematic way to obtain small-constant Carleson measure estimates such as (1.1) for coefficients besides the Tolsa \(\alpha\) numbers for small constant uniformly rectifiable sets, which we hope to prove useful for small-constant PDE results in the future. Let us now provide enough background to state the main result, Theorem 1.9.
### Main result and outline of the paper
In this paper, we always denote the ambient space by \(\mathbb{R}^{n}\), for \(n\in\mathbb{N}\), and \(d\in\mathbb{N}\) will always be so that \(0<d<n\). We reserve the notation \(A(n,d)\) to denote the collection of all \(d\)-planes \(P\subset\mathbb{R}^{n}\), and \(G(n,d)\) for the Grassmannian of \(d\)-dimensional subspaces of \(\mathbb{R}^{n}\). Also, we denote by \(\mathcal{H}^{d}\) the \(d\)-dimensional Hausdorff measure
on \(\mathbb{R}^{n}\), normalized for notational convenience so that if \(P\in A(n,d)\), \(x\in P\), and \(r>0\), then \(\mathcal{H}^{d}(B(x,r)\cap P)=r^{d}\). Lastly, whenever \(P\) is a plane, we denote by \(\pi_{P}:\mathbb{R}^{n}\to P\) the orthogonal projection onto the plane \(P\). Let us begin by introducing several related notions of \(d\)-dimensional sets in \(\mathbb{R}^{n}\) and their geometric regularity that are needed to state our main result.
**Definition 1.1**.: A Borel measure \(\mu\) on \(\mathbb{R}^{n}\) is said to be \(d\)-Ahlfors regular with constant \(C_{\mu}>0\) provided that for each \(x\in\operatorname{spt}\mu\) and each \(r>0\), one has
\[C_{\mu}^{-1}r^{d}\leq\mu(B(x,r))\leq C_{\mu}r^{d}.\]
If \(E\subset\mathbb{R}^{n}\) is closed, we say that \(E\) is \(d\)-Ahlfors regular with constant \(C_{E}>0\) if \(\mathcal{H}^{d}|_{E}\) is \(d\)-Ahlfors regular with constant \(C_{E}>0\). Finally, if only the upper (lower) bound holds as above, then we say \(\mu\) is upper (lower) \(d\)-Ahlfors regular with constant \(C_{\mu}\).
**Remark 1.2**.: The choice to normalize \(\mathcal{H}^{d}\) as above, and the role of the constant \(C_{\mu}\) in Definition 1.1 is important, since we shall often want to measure how close a \(d\)-Ahlfors regular measure \(\mu\) is to \(d\)-dimensional surface measure on \(\operatorname{spt}\mu\). In particular, we shall often use the phrase "\(d\)-Ahlfors regular with small constant" when the constant \(C_{\mu}>1\) is very close to \(1\), even though the phrase is misleading.
Next we introduce Jones' \(\beta\) numbers (see [10, 11]) and Tolsa's \(\alpha\) numbers (see [10]), which have been studied extensively in relation to rectifiable and uniformly rectifiable measures on \(\mathbb{R}^{n}\) and singular integral operators. We also introduce the notion of Reifenberg flat sets, which were introduced by Reifenberg in his solution of the Plateau problem [14].
**Definition 1.3**.: If \(E\) is a \(d\)-Ahlfors regular set, then define for \(x\in\mathbb{R}^{n}\) and \(r>0\),
\[b\beta_{1,E}(x,r):=r^{-d-1}\inf_{P\in A(n,d)}\left(\int_{B(x,r)}\operatorname{dist}\left(y,P\right)\,d\mathcal{H}^{d}|_{E}(y)+\int_{B(x,r)}\operatorname{dist}\left(y,E\right)\,d\mathcal{H}^{d}|_{P}(y)\right).\]
**Definition 1.4**.: For \(\Omega\subset\mathbb{R}^{n}\) open, denote by \(\Lambda(\Omega)\) the space of \(1\)-Lipschitz functions \(f:\mathbb{R}^{n}\to\mathbb{R}\) that are compactly supported in \(\Omega\). If \(\mu\) and \(\nu\) are measures on \(\mathbb{R}^{n}\), then we define the localized Wasserstein distance between \(\mu\) and \(\nu\) in \(B(x,r)\subset\mathbb{R}^{n}\) by
\[\mathcal{D}_{x,r}(\mu,\nu)\coloneqq\sup_{f\in\Lambda(B(x,r))}\left|\int f(d\mu -d\nu)\right|.\]
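For instance, if \(\mu=\delta_{a}\) and \(\nu=\delta_{b}\) are unit point masses with \(a,b\in B(x,r/4)\), then the function \(f_{0}(y)=\max(0,|a-b|-|y-a|)\) is \(1\)-Lipschitz and compactly supported in \(B(x,r)\), and \(f_{0}(a)-f_{0}(b)=|a-b|\), while \(|f(a)-f(b)|\leq|a-b|\) for every \(f\in\Lambda(B(x,r))\). Hence \(\mathcal{D}_{x,r}(\delta_{a},\delta_{b})=|a-b|\); in this sense \(\mathcal{D}_{x,r}\) is a localized version of the Kantorovich-Rubinstein (dual) formulation of transport distance.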
**Definition 1.5**.: Denote by \(\operatorname{Flat}(n,d)\) the set of measures of the form \(c\mathcal{H}^{d}|_{P}\) where \(c>0\) and \(P\in A(n,d)\). If \(\mu\) is a \(d\)-Ahlfors regular measure, then define for \(x\in\mathbb{R}^{n}\) and \(r>0\),
\[\alpha_{\mu}(x,r):=r^{-d-1}\inf_{\nu\in\operatorname{Flat}(n,d)}\mathcal{D}_{x,r}(\mu,\nu).\]
In our notation \(\alpha_{\mu}\), we omit the dependence on the dimension \(d\) of the measure \(\mu\), since it shall be clear from context.
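For instance, one can check directly from the definitions that these coefficients are uniformly bounded when \(x\in\operatorname{spt}\mu\): any \(f\in\Lambda(B(x,r))\) satisfies \(|f|\leq r\) on \(B(x,r)\), so for \(\nu=c\mathcal{H}^{d}|_{P}\) with \(P\ni x\),
\[\left|\int f\,(d\mu-d\nu)\right|\leq r\,\mu(B(x,r))+r\,\nu(B(x,r))\leq(C_{\mu}+c)\,r^{d+1},\]
and letting \(c\downarrow 0\) gives \(\alpha_{\mu}(x,r)\leq C_{\mu}\).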
**Definition 1.6**.: We define the normalized local Hausdorff distance for closed sets \(E,F\subset\mathbb{R}^{n}\) that meet \(\overline{B(x,r)}\) by
\[d_{x,r}(E,F):=r^{-1}\left(\sup_{y\in E\cap\overline{B(x,r)}}\operatorname{dist} \left(y,F\right)+\sup_{y\in F\cap\overline{B(x,r)}}\operatorname{dist}\left(y,E \right)\right).\]
With this distance, we define the bilateral beta (infinity) numbers by
\[b\beta_{\infty,E}(x,r):=\inf_{P}d_{x,r}(E,P),\]
where the infimum is taken over all \(d\)-planes \(P\in A(n,d)\) that meet \(\overline{B(x,r)}\). Moreover, we say that a closed set \(E\subset\mathbb{R}^{n}\) is \(\delta\)-Reifenberg flat if \(b\beta_{\infty,E}(x,r)\leq\delta\) for every \(x\in E\) and \(r>0\). We warn the reader that this definition of \(\delta\)-Reifenberg flatness is different than that in [10].
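For instance, the graph \(E=\{(t,\varepsilon|t|)\;:\;t\in\mathbb{R}\}\subset\mathbb{R}^{2}\) is \(2\varepsilon\)-Reifenberg flat of dimension \(d=1\): given \(x=(t_{0},\varepsilon|t_{0}|)\in E\) and \(r>0\), the horizontal line \(P\) through \(x\) satisfies \(\operatorname{dist}\left(y,P\right)\leq\varepsilon r\) for \(y\in E\cap\overline{B(x,r)}\) and \(\operatorname{dist}\left(y,E\right)\leq\varepsilon r\) for \(y\in P\cap\overline{B(x,r)}\), so \(d_{x,r}(E,P)\leq 2\varepsilon\).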
Finally, we come to the notion of small-constant uniformly rectifiable sets.
**Definition 1.7**.: A closed set \(E\subset\mathbb{R}^{n}\) is \(\delta\)-uniformly rectifiable of dimension \(d\) (\(\delta\)-UR for short) if \(0<\delta<1/10\) and the following holds:
\[\begin{array}{ll}\text{for every $x\in E$ and $r>0$, there is a $d$-dimensional Lipschitz graph $\Gamma$}\\ \text{with constant $\leq\delta$ so that $\mathcal{H}^{d}|_{E}(B(x,r)\setminus\Gamma)+ \mathcal{H}^{d}|_{\Gamma}(B(x,r)\setminus E)\leq\delta r^{d}$ and}\\ \Gamma\cap B(x,r/2)\neq\emptyset.\end{array} \tag{1.3}\]
Again we usually omit the dimension \(d\) since it will be clear from context.
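For instance, if \(E\) is itself (globally) a \(d\)-dimensional Lipschitz graph with constant at most \(\delta<1/10\), then \(E\) is \(\delta\)-UR: for every \(x\in E\) and \(r>0\) one may simply take \(\Gamma=E\) in (1.3), so that the symmetric difference terms vanish and \(x\in\Gamma\cap B(x,r/2)\).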
**Remark 1.8**.: In the definition of \(\delta\)-UR we _impose_ that \(\delta<1/10\). This is because if \(\delta\) were allowed to be large, the definition would be satisfied for any \(d\)-Ahlfors regular set \(E\), whereas we want the \(\delta\)-UR condition to be some small-constant quantification uniform rectifiability. In particular, since \(\delta<1/10\), it is straight-forward to verify that \(\delta\)-UR sets are \(d\)-Ahlfors regular with constant close to \(1\) (see Lemma 3.2). Moreover, they satisfy the "big pieces of Lipschitz graphs condition" and so \(\delta\)-UR sets are \(d\)-uniformly rectifiable as in the sense of David and Semmes [11] with bounded constant (see Definition 4.1). From the previous discussion it follows that \(\delta\)-UR sets are \(d\)-rectifiable, so they have approximate tangent planes for \(\mathcal{H}^{d}\)-almost all \(x\in E\) (see Theorem 5.1), which we denote by \(T(x)\in G(n,d)\).
Notice also that the \(\delta\)-UR condition is strictly stronger than the "big pieces of Lipschitz images of subsets of \(\mathbb{R}^{d}\)" condition mentioned in (1.2) with small constant \(M_{2}\). Indeed, if \(E=V_{1}\cup V_{2}\) where \(V_{1}\) and \(V_{2}\) are distinct \(d\)-planes in \(\mathbb{R}^{n}\), then one can check that \(E\) is \(d\)-Ahlfors regular and satisfies (1.2) with \(M_{2}=0\) but is not \(\delta\)-UR of dimension \(d\) for \(\delta\) small.
In the language of the above, our main result is that a set \(E\subset\mathbb{R}^{n}\) is \(\delta\)-UR of dimension \(d\) if and only if one of various other quantities is sufficiently small (with quantitative control). We refer the reader to Definition 2.1 for the precise definition of a \(\delta\)-Corona decomposition, which is somewhat cumbersome to place here without first discussing the Christ-David dyadic lattice in Section 2.1.
**Theorem 1.9**.: _Fix \(n,d\in\mathbb{N}\) with \(0<d<n\) and \(C_{E}>0\). Then there are constants \(\delta_{0}>0\) and \(\theta_{0}\in(0,1)\) depending only on \(n\), \(d\), and \(C_{E}>0\), so that the following holds. Whenever \(0<\delta<\delta_{0}\), \(E\subset\mathbb{R}^{n}\) is \(d\)-Ahlfors regular with constant \(C_{E}\), and one of the following conditions hold_
1. **(A)** \(E\) _is_ \(\delta\)_-uniformly rectifiable,_
2. **(B)** \(E\) _admits_ \(\delta\)_-Corona decompositions,_
3. **(C)** \(E\) _is upper_ \(d\)_-Ahlfors regular with constant_ \((1+\delta)\)_, and for any measure_ \(d\mu(x)=g(x)\ d\mathcal{H}^{d}|_{E}(x)\) _with_ \((1+\delta)^{-1}\leq g\leq 1+\delta\)_, for all_ \(x\in E\) _and_ \(r>0\)_,_ \[\mu(B(x,r))^{-1}\int_{B(x,r)}\int_{0}^{r}\alpha_{\mu}(y,t)^{2}\ \frac{d\mu(y)dt}{t}\leq\delta,\]
4. **(D)** \(E\) _is upper_ \(d\)_-Ahlfors regular with constant_ \((1+\delta)\)_, and for all_ \(x\in E\) _and_ \(r>0\)_,_ \(b\beta_{1,E}(x,r)\leq\delta\)_,_
5. **(E)** \(E\) _is upper_ \(d\)_-Ahlfors regular with constant_ \((1+\delta)\) _and_ \(\delta\)_-Reifenberg flat,_
6. **(F)** \(E\) _is_ \(d\)_-rectifiable, lower_ \(d\)_-Ahlfors regular with constant_ \((1+\delta)\)_, and for every_ \(x\in E\) _and_ \(r>0\)_, there is a_ \(V\in G(n,d)\) _so that_ \[\fint_{B(x,r)}\left\|\pi_{T(y)}-\pi_{V}\right\|\ d\mathcal{H}^{d}|_{E}(y)+\sup_{y\in B(x,r)\cap E}\frac{|\pi_{V^{\perp}}(y-x)|}{r}\leq\delta,\]
_then all of the others also hold with constant \(\delta^{\theta_{0}}\) in place of \(\delta\)._
**Remark 1.10** (Sharpness of the Ahlfors regularity assumption).: Let us discuss the sharpness of the small-constant Ahlfors regularity assumptions appearing in the conditions (A)-(F) in Theorem 1.9.
First, in the places they appear in (D), (E), and (F), they are necessary. We shall see shortly that (D) and (E) are easily seen to be equivalent. In (E), the upper Ahlfors regularity assumption can be seen to be necessary by example of a very flat snowflake, as in [10]. The key point is that there are \(\delta\)-Reifenberg flat snowflakes for arbitrarily small \(\delta\) that have infinite \(\mathcal{H}^{d}\) measure. Finite truncations of such constructions yield very flat \(d\)-Ahlfors regular sets with large constant \(C_{E}\gg 1\), but for which small-constant Ahlfors regularity fails. By Lemma 3.2, such sets are not \(\delta^{\theta_{0}}\)-UR. In (F), one can see the lower \(d\)-Ahlfors regularity assumption is necessary by taking \(E=V^{+}\) for some half \(d\)-plane \(V^{+}\). Again, such an \(E\) is \(d\)-Ahlfors regular with large constant and satisfies the other condition in (F) trivially with \(\delta=0\), but is not \(\delta^{\theta_{0}}\)-UR.
This brings us to (C) which is more delicate. If one instead considers the measure \(d\mu(x)=g(x)d\mathcal{H}^{d}|_{E}(x)\) where \(1/2\leq g\leq 2\), \(g\) attains the values \(1/2\) and \(2\) somewhere, yet \(\|g\|_{\rm BMO}=\delta\), then in fact our arguments will show that still the Carleson condition
\[\mu(B(x,r))^{-1}\int_{B(x,r)}\int_{0}^{r}\alpha_{\mu}(y,t)^{2}\ \frac{d\mu(y)dt}{t}\leq \delta^{\theta_{0}},\]
holds whenever \(E\) is \(\delta\)-UR (see the proofs of Lemmas 4.4 and 4.5). On the other hand, \(\mu\) is not \(d\)-Ahlfors regular with small-constant, and thus a small-constant Carleson condition on the coefficients \(\alpha_{\mu}\) alone cannot imply small-constant Ahlfors regularity of the _measure_\(\mu\). This is not to say that the implication cannot hold for the measure \(\mathcal{H}^{d}|_{E}\), and indeed there is a subtle but important difference between \(\mu\) and \(\mathcal{H}^{d}|_{E}\) in the \(\alpha\) coefficients. Of particular note is the fact that Preiss' density Theorem applies to surface measure \(\mathcal{H}^{d}|_{E}\) when \(E\) is
\(d\)-rectifiable, while for \(\mu\) the densities still exist, but are governed by \(g\) which can fluctuate as shown by the previous example. At this stage we do not know whether the small-constant \(d\)-Ahlfors regularity assumption in (C) is necessary.
We prove Theorem 1.9 one step at a time, proving (in alphabetical order) that each of the conditions (A)-(F) with constant \(\delta\) implies the subsequent condition with constant \(C_{0}\delta^{\theta_{0}}\), where \(C_{0},\theta_{0}>0\) depend only on \(n\), \(d\), and \(C_{E}\). Instead of repeating this phrase over and over, we shall instead write "(A) gives (B)", when really we mean that (A) implies (B) with constant \(C_{0}\delta^{\theta_{0}}\) in place of \(\delta\). By taking \(\delta_{0}\) and \(\theta_{0}\) even smaller, this is enough to prove the Theorem. We do not explicitly compute \(\theta_{0}\) in the proof of each implication, except for where there is a clear optimal power; instead, we care only that each condition is quantitatively controlled by the previous one.
The bulk of our work (and our main contribution) is in showing (A) gives (B), and (B) gives (C), which are done in Sections 3 and 4 respectively. Here we should emphasize as stated previously that for (large-constant) Ahlfors regular, uniformly rectifiable measures \(\mu\), one has the large-constant Carleson measure estimate
\[\sup_{x\in\operatorname{spt}\mu,r>0}\mu(B(x,r))^{-1}\int_{B(x,r)}\int_{0}^{r} \alpha_{\mu}(y,t)^{2}\;\frac{d\mu(y)dt}{t}<\infty,\]
(see [10, Theorem 1.2]). However, it takes delicate analysis to show that this quantity is small for \(\delta\)-UR sets (and in fact, necessitates the appropriate notion of a "small-constant"-Corona decomposition, which we introduce here).
That (C) gives (D) is immediate once one recalls the fact that for Ahlfors regular sets \(E\), the Carleson measure estimate in (C) in fact implies that \(\alpha_{\mu}(x,r)^{2}\leq C\delta\), and that the \(\alpha_{\mu}\) dominate the \(b\beta_{1,E}\)[10, Lemma 3.2]. Similarly that (D) gives (E) is a straight-forward estimate that uses the Lipschitz nature of the distance function to show \(b\beta_{\infty,E}(x,r)\leq Cb\beta_{1,E}(x,r)^{1/(d+1)}\). As such we omit the proofs. We prove that (E) gives (F) in Section 5 from an argument that estimates the portion of \(E\) whose tangent planes make a large angle with a good approximation to \(E\) in a ball \(B(x,r)\). This argument is different than the proof using the Gauss-Green Theorem by Kenig and Toro in co-dimension \(1\) (see Theorem 2.1 in [10] and also the proof following (2.18) in [1]), and in particular also works in any co-dimension. Finally, the proof of (F) gives (A) exists in several forms in the literature. When \(d=n-1\) and \(E\) is a smooth enough hypersurface, the argument is due to Semmes [11, Proposition 5.1] (and the resulting approximating Lipschitz graphs are referred to as Semmes decompositions) and later used in [13]. It is also proved under different topological assumptions of a domain \(\Omega\) in [11, Theorem 4.16], where the hypothesis on the quantity \(\left|\pi_{V^{\perp}}(y-x)\right|/r\) is removed, and proved by other means. When \(d<n-1\), this implication is essentially proved in [1, Lemma 3.2] again when \(E\subset\mathbb{R}^{n}\) is a \(C^{1}\) manifold, but for the sake of completeness, we outline the proof of Blatt in Section 6 to make clear the fact that in our setting (and with the Ahlfors regularity assumptions), the argument does not require \(E\) to be a \(C^{1}\) manifold.
### An application to chord-arc domains
Let us end the introduction with a discussion relating \(\delta\)-UR sets of dimension \((n-1)\) in \(\mathbb{R}^{n}\), and \(\delta\)-chord-arc domains (as defined in [13]), as promised earlier. All of the arguments involved in the proof of Theorem 1.9 are local, and
thus there are corresponding local and "vanishing" results that follow from these arguments, though they are slightly technical to write down. In fact, these local results, which we leave to Section 7, are in more direct analogy to the so-called \(\delta\)-chord-arc domains introduced by Kenig and Toro in [13] and [13]. Let us define these rigorously now.
**Definition 1.11**.: A domain \(\Omega\subset\mathbb{R}^{n}\) is said to satisfy the separation property if for each \(K\subset\mathbb{R}^{n}\) compact, there is an \(R_{K}>0\) so that for each \(x\in\partial\Omega\cap K\) and each \(r\in(0,R_{K})\), there is a choice of \(V\in A(n,n-1)\) and choice of normal vector \(\vec{n}_{V}\) to \(V\) so that \(x\in V\), and
\[\mathcal{T}^{+}(r,x)= \{y+t\vec{n}_{V}\in B(x,r)\;:\;y\in V,\;t>r/4\}\subset\Omega,\] \[\mathcal{T}^{-}(r,x)= \{y+t\vec{n}_{V}\in B(x,r)\;:\;y\in V,\;t<-r/4\}\subset\Omega^{c}.\]
If \(\Omega\) is unbounded, we assume also that \(\partial\Omega\) divides \(\mathbb{R}^{n}\) into two distinct, nonempty connected components.
**Definition 1.12**.: Let \(\delta\in(0,\delta_{n})\) for some small dimensional constant \(\delta_{n}>0\). A domain \(\Omega\subset\mathbb{R}^{n}\) is said to be a \(\delta\)-Reifenberg flat domain if for each \(K\subset\mathbb{R}^{n}\) compact, there is an \(R_{K}>0\) so that for each \(x\in\partial\Omega\cap K\) and each \(r\in(0,R_{K})\), \(b\beta_{\infty,\partial\Omega}(x,r)\leq\delta\). If \(\Omega\) is unbounded, we assume also that
\[\sup_{x\in\partial\Omega,\;r>0}b\beta_{\infty,\partial\Omega}(x,r)\leq\delta_ {n}.\]
**Definition 1.13**.: Let \(\delta\in(0,\delta_{n})\). A set of locally finite perimeter \(\Omega\subset\mathbb{R}^{n}\) is called a \(\delta\)-chord-arc domain if \(\Omega\) is a \(\delta\)-Reifenberg flat domain satisfying the separation property, \(\partial\Omega\) is \((n-1)\)-Ahlfors regular, and in addition the following holds. For each \(K\subset\mathbb{R}^{n}\) compact, there is an \(R_{K}>0\) so that for each \(x\in\partial\Omega\cap K\) and each \(r\in(0,R_{k})\), we have
\[\left\|\vec{n}\right\|_{*}(B(x,R_{K}))\leq\delta.\]
Here \(\vec{n}(y)\) is the unit outer normal to \(\partial\Omega\), \(\vec{n}_{y,r}=\fint_{\partial\Omega\cap B(y,r)}\vec{n}(z)\;d\mathcal{H}^{n-1} (z)\), and
\[\left\|\vec{n}\right\|_{*}(B(y,r))=\sup_{0<s\leq r}\left(\fint_{\partial\Omega\cap B(y,s)}\left|\vec{n}(z)-\vec{n}_{y,s}\right|^{2}\;d\mathcal{H}^{n-1}(z)\right)^{1/2}.\]
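For instance, if \(\Omega\) is a half-space, then \(\vec{n}\) is constant and \(\left\|\vec{n}\right\|_{*}\equiv 0\). More generally, if \(\partial\Omega\) is a compact \(C^{1}\) hypersurface, then \(\vec{n}\) is uniformly continuous, so \(|\vec{n}(z)-\vec{n}_{y,s}|\) is controlled by the oscillation of \(\vec{n}\) over \(\partial\Omega\cap B(y,s)\), and \(\sup_{y\in\partial\Omega}\left\|\vec{n}\right\|_{*}(B(y,r))\to 0\) as \(r\downarrow 0\); this is the sense in which \(C^{1}\) domains satisfy the oscillation condition of Definition 1.13 for every \(\delta>0\) at sufficiently small scales.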
In the terminology we have introduced thus far, there is no immediate containment between \(\delta\)-UR sets of dimension \((n-1)\) and boundaries of \(\delta\)-chord arc domains. This is because chord-arc domains satisfy topological separation conditions and are sets of locally finite perimeter, and because \(\delta\)-UR sets satisfy global flatness conditions, while \(\delta\)-chord-arc domains satisfy local ones. However, these differences are minor, and the two notions are very closely related. In particular, equation (2.18) in [1] says that for \(\delta\)-chord arc domains, one has \(\left|(\vec{n}_{x,r},y-x)\right|\leq C\delta^{1/2}r\) for \(y\in B(x,r)\) whenever \(\left\|\vec{n}\right\|_{*}(B(x,r))\leq\delta\). This implies that the second condition in (F) holds locally for \(\delta\)-chord arc domains. In addition one can prove local lower \((n-1)\)-Ahlfors regularity of \(\partial\Omega\) (with small constant) from the local Reifenberg flatness condition (see the proof of Theorem 5.2). Combining these with the fact that the proof of Theorem 1.9 is local, we see that when \(\Omega\) is a \(\delta\)-chord-arc domain, then on compact sets for small enough scales, \(\partial\Omega\) satisfies the \(\delta^{\theta_{0}}\)-UR conditions. This is made precise by the following Theorem, and in fact, as long as we assume some underlying conditions on a domain \(\Omega\), we obtain a new characterization of \(\delta\)-chord-arc domains. For simplicity, we
choose just one such condition coming from (A)-(F) to give the characterization, which we make as the following local definition.
**Definition 1.14**.: Let \(\mu\) be \(d\)-Ahlfors regular. We say that \(\mu\) satisfies the local \(\delta\)-UR condition of dimension \(d\) if for each \(K\subset\mathbb{R}^{n}\) compact, there is an \(R_{K}>0\) so that for each \(x\in\operatorname{spt}\mu\cap K\) and \(r\in(0,R_{K})\) one has \(\mu(B(x,r))\leq(1+\delta)r^{d}\) and
\[\mu(B(x,r))^{-1}\int_{B(x,r)}\int_{0}^{r}\alpha_{\mu}(y,t)^{2}\;\frac{d\mu(y) dt}{t}\leq\delta.\]
**Theorem 1.15**.: _Fix \(n\in\mathbb{N}\) and \(C_{E}>0\). Then there are constants \(\delta_{0},\theta_{0}\in(0,1)\) depending only on \(n\) and \(C_{E}\) so that the following holds._
_Let \(\Omega\subset\mathbb{R}^{n}\) be a set of locally finite perimeter such that \(\Omega\) satisfies the separation property and \(\partial\Omega\) is \((n-1)\)-Ahlfors regular with constant \(C_{E}\). If \(\Omega\) is unbounded, assume in addition that \(\sup_{x\in\partial\Omega,\;r>0}b\beta_{\infty,\partial\Omega}(x,r)\leq\delta_ {0}\). Then for any \(\delta\in(0,\delta_{0})\) each of the conditions_
1. \(\Omega\) _is a_ \(\delta\)_-chord arc domain,_
2. _for any measure_ \(d\mu(x)=g(x)\,d\mathcal{H}^{n-1}|_{\partial\Omega}(x)\) _with_ \((1+\delta)^{-1}\leq g\leq 1+\delta\)_,_ \(\mu\) _satisfies the local_ \(\delta\)_-UR condition of dimension_ \((n-1)\)_,_
_implies the other with constant \(\delta^{\theta_{0}}\) in place of \(\delta\)._
For a discussion of the proof of Theorem 1.15, see Section 7.
## 2. Preliminary definitions
We introduce the system of "dyadic cubes" for Ahlfors regular sets, which is an integral part of the definition of a \(\delta\)-Corona decomposition. They also play an important role in the square function estimate we prove for the Tolsa \(\alpha\) coefficients in Theorem 4.7, since we opt to prove a dyadic version instead of the continuous one.
### The Christ-David dyadic lattice
Recall that as in [1], if \(\mu\) is a \(d\)-Ahlfors regular measure in \(\mathbb{R}^{n}\) with constant \(C_{\mu}\), then one can construct a family of subsets of \(E=\operatorname{spt}\mu\) that plays an analogous role to the family of dyadic cubes in \(\mathbb{R}^{n}\). In particular, for each \(j\in\mathbb{Z}\), there is a partition \(\Delta_{j}\) of \(E\) into "dyadic cubes" of \(E\) that satisfy the following:
\[\text{if }j\leq k,\ Q\in\Delta_{j}\text{, and }Q^{\prime}\in\Delta_{k}\text{, then either }Q\cap Q^{\prime}=\emptyset\text{ or }Q\subset Q^{\prime}, \tag{2.1}\] \[\text{if }Q\in\Delta_{j}\text{, then }C_{D}^{-1}2^{j}\leq\operatorname{diam}Q\leq C_{D}2^{j}\text{ and }C_{D}^{-1}2^{jd}\leq\mathcal{H}^{d}(Q)\leq C_{D}2^{jd}, \tag{2.2}\] \[\text{if }Q\in\Delta_{j}\text{ and }\tau>0\text{, then }\mathcal{H}^{d}\left(\left\{x\in Q\;:\;\operatorname{dist}\left(x,E\setminus Q\right)\leq\tau 2^{j}\right\}\right)\leq C_{D}\tau^{1/C_{D}}2^{jd}. \tag{2.3}\]
Remark that in (2.1)-(2.3) above, the constant \(C_{D}\) only depends on the dimensions \(n,d\) and \(C_{\mu}\). Also, condition (2.3) for \(\tau\) sufficiently small furnishes the existence of a "center" of each cube \(c_{Q}\in Q\), which satisfies
\[\operatorname{dist}\left(c_{Q},E\setminus Q\right)\geq C_{D}^{-1}\text{diam}\,Q,\]
so that
\[B(c_{Q},C_{D}^{-1}\text{diam}\,Q)\cap E\subset Q. \tag{2.4}\]
By convention, we define for \(\lambda>1\),
\[\lambda Q=\{x\in E\;:\;\operatorname{dist}\,(x,Q)\leq(\lambda-1)\operatorname{diam} \,Q\}.\]
In a similar manner, for any \(Q\), we define \(B_{Q}=B(c_{Q},\operatorname{diam}Q)\) so that \(B_{Q}\) is a ball centered on \(E\) satisfying
\[Q\subset B_{Q}\cap E\subset 2Q.\]
If \(Q\subset Q^{\prime}\), and \(Q\in\Delta_{j},Q^{\prime}\in\Delta_{j+1}\) then \(Q\) is said to be a child of \(Q^{\prime}\), and \(Q^{\prime}\) is said to be the parent of \(Q\). Similarly, if \(R\) and \(R^{\prime}\) share a parent, then they are said to be siblings. The set of all dyadic cubes of \(E\) is \(\Delta=\cup_{j}\Delta_{j}\), and for \(R\in\Delta\), we denote all dyadic cubes contained in \(R\) by \(\Delta(R)\). Finally, if \(Q\in\Delta\) belongs to \(\Delta_{j}\), we write \(\operatorname{gen}Q=j\).
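For instance, (2.1) and (2.2) already bound the number of children of a cube: if \(Q\in\Delta_{j+1}\) and \(Q_{1},\dots,Q_{N}\in\Delta_{j}\) are its children, then the \(Q_{i}\) are disjoint and their union is \(Q\), so
\[N\,C_{D}^{-1}2^{jd}\leq\sum_{i=1}^{N}\mathcal{H}^{d}(Q_{i})=\mathcal{H}^{d}(Q)\leq C_{D}2^{(j+1)d},\]
and hence \(N\leq C_{D}^{2}\,2^{d}\), a bound depending only on \(C_{D}\).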
### Small-constant Corona Decompositions
Let us define precisely \(\delta\)-Corona decompositions for \(d\)-Ahlfors regular sets. We opt to make the definition as strong as possible, since we anticipate this will be the most useful property of small-constant UR sets from which one can obtain precise, quantitative, small constant square function estimates. Fix a constant \(C_{n,d}>1\) large, and to be determined below in the discussion of Remark 2.2.
**Definition 2.1**.: Suppose that \(E\subset\mathbb{R}^{n}\) is \(d\)-Ahlfors regular, and \(E\) has a system of dyadic cubes \(\Delta\) with constant \(C_{D}\leq C_{n,d}\). Then we say that \(E\) admits a \(\delta\)-Corona decomposition in \(R_{0}\in\Delta\) if for each \(R\in\Delta\) such that \(\operatorname{gen}R=\operatorname{gen}R_{0}\) and \(R\subset\delta^{-1}B_{R_{0}}\), there is a partition \(\mathcal{F}(R)\) of \(\Delta(R)\) which satisfies the following:
(2.5) each \(S\in\mathcal{F}(R)\) has a maximal cube \(Q(S)\) so that if \(Q\in S\) and some \(Q^{\prime}\in\Delta(R)\) satisfies \(Q\subset Q^{\prime}\subset Q(S)\), then \(Q^{\prime}\in S\). Moreover, if \(Q\in S\), then either all of its children, or none of its children are in \(S\).

(2.6) for each \(S\in\mathcal{F}(R)\) there is a \(d\)-dimensional Lipschitz graph \(\Gamma=\Gamma(S)\) with Lipschitz constant \(\leq\delta\) so that for each \(Q\in S\) and \(x\in\delta^{-1}B_{Q}\), we have \(\operatorname{dist}\,(x,E)+\operatorname{dist}\,(x,\Gamma)\leq\delta\operatorname{diam}Q\). Moreover,
\[\mathcal{H}^{d}(\delta^{-1}B_{Q(S)}\cap(E\Delta\Gamma))\leq\delta\mathcal{H}^{d}(Q(S)).\]
(Here \(E\Delta\Gamma=(E\setminus\Gamma)\cup(\Gamma\setminus E)\) is not to be confused with the symbol for dyadic lattices \(\Delta=\cup_{j\in\mathbb{Z}}\Delta_{j}\).)

(2.7) the maximal cubes \(Q(S)\) satisfy a small-constant Carleson packing condition in that for each \(R^{\prime}\in\Delta(R)\), one has
\[\sum_{\begin{subarray}{c}S\in\mathcal{F}(R)\\ Q(S)\subset R^{\prime}\end{subarray}}\mathcal{H}^{d}(Q(S))\leq(1+\delta) \mathcal{H}^{d}(R^{\prime}).\]
Moreover, we have the following condition on the "top" Lipschitz graphs of the Corona decomposition:
(2.8) if \(S_{R},S_{R_{0}}\) are so that \(R\in S_{R}\in\mathcal{F}(R)\) and \(R_{0}\in S_{R_{0}}\in\mathcal{F}(R_{0})\), then \(\Gamma(S_{R})=\Gamma(S_{R_{0}})\). That is to say, each of the chosen Lipschitz graphs for the collections containing the top cubes \(R\) is identical.
Finally we say that \(E\) admits \(\delta\)-Corona decompositions if it admits a \(\delta\)-Corona decomposition in each \(R_{0}\in\Delta\).
**Remark 2.2**.: Some remarks about this definition (and how it is different from the usual Corona decomposition as in [10]) are in order. In general, a Corona decomposition for a uniformly rectifiable set \(E\) includes a partition of dyadic cubes into "good cubes" and "bad cubes," where the bad cubes do not have a good approximating Lipschitz graph as in (2.6). In the small-constant setting, it turns out that all cubes are "good," and thus the main condition satisfied is that there are not too many families \(S\in\mathcal{F}(R)\) as quantified by (2.7). Also, since we are interested in bi-lateral approximations of Ahlfors regular sets by planes, we include in (2.6) that the approximating graph \(\Gamma\) be sufficiently close to \(E\) as well. The fact that there are measure estimates on \(E\Delta\Gamma\) inside \(\delta^{-1}B_{Q(S)}\) and that (2.8) holds are perks we obtain for free when showing (A) gives (B), which shall be useful to us in estimating the Tolsa \(\alpha\) coefficients for \(\delta\)-UR sets.
One other main difference is that in a \(\delta\)-Corona decomposition, as opposed to a general one, we require that the Carleson packing constant appearing in (2.7) be controlled as \(\delta\downarrow 0\). This plays a crucial role in the arguments that follow, since this implies that if \(R^{\prime}\in\Delta(R)\) is a maximal cube in some family, \(R^{\prime}=Q(S)\) for some \(S\in\mathcal{F}(R)\), then necessarily one has
\[\sum_{\begin{subarray}{c}S\in\mathcal{F}(R)\\ Q(S)\subsetneq R^{\prime}\end{subarray}}\mathcal{H}^{d}(Q(S))\leq\delta\mathcal{H}^{d}(R^{\prime}). \tag{2.9}\]
Also it is important to remark that the so-called "coherent" condition on the families \(\mathcal{F}(R)\) from (2.5) includes two pieces. The second part, which asserts that if \(Q\in S\) then either all of its children are or none of its children are, has as an important consequence that
\[\begin{array}{ll}&\text{if $x\in Q(S)$ then either $x$ is in arbitrarily small cubes of $S$, or $x$ is contained}\\ &\text{in a minimal cube of $S$.}\end{array} \tag{2.10}\]
Finally, this brings us to the appearance (and definition) of the constant \(C_{n,d}\). Notice that by definition, a \(\delta\)-Corona decomposition of \(E\) is assumed to hold over a dyadic system \(\Delta\) with bounded constant \(C_{D}\leq C_{n,d}\), where \(C_{n,d}\) is chosen as follows. We shall soon see (Lemma 3.2) that \(\delta\)-UR sets of dimension \(d\) in \(\mathbb{R}^{n}\) are \(d\)-Ahlfors regular with uniformly bounded constant. In particular, they admit a system of dyadic cubes \(\Delta\) as in Section 2.1 with constant \(C_{D}\) depending only on \(n,d\), which we define to be \(C_{n,d}\). Forcing this condition on the system \(\Delta\) is rather minor, but it allows us to rule out pathological examples of \(\delta\)-Corona decompositions for \(\delta\)-UR sets associated to a system of dyadic cubes with very large constant.
### Conventions for constants
In general, we denote by \(C\) a constant which is allowed to change line per line, depending on the parameters explicitly stated in the statement of a Lemma, Theorem, or Corollary. We avoid using the symbols \(\lesssim,\gtrsim\) but very infrequently will use the notation \(A\simeq_{D}B\) to mean that there is some constant \(C>0\) depending only on \(D\) so that \(C^{-1}A\leq B\leq CA\).
## 3. \(\delta\)-UR measures admit \(\delta^{\theta_{0}}\)-Corona decompositions
In this section, we show that (A) gives (B), i.e., we prove Theorem 3.3. Let us begin with two useful lemmas. The first shall be used repeatedly in later sections.
To motivate the first result, remark that if \(\mu\) is \(d\)-Ahlfors regular with constant \(C_{\mu}>0\) and support \(E\), then in general we may only conclude that \(\mu\) and \(\mathcal{H}^{d}|_{E}\) are mutually absolutely continuous with density \(g=d\mu/d\mathcal{H}^{d}|_{E}\) satisfying \(C_{\mu}^{-1}\leq g\leq 2^{d}C_{\mu}\), and moreover, \(E\) is \(d\)-Ahlfors regular with constant \(2^{d}C_{\mu}^{2}\) (see for example, [14, Theorem 6.9]). This crude estimate is problematic if we want precise control of the Ahlfors regularity constant of \(\mathcal{H}^{d}|_{E}\) when \(\mu\) is \(d\)-Ahlfors regular with constant \(C_{\mu}\) that is close to \(1\). When more geometric regularity is assumed, though, this can be strengthened as in the following.
**Lemma 3.1**.: _Suppose that \(\mu\) is a \(d\)-Ahlfors regular measure in \(\mathbb{R}^{n}\) with constant \(C_{\mu}>0\), and that \(E=\operatorname{spt}\mu\) is \(d\)-rectifiable. Then the density \(d\mathcal{H}^{d}|_{E}/d\mu\) exists and satisfies_
\[C_{\mu}^{-1}\leq d\mathcal{H}^{d}|_{E}/d\mu\leq C_{\mu},\]
\(\mu\)_-almost everywhere. In particular, for any subset \(A\subset\mathbb{R}^{n}\) Borel, we have_
\[C_{\mu}^{-1}\leq\frac{\mathcal{H}^{d}|_{E}(A)}{\mu(A)}\leq C_{\mu}, \tag{3.1}\]
_and \(\mathcal{H}^{d}|_{E}\) is \(d\)-Ahlfors regular with constant \(C_{\mu}^{2}>0\)._
Proof.: It is straightforward to see from the Ahlfors regularity of \(\mu\) that \(E\) is also \(d\)-Ahlfors regular, and \(\mathcal{H}^{d}|_{E}\) and \(\mu\) are mutually absolutely continuous with density \(d\mathcal{H}^{d}|_{E}/d\mu\) bounded above and below. Since \(E\) is rectifiable, we know that the density
\[\theta^{d}(x)=\lim_{r\downarrow 0}\mathcal{H}^{d}|_{E}(B(x,r))/r^{d}\]
exists and equals \(1\) for \(\mathcal{H}^{d}\) almost all \(x\in E\) (see, for example, Theorem 16.2 in [14]). It follows then that for \(\mathcal{H}^{d}|_{E}\) (and thus \(\mu\)) almost all \(x\),
\[\frac{d\mathcal{H}^{d}|_{E}}{d\mu}(x) \leq\limsup_{r\downarrow 0}\frac{\mathcal{H}^{d}|_{E}(B(x,r))}{ \mu(B(x,r))}\] \[=\limsup_{r\downarrow 0}\frac{\mathcal{H}^{d}|_{E}(B(x,r))}{r^{d}} \frac{r^{d}}{\mu(B(x,r))}\] \[\leq C_{\mu},\]
by Ahlfors regularity of \(\mu\). A similar computation shows \(d\mathcal{H}^{d}|_{E}/d\mu(x)\geq C_{\mu}^{-1}\) for \(\mu\) almost all \(x\), and thus whenever \(A\subset\mathbb{R}^{n}\) is Borel,
\[C_{\mu}^{-1}\mu(A)=\int_{A}C_{\mu}^{-1}\;d\mu\leq\int_{A}\frac{d\mathcal{H}^{ d}|_{E}}{d\mu}\;d\mu\leq C_{\mu}\int_{A}\;d\mu=C_{\mu}\,\mu(A).\]
Since \(\mathcal{H}^{d}|_{E}(A)=\int_{A}(d\mathcal{H}^{d}|_{E}/d\mu)\;d\mu\) (see for example, Theorem 2.12 in [14]), this shows (3.1). The last claim of the Lemma follows by taking \(A=B(x,r)\) for \(x\in E\) in (3.1) and using Ahlfors regularity of \(\mu\).
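To spell out this last step (a routine computation, recorded only for convenience): for \(x\in E\) and \(0<r<\operatorname{diam}E\) (say), taking \(A=B(x,r)\) in (3.1) and using the Ahlfors regularity of \(\mu\) gives

\[C_{\mu}^{-2}r^{d}\leq C_{\mu}^{-1}\mu(B(x,r))\leq\mathcal{H}^{d}|_{E}(B(x,r))\leq C_{\mu}\,\mu(B(x,r))\leq C_{\mu}^{2}\,r^{d}.\]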
The proof of the following lemma is omitted, since it is proved (when \(d=n-1\)) in [10, Lemma 5.4]. The argument in higher codimension is the same.
**Lemma 3.2** (see Lemma 5.4, [10]).: _There is a constant \(C_{0}\geq 1\) depending only on \(n\) and \(d\) such that if \(E\subset\mathbb{R}^{n}\) is \(\delta\)-UR of dimension \(d\), then \(E\) is \(d\)-Ahlfors regular with constant at most \(1+C_{0}\delta^{1/d}\) and \(C_{0}\delta^{1/d}\)-Reifenberg flat._
We may now state the main Theorem of this section, which is that (A) gives (B). Of course, in our setting of \(\delta\)-UR sets, life becomes easier in that we need not go through the effort of constructing the Lipschitz graphs _by hand_ in a small-constant Corona decomposition, as the authors do in [10]. Instead, the following result simply says that with our approximating Lipschitz graphs coming from the definition of a \(\delta\)-UR set, we obtain a Corona decomposition with a loss in a constant, and an exponent in \(\delta\).
**Theorem 3.3**.: _There are \(C_{0}\geq 1\), \(\delta_{0}>0\) and \(\theta_{0}\in(0,1)\) depending only on the underlying dimensions \(n\) and \(d\) so that if \(E\subset\mathbb{R}^{n}\) is \(\delta\)-UR of dimension \(d\) in \(\mathbb{R}^{n}\) with \(\delta\in(0,\delta_{0})\), then \(E\) admits \(C_{0}\delta^{\theta_{0}}\)-Corona decompositions._
Proof.: As mentioned at the end of Remark 2.2, since \(E\) is \(\delta\)-UR of dimension \(d\), we may fix once and for all a system of dyadic cubes \(\Delta\) for \(E\) with constant \(C_{D}\leq C_{n,d}\).
Now begin with some dyadic cube \(R_{0}\in\Delta\) for \(E\). Denote by \(C_{E}\) the Ahlfors regularity constant of \(\mathcal{H}^{d}|_{E}\), which by Lemma 3.2, is bounded. Fix \(\theta,\theta^{\prime}\in(0,1)\) to be determined, and set \(\eta\coloneqq\delta^{\theta},M\coloneqq\delta^{-\theta^{\prime}}\). For definiteness, we state now that \(\theta=1/2\) and \(\theta^{\prime}<\theta/(2d)\) shall suffice here, but these parameters are different than the \(\theta_{0}\) in the conclusion of the Theorem. We construct our partition \(\mathcal{F}\equiv\mathcal{F}(R_{0})\) by sequential coherent generations. That is, we will construct \(\mathcal{F}\) as a disjoint union \(\mathcal{F}=\mathcal{F}_{0}\cup\mathcal{F}_{1}\cup\mathcal{F}_{2}\cup\dots\) where each \(\mathcal{F}_{i}\) consists of coherent collections \(S\subset\Delta(R_{0})\), and \(\mathcal{F}_{0}\) contains a single collection \(S_{0}\) with top cube \(Q(S_{0})=R_{0}\). Moreover, each \(\mathcal{F}_{i}\) for \(i\in\mathbb{N}\) will satisfy the following: for each \(S\in\mathcal{F}_{i}\), there is a unique \(S^{\prime}\in\mathcal{F}_{i-1}\) so that \(Q(S)\subset Q(S^{\prime})\), and all cubes \(Q\in\Delta(R_{0})\) with \(Q(S)\subsetneq Q\subset Q(S^{\prime})\) are such that \(Q\in S^{\prime}\). Also, we shall deal only with the partition \(\mathcal{F}=\mathcal{F}(R_{0})\) of \(\Delta(R_{0})\) for now, and leave to the very end of the proof how to ensure that (2.8) holds for the other \(R\in\Delta\) that are nearby \(B_{R_{0}}\).
As mentioned, we shall take \(R_{0}\) to be the top cube of the only collection \(S_{0}\) in the zeroth generation family, \(\mathcal{F}_{0}\). Since \(E\) is \(\delta\)-UR, we choose a \(\delta\)-Lipschitz graph \(\Gamma\equiv\Gamma(S_{0})\) so that
\[\mathcal{H}^{d}|_{E}(MB_{R_{0}}\setminus\Gamma)+\mathcal{H}^{d}|_{\Gamma}(MB_{R_{0}}\setminus E)\leq\delta(M\operatorname{diam}R_{0})^{d}=\delta^{1-d\theta^{\prime}}(\operatorname{diam}R_{0})^{d}. \tag{3.2}\]
In what follows, estimate (3.2) (and the fact that \(\Gamma,E\) are sufficiently flat) shall be the _only_ fact we use about \(\Gamma\) to ensure that this particular \(\Gamma\) shall suffice in the construction of \(S_{0}\).
Now we continue adding children of \(R_{0}\) to the collection \(S_{0}\) until we reach a cube \(Q\) that has a sibling \(Q^{\prime}\) (possibly \(Q^{\prime}=Q\)) which is mediocre for \(S_{0}\), meaning that
\[\mathcal{H}^{d}(Q^{\prime}\setminus\Gamma)>\eta\mathcal{H}^{d}(Q^{\prime}). \tag{3.3}\]
At this stage, \(Q\), and all of its siblings become minimal cubes of the collection \(S_{0}\), and all of their children become top cubes for the new collections \(S\in\mathcal{F}_{1}\). Notice that for such \(Q\), if \(\tilde{Q}\) is the parent of \(Q\), then \(\tilde{Q}\) is not mediocre for \(S_{0}\), and thus,
\[\mathcal{H}^{d}(Q\setminus\Gamma)\leq\mathcal{H}^{d}(\tilde{Q}\setminus\Gamma) \leq\eta\mathcal{H}^{d}(\tilde{Q})\leq C\eta\mathcal{H}^{d}(Q), \tag{3.4}\]
with constant \(C\) depending only on \(C_{D}\) and the underlying dimensions.
Let us make some observations about the family \(S_{0}\) constructed. First of all, \(S_{0}\) is coherent (i.e., satisfies condition (2.5)) by construction. Moreover, from (3.4) we see that all \(Q\in S_{0}\) satisfy
\[\mathcal{H}^{d}(Q\setminus\Gamma)\leq C\eta\mathcal{H}^{d}(Q), \tag{3.5}\]
since those \(Q\in S_{0}\) that are not minimal satisfy the above inequality with \(C=1\). Let us show now that this measure estimate implies that for the cubes in \(S_{0}\), \(\Gamma\) and \(E\) are very near each other in that for any \(Q\in S_{0}\), we have
\[\sup_{x\in MB_{Q}\cap(E\cup\Gamma)}\operatorname{dist}\,(x,\Gamma) +\operatorname{dist}\,(x,E) \leq CM^{2}\eta^{1/d}\mathrm{diam}\,Q\] \[=C\delta^{\theta/d-2\theta^{\prime}}\mathrm{diam}\,Q, \tag{3.6}\]
for some constant \(C\) depending only on the dimensions and \(C_{D}\).
First, suppose that \(Q\in S_{0}\), and \(x\in B(c_{Q},C_{D}^{-1}\mathrm{diam}\,Q/2)\cap Q\setminus\Gamma\) (recall that \(c_{Q}\) is the center of \(Q\), for which (2.4) holds). Denoting \(r=\operatorname{dist}\,(x,\Gamma)\), then as long as \(\delta_{0}\) is sufficiently small, we must have \(r\leq C_{D}^{-1}\mathrm{diam}\,Q/2\). This is because otherwise we have \(Q\setminus\Gamma\supset Q\cap B(x,C_{D}^{-1}\mathrm{diam}\,Q/2)\), and thus \(\mathcal{H}^{d}(Q\setminus\Gamma)\geq\mathcal{H}^{d}(Q\cap B(x,C_{D}^{-1} \mathrm{diam}\,Q/2))\geq c\mathcal{H}^{d}(Q)\) for some constant \(0<c<1\) depending only on \(C_{D}\) and \(C_{E}\). When \(\delta_{0}\) (and thus \(\eta\)) is sufficiently small, this contradicts (3.5) and whence the fact that \(Q\in S_{0}\). Hence, we may assume that \(r\leq C_{D}^{-1}\mathrm{diam}\,Q/2\), so \(B(x,r)\subset B(c_{Q},C_{D}^{-1}\mathrm{diam}\,Q)\), and thus
\[\mathcal{H}^{d}(Q\setminus\Gamma)\geq\mathcal{H}^{d}(Q\cap B(x,r))\geq C_{E} ^{-1}r^{d}.\]
Since \(Q\in S_{0}\), we have that (3.5) gives \(r\leq C\eta^{1/d}\mathrm{diam}\,Q\), i.e.,
\[\sup_{x\in E\cap B(c_{Q},C_{D}^{-1}\mathrm{diam}\,Q/2)}\operatorname{dist}\,( x,\Gamma)\leq C\eta^{1/d}\mathrm{diam}\,Q. \tag{3.7}\]
Next, recall from Lemma 3.2 that \(E\) is \(C\delta^{1/d}\) Reifenberg flat, and similarly, so is \(\Gamma\). We claim that for \(\delta_{0}\) sufficiently small, this implies
\[\begin{array}{l}\text{for every }x\in(\Gamma\setminus E)\cap B(c_{Q},C_{D}^{- 1}\mathrm{diam}\,Q/8),\text{ there is a point}\\ y\ \in\ E\,\cap\,B(x,2\mathrm{dist}\,(x,E))\,\cap\,B(c_{Q},C_{D}^{-1}\mathrm{ diam}\,Q/2)\ \text{ with }\ \operatorname{dist}\,(y,\Gamma)\ \geq\\ (2/3)\mathrm{dist}\,(x,E).\end{array} \tag{3.8}\]
Indeed, if no such point \(y\in E\cap B(x,2\mathrm{dist}\,(x,E))\) exists, then each such \(y\) satisfies \(\operatorname{dist}\,(y,\Gamma)<(2/3)\mathrm{dist}\,(x,E)\), and the fact that \(E\) and \(\Gamma\) are very well approximated by \(d\)-planes in \(B(x,2\mathrm{dist}\,(x,E))\) would lead to the contradiction that there is some \(z\in E\) with \(|x-z|<\operatorname{dist}\,(x,E)\). Now we simply recall that \(c_{Q}\in E\), and so since \(x\in B(c_{Q},C_{D}^{-1}\mathrm{diam}\,Q/8)\) we have that \(\operatorname{dist}\,(x,E)\leq C_{D}^{-1}\mathrm{diam}\,Q/8\), so that \(y\in B(c_{Q},C_{D}^{-1}\mathrm{diam}\,Q/2)\). This completes the proof of (3.8). Along with (3.7), this shows
\[\sup_{x\in B(c_{Q},C_{D}^{-1}\mathrm{diam}\,Q/8)\cap(E\Delta\Gamma)} \operatorname{dist}\,(x,\Gamma)+\operatorname{dist}\,(x,E)\leq C\eta^{1/d} \mathrm{diam}\,Q. \tag{3.9}\]
Appealing again to the fact that \(E\) and \(\Gamma\) are \(C\delta^{1/d}\) Reifenberg-flat, one deduces (3.6) from (3.9) with a possibly larger \(C\). The proof is slightly technical, but it merely requires choosing good approximating planes for \(E\) and \(\Gamma\) at different scales. For the sake of completeness, let us sketch a few details.
Recall here that we use the notation \(d_{x,r}\) for the normalized local Hausdorff distance as in Definition 1.6. In addition, \(\theta,\theta^{\prime}\) are such that \(\theta^{\prime}<\theta/(2d)\), and thus the quantities \(M\delta^{1/d}=\delta^{1/d-\theta^{\prime}}\) and \(M^{2}\eta^{1/d}=\delta^{\theta/d-2\theta^{\prime}}\) can be made arbitrarily small if \(\delta_{0}\) is chosen small enough. Since \(\Gamma\) is a \(\delta\)-Lipschitz graph, we know that there is some \(d\)-plane \(P_{\Gamma}\) so that
\[d_{c_{Q},r}(\Gamma,P_{\Gamma})\leq C\delta^{1/d},\]
as long as \(\delta_{0}\) is sufficiently small, and as long as \(r\geq C_{D}^{-1}\mathrm{diam}\,Q/8\). Since \(E\) is \(C\delta^{1/d}\)-Reifenberg flat, we may choose \(d\)-planes \(P_{E}\) and \(P_{E}^{\prime}\) so that
\[d_{c_{Q},2\mathrm{diam}\,Q}(E,P_{E})+d_{c_{Q},2M\mathrm{diam}\,Q}(E,P_{E}^{ \prime})\leq C\delta^{1/d}.\]
The fact that \(P_{E}\) and \(P_{E}^{\prime}\) are very good approximations to \(E\) inside \(2B_{Q}\), and run very near the center of \(B_{Q}\), imply that \(d_{c_{Q},2\mathrm{diam}\,Q}(P_{E},P_{E}^{\prime})\leq CM\delta^{1/d}\), and thus
\[d_{c_{Q},M\mathrm{diam}\,Q}(P_{E},P_{E}^{\prime})\leq CM\delta^{1/d}.\]
Finally, recalling estimate (3.9), we see that \(d_{c_{Q},\mathrm{diam}\,Q}(P_{\Gamma},P_{E})\leq C\eta^{1/d}\), which implies \(d_{c_{Q},M\mathrm{diam}\,Q}(P_{\Gamma},P_{E})\leq C\eta^{1/d}\). Thus if we compare distances from \(\Gamma\), to \(P_{\Gamma}\), to \(P_{E}\), then \(P_{E}^{\prime}\) and finally to \(E\) inside \(B(c_{Q},M\mathrm{diam}\,Q)\), we obtain (3.6).
Hence, for the first family \(\mathcal{F}_{0}\), we have that the first part of (2.6) holds with \(CM^{2}\eta^{1/d}=C\delta^{\theta/d-2\theta^{\prime}}\). Of course by construction, we also have the desired measure estimate
\[\mathcal{H}^{d}(MB_{Q(S_{0})}\cap(E\Delta\Gamma))\leq C\delta^{1-d\theta^{ \prime}}\mathcal{H}^{d}(Q(S_{0})). \tag{3.10}\]
We have one final step for \(\mathcal{F}_{0}\), which is to estimate the portion of minimal cubes of \(S_{0}\), denoted \(m(S_{0})\), contained in \(Q(S_{0})\).
Recall that if \(Q\in m(S_{0})\), then necessarily \(Q\) has a sibling \(Q^{\prime}\in m(S_{0})\) which is mediocre for \(S_{0}\), i.e., (3.3) holds. Then since the number of siblings of any dyadic cube in \(\Delta\) is uniformly bounded (by Ahlfors regularity of \(E\)), we have that
\[\sum_{Q\in m(S_{0})}\mathcal{H}^{d}(Q) \leq C\sum_{\begin{subarray}{c}Q^{\prime}\in m(S_{0})\\ Q^{\prime}\ \mathrm{mediocre}\end{subarray}}\mathcal{H}^{d}(Q^{\prime})\leq \frac{C}{\eta}\sum_{\begin{subarray}{c}Q^{\prime}\in m(S_{0})\\ Q^{\prime}\ \mathrm{mediocre}\end{subarray}}\mathcal{H}^{d}(Q^{\prime}\setminus\Gamma)\] \[\leq\frac{C}{\eta}\mathcal{H}^{d}(Q(S_{0})\setminus\Gamma)\leq \frac{C}{\eta}\mathcal{H}^{d}|_{E}(MB_{R_{0}}\setminus\Gamma)\] \[\leq C\delta^{1-d\theta^{\prime}-\theta}\mathcal{H}^{d}(R_{0})\]
by definition of \(\eta\). Notice that if \(\theta,\theta^{\prime}\) are sufficiently small, then \(1-d\theta^{\prime}-\theta>0\), and in particular, taking \(\theta=1/2\) and \(\theta^{\prime}<\theta/(2d)\) shall suffice for these purposes.
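For concreteness (this particular choice is purely illustrative), one may take \(\theta=1/2\) and \(\theta^{\prime}=1/(8d)\), in which case

\[\frac{\theta}{d}-2\theta^{\prime}=\frac{1}{2d}-\frac{1}{4d}=\frac{1}{4d}>0,\qquad 1-d\theta^{\prime}-\theta=1-\frac{1}{8}-\frac{1}{2}=\frac{3}{8}>0,\qquad\theta^{\prime}=\frac{1}{8d}>0,\]

so that the exponent \(\theta_{0}=\min\{\theta/d-2\theta^{\prime},1-d\theta^{\prime}-\theta,\theta^{\prime}\}\) obtained at the end of the proof may be taken to be \(1/(8d)\).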
Now assuming that \(\mathcal{F}_{i-1}\) has been constructed for \(i\in\mathbb{N}\), we make each child \(Q_{0}\) of some \(Q^{\prime}\in m(S^{\prime})\) for \(S^{\prime}\in\mathcal{F}_{i-1}\) a top cube \(Q_{0}=Q(S)\) of a new collection \(S\in\mathcal{F}_{i}\), and construct \(S\in\mathcal{F}_{i}\) in the same way as we did \(S_{0}\in\mathcal{F}_{0}\). That is, since \(E\) is \(\delta\)-UR, we choose a \(\delta\)-Lipschitz graph \(\Gamma=\Gamma(Q_{0})\) for which
\[\mathcal{H}^{d}|_{E}(MB_{Q_{0}}\setminus\Gamma)+\mathcal{H}^{d}|_{\Gamma}(MB_{ Q_{0}}\setminus E)\leq\delta(M\mathrm{diam}\,Q_{0})^{d}.\]
We continue to add cubes \(Q\in\Delta(Q_{0})\) to the collection \(S\) until we find some cube \(Q\) that has a sibling \(Q^{\prime}\) which is mediocre for \(S\), in that (3.3) holds. At this stage \(Q\) and all of its siblings become minimal cubes of \(S\), and each of their children becomes a top cube in the
next generation \(\mathcal{F}_{i+1}\). The same proof above applies to this collection \(S\) in place of \(S_{0}\): it is coherent in that (2.5) holds, and in addition, we have the following estimates:
\[\sup_{x\in MB_{Q}\cap(E\cup\Gamma)}\operatorname{dist}\left(x,E\right)+\operatorname{dist}\left(x,\Gamma(Q(S))\right)\leq C\delta^{\theta/d-2\theta^{\prime}}\operatorname{diam}Q\quad\text{for }Q\in S, \tag{3.11}\]
\[\mathcal{H}^{d}(MB_{Q(S)}\cap(E\Delta\Gamma(Q(S))))\leq C\delta^{1-d\theta^{\prime}}\mathcal{H}^{d}(Q(S)), \tag{3.12}\]
\[\sum_{Q\in m(S)}\mathcal{H}^{d}(Q)\leq C\delta^{1-d\theta^{\prime}-\theta}\mathcal{H}^{d}(Q(S)). \tag{3.13}\]
Recalling that \(M=\delta^{-\theta^{\prime}}\), we conclude that \(E\) admits a \(C\delta^{\theta_{0}}\) Corona decomposition with \(\theta_{0}=\min\{\theta/d-2\theta^{\prime},1-d\theta^{\prime}-\theta,\theta^ {\prime}\}>0\) provided that we show (3.13) implies (2.7) with \(C\delta^{\theta_{0}}\) in place of \(\delta\), i.e.,
\[\sum_{\begin{subarray}{c}S\in\mathcal{F}\\ Q(S)\subset R\end{subarray}}\mathcal{H}^{d}(Q(S))\leq(1+C\delta^{\theta_{0}}) \mathcal{H}^{d}(R). \tag{3.14}\]
Let \(R\in\Delta(R_{0})\), and choose an index \(i^{*}\in\mathbb{N}\cup\{0\}\) and a collection \(S^{*}\) so that \(R\in S^{*}\in\mathcal{F}_{i^{*}}\). Assume first \(R\) is the top cube of \(S^{*}\), \(R=Q(S^{*})\). Then by construction,
\[\bigcup_{\begin{subarray}{c}S\in\mathcal{F}_{i^{*}+1}\\ Q(S)\subset R\end{subarray}}Q(S)=\bigcup_{\begin{subarray}{c}Q\in m(S^{*})\\ Q\subset R\end{subarray}}Q,\]
where each of the unions above is a disjoint union. In particular, we see
\[\sum_{\begin{subarray}{c}S\in\mathcal{F}_{i^{*}+1}\\ Q(S)\subset R\end{subarray}}\mathcal{H}^{d}(Q(S)) =\sum_{\begin{subarray}{c}Q\in m(S^{*})\\ Q\subset R\end{subarray}}\mathcal{H}^{d}(Q)\] \[\leq C\delta^{1-d\theta^{\prime}-\theta}\mathcal{H}^{d}(R)\] \[\leq C\delta^{\theta_{0}}\mathcal{H}^{d}(R),\]
by (3.13). Now, for any index \(k\in\mathbb{N}\), \(k\geq i^{*}+1\), we have that
\[\bigcup_{\begin{subarray}{c}S\in\mathcal{F}_{k}\\ Q(S)\subset R\end{subarray}}Q(S)=\bigcup_{\begin{subarray}{c}S\in\mathcal{F} _{k-1}\\ Q(S)\subset R\end{subarray}}\left(\bigcup_{Q\in m(S)}Q\right)\]
where each of the unions above are disjoint. This gives
\[\sum_{\begin{subarray}{c}S\in\mathcal{F}_{k}\\ Q(S)\subset R\end{subarray}}\mathcal{H}^{d}(Q(S)) =\sum_{\begin{subarray}{c}S\in\mathcal{F}_{k-1}\\ Q(S)\subset R\end{subarray}}\left(\sum_{Q\in m(S)}\mathcal{H}^{d}(Q)\right)\] \[\leq C\delta^{\theta_{0}}\sum_{\begin{subarray}{c}S\in\mathcal{F} _{k-1}\\ Q(S)\subset R\end{subarray}}\mathcal{H}^{d}(Q(S)) \tag{3.15}\] \[\leq(C\delta^{\theta_{0}})^{k-i^{*}}\mathcal{H}^{d}(R),\]
where the first inequality in the above follows from (3.13), and the second is by induction on \(k\). As long as \(\delta_{0}\) is small enough (depending only on \(\theta_{0}\) and the underlying dimensions), \(C\delta_{0}^{\theta_{0}}<1\). Whence from (3.15) we obtain
\[\sum_{\begin{subarray}{c}S\in\mathcal{F}\\ Q(S)\subsetneq R\end{subarray}}\mathcal{H}^{d}(Q(S)) \leq\sum_{k\geq i^{*}+1}\sum_{\begin{subarray}{c}S\in\mathcal{F}_{k}\\ Q(S)\subset R\end{subarray}}\mathcal{H}^{d}(Q(S))\] \[\leq\sum_{k\geq i^{*}+1}\left(C\delta^{\theta_{0}}\right)^{k-i^{*}}\mathcal{H}^{d}(R)\] \[=\left(\sum_{k=1}^{\infty}(C\delta^{\theta_{0}})^{k}\right)\mathcal{H}^{d}(R)\] \[\leq C\delta^{\theta_{0}}\mathcal{H}^{d}(R),\]
since again, \(\delta_{0}\) is small enough so that \(C\delta^{\theta_{0}}\leq 1/2\), and hence \(\sum_{k\geq 1}(C\delta^{\theta_{0}})^{k}=(1-C\delta^{\theta_{0}})^{-1}-1=\frac{C\delta^{\theta_{0}}}{1-C\delta^{\theta_{0}}}\leq 2C\delta^{\theta_{0}}\). We have thus shown that whenever \(R\in\Delta(R_{0})\) is a top cube, \(R=Q(S^{*})\), then
\[\sum_{\begin{subarray}{c}S\in\mathcal{F}\\ Q(S)\subsetneq Q(S^{*})\end{subarray}}\mathcal{H}^{d}(Q(S))\leq C\delta^{\theta_{0}}\mathcal{H}^{d}(Q(S^{*})). \tag{3.16}\]
From here, we deduce our estimate for general \(R\).
Suppose that \(R\in\Delta(R_{0})\) is arbitrary. Then we can decompose the collection of top cubes \(Q(S)\subset R\), \(S\in\mathcal{F}\) into 2 disjoint collections, \(\mathcal{T}(R)\) and \(\mathcal{R}(R)\), where
\[\mathcal{T}(R) \coloneqq\{S\in\mathcal{F}\;:\;Q(S)\subset R,\text{ and }Q(S) \subset Q(S^{\prime})\subset R\text{ implies }S=S^{\prime}\}\] \[\mathcal{R}(R) \coloneqq\{S\in\mathcal{F}\;:\;Q(S)\subset R,S\not\in\mathcal{ T}(R)\}.\]
Put simply, \(\mathcal{T}(R)\) consists of the collections \(S\in\mathcal{F}\) whose top cubes are the "first" descendants of \(R\) that are top cubes, and \(\mathcal{R}(R)\) are the rest. Note that necessarily the cubes in \(\mathcal{T}(R)\) are disjoint, and also to each \(Q(S)\in\mathcal{R}(R)\) there is some \(S^{\prime}\in\mathcal{T}(R)\) for which \(Q(S)\subsetneq Q(S^{\prime})\). We estimate
\[\sum_{\begin{subarray}{c}S\in\mathcal{F}\\ Q(S)\subset R\end{subarray}}\mathcal{H}^{d}(Q(S)) =\sum_{S\in\mathcal{T}(R)}\mathcal{H}^{d}(Q(S))+\sum_{S\in\mathcal{R}(R)}\mathcal{H}^{d}(Q(S))\] \[\leq\sum_{S\in\mathcal{T}(R)}\mathcal{H}^{d}(Q(S))+\sum_{S^{\prime}\in\mathcal{T}(R)}\left(\sum_{\begin{subarray}{c}S\in\mathcal{F}\\ Q(S)\subsetneq Q(S^{\prime})\end{subarray}}\mathcal{H}^{d}(Q(S))\right)\] \[\leq\sum_{S\in\mathcal{T}(R)}\mathcal{H}^{d}(Q(S))+C\delta^{\theta_{0}}\sum_{S^{\prime}\in\mathcal{T}(R)}\mathcal{H}^{d}(Q(S^{\prime}))\]
\[=(1+C\delta^{\theta_{0}})\sum_{S\in\mathcal{T}(R)}\mathcal{H}^{d}(Q(S))\] \[\leq(1+C\delta^{\theta_{0}})\mathcal{H}^{d}(R).\]
In the above, we used (3.16) in the third inequality, and the fact that the cubes \(Q(S)\) for \(S\in\mathcal{T}(R)\) are disjoint and contained in \(R\) in the last. This shows (3.14), and thus our proof is complete once we can justify how to construct the other \(\mathcal{F}(R)\) as in Definition 2.1 so that (2.8) holds.
However, this last step is simple. Notice that if \(R\in\Delta_{j}\) where \(\operatorname{gen}R=j=\operatorname{gen}R_{0}\), and also \(R\subset(M/3)B_{R_{0}}\), then for \(\delta_{0}\) sufficiently small, we have that \((M/3C_{D})B_{R}\subset MB_{R_{0}}\). Hence (3.2) gives that
\[\mathcal{H}^{d}|_{E}\left(\left(\frac{M}{3C_{D}}B_{R}\right)\setminus\Gamma(S _{0})\right)+\mathcal{H}^{d}|_{\Gamma(S_{0})}\left(\left(\frac{M}{3C_{D}}B_{R }\right)\setminus E\right)\leq C\delta^{1-d\theta^{\prime}}(\operatorname{ diam}R)^{d},\]
where \(C>0\) depends only on \(n,d\) and \(C_{D}\). In particular, we recall that this was the only condition we used on the Lipschitz graph \(\Gamma(S_{0})\) to be chosen for the top cube of \(S_{0}\) in order to construct \(\mathcal{F}\). Hence, by simply taking \(C>0\) larger in the conclusion of the Theorem, we can use this same Lipschitz graph \(\Gamma(S_{0})\) for each such \(R\), and repeat the construction of \(\mathcal{F}\) essentially verbatim to construct the partition \(\mathcal{F}(R)\) of \(\Delta(R)\) for each such \(R\), finishing the proof of the Theorem.
## 4. \(\delta\)-Corona decompositions imply \(\alpha_{\mu}(x,r)\) are small
In this section, we show that (B) gives (C), i.e., we prove Theorem 4.7. Although the Carleson measure estimate we prove is a dyadic version of (C), this discrete estimate appearing in Theorem 4.7 implies the continuous one; a proof of this implication can be found in [1, Lemma 5.9], so we omit it. The first step in the proof is to obtain small-constant Carleson measure estimates on the \(\alpha_{\mu}(x,r)\) when \(\mu\) is a measure supported on a small-constant Lipschitz graph with density close to \(1\). This estimate is done in [11] when \(\mu\) is the surface measure on the graph, but for completeness we fill in the gap when one takes \(\mu\) slightly more general.
To state the Theorem in the same language as in [11], whenever \(E=\operatorname{spt}\mu\) is \(d\)-Ahlfors regular, \(\Delta\) is a system of dyadic cubes for \(E\), and \(Q\in\Delta\), we abuse notation for \(\alpha_{\mu}\) and set
\[\alpha_{\mu}(Q)\coloneqq\alpha_{\mu}(c_{Q},3\operatorname{diam}Q) \tag{4.1}\]
where of course, \(\alpha_{\mu}(c_{Q},3\operatorname{diam}Q)\) is as in Definition 1.5. There is a very minor difference in \(\alpha_{\mu}(Q)\) and that written in [11], where the normalization is taken with respect to the quantity \(\ell(Q)\) (which is \(\ell(Q)=2^{j}\) when \(Q\in\Delta_{j}\)) in place of \(3\operatorname{diam}Q\), but since these quantities are comparable (with constant \(C_{D}\)) this difference is unimportant in the estimates that follow. Let us state two main Theorems proved in [11], which we shall use in our proof of Theorem 4.7. First we recall the definition of "large-constant" uniformly rectifiable measures.
**Definition 4.1**.: Suppose that \(\mu\) is a \(d\)-Ahlfors regular measure on \(\mathbb{R}^{n}\). Then \(\mu\) is said to be \(d\)-uniformly rectifiable (with constant \(M>1\)) if for each \(x\in\operatorname{spt}\mu\) and \(r>0\), there exists a
bi-Lipschitz map \(f:B_{d}(0,r)\subset\mathbb{R}^{d}\to\mathbb{R}^{n}\) with bi-Lipschitz constant \(\leq M\), so that
\[\mu(B(x,r)\cap f(B_{d}(0,r)))\geq M^{-1}r^{d}.\]
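For orientation, we note a trivial example (not needed in what follows): if \(P\subset\mathbb{R}^{n}\) is an affine \(d\)-plane and \(\mu=\mathcal{H}^{d}|_{P}\), then for \(x\in P\) and \(r>0\) one may take \(f\) to be an isometric parametrization of \(P\cap B(x,r)\) by \(B_{d}(0,r)\) with \(f(0)=x\), so that

\[\mu(B(x,r)\cap f(B_{d}(0,r)))=\mathcal{H}^{d}(P\cap B(x,r))=\omega_{d}r^{d},\qquad\omega_{d}:=\mathcal{H}^{d}(B_{d}(0,1)),\]

and hence \(\mu\) is \(d\)-uniformly rectifiable with any constant \(M>\max\{1,\omega_{d}^{-1}\}\).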
**Theorem 4.2** (Theorem 1.2 in [11]).: _Let \(\mu\) be a \(d\)-Ahlfors regular measure in \(\mathbb{R}^{n}\) with constant \(C_{\mu}\), and suppose that \(\mu\) is \(d\)-uniformly rectifiable (with large constant, \(M>1\)). Fix a system \(\Delta\) of dyadic cubes for \(\mu\) with constant \(C_{D}>0\). Then there is some constant \(C_{0}=C_{0}(n,d,C_{\mu},M,C_{D})>1\) so that one has the Carleson condition_
\[\sup_{R_{0}\in\Delta}\mu(R_{0})^{-1}\sum_{Q\in\Delta(R_{0})}\alpha_{\mu}(Q)^{ 2}\mu(Q)\leq C_{0}. \tag{4.2}\]
_Of course here, \(C_{0}\) is independent of \(R_{0}\in\Delta\)._
**Theorem 4.3** (Theorem 1.1 and Remark 4.1 that follows in [11]).: _Suppose that \(\Gamma\) is a \(d\)-dimensional Lipschitz graph in \(\mathbb{R}^{n}\) with constant \(\delta<1\), let \(\mu=\mathcal{H}^{d}|_{\Gamma}\), and suppose that \(\Delta\) is a system of dyadic cubes for \(\Gamma\) with constant \(C_{D}>0\). Then the following Carleson condition holds:_
\[\sup_{R_{0}\in\Delta}\mu(R_{0})^{-1}\sum_{Q\in\Delta(R_{0})}\alpha_{\mu}(Q)^{ 2}\mu(Q)\leq C_{0}\delta^{2},\]
_where \(C_{0}=C_{0}(n,d,C_{D})\) is independent of \(\delta\)._
To be totally transparent, Remark 4.1 in [11] is stated for true dyadic cubes in \(\mathbb{R}^{n}\) that meet the Lipschitz graph \(\Gamma\), and the result is stated as a global Carleson packing condition for compactly supported Lipschitz graphs. However, it is straightforward to deduce the Theorem above from how Remark 4.1 is stated. Indeed, one can obtain a local result from a global one by fixing an initial cube \(R_{0}\) of \(\Gamma\), and finding a \((C\delta)\)-Lipschitz graph \(\Gamma^{\prime}\) that agrees with \(\Gamma\) in \(10B_{R_{0}}\) and has support in \(20B_{R_{0}}\). Then Theorem 4.3 applied to \(\Gamma^{\prime}\) gives the result, since the \(\alpha_{\mu}(Q)\) are local to \(3B_{R_{0}}\) anyway.
From here, we can extend this small-constant estimate to measures of the form \(d\mu(x)=g(x)\;d\mathcal{H}^{d}|_{\Gamma}(x)\) where \(g(x)\) is some controlled density, using the following two Lemmas.
**Lemma 4.4**.: _Suppose that \(\mu\) is a \(d\)-Ahlfors regular measure in \(\mathbb{R}^{n}\) with constant \(C_{\mu}>0\), and \(\Delta\) is a system of dyadic cubes for \(E=\operatorname{spt}\mu\) with constant \(C_{D}>0\). Then for any \(g\in L^{\infty}(\mu)\), the coefficients \(\mathcal{O}_{\mu,g}(Q)\) defined on the cubes \(Q\) by_
\[\mathcal{O}_{\mu,g}(Q)\coloneqq(\operatorname{diam}Q)^{-d-1}\inf_{\lambda\in\mathbb{R}}\sup_{f\in\Lambda(3B_{Q})}\left|\int f(g-\lambda)\;d\mu\right| \tag{4.3}\]
_satisfy the Carleson condition_
\[\sup_{R_{0}\in\Delta}\mu(R_{0})^{-1}\sum_{Q\in\Delta(R_{0})}\mathcal{O}_{\mu,g}(Q)^{2}\mu(Q)\leq C_{0}(\text{osc }g)^{2}\]
_where \(\operatorname{osc}g\coloneqq\operatorname{ess\,sup}g-\operatorname{ess\,inf}g\), taken with respect to the measure \(\mu\). Here, \(C_{0}=C_{0}(n,d,C_{\mu},C_{D})>0\) is independent of \(g\)._
Proof.: The idea is to use a dyadic Martingale decomposition of \(g\) with respect to the dyadic cubes for \(E=\operatorname{spt}\mu\), but for a family of adjacent dyadic cubes for \(E\) rather than the single system from Section 2.1 (we shall see the flexibility this gives us shortly). Theorems 2.9 and
5.9 in [14] imply that there exists \(\delta\in(0,1)\) small, and \(M\in\mathbb{N}\), \(C_{\mathcal{D}}>1\), large depending only on \(n,d\) and \(C_{\mu}\), so that the following holds. For each \(\omega\in\{1,\ldots,M\}\), one can find a system of \(\mathcal{D}(\omega)=\cup_{j\in\mathbb{Z}}\mathcal{D}_{j}(\omega)\) of "dyadic cubes" for \(E\) such that
\[\text{for all }j\in\mathbb{Z},\ E=\cup_{Q\in\mathcal{D}_{j}(\omega)}Q, \tag{4.4}\]
\[\text{if }R,R^{\prime}\in\mathcal{D}_{j}(\omega)\text{ and }R\neq R^{\prime},\text{ then }\mathcal{H}^{d}(R\cap R^{\prime})=0, \tag{4.5}\]
\[\text{for each }j\leq\ell\text{ and }Q\in\mathcal{D}_{\ell}(\omega),\text{ one has }Q=\cup_{R\subset Q,\,R\in\mathcal{D}_{j}(\omega)}R, \tag{4.6}\]
\[\text{each }Q\in\mathcal{D}_{j}(\omega)\text{ has a ``center'' }z_{Q}\text{ so that }B(z_{Q},\delta^{j}/5)\cap E\subset Q\subset B(z_{Q},3\delta^{j})\text{; consequently, }\mathcal{H}^{d}(Q)\simeq_{C_{\mu}}\delta^{jd}. \tag{4.7}\]
These first properties are essentially the same as the system \(\Delta\) from Section 2.1, but here the \(\{\mathcal{D}(\omega)\;:\;1\leq\omega\leq M\}\) also satisfy
\[\begin{array}{l}\text{for any }x\in E,\,r>0,\,\text{there is some choice of }1\leq\omega\leq M,\,j\in\mathbb{Z}\text{ with}\\ C_{\mathcal{D}}^{-1}r\leq\delta^{j}\leq C_{\mathcal{D}}r,\,\text{and }Q\in\mathcal{D}_{j}(\omega)\text{ so that }B(x,r)\cap E\subset Q.\end{array} \tag{4.8}\]
To be clear, the results from [14] apply to more general doubling metric spaces, but as explained in Lemma 2.2 and Remark 2.9 of [1], these results can be applied to Ahlfors regular sets to obtain the system above. We shall use the family of dyadic systems \(\{\mathcal{D}(\omega)\}_{\omega=1}^{M}\) in conjunction with the fixed dyadic system \(\Delta\) from Section 2.1 to prove the result.
Let \(Q_{0}\in\Delta\) be a dyadic cube of \(E\) from Section 2.1, and fix any cube \(R\in\Delta(Q_{0})\). By (4.8), there exists some \(1\leq\omega_{R}\leq M\), \(j_{R}\in\mathbb{Z}\) with \(\delta^{j_{R}}\simeq_{C_{\mathcal{D}}}\operatorname{diam}R\), and \(Q_{R}\in\mathcal{D}_{j_{R}}(\omega_{R})\) so that \(3B_{R}\cap E\subset Q_{R}\). Now since \(g\chi_{Q_{R}}\in L^{2}(\mu)\), we may write
\[(g-(g)_{Q_{R}})\chi_{Q_{R}}=\sum_{S\in\mathcal{D}(\omega_{R},Q_{R})}\Delta_{S}g \tag{4.9}\]
with convergence in \(L^{2}(\mu)\), where by definition, \(\mathcal{D}(\omega_{R},Q_{R})\) are the subcubes of \(Q_{R}\) (in \(\mathcal{D}(\omega_{R})\)),
\[\Delta_{S}g=\sum_{S^{\prime}\in\operatorname{child}(S)}((g)_{S^{\prime}}-(g)_ {S})\chi_{S^{\prime}},\]
and \((g)_{A}=\mu(A)^{-1}\int_{A}g\ d\mu\). Moreover, one has that
\[\|(g-(g)_{Q_{R}})\chi_{Q_{R}}\|_{L^{2}(\mu)}^{2}=\sum_{S\in\mathcal{D}(\omega _{R},Q_{R})}\|\Delta_{S}g\|_{L^{2}(\mu)}^{2},\]
since the terms on the right-hand side of (4.9) are pairwise orthogonal in \(L^{2}(\mu)\).
Now since \(\mathcal{O}_{\mu,g}(R)=\mathcal{O}_{\mu,g+a}(R)\) for any constant \(a\), we may assume that \((g)_{Q_{R}}=0\) in our estimate of \(\mathcal{O}_{\mu,g}(R)\). Fix \(f\in\Lambda(3B_{R})\), and note that since \(f\) is \(1\)-Lipschitz with \(\operatorname{spt}f\subset 3B_{R}\) (so that \(\operatorname{spt}f\cap E\subset Q_{R}\)),
\[\left|\int gf\ d\mu\right| \leq\sum_{S\in\mathcal{D}(\omega_{R},Q_{R})}\left|\int\Delta_{S} gf\ d\mu\right|\] \[=\sum_{S\in\mathcal{D}(\omega_{R},Q_{R})}\left|\int\Delta_{S}g(f- f(z_{S}))\ d\mu\right|\]
\[\leq C_{\mu}\sum_{S\in\mathcal{D}(\omega_{R},Q_{R})}(\operatorname{diam}S)^{1+d/ 2}\|\Delta_{S}g\|_{L^{2}(\mu)}.\]
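In the second line above we used that each \(\Delta_{S}g\) has vanishing \(\mu\)-integral, a routine martingale cancellation which we record for convenience:

\[\int\Delta_{S}g\ d\mu=\sum_{S^{\prime}\in\operatorname{child}(S)}\big((g)_{S^{\prime}}-(g)_{S}\big)\mu(S^{\prime})=\int_{S}g\ d\mu-(g)_{S}\,\mu(S)=0,\]

so that the constant \(f(z_{S})\) may be subtracted at no cost; here \(z_{S}\) denotes the center of \(S\).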
Since \(f\) was arbitrary, taking \(\lambda=0\) in (4.3) and recalling that \(\mu(R)\simeq_{C_{\mu}}(\operatorname{diam}R)^{d}\) we obtain for some constant \(C\) depending only on \(n,d\) and \(C_{\mu}\),
\[\mathcal{O}_{\mu,g}(R)^{2}\mu(R)\leq C\left(\sum_{S\in\mathcal{D}(\omega_{R},Q _{R})}\frac{(\operatorname{diam}S)^{1+d/2}}{(\operatorname{diam}R)}||\Delta_{ S}g||_{L^{2}(\mu)}\right)^{2}\frac{1}{\mu(R)}.\]
Using the fact that \(\operatorname{diam}Q_{R}\simeq_{C_{\mathcal{D}}}\operatorname{diam}R\), the Cauchy-Schwarz inequality, and the estimate \(\left(\sum_{S\in\mathcal{D}(\omega_{R},Q_{R})}\left(\frac{\operatorname{diam} S}{\operatorname{diam}R}\right)\mu(S)\right)\leq C\mu(Q_{R})\leq C\mu(R)\) give
\[\sum_{R\in\Delta(Q_{0})}\mathcal{O}_{\mu,g}(R)^{2}\mu(R) \leq C\sum_{R\in\Delta(Q_{0})}\left(\sum_{S\in\mathcal{D}(\omega_{ R},Q_{R})}\left(\frac{\operatorname{diam}S}{\operatorname{diam}R}\right)|| \Delta_{S}g||_{L^{2}(\mu)}^{2}\right)\] \[\times\left(\sum_{S\in\mathcal{D}(\omega_{R},Q_{R})}\left(\frac{ \operatorname{diam}S}{\operatorname{diam}R}\right)\mu(S)\right)\frac{1}{\mu(R)}\] \[\leq C\sum_{R\in\Delta(Q_{0})}\left(\sum_{S\in\mathcal{D}(\omega_ {R},Q_{R})}\left(\frac{\operatorname{diam}S}{\operatorname{diam}R}\right)|| \Delta_{S}g||_{L^{2}(\mu)}^{2}\right) \tag{4.10}\] \[\leq C\sum_{R\in\Delta(Q_{0})}\left(\sum_{S\in\mathcal{D}(\omega_ {R},Q_{R})}\left(\frac{\operatorname{diam}S}{\operatorname{diam}Q_{R}}\right) ||\Delta_{S}g||_{L^{2}(\mu)}^{2}\right).\]
Setting \(\mathcal{I}(\omega)=\{Q\in\mathcal{D}(\omega)\ :\ \ Q\subset C_{2}Q_{0}\}\) for some large \(C_{2}>1\) depending only on \(n,d,C_{\mu},C_{D}\) and \(C_{\mathcal{D}}\), we can crudely estimate (4.10) by
\[C\sum_{\omega=1}^{M}\sum_{Q\in\mathcal{I}(\omega)}\sum_{S\in\mathcal{D}(\omega,Q)}\left(\frac{\operatorname{diam}S}{\operatorname{diam}Q}\right)||\Delta_{ S}g||_{L^{2}(\mu)}^{2} \tag{4.11}\]
for some large constant \(C>1\) depending on the same parameters. This is because each term in the summand of (4.10) appears in (4.11), and each term in (4.11) is associated to only boundedly many terms of (4.10), by Ahlfors regularity and the fact that \(\operatorname{diam}Q_{R}\simeq_{C_{\mathcal{D}}}\operatorname{diam}R\).
Finally, switching the order of summation in (4.11) and using the fact that
\[\sum_{Q\in\mathcal{I}(\omega),Q>S}\left(\frac{\operatorname{diam}S}{ \operatorname{diam}Q}\right)\leq C,\]
we see that
\[\sum_{R\in\Delta(Q_{0})}\mathcal{O}_{\mu,g}(R)^{2}\mu(R)\leq C\sum_{\omega=1}^ {M}\sum_{Q\in\operatorname{top}(\mathcal{I}(\omega))}\sum_{S\in\mathcal{D}( \omega,Q)}||\Delta_{S}g||_{L^{2}(\mu)}^{2}\]
\[\leq C\sum_{\omega=1}^{M}\sum_{Q\in\operatorname{top}(\mathcal{I}( \omega))}\|(g-(g)_{Q})\chi_{Q}\|_{L^{2}(\mu)}^{2}\] \[\leq C\text{osc}(g)^{2}\mu(Q_{0}),\]
where \(\operatorname{top}(\mathcal{I}(\omega))\) are the maximal cubes in \(\mathcal{I}(\omega)\) whose boundaries intersect in sets of zero \(\mathcal{H}^{d}\) measure. This completes the proof, since the constants \(\delta,M\) and \(C_{\mathcal{D}}\) depend only on \(n,d\) and \(C_{\mu}\).
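As a quick sanity check (not needed in what follows): if \(g\) is \(\mu\)-essentially constant, say \(g\equiv c\), then taking \(\lambda=c\) in (4.3) shows

\[\mathcal{O}_{\mu,g}(Q)\leq(\operatorname{diam}Q)^{-d-1}\sup_{f\in\Lambda(3B_{Q})}\left|\int f(g-c)\ d\mu\right|=0\quad\text{for every }Q\in\Delta,\]

which is consistent with the right-hand side \(\operatorname{osc}g=0\) in Lemma 4.4.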
**Lemma 4.5**.: _Suppose that \(\mu\) is a \(d\)-Ahlfors regular measure with constant \((1+\delta)<2\). Moreover, assume that \(E=\operatorname{spt}\mu\) is \(d\)-rectifiable. Fix \(\Delta\), a system of dyadic cubes for \(E\) with constant \(C_{D}>0\). Then for the function \(g=d\mu/d\mathcal{H}^{d}|_{E}\), we have_
\[C_{0}^{-1}\left(-C_{0}\mathcal{O}_{\mu,g^{-1}}(Q)+\alpha_{ \mathcal{H}^{d}|_{E}}(Q)\right)\leq\alpha_{\mu}(Q)\leq C_{0}\left(\mathcal{O} _{\mathcal{H}^{d}|_{E},g}(Q)+\alpha_{\mathcal{H}^{d}|_{E}}(Q)\right),\]
_where \(C_{0}=C_{0}(n,d,C_{D})>0\) is independent of \(\mu\) and \(\delta\)._
Proof.: By virtue of Lemma 3.1, we know that the densities \(d\mathcal{H}^{d}|_{E}/d\mu\) and \(d\mu/d\mathcal{H}^{d}|_{E}\) exist and satisfy
\[(1+\delta)^{-1}\leq\frac{d\mathcal{H}^{d}|_{E}}{d\mu},\frac{d\mu }{d\mathcal{H}^{d}|_{E}}\leq(1+\delta)\]
\(\mu\)-almost everywhere. Set \(g\coloneqq d\mu/d\mathcal{H}^{d}|_{E}\). We readily compute for any dyadic cube \(Q\in\Delta\), any \(f\in\Lambda(3B_{Q})\), \(\lambda\in\mathbb{R}\) and \(\nu\) a \(d\)-flat measure,
\[\left|\int f(d\mu-d\nu)\right| =\left|\int f\left(gd\mathcal{H}^{d}|_{E}-d\nu\right)\right|\] \[\leq\left|\int f\left(g-\lambda\right)d\mathcal{H}^{d}|_{E}\right| +\left|\int f\left(\lambda d\mathcal{H}^{d}|_{E}-d\nu\right)\right|.\]
Taking the infimum over \(\lambda\in\mathbb{R}\) which minimizes the quantity \(\mathcal{O}_{\mathcal{H}^{d}|_{E},g}\) (which one readily sees must be in \([(1+\delta)^{-1},(1+\delta)]\), by the bounds on \(g\)), and then taking the infimum over flat measures in the definition of \(\alpha_{\mathcal{H}^{d}|_{E}}\), we obtain
\[\alpha_{\mu}(Q)\leq C\left(\mathcal{O}_{\mathcal{H}^{d}|_{E},g}(Q)+\alpha_{\mathcal{H}^{d}|_{E}}(Q)\right),\]
with constant \(C>0\) depending on \(n,d\) and \(C_{D}\). Reversing the roles of \(\mu\) and \(\mathcal{H}^{d}|_{E}\) gives the other inequality.
Putting together the previous Lemmas and Theorem 4.3, we obtain our small-constant Carleson measure estimate we shall use in proving Theorem 4.7.
**Corollary 4.6**.: _Suppose that \(\Gamma\) is a \(d\)-dimensional \(\delta\)-Lipschitz graph in \(\mathbb{R}^{n}\), and \(\mu\) is a \(d\)-Ahlfors regular measure with support \(\Gamma\) and with constant \((1+\delta)\). Fix \(\Delta\), a system of dyadic cubes for \(\Gamma\) with constant \(C_{D}\). Then one has the Carleson packing condition,_
\[\sup_{R_{0}\in\Delta}\mu(R_{0})^{-1}\sum_{Q\in\Delta(R_{0})}\alpha_{\mu}(Q)^{ 2}\mu(Q)\leq C_{0}\delta^{2}.\]
_Here \(C_{0}=C_{0}(n,d,C_{D})>0\) is independent of \(\mu\) and \(\delta\)._
Proof.: Set \(g:=d\mu/d\mathcal{H}^{d}|_{\Gamma}\). Lemma 3.1 implies that \(g,g^{-1}\leq(1+\delta)\). We combine the cube-wise inequality from Lemma 4.5 along with the Carleson packing conditions from Theorem 4.3 and Lemma 4.4 to see that for any \(R_{0}\in\Delta\),
\[\mu(R_{0})^{-1}\sum_{Q\in\Delta(R_{0})}\alpha_{\mu}(Q)^{2}\mu(Q) \leq C\mu(R_{0})^{-1}\left(\sum_{Q\in\Delta(R_{0})}\left(\mathcal{ O}_{\mathcal{H}^{d}|_{\Gamma},g}(Q)^{2}+\alpha_{\mathcal{H}^{d}|_{\Gamma}}(Q)^{2} \right)\mu(Q)\right)\] \[\leq C((\operatorname{osc}g)^{2}+\delta^{2})\] \[\leq C\left(\left((1+\delta)-(1+\delta)^{-1}\right)^{2}+\delta^ {2}\right)\leq C\delta^{2},\]
completing the proof of the Corollary.
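In the final step we used only the elementary bound (recorded here for completeness)

\[(1+\delta)-(1+\delta)^{-1}=\frac{(1+\delta)^{2}-1}{1+\delta}=\frac{2\delta+\delta^{2}}{1+\delta}\leq 3\delta\qquad\text{for }0<\delta<1.\]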
Finally, we transfer the \(\alpha\)-Carleson packing conditions from Lipschitz graphs to \(\delta\)-UR sets with the small-constant Corona decomposition, i.e., we prove (B) implies (C). The upper Ahlfors regularity bound in this implication essentially comes for free.
**Theorem 4.7**.: _There are constants \(C_{0}>1\) and \(\delta_{0},\theta_{0}\in(0,1)\) depending only on the dimensions \(n\) and \(d\) so that the following holds. Whenever \(0<\delta<\delta_{0}\), \(E\subset\mathbb{R}^{n}\) is \(d\)-Ahlfors regular and admits \(\delta\)-Corona decompositions, then for any measure \(\mu\) of the form \(d\mu(x)=g(x)d\mathcal{H}^{d}|_{E}(x)\) where \((1+\delta)^{-1}\leq g\leq(1+\delta)\), we have the Carleson condition_
\[\sup_{R_{0}\in\Delta}\mu(R_{0})^{-1}\sum_{Q\in\Delta(R_{0})}\alpha_{\mu}(Q)^{2 }\mu(Q)\leq C_{0}\delta^{\theta_{0}}. \tag{4.12}\]
_Moreover, \(E\) is upper \(d\)-Ahlfors regular with constant \(1+C_{0}\delta^{\theta_{0}}\)._
_Here \(\Delta\) is a fixed system of dyadic cubes for \(E\) with bounded constant \(C_{D}\leq C_{n,d}\) coming from the definition of \(\delta\)-Corona decompositions._
Proof.: Choose a system of dyadic cubes \(\Delta\) for \(E\) as in the definition of a \(\delta\)-Corona decomposition with constant \(C_{D}\leq C_{n,d}\). We begin the proof with a reduction. Notice that by condition (2.6) of \(\delta\)-Corona decompositions, \(E\) is \(\delta\)-UR as long as \(\delta_{0}\) is sufficiently small. By Lemma 3.2 then, we may assume that \(E\) is Ahlfors regular with constant \((1+C_{0}\delta^{1/d})\leq(1+C_{0}\delta_{0}^{1/d})\). Let \(\tilde{\mu}=\mathcal{H}^{d}|_{E}\). Whenever \(R_{0}\in\Delta\), denote by \(\mathcal{F}(R_{0})\) the partition of \(\Delta(R_{0})\) given by the \(\delta\)-Corona decomposition of \(E\) in Definition 2.1. Moreover, denote by \(S_{R_{0}}\in\mathcal{F}(R_{0})\) the subcollection of \(\Delta(R_{0})\) containing \(R_{0}\). We show that the conclusion of the Theorem follows from the estimate
\[\sup_{R_{0}\in\Delta}\tilde{\mu}(R_{0})^{-1}\sum_{Q\in S_{R_{0}}}\alpha_{ \tilde{\mu}}(Q)^{2}\tilde{\mu}(Q)\leq C_{0}\delta^{\theta_{0}}. \tag{4.13}\]
Indeed, assume that (4.13) holds, and let \(R_{0}\in\Delta\) be given. Choose \(\mathcal{F}(R_{0})\) as in the Definition of a \(\delta\)-Corona decomposition. Then
\[\tilde{\mu}(R_{0})^{-1}\sum_{Q\in\Delta(R_{0})}\alpha_{\tilde{\mu} }(Q)^{2}\tilde{\mu}(Q) =\tilde{\mu}(R_{0})^{-1}\sum_{Q\in S_{R_{0}}}\alpha_{\tilde{\mu}} (Q)^{2}\tilde{\mu}(Q)\] \[\qquad+\tilde{\mu}(R_{0})^{-1}\sum_{S\in\mathcal{F}(R_{0})\setminus \{S_{R_{0}}\}}\sum_{Q\in S}\alpha_{\tilde{\mu}}(Q)^{2}\tilde{\mu}(Q)\]
\[\leq C_{0}\delta^{\theta_{0}}+C\tilde{\mu}(R_{0})^{-1}\sum_{S\in\mathcal{F}(R_{0})\setminus\{S_{R_{0}}\}}\tilde{\mu}(Q(S))\] \[\leq C_{0}\delta^{\theta_{0}}+C\delta,\]
as long as \(\delta_{0}<1\), where \(C>0\) is some dimensional constant depending only on \(n,d\) here and in what follows. In the first inequality we used (4.13) for the sum over \(S_{R_{0}}\) and, for each remaining family \(S\), the bound \(\sum_{Q\in S}\alpha_{\tilde{\mu}}(Q)^{2}\tilde{\mu}(Q)\leq C\tilde{\mu}(Q(S))\), which follows for instance from Theorem 4.2 since \(E\) is uniformly rectifiable with constants depending only on \(n\) and \(d\); the second inequality is the packing estimate (2.9). Next, one argues just as in the proof of Corollary 4.6 to replace \(\tilde{\mu}\) with \(\mu\) to obtain
\[\mu(R_{0})^{-1}\sum_{Q\in\Delta(R_{0})}\alpha_{\mu}(Q)^{2}\mu(Q) \leq C\left(C_{0}\delta^{\theta_{0}}+\left(\operatorname{osc} \frac{d\mu}{d\tilde{\mu}}\right)^{2}\right)\] \[\leq CC_{0}\delta^{\theta_{0}}+C\delta^{2}.\]
This shows that it suffices to demonstrate (4.13), and thus from here on we fix \(R_{0}\in\Delta\), \(\mathcal{F}=\mathcal{F}(R_{0})\), and we may as well assume in our estimates that \(\mu=\tilde{\mu}=\mathcal{H}^{d}|_{E}\). Fix the top family \(S\equiv S_{R_{0}}\in\mathcal{F}\) to be the subcollection for which \(R_{0}\in S_{R_{0}}\), and choose an approximating \(\delta\)-Lipschitz graph \(\Gamma=\Gamma(S)\) as in (2.6) that also satisfies (2.8). Notice also that \(R_{0}=Q(S)\). We break the remainder of the proof into three steps.
**Step one**: We ascribe a density to \(\Gamma\cap 3B_{Q(S)}\). Recall that \(c_{Q(S)}\) is the center of \(Q(S)\), and that \(B_{Q(S)}=B(c_{Q(S)},\operatorname{diam}Q(S))\supset Q(S)\). The condition (2.6) on the proximity of \(\Gamma\) to \(E\) near \(\delta^{-1}B_{Q(S)}\) implies that there is \(c_{\Gamma,S}\in\Gamma\cap B_{Q(S)}\) so that \(\left|c_{Q(S)}-c_{\Gamma,S}\right|\leq\delta\operatorname{diam}Q(S)\).
Now we produce a system of dyadic cubes for \(\Gamma\) that come from true dyadic cubes in \(\mathbb{R}^{n}\) as follows. Note that up to rotation, we may assume that \(\Gamma\) is the graph of a Lipschitz function over \(\mathbb{R}^{d}\subset\mathbb{R}^{n}\). Let \(Q^{\Gamma}(S)\) be a (true) closed cube in \(\mathbb{R}^{n}\) with axis-parallel sides centered at \(c_{\Gamma,S}\) such that \(10B_{Q(S)}\subset Q^{\Gamma}(S)\subset 20\sqrt{n}B_{Q(S)}\), so that \(\operatorname{diam}Q^{\Gamma}(S)\simeq_{n,d}\operatorname{diam}Q(S)\). Denote by \(\tilde{Q}^{\Gamma}(S)\) the projection of \(Q^{\Gamma}(S)\) onto \(\mathbb{R}^{d}\), and notice that \(\tilde{Q}^{\Gamma}(S)\) is a true cube in \(\mathbb{R}^{d}\), since \(Q^{\Gamma}(S)\) has axis-parallel sides. Split \(\tilde{Q}^{\Gamma}(S)\) into \(2^{d}\) closed subcubes \(\tilde{Q}^{\Gamma}_{1,1},\ldots,\tilde{Q}^{\Gamma}_{1,2^{d}}\) of \(\tilde{Q}^{\Gamma}(S)\subset\mathbb{R}^{d}\) with disjoint interiors. Denote this collection of cubes in \(\mathbb{R}^{d}\) by \(\Delta^{\Gamma}_{1}(\tilde{Q}^{\Gamma}(S))\), which we call first generation (true) dyadic cubes in \(\mathbb{R}^{d}\) contained in \(\tilde{Q}^{\Gamma}(S)\). Then one generates the family \(\Delta^{\Gamma}_{j}(\tilde{Q}^{\Gamma}(S))\) from \(\Delta^{\Gamma}_{j-1}(\tilde{Q}^{\Gamma}(S))\) by splitting each cube in the previous generation into \(2^{d}\) more (true) closed cubes in \(\mathbb{R}^{d}\). With \(\Delta^{\Gamma}_{0}(\tilde{Q}^{\Gamma}(S))=\{\tilde{Q}^{\Gamma}(S)\}\), denote by
\[\Delta^{\Gamma}(\tilde{Q}^{\Gamma}(S))=\bigcup_{j\geq 0}\Delta^{\Gamma}_{j}( \tilde{Q}^{\Gamma}(S)).\]
We then lift the dyadic cubes in \(\Delta^{\Gamma}(\tilde{Q}^{\Gamma}(S))\) to closed cubes in \(\mathbb{R}^{n}\) centered on \(\Gamma\). That is, if \(\tilde{Q}^{\Gamma}\in\Delta^{\Gamma}_{j}(\tilde{Q}^{\Gamma}(S))\) with center \(c_{\tilde{Q}^{\Gamma}}\), then since \(\Gamma\) is a \(\delta\)-Lipschitz graph defined over \(\mathbb{R}^{d}\), there is a unique \(c_{Q^{\Gamma}}\in\Gamma\) so that \(\pi_{\mathbb{R}^{d}}(c_{Q^{\Gamma}})=c_{\tilde{Q}^{\Gamma}}\). Let \(Q^{\Gamma}\) be a closed cube in \(\mathbb{R}^{n}\) with axis-parallel sides, center equal to \(c_{Q^{\Gamma}}\), and side-length equal to that of \(\tilde{Q}^{\Gamma}\). Denote the collection of all such cubes generated this way by \(\Delta^{\Gamma}_{j}(Q^{\Gamma}(S))\), and set
\[\Delta^{\Gamma}\equiv\Delta^{\Gamma}(Q^{\Gamma}(S))=\bigcup_{j\geq 0}\Delta^{ \Gamma}_{j}(Q^{\Gamma}(S)).\]
Notice by construction we have the following facts about the dyadic cubes in \(\Delta^{\Gamma}(Q^{\Gamma}(S))\):
\[\begin{array}{l}\mbox{for each $j\geq 0$, the cubes in $\Delta^{\Gamma}_{j}(Q^{\Gamma}(S))$ have disjoint interiors. Moreover,}\\ \Gamma\cap Q^{\Gamma}(S)=\cup_{Q^{\Gamma}\in\Delta^{\Gamma}_{j}(Q^{\Gamma}(S)) }(\Gamma\cap\operatorname{Int}(Q^{\Gamma}))\cup F_{j},\mbox{where $F_{j}\subset\Gamma$ is an $\mathcal{H}^{d}$-null}\\ \mbox{set}.\end{array} \tag{4.14}\]
\[\begin{array}{l}\text{if }Q^{\Gamma}\in\Delta^{\Gamma}_{j}(Q^{\Gamma}(S))\text{ where }j\in\mathbb{N}\text{, then there is a unique }R^{\Gamma}\in\Delta^{\Gamma}_{j-1}(Q^{\Gamma}(S))\\ \text{so that }Q^{\Gamma}\subset R^{\Gamma},\end{array} \tag{4.15}\]
\[(1+C\delta)^{-1}\mathcal{H}^{d}(\pi_{\mathbb{R}^{d}}(Q^{\Gamma}))\leq\mathcal{H}^{d}(\Gamma\cap Q^{\Gamma})\leq(1+C\delta)\mathcal{H}^{d}(\pi_{\mathbb{R}^{d}}(Q^{\Gamma}))\quad\text{for each }Q^{\Gamma}\in\Delta^{\Gamma}. \tag{4.16}\]
Indeed, conditions (4.15) and (4.16) follow from the fact that \(\Gamma\) is a \(\delta\)-Lipschitz graph as long as \(\delta_{0}\) is chosen small enough.
Fix parameters \(M,\lambda>1\) to be determined. In the end, the choice of these parameters shall depend only on the underlying dimensions \(n\) and \(d\). It is easy to see that by \(d\)-Ahlfors regularity of \(E\), the set
\[\left\{R\in\Delta_{\operatorname{gen}R_{0}}\ :\ R\cap 20\sqrt{n}B_{Q(S)}\neq \emptyset\right\}\]
has boundedly many elements, \(R_{1},R_{2},\ldots,R_{k}\) with \(k\leq C\), again for some dimensional constant \(C\) depending only on \(n\) and \(d\). As per (2.8), we know that for each \(1\leq i\leq k\), there is a partition \(\mathcal{F}_{i}=\mathcal{F}(R_{i})\) of \(\Delta(R_{i})\) that satisfies the conditions of the \(\delta\)-Corona decomposition, (2.5)-(2.7). Moreover, such a partition can be chosen so that when \(S_{i}\in\mathcal{F}_{i}\) is the subcollection with \(R_{i}\in S_{i}\), we know that the conditions (2.5)-(2.7) are satisfied with the same Lipschitz graph \(\Gamma\) as the one chosen for \(S\).
We perform a stopping time argument on the cubes in \(\Delta^{\Gamma}\) to find some coherent collection \(S^{\Gamma}\subset\Delta^{\Gamma}\) of (true) cubes in \(\mathbb{R}^{n}\) for which \(\Gamma\) and \(E\) are sufficiently close. Let us say that a cube \(Q^{\Gamma}\in\Delta^{\Gamma}\) is 'far from \(S\)' (written FS) if the cube \(\lambda Q^{\Gamma}\) does not meet any \(Q\in S\cup S_{1}\cup\cdots\cup S_{k}\eqqcolon S^{*}\) satisfying
\[M^{-1}\mathrm{diam}\,Q^{\Gamma}\leq\mathrm{diam}\,Q\leq M\mathrm{diam}\,Q^{ \Gamma}. \tag{4.17}\]
Now we proceed generation by generation. If \(M,\lambda\) are chosen large enough and if \(\delta_{0}\) is sufficiently small (depending only on the underlying dimensions), then it is clear that the first few generations of cubes in \(\Delta^{\Gamma}\) are not FS. These first few generations of cubes are then set to be in the collection \(S^{\Gamma}\). We continue generation by generation in the dyadic system \(\Delta^{\Gamma}\), until we reach a cube \(Q^{\Gamma}\) which has a sibling (possibly itself) which is FS. At this stage, \(Q^{\Gamma}\) and all of its siblings become minimal cubes of \(S^{\Gamma}\), denoted \(m(S^{\Gamma})\), and no other subcubes from the parent of \(Q^{\Gamma}\) are added to \(S^{\Gamma}\). Notice that this gives that the collection \(S^{\Gamma}\) is coherent, and of course, its minimal cubes are disjoint and contained in \(Q^{\Gamma}(S)\). In addition, if \(Q^{\Gamma}\in m(S^{\Gamma})\), then the fact that its parent, \(R^{\Gamma}\), is not FS implies that \(\lambda R^{\Gamma}\) meets some element \(Q\in S^{*}\) with
\[M^{-1}\operatorname{diam}R^{\Gamma}\leq\operatorname{diam}Q\leq M\operatorname{diam}R^{\Gamma}.\]
It is clear that as long as \(\delta_{0}\) is taken sufficiently small (depending on \(M\) and \(\lambda\)), then the fact that \(\operatorname{diam}R^{\Gamma}\) and \(\operatorname{diam}Q\) are comparable and the fact that \(\lambda R^{\Gamma}\) meets \(Q\) implies that
\[\delta^{-1}B_{Q}\supset\delta_{0}^{-1}B_{Q}\supset R^{\Gamma}.\]
Then condition (2.6) implies that
\[\sup_{x\in(E\cup\Gamma)\cap R^{\Gamma}}\operatorname{dist}\left(x,E\right)+ \operatorname{dist}\left(x,\Gamma\right)\leq\delta\operatorname{diam}Q\leq \delta M\operatorname{diam}R^{\Gamma}. \tag{4.18}\]
Since \(Q^{\Gamma}\subset R^{\Gamma}\), of course this then means
\[\sup_{x\in(E\cup\Gamma)\cap Q^{\Gamma}}\operatorname{dist}\left(x,E\right)+ \operatorname{dist}\left(x,\Gamma\right)\leq 2\delta M\operatorname{diam}Q^{\Gamma}. \tag{4.19}\]
Now, we claim that for any cube \(Q^{\Gamma}\) satisfying (4.19) (and thus, for all cubes \(Q^{\Gamma}\in S^{\Gamma}\)), we have that
\[1-C_{1}(\delta M)^{\theta_{1}}\leq\frac{\mathcal{H}^{d}|_{E}(Q^{\Gamma})}{ \mathcal{H}^{d}|_{\Gamma}(Q^{\Gamma})}\leq 1+C_{1}(\delta M)^{\theta_{1}} \tag{4.20}\]
for constants \(C_{1}>1\) and \(\theta_{1}\in(0,1)\) depending only on the underlying dimensions \(n\) and \(d\), as long as \(\delta_{0}\) is small. The proof of this fact is a bit tedious, but the main ideas are that \(\Gamma\) is the graph of a \(\delta\)-Lipschitz function defined over \(\mathbb{R}^{d}\), and there is another \(\delta\)-Lipschitz graph \(\Gamma^{\prime}\) that gives a good (measure) approximation to \(E\) inside \(Q^{\Gamma}\) in the sense that
\[\mathcal{H}^{d}(Q^{\Gamma}\cap(E\Delta\Gamma^{\prime}))\leq C\delta( \operatorname{diam}Q^{\Gamma})^{d},\]
by virtue of (2.6). Then the fact that \(Q^{\Gamma}\) is centered on \(\Gamma\), and estimate (4.19) implies that \(\Gamma^{\prime}\) also passes near the center of \(Q^{\Gamma}\), and also can be written as a \((C_{1}M\delta)\)-Lipschitz graph over \(\mathbb{R}^{d}\). One concludes then by directly computing the surface measure of these graphs parametrized over \(\pi_{\mathbb{R}^{d}}(Q^{\Gamma})\).
Finally, we ascribe our density. Define the coefficients \(b_{Q^{\Gamma}}\coloneqq\mathcal{H}^{d}|_{E}(Q^{\Gamma})/\mathcal{H}^{d}|_{ \Gamma}(Q^{\Gamma})\), set
\[g(x)\coloneqq\begin{cases}b_{Q^{\Gamma}},&\text{when }x\in\Gamma\cap Q^{ \Gamma},\text{ and }Q^{\Gamma}\in m(S^{\Gamma})\\ 1&\text{otherwise},\end{cases}\]
and define \(d\gamma(x)\coloneqq g(x)d\mathcal{H}^{d}|_{\Gamma}(x)\). Notice that by (4.20), we have that \(1-C_{1}(M\delta)^{\theta_{1}}\leq g(x)\leq 1+C_{1}(M\delta)^{\theta_{1}}\) on \(\Gamma\), so that since \(\Gamma\) is a \(\delta\)-Lipschitz graph, we have that
\[\gamma\text{ is a $d$-Ahlfors regular measure with support $\Gamma$ and constant $1+C_{1}(M\delta)^{\theta_{1}}$}. \tag{4.21}\]
for some (larger) \(C_{1}>1\), depending only on \(n\) and \(d\). With our definition of our approximating measure in hand, we move to the main estimate.
**Step two**: We estimate the \(\alpha_{\mu}\) by the \(\alpha_{\gamma}\). First, notice that if \(x\in(E\Delta\Gamma)\cap Q^{\Gamma}(S)\), then \(x\) belongs to some minimal cube of \(S^{\Gamma}\) (since otherwise \(x\) is contained in arbitrarily small cubes of \(S^{\Gamma}\), and then (4.19) gives that \(x\in E\cap\Gamma\)). From this, one deduces that for any cube \(Q^{\Gamma}\in S^{\Gamma}\),
\[\left((E\Delta\Gamma)\cap Q^{\Gamma}\right)\subset\bigcup_{\begin{subarray}{c }R^{\Gamma}\in m(S^{\Gamma})\\ R^{\Gamma}\subset Q^{\Gamma}\end{subarray}}R^{\Gamma}. \tag{4.22}\]
Now, let us begin our estimate. Fix any cube \(Q\in S\) for \(E\). Notice \(3B_{Q}\subset 10B_{Q(S)}\subset Q^{\Gamma}(S)\), by choice of \(Q^{\Gamma}(S)\). Now since \(\mathcal{F}\) is coherent, any \(Q^{\Gamma}\in\Delta^{\Gamma}\) meeting \(3B_{Q}\) that also satisfies \(M\mathrm{diam}\,Q^{\Gamma}\geq\mathrm{diam}\,Q\) must be so that \(Q^{\Gamma}\in S^{\Gamma}\). Indeed let us argue this by contradiction and suppose that this is not true. Then there is some minimal cube \(R^{\Gamma}\in S^{\Gamma}\) containing \(Q^{\Gamma}\), and some sibling of \(R^{\Gamma}\), \((R^{\Gamma})^{\prime}\in\Delta^{\Gamma}\) which is FS. That is, \(\lambda(R^{\Gamma})^{\prime}\) does not meet any \(R\in S^{*}\) satisfying
\[M^{-1}\mathrm{diam}\,(R^{\Gamma})^{\prime}\leq\mathrm{diam}\,R\leq M\mathrm{ diam}\,(R^{\Gamma})^{\prime}.\]
However, notice that as long as \(\lambda\) is chosen large enough (depending on \(M\)), then the fact that \((R^{\Gamma})^{\prime}\) and \(R^{\Gamma}\) are siblings, \(R^{\Gamma}\) meets \(3B_{Q}\), and \(\mathrm{diam}\,Q\leq M\mathrm{diam}\,Q^{\Gamma}\leq M\mathrm{diam}\,(R^{ \Gamma})^{\prime}\) implies that \(\lambda(R^{\Gamma})^{\prime}\) meets \(Q\). Since \((R^{\Gamma})^{\prime}\) is FS, then \(Q\in S\) and \(\mathrm{diam}\,Q\leq M\mathrm{diam}\,(R^{\Gamma})^{\prime}\) imply \(\mathrm{diam}\,(R^{\Gamma})^{\prime}>M\mathrm{diam}\,Q\). However, since \(\mathcal{F}\) is coherent, and \(\lambda(R^{\Gamma})^{\prime}\) meets \(Q\), it is easy to see that there is some \(Q^{\prime}\supset Q\) with \(Q^{\prime}\in S\) so that
\[M^{-1}\mathrm{diam}\,(R^{\Gamma})^{\prime}\leq\mathrm{diam}\,Q^{\prime}\leq M \mathrm{diam}\,(R^{\Gamma})^{\prime},\]
which contradicts the fact that \((R^{\Gamma})^{\prime}\) is FS. Hence, we have proved that for any cube \(Q^{\Gamma}\in\Delta^{\Gamma}\),
\[Q\in S,3B_{Q}\cap Q^{\Gamma}\neq\emptyset,\text{ and }M\mathrm{diam}\,Q^{ \Gamma}\geq\mathrm{diam}\,Q\Rightarrow Q^{\Gamma}\in S^{\Gamma}. \tag{4.23}\]
From here, we see from (4.23) and the fact that \(\Delta^{\Gamma}\) partitions the space \(Q^{\Gamma}(S)\) that whenever \(Q\in S\),
\[3B_{Q}\text{ meets some }Q^{\Gamma}\in S^{\Gamma}\text{ satisfying }\mathrm{diam}\,Q\leq\mathrm{diam}\,Q^{\Gamma}\leq 2\mathrm{diam}\,Q. \tag{4.24}\]
For every \(Q\in S\), denote any such choice of a cube \(Q^{\Gamma}\) as in (4.24) by \(T(Q)\in S^{\Gamma}\). Remark that since \(\operatorname{diam}T(Q)\geq\operatorname{diam}Q\) for \(Q\in S\), we know that \(10T(Q)\supset 3B_{Q}\).
For each cube \(Q\in S\), choose a flat measure \(\nu_{Q}\) minimizing the quantity
\[\tilde{\alpha}_{\gamma}(T(Q))\coloneqq(\mathrm{diam}\,T(Q))^{-d-1}\inf_{\nu \in\mathrm{Flat}(n,d)}\mathcal{D}_{c_{T(Q)},10\mathrm{diam}\,T(Q)}(\gamma,\nu),\]
where \(\mathrm{Flat}(n,d)\) is the space of flat measures as in Definition 1.5 and \(\mathcal{D}_{x,r}\) is as in Definition 1.4. It follows by definitions that whenever \(f\in\Lambda(3B_{Q})\), then
\[(\mathrm{diam}\,Q)^{-d-1}\left|\int f(d\mu-d\nu_{Q})\right| \leq(\mathrm{diam}\,Q)^{-d-1}\left(\left|\int f(d\mu-d\gamma) \right|+\left|\int f(d\gamma-d\nu_{Q})\right|\right) \tag{4.25}\] \[\leq(\mathrm{diam}\,Q)^{-d-1}\left|\int f(d\mu-d\gamma)\right|+C \tilde{\alpha}_{\gamma}(T(Q)).\]
To estimate the first term above, we notice that by (4.22), we have
\[\left|\int f(d\mu-d\gamma)\right| \leq\sum_{\begin{subarray}{c}Q^{\Gamma}\in m(S^{\Gamma})\\ Q^{\Gamma}\cap 3B_{Q}\neq\emptyset\end{subarray}}\left|\int_{Q^{\Gamma}}f(d\mu-d \gamma)\right|\] \[\leq\sum_{\begin{subarray}{c}Q^{\Gamma}\in m(S^{\Gamma})\\ Q^{\Gamma}\cap 3B_{Q}\neq\emptyset\end{subarray}}\left|\int_{Q^{\Gamma}}f-f(c_{Q^{ \Gamma}})d\mu\right|+\left|\int_{Q^{\Gamma}}f-f(c_{Q^{\Gamma}})d\gamma\right|\]
\[\leq C\sum_{\begin{subarray}{c}Q^{\Gamma}\in m(S^{\Gamma})\\ Q^{\Gamma}\cap 3B_{Q}\neq\emptyset\end{subarray}}(\operatorname{diam}Q^{\Gamma})\mu(Q ^{\Gamma}),\]
since \(\mu(Q^{\Gamma})=\gamma(Q^{\Gamma})\) for \(Q^{\Gamma}\in m(S^{\Gamma})\) (by construction), and since \(f\) is \(1\)-Lipschitz. Combining the above with (4.25) then gives
\[\sum_{Q\in S}\alpha_{\mu}(Q)^{2}\mu(Q)\] \[\leq C\sum_{Q\in S}\tilde{\alpha}_{\gamma}(T(Q))^{2}\mu(Q)+C\sum_{Q\in S}\mu(Q)(\operatorname{diam}Q)^{-2d-2}\Big(\sum_{\begin{subarray}{c}Q^{\Gamma}\in m(S^{\Gamma}),\\ Q^{\Gamma}\cap 3B_{Q}\neq\emptyset\end{subarray}}(\operatorname{diam}Q^{\Gamma})\mu(Q^{\Gamma})\Big)^{2}\] \[\eqqcolon(\operatorname{I})+(\operatorname{II}).\]
To estimate (I), we remark that the diameter estimate on \(T(Q)\), (4.24), gives us that for some dimensional constant \(C>0\), and uniformly over all \(R^{\Gamma}\in S^{\Gamma}\),
\[\#\left\{Q\in S\;:\;T(Q)=R^{\Gamma}\right\}\leq C. \tag{4.26}\]
Hence we readily see,
\[\sum_{Q\in S}\tilde{\alpha}_{\gamma}(T(Q))^{2}\mu(Q) \leq C\sum_{R^{\Gamma}\in\Delta^{\Gamma}(Q^{\Gamma}(S))}\tilde{ \alpha}_{\gamma}(R^{\Gamma})^{2}\gamma(R^{\Gamma})\] \[\leq C(M\delta)^{\theta_{1}}\gamma(Q^{\Gamma}(S))\leq C(M\delta) ^{\theta_{1}}\mu(Q(S)).\]
To be clear, the second to last inequality above follows from the estimate on the \(d\)-Ahlfors regularity constant of \(\gamma\), (4.21), and the fact that \(\Gamma\) is a \(\delta\)-Lipschitz graph. Along with the fact that the cubes \(Q^{\Gamma}\cap\Gamma\) for \(Q^{\Gamma}\in\Delta^{\Gamma}\) serve as a system of dyadic cubes for \(\Gamma\), the estimate then follows from Corollary (4.6). This gives our desired estimate on (I).
We now move onto (II). Define the collection of cubes \(\mathcal{I}(Q)\subset m(S^{\Gamma})\) for \(Q\in S\) by
\[\mathcal{I}(Q)\coloneqq\{Q^{\Gamma}\in m(S^{\Gamma})\;:\;Q^{\Gamma}\cap 3B_{Q}\neq\emptyset,\ Q^{\Gamma}\subset R^{\Gamma}\in S^{\Gamma}\text{ for some }R^{\Gamma}\text{ satisfying (4.24)}\}. \tag{4.27}\]
We have the following string of inequalities:
\[\text{(II)} =\sum_{Q\in S}\mu(Q)(\operatorname{diam}Q)^{-2d}\left(\sum_{\begin{subarray} {c}Q^{\Gamma}\in m(S^{\Gamma})\\ Q^{\Gamma}\cap 3B_{Q}\neq\emptyset\end{subarray}}\left(\frac{\operatorname{diam}Q^{ \Gamma}}{\operatorname{diam}Q}\right)\mu(Q^{\Gamma})\right)^{2}\] \[\leq C\sum_{Q\in S}(\operatorname{diam}Q)^{-d}\left(\sum_{Q^{ \Gamma}\in\mathcal{I}(Q)}\left(\frac{\operatorname{diam}Q^{\Gamma}}{ \operatorname{diam}Q}\right)\mu(Q^{\Gamma})\right)^{2}\] \[\leq C\sum_{Q\in S}\left(\sum_{Q^{\Gamma}\in\mathcal{I}(Q)}\mu(Q ^{\Gamma})\left(\frac{\operatorname{diam}Q^{\Gamma}}{\operatorname{diam}T(Q)} \right)^{2}\right)(\operatorname{diam}Q)^{-d}\left(\sum_{Q^{\Gamma}\in \mathcal{I}(Q)}\mu(Q^{\Gamma})\right)\] \[\leq C\sum_{Q\in S}\left(\sum_{Q^{\Gamma}\in\mathcal{I}(Q)}\mu(Q ^{\Gamma})\left(\frac{\operatorname{diam}Q^{\Gamma}}{\operatorname{diam}T(Q) }\right)^{2}\right)\] \[\leq C\sum_{R^{\Gamma}\in S^{\Gamma}}\sum_{\begin{subarray}{c}Q^ {\Gamma}\in m(S^{\Gamma})\,:\,Q^{\Gamma}\subset R^{\Gamma}\\ Q^{\Gamma}\text{ meets some }3B_{Q}\text{ where }Q\in S\end{subarray}}\mu(Q^{\Gamma}) \left(\frac{\operatorname{diam}Q^{\Gamma}}{\operatorname{diam}R^{\Gamma}} \right)^{2}\] \[\leq C\sum_{Q^{\Gamma}\in m(S^{\Gamma})}\mu(Q^{\Gamma})\sum_{ \begin{subarray}{c}R^{\Gamma}\in S^{\Gamma}\\ R^{\Gamma}\supset Q^{\Gamma}\end{subarray}}\left(\frac{\operatorname{diam}Q^ {\Gamma}}{\operatorname{diam}R^{\Gamma}}\right)^{2}\leq C\sum_{Q^{\Gamma}\in m (S^{\Gamma})}\mu(Q^{\Gamma}).\]
In the above, the first inequality holds simply because we sum the same cubes, by (4.27), and since \(\mu(Q)\simeq_{n,d}(\operatorname{diam}Q)^{d}\). The second holds by Cauchy-Schwarz and since \(\operatorname{diam}Q\simeq_{n,d}\operatorname{diam}T(Q)\), and the third because the cubes in \(\mathcal{I}(Q)\) are disjoint minimal cubes of \(S^{\Gamma}\) contained in \(10B_{Q}\). The fourth follows from (4.24) and (4.26), so that to each \(Q\) there corresponds an \(R^{\Gamma}\in S^{\Gamma}\) for which the term appears, and each such \(R^{\Gamma}\) corresponds to only finitely many such \(Q\in S\). The fifth is just a switching of the order of summation, and the sixth is because the remaining inner series is a geometric series.
In view of our estimate on (I), our last step of the proof is to verify that for some \(\theta>0\) (depending only on \(n\) and \(d\)),
\[\sum_{Q^{\Gamma}\in m(S^{\Gamma})}\mu(Q^{\Gamma})\leq C\delta^{\theta}\mu(Q(S)). \tag{4.28}\]
In fact, we shall see that we may take \(\theta=1\).
**Step three**: we prove (4.28). Let \(Q^{\Gamma}\in m(S^{\Gamma})\). By definition, there is some \(R^{\Gamma}\in S^{\Gamma}\), a sibling of \(Q^{\Gamma}\) that is FS. Recall that \(R^{\Gamma}\subset Q^{\Gamma}(S)\subset 20\sqrt{n}B_{Q(S)}\), and thus (4.19) implies that \(R^{\Gamma}\) must meet one of the \(R_{0},R_{1},\dots,R_{k}\), say \(R_{j}\) as long as \(\delta_{0}\) is small. For convenience write \(S_{0}=S=S(R_{0})\). Recall that
\[\operatorname{diam}R^{\Gamma}\leq\operatorname{diam}Q^{\Gamma}(S)\leq C \operatorname{diam}Q(S)\leq C\operatorname{diam}R_{j},\]
for some dimensional constant \(C>0\). By taking \(M\) large enough, we see that it must be the case that \(\operatorname{diam}R_{j}>M\operatorname{diam}R^{\Gamma}\), since otherwise \(R^{\Gamma}\) is not FS. Now we repeat this argument on the children of \(R_{j}\). Because \(R^{\Gamma}\) meets \(R_{j}\), we see that there is some \(R_{j}^{1}\subset R_{j}\) a child of \(R_{j}\) that meets \(R^{\Gamma}\). Since \(\operatorname{diam}R_{j}^{1}\geq C_{D}^{-2}\operatorname{diam}R_{j}\), we know that \(\operatorname{diam}R_{j}^{1}\geq MC_{D}^{-2}\operatorname{diam}R^{\Gamma}\), and thus (again taking \(M\) large enough, depending only on dimensional constants), we see that either \(R_{j}^{1}\not\in S_{j}\), or, \(R_{j}^{1}\in S_{j}\) and in fact the stronger inequality \(\operatorname{diam}R_{j}^{1}>M\operatorname{diam}R^{\Gamma}\) holds, since \(R^{\Gamma}\) is FS. We continue this argument finitely many times until we reach a child \(R_{j}^{\ell}\subset R_{j},R_{j}^{\ell}\in m(S_{j})\), which meets \(R^{\Gamma}\) and satisfies \(\operatorname{diam}R_{j}^{\ell}>M\operatorname{diam}R^{\Gamma}\). In particular, since \(Q^{\Gamma}\) is a sibling of \(R^{\Gamma}\), and since \(\operatorname{diam}R^{\Gamma}<M^{-1}\operatorname{diam}R_{j}^{\ell}\), we readily see that \(Q^{\Gamma}\subset 3B_{R_{j}^{\ell}}\). This shows that
\[\bigcup_{Q^{\Gamma}\in m(S^{\Gamma})}Q^{\Gamma}\subset\bigcup_{j=0}^{k} \bigcup_{Q\in m(S_{j})}3B_{Q}. \tag{4.29}\]
Using (4.29) and the fact that the \(Q^{\Gamma}\in m(S^{\Gamma})\) are disjoint, we then readily estimate
\[\sum_{Q^{\Gamma}\in m(S^{\Gamma})}\mu(Q^{\Gamma}) =\mu\left(\bigcup_{Q^{\Gamma}\in m(S^{\Gamma})}Q^{\Gamma}\right) \leq\mu\left(\bigcup_{j=0}^{k}\bigcup_{Q\in m(S_{j})}3B_{Q}\right)\] \[\leq\sum_{j=0}^{k}\sum_{Q\in m(S_{j})}\mu(3B_{Q})\leq C\sum_{j=0}^{k}\sum_{Q\in m(S_{j})}\mu(Q)\] \[\leq C\sum_{j=0}^{k}\sum_{S\in\mathcal{F}(R_{j}),S\neq S_{j}}\mu(Q(S))\leq C\delta\,\mu(Q(S)),\]
which gives (4.28) with \(\theta=1\).
that \(E\) has approximate \(d\)-planes \(T(x)\) for \(\mathcal{H}^{d}\)-almost all \(x\in E\) by Theorem 5.1. Define
\[\gamma(E)\coloneqq\sup_{x\in E,r>0}\inf_{V\in G(n,d)}\left(\fint_{B(x,r)}\left\|\pi_{T(y)}-\pi_{V}\right\|\ d\mathcal{H}^{d}|_{E}(y)+\sup_{y\in B(x,r)\cap E}\frac{|\pi_{V^{\perp}}(y-x)|}{r}\right).\]
Of course our intent is to estimate this \(\gamma(E)\) when \(E\) is Reifenberg flat and \(d\)-Ahlfors regular, and then use arguments from [1] to construct Lipschitz graph approximations to \(E\) when \(\gamma(E)\) is small in the next Section. With the language above, we can formulate our Theorem.
**Theorem 5.2**.: _There are constants \(C_{0},\delta_{0}>0\) depending only on \(n\) and \(d\) so that whenever \(\delta\in(0,\delta_{0})\), \(E\subset\mathbb{R}^{n}\) is \(d\)-Ahlfors upper regular with constant \((1+\delta)\) and \(\delta\)-Reifenberg flat, then \(E\) is \(d\)-rectifiable, and moreover, \(\gamma(E)\leq C_{0}\delta^{1/2}\). In addition, \(E\) is lower \(d\)-Ahlfors regular with constant \((1+C_{0}\delta)\)._
Proof.: Suppose that the hypotheses of the Theorem hold. As long as \(\delta_{0}>0\) is chosen sufficiently small (depending only on \(n\) and \(d\)), then Theorem 15.2 in [1] implies that \(E\) satisfies the "big pieces of Lipschitz graphs" property, and thus is \(d\)-uniformly rectifiable (see the main Theorem in [1], or Theorem 1.57 in [1]). Here we are using the fact that if \(\delta_{0}\) is small enough, then the Reifenberg flatness of \(E\) guarantees that \(E\) is lower \(d\)-Ahlfors regular with some bounded constant (this is made more precise in the following paragraph). In particular, \(E\) is \(d\)-rectifiable, and thus \(T(x)\) exists for \(\mathcal{H}^{d}\)-almost all \(x\in E\). Denote the set of all such \(x\) by \(E^{\prime}\).
Fix \(x\in E,r>0\), and denote by \(P\in A(n,d)\) some choice of a \(d\)-plane so that \(d_{x,2r}(E,P)=b\beta_{\infty,E}(x,2r)\leq\delta\). Notice that by translation invariance of the hypotheses and conclusion of the Theorem, we may as well assume that \(P\in G(n,d)\). Recall that \(d_{x,r}\) is the normalized local Hausdorff distance, so that by definition,
\[\sup_{y\in E\cap\overline{B}(x,2r)}\operatorname{dist}\left(y,P\right)+\sup_ {y\in P\cap\overline{B}(x,2r)}\operatorname{dist}\left(y,E\right)\leq 2\delta r.\]
Choose some point \(p\in P\) so that \(|x-p|\leq 2\delta r\), and thus we have that \(B(p,(1-2\delta)r)\subset B(x,r)\). By Reifenberg's topological disk Theorem, we have that if \(\delta_{0}\) is chosen sufficiently small depending only on \(n\) and \(d\), then necessarily \(E\) is a \(C^{\alpha}\)-topological, \(d\)-disk [10] (see also Section 3 in [1] or [12] for other proofs). In particular, since \(E\) is very well approximated by \(P\) in \(B(x,2r)\), one can argue by contradiction to show that for \(y\in P\cap B(p,r)\), there is some \(x_{y}\in E\cap B(p,(3/2)r)\) so that \(\pi_{P}(x_{y})=y\), provided that \(\delta_{0}\) is sufficiently small. The argument is essentially contained in Lemma 8.3 of [1], so we omit the details. Of course, this projection statement also gives that \(E\) is lower \(d\)-Ahlfors regular as follows.
Notice that in fact, the above implies that to each \(y\in P\cap B(p,(1-4\delta)r)\), there is some \(x_{y}\in E\cap B(p,(1-2\delta)r)\) for which \(\pi_{P}(x_{y})=y\). Indeed, just choose the \(x_{y}\) from above, so that \(x_{y}\in E\cap B(p,(3/2)r)\). Then we estimate
\[|x_{y}-p| \leq|x_{y}-y|+|y-p|\] \[=|x_{y}-\pi_{P}(x_{y})|+|y-p|\] \[=\operatorname{dist}\left(x_{y},P\right)+|y-p|\] \[<2\delta r+(1-4\delta)r\]
\[=(1-2\delta)r,\]
so necessarily \(x_{y}\in B(p,(1-2\delta)r)\). In particular, since \(\pi_{P}\) is \(1\)-Lipschitz, we obtain the lower Ahlfors regularity of \(E\):
\[\mathcal{H}^{d}(E\cap B(x,r)) \geq\mathcal{H}^{d}(\pi_{P}(E\cap B(x,r)))\] \[\geq\mathcal{H}^{d}|_{P}(B(p,(1-4\delta)r))\] \[=(1-4\delta)^{d}r^{d}\] \[\geq(1+C_{0}\delta)^{-1}r^{d},\]
as long as \(\delta_{0}\) is small enough and \(C_{0}>1\) is large enough.
We are now ready to begin our estimate on the \(\pi_{T(x)}\). Fix \(\varepsilon>\delta\) to be determined, and set
\[F\coloneqq\{x\in E^{\prime}\cap B(p,(1-2\delta)r)\;:\;\big{\|}\pi_{T(x)}-\pi_{P }\big{\|}>\varepsilon\},\,\tilde{F}\coloneqq\pi_{P}(F).\]
Define \(\nu\coloneqq(\pi_{P})_{\#}\mathcal{H}^{d}|_{E}\) to be the push-forward measure of \(\mathcal{H}^{d}|_{E}\) through the map \(\pi_{P}\). We claim that
\[\liminf_{s\downarrow 0}\frac{\nu(B(y,s))}{\mathcal{H}^{d}|_{P}(B(y,s))}\geq(1- \varepsilon)^{-1} \tag{5.1}\]
for each \(y\in\tilde{F}\). Let us show this now.
Let \(y\in\tilde{F}\) so that there is some \(x_{y}\in E^{\prime}\cap B(p,(1-2\delta)r)\) with \(\pi_{P}(x_{y})=y\) and \(\big{\|}\pi_{T(x_{y})}-\pi_{P}\big{\|}>\varepsilon\). Choose a unit vector \(e_{y}\in\mathbb{R}^{n}\) for which \(e_{y}\in T(x_{y})\) but \(\|\pi_{P}(e_{y})\|<1-\varepsilon\). We assume as well that \(\pi_{P}(e_{y})\neq 0\), but the argument when \(\pi_{P}(e_{y})=0\) is similar, and in fact, one can show in this case that the left-hand side of (5.1) is \(+\infty\). Choose unit vectors \(e_{1},\dots,e_{d-1}\in\mathbb{R}^{n}\) so that \(\{\pi_{P}(e_{y})/\left|\pi_{P}(e_{y})\right|,e_{1},\dots,e_{d-1}\}\) is an orthonormal basis for \(P\subset\mathbb{R}^{n}\). If we consider the set
\[H_{s}(x_{y})\coloneqq\left\{x_{y}+\left(\beta_{0}e_{y}+\sum_{i=1}^{d-1}\beta_ {i}e_{i}\right)+v\;:\;v\in P^{\perp},(1-\varepsilon)^{2}\beta_{0}^{2}+\sum_{ i=1}^{d-1}\beta_{i}^{2}<s^{2}\right\},\]
for \(s\ll r\), then a direct computation of \(|y-\pi_{P}(z)|=|\pi_{P}(x_{y}-z)|\) for \(z\in H_{s}(x_{y})\) using \(\|\pi_{P}(e_{y})\|<1-\varepsilon\) gives that \(\pi_{P}(H_{s}(x_{y}))\subset P\cap B(y,s)\). In particular, we obtain
\[\nu(B(y,s)) =\mathcal{H}^{d}|_{E}(\pi_{P}^{-1}(B(y,s)\cap P))\] \[\geq\mathcal{H}^{d}|_{E}(H_{s}(x_{y})),\]
so that
\[\frac{\nu(B(y,s))}{\mathcal{H}^{d}|_{P}(B(y,s))} =\frac{\nu(B(y,s))}{s^{d}}\] \[\geq\frac{\mathcal{H}^{d}|_{E}(H_{s}(x_{y}))}{s^{d}} \tag{5.2}\] \[=\frac{\mathcal{H}^{d}|_{E}(sH+x_{y})}{s^{d}},\]
where \(sH+x_{y}\equiv H_{s}(x_{y})\) defines the set \(H\). However, it is clear that \(H\) contains a \(d\)-ellipsoid \(A\subset T(x_{y})\) of the form
\[A\coloneqq\{\beta_{0}e_{y}+\sum_{i=1}^{d-1}\beta_{i}e_{i}^{\prime}\;:\;(1- \varepsilon)^{2}\beta_{0}^{2}+\sum_{i=1}^{d-1}\beta_{i}^{2}<1\},\]
where \(\{e_{y},e_{1}^{\prime},\ldots,e_{d-1}^{\prime}\}\) is some orthonormal basis for \(T(x_{y})\). Thus, in view of Theorem 5.1, (5.2), and the fact that \(\mathcal{H}^{d}|_{T(x_{y})}(\partial A)=0\), we obtain that
\[\liminf_{s\downarrow 0}\frac{\nu(B(y,s))}{\mathcal{H}^{d}|_{P}(B(y, s))} \geq\lim_{s\downarrow 0}\frac{\mathcal{H}^{d}|_{E}(sH+x_{y})}{s^{d}}\] \[=\lim_{s\downarrow 0}s^{-d}(\Phi_{x_{y},s})_{\#}\mathcal{H}^{d}|_{E }\left(H\right)\] \[=\mathcal{H}^{d}|_{T(x_{y})}(A)\] \[=(1-\varepsilon)^{-1},\]
proving (5.1). Here we have used the fact that if \(\mu_{n}\rightharpoonup\mu\) and \(B\subset\mathbb{R}^{n}\) is Borel and bounded with \(\mu(\partial B)=0\), then \(\mu_{n}(B)\to\mu(B)\) (see for example, Proposition 4.26 in [12]). We are also using our convention that \(\mathcal{H}^{d}\) is normalized so that if \(B\) is a ball of radius \(r\) centered on \(T(x_{y})\), then \(\mathcal{H}^{d}|_{T(x_{y})}(B)=r^{d}\), which gives the value of \(\mathcal{H}^{d}|_{T(x_{y})}(A)\) above.
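For concreteness, under this normalization the value used above can be computed directly: \(A\) is a \(d\)-dimensional ellipsoid in \(T(x_{y})\) with one semi-axis of length \((1-\varepsilon)^{-1}\) (in the direction \(e_{y}\)) and \(d-1\) semi-axes of length \(1\), so that

\[\mathcal{H}^{d}|_{T(x_{y})}(A)=\frac{1}{1-\varepsilon}\cdot 1^{\,d-1}=(1-\varepsilon)^{-1}.\]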
Finally, we get an estimate on the size of \(\mathcal{H}^{d}|_{E}(F)\). First, Theorem 2.12 in [13] implies that
\[\mathcal{H}^{d}|_{E}(F) =\nu(\tilde{F})\] \[\geq\int_{\tilde{F}}\frac{d\nu}{d\mathcal{H}^{d}|_{P}}(y)\;d \mathcal{H}^{d}|_{P}(y)\] \[\geq\int_{\tilde{F}}\liminf_{r\downarrow 0}\frac{\nu(B(y,r))}{ \mathcal{H}^{d}|_{P}(B(y,r))}\,d\mathcal{H}^{d}|_{P}(y) \tag{5.3}\] \[\geq(1-\varepsilon)^{-1}\mathcal{H}^{d}|_{P}(\tilde{F}),\]
by courtesy of (5.1) (in particular, we do not need \(\nu\ll\mathcal{H}^{d}|_{P}\) to get the first inequality above). Denote \(G=E\cap B(p,(1-2\delta)r)\setminus F\). Recalling that \(\pi_{P}(E\cap B(p,(1-2\delta)r))\supset P\cap B(p,(1-4\delta)r)\) and the fact that \(\mathcal{H}^{d}|_{E}(B(p,(1-2\delta)r)\setminus(F\cup G))=0\), we see
\[\mathcal{H}^{d}|_{P}(B(p,(1-4\delta)r))\leq\mathcal{H}^{d}(\pi_{P}(G))+ \mathcal{H}^{d}(\pi_{P}(F)).\]
The inequality (5.3), the fact that \(\mathcal{H}^{d}|_{E}\) is upper \(d\)-Ahlfors regular with constant \((1+\delta)\), and the fact that \(\pi_{P}\) is \(1\)-Lipschitz gives
\[(1-4\delta)^{d}r^{d} =\mathcal{H}^{d}|_{P}(B(p,(1-4\delta)r)))\] \[\leq\mathcal{H}^{d}(\pi_{P}(G))+\mathcal{H}^{d}|_{P}(\tilde{F})\] \[\leq\mathcal{H}^{d}|_{E}(G)+(1-\varepsilon)\mathcal{H}^{d}|_{E}(F)\] \[=\mathcal{H}^{d}|_{E}(B(p,(1-2\delta)r))-\varepsilon\mathcal{H}^ {d}|_{E}(F)\] \[\leq(1+\delta)(1-2\delta)^{d}r^{d}-\varepsilon\mathcal{H}^{d}|_{E }(F).\]
Rearranging for \(\mathcal{H}^{d}|_{E}(F)\) yields that
\[\mathcal{H}^{d}|_{E}(F) \leq\varepsilon^{-1}\left((1+\delta)-(1-4\delta)^{d}\right)r^{d}\] \[\leq C\delta\varepsilon^{-1}r^{d},\]
where \(C\) is some constant depending only on \(d\). Thus since \(B(x,(1-4\delta)r)\subset B(p,(1-2\delta)r)\),
\[\int_{B(x,(1-4\delta)r)}\left\|\pi_{T(x)}-\pi_{P}\right\|\ d \mathcal{H}^{d}|_{E} \leq\int_{G}\left\|\pi_{T(x)}-\pi_{P}\right\|\ d\mathcal{H}^{d}|_{E} +\int_{F}\left\|\pi_{T(x)}-\pi_{P}\right\|\ d\mathcal{H}^{d}|_{E}\] \[\leq\varepsilon\mathcal{H}^{d}|_{E}(B(x,r))+2\mathcal{H}^{d}(F)\] \[\leq C(\varepsilon+\delta\varepsilon^{-1})r^{d}\] \[\leq C\delta^{1/2}r^{d}\]
by taking \(\varepsilon=\delta^{1/2}\).
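The exponent \(1/2\) simply comes from balancing the two error terms: by the AM-GM inequality,

\[\varepsilon+\delta\varepsilon^{-1}\geq 2\sqrt{\delta},\]

with equality precisely when \(\varepsilon=\delta^{1/2}\), which is why this choice of \(\varepsilon\) is made.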
Now we take on the second term of \(\gamma(E)\); this estimate comes directly by choice of \(P\). Notice that for any \(y\in B(x,(1-4\delta)r)\cap E\) we have that
\[|\pi_{P^{\perp}}(y-x)| =|y-\pi_{P}(y)-x+\pi_{P}(x)|\] \[\leq|y-\pi_{P}(y)|+|x-\pi_{P}(x)|\] \[\leq 2\delta r,\]
by choice of \(P\). Thus, altogether we've shown that
\[\int_{B(x,(1-4\delta)r)}\left\|\pi_{T(x)}-\pi_{P}\right\|\ d\mathcal{H}^{d}|_ {E}+\sup_{y\in E\cap B(x,(1-4\delta)r)}\frac{|\pi_{P^{\perp}}(y-x)|}{r}\leq C \left(\delta^{1/2}+C\delta\right)r^{d},\]
from which we readily see that \(\gamma(E)\leq C\delta^{1/2}\).
## 6. \(\gamma(E)\) small implies good Lipschitz approximations to \(E\)
In this last section, we briefly detail how the work of [1] demonstrates that (F) gives (A). The argument involving the Hardy-Littlewood Maximal function goes back to the co-dimension \(1\) case in [12]. Under different assumptions on a domain \(\Omega\) when \(\partial\Omega\) is \((n-1)\)-Ahlfors regular, there is also a proof in [14]. Our goal here is to point the reader to the fact that for these particular arguments in [1], one does not need \(E\) to be a \(C^{1}\)\(d\)-dimensional chord-arc submanifold, as long as we instead assume Ahlfors regularity of \(E\). Since all of the arguments exist in this work, we only enumerate the various steps in the proof. Let us state precisely the Theorem that can be obtained.
**Theorem 6.1** (Theorem 3.1 in [1]).: _There are constants \(\delta_{0},C_{0}>0\) depending only on \(n\), \(d\), and \(C_{E}\), so that whenever \(E\) is lower \(d\)-Ahlfors regular with constant \((1+\delta)\), upper \(d\)-Ahlfors regular with constant \(C_{E}>0\), and \(\gamma(E)\leq\delta<\delta_{0}\), then \(E\) is \((C_{0}\delta^{1/2})\)-UR._
Proof.: We recall also the convention that \(\gamma(E)\leq\delta\) being finite means, by definition, that \(E\) is \(d\)-rectifiable. Thus Theorem 5.1 applies so that \(E\) has approximate tangent \(d\)-planes almost everywhere in \(E\). We continue as in the proof of Lemma 3.2 of [1].
Fix \(x_{0}\in E,R>0\), and \(\tau\in(10\delta,1/3)\). Denote by \(\mathcal{M}_{R}\) for \(R>0\) the variant of the Hardy-Littlewood Maximal function, \(\mathcal{M}_{R}f(x)=\sup_{0<r<R}\fint_{B(x,r)}|f|\ d\mathcal{H}^{d}|_{E}\) for \(x\in E\) and \(\mathcal{M}_{R}f(x)=0\) otherwise.
Set
\[F \coloneqq\{y\in B(x_{0},R)\cap E\;:\;\mathcal{M}_{4R}(\big{\|}\pi_ {T}-\pi_{T_{x_{0},4R}}\big{\|})(y)\leq\tau\},\] \[B \coloneqq(B(x_{0},R)\cap E)\setminus F,\]
where \(T(y)\) is the approximate tangent \(d\)-plane to \(E\) at \(y\) and \(T_{x_{0},4R}\in G(n,d)\) is a \(d\)-plane minimizing the quantity
\[\fint_{B(x_{0},4R)}\big{\|}\pi_{T(y)}-\pi_{V}\big{\|}\ d\mathcal{H}^{d}|_{E}(y)+\sup_{y\in B(x_{0},4R)\cap E}\frac{|\pi_{V^{\perp}}(y-x_{0})|}{4R}\]
over all \(V\in G(n,d)\). Then Step 1 in [10, Lemma 3.2] (which only uses the Ahlfors regularity of \(E\), and the definition of \(\gamma(E)\)) gives that for \(\delta_{0}\) chosen small enough, there are uniform constants \(a,C>0\) depending only on \(n\), \(d\), and \(C_{E}\) so that
\[\mathcal{H}^{d}(B)\leq Ce^{-a\tau/\delta}R^{d}. \tag{6.1}\]
Steps 2 and 4 of the proof of the same Lemma (which again, only use Ahlfors regularity of \(E\) and Step 1) then give that for \(\delta_{0}\) sufficiently small, there is a Lipschitz graph \(\Gamma\) with constant \(\leq C\tau\) for which \(F\subset\Gamma\). Consequently, setting \(\tau=\delta^{1/2}\) gives
\[\mathcal{H}^{d}((B(x_{0},R)\cap(E\setminus\Gamma)) \leq Ce^{-a\delta^{-1/2}}R^{d} \tag{6.2}\] \[\leq C\delta R^{d}.\]
The fact that \(\mathcal{H}^{d}|_{E}\) is lower \(d\)-Ahlfors regular with constant \((1+\delta)\) and the bound on the Lipschitz constant of \(\Gamma\) readily give
\[\mathcal{H}^{d}(B(x_{0},R)\cap(\Gamma\setminus E)) =\mathcal{H}^{d}(\Gamma\cap B(x_{0},R))-\mathcal{H}^{d}(E\cap B( x_{0},R))\] \[\leq\left(\sqrt{1+C\delta}-(1+\delta)^{-1}\right)R^{d}\] \[\leq C\delta^{1/2}R^{d}.\]
Together with (6.2), this shows that \(E\) is \((C_{0}\delta^{1/2})\)-UR.
In all, Theorem 6.1 thus concludes the proof of Theorem 1.9.
## 7. A comment on local results and chord-arc domains with small constant
As mentioned in the introduction, Theorem 1.9 has local and "vanishing" versions, which correlate more closely to the local definition of \(\delta\)-chord-arc domains as in [11]. Instead of formulating very precise local definitions here, we simply remark the following about the proofs of Theorems 3.3, 4.7, 5.2, and 6.1. In each of these proofs, the conclusion of the Theorem is deduced inside a ball \(B(x,r)\) centered on \(E\) using information about \(E\) in the ball \(B(x,C_{0}r)\) up to scale \(C_{0}r\) for some dimensional constant \(C_{0}=C_{0}(n,d,C_{E})>0\), _except_ for Theorem 3.3. In the argument of Theorem 3.3, we used information about \(E\) inside the larger ball \(B(x,C_{0}r)\) up to the scale \(C_{0}\delta^{-\theta^{\prime}}r\) where \(\theta^{\prime}\in(0,1/4d)\). This means that the local version of Theorem 1.9 should be loosely formulated in the following way.
**Theorem 7.1** (Local version of Theorem 1.9).: _Fix \(n,d\in\mathbb{N}\) with \(0<d<n\) and \(C_{E}>0\). Then there are constants \(\tau_{0},\theta_{0},\delta_{0}\in(0,1)\) depending only on \(n,d\) and \(C_{E}\) so that the following holds._
_Suppose that \(E\) is a set which satisfies the \(d\)-Ahlfors regularity condition from Definition 1.1 with constant \(C_{E}\) for points \(x\in E\cap B(0,R)\) and scales \(0<r<r_{0}\), and any one of the conditions (A)-(F) holds for \(x\in E\cap B(0,R)\) and \(0<r<r_{0}\) with constant \(\delta\in(0,\delta_{0})\). Then the rest hold for all points \(x\in E\cap B(0,\tau_{0}R)\) and scales \(0<r<\delta\tau_{0}r_{0}\) with constant \(\delta^{\theta_{0}}\)._
Here, the phrase "condition (B) holds for \(x\in E\cap B(0,R)\) and \(0<r<r_{0}\) with constant \(\delta\)" really means that for each \(Q_{0}\in\Delta\) with \(\operatorname{dist}\left(Q_{0},B(0,R)\right)\leq r_{0}\) and \(\operatorname{diam}Q_{0}\leq r_{0}\), \(E\) admits \(\delta\)-Corona decompositions in \(Q_{0}\). For the others, the meaning is self-explanatory: we just mean the conditions defining the statement are required to hold only for such points \(x\in E\) and such scales \(r>0\) as opposed to uniformly.
In particular, this remark can be used to prove Theorem 1.15 in the following way.
Proof of Theorem 1.15.: Let \(\Omega\subset\mathbb{R}^{n}\) be such a domain as in the statement of the Theorem, and for convenience write \(\sigma\coloneqq\mathcal{H}^{n-1}\big{|}_{\partial\Omega}\). Fix \(\theta_{0}\), \(\tau_{0}\), and \(\delta_{0}\) coming from Theorem 7.1 depending on \(n\) and \(C_{E}\), and let \(\delta<\delta_{0}\).
Suppose first that \(\Omega\) is a \(\delta\)-chord arc domain. Fix a ball \(B(0,R)\) with \(R>1\) large, and assume without loss of generality that \(0\in\partial\Omega\). Then we may find some \(\rho>0\) small so that for \(x\in\partial\Omega\cap B(0,R)\) and \(r\in(0,\rho)\),
\[b\beta_{\infty,\partial\Omega}(x,r),\;\left\|\vec{n}\right\|_{*}(B(x,\rho)) \leq\delta.\]
This first local Reifenberg condition on \(\partial\Omega\) gives, by the proof of Theorem 5.2, that
\[\sigma(B(x,r))\geq(1+C\delta)^{-1}r^{n-1},\]
whenever \(x\in B(0,\tau_{0}R)\) and \(r\leq\tau_{0}\rho\) for some constant \(C>0\) depending only on \(n\). Moreover, [2, equation (2.18)] implies the estimate \(|\langle\vec{n}_{x,r},y-x\rangle|\leq Cr\delta^{1/2}\) whenever \(x\in\partial\Omega\cap B(0,R)\), \(r\in(0,\rho)\) and \(y\in\partial\Omega\cap B(x,r)\). Here \(C\) is a constant depending only on \(n\) and \(C_{E}\). Combined with the lower Ahlfors regularity condition above, this says exactly that \(\partial\Omega\) satisfies condition (F) for points \(x\in\partial\Omega\cap B(0,\tau_{0}R)\) and scales \(r\in(0,\tau_{0}\rho)\) with constant \(C\delta^{1/2}\). Theorem 7.1 then implies that (C) holds for points \(x\in\partial\Omega\cap B(0,\tau_{0}^{2}R)\) and scales \(r\in(0,\delta\tau_{0}^{2}\rho)\) with constant \(C\delta^{\theta_{0}/2}\). Since \(R>0\) is arbitrary, this shows that (I) implies (II) with worse constant, \(\delta^{\theta^{\prime}_{0}}\) for some \(\theta^{\prime}_{0}\) small.
Conversely, assume that \(\Omega\) satisfies (II). Again fix a ball \(B(0,R)\) with \(R>1\) large, and assume without loss of generality that \(0\in\partial\Omega\). By assumption, there is some \(\rho>0\), so that the measure \(\sigma\) satisfies the small constant Carleson measure condition
\[\sigma(B(x,r))^{-1}\int_{B(x,r)}\int_{0}^{r}\alpha_{\sigma}(y,s)^{2}\;\frac{d \sigma(y)ds}{s}\leq\delta,\]
for all \(x\in\partial\Omega\cap B(0,R)\) and \(r\in(0,\rho)\). Also, \(\sigma(B(x,r))\leq(1+\delta)r^{n-1}\) for such \(x\) and \(r\). In other words, \(\partial\Omega\) satisfies condition (C) for \(x\in B(0,R)\) and \(r\in(0,\rho)\) with constant \(\delta\), so Theorem 7.1 implies that for \(x\in\partial\Omega\cap B(0,\tau_{0}R)\) and \(r\in(0,\delta\tau_{0}\rho)\), \(\partial\Omega\) satisfies conditions (E) and (F) with constant \(\delta^{\theta_{0}}\). In other words, for each \(x\in\partial\Omega\cap B(0,\tau_{0}R)\) and \(r\in(0,\delta\tau_{0}\rho)\)
we have
\[b\beta_{\infty,\partial\Omega}(x,r),\ \left\|\vec{n}\right\|_{*}(B(x,\delta\tau_{0}\rho))\leq\delta^{\theta_{0}}.\]
Since \(R>0\) is arbitrary, this, combined with the underlying assumptions on \(\Omega\) made in the statement of the Theorem, implies that \(\Omega\) is a \(\delta^{\theta_{0}}\)-chord-arc domain.
2309.17032 | Refined Kolmogorov Complexity of Analog, Evolving and Stochastic
Recurrent Neural Networks | We provide a refined characterization of the super-Turing computational power
of analog, evolving, and stochastic neural networks based on the Kolmogorov
complexity of their real weights, evolving weights, and real probabilities,
respectively. First, we retrieve an infinite hierarchy of classes of analog
networks defined in terms of the Kolmogorov complexity of their underlying real
weights. This hierarchy is located between the complexity classes $\mathbf{P}$
and $\mathbf{P/poly}$. Then, we generalize this result to the case of evolving
networks. A similar hierarchy of Kolomogorov-based complexity classes of
evolving networks is obtained. This hierarchy also lies between $\mathbf{P}$
and $\mathbf{P/poly}$. Finally, we extend these results to the case of
stochastic networks employing real probabilities as source of randomness. An
infinite hierarchy of stochastic networks based on the Kolmogorov complexity of
their probabilities is therefore achieved. In this case, the hierarchy bridges
the gap between $\mathbf{BPP}$ and $\mathbf{BPP/log^*}$. Beyond proving the
existence and providing examples of such hierarchies, we describe a generic way
of constructing them based on classes of functions of increasing complexity.
For the sake of clarity, this study is formulated within the framework of echo
state networks. Overall, this paper intends to fill the missing results and
provide a unified view about the refined capabilities of analog, evolving and
stochastic neural networks. | Jérémie Cabessa, Yann Strozecki | 2023-09-29T07:38:50Z | http://arxiv.org/abs/2309.17032v1 | # Refined Kolmogorov Complexity of Analog, Evolving and Stochastic Recurrent Neural Networks
###### Abstract
We provide a refined characterization of the super-Turing computational power of analog, evolving, and stochastic neural networks based on the Kolmogorov complexity of their real weights, evolving weights, and real probabilities, respectively. First, we retrieve an infinite hierarchy of classes of analog networks defined in terms of the Kolmogorov complexity of their underlying real weights. This hierarchy is located between the complexity classes \(\mathbf{P}\) and \(\mathbf{P/poly}\). Then, we generalize this result to the case of evolving networks. A similar hierarchy of Kolomogorov-based complexity classes of evolving networks is obtained. This hierarchy also lies between \(\mathbf{P}\) and \(\mathbf{P/poly}\). Finally, we extend these results to the case of stochastic networks employing real probabilities as source of randomness. An infinite hierarchy of stochastic networks based on the Kolmogorov complexity of their probabilities is therefore achieved. In this case, the hierarchy bridges the gap between \(\mathbf{BPP}\) and \(\mathbf{BPP/log^{*}}\). Beyond proving the existence and providing examples of such hierarchies, we describe a generic way of constructing them based on classes of functions of increasing complexity. For the sake of clarity, this study is formulated within the framework of echo state networks. Overall, this paper intends to fill the missing results and provide a unified view about the refined capabilities of analog, evolving and stochastic neural networks.
keywords: Recurrent Neural Networks; Echo state networks; Computational Power; Computability Theory; Analog Computation; Stochastic Computation; Kolmogorov Complexity.
## 1 Introduction
Philosophical considerations aside, it can reasonably be claimed that several brain processes are of a computational nature. "The idea that brains are computational in nature has spawned a range of explanatory hypotheses in theoretical neurobiology" [20]. In this regard, the question of the computational capabilities of neural networks naturally arises, among many others.
Since the early 1940s, the theoretical approach to neural computation has been focused on comparing the computational powers of neural network models and abstract computing machines. In 1943, McCulloch and Pitts proposed a modeling of the nervous system as a finite interconnection of logical devices and studied the computational power of "nets of neurons" from a logical perspective [62]. Along these lines, Kleene and Minsky proved that recurrent neural networks composed of McCulloch and Pitts (i.e., Boolean) cells are computationally equivalent to finite state automata [44; 64]. These results paved the way for a future stream of research motivated by the expectation to implement abstract machines on parallel hardware architectures (see for instance [1; 21; 34; 38; 45; 69; 83]).
In 1948, Turing introduced the B-type unorganized machine, a kind of neural network composed of interconnected NAND neuronal-like units [89]. He suggested that the consideration of sufficiently large B-type unorganized machines could simulate the behavior of a universal Turing machine with limited memory. The Turing universality of neural networks involving infinitely many Boolean neurons has been further investigated (see for instance [24; 25; 32; 71; 80]). Besides, Turing brilliantly anticipated the concepts of "learning" and "training" that would later become central to machine learning. These concepts took shape with the introduction of the _perceptron_, a formal neuron that can be trained to discriminate inputs using Hebbian-like learning [33; 77; 78]. But the computational limitations of the perceptron dampened the enthusiasm for artificial neural networks [65]. The ensuing winter of neural networks lasted until the 1980s, when the popularization of the backpropagation algorithm, among other factors, paved the way for the great success of deep learning [79; 81].
Besides, in the late 50's, von Neumann proposed an alternative approach to brain information processing from the hybrid perspective of digital and analog computations [68]. Along these lines, Siegelmann and Sontag studied the capabilities of _sigmoidal neural networks_, (instead of Boolean ones). They showed
that recurrent neural networks composed of linear-sigmoid cells and rational synaptic weights are Turing complete [37; 67; 88]. This result has been generalized to a broad class of sigmoidal networks [43].
Following the developments in analog computation [84], Siegelmann and Sontag argued that the variables appearing in the underlying chemical and physical phenomena could be modeled by continuous rather than discrete (rational) numbers. Accordingly, they introduced the concept of an _analog neural network_ - a sigmoidal recurrent neural net equipped with real instead of rational weights. They proved that analog neural networks are computationally equivalent to Turing machines with advice, and hence, decide the complexity class \(\mathbf{P/poly}\) in polynomial time of computation [86; 87]. Analog networks are thus endowed with _super-Turing_ capabilities and could capture chaotic dynamical features that cannot be described by Turing machines [82]. Based on these considerations, Siegelmann and Sontag formulated the so-called Thesis of Analog Computation - an analogue of the Church-Turing thesis in the realm of analog computation - stating that no reasonable abstract analog device can be more powerful than first-order analog recurrent neural networks [84; 87].
Inspired by the learning process of neural networks, Cabessa and Siegelmann studied the computational capabilities of evolving neural networks [10; 12]. In summary, evolving neural networks using either rational, real, or binary evolving weights are all equivalent to analog neural networks. They also decide the class \(\mathbf{P/poly}\) in polynomial time of computation.
The computational power of stochastic neural networks has also been investigated in detail. For rational-weighted networks, the addition of a discrete source of stochasticity increases the computational power from \(\mathbf{P}\) to \(\mathbf{BPP/log^{*}}\), while for the case of real-weighted networks, the capabilities remain unchanged to the \(\mathbf{P/poly}\) level [85]. On the other hand, the presence of analog noise would strongly reduce the computational power of the systems to that of finite state automata, or even below [4; 57; 61].
Based on these considerations, a refined approach to the computational power of recurrent neural networks has been undertaken. On the one hand, the sub-Turing capabilities of Boolean rational-weighted networks containing 0, 1, 2 or 3 additional sigmoidal cells have been investigated [90; 91]. On the other hand, a refinement of the super-Turing computational power of analog neural networks has been described in terms of the Kolmogorov complexity of the underlying real weights [3]. The capabilities of analog networks with weights of increasing Kolmogorov complexity stratify the gap between the complexity classes \(\mathbf{P}\) and \(\mathbf{P/poly}\).
The capabilities of analog and evolving neural networks have been generalized to the context of infinite computation, in connection with the attractor dynamics of the networks [6, 7, 11, 13, 14, 15, 16, 17, 18]. In this framework, the expressive power of the networks is characterized in terms of topological classes from the Cantor space (the space of infinite bit streams). A refinement of the computational power of the networks based on the complexity of the underlying real and evolving weights has also been described in this context [8, 9].
The computational capabilities of _spiking neural networks_ (instead of sigmoidal ones) have also been extensively studied [54, 55]. In this approach, the computational states are encoded into the temporal differences between spikes rather than within the activation values of the cells. Maass proved that single spiking neurons are strictly more powerful than single threshold gates [59, 60]. He also characterized lower and upper bounds on the complexity of networks composed of classical and noisy spiking neurons (see [47, 49, 51, 52, 56, 58] and [48, 50], respectively). He further showed that networks of spiking neurons are capable of simulating analog recurrent neural networks [53].
In the 2000s, Paun introduced the concept of a _P system_ - a highly parallel abstract model of computation inspired by the membrane-like structure of the biological cell [70, 72]. His work led to the emergence of a highly active field of research. The capabilities of various models of so-called _neural P systems_ have been studied (see for instance [73, 74, 75, 39, 76]). In particular, neural P systems provided with a bio-inspired source of acceleration were shown to be capable of hypercomputational capabilities, spanning all levels of the arithmetical hierarchy [19, 26].
In terms of practical applications, recurrent neural networks are natural candidates for sequential tasks, involving time series or textual data for instance. Classical recurrent architectures, like LSTM and GRU, have been applied with great success in many situations [29]. A 3-level formal hierarchy of the sub-Turing expressive capabilities of these architectures, based on the notions of space complexity and rational recurrence, has been established [63]. Echo state networks are another kind of recurrent neural networks enjoying an increasing popularity due to their training efficiency [40, 41, 42, 46]. The computational capabilities of echo state networks have been studied from the alternative perspective of universal approximation theorems [35, 36]. In this context, echo state networks are shown to be universal, in the sense of being capable of approximating different classes of filters of infinite discrete time signals [27, 28, 30, 31]. These works fit within the field of functional analysis rather than computability theory.
In this paper, we extend the refined Kolmogorov-based complexity of analog neural networks [3] to the cases of evolving and stochastic neural networks [12, 85]. More specifically, we provide a refined characterization of the super-Turing computational power of analog, evolving, and stochastic neural networks based on the Kolmogorov complexity of their real weights, evolving weights, and real probabilities, respectively. First, we retrieve an infinite hierarchy of complexity classes of analog networks defined in terms of the Kolmogorov complexity of their underlying real weights. This hierarchy is located between the complexity classes \(\mathbf{P}\) and \(\mathbf{P/poly}\). Using a natural identification between real numbers and infinite sequences of bits, we generalize this result to the case of evolving networks. Accordingly, a similar hierarchy of Kolomogorov-based complexity classes of evolving networks is obtained. This hierarchy also lies between \(\mathbf{P}\) and \(\mathbf{P/poly}\). Finally, we extend these results to the case of stochastic networks employing real probabilities as source of randomness. An infinite hierarchy of complexity classes of stochastic networks based on the Kolmogorov complexity of their real probabilities is therefore achieved. In this case, the hierarchy bridges the gap between \(\mathbf{BPP}\) and \(\mathbf{BPP/log^{*}}\). Beyond proving the existence and providing examples of such hierarchies, we describe a generic way of constructing them based on classes of functions of increasing complexity. Technically speaking, the separability between non-uniform complexity classes is achieved by means of a generic diagonalization technique, a result of interest per se which improves upon the previous approach [3]. For the sake of clarity, this study is formulated within the framework of echo state networks. Overall, this paper intends to fill the missing results and provide a unified view about the refined capabilities of analog, evolving and stochastic neural networks.
This paper is organized as follows. Section 2 describes the related works. Section 3 provides the mathematical notions necessary to this study. Section 4 presents recurrent neural networks within the formalism of echo state networks. Section 5 introduces the different models of analog, evolving and stochastic recurrent neural networks, and establishes their tight relations to non-uniform
complexity classes defined in terms of Turing machines with advice. Section 6 provides the hierarchy theorems, which in turn, lead to the descriptions of strict hierarchies of classes of analog, evolving and stochastic neural network. Section 7 offers some discussion and concluding remarks.
## 2 Related Works
Kleene and Minsky showed the equivalence between Boolean recurrent neural networks and finite state automata [44; 64]. Siegelmann and Sontag proved the Turing universality of rational-weighted neural networks [88]. Kilian and Siegelmann generalized the result to a broader class of sigmoidal neural networks [43]. In connection with analog computation, Siegelmann and Sontag characterized the super-Turing capabilities of real-weighted neural networks [82; 86; 88]. Cabessa and Siegelmann extended the result to evolving neural networks [12]. The computational power of various kinds of stochastic and noisy neural networks has been characterized [4; 57; 61; 84]. Sima graded the sub-Turing capabilities of Boolean networks containing 0, 1, 2 or 3 additional sigmoidal cells [90; 91]. Balcazar et al. hierarchized the super-Turing computational power of analog networks in terms of the Kolmogorov complexity of their underlying real weights [3]. Cabessa et al. pursued the study of the computational capabilities of analog and evolving neural networks from the perspective of infinite computation [6; 7; 11; 13; 14; 15; 16; 17; 18].
Besides, the computational power of spiking neural networks has been extensively studied by Maass [47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 58; 59; 60]. In addition, since the 2000s, the field of P systems, which involves neural P systems in particular, has been booming (see for instance [70; 72; 73; 74; 39; 75; 76]). The countless variations of proposed models are generally Turing complete.
Regarding modern architectures, a hierarchy of the sub-Turing expressive power of LSTM and GRU neural networks has been established [63]. Furthermore, the universality of echo state networks has been studied from the perspective of universal approximation theorems [27; 28; 30; 31].
## 3 Preliminaries
The binary alphabet is denoted by \(\Sigma=\{0,1\}\), and the set of finite words, finite words of length \(n\), infinite words, and finite or infinite words over \(\Sigma\) are
denoted by \(\Sigma^{*}\), \(\Sigma^{n}\), \(\Sigma^{\omega}\), and \(\Sigma^{\leq\omega}\), respectively. Given some finite or infinite word \(w\in\Sigma^{\leq\omega}\), the \(i\)-th bit of \(w\) is denoted by \(w_{i}\), the sub-word from index \(i\) to index \(j\) is \(w[i:j]\), and the length of \(w\) is \(|w|\), with \(|w|=\infty\) if \(w\in\Sigma^{\omega}\).
A _Turing machine (TM)_ is defined in the usual way. A _Turing machine with advice (TM/A)_ is a TM provided with an additional advice tape and function \(\alpha:\mathbb{N}\to\Sigma^{*}\). On every input \(w\in\Sigma^{n}\) of length \(n\), the machine first queries its advice function \(\alpha(n)\), writes this word on its advice tape, and then continues its computation according to its finite program. The advice \(\alpha\) is called _prefix_ if \(m\leq n\) implies that \(\alpha(m)\) is a prefix of \(\alpha(n)\), for all \(m,n\in\mathbb{N}\). The advice \(\alpha\) is called _unbounded_ if the length of the successive advice words tends to infinity, i.e., if \(\lim_{n\to\infty}|\alpha(n)|=\infty\).1 In this work, we assume that every advice \(\alpha\) is prefix and unbounded, which ensures that \(\lim_{n\to\infty}\alpha(n)\in\Sigma^{\omega}\) is well-defined. For any non-decreasing function \(f:\mathbb{N}\to\mathbb{N}\), the advice \(\alpha\) is said to be of size \(f\) if \(|\alpha(n)|=f(n)\), for all \(n\in\mathbb{N}\). We let poly be the set of univariate polynomials with integer coefficients and log be the set of functions of the form \(n\to C\log(n)\) where \(C\in\mathbb{N}\). The advice \(\alpha\) is called _polynomial_ or _logarithmic_ if it is of size \(f\in\text{poly}\) or \(f\in\text{log}\), respectively. A TM/A \(\mathcal{M}\) equipped with some prefix unbounded advice is assumed to satisfy the following additional consistency property: for any input \(w\in\Sigma^{n}\), \(\mathcal{M}\) accepts \(w\) using advice \(\alpha(n)\) iff \(\mathcal{M}\) accepts \(w\) using advice \(\alpha(n^{\prime})\), for all \(n^{\prime}\geq n\).
Footnote 1: Note that if \(\alpha\) is not unbounded, then it can be encoded into the program of a TM, and thus, doesn’t add any computational power to the TM model.
The class of languages decidable in polynomial time by some TM is \(\mathbf{P}\). The class of languages decidable in time \(t:\mathbb{N}\to\mathbb{N}\) by some TM/A with advice \(\alpha\) is denoted by \(\mathbf{TMA}[\alpha,t]\). Given some class of advice functions \(\mathcal{A}\subseteq(\Sigma^{*})^{\mathbb{N}}\) and some class of time functions \(\mathcal{T}\subseteq\mathbb{N}^{\mathbb{N}}\), we naturally define
\[\mathbf{TMA}\left[\mathcal{A},\mathcal{T}\right]=\bigcup_{\alpha\in\mathcal{A }}\bigcup_{t\in\mathcal{T}}\mathbf{TMA}\left[\alpha,t\right].\]
The classes of languages decidable in polynomial time by some TM/A with polynomial prefix and non-prefix advice are \(\mathbf{P/poly}^{*}\) and \(\mathbf{P/poly}\), respectively. It can be noticed that \(\mathbf{P/poly}^{*}=\mathbf{P/poly}\).
A _probabilistic Turing machine (PTM)_ is a TM with two transition functions. At each computational step, the machine chooses one or the other transition function with probability \(\frac{1}{2}\), independently from all previous choices, and updates its state, tapes' contents, and heads accordingly. A PTM \(\mathcal{M}\) is assumed to be a decider, meaning that for any input \(w\), all possible computations of \(\mathcal{M}\) end up either in an accepting or in a rejecting state. Accordingly, the random variable corresponding to the decision (0 or 1) that \(\mathcal{M}\) makes at the end of its computation over \(w\) is denoted by \(\mathcal{M}(w)\). Given some language \(L\subseteq\Sigma^{*}\), we say that the PTM \(\mathcal{M}\) decides \(L\) in time \(t:\mathbb{N}\to\mathbb{N}\) if, for every \(w\in\Sigma^{*}\), \(\mathcal{M}\) halts in \(t(|w|)\) steps regardless of its random choices, and \(Pr[\mathcal{M}(w)=1]\geq\frac{2}{3}\) if \(w\in L\) and \(Pr[\mathcal{M}(w)=0]\geq\frac{2}{3}\) if \(w\not\in L\). The class of languages decidable in polynomial time by some PTM is \(\mathbf{BPP}\). A _probabilistic Turing machine with advice (PTM/A)_ is a PTM provided with an additional advice tape and function \(\alpha:\mathbb{N}\to\Sigma^{*}\). The class of languages decided in time \(t\) by some PTM/A with advice \(\alpha\) is denoted by \(\mathbf{PTMA}[\alpha,t]\). Given some class of advice functions \(\mathcal{A}\subseteq(\Sigma^{*})^{\mathbb{N}}\) and some class of time functions \(\mathcal{T}\subseteq\mathbb{N}^{\mathbb{N}}\), we also define
\[\mathbf{PTMA}\left[\mathcal{A},\mathcal{T}\right]=\bigcup_{\alpha\in\mathcal{ A}}\bigcup_{t\in\mathcal{T}}\mathbf{PTMA}\left[\alpha,t\right].\]
The classes of languages decidable in polynomial time by some PTM/A with logarithmic prefix and non-prefix advice are \(\mathbf{BPP}/\mathbf{log}^{*}\) and \(\mathbf{BPP}/\mathbf{log}\), respectively. In this probabilistic case however, it can be shown that \(\mathbf{BPP}/\mathbf{log}^{*}\subsetneq\mathbf{BPP}/\mathbf{log}\).
In the sequel, we will be interested in the size of the advice functions. Hence, we define the following _non-uniform complexity classes_.2 Given a class of languages (or associated machines) \(\mathcal{C}\subseteq 2^{\Sigma^{*}}\) and a function \(f:\mathbb{N}\to\mathbb{N}\), we say that \(L\in\mathcal{C}/f^{*}\) if there exist some \(L^{\prime}\in\mathcal{C}\) and some prefix advice function \(\alpha:\mathbb{N}\to\Sigma^{*}\) such that, for all \(n\in\mathbb{N}\) and for all \(w\in\Sigma^{n}\), the following properties hold:
Footnote 2: This definition is non-standard. Usually, non-uniform complexity classes are defined with respect to a class of advice functions \(\mathcal{H}\subseteq(\Sigma^{*})^{\mathbb{N}}\) instead of a class of advice functions’ size \(\mathcal{F}\subseteq\mathbb{N}^{\mathbb{N}}\).
1. \(|\alpha(n)|=f(n)\), for all \(n\geq 0\);
2. \(w\in L\Leftrightarrow\langle w,\alpha(n)\rangle\in L^{\prime}\);
3. \(\langle w,\alpha(n)\rangle\in L^{\prime}\Leftrightarrow\langle w,\alpha(k) \rangle\in L^{\prime},\text{ for all }k\geq n\).
Given a class of functions \(\mathcal{F}\subseteq\mathbb{N}^{\mathbb{N}}\), we naturally set
\[\mathcal{C}/\mathcal{F}^{*}=\bigcup_{f\in\mathcal{F}}\mathcal{C}/f^{*}.\]
The non-starred complexity classes \(\mathcal{C}/f\) and \(\mathcal{C}/\mathcal{F}\) are defined analogously, except that the prefix property of \(\alpha\) and the last condition are not required. For instance, the class of languages decidable in polynomial time by some Turing machine (resp. probabilistic Turing machines) with prefix advice of size \(f\) is \(\mathbf{P}/f^{*}\) (resp. \(\mathbf{BPP}/f^{*}\)).
Besides, for any \(w=w_{0}w_{1}w_{2}\cdots\in\Sigma^{\leq\omega}\), we consider the _base-2_ and _base-4 encoding_ functions \(\delta_{2}:\Sigma^{\leq\omega}\to[0,1]\) and \(\delta_{4}:\Sigma^{\leq\omega}\to[0,1]\) respectively defined by
\[\delta_{2}(w)=\sum_{i=0}^{|w|-1}\frac{w_{i}+1}{2^{i+1}}\ \ \text{and}\ \ \delta_{4}(w)=\sum_{i=0}^{|w|-1}\frac{2w_{i}+1}{4^{i+1}}.\]
The use of base 4 ensures that \(\delta_{4}\) is an injection. Setting \(\Delta:=\delta_{4}(\Sigma^{\omega})\subseteq[0,1]\) ensures that the restriction \(\delta_{4}:\Sigma^{\omega}\to\Delta\) is bijective, and thus that \(\delta_{4}^{-1}\) is well-defined on the domain \(\Delta\). In the sequel, for any real \(r\in\Delta\), its base-4 expansion will generally be denoted as \(\bar{r}=r_{0}r_{1}r_{2}\cdots=\delta_{4}^{-1}(r)\in\Sigma^{\omega}\). For any \(R\subseteq\Delta\), we thus define \(\bar{R}=\{\bar{r}:r\in R\}\).
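As a concrete illustration of this encoding (which will be used later to store binary words in neuronal activation values and to retrieve them bit by bit), the following sketch implements \(\delta_{4}\) on finite words together with the corresponding bit-by-bit decoding. The function names are ours, and exact rational arithmetic is used to mirror the formula above.

```python
from fractions import Fraction

def delta4(w: str) -> Fraction:
    """Base-4 encoding of a finite binary word w = w_0 w_1 ... w_{n-1}."""
    return sum(Fraction(2 * int(b) + 1, 4 ** (i + 1)) for i, b in enumerate(w))

def decode4(q: Fraction, n: int) -> str:
    """Recover the first n bits of the word encoded by q, bit by bit."""
    bits = []
    for _ in range(n):
        digit = int(4 * q)   # leading base-4 digit: 1 encodes bit 0, 3 encodes bit 1
        bits.append('0' if digit == 1 else '1')
        q = 4 * q - digit    # shift the expansion one digit to the left
    return ''.join(bits)

q = delta4("1110")
assert q == Fraction(3, 4) + Fraction(3, 16) + Fraction(3, 64) + Fraction(1, 256)
assert decode4(q, 4) == "1110"
```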
Finally, for every probability space \(\Omega\) and every events \(A_{1},\ldots,A_{n}\subseteq\Omega\), the probability that at least one event \(A_{i}\) occurs can be bounded by the _union bound_ defined by
\[\Pr\left(\bigcup_{i=1}^{n}A_{i}\right)\leq\sum_{i=1}^{n}\Pr\left(A_{i}\right).\]
## 4 Recurrent Neural Networks
We consider a specific model of recurrent neural networks complying with the _echo state networks_ architecture [40; 41; 42; 46]. More specifically, a recurrent neural network is composed of an input layer, a pool of interconnected neurons sometimes referred to as the _reservoir_, and an output layer, as illustrated in Figure 1. The networks read and accept or reject finite words over the alphabet \(\Sigma=\{0,1\}\) using a input-output encoding described below. Accordingly, they are capable of performing decisions of formal languages.
**Definition 1**.: _A rational-weighted recurrent neural network (RNN) is a tuple_
\[\mathcal{N}=\left(\mathbf{x},\mathbf{h},\mathbf{y},\mathbf{W_{in}},\mathbf{W_{ res}},\mathbf{W_{out}},\mathbf{h}^{0}\right)\]
_where_
* \(\mathbf{x}=(x_{0},x_{1})\) _is a sequence of two input cells, the data input_ \(x_{0}\) _and the validation input_ \(x_{1}\)_;_
* \(\mathbf{h}=(h_{0},\ldots,h_{K-1})\) _is a sequence of_ \(K\) _hidden cells, sometimes referred to as the reservoir;_
* \(\mathbf{y}=(y_{0},y_{1})\) _is a sequence of two output cells, the data output_ \(y_{0}\) _and the validation output_ \(y_{1}\)_;_
* \(\mathbf{W_{in}}\in\mathbb{Q}^{K\times(2+1)}\) _is a matrix of input weights and biases, where_ \(w_{ij}\) _is the weight from input_ \(x_{j}\) _to cell_ \(h_{i}\)_, for_ \(j\neq 2\)_, and_ \(w_{i2}\) _is the bias of_ \(h_{i}\)_;_
* \(\mathbf{W_{res}}\in\mathbb{Q}^{K\times K}\) _is a matrix of internal weights, where_ \(w_{ij}\) _is the weight from cell_ \(h_{j}\) _to cell_ \(h_{i}\)_;_
* \(\mathbf{W_{out}}\in\mathbb{Q}^{2\times K}\) _is a matrix of output weights, where_ \(w_{ij}\) _is the weight from cell_ \(h_{j}\) _to output_ \(y_{i}\)_;_
* \(\mathbf{h}^{0}=(h_{0}^{0},\ldots,h_{K-1}^{0})\in[0,1]^{K}\) _is the initial state of_ \(\mathcal{N}\)_, where each component_ \(h_{i}^{0}\) _is the initial activation value of cell_ \(h_{i}\)_._
The _activation value_ of the cell \(x_{i}\), \(h_{j}\) and \(y_{k}\) at time \(t\) is denoted by \(x_{i}^{t}\in\{0,1\}\), \(h_{j}^{t}\in[0,1]\) and \(y_{k}^{t}\in\{0,1\}\), respectively. Note that the activation values of input and output cells are Boolean, as opposed to those of the hidden cells. The _input_, _output_ and _(hidden) state_ of \(\mathcal{N}\) at time \(t\) are the vectors
\[\mathbf{x}^{t}=(x_{0}^{t},x_{1}^{t})\in\mathbb{B}^{2},\quad\mathbf{y}^{t}=(y_ {0}^{t},y_{1}^{t})\in\mathbb{B}^{2}\ \ \text{and}\ \ \mathbf{h}^{t}=(h_{0}^{t},\ldots,h_{K-1}^{t})\in[0,1]^{K}\,,\]
respectively. Given some input \(\mathbf{x}^{t}\) and state \(\mathbf{h}^{t}\) at time \(t\), the state \(\mathbf{h}^{t+1}\) and the output \(\mathbf{y}^{t+1}\) at time \(t+1\) are computed by the following equations:
\[\mathbf{h}^{t+1} = \sigma\left(\mathbf{W_{in}}(\mathbf{x}^{t}:1)+\mathbf{W_{res}} \mathbf{h}^{t}\right) \tag{1}\] \[\mathbf{y}^{t+1} = \theta\left(\mathbf{W_{out}}\mathbf{h}^{t+1}\right) \tag{2}\]
where \((\mathbf{x}^{t}:1)\) denotes the vector \((x_{0}^{t},x_{1}^{t},1)\), and \(\sigma\) and \(\theta\) are the _linear sigmoid_
and the _hard-threshold_ functions respectively given by
\[\sigma(x)=\begin{cases}0&\text{if }x<0\\ x&\text{if }0\leq x\leq 1\\ 1&\text{if }x>1\end{cases}\quad\text{and}\quad\theta(x)=\begin{cases}0&\text{if }x<0\\ 1&\text{if }x\geq 1\end{cases}.\]
The constant value \(1\) in the input vector \((\mathbf{x}^{t}:1)\) ensures that the hidden cells \(\mathbf{h}\) receive the last column of \(\mathbf{W_{in}}\in\mathbb{Q}^{K\times(2+1)}\) as biases at each time step \(t\geq 0\). In the sequel, the bias of cell \(h_{i}\) will be denoted as \(w_{i2}\).
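To make the dynamics of Equations (1) and (2) concrete, here is a minimal sketch of one update step, with the weight matrices stored as NumPy arrays. Since the hard-threshold \(\theta\) above is only specified for \(x<0\) and \(x\geq 1\), the sketch assumes \(\theta(x)=0\) for the intermediate values \(0\leq x<1\); the function and variable names are ours.

```python
import numpy as np

def sigma(x):
    """Linear sigmoid: clips each activation to the interval [0, 1]."""
    return np.clip(x, 0.0, 1.0)

def theta(x):
    """Hard threshold; by assumption, every value below 1 is mapped to 0."""
    return (np.asarray(x) >= 1.0).astype(int)

def step(W_in, W_res, W_out, h, x):
    """One step of Equations (1)-(2): returns the next state and output.

    W_in : K x 3 matrix (its last column holds the biases)
    W_res: K x K matrix of internal weights
    W_out: 2 x K matrix of output weights
    h    : current state in [0, 1]^K
    x    : Boolean input pair (x_0, x_1)
    """
    u = np.append(np.asarray(x, dtype=float), 1.0)   # the extended input (x^t : 1)
    h_next = sigma(W_in @ u + W_res @ h)              # Equation (1)
    y_next = theta(W_out @ h_next)                    # Equation (2)
    return h_next, y_next
```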
An _input_\(\mathbf{x}\) of length \(n\) for the network \(\mathcal{N}\) is an infinite sequence of inputs at successive time steps \(t=0,1,2,\dots\), such that the \(n\) first validation bits are equal to \(1\), while the remaining data and validation bits are equal to \(0\), i.e.,
\[\mathbf{x}=\mathbf{x}^{0}\mathbf{x}^{1}\cdots\mathbf{x}^{n-1}\mathbf{0}^{ \omega}\in\left(\mathbb{B}^{2}\right)^{\omega}\]
where \(\mathbf{x}^{i}=(x_{0}^{i},1)\) and \(x_{0}^{i}\in\{0,1\}\), for \(i=0,\dots,n-1\), and \(\mathbf{0}=(0,0)\). Suppose that the network \(\mathcal{N}\) is in the initial state \(\mathbf{h}^{0}\) and that input \(\mathbf{x}\) is presented to \(\mathcal{N}\) step by step. The dynamics given by Equations (1) and (2) ensures that \(\mathcal{N}\) will generate the sequences of states and outputs
\[\mathbf{h} = \mathbf{h}^{0}\mathbf{h}^{1}\mathbf{h}^{2}\cdots\in\left([0,1]^{ K}\right)^{\omega}\] \[\mathbf{y} = \mathbf{y}^{1}\mathbf{y}^{2}\mathbf{y}^{3}\cdots\in\left(\mathbb{ B}^{2}\right)^{\omega}\]
step by step, where \(\mathbf{y}\coloneqq\mathcal{N}(\mathbf{x})\) is the _output_ of \(\mathcal{N}\) associated with input \(\mathbf{x}\).

Figure 1: A recurrent neural network. The network is composed of two Boolean input cells (data and validation), two Boolean output cells (data and validation) and a set of \(K\) hidden cells, the reservoir, that are recurrently interconnected. The weight matrices \(\mathbf{W_{in}}\), \(\mathbf{W_{res}}\), and \(\mathbf{W_{out}}\) labeling the connections between these layers are represented. In this illustration, the network reads the finite word \(10100\) by means of its data and validation input cells (blue), and eventually rejects it, as shown by the pattern of the data and validation output cells (red).
Now, let \(w=w_{0}w_{1}\cdots w_{n-1}\in\Sigma^{*}\) be some finite word of length \(n\), let \(\tau\in\mathbb{N}\) be some integer, and let \(f:\mathbb{N}\to\mathbb{N}\) be some non-decreasing function. The word \(w\in\Sigma^{*}\) can naturally be associated with the input
\[\mathbf{w}=\mathbf{w}^{0}\mathbf{w}^{1}\cdots\mathbf{w}^{n-1}\mathbf{0}^{ \omega}\in\left(\mathbb{B}^{2}\right)^{\omega}\]
defined by \(\mathbf{w}^{i}=(x_{0}^{i},x_{1}^{i})=(w_{i},1)\), for \(i=0,\ldots,n-1\). The input pattern is thus given by
\[\begin{array}{l}x_{0}^{0}\ x_{0}^{1}\ x_{0}^{2}\ \cdots\ =w_{0}\ w_{1}\ \cdots\ w_{n-1}\ 0\ 0\ 0\ \cdots\\ x_{1}^{0}\ x_{1}^{1}\ x_{1}^{2}\ \cdots\ =\ 1\ \ \ 1\ \ \cdots\ \ \ \ 1\ \ \ 0\ 0\ 0\ \cdots\end{array}\]
where the validation bits indicate whether an input is actively being processed or not, and the corresponding data bits represent the successive values of the input (see Figure 1). The word \(w\) is said to be _accepted_ or _rejected by \(\mathcal{N}\) in time \(\tau\)_ if the output
\[\mathcal{N}(\mathbf{w})=\mathbf{y}=\mathbf{y}^{1}\mathbf{y}^{2}\cdots\mathbf{y }^{\tau}\mathbf{0}^{\omega}\in\left(\mathbb{B}^{2}\right)^{\omega}\]
is such that \(\mathbf{y}^{\mathbf{i}}=(0,0)\), for \(i=1,\ldots,\tau-1\), and \(\mathbf{y}^{\tau}=(1,1)\) or \(\mathbf{y}^{\tau}=(0,1)\), respectively. The output pattern is thus given by
\[\begin{array}{l}y_{0}^{0}\ y_{0}^{1}\ y_{0}^{2}\ \cdots\ =0\ 0\ \cdots\ y_{0}^{\tau}\ 0\ 0\ \cdots\\ y_{1}^{0}\ y_{1}^{1}\ y_{1}^{2}\ \cdots\ =0\ 0\ \cdots\ \ 1\ \ \ 0\ 0\ \cdots\end{array}\]
where \(y_{0}^{\tau}=1\) or \(y_{0}^{\tau}=0\), respectively.
In addition, the word \(w\) is said to be _accepted_ or _rejected by \(\mathcal{N}\) in time \(f\)_ if it is accepted or rejected in time \(\tau\leq f(n)\), respectively.3 A language \(L\subseteq\Sigma^{*}\) is _decided by \(\mathcal{N}\) in time \(f\)_ if for every word \(w\in\Sigma^{*}\),
Footnote 3: The choice of the letter \(f\) (instead of \(t\)) for referring to a computation time is deliberate, since the computation time of the networks will later be linked to the advice length of the Turing machines.
\[\begin{array}{l}w\in L\mbox{ implies that }\mathbf{w}\mbox{ is accepted by }\mathcal{N}\mbox{ in time }f\mbox{ and}\\ w\not\in L\mbox{ implies that }\mathbf{w}\mbox{ is rejected by }\mathcal{N}\mbox{ in time }f.\end{array}\]
A language \(L\subseteq\Sigma^{*}\) is _decided by \(\mathcal{N}\) in polynomial time_ if there exists a polynomial \(f\) such that \(L\) is decided by \(\mathcal{N}\) in time \(f\). If it exists, the _language decided by \(\mathcal{N}\) in time \(f\)_ is denoted by \(L_{f}(\mathcal{N})\). Besides, a network \(\mathcal{N}\) is said to be a _decider_ if any finite word is eventually accepted or rejected by it. In this case, the _language decided by \(\mathcal{N}\)_ is unique and is denoted by \(L(\mathcal{N})\). We will assume that all the networks we consider are deciders.
Recurrent neural networks with rational weights have been shown to be computationally equivalent to Turing machines [88].
**Theorem 1**.: _Let \(L\subseteq\Sigma^{*}\) be some language. The following conditions are equivalent:_
1. \(L\) _is decidable by some TM;_
2. \(L\) _is decidable by some RNN._
Proof.: (sketch) (ii) \(\rightarrow\) (i): The dynamics of any RNN \(\mathcal{N}\) is governed by Equations (1) and (2), which involves only rational weights, and thus can clearly be simulated by some Turing machine \(\mathcal{M}\).
(i) \(\rightarrow\) (ii): We provide a sketch of the original proof of this result [88]. This proof is based on the fact that any finite (and infinite) binary word can be encoded into the activation value of a neuron, and decoded from this activation value bit by bit. This idea will be reused in some forthcoming proofs. First of all, recall that any TM \(\mathcal{M}\) is computationally equivalent to, and can be simulated in real time by, some \(p\)-stack machine \(\mathcal{S}\) with \(p\geq 2\). We thus show that any \(p\)-stack machine \(\mathcal{S}\) can be simulated by some RNN \(\mathcal{N}\). Towards this purpose, we encode every stack content
\[w=w_{0}\cdots w_{n-1}\in\{0,1\}^{*}\]
as the rational number
\[q_{w}=\delta_{4}(w)=\sum_{i=0}^{n-1}\frac{2\cdot w(i)+1}{4^{i+1}}\in[0,1].\]
For instance, \(w=1110\) is encoded into \(q_{w}=\frac{3}{4}+\frac{3}{16}+\frac{3}{64}+\frac{1}{256}\). With this base-4 encoding, the required stack operations can be performed by simple functions involving the sigmoid-linear function \(\sigma\), as described below:
* Reading the top of the stack: \(\text{top}(q_{w})=\sigma(4q_{w}-2)\)
* Pushing \(0\) into the stack: \(\mathrm{push}_{0}(q_{w})=\sigma(\frac{1}{4}q_{w}+\frac{1}{4})\)
* Pushing \(1\) into the stack: \(\mathrm{push}_{1}(q_{w})=\sigma(\frac{1}{4}q_{w}+\frac{3}{4})\)
* Popping the stack: \(\mathrm{pop}(q_{w})=\sigma(4q_{w}-(2\mathrm{top}(q_{w})+1))\)
* Emptiness of the stack: \(\mathrm{empty}(q_{w})=\sigma(4q_{w})\)
Hence, the content \(w\) of each stack can be encoded into the rational activation value \(q_{w}\) of a stack neuron, and the stack operations (reading the top, pushing \(0\) or \(1\), popping and testing the emptiness) can be performed by simple neural circuits implementing the functions described above.
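For concreteness, here is a small Python sketch (ours) of this base-4 encoding and of the stack operations listed above, checked on the word \(w=1110\) from the example; exact rational arithmetic is used so that the identities hold exactly.

```python
from fractions import Fraction as Q

def sigma(x):                     # saturated-linear function
    return Q(0) if x < 0 else (Q(1) if x > 1 else x)

def encode(bits):                 # q_w = sum_i (2*w_i + 1) / 4^(i+1)
    return sum(Q(2 * b + 1, 4 ** (i + 1)) for i, b in enumerate(bits))

def top(q):    return sigma(4 * q - 2)
def push0(q):  return sigma(Q(1, 4) * q + Q(1, 4))
def push1(q):  return sigma(Q(1, 4) * q + Q(3, 4))
def pop(q):    return sigma(4 * q - (2 * top(q) + 1))
def empty(q):  return sigma(4 * q)          # 0 iff the encoded stack is empty

q = encode([1, 1, 1, 0])                    # w = 1110, as in the text
assert q == Q(3, 4) + Q(3, 16) + Q(3, 64) + Q(1, 256)
assert top(q) == 1                          # the top element is 1
assert pop(q) == encode([1, 1, 0])          # popping removes the leading bit
assert push0(q) == encode([0, 1, 1, 1, 0])  # pushing prepends a 0
assert push1(q) == encode([1, 1, 1, 1, 0])  # pushing prepends a 1
assert empty(encode([])) == 0 and empty(q) == 1
```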
Based on these considerations, we can design an RNN \(\mathcal{N}\) which correctly simulates the \(p\)-stack machine \(\mathcal{S}\). The network \(\mathcal{N}\) contains \(3\) neurons per stack: one for storing the encoded content of the stack, one for reading the top element of the stack, and one for storing the answer of the emptiness test of the stack. Moreover, \(\mathcal{N}\) contains two pools of neurons implementing the computational states and transition function of \(\mathcal{S}\), respectively. For any computational state, input bit, and contents of the stacks, \(\mathcal{N}\) computes the next computational state and updates the stacks' contents in accordance with the transition function of \(\mathcal{S}\). In this way, the network \(\mathcal{N}\) simulates the behavior of the \(p\)-stack machine \(\mathcal{S}\) correctly. The network \(\mathcal{N}\) is illustrated in Figure 2.
It can be noticed that the simulation process described in the proof of Theorem 1 is performed in real time. More precisely, if a language \(L\subseteq\Sigma^{*}\) is decided by some TM in time \(f(n)\), then \(L\) is decided by some RNN in time \(f(n)+O(n)\). Hence, when restricted to polynomial time of computation, RNNs decide the complexity class \(\mathbf{P}\).

Figure 2: Construction of an RNN that simulates a \(p\)-stack machine.
**Corollary 2**.: _Let \(L\subseteq\Sigma^{*}\) be some language. The following conditions are equivalent:_
1. \(L\in\mathbf{P}\)_;_
2. \(L\) _is decidable by some RNN in polynomial time._
## 5 Analog, Evolving and Stochastic Recurrent Neural Networks
We now introduce analog, evolving and stochastic recurrent neural networks, which are all variants of the RNN model. In polynomial time, these models capture the complexity classes \(\mathbf{P/poly}\), \(\mathbf{P/poly}\) and \(\mathbf{BPP/log}^{*}\), respectively, which all strictly contain the class \(\mathbf{P}\) and include non-recursive languages. According to these considerations, these augmented models have been qualified as _super-Turing_. For each model, a tight characterization in terms of Turing machines with specific kinds of advice is provided.
### Analog networks
An _analog recurrent neural network (ANN)_ is an RNN as defined in Definition 1, except that the weight matrices are real instead of rational [87]. Formally, an ANN is an RNN
\[\mathcal{N}=\left(\mathbf{x},\mathbf{h},\mathbf{y},\mathbf{W_{in}},\mathbf{W_ {res}},\mathbf{W_{out}},\mathbf{h}^{0}\right)\]
such that
\[\mathbf{W_{in}}\in\mathbb{R}^{K\times(2+1)},\ \ \mathbf{W_{res}}\in\mathbb{R}^{K \times K}\ \ \text{and}\ \ \mathbf{W_{out}}\in\mathbb{R}^{2\times K}.\]
The definitions of acceptance and rejection of words as well as of decision of languages are the same as for RNNs.
It can be shown that any ANN \(\mathcal{N}\) containing the irrational weights \(r_{1},\ldots,r_{k}\in\mathbb{R}\setminus\mathbb{Q}\) is computationally equivalent to some ANN \(\mathcal{N}^{\prime}\) using only a single irrational weight \(r\in\mathbb{R}\setminus\mathbb{Q}\) such that \(r\in\Delta\subseteq[0,1]\) and \(r\) is the bias \(w_{02}\) of the hidden cell \(h_{0}\) [87]. Hence, without loss of generality, we restrict our attention to such networks. Let \(r\in\Delta\) and \(R\subseteq\Delta\).
* \(\mathrm{ANN}[r]\) denotes the class of ANNs such that all weights but \(w_{02}\) are rational and \(w_{02}=r\).
* \(\mathrm{ANN}[R]\) denotes the class of ANNs such that all weights but \(w_{02}\) are rational and \(w_{02}\in R\).
In this definition, \(r\) is allowed to be a rational number. In this case, an \(\mathrm{ANN}[r]\) is just a specific RNN.
In exponential time of computation, analog recurrent neural networks can decide any possible language. In fact, any language \(L\subseteq\Sigma^{*}\) can be encoded into the infinite word \(\bar{r}_{L}\in\Sigma^{\omega}\), where the \(i\)-th bit of \(\bar{r}_{L}\) equals \(1\) iff the \(i\)-th word of \(\Sigma^{*}\) belongs to \(L\), according to some enumeration of \(\Sigma^{*}\). Hence, we can build some ANN containing the real weight \(r_{L}=\delta_{4}(\bar{r}_{L})\), which, for every input \(w\), decides whether \(w\in L\) or \(w\not\in L\) by decoding \(\bar{r}_{L}\) and reading the suitable bit. In polynomial time of computation, however, the ANNs decide the complexity class \(\mathbf{P/poly}\), and hence, are computationally equivalent to Turing machines with polynomial advice (TM/poly(A)). The following result holds [87]:
**Theorem 3**.: _Let \(L\subseteq\Sigma^{*}\) be some language. The following conditions are equivalent:_
1. \(L\in\mathbf{P/poly}\)_;_
2. \(L\) _is decidable by some ANN in polynomial time._
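The exponential-time argument sketched above can be made concrete as follows. In this toy Python illustration (ours; the length-lexicographic enumeration and the sample language are arbitrary choices), a fragment of the characteristic sequence of a language is encoded into a single base-4 rational, and the membership bit of any enumerated word is recovered by repeated popping.

```python
from fractions import Fraction as Q
from itertools import product

def enumerate_words(max_len):
    """Length-then-lexicographic enumeration of {0,1}^*, up to length max_len."""
    yield ""
    for n in range(1, max_len + 1):
        for t in product("01", repeat=n):
            yield "".join(t)

# a sample language: words containing an even number of 1s (placeholder choice)
in_L = lambda w: w.count("1") % 2 == 0

words = list(enumerate_words(3))
bits  = [1 if in_L(w) else 0 for w in words]                          # characteristic prefix
r_L   = sum(Q(2 * b + 1, 4 ** (i + 1)) for i, b in enumerate(bits))  # delta_4 encoding

def bit(q, i):
    """Recover the i-th encoded bit by popping i times (cf. the stack operations)."""
    for _ in range(i):
        q = 4 * q - (2 * (1 if 4 * q - 2 >= 1 else 0) + 1)
    return 1 if 4 * q - 2 >= 1 else 0

for i, w in enumerate(words):
    assert bit(r_L, i) == (1 if in_L(w) else 0)   # membership recovered from the real weight
```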
Given some ANN \(\mathcal{N}\) and some \(q\in\mathbb{N}\), the _truncated network_ \(\mathcal{N}|_{q}\) is defined as the network \(\mathcal{N}\) all of whose weights and activation values are truncated after \(q\) precision bits at each step of the computation. The following result shows that, up to time \(q\), the network \(\mathcal{N}\) can limit itself to \(O(q)\) precision bits without affecting the result of its computation [87].
**Lemma 4**.: _Let \(\mathcal{N}\) be some ANN computing in time \(f:\mathbb{N}\rightarrow\mathbb{N}\). Then, there exists some constant \(c>0\) such that, for every \(n\in\mathbb{N}\) and every input \(w\in\Sigma^{n}\), the networks \(\mathcal{N}\) and \(\mathcal{N}|_{cf(n)}\) produce the same outputs up to time \(f(n)\)._
The computational relationship between analog neural networks and Turing machines with advice can actually be strengthened. Towards this purpose, for any non-decreasing function \(f:\mathbb{N}\rightarrow\mathbb{N}\) and any class of such functions \(\mathcal{F}\), we define the following classes of languages decided by analog neural networks in
time \(f\) and \(\mathcal{F}\), respectively:
\[\mathbf{ANN}\left[r,f\right] = \left\{L\subseteq\Sigma^{*}:L=L_{f}(\mathcal{N})\text{ for some }\mathcal{N}\in\text{ANN}[r]\right\}\] \[\mathbf{ANN}\left[R,\mathcal{F}\right] = \bigcup_{r\in R}\bigcup_{f\in\mathcal{F}}\mathbf{ANN}\left[r,f\right].\]
In addition, for any real \(r\in\Delta\) and any function \(f:\mathbb{N}\rightarrow\mathbb{N}\), the prefix advice \(\alpha(\bar{r},f):\mathbb{N}\rightarrow\Sigma^{*}\) of length \(f\) associated with \(r\) is defined by
\[\alpha(\bar{r},f)(n)=r_{0}r_{1}\cdots r_{f(n)-1}\]
for all \(n\in\mathbb{N}\). For any set of reals \(R\subseteq\Delta\) and any class of functions \(\mathcal{F}\subseteq\mathbb{N}^{\mathbb{N}}\), we naturally set
\[\alpha(\bar{R},\mathcal{F})=\bigcup_{\bar{r}\in\bar{R}}\bigcup_{f\in\mathcal{ F}}\left\{\alpha(\bar{r},f)\right\}\]
Conversely, note that any unbounded prefix advice \(\alpha:\mathbb{N}\rightarrow\Sigma^{*}\) is of the form \(\alpha(\bar{r},f)\), where \(\bar{r}=\lim_{n\rightarrow\infty}\alpha(n)\in\Sigma^{\omega}\) and \(f:\mathbb{N}\rightarrow\mathbb{N}\) is defined by \(f(n)=|\alpha(n)|\).
The following result clarifies the tight relationship between analog neural networks using real weights and Turing machines using related advices. Note that the real weights of the networks correspond precisely to the advice of the machines, and the computation time of the networks are related to the advice length of the machines.
**Proposition 5**.: _Let \(r\in\Delta\) be some real weight and \(f:\mathbb{N}\rightarrow\mathbb{N}\) be some non-decreasing function._
1. \(\mathbf{ANN}\left[r,f\right]\subseteq\mathbf{TMA}\left[\alpha(\bar{r},cf),O(f ^{3})\right]\)_, for some_ \(c>0\)_._
2. \(\mathbf{TMA}\left[\alpha(\bar{r},f),f\right]\subseteq\mathbf{ANN}\left[r,O(f)\right]\)_._
Proof.: (i) Let \(L\in\mathbf{ANN}\left[r,f\right]\). Then, there exists some \(\text{ANN}[r]\) \(\mathcal{N}\) such that \(L_{f}(\mathcal{N})=L\). By Lemma 4, there exists some constant \(c>0\) such that the network \(\mathcal{N}\) and the truncated network \(\mathcal{N}|_{cf(n)}\) produce the same outputs up to time step \(f(n)\), for all \(n\in\mathbb{N}\). Now, consider Procedure 1 below. In this procedure, all instructions except the query one (line 1) are recursive. Besides, the simulation of each step of \(\mathcal{N}|_{cf(n)}\) involves a constant number of multiplications and additions of rational numbers, all representable by \(cf(n)\) bits, and can thus be performed in time \(O(f^{2}(n))\) (for the products). Consequently, the
simulation of the \(f(n)\) steps of \(\mathcal{N}|_{cf(n)}\) can be performed in time \(O(f^{3}(n))\). Hence, Procedure 1 can be simulated by some TM/A \(\mathcal{M}\) using advice \(\alpha(\bar{r},cf)\) in time \(O(f^{3}(n))\). In addition, Lemma 4 ensures that \(w\) is accepted by \(\mathcal{M}\) iff \(w\) is accepted by \(\mathcal{N}\), for all \(w\in\Sigma^{*}\). Hence, \(L(\mathcal{M})=L(\mathcal{N})=L\), and therefore \(L\in\mathbf{TMA}\left[\alpha(\bar{r},cf),O(f^{3})\right]\).
```
Input: input \(w\in\Sigma^{n}\)
1  Query the advice \(\alpha(\bar{r},cf)(n)=r_{0}r_{1}\cdots r_{cf(n)-1}\)
2  for \(t=0,1,\ldots,f(n)-1\) do
3      Simulate the truncated network \(\mathcal{N}|_{cf(n)}\), which uses the rational approximation \(\tilde{r}=\delta_{4}(r_{0}r_{1}\cdots r_{cf(n)-1})\) of \(r\) as its weight
4  return Output of \(\mathcal{N}|_{cf(n)}\) over \(w\) at time step \(f(n)\)
```
**Procedure 1**
(ii) Let \(L\in\mathbf{TMA}\left[\alpha(\bar{r},f),f\right]\). Then, there exists some TM/A \(\mathcal{M}\) with advice \(\alpha(\bar{r},f)\) such that \(L_{f}(\mathcal{M})=L\). We show that \(\mathcal{M}\) can be simulated by some analog neural network \(\mathcal{N}\) with real weight \(r\). The network \(\mathcal{N}\) simulates the advice tape of \(\mathcal{M}\) as described in the proof of Theorem 1: the left and right contents of the tape are encoded and stored into two stack neurons \(x_{l}\) and \(x_{r}\), respectively, and the tape operations are simulated using appropriate neural circuits. On every input \(w\in\Sigma^{n}\), the network \(\mathcal{N}\) works as follows. First, \(\mathcal{N}\) copies its real bias \(r=\delta_{4}(r_{0}r_{1}r_{2}\cdots)\) into some neuron \(x_{a}\). Every time \(\mathcal{M}\) reads some new advice bit \(r_{i}\), then \(\mathcal{N}\) first pops \(r_{i}\) from its neuron \(x_{a}\), which thus takes the updated activation value \(\delta_{4}(r_{i+1}r_{i+2}\cdots)\), and next pushes \(r_{i}\) into neuron \(x_{r}\). This process is performed in constant time. At this point, neurons \(x_{l}\) and \(x_{r}\) contain the encoding of the bits \(r_{0}r_{1}\cdots r_{i}\). Hence, \(\mathcal{N}\) can simulate the recursive instructions of \(\mathcal{M}\) in the usual way, in real time, until the next bit \(r_{i+1}\) is read [88]. Overall, \(\mathcal{N}\) simulates the behavior of \(\mathcal{M}\) in time \(O(f(n))\).
We now show that \(\mathcal{M}\) and \(\mathcal{N}\) output the same decision for input \(w\). If \(\mathcal{M}\) does not reach the end of its advice word \(\alpha(n)\), the behaviors of \(\mathcal{M}\) and \(\mathcal{N}\) are identical, and so are their outputs. If, at some point, \(\mathcal{M}\) reaches the end of \(\alpha(n)\) and reads successive blank symbols, then \(\mathcal{N}\) continues to pop the successive bits \(r_{|\alpha(n)|}r_{|\alpha(n)|+1}\cdots\) from neuron \(x_{a}\), to push them into neuron \(x_{r}\), and to simulate the behavior of \(\mathcal{M}\). In this case, \(\mathcal{N}\) simulates the behavior of \(\mathcal{M}\) working with some extension of the advice \(\alpha(n)\), which by the consistency property of \(\mathcal{M}\) (cf. Section 3), produces the same output as if working with
advice \(\alpha(n)\). In this way, \(w\) is accepted by \(\mathcal{M}\) iff \(w\) is accepted by \(\mathcal{N}\), and thus \(L(\mathcal{M})=L(\mathcal{N})\). Therefore, \(L\in\mathbf{ANN}\left[r,O(f)\right]\).
The following corollary shows that the classes of languages decided in polynomial time by analog networks with real weights and by Turing machines with related advice are the same.
**Corollary 6**.: _Let \(r\in\Delta\) be some real weight and \(R\subseteq\Delta\) be some set of real weights._
1. \(\mathbf{ANN}\left[r,\mathrm{poly}\right]=\mathbf{TMA}\left[\alpha(\bar{r}, \mathrm{poly}),\mathrm{poly}\right]\)_._
2. \(\mathbf{ANN}\left[R,\mathrm{poly}\right]=\mathbf{TMA}\left[\alpha(\bar{R}, \mathrm{poly}),\mathrm{poly}\right]\)_._
Proof.: (i) Let \(L\in\mathbf{ANN}\left[r,\mathrm{poly}\right]\). Then, there exists \(f\in\mathrm{poly}\) such that \(L\in\mathbf{ANN}\left[r,f\right]\). By Proposition 5-(i), \(L\in\mathbf{TMA}\left[\alpha(\bar{r},cf),O(f^{3})\right]\), for some \(c>0\). Thus \(L\in\mathbf{TMA}\left[\alpha(\bar{r},\mathrm{poly}),\mathrm{poly}\right]\). Conversely, let \(L\in\mathbf{TMA}\left[\alpha(\bar{r},\mathrm{poly}),\mathrm{poly}\right]\). Then, there exist \(f,f^{\prime}\in\mathrm{poly}\) such that \(L\in\mathbf{TMA}\left[\alpha(\bar{r},f^{\prime}),f\right]\). By the consistency property of the TM/A, we can assume without loss of generality that \(f^{\prime}=f\). By Proposition 5-(ii), \(L\in\mathbf{ANN}\left[r,O(f)\right]\), and hence \(L\in\mathbf{ANN}\left[r,\mathrm{poly}\right]\).
(ii) This point follows directly from point (i) by taking the union over all \(r\in R\).
### Evolving networks
An _evolving recurrent neural network (ENN)_ is an RNN where the weight matrices can evolve over time inside a bounded space instead of staying static [12]. Formally, an ENN is a tuple
\[\mathcal{N}=\left(\mathbf{x},\mathbf{h},\mathbf{y},\left(\mathbf{W}_{\mathbf{ in}}^{t}\right)_{t\in\mathbb{N}},\left(\mathbf{W}_{\mathbf{res}}^{t}\right)_{t \in\mathbb{N}},\left(\mathbf{W}_{\mathbf{out}}^{t}\right)_{t\in\mathbb{N}}, \mathbf{h}^{0}\right)\]
where \(\mathbf{x},\mathbf{h},\mathbf{y},\mathbf{h}^{0}\) are defined as in Definition 1, and
\[\mathbf{W}_{\mathbf{in}}^{t}\in\mathbb{Q}^{K\times(2+1)},\ \ \mathbf{W}_{\mathbf{res}}^{t}\in\mathbb{Q}^{K\times K}\ \ \text{and}\ \ \mathbf{W}_{\mathbf{out}}^{t}\in\mathbb{Q}^{2\times K}\]
are the input, reservoir and output weight matrices at time \(t\), which satisfy \(\|\mathbf{W}_{\mathbf{in}}^{t}\|_{\max}\leq C\), \(\|\mathbf{W}_{\mathbf{res}}^{t}\|_{\max}\leq C\) and \(\|\mathbf{W}_{\mathbf{out}}^{t}\|_{\max}\leq C\) for some constant \(C>1\) and for all \(t\in\mathbb{N}\). The boundedness condition expresses the fact that the synaptic weights are confined into a certain range of values imposed by the biological constitution
of the neurons. The successive values of an evolving weight \(w_{ij}\) are denoted by \((w_{ij}^{t})_{t\in\mathbb{N}}\). The dynamics of an ENN is given by the following adapted equations
\[\mathbf{h}^{t+1} = \sigma\left(\mathbf{W}_{\mathbf{in}}^{t}(\mathbf{x}^{t}:1)+ \mathbf{W}_{\mathbf{res}}^{t}\mathbf{h}^{t}\right) \tag{3}\] \[\mathbf{y}^{t+1} = \theta\left(\mathbf{W}_{\mathbf{out}}^{t}\mathbf{h}^{t+1}\right). \tag{4}\]
The definition of acceptance and rejection of words and decision of languages is the same as for RNNs.
In this case also, it can be shown that any ENN \(\mathcal{N}\) containing the evolving weights \(\bar{e}_{1},\ldots,\bar{e}_{n}\in[-C,C]^{\omega}\) is computationally equivalent to some ENN \(\mathcal{N}^{\prime}\) containing only one evolving weight \(\bar{e}\in[-C,C]^{\omega}\), such that \(\bar{e}\) evolves only among the binary values \(0\) and \(1\), i.e. \(\bar{e}\in\Sigma^{\omega}\), and \(\bar{e}\) is the evolving bias \((w_{02}^{t})_{t\in\mathbb{N}}\) of the hidden cell \(h_{0}\) [12]. Hence, without loss of generality, we restrict our attention to such networks. Let \(\bar{e}\in\Sigma^{\omega}\) be some binary evolving weight and \(\bar{E}\subseteq\Sigma^{\omega}\).
* ENN\([\bar{e}]\) denotes the class of ENNs such that all weights but \(w_{02}\) are static, and \((w_{02}^{t})_{t\in\mathbb{N}}=\bar{e}\).
* ENN\([\bar{E}]\) denotes the class of ENNs such that all weights but \(w_{02}\) are static, and \((w_{02}^{t})_{t\in\mathbb{N}}\in\bar{E}\).
Like analog networks, evolving recurrent neural networks can decide any possible language in exponential time of computation. In polynomial time, they decide the complexity class \(\mathbf{P/poly}\), and thus, are computationally equivalent to Turing machines with polynomial advice (TM/poly(A)). The following result holds [12]:
**Theorem 7**.: _Let \(L\subseteq\Sigma^{*}\) be some language. The following conditions are equivalent:_
1. \(L\in\mathbf{P/poly}\)_;_
2. \(L\) _is decidable by some ENN in polynomial time._
An analogous version of Lemma 4 holds for the case of evolving networks [12]. Note that the boundedness condition on the weights is involved in this result.
**Lemma 8**.: _Let \(\mathcal{N}\) be some ENN computing in time \(f:\mathbb{N}\rightarrow\mathbb{N}\). Then, there exists some constant \(c\) such that, for every \(n\in\mathbb{N}\) and every input \(w\in\Sigma^{n}\), the networks \(\mathcal{N}\) and \(\mathcal{N}|_{cf(n)}\) produce the same outputs up to time \(f(n)\)._
Once again, the computational relationship between evolving neural networks and Turing machines with advice can be strengthened. For this purpose, we define the following classes of languages decided by evolving neural networks in time \(f\) and \(\mathcal{F}\), respectively:
\[\mathbf{ENN}\left[\bar{e},f\right] = \left\{L\subseteq\Sigma^{*}:L=L_{f}(\mathcal{N})\text{ for some }\mathcal{N}\in\text{ENN}[\bar{e}]\right\}\] \[\mathbf{ENN}\left[\bar{E},\mathcal{F}\right] = \bigcup_{\bar{e}\in\bar{E}}\bigcup_{f\in\mathcal{F}}\mathbf{ENN}\left[\bar{e},f\right].\]
For any \(\bar{e}\in\Sigma^{\omega}\) and any function \(f:\mathbb{N}\rightarrow\mathbb{N}\), we consider the prefix advice \(\alpha(\bar{e},f):\mathbb{N}\rightarrow\Sigma^{*}\) associated with \(\bar{e}\) and \(f\) defined by
\[\alpha(\bar{e},f)(n)=e_{0}e_{1}\cdots e_{f(n)-1}\]
for all \(n\in\mathbb{N}\). Conversely, any prefix advice \(\alpha:\mathbb{N}\rightarrow\Sigma^{*}\) is clearly of the form \(\alpha(\bar{e},f)\), where \(\bar{e}=\lim_{n\rightarrow\infty}\alpha(n)\in\Sigma^{\omega}\) and \(f(n)=|\alpha(n)|\) for all \(n\in\mathbb{N}\).
The following relationships between neural networks with evolving weights and Turing machines with related advice hold:
**Proposition 9**.: _Let \(\bar{e}\in\Sigma^{\omega}\) be some binary evolving weight and \(f:\mathbb{N}\rightarrow\mathbb{N}\) be some non-decreasing function._
1. \(\mathbf{ENN}\left[\bar{e},f\right]\subseteq\mathbf{TMA}\left[\alpha(\bar{e}, cf),O(f^{3})\right]\)_, for some_ \(c>0\)_._
2. \(\mathbf{TMA}\left[\alpha(\bar{e},f),f\right]\subseteq\mathbf{ENN}\left[\bar{e}, O(f)\right]\)_._
Proof.: The proof is very similar to that of Proposition 5.
(i) Let \(L\in\mathbf{ENN}\left[\bar{e},f\right]\). Then, there exists some \(\text{ENN}[\bar{e}]\)\(\mathcal{N}\) such that \(L_{f}(\mathcal{N})=L\). By Lemma 8, there exists some constant \(c>0\) such that the network \(\mathcal{N}\) and the truncated network \(\mathcal{N}|_{cf(n)}\) produce the same outputs up to time step \(f(n)\), for all \(n\in\mathbb{N}\). Now, consider Procedure 2 below. In this procedure, all instructions except the query one are recursive. Procedure 2 can be simulated by some TM/A \(\mathcal{M}\) using advice \(\alpha(\bar{e},f)\) in time \(O(f^{3}(n))\), as described in the proof of Proposition 5. In addition, \(\mathcal{M}\) and \(\mathcal{N}\) decide the same language \(L\), and therefore \(L\in\mathbf{TMA}\left[\alpha(\bar{e},f),O(f^{3})\right]\).
(ii) Let \(L\in\mathbf{TMA}\left[\alpha(\bar{e},f),f\right]\). Then, there exists to some TM/A \(\mathcal{M}\) with advice \(\alpha(\bar{e},f)\) such that \(L_{f}(\mathcal{M})=L\). The machine \(\mathcal{M}\) can be simulated by the network \(\mathcal{N}\) with evolving weight \(\bar{e}=e_{0}e_{1}e_{2}\cdots\) as follows. First, \(\mathcal{N}\) simultaneously counts and pushes into a stack neuron \(x_{a}\) the successive bits of \(\bar{e}\) as they
arrive. Then, for \(k=1,2,\dots\) and until it produces a decision, \(\mathcal{N}\) proceeds as follows. If necessary, \(\mathcal{N}\) waits for \(x_{a}\) to contain more than \(2^{k}\) bits, copies the content \(e_{0}e_{1}\cdots e_{2^{k}}\cdots\) of \(x_{a}\) in reverse order into another stack neuron \(x_{a^{\prime}}\), and simulates \(\mathcal{M}\) with advice \(e_{0}e_{1}\cdots e_{2^{k}}\cdots\) in real time. Every time \(\mathcal{M}\) reads a new advice bit, \(\mathcal{N}\) tries to access it from its stack \(x_{a^{\prime}}\). If \(x_{a^{\prime}}\) does not contain this bit, then \(\mathcal{N}\) restarts the whole process with \(k+1\). When \(k=\log(f(n))\), the stack \(x_{a}\) contains \(2^{k}=f(n)\) bits, which ensures that \(\mathcal{N}\) properly simulates \(\mathcal{M}\) with advice \(e_{0}e_{1}\cdots e_{f(n)-1}\). Hence, the whole simulation process is achieved in time \(O(\sum_{k=1}^{\log(f(n))}2^{k})=O(2^{\log(f(n))+1})=O(f(n))\). In this way, \(\mathcal{M}\) and \(\mathcal{N}\) decide the same language \(L\), and \(\mathcal{M}\) is simulated by \(\mathcal{N}\) in time \(O(f)\). Therefore, \(L\in\mathbf{ENN}\left[\bar{e},O(f)\right]\).
The classes of languages decided in polynomial time by evolving networks and by Turing machines using related evolving weights and advice are the same.
**Corollary 10**.: _Let \(\bar{e}\in\Sigma^{\omega}\) be some binary evolving weight and \(\bar{E}\subseteq\Sigma^{\omega}\) be some set of binary evolving weights._
1. \(\mathbf{ENN}\left[\bar{e},\mathrm{poly}\right]=\mathbf{TMA}\left[\alpha(\bar{e },\mathrm{poly}),\mathrm{poly}\right]\)_._
2. \(\mathbf{ENN}\left[\bar{E},\mathrm{poly}\right]=\mathbf{TMA}\left[\alpha(\bar{ E},\mathrm{poly}),\mathrm{poly}\right]\)_._
Proof.: The proof is similar to that of Corollary 6.
### Stochastic networks
A _stochastic recurrent neural network (SNN)_ is an RNN as defined in Definition 1, except that the network contains additional stochastic cells as inputs [84]. Formally, an SNN is an RNN
\[\mathcal{N}=\left(\mathbf{x},\mathbf{h},\mathbf{y},\mathbf{W_{in}},\mathbf{W_ {res}},\mathbf{W_{out}},\mathbf{h}^{0}\right)\]
such that \(\mathbf{x}=(x_{0},x_{1},x_{2},\cdots,x_{k+1})\), where \(x_{0}\) and \(x_{1}\) are the data and validation input cells, respectively, and \(x_{2},\ldots,x_{k+1}\) are \(k\) additional stochastic cells. The dimension of the input weight matrix is adapted accordingly, namely \(\mathbf{W_{in}}\in\mathbb{Q}^{K\times((k+2)+1)}\). Each stochastic cell \(x_{i}\) is associated with a probability \(p_{i}\in[0,1]\), and at each time step \(t\geq 0\), the activation of the cell \(x_{i}^{t}\) takes value \(1\) with probability \(p_{i}\), and value \(0\) with probability \(1-p_{i}\). The dynamics of an SNN is then governed by Equations (1) and (2), but with the adapted inputs \(\mathbf{x}^{t}=(x_{0}^{t},x_{1}^{t},x_{2}^{t},\cdots,x_{k+1}^{t})\in\mathbb{B}^{k+2}\) for all \(t\geq 0\).
For some SNN \(\mathcal{N}\), we assume that any input \(w=w_{0}w_{1}\cdots w_{n-1}\in\Sigma^{*}\) is decided by \(\mathcal{N}\) in the same amount of time \(\tau(n)\), regardless of the random pattern of the stochastic cells \(x_{i}^{t}\in\{0,1\}\), for all \(i=2,\ldots,k+1\). Hence, the number of possible computations of \(\mathcal{N}\) over \(w\) is finite. The input \(w\) is _accepted_ (resp. _rejected_) by \(\mathcal{N}\) if the number of accepting (resp. rejecting) computations over the total number of computations on \(w\) is greater than or equal to \(2/3\). This means that the error probability of \(\mathcal{N}\) is bounded by \(1/3\). If \(f:\mathbb{N}\rightarrow\mathbb{N}\) is a non-decreasing function, we say that \(w\) is _accepted_ or _rejected_ by \(\mathcal{N}\)_in time \(f\)_ if it is accepted or rejected in time \(\tau(n)\leq f(n)\), respectively. We assume that any SNN is a decider. The definition of decision of languages is the same as in the case of RNNs.
Once again, any SNN is computationally equivalent to some SNN with only one stochastic cell \(x_{2}\) associated with a real probability \(p\in\Delta\)[84]. Without loss of generality, we restrict our attention to such networks. Let \(p\in\Delta\) be some probability and \(P\subseteq\Delta\).
* SNN\([p]\) denotes the class of SNNs such that the probability of the stochastic cell \(x_{2}\) is equal to \(p\).
* SNN\([P]\) denotes the class of SNNs such that the probability of the stochastic cell \(x_{2}\) is equal to some \(p\in P\).
In polynomial time of computation, the SNNs with rational probabilities decide the complexity class \(\mathbf{BPP}\). By contrast, the SNNs with real probabilities decide the complexity class \(\mathbf{BPP}/\mathbf{log}^{*}\), and hence, are computationally equivalent to probabilistic Turing machines with logarithmic advice (PTM/log(A)). The following result holds [84]:
**Theorem 11**.: _Let \(L\subseteq\Sigma^{*}\) be some language. The following conditions are equivalent:_
1. \(L\in\mathbf{BPP/log}^{*}\)_;_
2. \(L\) _is decidable by some SNN in polynomial time._
As for the two previous models, the computational relationship between stochastic neural networks and Turing machines with advice can be made precise. We define the following classes of languages decided by stochastic neural networks in time \(f\) and \(\mathcal{F}\), respectively:
\[\mathbf{SNN}\left[p,f\right] = \left\{L\subseteq\Sigma^{*}:L=L_{f}(\mathcal{N})\text{ for some }\mathcal{N}\in\text{SNN}[p]\right\}\] \[\mathbf{SNN}\left[P,\mathcal{F}\right] = \bigcup_{p\in P}\bigcup_{f\in\mathcal{F}}\mathbf{SNN}\left[p,f\right].\]
The tight relationships between stochastic neural networks using real probabilities and Turing machines using related advice can now be described. In this case, however, the advice lengths of the machines are logarithmically related to the computation times of the networks.
**Proposition 12**.: _Let \(p\in\Delta\) be some real probability and \(f:\mathbb{N}\rightarrow\mathbb{N}\) be some non-decreasing function._
1. \(\mathbf{SNN}[p,f]\subseteq\mathbf{PTMA}\left[\alpha(\bar{p},\log(5f)),O(f^{3} )\right]\)_._
2. \(\mathbf{PTMA}\left[\alpha(\bar{p},\log(f)),f\right]\subseteq\mathbf{SNN} \left[p,O(f^{2})\right]\)_._
Proof.: (i) Let \(L\in\mathbf{SNN}\left[p,f\right]\). Then, there exists some SNN\([p]\)\(\mathcal{N}\) deciding \(L\) in time \(f\). Note that the stochastic network \(\mathcal{N}\) can be considered as a classical rational-weighted RNN with an additional input cell \(x_{2}\). Since \(\mathcal{N}\) has rational weights, it can be noticed that up to time \(f(n)\), the activation values of its neurons are always representable by rationals of \(O(f(n))\) bits. Now, consider Procedure 3 below. This procedure can then be simulated by some PTM/A \(\mathcal{M}\) using advice \(\alpha(\bar{p},\log(5f))\) in time \(O(f^{3})\), as described in the proof of Proposition 5.
It remains to show that \(\mathcal{N}\) and \(\mathcal{M}\) decide the same language \(L\). For this purpose, consider a hypothetical device \(\mathcal{M}^{\prime}\) working as follows: at each time \(t\), \(\mathcal{M}^{\prime}\) takes the sequence of bits \(\bar{b}\) generated by Procedure 3 and concatenates it with some infinite sequence of bits \(\bar{b}^{\prime}\in\Sigma^{\omega}\) drawn independently with probability \(\frac{1}{2}\), thus producing the infinite sequence \(\bar{b}\bar{b}^{\prime}\in\Sigma^{\omega}\). Then, \(\mathcal{M}^{\prime}\) generates the bit \(c^{\prime}_{t}=1\) iff \(\bar{b}\bar{b}^{\prime}<_{lex}\bar{p}\), which happens precisely with probability \(p\), since \(p=\delta_{2}(\bar{p})\)[2]. Finally, \(\mathcal{M}^{\prime}\) simulates the behavior of \(\mathcal{N}\) at time \(t\) using the stochastic bit
\(x_{2}^{t}=c_{t}^{\prime}\). Clearly, \({\cal M}^{\prime}\) and \({\cal N}\) produce random bits with the same probability \(p\), behave in the same way, and thus decide the same language \(L\). We now evaluate the error probability of \({\cal M}\) at deciding \(L\), by comparing the behaviors of \({\cal M}\) and \({\cal M}^{\prime}\). Let \(w\in\Sigma^{n}\) be some input and let
\[\bar{p}_{\cal M}=\alpha(\bar{p},\log(5f))(n)=p_{0}p_{1}\cdots p_{\log(5f(n))-1} \ \ \mbox{and}\ \ p_{\cal M}=\delta_{2}(\bar{p}_{\cal M}).\]
According to Procedure 3, at each time step \(t\), the machine \({\cal M}\) generates \(c_{t}=1\) iff \(\bar{b}<_{lex}\bar{p}_{\cal M}\), which happens precisely with probability \(p_{\cal M}\), since \(p_{\cal M}=\delta_{2}(\bar{p}_{\cal M})\)[2]. On the other hand, \({\cal M}^{\prime}\) generates \(c_{t}^{\prime}=1\) with probability \(p\), showing that \({\cal M}\) and \({\cal M}^{\prime}\) might differ in their decisions. Since \(\bar{p}_{\cal M}\) is a prefix of \(\bar{p}\), it follows that \(p_{\cal M}\leq p\) and
\[p-p_{\cal M}=\sum_{i=\log(5f(n))}^{\infty}\frac{p_{i}}{2^{i+1}}\leq\frac{1}{2^ {\log(5f(n))}}=\frac{1}{5f(n)}.\]
In addition, the bits \(c_{t}\) and \(c_{t}^{\prime}\) are generated by \({\cal M}\) and \({\cal M}^{\prime}\) based on the sequences \(\bar{b}\) and \(\bar{b}\bar{b}^{\prime}\) satisfying \(\bar{b}<_{lex}\bar{b}\bar{b}^{\prime}\). Hence,
\[\Pr(c_{t}\neq c_{t}^{\prime})=\Pr(\bar{p}_{\cal M}<_{lex}\bar{b}\bar{b}^{ \prime}<_{lex}\bar{p})=p-p_{\cal M}\leq\frac{1}{5f(n)}.\]
By a union bound argument, the probability that the sequences \(\bar{c}=c_{0}c_{1}\cdots c_{f(n)-1}\) and \(\bar{c}^{\prime}=c_{0}^{\prime}c_{1}^{\prime}\cdots c_{f(n)-1}^{\prime}\) generated by \({\cal M}\) and \({\cal M}^{\prime}\) differ satisfies
\[\Pr(\bar{c}\neq\bar{c}^{\prime})\leq\frac{1}{5f(n)}f(n)=\frac{1}{5}\ \ \mbox{and thus}\ \ \Pr(\bar{c}=\bar{c}^{\prime})\geq 1-\frac{1}{5}.\]
Since \({\cal M}^{\prime}\) classifies \(w\) correctly with probability at least \(\frac{2}{3}\), it follows that \({\cal M}\) classifies \(w\) correctly with probability at least \((1-\frac{1}{5})\frac{2}{3}=\frac{8}{15}>\frac{1}{2}\). This probability can be increased above \(\frac{2}{3}\) by repeating Procedure 3 a constant number of times and taking the majority of the decisions as output [2]. Consequently, the devices \({\cal M}\), \({\cal M}^{\prime}\) and \({\cal N}\) all decide the same language \(L\), and therefore, \(L\in{\bf PTMA}\left[\alpha(\bar{p},\log(5f)),O(f^{3})\right]\).
(ii) Let \(L\in{\bf PTMA}[\alpha(\bar{p},\log(f)),f]\). Then, there exists some PTM/log(A) \({\cal M}\) with logarithmic advice \(\alpha(\bar{p},\log(f))\) deciding \(L\) in time \(f\). For simplicity purposes, let the advice of \({\cal M}\) be denoted by \(\bar{p}=p_{0}\cdots p_{\log(f(n))-1}\) (from now on, \(\bar{p}\) no longer denotes the binary expansion of \(p\)). Now, consider Procedure 4
below. The first for loop computes an estimation \(\bar{p}^{\prime}\) of the advice \(\bar{p}\) defined by
\[\bar{p}^{\prime}=\delta_{2}^{-1}(p^{\prime})[0:\log(f(n))-1]=p_{0}^{\prime} \cdots p_{\log(f(n))-1}^{\prime}\]
where
\[p^{\prime}=\frac{1}{k(n)}\sum_{i=0}^{k(n)-1}b_{i}\ \ \text{and}\ \ k(n)=\lceil 10p(1-p)f^{2}(n)\rceil\]
and the \(b_{i}\) are drawn according to a Bernoulli distribution of parameter \(p\). The second for loop computes a sequence of random choices
\[\bar{c}=c_{0}\cdots c_{f(n)-1}\]
using von Neumann's trick to simulate a fair coin with a biased one [2]. The third loop simulates the behavior of the PTM/log(A) \(\mathcal{M}\) using the alternative advice \(\bar{p}^{\prime}\) and the sequence of random choices \(\bar{c}\). This procedure can clearly be simulated by some SNN\([p]\)\(\mathcal{N}\) in time \(O(k+2f)=O(f^{2})\), where the random samples of bits are given by the stochastic cell and the remaining recursive instructions simulated by a rational-weighted sub-network.
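The two sampling stages of Procedure 4 can be sketched in Python as follows (an illustration under arbitrary parameter values, not the construction itself): `estimate_advice` mirrors the first loop, which recovers the leading bits of \(p\) from Bernoulli samples, and `fair_bit` mirrors the inner loop, which extracts a nearly fair coin flip from the biased source by von Neumann's trick.

```python
import math
import random

def bernoulli(p):
    return 1 if random.random() < p else 0

def estimate_advice(p, f_n):
    """First loop of Procedure 4: estimate the log(f(n)) leading bits of p by sampling."""
    k = math.ceil(10 * p * (1 - p) * f_n ** 2)
    p_hat = sum(bernoulli(p) for _ in range(k)) / k
    bits = []
    for _ in range(math.ceil(math.log2(f_n))):   # binary expansion of the estimate
        p_hat *= 2
        bits.append(int(p_hat >= 1))
        p_hat -= bits[-1]
    return bits

def fair_bit(p, f_n):
    """Inner loop of Procedure 4: von Neumann's trick, with at most K attempts."""
    K = math.ceil((-4 - math.log2(f_n)) / math.log2(p * p + (1 - p) * (1 - p)))
    for _ in range(K):
        b, b2 = bernoulli(p), bernoulli(p)
        if (b, b2) == (0, 1):
            return 0
        if (b, b2) == (1, 0):
            return 1
    return 0    # all K draws were identical (probability at most 1/(16 f(n)))

random.seed(0)
p, f_n = 0.7, 64                                   # arbitrary illustrative values
print(estimate_advice(p, f_n))                     # estimated advice bits
print([fair_bit(p, f_n) for _ in range(f_n)])      # nearly unbiased random choices
```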
It remains to show that \(\mathcal{M}\) and \(\mathcal{N}\) decide the same language \(L\). For this purpose, we estimate the error probability of \(\mathcal{N}\) at deciding language \(L\). First, we show that \(\bar{p}^{\prime}\) is a good approximation of the advice \(\bar{p}\) of \(\mathcal{M}\). Note that \(\bar{p}^{\prime}\neq\bar{p}\) iff \(|p^{\prime}-p|>\frac{1}{2^{\log(f(n))}}=\frac{1}{f(n)}\). Note also that by definition, \(p^{\prime}=\frac{\#1}{k(n)}\), where \(\#1\sim\mathcal{B}(k(n),p)\) is a binomial random variable of parameters \(k(n)\) and \(p\) with
\(\mathrm{E}(\#1)=k(n)p\) and \(\mathrm{Var}(\#1)=k(n)p(1-p)\). It follows that
\[\Pr\left(\bar{p}^{\prime}\neq\bar{p}\right) = \Pr\left(|p^{\prime}-p|>\frac{1}{f(n)}\right)\] \[= \Pr\left(|k(n)p^{\prime}-k(n)p|>\frac{k(n)}{f(n)}\right)\] \[= \Pr\left(|\#1-\mathrm{E}(\#1)|>\frac{k(n)}{f(n)}\right).\]
Chebyshev's inequality ensures that
\[\Pr\left(\bar{p}^{\prime}\neq\bar{p}\right)\leq\frac{\mathrm{Var}(\#1)f^{2}(n )}{k^{2}(n)}=\frac{p(1-p)f^{2}(n)}{k(n)}<\frac{1}{10}\]
since \(k(n)>10p(1-p)f^{2}(n)\).
We now estimate the source of error coming from the simulation of a fair coin by a biased one in Procedure 4 (loop of Line 6). Note that at each step \(i\), if the two bits \(bb^{\prime}\) are different (01 or 10), then \(c_{t}\) is drawn with fair probability \(\frac{1}{2}\), as in the case of the machine \(\mathcal{M}\). Hence, the sampling processes of \(\mathcal{N}\) and \(\mathcal{M}\) differ in probability precisely when all of the \(K\) draws produce identical bits \(bb^{\prime}\) (00 or 11). The probability that the two bits \(bb^{\prime}\) are identical at step \(i\) is \(p^{2}+(1-p)^{2}\), and hence, the probability that the \(K=\frac{-4-\log(f(n))}{\log(p^{2}+(1-p)^{2})}\) independent draws all produce identical bits \(bb^{\prime}\) satisfies
\[\left(p^{2}+(1-p)^{2}\right)^{K}\leq 2^{-4-\log(f(n))}=\frac{1}{16f(n)}.\]
by using the fact that \(x^{1/\log(x)}=2\) for every \(x>0\) with \(x\neq 1\). By a union bound argument, the probability that some \(c_{t}\) in the sequence \(c_{0}\cdots c_{f(n)-1}\) is not drawn with a fair probability \(\frac{1}{2}\) is bounded by \(\frac{1}{16}\). Equivalently, the probability that all random bits \(c_{t}\) of the sequence \(c_{0}\cdots c_{f(n)-1}\) are drawn with fair probability \(\frac{1}{2}\) is at least \(\frac{15}{16}\).
To safely estimate the error probability of \(\mathcal{N}\), we restrict ourselves to situations when \(\mathcal{M}\) and \(\mathcal{N}\) behave the same, and assume that \(\mathcal{N}\) always makes errors otherwise. These situations happen when \(\mathcal{M}\) and \(\mathcal{N}\) use the same advice as well as the same fair probability for their random processes. These two events are independent and of probability at least \(\frac{9}{10}\) and at least \(\frac{15}{16}\), respectively. Hence, \(\mathcal{M}\) and \(\mathcal{N}\) agree on any input \(w\) with probability at least \(\frac{9}{10}\cdot\frac{15}{16}>\frac{4}{5}\). Consequently, the probability that \(\mathcal{N}\) decides correctly whether \(w\in L\) or not is at least \(\frac{4}{5}\cdot\frac{2}{3}=\frac{8}{15}>\frac{1}{2}\). As before, this probability can be made larger than \(\frac{2}{3}\) by
repeating Procedure 4 a constant number of times and taking the majority of the decisions as output [2]. This shows that \(L(\mathcal{N})=L(\mathcal{M})=L\), and therefore \(L\in\mathbf{SNN}\left[p,O(f^{2})\right]\).
```
Input: input \(w\in\Sigma^{n}\)
1   for \(i=0,\ldots,k(n):=\lceil 10p(1-p)f^{2}(n)\rceil\) do
2       Draw a random bit \(b_{i}\) with probability \(p\)
3   Compute the estimation of the advice of \(\mathcal{M}\): \(\bar{p}^{\prime}=p_{0}^{\prime}\cdots p_{\log(f(n))-1}^{\prime}=\delta_{2}^{-1}(\frac{1}{k(n)}\sum_{i=0}^{k(n)-1}b_{i})[0:\log(f(n))-1]\)
4   for \(t=0,\ldots,f(n)-1\) do
5       \(c_{t}=0\)
6       for \(i=0,\ldots,\lceil\frac{-4-\log(f(n))}{\log(p^{2}+(1-p)^{2})}\rceil\) do
7           Draw 2 random bits \(b\) and \(b^{\prime}\) with probability \(p\)
8           if \(bb^{\prime}=01\) then \(c_{t}=0\); break
9           if \(bb^{\prime}=10\) then \(c_{t}=1\); break
10  for \(t=0,\ldots,f(n)-1\) do
11      Simulate the PTM/log(A) \(\mathcal{M}\) using the advice \(\bar{p}^{\prime}=p_{0}^{\prime}\cdots p_{\log(f(n))-1}^{\prime}\) and the sequence of random choices \(\bar{c}=c_{0}\cdots c_{f(n)-1}\)
12  return Output of \(\mathcal{M}\) over \(w\) at time step \(f(n)\)
```
**Procedure 4**
The classes of languages decided in polynomial time by stochastic networks using real probabilities and by Turing machines using related advice are the same. In this case, however, the length of the advice is logarithmic instead of polynomial.
**Corollary 13**.: _Let \(p\in\Delta\) be some real probability and \(P\subseteq\Delta\) be some set of real probabilities._
1. \(\mathbf{SNN}[p,\mathrm{poly}]=\mathbf{PTMA}[\alpha(\bar{p},\log),\mathrm{poly}]\)_._
2. \(\mathbf{SNN}[P,\mathrm{poly}]=\mathbf{PTMA}[\alpha(\bar{P},\log),\mathrm{poly}]\)_._
Proof.: The proof is similar to that of Corollary 6.
## 6 Hierarchies
In this section, we provide a refined characterization of the super-Turing computational power of analog, evolving, and stochastic neural networks based
on the Kolmogorov complexity of their real weights, evolving weights, and real probabilities, respectively. More specifically, we show the existence of infinite hierarchies of classes of analog and evolving neural networks located between \(\mathbf{P}\) and \(\mathbf{P/poly}\). We also establish the existence of an infinite hierarchy of classes of stochastic neural networks between \(\mathbf{BPP}\) and \(\mathbf{BPP/log^{*}}\). Beyond proving the existence and providing examples of such hierarchies, we describe a generic way of constructing them based on classes of functions of increasing complexity.
Towards this purpose, we define the _Kolmogorov complexity_ of a real number as stated in a related work [3]. Let \(\mathcal{M}_{U}\) be a universal Turing machine, \(f,g:\mathbb{N}\to\mathbb{N}\) be two functions, and \(\alpha\in\Sigma^{\omega}\) be some infinite word. We say that \(\alpha\in\bar{K}_{g}^{f}\) if there exists \(\beta\in\Sigma^{\omega}\) such that, for all but finitely many \(n\), the machine \(\mathcal{M}_{U}\) with inputs \(\beta[0:m-1]\) and \(n\) will output \(\alpha[0:n-1]\) in time \(g(n)\), for all \(m\geq f(n)\). In other words, \(\alpha\in\bar{K}_{g}^{f}\) if its \(n\) first bits can be recovered from the \(f(n)\) first bits of some \(\beta\) in time \(g(n)\). The notion is of particular interest when \(f(n)\leq n\), in which case \(\alpha\in\bar{K}_{g}^{f}\) means that every \(n\)-long prefix of \(\alpha\) can be compressed into and recovered from a smaller \(f(n)\)-long prefix of \(\beta\). Given two classes of functions \(\mathcal{F}\) and \(\mathcal{G}\), we define \(\bar{K}_{\mathcal{G}}^{\mathcal{F}}=\bigcup_{f\in\mathcal{F}}\bigcup_{g\in \mathcal{G}}\bar{K}_{g}^{f}\). Finally, for any real number \(r\in\Delta\) with associated binary expansion \(\bar{r}=\delta_{4}^{-1}(r)\in\Sigma^{\omega}\), we say that \(r\in K_{g}^{f}\) (resp. \(r\in K_{\mathcal{G}}^{\mathcal{F}}\)) iff \(\bar{r}\in\bar{K}_{g}^{f}\) (resp. \(\bar{r}\in\bar{K}_{\mathcal{G}}^{\mathcal{F}}\)).
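As a toy illustration of this definition (our own example): if \(\alpha\) is obtained from some \(\beta\in\Sigma^{\omega}\) by repeating every bit twice, then \(\alpha[0:n-1]\) can be recovered in linear time from \(\beta[0:\lceil n/2\rceil-1]\), so that \(\alpha\in\bar{K}_{g}^{f}\) with \(f(n)=\lceil n/2\rceil\leq n\) and \(g\) linear.

```python
import math

def decompress(beta_prefix, n):
    """Recover alpha[0:n-1] from beta[0:ceil(n/2)-1]; alpha repeats every bit of beta twice."""
    alpha = [b for b in beta_prefix for _ in (0, 1)]
    return alpha[:n]

beta = [1, 0, 1, 1, 0, 0, 1, 0]             # arbitrary source word
n = 10
f_n = math.ceil(n / 2)                      # f(n) = ceil(n/2) <= n bits of beta suffice
print(decompress(beta[:f_n], n))            # alpha[0:n-1], recovered in linear time
```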
Given some set of functions \(\mathcal{F}\subseteq\mathbb{N}^{\mathbb{N}}\), we say \(\mathcal{F}\) is a class of _reasonable advice bounds_ if the following conditions hold:
* Sub-linearity: for all \(f\in\mathcal{F}\), \(f(n)\leq n\) for all \(n\in\mathbb{N}\).
* Dominance by a polynomially computable function: for all \(f\in\mathcal{F}\), there exists \(g\in\mathcal{F}\) such that \(f\leq g\) and \(g\) is computable in polynomial time.
* Closure by polynomial composition on the right: for all \(f\in\mathcal{F}\) and for all \(p\in\text{poly}\), there exists \(g\in\mathcal{F}\) such that \(f\circ p\leq g\).
For instance, \(\log\) is a class of reasonable advice bounds. All properties in this definition are necessary for our separation theorems. The first and second conditions are necessary to define Kolmogorov reals associated with advice of bounded size. The third condition comes from the fact that RNNs can access any polynomial number of bits from their weights during polynomial time of computation. Note that our definition is slightly weaker than that of Balcazar et al., who further assume that the class should be closed under \(O(\cdot)\) [3].
The following theorem relates non-uniform complexity classes, based on
polynomial time of computation \(\mathbf{P}\) and reasonable advice bounds \(\mathcal{F}\), with classes of analog and evolving networks using weights inside \(K_{\mathrm{poly}}^{\mathcal{F}}\) and \(\bar{K}_{\mathrm{poly}}^{\mathcal{F}}\), respectively.
**Theorem 14**.: _Let \(\mathcal{F}\) be a class of reasonable advice bounds, and let \(K_{\mathrm{poly}}^{\mathcal{F}}\subseteq\Delta\) and \(\bar{K}_{\mathrm{poly}}^{\mathcal{F}}\subseteq\Sigma^{\omega}\) be the sets of Kolmogorov reals associated with \(\mathcal{F}\). Then_
\[\mathbf{P}/\mathcal{F}^{*}=\mathbf{ANN}\left[K_{\mathrm{poly}}^{\mathcal{F}}, \mathrm{poly}\right]=\mathbf{ENN}\left[\bar{K}_{\mathrm{poly}}^{\mathcal{F}}, \mathrm{poly}\right].\]
Proof.: We prove the first equality. By definition, \(\mathbf{P}/\mathcal{F}^{*}\) is the class of languages decided in polynomial time by some TM/A using any possible prefix advice of length \(f\in\mathcal{F}\), namely,
\[\mathbf{P}/\mathcal{F}^{*}=\mathbf{TMA}\left[\alpha\big{(}\Sigma^{\omega}, \mathcal{F}\big{)},\mathrm{poly}\right].\]
In addition, Corollary 6 ensures that
\[\mathbf{ANN}\left[K_{\mathrm{poly}}^{\mathcal{F}},\mathrm{poly}\right]= \mathbf{TMA}\left[\alpha\big{(}\bar{K}_{\mathrm{poly}}^{\mathcal{F}},\mathrm{ poly}\big{)},\mathrm{poly}\right].\]
Hence, we need to show that
\[\mathbf{TMA}\left[\alpha\big{(}\Sigma^{\omega},\mathcal{F}\big{)},\mathrm{ poly}\right]=\mathbf{TMA}\left[\alpha\big{(}\bar{K}_{\mathrm{poly}}^{\mathcal{F}}, \mathrm{poly}\big{)},\mathrm{poly}\right]. \tag{5}\]
Equation 5 can be understood as follows: in polynomial time of computation, the TM/As using small advices (of size \(\mathcal{F}\)) are equivalent to those using larger but compressible advices (of size \(\mathrm{poly}\) and inside \(\bar{K}_{\mathrm{poly}}^{\mathcal{F}}\)).
For the sake of simplicity, we suppose that the polynomial time of computation of the TM/As are clear from the context by introducing the following abbreviations:
\[\mathbf{TMA}\left[\alpha\big{(}\Sigma^{\omega},\mathcal{F}\big{)},\mathrm{poly}\right] :=\mathbf{TMA}\left[\alpha\big{(}\Sigma^{\omega},\mathcal{F}\big{)}\right]\] \[\mathbf{TMA}\left[\alpha\big{(}\bar{K}_{\mathrm{poly}}^{\mathcal{F}},\mathrm{poly}\big{)},\mathrm{poly}\right] :=\mathbf{TMA}\left[\alpha\big{(}\bar{K}_{\mathrm{poly}}^{\mathcal{F}},\mathrm{poly}\big{)}\right].\]
We show the backward inclusion of Eq. (5). Let
\[L\in\mathbf{TMA}\left[\alpha\big{(}\bar{K}_{\mathrm{poly}}^{\mathcal{F}}, \mathrm{poly}\big{)}\right].\]
Then, there exists some TM/A \(\mathcal{M}\) using advice \(\alpha(\bar{r},p_{1})\), where \(\bar{r}\in\bar{K}_{\mathrm{poly}}^{\mathcal{F}}\) and
\(p_{1}\in\text{poly}\), deciding \(L\) in time \(p_{2}\in\text{poly}\). Since \(\bar{r}\in\bar{K}_{\text{poly}}^{\mathcal{F}}\), there exist \(\beta\in\Sigma^{\omega}\), \(f\in\mathcal{F}\) and \(p_{3}\in\text{poly}\) such that the \(p_{1}(n)\) bits of \(\bar{r}\) can be computed from the \(f(p_{1}(n))\) bits of \(\beta\) in time \(p_{3}(p_{1}(n))\). Hence, the TM/A \(\mathcal{M}\) can be simulated by the TM/A \(\mathcal{M}^{\prime}\) with advice \(\alpha(\beta,f\circ p_{1})\) working as follows: on every input \(w\in\Sigma^{n}\), \(\mathcal{M}^{\prime}\) first queries its advice string \(\beta_{0}\beta_{1}\cdots\beta_{f(p_{1}(n))-1}\), then reconstructs the advice \(r_{0}r_{1}\ldots r_{p_{1}(n)-1}\) in time \(p_{3}(p_{1}(n))\), and finally simulates the behavior of \(\mathcal{M}\) over input \(w\) in real time. Clearly, \(L(\mathcal{M}^{\prime})=L(\mathcal{M})=L\). In addition, \(p_{3}\circ p_{1}\in\text{poly}\), and since \(\mathcal{F}\) is a class of reasonable advice bounds, there is \(g\in\mathcal{F}\) such that \(f\circ p_{1}\leq g\). Therefore,
\[L\in\mathbf{TMA}\left[\alpha(\beta,g)\right]\subseteq\mathbf{TMA}\left[\alpha \big{(}\Sigma^{\omega},\mathcal{F}\big{)}\right].\]
We now prove the forward inclusion of Eq. (5). Let
\[L\in\mathbf{TMA}\left[\alpha\big{(}\Sigma^{\omega},\mathcal{F}\big{)}\right].\]
Then, there exists some TM/A \(\mathcal{M}\) with advice \(\alpha(\bar{r},f)\), with \(\bar{r}\in\Sigma^{\omega}\) and \(f\in\mathcal{F}\), deciding \(L\) in time \(p_{1}\in\text{poly}\). Since \(\mathcal{F}\) is a class of reasonable advice bounds, there exists \(g\in\mathcal{F}\) such that \(f\leq g\) and \(g\) is computable in polynomial time. We now define \(\bar{s}\in\bar{K}_{\text{poly}}^{\mathcal{F}}\) using \(\bar{r}\) and \(g\) as follows: for each \(i\geq 0\), let \(\bar{r}_{i}\) be the sub-word of \(\bar{r}\) defined by
\[\bar{r}_{i}=\begin{cases}r_{0}r_{1}\cdots r_{g(0)-1}&\text{if }i=0\\ r_{g(i-1)}r_{g(i-1)+1}\cdots r_{g(i)-1}&\text{if }i>0\end{cases}\]
and let
\[\bar{s}=\bar{r}_{0}0\bar{r}_{1}0\bar{r}_{2}0\cdots.\]
Given the \(g(n)\) first bits of \(\bar{r}\), we can build the \(g(n)+n\geq n\) first bits of \(\bar{s}\) by computing \(g(i)\) and the corresponding block \(\bar{r}_{i}\) (which can be empty) for all \(i\leq n\), and then intertwining those with \(0\)'s. This process can be done in polynomial time, since \(g\) is computable in polynomial time. Therefore, \(\bar{s}\in\bar{K}_{\text{poly}}^{\mathcal{F}}\).
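A small Python sketch of this block construction (ours; the bound \(g(n)=\lfloor\sqrt{n}\rfloor\) is an arbitrary polynomial-time computable choice with \(g(n)\leq n\)): it builds a prefix of \(\bar{s}\) from the first \(g(n)\) bits of \(\bar{r}\) and recovers \(r_{0}\cdots r_{g(n)-1}\) by dropping the padding zeros.

```python
from math import isqrt

def g(n):                      # a sample reasonable advice bound: g(n) = floor(sqrt(n)) <= n
    return isqrt(n)

def build_s_prefix(r, n):
    """Prefix of s = r_0 0 r_1 0 r_2 0 ..., built from the first g(n) bits of r."""
    s = []
    for i in range(n + 1):
        lo = g(i - 1) if i > 0 else 0
        block = r[lo:g(i)]     # the block \bar r_i (possibly empty)
        s.extend(block + [0])  # intertwine with a separating 0
    return s

def recover_r_prefix(s, n):
    """Recover r_0 ... r_{g(n)-1} from a prefix of s by dropping the padding 0s."""
    r, pos = [], 0
    for i in range(n + 1):
        lo = g(i - 1) if i > 0 else 0
        width = g(i) - lo
        r.extend(s[pos:pos + width])
        pos += width + 1       # skip the separating 0
    return r[:g(n)]

r = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # arbitrary prefix of some advice word
n = 9
s = build_s_prefix(r, n)
assert recover_r_prefix(s, n) == r[:g(n)]
```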
Let \(q(n)=2n\). Since \(\mathcal{F}\) is a class of reasonable advice bounds, \(g(n)\leq n\), and thus \(q(n)=2n\geq g(n)+n\). Now, consider the TM/A \(\mathcal{M}^{\prime}\) with advice \(\alpha(\bar{s},q)\) working as follows. On every input \(w\in\Sigma^{n}\), the machine \(\mathcal{M}^{\prime}\) first queries its advice \(\alpha(\bar{s},q)(n)=s_{0}s_{1}\cdots s_{q(n)-1}\). Then, \(\mathcal{M}^{\prime}\) reconstructs the string \(r_{0}r_{1}\cdots r_{g(n)-1}\) by computing \(g(i)\) and then removing \(n\)\(0\)'s from \(\alpha(\bar{s},q)(n)\)
at positions \(g(i)+i\), for all \(i\leq n\). This is done in polynomial time, since \(g\) is computable in polynomial time. Finally, \(\mathcal{M}^{\prime}\) simulates \(\mathcal{M}\) with advice \(r_{0}r_{1}\cdots r_{g(n)-1}\) in real time. Clearly, \(L(\mathcal{M}^{\prime})=L(\mathcal{M})=L\). Therefore,
\[L\in\mathbf{TMA}\left[\alpha(\bar{s},q)\right]\subseteq\mathbf{TMA}\left[ \alpha\big{(}\bar{K}_{\mathrm{poly}}^{\mathcal{F}},\mathrm{poly}\big{)}\right].\]
The property just established, together with Corollary 10, proves the second equality.
We now prove the analogue of Theorem 14 for the case of probabilistic complexity classes and machines. In this case, however, the class of advice bounds no longer corresponds exactly to the Kolmogorov space bounds of the real probabilities. Instead, a logarithmic correcting factor needs to be introduced. Given some class of functions \(\mathcal{F}\), we let \(\mathcal{F}\circ\log\) denote the set \(\{f\circ\log\mid f\in\mathcal{F}\}\).
**Theorem 15**.: _Let \(\mathcal{F}\) be a class of reasonable advice bounds, then_
\[\mathbf{BPP}/(\mathcal{F}\circ\log)^{*}=\mathbf{SNN}\left[K_{\mathrm{poly}}^{ \mathcal{F}},\mathrm{poly}\right].\]
Proof.: By definition, \(\mathbf{BPP}/(\mathcal{F}\circ\log)^{*}\) is the class of languages decided by PTM/A using any possible prefix advice of length \(f\in\mathcal{F}\circ\log\):
\[\mathbf{BPP}/(\mathcal{F}\circ\log)^{*}=\mathbf{PTMA}\left[\alpha\big{(} \Sigma^{\omega},\mathcal{F}\circ\log\big{)},\mathrm{poly}\right].\]
Moreover, Corollary 13 ensures that
\[\mathbf{SNN}\left[K_{\mathrm{poly}}^{\mathcal{F}},\mathrm{poly}\right]= \mathbf{PTMA}\left[\alpha\big{(}\bar{K}_{\mathrm{poly}}^{\mathcal{F}},\log \big{)},\mathrm{poly}\right].\]
Hence, we need to prove the following equality:
\[\mathbf{PTMA}\left[\alpha\big{(}\bar{K}_{\mathrm{poly}}^{\mathcal{F}},\log \big{)},\mathrm{poly}\right]=\mathbf{PTMA}\left[\alpha\big{(}\Sigma^{\omega}, \mathcal{F}\circ\log\big{)},\mathrm{poly}\right] \tag{6}\]
We first prove the forward inclusion of Eq. 6. Let
\[L\in\mathbf{PTMA}\left[\alpha\big{(}\bar{K}_{\mathrm{poly}}^{\mathcal{F}}, \log\big{)},\mathrm{poly}\right].\]
Then, there exists some PTM/A \(\mathcal{M}\) using advice \(\alpha(\bar{r},c\log)\), where \(\bar{r}\in\bar{K}_{\mathrm{poly}}^{\mathcal{F}}\)
and \(c>0\), that decides \(L\) in time \(p_{1}\in\mathrm{poly}\). Since \(\bar{r}\in\bar{K}^{\mathcal{F}}_{\mathrm{poly}}\), there exist \(\beta\in\Sigma^{\omega}\) and \(f\in\mathcal{F}\) such that \(\bar{r}[0:n-1]\) can be computed from \(\beta[0:f(n)-1]\) in time \(p_{2}(n)\in\mathrm{poly}\), for all \(n\geq 0\). Consider the PTM/A \(\mathcal{M}^{\prime}\) with advice \(\alpha(\beta,f\circ c\log)\) working as follows. First, \(\mathcal{M}^{\prime}\) queries its advice \(\beta[0:f(c\log(n))-1]\), then it computes \(\bar{r}[0:c\log(n)-1]\) from this advice in time \(p_{2}(\log(n))\), and finally it simulates \(\mathcal{M}\) with advice \(\bar{r}[0:c\log(n)-1]\) in real time. Consequently, \(\mathcal{M}^{\prime}\) decides the same language \(L\) as \(\mathcal{M}\), and works in time \(O(p_{1}+p_{2}\circ\log)\in\mathrm{poly}\). Therefore,
\[L\in\mathbf{PTMA}\left[\alpha\big{(}\Sigma^{\omega},\mathcal{F}\circ\log \big{)},\mathrm{poly}\right].\]
We now prove the backward inclusion of Eq. 6. Let
\[L\in\mathbf{PTMA}\left[\alpha\big{(}\Sigma^{\omega},\mathcal{F}\circ\log \big{)},\mathrm{poly}\right].\]
Then, there exists some PTM/A \(\mathcal{M}\) using advice \(\alpha(\bar{r},f\circ c\log)\), where \(\bar{r}\in\Sigma^{\omega}\), \(f\in\mathcal{F}\) and \(c>0\), that decides \(L\) in time \(p_{1}\in\mathrm{poly}\). Using the same argument as in the proof of Theorem 14, there exist \(\bar{s}\in\bar{K}^{\mathcal{F}}_{\mathrm{poly}}\) and \(g\in\mathcal{F}\) such that \(f\leq g\) and the smaller word \(\bar{r}[0:g(n)-1]\) can be retrieved from the larger one \(\bar{s}[0:2n-1]\) in time \(p_{2}(n)\in\mathrm{poly}\), for all \(n\geq 0\). Now, consider the PTM/A \(\mathcal{M}^{\prime}\) using advice \(\alpha(\bar{s},2c\log)\) and working as follows. First, \(\mathcal{M}^{\prime}\) queries its advice \(\bar{s}[0:2c\log(n)-1]\), then it reconstructs \(\bar{r}[0:g(c\log(n))-1]\) from this advice in time \(O(p_{2}(\log(n)))\), and finally, it simulates \(\mathcal{M}\) with advice \(\bar{r}[0:g(c\log(n))-1]\) in real time. Since \(\bar{r}[0:g(c\log(n))-1]\) extends \(\bar{r}[0:f(c\log(n))-1]\), \(\mathcal{M}^{\prime}\) and \(\mathcal{M}\) decide the same language \(L\). In addition, \(\mathcal{M}^{\prime}\) works in time \(O(p_{1}+p_{2}\circ\log)\in\mathrm{poly}\). Therefore,
\[L\in\mathbf{PTMA}\left[\alpha\big{(}\bar{K}^{\mathcal{F}}_{\mathrm{poly}},\log\big{)},\mathrm{poly}\right].\]
We now prove the separation of non-uniform complexity classes of the form \(\mathcal{C}/f\). Towards this purpose, we assume that each class of languages \(\mathcal{C}\) is defined on the basis of a set of machines that decide these languages. For the sake of simplicity, we naturally identify \(\mathcal{C}\) with its associated class of machines. For instance, \(\mathbf{P}\) and \(\mathbf{BPP}\) are identified with the set of Turing machines and probabilistic Turing machines working in polynomial time, respectively. In this context, we show that, as soon as the advice is increased by a single bit, the capabilities of the corresponding machines are also increased. To achieve this
result, the two following weak conditions are required. First, \(\mathcal{C}\) must contain the machines capable of reading their full inputs (of length \(n\)) and advices (of length \(f(n)\)) (otherwise, any additional advice bit would not change anything). Hence, \(\mathcal{C}\) must at least include the machines working in time \(O(n+f(n))\). Secondly, the advice length \(f(n)\) should be smaller than \(2^{n}\), for otherwise, the advice could encode any possible language, and the corresponding machine would have full power. The following result is proven for machines with general (i.e., non-prefix) advices, before being stated for the particular case of machines with prefix advices.
**Theorem 16**.: _Let \(f,g:\mathbb{N}\rightarrow\mathbb{N}\) be two increasing functions such that \(f(n)<g(n)\leq 2^{n}\), for all \(n\in\mathbb{N}\). Let \(\mathcal{C}\) be a set of machines containing the Turing machines working in time \(O(n+f(n))\). Then \(\mathcal{C}/f\subsetneq\mathcal{C}/g\)._
Proof.: Any Turing machine \(M\) with advice \(\alpha\) of size \(f\) can be simulated by some Turing machine \(M^{\prime}\) with an advice \(\alpha^{\prime}\) of size \(g>f\). Indeed, take \(\alpha^{\prime}(n)=(1)^{g(n)-f(n)-1}0\alpha(n)\). Then, on any input \(w\in\Sigma^{n}\), the machine \(M^{\prime}\) queries its advice \(\alpha^{\prime}(n)\), erases all \(1\)'s up to and including the first encountered \(0\), and then simulates \(M\) with advice \(\alpha(n)\). Clearly, \(M\) and \(M^{\prime}\) decide the same language.
To prove the strictness of the inclusion, we proceed by diagonalization. Recall that the set of (probabilistic) Turing machines is countable. Let \(M_{0},M_{1},M_{2},\ldots\) be an enumeration of the machines in \(\mathcal{C}\). For any \(M_{k}\in\mathcal{C}\) and any advice \(\alpha:\mathbb{N}\rightarrow\Sigma^{*}\) of size \(f\), let \(M_{k}/\alpha\) be the associated (probabilistic) machine with advice, and let \(L(M_{k}/\alpha)\) be its associated language. The language \(L(M_{k}/\alpha)\) can be written as the union of its sub-languages of words of length \(n\), i.e.
\[L(M_{k}/\alpha)=\bigcup_{n\in\mathbb{N}}L(M_{k}/\alpha)^{n}.\]
For each \(k,n\in\mathbb{N}\), consider the set of sub-languages of words of length \(n\) decided by \(M_{k}/\alpha\), for all possible advices \(\alpha\) of size \(f\), i.e.:
\[\mathcal{L}_{k}^{n}=\big{\{}L(M_{k}/\alpha)^{n}:\alpha\text{ is an advice of size }f\big{\}}.\]
Since there are at most \(2^{f(n)}\) advice strings of length \(f(n)\), it follows that \(|\mathcal{L}_{k}^{n}|\leq 2^{f(n)}\), for all \(k\in\mathbb{N}\), and in particular, that \(|\mathcal{L}_{n}^{n}|\leq 2^{f(n)}\). By working on the diagonal \(\mathcal{D}=\big{(}\mathcal{L}_{n}^{n}\big{)}_{n\in\mathbb{N}}\) of the sequence \(\big{(}\mathcal{L}_{k}^{n}\big{)}_{k,n\in\mathbb{N}}\) (illustrated in Table 1), we will build a language \(A=\bigcup_{n\in\mathbb{N}}A^{n}\) that cannot be decided by any Turing machine
in \(\mathcal{C}\) with advice of size \(f\), but can be decided by some Turing machine in \(\mathcal{C}\) with advice of size \(g\). It follows that \(A\in(\mathcal{C}/g)\setminus(\mathcal{C}/f)\), and therefore, \(\mathcal{C}/f\subsetneq\mathcal{C}/g\).
Let \(n\in\mathbb{N}\). For each \(i<2^{n}\), let \(b(i)\in\Sigma^{n}\) be the binary representation of \(i\) over \(n\) bits. For any subset \(\mathcal{L}\subseteq\mathcal{L}_{n}^{n}\), let
\[\mathcal{L}\big{(}b(i)\big{)}=\left\{L\in\mathcal{L}:b(i)\in L\right\}\ \ \text{and}\ \ \bar{\mathcal{L}}\big{(}b(i)\big{)}=\left\{L\in\mathcal{L}:b(i)\not\in L \right\}.\]
Consider the sequence \((\mathcal{L}_{n,0}^{n},\ldots,\mathcal{L}_{n,f(n)+1}^{n})\) of decreasing subsets of \(\mathcal{L}_{n}^{n}\) and the sequence \((A_{0}^{n},\ldots,A_{f(n)+1}^{n})\) of sub-languages of words of length \(n\) defined by induction for every \(0\leq i\leq f(n)\) as follows
\[\mathcal{L}_{n,0}^{n}=\mathcal{L}_{n}^{n}\ \ \text{and}\ \ \mathcal{L}_{n,i+1}^{n}=\begin{cases} \mathcal{L}_{n,i}^{n}\big{(}b(i)\big{)}&\text{if }|\mathcal{L}_{n,i}^{n}\big{(}b(i)\big{)}|<|\bar{ \mathcal{L}}_{n,i}^{n}\big{(}b(i)\big{)}|\\ \bar{\mathcal{L}}_{n,i}^{n}\big{(}b(i)\big{)}&\text{otherwise}\end{cases}\]
\[A_{0}^{n}=\emptyset\ \ \text{and}\ \ A_{i+1}^{n}=\begin{cases}A_{i}^{n}\cup\{b(i)\}&\text{if }|\mathcal{L}_{n,i}^{n}\big{(}b(i)\big{)}|<|\bar{\mathcal{L}}_{n,i}^{n}\big{(}b(i)\big{)}|\\ A_{i}^{n}&\text{otherwise}.\end{cases}\]
This construction is illustrated in Figure 3.
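For readers who prefer an operational view, the following Python sketch (our own notation and data layout, not taken from the paper) carries out the halving construction above on an explicit finite collection of sub-languages and checks that the resulting \(A^{n}\) differs from every language in the collection.

```python
# Hypothetical illustration of the halving construction of Theorem 16;
# function and variable names are ours, not the paper's.
def diagonal_language(languages, n, steps):
    """Build A^n differing from every language in `languages`.

    `languages` is a collection of sets of n-bit strings with
    len(languages) <= 2**(steps - 1), so that `steps` halvings empty it.
    """
    remaining = list(languages)
    A = set()
    for i in range(steps):                      # i = 0, ..., f(n)
        b_i = format(i, f"0{n}b")               # n-bit representation b(i)
        with_b = [L for L in remaining if b_i in L]
        without_b = [L for L in remaining if b_i not in L]
        if len(with_b) < len(without_b):
            remaining = with_b                  # keep the strictly smaller half ...
            A.add(b_i)                          # ... and let A agree with it on b(i)
        else:
            remaining = without_b
    return A

# Example: 4 candidate sub-languages over 3-bit words and steps = f(n)+1 = 3.
candidates = [set(), {"000"}, {"001"}, {"000", "001"}]
A = diagonal_language(candidates, n=3, steps=3)
assert all(A != L for L in candidates)
```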
Note that the \(n\)-bit representation \(b(i)\) of \(i\) is well-defined, since \(0\leq i\leq f(n)<2^{n}\). In addition, the construction ensures that \(|\mathcal{L}_{n,i+1}^{n}|\leq\frac{1}{2}|\mathcal{L}_{n,i}^{n}|\), and since \(|\mathcal{L}_{n,0}^{n}|=|\mathcal{L}_{n}^{n}|\leq 2^{f(n)}\), it follows that \(|\mathcal{L}_{n,f(n)+1}^{n}|=0\), meaning that \(\mathcal{L}_{n,f(n)+1}^{n}=\emptyset\). Furthermore, the construction also ensures that \(\mathcal{L}_{n,i+1}^{n}\subseteq\mathcal{L}_{n,i}^{n}\) and that \(A_{f(n)+1}^{n}\in\mathcal{L}_{n,i}^{n}\) implies \(A_{f(n)+1}^{n}\in\mathcal{L}_{n,i+1}^{n}\), for all \(0\leq i\leq f(n)\), since the membership of \(b(i)\) in \(A_{f(n)+1}^{n}\) matches that of the languages retained in \(\mathcal{L}_{n,i+1}^{n}\). Now, towards a contradiction, suppose
that \(A^{n}_{f(n)+1}\in\mathcal{L}^{n}_{n}\). The previous properties imply that
\[A^{n}_{f(n)+1}\in\bigcap_{0\leq i\leq f(n)+1}\mathcal{L}^{n}_{n,i}=\mathcal{L}^{n}_{n,f(n)+1}=\emptyset\]
which is a contradiction. Therefore, \(A^{n}_{f(n)+1}\not\in\mathcal{L}^{n}_{n}\), for all \(n\in\mathbb{N}\).
Now, consider the language
\[A=\bigcup_{n\in\mathbb{N}}A^{n}_{f(n)+1}.\]
By construction, \(A^{n}_{f(n)+1}\) is the set of words of length \(n\) of \(A\), meaning that \(A^{n}_{f(n)+1}=A^{n}\), for all \(n\in\mathbb{N}\). We now show that \(A\) cannot be decided by any machine in \(\mathcal{C}\) with advice of size \(f\). Towards a contradiction, suppose that \(A\in\mathcal{C}/f\). Then, there exist \(M_{k}\in\mathcal{C}\) and \(\alpha:\mathbb{N}\to\Sigma^{*}\) of size \(f\) such that \(L(M_{k}/\alpha)=A\). On the one hand, the definition of \(\mathcal{L}^{k}_{k}\) ensures that \(L(M_{k}/\alpha)^{k}\in\mathcal{L}^{k}_{k}\). On the other hand, \(L(M_{k}/\alpha)^{k}=A^{k}=A^{k}_{f(k)+1}\not\in\mathcal{L}^{k}_{k}\), which is a contradiction. Therefore, \(A\not\in\mathcal{C}/f\).
We now show that \(A\in\mathcal{C}/g\). Consider the advice function \(\alpha:\mathbb{N}\to\Sigma^{*}\) of
size \(g=f+1\) given by \(\alpha(n)=\alpha_{0}^{n}\alpha_{1}^{n}\cdots\alpha_{f(n)}^{n}\), where
\[\alpha_{i}^{n}=\begin{cases}1&\text{if }b(i)\in A^{n}\\ 0&\text{otherwise},\end{cases}\]
for all \(0\leq i\leq f(n)\). Note that the advice string \(\alpha(n)\) encodes the sub-language \(A^{n}\), for all \(n\in\mathbb{N}\), since the latter is a subset of \(\{b(i):i\leq f(n)\}\) by definition. Consider the Turing machine with advice \(M/\alpha\) which, on every input \(w=w_{0}w_{1}\cdots w_{n-1}\) of length \(n\), moves its advice head up to the \(i\)-th bit \(\alpha_{i}^{n}\) of \(\alpha(n)\), where \(i=b^{-1}(w)\), if this bit exists (note that \(i<2^{n}\) and \(|\alpha(n)|=f(n)+1\)), and accepts \(w\) if and only if \(\alpha_{i}^{n}=1\). Note that these instructions can be computed in time \(O(f(n)+n)\). In particular, moving the advice head up to the \(i\)-th bit of \(\alpha(n)\) does not require to compute \(i=b^{-1}(w)\) explicitly, but can be achieved by moving simultaneously the input head from the end of the input to the beginning and the advice head from left to right, in a suitable way. It follows that
\[w\in L(M/\alpha)^{n}\ \ \text{iff}\ \ \alpha_{b^{-1}(w)}^{n}=1\ \ \text{iff}\ \ b(b^{-1}(w))\in A^{n}\ \ \text{iff}\ \ w\in A^{n}.\]
Hence, \(L(M/\alpha)^{n}=A^{n}\), for all \(n\in\mathbb{N}\), and thus
\[L(M/\alpha)=\bigcup_{n\in\mathbb{N}}L(M/\alpha)^{n}=\bigcup_{n\in\mathbb{N}}A ^{n}=A.\]
Therefore, \(A\in\mathcal{C}/g\). The argument can be generalized in a straightforward way to any advice size \(g\) such that \(f(n)+1\leq g(n)\leq 2^{n}\).
Finally, the two properties \(A\not\in\mathcal{C}/f\) and \(A\in\mathcal{C}/g\) imply that \(\mathcal{C}/f\subsetneq\mathcal{C}/g\).
We now prove the analogous of Theorem 16 for the case of machines with prefix advice. In this case, however, a stronger condition on the advice lengths \(f\) and \(g\) is required: \(f\in o(g)\) instead of \(f\leq g\).
**Theorem 17**.: _Let \(f,g:\mathbb{N}\to\mathbb{N}\) be two increasing functions such that \(f\in o(g)\) and \(\lim_{n\to\infty}g(n)=+\infty\). Let \(\mathcal{C}\) be a set of machines containing the Turing machines working in time \(O(n+f(n))\). Then \(\mathcal{C}/f^{*}\subsetneq\mathcal{C}/g^{*}\)._
Proof.: The proof is similar to that of Theorem 16, except that we will construct the language \(A\) on the basis of a sequence of integers \((n_{i})_{i\in\mathbb{N}}\). Consider the
sequence \((n_{i})_{i\in\mathbb{N}}\) defined for all \(i\geq 0\) as follows
\[n_{0} = \min\left\{n\in\mathbb{N}:2(f(n)+1)\leq g(n)\right\}\] \[n_{i+1} = \min\Big{\{}n\in\mathbb{N}:2\sum_{j=0}^{i}(f(n_{j})+1)+2(f(n)+1) \leq g(n)\Big{\}}.\]
We show by induction that the sequence \((n_{i})_{i\in\mathbb{N}}\) is well-defined. Since \(f\in o(g)\) and \(\lim_{n\to\infty}g(n)=+\infty\), the following limits hold
\[\lim_{n\to\infty}\frac{2(f(n)+1)}{g(n)} = 2\lim_{n\to\infty}\frac{f(n)}{g(n)}+\lim_{n\to\infty}\frac{2}{g( n)}=0\] \[\lim_{n\to\infty}\frac{2\sum_{j=0}^{i}(f(n_{j})+1)+2(f(n)+1)}{g( n)} = 2\lim_{n\to\infty}\frac{f(n)}{g(n)}+\lim_{n\to\infty}\frac{C+2}{g( n)}=0\]
where \(C=2\sum_{j=0}^{i}(f(n_{j})+1)\), which ensure that \(n_{0}\) and \(n_{i+1}\) are well-defined.
For each \(k,i\in\mathbb{N}\), consider the set of sub-languages of words of length \(n_{i}\) decided by the machine \(M_{k}\in\mathcal{C}\) using any possible advice \(\alpha\) of size \(f\), i.e.,
\[\mathcal{L}_{k}^{n_{i}}=\left\{L(M_{k}/\alpha)^{n_{i}}:\text{$\alpha$ is an advice of size $f$}\right\}.\]
Consider the diagonal \(\mathcal{D}=\left(\mathcal{L}_{i}^{n_{i}}\right)_{i\in\mathbb{N}}\) of the set \(\left(\mathcal{L}_{k}^{n_{i}}\right)_{k,n_{i}\in\mathbb{N}}\). Since there are at most \(2^{f(n_{i})}\) advice strings of length \(f(n_{i})\), it follows that \(|\mathcal{L}_{i}^{n_{i}}|\leq 2^{f(n_{i})}\). Using a similar construction as in the proof of Theorem 16, we can define by induction a sub-language \(A_{f(n_{i})+1}^{n_{i}}\subseteq\Sigma^{n_{i}}\) such that \(A_{f(n_{i})+1}^{n_{i}}\not\in\mathcal{L}_{i}^{n_{i}}\). Then, consider the language
\[A=\bigcup_{i\in\mathbb{N}}A_{f(n_{i})+1}^{n_{i}}=\bigcup_{i\in\mathbb{N}}A^{n_ {i}}.\]
Once again, a similar argument as in the proof of Theorem 16 ensures that \(A\not\in\mathcal{C}/f\). Since \(\mathcal{C}/f^{*}\subseteq\mathcal{C}/f\), it follows that \(A\not\in\mathcal{C}/f^{*}\).
We now show that \(A\in\mathcal{C}/g^{*}\). Recall that, by construction, \(A^{n_{i}}\subseteq\{b(j):0\leq j\leq f(n_{i})\}\). Consider the word homomorphism \(h:\Sigma^{*}\to\Sigma^{*}\) induced by the mapping \(0\mapsto 00\) and \(1\mapsto 11\), and define the symbol \(\#=01\). For each \(i\in\mathbb{N}\), consider the encoding \(\beta_{0}^{n_{i}}\beta_{1}^{n_{i}}\cdots\beta_{f(n_{i})}^{n_{i}}\) of \(A^{n_{i}}\) given by
\[\beta_{j}^{n_{i}}=\begin{cases}1&\text{if $b(j)\in A^{n_{i}}$},\\ 0&\text{otherwise},\end{cases}\]
for all \(0\leq j\leq f(n_{i})\), and let \(\beta(n_{i})=h(\beta_{0}^{n_{i}}\beta_{1}^{n_{i}}\cdots\beta_{f(n_{i})}^{n_{i}})\). Note that \(|\beta(n_{i})|=2(f(n_{i})+1)\). Now, consider the advice function \(\alpha:\mathbb{N}\to\Sigma^{*}\) given by the concatenation of the encodings of the successive \(A^{n_{i}}\) separated by symbols \(\#\). Formally,
\[\alpha(n)=\begin{cases}\beta(n_{0})\#\beta(n_{1})\#\cdots\#\beta(n_{i})&\text {if $n=n_{i}$ for some $i\geq 0$}\\ \beta(n_{0})\#\beta(n_{1})\#\cdots\#\beta(n_{i})\#&\text{if $n_{i}<n<n_{i+1}$.} \end{cases}\]
Note that \(|\alpha(n)|=2\sum_{j=0}^{i}(f(n_{j})+1)\leq g(n_{i})\leq g(n)\) for \(n_{i}\leq n<n_{i+1}\), and that \(\alpha\) satisfies the prefix property: \(m\leq n\) implies \(|\alpha(m)|\leq|\alpha(n)|\). If necessary, the advice strings can be extended by dummy symbols \(10\) in order to achieve the equality \(|\alpha(n)|=g(n)\), for all \(n\geq 0\) (assuming without loss of generality that \(g(n)\) is even). Now, consider the machine with advice \(M/\alpha\) which, on every input \(w\) of length \(n\), first reads its advice string \(\alpha(n)\) up to the end. If the last symbol of \(\alpha(n)\) is \(\#\), then it means that \(|w|\neq n_{i}\) for all \(i\geq 0\), and the machine rejects \(w\). Otherwise, the input is of length \(n_{i}\) for some \(i\geq 0\). Hence, the machine moves its advice head back up to the last \(\#\) symbol, and then moves one step to the right. At this point, the advice head points at the beginning of the advice substring \(\beta(n_{i})\). Then, the machine decodes \(\beta_{0}^{n_{i}}\beta_{1}^{n_{i}}\cdots\beta_{f(n_{i})}^{n_{i}}\) from \(\beta(n_{i})\) by removing one out of two bits. Next, as in the proof of Theorem 16, the machine moves its advice head up to the \(j\)-th bit \(\beta_{j}^{n_{i}}\), where \(j=b^{-1}(w)\), if this bit exists (note that \(j<2^{n_{i}}\) and the decoded string has length \(f(n_{i})+1\)), and accepts \(w\) if and only if \(\beta_{j}^{n_{i}}=1\). These instructions can be computed in time \(O(2\sum_{j=0}^{i}(f(n_{j})+1)+n_{i})\). It follows that \(w\in L(M/\alpha)^{n_{i}}\) iff \(w\in A^{n_{i}}\). Thus \(L(M/\alpha)^{n_{i}}=A^{n_{i}}\), for all \(i\in\mathbb{N}\), and hence
\[L(M/\alpha)=\bigcup_{i\in\mathbb{N}}L(M/\alpha)^{n_{i}}=\bigcup_{i\in\mathbb{ N}}A^{n_{i}}=A\]
Therefore, \(A\in\mathcal{C}/g^{*}\).
Finally, the two properties \(A\not\in\mathcal{C}/f^{*}\) and \(A\in\mathcal{C}/g^{*}\) imply that \(\mathcal{C}/f^{*}\subsetneq\mathcal{C}/g^{*}\).
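As an aside, the prefix advice built in this proof is easy to make concrete. The sketch below (helper names are ours; the paper gives no code) encodes the membership bit strings of the successive \(A^{n_{i}}\) with the homomorphism \(h\) and the separator \(\#=01\), and decodes them by reading the advice two bits at a time, which is what makes the parsing unambiguous.

```python
# Illustration of the advice encoding used in the proof of Theorem 17; the helper
# names are hypothetical, only the encoding scheme itself comes from the text.
def encode_advice(membership_strings):
    """Each string beta_0...beta_{f(n_i)} is doubled bitwise (h: 0->00, 1->11)
    and blocks are joined by the separator '#' = '01'."""
    blocks = ["".join(2 * b for b in s) for s in membership_strings]
    return "01".join(blocks)

def decode_advice(advice_string):
    """Read the advice two bits at a time, as the machine with advice does."""
    pairs = [advice_string[i:i + 2] for i in range(0, len(advice_string), 2)]
    blocks, current = [], []
    for p in pairs:
        if p == "01":                 # separator '#': close the current block
            blocks.append("".join(current))
            current = []
        elif p in ("00", "11"):       # doubled data bit: undo h
            current.append(p[0])
        # "10" would be dummy padding and is simply skipped
    blocks.append("".join(current))
    return blocks

betas = ["101", "0110"]               # membership bits encoding A^{n_0} and A^{n_1}
assert decode_advice(encode_advice(betas)) == betas
```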
The separability between classes of analog, evolving, and stochastic recurrent neural networks using real weights, evolving weights, and probabilities of different Kolmogorov complexities, respectively, can now be obtained.
**Corollary 18**.: _Let \(\mathcal{F}\) and \(\mathcal{G}\) be two classes of reasonable advice bounds such
_that there exists \(g\in\mathcal{G}\) with \(f\in o(g)\) for every \(f\in\mathcal{F}\), and \(\underset{n\rightarrow\infty}{\lim}g(n)=+\infty\). Then_
1. \(\mathbf{ANN}\left[K^{\mathcal{F}}_{\mathrm{poly}},\mathrm{poly}\right]\subsetneq \mathbf{ANN}\left[K^{\mathcal{G}}_{\mathrm{poly}},\mathrm{poly}\right]\)__
2. \(\mathbf{ENN}\left[\bar{K}^{\mathcal{F}}_{\mathrm{poly}},\mathrm{poly}\right]\subsetneq \mathbf{ENN}\left[\bar{K}^{\mathcal{G}}_{\mathrm{poly}},\mathrm{poly}\right]\)__
3. \(\mathbf{SNN}\left[K^{\mathcal{F}}_{\mathrm{poly}},\mathrm{poly}\right]\subsetneq \mathbf{SNN}\left[K^{\mathcal{G}}_{\mathrm{poly}},\mathrm{poly}\right]\)__
Proof.: (i) and (ii): Theorem 14 shows that
\[\mathbf{P}/\mathcal{F}^{*} = \mathbf{ANN}\left[K^{\mathcal{F}}_{\mathrm{poly}},\mathrm{poly} \right]=\mathbf{ENN}\left[\bar{K}^{\mathcal{F}}_{\mathrm{poly}},\mathrm{poly} \right]\text{ \ and }\] \[\mathbf{P}/\mathcal{G}^{*} = \mathbf{ANN}\left[K^{\mathcal{G}}_{\mathrm{poly}},\mathrm{poly} \right]=\mathbf{ENN}\left[\bar{K}^{\mathcal{G}}_{\mathrm{poly}},\mathrm{poly} \right].\]
In addition, Theorem 17 ensures that
\[\mathbf{P}/\mathcal{F}^{*}\subsetneq\mathbf{P}/\mathcal{G}^{*}\]
The strict inclusions of Points (i) and (ii) directly follow.
(iii): Theorem 15 states that
\[\mathbf{BPP}/(\mathcal{F}\circ\log)^{*} = \mathbf{SNN}\left[K^{\mathcal{F}}_{\mathrm{poly}},\mathrm{poly}\right]\] \[\mathbf{BPP}/(\mathcal{G}\circ\log)^{*} = \mathbf{SNN}\left[K^{\mathcal{G}}_{\mathrm{poly}},\mathrm{poly} \right].\]
In addition, note that if \(f\in\mathcal{F}\) and \(g\in\mathcal{G}\) satisfy the hypotheses of Theorem 17, then so do \(f\circ l\in\mathcal{F}\circ\log\) and \(g\circ l\in\mathcal{G}\circ\log\), for all \(l\in\log\). Hence, Theorem 17 ensures that
\[\mathbf{BPP}/(\mathcal{F}\circ\log)^{*}\subsetneq\mathbf{BPP}/(\mathcal{G} \circ\log)^{*}.\]
The strict inclusion of Point (iii) ensues.
Finally, Corollary 18 provides a way to construct infinite hierarchies of classes of analog, evolving and stochastic neural networks based on the Kolmogorov complexity of their underlying weights and probabilities, respectively. The hierarchies of analog and evolving networks are located between \(\mathbf{P}\) and \(\mathbf{P}/\mathbf{poly}\). Those of stochastic networks lie between \(\mathbf{BPP}\) and \(\mathbf{BPP}/\mathbf{log}^{*}\).
For instance, define \(\mathcal{F}_{i}=O\left(\log(n)^{i}\right)\), for all \(i\in\mathbb{N}\). Each \(\mathcal{F}_{i}\) satisfies the three conditions for being a class of reasonable advice bounds (note that the
sub-linearity \(\log(n)^{i}\) is satisfied for \(n\) sufficiently large). By Corollary 18, the sequence of classes \((\mathcal{F}_{i})_{i\in\mathbb{N}}\) induces the following infinite strict hierarchies of classes of neural networks:
\[\mathbf{ANN}\left[K_{\mathrm{poly}}^{\mathcal{F}_{0}},\mathrm{poly}\right]\subsetneq\mathbf{ANN}\left[K_{\mathrm{poly}}^{\mathcal{F}_{1}},\mathrm{poly}\right]\subsetneq\mathbf{ANN}\left[K_{\mathrm{poly}}^{\mathcal{F}_{2}},\mathrm{poly}\right]\subsetneq\cdots\]
\[\mathbf{ENN}\left[\bar{K}_{\mathrm{poly}}^{\mathcal{F}_{0}},\mathrm{poly}\right]\subsetneq\mathbf{ENN}\left[\bar{K}_{\mathrm{poly}}^{\mathcal{F}_{1}},\mathrm{poly}\right]\subsetneq\mathbf{ENN}\left[\bar{K}_{\mathrm{poly}}^{\mathcal{F}_{2}},\mathrm{poly}\right]\subsetneq\cdots\]
\[\mathbf{SNN}\left[K_{\mathrm{poly}}^{\mathcal{F}_{0}},\mathrm{poly}\right]\subsetneq\mathbf{SNN}\left[K_{\mathrm{poly}}^{\mathcal{F}_{1}},\mathrm{poly}\right]\subsetneq\mathbf{SNN}\left[K_{\mathrm{poly}}^{\mathcal{F}_{2}},\mathrm{poly}\right]\subsetneq\cdots\]
We provide another example of hierarchy for stochastic networks only. In this case, it can be noticed that the third condition for being a class of reasonable advice bounds can be relaxed: we only need that \(\mathcal{F}\circ\log\) is a class of reasonable advice bounds and that the functions of \(\mathcal{F}\) are bounded by \(n\). Accordingly, consider some infinite sequence of rational numbers \((q_{i})_{i\in\mathbb{N}}\) such that \(0<q_{i}<1\) and \(q_{i}<q_{i+1}\), for all \(i\in\mathbb{N}\), and define \(\mathcal{F}_{i}=O(n^{q_{i}})\), for all \(i\in\mathbb{N}\). Each \(\mathcal{F}_{i}\) satisfies the required conditions. By Corollary 18, the sequence of classes \((\mathcal{F}_{i})_{i\in\mathbb{N}}\) induces the following infinite strict hierarchies of classes of neural networks:
\[\mathbf{SNN}\left[K_{\mathrm{poly}}^{\mathcal{F}_{0}},\mathrm{poly}\right]\subsetneq\mathbf{SNN}\left[K_{\mathrm{poly}}^{\mathcal{F}_{1}},\mathrm{poly}\right]\subsetneq\mathbf{SNN}\left[K_{\mathrm{poly}}^{\mathcal{F}_{2}},\mathrm{poly}\right]\subsetneq\cdots\]
## 7 Conclusion
We provided a refined characterization of the super-Turing computational power of analog, evolving, and stochastic recurrent neural networks based on the Kolmogorov complexity of their underlying real weights, rational weights, and real probabilities, respectively. For the two former models, infinite hierarchies of classes of analog and evolving networks lying between \(\mathbf{P}\) and \(\mathbf{P/poly}\) have been obtained. For the latter model, an infinite hierarchy of classes of stochastic networks located between \(\mathbf{BPP}\) and \(\mathbf{BPP/log}^{*}\) has been achieved. Beyond proving the existence and providing examples of such hierarchies, Corollary 18 establishes a generic way of constructing them based on classes of functions satisfying the reasonable advice bounds conditions.
This work is an extension of the study from Balcazar et al. [3] about a Kolmogorov-based hierarchization of analog neural networks. In particular, the proof of Theorem 14 draws heavily on their Theorem 6.2 [3]. In our paper however, we adopted a relatively different approach guided by the intention
of keeping the computational relationships between recurrent neural networks and Turing machines with advice as explicit as possible. In this regard, Propositions 5, 9 and 12 characterize precisely the connections between the real weights, evolving weights, or real probabilities of the networks, and the advices of different lengths of the corresponding Turing machines. On the contrary, the study of Balcazar et al. keeps these relationships somewhat hidden, by referring to an alternative model of computation: the Turing machines with tally oracles. Another difference between the two works is that our separability results (Theorem 16 and 17) are achieved by means of a diagonalization argument holding for any non-uniform complexity classes, which is a result of specific interest per se. In particular, our method does not rely on the existence of reals of high Kolmogorov complexity. A last difference is that our conditions for classes of reasonable advice bounds are slightly weaker than theirs.
The computational equivalence between stochastic neural networks and Turing machines with advice relies on the results from Siegelmann [85]. Our Proposition 12 is fairly inspired by their Lemma 6.3 [85]. Yet once again, while their latter Lemma concerns the computational equivalence between two models of bounded error Turing machines, our Proposition 12 describes the explicit relationship between stochastic neural networks and Turing machines with logarithmic advice. Less importantly, our appeal to union bound arguments allows for technical simplifications of the arguments presented in their Lemma.
The main message conveyed by our study is twofold: (1) the complexity of the real or evolving weights does matter for the computational power of analog and evolving neural networks; (2) the complexity of the source of randomness does also play a role in the capabilities of stochastic neural networks. These theoretical considerations contrast with the practical research path about approximate computing, which concerns the plethora of approximation techniques - among which precision scaling - that could sometimes lead to disproportionate gains in efficiency of the models [66]. In our context, the less compressible the weights or the source of stochasticity, the more information they contain, and in turn, the more powerful the neural networks employing them.
For future work, hierarchies based on different notions than the Kolmogorov complexity could be envisioned. In addition, the computational universality of echo state networks could be studied from a probabilistic perspective. Given some echo state network \(\mathcal{N}\) complying with well-suited conditions on its reservoir, and given some computable function \(f\), is it possible to find output weights such that \(\mathcal{N}\) computes like \(f\) with high probability?
Finally, the proposed study intends to bridge some gaps and present a unified view of the refined capabilities of analog, evolving and stochastic recurrent neural networks. The debatable question of the exploitability of the super-Turing computational power of neural networks lies beyond the scope of this paper, and fits within the philosophical approach to hypercomputation [22; 23; 84]. Nevertheless, we believe that the proposed study could contribute to the progress of analog computation [5].
## Acknowledgements
This research was partially supported by Czech Science Foundation, grant AppNeCo #GA22-02067S, institutional support RVO: 67985807.
|
2309.04728 | Transitions in echo index and dependence on input repetitions | The echo index counts the number of simultaneously stable asymptotic
responses of a nonautonomous (i.e. input-driven) dynamical system. It
generalizes the well-known echo state property for recurrent neural networks -
this corresponds to the echo index being equal to one. In this paper, we
investigate how the echo index depends on parameters that govern typical
responses to a finite-state ergodic external input that forces the dynamics. We
consider the echo index for a nonautonomous system that switches between a
finite set of maps, where we assume that each map possesses a finite set of
hyperbolic equilibrium attractors. We find the minimum and maximum repetitions
of each map are crucial for the resulting echo index. Casting our theoretical
findings in the RNN computing framework, we obtain that for small amplitude
forcing the echo index corresponds to the number of attractors for the
input-free system, while for large amplitude forcing, the echo index reduces to
one. The intermediate regime is the most interesting; in this region the echo
index depends not just on the amplitude of forcing but also on more subtle
properties of the input. | Peter Ashwin, Andrea Ceni | 2023-09-09T09:27:31Z | http://arxiv.org/abs/2309.04728v1 | # Transitions in echo index and dependence on input repetitions
###### Abstract
The _echo index_ counts the number of simultaneously stable asymptotic responses of a nonautonomous (i.e. input-driven) dynamical system. It generalizes the well-known _echo state property_ for recurrent neural networks - this corresponds to the echo index being equal to one. In this paper, we investigate how the echo index depends on parameters that govern typical responses to a finite-state ergodic external input that forces the dynamics. We consider the echo index for a nonautonomous system that switches between a finite set of maps, where we assume that each map possesses a finite set of hyperbolic equilibrium attractors. We find the minimum and maximum repetitions of each map are crucial for the resulting echo index. Casting our theoretical findings in the RNN computing framework, we obtain that for small amplitude forcing the echo index corresponds to the number of attractors for the input-free system, while for large amplitude forcing, the echo index reduces to one. The intermediate regime is the most interesting; in this region the echo index depends not just on the amplitude of forcing but also on more subtle properties of the input.
keywords: Nonautonomous dynamical system, Input-driven system, Multistability, Recurrent neural network, Echo state property.
###### Contents
* 1 Introduction
* 1.1 Input-driven dynamics and shift dynamics
* 1.2 Local attractors and the echo index
* 2 Sufficient conditions to determine minimum echo index
* 2.1 Subshifts with min-max repetitions of input sequences
* 2.2 Minimum echo for multiple attractor maps
* 2.3 Obstructions to conditions for echo consistency
* 3 Echo index dependence for RNNs
* 3.1 Switching dynamics for RNNs
* 3.2 An example of input-driven RNN with multiple attractors
* 3.3 Bifurcations of echo index
* 4 Conclusions
## 1 Introduction
One of the most pressing questions for artificial intelligence systems, is whether one can understand and query the reasons behind a decision made by such a system. The difficulty of answering this _explainability problem_[18] is reflected in the fact that a trained neural network is commonly referred to as a black box. This suggests it is important to try and understand the functioning of neural networks in decision making under input - it is important to _open the black box_[19] and nonlinear dynamics gives tools that can be used for this. As an example, [8] show that excitable network attractors can be used to understand function and malfunction of trained RNNs for certain tasks.
Recurrent neural networks (RNN) such as echo state networks [10; 15] can retain memory of internal states. In this case, it is important to view the system in an input-driven context [17]. A useful criterion for successful computation is the Echo State Property (ESP) [10], which holds if there is asymptotic loss of information about the internal state of the system and only the input is important to determine the output. However, as discussed in [16], the presence of the ESP depends not just on the system but on the particular input considered. As the input streams to an input-driven RNN will never be fully deterministic, this means there is no guarantee that the ESP will be satisfied in practice. Recent work has highlighted that RNNs that do not have the ESP may still be a useful model for understanding errors in neural networks [9], or in order to design multifunction RNNs that switch between different tasks [7; 4].
In this paper we develop ideas in [9] in more depth by considering parametrizations of input by repetition properties of the inputs. The remainder of this section discusses shift dynamics and introduces the echo index for discrete time dynamical systems. We discuss parametrization of inputs by symbol repetition.
Section 2 presents some conditions to give bounds on echo index and the ESP. Section 3 turns to a numerical example of an echo state RNN where we show how the echo index changes depending on min-max block-length and parameters of the input to the RNN. Section 4 concludes with a discussion, including barriers to strengthening the theoretical results of Section 2.
### Input-driven dynamics and shift dynamics
We consider properties of a discrete time input-driven dynamical system [17] of the form
\[x[k+1]=f(x[k],u[k]), \tag{1}\]
where time is indexed by \(k\in\mathbb{Z}\), states are \(x[k]\in X\subset\mathbb{R}^{n}\) and \(u[k]\in U\subset\mathbb{R}^{p}\) is an input sequence. We denote sequences in bold font, i.e.
\[\mathbf{x}=\{x[k]\}_{k\in\mathbb{Z}},\ \ \mathbf{u}=\{u[k]\}_{k\in\mathbb{Z}},\]
and space of sequences in calligraphic font, i.e. \(\mathcal{X}=X^{\mathbb{Z}}\), \(\mathcal{U}=U^{\mathbb{Z}}\), so that \(\mathbf{x}\in\mathcal{X}\), \(\mathbf{u}\in\mathcal{U}\). We assume that \(X\) and \(U\) are compact sets and assume that \(f:X\times U\to X\) is an update function that is continuous in both arguments, so that a forward orbit is defined. Note that for a given input sequence \(u[k]\), (1) can be thought of as a nonautonomous dynamical system [12]
\[x[k+1]=F(x[k],k), \tag{2}\]
where \(F:X\times\mathbb{R}\to X\) is defined by \(F(x,k)=f(x,u[k])\). The system (1) can also be viewed as a skew product dynamical system [12] over shift dynamics on the input sequence \(\mathbf{u}\in\mathcal{U}\).
We consider the special case where \(U\) is a finite set of \(M\) input values. Without loss of generality we denote \(U=\{0,\ldots,M-1\}\) and call elements of \(U\)_symbols_. We recall some standard concepts from the symbolic dynamics of shifts; see for example [1; 13; 6] for background and more details. We define the shift operator \(\sigma\) by
\[[\sigma(\mathbf{u})][k]=u[k+1].\]
such that \(\sigma:\mathcal{U}\to\mathcal{U}\). Consider a given subset of input sequences \(\mathcal{V}\subseteq\mathcal{U}\). We say \(\mathcal{V}\) is _shift-invariant_ if \(\sigma(\mathcal{V})=\mathcal{V}\) and call (\(\sigma\),\(\mathcal{U}\)) the full shift on \(U\). We consider the product topology on \(\mathcal{V}\) induced by the metric
\[d(\mathbf{u},\mathbf{v})=\sum_{k\in\mathbb{Z}}\frac{d_{U}(u[k],v[k])}{2^{|k|}} \tag{3}\]
where \(d_{U}(u,v)\) is a metric on \(U\). Note that \(\sigma\) acts continuously under such a topology.
If we assume that \(\mathcal{V}\) is shift-invariant and closed then one can lift the nonautonomous system (1) to a continuous _autonomous_ dynamical system on an extended space \(\mathcal{F}:X\times\mathcal{V}\to X\times\mathcal{V}\). This is given by
\[(x[k+1],\mathbf{u}[k+1])=\mathcal{F}(x[k],\mathbf{u}[k])=(f(x[k],u[k]),\sigma(\mathbf{u}[k])) \tag{4}\]
where we write \(\mathbf{u}[k]=\sigma^{k}\mathbf{u}\). The skew product nature of (4) means that composition can be written
\[\mathcal{F}^{n}(x,\mathbf{u})=(\Phi_{n,\mathbf{u}}(x),\sigma^{n}(\mathbf{u})) \tag{5}\]
for all \(n\in\mathbb{Z}_{0}^{+}\), where \(\Phi\) is a cocycle, namely \(\Phi_{0,\mathbf{u}}\) is the identity map on \(X\) and \(\Phi_{n+k,\mathbf{u}}=\Phi_{n,\sigma^{k}(\mathbf{u})}\circ\Phi_{k,\mathbf{u}}\) for any \(n,k\in\mathbb{Z}_{0}^{+}\). We write the forward orbit of \(x[0]\) driven by the input sequence \(\mathbf{u}\) in terms of this cocycle, as
\[x[k]=\Phi_{k,\mathbf{u}}(x[0]). \tag{6}\]
for any \(k\in\mathbb{Z}_{0}^{+}\). This can be extended to negative \(k\) if \(f\) is invertible.
There are many choices of \(\mathcal{V}\subset\mathcal{U}\) that may be used to characterise a set of possible input sequences; for convenience we only consider closed and shift-invariant subsets. Given any \(\mathbf{u}\in\mathcal{U}\) one can define a closed invariant subset in terms of its orbit closure
\[\mathcal{U}(\mathbf{u})=\overline{\{\sigma^{k}(\mathbf{u})\ :\ k\in\mathbb{Z}\}}\]
where \(\overline{\mathcal{V}}\) denotes the closure of \(\mathcal{V}\) in the topology of (3).
An important set of closed and shift-invariant subsets are _subshifts of finite type_ defined as follows: For any \(m\geq k\) we write \(u[k,m]=(u[k],u[k+1],\ldots,u[m])\) to denote a finite string or _word_ of symbols. Given a finite set \(\mathcal{B}\) of words we define
\[\mathcal{U}_{\mathcal{B}}=\{\mathbf{u}\in\mathcal{U}\ :\ u[k,m]\notin\mathcal{B}\text{ for all }k,m\in\mathbb{Z}\text{ with }k\leq m\},\]
namely the set of sequences that contain no word in the set \(\mathcal{B}\). This is called a _subshift of finite type_ with _forbidden words_\(\mathcal{B}\). Without loss of generality we can assume that \(\mathcal{B}\) is _minimal_ in the sense there is no proper subset \(\mathcal{B}^{\prime}\subset\mathcal{B}\) such that \(\mathcal{U}_{\mathcal{B}}=\mathcal{U}_{\mathcal{B}^{\prime}}\).
If \(\mathcal{U}_{\mathcal{B}}\) has a set of forbidden words of length at most \(k+1\) then we say it is a _\(k\)-step subshift_: knowing \(k\) consecutive symbols provides the only constraints on the next symbol. Note that by considering a higher block shift on overlapping words of length \(m\) it is possible to express such a \(\mathcal{U}\) as a \(1\)-step subshift of finite type [6].
We write system (1) in the form of an iterated function system
\[x[k+1]=f_{u[k]}(x[k]), \tag{7}\]
for \(k\in\mathbb{Z}\), where (7) describes the dynamics of switching between \(M\) autonomous dynamical systems.
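The switching structure of (7) is straightforward to simulate. The following minimal sketch (not from the paper; the maps and values are a toy example) iterates a two-map system in which the input symbol selects the map applied at each step.

```python
# Minimal sketch of the switched system (7): u[k] selects the map applied at step k.
def iterate_switched(maps, symbols, x0):
    """Return the trajectory x[0], x[1], ..., x[len(symbols)]."""
    traj = [x0]
    for s in symbols:
        traj.append(maps[s](traj[-1]))
    return traj

# Toy example: two contracting affine maps of the interval [-1, 1].
f0 = lambda x: 0.5 * x + 0.4      # unique attracting fixed point at x = 0.8
f1 = lambda x: 0.5 * x - 0.4      # unique attracting fixed point at x = -0.8
trajectory = iterate_switched([f0, f1], symbols=[0, 0, 1, 1, 0], x0=0.0)
```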
### Local attractors and the echo index
We recall a nonautonomous notion of local attractor of (1) from [9] for a fixed input sequence; this was used to propose a generalization of the notion of ESP for input-driven systems with finitely many local
attractors. For a given input sequence \(\mathbf{v}\in\mathcal{U}\), we say \(\mathbf{x}=\{x[k]\}_{k\in\mathbb{Z}}\) is an _entire solution_ if it is a trajectory for the input \(\mathbf{v}\) in forwards and backwards time; i.e. it satisfies (1) for all \(k\in\mathbb{Z}\).
Note that we do not require \(f_{i}\) to be invertible or surjective, and hence given some \(x[0]\in X\) there may be (a) multiple choices for \(x[-1]\) or (b) no choice for \(x[-1]\) that lies within \(X\). Note also that whether (a) or (b) hold will typically depend on \(v[k]\) for \(k<0\).
Note that there will always be a point \(x[0]\) that is on an entire solution \(\mathbf{x}\). In particular, the set
\[X_{-n,0}(\mathbf{v}):=\Phi_{n,\sigma^{-n}(\mathbf{v})}(X)\]
is well-defined, compact and non-empty as \(f(X,v)\subset X\) is a continuous image of a compact set. Moreover
\[X_{-n-1,0}(\mathbf{v})=\Phi_{n,\sigma^{-n}(\mathbf{v})}\circ f_{v[-n-1]}(X) \subset X_{-n,0}(\mathbf{v})\]
and hence the set
\[X_{0}(\mathbf{v}):=\bigcap_{n>0}X_{-n,0}(\mathbf{v})\]
consists of points that lie on entire solutions.
An entire solution \(\mathbf{x}\) is _globally (pullback) attracting_ for input \(\mathbf{v}\) if
\[\lim_{n\to\infty}h(X_{-n,0}(\mathbf{v}),x[0])=0\]
where \(h(A,B)=\sup_{y\in A}\inf_{z\in B}d(y,z)\). Following [16], we say that (1) for a given input \(\mathbf{v}\) has the _ESP_ if it has a unique entire solution \(\mathbf{x}\) that is globally attracting for this input. An equivalent condition is that there is a single point in \(X_{0}(\mathbf{v})\).
However, just as an autonomous dynamical system may be multistable, there may be more than one locally attracting entire solution. Moreover, the system (1) may have the echo state property for some inputs but not for others. This is explored in [9] where we define a notion of echo index as the smallest number of uniformly attracting entire solutions (UAESs) that attract almost all initial states of the system.
If there are a number \(m\) of UAESs \(\{\mathbf{x}_{1},\dots,\mathbf{x}_{m}\}\) such that they decompose1 the whole phase space in sets that are uniformly attracted to them, then we say the system has _echo index_\(m\) and write \(\mathcal{I}(\mathbf{v})=m\); see [9, Definitions 3.3 and 3.4] for formal definitions. Note that the echo index is shift-invariant, that is \(\mathcal{I}(\sigma(\mathbf{v}))=\mathcal{I}(\mathbf{v})\). We note that the echo index \(\mathcal{I}\) may take a number of values on a given closed shift-invariant set \(\mathcal{V}\subset\mathcal{U}\). In particular, we say \(\mathcal{V}\) has _consistent echo_\(n\) if \(\mathcal{I}(\mathbf{v})=n\) for all \(\mathbf{v}\in\mathcal{V}\).
Footnote 1: apart from a subset of zero Lebesgue measure.
## 2 Sufficient conditions to determine minimum echo index
We show that certain assumptions on the behaviour of individual maps and input sequences can be used to guarantee minimum echo index in terms of symbol repetitions for the input.
### Subshifts with min-max repetitions of input sequences
Suppose we have a finite set of symbols in \(U\) and consider a subshift \(U_{\mathcal{B}}\) of finite type defined by a set of prohibited words \(\mathcal{B}\). We consider a particular class of subshifts \(\mathcal{U}_{\mathcal{B}}\) that are characterised by minimum and maximum numbers of a repetition.
We say there is _min repetition \(m_{i}^{-}\) of symbol \(i\)_ if \(\mathcal{B}\) contains all words of the form
\[j\,i^{m}\,k\]
for all \(m<m_{i}^{-}\) and all \(j,k\neq i\). Similarly, we say there is _max repetition \(m_{i}^{+}\) of symbol \(i\)_ if \(\mathcal{B}\) contains the word
\[i^{m_{i}^{+}+1}.\]
We say \(m_{i}^{+}=\infty\) if there is no such word in \(\mathcal{B}\). We say the subshift of finite type \(\mathcal{U}_{\mathcal{B}}\) has _min-max repetitions \(m_{i}^{\pm}\)_ in this case.
For example, for the two symbols \(0\) and \(1\) consider the subshift with prohibited words
\[\mathcal{B}=\{010,0110,01110,1111111,101,1001\}.\]
In this case \(\mathcal{U}_{\mathcal{B}}\) has minimum \(3\) and maximum \(\infty\) repetitions of symbol \(0\), and minimum \(4\) and maximum \(6\) repetitions of symbol \(1\). Note that this subshift can be expressed as a \(1\)-step subshift on blocks of \(7\) consecutive symbols, which in its most general form will have \(2^{7}\) states and at most two arrows from each state. One can represent \(\mathcal{U}_{\mathcal{B}}\) with a smaller number of states by allowing multiple representations of the same state; this is shown in Figure 1.
If the subshift \(\mathcal{V}\) supports an ergodic shift-invariant measure then one can apply tools from ergodic theory [14; 1] or random dynamical systems [2]. In particular, if sequences are chosen with respect to an ergodic shift-invariant probability measure \(\mu\) on \(\mathcal{U}\) then one can use this to ignore annoying sequences in \(\mathcal{U}\) as long as they lie in a set that can be shown to have zero measure.
For example, given any set of non-zero probabilities \(P\) assigned to symbols \(U\), the Bernoulli product measure \(\mu=P^{\mathbb{Z}}\) assumes the probability of a symbol is given, independent of location, by the same distribution \(P\). This can be shown to be an ergodic measure that is zero for any set of sequences where the frequency of each symbol does not match the probability \(P\).
Figure 1: Graphical representation of a subshift that has minimum \(3\) repeats of \(0\) and between \(4\) and \(6\) repeats of \(1\). For this case we have min-max repetitions \(m_{0}^{-}=3\), \(m_{0}^{+}=\infty\), \(m_{1}^{-}=4\) and \(m_{1}^{+}=6\).
Note that there are many ergodic invariant measures corresponding to the topological subshift shown in Figure 1. By allocating transition probabilities one can define a Markov process that explores this subshift according to this measure. A useful example of this is the family of ergodic subshifts on two symbols
\[\mathcal{U}(m_{0}^{\pm},m_{1}^{\pm},p_{0},p_{1})\]
with minimum and maximum repeats \(m_{i}^{\pm}\) and repeat probability \(0<p_{i}<1\), for each additional repeat of \(i\) after the minimum number. Figure 2 illustrates an example of a family of measures that correspond to the topological subshift in Figure 1 and depends on the parameters \(p_{0}\) and \(p_{1}\).
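A sketch of how such an input sequence can be sampled is given below (our own two-symbol implementation of the Markov process of Figure 2): each block of symbol \(s\) has at least \(m_{s}^{-}\) repeats, and each further repeat, up to \(m_{s}^{+}\), occurs with probability \(p_{s}\).

```python
# Sketch of a sampler for the two-symbol min-max repetition process of Figure 2;
# parameter names follow the text, the implementation details are ours.
import random

def sample_input(length, m_minus, m_plus, p, seed=None):
    """Alternating blocks of 0s and 1s with min-max repetitions and repeat
    probabilities p[s] for each additional repeat beyond the minimum."""
    rng = random.Random(seed)
    seq, sym = [], rng.randrange(2)
    while len(seq) < length:
        run = m_minus[sym]
        while run < m_plus[sym] and rng.random() < p[sym]:
            run += 1
        seq.extend([sym] * run)
        sym = 1 - sym
    return seq[:length]

# e.g. at least 3 repeats of 0 (here capped at 40), between 4 and 6 repeats of 1
u = sample_input(200, m_minus=[3, 4], m_plus=[40, 6], p=[0.9, 0.95], seed=0)
```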
### Minimum echo for multiple attractor maps
We now discuss some testable (but restrictive) assumptions on the maps (7) that can be used to give bounds on echo index. Suppose \(X\subset\mathbb{R}^{n}\) is a compact \(n\)-dimensional manifold, write \(\ell(\cdot)\) to denote Lebesgue measure on this and consider the finite set of \(M\) maps \(f_{i}:X\to X\) for \(i=0,\ldots,M-1\).
**Assumption 2.1**.: _Suppose that for each \(i=0,\ldots,M-1\) we have:_
1. \(f_{i}\) _is a continuously differentiable map that is almost everywhere a local diffeomorphism._
2. \(f_{i}\) _has a finite number of hyperbolic stable fixed points_ \(x_{i}^{0},\ldots,x_{i}^{L(i)-1}\)_._
3. _The basins of attraction_ \(\mathcal{B}_{i}^{j}\) _of_ \(x_{i}^{j}\) _exhaust the state space in measure, i.e._ \[\ell(X\setminus\bigcup_{j}\mathcal{B}_{i}^{j})=0.\]
4. _There is a_ \(P(i,j,k)\in\{0,\ldots,L(k)-1\}\) _such that_ \[x_{i}^{j}\in\mathcal{B}_{k}^{P(i,j,k)}\] _for all_ \(k=0,\ldots,M-1\) _and_ \(j=0,\ldots,L(i)-1\)._
Figure 2: A Markov process for the topological subshift in Figure 1. For any choice of parameters \(0<p_{0}<1\) and \(0<p_{1}<1\) the transition probabilities shown generate a Markov process with min-max repetitions \(m_{0}^{-}=3\), \(m_{0}^{+}=\infty\), \(m_{1}^{-}=4\) and \(m_{1}^{+}=6\). For example, after an initial 3 repeats, symbol 0 will be repeated with probability \(p_{0}\) for each further repeat.
**Remark 2.1**.: _Condition (i) implies in particular that the pre-image of any set of zero Lebesgue measure under \(f_{i}\) also has zero measure. Condition (ii) implies that near each \(x_{i}^{j}\) some iterate of \(f_{i}\) is locally a contraction in some neighbourhood; we characterise this in Lemma 2.1. Condition (iii) means that \(\{\mathcal{B}_{i}^{j}\}_{j=1}^{L(i)}\) is a full measure partition for each \(i\). Condition (iv) means that \(x_{i}^{j}\) lies within the basin of an attractor for \(f_{k}\) but moreover it is a non-degeneracy assumption that means that no attractor for \(f_{i}\) is on the basin boundary for \(f_{k}\)._
We characterise the local contraction property more precisely:
**Lemma 2.1**.: _Suppose that Assumption 2.1 is satisfied. Then for each \(i,j\) and any choice of \(0<\rho<1\) there is a neighbourhood \(N_{i}^{j}\) of \(x_{i}^{j}\) and \(n_{i}^{j}\geq 1\) such that if_
\[F:=(f_{i}|_{N_{i}^{j}})^{k}\]
_for any \(k\geq n_{i}^{j}\) then \(F:N_{i}^{j}\to N_{i}^{j}\) has a unique fixed point at \(x_{i}^{j}\) and moreover \(F\) contracts by \(\rho\)_
\[\|F(x)-F(y)\|<\rho\|x-y\|.\]
**Proof.** The assumption of a hyperbolic attracting fixed point \(x_{i}^{j}\) implies linear stability, and hence that in any neighbourhood contained in \(\mathcal{B}_{i}^{j}\) some iterate of \(f_{i}\) will be a contraction. \(\square\)
Now consider a specific input sequence \(\mathbf{v}=\{v[k]\}_{k\in\mathbb{Z}}\). Given some choice of \(A[0]\in\{0,\ldots,L(v[0])-1\}\) and applying Assumption 2.1(iv) we have
\[x_{v[0]}^{A[0]}\in\mathcal{B}_{v[1]}^{P(v[0],A[0],v[1])}.\]
Hence, associated with a sequence \(\mathbf{v}\) and initial choice of attractor \(x_{v[0]}^{A[0]}\) there will be a unique sequence
\[\left\{x_{v[k]}^{A[k]}\right\}_{k\geq 0}\]
such that \(x_{v[k]}^{A[k]}\) is an attractor for \(f_{v[k]}\) contained in the basin of \(x_{v[k+1]}^{A[k+1]}\) for the map \(f_{v[k+1]}\), namely
\[A[k]=P(v[k-1],A[k-1],v[k]). \tag{8}\]
We call such a sequence \(A[k]\) a _forward attractor sequence_ for \(\mathbf{v}\) starting at \(A[0]\). Since \(A[k]\) is determined by (8) there will be only finitely many forward attractor sequences and the number of these is bounded above by \(\min_{i}L(i)\). An _entire attractor sequence_ is a sequence \(A[k]\) satisfying (8) that is associated with a bi-infinite \(\mathbf{v}\); this depends not only on the \(v[k]\) for \(k\geq 0\) but also on \(k<0\).
**Lemma 2.2**.: _Suppose that Assumption 2.1 holds for the system (7). Then there is a \(m_{\min}\) such that for any \(\mathbf{v}\) with \(m_{i}^{-}\geq m_{\min}\) for all \(i\) and any entire attractor sequence \(A[k]\) for this \(\mathbf{v}\), there is a pullback attracting entire solution \(x_{v[k]}^{A[k]}\) such that \(x_{v[k]}^{A[k]}\in\mathcal{B}_{v[k+1]}^{A[k+1]}\)._
**Proof.** Choose any \(0<\rho<1\); by applying Lemma 2.1 for all choices of \(i,j\) we can find an \(\epsilon>0\) and an \(m_{\min}\) such that \(m_{\min}>n_{i}^{j}\) and \(B_{\epsilon}(x_{i}^{j})\subset N_{i}^{j}\). Pick any attractor sequence \(A[k]\) for a \(\mathbf{v}\) that satisfies \(m_{i}^{-}\geq m_{\min}\) and define
\[x^{[A]}[k,n]:=\Phi_{n,\sigma^{k-n}(\mathbf{v})}B_{\epsilon}(x_{v[k-n]}^{A[k-n] }).\]
This is a nested sequence in increasing \(n\) for fixed \(k\) and there is contraction by \(\rho\) over blocks of the same symbol. Hence the set is non-empty and has diameter that shrinks to zero as \(n\to\infty\). This means that
\[x^{[A]}[k]:=\bigcap_{n<k}x^{[A]}[k,n]\]
consists of a single entire solution that pullback attracts an \(\epsilon\)-neighbourhood of itself. \(\Box\)
A consequence of this is that, for long enough minimum block-lengths we can get a lower bound for echo index from the number of distinct entire attractor sequences.
**Theorem 2.3**.: _Suppose that Assumption 2.1 holds for (7) and choose \(m_{\min}\) and \(\mathbf{v}\) such that the conclusion of Lemma 2.2 holds. Suppose that there are \(E\) distinct entire attractor sequences for \(\mathbf{v}\). Then the echo index is at least \(E\) for this sequence._
**Proof.** By Lemma 2.2 each entire attractor sequence \(A[k]\) has a pullback attractor \(x^{[A]}[k]\). These are distinct as long as the entire attractor sequences are distinct. \(\Box\)
Under additional assumptions, one can determine the echo index exactly, in particular for the following case where we assume there is a uniform bound on how long it takes points to enter a given neighbourhood of a single attractor.
**Theorem 2.4**.: _Suppose that Assumption 2.1 holds for (7) and in addition assume that there is an \(i\) such that \(f_{i}\) has a single attracting fixed point \(x_{i}^{0}\) such that \(h(f_{i}^{m}(X),x_{i}^{0})\to 0\) as \(m\to\infty\). Then there is a single attractor sequence and an \(m_{\min}\) such that for every \(\mathbf{v}\) with \(m_{i}^{-}\geq m_{\min}\) the system has echo index one._
**Proof.** In this case note that whenever \(v[k]=i\) we have \(A[k]=0\). Hence there is only one entire attractor sequence. Moreover, given any \(\epsilon\) there is an \(m\) such that all points must enter the neighbourhood \(B_{\epsilon}(x_{i}^{0})\) after at most \(m\) iterates and hence the entire solution pullback attracts all points in \(X\). \(\Box\)
### Obstructions to conditions for echo consistency
Theorem 2.3 gives sufficient conditions to guarantee a minimum echo index for the forced system in terms of \(E\) the number of distinct attractor sequences for the system with input \(\mathbf{v}\). Conversely, Theorem 2.4 shows under stronger conditions that the echo index is precisely one.
One might naively expect that an even stronger result than Theorem 2.3 may follow, namely that the echo index is in fact \(E\) for cases where \(E\geq 2\). This is not the case because under composition new attractors may appear near a basin boundary.
As an example, define
\[F(x)=x+x^{2}\sin\frac{\pi}{x},\ \ F(0)=0\]
and let \(f(x)\) be the continuous function such that \(f(x)=F(x)\) for \(|F(x)|<1\) and \(f(x)\) is linear with constant slope \(0.5\) for \(|F(x)|>1\). Now consider the maps
\[f_{0}(x)=f(x)+1,\ \ \ f_{1}(x)=f(x-1). \tag{9}\]
One can verify that \(f_{0}(x)\) and \(f_{1}(x)\) each have a unique attracting fixed point, namely \(x_{0}^{0}=3\) and \(x_{1}^{0}=2\). However, \(f_{1}\circ f_{0}=f^{2}\) has infinitely many attracting fixed points separated by repelling fixed points; it also has neutrally stable and linearly unstable fixed points; see Figure 3. In summary, this system satisfies Assumption 2.1 and for any input there is only one attractor sequence \(A[k]=0\) for all \(k\), but for an input \(\mathbf{v}\) that alternates between \(0\) and \(1\) there are infinitely many attracting entire solutions.
Hence for this input there is infinite echo index, even though each individual map has a unique attracting fixed point. Nonetheless, applying Theorem 2.4, there is a minimum block-length such that there is only one attracting entire solution. In this case, numerical simulations suggest this minimum is \(2\), which would mean that the system has echo index one for all input sequences except the periodic sequence that is an infinite repeat of \(01\).
Figure 3: Maps \(f_{0}\) and \(f_{1}\) (9) such that each individual map has a unique globally attracting linearly stable fixed point. However the composition of \(f_{1}\circ f_{0}\) has infinitely many stable fixed points.
## 3 Echo index dependence for RNNs
One of the main motivations for this study is the need to understand how many responses there are for RNNs such as (a) trained discrete-time RNNs of the form [20; 5]
\[x[k+1]=\phi(W_{r}x[k]+W_{i}u[k+1]+W_{f}z[k]) \tag{10}\]
driven by inputs \(u[k]\) or (b) trained ESNs of leaky-integrator neurons:
\[x[k+1]=G(x[k],u[k],z[k]),\ \ G(x,u,z)=(1-\alpha)x+\alpha\phi(W_{r}x+W_{i}u+W_{f}z), \tag{11}\]
where \(\alpha\in(0,1)\) quantifies the leakage and \(\phi\) is a nonlinear function; in both cases the input sequence is given by \(u[k]\) and \(z[k]\) represents the output feedback
\[z[k]=\psi(x[k]). \tag{12}\]
The nonlinear function \(\phi\) is called an _activation function_; we assume it is bounded, monotonically increasing, and differentiable, e.g. a scaling of \(\tanh\). By contrast, \(\psi\) is usually the identity function or a softmax function.
### Switching dynamics for RNNs
We consider an ESN with leaky-integrator neurons [11] and no output feedback2 for implementing the input-driven state update rule (7). We consider a map of the form (11) but with
Footnote 2: As discussed in [8] whenever the readout is linear the feedback term can be formally incorporated in the reservoir term. Therefore the absence of the feedback term in an ESN state-update rule can represent an ESN state-update rule after the training session where \(W_{r}\) represents the “effective reservoir” after training.
\[G_{\alpha}(x,u):=(1-\alpha)x+\alpha\tanh(W_{r}x+W_{in}u). \tag{13}\]
A finite set of input values \(U=\{u_{0},u_{1},\ldots,u_{M-1}\}\) defines a number \(M\) of autonomous maps \(f_{i}:X\to X\), where \(X=[-1,1]^{r}\) and \(r\) is the dimension of the internal state of the RNN, i.e. the number of neurons. In this RNN framework, we can see that for small enough input values, the echo index of the nonautonomous switching system is the number of attractors of the input-free ESN. On the other hand, Theorem 2.4 has an interpretation in terms of large amplitude forcing for RNN-like systems. In fact, we know that large amplitude inputs drive the system into the saturating tails regime of \(\tanh\), which is characterised by a single attracting fixed point [8]. Thus Theorem 2.4 implies that on forcing an RNN with a large amplitude input for long enough, the resulting nonautonomous RNN switching dynamics are characterised by echo index one.
### An example of input-driven RNN with multiple attractors
We provide here an example with a two dimensional ESN to better illustrate the concepts. We choose,
\[W_{r}=\begin{bmatrix}\frac{1}{2}&0\\ 0&\frac{7}{4}\end{bmatrix},\ \ \ W_{in}=I_{2}, \tag{14}\]
with \(I_{2}\) the identity matrix. We consider input sequences \(\mathcal{U}=\{u_{0},u_{1}\}^{\mathbb{Z}}\) where
\[u_{0}:=\begin{pmatrix}\frac{1}{4}\\ \frac{1}{20}\end{pmatrix}\ \ \text{and}\ \ u_{1}:=\begin{pmatrix}-\frac{1}{4}\\ -\frac{1}{2}\end{pmatrix}. \tag{15}\]
What follows can be observed for any value of \(\alpha\in(0,1]\). We chose \(\alpha=\frac{1}{4}\) again because a small leak rate highlights the transient dynamics.
The nonautonomous dynamics driven by some input sequence \(\mathbf{u}\in\mathcal{U}\) consists of a sequence of applications of the two maps:
\[f_{0}(x):=G(x,u_{0}),\ \ f_{1}(x):=G(x,u_{1}).\]
One can verify that the autonomous system \(x[k+1]=f_{0}(x[k])\) has two asymptotically stable fixed points with a saddle between them, all lying close to the vertical line \(x_{1}\approx 0.45\) (see Figure 4), while the autonomous system \(x[k+1]=f_{1}(x[k])\) has only one (asymptotically stable) fixed point, lying in the quadrant where both variables are negative. Note that [9, Theorem 4.1] can be applied in this example to prove the existence of a local point attractor lying in a strip of negative values of the \(x_{2}\) variable. Nevertheless, it is not straightforward to prove the existence of additional local point attractors.
The stable manifold of the saddle of the autonomous map \(f_{0}\) is a horizontal line dividing the phase space into two sets. Let us denote by \(x^{*}\) the upper stable node of \(f_{0}\), lying in the quadrant where both variables are positive. Thanks to [9, Proposition B.1], we can consider the phase space to be \(X=[-1,1]^{2}\). Let us call
Figure 4: (a) Phase portrait of the map \(f_{0}\). The red line represents the stable manifold of the saddle. Some initial conditions have been evolved and plotted (as black points) in order to visualise the vector field. (b) Phase portrait of the map \(f_{1}\). Note that the purple point is not a fixed point but a _slow point_[19; 8]. In fact, the map \(f_{1}\) is close to a saddle-node bifurcation which occurs near the position of such a slow point.
\(X^{up}\) the upper half where \(x^{*}\) lies (including the stable manifold line) and \(X^{down}\) the remaining part. On the other hand, the global attractor of the autonomous map \(f_{1}\) consists only of an asymptotically stable fixed point lying in \(X^{down}\). Lemma 2.1 can be applied to show there exists a (minimum) positive integer \(m_{\min}\) for which \(f_{1}^{m_{\min}}(X^{up})\) is contained in the interior of \(X^{down}\). For the particular choice of parameters it turns out that \(m_{\min}\approx 30\).
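The following sketch (ours, not the authors' code) sets up the two maps of this example and locates the attracting fixed points of each autonomous map by iterating from a grid of initial conditions; it should recover the two stable nodes of \(f_{0}\) and the single stable fixed point of \(f_{1}\) described above.

```python
# Sketch of the two-dimensional ESN example of Section 3.2 (parameters from the text).
import numpy as np

alpha = 0.25
W_r = np.array([[0.5, 0.0], [0.0, 1.75]])
u0 = np.array([0.25, 0.05])
u1 = np.array([-0.25, -0.5])

def G(x, u):                                  # leaky-integrator map (13), W_in = I
    return (1 - alpha) * x + alpha * np.tanh(W_r @ x + u)

f0 = lambda x: G(x, u0)
f1 = lambda x: G(x, u1)

def attractors(f, n_iter=2000, grid=11):
    """Iterate f from a grid of initial conditions and collect rounded limits."""
    pts = set()
    for a in np.linspace(-1, 1, grid):
        for b in np.linspace(-1, 1, grid):
            x = np.array([a, b])
            for _ in range(n_iter):
                x = f(x)
            pts.add(tuple(np.round(x, 3)))
    return pts

print(attractors(f0))   # two stable nodes (upper and lower), cf. Figure 4(a)
print(attractors(f1))   # one stable fixed point with both coordinates negative
```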
### Bifurcations of echo index
In this section we perform numerical simulations to compute the echo index of the ESN example of Section 3.2. For each choice of parameters determining the sequence \(\mathbf{v}\), we choose \(50\) uniformly distributed initial conditions and iterate \(T\) steps before using a clustering algorithm to numerically estimate the number of clusters in the final state - this is an estimate of the echo index. Random sequences of length \(T\) with varying \(m_{0}^{-}\) and \(m_{1}^{+}\) are chosen, and we fix \(m_{0}^{+}=40\) and \(m_{1}^{-}=1\). Figure 5 highlights that for \(m_{0}^{-}\) large enough there will be echo index \(2\) for small enough \(m_{1}^{+}\). Panel (a) shows \(T=100\), \(p_{0}=0.9\) and \(p_{1}=0.95\). Note that the short length of the timeseries is not enough to collapse the initial conditions down to one of two values. Panel (b) is as for (a) but with \(T=1000\); in this case we have index one or two, though some cases where \(m_{1}^{+}\) is large have not been sampled for long enough, giving some spurious echo index \(2\). Panel (c) is as for (a) but with \(T=2000\); we find what is presumably a clear boundary emerging between different values of the echo index. Finally (d) shows the deterministic case where \(p_{0}=0\) and \(p_{1}=1\), which corresponds to the periodic orbit where there are \(m_{0}^{-}\) repeats of \(0\) and \(m_{1}^{+}\) repeats of \(1\) - in this case there is a clear boundary between regions with echo index one and two; this presumably corresponds to a bifurcation of the periodically forced map where a second attractor appears. Note that for \(m_{1}^{+}>30\) and large enough \(T\), we always find echo index one as suggested by the discussion at the end of Section 3.2, but for \(m_{0}^{-}\) small it is possible to find echo index one for some values of \(m_{1}^{+}<30\).
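A rough version of this numerical procedure is sketched below. The authors' implementation is in Matlab (see the Data Access section); this Python sketch is ours, uses rounding of the final states instead of a clustering algorithm, and relies on our own sampler for the random input blocks.

```python
# Sketch of the echo-index estimate behind Figure 5: many initial conditions are
# driven by one random input sequence and the distinct final states are counted.
import numpy as np

rng = np.random.default_rng(0)
alpha, W_r = 0.25, np.array([[0.5, 0.0], [0.0, 1.75]])
U = {0: np.array([0.25, 0.05]), 1: np.array([-0.25, -0.5])}

def step(x, sym):
    return (1 - alpha) * x + alpha * np.tanh(W_r @ x + U[sym])

def random_input(T, m0_minus, m1_plus, m0_plus=40, m1_minus=1, p0=0.9, p1=0.95):
    seq, sym = [], 0
    m_minus, m_plus, p = [m0_minus, m1_minus], [m0_plus, m1_plus], [p0, p1]
    while len(seq) < T:
        run = m_minus[sym]
        while run < m_plus[sym] and rng.random() < p[sym]:
            run += 1
        seq.extend([sym] * run)
        sym = 1 - sym
    return seq[:T]

def echo_index_estimate(T=2000, m0_minus=10, m1_plus=10, n_ic=50):
    u = random_input(T, m0_minus, m1_plus)
    finals = set()
    for _ in range(n_ic):
        x = rng.uniform(-1, 1, size=2)
        for sym in u:
            x = step(x, sym)
        finals.add(tuple(np.round(x, 2)))
    return len(finals)

print(echo_index_estimate())   # typically 1 or 2, depending on the block-lengths
```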
## 4 Conclusions
In this paper we have gone beyond work in [9] to highlight specific ways in which the echo index varies with input signal (and hence whether the echo state property holds for a given input). We present this for an iterated function system on a compact space with an example application to an echo state network.
We have so far only considered min-max block-lengths and repetition probabilities in determining bounds on the echo index, but the actual value may depend on much more subtle properties of words appearing in the input and properties of the individual maps. It remains a challenge to better understand this relationship between input set, system properties and echo index, even if we restrict only to functions where the only attractors are fixed points. Clearly, responses for cases where the autonomous dynamics of the maps include more complex attractors (such as chaotic, quasiperiodic or periodic) will be more challenging, not least because
Figure 5: Estimates of echo index for the system (13) with parameters as in the text for a range of random input sequences. Random sequences of length \(T\) with varying \(m_{0}^{-}\) and \(m_{1}^{+}\) are chosen, fixing \(m_{0}^{+}=40\) and \(m_{1}^{-}=1\). Note that the echo index is apparently 2 or more for small enough \(m_{1}^{+}\), or for small \(T\). (a-c) show estimates of the echo index for randomly generated inputs with the given min-max block-lengths and probabilities of repetition \(p_{0}\) and \(p_{1}\) as in Figure 2. Examining longer timeseries when estimating the echo index leads to a clear boundary between regions of different echo index. (d) shows a special case where there are only periodic inputs. We conjecture that the cases (a-c) will limit to (d) for arbitrarily large \(T\).
a simple generalization of Assumption 2.1(iv) is unreasonable - a single attractor of one map can stably straddle several basins of attraction for attractors of another map.
One interpretation of our results is that minimum block-lengths are a proxy for the input rate to the system - a long enough minimum block-length corresponds to a slow rate of input. Our results (Theorems 2.3 and 2.4) imply that the expected behaviour can be characterised by attractors and basins of the individual maps for slow enough rates of input. For shorter min block-lengths the picture becomes more complex and the transient or nonautonomous behaviour of each map features more strongly in the response to input - there will be an analogy to rate-induced critical transitions [3] in the response for such cases.
A future direction that seems worthy of study is the role of transients in determining the echo index. The computations in Figure 5 show that even for quite long (but finite) computations the number of responses may apparently exceed the echo index which is attained asymptotically. A thorough understanding of responses will be needed to explain such transient behaviour of the echo index. This clearly depends not only on transients in the map dynamics but also on waiting times to see certain words within the inputs.
#### Acknowledgements
We thank EPSRC for support via EP/W52265X/1. We thank Lorenzo Livi, Muhammed Fadera and Claire Postlethwaite for very informative discussions in relation to this work.
#### Data Access
The Matlab code for Figure 5 is available from [https://github.com/peterashwin/ashwin-ceni-2023](https://github.com/peterashwin/ashwin-ceni-2023).
|
2309.05436 | Quantized Fourier and Polynomial Features for more Expressive Tensor
Network Models | In the context of kernel machines, polynomial and Fourier features are
commonly used to provide a nonlinear extension to linear models by mapping the
data to a higher-dimensional space. Unless one considers the dual formulation
of the learning problem, which renders exact large-scale learning unfeasible,
the exponential increase of model parameters in the dimensionality of the data
caused by their tensor-product structure prohibits to tackle high-dimensional
problems. One of the possible approaches to circumvent this exponential scaling
is to exploit the tensor structure present in the features by constraining the
model weights to be an underparametrized tensor network. In this paper we
quantize, i.e. further tensorize, polynomial and Fourier features. Based on
this feature quantization we propose to quantize the associated model weights,
yielding quantized models. We show that, for the same number of model
parameters, the resulting quantized models have a higher bound on the
VC-dimension as opposed to their non-quantized counterparts, at no additional
computational cost while learning from identical features. We verify
experimentally how this additional tensorization regularizes the learning
problem by prioritizing the most salient features in the data and how it
provides models with increased generalization capabilities. We finally
benchmark our approach on a large regression task, achieving state-of-the-art
results on a laptop computer. | Frederiek Wesel, Kim Batselier | 2023-09-11T13:18:19Z | http://arxiv.org/abs/2309.05436v3 | # Quantized Fourier and Polynomial Features for more Expressive Tensor Network Models
###### Abstract
In the context of kernel machines, polynomial and Fourier features are commonly used to provide a nonlinear extension to linear models by mapping the data to a higher-dimensional space. Unless one considers the dual formulation of the learning problem, which renders exact large-scale learning unfeasible, the exponential increase of model parameters in the dimensionality of the data caused by their tensor-product structure prohibits tackling high-dimensional problems. One of the possible approaches to circumvent this exponential scaling is to exploit the tensor structure present in the features by constraining the model weights to be an underparametrized tensor network. In this paper we quantize, i.e. further tensorize, polynomial and Fourier features. Based on this feature quantization we propose to quantize the associated model weights, yielding quantized models. We show that, for the same number of model parameters, the resulting quantized models have a higher bound on the VC-dimension as opposed to their non-quantized counterparts, at no additional computational cost while learning from identical features. We verify experimentally how this additional tensorization regularizes the learning problem by prioritizing the most salient features in the data and how it provides models with increased generalization capabilities. We finally benchmark our approach on a large regression task, achieving state-of-the-art results on a laptop computer.
## 1 Introduction
In the context of supervised learning, the goal is to estimate a function \(f\left(\cdot\right):\mathcal{X}\rightarrow\mathcal{Y}\) given \(N\) input-output pairs \(\left\{\boldsymbol{x}_{n},y_{n}\right\}_{n=1}^{N}\), where \(\boldsymbol{x}\in\mathcal{X}\) and \(y\in\mathcal{Y}\). Kernel machines accomplish this by lifting the input data into a high-dimensional feature space by means of a _feature map_\(\boldsymbol{z}\left(\cdot\right):\mathcal{X}\rightarrow\mathcal{H}\) and seeking a linear relationship therein:
\[f\left(\boldsymbol{x}\right)=\left\langle\boldsymbol{z}\left(\boldsymbol{x} \right),\boldsymbol{w}\right\rangle. \tag{1}\]
Training such a model involves the minimization of the regularized empirical risk given a convex measure of loss \(\ell\left(\cdot,\cdot\right):\mathcal{H}\times\mathcal{Y}\rightarrow\mathbb{R}_ {+}\)
\[R_{\text{empirical}}\left(\boldsymbol{w}\right)=\frac{1}{N}\sum_{n=1}^{N}\ell \left(\left\langle\boldsymbol{z}\left(\boldsymbol{x}_{n}\right),\boldsymbol{w }\right\rangle,y_{n}\right)+\lambda\left|\left|\boldsymbol{w}\right|\right|^{2}. \tag{2}\]
Different choices of loss yield the _primal_ formulation of different kernel machines. For example, squared loss results in kernel ridge regression (KRR) (Suykens et al., 2002), hinge loss in support vector machines (SVMs) (Cortes and Vapnik, 1995), and logistic loss yields logistic
regression. Different choices of the feature map \(\mathbf{z}\) allow for modeling different nonlinear behaviors in the data. In this article we consider tensor-product features
\[\mathbf{z}\left(\mathbf{x}\right)=\bigotimes_{d=1}^{D}\mathbf{v}^{\left(d\right)}\left(x_{d} \right), \tag{3}\]
where \(\otimes\) denotes the Kronecker product and \(x_{d}\) denotes the \(d\)-th component of \(\mathbf{x}\). This tensor-product structure arises when considering product kernels (Shawe-Taylor and Cristianini, 2004; Hensman et al., 2017; Solin and Sarkka, 2020), Fourier features (Wahls et al., 2014), when considering B-splines (Karagoz and Batselier, 2020) and polynomials (Shawe-Taylor and Cristianini, 2004).
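For concreteness, a small sketch of how such a tensor-product feature vector can be assembled from per-dimension feature vectors is given below; monomials are used purely as an illustrative choice of \(\mathbf{v}^{(d)}\), and the entry ordering follows NumPy's Kronecker-product convention.

```python
import numpy as np
from functools import reduce

# Sketch: tensor-product feature map z(x) of Equation (3), built from
# per-dimension feature vectors v^(d)(x_d).
def v(x_d, M_d):
    return x_d ** np.arange(M_d)          # [1, x_d, ..., x_d^{M_d-1}]

def z(x, Ms):
    # np.kron varies the last factor fastest; this fixes one ordering of the
    # M_1 * ... * M_D entries of the tensor-product feature vector.
    return reduce(np.kron, [v(x_d, M_d) for x_d, M_d in zip(x, Ms)])

x = np.array([0.3, -0.7, 0.5])
print(z(x, [4, 4, 4]).shape)              # (64,) = 4 * 4 * 4 features
```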
Due to the tensor-product structure in Equation (3), \(\mathbf{z}(\cdot)\) maps an input sample \(\mathbf{x}\in\mathbb{C}^{D}\) into an exponentially large feature vector \(\mathbf{z}(\mathbf{x})\in\mathbb{C}^{M_{1}M_{2}\cdots M_{D}}\). As a result, the model is also described by an exponential number of weights \(\mathbf{w}\). This exponential scaling in the number of features limits the use of tensor-product features to low-dimensional data or to mappings of very low degree.
Both these computational limitations can be sidestepped entirely by considering the _dual_ formulation of the learning problem in Equation (2), which requires computing the pairwise similarity of all data by means of a kernel function \(k(\mathbf{x},\mathbf{x}^{\prime})=\langle\mathbf{z}(\mathbf{x}),\mathbf{z}(\mathbf{x}^{\prime})\rangle\). However, the dual formulation requires instantiating the kernel matrix at a cost of \(\mathcal{O}(N^{2})\) and estimating \(N\) Lagrange multipliers by solving a (convex) quadratic problem at a cost of at least \(\mathcal{O}(N^{2})\), which prohibits tackling large-scale data (large \(N\)). To lift these limitations, a multitude of research has focused on finding low-rank approximations of kernels by considering _random_ methods such as polynomial sketching (Pham and Pagh, 2013; Woodruff, 2014; Meister et al., 2019) and random features (Williams and Seeger, 2001; Rahimi and Recht, 2007; Le et al., 2013), which approximate the feature space with probabilistic approximation guarantees.
One way to take advantage of the existing tensor-product structure in Equation (3) is by imposing a tensor network (Kolda and Bader, 2009; Sidiropoulos et al., 2017) constraint on the weights \(\mathbf{w}\). For example, using a polyadic rank-\(R\) constraint reduces the storage complexity of the weights from \(\mathcal{O}(M^{D})\) down to \(\mathcal{O}(DMR)\) and enables the development of efficient learning algorithms with a computational complexity of \(\mathcal{O}(DMR)\) per gradient descent iteration. This idea has been explored for polynomials (Favier and Bouilloc, 2009; Rendle, 2010; Blondel et al., 2016, 2017; Batselier et al., 2017), pure-power-1 polynomials (Novikov et al., 2018), pure-power polynomials of higher degree (Chen et al., 2018), B-splines (Karagoz and Batselier, 2020), and Fourier features (Wahls et al., 2014; Stoudenmire and Schwab, 2016; Efthymiou et al., 2019; Kargas and Sidiropoulos, 2021; Cheng et al., 2021; Wesel and Batselier, 2021).
In this article, we improve on this entire line of research by deriving an exact _quantized_ representation (Khoromskij, 2011) of pure-power polynomials and Fourier features, exploiting their inherent Vandermonde structure. It is worth noting that in this paper quantized means _further tensorized_, and should not be confused with the practice of working with lower precision floating point numbers. By virtue of the derived quantized features, we are able to quantize the model weights. We show that the ensuing quantized models attain higher upper bounds on the VC-dimension for the same number of model parameters and can be trained with no additional computational cost, while learning from the same exact features as their non-quantized counterparts. We finally verify experimentally that:
1. Quantized models are indeed characterized by improved generalization capabilities. This is demonstrated in Section 5.1, where we show that quantized models achieve lower test errors than the non-quantized models with identical features and an identical total number of model parameters.
2. This additional structure regularizes the problem by prioritizing the learning of the peaks in the frequency spectrum of the signal (in the case of Fourier Features) (Section 5.2).
In other words, the quantized structure is learning the most salient features in the data first with its limited amount of available model parameters.
3. Quantized tensor network models can provide state-of-the-art performance on large-scale real-life problems. This is demonstrated in Section 5.3, where we compare the proposed quantized model to both its non-quantized counterpart and other state-of-the-art methods, demonstrating superior generalization performance on a laptop computer.
## 2 Background
We denote scalars in both capital and non-capital italics \(w,W\), vectors in non-capital bold \(\mathbf{w}\), matrices in capital bold \(\mathbf{W}\) and tensors, also known as higher-order arrays, in capital italic bold font \(\mathbf{\mathcal{W}}\). Sets are denoted with calligraphic capital letters, e.g. \(\mathcal{S}\). The \(m\)-th entry of a vector \(\mathbf{w}\in\mathbb{C}^{M}\) is indicated as \(w_{m}\) and the \(m_{1}m_{2}\ldots m_{D}\)-th entry of a \(D\)-dimensional tensor \(\mathbf{\mathcal{W}}\in\mathbb{C}^{M_{1}\times M_{2}\times\cdots\times M_{D}}\) as \(w_{m_{1}m_{2}\ldots m_{D}}\). We denote the complex-conjugate with superscript \({}^{*}\) and \(\otimes\) denotes the Kronecker product. We employ zero-based indexing for all tensors. The Frobenius inner product between two \(D\)-dimensional tensors \(\mathbf{\mathcal{V}},\mathbf{\mathcal{W}}\in\mathbb{C}^{M_{1}\times M_{2}\times \cdots\times M_{D}}\) is defined as
\[\langle\mathbf{\mathcal{V}},\mathbf{\mathcal{W}}\rangle\coloneqq\sum_{m_{1}=0}^{M_{1} -1}\sum_{m_{2}=0}^{M_{2}-1}\cdots\sum_{m_{D}=0}^{M_{D}-1}v_{m_{1}m_{2}\ldots m _{D}}^{*}w_{m_{1}m_{2}\ldots m_{D}}. \tag{4}\]
We define the vectorization operator as \(\texttt{vec}\left(\cdot\right):\mathbb{C}^{M_{1}\times M_{2}\times\cdots \times M_{D}}\to\mathbb{C}^{M_{1}M_{2}\cdots M_{D}}\) such that
\[\texttt{vec}\left(\mathbf{\mathcal{W}}\right)_{m}=w_{m_{1}m_{2}\ldots m_{D}},\]
with \(m=m_{1}+\sum_{d=2}^{D}m_{d}\prod_{k=1}^{d-1}M_{k}\). Likewise, its inverse, the tensorization operator \(\texttt{ten}\left(\cdot,M_{1},M_{2},\ldots,M_{D}\right):\mathbb{C}^{M_{1}M_{2}\cdots M_{D}}\to\mathbb{C}^{M_{1}\times M_{2}\times\cdots\times M_{D}}\) is defined such that
\[\texttt{ten}\left(\mathbf{w},M_{1},M_{2},\ldots,M_{D}\right)_{m_{1}m_{2}\cdots m_{ D}}=w_{m}.\]
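The stated index convention (with \(m_{1}\) varying fastest) corresponds to column-major ordering, so a minimal sketch of both operators is:

```python
import numpy as np

# Sketch of vec(.) and ten(.) under the index convention
# m = m_1 + sum_{d>=2} m_d * prod_{k<d} M_k, i.e. m_1 varies fastest,
# which is column-major ("F") ordering in NumPy.
def vec(W):
    return W.flatten(order="F")

def ten(w, *dims):
    return w.reshape(dims, order="F")

W = np.arange(24).reshape(2, 3, 4)               # some 3-way tensor, M = (2, 3, 4)
w = vec(W)
assert np.array_equal(ten(w, 2, 3, 4), W)        # ten is the inverse of vec
assert w[1 + 0 * 2 + 2 * 2 * 3] == W[1, 0, 2]    # index formula check
```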
### Tensor networks
Tensor networks (TNs), also known as tensor decompositions or tensor factorizations (Kolda and Bader, 2009; Cichocki, 2014; Cichocki et al., 2016, 2017) provide a generalization of the concept of matrix rank to tensors.
**Definition 2.1** (Tensor network).: Given a graph \(G=(V,E,\dim)\) where \(V\) is a set of vertices, \(E\) is a set of edges and \(\dim:E\to\mathbb{N}\) assigns a dimension to each edge, a tensor network assigns a core tensor \(\mathbf{\mathcal{C}}_{v}\) to each vertex of the graph, such that \(\mathbf{\mathcal{C}}_{v}\in\otimes_{e\in E_{v}}\mathbb{C}^{\dim(e)}\). Here \(E_{v}=\{e\in E|v\in e\}\) is the set of edges connected to vertex \(v\). The number of parameters of the tensor network is then \(P=\sum_{v\in V}\prod_{e\in E_{v}}\dim(e)\).
Commonly used TNs are the canonical polyadic decomposition (CPD) (Hitchcock, 1927; Kolda and Bader, 2009), the tensor train (TT) (Oseledets, 2011), tensor ring (TR) (Zhao et al., 2016), Tucker decomposition (Kolda and Bader, 2009), hierarchical Tucker (Hackbusch and Kuhn, 2009; Grasedyck, 2010) decomposition, block-term decompositions (De Lathauwer, 2008a,b), PEPS (Verstraete and Cirac, 2004) and MERA (Evenbly and Vidal, 2009). We refer to a TN as _underparametrized_ if \(P\ll\prod_{d=1}^{D}M_{d}\).
### Tensorized kernel machines
The tensor-product structure of features in Equation (3) can be exploited by imposing a tensor network structure onto the tensorized model weights
\[\texttt{ten}\left(\mathbf{w},M_{1},M_{2},\ldots,M_{D}\right). \tag{5}\]
Although generally speaking the tensorized model weights are not full rank, modeling them as an underparametrized tensor network allows one to compute fast model responses when the feature map \(\mathbf{z}\left(\cdot\right)\) is of the form of Equation (3).
**Theorem 2.2**.: _Suppose \(\mathtt{ten}\left(\mathbf{w},M_{1},M_{2},\ldots,M_{D}\right)\) is a tensor in CPD, TT or TR form. Then model responses and associated gradients_
\[f\left(\mathbf{x}\right)=\langle\bigotimes_{d=1}^{D}\mathbf{v}^{\left(d\right)}\left(x _{d}\right),\mathbf{w}\rangle,\]
_can be computed in \(\mathcal{O}(P)\)._
Proof.: See supplementary material.
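For the CPD case the statement can be illustrated directly: writing the tensorized weights as a rank-\(R\) CPD with factors \(W_{d}\in\mathbb{R}^{M_{d}\times R}\), the inner product factorizes as \(\langle\bigotimes_{d}\mathbf{v}^{(d)},\mathbf{w}\rangle=\sum_{r}\prod_{d}\langle\mathbf{v}^{(d)},W_{d}[:,r]\rangle\). A minimal real-valued sketch (the shapes and the brute-force check are purely illustrative):

```python
import numpy as np
from functools import reduce

# If ten(w)_{m_1...m_D} = sum_r prod_d W_d[m_d, r] (rank-R CPD), then
# <kron_d v_d, w> = sum_r prod_d <v_d, W_d[:, r]>, so a model response
# costs O(sum_d M_d R) = O(P) instead of O(prod_d M_d).
def cpd_response(vs, Ws):
    # vs: per-dimension feature vectors v_d (length M_d)
    # Ws: CPD factors W_d with shape (M_d, R)
    G = np.stack([v @ W for v, W in zip(vs, Ws)])   # D x R inner products
    return np.sum(np.prod(G, axis=0))               # sum over r of prod over d

rng = np.random.default_rng(0)
D, M, R = 3, 4, 2
Ws = [rng.normal(size=(M, R)) for _ in range(D)]
vs = [rng.normal(size=M) for _ in range(D)]

# brute-force check against the full (exponentially large) weight vector
w_full = sum(reduce(np.kron, [W[:, r] for W in Ws]) for r in range(R))
z_full = reduce(np.kron, vs)
assert np.allclose(cpd_response(vs, Ws), z_full @ w_full)
```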
Results for more general TNs can be found in the supplementary material. This idea has been explored for a plethora of different combinations of tensor-product features and tensor networks (Wahls et al., 2014; Stoudenmire and Schwab, 2016; Novikov et al., 2018; Chen et al., 2018; Cheng et al., 2021; Khavari and Rabusseau, 2021; Wesel and Batselier, 2021). Training a kernel machine under such constraint yields the following nonconvex optimization problem:
\[\min_{\mathbf{w}} \frac{1}{N}\sum_{n=1}^{N}\ell(\langle\bigotimes_{d=1}^{D}\mathbf{v}^ {\left(d\right)}\left(x_{d}\right),\mathbf{w}\rangle,y_{n})+\lambda\left||\mathbf{w} \right||^{2}, \tag{6}\] \[\text{s.t.}\ \mathtt{ten}\left(\mathbf{w},M_{1},M_{2},\ldots,M_{D} \right)\text{ is a tensor network.}\]
Common choices of tensor network-specific optimizers are the alternating linear scheme (ALS) (Comon et al., 2009; Kolda and Bader, 2009; Uschmajew, 2012; Holtz et al., 2012), the density matrix renormalization Group (DMRG) (White, 1992) and Riemannian optimization (Novikov et al., 2018, 2021). More generic first or second order gradient-based optimization method can also be employed.
## 3 Quantizing polynomial and Fourier features
Before presenting the main contribution of this article, we first provide the definition of a pure-power polynomial feature map.
**Definition 3.1** (Pure-power-\((M_{1}-1,M_{2}-1,\ldots,M_{D}-1)\) polynomial feature map (Chen et al., 2018)).: For an input sample \(\mathbf{x}\in\mathbb{C}^{D}\), the pure-power polynomial features \(\mathbf{z}(\cdot):\mathbb{C}^{D}\rightarrow\mathbb{C}^{M_{1}M_{2}\cdots M_{D}}\) are defined as
\[\mathbf{z}\left(\mathbf{x}\right)=\bigotimes_{d=1}^{D}\mathbf{v}^{\left(d\right)}\left(x_{ d}\right),\]
with \(\mathbf{v}^{\left(d\right)}\left(\cdot\right):\mathbb{C}\rightarrow\mathbb{C}^{M_ {d}}\) the Vandermonde vector
\[\mathbf{v}^{\left(d\right)}\left(x_{d}\right)=\left[1,x_{d},x_{d}^{2},\ldots,x_{ d}^{M_{d}-1}\right]. \tag{7}\]
The \(m_{d}\)-th element of the feature map vector \(\mathbf{v}^{\left(d\right)}(x_{d})\) is
\[v^{\left(d\right)}(x_{d})_{m_{d}}=(x_{d})^{m_{d}},\quad m_{d}=0,1,\ldots,M_{d} -1.\]
The definition of the feature map is given for degree \((M_{1}-1,M_{2}-1,\ldots,M_{D}-1)\) such that the feature map vector \(z(\mathbf{x})\) has a length \(M_{1}M_{2}\cdots M_{D}\). The Kronecker product in Definition 3.1 ensures that all possible combinations of products of monomial basis functions are computed, up to a polynomial degree of \(\sum_{d=1}^{D}(M_{d}-1)\). Compared to the more common affine polynomials, which are eigenfunctions of the polynomial kernel \(k(\mathbf{x},\mathbf{x}^{\prime})=(b+\langle\mathbf{x},\mathbf{x}^{\prime}\rangle)^{M}\)
pure-power polynomial features contain more higher-order terms. Similarly, their use is justified by the Stone-Weierstrass theorem (De Branges, 1959), which guarantees that any continuous function on a locally compact domain can be approximated arbitrarily well by polynomials of increasing degree. Fourier features can be similarly defined by replacing the monomials with complex exponentials.
**Definition 3.2**.: (Fourier Features) For an input sample \(\mathbf{x}\in\mathbb{C}^{D}\), the Fourier feature map \(\mathbf{\varphi}(\cdot):\mathbb{C}^{D}\to\mathbb{C}^{M_{1}M_{2}\cdots M_{D}}\) with \(M_{d}\) basis frequencies \(-\nicefrac{{M_{d}}}{{2}},\ldots,\nicefrac{{M_{d}}}{{2}}-1\) per dimension is defined as
\[\mathbf{\varphi}\left(\mathbf{x}\right)=\bigotimes_{d=1}^{D}\left(c_{d}\,\mathbf{v}^{(d)}\left(e^{-\frac{2\pi\,j\,x_{d}}{L}}\right)\right),\]
where \(j\) is the imaginary unit, \(c_{d}=e^{2\pi\,j\,x_{d}\,\frac{2+M_{d}}{2L}}\), \(L\in\mathbb{C}\) is the periodicity of the function class and \(\mathbf{v}^{(d)}\left(\cdot\right)\) are the Vandermonde vectors of Definition 3.1.
Fourier features are ubiquitous in the field of kernel machines as they are eigenfunctions of \(D\)-dimensional stationary product kernels with respect to the Lebesgue measure, see (Rasmussen and Williams, 2006, Chapter 4.3) or (Hensman et al., 2017; Solin and Sarkka, 2020). As such they are often used for the uniform approximation of such kernels in the limit of \(L\to\infty\) and \(M_{1},M_{2},\ldots,M_{D}\to\infty\)(Wahls et al., 2014, Proposition 1).
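As a concrete illustration, the Fourier feature map can also be assembled directly from the stated per-dimension frequency set \(\{-M_{d}/2,\ldots,M_{d}/2-1\}\) rather than through the \(c_{d}\)-Vandermonde factorization of Definition 3.2; a minimal sketch:

```python
import numpy as np
from functools import reduce

# Sketch of a Fourier feature map with M_d integer frequencies
# -M_d/2, ..., M_d/2 - 1 per dimension, as in Definition 3.2.
def fourier_features(x, Ms, L):
    factors = []
    for x_d, M_d in zip(x, Ms):
        ks = np.arange(-M_d // 2, M_d // 2)          # M_d integer frequencies
        factors.append(np.exp(2j * np.pi * ks * x_d / L))
    return reduce(np.kron, factors)                  # length prod_d M_d, complex

phi = fourier_features(np.array([0.2, 0.7]), [8, 8], L=1.0)
print(phi.shape)                                     # (64,)
```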
We now present the first contribution of this article, which is an exact _quantized_, i.e. further tensorized, representation of pure-power polynomials and Fourier features. These quantized features allow for the quantization of the model weights, which enables us to impose additional tensor network structure between features, yielding more expressive models for the same number of model parameters.
### Quantized features
In order to quantize pure-power polynomial features we need to assume that \(M_{d}\) can be written as some power \(M_{d}=Q^{K_{d}}\), where both \(Q,K_{d}\in\mathbb{N}\).
**Definition 3.3** (Quantized Vandermonde vector).: For \(Q,k\in\mathbb{N}\), we define the quantized Vandermonde vector \(\mathbf{s}^{(d,k)}(\cdot):\mathbb{C}\to\mathbb{C}^{Q}\) as
\[\mathbf{s}^{(d,k)}\left(x_{d}\right)\coloneqq\left[1,\,x_{d}^{Q^{k-1}},\ldots,x_{ d}^{(Q-1)Q^{k-1}}\right].\]
The \(q\)-th element of \(\mathbf{s}^{(d,k)}(x_{d})\) is therefore
\[s^{(d,k)}\left(x_{d}\right)_{q}=\left(x_{d}\right)^{qQ^{k-1}},\quad q=0,1, \ldots,Q-1.\]
**Theorem 3.4** (Quantized pure-power-\((M_{d}-1)\) polynomial feature map).: _Each Vandermonde vector \(\mathbf{v}^{(d)}(x_{d})\) can be expressed as a Kronecker product of \(K_{d}\) factors_
\[\mathbf{v}^{(d)}(x_{d})=\bigotimes_{k=1}^{K_{d}}\mathbf{s}^{(d,k)}\left(x_{d}\right),\]
_where \(M_{d}=Q^{K_{d}}\)._
Proof.: From Definition 3.1 we have that
\[v^{(d)}\left(x_{d}\right)_{m_{d}}=(x_{d})^{m_{d}}.\]
Assume that \(M_{d}=Q^{K_{d}}\). We proceed by tensorizing \(\mathbf{v}^{(d)}(x_{d})\) along \(K_{d}\) dimensions, each having size \(Q\). Then
\[v^{(d)}(x_{d})_{m_{d}} =\mathsf{ten}\left(v^{(d)},Q,Q,\ldots,Q\right)_{q_{1}q_{2}\ldots q _{K_{d}}}\] \[=(x_{d})^{q_{1}+q_{2}\,Q+q_{3}\,Q^{2}+\cdots+q_{K_{d}}\,Q^{K_{d}-1}}\] \[=(x_{d})^{q_{1}}\ (x_{d})^{q_{2}\,Q}\ (x_{d})^{q_{3}\,Q^{2}}\cdots\ (x_{d})^{q_{K_{d}}\,Q^{K_{d}-1}}\] \[=s^{(d,1)}{}_{q_{1}}\ s^{(d,2)}{}_{q_{2}}\ s^{(d,3)}{}_{q_{3}}\ \cdots\ s^{(d,K_{d})}{}_{q_{K_{d}}}.\]
The last equality follows directly from Definition 3.3.
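A quick numerical check of Theorem 3.4 for \(Q=2\) is given below. Note that NumPy's Kronecker product varies its last argument fastest, so the factors are applied in reverse order (\(k=K_{d},\ldots,1\)) to recover the monomials in increasing-degree order, consistently with the \(\mathtt{vec}(\cdot)\) convention in which the first index varies fastest.

```python
import numpy as np
from functools import reduce

# Check of Theorem 3.4 for Q = 2, M = Q^K:
# s^(d,k)(x) = [1, x^{Q^{k-1}}], and their Kronecker product recovers the
# Vandermonde vector [1, x, ..., x^{M-1}].
Q, K = 2, 4
M = Q ** K
x = 0.9

v = x ** np.arange(M)                                   # [1, x, ..., x^15]
s = [x ** (np.arange(Q) * Q ** (k - 1)) for k in range(1, K + 1)]
v_quantized = reduce(np.kron, s[::-1])                  # kron over k = K, ..., 1

assert np.allclose(v, v_quantized)
```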
Note that in principle it is possible to tensorize with respect to \(K_{d}\) indices such that \(M_{d}=Q_{1}Q_{2}\cdots Q_{K_{d}}\), but we refrain from doing so to avoid needlessly complicating the notation. Theorem 3.4 then allows us to quantize pure-power and Fourier features.
**Corollary 3.5** (Quantized pure-power polynomials).: _For an input sample \(\mathbf{x}\in\mathbb{C}^{D}\), the pure-power polynomial feature map can be expressed as_
\[\mathbf{z}\left(\mathbf{x}\right)=\bigotimes_{d=1}^{D}\bigotimes_{k=1}^{K_{d}}\mathbf{s}^ {(d,k)}\left(x_{d}\right).\]
**Corollary 3.6** (Quantized Fourier feature map).: _For an input sample \(\mathbf{x}\in\mathbb{C}^{D}\), the Fourier feature map can be expressed as_
\[\mathbf{\varphi}(\mathbf{x})=\bigotimes_{d=1}^{D}\bigotimes_{k=1}^{K_{d}}c_{d}^{\frac{1}{K_{d}}}\mathbf{s}^{(d,k)}\left(e^{-\frac{2\pi\,j\,x_{d}}{L}}\right),\]
_where \(c_{d}=e^{2\pi\,j\,x_{d}\,\frac{2+M_{d}}{2L}}\)._
Note that when quantized, both pure-power and Fourier features admit an efficient storage complexity of \(\mathcal{O}(DK)\) = \(\mathcal{O}(D\log M)\) instead of \(\mathcal{O}(DM)\).
_Example 3.7_.: Consider \(D=2\), \(M_{1}=16=2^{4}\), \(M_{2}=8=2^{3}\); then the Vandermonde vector of monomials up to degree \(15=M_{1}-1\) is constructed from
\[\mathbf{z}(\mathbf{x})=\left[1,\,x_{1}\right]\otimes\left[1,\,x_{1}^{2}\right]\otimes \left[1,\,x_{1}^{4}\right]\otimes\left[1,\,x_{1}^{8}\right]\otimes\left[1,\,x _{2}\right]\otimes\left[1,\,x_{2}^{2}\right]\otimes\left[1,\,x_{2}^{4}\right].\]
We now present the second contribution of this article, which is the quantization of the model weights associated with quantized polynomial and Fourier features. As we will see, these quantized models are more expressive for the same number of model parameters and the same exact features.
## 4 Quantized tensor network kernel machines
When not considering quantization, model weights allow for tensorial indexing along the \(D\) dimensions of the inputs, i.e. \(\mathsf{ten}\left(\mathbf{w},M_{1},M_{2},\ldots,M_{D}\right)\). Corollary 3.5 and Corollary 3.6 allow us to exploit the Kronecker product structure of pure-power polynomial and Fourier features by further tensorizing the model weights of the tensor network-constrained kernel machines of Equation (6)
\[\mathsf{ten}(\mathbf{w},\underbrace{Q,Q,\ldots,Q}_{\sum_{d=1}^{D}\,K_{d}\ \text{ times}}). \tag{8}\]
These further factorized model weights can then be constrained to be a tensor network, and learned by minimizing the empirical risk in the framework of Equation (6). Training a kernel machine under this constraint results in the following nonlinear optimization problem:
\[\min_{\mathbf{w}} \ \frac{1}{N}\sum_{n=1}^{N}\ell(\langle\bigotimes_{d=1}^{D}\bigotimes _{k=1}^{K_{d}}\mathbf{s}^{(d,k)}\left(x_{d}\right),\mathbf{w}\rangle,y_{n})+\lambda \left|\left|\mathbf{w}\right|\right|^{2},\] (9) s.t. \[\texttt{ten}\left(\mathbf{w},Q,Q,\ldots,Q\right)\text{ is a tensor network.}\]
### Computational complexity
In case of CPD, TT or TR-constrained and quantized model weights, model responses and associated gradients can be computed at the same cost as with non-quantized models:
**Theorem 4.1**.: _Consider pure-power and Fourier feature maps factorized as in Corollary 3.5 and Corollary 3.6 and suppose \(\texttt{ten}\left(\mathbf{w},Q,Q,\ldots,Q\right)\) is a tensor in CPD, TT or TR form. Then by Theorem A.3, model responses and associated gradients_
\[f_{\text{quantized}}\left(\mathbf{x}\right)=\langle\bigotimes_{d=1}^{D}\bigotimes_{ k=1}^{K_{d}}\mathbf{s}^{(d,k)}\left(x_{d}\right),\mathbf{w}\rangle,\]
_can be computed in \(\mathcal{O}(P)\)._
Proof.: See supplementary material.
Results for more general TNs can be found in the supplementary material.
### Increased model expressiveness
Constraining a tensor to be a tensor network allows us to distill the most salient characteristics of the data in terms of a limited number of effective parameters without destroying its multi-modal nature. This is also known as the _blessing of dimensionality_ (Cichocki, 2014) and is the general underlying concept behind tensor network-based methods. In the more specific context of supervised kernel machines, these well-known empirical considerations are also captured in the rigorous framework of VC-theory (Vapnik, 1998). Khavari and Rabusseau (2021, Theorem 2) have recently shown that the VC-dimension and pseudo-dimension of tensor network-constrained models of the form of Equation (9) satisfy the following upper bound _irrespective of the choice of tensor network_:
\[\text{VC}(f)\leq 2P\log(12|V|),\]
where \(|V|\) is the number of vertices in the TN. Since quantization of the model weights increases the number of vertices in their tensor network representation, quantized models attain higher upper bounds on the VC-dimension and pseudo-dimension _for the same number of model parameters_. For example, in the non-quantized case, parametrizing the TN as a CPD, TT or TR yields
\[\text{VC}(f)\leq 2P\log(12D),\]
while for the quantized case
\[\text{VC}(f_{\text{quantized}})\leq 2P\log(12D\log M).\]
Hence, in the case of CPD, TT and TR this additional model expressiveness comes at _no additional computational cost_ when training with gradient descent (Theorems A.3 and A.6). Setting \(Q=2\) is then in this sense an optimal choice for this additional hyperparameter, as it maximizes the upper bound. It should be noted that a higher VC-dimension does not necessarily imply better performance on unseen data; however, as we will see in Sections 5.1 and 5.2, quantization provides an additional source of regularization, and as such, quantized models do not tend to overfit.
Numerical Experiments
In all experiments we consider squared loss \(\ell(f(\mathbf{x}),y)=|f(\mathbf{x})-y|^{2}\), scale our inputs to lie in the unit box, and consider Fourier features (Definition 3.2) as they notably suffer less from ill-conditioning than polynomials. In all experiments we model the weight tensor as a CPD of rank \(R\). We do not consider other TNs in the numerical experiments for three reasons: first, it has been shown that tensor trains are more suited to model time-varying functions such as dynamical systems and time series, as opposed to CPD (Khrulkov et al., 2018). Second, CPD adds only one hyperparameter to our model as opposed to \(D\) hyperparameters for the tensor train or tensor ring. Choosing these hyperparameters (tensor train ranks) is not trivial and can yield models with very different performance for the same total number of model parameters. We hence chose to simply sidestep this issue. Third, CPD-based models are invariant to reordering of the features as opposed to tensor train. We believe that this invariance is very much desired in the context of kernel machines. We solve the ensuing optimization problem using ALS (Uschmajew, 2012). The source code and data to reproduce all experiments are available at [https://github.com/fwesel/QFF](https://github.com/fwesel/QFF).
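For reference, a simplified sketch of one ALS sweep for this CPD-constrained squared-loss problem is given below. For readability the ridge penalty is applied to the factor currently being updated, which is a simplification of the exact \(\lambda||\mathbf{w}||^{2}\) term in Equation (9); all shapes and names are illustrative.

```python
import numpy as np

# One ALS sweep for a rank-R CPD model with squared loss (simplified sketch).
# V[d]: per-dimension features of all N samples, shape (N, M_d).
# W[d]: CPD factors, shape (M_d, R).  y: targets, shape (N,).
def als_sweep(V, W, y, lam):
    N = y.shape[0]
    T = [Vd @ Wd for Vd, Wd in zip(V, W)]             # cached N x R products
    for d in range(len(W)):
        R = W[d].shape[1]
        A = np.ones((N, R))
        for dp in range(len(W)):                       # product over d' != d
            if dp != d:
                A = A * T[dp]
        # response is linear in W_d: f_n = sum_{m,r} V[d][n,m] A[n,r] W_d[m,r]
        Z = np.einsum("nr,nm->nrm", A, V[d]).reshape(N, -1)
        G = Z.T @ Z + lam * N * np.eye(Z.shape[1])     # simplified ridge term
        w_vec = np.linalg.solve(G, Z.T @ y)
        W[d] = w_vec.reshape(R, -1).T                  # back to (M_d, R)
        T[d] = V[d] @ W[d]                             # refresh cached product
    return W
```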
### Improved generalization capabilities
In this experiment we verify that, as expected, quantization positively affects the generalization capabilities of the resulting models. We compare our approach, which we name quantized tensor kernel machine (QTKM), with the non-quantized tensorized kernel machine (TKM) (Wahls et al., 2014; Stoudenmire and Schwab, 2016; Kargas and Sidiropoulos, 2021; Wesel and Batselier, 2021), random Fourier features (RFF) (Rahimi and Recht, 2007), and with the full, unconstrained model (kernel ridge regression (KRR), which is our baseline, as we are dealing in all cases with squared loss). For our comparison we select eight small UCI datasets (Dua and Graff, 2017). This choice allows us to train KRR by solving its dual optimization problem and thus to implicitly consider \(\prod_{d=1}^{D}M_{d}\) features. For each dataset, we select uniformly at random \(80\,\%\) of the data for training, and keep the rest for test. We set \(Q=2\) and select the remaining hyperparameters (\(\lambda\) and \(L\)) by 3-fold cross validating KRR. We set the number of basis functions \(M_{d}=16\) uniformly for all \(d\) for all models, so that they learn from the same representation (except for RFF, which is intrinsically random). We then vary the rank of the non-quantized tensorized model from \(R=1,2,\ldots,6\) and train all other models such that their number of model parameters \(P\) is at most equal to that of the non-quantized model. This means that for TKM \(P=R\sum_{d=1}^{D}M_{d}\), for QTKM \(P=2R\sum_{d=1}^{D}\log M_{d}\) and for RFF \(P\) equals the number of random frequencies. To make sure that TKM and QTKM converge, we run ALS for a very large number of iterations (5000). We repeat the procedure 10 times, and plot the mean and standard deviation of the test mean squared error (MSE) in Figure 1.
In Figure 1 one can observe that on all datasets, for the same number of model parameters \(P\) and identical features, the generalization performance of QTKM is equivalent or better in terms of test MSE. An intuitive explanation for these results is that for equal \(P\), quantization allows us to explicitly model correlations within each of the \(D\) modes of the feature map, yielding models with more learning capacity. We notice that while on most datasets the tensor-based approaches recover the performance of KRR, in one case, namely on the yacht dataset, the performance is better than the baseline, pointing to the regularizing effect of the quantized CPD model. In Figure 1 it can also be seen that, except on the examined 2-dimensional dataset, both tensor network models consistently outperform RFF. As we will see, these tensor network-based methods are able to find in a data-dependent way a parsimonious model representation given an exponentially large feature space. This is in contrast to random methods such as RFF, which perform feature selection prior to training and are in this sense oblivious to training data.
Figure 1: Plots of the test mean squared error as a function of the number of model parameters \(P\), for different real-life datasets. In blue, random Fourier features (Rahimi and Recht, 2007), in red tensorized kernel machines with Fourier features (Wahls et al., 2014; Stoudenmire and Schwab, 2016; Kargas and Sidiropoulos, 2021; Wesel and Batselier, 2021), in yellow quantized kernel machines with Fourier features, with quantization \(Q=2\). The gray horizontal full line is the full unconstrained optimization problem, which corresponds to kernel ridge regression (KRR). The grey vertical dotted line is set at \(P=N\). It can be seen that in the underparametrized case, quantization allows to achieve better generalization performance with respect to the non-quantized case.
### Regularizing effect of quantization
We would like to gain insight into the regularizing effect caused by modeling the quantized weights as an underparametrized tensor network. For this reason we investigate how the Fourier coefficients are approximated as a function of the CPD rank in a one-dimensional dataset. In order to remove other sources of regularization, we set \(\lambda=0\). The sound dataset (Wilson and Nickisch, 2015) is a one-dimensional time series regression task which comprises \(60\,000\) sampled points of a sound wave. The training set consists of \(N=59\,309\) points; the remainder is kept for test. Based on the Nyquist-Shannon sampling theorem, we consider \(M=2^{13}=8192\) Fourier features, which we quantize with \(Q=2\). We model the signal as having unit period, hence we set \(L=1\). The Fourier coefficients are modeled as a CPD tensor, with rank \(R=10,25,50,100\) in order to yield underparametrized models (\(P\ll M\)). We plot the magnitude of the Fourier coefficients, which we obtain by minimizing Equation (9) under squared loss.
We compare the magnitude of the quantized weights with the magnitude of the unconstrained model response, obtained by solving Equation (2), in Figure 2. From Figure 2 we can see that for low values of \(R\) the quantized kernel machine does not recover the coefficients associated with the lowest frequencies, as a data-independent approach would. Instead, we observe that the coefficients which are recovered for lower ranks, e.g. in the case of \(R=10\), are the peaks with the highest magnitude. This is explained by the fact that the additional modes introduced by 2-quantization force the underparametrized tensor network to model the nonlinear relation between the different basis functions which, under squared loss, maximize the energy of the signal. As the rank increases, the increased model flexibility allows it to model more independent nonlinearities. We can see that already for \(R=100\) the two spectra become almost indistinguishable. We report the relative approximation error of the weights and the standardized mean absolute error on the test set in the supplementary material.
### Large-scale regression
In order to showcase and compare our approach with existing literature in the realm of kernel machines, we consider the airline dataset (Hensman et al., 2013), an 8-dimensional dataset which consists of \(N=5\,929\,413\) recordings of commercial airplane flight delays that occurred in 2008 in the US. As is standard on this dataset (Samo and Roberts, 2016), we consider a uniform random draw of \(\nicefrac{{2}}{{3}}N\) for training and keep the remainder for the evaluation of the mean squared error (MSE) on the test set, and repeat the procedure ten times. In order to capture the complicated nonlinear relation between input and output, we resort to considering
Figure 2: Sound dataset. In red, plot of the magnitude of the quantized Fourier coefficients for different values of \(R\) and total number of model parameters \(P\). The magnitude of the full unconstrained Fourier coefficients is shown in black. It can be observed that increasing the CPD rank \(R\) recovers the peaks of frequencies with the highest magnitude.
\(M_{d}=64\) Fourier features per dimension, which we quantize with \(Q=2\). For this experiment, we set \(L=10\), \(\lambda=1\times 10^{-10}\) and run the ALS optimizer for 25 epochs. We train three different QTKMs with \(R=20,30,40\).
We present the results in Table 1, where we can see that QTKM (our approach) is best at predicting airline delay in terms of MSE. Other grid-based approaches, such as VFE (Hensman et al., 2017) or Hilbert-GP (Solin and Sarkka, 2020), are forced to resort to additive kernel modeling and thus disregard higher-order interactions between Fourier features pertaining to different dimensions. In contrast, QTKM is able to construct \(R\) data-driven explanatory variables based on an exponentially large set of Fourier features. When compared with its non-quantized counterpart TKM (Wesel and Batselier, 2021), we can see that our quantized approach outperforms it with approximately half of its model parameters. Training QTKM on the Intel Core i7-10610U CPU of a Dell Inc. Latitude 7410 laptop with 16 GB of RAM took \((6613\pm 40)\) s for \(R=20\) and \((13\,039\pm 114)\) s for \(R=40\).
## 6 Conclusion
We proposed to quantize Fourier and pure-power polynomial features, which allowed us to quantize the model weights in the context of tensor network-constrained kernel machines. We verified experimentally the theoretically expected increase in model flexibility, which allows us to construct more expressive models that learn from the same exact features with the same number of model parameters. These models benefit from additional tensor network regularization and do not tend to overfit. Possible future research directions stemming from our work are the development of additional regularization that accounts for quantization, possibly in a probabilistic framework, and an efficient multi-GPU implementation.
|
2303.00515 | Interpretable Water Level Forecaster with Spatiotemporal Causal
Attention Mechanisms | Forecasting the water level of the Han River is essential to control traffic
and avoid natural disasters. The stream flow of the Han River is affected by
various and intricately connected factors. Thus, a simple forecasting machine
frequently fails to capture its serial pattern. On the other hand, a complex
predictive model loses the interpretability of the model output. This work
proposes a neural network model with a novel transformer exploiting a causal
relationship based on prior knowledge. The transformer consists of
spatiotemporal attention weight that describes the spatial and temporal
causation with multilayer networks with masking. Our model has two
distinguished advantages against the existing spatiotemporal forecasting
models. First, the model allows the heterogeneous predictors for each site such
that a flexible regression is applicable to the causal network. Next, the model
is adapted to partially identified causal structures. As a result, we have
relaxed the constraints of the applicable causal network through our model. In
real data analysis, we use the Han River dataset from 2016 to 2021, compare the
proposed model with deep learning models, and confirm that our model provides
an interpretable and consistent model with prior knowledge, such as a
seasonality arising from the tidal force. Furthermore, in prediction
performance, our model is better than or competitive with the state-of-the-art
models. | Sunghcul Hong, Yunjin Choi, Jong-June Jeon | 2023-02-28T04:37:26Z | http://arxiv.org/abs/2303.00515v6 | # Interpretable Water Level Forecaster with Spatiotemporal Causal Attention Mechanisms
###### Abstract
Forecasting the water level of the Han river is essential to control traffic and avoid natural disasters. There are many variables related to the Han river, and they are intricately connected. In this work, we propose a novel transformer that exploits the causal relationship based on the prior knowledge among the variables and forecasts the water level at the Jamsu bridge in the Han river. Our proposed model considers spatial and temporal causation by formalizing the causal structure as a multilayer network and using masking methods. Due to this approach, we can have interpretability that is consistent with prior knowledge. In real data analysis, we use the Han river dataset from 2016 to 2021 and compare the proposed model with deep learning models.
Water level forecasting, Spatiotemporal dependence, Transformer, Interpretable AI
## 1 Introduction
With dramatic advances in computation and large-scale data storage, predictive models trained by neural networks are widely used in various industries and sectors such as finance, health care, and logistics management (Chatigny et al., 2021; Sezer et al., 2020; Avati et al., 2017; Kaneko and Yada, 2016). The predictive model automatically and periodically updates the model parameters by monitoring its performance, and it rapidly replaces simple and monotonous work conducted by humans. The popular use of neural network models owes to their predictive performance and flexibility for various task-oriented purposes (Begoli et al., 2019; Benidis et al., 2020; Carion et al., 2020; Shen et al., 2018).
However, when the neural network model fails to provide a good prediction output that was not expected in the model training phase, diagnosis and modification of the model are generally difficult. Because the neural network model is constructed from compositions of multiple nonlinear functions, the relation between input and output is not directly accounted for by the model parameters. The lack of accountability for the model outputs undermines the model's trustworthiness and discourages the use of the neural network model.
In developing a water level forecasting model, the need for model interpretability has been raised in the same context (Aguilera et al., 2019; Ding et al., 2020; Castangia et al., 2023). Model interpretability is required especially when the forecasting model provides an outcome differing from the expert's opinion or physical laws. Even when the neural network model outperforms the experts, it is not easy to accept the result produced by the machine without a confirmative analysis. In the expert domain, the predictive model necessarily follows known physics laws, and typically there are two types of issues for water level forecasting: spatial dependency and temporal dependency (Wu et al., 2020; Noor et al., 2022).
The spatial dependency is primarily related to the network structure consisting of multiple downstream branches. Geomorphologic factors, including the width of rivers, the structure of linked streams, and the height around the basin, affect the water levels observed at multiple sites. The station gauging the water level is usually regarded as a node, and the river stream is modeled as an edge on the network. The temporal dependency is closely related to the physical attributes of water flow explained by fluid dynamics. In a closed system, the water flow upstream directly determines that of downstream, and the estimation of water levels is of main interest as a functional form of time.
In the real world, the two types of dependencies are entangled with each other. The change of water levels at each spot depends on past observed water levels around spots in a river network. Conventional machine learning models for water level forecasting employ a two-step approach consisting of filtering the temporal dependence of serial observations and constructing spatial dependence with the filtered features. The temporal filter, such as autoregressive analysis, wavelet transformation, and empirical mode decomposition, extracts features in the systematic pre-processing phase (Yadav & Eliza, 2017; Wu et al., 2021). The filtered temporal features of multiple sites are spatially aggregated in nonlinear models such as the support vector machine, the neural network model, and the neuro-fuzzy system (Ruslan et al., 2014; Yadav & Eliza, 2017). Thus, the interpretability of the two types of dependencies disappears in the forecasting model due to the use of a complex model.
We tackle the challenging problem of developing a neural network model in which the spatial and temporal dependencies can be explained simultaneously. Our model is a variation of the attention neural network that describes correlations between spatial and temporal features. The attention model consists of two parts: the spatial attention weights and the temporal attention weights. The attention mechanism is formulated by masking the two attention weights so that the features follow our prior knowledge, such as temporal causality and spatial physics laws. The trained model explains a causal structure across spatial and temporal features as well as forecasting results. In addition, the proposed model employs a multilayer network that allows spatially heterogeneous predictors and partially identified causal structures.
Section 2 reviews related work on neural network models for spatiotemporal data analysis. Section 3 introduces the river network data of interest in this paper and the model assumptions governed by physics laws. Section 4 explains the proposed model, focusing on the new spatiotemporal attention. Section 5 shows the numerical results obtained from the river network data analysis, which provide explainable quantities for understanding a complex network. Concluding remarks follow in Section 6.
## 2 Related Work
In this section, we introduce probabilistic forecasting, interpretable AI, spatiotemporal modeling, and their related works.
**Probabilistic forecasting.** A probabilistic forecaster provides not point estimates but more informative quantities about target variables, such as a conditional distribution or multiple quantiles at multiple levels, for decision-making. The state-of-the-art models exploit as many features as possible, including historical, categorical, and even future information, to forecast the target variable accurately. At each time \(t\), all information excluding the target \(y_{t}\) and available future data \(\tilde{\mathbf{x}}\) forms the feature vector \(\mathbf{x}\in\mathbb{R}^{d}\). Let \(\mathcal{Q}\) be a target quantile level set and \(Q=|\mathcal{Q}|\), and define our forecasting process as:
\[\hat{\mathbf{y}}_{t+1:t+\tau,q}=F_{q}(\mathbf{y}_{t-(B-1):t},\mathbf{x}_{t-(B- 1):t},\tilde{\mathbf{x}}_{t-(B-1):t+\tau}),\]
where \(\hat{\mathbf{y}}_{t+1:t+\tau,q}\) is the predicted target quantile at level \(q\in\mathcal{Q}\) of the \(\tau\)-step-ahead forecast at time \(t\), and \(F_{q}\) is the forecaster at a quantile level \(q\).
The forecaster \(F_{q}\) uses all historical data including the target variables with look-back window \(B\) and known features with look-ahead window \(\tau\).
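At a single level \(q\), such forecasters are typically trained with the pinball (quantile) loss; a minimal sketch in PyTorch (the function name is ours):

```python
import torch

# Pinball (quantile) loss at level q: errors in one direction are weighted
# by q and in the other by (1 - q), so the minimizer is the q-th conditional
# quantile of the target.
def pinball_loss(y, y_hat, q):
    e = y - y_hat
    return torch.mean(torch.maximum(q * e, (q - 1) * e))

y = torch.tensor([1.0, 2.0, 3.0])
y_hat = torch.tensor([1.5, 1.5, 2.0])
print(pinball_loss(y, y_hat, q=0.9))    # penalizes under-prediction more
```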
Many deep learning-based probabilistic forecasters, such as DeepAR (Salinas et al., 2020), MQRNN (Wen et al., 2017), and TFT (Lim et al., 2021), have been proposed and are being widely used in various domains due to their powerful performances.
**Interpretable AI.** Interpretable AI has accelerated the development of reliable models in water resource management (Ding et al., 2020; Castangia et al., 2023). Among them, many applications have been conducted based on a TFT, which has both improved interpretability and notable predictive performance (Civitarese et al., 2021; Castangia et al., 2023). The TFT is a transformer-based model that quantifies the importance of variables with a variable selection network and attention mechanism, thereby enhancing interpretability. However, complex data structures such as spatial dependencies or heterogeneous data cause it to fail in assessing the importance of variables.
**Spatiotemporal modeling.** In hydrological time series forecasting, spatial modeling is as crucial as temporal dynamics modeling, so that the two regimes of the spatiotemporal stochastic structure are considered simultaneously. Deep learning based models are in the spotlight due to the straightforwardness of their spatiotemporal modeling. There are two main approaches to constructing such models. The first approach combines a canonical forecasting model with a graph neural network (GNN). The GNN-based models learn spatial dependencies using a graph convolution network (GCN) and temporal patterns using a recurrent neural network (RNN), temporal attention, or a temporal convolution network (TCN) (Deng et al., 2022). However, these models have the limitation that the model structure is not flexible enough to include heterogeneous types of spatial-temporal predictors across sites. The second approach is a restriction of the model architecture to enforce the spatial-temporal dependencies (Ding et al., 2020; Liu et al., 2022). These models are designed for specific domains or spatial structures, so their architectures need to be altered for new spatial structures.
Unlike the aforementioned methods, we propose a general model that can handle both spatiotemporal dependencies and heterogeneous data simultaneously. Our model uses a simple architecture instead of a GNN to model the spatial-temporal structure and surpasses the TFT in terms of interpretability.
## 3 Dataset and Multilayer Network
### Dataset
The Han river water level dataset has information on a dam and 5 bridges: Paldang dam (\(D\)), Cheongdam bridge (\(B_{1}\)), Jamsu bridge (\(B_{2}\)), Hangang bridge (\(B_{3}\)), Haengju bridge (\(B_{4}\)), and Ganghwa bridge (\(B_{5}\)). The Paldang dam is one of the most important sources of water supply and one of the main factors causing changes in the water level of the Han river. The variables of the Paldang dam are water level (WL), inflow (IF), outflow (OF), storage (STR), and joint usage storage (JUS). The 4 bridges, except the Jamsu bridge, have two variables: water level (WL) and flow (FL). The Jamsu bridge has only the water level, and it is our target variable. Our research area is presented in Figure 1.
Additionally, we use additional variables that contain precipitations of three sites (\(P_{1}\), \(P_{2}\), and \(P_{3}\)) around the Han river and temporal variables (month, day, and hour). All variables except temporal ones are aggregated on an hourly level. Then, the aggregated variables are normalized with a min-max scaler. The total period of the dataset is from 2016 to 2021. Descriptive statistics of variables are in Appendix.
### Multilayer Network for Spatiotemporal Modeling
A multilayer network is a very useful tool for modeling complicated patterns between variables in a hierarchical structure (Kivela et al., 2013), such as in biomedicine (Hammoud and Kramer, 2020) and community detection (Huang et al., 2020). Choi et al. (2022) captured spatiotemporal patterns of the bike-sharing system by stacking layers along the time axis in the multilayer network.
Motivated by these precedent studies, we utilize the multilayer network for representation learning of spatiotemporal variables. First, based on prior knowledge and previous studies of the Han river (Shin and Yoon, 2005; Park and Baek, 2017), we partition the sites into 4 groups: \(S_{1}=\{P_{1},P_{2},P_{3}\},S_{2}=\{B_{5}\},S_{3}=\{D\}\) and \(S_{4}=\{B_{1},B_{2},B_{3},B_{4}\}\), and define nodes \(v_{s},\ s\in\mathcal{S}=\{1,2,3,4\}\), corresponding to each group. Let \(\mathcal{G}\) denote a multilayer network, which is a tuple defined by a set of nodes \(\mathcal{V}\), a set of edges \(\mathcal{E}\), and a set of layers \(\mathcal{T}=\{1,\ldots,T\}\).
\[\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{T}),\ \mathcal{V}=\bigcup_{t\in \mathcal{T}}V_{t},\]
where \(V_{t}=\{v_{1,t},\ldots,v_{4,t}\}\) and \(v_{s,t}\) denote a node corresponding site \(s\) at \(t^{\text{th}}\) layer.
For ease of evaluation of spatiotemporal effects, we need to strike a compromise between the complexity and interpretability of the model. Assumption 1
Figure 1: Research area, the Han river located in Seoul, South Korea. The colored points are spatial sites, and the red one denotes our target site, the Jamsu bridge \(B_{2}\). The river flows from the east into the sea in the west.
restricts the network \(\mathcal{G}\) from being too complex.
**Assumption 1**.: _For \(s,s^{\prime}\in\mathcal{S}\) and \(t,t^{\prime}\in\mathcal{T}\), the multilayer network \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{T})\) satisfies the following conditions:_
1. _Suppose that_ \(s\neq s^{\prime}\) _and_ \((v_{s,t},v_{s^{\prime},t})\in\mathcal{E}\)_, then_ \((v_{s,t^{\prime}},v_{s^{\prime},t^{\prime}})\in\mathcal{E}\)_._
2. _Suppose that_ \(t<t^{\prime}\)_, then_ \(((v,t^{\prime}),(v^{\prime},t))\notin\mathcal{E}\) _and_ \(((v,t),(v,t^{\prime}))\in\mathcal{E}\) _for all_ \(v,v^{\prime}\in\mathcal{V}\)_._
Assumption 1.1 means that the spatial causalities are homogeneous across time points, and Assumption 1.2 blocks effects coming from the future. Based on Assumption 1.2 and prior knowledge, such as the geographical or hydrological characteristics of the target, we need to model identical causalities at each layer \(t\). The causal structure we propose is as follows:
\[v_{1}\to v_{3},v_{1}\to v_{4},v_{2}\to v_{4},v_{3}\to v_{4}. \tag{1}\]
From the causal viewpoint, we can say that \(v_{1}\) is a _cause_ or _parent_ of \(v_{3}\) and \(v_{4}\), and we define a set of parents of node \(v\) as \(\mathrm{Pa}(v)\), e.g., \(v_{1}\in\mathrm{Pa}(v_{3})\).
The node \(v_{1}\) represents precipitation values of three sites \((P_{1},P_{2},P_{3})\), and we regard it as a globally influential variable. \(v_{2}\) is the node corresponding to the Ganghwa bridge, \(B_{5}\). The Han river is a tidal river whose flow and level are affected by tides, and the Ganghwa bridge is located in the downstream area of the river (Park & Baek, 2017). Jung et al. (2018) thought highly of the predicted tide level of \(B_{5}\) as the most important covariate. \(v_{3}\) is related to the Paldang dam (\(D\)) that is located in the upper stream of the river and affects the water level of the Han river directly. That is, it is reasonable that \(\mathrm{Pa}(v_{4})=\{v_{1},v_{2},v_{3}\}\). The multilayer network \(\mathcal{G}\) for the Han river dataset based on the structure (1) is shown in Figure 2.
## 4 Proposed Model
We explained the spatiotemporal structure of our dataset for interpretability. This section proposes InstaTran (interpretable spatiotemporal attention transformer), an interpretable transformer that postulates the spatiotemporal dependencies as \(\mathcal{G}\). The overall architecture of the proposed model is shown in Figure 3. Before that, we spell out the notation used throughout this section. Note that all vectors are row vectors (not column vectors), and all tensor multiplications follow the operation rule adopted by the deep learning framework (e.g., PyTorch).
### Notations
Let \(\mathbf{x}_{s,t}\in\mathbb{R}^{d_{s}}\) for \(s\in\mathcal{S},t\in\mathcal{T}\) be an explanatory variable associated with the node \(v_{s,t}\) in \(\mathcal{G}\) and denote a collection of the variables across all sites at time \(t\) by \(\mathbf{x}_{t}=(\mathbf{x}_{1,t},\dots,\mathbf{x}_{4,t})\in\mathbb{R}^{d_{ \mathcal{S}}}\) with \(d_{\mathcal{S}}=\sum_{s\in\mathcal{S}}d_{s}\). Note that each \(\mathbf{x}_{s,t}\) for \(s\) is allowed to be a heterogeneous variable defined on \(\mathbb{R}^{d_{s}}\). Each component in \(\mathbf{x}_{t}\) is denoted by \(x_{i,t},i=1,\dots,d_{\mathcal{S}}\). We define a consecutive partition of \(\{1,\dots,d_{\mathcal{S}}\}\) by \(\mathcal{I}=\{I_{1},\dots,I_{4}\}\) that represents the indices of the components of \(\mathbf{x}_{s,t}\) within \(\mathbf{x}_{t}\). By definition, the cardinality of \(I_{s}\) is equal to \(d_{s}\). In addition to the site-specific variable \(\mathbf{x}_{t}\), we consider a categorical variable consisting of month, day, and hour, and denote the variable at time point \(t\) by \(\tilde{\mathbf{x}}_{t}\in\mathbb{R}^{3}\).
### Feature Embedding
The proposed model exploits both continuous and categorical variables. To capture seasonality, temporal variables such as month and day are essential. They are transformed into continuous features by the embedding layers \(\tilde{f}:\mathbb{R}\mapsto\mathbb{R}^{d_{\mathrm{emb}}}\), where \(d_{\mathrm{emb}}\) is the dimension of embedding. The continuous variables are also transformed by embedding layers \(f:\mathbb{R}\mapsto\mathbb{R}^{d_{\mathrm{emb}}}\). Note that both continuous and categorical variables are embedded in \(\mathbb{R}^{d_{\mathrm{emb}}}\).
We use a different embedding layer for each variable and denote the dependency of the embedding layer on each variable by a subscript, such as \(\tilde{f}_{j}(\cdot)\) or
Figure 2: The multilayer network (left) for the Han river from time point \(t-1\) to \(t+1\). Under the proposed multilayer network \(\mathcal{G}\), all layers have the causal structure (1) identically, and the spatial attention mask (right) is designed as \(M_{\mathcal{S}}\). The square blocks of elements on the diagonal enable us to model dependencies between variables within each site, and the rectangular blocks of elements off the diagonal consider the causal relationships between sites. The details of \(M_{\mathcal{S}}\) are discussed in Section 4.3.
\(f_{i}(\cdot)\). For \(t\in\mathcal{T}\),
\[\eta_{i,t} = f_{i}(x_{i,t}),\ i=1,\ldots,d_{\mathcal{S}}\] \[\tilde{\xi}_{j,t} = \tilde{f}_{j}(\tilde{x}_{j,t}),\ j=1,2,3.\]
Then, we aggregate categorical and spatial-dependent features into \(\eta_{i,t}\) and define a concatenated vector \(\xi_{t}\):
\[\tilde{\xi}_{t} = \frac{1}{3}(\tilde{\xi}_{1,t}+\tilde{\xi}_{2,t}+\tilde{\xi}_{3,t})\] \[\xi_{i,t} = \eta_{i,t}+\tilde{\xi}_{t}\] \[\xi_{t} = \xi_{1,t}\oplus_{\mathcal{F}}\cdots\oplus_{\mathcal{F}}\xi_{d_{ \mathcal{S}},t},\]
where \(\oplus_{\mathcal{F}}\) denotes the feature-wise (row-wise) concatenation operator.
Because \(\tilde{\xi}_{t}\) is the embedded vector associated with the time variables, it plays the role of a dynamic and trainable positional encoding in the transformer (Vaswani et al., 2017). The combined feature \(\xi_{t}\in\mathbb{R}^{d_{\mathcal{S}}\times d_{\text{emb}}}\) represents the embedded feature vector of the Han river at time point \(t\).
For historical and sequential information, we collect embedded feature vectors, and \(\Xi_{t}\in\mathbb{R}^{B\times d_{\mathcal{S}}\times d_{\text{emb}}}\) denotes the concatenated tensor.
\[\Xi_{t}=\xi_{t-(B-1)}\oplus_{\mathcal{T}}\xi_{t-(B-2)}\oplus_{\mathcal{T}} \cdots\oplus_{\mathcal{T}}\xi_{t-1}\oplus_{\mathcal{T}}\xi_{t}\]
where \(B\) is a look-back window size and \(\oplus_{\mathcal{T}}\) denotes the temporal-wise concatenation operator.
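A minimal PyTorch sketch of this embedding step is given below; the variable count, vocabulary sizes, and \(d_{\mathrm{emb}}\) are illustrative, and the class name is ours.

```python
import torch
import torch.nn as nn

# Sketch of the feature-embedding step: each continuous variable and each of
# the three temporal variables (month, day, hour) is embedded into R^{d_emb};
# the averaged temporal embedding xi_tilde_t is added to every continuous
# embedding, acting as a trainable positional encoding.
class FeatureEmbedding(nn.Module):
    def __init__(self, d_S=17, d_emb=16):
        super().__init__()
        self.cont = nn.ModuleList([nn.Linear(1, d_emb) for _ in range(d_S)])
        self.cat = nn.ModuleList([nn.Embedding(12, d_emb),   # month
                                  nn.Embedding(31, d_emb),   # day
                                  nn.Embedding(24, d_emb)])  # hour

    def forward(self, x, x_cat):
        # x: (B, d_S) continuous values over the B time points in the window;
        # x_cat: (B, 3) integer-coded temporal variables
        xi_tilde = torch.stack(
            [emb(x_cat[:, j]) for j, emb in enumerate(self.cat)]).mean(dim=0)
        xi = torch.stack(
            [f(x[:, i:i + 1]) + xi_tilde for i, f in enumerate(self.cont)], dim=1)
        return xi                                   # Xi_t: (B, d_S, d_emb)

Xi = FeatureEmbedding()(torch.randn(8, 17), torch.zeros(8, 3, dtype=torch.long))
print(Xi.shape)                                     # torch.Size([8, 17, 16])
```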
### Spatiotemporal Representation Learning
We propose a novel encoder architecture called spatiotemporal attention network (STAN) for compositive representation learning, which considers the spatial causal structure and temporal dependencies as shown in Figure 3. This novel method preserves both spatial and temporal dependencies by stacking various layers which have distinct roles in the model.
#### 4.3.1 Spatial Causal Attention Network
We propose the spatial causal representation learning architecture called spatial causal attention network (SCAN). The SCAN uses a spatial attention mask based on prior knowledge such as the causal structure (1). As a simple example, river water flows from upstream to downstream, so we assume that downstream water features cannot affect upstream ones. Therefore, the features of the sites should be learned with a representation learning method restricted accordingly.
First, we construct the spatial attention mask \(M_{\mathcal{S}}\in\{-\infty,1\}^{d_{\mathcal{S}}\times d_{\mathcal{S}}}\). The element of \(M_{\mathcal{S}}\), \(m_{ij}:=[M_{\mathcal{S}}]_{ij}\) is generated as follows:
\[m_{ij}=\begin{cases}1,&\text{if }i\in I_{s},\ j\in I_{s^{\prime}},\text{ and }v_{s^{\prime}}\in\text{Pa}(v_{s})\cup\{v_{s}\}\\ -\infty,&\text{otherwise.}\end{cases}\]
The pairs \((i,j)\) with \(m_{ij}=-\infty\) are excluded from the candidate features in the attention network; the exclusion is determined by prior knowledge or physical laws. The pairs with \(m_{ij}=1\) remain in the attention network, and their contributions are learned during training.
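As an illustration, the mask can be assembled directly from the parent sets of the sites; the sketch below uses a Boolean matrix in which `True` corresponds to \(1\) and `False` to \(-\infty\), and the parent structure in the example is hypothetical.

```python
import numpy as np

def build_spatial_mask(parents, dims):
    """Boolean version of M_S: entry (i, j) is True iff component i (of site s)
    may attend to component j (of site s'), i.e. s' is s itself or a parent of s.
    `parents` maps a site index to the set of its parent sites; `dims` gives d_s
    per site."""
    offsets = np.cumsum([0] + list(dims))
    mask = np.zeros((offsets[-1], offsets[-1]), dtype=bool)
    for s in range(len(dims)):
        allowed = set(parents.get(s, set())) | {s}
        rows = slice(offsets[s], offsets[s + 1])
        for sp in allowed:
            mask[rows, offsets[sp]:offsets[sp + 1]] = True
    return mask

# Hypothetical example with 4 sites: site 0 has no parent, 1 <- 0, 2 <- 1, 3 <- 2
M_S = build_spatial_mask({1: {0}, 2: {1}, 3: {2}}, dims=[4, 3, 3, 2])
```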
As in self-attention, the query (\(Q\)), key (\(K\)), and value (\(V\)) are all set to \(\Xi_{t}\) in the SCAN. With \(M_{\mathcal{S}}\), the SCAN filters out noisy effects that are not from parents as follows:
\[\text{SCAN}_{1}(\Xi_{t}) = \text{Softmax}(\text{SAS}_{1}(\Xi_{t}))\Xi_{t}W_{1,V}^{\mathcal{S}},\] \[\text{SAS}_{1}(\Xi_{t}) = \Xi_{t}W_{1,Q}^{\mathcal{S}}(\Xi_{t}W_{1,K}^{\mathcal{S}})^{ \top}/\sqrt{d_{\text{h}}}\odot M_{\mathcal{S}}\]
where Softmax denotes the softmax layer, \(W_{1,\cdot}^{\mathcal{S}}\in\mathbb{R}^{d_{\text{emb}}\times d_{\text{h}}}\) is a trainable weight, and \(d_{\text{h}}\) is the dimension of the hidden state. The element-wise product with
Figure 3: Architecture of proposed model (right) and details on SCAN and TAN (left).
broadcasting is denoted by \(\odot\).
We describe the tensor multiplication in detail. For the tensor multiplication in the SCAN, the transpose of 3-dimensional tensor \(A\) is \([A^{\top}]_{ijk}:=[A]_{ikj}\). The tensor multiplication at the time point \(i\) is defined as follows:
\[[AW^{\mathcal{S}}_{1,Q}(AW^{\mathcal{S}}_{1,K})^{\top}]_{i::}:=[AW^{\mathcal{S}}_{1,Q}]_{i::}[(AW^{\mathcal{S}}_{1,K})^{\top}]_{i::},\]
where \([AW^{\mathcal{S}}_{1,\cdot}]_{i::}:=[A]_{i::}W^{\mathcal{S}}_{1,\cdot}\). In a similar manner, \([A\odot M_{\mathcal{S}}]_{i::}:=[A]_{i::}\odot M_{\mathcal{S}}\), if \([A]_{i::}\in\mathbb{R}^{d_{\mathcal{S}}\times d_{\mathcal{S}}}\) for all \(i\).
By definition, \([AW^{\mathcal{S}}_{1,Q}(AW^{\mathcal{S}}_{1,K})^{\top}]_{i::}\) is the product of two spatial feature matrices for a fixed time \(i\), so the SCAN models the spatial dependencies within each time point. Through \(\text{SAS}_{1}\), the attention weights of the SCAN, we can quantify the spatial dependencies between variables in representation learning. For convenience of notation, we let
\[\Phi_{t,1} = \text{SCAN}_{1}(\Xi_{t}). \tag{2}\]
To sum up, \(\Phi_{t,1}\) is the 3-dimensional tensor of \(B\) stacked historical spatial dependency matrices at time \(t\).
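A minimal PyTorch sketch of a single SCAN layer is shown below. The mask is applied with `masked_fill`, which has the same effect as the \(\{-\infty,1\}\) element-wise mask above; the class and weight names are illustrative.

```python
import math
import torch
import torch.nn as nn

class SCAN(nn.Module):
    """Sketch of one spatial causal attention layer: attention over the d_S
    spatial components at every time step, restricted by the mask M_S."""

    def __init__(self, d_in, d_h):
        super().__init__()
        self.W_Q = nn.Linear(d_in, d_h, bias=False)
        self.W_K = nn.Linear(d_in, d_h, bias=False)
        self.W_V = nn.Linear(d_in, d_h, bias=False)
        self.d_h = d_h

    def forward(self, Xi, mask):
        # Xi: (batch, B, d_S, d_in); mask: (d_S, d_S) boolean, True = allowed
        Q, K, V = self.W_Q(Xi), self.W_K(Xi), self.W_V(Xi)
        scores = Q @ K.transpose(-1, -2) / math.sqrt(self.d_h)  # per time point
        scores = scores.masked_fill(~mask, float("-inf"))       # forbid non-parents
        return torch.softmax(scores, dim=-1) @ V                # Phi_{t,1}
```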
#### 4.3.2 Variable Selection Network
So far, we have introduced the method to manipulate spatial dependencies with masking. Next, we need to model temporal causality. Because \(\Phi_{t,1}\) is a 3-dimensional trainable tensor, it frequently causes computational instability and deteriorates the predictive performance of the trained model. To avoid these problems, we reduce the dimension of \(\Phi_{t,1}\) via a context vector.
Lim et al. (2021) constructs the context vector by a variable selection network (VSN) that averages over hidden states with trainable weights. First, we introduce a gated linear unit layer (GLU) with layer normalization (GLULN). Dauphin et al. (2016) proposes the GLU to control information flow passed on in the deep neural network with gating systems. Let \(\omega_{1},\omega_{2}\in\mathbb{R}^{d_{\omega}}\), and the procedures of gating mechanisms are defined as follows:
\[\text{GLU}_{\omega}(\omega_{1}) = \sigma(\text{FFN}_{\omega,1}(\omega_{1}))\odot\text{FFN}_{\omega, 2}(\omega_{1})\] \[\text{GLULN}_{\omega}(\omega_{1},\omega_{2}) = \text{LayerNorm}(\omega_{2}+\text{GLU}_{\omega}(\omega_{1})),\]
where \(\sigma\) and LayerNorm denote the sigmoid activation function and layer normalization, respectively. The \(\text{FFN}_{\omega,:}:\mathbb{R}^{d_{\omega}}\mapsto\mathbb{R}^{d_{\omega}}\) represents a feedforward network.
Through the GLU and GLULN, the model filters out the noise signal and has a flexible representation (Lim et al., 2021). With additional nonlinear activation and feedforward network, a gated residual network (GRN) enriches the feature vector \(\omega_{1}\) as follows:
\[\text{GRN}_{\omega}(\omega_{1}) = \text{GLULN}(\omega_{1},\text{DFFN}_{\omega}(\omega_{1}))\] \[\text{DFFN}_{\omega}(\omega_{1}) = \text{FFN}_{\omega,4}(\text{ELU}(\text{FFN}_{\omega,3}(\omega_{1}) )),\]
where ELU denotes the exponential linear unit activation function.
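The gating components can be summarized in a short PyTorch sketch; the class name and the shared dimension \(d\) are illustrative, and the composition follows the equations above.

```python
import torch
import torch.nn as nn

class GatedResidualNetwork(nn.Module):
    """Sketch of GLU / GLULN / GRN as defined above, with all FFNs mapping
    R^d -> R^d for simplicity."""

    def __init__(self, d):
        super().__init__()
        self.ffn1 = nn.Linear(d, d)   # gate branch of the GLU
        self.ffn2 = nn.Linear(d, d)   # candidate branch of the GLU
        self.ffn3 = nn.Linear(d, d)   # inner layer of the DFFN
        self.ffn4 = nn.Linear(d, d)   # outer layer of the DFFN
        self.norm = nn.LayerNorm(d)
        self.elu = nn.ELU()

    def glu(self, w1):
        return torch.sigmoid(self.ffn1(w1)) * self.ffn2(w1)

    def gluln(self, w1, w2):
        return self.norm(w2 + self.glu(w1))

    def forward(self, w1):
        # GRN(w1) = GLULN(w1, DFFN(w1))
        dffn = self.ffn4(self.elu(self.ffn3(w1)))
        return self.gluln(w1, dffn)
```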
We utilize the VSN to construct a context vector by aggregating vectors within the same time point. Let \(\phi^{t(i)}\in\mathbb{R}^{d_{\mathcal{S}}\times d_{\text{h}}}\) be the \(i^{\text{th}}\) matrix of \(\Phi_{t,1}\), where \(i\in\{1,\dots,B\}\). First, we flatten the matrix \(\phi^{t(i)}\) as \([\phi^{t(i)}_{1},\dots,\phi^{t(i)}_{d_{\mathcal{S}}}]\) and denote it by \(\text{Flat}(\phi^{t(i)})\in\mathbb{R}^{d_{\mathcal{S}}d_{\text{h}}}\). Then, by \(\text{FFN}_{w_{\phi}}:\mathbb{R}^{d_{\mathcal{S}}d_{\text{h}}}\mapsto\mathbb{R }^{d_{\mathcal{S}}}\), variable selection weights \(\bar{w}^{t(i)}_{\phi}\in\mathbb{R}^{d_{\mathcal{S}}}\) are computed by:
\[w^{t(i)}_{\phi} = \text{FFN}_{w_{\phi}}(\text{Flat}(\phi^{t(i)}))\] \[\bar{w}^{t(i)}_{\phi} = \text{Softmax}(\text{GRN}_{w_{\phi}}(w^{t(i)}_{\phi})). \tag{3}\]
Note that \(\bar{w}^{t(i)}_{\phi}\) represents the importance of variables learned from the first SCAN layer (2), which shares the same idea of Lim et al. (2021). Using \(\bar{w}^{t(i)}_{\phi}\), we generate a context vector \(\bar{\phi}^{t(i)}\in\mathbb{R}^{d_{\text{h}}}\) as follows:
\[\bar{\phi}^{t(i)} = \sum_{j=1}^{d_{\mathcal{S}}}\bar{w}^{t(i)}_{\phi,j}\phi^{t(i)}_{j}\] \[\bar{\Phi}_{t,1} = \bar{\phi}^{t(1)}\oplus_{\mathcal{T}}\dots\oplus_{\mathcal{T}} \bar{\phi}^{t(B)},\]
where \(\bar{w}^{t(i)}_{\phi,j}\) denotes the \(j^{\text{th}}\) elements of \(\bar{w}^{t(i)}_{\phi}\) and \(\bar{\Phi}_{t,1}\in\mathbb{R}^{B\times d_{\text{h}}}\) is a concatenated tensor of the aggregated vectors based on their importance.
Based on the above results, we define the VSN layers as follows:
\[\bar{\Phi}_{t,1}=\text{VSN}_{\bar{\Phi}_{1}}(\Phi_{t,1}). \tag{4}\]
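A sketch of the VSN in (3)-(4) is given below; it reuses the `GatedResidualNetwork` from the previous sketch, and the names are again illustrative.

```python
import torch
import torch.nn as nn

class VariableSelectionNetwork(nn.Module):
    """Sketch of the VSN: flatten each d_S x d_h slice, compute softmax
    weights through an FFN and a GRN, and return the weighted sum over the
    d_S variables for every time step in the look-back window."""

    def __init__(self, d_S, d_h):
        super().__init__()
        self.weight_ffn = nn.Linear(d_S * d_h, d_S)
        self.grn = GatedResidualNetwork(d_S)   # from the sketch above

    def forward(self, Phi):
        # Phi: (batch, B, d_S, d_h)
        flat = Phi.flatten(start_dim=-2)                   # (batch, B, d_S*d_h)
        w = torch.softmax(self.grn(self.weight_ffn(flat)), dim=-1)
        return (w.unsqueeze(-1) * Phi).sum(dim=-2)         # (batch, B, d_h)
```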
#### 4.3.3 Temporal Attention Network and the \(2^{\text{nd}}\) SCAN
The TAN layer is applied to \(\bar{\Phi}_{t,1}\) for modeling temporal dependencies. Both query and key are \(\bar{\Phi}_{t,1}\), and the value is flattened \(\Phi_{t,1}\), which is denoted by \(\text{Flat}(\Phi_{t,1})=\phi^{t(1)}\oplus_{\mathcal{T}}\dots\oplus_{\mathcal{T}} \phi^{t(B)}\in\mathbb{R}^{B\times d_{\mathcal{S}}d_{\text{h}}}\). The procedures in the TAN are as follows:
\[\text{TAN}_{1}(\bar{\Phi}_{t,1},\text{Flat}(\Phi_{t,1}))=\text{ Softmax}(\text{TAS}_{1}(\bar{\Phi}_{t,1}))\text{Flat}(\Phi_{t,1}),\]
where
\[\text{TAS}_{1}(\bar{\Phi}_{t,1})=\bar{\Phi}_{t,1}W^{\mathcal{T}}_{1,Q}(\bar{ \Phi}_{t,1}W^{\mathcal{T}}_{1,K})^{\top}/\sqrt{d_{\text{h}}}\odot M_{1,\mathcal{T}}, \tag{5}\]
\(W^{\mathcal{T}}_{1,\cdot}\in\mathbb{R}^{d_{\text{h}}\times d_{\text{h}}}\) is a trainable weight, and \(M_{1,\mathcal{T}}\in\mathbb{R}^{B\times B}\) denotes the temporal causal mask in STAN.
By using the output of the VSN, the TAN can accurately detect historical events such as dam discharge. The next step is reshaping the output tensor from \(B\times d_{S}d_{\mathrm{h}}\) to \(B\times d_{S}\times d_{\mathrm{h}}\).
\[\Psi_{t}=\mathrm{Reshape}(\mathrm{TAN}_{1}(\bar{\Phi}_{t,1},\mathrm{Flat}(\Phi_{t,1}))). \tag{6}\]
Note that \(\Psi_{t}\in\mathbb{R}^{B\times d_{S}\times d_{\mathrm{h}}}\) still preserves the spatial structure (1) because we use \(\Phi_{t,1}\), the output of the first SCAN, as the value of the attention. By the above procedure, the past features can affect the current features under the spatial structure.
To reflect the results of the TAN, we apply the SCAN layer again.
\[\mathrm{SCAN}_{2}(\Psi_{t}) = \mathrm{Softmax}(\mathrm{SAS}_{2}(\Psi_{t}))\Psi_{t}W_{2,V}^{ \mathcal{S}} \tag{7}\] \[\mathrm{SAS}_{2}(\Psi_{t}) = \Psi_{t}W_{2,Q}^{\mathcal{S}}(\Psi_{t}W_{2,K}^{\mathcal{S}})^{ \top}/\sqrt{d_{\mathrm{h}}}\odot M_{\mathcal{S}}\] \[\Phi_{t,2} = \mathrm{SCAN}_{2}(\Psi_{t}),\]
where \(W_{2,\cdot}^{\mathcal{S}}\in\mathbb{R}^{d_{\mathrm{h}}\times d_{\mathrm{h}}}\).
The attention weights of the second SCAN layer (7) are essential to interpret and quantify spatial effects.
We feed the second SCAN result, \(\Phi_{t,2}\), to the VSN that transfers summarized information to the decoder. The second VSN layer returns the filtered output \(\bar{\Phi}_{t,2}\in\mathbb{R}^{B\times d_{\mathrm{h}}}\) for accurate forecasting.
\[\bar{\Phi}_{t,2}=\mathrm{VSN}_{\bar{\Phi}_{2}}(\Phi_{t,2}). \tag{8}\]
The importance of variables for forecasting is evaluated by the variable selection weights of the second VSN, which is a similar computation as (3). The details are discussed in Section 5.3.
### Temporal Decoder
Inspired by the results of Wen et al. (2017), we propose the decoder architecture to improve the interpretability of the model. Wen et al. (2017) used the global and local context vectors from FFN layer without the LSTM layer. In our method, the decoder has two VSN layers. The first VSN summarizes \(\bar{\Phi}_{t,2}\) and constructs a global context vector \(\theta_{t,g}\in\mathbb{R}^{d_{\mathrm{h}}}\) in (9).
\[\theta_{t,g}=\mathrm{VSN}_{\theta_{g}}(\bar{\Phi}_{t,2}). \tag{9}\]
Then, the second VSN creates the local context vector from the embedded temporal features, which represent locality, as in (10). The pooled context vector at time \(t+k\) is computed by adding \(\theta_{t,g}\) to \(\theta_{t+k,c}\).
\[\theta_{t+k,c} = \mathrm{VSN}_{\theta_{c}}(\tilde{\xi}_{1,t+k}\oplus_{\mathcal{F}}\tilde{\xi}_{2,t+k}\oplus_{\mathcal{F}}\tilde{\xi}_{3,t+k}) \tag{10}\] \[\theta_{t+k} = \theta_{t,g}+\theta_{t+k,c},\]
where \(\theta_{t+k,c}\in\mathbb{R}^{d_{\mathrm{h}}}\).
For \(k=1,\ldots,\tau\), we aggregate the pooled context vectors \(\theta_{t+k}\) across the forecast horizon and concatenate them with \(\bar{\Phi}_{t,2}\).
\[\Theta_{t} = \theta_{t+1}\oplus_{\mathcal{T}}\cdots\oplus_{\mathcal{T}}\theta _{t+\tau},\] \[\Omega_{t} = \bar{\Phi}_{t,2}\oplus_{\mathcal{T}}\Theta_{t},\]
where \(\Theta_{t}\in\mathbb{R}^{\tau\times d_{\mathrm{h}}}\) and \(\Omega_{t}\in\mathbb{R}^{(B+\tau)\times d_{\mathrm{h}}}\). Note that \(\Theta_{t}\) is a feature vector containing information after time \(t\) such that \(\Omega_{t}\) aggregates forward and backward available features at time \(t\).
We convert \(\Omega_{t}\) through the second attention network in (11) to enrich the temporal feature.
\[\bar{\Omega}_{t}=\mathrm{TAN}_{2}(\Omega_{t},\Omega_{t}). \tag{11}\]
Finally, the quantile output layer, which is a feed-forward network, returns the predicted target sequence \(\hat{y}_{t,q}(k)\in\mathbb{R},\ k=1,\ldots,\tau\), as follows:
\[\hat{y}_{t,q}(k) = \mathrm{FFN}_{\hat{y},q}(\bar{\Omega}_{t}(k)),\ q\in\mathcal{Q}, \tag{12}\]
where \(\bar{\Omega}_{t}(k)\) denotes the \((B+k)^{\mathrm{th}}\) row of \(\bar{\Omega}_{t}\), \(k=1,\ldots,\tau\), with the look-ahead window size \(\tau\).
Our decoder predicts \(y_{t+k,q}\) directly, not recursively. The direct strategy (Chevillon, 2006; Taieb & Atiya, 2016; Wen et al., 2017; Lim et al., 2021) improves the performance because it avoids the error accumulation that causes biased forecasts. In addition, the model architecture is simple and efficient compared to the TFT because it shares model weights, as in (10). The proposed decoder is more potent than the one in the TFT in real data analysis (see Section 5.1).
### Loss Functions
We introduce the composite quantile loss (CQL) to forecast multiple quantiles. First, the quantile loss (QL) is defined by
\[\mathrm{QL}(y,\hat{y};q)=(q-\mathbb{I}_{\{y<\hat{y}\}}(y))(y-\hat{y}), \tag{13}\]
where \(\mathbb{I}_{\mathcal{A}}(a)\) returns \(1\) for \(a\in\mathcal{A}\) and \(0\), otherwise.
Then CQL is given by,
\[\mathrm{CQL}(\mathbf{W})=\sum_{t\in\mathcal{T}}\sum_{q\in\mathcal{Q}}\sum_{k=1}^{\tau}\frac{\mathrm{QL}(y_{t}(k),\hat{y}_{t,q}(k);q)}{|\mathcal{T}||\mathcal{Q}|\tau},\]
where \(\mathbf{W}\) denotes the entire set of weight parameters of the proposed model, and \(y_{t}(k)\) and \(\hat{y}_{t,q}(k)\) are the \(k^{\mathrm{th}}\) elements of the target \(y_{t}\) and the prediction \(\hat{y}_{t,q}\), which depends on \(\mathbf{W}\).
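For reference, the quantile loss (13) and the CQL can be written compactly as follows; the function names are ours and the snippet is a sketch rather than the released implementation.

```python
import torch

def quantile_loss(y, y_hat, q):
    """Pinball loss QL(y, y_hat; q) from (13), averaged over all elements."""
    diff = y - y_hat
    return torch.mean(torch.maximum(q * diff, (q - 1.0) * diff))

def composite_quantile_loss(y, y_hat_per_q, quantiles=(0.5, 0.7, 0.9)):
    """CQL: average of the quantile losses over the target quantile levels.
    `y_hat_per_q` maps each level q to a prediction tensor shaped like `y`."""
    return sum(quantile_loss(y, y_hat_per_q[q], q) for q in quantiles) / len(quantiles)
```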
## 5 Experiments
We demonstrate the superiority of the proposed model in terms of probabilistic forecasting and interpretation performance on a real dataset, the Han river water level dataset. The benchmark models are MQ-RNN (Wen et al., 2017), DeepAR (Salinas et al., 2020), TFT (Lim et al., 2021), and the proposed model, InstaTran. Among the benchmarks, the TFT is the most complex model with 99,497 parameters, followed by InstaTran (77,047), DeepAR (45,614), and MQ-RNN (4,099).
The dataset is split into a training set and a test set, consisting of data from 2016-2020 and 2021, respectively. Hyperparameters are tuned through cross-validation on the training set. Each benchmark model predicts the next 12 hours of WL(\(B_{2}\)) using the past 48 hours of data, i.e., \(\tau=12\) and \(B=48\). The set of target quantile levels is \(\mathcal{Q}=\{0.5,0.7,0.9\}\). Furthermore, we compute the importance measure of features to interpret the output of the proposed model. The experiments are implemented with PyTorch on an NVIDIA GeForce RTX 3090, and the source code is provided at [https://github.com/chulhongsung/InstaTran](https://github.com/chulhongsung/InstaTran).
For evaluation, we assess the performance of benchmark models using quantile loss (13) and calibration
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Metric & \(q\) & Parallel STAN & Without \(M_{\mathcal{S}}\) & With TFT decoder & InstaTran \\ \hline \multirow{3}{*}{\(q\)-level QL} & 0.9 & 0.0034 & 0.0025 & 0.0031 & **0.0021** \\ & 0.7 & 0.0072 & 0.0045 & 0.0051 & **0.0036** \\ & 0.5 & 0.0086 & 0.0048 & 0.0059 & **0.0040** \\ \multirow{3}{*}{\(q\)-Rate (\(|q-q\text{-Rate}|\))} & 0.9 & 0.936(0.036) & 0.946(0.046) & 0.798(0.102) & **0.924(0.024)** \\ & 0.7 & 0.894(0.194) & 0.838(0.138) & **0.638(0.062)** & 0.796(0.096) \\ & 0.5 & 0.823(0.323) & 0.666(0.166) & **0.623(0.123)** & 0.647(0.147) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance metrics of variants of the InstaTran. The best results are marked in bold.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Metric & \(q\) & DeepAR & MQ-RNN & TFT & InstaTran(proposed) \\ \hline \multirow{3}{*}{\(q\)-level QL} & 0.9 & 0.0027 & 0.0030 & **0.0019** & 0.0021 \\ & 0.7 & 0.0044 & 0.0051 & **0.0031** & 0.0036 \\ & 0.5 & 0.0039 & 0.0053 & **0.0033** & 0.0040 \\ \multirow{3}{*}{\(q\)-Rate (\(|q-q\text{-Rate}|\))} & 0.9 & 0.970(0.070) & 0.930(0.030) & 0.870(0.030) & **0.924(0.024)** \\ & 0.7 & 0.925(0.225) & 0.788(0.088) & **0.708(0.008)** & 0.796(0.096) \\ & 0.5 & 0.788(0.288) & 0.625(0.125) & **0.392(0.108)** & 0.647(0.147) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance metrics of benchmarks. The best results are marked in bold.
Figure 4: Attention weights of the second SCAN. (a) and (b) are cases on a dry day. (c) and (d) are cases on a rainy day. The x-axis and y-axis consist of variable indices: \(\{0:P_{1},1:P_{2},2:P_{3},3:\text{WL}(B_{5}),4:\text{WL}(D),5:\text{IF}(D),6: \text{STR}(D),7:\text{JUS}(D),8:\text{OF}(D),9:\text{WL}(B_{1}),10:\text{FL}(B_ {1}),11:\text{WL}(B_{2}),12:\text{WL}(B_{3}),13:\text{FL}(B_{3}),14:\text{WL}(B_ {4}),15:\text{FL}(B_{4})\}\).
metric \(q\)-Rate (Chen et al., 2012; Wen et al., 2017).
\[q\text{-level QL} = \sum_{t\in\mathcal{T}^{\prime}}\sum_{k=1}^{\tau}\frac{\text{QL}(y_{t}(k),\hat{y}_{t,q}(k);q)}{|\mathcal{T}^{\prime}|\tau} \tag{14}\] \[q\text{-Rate} = \sum_{t\in\mathcal{T}^{\prime}}\sum_{k=1}^{\tau}\frac{\mathbb{I}_{\{y<\hat{y}_{t,q}(k)\}}(y_{t}(k))}{|\mathcal{T}^{\prime}|\tau}, \tag{15}\]
where \(\mathcal{T}^{\prime}\) denotes the set of time points in the test dataset.
The \(q\)-Rate is the ratio of targets that fall below the \(q\)-level prediction, and it measures calibration. Therefore, a \(q\)-Rate close to the level \(q\) is preferable.
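For clarity, the calibration metric can be computed as in the following sketch (the function name is ours):

```python
import torch

def q_rate(y, y_hat_q):
    """Calibration metric (15): fraction of targets that fall below the
    q-level prediction; a value close to q indicates good calibration."""
    return torch.mean((y < y_hat_q).float()).item()
```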
### Ablation Studies of InstaTran
Before comparing with the benchmark models, we investigate the virtues of the proposed structures in InstaTran through ablation studies that remove the mask \(M_{\mathcal{S}}\) and the STAN. First, we visualize the attention weights of the second SCAN (7) with and without \(M_{\mathcal{S}}\) under two scenarios: a rainy day and a dry day. Figure 4 shows the causal representation learning induced by \(M_{\mathcal{S}}\). The results with the mask are reasonable in that the difference between the two days is distinct and, on the rainy day, the rain effect is transferred to the dam and the river, which matches our expectations. On the contrary, without \(M_{\mathcal{S}}\), the river effects flow back into the variables related to causes, such as the dam.
Second, we analyze the prediction performance of the proposed architectures, the STAN and the decoder, and consider four cases: the parallel structure of the STAN, the SCAN without \(M_{\mathcal{S}}\), the InstaTran with the decoder of the TFT, and the InstaTran without ablations. The parallel structure of the STAN means that the SCAN and TAN separately feed their hidden features and learn spatial and temporal features independently. Table 1 shows the proposed architecture's superiority in prediction performance. Compared to the other
Figure 5: Example of prediction results in the test set.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{TFT} & \multicolumn{4}{c}{InstaTran(proposed)} \\ \cline{2-9} & Mean(Std) & 0.1 & 0.5 & 0.9 & Mean(Std) & 0.1 & 0.5 & 0.9 \\ \hline \(P_{1}\) & 0.031(0.007) & 0.023 & 0.031 & 0.042 & 0.092(0.099) & 0.006 & 0.062 & 0.249 \\ \(P_{2}\) & 0.021(0.014) & 0.008 & 0.017 & 0.030 & 0.042(0.030) & 0.010 & 0.038 & 0.079 \\ \(P_{3}\) & 0.034(0.015) & 0.019 & 0.031 & 0.053 & 0.061(0.028) & 0.034 & 0.053 & 0.104 \\ \(\text{WL}(B_{5})\) & 0.131(0.016) & 0.110 & 0.132 & 0.153 & **0.179(0.070)** & **0.058** & **0.201** & **0.251** \\ \(\text{WL}(D)\) & 0.018(0.015) & 0.006 & 0.013 & 0.040 & 0.088(0.075) & 0.009 & 0.063 & 0.198 \\ \(\text{IF}(D)\) & 0.074(0.021) & 0.005 & 0.070 & 0.101 & 0.074(0.038) & 0.034 & 0.066 & 0.124 \\ \(\text{STR}(D)\) & 0.018(0.007) & 0.010 & 0.017 & 0.027 & 0.006(0.005) & 0.003 & 0.005 & 0.011 \\ \(\text{JUS}(D)\) & 0.012(0.011) & 0.003 & 0.008 & 0.027 & 0.057(0.023) & 0.034 & 0.053 & 0.083 \\ \(\text{OF}(D)\) & 0.096(0.013) & 0.080 & 0.095 & 0.111 & 0.048(0.025) & 0.023 & 0.040 & 0.088 \\ \(\text{WL}(B_{1})\) & 0.040(0.014) & 0.025 & 0.037 & 0.059 & 0.091(0.023) & 0.055 & 0.097 & 0.114 \\ \(\text{FL}(B_{1})\) & 0.015(0.007) & 0.008 & 0.013 & 0.023 & 0.022(0.015) & 0.008 & 0.017 & 0.046 \\ \(\text{WL}(B_{2})\) & **0.249(0.077)** & **0.148** & **0.253** & **0.348** & 0.084(0.048) & 0.045 & 0.064 & 0.160 \\ \(\text{WL}(B_{3})\) & 0.019(0.004) & 0.015 & 0.019 & 0.025 & 0.054(0.046) & 0.008 & 0.038 & 0.117 \\ \(\text{FL}(B_{3})\) & 0.019(0.012) & 0.009 & 0.016 & 0.034 & 0.040(0.081) & 0.006 & 0.014 & 0.083 \\ \(\text{WL}(B_{4})\) & 0.082(0.024) & 0.058 & 0.077 & 0.114 & 0.043(0.046) & 0.011 & 0.020 & 0.114 \\ \(\text{FL}(B_{4})\) & 0.016(0.010) & 0.007 & 0.014 & 0.028 & 0.014(0.040) & 0.050 & 0.004 & 0.026 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Descriptive statistics of variable importance. The most significant variables are marked in bold.
models, InstaTran with the proposed structures outperforms them in prediction at all quantile levels and in calibration at the high quantile level of 0.9.
### Comparison with Other Probabilistic Forecasts
We evaluate the benchmark models with the two metrics (14) and (15). Figure 5 shows examples of prediction results in the test set. The gray and black lines denote the past observations and the target water level of \(B_{2}\), respectively. The lower and upper boundaries of the blue band present the predicted 0.1- and 0.9-level quantiles of the target. The intervals of DeepAR are too wide, i.e., not sharp, so the results can be of little use. MQ-RNN produces biased intervals that do not capture the target levels. InstaTran and the TFT give reliable intervals at the sharp peaks of the target (see the full prediction results in the Appendix). Table 2 presents the performance on the test dataset and shows that the TFT outperforms the other models except in \(q\)-Rate at level 0.9. InstaTran is competitive with the TFT in \(q\)-level QL; in particular, InstaTran outperforms the others in \(q\)-Rate at level 0.9. DeepAR has a relatively low quantile loss, but Figure 5 and the \(q\)-Rate in Table 2 show that its quantile predictions fail. The full evaluation results from May to October are in the Appendix.
### Interpretation
In this section, we compare the interpretability of InstaTran with that of the TFT. The variable importance of InstaTran is quantified by the weights of the VSN layer in (8), while that of the TFT is measured by the weights of its first VSN layer. Table 3 presents the variable importance in prediction. Generally, the standard deviations of the importance in InstaTran are larger than those in the TFT, especially for \(P_{1}\). This means that the variable importance in the TFT is less sensitive to the observation or its effect. Additionally, the mean tends to be larger than the median in InstaTran, which implies that the importance distribution has a long right tail and takes significantly high values under critical events such as extreme rainfall.
The TFT fails to capture the important variables because it ignores causal and spatial relationships. The importance of WL(\(B_{2}\)) dominates the others in the TFT because the other importances are diluted into the past WL(\(B_{2}\)), and the TFT cannot distinguish a cause from a mediator. That is, in investigating the causal structure, the TFT provides no more information than a conventional autoregressive model. InstaTran values WL(\(B_{5}\)) over WL(\(B_{2}\)), which results from our causal assumption (1) that WL(\(B_{5}\)) is a cause of WL(\(B_{2}\)). For example, we can consider rainfall as a cause of the outflow and the upstream discharge as a mediator. Figure 6 shows the importance of \(P_{1}\) and the outflow of the dam. At time point 20, when \(P_{1}\) soars strikingly, InstaTran regards \(P_{1}\) as important, whereas the TFT overlooks this unforeseen change of \(P_{1}\). The effect of \(P_{1}\) is delivered to the dam \(D\), which then discharges water into the river. InstaTran notices that the discharge is caused by the heavy rainfall at time 20. On the contrary, in the TFT, the importance of \(P_{1}\) is overshadowed by OF(\(D\)). InstaTran learns a causal representation via the SCAN, which helps to identify the causes in a complex data structure. From Figure 4, we note that \(P_{1}\) is the main cause of OF(\(D\)) on rainy days. The VSN can exploit this causal representation from the second SCAN output to build the context vector. Figure 7 shows the temporal attention weights in the STAN at time points 47 and 67. The weights at time point 23, when the effect of \(P_{1}\) activates, are larger than the others, and the weights in the red boxes follow a similar pattern. This means that the TAN layer can capture significant events from the first VSN output (4) and yields consistent results at different time points.
## 6 Conclusion and Limitation
This paper proposes an interpretable water level forecaster based on prior knowledge. To reflect prior knowledge, our model learns feature representations under a constrained multilayer network. This representation learning method contributes to the ability to capture the underlying causes of target predictions. Additionally, our proposed model is simple, consisting only of fully connected layers, yet more powerful than several existing models.
Despite its remarkable interpretability, our model performs worse than the existing model (TFT), but we argue that the proposed model is competitive and reasonable. We leave performance improvements and extensions of our model to other domains as future work.
|
2309.15625 | Leveraging Topology for Domain Adaptive Road Segmentation in Satellite
and Aerial Imagery | Getting precise aspects of road through segmentation from remote sensing
imagery is useful for many real-world applications such as autonomous vehicles,
urban development and planning, and achieving sustainable development goals.
Roads are only a small part of the image, and their appearance, type, width,
elevation, directions, etc. exhibit large variations across geographical areas.
Furthermore, due to differences in urbanization styles, planning, and the
natural environments; regions along the roads vary significantly. Due to these
variations among the train and test domains, the road segmentation algorithms
fail to generalize to new geographical locations. Unlike the generic domain
alignment scenarios, road segmentation has no scene structure, and generic
domain adaptation methods are unable to enforce topological properties like
continuity, connectivity, smoothness, etc., thus resulting in degraded domain
alignment. In this work, we propose a topology-aware unsupervised domain
adaptation approach for road segmentation in remote sensing imagery.
Specifically, we predict road skeleton, an auxiliary task to impose the
topological constraints. To enforce consistent predictions of road and
skeleton, especially in the unlabeled target domain, the conformity loss is
defined across the skeleton prediction head and the road-segmentation head.
Furthermore, for self-training, we filter out the noisy pseudo-labels by using
a connectivity-based pseudo-labels refinement strategy, on both road and
skeleton segmentation heads, thus avoiding holes and discontinuities. Extensive
experiments on the benchmark datasets show the effectiveness of the proposed
approach compared to existing state-of-the-art methods. Specifically, for
SpaceNet to DeepGlobe adaptation, the proposed approach outperforms the
competing methods by a minimum margin of 6.6%, 6.7%, and 9.8% in IoU, F1-score,
and APLS, respectively. | Javed Iqbal, Aliza Masood, Waqas Sultani, Mohsen Ali | 2023-09-27T12:50:51Z | http://arxiv.org/abs/2309.15625v1 | # Leveraging Topology for Domain Adaptive Road Segmentation in Satellite and Aerial Imagery
###### Abstract
Getting precise aspects of road through segmentation from remote sensing imagery is useful for many real-world applications such as autonomous vehicles, urban development and planning, and achieving sustainable development goals (SDGs) 1. Roads are only a small part of the image, and their appearance, type, width, elevation, directions, etc. exhibit large variations across geographical areas. Furthermore, due to differences in urbanization styles, planning, and the natural environments; regions along the roads vary significantly. Due to these variations among the train and test domains (domain shift), the road segmentation algorithms fail to generalize to new geographical locations. Unlike the generic domain alignment scenarios, road segmentation has no scene structure and generic domain adaptive segmentation methods are unable to enforce topological properties like continuity, connectivity, smoothness, etc., thus resulting in degraded domain alignment. In this work, we propose a topology-aware unsupervised domain adaptation approach for road _segmentation_ in remote sensing imagery. During domain adaptation for road segmentation, we predict road skeleton, an auxiliary task to enforce the topological constraints. To enforce consistent predictions of road and skeleton, especially in the unlabeled target domain, the _conformity loss_ is defined across the skeleton prediction head and the road-segmentation head. Furthermore, for self-training, we filter out the noisy pseudo-labels by using a connectivity-based pseudo-labels refinement strategy, on both road and skeleton segmentation heads, thus avoiding holes and discontinuities. Extensive experi
ments on the benchmark datasets show the effectiveness of the proposed approach compared to existing state-of-the-art methods. Specifically, for SpaceNet to DeepGlobe adaptation, the proposed approach outperforms the competing methods by a minimum margin of 6.6%, 6.7%, and 9.8% in IoU, F1-score, and APLS, respectively.
keywords: Remote sensing, Road segmentation, Domain adaptation, Self-training, Deep learning, Sustainable cities and communities.
Footnote †: journal: Computer Vision and Pattern Recognition
## 1 Introduction
Roads, defined simply as _'a wide way leading from one place to another'_2, play a vital role in limiting or expanding the opportunities to connect. Building a reliable, up-to-date, and comprehensive map of roads with attributes like width, number of lanes, type, etc., is crucial not only for economic analysis, public policy, future development (building smart cities), and private businesses but to the modern world of autonomous transportation too. However, segmenting the unlabeled roads or updating the labels and attributes for the existing ones is labor-intensive, manually cumbersome, and costly [1], which becomes further crucial and time-sensitive in the case of natural disasters. Some recent works used graph-based approaches to extract and map road networks [2; 3]. However, these graph-based approaches only predict the center lines as edges between two possible nodes and are not suitable for extracting road characteristics like road width, number of lanes, type, etc. In this work, we specifically focus on the segmentation and adaptation of roads in remote-sensing imagery.
Footnote 2: [https://www.lexico.com/definition/road](https://www.lexico.com/definition/road)
Recently, due to the availability of annotated datasets, such as DeepGlobe [4] and SpaceNet [5], deep learning-based automatic road segmentation from satellite imagery has shown promising results [6; 2; 3; 7]. However, these models fail to segment roads accurately in unseen geographical regions [8; 9]. This behavior is attributed to the _domain shift_ between the source (training) dataset and the unseen target (testing) dataset from another domain. For satellite imagery, domain shift can occur due to the different image-capturing modalities, lighting conditions, geographical patterns, and the resolution difference between the source and target domains.
Compared to generic domain adaptation methods for semantic segmentation [10; 11; 12; 13; 14; 15], the domain adaptation for satellite road segmentation
poses specific challenges. In the case of generic semantic segmentation, the structure of the scene (roads and objects are mostly in the lower part of the image and the sky is above) helps in domain adaptation, whereas satellite/aerial imagery does not have such structure or geometry available [16]. A road's appearance in satellite imagery depends more on the attributes of the geographical region (buildings on the edge of the road, natural environment, deserts, etc.), and the adaptation performance usually suffers due to the large and unstructured background area. Additionally, depending on the resolution of the satellite/aerial imagery, road width varies across regions and may constitute only a limited number of pixels, making the road segmentation adaptation problem much more challenging.
To address the challenge of road segmentation domain adaptation, we propose a topology-based self-supervised adaptation strategy. Depending upon the construction, geography of the region, and satellite imagery resolution, road widths vary across the regions (source and target domains in our case) as illustrated in Figure 1. This further exacerbates the noisy nature of the pseudo-labels that are
Figure 1: The source-trained model fails to generate accurate road segmentation for target domain images. Predicted roads in the target domain are broken and disconnected; as indicated by the probability map, the probabilities slowly decrease along the direction of the road and become too small to be labeled as road. After the adaptation, we see better results with more spatial connectivity and completeness. Best viewed in color.
generated using models trained on the source data. Figure 1 also shows the deteriorated performance of the source model before adaptation. We observe that the road skeleton is well defined across domains, conceptually more domain invariant (i.e., it does not depend on road width), and can help preserve topology during domain adaptation. Hence, we incorporate road skeleton (center-line) prediction alongside road segmentation in a multi-task learning scenario, making the model more generalizable and using this auxiliary information to generate more refined pseudo-labels during self-training. In addition, the skeleton helps preserve topological concepts such as connectivity, continuity, and structure.
To enforce road connectivity, inspired by the classical line detection method [17], we define a _connectivity-based pseudo-label refinement_ strategy (CBR). Using connectivity (neighborhood) information helps us improve the quality of pseudo-labels, which eventually increases the adaptation performance (Figure 6 and Figure 7). To decrease the difference between the source and target domain feature distributions, we perform adversarial learning at the encoder level. Note that we show that adversarial learning alone cannot align the features of both domains due to the large data imbalance, as it is driven by the background (Table 4).
To summarize, this work presents the following contributions. We identify the major limitation of applying existing domain adaptation methods to the road-segmentation problem, i.e., these methods do not exploit the topological properties of the road. To overcome this limitation, we design a multi-task learning strategy of predicting the skeleton along with the road segmentation head, resulting in enriched features. A conformity loss between skeleton prediction and road segmentation is applied as a regularizer to guide the _adaptation_ over the unlabeled target domain. We exploit the desired properties of the road, such as connectivity and continuity, to clean the pseudo-labels (connectivity-based refinement of pseudo-labels), resulting in better self-supervised learning/self-training. State-of-the-art domain adaptation performance is achieved on benchmark road segmentation datasets.
## 2 Related Work
In recent years, the accessibility of satellite imagery has improved multi-fold, resulting in the collection of large satellite imagery datasets. Deep learning has been applied to various problems including building/built-up regions segmentation [18; 16; 19; 20], destruction and change detection [21; 22; 23; 24; 25],
houses/structures counting [26; 27; 28; 29; 30], and slum detection and mapping [31; 32]. Despite the great success of deep learning-based approaches in remote sensing imagery, very little attention has been devoted to domain adaptive road segmentation. In this section, we briefly review the road segmentation and adaptation methods related to the proposed approach.
### Road Segmentation
In recent years, many deep learning-based models have been proposed for semantic segmentation [33; 34; 35; 36; 37; 38]. Mostly, these approaches are designed for ground-level road scenes or other generic scenes. Road segmentation in satellite and aerial images is a challenging problem due to large variations in background, building shadows, trees, etc. Modifying these generic segmentation methods, several approaches have been designed for road extraction from aerial and satellite imagery [6; 39]. The authors in [39] and [40] learn road direction and connectivity, respectively, alongside road segmentation to preserve directional and connectivity information. Similarly, the authors in [41] embedded dilated convolutions in [42] to better preserve edge and boundary information. Although these approaches work reasonably well, they require large and precisely annotated road segmentation datasets, which are not trivial to collect.
Road skeleton segmentation can help capture the structure and topology of roads, which are important characteristics of road networks. The authors in [43] used skeleton/center-line extraction to incrementally repair broken links in road segmentation, whereas [44; 45] predict the skeleton alongside road segmentation. [44] reported only a slight improvement in road segmentation when the skeleton prediction task was added. These methods require labeled data, do not attempt to perform domain alignment, and do not employ any structural or topological consistency, and hence produce deteriorated results when exposed to unseen target domain images. We explicitly enforce structural consistency by employing a conformity loss and, for self-supervised domain adaptation, improve pseudo-label quality by performing connectivity-based refinement.
### Domain Adaptation
Domain adaptation approaches [46; 47; 48; 49; 50; 51; 10; 16; 52] are widely used to minimize the domain shift between source and target domains. Adversarial learning is the most common approach to minimize the domain gap between source and target domains for segmentation, either in the input space [53; 54], the structured output space [55; 56; 57; 58; 48], or the feature space representations [59; 16]. In recent years, many domain adaptation approaches based on
self-training have been proposed with state-of-the-art results for generic semantic segmentation [60; 61; 10; 51; 52; 62]. The main idea of self-training-based methods is to select, for each class, the most confident pixels as pseudo-labels and then fine-tune the source-trained model on target images using these pseudo-labels [10; 61]. These approaches exploit contextual information (e.g., vehicles are always on the ground and the sky is always above), and since no such structure exists in satellite images, they fail to perform well on road segmentation problems. Similarly, adversarial learning matches global image features or output probability distributions, which, in a two-class setting like road segmentation, are overwhelmed by the background (non-road areas).
Despite great progress in domain adaptation for semantic segmentation, road segmentation adaptation is still an open challenge [63; 64; 65; 9; 8]. The authors in [63] presented two-stage adversarial learning, where the first stage aligns inter-domain global features and the second stage aligns features of intra-domain hard and easy examples using adversarial learning. Similarly, the authors in [8] match the source and target features at global and local levels for better adaptation. However, these approaches for domain adaptation of road segmentation still fail to impose road-specific constraints that preserve the characteristics and topology of roads. Compared to these existing approaches, we explicitly try to preserve road structure and connectivity by employing simultaneous road and skeleton segmentation and adaptation, a structural conformity preservation loss, and connectivity-based pseudo-label refinement. These problem-specific components significantly improve the performance compared to existing state-of-the-art methods.
## 3 Methodology
In this section, we provide details of our proposed unsupervised domain adaptation (UDA) method for road segmentation. Our approach exploits the topological properties of roads by performing connectivity-based pseudo-label refinement for improved pseudo-label selection. During adaptation, the model is encouraged to learn domain-invariant features through the road skeleton prediction task. Since the target domain is unlabeled, the skeleton prediction task is trained by applying a consistency/conformity loss across the skeleton prediction and road segmentation heads. We use DLinkNet [41] as the base _road segmentation method_. Due to its dilated convolutions, large receptive field, and skip connections, DLinkNet captures global, multi-scale, and contextual information, resulting in more accurate segmentation results (Table 1).
### Preliminaries
#### 3.1.1 Problem Formulation
Let \(\mathcal{S}\) be the source domain consisting of labeled satellite images \(I_{s}\) and the corresponding road segmentation labels \(Y_{s}\). The objective of unsupervised domain adaptation is to generalize the model trained on \(\mathcal{S}\) to the target domain \(\mathcal{T}\), where satellite images \(I_{t}\) are available but ground-truth labels \(Y_{t}\) are not. Without loss of generality, we can assume \(I_{s},I_{t}\in\mathbb{R}^{H\times W\times 3}\) and \(Y_{s},Y_{t}\in\mathbb{Z}^{H\times W\times C}\), where \(H\) and \(W\) are the height and width of the image, and \(C\) is the total number of classes a pixel can belong to. In a supervised learning setting for the source domain \(\mathcal{S}\), the objective is to learn a segmentation model \(g\in G:I\to Y\)
Figure 2: **Network Architecture:** We use a multi-head segmentation network to predict roads and skeleton segmentation with a shared encoder. Cross entropy loss for both road and skeleton is back-propagated for source and target domain images based on ground truth labels and pseudo-labels, respectively. A structural conformity loss is used to ensure the structural conformity between road and skeleton predictions using the skeleton pseudo-labels. Moreover, a discriminator is defined at the encoder to align the features of source and target domain images. During the evaluation, only the road segmentation head is used.
by minimizing the cross-entropy loss function \(\mathcal{L}_{seg}\), defined in Eq. (1).
\[\mathcal{L}_{seg}(I_{s},Y_{s})=-\frac{1}{H\times W}\sum_{h,w}^{H,W}\sum_{c}^{C}Y_ {s}^{h,w,c}\log(P_{s}^{h,w,c}) \tag{1}\]
Here, \(g(I_{s})=P_{s}\in\mathbb{R}^{H\times W\times C}\) is the segmentation probability map, where \(P_{s}^{h,w,c}\) is the probability of pixel \((h,w)\) belonging to class \(c\).
**Pseudo-Label Selection:** To overcome the unavailability of \(Y_{t}\) in the target domain, typical self-supervised adaptation methods perform pseudo-label selection from probable predictions [51; 10; 12; 61]. Let \(P_{t}=g(I_{t})\) be the output probability volume for the target image \(I_{t}\) predicted by the source-trained model; then pseudo-labels \(\hat{Y_{t}}\) are computed by setting \(\hat{Y_{t}}^{h,w,c}=\mathbbm{1}[P_{t}^{h,w,c}=\max(P_{t}^{h,w})]\). Since these predictions might be noisy for \(I_{t}\), low-confidence ones are masked out and not used during back-propagation.
\[\begin{split}&\hat{\mathcal{L}}_{seg}(I_{t},\hat{Y_{t}},m_{t})=\\ &-\frac{1}{H\times W}\sum_{h,w}^{H,W}m_{t}^{h,w}\sum_{c}^{C}\hat {Y_{t}}^{h,w,c}\log(P_{t}^{h,w,c})\end{split} \tag{2}\]
where \(m_{t}^{h,w}\in\{0,1\}\) depending upon whether this pixel is in the pseudo-label set or not. Generally, \(m_{t}^{h,w}=\mathbbm{1}[P_{t}^{h,w,k}\geq T]\), where \(T\) is user defined threshold and \(k=\arg\max_{c}P_{t}^{h,w,c}\). The network is trained over the \(\,Y_{s}\), and pseudo labels \(\hat{Y_{t}}\) (using \(m\)), which are updated after each round (set of convergence iterations, more details in Sec. 4.1).
#### 3.1.2 Domain adaptation for road segmentation
In the case of road segmentation, Eq. 1 and Eq. 2 reduce to binary cross-entropy instead of categorical cross-entropy. This results in \(P_{s},P_{t}\in\mathbb{R}^{H\times W\times 1}\), representing the output probability map of whether a pixel belongs to the road class or not. The pseudo-labels are defined as \(\hat{Y_{t}}^{h,w}=\mathbbm{1}[P_{t}^{h,w}\geq(1-P_{t}^{h,w})]\). We backpropagate gradients only for those pixels where we are sure (i.e., have high predictive probability) that they either belong to the road or do not belong to the road. Therefore, the mask \(m_{t}\) is computed by checking two thresholds, \(m_{t}^{h,w}=\mathbbm{1}[P_{t}^{h,w}>T_{r}||(1-P_{t}^{h,w})>T_{b}]\), where \(T_{r}\) and \(T_{b}\) are thresholds for the road and background class (not belonging to the road), respectively.
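For illustration, the two-threshold selection for the binary road case can be written as the following sketch; the function name is ours, and the default threshold values follow the settings reported later in Sec. 4.1.

```python
import torch

def select_road_pseudo_labels(P_t, T_r=0.9, T_b=0.3):
    """Two-threshold pseudo-label selection for the binary road class:
    confident road pixels (P > T_r) are labeled 1, confident background
    pixels (1 - P > T_b) are labeled 0, and everything else is masked out
    of the loss."""
    y_hat = (P_t >= 0.5).float()                    # P >= 1 - P
    mask = (P_t > T_r) | ((1.0 - P_t) > T_b)
    return y_hat, mask.float()
```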
### Multi-task self-supervised domain adaptation
One component of the domain shift is the variation in road width across domains and in how the boundary is delineated due to changes in the background (buildings on the edge of the road, natural environment, desert, etc.). On the other hand, the concept of the center-line or skeleton remains the same across domains. In addition, the skeleton in general embodies topological information like continuity and connectivity much better than what is captured by the road-surface segment. Therefore, in this work, we employ skeleton prediction as an auxiliary task to improve road segmentation across domains (Figure 2). The training labels for the skeleton prediction are obtained by skeletonizing the true labels of the source domain \(\mathcal{S}\) to a single-pixel width using [66]. The skeleton segmentation head is trained using Eq. 3,
\[\begin{split}\mathcal{L}_{s}^{\phi}(I_{s},Y_{s}^{\phi})& =-\frac{1}{HW}\sum_{h,w}^{H,W}[Y_{s}^{\phi,(h,w)}\ \log(P_{s}^{\phi,(h,w)})\\ &\quad+(1-Y_{s}^{\phi,(h,w)})\ \log(1-P_{s}^{\phi,(h,w)})],\end{split} \tag{3}\]
where \(\mathcal{L}_{s}^{\phi}(I_{s},Y_{s}^{\phi})\) is the skeleton segmentation loss for source domain image \(I_{s}\) with respective labels \(Y_{s}^{\phi}\in\mathbb{R}^{H\times W}\) and predicted probability map \(P_{s}^{\phi}\in\mathbb{R}^{H\times W}\). For target domain images, similar to road pseudo-labels, we define skeleton pseudo-labels based on the probabilities of the skeleton segmentation head and then use Eq. 4.
\[\begin{split}\mathcal{L}_{t}^{\phi}(I_{t},\hat{Y}_{t}^{\phi})& =-\frac{1}{HW}\sum_{w,h}^{H,W}m_{t}^{\phi,(h,w)}[\hat{Y}_{t}^{\phi, (h,w)}\ \log(P_{t}^{\phi,(h,w)})\\ &\quad+(1-\hat{Y}_{t}^{\phi,(h,w)})\ \log(1-P_{t}^{\phi,(h,w)})]\end{split} \tag{4}\]
where \(m_{t}^{\phi,(h,w)}=\mathbb{1}[P_{t}^{\phi,(h,w)}>T_{r}^{\phi}||(1-P_{t}^{\phi,(h,w)})>T_{b}^{\phi}]\), and \(T_{r}^{\phi}\) and \(T_{b}^{\phi}\) are thresholds for the skeleton and background class, respectively. \(m_{t}^{\phi,(h,w)}\) allows the selection of pseudo-labels where the model has predicted either the skeleton or the background with high confidence. \(\hat{Y}_{t}^{\phi}\in\mathbb{R}^{H\times W}\) and \(P_{t}^{\phi}\in\mathbb{R}^{H\times W}\) represent the skeleton pseudo-labels and the predicted probability map for the input target image \(I_{t}\), respectively.
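As an illustration, skeleton training labels of the kind described above can be produced from a binary road mask with a standard thinning routine; the snippet below uses skimage's `skeletonize` as a stand-in for the method of [66].

```python
import numpy as np
from skimage.morphology import skeletonize

def make_skeleton_labels(road_mask):
    """Thin a binary road mask to a single-pixel-wide center-line to obtain
    skeleton training labels for the source domain."""
    return skeletonize(road_mask.astype(bool)).astype(np.float32)
```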
### Connectivity-Based Pseudo Labels Refinement (CBR)
Effective self-training-based domain adaptation requires a reasonable number of good pseudo-labels for each class. The confidence-based pseudo-label assignment and threshold-based masking (Sec. 3.1) result in highly imbalanced
pseudo-labels. In the case of road segmentation, these pseudo-labels might result in broken segments or holes, thus the concept of continuity might not be enforced on the target domain as shown in Figure 4 (before CBR).
Note that the pseudo-label assignment and selection are based on the two thresholds, \(T_{r}\) and \(T_{b}\), where \(T_{r}\) selects the road pixels, and \(T_{b}\) is used for selecting non-road pixels. For ones having predicted probability in-between \(T_{r}\) and \((1-T_{b})\), we set \(m_{t}^{h,w}=0\) (since we are not confident about their label, we call these pixels as 'not-selected' pixels), and they do not contribute to the loss (Eq. 2) and therefore are not used to update the weights of the model. Lowering \(T_{r}\) will result in the background being labeled as roads and a high \(T_{r}\) will result in disjoint road pieces. Therefore, we perform connectivity-based pseudo-label refinement, based on the two thresholds, \(T_{h}(=T_{r})\) and \(T_{l}(>=1-T_{b})\), to improve structure and connectivity in the road and skeleton pseudo-labels.
Let \(\mathcal{R}=\{(h,w)\mid m_{t}^{h,w}=1\ \&\ P_{t}^{h,w}>T_{h}\}\) be the set of selected pixels assigned the road pseudo-label, and let \(\mathcal{M}_{t}=\{(h,w)\mid m_{t}^{h,w}=0\}\) be the set of all 'not-selected' pixels. Any pixel in \(\mathcal{M}_{t}\) that is connected to the road pseudo-labeled pixels in \(\mathcal{R}\) is selected as a road pseudo-label. For all \((h,w)\in\mathcal{M}_{t}\) with \((P_{t}^{h,w}>T_{l})\ \&\ (P_{t}^{h,w}<T_{h})\ \&\ \zeta((h,w),\mathcal{R})=\text{True}\), we set \(m_{t}^{h,w}=1\) and \(\hat{Y}_{t}^{h,w}=1\), where \(\zeta((h,w),\mathcal{R})\) is a Boolean function that uses the 8-neighborhood to identify whether \((h,w)\) is connected to any pixel in \(\mathcal{R}\). This process is recursive and is repeated until all pixels in \(\mathcal{R}\) are traversed and checked for connectedness with the existing road labels as well as the newly assigned road labels. A similar approach is used for skeleton
Figure 3: An illustration of the connectivity-based pseudo label refinement (CBR) approach for both road and skeleton pseudo labels. After applying CBR, the pseudo-labels are increased while reducing the number of ‘not-selected’ pixels based on improving connectivity and completeness.
pseudo-labels selection with \(T_{h}^{\phi}\) and \(T_{l}^{\phi}\) as upper and lower thresholds respectively. This process of pseudo-label selection and connectivity-based refinement is shown in Figure 3. These connectivity-based refined pseudo-labels are used during self-supervised domain adaptation.
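A compact way to realize this refinement is through connected-component analysis, which is equivalent in effect to the recursive 8-neighborhood traversal described above; the sketch below (function name and the use of SciPy are ours) promotes 'not-selected' pixels with \(T_{l}<P<T_{h}\) that are connected to confident road pixels.

```python
import numpy as np
from scipy import ndimage

def connectivity_based_refinement(P_t, T_h=0.9, T_l=0.7):
    """CBR for one probability map: label the connected components of the
    low-threshold map (P > T_l) under 8-connectivity and keep the components
    that contain at least one high-confidence road pixel (P > T_h)."""
    candidate = P_t > T_l                       # confident roads + in-between pixels
    seed = P_t > T_h                            # confident road pseudo-labels
    structure = np.ones((3, 3), dtype=bool)     # 8-neighborhood
    labels, _ = ndimage.label(candidate, structure=structure)
    keep = np.unique(labels[seed])              # component ids touching a seed
    refined = np.isin(labels, keep[keep > 0])
    return refined.astype(np.float32)           # refined road pseudo-labels
```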
### Structure Conformity Loss
To further guide the adaptation process, we enforce spatial consistency between thin skeleton prediction and much wider (road) segment predictions by defining a structural conformity loss (Eq. 5) between the output of two heads. More specifically, we apply an \(L_{2}\)-distance loss between road and skeleton probabilities. The skeleton head tries to predict a single pixel center-line. During UDA for the source images, we compute the difference between the prediction of the skeleton and road segmentation, only for the pixels which are labeled as a skeleton by ground truth. Similarly, in the case of images from the target domain, we compute the difference between the prediction of the skeleton and road segmentation, only for the pixels which are _pseudo-labeled_ as a skeleton as shown in Figure 2 (red arrow originating from skeleton pseudo-labels to conformity loss). Combined conformity loss \(\mathcal{L}_{cl}\) is given below.
\[\mathcal{L}_{cl}=\sum_{H,W}[Y_{s}^{\phi}=1](P_{s}-P_{s}^{\phi})^{2}+\sum_{H,W }[\hat{Y}_{t}^{\phi}=1](P_{t}-P_{t}^{\phi})^{2}. \tag{5}\]
Figure 4: Pseudo-labels before and after connectivity-based refinement (CBR). The pseudo-labels are more complete and connected after CBR indicating their quality is improved significantly.
The proposed conformity loss enforces that thinning the road surface results in the same skeleton as the one predicted by the skeleton segmentation head. An overview of the conformity loss for target images is shown in Figure 2.
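A minimal sketch of this loss for a single domain is given below; it is applied to the source batch with ground-truth skeleton labels and to the target batch with skeleton pseudo-labels, and the two terms are summed as in Eq. 5 (the function name is ours).

```python
import torch

def conformity_loss(P_road, P_skel, skel_labels):
    """Structural conformity loss: squared difference between road and
    skeleton probabilities, evaluated only at pixels whose (pseudo-)label
    marks them as skeleton."""
    mask = (skel_labels == 1).float()
    return torch.sum(mask * (P_road - P_skel) ** 2)
```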
### Adversarial Learning based Feature Alignment
Finally, we employ adversarial learning to align the source and target domain image features. We define a discriminator network \(D\) at the encoder level of the segmentation network and try to minimize the gap between source and target domain features, as shown in Figure 2. The discriminator network (\(\mathcal{L}_{d}\), Eq. 6) is trained using the cross-entropy loss for source and target domain images. Similarly, the adversarial loss (\(\mathcal{L}_{adv}\), Eq. 7) for the target domain is used to update the encoder of the segmentation network. Let the features at the output of the encoder be denoted by \(F_{e}\in\mathbb{R}^{h_{f}\times w_{f}\times C_{f}}\), with \(h_{f}\), \(w_{f}\), and \(C_{f}\) as the height, width, and depth of the feature map, respectively; then \(\mathcal{L}_{d}\) and \(\mathcal{L}_{adv}\) are defined as
\[\begin{split}\mathcal{L}_{d}(F_{e})=-\frac{1}{h_{f}\times w_{f}} \sum_{h_{f},w_{f}}[y_{d}\;\log(D(F_{e}))\\ +(1-y_{d})\;\log(1-D(F_{e})))]\end{split} \tag{6}\]
\[\begin{split}\mathcal{L}_{adv}(F_{e_{t}})=-\frac{1}{h_{f}\times w _{f}}\sum_{h_{f},w_{f}}\log(D(F_{e_{t}}))\end{split} \tag{7}\]
\(F_{e_{t}}\) represents the encoder feature maps of the target images while \(y_{d}\) shows the domain (source, target) label to train the discriminator and generate an adversarial loss.
**Total Loss Function:** During domain adaptation, we simultaneously train the road and skeleton segmentation heads using the labeled source data and the pseudo-labeled target data. The composite loss function (Eq. 8) is the summation of the individual losses in Eq. 1, 2, 3, and 4.
\[\begin{split}\mathcal{L}_{comp}=\mathcal{L}_{seg}(I_{s},Y_{s})+ \mathcal{L}_{s}^{\phi}(I_{s},Y_{s}^{\phi})\\ +\hat{\mathcal{L}}_{seg}(I_{t},\hat{Y}_{t})+\hat{\mathcal{L}}_{t}^ {\phi}(I_{t},\hat{Y}_{t}^{\phi})\end{split} \tag{8}\]
Finally, the total loss (Eq. 9) is the summation of composite loss (Eq. 8), conformity loss (Eq. 5) and adversarial loss (Eq. 7).
\[\mathcal{L}_{total}=\mathcal{L}_{comp}+\beta\mathcal{L}_{cl}+\lambda_{adv} \mathcal{L}_{adv} \tag{9}\]
where the hyper-parameters \(\beta\) and \(\lambda_{adv}\) are chosen empirically (Sec. 4.1 and 4.4).
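To make the composition of Eq. 8 and Eq. 9 concrete, a minimal PyTorch sketch is given below. The helper names and the tensor packing are our own simplifications, and the adversarial term `L_adv` is assumed to be computed separately as in Eq. 7.

```python
import torch
import torch.nn.functional as F

def masked_bce(p, y, mask=None):
    """Binary cross-entropy over probability maps; `mask` drops pixels that
    were not selected as pseudo-labels (mask is None for labeled source data)."""
    loss = F.binary_cross_entropy(p, y, reduction="none")
    return loss.mean() if mask is None else (mask * loss).sum() / mask.sum().clamp(min=1)

def total_loss(out_s, out_t, labels_s, pseudo_t, L_adv, beta=0.1, lambda_adv=0.01):
    """Sketch of Eq. 8-9. `out_*` are (road, skeleton) probability maps,
    `labels_s` are (road, skeleton) source labels, and `pseudo_t` packs the
    (road, skeleton) pseudo-labels and their selection masks."""
    P_s, P_s_phi = out_s
    P_t, P_t_phi = out_t
    Y_s, Y_s_phi = labels_s
    Y_t, Y_t_phi, m_t, m_t_phi = pseudo_t
    L_comp = (masked_bce(P_s, Y_s) + masked_bce(P_s_phi, Y_s_phi)
              + masked_bce(P_t, Y_t, m_t) + masked_bce(P_t_phi, Y_t_phi, m_t_phi))
    L_cl = (Y_s_phi * (P_s - P_s_phi) ** 2).sum() \
         + (Y_t_phi * m_t_phi * (P_t - P_t_phi) ** 2).sum()
    return L_comp + beta * L_cl + lambda_adv * L_adv
```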
## 4 Experiments
### Experimental Setup
**Datasets:** For evaluation, we use SpaceNet [5], DeepGlobe [4], and Massachusetts [67] as benchmark road segmentation datasets. The SpaceNet, DeepGlobe, and Massachusetts datasets differ with respect to colorization, geography, spatial resolution, texture, illumination, and road structure. A few sample images from the datasets are shown in Fig 5. The **SpaceNet** dataset consists of annotated satellite imagery for road segmentation for 4 different cities, i.e., Paris, Las Vegas, Shanghai, and Khartoum. There are a total of 2780 image tiles of 1300\(\times\)1300 pixels at a spatial resolution of 30 \(cm/\)pixel. However, labels are available for a subset of 2548 image tiles. SpaceNet covers around 3000 _sq.km_ of area and around 8000 \(km\) of annotated roads. **DeepGlobe** consists of 6226 labeled image tiles of size 1024\(\times\)1024, at 50 \(cm/\)pixel. The DeepGlobe dataset is obtained by capturing satellite imagery over Indonesia, Thailand, and India [4]. For the available labeled images, we have followed the training and validation split defined by [39]. **Massachusetts** is an aerial imagery road segmentation dataset at 100 \(cm\) per pixel. There are a total of 1171 labeled images of size 1500 \(\times\) 1500 with a defined split of 1108 training, 14 validation, and 49 testing images.
**Model Architecture:** Our proposed method is mainly composed of segmentation and discriminator networks. The segmentation network (also known as the generator network) is based on DLinkNet [41] with ResNet-34 [68] as the backbone feature extractor. We follow the same architecture as the original DLinkNet
Figure 5: Example images showing the visual, infrastructural, and scale differences in Source and Target domains.
[41] for road segmentation to allow a fair comparison. A separate decoder head is attached to the encoder for skeleton prediction, as shown in Figure 2. The discriminator network is inspired by [48] and is composed of four convolution layers, each with a 4\(\times\)4 filter followed by a LeakyReLU [69] activation, except the last layer, where a sigmoid is used as the activation.
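A sketch of such a discriminator is given below; only the number of layers, the 4\(\times\)4 filters, and the activations are specified above, so the channel widths and strides here are illustrative assumptions.

```python
import torch.nn as nn

class Discriminator(nn.Module):
    """Encoder-level domain discriminator: four 4x4 convolutions with
    LeakyReLU activations and a sigmoid output predicting the domain."""

    def __init__(self, in_channels=512, widths=(256, 128, 64, 1)):
        super().__init__()
        layers, c = [], in_channels
        for i, w in enumerate(widths):
            layers.append(nn.Conv2d(c, w, kernel_size=4, stride=2, padding=1))
            if i < len(widths) - 1:
                layers.append(nn.LeakyReLU(0.2, inplace=True))
            c = w
        layers.append(nn.Sigmoid())
        self.net = nn.Sequential(*layers)

    def forward(self, features):
        # features: encoder output F_e of shape (batch, C_f, h_f, w_f)
        return self.net(features)
```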
**Implementation and Training Details:** The proposed domain adaptation approach is implemented using the PyTorch framework on a single Core-i5 machine with 11 GB of GPU memory. From each image, we randomly crop a \(1024\times 1024\) patch for processing, and the batch size is set to two. The initial learning rates are set to \(2e^{-4}\) and \(1e^{-4}\) for self-supervised adaptation and adversarial learning, respectively. The adversarial loss is scaled with \(\lambda_{adv}=0.01\) to reduce the effect of large gradients. Similarly, the conformity loss is scaled with \(\beta=0.1\). The sensitivity analysis of \(\beta\) and \(\lambda_{adv}\) is detailed in Sec. 4.4.5. To select road pseudo-labels and the mask \(m^{h,w}\), the thresholds are set as \(T_{r}=T_{h}=0.9\), \(T_{b}=0.3\), and \(T_{l}=1-T_{b}\). Similarly, for the skeleton pseudo-labels and mask \(\hat{m}^{h,w}\), \(T_{r}^{\phi}=T_{h}^{\phi}\) and \(T_{b}^{\phi}\) are set to 0.5 and 0.9, respectively, where \(T_{l}^{\phi}=1-T_{b}^{\phi}\).
The DLinkNet [41] road segmentation network is trained on the source domain. During adaptation, we follow an iterative process, i.e., generate pseudo-labels for the whole target dataset (by fixing the segmentation network) and then train that segmentation model over pseudo-labeled target data alongside labeled source data. Following the terminology of [70; 51; 71; 72], we define this iterative process of pseudo-labels generation and model training as rounds. In the initial round (round-0), the segmentation model (trained on source data) is further trained for skeleton segmentation alongside road segmentation using the labeled source data. After round-0, before the start of the next rounds (round-1 and round-2), pseudo-labels for target images are generated using the updated model and then used for retraining that model for two epochs alongside fully labeled source data.
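The round-based adaptation described above can be summarized by the following sketch; `generate_pseudo_labels` and `train_epoch` are placeholder names for routines not shown here, and only the structure (fix the model, relabel the whole target set, then retrain for two epochs on source plus pseudo-labeled target data) follows the text.

```python
def adaptation_rounds(model, source_loader, target_loader, num_rounds=2):
    # placeholder routines (not defined here): generate_pseudo_labels, train_epoch
    for _ in range(num_rounds):                                  # round-1 and round-2
        model.eval()
        pseudo = generate_pseudo_labels(model, target_loader)    # segmentation network kept fixed
        model.train()
        for _ in range(2):                                       # two epochs per round
            train_epoch(model, source_loader, target_loader, pseudo)
    return model
```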
### Experimental Results
For all the experiments, SpaceNet is used as the source domain dataset while DeepGlobe and Massachusetts are used as target domain datasets. Following [16; 63], intersection over union (IoU) and F1-Score are used as evaluation metrics for road segmentation. To better assess the completeness and connectivity of the segmented roads, we follow [2; 3; 73] and also report APLS (Average Path Length Similarity). For a fairer evaluation, we compare the proposed approach with generic DA methods ([61; 51; 60; 52; 55; 74]) as well as road-specific DA methods including TGN [8], GOAL [8], RoadDA [63] and DOER [75].
Table 1 Comparison between the proposed approach and existing approaches for SpaceNet to DeepGlobe and SpaceNet to Massachusetts adaptation. **Oracle** results (upper-bound) are obtained when the segmentation model is trained over the target domain images using the ground truth labels [51, 16]. Backbone: Backbone architecture used in the segmentation network. Adv: Adversarial adaptation, ST: Self-training based adaptation, Adv+: Adversarial learning with any other adaptation approach.
**SpaceNet \(\rightarrow\) DeepGlobe:** Table 1 presents a comparative analysis of the proposed road adaptation approach with different source models and existing state-of-the-art methods. Compared to the Source-only (DLinkNet) model, the proposed approach improves the IoU, F1-Score, and APLS by margins of 11%, 15.8%, and 11%, respectively. Compared to TGN [8], GOAL [8], and DOER [75], recent adversarial learning-based methods for domain adaptation of road segmentation, the proposed approach outperforms by significant margins of 18.4%, 14.1%, and 13.4% in IoU and 19.6%, 14.6%, and 13.8% in F1-Score, respectively. Further, compared to another recent approach for road segmentation adaptation [63], the proposed approach shows an improvement by a significant margin of 22.9% in IoU, 26.9% in F1-Score, and 35.8% in APLS. Similarly, compared to the ResNet-38 [76] based self-training approaches CBST [61] and MLSL-SISC [51], the proposed approach outperforms by a minimum margin of 6.6%, 6.7%, and 9.8% in IoU, F1-Score, and APLS, respectively. We also compare our results with the DeepLab-v2 [33] based self-training methods CRST [60] and Model-Uncertainty [70], where the proposed approach outperforms by minimum margins of 12.8% and 13.1% in IoU and F1-Score, respectively. The considerable improvement in all the evaluation metrics, APLS, F1-Score, and IoU, indicates
the improvement of segmented roads in target images after adaptation. The gain in APLS indicates the segmented roads after adaptation are more continuous and connected. Compared to competing methods, the proposed approach results are close to the Oracle results. The **Oracle** results, also known as upper-bound, are obtained when the segmentation model is trained over the target domain images using their original ground truth labels as defined by [48, 51, 53, 16, 77]. The **Oracle** model serves as a benchmark or upper bound for any segmentation model to compare the performance of the adaptation approach.
**SpaceNet \(\rightarrow\) Massachusetts:** As visible in Figure 5, the Massachusetts dataset, similar to the SpaceNet dataset, consists of images from the region where roads, buildings, and other structures appear to be appropriately planned. The key challenges for adaptation from the SpaceNet to Massachusetts are the difference in
Figure 6: SpaceNet \(\rightarrow\) DeepGlobe adaptation: Compared to the Source-only model (DLinkNet[41]) and baseline CRST [60], the proposed approach produces better segmentation results with comparatively better completeness and connectivity. “Ours*: Without CBR” and “Ours: With CBR”.
spatial resolution (\(100\,cm\) compared to SpaceNet's \(30\,cm\) per pixel), visual appearance, and roads that are thinner than in the source domain.
The proposed approach outperforms the competing methods with a minimum margin of 3.9% in IoU, 4.6% in F1-Score, and 1.6% in APLS (Table 1). More specifically, we gain 11.0% in IoU, 13.3% in F1-Score, and 29.7% in APLS over the Source-only model (DLinkNet). Similarly, compared to ResNet-38 [76], CBST [61], and MLSL-SISC [51], we gain 7.1%, 5.6%, and 3.9% in IoU, 8.3%, 6.4%, and 4.6% in F1-Score, and 38.3%, 1.7%, and 8.7% in APLS, respectively. Compared to DeepLab-v2 [33], CRST [60], and Model-Uncertainty [70], we improve by a minimum margin of 4.2%, 4.7%, and 3.6% in IoU, F1-Score, and APLS, respectively.
### Qualitative Results.
Figures 6 and 7 show the qualitative results. It can be seen that the Source-only segmentation model (DLinkNet [41]) produces many false negatives (missing roads or road segments) as well as false positives (e.g., predicting thick/wider roads).
Figure 7: From SpaceNet to Massachusetts adaptation. Compared to the source model (DLinkNet [41]), our approach produces better segmentation results.
This behavior is attributed to the resolution and road-width differences between the source and target domain images. Specifically, SpaceNet-3 (the source domain dataset) has 0.3 m/pixel resolution and wider roads, while DeepGlobe and Massachusetts (the target datasets) have comparatively lower resolution (0.5 m/pixel and 1 m/pixel, respectively) and thinner roads. The proposed approach mitigates both issues by increasing true road segmentation and avoiding non-road areas. Our proposed connectivity-based pseudo-labels refinement (CBR) further improves the completeness and connectivity of the road segmentation, as shown in Figures 6 & 7 (“Ours*: Without CBR” and “Ours: With CBR”).
### Ablation Experiments
To evaluate the effect of different components, we perform domain adaptation by removing different components (Table 2). Note that for the CL and CBR, the round-0 is performed where the skeleton head is trained along with fine-tuning of the segmentation head over the source dataset (Sec. 4.1).
#### 4.4.1 Effect of skeleton segmentation and Conformity Loss:
The skeleton (center-line) segmentation helps in preserving the road's topology and is a key component of the proposed road-segmentation-adaptation approach. Due to the multi-task learning, introducing a skeleton segmentation head improves the accuracy of the road segmentation (source) model from 35.2 IoU to 36.1 IoU, as shown in Table 2 (Source-only Results with and without skeleton).
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c} \hline & \multicolumn{4}{c|}{Components} & \multicolumn{3}{c}{SpaceNet \(\rightarrow\) DeepGlobe} \\ \hline Approach & SK & ST & FSA & CL & CBR & IoU & F1-Score & APLS \\ \hline Source-only & - & - & - & - & - & 35.2 & 47.4 & 35.3 \\ \hline & - & - & ✓ & - & - & 37.8 & 54.9 & - \\ Ours & - & ✓ & - & - & - & 37.4 & 53.4 & 38.4 \\ & - & ✓ & ✓ & - & - & 40.3 & 57.4 & 41.8 \\ \hline Source-only & ✓ & - & - & - & - & 36.1 & 53.1 & 39.2 \\ \hline & ✓ & ✓ & - & - & - & 39.7 & 56.8 & 38.1 \\ & ✓ & - & ✓ & - & - & 39.2 & 56.3 & - \\ Ours & ✓ & ✓ & - & ✓ & - & 42.7 & 59.9 & 41.1 \\ & ✓ & ✓ & ✓ & ✓ & - & 44.2 & 61.3 & 45.4 \\ & ✓ & ✓ & ✓ & ✓ & ✓ & **46.2** & **63.2** & **50.3** \\ \hline \end{tabular}
\end{table}
Table 2: Component-wise analysis of our approach, for two sources, one with skeleton and another without it. SK: Skeleton segmentation, ST: Self-training, FSA: Feature space adaptation, CL: Conformity loss, CBR: Connectivity-based pseudo-labels refinement.
Similarly, the APLS score is improved from 35.3 to 39.2, indicating improved connectivity of the segmented roads. Secondly, identifying the skeleton also allows us to apply the conformity loss between the road and skeleton heads alongside self-training, resulting in a performance improvement of 3.0% in IoU, 3.1% in F1-Score, and 3.0% in APLS for SpaceNet to DeepGlobe adaptation. Similarly, we apply CBR over the skeleton head to refine the skeleton pseudo-labels, making both the self-training and conformity loss more effective. These additional components improve our results from 40.3% to 46.2% in IoU, from 57.4% to 63.2% in F1-Score, and from 41.8% to 50.3% in APLS. With only a skeleton segmentation head and conformity loss (with self-training), we see a 7.5-point improvement in IoU over the initial source model. See Table 2 for details.
#### 4.4.2 Connectivity based Pseudo-labels Refinement (CBR)
To understand the effect of CBR, we analyze the quality of pseudo-labels with and without CBR in terms of IoU, F1-Score, and APLS against the ground truth (used here only for evaluation) (Table 3). We see a considerable improvement in the quality of pseudo-labels (Figure 4), resulting in more accurate prediction of roads in the target domain after adaptation (Table 2). Specifically, the segmented roads have better connectivity and continuity (Figures 6 & 7), thus resulting in a higher APLS. To further highlight the effect of CBR, we show the output probability maps and their respective segmentation masks (after thresholding) without and with CBR in Figure 8. It can be seen that in all cases, the probabilities of road prediction improve significantly with the introduction of CBR, resulting in continuous and connected roads.
#### 4.4.3 Adversarial Learning based Adaptation

For natural-scene benchmarks like the GTA [78] and Cityscapes [79] datasets, output space adversarial adaptation is mostly applied because there exists a naturally defined structure, i.e., the sky is always at the top of the image, roads always occupy the lower rows of the image, etc. However, no such structure is defined in the case of remote sensing imagery. Secondly, in the case of road segmentation in satellite imagery, adversarial learning is overwhelmed by large backgrounds (non-road areas). In this work, we explored both feature space and output space adversarial learning-based adaptation for satellite imagery.
Table 4 shows that feature space adversarial learning adaptation (FSA) performs better compared to output space adaptation (OSA), but still, the improvement
\begin{table}
\begin{tabular}{c|c|c} \hline \multicolumn{2}{c}{SpaceNet \(\rightarrow\) DeepGlobe} \\ \hline Method & IoU & F1-Score \\ \hline Source-only (DLinkNet) [41] & 35.2 & 47.4 \\ Output Space Adaptation (OSA) [48] & 37.3 & 54.2 \\ Feature Space Adaptation (FSA) [59] & 37.8 & 54.9 \\ \hline \end{tabular}
\end{table}
Table 4: Adversarial learning-based adaptation.
Figure 8: SpaceNet to DeepGlobe adaptation. The output probabilities and resulting segmentation are significantly improved after CBR. The roads are more continuous and connected compared to those without CBR.
due to adversarial learning alone is only comparable to simple self-training based adaptation. However, when FSA is combined with self-training, the road segmentation performance is improved by 2.9% in IoU for SpaceNet to DeepGlobe adaptation (Table 2, 4th row). Hence, performing adversarial learning along with the other components results in better feature alignment, as visible in the last row of Table 2.
#### 4.4.4 Adapting from DeepGlobe to SpaceNet
To further validate the effectiveness of the proposed approach, we performed experiments in the opposite direction (DeepGlobe (source) to SpaceNet (target)). Our proposed method improves IoU from 25.6% (before DA) to 29.8%.
#### 4.4.5 Sensitivity Analysis of Hyperparameters
In this section, we provide the sensitivity analysis of the hyper-parameters \(\lambda_{adv}\), \(\beta\), and pseudo-labels selection thresholds \(T_{h}\) and \(T_{l}\), used during our adaptation approach.
**Sensitivity of \(\beta\) (Conformity loss scaling factor) :** We investigate the effect of conformity loss weight \(\beta\) (Eq. 9) on the adaptation performance of the proposed approach (Table 5). The experiments in Table 5 show that the conformity loss in general is robust to the scaling factor and improves the performance even when no scaling factor, i.e., \(\beta=1.0\) is used. However, better performance is reported for \(\beta=0.1\).
**Effect of \(\lambda_{adv}\) :** In Table 6, we show the effect of the adversarial loss scaling factor \(\lambda_{adv}\) along with self-supervised domain adaptation. The adversarial loss helps to improve the adaptation (Table 2), but it is comparatively more sensitive to the loss weight, as shown in Table 6. A higher value of \(\lambda_{adv}\) propagates large gradients to the network, while a very small value of \(\lambda_{adv}\) may not help the adaptation process significantly. A similar observation was reported by [48] as well.
**Effect of \(T_{h}\) and \(T_{l}\) :** We also analyze the effect of the upper and lower thresholds for connectivity-based refinement (CBR) of pseudo-labels. Table 7 shows the adaptation performance for different combinations of upper and lower thresholds.
\begin{table}
\begin{tabular}{c|c c c c} \hline \multicolumn{5}{c}{SpaceNet \(\rightarrow\) DeepGlobe} \\ \hline \hline \(\beta\) & 0.001 & 0.01 & 0.1 & 1.0 \\ \hline IoU & 43.9 & 45.3 & 46.2 & 44.2 \\ F1-Score & 60.9 & 62.3 & 63.2 & 61.3 \\ \hline \end{tabular}
\end{table}
Table 5: Effect of conformity loss weight \(\beta\) on the adaptation process.
It can be seen that reducing the upper threshold to 0.8 deteriorates the overall performance. This is attributed to false positives (low precision) being selected as pseudo-labels, eventually decreasing the IoU as well as the F1-Score significantly. The lower threshold for CBR has a comparatively smaller effect on overall adaptation performance. However, reducing \(T_{l}\) to 0.5 or less degrades the adaptation performance (Table 7).
#### 4.4.6 Analysis of Backbone Architectures
We have performed additional experiments with our proposed road segmentation adaptation approach by changing the backbone networks.
\begin{table}
\begin{tabular}{c|c c c} \hline \multicolumn{3}{c}{SpaceNet \(\rightarrow\) DeepGlobe} \\ \hline \hline \(\lambda_{adv}\) & 0.001 & 0.01 & 0.1 \\ \hline IoU & 37.7 & 40.3 & 37.3 \\ F1-Score & 54.1 & 57.4 & 51.3 \\ \hline \end{tabular}
\end{table}
Table 6: Effect of \(\lambda_{adv}\), adversarial loss scaling factor in Eq. 9. The results reported are for adversarial learning combined with self-supervised learning only.
\begin{table}
\begin{tabular}{c c|c c c} \hline Approach & Backbone & Image Size & IoU & F1-Score & APLS \\ \hline Source-only & ResNet-34 & 1024 x 1024 & 35.2 & 47.4 & 35.3 \\ Ours & & & 46.2 & 63.2 & **50.3** \\ \hline Source-only & ResNet-101 & 512 x 512 & 38.0 & 55.1 & 32.6 \\ Ours & ResNet-101 & 512 x 512 & 45.94 & 63.0 & 47.9 \\ \hline Source-only & Swin-B & 512 x 512 & 39.0 & 56.1 & 37.7 \\ Ours & & & **46.7** & **63.7** & 49.2 \\ \hline \end{tabular}
\end{table}
Table 8: Comparison of ResNet-34 and large backbone architectures (ResNet-101, SWIN) used with the DLinkNet segmentation model to evaluate the segmentation and adaptation performance of the proposed approach. It can be observed that despite computation limitations, our adaptation strategy improves models’ accuracy considerably over the target domain.
By using larger backbones, e.g., Swin-B and ResNet-101, the Source-only model's accuracy improves by 3.8% and 2.8%, respectively, compared to the ResNet-34 based DLinkNet. However, these larger backbone based segmentation models still suffer from domain shift. As indicated in Table 8, our adaptation strategy improves the models' accuracy considerably over the target domain (7.94% and 7.7% in IoU and 15.3% and 11.5% in APLS, for the ResNet-101 and Swin-B based segmentation models, respectively).
Please note that, due to limited computational resources, we were unable to train and adapt using the large backbone models (ResNet-101 and Swin-B) with large image sizes. Therefore we have reduced the size of images to 512 x 512 for ResNet-101 and Swin-B to best utilize our resources, as indicated in Table 8. The results in Table 8 reinforce that regardless of the backbone, road-segmentation (Source-only) models suffer from the domain shift, and our adaptation strategy results in performance improvement (ours). We believe that having large memory GPUs, and increasing the image sizes (as used in the case of ResNet-34) will further improve the results with ResNet-101 and Swin-B backbone architectures. Thus, we argue that the proposed approach improves the performance, is complementary, and can be used with any backbone architecture.
## 5 Conclusions
In this paper, we tackle the challenging problem of overcoming the domain shift for road segmentation algorithms. The proposed self-supervised domain adaptation method exploits the topology of the road by having a structural conformity loss across the skeleton of the road and the road surface itself. The quality of the pseudo-labels for self-training is improved by introducing connectivity-based pseudo-labels refinement. To help the adaptation process, source, and target features are aligned using discriminator-based adversarial learning. Our thorough experimental results on three different datasets on multiple evaluation metrics, multiple backbone architectures, detailed ablation studies, and comparison with several competitive baselines validate the proposed ideas and framework.
|
2309.13240 | NeRF-Enhanced Outpainting for Faithful Field-of-View Extrapolation | In various applications, such as robotic navigation and remote visual
assistance, expanding the field of view (FOV) of the camera proves beneficial
for enhancing environmental perception. Unlike image outpainting techniques
aimed solely at generating aesthetically pleasing visuals, these applications
demand an extended view that faithfully represents the scene. To achieve this,
we formulate a new problem of faithful FOV extrapolation that utilizes a set of
pre-captured images as prior knowledge of the scene. To address this problem,
we present a simple yet effective solution called NeRF-Enhanced Outpainting
(NEO) that uses extended-FOV images generated through NeRF to train a
scene-specific image outpainting model. To assess the performance of NEO, we
conduct comprehensive evaluations on three photorealistic datasets and one
real-world dataset. Extensive experiments on the benchmark datasets showcase
the robustness and potential of our method in addressing this challenge. We
believe our work lays a strong foundation for future exploration within the
research community. | Rui Yu, Jiachen Liu, Zihan Zhou, Sharon X. Huang | 2023-09-23T03:16:58Z | http://arxiv.org/abs/2309.13240v1 | # NeRF-Enhanced Outpainting for Faithful Field-of-View Extrapolation
###### Abstract
In various applications, such as robotic navigation and remote visual assistance, expanding the field of view (FOV) of the camera proves beneficial for enhancing environmental perception. Unlike image outpainting techniques aimed solely at generating aesthetically pleasing visuals, these applications demand an extended view that faithfully represents the scene. To achieve this, we formulate a new problem of faithful FOV extrapolation that utilizes a set of pre-captured images as prior knowledge of the scene. To address this problem, we present a simple yet effective solution called NeRF-Enhanced Outpainting (NEO) that uses extended-FOV images generated through NeRF to train a scene-specific image outpainting model. To assess the performance of NEO, we conduct comprehensive evaluations on three photorealistic datasets and one real-world dataset. Extensive experiments on the benchmark datasets showcase the robustness and potential of our method in addressing this challenge. We believe our work lays a strong foundation for future exploration within the research community.
## I Introduction
The field of view (FOV) of a camera plays a pivotal role in the performance of vision-based navigation [1, 2]. A larger FOV enables robots to perceive more spatial elements and layouts (_e.g._, obstacles, doorways, etc.). This expanded perspective empowers them to make more informed and strategic decisions when planning their paths. A larger FOV also offers substantial benefits for remote sighted agents (RSAs) tasked with assisting visually impaired individuals in navigation [3, 4]. In light of this motivation, our work delves into the challenge of FOV extrapolation. Our goal is to enable robots and remote agents to perceive scene content beyond the immediate camera FOV, thereby enhancing their situational awareness.
In the computer vision domain, our task is closely related to image outpainting [5, 6, 7, 8], also referred to as image extrapolation [9, 10, 11] or image extension [12], which aims to extend the image boundaries with semantically consistent and visually appealing contents. The extrapolated image hallucinates visually plausible scene contents beyond the original FOV, delivering immersive viewing experience for applications such as virtual reality. The hallucination ability is acquired by, for example, training deep learning models on large-scale generic datasets [13] of realistic images. However, image outpainting models cannot be applied to our problem because navigational applications necessitate the extended portions of the image maintain fidelity and coherence with the actual scene.
We define the problem of _faithful FOV extrapolation_ as follows. As illustrated in Fig. 1 (right), a collection of training images that were captured in a given scene (_e.g._, images shown inside red boxes) serves as prior knowledge. We assume that the camera pose corresponding to each training image can be obtained, for example, via structure from motion (SfM) [14, 15, 16] methods. During the testing phase, our objective is to faithfully extrapolate the FOV of any newly captured image in the same scene to a specified FOV based on the prior knowledge of the scene (see example images inside blue boxes).
It is possible to adapt existing computer vision techniques, namely image stitching and video expansion, to tackle the faithful FOV extrapolation problem. Image stitching [17, 18, 19, 20] entails aligning overlapping portions of multiple images taken from various angles, whereas video expansion [21], or video extrapolation [22, 23], leverages adjacent frames to extend the FOV of a specific frame. Both methods require precise warping of source images to blend seamlessly with the target image. However, they often yield irregularly shaped non-overlapping areas, imposing limitations on the extent of FOV expansion. Therefore, these methods are not well suited for our goal, as the faithful FOV extrapolation task demands expanding the view to a desired rectangular size.
Given that there has been very limited prior research addressing the challenge of faithful FOV extrapolation for navigation, we propose a simple yet effective method called _NeRF-Enhanced Outpainting (NEO)_. Our method involves first training a neural radiance fields [24] (NeRF) model using training images of original FOV. We then densely sample a substantial number of camera poses within the same scene and, for each sampled pose, the trained NeRF model is applied to render an image with an expanded FOV. Finally, we leverage these rendered images to train an image outpainting model, which is subsequently employed to extrapolate the FOV of input images during the inference phase. We validate the proposed method on three photo-realistic datasets and one real dataset. The NEO method excels at producing high-quality extrapolations tailored to the specified FOV and consistently surpasses the performance of three baseline methods.
In summary, the contributions of this work are as follows. (1) We introduce a novel problem, namely faithful FOV extrapolation for navigation, which has been relatively underexplored in existing literature. (2) We propose the
NeRF-Enhanced Outpainting (NEO) pipeline as a solution for faithful FOV extrapolation. (3) Comprehensive empirical investigations on both photorealistic and real-world datasets consistently validate the effectiveness of the proposed NEO method compared to the baseline counterparts.
## II Related Work
### _Image Outpainting_
Image outpainting, often referred to as image extrapolation or extension, is a task that seeks to expand image boundaries while maintaining semantically coherent content. Typically, the ability to infer such contents is acquired through learning from large-scale datasets of real images. Image outpainting approaches can be broadly categorized as either non-parametric or parametric. Non-parametric methods [25, 26] are restricted to basic pattern outpainting, and they become increasingly fragile as the extrapolation range grows. The emergence of GAN-based models has resulted in significant advancements in image outpainting. Some notable works [27, 9, 12, 5, 10] utilize a single GAN model for extrapolating the input image. More recently, Khurana _et al._[11] propose an image outpainting framework that extends the image within the semantic label space, thereby produce new objects within the extrapolated area. Li _et al._[6] introduce CTO-GAN, which deduces the potential semantic layout based on foreground elements and subsequently generates the corresponding background content with the guidance of the predicted semantics. Yao _et al._[8] formulate this problem as a sequence-to-sequence autoregression task based on image patches, and present a query-based encoder-decoder transformer model to perform extrapolation. In addition to dedicated outpainting methods, certain image inpainting models [28, 29, 30], which have the capability to fill large masks, can also be adapted for image outpainting. Inspired by the pioneering work [31] on diffusion models, there have been endeavors [32, 33, 34, 35] that tackle the image outpainting problem via a diffusion-then-denoising process. One limitation of these pretrained image outpainting models is that the extrapolated parts lack geometric consistency and interpretability for a specific scene. This characteristic renders them unsuitable for real-world application scenarios such as navigation. Therefore, we propose a new problem, faithful FOV extrapolation, the solution of which enables and facilitates navigational applications by ensuring that the extrapolated content remains faithful and relevant to the scene at hand.
### _NeRF and Data Augmentation_
Neural radiance fields (NeRF) [24] enables novel view synthesis by representing the density and color of 3D spatial points of a specific scene through a neural network. With a novel camera pose as input, NeRF can render an image of specified FOV by performing ray marching from the camera's central viewpoint, querying the corresponding color and density fields and conducting volume rendering. Follow-up works on NeRF have explored improving generalizability [36, 37], scene editing [38, 39, 40], neural scene reconstruction [41, 42], training and inference acceleration [43, 44], among others. Unlike other image outpainting techniques which rely on scene distribution priors to extrapolate large FoV, NeRF has its unique advantage in that it implicitly encodes the entire 3D scene, enabling the rendering of novel views in a manner that is both geometrically and semantically coherent. This positions NeRF as a potential approach to achieving faithful FOV extrapolation. Furthermore, NeRF has been utilized as a data augmentation tool to generate synthetic images for training deep neural networks. Moreau _et al._[45] employ NeRF model to create a fresh dataset of synthetic images for training a camera pose regression model. Ge _et al._[46] propose an online data augmentation pipeline based on NeRF synthesis for real-world object detection. In this paper, we present a NeRF-enhanced outpainting pipeline that leverages NeRF to generate sufficient synthetic images for training an FOV extrapolation model.
## III Proposed method
### _Problem Formulation_
As shown in Fig. 1, a camera captured \(N\) training images \(\left\{\mathbf{X}_{i}\in\mathbb{R}^{h\times w\times 3}|i=1,2,\ldots,N\right\}\) in a specific scene, which
Fig. 1: Problem formulation of faithful FOV extrapolation. A collection of training images \(\mathbf{X}_{i}\) (red box) taken within a specific scene serves as prior knowledge. In the testing phase, our objective is to faithfully extrapolate the FOV of newly captured image \(\mathbf{Y}\) (blue box) to a specified FOV by leveraging the prior knowledge of the scene.
may have been taken sparsely. The FOV of each image is \((\alpha_{x},\alpha_{y})\), indicating the horizontal and vertical FOV angles, respectively. We assume that the camera pose \(\mathbf{P}_{i}\in\mathbf{SE}(3)\) for each training image \(\mathbf{X}_{i}\) can be acquired through structure from motion (SfM) pipelines such as COLMAP [16]. During testing, we seek to extrapolate a testing image \(\mathbf{Y}\in\mathbb{R}^{h\times w\times 3}\) with the same FOV \((\alpha_{x},\alpha_{y})\) captured in the same scene to a new image \(\mathbf{\tilde{Y}}\in\mathbb{R}^{H\times W\times 3}\) with larger FOV \((\gamma_{x},\gamma_{y})\) while keeping the camera's focal length constant. The extrapolated portions of the image must be consistent with the real scene.
### _NeRF-Enhanced Outpainting (NEO)_
To address this problem, we propose a simple yet effective method, dubbed NeRF-Enhanced Outpainting (NEO) with three steps in the training stage, as illustrated in Fig. 2.
**Step 1: Training a NeRF model.** We start by training a NeRF model using training images \(\{\mathbf{X}_{i}|i=1,2,\dots,N\}\). NeRF learns an implicit representation of the given 3D scene with a multilayer perceptron (MLP). With the trained NeRF, we can emit a ray from any direction and sample points on the ray to obtain their density and radiance, then render a novel view through volume rendering. Therefore, NeRF can generate an image of a specified FOV with a new camera pose. Using the trained NeRF, we can render an arbitrary number of images with specified poses and desired FOV.
**Step 2: Synthesizing images.** Next, we sample a multitude of new camera poses within the scene. For each pose, we leverage the trained NeRF model to render a pair of images with FOVs of \((\alpha_{x},\alpha_{y})\) and \((\gamma_{x},\gamma_{y})\), respectively. We sample new camera poses by ensuring that they cover all walkable areas in the training trajectories. Moreover, we manage to sample poses with different degrees of freedom (DoF) following their DoF distribution in the specific environment. The details will be presented in Sec. IV.
**Step 3: Training an outpainting model.** Finally, we utilize the NeRF-rendered images as training data to train an image outpainting model, which takes the small-FOV images as input and extrapolates the large-FOV ones. Various image outpainting models [7] can be employed in this step. Image inpainting models [30, 29] that fill in large empty spaces can also be applied to outpainting. However, it is worth noting that some outpainting or inpainting models are designed to generate diverse results with randomness. This property does not align with the objective of faithful FOV extrapolation. Thus, we modify such models during our implementation for the purpose of training an outpainting model that performs deterministic extrapolations faithful to the environment.
During inference, a real small-FOV image is given to the trained outpainting model as input and the model performs faithful FOV extrapolation to obtain the large-FOV image.
### _Discussions_
#### III-C1 Why not directly train an outpainting model using the training images \(\{\mathbf{X}_{i}|i=1,2,\dots,N\}\)?
The outpainting model takes \(\mathbf{X}_{i}\in\mathbb{R}^{h\times w\times 3}\) as input and produces \(\mathbf{\tilde{X}}_{i}\in\mathbb{R}^{H\times W\times 3}\) as output. However, there are no \(\mathbf{\tilde{X}}_{i}\) in the training data. To address this issue, we could simply resize \(\mathbf{X}_{i}\in\mathbb{R}^{h\times w\times 3}\) to \(\mathbf{X}_{i}^{\prime}\in\mathbb{R}^{H\times W\times 3}\) and then crop the central part \(\mathbf{X}_{i}^{\prime\prime}\in\mathbb{R}^{h\times w\times 3}\) to use as input. Since the training is simply achieved through resizing and cropping the original small-FOV training images, we refer to this method as _naive outpainting_. However, this approach encounters two main challenges. First, the quantity and coverage of the training data prove inadequate for hallucinating from a new viewpoint. Second, cropping the central part reduces the FOV of the training image, leading to a mismatch of the FOVs during training and testing stages.
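As an illustration, the resize-then-center-crop construction of training pairs for this naive baseline could look as follows; the interpolation mode and the use of torchvision helpers are assumptions made for the sketch.

```python
import torchvision.transforms.functional as TF

def naive_outpainting_pair(image, h, w, H, W):
    """image: original small-FOV training image of size (h, w)."""
    target = TF.resize(image, [H, W])        # X'_i: upscaled image used as the large-FOV target
    inputs = TF.center_crop(target, [h, w])  # X''_i: central crop fed to the outpainting model
    return inputs, target
```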
Our proposed NEO pipeline addresses the above two challenges by leveraging NeRF-based synthesis. First, there is no longer a concern about limited amount of training data, as NEO can theoretically generate an unlimited number of synthetic images by sampling arbitrary camera poses across walkable areas. Second, the issue of training-testing FOV mismatch is resolved, because the NEO pipeline trains the outpainting model using synthetic images with the same FOV as the target of the testing stage. The underlying principle of NEO is that the resulting outpainting model learns to extrapolate faithfully in the given scene by processing an extensive volume of training images that comprehensively cover the entire scene.
#### III-C2 Why not directly synthesize the target images using the trained NeRF model?
NeRF can generate a specific extended-FOV image given a camera pose. However, during the testing phase, the camera pose of the testing image \(\mathbf{Y}\) is unknown. We could employ camera relocalization (also known as visual localization)
Fig. 2: **NEO pipeline: (1) training a NeRF model with captured training images of original small FOV; (2) using the trained NeRF to synthesize images of extended FOV by densely sampling camera poses in the scene; (3) training an outpainting model with the synthetic images of extended FOV. During inference, we use the trained outpainting model for faithful FOV extrapolation.**
paradigms [15, 16] to estimate the camera pose of \(\mathbf{Y}\). Yet a main issue with combining relocalization and NeRF is that, the estimated pose may not be precise enough due to errors in feature detection, matching, as well as perspective-n-point (PnP) [47] estimation. Consequently, the resulting extended-FOV image rendered by NeRF may not be aligned well with the testing image \(\mathbf{Y}\). In contrast, the NEO pipeline circumvents the need for highly accurate estimation of the camera poses for testing images. Instead, it benefits from NeRF by training an outpainting model with the underlying scene priors from NeRF renderings.
## IV Experiments
### _Baseline Methods_
To demonstrate the effectiveness of our proposed NEO, we first introduce three baseline methods to address this problem, as we have discussed earlier. The first method is the "naive outpainting" mentioned in Sec. III-C, where an outpainting model is trained only using the original small-FOV images. The original image is resized larger and its cropped central part is used as the small-FOV input, while the original image is used as the corresponding large-FOV target. The second baseline approach, dubbed "warping & fusion" (B2), goes through an image stitching pipeline. To be specific, for a testing image, we first retrieve the nearest images from the training set, then build correspondence between the testing image and the retrieved (source) images. Finally, image warping is employed to get a larger FOV image by warping and fusing the contents from source images. The last baseline is called "relocalized NeRF". We first train a NeRF model on the training images. For a testing image, we relocalize its camera pose by employing camera relocalization methods. Finally we render a large-FOV image with the trained NeRF and the relocalized pose.
### _Datasets and Metrics_
We first evaluate the FOV extrapolation performance on three scenes from three photorealistic datasets: 1) **Replica** dataset [51] includes 18 realistic indoor scenes. We adopt the first floor of the apartment0 scene for evaluation. 2) **Gibson** dataset [52] includes realistic scans of 572 full buildings. We adopt the Bonesteel building to verify our method. 3) Habitat-Matterport 3D (**HM3D**) dataset [53] includes realistic scans of \(1,000\) buildings. We adopt scene \(00065\) for evaluation. To verify our method on real scenes, we further demonstrate our method on **ScanNet**[54] which samples posed RGB images from \(1513\) real indoor scans. We adopt scene\(0000\_00\) as our training set and scene\(0000\_01\) whose data is sampled at the same scene but with different trajectories as our testing set.
For photorealistic datasets, we consider a simplified scenario: a robot with a fixed height and a fixed front camera navigates in an indoor environment. In such a setting, the camera has a constant height and can only rotate horizontally, so the motion of the camera has only 3 DoFs. For each scene, we use the Habitat environment [55] to render \(1,000\) training images with \(256\times 256\) resolution and \(90^{\circ}\) FOV from random camera poses at a fixed height of \(1.5m\). We then render \(2,000\) testing images with the same resolution and FOV, which contain 40 random walking paths at the same height with 50 images on each path. The target resolution during testing is \(512\times 512\) (\(126.87^{\circ}\) FOV), obtained by uniformly extrapolating in four directions. For ScanNet, we simulate a challenging but more realistic scenario: extrapolating an input central image with resolution \(240\times 160\) in the horizontal directions (left and right sides), with a target resolution of \(240\times 320\). We uniformly sample the original training trajectory by image IDs with an interval of 10, leading to 558 training images. For testing, we randomly sample three continuous trajectory segments, each with 200 images, leading to 600 testing views.
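The quoted \(126.87^{\circ}\) target FOV follows directly from the pinhole model with the focal length held constant, as specified in the problem formulation: for the \(256\times 256\), \(90^{\circ}\) input, the focal length in pixels is

\[f=\frac{w/2}{\tan(\alpha_{x}/2)}=\frac{128}{\tan 45^{\circ}}=128\text{ px},\qquad\gamma_{x}=2\arctan\frac{W/2}{f}=2\arctan\frac{256}{128}=2\arctan 2\approx 126.87^{\circ},\]

which matches the stated FOV of the \(512\times 512\) target.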
Since we require the extrapolated regions to be consistent with the scene, we adopt PSNR, SSIM and LPIPS [56] as the evaluation metrics for faithful FOV extrapolation.
### _Implementation Details_
**Photorealistic Datasets.** For Replica, Gibson, and HM3D datasets, we use MAT [30] as our default outpainting model but remove its style manipulation module for deterministic extrapolation. For warping & fusion baseline (B2), we employ the pipeline proposed in [48] as our implementation. For camera relocalization, we first apply an image retrieval method NetVLAD [57] to retrieve the nearest images from the training set w.r.t. a testing image, then run COLMAP [16] which generalizes well to different environments to get the relative pose between the testing image and the retrieved training image. Finally we transform the relative pose to the absolute pose using the known training pose. For NeRF model, we employ DirectVoxGO [49] which leverages 3D voxel representation to accelerate training and inference.
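A sketch of the final relocalization step, composing the known pose of the retrieved training image with COLMAP's relative pose, is given below. We assume \(4\times 4\) homogeneous camera-to-world matrices and that the relative pose maps points from the retrieved training camera frame to the test camera frame; the exact conventions depend on how the COLMAP output is parsed, which the paper does not detail.

```python
import numpy as np

def absolute_test_pose(train_cam_to_world, rel_train_to_test):
    """Both arguments are 4x4 homogeneous transforms (numpy arrays).

    rel_train_to_test: assumed to map points expressed in the retrieved
    training camera frame into the test camera frame.
    Returns the camera-to-world pose of the test image.
    """
    return train_cam_to_world @ np.linalg.inv(rel_train_to_test)
```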
For new data generation, we first derive the walkable areas from the 2D floor plans of the datasets. We sample new camera poses on a 2D horizontal grid with a default interval of \(0.05m\). At each position, we uniformly sample 72 yaw angles for the horizontal rotation. As a result, we sampled about 1.63 million, 1.43 million, and 1.56 million camera poses for Replica, Gibson, and HM3D, respectively.
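A minimal sketch of this 3-DoF pose sampling is given below, assuming the walkable grid positions have already been extracted from the floor plan and a y-up coordinate convention; the actual axis conventions of the Habitat renderer are not reproduced here.

```python
import numpy as np

def sample_grid_poses(walkable_xz, height=1.5, num_yaw=72):
    """walkable_xz: iterable of (x, z) grid positions spaced 0.05 m apart."""
    yaws = np.linspace(0.0, 2.0 * np.pi, num_yaw, endpoint=False)
    poses = []
    for x, z in walkable_xz:
        for yaw in yaws:
            c, s = np.cos(yaw), np.sin(yaw)
            cam_to_world = np.array([
                [  c, 0.0,   s, x],       # rotation about the vertical (y) axis
                [0.0, 1.0, 0.0, height],  # fixed camera height of 1.5 m
                [ -s, 0.0,   c, z],
                [0.0, 0.0, 0.0, 1.0],
            ])
            poses.append(cam_to_world)
    return poses
```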
**ScanNet.** For ScanNet [54], we instead use LaMa [29] for outpainting, to accommodate the different resolution and aspect ratio. For the NeRF model, we replace DirectVoxGO with a state-of-the-art NeRF method [50] on ScanNet to achieve better rendering performance. The other baselines are evaluated in a similar manner as on the photorealistic datasets.
For new data generation, we aim to generate 6-DoF novel poses whose distribution is similar to the training trajectory. Specifically, on the \(x\)-\(y\) plane, we sample new camera poses on a uniform 2D horizontal grid with an interval of \(0.2m\), and ensure the sampled trajectories are roughly covered by the training set, _i.e._, the Euclidean distance from the new pose to the nearest training pose on the \(x\)-\(y\) plane is limited to a threshold of \(0.3m\). For the horizontal rotation (yaw angle), we sample from a uniform distribution whose upper and lower bounds come from the training poses. For the other DoFs (vertical translation, pitch and roll rotation), we empirically discover that the training poses
roughly follow a Gaussian distribution, so we sample from a Gaussian distribution whose mean and standard deviation are calculated from the training poses. Then we employ NeRF to render novel views using these poses. However, in practice we found that a non-negligible portion of the rendered images on ScanNet are too blurry to provide useful information for training the outpainting model. To address this problem, we apply a blur detection algorithm based on the Laplacian variance to compute blurriness scores and filter out blurry images. Eventually, we generated 0.75 million outpainting-trainable images on ScanNet.
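A minimal sketch of such a Laplacian-variance blur filter is shown below; the threshold value is an assumption, as the paper does not report the one it used.

```python
import cv2

def is_too_blurry(image_bgr, threshold=100.0):
    """Low variance of the Laplacian indicates a blurry rendering."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() < threshold

# keep only sufficiently sharp NeRF renderings for outpainting training
# sharp = [img for img in rendered_images if not is_too_blurry(img)]
```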
### _Results_
**Quantitative Results.** Table I illustrates the extrapolation results of the proposed NEO approach and three baseline methods on four datasets. In addition, for validation purposes, the performance of "oracle NeRF" is reported, which employs the groundtruth camera pose of the test image for NeRF rendering. "Oracle NeRF" reflects the quality of the trained NeRF model and sets an upper bound for (B3) relocalized NeRF. Since NEO learns the outpainting from NeRF-augmented images, its performance is expected to be lower than that of "oracle NeRF". As shown in Table I, NEO significantly outperforms the three baseline methods. As anticipated, "oracle NeRF" achieves the best results although it is not for practical use since groundtruth camera poses for testing images are unknown. Although the three baseline methods produce reasonably good results, they encounter non-trivial problems with faithfulness, which we will demonstrate in qualitative results below.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{**Replica (photorealistic)**} & \multicolumn{3}{c}{**Gibson (photorealistic)**} & \multicolumn{3}{c}{**HM3D (photorealistic)**} & \multicolumn{3}{c}{**ScanNet (real)**} \\ \cline{2-13} & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) \\ \hline (B1) Naive Outpainting [30] & 20.59 & 0.781 & 0.348 & 18.05 & 0.705 & 0.404 & 17.92 & 0.630 & 0.427 & 19.98 & 0.755 & 0.188 \\ (B2) Warping \& Fusion [48] & 19.03 & 0.745 & 0.397 & 16.61 & 0.688 & 0.496 & 15.94 & 0.582 & 0.512 & 21.02 & 0.755 & 0.238 \\ (B3) Relocalized NeRF [49, 50, 16] & 16.78 & 0.724 & 0.386 & 14.90 & 0.641 & 0.474 & 14.43 & 0.602 & 0.484 & 18.88 & 0.695 & 0.188 \\ \hline Oracle NeRF [49, 50] & 32.90 & 0.936 & 0.174 & 32.16 & 0.928 & 0.188 & 27.03 & 0.824 & 0.299 & 23.80 & 0.805 & 0.108 \\ \hline NEO & 25.94 & 0.868 & 0.217 & 23.53 & 0.822 & 0.263 & 21.54 & 0.731 & 0.338 & 22.40 & 0.793 & 0.168 \\ \hline \hline \end{tabular}
\end{table} TABLE I: **Quantitative evaluation on four datasets.** The backbone for computing LPIPS metrics is VGG network.
Fig. 4: Qualitative results on **ScanNet** dataset.
Fig. 3: Qualitative results on three photorealistic datasets: **Replica** (1st row), **Gibson** (2nd row), and **HM3D** (3rd row).
**Qualitative Results.** We show some qualitative results on the Replica, Gibson, and HM3D datasets in Fig. 3. “Oracle NeRF” learns a geometrically consistent 3D representation and thus produces appealing, coherent renderings on the three datasets. The areas extrapolated by NEO are also much more accurate and faithful to the scene compared to the three baselines. NEO sometimes suffers from slight misalignment around small objects (_e.g._, the painting on the left wall in the third row of Fig. 3). The extrapolated regions of (B1) naive outpainting tend to be blurry, which is mainly caused by the limited number of training images. The areas extended by (B2) warping & fusion are limited by the non-overlapping regions of neighboring images. As for (B3) relocalized NeRF, the input region (central part) always misaligns with the extrapolated regions due to evident errors in pose estimation. Results on the ScanNet dataset, shown in Fig. 4, lead to similar observations. Surprisingly, we found that NEO can avoid some issues encountered by “oracle NeRF” and achieve better visual quality in some regions, such as the blurry floor region near the border of the image extrapolated by “oracle NeRF” in the second row of Fig. 4. We suspect the reason is that NeRF is trained on sparse views from the original training set, so its rendering quality in specific areas may depend on the availability of informative, overlapping training images. In contrast, NEO trains the outpainting model on sufficient, dense novel views rendered from NeRF, so it is more capable of learning a semantically and geometrically coherent color field of the scene, effectively reducing the impact of insufficient information in some areas of the original NeRF.
### _Discussions_
**Pose Sampling.** It is important in the NEO approach to cover as many views as possible in the scene in Step 2 of the training process. Thus, the distribution and number of sampled poses are crucial for training a highly effective outpainting model. In this study, we vary the interval of the 2D grid to control the sampling density on the Replica dataset. As shown in Table II (a), the generative performance naturally improves when increasing the pose sampling density. The improvement is significant (+1.30 in PSNR) when reducing the interval from \(0.1m\) to \(0.05m\), where the number of sampled poses increases from \(0.4M\) to \(1.6M\).
**FOV of Training Images.** A key issue of training an outpainting model for FOV extrapolation is the consistency in FOV between training and testing images. Figure 5 demonstrates the FOV mismatch problem in the naive outpainting method. The solid red and blue arrows in Fig. 5(a) represents the camera's inherent FOV \(\alpha\). From a resized training image captured at Pose 2 (Fig. 5(c)), naive outpainting learns to extrapolate the cropped FOV \(\beta\) (dashed red) to \(\alpha\). However, for a testing input image (solid blue) at Pose 1 (Fig. 5(b)), the goal is to extrapolate the inherent FOV \(\alpha\) to a larger FOV \(\gamma\) (dashed green). Though the central parts (inputs) of Fig. 5(b) and Fig. 5(c) are similar, their extrapolated parts are totally different. The FOV mismatch issue of naive outpainting can also be observed in the qualitative results (_e.g._, the painting on the second row in Fig. 3). To further examine the effect of FOV, we evaluate a variant of NEO, which uses the original-FOV synthetic images to train the same outpainting model. As seen in Table II (b), the performance greatly decreases (-5.33 in PSNR), indicating the significance of training FOV.
## V Limitations and Conclusions
As an initial exploration of the faithful FOV extrapolation task, this paper focuses on tackling the problem for only static scenes. However, real-world navigation usually entails dynamic objects or people. Moreover, scenes are rarely static over time, _e.g._, furniture may be rearranged. This study can serve as a probe to more comprehensive research in this area. In the future, we envision to explore solutions that can accommodate the complexities of more realistic scenarios. One way may leverage dynamic NeRF [58] that better handles dynamic scenarios.
To conclude, in this paper, we formulate a new problem named _faithful image extrapolation_ to increase FOV of a given image. It requires the expanded area to adhere to the real environment. To address this problem, inspired by the recent surge of NeRF-based rendering approaches, we propose a novel pipeline dubbed NEO, to train a NeRF-enhanced image outpainting model. Our key insight is to obtain sufficient and interpretable training data to aid the training of outpainting model from the novel views rendered by NeRF on a specific scene. Compared with competing baselines, our model has showcased superior generative performance. Our synthesized views are geometrically and semantically consistent with the 3D environment, thereby achieving faithful extrapolation that opens up potential applications such as AR-based navigation.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & Interval & \# Pose & FOV & PSNR \(\uparrow\) & SSIM \(\uparrow\) \\ \hline \multirow{4}{*}{(a)} & **0.05** & 1,629,864 & extended & 25.94 & 0.868 \\ & **0.10** & 406,224 & extended & 24.64 & 0.851 \\ & **0.20** & 101,304 & extended & 24.52 & 0.849 \\ & **0.50** & 10,656 & extended & 23.37 & 0.833 \\ & **1.00** & 3,744 & extended & 22.72 & 0.824 \\ \hline \multirow{2}{*}{(b)} & 0.05 & 1,629,864 & **extended** & 25.94 & 0.868 \\ & 0.05 & 1,629,864 & **original** & 20.61 & 0.802 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Effect of (a) _pose sampling density_ and (b) _FOV of training images_ in NEO pipeline on Replica dataset.
Fig. 5: FOV analysis. The discrepancy in the FOV between the (c) training image and (b) testing image may lead to false extrapolation behaviors. |
2309.15631 | Design and Optimization of Residual Neural Network Accelerators for
Low-Power FPGAs Using High-Level Synthesis | Residual neural networks are widely used in computer vision tasks. They
enable the construction of deeper and more accurate models by mitigating the
vanishing gradient problem. Their main innovation is the residual block which
allows the output of one layer to bypass one or more intermediate layers and be
added to the output of a later layer. Their complex structure and the buffering
required by the residual block make them difficult to implement on
resource-constrained platforms. We present a novel design flow for implementing
deep learning models for field programmable gate arrays optimized for ResNets,
using a strategy to reduce their buffering overhead to obtain a
resource-efficient implementation of the residual layer. Our high-level
synthesis (HLS)-based flow encompasses a thorough set of design principles and
optimization strategies, exploiting in novel ways standard techniques such as
temporal reuse and loop merging to efficiently map ResNet models, and
potentially other skip connection-based NN architectures, into FPGA. The models
are quantized to 8-bit integers for both weights and activations, 16-bit for
biases, and 32-bit for accumulations. The experimental results are obtained on
the CIFAR-10 dataset using ResNet8 and ResNet20 implemented with Xilinx FPGAs
using HLS on the Ultra96-V2 and Kria KV260 boards. Compared to the
state-of-the-art on the Kria KV260 board, our ResNet20 implementation achieves
2.88X speedup with 0.5% higher accuracy of 91.3%, while ResNet8 accuracy
improves by 2.8% to 88.7%. The throughputs of ResNet8 and ResNet20 are 12971
FPS and 3254 FPS on the Ultra96 board, and 30153 FPS and 7601 FPS on the Kria
KV26, respectively. They Pareto-dominate state-of-the-art solutions concerning
accuracy, throughput, and energy. | Filippo Minnella, Teodoro Urso, Mihai T. Lazarescu, Luciano Lavagno | 2023-09-27T13:02:14Z | http://arxiv.org/abs/2309.15631v2 | Design and Optimization of Residual Neural Network Accelerators for Low-Power FPGAs Using High-Level Synthesis
###### Abstract
Residual neural networks (ResNets) are widely used in computer vision tasks. They enable the construction of deeper and more accurate models by mitigating the vanishing gradient problem. Their main innovation is the _residual block_ which allows the output of one layer to bypass one or more intermediate layers and be added to the output of a later layer. Their complex structure and the buffering required by the residual block makes them difficult to implement on resource-constrained platforms. We present a novel design flow for implementing deep learning models for field-programmable gate arrays (FPGAs) optimized for ResNets, using a strategy to reduce their buffering overhead to obtain a resource-efficient implementation of the residual layer. The current implementations of residual networks suffer from diminished performance and heightened computational latency attributable to the way residual blocks are implemented. Our high-level synthesis based flow encompasses a thorough set of design principles and optimization strategies, exploiting in novel ways standard techniques such as _temporal reuse_ and _loop merging_ to efficiently map ResNet models, and potentially other skip connection-based NN architectures, into FPGA. The models are quantized to 8-bit integers for both weights and activations, 16 bits for biases, and 32 bits for accumulations. The experimental results are obtained on the CIFAR-10 dataset using ResNet8 and ResNet20 implemented with Xilinx FPGAs using HLS on the Ultra96-V2 and Kria KV260 boards. Compared to the state-of-the-art on the Kria KV260 board, our ResNet20 implementation achieves \(2.88\times\) speedup with \(0.5\,\mathrm{\char 37}\) higher accuracy of \(91.3\,\mathrm{\char 37}\), while ResNet8 accuracy improves by \(2.8\,\mathrm{\char 37}\) to \(88.7\,\mathrm{\char 37}\). The throughputs of ResNet8 and ResNet20 are \(12\,971\,\mathrm{FPS}\) and \(3254\,\mathrm{FPS}\) on the Ultra96 board, and \(30\,153\,\mathrm{FPS}\) and \(7601\,\mathrm{FPS}\) on the Kria KV26, respectively. They Pareto-dominate state-of-the-art solutions with respect to accuracy, throughput, and energy.
## 1 Introduction
Convolutional neural networks (CNNs) have consistently achieved state-of-the-art results in many tasks, including computer vision and speech recognition [20]. Their success is based on high accuracy and performance due to the improved computational intensity of convolutional layers compared to previous approaches, requiring less memory bandwidth than fully connected (FC) layers [25]. The choice of hardware for implementing convolutional layers profoundly impacts their applicability. Central processing units (CPUs) are versatile and easy to program, but their architecture makes them relatively inefficient. Graphical processing units (GPUs) are designed to handle massive parallelism, allowing them to process multiple computations simultaneously. This aligns well with the inherently parallel nature of CNNs but their energy consumption is notably higher [6]. application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs) offer different tradeoffs of cost and flexibility for algorithm acceleration [40]. The latter are less performance and energy efficient due to their reprogrammability, but they have much lower design cost and can be more easily customized for a specific application. Neural networks (NNs) optimized for embedded applications [16, 41] are designed to run efficiently on devices with limited processing power, memory, and energy. They can perform very well on small datasets, such as CIFAR-10 [19] and MNIST [5], and are often used in real-time contexts where response timeliness and low latency are critical.
Residual neural networks (ResNets) [15] use residual blocks (see Fig. 1) to mitigate the vanishing gradient problem for deep networks through _skip connections_. They allow intermediate feature maps to be reprocessed at different points in the network computation, increasing accuracy. However, state-of-the-art implementations of _skip connections_ require significant on-chip buffering resources, which significantly reduce the benefits of streaming-based FPGA implementations. Thus, recent work has focused on optimizing and
shrinking the residual structure of NNs [33] and on finding quantization strategies that improve the efficiency of their hardware design [8].
Deep networks have many parameters and require extensive quantization to reduce their size to fit into the FPGA on-chip memory [22]. For this reason, widely used tools such as FINN [3] focus on low-bit quantization (such as 1-bit [28] or 2-bit [4]) and make suboptimal use of resources for higher-bit quantization, as we will show later. However, low-bit quantizations degrade NN accuracy and may not be suitable for accurate inference for complex problems [30]. We propose an efficient residual block architecture for a concurrent dataflow FPGA implementation, in which each layer is a process and activations are streamed between processes, with the following main contributions:
* An optimized architecture for CNNs that supports residual networks and minimizes buffering resources, allowing on-chip storage of the parameters and activations using 8 bits, which have been shown to achieve good accuracy for our target NN architectures [21].
* A custom high-level synthesis (HLS) code generation flow from the Python model to the FPGA bitstream using Vitis HLS [37].
* The validation of the architecture and the implementation flow using the ResNet8 and ResNet20 residual NNs on CIFAR-10 targeting the Ultra96-V2 and Kria KV260 FPGA boards, to demonstrate the advantages of the proposed solution.
The rest of the paper is organized as follows. Section 2 reviews related work. Section 3 discusses training and quantization, and describes the accelerator architecture with a special focus on skip connection management. Section 4 presents the experimental setup and discusses the results. Section 5 concludes the paper.
## 2 Related Work
The field of FPGA-based acceleration for deep neural networks has gained significant attention due to its potential to achieve high-performance and energy-efficient inference. Several approaches and architectures have been proposed in the literature to address this challenge [10]. In systolic array overlay-based architectures, each processing element (PE) is a single instruction multiple data (SIMD) vector accumulation module, which receives activation inputs and weights in each cycle from the horizontally and vertically adjacent PEs. Pipelined groups of PEs with short local communication and regular architectures can achieve high clock frequencies and efficient global data transfers [32]. The overlay architecture [39] performs the computation of the convolution layers in sequence over a systolic array. However, despite its flexibility, it has high latency due to frequent transfers between external memory [double data rate (DDR) or high bandwidth memory (HBM)] and on-chip memory.
An alternative approach is to implement a _custom dataflow architecture where each layer is associated with a compute unit_. This structure can be pipelined, and activations and weights can be stored in on-chip memory (OCM), reducing latency and increasing throughput. The main limitation of this type of architecture is the number of digital signal processor blocks (DSPs) and lookup tables (LUTs) required to implement the convolutional layers, as well as the size of on-chip buffers for weight storage [27], while activations are streamed from layer to layer. Since streaming tasks have well-defined pipelining rates with static data flows, a customized approach can lead to optimized processing, resulting in improved performance and resource saving.
Widely recognized as one of the leading frameworks for deploying deep neural networks (DNNs) on FPGAs, Xilinx Vitis AI [1] provides a comprehensive set of tools specifically designed for optimizing and deploying DNNs on Xilinx FPGAs. With support for popular frameworks such as TensorFlow [7], PyTorch [11], and Caffe [9], it incorporates various optimization techniques such as pruning, quantization, and kernel fusion to improve performance and reduce memory consumption. The deep learning processor unit (DPU) is the accelerator core used in Vitis AI and consists of several key modules, including a high-performance scheduler module, a hybrid computing array module, an instruction fetch unit module, and a global memory pool module [36]. The DPU is responsible for executing the microcode of the specified DNN model, known as the _xmodel_. Vitis AI uses an overlay-based architecture where model weights and biases are stored in DDR memory and cached in the on-chip weight buffer during inference. Input and output data of the PE array are also cached in OCM. This architecture scales very well for DNNs, but may have higher resource utilization and lower performance compared to custom dataflow accelerators due to its general-purpose nature and the overhead associated with off-chip memory accesses.
Figure 1: Basic residual block with a long branch with two convolutional layers, and the skip connection (red branch) that must store its input activations until the output activation is generated, requiring much memory.
Another widely used tool is FINN [3], an open source framework developed by Xilinx that allows the generation of highly optimized DNNs for FPGA acceleration, with emphasis on dataflow-style architectures. FINN uses HLS to convert trained DNN models into hardware intellectual property (IP) blocks that can be easily integrated into FPGA-based systems. While FINN offers significant customization capabilities, it is primarily designed for low-bitwidth quantization schemes, such as binarized networks. Achieving high performance with FINN often leads to lower accuracy and/or higher resources, particularly when using the 8-bit quantization that has been shown to be the best compromise between resources and accuracy.
[31] evaluates the accuracy, power, throughput, and design time of three different CNNs implemented on FPGAs and compares these metrics to their GPU equivalents. A comparison was also made between a custom implementation of two DNNs using System Verilog and an implementation using the Xilinx tools FINN and Vitis AI [24]. In addition, [14] reports a comparison between FINN and Vitis AI using a widely used set of ResNet model configurations.
We propose an optimized pipelined dataflow architecture tailored for better resource management. Our solution specifically targets residual NNs and allows using more bits during quantization than, e.g., FINN, to improve the trade-off between accuracy, throughput, and memory usage [35]. Its effectiveness is compared with Vitis AI, FINN, and custom resource-efficient approaches [42], demonstrating its potential to efficiently implement complex residual NNs with state-of-the-art performance in terms of latency, energy, accuracy, and resource consumption.
## 3 Methodology
Fig. 2 shows the high-level view of the flow that we use to generate C++ code from the quantized NN model, which includes the following main steps:
* Use Brevitas [26] for NN quantization and extract its graph in QONNX format, which provides an easy-to-parse description of the network, including information such as layer type, input and output quantization, and layer connections (see Section 3.1);
* Generate the C++ of the top function that instantiates all the layers of the network (see Section 3.7 and Section 3.2);
* Generate the register-transfer level (RTL) code using Vitis HLS and a set of model-independent synthesizable C++ libraries that we wrote to implement the optimized layers (see Section 3.3);
* Import the RTL code as a block into a Vivado design and generate the bitstream for FPGA programming.
### Quantization
Quantization is done using the Brevitas framework [26]. NNs are trained using PyTorch, and both the weights (\(\mathit{bw_{\text{w}}}\)) and the activations (\(\mathit{bw_{\text{x}}}\)) are represented as 8-bit integers, because for a variety of applications this is a good compromise between memory usage and accuracy [21], while the biases are represented as 16-bit integers (\(\mathit{bw_{\text{b}}}\)) for the same reason.
NN training uses floating-point calculations and models quantization by clamping and rounding. Back-propagation uses dequantized inputs and weights to improve convergence and accuracy, while loss evaluation uses quantization to match the results of the hardware implementation. Inference on the FPGA uses multiplications of operands quantized to different sizes, while the results are accumulated in 32-bit registers to avoid overflows and efficiently map to FPGA resources such as DSPs, as discussed in Section 3.3.
Figure 2: Implementation flow
The quantization \(Q(\cdot)\) of a value \(b\) on \(bw\) bits \[a=Q(b)=\text{clip}(\text{round}(b\cdot 2^{bw-s}),a_{\text{min}},a_{\text{max}}) \cdot 2^{s}\quad s\in\mathbb{N}\] (1) \[a_{\text{min}}=\mathit{act}_{\text{min}}(s)=\left\{\begin{array}{ll}0& \text{if unsigned}\\ -2^{bw-1-s}&\text{if signed}\end{array}\right.\] (2) \[a_{\text{max}}=\mathit{act}_{\text{max}}(s)=\left\{\begin{array}{ll}2^{ bw-s}-1&\text{if unsigned}\\ 2^{bw-1-s}&\text{if signed}\end{array}\right.\] (3)
uses \(s\) as scaling factor, \(a_{\text{min}}\) is the lower clipping bound and \(a_{\text{max}}\) is the higher one. All _zero points_ are set to zero and omitted in the expressions above, while the _scaling factors_ are set to powers of two to map alignment operations between weights and activations into hardware-friendly bit shifts [18].
The bias scaling factor \(s_{\text{b}}\) is calculated as the sum of the input scaling factor \(s_{\text{x}}\) and the weight scaling factor \(s_{\text{w}}\).
After the training, to avoid the hardware implementation of floating point operations, the batch normalization layers are merged with the quantized convolution layers [17] and retrained to calibrate and tune the quantization parameters. The final model is exported to the QONNX format [12, 29].
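For concreteness, the quantization of Eqs. (1)-(3) can be transcribed literally as a scalar helper function. The sketch below is plain (non-synthesizable) C++ and simply mirrors the formulas as written above; it is not part of the actual Brevitas/PyTorch training flow.

```cpp
#include <algorithm>
#include <cmath>

// Literal transcription of Eqs. (1)-(3): round, clip against the signed or
// unsigned bounds, then re-apply the power-of-two scaling factor 2^s.
float quantize(float b, int bw, int s, bool is_signed) {
    float a_min = is_signed ? -std::pow(2.0f, bw - 1 - s) : 0.0f;
    float a_max = is_signed ?  std::pow(2.0f, bw - 1 - s)
                            :  std::pow(2.0f, bw - s) - 1.0f;
    float q = std::round(b * std::pow(2.0f, bw - s));
    q = std::min(std::max(q, a_min), a_max);
    return q * std::pow(2.0f, s);
}
```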
### Accelerator architecture
The _code generation_ step in Fig. 2 works on the optimized QONNX graph, i.e. after ReLU and batch normalization were merged with convolutional layers, and provides a C++ _top function_ that instantiates all the tasks (also known as dataflow processes) needed to implement network inference:
* _Computation task_: one for each convolution or pooling node to implement the layer computations (see Section 3.3).
* _Parameter task_: one for each convolution to correctly provide the convolution parameters to the computation task. One additional task is added to load parameters from _off-chip memory_ if UltraRAM (URAM) storage is used (see Section 3.4).
* _Window buffer tasks_: multiple tasks for each convolution or pooling node, to format the input data to the computation tasks (see Section 3.6).
All tasks are coded in a reusable, templated C++ library that can adapt to different activation and weight tensor dimensions, data types, and computational parallelism. Each task is composed of a main loop that performs multiple operations.
To increase the accelerator throughput, pipelining is enabled at two different levels:
* Inter-task: concurrent layer execution is achieved using the _dataflow_ pragma in the body of the top function. There is one computation task and, possibly, multiple window buffer tasks running for each of the network layers. The latency of the slowest task determines the overall accelerator throughput.
* Intra-task: concurrent operation execution is used to reduce and balance task latencies. _Computation tasks_ are the most computationally intensive. Thus each top loop inside them is partially unrolled by a factor computed at compile time and based on the complexity of the corresponding computation. This effectively allocates a number of PEs, one for each unrolled iteration, for each _computation task_. Each PE performs one or more multiply and accumulates (MACs) operations per clock cycle. See Section 3.3 and Section 3.5 about how low-level DSP packing is used to increase the number of MACs executed by a DSP unit in a clock cycle. If the _computation task_ belongs to a convolution, the related _parameter task_ main loop is unrolled by the same factor, to provide enough data to support the increased computations parallelism.
An integer linear programming (ILP) model described in Section 3.5 is used to globally optimize the unroll factors (number of PEs) that maximizes NN throughput under DSP resource constraints (DSPs are the most critical FPGA resource for the NN architectures that we considered in this paper).
Network inputs and outputs are implemented as _streams_ of data. Direct memory access (DMA) blocks read/write input/output tensors to/from the off-chip memory. Streams also transfer data between tasks in order to minimize the memory requirements for on-chip activation storage (see Section 3.6).
The _data-driven_ execution approach is chosen to process the frames sequentially and as a continuous stream. This is achieved in Vitis HLS by using the ap_ctrl_none pragma in the top function that models the entire NN. Each task is then operating as soon as input data are available.
Inference begins as soon as the DMA attached to the input port of the top-level interface is enabled to download input images. Each task is pipelined; the first stage reads the input stream, while the others process the data and write the output stream. As a further, tool-specific, implementation detail, intra-task pipelines are not flushable (which would consume more resources) but stalling, with auto-rewind disabled, to both save resources and avoid deadlocks. Auto-rewind would start a new loop execution while the pipeline is processing the last iterations of the old one, but with data-driven ap_ctrl_none dataflow it would cause deadlocks at runtime. Performance is largely unaffected because the _intra-task pipeline_ latency is very small, just a few cycles, compared to the task latency, which is proportional to the number of
iterations of the _intra-task pipeline_.
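The structure of the generated top function can be illustrated with a minimal, compilable Vitis HLS sketch. The task bodies, names, and dimensions below are placeholders rather than the generated code; the point is the combination of the dataflow pragma, the ap_ctrl_none interface, and the depth-2 streams connecting the tasks.

```cpp
#include <hls_stream.h>
#include <ap_int.h>

typedef ap_int<8> act_t;   // 8-bit quantized activations

// Two placeholder tasks; in the real flow each convolution/pooling layer gets
// its own templated computation task plus window-buffer and parameter tasks.
static void task_a(hls::stream<act_t> &in, hls::stream<act_t> &out) {
    for (int i = 0; i < 32 * 32 * 3; i++) {
#pragma HLS PIPELINE II=1
        out.write(in.read());
    }
}

static void task_b(hls::stream<act_t> &in, hls::stream<act_t> &out) {
    for (int i = 0; i < 32 * 32 * 3; i++) {
#pragma HLS PIPELINE II=1
        out.write(in.read());
    }
}

// Data-driven top: tasks run concurrently and start as soon as data arrive.
void nn_top(hls::stream<act_t> &in, hls::stream<act_t> &out) {
#pragma HLS INTERFACE ap_ctrl_none port=return
#pragma HLS DATAFLOW
    hls::stream<act_t> mid("mid");
#pragma HLS STREAM variable=mid depth=2
    task_a(in, mid);
    task_b(mid, out);
}
```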
### Convolution computation task
Each convolution _computation task_ receives a window of input activations from a _window buffer task_. Fig. 4 shows the pseudo-code for the convolution computation and examples of how the computation pipeline receives input data and computes the partial results. The PARFOR pseudo-code is implemented as an unrolled loop in synthesizable C++.
Input tensors are mostly provided in depth-first-order to each convolution \(i\), as discussed below. The innermost loops are completely unrolled over the filter dimensions (\(\mathit{fh}_{i},\mathit{fw}_{i}\)) and part of the output channel (\(\mathit{och}_{i}\)) dimension. This unroll factor \(\mathit{och}_{i}^{\mathit{par}}\), where "par" means that the execution will be fully data-parallel, defines the number of PEs, as discussed above. It is chosen by the algorithm described in Section 3.5. The \(\mathit{och}_{i}^{\mathit{par}}\) unroll factor is limited by on-chip memory bandwidth and the number of arithmetic units that can be used simultaneously. Increasing the number of output channels computed in parallel per clock cycle requires the corresponding filter parameters to be provided in parallel, i.e., higher memory bandwidth and potentially more BlockRAM (BRAM) resources.
Another optimization changes the order in which the windows are given to the data path, instead of channel first order, and unrolls along the output tensor width (\(\mathit{ow}_{i}\)) loop by a factor (\(\mathit{ow}_{i}^{\mathit{par}}\)). Unrolling along the tensor width allows us to reduce the computation time without requiring more memory bandwidth for the filter parameters, at the cost of more partitioning of the input activation window buffer, and hence of potentially more BRAM resources.
This also allows the weights to be reused within an output stationary dataflow and can be exploited in future work where larger networks are considered and the off-chip memory is used to store network parameters.
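The loop structure described above can be sketched as follows. The constants and unroll factors are illustrative stand-ins for the template parameters chosen by the flow, and output write-back and requantization are omitted; the sketch only shows how the PARFOR levels become unrolled loops around the accumulators.

```cpp
#include <ap_int.h>

// Illustrative constants (not the generated template parameters).
const int ICH = 16, OCH = 16, FH = 3, FW = 3;
const int OCH_PAR = 4;   // output channels computed in parallel (number of PEs)

typedef ap_int<8>  data_t;
typedef ap_int<32> acc_t;

// Output-stationary accumulation for one output pixel: the filter window and a
// slice of OCH_PAR output channels ("PARFOR" levels) are fully unrolled.
void conv_pixel_sketch(const data_t window[FH][FW][ICH],
                       const data_t weights[OCH][ICH][FH][FW],
                       const acc_t  bias[OCH],
                       acc_t        acc[OCH]) {
    for (int och_g = 0; och_g < OCH / OCH_PAR; och_g++) {
        for (int ich = 0; ich < ICH; ich++) {
#pragma HLS PIPELINE II=1
            for (int p = 0; p < OCH_PAR; p++) {        // PARFOR over PEs
#pragma HLS UNROLL
                int och = och_g * OCH_PAR + p;
                acc_t sum = (ich == 0) ? bias[och] : acc[och];
                for (int fh = 0; fh < FH; fh++) {       // PARFOR over the filter
#pragma HLS UNROLL
                    for (int fw = 0; fw < FW; fw++) {
#pragma HLS UNROLL
                        sum += window[fh][fw][ich] * weights[och][ich][fh][fw];
                    }
                }
                acc[och] = sum;
            }
        }
    }
}
```

In the generated code the window is refreshed for every input channel by the window buffer tasks, and the result is written to the output stream once all input channels have been processed.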
We now discuss how we exploit the DSP packing method described in [38] to reduce the hardware overhead of computing quantized data, by performing multiple operations on a single DSPs block. Unlike \(\mathit{och}_{i}^{\mathit{par}}\), which is resource dependent, \(\mathit{ow}_{i}^{\mathit{par}}\) depends on the activation quantization bits. Even though the number of operations packed in a DSP depends on the number of bits, this work only used the configuration described in [38], which presents a method for \(\mathit{bw}_{i}=8\) for both parameters and activations.
Fig. 5 shows two examples of calculation pipelines, with different values of \(\mathit{ow}_{i}^{\mathit{par}}\). Each gray box is a PE that receives:
* _Input activations:_\(\mathit{ow}_{i}^{\mathit{par}}\) inputs. These values change at each iteration of the \(\mathit{och}_{i}^{\mathit{groups}}\) loop. The input activations are multiplied in parallel by the PE input weight and are provided by the corresponding _window buffer tasks_.
* _Input weight_: one input. This value is updated at each clock cycle. The input weight is provided by the corresponding _parameter task_.
\begin{table}
\begin{tabular}{l l} \hline \hline
Symbol & Description \\ \hline
\(\mathit{ich}_{i}\) & Input tensor channels \\
\(\mathit{ih}_{i}\) & Input tensor height \\
\(\mathit{iw}_{i}\) & Input tensor width \\
\(\mathit{och}_{i}\) & Output tensor channels \\
\(\mathit{oh}_{i}\) & Output tensor height \\
\(\mathit{ow}_{i}\) & Output tensor width \\
\(\mathit{fh}_{i}\) & Filter tensor height \\
\(\mathit{fw}_{i}\) & Filter tensor width \\
\(\mathit{s}_{i}\) & Convolution stride \\ \hline \hline \end{tabular}
\end{table}
Table 1: Symbol definitions for layer \(i\)
Figure 3: Accelerator architecture with direct memory access (DMA) blocks for memory transfers (grey box) and concurrent tasks [computation (yellow), buffer (red), and parameter (green)] communicating through data streams. Parameter loading from off-chip memory to URAMs (dashed) can be enabled on platforms supporting it.
* _Partial accumulation_: one input. This value is updated at each clock cycle. The partial accumulation is provided by the previous pipeline stage.
Each PE receives an input weight every clock cycle, so sufficient OCM bandwidth must be provided (see Section 3.4).
The two pipelines in Fig. 5 highlight how \(\mathit{och}_{i}^{\mathit{par}}\) allocates multiple PEs per pipeline stage (horizontal unroll), and how \(\mathit{ow}_{i}^{\mathit{par}}=2\) modifies the mapping of the input activations to the different stages of the pipelines, thus increasing the number of computations for each PE. The partial accumulation entering each PE comes from the previous pipeline stage. The only exception is the first stage, which receives as value to accumulate the _bias_ of the convolution.
Each MAC calculation, for the case \(\mathit{ow}_{i}^{\mathit{par}}=1\), is done using a DSP from the _Xilinx_ architecture. If \(\mathit{ow}_{i}^{\mathit{par}}=2\) the two MACs have reduced resource usage thanks to the technique described in [38].
As shown by the pipeline in Fig. 5 with \(\mathit{ow}_{i}^{\mathit{par}}=2\), the operation packing is done by multiplying 2 activations (\(A\), \(D\)) by 1 parameter (\(B\)) and accumulating to a partial result (\(P_{i-1}\)). The output (\(P_{i}\)) is passed to the next pipeline stage.
The multiplier of each DSP receives one \(27\,\mathrm{bit}\) and one \(18\,\mathrm{bit}\) input: the former packs the two activations into separate bit fields, while the latter contains the sign-extended parameter.
Figure 4: Convolution architecture: The data flow is output stationary, and for \(\mathit{och}_{i}^{\mathit{par}}\) output channels the contribution of the input window \(\mathit{fh}_{i},\mathit{fw}_{i}\) is evaluated every clock cycle. Data is written to the output after all input channels have been processed. The dataflow setup for \(\mathit{ow}_{i}^{\mathit{par}}=1\) and \(\mathit{ow}_{i}^{\mathit{par}}=2\) is shown in the two schematics. The input activations are loaded simultaneously, along orange lines, into each gray box, which is a PE. PEs performs a _MAC_ operation and the partial results move through the pipeline from top to bottom along the green lines.
The two operands are multiplied into a \(36\,\mathrm{bit}\) word (\(M\)) that holds both partial products, one in its upper and one in its lower bit field. The DSP also adds this partial product to the accumulation coming from the previous pipeline stage, \(P_{i-1}\). At the end of the chain, a restore stage corrects the carry between the two bit fields, so that the sign of the lower partial product does not create errors in the final result.
Note that in our specific use case, where activations and parameters are quantized on \(8\,\mathrm{bits}\), we can at most chain **7** packed DSPs, because of the limited padding between the two words, namely \(2\,\mathrm{bits}\), and the restore mechanism which corrects \(1\,\mathrm{bit}\) overflow. However, for convolution filters with \(\textit{fn}_{i}=3\) and \(\textit{fw}_{i}=3\), the DSP chain should have a length of 9. Hence we split the chain into \(2\) subparts that respect the maximum length condition. The partial accumulations coming from the different chains are then added together in an additional stage, and the result coming from the DSPs pipeline is finally added to the accumulation register.
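The packing arithmetic can be checked with a small bit-level model. The code below is plain C++ (not HLS) and exercises a single packed multiplication together with the restore correction; in the real design the correction is applied once at the end of each DSP chain, and the 2-bit guard field limits the chain length as described above.

```cpp
#include <cstdint>
#include <cassert>

// Bit-level model of packing two 8-bit activation-by-weight products into one
// wide multiplication, with the "restore" (+1) correction of the upper field.
int main() {
    const int ws[] = {-128, -37, 0, 55, 127};
    for (int a0 = -128; a0 < 128; a0++)
        for (int a1 = -128; a1 < 128; a1++)
            for (int w : ws) {
                // 27-bit style operand: a0 in the upper field, a1 in the low 18 bits.
                int64_t packed = ((int64_t)a0 << 18) + a1;
                int64_t prod   = packed * (int64_t)w;   // both products at once

                // Lower field: a1*w, sign-extended from 18 bits.
                int32_t lo = (int32_t)(((prod & 0x3FFFF) ^ 0x20000) - 0x20000);
                // Upper field: a0*w, restored by +1 when the lower field is
                // negative (arithmetic right shift assumed).
                int32_t hi = (int32_t)(prod >> 18) + (lo < 0 ? 1 : 0);

                assert(lo == a1 * w && hi == a0 * w);
            }
    return 0;
}
```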
Registers keeping the partial results of the MAC between multiple iterations are sized in order to avoid overflows in the computations. Considering models with \(\textit{bw}_{i}\) bit quantization, the accumulated values from the product between an activation and a parameter have a width equal to \(2\cdot\textit{bw}_{i}\), assuming the same bit-width for activation and parameter quantization. For each convolution, the number of accumulations (\(N_{i}^{acc}\)) performed to compute each output value is
\[N_{i}^{acc}=\textit{och}_{i}\cdot\textit{ich}_{i}\cdot\textit{fh}_{i}\cdot \textit{fw}_{i}. \tag{4}\]
Since the addend has \(2\textit{bw}_{i}\) bits, the final accumulation register must have a width equal to
\[\textit{bw}_{i}^{acc}=\lceil\log_{2}(N_{i}^{acc})\rceil+2\textit{bw}_{i}. \tag{5}\]
Considering the worst case for _Resnet8_ and _Resnet20_ with \(8\,\mathrm{bit}\) quantization, the required bitwidth is
\[N_{i}^{acc}=32\cdot 32\cdot 3\cdot 3=9216 \tag{6}\]
\[\textit{bw}_{i}^{acc}=\lceil\log_{2}9216\rceil+2\cdot 8=14+16=30. \tag{7}\]
The accumulation register size is chosen to be \(32\,\mathrm{bit}\) because it ensures no overflow, and using standard C++ types improves C simulation speed.
### Parameter task
Each convolution layer of the QONNX network graph has a _parameter task_ in the top function, feeding the computation pipeline with data from on-chip memory. Depending on the target FPGA, parameters may be stored in:
* _BRAMs_: they can store up to \(4\,\mathrm{KB}\) each and can be initialized by the bitstream. The parameters for each convolution are stored in separate arrays, one for each weight and bias of the convolutions, because each is accessed by a specific _parameter task_.
* _UltraRAMs (URAMs)_: they can store \(32\,\mathrm{KB}\) of data each (allowing higher performance) but require dedicated hardware for initialization (a DMA-driven input stream). The parameters for each convolution are packed into a single array stored in off-chip DRAM (also accessible by the host) and transferred by DMA once at power-up. A concurrent task in the dataflow architecture splits and distributes the input parameter stream to the tasks that handle the parameters of each convolution. Each _parameter task_ provides the filter data to the computation pipeline and caches it in URAMs at the first iteration for reuse (hence subsequent URAM accesses are read-only).
The Ultra96 board lacks URAM, so BRAM is used; the Kria KV260 board uses URAM.
As discussed in Section 3.3, the main loop of each convolution's _computation task_ consumes \(\textit{cw}_{i}=\textit{och}_{i}^{par}\cdot\textit{fh}_{i}\cdot\textit{fw}_{i}\) filter data per clock cycle. The \(\textit{ow}_{i}^{par}\) unroll factor does not contribute because each parameter is used for multiplication with \(\textit{ow}_{i}^{par}\) activations. To avoid stalling the computation pipeline, the _parameter task_ must write \(\textit{cw}_{i}\) weights every clock cycle and read the same amount of data from the BRAMs or URAMs. Arrays are then reshaped by a factor equal to \(\textit{cw}_{i}\), using the array_reshape pragma, to achieve the required memory bandwidth.
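A minimal sketch of such a parameter task is shown below. The dimensions, parallelism, and naming are hypothetical; the point is that the reshape factor matches \(\textit{cw}_{i}\), so that one wide memory word read per clock cycle feeds all the parallel multipliers.

```cpp
#include <hls_stream.h>
#include <ap_int.h>

typedef ap_int<8> w_t;

#define CW 36                        // och_par(4) * fh(3) * fw(3), illustrative
#define N_WORDS (16 * 16 * 3 * 3)    // och * ich * fh * fw, illustrative

// Illustrative parameter task: the weight array is reshaped by a factor CW so
// that CW filter words can be read and streamed out every clock cycle.
// (In the real flow the array is initialized from the bitstream or by DMA.)
void param_task_sketch(hls::stream<w_t> w_out[CW]) {
    static w_t weights[N_WORDS];
#pragma HLS ARRAY_RESHAPE variable=weights cyclic factor=36 dim=1

    for (int i = 0; i < N_WORDS / CW; i++) {
#pragma HLS PIPELINE II=1
        for (int j = 0; j < CW; j++) {
#pragma HLS UNROLL
            w_out[j].write(weights[i * CW + j]);
        }
    }
}
```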
### Throughput optimization
To avoid stalling, all streams are sized appropriately by our configuration _Python_ script based on their type, as follows.
Streams created by _parameter tasks_ supply _computation tasks_ with a token size equal to the computational parallelism of the consuming convolution, \(\textit{och}_{i}^{par}\), every clock cycle. Since the producer and consumer write and read one token per clock cycle, the stream size is 2.
The sizes of the streams produced by _window buffer tasks_ are discussed in Section 3.6.
The output stream from _computation tasks_ must consider \(\textit{och}_{i}^{par}\) and \(\textit{ow}_{i}^{par}\). The pseudocode in Fig. 4 shows that
computation tasks_ write a burst of \(\mathit{och}_{i}\cdot\mathit{ow}_{i}^{par}\) output activations, grouped into tokens of size \(\mathit{och}_{i}^{par}\) to not stall the pipeline. When packing is applied, the output stream is split into \(\mathit{ow}_{i}^{par}\) parallel channels to ensure enough bandwidth. Each channel is implemented by a first in first out (FIFO) of size \(\mathit{och}_{i}^{groups}=\mathit{och}_{i}/\mathit{och}_{i}^{par}\) to store the burst transactions completely.
As mentioned above, using the _dataflow_ paradigm and assuming optimal stream sizing to avoid stalling, accelerator throughput is limited by the slowest concurrent process. Therefore, the throughput \(\mathit{Th}\) of each layer unit must be balanced for optimal performance. The latency of each module depends on the number of computations for each input frame \(c\) and the computational parallelism \(\mathit{cp}\) required for each block \(i\). The number of computations for a convolutional layer is
\[c_{i}=\mathit{oh}_{i}\cdot\mathit{ow}_{i}\cdot\mathit{och}_{i}\cdot\mathit{ ich}_{i}\cdot\mathit{fh}_{i}\cdot\mathit{fw}_{i}. \tag{8}\]
Since the parameter \(c_{i}\) is fixed and depends on the chosen network architecture, the throughput per layer is set by the number of compute units allocated to each _computation task_ implementing a layer. As shown in the pseudocode in Fig. 4, computation parallelism \(\mathit{cp}_{i}\) is
\[\mathit{cp}_{i} =k_{i}\cdot\mathit{och}_{i}^{par}\cdot\mathit{ow}_{i}^{par}, \tag{9}\] \[\text{with}\quad k_{i} =\mathit{fh}_{i}\cdot\mathit{fw}_{i},\mathit{och}_{i}^{par}, \mathit{ow}_{i}^{par}\in\mathbb{N}. \tag{10}\]
Since the filter size \(k_{i}\) is defined by the model and \(\mathit{ow}_{i}^{par}=2\), because for this work we consider all the quantization bit-widths equal to \(8\,\mathrm{bit}\), the variable to optimize is \(\mathit{och}_{i}^{par}\), i.e. \(\mathit{cp}_{i}\) is an integer multiple of the filter size.
The throughput of each task, \(\mathit{Th}_{i}\) frame per second (FPS), depends on the variable \(\mathit{och}_{i}^{par}\)
\[\mathit{Th}_{i}=\mathit{Th}\left(\mathit{och}_{i}^{par}\right)=\frac{\mathit{ cp}_{i}}{c_{i}}=\frac{k_{i}\cdot\mathit{och}_{i}^{par}\cdot\mathit{ow}_{i}^{par}}{c_{i}}. \tag{11}\]
Considering a network with \(N\) convolutional layers, Algorithm 1 shows an ILP formulation of throughput optimization. If \(i_{\max}\in[1,N]\) is the index of the layer with the highest \(c_{i}\), then the goal is to balance the throughput of all layers
\[\forall i\in[1,N]\quad\mathit{Th}\left(\mathit{och}_{i_{\max}}^{par}\right)= \mathit{Th}\left(x_{i}\right)\implies\mathit{cp}_{i}=\mathit{cp}_{i_{\max}}r_ {i} \tag{14}\]
with \(r_{i}=c_{i}/c_{i_{\max}}\). Then the number of resources needed for each layer can be calculated, given the resources allocated for layer \(i_{\max}\). The total number of parallel computations allocated is
\[\mathit{cp}_{\mathrm{tot}}=\sum_{i=1}^{N}\mathit{cp}_{i}=\sum_{i=1}^{N} \mathit{cp}_{i_{\max}}r_{i}. \tag{15}\]
From (13), \(N_{\mathrm{PAR}}\) limits the maximum number of computations that can be done in parallel and depends on the platform. The FPGAs on the Ultra96 and KRIA KV260 boards that we are considering have \(360\) and \(1248\) DSPs, respectively. During hardware generation, \(N_{\mathrm{PAR}}\) is set to the number of DSPs on the target board.
The ILP can then maximize the throughput of the network by optimizing the parameters for the \(i_{\max}\) layer (12) and automatically configuring the template parameters of the tasks.
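As a toy illustration of the balancing in Eqs. (14)-(15), the continuous relaxation below allocates parallelism proportionally to each layer's work \(c_{i}\) under the DSP budget; the \(c_{i}\) values are illustrative, and the integrality constraints of Eqs. (9)-(10) that the actual ILP enforces are ignored.

```cpp
#include <vector>
#include <cstdio>

int main() {
    std::vector<long> c = {442368, 2359296, 2359296, 1179648}; // illustrative c_i
    long n_par = 1248;                                         // Kria KV260 DSPs

    long c_tot = 0;
    for (long ci : c) c_tot += ci;

    for (int i = 0; i < (int)c.size(); i++) {
        double cp_i = (double)n_par * c[i] / c_tot;  // parallelism proportional to c_i
        double th_i = cp_i / c[i];                   // Eq. (11), per clock cycle
        std::printf("layer %d: cp = %6.1f  Th = %.3e frames/cycle\n", i, cp_i, th_i);
    }
    return 0;
}
```

Because \(cp_{i}\propto c_{i}\), the per-layer throughputs printed above are all equal, which is exactly the balancing condition of Eq. (14).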
### Window generation
Given a convolution input tensor, we only need to store on-chip enough data to provide the input window to the _intra-task_ pipeline of each computational task. For example, Fig. 6 shows an input tensor and the input window mapping for a convolution with \(\mathit{fh}_{i}=3,\mathit{fw}_{i}=3\). It is important to highlight that the activations are produced using _depth-first_ order by the convolution that creates the input tensor (Fig. 4), while the input window is distributed over one channel and multiple lines. It is thus necessary to store all the lines needed to generate an input window, so each window buffer (also called line buffer in the literature) should be sized to accommodate the required activations. The portion of the input tensor (\(B_{i}\)) that the buffer must retain to create an input window is highlighted in Fig. 6
\[B_{i}=[(\mathit{fh}_{i}-1)\cdot\mathit{iw}_{i}+\mathit{fw}_{i}-1]\cdot\mathit{ ich}_{i}. \tag{16}\]
This size is constant over time because each time the buffer reads a new activation and generates a window, it discards the oldest stored one.
Figure 6: Input window mapped on the input tensor, \(\mathit{ow}^{par}{}_{i}=1\). Fig. 4 shows how the window elements map to the computation pipeline.
The _window buffer tasks_ retrieve from the input buffer the \(B_{i}\) activations required for the windows. At the maximum unroll factor, \(\mathit{och}_{i}^{\mathit{par}}=\mathit{och}_{i}\), each intra-task pipeline of the _computation task_ processes one input window per clock cycle.
The data read by the _window buffer tasks_ from the input activation buffer is \(\mathit{fh}_{i}\cdot\mathit{fw}_{i}\), i.e. a convolution window. The data needed for the input window is not contiguous and cannot be read by directly addressing the buffer, because it is stored sequentially in a FIFO with only one read port available. To provide the necessary bandwidth, the FIFO must be partitioned into \(\mathit{fh}_{i}\cdot\mathit{fw}_{i}\) parts, connected sequentially as shown in Fig. 7. Optimizing the window buffer to reduce the required partitioning in cases that allow a lower window generation rate is left for future work.
The size of each FIFO, \(S_{1}\), \(S_{2}\), is the distance, in number of activations, between successive values of the same input window, considering that the tensor is processed in _depth-first_ order. \(S_{1}\) represents the distance between two activations within the same row of the input window, and it is equal to the number of channels \(\mathit{ich}_{i}\) in the tensor. In contrast, \(S_{2}\) covers the gap between two activations in different rows of the input window, and it is directly proportional to one row (\(\mathit{ich}_{i}\cdot\mathit{iw}_{i}\)) in the input tensor, minus the filter width \(\mathit{fw}_{i}\). Each _task_\(T_{i}\) reads from a FIFO the data for an input window position, \(i\), and provides the input for the next FIFO slice, \(i+1\).
The _padding_ task, if enabled, reads at each cycle the data from _task_\(T_{i}\) for positions \(i\) that do not require padding, and replaces with \(0\) the positions of the input window that must be padded. Thanks to the concurrent execution and padding-aware control of the _window buffer tasks_ and _padding task_, the first buffer slices can be initialized with the tensor values of the next frame, while the last ones generate the final windows of the previous frame, avoiding the latency overhead caused by initializing the FIFOs.
The structure of the _window buffer_ depends on the unroll factor \(\mathit{ow}_{i}^{\mathit{par}}\). Fig. 8 shows the input window mapped to the input tensor with \(\mathit{ow}_{i}^{\mathit{par}}=2\), for which all considerations made before about \(\mathit{ow}_{i}^{\mathit{par}}=1\) apply.
The input buffer size is
\[B_{i}=[(\mathit{fh}_{i}-1)\cdot\mathit{iw}_{i}+\mathit{fw}_{i}]\cdot\mathit{ich }_{i} \tag{17}\]
so the overhead with respect to (16) is minimal. The buffer must be partitioned to ensure the required window production rate. With \(\mathit{ow}_{i}^{\mathit{par}}=2\), the elements of the input window are \((\mathit{fw}_{i}+\mathit{ow}_{i}^{\mathit{par}}-1)\cdot\mathit{fh}_{i}\). Fig. 9 shows how the buffer is partitioned according to the required bandwidth.
The main difference between Fig. 7 and Fig. 9 is how the activations flow in the different FIFO slices.
Given an input filter of size \(\mathit{fh}_{i}\cdot\mathit{fw}_{i}\), an input activation is multiplied, at different times, by a value in each position of the filter window.
If \(\mathit{ow}_{i}^{\mathit{par}}=1\), there is a one-to-one correspondence between the positions of the input window and those of the filter window. This means that the activation must pass through all FIFO slices, because each of them represents a position \(i\) of the input/filter windows.
If \(\mathit{ow}_{i}^{\mathit{par}}=2\), the input window keeps two windows that are multiplied by the parameter window, i.e. part of the activations are evaluated in two adjacent positions for each input window (\(i,i+1\)). Thus, the output of \(T_{i}\) must be connected to the input of the FIFO slice \(i+2\) to ensure correct data flow.
Figure 8: Input window mapped on the input tensor, \(\mathit{ow}_{i}^{\mathit{par}}=2\), retaining two computation windows (red and blue). Fig. 4 shows how the window elements map to the computation pipeline.
Figure 7: Buffer partitioning, \(\mathit{ow}_{i}^{\mathit{par}}=1\). Yellow boxes are the FIFOs with their sizes. Orange boxes are tasks that read and write the FIFOs. Padding is applied before generating the box for the convolution.
### Graph Optimization
The main contribution of this paper is to provide a structured methodology to efficiently implement a residual block in a dataflow accelerator with concurrent processes.
The same considerations from Section 3.6 can be extended to network graphs with multiple nodes processing the same input tensor, i.e. residual blocks, as shown in Fig. 10, to provide a more general solution. Considering a tensor processed by multiple convolutions, multiple branches start from the convolution that generates the tensor. In residual networks the branches are then merged by an _add_ layer.
Fig. 10 shows _Resnet20_ and _Resnet8_ residual block topologies with _2 branches_ per input tensor and _0 or 1 convolutions_ on skip connection (the branch crossing fewer convolutions).
The _add_ layer adds the values from the _2 branches_. Because of the dataflow architecture, the operation starts as soon as both input streams have data. However, the time required to fill each stream is different. The skip connection stream that reaches the _add_ node is filled in parallel with the _conv0_ input stream in the case without downsampling, or after \(ich_{i}\) cycles in the case with downsampling. The input stream from the long branch is filled as soon as _conv1_ provides its first output activation. As shown by Fig. 6, _conv1_ starts processing data as soon as its input buffer is full. The amount of data buffered for skip connections, \(B_{\text{sc}}\), is equal to the amount to be processed by _conv0_ and is sufficient to start _conv1_ operations. To calculate this value we use the _receptive field_[2], which is the portion of feature maps related to successive layers that contribute to produce an activation.
Fig. 11 shows the _receptive field_ of the _conv1_ window with respect to the _conv0_ that generates it. \(B_{\text{sc}}\) is the buffering of all receptive fields projected from the activation in the _conv1_ line buffer as soon as it starts computing. From [2], as shown in Fig. 11, the data to store for each receptive field \(B_{\text{r}}\) is
\[rh_{0} =fh_{1}+fh_{0}-1 \tag{18}\] \[rw_{0} =fw_{1}+fw_{0}-1\] (19) \[B_{r} =rh_{0}\cdot rw_{0}. \tag{20}\]
Sliding the receptive field window over \(ich_{i},iw_{i}\), the obtained buffering \(B_{\text{sc}}\) is
\[B_{\text{sc}}=[iw_{0}\left(rh_{0}-1\right)+rw_{0}]\,ich_{0}. \tag{21}\]
In the dataflow architecture used in the final implementation, the "bypass" branch must store its input activation data from the previous stage until the first output activation is generated by the convolution and the merged output can be generated. In previous dataflow implementations of CNNs, this buffering consumed large amounts of memory [34].
Figure 11: Receptive field of the _conv1_ window. For clarity, the \(ich\) dimension is omitted and _conv1_ stride is assumed to be 1 (\(s_{1}=1\)).
Figure 10: _Resnet20_ and _Resnet8_ residual blocks
Figure 9: Buffer partitioning, \(\textit{ow}_{i}^{\text{par}}=2\). The data in output from each _task Ti_ is connected as input to the FIFO slice at the position \(i+\textit{2}\) because of activation reuse.
To efficiently support _residual_ blocks, the multiple endpoints of the input tensor and the increased buffering caused by the different number of convolutions (and thus different computation delays) per branch must be handled differently.
The combination of the following optimization steps for the dataflow architecture, _proposed for the first time in this paper_, can avoid this buffering overhead in CNNs such as Resnet8 and Resnet20.
The following two transformations (see Fig. 12) show how to solve the problem of multiple endpoints, reducing the length of the skip connection and the required buffering:
* Loop merge: if the residual block has a downsample layer, i.e., a pointwise convolution in the short branch of the skip connection, the algorithm merges the two convolution loops. Both computations are performed by the same task, which provides the tensor produced by the merged layer as an additional output.
* Temporal reuse: to avoid buffering the same tensor twice, if the residual block does not have a downsample layer, the graph optimization uses the window buffer as input to the convolution. The values are forwarded, using a second output stream, to the next layer once they have been completely used.
Thanks to these transformations, the two input streams of the _add_ merge layer are produced simultaneously, and computation tasks never stall. _Conv0_ writes the skip connection stream as soon as the convolution computation starts and at the same rate as the convolution output tensor. The last transformation, shown in Fig. 13, removes the sum of the value coming from the short branch by connecting it as an additional contribution to the second convolution of the long branch. The value from the skip branch is used to initialize the accumulator register, and the addition operation is removed from the network graph.
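The effect of this last transformation can be sketched in isolation: the skip-connection value simply seeds the accumulator of _conv1_, so no separate add layer is needed. The sketch below uses illustrative dimensions and ignores the requantization/rescaling of the skip value that the real implementation applies before the accumulation.

```cpp
#include <ap_int.h>

typedef ap_int<8>  act_t;
typedef ap_int<32> acc_t;
const int ICH = 16, FH = 3, FW = 3;

// One output value of conv1: the accumulator starts from bias + skip value,
// so the explicit add layer disappears from the network graph.
acc_t conv1_output_value(const act_t window[FH][FW][ICH],
                         const act_t weights[ICH][FH][FW],
                         acc_t bias, act_t skip_value) {
    acc_t acc = bias + skip_value;   // accumulator seeded with the skip input
    for (int ich = 0; ich < ICH; ich++)
        for (int fh = 0; fh < FH; fh++)
            for (int fw = 0; fw < FW; fw++)
                acc += window[fh][fw][ich] * weights[ich][fh][fw];
    return acc;
}
```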
The producer and consumer of the two branch streams are now the same (_conv0_ and _conv1_), and they produce/consume at the same rate. The required buffering of the skip connection (\(B_{\mathrm{sc}}\)) is now equal to the _conv1_ window buffer size
\[B_{\mathrm{sc}}=B_{1}=[(\textit{fh}_{1}-1)\cdot\textit{iw}_{1}+\textit{fw}_{ 1}-1]\cdot\textit{ich}_{1}. \tag{22}\]
The dimensions of the first residual block without downsample of _Resnet20_ are: \(\textit{iw}_{0}=\textit{iw}_{1}=32\), \(\textit{ich}_{0}=\textit{ich}_{1}=16\), \(\textit{fh}_{0}=\textit{fh}_{1}=3\), \(\textit{fw}_{0}=\textit{fw}_{1}=3\).
The dimensions of the first residual block with downsample of _Resnet20_ are: \(\textit{iw}_{0}=32\), \(\textit{iw}_{1}=16\), \(\textit{ich}_{0}=16\), \(\textit{ich}_{1}=32\), \(\textit{fh}_{0}=\textit{fh}_{1}=3\), \(\textit{fw}_{0}=\textit{fw}_{1}=3\).
The skip connection buffering, \(B_{\mathrm{sc}}\), is then reduced to \(R_{\mathrm{sc}}\), in both cases
\[R_{\mathrm{sc}}=\frac{[(\textit{fh}_{1}-1)\,\textit{iw}_{1}+\textit{fw}_{1}- 1]\,\textit{ich}_{1}}{[(\textit{rh}_{0}-1)\,\textit{iw}_{0}+\textit{rw}_{0}] \,\textit{ich}_{0}}=0.5. \tag{23}\]
The same calculated gain holds for all residual blocks in _Resnet20_ because the product \(\textit{iw}_{i}\cdot\textit{ich}_{i}\) remains constant. The same considerations apply to _Resnet8_, since the structure of its residual blocks is identical to those already analyzed.
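These numbers can be verified with a few lines of arithmetic; the small program below plugs the block dimensions listed above into Eqs. (18)-(22) and prints ratios of approximately 0.5, consistent with Eq. (23).

```cpp
#include <cstdio>

// Eq. (22): conv1 window buffer size.
static long win_buf(int fh, int fw, int iw, int ich) {
    return (long)((fh - 1) * iw + fw - 1) * ich;
}
// Eqs. (18)-(21): skip-connection buffering via the receptive field of conv1.
static long skip_buf(int fh0, int fw0, int fh1, int fw1, int iw0, int ich0) {
    int rh0 = fh1 + fh0 - 1, rw0 = fw1 + fw0 - 1;
    return (long)((rh0 - 1) * iw0 + rw0) * ich0;
}

int main() {
    // Without downsample: iw0 = iw1 = 32, ich0 = ich1 = 16, 3x3 filters.
    double r1 = (double)win_buf(3, 3, 32, 16) / skip_buf(3, 3, 3, 3, 32, 16);
    // With downsample: iw0 = 32, ich0 = 16, iw1 = 16, ich1 = 32, 3x3 filters.
    double r2 = (double)win_buf(3, 3, 16, 32) / skip_buf(3, 3, 3, 3, 32, 16);
    std::printf("R_sc (no downsample)   = %.3f\n", r1);  // ~0.496
    std::printf("R_sc (with downsample) = %.3f\n", r2);  // ~0.511
    return 0;
}
```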
From a network graph of a residual block with and without downsampling, Fig. 14 shows the initial and final representations after applying the previously described optimizations.
## 4 Experimental Results
Our architecture is evaluated on the CIFAR-10 dataset, which consists of \(32\)x\(32\) RGB images. The same preprocessing and data augmentation used in [15] is used for both training and testing. The model is trained for \(400\) epochs with a batch size of \(256\), using the stochastic gradient descent (SGD) optimizer and cosine annealing as the learning rate scheduler.
Synthesizable C++ is used for both the hand-written layer process library and the Python-generated top-level dataflow code that calls the process functions. The implementation flow uses Xilinx Vitis HLS for RTL code generation and Vivado for implementation on the Ultra96-v2 and Kria KV260 boards. Table 2 shows the available resources for the two boards.
Figure 12: Multiple endpoint graph optimizations: (a) input forwarding without downsampling, (b) layer merging when there is a downsample pointwise convolution.
Figure 13: The addition is optimized as initialization of the convolution accumulator register.
The obtained throughputs (FPS, \(\mathrm{Gops/s}\)) and the latency (\(\mathrm{ms}\)) are shown in Table 3. The final resource utilization is summarized in Table 4.
Our proposed architecture is first compared with a ResNet20 implementation and the derived AdderNet described in [42], which are the most efficient CNN implementations on FPGAs in terms of DSP packing and model architecture to date. Our implementation achieves speedups (\(\mathrm{Gops/s}\)) of \(2.88\)x and \(1.94\)x with \(0.5\,\%\) and \(1.4\,\%\) higher accuracy, with respect to the ResNet20 and AdderNet in [42], using the Kria KV260 as a reference platform. Also, the latency is reduced by \(3.84\)x and \(1.96\)x respectively.
We then compare our results with the implementations of the ResNet8 model by Vitis AI and FINN described in [14]. Our solution achieves speedups of \(6.8\)x and \(2.2\)x with a latency improvement of \(28.1\)x and \(3.35\)x respectively. Vitis AI achieved better accuracy by \(0.5\,\%\), probably because it executes _batch normalization_ in hardware, while our implementation outperformed a 4-bit FINN implementation by \(2.8\,\%\).
## 5 Conclusion
This work presents a design flow for CNNs specifically optimized for residual networks. It supports the most commonly used operations for classic CNNs, including convolutions, fully connected (linear) layers, batch normalization, ReLU activation functions, max/average pooling, and skip connections. It is also fairly platform-independent, since it is based on heavily templatized layer models and comes with an ILP-based optimization method to maximize throughput under resource constraints. This allows it to be used with various FPGA platforms, including embedded ones.
A dataflow pipelined architecture minimizes buffering resources for networks with skip connections. The design is validated by experiments on ResNet8 and ResNet20 using the CIFAR-10 dataset. Both activations and weights are quantized in INT8 format using power-of-two scaling factors.
The design uses PyTorch and Brevitas for training and quantization, and Vitis HLS and Vivado for hardware implementation on Kria KV-260 and Ultra96-v2 boards.
The solution achieves an accuracy of \(88.7\,\%\) for ResNet8 and \(91.3\,\%\) for ResNet20, with throughputs of \(12\,971\) FPS and \(3254\) FPS on the Ultra96, and \(30\,153\) FPS and \(7601\) FPS on the Kria KV260. Compared to the state-of-the-art for CNN residual network acceleration on FPGAs [42], it achieves \(2.88\)x speedup with \(0.5\,\%\) higher accuracy for ResNet20 and \(2.2\)x speedup with \(2.8\,\%\) higher accuracy for ResNet8 [14]. Compared to a residual network with packed adders [42], it achieves \(1.94\)x speedup with \(1.4\,\%\) higher accuracy and a latency improvement of \(1.96\,\times\). Considering state-of-the-art frameworks, the comparison shows that the resource-efficient implementation of the residual layer achieves a Pareto-optimal implementation for accuracy, throughput, and latency. Since the boards are the same and all approaches utilize most resources of each FPGA, lower latency also means lower energy than the state of the art.
In summary, the proposed design architecture shows potential as an alternative to the commonly used residual network accelerators on platforms with limited resources. It delivers greater throughput and energy efficiency than the state-of-the-art without increasing hardware costs.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**Board** & **FPGA part** & **LUT** & **FF** & **BRAM** & **DSP** & **URAM** \\ \hline
Ultra96 & xczu3eg & 70560 & 141120 & 216 & 360 & 0 \\
Kria KV260 & xczu5eg & 117120 & 234240 & 144 & 1248 & 64 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Resources of the Ultra96 and Kria KV260 boards
Figure 14: Graph optimization for the residual blocks of Resnet8 and Resnet20 networks. The skip connection goes through the first convolution layer, conv0, into the second convolution layer, conv1, reducing buffering requirements with and without downsampling. |
2309.10991 | A systematic evaluation of Silicon-rich Nitride Electro-optic Modulator
design and tradeoffs | We present a study of linearized \chi^{(3)} based electro-optic modulation
beginning with an analysis of the nonlinear polarizability, and how to
linearize a modulator based on the quadratic third order DC-Kerr effect. Then
we perform a numerical study, designing a linearized \c{hi}^((3)) phase
modulator utilizing Silicon-rich Nitride where we show that a phase modulator
with a V_{\pi} L_{\pi} metric of 1 Vcm or a V_{\pi} L_{\pi} {\alpha} metric of
37VdB is achievable and a V_{\pi} L_{\pi} as low as 0.5Vcm in a push-pull Mach
Zehnder Interferometer. This numerical study argues that linearized modulation
exploiting the \chi^{(3)}, and \chi^{(2)} as applicable, is possible and
can allow for high-speed modulation using a CMOS compatible material platform. | Alex Friedman, Dmitrii Belogolovskii, Andrew Grieco, Yeshaiahu Fainman | 2023-09-20T01:17:22Z | http://arxiv.org/abs/2309.10991v1 | # A systematic evaluation of Silicon-rich Nitride Electro-optic Modulator design and tradeoffs
###### Abstract
We present a study of linearized \(\chi^{(3)}\) based electro-optic modulation beginning with an analysis of the nonlinear polarizability, and how to linearize a modulator based on the quadratic third order DC-Kerr effect. Then we perform a numerical study, designing a linearized \(\chi^{(3)}\) phase modulator utilizing Silicon-rich Nitride where we show that a phase modulator with a \(V_{\pi}L_{\pi}\) metric of 1 Vcm or a \(V_{\pi}L_{\pi}\alpha\) metric of 37VdB is achievable and a \(V_{\pi}L_{\pi}\) as low as 0.5Vcm in a push-pull Mach Zehnder Interferometer. This numerical study argues that linearized modulation exploiting the \(\chi^{(3)}\), and \(\chi^{(2)}\) as applicable, is possible and can allow for high-speed modulation using a CMOS compatible material platform.
## 1 Introduction
Optical interconnects form a major part of the disruptive impact of integrated optical systems in a variety of applications, and therefore have driven continued interest in finding the next generation of optical modulators. Historically, high-speed optical modulators have relied upon lithium niobate [1, 2], where, thanks to the lithium niobate on insulator platform, \(V_{\pi}L_{\pi}\) metrics on the order of 1.8Vcm have been achieved [3, 4], while in search of higher efficiencies other high-k dielectrics have been considered, such as barium titanate [5]. All of these materials have three primary challenges: (i) they are in general not CMOS compatible, making fabrication more expensive compared to a CMOS process that can be done by a foundry, (ii) they have low refractive indices compared to Silicon, requiring larger waveguide cross-sections to achieve reasonable mode-confinement, and (iii) they all have higher RF permittivity, leading to smaller electric fields in the same cladding at the same applied voltage. As a result, many optical interconnects still utilize carrier dispersion in silicon waveguides (\(V_{\pi}L_{\pi}\cong 3.5\,Vcm\), with an insertion loss of 9.2dB) [6], where excess propagation loss due to high dopant concentrations can limit performance, and so there remains interest in a CMOS compatible alternative to such techniques that can disrupt optical modulators in CMOS manufacturing. The issue remains, however, that most CMOS compatible materials either do not exhibit a \(\chi^{(2)}\), or exhibit a negligibly small one, as is the case for lower index silicon nitride films. In recent years, work in the literature, including past work by the authors [7], has demonstrated that not only can Silicon-rich Nitride films exhibit a non-zero \(\chi^{(2)}\), but that their refractive index [8, 9], thermo-optic coefficient [10], as well as \(\chi^{(2)}\) and \(\chi^{(3)}\) [11], are all enhanced with increasing silicon content, and that this is true even in the case of low temperature plasma-enhanced chemical vapor deposition-based SRN films [8, 12]. In this manuscript, we will undertake a systematic evaluation of the contributions from second and third order nonlinearities in arbitrary materials and make a case for a \(\chi^{(3)}\) based linearized electro-optic modulator utilizing a form of heterodyne gain. We will then conduct a numerical study of such a modulator, designing a phase-type modulator based on our past work's Silicon-rich Nitride film properties, achieving \(V_{\pi}L_{\pi}\) metrics from 2 to 3.5Vcm and \(V_{\pi}L_{\pi}\alpha\) metrics of 116VdB as a phase modulator,
or as low as 1Vcm in a push-pull Mach Zehnder Interferometer intensity modulator. We will then explore the integration of such a phase-shift element into a ring resonator cavity as an intensity modulator, demonstrating that, with proper cavity design and allowing for a degree of coupling mismatch, extinction ratios between 12dB and 20dB are achievable. Finally, we will conclude with some discussion of the inherent tradeoffs of such a design and argue that a linearized \(\chi^{(3)}\) based modulator can serve as a viable CMOS compatible alternative for materials lacking \(\chi^{(2)}\) nonlinearities, and as a pure phase modulator alternative to traditional plasma-dispersion approaches in silicon.
## 2 Nonlinear Polarizability Analysis
We begin with a brief analysis of second and third order nonlinear optical effects in nonlinear materials with emphasis on the presence of an applied external electric field. Here we include consideration for effects of higher order nonlinearities, beyond that of \(\chi^{(3)}\) alone, as research has shown that most CMOS compatible materials lack a \(\chi^{(2)}\) as a result of their crystal symmetry [13-16]. Although this is an abbreviated discussion on the topic, a useful place to start is through the induced polarization which can be written as follows [17]:
\[P(r,t)\ =\ \epsilon_{0}\left[\chi^{(1)}E(r,t)\ +\ \chi^{(2)}E^{2}(r,t)\ +\ \chi^{(3)}E^{3}(r,t)+\ldots\right]\ (1)\]
In equation 1, \(\chi^{(1)}\), \(\chi^{(2)}\) and \(\chi^{(3)}\) represent the first, second and third order susceptibilities and are treated as tensors of rank two, three, and four, respectively. This is a useful formalism because both modulation and wave mixing are understood as solutions to the nonlinear form of Maxwell's equation [17]. If we allow the total electric field \(E(r,t)\) to be a sum of an optical wave (\(E_{\omega}\)) and an applied electric field (electro-static \(E_{dc}\) and time-varying \(E_{ac}\)), we can derive expressions for the contributions to the nonlinear portion of the induced polarization, grouping and simplifying terms based on the effects they produce. Below in equation 2, as an example, we show the first three terms of the expansion of eq. 1, grouped and labeled with various nonlinear effects.
\[\bar{P}_{NL}(r,t)=\epsilon_{0}\left[\chi^{(2)}\left(E_{\omega}+E_{dc}+E_{ac}\right)^{2}+\chi^{(3)}\left(E_{\omega}+E_{dc}+E_{ac}\right)^{3}\right]\ (2)\]
Importantly, if we allow for a combination of electro-static \(E_{dc}\) and time-varying \(E_{ac}\) together with the optical \(E_{\omega}\), we can derive a term (last column in table 1) which allows for third order nonlinearity based linear modulation. This is especially interesting to explore, as unlike \(\chi^{(2)}\), which is dependent on crystal structure, \(\chi^{(3)}\) is present in all materials. In the remainder of this manuscript we will only consider non-resonant electronic nonlinearities, as this type of nonlinearity can respond at ultra-fast speeds and is thus of interest for high-speed modulation, as well as being useful for wave-mixing applications [18]. In the following we consider the case of an arbitrary material which has some set of \(\chi^{(2)}\) and \(\chi^{(3)}\) tensors, under the presence of an applied electric field which has both an electro-static (Edc) and a time-varying ac term (Eac). From this we derive an expression for each contribution to the change in refractive index, shown below in Table 2, organized by \(\chi^{(2)}\) vs \(\chi^{(3)}\) and by the combination of Edc and Eac fields involved.
Table 2 reveals a few interesting features of such an arbitrary material. Specifically, for such a material under the presence of an Edc and Eac field there will of course be a static "bias" change in refractive index represented by \(\Delta n_{dc}=\Delta n_{dc}^{\chi^{(2)}}+\Delta n_{dc}^{\chi^{(3)}}\); however, there will also be a modulated component of the change in refractive index represented by \(\Delta n_{ac}=\Delta n_{ac}^{\chi^{(2)}}+\Delta n_{ac}^{\chi^{(3)}}+\Delta n_{ac+dc}^{\chi^{(3)}}\). If we are interested in constructing a linearized modulator utilizing a given material's \(\chi^{(3)}\), as well as \(\chi^{(2)}\) if it has it, then the \(\Delta n_{ac}\) term is the important one to analyze. Additionally, one can notice that the \(\Delta n_{ac+dc}^{\chi^{(3)}}\) mixed term has an extra factor of 2 compared with the non-mixed terms, \(\Delta n_{dc}^{\chi^{(3)}}\) and \(\Delta n_{ac}^{\chi^{(3)}}\). From these formulas and Table 2, the only contributing term that is not linear in Eac is the \(\Delta n_{ac}^{\chi^{(3)}}\) term, which is the reason third order modulation is typically quadratically chirped, a problem for modulator design; however, the term \(\Delta n_{ac+dc}^{\chi^{(3)}}\) has two benefits. First, if we require Edc \(\gg\) Eac, a condition that will dictate the degree of linearity, then the \(\Delta n_{ac+dc}^{\chi^{(3)}}\) term is approximately linear in Eac, and additionally under such a condition it is naturally true that \(\Delta n_{ac}^{\chi^{(3)}}\ll\Delta n_{ac+dc}^{\chi^{(3)}}\), which allows us to ignore the quadratically chirped term.
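This linearization can be seen directly by expanding the Kerr contribution with the combined applied field (written here for scalar fields for brevity, consistent with the \(\chi^{(3)}\) entries of Table 2):
\[\Delta n^{\chi^{(3)}}\propto\left(E_{dc}+E_{ac}\right)^{2}=E_{dc}^{2}+2E_{dc}E_{ac}+E_{ac}^{2},\]
so that when \(E_{ac}\ll E_{dc}\) the time-varying part of the index change is dominated by the cross term, which is linear in \(E_{ac}\).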
\begin{table}
\begin{tabular}{|c c c c|} \hline
 & \(\mathbf{DC}\) & \(\mathbf{AC}\) & \(\mathbf{AC+DC}\) \\ \hline
\(\chi^{(2)}\) & \(\Delta n_{dc}^{\chi^{(2)}}=\sum_{jk}\frac{\chi_{ljk}^{(2)}}{n_{k}^{eq}}E_{j}^{dc}\) & \(\Delta n_{ac}^{\chi^{(2)}}=\sum_{jk}\frac{\chi_{ljk}^{(2)}}{n_{k}^{eq}}E_{j}^{ac}\) & Not Applicable \\ \hline
\(\chi^{(3)}\) & \(\Delta n_{dc}^{\chi^{(3)}}=\sum_{jk}\frac{3\chi_{ljk}^{(3)}}{2n_{k}^{eq}}E_{j}^{dc^{2}}\) & \(\Delta n_{ac}^{\chi^{(3)}}=\sum_{jk}\frac{3\chi_{ljk}^{(3)}}{2n_{k}^{eq}}E_{j}^{ac^{2}}\) & \(\Delta n_{ac+dc}^{\chi^{(3)}}=\sum_{jk}\frac{3\chi_{ljk}^{(3)}}{n_{k}^{eq}}E_{j}^{ac}E_{j}^{dc}\) \\ \hline \end{tabular}
\end{table}
Table 2: Second and Third order contributions to the change in refractive index
Secondly, the \(\Delta n_{ac+dc}^{\chi^{(3)}}\) term exhibits a natural form of what can be thought of as a heterodyne gain. This term has a weak high-frequency field (Eac) and a strong low-frequency field (Edc), the product of which produces an effect at the high-frequency field's frequency that is enhanced by the strength of the low-frequency field (Edc). While this will require the Edc field strength to be high, it will allow the Eac field strength to be proportionally lower; this can be a solution of interest in the CMOS case, as tens of volts DC can be acceptable whereas the AC voltage is the one that needs to be as low as possible, even sub 1V in some cases. Figure 1 shows an example of how, by controlling the ratio of Eac to Edc, the quadratic chirping in the resulting change in phase can be removed.
It is important to discuss the trade-off between \(\chi^{(2)}\) based Pockels modulation and this \(\chi^{(3)}\) based DC-Induced Kerr modulation. As has been derived previously in the literature, there is a general rule that the orders of magnitude of \(\chi^{(2)}\) and \(\chi^{(3)}\) should be expected to be \(\frac{\chi^{(1)}}{E_{at}}\) and \(\frac{\chi^{(1)}}{E_{at}^{2}}\) respectively [17], where \(E_{at}\) is the atomic electric field strength. This has the important implication that we should expect effects based on \(\chi^{(3)}\) to be weaker than those of \(\chi^{(2)}\), because the orders of magnitude of their coefficients in general differ by the atomic electric field strength. Thus, in materials whose \(\chi^{(2)}\) originates from a crystal lattice dipole, an effective \(\chi^{(2)}\) induced by the presence of an applied dc electric field could only approach the expected inherent \(\chi^{(2)}\) when the applied electric field approaches \(E_{at}\). Such a condition is of course not possible, as the breakdown field of a given material will in general be much lower than \(E_{at}\), meaning that we will reach the maximum applied electric field strength before the combination \(\chi^{(3)}E_{applied}\) is expected to be of the order of the inherent \(\chi^{(2)}\). While this may indeed be a limitation, in realistic materials, especially CMOS compatible materials, the \(\chi^{(2)}\) tensor is often zero due to crystal symmetry; in such cases this technique can still be useful, as all materials have a non-zero \(\chi^{(3)}\) tensor. In silicon-rich nitride specifically the story becomes more interesting, as the exact origin of the non-zero \(\chi^{(2)}\) in silicon-rich nitride is still not well understood. As a result, this general rule may not hold true for such a material. For example, if we define \(\chi^{(2)}_{eff}=3\chi^{(3)}E_{dc}\) and consider our previous results in [7], then we find that at Edc = 1x10\({}^{8}\) V/m we have a \(\chi^{(2)}_{eff}\) of 180 pm/V compared to the measured 14 pm/V inherent \(\chi^{(2)}\), which indicates that a value of \(\chi^{(2)}_{eff}>\chi^{(2)}\) is possible. For this reason, we will explore the design of a linearized modulator based on the DC-Induced Kerr effect in silicon-rich nitride. It has been shown in literature, in past work by the author [7] as well as others [11], that PECVD silicon-rich nitride can exhibit refractive indices as high as 3.1 and has a comparably enhanced \(\chi^{(3)}\). Additionally, silicon-rich nitride films are expected to have a high breakdown field, as silicon nitride films can exhibit very high breakdown fields, as well as low optical loss over a
Figure 1: An example for a 10Ghz modulated Eac wave as a function of the fraction of Eac/Edc for a fixed Edc of 1.22x10\({}^{8}\) V/m. In the above plot the black dashed line represents the idealized case.
broader spectral range than silicon [9, 11, 19]. As the example calculation of \(\chi^{(2)}_{eff}\) shows, these features can make silicon-rich nitride a very attractive candidate.
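The \(\chi^{(2)}_{eff}\) figure quoted above follows directly from the mixed term in Table 2; a one-line check, under the assumed \(\chi^{(3)}\) of 6x10\({}^{-19}\) m\({}^{2}\)/V\({}^{2}\), is:

```python
chi3 = 6e-19                     # assumed chi^(3) [m^2/V^2]
E_dc = 1e8                       # applied DC field [V/m]
chi2_eff = 3 * chi3 * E_dc       # DC-induced effective chi^(2) [m/V]
print(chi2_eff * 1e12, "pm/V")   # -> 180 pm/V, vs. the ~14 pm/V measured inherent chi^(2)
```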
## 3 Phase Modulator Design
We begin by analyzing the phase shifter element on its own, both as a fundamental building block and as a phase-type modulator in its own right. Figure 2(a) shows a schematic cross-section of the proposed device structure. In the structure, a Silicon-rich Nitride waveguide is located on a SiO\({}_{2}\) buried oxide layer. We then create a conformal thin dielectric shield layer (e.g., deposited using ALD), followed by construction of gold (Au) electrodes. Finally, the structure has a top cladding layer of SiO\({}_{2}\). This structure can be thought of as a generic layout that also covers a device which lacks such a shield layer, the case where the shield dielectric is SiO\({}_{2}\), and a case where the left and right electrodes could be placed directly onto the bottom SiO\({}_{2}\) layer with the center electrode formed after depositing a desired thickness of SiO\({}_{2}\). We consider the generic design case as presented to be a useful layout for discussing the important tradeoffs of such designs. Firstly, the objective is to maximize the strength of the applied electric field within the waveguide core while minimizing the induced optical loss, a trade-off which will dictate the optimal device performance regime. One key parameter that dictates the strength of the applied field is the ratio of the relative permittivity of the shield layer to that of the Silicon-rich Nitride waveguide layer.
Therefore, if we utilize a thin cladding layer of silicon nitride, which is known to have an RF permittivity around 7.2 [20], to more closely match the RF permittivity of the interface to that of the SRN waveguide core which we have previously measured to be 9.44 [7], we can increase the penetration of the applied electric field into the waveguide. However, the trade-off is that utilizing a silicon nitride shield layer will reduce the refractive index contrast of the waveguide core, reducing mode confinement and thus increasing induced optical loss.
Figure 2: (a) Schematic cross-section of the proposed device structure. A Silicon-rich Nitride waveguide sits on a SiO2 buried oxide layer. On top of the Silicon-rich Nitride waveguide is a thin dielectric shield layer onto which Ground-Signal-Ground gold electrodes are formed. Finally, the top of the structure is top clad with SiO\({}_{2}\). (b) Image showing the TM optical mode for a 350nm thick, 450nm wide SRN waveguide along with field lines for an applied electric field from the GSG electrodes.
Figure 3: (a) Effective Index of the TE-like and TM-like modes vs SiO2 'shield' thickness. (b) Overlap Integral of the TE-like and TM-like modes with the SRN waveguide core versus SiO2 'shield' thickness. (c) Effective Index of the TE-like and TM-like modes vs spacing from the electrode to waveguide sidewall. (d) Overlap Integral of the TE-like and TM-like modes with the SRN waveguide core versus spacing from the electrode to waveguide sidewall. (e) Propagation Loss vs electrode to sidewall spacing for the TE-like and TM-like modes.
In general the usage of an intermediate dielectric shield layer between the waveguide core and the metal electrodes is a necessity due to optical losses from metals; however, once introduced, the mismatch in RF permittivity between the intermediate dielectric shield layer and the waveguide core will "shield" the higher RF permittivity waveguide core from the applied electric fields, reducing the strength of the field within the nonlinear medium. The solution then is to utilize a material which matches the RF permittivity of the silicon-rich nitride core; however, when considering practical materials, the RF permittivity and the refractive index often increase in tandem. For example, silicon nitride [20], at \(\sim\)7.2, has an RF permittivity closer to that of the silicon-rich nitride layer, but it has a higher refractive index at 1.8 to 1.95 compared to the 1.45 of SiO\({}_{2}\), which has an RF permittivity in the range of 3.75 to 4.45 [21, 22]. The result of this is that in a realistic CMOS compatible material stack with limited choices for dielectric shield layers there is a tradeoff between the strength of the applied electric field and the loss of the optical mode from the lower modal confinement of a higher refractive index cladding layer. In the design of the phase shift element in this study we will use SiO\({}_{2}\) as the shield layer in order to mitigate excess loss from modal deconfinement. This means that, since the shield layer is the same material as the cladding, it serves as a physical spacer rather than providing any additional RF permittivity matching. Our proposed structure will be based on the expected \(\chi^{(2)}\) and \(\chi^{(3)}\) values for silicon-rich nitride films of similar refractive index in literature, such as 14 pm/V and 6x10\({}^{\text{-19}}\) m\({}^{2}\)/V\({}^{2}\) from our past work [7], with a good review in [23]; we will utilize the measured RF permittivity from that work as well [7]. Figure 2(b) shows a COMSOL model of the TM optical mode for a 350nm thick, 450nm wide Silicon-rich Nitride waveguide along with the applied electric field lines from the ground-signal-ground (GSG) electrodes. Utilizing COMSOL and Lumerical, we simulate the electrical and optical fields of the structure as a function of both the electrode to waveguide sidewall spacing and the thickness of the shield layer, optimize the width and height of the waveguide to minimize excess loss, at 500nm thick and 350nm wide, and assume a SiO\({}_{2}\) shield layer. As in this case the shield layer is assumed to be the same as the cladding, it serves simply as a physical spacer rather than as both a physical spacer and permittivity matching. For these simulations, as outlined in section 2, we will consider up to the case where the mean value of \(E_{dc}\) in the core is as large as \(\sim 1.22\times 10^{8}\) V/m at peak value, taking results from COMSOL simulations.
Figure 3(a-b) shows the effective index and overlap integral of the TE-like and TM-like modes as a function of shield thickness. In the case where the shield is simply the same material as the cladding, it serves as a physical spacer predominately for the central electrode. As a result, at small shield thickness the TM-like optical mode gets pulled into the thin gap between the center electrode and the waveguide core, increasing the effective index but decreasing the overlap with the SRN layer. In figure 3(c-d) a similar effect occurs for the TE-like mode when the electrodes are brought closer to the sidewall of the waveguide, with the enhancement of the effective index, and corresponding decrease in overlap integral, being smaller because the electrode-to-sidewall gaps are larger than the shield layer thicknesses relevant to the TM-like mode. The tradeoff of course is loss, which is shown in Figure 3(e): as the electrode to sidewall spacing is reduced the loss increases. Naturally, the TE-like mode sees a faster increase in loss from bringing the electrodes closer to the sidewall, whereas the TM-like case sees a faster increase as the shield thickness decreases. It is important to remember the tensorial nature of \(\chi^{(2)}\) and \(\chi^{(3)}\) in light of these facts, specifically the different tensor components utilized depending on the orientation of the optical polarization and applied electric field. As discussed in section 2, the fundamental relation between second and third order nonlinearities, and the presence of a non-zero \(\chi^{(2)}\) in SRN, mean that all else being equal a TM polarized optical mode and vertical applied field will produce the largest change in refractive index, as it utilizes both the largest \(\chi^{(2)}\) and \(\chi^{(3)}\) tensor components. As such, in this manuscript we will focus on the case utilizing the \(\chi^{(3)}_{3333}\) tensor component, which corresponds to a TM-like optical mode and a vertical applied field; however, a variety of other configurations could be explored in the future, such as combinations which use non-collinear tensor components, for example an in-plane applied field and a TM-like optical mode. By modeling the applied electric field we predicted \(\Delta n_{ac}\), shown in figure 4(a), for electrode to sidewall spacings from 200nm to 600nm and shield thicknesses from 100nm to 200nm. The trade-off here is clear: at a fixed voltage, smaller electrode to sidewall spacings result in larger applied field strengths and thus larger changes in refractive index. Similarly, decreasing the shield layer thickness increases the applied field strength in the waveguide core at a fixed voltage and thus increases the change in refractive index. From the change in refractive index curves the minimum length required for a \(\pi\) phase shift (\(L_{\pi}\)) is determined as a function of electrode to sidewall spacing and shield thickness in figure 4(b). As the shield thickness increases, the strength of the applied electric field in the waveguide core is reduced, thus requiring longer path lengths to maintain a \(\pi\) phase shift. Using the lengths from figure 4(b) and the propagation loss values discussed in figure 3(e), we generated predicted values for the insertion loss in figure 4(c), reaching values as low as 3.1dB. Figure 4(d) then shows the predicted \(V_{\pi}L_{\pi}\) for each corresponding case under the condition that \(\frac{E_{dc}}{E_{ac}}\cong 10\). From this it is clear that we can achieve competitive \(V_{\pi}L_{\pi}\) metrics, from 1 to 1.8Vcm.
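As a rough sanity check on the scale of these figures of merit, the sketch below estimates \(\Delta n_{ac}\) and \(L_{\pi}\) from the DC-induced Kerr term alone, using the same assumed material values as above and Eac = Edc/10; it ignores the modal overlap factor, so it is an idealized order-of-magnitude estimate rather than the simulated design values of Figure 4.

```python
import math

# Assumption-based estimate of L_pi for the DC-induced Kerr phase shifter.
chi3 = 6e-19          # assumed chi^(3) [m^2/V^2]
n_eq = 3.1            # assumed SRN core index
E_dc = 1.22e8         # mean DC field in the core [V/m]
E_ac = E_dc / 10      # AC field chosen so that E_dc >> E_ac
wavelength = 1.55e-6  # operating wavelength [m]

dn_ac = (3 * chi3 / n_eq) * E_ac * E_dc   # modulated index change (mixed term only)
L_pi = wavelength / (2 * dn_ac)           # length required for a pi phase shift
print(f"dn_ac ~ {dn_ac:.2e}, L_pi ~ {L_pi * 1e3:.2f} mm")   # roughly 1 mm
```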
However, this is only a part of the story as a comprehensive figure of merit should include the loss, so we define an additional figure of merit to be \(V_{\pi}L_{\pi}\alpha\) which results in a unit of VdB. Figure 4(e) shows such a figure of merit including the loss in the analysis. The way to interpret such a figure of merit is to consider that at a \(V_{\pi}L_{\pi}\alpha\) of 37 VdB a 20Vpp AC voltage would have an insertion loss of 1.85dB. It is through a combination of these two figures of merit that the design space, and tradeoffs between voltage, length, and insertion loss can be understood. Based on
Figure 4: (a) A plot of the change in refractive index under the presence of a combination of AC and DC fields. In such a situation, if we require that Edc >> Eac then we can write this as approximately \(\Delta n_{ac}=\Delta n_{ac}^{\chi^{(2)}}+\Delta n_{ac+dc}^{\chi^{(3)}}\). (b) A plot of the required length for a \(\pi\) phase shift based on \(\Delta n_{ac}\) as a function of electrode to sidewall spacing and shield thickness. (c) A plot of lumped loss for a phase shifter with a length of \(L_{\pi}\) as a function of electrode to sidewall spacing and shield thickness. (d) From the change in refractive index we can determine the metric \(V_{\pi}L_{\pi}\). (e) While the metric \(V_{\pi}L_{\pi}\) is a commonly used metric for such devices, a more comprehensive metric is \(V_{\pi}L_{\pi}\alpha\), which includes loss and is thus in units of V-dB instead of V-cm.
these results then the following performance can be achieved for such a Silicon-rich Nitride Modulator.
Table 3 shows the expected performance for our considered design. A clear trend in this design is a tradeoff between voltage and lumped loss: for example, at a \(V_{\pi}L_{\pi}\) of 2Vcm, a 6.2Vpp ac voltage gives 10dB of insertion loss, while at 20Vpp we get 3.1dB. On the other hand, we can achieve the minimum 1Vcm \(V_{\pi}L_{\pi}\) metric at an insertion loss of 9.4dB.
## 4 Intensity Modulator Design
In the previous section we discussed the design and performance of a phase modulator, which provides important insight into the fundamental performance of the underlying device. We will now discuss how such a phase shift element can be used as an intensity modulator by embedding it into a ring resonator cavity or into a Mach Zehnder interferometer (MZI) configuration. Both ring-resonator and Mach Zehnder configurations have their own advantages and drawbacks, which we will discuss, but they can be thought of as broadly representing the two categories of intensity modulators, resonant and non-resonant respectively. As has been well reported in literature [24, 25], a ring resonator can be viewed as a form of resonant filter: when the resonant condition, \(\phi_{roundtrip}=m2\pi\), is satisfied, light is lost from the transmission port, while off-resonant light is allowed to pass. It is this condition that allows a phase modulator embedded into the cavity of a ring resonator to be realized as an intensity modulator: the phase introduced by the phase modulator adds to the nominal roundtrip phase and changes the wavelength at which the nominal roundtrip phase plus the phase change from the modulator results in an integer multiple of \(2\pi\). Equation 3 below shows the well-known expression for the transmission from an all-pass configuration ring resonator, where \(r\) is the self-coupling coefficient of the bus, \(k\) is the cross-coupling coefficient, \(a\) is the single-pass amplitude transmission, and \(\phi\) is the single-pass phase shift [24]:
\[T_{p}=\frac{a^{2}-2ra\cos(\phi)+r^{2}}{1-2ar\cos(\phi)+(ra)^{2}}\qquad(3)\]
Therefore, in the idealized case the critical coupling condition can be shown to occur when the coupled power is equal to the power lost in the ring, or \(r=a\). In figure 5 below we consider our phase-shift element from section 3 embedded into the cavity of a silicon-rich nitride 45\(\mu\)m bend radius ring resonator. Additionally, we consider the case where there is an 'unintended' mismatch of 0.05 between the amplitude transmission coefficient and the self-coupling coefficient, as a typical value to account for fabrication and design variation from past experience, and where the phase modulator comprises 90% of the cavity length, to accommodate the coupling region.
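A minimal sketch of how Eq. 3 can be used to estimate the extinction ratio of such a ring modulator is given below; the coupling values and peak phase shift are illustrative assumptions chosen to reflect the \(\Delta_{r-a}=0.05\) mismatch, not the simulated values behind Figure 5.

```python
import math

def ring_transmission(phi, r, a):
    """All-pass ring transmission from Eq. 3 (r: self-coupling, a: single-pass amplitude)."""
    return (a**2 - 2*r*a*math.cos(phi) + r**2) / (1 - 2*a*r*math.cos(phi) + (r*a)**2)

# Illustrative, assumed parameters: near-critical coupling with a 0.05 mismatch,
# and an assumed peak single-pass phase shift contributed by the embedded modulator.
r, a = 0.90, 0.85
dphi_max = 0.9  # [rad]

T_on_res = ring_transmission(0.0, r, a)        # nominal on-resonance transmission
T_shifted = ring_transmission(dphi_max, r, a)  # transmission after the peak phase shift
print(f"ER ~ {10 * math.log10(T_shifted / T_on_res):.1f} dB")
```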
\begin{table}
\begin{tabular}{|l l l l|} \hline
**Vdc [V]** & **Vac [Vpp]** & **L [mm]** & **IL [dB]** \\ \hline \(\sim\)200 & \(\sim\)20 & 1 & 3.1 \\ \hline \(\sim\)200 & \(\sim\)6.2 & 3.2 & 10 \\ \hline \end{tabular}
\end{table}
Table 3: Example possible design parameters
Figure 5(a) shows the transmission spectra of the nominal device \(T(\phi)\) in black along with the expected shifted transmission spectra \(T(\phi+\Delta\phi_{max})\) due to \(\Delta n_{ac}\). Figure 5(b) and (c) then show the expected extinction ratio (ER), here defined as the ratio of the transmission from the nominal spectra to the transmission under the peak phase change, as a function of electrode to sidewall spacing and shield thickness. Two general trends can be seen in figures 5(b) and (c): first, increasing electrode to sidewall spacing decreases the ER, and second, increasing shield thickness decreases the ER. As long as the ring resonator is in the critical coupling regime, increasing either the electrode to sidewall spacing or the shield thickness reduces the ER by decreasing the strength of the applied field within the waveguide core. On the other hand, as was discussed in section 3, reducing the electrode to sidewall spacing or the shield thickness increases the loss, which in the case of a ring resonator limits the maximum achievable quality factor. In turn, a commonly used definition of the quality factor is the ratio of stored energy in the cavity to energy dissipated per cycle, and it is therefore a measurement of the rate of decay of energy in the cavity [26]. These two factors form a tradeoff which is visible in Figure 6(a) and (b).
Figure 6(a) above shows the expected quality factor of a ring resonator with a \(\Delta_{r-a}\) mismatch of 0.05 vs electrode to sidewall spacing for shield thicknesses from 100nm to 200nm. As mentioned above, the reduction in quality factor with increasing loss (decreased electrode to sidewall spacing or decreased shield thickness) can clearly be seen. As ring resonators are resonant cavities, the enhanced intensity contrast is a tradeoff with the ring-down time of the cavity, which limits response times. Using the quality factor vs electrode to sidewall spacing and shield thickness, we present the photon-lifetime limited bandwidth, \(\frac{1}{2\pi\tau_{cavity}}\), where \(\tau_{cavity}\) is the photon lifetime of the cavity, related to the quality factor as \(Q=\frac{\omega_{0}\tau_{cavity}}{2}\)[26, 27]. From figure 6(b) we can see that increasing quality factors result in lower photon lifetime limited bandwidths, reaching as low as \(\sim\)10 GHz for the largest quality factors. Additionally, there is a slight enhancement in photon lifetime limited bandwidth at the smallest electrode to sidewall spacings. Based on these results, a photon lifetime limit of 60 GHz requires a quality factor of 2000, which is naturally achieved at a shield thickness of 100nm. Therefore, integrating our phase modulator into a ring resonator cavity will allow ERs of 10dB to 20dB and photon lifetime limited bandwidths of 60 GHz for Q factors of 2000. The non-resonant alternative
Figure 6: (a) Quality Factor versus electrode to sidewall spacing for shield thickness 100nm to 200nm (b) Photon lifetime limited bandwidth vs electrode to sidewall spacing for shield thickness 100nm to 200nm.
Figure 5: (a) Simulated Transmission spectra and maximum shift in transmission spectra for a 45um bend radius ring modulator assuming a \(\Delta_{r-a}=0.05\) mismatch between the single pass amplitude transmission coefficient and the self-coupling coefficient of the bus. (b) Extinction Ratio as a function of electrode to sidewall spacing for shield thickness 100nm to 200nm. (c) Extinction Ratio as a function of shield thickness for electrode-sidewall spacings from 200nm to 600nm.
we will discuss here is the Mach Zehnder Interferometer configuration. Unlike the ring resonator, the MZI configuration being non-resonant does not have a photon lifetime limit to its bandwidth instead being limited by the capacitive load of the electrodes. Here we consider an unbalanced MZI, with a mismatch length of 200\(\mu\)m, and the phase modulator of length \(L_{\pi}\) from section 3 in both arms. Figure 7(a) shows the simulated spectral response of such an unbalanced MZI in the nominal case \(T(\phi)\).
In the case of the MZI by controlling the relative phases introduced into both arms we can form an intensity modulator. In this case we will consider the push-pull configuration where the electrical driving signal of the two arms is \(\pi\) out of phase with each other. The resulting \(V_{\pi}L_{\pi}\) and \(V_{\pi}L_{\pi}\alpha\) metrics can be seen in figures 7(b) and (c) respectively. The analysis here is a fairly straightforward extension of the phase modulator, as figure 7(a) shows a full \(\pi\) phase shift in the spectra is achievable and the corresponding figures of merit clarify that the in addition to converting the phase modulator into an intensity modulator, we have cut the \(V_{\pi}L_{\pi}\) in half, achieving values of 0.5Vcm to 0.9Vcm as a result of utilizing push pull. The primary trade-off in this design is one of compactness, unlike the ring resonator configuration which can be made as small as twice the bend radius of choice, the MZI here requires electrode lengths of \(L_{\pi}\) which are 1mm, or longer, in this case. The trade-off here then is you cut the \(V_{\pi}L_{\pi}\) metric down as low as 0.5Vcm in the push-pull MZI; however, the longer electrode's relative to the ring resonator as well as being driven in push-pull, which is likely to lead to a parallel set of capacitances, relative to the phase modulator means that it will have the largest capacitance of the three configurations. If we compare such results to literature, such as that of lithium niobate on insulator devices [3], we find that SRN Mach Zehnder can achieve a \(V_{\pi}L_{\pi}\) between 0.5 to 0.9Vcm while lithium niobate on insulator devices achieves values in the range 1.8Vcm; however, if we compare to only CMOS compatible techniques such as a depletion type silicon modulator [6] we find values in the range of 3.5Vcm. In table 4 below we have summarized a comparison of various modulator platforms from literature.
\begin{table}
\begin{tabular}{c c c c c c c} & \(V_{\pi}L_{\pi}\) **[Vcm]** & **IL [dB]** & **RF Permittivity** & **Bandwidth** & **CMOS** & **Reference** \\ \hline _This work_ & 1 & 1.55 & 9.45 & 20 to 60 GHz & _yes_ & - \\ \hline Depletion type Silicon & 3.5 & 9.2 & N/A & \(\sim\)20 GHz & yes & [6] \\ \hline Lithium Niobate on Insulator & 1.8 & 2 & 28 & \(\sim\)15 GHz & no & [3] \\ \end{tabular}
\end{table}
Table 4: A comparison of Modulator Performance
Figure 7: (a) Simulated Transmission spectra and maximum shift in transmission spectra for a MZI with an imbalance length of 200\(\mu\)m along with electrodes of the \(L_{\pi}\) length from section 3 in both arms driven in push-pull. (b) A plot of the \(V_{\pi}L_{\pi}\) metric versus electrode to sidewall spacing and shield thickness. (c) A plot of the \(V_{\pi}L_{\pi}\alpha\) metric versus electrode to sidewall spacing and shield thickness.
These techniques range from lithium niobate on insulator to hybrid silicon on barium titanate (BTO) thin-film approaches. Of these various approaches, the silicon on BTO thin-film achieves the clear best \(V_{\pi}L_{\pi}\) metric; however, being a silicon on BTO thin-film device, it requires post-processing and is not in general CMOS compatible. Additionally, the large nonlinearities that allow for low voltages are in general smaller when used for wave-mixing. Of the approaches in the table, our result and the depletion type silicon modulator are the only two that can be clearly defined as CMOS compatible material stacks. Our numerical study shows that SRN DC-Induced Kerr modulators can achieve competitive \(V_{\pi}L_{\pi}\) metrics, and, being based on a low temperature PECVD process, they can bring new capabilities to CMOS compatible platforms.
## 5 Discussion and Conclusion
Traditionally, electro-optic modulators have relied upon second order nonlinearities, utilizing the Pockels effect; however, materials that exhibit non-zero \(\chi^{(2)}\) tensors are generally not CMOS compatible. Meanwhile, \(\chi^{(3)}\) based modulation has typically been seen as unattractive due to the much weaker nonlinearity exhibited by most materials as well as the quadratic nature of the effect. In this manuscript we have undertaken a systematic evaluation of electro-optic nonlinearities in a generic material and then made the case that the unique combination of \(\chi^{(2)}\) and \(\chi^{(3)}\) exhibited by SRN makes it a good candidate for capacitive \(\chi^{(3)}\) based electro-optic modulation. We have shown that SRN can achieve \(V_{\pi}L_{\pi}\) metrics as low as 1Vcm in a MZI configuration and extinction ratios as high as 18dB in a ring resonator configuration, all utilizing a CMOS compatible material platform. Additionally, we addressed the traditional drawback of quadratic chirping in \(\chi^{(3)}\) based modulators by showing that proper choice of the Eac/Edc ratio can not only linearize the change in phase but can also be seen as a heterodyne gain approach, with the mixing of the weak high frequency Eac term and the strong low frequency Edc term. While for some applications utilizing a non-CMOS device such as a lithium niobate on insulator modulator can be acceptable, there is a need for CMOS compatible alternatives to such devices. As it stands now, if a designer is limited to CMOS processing due to a desire to utilize cost effective tapeouts, then they are primarily limited to carrier dispersion-based modulators in silicon. In this manuscript we have argued that adoption of a \(\chi^{(3)}\) based modulator can provide additional utility to such CMOS platforms and that silicon-rich nitride is a good candidate for such adoption. PECVD based silicon nitride films are already widely utilized in CMOS tapeouts, and as has been shown in the past by the authors [7, 8], with proper tuning of gas flow ratios a high refractive index PECVD silicon-rich nitride film can be achieved under otherwise the same processing conditions. In this manuscript we have shown that the unique advantages of high confinement guiding, low RF permittivity, high \(\chi^{(3)}\), and low loss in such a platform make it an attractive candidate for integration into standard CMOS process flows. Further exploration of linearized \(\chi^{(3)}\) based modulators in a variety of material platforms can provide new and unique capabilities and deserves further investigation.
**Funding.** This work was supported by the Defense Advanced Research Projects Agency (DARPA) DSO NLM and NAC programs, the Office of Naval Research (ONR), the National Science Foundation (NSF) grants DMR-1707641, CBET-1704085, NSF ECCS-180789, NSF ECCS-190184, NSF ECCS-2023730, the Army Research Office (ARO), the San Diego Nanotechnology Infrastructure (SDN) supported by the NSF National Nanotechnology Coordinated Infrastructure (grants ECCS-2025752, ECCS-1542148), the Quantum Materials for Energy Efficient Neuromorphic Computing-an Energy Frontier Research Center funded by the U.S. Department of Energy (DOE) Office of Science, Basic Energy Sciences under award # DE-SC0019273; Advanced Research Projects Agency (LEED: A Lightwave Energy-Efficient Datacenter), and the Cymer Corporation.
**Acknowledgments.** We thank all of UCSD's nano3 cleanroom staff and Dr Maribel Montero for their assistance with sample fabrication.
**Disclosures.** The authors declare no conflicts of interest.
|
2309.16338 | Anti-Matthew FL: Bridging the Performance Gap in Federated Learning to
Counteract the Matthew Effect | Federated learning (FL) stands as a paradigmatic approach that facilitates
model training across heterogeneous and diverse datasets originating from
various data providers. However, conventional FLs fall short of achieving
consistent performance, potentially leading to performance degradation for
clients who are disadvantaged in data resources. Influenced by the Matthew
effect, deploying a performance-imbalanced global model in applications further
impedes the generation of high-quality data from disadvantaged clients,
exacerbating the disparities in data resources among clients. In this work, we
propose anti-Matthew fairness for the global model at the client level,
requiring equal accuracy and equal decision bias across clients. To balance the
trade-off between achieving anti-Matthew fairness and performance optimality,
we formalize the anti-Matthew effect federated learning (anti-Matthew FL) as a
multi-constrained multi-objectives optimization (MCMOO) problem and propose a
three-stage multi-gradient descent algorithm to obtain the Pareto optimality.
We theoretically analyze the convergence and time complexity of our proposed
algorithms. Additionally, through extensive experimentation, we demonstrate
that our proposed anti-Matthew FL outperforms other state-of-the-art FL
algorithms in achieving a high-performance global model while effectively
bridging performance gaps among clients. We hope this work provides valuable
insights into the manifestation of the Matthew effect in FL and other
decentralized learning scenarios and can contribute to designing fairer
learning mechanisms, ultimately fostering societal welfare. | Jiashi Gao, Xin Yao, Xuetao Wei | 2023-09-28T10:51:12Z | http://arxiv.org/abs/2309.16338v2 | # EFFL: Egalitarian Fairness in Federated Learning for Mitigating Matthew Effect
###### Abstract
Recent advances in federated learning (FL) enable collaborative training of machine learning (ML) models from large-scale and widely dispersed clients while protecting their privacy. However, when different clients' datasets are heterogeneous, traditional FL mechanisms produce a global model that does not adequately represent the poorer clients with limited data resources, resulting in lower accuracy and higher bias on their local data. According to the **Matthew effect**, which describes how the advantaged gain more advantage and the disadvantaged lose more over time, deploying such a global model in client applications may worsen the resource disparity among the clients and harm the principles of social welfare and fairness. To mitigate the Matthew effect, we propose **Egalitarian Fairness Federated Learning** (EFFL), where _egalitarian fairness_ refers to the global model learned from FL has: \(\blacklozenge\) equal accuracy among clients; \(\blacklozenge\) equal decision bias among clients. Besides achieving egalitarian fairness among the clients, EFFL also aims for performance optimality, minimizing the empirical risk loss and the bias for each client; both are essential for any ML model training, whether centralized or decentralized. We formulate EFFL as a constrained multi-constrained multi-objectives optimization (MCMOO) problem, with the decision bias and egalitarian fairness as constraints and the minimization of the empirical risk losses on all clients as multiple objectives to be optimized. We propose a gradient-based three-stage algorithm to obtain the Pareto optimal solutions within the constraint space. Extensive experiments demonstrate that EFFL outperforms other state-of-the-art FL algorithms in achieving a high-performance global model with enhanced egalitarian fairness among all clients.
## 1 Introduction
Federated learning (FL) (McMahan et al., 2017) has emerged as a significant learning paradigm in which clients utilize their local data to train a global model collaboratively without sharing data. FL has attracted wide attention from various fields, especially in domains where data privacy and security are critical, such as healthcare, finance, and social networks. However, when the data distribution among clients is heterogeneous, the global model may perform inconsistently across different clients. This raises a **client-level** fairness issue: how to define and achieve a fair model performance for each client. From the perspective of commercial or profit-driven clients, **contribution fairness**
(Tay et al., 2022; Liu et al., 2022) is attractive, which requires that the model performance on each client is proportional to their data resource contribution to the FL model.
In the real world, clients may have unequal data resources due to historical or unavoidable factors. They deserve fair treatment based on social welfare and equality principles. However, _contribution fairness_ worsens the resource plight of poorer clients, creating the **Matthew effect** (Merton, 1968). As shown in Fig. 1 (a), a lower-performance model may impair the data generation capabilities of poorer clients, exacerbating the disparity with richer clients over time and undermining social welfare and equality.
To mitigate the Matthew effect, we define **egalitarian fairness** in FL settings, which requires that the global model exhibits equal performance across clients. We consider two aspects of equal performance: \(\blacklozenge\) Accuracy: accuracy reflects how well the global model fits the local data of clients; thus, equal accuracy performance can enhance the poorer clients' decision quality; \(\blacklozenge\) Decision bias: decision bias reflects how fair the global model's decisions are across different protected groups such as gender, race, or religion within the client; thus, equal decision bias performance can enhance the poorer clients' reputation and decision credibility.
**Attaining egalitarian fairness in FL settings presents significant challenges. \(\blacklozenge\) First, egalitarian fairness requires equal performance across all clients, which creates a trade-off with the general objective of maximizing performance during model training. This is a crucial issue as participants may not want to compromise the overall performance for the sake of equality. This trade-off is particularly pronounced when dealing with heterogeneous clients, where the global model displays varied performance; \(\blacklozenge\) The second challenge is that heterogeneous local datasets can lead to conflicting gradient descent directions, posing the trade-off that improving the performance for one client could potentially degrade the performance of others; \(\blacklozenge\) Furthermore, as we consider egalitarian fairness in decision bias, it is noted that there is a potential trade-off between accuracy and decision bias (Wang et al., 2021).**
Several works have focused on client-level fair model training in FL. Mohri et al. (2019) proposed a min-max-based approach that prioritized the model training optimization towards the worst-performing client. However, they failed to achieve performance equality, especially when there were more than two clients. Cui et al. (2021) simultaneously minimized the empirical risk loss across all clients. Li et al. (2021) allowed clients to fine-tune the global model on local data. These methods enhance the model's representation of all clients but cannot guarantee to narrow the gap between the better-performing and the worse-performing clients. To achieve equal model performance, improving the model fitting towards the data distribution of the poorer clients is necessary. Li et al. (2019) proposed q-FFL, which enhances the weights of the poorer clients in the aggregation process to achieve equal accuracy across the clients. However, the method is heuristic and depends on a hyperparameter \(q\), which is difficult to be tuned to achieve the optimal result.
**Our main contributions include: \(\blacklozenge\) We propose an FL framework, called **Egalitarian Fair Federated Learning** (EFFL), that aims to achieve an egalitarian fair global model that provides both high performance and egalitarian fairness for clients; \(\blacklozenge\) We formally define the learning goals in EFFL and form it as a multi-constrained multi-objectives optimization (MCMOO) problem. As shown in Fig. 1 (b), the objectives are to minimize the local empirical risk losses \(\{l_{1},...,l_{N}\}\) on
Figure 1: (a) Impact of Matthew effect in FL; (b) Training goals in EFFL.
clients (goal 1). The local decision biases \(\{f_{1},...,f_{N}\}\) on \(N\) clients are constrained to be below an acceptable bias budget (goal 2). These two goals jointly ensure higher model performance across clients. We impose constraints on the deviations of local empirical risk loss and local decision bias from their mean values (goal 3 and goal 4) to achieve egalitarian fairness with equal performance among clients; \(\circleddiamond\) To address the challenges mentioned above, we propose a three-stage gradient-based algorithm that achieves Pareto optimal solutions in the decision space defined by the decision bias and egalitarian fairness constraints, where the global model performance on each local client is maximized and cannot be further improved without harming others; \(\circleddiamond\) We perform comprehensive experiments on both synthetic and real-world datasets and show that our proposed EFFL framework can achieve a global model that outperforms state-of-the-art (SOTA) baselines in terms of both performance and egalitarian fairness across clients.
## 2 Related Work
Most of the existing fairness research (Roh et al., 2021; Shen et al., 2022; Choi et al., 2021; Sankar et al., 2021) assumes that the training process can access the whole training dataset in a centralized manner. However, this assumption does not hold in real-world scenarios where data privacy and communication constraints prevent clients from sharing their data with a central server. In FL, some work (Ezzeldin et al., 2023; Papadaki et al., 2022; Chu et al., 2021; Du et al., 2021; Cui et al., 2021) aims to reduce model outcome bias towards different protected groups such as gender, race, or age. Recently, there has been a growing interest in achieving fairness towards clients in FL. Mohri et al. (2019) propose AFL, a min-max optimization scheme that focuses on the worst-performing client. Li et al. (2021) propose Ditto to achieve training fairness by allowing clients to fine-tune the received global model using local data. Cui et al. (2021) propose FCFL to jointly consider accuracy consistency and decision bias across different local clients (data sources) by minimizing the model loss of the worst-performing client. These works enhance a fair representation of different clients during training but cannot guarantee equal performance. To achieve equal accuracy across clients, Li et al. (2019) propose q-FFL, a heuristic method that adjusts the weights of the aggregate process to enhance the influence of poorer individuals. Pan et al. (2023) propose FedMDFG that adds cosine similarity between the loss vectors among clients and the unit vector as a fairness objective in the local loss functions. Previous work overlooks the trade-offs in achieving equality from a social welfare perspective and local optimality from an individual beneficial perspective. Moreover, as there is an inherent trade-off between accuracy and decision bias for each client (Wang et al., 2021), we also need to consider the deterioration of the decision bias caused by accuracy distribution adjustment. It is necessary to ensure that the decision bias of each individual is within an acceptable budget to provide a trustworthy global model. For the same social welfare purpose as accuracy equality, maintaining decision bias equality among individuals is helpful to improve the reputation and decision credibility of poor clients. We argue that optimizing the accuracy objective alone is insufficient for an FL system that adheres to social ethics and legal regulations. Therefore, we introduce a novel FL framework, EFFL, to produce a global model with high and equitable performance across clients.
## 3 Preliminaries
In this section, we provide the notions and formally define the problem of **Egalitarian Fairness Federated Learning** (EFFL), which extends the fairness criteria in FL and covers novel application scenarios.
### Federated Learning
We focus on horizontal FL (Yang et al., 2019), which involves \(N\) clients, each associated with a specific dataset \(\mathcal{D}_{k}=\{X_{k},A_{k},Y_{k}\}\), where \(k\in\{1,...,N\}\), \(X_{k}\) denotes the general attributes of the data without protected information, \(A_{k}\) denotes a protected attribute, such as gender, race, or religion, and \(Y_{k}\) denotes the ground-truth label. The FL procedure involves multiple rounds of communication between the server and the clients. In each round, the server sends the global model \(h_{\theta}\) with parameter \(\theta\) to the clients, who then train their local models on their local private datasets \(\{\mathcal{D}_{1},...,\mathcal{D}_{N}\}\), resulting in local models \(\{h_{\theta_{1}},...,h_{\theta_{N}}\}\). The server then aggregates the local parameters and updates the global model for the next communication round (McMahan et al., 2017). The original FL (McMahan et al.,
2017) aims to minimize the average empirical risk loss over all the clients' datasets, and the optimal hypothesis parameter \(\theta^{*}\) satisfies:
\[\theta^{*}=\arg\;\min_{\theta\in\Theta}\sum_{k=1}^{N}\left(\frac{|\mathcal{D}_{k} |}{\sum_{k=1}^{N}|\mathcal{D}_{k}|}l_{k}\left(\hat{Y}_{k},Y_{k}\right)\right), \tag{1}\]
where \(\hat{Y}_{k}=h_{\theta}\left(X_{k},A_{k}\right)\) is the output of the hypothesis \(h_{\theta}\) when input \(\left(X_{k},A_{k}\right)\) and \(l_{k}(\cdot)\) is the loss function for \(k\)-th client.
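For reference, a minimal sketch of the FedAvg-style weighted aggregation implied by Eq. 1 is shown below; the flat-vector representation of \(\theta\) and the variable names are assumptions for illustration.

```python
import numpy as np

def fedavg_aggregate(local_thetas, local_sizes):
    """Weighted average of client parameters, with weights |D_k| / sum_k |D_k| (cf. Eq. 1)."""
    local_sizes = np.asarray(local_sizes, dtype=float)
    weights = local_sizes / local_sizes.sum()
    stacked = np.stack(local_thetas)              # shape (N, dim) of flattened parameters
    return (weights[:, None] * stacked).sum(axis=0)

# Example: three clients with unequal data sizes.
thetas = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
print(fedavg_aggregate(thetas, local_sizes=[100, 50, 50]))   # -> [1.75 1.75 1.75 1.75]
```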
### Model Performance Metrics
As mentioned in Sec. 1, we study egalitarian fairness in terms of two aspects of model performance: accuracy and decision bias. The local accuracy of the global model on the \(k\)-th client is naturally controlled by the training loss \(l_{k}\), which measures the difference between the model's decision and the truth label on the local data of the \(k\)-th client. In the context of classification, we use the following BCELoss on the \(k\)-th client:
\[\text{BCELoss: }l_{k}(h)=-\frac{1}{|Y|}\sum_{i=1}^{|Y|}\left(Y_{k}^{i}\log \left(\hat{Y}_{k}^{i}\right)+\left(1-Y_{k}^{i}\right)\log\left(1-\hat{Y}_{k}^{ i}\right)\right). \tag{2}\]
Decision bias refers to the disparities in model decisions made across different groups formed by protected attributes, such as gender, race, and region. We use two decision bias metrics, namely _Accuracy Parity Standard Deviation_ (APSD) and _True positive rate Parity Standard Deviation_ (TPSD) (Poulain et al., 2023). Taking a binary classification problem as an example, the decision bias measured by APSD or TPSD for the \(k\)-th client is defined as follows:
\[\text{APSD: }f_{k}\left(h\right)=\sqrt{\frac{\sum_{i=1}^{M}\left( \Pr\left(\hat{Y}_{k}=1|A_{k}=i\right)-\mu\right)^{2}}{M}}, \tag{3}\] \[\text{TPSD: }f_{k}\left(h\right)=\sqrt{\frac{\sum_{i=1}^{M}\left( \Pr\left(\hat{Y}_{k}=1|A_{k}=i,Y_{k}=1\right)-\mu\right)^{2}}{M}}, \tag{4}\]
where \(\mu\) is the average _True Positive Rate_ (TPR) or accuracy over all groups defined by the values of the protected attribute, and \(M\) is the number of possible values for the sensitive attribute \(A_{k}\). A hypothesis \(h_{\theta}\) satisfies \(\epsilon_{b}\)-decision bias on the \(k\)-th client if \(f_{k}(h)\leq\epsilon_{b}\), where \(\epsilon_{b}\) is the predefined budget for the decision bias.
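A small sketch of how these two metrics can be computed from a client's binary predictions is given below; the function and variable names are illustrative, and Eq. 3 is implemented as written, i.e., using the per-group positive-prediction rate.

```python
import numpy as np

def apsd(y_pred, a):
    """Eq. 3: std of the per-group positive-prediction rate Pr(y_hat = 1 | A = i)."""
    y_pred, a = np.asarray(y_pred), np.asarray(a)
    rates = [np.mean(y_pred[a == g] == 1) for g in np.unique(a)]
    return float(np.std(rates))   # population std = sqrt of mean squared deviation from mu

def tpsd(y_pred, y_true, a):
    """Eq. 4: std of the per-group true positive rate Pr(y_hat = 1 | A = i, y = 1)."""
    y_pred, y_true, a = np.asarray(y_pred), np.asarray(y_true), np.asarray(a)
    tprs = [np.mean(y_pred[(a == g) & (y_true == 1)] == 1) for g in np.unique(a)]
    return float(np.std(tprs))
```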
### Egalitarian fairness in FL
Egalitarian fairness in FL refers to the model providing equal performance across clients, roughly speaking, ensuring clients have levels of performance that are all roughly comparable. Therefore, we evaluate egalitarian fairness in FL based on the degree of equality in performance. In existing work, Pan et al. (2023) measured the performance equality by the cosine similarity between the model losses on all clients \(\left[l_{1},...,l_{N}\right]\) and the unit vector \(p=\mathbf{1}\). This metric fails to distinguish each client's performance and to impose precise constraints on them, especially when the demand for performance equality is dynamic. For instance, clients may allow the violation of performance equality to be within an acceptable threshold. To avoid this, we measure the model performance equality across clients by the absolute deviation of each client's performance from the mean performance of all clients. A hypothesis \(h\) satisfies \(\epsilon_{vl}\)-egalitarian fairness on accuracy performance and \(\epsilon_{vb}\)-egalitarian fairness on decision bias performance if:
\[\left|l_{k}(h)-\bar{l}(h)\right|\leq\epsilon_{vl},\left|f_{k}(h)-\bar{f}(h) \right|\leq\epsilon_{vb},k\in\left\{1,...,N\right\}. \tag{5}\]
where \(\bar{l}(h)=\frac{1}{N}{\sum_{k=1}^{N}l_{k}(h)}\) and \(\bar{f}(h)=\frac{1}{N}{\sum_{k=1}^{N}f_{k}(h)}\) are the average empirical risk loss and average decision bias, respectively, and \(\epsilon_{vl}\) and \(\epsilon_{vb}\) are the predefined budgets for the egalitarian fairness on accuracy and decision bias, respectively.
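Given per-client losses and decision biases, the check in Eq. 5 reduces to a few lines (a sketch; the inputs are assumed to be arrays of length \(N\)):

```python
import numpy as np

def satisfies_egalitarian_fairness(losses, biases, eps_vl, eps_vb):
    """Check eps_vl / eps_vb egalitarian fairness (Eq. 5) over per-client losses and biases."""
    losses, biases = np.asarray(losses), np.asarray(biases)
    acc_ok = np.all(np.abs(losses - losses.mean()) <= eps_vl)
    bias_ok = np.all(np.abs(biases - biases.mean()) <= eps_vb)
    return bool(acc_ok and bias_ok)
```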
### Egalitarian Fairness Federated Learning
To achieve a global model that provides both high and equal performance across clients, we propose a novel framework called **Egalitarian Fair Federated Learning** (EFFL), in which the training goals can be formulated as a multi-constrained multi-objectives optimization (MCMOO) problem.
**Definition 1**: _(Egalitarian Fairness Federated Learning) We formalize the **Egalitarian Fair Federated Learning** (EFFL) problem as follows:_
\[\begin{split}&\min_{h\in\mathcal{H}^{*}}\left\{l_{1}\left(h \right),...,l_{N}\left(h\right)\right\},\\ &\text{s.t.}\left\{f_{k}\left(h\right)\right\}_{k=1}^{N}\leq \epsilon_{b},\left\{\left|l_{k}\left(h\right)-\bar{l}(h)\right|\right\}_{k=1}^ {N}\leq\epsilon_{vl},\left\{\left|f_{k}\left(h\right)-\bar{f}(h)\right|\right\} _{k=1}^{N}\leq\epsilon_{vb},\end{split} \tag{6}\]
_where \(h\) is a hypothesis from a hypothesis set \(\mathcal{H}^{*}\)._
The MCMOO problem seeks to minimize the empirical risk losses for all clients while ensuring each client has a \(\epsilon_{b}\)-decision bias. It also satisfies \(\epsilon_{vl}\)-egalitarian fairness for accuracy and \(\epsilon_{vb}\)-egalitarian fairness for decision bias. Finding the optimal solution to the MCMOO problem is nontrivial as the objectives may be conflicting. Therefore, we aim to identify the Pareto-optimal hypothesis \(h\), which is not dominated by any other \(h^{\prime}\in\mathcal{H}\). The definitions of Pareto optimal and Pareto front are as follows:
**Definition 2**: _(Pareto Optimal and Pareto Front (Lin et al., 2019)) In a multi-objective optimization problem with loss function \(l(h)=\left\{l_{1}(h),...,l_{N}(h)\right\}\), we say that for \(h_{1},h_{2}\in\mathcal{H}\), \(h_{1}\) is dominated by \(h_{2}\) if \(\forall i\in[N]\,,l_{i}(h_{2})\leq l_{i}(h_{1})\) and \(\exists i\in[N]\,,l_{i}(h_{2})<l_{i}(h_{1})\). A solution \(h\) is Pareto optimal if it is not dominated by any other \(h^{\prime}\in\mathcal{H}\). The collection of Pareto optimal solutions is called the Pareto set. The image of the Pareto set in the loss function space is called the Pareto front._
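Definition 2 translates directly into a dominance test on the per-client loss vectors, sketched below for reference:

```python
import numpy as np

def dominates(losses_h2, losses_h1):
    """True if h2 dominates h1: no worse on every client and strictly better on at least one."""
    l2, l1 = np.asarray(losses_h2), np.asarray(losses_h1)
    return bool(np.all(l2 <= l1) and np.any(l2 < l1))
```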
## 4 Three-Stage Optimization Approach for EFFL
### Optimization Path to Obtain Pareto Optimal
Fig. 2 illustrates the feasible decision space of EFFL, which is bounded by the intersection of two hypothesis sets: \(\mathcal{H}_{B}\), containing all hypotheses that satisfy the \(\epsilon_{b}\)-decision bias constraint, and \(\mathcal{H}_{E}\), containing all hypotheses that satisfy the \(\epsilon_{vl}\)-egalitarian fairness on accuracy and \(\epsilon_{vb}\)-egalitarian fairness on decision bias. We aim to search the Pareto set \(\mathcal{H}^{*}\) for the objectives in Eq. 6 within the feasible decision space. The properties of the hypothesis sets are as follows: (1) The \(\mathcal{H}_{B}\) contains hypotheses satisfying \(\epsilon_{b}\)-decision bias in each client,
\[\left\{f_{k}\left(h\right)\right\}_{k=1}^{N}\leq\epsilon_{b},\forall h\in \mathcal{H}_{B}. \tag{7}\]
(2) The \(\mathcal{H}_{E}\) contains hypotheses that satisfy \(\epsilon_{vl}\)-egalitarian fairness on accuracy and \(\epsilon_{vb}\)-egalitarian fairness on decision bias across all clients,
\[\left\{\left|l_{k}\left(h\right)-\bar{l}\left(h\right)\right|\right\}_{k=1}^{N }\leq\epsilon_{vl},\left\{\left|f_{k}\left(h\right)-\bar{f}\left(h\right) \right|\right\}_{k=1}^{N}\leq\epsilon_{vb},\forall h\in\mathcal{H}_{E}. \tag{8}\]
(3) The \(\mathcal{H}^{*}\subset\mathcal{H}_{B}\cap\mathcal{H}_{E}\) is the Pareto set of EFFL in Eq. 6, i.e.,
\[h^{\prime}\not\prec h,\forall h\in\mathcal{H}^{*},\forall h^{\prime}\in \mathcal{H}_{B}\cap\mathcal{H}_{E}. \tag{9}\]
Finding the Pareto set for EFFL is nontrivial as the feasible decision space is highly restricted. Moreover, when the number of objectives \(N\) is large, optimizing one objective may adversely affect other objectives. We construct an approximate Pareto front by a linear scalarization technique: average weights are applied to each objective, combining the \(N\) objectives into a single surrogate objective. The surrogate objective forms the convex part of the Pareto front, as shown in Fig. 2, and the corresponding hypothesis set is denoted as \(\mathcal{H}_{\bar{L}}\). The hypotheses in \(\mathcal{H}_{\bar{L}}\) satisfy:
Figure 2: Optimization path to obtain Pareto set \(\mathcal{H}^{*}\) for EFFL.
\[\bar{l}\left(h\right)\leq\bar{l}\left(h^{\prime}\right),\forall h\in\mathcal{H}_{L}, h^{\prime}\notin\mathcal{H}_{L}. \tag{10}\]
Compared to \(\mathcal{H}^{*}\), \(\mathcal{H}_{\bar{L}}\) is easier to obtain and can serve as an intermediate set, from which we propose a three-stage optimization algorithm with an optimal path: \(h^{0}\rightarrow\mathcal{H}_{B}\cap\mathcal{H}_{\bar{L}}\rightarrow\mathcal{H}_{B}\cap\mathcal{H}_{E}\cap\mathcal{H}_{\bar{L}}\rightarrow\mathcal{H}^{*}\) (purple arrows in Fig. 2), and decompose the EFFL problem into three sub-problems as follows:
**Stage 1: Constrained Minimization Problem.** We define a constrained minimization problem on the hypothesis set \(\mathcal{H}\) to obtain a hypothesis \(h^{\prime}\in\mathcal{H}_{B}\cap\mathcal{H}_{\bar{L}}\),
\[\min_{h\in\mathcal{H}}\bar{l}\left(h\right),\text{s.t.}\left\{f_{k}\left(h \right)\right\}_{k=1}^{N}\leq\epsilon_{b}. \tag{11}\]
By solving Eq. 11, we obtain \(h^{\prime}\) that 1) satisfies \(\epsilon_{b}\)-decision bias for each client and 2) minimizes the average empirical risk loss among all clients.
**Stage 2: Multi-Constrained Optimization Problem.** We formulate a multi-constrained optimization problem to obtain a hypothesis \(h^{\prime\prime}\in\mathcal{H}_{B}\cap\mathcal{H}_{E}\cap\mathcal{H}_{\bar{L}}\),
\[\min_{h\in\mathcal{H}}\bar{l}\left(h\right),\text{s.t.}\left\{\left|l_{k} \left(h\right)-\bar{l}\left(h\right)\right|\right\}_{k=1}^{N}\leq\epsilon_{vl},\left\{\left|f_{k}\left(h\right)-\bar{f}\left(h\right)\right|\right\}_{k=1}^{ N}\leq\epsilon_{vb},\left\{f_{k}\left(h\right)\right\}_{k=1}^{N}\leq\epsilon_{b}. \tag{12}\]
By solving Eq. 12, we obtain \(h^{\prime\prime}\) that, compared to \(h^{\prime}\), exhibits the following properties: 1) it provides \(\epsilon_{vl}\)-egalitarian fairness on accuracy; and 2) it provides \(\epsilon_{vb}\)-egalitarian fairness on decision bias.
**Stage 3: Multi-Constrained Pareto Optimization Problem.** Focusing solely on minimizing the weighted sum \(\bar{l}(h)\) during optimization may harm individual clients. To address this issue, we formulate a multi-constrained Pareto optimization problem to further optimize \(h^{\prime\prime}\) to \(h^{*}\in\mathcal{H}^{*}\), where the empirical risk loss of each client is further reduced until Pareto optimality is achieved. At this point, the loss of each client cannot be further minimized without adversely affecting the loss of other clients,
\[\begin{split}\min_{h\in\mathcal{H}}&\left\{l_{1} \left(h\right),...,l_{N}\left(h\right)\right\},\text{s.t.}\left\{f_{k}\left(h \right)\right\}_{k=1}^{N}\leq\epsilon_{b},\left\{\left|l_{k}\left(h\right)- \bar{l}\left(h\right)\right|\right\}_{k=1}^{N}\leq\epsilon_{vl},\\ &\left\{\left|f_{k}\left(h\right)-\bar{f}\left(h\right)\right| \right\}_{k=1}^{N}\leq\epsilon_{vb},\bar{l}\left(h\right)\leq\bar{l}\left(h^{ \prime\prime}\right).\end{split} \tag{13}\]
### Three-Stage Optimization for obtaining \(\mathcal{H}^{*}\)
To obtain the convergent solution of the sub-problems defined in Eq. 11\(\sim\)Eq. 13, we propose a gradient-based algorithm for obtaining \(h^{*}\in\mathcal{H}^{*}\), which is suitable for implementation under FL. Consider a hypothesis \(h_{\theta^{t}}\) parameterized by \(\theta^{t}\). At iteration \(t+1\), the update rule of gradient-based methods is \(\theta^{t+1}=\theta^{t}+\eta d\), where \(d\) is a gradient descent direction and \(\eta\) is the step size. In the case of an optimization problem with \(N\) objectives, i.e., \(\min\left\{l_{1}\left(h_{\theta}\right),...,l_{N}\left(h_{\theta}\right)\right\}\), a direction \(d\) is effective in making the optimization proceed towards minimization if \(\left\{d^{T}\nabla_{\theta}l_{i}\left(h_{\theta}\right)\right\}_{i=1}^{N}\leq 0\). As the gradient direction \(d\) resides within the convex hull of the gradients of all objectives and constraints, denoted as \(G=\left[\nabla_{\theta}l_{1}(h_{\theta}),...,\nabla_{\theta}l_{N}(h_{\theta})\right]\) (Desideri, 2012), we can obtain a gradient descent direction \(d^{*}\) by performing a linear transformation on \(G\) using an \(N\)-dimensional vector \(\alpha^{*}\),
\[\begin{split} d^{*}=\alpha^{*T}G,&\text{ where }\alpha^{*}=\arg\min_{\alpha}\left\|\sum_{i=1}^{N}\alpha_{i}\nabla_{\theta}l_{i} \left(h_{\theta}\right)\right\|,\\ &\text{s.t.}\ \ \sum_{i=1}^{N}\alpha_{i}=1,\alpha_{i}\geq 0,\forall i \in\left[N\right].\end{split} \tag{14}\]
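A minimal sketch of solving the min-norm problem in Eq. 14 with an off-the-shelf constrained optimizer is shown below; the gradient matrix \(G\) is assumed to hold one flattened per-objective gradient per row, and a Frank-Wolfe-style solver could be substituted for efficiency on large problems.

```python
import numpy as np
from scipy.optimize import minimize

def min_norm_direction(G):
    """Solve Eq. 14: find alpha on the simplex minimizing ||alpha^T G||, return (alpha, d*)."""
    N = G.shape[0]
    gram = G @ G.T                                 # Gram matrix of the per-objective gradients
    objective = lambda a: float(a @ gram @ a)      # ||sum_i alpha_i * grad_i||^2
    constraints = ({'type': 'eq', 'fun': lambda a: a.sum() - 1.0},)
    bounds = [(0.0, 1.0)] * N
    res = minimize(objective, np.full(N, 1.0 / N), bounds=bounds, constraints=constraints)
    alpha = res.x
    return alpha, alpha @ G                        # d* = alpha^T G
```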
**Solution for Stage 1.** We first transform Eq. 11 into an equivalent single-constraint optimization problem by imposing constraint only on the max-value as follows,
\[\min_{h\in\mathcal{H}}\bar{l}\left(h\right),\text{ s.t. }\max\left\{f_{k}\left(h \right)\right\}_{k=1}^{N}\leq\epsilon_{b}. \tag{15}\]
Denoting the \(\max\left\{f_{k}\left(h\right)\right\}_{k=1}^{N}\) as \(f_{max}(h)\), the descent gradient of Eq. 15 lies in the convex hull of \(G^{\prime}=\left[\nabla_{\theta}\bar{l}(h),\nabla_{\theta}f_{max}(h)\right]\). We employ an alternating optimization strategy: if the \(\epsilon_{b}\)-decision bias is satisfied within the worst-case client, only \(\bar{l}(h)\) is further optimized,
\[d^{*}=\arg\min_{d\in G^{\prime}}\,d^{T}\nabla_{\theta}\bar{l}(h),\text{ if }f_{max}(h)\leq\epsilon_{b}. \tag{16}\]
Otherwise, we optimize towards a descent direction \(d\), which minimizes \(f_{max}(h)\) while ensuring that \(\bar{l}(h)\) does not increase, as follows:
\[d^{*}=\arg\min_{d\in G^{\prime}}d^{T}\nabla_{\theta}f_{max}(h),\text{ s.t. }d^{T}\nabla_{\theta}\bar{l}(h)\leq 0,\text{ if }f_{max}(h)> \epsilon_{b}. \tag{17}\]
The gradient direction in Eq. 16 and Eq. 17 is chosen to reduce the loss while better satisfying the \(\epsilon_{b}\)-decision bias constraint, leading to a hypothesis \(h^{\prime}\) that balances the trade-off between loss and decision bias.
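In a concrete implementation, the alternating rule in Eqs. 16-17 can be approximated by a simple case split with a projection that keeps the average loss from increasing; the sketch below uses a negative-gradient/projection convention rather than the exact convex-hull search, so it illustrates the logic only.

```python
import numpy as np

def stage1_update_direction(grad_mean_loss, grad_f_max, f_max, eps_b):
    """Pick an update direction d (used as theta <- theta + eta * d), following Eqs. 16-17 (sketch)."""
    g_l, g_f = np.asarray(grad_mean_loss), np.asarray(grad_f_max)
    if f_max <= eps_b:
        # Worst-case decision bias already within budget: descend on the average loss only.
        return -g_l
    # Otherwise descend on the worst-case bias, removing any component that would
    # increase the average loss, so that d . grad_mean_loss <= 0 is preserved.
    d = -g_f
    overlap = d @ g_l
    if overlap > 0:
        d = d - (overlap / (g_l @ g_l)) * g_l
    return d
```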
**Solution for Stage 2.** To reduce the computational complexity of handling \(O(N)\)-constraints in Eq. 12, we optimize the egalitarian fairness for the worst-case client. Moreover, to better achieve \(\epsilon_{vl}-\)egalitarian fairness on accuracy and \(\epsilon_{vb}-\)egalitarian fairness on decision bias, we modify Eq. 12 by treating egalitarian fairness as objectives and applying a constraint \(\bar{l}\left(h\right)\leq\bar{l}\left(h^{\prime}\right)\) to avoid the degradation of model performance. We optimize egalitarian fairness of accuracy and decision bias alternately, i.e., if \(\left\{\left|l_{k}\left(h\right)-\bar{l}\left(h\right)\right|\right\}_{k=1}^ {N}\leq\epsilon_{vl}\),
\[\min_{h\in\mathcal{H}}\max\left\{\left|f_{k}\left(h\right)-\bar{f}(h)\right| -\epsilon_{vb}\right\}_{k=1}^{N},\text{ s.t. }\max\left\{f_{k}(h)\right\}_{k=1}^{N}\leq \epsilon_{b},\bar{l}\left(h\right)\leq\bar{l}\left(h^{\prime}\right), \tag{18}\]
else,
\[\min_{h\in\mathcal{H}}\max\left\{\left|l_{k}\left(h\right)-\bar{l} (h)\right|-\epsilon_{vl}\right\}_{k=1}^{N}, \tag{19}\] \[\text{ s.t. }\max\left\{\left|f_{k}\left(h\right)-\bar{f}(h) \right|\right\}_{k=1}^{N}\leq\epsilon_{vb},\max\left\{f_{k}(h)\right\}_{k=1}^ {N}\leq\epsilon_{b},\bar{l}\left(h\right)\leq\bar{l}\left(h^{\prime}\right).\]
Denoting the \(\max\left\{\left|l_{k}\left(h\right)-\bar{l}(h)\right|-\epsilon_{vl}\right\}_{ k=1}^{N}\) and \(\max\left\{\left|f_{k}\left(h\right)-\bar{f}(h)\right|-\epsilon_{vb}\right\}_{ k=1}^{N}\) as \(\hat{l}_{max}(h)\) and \(\hat{f}_{max}(h)\), respectively, the gradient descent direction of Eq. 18 lies in the convex hull of \(G^{\prime\prime}=\left[\nabla_{\theta}\hat{f}_{max},\nabla_{\theta}f_{max}, \nabla_{\theta}\bar{l}\right]\). We obtain the optimal \(d^{*}\) as follows:
\[d^{*}=\arg\min_{d\in G^{\prime\prime}}d^{T}\nabla_{\theta}\hat{f}_{max}(h), \text{s.t. }d^{T}\nabla_{\theta}\bar{l}(h)\leq 0,d^{T}\nabla_{\theta}f_{max}(h) \leq 0\text{ if }f_{max}(h)>\epsilon_{b}. \tag{20}\]
The gradient descent direction of Eq. 19 lies in the convex hull of \(G^{\prime\prime}=\left[\nabla_{\theta}\hat{l}_{max},\nabla_{\theta}\hat{f}_{max },\nabla_{\theta}f_{max},\nabla_{\theta}\bar{l}\right]\). We obtain the optimal \(d^{*}\) as follows:
\[d^{*}=\arg\min_{d\in G^{\prime\prime}}d^{T}\nabla_{\theta}\hat{l}_{max}(h), \text{s.t. }d^{T}\nabla_{\theta}\bar{l}(h)\leq 0, \tag{21}\]
\[d^{T}\nabla_{\theta}\hat{f}_{max}(h)\leq 0\text{ if }\hat{f}_{max}(h)> \epsilon_{vb},\text{ }d^{T}\nabla_{\theta}f_{max}(h)\leq 0\text{ if }f_{max}(h)> \epsilon_{b}.\]
The constraints are dynamically imposed, depending on whether the current hypothesis \(h\) satisfies \(\epsilon_{b}\)-decision bias and \(\epsilon_{vb}\)-egalitarian fairness on the decision bias. The gradient directions in Eq. 20 and Eq. 21 improve the equality of performance among clients without degrading the overall model performance, leading to a hypothesis \(h^{\prime\prime}\) that balances the trade-off between egalitarian fairness and maximizing model performance.
**Solution for Stage 3.** To reduce the computational complexity of minimizing \(N\) objectives and handling \(O(N)\)-constraints in Eq. 13, we optimize the empirical risk loss for the worst-case client and impose constraints \(l_{k}\left(h\right)\leq l_{k}\left(h^{\prime\prime}\right)\) to prevent the degradation of performance for other clients.
\[\min_{h\in\mathcal{H}}\max\left\{l_{1}\left(h\right),...,l_{N}\left(h\right) \right\},\text{s.t. }l_{k}(h)\leq l_{k}(h^{\prime\prime}),\forall k\in[N],\bar{l}(h)\leq\bar{l}(h^{ \prime\prime}), \tag{22}\] \[\max\left\{\left|f_{k}\left(h\right)\right|\right\}_{k=1}^{N}\leq \epsilon_{b},\max\left\{\left|l_{k}\left(h\right)-\bar{l}(h)\right|\right\}_{k=1}^ {N}\leq\epsilon_{vl},\max\left\{\left|f_{k}\left(h\right)-\bar{f}(h)\right| \right\}_{k=1}^{N}\leq\epsilon_{vb}.\]
Denoting the \(\max\left\{l_{k}\left(h\right)\right\}_{k=1}^{N}\) as \(l_{max}(h)\), the gradient descent direction lies in the convex hull of \(G^{*}=\left[\nabla_{\theta}l_{max}(h),\nabla_{\theta}l_{1}(h),...,\nabla_{\theta }l_{N}(h),\nabla_{\theta}\bar{l}(h),\nabla_{\theta}f_{max}(h),\nabla_{\theta} \hat{l}_{max}(h),\nabla_{\theta}\hat{f}_{max}(h)\right]\). We obtain the optimal \(d^{*}\) as follows,
\[d^{*}=\arg\min_{d\in G^{*}}d^{T}\nabla_{\theta}l_{max}(h), \tag{23}\] \[\text{ s.t. }d^{T}\nabla_{\theta}\bar{l}(h)\leq 0,\ d^{T}\nabla_{\theta}f_{max}(h)\leq 0\text{ if }f_{max}(h)>\epsilon_{b},\] \[d^{T}\nabla_{\theta}\hat{l}_{max}(h)\leq 0\text{ if }\hat{l}_{max}(h)>\epsilon_{vl},\ d^{T}\nabla_{\theta}\hat{f}_{max}(h)\leq 0\text{ if }\hat{f}_{max}(h)>\epsilon_{vb},\] \[d^{T}\nabla_{\theta}l_{i}(h)\leq 0,\ \forall i\in[N]\text{ and }i\neq\arg\max\left\{l_{k}(h)\right\}_{k=1}^{N}.\]
The constraints are dynamically imposed, depending on whether the current hypothesis \(h\) satisfies \(\epsilon_{b}\)-decision bias, \(\epsilon_{vl}\)-egalitarian fairness on the accuracy and \(\epsilon_{vb}\)-egalitarian fairness on the decision bias. The gradient direction in Eq. 23 minimizes the empirical risk loss on the worst client without deteriorating the performance of other clients, resulting in a Pareto optimal hypothesis \(h^{*}\). The algorithm implementation in FL is described in Appx. A.
## 5 Experiments
### Settings
**Datasets.** We generate a synthetic dataset with a protected attribute \(A\sim Ber(0.5)\) and two general attributes \(X_{1}\sim\mathcal{N}(0,1)\), \(X_{2}\sim\mathcal{N}(\mathbbm{1}(a>0),2)\). The label is set by \(Y\sim Ber(u^{l}\mathbbm{1}(x_{1}+x_{2}\leq 0)+u^{h}\mathbbm{1}(x_{1}+x_{2}>0))\), where \(\{u^{l},u^{h}\}=\{0.3,0.6\}\) if \(a=0\) and \(\{u^{l},u^{h}\}=\{0.1,0.9\}\) otherwise. We split the dataset into two clients based on whether \(x_{1}\leq-0.3\) to make the clients heterogeneous in distribution and size; \(\blacktriangledown\) Adult is a binary classification dataset with more than \(40000\) adult records for predicting whether the annual income exceeds \(50K\). We split the dataset into two clients based on whether the individual's education level is a Ph.D. and select \(race\) as the protected attribute; \(\blacktriangledown\) The eICU dataset includes records of clinical information and hospital details of patients admitted to ICUs. The dataset has been processed following the steps outlined in Johnson et al. (2018). We filter out hospitals with fewer than \(1500\) data points, leaving \(11\) hospitals in our experiments. Naturally, we treat each hospital as a client.
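A minimal sketch of the synthetic data generator described above is given below. It assumes \(a\in\{0,1\}\), treats the second argument of \(\mathcal{N}(\cdot,2)\) as the standard deviation, and picks an arbitrary sample size; these are illustrative assumptions rather than details taken from the text.

```python
import numpy as np

def make_synthetic(n: int = 5000, seed: int = 0):
    """Generate the synthetic dataset and split it into two clients on x1 <= -0.3."""
    rng = np.random.default_rng(seed)
    a = rng.binomial(1, 0.5, size=n)                     # protected attribute A ~ Ber(0.5)
    x1 = rng.normal(0.0, 1.0, size=n)                    # X1 ~ N(0, 1)
    x2 = rng.normal((a > 0).astype(float), 2.0, size=n)  # X2 ~ N(1(a > 0), 2)
    u_l = np.where(a == 0, 0.3, 0.1)                     # P(y = 1 | x1 + x2 <= 0)
    u_h = np.where(a == 0, 0.6, 0.9)                     # P(y = 1 | x1 + x2 >  0)
    y = rng.binomial(1, np.where(x1 + x2 <= 0, u_l, u_h))
    client0 = x1 <= -0.3                                 # heterogeneous client split
    features = np.stack([x1, x2], axis=1)
    return (features[client0], a[client0], y[client0]), \
           (features[~client0], a[~client0], y[~client0])
```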
**Baselines.**\(\blacktriangledown\) FedAvg (McMahan et al., 2017), \(\blacktriangledown\) FedAvg + FairBatch (Roh et al., 2021), \(\blacktriangledown\) FedAvg+FairReg, \(\blacktriangledown\) Ditto (Li et al., 2021), \(\blacktriangledown\) g-FFL (Li et al., 2019), \(\blacktriangledown\) FCFL (Cui et al., 2021), \(\blacktriangledown\) FedMDFG (Pan et al., 2023). All methods are tuned to their best performance.
**Hyperparameters.** We divide the communication rounds into three stages, each with \(750\), \(750\), and \(500\) rounds, respectively, to ensure that the global model is fully updated and converges in each stage. In the constraint budget setting, we set the decision bias budget \(\epsilon_{b}\), the egalitarian fairness budget on accuracy \(\epsilon_{vl}\), and the egalitarian fairness budget on decision bias \(\epsilon_{vb}\) to half of the corresponding performance achieved by the original FedAvg. For example, as shown in Tab. 1, on the synthetic
\begin{table}
\begin{tabular}{l l c c c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Method} & \multicolumn{3}{c}{Model Performance} \\ \cline{3-5} & & \multicolumn{2}{c}{Local Acc.} & \multicolumn{2}{c}{Local Bias} \\ \cline{3-5} & & Avg. & Std\((\epsilon_{vl})\) & Avg.(\(\epsilon_{b}\)) & Std\((\epsilon_{vb})\) \\ \hline \multirow{5}{*}{Synthetic \(\epsilon_{b}=0.1\)} & FedAvg & **.7735** &.0283(\(\times\)) &.2480(\(\times\)) &.0819(\(\times\)) \\ & q-FFL & **.7735** &.0283(\(\times\)) &.2480(\(\times\)) &.0819(\(\times\)) \\ & Ditto &.7229 &.0132(\(\times\)) &.2703(\(\times\)) &.0566(\(\times\)) \\ \cline{1-1} & FedMDFG &.7717 &.0068(\(\times\)) &.02473(\(\times\)) &.0662(\(\times\)) \\ \cline{1-1} & FedAvg+FairReg &.6360 &.6643(\(\times\)) &.0140(\(\times\)) &.0798(\(\times\)) \\ \cline{1-1} & FedAvg+FairReg &.6227 &.0394(\(\times\)) &.0952(\(\times\)) &.0463(\(\times\)) \\ \cline{1-1} & FCFL &.6330 &.0177(\(\times\)) &.0812(\(\times\)) &.0435(\(\times\)) \\ \cline{1-1} & EFL &.6327 & **.0887**(\(\times\)) & **.0801**(\(\times\)) & **.0359**(\(\times\)) \\ \hline \multirow{5}{*}{Adult \(\epsilon_{b}=0.01\)} & FedAvg & **.7767** &.0592(\(\times\)) &.0328(\(\times\)) &.0093(\(\times\)) \\ & q-FFL &.7662 &.0400(\(\times\)) &.0472(\(\times\)) &.0046(\(\times\)) \\ \cline{1-1} & Ditto &.7210 & **.0039**(\(\times\)) &.0169(\(\times\)) &.0111(\(\times\)) \\ \cline{1-1} & FedMDFG &.7629 &.0397(\(\times\)) &.0436(\(\times\)) &.0068(\(\times\)) \\ \cline{1-1} & FedAvg+FairBatch &.7756 &.0556(\(\times\)) &.036(\(\times\)) &.0128(\(\times\)) \\ \cline{1-1} & FedAvg+FairReg &.7663 &.0686(\(\times\)) &.0089(\(\times\)) &.0066(\(\times\)) \\ \cline{1-1} & FCFL &.7638 &.0487(\(\times\)) &.0143(\(\times\)) &.0159(\(\times\)) \\ \cline{1-1} & EFL &.7685 &.0281(\(\setminus\)) & **.0366**(\(\times\)) & **.0090**(\(\setminus\)) \\ \hline \multirow{5}{*}{eICU \(\epsilon_{b}=0.02\)} & FedAvg &.6560 &.0427(\(\times\)) &.0371(\(\times\)) &.0409(\(\times\)) \\ \cline{1-1} & eICU &.9-FFL & **.6656** &.0425(\(\times\)) &.0371(\(\times\)) &.0405(\(\times\)) \\ \cline{1-1} & Ditto &.6311 &.0216(\(\times\)) &.0472(\(\times\)) &.0447(\(\times\)) \\ \cline{1-1} & \(\epsilon_{b}=0.02\) & FedAvg+FairReg &.6479 &.0227(\(\times\)) &.0311(\(\times\)) &.0266(\(\times\)) \\ \cline{1-1} & \(\epsilon_{vl}=0.02\) & FedAvg+FairBatch &.6441 &.0413(\(\times\)) &.0304(\(\times\)) &.0298(\(\times\)) \\ \cline{1-1} & FCFL &.6530 &.0408(\(\times\)) &.0322(\(\times\)) &.0266(\(\times\)) \\ \cline{1-1} & FCFL &.6550 &.0272(\(\times\)) &.0344(\(\times\)) &.0246(\(\times\)) \\ \cline{1-1} & EFL &.6530 & **.0195**(\(\setminus\)) & **.0209**(\(\times\)) & **.0201**(\(\times\)) \\ \hline \hline \end{tabular}
* **xxx**: Best performance compared to all algorithms. \((\times)\): Violation of constraint exceeds 10%. \((\approx)\): Close to the constraint, with violation not exceeding 10%. \((\surd)\): Satisfies the constraint.
\end{table}
Table 1: The test performance on three datasets.
dataset experiments, the decision bias Avg. of FedAvg is \(0.2480\), so we set \(\epsilon_{b}=0.1\). The accuracy Std. of FedAvg is \(0.0283\), so we set \(\epsilon_{vl}=0.01\). The decision bias Std. of FedAvg is \(0.0819\), so we set \(\epsilon_{vb}=0.04\). We use the same parameter setting strategy for other datasets. Since the constraints may conflict with each other, this setting allows us to better evaluate the superior performance of our proposed EFFL method and avoid making a constraint too tight, which may result in a solution that is only optimal on this constraint.
**Evaluation Metrics.** To evaluate the effectiveness and equality of the global model's performance across all clients, we report the following evaluation metrics for both accuracy and decision bias: Avg., the average performance of the global model across all clients, and Std., the variation of the performance of the global model across clients. We utilize the TPSD defined in Eq. 4 as the metric for decision bias. The results under APSD are reported in Appx. B.4.1.
More details about the experimental implementation and additional experiments are given in Appx. B.
### Accuracy, Decision Bias and Egalitarian Fairness
We compare the global model performance of our proposed EFFL with other SOTA baselines on three datasets. In the EFFL problem setting, we introduce three types of constraints, \(\epsilon_{b}\), \(\epsilon_{vl}\) and \(\epsilon_{vb}\), which are used to control the decision bias, the equality of the accuracy distribution, and the equality of the decision bias distribution, respectively. As shown in Tab. 1, our method achieves better satisfaction of the three constraints, with strict satisfaction on the Synthetic and Adult datasets, and approximate satisfaction on the eICU dataset. The current SOTA baselines are not able to guarantee all three constraints simultaneously. In terms of accuracy, the methods that do not consider decision bias (FedAvg, q-FFL, Ditto, and FedMDFG) have higher accuracy. However, as there is a trade-off between accuracy and decision bias, we compare our method with the baselines that also aim to reduce decision bias (FedAvg+FairBatch, FedAvg+FairReg, FCFL). Our method achieves the best constraint satisfaction with only a 0.3% decrease in accuracy on the Synthetic dataset, a 0.7% decrease on the Adult dataset, and a 0.2% decrease on the eICU dataset.
Fig. 3 shows the convergence efficiency of EFFL within \(2000\) communication rounds on the three datasets. The experiments are repeated \(20\) times. We can observe that the optimization objective, the accuracy of the global model, converges as communication rounds progress, with the decision bias and the egalitarian fairness of accuracy and decision bias converging below the predefined budgets (bounded by the colored dashed lines in Fig. 3).
Figure 4 illustrates the distribution of model performance across various clients. We have conducted experiments on three distinct datasets. For a clearer visualization, we prioritize the display of baselines that also take into account decision bias (FedAvg+Fairbatch, FedAvg+FairReg, FCFL) on the eICU dataset, which encompasses 11 clients. The results clearly demonstrate that our proposed EFFL model ensures a more equitable performance distribution among clients, thereby indicating enhanced egalitarian fairness.

Figure 3: Testing results during \(2000\) communication rounds.

Figure 4: The distribution of model performance across different clients.
## 6 Conclusion
In this paper, we have investigated the egalitarian fairness issues in federated learning (FL), which have significant impacts on the sustainability of the FL system due to the **Matthew Effect**. We have analyzed the possible trade-offs for achieving egalitarian fairness and have formally defined **Egalitarian Fairness Federated Learning** (EFFL) as a multi-constrained multi-objectives optimization problem. Furthermore, we have designed an effective optimization path that decomposed the original problem into three sub-problems and proposed a three-stage algorithm to achieve Pareto optimal solutions under trade-offs. In the end, we have conducted a thorough empirical evaluation to demonstrate that our proposed method outperforms other state-of-the-art baselines in achieving a high-performance global model with enhanced egalitarian fairness among all clients.
|
2307.16694 | Investigating and Improving Latent Density Segmentation Models for
Aleatoric Uncertainty Quantification in Medical Imaging | Data uncertainties, such as sensor noise, occlusions or limitations in the
acquisition method can introduce irreducible ambiguities in images, which
result in varying, yet plausible, semantic hypotheses. In Machine Learning,
this ambiguity is commonly referred to as aleatoric uncertainty. In image
segmentation, latent density models can be utilized to address this problem.
The most popular approach is the Probabilistic U-Net (PU-Net), which uses
latent Normal densities to optimize the conditional data log-likelihood
Evidence Lower Bound. In this work, we demonstrate that the PU-Net latent space
is severely sparse and heavily under-utilized. To address this, we introduce
mutual information maximization and entropy-regularized Sinkhorn Divergence in
the latent space to promote homogeneity across all latent dimensions,
effectively improving gradient-descent updates and latent space
informativeness. Our results show that by applying this on public datasets of
various clinical segmentation problems, our proposed methodology receives up to
11% performance gains compared against preceding latent variable models for
probabilistic segmentation on the Hungarian-Matched Intersection over Union.
The results indicate that encouraging a homogeneous latent space significantly
improves latent density modeling for medical image segmentation. | M. M. Amaan Valiuddin, Christiaan G. A. Viviers, Ruud J. G. van Sloun, Peter H. N. de With, Fons van der Sommen | 2023-07-31T14:09:03Z | http://arxiv.org/abs/2307.16694v5 | Investigating and Improving Latent Density Segmentation Models for Aleatoric Uncertainty Quantification in Medical Imaging
###### Abstract
Data uncertainties, such as sensor noise or occlusions, can introduce irreducible ambiguities in images, which result in varying, yet plausible, semantic hypotheses. In Machine Learning, this ambiguity is commonly referred to as aleatoric uncertainty. Latent density models can be utilized to address this problem in image segmentation. The most popular approach is the Probabilistic U-Net (PU-Net), which uses latent Normal densities to optimize the conditional data log-likelihood Evidence Lower Bound. In this work, we demonstrate that the PU-Net latent space is severely inhomogeneous. As a result, the effectiveness of gradient descent is inhibited and the model becomes extremely sensitive to the localization of the latent space samples, resulting in defective predictions. To address this, we present the Sinkhorn PU-Net (SPU-Net), which uses the Sinkhorn Divergence to promote homogeneity across all latent dimensions, effectively improving gradient-descent updates and model robustness. Our results show that by applying this on public datasets of various clinical segmentation problems, the SPU-Net receives up to 11% performance gains compared against preceding latent variable models for probabilistic segmentation on the Hungarian-Matched metric. The results indicate that by encouraging a homogeneous latent space, one can significantly improve latent density modeling for medical image segmentation.
Probabilistic Segmentation, Aleatoric Uncertainty, Latent Density Modeling
## I Introduction
Supervised deep learning segmentation algorithms rely on the assumption that the provided reference annotations for training reflect the unequivocal ground truth. Yet, in many cases, the labeling process contains substantial inconsistencies in annotations by domain-level experts. These variations stem from inherent ambiguity in the data (e.g., due to occlusions, sensor noise, etc.), also referred to as the _aleatoric uncertainty_[1]. In turn, subjective interpretation by readers leads to multiple plausible annotations. This phenomenon is generally expressed in multi-annotated data, revealing that labels may vary within and/or across annotators (also known as the _intra-/inter-observer variability_). Arguably, the most significant impact of this phenomenon can be encountered in medical image segmentation (see Figure 1), where poorly guided decision-making by medical experts can have direct adverse consequences on patients [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. Given the severity of the involved risks, deep learning models that appropriately deal with aleatoric uncertainty can substantially improve clinical decision-making [12, 13]. Several deep learning architectures have been proposed for aleatoric uncertainty quantification. Presenting various plausible image segmentation hypotheses was initially enabled by the use of Monte-Carlo dropout [14, 15], ensembling methods [16, 17], or the use of multiple classification heads [18]. However, these methods approximate the weight distribution over the neural networks, i.e., the epistemic uncertainty, rather than the aleatoric uncertainty. Specifically, ensembling methods have been subject to criticism due to the lack of connection with Bayesian theory [19]. In this context, the PU-Net made significant advances by combining the conditional Variational Autoencoder (VAE) [20, 21] and U-Net [22]. In this case, a suitable objective is derived from the ELBO of the _conditional_ data log-likelihood, analogous to the unconditional VAE. Nonetheless, several works have recently highlighted the limitations of the PU-Net [23, 24, 25]. Most relevant to this work, which pertains to improving the latent space, is the augmentation of the posterior density with Normalizing Flows to boost expressivity [26, 27]. Also, Bhat _et al._[28] demonstrate changes in model performance subject to density-modeling design choices. Beyond this, the latent-space behaviour of the PU-Net has, to the best of our knowledge, not received much attention.
Fig. 1: Samples of the LIDC-IDRI dataset with significant inter-observer variability, where (dis-)agreement in the ground-truth masks is clearly visible.

Therefore, this work explores the latent space of the PU-Net. We find that it can possess properties that inhibit aleatoric uncertainty quantification. Specifically, the learned latent space successfully represents the data variability by adjusting the latent-density variances required to encapsulate the information variability. Nevertheless, this affects the performance during both optimization and deployment. During training, the inhomogeneous latent variances cause the network to be ill-conditioned, resulting in inefficient and unstable gradient-descent updates. At test time, the output is disproportionately sensitive to the localization of latent samples, which leads to erroneous segmentations. We find a clear relationship between the homogeneity of the latent-density singular values and the ability to accurately encapsulate the inter-observer variability. Therefore, we suggest considering the problem from the perspective of Optimal Transport (OT), rather than data log-likelihood maximization. This enables the introduction of a new model, the Sinkhorn PU-Net (SPU-Net), which uses the Sinkhorn Divergence in the latent space. The proposed SPU-Net architecture distributes the latent variances more evenly over all dimensions by retaining the variances of possibly non-informative latent dimensions, rather than directly minimizing them. As such, the decoder is better conditioned and gradient-descent optimization is significantly improved. Furthermore, the decoder is more robust to shifting latent densities, because the probability that the decoder receives unseen samples during test time is greatly reduced. To summarize, the contributions of this work are:
* Providing detailed insights into the effect of singular values on the quantification of aleatoric uncertainty with latent density models.
* Introducing the SPU-Net, which rephrases the optimization in the context of Optimal Transport and thereby significantly improves (up to 11%) upon preceding latent variable models for probabilistic segmentation.
The remainder of this paper is structured as follows. First, we present various theoretical frameworks, which includes the Variational Autoencoder, Probabilistic U-Net and Optimal Transport in Section II. Consequently, the OT problem is phrased in the context of probabilistic segmentation. Additionally, training and evaluation of the SPU-Net is introduced in Section III. Quantitative and qualitative results are shown in Section IV. Limitations of our work are discussed in Section V and finally, conclusions are drawn in Section VI.
## II Theoretical Background
This section provides an introduction to the VAE and PU-Net, where both models maximize the lower bound on the data log-likelihood. Additionally, the theory of Optimal Transport will be presented with the intention to enable the use of alternative divergence measures in latent space. In the remainder of this paper, we will use calligraphic letters (\(\mathcal{X}\)) for sets, capital letters (\(X\)) for random variables, and lowercase letters (\(x\)) for specific values. Furthermore, we will denote marginals as \(P_{X}\), probability distributions as \(P(X)\), and densities as \(p(x)\). Vectors and matrices are distinguished with **boldface** characters. Notation for data manifolds and their probability measures have carefully been adapted from the works of Dai _et al._[29] and Zheng _et al._[30].
### _Variational Autoencoders_
Let \(\mathbf{x}\in\mathcal{X}\) be an observable variable, taking values in \(\mathbb{R}^{D}\). We define \(\mathbf{z}\in\mathcal{Z}\) as a latent, lower-dimensional representation of \(\mathbf{x}\), taking values in \(\mathbb{R}^{d}\). It is assumed that \(\mathcal{X}\) possesses a low-dimensional structure in \(\mathbb{R}^{r}\), relative to the high-dimensional ambient space \(\mathbb{R}^{D}\). Thus, it is assumed that \(D\gg d\geq r\). In other words, the latent dimensionality \(d\) is greater than the intrinsic dimensionality \(r\) of the data. Let us also denote a ground-truth probability measure \(\mu_{gt}\) on \(\mathcal{X}\) such that \(\int_{\mathcal{X}}\mu_{gt}(d\mathbf{x})=1\), where \(\mu_{gt}(d\mathbf{x})\) pertains probability mass of infinitesimal \(d\mathbf{x}\) on the manifold. The VAE [20] aims to approximate the data distribution through \(\mathbf{z}\), by maximizing the Evidence Lower Bound (ELBO) on the data log-likelihood, described by
\[\log p(\mathbf{x}) \geq\mathbb{E}_{q_{\mathbf{\theta}}(\mathbf{z}|\mathbf{x})}\left[ \log\frac{p(\mathbf{x},\mathbf{z})}{q_{\mathbf{\theta}}(\mathbf{z}|\mathbf{x})}\right]\] \[\geq\mathbb{E}_{q_{\mathbf{\theta}}(\mathbf{z}|\mathbf{x})}\left[ \log p_{\mathbf{\phi}}(\mathbf{x}|\mathbf{z})\right]\] \[\quad-\mathrm{KL}\left[q_{\mathbf{\theta}}(\mathbf{z}|\mathbf{x}) \left|\,\middle|p(\mathbf{z})\right], \tag{1}\]
with tractable encoder \(q_{\mathbf{\theta}}(\mathbf{z}|\mathbf{x})\) (commonly known as the posterior) and decoder \(p_{\mathbf{\phi}}(\mathbf{x}|\mathbf{z})\) densities, amortized with parameters \(\mathbf{\theta}\) and \(\mathbf{\phi}\), respectively. Furthermore, \(p(\mathbf{z})\) is a fixed latent density and is commonly referred to as the prior. As a consequence of the mean-field approximation, the densities are modeled with axis-aligned Normal densities. Furthermore, to enable prediction from test images, optimization is performed through amortization with neural networks. Nevertheless, due to these approximations, this construction can be sub-optimal, reduce the effectiveness of the VAE and void the guarantee that high ELBO values indicate accurate inference [31, 32].
It has been discussed in the work of Dai _et al._[29] that the VAE is inclined to minimize the informative latent variances as much as possible, while matching the non-informative variances to the prior to satisfy the KL-divergence. Because of this, the VAE is well capable of learning the intrinsic dimensionality of the data. Nevertheless, the minimization of the latent variances induces an intrinsic bias. Namely, the VAE will progressively improve its approximation of the support of \(\mu_{gt}\) on the manifold \(\mathcal{X}\), while neglecting the probability measure itself. This has detrimental effects on the ancestral sampling capabilities of the VAE.
### _The Probabilistic U-Net_
Similar to the VAE, we can consider input image and ground-truth segmentation masks \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{D}\). Then, the ELBO of the _conditional_ data log-likelihood can be written as
\[\log p(\mathbf{y}|\mathbf{x}) \geq\mathbb{E}_{q_{\mathbf{\theta}}(\mathbf{z}|\mathbf{x},\mathbf{y})}\left[\log\frac{p(\mathbf{y},\mathbf{z}|\mathbf{x})}{q_{\mathbf{\theta}}(\mathbf{z}|\mathbf{x},\mathbf{y})}\right]\] \[\geq\mathbb{E}_{q_{\mathbf{\theta}}(\mathbf{z}|\mathbf{x},\mathbf{y})}\left[\log p_{\mathbf{\phi}}(\mathbf{y}|\mathbf{x},\mathbf{z})\right]\] \[\quad-\mathrm{KL}\left[q_{\mathbf{\theta}}(\mathbf{z}|\mathbf{x},\mathbf{y})\,||\,q_{\mathbf{\psi}}(\mathbf{z}|\mathbf{x})\right]. \tag{2}\]
We denote the probability measure of the conditioning variable as \(\nu_{gt}\), where \(\int_{\mathcal{X}}\nu_{gt}(d\mathbf{x})=1\). For any \(\mathbf{x}\in\mathcal{X}\) we have subset \(\mathcal{Y}_{\mathbf{x}}\subseteq\mathcal{Y}\) containing probability measure \(\omega_{gt}^{\mathbf{x}}\), with
\(\int_{\mathcal{Y}_{\mathbf{x}}}\omega_{gt}^{\mathbf{x}}(d\mathbf{y})=1\) and \(\int_{\mathcal{X}\times\mathcal{Y}_{\mathbf{x}}}\omega_{gt}^{\mathbf{x}}(d\mathbf{y})\,\nu_{gt}(d\mathbf{x})=1\). The predictive distribution of \(\mathbf{y}^{*}\) from test image \(\mathbf{x}^{*}\) can be obtained with
\[p(\mathbf{y}^{*}|\mathbf{x}^{*}):=\int_{\mathbb{R}^{d}}p_{\phi}(\mathbf{y}^{*}| \mathbf{z},\mathbf{x}^{*})q_{\boldsymbol{\psi}}(\mathbf{z}|\mathbf{x}^{*})d \mathbf{z}, \tag{3}\]
and test segmentations are obtained with ancestral sampling.
The success of the PU-Net for probabilistic segmentation can be attributed to several additional design choices in the implementation of the encoding-decoding structure. Firstly, an encoding \(\mathbf{z}\) is inserted pixel-wise at the final stages of a U-Net, followed by feature-combining \(1\times 1\) convolutions only. As a result, this significantly alters the \(r\)-dimensional manifold that \(\mathbf{z}\) attempts to learn. Namely, it learns the segmentation variability within a single image rather than features of the segmentation itself. Therefore, much smaller values of \(d\) are feasible. An illustration of the (S)PU-Net is provided in Figure 2. A crucial difference in the PU-Net formulation w.r.t. that of the VAE is the conditioning of the prior \(p_{\boldsymbol{\psi}}(\mathbf{z}|\mathbf{x})\), which is not fixed but rather learned. Although it has been argued by Zheng _et al._[30] that learning the prior density is theoretically equivalent to using a non-trainable prior, their respective optimization trajectories can differ substantially, such that the former method is preferred [33]. Consider the KL-divergence between two \(d\)-dimensional axis-aligned Normals \(\mathcal{N}_{1}\) and \(\mathcal{N}_{2}\) with identical mean
\[2\cdot\mathrm{KL}\left[\mathcal{N}_{1}\,||\,\mathcal{N}_{2}\,\right]=\sum_{i }\frac{\sigma_{2}^{i}}{\sigma_{1}^{i}}+\ln\frac{\sigma_{1}^{i}}{\sigma_{2}^{i} }-d. \tag{4}\]
Minimizing this expression entails finding \(\sigma_{1}^{i}=\sigma_{2}^{i}\) for all values of \(i\). At the same time, the singular values are progressively minimized [30] and the variance vectors \(\boldsymbol{\sigma}_{1}\) and \(\boldsymbol{\sigma}_{2}\) are not constrained in any fashion, and therefore arbitrarily distributed. This fact carries potentially detrimental implications for the decoder \(p_{\phi}(\mathbf{y}|\mathbf{x},\mathbf{z})\), as will be discussed in the next section.
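The snippet below simply evaluates Eq. (4) and illustrates the observation above: the divergence vanishes whenever the two variance vectors coincide, no matter how unevenly the shared variances are spread over the dimensions. Interpreting \(\sigma\) as the per-axis variance vector is an assumption about notation.

```python
import numpy as np

def kl_identical_means(sigma1: np.ndarray, sigma2: np.ndarray) -> float:
    """Eq. (4): 2*KL = sum(s2/s1 + ln(s1/s2)) - d for axis-aligned Normals
    with identical means, where sigma denotes the per-axis variances."""
    return 0.5 * (np.sum(sigma2 / sigma1 + np.log(sigma1 / sigma2)) - sigma1.size)

# The KL vanishes for any shared, possibly very inhomogeneous, variance vector
s = np.array([1e-3, 1.0, 50.0])
print(kl_identical_means(s, s))          # 0.0
print(kl_identical_means(s, 2.0 * s))    # > 0
```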
### _Decoder sensitivity_
A potential issue that can arise when training the PU-Net occurs when the model converges to solutions where respective latent dimensions variances vary significantly in magnitude. Namely, it causes the decoder to become extremely sensitive to the latent-space sample localization. The PU-Net decoder is dependent on the function \(\mathbf{f}_{\phi}(\mathbf{x},\mathbf{z})\), where \(p_{\phi}(\mathbf{y}|\mathbf{x},\mathbf{z})=\delta(\mathbf{y}-\mathbf{f}_{\phi }(\mathbf{x},\mathbf{z}))\), which intermediately involves latent variable \(\mathbf{z}\) to reconstruct a plausible segmentation hypothesis. The latent variable \(\mathbf{z}\) is controlled by the mean \(\boldsymbol{\mu}\) and singular values \(\boldsymbol{\sigma}\) of the underlying axis-aligned Normal density. If the input \(\mathbf{x}\) is considered to be fixed, then the relative condition number, \(\zeta_{i}\), of \(\mathbf{f}_{\phi}(\mathbf{x},\mathbf{z})\) can be interpreted as the sensitivity of its output w.r.t. its varying input \(z_{i}\), corresponding to the \(i\)-th latent dimension, which can be expressed as
\[\zeta_{i}=\lim_{\epsilon\to 0}\sup_{||\delta z_{i}||\leq\epsilon}\frac{|| \delta\mathbf{f}_{\phi}(\mathbf{x},z_{i})||\ /\ ||\mathbf{f}_{\phi}(\mathbf{x},\mathbf{z})||}{||\delta z_{i}||\ /\ || \mathbf{z}||}. \tag{5}\]
For convenient notation, only the varying \(z_{i}\) is included in \(\delta\mathbf{f}_{\phi}(\mathbf{x},z_{i})\) rather than the complete vector \(\mathbf{z}\). The condition number of a function has direct consequences for the numerical stability of its gradient updates. Suppose \(\sigma_{1}\gg\sigma_{2}\) and \(\mu_{1}\approx\mu_{2}\), then \(\zeta_{2}\gg\zeta_{1}\), and the function is said to be ill-conditioned. In turn, this skews the optimization landscape where the gradients w.r.t. \(z_{2}\) will dominate, leading to inefficient and unstable gradient descent updates. When considering that the dimensionality is usually higher than \(d=2\), the search towards an optimal minimum will rapidly suffer from irregular condition numbers [34]. For instance, normalization layers have been used in neural networks to smoothen the optimization landscape and have led to considerable improvements [35, 36, 37, 38, 39]. Hence, a mechanism that promotes uniform condition numbers will smoothen optimization and encourage similar performance gains. In the case of latent variable modeling, this pertains to encouraging latent variances that are approximately of the same magnitude. We hypothesize that obtaining better gradient estimates can also encourage the model to more accurately model the ground-truth probability measure \(\omega_{gt}^{\mathbf{x}}\).
During deployment of the model, another problem can occur due to inhomogeneous latent-space variances. As mentioned before, the prior density is learned through amortization. This implies that the prior should be able to sufficiently reflect the posterior during test time, such that the decoder receives samples similar to those seen during training. Undoubtedly, misalignment between the two densities is expected, potentially leading to low-probability samples from the prior. In the case of extreme differences between the respective posterior latent variances, a simple shift in the prior density mean can already cause out-of-distribution samples. This implies the decoder receives latent codes that were completely unseen during training, which in turn can cause highly inaccurate or even nonsensical predictions. Ideally, changes in sample localization should minimally affect a sample's probability under the posterior density. To achieve this robustness, the singular values of the latent densities should be as homogeneous as possible. We visualize this phenomenon in Figure 3.
Furthermore, past literature has mentioned enhancing the posterior density by augmenting it with Normalizing Flows [40], which was proposed to enhance the expressivity and complexity of the posterior distribution. This has been shown to improve the PU-Net [26, 27]. Nevertheless, it has been argued in previous works that the mean-field Gaussian assumption in the VAE is not necessarily the cause for the failure to learn the ground-truth manifold [29]. Additionally, the complexity of the posterior density is limited by the Normally-distributed prior density, which will be used during testing. Thus, further exploration of the influence of the NF-augmented posterior on the prior density is required.
### _Optimal Transport_
This section provides an introduction to the Optimal Transport (OT) framework. Let \(\mathcal{Y}\) and \(\hat{\mathcal{Y}}\) be two separable metric spaces. For the sake of clarity, the reader can assume that these sets contain the ground-truth and model predictions, respectively. We adopt the Monge-Kantorovich formulation [41] for the OT problem, by specifying
\[W(\mu,\hat{\mu}):=\inf\left\{\int_{\mathcal{Y}\times\hat{\mathcal{Y}}}c(\mathbf{y},\hat{\mathbf{y}})\,d\gamma(\mathbf{y},\hat{\mathbf{y}})\,\bigg{|}\,\gamma\in\Gamma(\mu,\hat{\mu})\right\}, \tag{6}\]
where \(\gamma\in\Gamma(\mu,\hat{\mu})\) denotes a coupling in the tight collection of all probability measures on \(\mathcal{Y}\times\hat{\mathcal{Y}}\) with marginals \(\mu\) and \(\hat{\mu}\), respectively. The function \(c(\mathbf{y},\hat{\mathbf{y}})\): \(\mathcal{Y}\times\hat{\mathcal{Y}}\rightarrow\mathbb{R}_{+}\) denotes any lower semi-continuous measurable cost function. Equation (6) is also commonly referred to as the Wasserstein distance. Furthermore, the usual context of this formulation is in finding the lowest cost of moving samples from the probability measures in \(\mathcal{Y}\) to the measures in \(\hat{\mathcal{Y}}\). In the case of probabilistic segmentation, the aim is to learn the marginal ground-truth distribution \(P_{Y,X}=\int p(\mathbf{y}|\mathbf{x})\mu_{gt}(d\mathbf{x})\), by matching it with the marginal model distribution \(P_{\hat{Y},X}=\int p(\hat{\mathbf{y}}|\mathbf{x})\mu_{gt}(d\mathbf{x})\).
The Wasserstein distance is tedious to compute in practice. As a solution, it has been proposed to introduce entropic regularization [42]. This is achieved by using the entropy of the couplings \(\gamma\) as a regularizing function, which is specified by
\[\tilde{S}_{\epsilon}(\mu,\hat{\mu}):=\inf\bigg{\{}\int_{\mathcal{Y}\times\hat{\mathcal{Y}}}s(\mathbf{y},\hat{\mathbf{y}})\,d\gamma(\mathbf{y},\hat{\mathbf{y}})\,\bigg{|}\,\gamma\in\Gamma(\mu,\hat{\mu})\bigg{\}}, \tag{7}\]
where
\[s(\mathbf{y},\hat{\mathbf{y}})=d(\mathbf{y},\hat{\mathbf{y}})+\epsilon\log \frac{d\gamma(\mathbf{y},\hat{\mathbf{y}})}{d\mu(\mathbf{y})d\hat{\mu}(\hat{ \mathbf{y}})}. \tag{8}\]
As mentioned by Cuturi _et al._[43], the entropy term can be expanded to \(\log(d\gamma(\mathbf{y},\hat{\mathbf{y}}))-\log(d\mu(\mathbf{y}))-\log(d\hat {\mu}(\hat{\mathbf{y}}))\). In this way, this formulation can be understood as constraining the joint probability to have _sufficient entropy_, or contain small enough _mutual information_ with respect to \(d\mu\) and \(d\hat{\mu}\). Cuturi _et al._[43] show that this entropic regularization allows optimization over a Lagrangian dual for faster computation with the iterative Sinkhorn matrix scaling algorithm. Additionally, the entropic bias is removed from the OT problem to obtain the Sinkhorn Divergence, specified as
\[S_{\epsilon}(\mu,\hat{\mu})=\tilde{S}_{\epsilon}(\mu,\hat{\mu})-\frac{1}{2} \left(\tilde{S}_{\epsilon}(\mu,\mu)+\tilde{S}_{\epsilon}(\hat{\mu},\hat{\mu}) \right). \tag{9}\]
The Sinkhorn Divergence interpolates between \(W_{p}\) when \(\epsilon\rightarrow 0\), with \(\mathcal{O}(\epsilon\log(\frac{1}{\epsilon}))\) deviation, and Maximum Mean Discrepancy when \(\epsilon\rightarrow\infty\), which favours dimension-independent sample complexity [44]. A viable option is to approximate the Sinkhorn Divergence via sampling with weights \(\alpha,\beta\in\mathbb{R}_{+}\). The performance of the regularized Sinkhorn algorithm degrades in low-temperature settings. To alleviate this limitation, as well as to increase efficiency, Kosowsky and Yuille [45] introduce _\(\epsilon\)-scaling_ or _simulated annealing_ to the Sinkhorn algorithm.
Various works have investigated constraining the latent densities by means of Optimal Transport [46]. Tolstikhin _et al._[47] introduce Wasserstein Autoencoders (WAEs), which softly constrain the latent densities with a Wasserstein penalty term, a metric that emerged from OT theory [48]. The authors demonstrate that the WAEs exhibit better sample quality while maintaining the favourable properties of the VAE. Similarly, Patrini _et al._[49] introduce the Sinkhorn Autoencoder (SAE), in which the Sinkhorn algorithm [43] - an approximation of the Wasserstein distance - is leveraged as a latent divergence measure.

Fig. 3: Axis-aligned Normal density singular values and its effect on decoder sensitivity. a) If the singular values are equal, then the decoding function is well-conditioned and robust to sample sensitivity. A slight shift in sample localization \((\Delta x,\Delta y)\) does not push the vector off the density support b) If the difference in relative magnitude of the singular values is significant, the decoding function is ill-conditioned and a slight shift in sample localization causes it to be completely out-of-distribution.

Fig. 2: Schematic drawing of the Probabilistic U-Net training framework introduced by Kohl _et al._[33], where the latent density constraint is purely KL-divergence. The architecture maximizes the Evidence Lower Bound of the conditional data log-likelihood. The ground truth is denoted as Y, the input image as X, and the model prediction as Y'. During testing, only the prior density \(p_{\psi}(\mathbf{z}|\mathbf{x})\) is used to predict samples. Besides the KL-divergence, the SPU-Net additionally constrains the latent space with the Sinkhorn Divergence.
## III Methods
It has been argued that the PU-Net can converge to sub-optimal latent representations. In this section, we introduce the SPU-Net, which uses the Sinkhorn Divergence between the conditional latent densities. The utilized entropic regularization can induce favourable latent properties for aleatoric uncertainty quantification.
### _SPU-Net_
By using the theoretical frameworks of Bousquet _et al._[46], generative latent variable models can be trained from the perspective of OT. Most works employ this technique to provide an alternative to the conventional VAE objective [47, 49]. We specifically consider the optimization problem in the conditional setting for probabilistic segmentation and have framed the previous literature according to this context. Consequently, we can elegantly present an alternative training objective for the PU-Net.
Rather than maximizing the conditional data log-likelihood, the OT between the ground truth and model distributions is minimized. We consider the random variables \((X,Y,\hat{Y},Z)\in\mathcal{X}\times\mathcal{Y}\times\hat{\mathcal{Y}}\times \mathcal{Z}\), which correspond to the input image, ground-truth segmentation, model prediction and latent code, respectively. Then, we denote the joint distribution \(P_{Y,Z,X}\) where a latent variable is obtained as \(X\sim P(X)\) followed by \(Z\sim P(Z|X)\), and ground truth is obtained with \(Y\sim P(Y|Z,X)\). The OT problem considers couplings \(\Gamma(Y,\hat{Y}|X)=\Gamma(\hat{Y}|Y,X)P(Y)\), where \(\Gamma(\hat{Y}|Y,X)\) can be factored with a non-deterministic mapping through latent variable \(Z\), as is shown by Bousquet _et al._[46]. In fact, considering a probabilistic encoder \(Q(Z|Y,X)\), deterministic decoder \(f_{\phi}\) and the set of joint marginals \(\mathcal{P}_{Y,Z,X}\), the optimal solution of the OT problem can be stated as
\[W(P_{Y},P_{\hat{Y}})=\inf_{P\in\mathcal{P}_{Y,Z,X}}\mathbb{E}_{P_{Y,Z,X}}\left[c(Y,f_{\phi}(Z,X))\right] \tag{10a}\] \[=\inf_{\mathcal{Q}:Q_{Z}=P_{Z}}\mathbb{E}_{P_{Y,X}}\mathbb{E}_{Z\sim Q}\left[c(Y,f_{\phi}(Z,X))\right], \tag{10b}\]
with induced aggregated prior and posterior, \(P_{Z}=\mathbb{E}_{P_{X}}\left[P(Z|X)\right]\) and \(Q_{Z}=\mathbb{E}_{P_{Y,X}}\left[Q(Z|Y,X)\right]\), respectively (see Theorem 1 of Bousquet _et al._[46] for further details). Equation (10a) follows by definition and Equation (10b) indicates that this search can be factored through a probabilistic encoder, which attempts to match the marginal \(P_{Z}\). In this context, the optimization runs over probabilistic encoders \(\mathcal{Q}\) rather than the couplings \(\Gamma\). Continuously, the constraint on the aggregated marginals can be relaxed with a convex penalty \(D:\mathcal{Q}\times\mathcal{P}\rightarrow\mathbb{R}_{+}\) such that \(D(Q_{Z},P_{Z})=0\) if, and only if \(Q_{Z}=P_{Z}\). Tolstikhin _et al._[47] demonstrate that when choosing \(D\) to be p-Wasserstein, defined as
\[W_{p}(Q_{\mathrm{Z}},P_{\mathrm{Z}}):=\\ \inf\left\{\left(\int_{\mathcal{Q}\times\mathcal{P}}d(\mathbf{q}, \mathbf{p})^{p}\,d\gamma(\mathbf{q},\mathbf{p})\right)^{\frac{1}{p}}\left| \gamma\in\Gamma(Q_{\mathrm{Z}},P_{\mathrm{Z}})\right\}, \tag{11}\]
the following equality can be established
\[W(P_{\mathrm{Y}},P_{\hat{\mathrm{Y}}})=\inf_{Q\in\mathcal{Q}}\sqrt[p]{\mathbb{E}_{P_{Y,X}}\mathbb{E}_{Z\sim Q}[c(\mathrm{Y},f_{\phi}(Z,X))^{p}]}\\ +\delta\cdot W_{p}(Q_{\mathrm{Z}},P_{\mathrm{Z}}), \tag{12}\]
where \(f_{\phi}\) is at least \(\delta\)-Lipschitz. We highlight the importance of this continuity requirement, because it is directly related to the sensitivity of \(f_{\phi}\), and thus the effectiveness of gradient descent methods. Patrini _et al._[49] make use of the Sinkhorn iterations to approximate \(W_{p}\). We adopt this strategy to encourage full utilization of the latent space, with the hypothesis that this property will improve the condition number of the posterior. In turn, this will decrease the probability of unlikely samples due to inaccurate prior-density estimates during testing and possibly improve the estimation of the ground-truth probability measure. As a consequence, the model is overall more robust and defective predictions are prevented.
When using OT solutions, constraining the localization of the prior latent samples is important. A valid option is to subject the samples to a characteristic kernel, which is restricted to a Reproducing Kernel Hilbert Space. However, to respect the flexible nature of the parameterized prior density, we simply constrain by means of an additional KL-divergence term. To deal with a varying number of annotations, we constrain the individual posteriors as an approximation to the divergence of the aggregated densities. This also enables random sampling of ground-truth masks, which has been shown to be effective when dealing with multi-annotated data [9, 33]. As a result, the minimization objective of the SPU-Net can be stated as
\[\mathcal{L}=-\mathbb{E}_{q_{\theta}(\mathbf{z}|\mathbf{x},\mathbf{ y})}\left[\log p_{\phi}(\mathbf{y}|\mathbf{x},\mathbf{z})\right]\\ +\alpha\cdot\mathcal{S}_{\epsilon}\left[q_{\theta}(\mathbf{z}| \mathbf{x},\mathbf{y})\,||\,p_{\phi}(\mathbf{z}|\mathbf{x})\right]\\ +\beta\cdot\mathrm{KL}\left[q_{\theta}(\mathbf{z}|\mathbf{x}, \mathbf{y})\,||\,p_{\phi}(\mathbf{z}|\mathbf{x})\right], \tag{13}\]
where \(\alpha\) and \(\beta\) are tunable parameters.
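A minimal sketch of Eq. (13) is given below, using the debiased Sinkhorn Divergence from the GeomLoss package (see Section III-C) between batches of posterior and prior latent samples and a closed-form KL term between the axis-aligned Normal densities. The tensor names, the binary cross-entropy reconstruction term and the `blur` value are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Normal, kl_divergence
from geomloss import SamplesLoss

sinkhorn = SamplesLoss(loss="sinkhorn", p=2, blur=0.05)   # debiased Sinkhorn Divergence

def spu_net_loss(logits, y, mu_q, sigma_q, mu_p, sigma_p, z_q, z_p,
                 alpha=10.0, beta=10.0):
    """Sketch of Eq. (13): reconstruction + alpha * Sinkhorn + beta * KL.

    logits: decoder output for posterior samples; y: sampled ground-truth mask;
    (mu_q, sigma_q) / (mu_p, sigma_p): posterior / prior density parameters;
    z_q, z_p: latent samples from posterior and prior, shape (batch, d).
    """
    recon = F.binary_cross_entropy_with_logits(logits, y, reduction="mean")
    s_eps = sinkhorn(z_q, z_p)                            # divergence between latent samples
    kl = kl_divergence(Normal(mu_q, sigma_q),
                       Normal(mu_p, sigma_p)).sum(dim=-1).mean()
    return recon + alpha * s_eps + beta * kl
```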
### _Data_
This study employs a diverse range of post-processed, multi-annotated medical image segmentation datasets, which exhibit significant variations in the ground truth, to train and evaluate the proposed models. The utilization of publicly available datasets in this research underscores the clinical significance and the imperative for models to capture the inherent ambiguity present within these images. Firstly, we use the LIDC-IDRI [50] dataset, a popular benchmark for aleatoric uncertainty quantification, which contains lung-module CT scans. Secondly, we use the Pancreas and Pancreatic-lesion CT datasets and, finally, the Prostate and Brain-growth MRI datasets provided by the QUBIQ 2021 Challenge [51]. See Table I for more details on these datasets. It is assumed that
the region of interest is detected and the desired uncertainty solely concerns its exact delineation. Therefore, each image is center-cropped to 128\(\times\)128-pixel patches. Furthermore, a train/validation/test split is used, where approximately 20% of the dataset is reserved for testing and three-fold cross-validation is used on the remaining 80%. We carefully split the data according to patient number to avoid data leakage or model bias.
### _Training_
The proposed SPU-Net is compared to the PyTorch implementation of the PU-Net [33] and its NF-augmented variant (with a 2-step planar flow) [26], which we will refer to as the PU-Net+NF. We point readers to the respective papers for further details on both the PU-Net and PU-Net+NF architectures. Our code implementation of the SPU-Net is publicly available1.
Footnote 1: Code repository will be shared after publication
For each particular dataset, we train the PU-Net, PU-Net+NF and SPU-Net. Each of those models is trained on three folds using cross-validation. Furthermore, the experiments per model and dataset are conducted across four latent dimensionalities. All hyperparameters across experiments are kept identical, except for the number of training epochs and parameters related to the Sinkhorn Divergence. This implies, across all datasets, \(\alpha\) and \(\beta\) set to \(10\), a batch size of \(32\), the Adam optimizer with a maximum learning rate of \(10^{-4}\) scheduled by warm-up and cosine annealing, and a weight decay of \(10^{-5}\). For the LIDC-IDRI dataset we set the number of training epochs to 200 and for all QUBIQ datasets we trained for 2,500 epochs. After training, we used the model checkpoint with the lowest validation loss. This was usually reached much earlier than the set number of iterations. Additionally, we found that clipping the gradient to unit norm was essential for the SPU-Net. To implement the Sinkhorn Divergence, we make use of the GeomLoss [52] package. Also, data augmentation has been used during training, which consisted of random rotations, translations, scaling and shears.
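For concreteness, a sketch of the optimization setup described above is shown below: Adam with a maximum learning rate of \(10^{-4}\), weight decay of \(10^{-5}\), a warm-up followed by cosine annealing, and gradient clipping to unit norm. The warm-up length and the placeholder names (`model`, `train_loader`, `loss_fn`) are assumptions for illustration only.

```python
import math
import torch

def train(model, train_loader, loss_fn, total_epochs=200, warmup_epochs=10):
    """Adam + warm-up/cosine-annealed learning rate + unit-norm gradient clipping.

    loss_fn(model, x, y) is a placeholder for the full objective, e.g. Eq. (13).
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)

    def warmup_cosine(epoch):
        if epoch < warmup_epochs:                          # linear warm-up
            return (epoch + 1) / warmup_epochs
        progress = (epoch - warmup_epochs) / max(1, total_epochs - warmup_epochs)
        return 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine annealing

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=warmup_cosine)
    for _ in range(total_epochs):
        for x, y in train_loader:
            optimizer.zero_grad()
            loss_fn(model, x, y).backward()
            # Clipping the gradient to unit norm was found essential for the SPU-Net
            torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
            optimizer.step()
        scheduler.step()
```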
### _Evaluation_
#### III-D1 Data distribution
To evaluate the model performances, Hungarian Matching has recently been used as an alternative to the Generalized Energy Distance (GED). This is because the GED can be biased to simply reward sample diversity when subjected to samples of poor quality [24, 26]. We refer to Hungarian Matching as the Empirical Wasserstein Distance, abbreviated as EWD or \(\hat{W}_{k}\), since it is essentially equivalent to the Wasserstein distance between the discrete set of samples from the model and the ground-truth distribution. Hence, the EWD is a well-suited metric, since the latent samples are matched in a similar fashion as when using the Sinkhorn Divergence.
We implement the EWD as follows. Each set of ground-truth segmentations is replicated to match the number of predictions \(N\) obtained during testing. We then apply a cost metric \(k\) to each combination of elements in the ground-truth and prediction set, to obtain an \(N\times N\) cost matrix. Subsequently, the unique optimal coupling between the two sets that minimizes the average cost is determined. We use unity minus the Intersection over Union (1 - IoU) as the cost function \(k\). We assign the maximum score to correct empty predictions, which implies a cost of zero for the EWD. During test time, we sample 16 predictions for the LIDC-IDRI, Pancreas and Pancreatic-lesion datasets, 18 predictions for the Prostate and 21 predictions for the Brain-growth dataset, which were specifically chosen to be multiples of the number of available annotations.
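A sketch of this EWD computation, using the Hungarian algorithm from SciPy with a (1 - IoU) cost, is given below. The function names are illustrative; the tiling of the ground-truth set and the zero cost assigned to correct empty predictions follow the description above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou_cost(gt: np.ndarray, pred: np.ndarray) -> float:
    """1 - IoU between two binary masks; a correct empty prediction costs 0."""
    gt, pred = gt.astype(bool), pred.astype(bool)
    union = np.logical_or(gt, pred).sum()
    if union == 0:                       # both masks empty -> maximum score
        return 0.0
    return 1.0 - np.logical_and(gt, pred).sum() / union

def empirical_wasserstein(gt_masks, pred_masks) -> float:
    """Hungarian-matched mean (1 - IoU) between ground-truth and predicted sets."""
    reps = int(np.ceil(len(pred_masks) / len(gt_masks)))
    gt_set = (list(gt_masks) * reps)[: len(pred_masks)]   # replicate to N masks
    cost = np.array([[iou_cost(g, p) for p in pred_masks] for g in gt_set])
    row, col = linear_sum_assignment(cost)                # unique optimal coupling
    return cost[row, col].mean()
```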
#### III-D2 Latent distribution
To concretely measure inhomogeneous latent singular values of the prior density, one can consider two perspectives. In both cases, the acquisition of the singular value vector \(\mathbf{\sigma}=[\sigma_{1},\sigma_{2},...,\sigma_{d}]\) is required. Since the densities are axis-aligned Normals, \(\mathbf{\sigma}\) can be directly obtained from the diagonal of the covariance matrix and Singular Value Decomposition is not required. The first method uses \(\mathbf{\sigma}\) to determine the Effective Rank (ER) [53] of the covariance matrix, which can be regarded as an extension of the conventional matrix rank. Hurley _et al._[54] argue that entropy-based metrics, such as the ER, are sub-optimal as a sparsity measure according to their posed criteria. A different perspective indicates that a poor latent space is essentially a sparse \(\mathbf{\sigma}\). Instead, the authors indicate that the Gini Index (GI) is more appropriate to quantify sparsity. Therefore, the Gini index, which is calculated with
\[\mathrm{GI}(\mathbf{\sigma})=1-2\sum_{k=1}^{d}\frac{\sigma_{k}}{\|\mathbf{\sigma}\|_{1}}\left(\frac{d-k+\frac{1}{2}}{d}\right), \tag{14}\]
will be used to evaluate the latent-space homogeneity.
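A small sketch of the Gini-index computation is given below. Following Hurley _et al._[54], the entries of \(\mathbf{\sigma}\) are sorted in ascending order before evaluating the sum; this sorting step is an assumption, as it is not stated explicitly above.

```python
import numpy as np

def gini_index(sigma: np.ndarray) -> float:
    """Gini index of a singular-value vector: ~0 for a homogeneous (non-sparse)
    latent space, values approaching 1 for a sparse one."""
    s = np.sort(np.abs(sigma))                  # ascending order
    d = s.size
    k = np.arange(1, d + 1)
    return 1.0 - 2.0 * np.sum((s / s.sum()) * ((d - k + 0.5) / d))

print(gini_index(np.ones(6)))                    # 0.0 (perfectly homogeneous)
print(gini_index(np.array([1e-4] * 5 + [1.0])))  # close to 1 (sparse)
```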
## IV Results & Discussion
To validate our implementation of the baseline, we have conducted a GED evaluation on the PyTorch implementation of the PU-Net. When evaluating the PU-Net using the LIDC-IDRI dataset, we have obtained GED values of \(0.327\pm 0.003\), which closely align with the reported values in the work of Kohl _et al._[24], yet exhibit significantly reduced standard deviation. Moreover, the Hungarian-Matched IoU values surpass the values reported by Kohl _et al._[24]. These discrepancies can be potentially attributed to differences in training splits and model initialization. Nonetheless, these findings underscore the comparable performance of our experiments, warranting a reliable basis for accurate comparisons.
\begin{table}
\begin{tabular}{c|l|c c c} \hline \hline \multicolumn{2}{c}{**Dataset**} & **Modality** & **\# readers** & **\# images** \\ \hline \multicolumn{2}{c}{LIDC-IDRI} & CT & \(4\) & \(15096\) \\ \hline \multirow{4}{*}{QUBIQ} & Pancreas & CT & \(2\) & \(869\) \\ & Pancreatic lesion & CT & \(2\) & \(238\) \\ & Prostate & MRI & \(6\) & \(52\) \\ & Brain growth & MRI & \(7\) & \(39\) \\ \hline \hline \end{tabular}
\end{table} TABLE I: Details on the modality and the number of readers and images of the used datasets.
The quantitative results from the conducted experiments are presented in Table II. Optimal values for \(d\) are found to reside within the range of \(4\!\leq\!d\!\leq\!7\). For each dataset, the best-performing model across the experimented latent dimensionalities is indicated in boldface. It can immediately be observed that the SPU-Net is the best-performing model within and across each latent dimensionality in terms of aleatoric uncertainty quantification (visible from the EWD metric). Additionally, a positive correlation is apparent between the latent-space homogeneity (GI) and the EWD. The SPU-Net evenly distributes the variances over all latent dimensions, which results in the best performance in terms of the EWD. This is in contrast to the PU-Net, which has a sparse latent space (i.e., a high GI) and generally performed worse. The results for the PU-Net+NF support this empirical correlation as well, since its metric evaluations often reside between the former two in terms of both performance and latent-space sparsity.
It is generally known that both the PU-Net and PU-Net+NF utilize values of \(d\) higher than the intrinsic inter-observer variability in the data. If the SPU-Net more appropriately deals with the latent dimensions, it can outperform the other two models with a smaller value of \(d\), even if the latter models have the potential to encapsulate more information. Interestingly, this case is apparent in our quantitative results. The SPU-Net often performs better with a lower latent dimensionality than the PU-Net with larger values of \(d\). A good example is found in the Pancreas dataset evaluations. For the PU-Net, the best results are found for \(d\!=\!7\), while the SPU-Net achieves better performance with a significantly lower value (\(d\!=\!4\)). This observation challenges the idea that tuning the latent dimensionality implies finding the inherent \(r\)-dimensional manifold. For reasons yet unknown, the results strongly indicate that the uninformative dimensions are required to reach the best possible performance in the PU-Net. This is possibly due to the parameterized prior density, which can theoretically be fixed. Nonetheless, it is clear that homogenizing the latent variances improves model performance, which enables the SPU-Net to outperform the PU-Net, regardless of the setting of the hyperparameter \(d\). Additionally, on the Pancreatic-lesion dataset, it is apparent that the SPU-Net outperforms the PU-Net+NF (\(d\!=\!7\)) with a smaller latent dimension (\(d\!=\!5\)). These findings further support the claim that a more homogeneous latent space improves model performance.
To further investigate the effect of latent-space sparsity on model performance, we have tracked the Gini index behaviour during model training. This is visualized for experiments with optimal values of \(d\) with respect to the SPU-Net, and can be observed in Figure 4. At first glance, it is clear that the Prostate and Brain-growth datasets result in different behaviour compared to the other datasets. This can be due to the different acquisition methods (i.e., MRI) or the fact that the datasets are substantially smaller in size. Specifically for the Brain-growth dataset, this difference can also be attributed to the channel-wise concatenation of three imaging modalities, rather than inference with a single-channel input. Therefore, we initially shift our focus for this analysis to the LIDC-IDRI, Pancreas and Pancreatic-lesion datasets.
It is clear that all three models are subject to some volatility in the Gini index during early-stage training. This can range from a simple draw-down to several fluctuations. In Figure 4, it can be noticed that the SPU-Net (green) rapidly declines to a homogeneous latent representation thereafter, while the PU-Net (red) remains relatively sparse (i.e., a high Gini index). Also, it can be noted that the volatility is generally more intense for the PU-Net+NF (blue) model and more time is required to reach a stable Gini index. We attribute these findings to the addition of an NF to the posterior density, which introduces stochasticity due to a non-closed-form KL-divergence. Whereas the augmented NF increases posterior expressivity, the additional stochasticity in training inhibits effective optimization. However, the SPU-Net has the benefit of more appropriate latent space modeling as well as a closed-form KL-divergence for stable training.
Table II: \(\hat{W}_{k}\) (lower is better) and the Gini index of the prior density for the PU-Net, PU-Net+NF and SPU-Net at latent dimensionalities \(d=4\) to \(d=7\) on the evaluated datasets, with the SPU-Net generally attaining the lowest values of both metrics.
Given that stochasticity during training is undesired, it is important that the volatility is low and short-lived, which yields smoother gradients and a better-behaved loss landscape. Two specific cases in Table II for the Pancreas dataset effectively demonstrate this undesired volatility. The Gini indices for these two cases are therefore depicted over the first training quartile in Figure 5. For \(d\!=\!5\), it can be seen that the PU-Net+NF model reaches a Gini index similar to that of the SPU-Net, yet its performance is much worse. This is because the PU-Net+NF suffers a more pronounced draw-down than the SPU-Net, leading to unstable gradient-descent updates. For \(d\!=\!7\), the PU-Net and PU-Net+NF perform similarly even though their Gini indices differ significantly. Here, the volatility causes the PU-Net+NF to perform just as poorly as the PU-Net, despite its lower Gini index. These results further establish that volatility during early-stage training inhibits effective gradient descent.
Applying the aforementioned analysis to the Prostate and Brain-growth datasets is less reliable, since their training curves in Figure 4 largely coincide. Notably, the PU-Net+NF performs worse than the PU-Net on both datasets. We hypothesize that this is related to the early-stage volatility of the Gini index, although this is difficult to confirm from the figures.
To better understand the distribution of the latent dimensions, the means and variances of the image-conditional prior densities are depicted in Figure 6. Here, it can be clearly seen that the PU-Net models the ambiguity on a very sparse latent representation. The PU-Net+NF distributes this better, which improves the conditioning of the decoder. More specifically, previous works have argued that augmentation with NFs results in a more complex and expressive posterior. Our results show that the introduced technique regularizes the relative condition numbers of the decoder by stretching the non-informative latent variances. In the context of gradient descent, augmenting with NFs can be considered a form of preconditioning, smoothing the optimization through an appropriate transformation of the latent samples. Even though the NF-augmented posterior density can adopt many shapes, we have observed in the conducted experiments that it only slightly deviates from an axis-aligned Normal density. This is because the PU-Net+NF is still constrained by the Gaussian nature of the prior density. Therefore, the effect of an improved posterior approximation does not manifest itself beyond a prior latent space of reduced sparsity. Finally, in line with the previously discussed results, it can be concluded that the SPU-Net has the most homogeneous prior latent density.
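As a toy illustration of this conditioning argument, the sketch below (our own simplification, not the paper's metric) contrasts the spread of prior scales in a sparse versus a stretched latent space.

```python
import numpy as np

def scale_condition(sigmas):
    """Ratio of the largest to the smallest prior std. dev.: a crude proxy
    for how ill-conditioned the decoder's input statistics are."""
    s = np.asarray(sigmas, dtype=float)
    return s.max() / s.min()

sparse_prior = [1.0, 1e-3, 1e-3, 1e-3]   # PU-Net-like: one informative + collapsed dims
stretched    = [1.0, 0.8, 0.9, 1.1]      # SPU-Net-like: homogeneous variances
print(scale_condition(sparse_prior))     # ~1000: small sample shifts dominate some axes
print(scale_condition(stretched))        # ~1.4: perturbations are treated roughly equally
```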
The quantitative evaluation confirms the superior performance of the SPU-Net. Nevertheless, the model deployment in clinical settings will mainly entail visual evaluation. Therefore, we qualitatively inspect the means and standard deviations of the sampled predictions of the (S)PU-Net in Figures 7, 8, 9, 10 and 11. Notably, across all the datasets it can be observed that the SPU-Net more accurately delineates the uncertain areas (comparing the ground truth to the std. dev. column) of the lesion and is less prone to defective segmentations. These results further confirm our hypothesis that the latent space of the PU-Net is unfavourable. Specifically, the decoder is too
sensitive to small shifts in latent-sample localization. During test time, the predicted prior density is often misaligned with the restrictive posterior manifold and, therefore, the decoder receives very unlikely or even unseen samples. As a consequence, defective reconstructions are produced. For example, in the LIDC-IDRI dataset (Figure 7), it can be observed that some samples are empty, although there is an obvious presence of a malignant lesion. In the QUBIQ Pancreas, Pancreatic-lesion and Prostate predictions (Figures 8, 9 and 10), severe defects occur surrounding the areas of interest. It is very clear that the SPU-Net does not predict these defects. This is because its learned posterior manifold is stretched, which makes the decoder more robust to sample shift. The respective difference between the two models on the QUBIQ Brain-growth dataset (Figure 11) is less apparent. Even though both models are free from severe misclassifications, under closer inspection it can be noticed that the uncertainty is modeled more accurately by the SPU-Net. This suggests that the model is not only more robust, but also seems to have learned a more accurate probability measure.
Fig. 4: The Gini indices (lower is better) of the prior distribution during the 1st training quartile. The models differ with respect to each other according to two elements, namely the volatility in early-stage training and the eventual converged value.
Fig. 5: The Gini indices of the prior distribution during training. For the Pancreas dataset, it can be seen that the values corresponding to the PU-Net+NF model are more volatile in the early stages of training compared to the other two datasets.
In a similar vein, segmentation techniques based on different approaches have been introduced as well [55, 56, 57]. In particular, Denoising Diffusion Probabilistic Models (DDPMs) have recently received substantial attention [58, 59, 60, 61, 62, 63, 64, 65]. Even though the segmentation results of DDPMs are promising, their inference procedure is extremely slow, which is especially cumbersome when sampling multiple predictions for uncertainty quantification. Additionally, modeling the ambiguity in latent space is a more natural approach, since the uncertainty in latent space is interpretable and semantically more relevant than pixel-level modeling. For example, it offers an opportunity to dissect components of the uncertainty that relate to image localization or annotation style. Furthermore, latent-variable modeling can aid in learning the lower-dimensional manifold intrinsic to the data, which can assist further development of more efficient compression-based algorithms.
To conclude, our results indicate sub-optimal latent spaces of the PU-Net(+NF). Namely, the PU-Net learns a restrictive manifold that minimizes the variances of the non-informative latent dimensions. While accurately estimating data manifolds can generally be seen as a desirable property, we have found that this causes the decoder to become ill-conditioned, due to the highly inhomogeneous latent variances. Another consequence is that the decoder becomes extremely sensitive to sample localization, which can cause many erroneous predictions. The PU-Net+NF is able to alleviate this by learning more homogeneous latent variances. However, after inspection of both the qualitative and quantitative results, it is clear that the SPU-Net circumvents this issue more effectively, with superior performance in uncertainty quantification. The results also suggest that, thanks to this property, the estimated probability measure within the learned manifold can improve as well.
collapsing to minimal variance. However, adequate analysis has not yet been provided beyond this statement. It is important to understand which element of the entropic regularization contributes towards obtaining the desired latent space, so that further progress can be made in developing latent-density models. Secondly, this work attributes the resulting restrictive latent manifold to the KL-constraint in the ELBO formulation; however, this term still remains in the training objective. Experiments with an alternative to the KL-divergence should be conducted to explore whether additional performance gains are possible. Finally, the SPU-Net circumvents the requirement of comparing the aggregated prior and posterior, in order to enable random sampling from the data and retain a simplified training procedure. Nevertheless, additional experimentation should be conducted with aggregated densities to achieve an architecture closer to the Wasserstein objective.
## VI Conclusion
In this work, we have evaluated the Probabilistic U-Net on several multi-annotated datasets and have found that its performance is significantly inhibited by the nature of the latent space. More specifically, it is shown that training the Probabilistic U-Net results in severely inhomogeneous singular values of the Normal latent densities, which cause the model to become ill-conditioned and extremely sensitive to latent-sample localization. To alleviate this issue, we have introduced the Sinkhorn PU-Net (SPU-Net), which encourages uniform latent-space variances, resulting in a more robust model and improved probabilistic image segmentation for aleatoric uncertainty quantification. For future work, this research will be applied to multi-resolution probabilistic segmentation models, which similarly model the uncertainty in latent densities, with the aim of surpassing state-of-the-art performance. This research paves the way for improved uncertainty quantification for image segmentation in the medical domain, thereby assisting clinicians with surgical planning and patient care. Nevertheless, this work is by no means limited to segmentation or the medical domain: other fields that aim to encapsulate data variability in latent densities can benefit from this research as well.
|
2309.09218 | On the escape of low-frequency waves from magnetospheres of neutron
stars | We study the nonlinear decay of the fast magnetosonic into the Alfv\'en waves
in relativistic force-free magnetohydrodynamics. The work has been motivated by
models of pulsar radio emission and fast radio bursts (FRBs), in which the
emission is generated in neutron star magnetospheres at conditions when not
only the Larmor but also the plasma frequencies significantly exceed the
radiation frequency. The decay process places limits on the source luminosity
in these models. We estimated the decay rate and showed that the phase volume
of Alfv\'en waves available for the decay of an fms wave is infinite. Therefore
the energy of fms waves could be completely transferred to the small-scale
Alfv\'en waves not via a cascade, as in the Kolmogorov turbulence, but
directly. Our results explain the anomalously low radio efficiency of the Crab
pulsar and show that FRBs could not be produced well within magnetar
magnetospheres. | Ephim Golbraikh, Yuri Lyubarsky | 2023-09-17T09:12:22Z | http://arxiv.org/abs/2309.09218v1 | # On the Escape of Low-Frequency Waves from Magnetospheres of Neutron Stars
###### Abstract
We study the nonlinear decay of the fast magnetosonic into the Alfven waves in relativistic force-free magnetohydrodynamics. The work has been motivated by models of pulsar radio emission and fast radio bursts (FRBs), in which the emission is generated in neutron star magnetospheres at conditions when not only the Larmor but also the plasma frequencies significantly exceed the radiation frequency. The decay process places limits on the source luminosity in these models. We estimated the decay rate and showed that the phase volume of Alfven waves available for the decay of an fms wave is infinite. Therefore the energy of fms waves could be completely transferred to the small-scale Alfven waves not via a cascade, as in the Kolmogorov turbulence, but directly. Our results explain the anomalously low radio efficiency of the Crab pulsar and show that FRBs could not be produced well within magnetar magnetospheres.
Subject headings:magnetohydrodynamics - plasma astrophysics - radiative processes - radio transient sources - pulsars
## 1. Introduction
Nonlinear effects play an important role in powerful compact sources of radio emission, such as pulsars and fast radio bursts (FRBs). For example, in nonmagnetized plasmas, the induced Compton and Raman scattering could even prevent the escape of the waves. In the strongly magnetized magnetosphere of neutron stars, the electromagnetic waves propagate in two orthogonally polarized modes: the so-called O-mode is polarized in the plane set by the background magnetic field and the propagation direction, whereas the X-mode is polarized perpendicularly to the magnetic field and the propagation direction. Only the O-mode is subject to induced scattering because the electric field of this mode has a component along the background magnetic field. In the field of the X-mode, the particles oscillate only due to the weak \({\bf E}\times{\bf B}\) drift; therefore, the scattering is suppressed.
The O-mode could propagate only if, in the rest frame of the plasma, the wave's frequency exceeds the plasma frequency. If the density of the plasma is high enough so that not only the Larmor but also the plasma frequency is well above the wave frequency, the two magnetohydrodynamic (MHD) waves could propagate: the fast magnetosonic (fms) and the Alfven waves. The fms wave is polarized perpendicularly to the background magnetic field and the propagation direction. When this wave propagates towards decreasing plasma density, it is smoothly converted into the X-mode and could escape from the system. The Alfven wave does not escape; it follows the curved magnetic field lines and eventually decays via the Landau damping (Arons & Barnard, 1986).
Thus, in dense magnetospheres (such that both the Larmor and the plasma frequencies are above the emission frequency), the radiation propagates in the form of fms waves, independently of the emission mechanism. In this case, the nonlinear decay of fms into Alfven waves could strongly affect the outgoing radiation. The goal of this paper is to study the decay process. In the magnetospheres of neutron stars, the magnetic energy significantly exceeds the plasma energy; therefore, the wave interaction could be considered in the scope of relativistic force-free MHD. The wave energy is well below the energy of the background field; therefore, we could employ the methods of the weak turbulence theory (see, e.g., Zakharov et al., 1992). Namely, we write down and solve the kinetic equation for the waves. Note that in the non-relativistic case, solutions to the kinetic equations for fms and Alfven waves were investigated both analytically (Kuznetsov, 2001) and numerically (Chandran, 2005, 2008).
The paper is organized as follows. In sect. 2, we write down the kinetic equations for MHD waves in the relativistic force-free regime. In sect. 3, we analyze the equations, estimate the nonlinear decay rate of fms waves, and qualitatively describe the kinetics of the decay of fms into Alfven waves. In sect. 4, we solve the kinetic equations numerically, confirming our qualitative analysis. The implications of our findings for FRBs are outlined in sect. 5. Conclusions are presented in sect. 6.
## 2. Nonlinear Interaction of MHD Waves in Force-Free Regime
In this paper, we address weakly nonlinear interactions of MHD waves in the relativistic force-free regime when the plasma energy density, including the rest mass energy, is negligible compared to the magnetic energy density. This implies that the widely used magnetization parameter, \(\sigma=B^{2}/4\pi\rho c^{2}\), is infinite. Here \(B\) is the background magnetic field, and \(\rho\) is the plasma density.
In the force-free limit, there are two MHD waves: the fms wave, polarized perpendicularly to the background magnetic field and to the propagation direction, and the Alfven wave, polarized perpendicularly to the background magnetic field in the plane set by the field and the propagation direction. The dispersion equations in the force-free limit are very simple:
\[\omega=ck \tag{1}\]
for the fms waves and
\[\omega=ck|\cos\theta| \tag{2}\]
for the Alfven waves. Here \(\omega\) is the frequency, \(k\) the wave vector, and \(\theta\) the angle between the wave vector and the background magnetic field.
The interaction of weakly nonlinear MHD waves in the force-free limit was studied by Thompson & Blaes (1998) and Lyubarsky (2019). The strongest is the interaction of three waves satisfying the resonance conditions
\[\omega=\omega_{1}+\omega_{2};\quad\mathbf{k}=\mathbf{k}_{1}+\mathbf{k}_{2}, \tag{3}\]
which in fact represent energy and momentum conservation. It follows from the dispersion relations that the conservation laws are satisfied for the decay of an fms wave into an fms and an Alfven waves (\(S\leftrightarrow S+A\)) and into two Alfven waves (\(S\leftrightarrow A+A\)). Of course, reverse merging processes are also possible. The conservation laws also permit the process \(S\leftrightarrow S+S\) for the aligned fms waves, but in the force-free limit, the probability of the process is zero (Lyubarsky, 2019). At a finite \(\sigma\), the weakly nonlinear interaction of aligned fms waves becomes possible; it leads to the steepening of the waves and formation of shocks (Levinson & van Putten, 1997; Lyubarsky, 2003).
The three-wave interaction of Alfven waves, \(A+A\to A\), is possible only if one of two waves has zero frequency. Then an anisotropic Alfven cascade develops, transferring the energy to waves with large components of the wave vector perpendicular to the background field, \(k_{\perp}\gg k_{\parallel}\)(Montgomery & Matthaeus, 1995; Ng & Bhattacharjee, 1996; Goldreich & Sridhar, 1997). The important point is that the zero-frequency Alfven waves cannot be treated as linear waves. Such interaction occurs when the field lines of the background magnetic field wander away. The Alfven waves are stretched when propagating along diverging field lines, so the wave vector component perpendicular to the background field increases. Thus a cascade is formed, redistributing the Alfven waves towards the high-\(k_{\perp}\) domain, where they eventually decay. Goldreich & Sridhar (1997) presented a qualitative explanation of how small turbulent fluctuations, \(\delta B\ll B\), could lead to divergent field lines. Assume that the mean magnetic field is directed along \(z\)-axis and describe the turbulence as an ensemble of localized wave packets with the longitudinal and transverse scales \(l_{\parallel}\) and \(l_{\perp}\), correspondingly, and the amplitude \(\delta B\). The local magnetic field line turns within a wave packet by the angle \(\theta\sim\delta B/B\), so the field line deviates from the initial position by \(\theta l_{\parallel}\). Adding random deviations, one finds that the average displacement of the field line grows with the distance, \(s\sim\theta\sqrt{l_{\parallel}z}\). Therefore, the field lines that were initially separated by \(l_{\perp}\) diverge. This picture implicitly assumes a long wavelength tail in the fluctuation spectrum. Namely, the amplitude of the fluctuations at the scale \(z\) is \(\Delta B\sim Bs/z\sim\delta B\sqrt{l_{\parallel}/z}\). This implies \(\Delta B^{2}\propto k_{\parallel}\) so that the spectral power of the turbulence, \(\Delta B^{2}_{k_{\parallel}}=d\Delta B^{2}/dk_{\parallel}\), goes to a constant at \(k_{\parallel}\to 0\). Such a spectrum is obtained if the turbulence is presented as an ensemble of bell-shaped fluctuations. We here deal with Alfven waves produced by the decay of fms waves. The fms waves with the frequency \(\omega\) produce Alfven waves with the wave vector \(k_{\parallel}\sim\omega/c\). The production rate is proportional to \(\omega\) and the energy density of fms waves at this frequency, \(U\) (see eq. 21). Both these quantities decrease towards smaller frequencies, so the spectrum of the produced waves is cut off at long wavelengths. Therefore, in the case of interest, the three-wave Alfven cascade is suppressed and could be neglected.
Of course, the Alfven cascade could develop via four-wave interactions, \(A+A\to A+A\)(Sridhar & Goldreich, 1994; Goldreich & Sridhar, 1995). However, we assume that the turbulence is weak, i.e., the system's evolution is governed by the lowest-order processes. At \(k_{\perp}\sim k_{\parallel}\), the rate of four-wave processes is lower than that of three-wave processes by the ratio of the wave energy to the energy of the background field, \(8\pi U/B_{0}^{2}\). We will show that the fms waves with the frequency \(\omega\) decay first into Alfven waves with \(k_{\perp}\sim k_{\parallel}\sim\omega/c\), so if the above ratio is small, the fms-to-Alfven transformation initially occurs in the weak turbulence regime. However, higher \(k_{\perp}\) Alfven waves are produced in the course of time. It is well known that the role of nonlinearity grows at higher \(k_{\perp}\) so that turbulence ceases to be weak at \(k_{\perp}/k_{\parallel}\sim c/\delta v\), where \(\delta v=c\delta B/B_{0}\) is the velocity of turbulent motions, \(\delta B\) the fluctuating magnetic field (Goldreich & Sridhar, 1995). However, we will see that the spectrum of Alfven waves reaches high \(k_{\perp}\) only when the fms energy decreases \(\sim(ck_{\perp}/\omega)^{2}\) times. Therefore, most of the transformation process occurs in the weak turbulence regime.
In astrophysical applications, we typically deal with wide spectra and random phases of waves. Then the wave field is conveniently described in the quantum language via the occupation numbers, \(n_{\mathbf{k}}\), which are related to the wave energy density of a mode with the wave vector \(\mathbf{k}\) as \(E_{\mathbf{k}}=\omega_{\mathbf{k}}n_{\mathbf{k}}\). Since we deal with two types of waves, we denote the occupation numbers of Alfven waves by \(n_{\mathbf{k}}\) and of fms waves by \(N_{\mathbf{k}}\). The evolution of the system is described by the kinetic equations for the waves ( e.g., Zakharov et al., 1992). For fms waves, these equations may be written as
\[\frac{\partial N_{\mathbf{k}}}{\partial t}=\sum_{\mathbf{k}_{1},\mathbf{k}_{2 }}\biggl{[}-R_{\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2}}^{S\leftrightarrow A +S}+R_{\mathbf{k}_{2},\mathbf{k}_{1},\mathbf{k}}^{S\leftrightarrow A+S}- \frac{1}{2}R_{\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2}}^{S\leftrightarrow A+A} \biggr{]}, \tag{4}\]
where \(R_{\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2}}^{S\leftrightarrow A+S}\) and \(R_{\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2}}^{S\leftrightarrow A+A}\) are the rates of the \(S\leftrightarrow S+A\) and \(S\leftrightarrow A+A\) processes for the given set of wave vectors \(\mathbf{k}\), \(\mathbf{k}_{1}\), \(\mathbf{k}_{2}\). Here, the first term describes the decay of the wave \(\mathbf{k}\) into an fms and an Alfven wave and the reverse process; the second is for the production of the fms \(\mathbf{k}\)-wave via the decay of an fms wave with frequency \(\omega_{1}>\omega\) and the reverse process; and the third term is for the decay of the fms \(\mathbf{k}\)-wave into two Alfven waves and the reverse process. The factor \(1/2\) in the third term accounts for double counting in the case of decay into two waves of the same type. Similarly, the
kinetic equation for the Alfven waves is written as
\[\frac{\partial n_{\mathbf{k}}}{\partial t}=\sum_{\mathbf{k}_{1},\mathbf{k}_{2}}\left[R_{\mathbf{k}_{2},\mathbf{k},\mathbf{k}_{1}}^{S\leftrightarrow A+S}+R_{\mathbf{k}_{2},\mathbf{k}_{1},\mathbf{k}}^{S\leftrightarrow A+A}\right]. \tag{5}\]
Now the factor \(1/2\) does not appear in the corresponding term because one of the two Alfven quanta is fixed.
Denoting the probabilities of the spontaneous decay processes as \(W_{\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2}}^{S\leftrightarrow A+S}\) and \(W_{\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2}}^{S\leftrightarrow A+A}\), correspondingly, taking into account the induced processes and using the detailed balance principle, one can write
\[R_{\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2}}^{S\leftrightarrow A+S}=W_{\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2}}^{S\to A+S}\left[N_{\mathbf{k}}(n_{\mathbf{k}_{1}}+1)(N_{\mathbf{k}_{2}}+1)-(N_{\mathbf{k}}+1)n_{\mathbf{k}_{1}}N_{\mathbf{k}_{2}}\right]\delta(\mathbf{k}-\mathbf{k}_{1}-\mathbf{k}_{2})\,\delta(\omega_{\mathbf{k}}-\omega_{\mathbf{k}_{1}}-\omega_{\mathbf{k}_{2}}); \tag{6}\]
\[R_{\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2}}^{S\leftrightarrow A+A}=W_{\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2}}^{S\to A+A}\left[N_{\mathbf{k}}(n_{\mathbf{k}_{1}}+1)(n_{\mathbf{k}_{2}}+1)-(N_{\mathbf{k}}+1)n_{\mathbf{k}_{1}}n_{\mathbf{k}_{2}}\right]\delta(\mathbf{k}-\mathbf{k}_{1}-\mathbf{k}_{2})\,\delta(\omega_{\mathbf{k}}-\omega_{\mathbf{k}_{1}}-\omega_{\mathbf{k}_{2}}). \tag{7}\]
In all cases of interest, \(N_{\mathbf{k}}\gg 1\), therefore we neglect the linear in \(N_{\mathbf{k}}\) terms, which describe spontaneous processes, and get
\[R_{\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2}}^{S\leftrightarrow A+S}=W_{\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2}}^{S\to A+S}\left(N_{\mathbf{k}}n_{\mathbf{k}_{1}}+N_{\mathbf{k}}N_{\mathbf{k}_{2}}-n_{\mathbf{k}_{1}}N_{\mathbf{k}_{2}}\right)\delta(\mathbf{k}-\mathbf{k}_{1}-\mathbf{k}_{2})\,\delta(\omega_{\mathbf{k}}-\omega_{\mathbf{k}_{1}}-\omega_{\mathbf{k}_{2}}); \tag{8}\]
\[R_{\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2}}^{S\leftrightarrow A+A}=W_{\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2}}^{S\to A+A}\left(N_{\mathbf{k}}n_{\mathbf{k}_{1}}+N_{\mathbf{k}}n_{\mathbf{k}_{2}}-n_{\mathbf{k}_{1}}n_{\mathbf{k}_{2}}\right)\delta(\mathbf{k}-\mathbf{k}_{1}-\mathbf{k}_{2})\,\delta(\omega_{\mathbf{k}}-\omega_{\mathbf{k}_{1}}-\omega_{\mathbf{k}_{2}}). \tag{9}\]
The interaction probability is expressed via the amplitudes of the processes by the golden rule: \(W_{\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2}}^{S\to A+S}=2\pi|V_{\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2}}^{S\to A+S}|^{2}\), and analogously for \(W_{\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2}}^{S\to A+A}\). The corresponding amplitudes were calculated by Lyubarsky (2019):
\[V_{\mathbf{k}\mathbf{k}_{1},\mathbf{k}_{2}}^{S\to A+S}=i\sqrt{\frac{\pi\omega_ {1}}{2\omega\omega_{2}}}\frac{k_{1\perp}\left[k_{2}-\mathrm{sgn}(k_{1\parallel })k_{2\parallel}\right]}{B_{0}k_{1}k_{2\perp}}\left(\mathbf{\hat{z}}\cdot \mathbf{k}_{1}\times\mathbf{k}_{2}\right); \tag{10}\]
\[V_{\mathbf{k}\mathbf{k}_{1},\mathbf{k}_{2}}^{S\to A+A}=i\sqrt{ \frac{2\pi\omega_{1}\omega_{2}}{\omega}}\left(\frac{k_{1\perp}(\mathbf{k}_{2 \perp}\cdot\mathbf{k}_{1})}{k_{2\perp}}\right.\] \[\left.+\frac{k_{2\perp}(\mathbf{k}_{1\perp}\cdot\mathbf{k}_{1})} {k_{1\perp}}\right)\frac{H(-k_{1\parallel}k_{2\parallel})}{B_{0}k_{\perp}}. \tag{11}\]
Here the indexes \(\parallel\) and \(\perp\) describe the components of the \(\mathbf{k}\) vector parallel and perpendicular to the background magnetic field. In equation (10), the index \(1\) is for the Alfven wave and \(2\) for the fms wave. In equation (11), the Heaviside step function, \(H(x)\), expresses the well-known fact that Alfven waves do not interact if they propagate in the same direction along the magnetic field.
## 3. Decay of fms waves; qualitative considerations
We study the possible decay of fms radiation produced by a strong enough source in a highly magnetized medium. Let us consider for simplicity the time evolution of a spatially homogeneous, isotropic fms radiation with the characteristic frequency of the order of \(\omega_{0}=ck_{0}\). Assume that the spectrum is moderately wide, \(\Delta\omega\sim\omega\), and the total radiation energy is
\[U=\int\omega N_{\mathbf{k}}d^{3}\mathbf{k}. \tag{12}\]
The fms waves decay into fms and Alfven waves of smaller frequency. The induced decay is possible only into states which are not empty; therefore, we assume that a weak background of fms and Alfven waves is present in the whole phase space.
The initial pulse decays into waves satisfying the conservation laws (3); therefore, the occupation numbers in the new states grow until the reverse merging process balances the decay. This happens when the occupation numbers in all three states become comparable. The decay \(S\to S+A\) occurs into states with \(k_{1}\) and \(k_{2}\) comparable with \(k_{0}\), and each initial fms quantum could produce only one quantum in the state \(\mathbf{k}_{1}\) and one quantum in the state \(\mathbf{k}_{2}\). Therefore the equilibrium is achieved, and the decay stops after the energy of the initial peak decreases roughly two times.
The important point is that the phase volume available for decay \(S\to A+A\) is in fact infinite because the Alfven waves could have the perpendicular component of their wave vector arbitrarily large. To demonstrate this, let us write the conservation laws (3) with the account of the dispersion laws (1) and (2) explicitly:
\[k=|k_{1\parallel}|+|k_{2\parallel}|; \tag{13}\] \[k_{\parallel}=k_{1\parallel}+k_{2\parallel};\] (14) \[\mathbf{k}_{\perp}=\mathbf{k}_{1\perp}+\mathbf{k}_{2\perp}. \tag{15}\]
One sees that for a given \(\mathbf{k}\), one finds \(k_{1\parallel}\) and \(k_{2\parallel}\) from the first two equations, whereas the third equation is satisfied with an arbitrary \(\mathbf{k}_{1\perp}\) by choosing \(\mathbf{k}_{2\perp}=\mathbf{k}_{\perp}-\mathbf{k}_{1\perp}\). In particular, the fms wave could decay into two Alfven waves with arbitrarily large but nearly oppositely directed perpendicular components of the wave vectors. This implies that the phase volume available for the decay of an fms wave into a pair of Alfven waves is infinite, so the fms pulse could decay significantly, practically to the background level.
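For completeness, a short worked solution of the longitudinal conditions: for \(k_{\perp}\neq 0\) the two Alfven waves must counter-propagate along the field (consistent with the step function in eq. (11)), say \(k_{1\parallel}>0>k_{2\parallel}\), and equations (13) and (14) then give

\[k_{1\parallel}=\frac{k+k_{\parallel}}{2},\qquad k_{2\parallel}=\frac{k_{\parallel}-k}{2},\]

while \(\mathbf{k}_{1\perp}\) remains completely free and \(\mathbf{k}_{2\perp}=\mathbf{k}_{\perp}-\mathbf{k}_{1\perp}\) is fixed by eq. (15).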
Substituting \(\mathbf{k}_{2\perp}=\mathbf{k}_{\perp}-\mathbf{k}_{1\perp}\) into equation (11) and expanding in small \(k/k_{1\perp}\), one finds the interaction amplitude in the limit \(k_{1\perp}\gg k\):
\[V_{\mathbf{k}\mathbf{k}_{1},\mathbf{k}_{2}}^{S\to A+A}=i\sqrt{\frac{2\pi\omega_{1}\omega_{2}}{\omega}}k_{\perp}\left(1-\frac{(\mathbf{k}_{\perp}\cdot\mathbf{k}_{1\perp})^{2}}{k_{\perp}^{2}k_{1\perp}^{2}}\right)\frac{H(-k_{1\parallel}k_{2\parallel})}{B_{0}}. \tag{16}\]
One sees that the rate of the fms decay into Alfven waves with large perpendicular components of the wave vector does not go to zero.
The population of such states is described by equation (5), in which only the second term in the rhs should be retained. Taking into account that initially the occupation numbers for these states are small, one can write (note that in this equation, \(\mathbf{k}_{2}\) is referred to fms waves whereas \(\mathbf{k}\) to Alfven waves)
\[\frac{\partial n_{\mathbf{k}}}{\partial t}=\sum_{\mathbf{k}_{2}}W_{\mathbf{k}_ {2},\mathbf{k},\mathbf{k}_{2}-\mathbf{k}}^{S\to A+A}N_{\mathbf{k}_{2}}\left(n_{ \mathbf{k}}+n_{\mathbf{k}_{2}-\mathbf{k}}\right)\delta(\omega_{\mathbf{k}_{2} }-\omega_{\mathbf{k}}-\omega_{\mathbf{k}_{2}-\mathbf{k}}) \tag{17}\]
Here we could take \(n_{\mathbf{k}}\approx n_{\mathbf{k}_{2}-\mathbf{k}}\) because adding a quantum into one of these states is accompanied by adding a quantum into the other state. Then one finally finds that the population of the Alfven waves with large \(k_{\perp}\) grows
initially (as soon as \(n_{\bf k}\ll N_{\bf k_{0}}\)) exponentially,
\[\frac{\partial n_{\bf k}}{\partial t}=qn_{\bf k}, \tag{18}\]
with the rate
\[q=2\int W_{\mathbf{k}_{2},\mathbf{k},\mathbf{k}_{2}-\mathbf{k}}^{S\to A+A}N_{\mathbf{k}_{2}}\,\delta(\omega_{\mathbf{k}_{2}}-\omega_{\mathbf{k}}-\omega_{\mathbf{k}_{2}-\mathbf{k}})\,d\mathbf{k}_{2} \tag{19}\]
\[=\frac{6\pi^{3}c}{B_{0}^{2}}\int\frac{|k_{\parallel}|\,(k_{2}-|k_{\parallel}|)\,k_{2\perp}^{3}}{k_{2}}\,N_{\mathbf{k}_{2}}\,H\left(k_{\parallel}(k_{\parallel}-k_{2\parallel})\right)\delta(k_{2}-|k_{\parallel}|-|k_{2\parallel}-k_{\parallel}|)\,dk_{2\parallel}\,dk_{2\perp} \tag{20}\]
Here we used equation (16) with an appropriate permutation of indexes \({\bf k}\), \({\bf k}_{1}\) and \({\bf k}\). Taking into account that \(k_{\parallel}\sim k_{2\parallel}\sim k_{2\perp}\sim k_{2}=\omega/c\) (according to the conservation laws, the longitudinal components of the wave vectors of the Alfven and fms waves are comparable, whereas all components of the fms wave vector are typically of the order of \(\omega/c\)), one can find a rough estimate:
\[q\sim q_{0}=\frac{8\pi}{B_{0}^{2}}U\omega. \tag{21}\]
The coefficient in the definition of \(q_{0}\) is chosen to show explicitly that the interaction rate is proportional to the small ratio of the wave energy and the energy of the background magnetic field.
The total number of the produced Alfven quanta, \({\cal N}=\int n_{\bf k}d{\bf k}\), grows as
\[\frac{\partial{\cal N}}{\partial t}=2\pi\int qn_{\bf k}k_{\perp}dk_{\parallel}. \tag{22}\]
Here the integration over \(k_{\parallel}\) is limited, due to the conservation laws, by the region \(k_{\parallel}\sim k_{0}\), whereas the integral over \(k_{\perp}\) is unlimited. Taking into account that \(q\) is independent of \(k_{\perp}\) at large \(k_{\perp}\), one sees that the total production rate of the Alfven quanta diverges unless \(n_{\bf k}\) decreases with \(k_{\perp}\) faster than \(k_{\perp}^{-2}\). Of course, Alfven waves are erased by dissipation processes at small enough wavelengths, but the corresponding \(k\) is typically too high so that the Alfven waves would be produced at an inappropriately high rate unless the background spectrum is steep enough. Here we adopt a natural assumption that the background waves have the spectrum \(n_{\bf k}\propto k^{-\alpha}\) with \(\alpha>2\).
In this case, the evolution of the system could be qualitatively described as follows. At the initial stage, the background Alfven spectrum grows exponentially at the rate of eq. (21) at all \(k_{\perp}\). The waves with \(k_{\perp}\sim k_{0}\) reach the saturation level, \(n_{\bf k}\sim N_{\bf k}\), first. At this stage, the fms pulse weakens a few times. The fms waves keep decaying into Alfven waves with larger \(k_{\perp}\), so that the energy of fms waves decreases monotonically. Then the Alfven waves with smaller \(k_{\perp}\) begin to merge into fms waves to maintain the equilibrium \(n_{\bf k}\sim N_{\bf k}\). Thus the Alfven population of the states with larger \(k_{\perp}\) gradually increases up to the saturation level, \(n_{\bf k}\sim N_{\bf k}\). At smaller \(k_{\perp}\), a plateau is formed, which gradually decreases with time. In such a way, the fms pulse could eventually decay down to the background level. The important point is that the decay rate is determined by the instantaneous energy of the fms pulse. Since the energy of fms waves decreases, the decay process slows down with time. Figure 1 shows the time evolution of the energy of fms and Alfven waves normalized to the system's total energy, which is chosen to be equal to unity. One sees that the fms waves are efficiently converted to the Alfven waves at the rate given by equation (21).
The evolution of the spectra is shown in figs. 2 and 3. We plotted the spectral distributions integrated over the longitudinal wave vectors because the evolution of the perpendicular component of the wave vector is the most interesting. The presented results confirm the qualitative picture described in the previous section. Namely, the population of the fms waves gradually decreases, while the population of the Alfven waves grows across the whole spectrum until both populations become comparable, \(n_{\bf k}\sim N_{\bf k}\), at \(k_{\perp}\sim k_{0}\). This occurs at \(q_{0}t\sim 1-2\). At later times, only the Alfven population of the high-\(k_{\perp}\) states grows, whereas at smaller \(k_{\perp}\), a plateau is formed, the level of which slowly decreases.
Figure 1.— Time evolution of the energy of fms and Alfvén waves normalized to the total energy.
## 5. Astrophysical Applications
The nonlinear interactions of low-frequency waves play an important role in compact sources of powerful radio emission, such as pulsars and FRBs. The results of this paper could be applied if the waves may be described in the MHD limit, i.e., when the emission frequency is well below both the Larmor and the plasma frequencies.
### Fast radio bursts
FRBs are radio pulses of millisecond duration coming from cosmological distances and having isotropic luminosities \(L_{\rm iso}\sim 10^{42}-10^{45}\) erg\(\cdot\)s\({}^{-1}\). The origin of these pulses is still not known; however, there is some evidence that they are associated with magnetar flares (see, e.g., review by Zhang 2022 and references therein). In the magnetar magnetospheres, both the Larmor and the plasma frequencies are well above the radio band; therefore, independently of the emission mechanism, radio waves propagate as fms waves. Then, the nonlinear decay of fms into the Alfven waves places severe limits on radio emission power if FRBs are produced well within the magnetar magnetospheres.
The transformation efficiency is determined by the product of the transformation rate (21) and the propagation time, \(r/c\). Assuming the dipole magnetic field in the magnetosphere, \(B=\mu/r^{3}\), and expressing the fms energy density via the isotropic FRB luminosity, \(U=L_{\rm FRB}/4\pi cr^{2}\), one finds
\[\frac{q_{0}r}{c}=\frac{2L_{\rm FRB}\omega r^{5}}{\mu^{2}c^{2}}=13\frac{f_{9}L _{\rm FRB,43}r_{7}^{5}}{\mu_{33}^{2}}. \tag{25}\]
Here \(f=\omega/2\pi\) is the radiation frequency, and we employ the standard short-hand notation, \(s=10^{x}s_{x}\) in cgs units. One sees that at any FRB power, the radiation is absorbed at a distance from a few to a few dozen stellar radii.
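As a quick sanity check of this scaling, a small script (fiducial values as quoted in the text, cgs units assumed) reproduces the coefficient in eq. (25):

```python
import math

# Fiducial values from the text (cgs): f = 1 GHz, L_FRB = 1e43 erg/s,
# r = 1e7 cm, mu = 1e33 G cm^3, c = 3e10 cm/s
f, L, r, mu, c = 1e9, 1e43, 1e7, 1e33, 3e10
omega = 2 * math.pi * f

tau = 2 * L * omega * r**5 / (mu**2 * c**2)  # q0 * r / c from eq. (25)
print(f"q0 r / c ~ {tau:.0f}")               # of order 10, in line with the ~13 quoted in eq. (25)
```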
This estimate assumes that the turbulence is weak, i.e., the ratio of the wave energy to the energy of the background field is small. A simple estimate shows that this condition is fulfilled in the region of interest:
\[\frac{8\pi U}{B^{2}}=7\cdot 10^{-6}\frac{L_{\rm FRB,43}r_{7}^{4}}{\mu_{33}^{2}}. \tag{26}\]
This justifies the neglect of four-wave processes1.
Footnote 1: Note that the rate of the four-wave interaction, \(q\sim(8\pi U/B_{0}^{2})^{2}\omega\), remains larger than the escape rate, \(c/r\), for \(L_{\rm FRB}>10^{37}\) erg\(\cdot\)s\({}^{-1}\). Therefore cascading of the excited Alfvén waves may be possible. However, the rate of three-wave processes is larger; therefore, we take into account only the fms-to-Alfvén interaction.
Note that the wave amplitude exceeds the background field when this ratio exceeds unity. In this case, the fms radiation is heavily absorbed because the MHD condition, \(E<B\), is violated (Beloborodov, 2021, 2023). If the background field is a dipole, this happens at distances of a few hundred stellar radii. However, the magnetic disturbance from the magnetar flare propagates away as a large-scale electro-magnetic pulse, whose amplitude decreases as \(r^{-1}\). In this case, the high-frequency waves propagate at the top of the pulse. Then, the wave amplitude remains smaller than the background field, so Beloborodov's mechanism does not work. On the other hand, the fms-to-Alfvén transformation provides an effective absorption in any case.
The important point is that the wave transformation is a stimulated process, i.e., the transformation rate is proportional to the occupation number of the waves in the final state. The high rate is obtained because the initially small density of Alfven waves grows exponentially and rapidly becomes comparable with the density of fms waves. The transformation could be suppressed if an efficient absorption mechanism does not permit the growth of Alfven waves. The newly produced Alfven waves could be absorbed because of the current starvation, i.e., when the parameter
\[\xi=\frac{j}{enc} \tag{27}\]
exceeds unity (Thompson & Gill, 2014; Thompson, 2023). Here \(n\) is the plasma density, \(j=ck_{\perp}\delta B/4\pi\) the current density in the Alfven wave with the amplitude \(\delta B\). Let us consider this process.
In the magnetosphere of an active magnetar, the electron-positron pairs are produced by slow untwisting of magnetospheric magnetic field lines (Beloborodov & Thompson, 2007; Beloborodov, 2013). The plasma density is estimated as (Beloborodov, 2020)
\[n=\frac{\mathcal{M}\mu}{4\pi er^{3}r_{\rm x}}, \tag{28}\]
where \(\mathcal{M}\sim 10^{3}\) is the pair multiplicity, and \(r_{\rm x}=5\cdot 10^{6}\mu_{33}^{1/3}\) cm is the distance from the star at which the magnetic field falls to \(10^{13}\) G, so that the pair production stops.
Figure 2.— The time evolution of spectra of Alfvén waves integrated in \(k_{\parallel}\).
Figure 3.— The time evolution of spectra of fms waves integrated in \(k_{\parallel}\).
Substituting this estimate into eq. (27) and assuming that the FRB energy is completely transferred to Alfven waves, \(L_{\rm FRB}=\delta B^{2}r^{2}c\), one finds
\[\xi=\frac{k_{\perp}r^{2}r_{\rm x}}{{\cal M}\mu}\sqrt{\frac{L_{\rm FRB}}{c}}=1.9\,\frac{r_{7}^{2}f_{9}L_{\rm FRB,43}^{1/2}}{{\cal M}_{3}\mu_{33}^{2/3}}\,\frac{ck_{\perp}}{\omega}. \tag{29}\]
One sees that the current starvation sets in only when a significant fraction of the FRB energy is transferred to Alfven waves. In this case, the FRB is absorbed via the transformation to Alfven waves, which decay because of current starvation. However, the important point is that the pairs are heated to high Lorentz factors and produce new pairs.
Let us assume that a fraction \(\zeta<1\) of the FRB energy is absorbed. Then the acquired Lorentz factor is estimated as
\[\gamma=\frac{\zeta L}{4\pi r^{2}m_{e}c^{3}n}=10^{7}\,\frac{r_{7}\,\zeta L_{\rm FRB,43}}{{\cal M}_{3}\mu_{33}^{2/3}}. \tag{30}\]
The particles with this Lorentz factor emit curvature photons with the energy
\[\varepsilon=\frac{\hbar c}{r}\gamma^{3}=2\cdot 10^{3}\,\frac{(\zeta L_{\rm FRB,43})^{3}r_{7}^{2}}{{\cal M}_{3}^{3}\mu_{33}^{2}}\,{\rm MeV}. \tag{31}\]
These photons produce pairs just as in pulsars. The condition for single photon pair production is
\[\chi=\frac{\varepsilon\sin\theta B}{2m_{e}c^{2}B_{q}}>0.1. \tag{32}\]
Here \(\theta\) is the angle between the photon direction and the magnetic field, and \(B_{q}=m_{e}^{2}c^{3}/(e\hbar)=4.4\cdot 10^{13}\) G is the quantum magnetic field. The photon is emitted along the magnetic field and, after traversing the distance \(x\), acquires the angle \(\theta=x/r\). Now one finds
\[\chi=45\,\frac{x}{r}\,\frac{(\zeta L_{\rm FRB,43})^{3}}{{\cal M}_{3}^{3}\mu_{33}r_{7}}. \tag{33}\]
The condition (32) is fulfilled, so all the emitted photons are converted to pairs. One can easily check that the power of the curvature emission is sufficient for the particle to lose the whole energy to radiation. Therefore each particle emits
\[{\cal N}=\frac{m_{e}c^{2}\gamma}{\varepsilon}=2.5\cdot 10^{3}\,\frac{{\cal M}_{3}^{2}\mu_{33}^{4/3}}{(\zeta L_{\rm FRB,43})^{2}r_{7}}. \tag{34}\]
More pairs are produced from the synchrotron photons emitted by the newly produced pairs. In any case, one sees that absorption of a fraction of the FRB energy produces enough pairs to provide conditions for fms-to-Alfven decay in the MHD regime.
The above consideration shows that FRBs could not be generated well within the magnetar magnetosphere. However, one could not directly extrapolate the obtained conclusion to the outer magnetosphere or the magnetar wind. First of all, the plasma density rapidly decreases with distance, so that eventually waves in the radio band could not be described in the MHD approximation. Moreover, the magnetic perturbation from the magnetar flare propagates away as a large-scale MHD pulse, whose amplitude decreases as \(1/r\), so that in the outer magnetosphere the magnetic field of the pulse exceeds the dipole field and the plasma is pushed away with relativistic velocities. Therefore, the ratio of the radiation energy density to the energy density of the background field stops growing and, moreover, one has to take into account the relativistic slowing down of time. This implies that magnetar flares could produce FRBs only far enough from the magnetar (see, e.g., the review by Lyubarsky 2021 and references therein).
### Radio emission of the Crab pulsar
A typical pulsar produces a pencil beam of radio emission, presumably generated in the electron-positron plasma flowing along the magnetic axis of the neutron star within a narrow open field line tube. The emission is generally attributed to plasma oscillations in the flow (see the recent review by Philippov & Kramer 2022 and references therein). The frequency of these waves is comparable with the plasma frequency in the plasma rest frame; therefore, they could not be considered MHD waves. The nonlinear process under consideration is irrelevant to this emission.
In pulsars with large magnetic fields at the light cylinder, such as the Crab and millisecond pulsars, there is another emission site, namely, the current sheet separating, beyond the light cylinder, the oppositely directed magnetic fields. The energy release due to the magnetic reconnection in the current sheet feeds the powerful synchrotron emission in the gamma-ray, and sometimes also in the X-ray and optical, band (Lyubarskii, 1996; Bai & Spitkovsky, 2010; Cerutti et al., 2016). The fan beam thus formed rotates with the neutron star so that the observer typically sees two peaks per pulsar period. Some of these pulsars also exhibit radio pulses in phase with high-energy pulses. This radio emission could be produced because magnetic islands in the reconnecting current sheet continuously merge, giving rise to magnetic perturbations that propagate away in the form of fms waves, which further away are transformed into radio waves (Uzdensky & Spitkovsky, 2014; Lyubarsky, 2019; Philippov et al., 2019). The nonlinear interaction of fms waves places limits on the radio luminosity of these pulsars.
According to simulations by Philippov et al. (2019), about \(0.5\%\) of the total energy release in the current sheet is radiated away in the form of low-frequency waves. The luminosity of the Crab pulsar in the X- and \(\gamma\)-ray bands is roughly \(L_{\rm hard}=10^{36}\) erg\(\cdot\)s\({}^{-1}\). This quantity could be considered a proxy for the energy release rate in the current sheet. Then, one would expect the radio luminosity of the Crab to be of the order of \(L^{\prime}=5\cdot 10^{33}\) erg\(\cdot\)s\({}^{-1}\). However, the observed radio luminosity is two orders of magnitude smaller, \(L_{\rm radio}=7\cdot 10^{31}\) erg\(\cdot\)s\({}^{-1}\) (e.g., Malov et al., 1994). This may be attributed to the decay of the fms into the Alfven waves on the way out of the magnetosphere.
The decay rate (21) is calculated in the zero electric frame of the plasma because the non-linear interactions in the force-free regime are not affected by plasma moving along the magnetic field lines. Just beyond the light cylinder, the magnetospheric electric and magnetic fields are of the same order but not too close to each other so that the velocity of the zero electric field frame, \({\bf v}=c{\bf E}\times{\bf B}/B^{2}\), is only mildly relativistic. Therefore we
use the parameters in the lab frame. We find the radiation energy density, \(U\), from the condition that the decay rate (21) is comparable with the wave escape rate, \(q_{0}\sim c/r\). The radiation is produced near the light cylinder; therefore, \(r\sim c/\Omega\), where \(\Omega=2\pi/P\) is the angular velocity of the neutron star, and \(P\) is the pulsar period. Now this condition yields
\[\frac{8\pi U}{B^{2}}\sim\frac{\Omega}{\omega}, \tag{35}\]
The magnetic field in the equatorial zone at the distance of the light cylinder is \(B=2\mu(\Omega/c)^{3}\), where \(\mu\) is the magnetic moment of the neutron star. The last is related to the pulsar spin-down power:
\[L_{\rm sd}=(1+\sin^{2}\psi)\frac{\mu^{2}\Omega^{4}}{c^{3}}, \tag{36}\]
where \(\psi\) is the angle between the magnetic and rotational axes (Spitkovsky, 2006). The radio emission forms a fan beam with the opening angle \(\alpha\sim 0.1\), so that the radio luminosity may be presented as
\[L_{\rm radio}=2\pi\alpha Uc(c/\Omega)^{2}. \tag{37}\]
Now one finds
\[\frac{L_{\rm radio}}{L_{\rm sd}}\sim\frac{\alpha}{(1+\sin^{2}\psi)Pf}, \tag{38}\]
where \(f=\omega/2\pi\) is the radiation frequency. The period of the Crab pulsar is \(P=0.033\) s, and the spectrum is very steep, without a low-frequency cutoff down to the decameter band. Therefore, we take a low frequency, \(f=30\) MHz (Malov et al., 1994). Then one gets \(L_{\rm radio}/L_{\rm sd}\sim 10^{-7}\). Taking into account that the observed slowing-down rate of the Crab corresponds to the spin-down power \(L_{\rm sd}=5\cdot 10^{38}\) erg\(\cdot\)s\({}^{-1}\), one sees that the obtained estimate is compatible with the observed radio luminosity.
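A corresponding back-of-the-envelope check of eq. (38) with the Crab numbers quoted above; since \(\psi\) is not specified in the text, \(1+\sin^{2}\psi=1.5\) is an assumed representative value:

```python
# Rough check of eq. (38) with the Crab numbers quoted in the text
alpha = 0.1          # opening angle of the fan beam
P = 0.033            # pulsar period [s]
f = 30e6             # lowest radio frequency considered [Hz]
psi_term = 1.5       # assumed representative value of (1 + sin^2 psi)

ratio = alpha / (psi_term * P * f)
print(f"L_radio / L_sd ~ {ratio:.1e}")        # ~1e-7

L_sd = 5e38                                   # Crab spin-down power [erg/s]
print(f"L_radio ~ {ratio * L_sd:.1e} erg/s")  # a few 1e31, of the order of the observed 7e31 erg/s
```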
Now let us check that the MHD conditions are fulfilled, i.e., the plasma density is sufficient to maintain Alfven waves. The plasma density in pulsars may be presented as
\[n=\kappa\frac{\Omega B}{2\pi e}, \tag{39}\]
where \(\kappa\) is the multiplicity. In young pulsars, \(\kappa>10^{5}\)(Timokhin & Harding, 2015), which is compatible with the observations of the Crab Nebula (de Jager et al., 1996). Assume that the emitted power, \(L^{\prime}\), (which is larger than the observed luminosity, \(L_{\rm radio}\), see above) is transformed into Alfven waves so that the relation between \(L^{\prime}\) and the Alfven energy density is the same as the relation (37) between the observed luminosity and the radiation energy density. Now we estimate the parameter \(\xi\) (see eq. 27) as
\[\xi=\frac{fP}{2\alpha\kappa}\sqrt{\frac{L^{\prime}}{2(1+\sin^{2}\psi)L_{\rm sd}}}\frac{k_{\perp}c}{\omega} \tag{40}\]
\[=0.2\frac{f_{7.5}P_{-1.5}L_{33.7}^{\prime 1/2}}{\alpha_{-1}\kappa_{5}(1+\sin^{2}\psi)L_{\rm sd,38.7}^{1/2}}\frac{k_{\perp}c}{\omega}.\]
One sees that the density is marginally sufficient to maintain the fms-to-Alfven transformation at the expected luminosity of the current sheet. The produced Alfven waves decay because of current starvation when the energy is transferred to higher \(k_{\perp}\), as described in the previous sections. This justifies our explanation of the low luminosity of the Crab pulsar.
## 6. Conclusions
In this paper, we addressed the nonlinear decay of fms waves in relativistic force-free MHD. There are models of pulsars and FRBs in which the radio emission is generated in dense magnetospheres, such that not only the Larmor but also the plasma frequency is well above the radiation frequency (see, e.g., reviews by Zhang, 2022; Lyubarsky, 2021; Philippov & Kramer, 2022). Within these sources, the radiation propagates in the form of fms waves, and the nonlinear decay of fms into Alfven waves could strongly affect the properties of the outgoing radiation.
Using the kinetic equations for the waves, we estimated the decay rate and studied the kinetics of the decay process. We have shown that an fms wave could decay into two Alfven waves with arbitrarily large wave vectors if these wave vectors are nearly perpendicular to the background magnetic field and nearly oppositely directed. Therefore, the phase volume available for the decay of an fms wave is in fact infinite. In this case, the energy of fms waves could be completely transferred to the small-scale Alfven waves not via a cascade, as in the Kolmogorov turbulence, but directly. Numerical solutions of the kinetic equations confirmed these conclusions. Our results explain the anomalously low radio efficiency (the ratio of the radio to the spin-down power) of the Crab pulsar and demonstrate that FRBs could not be produced well within magnetar magnetospheres.
## Acknowledgments
We are grateful to the anonymous referee for insightful comments. This research was supported by grant I-1362-303.7/2016 from the German-Israeli Foundation for Scientific Research and Development and by grant 2067/19 from the Israeli Science Foundation.
## Appendix. Numerical Procedure and Convergence.
The set of equations (4) and (5) was solved by the Runge-Kutta method. It is known that determining the error and stability of the Runge-Kutta method is quite difficult (Butcher, 2015). As a rule, a very small time step is necessary to minimize the error and ensure the stability of the method. However, if there are conserved quantities in the system (e.g., total energy, total mass, helicity, etc.), one can achieve better convergence and stability by making use of a correction procedure that explicitly exploits the conservation laws (see, e.g., Christlieb et al., 2011; Palha & Gerritsma, 2017; Coppola et al., 2019, and references therein).
In our case, the conserved parameter is the total energy of the system,
\[U=\int\omega(N_{\bf k}+n_{\bf k})d^{3}k. \tag{41}\]
This means that summing up the rhs of equations (4) and (5), multiplying the obtained expression by \(\omega\) and
integrating over the whole phase space yields zero. However, the conservation law is violated when the integrals in the rhs of these equations are evaluated numerically. To keep the total energy exactly conserved, we introduce a correction to the obtained values of \(N({\bf k})\) and \(n({\bf k})\) at each time step.
Let \(U_{1}\) be the energy of the system obtained after the \(i\)-th time step, and \(U_{0}\) be the initial energy of the system. We multiply \(N_{i+1}({\bf k})\) and \(n_{i+1}({\bf k})\) by \(\xi=2-\frac{U_{1}}{U_{0}}\). Taking into account the identity
\[U_{1}(2-\frac{U_{1}}{U_{0}})=U_{0}(1-(1-\frac{U_{1}}{U_{0}})^{2}), \tag{42}\]
one sees that after such a correction, the total energy of the system remains equal to \(U_{0}\) to within \((1-\frac{U_{1}}{U_{0}})^{2}\). In our calculations the value of \(1-\frac{U_{1}}{U_{0}}\) does not exceed \(10^{-3}\). Consequently, the energy is conserved to within \(\sim 10^{-6}\).
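A minimal sketch of this correction step is given below; the function names are our own, `rhs` stands for a numerical evaluation of the collision integrals of eqs. (4)-(5) on a discretized \(\mathbf{k}\)-grid, and a classical fourth-order Runge-Kutta step is assumed purely for illustration:

```python
import numpy as np

def rk4_step(rhs, y, t, dt):
    """One classical fourth-order Runge-Kutta step for dy/dt = rhs(t, y)."""
    k1 = rhs(t, y)
    k2 = rhs(t + dt / 2, y + dt * k1 / 2)
    k3 = rhs(t + dt / 2, y + dt * k2 / 2)
    k4 = rhs(t + dt, y + dt * k3)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def energy_correct(occupations, omega, U0):
    """Rescale the occupation numbers by xi = 2 - U1/U0 so that the total
    energy stays equal to U0 up to (1 - U1/U0)^2, as in eq. (42)."""
    U1 = np.sum(omega * occupations)
    return occupations * (2.0 - U1 / U0)

# usage sketch: y holds the concatenated N_k and n_k values on the grid and
# omega_grid the corresponding frequencies
# y = rk4_step(rhs, y, t, dt)
# y = energy_correct(y, omega_grid, U0)
```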
In figure 4, we show how the above correction procedure affects the stabilization of the calculations. The calculations were carried out with the time step \(q_{0}\Delta t=10^{-4}\) and values \(\Delta k_{\perp}=\Delta k_{\parallel}=0.1k_{0}\). One sees that without correction (Fig.4a), the system quickly becomes unstable. When using the correction procedure (Fig.4b), the system remains stable, and the total energy of the system is conserved.
Figure 5 shows the spectra of Alfven waves calculated with different input parameters. The spectra are presented at \(q_{0}t=1\), when the energies of the fms and Alfven waves are comparable. Figure 5a shows the spectra calculated with different time steps, \(q_{0}\Delta t\), at value \(\Delta k_{\perp}/k_{0}=0.1\). One sees that when the time resolution is improved, the "waves" in the spectra disappear, whereas the overall spectral shape remains intact. Figure 5b shows the spectra obtained for different values \(\Delta k_{\perp}/k_{0}\) at \(q_{0}\Delta t=10^{-4}\). One sees that improving the resolution in the wave vector space does not significantly affect the shape of the spectrum.
|
2309.07235 | Autotuning Apache TVM-based Scientific Applications Using Bayesian
Optimization | Apache TVM (Tensor Virtual Machine), an open source machine learning compiler
framework designed to optimize computations across various hardware platforms,
provides an opportunity to improve the performance of dense matrix
factorizations such as LU (Lower Upper) decomposition and Cholesky
decomposition on GPUs and AI (Artificial Intelligence) accelerators. In this
paper, we propose a new TVM autotuning framework using Bayesian Optimization
and use the TVM tensor expression language to implement linear algebra kernels
such as LU, Cholesky, and 3mm. We use these scientific computation kernels to
evaluate the effectiveness of our methods on a GPU cluster, called Swing, at
Argonne National Laboratory. We compare the proposed autotuning framework with
the TVM autotuning framework AutoTVM with four tuners and find that our
framework outperforms AutoTVM in most cases. | Xingfu Wu, Praveen Paramasivam, Valerie Taylor | 2023-09-13T18:15:58Z | http://arxiv.org/abs/2309.07235v1 | # Autotuning Apache TVM-based Scientific Applications Using Bayesian Optimization
###### Abstract.
Apache TVM (Tensor Virtual Machine), an open source machine learning compiler framework designed to optimize computations across various hardware platforms, provides an opportunity to improve the performance of dense matrix factorizations such as LU (Lower-Upper) decomposition and Cholesky decomposition on GPUs and AI (Artificial Intelligence) accelerators. In this paper, we propose a new TVM autotuning framework using Bayesian Optimization and use the TVM tensor expression language to implement linear algebra kernels such as LU, Cholesky, and 3mm. We use these scientific computation kernels to evaluate the effectiveness of our methods on a GPU cluster, called Swing, at Argonne National Laboratory. We compare the proposed autotuning framework with the TVM autotuning framework AutoTVM with four tuners and find that our framework outperforms AutoTVM in most cases.
Apache TVM targets hardware platforms such as CPUs, GPUs, and ML accelerators. Figure 1 shows the TVM optimizing compiler framework. It supports models from popular deep learning frameworks such as TensorFlow, PyTorch, and ONNX, making it versatile and widely applicable. When a model is imported into Apache TVM, it is converted into a high-level intermediate representation using TVM's high-level model language Relay (Krishnan et al., 2017), a functional language and intermediate representation (IR) for neural networks. Relay applies graph-level optimization passes to optimize the model. TVM also provides the Tensor Expression (TE) language, a domain-specific language for describing tensor computations. After applying the high-level optimizations, Relay runs the FuseOps pass to partition the model into many small subgraphs and lowers the subgraphs to the TE representation. TE provides several schedule primitives to specify low-level loop optimizations, such as loop tiling, vectorization, parallelization, unrolling, and fusion. A schedule specifies the low-level loop optimizations for an operator or subgraph defined in TE.
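As a rough illustration of these schedule primitives, the following minimal sketch (matrix sizes, names, and the split factor are illustrative, not taken from our kernels) defines a single matrix multiplication in TE and applies loop tiling via split and reorder:

```python
import tvm
from tvm import te

N = 1024
A = te.placeholder((N, N), name="A")
B = te.placeholder((N, N), name="B")
k = te.reduce_axis((0, N), name="k")
C = te.compute((N, N), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")

# A schedule specifies low-level loop optimizations for the operator defined in TE.
s = te.create_schedule(C.op)
i, j = s[C].op.axis
(kr,) = s[C].op.reduce_axis
io, ii = s[C].split(i, factor=32)   # loop tiling: split i into outer/inner loops
jo, ji = s[C].split(j, factor=32)
s[C].reorder(io, jo, ii, ji, kr)    # reorder the loop nest for locality
print(tvm.lower(s, [A, B, C], simple_mode=True))  # inspect the lowered loop nest
```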
For the auto-tuning module, Apache TVM offers two approaches that automate the search for the best schedule of loop optimizations: AutoTVM (Chen et al., 2017; Chen et al., 2017) and AutoScheduler (Chen et al., 2017; Chen et al., 2017; Chen et al., 2017). AutoTVM relies on a predefined tunable-parameter search space to optimize the model, while AutoScheduler automatically generates the search space by analyzing the computation definition. Both auto-tuning modules search for the best schedule, comparing candidate schedules with statistical cost models and on-device measurements.
After the auto-tuning process, Apache TVM generates a JSON file containing all the schedules, from which the best schedule is selected based on the tuning results. Then each TE subgraph is transformed into the Tensor Intermediate Representation (TIR) and further optimized through low-level optimization passes. The optimized TIR is eventually lowered to the target compiler of the hardware platform, resulting in final optimized code ready for deployment in production. Ultimately, the compiler-specific generated code can be translated into machine code, ensuring the optimized model can be efficiently executed on the target hardware platform with a lightweight TVM runtime.
In this paper, we investigate the effectiveness of the auto-tuning modules in AutoTVM, apply Bayesian Optimization to auto-tune tensor computations for TVM, and compare their performance.
### **ytopt: A ML-based Autotuning Tool Using Bayesian Optimization**
ytopt (Chen et al., 2017; Chen et al., 2017) is a machine-learning-based search software package that consists of sampling a small number of input parameter configurations, evaluating them, and progressively fitting a surrogate model over the input-output space until exhausting the user-defined time or the maximum number of evaluations. The package is built on Bayesian Optimization to solve such optimization problems.
Figure 2 presents the framework for autotuning various applications. The application runtime is the primary user-defined metric. We analyze an application code to identify the important tunable application and system parameters to define the parameter space using ConfigSpace (Chen et al., 2017). We use the tunable parameters to parameterize an application code as a code mold. ytopt starts with the user-defined parameter space, the code mold, and a user-defined interface that specifies how to evaluate the code mold with a particular parameter configuration. The search method within ytopt uses Bayesian optimization, where a dynamically updated Random Forest surrogate model, which learns the relationship between the configurations and the performance metric, is used to balance exploration and exploitation of the search space. In the exploration phase, the search evaluates parameter configurations that improve the quality of the surrogate model, and in the exploitation phase, the search evaluates parameter configurations that are closer to the previously found high-performing parameter configurations. The balance is achieved through the use of the lower confidence bound (LCB) acquisition function, which uses the surrogate model's predicted values of the unevaluated parameter configurations and the corresponding uncertainty values.
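The following minimal sketch illustrates this surrogate-plus-LCB search loop; the `evaluate` and `sample_space` callables are placeholders for the user-defined interface and are not the actual ytopt API:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def lcb(model, candidates, kappa=1.96):
    # Lower confidence bound: mean prediction minus kappa * std across trees.
    preds = np.stack([tree.predict(candidates) for tree in model.estimators_])
    return preds.mean(axis=0) - kappa * preds.std(axis=0)

def bayesian_search(evaluate, sample_space, n_init=8, n_iter=32):
    # evaluate(x) -> runtime (lower is better); sample_space(n) -> n random configs.
    X = sample_space(n_init)
    y = np.array([evaluate(x) for x in X])
    for _ in range(n_iter):
        model = RandomForestRegressor(n_estimators=50).fit(X, y)   # surrogate model
        cand = sample_space(256)
        x_next = cand[np.argmin(lcb(model, cand))]  # low predicted runtime (exploit)
        X = np.vstack([X, x_next])                  # or high uncertainty (explore)
        y = np.append(y, evaluate(x_next))
    return X[np.argmin(y)], y.min()
```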
## 3. Proposed Autotuning Framework
For the auto-tuning module, Apache TVM leverages two approaches for auto-tuning: AutoTVM and AutoScheduler in Figure 1. AutoTVM relies on a predefined tunable-parameter search space to optimize the model, while AutoScheduler automatically generates the search space by analyzing the computation definition. Because AutoScheduler's search space is not explicit, it is hard to define an equivalent search space for comparing different tuning strategies. In this paper, we therefore focus on AutoTVM, which requires a predefined tunable-parameter search space and provides four tuner strategies, as follows:
* RandomTuner: enumerate the space in a random order;
* GridSearchTuner: enumerate the space in a grid search order;
* GATuner: use a genetic algorithm to search through the space;
* XGBTuner: train an XGBoost model (Chen et al., 2017) to predict the runtime of the lowered IR and pick the next batch according to the prediction.
In our recent work (Chen et al., 2017; Chen et al., 2017), we developed and enhanced our auto-tuning framework ytopt to tune performance and energy for various scientific applications on large-scale HPC systems. One question arises: can we replace AutoTVM with ytopt to autotune TVM-based scientific applications more efficiently? This question is the motivation for this work.
Figure 3 presents the proposed TVM autotuning framework using ytopt. We basically replace the autotuning modules in Figure 1 with the ytopt module. Based on the TE code, we identify tunable parameters and use them to define the parameter space and to parameterize the TE code to generate its code mold. ytopt starts
Figure 2. ytopt Autotuning Framework
with the user-defined parameter space, the code mold, and user-defined interface that specifies how to evaluate the code mold with a particular parameter configuration.
The iterative phase of the autotuning framework has the following steps:
1. Bayesian optimization in ytopt selects a parameter configuration for evaluation.
2. The code mold is configured with the selected configuration to generate a new TE code.
3. The new code is compiled with other codes needed to generate an executable (machine code).
4. The machine code is executed to evaluate the application with the selected parameter configuration.
5. The resulting application runtime (user-defined metric) is sent back to ytopt and recorded in the performance database.
Steps 1-5 are repeated until the maximum number of evaluations \(n\) or the wall-clock time is exhausted for the autotuning run. In the end, we query the performance database to output the optimization specification for the best configuration.
In this way, the proposed autotuning framework identifies the best configuration for the TE code on a target system. This is different from AutoTVM with four tuners, where AutoTVM identifies the best configuration and passes it to the TE code to generate the machine code for evaluation on the target system.
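A minimal sketch of the user-defined evaluation interface for one configuration (steps 2-5) is shown below; the file names and the `#P0`-style substitution markers in the code mold are hypothetical conventions used only for illustration:

```python
import subprocess
import time

def evaluate_config(config, code_mold="3mm_mold.py", out_file="3mm_tuned.py"):
    # Step 2: instantiate the code mold with the selected parameter configuration.
    with open(code_mold) as f:
        source = f.read()
    for name, value in config.items():      # e.g. {"P0": 32, "P1": 8, ...}
        source = source.replace(f"#{name}", str(value))
    with open(out_file, "w") as f:
        f.write(source)
    # Steps 3-4: build and run the generated TE code; in practice the kernel time
    # would be measured inside the generated script rather than as wall-clock time.
    start = time.time()
    subprocess.run(["python", out_file], check=True)
    runtime = time.time() - start
    # Step 5: the runtime is returned to the search and recorded in the database.
    return runtime
```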
## 4. Linear Algebra Benchmarks
PolyBench 4.2 (Krishnan et al., 2017) is a benchmark suite of 30 numerical computations with static control flow, extracted from operations in various application domains (linear algebra computations, image processing, physics simulation, and data mining). For the sake of simplicity, in this paper we focus on the linear algebra kernel 3mm and the linear algebra solvers Cholesky and LU.
Cholesky is the Cholesky decomposition from the linear algebra solvers, which factors a matrix into triangular matrices and entails \(A=LL^{T}\), where \(L\) is an N\(\times\)N lower triangular matrix and \(A\) is an N\(\times\)N positive-definite matrix. We use two problem sizes for our case study: the large dataset (N = 2000) and the extralarge dataset (N = 4000).
LU is the LU (Lower-Upper) decomposition without pivoting from the linear algebra solvers and entails \(A=LU\), where \(L\) is an N\(\times\)N lower triangular matrix and \(U\) is an N\(\times\)N upper triangular matrix. We use two problem sizes for our case study: the large dataset (N = 2000) and the extralarge dataset (N = 4000).
3mm is a linear algebra kernel that consists of three matrix multiplications and entails \(G=(A*B)*(C*D)\), where \(A\) is an N\(\times\)L matrix, \(B\) is an L\(\times\)M matrix, \(C\) is an M\(\times\)O matrix, and \(D\) is an O\(\times\)P matrix. We use two problem sizes for our case study: the large dataset (N = 800, L = 900, M = 1000, O = 1100, P = 1200) and the extralarge dataset (N = 1600, L = 1800, M = 2000, O = 2200, P = 2400).
We use Apache TVM to implement 3mm, Cholesky and LU based on the algorithms from the C implementation of PolyBench 4.2 and use them as the baselines to conduct the autotuning experiments. Table 1 shows the parameter space size for each application. We use TVM TE (tensor expression) to implement these kernels. For simplicity, we mainly focus on the split optimization (loop tiling), tuning the block sizes used for splitting and reordering the loops.
For instance, the basic TE implementation 3mm_basic() of 3mm in Python is as follows:
```python
import tvm
from tvm import te

def mm3_basic(N, L, M, O, P, dtype):
    # Basic TE definition of the 3mm kernel G = (A*B)*(C*D); 3mm_basic in the text.
    A = te.placeholder((N, L), name="A", dtype=dtype)
    B = te.placeholder((L, M), name="B", dtype=dtype)
    C = te.placeholder((M, O), name="C", dtype=dtype)
    D = te.placeholder((O, P), name="D", dtype=dtype)
    k1 = te.reduce_axis((0, L), name="k1")
    E = te.compute((N, M), lambda i, j: te.sum(A[i, k1] * B[k1, j], axis=k1), name="E")
    k2 = te.reduce_axis((0, O), name="k2")
    F = te.compute((M, P), lambda i, j: te.sum(C[i, k2] * D[k2, j], axis=k2), name="F")
    k3 = te.reduce_axis((0, M), name="k3")
    G = te.compute((N, P), lambda i, j: te.sum(E[i, k3] * F[k3, j], axis=k3), name="G")
    return [A, B, C, D, E, F, G]
```
The corresponding tunable tile sizes are defined in ConfigSpace as ordinal hyperparameters, one per split factor, whose candidate values are divisors of the corresponding loop extents (the candidate lists for p1 to p4 are abridged here):

```python
import ConfigSpace.hyperparameters as CSH

P0 = CSH.OrdinalHyperparameter(name='p0', sequence=[1, 2, 4, 5, 8, 10, 16, 20, 25, 40, 50, 80, 100, 125, 200, 250, 400, 500, 1000, 2000])
P1 = CSH.OrdinalHyperparameter(name='p1', sequence=[...])
P2 = CSH.OrdinalHyperparameter(name='p2', sequence=[...])
P3 = CSH.OrdinalHyperparameter(name='p3', sequence=[...])
P4 = CSH.OrdinalHyperparameter(name='p4', sequence=[...])
```
Similarly, for Cholesky with the extralarge problem size, ytopt outperformed the four AutoTVM tuners shown in Figure 10, with the smallest autotuning process time. Figure 11 shows that ytopt identified the best tensor size 80x32, resulting in the smallest runtime (13.99 s).
For 3mm, Figure 12 and Figure 13 show the autotuning process over time for 3mm with the extralarge problem size using the four AutoTVM tuners and ytopt. AutoTVM-XGB achieves the smallest runtime of 30.99 s with the tensor sizes (1000x32, 600x2, 15x40); however, ytopt outperforms the other three AutoTVM tuners, identifying the tensor sizes (1x5, 120x25, 60x100) with a runtime of 31.1 s.
Overall, regarding the effectiveness of AutoTVM, the grid search tuner performed the worst in all the experiments, and the XGBoost search tuner performed at most 56 evaluations regardless of how many evaluations were requested. ytopt outperformed AutoTVM in most cases and took the smallest autotuning process time for the extralarge problem sizes, even though the AutoTVM tuners use statistical cost models to predict the next tiling factor. For the large problem sizes, because of these statistical cost models, AutoTVM takes a relatively smaller autotuning process time in some cases.
## 6. Conclusions
TVM provides an opportunity for us to improve the performance of dense matrix factorizations such as LU and Cholesky on GPUs
Figure 8. Performance comparison for Cholesky with large problem size
Figure 6. Performance comparison for LU with extralarge problem size
Figure 7. Minimum runtimes for LU with extralarge problem size
Figure 9. Minimum runtimes for Cholesky with large problem size
and other accelerators. In this paper, we proposed a new TVM auto-tuning framework using Bayesian Optimization in ytopt, used the TVM tensor expression language to implement linear algebra kernels such as LU, Cholesky, and 3mm, and then used these kernels to evaluate its effectiveness. We compared the proposed framework with the TVM auto-tuning framework AutoTVM with four tuners (GATuner, RandomTuner, GridSearchTuner, and XGBTuner) and found that our framework outperformed AutoTVM in most cases. Regarding the effectiveness of AutoTVM with four tuners, the grid search tuner performed the worst in all the experiments. The proposed autotuning framework outperformed AutoTVM and took the smallest autotuning process time in most cases. Future work will focus on using the proposed autotuning framework to tune deep learning models and operators using ResNet, MobileNet, and Deep Convolutional Generative Adversarial Networks on GPUs and AI accelerators.
## 7. Acknowledgments
This work was supported in part by DOE ECP PROTEAS-TUNE and in part by DOE ASCR RAPIDS2. We acknowledge the Argonne Laboratory Computing Resource Center (LCRC) for use of the GPU cluster Swing under LCRC project EE-ECP, and thank Prasanna Balaprakash from Oak Ridge National Laboratory for an initial discussion. This material is based upon work supported by the U.S. Department of Energy, Office of Science, under contract number DE-AC02-06CH11357.
Figure 11. Minimum runtimes for Cholesky with extralarge problem size
Figure 12. Performance comparison for 3mm with extralarge problem size
Figure 10. Performance comparison for Cholesky with extralarge problem size |
2309.17026 | Forecasting the changes between endemic and epidemic phases of a
contagious disease, with the example of COVID-19 | Predicting the endemic/epidemic transition during the temporal evolution of a
contagious disease.
Methods: Defining indicators for detecting the transition endemic/epidemic,
with four scalars to be compared, calculated from the daily reported new
cases: coefficient of variation, skewness, kurtosis, and entropy. The
indicators selected are related to the shape of the empirical distribution of
the new cases observed over 14 days. This duration has been chosen to smooth
out the effect of weekends when fewer new cases are registered. For finding a
forecasting variable, we have used the PCA (principal component analysis),
whose first principal component (a linear combination of the selected
indicators) explains a large part of the observed variance and can then be used
as a predictor of the phenomenon studied (here the occurrence of an epidemic
wave).
Results: A score has been built from the four proposed indicators using a
Principal Component Analysis (PCA), which allows an acceptable level of
forecasting performance by giving a realistic retro-predicted date for the
rupture of the stationary endemic model corresponding to the entrance in the
epidemic exponential growth phase. This score is applied to the
retro-prediction of the limits of the different phases of the COVID-19 outbreak
in successive endemic/epidemic transitions in three countries: France, India,
and Japan.
Conclusion: We provided a new forecasting method for predicting an epidemic
wave occurring after an endemic phase for a contagious disease. | Jacques Demongeot, Pierre Magal, Kayode Oshnubi | 2023-09-29T07:19:34Z | http://arxiv.org/abs/2309.17026v1 | # Forecasting the changes between
###### Abstract
Predicting the endemic/epidemic transition during the temporal evolution of a contagious disease.
_Methods:_ Defining indicators for detecting the transition endemic/epidemic, with four scalars to be compared, calculated from the daily reported new cases: coefficient of variation, skewness, kurtosis, and entropy. The indicators selected are related to the shape of the empirical distribution of the new cases observed over 14 days. This duration has been chosen to smooth out the effect of weekends when fewer new cases are registered. For finding a forecasting variable, we have used the PCA (principal component analysis), whose first principal component (a linear combination of the selected indicators) explains a large part of the observed variance and can then be used as a predictor of the phenomenon studied (here the occurrence of an epidemic wave).
_Results:_ A score has been built from the four proposed indicators using a Principal Component Analysis (PCA), which allows an acceptable level of forecasting performance by giving a realistic retro-predicted date for the rupture of the stationary endemic model corresponding to the entrance into the epidemic exponential growth phase. This score is applied to the retro-prediction of the limits of the different phases of the COVID-19 outbreak in successive endemic/epidemic transitions in three countries: France, India, and Japan.
_Conclusion:_ We provided a new forecasting method for predicting an epidemic wave occurring after an endemic phase for a contagious disease.
**Keywords:**_Contagious disease; Endemic phase; Epidemic wave; Endemic/epidemic transition forecasting; COVID-19 epidemic wave prediction_
_The paper is dedicated to James D. Murray, whose pioneering work in mathematical biology we admire._
## 1 Introduction
Finding a reliable prediction method of the frontiers between different stationary and non-stationary periods of a time series is a challenging problem. Since the seminal work by Deshayes and Picard on the stationarity rupture in time series [1, 2, 3], many works have dealt with the break in stationarity [4, 5, 6, 7, 8, 9], the most recent using the concepts of functional statistics [10, 11, 12, 13, 14, 15, 16]. Indeed, stationarity is crucial as many forecasting models of time series rely on stationarity for easy modeling and obtaining reliable results. A stationary time series presents statistical properties which do not change over time, such as the empirical distribution of the random variable observed in the series, with its main characteristic parameters: mean, coefficient of variation, moments, and entropy. In the event of a break in stationarity, there may be a sudden transition with an abrupt change in the values of these parameters and the appearance of a non-constant trend. The problem of the existence of this transition arises with particular acuity in the case of contagious diseases, which alternate stationary endemic periods and epidemic peaks with an initial exponential trend, which must be predicted to prevent the spread of the disease from giving rise to a pandemic.
The term endemic phase is understood to mean a period in which there is an equilibrium in a model whose parameters have changed in value following an epidemic phase, due to mitigation measures or a change in the virulence of the infectious agent. In the case of chronic diseases observed outside epidemic phases (bacterial meningitis, rabies, smallpox before its eradication, etc.), the definition of endemicity corresponds more to the sporadic appearance (by birth or human displacement) of individuals much more susceptible than the general population to the infectious agent [17, 18, 19].
We will propose in this article a method to estimate the breakdown of endemic stationarity based on four parameters linked to the empirical distribution of the number of daily reported new cases of COVID-19 in several countries, parameters whose isolated or joint predictive power will be analyzed. These parameters are the coefficient of variation, the skewness, the kurtosis, and the entropy of the stationary empirical measure calculated in a moving window.
The pair formed by the succession of an endemic phase and an epidemic wave in the COVID-19 outbreak can be considered as a functional unit, the break between the two phases having to be found [20, 21, 22, 23, 24, 25]. The endemic phase is characterized by a low average level of new cases, with low variance. At the start of the epidemic phase, the average number of cases grows exponentially, and the standard deviation grows proportionally at the beginning, then saturates, behaving like additive noise partly independent of the growth; this explains the increase and then decrease of the coefficient of variation and kurtosis, and therefore of the first principal component, at the endemic/epidemic boundary.
## 2 Materials and Methods
We use in the following a moving window of length 14 days for calculating the empirical distribution of the random variable equal to the number of daily reported new cases. The empirical distribution \(N_{t}\) on day \(t\) is obtained from the daily number of reported new cases considered as a random variable \(N_{t}=(N(t-13),N(t-12),\ldots,N(t))\).
The length of the window has been chosen for eliminating the effect of the week periodicity observed in data due to the lack of reporting each weekend. Indeed, daily new infection cases are highly affected by weekends, such that new case numbers are lowest at the start of the week, and increase afterwards [26].
### Empirical first four moments
In the following we will use the terminology _endemic period_ to describe a period during which the daily new cases occur randomly around some mean value. An _epidemic wave_ is a period during which the daily new cases occur by contact between susceptible and infected individuals.

Our goal in this paper is to explore the transition between the endemic period and the following epidemic wave, which will be studied by calculating several parameters in a 14-day moving window around the frontier at which we suspect this transition occurred.
We consider the first four moments of \(N_{t}\). We start with the _mean_
\[\mu=E(N_{t})=\frac{\sum_{i=0}^{13}N(t-i)}{14}, \tag{2.1}\]
where \(E\) is the expectation operator, with the _standard deviation_
\[\sigma=E\left(\left(N_{t}-\mu\right)^{2}\right)^{1/2}=\sqrt{\frac{\sum_{i=0}^{ 13}\left(N(t-i)-\mu\right)^{2}}{14}}. \tag{2.2}\]
From these two first parameters, we can compute the _coefficient of variation_
\[CV(N_{t})=\frac{\sigma}{\mu}. \tag{2.3}\]
The _skewness_ of the random variable \(N_{t}\) is the third standardized moment, defined as
\[Skew(N_{t})=E\left(\left(\frac{N_{t}-\mu}{\sigma}\right)^{3}\right). \tag{2.4}\]
Recall that the skewness verifies
\[Skew(N_{t})=\frac{E(N_{t}^{3})-3\mu\sigma^{2}-\mu^{3}}{\sigma^{3}}=\frac{E(N_{ t}^{3})}{\sigma^{3}}-\left(3\frac{1}{CV}+\frac{1}{CV^{3}}\right). \tag{2.5}\]
The _kurtosis_ is the fourth standardized moment, defined as
\[Kurt(N_{t})=E\left(\left(\frac{N_{t}-\mu}{\sigma}\right)^{4}\right). \tag{2.6}\]
The _empirical entropy_\(\mathcal{E}\) of the empirical distribution is defined as follows:
\[\mathcal{E}(N_{t})=-\sum_{i=1:d\text{ with }p_{i}>0}p_{i}\log p_{i}, \tag{2.7}\]
where the \(p_{i}\) are the weights of a histogram on d value intervals of \(N_{t}\). In the Results' section, we use the _approximate entropy_. We refer to [36] for more details.
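A minimal sketch of how these four indicators can be computed on a 14-day window is given below (using NumPy/SciPy; the number of histogram bins \(d\) is illustrative, and the entropy shown is the histogram entropy of Eq. (2.7), whereas the Results section uses the approximate entropy):

```python
import numpy as np
from scipy.stats import skew, kurtosis

def window_indicators(new_cases, d=10):
    # new_cases: the last 14 daily reported new-case counts N(t-13), ..., N(t).
    x = np.asarray(new_cases, dtype=float)
    mu, sigma = x.mean(), x.std()
    cv = sigma / mu                      # coefficient of variation, Eq. (2.3)
    sk = skew(x)                         # third standardized moment, Eq. (2.4)
    ku = kurtosis(x, fisher=False)       # fourth standardized moment, Eq. (2.6)
    p, _ = np.histogram(x, bins=d)       # empirical distribution on d value intervals
    p = p[p > 0] / p.sum()
    entropy = -(p * np.log(p)).sum()     # empirical entropy, Eq. (2.7)
    return cv, sk, ku, entropy
```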
### Phenomenological model used for multiple epidemic waves
To represent the data, we used a phenomenological model to fit the curve of cumulative reported cases. Such an idea is not new since it was already proposed by Bernoulli [17] in 1760 in the context of the smallpox epidemic. Here we used the so-called Bernoulli-Verhulst [37] model to describe the epidemic phase. Bernoulli [17] investigated an epidemic phase followed by an endemic phase. This appears clearly in Figures 9 and 10 of the paper by Dietz and Heesterbeek [38], who revisited the original article of Bernoulli. We also refer to Blower [18] for another article revisiting the original work of Bernoulli. Several works comparing cumulative reported cases data and the Bernoulli-Verhulst model appear in the literature (see [39, 40, 41]). The Bernoulli-Verhulst model is sometimes called the Richards model, although Richards's work came much later, in 1959.
The phenomenological model deals with data series of new infectious cases decomposed into two successive phases: 1) endemic phases followed by 2) epidemic phases.
**Endemic phase:** During the endemic phase, the dynamics of new cases appears to fluctuate around an average value independently of the number of cases. Therefore the average cumulative number of cases is given by
\[\text{CR}(t)=N_{0}+(t-t_{0})\times a,\text{ for }t\in[t_{0},t_{1}], \tag{2.8}\]
where \(t_{0}\) denotes the beginning of the endemic phase, \(N_{0}\) is the number of new cases at time \(t_{0}\), and \(a\) is the average value of the daily number of new cases.
We assume that the average daily number of new cases is constant. Therefore the daily number of new cases is given by
\[\text{CR}^{\prime}(t)=a. \tag{2.9}\]
**Epidemic phase:** In the epidemic phase, the new cases contribute to producing secondary cases. Therefore the daily number of new cases is no longer constant, and the cumulative number of cases varies with time as follows
\[\text{CR}(t)=N_{\text{base}}+\frac{\text{e}^{\chi(t-t_{0})}N_{0}}{\left[1+ \frac{N_{0}^{\theta}}{N_{\infty}^{\theta}}\left(\text{e}^{\chi\theta(t-t_{0})}- 1\right)\right]^{1/\theta}},\text{ for }t\in[t_{0},t_{1}]. \tag{2.10}\]
In other words, the daily number of new cases follows the Bernoulli-Verhulst [17, 37] equation. Namely, by setting
\[N(t)=\text{CR}(t)-N_{\text{base}}, \tag{2.11}\]
we obtain
\[N^{\prime}(t)=\chi\,N(t)\,\left[1-\left(\frac{N(t)}{N_{\infty}}\right)^{ \theta}\right], \tag{2.12}\]
completed with the initial value
\[N(t_{0})=N_{0}.\]
In the model, \(N_{\text{base}}+N_{0}\) corresponds to the value \(\text{CR}(t_{0})\) of the cumulative number of cases at time \(t=t_{0}\). The parameter \(N_{\infty}+N_{\text{base}}\) is the maximal value of the cumulative reported cases after the time \(t=t_{0}\). \(\chi>0\) is a Malthusian growth parameter, and \(\theta\) regulates the speed at which \(\text{CR}(t)\) increases to \(N_{\infty}+N_{\text{base}}\).
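For concreteness, the two phases of the phenomenological model, Eqs. (2.8) and (2.10), can be transcribed directly as follows (the fitting of the parameters to the data is not shown):

```python
import numpy as np

def cumulative_cases_endemic(t, t0, N0, a):
    # Endemic phase, Eq. (2.8): linear growth at the average rate a of daily new cases.
    return N0 + (t - t0) * a

def cumulative_cases_epidemic(t, t0, N0, N_base, N_inf, chi, theta):
    # Epidemic phase, Eq. (2.10): Bernoulli-Verhulst growth of the cumulative cases.
    growth = np.exp(chi * (t - t0)) * N0
    brake = (1.0 + (N0**theta / N_inf**theta)
             * (np.exp(chi * theta * (t - t0)) - 1.0)) ** (1.0 / theta)
    return N_base + growth / brake
```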
## 3 Results
Here we use cumulative numbers of reported new cases for COVID-19 in France, India, and Japan taken from WHO [42]. Data shows a succession of endemic periods (yellow background color regions) followed by epidemic waves (blue background color regions).
### Data for France
In Figure 2, each colored segment corresponds to a single endemic or epidemic phenomenological model. This change of color may occur due to a change of dynamic inside an epidemic phase when a second wave comes before the end of the previous one.
Figure 1: _In this figure we plot in blue the phenomenological model and in black the data. Data is the cumulative reported number of new cases with a 14-day rolling average._
By performing a principal component analysis (i.e., the MATLAB function pca) on the standardized variables \(CV_{s}(N_{t}),Skew_{s}(N_{t}),Kurt_{s}(N_{t}),\mathcal{E}_{s}(N_{t})\), we
Figure 3: _In this figure we plot in blue the first derivative of the phenomenological model and in black the data. Data is the daily reported number of new cases with a 14-day rolling average._
Figure 2: _In this figure we plot with multiple colors the phenomenological models obtained for each period._
obtain the percentage of the variance explained by each principal component
\[Explain=\left(\begin{array}{c}70.71\\ 21.92\\ 4.94\\ 2.43\end{array}\right)\]
and the matrix giving the projection coefficients of the principal components
\[coeff=\left(\begin{array}{cccc}0.5527&-0.1480&0.7163&-0.3995\\ 0.5631&-0.2162&-0.0348&0.7968\\ 0.5577&-0.0795&-0.6955&-0.4461\\ 0.2577&0.9618&0.0449&0.0808\end{array}\right).\]
By using the first column of the above matrix, we deduce the first principal component
\[0.55CV_{s}(N_{t})+0.56Skew_{s}(N_{t})+0.56Kurt_{s}(N_{t})+0.26\mathcal{E}_{s}( N_{t}) \tag{3.1}\]
which explains 70.71% of the variability.
We deduce that Kurtosis, Skewness, and the coefficient of variation (in decreasing order of importance) best explain the variability.
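The score can be reproduced from the standardized indicators with a standard eigen-decomposition of their covariance matrix, as sketched below (equivalent to the MATLAB pca call up to the arbitrary sign of the loadings):

```python
import numpy as np

def first_principal_component(indicators):
    # indicators: array of shape (n_days, 4) with columns CV, Skew, Kurt, Entropy.
    X = np.asarray(indicators, dtype=float)
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)           # standardize each indicator
    cov = np.cov(Xs, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)              # eigen-decomposition
    order = np.argsort(eigvals)[::-1]
    w = eigvecs[:, order[0]]                            # loadings of the first component
    explained = eigvals[order] / eigvals.sum() * 100.0  # % of variance explained
    return Xs @ w, w, explained                         # score C1(t), loadings, variance
```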
### Data for India
In this subsection, we consider the data for India. We present the same curves as for France, describing successively raw data and simulated results by the phenomenological models.
Figure 4: _In this figure we plot the first principal component for France (see formula (3.1)). The horizontal green lines correspond to \(\pm 1\)._
In Figure 6, each colored segment corresponds to a single endemic or epidemic phenomenological model. This change of color may occur due to a change of dynamic inside an epidemic phase when a second wave comes before the end of the previous one.
Figure 5: _In this figure we plot in blue the phenomenological model and in black the data. Data is the cumulative reported number of new cases with a 14-day rolling average._
Figure 6: _In this figure we plot with multiple colors the phenomenological models obtained for each period._
By performing a principal component analysis (i.e., the MATLAB function pca) on the standardized variables \(CV_{s}(N_{t}),Skew_{s}(N_{t}),Kurt_{s}(N_{t})\), and \(\mathcal{E}_{s}(N_{t})\), we obtain the percentage of the variance explained by each principal component
\[Explain=\left(\begin{array}{c}53.13\\ 22.55\\ 14.22\\ 10.09\end{array}\right)\]
and the matrix giving the projection coefficients of the principal components
\[coeff=\left(\begin{array}{cccc}0.5161&-0.2296&0.8212&-0.0810\\ 0.5714&-0.0587&-0.4434&-0.6881\\ 0.5660&-0.2235&-0.3478&0.7133\\ 0.2946&0.9455&0.0896&0.1062\end{array}\right).\]
By using the first column of the above matrix, we deduce the first principal component
\[0.52CV_{s}(N_{t})+0.57Skew_{s}(N_{t})+0.57Kurt_{s}(N_{t})+0.29\mathcal{E}_{s} (N_{t}) \tag{3.2}\]
which explains 53% of the variability.
Figure 7: _In this figure we plot in blue the first derivative of the phenomenological model and in black the data. Data is the daily reported number of new cases with a 14-day rolling average._
### Data for Japan
In this subsection, we consider the data for Japan. We present the same curves as for France and India, describing successively raw data and simulated results by the phenomenological models.
Figure 8: _In this figure we plot the first principal component for India (see formula (3.2)). The horizontal green lines correspond to the values \(\pm 1\)._
Figure 9: _In this figure we plot in blue the phenomenological model and in black the data. Data is the cumulative reported number of new cases with a 14-day rolling average._
By performing a principal component analysis (i.e., the MATLAB function pca) on the standardized variables \(CV_{s}(N_{t}),Skew_{s}(N_{t}),Kurt_{s}(N_{t}),\) and \(\mathcal{E}_{s}(N_{t}),\)
Figure 11: _In this figure we plot in blue the first derivative of the phenomenological model and in black the data. Data is the daily reported number of new cases with a 14-day rolling average._
Figure 10: _In this figure we plot with multiple colors the phenomenological models obtained for each period._
we obtain the percentage of the variance explained by each principal component
\[Explain=\left(\begin{array}{c}71.62\\ 17.54\\ 6.87\\ 3.97\end{array}\right)\]
and the matrix giving the projection coefficients of the principal components
\[coeff=\left(\begin{array}{cccc}0.5234&-0.2243&0.7969&-0.2018\\ 0.5452&-0.2406&-0.2310&0.7691\\ 0.5401&-0.1760&-0.5575&-0.6054\\ 0.3703&0.9278&0.0270&0.0358\end{array}\right).\]
By using the first column of the above matrix, we deduce the first principal component
\[0.52CV_{s}(N_{t})+0.55Skew_{s}(N_{t})+0.54Kurt_{s}(N_{t})+0.37\mathcal{E}_{s}( N_{t}) \tag{3.3}\]
which explains 71% of the variability.
We deduce that Kurtosis, Skewness, and the coefficient of variation (in decreasing order of importance) best explain the variability.
## 4 Discussion
The forecasting of the epidemic waves of the COVID-19 outbreak is based on a change in the nature of the time series dynamics related to the number of
Figure 12: _In this figure we plot the first principal component for Japan (see formula (3.3)). The horizontal green lines correspond to the values \(\pm 1\)._
daily new reported cases of this contagious disease. This change can concern the moments or the entropy of the empirical distribution of the stationary component at the end of the endemic phase, which is disrupted when a non-constant trend occurs, marking the start of an epidemic wave.
From a careful examination of Figures 4, 8 and 12, we can conclude that there are recurrent, though not constant, patterns for \(C_{1}(t)\) identifiable in the three studied countries and for a majority of their endemic/epidemic transitions.
The predictive power of the first principal component \(C_{1}(t)\) can be quantified by its performance ratio, that is, by the percentage of correct retro-predictions obtained by fixing variation thresholds to forecast the occurrence of an epidemic wave. For France, if we fix the threshold to the value 1, \(C_{1}(t)\) correctly predicts an epidemic outbreak one week after a decrease from this threshold value, a value not reached elsewhere in the endemic phase. This prediction is correct only 53% of the time for India. For Japan, the performance of the retro-prediction is 71%, and it is 70% for France.
It is clear that the level of prediction is not very high (71% in the best case), but, in the absence of a currently reliable predictor, we can consider that it is sufficient to trigger mitigation measures at the level of a population. A more systematic study of the changing shape of the empirical distribution is needed, looking at many epidemic waves in many countries. A parameter measuring the deviation from classical laws (such as those linked to the Kolmogorov-Smirnov, chi-square or Shapiro-Wilk tests in the case of the normal law) could thus be added in subsequent works.
As previously noticed in [43], a classical epidemic peak with a near-symmetric growth and decline may be preceded or followed by a shoulder-like behavior corresponding to a prematurely stopped wave followed by another. Shoulder-like behavior for epidemic waves can be explained by using multiple sub-group epidemic models. This idea was first explored for SARS-CoV-1 in [44] and reconsidered for SARS-CoV-2 by [45]. But the changes between endemic and epidemic are still challenging to model.
## 5 Conclusions
We have studied in this article the evolution of four parameters related to the dynamics of the number of the daily reported new cases \(N_{t}\) at day \(t\) of a contagious disease, which can serve as early indicators of the appearance of epidemic waves from a previous endemic state. By applying this parameter calculation to COVID-19, we showed that a score obtained by PCA based on the linear combination of the four chosen parameters with specific coefficients for each one could forecast the variations of the empirical distribution of the daily reported number of new cases \(N_{t}\), and can therefore often be considered a good predictor (for the countries and the epidemic waves studied) of the endemic-epidemic transition. A systematic study of contagious diseases other than COVID-19 is necessary to confirm this forecasting property's existence. Still, we can already propose this score as a realistic indicator of the next occurrence of an epidemic
outbreak from a change in the dynamics of the observed daily new cases during the endemic periods.
|
2309.06895 | MagiCapture: High-Resolution Multi-Concept Portrait Customization | Large-scale text-to-image models including Stable Diffusion are capable of
generating high-fidelity photorealistic portrait images. There is an active
research area dedicated to personalizing these models, aiming to synthesize
specific subjects or styles using provided sets of reference images. However,
despite the plausible results from these personalization methods, they tend to
produce images that often fall short of realism and are not yet on a
commercially viable level. This is particularly noticeable in portrait image
generation, where any unnatural artifact in human faces is easily discernible
due to our inherent human bias. To address this, we introduce MagiCapture, a
personalization method for integrating subject and style concepts to generate
high-resolution portrait images using just a few subject and style references.
For instance, given a handful of random selfies, our fine-tuned model can
generate high-quality portrait images in specific styles, such as passport or
profile photos. The main challenge with this task is the absence of ground
truth for the composed concepts, leading to a reduction in the quality of the
final output and an identity shift of the source subject. To address these
issues, we present a novel Attention Refocusing loss coupled with auxiliary
priors, both of which facilitate robust learning within this weakly supervised
learning setting. Our pipeline also includes additional post-processing steps
to ensure the creation of highly realistic outputs. MagiCapture outperforms
other baselines in both quantitative and qualitative evaluations and can also
be generalized to other non-human objects. | Junha Hyung, Jaeyo Shin, Jaegul Choo | 2023-09-13T11:37:04Z | http://arxiv.org/abs/2309.06895v2 | # MagiCapture: High-Resolution Multi-Concept Portrait Customization
###### Abstract
Large-scale text-to-image models including Stable Diffusion are capable of generating high-fidelity photorealistic portrait images. There is an active research area dedicated to personalizing these models, aiming to synthesize specific subjects or styles using provided sets of reference images. However, despite the plausible results from these personalization methods, they tend to produce images that often fall short of realism and are not yet on a commercially viable level. This is particularly noticeable in portrait image generation, where any unnatural artifact in human faces is easily discernible due to our inherent human bias. To address this, we introduce MagiCapture, a personalization method for integrating subject and style concepts to generate high-resolution portrait images using just a few subject and style references. For instance, given a handful of random selfies, our fine-tuned model can generate high-quality portrait images in specific styles, such as passport or profile photos. The main challenge with this task is the absence of ground truth for the composed concepts, leading to a reduction in the quality of the final output and an identity shift of the source subject. To address these issues, we present a novel Attention Refocusing loss coupled with auxiliary priors, both of which facilitate robust learning within this weakly supervised learning setting. Our pipeline also includes additional post-processing steps to ensure the creation of highly realistic outputs. MagiCapture outperforms other baselines in both quantitative and qualitative evaluations and can also be generalized to other non-human objects.
## Introduction
To obtain high-quality portrait images suitable for resumes or wedding events, individuals typically have to visit a photo studio, followed by a costly and time-consuming process of photo retouching. Imagine a scenario where all that's required is a few selfie images and reference photos, and you could receive high-quality portrait images in specific styles,
such as passport or profile photos. This paper aims to automate this process.
Recent advancements in large-scale text-to-image models, such as Stable Diffusion [14] and Imagen [1], have made it possible to generate high-fidelity, photorealistic portrait images. The active area of research dedicated to personalizing these models seeks to synthesize specific subjects or styles using provided sets of training images. In this work, we formulate our task as a multi-concept customization problem. Here, the source content and reference style are learned separately, and the composed output is generated. Unlike text-driven editing, using reference images allows users to provide fine-grained guidance, making it more suitable for this task.
However, despite the promising results achieved by previous personalization methods, they often produce images that lack realism and fall short of commercial viability. This problem primarily arises from attempting to update the parameters of large models using only a small number of images. This decline in quality becomes even more evident in a multi-concept generation, where the absence of ground truth images for the composed concepts frequently leads to the unnatural blending of disparate concepts or deviation from the original concepts. This issue is particularly conspicuous in portrait image generation, as any unnatural artifacts or shifts in identity are easily noticeable due to our inherent human bias.
To address these issues, we present MagiCapture, a multi-concept personalization method for the fusion of subject and style concepts to generate high-resolution portrait images with only a few subject and style references. Our method employs composed prompt learning, incorporating the composed prompt as part of the training process, which enhances the robust integration of source content and reference style. This is achieved through the use of pseudo labels and auxiliary loss. Moreover, we propose the Attention Refocusing loss in conjunction with a masked reconstruction objective, a crucial strategy for achieving information disentanglement and preventing information leakage during inference. MagiCapture outperforms other baselines in both quantitative and qualitative assessments and can be generalized to other non-human objects with just a few modifications.
The main contributions of our paper are as follows:
* We introduce a multi-concept personalization method capable of generating high-resolution portrait images that faithfully capture the characteristics of both source and reference images.
* We present a novel Attention Refocusing loss combined with masked reconstruction objective, effectively disentangling the desired information from input images and preventing information leakage during the generation process.
* We put forth a composed prompt learning approach that leverages pseudo-labels and auxiliary loss, facilitating the robust integration of source content and reference style.
* In both quantitative and qualitative assessments, our method surpasses other baseline approaches and, with minor adjustments, can be adapted to generate images of non-human objects.
## Related Work
**Text-to-image diffusion models.** Diffusion models [13, 14, 15, 16] have recently achieved remarkable success in image generation, driving advancements in various applications and fields. Their powerful performance has significantly propelled the field of text-guided image synthesis [12, 13, 11, 15]. In particular, large-scale text-to-image diffusion models, trained on extensive text-image pair datasets, have set new benchmarks. Notable examples include Stable Diffusion [16] and Imagen [1]. Our work is built upon the pre-trained Stable Diffusion model.
**Personalization.** Personalizing generative models for specific concepts is a key goal in the vision field. With the rise of GANs, there have been efforts to fine-tune GANs, like Pivotal Tuning [12], based on GAN inversion [13]. More recently, studies have sought to personalize diffusion models using small image datasets. DreamBooth [11] fine-tunes entire weights, Textual Inversion [15] adjusts text embeddings, and Custom Diffusion [12] adapts the mapping matrix for the cross-attention layer. While effective in learning concepts, these models sometimes generate less realistic or identity-losing images. Methods like ELITE [13] and InstantBooth [14] employ a data-driven approach for encoder-based domain tuning, which is not directly comparable to our approach.
## Preliminaries
**Diffusion models.** Diffusion models [13, 14, 15] are a class of generative models that create images through an iterative denoising process. These models comprise a forward and a backward pass. During the forward pass, an input image \(x^{(0)}\) is progressively noised using the equation \(x^{(t)}=\sqrt{\alpha_{t}}x^{(0)}+\sqrt{1-\alpha_{t}}\epsilon\), where \(\epsilon\) represents standard Gaussian noise and \(\{\alpha_{t}\}\) is a pre-defined noise schedule with timestep \(t\), \(1<t<T\). During the backward pass, the generated image is obtained by denoising the starting noise \(x^{(T)}\) using a UNet \(\epsilon_{\theta}(x^{(t)},t)\), which is trained to predict the noise at the input timestep \(t\). Latent diffusion models (LDM) [14] are a variant of diffusion models where the denoising process occurs in the latent space. Specifically, an image encoder \(\mathcal{E}\) is used to transform the input image \(x\) into a latent representation \(z\), such that \(\mathcal{E}(x)=z\). During inference, the denoised latent representation is decoded to produce the final image \(x^{(0)\prime}=\mathcal{D}(z^{(0)})\), where \(\mathcal{D}\) represents the decoder of an autoencoder. Stable Diffusion [16] is a text-guided latent diffusion model (LDM) trained on large-scale text-image pairs.
It has the following objective:
\[\mathcal{L}_{\text{LDM}}=\mathbb{E}_{z,c,\epsilon,t}\Big{[}||\epsilon_{\theta}(z^ {(t)},t,c)-\epsilon||_{2}^{2}\Big{]}, \tag{1}\]
where \(c\) refers to the text condition.
**Customization of text-to-image models.** Several previous works have focused on the customization of text-to-image diffusion models, including DreamBooth (Ruiz et al., 2023), Textual Inversion (Gal et al., 2022), Custom Diffusion (Kumari et al., 2023), and others. These works employ pre-trained Stable Diffusion (Rombach et al., 2022), utilizing a small set of images, typically \(3\sim 5\) images, associated with a particular object or style and incorporating specialized text tokens to embed such concepts. For instance, when customizing models for a specific dog, the prompt "a [\(V1\)] dog" is used so that the special token can learn information specific to the dog. This customization involves finetuning the diffusion model based on the same reconstruction objective of Eq. (1), where \(c\) is the text prompt with the special token, and \(z=\mathcal{E}(x)\), where \(x\) is sampled from a set of images for customization.
Different methods finetune distinct components of the diffusion models. DreamBooth fine-tunes the entire UNet model, Textual Inversion exclusively adjusts the CLIP text embedding of the special token, and Custom Diffusion optimizes the key and value mapping matrices within the cross-attention layer of the UNet.
**Attention maps.** Large-scale text-to-image diffusion models utilize cross-attention layers for text-conditioning. In Stable Diffusion (Rombach et al., 2022), the CLIP text encoder (Radford et al., 2021) is used to produce text embedding features. These text embeddings are then transformed through linear mappings to obtain the key \(K\) and value \(V\) for the cross-attention layer, and the spatial features of the image are projected to the query \(Q\). The attention map of the cross-attention layer is computed as:
\[A=\text{softmax}\,\Big{(}\frac{QK^{T}}{\sqrt{d}}\Big{)}. \tag{2}\]
The attention map corresponding to a specific token with index \(k\) can be obtained as \(A_{k}=A[k]\). Such attention maps are useful for visualizing the influence of individual tokens in the text prompt. Moreover, they can be altered or manipulated for the purpose of image editing, as demonstrated in Prompt-to-Prompt (Hertz et al., 2022).
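A single-head, unbatched sketch of this computation is given below; the projection matrices and feature shapes are placeholders standing in for the corresponding Stable Diffusion layer weights:

```python
import torch
import torch.nn.functional as F

def cross_attention_map(x_feat, text_emb, W_q, W_k, token_index):
    # x_feat: (hw, d_model) spatial features; text_emb: (n_tokens, d_text) CLIP embeddings.
    Q = x_feat @ W_q                          # (hw, d)
    K = text_emb @ W_k                        # (n_tokens, d)
    d = Q.shape[-1]
    A = F.softmax(Q @ K.T / d**0.5, dim=-1)   # attention map of Eq. (2), shape (hw, n_tokens)
    return A[:, token_index]                  # A_k: per-pixel attention of the chosen token
```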
## Method
Given a small set of source images and reference style images, the goal of this paper is to synthesize images that integrate the source content with the reference style. While our method is primarily designed for generating portrait images, it can be easily adapted to handle other types of content with minor modifications. We utilize the customization of each
Figure 2: The overall pipeline of **MagiCapture**, where the training process is formulated as multi-task learning of three different tasks: source, reference, and composed prompt learning. In the composed prompt learning, reference style images serve as pseudo-labels, along with auxiliary identity loss between the source and predicted images. Attention Refocusing loss is applied to all three tasks, but is not shown in the figure for simplicity. After training, users can generate high-fidelity images with integrated concepts and can further manipulate them using varying text conditions.
concepts during the optimization phase and employ a composed prompt during inference to generate multi-concept images. A comprehensive overview of our approach is depicted in Fig. 2, and the details of our method will be elaborated upon in the subsequent sections.
**Two-phase Optimization.** Similar to Pivotal Tuning [1] in GAN inversion, our method consists of a two-phase optimization. In the first phase, we optimize the text embeddings for the special tokens [\(V^{*}\)] using the reconstruction objective as in [1]. While optimizing the text embeddings is not sufficient for achieving high-fidelity customization, it serves as a useful initialization for the subsequent phase. In the second phase, we jointly optimize the text embeddings and model parameters with the same objective. Rather than optimizing the entire model, we apply LoRA [1], where only the residuals \(\Delta W\) of the projection layers in the cross-attention module are trained using low-rank decomposition. Specifically, the updated parameters are expressed as:
\[W^{{}^{\prime}}=W+\Delta W,\:\Delta W=UV^{T}, \tag{3}\]
where \(U\in\mathbb{R}^{n\times r},V\in\mathbb{R}^{m\times r}\), and \(r<<n,m\). Empirically, we find that this two-phase optimization coupled with LoRA strikes a favorable balance between reconstruction and generalization. It preserves the model's generalization capabilities for unseen prompts while effectively capturing the finer details of the source images.
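A minimal sketch of such a LoRA-augmented projection layer is shown below (the rank and initialization are illustrative; in our setting the wrapped layers would be the cross-attention projections):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Wraps a frozen projection W and learns a low-rank residual U V^T as in Eq. (3).
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                    # W stays frozen
        n, m = base.out_features, base.in_features
        self.U = nn.Parameter(torch.zeros(n, rank))    # zero-init so training starts at W
        self.V = nn.Parameter(torch.randn(m, rank) * 0.01)

    def forward(self, x):
        return self.base(x) + x @ self.V @ self.U.T    # (W + U V^T) x
```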
**Masked Reconstruction.** In our approach, a source prompt \(c_{s}\) (e.g., "A photo of a [\(V1\)] person.") and a reference prompt \(c_{r}\) (e.g., "A photo of a person in the [\(V2\)] style.") are used to reconstruct the source image \(I_{s}\) and a target style image \(I_{r}\), respectively. It is crucial to disentangle the identity of the source subject from non-facial regions, such as the background and clothing, to prevent this unwanted information from being encoded into the special token [\(V1\)]. Similarly, we need to disentangle the reference image to ensure that the facial details of the person in the reference image are not embedded into the special token [\(V2\)]. To achieve this, we propose to use a masked reconstruction loss. Specifically, we employ a mask that indicates the relevant region and apply it element-wise to both the ground truth latent code and the predicted latent code. In the context of portrait generation, a source mask \(M_{s}\) indicates the facial region of the image \(I_{s}\), and a target mask \(M_{r}\) denotes the non-facial areas of the reference image \(I_{r}\). Formally, the masked reconstruction losses for the source and the reference prompts are given by:
\[\mathcal{L}^{s}_{mask}=\mathbb{E}_{z_{s},c_{s},\epsilon,t}\Big{[}||\epsilon \odot M_{s}-\epsilon_{\theta}(z^{(t)}_{s},t,c_{s})\odot M_{s}||^{2}_{2}\Big{]}, \tag{4}\]
\[\mathcal{L}^{r}_{mask}=\mathbb{E}_{z_{r},c_{r},\epsilon,t}\Big{[}||\epsilon \odot M_{r}-\epsilon_{\theta}(z^{(t)}_{r},t,c_{r})\odot M_{r}||^{2}_{2}\Big{]}, \tag{5}\]
where \(z^{(t)}_{s}\) and \(z^{(t)}_{r}\) are the source and reference noised latent at timestep \(t\sim\) Uniform(1, \(T\)) and \(\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})\).
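A sketch of this masked objective, assuming a diffusers-style UNet and noise scheduler, is given below; the mask is assumed to be already resized to the latent resolution:

```python
import torch
import torch.nn.functional as F

def masked_reconstruction_loss(unet, z0, mask, text_cond, scheduler):
    # z0: clean latent E(x), shape (B, C, H, W); mask: binary M_v broadcastable to z0.
    t = torch.randint(0, scheduler.config.num_train_timesteps, (z0.shape[0],), device=z0.device)
    noise = torch.randn_like(z0)
    z_t = scheduler.add_noise(z0, noise, t)             # forward diffusion to timestep t
    pred = unet(z_t, t, encoder_hidden_states=text_cond).sample
    return F.mse_loss(pred * mask, noise * mask)        # Eqs. (4)-(5): loss only inside the mask
```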
**Composed Prompt Learning.** Generating images with a composed prompt \(c_{c}\), such as "A photo of a [\(V1\)] person in the [\(V2\)] style," leads to undefined behavior because the model has not been customized on such prompts. Typically, the resulting images generated using these unseen composed prompts suffer from a shift in the identity of the source subject and a decline in output quality. To address this issue, we include training on the composed prompt. However, no ground truth image exists for such a prompt. We approach this challenge as a weakly-supervised learning problem, where there are no available ground truth labels. We craft pseudo-labels and develop an auxiliary objective function to suit our needs. In the context of the portrait generation task, we want to retain the overall composition, pose, and appearance from the reference style image, excluding the facial identity. To achieve this, we employ the masked reconstruction objective given by:
\[\mathcal{L}^{c}_{mask}=\mathbb{E}_{z_{r},c_{c},\epsilon,t}\Big{[}||\epsilon \odot M_{r}-\epsilon_{\theta}(z^{(t)}_{r},t,c_{c})\odot M_{r}||^{2}_{2}\Big{]}. \tag{6}\]
For the facial regions, we use an auxiliary identity loss that utilizes a pre-trained face recognition model [1] \(\mathcal{R}\) and cropping function \(\mathcal{B}\) conditioned by the face detection model [1]:
\[\mathcal{L}_{id}=\mathbb{E}_{\hat{x}^{(0)},I_{s}}\Big{[}1-\text{cos}(\mathcal{ R}(\mathcal{B}(\hat{x}^{(0)})),\mathcal{R}(\mathcal{B}((I_{s})))\Big{]}, \tag{7}\]
where cos denotes the cosine similarity and \(\hat{x}^{(0)}=\mathcal{D}(\hat{z}^{(0)})\) refers to the estimated clean image from \(z^{(t_{id})}_{r}\) using Tweedie's formula [10]. Timestep \(t_{id}\) is sampled as \(t_{id}\sim\) Uniform(1, \(T^{{}^{\prime}}\)), where \(T^{{}^{\prime}}<T\), to avoid blurry and inaccurate \(\hat{x}^{(0)}\) estimated from noisy latent with large timesteps, which can impair cropping or yield odd facial embeddings.
We augment the composed prompt \(c_{c}\) by randomly selecting from predefined prompt templates to boost editing stability and generalization.
Figure 3: Visualization of aggregated attention maps from UNet layers before and after the application of Attention Refocusing (AR) loss illustrates its importance in achieving information disentanglement and preventing information spill.
**Attention Refocusing** When optimizing with training images, it is vital to achieve _information disentanglement_, ensuring that special tokens exclusively embed the information of the region of interest, denoted as \(M_{v}\) for \(v\in\{s,r\}\). However, the masked reconstruction objective falls short of this goal because the presence of transformer layers in the UNet backbone gives the model a global receptive field. The same limitation applies to denoising steps in the inference stage, where we desire attention maps of special tokens to focus only on the intended areas. For instance, in the portrait generation task, the special token [\(V1\)] should only attend to facial regions when generating images to avoid _information spill_. We observe that information spill is more prevalent when the model encounters an unseen prompt during inference. Fig. 3 demonstrates that special tokens do indeed attend to unwanted regions.
To solve this issue, we propose a novel Attention Refocusing (AR) loss, which steers the cross-attention maps \(A_{k}\) of the special token [\(V^{*}\)] (where \(k=\text{index}([V^{*}])\)) using a binary target mask. Our AR loss incorporates two crucial details. First, it is applied only to the regions \(\neg M_{v}\), where the mask value is zero. For the attention map values \(A_{k}[i,j]\) where \((i,j)\in\{(i,j)|M_{v}[i,j]=1\}\), the optimal values can vary across different UNet layers and denoising time steps, so they do not necessarily have to be close to 1. Conversely, for \(A_{k}[i,j]\) where \((i,j)\in\{(i,j)|M_{v}[i,j]=0\}\), the values should be forced to 0 to achieve information disentanglement during training and minimize information spill in the inference stage. Second, it is essential to scale the attention maps to the [0,1] range. Both of these techniques are required to avoid disrupting the pre-trained transformer layers' internal operations, which would lead to corrupted outputs. The Attention Refocusing loss can be formulated as follows:
\[\mathcal{L}_{attn}=\mathbb{E}_{k,v\in\{s,r\}}\Big{[}\big{|}\big{|}(\mathcal{S }(A_{k})-M_{v})\odot\neg M_{v}\big{|}\big{|}^{2}_{2}\Big{]}, \tag{8}\]
where \(\mathcal{S}(\cdot)\) refers to a scaling function.
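A sketch of this loss for a single attention map is given below (min-max scaling is used for \(\mathcal{S}\), and the squared error is averaged rather than summed; both choices are illustrative):

```python
import torch

def attention_refocusing_loss(attn_map, mask):
    # attn_map: cross-attention map A_k of a special token, shape (h, w); mask: binary M_v.
    a = attn_map - attn_map.min()
    a = a / (a.max() + 1e-8)                         # scale attention values to [0, 1]
    # Penalize attention only where the mask is zero (outside the region of interest).
    return (((a - mask) * (1 - mask)) ** 2).mean()
```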
**Postprocessing.** The quality of images generated in a few-shot customization task is typically constrained by the capabilities of the pretrained text-to-image model used. Moreover, when provided with low-resolution source and target images, the fine-tuned model tends to produce lower-quality images. To overcome these limitations and further enhance the fidelity of the generated images, our pipeline includes optional postprocessing steps. Specifically, we employ a pre-trained super-resolution model Wang et al. (2021) and a face restoration model Zhou et al. (2022) to further improve the quality of the generated samples.
## Experiments
**Training Details.** Our method utilizes pre-trained Stable Diffusion V1.5 Rombach et al. (2022). The first training phase consists of a total of 1200 steps, with a learning rate of 5e-4 for updating the text embeddings. In the second LoRA phase, the learning rate is 1e-4 for the projection layers and 1e-5 for the text embeddings, with a total of 1500 training steps. The model is trained on a single GeForce RTX 3090 GPU, using a batch size of 1 and gradient accumulation over 4 steps. For all experiments, we employ 4 to 6 images for both the source and reference images. Please refer to the supplement for more details.
**Comparisons.** The results of our method are demonstrated in Fig. 4. We compare our method with other personalization methods including DreamBooth Ruiz et al. (2023), Textual Inversion Gal et al. (2022), and Custom Diffusion Kumari et al. (2023) using the same source and reference images. We choose 10 identities, 7 from VGGFace Cao et al. (2018) and 3 in-the-wild identities gathered from the internet. We also manually select 10 style concepts, leading to 100 id-style pairs. For each pair, we train each baseline and our model, then generate 100 images with the _composed prompt_ for each of the trained models, resulting in 10,000 samples per baseline. Qualitative comparisons are shown in Fig. 5, where our method outperforms other baselines in image fidelity and source-reference image reflection.
We assess the facial appearance similarity between the source and generated portrait images by measuring the cosine similarity between their facial embeddings, using a pre-trained recognition network (CSIM) Zakharov et al. (2019).
Another important aspect of evaluation is style preservation, where we measure how well the results replicate the style of the reference images. We compute the cosine similarity between the _masked_ CLIP Radford et al. (2021) image embeddings of the reference and generated images, where facial regions are masked to exclude facial appearance from the assessment. We use CLIP similarity instead of texture similarity Gatys et al. (2016) since the term _style_ in our paper encompasses broader concepts such as image geometry and composition, in addition to texture and appearance of non-facial regions. Finally, we evaluate the overall image fidelity with the LAION aesthetic predictor Schuhmann Aug (2022). Table 1 shows that our method outperforms other baselines in all three metrics. Additionally, we conduct a user study involving 30 participants who were asked to rate images for ID preservation, style preservation, and image fidelity on a 1-5 scale. Table 2 summarizes the results, with our method consistently scoring higher than other baselines.
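For reference, the sketch below outlines how such similarity scores can be computed; the `embed_clip_image` callable stands in for the CLIP image encoder, and the facial embeddings are assumed to be precomputed by a recognition network. These names are assumptions for illustration, not part of the released evaluation code.

```python
import torch
import torch.nn.functional as F

def csim(src_emb: torch.Tensor, gen_emb: torch.Tensor) -> torch.Tensor:
    # Identity preservation: cosine similarity between facial identity embeddings
    # of the source and generated portraits (embeddings assumed precomputed).
    return F.cosine_similarity(src_emb, gen_emb, dim=-1).mean()

def masked_style_score(ref_img, gen_img, ref_face_mask, gen_face_mask, embed_clip_image):
    # Style preservation: mask out facial regions so identity does not leak into
    # the score, then compare CLIP image embeddings of the remaining content.
    ref_emb = embed_clip_image(ref_img * (1 - ref_face_mask))
    gen_emb = embed_clip_image(gen_img * (1 - gen_face_mask))
    return F.cosine_similarity(ref_emb, gen_emb, dim=-1).mean()
```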
We observed that DreamBooth often overfits to the reference style images, leading to high style scores but low CSIM scores.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Method & CSIM \(\uparrow\) & Style \(\uparrow\) & Aesthetic \(\uparrow\) \\ \hline DreamBooth & 0.102 & 0.720 & 5.770 \\ Textual Inversion & 0.224 & 0.623 & 5.670 \\ Custom Diffusion & 0.436 & 0.606 & 5.263 \\ \hline
**Ours w/o AR \& CP** & 0.429 & 0.726 & 6.178 \\
**Ours** & **0.566** & **0.730** & **6.218** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative comparison of our method against DreamBooth Ruiz et al. (2023), Textual Inversion Gal et al. (2022), and Custom Diffusion Kumari et al. (2023). Our method outperforms other baselines in terms of identity similarity measured between the source images (**CSIM**), masked CLIP similarity measure (**Style**), and **Aesthetic score**Schuhmann Aug (2022).
Figure 4: Curated results of MagiCapture.
Conversely, Textual Inversion tends to underfit both the source and reference images, resulting in low-fidelity images that fail to preserve appearance details. Custom Diffusion better preserves source identity compared to the others, but still cannot consistently perform well for the composed prompt, leading to identity shifts and unnatural images.
**Ablation Study.** As shown in Fig. 3, we find that Attention Refocusing loss effectively prevents attention maps from attending to unwanted regions, mitigating information spill and promoting information disentanglement. Empirically, we observe that the Attention Refocusing loss should only be applied during the second phase of training (LoRA training). We infer that text embeddings are not well-suited for learning geometric information related to attention maps. Moreover, without composed prompt learning, the generated images often exhibit undefined behaviors where only one of the source or reference sets is evident in the image, without blending. We present the evaluation metrics for both the presence and absence of composed prompt learning (CP) and Attention Refocusing (AR) in Table 1. For more results and detailed analysis, please refer to the supplement.
**Applications.** Since our method is robust to generalized prompts, users can further manipulate the composed results using prompts with more descriptions (e.g., \(c^{{}^{\prime}}_{c}=\) "A photo of \([V1]\) person in the \([V2]\) style, wearing sunglasses."). We demonstrate such results in Fig. 6 and in the supplement.
## Limitations and Conclusions
Our method occasionally produces abnormal body parts such as limbs, fingers, as shown in Fig. 7. Furthermore, the model tends to exhibit lower fidelity for non-white subjects and demonstrates a noticeable gender bias--for instance, it struggles to accurately generate images of men wearing wedding dresses. These issues are largely related to the inherent biases of the pre-trained text-to-image models, and addressing these problems within a few-shot setting represents a significant avenue for future research. We acknowledge the ethical implications of our work and are committed to taking them seriously. We are also proactive in leading and supporting efforts to prevent potential misuse of our contributions.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Method & ID \(\uparrow\) & Style \(\uparrow\) & Fidelity \(\uparrow\) \\ \hline DreamBooth & 2.025 & 3.648 & 2.683 \\ Textual Inversion & 2.907 & 3.038 & 2.965 \\ Custom Diffusion & 3.223 & 2.260 & 2.980 \\
**Ours** & **4.055** & **4.165** & **4.293** \\ \hline \hline \end{tabular}
\end{table}
Table 2: User study of our method against DreamBooth [16], Textual Inversion [1], and Custom Diffusion [15]. Our method outperforms other baselines in terms of identity similarity score (**ID**), style similarity measure (**Style**), and image fidelity score (**Fidelity**).
Figure 5: Qualitative comparisons of MagiCapture with other baseline methods.
Figure 6: Users can further manipulate the composed results using prompts with additional description.
Figure 7: Failure cases: Proposed method occasionally produces abnormal body parts such as limbs, fingers |
2308.16777 | Ref-Diff: Zero-shot Referring Image Segmentation with Generative Models | Zero-shot referring image segmentation is a challenging task because it aims
to find an instance segmentation mask based on the given referring
descriptions, without training on this type of paired data. Current zero-shot
methods mainly focus on using pre-trained discriminative models (e.g., CLIP).
However, we have observed that generative models (e.g., Stable Diffusion) have
potentially understood the relationships between various visual elements and
text descriptions, which are rarely investigated in this task. In this work, we
introduce a novel Referring Diffusional segmentor (Ref-Diff) for this task,
which leverages the fine-grained multi-modal information from generative
models. We demonstrate that without a proposal generator, a generative model
alone can achieve comparable performance to existing SOTA weakly-supervised
models. When we combine both generative and discriminative models, our Ref-Diff
outperforms these competing methods by a significant margin. This indicates
that generative models are also beneficial for this task and can complement
discriminative models for better referring segmentation. Our code is publicly
available at https://github.com/kodenii/Ref-Diff. | Minheng Ni, Yabo Zhang, Kailai Feng, Xiaoming Li, Yiwen Guo, Wangmeng Zuo | 2023-08-31T14:55:30Z | http://arxiv.org/abs/2308.16777v2 | # Ref-Diff: Zero-shot Referring Image Segmentation with Generative Models
###### Abstract
Zero-shot referring image segmentation is a challenging task because it aims to find an instance segmentation mask based on the given referring descriptions, without training on this type of paired data. Current zero-shot methods mainly focus on using pre-trained discriminative models (_e.g._, CLIP). However, we have observed that generative models (_e.g._, Stable Diffusion) have potentially understood the relationships between various visual elements and text descriptions, which are rarely investigated in this task. In this work, we introduce a novel Referring Diffusional segmentor (Ref-Diff) for this task, which leverages the fine-grained multi-modal information from generative models. We demonstrate that without a proposal generator, a generative model alone can achieve comparable performance to existing SOTA weakly-supervised models. When we combine both generative and discriminative models, our Ref-Diff outperforms these competing methods by a significant margin. This indicates that generative models are also beneficial for this task and can complement discriminative models for better referring segmentation. Our code is publicly available at [https://github.com/kodenii/Ref-Diff](https://github.com/kodenii/Ref-Diff).
## 1 Introduction
Referring Image Segmentation (RIS) aims to identify the referring instance region that is semantically consistent with the given text description. Different from semantic segmentation, this task often requires distinguishing instances of the same class, _e.g._, the tallest boy among these four children. Annotation of precise pairs (_i.e._, image, text description, and ground-truth instance mask) is costly and time-consuming. A recent weakly-supervised RIS approach [23] endeavors to alleviate the annotation difficulties, but it still needs specific pairs of images and referring texts for training. Conversely, a zero-shot solution is more valuable but may exacerbate the challenges further. On the one hand, it is training-free, and no referring annotation is required. On the other hand, it needs a deeper comprehension of the relationship between text and the visual elements in the images.
Recent multi-modal pre-training models have shown impressive capabilities in vision and language understanding. As one of the most representative discriminative models among them, CLIP [17] (which explicitly learns the global similarity between image and text through contrastive learning) has demonstrated significant improvements to various tasks, including object detection [34], image retrieval [17], and semantic segmentation [8]. However, directly applying a model like CLIP to zero-shot RIS is impractical, as it is trained to capture the global similarity of text and images, which cannot well learn the specific visual elements relating to a referring text. To address this, Yu _et al._[31] propose a global and local CLIP to bridge the gap between discriminative models and pixel-level dense prediction. Nevertheless, we observe that discriminative models themselves struggle to localize visual elements accurately. In recent years, generative models such as Stable Diffusion[19], DALL-E 2[18], and Imagen[21] have also attracted great attention due to their ability in generating the realistic or imaginative images. The semantic alignment in generated images demonstrates that
these generative models have implicitly captured the relationships between various visual elements and texts. However, unlike discriminative models, they are rarely exploited in zero-shot referring image segmentation tasks.
In this work, we attempt to investigate whether the generative models can benefit the zero-shot RIS task. To this end, we propose a novel Referring Diffusional segmentor (Ref-Diff). It leverages fine-grained multi-modal information from generative models to exploit the relationship between referring expressions and different visual elements in the image. Previous works usually adopt CLIP to rank the proposals from an offline proposal generator [11]. In contrast, our Ref-Diff can inherently provide these instance proposals using the generative models. This indicates that our Ref-Diff does not necessarily depend on third-party proposal generators.
Experiments on three datasets show that without the use of an offline proposal generator, only the generative model in our Ref-Diff achieves comparable performance against the SOTA weakly-supervised methods. Additionally, when incorporating an offline proposal generator and discriminative model, our Ref-Diff significantly outperforms the competing methods. Both quantitative and qualitative analyses demonstrate that the generative model is beneficial for this task, and the combination with discriminative models can lead to better referring segmentation results.
The main contributions can be summarized as follows:
* We demonstrate that the generative models can be leveraged to improve the zero-shot RIS task by exploiting the implicit relationships between visual elements and text descriptions.
* We show that the generative model can intrinsically perform proposal generation, thereby making our Ref-Diff independent of third-party proposal generators.
* We propose a feasible manner to combine the generative and discriminative models for the zero-shot RIS task, which complement each other and achieve better referring segmentation.
## 2 Related Work
**Zero-shot Referring Segmentation.** Referring image segmentation is one of the most fundamental and challenging tasks, as it involves a fine-grained understanding of both vision and language. Following a fully-supervised formulation, previous works [27; 28; 30; 13; 7; 29; 33] require labor-intensive training annotations, _i.e_., referring expressions and pixel-level masks. However, due to the absence of large-scale training annotations, these methods are often limited in their scalability and out-of-domain samples [23]. With the remarkable progress of discriminative vision-language pre-training [17], recent works [23; 31] explore their open-vocabulary recognition in weakly or zero-shot referring segmentation. Despite their considerable performance, the pre-trained discriminative models that learn the global similarity of text and image have inherent limitations in deeply understanding the object delineation or the fine-grained relationships between visual elements and text description. In contrast, our Ref-Diff utilizes the fine-grained understanding through generative models and thereby obtains more accurate predictions.
**Visual Generative Models for Non-Synthesized Tasks.** Large-scale text-to-image generative models [19; 18; 21] have achieved tremendous progress in imaginary generation and creative applications [25; 20; 9; 32; 15]. Apart from generation-related tasks, these generative models also demonstrate preeminent capabilities in fine-grained image understanding, _e.g_., semantic segmentation [33; 26; 2; 3; 1], object detection [5; 4], dense prediction [10; 22], and classification [12; 6]. Therefore, most existing research focuses on transfer learning or constructing synthetic data for specific tasks. Although a few studies have explored the application of using generative models in zero-shot image classification, the performance of generative models in zero-shot referring segmentation has not been well investigated and deserves to attract attention.
## 3 Methodology
### Problem Formulation and Inference Pipeline
Given an image \(\mathbf{x}\in\mathbb{R}^{W\times H\times C}\) and a referring text \(T\), Referring Segmentation aims to output a segmenting mask \(\mathbf{m}\in\{0,1\}^{W\times H}\) indicating the referring regions of the text \(T\) in image \(\mathbf{x}\), where
\(W\), \(H\), and \(C\) represent the width, height, and channel of the image, respectively. In the Zero-shot Referring Segmentation settings, the model cannot access any training data of Referring Segmentation, including images, referring texts, and instance mask annotations.
Our proposed framework is depicted in Figure 1. Given an image and the referring text, our Ref-Diff generates a correlation matrix using the Generative Process, which can be used as 1) an alternative weight-free proposal generator and 2) a set of referring segmentation candidates. Optionally, our Ref-Diff can integrate the discriminative model with our proposed generative model within a unified framework. The final similarity for each mask proposal is obtained as follows:
\[\mathbf{s}_{i}=\alpha\mathbf{s}_{i}^{\mathrm{G}}+(1-\alpha)\mathbf{s}_{i}^{ \mathrm{D}}\,, \tag{1}\]
where \(\alpha\) is a hyper-parameter. \(\mathbf{s}_{i}^{\mathrm{G}}\) and \(\mathbf{s}_{i}^{\mathrm{D}}\) are the generative and discriminative scores between the referring text and \(i\)-th proposal, respectively. When \(\alpha\) is set to \(1\), only the generative model is adopted for this task. The final referring segmentation result is determined by selecting the proposal with the highest similarity score:
\[\hat{\mathbf{m}}=\operatorname*{arg\,max}_{\mathcal{M}_{i}}\mathbf{s}_{i}. \tag{2}\]
In the following sections, we provide a detailed introduction to each module of our framework.
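A minimal sketch of this scoring and selection step is given below, assuming the per-proposal generative and discriminative scores have already been computed as described in the following subsections:

```python
import numpy as np

def select_proposal(gen_scores, disc_scores, proposals, alpha=0.1):
    """Fuse the two score lists (Eq. 1) and return the highest-scoring proposal (Eq. 2).

    Setting alpha = 1.0 uses the generative model alone.
    """
    fused = alpha * np.asarray(gen_scores) + (1.0 - alpha) * np.asarray(disc_scores)
    best = int(np.argmax(fused))
    return proposals[best], float(fused[best])
```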
### Generative Process
Stable Diffusion[19] is an effective generative model that consists of a series of inverse diffusion steps to gradually transform random Gaussian noise into an image. Therefore, it cannot directly operate on a real image to obtain its latent representations. Fortunately, as the diffusion process is computable, we can take the real image as one intermediate state generated by a diffusion model and run it backward to any step of the generation. In this work, we add a specific amount of Gaussian noise to obtain \(\mathbf{x}_{t}\) and then continue this process without compromising information:
\[\mathbf{x}_{\mathbf{t}}=\sigma_{t}(\mathbf{x}), \tag{3}\]
where \(\sigma_{t}\) is the function to obtain the noised image in step \(t\) and \(t\) is a hyper-parameter.
During the inverse diffusion process, let \(\Psi_{\mathrm{lan}}\) and \(\Psi_{\mathrm{vis}}\) be the text and image encoder of the generative model, respectively. In the generative model, the referring text \(T\) is encoded into text features using \(\mathbf{K}=\Psi_{\mathrm{lan}}(T)\in\mathbb{R}^{l\times d}\), where \(l\) is the token number and \(d\) is the dimension size for latent projection. Similarly, for the \(i\)-th step, the visual image \(\mathbf{x}_{\mathbf{i}}\) is projected to image features using \(\mathbf{Q}=\Psi_{\mathrm{vis}}(\mathbf{x}_{\mathbf{i}})\in\mathbb{R}^{w\times h \times d}\). Here, \(w\) and \(h\) are the width and height of encoded image features. The cross-attention between the text and image features can be formulated as:
\[\mathbf{a}=\mathrm{Softmax}(\frac{\mathbf{Q}\mathbf{K}^{\top}}{\sqrt{d}})\,, \tag{4}\]
where \(\mathbf{a}\in\mathbb{R}^{w\times h\times l\times N}\), and \(N\) is the number of attention heads. Following [26], we obtain the overall cross-attentions by averaging the value of each attention head to \(\bar{\mathbf{a}}\in\mathbb{R}^{w\times h\times l}\).
Figure 1: Overview of our Ref-Diff. Our proposed Generative Process (left) generates a correlation matrix between the referring text and the input image. This matrix serves as an alternative weight-free proposal generator and generative segmentation candidates \(\mathbf{s}^{\mathrm{G}}\). The Discriminative Process (right) is alternatively integrated into our framework and generates the discriminative candidates \(\mathbf{s}^{\mathrm{D}}\). The final referring segmentation result is obtained either from the generative candidates or a combination of both generative and discriminative candidates.
The cross-attention matrix \(\bar{\mathbf{a}}\) represents the correlation detected between each token in the referring text and each region feature in the image. In general, a higher value in \(\bar{\mathbf{a}}\) indicates a better correlation between the token and region features, which can be used to locate the related referring regions.
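As an illustrative sketch (simplified from the actual UNet cross-attention hooks, with the per-head query and key tensors assumed to be already extracted), Eq. 4 and the head-averaging step can be written as:

```python
import torch

def averaged_cross_attention(Q: torch.Tensor, K: torch.Tensor) -> torch.Tensor:
    """Q: (N, w*h, d) image queries per head; K: (N, l, d) text keys per head.

    Returns the head-averaged cross-attention (Eq. 4) with shape (w*h, l).
    """
    d = Q.shape[-1]
    attn = torch.softmax(Q @ K.transpose(-1, -2) / d ** 0.5, dim=-1)  # (N, w*h, l)
    return attn.mean(dim=0)  # average over the N attention heads -> a_bar
```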
In the inverse diffusion process, the generative model captures the overall semantics of the language condition. However, the corresponding attention region for each token is not necessarily the same as they have different semantic representations (see Sec. 4.6). Without loss of generality, a referring text \(T\) is a sentence that describes the characteristics of a specific instance. To obtain the preferred token from the whole text description, we use syntax analysis to obtain its root token (_i.e._, the ROOT element in the syntax tree). Generally, the ROOT token in the latent space captures the contextual correlations (_e.g._, the token 'horse' in Figure 1 contains contextual representations from 'black' and 'jumping'). Then, the attention region projected for this root token has a higher probability of being the referring region. Let \(k\) be the index of the root token, and let \(\bar{\mathbf{a}}_{k}\in\mathbb{R}^{w\times h}\) denote the cross-attention matrix of the root token. We normalize and resize this cross-attention matrix by:
\[\mathbf{c}=\phi_{w\times h\to W\times H}\left(\frac{\bar{\mathbf{a}}_{k}- \min(\bar{\mathbf{a}}_{k})}{\max(\bar{\mathbf{a}}_{k})-\min(\bar{\mathbf{a}} _{k})+\epsilon}\right)\;, \tag{5}\]
where \(\epsilon\) is a small constant value. Here, \(\phi_{w\times h\to W\times H}\) is a bi-linear interpolation function used to resize the attention map to the same resolution as the given image.
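A possible implementation of this step is sketched below, assuming spaCy for the syntax analysis and PyTorch bilinear interpolation for resizing; the alignment between spaCy token indices and the diffusion tokenizer's indices is glossed over here.

```python
import spacy
import torch
import torch.nn.functional as F

nlp = spacy.load("en_core_web_sm")

def root_token_correlation(referring_text: str, attn: torch.Tensor, out_size: tuple) -> torch.Tensor:
    """attn: (w, h, l) head-averaged cross-attention; returns the correlation map c (Eq. 5)."""
    doc = nlp(referring_text)
    root = next(tok for tok in doc if tok.dep_ == "ROOT")       # root of the syntax tree
    k = root.i                                                   # naive token index (see caveat above)
    a_k = attn[..., k]                                           # (w, h) attention of the root token
    a_k = (a_k - a_k.min()) / (a_k.max() - a_k.min() + 1e-6)     # min-max normalization
    a_k = F.interpolate(a_k[None, None], size=out_size, mode="bilinear", align_corners=False)
    return a_k[0, 0]                                             # resized to the image resolution
```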
### Discriminative Process
During the image encoding process by the discriminative model CLIP, the spatial position is inevitably attenuated. We observe that referring text descriptions usually contain explicit direction clues (_e.g._, left, right, top, and bottom), which are valuable but have been ignored in previous works. To emphasize such types of positional information, we propose a positional bias to explicitly encode the image with the given direction clues. This is achieved through element-wise multiplication:
\[\mathbf{x}^{\prime}=\mathbf{x}\odot\mathbf{P}\;, \tag{6}\]
where \(\mathbf{P}\in\mathbb{R}^{W\times H\times C}\) is a positional bias matrix. Specifically, if the text, after syntactic analysis, contains explicit direction clues, \(\mathbf{P}\) will be a soft mask with values ranging from \(1\) to \(0\) along the given direction axis. Lower values indicate regions that should receive less attention. Conversely, if no direction clue is detected, \(\mathbf{P}\) will be a matrix filled with \(1\).
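One way to construct such a positional bias is sketched below; the direction keyword is assumed to come from the syntactic analysis, and the linear ramp is an assumption used purely for illustration.

```python
import numpy as np

def positional_bias(direction, W: int, H: int, C: int = 3) -> np.ndarray:
    """Soft mask P in [0, 1] that decays away from the referred side (used in Eq. 6)."""
    if direction in ("left", "right"):
        ramp = np.linspace(1.0, 0.0, W, dtype=np.float32)
        if direction == "right":
            ramp = ramp[::-1]
        return np.broadcast_to(ramp[None, :, None], (H, W, C)).copy()
    if direction in ("top", "bottom"):
        ramp = np.linspace(1.0, 0.0, H, dtype=np.float32)
        if direction == "bottom":
            ramp = ramp[::-1]
        return np.broadcast_to(ramp[:, None, None], (H, W, C)).copy()
    return np.ones((H, W, C), dtype=np.float32)  # no direction clue detected

# x_prime = x * positional_bias("left", W=x.shape[1], H=x.shape[0])  # Eq. 6
```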
Finally, the ultimate representation \(\mathbf{v}_{i}\in\mathbb{R}^{d}\) for each proposal \(\mathcal{M}_{i}\) in discriminative process is:
\[\mathbf{v}_{i}=\beta f_{\mathcal{M}_{i}}(\mathbf{x}\odot\mathbf{P})\;+(1- \beta)f(\mathbf{x}\odot\mathcal{M}_{i}), \tag{7}\]
where \(f\) and \(f_{\mathcal{M}_{i}}\) are the vanilla CLIP image encoder and the CLIP image encoder with modified self-attention based on mask proposal \(\mathcal{M}_{i}\), respectively. The discriminative model (_i.e._, CLIP) is expected to encode the instance within each proposal region \(\mathcal{M}_{i}\) while disregarding other regions to reduce disturbances. To achieve this, we assign a weight of \(0\) to the attention values between the [CLS] token and the patch tokens outside the current proposal \(\mathcal{M}_{i}\). In this work, we utilize the output of the penultimate layer as the final representation, which is motivated by the observation that the representation in the last layer tends to encompass the entire image rather than focus on the corresponding proposal region.
### Proposals Extracting and Matching
**Weight-free Proposal Filter.** Since the generative models inherently encode instance representations, we can derive proposals from their cross-attention matrix \(\mathbf{c}\). In this work, we introduce a weight-free proposal filter to generate a series of mask proposals. This is formulated as:
\[\mathcal{M}=\{\psi(\mathbf{c}\geq\mu)|\mu\in\{5\%,10\%,...,95\%\}\}\;, \tag{8}\]
where \(\psi\) is a binarization function with a given predefined threshold value \(\mu\). Different from other works [31] which rely on external proposal generators and CLIP filters, the generative models in this work can efficiently and effectively produce the expected proposals. This approach offers a streamlined and integrated solution for obtaining high-quality proposals without additional tools.
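A compact sketch of this filter, operating on the normalized correlation map \(\mathbf{c}\) from Eq. 5:

```python
import numpy as np

def weight_free_proposals(c: np.ndarray):
    """Binarize the correlation map at thresholds 5%, 10%, ..., 95% (Eq. 8)."""
    return [(c >= mu).astype(np.uint8) for mu in np.linspace(0.05, 0.95, 19)]
```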
**Pre-trained Segmentor.** If a reliable segmentor is available, we can also obtain proposals from it in a flexible manner. By leveraging the capability of generative and discriminative models for semantic
understanding, we can refine the proposals and prioritize those that align closely with the given referring description. This combined approach of using a segmentor for initial proposal generation ensures that the resulting proposals are coherent and better aligned with the given referring expression.
**Generative Matching.** After obtaining proposals either from the weight-free proposal filter or the pre-trained segmentor, the next step is to find the most similar proposal based on the cross-attention matrix. In this work, we quantify the similarity between the given referring text and all the proposals by measuring the distance on cross-attention matrix \(\mathbf{c}\) and \(\mathcal{M}_{i}\):
\[\mathbf{s}_{i}^{\mathrm{G}}=\frac{|\mathbf{c}\odot\mathcal{M}_{i}|}{| \mathcal{M}_{i}|}-\frac{|\mathbf{c}\odot(1-\mathcal{M}_{i})|}{|1-\mathcal{M}_ {i}|}. \tag{9}\]
**Discriminative Matching.** Given a referring text, we obtain the mean representation \(\mathbf{r}\in\mathbb{R}^{d}\) of the global text and the local subject token using the CLIP text encoder, which serves as the features of the referring text. To find out the most probable proposal from the perspective of the discriminative model, we calculate the similarity between the features of referring text \(\mathbf{r}\) and visual representation \(\mathbf{v}_{i}\) of each mask proposal \(\mathcal{M}_{i}\), which is defined as:
\[\mathbf{s}_{i}^{\mathrm{D}}=\mathbf{v}_{i}\mathbf{r}^{\top}\,. \tag{10}\]
This similarity allows us to identify the proposal that best aligns with the referring text. Higher similarity scores indicate a stronger correspondence between the proposal and the text, indicating a higher likelihood of being the correct segmentation result.
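Putting the two matching scores together, the sketch below assumes that the correlation map \(\mathbf{c}\), the proposal masks, and the CLIP features \(\mathbf{v}_{i}\) and \(\mathbf{r}\) are already available:

```python
import numpy as np

def generative_score(c: np.ndarray, mask: np.ndarray) -> float:
    """Eq. 9: mean attention inside the proposal minus mean attention outside it."""
    inside = (c * mask).sum() / (mask.sum() + 1e-8)
    outside = (c * (1 - mask)).sum() / ((1 - mask).sum() + 1e-8)
    return float(inside - outside)

def discriminative_score(v_i: np.ndarray, r: np.ndarray) -> float:
    """Eq. 10: inner product between the proposal's visual feature and the text feature."""
    return float(v_i @ r)
```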
## 4 Experiments
### Implementation Details
Our Ref-Diff is a zero-shot solution, so we only need an inference process, without any training images and annotations. All experiments are conducted on a Tesla A100 GPU. We use the pre-trained Stable Diffusion[19] (V1.5) as our generative model. All test images are resized and padded to the resolution of \(1024\times 1024\). Since existing works mainly focus on using pre-trained segmentor and discriminative model [17], for a fair comparison, we select SAM [11] as the segmentor and CLIP of ViT-B/16 as the discriminative model. We set \(t\), \(\alpha\), and \(\beta\) to \(2\), \(0.1\), and \(0.3\), respectively.
### Experimental Setup
Following [31; 23], we adopt mIoU and oIoU as the evaluation metrics and apply them to three widely-used benchmarks, including RefCOCO [16], RefCOCO+ [16], and RefCOCOg [14].
Figure 2: **Effectiveness of generative model in segmentation capability.** Ref-Diff/G is capable of segmenting the right content even without the assistance of the pre-trained segmentor and CLIP. Combing with the pre-trained segmentor, Ref-Diff/Gs achieves precise segmentation of the correct regions.
To compare fairly with existing works, we conduct the experiments under two settings: a) Zero-shot RIS using a pre-trained segmentor and CLIP; b) Zero-shot RIS without a pre-trained segmentor and CLIP. The latter setting allows us to analyze the effectiveness of the generative model in this task.
For setting a), we select five competing baselines. 1) A weakly-supervised method TSEG [23]. It is not open-sourced and only provides the validation results of mIoU. So we did not report its results on other settings and test sets. 2)\(\sim\)4) Three zero-shot baselines from [31], including Region Token, Cropping, and Global-Local CLIP. 5) A zero-shot baseline SAM-CLIP proposed by us. Note that Yu _et al_. [31] adopt FreeSOLO [24] as their segmentor. However, considering the remarkable performance of SAM [11] in segmentation, we propose SAM-CLIP as a new discriminative baseline. Specifically, we first use SAM to extract all candidate proposals from the image and then leverage CLIP to identify the most relevant proposal using Eqn. 10. In this setting, our Ref-Diff combines both generative and discriminative models, and uses the same proposal segmentor as other methods.
The oIoU and mIoU comparisons are shown in Tables 1 and 2, respectively. We can observe that Ref-Diff exhibits significantly superior performance compared to the competing methods and baselines. Benefiting from the combination of both generative and discriminative models, our Ref-Diff achieves an improvement of approximately \(10\) mIoU on the RefCOCO, RefCOCO+, and RefCOCOg datasets. From the comparison between SAM-CLIP and other methods, only a slight improvement is observed. We attribute this to the highly detailed segmentation produced by SAM (_e.g._, it may split a single object into multiple small parts), which increases the chance that the discriminative model CLIP, used alone, selects erroneous proposals. These erroneous proposals have almost no overlap with the correct solution, resulting in only marginal improvement. By incorporating the generative model, such erroneous selections are further mitigated, contributing to our superior performance. This also demonstrates that our improvement over SAM-CLIP stems from a deeper understanding facilitated by both our generative and discriminative models.
Furthermore, we conduct additional experiments on the PhraseCut dataset in Table 3. We can see that our Ref-Diff also outperforms the competing method Global-Local by a large margin on oIoU. This further validates the strong generalization of our Ref-Diff to other datasets.
### Ablation Study
We conduct an ablation study in Table 4 and Table 5, which show the effectiveness of different components. Notably, our Ref-Diff/g achieves comparable performance to the weakly supervised
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{**Methods**} & \multicolumn{3}{c}{**RefCOCO**} & \multicolumn{3}{c}{**RefCOCO+**} & \multicolumn{3}{c}{**RefCOCOg**} \\ \cline{2-11} & val & test A & test B & val & test A & test B & val(U) & test(U) & val(G) \\ \hline \multicolumn{11}{l}{_Weakly-supervised Method_} \\ TSEG [23] & - & - & - & - & - & - & - & - & - \\ \hline \multicolumn{11}{l}{_Zero-shot Method_} \\ Region Token [31] & 21.71 & 20.31 & 22.63 & 22.61 & 20.91 & 23.46 & 25.52 & 25.38 & 25.29 \\ Cropping [31] & 22.73 & 21.11 & 23.08 & 24.09 & 22.42 & 23.93 & 28.69 & 27.51 & 27.70 \\ Global-Local CLIP [31] & 24.88 & 23.61 & 24.66 & 26.16 & 24.90 & 25.83 & 31.11 & 30.96 & 30.69 \\ SAM-CLIP & 25.23 & 25.86 & 24.75 & 25.64 & 27.76 & 26.06 & 33.75 & 34.80 & 33.65 \\
**Ref-Diff** & **35.16** & **37.44** & **34.50** & **35.56** & **38.66** & **31.40** & **38.62** & **37.50** & **37.82** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **The oIoU comparison on settings of using a pre-trained segmentor and CLIP. The improvement is statistically significant with \(p<0.01\) under \(t\)-test.****
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{**Methods**} & \multicolumn{3}{c}{**RefCOCO**} & \multicolumn{3}{c}{**RefCOCO+**} & \multicolumn{3}{c}{**RefCOCOg**} \\ \cline{2-11} & val & test A & test B & val & test A & test B & val(U) & test(U) & val(G) \\ \hline \multicolumn{11}{l}{_Weakly-supervised Method_} \\ Region Token [31] & 23.43 & 22.07 & 24.62 & 24.51 & 22.64 & 25.37 & 27.57 & 27.34 & 27.69 \\ Cropping [31] & 24.83 & 22.58 & 25.72 & 26.33 & 24.06 & 26.46 & 31.88 & 30.94 & 31.06 \\ Global-Local CLIP [31] & 26.20 & 24.94 & 26.56 & 27.80 & 25.64 & 27.84 & 33.52 & 33.67 & 33.61 \\ SAM-CLIP & 26.33 & 25.82 & 26.40 & 25.70 & 28.02 & 26.84 & 38.75 & 38.91 & 38.27 \\
**Ref-Diff** & **37.21** & **38.40** & **37.19** & **37.29** & **40.51** & **33.01** & **44.02** & **44.51** & **44.26** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **The mIoU comparison on settings of using a pre-trained segmentor and CLIP. The improvement is statistically significant with \(p<0.01\) under \(t\)-test.**
model TSEG [23] on RefCOCO+, and outperforms it on RefCOCOg when using the generative model alone. When combined with the segmentor, Ref-Diff/gs consistently exhibits superior performance across all test sets to the weakly supervised model. These observations collectively show that the generative model can not only perform proposal generation but also benefit the RIS task.
### Effect of Generative Model
**Segmentation Capability.** From Figure 2 it can be observed that even without the assistance of the segmentor and CLIP, our Ref-Diff/g is able to accurately segment the corresponding content. This is primarily attributed to the attention projection of the generative model onto the relevant visual content (kindly refer to Sec. 4.6). In the second example, despite the presence of two instances of the same object (person) in the image, our Ref-Diff/g is still capable of performing reasonably accurate segmentation. This highlights the significant potential of the generative model, as it exhibits 1) segmentation capabilities comparable to segmentation models and 2) categorization capabilities comparable to discriminative models.
**Localization Capability.** From the first example shown in Figure 3, it can be observed that the discriminative model fails to accurately locate the leftmost broccoli. Similarly, in the second example, CLIP fails to eliminate the redundant region of the plate. We believe this issue may arise from the inherent limitation of the discriminative model, which is trained to identify whether the given text and image are well aligned, so it lacks the capability of localizing objects within the image. Consequently, relying solely on the discriminative model to discern whether a region contains redundant content becomes challenging. However, when we combine the generative and discriminative models, we are able to achieve the best results.
### Effect of Discriminative Model
In the first example depicted in Figure 4, we observe that the generative model incorrectly segments the screen as a separate object due to its prominence as a significant visual feature of a laptop, attracting more attention from the generative model. Similarly, in the second example, the generative model places greater emphasis on the person's face, resulting in incomplete segmentation. However, these errors can be effectively mitigated in our full model Ref-Diff, due to the robust categorization capability of the discriminative model.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Methods** & **Segmentor** & **Generative** & **Discriminative** & **RefCOCO** & **RefCOCO+** & **RefCOCOg** \\ \hline Ref-Diff/g & & ✓ & & 16.18 & 17.23 & 20.39 \\ Ref-Diff/gs & ✓ & ✓ & & 26.04 & 26.68 & 26.84 \\ Ref-Diff/gs & ✓ & & ✓ & 33.64 & 34.43 & 34.83 \\ Ref-Diff & ✓ & ✓ & ✓ & **35.16** & **35.56** & **38.62** \\ \hline \hline \end{tabular}
\end{table}
Table 4: **The ablation study on oIoU.**
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Methods** & **Segmentor** & **Generative** & **Discriminative** & **RefCOCO** & **RefCOCO+** & **RefCOCOg** \\ \hline \multicolumn{4}{l}{_Weakly-supervised Method_} \\ TSEG [23] & & & & 25.95 & 22.62 & 23.41 \\ \hline \multicolumn{4}{l}{_Zero-shot Method_} \\ Ref-Diff/g & & ✓ & & 21.53 & 22.50 & 27.03 \\ Ref-Diff/gs & ✓ & ✓ & & 29.82 & 30.06 & 30.73 \\ Ref-Diff/gs & ✓ & & ✓ & 35.27 & 35.72 & 41.84 \\ Ref-Diff & ✓ & ✓ & ✓ & **37.21** & **37.29** & **44.02** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Results on PhraseCut. Ref-Diff outperforms the prior model with a significant improvement.**
### Attention from Generative Model
To investigate the region that the generative model focuses on, we provide visualizations of sample images along with the attention weights assigned to each token in the generative model, as depicted in Figure 5. We find that: 1) Generative models exhibit contextual understanding capabilities, as they effectively allocate attention to the relevant regions in the image, thereby accomplishing both
Figure 4: **Effectiveness of discriminative model. The generative model exhibits higher sensitivity to salient visual features, which can result in partial segmentation when solely relying on the generative model. By integrating the discriminative model, we can effectively mitigate such errors and achieve more accurate results.**
Figure 5: **Attention from generative model. Generative model projects attention to different regions of the image based on different tokens, which is the key reason for the effectiveness of Ref-Diff. The dashed box highlights the root token and its corresponding attention map.**
Figure 3: **Effectiveness of generative model in localization capability. The discriminative model focuses more on whether the image contains text-related content, which may result in mistakenly selecting larger regions.**
localization and segmentation tasks. Notably, the attention assigned to the subject token closely aligns with the final segmentation results. This finding elucidates why generative models can perform well in Zero-shot referring segmentation tasks without relying on a separate segmentor or a discriminative model. 2) In contrast to classification models, where the first token often represents the entirety of the text, we observe that the information associated with the first token in our generative model does not capture the complete textual context. However, as discussed earlier, the subject token successfully captures the comprehensive information from the entire text, enabling accurate segmentation results.
### Case Study
We presented three examples of varying difficulty in Figure 6 to showcase the effectiveness of our Ref-Diff. In the first example, we observed that Ref-Diff demonstrates the capability to accurately identify and segment the correct object within similar objects. In the second example, despite the presence of numerous objects in the image and the complex spatial relationships, Ref-Diff successfully identifies the correct objects through the accurate understanding of the generative and discriminative models. In the final example, we encountered a segmentation failure of Ref-Diff due to the presence of some degree of ambiguity in the referring expression. Ref-Diff incorrectly identifies the leftmost hand as the first arm, resulting in a segmentation error. Enhancing the robustness of Ref-Diff is an area of future work that deserves further investigation.
## 5 Broader Impact and Limitations
Zero-shot Referring Segmentation has broad applications in industrial and real-world domains, such as image editing, robot control, and human-machine interaction. Our Ref-Diff, through the combination of generative and discriminative models, has successfully demonstrated the feasibility of training-free yet high-quality Referring Segmentation in various data-scarce scenarios. This significantly reduces the deployment cost of artificial intelligence in related fields. However, due to the existence of pre-trained modules, the inference stage still incurs high computational overhead. Moreover, Referring Expression texts are sensitive to ambiguity (see Sec. 4.7), which currently results in noticeable segmentation errors. In the future, we will further investigate Zero-shot Referring Segmentation with lower computational costs and higher robustness.
## 6 Conclusion
In this work, we proposed a novel Referring Diffusional segmentor (Ref-Diff) for Zero-shot Referring Image Segmentation, which effectively leverages the fine-grained multi-modal information from generative models. We demonstrated that a generative model alone can achieve comparable performance to existing SOTA weakly-supervised models without requiring a proposal generator. Moreover, by combining both generative and discriminative models, Ref-Diff outperformed these competing methods by a significant margin. Overall, our work presented a simple yet promising direction for zero-shot referring image segmentation by exploiting the potential of generative models, which brought new insights for addressing the challenges of this task.
Figure 6: **Case studies.** Blue indicates the predicted regions. The third case is a failure example, where green denotes ground-truth regions. Ref-Diff demonstrates its ability to accurately segment objects based on the provided referring texts, even in the presence of complex spatial relationships within the image. |
2309.12638 | Auto-Lesion Segmentation with a Novel Intensity Dark Channel Prior for
COVID-19 Detection | During the COVID-19 pandemic, medical imaging techniques like computed
tomography (CT) scans have demonstrated effectiveness in combating the rapid
spread of the virus. Therefore, it is crucial to conduct research on
computerized models for the detection of COVID-19 using CT imaging. A novel
processing method has been developed, utilizing radiomic features, to assist in
the CT-based diagnosis of COVID-19. Given the lower specificity of traditional
features in distinguishing between different causes of pulmonary diseases, the
objective of this study is to develop a CT-based radiomics framework for the
differentiation of COVID-19 from other lung diseases. The model is designed to
focus on outlining COVID-19 lesions, as traditional features often lack
specificity in this aspect. The model categorizes images into three classes:
COVID-19, non-COVID-19, or normal. It employs enhancement auto-segmentation
principles using intensity dark channel prior (IDCP) and deep neural networks
(ALS-IDCP-DNN) within a defined range of analysis thresholds. A publicly
available dataset comprising COVID-19, normal, and non-COVID-19 classes was
utilized to validate the proposed model's effectiveness. The best performing
classification model, Residual Neural Network with 50 layers (Resnet-50),
attained an average accuracy, precision, recall, and F1-score of 98.8%, 99%,
98%, and 98% respectively. These results demonstrate the capability of our
model to accurately classify COVID-19 images, which could aid radiologists in
diagnosing suspected COVID-19 patients. Furthermore, our model's performance
surpasses that of more than 10 current state-of-the-art studies conducted on
the same dataset. | Basma Jumaa Saleh, Zaid Omar, Vikrant Bhateja, Lila Iznita Izhar | 2023-09-22T06:09:48Z | http://arxiv.org/abs/2309.12638v1 | # Auto-Lesion Segmentation with a Novel Intensity Dark Channel Prior for COVID-19 Detection
###### Abstract
During the COVID-19 pandemic, medical imaging techniques like computed tomography (CT) scans have demonstrated effectiveness in combating the rapid spread of the virus. Therefore, it is crucial to conduct research on computerized models for the detection of COVID-19 using CT imaging. A novel processing method has been developed, utilizing radiomic features, to assist in the CT-based diagnosis of COVID-19. Given the lower specificity of traditional features in distinguishing between different causes of pulmonary diseases, the objective of this study is to develop a CT-based radiomics framework for the differentiation of COVID-19 from other lung diseases. The model is designed to focus on outlining COVID-19 lesions, as traditional features often lack specificity in this aspect. The model categorizes images into three classes: COVID-19, non-COVID-19, or normal. It employs enhancement auto-segmentation principles using intensity dark channel prior (IDCP) and deep neural networks (ALS-IDCP-DNN) within a defined range of analysis thresholds. A publicly available dataset comprising COVID-19, normal, and non-COVID-19 classes was utilized to validate the proposed model's effectiveness. The best performing classification model, Residual Neural Network with 50 layers (Resnet-50), attained an average accuracy, precision, recall, and F1-score of 98.8%, 99%, 98%, and 98% respectively. These results demonstrate the capability of our model to accurately classify COVID-19 images, which could aid radiologists in diagnosing suspected COVID-19 patients. Furthermore, our model's performance surpasses that of more than 10 current state-of-the-art studies conducted on the same dataset.
Email: [email protected]
## 1 Introduction
The World Health Organization (WHO) declared COVID-19 a pandemic in 2020 [1]. One of the most common diagnostic procedures for this illness is reverse transcription polymerase chain reaction (RT-PCR), but it has several potential drawbacks that make it less reliable than other approaches [2]. Chest computed tomography (CT) has been established as a useful adjunctive technique for the detection of COVID-19. Chest CT exhibits high sensitivity in screening for COVID-19 infection and can provide a prompt diagnosis, particularly when compared to the RT-PCR
test [3]. Chest radiography has been helpful for the diagnosis, observation, and monitoring of COVID-19 progression when its natural history is evaluated [4]. Although chest X-rays are routinely taken, CT scans are far more sensitive in detecting diseases and can effectively identify patchy ground-glass opacities (GGOs) and consolidations [5]. Lung ultrasound offers numerous benefits, such as no radiation, lower risk of contamination, reduced cost, and high reproducibility [6]; however, it is less sensitive than chest CT [7] and has a limited ability to detect deep and intrapulmonary abnormalities [8]. The major drawbacks of magnetic resonance imaging (MRI) include small sample sizes, a potential lack of experience leading to restricted repeatability of the suggested protocol, and the necessity to allocate a dedicated MRI room to handle both COVID-19 and non-COVID-19 cases [4].
High-sensitivity computed tomography (CT) scans serve as an alternative option for the early detection of COVID-19, capable of addressing the limitations of PCR. A CT scan can achieve approximately 88% to 98% sensitivity [9]. According to the authors of [10], [11], RT-PCR's sensitivity in their analysis was inferior to that of chest CT (59-71% versus 88-98%, respectively, P \(<.001\)). Furthermore, COVID-19 lung impairment can be evaluated using chest CT; as mentioned in [12], COVID-19 lung damage is typically indicated by numerous, peripheral ground-glass opacities (GGOs) and possible concomitant consolidations. The same study also found that, in comparison with RT-PCR in symptomatic patients, chest CT had high sensitivity (97%) but intermediate specificity (56%).
Furthermore, in clinical practice, CT scans have limited specificity (approximately 34%) in detecting COVID-19 and differentiating it from other types of pneumonia based on traditional features [9]. To enhance early diagnosis, the development of novel approaches in medical imaging, such as texture radiomics analysis and mathematical strategies for extracting important features from images across different grayscales, appears crucial [13]. The sub-visual extraction of radiomic properties allows the computer to identify structures that may be imperceptible to the human eye. As a result, radiomics has the potential to complement traditional radiological evaluation by providing valuable clinical data [14].
On the other hand, numerous studies have attempted to merge the two concepts and focus on distinguishing between pneumonia infection and COVID-19 infection, which poses a challenging task for researchers: the high similarity of traditional CT findings between the two conditions leaves few significant discriminative cues [15; 16; 17; 18]. The study in [19] focuses solely on distinguishing between COVID-19 and normal cases, achieving high accuracy. Furthermore, studies [20] and [21] have focused on segmenting regions of interest to extract radiomics features and achieve optimal results in multi-class classification. Many of these studies involve the manual segmentation of the entire volume of interest (VOI) for pneumonia lesions within the lungs, utilizing a popular open-source software package [21].
To enable a rapid, consistent, and human-error-immune framework for pulmonary and lung assessment, artificially intelligent systems have been utilized in various studies to semi-automatically segment pneumonia lesions. In the current work, a new radiomics feature technique called Auto-Lesion Segmentation (ALS) is proposed for the diagnosis of COVID-19 using computed tomography imaging. The ALS technique utilizes an intensity dark channel prior combined with a deep neural network (ALS-IDCP-DNN) model. The suggested methodology comprises the following primary steps: data resizing, feature extraction and selection, segmentation, data augmentation, and classification. Diseased lung areas are upscaled during the pre-processing stage, and the dataset is augmented using doubling and resizing techniques. During the radiomics feature extraction step, the intensity features are extracted and selected, and the dataset is automatically segmented using the dark channel prior technique. Finally, the classification task is performed using the DNN-based Resnet-50 architecture.
The experimental research utilized a COVID-CT dataset comprising COVID-19, normal, and non-COVID-19 classes. The test results demonstrated a 98.8% success rate in accurately identifying COVID-19 infections. These results show that the proposed ALS-IDCP-DNN model performs exceptionally well in identifying the COVID-19 virus. In this research, the ALS-IDCP-DNN model
presents several key contributions to COVID-19 diagnosis using CT scans. Firstly, a framework for classifying COVID-19 was developed, utilizing methods for auto-segmentation, feature extraction, feature selection, and detection. Secondly, a novel radiomics feature extraction method was proposed, which combines IDCP and Guided specific threshold range techniques to enhance analysis and selection. The proposed framework demonstrated high accuracy in classifying CT images as either COVID-19 or non-COVID-19/healthy using only 996 chest CT scan samples, a small number of parameters, and minimal processing resources. The model was tested using three datasets (images of COVID-19 CT scans, non-COVID-19 CT scans, and healthy CT scans images) and outperformed prior works in terms of both explainable detection and precise COVID-19 case classification.
## 2 Methodology
In this work, a new model is proposed for detecting COVID-19 lesions. The proposed approach involves classifying images into three categories (COVID-19, non-COVID-19, or normal) by exploring the enhancement and automatic segmentation principles of intensity regions of interest (IROIs) based on the dark channel prior (DCP). During the segmentation stage, the areas of interest were found automatically inside the pulmonary lesions. For each lesion, the intensity of the radiomic feature was delineated using a specific range of analysis thresholds. Subsequently, the deep neural network model based on the augmentation process, namely the ALS-IDCP-DNN model, was applied. This study attempts to reduce the number of misdiagnoses caused by human error when interpreting chest CT scans. Its secondary purpose is to facilitate the rapid identification of patients with confirmed COVID-19 infection, other pneumonia diseases, and healthy individuals for the benefit of medical professionals. The proposed ALS-IDCP-DNN model consists of six main points, as follows:
1. Input a real-world dataset collected from CT scans that is available to the public.
2. Images from CT scans are resized to better highlight any regions that might be affected by COVID-19 and preserve homogeneity in features.
3. Extraction and analysis of the radiomic intensity features using the dark channel prior approach for infected lesions.
4. Auto-segmentation by enhancing the brightness of pixels within the specified threshold range of analysis.
5. Double data augmentation is employed to expand the dataset size during the process of resizing imagery.
6. The previously trained ResNet-50 model is utilized to extract deep features from each image. The classification outcomes are then determined based on the categorization of these deep features obtained from each deep structure.
The theoretical foundation and dataset utilized by the suggested ALS-IDCP-DNN model are described below.
### Data Sources and Analysis
The dataset of COVID-19 CT scan digital images used in this project was created by Yang et al. [22]. It contains clinical information for 216 COVID-19 patients and 91 non-COVID-19 patients, and includes 349 positive COVID-19 CT images and 463 negative chest CT images of other lung diseases (with lesions) or healthy lungs (no lesions). The positive and negative images were gathered from bioRxiv ([https://www.biorxiv.org](https://www.biorxiv.org)) and medRxiv ([https://www.medrxiv.org](https://www.medrxiv.org)). This dataset was compiled for an authoritative study on COVID-19 CT patient cases and has been made available to the public.
The CT images vary in size, with heights of \(\text{max}=1853\), \(\text{average}=491\), and \(\text{min}=153\) pixels, and widths of \(\text{max}=1485\), \(\text{average}=383\), and \(\text{min}=124\) pixels. Based on the available data, the average age of the unhealthy patients is estimated to be around 45 years, with males (86) outnumbering females (51). The usefulness of this dataset has been verified in accordance with standard practices in radiology.
### Proposed Scenarios
The main objective of this proposed effort is to increase the speed at which radiomics analysis can be conducted. This was achieved by having a radiologist with 10 years of experience in lung CT imaging manually identify the region of interest (ROI) within a lesion on a single CT image slice. When multiple lesions were present, the one with the largest volume and/or highest density was chosen as the target ROI. In this section, the problem of auto-segmenting the ROI on the chest CT image, extracting the radiomics features of this region, and selecting the necessary features is addressed using a modified dark channel prior (DCP).
The dark channel prior can directly estimate the thickness of haze in an image by analysing pixels with low intensity, and lung lesion tissue appears similar to haze in an image. Based on the above, an automatic segmentation scheme based on a modified DCP was applied to extract lesion regions from the lungs. The purpose was to estimate the intensity threshold for increasing pixel brightness, specifically targeting the haze (representing lung lesion tissue) in the pixels. Subsequently, the intensity was carefully observed in the analysis of most images to best distinguish COVID-19 lesions from other pulmonary conditions within a specific threshold range. This approach outperformed current state-of-the-art techniques and required minimal processing time. The process is outlined as follows:
#### 2.2.1 Dark Channel Prior (DCP)
The dark channel prior technique largely relies on the characteristics of haze-free images. In general, pixels with very low intensity, virtually zero in at least one colour channel, can be found in areas not covered by sky, as detailed in [23]. To describe this observation clearly, the meaning of the "dark channel" must first be defined. For an arbitrary image \(\mathfrak{S}\), its dark channel \(\mathfrak{S}^{\text{DCP}}\) is defined as:
\[\mathfrak{S}^{\text{DCP}}(x)=\text{Min}_{\mathtt{y}\in\mathcal{C}(\mathtt{x})}\left(\text{Min}_{\mathtt{c}\in\{\mathtt{r},\mathtt{g},\mathtt{b}\}}\,\mathfrak{S}^{\mathtt{c}}(\mathtt{y})\right) \tag{1}\]

where \(\mathfrak{S}^{\mathtt{c}}\) is a color channel of \(\mathfrak{S}\) and \(\mathcal{C}(x)\) is a localized area centred at \(x\). Two minimum operators yield the dark channel: \(\text{Min}_{\mathtt{c}\in\{\mathtt{r},\mathtt{g},\mathtt{b}\}}\) is performed on each pixel, and \(\text{Min}_{\mathtt{y}\in\mathcal{C}(x)}\) is a minimum filter applied over the local patch. The minimum operator is commutative, so the two operations can be exchanged.
If \(\mathfrak{S}\) is a haze-free outdoor image, excluding sky regions, this observation indicates that the brightness of \(\mathfrak{S}\)'s dark channel is small and typically close to zero:
\[\mathfrak{S}^{\text{DCP}}{\rightarrow}0 \tag{2}\]
The study in [23] provided a straightforward technique for estimating the global atmospheric light \(\mathbb{A}\) based on the DCP: 1) select the 0.1 percent brightest pixels in the dark channel; 2) among these, the pixel with the highest intensity in the original input image \(\mathbb{I}\) is chosen to represent the atmospheric light.
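To make these two steps concrete, the following is a minimal Python/NumPy sketch under our own assumptions (a three-channel image with values in \([0,1]\); a grayscale CT slice can be replicated across channels). It is an illustration only, not the paper's MATLAB implementation; the patch size and the 0.1% fraction are the tunable parameters.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel of Eq. (1): min over colour channels, then a local minimum filter."""
    per_pixel_min = img.min(axis=2)                   # min over c in {r, g, b}
    return minimum_filter(per_pixel_min, size=patch)  # min over the patch C(x)

def atmospheric_light(img, dark, top_fraction=0.001):
    """Atmospheric light A: brightest input pixel among the top 0.1% dark-channel pixels."""
    n_top = max(1, int(dark.size * top_fraction))
    idx = np.argsort(dark.ravel())[-n_top:]           # positions of the brightest dark-channel pixels
    candidates = img.reshape(-1, 3)[idx]
    return candidates[candidates.sum(axis=1).argmax()]  # per-channel vector A
```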
#### 2.2.2 Transmission Map
He et al. [23] used the double min operation of (1) across all three colour channels to estimate the transmission map of a hazy image:
\[\text{Min}_{\mathtt{y}\in\mathcal{C}(x)}\left(\text{Min}_{\mathtt{c}}\frac{\mathfrak{I}^{\mathtt{c}}(\mathtt{y})}{\mathbb{A}^{\mathtt{c}}}\right)=\tilde{\mathfrak{k}}(x)\,\text{Min}_{\mathtt{y}\in\mathcal{C}(x)}\left(\text{Min}_{\mathtt{c}}\frac{\mathfrak{J}^{\mathtt{c}}(\mathtt{y})}{\mathbb{A}^{\mathtt{c}}}\right)+1-\tilde{\mathfrak{k}}(x) \tag{4}\]

where \(\mathfrak{I}\) is the hazy input image, \(\mathfrak{J}\) is the haze-free scene radiance, and \(\tilde{\mathfrak{k}}(x)\) is the patch-wise constant transmission.
Substituting (2) into (4), since the dark channel of the haze-free radiance \(\mathfrak{J}\) vanishes, we obtain (5):
\[\tilde{\mathfrak{k}}(\mathrm{x})=1-\mathrm{Min}_{\mathrm{y}\in\mathcal{C}(x)} \left(\mathrm{Min}_{\mathrm{c}}\frac{\mathrm{i}^{\mathrm{c}}(\mathrm{y})}{ \mathrm{\SIUnitSymbol{a}}^{\mathrm{c}}}\right) \tag{5}\]
A predefined parameter \(\omega\) (\(0<\omega\leq 1\)) is introduced to retain a small amount of haze for distant objects and preserve the sense of depth in the image; otherwise, the result looks artificial. This technique is known as 'aerial perspective' [24].
\[\tilde{\mathfrak{k}}(\mathrm{x})=1-\omega\;\mathrm{Min}_{\mathrm{y}\in \mathcal{C}(x)}\left(\mathrm{Min}_{\mathrm{c}}\frac{\mathrm{i}^{\mathrm{c}}( \mathrm{y})}{\mathrm{\SIUnitSymbol{a}}^{\mathrm{c}}}\right) \tag{6}\]
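Continuing the sketch above (again only an illustration, not the authors' MATLAB code), the transmission map of Eq. (6) takes a few lines; `omega` here is the aerial-perspective parameter of (6).

```python
def transmission_map(img, A, patch=15, omega=0.95):
    """Transmission map of Eq. (6): 1 - omega * min-filtered channel-wise minimum of I/A."""
    normalized = img / np.maximum(A, 1e-6)            # divide each channel by A^c
    return 1.0 - omega * minimum_filter(normalized.min(axis=2), size=patch)
```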
### Proposed Algorithm based on Intensity Feature
The tissue of a pulmonary lesion in a lung CT radiomics image bears a visual similarity to a hazy area, and the resulting depth-like map exhibits a distinct intensity that accurately captures the presence of this "haze." To localize the haze at different positions, the two DCP steps above are employed to extract the intensity component from the depth-like map:
1. The Dark Channel Prior estimates the thickness of the tissue in the pulmonary lung lesion by selecting and extracting pixels within a specified threshold range (obtained through dataset analysis). This leads to the following equation: \[\tilde{\Upsilon}(x)=1-\tilde{\mathfrak{k}}(x)\] (7) And analysis condition: \[\left(\left(\tilde{\Upsilon}(x)>0.35\right)\&\&\left(\tilde{\Upsilon}(x)<0.7 0\right)\right)\] (8)
where \(\tilde{\mathfrak{k}}(\mathrm{x})\) is the estimated transmission map and \(\tilde{\Upsilon}(\mathrm{x})\) is the thickness map containing the pulmonary lesion intensity in the input chest CT image.
2. The input chest CT image is auto-segmented by reversing the DCP process, i.e., removing haze from pixels after estimation. However, when estimating pixels with haze in the pulmonary lesion, an intensity-based region of interest (IROI) is delineated by increasing the brightness of pixels within the specified threshold range; alternatively, the original scene radiance can be restored. As a result, the model can be represented by the following: \[\tilde{\Upsilon}(x)=\tilde{\Upsilon}(x)_{\mathrm{Brightness}}\] (9)
where \(\tilde{\Upsilon}(x)_{\mathrm{Brightness}}\) is the thickness map containing the pulmonary lesion intensity obtained from the brightness-enhanced image.
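A sketch of these two steps, under the same assumptions as the snippets above (the CT slice treated as a three-channel image in \([0,1]\); the brightness gain below is an illustrative choice, not a value from the paper):

```python
def lesion_roi_and_enhance(img, t_map, low=0.35, high=0.70, gain=1.4):
    """Eqs. (7)-(9): threshold the thickness map and brighten the resulting ROI."""
    thickness = 1.0 - t_map                              # Eq. (7)
    roi_mask = (thickness > low) & (thickness < high)    # analysis condition, Eq. (8)
    enhanced = img.copy()
    enhanced[roi_mask] = np.clip(enhanced[roi_mask] * gain, 0.0, 1.0)  # Eq. (9), brightened ROI
    return roi_mask, enhanced
```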
### Classifier: Resnet-50
Convolutional Neural Networks (CNNs) have recently demonstrated remarkable results in object recognition and classification tasks. Consequently, deep models are often built by fine-tuning networks pre-trained on existing datasets rather than training a model from scratch [25]. The proposed classifier uses the ResNet-50 architecture, which consists of 50 layers, applied to images whose Region of Interest (ROI) has been brightness-enhanced. This approach provides several advantages over classic algorithms and current state-of-the-art techniques, including accelerated learning and better generalization.
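The experiments below were run in MATLAB with a pre-trained ResNet-50; the following PyTorch sketch is only meant to illustrate the transfer-learning setup (three output classes, frozen backbone, retrained head) and is not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_classifier(num_classes=3):
    """ResNet-50 pre-trained on ImageNet, with a new head for the
    COVID-19 / non-COVID-19 / healthy classes."""
    net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    for p in net.parameters():            # freeze the convolutional backbone
        p.requires_grad = False
    net.fc = nn.Linear(net.fc.in_features, num_classes)  # trainable classification head
    return net

model = build_classifier()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```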
## 3 Experimental Results
The proposed model, Auto Lesion Segmentation-Intensity Dark Channel Prior with a Deep Neural Network (ALS-IDCP-DNN), was implemented in MATLAB. For the experiments, a computer equipped with an Intel Xeon Silver 2.6 GHz processor, 16 GB RAM, and an NVIDIA GTX960M GPU card was used. The model was trained on CT radiomics scans for the classification of COVID-19 disease using a pre-trained ResNet-50 framework.
Figure 2 presents the identification rates of COVID-19 disease in CT radiomics scans using the ALS-IDCP-DNN model, both with and without IDCP. The COVID-19 infected lesion is not only detected but also has clearly visible boundaries and increased brightness (indicated by the arrow). The outcomes demonstrate that incorporating the DCP (Dark Channel Prior) and employing data augmentation significantly enhanced performance, with the method achieving 98.8% accuracy, 99% precision, 98% recall, and a 98% F1-score. The 500 COVID-19 images, 500 non-COVID-19 images, and 500 healthy images were classified into their respective classes; only 4 of the non-COVID-19 images were misclassified, resulting in a precision of 99%, and out of the total non-COVID-19 images, only 2 were misclassified. A literature survey, presented in Table 1, compares the proposed method with recent studies that utilized the same dataset. Although no statistical significance tests were reported in those references, our method yielded notably strong results.
Figure 1: Architecture of the proposed ALS-IDCP-DNN model.
## 4 Conclusion
A new computer-aided system, known as the ALS-IDCP-DNN model, has been designed to enable swift diagnosis and prognosis of CT scan samples within the shortest possible time frame, utilizing only 5 epochs. To detect COVID-19, this study relied on a ResNet-50 model that had been previously trained. To address clustering difficulties and enhance the CT dataset, common data augmentation methods such as doubling and resizing were employed. The ALS-IDCP-DNN system combines the intensity dark channel prior scenario for enhanced auto-segmentation, feature selection, and extraction with deep transfer learning using the Resnet-50 model. According to this combination of metrics, including F1-score, accuracy, precision, and recall, the identification of COVID-19, non-COVID-19, and healthy CT scans achieved above 98%. Several pneumonia chest CT datasets have demonstrated that combining ResNet-50 with intensity segmentation and augmentation yields the most effective deep-learning algorithm for detecting COVID-19. Our model surpasses over 10 of the most advanced and effective methods currently available in diagnosing suspected COVID-19 patients, as evidenced by its superior ability to classify COVID-19 images. In the future, to differentiate between COVID-19 and other pulmonary conditions, we plan to incorporate automatic detection and segmentation.
## 5 Data Availability
[https://github.com/UCSD-AI4H/COVID-CT](https://github.com/UCSD-AI4H/COVID-CT).
|
2309.14249 | Averages over the Gaussian Primes: Goldbach's Conjecture and Improving
Estimates | We prove versions of Goldbach conjectures for Gaussian primes in arbitrary
sectors. Fix an interval $\omega \subset \mathbb{T}$. There is an integer
$N_\omega $, so that every odd integer $n$ with $N(n)>N_\omega $ and
$\text{dist}( \text{arg}(n) , \mathbb{T}\setminus \omega ) > (\log N(n))
^{-B}$, is a sum of three Gaussian primes $n=p_1+p_2+p_3$, with
$\text{arg}(p_j) \in \omega $, for $j=1,2,3$. A density version of the binary
Goldbach conjecture in a sector is also proved. | Christina Giannitsi, Ben Krause, Michael Lacey, Hamed Mousavi, Yaghoub Rahimi | 2023-09-25T16:10:09Z | http://arxiv.org/abs/2309.14249v2 | # Averages over the Gaussian primes: Goldbach's conjecture and improving estimates
###### Abstract.
We prove versions of Goldbach conjectures for Gaussian Primes in arbitrary sectors. Fix an interval \(\omega\subset\mathbb{T}\). There is an integer \(N_{\omega}\), so that every odd integer \(n\) with \(\arg(n)\in\omega\) and \(N(n)>N_{\omega}\), is a sum of three Gaussian primes \(n=p_{1}+p_{2}+p_{3}\), with \(\arg(p_{j})\in\omega\), for \(j=1,2,3\). A density version of the binary Goldbach conjecture is proved. Both follow from a High/Low decomposition of the Fourier transform of averages over Gaussian primes, defined as follows. Let \(\Lambda(n)\) be the Von Mangoldt function for the Gaussian integers and consider the norm function \(N:\mathbb{Z}[i]\to\mathbb{Z}^{+}\), \(\alpha+i\beta\mapsto\alpha^{2}+\beta^{2}\). Define the averages
\[A_{N}f(x)=\frac{1}{N}\sum_{N(n)<N}\Lambda(n)f(x-n).\]
Our decomposition also proves the \(\ell^{p}\) improving estimate
\[\|A_{N}f\|_{\ell^{p^{\prime}}}\ll N^{1/p^{\prime}-1/p}\|f\|_{\ell^{p}},\qquad 1<p\leq 2.\]
###### Contents
* 1 Introduction
* 2 Notation and Preliminaries
* 3 Inequalities involving Ramanujan Sums
* 4 The Vinogradov Inequality
* 5 Approximating the Kernel
* 6 Estimates for the High and Low Parts
* 7 Improving Inequalities
* 8 Goldbach Conjecture
* 8.1 The Binary Goldbach Conjecture
* 8.2 The Ternary Goldbach Conjecture
## 1. Introduction
The principal goal of this paper is to establish a density version of the strong Goldbach conjecture for Gaussian integers, restricted to sectors in the complex plane.
Briefly, for integers \(n\in\mathbb{Z}[i]\) we write \(N(a+ib)=a^{2}+b^{2}\), and if we express \(n=\sqrt{N(n)}e^{i\theta}\), we set \(\arg(n)=\theta\); the units are \(\pm 1\) and \(\pm i\). The ring \(\mathbb{Z}[i]\) is a Euclidean domain, inside of which an integer \(p=a+ib\) is a prime if it has no integer factor \(q\) with \(1<N(q)<N(p)\). The Gaussian primes take one of two forms. First, if \(a\) and \(b\) are non-zero, then \(N(p)=a^{2}+b^{2}\) is prime. Second, if \(a\) or \(b\) is zero, then \(p\) is a unit times a prime in \(\mathbb{N}\) congruent to \(3\bmod 4\).
As in the case of \(\mathbb{Z}\), the Goldbach conjecture over \(\mathbb{Z}[i]\) requires a notion of evenness: a Gaussian integer \(x=a+ib\) is _even_ if and only if \(N(x)=a^{2}+b^{2}\) is even. Evenness is equivalent to either condition below.
1. The integer \(a+b\) is even.
2. The Gaussian integer \(1+i\) divides \(x\).
An integer which is not even is _odd_.
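As a quick numerical illustration of these two characterizations (our own sketch, not part of the paper; the helper names are hypothetical), primality and parity can be tested directly from the components of \(a+ib\):

```python
from sympy import isprime

def is_gaussian_prime(a: int, b: int) -> bool:
    """a + bi is a Gaussian prime iff a^2 + b^2 is a rational prime (a, b both nonzero),
    or one component is zero and the other is, up to sign, a rational prime = 3 mod 4."""
    if a != 0 and b != 0:
        return isprime(a * a + b * b)
    c = abs(a) + abs(b)                  # the nonzero component (or 0)
    return isprime(c) and c % 4 == 3

def is_even(a: int, b: int) -> bool:
    """x = a + bi is even iff N(x) is even, equivalently a + b is even, i.e. (1+i) | x."""
    return (a + b) % 2 == 0

# 1+i and 3 are Gaussian primes; 2 = -i(1+i)^2 and 5 = (2+i)(2-i) are not.
assert is_gaussian_prime(1, 1) and is_gaussian_prime(3, 0)
assert not is_gaussian_prime(2, 0) and not is_gaussian_prime(0, 5)
assert is_even(1, 1) and not is_even(3, 0)
```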
Our main results are of Goldbach type. We show that there are very few even integers which are _not_ a sum of two primes, and we do so even with significant restrictions on the arguments of the integers involved. Similarly, all sufficiently large odd integers are a sum of three primes. The only prior results in this direction that we could find in the literature correspond to the entire complex plane. Below, we allow arbitrary sectors.
**Theorem 1.1**.: _Fix an integer \(B>10\) and interval \(\omega\subset\mathbb{T}\). There exists an \(N_{\omega,B}>0\), such that for all integers \(N>N_{\omega,B}\), the following holds._
1. _Every odd integer_ \(n\) _with_ \(N(n)>N_{\omega,B}\) _and_ \(\arg(n)\in\omega\) _is a sum of three Gaussian primes_ \(n=p_{1}+p_{2}+p_{3}\)_, with_ \(\arg(p_{j})\in\omega\) _for_ \(j=1,2,3\)_._
2. _We have_ \(|E_{2}(\omega,N)|\ll\frac{N}{\log(N)^{B}}\)_, where_ \(E_{2}(\omega,N)\) _is the set of_ even _Gaussian integers with_ \(N(n)<N\) _and_ \(\arg(n)\in\omega\)_, which_ cannot _be represented as sum of two Gaussian primes_ \(n=p_{1}+p_{2}\)_, with_ \(\arg(p_{j})\in\omega\) _for_ \(j=1,2\)_._
We can further estimate the number of representations in both the binary and ternary case.
Our proof derives from a detailed study of the analytic properties of averages over the Gaussian primes. In particular, let \(f:\mathbb{Z}[i]\to\mathbb{C}\) be an arithmetic function, and for \(x\in\mathbb{Z}[i]\) define
\[A_{N}f(x)=\frac{1}{N}\sum_{N(n)<N}\Lambda(n)f(x-n)\]
where the von Mangoldt function on \(\mathbb{Z}[i]\) is defined by
\[\Lambda(n)=\begin{cases}\log(N(\rho))&\text{ if $n=\rho^{\alpha}$ and $\rho\in\mathbb{Z}[i]$ is irreducible}\\ 0&\text{ otherwise.}\end{cases}\]
We prove an improving estimate for the averages above: The averages of \(\ell^{1}(\mathbb{Z}^{2})\) functions are nearly bounded.
**Theorem 1.2**.: _For all \(N\), and \(1<p\leqslant 2\), we have,_
\[\langle A_{N}f,g\rangle\ll N^{1-2/p}\|f\|_{p}\|g\|_{p}.\]
_Equivalently, for all \(1<p\leqslant 2\), whenever \(f\) is supported on a cube, \(Q\), of side length \(\sqrt{N}\),_
\[\frac{\|A_{N}f\|_{p^{\prime}}}{|Q|^{1/p^{\prime}}}\ll\frac{\|f\|_{p}}{|Q|^{1/p}}\]
To prove this, we need to develop many expected results, including Ramanujan type identities and Vinogradov type estimates for the Fourier transform of \(A_{N}\). Not being able to identify clear cut references for many of these estimates, we develop them below. We then develop a High/Low decomposition of the Fourier transform of \(A_{N}\). A particular innovation is our approach to the major arc estimates in Lemma 5.3, which are typically proved by Abel summation. But that method is poorly adapted to the question at hand, and we develop a different one. Also noteworthy is that the Low term is defined in terms of smooth numbers, a technique used in [12]. Smoothness facilitates the proof of Lemma 8.11. With this decomposition in hand, the deduction of the Theorems is relatively straightforward.
Indeed, we develop this for the more specialized averages, over sectors of the complex plane. For an interval \(\omega\subset\mathbb{T}\), extend the definition of the averages to this setting.
\[A_{N}^{\omega}f(x)=\frac{2\pi}{|\omega|N}\sum_{\begin{subarray}{c}N(n)<N\\ \arg(n)\in\omega\end{subarray}}\Lambda(n)f(x-n). \tag{1.3}\]
To address the binary Goldbach question, note that
\[A_{N}^{\omega}*A_{N}^{\omega}(m)=\frac{4\pi^{2}}{|\omega|^{2}N^{2}}\sum_{ \begin{subarray}{c}N(n_{1}),N(n_{2})<N\\ \arg(n_{1}),\arg(n_{2})\in\omega\\ n_{1}+n_{2}=m\end{subarray}}\Lambda(n_{1})\Lambda(n_{2})\]
The High/Low decomposition for the \(A_{N}^{\omega}\) can be leveraged to study the sum above. This is the path we follow to prove Theorem 1.1 in SS8.
Our motivations come from number theory and analysis. The binary (and higher order) Goldbach conjectures have been addressed in the number field setting. Mitsui [16, §§11,12] addresses the binary and higher order cases of Goldbach's Conjecture in an arbitrary number field, finding that all sufficiently large totally positive odd integers are the sum of odd totally positive primes. He does not address the sector case. Much earlier, Rademacher [18, 19] studied representations of totally positive integers in a class of quadratic extensions of \(\mathbb{Z}\). These papers assume the Generalized Riemann Hypothesis.
Holben and Jordan [7] raised conjectures in the spirit of our Theorem. Their Conjecture F states
**Conjecture 1.4**.: Each even Gaussian integer \(n\) with \(N(n)>10\) is the sum of two primes \(n=p_{1}+p_{2}\), with \(\arg(n)-\arg(p_{j})\leq\pi/6\), for \(j=1,2\).
As far as we know, there is no prior result in this direction, for neither the binary nor the ternary version of the Goldbach conjecture. But, we also note that our Theorem 1.1 provides a density version of a much stronger result. For all \(\delta>0\), most even Gaussian integers \(n\) with \(N(n)>N_{\delta}\), are the sum of two primes \(n=p_{1}+p_{2}\), with \(|\arg(n)-\arg(p_{j})|\leq\delta\), for \(j=1,2\). Indeed, partition the unit circle into intervals of length \(\delta\), and apply our Theorem to each interval.
The study of metric properties of the uniform distribution of \(p\alpha\), for irrational \(\alpha\in\mathbb{C}\) and Gaussian prime \(p\), is well developed by Harman [8, Chapter 11], [9], with effective results even for small sectors. See the extensions to certain quadratic number fields by Baier and Technau [1]. The latter paper also addresses metric questions along lines in the complex plane.
On the analytical side, our improving estimate above is part of Discrete Harmonic Analysis, a subject invented by Bourgain [3]. The recent textbook of one of us [11] serves as a comprehensive summary of the subject. The study of the averages over the primes has been extensively studied, beginning with the work of Bourgain-Wierdl [4, 21], and continued by several [5, 6, 13, 14, 15].
## 2. Notation and Preliminaries
Throughout the paper we are using \(\|\beta\|\) to denote the distance of \(\beta\in\mathbb{C}\) to the closest point \(n\in\mathbb{Z}[i]\), and for \(a,b,c\in\mathbb{Z}[i]\) we write \((a,b)=c\) to indicate that \(c\) is the greatest (in norm) common divisor of \(a\) and \(b\) up to a unitary element.
We will use the \(\ell^{\infty}\) balls \(B_{\infty}(r)=\{x=a+bi\in\mathbb{Z}[i]\,:\,-r\leq a,b<r\}\) and \(B_{\infty}(c,r)=B_{\infty}(r)+c\). Notice that there is a small departure from the traditional notation of a ball with respect to the \(\infty\)-norm, in the sense that we are only including the lowest endpoint, however this variation is useful as it allows us to obtain a tessellation of the complex plane. It is obvious that translating and rotating the grid does not affect the tessellation. For a \(q=|q|e^{i\theta_{q}}\in\mathbb{Z}[i]\), we are interested in the grid formed by the squares
\[B_{q}:=e^{i\theta_{q}}B_{\infty}\left((1+i)\frac{|q|}{2},\frac{|q|}{2}\right)\]
where we have rotated the squares by the argument of \(q\), so that their sides are parallel and orthogonal to \(q\) respectively; see Figures 1 and 2. This particular decomposition of lattice points coincides with all the _unique_ remainders modulo \(q\) in \(\mathbb{Z}[i]\) and simplifies our calculations of complex exponentials. Indeed, it is known that for \(q\in\mathbb{Z}[i]\), there are \(N(q)\) distinct remainders modulo \(q\), which agrees with the number of points inside \(B_{q}\). It is straightforward to verify that any two points in \(B_{q}\) cannot be equivalent modulo \(q\)
which then proves our claim. We are also going to need the following, more geometric, description of our boxes:
\[B_{q}=\{r\in\mathbb{Z}[i]\mid 0\leqslant\langle r,q\rangle<N(q)\ \text{ and }\ 0\leqslant\langle r,iq\rangle<N(q)\}.\]
Let \(e(x):=e^{2\pi ix}\). For a function \(f:\mathbb{Z}[i]\rightarrow\mathbb{C}\), we can define the discrete Fourier Transform over the box \(B_{q}\) as
\[\mathcal{F}_{q}f\left(x\right):=\sum_{n\in B_{q}}f(n)e\bigl{(}-\langle x,\frac {n}{q}\rangle\bigr{)}, \tag{2.1}\]
and the corresponding inverse discrete Fourier transform as
\[\mathcal{F}_{q}^{-1}f\left(n\right):=\frac{1}{N(q)}\sum_{x\in B_{q}}f(x)e \bigl{(}\langle n,\frac{x}{\bar{q}}\rangle\bigr{)}.\]
The Euler totient function \(\phi\) for Gaussian integers counts the number of points in \(B_{z}\) that are coprime to \(z\). It is equal to
\[\phi(z)=N(z)\prod_{\begin{subarray}{c}p\,|\,z\\ p:\text{ prime in }\mathbb{Z}[i]\end{subarray}}\left(1-\frac{1}{N(p)}\right).\]
Clearly, \(\phi(z)=\phi(\bar{z})\). Thus, \(|\mathbb{A}_{q}|=\phi(q)\), where
\[\mathbb{A}_{q}:=\{r\in B_{q}\,:\,(r,q)=1\}.\]
We also need the arithmetic function \(r_{2}(n)\), the _sum of squares function_, which counts the number of representations of an integer \(n\) as a sum of two squares. Equivalently, \(r_{2}(n)\) is the number of \(m\in\mathbb{Z}[i]\) such that \(N(m)=n\). It is known that \(r_{2}(n)=O(1)\) in an average sense, namely
\[\sum_{n<N}r_{2}(n)\simeq N \tag{2.2}\]
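As a quick sanity check of (2.2) (our own illustrative script, not part of the paper), \(r_{2}(n)\) can be computed by brute force over lattice points; the average is about \(\pi\), consistent with counting lattice points in a disc.

```python
import math

def r2(n: int) -> int:
    """Number of pairs (a, b) in Z^2 with a^2 + b^2 = n (brute force)."""
    count, m = 0, math.isqrt(n)
    for a in range(-m, m + 1):
        b2 = n - a * a
        b = math.isqrt(b2)
        if b * b == b2:
            count += 1 if b == 0 else 2   # count both b and -b
    return count

N = 10_000
print(sum(r2(n) for n in range(N)) / N)   # ~ 3.14, i.e. sum_{n<N} r2(n) is comparable to N
```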
Define the Mobius function as follows.
\[\mu(n)=\begin{cases}(-1)^{k}&\text{ if }n=\epsilon\rho_{1}\rho_{2}\cdots\rho_{k} \\ 1&\text{ if }n=\epsilon\\ 0&\text{ otherwise.}\end{cases}\]
where \(\epsilon\in\{\pm 1,\pm i\}\) are units and \(\rho_{i}\) are distinct prime elements of \(\mathbb{Z}[i]\).
We define the classical form of the average over a sector. For an interval \(\omega\subset\mathbb{T}\), set
\[M_{N}^{\omega}=M_{N}=\frac{|\omega|}{2\pi N}\sum_{\begin{subarray}{c}N(n)<N\\ \arg(n)\in\omega\end{subarray}}\delta_{n}, \tag{2.3}\]
where \(\delta_{n}\) is a Dirac measure at \(n\).
We emphasize that _we do not attempt to track constants that depend upon \(\omega\)_. Frequently, we will assume that \(N\) is large enough, once \(\omega\) is fixed.
**Lemma 2.4**.: _Fix an interval \(\omega\subset\mathbb{T}\), \(0<|\omega|\leq 2\pi\). For \(N(\beta)<1\), and integers \(N>N_{\omega}\),_
\[\widehat{M}_{N}^{\omega}(\beta) :=\frac{|\omega|}{2\pi N}\sum_{\begin{subarray}{c}N(n)<N\\ \arg(n)\in\omega\end{subarray}}e(-\langle n,\beta\rangle) \tag{2.5}\] \[\ll_{\omega}\min\left(1,(N\cdot N(\beta))^{-\frac{3}{4}}\right)+\frac{1}{\sqrt{N}}.\]
_The implied constant only depends on \(\omega\)._
Proof.: Let \(I_{n}=B_{\infty}(n,1/2)\). For \(n=0\), we have
\[\int_{I_{0}}e(-\langle x,\beta\rangle)\;dx=\prod_{j=1}^{2}\frac{\sin(\pi\beta _{j})}{\pi\beta_{j}},\;\;\;\beta=(\beta_{1},\beta_{2}).\]
This function is bounded above, and away from zero, since \(N(\beta)<1\). Suppress the dependence on \(\omega\) in the notation. Modify the definition of
\[S_{N}(\beta):=N\widehat{M}_{N}(\beta)\]
to
\[T_{N}(\beta):=\sum_{\begin{subarray}{c}n\colon N(n)<N\\ \arg(n)\in\omega\end{subarray}}\int_{I_{n}}e(-\langle x,\beta\rangle)\;dx\]
\[=\prod_{j=1}^{2}\frac{\sin(\pi\beta_{j})}{\pi\beta_{j}}\sum_{\begin{subarray}{c}n:\ N(n)<N\\ \arg(n)\in\omega\end{subarray}}e(-\langle n,\beta\rangle)\] \[=S_{N}(\beta)\prod_{j=1}^{2}\frac{\sin(\pi\beta_{j})}{\pi\beta_{j}}.\]
We see that it suffices to estimate \(T_{N}(\beta)\).
The symmetric difference between the sector defined by \(\omega\), and the area of integration defining \(T_{N}(\beta)\) is the set
\[\bigcup_{\begin{subarray}{c}n:\ N(n)<N\\ \arg(n)\in\omega\end{subarray}}I_{n}\,\triangle\{n\colon N(n)<N,\arg(n)\in \omega\}.\]
It has measure at most \(\ll_{\omega}\sqrt{N}\), as the above set is supported in an \(O(1)\) neighborhood of the boundary of \(\{n:N(n)<N,\ \arg(n)\in\omega\}\), which has length \(\ll_{\omega}\sqrt{N}\). Thus,
\[T_{N}(\beta)\ll\int_{\begin{subarray}{c}N(x)<N\\ \arg(x)\in\omega\end{subarray}}e(-\langle x,\beta\rangle)\;dx+\sqrt{N}\]
By partitioning \(\omega\) into smaller arcs and arguing as in [17, Page 111-112], which addresses the case where \(\omega=\mathbb{T}\), we may bound the integral
\[\frac{1}{N}\int_{\begin{subarray}{c}N(x)<N\\ \arg(x)\in\omega\end{subarray}}e(-\langle x,\beta\rangle)\;dx\ll_{\omega} \big{(}\frac{1}{N\cdot N(\beta)}\big{)}^{3/4}.\]
## 3. Inequalities involving Ramanujan Sums
In this section, we prove two dimensional analogues of estimates and identities already known for one dimensional Ramanujan sums, like the Cohen Identity. This section is crucial to our High-Low decomposition. Some of these properties are known, but we include details for completeness. We start with standard facts about Fourier transform on \(B_{q}\).
**Lemma 3.1**.: _Consider the set \(B_{q}\) and \(n\in\mathbb{Z}[i]\). We have:_
\[\sum_{r\in B_{q}}e(\langle r,\frac{n}{\bar{q}}\rangle)=\begin{cases}N(q)& \text{ if }\bar{q}\ \mid\ n\\ 0&\text{ Otherwise.}\end{cases}\]
Below, we divide by \(\bar{d}\), where \(d\) is a divisor of \(q\).
**Corollary 3.2**.: _For \(d\ |\ q\) we have_
\[\sum_{r\in B_{q}}e\left(\langle r,\frac{n}{\bar{d}}\rangle\right)=\begin{cases} N(q)&\text{ if }\bar{d}\ \mid\ n\\ 0&\text{ Otherwise.}\end{cases}\]
Another consequence of Lemma 3.1 is a form of Parseval's Identity as stated below.
**Proposition 3.3**.: _For a function \(f\) defined on \(\mathbb{Z}[i]\) the Discrete Fourier Transform \(\mathcal{F}_{q}\) defined in (2.1) satisfies_
\[\sum_{n\in B_{q}}|f(n)|^{2}=\frac{1}{N(q)}\sum_{x\in B_{q}}|\mathcal{F}_{q}f(x )|^{2}.\]
Above, on the left we have \(B_{q}\), and on the right \(B_{\bar{q}}=\{\bar{n}:n\in B_{q}\}=\overline{B_{q}}\).
The analog of Ramanujan's sum is
\[\tau_{q}(x):=\sum_{n\in\mathbb{A}_{q}}e(\langle x,\tfrac{n}{q}\rangle). \tag{3.4}\]
**Lemma 3.5**.: _Let \(r\in\mathbb{Z}[i]\)._
\[\mathbf{1}_{\gcd(\bar{r},q)=1}=\frac{1}{N(q)}\sum_{k\in B_{q}}e\left(\langle -r,\frac{k}{q}\rangle\right)\tau_{\bar{q}}(k).\]
Proof.: Note from the definition in (3.4), that
\[\tau_{\bar{q}}(k)=\langle\mathbf{1}_{\gcd(\bar{r},q)=1},e(\langle\cdot,k/\bar{q}\rangle)\rangle.\]
Then, the conclusion follows from general facts about orthogonal bases.
**Lemma 3.6**.: _For \(x\in\mathbb{Z}[i]\) we have_
\[\tau_{q}(x)=\sum_{d\mid(q,\bar{x})}\mu(q/d)N(d).\]
_In particular, \(\tau_{q}(q)=\phi(q)\) and \(\tau_{q}(1)=\mu(q)\). In addition, if \((q,\bar{x})=1\) then \(\tau_{q}(x)=\mu(q)\)._
Proof.: Note that thanks to Corollary 3.2 we have that if \(d\ |\ q\) then
\[\sum_{\begin{subarray}{c}k\in B_{q}\\ d|k\end{subarray}}e\left(\langle x,\frac{k}{q}\rangle\right) =\sum_{k\in B_{q}}e\left(\langle x,\frac{k}{q}\rangle\right) \mathbf{1}_{d|k}\] \[=\sum_{k\in B_{q}}e\left(\langle x,\frac{k}{q}\rangle\right)\frac {1}{N(q)}\sum_{r\in B_{q}}e\left(\langle-r,\frac{k}{d}\rangle\right)\] \[=\frac{1}{N(q)}\sum_{r\in B_{q}}\sum_{k\in B_{q}}e\left(\langle \frac{x-r\frac{\bar{q}}{d}}{\bar{q}},k\rangle\right)\] \[=\sum_{r\in B_{q}}\mathbf{1}_{\bar{q}\mid x-r\frac{\bar{q}}{d}}\] \[=N(\frac{q}{d})\mathbf{1}_{\frac{q}{d}\mid\bar{x}}.\]
The last equality comes from counting the number of \(r\)'s in \(B_{q}\) for which the indicator is non-zero. Let \(\bar{x}=\frac{q}{d}x^{\prime}\). Then \(\bar{x}-\bar{r}\frac{q}{d}\equiv 0\mod q\,\Leftrightarrow\,\frac{q}{d}(x^{ \prime}-\bar{r})\equiv 0\mod\bar{q}\). This means that \(x^{\prime}\equiv\bar{r}\mod d\), which means that there exists a unique such \(\bar{r}\in B_{d}\) and \(N(q/d)\) of them in \(B_{q}\).
Hence, and with the assistance of the Inclusion-Exclusion principle, we observe that
\[\tau_{q}(x) =\sum_{\begin{subarray}{c}k\in B_{q}\\ (k,q)=1\end{subarray}}e(\langle x,\tfrac{k}{q}\rangle)\] \[=\sum_{d\,|\,q}\mu(d)\sum_{\begin{subarray}{c}k\in B_{q}\\ d|k\end{subarray}}e(\langle x,\tfrac{k}{q}\rangle)\] \[=\sum_{d\,|\,q}\mu(d)N(\tfrac{q}{d})\mathbf{1}_{\frac{q}{d}|\bar {x}}\] \[=\sum_{d\,|\,q}\mu(q/d)N(d)\mathbf{1}_{d\,|\,\bar{x}}\] \[=\sum_{d\,|\,(q,\bar{x})}\mu(q/d)N(d).\]
**Corollary 3.7**.: _The function \(\tau_{q}(x)\) is multiplicative and we have that_
\[\tau_{q}(x)=\mu\Big{(}\frac{q}{(q,\bar{x})}\Big{)}\frac{\phi(q)}{\phi(\frac{q} {(q,\bar{x})})}.\]
Proof.: Let \(d=(q,\bar{x})\). A direct consequence of the Chinese Remainder Theorem gives us
\[\tau_{q}(x) =\sum_{r\in\mathbb{A}_{q}}e(\langle r,\tfrac{x}{\bar{q}}\rangle)\] \[=\sum_{m\in\mathbb{A}_{q/d}}\sum_{k\in\mathbb{A}_{d}}e(\langle k \frac{q}{d}+md,\frac{x}{\bar{q}}\rangle)\] \[=\sum_{m\in\mathbb{A}_{q/d}}e(\langle m,\tfrac{x}{\bar{q}/d} \rangle)\sum_{k\in\mathbb{A}_{d}}e(\langle k,\tfrac{x}{d}\rangle)\] \[=\tau_{q/d}(x)\,\phi(d)\]
and the result follows immediately from the previous lemma and the multiplicative properties of \(\phi\).
With these identities in mind, we prove Cohen's identity.
**Lemma 3.8**.: _For \(x\in\mathbb{Z}[i]\), the following identity holds:_
\[\sum_{r\in\mathbb{A}_{q}}\tau_{q}(x+r)=\mu(q)\tau_{q}(x).\]
Proof.: We have
\[\sum_{r\in\mathbb{A}_{\bar{q}}}\tau_{q}(x+r) =\sum_{r\in B_{\bar{q}}}1_{(r,\bar{q})=1}\tau_{q}(x+r)\] \[=\sum_{r\in B_{\bar{q}}}\frac{1}{N(q)}\sum_{k\in B_{q}}e\left( \left\langle-r,\frac{k}{q}\right\rangle\right)\,\tau_{\bar{q}}(k)\,\tau_{q}(x+r)\] \[=\frac{1}{N(q)}\sum_{k\in B_{q}}\tau_{\bar{q}}(k)\sum_{r\in B_{ \bar{q}}}e\left(\left\langle-r,\frac{k}{q}\right\rangle\right)\tau_{q}(x+r)\]
Now we compute the inner sum.
\[\sum_{r\in B_{\bar{q}}}e\left(\left\langle-r,\frac{k}{q}\right\rangle \right)\tau_{q}(x+r) =\sum_{r\in B_{\bar{q}}}e\left(\left\langle-r,\frac{k}{q}\right\rangle \right)\sum_{n\in\mathbb{A}_{q}}e\left(\left\langle\frac{n}{q},x+r\right\rangle\right)\] \[=\sum_{n\in\mathbb{A}_{q}}e\left(\left\langle\frac{n}{q},x\right \rangle\right)\sum_{r\in B_{\bar{q}}}e\left(\left\langle\frac{n-k}{q},r\right\rangle\right)\] \[=\sum_{n\in\mathbb{A}_{q}}e\left(\left\langle\frac{n}{q},x\right \rangle\right)N(q)\mathbf{1}_{q\,|\,n-k}\] \[=N(q)e\left(\left\langle\frac{k}{q},x\right\rangle\right) \mathbf{1}_{(k,q)=1}.\]
In addition, note that we have if \((k,q)=1\) then \(\tau_{\bar{q}}(k)=\tau_{\bar{q}}(1)\). Indeed, this means that \(\bar{k}\) is a unitary element modulo \(\bar{q}\) so the set \(\{\bar{k}s\,:\,s\in B_{\bar{q}}\}\) is just a rearrangement of \(B_{\bar{q}}\), and so
\[\tau_{\bar{q}}(k) =\sum_{s\in\mathbb{A}_{\bar{q}}}e\left(\left\langle k,\frac{s}{ \bar{q}}\right\rangle\right)\] \[=\sum_{s\in\mathbb{A}_{\bar{q}}}e\left(\left\langle 1,\frac{\bar{k} s}{\bar{q}}\right\rangle\right)\] \[=\sum_{s\in\mathbb{A}_{\bar{q}}}e\left(\left\langle 1,\frac{s}{ \bar{q}}\right\rangle\right)\] \[=\tau_{\bar{q}}(1).\]
Therefore we have that
\[\sum_{r\in\mathbb{A}_{\bar{q}}}\tau_{q}(x+r) =\sum_{k\in B_{q}}\tau_{\bar{q}}(k)e\left(\left\langle\frac{k}{q},x\right\rangle\right)\mathbf{1}_{(k,q)=1}\] \[=\tau_{\bar{q}}(1)\sum_{k\in B_{q}}e\left(\left\langle\frac{k}{q },x\right\rangle\right)\mathbf{1}_{(k,q)=1}\] \[=\tau_{\bar{q}}(1)\tau_{q}(x).\]
Finally, \(\tau_{\bar{q}}(1)=\mu(q)\)
**Corollary 3.9**.: _For \(x\in\mathbb{Z}[i]\) we have_
\[\bigg{|}\sum_{r\in\mathbb{A}_{q}}\tau_{q}(x+r)\bigg{|}\ll N(\gcd(x,q))\,N(q)^{ \epsilon}. \tag{3.10}\]
Proof.: By Lemma 3.8, it suffices to find a bound for \(\tau_{q}(x)\). The inequality follows immediately from Lemma 3.6 and Corollary 3.7
\[|\tau_{q}(x)|\ll N\big{(}\gcd(\bar{x},q)\big{)}N(q)^{\epsilon}.\]
Now we are ready to prove the Gaussian version of an inequality, originally due to Bourgain, involving the Ramanujan sums. It assures us that while the summands above can, in general, be as big as \(q\), this happens infrequently as \(x\) varies.
**Proposition 3.11**.: _For every \(B,k>2\), integers \(N>N_{k,B}\) and \(Q\leqslant(\log N)^{B}\) we have_
\[\frac{1}{N}\sum_{N(x)<N}\bigg{[}\sum_{q\,:\,N(q)<Q}|\tau_{q}(x+r)|\bigg{]}^{k}\ll Q^{k+\epsilon}. \tag{3.12}\]
Proof.: The proof of this proposition is inspired from a proof due to Bourgain [2]. In view of (3.10), it suffices to show that
\[\frac{1}{N}\sum_{N(x)<N}\biggl{(}\sum_{N(q)<Q}N(\gcd(x,q))\biggr{)}^{k}\ll Q^ {k+\epsilon}.\]
Expanding out the \(k\)th power, we have
\[\sum_{N(q_{1}),N(q_{2}),\cdots,N(q_{k})<Q}\frac{1}{N}\sum_{N(x)<N}\prod_{i=1} ^{k}N(\gcd(x,q_{i}))\]
The function \(x\to\prod_{i=1}^{k}N(\gcd(x,q_{i}))\) has period of \(\mathfrak{L}:=N(\operatorname{lcm}(q_{1},\cdots,q_{k}))\). We are free to restrict attention to the case in which \(N\) is much larger than \(\mathfrak{L}\leqslant Q^{k}\). Thus, the bound is reduced to establishing that
\[\sum_{N(q_{1}),N(q_{2}),\cdots,N(q_{k})<Q}\frac{1}{\mathfrak{L}}\sum_{N(x)< \mathfrak{L}}\prod_{i=1}^{k}N(\gcd(x,q_{i}))\ll Q^{k+\epsilon}.\]
We will establish a bound for the inner sum, namely
\[\sum_{N(x)<\mathfrak{L}}\prod_{i=1}^{k}N(\gcd(x,q_{i}))\ll Q^{k}.\]
The sum above is multiplicative in the variables \(q_{1},\cdots,q_{k}\). So, specializing to the case of \(q_{i}=\rho^{r_{i}}\), we can estimate as follows. Assume that \(r=\max r_{i}\). Then \(\mathfrak{L}=N(\rho^{r}).\) Let \(\rho^{a}\|x\).
There are \(\mathfrak{L}N(\rho^{-a})=N(\rho^{r-a})\) such \(x\) with \(N(x)<\mathfrak{L}\). Also \(N(\gcd(x,\rho^{r_{i}}))=N(\rho^{\min(r_{i},a)})\). So
\[\sum_{N(x)<\mathfrak{L}}\prod_{i=1}^{k}N(\gcd(x,\rho^{r_{i}})) \ll N(\rho^{r-a})\prod_{i=1}^{k}N(\rho^{\min(r_{i},a)})\] \[\ll\prod_{i=1}^{k}N(\rho^{r_{i}}). \tag{3.13}\]
Above, we can assume that \(r=\max r_{i}=r_{1}\), and then estimate
\[N(\rho^{\min(r_{i},a)})\ll\begin{cases}N(\rho^{a})&i=1\\ N(\rho^{r_{i}})&2\leqslant i\leqslant k\end{cases}.\]
Using (3.13), the multiplicative property implies that
\[\sum_{N(x)<\mathfrak{L}}\prod_{i=1}^{k}N(\gcd(x,q_{i}))\ll\prod_{i=1}^{k}N(q_{ i})\leqslant Q^{k}.\]
Thus, to conclude the estimate, we note that
\[\sum_{N(q_{1}),N(q_{2}),\cdots,N(q_{k})<Q}\frac{1}{\mathfrak{L}}\ll Q^{\epsilon}\]
This completes the proof.
## 4. The Vinogradov Inequality
We prove an analogue of the Vinogradov inequality for \(\Lambda\) on \(\mathbb{Z}[i]\). That is, we control the Fourier transform of averages of the von Mangoldt function over a sector (1.3), at a point which is close to a rational with relatively large denominator.
**Theorem 4.1**.: _Let \(\alpha\in\mathbb{C}\) and \(a,q\in\mathbb{Z}[i]\) with \(0<N(a)<N(q)<\sqrt{N}\), \(\gcd(a,q)=1\), and \(N(\alpha-\frac{a}{q})<\frac{1}{N(q)^{2}}\). Fix \(\omega\subset\mathbb{T}\). For \(N>N_{\omega}\), we have_
\[\sum_{\begin{subarray}{c}n\colon N(n)<N\\ \arg(n)\in\omega\end{subarray}}\Lambda(n)e(\langle n,\alpha\rangle) \ll\frac{N\log^{2}(N)}{N(q)^{1/2}}+N^{99/100}.\]
The remainder of the section is devoted to the proof. We collect some background material. The ring \(\mathbb{Z}[i]\) is a Euclidean domain, which means that for each \(a,b\in\mathbb{Z}[i]\), there exists a unique pair \(q,r\) such that
\[a=bq+r\text{ where }r\in B_{b}\]
Note that if \(N(r)=\frac{N(b)}{2}\), then \(r\) lies on the edge of the square spanned by \(b\) and \(ib\); in that case we pick the point with non-negative phase on the line that contains the origin, i.e. \(0\leqslant\arg(r)<\pi\).
Similar to the Farey dissection, we can find an approximation of \(\xi\) by rational points in \(\mathbb{Q}[i]\). In fact, we can say that for every \(N(\xi)<1\), there exists \(\frac{a}{q}\in\mathbb{Q}[i]\) such that
\[N(\xi-\tfrac{a}{q})<\frac{c(q)}{N(q)^{2}}\]
where \(c(q)\approx 1\) is a constant that depends on \(q\) but is uniformly bounded above and below. Thus, the assumption of the Theorem says that \(\alpha\) is well approximated by \(a/q\), and we apply the result above with \(N(q)\) large.
Next, we recall the Prime Number Theorem for Gaussian integers. For any choices of \(\omega\subset\mathbb{T}\), integers \(A\), and \(N(q)<\log^{\frac{A}{2}-1}N\),
\[\psi_{\Lambda}(x,q,r,\omega):=\sum_{\begin{subarray}{c}n\equiv r\bmod q\\ N(n)<x\\ \arg(n)\in\omega\end{subarray}}\Lambda(n)=\frac{|\omega|}{2\pi}\frac{x}{\phi(q)} +O\left(\frac{x\,N(q)}{\log^{A}x}\right) \tag{4.2}\]
the implied constants are absolute. The last two equations come from [10, Theorem 5.36 & following hint], which provides us with a uniform property of the Prime Number Theorem.
The principal technical tool is this Lemma.
**Lemma 4.3**.: _Let \(N\) be a large integer and \(m\in\mathbb{Z}[i]\) with \(N(m)>N^{1/100}\) or \(m=0\). For large \(T\), and \(N(\alpha-\frac{a}{q})\leq\frac{1}{N(q)^{2}}\) where \(0<N(a)<N(q)\) and \((a,q)=1\)_
\[B(T,N,\alpha,m) :=\sum_{t\colon 0<N(t)<T}\min\left(\frac{N}{N(t)},\left(\frac{N}{N(t) }\right)^{\frac{1}{4}}\frac{1}{N(\|(\bar{t}+m)\alpha\|)^{\frac{3}{4}}}\right)\] \[\ll\frac{N\log(T)}{N(q)}+N^{\frac{1}{4}}N(q)^{\frac{3}{4}}\log N( q)+N(q)^{\frac{1}{4}}N^{\frac{1}{4}}T^{\frac{3}{4}}+N^{99/100}.\]
Proof.: The proof proceeds by case analysis based on the values of \(h\) and \(r\) introduced here. Since \(\mathbb{Z}[i]\) is a Euclidean domain, we know that \(t+\bar{m}=\bar{h}\bar{q}+\bar{r}\), with \(r\in B_{q}\). Let \(\beta:=\alpha-\frac{a}{q}\). Then \(N(\beta)<N(q)^{-2}\) and
\[\|(\bar{t}+m)\alpha\|=\big{\|}hq\beta+r\beta+\tfrac{rq}{q}\big{\|}.\]
So we can rewrite our sum in terms of \(h\) and \(r\) as follows.
\[B(T,N,\alpha,m)=\sum_{N(h)<\frac{T}{N(q)}}\sum_{r\in B_{q}}\min\left(\frac{N} {N(t)},\left(\frac{N}{N(t)}\right)^{\frac{1}{4}}\frac{1}{N(\|(\bar{t}+m)\alpha \|)^{\frac{3}{4}}}\right),\]
where on the right we understand that \(t=t_{h,r}\) is given by
\[t+\bar{m}=\bar{h}\bar{q}+\bar{r},\]
**Case 1: \(\bar{t}+m=0\).** Hence \(\bar{t}=-m\), where \(m\) is fixed. There is only one term in the sum we are estimating. Since \(N(t)>0\), we see that \(m\neq 0\), and so \(N(t)>N^{1/100}\). Then
we use the trivial bound, which gives the contribution of at most \(N^{99/100}\) and we see the inequality holds in the case that \(\bar{t}+m=0\).
**Case 2: \(h=0\), and \(0<N(r)<N(q)/10\).** By assumption \(N(r\beta)\leq(2N(q))^{-1}\), so \(4N(\|(\bar{t}+m)\alpha\|)\geq N(\|\frac{ra}{q}\|)-\frac{1}{2N(q)}\). Therefore
\[\sum_{0<N(r)\leq N(q)/10}\left(\frac{N}{N(t)}\right)^{\frac{1}{4}}\frac{1}{N( \|(\bar{t}+m)\alpha\|)^{\frac{3}{4}}}\ll\sum_{N(r)<N(q)}\frac{N^{\frac{1}{4}}} {N(r)^{\frac{1}{4}}\left(N(\|\frac{ra}{q}\|)-\frac{1}{4N(q)}\right)^{\frac{3}{ 4}}} \tag{4.4}\]
Denote \(d_{a}(r)\equiv ar\pmod{q}\), so that \(N(\|\frac{ra}{q}\|)=\frac{N(d_{a}(r))}{N(q)}\). We see that we can ignore \(\frac{1}{4N(q)}\) in the denominator of the right hand side. So
\[(4.4)\ll N^{\frac{1}{4}}N(q)^{\frac{3}{4}}\Big{(}\sum_{0<N(r)<N(q)}\frac{1}{N(r)}\Big{)}^{\frac{1}{4}}\Big{(}\sum_{0<N(d)<N(q)}\frac{1}{N(d)}\Big{)}^{\frac{3}{4}}\ll N^{\frac{1}{4}}N(q)^{\frac{3}{4}}\log N(q),\]
by Hölder's inequality with exponents \(4\) and \(4/3\), since \(r\mapsto d_{a}(r)\) permutes the non-zero residues, and using (2.2) in the last step. This contribution is acceptable, being the second term in the claimed bound.

**Case 3: the remaining choices of \(h\) and \(r\).** We make two claims: first, that \(N(t)\gg N(q)\,(1+N(h))\); and second, that for each fixed \(h\), the quantities \(\|hq\beta+\frac{ra}{q}+r\beta\|\), as \(r\) ranges over \(B_{q}\), are separated by \(\gg 1/N(q)\), so that they are essentially uniformly distributed at scale \(1/N(q)\); see Figure 3.
The remainder of the argument concerns the second claim above. But that follows, since we always have for \(r_{1}\neq r_{2}\in B_{q}\)
\[N\left(\left\|\frac{r_{1}a}{q}+hq\beta+r_{1}\beta\right\|-\left\| \frac{r_{2}a}{q}+hq\beta+r_{2}\beta\right\|\right) \gg N\left(\frac{(r_{1}-r_{2})a}{q}+(r_{1}-r_{2})\beta\right)\] \[=N\left(\frac{r_{1}-r_{2}}{q}\alpha\right)\gg\frac{1}{N(q)}.\]
We can now turn to the sum \(B(T,N,\alpha)\) in this case. Using the trivial bound for the cases that \(r=0\) and picking the nontrivial bound for the other cases, we obtain
\[B(T,N,\alpha)\ll\sum_{N(h)<\frac{T}{N(q)}}\sum_{N(r)<N(q)/2}\min \left(\frac{N}{N(q)\left(N(h)+1\right)}\right.,\\ \frac{N^{1/4}}{N(q)^{1/4}\left(N(h)+1\right)^{1/4}N(\|hq\beta+ \frac{ra}{q}+r\beta\|)^{3/4}}\right)\\ \ll\sum_{N(h)<\frac{T}{N(q)}}\left(\frac{N}{N(q)\left(N(h)+1 \right)}\right.\]
Figure 3. The distances \(\|\frac{ra}{q}+hq\beta+r\beta\|\) for a fixed \(h\) are essentially uniformly distributed in case 3.
\[+\sum_{0<N(r)<N(q)/2}\frac{N^{\frac{1}{4}}}{N(q)^{\frac{1}{4}}(1+N(h))^{\frac{1}{4 }}N(\|hq\beta+\frac{ra}{q}+r\beta\|)^{\frac{3}{4}}}\] \[\ll\sum_{N(h)<\frac{T}{N(q)}}\left(\frac{N}{N(q)\left(N(h)+1 \right)}+\sum_{0<N(d)<N(q)/2}\frac{N^{\frac{1}{4}}}{N(q)^{\frac{1}{4}}(1+N(h) )^{\frac{1}{4}}N(\frac{d}{q})^{\frac{3}{4}}}\right)\] \[\ll\sum_{N(h)<\frac{T}{N(q)}}\left(\frac{N}{N(q)\left(N(h)+1 \right)}+\frac{N^{\frac{1}{4}}N(q)^{\frac{1}{2}}}{(1+N(h))^{\frac{1}{4}}} \sum_{k=1}^{N(q)}\frac{r_{2}(k)}{k^{\frac{3}{4}}}\right)\] \[\ll\sum_{N(h)<\frac{T}{N(q)}}\left(\frac{N}{N(q)\left(N(h)+1 \right)}+\frac{N^{\frac{1}{4}}N(q)^{\frac{3}{4}}}{(1+N(h))^{\frac{1}{4}}}\right)\] \[\ll\sum_{0<\ell<\frac{T}{N(q)}}r_{2}(\ell)\left(\frac{N}{N(q) \ell}+\frac{N(q)^{\frac{3}{4}}N^{\frac{1}{4}}}{\ell^{\frac{1}{4}}}\right)\] \[\ll\frac{N\log(T)}{N(q)}+N(q)^{\frac{1}{4}}N^{\frac{1}{4}}T^{ \frac{3}{4}}\]
where we have used the estimates in (2.2), and our proof is complete.
We are now ready to prove the main theorem of this section.
Proof of Theorem 4.1.: Vaughan's identity, well-known in the one dimensional case, holds in two dimensions as well. That is, let \(|f|\leqslant 1\) be an arithmetic function and fix \(UV<N\). Then
\[\sum_{N(n)<N}f(n)\Lambda(n)\ll U+\log(N)A_{1}(N,U,V)+N^{\frac{1}{2}}\log^{3}( N)A_{2}(N,U,V) \tag{4.6}\]
where \(A_{1}\) and \(A_{2}\) are given by
\[A_{1}=\sum_{N(t)\leqslant UV}\max_{1\leqslant w\leqslant N}\biggl{|}\sum_{w \leqslant N(r)\leqslant\frac{N}{N(t)}}f(rt)\biggr{|}\]
\[A_{2}=\max_{U\leqslant M\leqslant N/V}\max_{V\leqslant N(j)\leqslant N/M}\left( \sum_{V<N(k)\leqslant N/M}\biggl{|}\sum_{M<N(m)<\min(2M,\frac{N}{N(j)},\frac{ N}{N(k)})}f(mj)\overline{f(mk)}\biggr{|}\right)^{\frac{1}{2}}.\]
The term we need to estimate is (4.6), with \(f(x)=e(\langle x,\alpha\rangle)\). The two auxiliary integers are \(U=N^{\frac{1}{2}}\) and \(V=N^{\frac{1}{4}}\). The first term requires the exponential sum estimate (2.5), and Lemma 4.3 in which we set \(T=UV\). It follows that
\[A_{1}=\sum_{N(t)<UV}\max_{w<\frac{N}{N(t)}}\biggl{|}\sum_{\begin{subarray}{c}w <N(r)<\frac{N}{N(t)}\\ \arg(r)\omega\end{subarray}}e(\langle rt,\alpha\rangle)\biggr{|}\]
\[\ll\sum_{N(t)<UV}\min\biggl{(}\frac{N}{N(t)},\biggl{(}\frac{N}{N(t)} \biggr{)}^{\frac{1}{4}}\,\frac{1}{N(\|\bar{t}\alpha\|)^{\frac{3}{4}}}\biggr{)}\] \[\ll\frac{N\log(N)}{N(q)}+N^{\frac{1}{4}}N(q)^{\frac{3}{4}}\log N( q)+N^{\frac{1}{4}}[UV]^{\frac{3}{4}}N(q)^{\frac{1}{4}}+N^{99/100}\] \[\ll\frac{N\log(N)}{N(q)}\Bigl{(}1+(N(q)/N)^{\frac{3}{4}}\log N(q) +\frac{N(q)^{\frac{1}{4}}}{N^{\frac{3}{16}}}\Bigr{)}+N^{99/100}\] \[\ll\frac{N\log^{2}(N)}{N(q)}+N^{99/100}\]
where in the last inequality we have used Lemma 4.3 with the hypothesis that \(m=0\). This bound meets the claimed bound in Theorem 4.1.
The second term from Vaughan's identity (4.6) is quadratic in nature.
\[A_{2}^{2} =\max_{U<M<\frac{N}{V}}\max_{V\leqslant N(j)<N/M}\sum_{V<N(k) \leqslant N/M}\Biggl{|}\sum_{\begin{subarray}{c}M<N(m)<2M\\ N(m)<N/N(k),N/N(j)\\ \arg(m)\in\omega\end{subarray}}e\left(mj-mk,\alpha\right)\Biggr{|}\] \[\ll\max_{U<M<\frac{N}{V}}\max_{V\leqslant N(j)<N/M}\sum_{V<N(k) \leqslant N/M}\min\biggl{(}M,\biggl{(}\frac{N}{N(k)}\biggr{)}^{\frac{1}{4}} \,N(\|(j-k)\alpha\|)^{-\frac{3}{4}}\biggr{)}\]
where we have used Lemma 2.4. Now we use Lemma 4.3 for \(m=j\). The conclusion of the Lemma applies since in the sum above, we have \(M\leqslant\frac{N}{N(k)}\). Therefore
\[A_{2} \ll\max_{U<M<\frac{N}{V}}\max_{V\leqslant N(j)<N/M}\left(M+ \frac{N\log(N)}{N(q)}+\frac{N\,N(q)^{\frac{1}{4}}}{M^{3/4}}+\left(\frac{N}{M} \right)^{\frac{1}{4}}N(q)+N^{\frac{99}{100}}\right)^{\frac{1}{2}}\] \[\ll\frac{N^{1/2}\log N}{N(q)^{1/2}}+\sqrt{\frac{N}{V}}+\frac{N^{ \frac{1}{2}}N(q)^{\frac{1}{8}}}{U^{3/8}}+\left(\frac{N}{U}\right)^{\frac{1}{8 }}N(q)^{\frac{1}{2}}+N^{\frac{99}{200}}.\]
The last inequality follows from our choice of \(U=N^{\frac{1}{2}}\) and \(V=N^{\frac{1}{4}}\), and completes the proof.
## 5. Approximating the Kernel
Define the approximating multiplier \(L_{N}^{a,q}\) as follows:
\[\widehat{L_{N}^{a,q}}(\xi):=\Phi(a,\bar{q})\widehat{M_{N}^{\omega}}(\xi-\frac {a}{q}). \tag{5.1}\]
Above, we suppress the \(\omega\) dependence in the already heavy notation, and we use the notation
\[\Phi(a,q)\coloneqq\frac{\tau_{q}(a)}{\phi(q)}, \tag{5.2}\]
In addition, recall that \(M_{N}=M_{N}^{\omega}\) is an average over a sector defined by a choice of interval \(\omega\subset\mathbb{T}\), see (2.3). The weighted variant is
\[A_{N}^{\omega}=A_{N}=\frac{2\pi}{|\omega|N}\sum_{\begin{subarray}{c}N(n)<N\\ \arg(n)\in\omega\end{subarray}}\Lambda(n)\delta_{n}.\]
**Lemma 5.3**.: _For \(\omega\subset\mathbb{T}\) and \(\alpha\in\mathbb{C}\) with \(N(\alpha)<1\), assume that there are \(0\leqslant N(a)<N(q)<Q\) such that \(N(\alpha-\frac{a}{q})<\frac{Q}{N\cdot N(q)}\) and \(a\in\mathbb{A}_{q}\). Then we have, for \(A>1\), and \(N>N_{\omega,A}\),_
\[\big{|}\widehat{A_{N}^{\omega}}(\alpha)-\widehat{L_{N}^{a,q}}(\alpha)\big{|}\leqslant\frac{Q^{4}}{\log^{A}(N)}.\]
The usual one dimensional approach to these estimates uses the Prime Number Theorem, and Abel summation. Implementing that argument in the two dimensional case engages a number of complications. After all, the two dimensional Prime Number Theorem is adapted to annular sectors, whereas Abel summation is most powerful on rectangles in the plane. We avoid these technicalities below. (Mitsui [16] sums over rectangular regions.)
Proof.: The quantifications of the Prime Number Theorem are decisive. Write
\[\widehat{A_{N}^{\omega}}(\alpha) =\frac{2\pi}{|\omega|N}\sum_{\begin{subarray}{c}N(n)<N\\ \arg(n)\in\omega\end{subarray}}\Lambda_{\mathbb{Z}[i]}(n)e(\langle n,\alpha\rangle)\] \[=\frac{2\pi}{|\omega|N}\sum_{\begin{subarray}{c}r\in Bq_{q}\\ (r,\bar{q})=1\end{subarray}}\sum_{\begin{subarray}{c}n\equiv r\mod q\\ N(n)<N\\ \arg(n)\in\omega\end{subarray}}\Lambda(n)e(\langle n,\beta+a/q\rangle)\]
Figure 4. The decomposition of the complex field.
\[=\frac{2\pi}{|\omega|N}\sum_{\begin{subarray}{c}r\in B_{q}\\ (r,\bar{q})=1\end{subarray}}e(\langle r,a/q\rangle)\sum_{\begin{subarray}{c}n \equiv r\mod\bar{q}\\ N(n)<N\\ \arg(n)\in\omega\end{subarray}}\Lambda(n)e(\langle n,\beta\rangle)\] \[=\frac{1}{\phi(q)}\sum_{\begin{subarray}{c}r\in B_{q}\\ (r,\bar{q})=1\end{subarray}}e(\langle r,a/q\rangle)B_{N}(r,\beta),\]
where we define \(B_{N}(r,\beta)\), and a closely related quantity by
\[B_{N}(r,\beta) \coloneqq\frac{2\pi\phi(q)}{|\omega|N}\sum_{\begin{subarray}{c}n \equiv r\mod\bar{q}\\ N(n)<N\\ \arg(n)\in\omega\end{subarray}}\Lambda(n)e(\langle n,\beta\rangle),\] \[B^{\prime}_{N}(r,\beta) \coloneqq\frac{2\pi\phi(q)}{|\omega|N}\sum_{\begin{subarray}{c}n \equiv r\mod\bar{q}\\ N(n)<N\\ \arg(n)\in\omega\end{subarray}}e(\langle n,\beta\rangle).\]
Compare \(B_{N,r}\) to \(B^{\prime}_{N,r}\), as follows. Using the trivial estimate for \(N(n)\leq\sqrt{N}\)
\[B_{N,r}(\beta)-B^{\prime}_{N}(r,\beta)\ll N^{-1/2}+\frac{2\pi\phi(q)}{|\omega|N }\sum_{\begin{subarray}{c}n\colon\sqrt{N}<N(n)\leq N\\ n\equiv r\mod\bar{q}\\ \arg(n)\in\omega\end{subarray}}\big{(}\Lambda(n)-\tfrac{q}{\pi\phi(q)}\big{)}e( \langle n,\beta\rangle).\]
We continue with the last sum above. It is divided into annular rectangles, as follows. Let \(\mathcal{P}\) be a partition of the arc \([0,\omega]\subset\mathbb{T}\) into intervals of length approximately \((\log N)^{-10A}\). Set \(\rho=1+(\log N)^{-10A}\). For integers \(j\) with
\[N^{1/2}\leqslant N_{j}=\rho^{j}\sqrt{N}<N,\]
and an interval \(P\in\mathcal{P}\), set
\[R(j,P)=\{n\colon N_{j}\leqslant N(n)<N_{j+1},\ \arg(n)\in P,\ n\equiv r\mod\bar{q}\}.\]
See Figure 5. The set \(R(j,P)\) is the symmetric difference of four sets to which the prime counting function estimate (4.2) applies. From it, we see that
\[D(j,P) =\sum_{n\in R(j,P)}\bigl{(}\Lambda(n)-\tfrac{q}{\phi(q)}\bigr{)} e(\langle n,\beta\rangle)\] \[\leqslant\sup_{n,m\in R(j,P)}|1-e(\langle n-m,\beta\rangle)|\sum _{n\in R(j,P)}\Lambda(n)+\tfrac{q}{\phi(q)}\] \[\qquad+\Bigl{|}\sum_{n\in R(j,P)}\Lambda(n)-\tfrac{q}{\phi(q)} \Bigr{|}\] \[\ll\Bigl{[}\frac{Q}{N}\cdot\frac{N}{\log^{10}N}\Bigr{]}^{1/2}|R( j,P)|+\frac{N_{j+1}-N_{j}}{(\log N)^{10A}}\] \[\ll\sqrt{Q}\frac{N_{j+1}-N_{j}}{(\log N)^{10A}}.\]
The bound for the first term comes from the condition that \(N(\beta)\leqslant\frac{Q}{N}\), and for the second from (4.2).
Control of the absolute value of the \(D(j,P)\) is sufficient, since
\[B_{N,r}(\beta)-B^{\prime}_{N,r}(\beta) \ll N^{-1/2}+\frac{\phi(q)}{N}\sum_{P\in\mathcal{P}}\sum_{j\colon \rho^{j}\leqslant\sqrt{N}}|D(j,P)|\] \[\ll N^{-1/2}+\frac{\phi(q)}{N}\sum_{P\in\mathcal{P}}\sum_{j\colon \rho^{j}\leqslant\sqrt{N}}\frac{N_{j+1}-N_{j}}{(\log N)^{10A}}\] \[\ll\frac{Q(\log N)^{A}}{(\log N)^{10A}}\]
as there are only \(\ll(\log N)^{A}\) choices of the interval \(P\). We are free to choose \(A\) as large as we want.
This holds for all \(r\in B_{\bar{q}}\), so that we have
\[\widehat{A_{N}^{\omega}}(\alpha)-\frac{1}{\phi(q)}\sum_{\begin{subarray}{c}r \in B_{\bar{q}}\\ (r,\bar{q})=1\end{subarray}}e(\langle r,a/q\rangle)B^{\prime}_{N}(r,\beta) \ll\frac{Q}{(\log N)^{A}}.\]
Then, observe the elementary inequality that for \(r,s\in B_{\tilde{q}}\),
\[B^{\prime}_{N}(r,\beta)-B^{\prime}_{N}(s,\beta)\ll|r-s|\cdot|\beta|\ll Q\Big{[} \frac{Q}{N}\Big{]}^{1/2}\]
which just depends upon the Lipschitz bound on exponentials, and the upper bound on \(\beta\).
The conclusion of the argument is then clear. Up to an error term of magnitude \(Q^{3/2}(\log N)^{-A}\) we can write
\[\widehat{A_{N}^{\omega}}(\alpha) =\frac{1}{\phi(q)}\sum_{\begin{subarray}{c}r\in B_{\tilde{q}}\\ (r,\tilde{q})=1\end{subarray}}e(\langle r,a/q\rangle)B^{\prime}_{N}(0,\beta)\] \[=\Phi(a,\tilde{q})B^{\prime}_{N}(0,\beta)\] \[=\Phi(a,\tilde{q})\frac{1}{|B_{\tilde{q}}|}\sum_{r\in B_{\tilde{ q}}}B^{\prime}_{N}(r,\beta)\] \[=\Phi(a,\tilde{q})\widehat{M_{N}^{\omega}}(\beta).\]
That is the conclusion of Lemma 5.3.
Consider the following dyadic decomposition of rationals
\[\mathcal{R}_{s}=\left\{\frac{a}{q}\,:\,2^{s}\leq N(q)<2^{s+1},\,a\in\mathbb{A }_{q}\right\}.\]
Let \(\eta\) be a continuous function on \(\mathbb{C}\), a tensor product of piecewise linear functions, with
\[\eta(\xi)=\begin{cases}1&\text{if }\xi=(0,0)\\ 0&\text{if }N(\xi)\geq\|\xi\|_{\infty}\geq 1\end{cases}\]
and let \(\Delta_{s}(\xi):=\eta(16^{s}\xi)\). Here, we remark that this definition differs from many related papers in the literature. With this definition, the function \(\tilde{\eta}\) is a tensor product of Fejér kernels. In particular, these are non-negative averages. Imposing this choice here will simplify considerations in the analysis of the Goldbach conjectures.
Recalling definitions of \(L_{N}^{a,q}\) in (5.1), further define
\[\widehat{B_{N}^{\omega}}(\xi):=\sum_{s\geq 0}\sum_{a/q\in\mathcal{R}_{s}} \widehat{L_{N}^{a,q}}\left(\xi\right)\Delta_{s}\left(\xi-\frac{a}{q}\right).\]
We remind the reader that the \(\omega\) dependence is suppressed in the notation on the right.
**Theorem 5.4**.: _Fix an integer \(A>10\) and \(\omega\subset\mathbb{T}\). Then, there is an \(N_{\omega}\) so that for all \(N>N_{\omega}\),_
\[\|\widehat{A_{N}^{\omega}}-\widehat{B_{N}^{\omega}}\|_{\infty}\ll(\log N)^{-A}. \tag{5.5}\]
_The implied constant is independent of \(\omega\)._
Proof.: A useful and familiar fact we will reference below is that for each \(s\), the functions below are disjointly supported.
\[\widehat{L_{N}^{\widehat{a},\widehat{q}}}\Big{(}\xi\Big{)}\Delta_{s}\Big{(}\xi- \frac{a}{q}\Big{)},\qquad a/q\in\mathcal{R}_{s} \tag{5.6}\]
Fix \(\xi\in\mathbb{T}^{2}\). By the higher-dimensional Dirichlet's Theorem there are relatively prime numbers \(a\) and \(q\) that satisfy \(1\leq N(a)\leq N(q)\leq N^{1/4}\) such that
\[N\big{(}\xi-\tfrac{a}{q}\big{)}\leq\frac{1}{N(q)\cdot N^{1/4}}.\]
To prove the theorem we need to consider two cases based on the value of \(N(q)\).
**Case 1: Suppose \(1\leq N(q)\leq(\log N)^{4A}\).** For \(\frac{a^{\prime}}{q^{\prime}}\neq\frac{a}{q}\) and \(N(q^{\prime})\leq(\log N)^{2A}\), we have
\[N\left(\xi-\frac{a^{\prime}}{q^{\prime}}\right) \geq N\left(\frac{a^{\prime}}{q^{\prime}}-\frac{a}{q}\right)-N \left(\xi-\frac{a}{q}\right)\] \[\geq\frac{1}{N(q^{\prime})N(q)}-\frac{1}{N(q)}\frac{1}{N^{1/4}} \geq(\log N)^{-6A}.\]
Using this, and the decay estimate for \(\widehat{M}_{N}\) in (2.5) to see that
\[\left|\widehat{L_{N}^{a^{\prime},q^{\prime}}}(\xi)\Delta_{s}\left(\xi-\frac{a ^{\prime}}{q^{\prime}}\right)\right|\ll\frac{1}{\sqrt{N(q^{\prime})}}\left(NN \left(\xi-\frac{a^{\prime}}{q^{\prime}}\right)\right)^{-3/4}\ll N^{-1/2}.\]
Appeal to the disjointness property (5.6). We have
\[\left|\sum_{s\colon 2^{s}\leq(\log N)^{2A}}\sum_{\frac{a^{\prime}}{q^{\prime}} \in\mathcal{R}_{s},\frac{a^{\prime}}{q^{\prime}}\neq\frac{a}{q}}\widehat{L_{N} ^{\widehat{a^{\prime}},q^{\prime}}}(\xi)\Delta_{s}\left(\xi-\frac{a^{\prime}} {q^{\prime}}\right)\right|\ll N^{-1/2}\sum_{s\colon 2^{s}\leq(\log N)^{2A}}2^{-s} \ll N^{-1/2}.\]
For \(2^{s}>(\log N)^{2A}\) we use the trivial bound for \(\widehat{M_{N}^{\widehat{\omega}}}\), the estimate for the Gauss sums in (5.2), as well as the support property (5.6). This yields the estimate
\[\sum_{s\colon 2^{s}>(\log N)^{2A}}\sum_{\frac{a^{\prime}}{q^{\prime}}\in \mathcal{R}_{s}\atop\frac{a^{\prime}}{q^{\prime}}\in\frac{a}{q}}\widehat{L_{N} ^{\widehat{a^{\prime}},q^{\prime}}}(\xi)\Delta_{s}\Big{(}\xi-\frac{a^{\prime} }{q^{\prime}}\Big{)}\ll\sum_{s\colon 2^{s}>(\log N)^{2A}}2^{-3s/4}\ll(\log N)^{-A}. \tag{5.7}\]
Above, the sums exclude the case of \(a^{\prime}/q^{\prime}=a/q\). That is the central case, the one Lemma 5.3 was designed for.
We turn to the case of \(a^{\prime}/q^{\prime}=a/q\) here. With an appropriate choice of \(Q\) and \(A\) in Lemma 5.3, we obtain
\[\left|\widehat{A_{N}^{\widehat{\omega}}}(\xi)-\widehat{L_{N}^{ \widehat{a},\widehat{q}}}(\xi)\Delta_{s}\Big{(}\xi-\frac{a}{q}\Big{)}\right| \leq\left|\widehat{A_{N}^{\widehat{\omega}}}(\xi)-\widehat{L_{N} ^{\widehat{a},\widehat{q}}}(\xi)\Big{|}+\left|\widehat{L_{N}^{\widehat{a}, \widehat{q}}}(\xi)\Big{(}1-\Delta_{s}\Big{(}\xi-\frac{a}{q}\Big{)}\Big{)}\right|\] \[\ll(\log N)^{-A}+\frac{N(q)^{2}}{N(\xi-a/q)^{1/2}}\ll(\log N)^{-A}.\]
This holds by choice of \(a/q\). That completes this case.
**Case 2: Suppose \((\log N)^{4A}\leqslant N(q)\)**. Both terms are small. By the Vinogradov inequality in Theorem 4.1 we have
\[|\widehat{A_{N}^{\omega}}(\xi)|\ll(\log N)^{-A}.\]
It remains to show that \(\widehat{B}_{N}(\xi)\) is also small. That function is a sum over integers \(s\geqslant 0\). For \(2^{s}>(\log N)^{2A}\), we only need to use the estimate (5.7). Thus, our focus turns to the case of \(2^{s}\leqslant(\log N)^{2A}\).
For \(2^{s}\leqslant(\log N)^{2A}\) we have for \(\frac{a^{\prime}}{q^{\prime}}\in\mathcal{R}_{s}\)
\[N\Big{(}\xi-\frac{a^{\prime}}{q^{\prime}}\Big{)} \geqslant N\Big{(}\frac{a^{\prime}}{q^{\prime}}-\frac{a}{q}\Big{)}- N\Big{(}\xi-\frac{a}{q}\Big{)}\] \[\geqslant\frac{1}{N(q^{\prime})N(q)}-\frac{1}{N(q)}\frac{1}{N^{1 /4}}\] \[\geqslant\frac{2^{-s-1}}{N(q)}\gg N^{-1/8}.\]
From the decay estimate in (2.5), we have
\[\widehat{L_{N}^{a^{\prime},q^{\prime}}}(\xi)\Delta_{s}\Big{(}\xi-\frac{a^{ \prime}}{q^{\prime}}\Big{)}\ll\left(NN\left(\xi-\frac{a^{\prime}}{q^{\prime}} \right)\right)^{-3/4}\ll N^{-3/32}\]
Using the disjointness property (5.6), it is then easy to see that
\[\sum_{2^{s}\leqslant(\log N)^{2A}}\sum_{\frac{a^{\prime}}{q^{\prime}}\in\mathcal{R }_{s},\frac{a^{\prime}}{q^{\prime}}\neq\frac{a}{q}}\widehat{L_{N}^{a^{\prime},q^{\prime}}}(\xi)\Delta_{s}\left(\xi-\frac{a^{\prime}}{q^{\prime}}\right) \ll(\log N)^{-A}.\]
That completes the second case, and hence the proof of our Theorem.
## 6. Estimates for the High and Low Parts
Our High and Low decomposition of the multiplier incorporates a notion of smooth numbers. For an integer \(Q=2^{q_{0}}\ll(\log N)^{B}\), we say that a Gaussian integer \(q\) is \(Q\)-smooth if \(q\) is square-free and a product of primes \(\rho\) with \(N(\rho)\leqslant Q\). Here, \(B\) will be a fixed integer.
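For readers who wish to experiment with this notion, the following Python sketch (illustrative only; all helper routines are ours) tests \(Q\)-smoothness by trial division over one representative of each associate class of Gaussian primes of norm at most \(Q\). The enumeration of Gaussian primes from rational primes (\(2\) ramifies, \(p\equiv 3\pmod 4\) is inert, \(p\equiv 1\pmod 4\) splits) is standard.

```python
def norm(z):
    a, b = z
    return a * a + b * b

def divides(d, z):
    """Exact divisibility in Z[i]: does d divide z?  Uses z*conj(d)/N(d)."""
    a, b = z
    c, e = d
    n = norm(d)
    return n != 0 and (a * c + b * e) % n == 0 and (b * c - a * e) % n == 0

def divide(d, z):
    a, b = z
    c, e = d
    n = norm(d)
    return ((a * c + b * e) // n, (b * c - a * e) // n)

def gaussian_primes_up_to_norm(Q):
    """One representative per associate class of Gaussian primes of norm <= Q."""
    reps = []
    for p in range(2, Q + 1):
        if all(p % r for r in range(2, int(p ** 0.5) + 1)):   # p is a rational prime
            if p == 2:
                reps.append((1, 1))                            # ramified, norm 2
            elif p % 4 == 3 and p * p <= Q:
                reps.append((p, 0))                            # inert, norm p^2
            elif p % 4 == 1:                                   # split: p = a^2 + b^2
                a = next(a for a in range(1, int(p ** 0.5) + 1)
                         if int((p - a * a) ** 0.5) ** 2 == p - a * a)
                b = int((p - a * a) ** 0.5)
                reps.extend([(a, b), (a, -b)])
    return reps

def is_Q_smooth(z, Q):
    """Square-free product of Gaussian primes of norm <= Q (up to a unit)."""
    for rho in gaussian_primes_up_to_norm(Q):
        if divides(rho, z):
            z = divide(rho, z)
            if divides(rho, z):        # a square factor: not square-free
                return False
    return norm(z) == 1                # only a unit remains

print(is_Q_smooth((1, 3), 6))   # (1+i)(2+i) = 1+3i: True
```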
We write \(A_{N}^{\omega}=\operatorname{Lo}_{Q,N}^{\omega}+\operatorname{Hi}_{Q,N}^{\omega}\), where
\[\widehat{\operatorname{Lo}}_{Q,N}^{\omega}(\xi)=\sum_{q\colon N(q)<Q}\sum_{a \in\mathbb{A}_{q}}\Phi(a,\bar{q})\widehat{M_{N}^{\omega}}(\xi-\frac{a}{q}) \Delta_{q_{0}}(\xi-\frac{a}{q}). \tag{6.1}\]
Here, we recall that \(\omega\subset\mathbb{T}\) is an interval, and \(N>N_{\omega}\) is sufficiently large. The average \(M_{N}^{\omega}\) is defined in (2.3), the Gauss sum \(\Phi(a,\bar{q})\) in (5.2). (Note that the Gauss sum will be zero if \(q\) contains a square.) This definition is inspired by Theorem 5.4, but it differs in two ways: it incorporates the smoothness of the denominators, and the cutoff function \(\Delta_{q_{0}}\) is now a function of \(Q\). Both changes are useful in the next section. There are two key properties of these terms. The first is that the 'High' part has small \(\ell^{2}\) norm.
**Lemma 6.2**.: _For any \(\epsilon>0\), \(\omega\subset\mathbb{T}\), there is an \(N_{\omega}\) so that for all \(N>N_{\omega}\),_
\[\|\operatorname{Hi}_{Q,N}^{\omega}\|_{\ell^{2}\to\ell^{2}}\ll Q^{-1+\varepsilon}. \tag{6.3}\]
Proof.: The \(\ell^{2}\) norm is estimated on the frequency side. By Theorem 5.4, the High term is a sum of three terms. They are, suppressing the dependence on \(\omega\),
\[\widehat{\operatorname{Hi}_{Q,N}^{1}}(\xi) =\sum_{Q<2^{s+1}}\sum_{2^{s}\leq N(q)<2^{s+1}}\sum_{\frac{a}{q}\in\mathcal{R}_{s}}\Phi(a,\bar{q})\widehat{M_{N}^{\omega}}(\xi-\frac{a}{q})\Delta_{s}(\xi-\frac{a}{q})\] \[\widehat{\operatorname{Hi}_{Q,N}^{2}}(\xi) =\sum_{s}\sum_{2^{s}\leq N(q)<2^{s+1}}\sum_{\begin{subarray}{c}\frac{a}{q}\in\mathcal{R}_{s}\\ N(q)<Q\end{subarray}}\Phi(a,\bar{q})\widehat{M_{N}^{\omega}}(\xi-\frac{a}{q})\{\Delta_{q_{0}}(\xi-\frac{a}{q})-\Delta_{s}(\xi-\frac{a}{q})\}\] \[\widehat{\operatorname{Hi}_{Q,N}^{3}}(\xi) \coloneqq\widehat{A_{N}^{\omega}}-\widehat{B_{N}^{\omega}}.\]
We address them in reverse order. The last term is controlled by Theorem 5.4. It clearly satisfies our claim (6.3).
In \(\widehat{\operatorname{Hi}_{Q,N}^{2}}\), the key term is the difference between \(\Delta_{q_{0}}\) and \(\Delta_{s}\). In particular, if \(\Delta_{q_{0}}(\xi)-\Delta_{s}(\xi)\neq 0\), we have \(N(\xi)\gg 2^{-q_{0}}\). It follows that \(\widehat{M_{N}^{\omega}}(\xi)\) is relatively small. This allows us to estimate
\[\|\widehat{\operatorname{Hi}_{Q,N}^{2}}\|_{\infty}\ll\sum_{s}2^{-3s/4}\min\{1,(N2^{-q_{0}})^{-3/4}+N^{-1/2}\}\ll Q^{-1}.\]
Here, we have used the disjointness of support for the different functions, and the exponential sum estimate Lemma 2.4.
It remains to bound the term \(\widehat{\operatorname{Hi}_{Q,N}^{1}}\). But the smallest denominator \(q\) that we sum over satisfies \(N(q)>Q\), so that a similar argument leads to
\[\|\widehat{\operatorname{Hi}_{Q,N}^{1}}\|_{\infty}\ll\sum_{s\colon 2^{s+1}>Q}\max_{\frac{a}{q}\in\mathcal{R}_{s}}|\Phi(a,\bar{q})|\ll Q^{-1+\epsilon}.\]

This uses the Gauss sum estimate (5.2), and completes the proof.

The second key property is an explicit form of the 'Low' part.

**Lemma 6.4**.: _We have the identity_

\[\operatorname{Lo}_{Q,N}^{\omega}(x)=\big{(}M_{N}^{\omega}*\widetilde{\Delta_{q_{0}}}\big{)}(x)\sum_{q\colon N(q)<Q}\frac{\mu(q)}{\phi(q)}\tau_{q}(x). \tag{6.5}\]
_And, moreover, for all \(\epsilon>0\), and non-negative \(f\)_
\[\operatorname{Lo}_{Q,N}^{\omega}f(x)\ll Q^{\epsilon}\big{[}(M_{N}^{\omega}* \widetilde{\Delta_{q_{0}}})*f^{1+\epsilon}(x)\big{]}^{1/(1+\epsilon)}. \tag{6.6}\]
Proof.: We have for fixed \(q\),
\[\sum_{a\in\mathbb{A}_{q}}\Phi(a,\bar{q})\int_{D}\widehat{M_{N}^{ \omega}}(\xi-\frac{a}{q})\Delta_{q_{0}}(\xi-\frac{a}{q})e(\langle x,\xi\rangle )d\xi =\sum_{a\in\mathbb{A}_{q}}\Phi(a,\bar{q})e(\langle x,\frac{a}{q} \rangle)\left(M_{N}*\widetilde{\Delta_{q_{0}}}\right)(x)\] \[=\left(M_{N}^{\omega}*\widetilde{\Delta_{q_{0}}}\right)(x)\frac {1}{\phi(\bar{q})}\sum_{a\in\mathbb{A}_{q}}\tau_{\bar{q}}(a)e(\langle x, \frac{a}{q}\rangle)\] \[=\left(M_{N}^{\omega}*\widetilde{\Delta_{q_{0}}}\right)(x)\frac {1}{\phi(q)}\sum_{r\in\mathbb{A}_{q}}\tau_{q}(x+r).\]
We then apply Lemma 3.8, and sum over \(Q\)-smooth denominators \(q\) to conclude the first claim (6.5).
For the second claim, we use (3.12) in the standard way. Fix an integer \(s\) with \(2^{s-1}\leqslant Q\), and consider the operator \(A_{s}\) with kernel
\[A_{s}(x) =\left(M_{N}^{\omega}*\widetilde{\Delta_{q_{0}}}\right)(x)\sum_{q\colon 2^{s-1}\leqslant N(q)<2^{s}}\frac{|\tau_{q}(x)|}{\phi(q)}.\]
For an integer \(k>2\epsilon^{-1}\), and non-negative \(f\in\ell^{1+\epsilon}\), we have
\[A_{s}f(x) =\sum_{y}\left(M_{N}^{\omega}*\widetilde{\Delta_{q_{0}}}\right)(y)\sum_{q\colon 2^{s-1}\leqslant N(q)<2^{s}}\frac{|\tau_{q}(y)|}{\phi(q)}f(x-y)\] \[\leqslant\left[\sum_{y}\left(M_{N}^{\omega}*\widetilde{\Delta_{q_{0}}}\right)(y)f(x-y)^{k/(k-1)}\right]^{(k-1)/k}\] \[\qquad\times\left[\sum_{y}\left(M_{N}^{\omega}*\widetilde{\Delta_{q_{0}}}\right)(y)\sum_{q\colon 2^{s-1}\leqslant N(q)<2^{s}}\Big{[}\frac{|\tau_{q}(y)|}{\phi(q)}\Big{]}^{k}\right]^{1/k}\] \[\ll 2^{\epsilon s}\left[\sum_{y}\left(M_{N}^{\omega}*\widetilde{\Delta_{q_{0}}}\right)(y)f(x-y)^{k/(k-1)}\right]^{(k-1)/k}.\]
We sum this over \(s\) with \(2^{s-1}\leqslant Q\) to complete the proof of (6.6).
## 7. Improving Inequalities
In this brief section, we establish the improving inequalities, namely Theorem 1.2, and list some additional results that can be established by the same methods. For the convenience of the reader, we restate the improving inequality here, in a slightly more convenient form for our subsequent discussion. There is no loss of generality in reducing to the case of the trivial sector \(\omega=\mathbb{T}\).
**Theorem 7.1**.: _For all \(N\), and \(1<p\leq 2\), we have, for \(\omega=\mathbb{T}\), and functions \(f,g\) supported on \([0,\sqrt{N}]^{2}\),_
\[N^{-1}\langle A_{N}^{\mathbb{T}}f,g\rangle\ll N^{-2/p}\|f\|_{p}\|g\|_{p}.\]
Proof.: As the sector is \(\omega=\mathbb{T}\), we suppress it in the notation. We must prove the inequalities over the open range \(1<p<2\). By interpolation, it suffices to consider the case that \(f=\mathbf{1}_{F}\) and \(g=\mathbf{1}_{G}\), for \(F,G\subset\{n\colon N(n)\leq N\}\).
Dominating the von Mangoldt function by \(\log N\), we always have
\[\langle A_{N}f,g\rangle\ll N(\log N)|F|\cdot|G|.\]
We can then immediately deduce the inequality if
\[N^{-2}|F|\cdot|G|\ll(\log N)^{-2p^{\prime}}\]
So, we assume that this inequality fails, which will allow us to use our High Low decomposition. Namely, for \(0<\epsilon<1/2\) sufficiently small, set
\[Q^{\frac{2(1+\epsilon)}{1-\epsilon}}\simeq\frac{N^{2}}{|F|\cdot|G|}\ll(\log N )^{2p^{\prime}}.\]
Write \(A_{N}=\operatorname{Hi}_{N,Q}+\operatorname{Lo}_{N,Q}\). Appealing to (6.3) for the High term, and (6.6) for the Low term, we have
\[\langle\operatorname{Hi}_{N,Q}f,g\rangle \ll Q^{-1+\epsilon}\|f\|_{2}\|g\|_{2}\ll Q^{-1+\epsilon}[\|f\|_{ 1}\|g\|_{1}]^{1/2},\] \[\langle\operatorname{Lo}_{N,Q}f,g\rangle \ll NQ^{\epsilon}\|f\|_{1+\epsilon}\|g\|_{1+\epsilon}.\]
By choice of \(Q\), the two upper bounds nearly agree and are at most
\[\langle\operatorname{Lo}_{N,Q}f,g\rangle\simeq N^{-1+2\epsilon}\big{[}|F| \cdot|G|\big{]}^{1-2\epsilon}.\]
That is the desired inequality, for \(p^{\prime}=\frac{1+2\epsilon}{\epsilon}\). And that completes the proof.
The techniques developed to establish the improving inequality can be elaborated on to prove additional results. We briefly describe them here.
1. An \(\ell^{p}\to\ell^{p}\), for \(1<p<\infty\) inequality for the maximal function \(\sup_{N}|A_{N}f|\). Compare to [21].
2. A \((p,p)\), \(1<p<2\), sparse bound for the maximal function. Here we use the terminology of [6], for instance. The interest in the sparse bound is that it immediately implies a range of weighted inequalities.
3. One can establish pointwise convergence of ergodic averages. Let \((T_{1},T_{2})\) be commuting invertible measure preserving transformations of a probability space \((X,\mu)\). For all \(1<p<\infty\), and \(f\in L^{p}(X)\), the limit \[\lim_{N}\frac{1}{N}\sum_{N(n)<N}\Lambda(n)f(T^{n}x)\]
exists for a.e. \(x\). Here, \(T^{a+ib}=T_{1}^{a}T_{2}^{b}\). Compare to [15].
We have given references particular to the primes (in \(\mathbb{Z}\)).
## 8. Goldbach Conjecture
The purpose of this section is to prove analogues of the Goldbach Conjecture in the Gaussian setting. We recall some elementary facts about Gaussian primes. We address a binary and a ternary form of the Goldbach conjecture. The binary form is addressed in density form: most even Gaussian integers are the sum of two primes. On the other hand, all sufficiently large odd Gaussian integers are the sum of three primes. We further restrict the arguments of the integers involved to lie in a fixed interval.
### The Binary Goldbach Conjecture
The Goldbach Conjecture states that every even Gaussian integer can be written as the sum of two primes. We prove a density version of this result. It uses the High/Low decomposition. Observe that
\[A_{N}^{\omega}*A_{N}^{\omega}(n)=\frac{2\pi}{|\omega|^{2}N^{2}}\sum_{ \begin{subarray}{c}N(m_{1}),N(m_{2})<N\\ m_{1}+m_{2}=n\\ \arg(m_{1}),\arg(m_{2})\in\omega\end{subarray}}\Lambda(m_{1})\Lambda(m_{2}).\]
If the sum is non-zero, \(n\) can be represented as the sum of two numbers in the support of the von Mangoldt function \(\Lambda\) intersected with \(S_{N}^{\omega}\), where we define
\[S_{N}^{\omega}=\{n\colon N(n)<N,\arg(n)\in\omega\}.\]
The von Mangoldt function is supported on the Gaussian primes, and their powers. The powers contribute at most \(\ll\sqrt{N}\) terms. Thus, it suffices to establish that
\[|\{n\in S_{N}^{\omega}\colon n\text{ even \& }A_{N}^{\omega}*A_{N}^{\omega}(n)=0\}| \ll\frac{N}{(\log N)^{B}}.\]
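As a purely illustrative aside (not used in the proof), the statement above can be checked by brute force for very small \(N\): the Python sketch below counts representations of even Gaussian integers of norm less than \(N\) as sums of two Gaussian primes whose arguments lie in a sector. All names and routines here are ours.

```python
import math
from itertools import product

def is_rational_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_gaussian_prime(a, b):
    """Standard characterization of the primes in Z[i]."""
    if a == 0 or b == 0:
        m = abs(a + b)
        return m % 4 == 3 and is_rational_prime(m)
    return is_rational_prime(a * a + b * b)

def goldbach_counts(N, omega=(0.0, math.pi / 2)):
    """For each even n with 0 < N(n) < N, count n = p1 + p2 with p1, p2
    Gaussian primes whose arguments lie in the sector omega."""
    lo, hi = omega
    R = math.isqrt(N) + 1
    primes = [(a, b) for a, b in product(range(-R, R + 1), repeat=2)
              if 0 < a * a + b * b < N and is_gaussian_prime(a, b)
              and lo <= math.atan2(b, a) <= hi]
    counts = {}
    for (a1, b1), (a2, b2) in product(primes, repeat=2):
        n = (a1 + a2, b1 + b2)
        # Evenness of n is equivalent to (1+i) | n, i.e. the coordinates have equal parity.
        if 0 < n[0] ** 2 + n[1] ** 2 < N and (n[0] + n[1]) % 2 == 0:
            counts[n] = counts.get(n, 0) + 1
    return counts

print(sorted(goldbach_counts(100).items())[:5])
```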
Recall from (6.1), that we can write
\[A_{N}^{\omega}=\operatorname{Hi}+\operatorname{Lo}. \tag{8.1}\]
This depends upon a choice of \(Q=(\log N)^{B}\) for a sufficiently large exponent \(B\), and we suppress the dependence on \(Q\), \(N\) and \(\omega\) in the notation. Thus, we write
\[A_{N}*A_{N}=\operatorname{Hi}*A_{N}+\operatorname{Lo}*\operatorname{Hi}+ \operatorname{Lo}*\operatorname{Lo}.\]
On the right, it is the last term that is crucial. We further write it as
\[\operatorname{Lo}*\operatorname{Lo}=\operatorname{Main}+\operatorname{Error}.\]
Aside from the main term, everything is small. Our Theorem easily follows from the Lemma below, in which we collect the required estimates.
**Lemma 8.2**.: _We have the estimates below, valid for all choices of \(B>1\)._
\[N^{-1} \ll\min_{\begin{subarray}{c}n\in S_{N}^{\omega}\\ n\text{ even}\end{subarray}}\operatorname{Main}(n), \tag{8.3}\] \[\big{|}\{0<N(n)<N\colon|\operatorname{Error}(n)|>N^{-1}(\log N)^{-(B-1)/2}\}\big{|}\ll N(\log N)^{-(B-1)/2}, \tag{8.4}\] \[\|\mathrm{Lo}*\operatorname{Hi}(n)\|_{\ell^{2}} \ll(N(\log N)^{B-1})^{-1/2}, \tag{8.5}\] \[\|\mathrm{Hi}*A_{N}(n)\|_{\ell^{2}} \ll(N(\log N)^{B-1})^{-1/2}. \tag{8.6}\]
We focus on the first estimate above. The Main term is
\[\operatorname{Main}(x) :=\sum_{q\text{ is }Q\text{-smooth}}\frac{1}{\phi(q)^{2}}\sum_{a\in\mathbb{A}_{q}}\tau_{q}(a)^{2}e\big{(}\big{\langle}\tfrac{a}{q},x\big{\rangle}\big{)}\int_{\mathbb{T}^{2}}\widehat{M_{N}^{\omega}}(\xi)^{2}e\big{(}\big{\langle}\xi,x\big{\rangle}\big{)}\;d\xi \tag{8.7}\]
Above, we say that \(q\) is _\(Q\)-smooth_ if \(q\) is square-free and all prime factors \(\rho\) of \(q\) satisfy \(N(\rho)<Q\). The expression above can be calculated explicitly, using the Ramanujan like sum (3.4).
**Lemma 8.8**.: _Recall that \(2^{q_{0}}=Q<(\log N)^{B}\). For every \(N(x)<N\) we have_
\[\operatorname{Main}(x) =M_{N}^{\omega}*M_{N}^{\omega}(x)\sum_{q\text{ is }Q\text{- smooth}}\frac{|\mu(q)|}{\phi(q)^{2}}\tau_{q}(x), \tag{8.9}\]
Proof.: The term in (8.7) is
\[\operatorname{Main}(x) =M_{N}^{\omega}*M_{N}^{\omega}(x)\sum_{q\text{ is }Q\text{-smooth}}\frac{1}{\phi(q)^{2}}\sum_{a\in\mathbb{A}_{q}}\tau_{q}(a)^{2}e\big{(}\big{\langle}\tfrac{a}{q},x\big{\rangle}\big{)}\]
In the arithmetic term above, we fix \(q\), expand the Ramanujan sums, and we use Lemma 3.8. This gives us
\[\frac{1}{\phi(q)^{2}}\sum_{a\in\mathbb{A}_{q}}\tau_{q}(a)^{2}e\big{(}\big{\langle}\tfrac{a}{q},x\big{\rangle}\big{)} =\frac{1}{\phi(q)^{2}}\sum_{a\in\mathbb{A}_{q}}\Big{[}\sum_{r\in\mathbb{A}_{q}}e\big{(}\big{\langle}\tfrac{a}{q},r\big{\rangle}\big{)}\Big{]}^{2}e\big{(}\big{\langle}\tfrac{a}{q},x\big{\rangle}\big{)}\] \[=\frac{1}{\phi(q)^{2}}\sum_{r_{1}\in\mathbb{A}_{q}}\sum_{r_{2}\in\mathbb{A}_{q}}\tau_{q}(x+r_{1}+r_{2})\] \[=\frac{1}{\phi(q)^{2}}\sum_{r_{1}\in\mathbb{A}_{q}}\tau_{q}(x+r_{1})\tau_{\bar{q}}(1) \tag{8.10}\] \[=\frac{1}{\phi(q)^{2}}\tau_{q}(x)\tau_{\bar{q}}(1)^{2},\]
where in the last line we have used Lemma 3.8 again. Finally \(\tau_{\bar{q}}(1)^{2}=|\mu(q)|\).
On the right in our equality for the Low-Low expression (8.9), the first convolution satisfies
\[\min_{n\in S_{N}^{\omega}}M_{N}^{\omega}*M_{N}^{\omega}(n)\gg N^{-1}.\]
The implied constant is a function of \(\omega\), but at no point have we sought to track this dependence.
We analyze the arithmetic part here, and the Lemma below completes the proof of (8.3). Indeed, it is exactly this Lemma and its proof that motivates the use of the smooth numbers.
**Lemma 8.11**.: _We have for all even \(x\),_
\[\sum_{q\text{ is }Q\text{-smooth}}\frac{|\mu(q)|}{\phi(q)^{2}}\tau_{q}(x)\gg 1\]
Proof.: Exploit the multiplicative structure of the sum. One sees that it is a product over primes. For any integer \(x\),
\[\sum_{q\text{ is }Q\text{-smooth}}\frac{|\mu(q)|}{\phi(q)^{2}}\tau_{q}(x)=\prod_{\rho:\ N(\rho)<Q}\Big{(}1+\frac{\tau_{\rho}(x)}{\phi(\rho)^{2}}\Big{)} \tag{8.12}\]
Above, the product is over primes \(\rho\) with \(N(\rho)<Q\), up to multiplication by units. And recall that if \(\rho\mid x\), we have \(\tau_{\rho}(x)=\phi(\rho)\).
It is important to single out the prime \(1+i\). This is the unique prime with \(\phi(1+i)=1\), and
\[1+\frac{\tau_{1+i}(x)}{\phi(1+i)^{2}}=\begin{cases}2&1+i\mid x\\ 0&1+i\nmid x\end{cases}\]
Thus, if \(1+i\nmid x\), the sum in (8.12) is zero. But, evenness of \(x\) is equivalent to \(1+i\mid x\). Thus, for even \(x\), single out the case of \(\rho=1+i\).
\[\prod_{\rho:\ N(\rho)\leq Q}1+\frac{\tau_{\rho}(x)}{\phi(\rho)^{2}} =2\prod_{\begin{subarray}{c}\rho\mid x,\ N(\rho)\leq Q\\ \rho\neq 1+i\end{subarray}}\left(1+\frac{1}{\phi(\rho)}\right)\left(1- \frac{1}{\phi(\rho)^{2}}\right)^{-1}\times\prod_{\begin{subarray}{c}\rho,\ N(\rho)\leq Q\\ \rho\neq 1+i\end{subarray}}1-\frac{1}{\phi(\rho)^{2}}\] \[=2\prod_{\begin{subarray}{c}\rho\mid x,\ N(\rho)\leq Q\\ \rho\neq 1+i\end{subarray}}\frac{\phi(\rho)}{\phi(\rho)-1}\times\prod_{ \begin{subarray}{c}\rho,\ N(\rho)\leq Q\\ \rho\neq 1+i\end{subarray}}1-\frac{1}{\phi(\rho)^{2}}\] \[:=h(x)\mathcal{G}.\]
In the last line, we have written the product as a 'local term' \(h(x)\) and a 'global term,' \(\mathcal{G}\). If there is no \(Q\)-smooth prime that divides \(x\), we understand that the local term is \(2\). Thus, \(h(x)\) is always at least \(2\) for even \(x\). The global term is finite:
\[\mathcal{G}\geq\prod_{\rho}\Big{(}1-\frac{1}{\phi(\rho)^{2}}\Big{)},\]
and this infinite product converges to a positive number, since
\[\sum_{\rho\text{ prime}}\frac{1}{\phi(\rho)^{2}}\ll\sum_{\rho\text{ prime}}N(\rho)^{-2}\ll\sum_{k=1}^{\infty}k2^{-k}<\infty.\]
That completes our proof.
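Purely as an illustration of the local-global factorization just established (and again not part of the argument), the product in (8.12) can be evaluated numerically. The sketch below assumes the prime values \(\tau_{\rho}(x)=\phi(\rho)\) when \(\rho\mid x\) and \(\tau_{\rho}(x)=-1\) otherwise, as used in the proof, and takes one representative per associate class of Gaussian primes; all routines are ours.

```python
import math

def is_rational_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_gaussian_prime(a, b):
    if a == 0 or b == 0:
        m = abs(a + b)
        return m % 4 == 3 and is_rational_prime(m)
    return is_rational_prime(a * a + b * b)

def divides(c, e, a, b):
    """Does the Gaussian integer c+ei divide a+bi?"""
    n = c * c + e * e
    return (a * c + b * e) % n == 0 and (b * c - a * e) % n == 0

def singular_product(x, Q):
    """prod over N(rho) < Q of (1 + tau_rho(x)/phi(rho)^2), with phi(rho) = N(rho) - 1."""
    a, b = x
    prod = 1.0
    R = math.isqrt(Q) + 1
    for c in range(1, R + 1):          # c >= 1, e >= 0 picks one associate per class
        for e in range(0, R + 1):
            if 0 < c * c + e * e < Q and is_gaussian_prime(c, e):
                phi = c * c + e * e - 1
                tau = phi if divides(c, e, a, b) else -1
                prod *= 1.0 + tau / phi ** 2
    return prod

# Even x keeps the factor 2 at the prime 1+i; odd x kills it.
print(singular_product((4, 2), 100))   # strictly positive
print(singular_product((3, 0), 100))   # 0.0
```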
Proof of (8.4).: We control the difference between the Low-Low term and the Main term. This is the term Error, and we only seek a distributional estimate on it. We begin by writing out this term explicitly. Recall that
\[\widehat{\mathrm{Lo}}(\xi)=\sum_{q:\;N(q)<Q}\sum_{a\in\mathbb{A}_{q}}\Phi(a, \bar{q})\widehat{M_{N}^{\omega}}(\xi-\frac{a}{q})\Delta_{q_{0}}(\xi-\frac{a}{q}).\]
By the disjointness of the supports of \(\Delta_{q_{0}}(\cdot-\frac{a}{q})\), for \(N(q)<Q\), and the definition of the Main term in (8.7), we see that
\[\widehat{\mathrm{Lo}}(\xi)\cdot\widehat{\mathrm{Lo}}(\xi)=\sum_{N(q)\leqslant Q }\sum_{a\in\mathbb{A}_{q}}\tau_{q}(a)^{2}\Delta_{q_{0}}(\xi-a/q)^{2}\widehat{M _{N}^{\omega}}(\xi-a/q)^{2}\]
We have done the work to invert this Fourier transform. In particular from (8.10), we have
\[\mathrm{Lo}*\mathrm{Lo}(x)=M_{N}^{\omega}*\check{\Delta}_{q_{0}}*M_{N}^{\omega}*\check{\Delta}_{q_{0}}(x)\sum_{q:\;N(q)<Q}\frac{|\mu(q)|}{\phi(q)^{2}}\tau_{q}(x).\]
We can then explicitly write \(\mathrm{Error}(x)\) as the sum of these three terms.
\[E_{1}(x) :=\int(1-\Delta_{q_{0}}(\xi)^{2})\widehat{M_{N}^{\omega}}(\xi)^{2}e(\langle\xi,x\rangle)\;d\xi\sum_{q:\;N(q)<Q}\frac{|\mu(q)|}{\phi(q)^{2}}\tau_{q}(x), \tag{8.13}\] \[E_{2}(x) :=M_{N}^{\omega}*M_{N}^{\omega}(x)\sum_{q:\;Q\leqslant N(q)<N^{1/8}}\frac{|\mu(q)|}{\phi(q)^{2}}\tau_{q}(x), \tag{8.14}\] \[E_{3}(x) :=M_{N}^{\omega}*M_{N}^{\omega}(x)\sum_{q:\;N(q)\geqslant N^{1/8}}\frac{|\mu(q)|}{\phi(q)^{2}}\tau_{q}(x). \tag{8.15}\]
We address them in order.
For the control of \(E_{1}\) defined in (8.13), an easily accessible \(\ell^{2}\) estimate applies. We recall that \(Q\ll(\log N)^{B}\). By selection of \(\Delta_{q_{0}}\) as a scaled version of a Fejer kernel, we have \(1-\Delta_{q_{0}}(\xi)^{2}=O(Q^{-3})\) if \(N(\xi)<Q^{-4}\). So in this range we have
\[\left\|\int_{N(\xi)<Q^{-4}}(1-\Delta_{q_{0}}(\xi)^{2})\widehat{M_{N}^{\omega}} (\xi)^{2}e(\langle\xi,x\rangle)\;d\xi\right\|_{\ell^{2}}\ll Q^{-3}\int|\widehat {M_{N}^{\omega}}(\xi)|^{2}d\xi\ll N^{-1}Q^{-3}.\]
But then it follows that
\[\left\|\int(1-\Delta_{q_{0}}(\xi)^{2})\widehat{M_{N}^{\omega}}(\xi)^{2}e( \langle\xi,x\rangle)\;d\xi\right\|_{\ell^{2}}\ll N^{-1}Q^{-3}.\]
On the other hand, it is easy to see that
\[\sum_{q:\;N(q)<Q}\frac{|\mu(q)|}{\phi(q)^{2}}\tau_{q}(x)\ll Q.\]
These two estimates prove the control required in (8.4). In particular we have
\[\|E_{1}\|_{\ell^{2}}\ll(N^{2}(\log N)^{B-1})^{-1/2}.\]
This estimate is stronger than the required distributional estimate.
We turn to the term \(E_{2}\) defined in (8.14). For this and \(E_{3}\), the leading term satisfies \(M_{N}^{\omega}\ast M_{N}^{\omega}(x)\ll N^{-1}\), so our focus is on the arithmetic terms. The definition of \(E_{2}\) requires that the denominators \(q\) satisfy \(Q\leqslant N(q)<N^{1/8}\). The point is that the Ramanujan function \(\tau_{q}\) is rarely more than \(1\), as quantified by (3.12). For integers \(s\) with \(Q/2<2^{s}\leq N^{1/8}\), we have
\[\Big{|}\Big{\{}N(x)<N\colon\sum_{2^{s}<N(q)\leq 2^{s+1}}|\tau_{q}(x)|>2^{3s/2} \Big{\}}\Big{|}\ll N2^{-3s}.\]
This follows from (3.12), with \(k=8\), and trivial bounds on the totient function. It is clear that this can be summed over these values of \(s\) to complete the proof of (8.4) in this case. Indeed, we have
\[\Big{|}\Big{\{}N(x)<N\colon\sum_{2^{s}<N(q)\leq 2^{s+1}}|\tau_{q}(x) |>2^{3s/2}\Big{\}}\Big{|} \ll 2^{-12s}\sum_{x\colon N(x)<N}\Big{[}\sum_{2^{s}<N(q)\leq 2^{s+1 }}|\tau_{q}(x)|\Big{]}^{8}\] \[\ll 2^{-3s}N.\]
Using a trivial lower bound on the totient function will complete this case.
We turn to the term \(E_{3}\) defined in (8.15). In this case, we require \(N^{1/8}<N(q)\). And \(N(q)\) can be as large as \(e^{Q}=e^{c(\log N)^{B}}\). This is too large to directly apply the previous argument. Instead, we will specialize the proof of (3.12) to this setting. For an integer \(s\) with \(N^{1/8}<2^{s+1}\), estimate
\[\sum_{\begin{subarray}{c}q\colon 2^{s}\leq N(q)<2^{s+1}\\ q\text{ }Q\text{-smooth}\end{subarray}}\frac{|\mu(q)|}{\phi(q)^{2}}\tau_{q}(x) \ll s^{2}2^{-2s}\sum_{\begin{subarray}{c}q\colon 2^{s}\leq N(q)<2^{s+1}\\ q\text{ }Q\text{-smooth}\end{subarray}}N((x,q)). \tag{8.16}\]
Above, we have used the familiar upper bound \(\tau_{q}(x)\ll N((x,q))\). Write \((x,q)=d\) and \(q=q^{\prime}d\). Continue
\[\ll s^{2}2^{-2s}\sum_{d\text{ }Q\text{-smooth}}\mathbf{1}_{d|x}N(d)\sum_{\begin{subarray}{c}q^{\prime}\colon 2^{s}\leqslant N(q^{\prime})N(d)<2^{s+1}\\ q^{\prime}\text{ }Q\text{-smooth}\end{subarray}}\mathbf{1}\] \[\ll s^{2}2^{-s}\sum_{d\text{ }Q\text{-smooth}}\mathbf{1}_{d|x}\ll 2^{-s/2}.\]
In the last line, we have \(0<N(x)<N\), and \(2^{s+1}>N^{1/8}\), so that we can use a favorable estimate on the divisor function. This estimate is summable in \(s\). It follows that for \(0<N(x)<N\) we have
\[E_{3}(x)\ll N^{-17/16}.\]
This completes the analysis of the Error term.
Proof of (8.5).: We have from (6.3) and (6.6),
\[\|\mathrm{Lo}*\mathrm{Hi}\|_{\ell^{2}}^{2} =\int_{\mathbb{T}^{2}}\lvert\widehat{\mathrm{Lo}}\cdot\widehat{ \mathrm{Hi}}\rvert^{2}\;d\xi\] \[\ll Q^{-1+\epsilon}\int_{\mathbb{T}^{2}}\lvert\widehat{\mathrm{ Lo}}\rvert^{2}\;d\xi\] \[\ll\frac{Q^{-1+2\epsilon}}{N}.\]
The previous argument immediately implies the final estimate (8.6). That completes the proof of Lemma 8.2, and hence the proof of our binary Goldbach Theorem.
### The Ternary Goldbach Conjecture
We turn to the ternary Goldbach conjecture, using the same notation. We will show that for all intervals \(\omega\subset\mathbb{T}\), there is an \(N_{\omega}\), so that for \(N>N_{\omega}\), we have
\[\inf_{\begin{subarray}{c}n\in S_{N}^{\omega}\\ n\;\mathrm{odd}\end{subarray}}A_{N}^{\omega}*A_{N}^{\omega}*A_{N}^{\omega}(n )\gg N^{-1}. \tag{8.17}\]
That is, every odd integer in \(S_{N}^{\omega}\) has many representations as a sum of three primes, each of which is in the sector \(\omega\). Indeed, there are as many as \(N^{2}/(\log N)^{2}\) representations. This holds for all large \(N\), and we can assume that the sector is small, so this completes the proof of the ternary Goldbach Theorem.
It remains to establish (8.17). We turn to the High/Low decomposition as in (8.1), and write
\[A_{N}^{\omega}*A_{N}^{\omega}*A_{N}^{\omega} =\mathrm{Hi}*A_{N}^{\omega}*A_{N}^{\omega}+\mathrm{Lo}*A_{N}^{\omega}*A_{N}^{\omega}\] \[=\mathrm{Hi}*A_{N}^{\omega}*A_{N}^{\omega}+\mathrm{Lo}*\mathrm{Hi}*A_{N}^{\omega}+\mathrm{Lo}*\mathrm{Lo}*A_{N}^{\omega}\] \[=\mathrm{Hi}*A_{N}^{\omega}*A_{N}^{\omega}+\mathrm{Lo}*\mathrm{Hi}*A_{N}^{\omega}+\mathrm{Lo}*\mathrm{Lo}*\mathrm{Hi}+\mathrm{Lo}*\mathrm{Lo}*\mathrm{Lo}\] \[=:\mathrm{Err}+\mathrm{Lo}*\mathrm{Lo}*\mathrm{Lo}. \tag{8.18}\]
As before, in (8.7) we will write
\[\mathrm{Lo}*\mathrm{Lo}*\mathrm{Lo}=\mathrm{Main}+\mathrm{Error}.\]
Thus, our focus is on the Lemma below. It easily completes the proof.
**Lemma 8.19**.: _We have the estimates_
\[N^{-1} \ll\min_{\begin{subarray}{c}x\in S_{N}^{\omega}\\ x\;\mathrm{odd}\end{subarray}}\mathrm{Main}(x), \tag{8.20}\] \[\|\mathrm{Error}\|_{\ell^{\infty}}+\|\mathrm{Err}\|_{\ell^{\infty}} \ll[N(\log N)^{B-3}]^{-1}. \tag{8.21}\]
Recalling the definition of \(Q\)-smooth from (8.7), we set
\[\text{Main}(x)\coloneqq\tilde{M}(x)\sum_{q\text{ is }Q\text{ smooth}}\frac{\mu(q)}{\phi(q)^{3}}\tau_{q}(x), \tag{8.22}\]
where
\[\tilde{M}(x)\coloneqq M_{N}^{\omega}*M_{N}^{\omega}*M_{N}^{\omega}*\widetilde{ \Delta_{q_{0}}}*\widetilde{\Delta_{q_{0}}}*\widetilde{\Delta_{q_{0}}}.\]
Proof of (8.20).: The details here are very close to those of the binary case, so we will be a little brief. Following the arguments of Lemma 8.8, in the definition of the Main term, we have \(\inf_{x\in S_{N}^{\omega}}\tilde{M}(x)\gg N^{-1}\). In the arithmetic part of (8.22), we have the Mobius function \(\mu(q)\), instead of \(|\mu(q)|\) as in the binary case, and a third power of the totient function. Using the multiplicative properties, we have
\[\sum_{q\text{ is }Q\text{ smooth}}\frac{\mu(q)}{\phi(q)^{3}}\tau_{q}(x) =\prod_{N(p)<Q,p|x}\left(1-\frac{1}{\phi^{2}(p)}\right)\prod_{N(p )<Q,(p,x)=1}\left(1+\frac{1}{\phi^{3}(p)}\right)\] \[\coloneqq h_{3}(x)\mathcal{G}_{3}\]
Parity is again crucial. We have
\[1-\frac{\tau_{1+i}(x)}{\phi^{3}(1+i)}=\begin{cases}0&x\text{ even}\\ 2&x\text{ odd}\end{cases}\]
That is, \(h_{3}(x)\) is positive for odd \(x\). Note that \(\mathcal{G}_{3}=O(1)\), because
\[\mathcal{G}_{3}=\prod_{N(p)<Q,(p,x)=1}\left(1+\frac{1}{\phi^{3}(p)}\right)< \sum_{n}\frac{|\mu(n)|}{N(n)^{2}}=O(1).\]
This completes the proof of (8.20).
Proof of (8.21).: There are two estimates to prove. The first is to bound the \(\ell^{\infty}\) norm of
\[\text{Error}\coloneqq\text{Lo}*\text{Lo}*\text{Lo}-\text{Main}.\]
Switch to Fourier variables. We have
\[\widehat{\text{Lo}}(\xi)^{3}=\sum_{q<Q}\sum_{a\in\mathbb{A}_{q}}\widehat{M}( \xi-a/q)\frac{\tau_{q}(a)^{3}}{\phi(q)^{3}}.\]
It follows that
\[\widehat{\text{Error}}(\xi)=\sum_{q\begin{subarray}{c}Q\text{-smooth}\\ q\geq Q\end{subarray}}\sum_{a\in\mathbb{A}_{q}}\widehat{M}(\xi-a/q)\frac{\tau_ {q}(a)^{3}}{\phi(q)^{3}}.\]
So, the \(\ell^{\infty}\) norm of Error is at most the \(L^{1}(\mathbb{T}^{2})\) norm of the expression above. That is at most
\[\sum_{\begin{subarray}{c}q\text{ }Q\text{-smooth}\\ q\geq Q\end{subarray}}\Bigl{\|}\sum_{a\in\mathbb{A}_{q}}\widehat{M}(\xi-a/q) \tau_{q}(a)^{3}\Bigr{\|}_{L^{1}(\mathbb{T}^{2})}\]
\[\ll N^{-1}\sum_{q\geq Q}\phi(q)^{-2+\epsilon}\ll N^{-1}Q^{-1+\epsilon}.\]
By our choice of \(Q=(\log N)^{B}\), this estimate meets our requirements.
The second estimate concerns the term \(\operatorname{Err}\). The term \(\operatorname{Err}\) is a sum of three terms of the form \(\phi_{1}*\phi_{2}*\phi_{3}\), where \(\phi_{j}\in\{A_{N},\operatorname{Hi},\operatorname{Lo}\}\). And, at least one of the terms is a High term. See (8.18). We control each term. To fix ideas, consider
\[\|\operatorname{Hi}*A_{N}^{\omega}*A_{N}^{\omega}\|_{\infty} \leq\|\operatorname{Hi}*A_{N}^{\omega}\|_{2}\|A_{N}^{\omega}\|_{2}\] \[\leq\|\widehat{\operatorname{Hi}}\widehat{A}_{N}^{\omega}\|_{2} \|\widehat{A}_{N}^{\omega}\|_{2}\] \[\leq\|\widehat{\operatorname{Hi}}\|_{\infty}\|\widehat{A}_{N}^{ \omega}\|_{2}^{2}\]
where the last inequality is trivial. Now, \(\|\widehat{\operatorname{Hi}}\|_{\infty}\ll Q^{-1+\epsilon}\), by (6.3). Recall that \(Q=(\log N)^{B}\), for an integer \(B>10\). And,
\[\|\widehat{A}_{N}^{\omega}\|_{2}^{2} =\|A_{N}^{\omega}\|_{2}^{2}\] \[\ll N^{-2}\sum_{N(n)<N,\arg(n)\in\omega}\Lambda(n)^{2}\] \[\ll N^{-1}(\log N)^{2}\]
We see that \(\|\operatorname{Hi}*A_{N}^{\omega}*A_{N}^{\omega}\|_{\infty}\ll N^{-1}(\log N)^{2-(1-\epsilon)B}\). That completes this case.
The second term to control is
\[\|\operatorname{Hi}*\operatorname{Lo}*\operatorname{Lo}\|_{\infty}\leq\| \widehat{\operatorname{Hi}}\|_{\infty}\|\operatorname{Lo}\|_{2}^{2}.\]
To estimate this last term \(\|\operatorname{Lo}\|_{2}^{2}\), use (6.6), which gives a better estimate than the first term. The third term to control is
\[\|\operatorname{Hi}*A_{N}^{\omega}*\operatorname{Lo}\|_{\infty}\leq\| \widehat{\operatorname{Hi}}\|_{\infty}\|A_{N}^{\omega}\|_{2}\|\operatorname{ Lo}\|_{2}.\]
But the right hand side is the geometric mean of the other two terms. That completes the proof.
|
2309.10007 | Multi-Agent Deep Reinforcement Learning for Cooperative and Competitive
Autonomous Vehicles using AutoDRIVE Ecosystem | This work presents a modular and parallelizable multi-agent deep
reinforcement learning framework for imbibing cooperative as well as
competitive behaviors within autonomous vehicles. We introduce AutoDRIVE
Ecosystem as an enabler to develop physically accurate and graphically
realistic digital twins of Nigel and F1TENTH, two scaled autonomous vehicle
platforms with unique qualities and capabilities, and leverage this ecosystem
to train and deploy multi-agent reinforcement learning policies. We first
investigate an intersection traversal problem using a set of cooperative
vehicles (Nigel) that share limited state information with each other in single
as well as multi-agent learning settings using a common policy approach. We
then investigate an adversarial head-to-head autonomous racing problem using a
different set of vehicles (F1TENTH) in a multi-agent learning setting using an
individual policy approach. In either set of experiments, a decentralized
learning architecture was adopted, which allowed robust training and testing of
the approaches in stochastic environments, since the agents were mutually
independent and exhibited asynchronous motion behavior. The problems were
further aggravated by providing the agents with sparse observation spaces and
requiring them to sample control commands that implicitly satisfied the imposed
kinodynamic as well as safety constraints. The experimental results for both
problem statements are reported in terms of quantitative metrics and
qualitative remarks for training as well as deployment phases. | Tanmay Vilas Samak, Chinmay Vilas Samak, Venkat Krovi | 2023-09-18T02:43:59Z | http://arxiv.org/abs/2309.10007v2 | Multi-Agent Deep Reinforcement Learning for Cooperative and Competitive Autonomous Vehicles using AutoDRIVE Ecosystem
###### Abstract
This work presents a modular and parallelizable multi-agent deep reinforcement learning framework for imbibing cooperative as well as competitive behaviors within autonomous vehicles. We introduce AutoDRIVE Ecosystem as an enabler to develop physically accurate and graphically realistic digital twins of Nigel and FITENTH, two scaled autonomous vehicle platforms with unique qualities and capabilities, and leverage this ecosystem to train and deploy multi-agent reinforcement learning policies. We first investigate an intersection traversal problem using a set of cooperative vehicles (Nigel) that share limited state information with each other in single as well as multi-agent learning settings using a common policy approach. We then investigate an adversarial head-to-head autonomous racing problem using a different set of vehicles (FITENTH) in a multi-agent learning setting using an individual policy approach. In either set of experiments, a decentralized learning architecture was adopted, which allowed robust training and testing of the approaches in stochastic environments, since the agents were mutually independent and exhibited asynchronous motion behavior. The problems were further aggravated by providing the agents with sparse observation spaces and requiring them to sample control commands that implicitly satisfied the imposed kinodynamic as well as safety constraints. The experimental results for both problem statements are reported in terms of quantitative metrics and qualitative remarks for training as well as deployment phases.
Multi-Agent Systems, Autonomous Vehicles, Deep Reinforcement Learning, Game Theory, Digital Twins
## I Introduction
In the rapidly evolving landscape of connected and autonomous vehicles (CAVs), the pursuit of intelligent and adaptive driving systems has emerged as a formidable challenge. Multi-Agent Reinforcement Learning (MARL) stands out as a promising avenue in the quest to develop autonomous vehicles capable of navigating complex and dynamic environments, while taking into account the cooperative and/or competitive nature of interactions with their peers. Particularly, cooperative and competitive MARL represent two pivotal approaches to addressing the intricate challenges posed by multi-agent interactions in autonomous driving scenarios. While cooperative MARL encourages agents to collaborate and share information to achieve common objectives, competitive MARL introduces elements of rivalry and adversary among agents, where individual success may come at the expense of others. These paradigms offer crucial insights into the development of autonomous vehicles, and have the potential to reshape the future of transportation.
Cooperative MARL [1, 2, 3, 4, 5, 6] fosters an environment in which autonomous vehicles cooperate to accomplish collective objectives such as optimizing traffic flow, enhancing safety, and efficiently navigating road networks. It mirrors real-world situations where vehicles must work together, such as traffic merging, intersection management, or platooning scenarios. Challenges in cooperative MARL include coordinating vehicle actions to minimize congestion, maintaining safety margins, and ensuring smooth interactions between self-interested agents.
On the other hand, competitive MARL [7, 8, 9, 10] introduces a competitive edge to autonomous driving, simulating scenarios such as overtaking, merging in congested traffic, or competitive racing. In this paradigm, autonomous vehicles strive to outperform their counterparts, vying for advantages while navigating complex and dynamic environments.
Fig. 1: Multi-agent deep reinforcement learning framework using AutoDRIVE Ecosystem.
Challenges in competitive MARL encompass strategic decision-making, opponent modeling, and adapting to aggressive driving behaviors while preserving safety standards.
As the field of MARL gains momentum within the realm of autonomous vehicles, it is crucial to comprehensively examine the implications of both cooperative and competitive approaches. In this paper, we present AutoDRIVE Ecosystem [11, 12] as an enabler to develop physically accurate and graphically realistic digital twins of scaled autonomous vehicles viz. Nigel [13] and F1TENTH [14] in Section II. We then present the problem formulation, solution approach as well as training and deployment results for a cooperative non-zero-sum use-case of intersection traversal (refer Fig. 1(a)) in Section III and a competitive zero-sum use-case of head-to-head autonomous racing (refer Fig. 1(b)) in Section IV. Finally, we present an overall summary of our work with some concluding remarks on both case studies.
## II Digital Twin Creation
We leveraged AutoDRIVE Simulator [15, 16] to develop digital twin models of Nigel as well as F1TENTH. It is to be noted, however, that this work utilizes the said models in the capacity of virtual prototyping, but we seek to further investigate emerging possibilities of utilizing the digital thread for harnessing the true potential of digital twins.
### _Vehicle Dynamics Models_
The vehicle model is a combination of a rigid body and a collection of sprung masses \({}^{i}M\), where the total mass of the rigid body is defined as \(M=\sum{}^{i}M\). The rigid body's center of mass, \(X_{COM}=\frac{\sum{}^{i}M*^{i}X}{\sum{}^{i}M}\), connects these representations, with \({}^{i}X\) representing the coordinates of the sprung masses.
The suspension force acting on each sprung mass is computed as \({}^{i}M*{}^{i}\ddot{Z}+{}^{i}B*({}^{i}\dot{Z}-{}^{i}\dot{z})+{}^{i}K*({}^{i}Z-{}^{i}z)\), where \({}^{i}Z\) and \({}^{i}z\) are the displacements of sprung and unsprung masses, and \({}^{i}B\) and \({}^{i}K\) are the damping and spring coefficients of the \(i\)-th suspension, respectively.
The vehicle's wheels are also treated as rigid bodies with mass \(m\), subject to gravitational and suspension forces: \({}^{i}m*{}^{i}\ddot{z}+{}^{i}B*({}^{i}\dot{z}-{}^{i}\dot{Z})+{}^{i}K*({}^{i}z-{}^{i}Z)\).
Tire forces are computed based on the friction curve for each tire, represented as \(\begin{cases}^{i}F_{t_{x}}=F(^{i}S_{x})\\ ^{i}F_{t_{y}}=F(^{i}S_{y})\end{cases}\), where \({}^{i}S_{x}\) and \({}^{i}S_{y}\) are the longitudinal and lateral slips of the \(i\)-th tire, respectively. The friction curve is approximated using a two-piece cubic spline, defined as \(F(S)=\begin{cases}f_{0}(S);&S_{0}\leq S<S_{e}\\ f_{1}(S);&S_{e}\leq S<S_{a}\end{cases}\), where \(f_{k}(S)=a_{k}*S^{3}+b_{k}*S^{2}+c_{k}*S+d_{k}\) is a cubic polynomial function. The first segment of the spline ranges from zero \((S_{0},F_{0})\) to an extremum point \((S_{e},F_{e})\), while the second segment ranges from the extremum point \((S_{e},F_{e})\) to an asymptote point \((S_{a},F_{a})\).
The tire slip is influenced by factors including tire stiffness \({}^{i}C_{\alpha}\), steering angle \(\delta\), wheel speeds \({}^{i}\omega\), suspension forces \({}^{i}F_{s}\), and rigid-body momentum \({}^{i}P\). These factors impact the longitudinal and lateral components of the vehicle's linear velocity. The longitudinal slip \({}^{i}S_{x}\) of the \(i\)-th tire is calculated by comparing the longitudinal components of the surface velocity of the \(i\)-th wheel (i.e., longitudinal linear velocity of the vehicle) \(v_{x}\) with the angular velocity \({}^{i}\omega\) of the \(i\)-th wheel: \({}^{i}S_{x}=\frac{{}^{i}r*^{i}\omega-v_{x}}{v_{x}}\). The lateral slip \({}^{i}S_{y}\) depends on the tire's slip angle \(\alpha\) and is determined by comparing the longitudinal \(v_{x}\) (forward velocity) and lateral \(v_{y}\) (side-slip velocity) components of the vehicle's linear velocity: \({}^{i}S_{y}=\tan(\alpha)=\frac{v_{y}}{|v_{x}|}\).
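The text does not specify how the cubic coefficients \(a_{k},b_{k},c_{k},d_{k}\) or the control points of the friction curve are chosen; the following Python sketch is therefore only an illustration, fixing each segment by Hermite interpolation with an assumed initial stiffness and zero slope at the extremum and asymptote points, and evaluating the slip quantities as defined above. All numerical values are placeholders, not parameters of the actual simulator.

```python
import numpy as np

def hermite_cubic(x0, y0, m0, x1, y1, m1):
    """Coefficients (a, b, c, d) of a*x^3 + b*x^2 + c*x + d matching the
    values y0, y1 and slopes m0, m1 at x0 and x1."""
    A = np.array([[x0**3, x0**2, x0, 1],
                  [x1**3, x1**2, x1, 1],
                  [3*x0**2, 2*x0, 1, 0],
                  [3*x1**2, 2*x1, 1, 0]], dtype=float)
    return np.linalg.solve(A, np.array([y0, y1, m0, m1], dtype=float))

# Placeholder control points: zero -> extremum -> asymptote (not from the paper).
S0, F0 = 0.0, 0.0
Se, Fe = 0.2, 1.0      # assumed peak friction at 20% slip
Sa, Fa = 0.6, 0.75     # assumed sliding friction beyond 60% slip
seg0 = hermite_cubic(S0, F0, 2.0 * Fe / Se, Se, Fe, 0.0)   # rising segment f_0
seg1 = hermite_cubic(Se, Fe, 0.0, Sa, Fa, 0.0)             # falling segment f_1

def friction(S):
    """Two-piece cubic friction curve F(S), magnitude only; flat past the asymptote."""
    S = abs(S)
    if S < Se:
        a, b, c, d = seg0
    elif S < Sa:
        a, b, c, d = seg1
    else:
        return Fa
    return a * S**3 + b * S**2 + c * S + d

def slips(v_x, v_y, omega, r, eps=1e-6):
    """Longitudinal and lateral slip of one wheel, as defined in the text."""
    denom = v_x if abs(v_x) > eps else eps
    s_x = (r * omega - v_x) / denom
    s_y = v_y / max(abs(v_x), eps)      # tan(slip angle)
    return s_x, s_y

print(friction(0.1), slips(2.0, 0.1, 35.0, 0.055))
```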
### _Sensor Models_
The simulated vehicles can be equipped with the physically accurate interoceptive as well as exteroceptive sensing modalities. Specifically, the throttle (\(\tau\)) and steering (\(\delta\)) sensors are simulated using a straightforward feedback loop.
Incremental encoders are simulated by measuring the rotation of the rear wheels (i.e., the output shaft of driving actuators): \({}^{i}N_{ticks}={}^{i}PPR*^{i}GR*^{i}N_{rev}\), where \({}^{i}N_{ticks}\) represents the ticks measured by the \(i\)-th encoder, \({}^{i}PPR\) is the base resolution (pulses per revolution) of the \(i\)-th encoder, \({}^{i}GR\) is the gear ratio of the \(i\)-th motor, and \({}^{i}N_{rev}\) represents the number of revolutions of the output shaft of the \(i\)-th motor.
The Inertial Positioning System (IPS) and Inertial Measurement Unit (IMU) are simulated based on temporally-coherent rigid-body transform updates of the vehicle \(\{v\}\) with respect to the world \(\{w\}\): \({}^{w}\mathbf{T}_{v}=\left[\begin{array}{c|c}\mathbf{R}_{3\times 3}&\mathbf{t}_{3 \times 1}\\ \hline\mathbf{0}_{1\times 3}&1\end{array}\right]\in SE(3)\). The IPS provides 3-DOF positional coordinates \(\{x,y,z\}\) of the vehicle, while the IMU supplies linear accelerations \(\{a_{x},a_{y},a_{z}\}\), angular velocities \(\{\omega_{x},\omega_{y},\omega_{z}\}\), and 3-DOF orientation data for the vehicle, either as Euler angles \(\{\phi_{x},\theta_{y},\psi_{z}\}\) or as a quaternion \(\{q_{0},q_{1},q_{2},q_{3}\}\).
The LIDAR simulation employs iterative raycasting raycast\(\{{}^{w}\mathbf{T}_{l}\), \(\mathbf{\bar{R}}\), \(r_{max}\}\) for each angle \(\theta\in[\theta_{min}:\theta_{res}:\theta_{max}]\) at an approximate update rate of 7 Hz. Here, \({}^{w}\mathbf{T}_{l}={}^{w}\mathbf{T}_{v}*{}^{v}\mathbf{T}_{l}\in SE(3)\)
Fig. 2: Creating the digital twins of Nigel and F1TENTH in AutoDRIVE Simulator.
represents the relative transformation of the LIDAR \(\{l\}\) with respect to the vehicle \(\{v\}\) and the world \(\{w\}\), \(\vec{\mathbf{R}}=\left[r_{max}*sin(\theta)~{}r_{min}*cos(\theta)~{}0\right]^{T}\) defines the direction vector of each ray-cast \(R\), where \(r_{min}\) = 0.15 m and \(r_{max}\) = 12 m denote the minimum and maximum linear ranges of the LIDAR, \(\theta_{min}=0^{\circ}\) and \(\theta_{max}=360^{\circ}\) set the minimum and maximum angular ranges of the LIDAR, and \(\theta_{res}=1^{\circ}\) represents the angular resolution of the LIDAR. The laser scan ranges are determined by checking ray-cast hits and then applying a threshold to the minimum linear range of the LIDAR, calculated as ranges[i]= \(\begin{cases}\texttt{hit.dist}&\texttt{if ray[i].hit and hit.dist}\geq r_{min}\\ \infty&\texttt{otherwise}\end{cases}\), where ray.hit is a Boolean flag indicating whether a ray-cast hits any colliders in the scene, and hit.dist= \(\sqrt{(x_{hit}-x_{ray})^{2}+(y_{hit}-y_{ray})^{2}+(z_{hit}-z_{ray})^{2}}\) calculates the Euclidean distance from the ray-cast source \(\{x_{ray},y_{ray},z_{ray}\}\) to the hit point \(\{x_{hit},y_{hit},z_{hit}\}\).
The simulated physical cameras are parameterized by their focal length (\(f\) = 3.04 mm), sensor size (\(\{s_{x},s_{y}\}\) = {3.68, 2.76} mm), target resolution (default = 720p), as well as the distances to the near and far clipping planes (\(N\) = 0.01 m and \(F\) = 1000 m). The viewport rendering pipeline for the simulated cameras operates in three stages. First, the camera view matrix \(\mathbf{V}\in SE(3)\) is computed by obtaining the relative homogeneous transform of the camera \(\{c\}\) with respect to the world \(\{w\}\): \(\mathbf{V}=\begin{bmatrix}r_{00}&r_{01}&r_{02}&t_{0}\\ r_{10}&r_{11}&r_{12}&t_{1}\\ r_{20}&r_{21}&r_{22}&t_{2}\\ 0&0&0&1\end{bmatrix}\), where \(r_{ij}\) and \(t_{i}\) denote the rotational and translational components, respectively. Next, the camera projection matrix \(\mathbf{P}\in\mathbb{R}^{4\times 4}\) is calculated to project world coordinates into image space coordinates: \(\mathbf{P}=\begin{bmatrix}\frac{2*N}{R-L}&0&\frac{R+L}{R-L}&0\\ 0&\frac{2*N}{T-B}&\frac{T}{T-B}&0\\ 0&0&-\frac{F+N}{F-N}&-\frac{2*F*N}{F-N}\\ 0&0&-1&0\end{bmatrix}\), where \(N\) and \(F\) represent the distances to the near and far clipping planes of the camera, and \(L\), \(R\), \(T\), and \(B\) denote the left, right, top, and bottom offsets of the sensor. The camera parameters \(\{f,s_{x},s_{y}\}\) are related to the terms of the projection matrix as follows: \(f=\frac{2*N}{R-L}\), \(a=\frac{s_{y}}{s_{x}}\), and \(\frac{f}{a}=\frac{2*N}{T-B}\). The perspective projection from the simulated camera's viewpoint is given as \(\mathbf{C}=\mathbf{P}*\mathbf{V}*\mathbf{W}\), where \(\mathbf{C}=\left[x_{c}~{}y_{c}~{}z_{c}~{}w_{c}\right]^{T}\) represents image space coordinates, and \(\mathbf{W}=\left[x_{w}~{}y_{w}~{}z_{w}~{}w_{w}\right]^{T}\) represents world coordinates. Finally, this camera projection is transformed into normalized device coordinates (NDC) by performing perspective division (i.e., dividing throughout by \(w_{c}\)), leading to a viewport projection achieved by scaling and shifting the result and then utilizing the rasterization process of the graphics API (e.g., DirectX for Windows, Metal for macOS, and Vulkan for Linux). Additionally, a post-processing step simulates lens and film effects, such as lens distortion, depth of field, exposure, ambient occlusion, contact shadows, bloom, motion blur, film grain, chromatic aberration, etc.
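To make the projection pipeline concrete, the Python sketch below builds the projection matrix from the stated parameters and pushes a single world point through view, clip, and normalized device coordinates. The symmetric-sensor frustum relation \(R-L=s_{x}N/f\) and the example view matrix are our assumptions; the actual simulator hands the result to the graphics API's rasterizer and post-processing stack.

```python
import numpy as np

# Camera intrinsics from the text: f = 3.04 mm, sensor 3.68 x 2.76 mm, near/far planes.
f, s_x, s_y = 3.04, 3.68, 2.76
near, far = 0.01, 1000.0

# Assume a sensor centered on the optical axis: R = -L, T = -B.
R = 0.5 * s_x * near / f
T = 0.5 * s_y * near / f
L, B = -R, -T

P = np.array([
    [2 * near / (R - L), 0,                  (R + L) / (R - L),        0],
    [0,                  2 * near / (T - B), (T + B) / (T - B),        0],
    [0,                  0,                  -(far + near) / (far - near), -2 * far * near / (far - near)],
    [0,                  0,                  -1,                       0],
])

# Example view matrix: camera at z = +2 m looking down -z (our assumption).
V = np.eye(4)
V[2, 3] = -2.0

def project(p_world):
    """World point -> NDC via C = P * V * W followed by perspective division."""
    W = np.array([*p_world, 1.0])
    C = P @ V @ W
    return C[:3] / C[3]

print(project((0.1, 0.05, -1.0)))
```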
### _Actuator Models_
The vehicle's motion is controlled by driving and steering actuators, with response delays and saturation limits matched to their real-world counterparts by tuning their torque profiles and actuation limits.
The driving actuators propel the rear/front/all wheels by applying a torque, calculated as \({}^{i}\tau_{drive}={}^{i}I_{w}*{}^{i}\dot{\omega}_{w}\), where \({}^{i}I_{w}=\frac{1}{2}*{}^{i}m_{w}*{}^{i}r_{w}^{2}\) represents the moment of inertia, \({}^{i}\dot{\omega}_{w}\) is the angular acceleration, \({}^{i}m_{w}\) is the mass, and \({}^{i}r_{w}\) is the radius of the \(i\)-th wheel. Additionally, the driving actuators simulate holding torque by applying an idle motor torque equivalent to the braking torque, i.e., \({}^{i}\tau_{idle}={}^{i}\tau_{brake}\).
The front wheels are steered using a steering actuator that generates a torque proportional to the required angular acceleration, given by \(\tau_{steer}=I_{steer}*\dot{\omega}_{steer}\). The individual turning angles, \(\delta_{l}\) and \(\delta_{r}\), for the left and right wheels, respectively, are computed based on the commanded steering angle \(\delta\), utilizing the Ackermann steering geometry defined by the wheelbase \(l\) and track width \(w\), as follows: \(\left\{\begin{array}{l}\delta_{l}=\tan^{-1}\left(\frac{2*l*\tan(\delta)}{2*l+w*\tan(\delta)}\right)\\ \delta_{r}=\tan^{-1}\left(\frac{2*l*\tan(\delta)}{2*l-w*\tan(\delta)}\right)\end{array}\right.\)
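A minimal Python sketch of the Ackermann relation above (the vehicle dimensions in the example are placeholders, not Nigel's or F1TENTH's):

```python
import math

def ackermann_angles(delta, wheelbase, track):
    """Left/right wheel angles (rad) for a commanded steering angle `delta`,
    following the Ackermann relations quoted above."""
    t = math.tan(delta)
    delta_l = math.atan2(2 * wheelbase * t, 2 * wheelbase + track * t)
    delta_r = math.atan2(2 * wheelbase * t, 2 * wheelbase - track * t)
    return delta_l, delta_r

# Example: a 30 deg command with l = 0.3 m and w = 0.2 m (placeholder dimensions).
print([math.degrees(a) for a in ackermann_angles(math.radians(30), 0.3, 0.2)])
```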
### _Environment Models_
At each time step, the simulator conducts mesh-mesh interference detection and computes contact forces, frictional forces, momentum transfer, as well as linear and angular drag acting on all rigid bodies. Simulated environments can be established using one of the following approaches:
* _AutoDRIVE IDK:_ Custom scenarios and maps can be crafted by utilizing the modular and adaptable Infrastructure Development Kit (IDK). This kit provides the flexibility to configure terrain modules, road networks, obstruction modules, and traffic elements. Specifically, the intersection traversal scenario was developed using AutoDRIVE IDK.
* _Plug-In Scenarios:_ AutoDRIVE Simulator supports third-party tools, such as RoadRunner [17], and open standards like OpenSCENARIO [18] and OpenDRIVE [19]. This allows users to incorporate a diverse range of plugins, packages, and assets in several standard formats for creating or customizing driving scenarios. Particularly, the autonomous racing scenario was created based on the binary occupancy grid map of a real-world F1TENTH racetrack called "Proto" using a third-party 3D modelling software, which was then imported into AutoDRIVE Simulator and post-processed with physical as well as graphical enhancements to make it "sim-ready".
* _Unity Terrain Integration:_ Since the AutoDRIVE Simulator is built atop the Unity [20] game engine, it seamlessly supports scenario design and development through Unity Terrain [21]. Users have the option to define terrain meshes, textures, heightmaps, vegetation, skyboxes, wind effects, and more, allowing design of both on-road and off-road scenarios. This option is well-suited for modelling full-scale environments.
## III Cooperative Multi-Agent Scenario
Inspired by [6], this use-case encompassed both single-agent and multi-agent learning scenarios, where each agent's objective was autonomous traversal of a 4-lane, 4-way intersection without collisions or lane boundary violations. Each vehicle possessed intrinsic state information and received limited state information about its peers; no external sensing modalities were employed. A deep neural network policy was independently trained for each scenario, guiding the agents through the intersection safely. The entire case-study was developed using an integrated ML framework [22] within AutoDRIVE Simulator.
### _Problem Formulation_
In _single-agent learning scenario_, only the ego vehicle learned to traverse the intersection, while peer vehicles were controlled at different velocities using a simple heuristic. Peer vehicles shared their states with the ego vehicle via V2V communication. All vehicles were reset together, making this scenario quite deterministic.
In _multi-agent learning scenario_, all vehicles learned to traverse the intersection simultaneously in a decentralized manner. Vehicles shared their states with each other via V2V communication and were reset independently, resulting in a highly stochastic scenario.
In both the scenarios, the challenge revolved around autonomous navigation in an unknown environment. The exact structure/map of the environment was not known to any agent. Consequently, this decision-making problem was framed as a Partially Observable Markov Decision Process (POMDP), which captured hidden state information through environmental observations.
### _State Space_
As previously discussed, the state space \(S\) for the intersection traversal problem could be divided into observable \(s^{o}\subset S\) and hidden \(s^{h}\subset S\) components. The observable component included the vehicle's 2D pose and velocity, denoted as \(s^{o}_{t}=[p_{x},p_{y},\psi,v]_{t}\in\mathbb{R}^{4}\). The hidden component encompassed the agent's goal coordinates, represented as \(s^{h}_{t}=[g_{x},g_{y}]_{t}\in\mathbb{R}^{2}\). Thus, each agent could observe the pose and velocity of its peers (via V2V communication) but kept its own goal location hidden from others. Consequently, the complete state space of an agent participating in this problem was a vector containing all observable and hidden states:
\[s_{t}=\begin{bmatrix}s^{o}_{t},s^{h}_{t}\end{bmatrix} \tag{1}\]
### _Observation Space_
Based on the state space defined in Equation 1, each agent employed an appropriate subset of its sensor suite to collect observations (as per Equation 2). This included the Inertial Positioning System (IPS) for positional coordinates \([p_{x},p_{y}]_{t}\in\mathbb{R}^{2}\), the Inertial Measurement Unit (IMU) for yaw \(\psi_{t}\in\mathbb{R}^{1}\), and incremental encoders for estimating vehicle velocity \(v_{t}\in\mathbb{R}^{1}\). Each agent \(i\) (where \(0<i<N\)) was provided with an observation vector of the form:
\[o^{i}_{t}=\begin{bmatrix}g^{i},\tilde{p}^{i},\tilde{\psi}^{i},\tilde{v}^{i} \end{bmatrix}_{t}\in\mathbb{R}^{2+4(N-1)} \tag{2}\]
This formulation allowed \(g^{i}_{t}=\begin{bmatrix}g^{i}_{x}-p^{i}_{x},g^{i}_{y}-p^{i}_{y}\end{bmatrix}_{t}\in\mathbb{R}^{2}\) to represent the ego agent's goal location relative to itself, \(\tilde{p}^{i}_{t}=\begin{bmatrix}p^{j}_{x}-p^{i}_{x},p^{j}_{y}-p^{i}_{y}\end{bmatrix}_{t}\in\mathbb{R}^{2(N-1)}\) to denote the position of every peer agent relative to the ego agent, \(\tilde{\psi}^{i}_{t}=\psi^{j}_{t}-\psi^{i}_{t}\in\mathbb{R}^{N-1}\) to express the yaw of every peer agent relative to the ego agent, and \(\tilde{v}^{i}_{t}=v^{j}_{t}\in\mathbb{R}^{N-1}\) to indicate the velocity of every peer agent. Here, \(i\) represented the ego agent, and \(j\in[0,N-1]\) represented every other (peer) agent in the scene, with a total of \(N\) agents.
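A minimal sketch (Python/NumPy, with our own variable names) of how the 14-dimensional observation of Equation 2 can be assembled for agent \(i\) from the states shared over V2V, assuming \(N=4\) agents:

```python
import numpy as np

def build_observation(i, goals, poses, velocities):
    """Observation o_t^i of Equation 2: relative goal, then relative pose and
    velocity of every peer agent.
    goals: (N, 2), poses: (N, 3) as [x, y, yaw], velocities: (N,)."""
    gx, gy = goals[i] - poses[i, :2]                 # ego goal relative to ego
    obs = [gx, gy]
    for j in range(len(poses)):
        if j == i:
            continue
        obs += list(poses[j, :2] - poses[i, :2])     # relative position of peer j
        obs.append(poses[j, 2] - poses[i, 2])        # relative yaw of peer j
        obs.append(velocities[j])                    # velocity of peer j
    return np.asarray(obs, dtype=np.float32)

# Example with N = 4 agents: 2 + 4*(N-1) = 14 entries.
rng = np.random.default_rng(0)
o = build_observation(0, rng.normal(size=(4, 2)), rng.normal(size=(4, 3)), rng.normal(size=4))
assert o.shape == (14,)
```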
### _Action Space_
The vehicles were designed as non-holonomic rear-wheel-drive models featuring an Ackermann steering mechanism. As a result, the complete action space of an agent comprised longitudinal (throttle/brake) and lateral (steering) motion control commands. For longitudinal control, the throttle command \(\tau_{t}\) was set to 80% of its upper saturation limit. The steering command \(\delta_{t}\) was discretized as \(\delta_{t}\in\{-1,0,1\}\) and was the sole active control source for safely navigating the intersection, as expressed in Equation 3:
\[a_{t}=\delta_{t}\in\mathbb{R}^{1} \tag{3}\]
### _Reward Function_
The extrinsic reward function (as shown in Equation 4) was designed to reward each agent with \(r_{goal}=+1\) for successfully traversing the intersection. Alternatively, it penalized agents proportionally to their distance from the goal, represented as \(k_{p}*\left\|g^{i}_{t}\right\|_{2}\), for collisions or lane boundary violations. The penalty constant \(k_{p}\) was set to 0.425, resulting in a maximum penalty of 1.
\[r^{i}_{t}=\begin{cases}r_{goal};&\text{if traversed the intersection safely}\\ -k_{p}*\left\|g^{i}_{t}\right\|_{2};&\text{if collided or overstepped lanes}\end{cases} \tag{4}\]
This encouraged agents to get closer to their respective goals, reducing penalties and ultimately leading to a positive reward, \(r_{goal}\). This approach not only expedited convergence but also restricted reward hacking.
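The reward logic of Equation 4 is simple enough to state as a few lines of Python; the sketch below is illustrative, and the zero reward in the non-terminal branch is our assumption since the text only specifies the two terminal cases.

```python
import numpy as np

def extrinsic_reward(goal_rel, traversed, collided_or_off_lane, k_p=0.425, r_goal=1.0):
    """Reward of Equation 4 for one agent. `goal_rel` is the goal expressed
    relative to the agent (g_t^i); the other arguments are boolean event flags."""
    if traversed:
        return r_goal
    if collided_or_off_lane:
        return -k_p * float(np.linalg.norm(goal_rel))
    return 0.0   # non-terminal step (assumption: no shaping reward otherwise)
```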
### _Optimization Problem_
The task of intersection traversal, with collision avoidance and lane-keeping constraints, was addressed through the extrinsic reward function described in Equation 4. This function motivated each individual agent to maximize the expected future discounted reward (as per Equation 5) by learning a policy \(\pi_{\theta}\left(a_{t}|o_{t}\right)\). Over time, this policy transitioned into the optimal policy \(\pi^{*}\).
\[\operatorname*{argmax}_{\pi_{\theta}\left(a_{t}|o_{t}\right)}\quad\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}r_{t}\right] \tag{5}\]
### _Training_
At each time step \(t\), each parallelized agent \(i\) collected an observation vector \(o_{t}^{i}\) and an extrinsic reward \(r_{t}^{i}\). Based on these inputs, it took an action \(a_{t}^{i}\) determined by the policy \(\pi_{\theta}\), which was continually updated to maximize the expected future discounted reward (refer Fig. 3).
This use-case employed a fully connected neural network (FCNN) as a function approximator for \(\pi_{\theta}\left(a_{t}|o_{t}\right)\). The network had \(\mathbb{R}^{14}\) inputs, \(\mathbb{R}^{1}\) outputs, and three hidden layers with 128 neural units each. The policy parameters \(\theta\in\mathbb{R}^{d}\) were defined in terms of the network's parameters. The policy was trained to predict steering commands directly based on collected observations, utilizing the proximal policy optimization (PPO) algorithm [23].
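The policy itself was trained with the simulator's integrated ML framework; purely for illustration, a PyTorch stand-in for the described architecture (14 inputs, three hidden layers of 128 units, a categorical head over the three discrete steering commands) could look as follows. The ReLU activations are our assumption.

```python
import torch
import torch.nn as nn

class IntersectionPolicy(nn.Module):
    """FCNN with three 128-unit hidden layers mapping the 14-D observation
    to a categorical distribution over the 3 discrete steering commands."""
    def __init__(self, obs_dim=14, n_actions=3, hidden=128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.logits = nn.Linear(hidden, n_actions)

    def forward(self, obs):
        return torch.distributions.Categorical(logits=self.logits(self.body(obs)))

policy = IntersectionPolicy()
dist = policy(torch.zeros(1, 14))
steer = dist.sample().item() - 1     # action index 0, 1, 2 -> steering in {-1, 0, +1}
```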
### _Deployment_
The trained policies were deployed onto the simulated vehicles, separately for both single-agent and multi-agent scenarios. As previously mentioned, the single-agent scenario was relatively deterministic, and the ego vehicle could safely traverse the intersection in most cases. In contrast, the multi-agent scenario was highly stochastic, resulting in a significantly lower success rate, especially with all vehicles navigating the intersection safely simultaneously.
Fig.4(a)-(c) present three key stages of the single-agent intersection traversal scenario. The first stage depicts the ego vehicle approaching the conflict zone, where it could potentially collide with peer vehicles. The second stage shows the vehicle executing a left turn to avoid collisions. Finally, the third stage illustrates the vehicle performing a subtle right turn to reach its goal. Fig.4(d)-(f) display three critical stages of the multi-agent intersection traversal scenario. In the first frame, vehicles 1 and 4 successfully avoid collision. The second frame showcases vehicle 1 finding a gap between vehicles 2 and 3 to reach its goal. In the third frame, vehicles 2 and 3 evade collision, while vehicle 4 approaches its goal, and vehicle 1 is re-spawned.
## IV Competitive Multi-Agent Scenario
Inspired by [9], this use-case encompassed a multi-agent learning scenario, where each agent's objective was minimizing its lap time without colliding with the track or its opponent. Each vehicle possessed intrinsic state information and sparse LIDAR measurements; no state information was shared among the competing vehicles. The entire use-case was developed using the same integrated ML framework mentioned in Section III.
### _Problem Formulation_
This case-study addressed the problem of autonomous racing in an unknown environment. The exact structure/map of the environment was not known to any agent. Consequently, this decision-making problem was also framed as a POMDP, which captured hidden state information through environmental observations.
We adopted an equally-weighted hybrid imitation-reinforcement learning architecture to progressively inculcate autonomous driving and racing behaviors into the agents. Consequently, we recorded 5 laps worth of independent demonstration datasets for each agent by manually driving the vehicles in sub-optimal trajectories within a single-agent setting. We hypothesised that such a hybrid learning architecture would guide the agents' exploration, thereby reducing training time significantly.
### _Observation Space_
At each time step \(t\), the agent collected a vectorized observation, as shown in Equation 6, from the environment. These observations were obtained using velocity estimation and exteroceptive ranging modalities mounted on the virtual vehicle(s):
\[o_{t}^{i}=\left[v_{t}^{i},m_{t}^{i}\right]\in\mathbb{R}^{28} \tag{6}\]
Here, \(v_{t}^{i}\in\mathbb{R}^{1}\) represents the forward velocity of the \(i\)-th agent, and \(m_{t}^{i}=\left[{}^{1}m_{t}^{i},{}^{2}m_{t}^{i},\cdots,{}^{27}m_{t}^{i}\right] \in\mathbb{R}^{27}\) is the
Fig. 4: Deployment results for (a)-(c) single-agent and (d)-(f) multi-agent intersection traversal scenarios: (a) and (d) denote first frozen snapshot, (b) and (e) denote second frozen snapshot, while (c) and (f) denote third frozen snapshot.
Fig. 3: Training results for (a)-(c) single-agent and (d)-(f) multi-agent intersection traversal scenarios: (a) and (d) denote cumulative reward, (b) and (e) denote episode length, while (c) and (f) denote policy entropy w.r.t. training steps.
measurement vector providing 27 range readings up to 10 meters. These readings are uniformly distributed over 270\({}^{\circ}\) around each side of the heading vector, spaced 10\({}^{\circ}\) apart. These observations were then input into a deep neural network policy denoted as \(\pi_{\theta}\), where \(\theta\in\mathbb{R}^{d}\) denotes the policy parameters.
### _Action Space_
The policy mapped the observations \(o_{t}\) directly to an appropriate action \(a_{t}\), as expressed in Equation 7:
\[a_{t}^{i}=\left[\tau_{t}^{i},\delta_{t}^{i}\right]\in\mathbb{R}^{2} \tag{7}\]
Here, \(\tau_{t}^{i}\in\{0.1,0.5,1.0\}\) represents the discrete throttle commands at 10%, 50% and 100% PWM duty cycles for the torque-limited (85.6 N-m) drive actuators, and \(\delta_{t}^{i}\in\{-1,0,1\}\) represents the discrete steering commands for left, straight, and right turns, respectively.
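As a concrete illustration, the discrete action indices predicted by the policy can be decoded into actuator commands as in the following minimal Python sketch; the function name and the assumption that the policy outputs indices into these value sets are ours, not part of the original implementation.

```python
THROTTLE_LEVELS = (0.1, 0.5, 1.0)   # 10%, 50%, 100% PWM duty cycle
STEERING_LEVELS = (-1, 0, 1)        # left, straight, right

def decode_action(throttle_idx: int, steering_idx: int):
    """Map discrete indices chosen by the policy to throttle and steering commands."""
    return THROTTLE_LEVELS[throttle_idx], STEERING_LEVELS[steering_idx]

print(decode_action(2, 0))  # (1.0, -1): full throttle, turn left
```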
### _Reward Function_
The policy \(\pi_{\theta}\) was optimized based on Behavioral Cloning (BC) [24], a Generative Adversarial Imitation Learning (GAIL) reward \({}^{g}r_{t}\) [25], a curiosity reward \({}^{c}r_{t}\) [26], as well as an extrinsic reward \({}^{e}r_{t}\) (as detailed in Equation 8).
Particularly, the agent received a reward of \(r_{checkpoint}=+0.01\) for passing each of the 19 checkpoints \(c_{i}\), \(i\in\{\mathrm{A},\mathrm{B},\cdots,\mathrm{S}\}\), on the racetrack, \(r_{lap}=+0.1\) upon completing a lap, \(r_{best\,lap}=+0.7\) upon achieving a new best lap time, and a penalty of \(r_{collision}=-1\) for colliding with any of the track walls \(w_{j}\), \(j\in\{1,\cdots,n\}\). Additionally, the agent received continuous rewards proportional to its velocity \(v_{t}\), encouraging it to optimize its trajectory spatio-temporally.
\[{}^{e}r_{t}^{i}=\begin{cases}r_{collision}&\text{if collision occurs}\\ r_{checkpoint}&\text{if checkpoint is passed}\\ r_{lap}&\text{if completed lap}\\ r_{best\,lap}&\text{if new best lap time is achieved}\\ 0.01*v_{t}^{i}&\text{otherwise}\end{cases} \tag{8}\]
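To make the piecewise structure of Equation 8 concrete, the following Python sketch evaluates the extrinsic reward from boolean event flags; the function name, the flag arguments, and the assumption that the cases are checked in this priority order are illustrative assumptions rather than the authors' implementation.

```python
R_COLLISION, R_CHECKPOINT, R_LAP, R_BEST_LAP = -1.0, 0.01, 0.1, 0.7

def extrinsic_reward(collision, checkpoint_passed, lap_completed,
                     new_best_lap, velocity):
    """Piecewise extrinsic reward of Equation 8 for one agent at one time step."""
    if collision:
        return R_COLLISION
    if checkpoint_passed:
        return R_CHECKPOINT
    if lap_completed:
        return R_LAP
    if new_best_lap:
        return R_BEST_LAP
    return 0.01 * velocity       # continuous velocity-proportional shaping term
```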
### _Training_
The policy \(\pi_{\theta}\) was optimized to maximize the expected future discounted reward (GAIL, curiosity and extrinsic rewards), while also minimizing the BC loss (refer Fig. 5).
This use-case also employed a fully connected neural network (FCNN) as a function approximator for \(\pi_{\theta}\left(a_{t}|o_{t}\right)\). The network had \(\mathbb{R}^{28}\) inputs, \(\mathbb{R}^{2}\) outputs, and three hidden layers with 128 neural units each. The policy parameters \(\theta\in\mathbb{R}^{d}\) were defined in terms of the network's parameters. The policy was trained to predict throttle and steering commands directly based on collected observations, utilizing the proximal policy optimization (PPO) algorithm [23].
### _Deployment_
The trained policies were deployed onto the respective simulated vehicles, which were made to race head-to-head on the same track with a phase-shifted initialization (as in real F1TENTH competitions).
Fig. 6(a)-(c) present three snapshots of a block-block-overtake sequence, wherein the red agent kept blocking the blue agent throughout the straight, but the blue agent took a wider turn with higher velocity and took advantage of its under-steer characteristic to cut in front of the red agent and overtake it. Fig. 6(d)-(f) display three snapshots of a let-pass-and-overtake sequence, wherein the blue agent found a gap between the red agent and the inside edge of the track and opportunistically overtook it. However, due to its under-steering characteristic, it went wider in the corner, thereby giving the red agent an opportunity to overtake it and reclaim the leading position.
## V Conclusion
This work presented a multi-agent reinforcement learning framework for instilling cooperative and competitive behaviors in autonomous vehicles using the real2sim approach. We discussed representative case-studies for each behavior type and analyzed the training and deployment results. A natural extension of this work would be to analyze the sim2real [27] transfer of these trained policies.
Fig. 5: Training results for multi-agent autonomous racing: (a) denotes BC loss, (b) denotes GAIL reward, (c) denotes curiosity reward, (d) denotes extrinsic reward, (e) denotes policy entropy, and (f) denotes episode length w.r.t. training steps.
Fig. 6: Deployment results for multi-agent autonomous racing: (a)-(c) denote three frozen snapshots of a block-block-overtake sequence, and (d)-(f) denote three frozen snapshots of a let-pass-and-overtake sequence. |
2305.19490 | Adoption of Blockchain Platform for Security Enhancement in Energy
Transaction | Renewable energy has become a reality in the present and is being preferred
by countries to become a considerable part of the central grid. With the
increasing adoption of renewables it will soon become crucial to have a
platform which would facilitate secure transaction of energy for consumers as
well as producers. This paper discusses and implements a Blockchain based
platform which enhances and establishes a secure method to exchange energy. It
would also lower the operation costs and accommodate other technologies like
the IoT. A basic market mechanism has been developed for peer-to-peer (P2P)
transaction of energy where different types of entities can be directly
involved. Another concept which is discussed in the paper is the consensus
mechanism and whether the model market could hold the security and privacy of
the individual users. | Madhuresh Gupta, Soumyakanti Giri, Prabhakar Karthikeyan Shanmugam, Mahajan Sagar Bhaskar, Jens Bo Holm-Nielsen, Sanjeevikumar Padmanaban | 2023-05-31T01:59:59Z | http://arxiv.org/abs/2305.19490v1 | # Adoption of Blockchain Platform for Security Enhancement in Energy Transaction
###### Abstract
Renewable energy has become a reality in the present and is being preferred by countries to become a considerable part of the central grid. With the increasing adoption of renewables it will soon become crucial to have a platform which would facilitate secure transaction of energy for consumers as well as producers. This paper discusses and implements a Blockchain based platform which enhances and establishes a secure method to exchange energy. It would also lower the operation costs and accommodate other technologies like the IoT. A basic market mechanism has been developed for peer-to-peer (P2P) transaction of energy where different types of entities can be directly involved. Another concept which is discussed in the paper is the consensus mechanism and whether the model market could hold the security and privacy of the individual users.
Renewable energy, Blockchain, Internet of Things, P2P Transaction, Grid Security.
\({}^{1}\)Madhuresh Gupta, \({}^{1}\)Soumyakanti Giri, \({}^{1}\)Prabhakar Karthikeyan Shanmugam,
\({}^{2,}\)Mahajan Sagar Bhaskar \({}^{2}\)Jens Bo Holm-Nielsen, \({}^{2}\)Sanjeevikumar Padmanaban
\({}^{1}\)School of Electrical Engineering, VIT University, Vellore 632014, Tamil Nadu, India.
\({}^{2}\)Center for Bioenergy and Green Engineering, Department of Energy Technology, Aalborg University,
Esbjerg 6700, Denmark.
[email protected], [email protected], [email protected],
[email protected], [email protected], [email protected]
\({}^{\#}\)[email protected]
## I Introduction
In today's world, where everyone pays high charges for electricity, it can be argued that the price charged can be reduced further through demand-side management in local networks [1]. There are also fixed charges [2] which can be reduced considerably by using a blockchain-based market, owing to the minimal wastage in microgrids and to supplier-side protocols. Keeping the above points in mind, we also need to consider smart grid scenarios in which individual devices are connected through the IoT [3]-[7] and can be managed directly by proper grid protocols and data processing. When all these parameters are considered, it becomes necessary that the processing is done in a decentralized manner.
Blockchain is a shared, encrypted ledger maintained on a platform consisting of a network of interconnected servers or computers [8]. Each transaction is validated by the network through a process known as mining, in which mathematical calculations are performed to form a block [9]. Based on the type of blockchain, everyone can have access to the same environment, specifically when a public blockchain is considered [10]. Each block is chained, or tied, to the previous block by embedding the block with information from the previous block [11]. The blockchain is a distributed and decentralized system, which means that it needs to have a way of tracking the official current state of the system. Since the blockchain can include financial transactions and business agreements, it is important that all parties involved are in sync regarding the terms of the agreement.
Since blockchain is decentralized, there is no "higher authority" that can rubber-stamp and finalize the contents of a blockchain block. The architecture is such that no single miner mines more than half of the total blocks, as this would allow manipulation of the data in the blocks added to the chain, making it vulnerable to dubious transactions [12]. Thus, to keep the network balanced, no participant should control more than half of the combined resources [13]. This differs from a traditional transaction, where a third party is responsible for verifying and authenticating all the transactions, as discussed in [14]. The comparison of blockchain security with traditional cybersecurity is given in Table I.
In this paper, the stability of the grid is considered, which can be disrupted by the integration of multiple renewable energy sources and the variability in their production. The introduction of EVs and battery technology will help in stabilising the grid [15] to a great extent, further supporting the use of blockchain as a medium. The stability of the grid is key when it comes to the feasibility of a blockchain-based grid. Hence, the focus is on creating a platform which is able to execute renewable energy transactions and which remains secure against external attacks.
The paper is organised as follows: Section I focuses on the literature survey, assumptions and the exact outcomes expected from blockchain driven energy market. Further in section II, the implementation of the blockchain platform scenario with two nodes is shown and how it can be achieved in a simplistic and fair manner. In Section III, different cases are elaborated with results and how they might influence the electrical grid. Finally, in Section IV, the conclusion provides a brief outlook on what this research paper does to integrate the P2P energy market [16] with blockchain as a backbone of the platform. The main highlights of the paper are,
* Secure Hash Algorithm (SHA-256) is used as a standard for all types of transactions to protect the chain.
* Everyone is allowed to participate in a common platform to exchange energy.
* Blockchain's direct relation to the electric grid via integration of the P2P market has been demonstrated experimentally.
* The P2P simulation is discussed in detail.
Blockchain is decentralized and distributed across P2P networks that are continually updated and kept in sync with each other and since they aren't contained in a central location, they don't have a single point of failure and cannot be manipulated from a single computer. It would require massive amounts of
computing power to access every instance (or at least a 51% majority) of a certain blockchain and alter them all at the same time.
Thus we have incorporated blockchain for the process of energy exchange among users and to keep their data secure from any possible attacks. This also helps to nullify the transaction charges associated with banks and the fixed charges levied by the energy companies. The structure of a block is defined by a set protocol, wherein the block number, previous block hash [17], transactions of the block, and proof are hashed together to form the hash of the current block.
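As a minimal illustration of this block structure, the sketch below hashes a block dictionary with SHA-256 using Python's hashlib; the field names and example values are assumptions made for illustration, and the serialization choice (sorted JSON) is ours rather than the authors'.

```python
import hashlib
import json

def hash_block(block: dict) -> str:
    """Return the SHA-256 digest of a block, serialized deterministically."""
    block_string = json.dumps(block, sort_keys=True).encode('utf-8')
    return hashlib.sha256(block_string).hexdigest()

example_block = {
    'index': 2,                               # block number
    'timestamp': 1556983172.05,
    'transactions': [{'sender': 'A', 'recipient': 'B', 'amount': 4.0}],
    'proof': 8486,
    'previous_hash': 'abc123',                # hash of the preceding block
}
print(hash_block(example_block))              # 64-character hexadecimal digest
```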
A major drawback of current grid networks is the lack of security in the manner in which transactions are controlled, with the involvement of mediators and other third parties. This hierarchical organizational trading structure of the grid leads to heavy operating costs with low efficiency of operation [18]. On the other hand, a blockchain-based trading infrastructure offers a decentralized platform that enables the peer-to-peer (P2P) trade of energy between consumers and prosumers in a secure manner. The identity privacy and the security of transactions are higher in the decentralized platform compared to the traditional system. The P2P energy trade finds purpose in many applications including the Industrial Internet of Things (IIoT) and enhances the possibility of developing micro-grids, leading to sustainable energy utilization [19].
## II Proposed Methodology
A sample microgrid with 4 peers was designed to simulate a set of transactions among each other by using blockchain as a common platform to interact and transact energy. The sub-section \(A\) defines the microgrid used for simulation and sub-section \(B\) gives a detailed explanation on how the energy is exchanged over blockchain platform.
### A. Model Description
In Fig. 1, two peers have been considered in the microgrid. One of them, a prosumer with solar photovoltaics, is considered as an energy producer with the provision to store surplus energy and sell it in the market. Thus, the consumer now has a choice to purchase energy in part from any available seller; the benefit here is a direct exchange of value from one customer to another. The low voltage bus is the general line which acts as the highway for all the exchanges taking place at any given moment. The model consists of 2 nodes. The consumer can directly access the open market.
Fig. 1: Sample Microgrid model with 2 peers – 1 consumer and 1 prosumer.
All market participants need to be registered by an internal authentication process. The consensus algorithm depends on the number of people or nodes participating in the network for verifying the transactions. Two virtual nodes are created, namely localhost:5000 and localhost:5001, for the sample problem; these are referenced as one of the consumers and the prosumers. When a transaction occurs, both the nodes keep a history in their respective blocks and in a certain order which is inherent to every chain. In this case, due to the use of two nodes, both the nodes have different ownership in order to prevent a monopoly. The blockchain algorithm for energy transaction is shown in Fig. 2.
To simulate a few real-world scenarios, and how blockchain implementation tackles them in contrast to traditional grid energy exchange, the following cases have been discussed:
Case 1: When electrical faults affect the system.
Case 2: When software anomalies happen which could directly affect the grid.
### Simulation
A blockchain class is realised where every block generated will contain the following parameters:
* Multiple transactions
* Hash of the previous block
* Block number
* Timestamp
* Proof (nonce number)
The blockchain concept has been implemented using four sources and two nodes to simulate the real-time transactions which depend on the number of offers posted and the amount quoted by the seller. A user-friendly front-end platform has been developed which simplifies the process of placing energy offers and buying. All the offers available at a particular instant are visible in a tabular form in real-time. The system architecture for P2P transaction is shown in Fig. 3.
Figure 2: Blockchain Algorithm for energy transaction.
First, a function _home()_ is defined. The _home()_ function handles _GET_ and _POST_ requests. The _GET()_ request is called whenever the user clicks on the home tab in the navigation bar on the website. The _render_template_ method opens the assigned HTML file, which in this case is _home.html_. The home HTML file constitutes the main page from which all the functionality can be accessed. Next, a function _table()_ is defined to initialize the database for storing the transaction exchange data. The _table()_ function handles _GET_ and _POST_ requests. The _GET()_ request is called whenever the user clicks on the _table tab_ in the navigation bar. The _render_template_ method opens the assigned HTML file, which in this case is _table.html_. The _table.html_ file lists all the sellers' information and the price per unit they are offering. The _data.index()_ method is used to reiterate the indexes starting from 1. When a user wants to sell energy, they navigate to the _Sell tab_, which is rendered by the _sellenergy()_ function. When the necessary details are submitted in the seller tab, the user is redirected to the _table tab_ in order to see the rest of the offers. Once the market is set up, the users can send a _POST()_ request from the _home tab_. This returns the required values from another function, _buyenergy()_, which redirects the user to the page where they can submit a request to buy a certain amount of energy in kWh. After the user has entered the required details in the correct format, the name of the user and the number of units required are retrieved using the _request.form()_ method, which extracts the necessary data by locating the form snippet in the HTML file, matching the name of the form parameter, and reading the value entered.
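A subset of this routing logic can be sketched with Flask as follows; this is a simplified reconstruction rather than the authors' code, and the template names, form field names, and CSV column names (seller, units, ppu) are assumptions made for illustration.

```python
from flask import Flask, render_template, request, redirect, url_for
import pandas as pd

app = Flask(__name__)
MARKET_CSV = 'energydemand.csv'

@app.route('/', methods=['GET', 'POST'])
def home():
    # Landing page with links for buying, selling and viewing current listings
    return render_template('home.html')

@app.route('/table', methods=['GET', 'POST'])
def table():
    # Display all open offers (seller, units, price per unit) as a table
    data = pd.read_csv(MARKET_CSV)
    data.index = range(1, len(data) + 1)       # re-index rows starting from 1
    return render_template('table.html', offers=data)

@app.route('/sell', methods=['GET', 'POST'])
def sellenergy():
    if request.method == 'POST':
        offer = {'seller': request.form['name'],
                 'units': float(request.form['units']),
                 'ppu': float(request.form['ppu'])}
        data = pd.read_csv(MARKET_CSV)
        data = pd.concat([data, pd.DataFrame([offer])], ignore_index=True)
        data.to_csv(MARKET_CSV, index=False)
        return redirect(url_for('table'))       # show the updated listings
    return render_template('sell.html')
```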
A CSV file named "_energydemand_" is created which stores the current market offerings. This list is read using the _read_csv()_ function from the pandas module. The algorithm then checks whether the energy required by the user is within the limits of what the market has to offer. If so, the required energy is broken into smaller parts according to what the sellers have to offer. If the energy required is greater than or equal to the minimum viable amount offered by a certain seller, then the transaction is recorded between the concerned seller and buyer, and the seller is removed from the market. The remaining units, if any, can be acquired from other sellers, for which the algorithm recalculates the minimum viable price per unit. When the required units are less than what anyone has to offer, the seller with the minimum viable amount is contacted and the required units are subtracted from the total units that seller has to offer. The _iloc_ and _loc_ methods are used to locate the necessary matrix rows and cells as and when required. After the necessary matchings, all of the transactions in the list are forwarded to the class method _new_transaction()_ to be added to the next block when mined. The _new_transaction()_ method is created in order to record the sender, the recipient, and the amount exchanged between them. These parameters are stored in a dictionary. This dictionary gets appended as a single entry to a list each time a new transaction happens in the energy market, and the list keeps growing until the pending transactions are mined and forged into a new block in the blockchain.
Fig. 3: System architecture for P2P transaction.
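The offer-matching step described above can be illustrated with the following greedy sketch using pandas; the column names (seller, units, ppu), the choice of always taking the cheapest offer first, and the convention that the recorded amount is the number of units exchanged are assumptions, and the paper's exact selection of the "minimum viable" offer may differ.

```python
import pandas as pd

def match_demand(buyer: str, units_required: float, market_csv: str = 'energydemand.csv'):
    """Greedily match a buyer's demand against the cheapest open offers."""
    offers = pd.read_csv(market_csv)
    transactions = []
    while units_required > 0 and not offers.empty:
        cheapest = offers['ppu'].idxmin()            # seller with the lowest price per unit
        seller = offers.loc[cheapest]
        bought = min(units_required, seller['units'])
        transactions.append({'sender': buyer,
                             'recipient': seller['seller'],
                             'amount': bought})       # units exchanged in this transaction
        units_required -= bought
        if bought == seller['units']:
            offers = offers.drop(cheapest)            # offer fully consumed, remove the seller
        else:
            offers.loc[cheapest, 'units'] -= bought   # partial purchase, reduce remaining units
    offers.to_csv(market_csv, index=False)
    return transactions
```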
After the block is added, the method returns the block number to the calling method. The block created is then passed as an object to the hash function, which produces a 256-bit hash represented as a 64-character hexadecimal string. This is done using the hashlib library. The block is in an object format which has to be converted into a utf-8 compliant string. The proof of work algorithm makes use of the hash of the last block, the proof of the last block, and a candidate proof which is iterated to find a hash with four leading zeros. The _proof_of_work()_ method acquires the proof of the last block and its hash by calling the _hash()_ property decorator. The two attributes, along with the candidate proof, are passed to the _valid_proof()_ method to check whether the hash created from the combination of these parameters meets the requirement. A guess parameter is created which stores a single long string of all three parameters, and _SHA256_ hashing in hexadecimal format is then used to find the necessary hash, with the proof attribute incremented by 1 each time the _valid_proof()_ method fails to return True. The _mine()_ function is a GET request which, when called, returns the current block information in _json_ format. The last block is then retrieved, and the proof for the new block is found by calling the _proof_of_work()_ method inside the blockchain class, as discussed previously. After the proof is calculated, a transaction with default parameters takes place to award the miner a certain cut for the mining process. The _new_block()_ method is then called to create this block along with the calculated proof. A dictionary response is created which contains the 5 class parameters assumed in the beginning. The response is converted to _json_ format and returned, making it easy to store in SQL libraries.
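A minimal sketch of this proof-of-work loop with hashlib is shown below; the exact concatenation order of the three parameters and the function signatures are assumptions for illustration.

```python
import hashlib

def valid_proof(last_proof: int, proof: int, last_hash: str) -> bool:
    """Check whether hashing the three parameters yields four leading zeros."""
    guess = f'{last_proof}{proof}{last_hash}'.encode('utf-8')
    return hashlib.sha256(guess).hexdigest()[:4] == '0000'

def proof_of_work(last_proof: int, last_hash: str) -> int:
    """Increment the candidate proof until valid_proof() succeeds."""
    proof = 0
    while not valid_proof(last_proof, proof, last_hash):
        proof += 1
    return proof

print(proof_of_work(8486, 'abc123'))   # nonce that satisfies the difficulty target
```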
## III Results and Discussion
A 24-hour average load curve of residential and commercial loads is shown in Fig. 4. This profile gives us an understanding of when the prices are high in the market and when the prices are low.
The overall price of the electrical energy provided depends on parameters discussed in [18]. Taking these parameters into account, the prospect of having a set domain is reasonable, as expressed in the set of relations below.
\[X=\{ppu,\ units\}\]
\[0\leq ppu<\text{price set by the concerned authority based on the region} \tag{1}\]
\[0<units\leq\text{maximum demand of an area}\]
The change of the boundary values depends on the market competition, the consumption pattern, and the government compliance of the respective area, be it anywhere in the world. This domain in the algorithm discussed assures that no one can enter arbitrary values or exploit customers who have less knowledge of the market but want to participate.
Fig. 4: A 24-hour average load curve of residential and commercial loads [14].
In Fig. 5, the frontend of the homepage for Peer-to-Peer market is shown which has multiple links that serve the function of buying, selling, viewing the current listings, and the transactions approved through mining with the completion of power transfer. This interface has been used to simulate the test cases as discussed in the following cases.
### Case 1: When electrical faults affect the system
In this case, electrical faults can cause loss of system memory and failure of live transactions in the blockchain network. This is due to the participation of prosumers and small entities in the market helping in the mining process, if eligible: their systems cannot be expected to be as robust as the servers in big industries. Using the blockchain platform, a list of transactions was created and mined by both the localhosts, but some of the transactions were deliberately left out on localhost:5001 to simulate a power failure at one node. The localhost:5000 chain of transactions in json format is given in Table II. The localhost:5001 chain of transactions in json format, with incomplete data, is given in Table III. In Table IV, the faulty node localhost:5001 was resolved and replaced with the updated set of transactions. On executing the consensus algorithm, the localhost:5001 chain with the incomplete data block gets compared with and replaced by that of localhost:5000, which has the complete database, thereby synchronising the transactions which were missing after the power failure and electrical fault.
","proof":8486,"timestamp":1556983172.057091, "transactions":
[{"amount":4.0,"recipient": "Kristian Stromberg", "sender": "Ellis Acost"},
{"amount":5.0,"recipient": "Apryl Goulet", "sender":"Ellis Acost"},
{"amount":1,"recipient":"68fb958440d949a784d43f59ab4b69f3", "sender":"0"}},
"index":4,"previoushash":"8047bc(@ab2748566293a55f29b94f2a366855b8dfd6ce97f60d517d679f3f5
"proof":10419,"timestamp":1556983962.120725,"transactions":
[{"amount":5.0,"recipient":"Apryl Goulet", "sender":"Chisel Acincio"},
{"amount":1,"recipient":"68fb958440d949a784d43f59ab4b69f3", "sender":"0"}}],"length":4}
### Case 2: When software anomalies happen which could directly affect the grid

A set of transactions is also saved on the local server, which can be used as a comparison along with localhost:5000 to verify which chain is true and to replace the false chain with the actual one. As seen in Fig. 6, both the server data and the chain in localhost:5000 have the same data.
In Fig. 6, we show two similar blockchains (representing different nodes of the network); to represent someone trying to manipulate the second chain by adding a fraudulent transaction, we can observe how the chain breaks right from the block where the data was manipulated. Considering the two cases approached above, and the algorithm which aids the simulation, it can be seen that the failure of one of the systems does not have instant feedback on the system parameters, and yet, when the system works perfectly, the transactions are comparable in security and speed to those where intermediaries are involved. This delay helps the system revert back to a healthy status without any effect on the software, specifically the users and their wallets, referring to the transactional anomalies. Also, the grid can show anomalies in power transfers, which could be a bigger problem as it could overload users with limited to no battery storage and create unnecessary faults in the grid which need to be manually repaired, causing loss of money, comfort, and time for the locals. In Table V, the consensus algorithm replaced the broken (altered) chain with the correct chain automatically.
Figure 6: The second chain breaks due to alteration in block data (indicated by pink blocks)
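The chain-validation and replacement logic behind this consensus step can be sketched as follows, reusing the hash_block() and valid_proof() helpers from the earlier sketches; the longest-valid-chain rule and the way neighbour chains are passed in are simplifying assumptions, and the actual HTTP exchange between localhost:5000 and localhost:5001 is omitted.

```python
def valid_chain(chain: list) -> bool:
    """A chain is valid if every block references its predecessor's hash and carries a valid proof."""
    for prev, block in zip(chain, chain[1:]):
        if block['previous_hash'] != hash_block(prev):
            return False
        if not valid_proof(prev['proof'], block['proof'], block['previous_hash']):
            return False
    return True

def resolve_conflicts(my_chain: list, neighbour_chains: list) -> list:
    """Adopt the longest valid chain seen among the participating nodes."""
    best = my_chain
    for chain in neighbour_chains:
        if len(chain) > len(best) and valid_chain(chain):
            best = chain
    return best
```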
Electrical grid integrity can be maintained locally, with the local transactions driving the algorithm that governs how smart grids behave. This would be much easier than querying a central server each time a short-distance, small-power transaction needs to take place, which would be the most common case. Long-distance, large energy transactions would require a dedicated area-specific server to process the data, which here is represented in the form of a CSV file that keeps updating. These types of servers would be most useful in aiding the P2P transactions indirectly, by making the grid calculations themselves without putting pressure on the nodes that are already busy verifying the transactions and transferring power depending on the contract obligations. The results obtained show that the blockchain model can bring out the meaning of prosumers in the true sense, wherein a large company is at the same level in terms of price offerings. The product can be deployed at scale with little physical infrastructure, and hence more and more servers can be added simultaneously for better reliability. There is no distinction between the large power provider and the small power provider, be it a 1 kWh seller or a 1 MWh seller. This also means that the data which was kept by the large companies no longer belongs to them alone, as all participants are equal; it belongs to everybody participating in the market, and here blockchain creates an easy way to keep track of everything happening in the market, open to every participant without any bias.
## IV Conclusions
This paper proposed a P2P energy transaction system based on blockchain that helps maintain stability and uniformity in the grid. A lightweight architecture was also created so that the application can be handled effectively with few resources on the node client and user side. In particular, the paper focuses on how blockchain directly relates to the stability of the microgrid architecture. The sample demonstration has shown that the implementation is robust and reliable in storing and performing transactions, maintaining the integrity of the local grid even in adverse conditions. The paper also briefly discusses how the pricing will be affected based on the location and the consumption patterns. Overall, this paper gives a detailed insight into how a P2P transaction system is created on the principle of blockchain and how it benefits the maintenance and control of the next-gen grid in the long run.
|
2309.08890 | Stochastic Schrödinger equation approach to real-time dynamics of
Anderson-Holstein impurities: an open quantum system perspective | We develop a stochastic Schr\"odinger equation (SSE) framework to simulate
real-time dynamics of Anderson-Holstein (AH) impurities coupled to a continuous
fermionic bath. The bath degrees of freedom are incorporated through
fluctuating terms determined by exact system-bath correlations, which is
derived in an ab initio manner. We show that such an SSE treatment provides a
middle ground between numerically expansive microscopic simulations and
macroscopic master equations. Computationally, the SSE model enables efficient
numerical methods for propagating stochastic trajectories. We demonstrate that
this approach not only naturally provides microscopically-detailed information
unavailable from reduced models, but also captures effects beyond master
equations, thus serves as a promising tool to study open quantum dynamics
emerging in physics and chemistry. | Zhen Huang, Limin Xu, Zhennan Zhou | 2023-09-16T06:03:54Z | http://arxiv.org/abs/2309.08890v1 | Stochastic Schrodinger equation approach to real-time dynamics of Anderson-Holstein impurities: an open quantum system perspective
###### Abstract
We develop a stochastic Schrodinger equation (SSE) framework to simulate real-time dynamics of Anderson-Holstein (AH) impurities coupled to a continuous fermionic bath. The bath degrees of freedom are incorporated through fluctuating terms determined by exact system-bath correlations, which are derived in an _ab initio_ manner. We show that such an SSE treatment provides a middle ground between numerically expensive microscopic simulations and macroscopic master equations. Computationally, the SSE model enables efficient numerical methods for propagating stochastic trajectories. We demonstrate that this approach not only naturally provides microscopically-detailed information unavailable from reduced models, but also captures effects beyond master equations, and thus serves as a promising tool to study open quantum dynamics emerging in physics and chemistry.
## I Introduction
Real-time dynamics of quantum impurity systems with fermionic baths have been a central topic in condensed matter physics for the past few decades. It is used to model a wide range of systems, such as magnetic impurities in metals [1], quantum dot systems [2], atom adsorption onto surface [3]. Among various types of quantum impurity models, the Anderson-Holstein (AH) model [4; 5] is of critical importance, since it is directly related to chemisorption [4], electrochemistry [6], heterogeneous catalysis [7] and molecular junctions [8].
The Anderson-Holstein model comprises of a molecular system as the impurity part and the continuous heat bath equilibrated at a certain temperature as the environment. For example, when modeling molecule-metal interfaces, the bath part is made up of metal orbitals of a continuum of spectra [4; 5]. The most straightforward simulation strategy would be a direct discretization of the bath orbitals (for example, see [9; 10]), and then propagate the many-body Schrodinger equation for the entire extended system. However, such calculations are often so expensive that either one is only capable of studying a model with a very small number of bath orbitals [11], or one makes crude single-particle approximations [12]. These treatments are far from exact and are only effective in limited scenarios.
The impurities could be viewed as a fermionic open quantum system due to the influence of the infinite bath. To incorporate the open-system effects with a manageable computational cost, various master equations are developed based on different physical approximations. This is generally seen in the study of molecular-metal interfaces. Treating the nuclei of the molecular system as classical particles, classical master equations (CME) [13; 14] are proposed along with surface hopping technique [15; 16] to efficiently sample the ensemble at equilibrium. However, due to the breakdown of the Born-Oppenheimer approximation, it is necessary to capture the nuclear quantum effects and nonadiabatic dynamics simultaneously. This is often achieved by quantum master equations (QME), which describe the dynamics for the reduced density matrix by tracing out the electronic degree of freedom. Lindblad master equations [17; 18] are used when one makes the Markovian approximation. To capture the memory effects, Nakajima-Mori-Zwanzig equations [19; 20], Redfield equations [21; 22] and other non-Markovian equations [23; 24; 25] are proposed as effective models. QME is widely used to model realistic systems [24; 25; 26; 27]. The hierarchy quantum master equation (HQME) [27; 28; 29], also known as the hierarchy equation of motion (HEOM), is a nonperturbative alternative but is often computationally too expensive.
However, there is another widely-used approach, namely Stochastic Schrodinger equations (SSE), for studying open quantum systems. By incorporating stochastic fluctuations arising from interactions with the external environment, SSE has been successfully used in modeling quantum decoherence[30; 31], quantum measurement [32; 33; 34], quantum jumps [35; 36] and so on. It is found to be consistent with QME calculations in many cases [37; 38; 39; 40], and in the meantime has the following additional advantages: on the one hand, SSE provides an ensemble of time-dependent quantum trajectories, and therefore is very convenient for studying the statistical and nonequilibrium properties of the open system; on the other hand, the numerical methods for simulating stochastic differential equations have already been studied extensively [41; 42], and as a result the above trajectories could be obtained effectively with a Monte-Carlo sampling scheme. However, the success of SSE models highly relies on the accurate modeling of the environment, i.e. obtaining the exact time-correlation function of the noise term in SSE.
Even though SSE is a very powerful tool in the modeling of open quantum systems, it has not been applied to study the real-time dynamics of quantum impurities
with a fermionic bath, to the best of our knowledge. On the one hand, SSE approaches are mostly developed for bosonic heat baths in different regimes. On the other hand, although there are attempts at using SSE to study fermionic bath effects [37; 43], an ab initio treatment of the stochastic and non-Markovian effects is lacking, and such approaches are therefore not applicable to general chemical systems such as molecule-metal interfaces.
The purpose of this article is two-fold. On the one hand, we aim to fill in a gap in the study of Anderson-Holstein models as an open quantum system: the time-correlation function of the noise term in SSE is obtained directly from the Anderson-Holstein Hamiltonian. On the other hand, we emphasize the following hierarchy of modeling: SSE is at an intermediate level of approximation, right between the atomic-level AH model and the QME model:
\[\text{Anderson-Holstein}\rightarrow\text{SSE}\rightarrow\text{QME}.\]
For a detailed description of this modeling hierarchy, please see Fig. 1. Although every SSE is associated with a corresponding QME, SSEs are able to encapsulate many intricate microscopic details, while QMEs merely describe the statistical averages of the SSE trajectories. This is also seen in our numerical experiments Section V. In other words, QME should be viewed as a further approximation on top of SSE.
The rest of this article is organized as follows. In Section II, we derive the stochastic Schrodinger equations from the Anderson-Holstein model. After introducing the problem setup in Section II.1, we review the Bogoliubov transformation in Section II.2, and then derive the time correlation function in Section II.3. We finally arrive at the non-Markovian SSE model in Section II.4 and its wide band limit in Section II.5. In Section III.1, we discuss how to obtain quantum master equations from SSE by taking expectation values of density matrices. In Section III.2, we show how different approximations would lead to different versions of QME, and demonstrate the hierarchy of modeling in detail. In Section IV, we discuss the numerical methods for trajectory-based SSE simulations. We introduce how to generate time-correlated noise in Section IV.1, and how to do time evolution in Section IV.2. We show numerical experiments in Section V. We demonstrate that SSE offers directly an ensemble of particle trajectories with microscopic details, and compare its results to QME.
## II From AH model to SSE
### Setup
In the Anderson-Holstein model [5], the total Hamiltonian consists of three parts: the system Hamiltonian \(\hat{H}_{S}\), the bath Hamiltonian \(\hat{H}_{B}\), and the system-bath interaction \(\lambda\hat{H}_{S-B}\):
\[\hat{H}=\hat{H}_{S}+\hat{H}_{B}+\lambda\hat{H}_{S-B}. \tag{1}\]
Here \(\hat{H}_{S}\) is the system Hamiltonian, \(\hat{H}_{B}\) is the bath Hamiltonian, and \(\lambda\hat{H}_{S-B}\) is the Hamiltonian of the interaction between the system and bath where \(\lambda\) is the interaction strength. The system Hamiltonian \(\hat{H}_{S}\) consists of the kinetic and potential energy of the nuclei:
\[\hat{H}_{S}=\frac{\hat{p}^{2}}{2m_{\text{n}}}+U_{0}(\hat{x})+h(\hat{x})\hat{d }^{\dagger}\hat{d}, \tag{2}\]
where \(\hat{p}\) and \(\hat{x}\) are the momentum operator and the position operator for the nuclei, \(m_{\text{n}}\) is the mass of the nuclei, \(\hat{d}\) and \(\hat{d}^{\dagger}\) are the fermionic annihilation and creation operators corresponding to the ground state electronic orbital. Here we are considering a two-level nuclei: when the molecule is neutral, \(\hat{d}^{\dagger}\hat{d}=0\), and the nuclear potential is \(U_{0}(\hat{x})\); when the molecule is charged, \(\hat{d}^{\dagger}\hat{d}=1\), and the potential is \(U_{1}(x)=U_{0}(\hat{x})+h(\hat{x})\).
The system's Hamiltonian represents a two-level electronic system and therefore could be rewritten in the following matrix form:
\[\begin{split}\hat{H}_{S}&=\left(\begin{array}{cc}\hat{h}_{0}(x)&\\ &\hat{h}_{1}(x)\end{array}\right)\\ &=\left(\begin{array}{cc}-\frac{\varepsilon^{2}}{2}\Delta+U_{0}(x)&\\ &-\frac{\varepsilon^{2}}{2}\Delta+U_{1}(x)\end{array}\right).\end{split} \tag{3}\]
Here \(\varepsilon\) is the ratio between the energy scale we are interested in and the macroscopic energy scale; it is a moderately small non-dimensionalized constant that is held fixed throughout the derivation, and it is also known as the semiclassical parameter [44]. The system wavefunction is
\[\boldsymbol{\psi}(x)=\left(\begin{array}{c}\psi_{0}(x)\\ \psi_{1}(x)\end{array}\right)\in\mathbb{L}^{2}(\mathbb{R}^{n})\otimes\mathbb{ C}^{2},\]
where \(\psi_{0}(x)\) is the nuclei wavefunction when the molecule is neutral (i.e. under potential \(U_{0}(x)\)), while \(\psi_{1}(x)\) is the nuclei wavefunction when the molecule is charged (i.e. under potential \(U_{1}(x)\)). We also introduce the following Dirac notation, as it is widely used in literature:
\[|\Psi\rangle=|\Psi_{0}\rangle|0\rangle+|\Psi_{1}\rangle|1\rangle,\]
and in this notation, \(\hat{d}\) and \(\hat{d}^{\dagger}\) acts as \(\hat{d}|1\rangle=|0\rangle,\hat{d}^{\dagger}|0\rangle=|1\rangle,\hat{d}|0 \rangle=0,\hat{d}^{\dagger}|1\rangle=0\), and we have
\[\begin{split}\hat{d}|\Psi\rangle&=\hat{d}|\Psi_{0}\rangle|0 \rangle+\hat{d}|\Psi_{1}\rangle|1\rangle=|\Psi_{1}\rangle|0\rangle\\ \hat{d}^{\dagger}|\Psi\rangle&=\hat{d}^{\dagger}|\Psi_{0}\rangle|0 \rangle+\hat{d}^{\dagger}|\Psi_{1}\rangle|1\rangle=|\Psi_{0}\rangle|1\rangle. \end{split} \tag{4}\]
In Eq. (4), when acting on \(|\Psi\rangle\in\mathbb{L}^{2}(\mathbb{R}^{n})\otimes\mathbb{C}^{2}\), \(\hat{d}\) acts as \(\hat{I}\otimes\hat{d}\) (so is \(\hat{d}^{\dagger}\)).
The heat bath is formed by non-interacting metal orbitals \(|E\rangle\) with energy \(E\in[E_{-},E_{+}]\), where \(E_{-}\) and \(E_{+}\) are the lower and upper bounds of the spectrum of the metal continuum band. As a result, the bath Hamiltonian is
\[\hat{H}_{B}=\int_{E_{-}}^{E_{+}}(E-\mu)\hat{c}_{E}^{\dagger}\hat{c}_{E}\mathrm{d }E. \tag{5}\]
Here \(\hat{c}_{E}\) and \(\hat{c}_{E}^{\dagger}\) are the annihilation and creation operators for the metal electronic orbital \(|E\rangle\), and \(\mu\) is the chemical potential. For formal simplicity, \(\mu\) is chosen to be zero unless otherwise specified, but all of our derivations naturally apply to any choice of \(\mu\). For the sake of practical calculations, the continuous band \([E_{-},E_{+}]\) could be discretized using equidistant grid point [10]:
\[E_{k}=E_{-}+(k-\frac{1}{2})h_{N},\quad h_{N}=\frac{E_{+}-E_{-}}{N}. \tag{6}\]
or using Gaussian quadrature [9]. After discretization, the bath Hamiltonian becomes:
\[\hat{H}_{B}=\sum_{k=1}^{N}E_{k}\hat{c}_{k}^{\dagger}\hat{c}_{k}. \tag{7}\]
For now, we will proceed with our discussion using the discretized metal orbitals. However, as we will show later, after deriving the nMSSE model, we will consider the continuous band limit by letting the number of metal orbitals \(N\) go to infinity, through which we recover the metal continuum and thus eliminate the quadrature error of the continuous band discretization.
The system-bath interaction \(\hat{H}_{S-B}\), which describes the interaction between the molecule and metal continuum, is as follows:
\[\lambda\hat{H}_{S-B}=\lambda\left(\int\mathrm{d}E\,V(E,\hat{x})\hat{c}_{E}^{ \dagger}\hat{d}+\bar{V}(E,\hat{x})\hat{d}^{\dagger}\hat{c}_{E}\right). \tag{8}\]
Here \(V(E,x)\) describes the interaction strength between the molecule and the bath orbital with energy \(E\) while the nuclei is at \(x\), and \(\bar{V}(E,x)\) is its complex conjugate. With the same discretization orbitals \(\{E_{k}\}_{k=1}^{N}\) chosen as above, we have
\[\lambda\hat{H}_{S-B}=\lambda\left(\sum_{k=1}^{N}V_{k}(\hat{x})\hat{c}_{k}^{ \dagger}\hat{d}+\bar{V}_{k}(\hat{x})\hat{d}^{\dagger}\hat{c}_{k}\right) \tag{9}\]
where \(V_{k}(\hat{x})=\sqrt{h_{N}}V(E_{k},\hat{x})\). We refer to [9] for the derivation of Eq. (9). Without loss of generality, let us assume that \(V_{k}(x)\) is real-valued.
### Bogoliubov transformation of the system-bath coupling
For clarity in the derivation, we adopt the Bogoliubov transformations, which were introduced for studying weakly interacting He\({}^{4}\) superfluid [45] and solving Bardeen-Cooper-Schrieffer theory [46] to understand superconductivity. Let us define the following pairs of Bogoliubov operators:
\[\begin{split}\hat{B}_{1k}=\hat{c}_{k}+\hat{c}_{k}^{\dagger}, \quad\hat{B}_{2k}=\mathrm{i}(\hat{c}_{k}-\hat{c}_{k}^{\dagger}),\\ \hat{g}_{1}=\hat{d}+\hat{d}^{\dagger},\quad\hat{g}_{2}=\mathrm{i} (\hat{d}-\hat{d}^{\dagger}).\end{split} \tag{10}\]
Note that \(\hat{B}_{1k},\hat{B}_{2k},\hat{g}_{1},\hat{g}_{2}\) are all Hermitian operators, and we have
\[\begin{split}\hat{c}_{k}=\frac{\hat{B}_{1k}-\mathrm{i}\hat{B}_{2 k}}{2},\quad\hat{c}_{k}^{\dagger}=\frac{\hat{B}_{1k}+\mathrm{i}\hat{B}_{2k}}{2} \\ \hat{d}=\frac{\hat{g}_{1}-\mathrm{i}\hat{g}_{2}}{2},\quad\hat{d}^{ \dagger}=\frac{\hat{g}_{1}+\mathrm{i}\hat{g}_{2}}{2}.\end{split} \tag{11}\]
In Dirac's notation, according to Eq. (4), we have
\[\begin{split}\hat{g}_{1}|\Psi\rangle&=(\hat{d}+\hat{ d}^{\dagger})(|\psi_{0}\rangle|0\rangle+|\psi_{1}\rangle|1\rangle)\\ &=|\psi_{0}\rangle|1\rangle+|\psi_{1}\rangle|0\rangle,\\ \hat{g}_{2}|\Psi\rangle&=\mathrm{i}(\hat{d}-\hat{d}^{ \dagger})(|\psi_{0}\rangle|0\rangle+|\psi_{1}\rangle|1\rangle)\\ &=-\mathrm{i}|\psi_{0}\rangle|1\rangle+\mathrm{i}|\psi_{1}\rangle |0\rangle.\end{split} \tag{12}\]
Therefore the Bogoliubov operators \(\hat{g}_{1},\hat{g}_{2}\) acts like Pauli matrices for the two-level system:
\[\hat{g}_{1}=\sigma_{x}=\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right),\quad\hat{g}_{2}=-\sigma_{y}=\left(\begin{array}{cc}0& \mathrm{i}\\ -\mathrm{i}&0\end{array}\right). \tag{13}\]
Note that \([B_{ik},g_{j}]_{+}=0\), we can rewrite the coupling Hamiltonian as:
\[\begin{split}&\hat{H}_{S-B}\\ &=\sum_{k=1}^{N}V_{k}\left(\frac{\hat{B}_{1k}+\mathrm{i}\hat{B}_{2k}}{2} \frac{\hat{g}_{1}-\mathrm{i}\hat{g}_{2}}{2}+\frac{\hat{g}_{1}+\mathrm{i}\hat{g}_ {2}}{2}\frac{\hat{B}_{1k}-\mathrm{i}\hat{B}_{2k}}{2}\right)\\ &=\sum_{k=1}^{N}V_{k}\left(\frac{\hat{g}_{1}+\mathrm{i}\hat{g}_{2}}{2} \frac{\hat{B}_{1k}-\mathrm{i}\hat{B}_{2k}}{2}-\frac{\hat{g}_{1}-\mathrm{i}\hat{ g}_{2}}{2}\frac{\hat{B}_{1k}+\mathrm{i}\hat{B}_{2k}}{2}\right)\\ &=\sum_{k=1}^{N}\frac{\mathrm{i}}{2}V_{k}(x)\hat{g}_{2}\hat{B}_{1k}- \frac{\mathrm{i}}{2}V_{k}(x)\hat{g}_{1}\hat{B}_{2k}.\end{split} \tag{14}\]
In open quantum systems, the system-bath interaction is often written as a sum of multiplication of system Hermitian operator and bath Hermitian operator:
\[\hat{H}_{S-B}=\sum_{i=1}^{2}\sum_{k=1}^{N}\hat{S}_{ik}\hat{B}_{ik}. \tag{15}\]
Combined with Eq. (14), we know that the system operators \(\hat{S}_{ik}\) are
\[\hat{S}_{1k}=\frac{\mathrm{i}}{2}V_{k}(x)\hat{g}_{2},\quad\hat{S}_{2k}=-\frac{ \mathrm{i}}{2}V_{k}(x)\hat{g}_{1}. \tag{16}\]
### Time correlation function of bath operators and the memory kernel
The non-Markovian Stochastic Schrodinger Equation (nMSSE) was formally derived in [47] and was applied to
simulate spin-boson systems. For a general open quantum system where the interaction between system and bath is described as \(\hat{H}_{S-B}=\sum_{\alpha}\hat{S}_{\alpha}\hat{B}_{\alpha}\), the effective nMSSE dynamics for the system is
\[\mathrm{i}\varepsilon\partial_{t}|\Psi(t)\rangle=\hat{H}_{S}|\Psi(t) \rangle+\lambda\sum_{\alpha}\eta_{\alpha}(t)\hat{S}_{\alpha}|\Psi(t)\rangle \tag{17}\] \[-\mathrm{i}\frac{\lambda^{2}}{\varepsilon}\int_{0}^{t}\ \mathrm{d}\tau\sum_{\alpha,\beta}C_{\alpha,\beta}(\tau)\hat{S}_{\alpha} \mathrm{e}^{-\frac{\mathrm{i}}{\varepsilon}\hat{H}_{S}\tau}\hat{S}_{\beta}| \Psi(t-\tau)\rangle.\]
Recall that \(\varepsilon\) is a small non-dimensionalized parameter. Here \(\eta_{\alpha}(t)\) is the complex-valued Gaussian stochastic noise satisfying
\[\begin{array}{c}\mathbb{E}\left(\eta_{\alpha}(t)\right)=0,\quad\mathbb{E} \left(\eta_{\alpha}(t)\eta_{\beta}(t^{\prime})\right)=0,\\ \mathbb{E}\left(\eta_{\alpha}^{*}(t)\eta_{\beta}(t^{\prime})\right)=C_{\alpha, \beta}(t-t^{\prime}),\end{array} \tag{18}\]
and the memory kernel \(C_{\alpha,\beta}(\tau)\) is the time-correlation function of the bath operators:
\[C_{\alpha,\beta}\left(t-t^{\prime}\right)=\mathrm{tr}_{B}\left(\hat{\rho}_{B }^{\mathrm{eq}}\hat{B}_{\alpha}(t)\hat{B}_{\beta}\left(t^{\prime}\right) \right), \tag{19}\]
where \(\hat{\rho}_{B}^{\mathrm{eq}}=\frac{1}{Z_{B}}\exp(-\beta\hat{H}_{B})=\frac{1}{Z _{B}}\exp(-\hat{H}_{B}/k_{B}T)\) is the density matrix of the bath, which is in thermal equilibrium of temperature \(T\), \(k_{B}\) is the Boltzmann constant, \(Z_{B}=\mathrm{tr}(\exp(-\hat{H}_{B}/k_{B}T))\) and \(\hat{B}_{\alpha}(t)=\exp\left(\frac{\mathrm{i}}{\varepsilon}\hat{H}_{B}t \right)\)\(B_{\alpha}\exp\left(-\frac{\mathrm{i}}{\varepsilon}\hat{H}_{B}t\right)\) is the Heisenberg representation of \(\hat{B}_{\alpha}\). In other words, the derivation of SSE model for Anderson-Holstein impurities boils down to calculating the time-correlation function Eq. (19) of the stochastic noise, which is also the memory kernel of the non-Markovian term.
Now let us derive an explicit expression for Eq. (19). Recall that \(N\) is the number of discrete metal bath orbitals. The natural basis for the Fock space \(\mathcal{H}_{B}\) of the bath, under the occupation number representation, would be
\[\mathcal{H}_{B}=\mathrm{span}\big{\{}|b_{1}\rangle\cdots|b_{N}\rangle\,,\quad b _{1},\cdots,b_{N}\in\{0,1\}\big{\}},\]
or, in the binary representation,
\[\mathcal{H}_{B}=\mathrm{span}\left\{|b\rangle\,|b=\sum b_{k}2^{k-1},b\in\{0,1, \cdots,2^{N}-1\}\right\}.\]
In this basis, the bath Hamitonian \(\hat{H}_{B}\) is a \(2^{N}\times 2^{N}\) diagonal matrix:
\[\begin{array}{c}\left(\hat{H}_{B}\right)_{bb^{\prime}}=\delta_{bb^{\prime}} \mathcal{E}_{b},\quad\mathcal{E}_{b}=\sum_{k}b_{k}E_{k},\\ \text{for }b,b^{\prime}\in\{0,1,\cdots,2^{N}-1\},\quad b_{1},\cdots,b_{N}\in\{0,1\}.\end{array} \tag{20}\]
For future reference, let us call \(b_{k}\) the \(k\)-th digit of \(b\). Now we are ready to calculate the correlation function \(C_{ik,i^{\prime}k^{\prime}}\left(t-t^{\prime}\right)\). We have
\[\begin{array}{c}C_{ik,i^{\prime}k^{\prime}}\left(t-t^{\prime}\right)=\mathrm{ tr}_{B}\left(\frac{1}{Z_{B}}\mathrm{e}^{-\beta\hat{H}_{B}}\hat{B}_{ik}(t)\hat{B}_{i^{ \prime}k^{\prime}}\left(t^{\prime}\right)\right)\\ =\frac{1}{Z_{B}}\sum_{b=0}^{2^{N}-1}\mathrm{e}^{-\beta\mathcal{E}_{b}} \langle b|\hat{B}_{ik}(t)\hat{B}_{i^{\prime}k^{\prime}}\left(t^{\prime}\right) |b\rangle,\end{array} \tag{21}\]
where \(Z_{B}=\sum_{b=0}^{2^{N}-1}\mathrm{e}^{-\beta\mathcal{E}_{b}}\) is the normalizing factor. Using the resolution of identity \(\mathbb{I}=\sum_{b^{\prime}}|b^{\prime}\rangle\langle b^{\prime}|\), one can rewrite the correlation function as
\[\begin{array}{c}C_{ik,i^{\prime}k^{\prime}}\left(t-t^{\prime}\right)\\ =\frac{1}{Z_{B}}\sum_{b=0}^{2^{N}-1}\sum_{b^{\prime}=0}^{2^{N}-1} \mathrm{e}^{-\beta\mathcal{E}_{b}}\langle b|B_{ik}(t)|b^{\prime}\rangle \langle b^{\prime}|B_{i^{\prime}k^{\prime}}\left(t^{\prime}\right)|b\rangle. \end{array} \tag{22}\]
Recall that \(\hat{B}_{1k}=\hat{c}_{k}+\hat{c}_{k}^{\dagger}\), \(\hat{B}_{2k}=\mathrm{i}(\hat{c}_{k}-\hat{c}_{k}^{\dagger})\), and note that
\[\begin{array}{c}\langle b|c_{k}(t)|b^{\prime}\rangle=\langle b|\mathrm{e}^{ \frac{\mathrm{i}}{\varepsilon}\hat{H}_{B}t}c_{k}\mathrm{e}^{-\frac{\mathrm{i}}{ \varepsilon}\hat{H}_{B}t}|b^{\prime}\rangle\\ =\mathrm{e}^{\frac{\mathrm{i}}{\varepsilon}\left(\mathcal{E}_{b}-\mathcal{E}_{b^{ \prime}}\right)t}\langle b|c_{k}|b^{\prime}\rangle,\end{array} \tag{23}\]
Here we use \(\mathrm{e}^{-\frac{\mathrm{i}}{\varepsilon}\hat{H}_{B}t}|b\rangle=\mathrm{e}^{-\frac{\mathrm{i}}{\varepsilon}\mathcal{E}_{b}t}|b\rangle\) according to Eq. (20). By definition of annihilation operators, \(\langle b|c_{k}|b^{\prime}\rangle\) is nonzero if and only if, in the binary representation, \(b_{k}=0\), \(b_{k}^{\prime}=1\), and except for the \(k\)-th digit, \(b\) and \(b^{\prime}\) are the same. Similarly,
\[\begin{array}{c}\langle b|c_{k}^{\dagger}(t)|b^{\prime}\rangle=\langle b| \mathrm{e}^{\frac{\mathrm{i}}{\varepsilon}\hat{H}_{B}t}c_{k}^{\dagger}\mathrm{e}^{- \frac{\mathrm{i}}{\varepsilon}\hat{H}_{B}t}|b^{\prime}\rangle\\ =\mathrm{e}^{\frac{\mathrm{i}}{\varepsilon}\left(\mathcal{E}_{b}-\mathcal{E}_{b^{ \prime}}\right)t}\langle b|c_{k}^{\dagger}|b^{\prime}\rangle,\end{array} \tag{24}\]
where \(\langle b|c_{k}^{\dagger}|b^{\prime}\rangle\) is by definition nonzero if and only if in binary representation, \(b_{k}=1\), \(b_{k}^{\prime}=0\), and except for the \(k\)-th digit, \(b\) and \(b^{\prime}\) are the same.
In other words, given \(b\in\{0,\cdots,2^{N}-1\}\), there is a unique \(b^{\prime}\) such that \(\langle b|c_{k}(t)|b^{\prime}\rangle\) (\(\langle b|c_{k}^{\dagger}(t)|b^{\prime}\rangle\)) is nonzero if its \(k\)-th digit \(b_{k}=1\) (\(b_{k}=0\)). This greatly simplifies the double sum in Eq. (22). We have
\[\begin{array}{c}\sum_{b=0}^{2^{N}-1}\sum_{b^{\prime}=0}^{2^{N}-1}\mathrm{e}^{- \beta\mathcal{E}_{b}}\langle b|c_{k}(t)|b^{\prime}\rangle\langle b^{\prime}|c_{k ^{\prime}}^{\dagger}\left(t^{\prime}\right)|b\rangle\\ =\delta_{kk^{\prime}}\sum_{\{b|b_{k}=0\}}\mathrm{e}^{-\beta\mathcal{E}_{b}} \mathrm{e}^{-\frac{\mathrm{i}}{\varepsilon}E_{k}(t-t^{\prime})},\end{array} \tag{25}\]
\[\begin{array}{c}\sum_{b=0}^{2^{N}-1}\sum_{b^{\prime}=0}^{2^{N}-1} \mathrm{e}^{-\beta\mathcal{E}_{b}}\langle b|c_{k^{\prime}}^{\dagger}(t)|b^{ \prime}\rangle\langle b^{\prime}|c_{k}\left(t^{\prime}\right)|b\rangle\\ =\delta_{kk^{\prime}}\sum_{\{b|b_{k}=1\}}\mathrm{e}^{-\beta\mathcal{E}_{b}} \mathrm{e}^{\frac{\mathrm{i}}{\varepsilon}E_{k}(t-t^{\prime})},\end{array} \tag{26}\]
Let
\[C_{k}^{+}(\tau)=\frac{1}{Z_{B}}\sum_{\{b|b_{k}=0\}}\mathrm{e}^{-\beta\mathcal{E}_{b}}\mathrm{e}^{-\frac{\mathrm{i}}{\varepsilon}E_{k}\tau},\qquad C_{k}^{-}(\tau)=\frac{1}{Z_{B}}\sum_{\{b|b_{k}=1\}}\mathrm{e}^{-\beta\mathcal{E}_{b}}\mathrm{e}^{\frac{\mathrm{i}}{\varepsilon}E_{k}\tau}. \tag{27}\]
Then, combining Eqs. (22), (25) and (26), we have \(C_{ik,i^{\prime}k^{\prime}}(\tau)=\delta_{kk^{\prime}}C_{ii^{\prime},k}(\tau)\), where
\[\begin{split} C_{11,k}(\tau)&=C_{k}^{+}(\tau)+C_{k}^{- }(\tau),\\ C_{22,k}(\tau)&=C_{k}^{+}(\tau)+C_{k}^{-}(\tau),\\ C_{12,k}(\tau)&=-\mathrm{i}C_{k}^{+}(\tau)+ \mathrm{i}C_{k}^{-}(\tau),\\ C_{21,k}(\tau)&=\mathrm{i}C_{k}^{+}(\tau)-\mathrm{i} C_{k}^{-}(\tau).\end{split} \tag{28}\]
What's left to do is to calculate out \(C_{k}^{\pm}(\tau)\). Note that
\[C_{k}^{+}(\tau)+C_{k}^{-}(-\tau)=\frac{1}{Z_{B}}\sum_{b}\mathrm{e}^{-\beta\mathcal{E}_{b}}\mathrm{e}^{-\frac{\mathrm{i}}{\varepsilon}E_{k}\tau}=\mathrm{e}^{-\frac{\mathrm{i}}{\varepsilon}E_{k}\tau},\]
\[\frac{C_{k}^{+}(\tau)}{C_{k}^{-}(-\tau)}=\frac{\sum_{\{b|b_{k}=0\}}\mathrm{e}^{-\beta\mathcal{E}_{b}}\mathrm{e}^{-\frac{\mathrm{i}}{\varepsilon}E_{k}\tau}}{\sum_{\{b|b_{k}=1\}}\mathrm{e}^{-\beta\mathcal{E}_{b}}\mathrm{e}^{-\frac{\mathrm{i}}{\varepsilon}E_{k}\tau}}=\mathrm{e}^{\beta E_{k}},\]
therefore we have
\[C_{k}^{+}(\tau)=\frac{\exp{(-\frac{i}{\varepsilon}E_{k}\tau)}}{1+\exp{(- \beta E_{k})}},\quad C_{k}^{-}(\tau)=\frac{\exp{(\frac{i}{\varepsilon}E_{k} \tau)}}{1+\exp{(\beta E_{k})}}. \tag{29}\]
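The closed-form result in Eq. (29) can be checked numerically against the defining partial sums over the bath occupation states; the following Python sketch does so for a small illustrative bath (the bath size, orbital energies, and parameter values are arbitrary choices for this check, not taken from the paper).

```python
import numpy as np
from itertools import product

# Illustrative parameters (assumptions): a small discrete bath with N orbitals
N, beta, eps, tau = 4, 1.0, 0.1, 0.3
E = np.linspace(-2.0, 2.0, N)                 # discrete orbital energies E_k

def C_pm_from_sums(k):
    """Evaluate C_k^{+/-}(tau) directly from the partial sums over occupation states b."""
    Zb, Cp, Cm = 0.0, 0.0, 0.0
    for b in product((0, 1), repeat=N):       # all 2^N occupation patterns
        w = np.exp(-beta * np.dot(b, E))      # Boltzmann weight exp(-beta * E_b)
        Zb += w
        if b[k] == 0:
            Cp += w * np.exp(-1j * E[k] * tau / eps)
        else:
            Cm += w * np.exp(+1j * E[k] * tau / eps)
    return Cp / Zb, Cm / Zb

def C_pm_closed_form(k):
    """Closed-form expressions of Eq. (29)."""
    Cp = np.exp(-1j * E[k] * tau / eps) / (1 + np.exp(-beta * E[k]))
    Cm = np.exp(+1j * E[k] * tau / eps) / (1 + np.exp(+beta * E[k]))
    return Cp, Cm

for k in range(N):
    assert np.allclose(C_pm_from_sums(k), C_pm_closed_form(k))
```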
### Towards the nMSSE model
Now we are ready to write down the nMSSE for the Anderson-Holstein model. Let us derive the formula in the spatial representation. The wavefunction has two components:
\[\Psi(x,t)=\left(\begin{array}{c}\psi_{0}(x,t)\\ \psi_{1}(x,t)\end{array}\right). \tag{30}\]
The first term on the right hand side of Eq. (17) is the system Hamiltonian itself:
\[\begin{split}&\hat{H}_{S}\left(\begin{array}{c}\psi_{0}(x,t)\\ \psi_{1}(x,t)\end{array}\right)\\ =&\left(\begin{array}{cc}-\frac{\varepsilon^{2}}{2}\Delta+U_{0}(x)&\\ &-\frac{\varepsilon^{2}}{2}\Delta+U_{1}(x)\end{array}\right)\left(\begin{array} []{c}\psi_{0}(x,t)\\ \psi_{1}(x,t)\end{array}\right).\end{split} \tag{31}\]
The second term is the stochastic noise, and could be rewritten as
\[\begin{split}&\lambda\sum_{ik}\eta_{ik}(t)\hat{S}_{ik}\Psi(x,t)\\ &=\lambda\sum_{k}\left(\frac{\mathrm{i}}{2}V_{k}(x)\eta_{1k}(t) \left(\begin{array}{c}\mathrm{i}\psi_{1}\\ -\mathrm{i}\psi_{0}\end{array}\right)\right.\\ &\qquad\quad-\left.\frac{\mathrm{i}}{2}V_{k}(x)\eta_{2k}(t)\left( \begin{array}{c}\psi_{1}\\ \psi_{0}\end{array}\right)\right)\\ =&-\frac{\mathrm{i}\lambda}{2}\sum_{k=1}^{N}V_{k}(x)\left(\eta_{2k}(t) \sigma_{x}+\eta_{1k}(t)\sigma_{y}\right)\left(\begin{array}{c}\psi_{0}(x,t) \\ \psi_{1}(x,t)\end{array}\right),\end{split} \tag{32}\]
and the stochastic noise \(\eta_{ik}\) satisfies that:
\[\begin{split}&\mathbb{E}(\eta_{ik}(t))=0,\quad\mathbb{E}(\eta_{ik} (t)\eta_{i^{\prime}k^{\prime}}(t^{\prime}))=0,\\ &\mathbb{E}\left(\eta_{ik}^{*}(t)\eta_{i^{\prime}k^{\prime}} (t^{\prime})\right)=\delta_{kk^{\prime}}C_{ii^{\prime},k}(t-t^{\prime}),\quad i,i^{\prime}=1,2.\end{split} \tag{33}\]
The non-Markovian damping term is
\[-\mathrm{i}\frac{\lambda^{2}}{\varepsilon}\int_{0}^{t}\mathrm{d}\tau\sum_{ik, i^{\prime}k^{\prime}}C_{ik,i^{\prime}k^{\prime}}(\tau)\hat{S}_{ik}\mathrm{e}^{- \mathrm{i}\hat{H}_{s}\tau}\hat{S}_{i^{\prime}k^{\prime}}\psi(x,t-\tau).\]
Note that
\[\mathrm{e}^{-\frac{i}{\varepsilon}\hat{H}_{s}\tau}=\left(\begin{array}{cc} \mathrm{e}^{-\frac{i}{\varepsilon}\hat{h}_{0}\tau}&\\ &\mathrm{e}^{-\frac{i}{\varepsilon}\hat{h}_{1}\tau}\end{array}\right) \tag{34}\]
With a straightforward calculation, we have
\[\sum_{i,j=1}^{2}\sum_{k=1}^{N}C_{ij,k}(\tau)\hat{S}_{ik}e^{-\frac{i}{ \varepsilon}\hat{h}_{s}\tau}\hat{S}_{jk}\psi(x,t-\tau) \tag{35}\] \[=-\sum_{k=1}^{N}\left(\begin{array}{c}C_{k}^{-}(\tau)V_{k}(x) \mathrm{e}^{-\frac{i}{\varepsilon}\hat{h}_{1}\tau}V_{k}(x)\psi_{0}(x,t-\tau) \\ C_{k}^{+}(\tau)V_{k}(x)\mathrm{e}^{-\frac{i}{\varepsilon}\hat{h}_{0}\tau}V_{k}(x )\psi_{1}(x,t-\tau)\end{array}\right).\]
Here \(\mathrm{Diag}\left(\begin{array}{c}a\\ b\end{array}\right)\) means \(\left(\begin{array}{cc}a&0\\ 0&b\end{array}\right)\). Note that the operators \(\mathrm{e}^{-\mathrm{i}\hat{h}_{0,1}\tau}\) and \(V_{k}\) typically do not commute.
Combining Eqs. (17), (31), (32) and (35), we arrive at the discretized nMSSE model for AH impurities:
\[\begin{split}&\mathrm{i}\varepsilon\frac{\partial}{\partial t} \left(\begin{array}{c}\psi_{0}(x,t)\\ \psi_{1}(x,t)\end{array}\right)=\left(\begin{array}{c}\hat{h}_{0}\psi_{0}(x,t )\\ \hat{h}_{1}\psi_{1}(x,t)\end{array}\right)\\ &-\frac{\mathrm{i}\lambda}{2}\sum_{k=1}^{N}V_{k}(x)\left(\eta_{2k}(t) \sigma_{x}+\eta_{1k}(t)\sigma_{y}\right)\left(\begin{array}{c}\psi_{0}(x,t) \\ \psi_{1}(x,t)\end{array}\right)\\ &+\mathrm{i}\frac{\lambda^{2}}{\varepsilon}\int_{0}^{t}\mathrm{d}\tau \left(\begin{array}{c}\sum_{k}C_{k}^{-}(\tau)V_{k}\mathrm{e}^{-\frac{i}{ \varepsilon}\hat{h}_{1}\tau}V_{k}\psi_{0}(x,t-\tau)\\ \sum_{k}C_{k}^{+}(\tau)V_{k}\mathrm{e}^{-\frac{i}{\varepsilon}\hat{h}_{0}\tau}V_{k} \psi_{1}(x,t-\tau)\end{array}\right).\end{split} \tag{36}\]
where \(\hat{h}_{0,1}=-\frac{\varepsilon^{2}}{2}\Delta+U_{0,1}(x)\), and \(\sigma_{x,y}\) are the Pauli matrices. The complex-valued Gaussian noise \(\eta_{ik}(t)\) is defined in Eq. (33), and the memory kernel \(C_{k}^{\pm}(\tau)\) is defined in Eq. (29).
### Wide band limit and continuous band limit
In the wide band limit, the system-bath coupling \(V(E,x)\) (or, in the discrete setting, \(V_{k}(x)\)) is considered to be independent of the metal spectrum \(E\) (or the discrete band index \(k\)), i.e. \(V(E,x)=V(x)\) for any \(E\in[E_{-},E_{+}]\), and \(V_{k}(x)=\sqrt{h_{N}}V(x)\). Let us define the total noise \(\xi_{i}^{(N)}(t)\)
\[\xi_{i}^{(N)}(t)=\sqrt{h_{N}}\sum_{k=1}^{N}\eta_{ik}(t).\]
Recalling Eq. (33), we have
\[\begin{split}&\mathbb{E}(\xi_{i}^{(N)}(t))=0,\quad\mathbb{E}(\xi_{i}^{(N)}(t)\xi_{i^{\prime}}^{(N)}(t^{\prime}))=0,\\ &\mathbb{E}(\xi_{i}^{(N)*}(t)\xi_{i^{\prime}}^{(N)}(t^{\prime}))\\ =& h_{N}\sum_{kk^{\prime}}\mathbb{E}\left(\eta_{ik}^{*}(t)\eta_{i^{\prime}k^{\prime}}\left(t^{\prime}\right)\right)=h_{N}\sum_{k=1}^{N}C_{ii^{\prime},k}(t-t^{\prime}).\end{split} \tag{37}\]
For \(i=1\), \(i^{\prime}=1\), we have
\[\begin{split}\mathbb{E}(\xi_{1}^{(N)*}(t)\xi_{1}^{(N)}(t^{\prime }))&=h_{N}\sum_{k=1}^{N}C_{11,k}(t-t^{\prime})\\ &=h_{N}\sum_{k=1}^{N}\left(C_{k}^{+}(t-t^{\prime})+C_{k}^{-}(t-t ^{\prime})\right).\end{split}\]
Taking the continuous band limit, i.e. let \(N\to\infty\), we have
\[\begin{split} h_{N}\sum_{k=1}^{N}C_{k}^{\pm}(\tau)& =h_{N}\sum_{k=1}^{N}\frac{\exp\left(\mp\frac{\mathrm{i}}{\varepsilon}E_{k} \tau\right)}{1+\exp\left(\mp\beta E_{k}\right)}\\ &\to\int_{E_{-}}^{E_{+}}\frac{\exp\left(\mp\frac{\mathrm{i}}{ \varepsilon}E\tau\right)}{1+\exp\left(\mp\beta E\right)}\mathrm{d}E=:c^{\pm} (\tau).\end{split}\]
For \((i,i^{\prime})=(1,2),(2,1),(2,2)\), the computations are analogous. Therefore, as \(N\to\infty\), \(\xi_{i}^{(N)}(t)\) converges to \(\xi_{i}(t)\), which satisfies
\[\begin{split}&\mathbb{E}(\xi_{i}(t))=0,\quad\mathbb{E}(\xi_{i}(t) \xi_{i^{\prime}}(t^{\prime}))=0,\quad i=1,2,\\ &\mathbb{E}(\xi_{1}^{*}(t)\xi_{1}(t^{\prime}))=c^{+}(t-t^{\prime} )+c^{-}(t-t^{\prime}),\\ &\mathbb{E}(\xi_{2}^{*}(t)\xi_{2}(t^{\prime}))=c^{+}(t-t^{\prime })+c^{-}(t-t^{\prime}),\\ &\mathbb{E}(\xi_{1}^{*}(t)\xi_{2}(t^{\prime}))=-\mathrm{i}c^{+}( t-t^{\prime})+\mathrm{i}c^{-}(t-t^{\prime}),\\ &\mathbb{E}(\xi_{2}^{*}(t)\xi_{1}(t^{\prime}))=\mathrm{i}c^{+}(t- t^{\prime})-\mathrm{i}c^{-}(t-t^{\prime}).\end{split} \tag{38}\]
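These limiting correlation functions have no closed form at finite temperature, but they are straightforward to tabulate numerically. The following numpy sketch evaluates \(c^{\pm}(\tau)\) by a simple Riemann sum; the values of \(\varepsilon\), \(\beta\) and the band edges \(E_{\pm}\) are illustrative and not taken from the text.

```python
import numpy as np

def band_correlation(tau, eps=0.1, beta=1.0, E_minus=-10.0, E_plus=10.0, n_E=4000):
    """c^{+/-}(tau) = int_{E-}^{E+} exp(-/+ i E tau / eps) / (1 + exp(-/+ beta E)) dE,
    evaluated by a plain Riemann sum (illustrative parameters)."""
    E = np.linspace(E_minus, E_plus, n_E)
    dE = E[1] - E[0]
    c_plus = np.sum(np.exp(-1j * E * tau / eps) / (1.0 + np.exp(-beta * E))) * dE
    c_minus = np.sum(np.exp(+1j * E * tau / eps) / (1.0 + np.exp(+beta * E))) * dE
    return c_plus, c_minus

# Tabulate the memory kernel on a short time window
taus = np.linspace(0.0, 2.0, 21)
kernel = np.array([band_correlation(t) for t in taus])
```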
Now the noise term Eq. (32) becomes:
\[\begin{split}&-\frac{\mathrm{i}\lambda}{2}\sum_{k=1}^{N}V_{k}(x) \left(\eta_{2k}(t)\sigma_{x}+\eta_{1k}(t)\sigma_{y}\right)\left(\begin{array} []{c}\psi_{0}(x,t)\\ \psi_{1}(x,t)\end{array}\right)\\ &\xrightarrow{N\to\infty}-\frac{\mathrm{i}\lambda}{2}V(x)\left( \xi_{2}(t)\sigma_{x}+\xi_{1}(t)\sigma_{y}\right)\left(\begin{array}{c}\psi _{0}(x,t)\\ \psi_{1}(x,t)\end{array}\right).\end{split} \tag{39}\]
In the wide band limit and the continuous band limit, we also have
\[\begin{split}&\sum_{k=1}^{N}C_{k}^{\pm}(\tau)V_{k}(x)\mathrm{e}^{ -\frac{\mathrm{i}}{\varepsilon}h_{0,1}\tau}V_{k}(x)\\ &=\sum_{k=1}^{N}h_{N}C_{k}^{\pm}(\tau)V(x)\mathrm{e}^{-\frac{ \mathrm{i}}{\varepsilon}h_{0,1}\tau}V(x)\\ &\to c^{\pm}(\tau)V(x)\mathrm{e}^{-\frac{\mathrm{i}}{\varepsilon}h_{0,1} \tau}V(x).\end{split}\]
Therefore Eq. (35) becomes
\[-\left(\begin{array}{c}c^{-}(\tau)V(x)\mathrm{e}^{-\frac{\mathrm{i}}{ \varepsilon}h_{1}\tau}V(x)\psi_{0}(x,t-\tau)\\ c^{+}(\tau)V(x)\mathrm{e}^{-\frac{\mathrm{i}}{\varepsilon}h_{0}\tau}V(x)\psi_{ 1}(x,t-\tau)\end{array}\right). \tag{40}\]
Replacing the noise and non-Markovian terms in Eq. (36) with Eqs. (39) and (40), we arrive at the nMSSE model in the wide and continuous band limit:
\[\begin{split}&\mathrm{i}\varepsilon\frac{\partial}{\partial t} \left(\begin{array}{c}\psi_{0}(x,t)\\ \psi_{1}(x,t)\end{array}\right)=\left(\begin{array}{c}\hat{h}_{0}\psi_{0}(x,t )\\ \hat{h}_{1}\psi_{1}(x,t)\end{array}\right)\\ &-\frac{\mathrm{i}\lambda V(x)}{2}\left(\xi_{2}(t)\sigma_{x}+\xi_{1}(t) \sigma_{y}\right)\left(\begin{array}{c}\psi_{0}(x,t)\\ \psi_{1}(x,t)\end{array}\right)\\ &+\mathrm{i}\frac{\lambda^{2}}{\varepsilon}\int_{0}^{t}\mathrm{d}\tau \left(\begin{array}{c}c^{-}(\tau)V(x)\mathrm{e}^{-\frac{\mathrm{i}}{ \varepsilon}\hat{h}_{1}\tau}V(x)\psi_{0}(x,t-\tau)\\ c^{+}(\tau)V(x)\mathrm{e}^{-\frac{\mathrm{i}}{\varepsilon}\hat{h}_{0}\tau}V(x) \psi_{1}(x,t-\tau)\end{array}\right).\end{split} \tag{41}\]
The wide band limit still retains interesting physics, since the separation of variables preserves the spatial inhomogeneity of the stochastic noise.
In the infinite temperature limit, i.e. \(\beta=0\), if \(E_{\pm}=\pm\infty\), then the correlation function becomes
\[c^{\pm}(\tau)=\int_{-\infty}^{+\infty}\exp\left(\mp\frac{\mathrm{i}}{\varepsilon}E\tau\right)\mathrm{d}E=2\pi\varepsilon\delta(\tau). \tag{42}\]
## III From SSE to QME
Quantum master equations (QME) are often used as semi-empirical models when studying open quantum systems such as metal surfaces [48; 49; 14]. Here we show that QME models of the Anderson-Holstein impurities are a second-step approximation to the SSE models in the interaction picture, and that the well-known Redfield equation can be recovered with a further Markovian approximation.
### Analytic derivation
In Eq. (41), we can regard the right-hand side as an effective Hamiltonian acting on the wavefunction, \(\hat{H}_{S}|\Psi\rangle+\hat{H}_{\mathrm{int}}|\Psi\rangle\), where \(\hat{H}_{\mathrm{int}}\) describes the effective interaction between the system and the bath:
\[\begin{split}&\hat{H}_{\mathrm{int}}\left(\begin{array}{c}\psi_{0}(x, t)\\ \psi_{1}(x,t)\end{array}\right)\\ =&-\frac{\mathrm{i}\lambda V(x)}{2}\left(\xi_{2}(t)\sigma_{x}+\xi_{1}(t) \sigma_{y}\right)\left(\begin{array}{c}\psi_{0}(x,t)\\ \psi_{1}(x,t)\end{array}\right)\\ &+\mathrm{i}\frac{\lambda^{2}}{\varepsilon}\int_{0}^{t}\mathrm{d}\tau \left(\begin{array}{c}c^{-}(\tau)V(x)\mathrm{e}^{-\frac{\mathrm{i}}{ \varepsilon}\hat{h}_{1}\tau}V(x)\psi_{0}(x,t-\tau)\\ c^{+}(\tau)V(x)\mathrm{e}^{-\frac{\mathrm{i}}{\varepsilon}\hat{h}_{0}\tau}V(x)\psi_{1}(x,t- \tau)\end{array}\right)\\ =&\left(\lambda\hat{H}_{\mathrm{int}}^{(1)}(t)+\frac{\lambda^{2}}{\varepsilon} \hat{H}_{\mathrm{int}}^{(2)}(t)\right)\left(\begin{array}{c}\psi_{0}(x,t)\\ \psi_{1}(x,t)\end{array}\right).\end{split} \tag{43}\]
Here we define \(\hat{H}_{\mathrm{int}}^{(1)}(t)\) and \(\hat{H}_{\mathrm{int}}^{(2)}(t)\) based on the order of \(\lambda\).
Let \(\Psi_{I}(x,t)\) be the wavefunction in the interaction picture. Then \(\Psi_{I}(x,t)\) satisfies that:
\[\mathrm{i}\varepsilon\frac{\partial}{\partial t}\Psi_{I}(x,t)=\hat{H}_{\mathrm{ int},I}(t)\Psi_{I}(x,t), \tag{44}\]
where \(\hat{H}_{\text{int},I}(t)\) is \(\hat{H}_{\text{int}}(t)\) in the interaction picture:
\[\hat{H}_{\text{int},I}(t)=\text{e}^{\frac{\text{i}}{\varepsilon}\hat{H}_{S}t} \hat{H}_{\text{int}}(t)\text{e}^{-\frac{\text{i}}{\varepsilon}\hat{H}_{S}t}. \tag{45}\]
Note that we have the expansion \(\hat{H}_{\text{int},I}(t)=\lambda\hat{H}_{\text{int},I}^{(1)}(t)+\frac{ \lambda^{2}}{\varepsilon}\hat{H}_{\text{int},I}^{(2)}(t)\).
The quantum master equation describes the evolution of the expectation value of the density operator \(\hat{\rho}_{I}=|\Psi_{I}\rangle\langle\Psi_{I}|\). If the wavefunction \(|\Psi_{I}\rangle\) has the following asymptotic expansion:
\[|\Psi_{I}\rangle=|\Psi_{I}^{(0)}\rangle+\frac{\lambda}{\varepsilon}|\Psi_{I}^{ (1)}\rangle+\left(\frac{\lambda}{\varepsilon}\right)^{2}|\Psi_{I}^{(2)} \rangle+O\left(\left(\frac{\lambda}{\varepsilon}\right)^{3}\right), \tag{47}\]
then the expectation of the density matrix becomes
\[\mathbb{E}\hat{\rho}_{I} =\mathbb{E}\hat{\rho}_{I}^{(0)}+\frac{\lambda}{\varepsilon} \mathbb{E}\hat{\rho}_{I}^{(1)}+\left(\frac{\lambda}{\varepsilon}\right)^{2} \mathbb{E}\hat{\rho}_{I}^{(2)}+O\left(\left(\frac{\lambda}{\varepsilon}\right) ^{3}\right), \tag{48}\] \[\hat{\rho}_{I}^{(0)} =|\Psi_{I}^{(0)}\rangle\langle\Psi_{I}^{(0)}|,\quad\hat{\rho}_{ I}^{(1)}=|\Psi_{I}^{(0)}\rangle\langle\Psi_{I}^{(1)}|+|\Psi_{I}^{(1)}\rangle \langle\Psi_{I}^{(0)}|,\] \[\hat{\rho}_{I}^{(2)} =|\Psi_{I}^{(0)}\rangle\langle\Psi_{I}^{(2)}|+|\Psi_{I}^{(2)} \rangle\langle\Psi_{I}^{(0)}|+|\Psi_{I}^{(1)}\rangle\langle\Psi_{I}^{(1)}|.\]
In the current work, \(\varepsilon\) is fixed, and considered to be \(O(1)\).
Now we want to find \(|\Psi_{I}^{(0)}\rangle,|\Psi_{I}^{(1)}\rangle,|\Psi_{I}^{(2)}\rangle\). Integrating Eq. (44), and then substituting into itself, recalling that \(\hat{H}_{\text{int},I}(t)=\lambda\hat{H}_{\text{int},I}^{(1)}(t)+\frac{\lambda ^{2}}{\varepsilon}\hat{H}_{\text{int},I}^{(2)}(t)\) we have
\[\Psi_{I}(x,t) \tag{49}\] \[=\Psi_{I}(x,0)-\frac{\text{i}}{\varepsilon}\int_{0}^{t}\hat{H}_{ \text{int},I}(t_{1})\Psi_{I}(x,t_{1})\text{d}t_{1}\] \[=\Psi_{I}(x,0)-\frac{\text{i}}{\varepsilon}\left(\int_{0}^{t}\hat {H}_{\text{int},I}(t_{1})\text{d}t_{1}\right)\Psi_{I}(x,0)\] \[+\left(-\frac{\text{i}}{\varepsilon}\right)^{2}\int_{0}^{t}\text{ d}t_{1}\int_{0}^{t_{1}}\text{d}t_{2}\hat{H}_{\text{int},I}(t_{1})H_{\text{int},I}(t_{2}) \Psi_{I}(x,t_{2})\] \[=\Psi_{I}(x,0)-\frac{\text{i}}{\varepsilon}\lambda\left(\int_{0}^ {t}\hat{H}_{\text{int},I}^{(1)}(t_{1})\text{d}t_{1}\right)\Psi_{I}(x,0)\] \[-\frac{\text{i}}{\varepsilon}\frac{\lambda^{2}}{\varepsilon}\left( \int_{0}^{t}\hat{H}_{\text{int},I}^{(2)}(t_{1})\text{d}t_{1}\right)\Psi_{I}(x,0)\] \[+\left(-\frac{\text{i}}{\varepsilon}\right)^{2}\lambda^{2}\int_{0} ^{t}\text{d}t_{1}\int_{0}^{t_{1}}\text{d}t_{2}\hat{H}_{\text{int},I}^{(1)}(t_{ 1})H_{\text{int},I}^{(1)}(t_{2})\Psi_{I}^{(0)}(x,0)\] \[+O\left(\left(\frac{\lambda}{\varepsilon}\right)^{3}\right).\]
Therefore we have
\[\Psi_{I}^{(0)}(x,t) =\Psi_{I}(x,0), \tag{50}\] \[\Psi_{I}^{(1)}(x,t) =-\text{i}\left(\int_{0}^{t}\hat{H}_{\text{int},I}^{(1)}(t_{1}) \text{d}t_{1}\right)\Psi_{I}(x,0)\] \[\Psi_{I}^{(2)}(x,t) =-\text{i}\left(\int_{0}^{t}\hat{H}_{\text{int},I}^{(2)}(t_{1}) \text{d}t_{1}\right)\Psi_{I}(x,0)\] \[-\int_{0}^{t}\text{d}t_{1}\int_{0}^{t_{1}}\text{d}t_{2}\hat{H}_{ \text{int},I}^{(1)}(t_{1})H_{\text{int},I}^{(1)}(t_{2})\Psi_{I}(x,0).\]
Note that \(\mathbb{E}\Psi_{I}^{(1)}(x,t)=0\) since the noise term has zero mean, therefore
\[\mathbb{E}\hat{\rho}_{I}^{(0)} =\mathbb{E}\left(|\Psi_{I}^{(0)}\rangle\langle\Psi_{I}^{(0)}| \right)=|\Psi_{I}^{(0)}\rangle\langle\Psi_{I}^{(0)}|. \tag{51}\] \[\mathbb{E}\hat{\rho}_{I}^{(1)} =\mathbb{E}\left(|\Psi_{I}^{(0)}\rangle\langle\Psi_{I}^{(1)}|+| \Psi_{I}^{(1)}\rangle\langle\Psi_{I}^{(0)}|\right)\] \[=|\Psi_{I}^{(0)}\rangle\mathbb{E}\left(\langle\Psi_{I}^{(1)}| \right)+\mathbb{E}\left(|\Psi_{I}^{(1)}\rangle\right)\langle\Psi_{I}^{(0)}|=0.\] \[\mathbb{E}\hat{\rho}_{I}^{(2)} =\mathbb{E}\left(|\Psi_{I}^{(0)}\rangle\langle\Psi_{I}^{(2)}|+| \Psi_{I}^{(2)}\rangle\langle\Psi_{I}^{(0)}|+|\Psi_{I}^{(1)}\rangle\langle\Psi_{I }^{(1)}|\right)\] \[=|\Psi_{I}^{(0)}\rangle\mathbb{E}\left(\langle\Psi_{I}^{(2)}| \right)+\mathbb{E}\left(|\Psi_{I}^{(2)}\rangle\right)\langle\Psi_{I}^{(0)}|\] \[+\mathbb{E}\left(|\Psi_{I}^{(1)}\rangle\langle\Psi_{I}^{(1)}| \right).\]
Let \(\rho_{I}(t)=\mathbb{E}\hat{\rho}_{I}(t)\). With Eq. (48), we can differentiate \(\rho_{I}(t)\) with respect to \(t\) and truncate terms that are higher than second order in \(\lambda\), and then obtain the non-Markovian quantum master equation (nMQME):
\[\begin{split}\frac{\text{d}\rho_{I}(x,x^{\prime},t)}{\text{d}t}=&\frac{\lambda^{2}}{\varepsilon^{2}}\left(\int_{0}^{t}\mathbf{c}_{h}(x,\tau)\rho_{I}(x,x^{\prime},t-\tau)\text{d}\tau\right.\\ &\left.+\int_{0}^{t}\mathbf{c}(x,x^{\prime},\tau)\,\rho_{I}^{(d)}(x,x^{\prime},t-\tau)\text{d}\tau+\text{h.c.}\right)+O\left(\left(\frac{\lambda}{\varepsilon}\right)^{3}\right),\\ &\rho_{I}(0)=\rho_{0},\end{split} \tag{52}\]
where h.c. represents the Hermitian conjugate of the preceding terms, and
\[\mathbf{c}(x,x^{\prime},\tau)=\left(\begin{array}{cc}c^{+}(\tau)V(x)V(x^{ \prime})&\\ &c^{-}(\tau)V(x)V(x^{\prime})\end{array}\right), \tag{53}\] \[\mathbf{c}_{h}(x,\tau)\] \[= \left(\begin{array}{cc}c^{-}(\tau)V(x)\text{e}^{-\text{i}\hat{h}_ {1}\tau}V(x)&\\ &c^{+}(\tau)V(x)\text{e}^{-\text{i}\hat{h}_{0}\tau}V(x)\end{array}\right),\] \[\rho_{I}^{(d)}(x,x^{\prime},\tau)=\left(\begin{array}{cc}\rho_{I,11}(x,x^{\prime},\tau)&\\ &\rho_{I,00}(x,x^{\prime},\tau)\end{array}\right).\]
For the detailed derivation of Eq. (52), which amounts to further simplifying Eq. (51) and taking its time derivative, please see Appendix A.
More generally, the above master equation can be derived starting from any initial time \(t_{0}\) other than \(t_{0}=0\). In that case, transformed back to the Schrodinger picture, the quantum master equation becomes
\[\begin{split}\frac{\mathrm{d}\rho(x,x^{\prime},t)}{\mathrm{d}t}=&-\mathrm{i}\left(\hat{H}_{S}(x)-\hat{H}_{S}(x^{\prime})\right)\rho(x,x^{\prime},t)\\ &+\frac{\lambda^{2}}{\varepsilon^{2}}\mathrm{e}^{-\mathrm{i}\hat{H}_{S}(x)(t-t_{0})}\mathrm{e}^{\mathrm{i}\hat{H}_{S}(x^{\prime})(t-t_{0})}\left(\int_{t_{0}}^{t}\mathrm{d}\tau\,\mathbf{c}_{h}(x,t-\tau)\mathrm{e}^{\mathrm{i}\hat{H}_{S}(x)(\tau-t_{0})}\mathrm{e}^{-\mathrm{i}\hat{H}_{S}(x^{\prime})(\tau-t_{0})}\rho(x,x^{\prime},\tau)\right.\\ &\qquad\left.+\int_{t_{0}}^{t}\mathrm{d}\tau\,\mathbf{c}(x,x^{\prime},t-\tau)\sigma_{x}\mathrm{e}^{\mathrm{i}\hat{H}_{S}(x)(\tau-t_{0})}\mathrm{e}^{-\mathrm{i}\hat{H}_{S}(x^{\prime})(\tau-t_{0})}\rho(x,x^{\prime},\tau)\sigma_{x}+\mathrm{h.c.}\right),\\ &\rho(t_{0})=\rho_{I}(t_{0})=\rho_{0}.\end{split} \tag{51}\]
Here \(\sigma_{x}=\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right)\) as mentioned in Eq. (13).
### Born-Markov approximation, finite-history Quantum Master Equation (FH-QME), Redfield equation, and their corresponding SSE
The non-Markovian quantum master equations Eq. (50) are mathematically complicated and computationally expensive. The widely used Born-Markov approximation can be performed under the conditions that (1) the interaction between the system and the bath is weak, and (2) the correlation time of the bath is much shorter than the characteristic time of the system. With the basic assumption \(\rho_{I}(t-\tau)\approx\rho_{I}(t)\), the nMQME is reduced to
\[\begin{split}\frac{\mathrm{d}\rho_{I}(x,x^{\prime},t)}{\mathrm{d}t }&=\frac{\lambda^{2}}{\varepsilon^{2}}\left(\int_{0}^{t-t_{0}} \mathbf{c}_{h}(x,\tau)\mathrm{d}\tau\rho_{I}(x,x^{\prime},t)\right.\\ &\left.+\int_{0}^{t-t_{0}}\mathbf{c}(x,x^{\prime},\tau)\mathrm{d} \tau\rho_{I}^{(d)}(x,x^{\prime},t)\right)+h.c.,\\ &\rho_{I}(t_{0})=\rho_{0}.\end{split} \tag{52}\]
If \(t_{0}\) is set to be \(0\), we refer to Eq. (52) as the finite-history QME (FH-QME). The corresponding SSE of FH-QME in the interaction picture could be easily obtained by modifying Eq. (43) with the approximation \(\psi_{0,I}(x,t-\tau)\approx\psi_{0,I}(x,t)\) and \(\psi_{1,I}(x,t-\tau)\approx\psi_{1,I}(x,t)\).
\[\begin{split}&\mathrm{i}\varepsilon\frac{\partial}{\partial t}\left(\begin{array}{c}\psi_{0,I}(x,t)\\ \psi_{1,I}(x,t)\end{array}\right)\\ =&-\frac{\mathrm{i}\lambda V(x)}{2}\left(\xi_{2}(t)\sigma_{x}+\xi_{1}(t)\sigma_{y}\right)\left(\begin{array}{c}\psi_{0,I}(x,t)\\ \psi_{1,I}(x,t)\end{array}\right)\\ &+\mathrm{i}\frac{\lambda^{2}}{\varepsilon}\int_{0}^{t}\mathrm{d}\tau\left(\begin{array}{c}c^{-}(\tau)V(x)\mathrm{e}^{-\frac{\mathrm{i}}{\varepsilon}\hat{h}_{1}\tau}V(x)\psi_{0,I}(x,t)\\ c^{+}(\tau)V(x)\mathrm{e}^{-\frac{\mathrm{i}}{\varepsilon}\hat{h}_{0}\tau}V(x)\psi_{1,I}(x,t)\end{array}\right)\end{split} \tag{53}\]
If we let \(t_{0}\to-\infty\) in Eq. (52), we obtain the infinite-history QME, also known as the Redfield equation:
\[\begin{split}\frac{\mathrm{d}\rho_{I}(x,x^{\prime},t)}{\mathrm{d}t }&=\frac{\lambda^{2}}{\varepsilon^{2}}\left(\int_{0}^{\infty}\mathbf{c}_{h} (x,\tau)\mathrm{d}\tau\rho_{I}(x,x^{\prime},t)\right.\\ &\left.+\int_{0}^{\infty}\mathbf{c}(x,x^{\prime},\tau)\mathrm{d} \tau\rho_{I}^{(d)}(x,x^{\prime},t)\right)+h.c.,\\ &\rho_{I}(-\infty)=\rho_{0}.\end{split} \tag{54}\]
If we replace \(\int_{0}^{t}\) with \(\int_{0}^{\infty}\) in Eq. (53), we obtain the infinite-history SSE. The correspondence between SSE and QME, for example non-Markovian SSE (Eq. (41)) and non-Markovian QME (Eq. (49)), finite-history SSE (Eq. (53)) and finite-history QME (Eq. (52)), infinite-history SSE and the Redfield equation (Eq. (54)), can be seen as the quantum analog of the relation between a stochastic process and its corresponding Fokker-Planck equation.
Since the QME is obtained by neglecting higher-order terms in the von Neumann-type equation of the SSE, it should be viewed as a further approximation on top of the SSE. In other words, we have established a hierarchy of models for Anderson-Holstein impurities, as detailed in Fig. 1. Going from SSE to QME, and further to classical master equations (CME), one makes more assumptions and performs more approximations. As a result, one may lose crucial physical features along this hierarchy of approximations.
## IV Numerical Methods
### Noise Generation
To numerically simulate the SSE, we need to generate stochastic noises that are subject to Eq. (38). The conventional noise generation scheme [47] relies on the analytic continuation of the correlation function to the complex plane. Such methods suffer from non-causality and non-physical artifacts due to the numerical instability of many rational approximation schemes [50; 51]. Here we present a stable noise generation scheme which does not rely on the interpolation of any correlation function.
We first rewrite the noise term as follows:
\[-\frac{\mathrm{i}\lambda V(x)}{2}\left(\xi_{2}(t)\sigma_{x}+\xi_{1}( t)\sigma_{y}\right)\left(\begin{array}{c}\psi_{0}(x,t)\\ \psi_{1}(x,t)\end{array}\right)\] \[= \lambda V(x)\left(\begin{array}{c}\widetilde{\xi}_{+}(t)\psi_{1 }(x,t)\\ \widetilde{\xi}_{-}(t)\psi_{0}(x,t)\end{array}\right), \tag{55}\] \[\widetilde{\xi}_{+}(t)= \frac{-\xi_{1}(t)-\mathrm{i}\xi_{2}(t)}{2},\quad\widetilde{\xi}_ {-}(t)=\frac{\xi_{1}(t)-\mathrm{i}\xi_{2}(t)}{2}.\]
where \(\tilde{\xi}_{\pm}(t)\) satisfies that,
\[\mathbb{E}(\widetilde{\xi}_{\pm}(t))=0,\quad\mathbb{E}( \widetilde{\xi}_{+}^{*}(t)\widetilde{\xi}_{-}(t^{\prime}))=0,\] \[\mathbb{E}(\widetilde{\xi}_{\pm}(t)\widetilde{\xi}_{+}(t^{\prime }))=\mathbb{E}(\widetilde{\xi}_{\pm}(t)\widetilde{\xi}_{\mp}(t^{\prime}))=0, \tag{56}\] \[\mathbb{E}(\widetilde{\xi}_{+}^{*}(t)\widetilde{\xi}_{+}(t^{ \prime}))=c^{+}(t-t^{\prime}),\] \[\mathbb{E}(\widetilde{\xi}_{-}^{*}(t)\widetilde{\xi}_{-}(t^{ \prime}))=c^{-}(t-t^{\prime}).\]
In this way, \(\tilde{\xi}_{+}(t)\) and \(\tilde{\xi}_{-}(t)\) are decoupled, and can be generated separately. Let us define
\[W_{\pm}(t)=\int_{0}^{t}\tilde{\xi}_{\pm}(\tau)\mathrm{d}\tau.\]
Then \(W_{\pm}(t)\) is a Gaussian process with covariance function \(K_{\pm}(s,t)\):
\[K_{\pm}(s,t)=\int_{0}^{s}\mathrm{d}\tau_{1}\int_{0}^{t}\mathrm{d}\tau_{2}c^{ \pm}(\tau_{1}-\tau_{2}).\]
In particular, when \(c^{\pm}(\tau_{1}-\tau_{2})=\delta(\tau_{1}-\tau_{2})\), we have \(K_{\pm}(s,t)=\min(s,t)\), and \(W_{\pm}(t)\) is the standard Brownian motion.
Generally, the sampling of \(W_{\pm}(t)\) can be achieved by the Karhunen-Loeve expansion. For a specified maximum time \(T_{\max}\), consider the following eigenvalue problem:
\[\int_{0}^{T_{\max}}K_{\pm}(s,t)\phi_{i}^{\pm}(t)\mathrm{d}t=\lambda_{i}^{\pm} \phi_{i}^{\pm}(s),\quad i=1,2,\ldots \tag{57}\]
with the normalization condition \(\int_{0}^{T_{\max}}\phi_{i}^{\pm}\phi_{j}^{\pm}\mathrm{d}t=\delta_{ij}\). Then \(W_{\pm}(t)\) has the Karhunen-Loeve expansion
\[W_{\pm}(t)=\sum_{k=1}^{\infty}\alpha_{k}^{\pm}\sqrt{\lambda_{k}^{\pm}}\phi_{ k}^{\pm}(t) \tag{58}\]
Figure 1: Hierarchy of various models for Anderson-Holstein impurities.
Here the \(\alpha_{k}\sim N(0,1)\) are i.i.d. random variables. In this way, the task of sampling the time-dependent noise is reduced to sampling scalar time-independent random variables. The eigenvalue problem Eq. (57) could be solved with a finite difference discretization in the time variable.
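For concreteness, the scheme above can be prototyped in a few lines of numpy: discretize \(K_{\pm}(s,t)\) on a uniform grid by cumulative Riemann sums of \(c^{\pm}\), solve the resulting Hermitian eigenvalue problem, and assemble the truncated expansion (58). The sketch below is only illustrative; the grid size, the number of retained modes, and the clipping of small negative eigenvalues caused by round-off are our choices, not prescriptions of the method.

```python
import numpy as np

def sample_W(c, T_max=10.0, M=400, n_modes=50, rng=None):
    """Sample W(t) = int_0^t xi(tau) dtau on a uniform grid via the
    Karhunen-Loeve expansion of Eqs. (57)-(58).
    `c` is a callable returning c^+ or c^- at a (possibly negative) time lag."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T_max / M
    t = np.arange(M + 1) * dt                              # grid t_0, ..., t_M

    # K(s, t) = int_0^s int_0^t c(tau1 - tau2) dtau1 dtau2,
    # approximated by cumulative Riemann sums of the kernel matrix.
    C = np.array([[c(s - u) for u in t] for s in t])
    K = np.cumsum(np.cumsum(C, axis=0), axis=1) * dt**2

    # Discretized eigenproblem (57): (K dt) v = lam v, with phi = v / sqrt(dt).
    lam, v = np.linalg.eigh(K * dt)
    lam, v = lam[::-1], v[:, ::-1]                         # sort descending
    lam = np.clip(lam, 0.0, None)                          # drop round-off negatives

    # Truncated KL expansion (58) with alpha_k ~ N(0, 1).
    alpha = rng.standard_normal(n_modes)
    phi = v[:, :n_modes] / np.sqrt(dt)
    return t, phi @ (alpha * np.sqrt(lam[:n_modes]))
```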
### Time evolution
We are solving Eq. (41) on the spatial domain \([a,b]\) and for time \([0,T]\). We use the following discretization:
\[\begin{split}\Delta x=\frac{(b-a)}{M},&\Delta t= \frac{T}{N}.\\ x_{j}:=a+j\Delta x,& j=0,1,\cdots,M-1,\\ t_{n}:=n\Delta t,& n=0,1,2\cdots,N.\end{split} \tag{59}\]
To numerically solve Eq. (41), our strategy is as follows:
* For the system Hamiltonian \(\hat{H}_{S}\), we use the Time Splitting Spectral method [52];
* For the fluctuation and dissipation terms, we use the Euler-Maruyama algorithm [53].
Let us present the numerical algorithm in detail for the Markovian SSE. With an efficient evaluation of the integration term, the strategy described below is also applicable to the non-Markovian case. In the Markovian limit, \(c^{\pm}(t)\) becomes \(c_{0}^{\pm}\delta(t)\), where \(c_{0}^{\pm}\) are fixed constants. Then Eq. (41) becomes
\[\begin{split}&\mathrm{i}\varepsilon\left(\begin{array}{c} \mathrm{d}\psi_{0}(x,t)\\ \mathrm{d}\psi_{1}(x,t)\end{array}\right)\\ =&\left(\begin{array}{c}\hat{h}_{0}\psi_{0}(x,t)\\ \hat{h}_{1}\psi_{1}(x,t)\end{array}\right)\mathrm{d}t+\lambda V(x)\left( \begin{array}{c}\psi_{1}(x,t)\mathrm{d}W_{1}(t)\\ \psi_{0}(x,t)\mathrm{d}W_{0}(t)\end{array}\right)\\ &+\frac{\mathrm{i}\lambda^{2}}{\varepsilon}|V(x)|^{2}\left(\begin{array}{c }c_{0}^{-}\psi_{0}(x,t)\\ c_{0}^{+}\psi_{1}(x,t)\end{array}\right)\mathrm{d}t,\quad x\in[a,b],\end{split} \tag{60}\]
Let us consider the following initial condition, which means that at \(t=0\) the molecule is neutral:
\[\left(\begin{array}{c}\psi_{0}(x,0)\\ \psi_{1}(x,0)\end{array}\right)=\left(\begin{array}{c}\psi_{0}(x,0)\\ 0\end{array}\right).\]
Denote the numerical solution of \(\psi_{k}(x_{j},t_{n})\) by \(\psi_{k}^{j,n}\), \(k=0,1\). From time \(t=t_{n}\) to time \(t=t_{n+1}\), we proceed as follows:
1. Evolve using the potential term for a time step: \[\begin{split}\psi_{k}^{j,*1}&=\exp\left(-\mathrm{i}\frac{U_{k }(x_{j})}{\varepsilon}\Delta t\right)\!\psi_{k}^{j,n},\\ k&=0,1,\quad j=0,\cdots,M-1,\end{split}\] (61)
2. Evolve using the kinetic term for a time step, but in the frequency domain: for \(k=0,1\), we have \[\begin{cases}\{\widehat{\psi}_{k}^{l,*1}\}_{l=-\frac{M}{2}}^{\frac{M}{2}-1}= \mathrm{FFT}\left(\{\psi_{k}^{j,*1}\}_{j=0}^{M-1}\right),\\ \widehat{\psi}_{k}^{l,*2}&=\widehat{\psi}_{k}^{l,*1}\exp\left(-\mathrm{i} \varepsilon\frac{\mu_{l}^{2}}{2}\Delta t\right),\ l=-\frac{M}{2},\cdots,\frac{ M}{2}-1,\\ \{\psi_{k}^{j,*2}\}_{j=0}^{M-1}&=\mathrm{i}\mathrm{FFT}\left(\{ \widehat{\psi}_{k}^{l,*2}\}_{l=-\frac{M}{2}}^{\frac{M}{2}-1}\right).\end{cases}\] (62) Here \(\mu_{l}=\frac{2\pi l}{b-a}\), FFT and iFFT denote the Fast Fourier Transform and its inverse.
3. Deal with the fluctuation-dissipation term using the Euler-Maruyama scheme: \[\left(\begin{array}{c}\psi_{0}^{j,n+1}\\ \psi_{1}^{j,n+1}\end{array}\right)\\ =\left(\begin{array}{c}\psi_{0}^{j,*2}\\ \psi_{1}^{j,*2}\end{array}\right)-\mathrm{i}\frac{\lambda V(x_{j})}{\varepsilon} \left(\begin{array}{c}\left(W_{1}(t_{n+1})-W_{1}(t_{n})\right)\psi_{1}^{j,*2 }\\ \left(W_{0}(t_{n+1})-W_{0}(t_{n})\right)\psi_{0}^{j,*2}\end{array}\right)\\ +\frac{\lambda^{2}}{\varepsilon^{2}}|V(x_{j})|^{2}\left(\begin{array}{c }c_{0}^{-}\psi_{0}^{j,*2}\\ c_{0}^{+}\psi_{1}^{j,*2}\end{array}\right)\Delta t.\] (63)
**Remark 1**: _Here we use a first-order splitting scheme for the Hamiltonian part, which is sufficient for our problem. Higher-order splitting schemes, such as Strang splitting, also exist._
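For reference, one step of the scheme (61)-(63) can be sketched in numpy as follows. The function and variable names are ours; `c0m` and `c0p` stand for \(c_{0}^{-}\) and \(c_{0}^{+}\), the Brownian increments are supplied by the noise generator of the previous subsection, and the Fourier modes are kept in the standard FFT ordering rather than the symmetric ordering \(l=-M/2,\ldots,M/2-1\) used above, which is an equivalent bookkeeping choice.

```python
import numpy as np

def ssf_step(psi0, psi1, x, dt, eps, lam, U0, U1, V, c0m, c0p, dW0, dW1):
    """One step of Eqs. (61)-(63): potential phase, kinetic phase in Fourier
    space, then an Euler-Maruyama update of the fluctuation-dissipation terms.
    dW0, dW1 are the increments W_{0,1}(t_{n+1}) - W_{0,1}(t_n)."""
    M = x.size
    dx = x[1] - x[0]
    mu = 2.0 * np.pi * np.fft.fftfreq(M, d=dx)       # modes 2*pi*l/(b-a)

    # Step 1: potential term, Eq. (61)
    psi0 = np.exp(-1j * U0(x) / eps * dt) * psi0
    psi1 = np.exp(-1j * U1(x) / eps * dt) * psi1

    # Step 2: kinetic term in Fourier space, Eq. (62)
    kin = np.exp(-1j * eps * mu**2 / 2.0 * dt)
    psi0 = np.fft.ifft(kin * np.fft.fft(psi0))
    psi1 = np.fft.ifft(kin * np.fft.fft(psi1))

    # Step 3: fluctuation-dissipation via Euler-Maruyama, Eq. (63)
    Vx = V(x)
    new0 = psi0 - 1j * lam * Vx / eps * dW1 * psi1 \
           + (lam / eps) ** 2 * np.abs(Vx) ** 2 * c0m * psi0 * dt
    new1 = psi1 - 1j * lam * Vx / eps * dW0 * psi0 \
           + (lam / eps) ** 2 * np.abs(Vx) ** 2 * c0p * psi1 * dt
    return new0, new1
```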
## V Numerical results
The purpose of our numerical experiments is two-fold. On the one hand, we demonstrate that an ensemble of realizations of the SSE can be used to obtain samples of physical observables of interest and thus directly reveal distributional information about these quantities, whereas the QME cannot provide a detailed characterization of the distribution beyond moment information. On the other hand, we show that the SSE and QME approaches indeed share the same thermodynamic equilibrium, while exhibiting different transient dynamics towards reaching that equilibrium.
### Samples of physical observables by SSE
Let \(x\in[a,b]=[-\pi,\pi]\) and the maximal time \(T=10\). For simplicity, in this subsection we choose the interaction strength \(\lambda=\varepsilon\); this choice is convenient but not essential. Let us focus on the following system potentials, which are harmonic with different centers:
\[U_{0}(x)=\frac{1}{2}x^{2},\quad U_{1}(x)=\frac{1}{2}x^{2}+0.1x.\]
We aim to numerically explore SSE (60) in the wide band limit and with \(\delta\)-correlated noise with special attention to investigating the role of the coupling potential \(V(x)\). To this end, we consider the following examples:
**Example 1**. Propagating a Gaussian wavepacket with a bimodal coupling function:
\[\psi_{0}(x,0) =\frac{1}{(\pi\varepsilon)^{\frac{1}{4}}}\exp{\left(\frac{-(x-q_{0} )^{2}}{2\varepsilon}+\mathrm{i}\frac{p_{0}(x-q_{0})}{\varepsilon}\right)}, \tag{64}\] \[V(x) =\exp{\left(-10(x-0.5)^{2}\right)}+\exp{\left(-40(x+2)^{2}-1 \right)},\] \[q_{0} =-1,p_{0}=0.5,\]
**Example 2**. Propagating a Gaussian wavepacket with another bimodal coupling function:
\[\psi_{0}(x) =\frac{1}{(\pi\varepsilon)^{\frac{1}{4}}}\exp{\left(\frac{-(x-q_{ 0})^{2}}{2\varepsilon}+\mathrm{i}\frac{p_{0}(x-q_{0})}{\varepsilon}\right)}, \tag{65}\] \[V(x) =2\exp{\left(-10(x-0.9)^{2}\right)}+5\exp{\left(-40(x+0.5)^{2} \right)},\] \[q_{0} =-1,p_{0}=0.5.\]
**Example 3**. Propagating a non-Gaussian wavepacket with a unimodal coupling function:
\[\psi_{0}(x) \propto\exp{\left(-5(x+1)^{2}+\mathrm{i}\frac{\sin(x)}{ \varepsilon}\right)}, \tag{66}\] \[V(x) =\exp{\left(-10x^{2}\right)}.\]
The coupling functions \(V(x)\) that we considered are shown in Fig. 2. We remark that the experiments are designed such that either the wave packet is expected to exhibit decoherence through the interaction with the bath or is not initialized from a coherent state.
A typical trajectory of the time evolution of the SSE is shown in Fig. 3. We observe that the wave function is initially populated solely on level 0; it oscillates and propagates within a finite region due to the confinement of the harmonic potentials. In particular, only when it passes through an interaction region (where the coupling potential \(V(x)\) is significant) does it partially and stochastically transfer to the other level. The resulting dynamical behavior is therefore rather complicated, and the nonadiabatic phenomenon is nontrivial.
To explore the stochasticity of the SSE model, we examine certain physical observables computed from an ensemble of realizations of SSE (60). We consider the transition rate \(\langle R(t)\rangle\) and the spatial average \(\langle X(t)\rangle\), defined as:
\[\langle R(t)\rangle =\frac{\int|\psi_{0}(x,t)|^{2}\mathrm{d}x}{\int\left(|\psi_{0}(x, t)|^{2}+|\psi_{1}(x,t)|^{2}\right)\mathrm{d}x}, \tag{67}\] \[\langle X(t)\rangle =\frac{\int x\left(|\psi_{0}(x,t)|^{2}+|\psi_{1}(x,t)|^{2}\right) \mathrm{d}x}{\int\left(|\psi_{0}(x,t)|^{2}+|\psi_{1}(x,t)|^{2}\right)\mathrm{d }x},\]
We emphasize that \(\langle\cdot\rangle\) means taking the average over the quantum state obtained from a single realization of SSE (60). Hence, \(\langle R(t)\rangle\) and \(\langle X(t)\rangle\) are random variables due to the stochasticity in the dynamics of (60). We repeat the simulation of each example 4000 times, and the statistics (in the form of histograms) of the observables are shown in Fig. 4, Fig. 5 and Fig. 6, respectively.
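On the spatial grid, both observables reduce to simple Riemann sums in which the grid spacing cancels; a minimal numpy sketch for a single trajectory is given below.

```python
import numpy as np

def observables(psi0, psi1, x):
    """Grid approximation of <R(t)> and <X(t)> in Eq. (67) for one SSE trajectory."""
    dens = np.abs(psi0) ** 2 + np.abs(psi1) ** 2
    total = np.sum(dens)
    R = np.sum(np.abs(psi0) ** 2) / total
    X = np.sum(x * dens) / total
    return R, X

# Repeating this over an ensemble of trajectories yields the histograms of Figs. 4-6.
```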
We can see that the coupling function \(V(x)\) significantly affects the probability distribution of physical observables and SSE is capable of capturing both Gaussian and non-Gaussian distributional information of these quantities. Specifically, we have the following observations:
1. In Example 1, the coupling function \(V(x)\) is bimodal: the primary coupling peak is placed near the center of the propagation region and the secondary coupling peak is placed near the boundary of the propagation region, where wave function turns its moving direction. We observe that the distribution of transition rate \(\langle R(t)\rangle\) is non-Gaussian but the distribution of atomic position \(\langle X(t)\rangle\) is Gaussian;
2. In Example 2, the coupling function \(V(x)\) is also bimodal: both coupling peaks are near the center of the propagation region, but they are of different profiles. We observe that the distribution of transition rate \(\langle R(t)\rangle\) is Gaussian but the distribution of atomic position \(\langle X(t)\rangle\) is non-Gaussian;
3. In Example 3, although the wave function is initialized far from a coherent state, the coupling function \(V(x)\) is unimodal, and the distributions of the transition rate \(\langle R(t)\rangle\) and the atomic position \(\langle X(t)\rangle\) are both Gaussian.
While the results above are not conclusive, this set of experiments already sheds light on the diverse stochastic behavior contained within the SSE models.
### Comparison between SSE and QME
This part is devoted to a detailed comparison between SSE and QME models both in dynamics and equilibrium. Consider [14; 44; 48]
\[U_{0}(x)=\frac{1}{2}x^{2},\quad U_{1}(x)=U_{0}(x)+\sqrt{2}gx+g^{2}+E_{d}, \tag{68}\]
with the Markovian assumption \(c^{\pm}(\tau)=c_{0}^{\pm}\delta(\tau)\), the corresponding QME of Eq. (49), transformed back to the Schrodinger picture, is
\[\begin{split}\frac{\mathrm{d}\rho(t)}{\mathrm{d}t}=&-\frac{\mathrm{i}}{\varepsilon}\left[\hat{H}_{S},\rho(t)\right]\\ &+\frac{\lambda^{2}}{\varepsilon^{2}}|V|^{2}\begin{pmatrix}2(\rho_{00}(t)-\rho_{11}(t))&\rho_{01}(t)+\rho_{10}(t)\\ \rho_{01}(t)+\rho_{10}(t)&2(\rho_{11}(t)-\rho_{00}(t))\end{pmatrix}.\end{split} \tag{69}\]
This is equivalent to the traditional Markovian QME
\[\begin{split}\mathrm{i}\varepsilon\frac{\mathrm{d}\rho(t)}{\mathrm{d}t}=&\left[\begin{pmatrix}\hat{h}_{0}&0\\ 0&\hat{h}_{1}\end{pmatrix},\rho(t)\right]\\ &-\mathrm{i}\frac{\lambda^{2}V^{2}}{\varepsilon}\left[\hat{d}\hat{d}^{\dagger}\rho(t)-\hat{d}^{\dagger}\rho(t)\hat{d}+\rho(t)\hat{d}^{\dagger}\hat{d}-\hat{d}\rho(t)\hat{d}^{\dagger}+h.c.\right],\end{split} \tag{70}\]
where \(\hat{d}\) and \(\hat{d}^{\dagger}\) are defined as in (4). We choose the following parameters:
\[L=20,\quad g=0.1,\quad E_{d}=0.1,\quad V(x)=1, \tag{71}\]
and the initial value
\[\psi_{0}(x,0)\propto 1,\quad\rho(x,x^{\prime},0)=\psi_{0}(x,0)\psi_{0}^{*}(x^{ \prime},0).\]
In this example, we carry out the spatial discretization with Hermite polynomials. Note that the coupling function \(V\) is chosen to be spatially homogeneous, so the system is expected to reach equilibrium.
Recall that the proposed models in this work are characterized by two parameters: the interaction strength \(\lambda\) and the semiclassical parameter \(\varepsilon\). From the derivation of QME based on SSE, we can see that it holds when \(\lambda\ll 1\) while \(\varepsilon\) is fixed, and a recent mathematical study reveals that QME can be directly derived only if \(\lambda\ll\varepsilon\)[44]. In fact, our formulation facilitates more systematic comparisons between SSE and QME with various combinations of \(\lambda\) and \(\varepsilon\). The purpose of the following two numerical experiments is to compare the dynamics and steady states between SSE and QME under different \(\varepsilon\) versus \(\lambda\) relationships.
In the two numerical experiments, we test two different values \(\varepsilon=1/32\) and \(1/64\). For each \(\varepsilon\), we let \(\lambda\) vary from \(\varepsilon\) to \(\varepsilon/8\). The evolution of the electron populations on the two levels based on SSE and QME for different \(\varepsilon\) and \(\lambda\) is shown in Fig. 7 and Fig. 8. The results indicate that (1) in this parameter setting, SSE and QME have the same equilibrium but different transient dynamics, and QME reaches the steady state faster than SSE; (2) for a fixed \(\varepsilon\), \(\lambda\) only affects the relaxation time to reach the equilibrium, and a smaller \(\lambda\) implies a longer relaxation time.
The former observation can be understood in two ways. Mathematically, the QME is not exactly equivalent to the SSE; they are equal only if the higher-order term \(O(\lambda^{3})\)
Figure 3: The evolution of a typical wavefunction trajectory in example 1.
Figure 2: Plots of coupling functions \(V(x)\) in example 1, 2 and 3.
is ignored. At short times, the higher-order terms lead to a difference in the dynamical behavior of the two, but at long times they cancel each other out and thus have no effect on the steady state. Physically, the QME is averaged over the stochastic term, so the dissipation terms do not compete with the random fluctuations in the dynamics, and thus the dynamics corresponding to the QME reaches the steady state faster. This is a common relationship between SSE and QME and has been discussed in [54]. The latter observation shows that the SSE still has the same steady state as the QME when \(\lambda\) is not much less than \(\varepsilon\), at least for a large class of parameter settings. This indicates that \(\lambda\ll\varepsilon\) is only a sufficient condition, not a necessary one, to establish qualitative connections between SSE and QME. More comparison studies and even deeper qualitative connections are worth exploring in the future.
## VI Conclusion and discussions
In this article, we propose a stochastic Schrodinger equation model for the Anderson-Holstein impurities, filling a gap in the study of open quantum systems with fermionic baths. The SSE model is obtained directly from the microscopic model instead of using empirical correlation functions. Through analytic derivations, we establish the theoretical relations between AH, SSE and QME, and introduce the hierarchy of modeling in Fig. 1. We also discuss efficient algorithms for noise generation and sampling stochastic trajectories. Our numerical experiments show that SSE could be used to study physical observables and capture effects beyond the level of QME.
From the computational perspective, the non-Markovian term could be potentially expensive, especially in the high-dimensional case. If one is interested in the nonequilibrium dynamics of SSE, efficient algorithms that incorporate noise and treat the non-Markovian term with (quasi-)linear cost in time (for example, see [55]) would be required to propagate towards a longer time.
Our current studies focus on Holstein types of quantum impurities. Similar strategies are applicable to correlated systems via the pseudoparticle approach [56]. Understanding decoherence and relaxation dynamics of interacting systems [57; 27] will be the focus of our future work.
## Acknowledgement
Z.Z. is supported by the National Key R&D Program of China, Project Number 2021YFA1001200, and the NSFC, grant Number 12031013, 12171013. This work is partially supported by a grant from the Simons Foundation under Award No. 825053 (Z.H.). We thank helpful discussions with Dr. Yu Cao. L.X. thanks Dr. Hao Wu for his support and encouragement.
All authors contributed equally to this work.
|
2310.00377 | Mitigating the Effect of Incidental Correlations on Part-based Learning | Intelligent systems possess a crucial characteristic of breaking complicated
problems into smaller reusable components or parts and adjusting to new tasks
using these part representations. However, current part-learners encounter
difficulties in dealing with incidental correlations resulting from the limited
observations of objects that may appear only in specific arrangements or with
specific backgrounds. These incidental correlations may have a detrimental
impact on the generalization and interpretability of learned part
representations. This study asserts that part-based representations could be
more interpretable and generalize better with limited data, employing two
innovative regularization methods. The first regularization separates
foreground and background information's generative process via a unique
mixture-of-parts formulation. Structural constraints are imposed on the parts
using a weakly-supervised loss, guaranteeing that the mixture-of-parts for
foreground and background entails soft, object-agnostic masks. The second
regularization assumes the form of a distillation loss, ensuring the invariance
of the learned parts to the incidental background correlations. Furthermore, we
incorporate sparse and orthogonal constraints to facilitate learning
high-quality part representations. By reducing the impact of incidental
background correlations on the learned parts, we exhibit state-of-the-art
(SoTA) performance on few-shot learning tasks on benchmark datasets, including
MiniImagenet, TieredImageNet, and FC100. We also demonstrate that the
part-based representations acquired through our approach generalize better than
existing techniques, even under domain shifts of the background and common data
corruption on the ImageNet-9 dataset. The implementation is available on
GitHub: https://github.com/GauravBh1010tt/DPViT.git | Gaurav Bhatt, Deepayan Das, Leonid Sigal, Vineeth N Balasubramanian | 2023-09-30T13:44:48Z | http://arxiv.org/abs/2310.00377v1 | # Mitigating the Effect of Incidental Correlations on Part-based Learning
###### Abstract
Intelligent systems possess a crucial characteristic of breaking complicated problems into smaller reusable components or parts and adjusting to new tasks using these part representations. However, current part-learners encounter difficulties in dealing with incidental correlations resulting from the limited observations of objects that may appear only in specific arrangements or with specific backgrounds. These incidental correlations may have a detrimental impact on the generalization and interpretability of learned part representations. This study asserts that part-based representations could be more interpretable and generalize better with limited data, employing two innovative regularization methods. The first regularization separates foreground and background information's generative process via a unique mixture-of-parts formulation. Structural constraints are imposed on the parts using a weakly-supervised loss, guaranteeing that the mixture-of-parts for foreground and background entails soft, object-agnostic masks. The second regularization assumes the form of a distillation loss, ensuring the invariance of the learned parts to the incidental background correlations. Furthermore, we incorporate sparse and orthogonal constraints to facilitate learning high-quality part representations. By reducing the impact of incidental background correlations on the learned parts, we exhibit state-of-the-art (SoTA) performance on few-shot learning tasks on benchmark datasets, including MiniImagenet, TieredImageNet, and FC100. We also demonstrate that the part-based representations acquired through our approach generalize better than existing techniques, even under domain shifts of the background and common data corruption on the ImageNet-9 dataset. The implementation is available on GitHub: [https://github.com/GauravBh1010tt/DPViT.git](https://github.com/GauravBh1010tt/DPViT.git)
## 1 Introduction
Many datasets demonstrate a structural similarity by exhibiting "parts" or factors that reflect the underlying properties of the data [15; 18; 21; 28; 31; 43; 54]. Humans are efficient learners who represent objects based on their various traits or parts, such as a bird's morphology, color, and habitat characteristics. Part-based methods learn these explicit features from the data in addition to convolution and attention-based approaches (which only learn the internal representations), making them more expressive [7; 21; 41; 52; 54]. Most existing part-based methods focus on the unsupervised discovery of parts by modeling spatial configurations [14; 21; 52; 54; 61], while others use part localization supervision in terms of attribute vectors [25; 41; 50; 56] or bounding boxes [7]. Part-based methods come with a learnable part dictionary that provides a direct means of data abstraction and is effective in limited data scenarios, such as few-shot learning. Furthermore, the parts can be combined into hierarchical representations to form more significant components of the object
description [50; 55]. Part-based learning methods offer advantages in terms of interpretability and generalization, particularly in safety-critical domains like healthcare, aviation and aerospace industry, transportation systems (e.g., railways, highways), emergency response, and disaster management. These fields often face challenges in collecting large training samples, making part-based learning methods valuable. Furthermore, the ability to explain decisions becomes crucial in these contexts.
Various studies have indicated that correlations between image background and labels can introduce biases during machine learning model training [29; 32; 33; 34; 45; 46; 53; 59]. These correlations exist because specific background configurations create shortcuts for the model during training [32; 53]. While background information is crucial for decision-making, imbalanced background configurations can create unintended correlations, leading to undesirable outcomes. These correlations negatively impact the interpretability and generalization of part-based learners. For instance, let's consider a scenario where a laptop, a charger, and an iPod are on a table. In one case, let's examine the situation without any context or background information. Without background information, the model may struggle to understand the purpose and significance of components or parts such as the laptop, charger, and iPod on a table. It could fail to differentiate between these objects or grasp their functionalities, resulting in a lack of recognition and understanding. Conversely, suppose the model is predominantly trained with examples of these items on tables. In that case, it may overly focus on the background elements, such as the table itself, disregarding the individual entities. Thus it becomes essential to handle incidental correlations of image background to achieve a balanced understanding. Existing part-based approaches fail to handle incidental correlations that arise due to specific background signals dominating the training data (analogous to Figure 1(b)), thereby hampering their interpretability and generalization to limited data scenarios.
Having high-quality part representations is essential for achieving proficiency in part-based learning. In this context, quality pertains to the sparsity and diversity of the learned parts. Sparsity ensures that only a few parts are responsible for a given image, as images comprise a small subset of parts. Conversely, diversity in part representations prevents the parts from converging into a single representation and facilitates each part's learning of a unique data representation. Although incidental correlations can negatively affect learned parts' quality, the quality of part-based methods is a significant challenge that all part learners face. While previous studies have addressed the issue of part quality [50; 52], their solutions do not assure the sparsity and diversity of the learned parts, thereby failing to guarantee high-quality parts.
To solve the aforementioned challenges, we introduce the Disentangled Part-based Vision Transformer (DPViT), which is trained to be resilient to the incidental correlations of image backgrounds and ensures high-quality part representations. We propose a disentangled pre-training phase, which separates the generative process of the foreground and background information through a unique mixture-of-parts formulation. We impose structural constraints on the parts to ensure that the mixture-of-parts for foreground and background includes soft, object-agnostic masks without requiring direct supervision of part localization. The parts are learned to be invariant to incidental correlations using a self-supervised distillation fine-tune phase. To address the issue of the quality of learned parts, we impose sparse and spectral penalties on the part matrix to guarantee the high quality of learned part representations. We include an assessment of the sparse and spectral norms of the part matrix as a quantitative indicator of the learned parts' quality. Finally, we evaluate the effectiveness of our method on benchmark few-shot datasets, including MiniImagenet [35], TieredImageNet [40],
Figure 1: **Impact of incidental correlations on the interpretability of part learners. We visualize the attention maps projected by the learned part dictionaries. Figure 1(b) illustrates the ViT-S backbone featuring a learnable part dictionary. However, it encounters difficulties in correctly identifying significant elements like the laptop, giving more attention to the background instead. In contrast, the proposed DPViT method successfully detects the most crucial parts of the image even in the presence of incidental correlations.**
and FC100 [6]. To demonstrate the robustness of our proposed method to incidental correlations of backgrounds and common data corruptions, we use the benchmark ImageNet-9 dataset [53].
Our key contributions can be summarized as follows:
* We propose regularization techniques to disentangle the generative process of foreground and background information through a mixture-of-parts formulation. Additionally, we employ a self-supervised distillation regularization to ensure that the learned parts remain invariant to incidental correlations of the image background.
* We ensure the high quality of learned parts by employing both sparsity and spectral orthogonal constraints over the part matrix. These constraints prevent the parts from degenerating and encourage a diverse range of part representations.
* Apart from our evaluation of standard few-shot benchmark datasets, we also analyze the impact of incidental correlations of background and typical data distortions by utilizing the benchmark ImageNet-9 dataset [53].
## 2 Related Work
**Part-based learning**. The advantages of learning part-based representations have been extensively researched in image recognition tasks [15; 18; 38; 43; 49; 52; 55; 56]. Earlier methods attempted to learn parts by defining a stochastic generative process [19; 43]. Part-based methods have been broadly classified into unsupervised and supervised categories. Unsupervised methods concentrate on learning the spatial relationship between parts by using part dictionaries without the supervision of part localization [14; 15; 20; 21; 28; 49; 52; 54; 61]. In contrast, supervised part-based methods rely on the supervision of part localization through attribute vectors [16; 27; 41; 50] or part bounding box information [7]. In the literature, parts are also referred to as concepts when supervision about part localization is involved [7; 41].
Discovering parts in an unsupervised way is a more challenging scenario that is applicable to most practical problems. Part dictionaries help data abstraction and are responsible for learning implicit and explicit data representations. For example, [28] clustered DCNN features using part-based dictionaries, while [26] introduced a generative dictionary-based model to learn parts from data. Similarly, [20] uses part-based dictionaries and an attention network to understand part representations. The ConstellationNet [54] and CORL [21] are some of the current methods from the constellation family [14; 61], and use dictionary-based part-prototypes for unsupervised discovery of parts from the data. Our approach also belongs to this category, as we only assume the part structure without requiring any supervision of part localization.
**Incidental correlations of image background**. Image backgrounds have been shown to affect a machine learning model's predictions, and at times the models learn by utilizing the incidental correlations between an image background and the class labels [4; 32; 42; 45; 46; 53; 58; 59]. To mitigate this issue, background researchers have used augmentation methods by altering background signals and using these samples during the training [32; 53; 58]. [53] performed an empirical study on the effect of image background on in-domain classification. They introduce several variants of background augmentations to reduce a model's reliance on the background. Similarly, [58] uses saliency maps of the image foreground to generate augmented samples to reduce the effect of the image background. Recently, [32] showed the effectiveness of background augmentation techniques for minimizing the effect of incidental correlations on few-shot learning.
Unlike these methods, our approach does not depend on background augmentations but instead learns the process of generating foreground and background parts that are disentangled. Furthermore, our proposed approach is not sensitive to the quality of foreground extraction and can operate with limited supervision of weak foreground masks.
**Few-shot learning and Vision Transformers**. In recent years, few-shot learning (FSL) has become the standard approach to evaluate machine learning models' generalization ability on limited data [1; 2; 5; 13; 24; 36; 44; 47; 54]. Vision transformers (ViT) [11] have demonstrated superior performance on FSL tasks [10; 22; 23; 30; 48; 51], and self-supervised distillation has emerged as a popular training strategy for these models [8; 17; 22; 30; 51]. A recent trend involves a two-stage procedure where models are pretrained via self-supervision before fine-tuning via supervised learning [8; 17; 22; 30; 60]. For example, [17] leverages self-supervised training with iBOT [60] as a pretext task, followed by inner loop token importance reweighting for supervised fine-tuning. HCTransformer [22] uses attribute surrogates learning and spectral tokens pooling for pre-training vision transformers
and performs fine-tuning using cascaded student-teacher distillation to improve data efficiency hierarchically. SMKD [30] uses iBOT pre-training and masked image modeling during fine-tuning to distill knowledge from the masked image regions.
Our approach employs a two-stage self-supervised training strategy of vision transformers akin to [22; 30; 60]. However, unlike existing methods that focus on generalization in few-shot learning, our primary objective is to learn part representations that are invariant to incidental correlations of the image background. Our training procedure is designed to facilitate learning disentangled and invariant part representations, which is impossible through existing two-stage self-supervised pipelines alone.
## 3 Problem Formulation and Preliminaries
In few-shot classification, the aim is to take a model trained on a dataset of samples from seen classes \(\mathcal{D}^{seen}\) with abundant annotated data, and transfer/adapt this model to classify a set of samples from a disjoint set of unseen/novel classes \(\mathcal{D}^{novel}\) with limited labeled data. Formally, let \(\mathcal{D}^{seen}=\{(\mathbf{x},y)\}\), where \(\mathbf{x}\in\mathbb{X}\) corresponds to an image and \(y\in\mathbb{Y}^{seen}\) corresponds to the label among the set of seen classes. We also assume that during training we have limited supervision of class-agnostic foreground-background masks (\(\mathcal{M}_{f}\),\(\mathcal{M}_{b}\)) for regularization, which can be easily obtained by any weak foreground extractor as a preprocessing step (following [53]). Please note that no mask information is required for \(\mathcal{D}^{novel}\) at inference.
We follow the work of [9; 60] on self-supervised training of ViTs to design our pretrain phase. During training, we apply random data augmentations to generate multiple views of the a given image \(x^{v}\in\mathcal{D}^{seen}\). These views are then fed into both the teacher and student networks. Our student network, with parameters \(\theta_{s}\), includes a ViT backbone encoder and a projection head \(\phi_{s}\) that outputs a probability distribution over K classes. The ViT backbone generates a \([cls]\) token, which is then passed through the projection head. The teacher network, with parameters \(\theta_{t}\), is updated using Exponentially Moving Average (EMA) and serves to distill its knowledge to the student by minimizing the cross-entropy loss over the categorical distributions produced by their respective projection heads.
\[\mathcal{L}_{cls}=\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D}^{seen}}\,\mathcal{L}_{ce}\left(\mathcal{F}_{\phi}^{t}(\mathcal{F}_{\theta}^{t}(\mathbf{x}^{\mathbf{1}})),\mathcal{F}_{\phi}^{s}(\mathcal{F}_{\theta}^{s}(\mathbf{x}^{\mathbf{2}}))\right). \tag{1}\]
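Concretely, Eq. (1) is a cross-entropy between the teacher's and the student's categorical outputs for two augmented views. The PyTorch sketch below is schematic: the temperatures and the stop-gradient on the teacher follow common practice for this style of self-distillation and are our assumptions, not details specified here.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, tau_s=0.1, tau_t=0.04):
    """Cross-entropy between teacher and student distributions over two views
    (Eq. (1)); temperature values are illustrative."""
    t = F.softmax(teacher_logits.detach() / tau_t, dim=-1)   # teacher is not back-propagated
    log_s = F.log_softmax(student_logits / tau_s, dim=-1)
    return -(t * log_s).sum(dim=-1).mean()
```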
For inference, we use the standard \(M\)-way, \(N\)-shot classification by forming _tasks_ (\(\mathcal{T}\)), each comprising of _support set_ (\(\mathcal{S}\)) and _query set_ (\(\mathcal{Q}\)), constructed from \(\mathcal{D}^{novel}\). Specifically, a support set consists of \(M\times N\) images; \(N\) random images from each of \(M\) classes randomly chosen from \(\mathbb{Y}^{novel}\). The query set consists of a disjoint set of images, to be classified, from the same \(M\) classes. Following the setup of [47], we form the class prototypes (\(\mathbf{c}_{m}\)) using samples from \(\mathcal{S}\). The class prototypes and learned feature extractor (\(\mathcal{F}_{\theta}\)) are used to infer the class label \(\hat{y}\) for an unseen sample \(\mathbf{x}^{q}\in\mathcal{Q}\) using a distance metric \(d\).
\[\hat{y}=\operatorname*{arg\,max}_{m}\,d(\mathcal{F}_{\theta}(\mathbf{x}^{q}), \mathbf{c}_{m});\,\,\mathbf{c}_{m}=\frac{1}{N}\sum_{(\mathbf{x},y_{m})\in \mathcal{S}}\mathcal{F}_{\theta}(\mathbf{x}). \tag{2}\]
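A minimal PyTorch sketch of this inference step follows; the query is assigned to the closest prototype under Euclidean distance, the usual prototypical-network convention, and the function name is ours.

```python
import torch

def prototype_predict(support_feats, support_labels, query_feats, n_way):
    """Nearest-prototype inference of Eq. (2): prototypes are mean support
    features; queries take the label of the closest prototype."""
    protos = torch.stack([support_feats[support_labels == m].mean(dim=0)
                          for m in range(n_way)])        # (M, d)
    dists = torch.cdist(query_feats, protos)             # (Q, M)
    return dists.argmin(dim=1)                           # predicted class indices
```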
## 4 Proposed Methodology
Given an input sample \(\mathbf{x}\in\mathbb{R}^{H\times W\times C}\) and a patch size \(F\), we extract flattened 2D patches \(\mathbf{x_{f}}\in\mathbb{R}^{N\times(F^{2}\cdot C)}\), where \(N\) is the number of patches generated and \((F,F)\) is the resolution of each image patch. Similar to a standard ViT architecture [11], we prepend a learnable \([class]\) token and positional embeddings to retain the positional information. The flattened patches are passed to multi-head self-attention layers and MLP blocks to generate a feature vector \(z_{p}=MSA(x_{f})\).
Next, we define the parts as part-based dictionaries \(\mathbf{P}=\{\mathbf{p}_{k}\in\mathbb{R}^{F^{2}\cdot C}\}_{k=1}^{K}\), where \(\mathbf{p}_{k}\) denotes the part-vector for the part indexed as \(k\). The _part-matrix_ (\(\mathbf{P}\)) is initialized randomly and is considered a trainable parameter of the architecture. Note that the dimension of each part-vector is equal to the dimension of flattened 2D patches, which is \(F^{2}\cdot C\). For each part \(\mathbf{p}_{k}\), we compute a distance map \(\mathbf{D}^{k}\in\mathbb{R}^{N}\) where each element in the distance map is computed by taking dot-product between the part \(\mathbf{p}_{k}\) and all the \(N\) patches: \(\mathbf{D}^{k}=\mathbf{x_{f}}\cdot\mathbf{p}_{k}\).
Using the distance maps \(\mathbf{D}\in\mathbb{R}^{N\times K}\), we introduce a multi-head cross-attention mechanism and compute the feature vector: \(z_{d}=MCA(F_{\psi}(\mathbf{D}))\), where \(F_{\psi}\) is an MLP layer which upsamples \(\mathbf{D}:K\to F^{2}\cdot C\). The cross-attention layer shares a similar design to self-attention layers; the only
difference is the dimensions of input distance maps. The cross-attention helps contextualize information across part-dictionary and the spatial image regions and provides complementary properties to \(MSA\) layers. (Please refer to Appendix for experiments on complementary properties of \(MSA\) and \(MCA\)).
Finally, we add the output feature vectors of \(MSA\) and \(MCA\) to form the feature extractor \(\mathcal{F}_{\theta}\) defined in Eqn 1:
\[\mathcal{F}_{\theta}=[z_{p}\oplus z_{d}] \tag{3}\]
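The distance-map branch can be sketched as the PyTorch module below. This is an illustrative reading rather than the released implementation: the exact form of \(F_{\psi}\), the query/key/value assignment inside the cross-attention, and the layer sizes are our assumptions, while the dot-product distance maps and the additive fusion \(z_{p}\oplus z_{d}\) follow the text.

```python
import torch
import torch.nn as nn

class PartDistanceBranch(nn.Module):
    """Schematic distance-map branch: D = x_f @ P^T, upsampled by an MLP (F_psi)
    and contextualized by multi-head cross-attention (MCA)."""
    def __init__(self, n_parts, patch_dim, n_heads=1):   # patch_dim must be divisible by n_heads
        super().__init__()
        self.P = nn.Parameter(torch.randn(n_parts, patch_dim))  # part matrix (K, F^2*C)
        self.up = nn.Linear(n_parts, patch_dim)                  # F_psi: K -> F^2*C
        self.mca = nn.MultiheadAttention(patch_dim, n_heads, batch_first=True)

    def forward(self, x_f, z_p):
        # x_f: (B, N, F^2*C) flattened patches, z_p: (B, N, F^2*C) MSA output
        D = x_f @ self.P.t()              # distance maps, (B, N, K)
        q = self.up(D)                    # upsample to the patch dimension
        z_d, _ = self.mca(q, z_p, z_p)    # cross-attention over spatial tokens
        return z_p + z_d, D               # additive fusion of MSA and MCA features
```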
**Disentanglement of foreground-background space using mixture-of-parts**. We start by dividing the part-matrix \(\mathbf{P}\in\mathbb{R}^{K\times F^{2}\cdot C}\) into two disjoint sets: a foreground set \(\mathbf{P}_{\mathbf{F}}\in\mathbb{R}^{n_{f}\times F^{2}\cdot C}\) and a background set \(\mathbf{P}_{\mathbf{B}}\in\mathbb{R}^{n_{b}\times F^{2}\cdot C}\), such that \(K=n_{f}+n_{b}\).
Next, we construct latent variables to aggregate the foreground and background information using a mixture-of-parts formulation over the computed distance maps \(\mathbf{D}\):
\[L_{F}=\sum_{k\in n_{f}}\alpha_{k}\mathbf{D}^{k}+\delta_{f};L_{B}=\sum_{k\in n _{b}}\beta_{k}\mathbf{D}^{k}+\delta_{b} \tag{4}\]
where, \(\alpha_{k}\) and \(\beta_{k}\) are the learnable weights given to the \(k^{th}\) part-vector in the corresponding mixture, whereas \(\delta_{f}\) and \(\delta_{b}\) are Gaussian noises sampled from \(\mathcal{N}(0,1)\). The Gaussian noise is added to the latent codes to ensure that mixture-of-parts are robust to common data distortions. Please note that the purpose of Gaussian noise is not to induce variability in the latent codes, as foreground information for a given image is deterministic.
Finally, our disentanglement regularization takes the form of an alignment loss between the latent codes and the class-agnostic foreground-background masks:
\[\mathcal{L}_{mix}=||\mathcal{I}(L_{F})-\mathcal{M}_{f}||_{2}+||\mathcal{I}(L_{ B})-\mathcal{M}_{b}||_{2} \tag{5}\]
where, \(\mathcal{I}(L)\) is the bilinear interpolation of a given latent code \(L\) to the same size as \(\mathcal{M}\).
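The alignment loss of Eqn 5 then only requires reshaping the patch-level codes to a 2D grid and upsampling them to the mask resolution. The sketch below uses a mean squared error in place of the plain \(L_{2}\) norm, an implementation choice rather than the paper's exact formulation.

```python
import torch.nn.functional as F

def mixture_alignment_loss(L_F, L_B, mask_f, mask_b, grid_size):
    """L_mix (Eqn 5): reshape each patch-level latent code to its 2D grid,
    bilinearly upsample to the mask resolution, and penalize its distance to
    the class-agnostic foreground / background mask."""
    def upsample(L):
        B = L.shape[0]
        grid = L.view(B, 1, grid_size, grid_size)            # (B, 1, h, w)
        return F.interpolate(grid, size=mask_f.shape[-2:],
                             mode="bilinear", align_corners=False).squeeze(1)
    return ((upsample(L_F) - mask_f) ** 2).mean() + \
           ((upsample(L_B) - mask_b) ** 2).mean()
```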
In the architecture, we use a distinct part-matrix \(\mathbf{P}\) for each encoder block, so \(z_{p}\) and \(z_{d}\) are computed iteratively and passed on to the next encoder block. For the computation of \(\mathcal{L}_{mix}\), we use the distance maps \(\mathbf{D}\) of the final encoder block. While it is possible to compute \(\mathcal{L}_{mix}\) for every block and aggregate the results, we observed that this increases the computational cost and degrades performance. We therefore compute \(\mathcal{L}_{mix}\) only on the final encoder block.
**Learning high-quality part representations**. A problem with minimizing the mixture objective defined in Eqn 5 is that it may cause the degeneration of parts, thereby making the part representations less diverse. One solution is to enforce orthogonality on the matrix \(\mathbf{P}^{m\times n}\) by minimizing \(||\mathbf{P}^{T}\mathbf{P}-\mathbf{I}||\), similar to [50]. However, the solution will result in a biased estimate as \(m<n\); that is, the number of
Figure 2: **Overview of proposed architecture - DPVIT. We employ a learnable part dictionary to generate a formulation incorporating foreground and background information. The spatial distance maps, computed by the part dictionary, are utilized to determine the mixture of latent codes for foreground and background. Our transformer encoder comprises multi-head self-attention (MSA) and multi-head cross-attention (MCA) layers. The MSA layer takes embedded patches as input, while the MCA layers utilize the distance maps as input.**
parts (\(K\)) is always less than the dimensionality of parts (\(F^{2}\cdot C\)). In our experiments, we observed that increasing \(K\) beyond a certain threshold degrades the performance as computational complexity increases, and is consistent with the findings in [54]. (Please refer to our Appendix section for experiments on the different values of \(K\)). To minimize the degeneration of parts, we design our quality assurance regularization by minimizing the spectral norm of \(\mathbf{P}^{T}\mathbf{P}-\mathbf{I}\), and by adding \(L_{1}\) sparse penalty on the part-matrix \(\mathbf{P}\). The spectral norm of \(\mathbf{P}^{T}\mathbf{P}-\mathbf{I}\) has been shown to work with over-complete (\(m<n\)) and under-complete matrices (\(m\geq n\)) [3].
\[\mathcal{L}_{Q}(\lambda_{s},\lambda_{o})=\lambda_{s}||\mathbf{P}||_{1}+ \lambda_{o}\Big{[}\sigma\big{(}\mathbf{P_{F}}\cdot\mathbf{P_{F}}^{T}-\mathbf{ I}\big{)}+\sigma\big{(}\mathbf{P_{B}}\cdot\mathbf{P_{B}}^{T}-\mathbf{I}\big{)} \Big{]} \tag{6}\]
where \(\mathbf{I}\) is the identity matrix, \(\lambda_{s}\) and \(\lambda_{o}\) are the regularization coefficients for sparsity and orthogonality constraints. \(\sigma(\mathbf{P})\) is the spectral norm of the matrix \(\mathbf{P}\) which is computed using the scalable power iterative method described in [3].
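A compact sketch of the quality regularization in Eqn 6; note that the paper estimates the spectral norm with the scalable power-iteration method of [3], whereas this illustration simply computes it exactly with `torch.linalg.matrix_norm`.

```python
import torch

def quality_loss(P_F, P_B, lambda_s=0.5, lambda_o=0.5):
    """L_Q (Eqn 6): L1 sparsity on the whole part-matrix plus the spectral norm
    of P_F P_F^T - I and P_B P_B^T - I for the foreground/background blocks."""
    def spectral(block):
        gram = block @ block.t()
        eye = torch.eye(gram.shape[0], device=gram.device, dtype=gram.dtype)
        return torch.linalg.matrix_norm(gram - eye, ord=2)   # largest singular value
    sparsity = torch.cat([P_F, P_B], dim=0).abs().sum()
    return lambda_s * sparsity + lambda_o * (spectral(P_F) + spectral(P_B))
```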
**Disentangled Pretraining Objective**. We pretrain DPViT using the following loss function:
\[\mathcal{L}_{PT}=\lambda_{cls}\mathcal{L}_{cls}+\lambda_{mix}\mathcal{L}_{mix }+\mathcal{L}_{Q}(\lambda_{s},\lambda_{o}) \tag{7}\]
where \(\lambda_{cls},\lambda_{mix}\), \(\lambda_{s}\), and \(\lambda_{o}\) are the weights given to each loss term and are tuned on the validation set.
### Invariant fine-tuning
In the pretrain phase, our approach learns part representations that are disentangled and diverse, but it does not achieve invariance to the incidental correlations of image background. During the fine-tuning stage, we utilize the learned foreground latent code to extract the relevant foreground information from a given image \(x\): \(x_{f}=x\odot\mathcal{I}(L_{F})\), where \(\odot\) denotes the Hadamard product. The teacher network receives the original image, while the student network receives the foreground-only image. By distilling knowledge between the \([class]\) tokens and foreground latent codes \(L_{F}\) of the student and teacher networks, we achieve invariance to the incidental correlations of image background.
\[\mathcal{L}_{cls}^{inv}=\mathcal{L}_{ce}(\mathcal{F}_{\phi}^{t}(\mathcal{F}_{ \theta}^{t}(x)),\mathcal{F}_{\phi}^{s}(\mathcal{F}_{\theta}^{s}(x_{f}))); \mathcal{L}_{p}^{inv}=\mathcal{L}_{ce}(L_{F}^{t}(x),L_{F}^{s}(x_{f})) \tag{8}\]
The two proposed invariant regularizations serve distinct purposes: \(\mathcal{L}_{cls}^{inv}\) encourages the model to classify images independently of the background, while \(\mathcal{L}_{p}^{inv}\) ensures that the latent foreground code captures relevant foreground information even when the background is absent, making the learned parts invariant to the incidental correlations.
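The two invariance terms of Eqn 8 can be written as a soft cross-entropy between teacher and student distributions, in the spirit of self-distillation frameworks. The temperature, the stop-gradient on the teacher, and the function names below are assumptions of this sketch.

```python
import torch.nn.functional as F

def invariant_losses(teacher_cls, student_cls_fg, teacher_L_F, student_L_F, tau=0.1):
    """Eqn 8: distillation between the teacher (original image) and the student
    (foreground-only image), applied to the [class]-token projections and to
    the foreground latent codes."""
    def soft_ce(teacher_logits, student_logits):
        t = F.softmax(teacher_logits.detach() / tau, dim=-1)  # teacher is a frozen target
        return -(t * F.log_softmax(student_logits / tau, dim=-1)).sum(-1).mean()
    L_cls_inv = soft_ce(teacher_cls, student_cls_fg)
    L_p_inv = soft_ce(teacher_L_F, student_L_F)
    return L_cls_inv, L_p_inv
```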
**Invariant Fine-tuning Objective**. Finally, our fine-tuning objective is given as :
\[\mathcal{L}_{FT}=\lambda_{cls}\mathcal{L}_{cls}+\lambda_{cls}^{inv}\mathcal{L }_{cls}^{inv}+\lambda_{p}^{inv}\mathcal{L}_{p}^{inv} \tag{9}\]
where \(\lambda_{cls},\lambda_{cls}^{inv}\), and \(\lambda_{p}^{inv}\) are the weights given to each loss term and are tuned on the validation set after pretraining.
## 5 Experiments
We evaluate the proposed approach on four datasets: MiniImageNet [35], TieredImageNet [40], FC100 [37], and ImageNet-9 [53]. MiniImageNet, TieredImageNet, and FC100 are standard benchmark datasets for few-shot learning. For MiniImageNet, we use the data split proposed in [39], where the classes are split into 64, 16, and 20 for training, validation, and testing, respectively. TieredImageNet [40] contains 608 classes divided into 351, 97, and 160 for meta-training, meta-validation, and meta-testing. FC100 [37] is a lower-resolution dataset (32 \(\times\) 32) that contains 100 classes with a class split of 60, 20, and 20.
To investigate the impact of background signals and data corruption on classifier performance, researchers introduced ImageNet-9 (IN-9L) [53]. IN-9L is a subset of ImageNet comprising nine coarse-grained classes: dog, bird, vehicle, reptile, carnivore, insect, instrument, primate, and fish. Within these super-classes, there are 370 fine-grained classes, with a training set of 183,006 samples. The authors of [53] created different test splits by modifying background signals, resulting in 4050
Figure 3: Invariant fine-tuning of DPViT via distillation framework.
samples per split to evaluate various classifiers. We use three of these test splits for our evaluation: Original (with no changes to the background), M-SAME (altering the background from the same class), and M-RAND (altering the background from a different class). Additionally, [53] introduced a metric called BG-GAP to assess the tendency of classifiers to rely on background signal, which is measured as the difference in performance between M-SAME and M-RAND.
We use a ViT-S backbone for all our experiments and follow the same pipeline in iBOT [60] for pre-training, keeping most hyperparameters unchanged. We use a batch size of 480 with a learning rate of 0.0005 decayed alongside a cosine schedule and pre-train DPViT for 500 epochs for all four datasets. Fine-tuning is carried out on the same train data for 50 epochs utilizing the class labels (similar to [22]). The DPViT architecture has \(K\)=64 and \(n_{f}\)=40 for all of our experiments. The pre-training and fine-tuning are carried out on 4 A40 GPUs. More details related to the architectures and optimization are provided in Appendix.
We start our experimental study by examining how incidental correlation affects part-learners' interpretability and quality of learned parts. This study is conducted on the MiniImageNet dataset. We then present experimental results on few-shot learning on the MiniImageNet, TieredImageNet, and FC100 datasets. Lastly, we provide a quantitative analysis of the influence of incidental background correlations on the various test splits in the ImageNet-9 dataset, followed by ablation studies and a discussion.
### How do incidental correlations affect the interpretability of part learners?
To examine how incidental correlations impact various components, we solely employ the \(\mathcal{L}_{cls}\) loss to train DPViT on the MiniImageNet dataset. (Note that \(\mathcal{L}_{cls}\) is equivalent to a ViT with parts.) We then present a visualization of the attention layers (\(MSA\) and \(MCA\)) in Figure 4. The \(MSA\) layers effectively recognize relevant in-context information of some objects, but the \(MCA\) layers fail to pinpoint foreground details. This is because incidental correlations in the background dominate the \(MCA\) layers. However, incorporating the \(\mathcal{L}_{mix}\) regularization results in improved localization by the \(MCA\) layers, which are no longer influenced by incidental correlations. An important point to highlight is that the design of \(MSA\) layers causes them to focus on objects specific to a particular class. As a result, their effectiveness is reduced when multiple objects are present (as seen in Figure 4, where MSA misses objects on the left side). Conversely, \(MCA\) layers learn class-agnostic features
Figure 4: Visualizing _MSA_ and _MCA_ layers. The joint representation is obtained by averaging all attention heads (\(\sum_{H}\)). We study the effect of \(\mathcal{L}_{mix}\) on the interpretability of part based learners.
Figure 5: **Visualizing the quality of learned parts for input image from Figure 4. In Figure 5(a) and 5(b), we show the foreground mixture \(\mathcal{I}(L_{F})\) and four foreground parts selected randomly from \(n_{f}\). Meanwhile, Figure 5(c) and 5(d) show the sparse and orthogonal norms as metrics for evaluating the quality of the part matrix \(\mathbf{P}\).**
consistently throughout the training set, enabling them to perform well in the presence of multiple objects. For further investigation of the complementary properties of \(MSA\) and \(MCA\), please see the Appendix. Additionally, the Appendix includes visualizations of individual attention heads.
### Studying the quality of learned part representations
As previously stated, the interpretability of the learned parts is directly influenced by the sparsity and diversity of the part matrix \(\mathbf{P}\). We study this by examining the impact of \(\mathcal{L}_{Q}\) during pretraining. We present visualizations of the learned foreground parts in Figure 5(a) and 5(b) for the input image \(x\) from Figure 4. While both setups successfully learn the foreground mixture \(\mathcal{I}(L_{F})\), without \(\mathcal{L}_{Q}\) the parts degenerate into a homogeneous solution (Figure 5(a)), lacking sparsity and diversity. With the inclusion of \(\mathcal{L}_{Q}\), however, the learned parts become sparse and diverse (Figure 5(b)).
We use the \(L\)-1 and orthogonal norms of the part matrix to assess the sparsity and diversity of the parts. As illustrated in Figure 5(c) and Figure 5(d), the addition of \(\mathcal{L}_{Q}\) maintains bounded norms, inducing sparsity and diversity among the parts. The results also highlight higher norm values by employing \(\mathcal{L}_{cls}+\mathcal{L}_{mix}\) losses, which show the degeneracy caused by the introduction of mixture loss. Moreover, the higher sparse and orthogonal norms for \(\mathcal{L}_{cls}\) demonstrate that part-based methods generally do not maintain the quality of parts.
### Few-shot Learning
We compare DPViT with recent part-based methods: ConstNet [54], TPMN [52], and CORL [21]; ViT-based methods: SUN [10], FewTure [23], HCTransformer [22], and SMKD [30] (for SMKD we compare with their prototype-based few-shot evaluation on all the datasets); and recent ResNet based few-shot methods: _Match-feat_[2], _Label-halluc_[24], and _FeLMi_[44].
As shown in Table 1, the proposed method outperforms existing part-based methods by clear margins. Moreover, DPViT achieves competitive performance compared to current ViT-based methods, especially on the FC100 dataset with low-resolution images. The FSL results show that the self-supervised ViT methods give an edge over the existing ResNet-based backbones.
\begin{table}
\begin{tabular}{l|c c c c c c} \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Backbone**} & \multicolumn{2}{c}{**MiniImageNet**} & \multicolumn{2}{c}{**TieredImageNet**} & \multicolumn{2}{c}{**FC100**} \\ & & **1-shot** & **5-shot** & **1-shot** & **5-shot** & **1-shot** & **5-shot** \\ \hline
**ProtoNets** (2017) [47] & ResNet12 & 60.93\(\pm\)0.16 & 78.53\(\pm\)0.25 & 65.65\(\pm\)0.92 & 83.40\(\pm\)0.65 & 37.50\(\pm\)0.60 & 52.50\(\pm\)0.60 \\
**DeepEMD v2** (2020) [57] & ResNet12 & 68.77\(\pm\)0.29 & 84.13\(\pm\)0.53 & 71.16\(\pm\)0.87 & 86.03\(\pm\)0.58 & 46.47\(\pm\)0.26 & 63.22\(\pm\)0.71 \\
**COSOC** (2021) [32] & ResNet12 & 69.28\(\pm\)0.49 & 85.16\(\pm\)0.42 & 73.57\(\pm\)0.43 & 87.57\(\pm\)0.10 & - & - \\
**Match-feat** (2022) [1] & ResNet12 & 63.98\(\pm\)0.79 & 82.04\(\pm\)0.49 & 70.97\(\pm\)0.13 & 86.16\(\pm\)0.67 & - & - \\
**Match-feat** (2022) [2] & ResNet12 & 68.32\(\pm\)0.62 & 82.71\(\pm\)0.46 & 71.22\(\pm\)0.86 & 85.43\(\pm\)0.55 & - & - \\
**Label-Halluc** (2022) [24] & ResNet12 & 67.04\(\pm\)0.70 & 85.87\(\pm\)0.48 & 71.97\(\pm\)0.89 & 86.80\(\pm\)0.58 & 47.37\(\pm\)0.70 & 67.92\(\pm\)0.70 \\
**FeLMi** (2022) [44] & ResNet12 & 67.47\(\pm\)0.78 & 86.04\(\pm\)0.44 & 71.63\(\pm\)0.89 & 87.07\(\pm\)0.55 & 49.02\(\pm\)0.70 & 68.68\(\pm\)0.70 \\
**SUN** (2022) [10] & ViT & 67.80\(\pm\)0.45 & 83.25\(\pm\)0.30 & 72.99\(\pm\)0.50 & 87.67\(\pm\)0.33 & - & - \\
**FewTure** (2022) [23] & Swin-Tiny & 72.40\(\pm\)0.78 & 86.38\(\pm\)0.49 & 76.32\(\pm\)0.87 & 89.96\(\pm\)0.55 & 47.68\(\pm\)0.78 & 63.81\(\pm\)0.75 \\
**HCTransformer** (2022) [22] & 3\(\times\) ViT-S & **74.74\(\pm\)**0.17 & 89.19\(\pm\)0.13 & **79.67\(\pm\)**0.20 & 91.72\(\pm\)0.11 & 48.27\(\pm\)0.15 & 66.42\(\pm\)0.16 \\
**SMKD** (2023) [30] & ViTTS & 74.28\(\pm\)0.18 & 88.25\(\pm\)0.09 & 78.83\(\pm\)0.20 & 91.02\(\pm\)0.12 & 50.38\(\pm\)0.16 & 83.73\(\pm\)0.16 \\
**ConstNet** (2021) [54] & ResNet12 & 68.49\(\pm\)0.23 & 79.95\(\pm\)0.17 & 70.15\(\pm\)0.86 & 86.10\(\pm\)0.70 & 43.80\(\pm\)0.20 & 59.70\(\pm\)0.20 \\
**TPMN** (2021) [52] & ResNet12 & 67.64\(\pm\)0.63 & 83.44\(\pm\)0.43 & 72.24\(\pm\)0.70 & 86.55\(\pm\)0.63 & 46.93\(\pm\)0.71 & 63.26\(\pm\)0.74 \\
**CORL** (2023) [21] & ResNet12 & 65.74\(\pm\)0.53 & 83.03\(\pm\)0.33 & 73.82\(\pm\)0.58 & 86.76\(\pm\)0.52 & 44.82\(\pm\)0.73 & 61.31\(\pm\)0.54 \\ \hline
**VIT-with-parts** (\(L_{cls}\)) & ViT-S & 72.15\(\pm\)0.20 & 87.61\(\pm\)0.15 & 78.03\(\pm\)0.19 & 89.08\(\pm\)0.19 & 48.92\(\pm\)0.13 & 67.75\(\pm\)0.15 \\
**Ours - DPViT** & ViT-S & 73.81\(\pm\)0.45 & **89.85\(\pm\)**0.35 & 79.32\(\pm\)0.48 & **91.92\(\pm\)**0.40 & **50.75\(\pm\)**0.23 & **68.80\(\pm\)**0.45 \\ \hline \end{tabular}
\end{table}
Table 1: Evaluating the performance of our proposed method on three benchmark datasets for few-shot learning - MiniImageNet, Tiered-ImageNet, and FC100. The top blocks show the non-part methods while the bottom block shows the part-based methods. The best results are bold, and \(\pm\) is the 95% confidence interval in 600 episodes.
\begin{table}
\begin{tabular}{c c c c c} \hline Method & 1-shot \(\uparrow\) & 5-shot \(\uparrow\) & \(\|\mathbf{P}\|_{1}\downarrow\) & \(\|\mathbf{P}\mathbf{P}^{T}-\mathbf{I}\|\downarrow\) \\ \hline SMKD [30] & 60.93 & 80.38 & - & - \\ \(\mathcal{L}_{cls}\) & 61.24 & 81.12 & 8.41 & 25.82 \\ \(\mathcal{L}_{cls}+\mathcal{L}_{mix}\) & 62.81 & 83.25 & 8.73 & 24.61 \\ \(\mathcal{L}_{cls}+\mathcal{L}_{mix}+\mathcal{L}_{Q}\) & 62.15 & 82.95 & 0.35 & 0.56 \\ \hline \end{tabular}
\end{table}
Table 3: Ablation of different loss terms during pretraining of DP-ViT. We show the effect of each loss term on the MiniImageNet dataset.
### Studying the impact of incidental correlations of background on ImageNet-9
The impact of incidental background correlation on the ImageNet-9 dataset is examined, and the corresponding results are presented in Table 8. DPViT achieves the highest performance across IN-9L, Original, M-SAME, and M-RAND among the different test splits. Remarkably, our proposed DPViT method significantly reduces the BG-GAP value to \(5.9\), indicating its resilience to the effects of incidental correlation caused by varying backgrounds. In comparison to non-part methods such as ResNet-50 and WRN-50 \(\times\) 2 (as shown in Table 8), ConstNet [54], which is a part-based method, suffers more from the negative impact of incidental correlation, underscoring the detrimental effect on the generalization of part-learners. To evaluate ConstNet, we utilize the provided source code by the authors to train their model on the IN-9L training data.
## 6 Ablation study and Discussion
In Section 5.1 and 5.2, we investigate how the inclusion of \(\mathcal{L}_{mix}\) and \(\mathcal{L}_{Q}\) impacts the interpretability of DPViT during the pretraining phase. We will now conduct an ablation study to assess the impact of both these loss components during the pre-training phase. As indicated in Table 3, introducing \(\mathcal{L}_{mix}\) results in enhanced 1-shot and 5-shot performance compared to the baseline model represented by \(\mathcal{L}_{cls}\). It's worth noting that the SMKD [30] model performs worse than the part-ViT baseline (\(\mathcal{L}_{cls}\)). Upon incorporating \(\mathcal{L}_{Q}\), the few-shot performance experiences a slight decrease, but the norms remain constrained, indicating improved quality of part representations.
Additionally, in this section, we examine the advantages of incorporating \(\mathcal{L}_{cls}^{inv}\) and \(\mathcal{L}_{p}^{inv}\) during the fine-tuning process on the ImageNet-9 dataset. This ablation study focuses on the significance of invariance terms in handling background-related incidental correlations. The results presented in Table 4 demonstrate that the introduction of \(\mathcal{L}_{cls}^{inv}\) enhances generalization on M-S (M-SAME) and M-R (M-RAND), thereby illustrating the benefits of inducing invariance through \(\mathcal{L}_{cls}^{inv}\) across varying backgrounds. Moreover, the inclusion of \(\mathcal{L}_{cls}^{inv}\) yields a lower BG-GAP value of \(5.9\), in contrast to \(8.8\) obtained without \(\mathcal{L}_{cls}^{inv}\). Notably, the introduction of \(\mathcal{L}_{cls}^{inv}\) has no impact on the quality of parts, as evidenced by the final norm values of \(\mathbf{P}\). By employing \(\mathcal{L}_{p}^{inv}\) in conjunction with other losses, the norms remain bounded, thereby preserving the interpretability of parts. We also observe a minimal decrease in classification performance upon introducing \(\mathcal{L}_{p}^{inv}\), highlighting the trade-off between generalization and interpretability that will be explored in the following section.
As shown in Table 6, assigning higher values to \(\lambda_{s}\) and \(\lambda_{o}\) places greater emphasis on \(\mathcal{L}_{Q}\), leading to lower norm values and consequently improving the quality of parts. However, this also results in a slight reduction in few-shot accuracy. After careful analysis, we determine that an optimal value of \(0.5\) for both \(\lambda_{s}\) and \(\lambda_{o}\) strikes a balance, maintaining the quality of parts while preserving few-shot generalization.
### Partial observability of foreground mask
Our training procedure employs foreground masks acquired through a foreground extractor, similar to the one described in [53]. To study the dependence of DPViT on the availability of foreground masks, we examine the weak/limited supervision scenario for foreground masks, where only a small subset of samples possesses the corresponding masks. As depicted in Table 7, we observe that DPViT achieves comparable performance even when only 10% of the training samples have mask information. The performance difference is less than 1.5% for 5-shot and less than 0.8% for 1-shot performance. Furthermore, Figure 7 presents visualizations of image patches surrounding a random foreground and a background part. We also find that in the setup with a \(0\%\) foreground mask, equivalent to \(\lambda_{mix}=0\), no disentanglement is observed in the extracted patches. (More visualizations can be found in the Appendix section).
### Working with weak foreground extractor
The features of DPViT (\(\mathcal{F}_{\theta}\)) depend entirely on the input sample \(\mathbf{x}\) in order to learn part representations, without explicitly incorporating the mask information. The mask serves as a weak signal to regularize our training objective and separate the foreground parts from the background. Additionally, introducing Gaussian noise in the latent codes enhances DPViT's ability to handle misalignment issues with the foreground masks. Consequently, the learned features remain unaffected by mistakes made by the existing foreground extractor. Moreover, we find that the mixture-of-parts representations can accurately determine the foreground location even when the mask information is missing or incorrect (as illustrated in Figure 6).
### Limitations of DPViT
A constraint within our framework involves relying on a pre-existing foreground extractor. In certain scenarios, such as the classification of tissue lesions for microbiology disease diagnosis, obtaining an existing foreground extractor might not be feasible. Similarly, DPViT focuses on learning components that are connected to the data, yet it doesn't encompass the connections between these components, like their arrangement and hierarchical combination. Introducing compositional relationships among these components could enhance comprehensibility and facilitate the creation of a part-based model capable of learning relationships among the parts.
## 7 Conclusion
In this work, we study the impact of incidental correlations of image backgrounds on the interpretability and generalization capabilities of part learners. We introduce DPViT, a method that effectively learns disentangled part representations through a mixture-of-parts approach. Furthermore, we enhance the quality of part representations by incorporating sparse and orthogonal regularization constraints. Through comprehensive experiments, we demonstrate that DPViT achieves competitive performance comparable to state-of-the-art methods, all while preserving both implicit and explicit interpretability.
## Acknowledgments and Disclosure of Funding
The computation resources used in preparing this research were provided by Digital Research Alliance of Canada, and The Vector Institute, Canada.
|
2309.14261 | The $s$-weak order and $s$-permutahedra II: The combinatorial complex of
pure intervals | This paper introduces the geometric foundations for the study of the
$s$-permutahedron and the $s$-associahedron, two objects that encode the
underlying geometric structure of the $s$-weak order and the $s$-Tamari
lattice. We introduce the $s$-permutahedron as the complex of pure intervals of
the $s$-weak order, present enumerative results about its number of faces, and
prove that it is a combinatorial complex. This leads, in particular, to an
explicit combinatorial description of the intersection of two faces. We also
introduce the $s$-associahedron as the complex of pure $s$-Tamari intervals of
the $s$-Tamari lattice, show some enumerative results, and prove that it is
isomorphic to a well chosen $\nu$-associahedron. Finally, we present three
polytopality conjectures, evidence supporting them, and some hints about
potential generalizations to other finite Coxeter groups. | Cesar Ceballos, Viviane Pons | 2023-09-25T16:19:48Z | http://arxiv.org/abs/2309.14261v2 | # The \(s\)-weak order and \(s\)-permutahedra II:
###### Abstract.
This paper introduces the geometric foundations for the study of the \(s\)-permutahedron and the \(s\)-associahedron, two objects that encode the underlying geometric structure of the \(s\)-weak order and the \(s\)-Tamari lattice. We introduce the \(s\)-permutahedron as the complex of pure intervals of the \(s\)-weak order, present enumerative results about its number of faces, and prove that it is a combinatorial complex. This leads, in particular, to an explicit combinatorial description of the intersection of two faces. We also introduce the \(s\)-associahedron as the complex of pure \(s\)-Tamari intervals of the \(s\)-Tamari lattice, show some enumerative results, and prove that it is isomorphic to a well chosen \(\nu\)-associahedron. Finally, we present three polytopality conjectures, evidence supporting them, and some hints about potential generalizations to other finite Coxeter groups.
Key words and phrases:Weak order, Permutahedron, Tamari Lattice, Associahedron 2020 Mathematics Subject Classification: Primary 20F55 and 52B05 ; Secondary 06B05 and 06B10 This project was supported by the ANR-FWF International Cooperation Project PAGCAP, funded by the ANR Project ANR-21-CE48-0020 and the FWF Project I 5788. Cesar Ceballos was also supported by the Austrian Science Fund FWF, Project P 33278.
###### Contents
* 1 Introduction
* 3.1 Polytopal complexes and polytopal subdivision realizations
* 3.2 Generalizations for finite Coxeter groups
* 3.3 Geometric realizations in dimensions 2 and 3
* 3.4 Realization via flow polytopes
## Introduction
This paper is the second contribution (after [1]) to a series of articles related to the \(s\)-weak order and the \(s\)-permutahedron. We originally introduced these objects in an extended abstract [1], which is now developed into two long versions: the prequel [1] and this present paper.
Figure 1. The \(s\)-weak order as the edge graph of the \(s\)-permutahedron for \(s=(0,2,2)\).
The purpose of the first paper [10] was to develop the combinatorial foundations for the study of the \(s\)-weak order and the \(s\)-Tamari lattice, two partially ordered structures defined on certain families of decreasing trees, see examples in Figures 1- 2 and 3. Our motivation for introducing those objects was to fill the gap between recent developments on generalizations of the classical Tamari lattice on one side, and its connections to the classical weak order of permutations on the other.
On one side, we have the classical Tamari lattice (see [12, 13]), a partial order on Catalan objects (such as binary trees) whose Hasse diagram is the edge graph of a polytope called the association. It is a central object in algebraic combinatorics, related to many interesting structures such as AVL trees in efficient sorting algorithms [1], Hopf algebras [10, 11, 12, 13, 14], Cambrian lattices [15], Permutrees [16], cluster algebras [14, 15], representation theory [17, 18], and more. Two important generalizations of the Tamari lattice that are relevant for our work are the \(m\)-Tamari lattice of Bergeron and Preville-Ratelle [15] and the more general \(\nu\)-Tamari lattice of Preville-Ratelle and Viennot [19], whose study is motivated by conjectural connections with the theory of (multivariate higher) diagonal harmonics in representation theory, see e.g. [1, 1, 1, 16].
On the other side, we have the classical weak order on permutations, whose Hasse diagram is the edge graph of a polytope called the permutahedron. It is known to be a lattice [10] obtained from an orientation of the Cayley graph of the symmetric group. Both the lattice construction and the associated polytope also exist for other finite Coxeter groups [11, 12]. The weak order has well known connections with the classical Tamari lattice, which can be obtained from it both as a sublattice [11] and as a lattice quotient [15]. These properties have interesting geometric and algebraic counterparts. Geometrically, the associahedron can be obtained by removing certain facets of the permutahedron [10, 12], and this translates into algebraic properties between certain related Hopf algebras [10, 11].
Figure 2. The \(s\)-weak order as the edge graph of the \(s\)-permutahedron for \(s=(0,0,2)\).
The recent extensions of the Tamari lattice and the lack of (type \(A\))1 generalizations of the weak order lead to the question: what is the equivalent of the weak order for the \(\nu\)-Tamari lattice? The \(s\)-weak order introduced in [10] gives an answer to this question. Given a sequence of non-negative integers \(s\), the \(s\)_-weak order_ is a lattice structure on a family of combinatorial objects called \(s\)_-decreasing trees_. We also introduced the \(s\)_-Tamari lattice_ as a sublattice of the \(s\)-weak order and showed that it is isomorphic to a well chosen \(\nu\)-Tamari lattice.
Footnote 1: There are further generalizations in the context of Coxeter groups. In this paper, we mainly focus on generalizations which are related to Coxeter groups of type \(A\). Some discussions about other types are mentioned in Section 3.1.
The purpose of this second paper is to investigate the underlying geometric structure of the \(s\)-weak order and the \(s\)-Tamari lattice. As we can see from Figures 1 and 3, the \(s\)-weak order and the \(s\)-Tamari lattice can be realized as the edge graph of a much richer geometric structure, whose faces can be explicitly described in terms of a special family of intervals which we call pure intervals and pure \(s\)-Tamari intervals. This leads to two main definitions: (1) the \(s\)_-permutahedron_ is the complex of pure intervals of the \(s\)-weak order and (2) the \(s\)_-associahedron_ is the complex of pure \(s\)-Tamari intervals of the \(s\)-Tamari lattice. In this second paper, we introduce these concepts and develop the geometric foundations for the study of \(s\)-permutahedra and \(s\)-associahedra.
Figure 3. The \(s\)-Tamari lattice as the edge graph of the \(s\)-associahedron for \(s=(0,2,2)\)
The paper is subdivided into three parts. Part 1 is dedicated to the introduction and study of the \(s\)-permutahedron (Definition 1.2.3), whose faces are given by pure intervals of the \(s\)-weak order (Definition 1.2.1). We present enumerative results about the number of faces by giving an explicit formula for the \(f\)-polynomial (Proposition 1.2.4), and describe a more efficient recursive formula to compute it (Proposition 1.2.5). The main result of this part is to prove that the \(s\)-permutahedron is a _combinatorial complex_ (Theorem 1.2.6) which, in particular, requires that the intersection of two faces is also a face of the complex (Theorem 1.4.1 and Corollary 1.4.13). Our proof is purely combinatorial and is based on an explicit characterization of pure intervals (Theorem 1.3.9) using the notions of _variations_ and _essential variations_.
In Part 2, we introduce and study the \(s\)-associahedron (Definition 2.2.3), whose faces are given by pure \(s\)-Tamari intervals of the \(s\)-Tamari lattice (Definition 2.2.1). We present an explicit formula for its \(f\)-polynomial (Proposition 2.2.4), which provides natural definitions of the \(s\)_-Catalan_ and the \(s\)_-Narayana_ numbers. The main result of this part is to show that the \(s\)-associahedron coincides with the \(\nu\)-associahedron introduced by Ceballos, Padrol, and Sarmiento in [10] (Theorem 2.2.9). The \(\nu\)-associahedron is a polytopal complex of bounded faces induced by certain arrangement of tropical hyperplanes, and its edge graph gives a geometric realization of the \(\nu\)-Tamari lattice [10]. This proves, in particular, that the \(s\)-associahedron is also a polytopal complex.
In Part 3, we present three conjectures about geometric realizations of the \(s\)-permutahedron and the \(s\)-associahedron (Section 3.1), as well as some hints about potential generalizations of our work in the context of finite Coxeter groups (Section 3.2). The first conjecture (Conjecture 3.1.1) states that pure intervals of the \(s\)-weak order can be realized as convex polytopes. The second conjecture (Conjecture 3.1.2) asserts that the \(s\)-permutahedron can be geometrically realized as a polytopal subdivision of a zonotope, and the third (Conjecture 3.1.3) that there is such a realization such that the \(s\)-associahedron can be obtained from it by removing some facet defining inequalities. All these conjectures are strongly supported by computer evidence. Several examples are illustrated in Figure 4. In particular, we show that they hold in full generality in dimensions 2 and 3 (Section 3.3).
We remark that these conjectures were already present in our initial extended abstract [11], and have been partially solved since. Indeed, in [1], for a composition \(s\) without any \(0\), the authors provide a realization of the \(s\)-permutahedron as the dual of the complex of interior faces of a triangulation of a flow polytope. From this, using the Cayley trick and techniques from tropical geometry, they are able to construct the desired geometric realization of the \(s\)-permutahedron. The results in [1] solve Conjectures 3.1.1 and 3.1.2 for the special case where \(s\) contains no zeros. The case where \(s\) contains zeros is still open, and Conjecture 3.1.3 remains open in general. We present a further discussion on these connections in Section 3.4. We remark that pure intervals also appear in the work of Lacina [15], where he shows that they are the only intervals of the \(s\)-weak order which are homotopy equivalent to a sphere.
These recent advances motivate the further study of the \(s\)-weak order and the \(s\)-permutahedron, as well as a deeper understanding of the nice properties of the family of pure intervals. In comparison to [1], this paper develops the foundations for the study of \(s\)-permutahedra and \(s\)-associahedra and explores the combinatorial/geometric properties of the \(s\)-permutahedra for all \(s\) (with or without zeros).
Beside the paper, we provide the SageMath [15] code used to reach the results as well as a demo SageMath worksheet with computations of all examples in this paper [12].
## Part 1. The \(s\)-permutahedron
### 1.1. The \(s\)-weak order (background)
#### 1.1.1. Lattice definition through \(s\)-decreasing trees and tree-inversions
In this Section, we recall the main definitions and results of [10] needed for Part 1 of this paper. We call a sequence \(s=(s(1),s(2),\ldots,s(n))\) of non-negative integers a _weak composition_ of length \(\ell(s)=n\). If for all \(i\), \(s(i)>0\), this is simply a composition. An \(s\)_-decreasing tree_ is a _rooted planar_ tree with \(n\) internal nodes labeled bijectively by \(1,\ldots,n\), such that node \(i\) has \(s(i)+1\) children and such that labels are decreasing from root to leaves. An example is given on Figure 5.
As each node is given a unique label, we usually write _the node \(y\)_ instead of _the node labeled by \(y\)_ for convenience. For an \(s\)-decreasing tree \(T\) and a node \(y\) such that \(s(y)=m\), we write \(T_{0}^{y},\ldots,T_{m}^{y}\) the \(m+1\) children of \(y\). Then \(T_{0}^{y}\) is the _left_ child of \(y\), while \(T_{1}^{y},\ldots,T_{m-1}^{y}\) are the _middle children_ of \(y\) and \(T_{m}^{y}\) is the _right_ child of \(y\). Besides, if \(m>0\), we say that \(T_{0}^{y}\) (resp. \(T_{m}^{y}\)) is a _strict left_ (resp. strict right) child of \(y\). Note that the word _child_ refers here to the whole subtree not just the child node.
An \(s\)-decreasing tree \(T\) is characterized by its multi-set of _tree-inversions_ \(\operatorname{inv}(T)\). Let \(a<c\) be two nodes of an \(s\)-decreasing tree \(T\), the _cardinality_ of the tree-inversion \((c,a)\), written \(\#_{T}(c,a)\), is given
Figure 4. The \(s\)-permutahedron and the \(s\)-associahedron obtained from it by removing certain facets, for \(s=(0,2,2,2)\), \(s=(0,2,3,2)\), and \(s=(0,3,3,3)\).
by the relative position of \(a\) towards \(c\). If \(a\) is left of \(c\) or belongs to \(T_{0}^{c}\), then \(\#_{T}(c,a)=0\). If \(a\) belongs to a middle child \(T_{i}^{c}\) (with \(0<i<m\)), then \(\#_{T}(c,a)=i\). Finally, if \(a\) belongs to \(T_{m}^{c}\) or is right of \(c\), then \(\#_{T}(c,a)=m\). We give many examples on Figure 5.
As we explain in [10], \(s\)-decreasing trees are a generalization of permutations. More precisely, if \(s\) does not contain any zeros, \(s\)-decreasing trees correspond to certain multi-permutations where the letter \(i\) is repeated \(s(i)\) times: they are in bijection with _stirling permutations_, _i.e._, 121-avoiding multi-permutations. In particular, if \(s=(1,1,\dots)\), the \(s\)-decreasing trees are the decreasing binary trees, known to be in bijection with classical permutations. Thus, the tree-inversions are a generalization of the classical inversion sets of permutations. It can be understood as a _multi-set_ where each inversion \((c,a)\) appears \(\#(c,a)\) times. Many classical notions and operations on sets can be generalized to multi-sets.
**Definition 1.1.1** (Definitions 1.4 and 1.5 of [10]).: Let \(I\) be a multi-set of inversions \((c,a)\) with \(a<c\). We write \(\#_{I}(c,a)\) the number of occurrences of \((c,a)\) in \(I\). If there is no occurrence of \((c,a)\) in \(I\) we write \((c,a)\notin I\) or equivalently \(\#_{I}(c,a)=0\).
Given two multi-sets of inversions \(I\) and \(J\), we say that \(I\) is _included_ in \(J\) (resp. strictly included) and write \(I\subseteq J\) (resp. \(I\subset J\)) if \(\#_{I}(c,a)\leq\#_{J}(c,a)\) (resp. \(\#_{I}(c,a)<\#_{J}(c,a)\)) for all \((c,a)\in I\).
Given a weak composition \(s\) with \(\ell(s)=n\), we write \(\Sigma_{s}\) the _maximal \(s\)-inversion set_ defined by \(\#_{\Sigma_{s}}(c,a)=s(c)\) for all \(1\leq a<c\leq n\).
We say that a multi-set of inversions \(I\) is _transitive_ if for all triplets \(a<b<c\), either \(\#_{I}(b,a)=0\) or \(\#_{I}(c,a)\geq\#_{I}(c,b)\).
Given a weak composition \(s\) with \(\ell(s)=n\) and \(I\subseteq\Sigma_{s}\), then \(I\) is _planar_ if for all triplets \(1\leq a<b<c\leq n\), either \(\#_{I}(b,a)=s(b)\) or \(\#_{I}(c,b)\geq\#_{I}(b,a)\).
We prove the following.
**Proposition 1.1.2** (Proposition 1.6 of [10]).: _For \(s\) a given weak composition, a multi-set of inversions is the multi-set of tree-inversions of an \(s\)-decreasing tree \(T\) if and only if it is planar, transitive, and included in \(\Sigma_{s}\). We then call it an \(s\)-tree-inversion set._
Even though they might seem technical, the transitivity and planarity conditions can be naturally explained by picture, as in Figure 6. These conditions regard the possible positions for the triplet \(c>b>a\) inside the tree. By picture, it is then clear that the transitivity and planarity conditions are satisfied by any multi-set of inversions corresponding to an \(s\)-decreasing tree. The proof that they are sufficient conditions to obtain an \(s\)-decreasing tree is done using an explicit construction algorithm [10, Algorithm 1.8].
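The three conditions of Proposition 1.1.2 are also easy to check mechanically. The following Python sketch is our own illustration (not the SageMath code accompanying the paper); a multi-set of inversions is stored as a dictionary mapping a pair \((c,a)\) to its cardinality, and \(s\) as a tuple with \(s(i)\) at index \(i-1\).

```python
def is_s_tree_inversion_set(inv, s):
    """Check Proposition 1.1.2: inv (a dict {(c, a): cardinality}, 1-indexed,
    a < c) is an s-tree-inversion set iff it is included in Sigma_s,
    transitive, and planar."""
    n = len(s)
    def card(c, a):
        return inv.get((c, a), 0)
    # inclusion in the maximal multi-set Sigma_s: #(c, a) <= s(c)
    if any(card(c, a) > s[c - 1] for c in range(2, n + 1) for a in range(1, c)):
        return False
    for a in range(1, n + 1):
        for b in range(a + 1, n + 1):
            for c in range(b + 1, n + 1):
                # transitivity: #(b, a) = 0  or  #(c, a) >= #(c, b)
                if card(b, a) != 0 and card(c, a) < card(c, b):
                    return False
                # planarity: #(b, a) = s(b)  or  #(c, b) >= #(b, a)
                if card(b, a) != s[b - 1] and card(c, b) < card(b, a):
                    return False
    return True
```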
Tree-inversions and tree-inversion sets are fundamental notions to work on \(s\)-decreasing trees, and are specially useful in proofs. The methodology we often use is to first _understand_ looking at the trees, and then _prove_ using tree-inversions. In particular, we use tree-inversions to define a partial order on
Figure 5. an \(s\)-decreasing tree and its tree-inversions (Figure 4 of [10])
\(s\)-decreasing trees by inclusion of their tree-inversion sets. This is the _\(s\)-weak order_, generalizing the classical weak order which is also defined by inclusion of inversions. Furthermore, we proved in [10] that it is a lattice using the following two operations on multi-sets.
**Definition 1.1.3** (Definition 1.12 of [10]).: The _union_ of two multi-sets of inversions \(I\) and \(J\) is given by
\[\#_{I\cup J}(c,a)=\max\left(\#_{I}(c,a),\#_{J}(c,a)\right) \tag{1}\]
for all \(a<c\).
**Definition 1.1.4** (Definition 1.4 of [10]).: Let \(I\) be a multi-set of inversions. A _transitivity path_ between two values \(c>a\) is a list of \(k\geq 2\) values \(c=b_{1}>b_{2}>\cdots>b_{k}=a\) such that \(\#_{I}(b_{i},b_{i+1})>0\) for all \(1\leq i<k\). Note that if \(\#_{I}(c,a)>0\), then \((c,a)\) itself is a transitivity path. We write \(I^{\text{tc}}\) for the _transitive closure_ of \(I\), which is defined as follows. For all \(a<c\), \(\#_{I^{\text{tc}}}(c,a)=v\) with \(v\) the maximal value of \(\#_{I}(b_{1},b_{2})\) for \(c=b_{1}>\cdots>b_{k}=a\) a transitivity path between \(c\) and \(a\).
**Definition 1.1.5** ([10, Definition 1.9]).: Let \(R\) and \(T\) be two \(s\)-decreasing trees of the same weak composition \(s\) with \(\ell(s)=n\). We say that \(R\preccurlyeq T\) if \(\operatorname{inv}(R)\subseteq\operatorname{inv}(T)\) using the inclusion of multi-sets. We call the relation \(\preccurlyeq\) the _\(s\)-weak order_.
**Theorem 1.1.6** (Theorem 1.21 of [10]).: _The \(s\)-weak order is a lattice for every weak composition \(s\). The join of two \(s\)-decreasing trees \(T\) and \(R\) is given by_
\[\operatorname{inv}(T\lor R)=\left(\operatorname{inv}(T)\cup\operatorname{inv }(R)\right)^{\text{tc}}. \tag{2}\]
We show an example of the computation of the join on Figure 7, please see [10] for more detailed examples and computations.
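With the same dictionary encoding of multi-sets of inversions, the join of Theorem 1.1.6 amounts to a coordinatewise maximum followed by the transitive closure of Definition 1.1.4. The sketch below is an illustration under that encoding, not the official SageMath implementation.

```python
def transitive_closure(inv, n):
    """Transitive closure of Definition 1.1.4: for each pair c > a, take the
    maximum of #(c, b) over all b with #(c, b) > 0 from which a can be reached
    by a chain of strictly positive inversions (b itself counts as reachable)."""
    def card(c, a):
        return inv.get((c, a), 0)
    def reachable(b):
        seen, stack = {b}, [b]
        while stack:
            x = stack.pop()
            for a in range(1, x):
                if card(x, a) > 0 and a not in seen:
                    seen.add(a)
                    stack.append(a)
        return seen
    closure = {}
    for c in range(2, n + 1):
        for b in range(1, c):
            v = card(c, b)
            if v > 0:
                for a in reachable(b):
                    closure[(c, a)] = max(closure.get((c, a), 0), v)
    return closure

def join_inversions(inv_T, inv_R, n):
    """inv(T v R) = (inv(T) u inv(R))^tc, Eqn (2)."""
    union = {p: max(inv_T.get(p, 0), inv_R.get(p, 0))
             for p in set(inv_T) | set(inv_R)}
    return transitive_closure(union, n)
```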
#### 1.1.2. Tree-ascents and cover relations
The cover relations of the \(s\)-weak order can be combinatorially described in terms of a notion of tree ascents defined in [10].
**Definition 1.1.7** (Definition 1.24 from [10]).: Let \(T\) be an \(s\)-decreasing tree of some weak composition \(s\) of length \(n\). We say that \((a,c)\) with \(a<c\) is a _tree-ascent_ of \(T\) if
1. \(a\) is a descendant of \(c\);
2. \(a\) does not belong to the right child of \(c\);
3. if \(a\) is a descendant of \(b\) with \(c>b>a\), then \(a\) belongs to the right descendant of \(b\);
4. if it exists, the strict right child of \(a\) is empty.
Figure 6. Illustration of the transitivity and planarity conditions on \(s\)-tree inversion sets. (Figure 5 of [10])
The above definition can be visually interpreted from the tree: basically, \(a\) is located at the _right end_ of a non right subtree of \(c\), with the exception \(s(a)=0\), when \(a\) is allowed to have more children on its unique leg. Figure 8 shows a schematic illustration of these two possible occurrences of an \((a,c)\) ascent.
We prove in [10, Theorem 1.32] that tree-ascents correspond to cover relations of the \(s\)-weak order. More precisely, to each tree ascent \((a,c)\) of a tree \(T\), one can associate an \(s\)_-tree rotation_ by increasing the cardinality of \(\#_{T}(c,a)\) by one and performing the transitive closure of the resulting multi-set of inversions. In [10, Lemma 1.29], we show that it increases all cardinalities \(\#(c,a^{\prime})\) by one where \(a^{\prime}\) is \(a\) or a non-left descendant of \(a\). The obtained tree covers \(T\) in the \(s\)-weak order.
It is sometimes useful to characterize tree-ascents using solely properties on tree-inversions. In [10], we prove the following result, which basically translates all the conditions of Definition 1.1.7 in terms of tree-inversions.
Figure 8. Schematic illustration of two \(s\)-trees containing an ascent \((a,c)\). (Figure 8 from [10])
Figure 7. The computation of \(T\lor R\) in the \(s\)-weak lattice (Figure 7 of [10])
**Proposition 1.1.8** (Proposition 1.27 of [13]).: _A couple \((a,c)\) is a tree-ascent if and only if it satisfies the following statements_
1. _for all_ \(d\) _such that_ \(c<d\leq n\)_, we have_ \(\#(d,c)=\#(d,a)\)_;_
2. \(\#(c,a)<s(c)\)_;_
3. _for all_ \(b\) _such that_ \(a<b<c\)_, then_ \(\#(c,b)=\#(c,a)\) _implies that_ \(\#(b,a)=s(b)\)_;_
4. _if_ \(s(a)>0\)_, for all_ \(a^{\prime}<a\)_, then_ \(\#(a,a^{\prime})=s(a)\) _implies that_ \(\#(c,a^{\prime})>\#(c,a)\)_._
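The four conditions above translate directly into code. The following sketch (our own illustration, with tree-inversions encoded as a dictionary \(\{(c,a):\text{cardinality}\}\) and \(s\) as a tuple) lists all tree-ascents of an \(s\)-decreasing tree from its inversion multi-set.

```python
def tree_ascents(inv, s):
    """List the tree-ascents (a, c) of an s-decreasing tree, given its
    tree-inversions as a dict {(c, a): cardinality}, via Proposition 1.1.8."""
    n = len(s)
    def card(c, a):
        return inv.get((c, a), 0)
    ascents = []
    for a in range(1, n + 1):
        for c in range(a + 1, n + 1):
            if card(c, a) >= s[c - 1]:                                       # condition (2)
                continue
            if any(card(d, c) != card(d, a) for d in range(c + 1, n + 1)):   # condition (1)
                continue
            if any(card(c, b) == card(c, a) and card(b, a) != s[b - 1]
                   for b in range(a + 1, c)):                                # condition (3)
                continue
            if s[a - 1] > 0 and any(card(a, x) == s[a - 1] and card(c, x) <= card(c, a)
                                    for x in range(1, a)):                   # condition (4)
                continue
            ascents.append((a, c))
    return ascents
```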
### The \(s\)-permutahedron
The key idea of this paper is that even if the \(s\)-weak order is just a poset, it actually possesses a much richer underlying geometric structure, encoded in a combinatorial object which we will call the \(s\)-permutahedron. The "faces" of the \(s\)-permutahedron will be given by pure intervals of the \(s\)-weak order, which we now introduce.
#### 1.2.1. Pure intervals of the \(s\)-weak order
Let \(T\) be an \(s\)-decreasing tree and \(A\) be a (possibly empty) subset of tree-ascents of \(T\). We denote by \(T+A\) the \(s\)-decreasing tree whose inversion set is obtained by increasing the cardinality \(\#_{T}(c,a)\) for \((a,c)\in A\) by one and then taking the transitive closure.2
Footnote 2: We use this notation for simplicity but it is not entirely consistent with [13]. Indeed, we used the notation \(+(c,a)\) in [13] on multi-sets to increase the cardinality \(\#(c,a)\)_without_ taking the transitive closure. Besides, \(A\) is formally a set of tree-ascents \((a,c)\) and not a set of tree-inversions \((c,a)\).
**Definition 1.2.1**.: Let \(T_{1}\preccurlyeq T_{2}\) be two \(s\)-decreasing trees for a given weak composition \(s\). We say that the interval \([T_{1},T_{2}]\) is a _pure interval_ if \(T_{2}=T_{1}+A\) with \(A\) a subset of tree-ascents of \(T_{1}\).
We present examples on Figures 9, 10, and 11.
A pure interval \([T_{1},T_{1}+A]\) is represented by the tree \(T_{1}\) where we color in red all nodes \(a\) such that there is a tree-ascent \((a,c)\) in \(A\). By Remark 1.25 of [13], the smaller value \(a\) is enough to identify the tree-ascent \((a,c)\): indeed \(c\) is the ascendant of \(a\) with minimal value such that \(\#_{T_{1}}(c,a)<s(c)\). It is the only possibility for a tree-ascent \((a,*)\) to exist (whereas there could be other tree-ascents \((*,c)\)).
On Figure 9, the nodes \(2\) and \(3\) are colored, representing respectively the tree-ascents \((2,3)\) and \((3,4)\). On the right, the corresponding interval is shown. The maximum is obtained by increasing the
Figure 9. Example of a pure interval. On the left, the minimal tree with colored tree-ascents and on the right the corresponding interval.
cardinalities of the tree-inversions \((3,2)\) and \((4,3)\) and taking the transitive closure. For example, we see that \((4,1)\) has been increased by transitivity because \((4,3)\) was increased and \(\#(3,1)>0\). This is also equivalent to taking the lattice join of the two trees obtained by applying the two \(s\)-tree-rotations of the two selected tree-ascents (see Lemma 1.2.2). We present a second example on Figure 10, showing the minimal element of the interval with selected tree-ascents and the maximal element of the interval. You can check that all tree-inversions corresponding to tree-ascents have been increased.
**Lemma 1.2.2**.: _Let \(T_{1}\) and \(T_{2}\) be two \(s\)-decreasing trees such that \(T_{2}=T_{1}+A\) with \(A\) a subset of tree-ascents of \(T_{1}\). Then \(T_{2}\) can be obtained as the join \(T_{2}=\bigvee_{a\in A}(T_{1}+a)\) over all trees \(T_{1}+a\) obtained from \(T_{1}\) by rotating a tree-ascent \(a\in A\)._
Proof.: We denote by \(T_{2}\) the join of the rotated trees by the tree-ascents of \(A\) and prove that \(T_{2}=T_{1}+A\). For a tree-ascent \((a,c)\), let \(M_{(a,c)}\) be the multi-set of inversions such that \(\#_{M_{(a,c)}}(c,a)=\#_{T_{1}}(c,a)+1\) while all other values are unchanged. Similarly, we write \(M_{A}:=\cup_{(a,c)\in A}M_{(a,c)}\), _i.e._, all cardinalities \((c,a)\) with \((a,c)\in A\) have been increased by one. In particular, the tree-inversions of the tree \(T_{1}+(a,c)\) resulting from a single rotation \((a,c)\) are given by \(M_{(a,c)}^{\mathsf{tc}}\), _i.e._, the transitive closure of \(M_{(a,c)}\). Similarly, the tree-inversions of \(T+A\) are given by \(M_{A}^{\mathsf{tc}}\).
By definition, we have \(M_{(a,c)}\subseteq M_{A}\) for all \((a,c)\in A\) where the inclusion is the multi-set inclusion defined in Definition 1.1.1. This is compatible with the multi-set transitive closure of Definition 1.1.4,
Figure 11. Example of a pure interval. On the left, the minimal tree of the interval with selected
tree-ascents and on the right, the interval drawn as the skeleton of a 3d polytope.
Figure 10. Example of a pure interval. On the left, the minimal tree of the interval with selected tree-ascents and on the right, the maximal tree of the corresponding interval.
and so we have \(M^{\mathsf{tc}}_{(a,c)}\subseteq M^{\mathsf{tc}}_{A}\). In other words, \(T_{1}+(a,c)\preccurlyeq T_{1}+A\). By taking the lattice join, we obtain \(T_{2}\preccurlyeq T_{1}+A\).
Now by definition of the join, the inversions of \(T_{2}\) are given by \(\operatorname{inv}(T_{2})=M^{\mathsf{tc}}_{2}\) where \(M_{2}=\cup_{(a,c)\in A}M^{\mathsf{tc}}_{(a,c)}\). In particular, \(M_{2}\neq M_{A}\) as it is the union of the transitive closures of the \(M_{(a,c)}\) multi-sets. But, as the transitive closure can only increase the cardinality of a tree-inversion, for each \((a,c)\in A\) we know that \(\#_{M_{2}}(c,a)\geq\#_{M_{(a,c)}}(c,a)=\#_{T_{1}}(c,a)+1=\#_{M_{A}}(c,a)\). Besides, for each couple \(a^{\prime}<c^{\prime}\) such that \((a^{\prime},c^{\prime})\not\in A\), we have \(\#_{M_{2}}(c^{\prime},a^{\prime})\geq\#_{T_{1}}(c^{\prime},a^{\prime})=\#_{M _{A}}(c^{\prime},a^{\prime})\). We obtain \(M_{A}\subseteq M_{2}\). By taking the transitive closure, we have \(\operatorname{inv}(T_{1}+A)\subseteq\operatorname{inv}(T_{2})\) and so \(T_{1}+A\preccurlyeq T_{2}\).
#### 1.2.2. The combinatorial complex of pure intervals
We are now ready to define one of the main objects of this paper, the \(s\)-permutahedron.
**Definition 1.2.3** (The \(s\)-permutahedron).: The \(s\)_-permutahedron_ Perm\((s)\) is the collection of pure intervals \([T,T+A]\) of the \(s\)-weak order. Here, \(T\) denotes an \(s\)-decreasing tree and \(A\) a subset of tree-ascents of \(T\). The _dimension_ of \([T,T+A]\) is said to be equal to \(|A|\). In particular,
1. the vertices of Perm\((s)\) are \(s\)-decreasing trees \(T\), and
2. two \(s\)-decreasing trees are connected by an edge if and only if they are related by an \(s\)-tree rotation.
We often refer to pure intervals \([T,T+A]\) as _faces_ of Perm\((s)\), and say that one face is contained in another if the containment holds as intervals in the \(s\)-weak order.
Figure 1 illustrates an example of the \(s\)-permutahedron Perm\((0,2,2)\). As we can see, it is a polytopal complex whose faces are labeled by pure intervals, and whose edge graph is the Hasse diagram of the \(s\)-weak order.
As we have a (combinatorial) notion of face and dimension, we can also define the \(f\)_-polynomial_ associated to a weak composition \(s\) by
\[f_{s}(t):=\sum_{F\in\operatorname{Perm}(s)}t^{\dim(F)} \tag{3}\]
The \(f\)-polynomial for our example in Figure 1 is
\[f_{(0,2,2)}(t)=15+20t+6t^{2}. \tag{4}\]
Indeed, there are \(15\)\(s\)-decreasing trees (faces of dimension \(0\)), \(20\) edges (faces of dimension \(1\)) and \(6\) pure intervals of dimension \(2\), which correspond to the \(6\) polygons that appear naturally on the lattice. The following proposition is straightforward from the definition.
**Proposition 1.2.4**.: _The \(f\)-polynomial of the \(s\)-permutahedron_ Perm\((s)\)_is given by_
\[\sum_{T}(1+t)^{\operatorname{asc}(T)},\]
_where the sum runs over all \(s\)-decreasing trees \(T\) and \(\operatorname{asc}(T)\) denotes the number of ascents of \(T\)._
Proof.: Let
\[f_{s}(t)=\sum_{T}(1+t)^{\operatorname{asc}(T)}=c_{0}+c_{1}t+c_{2}t^{2}+\ldots.\]
We need to show that \(c_{k}\) counts the number of \(k\)-dimensional faces of Perm\((s)\). This follows from the fact that every subset of ascents \(A\) of \(T\), of size \(k\), contributes a \(t^{k}\) to the term \((1+t)^{\operatorname{asc}(T)}\).
The polynomial \(\sum_{T}t^{\operatorname{asc}(T)}\) is a natural generalization of the Eulerian polynomial which we call the \(s\)_-Eulerian polynomial_. Its coefficients are called the \(s\)_-Eulerian numbers_.
There is actually a recursive way to compute \(f_{s}\) which is much better in practice than enumerating all the \(s\)-decreasing trees. This is described in the following proposition.
**Proposition 1.2.5**.: _If \(s\) is a weak composition of length \(1\), then \(f_{s}(t)=1\). If \(s\) is a weak composition of length \(n\geq 2\), then we write \(s=(\tilde{s},u,v)\) where \(\tilde{s}\) is the sequence obtained by removing the last two values in \(s\), \(u=s(n-1)\) and \(v=s(n)\). Then_
1. _if_ \(n>2\) _and_ \(u=0\)_, then_ \(f_{s}=f_{(\tilde{s},v)}f_{(0,v)}\)_;_
2. _if_ \(n=2\) _or_ \(u>0\)_, then_ \(f_{s}(t)=(v+1)f_{(\tilde{s},u+v)}(t)+vtf_{(\tilde{s},u+v-1)}(t)\)_._
For example, let \(s=(u,v)\) be a weak composition of length \(2\). The \(s\)-decreasing trees have two nodes labeled \(1\) and \(2\). The node \(2\) is the root and has \(v+1\) children. The \(s\)-weak order is the chain between the \(s\)-decreasing tree where \(1\) is at the extreme left and the \(s\)-decreasing tree where \(1\) is at the extreme right. There are \(v+1\) \(s\)-decreasing trees (corresponding to the possible positions of \(1\)) and \(v\) edges. We get \(f_{(u,v)}(t)=v+1+vt\). By Proposition 1.2.4, we also obtain \(f_{(u,v)}(t)=v(1+t)+1\). Indeed all trees but the maximal one have one ascent. The recursive computation of Proposition 1.2.5 also gives \(f_{(u,v)}(t)=(v+1)f_{(u+v)}(t)+vtf_{(u+v-1)}(t)=v+1+vt\).
We can also compute the \(f\)-polynomial of \(\operatorname{Perm}(0,2,2)\) in Figure 1 using our recursive formula in Proposition 1.2.5. We obtain
\[f_{(0,2,2)}(t)=3f_{(0,4)}(t)+2tf_{(0,3)}(t)=3(5+4t)+2t(4+3t)=15+20t+6t^{2}.\]
For the \(\operatorname{Perm}(0,0,2)\) (Figure 2), we get
\[f_{(0,0,2)}(t)=f_{(0,2)}(t)f_{(0,2)}(t)=(3+2t)^{2}=9+12t+4t^{2}\]
We present many computations in Table 1. You can also compute more examples using our demo SageMath worksheet [12].
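For readers who want to reproduce these numbers without SageMath, here is a small standalone Python sketch of the recursion of Proposition 1.2.5, with polynomials stored as coefficient lists \([c_{0},c_{1},\dots]\); it returns \([15,20,6]\) for \(s=(0,2,2)\) and \([9,12,4]\) for \(s=(0,0,2)\), matching the computations above.

```python
def f_polynomial(s):
    """f-polynomial of Perm(s) as a coefficient list [c_0, c_1, ...],
    computed with the recursion of Proposition 1.2.5."""
    s = tuple(s)
    if len(s) == 1:
        return [1]
    tilde, u, v = s[:-2], s[-2], s[-1]

    def add(p, q):
        res = [0] * max(len(p), len(q))
        for i, c in enumerate(p):
            res[i] += c
        for i, c in enumerate(q):
            res[i] += c
        return res

    def mul(p, q):
        res = [0] * (len(p) + len(q) - 1)
        for i, a in enumerate(p):
            for j, b in enumerate(q):
                res[i + j] += a * b
        return res

    if len(s) > 2 and u == 0:                                            # Case (1)
        return mul(f_polynomial(tilde + (v,)), f_polynomial((0, v)))
    first = [(v + 1) * c for c in f_polynomial(tilde + (u + v,))]        # Case (2)
    second = [0] + [v * c for c in f_polynomial(tilde + (u + v - 1,))]   # the v*t term
    return add(first, second)

# f_polynomial((0, 2, 2)) == [15, 20, 6]
# f_polynomial((0, 0, 2)) == [9, 12, 4]
```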
Proof.: The proof can be done bijectively. The base case is trivial: if \(s\) is of length \(1\), whatever the value of \(s(1)\) is, there is a unique \(s\)-decreasing tree with no ascent giving \(f(s)=1\). We then define two different bijections corresponding to Cases (1) and (2) and illustrate them on examples in Figures 12 and 13 respectively. For Case (2), the bijection is divided into two different cases, depending on whether or not \((n-1,n)\) is a selected tree-ascent. This corresponds to the two summands of the recursion. Looking at the examples, we believe that the bijections are straightforward. We explain them in detail below.
Let us prove Case (1). We have \(s=(\tilde{s},0,v)\). We prove that the pure intervals of \(s\) are in bijection with products of pure intervals of \((\tilde{s},v)\) and \((0,v)\). Let \(T\) be an \(s\)-decreasing tree and \(A\) a subset of tree-ascents of \(T\). The root of \(T\) is necessarily \(n\) with \(v+1\) subtrees. The node \(n-1\) is the root of one of the subtrees. As we have \(s(n-1)=0\), the node \(n-1\) has a unique child which we write \(Q\) (it can be empty or has a root \(i<n-1\)). Let \(s^{\prime}=(\tilde{s},v)\), we construct an \(s^{\prime}\)-decreasing tree \(T^{\prime}\) by replacing the node \(n-1\) by its unique child \(Q\). All the other subtrees stay the same. We relabel the root to \(n-1\) instead of \(n\) to keep a standard labeling. If \((a,c)\) with \(a<c<n-1\) is a tree-ascent in \(T\), it is still a tree-ascent in \(T^{\prime}\): it is either in \(Q\) or in any of the other subtrees of \(n\). As \(s(n-1)=0\), we cannot have a tree-ascent \((a,n-1)\). Besides, if \((a,n)\) is a tree-ascent in \(T\) with \(a\neq n-1\), then \((a,n-1)\) is a tree-ascent in \(T^{\prime}\). Indeed, all conditions of Definition 1.1.7 are still satisfied. We set \(A^{\prime}=\{(a,c)\in A;a<c<n-1\}\cup\{(a,n-1);(a,n)\in A\text{ and }a\neq n-1\}\).
Now we define \(s^{\prime\prime}=(0,v)\) and \(T^{\prime\prime}\) the \(s^{\prime\prime}\)-decreasing tree obtained by keeping only the nodes \(n\) and \(n-1\), which we relabel into \(2\) and \(1\) respectively. If \((n-1,n)\) is a tree-ascent of \(T\), we set \(A^{\prime\prime}=\{(1,2)\}\), otherwise \(A^{\prime\prime}=\varnothing\). In the end, we have \(|A^{\prime}|+|A^{\prime\prime}|=|A|\) so the monomial corresponding to the pure interval \((T,A)\) is obtained by a product of monomials of \((T^{\prime},A^{\prime})\) and \((T^{\prime\prime},A^{\prime\prime})\). Besides, the reverse operation is easy: \(T^{\prime\prime}\) gives the position where \(n-1\) needs to be inserted. The selected tree-ascents stay the same up to relabeling. In particular, if \((a,n-1)\) is a tree-ascent in \(T^{\prime}\), then \((a,n)\) is a tree-ascent in \(T\) even if \(a\in Q\) because it is still at the right end of the subtree. We illustrate this bijection on Figure 12 with \(s=(0,0,2,0,3)\).
Let us prove Case (2). Note that if \(s\) is of length \(2\), we have shown earlier that the recursive computation gives the expected result. We then assume that \(s=(\tilde{s},u,v)\) and \(u\neq 0\). In this case, we prove that there is a bijection between pure intervals of \(s\) and some marked pure intervals of \((\tilde{s},u+v)\) and \((\tilde{s},u+v-1)\). Let \(T\) be an \(s\)-decreasing tree and \(A\) a subset of tree-ascents of \(T\). Let us first suppose that \((n-1,n)\not\in A\).
In this case, \((T,A)\) is sent to a marked pure interval of \(s^{\prime}=(\tilde{s},u+v)\). We construct \(T^{\prime}\) by _merging_ the nodes \(n\) and \(n-1\). In \(T\), \(n-1\) has \(u+1\) subtrees which become subtrees of the root in \(T^{\prime}\), at the former position of \(n-1\). The root is relabeled \(n-1\) in \(T^{\prime}\) and now has \(u+v+1\) subtrees (it has lost one subtree and gained \(u+1\)). Any tree-ascent of \(T\), \((a,c)\) with \(a<c<n-1\), is still a tree-ascent in \(T^{\prime}\). Similarly, if \((a,n-1)\) was a tree-ascent in \(T\), it is also a tree-ascent in \(T^{\prime}\) (except that now \(n-1\) is the root). Finally, if \((a,n)\) was a tree-ascent in \(T\) and \(a\neq n-1\), then \((a,n-1)\) is a tree-ascent in \(T^{\prime}\) (whether \(a\) was a descendant of \(n-1\) or not). As we have by hypothesis that \((n-1,n)\not\in A\), we can _keep_ the selected tree-ascents of \(T\) in \(T^{\prime}\). We get \(A^{\prime}=\{(a,c)\in A;a<c\leq n-1\}\cup\{(a,n-1);(a,n)\in A\}\) with \(|A^{\prime}|=|A|\).

Figure 12. Bijective illustration of Case (1) of the recursive \(f\)-polynomial computation.

Figure 13. Bijective illustration of Case (2) of the recursive \(f\)-polynomial computation.
This operation is surjective. To obtain a bijection, we need to mark \(u+1\) consecutive edges from the root of \(T^{\prime}\): they give the subtrees and insertion position of the node \(n-1\) in \(T\). All tree-ascents of \(T^{\prime}\) are still tree-ascents in \(T\): a tree-ascent \((a,n-1)\) in \(T^{\prime}\) corresponds to \((a,n-1)\) in \(T\) if \(a\) belongs to the first \(u\) marked edges, and otherwise it transforms into a tree-ascent \((a,n)\). We thus obtain all pure intervals of \(s\) where \((n-1,n)\not\in A\). As there are \(v+1\) ways to mark the edges of an \(s^{\prime}\)-decreasing tree, we obtain that \(f_{s}(t)=(v+1)f_{s^{\prime}}(t)+f_{s}^{\prime\prime}(t)\) where \(f_{s}^{\prime\prime}(t)\) enumerates the pure intervals of \(s\) where \((n-1,n)\) is in \(A\). This bijection is illustrated for \(s=(0,0,0,2,3)\) in the first column of Figure 13: we show the \(4\) possible markings on a given pure interval of \((0,0,0,5)\) and their pre-images.
If \((n-1,n)\in A\), we send \((T,A)\) to a pure interval of \(s^{\prime\prime}=(\tilde{s},u+v-1)\). As \((n-1,n)\) is a tree-ascent, we know that the last subtree of \(n-1\) in \(T\) is empty (this is true because \(u\neq 0\)). We then construct \(T^{\prime\prime}\) by _deleting_ this last subtree and then _merging_ the nodes \(n-1\) and \(n\). The new root in \(T^{\prime\prime}\) now has \(u+v\) subtrees. The same arguments as earlier work for tree-ascents different from \((n-1,n)\) and we can define \(A^{\prime\prime}=\{(a,c)\in A;a<c\leq n-1\}\cup\{(a,n-1);(a,n)\in A\mbox{ and }a\neq n-1\}\). As we _lose_ the tree-ascent \((n-1,n)\), we have \(|A^{\prime\prime}|=|A|-1\). Besides, to obtain a bijection we now need to mark \(u\) consecutive edges of the root, but we cannot mark the last \(u\) edges. Indeed, we need \((n-1,n)\) to be a tree-ascent of \(T\), which implies that \(n-1\) can never be in the last subtree of \(n\). As there are \(v\) possible ways to mark the edges of an \(s^{\prime\prime}\)-decreasing tree, we obtain \(f^{\prime\prime}_{s}(t)=vtf_{s^{\prime\prime}}(t)\). We illustrate the bijection for \(s=(0,0,0,2,3)\) on the second column of Figure 13: we show the \(3\) possible markings on a given pure interval of \((0,0,0,4)\) and their pre-images.
Looking at many different examples, we believe that the \(s\)-permutahedron can be realized as a polytopal complex for any weak composition \(s\) (see Part 3). Two essential conditions for this to hold are listed in the following theorem, which is the main result of Part 1 of this paper.
**Theorem 1.2.6**.: _For any weak composition \(s\), the \(s\)-permutahedron \(\operatorname{Perm}(s)\) is a combinatorial complex in the following sense._
1. _The face of a face is also a face:_ _If_ \(I\in\operatorname{Perm}(s)\) _is a face (a pure interval) and_ \(J\subseteq I\) _is a face (a pure interval) then_ \(J\in\operatorname{Perm}(s)\)_._
2. _The intersection of any two faces is also a face:_ _If_ \(I,J\in\operatorname{Perm}(s)\) _are two faces (pure intervals) then_ \(I\cap J\in\operatorname{Perm}(s)\) _is also a face (a pure interval)._
Part (1) of this theorem follows by definition. The intersection property in Part (2) is more involved and we need to develop some tools in order to prove it. First, we will give a complete characterization of pure intervals in Section 1.3 (Theorem 1.3.9), which will then be used to prove the intersection property in Section 1.4 (Theorem 1.4.1).
### Characterization of pure intervals
Although the definition of pure intervals is very simple and intuitive, we now aim to provide a characterization which is rather technical but very convenient for proofs (Theorem 1.3.9). In order to do this, we need to introduce the notions of variations, essential variations, and minimal essential variations of an interval. Roughly speaking, the variations are the inversions that increase in the interval, while the minimal essential variations of a pure interval \([T_{1},T_{1}+A]\) will characterize the subset \(A\) of tree-ascents (Theorem 1.3.4).
#### 1.3.1. Variations and essential variations of an interval
**Definition 1.3.1** (Variations).: Let \(T_{1}\preccurlyeq T_{2}\) be two \(s\)-decreasing trees for a given weak composition \(s\). We say that a tree-inversion \((c,a)\) with \(c>a\)_varies_ in the interval \([T_{1},T_{2}]\) if \(\#_{T_{2}}(c,a)>\#_{T_{1}}(c,a)\). In this case, we say that \((c,a)_{v}\) where \(v=\#_{T_{1}}(c,a)\) is a _variation_ of \([T_{1},T_{2}]\). We call \(v\) the _value_ of the variation. The difference \(\#_{T_{2}}(c,a)-\#_{T_{1}}(c,a)\) is the _amplitude_. We sometimes omit the value \(v\) and write that \((c,a)\) is a variation, meaning that there exists \(v\) for which \((c,a)_{v}\) is a variation.
We say that \((c,a)_{v}\) is an _essential variation_ of \([T_{1},T_{2}]\) if \((c,a)_{v}\) is a variation and there is no \(b\) with \(a<b<c\) such that \(a\) belongs to a middle child of \(b\) and \((c,b)\) varies. Finally, we say that an essential variation \((c,a)\) is _minimal_ if there is no \(b\) with \(a<b<c\) with \((b,a)\) an essential variation.
We write \(\operatorname{Var}([T_{1},T_{2}])\) for the set of variations of the interval, and \(\operatorname{EVar}([T_{1},T_{2}])\) for the set of essential variations.
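For readers who want to experiment outside the SageMath worksheet [11], the following Python sketch extracts variations, essential variations, and minimal essential variations of an interval once each tree is encoded by the dictionary of its inversion cardinalities \(\#_{T}(c,a)\). The dictionary encoding, and the test \(0<\#_{T_{1}}(b,a)<s(b)\) for "\(a\) belongs to a middle child of \(b\)", are assumptions of this sketch rather than definitions taken from the text.

```python
# Sketch: variations, essential variations and minimal essential variations of an
# interval [T1, T2] (Definition 1.3.1).  A tree T on {1, ..., n} is encoded by a
# dict inv[(c, a)] = #_T(c, a) for every pair c > a; s is a dict {node: s(node)}.

def variations(inv1, inv2):
    """Variations (c, a)_v: pairs whose cardinality strictly increases, with v = #_{T1}(c, a)."""
    return {(c, a): inv1[(c, a)]
            for (c, a) in inv1 if inv2[(c, a)] > inv1[(c, a)]}

def in_middle_child(inv, s, b, a):
    """Assumed test: a belongs to a middle child of b iff 0 < #_T(b, a) < s(b)."""
    return 0 < inv[(b, a)] < s[b]

def essential_variations(inv1, inv2, s):
    """Variations (c, a)_v with no b, a < b < c, such that a lies in a middle child of b in T1 and (c, b) varies."""
    var = variations(inv1, inv2)
    return {(c, a): v for (c, a), v in var.items()
            if not any((c, b) in var and in_middle_child(inv1, s, b, a)
                       for b in range(a + 1, c))}

def minimal_essential_variations(inv1, inv2, s):
    """Essential variations (c, a) with no b, a < b < c, such that (b, a) is essential."""
    ess = essential_variations(inv1, inv2, s)
    return {(c, a): v for (c, a), v in ess.items()
            if not any((b, a) in ess for b in range(a + 1, c))}
```

On a pure interval \([T_{1},T_{1}+A]\), the last function is expected to return exactly the tree-ascents in \(A\), read with reversed pairs (see Theorem 1.3.4 below).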
We present below two examples based on Figures 9 and 10. These can be computed using SageMath as we show in [11].
**Example 1.3.2**.: On the pure interval of Figure 9, the variations with their respective values are \((3,1)_{1}\), \((3,2)_{1}\), \((4,1)_{0}\), \((4,2)_{0}\), \((4,3)_{0}\). They are all of amplitude \(1\). Only \((3,2)_{1}\) and \((4,3)_{0}\) are essential variations. For example, \((4,2)_{0}\) is not an essential variation because \(2\) belongs to the middle child of \(3\) and \((4,3)\) varies. In this case, both essential variations are minimal and correspond to the tree-ascents in \(A\) of the minimal tree.
**Example 1.3.3**.: The pure interval of Figure 10 has \(15\) variations, \(10\) of which are essential variations: \((4,1)_{0}\), \((6,2)_{1}\), \((9,1)_{0}\), \((9,4)_{0}\), \((9,6)_{1}\), \((10,1)_{2}\), \((10,4)_{2}\), \((10,5)_{0}\), \((10,8)_{0}\), \((10,9)_{2}\). For example, \((10,2)_{2}\) is a variation but not an essential variation. Besides, there are seven minimal essential variations: \((4,1)_{0}\), \((6,2)_{1}\), \((9,4)_{0}\), \((9,6)_{1}\), \((10,5)_{0}\), \((10,8)_{0}\), \((10,9)_{2}\). For example, \((9,1)_{0}\) is not minimal because \((4,1)_{0}\) is an
essential variation. The seven minimal essential variations in this case also correspond to the tree-ascents in \(A\) of the minimal tree.
In Examples 1.3.2 and 1.3.3, the minimal essential variations of both pure intervals \([T_{1},T_{1}+A]\) correspond to the tree-ascents in \(A\). This property holds in general and will be proved later on.
**Theorem 1.3.4** (Minimal essential variations of pure intervals).: _The set of minimal essential variations of a pure interval \([T_{1},T_{1}+A]\) is in correspondence with \(A\). More precisely, \((a,c)\in A\) if and only if there exists \(v\) for which \((c,a)_{v}\) is a minimal essential variation._
#### 1.3.2. **The Characterization Theorem**
In order to characterize the variation sets of pure intervals we need some further properties.
**Lemma 1.3.5**.: _For all \(a<c\) such that \((c,a)_{v}\) is a variation but not an essential variation, there is a unique \(b>a\) such that \((c,b)_{v}\) is an essential variation and \(a\) belongs to a middle child of \(b\)._
Proof.: Let us suppose that \((c,a)_{v}\) is a variation but not an essential variation. We know there is \(b\) such that \(a\) belongs to a middle child of \(b\) and \((c,b)\) varies. We take such a \(b\) to be maximal. Note that as \(a\) is a descendant of \(b\), then \(\#_{T_{1}}(c,b)=\#_{T_{1}}(c,a)=v\). As \(b\) is maximal, there is no \(b^{\prime}>b\) such that \((c,b^{\prime})\) varies and \(a\) belongs to a middle child of \(b^{\prime}\). This means that there is no \(b^{\prime}>b\) such that \((c,b^{\prime})\) varies and \(b\) belongs to a middle child of \(b^{\prime}\) (because \(a\) is a descendant of \(b\)). Therefore, \((c,b)\) is an essential variation.
Now, let us suppose we have \(c>b_{1}>b_{2}>a\) such that \((c,b_{1})_{v}\) is an essential variation and \(a\) belongs to a middle child of \(b_{1}\). If \(a\) is a descendant of \(b_{2}\), then \(b_{2}\) belongs to the same middle child of \(b_{1}\) as \(a\). Therefore, \((c,b_{2})\) is not an essential variation. This proves the uniqueness part.
For example, there are three non-essential variations on the pure interval of Figure 9. For \((3,1)_{1}\) the unique essential variation is \((3,2)_{1}\). For both \((4,2)_{0}\) and \((4,1)_{0}\), the unique essential variation is \((4,3)_{0}\).
**Lemma 1.3.6** (Transitivity).: _Let \([T_{1},T_{2}]\) be an interval. Suppose that we have \(c>b>a\) with \((c,b)\) and \((b,a)\) variations. Then \((c,a)\) also varies._
Proof.: This is immediate. Indeed, we have by hypothesis \(\#_{T_{1}}(b,a)<s(b)\), we obtain by planarity (Definition 1.1.1) that \(\#_{T_{1}}(c,a)\leq\#_{T_{1}}(c,b)\). As both \((c,b)\) and \((b,a)\) vary, we obtain \(\#_{T_{2}}(c,b)>\#_{T_{1}}(c,b)\) and \(\#_{T_{2}}(b,a)>\#_{T_{1}}(b,a)\geq 0\) which gives \(\#_{T_{2}}(c,a)\geq\#_{T_{2}}(c,b)>\#_{T_{1}}(c,b)\geq\#_{T_{1}}(c,a)\) by transitivity.
**Definition 1.3.7** (+1-interval).: An interval \([T_{1},T_{2}]\) is said to be a \(+1\)_-interval_ if all variations have amplitude one, _i.e._, for all \(a<b\) we have
\[\#_{T_{1}}(b,a)\leq\#_{T_{2}}(b,a)\leq\#_{T_{1}}(b,a)+1. \tag{5}\]
**Lemma 1.3.8**.: _Every pure interval is a \(+1\)-interval._
Proof.: Let \([T,T+A]\) be a pure interval and take \(b>a\). Suppose that \(\#_{T}(b,a)=v\) and \(\#_{T+A}(b,a)=v+k\) with \(k>1\). Let \(S\) be the multi-set of inversions such that \(\#_{S}(c,a)=\#_{T}(c,a)+1\) if \((a,c)\in A\) and otherwise \(\#_{S}(c,a)=\#_{T}(c,a)\). In particular, \(S^{\mathsf{tc}}\) is the tree-inversion set of \(T+A\). There is a transitivity path in \(S\)
\[b=b_{1}>b_{2}>\cdots>b_{k}=a\]
with \(\#_{S}(b_{1},b_{2})=v+k\) and \(\#_{S}(b_{i},b_{i+1})>0\). This path does not exist as such in \(\operatorname{inv}(T)\) because \(\operatorname{inv}(T)\) is transitive and \(\#_{T}(b,a)=v\). Besides, by definition of \(S\), we have \(\#_{T}(b_{i},b_{i+1})\leq\#_{S}(b_{i},b_{i+1})\leq\#_{T}(b_{i},b_{i+1})+1\). This gives us in particular
\[\#_{T}(b_{1},b_{2})\geq v+k-1>v \tag{6}\]
because \(k>1\) by hypothesis. This implies that there exists \(i\) such that \(\#_{T}(b_{i},b_{i+1})=0\). We take the minimal \(i\) that satisfies this property (\(b_{i}\) is as close to \(b\) as possible). In particular, by transitivity, \(\#_{T}(b,b_{i})\geq v+k-1>v\). By definition of \(S\), we have that \((b_{i+1},b_{i})\) is a tree-ascent of \(T\) which belongs to \(A\). By Condition (i) of Definition 1.1.7, it means that \(b_{i+1}\) is a descendant of \(b_{i}\). In particular, they belong to the same subtree of \(b\) and we have \(\#_{T}(b,b_{i+1})=\#_{T}(b,b_{i})>v\).
Now, if \(j>i\) with \(\#_{T}(b_{j},b_{j+1})=0\), by taking the minimal \(j\) satisfying this property, we prove in a similar way that \(\#_{T}(b,b_{j+1})>v\). By induction, we obtain that for all \(i\) with \(\#_{T}(b_{i},b_{i+1})=0\), we have \(\#_{T}(b,b_{i+1})>v\). Let \(i^{\prime}\) be the maximal value satisfying the property. We have that
\[b=b_{1}>b_{i^{\prime}}>b_{i^{\prime}+1}>\cdots>b_{k}=a\]
is a transitive path in \(\operatorname{inv}(T)\), which implies that \(\#_{T}(b,a)>v\) and leads to a contradiction.
Lemma 1.3.8 is not an equivalence; see Figure 14 for an example of a \(+1\)-interval that is not a pure interval. The main result of this section is the following characterization of pure intervals.
**Theorem 1.3.9** (Characterization Theorem of pure intervals).: _Let \(T_{1}\) and \(T_{2}\) be two \(s\)-decreasing trees for a given weak composition \(s\), then \([T_{1},T_{2}]\) is a pure interval if and only if it is a \(+1\)-interval and for all \(a<b<c\)_
1. _if_ \((c,a)_{v}\in\operatorname{Var}([T_{1},T_{2}])\) _and_ \((b,a)_{w}\in\operatorname{Var}([T_{1},T_{2}])\) _then_ \((c,b)_{v}\in\operatorname{Var}([T_{1},T_{2}])\)_;_
2. _if_ \((c,a)_{v}\in\operatorname{EVar}([T_{1},T_{2}])\) _and_ \((c,b)_{v}\in\operatorname{EVar}([T_{1},T_{2}])\) _and_ \(s(b)\neq 0\) _then_ \((b,a)_{0}\in\operatorname{Var}([T_{1},T_{2}])\)_._
Note that for Condition (2), we require that \(\#_{T_{1}}(c,a)=\#_{T_{1}}(c,b)\).
**Example 1.3.10**.: You can check that the two conditions of Theorem 1.3.9 are satisfied on the interval of Figure 10. For (1), you can see for example that both \((10,4)_{2}\) and \((9,4)_{0}\) are variations and indeed \((10,9)_{2}\) is also a variation. Similarly, \((9,1)_{0}\) and \((4,1)_{0}\) are essential variations, and \((9,4)_{0}\) is a variation.
Besides, (2) implies that as \((10,4)_{2}\) and \((10,9)_{2}\) are essential variations and \(s(9)>0\), then \((9,4)_{0}\) is a variation. Similarly, as \((9,1)_{0}\) and \((9,4)_{0}\) are essential variations and \(s(4)>0\), this implies that \((4,1)_{0}\) is a variation. Note that we also have that \((10,8)_{0}\) and \((10,5)_{0}\) are essential variations, but in this case \((8,5)_{0}\) is not because \(s(8)=0\). Finally, you can see that (2) is not satisfied on mere variations: \((10,4)_{2}\) and \((10,3)_{2}\) are variations and \(s(4)>0\) but \((4,3)_{0}\) is not a variation.
**Example 1.3.11**.: On the other hand, Figure 14 shows an interval, _i.e._, two trees \(T_{1}\preccurlyeq T_{2}\), such that \([T_{1},T_{2}]\) is a \(+1\)-interval but not a pure interval. In other words, \(T_{2}\) is not obtained by increasing the cardinality of a subset of the tree-ascents of \(T_{1}\). In this example, neither condition of Theorem 1.3.9 is satisfied. Indeed, for (1), \((4,1)_{0}\) and \((3,1)_{1}\) are variations but \((4,3)_{0}\) is not. For (2), \((5,4)_{0}\) and \((5,3)_{0}\) are essential variations and \(s(4)>0\) but \((4,3)_{0}\) is not a variation.
In the following two sections we will focus on the proof of the Characterization Theorem of pure intervals, Theorem 1.3.9. The techniques used to prove this result will also lead to a proof of Theorem 1.3.4, which characterizes the subset of tree-ascents of a pure interval in terms of its minimal essential variations.
For the sake of clarity, we are going to give a name to the \(+1\)-intervals satisfying both Conditions (1) and (2) of Theorem 1.3.9. We call them _pure-candidate intervals_. Our goal is to prove that pure-candidate intervals are the same as pure intervals. We separate this result into two parts: in Section 1.3.3 we show that pure-candidate intervals are pure, and in Section 1.3.4 we show that pure intervals are pure-candidate.
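To make this terminology concrete, here is a hedged Python sketch of a pure-candidate test. It reuses `variations` and `essential_variations` from the sketch following Definition 1.3.1 and again assumes the dictionary encoding of inversion cardinalities; it is a brute-force check, not an efficient algorithm.

```python
# Sketch: brute-force test of the conditions of Theorem 1.3.9 on an interval [T1, T2],
# reusing `variations` and `essential_variations` from the previous sketch.

def is_plus_one(inv1, inv2):
    """+1-interval test (Definition 1.3.7)."""
    return all(inv1[k] <= inv2[k] <= inv1[k] + 1 for k in inv1)

def is_pure_candidate(inv1, inv2, s):
    if not is_plus_one(inv1, inv2):
        return False
    var = variations(inv1, inv2)
    ess = essential_variations(inv1, inv2, s)
    nodes = sorted({x for pair in inv1 for x in pair})
    for a in nodes:
        for b in nodes:
            for c in nodes:
                if not (a < b < c):
                    continue
                # Condition (1): (c,a) and (b,a) vary  =>  (c,b) varies with the same value.
                if (c, a) in var and (b, a) in var:
                    if var.get((c, b)) != var[(c, a)]:
                        return False
                # Condition (2): (c,a)_v, (c,b)_v essential and s(b) != 0  =>  (b,a)_0 varies.
                if ((c, a) in ess and (c, b) in ess
                        and ess[(c, a)] == ess[(c, b)] and s[b] != 0):
                    if var.get((b, a)) != 0:
                        return False
    return True
```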
Figure 14. A \(+1\)-interval that is not a pure interval.
#### 1.3.3. Proof Part 1: Pure-candidate intervals are pure intervals
**Proposition 1.3.12**.: _If \([T_{1},T_{2}]\) is a pure-candidate interval, then it is a pure interval. More precisely, we have that \([T_{1},T_{2}]=[T_{1},T_{1}+A]\) where \((a,c)\in A\) if and only if \((c,a)\) is a minimal essential variation of \([T_{1},T_{2}]\)._
We start by examining Condition (1) and show some useful properties.
**Lemma 1.3.13**.: _Let \([T_{1},T_{2}]\) be a \(+1\)-interval satisfying Condition (1). We have that for all \(a<c\), if \((c,a)\) varies then \(a\) is a descendant of \(c\) in \(T_{1}\)._

Proof.: Suppose that it is not the case; in particular, \(a\) cannot be a middle descendant of \(c\) and so either \(\#_{T_{1}}(c,a)=0\) or \(\#_{T_{1}}(c,a)=s(c)\). The second option is not possible because \((c,a)\) varies. So \(\#_{T_{1}}(c,a)=0\) and there exists \(d>c\) such that \(\#_{T_{1}}(d,a)<\#_{T_{1}}(d,c)\). In \(T_{2}\), we have \(\#_{T_{2}}(c,a)=\#_{T_{1}}(c,a)+1=1\), which gives by transitivity that \(\#_{T_{2}}(d,a)\geq\#_{T_{2}}(d,c)\geq\#_{T_{1}}(d,c)>\#_{T_{1}}(d,a)\). In particular, \((d,a)\) varies. We have \(d>c>a\) with variations \((d,a)\) and \((c,a)\); using Condition (1) of pure-candidate intervals, we obtain that \((d,c)\) also varies. But if \(\#_{T_{2}}(d,c)>\#_{T_{1}}(d,c)\), the amplitude of the \((d,a)\) variation is greater than \(1\) and we reach a contradiction as we work in a \(+1\)-interval by hypothesis.
**Lemma 1.3.14**.: _Let \([T_{1},T_{2}]\) be a \(+1\)-interval satisfying Condition (1). If \((c,a)_{v}\) and \((b,a)_{w}\) are variations and \((c,a)_{v}\) is an essential variation then \((c,b)_{v}\) is an essential variation._
Proof.: By Condition (1), \((c,b)_{v}\) is a variation. Using Lemma 1.3.13, \(a\) is a descendant of \(b\) because \((b,a)\) is a variation. So if \(b\) is a middle descendant of any \(b^{\prime}\) with \(c>b^{\prime}>b\), so is \(a\). As \((c,a)\) is an essential variation, this implies that \((c,b)\) is also an essential variation.
It is actually possible to prove that Condition (1) is equivalent to that of Lemma 1.3.14, _i.e._, if the condition is true on essential variations then it is true for all variations. The proof is a bit technical and not necessary at the moment.
We will prove Proposition 1.3.12 in two steps: first, we show that minimal essential variations are indeed tree-ascents of \(T_{1}\) (Proposition 1.3.18), then that every variation of the interval is obtained by transitivity from the minimal essential variations (Proposition 1.3.23).
##### 1.3.3.1. Minimal essential variations are tree-ascents
The following lemmas explore some useful properties of variations and essential variations in pure-candidate intervals.
**Lemma 1.3.15** (Weak transitivity of essential variations).: _Let \([T_{1},T_{2}]\) be a pure-candidate interval. Suppose that we have \(c>b>a\) with \((c,b)_{v}\) and \((b,a)_{0}\) essential variations. Then \((c,a)_{v}\) is also an essential variation._
This is a version of Lemma 1.3.6 now specific to _essential_ variations. Note that Lemma 1.3.6 is not directly true for essential variations and that this version is not true in general: we need a pure-candidate interval. See for example the pure interval of Figure 10. We have that \((10,4)_{2}\) and \((4,1)_{0}\) are essential variations, so the lemma implies that \((10,1)_{2}\) is also an essential variation. On the other hand, \((10,9)_{2}\) and \((9,6)_{1}\) are essential variations but \((10,6)_{2}\) is not. In the example of Figure 14, which is not a pure interval, we see that \((5,4)_{0}\) is an essential variation as well as \((4,1)_{0}\), but \((5,1)_{0}\) is not an essential variation (it is a variation but \(1\) is a middle child of \(3\) and \((5,3)\) varies).
Proof.: By Lemma 1.3.6, we know that \((c,a)\) varies in \([T_{1},T_{2}]\). More precisely, \(\#_{T_{1}}(c,a)\leq v\) and \(\#_{T_{2}}(c,a)>v\). Because \([T_{1},T_{2}]\) is a pure-candidate interval, it is in particular a \(+1\)-interval and so \((c,a)_{v}\) is a variation of amplitude one. We need to prove that it is an essential variation. Suppose that it is not: there exists \(b^{\prime}\) with \(c>b^{\prime}>a\) such that \((c,b^{\prime})_{v}\) is an essential variation and \(a\) belongs to a middle child of \(b^{\prime}\). We have \(\#_{T_{1}}(b,a)=0\) so \(a\) does not belong to a middle child of \(b\) and \(b\neq b^{\prime}\). Suppose that \(c>b>b^{\prime}\). We have \((c,b)_{v}\) and \((c,b^{\prime})_{v}\) essential variations and \(s(b)\neq 0\) (because \((b,a)\) varies), by Condition (2) on pure-candidate intervals, we obtain that \((b,b^{\prime})_{0}\) is a variation, which contradicts the fact that \((b,a)\) is an essential variation. Now if \(c>b^{\prime}>b\), because \(s(b^{\prime})\neq 0\) (as \(a\) belongs to a middle child of \(b^{\prime}\)), we obtain that \((b^{\prime},b)_{0}\) is a variation. By planarity (Definition 1.1.1), this gives \(\#_{T_{1}}(b,a)=s(b)\) which contradicts the fact that \((b,a)\) varies.
**Lemma 1.3.16**.: _Let \([T_{1},T_{2}]\) be a pure-candidate interval. Suppose that \((c,a)_{v}\) is an essential variation. If \(b\) is such that \(c>b>a\) with \(\#_{T_{1}}(c,b)=v\) and \(\#_{T_{1}}(b,a)<s(b)\), then \((c,b)_{v}\) varies._
Proof.: Suppose that \((c,b)\) does not vary. As \((c,a)\) varies, we have \(\#_{T_{2}}(c,a)>\#_{T_{1}}(c,a)\). And because \((c,b)\) does not vary, we have \(\#_{T_{2}}(c,b)=\#_{T_{1}}(c,b)\). We obtain that \(\#_{T_{2}}(c,a)>\#_{T_{2}}(c,b)\) which implies by planarity (Definition 1.1.1) that \(\#_{T_{2}}(b,a)=s(b)\), _i.e._, \((b,a)\) varies. As \((c,a)\) and \((b,a)\) vary, Condition (1) implies that \((c,b)\) also varies which leads to a contradiction.
**Lemma 1.3.17**.: _Let \([T_{1},T_{2}]\) be a pure-candidate interval. Suppose that \((c,a)\) is an essential variation. Then, there is no \(b\) such that \(c>b>a\) and \(a\) is a middle descendant of \(b\)._
_In particular, if \(b<c\) with \((b,a)\) a variation, then \((b,a)\) is an essential variation._
Proof.: This is a direct consequence of Lemma 1.3.16. If \(a\) is a middle descendant of \(b\), then \(\#_{T_{1}}(c,b)=\#_{T_{1}}(c,a)=v\) and \(\#_{T_{1}}(b,a)<s(b)\) which implies that \((c,b)\) varies and so \((c,a)\) is not an essential variation.
Now if \((b,a)\) is a variation but not an essential variation, this means that \(a\) is a middle descendant of some \(b^{\prime}\) with \(b>b^{\prime}>a\) which contradicts the first part of the Lemma.
In particular, Lemma 1.3.17 implies that the variation \((b,a)_{0}\) in Condition (2) is always an essential variation. We can now prove the following proposition.
**Proposition 1.3.18** (Minimal essential variations are tree-ascents).: _Let \((c,a)_{v}\) be a minimal essential variation of a pure-candidate interval \([T_{1},T_{2}]\). Then \((a,c)\) is a tree-ascent of \(T_{1}\)._
Proof.: We check the conditions of Definition 1.1.7. Condition (i) is satisfied by Lemma 1.3.13. As \((c,a)\) varies, we have \(\#_{T_{1}}(c,a)<s(c)\) and so Condition (ii) is satisfied.
Let us prove Condition (iii), _i.e._, if there is \(b\) with \(c>b>a\) and \(a\) a descendant of \(b\), then \(a\) is a right descendant of \(b\). Suppose that \(\#_{T_{1}}(b,a)<s(b)\), by Lemma 1.3.16, we have that \((c,b)_{v}\) is a variation. This is actually an essential variation because \(a\) is a descendant of \(b\) and \((c,a)_{v}\) is an essential variation. We then have essential variations \((c,a)_{v}\) and \((c,b)_{v}\) and we have supposed that \(\#_{T_{1}}(b,a)<s(b)\) which implies that \(s(b)>0\). By Condition (2) we have that \((b,a)_{0}\) is a variation. And by Lemma 1.3.17, it is necessarily an essential variation. This contradicts the fact that \((c,a)\) is a minimal essential variation.
Let us prove Condition (iv), _i.e._, if it exists, the strict right child of \(a\) is empty. Suppose that it is not and that \(a^{\prime}\) is the root of its right subtree. As \(s(a)>0\), we have \(\#_{T_{2}}(c,a^{\prime})\geq\#_{T_{2}}(c,a)>\#_{T_{1}}(c,a)=\#_{T_{1}}(c,a^{ \prime})\) and so \((c,a^{\prime})_{v}\) is a variation. Moreover, it is an essential variation because if \(a^{\prime}\) is a middle descendant of any node \(b\), so is \(a\). Condition (2) then states that \((a,a^{\prime})_{0}\) is a variation and so \(\#_{T_{1}}(a,a^{\prime})=0\) which contradicts our initial statement.
#### 1.3.3.2. Transitive closure of minimal essential variations
To prove that every variation is obtained by transitivity from the minimal essential variations, we introduce the fundamental notion of _variation path_.
**Definition 1.3.19** (Variation path).: Let \(I=[T_{1},T_{2}]\) be an interval and \((c,a)_{v}\) a variation. The _variation path_ between \(c\) and \(a\) is given by
\[c>c_{1}>\cdots>c_{k}\]
where the set \(\{c_{i}\}_{i=1}^{k}\) consists of all values \(c_{i}\geq a\) such that \((c,c_{i})_{v}\) is an essential variation and either \(c_{i}=a\) or \(\#_{T_{1}}(c_{i},a)<s(c_{i})\).
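The following Python sketch computes a variation path in the dictionary encoding used in the earlier sketches, taking as input the dictionary `ess` of essential variations (as produced by `essential_variations` above); this encoding is an assumption of the sketch.

```python
def variation_path(ess, inv1, s, c, a):
    """Variation path of a variation (c, a)_v (Definition 1.3.19): c followed by all
    c_i >= a with (c, c_i)_v essential and (c_i == a or #_{T1}(c_i, a) < s(c_i))."""
    v = inv1[(c, a)]
    members = [ci for ci in range(a, c)
               if ess.get((c, ci)) == v
               and (ci == a or inv1[(ci, a)] < s[ci])]
    return [c] + sorted(members, reverse=True)
```

On the interval of Figure 9, this should reproduce the paths listed in Example 1.3.20 below, assuming the inversion data of that figure is encoded as above.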
**Example 1.3.20**.: Here are all variation paths for the variations of Figure 9:
* \((3,2)_{1}\): \(3>2\);
* \((3,1)_{1}\): \(3>2\);
* \((4,3)_{0}\): \(4>3\);
* \((4,2)_{0}\): \(4>3\);
* \((4,1)_{0}\): \(4>3\).
**Example 1.3.21**.: Here are all variation paths for the variations of Figure 10:
* \((10,8)_{0}\): \(10>8\);
* \((10,5)_{0}\): \(10>5\);
* \((10,9)_{2}\) and \((10,6)_{2}\) and \((10,2)_{2}\): \(10>9\);
* \((10,4)_{2}\) and \((10,3)_{2}\): \(10>9>4\);
* \((10,1)_{2}\): \(10>9>4>1\);
* \((9,4)_{0}\) and \((9,3)_{0}\): \(9>4\);
* \((9,1)_{0}\): \(9>4>1\);
* \((9,6)_{1}\) and \((9,2)_{1}\): \(9>6\);
* \((6,2)_{1}\): \(6>2\);
* \((4,1)_{0}\): \(4>1\).
**Lemma 1.3.22**.: _If \(I=[T_{1},T_{2}]\) is a pure-candidate interval, then the variation path of any variation \((c,a)_{v}\) satisfies the following properties_
1. _for all_ \(j>i\)_, then_ \((c_{i},c_{j})_{0}\) _is an essential variation;_
2. _either_ \(c_{k}=a\) _or_ \(a\) _belongs to a middle child of_ \(c_{k}\)_;_
3. \((c,c_{1})\) _is a minimal essential variation as well as_ \((c_{i},c_{i+1})\) _for all_ \(i<k\)_._
Proof.: Property (i) is a consequence of Condition (2) of pure-candidate intervals. Indeed, by definition of the variation path we have that \((c,c_{i})_{v}\) and \((c,c_{j})_{v}\) are essential variations and \(s(c_{i})>0\) as \(0\leq\#_{T_{1}}(c_{i},a)<s(c_{i})\). This implies that \((c_{i},c_{j})_{0}\) is an essential variation (using Condition (2) and Lemma 1.3.17).
Property (ii) is a direct consequence of Lemma 1.3.5. Indeed, suppose that \(c_{k}\neq a\), _i.e._, \((c,a)\) is not an essential variation. Then, there is a unique \(b>a\) such that \((c,b)_{v}\) is an essential variation with \(a\) a middle child of \(b\). In particular, \(\#_{T_{1}}(b,a)<s(b)\) so \(b\in\{c_{i}\}_{i=1}^{k}\). Now, if \(b\neq c_{k}\), then \(b=c_{i}\) with \(i<k\). By Property (i), we have \(\#_{T_{1}}(c_{i},c_{k})=0\), which implies \(\#_{T_{1}}(c_{k},a)=s(c_{k})\) and contradicts the definition of the variation path.
We now prove Property (iii). We know by definition that \((c,c_{1})_{v}\) is an essential variation. Let us suppose that it is not minimal, _i.e._, there is \(b\) with \(c>b>c_{1}\) such that \((b,c_{1})\) is an essential variation. Using Lemma 1.3.14, we obtain that \((c,b)_{v}\) is an essential variation. As \(b\) does not belong to the variation path, we have by definition that \(\#_{T_{1}}(b,a)=s(b)\). We obtain a contradiction to the planarity condition on tree-inversions (see Definition 1.1.1) on \(b>c_{1}>a\). Indeed, we should have either \(\#_{T_{1}}(c_{1},a)=s(c_{1})\) which is impossible by definition of the variation path, or \(\#_{T_{1}}(b,c_{1})\geq\#_{T_{1}}(b,a)=s(b)\) which is impossible as we have supposed \((b,c_{1})\) to be a variation.
This gives us that \((c,c_{1})\) is a minimal essential variation. Now, let us look at \((c_{i},c_{i+1})\). First, see that if \(i\neq k\), \((c_{i},a)_{0}\) is a variation. Indeed, \((c_{i},c_{k})_{0}\) is an essential variation and either \(c_{k}=a\) or \(a\) is a middle child of \(c_{k}\), so \((c_{i},a)\) varies by transitivity. Now if \(c_{i+1}\) is the first element of the variation path between \(c_{i}\) and \(a\), we are done by applying our previous argument on this new variation path. We need to prove that this is always the case, _i.e._, if \(b\) belongs to the variation path between \(c_{i}\) and \(a\), it also belongs to the variation path between \(c\) and \(a\). Let \(b\) be such that \(c_{i}>b>a\) with \((c_{i},b)_{0}\) an essential variation and \(\#_{T_{1}}(b,a)<s(b)\). We know that \((c,c_{i})_{v}\) is an essential variation, so by weak transitivity of essential variations (see Lemma 1.3.15), we obtain that \((c,b)_{v}\) is also an essential variation and so \(b\) belongs to the variation path.
**Proposition 1.3.23** (Transitive closure of minimal essential variations).: _Let \([T_{1},T_{2}]\) be a pure-candidate interval and let \(A\) be the set of couples \((a,c)\) such that \((c,a)\) is a minimal essential variation of \([T_{1},T_{2}]\). Then \(T_{2}=T_{1}+A\)._
Proof.: We want to prove that for all \(c>a\), we have \(\#_{T_{2}}(c,a)=\#_{T_{1}+A}(c,a)\). It is clear that \(T_{1}+A\leq T_{2}\). Indeed, let \(S\) be the multi-set of inversions such that \(\#_{S}(c,a)=\#_{T_{1}}(c,a)+1\) if \((c,a)\) is a minimal essential variation of \([T_{1},T_{2}]\) and \(\#_{S}(c,a)=\#_{T_{1}}(c,a)\) otherwise (\(S^{\text{tc}}=\text{inv}(T_{1}+A)\)). Clearly, as the minimal essential variations are a subset of the variations of \([T_{1},T_{2}]\), we have \(S\subseteq\text{inv}(T_{2})\) which implies that it is also the case for the transitive closure. In particular, if \((c,a)\) is not a variation, \(\#_{T_{2}}(c,a)=\#_{T_{1}}(c,a)=\#_{T_{1}+A}(c,a)\). Let us assume that \((c,a)_{v}\) is a variation in \([T_{1},T_{2}]\) and take \(c>c_{1}>\dots>c_{k}\geq a\) the variation path of \((c,a)\). By Lemma 1.3.22, \((c,c_{1})_{v}\) and \((c_{i},c_{i+1})_{0}\) are minimal essential variations for all \(i<k\). This means that \(\#_{S}(c,c_{1})=v+1\) and \(\#_{S}(c_{i},c_{i+1})=1\). Besides, if \(c_{k}\neq a\), then \(a\) is a middle
descendant of \(c_{k}\) and \(\#_{T_{1}}(c_{k},a)>0\). By taking the transitive closure of the multi-set of inversions \(S\), we obtain indeed \(\#_{T_{1}+A}(c,a)\geq v+1\).
Now we are ready to prove Proposition 1.3.12, which states that pure-candidate intervals are pure intervals.
Proof of Proposition 1.3.12.: Let \([T_{1},T_{2}]\) be a pure-candidate interval and let \(A\) be the set of couples \((a,c)\) such that \((c,a)\) is a minimal essential variation of \([T_{1},T_{2}]\). By Proposition 1.3.23, we have that \(T_{2}=T_{1}+A\). Furthermore, we know that \(A\) is a subset of tree-ascents of \(T_{1}\) by Proposition 1.3.18. This proves that \([T_{1},T_{2}]\) is a pure interval.
#### 1.3.4. **Proof Part 2: Pure intervals are pure-candidate intervals.**
**Proposition 1.3.24**.: _Let \([T,T+A]\) be a pure interval, then it is a pure-candidate interval._
In order to prove this proposition we need to show that the variations and essential variations of any interval \([T,T+A]\) satisfy Conditions (1) and (2) of Theorem 1.3.9. For this, it is useful to characterize the variations and essential variations of a pure interval in terms of its _ascent-paths_, a notion that we now define.
##### 1.3.4.1. _Ascent paths_
In the following, we adopt a notation for tree-ascents that is similar to the one we use for variations: we write \((a,c)_{v}\) for a tree-ascent \((a,c)\) where \(\#_{T}(c,a)=v\).
**Definition 1.3.25** (Ascent paths).: Let \([T,T+A]\) be a pure interval and \(c>a\) with \(\#_{T}(c,a)=v\). We say that there is an _ascent-path_\(c>c_{1}>\dots>c_{k}\geq a\) if
1. \((c_{1},c)_{v}\in A\);
2. \((c_{i},c_{i-1})_{0}\in A\) for all \(1<i\leq k\);
3. either \(c_{k}=a\) or \(a\) is a middle descendant of \(c_{k}\).
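As a small illustration of this definition, the following Python sketch checks Definition 1.3.25 for a candidate chain. The encoding of \(A\) as a set of pairs \((a,c)\), of \(T\) by the dictionary `inv` of its inversion cardinalities, and the test \(0<\#_{T}(c_{k},a)<s(c_{k})\) for \(a\) being a middle descendant of \(c_{k}\) are assumptions of the sketch.

```python
def is_ascent_path(inv, A, s, chain, a):
    """Check that chain = [c, c_1, ..., c_k] is an ascent-path between c and a in the
    pure interval [T, T+A] (Definition 1.3.25); A is a set of pairs (smaller, larger)."""
    if len(chain) < 2:
        return False
    c, cs = chain[0], chain[1:]
    v = inv[(c, a)]
    # (1) (c_1, c)_v is a selected tree-ascent, i.e. (c_1, c) in A and #_T(c, c_1) = v
    if (cs[0], c) not in A or inv[(c, cs[0])] != v:
        return False
    # (2) (c_i, c_{i-1})_0 is a selected tree-ascent for 1 < i <= k
    for i in range(1, len(cs)):
        if (cs[i], cs[i - 1]) not in A or inv[(cs[i - 1], cs[i])] != 0:
            return False
    # (3) either c_k = a, or a is a middle descendant of c_k (assumed cardinality test)
    ck = cs[-1]
    return ck == a or 0 < inv[(ck, a)] < s[ck]
```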
You can check on Examples 1.3.20 and 1.3.21 that all variation paths are actually ascent paths. We are indeed going to prove that they are the same.
**Remark 1.3.26**.: As we have a series of tree-ascents, some properties of the ascent-paths are immediate
1. \(a\) is a descendant of \(c\) in \(T\);
2. for all \(i\), \(c>c_{1}>\dots>c_{i}\) is also an ascent-path, in particular \(c_{i}\) is a descendant of \(c\) with \(\#_{T}(c,c_{i})=v\);
3. for all \(i<j\), \(c_{i}>\dots>c_{j}\) is also an ascent-path, in particular \(c_{j}\) is a descendant of \(c_{i}\) with \(\#_{T}(c_{i},c_{j})=0\);
4. for all \(i\), \(c_{i}>\dots>c_{k}\geq a\) is also an ascent-path, in particular \(a\) is a descendant of \(c_{i}\) with \(\#_{T}(c_{i},a)=0\).
**Lemma 1.3.27**.: _Let \(c>b>a\) be such that there is an ascent-path \(c>c_{1}>\dots>c_{k}\geq a\) between \(c\) and \(a\) with \(\#_{T}(c,a)=\#_{T}(c,b)=v\) (in particular, \(v<s(c)\)). Then, one of the following is true:_
1. \(b=c_{i}\) _for a certain_ \(i\)_;_
2. \(b\) _is a middle descendant of_ \(c_{i}\) _for a certain_ \(i\)_;_
3. \(\#_{T}(b,a)=s(b)\)_._
Proof.: Suppose that \(b\neq c_{i}\) for all \(i\). We first look at the case where \(b>c_{k}\). Let \(c_{0}:=c\) and \((c_{i+1},c_{i})\) with \(0\leq i<k\) be the tree-ascent surrounding \(b\), _i.e._, \(c_{i}>b>c_{i+1}\). Either \(c_{i}=c\) and \(\#_{T}(c,b)=\#_{T}(c_{i},c_{i+1})\) by hypothesis or \(\#_{T}(c_{i},c_{i+1})=0\). If \(\#_{T}(c_{i},b)=\#_{T}(c_{i},c_{i+1})\), then Statement (iii) of Proposition 1.1.8 gives that \(\#_{T}(b,c_{i+1})=s(b)\) and we are in Case (iii). If \(\#_{T}(c_{i},b)\neq\#_{T}(c_{i},c_{i+1})\), then \(c_{i}\neq c\) and \(\#_{T}(c_{i},b)>\#_{T}(c_{i},c_{i+1})=0\). As \((c_{i},c_{i-1})\) is a tree-ascent, either, \(\#_{T}(c_{i},b)<s(c_{i})\) and we are in Case (ii) or Statement (iv) of Proposition 1.1.8 gives \(\#_{T}(c_{i-1},b)>\#_{T}(c_{i-1},c_{i})\) (because the right child of \(c_{i}\) is empty). This implies that \(i-1>0\) and we can use the same reasoning until we reach an element \(c_{j}\) with \(0<j\leq i\) such that \(0<\#_{T}(c_{j},b)<s(c_{j})\): _i.e._, \(b\) is a middle descendant of \(c_{j}\).
In the case where \(b<c_{k}\), if \((a,c_{k})_{0}\) is a tree-ascent, then the previous case still applies. If not, \(a\) is a middle child of \(c_{k}\) and \((c_{k},c_{k-1})\) is a tree-ascent. If \(\#_{T}(c_{k},b)=0<\#_{T}(c_{k},a)\), we are in Case (iii). If \(\#_{T}(c_{k},b)<s(c_{k})\), then \(b\) is a middle descendant of \(c_{k}\). If \(\#_{T}(c_{k},b)=s(c_{k})\), we have \(s(c_{k})>0\) (it has a
middle child). We get that \(\#_{T}(c_{k-1},b)>\#_{T}(c_{k-1},c_{k})\) and we can apply the same reasoning as earlier to find \(c_{j}\) such that \(b\) is a middle descendant of \(c_{j}\).
**Example 1.3.28**.: Look again at Figure 10. There is an ascent-path \(10>9>4>1\) between \(10\) and \(1\), using the tree-ascents \((1,4)_{0}\), \((4,9)_{0}\), and \((9,10)_{2}\). We look at all \(b\) with \(10>b>1\) and \(\#_{T_{1}}(10,b)=2\).
* 4 and 9 belong to the ascent-path and correspond to Case (i);
* 2, 3 and 6 are middle descendants of either 4 or 9; they correspond to Case (ii);
* 7 corresponds to Case (iii) as we have \(\#_{T_{1}}(7,1)=1=s(7)\).
**Lemma 1.3.29**.: _If \(c>c_{1}\cdots>c_{k}\geq a\) is an ascent path in a pure interval \([T,T+A]\), with \(\#_{T}(c,a)=v\), then \((c,a)_{v}\) is a variation of \([T,T+A]\). It is an essential variation if and only if \(c_{k}=a\)._
_More specifically, for all \(b\) such that \(c>b>c_{k}\), \(c_{k}\) is never a middle descendant of \(b\)._
Proof.: It is quite clear that \((c,a)\) varies. By hypothesis, \(\#_{T}(c,a)=v\). As at each step in \(c>c_{1}>\cdots>c_{k}\), \((c_{i},c_{i-1})\) is a tree-ascent of \(A\), the cardinality increases and we get \(\#_{T+A}(c,c_{1})=v+1\) and \(\#_{T+A}(c_{i},c_{i+1})=1\). Besides, if \(a\neq c_{k}\), then it is a middle descendant of \(c_{k}\), meaning \(\#_{T+A}(c_{k},a)\geq\#_{T}(c_{k},a)>0\). By transitivity, \(\#_{T+A}(c,a)\geq v+1\).
If \(a\neq c_{k}\), then \((c,c_{k})\) varies and \((c,a)\) is not an essential variation. Conversely, assuming the second part of the lemma is true, we get that \((c,c_{k})\) is always an essential variation.
Now let \(b\) be such that \(c>b>c_{k}\). If \(\#_{T}(c,b)\neq v\), then \(c_{k}\) cannot be a middle descendant of \(b\). We assume \(\#_{T}(c,b)=v\) and look at the different possibilities of Lemma 1.3.27 on the ascent-path between \(c\) and \(c_{k}\). If \(b=c_{i}\) for a certain \(i\), then \(\#_{T}(b,c_{k})=0\). If \(b\) is a middle descendant of a certain \(c_{i}\), then \(c_{k}\) lies to the left of \(b\) and \(\#_{T}(b,c_{k})=0\). The only case left is \(\#_{T}(b,c_{k})=s(b)\). We get that \(c_{k}\) is never a middle descendant of \(b\).
**Lemma 1.3.30**.: _Let \((c,a)_{v}\) be a variation of a pure interval \([T,T+A]\), then the variation path \(c>c_{1}>\cdots>c_{k}\geq a\) of \((c,a)_{v}\) as defined by Definition 1.3.19 is an ascent-path._
Proof.: Let \((c,a)_{v}\) be a variation of \([T,T+A]\). Let \(S\) be the multi-set of inversions such that \(\#_{S}(c,a)=\#_{T}(c,a)+1\) if \((a,c)\in A\) and \(\#_{S}(c,a)=\#_{T}(c,a)\) otherwise, so \(S^{\mathsf{tc}}\) is the tree-inversion set of \(T+A\). By definition, there is a transitivity path in \(S\), \(c=c_{0}>c_{1}>\cdots>c_{k}>c_{k+1}=a\), such that \(\#_{S}(c,c_{1})=v+1\) and \(\#_{S}(c_{i-1},c_{i})>0\). We choose the path so that \(k\) is minimal and we prove that this is an ascent-path. In the following, we suppose that \(k\geq 1\) as otherwise, \((a,c)\) is a tree-ascent of \(A\) and there is nothing to prove.
Let us look first at \((c,c_{1})\). Suppose that we have \(\#_{T}(c,c_{1})=v+1\) and look at the next step, \((c_{1},c_{2})\) (this always exists but sometimes \(c_{2}=a\)). If \(\#_{T}(c_{1},c_{2})>0\), then \(\#_{T}(c,c_{2})\geq v+1\) and \(c>c_{2}>\cdots>a\) is a shorter transitivity path. This implies that \(\#_{T}(c_{1},c_{2})=0\). As \(\#_{S}(c_{1},c_{2})>0\), it means that \((c_{2},c_{1})\) is a tree-ascent of \(A\). By Condition (i) of Proposition 1.1.8, we obtain \(\#_{T}(c,c_{2})=\#_{T}(c,c_{1})=v+1\) and again \(c>c_{2}>\cdots\geq c_{k}\) is a shorter transitivity path. This means that \(\#_{T}(c,c_{1})=v\) and that \((c_{1},c)\in A\).
A similar reasoning can be made for \((c_{i-1},c_{i})\) for all \(1<i\leq k\). We have \(c_{i-1}>c_{i}>c_{i+1}\). If \(\#_{T}(c_{i-1},c_{i})>0\), then \(\#_{T}(c_{i},c_{i+1})=0\), otherwise one can build a shorter transitivity path skipping \(c_{i}\). Then \(\#_{T}(c_{i-1},c_{i})>0\) implies that \((c_{i+1},c_{i})\) is a tree-ascent of \(A\) and \(\#_{T}(c_{i-1},c_{i+1})=\#_{T}(c_{i-1},c_{i})\), so, again, \(c_{i}\) could be skipped in the transitivity path. The only case left is \((c_{k},a)\). We know that \(\#_{S}(c_{k},a)>0\). If \(\#_{T}(c_{k},a)=0\), this means that \((a,c_{k})_{0}\) is a tree-ascent of \(A\) and \(c>c_{1}>\cdots>c_{k+1}=a\) is an ascent-path. If \(\#_{T}(c_{k},a)>0\), as \((c_{k},c_{k-1})\) is a tree-ascent, the strict right child of \(c_{k}\) is empty, _i.e._, \(a\) is a middle child of \(c_{k}\).
We have proved that having a variation \((c,a)\) implies having an ascent-path between \(c\) and \(a\); in particular, \(a\) has to be a descendant of \(c\). It remains to prove that this ascent-path is indeed the variation path of \((c,a)_{v}\). Note that for all \(i\), \(c>c_{1}>\cdots>c_{i}\) is an ascent-path as well and by Lemma 1.3.29 we obtain that \((c,c_{i})_{v}\) is an essential variation. For all \(i<k\), we have \(\#_{T}(c_{i},a)=0<s(c_{i})\) (the tree-ascent \((c_{i+1},c_{i})\) forces \(s(c_{i})>0\)) and so \(c_{i}\) belongs to the variation path. Now either \((c_{k},a)_{0}\) is a variation or \(a\) is a middle child of \(c_{k}\), so in both cases, \(\#_{T}(c_{k},a)<s(c_{k})\) and \(c_{k}\) also belongs to the variation path. We need to prove that the variation path consists only of those elements. Let \((c,b)_{v}\) be an essential
variation such that \(c>b>a\) and \(\#_{T}(b,a)<s(b)\). We use Lemma 1.3.27. Case (iii) is forbidden by hypothesis. Case (ii) is not possible because \((c,b)_{v}\) is an essential variation. The only possibility left is Case (i): \(b\) is one of the \(c_{i}\).
**Proposition 1.3.31**.: _Variation paths and ascent paths of a pure interval are the same._
Proof.: This is a direct consequence of Lemma 1.3.30. The variation path is unique for each variation by definition. The ascent path (if it exists) between two values \(c\) and \(a\) is also unique: indeed if \(a\neq c_{k}\), then the choice for \(c_{k}\) is unique by Lemma 1.3.5 and then for \(1\leq i\leq k\), there is a unique tree-ascent \((c_{i},*)\). Each ascent path corresponds to a variation and we have proved in Lemma 1.3.30 that the variation path of this variation is the ascent path.
#### 1.3.4.2. Variations and essential variations of pure intervals
As a straightforward consequence of Lemmas 1.3.29 and 1.3.30, we also obtain the following characterization of variations and essential variations of pure intervals.
**Proposition 1.3.32**.: _Let \([T,T+A]\) be a pure interval. Then, \((c,a)_{v}\) is a variation if and only if there is an ascent-path \(c>c_{1}>\dots>c_{k}\geq a\). It is an essential variation if and only if \(c_{k}=a\)._
Using this, we are now ready to prove that pure intervals are pure-candidate intervals.
Proof of Proposition 1.3.24.: Let \(T\) be an \(s\)-decreasing tree and \(A\) a subset of the tree-ascents of \(T\). We will prove that the pure interval \([T,T+A]\) is a pure-candidate interval, meaning that it satisfies Conditions (1) and (2) of Theorem 1.3.9.
We first prove that \([T,T+A]\) satisfies Condition (1). Let \((c,a)_{v}\) and \((b,a)_{w}\) be variations of \([T,T+A]\) with \(c>b>a\). Lemma 1.3.30 states that there is an ascent-path between \(c\) and \(a\) and an ascent-path between \(b\) and \(a\). In particular, \(a\) is a descendant of both \(c\) and \(b\), which implies \(\#_{T}(c,b)=\#_{T}(c,a)=v\). We now use Lemma 1.3.27 on \(c>b>a\) and the ascent-path \(c>c_{1}>\dots>c_{k}\geq a\). The three possible cases give
1. \(b=c_{i}\) for a certain \(i\): then \((c,b)_{v}\) is a variation.
2. \(b\) is a middle descendant of a certain \(c_{i}\): \((c,b)_{v}\) is a variation by transitivity because \((c,c_{i})_{v}\) is a variation.
3. \(\#_{T}(b,a)=s(b)\): this is impossible because \((b,a)\) varies.
Now, let us prove Condition (2). Suppose that there exists \(c>b>a\) with \((c,a)_{v}\) and \((c,b)_{v}\) essential variations for some \(v\) and \(s(b)>0\) and \((b,a)_{0}\) is not a variation. We choose \(b\) such that the distance between \(c\) and \(b\) is minimal. In particular, if \(c>b_{1}>\dots>b_{k^{\prime}}=b\) is the variation path between \(c\) and \(b\), we have that \((b_{i},a)_{0}\) is a variation for \(i<k^{\prime}\). Indeed, \(c>b_{i}>b>a\) and \((c,b_{i})_{v}\) is an essential variation and \(s(b_{i})>0\) (as \((b_{i+1},b_{i})\) is a tree-ascent). We now look at the ascent-path \(c>c_{1}>\dots>c_{k}=a\) between \(c\) and \(a\) and again use Lemma 1.3.27 on \(c>b>a\). If \(b=c_{i}\) for some \(i\), then there is an ascent-path \(b=c_{i}>\dots>c_{k}=a\) and by Lemma 1.3.29, we have an essential variation \((b,a)_{0}\). As \((c,b)\) is an essential variation, \(b\) cannot be a middle descendant of any \(c_{i}\). Only the last case remains where \(\#_{T}(b,a)=s(b)>0\). We write \(c_{0}:=c\). The ascent-path between \(c\) and \(b\) gives us that \((b,b_{k^{\prime}-1})\) is a tree-ascent. As \(s(b)>0\), then \(\#_{T}(b,a)=s(b)\) implies that \(\#_{T}(b_{k^{\prime}-1},a)>\#_{T}(b_{k^{\prime}-1},b)\) by Statement (iv) of Proposition 1.1.8. Either \(b_{k^{\prime}-1}=c\) and this contradicts \(\#_{T}(c,a)=\#_{T}(c,b)\) or \((b_{k^{\prime}-1},a)_{0}\) is an essential variation by minimality of the distance \(c-b\) and again we reach a contradiction.
#### 1.3.4.3. Proof of Theorems 1.3.9 and 1.3.4
We now have all the ingredients to prove Theorems 1.3.9 and 1.3.4.
Proof of Theorem 1.3.9.: A pure-candidate interval is by definition an interval satisfying the two conditions in Theorem 1.3.9. So, we need to show that pure intervals and pure-candidate intervals are the same. This was shown in Propositions 1.3.12 and 1.3.24.

Proof of Theorem 1.3.4.: Let \([T,T+A]\) be a pure interval. By Proposition 1.3.24, it is also a pure-candidate interval. So, we can apply Proposition 1.3.12 to deduce that \((a,c)\in A\) if and only if \((c,a)\) is a minimal essential variation.
#### 1.3.5. Properties of variations of pure intervals
Now that we have proved that pure-candidate intervals and pure intervals are the same, all the properties of variations and essential variations that we have shown to prove either Proposition 1.3.12 or Proposition 1.3.24 are satisfied in pure intervals. In particular, each variation is given by a certain variation path (from Definition 1.3.19), which is also an ascent-path (Definition 1.3.25). The specific properties of variations in pure intervals are crucial in our next section. We regroup some of them in the following proposition for clarity. We call them the _middle variation_ properties as they all concern the variation of \((c,b)\) depending on properties of \((c,a)\) where \(c>b>a\).
**Proposition 1.3.33** (Middle variation properties).: _Let \([T,T+A]\) be a pure interval and \(c>b>a\). The following properties hold._
1. _If_ \((c,a)_{v}\) _is a variation, and_ \(\#_{T}(c,b)=v\) _and_ \(\#_{T}(b,a)<s(b)\)_, then_ \((c,b)_{v}\) _is a variation._
2. _If_ \((c,a)_{v}\) _is a variation and_ \(a\) _is a middle descendant of_ \(b\)_, then_ \((c,b)_{v}\) _is a variation._
3. _If_ \((c,a)_{v}\) _is an essential variation, then_ \(a\) _is not a middle descendant of_ \(b\)_._
4. _If_ \((c,a)_{v}\) _is an essential variation and_ \((b,a)_{w}\) _a variation, then_ \((b,a)_{w}\) _and_ \((c,b)_{v}\) _are essential variations._
5. _If_ \((c,b)_{v}\) _is a variation with_ \(s(b)>0\) _and_ \(\#_{T}(b,a)=s(b)\) _and_ \(\#_{T}(c,a)=v\)_, then there exists_ \(b^{\prime}\) _with_ \(c>b^{\prime}>b\) _and_ \(a\) _is a middle descendant of_ \(b^{\prime}\) _and_ \((c,b^{\prime})\) _a variation._
6. _If_ \((c,a)_{v}\) _is an essential variation and_ \((c,b)_{v}\) _is a variation, then_ \(\#_{T}(b,a)=0\)_._
7. _If_ \((c,a)_{v}\) _is a variation and_ \(\#_{T}(c,b)=v\) _and_ \(\#_{T}(b,a)=0\) _and_ \(s(b)>0\)_, then either_ \((b,a)\) _is a variation or there is_ \(b^{\prime}\) _with_ \(c>b^{\prime}>b\) _with_ \(b\) _a middle descendant of_ \(b^{\prime}\) _and_ \((c,b^{\prime})_{v}\) _a variation._
8. _If_ \((c,a)_{v}\) _is a variation and_ \((c,b)_{v}\) _is an essential variation with_ \(s(b)>0\) _and_ \(\#_{T}(b,a)=0\) _then_ \((b,a)\) _is a variation._
Proof.: Property (1) is Lemma 1.3.16. It is also a consequence of Lemma 1.3.27. This implies Property (2): if \(a\) is a middle descendant of \(b\), then \(\#_{T}(c,b)=v\) and \(\#_{T}(b,a)<s(b)\). Then Property (3) is a consequence of (2): if \((c,b)\) varies and \(a\) is a middle descendant of \(b\), then \((c,a)\) is not an essential variation. Property (4) is Lemma 1.3.14 with the addition that \((b,a)\) is also an essential variation, which is in particular a consequence of (1).
We prove Property (5). Suppose that we have \((c,b)_{v}\) a variation, \(s(b)>0\), \(\#_{T}(b,a)=s(b)\) and \(a\) is never a middle descendant of any \(b^{\prime}\) with \(c>b^{\prime}>b\) such that \((c,b^{\prime})\) varies. We prove that it implies \(\#_{T}(c,a)>v\). We look at the ascent-path \(c=c_{0}>c_{1}>\cdots>c_{k}\geq b\) between \(c\) and \(b\). Either \(c_{k}=b\) and by hypothesis \(\#_{T}(c_{k},a)=s(c_{k})>0\). Or \(b\) is a middle descendant of \(c_{k}\) and as we have \(\#_{T}(b,a)=s(b)>0\), we get by transitivity \(\#_{T}(c_{k},a)\geq\#_{T}(c_{k},b)>0\). As \((c,c_{k})\) varies, by hypothesis \(a\) cannot be a middle descendant of \(c_{k}\), so this gives again \(\#_{T}(c_{k},a)=s(c_{k})>0\). As we have an ascent-path, \((c_{k},c_{k-1})\) is a tree-ascent. Using Statement (iv) of Proposition 1.1.8, we obtain \(\#_{T}(c_{k-1},a)>\#_{T}(c_{k-1},c_{k})\). Either \(c_{k-1}=c\) and we are done, or as \(a\) cannot be a middle descendant of \(c_{k-1}\), we get \(\#_{T}(c_{k-1},a)=s(c_{k-1})\). And \(s(c_{k-1})>0\) because \((c_{k-1},c_{k})\) varies. Besides, \((c_{k-1},c_{k-2})\) is a tree-ascent so we apply the same reasoning until we reach \(c_{0}=c\). This implies Property (6). Indeed, \((c,a)_{v}\) is an essential variation so \(a\) cannot be a middle descendant of \(b\) (Property (3)) so either \(\#_{T}(b,a)=0\) or \(\#_{T}(b,a)=s(b)\). But in this last case, Property (5) tells us that \(a\) is a middle descendant of \(b^{\prime}\) which is forbidden.
Property (7) is a direct consequence of Lemma 1.3.27. As \(s(b)>0\) and \(\#_{T}(b,a)=0\), only cases (i) and (ii) are possible. In case (i), \(b=c_{i}\) for \(c_{i}\) an element of the variation path. If \(i<k\), then \((b,a)\) varies as in Remark 1.3.26. If \(i=k\), then \(a\) is a middle descendant of \(b\) and so \(\#_{T}(b,a)>0\), which contradicts the hypothesis. In case (ii), we have that \(b\) is a middle descendant of some other node as described in the property. This implies Property (8): as \((c,b)\) is an essential variation, \(b\) cannot be a middle descendant of any \(b^{\prime}\) and then we have that \((b,a)\) is a variation.
### Intersection of pure intervals
#### 1.4.1. The Intersection Theorem
The goal is to prove the following theorem.
**Theorem 1.4.1**.: _The intersection of two pure intervals is a pure interval._
**Example 1.4.2**.: Figure 1 shows all the pure intervals for the \(s\)-weak lattice with \(s=(0,2,2)\). The lattice is drawn such that each pure interval corresponds to a cell. The dimension of the cell is the number of
selected tree-ascents in the pure interval. In this case, the intersection of two pure intervals corresponds to the geometric intersection of the cells. In Figure 15, we show two cells of dimension two intersecting in a cell of dimension one and two cells with an empty intersection.
First, we prove that the intersection of two intervals is always an interval. This is actually a general result on lattices and the proof is immediate.
**Lemma 1.4.3**.: _Let \(L\) be a lattice and \(I_{1}=[x_{1},y_{1}]\), \(I_{2}=[x_{2},y_{2}]\) two intervals of \(L\). Then, \(I_{1}\cap I_{2}\neq\emptyset\) if and only if \(x_{1}\lor x_{2}\leq y_{1}\wedge y_{2}\), and in this case we have \(I_{1}\cap I_{2}=[x_{1}\lor x_{2},y_{1}\wedge y_{2}]\)._
Proof.: Suppose that \(I_{1}\cap I_{2}\neq\emptyset\), then there is \(x\in I_{1}\cap I_{2}\). We have \(x_{1}\leq x\) and \(x_{2}\leq x\), which implies \(x_{1}\lor x_{2}\leq x\). Similarly, \(x\leq y_{1}\wedge y_{2}\). We obtain \(x_{1}\lor x_{2}\leq x\leq y_{1}\wedge y_{2}\), _i.e._, \(I_{1}\cap I_{2}\subseteq[x_{1}\lor x_{2},y_{1}\wedge y_{2}]\).
Now suppose \(x_{1}\lor x_{2}\leq y_{1}\wedge y_{2}\) and take \(x\in[x_{1}\lor x_{2},y_{1}\wedge y_{2}]\). By definition, we have \(x_{1}\leq x_{1}\lor x_{2}\leq x\leq y_{1}\wedge y_{2}\leq y_{1}\), so \(x\in I_{1}\). Similarly, \(x\in I_{2}\). This gives \(I_{1}\cap I_{2}\neq\emptyset\) and \([x_{1}\lor x_{2},y_{1}\wedge y_{2}]\subseteq I_{1}\cap I_{2}\).
So we know that by intersecting two pure intervals, we always obtain an interval. We will prove that the variations of the intersection satisfy the conditions of Theorem 1.3.9. For this, we need to understand what the variations and essential variations of the intersection are.
#### 1.4.2. Variations of the intersection
As a first step, we prove the following lemma.
**Lemma 1.4.4** (intersection stability).: _Let \([T,T+A]\) and \([T^{\prime},T^{\prime}+A^{\prime}]\) be two pure intervals with a non-empty intersection. Let \(X=T\lor T^{\prime}\). Suppose that there is \(b>a\) with \(\#_{T}(b,a)=\#_{T^{\prime}}(b,a)=v\). Then \(\#_{X}(b,a)=v\)._
Proof.: Let us suppose that there exist \(a<b\) with \(\#_{T}(b,a)=\#_{T^{\prime}}(b,a)=v\) and \(\#_{X}(b,a)>v\). We choose \(a\) such that \(|b-a|\) is minimal. As the intersection of \([T,T+A]\) and \([T^{\prime},T^{\prime}+A^{\prime}]\) is non-empty, in particular, \(X\in[T,T+A]\): the inversion \((b,a)\) varies in \([T,T+A]\) and using the \(+1\) property of Lemma 1.3.8, we obtain \(\#_{X}(b,a)=v+1\). Let \(S=\operatorname{inv}(T)\cup\operatorname{inv}(T^{\prime})\) such that \(\operatorname{inv}(X)=S^{\mathsf{tc}}\). There is a transitivity path in \(S\)
\[b=b_{1}>\cdots>b_{k}=a.\]
with \(\#_{S}(b_{1},b_{2})=v+1\) and \(\#_{S}(b_{i},b_{i+1})>0\). We choose the transitivity path such that \(k\) is minimal. Note that \(k>2\) because \(\#_{T}(b,a)=\#_{T^{\prime}}(b,a)=v\). Suppose that \(b_{3}\neq a\). If either \(\#_{T}(b_{1},b_{3})=v+1\) or \(\#_{T^{\prime}}(b_{1},b_{3})=v+1\), then \(k\) is not minimal. By transitivity, we have that \(\#_{X}(b_{1},b_{3})=v+1\), and the \(+1\) property gives us \(\#_{T}(b_{1},b_{3})=\#_{T^{\prime}}(b_{1},b_{3})=v\). This contradicts the minimality of \(|b-a|\).
Figure 15. Example of intersections of pure intervals for \(s=(0,2,2)\)
We now have a transitivity path \(b>b_{2}>a\) in \(S\) which exists neither in \(T\) nor in \(T^{\prime}\). Without loss of generality and using the \(+1\) property, we can assume that
\[\#_{T}(b,b_{2})=v+1 \tag{7}\]
\[\#_{T}(b_{2},a)=0 \tag{8}\]
\[\#_{T^{\prime}}(b,b_{2})=v \tag{9}\]
\[\#_{T^{\prime}}(b_{2},a)=1. \tag{10}\]
We have \(X\in[T,T+A]\) and \(\#_{X}(b_{2},a)=1>\#_{T}(b_{2},a)\), which means that \((b_{2},a)\) varies in \([T,T+A]\). We also have \(\#_{X}(b,a)>\#_{T}(b,a)\) by hypothesis, so \((b,a)\) varies in \([T,T+A]\). Using Condition (1) of Theorem 1.3.9, we obtain that \((b,b_{2})\) also varies in \([T,T+A]\) so \(\#_{T+A}(b,b_{2})=v+2\). By transitivity, this gives \(\#_{T+A}(b,a)=v+2\) which contradicts the fact that \(\#_{T}(b,a)=v\) by the \(+1\) property.
**Proposition 1.4.5** (Variation intersection).: _Let \([T,T+A]\) and \([T^{\prime},T^{\prime}+A^{\prime}]\) be two pure intervals such that they have a non empty intersection \([X,Y]\) where \(X=T\lor T^{\prime}\) and \(Y=(T+A)\wedge(T^{\prime}+A^{\prime})\), then_
\[\operatorname{Var}([X,Y])=\operatorname{Var}([T,T+A])\cap\operatorname{Var}([ T^{\prime},T^{\prime}+A^{\prime}]). \tag{11}\]
_Note that the intersection is taken considering variations with their values, i.e., \((b,a)_{v}\in\operatorname{Var}([T,T+A])\cap\operatorname{Var}([T^{\prime},T^{ \prime}+A^{\prime}])\) if and only if \((b,a)_{v}\in\operatorname{Var}([T,T+A])\) and \((b,a)_{v}\in\operatorname{Var}([T^{\prime},T^{\prime}+A^{\prime}])\)._
Proof.: Let \(b>a\), be such that \((b,a)_{v}\) is a variation of both \([T,T+A]\) and \([T^{\prime},T^{\prime}+A^{\prime}]\). In particular, \(\#_{T}(b,a)=\#_{T^{\prime}}(b,a)=v\) and \(\#_{T+A}(b,a)=\#_{T^{\prime}+A^{\prime}}(b,a)=v+1\). Using intersection stability of Lemma 1.4.4, we obtain \(\#_{X}(b,a)=v\). Now, let \(S\) be the multi-set of inversions obtained by increasing by one the cardinality of \((b,a)\) in \(\operatorname{inv}(T)\). By definition, \(\operatorname{inv}(T)\subseteq\operatorname{inv}(T+A)\). Besides, as the intersection is non-empty, \(T\preccurlyeq T^{\prime}+A^{\prime}\) which means \(\operatorname{inv}(T)\subseteq\operatorname{inv}(T^{\prime}+A^{\prime})\). We also have \(\#_{T+A}(b,a)=\#_{T^{\prime}+A^{\prime}}(b,a)=v+1=\#_{S}(b,a)\) and this gives us \(S\subseteq\operatorname{inv}(T+A)\) and \(S\subseteq\operatorname{inv}(T^{\prime}+A^{\prime})\). As \(Y=(T+A)\wedge(T^{\prime}+A^{\prime})\), this gives that \(S\subseteq\operatorname{inv}(Y)\) and so \(\#_{Y}(b,a)\geq v+1\): \((b,a)_{v}\) is a variation of \([X,Y]\).
Conversely, suppose that \((b,a)\) does not vary in \([T,T+A]\). We have \(\#_{T}(b,a)=\#_{T+A}(b,a)=v\) and, because \(T\leq X\leq Y\leq T+A\), this implies that \(\#_{X}(b,a)=\#_{Y}(b,a)=v\) and \((b,a)\) is not a variation of \([X,Y]\).
Proposition 1.4.5 can be summarized in a sentence: the variations of the intersection are the intersections of the variations. Note that this is true only if the intersection is non-empty. Besides, this does not extend to _essential_ variations. Indeed, in this case, the intersection of essential variations is only included in the set of essential variations of the intersection.
We present two examples also computed in our SageMath demo worksheet [10].
**Example 1.4.6**.: Figure 15 shows two examples of intersection of pure intervals for \(s=(0,2,2)\). On the first one, the intersection is not empty. The variations of the two pure intervals are respectively \(\{(3,2)_{1},(3,1)_{1},(2,1)_{0}\}\) and \(\{(3,2)_{1},(3,1)_{1},(2,1)_{1}\}\). The variations of the intersection are \(\{(3,2)_{1},(3,1)_{1}\}\). In this case, the intersection has only one essential variation, \((3,2)_{1}\) which is the only essential variation found in both pure intervals.
The second intersection of Figure 15 is an empty one but we see that it does not imply that the intersection of the variations is empty. Indeed, in this case \((3,1)_{1}\) is a variation in both pure intervals.
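For readers experimenting alongside the SageMath worksheet [10], the following plain-Python sketch illustrates Proposition 1.4.5 on toy data. It assumes a tree is encoded by its inversion multiset as a dictionary mapping a pair \((b,a)\) with \(b>a\) to its cardinality (pairs of cardinality \(0\) may be omitted); the dictionaries below are hypothetical stand-ins chosen only so that the variations agree with the first intersection of Example 1.4.6, and are not read off the figures.

```python
def card(inv, pair):
    """Cardinality #(b, a) in an inversion multiset, defaulting to 0."""
    return inv.get(pair, 0)


def variations(inv_lo, inv_hi):
    """Variations (b, a)_v of an interval [lo, hi]: pairs whose cardinality
    strictly increases from lo to hi, recorded with their value v in lo."""
    pairs = set(inv_lo) | set(inv_hi)
    return {(p, card(inv_lo, p)) for p in pairs if card(inv_hi, p) > card(inv_lo, p)}


# Hypothetical inversion data (illustrative stand-ins, not actual trees from Figure 15):
inv_T, inv_TA = {(3, 2): 1, (3, 1): 1}, {(3, 2): 2, (3, 1): 2, (2, 1): 1}
inv_Tp, inv_TpAp = {(3, 2): 1, (3, 1): 1, (2, 1): 1}, {(3, 2): 2, (3, 1): 2, (2, 1): 2}
inv_X, inv_Y = {(3, 2): 1, (3, 1): 1, (2, 1): 1}, {(3, 2): 2, (3, 1): 2, (2, 1): 1}

lhs = variations(inv_X, inv_Y)                                   # Var([X, Y])
rhs = variations(inv_T, inv_TA) & variations(inv_Tp, inv_TpAp)   # intersection of variations
print(lhs == rhs, sorted(lhs))   # True [((3, 1), 1), ((3, 2), 1)]
```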
**Example 1.4.7**.: Figure 16 shows an intersection of pure intervals of size \(10\). For clarity, we have written the pure intervals along with their maximal trees. You can check that the minimal tree of the intersection is the join of the two minimal trees and it is smaller than the meet of the two maximal trees. The variations of the intersection are \(\{(9,4)_{0},(9,3)_{0},(9,1)_{0},(10,6)_{2},(10,2)_{2},(6,2)_{1}\}\). This is indeed the intersection of the variations in both intervals. However, the intersection has three essential variations: \(\{(9,4)_{0},(10,6)_{2},(6,2)_{1}\}\). We see that \((9,4)_{0}\) and \((6,2)_{1}\) are essential variations in both pure intervals (they are the only ones present in both) but \((10,6)_{2}\) is not (it is an essential variation only in the second one).
#### 1.4.3. Essential variations of the intersection
To prove that the essential variations satisfy the conditions of Theorem 1.3.9, we need to characterize the variations of the two intervals that give essential variations in the intersection. We will do this using the following compatibility notion.
**Definition 1.4.8**.: Let \([T,T+A]\) and \([T^{\prime},T^{\prime}+A^{\prime}]\) be two pure intervals with a non-empty intersection \([X,Y]\). We say that \((c,a)_{v}\) is a _compatible_ variation of the intersection if \((c,a)_{v}\) is a variation in both \([T,T+A]\) and \([T^{\prime},T^{\prime}+A^{\prime}]\) and for all \(b\) with \(c>b>a\) such that \(a\) is a middle descendant of \(b\) in \(T^{\prime}\) (resp. \(T\)) then \(\#_{T}(b,a)=s(b)\) (resp. \(\#_{T^{\prime}}(b,a)=s(b)\)).
Note that if \((c,a)_{v}\) is an essential variation of both \([T,T+A]\) and \([T^{\prime},T^{\prime}+A^{\prime}]\) then it is compatible because by Property (3) of Proposition 1.3.33, \(a\) is never a middle descendant of any \(b\), \(c>b>a\) in neither \(T\) nor \(T^{\prime}\).
**Proposition 1.4.9**.: _Let \([T,T+A]\) and \([T^{\prime},T^{\prime}+A^{\prime}]\) be two pure intervals with a non-empty intersection \([X,Y]\). Then \((c,a)_{v}\) is an essential variation of \([X,Y]\) if and only if, \((c,a)_{v}\) is a compatible variation of the intersection._
**Example 1.4.10**.: Let us call \(T\) and \(T^{\prime}\) respectively the two minimal trees of the pure intervals shown in Figure 16. The variation \((10,6)_{2}\) is the only compatible variation which is not an essential variation in both \(T\) and \(T^{\prime}\): in \(T^{\prime}\), \((10,6)_{2}\) is an essential variation so \(6\) is never a middle descendant of any \(b\) with \(10>b>6\). In \(T\), \(6\) is a middle descendant of \(9\) and we check that \(\#_{T^{\prime}}(9,6)=2=s(9)\). It is indeed an essential variation of the intersection.
On the other hand, \((9,1)_{0}\) is an essential variation of \(T\) and a variation in \(T^{\prime}\). In \(T^{\prime}\), \(1\) is a middle descendant of \(3\) and \(4\) but we have \(\#_{T}(3,1)=0<s(3)\) and \(\#_{T}(4,1)=0<s(4)\) so it is not a compatible variation. Indeed, we see that \((9,1)_{0}\) is a variation of the intersection but not an essential variation.
Figure 16. Example of an intersection of pure intervals in size \(10\)
Proof.: Let us first suppose that a variation \((c,a)_{v}\) is compatible for the intersection and show that it is an essential variation of the intersection \([X,Y]\). By definition, it is a variation of both \([T,T+A]\) and \([T^{\prime},T^{\prime}+A^{\prime}]\) so by Proposition 1.4.5, it is a variation of \([X,Y]\). Now, take \(b\) with \(c>b>a\). If \(\#_{T}(b,a)=\#_{T^{\prime}}(b,a)=0\), then by intersection stability (Lemma 1.4.4), \(\#_{X}(b,a)=0\). Now if \(a\) is a middle descendant of \(b\) in \(T^{\prime}\), the compatibility implies that \(\#_{T}(b,a)=s(b)\) and so \(\#_{X}(b,a)=s(b)\). Similarly, if \(a\) is a middle descendant of \(b\) in \(T\), the compatibility gives \(\#_{X}(b,a)\geq\#_{T^{\prime}}(b,a)=s(b)\). So \(a\) is never a middle descendant of \(b\) in \(X\) which implies that \((c,a)_{v}\) is an essential variation.
We want to show the reverse implication, namely: the essential variations consist only of the compatible variations. Let us choose \((c,a)_{v}\) such that it is an essential variation of \([X,Y]\). Proposition 1.4.5 tells us that \((c,a)_{v}\) is a variation for both \([T,T+A]\) and \([T^{\prime},T^{\prime}+A^{\prime}]\). Let us suppose that there exists \(b\), with \(c>b>a\) and \(a\) is a middle descendant of \(b\) in \(T^{\prime}\). Remember that by Statement (2) of Proposition 1.3.33, this means that \((c,b)_{v}\) varies in \([T^{\prime},T^{\prime}+A^{\prime}]\).
Suppose first that \(a\) is still a middle descendant of \(b\) in \(X\). We know that \(\#_{X}(c,a)=v\). As \(a\) is a descendant of \(b\), this implies that \(\#_{X}(c,b)=v\). We know \(\#_{T}(c,b)\leq\#_{X}(c,b)\). If \(\#_{T}(c,b)<v\), as \(\#_{T}(c,a)=v\) this implies in particular that \(\#_{T}(b,a)=s(b)\) which is not possible as we have \(\#_{X}(b,a)<s(b)\), so \(\#_{T}(c,b)=v\). As \(\#_{T}(b,a)<s(b)\), we can apply Property (1) of Proposition 1.3.33 and we get that \((c,b)_{v}\) is a variation of \([T,T+A]\). As \((c,b)_{v}\) is also a variation of \([T^{\prime},T^{\prime}+A^{\prime}]\), using Proposition 1.4.5, we obtain that \((c,b)_{v}\) is a variation of \([X,Y]\) and then \((c,a)_{v}\) is not an essential variation of \([X,Y]\).
This means that \(a\) is no longer a middle descendant of \(b\) in \(X\). As the cardinality of \((b,a)\) can only increase, this means \(\#_{X}(b,a)=s(b)\). By the \(+1\) property (Lemma 1.3.8), \(\#_{T^{\prime}}(b,a)=s(b)-1\) and \(s(b)-1\leq\#_{T}(b,a)\leq s(b)\). But if \(\#_{T}(b,a)=s(b)-1\), the intersection stability (Lemma 1.4.4) tells us that \(\#_{X}(b,a)=s(b)-1\) which is not possible. We have proved that if \(a\) is a middle descendant of \(b\) in \(T^{\prime}\), then \(\#_{T}(b,a)=s(b)\). Symmetrically, we prove that if \(a\) is a middle descendant of \(b\) in \(T\), then \(\#_{T^{\prime}}(b,a)=s(b)\).
**Remark 1.4.11**.: Suppose that \((c,a)\) is an essential variation of an intersection \([X,Y]\) of two pure intervals \([T,T+A]\) and \([T^{\prime},T^{\prime}+A^{\prime}]\), then for all \(b\) such that \(a\) is a middle descendant of \(b\) in \(T\) with \(c>b>a\), we have \(\#_{T^{\prime}}(b,a)=s(b)\) whereas \(\#_{T}(b,a)<s(b)\). This means in particular that \(\#_{X}(b,a)=s(b)\) and implies
1. by the \(+1\) property, \(\#(b,a)\) can vary by at most \(1\) and so \(\#_{T}(b,a)=s(b)-1\) (\(a\) is in the second-to-last subtree of \(b\));
2. \((b,a)_{s(b)-1}\) is a variation of \([T,T+A]\).
**Proposition 1.4.12**.: _Let \([T,T+A]\) and \([T^{\prime},T^{\prime}+A^{\prime}]\) be two pure intervals with a non-empty intersection \([X,Y]\). If \((c,a)_{v}\) is an essential variation of \([X,Y]\), then \((c,a)_{v}\) is an essential variation of either \([T,T+A]\) or \([T^{\prime},T^{\prime}+A^{\prime}]\)._
Proof.: We need to prove that if \((c,a)_{v}\) is compatible, then it is an essential variation for one of the intervals. Suppose that it is not. We have \(b\) and \(b^{\prime}\) such that \(a\) is a middle descendant of \(b\) in \(T\) and \(b^{\prime}\) in \(T^{\prime}\). The compatibility implies that \(\#_{T}(b^{\prime},a)=s(b^{\prime})\) and \(\#_{T^{\prime}}(b,a)=s(b)\) so in particular \(b\neq b^{\prime}\). Besides, we can choose \(b\) (resp. \(b^{\prime}\)) to be minimal, _i.e._, \(a\) is not a middle descendant in \(T\) (resp. \(T^{\prime}\)) of any node of value smaller than \(b\) (resp. \(b^{\prime}\)). Without loss of generality, we suppose \(b^{\prime}<b\). In \(T^{\prime}\), we have \(\#_{T^{\prime}}(b,a)=s(b)\) and \(a\) is a descendant of \(b^{\prime}\) so \(\#_{T^{\prime}}(b,b^{\prime})=s(b)\). Using Remark 1.4.11, \(\#_{T}(b,a)=s(b)-1\). Now the compatibility gives us \(\#_{T}(b^{\prime},a)=s(b^{\prime})\) which implies that \(\#_{T}(b,b^{\prime})\leq\#_{T}(b,a)\). By the \(+1\) property, we obtain \(\#_{T}(b,b^{\prime})=s(b)-1\). Moreover, both \((b,a)\) and \((b,b^{\prime})\) vary as their cardinality increases in \(T^{\prime}\). They are essential variations because we have chosen \(b\) to be minimal. By Condition (2) of pure intervals on \(b>b^{\prime}>a\), as \(s(b^{\prime})>0\), then \(\#_{T}(b^{\prime},a)=0\) which contradicts \(\#_{T}(b^{\prime},a)=s(b^{\prime})\).
#### 1.4.4. **Proof of the Intersection Theorem.**
Now that we characterized the variations and the essential variations of the intersection, we are ready to prove the Intersection Theorem 1.4.1.
Proof of Theorem 1.4.1.: Let \([T,T+A]\) and \([T^{\prime},T^{\prime}+A^{\prime}]\) be two pure intervals with a non-empty intersection \([X,Y]\). We prove that the variations and essential variations of \([X,Y]\) satisfy the conditions of Theorem 1.3.9.
We start with Condition (1). We take \(c>b>a\) such that \((c,a)_{v}\) and \((b,a)_{w}\) are variations of \([X,Y]\). By Proposition 1.4.5, the variations of \([X,Y]\) are the intersection of the variations of \([T,T+A]\) and \([T^{\prime},T^{\prime}+A^{\prime}]\). So \((c,a)_{v}\) and \((b,a)_{w}\) are variations in both \([T,T+A]\) and \([T^{\prime},T^{\prime}+A^{\prime}]\). These are pure intervals which implies that \((c,b)_{v}\) is a variation in both. By Proposition 1.4.5, \((c,b)_{v}\) is a variation of \([X,Y]\).
We now prove Condition (2). We take \(c>b>a\) such that \((c,a)_{v}\) and \((c,b)_{v}\) are essential variations of \([X,Y]\) and suppose \(s(b)\neq 0\). By Proposition 1.4.9, \((c,a)_{v}\) and \((c,b)_{v}\) are compatible variations of \([T,T+A]\) and \([T^{\prime},T^{\prime}+A^{\prime}]\). We need to show that \((b,a)_{0}\) is a variation of \([X,Y]\), _i.e._, that it is a variation in both \([T,T+A]\) and \([T^{\prime},T^{\prime}+A^{\prime}]\).
Let us first prove that \(\#_{T}(b,a)=\#_{T^{\prime}}(b,a)=0\). We have proved in Proposition 1.4.12 that \((c,a)_{v}\) is an essential variation of at least one of the intervals. Without loss of generality, we suppose that \((c,a)_{v}\) is an essential variation of \([T,T+A]\). Using Statement (6) of Proposition 1.3.33, we obtain that \(\#_{T}(b,a)=0\). Suppose that \(\#_{T^{\prime}}(b,a)>0\). If \(a\) is a middle descendant of \(b\), then as \((c,a)_{v}\) is compatible, this means \(\#_{T}(b,a)=s(b)\) which contradicts our previous conclusion that \(\#_{T}(b,a)=0<s(b)\). So \(\#_{T^{\prime}}(b,a)=s(b)\). We use Statement (5) of Proposition 1.3.33: \(a\) is a middle descendant of some \(b^{\prime}\) in \(T^{\prime}\) with \(c>b^{\prime}>b\). We choose \(b^{\prime}\) to be minimal so that \(a\) is not a middle descendant of any other node of value between \(b\) and \(b^{\prime}\). As \(\#_{T^{\prime}}(b,a)=s(b)\), we have \(\#_{T^{\prime}}(b^{\prime},b)\leq\#_{T^{\prime}}(b^{\prime},a)<s(b^{\prime})\). As \((c,a)\) is a compatible variation, in \(T\) we have \(\#_{T}(b^{\prime},a)=s(b^{\prime})\) and as \(\#_{T}(b,a)=0\) this gives \(\#_{T}(b^{\prime},b)\geq\#_{T}(b^{\prime},a)=s(b^{\prime})\). So \((b^{\prime},b)\) is a variation of \([T^{\prime},T^{\prime}+A^{\prime}]\) as well as \((b^{\prime},a)\). We can apply again Statement (5) on \(b^{\prime}>b>a\) which contradicts the minimality of \(b^{\prime}\).
We prove now that \((b,a)\) varies in \([T,T+A]\). If \((c,b)_{v}\) is an essential variation, then it is the case by Statement (8) of Proposition 1.3.33. If not, then there exists \(b^{\prime}\) with \(c>b^{\prime}>b\) and \(b\) is a middle descendant of \(b^{\prime}\) in \(T\). We choose \(b^{\prime}\) to be minimal, so \(b\) is not a middle descendant of any other node of value between \(b\) and \(b^{\prime}\). Because \((c,a)\) is a compatible variation, this implies that \(\#_{T^{\prime}}(b^{\prime},b)=s(b^{\prime})\) and, moreover, by Proposition 1.4.12, \((c,b)_{v}\) is an essential variation of \([T^{\prime},T^{\prime}+A^{\prime}]\). Using Statement (8), we obtain that \((b,a)\) varies in \(T^{\prime}\). In particular, \(a\) is a descendant of \(b\) in \(T^{\prime}\) and \(\#_{T^{\prime}}(b^{\prime},a)=s(b^{\prime})\). In \(T\), we have \(\#_{T}(b,a)=0\), and so \(\#_{T}(b^{\prime},a)\leq\#_{T}(b^{\prime},b)<s(b^{\prime})\). These cardinalities are higher in \(T^{\prime}\) so it implies that both \((b^{\prime},a)\) and \((b^{\prime},b)\) are variations of \([T,T+A]\) and by the \(+1\) property, we have \(\#_{T}(b^{\prime},b)=\#_{T}(b^{\prime},a)=s(b^{\prime})-1\). Besides, the minimality of \(b^{\prime}\) gives that \((b^{\prime},b)\) is an essential variation and by Statement (8) of the middle variations Proposition 1.3.33 on \(b^{\prime}>b>a\), we obtain that \((b,a)\) varies.
In this last proof, we have not made any assumptions on the variations of \([T,T+A]\) besides being compatible; in particular, we have not assumed that \((c,a)\) was an essential variation. So the proof applies in the same way to \([T^{\prime},T^{\prime}+A^{\prime}]\), and \((b,a)_{0}\) is also a variation of \([T^{\prime},T^{\prime}+A^{\prime}]\). This gives that \((b,a)_{0}\) is indeed a variation of \([X,Y]\).
**Corollary 1.4.13**.: _Let \([T,T+A]\) and \([T^{\prime},T^{\prime}+A^{\prime}]\) be two pure intervals with non-empty intersection. Then their intersection is the pure interval \([T^{\prime\prime},T^{\prime\prime}+A^{\prime\prime}]\) where \(T^{\prime\prime}=T\lor T^{\prime}\) and \(A^{\prime\prime}\) is the set of couples \((a,c)\) such that \((c,a)\) is a minimal compatible variation of the two intervals, i.e., there is no \(b\) with \(a<b<c\) such that \((b,a)\) is a compatible variation._
Proof.: This is a consequence of Proposition 1.3.12 and Proposition 1.4.9.
## Part 2. The \(s\)-associahedron
### The \(s\)-Tamari lattice (background)
We recall the definition of the \(s\)-Tamari lattice and its connection with the \(\nu\)-Tamari lattice of Preville-Ratelle and Viennot, as shown in [3]. We refer to [3] for more detailed explanations.
#### 2.1.1. The \(s\)-Tamari lattice
**Definition 2.1.1**.: An \(s\)-decreasing tree \(T\) is called an \(s\)_-Tamari tree_ if for any \(a<b<c\) the number of \((c,a)\) inversions is less than or equal to the number of \((c,b)\) inversions:
\[\#_{T}(c,a)\leq\#_{T}(c,b).\]
In terms of the tree, this means that the node labels in \(T_{i}^{c}\) are smaller than all the labels in \(T_{j}^{c}\) for \(i<j\). The multi-set of inversions of an \(s\)-Tamari tree is called an _\(s\)-Tamari inversion set_.
The _\(s\)-Tamari poset_ is the restriction of the \(s\)-weak order to the set of \(s\)-Tamari trees.
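In computational terms, Definition 2.1.1 is a direct check on the inversion multiset. The following plain-Python sketch is a minimal illustration; it assumes nodes are labelled \(1,\dots,n\) and the multiset is given as a dictionary, and it does not verify that the input is actually the inversion set of an \(s\)-decreasing tree.

```python
def is_s_tamari(inv, n):
    """Check the s-Tamari condition: #(c, a) <= #(c, b) for all a < b < c.
    `inv` maps pairs (c, a) with c > a to cardinalities (0 entries may be omitted)."""
    return all(inv.get((c, a), 0) <= inv.get((c, b), 0)
               for c in range(3, n + 1)
               for b in range(2, c)
               for a in range(1, b))


# Toy checks on hypothetical inversion data (not specific trees from the figures):
print(is_s_tamari({(3, 1): 1}, 3))             # False: #(3, 1) = 1 > #(3, 2) = 0
print(is_s_tamari({(3, 2): 1, (3, 1): 1}, 3))  # True
```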
The \(s\)-Tamari trees play the role of \(231\)-avoiding permutations of the \(s\)-weak order, and the number of \(s\)-Tamari trees may be regarded as the _\(s\)-Catalan number_.
An example of the \(s\)-Tamari lattice for \(s=(0,2,2)\) is illustrated on the left of Figure 17. The \(s\)-Catalan number in this case is \(12\) (a Fuss-Catalan number), which counts the number of elements of the lattice.
**Theorem 2.1.2** ([2, Theorem 2.2]).: _The \(s\)-Tamari poset is a sublattice of the \(s\)-weak order. In particular, it is a lattice._
**Theorem 2.1.3** ([2, Theorem 2.20]).: _If \(s\) contains no zeros (except at the first position which is irrelevant), then the \(s\)-Tamari lattice is a quotient lattice of the \(s\)-weak order._
The cover relations of the \(s\)-Tamari lattice can be described as certain rotations on \(s\)-Tamari trees.
**Definition 2.1.4**.: Let \(T\) be an \(s\)-Tamari tree of some weak composition \(s\). We say that \((a,c)\) with \(a<c\) is a _Tamari-ascent_ of \(T\) if \(a\) is a non-right child of \(c\). It was shown in [2, Lemma 2.23] that if \((a,c)\) is a Tamari-ascent of \(T\) then by increasing the cardinality of \((c,a)\) by one in \(\operatorname{inv}(T)\) and taking the transitive closure, we obtain an \(s\)-Tamari inversion set. This operation is called an _\(s\)-Tamari rotation_ of \(T\).
**Theorem 2.1.5** ([2, Theorem 2.25]).: _The cover relations of the \(s\)-Tamari lattice are in correspondence with \(s\)-Tamari rotations._
#### 2.1.2. The \(\nu\)-Tamari lattice
The \(\nu\)-Tamari lattice is another lattice structure introduced in [10], which can be defined in terms of a family of combinatorial objects called \(\nu\)-trees [20].
Let \(\nu\) be a lattice path on the plane consisting of a finite number of north and east unit steps, which are represented by the letters \(N\) and \(E\). We denote by \(A_{\nu}\) the set of lattice points weakly above \(\nu\) inside the smallest rectangle containing \(\nu\). We say that two points \(p,q\in A_{\nu}\) are _\(\nu\)-incompatible_ if and only if \(p\) is southwest or northeast to \(q\) and the south-east corner of the smallest rectangle containing \(p\) and \(q\) is weakly above \(\nu\). Otherwise, we say that \(p\) and \(q\) are _\(\nu\)-compatible_.
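The compatibility condition is easy to test by machine. The plain-Python sketch below follows the definition above, reading "southwest" strictly (both coordinates strictly smaller), since two points in a common row or column are always \(\nu\)-compatible; the helper computing "weakly above \(\nu\)" assumes the path starts at the origin and is given as a string of letters N and E, and the two test points are hypothetical choices.

```python
def first_heights(nu):
    """For a lattice path nu (string of 'N'/'E' steps from the origin), return h
    where h[x] is the y-coordinate at which the path first reaches abscissa x."""
    h, y = [0], 0
    for step in nu:
        if step == 'N':
            y += 1
        else:  # 'E'
            h.append(y)
    return h


def weakly_above(point, nu):
    x, y = point
    return y >= first_heights(nu)[x]


def nu_compatible(p, q, nu):
    """p, q are nu-incompatible iff one is strictly southwest of the other and the
    south-east corner of their bounding rectangle is weakly above nu."""
    (px, py), (qx, qy) = p, q
    strictly_sw_ne = (px < qx and py < qy) or (px > qx and py > qy)
    se_corner = (max(px, qx), min(py, qy))
    return not (strictly_sw_ne and weakly_above(se_corner, nu))


# Example with the path NEENEEN used in this section; the points are hypothetical:
print(nu_compatible((0, 1), (2, 2), 'NEENEEN'))  # False: the SE corner (2, 1) is weakly above nu
```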
**Definition 2.1.6**.: A \(\nu\)-tree is a maximal collection of pairwise \(\nu\)-compatible elements in \(A_{\nu}\).
Although \(\nu\)-trees are just sets of lattice points above \(\nu\), they can be regarded as planar binary trees in the classical sense, by connecting consecutive nodes (or lattice points) in the same row or column [20]. The result is a planar binary tree with a root at the top left corner of \(A_{\nu}\).
Let \(\mathcal{T}\) be a \(\nu\)-tree and \(p,r\in\mathcal{T}\) be two elements which do not lie in the same row or same column. We denote by \(p\square r\) the smallest rectangle containing \(p\) and \(r\), and write \(p\llcorner r\) (resp. \(p\urcorner r\)) for the lower left corner (resp. upper right corner) of \(p\square r\).
Let \(p,q,r\in\mathcal{T}\) be such that \(q=p\llcorner r\) and no other elements in \(\mathcal{T}\) besides \(p,q,r\) lie in \(p\square r\). The node \(q\) is called a _\(\nu\)-ascent_ of \(\mathcal{T}\), and the _\(\nu\)-tree rotation_ of \(\mathcal{T}\) at \(q\) is defined as the set \(\mathcal{T}^{\prime}=\big{(}\mathcal{T}\setminus\{q\}\big{)}\cup\{q^{\prime}\}\), where \(q^{\prime}=p\urcorner r\). As shown in [20, Lemma 2.10], the rotation of a \(\nu\)-tree is also a \(\nu\)-tree.
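The rotation can be phrased directly on the set of lattice points. In the plain-Python sketch below, a tree is a set of pairs \((x,y)\); the points \(p\) and \(r\) are recovered as the nearest tree elements above \(q\) in its column and to the right of \(q\) in its row, which is forced by the emptiness condition of the definition. The tiny path used in the demo is a hypothetical example, not one of the figures.

```python
def rotate(tree, q):
    """nu-tree rotation at a nu-ascent q: replace q by the opposite corner of the
    rectangle spanned by p (nearest element above q in its column) and
    r (nearest element to the right of q in its row)."""
    qx, qy = q
    above = [pt for pt in tree if pt[0] == qx and pt[1] > qy]
    right = [pt for pt in tree if pt[1] == qy and pt[0] > qx]
    if not above or not right:
        raise ValueError("q is not a nu-ascent of this tree")
    p = min(above, key=lambda pt: pt[1])
    r = min(right, key=lambda pt: pt[0])
    # the closed rectangle spanned by p and r must contain no other tree element
    rect = {pt for pt in tree if qx <= pt[0] <= r[0] and qy <= pt[1] <= p[1]}
    if rect != {p, q, r}:
        raise ValueError("other tree elements lie in the rectangle spanned by p and r")
    return (tree - {q}) | {(r[0], p[1])}


# Demo on the hypothetical tiny path nu = 'EN': rotating a nu-tree at the ascent (0, 0).
T = {(0, 1), (0, 0), (1, 0)}
print(sorted(rotate(T, (0, 0))))   # [(0, 1), (1, 0), (1, 1)]
```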
**Definition 2.1.7**.: The _\(\nu\)-Tamari lattice_ is the poset on \(\nu\)-trees whose cover relations are given by \(\nu\)-tree rotations.
An example of this lattice for \(\nu=NEENEEN\) is shown on the right of Figure 17.
#### 2.1.3. The \(s\)-Tamari lattice and the \(\nu\)-Tamari lattice are isomorphic
Let \(s=(s(1),\ldots,s(n))\) be a weak composition and \(\nu(s)\) be the lattice path \(\nu(s):=NE^{s(1)}\ldots NE^{s(n)}\). We denote by \(\overleftarrow{s}\) the reversed sequence \((s(n),\ldots,s(2),s(1))\). In [20], we showed that the \(s\)-Tamari lattice and the \(\nu(\overleftarrow{s})\)-Tamari lattice are isomorphic. The bijection relating these two lattices is given as follows.
Let \(T\) be an \(s\)-Tamari tree. We use the reverse preorder traversal to label the nodes of \(T\) with the numbers \(0,1,\ldots,n\), such that a node \(x\) has label \(i\) if the number of internal nodes traversed strictly before \(x\) is equal to \(i\). We denote by \(\varphi(T)\) the unique \(\nu\)-tree containing as many nodes at height \(i\) as there are nodes in \(T\) with label \(i\). Such a \(\nu\)-tree can be uniquely constructed using the right flushing algorithm
from [10]: denote by \(h_{i}\) the number of nodes that need to be added at height \(i\). We add these nodes from bottom to top, from right to left, avoiding forbidden positions. The forbidden positions are those above a node that is not the leftmost node in a row. See Figure 17 for an illustration.
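The right flushing procedure can be sketched directly from this verbal description. The plain-Python function below is only a naive transcription: it assumes that row \(0\) is the bottom row and that nodes are placed starting from the rightmost column of the grid, and it does not check that the output stays weakly above \(\nu\); the heights and width used in the demo call are hypothetical and only illustrate the placement rule.

```python
def right_flush(heights, width):
    """Place heights[y] nodes at height y, bottom to top and right to left, skipping
    columns lying above a non-leftmost node of a lower row.  `width` is the number
    of columns of the grid (one more than the number of east steps of nu)."""
    placed, forbidden = [], set()
    for y, h in enumerate(heights):
        row, x = [], width - 1
        while len(row) < h and x >= 0:
            if x not in forbidden:
                row.append((x, y))
            x -= 1
        if len(row) < h:
            raise ValueError("not enough free columns at height %d" % y)
        placed.extend(row)
        if row:
            leftmost = min(px for px, _ in row)
            forbidden |= {px for px, _ in row if px != leftmost}
    return placed


# Illustration of the placement rule only (hypothetical heights, width 3):
print(right_flush([1, 2, 1], 3))   # [(2, 0), (2, 1), (1, 1), (1, 2)]
```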
**Proposition 2.1.8** ([10, Corollary 2.34]).: _The map \(\varphi\) is an isomorphism between the \(s\)-Tamari lattice and the \(\nu(\overleftarrow{s})\)-Tamari lattice._
### The \(s\)-associahedron
In this section we extend the \(s\)-Tamari lattice to a full polyhedral complex that we call the \(s\)-associahedron, and show that it coincides with the \(\nu\)-associahedron introduced by Ceballos, Padrol, and Sarmiento in [10].
#### 2.2.1. The \(s\)-associahedron
Let \(T\) be an \(s\)-Tamari tree and \(A\) be a (possibly empty) subset of Tamari-ascents of \(T\). Similarly to the case of the \(s\)-weak order, we denote by \(T+A\) the \(s\)-Tamari tree whose inversion set is obtained by increasing all the cardinalities \(\#_{T}(c,a)\) with \((a,c)\in A\) by one and then taking the transitive closure.
**Definition 2.2.1**.: Let \(T_{1}\preccurlyeq T_{2}\) be two \(s\)-Tamari trees for a given weak composition \(s\). We say that the interval \([T_{1},T_{2}]\) is a _pure \(s\)-Tamari interval_ if \(T_{2}=T_{1}+A\) with \(A\) a subset of Tamari-ascents of \(T_{1}\).
Examples of pure \(s\)-Tamari intervals are shown in Figures 18 and 19.
The following lemma is the analog of Lemma 1.2.2 for the \(s\)-Tamari lattice.
**Lemma 2.2.2**.: _Let \(T_{1}\) and \(T_{2}\) be two \(s\)-Tamari trees such that \(T_{2}=T_{1}+A\) with \(A\) a subset of Tamari-ascents of \(T_{1}\). Then \(T_{2}\) can be obtained as the join \(T_{2}=\bigvee_{a\in A}(T_{1}+a)\) over all \(s\)-Tamari trees \(T_{1}+a\) obtained from \(T_{1}\) by rotating a Tamari-ascent \(a\in A\)._
Proof.: The proof is similar to the one of Lemma 1.2.2. Indeed, even though the definition of tree-ascents differs from the definition of Tamari-ascents, the _rotation_ is the same: increase the cardinality of the inversion and take the transitive closure. Besides, as the \(s\)-Tamari lattice is a sublattice of the \(s\)-weak order, the definition of the join is the same as in the \(s\)-weak order (by union and transitive closure of tree-inversions). Then all the arguments for proving Lemma 1.2.2 still work.
Figure 17. Bijection between the \(s\)-Tamari lattice and the \(\nu(\overleftarrow{s})\)-Tamari lattice for \(s=(0,2,2)\). (Figure 16 of [10])
**Definition 2.2.3** (The \(s\)-associahedron).: The \(s\)_-associahedron_\(\operatorname{Asso}(s)\) is the collection of pure \(s\)-Tamari intervals \([T,T+A]\). Here, \(T\) denotes an \(s\)-Tamari tree and \(A\) a subset of Tamari-ascents of \(T\). The _dimension_ of \([T,T+A]\) is said to be equal to \(|A|\). In particular,
1. the vertices of \(\operatorname{Asso}(s)\) are \(s\)-Tamari trees \(T\), and
2. two \(s\)-Tamari trees are connected by an edge if and only if they are related by an \(s\)-Tamari rotation.
We refer to pure \(s\)-Tamari intervals \([T,T+A]\) as _faces_ of \(\operatorname{Asso}(s)\), and say that one face is contained in another if the containment holds as intervals in the \(s\)-Tamari lattice.
Figure 3 illustrates an example of the \(s\)-associahedron \(\operatorname{Asso}(0,2,2)\). As we can see, it is a polytopal complex whose faces are labeled by pure \(s\)-Tamari intervals, and whose edge graph is the Hasse diagram of the s-Tamari lattice. Its \(f\)-polynomial is
\[12+16t+5t^{2}.\]
Indeed, there are \(12\)\(s\)-Tamari trees (faces of dimension \(0\)), \(16\) edges (faces of dimension \(1\)) and \(5\) pure \(s\)-Tamari intervals of dimension \(2\), which correspond to the \(5\) polygons.
The following proposition is straightforward from the definition.
**Proposition 2.2.4**.: _The \(f\)-polynomial of the \(s\)-associahedron \(\operatorname{Asso}(s)\) is given by_
\[\sum_{T}(1+t)^{\operatorname{tasc}(T)},\]
_where the sum ranges over all \(s\)-Tamari trees \(T\) and \(\operatorname{tasc}(T)\) denotes the number of \(s\)-Tamari ascents of \(T\)._
Figure 18. Examples of \(2\)-dimensional pure \(s\)-Tamari intervals.
Proof.: The proof is exactly the same as the proof of Proposition 1.2.4. Let
\[\sum_{T}(1+t)^{\operatorname{tasc}(T)}=f_{0}+f_{1}t+f_{2}t^{2}+\dots.\]
where the sum runs over all \(s\)-Tamari trees \(T\). We need to show that \(f_{k}\) counts the number of \(k\)-dimensional faces of \(\operatorname{Asso}(s)\). This follows from the fact that every subset of Tamari-ascents \(A\) of \(T\), of size \(k\), contributes a \(t^{k}\) to the term \((1+t)^{\operatorname{tasc}(T)}\).
The constant term \(f_{0}\) of this polynomial is the number of \(s\)-Tamari trees, which is the \(s\)_-Catalan number_. Moreover, the coefficients of \(t^{k}\) in the polynomial \(\sum_{T}t^{\operatorname{tasc}(T)}\) may be regarded as \(s\)-generalizations of the Narayana numbers. The \(s\)_-Narayana number_ \(\operatorname{Nar}_{s}(k)\) counts the number of \(s\)-Tamari trees with exactly \(k\) Tamari ascents. They have already been considered in [1, Section 4] and [1] in this general setup, and in [1] for the special case of signatures arising in rational Catalan combinatorics.
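As a quick sanity check of Proposition 2.2.4 on the example \(\operatorname{Asso}(0,2,2)\) above, one can expand \(\sum_{T}(1+t)^{\operatorname{tasc}(T)}\) from the multiset of ascent counts. In the plain-Python snippet below, the counts (one tree with no Tamari-ascent, six with one, five with two) are deduced from the stated \(f\)-polynomial \(12+16t+5t^{2}\) itself rather than read off Figure 3.

```python
from math import comb
from collections import Counter

# Tamari-ascent statistics consistent with the f-polynomial 12 + 16t + 5t^2:
tasc_counts = Counter({0: 1, 1: 6, 2: 5})

f = Counter()
for a, mult in tasc_counts.items():
    for k in range(a + 1):
        f[k] += mult * comb(a, k)   # expand (1 + t)^a

print(dict(f))   # {0: 12, 1: 16, 2: 5}
```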
#### 2.2.2. The \(\nu\)-associahedron
In [1], Ceballos, Padrol and Sarmiento proved that the Hasse diagram of the \(\nu\)-Tamari lattice can be realized as the edge graph of a polyhedral complex induced by an arrangement of tropical hyperplanes. They named this polyhedral complex the \(\nu\)-associahedron and gave a complete characterization of its faces in terms of certain combinatorial objects called "covering \((I,\overline{J})\)-forests", see [10, Definition 5.1 and Theorem 5.2]. We present here an equivalent description of their \(\nu\)-associahedron phrased in terms of \(\nu\)-trees, which is more convenient for our purposes.
Figure 19. Example of a \(3\)-dimensional pure \(s\)-Tamari interval.
Let \(\nu\) be a lattice path consisting of north an east unit steps. Recall that a \(\nu\)-tree is a maximal \(\nu\)-compatible collection of elements in \(A_{\nu}\) (lattice points weakly above \(\nu\)), and that the \(\nu\)-Tamari lattice is the lattice of \(\nu\)-trees whose covering relation is given by \(\nu\)-Tamari rotations. This lattice was extended to a full simplicial complex in [10] called the _\(\nu\)-Tamari complex_. Its faces, which we call _\(\nu\)-faces_, are \(\nu\)-compatible collections of elements in \(A_{\nu}\) ordered by inclusion [10]. A \(\nu\)-face \(\mathcal{F}\) is said to be _covering_ if it contains the root (top left corner in \(A_{\nu}\)) and at least one point in each row and each column. The covering \(\nu\)-faces are exactly the interior faces of the \(\nu\)-Tamari complex, see [10, Lemma 4.3].
**Definition 2.2.5**.: The _\(\nu\)-associahedron_\(\operatorname{Asso}(\nu)\) is the polyhedral complex of interior faces of the \(\nu\)-Tamari complex, ordered by reversed inclusion. Equivalently, \(\operatorname{Asso}(\nu)\) is the polyhedral complex of covering \(\nu\)-faces, ordered by reversed inclusion. The _dimension of a covering \(\nu\)-face_\(\mathcal{F}\) is \(\ell(\nu)+1-|\mathcal{F}|\), where \(\ell(\nu)\) is the length of \(\nu\). In particular,
1. the vertices of \(\operatorname{Asso}(\nu)\) are \(\nu\)-trees, and
2. two \(\nu\)-trees are connected by an edge if and only if they are related by a \(\nu\)-Tamari rotation.
An example of the \(\nu\)-associahedron for \(\nu=NEENEEN\) is shown in Figure 20. Note that in this case, it is isomorphic to the \(s\)-associahedron for \(s=(0,2,2)\) illustrated in Figure 3.
Before matching the definitions of \(s\)-associahedra and \(\nu\)-associahedra we need the following results.
**Lemma 2.2.6**.: _Covering \(\nu\)-faces \(\mathcal{F}\) are in bijection with pairs \((\mathcal{T},\mathcal{A})\) such that \(\mathcal{T}\) is a \(\nu\)-tree and \(\mathcal{A}\) is a (possibly empty) subset of \(\nu\)-ascents of \(\mathcal{T}\). The bijection is determined by \(\mathcal{F}=\mathcal{T}\smallsetminus\mathcal{A}\) and \(\dim(\mathcal{F})=|\mathcal{A}|\)._
Proof.: If \(\mathcal{T}\) is a \(\nu\)-tree and \(a\) is a \(\nu\)-ascent of \(T\), then the \(\nu\)-face \(T\smallsetminus\{a\}\) is contained in two facets (\(T\) and the rotation of \(T\) at \(a\)). Therefore, \(T\smallsetminus\{a\}\) is an interior face of the \(\nu\)-Tamari complex. Similarly, if
\(\mathcal{A}\) is a subset of \(\nu\)-ascents of \(\mathcal{T}\), the \(\nu\)-face \(\mathcal{T}\smallsetminus\mathcal{A}\) is also an interior face. Thus, \(\mathcal{F}=\mathcal{T}\smallsetminus\mathcal{A}\) is a covering \(\nu\)-face. By definition, its dimension is equal to \(\dim(\mathcal{F})=\ell(\nu)+1-|\mathcal{T}\smallsetminus\mathcal{A}|=\ell(\nu)+1-|\mathcal{T}|+|\mathcal{A}|=|\mathcal{A}|\).
It remains to show that every covering \(\nu\)-face \(\mathcal{F}\) can be written uniquely as \(\mathcal{F}=\mathcal{T}\smallsetminus\mathcal{A}\) for some \(\nu\)-tree \(\mathcal{T}\) and a subset \(\mathcal{A}\) of \(\nu\)-ascents of \(\mathcal{T}\). We prove this using the connection between \(\nu\)-Tamari lattices and subword complexes presented in [10].
Let \(\mathcal{F}\) be a covering \(\nu\)-face. By [10, Theorem 5.5], the \(\nu\)-Tamari lattice is the increasing flip graph of a suitably chosen subword complex \(\mathcal{SC}(Q_{\nu},\pi_{\nu})\). It is not required to understand subword complexes in this proof, but to know that under this correspondence, \(\nu\)-trees correspond to facets of the subword complex and \(\nu\)-rotations to increasing flips. The link of a face in a subword complex is also a subword complex itself, see e.g. [11]. Therefore, the restriction of the \(\nu\)-Tamari lattice to the set of \(\nu\)-trees containing \(\mathcal{F}\) is the increasing flip graph of another suitable subword complex \(\mathcal{SC}(Q_{\mathcal{F},\nu},\pi_{\nu})\). Furthermore, it is known that the increasing flip graph of a subword complex has a unique source and a unique sink [10, Proposition 4.8]. In our language, this means that there is a unique \(\nu\)-tree \(\mathcal{T}_{\mathcal{F}}^{\mathrm{min}}\) (resp. \(\mathcal{T}_{\mathcal{F}}^{\mathrm{max}}\)) containing \(\mathcal{F}\) such that every "flippable" element of \(\mathcal{T}_{\mathcal{F}}^{\mathrm{min}}\) (resp. \(\mathcal{T}_{\mathcal{F}}^{\mathrm{max}}\)) that is not in \(\mathcal{F}\) is "increasingly flippable" (resp. "decreasingly flippable"), that is a \(\nu\)-ascent (resp. "\(\nu\)-descent"). Moreover, since \(\mathcal{F}\) is an interior face, then every element of \(\mathcal{A}=\mathcal{T}_{\mathcal{F}}^{\mathrm{min}}\smallsetminus\mathcal{F}\) is flippable. Thus, \(\mathcal{A}\) is a subset of \(\nu\)-ascents of \(\mathcal{T}_{\mathcal{F}}^{\mathrm{min}}\). Since \(\mathcal{F}=\mathcal{T}_{\mathcal{F}}^{\mathrm{min}}\smallsetminus\mathcal{A}\), this finishes our proof.
For \(a\in\mathcal{A}\) we denote by \(\mathcal{T}_{a}\) the \(\nu\)-tree obtained from \(\mathcal{T}\) by applying a \(\nu\)-Tamari rotation at the \(\nu\)-ascent \(a\), and by \(\mathcal{T}+\mathcal{A}\) the join of the set \(\{\mathcal{T}_{a}:a\in\mathcal{A}\}\). The interval \([\mathcal{T},\mathcal{T}+\mathcal{A}]\) of the \(\nu\)-Tamari lattice is called a _pure \(\nu\)-Tamari interval_. An example is illustrated in Figure 21.
As we have seen in Section 1.3, the classification of pure intervals in the \(s\)-weak order is rather involved and technical, and one might expect a similar situation to happen for the classification of pure \(s\)-Tamari intervals in the \(s\)-Tamari lattice. However, if \(\nu=\nu(\overleftarrow{s})\), we know by Proposition 2.1.8 and Lemma 2.2.2 that pure \(s\)-Tamari intervals are mapped to pure \(\nu\)-Tamari intervals under the bijection \(\varphi\) from Section 2.1.3. For instance, the pure \(s\)-Tamari interval in Figure 19 is mapped to the pure \(\nu\)-Tamari interval in Figure 21. Moreover, the following proposition gives a nice and simple classification of pure \(\nu\)-Tamari intervals, as the sets of \(\nu\)-trees containing covering \(\nu\)-faces.
**Proposition 2.2.7**.: _Let \(\mathcal{T}\) be a \(\nu\)-tree and \(\mathcal{A}\) be a (possibly empty) subset of \(\nu\)-ascents of \(\mathcal{T}\). If \(\mathcal{F}=\mathcal{T}\smallsetminus\mathcal{A}\) is the corresponding covering \(\nu\)-face, then_
\[[\mathcal{T},\mathcal{T}+\mathcal{A}]=\left\{\nu\text{-trees }\mathcal{T}^{\prime}:\,\mathcal{F}\subseteq\mathcal{T}^{\prime}\right\}. \tag{12}\]
Proof.: Using the same subword complex techniques from the proof of Lemma 2.2.6, the set of \(\nu\)-trees containing \(\mathcal{F}\) is in correspondence with the set of facets of the subword complex \(\mathcal{SC}(Q_{\mathcal{F},\nu},\pi_{\nu})\), and \(\nu\)-rotations correspond to increasing flips. As argued above, this set of facets has a unique minimal element \(\mathcal{T}_{\mathcal{F}}^{\mathrm{min}}\) and a unique maximal element \(\mathcal{T}_{\mathcal{F}}^{\mathrm{max}}\). Therefore,
\[[\mathcal{T}_{\mathcal{F}}^{\mathrm{min}},\mathcal{T}_{\mathcal{F}}^{\mathrm{ max}}]=\left\{\nu\text{-trees }\mathcal{T}^{\prime}:\,\mathcal{F}\subseteq\mathcal{T}^{\prime}\right\}. \tag{13}\]
By construction, we have that \(\mathcal{T}=\mathcal{T}_{\mathcal{F}}^{\mathrm{min}}\) and \(\mathcal{A}=\mathcal{T}\smallsetminus\mathcal{F}\). It remains to show that \(\mathcal{T}+\mathcal{A}=\mathcal{T}_{\mathcal{F}}^{\mathrm{max}}\).
By [10, Lemma 4.4 or Proposition 5.16], the interval in Equation (13) is a product of classical Tamari lattices. Its minimal element is \(\mathcal{T}\), and it contains the \(\nu\)-Tamari trees \(\mathcal{T}+a\) for \(a\in\mathcal{A}\) (because we rotate an element \(a\) that is not in \(\mathcal{F}\)). Since the dimension \(\dim(\mathcal{F})=|\mathcal{A}|\), the interval in Equation (13) contains all the covers of the minimal element. Furthermore, the join of the covers of the minimal element in a classical Tamari lattice is the top element of the lattice, and this property is preserved by taking products of Tamari lattices. Therefore, \(\mathcal{T}_{\mathcal{F}}^{\mathrm{max}}\) is the join of the set \(\{\mathcal{T}_{a}:a\in\mathcal{A}\}\), which is equal to \(\mathcal{T}+\mathcal{A}\) by definition.
Figure 21 shows an example of a pure \(\nu\)-Tamari interval. The \(\nu\)-ascents \(\mathcal{A}\) of the \(\nu\)-tree \(\mathcal{T}\) are the circled nodes in the figure, and the covering \(\nu\)-face \(\mathcal{F}=\mathcal{T}\smallsetminus\mathcal{A}\) is the set of filled red nodes. The \(\nu\)-trees in the interval \([\mathcal{T},\mathcal{T}+\mathcal{A}]\) are exactly the \(\nu\)-trees containing these filled red nodes.
**Corollary 2.2.8**.: _Let \([\mathcal{T},\mathcal{T}+\mathcal{A}]\) and \([\mathcal{T}^{\prime},\mathcal{T}^{\prime}+\mathcal{A}^{\prime}]\) be two pure \(\nu\)-Tamari intervals, and \(\mathcal{F}=\mathcal{T}\smallsetminus\mathcal{A}\) and \(\mathcal{F}^{\prime}=\mathcal{T}^{\prime}\smallsetminus\mathcal{A}^{\prime}\) be their corresponding covering \(\nu\)-faces. Then \([\mathcal{T},\mathcal{T}+\mathcal{A}]\subseteq[\mathcal{T}^{\prime},\mathcal{T}^{ \prime}+\mathcal{A}^{\prime}]\) if and only if \(\mathcal{F}\supseteq\mathcal{F}^{\prime}\)._
Proof.: Assume \(\mathcal{F}\supseteq\mathcal{F}^{\prime}\). Since every \(\nu\)-tree containing \(\mathcal{F}\) contains \(\mathcal{F}^{\prime}\), then by Equation 12 we have \([\mathcal{T},\mathcal{T}+\mathcal{A}]\subseteq[\mathcal{T}^{\prime},\mathcal{T }^{\prime}+\mathcal{A}^{\prime}]\). This proves the backward direction.
For the forward direction, assume that \(\mathcal{F}\not\supseteq\mathcal{F}^{\prime}\). We aim to show that \([\mathcal{T},\mathcal{T}+\mathcal{A}]\nsubseteq[\mathcal{T}^{\prime},\mathcal{ T}^{\prime}+\mathcal{A}^{\prime}]\). Let \(f^{\prime}\in\mathcal{F}^{\prime}\smallsetminus\mathcal{F}\), and \(\widetilde{\mathcal{T}}\) be a \(\nu\)-tree containing \(\mathcal{F}\). We consider two cases.
Case 1: \(f^{\prime}\notin\widetilde{\mathcal{T}}\). In this case, \(\widetilde{\mathcal{T}}\) does not contain \(\mathcal{F}^{\prime}\), and so \(\widetilde{\mathcal{T}}\in[\mathcal{T},\mathcal{T}+\mathcal{A}]\) but \(\widetilde{\mathcal{T}}\notin[\mathcal{T}^{\prime},\mathcal{T}^{\prime}+ \mathcal{A}^{\prime}]\) as we wanted to show.
Case 2: \(f^{\prime}\in\widetilde{\mathcal{T}}\). Since \(\mathcal{F}\subseteq\widetilde{\mathcal{T}}\) is an interior face and \(f^{\prime}\notin\mathcal{F}\), then \(f^{\prime}\) is flippable in \(\widetilde{\mathcal{T}}\). Flipping it, we obtain a new \(\nu\)-tree \(\mathcal{T}^{*}=\widetilde{\mathcal{T}}\smallsetminus\{f^{\prime}\}\cup\{f^{*}\}\). This new tree satisfies \(\mathcal{F}\subseteq\mathcal{T}^{*}\) but \(\mathcal{F}^{\prime}\nsubseteq\mathcal{T}^{*}\). Thus, \(\mathcal{T}^{*}\in[\mathcal{T},\mathcal{T}+\mathcal{A}]\) but \(\mathcal{T}^{*}\notin[\mathcal{T}^{\prime},\mathcal{T}^{\prime}+\mathcal{A}^{ \prime}]\) as we wanted.
#### 2.2.3. The \(s\)-associahedron and the \(\nu\)-associahedron are isomorphic
The bijection \(\varphi\) between \(s\)-Tamari trees and \(\nu(\overleftarrow{s})\)-trees described in Section 2.1.3 extends naturally to a bijection \(\overline{\varphi}\) between the faces of the \(s\)-associahedron and the faces of the \(\nu(\overleftarrow{s})\)-associahedron. For each pair \((T,A)\) of an \(s\)-Tamari tree \(T\) and a subset \(A\) of Tamari-ascents of \(T\), we can associate a pair \((\mathcal{T},\mathcal{A})\) of a \(\nu\)-tree \(\mathcal{T}=\varphi(T)\) and a subset \(\mathcal{A}\) of \(\nu\)-ascents of \(\mathcal{T}\) corresponding to \(A\). We denote by \(\overline{\varphi}\) the map that sends the pure \(s\)-Tamari interval \([T,T+A]\) to the covering \(\nu\)-face \(\mathcal{F}=\mathcal{T}\smallsetminus\mathcal{A}\).
**Theorem 2.2.9**.: _The map \(\overline{\varphi}\) is an isomorphism between \(\operatorname{Asso}(s)\) and \(\operatorname{Asso}(\nu(\overleftarrow{s}))\)._
Figure 21. Example of a pure \(\nu\)-Tamari interval
Proof.: By Lemma 2.2.6, the map \(\overline{\varphi}\) is a bijection between pure \(s\)-Tamari intervals and covering \(\nu(\overleftarrow{s})\)-faces. So, \(\overline{\varphi}\) is a bijection between the faces of the \(s\)-associahedron and the faces of the \(\nu(\overleftarrow{s})\)-associahedron. We need to show that this map preserves their order relation.
Let \([T,T+A]\) and \([T^{\prime},T^{\prime}+A^{\prime}]\) be two pure \(s\)-Tamari intervals. Consider the corresponding pure \(\nu\)-Tamari intervals \([\mathcal{T},\mathcal{T}+\mathcal{A}]\) and \([\mathcal{T}^{\prime},\mathcal{T}^{\prime}+\mathcal{A}^{\prime}]\) determined by the bijection \(\varphi\), and let \(\mathcal{F}=\mathcal{T}\smallsetminus\mathcal{A}\) and \(\mathcal{F}^{\prime}=\mathcal{T}^{\prime}\smallsetminus\mathcal{A}^{\prime}\) be the corresponding covering \(\nu\)-faces. Since \(\varphi\) is order preserving, \([T,T+A]\subseteq[T^{\prime},T^{\prime}+A^{\prime}]\) if and only if \([\mathcal{T},\mathcal{T}+\mathcal{A}]\subseteq[\mathcal{T}^{\prime},\mathcal{ T}^{\prime}+\mathcal{A}^{\prime}]\). Combining this with Corollary 2.2.8, we get that \([T,T+A]\subseteq[T^{\prime},T^{\prime}+A^{\prime}]\) if and only if \(\mathcal{F}\supseteq\mathcal{F}^{\prime}\) as we wanted.
## Part 3. Polytopal conjectures
### Polytopal complex and polytopal subdivision realizations
We proved in Theorem 1.2.6 that the \(s\)-permutahedron is what we call a _combinatorial complex_, i.e. a collection of cells or faces (pure intervals) satisfying two properties: (1) it is closed under taking faces and (2) any two faces intersect properly. These two conditions are necessary conditions for being a _polytopal complex_. In order to be a polytopal complex, we need the further property that all the cells of the complex can be geometrically realized as (convex) polytopes. This condition is stated in the following conjecture.
**Conjecture 3.1.1** (Polytopality of pure intervals).: _For any weak composition \(s\), the pure intervals of the \(s\)-weak order are polytopal in the following sense:_
1. _The inclusion poset of pure intervals contained in a pure interval_ \([T,T+A]\) _is the face lattice of some polytope_ \(P\) _of dimension_ \(|A|\)_._
2. _The Hasse diagram of the restriction of the_ \(s\)_-weak order to_ \([T,T+A]\) _is the edge graph of_ \(P\)_._
Item (2) in this conjecture is not really necessary because it follows from Item (1). However, we include it here because it is a nice property that we would like to highlight.
We have strong reasons to believe that this conjecture is true. On the one hand, it has been recently proven in the case where \(s\) does not contain any zeros [6], as we explain in Section 3.4. On the other hand, we have an empirical polytopal construction of each pure interval as a generalized permutahedron, which we call an _ascentope_. In Figure 11, the ascentope corresponding to the pure interval is shown on the right. The code to compute the ascentopes is available on [10]. Computational experiments show that our ascentopes have the right properties on examples of dimension up to 9. We plan to study these objects in future work.
Our next conjecture is even more general and was also proven when \(s\) does not contain any zeros [6], see Section 3.4. All of our figures seem to indicate that the Hasse diagram of the \(s\)-weak order is realizable as the edge graph of a polytopal subdivision of a polytope, whose faces are in correspondence with pure intervals. This polytope should be combinatorially isomorphic to the zonotope
\[Z(s)=\sum_{1\leq i<j\leq n}s(j)\Delta_{ij}, \tag{14}\]
where \(\Delta_{ij}=\operatorname{conv}\{e_{i},e_{j}\}\subset\mathbb{R}^{n}\). In particular, if \(s\) has no zeros (except possibly for \(s(1)\)) then \(Z(s)\) is combinatorially an \((n-1)\)-dimensional permutahedron.
**Conjecture 3.1.2** (Polytopal subdivision realization).: _For any weak composition \(s\), the \(s\)-permutahedron can be geometrically realized as a polytopal subdivision of a polytope which is combinatorially isomorphic to \(Z(s)\). This means,_
1. _The inclusion poset of pure intervals of the_ \(s\)_-weak order is the face poset of the subdivision._
2. _The Hasse diagram of the_ \(s\)_-weak order is the edge graph of the subdivision._
Again, Item (1) in this conjecture follows from Item (2), but we include it here because it is a nice property that we would like to highlight. As we will discuss in Section 3.3, this conjecture holds in full generality in dimensions 2 and 3. Examples of geometric realizations of \(s\)-permutahedra are illustrated in Figures 22 and 23. You can find \(3\)-dimensional animations of these polyhedral subdivisions and more on this webpage [11]. Besides, all examples in dimensions \(2\) and \(3\) can be computed with SageMath as shown in our demo worksheet [11].
Figure 22. Some geometric realizations of \(s\)-permutahedra in dimension \(2\).
Figure 23. Some geometric realizations of \(s\)-permutahedra in dimension \(3\).
One remarkable property of the classical permutahedron is that the classical associahedron can be obtained from it by removing certain facets [10, 10]. We believe that if \(s\) contains no zeros, this property also holds for the \(s\)-permutahedron and the \(s\)-associahedron.
**Conjecture 3.1.3** (Removing facets of the \(s\)-permutahedron).: _If \(s\) has no zeros (except for \(s(1)\)), there exists a geometric realization of the \(s\)-permutahedron such that the \(s\)-associahedron can be obtained from it by removing certain facets._
Examples of \(s\)-associahedra obtained from \(s\)-permutahedra by removing certain facets are illustrated in Figures 24 and 4. The \(3\)d examples are also available on the webpage [11]. All these examples were computed with SageMath and can be obtained from our demo worksheet [11].
We remark that if \(s\) contains zeros other than \(s(1)\), then \(\operatorname{Asso}(s)\) is not convex, see an example in Figure 25. This follows from the analogous result for \(\nu\)-associahedra in [12, Cor. 5.13], which states that the \(\nu\)-associahedron is convex if and only if \(\nu\) does not have two non-initial consecutive north steps. In such a case, the \(\nu\)-associahedron is a polytopal subdivision of a classical associahedron [12, Cor. 5.13]. Thus, Conjecture 3.1.3 may be thought of as a generalization of Hohlweg and Lange's result in [10].
Figure 24. Realizations of \(s\)-associahedra from \(s\)-permutahedra
Figure 25. The \(s\)-permutahedron and \(s\)-associahedron for \(s=(0,0,2)\). In this case, the \(s\)-associahedron is not convex because \(s(2)=0\).
### Generalizations for finite Coxeter groups
Another natural related question is whether there are generalizations of our constructions for other Coxeter groups.
**Question 3.2.1**.: Is there a natural definition of the \(s\)-permutahedron and the \(s\)-associahedron for other finite Coxeter groups?
Figure 26 shows a tantalizing partial answer to this question in type \(B_{2}\). The left hand side of this figure shows what we believe could be Fuss-Catalan generalizations of the type \(B_{2}\) permutahedron (subdivided octagons), while the right hand side shows the corresponding Fuss-Catalan type \(B_{2}\) associahedra (subdivided cyclohedra) obtained by removing some facets. The numbers of vertices of these subdivided cyclohedra are \(6,15,28,45,66,\dots\), which are indeed the \(m\)_-Catalan numbers_ of type \(B_{2}\) (also called hexagonal numbers in [19, Sequence A000384]). This can be checked from the general formula of the Coxeter \(m\)-Catalan numbers, which is given by
\[\prod_{i=1}^{n}\frac{e_{i}+mh+1}{e_{i}+1}, \tag{15}\]
where \(h\) denotes the Coxeter number and \(e_{1},\dots,e_{n}\) are the exponents of the Coxeter group. In type \(B_{2}\), we have \(h=4\), \(e_{1}=1\) and \(e_{2}=3\), so the previous formula reduces to \((2m+1)(m+1)\), which can be easily checked to count the number of vertices of the subdivided cyclohedra on the right of Figure 26. For instance, varying \(m\) gives the desired sequence \(6,15,28,45,66,\dots\).
Figure 26. A tantalizing option for \(s\)-permutahedra and \(s\)-associahedra of Coxeter type \(B_{2}\).
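A quick arithmetic check of Equation (15) in type \(B_{2}\) can be done with a few lines of plain Python (a minimal sketch, independent of the SageMath worksheet; the function name is ours):

```python
from math import prod
from fractions import Fraction

def coxeter_m_catalan(exponents, h, m):
    """Coxeter m-Catalan number prod_i (e_i + m*h + 1) / (e_i + 1) from Equation (15)."""
    value = prod(Fraction(e + m * h + 1, e + 1) for e in exponents)
    assert value.denominator == 1
    return int(value)

# Type B2: Coxeter number h = 4, exponents 1 and 3 -> the hexagonal numbers.
print([coxeter_m_catalan([1, 3], 4, m) for m in range(1, 6)])   # [6, 15, 28, 45, 66]
print([(2 * m + 1) * (m + 1) for m in range(1, 6)])             # the simplified form agrees
```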
As far as we know, there are no _m-generalizations_ of the \(n!\) number (or equivalently, \(|W|\)) for other finite Coxeter groups \(W\). In type \(A\), if we set \(s=(m,m,\dots,m)\) to be a sequence consisting of \(n\) copies of \(m\), then the number of elements of the \(s\)-weak order is the \(s!\) number [10]
\[s!=\prod_{i=0}^{n-1}(1+im). \tag{16}\]
In particular, if \(m=1\), this recovers the \(n!\) number, and so may be regarded as the _m-generalization_ of \(n!\) in type \(A\). On the other hand, the number of elements of a finite Coxeter group \(W\) of rank \(n\) can be computed by the uniform formula
\[|W|=\prod_{i=1}^{n}(1+e_{i}). \tag{17}\]
The \(n!\) number is recovered for type \(A_{n-1}\), in which case the exponents are \(e_{1},\dots,e_{n-1}=1,\dots,n-1\). It seems plausible for us, that a natural _m-generalization of \(|W|\)_ is
\[\prod_{i=1}^{n}(1+me_{i}). \tag{18}\]
In type \(A_{n-1}\), this recovers the number of elements of the \(s\)-weak order for \(s=(m,m,\dots,m)\), expressed in Equation (16). For type \(B_{2}\), we get the sequence \((1+m)(1+3m)\), which counts exactly the number of vertices of the Fuss-Catalan \(s\)-permutahedron of type \(B_{2}\) that we are suggesting on the left of Figure 26. The first terms of this sequence are \(8,21,40,65,96,\dots\). These numbers are called octagonal numbers in [12, Sequence A000567]. It would be interesting to investigate if the \(m\)-generalization of \(|W|\) is the dimension of a certain \(W\)-module in representation theory. Remarkable \(S_{n}\)-modules whose dimensions are given by \(n!\) appear in [11, 12, 13].
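The agreement between Equations (16) and (18) in type \(A\), and the octagonal numbers in type \(B_{2}\), can also be verified with a few lines of plain Python (the function names below are ours, chosen only for this illustration):

```python
from math import prod

def m_generalized_order(exponents, m):
    """Proposed m-generalization of |W|: prod_i (1 + m * e_i), Equation (18)."""
    return prod(1 + m * e for e in exponents)

def s_factorial(n, m):
    """The s! number for s = (m, ..., m) with n entries, Equation (16)."""
    return prod(1 + i * m for i in range(n))

# Type A_{n-1} (exponents 1, ..., n-1): both formulas agree, here for n = 4.
print([m_generalized_order(range(1, 4), m) for m in range(1, 5)])   # [24, 105, 280, 585]
print([s_factorial(4, m) for m in range(1, 5)])                     # [24, 105, 280, 585]
# Type B2 (exponents 1 and 3): the octagonal numbers 8, 21, 40, 65, 96, ...
print([m_generalized_order([1, 3], m) for m in range(1, 6)])
```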
In forthcoming work, we will introduce and investigate the _\(s\)-braid arrangement_, an arrangement of hyperplanes which induces a cell decomposition that is dual to the \(s\)-permutahedron. This point of view will provide further insights on potential generalizations in the context of finite Coxeter groups. An example is illustrated on the right of Figure 27.
### Geometric realizations in dimensions \(2\) and \(3\)
In this section, we provide an explicit construction that shows that Conjectures 3.1.1, 3.1.2 and 3.1.3 hold in dimensions \(2\) and \(3\). There is a natural way of assigning coordinates to each \(s\)-decreasing tree.
Figure 27. The \(s\)-braid arrangement for \(s=(0,2,3)\).
Let \(e_{ij}:=e_{i}-e_{j}\) for \(i<j\), where \(e_{1},\dots,e_{n}\in\mathbb{R}^{n}\) are the standard basis vectors in \(\mathbb{R}^{n}\). Let \(s=(s(1),s(2),\dots,s(n))\) be a weak composition, \(T\) be an \(s\)-decreasing tree and \(A\) be a subset of tree-ascents of \(T\). We define
\[v_{T}=\sum_{i<j}\#_{T}(j,i)e_{ij}\quad\text{and}\quad F_{(T,A)}=\text{conv}\{v_ {T^{\prime}}:T^{\prime}\in[T,T+A]\}.\]
For \(n=3\) and \(s(3)\neq 0\), this gives a \(2\)-dimensional realization of the \(s\)-permutahedron in the subspace \(\{(x_{1},x_{2},x_{3})\in\mathbb{R}^{3}:x_{1}+x_{2}+x_{3}=0\}\subset\mathbb{R}^{3}\), see Figure 22. This realization "cuts" a polygon (a hexagon if \(s(2)\neq 0\) or a quadrilateral if \(s(2)=0\)) into smaller polygons. Each polygon is convex and corresponds to one facet of the \(s\)-permutahedron.
One would hope that this construction directly extends to higher dimensions, but this is not the case. For \(n\geq 4\), the convex hull of all \(v_{T}\)'s is the zonotope \(Z(s)\) from (14), which is still cut into identifiable pieces; however, those pieces do not form convex polytopes. We were able to fix this realization in dimension \(3\) (\(n=4\) and \(s(4)\neq 0\)), using a procedure illustrated in Figure 28. The first image shows the direct realization obtained by \(v_{T}\): we notice some bent edges and can identify what we call a "broken pattern" in the trees related to those edges. The solution is to push the selected trees in a given direction by a parameter given by the broken pattern itself. For certain compositions \(s\), this push leads to a "collision" (see the middle of Figure 28), which again forces us to push certain trees further away. The process can be explicitly described for dimension \(3\). The new coordinates are given by \(\overline{v}_{T}=\sum_{i<j}(3\,\#_{T}(j,i)+f_{T}(j,i))e_{ij}\) where \(f_{T}(j,i)=0\) for \(j\neq 3\) and
\[f_{T}(3,i)=\begin{cases}0&\text{if}\ \,\#_{T}(3,i)=0,\\ s(3)+(\#_{T}(4,1)-\#_{T}(4,3))+(\#_{T}(4,2)-\#_{T}(4,3))&\text{if}\ 0<\#_{T}(3,i)<s(3),\\ 2s(3)&\text{if}\ \,\#_{T}(3,i)=s(3).\end{cases}\]
See Figure 4 for examples, the demo webpage [14], or the SageMath demo worksheet [14]. We conjecture that such a construction also exists for higher dimensions.
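For experimentation, the two coordinate assignments can be transcribed directly from the formulas above. The plain-Python sketch below assumes the inversion multiset is given as a dictionary mapping \((j,i)\) with \(j>i\) to \(\#_{T}(j,i)\), and that \(s\) is a list with \(s[k-1]=s(k)\); only the \(n=4\) case of the corrected coordinates is transcribed, and no attempt is made to verify that the input encodes a valid \(s\)-decreasing tree.

```python
from itertools import combinations

def v_coords(inv, n):
    """Direct coordinates v_T = sum_{i<j} #_T(j, i) * (e_i - e_j), as a vector of length n."""
    v = [0] * n
    for i, j in combinations(range(1, n + 1), 2):
        c = inv.get((j, i), 0)
        v[i - 1] += c
        v[j - 1] -= c
    return v

def v_coords_fixed(inv, s):
    """Corrected coordinates for n = 4: sum_{i<j} (3*#_T(j, i) + f_T(j, i)) * (e_i - e_j)."""
    def f(j, i):
        if j != 3:
            return 0
        c = inv.get((3, i), 0)
        if c == 0:
            return 0
        if c == s[2]:          # #_T(3, i) = s(3)
            return 2 * s[2]
        return s[2] + (inv.get((4, 1), 0) - inv.get((4, 3), 0)) + (inv.get((4, 2), 0) - inv.get((4, 3), 0))
    v = [0] * 4
    for i, j in combinations(range(1, 5), 2):
        w = 3 * inv.get((j, i), 0) + f(j, i)
        v[i - 1] += w
        v[j - 1] -= w
    return v
```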
Once we have a geometric realization of the \(s\)-permutahedron, we are able to identify what we call _Tamari-valid_ faces and construct a realization of the \(s\)-associahedron by removing faces of the \(s\)-permutahedron. This works for both our \(2D\) and \(3D\) realizations, and is an analogue of a construction in [10] giving rise to Loday's realization of the associahedron. The process is illustrated in Figure 24 in \(2D\), see Figure 4 for \(3D\) examples.
### Realization via flow polytopes
As mentioned before, the conjectures of this paper were announced in an extended abstract of our work back in 2019 [10], and this is the first time that we include all the details of our constructions. Meanwhile, Conjecture 3.1.2 (and consequently, Conjecture 3.1.1) has been proved in the particular case where \(s\) contains no zeros [GDMP\({}^{+}\)23, Theorem 1.3]. The case when \(s\) contains zeros is still open. Apart from dimensions 2 and 3, Conjecture 3.1.3 remains open for all \(s\).
Figure 28. Construction of a \(3\)-dimensional realization.
The construction in [GDMP\({}^{+}\)23] is a beautiful work based on triangulations of flow polytopes. For each composition \(s\) (with no zeros), the authors introduce a graph called the \(s\)_-oruga graph_, and use a _framing_ of it to triangulate its corresponding flow polytope. They show that the dual of the poset of interior faces of the resulting triangulation is isomorphic to the face poset of the \(s\)-permutahedron. Finally, they use the Cayley trick and techniques from tropical geometry to obtain the desired geometric realization.
The presentation in [GDMP\({}^{+}\)23] has further important implications. One of them is a simpler and more convenient description of pure intervals of the \(s\)-weak order, when \(s\) has no zeros. They can be described as certain subsets of _coherent routes_ of the \(s\)-oruga graph corresponding to _interior_ faces of the triangulation. This has the advantage that intersecting two pure intervals corresponds to just intersecting the associated sets of coherent routes. This intersection is literally an intersection as sets, and therefore, it is very simple to compute. Exactly the same situation happens for the \(\nu\)-associahedron in Section 2.2.2, which makes the proofs much simpler in this case.
In contrast, the combinatorial description of the intersection of two pure intervals that we present in Section 1.4.1 uses the characterization of pure intervals in Section 1.3, which is rather involved. However, we believe that our deep exploration of their combinatorial properties will be very useful. Combining this with the geometric/combinatorial techniques from [GDMP\({}^{+}\)23] provides a powerful toolkit for future explorations.
|
2309.13268 | Derandomization of quantum algorithm for triangle finding | Derandomization is the process of taking a randomized algorithm and turning
it into a deterministic algorithm, which has attracted great attention in
classical computing. In quantum computing, it is challenging and intriguing to
derandomize quantum algorithms, due to the inherent randomness of quantum
mechanics. The significance of derandomizing quantum algorithms lies not only
in theoretically proving that the success probability can essentially be 1
without sacrificing quantum speedups, but also in experimentally improving the
success rate when the algorithm is implemented on a real quantum computer.
In this paper, we focus on derandomizing quanmtum algorithms for the triangle
sum problem (including the famous triangle finding problem as a special case),
which asks to find a triangle in an edge-weighted graph with $n$ vertices, such
that its edges sum up to a given weight.We show that when the graph is promised
to contain at most one target triangle, there exists a deterministic quantum
algorithm that either finds the triangle if it exists or outputs ``no
triangle'' if none exists. It makes $O(n^{9/7})$ queries to the edge weight
matrix oracle, and thus has the same complexity with the state-of-art
bounded-error quantum algorithm. To achieve this derandomization, we make full
use several techniques:nested quantum walks with quantum data structure,
deterministic quantum search with adjustable parameters, and dimensional
reduction of quantum walk search on Johnson graph. | Guanzhong Li, Lvzhou Li | 2023-09-23T05:24:59Z | http://arxiv.org/abs/2309.13268v1 | # Derandomization of quantum algorithm for triangle finding
###### Abstract
Derandomization is the process of taking a randomized algorithm and turning it into a deterministic algorithm, which has attracted great attention in classical computing. In quantum computing, it is challenging and intriguing to derandomize quantum algorithms, due to the inherent randomness of quantum mechanics. The significance of derandomizing quantum algorithms lies not only in theoretically proving that the success probability can essentially be 1 without sacrificing quantum speedups, but also in experimentally improving the success rate when the algorithm is implemented on a real quantum computer.
In this paper, we focus on derandomizing quantum algorithms for the triangle sum problem (including the famous triangle finding problem as a special case), which asks to find a triangle in an edge-weighted graph with \(n\) vertices, such that its edges sum up to a given weight. We show that when the graph is promised to contain at most one target triangle, there exists a deterministic quantum algorithm that either finds the triangle if it exists or outputs "no triangle" if none exists. It makes \(O(n^{9/7})\) queries to the edge weight matrix oracle, and thus has the same complexity as the state-of-the-art bounded-error quantum algorithm. To achieve this derandomization, we make full use of several techniques: nested quantum walks with quantum data structure, deterministic quantum search with adjustable parameters, and dimensional reduction of quantum walk search on Johnson graph.
## 1 Introduction
Randomized algorithms play an important role in computer science, as they can be significantly more efficient with respect to some computational resource for many basic computational problems, such as time for primality testing [1, 2, 3], space for undirected s-t connectivity [4] and circuit depth for perfect matching [5]. Since the polynomial-time deterministic algorithm for primality testing [6] was proposed in 2004, the study of derandomization, i.e., the question of whether it is possible to come up with efficient deterministic versions of randomized algorithms, has been attracting attention from the academic community; see, for instance, Refs. [7, 8, 9, 10, 11]. Indeed, many exciting works have been dedicated to derandomizing concrete randomized algorithms, including the aforementioned primality testing [6], undirected s-t connectivity [12]
and perfect matching [13]. There are also entire books on derandomization, see for instance, Refs. [14, 15, 16].
Because of the inherent randomness of quantum mechanics, most of the existing quantum algorithms are randomized, i.e., have a probability of failure [17, 18, 19]. Derandomizing quantum algorithms seems difficult, with only a few quantum algorithms having been successfully derandomized (i.e., made to succeed with certainty), such as deterministic quantum search [20, 21, 22, 23, 24] and deterministic quantum algorithms for Simon's problem [25] (and its generalization [26]), the element distinctness problem [27] and the welded tree problem [28]. The significance of derandomizing quantum algorithms lies not only in theoretically proving that the success probability can essentially be 1 without sacrificing quantum speedups, but also in experimentally improving the success rate when the algorithm is implemented on a real quantum computer. Thus, it is intriguing to find more quantum algorithms that allow derandomization. In this paper we will focus on derandomizing quantum algorithms for triangle finding and its generalization, an important and extensively studied problem in quantum computing.
### Triangle finding and its generalization
The triangle finding problem has been extensively studied in quantum computing. It aims to find a triangle in an unknown graph with \(n\) vertices, making as few queries as possible to its adjacency matrix given as a black box. Compared to the classical query complexity of \(\Theta(n^{2})\), the quantum query complexity of the problem has gradually improved from the trivial \(O(n^{3/2})\) using Grover search on triples of the graph's vertices, to the state-of-the-art \(O(n^{5/4})\) using extended learning graph [29].
The first improvement over \(O(n^{3/2})\) was given by Buhrman et al. [30] for sparse graphs with \(m=o(n^{2})\) edges, as they presented a quantum algorithm for the triangle finding problem with query complexity \(O(n+\sqrt{nm})\) using amplitude amplification. Using combinatorial ideas and amplitude amplification, Szegedy [31] (see also [32]) showed how to solve the problem with query complexity \(\widetilde{O}(n^{10/7})\), where \(\widetilde{O}(\cdot)\) hides logarithmic factors. Magniez et al. [32] then utilized quantum walk search on Johnson graphs, which was originally used to construct an optimal quantum algorithm for the element distinctness problem [33], to obtain a more efficient algorithm with \(\widetilde{O}(n^{13/10})\) queries. Belovs [34] introduced the learning graph framework and, as the first application of this framework, used it to improve the quantum query complexity of triangle finding to \(O(n^{35/27})\). Lee et al. [35] then further improved the query complexity to \(O(n^{9/7})\), again using learning graphs. Finally, Le Gall [36] gave a \(\widetilde{O}(n^{5/4})\) quantum algorithm, which utilizes combinatorial structure of the problem, quantum walk search on Johnson graph, and variable time quantum search. The logarithmic factors in \(\widetilde{O}(n^{5/4})\) were later removed using extended learning graphs by Carette et al. [29].
As can be seen from the above progress, each improvement in the query complexity of the triangle finding problem has brought deeper insight into the problem or stimulated new algorithmic techniques (see [37] for a more detailed review). However, the gap between \(O(n^{5/4})\) and \(\Omega(n)\) is still open. At the same time, all the above quantum algorithms are bounded-error, that is, have a probability of failure.
In the above process, Belovs and Rosmanis [38] found that the \(O(n^{9/7})\) algorithm by Lee et al. [35] based on non-adaptive learning graph can in fact solve a more general problem -- the triangle sum problem, in which the underlying graph is now edge-weighted and the goal is to find a target triangle whose edges sum to a given value. They also showed that the algorithm is almost optimal by giving a matching lower bound of \(\Omega(n^{9/7}/\sqrt{\log n})\). Jeffery et al. [39] then proposed a new nested quantum walk with quantum data structures, which is based on the MNRS framework [40], and as an application they adapted Lee et al.'s algorithm [35] to a quantum-walk-based algorithm. However, in order to reduce errors when nesting the bounded-error subroutines, the algorithm makes \(\widetilde{O}(n^{9/7})\) queries with additional log factors.
### Our contribution
In this paper, we propose a deterministic quantum algorithm for the triangle sum problem (and thus also for the famous triangle finding problem), based on derandomization of the quantum-walk-based algorithm by Jeffery et al. [39]. Our algorithm has the same \(O(n^{9/7})\) query complexity with the state-of-the-art bounded-error quantum algorithm by Lee et al. [35], but we require an additional promise that the graph has at most one target triangle. Apart from nested quantum walks with quantum data structures [39], our algorithm also utilizes deterministic quantum search with adjustable parameters [24] (see Section 2.3 and especially Lemma 3), and a technique to reduce the dimension of invariant subspaces of quantum walk search on Johnson graph (see Section 2.2 and especially Lemma 1), which has also found application in designing a deterministic quantum algorithm for the element distinctness problem [27]. We think our algorithm has the following significance:
1. It is the first deterministic quantum algorithm for the triangle sum (and also triangle finding) problem making \(O(n^{9/7})\) queries, and it provides a new example of derandomization of quantum algorithms.
2. It shows the usefulness of the techniques being utilized, and it's likely that more applications will be found.
**Formal statement of the triangle sum problem.** Consider an undirected and weighted simple graph \(G\) with \(n\) vertices, specified by its edge weight matrix \(A\in[M]^{n\times n}\), where \(M\) is a positive integer and \([M]:=\{0,1,\cdots,M-1\}\). The edge weight matrix \(A\) can be accessed through a quantum black box (oracle) \(O\) whose effect on the computational basis is as follows:
\[O\ket{i,j}\ket{b}\mapsto\ket{i,j}\ket{b\oplus A_{i,j}}, \tag{1}\]
where \((i,j)\in[n]\times[n]\) encodes an edge of \(G\) to be queried, and \(A_{i,j}\in[M]\) is the corresponding weight. Suppose the value \(A_{i,j}\) is stored in \(m\) qubits, then \(\oplus\) denotes bit-wise XOR and \(O^{2}=I\).
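As an illustration (not part of the original text), the XOR convention of Eq. (1) can be mimicked by a small classical stand-in for the oracle; the values of \(n\), \(M\) and \(A\) below are arbitrary toy data.

```python
import numpy as np

# A small classical stand-in for the oracle O of Eq. (1): it XORs the weight
# A[i, j] into the output register, so applying it twice restores the input
# (O^2 = I).  The values of n, M and A are arbitrary toy data.
rng = np.random.default_rng(0)
n, M = 5, 8                      # M = 2**m with m = 3 output (qu)bits
A = rng.integers(0, M, size=(n, n))
A = np.triu(A, 1)
A = A + A.T                      # symmetric edge-weight matrix, zero diagonal

def oracle(i, j, b):
    return i, j, b ^ int(A[i, j])

i, j, b = 2, 4, 0b101
assert oracle(*oracle(i, j, b)) == (i, j, b)   # querying twice is the identity
print(oracle(i, j, b))
```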
**Definition 1** (the triangle sum problem).: Given \(d\in[M]\), find three vertices \(a,b,c\in[n]\) in the graph \(G\) such that
\[A_{a,b}+A_{b,c}+A_{c,a}=d\mod M, \tag{2}\]
making as few queries to the oracle \(O\) as possible.
The triangle sum _promised_ problem has an additional promise that if such a target triangle \(\triangle abc\) exists in \(G\), there is _only_ one.
In this paper we obtain the following result.
**Theorem 1**.: _There is a deterministic quantum algorithm that solves the triangle sum promised problem with certainty, making \(O(n^{9/7})\) queries._
Specifically, the algorithm outputs the target triangle if it exists and claims there's none if no such triangle exists.
**Corollary 1**.: _There is a deterministic quantum algorithm for triangle finding which makes \(O(n^{9/7})\) queries._
Proof.: Triangle finding is a special case of the triangle sum problem, which can be seen by setting \(M=4,d=3\) and restricting \(A\) to \([2]^{n\times n}\). Thus, a quantum algorithm solving the triangle sum problem also solves triangle finding.
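As a quick sanity check of this reduction (an illustrative addition, not part of the proof), one can enumerate the eight possible weight patterns of a triple:

```python
# With weights in {0,1} and M = 4, d = 3, a triple has weight sum congruent to
# 3 mod 4 exactly when all three of its edges are present, i.e. when it is a
# triangle.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert ((a + b + c) % 4 == 3) == (a == b == c == 1)
print("reduction verified")
```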
Note that the addition in Eq. (2) is modulo \(M\), which is crucial in proving the \(\Omega(n^{9/7}/\sqrt{\log n})\) lower bound of the triangle sum problem [38]. Intuitively, 'addition modulo \(M\)' makes it impossible for an algorithm to rule out a potential triangle, say \(\triangle a^{\prime}b^{\prime}c^{\prime}\) in \(G\), when the queried edge weights \(A_{a^{\prime},b^{\prime}}\) and \(A_{b^{\prime},c^{\prime}}\) already sum to more than \(d\) or either of them is zero. A more formal description of this property can be found in [41, 38], where it is referred to as the orthogonal array condition or the orthogonality property.
### Paper organization
The rest of the paper is organized as follows. In Section 2 we introduce some important techniques that will be used later: two forms of quantum walk search on Johnson graph dubbed "edge-walk" and "vertex-walk", a technique to reduce the dimension of the invariant subspace of quantum vertex-walk on Johnson graph, and deterministic quantum search with adjustable parameters. In Section 3, we first sketch the procedure of our algorithm which contains four layers of subroutines in Section 3.1, and then show that each layer of subroutine can be derandomized in Section 3.2, thus obtaining a deterministic quantum algorithm for the triangle sum promised problem. The details of the quantum walk search on Johnson graph used in three of the four layers, i.e. how to implement the setup, update, checking operations and what is the probability of reaching the target state, are presented in Section 4. We conclude this paper with some related problems in Section 5.
## 2 Preliminaries
In this section, we present some techniques and results that will be used later in our deterministic quantum algorithm for the triangle sum promised problem. We will use two forms of quantum walk search on Johnson graphs with subtle differences. The first one described in Section 2.1 stems from the nested quantum walk [39], and we call it "quantum _edge_-walk on Johnson graph", since its basis state \(\left|R\right\rangle\left|R^{\prime}\right\rangle\) can be seen as an edge of a Johnson graph. The second one described in Section 2.2 originates from Ambainis' quantum walks for the element distinctness problem [33], and we call it "quantum _vertex_-walk on Johnson graph", since in its basis state \(\left|R,y\right\rangle\), \(\left|R\right\rangle\) can be seen as a vertex of a Johnson graph and \(\left|y\right\rangle\) as a coin used to update \(\left|R\right\rangle\).
In Section 2.2 we will also introduce a technique (Lemma 1) to reduce the dimension of the invariant subspace of quantum vertex-walk on Johnson graph, when certain conditions (Condition 1) are satisfied. In order to achieve certainty of success, we will also need deterministic quantum search (with adjustable parameters) as described in Section 2.3.
### Quantum edge-walk search on Johnson graph
A Johnson graph \(J(N,r)\) has \(\binom{N}{r}\) vertices. Each vertex is a subset \(R\subseteq[N]\) of size \(r\), and two vertices \(R,R^{\prime}\) are connected by an edge if and only if \(\left|R\cap R^{\prime}\right|=r-1\). We denote by \(V(G)\) the vertex set of \(G:=J(N,r)\).
The quantum edge-walk on \(G\) has state space \(\{\left|R\right\rangle\left|R^{\prime}\right\rangle\left|D(R)\right\rangle:R,R ^{\prime}\in V(G)\}\). The data \(D(R)\) associated with \(R\) relies on the input of the problem to be solved, and we check if a vertex \(R\) is marked based on whether \(D(R)\) satisfies certain condition. The goal of quantum walk search on \(G\) is to obtain a marked vertex with constant probability, starting from an initial state \(\left|\psi_{0}\right\rangle\) and using update operations that comply with the graph's edges (i.e. to walk on the graph). Specifically, the quantum edge-walk on \(G\) consists of the following three operations.
**Setup.** Denote by \(S\) the Setup operation that transforms the all zero state to the initial state:
\[\ket{\psi_{0}}=S\ket{0}, \tag{3}\]
where \(\ket{\psi_{0}}\) is an equal superposition of all the edges in \(G\):
\[\ket{\psi_{0}}=\frac{1}{\sqrt{\binom{N}{r}}}\sum_{R}\ket{R}\frac{1}{\sqrt{r(N-r) }}\sum_{R\to R^{\prime}}\ket{R^{\prime}}\ket{D(R)}, \tag{4}\]
where \(R\to R^{\prime}\) means \(R^{\prime}\) is adjacent to \(R\) in \(G\), or equivalently \(|R\cap R^{\prime}|=r-1\).
**Update.** One step of quantum walk, denoted by the update operation \(U\), consists of three unitary operators:
\[U=\text{Data}\cdot\text{Swap}\cdot\text{Coin}, \tag{5}\]
where 'Coin' acting on \(\ket{R}\ket{R^{\prime}}\) is the Grover diffusion of vertices adjacent to \(\ket{R}\):
\[\text{Coin} =\sum_{R}\ket{R}\bra{R}\otimes\big{(}2\ket{\varphi(R)}\bra{\varphi (R)}-I\big{)}, \tag{6}\] \[\ket{\varphi(R)} =\frac{1}{\sqrt{r(N-r)}}\sum_{R\to R^{\prime}}\ket{R^{\prime}}, \tag{7}\]
and
\[\text{Swap}\ket{R}\ket{R^{\prime}}=\ket{R^{\prime}}\ket{R}, \tag{8}\] \[\text{Data}\ket{R^{\prime}}\ket{R}\ket{D(R)}=\ket{R^{\prime}}\ket{ R}\ket{D(R^{\prime})}. \tag{9}\]
**Checking.** The checking subroutine \(C\) adds phase shift \((-1)\) to \(\ket{R}\ket{R^{\prime}}\ket{D(R)}\), if the associated data \(\ket{D(R)}\) satisfies certain condition.
The whole process of quantum walk search on \(G\) can be formulated in the following equation
\[\ket{\psi_{k}}=(U^{\sqrt{r}}C)^{k}S\ket{0}. \tag{10}\]
The process is similar to Grover's algorithm, as \(U^{\sqrt{r}}\) can be seen as a reflection through the initial state \(\ket{\psi_{0}}\) and \(C\) as a reflection through the marked states. It should be noted that the operator \(U^{\sqrt{r}}\) used in this paper plays the role of the phase estimation combined with a selective \((-1)\) phase shift used in [39, 40]. Finally, by choosing an appropriate \(k\) and measuring \(\ket{\psi_{k}}\) in the first register, we will obtain a marked vertex with constant probability.
### Quantum vertex-walk search on Johnson graph
The quantum vertex-walk search on Johnson graph \(J(N,r)\) has state space \(\{\ket{R,y}\ket{D(R)}:R\subseteq[N],y\in[N]-R\}\).
**Setup.** The initial state is
\[\ket{\psi_{0}}=\frac{1}{\sqrt{\binom{N}{r}}}\sum_{R}\ket{R}\ket{D(R)}\frac{1} {\sqrt{N-r}}\sum_{y\in[N]-R}\ket{y}. \tag{11}\]
**Update.** One step of quantum walk \(U\) consists of two unitary operators: \(U=U_{B}(\theta_{2})U_{A}(\theta_{1})\). The first operator \(U_{A}(\theta_{1})\) acts on \(\ket{R,y}\), and can be seen as choosing a random \(y\in[N]-R\) to be moved into \(R\):
\[U_{A}(\theta_{1}) =\sum_{R}\ket{R}\bra{R}\otimes\big{(}I-(1-e^{i\theta_{1}})\ket{ \varphi(R)}\bra{\varphi(R)}\big{)}, \tag{12}\] \[\ket{\varphi(R)} =\frac{1}{\sqrt{N-r}}\sum_{y\in[N]-R}\ket{y}. \tag{13}\]
The second operator \(U_{B}(\theta_{2})\) can be seen as choosing a random \(y^{\prime}\in R\) being removed from \(R\) and at the same time moving \(y\) into \(R\). To update \(D(R)\) simultaneously, \(U_{B}(\theta_{2})\) acts on all registers, but we only define its effect on register \(|R,y\rangle\) for simplicity:
\[U_{B}(\theta_{2}) =I-(1-e^{i\theta_{2}})\sum_{R+y\subseteq[N]}\left|\varphi(R+y) \right\rangle\left\langle\varphi(R+y)\right|, \tag{14}\] \[\left|\varphi(R+y)\right\rangle =\frac{1}{\sqrt{r+1}}\sum_{y^{\prime}\in R+y}\left|R+y-y^{\prime},y^{\prime}\right\rangle. \tag{15}\]
Note that we enhance the original update operation from [33] with general phase shift \(e^{i\theta}\) instead of \((-1)\), in order to achieve dimensional reduction of the walk's invariant subspace.
**Checking.** The checking subroutine \(S_{\mathcal{M}}(\alpha)\) adds relative phase shift \(e^{i\alpha}\) to a marked state \(\left|R,y\right\rangle\left|D(R)\right\rangle\), based on whether the associated data \(\left|D(R)\right\rangle\) satisfies certain condition. In fact, \(S_{\mathcal{M}}(\alpha)\) can be implemented by \(C(I\otimes\mathrm{diag}(1,e^{i\alpha}))C\), if there is a checking subroutine \(C\) that flips an auxiliary qubit initialized with \(\left|0\right\rangle\), when the basis state is marked.
**5-dimensional invariant subspace \(\mathcal{H}_{0}\).** Suppose the Checking subroutine \(C\) satisfies the following condition:
**Condition 1**.: _There is a special subset \(K\subseteq[N]\) with \(\left|K\right|=2\), such that for a certain \((j_{0},l_{0})\in\{(j,0)\}_{j=0}^{2}\cup\{(j,1)\}_{j=0}^{1}\), the checking subroutine \(C\) marks a basis state \(\left|R,y\right\rangle\) satisfying \(\left|R\cap K\right|=j_{0}\) and \(\left|y\cap K\right|=l_{0}\)._
Then the quantum vertex-walk search on \(J(N,r)\) can be reduced to a 5-dimensional invariant subspace \(\mathcal{H}_{0}\) spanned by:
\[\mathcal{B}_{0}:=\{\left|0,0\right\rangle,\left|0,1\right\rangle,\left|1,0 \right\rangle,\left|1,1\right\rangle,\left|2,0\right\rangle\}, \tag{16}\]
where \(\left|j,l\right\rangle\in\mathcal{B}_{0}\) is the equal superposition of basis states in \(S_{j}^{l}:=\{\left|R,y\right\rangle:\left|R\cap K\right|=j,\left|y\cap K\right| =l\}\). In basis \(\mathcal{B}_{0}\), the initial state takes the following form:
\[\left|\psi_{0}\right\rangle=\frac{1}{\sqrt{\binom{N}{r}\big{(}N-r \big{)}}}\sum_{\left|j,l\right\rangle\in\mathcal{B}_{0}}\sqrt{\left|S_{j}^{l} \right|}\cdot\left|j,l\right\rangle, \tag{17}\]
and the target state is \(\left|j_{0},l_{0}\right\rangle\).
Because \(S_{j}^{l}\) can also be seen as the set of \(\left|R,y\right\rangle\) satisfying \(\left|(R+y)\cap K\right|=j+l\) and \(\left|y\cap K\right|=l\), the size of \(S_{j}^{l}\) can be calculated in two ways. For \(\{\left|j,0\right\rangle\}_{j=0}^{2}\), we have:
\[\binom{2}{j}\binom{N-2}{r-j}(N-2-(r-j))=\left|S_{j}^{0}\right|= \binom{2}{j}\binom{N-2}{r+1-j}(r+1-j), \tag{18}\]
and for \(\{\left|j,1\right\rangle\}_{j=0}^{1}\), we have:
\[\binom{2}{j}\binom{N-2}{r-j}(2-j)=\left|S_{j}^{1}\right|=\binom{2}{j+1} \binom{N-2}{r-j}(j+1). \tag{19}\]
This is depicted in Fig. 1 by the two equivalent ways (from left or from right) of calculating the weight of the middle line marked by \(\left|j,l\right\rangle\) with dashed box.
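The two counting identities (18) and (19) can also be checked numerically; the following Python sketch (an illustrative addition) uses arbitrary small values of \(N\) and \(r\).

```python
from math import comb

# Numerical check of the identities (18) and (19); any 2 <= r <= N - 3 works.
N, r = 20, 7
for j in range(3):                    # Eq. (18), i.e. l = 0
    lhs = comb(2, j) * comb(N - 2, r - j) * (N - 2 - (r - j))
    rhs = comb(2, j) * comb(N - 2, r + 1 - j) * (r + 1 - j)
    assert lhs == rhs
for j in range(2):                    # Eq. (19), i.e. l = 1
    assert comb(2, j) * comb(N - 2, r - j) * (2 - j) \
           == comb(2, j + 1) * comb(N - 2, r - j) * (j + 1)
print("Eqs. (18)-(19) verified")
```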
From Fig. 1 and the definition of \(U_{A}(\theta_{1})\) and \(U_{B}(\theta_{2})\), it can be seen that \(U=U_{B}(\theta_{2})U_{A}(\theta_{1})\) takes the following form in \(\mathcal{B}_{0}\):
\[U =(I-(1-e^{i\theta_{2}})BB^{\dagger})\cdot(I-(1-e^{i\theta_{1}}) AA^{\dagger}), \tag{20}\] \[A.^{2} =\left[\begin{array}{ccc}1-\frac{2}{N-r}&0&0\\ \frac{2}{N-r}&0&0\\ 0&1-\frac{1}{N-r}&0\\ 0&\frac{1}{N-r}&0\\ 0&0&1\end{array}\right],\quad B.^{2}=\left[\begin{array}{ccc}1&0&0\\ 0&\frac{1}{r+1}&0\\ 0&1-\frac{1}{r+1}&0\\ 0&0&\frac{2}{r+1}\\ 0&0&1-\frac{2}{r+1}\end{array}\right], \tag{21}\]
where \(A,B\) are non-negative matrices, and \(A.^{2}\) denotes the entry-wise square of \(A\).
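As a quick consistency check (an illustrative addition), one can verify numerically that the columns of \(A\) and \(B\) in Eq. (21) are orthonormal, so that \(AA^{\dagger}\) and \(BB^{\dagger}\) are projections and the operators in Eq. (20) are unitary; the values of \(N\) and \(r\) below are arbitrary.

```python
import numpy as np

# Columns of A and B from Eq. (21) are orthonormal for any valid N, r.
N, r = 30, 7
A2 = np.array([[1 - 2/(N - r), 0, 0],
               [2/(N - r),     0, 0],
               [0, 1 - 1/(N - r), 0],
               [0, 1/(N - r),     0],
               [0, 0, 1]])
B2 = np.array([[1, 0, 0],
               [0, 1/(r + 1),     0],
               [0, 1 - 1/(r + 1), 0],
               [0, 0, 2/(r + 1)],
               [0, 0, 1 - 2/(r + 1)]])
A, B = np.sqrt(A2), np.sqrt(B2)
assert np.allclose(A.T @ A, np.eye(3)) and np.allclose(B.T @ B, np.eye(3))
print("columns of A and B are orthonormal")
```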
**Lemma 2** ([22]).: _Suppose parameter \(\alpha\) satisfies the equation "\(\sin\frac{\pi}{4k+2}=\sqrt{\lambda}\sin\frac{\alpha}{2}\)", and integer \(k>k_{\mathrm{opt}}\), then_
\[\left|\left\langle\mathcal{M}\right|\left[G(\alpha,-\alpha)\right]^{k}\left| \psi_{0}\right\rangle\right|=1. \tag{27}\]
_The lower bound of number of iterations \(k\) is_
\[k_{\mathrm{opt}}=\frac{\pi}{4\arcsin\sqrt{\lambda}}-\frac{1}{2}. \tag{28}\]
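For illustration (an addition, not part of the original text), Lemma 2 can be checked numerically in the two-dimensional subspace spanned by the marked state and its orthogonal complement. The sketch below assumes \(G(\alpha,\beta)=S_{\psi_{0}}(\beta)S_{\mathcal{M}}(\alpha)\), with \(S_{\mathcal{M}}(\alpha)\) adding the phase \(e^{i\alpha}\) to the marked state and \(S_{\psi_{0}}(\beta)=e^{-i\beta\left|\psi_{0}\right\rangle\left\langle\psi_{0}\right|}\), the operators referred to elsewhere in this paper; the value of \(\lambda\) is arbitrary.

```python
import numpy as np

lam = 0.03                                  # arbitrary proportion of marked states
theta = np.arcsin(np.sqrt(lam))
k_opt = np.pi / (4 * theta) - 0.5
k = int(np.ceil(k_opt)) + 1                 # any integer k > k_opt
alpha = 2 * np.arcsin(np.sin(np.pi / (4*k + 2)) / np.sqrt(lam))

marked = np.array([1, 0], dtype=complex)
psi0 = np.array([np.sin(theta), np.cos(theta)], dtype=complex)
S_M = np.diag([np.exp(1j * alpha), 1])                       # S_M(alpha)
S_psi0 = np.eye(2) + (np.exp(1j * alpha) - 1) * np.outer(psi0, psi0.conj())
G = S_psi0 @ S_M                                             # G(alpha, -alpha)
final = np.linalg.matrix_power(G, k) @ psi0
print(abs(np.vdot(marked, final)))          # 1.0 up to numerical error
```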
If one of the parameters, for example \(\beta\), is uncontrollable but fixed and known, we can apply the following lemma from [24, Theorem 2].
**Lemma 3**.: _Suppose \(\beta\in(0,2\pi)\) is arbitrarily given, and \(k>k_{\mathrm{lower}}\), then there always exists a pair of parameters \((\alpha_{1},\alpha_{2})\) such that_
\[\left|\left\langle\mathcal{M}\right|\left[G(\alpha_{1},\beta)G(\alpha_{2}, \beta)\right]^{k}\left|\psi_{0}\right\rangle\right|=1. \tag{29}\]
_The lower bound of \(k\) is_
\[k_{\mathrm{lower}}=\frac{\pi}{\left|4\arcsin(\sqrt{\lambda}\sin\frac{\beta}{ 2})\mod[-\frac{\pi}{2},\frac{\pi}{2}]\right|}\in O(1/\sqrt{\lambda}), \tag{30}\]
_where the notation "\(x\mod[-\frac{\pi}{2},\frac{\pi}{2}]\)" means to add \(x\) with an appropriate integer multiples \(l\) of \(\pi\), such that \(x+l\pi\in[-\frac{\pi}{2},\frac{\pi}{2}]\)._
The explicit equations that \((\alpha_{1},\alpha_{2})\) need to satisfy in Lemma 3 can be found in [24].
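As a small numerical illustration of the bound (30) (an addition for illustration only), \(k_{\mathrm{lower}}\) can be evaluated directly for arbitrary \(\lambda\) and \(\beta\):

```python
import numpy as np

# Direct evaluation of Eq. (30); lambda and beta are arbitrary toy values.
lam, beta = 0.02, 1.3
x = 4 * np.arcsin(np.sqrt(lam) * np.sin(beta / 2))
x_mod = x - np.pi * np.round(x / np.pi)       # "x mod [-pi/2, pi/2]"
k_lower = np.pi / abs(x_mod)
print(k_lower)   # of order 1/sqrt(lam); any integer k > k_lower admits (alpha1, alpha2)
```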
## 3 Deterministic quantum algorithm for the triangle sum promised problem
Since our algorithm for the promised problem of triangle sum (Definition 1) is based on the derandomization of Jeffery et al.'s algorithm [39], it also has 4 layers of subroutines as described in Section 3.1. We show the derandomization of each layer in Section 3.2, and thus obtain a deterministic quantum algorithm and complete the proof of Theorem 1.
### Outline and query complexity
Recall that in the triangle sum problem, we are trying to find a triangle whose edges sum up to a given weight \(d\) modulo \(M\) in an edge-weighted graph \(G\) with \(n\) vertices, by querying its edge weight matrix \(A\in[M]^{n\times n}\) as few times as possible (through oracle \(O\) in Eq. (1)).
Our algorithm has 4 layers of subroutines: layers \(1,2,4\) are quantum walk searches on Johnson graphs, and layer 3 is a Grover search. Each layer implements the checking operation of its upper layer. Intuitively, layer 1, layer 2, and layers \((3,4)\) as a whole find, one by one, the three vertices of the target triangle \(\triangle abc:=\triangle\).
Denote by \(S_{i},U_{i},C_{i}\) the setup, update and checking operations in the \(i\)-th layer; the query complexities of implementing the respective operations using the oracle \(O\) are denoted by \(s_{i},u_{i},c_{i}\). We also denote by \(\epsilon_{i}\) the proportion of marked vertices in \(V(G_{i})\), where \(G_{i}\) is the Johnson graph in the \(i\)-th layer. For two subsets \(R_{1}\) and \(R_{2}\) of graph \(G\)'s vertex set \([n]\), denote by \(G_{R_{1},R_{2}}\) the sub-matrix of the edge weight matrix \(A\) with rows indexed by \(R_{1}\) and columns indexed by \(R_{2}\).
The 4 layers of subroutines are listed below.
1. A quantum edge-walk search on Johnson graph \(G_{1}=J(n,r_{1})\). We set \(r_{1}=n^{4/7}\) so that the total query complexity \(c_{0}\) is \(n^{9/7}\) as shown by Eq. (42) (See [39] for why setting \(r_{1}=n^{4/7}\), and \(r_{2}=n^{5/7},m=n^{3/7}\) below). A basis state \(\ket{R_{1}}\ket{R_{1}^{\prime}}\ket{D(R_{1})}\) is marked if \(|R_{1}\cap\triangle|=1\) and \(|R_{1}^{\prime}\cap\triangle|=1\). The query complexity is (with big \(O\) omitted) \[c_{0} :=s_{1}+\frac{1}{\sqrt{\epsilon_{1}}}(\sqrt{r_{1}}u_{1}+c_{1})\] (31) \[=r_{1}r_{2}+\sqrt{\frac{n}{r_{1}}}(\sqrt{r_{1}}(r_{1}+r_{2})+c_{1}),\] (32) where the value of \(s_{1}\) and \(u_{1}\) will be explained below. The checking operation \(C_{1}\) which checks if \(\ket{R_{1}}\ket{R_{1}^{\prime}}\) is marked, is implemented in layer 2 (see also Section 3.2).
2. A quantum vertex-walk search on Johnson graph \(G_{2}=J(n_{1},r_{2})\), where \(n_{1}=n-r_{1}\), and we set \(r_{2}=n^{5/7}\). A basis state \(\ket{R_{1}}\ket{R_{2},y}\ket{G_{R_{1},R_{2}}}\), where \(R_{2}\subseteq[n]-R_{1}\) and \(y\in[n]-R_{1}-R_{2}\), is marked if \(|R_{1}\cap\triangle|=1\), \(|R_{2}\cap\triangle|=1\) and \(y\notin\triangle\). To avoid the intolerable Setup cost \(s_{2}=r_{1}r_{2}\) to construct \(\ket{G_{R_{1},R_{2}}}\), nested quantum walks with quantum data structure [39] are used. That is, the data \(D(R_{1})\) associated with \(\ket{R_{1}}\) in layer 1 is the partial initial state (but \(\ket{y}\) is not initialized) of layer 2: \[\ket{D(R_{1})}=\frac{1}{\sqrt{\binom{n_{1}}{r_{2}}}}\sum_{R_{2}\subseteq[n]-R_{1}}\ket{R_{2}}\ket{G_{R_{1},R_{2}}},\] (33) where the data \(\ket{G_{R_{1},R_{2}}}\) stores \(G_{R_{1},R_{2}}\) in a \(r_{1}\times r_{2}\) matrix of registers. Therefore, to maintain this data structure, the setup cost in layer 1 is \(s_{1}=r_{1}r_{2}\), and \(u_{1}=2(r_{1}+r_{2})\) so as to implement \(D(R_{1})\mapsto D(R_{1}^{\prime})\) based on \(\ket{R_{1}^{\prime}}\ket{R_{1}}\) (see Lemma 5). Also, \(s_{2}=0\), and \(u_{2}=2r_{1}\) so as to implement \(G_{R_{1},R_{2}}\mapsto G_{R_{1},R_{2}^{\prime}}\). Thus, the query complexity of this layer is \[c_{1} =s_{2}+\frac{1}{\sqrt{\epsilon_{2}}}(\sqrt{r_{2}}u_{2}+c_{2})\] (34) \[=0+\sqrt{\frac{n}{r_{2}}}(\sqrt{r_{2}}r_{1}+c_{2}).\] (35) The checking operation \(C_{2}\) which checks if \(\ket{R_{2},y}\) is marked, is implemented in layer 3.
3. A Grover search on \([n]-R_{1}-R_{2}-y\) to find the last vertex \(c\in\triangle\), assuming \(R_{1}\cap\triangle=a\) and \(R_{2}\cap\triangle=b\). The query complexity of this layer is \[c_{2}=\sqrt{n}c_{3}.\] (36) The checking operation \(C_{3}\) which checks if \(z\in[n]-R_{1}-R_{2}-y\) is the last vertex \(c\in\triangle\), is implemented in layer 4.
4. A quantum vertex-walk search on the product Johnson graph \(G_{4}=J(r_{1},m)\times J(r_{2},m)\) with vertex set \(V(G_{4})=\{\ket{S_{1}}\ket{S_{2}}:S_{i}\subseteq R_{i},|S_{i}|=m\}\). We set \(m=(r_{1}r_{2})^{1/3}=n^{3/7}\). Vertices \(\ket{S_{1}}\ket{S_{2}}\) and \(\ket{S_{1}^{\prime}}\ket{S_{2}^{\prime}}\) are adjacent if \(|S_{i}\cap S_{i}^{\prime}|=m-1\) for \(i=1,2\). The data associated with \(\ket{S_{i}}\) is \(G_{S_{i},z}\). Assuming \(R_{1}\cap\triangle=a\), \(R_{2}\cap\triangle=b\) and \(z=c\), vertex \(\ket{S_{1}}\ket{S_{2}}\) is marked if \(a\in S_{1},b\in S_{2}\). The query complexity of this layer is \[c_{3} =s_{4}+\frac{1}{\sqrt{\epsilon_{4}}}(\sqrt{m}u_{4}+c_{4})\] (37) \[=m+\frac{\sqrt{r_{1}r_{2}}}{m}(\sqrt{m}\cdot O(1)+0).\] (38)
The checking operation \(C_{4}\) flips an auxiliary qubit if there exists \(a\in S_{1},b\in S_{2}\) such that \(A_{a,b}+A_{b,c}+A_{a,c}=d\left(\operatorname{mod}M\right)\). Since \(A_{a,b}\in G_{R_{1},R_{2}},A_{b,c}\in G_{S_{1},c},A_{a,c}\in G_{S_{2},c}\) are already stored in the data structures, \(C_{4}\) requires no oracle query.
We can now calculate the total query complexity as follows:
\[c_{3} =(r_{1}r_{2})^{1/3}=n^{3/7}, \tag{39}\] \[c_{2} =\sqrt{n}c_{3}=n^{6.5/7},\] (40) \[c_{1} =n^{1/7}(n^{6.5/7}+c_{2})=n^{7.5/7},\] (41) \[c_{0} =n^{9/7}+n^{1.5/7}(n+c_{1})=n^{9/7}. \tag{42}\]
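This exponent bookkeeping can be reproduced mechanically. The following Python sketch (an illustrative addition) tracks only the exponents of \(n\), using the rule \(n^{a}+n^{b}=\Theta(n^{\max(a,b)})\):

```python
from fractions import Fraction as F

# Exponents of n for the parameters of Section 3.1: r1 = n^{4/7}, r2 = n^{5/7},
# m = n^{3/7}.
r1, r2, m = F(4, 7), F(5, 7), F(3, 7)

def add(*exps):                  # n^a + n^b = Theta(n^{max(a,b)})
    return max(exps)

c3 = (r1 + r2) / 3                                   # m = (r1 r2)^{1/3}
c2 = F(1, 2) + c3                                    # sqrt(n) * c3
c1 = (1 - r2) / 2 + add(r2 / 2 + r1, c2)             # sqrt(n/r2)(sqrt(r2) r1 + c2)
c0 = add(r1 + r2, (1 - r1) / 2 + add(r1 / 2 + add(r1, r2), c1))
print(c3, c2, c1, c0)   # 3/7 13/14 15/14 9/7, i.e. n^{3/7}, n^{6.5/7}, n^{7.5/7}, n^{9/7}
```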
### Proof of Theorem 1
The condition that \(R_{1},R_{2},y,z\) are mutually disjoint, and the promise that there's at most one target triangle, enable us to implement the checking operation \(C_{3}\), \(C_{2}\) and then \(C_{1}\) with \(100\%\) success probability successively as shown below.
**Implementing \(C_{3}\) with 100% success probability.** We consider \(C_{3}\) implemented in layer 4 first, whose checking operation \(C_{4}\) does not rely on any other layer. The input state of this layer is \(\left|R_{1}\right\rangle\left|R_{2}\right\rangle\left|z\right\rangle\), and the quantum walk basis state is \(\left|S_{1}\right\rangle\left|S_{2}\right\rangle\) with \(S_{i}\subseteq R_{i}\). From the definition of the checking operation \(C_{4}\), we can see that only when the input state \(\left|R_{1}\right\rangle\left|R_{2}\right\rangle\left|z\right\rangle\) satisfies \(\left|R_{1}\cap\triangle\right|=1\), \(\left|R_{2}\cap\triangle\right|=1\) and \(\left|z\cap\triangle\right|=1\), does \(C_{4}\) not degenerate to the identity operator \(I\). In this special case, the quantum walk process \((U^{t_{1}}C_{4})^{t_{2}}\left|\psi_{0}\right\rangle\) can be reduced to a 9-dimensional invariant subspace as shown in Section 4.1. Setting \(t_{1}=\Theta(\sqrt{m})\) and \(t_{2}=\Theta(\sqrt{r_{1}r_{2}}/m)\), the success amplitude \(p=\left|\left\langle t\right|(U^{t_{1}}C_{4})^{t_{2}}\left|\psi_{0}\right\rangle\right|=\Omega(1)\) by Lemma 4, where the target state \(\left|t\right\rangle\) is an equal superposition of \(\left|S_{1}\right\rangle\left|S_{2}\right\rangle\) with \(a\in S_{1},b\in S_{2}\) (assuming \(a\in R_{1},b\in R_{2}\)), and \(p\) can be computed exactly beforehand. Thus combined with Lemma 2, we can obtain \(\left|t\right\rangle\) with certainty. Denote by \(\mathcal{A}_{4}\) the above process that on input state \(\left|R_{1}\right\rangle\left|R_{2}\right\rangle\left|z\right\rangle\), implements \(\left|0\right\rangle\mapsto\left|t\right\rangle\). Then applying \(C_{4}\) once more we can flip an auxiliary qubit with certainty. Thus \(\mathcal{A}_{4}^{\dagger}C_{4}\mathcal{A}_{4}\) implements \(C_{3}\) with \(100\%\) success probability, and it acts nontrivially when \(\left|R_{1}\cap\triangle\right|=1\), \(\left|R_{2}\cap\triangle\right|=1\) and \(\left|z\cap\triangle\right|=1\).
**Implementing \(C_{2}\) with 100% success probability.** The input state of layer 3 is \(\left|R_{1}\right\rangle\left|R_{2},y\right\rangle\), and the search space is \(z\in[n]-R_{1}-R_{2}-y\). Thus combined with the effect of \(C_{3}\) shown above, we can see that only when the input state \(\left|R_{1}\right\rangle\left|R_{2},y\right\rangle\) satisfies \(\left|R_{1}\cap\triangle\right|=1\), \(\left|R_{2}\cap\triangle\right|=1\) and \(y\notin\triangle\), does \(C_{3}\neq I\). In this special case, suppose \(a\in R_{1},b\in R_{2}\), then the only target \(t\in[n]-R_{1}-R_{2}-y\) is \(c\in\triangle\). Therefore, using the deterministic quantum search shown by Lemma 2, we can obtain \(\left|t\right\rangle\). Denote by \(\mathcal{A}_{3}\) the above process that on input state \(\left|R_{1}\right\rangle\left|R_{2},y\right\rangle\), implements \(\left|0\right\rangle\mapsto\left|t\right\rangle\). Then applying \(C_{3}\) once more we can flip an auxiliary qubit with certainty, and thus \(\mathcal{A}_{3}^{\dagger}C_{3}\mathcal{A}_{3}\) implements \(C_{2}\) with \(100\%\) success probability, which acts nontrivially when \(\left|R_{1}\cap\triangle\right|=1\), \(\left|R_{2}\cap\triangle\right|=1\) and \(y\notin\triangle\).
**Implementing \(C_{1}\) with 100% success probability.** Recall that we want \(C_{1}\) to act nontrivially on \(\left|R_{1}\right\rangle\left|R_{1}^{\prime}\right\rangle\left|D(R_{1})\right\rangle\) when \(\left|R_{1}\cap\triangle\right|=1\) and \(\left|R_{1}^{\prime}\cap\triangle\right|=1\). We will implement \(C_{1}\) from \(\bar{C}_{1}\), which acts nontrivially on \(\left|R_{1}\right\rangle\left|D(R_{1})\right\rangle\) when \(\left|R_{1}\cap\triangle\right|=1\). The implementation of \(C_{1}\) from \(\bar{C}_{1}\) is described in Section 4.3, which requires \(c_{1}=2u_{1}+4\bar{c}_{1}\) queries. Since \(u_{1}=n^{5/7}\ll\bar{c_{1}}=n^{7.5/7}\), we have \(c_{1}=\Theta(\bar{c}_{1})\). We now describe how to implement \(\bar{C}_{1}\). The input state is \(\left|R_{1}\right\rangle\left|D(R_{1})\right\rangle\), and the quantum walk basis state is \(\left|R_{2},y\right\rangle\) with \(R_{2}\subseteq[n]-R_{1}\), \(y\in[n]-R_{1}-R_{2}\). From the effect of \(C_{2}\) shown above, we can see that only when the input state satisfies \(\left|R_{1}\cap\triangle\right|=1\), does \(C_{2}\neq I\). In this special case, \(\left|([n]-R_{1})\cap\triangle\right|=2\), and thus the quantum walk search process satisfies Condition 1 with \(K=\triangle-R_{1}\) and \((j_{0},l_{0})=(1,0)\). Therefore, the walk can be reduced to a 2-dimensional invariant subspace by Lemma 1, and then by Lemma 3 we can obtain with certainty the target state \(\left|t\right\rangle\), which is an equal superposition
of \(\left|R_{2},y\right\rangle\) with \(\left|R_{2}\cap\triangle\right|=1\) and \(y\notin\triangle\). See Section 4.2 for more details. Denote by \(\mathcal{A}_{2}\) the above process that on input state \(\left|R_{1}\right\rangle\left|D(R_{1})\right\rangle\), implements \(\left|0\right\rangle\mapsto\left|t\right\rangle\). Then applying \(C_{2}\) once more we can flip an auxiliary qubit with certainty, and thus \(\mathcal{A}_{2}^{\dagger}C_{2}\mathcal{A}_{2}\) implements \(\bar{C}_{1}\) with \(100\%\) success probability.
**Remark 1**.: _It's worth noting that in the implementation of \(\bar{C}_{1}\) in layer 2, we cannot use Lemma 2 to obtain with certainty the target state like in the implementation of \(C_{3}\) in layer 4. This is because in the implementation of the phase shift operator \(S_{\psi_{0}}(\beta)=e^{-i\beta\left|\psi_{0}\right\rangle\left\langle\psi_{0} \right|}\) in Lemma 2, we would otherwise need to query \(r_{1}\times r_{2}\) times to construct \(\left|G_{R_{1},R_{2}}\right\rangle\) in \(\left|\psi_{0}\right\rangle\), which is intolerable since \(r_{1}\times r_{2}=n^{9/7}\) would then be multiplied by \(1/\sqrt{\epsilon_{1}}=n^{1.5/7}\) of layer 1, making the query complexity exceed \(n^{9/7}\)._
**Derandomizing layer 1 and finishing the proof of Theorem 1.** As shown above, \(C_{1}\) acts nontrivially only when basis state \(\left|R_{1}\right\rangle\left|R_{1}^{\prime}\right\rangle\left|D(R_{1})\right\rangle\) satisfies \(\left|R_{1}\cap\triangle\right|=1\) and \(\left|R_{1}^{\prime}\cap\triangle\right|=1\). Therefore, the quantum walk search on \(G_{1}=J(n,r_{1})\) can be reduced to a 10-dimensional invariant subspace, as shown in Section 4.3. Also, by setting \(t_{1}=\lfloor\frac{\pi}{2}\sqrt{2r_{1}}\rfloor\) and \(t_{2}=\lfloor\frac{\pi}{4}\sqrt{\frac{n}{3r_{1}}}\rceil\), the success amplitude \(p=\left|\langle t\right|(W^{t_{1}}C_{1})^{t_{2}}\left|\psi_{0}\right\rangle \right|=\Omega(1)\) as shown by Lemma 6, and \(p\) can be computed exactly beforehand. Thus combined with Lemma 2, we can obtain with certainty the target state \(\left|t\right\rangle\), which is an equal superposition of \(\left|R_{1}\right\rangle\) satisfying \(\left|R_{1}\cap\triangle\right|=1\).
We can now apply the deterministic (quantum walk) search \(\mathcal{A}_{2},\mathcal{A}_{3},\mathcal{A}_{4}\) of layers 2,3,4 successively and then measure all the registers. This will lead to \(\left|R_{1}\right\rangle\left|R_{2}\right\rangle\left|z\right\rangle\left|S_{1 }\right\rangle\left|S_{2}\right\rangle\) satisfying \(a\in S_{1}\), \(b\in S_{2}\) and \(z=c\). Thus from the associated data \(\left|G_{R_{1},R_{2}}\right\rangle\left|G_{S_{1},z}\right\rangle\left|G_{S_{2},z}\right\rangle\) we can find the target \(\triangle abc\) with certainty. If there's no \(a\in S_{1},b\in S_{2}\) such that \(A_{a,b}+A_{a,z}+A_{b,z}=d\), we claim that the graph \(G\) does not contain the target triangle.
## 4 Details of quantum walk search in different layers
In the following subsections, we will fill in the missing details of the three aforementioned quantum walk searches on Johnson graphs: (i) edge-walk on \(G_{1}=J(n,r_{1})\) of layer 1, (ii) vertex-walk on \(G_{2}=J(n_{1},r_{2})\) of layer 2, and (iii) vertex-walk on the product Johnson graph \(G_{4}=J(r_{1},m)\times J(r_{2},m)\) of layer 4. We start with layer 4, whose checking operation does not rely on any other layer.
### Layer 4
Suppose the input state \(\left|R_{1}\right\rangle\left|R_{2}\right\rangle\left|G_{R_{1},R_{2}}\right\rangle \left|z\right\rangle\) of this layer satisfies \(a\in R_{1},b\in R_{2},z=c\), the target state of quantum vertex-walk search on \(G_{4}=J(r_{1},m)\times J(r_{2},m)\) is an equal superposition of \(\left|S_{1}\right\rangle\left|S_{2}\right\rangle\) such that \(a\in S_{1},b\in S_{2}\).
**Setup.** The initial state is
\[\left|\psi_{0}\right\rangle=\bigotimes_{i=1}^{2}\frac{1}{\sqrt{{r_{i}\choose m }}}\sum_{S_{i}\subseteq R_{i}}\left|S_{i}\right\rangle\left|G_{S_{i},z} \right\rangle\frac{1}{\sqrt{r_{i}-m}}\sum_{z_{i}\in R_{i}-S_{i}}\left|z_{i} \right\rangle. \tag{43}\]
We first construct an equal superposition of \(\left|S_{i}\right\rangle\) s.t. \(S_{i}\subseteq R_{i}\) and \(\left|S_{i}\right|=m\) based on \(\left|R_{i}\right\rangle\), and then query the oracle for \(m\) times to construct the associated data \(\left|G_{S_{i},z}\right\rangle\) based on \(\left|S_{i}\right\rangle\left|z\right\rangle\), and finally construct an equal superposition of \(\left|z_{i}\right\rangle\) s.t. \(z_{i}\in R_{i}-S_{i}\) based on \(\left|R_{i}\right\rangle\left|S_{i}\right\rangle\).
**Update.** A step of quantum walk \(U\) consists of 2 query operations and 2 diffusion operations:
\[U=Q\cdot U_{B}\cdot Q\cdot U_{A}, \tag{44}\]
with working registers:
\[\left|z\right\rangle\bigotimes_{i=1}^{2}\left|R_{i}\right\rangle\left|S_{i} \right\rangle\left|z_{i}\right\rangle\left|G_{S_{i},z}\right\rangle\left|G_{z_{ i},z}\right\rangle. \tag{45}\]
The first diffusion operation \(U_{A}\) acts on registers \(\bigotimes_{i=1}^{2}\left|R_{i}\right\rangle\left|S_{i}\right\rangle\left|z_{ i}\right\rangle\), and can be seen as choosing a random \(z_{i}\in R_{i}-S_{i}\) to be moved into \(S_{i}\):
\[U_{A} =\sum_{R_{1},R_{2}}\sum_{S_{1}\subseteq R_{1}}\sum_{S_{2}\subseteq R_{2}}\bigotimes_{i=1}^{2}\left|R_{i},S_{i}\right\rangle\left\langle R_{i},S_{i}\right|\otimes\Big{(}2\bigotimes_{i=1}^{2}\big{(}\left|\varphi(S_{i},R_{i})\right\rangle\left\langle\varphi(S_{i},R_{i})\right|\big{)}-I\Big{)}, \tag{46}\] \[\left|\varphi(S_{i},R_{i})\right\rangle =\frac{1}{\sqrt{r_{i}-m}}\sum_{z_{i}\in R_{i}-S_{i}}\left|z_{i}\right\rangle. \tag{47}\]
The sum \(\sum_{R_{1},R_{2}}\) is over \(R_{1}\subseteq[n]\), \(R_{2}\subseteq[n]-R_{1}\), and for other \(R_{2},S_{1},S_{2}\) we define \(U_{A}\) to act trivially on \(\left|z_{1}\right\rangle\left|z_{2}\right\rangle\). The query operation \(Q\) calls the oracle \(O\) (Eq. (1)) on registers \(\left|z_{i}\right\rangle\left|z\right\rangle\left|G_{z_{i},z}\right\rangle\) for \(i=1,2\). The second diffusion operation \(U_{B}\) acts on all registers except \(\left|z\right\rangle\), and can be seen as choosing a random \(z_{i}^{\prime}\in S_{i}\) being removed from \(S_{i}\) and at the same time moving \(z_{i}\) into \(S_{i}\), while updating the associated data \(\left|G_{S_{i},z}\right\rangle\left|G_{z_{i},z}\right\rangle\) simultaneously:
\[U_{B} =\sum_{R_{1},R_{2}}\left|R_{1},R_{2}\right\rangle\left\langle R_{1 },R_{2}\right|\otimes C_{R_{1},R_{2}}, \tag{48}\] \[C_{R_{1},R_{2}} =2\bigotimes_{i=1}^{2}\left(\sum_{S_{i}+z_{i}\subseteq R_{i}} \sum_{G_{i}\in[M]^{m+1}}\left|\varphi_{S_{i}+z_{i}}^{\bar{G}_{i}}\right\rangle \left\langle\varphi_{S_{i}+z_{i}}^{\bar{G}_{i}}\right|\right)-I,\] (49) \[\left|\varphi_{S_{i}+z_{i}}^{\bar{G}_{i}}\right\rangle =\frac{1}{\sqrt{m+1}}\sum_{z_{i}^{\prime}\in S_{i}+z_{i}}\left|S _{i}^{\prime}\right\rangle\left|z_{i}^{\prime}\right\rangle\left|G_{S_{i}^{ \prime},z}\right\rangle\left|G_{z_{i}^{\prime},z}\right\rangle. \tag{50}\]
In \(\left|\varphi_{S_{i}+z_{i}}^{\bar{G}_{i}}\right\rangle\), suppose the vertical juxtaposition \([G_{S_{i},z};G_{z_{i},z}]=\bar{G}_{i}\), then for \(\left|S_{i}^{\prime}\right\rangle\left|z_{i}^{\prime}\right\rangle\) s.t. \(S_{i}^{\prime}=S_{i}-z_{i}^{\prime}+z_{i}\), its associated data \([G_{S_{i}^{\prime},z};G_{z_{i}^{\prime},z}]\) is a corresponding permutation of \(\bar{G}_{i}\): we exchange row \(z_{i}\) and \(z_{i}^{\prime}\) in \(\bar{G}_{i}\), and then sort the first \(m\) rows in the ascending order. Thus, if \(\bar{G}_{i}\) is a sub-matrix of the edge weight matrix \(A\), then \(G_{S_{i}^{\prime},z}\) is still a sub-matrix of \(A\) with rows indexed by \(S_{i}^{\prime}\) and columns indexed by \(z\).
**Checking.** The checking operation \(C_{4}\) flips an auxiliary qubit if there exists \(a\in S_{1},b\in S_{2}\) such that \(A_{a,b}+A_{b,z}+A_{a,z}=d\), based on the data \(\left|G_{S_{i},z}\right\rangle\) constructed in layer 4, and \(\left|G_{R_{1},R_{2}}\right\rangle\) constructed in layer 1. Thus no additional query is needed. Using the phase kick-back effect: \(X\left|-\right\rangle=-\left|-\right\rangle\), where \(X\) is the Pauli-X matrix and \(\left|-\right\rangle:=(\left|0\right\rangle-\left|1\right\rangle)/\sqrt{2}\), we can add \((-1)\) phase shift to the marked states.
**Invariant subspace.** Denote by \(\left|(j_{1},j_{2})-(k_{1},k_{2})\right\rangle\) the equal superposition of states in \(\{\left|S_{1},z_{1},S_{2},z_{2}\right\rangle:\left|S_{i}\cap\Delta\right|=j_{i},\left|(S_{i}+z_{i})\cap\Delta\right|=k_{i}\}\). The 9 basis states of the quantum walk's invariant subspace \(\mathcal{H}_{0}\) are illustrated in Fig. 2.
It can be seen from Fig. 2 that a step of quantum walk \(U\) takes the following matrix form in \(\mathcal{H}_{0}\).
\[W=(2BB^{\dagger}-I)(2AA^{\dagger}-I), \tag{51}\]
where non-negative matrices \(A\) and \(B\) satisfy:
\[A.^{2}=\begin{bmatrix}(1-\frac{1}{r_{1}-m})(1-\frac{1}{r_{2}-m})&0&0&0\\ \frac{1}{r_{1}-m}(1-\frac{1}{r_{2}-m})&0&0&0\\ (1-\frac{1}{r_{1}-m})\frac{1}{r_{2}-m}&0&0&0\\ \frac{1}{(r_{1}-m)(r_{2}-m)}&0&0&0\\ 0&1-\frac{1}{r_{2}-m}&0&0\\ 0&\frac{1}{r_{2}-m}&0&0\\ 0&0&1-\frac{1}{r_{1}-m}&0\\ 0&0&\frac{1}{r_{1}-m}&0\\ 0&0&0&1\end{bmatrix},\quad B.^{2}=\begin{bmatrix}1&0&0&0\\ 0&\frac{1}{m+1}&0&0\\ 0&0&\frac{1}{m+1}&0\\ 0&0&0&\frac{1}{(m+1)^{2}}\\ 0&\frac{m}{m+1}&0&0\\ 0&0&0&\frac{m}{(m+1)^{2}}\\ 0&0&\frac{m}{m+1}&0\\ 0&0&0&\frac{m}{(m+1)^{2}}\\ 0&0&0&\frac{m^{2}}{(m+1)^{2}}\end{bmatrix} \tag{52}\]
From the number of basis states in \(\left|(j_{1},j_{2})-(k_{1},k_{2})\right>\) (or weights of the 9 lines in Fig. 2), it's easy to see the initial state takes the following form in \(\mathcal{H}_{0}\) after simplification:
\[\left|\psi_{0}\right>.^{2}=\begin{bmatrix}(r_{1}-m-1)(r_{2}-m-1)\\ (r_{2}-m-1)\\ (r_{1}-m-1)\\ 1\\ m(r_{2}-m-1)\\ m\\ m(r_{1}-m-1)\\ m\\ m^{2}\end{bmatrix}\div(r_{1}r_{2}), \tag{53}\]
and the overlap with the target state \(\left|t\right>=\left|e_{9}\right>\) is \(\sqrt{\epsilon_{4}}=\frac{m}{\sqrt{r_{1}r_{2}}}\).
**Lemma 4**.: _Setting \(t_{1}=\lfloor\frac{\pi}{2}\sqrt{\frac{m}{2}}\rceil\) and \(t_{2}=\lfloor\frac{\pi}{4}\frac{\sqrt{r_{1}r_{2}}}{m}\rceil\), the success amplitude \(p=\left|\left<t\right|(W^{t_{1}}C_{4})^{t_{2}}\left|\psi_{0}\right>\right|\) satisfies \(p=1-O(\frac{m}{r_{2}}+\frac{m}{r_{1}}+\frac{1}{m})\)._
Figure 2: Illustration of the 9 basis states \(\left|(j_{1},j_{2})-(k_{1},k_{2})\right>\) of the invariant subspace \(\mathcal{H}_{0}\) of quantum walk search on \(G_{4}=J(r_{1},m)\times J(r_{2},m)\).
Proof.: Similar to [42, Lemma 2], it can be shown that \(W\) has two eigenvectors \(\ket{u_{\pm}}\) with corresponding eigenvalues \(e^{\pm i\varphi}\) such that
\[\left|\langle t|u_{\pm}\rangle\right| =\frac{1}{\sqrt{2}}+O(\frac{1}{m}+\frac{m}{r_{1}}+\frac{m}{r_{2}}), \tag{54}\] \[\varphi =\frac{2\sqrt{2}}{\sqrt{m}}(1+O(\frac{1}{\sqrt{m}}+\frac{m}{r_{1}}+\frac{m}{r_{2}})). \tag{55}\]
Similar to [42, Lemma 3], it can be shown that when \(t_{1}\varphi\approx\pi\) and \(C_{4}=2\ket{t}\bra{t}-I\) adds relative phase shift \((-1)\) to the only target basis state \(\ket{t}\) in \(\mathcal{H}_{0}\), \(W^{t_{1}}C_{4}\) has two eigenvectors \(\ket{\theta_{\pm}}\) with corresponding eigenvalues \(e^{\pm i\theta}\) such that
\[\langle t|\theta_{\pm}\rangle =\frac{1}{\sqrt{2}}+\delta,\quad\langle\psi_{0}|\theta_{\pm}\rangle=\pm\frac{i}{\sqrt{2}}+\delta, \tag{56}\] \[\theta =\frac{2m}{\sqrt{r_{1}r_{2}}}(1+\delta), \tag{57}\]
where \(\delta=O(\frac{m}{r_{2}}+\frac{m}{r_{1}}+\frac{1}{m})\). Consider \(p(t_{2})=\left|\langle t|(W^{t_{1}}C_{4})^{t_{2}}|\psi_{0}\rangle\right|\). Let \(\Pi_{\pm}:=\ket{\theta_{+}}\bra{\theta_{+}}+\ket{\theta_{-}}\bra{\theta_{-}}\), and \(\Pi_{j}\) be the projection onto the other 7 eigenvectors of \(W^{t_{1}}C_{4}\). Then \(\left\|\Pi_{\pm}\ket{\psi_{0}}\right\|=1-\delta\), and \(\left|\langle t|\Pi_{j}|\psi_{0}\rangle\right|\leq\left\|\Pi_{j}\ket{\psi_{0}}\right\|=\delta\). Therefore,
\[p(t_{2}) \geq\left|e^{it_{2}\theta}\langle t|\theta_{+}\rangle\langle\theta_{+}|\psi_{0}\rangle+e^{-it_{2}\theta}\langle t|\theta_{-}\rangle\langle\theta_{-}|\psi_{0}\rangle\right|-\sum_{j}\left|\langle t|\Pi_{j}|\psi_{0}\rangle\right| \tag{58}\] \[\geq\left|e^{it_{2}\theta}\frac{-i}{2}(1+\delta)+e^{-it_{2}\theta}\frac{i}{2}(1+\delta)\right|-7\delta \tag{59}\] \[=\sin(t_{2}\theta)-O(\delta). \tag{60}\]
Setting \(t_{2}=\lfloor\frac{\pi}{4}\frac{\sqrt{r_{1}r_{2}}}{m}\rceil\) we have \(p(t_{2})=1-\delta\) which completes the proof.
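The reduced walk of this layer is small enough to simulate directly. The following Python sketch (an illustrative addition, based on the matrices of Eqs. (51)-(53) exactly as written above) builds the \(9\times 9\) operator \(W\), applies \((W^{t_{1}}C_{4})^{t_{2}}\) to \(\left|\psi_{0}\right\rangle\), and prints the amplitude on the target \(\left|e_{9}\right\rangle\); the amplitude tends to \(1\) as \(r_{1},r_{2}\gg m\gg 1\), in line with Lemma 4. The concrete values of \(r_{1},r_{2}\) are arbitrary.

```python
import numpy as np

# Direct simulation of the reduced 9-dimensional walk of layer 4.
r1, r2 = 2000, 2000
m = round((r1 * r2) ** (1 / 3))
a1, a2, b = 1 / (r1 - m), 1 / (r2 - m), 1 / (m + 1)

A2 = np.zeros((9, 4))                                          # Eq. (52), A.^2
A2[0:4, 0] = [(1 - a1)*(1 - a2), a1*(1 - a2), (1 - a1)*a2, a1*a2]
A2[4:6, 1] = [1 - a2, a2]
A2[6:8, 2] = [1 - a1, a1]
A2[8, 3] = 1
B2 = np.zeros((9, 4))                                          # Eq. (52), B.^2
B2[0, 0] = 1
B2[[1, 4], 1] = [b, 1 - b]
B2[[2, 6], 2] = [b, 1 - b]
B2[[3, 5, 7, 8], 3] = [b*b, (1 - b)*b, b*(1 - b), (1 - b)*(1 - b)]
A, B = np.sqrt(A2), np.sqrt(B2)
W = (2*B @ B.T - np.eye(9)) @ (2*A @ A.T - np.eye(9))          # Eq. (51)

psi0 = np.sqrt(np.array([(r1-m-1)*(r2-m-1), r2-m-1, r1-m-1, 1, m*(r2-m-1), m,
                         m*(r1-m-1), m, m*m]) / (r1*r2))       # Eq. (53)
C4 = 2*np.outer(np.eye(9)[8], np.eye(9)[8]) - np.eye(9)        # 2|t><t| - I

t1 = round(np.pi/2 * np.sqrt(m/2))
t2 = round(np.pi/4 * np.sqrt(r1*r2)/m)
final = np.linalg.matrix_power(np.linalg.matrix_power(W, t1) @ C4, t2) @ psi0
print(abs(final[8]))   # success amplitude; tends to 1 as r1, r2 >> m >> 1
```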
### Layer 2
Suppose the input state \(\ket{R_{1}}\ket{D(R_{1})}\) (see Eq. (33) for \(D(R_{1})\)) of this layer satisfies \(|R_{1}\cap\triangle|=1\); then the target state of the quantum vertex-walk search on \(G_{2}=J(n_{1},r_{2})\) is an equal superposition of \(\ket{R_{2},y}\) such that \(|R_{2}\cap\triangle|=1,y\notin\triangle\).
**Setup.** The initial state is
\[\ket{\psi_{0}}=\ket{R_{1}}\ket{D(R_{1})}\frac{1}{\sqrt{n_{2}}}\sum_{y}\ket{y}, \tag{61}\]
where \(\ket{D(R_{1})}=\frac{1}{\sqrt{\binom{n_{1}}{r_{2}}}}\sum_{R_{2}\subseteq[n]-R_{1}}\ket{R_{2}}\ket{G_{R_{1},R_{2}}}\). We only need to construct an equal superposition of \(y\in[n]-R_{1}-R_{2}\) based on \(\ket{R_{1}}\ket{R_{2}}\), which requires no query.
**Update.** A step of quantum walk \(U\) consists of 2 query operations and 2 diffusion operations:
\[U=Q\cdot U_{B}(\theta_{2})\cdot Q\cdot U_{A}(\theta_{1}), \tag{62}\]
with working registers:
\[\ket{R_{1}}\ket{R_{2}}\ket{G_{R_{1},R_{2}}}\ket{y}\ket{G_{R_{1},y}}. \tag{63}\]
The first diffusion operation \(U_{A}(\theta_{1})\) acts on registers \(\left|R_{1}\right\rangle\left|R_{2}\right\rangle\left|y\right\rangle\), and can be seen as choosing a random \(y\in[n]-R_{1}-R_{2}\) to be moved in to \(R_{2}\):
\[U_{A}(\theta_{1}) =\sum_{R_{1}\cap R_{2}=\phi}\left|R_{1},R_{2}\right\rangle\left\langle R _{1},R_{2}\right|\otimes C_{R_{1},R_{2}}(\theta_{1}), \tag{64}\] \[C_{R_{1},R_{2}}(\theta_{1}) =I-(1-e^{i\theta_{1}})\left|\varphi(R_{1},R_{2})\right\rangle \left\langle\varphi(R_{1},R_{2})\right|,\] (65) \[\left|\varphi(R_{1},R_{2})\right\rangle =\frac{1}{\sqrt{n_{2}}}\sum_{y\in[n]-R_{1}-R_{2}}\left|y\right\rangle \tag{66}\]
The query operation \(Q\) calls the oracle \(O\) (Eq. (1)) for \(r_{1}\) times to update the data \(G_{R_{1},y}\) associated with \(\left|R_{1}\right\rangle\left|y\right\rangle\). The second diffusion operation \(U_{B}(\theta_{2})\) acts on all registers, and can be seen as choosing a random \(y^{\prime}\in R_{2}\) being removed from \(R_{2}\) and at the same time moving \(y\) into \(R_{2}\), while updating the associated data \(\left|G_{R_{1},R_{2}}\right\rangle\left|G_{R_{1},y}\right\rangle\) simultaneously:
\[U_{B}(\theta_{2}) =\sum_{R_{1}}\left|R_{1}\right\rangle\left\langle R_{1}\right| \otimes C_{R_{1}}(\theta_{2}), \tag{67}\] \[C_{R_{1}}(\theta_{2}) =I-(1-e^{i\theta_{2}})\sum_{R_{2}+y\subseteq[n]-R_{1}}\sum_{ \tilde{G}\in[M]^{r_{1}}\times(r_{2}+1)}\left|\varphi_{R_{2}+y}^{\tilde{G}} \right\rangle\left\langle\varphi_{R_{2}+y}^{\tilde{G}}\right|,\] (68) \[\left|\varphi_{R_{2}+y}^{\tilde{G}}\right\rangle =\frac{1}{\sqrt{r_{2}+1}}\sum_{y^{\prime}\in R_{2}+y}\left|R_{2} ^{\prime}\right\rangle\left|y^{\prime}\right\rangle\left|G_{R_{1},R_{2}^{ \prime}}\right\rangle\left|G_{R_{1},y^{\prime}}\right\rangle. \tag{69}\]
In \(\left|\varphi_{R_{2}+y}^{\tilde{G}}\right\rangle\), suppose the horizontal juxtaposition \([G_{R_{1},R_{2}},G_{R_{1},y}]\) equals \(\bar{G}\), then for \(\left|R_{2}^{\prime}\right\rangle\left|y^{\prime}\right\rangle\) s.t. \(R_{2}^{\prime}=R_{2}-y^{\prime}+y\), its associated data \([G_{R_{1},R_{2}^{\prime}},G_{R_{1},y^{\prime}}]\) is an appropriate permutation of the columns of \(\bar{G}\). This is similar to \(\left|\varphi_{S_{i}+z_{i}}^{\tilde{G}_{i}}\right\rangle\) defined in Eq. (50).
**Checking.** The checking operator \(S_{M}(\alpha)=C_{2}(I\otimes\operatorname{diag}(1,e^{i\alpha}))C_{2}\) adds phase shift \(e^{i\alpha}\) to states \(\left|R_{1}\right\rangle\left|R_{2}\right\rangle\left|G_{R_{1},R_{2}}\right\rangle \left|y\right\rangle\) which satisfies \(\left|R_{1}\cap\triangle\right|=1\), \(\left|R_{2}\cap\triangle\right|=1\) and \(y\notin\triangle\).
Assume \(\left|R_{1}\cap\triangle\right|=1\), then \(\left|([n]-R_{1})\cap\triangle\right|=2\). Therefore, the quantum vertex-walk search on \(G_{2}=J(n_{1},r_{2})\) satisfies Condition 1 with \(K=\triangle-R_{1}\) and \((j_{0},l_{0})=(1,0)\), and we can obtain the same 5-dimensional invariant subspace as shown by Fig. 1 in Section 2.2, but now \(N:=n_{1}\), \(r:=r_{2}\), \(R:=R_{2}\), and \(K:=\triangle-R_{1}\). Since the target state is now \(\left|t\right\rangle=\left|1,0\right\rangle\), the proportion of marked states becomes
\[\epsilon_{2} =\frac{2\binom{n_{1}-2}{r_{2}-1}(n_{1}-r_{2}-1)}{\binom{n_{1}}{r _{2}}(n_{1}-r_{2})} \tag{70}\] \[=2\frac{r_{2}}{n_{1}}(1-\frac{r_{2}}{n_{1}-1})=\Theta(\frac{r_{2} }{n}). \tag{71}\]
Thus by combining Lemma 1 (where now \(t\in O(\sqrt{r_{2}})\)) and Lemma 3 (let \(S_{\psi_{0}}(-\beta)=U^{t}\) and \(\lambda=\epsilon_{2}\)), we can obtain \(\left|t\right\rangle\) with certainty, and the query complexity is \(0+\sqrt{n/r_{2}}(\sqrt{r_{2}}r_{1}+c_{2})\), same as Eq. (35).
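The closed form (71) for \(\epsilon_{2}\) can be double-checked numerically (an illustrative addition; the values of \(n_{1}\) and \(r_{2}\) are arbitrary):

```python
from math import comb

# Numerical check of Eqs. (70)-(71).
n1, r2 = 200, 30
eps2 = 2 * comb(n1 - 2, r2 - 1) * (n1 - r2 - 1) / (comb(n1, r2) * (n1 - r2))
assert abs(eps2 - 2 * (r2 / n1) * (1 - r2 / (n1 - 1))) < 1e-12
print(eps2)
```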
### Layer 1
The goal of this layer is to construct an equal superposition of \(\left|R_{1}\right\rangle\left|R_{1}^{\prime}\right\rangle\) such that \(\left|R_{1}\cap\triangle\right|=1\) and \(\left|R_{1}^{\prime}\cap\triangle\right|=1\). The quantum edge-walk search on \(G_{1}=J(n,r_{1})\) is the same as in Section 2.1. We first describe how to implement the update operator in the following lemma.
**Lemma 5**.: _The 'Data' operator that transforms the data \(D(R_{1})\) to \(D(R_{1}^{\prime})\) based on \(\left|R_{1}^{\prime}\right\rangle\left|R_{1}\right\rangle\) requires \(2(r_{1}+r_{2})\) queries._
Proof.: The transformation consists of the following two steps:
\[\ket{D(R_{1})}= \frac{1}{\sqrt{\binom{n_{1}}{r_{2}}}}\sum_{R_{2}\cap R_{1}=\phi}\ket{ R_{2}}\ket{G_{R_{1},R_{2}}}\] \[U_{1,1}\mapsto \frac{1}{\sqrt{\binom{n_{1}}{r_{2}}}}\sum_{R_{2}^{\prime}\cap R_{1 }^{\prime}=\phi}\ket{R_{2}^{\prime}}\ket{G_{R_{1},R_{2}^{\prime}}} \tag{72}\] \[U_{1,2}\mapsto \frac{1}{\sqrt{\binom{n_{1}}{r_{2}}}}\sum_{R_{2}^{\prime}\cap R_{1 }^{\prime}=\phi}\ket{R_{2}^{\prime}}\ket{G_{R_{1}^{\prime},R_{2}^{\prime}}}= \ket{D(R_{1}^{\prime})}. \tag{73}\]
Specifically, let \(x_{1}:=R_{1}\setminus R_{1}^{\prime}\) and \(x_{1}^{\prime}=R_{1}^{\prime}\setminus R_{1}\), which can be calculated from \(\ket{R_{1}}\ket{R_{1}^{\prime}}\). Consider some \(R_{2}\subseteq[n]-R_{1}\). If \(x_{1}^{\prime}\notin R_{2}\), then \(U_{1,1}\) keeps \(R_{2}\) unchanged; if \(x_{1}^{\prime}\in R_{2}\), then \(U_{1,1}\) implements \(R_{2}\mapsto R_{2}^{\prime}=R_{2}-x_{1}^{\prime}+x_{1}\) and also updates the data \(\ket{G_{R_{1},R_{2}}}\) simultaneously: it clears the data \(G_{R_{1},x_{1}^{\prime}}\) in column \(x_{1}^{\prime}\), and then writes back \(G_{R_{1},x_{1}}\), and finally permutes the \(r_{2}\) columns so that the indices in \(R_{2}^{\prime}=R_{2}-x_{1}^{\prime}+x_{1}\) remain in ascending order. Therefore, \(U_{1,1}\) requires \(2r_{1}\) queries.
Operator \(U_{1,2}\) does not need to deal with separate cases, and is simpler to implement. It updates \(G_{x_{1},R_{2}^{\prime}}\) to \(G_{x_{1}^{\prime},R_{2}^{\prime}}\) in row \(x_{1}\), and then permutes the \(r_{1}\) rows so that the indices in \(R_{1}^{\prime}=R_{1}-x_{1}+x_{1}^{\prime}\) remain in ascending order. Therefore, \(U_{1,2}\) requires \(2r_{2}\) queries. The above process is illustrated in Fig. 3.
**Checking.** The checking operator \(C_{1}\) adds phase shift to \(\ket{R_{1}}\ket{R_{1}^{\prime}}\ket{D(R_{1})}\) such that \(|R_{1}\cap\triangle|=1\) and \(|R_{1}^{\prime}\cap\triangle|=1\). Suppose \(\bar{C}_{1}\) implemented in layer 2 flips an auxiliary qubit based on \(\ket{D(R_{1})}\) if \(|R_{1}\cap\triangle|=1\), then \(C_{1}\) can be implemented with cost \(c_{1}=2u_{1}+4\bar{c}_{1}\) as shown below:
1. apply \(\bar{C}_{1}\) to \(\ket{R_{1}}\ket{D(R_{1})}\) and store the result on \(\ket{b_{1}}\) initialized to \(\ket{0}\);
2. update the data \(\ket{D(R_{1})}\) to \(\ket{D(R_{1}^{\prime})}\) using Lemma 5;
3. apply \(\bar{C}_{1}\) to \(\ket{R_{1}^{\prime}}\ket{D(R_{1}^{\prime})}\) and store the result on \(\ket{b_{1}^{\prime}}\) initialized to \(\ket{0}\);
4. use phase kick-back to add phase shift \((-1)\) if \(b_{1}\wedge b_{1}^{\prime}=1\);
5. clear the registers \(\ket{b_{1}}\), \(\ket{b_{1}^{\prime}}\) and recover \(D(R_{1})\) by applying the inverse of the first 3 steps.
**Invariant subspace.** Denote by \(\ket{j,l}\) the equal superposition of states \(\ket{R_{1}}\ket{R_{1}^{\prime}}\) satisfying \(|R_{1}\cap\triangle|=j\) and \(|R_{1}^{\prime}\cap\triangle|=l\). The 10 basis states of the quantum walk's invariant subspace \(\mathcal{H}_{0}\) are illustrated in Fig. 4.
It can be seen from Fig. 4 and the definition of a step of quantum walk \(U\) which consists of 'Coin' and 'Swap' operators as shown in Section 2.1, that \(U\) takes the following matrix form \(W\) in \(\mathcal{H}_{0}\):
\[W=S(2AA^{\dagger}-I), \tag{74}\]
Figure 3: The update operator in layer 1 requires \(2r_{1}+2r_{2}\) queries as shown in Lemma 5.
where \(S=\mathrm{diag}(1,X,1,X,1,X,1)\), and non-negative matrix \(A\) satisfies
\[A.^{2}=\begin{bmatrix}1-\frac{3}{n-r}&0&0&0\\ \frac{3}{n-r}&0&0&0\\ 0&\frac{n-r-2}{r(n-r)}&0&0\\ 0&\frac{(r-1)(n-r-2)+2}{r(n-r)}&0&0\\ 0&\frac{2(r-1)}{r(n-r)}&0&0\\ 0&0&\frac{2(n-r-1)}{r(n-r)}&0\\ 0&0&\frac{(r-2)(n-r-1)+2}{r(n-r)}&0\\ 0&0&\frac{r-2}{r(n-r)}&0\\ 0&0&0&\frac{3}{r}\\ 0&0&0&1-\frac{3}{r}\end{bmatrix}. \tag{75}\]
The initial state takes the following form in \(\mathcal{H}_{0}\):
\[\left|\psi_{0}\right\rangle.^{2}=\begin{bmatrix}(n-r_{1}-1)(n-r_{1}-2)(n-r_{1}-3)\\ 3(n-r_{1}-1)(n-r_{1}-2)\\ 3(n-r_{1}-1)(n-r_{1}-2)\\ 3(n-r_{1}-1)((r_{1}-1)(n-r_{1}-2)+2)\\ 6(r_{1}-1)(n-r_{1}-1)\\ 6(r_{1}-1)(n-r_{1}-1)\\ 3(r_{1}-1)((r_{1}-2)(n-r_{1}-1)+2)\\ 3(r_{1}-1)(r_{1}-2)\\ 3(r_{1}-1)(r_{1}-2)\\ (r_{1}-1)(r_{1}-2)(r_{1}-3)\end{bmatrix}\div(n(n-1)(n-2)). \tag{76}\]
It can be seen that the overlap between \(\left|\psi_{0}\right\rangle\) and the target state \(\left|t\right\rangle=\left|e_{4}\right\rangle\) is \(\sqrt{\epsilon_{1}}=\Theta(\sqrt{r_{1}/n})\).
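As a consistency check of Eq. (76) (an illustrative addition), the ten entries indeed sum to \(n(n-1)(n-2)\), and the weight of the target state \(\left|e_{4}\right\rangle\) is close to \(3r_{1}/n\) when \(r_{1}\ll n\); the values of \(n\) and \(r\) below are arbitrary.

```python
# The ten entries of Eq. (76) sum to n(n-1)(n-2), so |psi_0> is normalized.
n, r = 10000, 193
e = [(n-r-1)*(n-r-2)*(n-r-3), 3*(n-r-1)*(n-r-2), 3*(n-r-1)*(n-r-2),
     3*(n-r-1)*((r-1)*(n-r-2)+2), 6*(r-1)*(n-r-1), 6*(r-1)*(n-r-1),
     3*(r-1)*((r-2)*(n-r-1)+2), 3*(r-1)*(r-2), 3*(r-1)*(r-2),
     (r-1)*(r-2)*(r-3)]
assert sum(e) == n*(n-1)*(n-2)
print(e[3] / sum(e), 3*r/n)   # epsilon_1 and its leading-order estimate 3 r_1 / n
```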
**Lemma 6**.: _Setting \(t_{1}=\lfloor\frac{\pi}{2}\sqrt{2r_{1}}\rceil\) and \(t_{2}=\lfloor\frac{\pi}{4}\sqrt{\frac{n}{3r_{1}}}\rceil\), the success amplitude \(p=\left|\langle t|(W^{t_{1}}C_{1})^{t_{2}}|\psi_{0}\rangle\right|\) satisfies \(p=1-O(\frac{1}{r_{1}}+\frac{r_{1}}{n})\)._
Figure 4: Illustration of the 10 basis states \(\left|j,l\right\rangle\) of the invariant subspace \(\mathcal{H}_{0}\) of quantum walk search on \(G_{1}=J(n,r_{1})\).
Proof.: Similar to [42, Lemma 2], it can be shown that \(W\) has two eigenvectors \(\ket{u_{\pm}}\) with eigenvalues \(e^{\pm i\varphi}\) such that
\[\left|\langle t|u_{\pm}\rangle\right| =\frac{1}{\sqrt{2}}+\delta, \tag{77}\] \[\varphi =\sqrt{\frac{2}{r}}(1+\delta), \tag{78}\]
where \(\delta:=O(\frac{1}{r_{1}}+\frac{r_{1}}{n})\). To make \(t_{1}\varphi\approx\pi\), we set \(t_{1}\) to be the nearest integer to \(\frac{\pi}{2}\sqrt{2r_{1}}\). Note that \(C_{1}=2\ket{t}\bra{t}-I\) adds phase shift to the only target basis state \(\ket{t}=\ket{e_{4}}\) in \(\mathcal{H}_{0}\). Thus, similar to [42, Lemma 3], it can be shown that \(W^{t_{1}}C_{1}\) has two eigenvectors \(\ket{\theta_{\pm}}\) with eigenvalues \(e^{\pm i\theta}\) such that
\[\langle t|\theta_{\pm}\rangle =\frac{1}{\sqrt{2}}+\delta,\quad\langle\psi_{0}|\theta_{\pm}\rangle=\pm\frac{i}{\sqrt{2}}+\delta, \tag{79}\] \[\theta =2\sqrt{\frac{3r}{n}}(1+\delta). \tag{80}\]
Consider \(p(t_{2})=\left|\langle t|(W^{t_{1}}C_{1})^{t_{2}}|\psi_{0}\rangle\right|\). Let \(\Pi_{\pm}:=\ket{\theta_{+}}\bra{\theta_{+}}+\ket{\theta_{-}}\bra{\theta_{-}}\), and \(\Pi_{j}\) be the projection onto the other 8 eigenvectors of \(W^{t_{1}}C_{1}\). Then \(\left\|\Pi_{\pm}\ket{\psi_{0}}\right\|=1-\delta\), and \(\left|\langle t|\Pi_{j}|\psi_{0}\rangle\right|\leq\left\|\Pi_{j}\ket{\psi_{0}}\right\|=\delta\). Therefore,
\[p(t_{2}) \geq\left|e^{it_{2}\theta}\langle t|\theta_{+}\rangle\langle\theta_{+}|\psi_{0}\rangle+e^{-it_{2}\theta}\langle t|\theta_{-}\rangle\langle\theta_{-}|\psi_{0}\rangle\right|-\sum_{j}\left|\langle t|\Pi_{j}|\psi_{0}\rangle\right| \tag{81}\] \[\geq\left|e^{it_{2}\theta}\frac{-i}{2}(1+\delta)+e^{-it_{2}\theta}\frac{i}{2}(1+\delta)\right|-8\delta \tag{82}\] \[=\sin(t_{2}\theta)-O(\delta). \tag{83}\]
Setting \(t_{2}=\lfloor\frac{\pi}{4}\sqrt{\frac{n}{3r_{1}}}\rceil\), we have \(p(t_{2})=1-\delta\), which completes the proof.
## 5 Discussions
In this paper, we have shown that there is a deterministic quantum algorithm for the triangle sum promised problem (i.e., the graph is promised to contain at most one target triangle) based on derandomization of a nested-quantum-walk-based algorithm by Jeffery et al. Our algorithm achieves the same \(O(n^{9/7})\) query complexity as the state-of-the-art bounded-error quantum algorithm, utilizing several non-trivial techniques. It may be worth further considering the following problems.
1. Does the lower bound of \(\Omega(n^{9/7}/\sqrt{\log n})\) remain unchanged for the triangle sum promised problem considered in this paper?
2. Is it still possible to design a deterministic quantum algorithm when the graph is promised to contain none or \(k>1\) target triangles?
3. Is it possible to derandomize the state-of-the-art \(O(n^{5/4})\)-query quantum algorithm for triangle finding [29] when given an additional promise?
|
2302.14675 | Flat semigroups and weighted homogeneous surface singularities | We consider numerical semigroups associated with normal weighted homogeneous
surface singularities with rational homology sphere links. We say that a
semigroup is representable if it can be realized in this way. In this article,
we study the representability of flat semigroups and prove that a numerical
semigroup is representable if and only if it can be written as a quotient of a
flat semigroup. | Zsolt Baja, Tamás László | 2023-02-28T15:37:40Z | http://arxiv.org/abs/2302.14675v1 | # Flat semigroups and weighted homogeneous surface singularities
###### Abstract.
We consider numerical semigroups associated with normal weighted homogeneous surface singularities with rational homology sphere links. We say that a semigroup is representable if it can be realized in this way.
In this article, we study the representability of flat semigroups and prove that a numerical semigroup is representable if and only if it can be written as a quotient of a flat semigroup.
Key words and phrases: numerical semigroups, Frobenius problem, weighted homogeneous surface singularities, Seifert rational homology spheres, flat semigroups.

2010 Mathematics Subject Classification: Primary 20M14, 32S05, 32S25, 32S50; Secondary 14B05.

TL is partially supported by the NKFIH Grant "Elvonal (Frontier)" KKP 144148. He is very grateful to the 'Erdos Center' for the warm hospitality and for providing him with an excellent research environment during his stay in Budapest.
## 1. Introduction
The 'flat semigroups' appeared in the same classification theme of [10]. It turns out that they serve as a base towards the characterization problem. First of all, one proves that they are representable. Moreover, for a given presentation of a flat semigroup we construct a _canonical representant_ \(M\) whose properties induce many of the properties of the semigroup. In particular, one can show that every flat semigroup is symmetric and its Frobenius number realizes a sharp upper bound considered by Brauer [1]. Furthermore, they are interesting from a singularity theoretic point of view as well, since they are semigroups associated with a special family of isolated complete intersection singularities whose equations are calculated explicitly in section 5.3. The main theorem is as follows.
**Characterization Theorem**.: _A numerical semigroup is representable if and only if it is a quotient of a flat semigroup._
The strategy of the proof is the following: first of all, one shows that the flat semigroups, as well as their quotients, are representable. For the reverse statement, we fix a negative definite Seifert rational homology sphere as a representant of a given semigroup. Then, one can perturb its Seifert invariants in such a way that the associated semigroup remains stable. On the other hand, one proves that via this perturbation we arrive at a representant whose associated semigroup can be written as a quotient of a flat semigroup.
It is worth emphasizing that the characterization theorem rephrases the representability problem completely via the language of semigroup theory:
'_how big is the set of quotients of flat semigroups in the set of numerical semigroups?_'
Further speculations on this question and a connection with the work of [10] and [21] regarding the presentation of semigroups as quotients of symmetric semigroups can be found at the end of the article.
The structure of the article is as follows. Section 2 presents the necessary ingredients and preliminaries regarding the topology of normal surface singularities and negative definite Seifert 3-manifolds. Section 3 introduces the Frobenius problem with a view towards the 'flat' classification of [12] and discusses the case of semigroups associated with weighted homogeneous surface singularities by presenting the main results of [13].
Section 4 contains some general observations regarding representable semigroups. Among other things, Lemma 4.1.5 proves that every quotient of a representable semigroup is also representable. As an application, in section 4.2 we show a representation of a semigroup with two generators by 'multiplying' the minimal embedded resolution graph of an irreducible plane curve singularity with one Puiseaux pair. Furthermore, in section 4.3 we construct monoids as bounds for a representable semigroup. This serves as the base idea for the study of flat semigroups in section 5. In this part we prove that they are representable (Theorem 5.1.3). Moreover, we construct their canonical representants and study some of their properties. In particular, in section 5.3 we give explicit equations for a family of isolated complete intersection singularities whose links are the canonical representants of a flat semigroup.
Finally, section 6 contains the needed results with respect to a perturbation of the Seifert invariants of a representant, and proves the main theorem (Theorem 6.1.6) of our work. Section 6.2 ends the paper by giving important examples and discussing a new reformulation of the representability problem.
## 2. Preliminaries
### On the topology of normal surface singularities with \(\mathbb{Q}HS^{3}\) links
#### 2.1.1. Resolution and lattices
We consider a complex normal surface singularity \((X,o)\) and let \(\pi:\widetilde{X}\to X\) be a good resolution with dual resolution graph \(\Gamma\) whose set of vertices are denoted by \(\mathcal{V}\). That is, \(\pi\) is a birational proper analytic map, which is an isomorphism outside \(o\), and \(\pi^{-1}(o)\) is a simple normal-crossing divisor. Let \(\{E_{v}\}_{v\in\mathcal{V}}\) be the irreducible components of the exceptional set \(\pi^{-1}(o)\).
The link \(M\) of \((X,o)\) is a negative definite plumbed 3-manifold associated with \(\Gamma\). The graph \(\Gamma\) encodes the non-degenerate negative definite intersection form together with the collection of genera \(\{g(E_{v})\}_{v}\). Furthermore, by a theorem of Grauert [1], any such connected negative definite plumbing graph can be realized as a dual resolution graph of some normal surface singularity.
In the sequel, in most of the cases we will assume that \(M\) is a rational homology sphere \((\mathbb{Q}HS^{3})\), or, equivalently, \(\Gamma\) is a tree and all \(E_{v}\) are rational (ie. \(g(E_{v})=0\)).
The smooth complex analytic surface \(\widetilde{X}\) is the plumbed 4-manifold associated with \(\Gamma\), with boundary \(\partial\widetilde{X}=M\). We define the lattice \(L\) as \(H_{2}(\widetilde{X},\mathbb{Z})\), endowed with the non-degenerate negative definite intersection form \(I:=(,)\). It is freely generated by the (classes of the) exceptional divisors \(E_{v}\), \(v\in\mathcal{V}\), that is, \(L=\oplus_{v\in\mathcal{V}}\mathbb{Z}\langle E_{v}\rangle\). In the homology exact sequence of the pair \((\widetilde{X},M)\) one has \(H_{2}(M,\mathbb{Z})=0\), \(H_{1}(\widetilde{X},\mathbb{Z})=0\), hence the exact sequence has the form:
\[0\to L\to H_{2}(\widetilde{X},M,\mathbb{Z})\to H_{1}(M,\mathbb{Z})\to 0. \tag{2.1.1}\]
There is a perfect pairing \(L\otimes H_{2}(\widetilde{X},M,\mathbb{Z})\to\mathbb{Z}\) defined by the Lefschetz-Poincare duality \(H_{2}(\widetilde{X},M,\mathbb{Z})\cong H^{2}(\widetilde{X},\mathbb{Z})\). Hence \(L^{\prime}:=\operatorname{Hom}(H_{2}(\widetilde{X},\mathbb{Z}),\mathbb{Z})\), the dual lattice of \(L\), can be identified
with \(H_{2}(\widetilde{X},M,\mathbb{Z})\). By (2.1.1) \(L^{\prime}/L\cong H_{1}(M,\mathbb{Z})\), which is a finite group and it will be denoted by \(H\).
Since the intersection form is non-degenerate, \(L^{\prime}\) embeds into \(L_{\mathbb{Q}}:=L\otimes\mathbb{Q}\), and it can be identified with the rational cycles \(\{l^{\prime}\in L_{\mathbb{Q}}\,:\,(l^{\prime},L)_{\mathbb{Q}}\subset\mathbb{ Z}\}\), where \((\,,\,)_{\mathbb{Q}}\) denotes the extension of the intersection form to \(L_{\mathbb{Q}}\). Hence, in the sequel we regard \(L^{\prime}\) as \(\oplus_{v\in\mathcal{V}}\mathbb{Z}\langle E_{v}^{*}\rangle\), the lattice generated by the (anti-)dual cycles \(E_{v}^{*}\in L_{\mathbb{Q}}\), \(v\in\mathcal{V}\), where \((E_{u}^{*},E_{v})_{\mathbb{Q}}=-\delta_{u,v}\) (Kronecker delta) for any \(u,v\in\mathcal{V}\).
#### 2.1.2. The Lipman cone and minimal representatives of \(H\)
For \(l^{\prime}_{1},l^{\prime}_{2}\in L^{\prime}\) with \(l^{\prime}_{i}=\sum_{v}l^{\prime}_{i,v}E_{v}\) (\(i=\{1,2\}\)) one considers a partial order relation \(l^{\prime}_{1}\geq l^{\prime}_{2}\) defined coordinatewise by \(l^{\prime}_{1,v}\geq l^{\prime}_{2,v}\) for all \(v\in\mathcal{V}\). In particular, we say that \(l^{\prime}\) is an effective cycle if \(l^{\prime}\geq 0\). We set also \(\min\{l^{\prime}_{1},l^{\prime}_{2}\}:=\sum_{v}\min\{l^{\prime}_{1,v},l^{ \prime}_{2,v}\}E_{v}\) and analogously \(\min\{F\}\) for a finite subset \(F\subset L^{\prime}\).
For a cycle \(l^{\prime}\in L^{\prime}\) we denote by \([l^{\prime}]\in H\) its class in \(H:=L^{\prime}/L\). Then the lattice \(L^{\prime}\) admits a partition parametrized by the group \(H\), where for any \(h\in H\) one sets
\[L^{\prime}_{h}=\{\ell^{\prime}\in L^{\prime}\mid[\ell^{\prime}]=h\}\subset L ^{\prime}. \tag{2.1.2}\]
Note that \(L^{\prime}_{0}=L\). Given an \(h\in H\) one can define \(r_{h}:=\sum_{v}l^{\prime}_{v}E_{v}\in L^{\prime}_{h}\) as the unique element of \(L^{\prime}_{h}\) such that \(0\leq l^{\prime}_{v}<1\).
We define the rational Lipman cone by
\[\mathcal{S}_{\mathbb{Q}}:=\{\ell^{\prime}\in L_{\mathbb{Q}}\mid(\ell^{\prime},E_{v})\leq 0\text{ for all }v\in\mathcal{V}\},\]
which is a cone generated by the duals \(E_{v}^{*}\) over \(\mathbb{Q}_{\geq 0}\). Since \(\{E_{v}^{*}\}_{v}\) have positive entries, \(\mathcal{S}_{\mathbb{Q}}\setminus\{0\}\) is in the open first quadrant. In particular, this gives us the monoid \(\mathcal{S}^{\prime}:=\mathcal{S}_{\mathbb{Q}}\cap L^{\prime}\) of anti-nef cycles in \(L^{\prime}\), which is called the Lipman cone. It is generated over \(\mathbb{Z}_{\geq 0}\) by the duals \(E_{v}^{*}\).
The Lipman cone \(\mathcal{S}^{\prime}\) also admits a natural equivariant partition \(\mathcal{S}^{\prime}_{h}=\mathcal{S}^{\prime}\cap L^{\prime}_{h}\) indexed by \(H\). Furthermore, we have the following properties:
1. if \(l^{\prime}_{1},l^{\prime}_{2}\in\mathcal{S}^{\prime}_{h}\) then \(l^{\prime}_{2}-l^{\prime}_{1}\in L\) and \(\min\{l^{\prime}_{1},l^{\prime}_{2}\}\in\mathcal{S}^{\prime}_{h}\);
2. for any \(s\in L^{\prime}\) the set \(\{l^{\prime}\in\mathcal{S}^{\prime}\mid l^{\prime}\not\geq s\}\) is finite;
3. for any \(h\in H\) there exists a unique _minimal cycle_\(s_{h}:=\min\{\mathcal{S}^{\prime}_{h}\}\in\mathcal{S}^{\prime}_{h}\) (cf. 2.1.3);
#### 2.1.3. Generalized Laufer's algorithm
[10, Lemma 7.4] We fix \(h\in H\). Then for any \(l^{\prime}\in L^{\prime}_{h}\) there exists a unique minimal element \(s(l^{\prime})\in\mathcal{S}^{\prime}_{h}\) satisfying \(s(l^{\prime})\geq l^{\prime}\) which can be obtained by the following algorithm. Set \(x_{0}:=l^{\prime}\). Then one constructs a computation sequence \(\{x_{i}\}_{i}\) as follows. If \(x_{i}\) is already constructed and \(x_{i}\not\in\mathcal{S}^{\prime}\) then there exists some \(E_{v_{i}}\) such that \((x_{i},E_{v_{i}})>0\) and we take \(x_{i+1}:=x_{i}+E_{v_{i}}\) (for some choice of \(E_{v_{i}}\)). Then the procedure after finitely many steps stops, say at \(x_{t}\), and necessarily \(x_{t}=s(l^{\prime})\).
In particular, if we start with \(l^{\prime}=E_{v}\) with an arbitrarily chosen \(v\in\mathcal{V}\) then \(s(l^{\prime})=\min\{\mathcal{S}^{\prime}\setminus\{0\}\}\) is the _minimal (or Artin's fundamental) cycle_\(Z_{min}\in L\) of the singularity [1, 1]. In this special case, the above algorithm is called the _'Laufer's algorithm targeting \(Z_{min}\)'_.
In fact, \(s(r_{h})=s_{h}\) and \(r_{h}\leq s_{h}\), however, in general \(r_{h}\neq s_{h}\). Note that this fact does not contradict the minimality of \(s_{h}\) in \(\mathcal{S}^{\prime}_{h}\) since \(r_{h}\) might not sit in \(\mathcal{S}^{\prime}_{h}\).
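The computation sequence above is straightforward to implement. The following is a minimal illustrative sketch (in Python, not part of the original argument) of Laufer's algorithm targeting \(Z_{min}\); it is run on the intersection matrix of the \(D_{4}\) graph (a central \((-2)\)-vertex with three \((-2)\)-legs), for which the fundamental cycle is \(2E_{0}+E_{1}+E_{2}+E_{3}\).

```python
def laufer_z_min(I, start=0):
    """Laufer's algorithm targeting Z_min for a negative definite
    intersection matrix I (list of lists of integers), starting from E_start."""
    n = len(I)
    x = [0] * n
    x[start] = 1
    while True:
        # (x, E_v) for every vertex v
        prods = [sum(x[u] * I[u][v] for u in range(n)) for v in range(n)]
        positive = [v for v, p in enumerate(prods) if p > 0]
        if not positive:          # x is anti-nef, hence x = Z_min
            return x
        x[positive[0]] += 1       # x_{i+1} := x_i + E_{v_i}

# D_4 graph: central vertex 0 with self-intersection -2 and three (-2)-leaves
I_D4 = [[-2, 1, 1, 1],
        [ 1, -2, 0, 0],
        [ 1, 0, -2, 0],
        [ 1, 0, 0, -2]]
print(laufer_z_min(I_D4))        # [2, 1, 1, 1]
```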
#### 2.1.4.
We consider the anti-canonical cycle \(Z_{K}\in L^{\prime}\) defined by the adjunction formulae \((Z_{K},E_{v})=e_{v}+2\) for all \(v\).
We say that the singularity \((X,o)\), or its topological type, is _numerically Gorenstein_ if \(Z_{K}\in L\). Note that the \(Z_{K}\in L\) property is independent of the resolution, since \(Z_{K}\in L\) if and only if the line bundle \(\Omega^{2}_{X\setminus\{o\}}\) of holomorphic \(2\)-forms on \(X\setminus\{o\}\) is topologically trivial. \((X,o)\) is called _Gorenstein_ if \(\Omega^{2}_{\widetilde{X}}\) (the sheaf of holomorphic \(2\)-forms) is isomorphic to \(\mathcal{O}_{\widetilde{X}}(-Z_{K})\) (or, equivalently,
if the line bundle \(\Omega^{2}_{X\setminus\{o\}}\) is holomorphically trivial). Note that the adjunction formulae deduce the following expression
\[Z_{K}-E=\sum_{v\in\mathcal{V}}(\delta_{v}-2)E_{v}^{*}, \tag{2.1.3}\]
where we denote \(E:=\sum_{v\in\mathcal{V}}E_{v}\) and \(\delta_{v}\) is the valency of the vertex \(v\).
For more details regarding this section we refer to eg. [10, 11].
### Negative definite Seifert \(3\)-manifolds
In the present work the focus will be on star-shaped dual resolution graphs. Their associated plumbed \(3\)-manifolds are negative definite Seifert \(3\)-manifolds, and they can be analytically realized by weighted homogeneous surface singularities, see section 3.3.1. Therefore, in the sequel we will provide some details and set some notations regarding the arithmetics of such graphs.
#### 2.2.1.
We consider a star-shaped resolution graph \(\Gamma\) with \(d\) legs, \(d\geq 3\). (A leg is a chain of vertices which is connected to the central vertex). Each leg is determined by the so-called normalized Seifert invariant \((\alpha_{i},\omega_{i})\), where \(0<\omega_{i}<\alpha_{i}\) and \(\gcd(\alpha_{i},\omega_{i})=1\). If we consider the Hirzebruch-Jung (negative) continued fraction expansion
\[\alpha_{i}/\omega_{i}=[b_{i1},\dots,b_{i\nu_{i}}]=b_{i1}-1/(b_{i2}-1/(\dots-1/ b_{i\nu_{i}})\dots)\quad\,(b_{ij}\geq 2),\]
the \(i^{\text{th}}\) leg has \(\nu_{i}\) vertices, say \(v_{i1},\dots,v_{i\nu_{i}}\), with Euler decorations (self-intersection numbers) \(-b_{i1},\dots,-b_{i\nu_{i}}\), and \(v_{i1}\) is connected to the central vertex \(v_{0}\). All these vertices (except \(v_{0}\)) have genus-decorations zero. It will be also useful to define \(\omega_{i}^{\prime}\) satisfying
\[\omega_{i}\omega_{i}^{\prime}\equiv 1\,(\text{mod}\,\alpha_{i}),\,\,\,\,0< \omega_{i}^{\prime}<\alpha_{i}. \tag{2.2.1}\]
One can show that \(\alpha_{i}\) is the determinant of the \(i^{th}\)-leg \(\Gamma_{i}\), \(\omega_{i}=\det(\Gamma_{i}\setminus v_{i1})\), and \(\omega_{i}^{\prime}=\det(\Gamma_{i}\setminus v_{i\nu_{i}})\).
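For concreteness, the Hirzebruch-Jung expansion and the inverse \(\omega_{i}^{\prime}\) from (2.2.1) are easily computed; the sketch below (our own helper functions, only for illustration; Python \(\geq 3.8\) for the modular inverse) recovers, for instance, \(7/5=[2,2,3]\) and \(\omega^{\prime}=3\).

```python
from math import gcd

def hj_expansion(alpha, omega):
    """Negative continued fraction alpha/omega = [b_1, ..., b_s] with all b_j >= 2."""
    assert 0 < omega < alpha and gcd(alpha, omega) == 1
    bs = []
    a, o = alpha, omega
    while o > 0:
        b = -(-a // o)            # ceil(a / o)
        bs.append(b)
        a, o = o, b * o - a       # a/o = b - 1/(next fraction)
    return bs

def omega_prime(alpha, omega):
    """The unique 0 < omega' < alpha with omega * omega' = 1 (mod alpha)."""
    return pow(omega, -1, alpha)

print(hj_expansion(7, 5), omega_prime(7, 5))   # [2, 2, 3]  3
```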
The central vertex \(v_{0}\) corresponds to the central genus \(g\) curve \(E_{0}\) with self-intersection number \(-b_{0}\). The negative definite plumbed \(3\)-manifold \(M\) associated with such a star-shaped graph has a Seifert structure and it is determined by the normalized Seifert invariants which will be denoted by
\[Sf=(-b_{0},g;(\alpha_{i},\omega_{i})_{i=1}^{d}).\]
In the sequel we will assume that \(M\) is a _(Seifert) rational homology sphere_, or, equivalently, the central node has \(g=0\). In this case, for simplicity we will omit the genus from the notations and we will simply write \(Sf=(-b_{0};(\alpha_{i},\omega_{i})_{i=1}^{d})\).
#### 2.2.2. **Key numerical invariants**
The orbifold Euler number of \(M\) is defined as \(e:=-b_{0}+\sum_{i}\omega_{i}/\alpha_{i}\). Then the negative definiteness of the intersection form is equivalent to \(e<0\).
Let \(\mathfrak{h}:=|H|\) be the order of \(H=H_{1}(M,\mathbb{Z})=L^{\prime}/L\), and let \(\mathfrak{o}\) be the order of the class \([E_{0}^{*}]\) (or of the generic \(S^{1}\) Seifert-orbit) in \(H\). Then, using the notation \(\alpha:=\text{lcm}(\alpha_{1},\dots,\alpha_{d})\), one shows that (see eg. [11])
\[\mathfrak{h}=\alpha_{1}\dotsm\alpha_{d}|e|\,\,\,\,\text{and}\,\,\,\,\mathfrak{ o}=\alpha|e|. \tag{2.2.2}\]
In particular, if \(M\) is an integral homology sphere (called Seifert homology sphere) then necessarily all \(\alpha_{i}\)'s are pairwise relatively prime and by (2.2.2) \(\alpha|e|=1\). This reads as the Diophantine equation \((b_{0}-\sum_{i}\omega_{i}/\alpha_{i})\alpha=1\), which determines all \(\omega_{i}\) and \(b_{0}\) uniquely by the \(\alpha_{i}\)'s. The corresponding Seifert homology sphere is denoted by \(\Sigma(\alpha_{1},\dots,\alpha_{d})\).
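The Diophantine equation above can be solved by reducing it modulo each \(\alpha_{i}\): since \(\alpha_{i}\) divides \(\alpha\) and every \(\alpha/\alpha_{j}\) with \(j\neq i\), one gets \(\omega_{i}\equiv-(\alpha/\alpha_{i})^{-1}\ (\mathrm{mod}\ \alpha_{i})\), and then \(b_{0}\) is determined. The following is a minimal computational sketch (our own code, assuming pairwise coprime \(\alpha_{i}\geq 2\)); for \(\Sigma(2,3,7)\) it returns \(Sf=(-1;(2,1),(3,1),(7,1))\).

```python
from math import prod

def seifert_homology_sphere(alphas):
    """Solve alpha*(b0 - sum(omega_i/alpha_i)) = 1 with 0 < omega_i < alpha_i."""
    alpha = prod(alphas)                       # = lcm, as the alpha_i are pairwise coprime
    omegas = [(-pow(alpha // a, -1, a)) % a for a in alphas]
    b0 = (1 + sum(w * (alpha // a) for a, w in zip(alphas, omegas))) // alpha
    return b0, list(zip(alphas, omegas))

print(seifert_homology_sphere([2, 3, 7]))      # (1, [(2, 1), (3, 1), (7, 1)])
```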
Next, we define the following combinatorial number
\[\gamma:=\frac{1}{|e|}\cdot\Big{(}d-2-\sum_{i=1}^{d}\frac{1}{\alpha_{i}}\Big{)} \in\mathbb{Q}, \tag{2.2.3}\]
which has a central importance regarding properties of weighted homogeneous surface singularities or Seifert rational homology spheres. It has several interpretations: it is the 'exponent' of the weighted homogeneous germ \((X,o)\); \(-\gamma\) is also called the 'log-discrepancy' of \(E_{0}\); \(\mathfrak{o}\gamma\) is usually named as the Goto-Watanabe \(a\)-invariant of the universal abelian cover of \((X,o)\), and \(e\gamma\) appears as the orbifold Euler characteristic in [10] (see also [11, 3.3.6]). Nevertheless, the most important interpretation for our purpose is the following.
In a star-shaped graph the \(E_{0}\)-coefficients of all \(E_{v}^{*}\) associated with the end-vertices are computed by \(-(E_{v}^{*},E_{0}^{*})=1/(|e|\alpha_{v})\) and the \(E_{0}\)-coefficient of \(E_{0}^{*}\) is \(-(E_{0}^{*},E_{0}^{*})=1/|e|\) (cf. [11, (11.1)]). Hence, (2.1.3) gives that the \(E_{0}\)-coefficient of \(Z_{K}\) is exactly \(\gamma+1\).
For any \(i=1,\ldots,d\) we rename the base element of the \(i^{th}\) end-vertex \(v_{i\nu_{i}}\) by \(E_{i}\) and compute the \(E_{i}\)-coefficient of \(Z_{K}\). For this one uses the identities \((E_{i}^{*},E_{j}^{*})=(e\alpha_{i}\alpha_{j})^{-1}\) for \(i\neq j\) and \((E_{i}^{*},E_{i}^{*})=(e\alpha_{i}^{2})^{-1}-\omega_{i}^{\prime}/\alpha_{i}\) if \(i=j\), cf. [11, (11.1)]. Therefore, (2.1.3) and a computation shows that
\[-(Z_{K},E_{i}^{*})=1+(\gamma-\omega_{i}^{\prime})/\alpha_{i}. \tag{2.2.4}\]
On the other hand, by [10, Lemma 2.2.1] we have the following expression
\[E_{v_{ij}}^{*}=m_{ij}E_{i}^{*}-\sum_{j<r\leq\nu_{i}}m_{ijr}E_{v_{ir}},\]
where \(m_{ij}\) and \(m_{ijr}\) are positive integers. Hence, it yields that \(-(Z_{K},E_{v_{ij}}^{*})=M_{ij}(Z_{K},-E_{i}^{*})+M_{ij}^{\prime}\) for some \(M_{ij},M_{ij}^{\prime}\in\mathbb{Z}\), which gives us the following.
**Lemma 2.2.5**.: \(\Gamma\) _is numerically Gorenstein if and only if \(\gamma\in\mathbb{Z}\) and \(\gamma\equiv\omega_{i}^{\prime}\ (\mathrm{mod}\ \alpha_{i})\) for all \(i=1,\ldots,d\)._
**Remark 2.2.6**.: Note that \(\gamma|e|=d-2-\sum_{i}1/\alpha_{i}\) is negative if and only if \(\pi_{1}(M)\) is finite, cf. [10]. This can happen only if \(d=3\) and \(\sum_{i}1/\alpha_{i}>1\), and in this case \((X,o)\) is a quotient singularity, hence rational. Therefore, if \((X,o)\) is not rational then \(\gamma\geq 0\), that is, the \(E_{0}\)-coefficient of \(Z_{K}\) is \(\geq 1\). In fact, in these cases all the coefficients of \(Z_{K}\) are strict positive, see eg. [10, 3.2.5]. Moreover, in the numerically Gorenstein non-rational case -- when we already know that \(\gamma\geq 0\) -- by the congruence from Lemma 2.2.5 we get the stronger \(\gamma\geq 1\).
## 3. Numerical semigroups and weighted homogeneous singularities
### The Frobenius problem
The Diophantine problem of Frobenius aims to find an explicit formula for the greatest integer not representable as a nonnegative linear form of a given system of \(d\) relatively prime integers \(1\leq a_{1}\leq\ldots\leq a_{d}\). The integer defined in this way is called the _Frobenius number_ of the system, or of the numerical semigroup \(G(a_{1},\ldots,a_{d})\), generated by the integers from the system itself. We will use the standard notation \(f_{G(a_{1},\ldots,a_{d})}\).
The problem is still open in full generality; however, several formulae for special systems and general bounds exist in the literature. For example, the very first result is the Sylvester formula expressing \(f_{G(a_{1},a_{2})}=a_{1}a_{2}-a_{1}-a_{2}\).
For our purpose it will be important to present some general bounds for the Frobenius number. These bounds motivated a classification of numerical semigroups due to Raczunas and Chrzastowski-Wachtel [12] which will be presented in section 3.2.
For the classical approach and different aspects of the problem the interested reader might consult the monograph of Ramirez Alfonsin [1]. Further details regarding the theory of numerical semigroups can be found in [11] and [1].
First of all, in [Ba42] the following bound was considered
\[f_{G(a_{1},\dots,a_{n})}\leq T(a_{1},\dots,a_{n}):=\sum_{i=1}^{n}\Big{(}\frac{d_{ i-1}}{d_{i}}-1\Big{)}a_{i}, \tag{3.1.1}\]
where \(d_{0}=0\) and \(d_{i}=\gcd(a_{1},\dots,a_{i})\) for all \(i\geq 1\). Moreover, in [BaSh62] the authors provided a characterization for semigroups that satisfy the equality \(f=T\) by
\[f=T\iff a_{i+1}/d_{i+1}\in G(a_{1}/d_{i},\dots,a_{i}/d_{i})\ \ \text{ for every }1\leq i\leq n-1.\]
However, it is important to emphasize that the value of \(T\), as well as the above condition, depends on the order of the generators, and in general only an appropriate permutation of them gives the equality \(f=T\).
Raczunas and Chrzastowski-Wachtel [RChW96] characterized directly a subclass of semigroups for which \(f=T\) holds and \(T\) can be expressed in a form which is independent on the permutation of the generators. These are the _flat semigroups_.
Furthermore, they considered the upper bound
\[B(a_{1},a_{2},\dots,a_{n}):=(n-1)\mathrm{lcm}(a_{1},\dots,a_{n})-\sum_{i}a_{i}, \tag{3.1.2}\]
where \(f_{G(a_{1},\dots,a_{n})}\leq T(a_{1},\dots,a_{n})\leq B(a_{1},\dots,a_{n})\) and characterized the class of semigroups for which the equality \(f=T=B\) holds. They are called strongly flat semigroups and form a subclass of the flat semigroups.
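To illustrate the chain \(f\leq T\leq B\) and the equality cases just mentioned, the following small self-contained script may be used (the brute-force Frobenius computation and the helper names are ours, added only for illustration). For the strongly flat system \((6,10,15)\) it prints \(f=T=B=29\), while for \((6,15,20)\) (flat, in the listed order of the generators) it prints \(f=T=49<B=79\).

```python
from math import gcd, lcm
from functools import reduce
from itertools import accumulate

def frobenius(gens):
    """Brute-force Frobenius number; it suffices to search below the bound B of (3.1.2)."""
    bound = (len(gens) - 1) * reduce(lcm, gens) - sum(gens)
    reach = [False] * (bound + 2)
    reach[0] = True
    for x in range(bound + 2):
        if reach[x]:
            for g in gens:
                if x + g <= bound + 1:
                    reach[x + g] = True
    gaps = [x for x in range(bound + 2) if not reach[x]]
    return max(gaps) if gaps else -1

def T(gens):
    """Brauer's bound (3.1.1) for the given order of the generators (with d_0 = 0)."""
    ds = list(accumulate(gens, gcd))          # d_i = gcd(a_1, ..., a_i)
    return -gens[0] + sum((ds[i - 1] // ds[i] - 1) * gens[i] for i in range(1, len(gens)))

def B(gens):
    return (len(gens) - 1) * reduce(lcm, gens) - sum(gens)

for gens in ([6, 10, 15], [6, 15, 20]):
    print(gens, frobenius(gens), T(gens), B(gens))
```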
In the next section we define precisely these classes and discuss the classification given in [RChW96].
### A classification of semigroups
[RChW96] Based on the decomposition of the generators, one considers four 'shades' of flatness: _strongly flat, flat, almost flat and non-flat semigroups_.
Let \(A=\{a_{1},\dots,a_{n}\}\) be a system of generators of a numerical semigroup \(\mathcal{S}\), ie. \(\gcd(a_{1},\dots,a_{n})=1\). If we consider the numbers \(q_{i}:=\gcd(a_{1},\dots,a_{i-1},a_{i+1},\dots,a_{n})\) and \(\widehat{q}_{i}:=\prod_{j\neq i}q_{j}\) for all \(i\in\{1,\dots,n\}\), then it follows that \(\gcd(q_{i},q_{j})=1\) for every \(i\neq j\). Hence \(\widehat{q}_{i}\mid a_{i}\) and we can define \(\widehat{s}_{i}:=a_{i}/\widehat{q}_{i}\) (note also that \(\gcd(\widehat{s}_{i},q_{i})=1\)). Therefore, the system of generators can be presented in the following form:
\[A=\{a_{1},\dots,a_{n}\}=\{\widehat{s}_{1}\widehat{q}_{1},\dots,\widehat{s}_{ n}\widehat{q}_{n}\}. \tag{3.2.1}\]
**Definition 3.2.2**.: The set \(A\) is
* _Strongly flat_ **(SF)** if one has \(\widehat{s}_{i}=1\) for all \(i\);
* _Flat_ **(F)** if there exists an \(i\) such that \(\widehat{s}_{i}=1\);
* _Almost flat_ **(AF)** if there exists an \(i\) such that \(q_{i}>1\) and
* _Non-flat_ **(NF)** if for all \(i\) one has \(q_{i}=1\).
We say that the numerical semigroup \(\mathcal{S}\) is strongly flat, flat, almost flat or non-flat if the corresponding condition is satisfied for the minimal set of generators.
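Definition 3.2.2 is easy to test mechanically. Below is a short sketch (our own code, for illustration only) which computes the numbers \(q_{i},\widehat{q}_{i},\widehat{s}_{i}\) for a generating set and returns its shade of flatness; as required above, it should be applied to the minimal set of generators.

```python
from math import gcd, prod

def flatness(gens):
    """Classify a generating set with gcd 1 as 'SF', 'F', 'AF' or 'NF'."""
    n = len(gens)
    q = [gcd(*(gens[:i] + gens[i + 1:])) for i in range(n)]
    q_hat = [prod(q[:i] + q[i + 1:]) for i in range(n)]
    s_hat = [gens[i] // q_hat[i] for i in range(n)]
    if all(s == 1 for s in s_hat):
        return "SF"
    if any(s == 1 for s in s_hat):
        return "F"
    if any(qi > 1 for qi in q):
        return "AF"
    return "NF"

print(flatness([6, 10, 15]))   # SF
print(flatness([6, 15, 20]))   # F
print(flatness([4, 9, 10]))    # AF  (gcd(4,10) = 2 > 1, but no s_hat_i equals 1)
print(flatness([3, 5, 7]))     # NF
```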
**Remark 3.2.3**.: Note that the full semigroup \(\mathcal{S}=\mathbb{N}\) and the semigroups minimally generated by two elements are automatically **SF**. On the other hand, we have \(\mathbf{SF}\subset\mathbf{F}\subset\mathbf{AF}\). Moreover, if one of these three conditions is satisfied for a non-minimal set of generators, then it is automatically satisfied for the minimal one too. This property does not hold for **NF**.
As we have already mentioned, the strongly flat semigroups are characterized by the equality \(f_{\mathcal{S}}=B(a_{1},\dots,a_{n})\), where \(\{a_{1},\dots,a_{n}\}\) is the minimal generating set of \(\mathcal{S}\). Moreover, using the
presentation (3.2.1) of the generators and the notation \(a:=\operatorname{lcm}(a_{1},\dots,a_{n})\) one rewrites the Frobenius number in the following form
\[f_{\mathcal{S}}=a\Big{(}n-1-\sum_{i}\frac{1}{q_{i}}\Big{)}.\]
If \(\mathcal{S}\) is flat then \(f_{\mathcal{S}}=T(b_{1},\dots,b_{n})\), where \(b_{1},\dots,b_{n}\) is an appropriate permutation of the generators \(a_{1},\dots,a_{n}\). In this case the Frobenius number is expressed in a direct form as
\[f_{\mathcal{S}}=\sum_{i}\left(q_{i}-1\right)a_{i}-\prod_{i}q_{i}, \tag{3.2.4}\]
see [10, Thm. 2.5].
### The case of semigroups associated with weighted homogeneous singularities
#### 3.3.1. Weighted homogeneous surface singularities
A normal weighted homogeneous surface singularity \((X,o)\) is defined as the germ at the origin of an affine variety \(X\) with a good \(\mathbb{C}^{*}\)-action. This means that its affine coordinate ring is \(\mathbb{Z}_{\geq 0}\)-graded: \(R_{X}=\oplus_{\ell\geq 0}R_{X,\ell}\). (In fact, every finitely generated \(\mathbb{Z}_{\geq 0}\)-graded \(\mathbb{C}\)-algebra corresponds to an affine variety with a good \(\mathbb{C}^{*}\)-action.)
One considers the set \(\mathcal{S}_{(X,o)}:=\{\ell\in\mathbb{Z}_{\geq 0}|R_{X,\ell}\neq 0\}\). It is a numerical semigroup by the grading property and it is called the _semigroup associated with \((X,o)\)_.
The minimal good resolution of \((X,o)\) has a star-shaped dual graph, and the \(\mathbb{C}^{*}\)-action of the singularity induces an \(S^{1}\)-Seifert action on the link. In particular, the link \(M\) of \((X,o)\) is a negative definite Seifert \(3\)-manifold characterized by its normalized Seifert invariants \(Sf=(-b_{0},g;(\alpha_{i},\omega_{i})_{i=1}^{d})\).
By the work of [11] the complex structure is completely recovered by the Seifert invariants and the configuration of points \(\{P_{i}:=E_{0}\cap E_{i1}\}_{i=1}^{d}\subset E_{0}\), where \(E_{0}\) is the irreducible exceptional divisor indexed by the central vertex \(v_{0}\) and \(E_{i1}\) corresponds to \(v_{i1}\). Furthermore, the graded ring of the local algebra of the singularity is given by the following formula (called the _Dolgachev-Pinkham-Demazure_ formula)
\[R_{X}=\oplus_{\ell\geq 0}R_{X,\ell}=\oplus_{\ell\geq 0}H^{0}(E_{0},\mathcal{O}_ {E_{0}}(D^{(\ell)})) \tag{3.3.1}\]
with \(D^{(\ell)}:=\ell(-E_{0}|_{E_{0}})-\sum_{i=1}^{d}\lceil\ell\omega_{i}/\alpha_{i }\rceil P_{i}\), where \(\lceil r\rceil\) denotes the smallest integer greater than or equal to \(r\).
In particular, when \(M\) is a rational homology sphere, ie. \(E_{0}\simeq\mathbb{P}^{1}\), (3.3.1) implies that \(\dim(R_{X,\ell})=\max\{0,1+N(\ell)\}\) is topological, where \(N(\ell)\) is the quasi-linear function
\[N(\ell):=\deg D^{(\ell)}=b_{0}\ell-\sum_{i=1}^{d}\Big{\lceil}\frac{\ell\omega _{i}}{\alpha_{i}}\Big{\rceil}. \tag{3.3.2}\]
Since \(-\lceil x\rceil\leq-x\) one obtains \(N(\ell)\leq|e|\ell\), hence \(N(\ell)<0\) for \(\ell<0\). This means that the semigroup \(\mathcal{S}_{(X,o)}\) can be described with the Seifert invariants as follows
\[\mathcal{S}_{(X,o)}=\{\ell\in\mathbb{Z}\ |\ N(\ell)\geq 0\}. \tag{3.3.3}\]
Since in this case \(\mathcal{S}_{(X,o)}\) is a topological invariant of \((X,o)\), we will frequently use the notations \(\mathcal{S}_{M}\), or \(\mathcal{S}_{\Gamma}\) as well, depending whether we associate it with the Seifert rational homology sphere link \(M\), or with one of its star-shaped plumbing graph \(\Gamma\).
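Formulas (3.3.2) and (3.3.3) make \(\mathcal{S}_{M}\) directly computable from the Seifert invariants. The next sketch (our own code, exact rational arithmetic via `fractions`) lists the gaps and the Frobenius number; it stops the search at \(\alpha+\gamma\), which is legitimate by Proposition 3.3.5(e) below. For \(Sf=(-1;(2,1),(3,1),(7,1))\), i.e. the Seifert homology sphere \(\Sigma(2,3,7)\), it returns the semigroup \(G(6,14,21)\) with Frobenius number 43.

```python
from fractions import Fraction
from math import ceil, lcm
from functools import reduce

def semigroup_from_seifert(b0, legs):
    """legs = [(alpha_i, omega_i), ...]; returns (gaps, Frobenius number)."""
    def N(l):
        return b0 * l - sum(-((-l * w) // a) for a, w in legs)   # ceil via floor division
    e = Fraction(-b0) + sum(Fraction(w, a) for a, w in legs)
    assert e < 0, "the graph is not negative definite"
    alpha = reduce(lcm, (a for a, _ in legs))
    gamma = (len(legs) - 2 - sum(Fraction(1, a) for a, _ in legs)) / (-e)
    bound = ceil(alpha + gamma)        # Prop. 3.3.5(e): l > alpha + gamma implies l in S
    gaps = [l for l in range(1, bound + 1) if N(l) < 0]
    return gaps, (max(gaps) if gaps else -1)

gaps, frob = semigroup_from_seifert(1, [(2, 1), (3, 1), (7, 1)])
print(frob)        # 43
print(gaps[:10])   # [1, 2, 3, 4, 5, 7, 8, 9, 10, 11] -- the first gaps of G(6, 14, 21)
```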
**Definition 3.3.4**.: In what follows we say that a numerical semigroup \(\mathcal{S}\) is _representable_ if it can be realized as a semigroup associated with a weighted homogeneous surface singularity \((X,o)\) with \(\mathbb{Q}HS^{3}\) link, or, equivalently, with a star-shaped resolution graph. Accordingly, we will say that \((X,o)\), its link \(M\), or \(\Gamma\) is a _representant_ of the numerical semigroup \(\mathcal{S}\).
Finally, in the next proposition we list some important properties of the quasi-linear function \(N(\ell)\) which will be used later.
**Proposition 3.3.5**.: _[_10_]__, [_11_, Prop. 3.2.11 & 3.2.13]_
1. \(-(\alpha-1)|e|-d\leq N(\ell)-\lceil\ell/\alpha\rceil\alpha|e|\leq-1\)_. In particular_ \(\lim_{\ell\to\infty}N(\ell)=\infty.\)__
2. _If_ \(\ell>\gamma\) _then_ \(h^{1}(E_{0},\mathcal{O}_{E_{0}}(D^{(\ell)}))=0\)_, ie._ \(N(\ell)\geq-1\)_._
3. \(N(\alpha)=\alpha(b_{0}-\sum_{i}\omega_{i}/\alpha_{i})=\alpha|e|=\mathfrak{o}>0\)_._
4. \(N(\ell+\alpha)=N(\ell)+N(\alpha)=N(\ell)+\mathfrak{o}>N(\ell)\) _for any_ \(\ell\geq 0\)_._
5. \(N(\ell)\geq 0\) _for any_ \(\ell>\alpha+\gamma\)_._
6. _If the graph is numerically Gorenstein (that is,_ \(Z_{K}\in L\)_), then_ (3.3.6) \[N(\ell)+N(\gamma-\ell)=-2\ \ \text{for any $\ell\in\mathbb{Z}$}.\]
### The Frobenius number of representable semigroups
If the star-shaped plumbing graph \(\Gamma\) of \(M\) satisfies \(b_{0}\geq d\) then the weighted homogeneous singularity \((X,o)\) supported on this topological type is minimal rational. Moreover, in this case \(\mathcal{S}_{M}=\mathbb{N}\). For the non-trivial cases the Frobenius number of \(\mathcal{S}_{M}\) can be expressed as follows:
**Theorem 3.4.1**.: _[_11_]_ _If \(b_{0}<d\) then one has_
\[f_{\mathcal{S}_{M}}=\gamma+\frac{1}{|e|}-\check{s}, \tag{3.4.2}\]
_where \(\check{s}\) denotes the \(E_{0}\)-coefficient of \(s_{[Z_{K}+E_{0}^{*}]}\), which is the unique minimal element in \(\mathcal{S}_{[Z_{K}+E_{0}^{*}]}^{\prime}\) given by the generalized Laufer's algorithm 2.1.3._
In a very important special case, when \(\Gamma\) is numerically Gorenstein (ie. \(Z_{K}\in L\)) and \(\mathfrak{o}=1\), the 'algorithmic term' \(\check{s}\) vanishes, and the corresponding semigroup is symmetric, as will be shown by the next proposition.
**Proposition 3.4.3**.: _Let \(\Gamma\) be a numerically Gorenstein graph which satisfies \(\mathfrak{o}=1\). Then \(\mathcal{S}_{\Gamma}\) is symmetric. Moreover, if \(\Gamma\) is determined by the Seifert invariants \(Sf=(-b_{0},(\alpha_{1},\omega_{1}),\ldots,(\alpha_{n},\omega_{n}))\) then the Frobenius number of \(\mathcal{S}_{\Gamma}\) simplifies to_
\[f_{\mathcal{S}_{\Gamma}}=\alpha+\gamma. \tag{3.4.4}\]
Proof.: Regarding the Frobenius number, the assumptions imply \(1/|e|=\alpha\) and \(\check{s}=0\), hence (3.4.2) immediately gives the simplified form, cf. [11, Corollary 3.2.12 or Example 6.2.4(1)].
The proof of the symmetry goes analogously as in the particular case of strongly flat semigroups presented in [11, 4.1.1]. Nevertheless, for the sake of completeness we will clarify the details here as well.
One needs to verify that \(\ell\in\mathcal{S}_{\Gamma}\) if and only if \(f_{\mathcal{S}_{\Gamma}}-\ell\not\in\mathcal{S}_{\Gamma}\) for every \(\ell\in\mathbb{Z}.\) Using the quasi-linear function \(N(\ell)\) and the expression (3.4.4) of the Frobenius number, this reads as
\[N(\ell)\geq 0\ \ \text{if and only if}\ \ \ N(\alpha+\gamma-\ell)<0\ \text{for every $\ell\in\mathbb{Z}$}. \tag{3.4.5}\]
Since \(\Gamma\) is numerically Gorenstein, by Proposition 3.3.5(f) we have \(N(\ell)+N(\gamma-\ell)=-2\). On the other hand, by part (d) of the same proposition one deduces that \(N(\alpha+\gamma-\ell)=N(\gamma-\ell)+\mathfrak{o}=N(\gamma-\ell)+1\), hence \(N(\ell)+N(\alpha+\gamma-\ell)=-1\). The fact that \(N(\ell)\) and \(N(\alpha+\gamma-\ell)\) are integers yields (3.4.5).
### Representability of strongly flat semigroups
Let \(M=\Sigma(\alpha_{1},\ldots,\alpha_{d})\) be a Seifert integral homology sphere. Thus, \(\alpha_{1},\ldots,\alpha_{d}\geq 2\) (\(d\geq 3\)) are pairwise relatively prime integers and both \(b_{0}\) and \((\omega_{1},\ldots,\omega_{d})\) are uniquely determined by the Diophantine equation \(\alpha(b_{0}-\sum_{i=1}^{d}\omega_{i}/\alpha_{i})=1\).
If we consider the integers \(a_{i}:=\alpha/\alpha_{i}\) then the greatest common divisor of \(a_{1},\ldots,a_{i-1},a_{i+1},\ldots,a_{d}\) is \(\alpha_{i}\), hence the system \(\{a_{i}\}_{i=1}^{d}\) generates a strongly flat semigroup \(G(a_{1},\ldots,a_{d})\). In fact, in this case \(\mathcal{S}_{M}=G(a_{1},\ldots,a_{d})\) and one has the next theorem.
**Theorem 3.5.1**.: _[_10_]_ _The strongly flat semigroups with at least three generators are representable. They can be realized as numerical semigroups associated with Seifert integral homology spheres \(\Sigma(\alpha_{1},\ldots,\alpha_{d})\)._
In particular, the theorem implies that strongly flat semigroups (with \(d\geq 3\)) are semigroups associated with numerically Gorenstein graphs with \(\mathfrak{o}=1\). Consequently, they are symmetric and their Frobenius number is expressed as \(f_{G(a_{1},\ldots,a_{n})}=\alpha+\gamma\). Its identification with the bound \(B(a_{1},\ldots,a_{d})\) can be seen using (3.1.2), (2.2.2) and by noticing that in this case \(\operatorname{lcm}(a_{1},\ldots,a_{d})=\alpha\).
Since the representation of a semigroup is not unique (see Remark 4.1.3), we say that a Seifert integral homology sphere is the _canonical representant_ of its associated semigroup.
## 4. On representable semigroups
Let \(\Gamma\) be a star-shaped minimal good resolution graph whose plumbed \(3\)-manifold is a rational homology sphere (cf. 2.2.1), and consider the numerical semigroup \(S_{\Gamma}\) determined by the graph via the quasi-linear function \(N(\ell)\).
In what follows, we define new operations on such graphs which will help us to derive some properties of their associated numerical semigroups.
### Operations on star-shaped graphs
Let \(\Gamma_{1}\) and \(\Gamma_{2}\) be two star-shaped graphs with Seifert invariants
\[Sf_{1}=(-b_{0},(\alpha_{i},\omega_{i})_{i=1}^{m})\quad\text{and}\quad Sf_{2}= \left(-c_{0},(\beta_{j},w_{j})_{j=1}^{n}\right).\]
The first operation we consider is the addition of graphs in the following sense: we define \(\Gamma:=\Gamma_{1}+\Gamma_{2}\) as the plumbing graph determined by the Seifert invariants
\[Sf=\left(-b_{0}-c_{0},(\alpha_{i},\omega_{i})_{i=1}^{m},(\beta_{j},w_{j})_{j= 1}^{n}\right).\]
In other words, we glue all the legs of \(\Gamma_{1}\) and \(\Gamma_{2}\) to a central node with decoration the sum of the original ones.
Note that if \(e_{1},e_{2}\) and \(e\) are the orbifold Euler numbers of \(\Gamma_{1},\Gamma_{2}\) and \(\Gamma\) respectively, then \(e=e_{1}+e_{2}<0\). Hence the addition is well defined with respect to the negative definiteness.
Now, we examine this addition from the perspective of the associated numerical semigroups as well. Denote the quasi-linear functions associated with the plumbing graphs \(\Gamma_{1},\Gamma_{2}\) and \(\Gamma\) by \(N_{1},N_{2}\) and \(N\). Then the above construction yields \(N(l)=N_{1}(l)+N_{2}(l)\).
This allows us to give an upper and a lower bound for the numerical semigroup \(\mathcal{S}_{\Gamma}\) in terms of \(\mathcal{S}_{\Gamma_{1}}\) and \(\mathcal{S}_{\Gamma_{2}}\), which will be presented in the following lemma. Before that we need one additional notation: for two sets \(A,B\subset\mathbb{N}\) we denote their sum by \(A+B:=\{a+b:a\in A,\,b\in B\}\). In particular, if \(A,B\) are numerical semigroups then \(A+B\) is so.
**Lemma 4.1.1**.: _If \(\Gamma_{1}\) and \(\Gamma_{2}\) are minimal good resolution graphs of weighted homogeneous surface singularities with \(\mathbb{Q}HS^{3}\) link, then_
\[\mathcal{S}_{\Gamma_{1}}\cap\mathcal{S}_{\Gamma_{2}}\subset\mathcal{S}_{\Gamma _{1}+\Gamma_{2}}\subset\mathcal{S}_{\Gamma_{1}}+\mathcal{S}_{\Gamma_{2}}. \tag{4.1.2}\]
Proof.: First of all, the quasi-linear function associated with \(\Gamma_{1}+\Gamma_{2}\) is \(N_{1}+N_{2}\), hence one shows that
\[\mathcal{S}_{\Gamma_{1}+\Gamma_{2}}=\{\ell\in\mathbb{Z}_{\geq 0}:N_{1}(\ell)+N_{2} (\ell)\geq 0\}\supset\{\ell\in\mathbb{Z}_{\geq 0}:N_{1}(\ell)\geq 0\text{ and }N_{2}(\ell)\geq 0\}= \mathcal{S}_{\Gamma_{1}}\cap\mathcal{S}_{\Gamma_{2}}.\]
For the upper bound, note that we can write
\[\mathcal{S}_{\Gamma_{1}}+\mathcal{S}_{\Gamma_{2}}=\langle\mathcal{S}_{\Gamma_ {1}}\cup\mathcal{S}_{\Gamma_{2}}\rangle=\langle\{\ell\in\mathbb{Z}_{\geq 0}:N_{1}( \ell)\geq 0\text{ or }N_{2}(\ell)\geq 0\}\rangle,\]
which clearly implies the inclusion \(\mathcal{S}_{\Gamma_{1}+\Gamma_{2}}\subset\mathcal{S}_{\Gamma_{1}}+\mathcal{ S}_{\Gamma_{2}}\).
#### 4.1.1.
In the particular case when \(\mathcal{S}_{\Gamma_{1}}=\mathcal{S}_{\Gamma_{2}}\), we get identity in (4.1.2) implying that \(\mathcal{S}_{2\Gamma_{1}}:=\mathcal{S}_{\Gamma_{1}+\Gamma_{1}}=\mathcal{S}_{ \Gamma_{1}}\). More generally, for an arbitrary graph \(\Gamma\) and \(k\in\mathbb{N}^{*}\), we get \(\mathcal{S}_{k\Gamma}=\mathcal{S}_{\Gamma}\), where \(k\Gamma\) is defined as
\[k\Gamma:=\underbrace{\Gamma+\Gamma+\cdots+\Gamma}_{k-\text{times}}.\]
**Remark 4.1.3**.: Notice that this also shows that the realization of a numerical semigroup \(\mathcal{S}\) as the semigroup associated with a star-shaped resolution graph is not unique. Moreover, for a given numerical semigroup one can construct infinitely many different representants which do not share immediate topological properties. In particular, the semigroup does not characterize completely the topological type of the singularity.
**Definition 4.1.4**.: Let \(\mathcal{S}\) be a numerical semigroup and \(k\in\mathbb{N}^{*}\). Then the set
\[\mathcal{S}/k:=\{\ell\in\mathbb{Z}_{\geq 0}:k\ell\in\mathcal{S}\}\]
is a numerical semigroup which is called _the quotient of the semigroup \(\mathcal{S}\)_ by \(k\).
Next we analyze this quotient construction from the perspective of the quasi-linear function.
We consider a representable semigroup \(\mathcal{S}\) and let \(N(\ell)\) be the associated quasi-linear function and \(k\in\mathbb{N}^{*}\). We set \(N^{(k)}(l):=N(kl)\). Then the semigroup associated with \(N^{(k)}\), given by
\[\{\ell\in\mathbb{Z}_{\geq 0}:N(k\ell)\geq 0\}=\{\ell\in\mathbb{N}:k\ell\in \mathcal{S}_{N}\}=\mathcal{S}/k,\]
is exactly the quotient of \(\mathcal{S}\) by \(k\). Moreover, one can prove that this quotient is also representable.
**Lemma 4.1.5**.: _The semigroup \(\mathcal{S}/k\) is representable for any representable semigroup \(\mathcal{S}\) and \(k\in\mathbb{N}^{*}\)._
Proof.: We represent \(\mathcal{S}\) by a normal weighted homogeneous surface singularity \((X,o)\) with minimal good dual resolution graph \(\Gamma\) whose central vertex is denoted by \(E_{0}\). Then the local algebra is expressed as \(R_{X}=\oplus_{\ell\geq 0}H^{0}(E_{0},\mathcal{O}_{E_{0}}(D^{(\ell)}))\), see (3.3.1) and the definition of \(D^{(\ell)}\) therein. Hence the algebra \(R_{X}^{(k)}:=\oplus_{\ell\geq 0}H^{0}(E_{0},\mathcal{O}_{E_{0}}(D^{(k\ell)}))\) is also normal and therefore it corresponds to a normal weighted homogeneous surface singularity. Moreover, the associated semigroup is exactly \(\mathcal{S}/k\).
**Remark 4.1.6**.: For the previous lemma one can give a combinatorial proof as well, which constructs explicitly a graph representant \(\Gamma^{(k)}\) of \(\mathcal{S}/k\) from \(\Gamma\) as follows.
We fix \(k\geq 1\) and assume that a representation \(\Gamma\) of \(\mathcal{S}\) has Seifert invariants \(Sf=(-b_{0},(\alpha_{i},\omega_{i})_{i=1}^{n})\). Then the induced quasi-linear function of the quotient \(\mathcal{S}/k\) is expressed as
\[N^{(k)}(\ell)=N(k\ell)=b_{0}k\ell-\sum_{i=1}^{n}\Big{\lceil}\frac{k\ell\omega_ {i}}{\alpha_{i}}\Big{\rceil}.\]
For every \(i=1,\ldots,n\) we consider \(0\leq r_{i}<\alpha_{i}\) satisfying \(k\omega_{i}\equiv r_{i}\pmod{\alpha_{i}}\). Then one gets \(N^{(k)}(\ell)=(k|e|+\sum_{i=1}^{n}r_{i}/\alpha_{i})\ell-\sum_{i=1}^{n}\lceil\ell r_{i}/\alpha_{i}\rceil\). First of all, notice that \(k|e|+\sum_{i=1}^{n}r_{i}/\alpha_{i}\in\mathbb{Z}_{>0}\) and the new orbifold Euler number \(e^{(k)}=ke\) is negative since \(e<0\). Hence, we can associate with \(N^{(k)}\) a negative definite star-shaped graph \(\Gamma^{(k)}\) with Seifert invariants \(Sf^{(k)}=(ke-\sum_{i=1}^{n}r_{i}/\alpha_{i},(\alpha_{i},r_{i})_{i=1}^{n})\), which represents the quotient semigroup \(\mathcal{S}/k\).
**Example 4.1.7**.: As an illustration of the previous construction we consider the semigroup \(G(3,5)\). We claim that it can be represented by the graph on the left hand side of Figure 1 (see section 4.2.2). Therefore, by Remark 4.1.6 the associated quasi-linear function of the quotient semigroup \(G(3,5)/2\) is written as \(N^{(2)}(\ell)=2\ell-2\lceil\ell/5\rceil-2\lceil 2\ell/3\rceil\) which provides the graph drawn on the right hand side of Figure 1. Moreover, since the set of gaps of this quotient is \(\{1,2\}\), we have \(G(3,5)/2=G(3,4,5)\).
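The construction of Remark 4.1.6 and the computation in Example 4.1.7 can be checked directly. The following sketch (our own code, exact arithmetic with `fractions`, assuming all \(r_{i}\neq 0\)) starts from the Seifert data \(Sf=(-2,2\times(3,1),2\times(5,3))\) representing \(G(3,5)\) (cf. section 4.2.2), and produces the quotient data for \(k=2\) together with its gaps \(\{1,2\}\).

```python
from fractions import Fraction

def quotient_seifert(b0, legs, k):
    """Seifert data of Gamma^(k) representing S/k, following Remark 4.1.6."""
    rs = [(k * w) % a for a, w in legs]
    e = Fraction(-b0) + sum(Fraction(w, a) for a, w in legs)
    new_b0 = -k * e + sum(Fraction(r, a) for (a, _), r in zip(legs, rs))
    assert new_b0.denominator == 1
    return int(new_b0), [(a, r) for (a, _), r in zip(legs, rs)]

legs = [(3, 1), (3, 1), (5, 3), (5, 3)]              # a representant of G(3,5)
nb0, nlegs = quotient_seifert(2, legs, 2)
print(nb0, nlegs)                                    # 2 [(3, 2), (3, 2), (5, 1), (5, 1)]

N2 = lambda l: nb0 * l - sum(-((-l * w) // a) for a, w in nlegs)
print([l for l in range(1, 30) if N2(l) < 0])        # gaps of G(3,5)/2 = G(3,4,5): [1, 2]
```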
### A representation of \(G(p,q)\)
A consequence of the previous section is that the strongly flat semigroups \(G(p,q)\) generated by two elements are representable. The idea which will be presented in the sequel is that \(G(p,q)\) is a semigroup of an irreducible plane curve singularity with one Puiseaux pair \((p,q)\) and then any \(k\geq 2\)-multiple of the associated minimal embedded resolution graph gives a representant for this semigroup.
#### 4.2.1. **A short detour: plane curve singularities.**
We will recall some classical facts from the theory of plane curve singularities which will be used in the sequel. More details can be found in general references such as [1, 20].
Let \((C,0)\subset(\mathbb{C}^{2},0)\) be an irreducible plane curve singularity defined by the analytic germ \(f:(\mathbb{C}^{2},0)\to(\mathbb{C},0)\). According to the general theory, \((C,0)\subset(\mathbb{C}^{2},0)\) admits a minimal good embedded resolution whose associated graph \(\Gamma_{f}\) is a connected, negative definite tree with an extra arrow representing the strict transform of \(C\), and has a very special shape.
The link of an irreducible plane curve singularity is an algebraic knot \(K\subset S^{3}\), whose isotopy type can be completely characterized by any of the following invariants: embedded resolution graph, semigroup of \((C,0)\), Puiseaux pairs, linking pairs or the Alexander polynomial of the knot \(K\subset S^{3}\).
For our purpose, it will be important to discuss in little more details the Puiseaux/linking pairs and the semigroup of the singularity.
The Puiseaux (or Newton) pairs are pairs of integers \(\{(p_{i},q_{i})\}_{i=1}^{r}\), where \(p_{i}\geq 2,q_{i}\geq 1\), \(q_{1}>p_{1}\) and \(\gcd(p_{i},q_{i})=1\). They are the exponents appearing in the normal form of \(f\), cf. [20].
From topological point of view, sometimes it is more convenient to consider the linking pairs \(\{(p_{i},a_{i})\}_{i=1}^{r}\), which can be defined using the Puiseaux pairs by the following recursive identities:
\[a_{1}:=q_{1}\quad\text{ and }\quad a_{i+1}:=q_{i+1}+a_{i}p_{i}p_{i+1}\,\text{ for }i\geq 1.\]
The decorations of the graph \(\Gamma_{f}\) can be determined with the Puiseaux (or linking) pairs, see eg. [21]. The details of this process will be discussed in the next section only in the needed special case when there is exactly one Puiseaux pair.
Figure 1. A representation of \(G(3,5)\) and \(G(3,4,5)\)
Another important invariant of an irreducible plane curve singularity is its numerical semigroup \(\mathcal{S}_{f}\). This is defined as the set of intersection multiplicities of \(f\) with all possible analytic germs. Although the definition is analytic, one can describe \(S_{f}\) combinatorially by expressing its minimal set of generators using eg. the linking pairs \((p_{i},a_{i})_{i=1}^{r}\):
\[\mathcal{S}_{f}=G(\{p_{1}\cdot\ldots\cdot p_{r},a_{i}p_{i}\ldots p_{r}:1\leq i \leq r\}). \tag{4.2.1}\]
#### 4.2.2. The case of one Puiseaux pair
Now assume that \(r=1\), ie. \((C,0)\subset(\mathbb{C}^{2},0)\) has exactly one Puiseaux pair \((p,q)\). In this case the Puiseaux pair coincides with the linking pair and the normal form of the defining equation is \(x^{p}+y^{q}=0\).
Figure 2 presents the shape of \(\Gamma_{f}\), where the decorations can be determined from \((p,q)\) as follows. We introduce the numbers \(0<\omega_{p}<p\) and \(0<\omega_{q}<q\) uniquely determined by the Diophantine equation \(pq-\omega_{p}q-\omega_{q}p=1\). Then the negative continued fractions \(p/\omega_{p}=[u_{1},\ldots,u_{k}]\) and \(q/\omega_{q}:=[v_{1},\ldots,v_{l}]\) give the corresponding decorations of \(\Gamma_{f}\).
By [10], the numerical semigroup associated with \((C,0)\) can be presented as
\[\mathcal{S}_{f}=\{\ell\in\mathbb{Z}_{\geq 0}:N(\ell)\geq 0\}, \tag{4.2.2}\]
where \(N(\ell):=\ell-\lceil\omega_{p}\ell/p\rceil-\lceil\omega_{q}\ell/q\rceil\) is the quasi-linear function associated with a Seifert structure \(Sf=(-1,(p,\omega_{p}),(q,\omega_{q}))\) of \(S^{3}\), represented by the negative definite plumbing graph \(\widetilde{\Gamma}_{f}:=\Gamma_{f}\setminus\{\text{arrow}\}\). Furthermore, one has \(\mathcal{S}_{f}=G(p,q)\) (cf. (4.2.1)) which leads the following result.
**Theorem 4.2.3**.: _The numerical semigroup \(G(p,q)\) is representable._
Proof.: From the previous discussion one knows that \(G(p,q)=\{\ell\in\mathbb{Z}_{\geq 0}:N(\ell)\geq 0\}\) where \(N(\ell):=\ell-\lceil\omega_{p}\ell/p\rceil-\lceil\omega_{q}\ell/q\rceil\). Moreover, by 4.1.1, \(G(p,q)\) equals the semigroup \(\{\ell\in\mathbb{Z}_{\geq 0}:kN(\ell)\geq 0\}\) for any integer \(k\geq 1\). Hence, for an arbitrary \(k\geq 2\), \(G(p,q)\) is realized as the semigroup associated with the negative definite star-shaped graph \(k\widetilde{\Gamma}_{f}\), defined by the Seifert invariants \(Sf=(-k,k\times(p,\omega_{p}),k\times(q,\omega_{q}))\). (Here the notation means that \((p,\omega_{p})\), as well as \((q,\omega_{q})\), appears \(k\) times.)
**Remark 4.2.4**.: Now, by Theorem 3.5.1 and Theorem 4.2.3, the representability of strongly flat semigroups is completely clarified.
### Bounds and representable semigroups
In this section we will give 'bounds' for representable semigroups. It will be crucial in the next section for the characterization of a special subclass of semigroups realizing these bounds.
Let \(\Gamma\) be a star-shaped resolution graph with Seifert invariants
\[Sf=\big{(}-b_{0},\underbrace{(\alpha_{1},\omega_{1}),\ldots,(\alpha_{1},\omega _{1})}_{s_{1}},\ldots,\underbrace{(\alpha_{n},\omega_{n}),\ldots,(\alpha_{n}, \omega_{n})}_{s_{n}}\big{)}, \tag{4.3.1}\]
Figure 2.
where \((\alpha_{i},\omega_{i})\neq(\alpha_{j},\omega_{j})\) for different indices \(i\neq j\). Here \(s_{i}\) stands for the 'multiplicity' of the \((\alpha_{i},\omega_{i})\)-type leg. Then, regarding its associated numerical semigroup \(\mathcal{S}_{\Gamma}\) we obtain the following result.
**Theorem 4.3.2**.: _One has the following inclusions_
\[G(\alpha,s_{1}\alpha_{-1},\dots,s_{n}\alpha_{-n})\subset\mathcal{S}_{\Gamma} \subset G(\alpha,s_{1}\alpha_{1}^{*},\dots,s_{n}\alpha_{n}^{*})/\mathfrak{o}, \tag{4.3.3}\]
_where \(\alpha_{i}^{*}:=\alpha/\alpha_{i}\), \(\alpha_{-i}=\operatorname{lcm}_{j\neq i}(\alpha_{j})\) and the bounds are considered as submonoids of \(\mathbb{N}\)._
Proof.: We will prove first the upper bound. Using (2.2.2) and the definition of \(N(\ell)\) one writes the expression
\[\mathfrak{o}\ell=\alpha N(\ell)+\sum_{i=1}^{n}s_{i}\alpha_{i}^{*}\cdot\alpha_ {i}f\big{(}\frac{\omega_{i}\ell}{\alpha_{i}}\big{)}, \tag{4.3.4}\]
where \(f(x)=\lceil x\rceil-x\) for any \(x\in\mathbb{Q}\). Note that each term appearing on the right-hand side of (4.3.4) is a nonnegative integer, except maybe \(N(\ell)\). In particular, by considering the elements \(\alpha\) and \(\{s_{i}\alpha_{i}^{*}\}_{i=1}^{n}\) as generators, this implies that
\[\mathcal{S}_{\Gamma}\subset\{\ell\in\mathbb{Z}_{\geq 0}:\mathfrak{o}\ell\in G (\alpha,s_{1}\alpha_{1}^{*},\dots,s_{n}\alpha_{n}^{*})\}=G(\alpha,s_{1}\alpha_ {1}^{*},\dots,s_{n}\alpha_{n}^{*})/\mathfrak{o},\]
where \(G(\alpha,s_{1}\alpha_{1}^{*},\dots,s_{n}\alpha_{n}^{*})\) is a submonoid of \(\mathbb{N}\), not necessarily a numerical semigroup.
In order to see the lower bound we can proceed as follows. For any fixed \(i\in\{1,\dots,n\}\) the definition of the orbifold Euler number gives the expression
\[s_{i}\frac{\omega_{i}}{\alpha_{i}}=b_{0}+e-\sum_{j\neq i}s_{j}\frac{\omega_{j} }{\alpha_{j}},\]
which is used to deduce the following inequality:
\[N(s_{i}\alpha_{-i}) =b_{0}s_{i}\alpha_{-i}-\sum_{j\neq i}s_{j}\big{\lceil}s_{i}\alpha _{-i}\frac{\omega_{j}}{\alpha_{j}}\big{\rceil}-s_{i}\big{\lceil}\alpha_{-i} \big{(}s_{i}\frac{\omega_{i}}{\alpha_{i}}\big{)}\big{\rceil}\] \[=b_{0}s_{i}\alpha_{-i}-s_{i}\sum_{j\neq i}s_{j}\alpha_{-i}\frac{ \omega_{j}}{\alpha_{j}}-s_{i}\Big{(}b_{0}\alpha_{-i}-\sum_{j\neq i}\alpha_{-i} s_{j}\frac{\omega_{j}}{\alpha_{j}}+\big{\lceil}e\alpha_{-i}\big{\rceil}\Big{)}\] \[=-s_{i}\big{\lceil}e\alpha_{-i}\big{\rceil}\geq 0.\]
Furthermore, one also has the properties \(N(\ell_{1}+\ell_{2})\geq N(\ell_{1})+N(\ell_{2})\) and \(N(\alpha)=\mathfrak{o}>0\) (Proposition 3.3.5(c)) which imply that \(G(\alpha,s_{1}\alpha_{-1},\dots,s_{n}\alpha_{-n})\subset S_{\Gamma}\).
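For a concrete check of (4.3.3), take \(Sf=(-2,2\times(3,1),2\times(5,3))\) (cf. Example 4.1.7): here \(n=2\), \(s_{1}=s_{2}=2\), \(\alpha=15\), \(\alpha_{-1}=\alpha_{1}^{*}=5\), \(\alpha_{-2}=\alpha_{2}^{*}=3\) and \(\mathfrak{o}=\alpha|e|=2\), so the predicted inclusions read \(\langle 15,10,6\rangle\subset\mathcal{S}_{\Gamma}\subset\langle 15,10,6\rangle/2\), with \(\mathcal{S}_{\Gamma}=G(3,5)\). The following small script (our own code) verifies this on an initial segment.

```python
def monoid(gens, bound):
    """Elements of the submonoid of N generated by gens, up to bound."""
    reach = {0}
    for x in range(bound + 1):
        if x in reach:
            reach.update(x + g for g in gens if x + g <= bound)
    return reach

b0, legs = 2, [(3, 1), (3, 1), (5, 3), (5, 3)]
N = lambda l: b0 * l - sum(-((-l * w) // a) for a, w in legs)
BOUND = 200
S_Gamma = {l for l in range(BOUND + 1) if N(l) >= 0}

lower = monoid([15, 10, 6], BOUND)                  # G(alpha, s_1*alpha_{-1}, s_2*alpha_{-2})
upper_all = monoid([15, 10, 6], 2 * BOUND)          # needed for the quotient by o = 2
upper = {l for l in range(BOUND + 1) if 2 * l in upper_all}

assert lower <= S_Gamma <= upper
print(sorted(S_Gamma)[:8])                          # [0, 3, 5, 6, 8, 9, 10, 11], i.e. G(3,5)
```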
## 5. The topology and geometry of flat semigroups
We consider a special case of (4.3.3) and characterize those representable semigroups which realize the bounds. It turns out that these are exactly the _flat semigroups_. In this section we will provide their characterization, prove that they are representable and discuss the geometry of their 'canonical representants'.
Consider the inclusions from (4.3.3) and assume that \(\mathfrak{o}=1\), \(\alpha_{-i}=\alpha_{i}^{*}\) and \(\gcd(s_{i},\alpha_{i})=1\) for every \(i\). In this case the bounds are in fact numerical semigroups, they coincide, and (4.3.3) becomes an identity. On the other hand, the condition \(\alpha_{-i}=\alpha_{i}^{*}\) can be achieved exactly when the numbers \(\alpha_{i}\) are pairwise relatively prime. Hence we get \(\alpha_{-i}=\alpha_{i}^{*}=\prod_{j\neq i}\alpha_{j}\), for which we will use the unified notation \(\widehat{\alpha}_{i}\).
In summary, one deduces the following consequence: if the graph \(\Gamma\) defined by the Seifert invariants \(Sf=\big{(}-b_{0},s_{1}\times(\alpha_{1},\omega_{1}),\ldots,s_{n}\times(\alpha_{n },\omega_{n})\big{)}\) satisfies \(\mathfrak{o}=1\), the numbers \(\{\alpha_{i}\}_{i=1}^{n}\) are pairwise relatively prime integers and \(\gcd(s_{i},\alpha_{i})=1\) for every \(i\), then
\[\mathcal{S}_{\Gamma}=G(\alpha,s_{1}\widehat{\alpha}_{1},\ldots,s_{n}\widehat{ \alpha}_{n}). \tag{5.1.1}\]
Moreover, in this case \(\mathcal{S}_{\Gamma}\) is a flat semigroup. Indeed, using the notations from section 3.2 one gets \(q_{0}=\gcd(\{s_{j}\}_{j})\) and \(q_{i}=\alpha_{i}\) for any \(i\geq 1\). This implies that \(\widehat{q}_{0}=\alpha_{1}\ldots\alpha_{n}=\alpha\), \(\widehat{q}_{i}=q_{0}\cdot\widehat{\alpha}_{i}\) for \(i\geq 1\), \(\widehat{s}_{0}=1\) and \(\widehat{s}_{i}=s_{i}/q_{0}\) for \(i\geq 1\), hence it clearly satisfies the flatness condition (cf. Definition 3.2.2) at \(i=0\).
Now, we start with a presentation \(G(a_{0},\ldots,a_{n})\) of a flat semigroup and assume that \(\widehat{s}_{0}=1\). Note that \(\{a_{0},\ldots,a_{n}\}\) is not necessarily the minimal set of generators. In particular, when \(n=1\), for the next construction one has to consider a presentation with at least three generators, eg. \(G(a_{0}a_{1},a_{0},a_{1})\).
Then, the chosen set of generators can be read as \(\{\widehat{q}_{0},\widehat{s}_{1}\widehat{q}_{1},\ldots,\widehat{s}_{n}\widehat{q}_{n}\}\) where we set the numbers \(q_{i}:=\gcd(a_{0},\ldots,a_{i-1},a_{i+1},\ldots,a_{n})\), \(\widehat{q}_{i}:=\prod_{j\neq i}q_{j}\) and \(\widehat{s}_{i}:=a_{i}/\widehat{q}_{i}\) for all \(i\in\{0,\ldots,n\}\). Note that \(\widehat{s}_{i}\) is an integer since \(\gcd(q_{i},q_{j})=1\) for every \(i\neq j\), and it also follows that \(\gcd(\widehat{s}_{i},q_{i})=1\). Setting \(\alpha_{i}:=q_{i}\) for \(i\geq 1\) and \(s_{i}:=q_{0}\cdot\widehat{s}_{i}\) for every \(i\geq 0\) implies that the \(\alpha_{i}\) are pairwise relatively prime and \(\gcd(s_{i},\alpha_{i})=1\) if \(i\geq 1\). Moreover, one identifies \(\widehat{q}_{0}=\alpha\) and \(\widehat{s}_{i}\widehat{q}_{i}=s_{i}\widehat{\alpha}_{i}\) for every \(i\geq 1\), hence the semigroup is presented in the form of (5.1.1).
The previous argument provides the 'arithmetical' characterization of flat semigroups which was also proved in [10].
**Theorem 5.1.2** ([10]).: \(\mathcal{S}\) _is a flat semigroup if and only if there exist pairwise relatively prime integers \(\alpha_{i}\geq 2\) (\(i\in\{1,\ldots,n\}\)) such that \(\mathcal{S}\) can be presented as \(G(\alpha,s_{1}\widehat{\alpha}_{1},\ldots,s_{n}\widehat{\alpha}_{n})\) where \(\alpha=\alpha_{1}\ldots\alpha_{n}\), \(\widehat{\alpha}_{i}=\prod_{j\neq i}\alpha_{j}\) and \(\gcd(\alpha_{i},s_{i})=1\) for every \(i\)._
In the following we will show that, once the presentation (5.1.1) is fixed, there exists a canonical way to represent a flat semigroup as a semigroup associated with a star-shaped graph.
**Theorem 5.1.3**.: _Every flat semigroup is representable._
Proof.: Let \(\mathcal{S}=G(\alpha,s_{1}\widehat{\alpha}_{1},\ldots,s_{n}\widehat{\alpha}_{n})\) be a fixed presentation of the flat semigroup \(\mathcal{S}\). We would like to find the appropriate \(b_{0}\geq 1\) and \(\omega_{1},\ldots,\omega_{n}\) such that \(0<\omega_{i}<\alpha_{i}\) and the Seifert invariants
\[Sf=(-b_{0},\underbrace{(\alpha_{1},\omega_{1}),\ldots,(\alpha_{1},\omega_{1}) }_{s_{1}},\ldots,\underbrace{(\alpha_{n},\omega_{n}),\ldots,(\alpha_{n},\omega _{n})}_{s_{n}}) \tag{5.1.4}\]
define a negative definite star-shaped plumbing graph \(\Gamma\) with \(\mathfrak{o}=\alpha|e|=1\). Note that this latter condition is equivalent to \(b_{0}-\sum_{i}s_{i}\omega_{i}/\alpha_{i}=1/\alpha\).
If we ignore the \(s_{i}\)-multiplicities, then the Diophantine equation \(\alpha(\widetilde{b}_{0}-\sum_{i}\widetilde{w}_{i}/\alpha_{i})=1\) has a unique solution \(\widetilde{b}_{0},\widetilde{\omega}_{1},\ldots\widetilde{\omega}_{n}\), see [11]. In addition, if \(\widetilde{\omega}_{i}\) is divisible by \(s_{i}\) for every \(i\), then one writes \(\widetilde{b}_{0}-\sum_{i}s_{i}\frac{\widetilde{\omega}_{i}/s_{i}}{\alpha_{i} }=1/\alpha\), hence by setting \(\omega_{i}:=\widetilde{\omega}_{i}/s_{i}\) the construction is finished. However, in general, this divisibility does not hold and we have to perturb the initial solution \(\widetilde{b}_{0},\widetilde{\omega}_{1},\ldots,\widetilde{\omega}_{n}\) as follows.
Note that for arbitrary \(k_{1},\ldots,k_{n}\in\mathbb{Z}_{\geq 0}\) we can write the following identity
\[\frac{1}{\alpha}=\widetilde{b}_{0}+k_{1}+k_{2}+\cdots+k_{n}-\sum_{i=1}^{n} \frac{k_{i}\alpha_{i}+\widetilde{\omega}_{i}}{\alpha_{i}}.\]
Since the positive integers \(s_{i}\) and \(\alpha_{i}\) are relatively prime, we choose \(k_{i}\) to be the unique non-negative solution of the equation \(k_{i}\alpha_{i}+\widetilde{\omega}_{i}\equiv 0\,(\text{mod }s_{i})\) such that \(0\leq k_{i}<s_{i}\). In this case \(k_{i}\alpha_{i}+\widetilde{\omega}_{i}\) is divisible by \(s_{i}\), hence we can define
\[\omega_{i}:=(k_{i}\alpha_{i}+\widetilde{\omega}_{i})/s_{i}\in\mathbb{N}\ \ \text{for every }i\quad\text{ and}\quad b_{0}:=\widetilde{b}_{0}+k_{1}+\cdots+k_{n}>0.\]
This yields that
\[0<\omega_{i}<\frac{1}{s_{i}}\left((s_{i}-1)\alpha_{i}+\alpha_{i}\right)=\alpha_{i}\ \text{for every }1\leq i\leq n\quad\text{ and }\quad|e|=b_{0}-\sum_{i=1}^{n}s_{i}\frac{\omega_{i}}{\alpha_{i}}=\frac{1}{\alpha},\]
hence we get \(\mathfrak{o}=1\) which finishes the proof.
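The proof is constructive and easy to automate: first solve the Diophantine equation for \((\widetilde{b}_{0},\widetilde{\omega}_{i})\) ignoring the multiplicities, then apply the perturbation by the \(k_{i}\). The sketch below (our own code, with the stated coprimality assumptions) does exactly this; for \(\alpha_{1}=2,\alpha_{2}=3\) and \(s_{1}=5,s_{2}=10\), i.e. the presentation \(G(2\cdot 3,5\cdot 3,10\cdot 2)\), it returns \(Sf=(-6,5\times(2,1),10\times(3,1))\), the canonical representant appearing in Remark 5.1.7(a) below.

```python
from math import prod

def canonical_representant(alphas, ss):
    """Seifert data (b0, [(alpha_i, omega_i, s_i), ...]) with o = 1, as in Theorem 5.1.3.
    Assumes the alpha_i are pairwise coprime and gcd(s_i, alpha_i) = 1."""
    alpha = prod(alphas)
    # step 1: the unique solution of alpha*(b0~ - sum omega_i~/alpha_i) = 1
    w_t = [(-pow(alpha // a, -1, a)) % a for a in alphas]
    b0_t = (1 + sum(w * (alpha // a) for a, w in zip(alphas, w_t))) // alpha
    # step 2: perturb so that k_i*alpha_i + omega_i~ becomes divisible by s_i
    ks = [(-w * pow(a, -1, s)) % s if s > 1 else 0
          for a, w, s in zip(alphas, w_t, ss)]
    omegas = [(k * a + w) // s for a, w, s, k in zip(alphas, w_t, ss, ks)]
    return b0_t + sum(ks), list(zip(alphas, omegas, ss))

print(canonical_representant([2, 3], [5, 10]))   # (6, [(2, 1, 5), (3, 1, 10)])
```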
**Remark 5.1.5**.: In the case when \(s_{i}=1\) for any \(i\), the graph constructed in the proof of Theorem 5.1.3 is the canonical representant of a strongly flat semigroup. This motivates the following definition.
**Definition 5.1.6**.: We say that the representant constructed in Theorem 5.1.3 is a _canonical representant_ of the flat semigroup \(\mathcal{S}\) associated with its presentation \(G(\alpha,s_{1}\widehat{\alpha}_{1},\dots,s_{n}\widehat{\alpha}_{n})\).
**Remark 5.1.7**.: (a) The next example will illustrate how a canonical representant depends on the presentation \(G(\alpha,s_{1}\widehat{\alpha}_{1},\dots,s_{n}\widehat{\alpha}_{n})\) of the flat semigroup. Once the presentation is fixed, it is unique by the previous proof.
Consider the numerical semigroup \(\mathcal{S}\) generated by \(a_{0}=6\), \(a_{1}=15\) and \(a_{2}=20\). One can check that \(\mathcal{S}\) is flat at the first two generators, ie. \(\widehat{s}_{0}=\widehat{s}_{1}=1\). Hence, if we present it first of all as \(G(2\cdot 3,5\cdot 3,10\cdot 2)\), then this provides a canonical representant with Seifert invariants \(Sf=(-6,5\times(2,1),10\times(3,1))\). On the other hand, if we present \(\mathcal{S}\) as \(G(5\cdot 3,2\cdot 3,4\cdot 5)\) then one gets a graph with Seifert invariants \(Sf=(-3,4\times(3,1),2\times(5,4))\).
One can also choose a non-minimal presentation such as \(G(2\cdot 3\cdot 5,1\cdot 15,2\cdot 10,1\cdot 6)\). In this case the associated canonical representant is defined by the Seifert invariants \(Sf=(-2,(2,1),2\times(3,1),(5,4))\).
(b) If we run the algorithm from Theorem 5.1.3 for the strongly flat semigroup \(G(p,q)\) presented as \(G(pq,p,q)\), then the associated canonical representant will be given by the Seifert invariants \(Sf=(-b_{0},(p,\omega_{1}),(q,\omega_{2}))\) where \(b_{0},\omega_{1},\omega_{2}\) satisfy \(pqb_{0}-q\omega_{1}-p\omega_{2}=1\). The last identity immediately implies that \(b_{0}=1\) and the corresponding graph is \(\widetilde{\Gamma}_{f}\), see section 4.2.2 and Figure 2. In other words, the canonical representant is \(S^{3}\) with the corresponding Seifert structure.
In the sequel, we discuss some properties of the canonical representants and of flat semigroups. First, we observe the following.
**Proposition 5.2.1**.: _A canonical representant of a flat semigroup is numerically Gorenstein._
Proof.: Let \(\Gamma\) be the canonical representant associated with a fixed presentation of a flat semigroup and consider its Seifert invariants as in (5.1.4). By Lemma 2.2.5 one has to check that \(\gamma\in\mathbb{Z}\) and \(\gamma\equiv\omega_{i}^{\prime}\ (\text{mod }\alpha_{i})\) for every \(1\leq i\leq n\).
For the first we recall that for a canonical representant we have \(\alpha=\alpha_{1}\alpha_{2}\dots\alpha_{n}\) and \(\mathfrak{o}=1\), hence \(|e|=1/\alpha\). Then, from (2.2.3) we get
\[\gamma=\frac{1}{|e|}\cdot\Big{(}d-2-\sum_{\text{all }\alpha_{j}}\frac{1}{ \alpha_{j}}\Big{)}=\alpha\cdot\Big{(}d-2-\sum_{i=1}^{n}s_{i}\frac{1}{\alpha_{i }}\Big{)}=(d-2)\alpha-\sum_{i=1}^{n}s_{i}\widehat{\alpha}_{i}\in\mathbb{Z}.\]
Note that \(\gamma\equiv\omega_{i}^{\prime}\) (mod \(\alpha_{i}\)) is equivalent with \(\omega_{i}\gamma\equiv 1\,(\text{mod}\,\alpha_{i})\), see section 2.2.1. The previous calculation gives the expression:
\[\omega_{i}\gamma=\alpha_{i}\Big{(}(d-2)\widehat{\alpha}_{i}-\sum_{ \begin{subarray}{c}k=1\\ k\neq i\end{subarray}}^{n}s_{k}\prod_{\begin{subarray}{c}j=1\\ j\notin\{i,k\}\end{subarray}}^{n}\alpha_{j}\Big{)}\omega_{i}-s_{i}\widehat{ \alpha}_{i}\omega_{i},\]
which tells us that \(\omega_{i}\gamma\equiv-s_{i}\widehat{\alpha}_{i}\omega_{i}\,(\text{mod}\, \alpha_{i})\). On the other hand, the identity \(\mathfrak{o}=1\) reads as
\[1=\alpha|e|=\alpha\Big{(}b_{0}-\sum_{k=1}^{n}s_{k}\frac{\omega_{k}}{\alpha_{k }}\Big{)}=\alpha_{i}\Big{(}b_{0}\widehat{\alpha}_{i}-\sum_{\begin{subarray}{ c}k=1\\ k\neq i\end{subarray}}^{n}s_{k}\omega_{k}\prod_{\begin{subarray}{c}j=1\\ j\notin\{i,k\}\end{subarray}}^{n}\alpha_{j}\Big{)}-s_{i}\widehat{\alpha}_{i} \omega_{i},\]
which implies \(-s_{i}\widehat{\alpha}_{i}\omega_{i}\equiv 1(\text{mod}\,\alpha_{i})\). Hence, \(\omega_{i}\gamma\equiv 1(\text{mod}\,\alpha_{i})\) for any \(i\in\{1,\dots,n\}\).
Then, we can apply Proposition 3.4.3 to deduce that a flat semigroup is symmetric. Furthermore, in this case the Frobenius number simplifies to \(f_{\mathcal{S}}=\alpha+\gamma\). This, expressed in terms of the minimal set of generators, reproves the formula (3.2.4) from [10, Theorem 2.5].
**Theorem 5.2.2**.: _If \(\mathcal{S}\) is a flat semigroup, minimally generated by \(a_{0},a_{1},\dots,a_{n}\) (\(n\geq 2\)), then_
\[f_{\mathcal{S}}=\sum_{i=0}^{n}(\alpha_{i}-1)a_{i}-\prod_{i=0}^{n}\alpha_{i}, \tag{5.2.3}\]
_where \(\alpha_{i}=\gcd(a_{0},\dots,a_{i-1},a_{i+1}\dots,a_{n})\)._
Proof.: Using the previous discussions and the notation \(\alpha:=\prod_{i=1}^{n}\alpha_{i}\), after a possible permutation of the generators, we can assume that \((a_{0},a_{1},\dots,a_{n})=(\alpha,s_{1}\widehat{\alpha}_{1},\dots,s_{n} \widehat{\alpha}_{n})\). Then one has
\[\gamma+\alpha =\frac{1}{|e|}(d-2-\sum_{i=1}^{n}s_{i}\frac{1}{\alpha_{i}})+ \alpha=\alpha(d-2-\sum_{i=1}^{n}s_{i}\frac{1}{\alpha_{i}}+1)=d\alpha-\sum_{i =1}^{n}s_{i}\widehat{\alpha}_{i}-\alpha\] \[=\sum_{i=1}^{n}s_{i}\alpha_{i}\widehat{\alpha}_{i}-\sum_{i=1}^{n} s_{i}\widehat{\alpha}_{i}-\alpha=\sum_{i=1}^{n}(\alpha_{i}-1)s_{i}\widehat{ \alpha}_{i}+(\alpha_{0}-1)\alpha-\alpha_{0}\alpha\] \[=\sum_{i=0}^{n}(\alpha_{i}-1)a_{i}-\prod_{i=0}^{n}\alpha_{i}\]
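As a quick numerical sanity check of formula (5.2.3), the following short Python sketch (our own helper names; the brute-force bound is an ad hoc choice) compares the closed formula with a direct computation of the Frobenius number for the flat semigroup \(G(6,15,20)\) of Remark 5.1.7.

```python
from math import gcd, prod

def frobenius_brute_force(gens, bound=2000):
    """Largest integer not representable as a non-negative combination of gens."""
    reachable = [False] * (bound + 1)
    reachable[0] = True
    for x in range(1, bound + 1):
        reachable[x] = any(x >= g and reachable[x - g] for g in gens)
    return max(x for x in range(bound + 1) if not reachable[x])

def frobenius_flat_formula(gens):
    """Formula (5.2.3): f_S = sum_i (alpha_i - 1)*a_i - prod_i alpha_i, where
    alpha_i = gcd of all generators except a_i (valid for flat semigroups)."""
    alphas = [gcd(*(a for j, a in enumerate(gens) if j != i)) for i in range(len(gens))]
    return sum((al - 1) * a for al, a in zip(alphas, gens)) - prod(alphas)

gens = (6, 15, 20)                      # flat semigroup from Remark 5.1.7
print(frobenius_flat_formula(gens))     # 49
print(frobenius_brute_force(gens))      # 49
```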
### The geometry of the canonical representants
In what follows we study the canonical representants from a geometric point of view. We construct explicit equations for weighted homogeneous singularities whose link (or minimal good dual resolution graph) is a canonical representant of a flat semigroup.
#### 5.3.1. **The universal abelian cover and the action of \(H\).**
**Lemma 5.3.1**.: _If \(\Gamma\) is the the canonical representant of a flat semigroup \(G(\alpha,s_{1}\widehat{\alpha}_{1},\dots,s_{n}\widehat{\alpha}_{n})\), then one has_
\[H\simeq\oplus_{i=1}^{n}\mathbb{Z}_{\alpha_{i}}^{s_{i}-1}.\]
Proof.: Recall that the canonical representant \(\Gamma\) associated with the given presentation is defined by the Seifert invariants \(Sf=(-b_{0},s_{1}\times(\alpha_{1},\omega_{1}),\dots,s_{n}\times(\alpha_{n}, \omega_{n}))\), where \(b_{0}\) and the \(\omega_{i}\) are constructed in Theorem 5.1.3.
Let \(E_{0}\) be the base element associated with the central node of \(\Gamma\), and for simplicity, we will denote by \(E_{j(i)}\) (\(i\in\{1,\dots,n\}\), \(j\in\{1,\dots,s_{i}\}\)) the base elements associated with the end-vertices. The classes in \(H=L^{\prime}/L\) of the corresponding dual base elements will be denoted by \(g_{0}:=[E_{0}^{*}]\) and \(g_{j(i)}:=[E_{j(i)}^{*}]\). Then the group \(H\) can be presented as
\[H=\big{\langle}\,g_{0},\{g_{j(i)}\}_{i,j}\,|\,b_{0}\cdot g_{0}=\sum_{i=1}^{n} \sum_{j=1}^{s_{i}}\omega_{i}\cdot g_{j(i)};\,g_{0}=\alpha_{i}\cdot g_{j(i)}\ (1\leq i\leq n,\ 1\leq j \leq s_{i})\big{\rangle},\]
cf. Neumann [11].
Since \(\mathfrak{o}=1\) for the canonical representant \(\Gamma\), one gets \(g_{0}=0\) and the relations simplify to \(\sum_{i=1}^{n}\sum_{j=1}^{s_{i}}\omega_{i}\cdot g_{j(i)}=0\) and \(\alpha_{i}\cdot g_{j(i)}=0\). From the former relation we deduce that \(\widehat{\alpha_{i}}\omega_{i}\cdot\sum_{j=1}^{s_{i}}g_{j(i)}=0\), while the later gives \(\alpha_{i}\cdot\sum_{j=1}^{s_{i}}g_{j(i)}=0\). These imply that the order of \(\sum_{j=1}^{s_{i}}g_{j(i)}\) divides both \(\widehat{\alpha_{i}}\omega_{i}\) and \(\alpha_{i}\). Since \(\{\alpha_{i}\}_{i}\) are pairwise relatively prime, this is possible if and only if \(\sum_{j=1}^{s_{i}}g_{j(i)}=0\) for any \(i\in\{1,\dots,n\}\). Therefore, we get
\[H\simeq\oplus_{i=1}^{n}\langle\,g_{1(i)},\dots,g_{s_{i}(i)}\,|\,\alpha_{i} \cdot g_{j(i)}=0;\,\sum_{j(i)=1}^{s_{i}}g_{j(i)}=0\ (1\leq j\leq s_{i})\rangle\simeq\oplus_{i=1}^{n} \mathbb{Z}_{\alpha_{i}}^{s_{i}-1}. \tag{5.3.2}\]
Let \(\Gamma\) be a canonical representant of a flat semigroup as before and \((X,o)\) be a weighted homogeneous surface singularity whose minimal good dual resolution graph is \(\Gamma\). Then there exists the universal abelian cover \((X^{ab},0)\to(X,o)\) that induces an unramified Galois covering \(X^{ab}\setminus 0\to X\setminus o\) with Galois group \(H\simeq H_{1}(M,\mathbb{Z})\) (\(M\) is the link of \((X,o)\)), ie. \((X,o)=(X^{ab}/H,0)\). Furthermore, by a theorem of Neumann [11] (see also [10, 5.1.40]) this universal abelian cover \((X^{ab},0)\) is a Brieskorn complete intersection singularity, which can be given by the equations
\[\{z=(z_{j(i)})\in\mathbb{C}^{d}\ |\ f_{k}:=\sum_{i=1}^{n}\sum_{j(i)=1}^{s_{i}}c _{j(i)}^{k}z_{j(i)}^{\alpha_{i}}=0,\ \ k=1,\dots,d-2\}, \tag{5.3.3}\]
where the \((d-2)\times d\) matrix \((c_{j(i)}^{k})\) has full rank. Here \(d:=s_{1}+\dots+s_{n}\) and the variable \(z_{j(i)}\) is assigned to the end-vertex \(E_{j(i)}\) for any \(i=1,\dots,n\), \(j=1,\dots,s_{i}\). In the followings, we define the \(H\)-action on \(X^{ab}\).
Consider the Pontrjagin dual \(\hat{H}:=\operatorname{Hom}(H,S^{1})\) of \(H\) and let \(\theta:H\to\hat{H},[l^{\prime}]\mapsto e^{2\pi i(l^{\prime},\cdot)}\) be the isomorphism of \(H\) with \(\hat{H}\). The \(H\) acts on \(\mathbb{C}^{d}\) by the diagonal action \(\operatorname{diag}(\chi_{j(i)})_{i,j}:H\to\operatorname{Diag}(d)\subset GL_{ d}(\mathbb{C})\), where \(\chi_{j(i)}:=e^{2\pi i(E_{j(i)}^{*},\cdot)}\in\hat{H}\) is the character corresponding to \([E_{j(i)}^{*}]\). Since the equations \(f_{k}\) are eigenvectors we obtain an induced action on \(X^{ab}\) too. By [11, Theorem 2.1] this action is free off the origin and the orbit space \((X^{ab}/H,0)\simeq(X,o)\) (for the right choice of \(c_{j(i)}^{k}\)).
**Remark 5.3.4**.: Note that, although the complex structure of \((X^{ab},0)\) depends on the choice of the matrix \((c^{k}_{j(i)})\), its link \(M^{ab}\) (as a Seifert 3-manifold) is independent of it. Furthermore, by the result of [15, Thm. 7.2] (see also [14, 5.1.17]) one can show that in our case the resolution graph \(\Gamma^{ab}\) of \((X^{ab},0)\) inherits from \(\Gamma\) the following structure (in the sequel all invariants of \(M^{ab}\) will be marked by \(\star^{ab}\)):
* If \(s_{i}=1\) then the leg in \(\Gamma\) with \((\alpha_{i},\omega_{i})\) induces a leg in \(\Gamma^{ab}\) with \(\alpha_{i}^{ab}=\alpha_{i}\) and this will appear with multiplicity \(s_{i}^{ab}=\prod_{l=1}^{n}\alpha_{l}^{s_{l}-1}\).
* If \(s_{i}>1\) then for any \(j(i)=1,\ldots,s_{i}\) one gets \(\alpha_{j(i)}^{ab}=1\) and \(s_{j(i)}^{ab}=\alpha_{i}^{s_{i}-2}\prod_{l\neq i}\alpha_{l}^{s_{l}-1}\). Note that these legs completely disappear since \(\alpha_{j(i)}^{ab}=1\), however their multiplicity contributes to the genus of the central fiber in \(M^{ab}\).
* More precisely, one gets \(g^{ab}=1+\frac{1}{2}\prod_{l=1}^{n}\alpha_{l}^{s_{l}-1}(\sum_{i,s_{i}>1}s_{i }(1-\frac{1}{\alpha_{i}})-2)\). Furthermore, \(b_{0}^{ab}\) and \(\omega_{i}^{ab}\) can be determined by the formulae: \(\omega_{i}^{ab}\widehat{\alpha}_{i}\equiv-1\,(\operatorname{mod}\alpha_{i})\) and \(-e^{ab}=\prod_{l=1}^{n}\alpha_{l}^{s_{l}-2}\).
We emphasize that \(M^{ab}\) is a rational homology sphere if and only if \(s_{i}=1\) for any \(i\). Or, equivalently, \(\Gamma\) is the canonical representant of a strongly flat semigroup by Remark 5.1.5.
#### 5.3.2. **The equations of \((X,o)\)**
Now we look at the induced action on the polynomial ring \(R=\mathbb{C}[(z_{j(i)})_{i,j}]\) and the invariant subring of \(R\) is denoted by \(R^{H}\). By considering the generators of the invariant monomials and their relations in \(R^{H}\), in the sequel we will study the possible equations for the analytic types of \((X,o)\).
Since the characters \(\chi_{j(i)}\) generate \(\hat{H}\), they satisfy the relations from (5.3.2), namely one has
\[\chi_{j(i)}^{\alpha_{i}}=1\ \ \text{and}\ \ \prod_{j(i)=1}^{s_{i}}\chi_{j(i)}=1 \ \ \text{for any}\ i=1,\ldots,n. \tag{5.3.5}\]
A monomial \(z^{a}:=\prod_{i}\prod_{j(i)}z_{j(i)}^{a_{j(i)}}\) is in \(R^{H}\) if and only if \(\prod_{j(i)}\chi_{j(i)}^{a_{j(i)}}=1\) for any \(i\). By (5.3.5) one deduces that the monomials \(z_{j(i)}^{\alpha_{i}}\) and \(\prod_{j(i)=1}^{s_{i}}z_{j(i)}\) are in \(R^{H}\) for any \(i\) and \(j(i)\). If there are any other generators, then after dividing them by the already listed invariant monomials they must have the form \(\prod_{j(i)=1}^{s_{i}}z_{j(i)}^{a_{j(i)}}\) for some \(i\) with the exponents \((a_{j(i)})=(a_{1},\ldots,a_{s_{i}-1},0)\) where \(0\leq a_{j(i)}<\alpha_{i}\). We claim that in this case \(a_{j(i)}=0\) for \(j(i)=1,\ldots,s_{i}-1\) as well. Indeed, one has the identity \(\prod_{j(i)=1}^{s_{i}-1}\chi_{j(i)}^{a_{j(i)}}=1\) which implies that \(\sum_{j(i)=1}^{s_{i}-1}a_{j(i)}E_{j(i)}^{*}\in L\). In particular, \(\sum_{j(i)=1}^{s_{i}-1}a_{j(i)}g_{j(i)}=0\in H\). Then the isomorphism from Lemma 5.3.1 and the assumptions on \(a_{j(i)}\) imply that \(a_{j(i)}=0\).
In summary, the generators associated with \(i\) are as follows: if \(s_{i}=1\) then \(z_{i}:=z_{j(i)}\) is a generator; in the case \(s_{i}>1\) we get \(w_{j(i)}:=z_{j(i)}^{\alpha_{i}}\) for \(j(i)=1,\ldots,s_{i}\) and \(w_{i}:=\prod_{j(i)=1}^{s_{i}}z_{j(i)}\). Then \(R^{H}\) can be presented as \(\mathbb{C}[z_{i},w_{i},w_{j(i)}]/I\) where the ideal \(I\) is given by the relations
\[\begin{cases}\sum_{i,s_{i}=1}c_{i}^{k}z_{i}^{\alpha_{i}}+\sum_{i,s_{i}\neq 1} \sum_{j(i)}c_{j(i)}^{k}w_{j(i)}=0,&\ k=1,\ldots,d-2;\\ w_{i}^{\alpha_{i}}=\prod_{j(i)=1}^{s_{i}}w_{j(i)}&\ \ \text{for every}\ i\ \text{with}\ s_{i}>1,\end{cases} \tag{5.3.6}\]
providing us the equations for the possible analytic types of \((X,o)\). Note that the first type of relations comes from the equations (5.3.3) of \(X^{ab}\), while the second type of equations is given by the relations of the monoid algebra \(\mathbb{C}[M_{i}]\) associated with the affine monoid \(M_{i}=\langle t_{1}:=(\alpha_{i},0,\ldots,0),\,t_{2}:=(0,\alpha_{i},0,\ldots,0),\,\ldots,\,t_{s_{i}}:=(0,\ldots,0,\alpha_{i}),\,t_{s_{i}+1}:=(1,1,\ldots,1)\rangle\subset\mathbb{Z}_{\geq 0}^{s_{i}}\) for every \(i\) with \(s_{i}>1\). In other words, for a fixed \(i\) the corresponding relations generate the toric ideal \(I_{i}\) where \(\mathbb{C}[M_{i}]\simeq\mathbb{C}[w_{j(i)},w_{i}]/I_{i}\). In fact, in our case one has \(I_{i}=(w_{i}^{\alpha_{i}}-\prod_{j(i)=1}^{s_{i}}w_{j(i)})\). Indeed,
the generators of the relations in terms of the monoid generators have the form of \(\sum_{l\in I}a_{l}t_{l}=\sum_{m\in J}b_{m}t_{m}+b_{s_{i}+1}t_{s_{i}+1}\) for some \(a_{l},b_{m},b_{s_{i}+1}\geq 0\) where \(I,J\subset\{1,\ldots,s_{i}\}\) and \(I\cap J=\emptyset\). Looking at the coordinates, this implies that \(b_{m}=0\), \(I=\{1,\ldots,s_{i}\}\) and \(a_{l}\alpha_{i}=b_{s_{i}+1}\), which gives us the only generator \(\sum_{l=1}^{s_{i}}t_{l}=\alpha_{i}t_{s_{i}+1}\).
Note that many of the variables \(w_{j(i)}\) can be eliminated and the number of equations in (5.3.6) can be reduced. In the following we will distinguish three cases depending on the number \(K:=\#\{i\ :\ s_{i}=1\}\) of legs with multiplicity \(1\).
**I.** \(K=0\). In this case we have the linear system \(\{\sum_{i,s_{i}\neq 1}\sum_{j(i)}c^{k}_{j(i)}w_{j(i)}=0\}_{k=1,\ldots,d-2}\). Associated with two fixed, not necessarily different indices \(i_{1}\) and \(i_{2}\) we choose \(j_{0}(i_{1})\in\{1,\ldots,s_{i_{1}}\}\) and \(j_{0}(i_{2})\in\{1,\ldots,s_{i_{2}}\}\). For simplicity, we set the notations \(x:=w_{j_{0}(i_{1})}\) and \(y:=w_{j_{0}(i_{2})}\). Then, since the coefficients \(\{c^{k}_{j(i)}\}\) are generic (i.e. the matrix of the system has rank \(d-2\)), the other variables can be expressed linearly in terms of \(x\) and \(y\). Therefore, we get that \((X,o)\subset(\mathbb{C}^{n+2},0)\) is defined by the following equations
\[w^{\alpha_{i}}_{i}=\prod_{j(i)=1}^{s_{i}}(a_{j(i)}x+b_{j(i)}y)\ \ \ \ i=1,\ldots n, \tag{5.3.7}\]
where \(a_{j_{0}(i_{1})}=b_{j_{0}(i_{2})}=1\), \(a_{j_{0}(i_{2})}=b_{j_{0}(i_{1})}=0\) and the other \(a_{j(i)},b_{j(i)}\in\mathbb{C}\) are generic coefficients.
**II.**\(K=1\) We simply write \(z\) for the variable associated with the only \(i\) with \(s_{i}=1\), set also \(\alpha:=\alpha_{i}\). On the other hand, we fix \(i_{0}\in\{i\ :s_{i}>1\}\) and one of its associated variables will be denoted by \(x:=w_{j_{0}(i_{0})}\). The other variables are linearly expressed with \(z^{\alpha}\) and \(x\), hence the equations of \((X,o)\subset\mathbb{C}^{n+1}\) are
\[w^{\alpha_{i}}_{i}=\prod_{j(i)=1}^{s_{i}}(a_{j(i)}z^{\alpha}+b_{j(i)}x)\ \ \ \ i\in\{i\ :\ s_{i}>1\}, \tag{5.3.8}\]
where \(a_{j_{0}(i_{0})}=0\), \(b_{j_{0}(i_{0})}=1\) and the others are generic.
**III.**\(K>1\) In the last case we choose \(i_{1},i_{2}\) with \(s_{i_{1}}=s_{i_{2}}=1\) and denote their associated variables by \(x:=z_{i_{1}}\) and \(y:=z_{i_{2}}\). Then, one can express \(z^{\alpha_{i}}_{i}\) for \(i\in\{i\ :\ s_{i}=1\}\) and \(i\neq i_{1},i_{2}\), as well as the variables \(w_{j(i)}\) for every \(i\in\{i\ :\ s_{i}>1\}\) and \(j(i)\), linearly in terms of \(x^{\alpha_{i_{1}}}\) and \(y^{\alpha_{i_{2}}}\). Therefore we get that \((X,o)\subset(\mathbb{C}^{n},0)\) is defined by the following equations
\[\begin{cases}z^{\alpha_{i}}_{i}=p_{i}x^{\alpha_{i_{1}}}+q_{i}y^{\alpha_{i_{2} }}&\ i\in\{i\ :\ s_{i}=1\}\setminus\{i_{1},i_{2}\};\\ w^{\alpha_{i}}_{i}=\prod_{j(i)=1}^{s_{i}}(a_{j(i)}x^{\alpha_{i_{1}}}+b_{j(i)}y ^{\alpha_{i_{2}}})&\ i\in\{i\ :\ s_{i}>1\},\end{cases} \tag{5.3.9}\]
where \(p_{i},q_{i},a_{j(i)},b_{j(i)}\) are generic coefficients.
**Remark 5.3.10**.: We emphasize that in all three cases the normal surface singularities \((X,o)\) are complete intersections. In particular, they are Gorenstein and Proposition 5.2.1 follows automatically.
**Example 5.3.11**.: Consider the flat semigroup \(G(6,15,20)\) discussed in Remark 5.1.7. We look at its (last) canonical representant, which is defined by the Seifert invariants \(Sf=(-2,(2,1),2\times(3,1),(5,4))\). Then, by case III. of the above construction, we get a family of suspension hypersurface singularities defined by \((X_{a_{i},b_{i}}=\{f(x,y,z)=(a_{1}x^{2}+b_{1}y^{5})(a_{2}x^{2}+b_{2}y^{5})+z^{ 3}=0\},0)\subset(\mathbb{C}^{3},0)\) where \(a_{i},b_{i}\in\mathbb{C}\) are generic coefficients. In particular, if we consider the hypersurface singularity defined by the equation \(x^{4}+y^{10}+z^{3}=0\) for example, one can check that its associated
semigroup is \(G(6,15,20)\). The other canonical representants considered in Remark 5.1.7 provide (eg. by case I.) other families of complete intersections with the same associated semigroup.
## 6. The characterization of representable semigroups
### Perturbation of the Seifert invariants and the characterization
In this section we characterize the representable semigroups by proving that they can be viewed as quotients of flat semigroups.
In order to prove this result two technical steps are needed: first of all we show that every Seifert invariant \((\alpha,\omega)\) can be perturbed without affecting the quasi-linear function \(N(\ell)\); the second step claims that the Seifert invariants \((-b_{0},s_{1}\times(\alpha_{1},\omega_{1}),\ldots,s_{n}\times(\alpha_{n}, \omega_{n}))\) can be changed to \((-b_{0},s_{1}\times(\alpha_{1}^{\prime},\omega_{1}^{\prime}),\ldots,s_{n} \times(\alpha_{n}^{\prime},\omega_{n}^{\prime}))\) in such a way that the associated semigroup stays the same, \(\alpha_{1}^{\prime},\ldots,\alpha_{n}^{\prime}\) are pairwise relatively prime and \(\gcd(\alpha_{i}^{\prime},s_{i})=1\).
We start our discussion with the characterization of the latter case.
**Theorem 6.1.1**.: _Let \(\Gamma\) be a graph defined by the Seifert invariants_
\[Sf=\Big{(}-b_{0},\underbrace{(\alpha_{1},\omega_{1}),\ldots,(\alpha_{1}, \omega_{1})}_{s_{1}},\ldots,\underbrace{(\alpha_{n},\omega_{n}),\ldots,(\alpha _{n},\omega_{n})}_{s_{n}}\Big{)}.\]
_If the numbers \(\alpha_{i}\geq 2\) are pairwise relatively prime integers and \(\gcd(\alpha_{i},s_{i})=1\) for every \(i\), then its associated semigroup is a quotient of a flat semigroup. In fact, \(\mathcal{S}_{\Gamma}=G(\alpha,s_{1}\widehat{\alpha}_{1},\ldots,s_{n}\widehat{ \alpha}_{n})/\mathfrak{o}\)._
Proof.: First of all we observe that \(\gcd(\alpha_{i},\mathfrak{o})\neq 1\) implies \(\gcd(\alpha_{i},s_{i})\neq 1\), hence by the assumptions of the theorem we must have \(\gcd(\alpha_{i},\mathfrak{o})=1\) for every \(i\in\{1,\ldots,n\}\). Indeed, for a fixed \(i\) the expression \(\mathfrak{o}=\alpha b_{0}-\sum_{j\neq i}s_{j}\omega_{j}\widehat{\alpha}_{j}-s_ {i}\omega_{i}\widehat{\alpha}_{i}\) implies the identity \(s_{i}\omega_{i}\widehat{\alpha}_{i}\equiv 0\,(\operatorname{mod}\gcd(\alpha_{i}, \mathfrak{o}))\), which simplifies to \(s_{i}\omega_{i}\equiv 0\,(\operatorname{mod}\gcd(\alpha_{i},\mathfrak{o}))\) since \(\gcd(\alpha_{i},\alpha_{j})=1\) for \(j\neq i\). Then, by multiplying the last congruence with \(\omega_{i}^{\prime}\) and using (2.2.1) yields that \(s_{i}\equiv 0\,(\operatorname{mod}\gcd(\alpha_{i},\mathfrak{o}))\), which supports our claim.
Now, using similar ideas as in the proof of the Theorem 5.1.3 we consider the non-negative integers \(k_{1},\ldots,k_{n}\) as the solutions of the equations
\[k_{1}\alpha_{1}+\omega_{1}\equiv\cdots\equiv k_{n}\alpha_{n}+\omega_{n}\equiv 0 \,(\operatorname{mod}\mathfrak{o}), \tag{6.1.2}\]
with \(0\leq k_{i}<\mathfrak{o}\) for every \(1\leq i\leq n\). These, on one hand, allow us to define the numbers
\[\tilde{\omega}_{i}=\frac{1}{\mathfrak{o}}(k_{i}\alpha_{i}+\omega_{i})\quad \text{ for every }1\leq i\leq n,\]
which satisfy \(\gcd(\tilde{\omega}_{i},\alpha_{i})=1\) and \(0<\tilde{\omega}_{i}<\alpha_{i}\). On the other hand, they imply that \(b_{0}+\sum_{i=1}^{n}s_{i}k_{i}\equiv 0\,(\operatorname{mod}\mathfrak{o})\) too, hence one can define the new decoration as \(\tilde{b}_{0}=(b_{0}+\sum_{i=1}^{n}s_{i}k_{i})/\mathfrak{o}\). Indeed, from the equations (6.1.2) one gets \(\sum_{i=1}^{n}\alpha s_{i}k_{i}+s_{i}\omega_{i}\widehat{\alpha}_{i}\equiv 0\,( \operatorname{mod}\mathfrak{o})\). Furthermore, the definition of \(\mathfrak{o}\) reads as \(\mathfrak{o}=\alpha b_{0}-\sum_{i}s_{i}\omega_{i}\widehat{\alpha}_{i}\) which, applied to the previous equation gives \(\alpha(b_{0}+\sum_{i}s_{i}k_{i})\equiv 0\,(\operatorname{mod}\mathfrak{o})\). Since \(\gcd(\alpha,\mathfrak{o})=1\), we deduce that \((b_{0}+\sum_{i}s_{i}k_{i})\equiv 0\,(\operatorname{mod}\mathfrak{o})\).
We consider the graph \(\tilde{\Gamma}\) defined by the Seifert invariants
\[Sf=\big{(}-\tilde{b}_{0},\underbrace{(\alpha_{1},\tilde{\omega}_{1}),\ldots,( \alpha_{1},\tilde{\omega}_{1})}_{s_{1}},\ldots,\underbrace{(\alpha_{n},\tilde{ \omega}_{n}),\ldots,(\alpha_{n},\tilde{\omega}_{n})}_{s_{n}}\big{)}.\]
In what follows, the numerical data associated with \(\tilde{\Gamma}\) will be distinguished by the 'tilde' notation, e.g. \(\tilde{e}\) will stand for the orbifold Euler number of \(\tilde{\Gamma}\). The first observation is that
\(\tilde{e}=e/\mathfrak{o}<0\), hence \(\tilde{\Gamma}\) is negative definite. Furthermore, \(\tilde{\alpha}=\alpha\), which implies that \(\tilde{\mathfrak{o}}=1\). Then, by the assumptions of the theorem we conclude that \(\tilde{\Gamma}\) is the canonical representant of the flat semigroup \(\mathcal{S}_{\tilde{\Gamma}}=G(\alpha,s_{1}\widehat{\alpha}_{1},\ldots,s_{n} \widehat{\alpha}_{n})\).
On the other hand, if we look at the quasi-linear function \(\tilde{N}(\ell)=\tilde{b}_{0}\ell-\sum_{i=1}^{n}s_{i}\lceil\tilde{\omega}_{i}\ell/ \alpha_{i}\rceil\) associated with \(\mathcal{S}_{\tilde{\Gamma}}\), one can see that \(\tilde{N}^{(\mathfrak{o})}(\ell)=\tilde{N}(\mathfrak{o}\ell)=N(\ell)\). Hence \(\mathcal{S}_{\Gamma}=\mathcal{S}_{\tilde{\Gamma}}/\mathfrak{o}\), which concludes our claim.
**Lemma 6.1.3**.: _Let \(r\in\mathbb{Q}_{>0}\) be arbitrary. For every \(M>0\) there exists \(r_{M}\in\mathbb{Q}\) such that_
\[\lceil r\ell\rceil=\lceil r^{\prime}\ell\rceil\quad\text{ for every }\ell\in\mathbb{N},\ell\leq M\text{ and }r^{\prime}\in(r_{M},r]. \tag{6.1.4}\]
Proof.: Since \(r\in\mathbb{Q}_{>0}\) one writes \(r=\omega/\alpha\), where \(\omega,\alpha\in\mathbb{N}\) and \(\gcd(\omega,\alpha)=1\). For a fixed \(\ell\in\mathbb{N}\) we introduce the notation \(x:=\lceil\omega\ell/\alpha\rceil\) for simplicity. Notice that if \(\ell=0\) then (6.1.4) holds for any \(r^{\prime}\), so in the sequel we assume that \(\ell\neq 0\). By the properties of the function \(t\mapsto\lceil t\rceil\) one knows the inequalities
\[x-1+\frac{1}{\alpha}\leq\frac{\omega\ell}{\alpha}\leq x.\]
This implies that for any \(r^{\prime}\in(r-\frac{1}{\alpha\ell},r]\) we have \(x\geq r\ell\geq r^{\prime}\ell>r\ell-\frac{1}{\alpha}\geq x-1\), hence \(\lceil r^{\prime}\ell\rceil=x=\lceil r\ell\rceil\).
Thus, for a fixed \(\ell\) we have constructed an interval for \(r^{\prime}\) such that (6.1.4) is satisfied. When \(\ell\) is varying in \([0,M]\) we consider the intersection of these intervals
\[\bigcap_{0<\ell\leq M}\left(r-\frac{1}{\alpha\ell},r\right]=\left(r-\frac{1}{ \alpha M},r\right]=:(r_{M},r].\]
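The interval constructed in this proof is easy to test numerically. A minimal Python sketch (the values of \(r\), \(M\) and \(r^{\prime}\) are only illustrative and anticipate Example 6.1.7):

```python
from fractions import Fraction
from math import ceil

r = Fraction(1, 2)          # r = omega/alpha
M = 9
alpha = r.denominator
r_M = r - Fraction(1, alpha * M)    # left end of the interval (r_M, r]

r_prime = Fraction(5, 11)           # any rational in (r_M, r], cf. Example 6.1.7
assert r_M < r_prime <= r

# ceil(r*l) and ceil(r'*l) agree for every 0 <= l <= M
assert all(ceil(r * l) == ceil(r_prime * l) for l in range(M + 1))
print("ceilings agree for all l <= M")
```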
**Lemma 6.1.5**.: _Let \(N\colon\mathbb{Z}\to\mathbb{Z}\), \(N(\ell)=b_{0}\ell-\sum_{i=1}^{n}s_{i}\lceil\omega_{i}\ell/\alpha_{i}\rceil\) be a quasi-linear function of a representable semigroup. Then for every \(M\in\mathbb{N}\) there exists a modification \(N^{\prime}\colon\mathbb{Z}\to\mathbb{Z},N^{\prime}(\ell)=b_{0}\ell-\sum_{i=1} ^{n}s_{i}\lceil\omega_{i}^{\prime}\ell/\alpha_{i}^{\prime}\rceil\) ( \(\omega_{i}^{\prime},\alpha_{i}^{\prime}\in\mathbb{Z}_{>0}\), \(0<\omega_{i}^{\prime}<\alpha_{i}^{\prime}\) and \(\gcd(\omega_{i}^{\prime},\alpha_{i}^{\prime})=1\)), such that \(\alpha_{1}^{\prime},\ldots,\alpha_{n}^{\prime}\) are pairwise relatively prime, \(\gcd(\alpha_{i}^{\prime},s_{i})=1\) and it satisfies_
\[N^{\prime}(\ell)=N(\ell)\ \text{ for every }\ell\in\mathbb{N}\text{ and }\ell\leq M.\]
Proof.: Let \(\Gamma\) be the graph corresponding to \(N(\ell)\). We will say that the \(i\)-th block of \(\Gamma\) consists of the \(s_{i}\) legs with Seifert invariant \((\alpha_{i},\omega_{i})\). For a fixed \(M\in\mathbb{N}^{*}\) we will construct the modification by induction on \(i\).
First we describe the inductive step. Assume that the \((i-1)\)-th block is already modified. This means that the new Seifert invariants \((\alpha_{1}^{\prime},\omega_{1}^{\prime}),\ldots,(\alpha_{i-1}^{\prime},\omega_ {i-1}^{\prime})\) are constructed in such a way that the \(\alpha_{t}^{\prime}\) are pairwise relatively prime and \(\gcd(\alpha_{t}^{\prime},s_{t})=1\) for any \(t\in\{1,\ldots,i-1\}\).
Then, for a large enough \(k\in\mathbb{N}\), by Lemma 6.1.3 one finds a rational number \(r^{\prime}\in(r_{M,i}^{\prime},\omega_{i}/\alpha_{i})\) of the form \(r^{\prime}=x/(k\alpha_{1}^{\prime}\ldots\alpha_{i-1}^{\prime}s_{i}+1)\) (\(x\in\mathbb{N}\)) satisfying \(\lceil r^{\prime}\ell\rceil=\lceil\omega_{i}\ell/\alpha_{i}\rceil\) for any \(\ell\leq M\). This, written as \(\omega_{i}^{\prime}/\alpha_{i}^{\prime}:=r^{\prime}\) (where \(\gcd(\omega_{i}^{\prime},\alpha_{i}^{\prime})=1\)) gives us the perturbation. Since \(\alpha_{i}^{\prime}\) is a divisor of \(k\alpha_{1}^{\prime}\ldots\alpha_{i-1}^{\prime}s_{i}+1\), it is relatively prime to all the \(\alpha_{t}^{\prime}\) with \(t\leq i-1\) and \(s_{i}\). In this way, we get the \(i\)-th block of the modified graph with \(s_{i}\) legs, all having the Seifert invariant \((\alpha_{i}^{\prime},\omega_{i}^{\prime})\).
For \(i=1\) we can start by distinguishing two cases:
I. If \(\gcd(\alpha_{1},s_{1})=1\) then we do not modify them and we set \((\alpha_{1}^{\prime},\omega_{1}^{\prime}):=(\alpha_{1},\omega_{1})\).
II. If \(\gcd(\alpha_{1},s_{1})\neq 1\) then we do the same as in the inductive step. Therefore, we get a rational number of the form \(r^{\prime}=x/(ks_{1}+1)\) (\(x\in\mathbb{N}\)) satisfying \(\lceil r^{\prime}\ell\rceil=\lceil\omega_{1}\ell/\alpha_{1}\rceil\) for any \(\ell\leq M\), which provides \(\omega_{1}^{\prime}/\alpha_{1}^{\prime}:=r^{\prime}\).
By the construction it follows that \(N(\ell)=N^{\prime}(\ell)\) for any \(\ell\leq M\).
**Theorem 6.1.6**.: _A numerical semigroup is representable if and only if it is a quotient of a flat semigroup._
Proof.: By Lemma 4.1.5 and Theorem 5.1.3 it follows that a quotient of a flat semigroup is representable. For the reverse, let \(\mathcal{S}\) be a representable semigroup and consider one of its representants \(\Gamma\), defined by the Seifert invariants \((-b_{0},s_{1}\times(\alpha_{1},\omega_{1}),\ldots,s_{n}\times(\alpha_{n}, \omega_{n}))\). Let \(M\) be the maximum of the largest generator (in the minimal set of generators) and the Frobenius number \(f_{\mathcal{S}}\), and apply Lemma 6.1.5 for this fixed \(M\). Then we get a new graph \(\Gamma^{\prime}\) defined by the perturbed Seifert invariants \((-b_{0},s_{1}\times(\alpha_{1}^{\prime},\omega_{1}^{\prime}),\ldots,s_{n} \times(\alpha_{n}^{\prime},\omega_{n}^{\prime}))\) satisfying that \(\alpha_{1}^{\prime},\ldots,\alpha_{n}^{\prime}\) are pairwise relatively prime integers and \(\gcd(\alpha_{i}^{\prime},s_{i})=1\). Moreover, the associated semigroup \(\mathcal{S}_{\Gamma^{\prime}}\) equals \(\mathcal{S}\). Indeed, the identity \(N(\ell)=N^{\prime}(\ell)\), valid for all the integers up to the Frobenius number \(f_{\mathcal{S}}\), implies that \(\mathcal{S}_{\Gamma^{\prime}}\subset\mathcal{S}\). On the other hand, \(N(\ell)=N^{\prime}(\ell)\) for all the integers up to the largest generator gives that \(\mathcal{S}\subset\mathcal{S}_{\Gamma^{\prime}}\). Finally, Theorem 6.1.1 applied to \(\Gamma^{\prime}\) clarifies the statement.
**Example 6.1.7**.: Consider the non-flat semigroup \(G(4,6,7,9)\). We claim that it is representable and one of its representants is given by the graph with Seifert invariants \(Sf=(-2,2\times(2,1),2\times(4,1),(5,1))\) (the calculations were performed using GAP[GAP]). Following our previous discussions, a perturbation of the Seifert invariants can be as follows.
Let us first consider the two legs with \((2,1)\). Note that \(M=9\) and \((2,1)\) can be changed to eg. \((11,5)\) since \(\frac{5}{11}\in(\frac{1}{2}-\frac{1}{18}=\frac{4}{9},\frac{1}{2})\). Similarly, \((4,1)\) can be changed to \((13,3)\). These perturbations give us a new graph defined by the Seifert invariants \(Sf=(-2,2\times(11,5),2\times(13,3),(5,1))\) which has \(\mathfrak{o}=307\) and satisfies the assumptions of Theorem 6.1.1. Finally, we have
\[G(4,6,7,9)=\frac{1}{307}G(110,130,143).\]
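This identity can be verified directly on any finite range: an integer \(x\) belongs to the quotient \(\frac{1}{307}G(110,130,143)\) exactly when \(307x\in G(110,130,143)\). A small Python sketch (our own helper; the bound is an arbitrary cut-off) compares this with membership in \(G(4,6,7,9)\):

```python
def members(gens, bound):
    """Elements of the numerical semigroup generated by gens, up to bound."""
    reachable = [False] * (bound + 1)
    reachable[0] = True
    for x in range(1, bound + 1):
        reachable[x] = any(x >= g and reachable[x - g] for g in gens)
    return {x for x in range(bound + 1) if reachable[x]}

LIMIT = 60
big = members((110, 130, 143), 307 * LIMIT)            # G(110,130,143)
quotient = {x for x in range(LIMIT + 1) if 307 * x in big}
small = members((4, 6, 7, 9), LIMIT)                   # G(4,6,7,9)
print(quotient == small)                               # True
```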
### Further speculations and remarks
As a conclusion, we would like to propose a general question and emphasize some directions opened by the problem studied in this article.
In the theory of numerical semigroups one knows by [11] (see also [11]) that every numerical semigroup can be presented as one half of a symmetric numerical semigroup. More generally, for a fixed \(d\geq 2\), every semigroup can be presented as one over \(d\) of a symmetric numerical semigroup, cf. [23]. Note that the Example 4.1.7 is also an example for the construction given by Rosales and Garcia-Sanchez in [11].
As a consequence, by applying Lemma 4.1.5, it follows that if we can represent every symmetric semigroup as a semigroup associated with a weighted homogeneous surface singularity, then every numerical semigroup is representable. This approach naturally poses the following question.
**Question 6.2.1**.: _Is there a symmetric numerical semigroup which can not be represented as a semigroup associated with a weighted homogeneous surface singularity with \(\mathbb{Q}HS^{3}\) link?_
To the best of the authors' knowledge, there is no good understanding of how the symmetry property of semigroups manifests at the level of the representants, or how it fits into the 'flat' classification scheme of Raczunas and Chrząstowski-Wachtel.
For example, one can construct representable semigroups which are not symmetric, but they have a numerically Gorenstein representant:
**Example 6.2.2** (Non-symmetric numerically Gorenstein case [11]).: Let \(\Gamma\) be the graph defined by the Seifert invariants
\[Sf=(-2,(2,1),(2,1),(3,1),(3,1),(7,1),(7,1),(84,1)).\]
Then \(Z_{K}=(86,43,43,29,29,13,13,2)\), hence \(\Gamma\) is numerically Gorenstein. In addition, one can compute that \(\gamma=85\), \(1/|e|=28\) and \(\check{s}=28\) (using 2.1.3), thus the Frobenius number equals \(f(\mathcal{S}_{\Gamma})=85\). Moreover, \(N(6)=N(85-6)=-1\), which implies \(6,f(\mathcal{S}_{\Gamma})-6\notin\mathcal{S}_{\Gamma}\), hence it is not symmetric.
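The values quoted in this example follow directly from the quasi-linear function \(N(\ell)=b_{0}\ell-\sum_{i}s_{i}\lceil\omega_{i}\ell/\alpha_{i}\rceil\); a short Python check with the Seifert data above:

```python
# Sf = (-2, (2,1), (2,1), (3,1), (3,1), (7,1), (7,1), (84,1))
b0 = 2
legs = [(2, 1), (2, 1), (3, 1), (3, 1), (7, 1), (7, 1), (84, 1)]

def N(l):
    # integer ceiling (om*l + al - 1)//al avoids floating point issues
    return b0 * l - sum((om * l + al - 1) // al for al, om in legs)

print(N(6), N(85 - 6))   # -1 -1, so 6 and 79 are not in S_Gamma
```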
On the other hand, using the proof of Theorem 3.4.3 one can construct numerical semigroups which are symmetric but not flat, as shown by the next example. Therefore, the set of symmetric semigroups is strictly larger than that of the flat ones.
**Example 6.2.3** (Symmetric but not flat).: Let \(\Gamma\) be the graph defined by the Seifert invariants
\[Sf=(-2,(35,13),(35,13),(21,13),(21,13)).\]
Then one can verify eg. with [GAP] that \(\mathcal{S}_{\Gamma}=G(8,21,35)\). Hence \(\mathcal{S}_{\Gamma}\) is not a flat but an almost flat semigroup. However, it is symmetric.
|
2309.09230 | Cosmological Study in $F(R, T)$ Quasi-dilaton Massive Gravity | This study explores the cosmological implications of the $F(R, T)$
quasi-dilaton massive gravity theory, a modification of the de
Rham-Gabadadze-Tolley (dRGT) massive gravity theory. Our analysis focuses on
the self-accelerating solution of the background equations of motion, which are
shown to exist in the theory. Notably, we find that the theory features an
effective cosmological constant, which has important implications for our
understanding of the universe's accelerated expansion. To test the viability of
the $F(R, T)$ quasi-dilaton massive gravity theory, we utilize the Union2 Type
Ia Supernovae (SNIa) dataset, comprising 557 observations. Our results
demonstrate that the theory is capable of explaining the accelerated expansion
of the universe without requiring the presence of dark energy. This finding
supports the potential of the $F(R, T)$ quasi-dilaton massive gravity theory as
an alternative explanation for the observed cosmic acceleration. Moreover, we
investigate the properties of tensor perturbations within the framework of this
theory and derive a novel expression for the dispersion relation of
gravitational waves. Our analysis reveals interesting features of the modified
dispersion relation in the Friedmann-Lema\^itre-Robertson-Walker cosmology,
providing new insights into the nature of gravitational waves in the context of
the $F(R, T)$ quasi-dilaton massive gravity theory. | Sobhan Kazempour, Amin Rezaei Akbarieh | 2023-09-17T10:02:05Z | http://arxiv.org/abs/2309.09230v1 | # Cosmological Study in \(F(r,t)\) Quasi-dilaton Massive Gravity
###### Abstract
This study explores the cosmological implications of the \(F(R,T)\) quasi-dilaton massive gravity theory, a modification of the de Rham-Gabadadze-Tolley (dRGT) massive gravity theory. Our analysis focuses on the self-accelerating solution of the background equations of motion, which are shown to exist in the theory. Notably, we find that the theory features an effective cosmological constant, which has important implications for our understanding of the universe's accelerated expansion. To test the viability of the \(F(R,T)\) quasi-dilaton massive gravity theory, we utilize the Union2 Type Ia Supernovae (SNIa) dataset, comprising 557 observations. Our results demonstrate that the theory is capable of explaining the accelerated expansion of the universe without requiring the presence of dark energy. This finding supports the potential of the \(F(R,T)\) quasi-dilaton massive gravity theory as an alternative explanation for the observed cosmic acceleration. Moreover, we investigate the properties of tensor perturbations within the framework of this theory and derive a novel expression for the dispersion relation of gravitational waves. Our analysis reveals interesting features of the modified dispersion relation in the Friedmann-Lemaitre-Robertson-Walker cosmology, providing new insights into the nature of gravitational waves in the context of the \(F(R,T)\) quasi-dilaton massive gravity theory.
## I Introduction
It is well known that the origin of the late-time accelerated expansion of the Universe, which is confirmed by several observational studies, is unknown and is considered one of the biggest puzzles in cosmology [1; 2; 3; 4; 5; 6; 7]. Although the general theory of relativity has many successful predictions [8; 9; 10], it can explain the late-time accelerated expansion of the Universe only by invoking a cosmological constant or a dark energy component. Since the origin of dark energy is not clear and the cosmological constant suffers from well-known problems [11; 12; 13], there has been a tendency towards modified gravity theories throughout the years [14; 15; 16; 17; 18; 19; 20; 21; 22].
According to modern particle physics, general relativity is a unique theory of a massless Lorentz-invariant spin-2 particle [23]. Modifying gravity through the massive gravity theory offers a valuable approach to understanding the nature of gravity and its interactions. In this framework, a spin-2 massive graviton is thought to mediate the propagation of gravity, providing a fresh perspective on the fundamental laws of physics. Investigating the properties and behavior of such a graviton could lead to new insights into the structure of the universe and potentially reveal novel phenomena. It should be mentioned that a number of attempts have been made to describe graviton and its interactions to explain the current accelerated expansion of the Universe. In fact, the main effort is introducing a theory that would be stable and consistent with cosmological observations [24; 25; 26; 27; 28; 29].
The introduction of a massive spin-2 field theory was first successfully achieved by Fierz and Pauli in 1939. They developed a Lorentz invariant linear theory that included consistent interaction terms, which were later interpreted as a graviton mass. In fact, their theory includes a specific combination of the mass terms to have five physical degrees of freedom [30]. The theory of massive gravity has undergone a lot of changes over the years. In 1970, van Dam, Veltman, and Zakharov showed that the Fierz and Pauli theory could not reduce to general relativity (the massless theory) in the limit of \(m_{g}\longrightarrow 0\). This discontinuity in predictions is called the vDVZ discontinuity [31; 32]. Subsequently, Vainshtein presented a way to avoid the vDVZ discontinuity. He proposed that one should consider nonlinear completions of the Fierz-Pauli term instead of the linear one [33]. On the other hand, Boulware and Deser claimed that there is a ghost instability in the non-linear theory, which is called the Boulware-Deser ghost [34; 35; 36]. In line with these efforts to revive massive gravity theory, in 2010 de Rham, Gabadadze, and Tolley (dRGT) introduced a fully non-linear massive gravity theory without the Boulware-Deser ghost. Their research proposes a novel approach to understanding the nature of gravity by introducing nonlinear interactions that could reveal the presence of a massive spin-2 field in a flat spacetime [25; 27]. However, this theory admits solely an open Friedmann-Lemaitre-Robertson-Walker (FLRW) solution. In fact, there are no stable solutions for a homogeneous and isotropic Universe [37; 38]. Therefore, there is enough motivation to present new extended theories to find stable self-accelerating solutions in the context of massive gravity theory [39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55].
In this work, we aim to introduce a novel extension of the non-linear dRGT massive gravity theory, the \(F(R,T)\) quasi-dilaton massive gravity theory, which endeavors to provide a fresh perspective on explaining the late-time accelerated expansion of the universe within the framework of FLRW cosmology. It is interesting to note that the quasi-dilaton part of the action introduces an extra scalar degree of freedom to the dRGT theory [41]. But, because
of instabilities, some extensions of quasi-dilaton massive gravity theory have been introduced [42; 44; 45; 51]. Moreover, the \(F(R,T)\) gravity theory is an intriguing modification of traditional gravity theories, where \(R\) represents the curvature scalar and \(T\) denotes the torsion scalar. Notice that in this theory it could be possible to reproduce the unification of \(F(R)\) and \(F(T)\) gravity theories [57; 58]. We can point out a variety of extensions of the \(F(R,T)\) gravity model which have been considered by several researchers [59; 60; 61; 62; 63; 64].
In order to constrain the parameters of the theory we consider the supernova I-a observational data. The theory is compared with these data using a Bayesian statistical method based on minimizing \(X^{2}\). Moreover, the dispersion relation of gravitational waves can be used to impose constraints on the theory. In addition, the analysis of the dispersion relation of gravitational waves enters the phase evolution of the gravitational waveform.
The goals of the paper are to present a new extension of the massive gravity theory and find a stable self-accelerating solution to explain the late-time accelerated expansion of the Universe. Likewise, we try to present the constraints tools using cosmological data analysis and theoretical aspects of gravitational waves. The outline of this paper is organized as follows. In Sec. II, we present the new extension of massive gravity theory which is \(F(R,T)\) quasi-dilaton massive gravity theory. We obtain the background equations of motion and self-accelerating solution. In Sec. III, we constrain the parameters of the \(F(R,T)\) quasi-dilaton massive gravity theory using supernova I-a data by considering the Bayesian statistic technique. In Sec. IV, we undertake a perturbation analysis to derive the dispersion relationship of gravitational waves in the context of FLRW cosmology. In Sec. V, we indicate the conclusion. In this paper, we consider natural units, where \(c=\hbar=1\) and \(M_{Pl}^{2}=8\pi G=1\), where \(G\) is Newton's gravitational constant.
## II \(F(R,t)\) quasi-dilaton massive gravity
In this section, we present the new extension of the massive gravity theory. This theory is constructed by considering the \(F(R,T)\) modified gravity. In the following, we show the cosmological background equations and self-accelerating solution.
The total action is:
\[S=\frac{1}{2}\int d^{4}x\Bigg{\{}\sqrt{-g}\bigg{[}F(R,T)-\omega \partial_{\mu}\sigma\partial^{\mu}\sigma+2m_{g}^{2}U(\mathcal{K})\bigg{]} \Bigg{\}},\]
where \(g\) is the determinant of the physical dynamical metric \(g_{\mu\nu}\), \(R\) is the curvature scalar, \(T\) is the torsion scalar, \(\omega\) is a dimensionless constant, \(\sigma\) is the quasi-dilaton scalar, and the last part of the action is the massive gravity term. We consider the following form of \(F(R,T)\) [57; 58],
\[F(R,T)=\xi R+\beta T, \tag{2}\]
where \(\xi\) and \(\beta\) are the constants. Also, \(R\) and \(T\) can be expressed as:
\[R=g^{\mu\nu}R_{\mu\nu}+u,\] \[T=S_{\rho}^{\ \mu\nu}T^{\rho}_{\ \mu\nu}+v, \tag{3}\]
We consider the FLRW spacetime, for which \(u\) and \(v\) are defined as
\[u=6\dot{j}+18\frac{\dot{a}}{aN}j+6j^{2}-3b^{2},\] \[v=6\big{(}j^{2}-b^{2}-(\frac{\dot{a}}{aN})^{2}\big{)}, \tag{4}\]
also, we know
\[S_{\rho}^{\ \mu\nu}=\frac{1}{2}(C_{\rho}^{\ \mu\nu}+\delta_{\rho}^{\mu}T_{ \theta}^{\ \theta\nu}-\delta_{\rho}^{\nu}T_{\theta}^{\ \theta\mu}), \tag{5}\]
where \(C\) is the contorsion [58]. In Eq. (4), \(b\) and \(j\) are real functions and \(N\) is the lapse function of the dynamical metric [65]. It is obvious that \(a\) is the scale factor and \(\dot{a}\) is its derivative with respect to time. Moreover, the orthonormal tetrad components \(e^{i}_{\mu}\) are related to the metric as \(g_{\mu\nu}=\eta_{ij}e^{i}_{\mu}e^{j}_{\nu}\). As we mentioned before, \(\sigma\) is the quasi-dilaton scalar and \(\omega\) is a dimensionless constant. According to Ref. [41], the theory is invariant under a global dilation transformation, \(\sigma\rightarrow\sigma+\sigma_{0}\).
It should be noted that the potential \(U\), which gives rise to the graviton mass \(m_{g}\), consists of three parts, i.e.
\[U(\mathcal{K})=U_{2}+\alpha_{3}U_{3}+\alpha_{4}U_{4}, \tag{6}\]
with \(\alpha_{3}\) and \(\alpha_{4}\) dimensionless free parameters. Therefore, the potential terms can be expressed as [25]
\[U_{2} = \frac{1}{2}\big{(}[\mathcal{K}]^{2}-[\mathcal{K}^{2}]\big{)},\] \[U_{3} = \frac{1}{6}\big{(}[\mathcal{K}]^{3}-3[\mathcal{K}][\mathcal{K}^{ 2}]+2[\mathcal{K}^{3}]\big{)},\] \[U_{4} = \frac{1}{24}\big{(}[\mathcal{K}]^{4}-6[\mathcal{K}]^{2}[\mathcal{ K}^{2}]+8[\mathcal{K}][\mathcal{K}^{3}]+3[\mathcal{K}^{2}]^{2}-6[ \mathcal{K}^{4}]\big{)},\]
where "\([\cdot]\)" is interpreted as the trace of the tensor inside the brackets. Moreover, the building block tensor \(\mathcal{K}\) is defined as
\[\mathcal{K}^{\mu}_{\ \nu}=\delta^{\mu}_{\ \nu}-e^{\sigma}\big{(}\sqrt{g^{-1}f} \big{)}^{\mu}_{\ \nu}, \tag{8}\]
where \(f_{\alpha\nu}\) is the fiducial metric which is defined through
\[f_{\alpha\nu}=\partial_{\alpha}\phi^{c}\partial_{\nu}\phi^{d} \eta_{cd}, \tag{9}\]
where \(\eta_{cd}\) is the Minkowski metric (\(c,d=0,1,2,3\)) and \(\phi^{c}\) are the Stueckelberg fields, which are introduced to restore general covariance [25; 41].
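For concrete (diagonal) metrics, the building block tensor of Eq. (8) and the potential of Eqs. (6)-(7) can be evaluated numerically. The following sketch is only an illustration of these formulas (it uses NumPy and SciPy; all numerical values of \(N\), \(a\), \(\dot{f}\), \(\sigma\), \(\alpha_{3}\), \(\alpha_{4}\) are placeholders and not solutions of the theory):

```python
import numpy as np
from scipy.linalg import sqrtm

def potential_U(K, alpha3, alpha4):
    """U_2, U_3, U_4 of Eq. (7) from traces of powers of the building block tensor K."""
    k1 = np.trace(K)
    k2 = np.trace(K @ K)
    k3 = np.trace(K @ K @ K)
    k4 = np.trace(K @ K @ K @ K)
    U2 = (k1**2 - k2) / 2
    U3 = (k1**3 - 3 * k1 * k2 + 2 * k3) / 6
    U4 = (k1**4 - 6 * k1**2 * k2 + 8 * k1 * k3 + 3 * k2**2 - 6 * k4) / 24
    return U2 + alpha3 * U3 + alpha4 * U4       # U of Eq. (6)

# placeholder FLRW-like diagonal metrics (illustrative numbers only)
N, a, fdot, sigma = 1.0, 2.0, 1.0, 0.3
g = np.diag([-N**2, a**2, a**2, a**2])
f = np.diag([-fdot**2, 1.0, 1.0, 1.0])

# K^mu_nu = delta^mu_nu - e^sigma (sqrt(g^{-1} f))^mu_nu, Eqs. (8)-(9)
K = np.eye(4) - np.exp(sigma) * np.real(sqrtm(np.linalg.inv(g) @ f))
print(potential_U(K, alpha3=1.0, alpha4=0.5))
```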
### Background cosmological evolution
At this stage, we would like to consider the theory in an FLRW background. The dynamical metric, the vierbein, and the fiducial metric are expressed as
\[g_{\mu\nu}=\mathrm{diag}\left[-N^{2},a^{2},a^{2},a^{2}\right], \tag{10}\]
\[e_{\mu}^{A}=\mathrm{diag}\left[N,a,a,a\right], \tag{11}\]
\[f_{\mu\nu}=\mathrm{diag}\left[-\dot{f}(t)^{2},1,1,1\right], \tag{12}\]
As mentioned before, \(a\) is the scale factor and \(N\) is the lapse function of the dynamical metric, which relates the coordinate time \(dt\) to the proper time \(d\tau\) through \(d\tau=Ndt\) [66; 67]. Also, \(f(t)\) is the Stueckelberg scalar function, with \(\phi^{0}=f(t)\) and \(\frac{\partial\phi^{0}}{\partial t}=\dot{f}(t)\) [35].
Thus, we obtain the total pointlike Lagrangian in FLRW cosmology
\[\mathcal{L}=\bigg{[}\frac{-3a\dot{a}^{2}\xi}{N}+\frac{3a^{2}}{2}\bigg{(}a\big{(}2j^{2}(\beta+\xi)-b^{2}(2\beta+\xi)+2\xi\dot{j}\big{)}N+6\xi j\dot{a}\bigg{)}\bigg{]}+\frac{\omega a^{3}}{2N}\dot{\sigma}^{2}+m_{g}^{2}\bigg{\{}Na^{3}(Y-1)\Big{[}3(Y-2)-(Y-4)(Y-1)\alpha_{3}-(Y-1)^{2}\alpha_{4}\Big{]}+\dot{f}(t)a^{4}Y(Y-1)\Big{[}3-3(Y-1)\alpha_{3}+(Y-1)^{2}\alpha_{4}\Big{]}\bigg{\}}, \tag{13}\]
where
\[Y\equiv\frac{e^{\sigma}}{a}. \tag{14}\]
It is necessary to notice that the gauge transformations remove the unphysical fields from the theory at the classical level [68]. Therefore, we adopt the unitary gauge, i.e., \(f(t)=t\). Then, a constraint equation is obtained by varying with respect to \(f\),
\[\frac{\delta\mathcal{L}}{\delta f}=m_{g}^{2}\frac{d}{dt}\left\{a^{4}Y(Y-1)[3- 3(Y-1)\alpha_{3}+(Y-1)^{2}\alpha_{4}]\right\}=0. \tag{15}\]
We obtain the modified Friedmann equation by considering the variation of the pointlike Lagrangian with respect to the lapse function \(N\),
\[\frac{1}{a^{3}}\frac{\delta\mathcal{L}}{\delta N}=3H^{2}\xi-\frac {\omega}{2}\big{(}H+\frac{\dot{Y}}{YN}\big{)}^{2}+\frac{3}{2}\big{(}2j^{2}( \beta+\xi)\] \[\qquad-b^{2}(2\beta+\xi)+2\xi\dot{j}\big{)}-m_{g}^{2}(Y-1)\bigg{[} (Y-4)(Y\] \[\qquad\qquad\qquad-1)\alpha_{3}+(Y-1)^{2}\alpha_{4}-3(Y-2)\bigg{]}=0. \tag{16}\]
The equation of motion related to the scale factor \(a\) is given by
\[\frac{1}{6a^{2}N}\frac{\delta\mathcal{L}}{\delta a}=\frac{1}{4} \bigg{(}6j^{2}\big{(}\xi+\beta\big{)}-3b^{2}\big{(}2\beta+\xi\big{)}+6\xi\] \[\quad+\alpha_{4}-(2+r)(3+3\alpha_{3}+\alpha_{4})Y+(2r+1)(1\] \[\qquad\qquad\qquad+2\alpha_{3}+\alpha_{4})Y^{2}-r(\alpha_{3}+ \alpha_{4})Y^{3}\bigg{)}=0. \tag{17}\]
By varying the pointlike Lagrangian with respect to \(\sigma\), the equation of motion corresponding to the scalar field is obtained by
\[\frac{1}{a^{3}N}\frac{\delta\mathcal{L}}{\delta\sigma}=-3\frac{H^ {2}\omega}{N}+m_{g}^{2}Y\Bigg{\{}6(r+1)\big{(}\alpha_{4}+2\alpha_{3}\] \[\qquad\qquad\qquad\qquad+\alpha_{3})Y^{2}+4r\alpha_{4}Y^{3} \Bigg{\}}=0, \tag{18}\]
where \(r\equiv\frac{a}{N}\) and \(H\equiv\frac{\dot{a}}{Na}\). Note that the equations below are obtained by considering the notation in Eq. (14).
\[\frac{\dot{\sigma}}{N}=H+\frac{\dot{Y}}{NY},\qquad\ddot{\sigma}=\frac{d}{dt} \Big{(}NH+\frac{\dot{Y}}{Y}\Big{)}. \tag{19}\]
According to the Stueckelberg field \(f\), which introduces time reparametrization invariance, the Bianchi identity relates the four equations of motion. Therefore, the equation of motion associated with the scale factor is redundant and we disregard it.
\[\frac{\delta S}{\delta\sigma}\dot{\sigma}+\frac{\delta S}{\delta f}\dot{f}-N \frac{d}{dt}\frac{\delta S}{\delta N}+\dot{a}\frac{\delta S}{\delta a}=0. \tag{20}\]
It should be pointed out that in particular conditions, all of the background equations and total Lagrangian reduce to those in Refs. [41; 45].
### Self-accelerating background solutions
In order to evaluate self-accelerating solutions in the context of this new extension, we integrate the Stueckelberg constraint equation (15). Therefore, we have
\[Y(Y-1)\bigg{[}3-3(Y-1)\alpha_{3}+(Y-1)^{2}\alpha_{4}\bigg{]}\propto a ^{-4}. \tag{21}\]
Our aim is to explain the accelerated expansion of the Universe in the context of the \(F(R,T)\) quasi-dilaton massive gravity theory. A constant solution of the above equation leads to an effective energy density which behaves like a cosmological constant. In an expanding universe, the right-hand side of Eq. (21) decays as \(a^{-4}\), so after a sufficiently long time the left-hand side of the equation must vanish. Hence, \(Y\) saturates to the constant value \(Y_{\rm SA}\), which is a root of the left-hand side of Eq. (21).
\[(Y-1)\big{[}3-3(Y-1)\alpha_{3}+(Y-1)^{2}\alpha_{4}\big{]}\Big{|}_{Y=Y_{\rm SA}}=0. \tag{22}\]
The constraint (21) admits four distinct constant solutions. The obvious solution \(Y=0\), however, implies \(\sigma\longrightarrow-\infty\). As this solution would be multiplied by the perturbations of the auxiliary scalars, we encounter strong coupling in the vector and scalar sectors. So, this solution has to be disregarded to avoid strong coupling [41]. Furthermore, the solution \(Y=1\) must be eliminated because it leads to a vanishing cosmological constant and to inconsistencies [41]. As a result, there are solely two solutions, which are given by
\[Y_{\rm SA}^{\pm}=\frac{3\alpha_{3}+2\alpha_{4}\pm\sqrt{9\alpha_{ 3}^{2}-12\alpha_{4}}}{2\alpha_{4}}. \tag{23}\]
Using Eq. (16) and Eq. (23), we can obtain the modified Friedmann equation as below,
\[\bigg{[}3H^{2}\xi+\frac{3}{2}\big{(}2j^{2}(\beta+\xi)-b^{2}(2\beta+\xi)+2\xi\dot{j}\big{)}-\frac{\omega}{2}H^{2}\bigg{]}=\Lambda_{\rm SA}^{\pm}, \tag{24}\]
where we have the effective cosmological constant corresponding to the massive part of the action
\[\Lambda_{\rm SA}^{\pm}\equiv m_{g}^{2}(Y_{\rm SA}^{\pm}-1)\Big{[} 6-3Y_{\rm SA}^{\pm}+(Y_{\rm SA}^{\pm}-4)(Y_{\rm SA}^{\pm}\] \[-1)\alpha_{3}+(Y_{\rm SA}^{\pm}-1)^{2}\alpha_{4}\Big{]}.\]
The above equation should be re-written by considering Eq. (23),
\[\Lambda_{\rm SA}^{\pm}=\frac{3m_{g}^{2}}{2\alpha_{4}^{3}}\bigg{[} 9\alpha_{3}^{4}\pm 3\alpha_{3}^{3}\sqrt{9\alpha_{3}^{2}-12\alpha_{4}}-18 \alpha_{3}^{2}\alpha_{4}\] \[\mp 4\alpha_{3}\alpha_{4}\sqrt{9\alpha_{3}^{2}-12\alpha_{4}}+6 \alpha_{4}^{2}\bigg{]}. \tag{25}\]
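Equations (23) and (25) are straightforward to evaluate numerically. The following Python sketch (illustrative parameter values; it requires \(9\alpha_{3}^{2}\geq 12\alpha_{4}\) for real solutions) computes \(Y_{\rm SA}^{\pm}\) and checks that the compact form (25) agrees with the defining expression for \(\Lambda_{\rm SA}^{\pm}\) given above:

```python
from math import sqrt

def self_accelerating_branch(alpha3, alpha4, mg2=1.0, sign=+1):
    """Y_SA^+/- from Eq. (23) and Lambda_SA^+/- evaluated in two equivalent ways."""
    D = sqrt(9 * alpha3**2 - 12 * alpha4)            # real for 9*alpha3^2 >= 12*alpha4
    Y = (3 * alpha3 + 2 * alpha4 + sign * D) / (2 * alpha4)
    # defining expression for the effective cosmological constant
    lam_def = mg2 * (Y - 1) * (6 - 3 * Y
                               + (Y - 4) * (Y - 1) * alpha3
                               + (Y - 1)**2 * alpha4)
    # closed form of Eq. (25)
    lam_25 = (3 * mg2 / (2 * alpha4**3)) * (9 * alpha3**4 + sign * 3 * alpha3**3 * D
                                            - 18 * alpha3**2 * alpha4
                                            - sign * 4 * alpha3 * alpha4 * D
                                            + 6 * alpha4**2)
    return Y, lam_def, lam_25

for s in (+1, -1):
    Y, l1, l2 = self_accelerating_branch(alpha3=1.0, alpha4=0.5, sign=s)
    print(f"branch {s:+d}:  Y_SA = {Y:.4f},  Lambda = {l1:.4f}  (Eq. 25: {l2:.4f})")
```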
Hence, from Eq. (24), \(H^{2}\) is given by
\[H^{2}=\frac{-6j^{2}(\xi+\beta)+3b^{2}(\xi+2\beta)-6\xi\dot{j}+2\Lambda_{\rm SA }^{\pm}}{6\xi-\omega}. \tag{26}\]
At the end of this subsection, we obtain \(r_{\rm SA}\) using Eq. (18),
\[r_{\rm SA}=1+\frac{H^{2}\omega}{m_{g}^{2}Y_{\rm SA}^{2\pm}\big{(}\alpha_{3}Y_{ \rm SA}^{\pm}-\alpha_{3}-2\big{)}}. \tag{27}\]
Here, it should be noted that we substituted \(\alpha_{4}\) using the Stueckelberg equation (21). As the main result of this subsection, we have shown that this theory admits self-accelerating solutions with an effective cosmological constant, \(\Lambda_{\rm SA}^{\pm}\). Likewise, it is shown that there is no strong coupling, and the theory possesses a well-behaved self-accelerating solution that can explain the accelerated expansion of the Universe.
## III Cosmological data
Observations of type I-a supernovae revealed the accelerated expansion of the Universe [11; 69; 70; 71]. In the following, using the Union2 supernovae I-a dataset consisting of 557 SNIa [72], we assess the \(F(R,T)\) quasi-dilaton massive gravity theory. The main goal is to constrain the theory using the cosmological data. The SNIa data are expressed in terms of the observed distance modulus \(\mu_{\rm obs}\), which can be compared with the prediction of the model,
\[\mu_{\rm th}(z_{i})=5\log_{10}D_{L}(z_{i})+\mu_{0}, \tag{28}\]
note that \(\mu_{0}=42.38-5\log_{10}\mathcal{H}\), where \(\mathcal{H}\) is the Hubble constant \(H_{0}\) in units of \(100\,\rm km/s/Mpc\); the luminosity distance is given by
\[D_{L}(z)=(1+z)\int_{0}^{z}\frac{dx}{E(x;p)}, \tag{29}\]
where \(E=H/H_{0}\) and \(p\) denotes the model parameters. The \(X^{2}\) is given by
\[X^{2}_{\mu}(p)=\sum_{i}\frac{[\mu_{\rm obs}(z_{i})-\mu_{\rm th}(z_{i})]^{2}}{ \sigma^{2}(z_{i})}, \tag{30}\]
here \(\sigma(z_{i})\) corresponds to the \(1\sigma\) error and \(\mu_{0}\) is a nuisance parameter independent of the data points. The \(X^{2}_{\mu}\) in Eq. (30) can be expanded with respect to \(\mu_{0}\) as [73; 74]
\[X^{2}_{\mu}(p)=\tilde{A}-2\mu_{0}\tilde{B}+\mu_{0}^{2}\tilde{C}, \tag{31}\]
where
\[\tilde{A}(p)=\sum_{i}\frac{[\mu_{\rm obs}(z_{i})-\mu_{\rm th}(z_{i}; \mu_{0}=0,p)]^{2}}{\sigma_{\mu_{\rm obs}}^{2}(z_{i})},\] \[\tilde{B}(p)=\sum_{i}\frac{\mu_{\rm obs}(z_{i})-\mu_{\rm th}(z_{i}; \mu_{0}=0,p)}{\sigma_{\mu_{\rm obs}}^{2}(z_{i})},\] \[\tilde{C}=\sum_{i}\frac{1}{\sigma_{\mu_{\rm obs}}^{2}(z_{i})}. \tag{32}\]
It should be explained that for \(\mu_{0}=\frac{\tilde{B}}{\tilde{C}}\), equation (31) has a minimum at
\[\tilde{X}_{\mu}^{2}(p)=\tilde{A}(p)-\frac{\tilde{B}^{2}(p)}{\tilde{C}}. \tag{33}\]
Since \(X_{\mu,\rm min}^{2}=\tilde{X}_{\mu,\rm min}^{2}\), we can instead minimize \(\tilde{X}_{\mu}^{2}\), which is independent of \(\mu_{0}\). The best-fit model parameters are determined by minimizing \(X^{2}=\tilde{X}_{\mu}^{2}\), and the corresponding \(\mathcal{H}\) is then determined by \(\mu_{0}=\frac{\tilde{B}}{\tilde{C}}\) for the best-fit parameters.
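The marginalization over \(\mu_{0}\) in Eqs. (30)-(33) amounts to a few array operations. A minimal NumPy sketch (the three data points below are placeholders, not the Union2 sample; \(\mu_{\rm th}\) is assumed to be evaluated with \(\mu_{0}=0\)):

```python
import numpy as np

def chi2_marginalized(mu_obs, sigma, mu_th0):
    """Eq. (33): chi^2 minimized over the nuisance parameter mu_0.
    mu_th0 is the theoretical distance modulus evaluated with mu_0 = 0."""
    diff = mu_obs - mu_th0
    A = np.sum(diff**2 / sigma**2)
    B = np.sum(diff / sigma**2)
    C = np.sum(1.0 / sigma**2)
    chi2 = A - B**2 / C
    mu0_best = B / C
    H_best = 10 ** ((42.38 - mu0_best) / 5.0)   # from mu_0 = 42.38 - 5 log10(H)
    return chi2, mu0_best, H_best

# placeholder data; mu_th0 would come from Eqs. (28)-(29) with mu_0 = 0
mu_obs = np.array([35.1, 38.3, 40.9])
sigma  = np.array([0.2, 0.25, 0.3])
mu_th0 = np.array([-7.5, -4.2, -1.8])
print(chi2_marginalized(mu_obs, sigma, mu_th0))
```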
By considering Eq. (26) together with the change of variables \(a=\frac{1}{1+z}\) and \(\frac{d}{dt}=-H(z+1)\frac{d}{dz}\), the dimensionless Hubble parameter for this case can be written down. Expanding around small redshifts, the asymptotic solution is given as
\[E=\frac{H(z)}{H_{0}}=1-\frac{\mathcal{M}z}{2(\mathcal{M}+\mathcal{D})}+\frac {\left(3\mathcal{M}^{2}+4\mathcal{M}\mathcal{D}\right)z^{2}}{8(\mathcal{M}+ \mathcal{D})^{2}}+...., \tag{34}\]
where
\[\mathcal{M}=\frac{-6j^{2}(\xi+\beta)-6\xi\dot{j}}{6\xi-\omega},\] \[\mathcal{D}=\frac{3b^{2}(\xi+2\beta)+2\Lambda_{\rm SA}^{\pm}}{6 \xi-\omega}. \tag{35}\]
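Given values of \(\mathcal{M}\) and \(\mathcal{D}\), the truncated expansion (34) can be integrated numerically to obtain the luminosity distance (29) and the distance modulus (28). A short NumPy sketch (the parameter values are the best-fit numbers quoted in the next subsection; the integration grid is an arbitrary choice):

```python
import numpy as np

M_par, D_par = -0.037, 0.070          # best-fit M and D (see the next subsection)

def E_of_z(z):
    """Truncated expansion of Eq. (34)."""
    s = M_par + D_par
    return 1.0 - M_par * z / (2 * s) + (3 * M_par**2 + 4 * M_par * D_par) * z**2 / (8 * s**2)

def luminosity_distance(z, steps=2000):
    """Eq. (29): D_L(z) = (1+z) * integral_0^z dx / E(x), in units where H0 = 1."""
    x = np.linspace(0.0, z, steps)
    integrand = 1.0 / E_of_z(x)
    dx = x[1] - x[0]
    integral = np.sum((integrand[:-1] + integrand[1:]) * dx / 2)   # trapezoid rule
    return (1 + z) * integral

def mu_th(z, mu0=0.0):
    """Eq. (28) with the nuisance parameter mu_0."""
    return 5 * np.log10(luminosity_distance(z)) + mu0

for z in (0.1, 0.5, 1.0):
    print(z, round(mu_th(z), 3))
```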
### Union2 I-a
In this step, we plot the \(X^{2}\) and likelihood functions in terms of the parameters \(\mathcal{M}\) and \(\mathcal{D}\). According to our calculations, the best fit has \(X_{\rm min}^{2}=546.947\), and the best-fit parameters are
\[\mathcal{M}= -0.037, \tag{36}\] \[\mathcal{D}= 0.070. \tag{37}\]
Consequently, the best-fit value for the \(F(R,T)\) quasi-dilaton massive gravity theory is \(\mathcal{H}=0.693\).
According to Figs (1-5), the result of fitting the \(F(R,T)\) quasi-dilaton massive gravity theory with the cosmological data gives us the best values of model parameters.
In Fig (6), we plotted the velocity \(V\) versus luminosity distance \(D_{L}\) for the \(F(R,T)\) quasi-dilaton massive gravity theory. For calculating the velocity from redshift, one should use \(V=c-\frac{2c}{(z+1)^{2}+1}\).
To test the viability of the \(F(R,T)\) quasi-dilaton massive gravity theory, we utilize the Union2 supernovae I-a dataset consisting of 557 SNIa. We employ the Bayesian statistics method to constrain the model parameters using the SNIa data. Specifically, we plot the \(X^{2}\) and likelihood functions as functions of the parameters \(\mathcal{M}\) and \(\mathcal{D}\), and determine the best-fit values of these parameters by minimizing \(X^{2}\).
Our analysis reveals that the best-fit values of the model parameters are \(\mathcal{M}=-0.037\) and \(\mathcal{D}=0.070\), with a minimum \(X^{2}\) value of 546.947. Using these parameter values, we calculate that the best fit of \(\mathcal{H}=0.693\) for the
Figure 2: The \(X^{2}\) function of the parameter \(\mathcal{M}\) for the \(F(R,T)\) quasi-dilaton massive gravity.
Figure 1: The distance modulus diagram for the best fit (red solid line) of the parameters of the \(F(R,T)\) quasi-dilaton massive gravity theory in comparison with the 557 Union2 SNIa data points (blue dots).
theory. In Fig. (1) we show the distance modulus diagram \(\mu\) versus redshift \(z\) and the best fit for the \(F(R,T)\) quasi-dilaton massive gravity theory using supernova I-a data. Furthermore, we plot the velocity \(V\) versus luminosity distance \(D_{L}\) for the \(F(R,T)\) quasi-dilaton massive gravity theory, and obtain a best-fit diagram in comparison with supernova I-a data.
Our results provide strong evidence in favor of the \(F(R,T)\) quasi-dilaton massive gravity theory as a viable explanation for the accelerated expansion of the Universe. In fact, these results exhibit good agreement between the theoretical predictions and observational data. These findings lend support to the idea that the \(F(R,T)\) quasi-dilaton massive gravity theory can successfully describe late-time cosmic acceleration. The derived values of the model parameters and the Hubble constant provide further credence to this theory. Future investigations involving larger and more diverse datasets will help to reinforce or challenge these conclusions, ultimately contributing to a deeper understanding of the fundamental laws governing the behavior of the Universe.
## IV Perturbations analysis
In this section, we demonstrate the perturbation analysis of \(F(R,T)\) quasi-dilaton massive gravity. It is worth noting that the significance of such analysis is that the stability conditions of the solutions can be determined. As we would be interested in quadratic perturbations, we should expand the physical metric \(g_{\mu\nu}\), the vierbein and the scalar field in terms of small fluctuations \(\delta g_{\mu\nu}\), \(\delta e^{A}_{\mu}\) and \(\delta\sigma\) around the background solution \(g^{(0)}_{\mu\nu}\) and \(\delta e^{A(0)}_{\mu}\):
\[g_{\mu\nu}=g^{(0)}_{\mu\nu}+\delta g_{\mu\nu}, \tag{38}\]
\[e^{A}_{\mu}=e^{A(0)}_{\mu}+\delta e^{A}_{\mu}, \tag{39}\]
Figure 4: The \(X^{2}\) function of the parameter \(\mathcal{D}\) for the \(F(R,T)\) quasi-dilaton massive gravity.
Figure 5: The likelihood as a function of the parameter \(\mathcal{D}\) for the \(F(R,T)\) quasi-dilaton massive gravity.
Figure 3: The likelihood as a function of the parameter \(\mathcal{M}\) for the \(F(R,T)\) quasi-dilaton massive gravity.
Figure 6: The best-fit diagram for the luminosity distance \(D_{L}\) with respect to velocity \(V\) for the \(F(R,T)\) quasi-dilaton massive gravity.
\[\sigma=\sigma^{(0)}+\delta\sigma. \tag{40}\]
Note that one can keep all terms up to quadratic order. We consider tensor perturbations around the background, \(\delta g_{ij}=a^{2}h_{ij}^{TT}\). The tensor perturbations are transverse, \(\partial^{i}h_{ij}=0\), and traceless, \({h_{i}}^{i}=0\). Note also that all perturbations are functions of time and space, and they transform consistently under spatial rotations [75; 44].
Moreover, one can write the actions expanded in Fourier plane waves, namely \(\vec{\nabla}^{2}\rightarrow-k^{2}\), \(d^{3}x\to d^{3}k\). The spatial indices are raised and lowered by \(\delta^{ij}\) and \(\delta_{ij}\). Since all calculations are done in the unitary gauge, there is no need to specify gauge-invariant combinations [45].
### Dispersion Relation of GWs
We obtain the second-order perturbed action separately for the different parts of the action. The \(F(R,T)\) part is given as
\[S^{(2)}_{\rm F(R,T)}=\frac{1}{8}\int d^{3}k\,dt\,a^{3}N\Bigg\{\xi\bigg[\frac{\dot{h}_{ij}\dot{h}^{ij}}{N^{2}}-\Big(\frac{k^{2}}{a^{2}}+\frac{4\dot{H}}{N}+6H^{2}+u\Big)h^{ij}h_{ij}\bigg]+\beta\bigg[\frac{\dot{h}_{ij}\dot{h}^{ij}}{N^{2}}-\Big(\frac{k^{2}}{a^{2}}+3H^{2}+v\Big)h^{ij}h_{ij}\bigg]\Bigg\}. \tag{41}\]
Meanwhile, the quasi-dilaton part of the perturbed action reads as
\[S^{(2)}_{\rm Quasi-dilaton}=-\frac{1}{8}\int d^{3}kdta^{3}N\Bigg{[} \left(\frac{\omega}{N^{2}}\dot{\sigma}^{2}\right)h^{ij}h_{ij}\Bigg{]}, \tag{42}\]
and the massive gravity part becomes
\[S^{(2)}_{\rm massive}=\frac{1}{8}\int d^{3}kdta^{3}Nm_{g}^{2} \bigg{[}(\alpha_{3}+\alpha_{4})rY^{3}\] \[-(1+2\alpha_{3}+\alpha_{4})(1+3r)Y^{2}\] \[+(3+3\alpha_{3}+\alpha_{4})(3+2r)Y\] \[-2(6+4\alpha_{3}+\alpha_{4})\bigg{]}h^{ij}h_{ij}. \tag{43}\]
Finally, combining the above terms, the second-order perturbed action for tensor perturbations, \(S^{(2)}_{\rm total}=S^{(2)}_{\rm F(R,T)}+S^{(2)}_{\rm Quasi-dilaton}+S^{(2)}_{\rm massive}\), becomes
\[S^{(2)}_{\rm total}=\frac{1}{8}\int d^{3}k\,dt\,a^{3}N\Bigg\{\frac{\dot{h}^{ij}\dot{h}_{ij}}{N^{2}}(\xi+\beta)-\bigg[\frac{k^{2}}{a^{2}}(\xi+\beta)+M_{\rm GW}^{2}\bigg]h^{ij}h_{ij}\Bigg\}, \tag{44}\]
where
\[M_{\rm GW}^{2}=\bigg{(}\frac{4\dot{H}}{N}+6H^{2}+u\bigg{)}\xi+ \bigg{(}3H^{2}+v\bigg{)}\beta\] \[+\frac{\omega}{N^{2}}\dot{\sigma}^{2}+\varpi, \tag{45}\]
and with
\[\varpi=\frac{m_{g}^{2}}{(Y_{\rm SA}^{\pm}-1)}\Bigg{\{} Y_{\rm SA}^{\pm}\bigg{[}18+8\alpha_{3}+Y_{\rm SA}^{\pm}\bigg{(}2 \alpha_{3}Y_{\rm SA}^{\pm}+Y_{\rm SA}^{\pm}-r_{\rm SA}\big{(}3(\alpha_{3}+2)\] \[+Y_{\rm SA}^{\pm}(\alpha_{3}Y_{\rm SA}^{\pm}-4\alpha_{3}-3)\big{)} -8\alpha_{3}-10\bigg{)}\bigg{]}-2(\alpha_{3}+3)\Bigg{\}}.\]
The last relation is obtained using (23) to substitute \(\alpha_{3}\) and \(\alpha_{4}\). It is obvious that equation (45) determines the modified dispersion relation of gravitational waves in \(F(R,T)\) quasi-dilaton massive gravity. In order to guarantee the stability of long-wavelength gravitational waves, the mass squared of gravitational waves should be positive, i.e., \(M_{\rm GW}^{2}>0\). If the mass squared is negative, the mode is unstable and can be regarded as tachyonic; however, for a tachyon mass of the order of the Hubble scale, the instability would take the age of the Universe to develop.
Notice that the main result of this section is the modified dispersion relation of gravitational waves, which describes the propagation of gravitational perturbations in FLRW cosmology within \(F(R,T)\) quasi-dilaton massive gravity. It is worth mentioning that gravitational wave observations can be used for testing this modified propagation. The modification introduced in this theory will lead to additional effects on the phase evolution of the gravitational waveform. Moreover, these extra contributions could be detected with accurate matched-filtering techniques in the data analysis.
It is interesting to note that Nishizawa and Arai investigated the modified propagation of gravitational waves in the cosmological context [76; 77; 78]. The present paper provides a theory-specific analysis in \(F(R,T)\) quasi-dilaton massive gravity which is complementary to their work. At the end of this part, we would like to point out that future space-based gravitational-wave detectors, with their high sensitivity, will allow the testing of crucial properties of gravitation at different wavelengths.
## V Conclusions
In this study, we have successfully extended the dRGT massive gravity theory to develop the innovative \(F(R,T)\) quasi-dilaton massive gravity theory. Our exhaustive investigation has led to the derivation of the complete set of field equations for a Friedmann-Lemaitre-Robertson-Walker (FLRW) background, paving the way for a detailed examination of the late-time acceleration of the Universe.
To validate the predictions of the \(F(R,T)\) quasi-dilaton massive gravity theory, we embarked on a rigorous evaluation by comparing its forecasts with the Union2 Supernovae Type Ia (SNIa) dataset, comprising 557 observations. Our analysis revealed an exceptional agreement between the theoretical framework and the empirical data, strongly supporting the potential of the model in accurately describing the late-time cosmic acceleration. Furthermore, we were able to determine the best-fit parameter, yielding \(h=0.693\), via a thorough fitting process.
Motivated by the quest to deepen our understanding of gravitational phenomena, we conducted a comprehensive exploration of tensor perturbations. This endeavor allowed us to unveil the underlying principles governing the mass of the graviton within the context of the \(F(R,T)\) quasi-dilaton massive gravity theory. Our findings facilitated the derivation of the dispersion relation for gravitational waves and an examination of the propagation properties of gravitational perturbations in FLRW cosmology. These critical investigations hold significant implications for the current era of gravitational wave astronomy, where novel approaches to understanding the behavior of gravity are being actively pursued.
As we mentioned before, it is important to highlight that our work complements previous studies, such as those by Nishizawa and Arai, who investigated the modified propagation of gravitational waves in the cosmological context. Our research provides a theory-specific analysis in \(F(R,T)\) quasi-dilaton massive gravity, offering valuable insights into the nature of gravity in diverse environments.
As we conclude this paper, it is essential to acknowledge the profound impact that future space-based gravitational-wave detectors will have on our understanding of the universe. With their heightened sensitivity, they will enable the testing of crucial properties of gravitation at various wavelengths, further expanding our knowledge of the cosmos.
## Acknowledgements
This work is based upon research funded by the University of Tabriz, Iran National Science Foundation (INSF), and Iran National Elites Foundation (INEF), under project No. 4014244.
The authors are grateful to A. Emir Gumrukcuoglu for useful comments. Also, the authors are grateful to Nishant Agarwal for notes and codes related to tensor perturbations.
|
2309.10761 | Tropical initial degeneration for systems of algebraic differential
equations | We study the notion of degeneration for affine schemes associated to systems
of algebraic differential equations with coefficients in the fraction field of
a multivariate formal power series ring. In order to do this, we use an
integral structure of this field that arises as the unit ball associated to the
tropical valuation, first introduced in the context of tropical differential
algebra. This unit ball turns out to be a particular type of integral domain,
known as B\'ezout domain. By applying to these systems a translation map along
a vector of weights that emulates the one used in classical tropical algebraic
geometry, the resulting translated systems will have coefficients in this unit
ball. When the resulting quotient module over the unit ball is torsion-free,
then it gives rise to integral models of the original system in which every
prime ideal of the unit ball defines an initial degeneration, and they can be
found as a base-change to the residue field of the prime ideal.
In particular, the closed fibres of our integral models can be rightfully
called initial degenerations, since we show that the maximal ideals of this
unit ball naturally correspond to monomial orders. We use this correspondence
to define initial forms of differential polynomials and initial ideals of
differential ideals, and we show that they share many features of their
classical analogues. | Lara Bossinger, Sebastian Falkensteiner, Cristhian Garay-López, Marc Paul Noordman | 2023-09-19T17:01:18Z | http://arxiv.org/abs/2309.10761v1 | # Tropical initial degeneration for systems of algebraic differential equations
###### Abstract.
We study the notion of degeneration for affine schemes associated to systems of algebraic differential equations with coefficients in the fraction field of a multivariate formal power series ring. In order to do this, we use an integral structure of this field that arises as the unit ball associated to the tropical valuation, first introduced in the context of tropical differential algebra. This unit ball turns out to be a particular type of integral domain, known as Bezout domain. By applying to these systems a translation map along a vector of weights that emulates the one used in classical tropical algebraic geometry, the resulting translated systems will have coefficients in this unit ball. When the resulting quotient module over the unit ball is torsion-free, then it gives rise to integral models of the original system in which every prime ideal of the unit ball defines an initial degeneration, and they can be found as a base-change to the residue field of the prime ideal.
In particular, the closed fibres of our integral models can be rightfully called initial degenerations, since we show that the maximal ideals of this unit ball naturally correspond to monomial orders. We use this correspondence to define initial forms of differential polynomials and initial ideals of differential ideals, and we show that they share many features of their classical analogues.
Key words and phrases: Initial degeneration, Tropical Differential Algebra, Bezout domain, Non-Archimedean valuation, Maximal Ideals, Monomial Orders, Schemes over Bezout domains
initial degeneration with irreducible special fibre corresponds to a point in the tropical variety (see, e.g. [10]). The generalization of the fundamental theorem to polynomial differential systems was initiated in [1] (the ordinary case) and taken further to the partial case in [12, 13]. Our construction originates from these results, more precisely from the study of tropical solutions to tropical systems of algebraic differential equations, and our findings may be phrased in terms of non-Archimedean valuations or seminorms, as we explain below.
We would like to mention that this paper is not the first exploring possible generalizations of algebraic methods to the differential case. In particular, Grobner theoretical methods have been employed in a similar setting in the works [1, 13, 14].
### A new example of Bezout non-Archimedean norm
We consider the fraction field \(K(\!(\mathfrak{t})\!)\) of the ring of formal power series in \(m\geq 1\) variables \(\mathfrak{t}=(t_{1},\ldots,t_{m})\) and with coefficients in a field of characteristic zero \(K\).
We study the algebraic and geometric properties of a recently introduced non-Archimedean norm trop defined on \(K(\!(\mathfrak{t})\!)\) having values in the (idempotent) fraction semifield \(V\mathbb{B}(\mathfrak{t})\) of the semiring of _vertex polynomials_ ([1, 12], see §2.3, precisely (2)). This is the tropical seminorm, and it is not a classical Krull valuation, but rather a new example of the concept of Bezout \(\ell\)-valuation1, which was introduced in [11].
Footnote 1: Recall that an \(\ell\)-valuation is a seminorm \(v:K\xrightarrow{}S\) defined on a field \(K\) and taking values in an idempotent semifield; it differs from Krull valuations in the sense that \(S\) does not need to be totally-ordered.
Even if the target semifield \(V\mathbb{B}(\mathfrak{t})\) is not totally ordered, this setting behaves in a similar way to the Krull valuations; in particular, the subring \(K(\!(\mathfrak{t})\!)^{\circ}\subset K(\!(\mathfrak{t})\!)\) of elements having trop norm bounded by \(1\in V\mathbb{B}(\mathfrak{t})\) turns out to be a Bezout domain. In this context, this subring is the analogue of the valuation ring of a Krull valuation, and we call it the _unit ball_ of the seminorm.
This yields one of the first concrete applications of Bezout \(\ell\)-valuations through tropical geometry. We choose to present this concept in the language of non-Archimedean seminorms using idempotent semiring theory, and we recall connections to Bezout \(\ell\)-valuations.
### Application
Consider the differential ring \((K[\![\mathfrak{t}]\!],D)\), where \(D=\{\frac{\partial}{\partial t_{i}}\,:\,i=1,\ldots,m\}\) is the set of usual derivations. The tropical seminorm trop appeared first in the tropical aspect of systems of differential algebraic equations with coefficients in \(K[\![\mathfrak{t}]\!]\), see [12].
The differential field \((K(\!(\mathfrak{t})\!),D)\) is a differential extension of \((K[\![\mathfrak{t}]\!],D)\), and when \(m>1\), the Bezout domain \(K(\!(\mathfrak{t})\!)^{\circ}\subset K(\!(\mathfrak{t})\!)\) is a proper extension of \(K[\![\mathfrak{t}]\!]\). We apply these previous results to set up the theory of tropical initial degeneration of systems of algebraic differential equations with coefficients in \(K(\!(\mathfrak{t})\!)\).
### Initial degenerations
Given an algebraic variety \(X\) over a field \(K\) and an integral domain \(R\) with fraction field \(\operatorname{Frac}(R)=K\), a _model_ of \(X\) over the base \(B=\operatorname{Spec}(R)\) is a flat morphism \(\pi:\mathcal{X}\to B\) such that \(X\cong\mathcal{X}\times_{B}\operatorname{Spec}(K)\).
Denote by \(F_{m,n}\), respectively \(R_{m,n}\), the ring of polynomials in the variables \(\{x_{i,\,J}\,:\,i=1,\ldots,n,\,J\in\mathbb{N}^{m}\}\) with coefficients in \(K(\!(\mathfrak{t})\!)\), respectively in the unit ball \(K(\!(\mathfrak{t})\!)^{\circ}\). For a given weight vector \(w=(w_{1},\ldots,w_{n})\in\mathbb{B}[\![\mathfrak{t}]\!]^{n}\) (here \(\mathbb{B}\) denotes
the Boolean semifield) and a differential polynomial \(P\in F_{m,n}\), we define its \(w\)-translation \(P_{w}\in R_{m,n}\) in (9). Broadly speaking, we are generalizing the ordinary case which appeared in [12, 13]. The \(w\)-translated ideal of a given ideal in \(F_{m,n}\) is the ideal in \(R_{m,n}\) generated by the \(w\)-translations of all its elements (Definition 3.7). An initial degeneration is specified by choosing a prime ideal \(\mathfrak{p}\subset K(\!(\mathfrak{t})\!)^{\circ}\).
**Theorem**.: _(Proposition 3.8 and Theorem 3.10) Let \(w\in\mathbb{B}[\![\mathfrak{t}]\!]^{n}\) and \(G\subset F_{m,n}\) be an ideal such that the quotient \(R_{m,n}/G_{w}\) of \(R_{m,n}\) by the \(w\)-translated ideal \(G_{w}\) is a torsion-free \(K(\!(\mathfrak{t})\!)^{\circ}\)-module. Then \(\mathcal{X}(w):=\mathrm{Spec}(R_{m,n}/G_{w})\) is a model for \(\mathcal{X}(w)_{\eta}:=\mathrm{Spec}(F_{m,n}/G)\) over \(K(\!(\mathfrak{t})\!)^{\circ}\). Moreover, given any maximal ideal \(\mathfrak{m}\in\mathrm{maxSpec}(K(\!(\mathfrak{t})\!)^{\circ})\) the corresponding closed fibre satisfies_
\[\mathcal{X}(w)_{\mathfrak{m}}\cong\mathrm{Spec}\left(K\{x_{i,J}\}/\overline{G }_{w}\right)\]
_where \(\overline{G}_{w}\) is the induced ideal in \(K\{x_{i,J}\}\) (see the Proof of Lemma 3.3 for the precise definition of \(\overline{G}_{w}\))._
A concept closely related to the fibre of a model is that of initial ideals. We use the \(w\)-translation map together with a maximal ideal \(\mathfrak{m}\subset K(\!(\mathfrak{t})\!)^{\circ}\) to define the initial form \(\mathrm{in}_{(w,\mathfrak{m})}(P)\) of \(P\in F_{m,n}\) with respect to the pair \((w,\mathfrak{m})\), and the initial ideal \(\mathrm{in}_{(w,\mathfrak{m})}(G)\) of an ideal \(G\subset F_{m,n}\) with respect to the pair \((w,\mathfrak{m})\).
We show that for a polynomial \(P\in F_{m,n}\), our expressions (10) for its \(w\)-translation \(P_{w}\in R_{m,n}\) and (11) for its initial form \(\mathrm{in}_{(w,\mathfrak{m})}(P)\) are natural generalizations of the same constructions in standard tropical algebra, cf. [12, §5]. The second half of the above theorem follows from a careful analysis of the set of maximal ideals in \(K(\!(\mathfrak{t})\!)^{\circ}\), denoted by \(\mathrm{maxSpec}(K(\!(\mathfrak{t})\!)^{\circ})\). We find the following result, which is proven in a constructive manner in both directions.
**Theorem**.: _(Theorem 4.4) There exists an explicit one-to-one correspondence between maximal ideals of \(K(\!(\mathfrak{t})\!)^{\circ}\) and monomial orders on \(\mathbb{N}^{m}\)._
This characterization is important since it allows us to give explicit formulas for computing the initial form \(\mathrm{in}_{(w,\mathfrak{m})}(P)\) of \(P\in F_{m,n}\) with respect to the pair \((w,\mathfrak{m})\), see for instance (14), (15).
### Outline
In Section 2 we introduce non-Archimedean seminorms, idempotent semiring theory and Bezout \(\ell\)-valuations. We give results on these concepts which will be necessary for defining initial degenerations. Of utmost importance hereby is the correspondence theorem between ideals in the domain and \(k\)-ideals in the image of a Bezout valuation (see Theorem 2.12), which is new up to our knowledge. In Subsection 2.3 we show that trop is indeed a Bezout seminorm on \(K(\!(\mathfrak{t})\!)\) (Proposition 2.19), so that the previous result can be applied. Moreover, in Proposition 2.22 we show that the ring \(K(\!(\mathfrak{t})\!)^{\circ}\) defined by trop is not Noetherian in the multivariate case.
In Section 3, the notions from tropical differential algebra are recalled and results from Section 2 are used to show that models are found in this setting (Proposition 3.8) such that initial degenerations can be properly defined, see Theorem 3.10. We give the definitions of initial form and initial ideal at the pair \((w,\mathfrak{m})\) in Definition 3.14.
The maximal ideals of \(K(\!(\mathfrak{t})\!)^{\circ}\) are studied in detail in the subsequent Section 4. In Proposition 4.13, we show that taking the initial form at the pair \((w,\mathfrak{m})\) of a
polynomial is a multiplicative map, as it is the case in classical tropical algebraic geometry.
## 2. Preliminaries
### Non-Archimedean seminorms
Let \(R\) be a (commutative) ring and let \(S\) be an idempotent (commutative) semiring. On \(S\) we define the order \(a\leq b\) if and only if \(a+b=b\). Order considerations on idempotent semirings will be made with respect to this order if no further remark is made.
**Definition 2.1**.: A _(non-Archimedean) seminorm_ is a map \(v:R\to S\) from a ring \(R\) to an idempotent semiring \(S\) that satisfies
1. (unit) \(v(0)=0\) and \(v(1)=1\);
2. (sign) \(v(-1)=1\);
3. (submultiplicativity) \(v(ab)\leq v(a)v(b)\); and
4. (subadditivity) \(v(a+b)\leq v(a)+v(b)\).
The seminorm \(v\) is a _norm_ if every \(a\neq 0\) fulfills \(v(a)\neq 0\), and it is called multiplicative if \(v(ab)=v(a)v(b)\) holds for every \(a,b\in R\). A multiplicative norm is called a _valuation_.
Let \(v:R\to S\) be a seminorm. The set \(R^{\circ}:=\{a\in R:v(a)\leq 1\}\) is called the _unit ball of \(v\)_, and is a subring of \(R\). By a little abuse of notation we also refer to the set \(S^{\circ}:=\{x\in S:x\leq 1\}\) as the unit ball of \(S\); it is a subsemiring of \(S\).
**Definition 2.2**.: A seminorm \(v:R\to S\) is called _integral_ if \(v(a)\leq 1\) for all \(a\in R\).
A seminorm \(v:R\to S\) is integral if and only if \(R^{\circ}=R\). By restricting the domain of definition of \(v\) to \(R^{\circ}\), an integral seminorm \(v^{\circ}:R^{\circ}\to S^{\circ}\) is induced.
**Definition 2.3**.: A subset \(I\) of a semiring \(S\) is an ideal if \(0\), \(a+b\) and \(ac\) are elements of \(I\) whenever \(a,b\in I\) and \(c\in S\). An ideal \(I\) of \(S\) is
1. a \(k\)-_ideal_ or a _subtractive ideal_, if whenever \(a+b\in I\), \(a\in I\) and \(b\in S\), then \(b\in I\).
2. _prime_, if its complement \(S\setminus I\) is a multiplicative subset of \(S\).
**Lemma 2.4**.: _Let \(S\) be an idempotent semiring. Call an ideal \(I\) of \(S\) downward closed if whenever \(b\in I\) and \(a\leq b\), we have \(a\in I\). An ideal of \(S\) is subtractive if and only if it is downward closed._
Proof.: Assume \(I\subset S\) is subtractive and take \(b\in I\) and \(a\in S\) with \(a\leq b\). Then \(a+b=b\in I\) which implies \(a\in I\) as \(I\) is subtractive.
To the contrary, if \(I\) is downward closed, consider \(b\in I,a\in S\) and \(a+b\in I\). Since \(a+(a+b)=a+b\), it follows that \(a\leq a+b\), and because \(I\) is downward closed, \(a\in I\).
Let \(R\) be a ring. Let \(\operatorname{Id}(R)\), respectively \(\operatorname{fgId}(R)\), denote the set of ideals of \(R\), respectively finitely generated ideals of \(R\). Note that \(\operatorname{Id}(R)\) and \(\operatorname{fgId}(R)\) are semirings with respect to the sum and product of ideals.
If \(S\) is a semiring, we denote by \(\operatorname{Id}_{k}(S)\) the set of \(k\)-ideals of \(S\). By [1, Corollary 3.8] the map \(u_{R}:R\to\operatorname{fgId}(R)\) sending an element \(a\in R\) to its principal ideal \((a)\) is a surjective valuation that induces an isomorphism of semirings \(\operatorname{Id}(R)\cong\operatorname{Id}_{k}(\operatorname{fgId}(R))\). We use this correspondence for describing the maximal ideals of the unit balls \(R^{\circ}\) and \(V\mathbb{B}(\mathbf{t})^{\circ}\) of the tropical valuation \(\operatorname{trop}:K(\!(\mathbf{t})\!)\to V\mathbb{B}(\mathbf{t})\) in terms of monomial orders in Section 4.
### Bezout \(\ell\)-valuations as seminorms
If \(S\) is a semiring, we denote by \(U(S)\subset(S,\times,1)\) its group of multiplicatively invertible elements.
We continue with the following concepts.
**Definition 2.5**.: An (integral) domain \(R\) is called a _Bezout domain_ if every finitely generated ideal is principal.
Let \(R\) be a Bezout domain. The greatest common divisor (gcd) and least common multiple (lcm) of two elements \(a,b\in R\) always exist, and they satisfy
\[ab=u\gcd(a,b)\operatorname{lcm}(a,b)\]
for some \(u\in U(R)\).
We write \(\Gamma(R):=R/U(R)\) and denote by \(\pi:R\to\Gamma(R)\) the quotient projection, sending \(a\in R\) to its class \(\pi(a)=[a]\). Note that \(\Gamma(R)\) endowed with the product \([a][b]=[ab]\) and addition \([a]+[b]=[\gcd(a,b)]\) is an idempotent semiring, and \(\pi:R\to\Gamma(R)\) becomes an integral valuation.
**Remark 2.6**.: If \(R\) is a Bezout domain, then there is an isomorphism of semirings \(\operatorname{fgId}(R)\cong\Gamma(R)\), thus the norm \(u_{R}:R\to\operatorname{fgId}(R)\) coincides with the quotient projection \(\pi:R\to\Gamma(R)\).
If \(R\) is a Bezout domain, then the semiring \(\Gamma(R)\) is multiplicatively cancellative and the fraction semifield \(\operatorname{Frac}(\Gamma(R))\) exists. The valuation \(\pi:R\to\Gamma(R)\) can be extended to a valuation \(\operatorname{Frac}(\pi):\operatorname{Frac}(R)\to\operatorname{Frac}(\Gamma(R))\) with the usual definition: \(\operatorname{Frac}(\pi)(\frac{a}{b})=\frac{\pi(a)}{\pi(b)}\).
**Definition 2.7**.: Let \(R\) be a Bezout domain. Its divisibility semiring is the semiring \(\Gamma(R)\), and its divisibility semifield is the semifield \(\operatorname{Frac}(\Gamma(R))\).
Throughout the remaining part of this section, \(R\) denotes a field and \(S\) a semifield. In this case, \(\operatorname{Frac}(R)=R\), \(\Gamma(R)=\{[0],[1]\}\), and \(\pi:R\to\Gamma(R)\) is the trivial norm.
**Definition 2.8**.: Let \(R\) be a field and let \(S\) be a semifield. We say that a multiplicative seminorm \(v:R\to S\) is _Bezout_ if for every \(a,b\in R\), there exist \(x,y\in R^{\circ}\) such that \(v(xa+yb)=v(a)+v(b)\).
Since \(R\) is a field, it follows that a Bezout seminorm is automatically a norm, hence, it is a valuation after Definition 2.1.
Let \(v:R\to S\) be a Bezout valuation with induced integral norm \(v^{\circ}:R^{\circ}\to S^{\circ}\). We start with the following observation about principal ideals of \(R^{\circ}\).
**Lemma 2.9**.: _Given \(a\in R^{\circ}\), we have_
\[(a)=\{b\in R^{\circ}:v(b)\leq v(a)\}.\]
Proof.: The statement is clear if \(a=0\), so assume that \(a\neq 0\). Then a straightforward computation shows that
\[v(b)\leq v(a)\Longleftrightarrow v\left(\frac{b}{a}\right)=\frac{v(b)}{v(a)} \leq 1\Longleftrightarrow\frac{b}{a}\in R^{\circ}\Longleftrightarrow b\in(a).\,\Box\]
This already has the consequence that ideals are determined by seminorms, in the following sense.
**Corollary 2.10**.: _Let \(I\) be an ideal of \(R^{\circ}\) and \(a\in I\). Then for any \(b\in R^{\circ}\) with \(v(b)\leq v(a)\), we have \(b\in I\)._
Proof.: If \(v(b)\leq v(a)\) then by Lemma 2.9, we have \(b\in(a)\subset I\).
Therefore, an ideal \(I\subset R^{\circ}\) is uniquely determined by the subset \(v(I)\subseteq S^{\circ}\) and we have
\[I=\{a\in R:v(a)\in v(I)\}. \tag{1}\]
It follows from Lemma 2.9 that \(v(I)\) is a downward closed subset of \(S^{\circ}\). In particular, \(v(I)\) is closed under multiplication with elements in \(S^{\circ}\): if \(b\in v(I)\) and \(a\in S^{\circ}\), then \(ab\leq b\) (since \(a\leq 1\)) and so \(ab\in v(I)\).
**Lemma 2.11**.: _Let \(v:R\to S\) be a Bezout valuation such that \(v^{\circ}\) is surjective. Let \(I\) be an ideal of \(R^{\circ}\) and let \(x,y\in v(I)\). Then \(x+y\in v(I)\)._
Proof.: As \(v^{\circ}\) is surjective there exist \(f,g\in R^{\circ}\) such that \(v(f)=x\) and \(v(g)=y\). Moreover, by (1) we have that \(f,g\in I\). As \(v\) is Bezout there exist \(\alpha,\beta\in R^{\circ}\) such that
\[v(\alpha f+\beta g)=v(f)+v(g)=x+y.\]
Notice that \(\alpha f+\beta g\in I\) as \(I\) is an ideal and then again by (1) we conclude \(x+y\in v(I)\).
Lemma 2.11 implies that \(v(I)\) is closed under summation, hence it is subtractive by Lemma 2.4. Thus, if \(I\subset R^{\circ}\) is an ideal, then \(v(I)\subset S^{\circ}\) is a \(k\)-ideal.
**Theorem 2.12** (Correspondence theorem).: _Let \(v:R\to S\) be a Bezout valuation such that \(v^{\circ}:R^{\circ}\to S^{\circ}\) is surjective. There is a one-to-one correspondence between ideals \(I\subset R^{\circ}\) and \(k\)-ideals \(J\subset S^{\circ}\) given by \(I=v^{-1}(J)\) and \(J=v(I)\). This correspondence preserves prime ideals._
Proof.: The only thing left to verify is that \(v^{-1}(J)\) is an ideal of \(R^{\circ}\). Let \(f,g\in v^{-1}(J)=\{g\in R^{\circ}:v(g)\in J\}\) and \(r\in R^{\circ}\). Then \(f+g\in v^{-1}(J)\) and \(rf\in v^{-1}(J)\), since \(v(f+g)\leq v(f)+v(g)\in J\) and \(v(rf)\leq v(r)v(f)\in J\) and \(J\) is downwards closed.
For the second statement, let \(I\) and \(J\) be two ideals such that \(I=v^{-1}(J)\) and \(v(I)=J\). Since \(f\notin I\) if and only if \(v(f)\notin J\) and \(v:R^{\circ}\to S^{\circ}\) is surjective, the statement follows.
**Remark 2.13**.: As we pointed out before, the correspondence Theorem 2.12 is related to [1, Corollary 3.8], the difference is that in our case, the direct image \(v(I)\subset S^{\circ}\) of an ideal \(I\subset R^{\circ}\) under the surjective valuation \(v^{\circ}\) is already a \(k\)-ideal of \(S^{\circ}\), and there is no need to take its \(k\)-closure.
Although we will not need a generalization of the correspondence theorem to radical and primary ideals, we will give a proof here for the sake of completeness.
**Proposition 2.14**.: _The correspondence induced by a Bezout valuation \(v\) preserves radical and primary ideals._
Proof.: First, let us show the preservation of radical ideals. Any radical ideal can be written as intersection of minimal prime ideals, see e.g. [1, Corollary 2.12]. Since \(v\) preserves primes, it remains to show that minimal primes are preserved. For this purpose, let \(I\) be a minimal prime ideal in \(R^{\circ}\), i.e., for every prime ideal \(J\) with \(J\subseteq I\) it holds that \(J=I\), and let \(J^{\prime}\subseteq v(I)\) be a prime ideal. Then there is a prime ideal \(J\) such that \(v(J)=J^{\prime}\). Thus, \(v(J)\subseteq v(I)\) and therefore \(J\subseteq I\)
By the minimality, \(J=I\). The same argument can be used for \(v^{-1}\) and minimal prime \(k\)-ideals in \(S^{\circ}\) and the statement follows.
Now, let us show the preservation of primary ideals. Let \(J\subset S^{\circ}\) be a primary \(k\)-ideal and let \(I=v^{-1}(J)\). If \(fg\in I\), then \(v(fg)=v(f)v(g)\in J\). For \(v(f)\notin J\), we have \(f\notin I\). Then \(v(g)^{n}=v(g^{n})\in J\) and \(g^{n}\in I\) for some \(n\in\mathbb{N}\).
Conversely, let \(I\subset R^{\circ}\) be a primary ideal and let \(J=v(I)\). Let \(xy\in J\) and \(f\in v^{-1}(x),g\in v^{-1}(y)\) be arbitrary. Since \(fg\in v^{-1}(xy)\subset I\), then \(f\in I\) or \(g^{n}\in I\) for some natural number \(n\). Thus, \(v(f)\in J\) or \(v(g^{n})=v(g)^{n}\in J\).
The next result shows that for a surjective Bezout valuation \(v:R\to S\), the unit ball \(R^{\circ}\) is Bezout and \(v\) is characterized by the norm \(\pi:R^{\circ}\to\Gamma(R^{\circ})\).
**Lemma 2.15**.: _Let \(v:R\to S\) be a surjective Bezout valuation. Then \(v\) is isomorphic to the norm \(\operatorname{Frac}(\pi):\operatorname{Frac}(R^{\circ})\to\operatorname{Frac }(\Gamma(R^{\circ}))\)._
Proof.: First we show that \(S=\operatorname{Frac}(S^{\circ})\). Given some \(x\in S\) we want to write it as \(x=a/b\) with \(a,b\leq 1\). For this we just set \(a=x/(1+x)\) and \(b=1/(1+x)\). Since \(x\leq 1+x\) and \(1\leq 1+x\) we have \(a,b\leq 1\) and also clearly \(x=a/b\). So \(S^{\circ}\subseteq S\subseteq\operatorname{Frac}(S^{\circ})\), and the inclusion \(\operatorname{Frac}(S^{\circ})\subseteq S\) follows from the fact that \(S\) is a semifield.
Assume that \(v\) is surjective. Then for \(x\in S^{\circ}\) there exists \(a\in R\) such that \(v(a)=x\), but \(a\in R^{\circ}\) by definition. Thus, \(v^{\circ}\) is surjective.
Now let \(x\in R\). Since \(\operatorname{Frac}(R^{\circ})=R\) [13, Proposition 1], there are \(a,b\in R^{\circ}\) such that \(x=\frac{a}{b}\). So \(v(x)=\frac{v(a)}{v(b)}=\frac{v^{\circ}(a)}{v^{\circ}(b)}\), and \(v:R\to S\) is the extension \(\operatorname{Frac}(v^{\circ})\) of \(v^{\circ}:R^{\circ}\to S^{\circ}\).
If \(v:R\to S\) is a Bezout valuation, then \(R^{\circ}\) is a Bezout domain by [13, Proposition 1]. Then every finitely generated ideal of \(R^{\circ}\) is principal, and the isomorphism \(\operatorname{Id}(R^{\circ})\cong\operatorname{Id}_{k}(S^{\circ})\) of Theorem 2.12 restricts to an isomorphism \(\Gamma(R^{\circ})\cong\operatorname{fgId}(R^{\circ})\cong\operatorname{fgId}_{k}(S^{\circ})\) sending \((a)\) to \(v^{\circ}((a))\). Note that \(v^{\circ}((a))=(v^{\circ}(a))_{k}\): the \(k\)-closure of the principal ideal generated by \(v^{\circ}(a)\).
We now make the identification \(S^{\circ}\cong\operatorname{fgId}_{k}(S^{\circ})\) by sending \(s\) to \((s)_{k}\). This is clearly surjective, and it is injective since \((s)_{k}=(t)_{k}\) implies \(s\leq t\) and \(t\leq s\), so \(t=s\). Thus \(\phi:\Gamma(R^{\circ})\to S^{\circ}\) given by \(\phi((a))=v^{\circ}(a)\) is an isomorphism satisfying \(v^{\circ}=\phi\circ\pi\), and \(v=\operatorname{Frac}(v^{\circ})=\operatorname{Frac}(\phi)\circ\operatorname{ Frac}(\pi)\).
**Corollary 2.16**.: _Let \(v:R\to S\) be a Bezout valuation. Then \(v\) is surjective if and only if \(v^{\circ}\) is surjective._
Proof.: Let \(v^{\circ}\) be surjective. Since \(S=\operatorname{Frac}(S^{\circ})\) (see proof of Lemma 2.15), for any \(y\in S\) there are \(y_{1},y_{2}\in S^{\circ}\) such that \(y=\frac{y_{1}}{y_{2}}\), and since \(v^{\circ}\) is surjective, there exists \(a_{1},a_{2}\in R\) such that \(v(\frac{a_{1}}{a_{2}})=\frac{v^{\circ}(a_{1})}{v^{\circ}(a_{2})}=\frac{y_{1}}{ y_{2}}=y\). Thus, \(v\) is surjective.
The reverse direction is proven in Lemma 2.15.
**Corollary 2.17**.: _Let \(v:R\to S\) be a surjective Bezout valuation. Then \(U(R^{\circ})=\{x\in R\,:\,v(x)=1\}\)._
Proof.: If \(x\in R\) satisfies \(v(x)=1\), then \(x\in R^{\circ}\). Since \(v\) is surjective, we have that \(\operatorname{Frac}(R^{\circ})=R\), so there exist \(y,z\in R^{\circ}\) such that \(x=\frac{y}{z}\neq 0\), then \(v(y)=v(z)\), and \(x^{-1}=\frac{z}{y}\in R^{\circ}\).
Conversely, if \(x\in U(R^{\circ})\), then \(v(x)\) is invertible in \(S^{\circ}\) (since \(v\) is multiplicative), but \(U(S^{\circ})=\{1\}\) since it is simple (and thus sharp).
In the next section we apply these results to a concrete example of a Bezout valuation.
### Valuations in the context of formal power series rings
Let \(\mathbb{B}=\{0,1\}\) denote the Boolean idempotent semifield. Fix an integer \(m\geq 1\) and a tuple of variables \(\mathbf{t}=(t_{1},\dots,t_{m})\). We denote by \(\mathbb{B}[\![\mathbf{t}]\!]\) the semiring of formal Boolean power series, and by \(V\mathbb{B}[\![\mathbf{t}]\!]\) the idempotent semiring of vertex polynomials as in [1].
**Remark 2.18**.: Recall that \(\mathbb{B}[\![\mathbf{t}]\!]\) is isomorphic to \(\mathcal{P}(\mathbb{N}^{m})\) and \(V\mathbb{B}[\![\mathbf{t}]\!]\) is isomorphic to \(\mathbb{T}_{m}\) from [1] with the isomorphism given by taking supports.
More precisely, the elements of \(V\mathbb{B}[\![\mathbf{t}]\!]\) are subsets of \(\mathbb{N}^{m}\) that are equal to the vertices of the Newton polyhedra they generate. The addition \(\oplus\) on \(V\mathbb{B}[\![\mathbf{t}]\!]\) is given by taking the set union of two vertex polynomials and then projecting onto the vertices of the outcome. Similarly, the product \(\odot\) on \(V\mathbb{B}[\![\mathbf{t}]\!]\) is given by taking the Minkowski sum of the vertex polynomials and then projecting onto the vertices of the outcome.
Since \(V\mathbb{B}[\![\mathbf{t}]\!]\) is integral (i.e. multiplicatively cancellative, that is whenever \(a\odot b=a\odot c\) then either \(a=0\) or \(b=c\)), we can construct the fraction semifield \(V\mathbb{B}(\mathbf{t}):=\operatorname{Frac}(V\mathbb{B}[\![\mathbf{t}]\!])\) as follows: the elements of \(V\mathbb{B}(\mathbf{t})\) are of the form \(\frac{a}{b}\) with \(a,b\in V\mathbb{B}[\![\mathbf{t}]\!]\) and \(b\neq 0\), the map \(V\mathbb{B}[\![\mathbf{t}]\!]\to V\mathbb{B}(\mathbf{t})\) sending \(a\) to \(\frac{a}{1}\) is an embedding, and \(\frac{a}{b}=\frac{c}{d}\) if and only if \(a\odot d=b\odot c\). The sum and product of fractions are defined as usual: \(\frac{a}{b}\odot\frac{c}{d}=\frac{a\odot c}{b\odot d}\) and \(\frac{a}{b}\oplus\frac{c}{d}=\frac{a\odot d\oplus b\odot c}{b\odot d}\). Note that on \(V\mathbb{B}(\mathbf{t})\) we have the order \(\frac{a}{b}\leq\frac{c}{d}\) if and only if \(a\odot d\leq b\odot c\).
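To make these operations concrete, the following Python sketch (our own illustration, restricted to \(m=2\) and finite supports; all function names are ours) computes the vertex set \(V(A)\) of the Newton polyhedron of a finite \(A\subset\mathbb{N}^{2}\) and the resulting operations \(\oplus\) and \(\odot\):

```python
from itertools import product

def pareto_minimal(A):
    """Points of A not dominated componentwise by another point of A."""
    A = set(map(tuple, A))
    return {p for p in A
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in A)}

def newton_vertices(A):
    """Vertices of the Newton polyhedron conv(A) + R_{>=0}^2 of a finite A in N^2:
    the Pareto-minimal points lying on the lower-left convex chain."""
    pts = sorted(pareto_minimal(A))   # x increasing, y strictly decreasing
    hull = []
    for p in pts:
        while len(hull) >= 2:
            o, a = hull[-2], hull[-1]
            cross = (a[0] - o[0]) * (p[1] - o[1]) - (a[1] - o[1]) * (p[0] - o[0])
            if cross <= 0:            # a lies on or above the segment o--p
                hull.pop()
            else:
                break
        hull.append(p)
    return set(hull)

def oplus(a, b):
    """Tropical sum: union of the supports, then project onto the vertices."""
    return newton_vertices(set(a) | set(b))

def odot(a, b):
    """Tropical product: Minkowski sum, then project onto the vertices."""
    return newton_vertices({(p[0] + q[0], p[1] + q[1]) for p, q in product(a, b)})

def leq(a, b):
    """Order on VB[[t]]: a <= b iff a (+) b = b."""
    return oplus(a, b) == newton_vertices(b)
```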
Let \(K[\![\mathbf{t}]\!]\) be the ring of formal power series in the variables \(\mathbf{t}=(t_{1},\dots,t_{m})\) with coefficients in the field \(K\). We define the tropical seminorm
\[\operatorname{trop}:K[\![\mathbf{t}]\!]\to V\mathbb{B}[\![\mathbf{t}]\!] \tag{2}\]
as the composition of the maps \(\operatorname{Supp}:K[\![\mathbf{t}]\!]\to\mathbb{B}[\![\mathbf{t}]\!]\) given by taking the support set of a power series, composed with \(V:\mathbb{B}[\![\mathbf{t}]\!]\to V\mathbb{B}[\![\mathbf{t}]\!]\) given by projecting onto the vertex set of the Newton polyhedron generated by an element of \(\mathbb{B}[\![\mathbf{t}]\!]\). The tropical seminorm is a surjective valuation, so we will call it the tropical valuation.
Recall that the map \(\operatorname{Frac}(\operatorname{trop}):K(\!(\mathbf{t})\!)\to V\mathbb{B}( \mathbf{t})\) is defined by \(\operatorname{Frac}(\operatorname{trop})(\frac{\varphi}{\psi}):=\frac{ \operatorname{trop}(\varphi)}{\operatorname{trop}(\psi)}\). This map is also a surjective valuation by [1, Corollary 7.3], thus we will call it also the tropical valuation and will be denoted by \(\operatorname{trop}\). Moreover, we have:
**Proposition 2.19**.: _The tropical valuation \(\operatorname{trop}:K(\!(\mathbf{t})\!)\to V\mathbb{B}(\mathbf{t})\) is a surjective \(K\)-algebra Bezout valuation._
Proof.: Let \(\varphi=\sum_{I}a_{I}\mathbf{t}^{I}\) and \(\psi=\sum_{I}b_{I}\mathbf{t}^{I}\) be nonzero elements in \(K[\![\mathbf{t}]\!]\). If \(\operatorname{trop}(\varphi+\psi)\neq\operatorname{trop}(\varphi)\oplus \operatorname{trop}(\psi)\), there exists \(I\in\operatorname{trop}(\varphi)\oplus\operatorname{trop}(\psi)\) such that \(a_{I}+b_{I}=0\). Since \(\operatorname{trop}(\varphi)\oplus\operatorname{trop}(\psi)\) is a polynomial, we can choose \(M\in\mathbb{N}\) such that \(a_{I}+Mb_{I}\neq 0\) for all \(I\in\operatorname{trop}(\varphi)\oplus\operatorname{trop}(\psi)\). It follows that \(\operatorname{trop}(\varphi+M\mathbf{t}^{0}\psi)=\operatorname{trop}(\varphi) \oplus\operatorname{trop}(\psi)\) and \(\operatorname{trop}(M\mathbf{t}^{0})=1\).
Now consider \(\frac{\varphi_{1}}{\varphi_{2}}\) and \(\frac{\psi_{1}}{\psi_{2}}\) elements in \(K(\!(\mathbf{t})\!)^{*}\). We apply the above argument to \(\varphi=\varphi_{1}\psi_{2}\) and \(\psi=\varphi_{2}\psi_{1}\) to find
\[\operatorname{trop}\left(\frac{\varphi_{1}}{\varphi_{2}}+M\mathbf{t}^{0}\frac{ \psi_{1}}{\psi_{2}}\right)=\frac{\operatorname{trop}(\varphi+M\mathbf{t}^{0} \psi)}{\operatorname{trop}(\varphi_{2}\psi_{2})}=\operatorname{trop}\left(\frac{ \varphi_{1}}{\varphi_{2}}\right)\oplus\operatorname{trop}\left(\frac{\psi_{1 }}{\psi_{2}}\right),\]
which finishes the proof.
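Reusing the helper functions from the sketch in Remark 2.18, one can check the cancellation phenomenon and the rescaling trick of this proof on a concrete pair of our own choosing, say \(\varphi=t_{1}+t_{2}\) and \(\psi=-t_{1}+t_{2}^{2}\) with \(M=2\):

```python
# Supports of phi = t1 + t2 and psi = -t1 + t2^2 (coefficients +-1).
supp_phi = {(1, 0), (0, 1)}
supp_psi = {(1, 0), (0, 2)}

trop_phi = newton_vertices(supp_phi)   # {(1,0), (0,1)}
trop_psi = newton_vertices(supp_psi)   # {(1,0), (0,2)}

# phi + psi = t2 + t2^2: the t1 terms cancel, so a vertex is lost.
trop_sum = newton_vertices({(0, 1), (0, 2)})
print(trop_sum == oplus(trop_phi, trop_psi))    # False

# phi + 2*psi = -t1 + t2 + 2*t2^2: no cancellation on the vertex set.
trop_sum_M = newton_vertices({(1, 0), (0, 1), (0, 2)})
print(trop_sum_M == oplus(trop_phi, trop_psi))  # True
```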
The following result gives a concrete characterization of the tropical valuation \(\operatorname{trop}:K(\!(\mathfrak{t})\!)\to V\mathbb{B}(\mathfrak{t})\).
**Corollary 2.20**.: _Let \(\operatorname{trop}^{\circ}:K(\!(\mathfrak{t})\!)^{\circ}\to V\mathbb{B}( \mathfrak{t})^{\circ}\) be the induced integral valuation. Then \(V\mathbb{B}(\mathfrak{t})^{\circ}\cong\{\operatorname{trop}(x)\leq 1\}/\{ \operatorname{trop}(x)=1\}\), and \(\operatorname{trop}^{\circ}\) is isomorphic to the resulting quotient projection._
Proof.: Since \(\operatorname{trop}:K(\!(\mathfrak{t})\!)\to V\mathbb{B}(\mathfrak{t})\) is a surjective Bezout seminorm, by Proposition 2.19 and Lemma 2.15, it follows that
\[V\mathbb{B}(\mathfrak{t})^{\circ}\cong\Gamma(K(\!(\mathfrak{t})\!)^{\circ})= K(\!(\mathfrak{t})\!)^{\circ}/U(K(\!(\mathfrak{t})\!)^{\circ}),\]
and \(U(K(\!(\mathfrak{t})\!)^{\circ})=\{x\in K(\!(\mathfrak{t})\!)\::\operatorname {trop}(x)=1\}\) by Corollary 2.17.
The correspondence Theorem 2.12 says that we have a semiring isomorphism
\[\operatorname{Id}(K(\!(\mathfrak{t})\!)^{\circ})\cong\operatorname{Id}_{k}(V \mathbb{B}(\mathfrak{t})^{\circ})\cong\operatorname{Id}_{k}(K(\!(\mathfrak{t} )\!)^{\circ}/\{\operatorname{trop}(x)=1\}). \tag{3}\]
We use this correspondence to characterize the maximal \(k\)-ideals of the semiring \(V\mathbb{B}(\mathfrak{t})^{\circ}\) in Corollary 4.12.
**Example 2.21**.: Suppose that \(m=1\). Since for \(f\in K[\![t]\!]\) it holds that \(\operatorname{trop}(f)=\min(\operatorname{Supp}(f))\), we have that
\[K(\!(t)\!)^{\circ} =\{f/g\in K(\!(t)\!):\min(\min(\operatorname{Supp}(f)),\min( \operatorname{Supp}(g)))=\min(\operatorname{Supp}(g))\}\] \[=\{f/g\in K(\!(t)\!):\min(\operatorname{Supp}(f))\geq\min( \operatorname{Supp}(g))\}=K[\![t]\!],\]
which is Noetherian.
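In code, the one-variable picture of Example 2.21 amounts to comparing orders of vanishing (a small illustration of our own, with supports given as finite sets of exponents):

```python
def ord_t(supp):
    """trop of a nonzero univariate series given by its support: min Supp(f)."""
    return min(supp)

def in_unit_ball(supp_f, supp_g):
    """f/g lies in K((t))^o  iff  trop(f/g) <= 1, i.e. ord(f) >= ord(g)."""
    return ord_t(supp_f) >= ord_t(supp_g)

print(in_unit_ball({2, 5}, {1}))   # (t^2 + t^5)/t lies in K[[t]]: True
print(in_unit_ball({0}, {1}))      # 1/t does not lie in K[[t]]: False
```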
**Proposition 2.22**.: _For \(m>1\), the ring \(K(\!(\mathfrak{t})\!)^{\circ}\) is not Noetherian._
Proof.: We demonstrate the statement in the case of \(\mathfrak{t}=(t,u)\), the general case follows. Define, for each \(n\geq 1\), the element
\[\omega_{n}:=\frac{t^{2n+1}+u^{2n+1}}{t^{2n+1}+t^{n}u^{n}+u^{2n+1}}\in V \mathbb{B}(t,u).\]
Then for each \(n\) we have that \(\omega_{n}\in V\mathbb{B}(t,u)^{\circ}\), and also \(\omega_{n}<\omega_{n+1}\). Define
\[I_{n}=\{q\in K(\!(\mathfrak{t})\!)^{\circ}:\operatorname{trop}(q)\leq\omega_ {n}\}.\]
It follows from the correspondence Theorem 2.12 that \(I_{n}\) defines an ideal in \(K(\!(\mathfrak{t})\!)^{\circ}\), which can also be proven directly. Then it follows that \(I_{n}\subsetneq I_{n+1}\) and the \(I_{n}\) form a strictly increasing sequence of ideals in \(K(\!(\mathfrak{t})\!)^{\circ}\).
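With the two-variable helpers from Remark 2.18, the strict chain \(\omega_{n}<\omega_{n+1}\) used in this proof can also be checked numerically, via the order \(\frac{a}{b}\leq\frac{c}{d}\) iff \(a\odot d\leq b\odot c\) (a sketch of our own):

```python
def omega(n):
    """Numerator and denominator of omega_n, as vertex sets in N^2."""
    num = newton_vertices({(2 * n + 1, 0), (0, 2 * n + 1)})
    den = newton_vertices({(2 * n + 1, 0), (n, n), (0, 2 * n + 1)})
    return num, den

def frac_leq(ab, cd):
    """a/b <= c/d  iff  a (.) d <= b (.) c  in VB[[t]]."""
    (a, b), (c, d) = ab, cd
    return leq(odot(a, d), odot(b, c))

for n in range(1, 5):
    strict = frac_leq(omega(n), omega(n + 1)) and not frac_leq(omega(n + 1), omega(n))
    print(n, strict)   # expect True for every n
```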
## 3. Models over the unit ball and initial degenerations
For a fixed integer \(m\geq 1\), we consider the Bezout-valued field \((K(\!(\mathfrak{t})\!),\operatorname{trop})\) together with its unit ball \(K(\!(\mathfrak{t})\!)^{\circ}\subset K(\!(\mathfrak{t})\!)\) from the previous section. The purpose of this section is to create models over \(K(\!(\mathfrak{t})\!)^{\circ}\) of certain schemes \(X\) defined over \(K(\!(\mathfrak{t})\!)\), as well as initial degenerations of certain \(K(\!(\mathfrak{t})\!)\)-algebras which appear in the theory of differential algebra. In this first section, we follow [12, §4].
### Models over the unit ball
Let \(R\) be an integral domain such that \(\operatorname{Frac}(R)=K\). An \(R\)-_model_ of a scheme \(X\) over the field \(K\) is a flat scheme \(\mathcal{X}\) over \(R\) with generic fiber \(\mathcal{X}_{\eta}=\mathcal{X}_{K}=X\). See [1, Definition 4.1].
Equivalently, if \(B=\operatorname{Spec}(R)\), then a model of \(X\) over \(B\) is a flat morphism2\(\pi:\mathcal{X}\to B\) such that \(X=\mathcal{X}\times_{B}\operatorname{Spec}(K)\). Then \(X\) is called the _generic fibre_ of \(\mathcal{X}\). Sometimes we also ask the map \(\pi:\mathcal{X}\to B\) to be proper.
Footnote 2: Here _flat_ is in the sense of Definition II-28 and _proper_ as defined on page 95 in [1].
**Lemma 3.1**.: _A module over \(K(\!(\mathfrak{t})\!)^{\circ}\) is flat if and only if it is torsion-free._
Proof.: Any flat module is torsion-free. The converse follows from the fact that torsion-free modules over Prufer domains, and hence Bezout domains, are flat (see e.g. [1, Theorem 3.3]).
For another fixed integer \(n\geq 1\), we consider the set of variables \(\{x_{i,J}\::\:1\leq i\leq n,\:J\in\mathbb{N}^{m}\}\), which we abbreviate simply by \(\{x_{i,J}\}\). Let us denote by \(R_{m,n}\) the ring of polynomials in the variables \(\{x_{i,J}\}\) with coefficients in the unit ball \(K(\!(\mathfrak{t})\!)^{\circ}\). It is customary to express the elements of \(R_{m,n}\) as finite sums \(P=\sum_{M}a_{M}E_{M}\) where \(a_{M}\in K(\!(\mathfrak{t})\!)^{\circ}\) and \(E_{M}\) are finite products of the variables \(\{x_{i,J}\}\).
We consider flat schemes over \(K(\!(\mathfrak{t})\!)^{\circ}\) of the form \(\mathcal{X}=Spec(A)\) for \(A=R_{m,n}/J\), thus its generic fiber \(X=\mathcal{X}_{\eta}\) is of the form \(X=Spec(A_{K(\!(\mathfrak{t})\!)})\) for \(A_{K(\!(\mathfrak{t})\!)}:=A\otimes_{K(\!(\mathfrak{t})\!)^{\circ}}K(\!( \mathfrak{t})\!)\). By flatness we have \(A\subset A_{K(\!(\mathfrak{t})\!)}\). A closed subscheme \(Y\) of \(X\) is given by an ideal \(I_{Y}\subset A_{K(\!(\mathfrak{t})\!)}\). The closure \(\overline{Y}\) of \(Y\) in \(\mathcal{X}\) (in the Zariski topology) is defined as the closed subscheme of \(\mathcal{X}\) given by the ideal \(I_{Y}\cap A\). The following result and its proof are straightforward generalizations of [1, Proposition 4.4].
**Proposition 3.2**.: _The closure \(\overline{Y}\) of \(Y\) in \(\mathcal{X}\) is the unique closed subscheme of \(\mathcal{X}\) with generic fiber \(Y\) which is flat over \(K(\!(\mathfrak{t})\!)^{\circ}\)_
In particular, we can apply Proposition 3.2 to \(A=R_{m,n}\), so that \(A_{K(\!(\mathfrak{t})\!)}=F_{m,n}\) is the ring of polynomials in the variables \(\{x_{i,J}\}\) and coefficients in \(K(\!(\mathfrak{t})\!)\). Thus, given an ideal \(I_{X}\subset F_{m,n}\), its contraction \(I_{X}\cap R_{m,n}\) gives already a model \(\mathcal{X}:=\operatorname{Spec}(R_{m,n}/I_{X}\cap R_{m,n})\) over \(K(\!(\mathfrak{t})\!)^{\circ}\) for \(X=\operatorname{Spec}(F_{m,n}/I_{X})\). This shows that we can construct models effectively by taking the closure of ideals \(I\subset F_{m,n}\).
We will introduce the motivation from differential algebra behind these particular choices in Section 3.2.
We denote by \(\kappa(\mathfrak{p})\) the residue field of a point \(\mathfrak{p}\in\operatorname{Spec}(K(\!(\mathfrak{t})\!)^{\circ})\). If \(\mathcal{X}=\operatorname{Spec}(R_{m,n}/J)\) is a flat scheme over \(K(\!(\mathfrak{t})\!)^{\circ}\), the fibre \(\mathcal{X}_{\mathfrak{p}}\) of \(\mathcal{X}\) over \(\mathfrak{p}\) is the spectrum of the ring \((R_{m,n}/J)\otimes_{K(\!(\mathfrak{t})\!)^{\circ}}\kappa(\mathfrak{p})\). If the prime ideal under consideration is a maximal ideal \(\mathfrak{m}\subset K(\!(\mathfrak{t})\!)^{\circ}\), then \(\kappa(\mathfrak{m})=K(\!(\mathfrak{t})\!)^{\circ}/\mathfrak{m}\).
Let \(\mathfrak{m}\subset K(\!(\mathfrak{t})\!)^{\circ}\) be a maximal ideal and consider the quotient projection \(\pi:K(\!(\mathfrak{t})\!)^{\circ}\to K(\!(\mathfrak{t})\!)^{\circ}/ \mathfrak{m}\). By Corollary 4.9 below, there exists an isomorphism \(\psi_{\mathfrak{m}}:K(\!(\mathfrak{t})\!)^{\circ}/\mathfrak{m}\to K\), so there is an induced morphism \(\psi_{\mathfrak{m}}\circ\pi:K(\!(\mathfrak{t})\!)^{\circ}\to K\) (which with the help of Theorem 4.4 can be described concretely as in (14)).
This in turn induces a morphism
\[\psi_{\mathfrak{m}}\circ\pi:R_{m,n}\to K[x_{i,J}] \tag{4}\]
by applying \(\psi_{\mathfrak{m}}\circ\pi\) coefficient-wise: if \(P=\sum_{M}a_{M}E_{M}\) is an element in \(R_{m,n}\), then \(\psi_{\mathfrak{m}}\circ\pi(P)=\sum_{M}\psi_{\mathfrak{m}}\circ\pi(a_{M})E_{M}\).
**Lemma 3.3**.: _Let \(J\subset R_{m,n}\) be an ideal and let \(\mathfrak{m}\subset K(\!(\mathfrak{t})\!)^{\circ}\) be a maximal ideal. Then \((R_{m,n}/J)\otimes_{K(\mathfrak{t})^{\circ}}(K(\!(\mathfrak{t})\!)^{\circ}/ \mathfrak{m})\cong K[x_{i,J}]/\overline{J}_{\mathfrak{m}}\), where \(\overline{J}_{\mathfrak{m}}\subset K[x_{i,J}]\) denotes the extended ideal under the morphism (4)._
Proof.: By little abuse of notation we denote by \(\mathfrak{m}\) also the ideal it generates in \(R_{m,n}\) and also the induced map \(\psi_{\mathfrak{m}}:R_{m,n}/\mathfrak{m}\to K[x_{i,J}]\). Then
\[(R_{m,n}/J)\otimes_{K(\mathfrak{t})^{\circ}}(K(\!(\mathfrak{t}) \!)^{\circ}/\mathfrak{m}) \cong (R_{m,n}/J)/(R_{m,n}/J\cdot\mathfrak{m})\] \[\cong (R_{m,n}/\mathfrak{m})/(J\cdot\mathfrak{m})\] \[\cong K[x_{i,J}]/\overline{J}_{\mathfrak{m}}\]
where \(\overline{J}_{\mathfrak{m}}\) is the ideal generated by \(\psi_{\mathfrak{m}}(J\cdot\mathfrak{m})\subset K[x_{i,J}]\).
In particular, if \(\mathcal{X}=\operatorname{Spec}(R_{m,n}/J)\) is a flat scheme over \(K(\!(\mathfrak{t})\!)^{\circ}\) and \(\mathfrak{m}\subset K(\!(\mathfrak{t})\!)^{\circ}\) is a maximal ideal, then the closed fiber \(\mathcal{X}_{\mathfrak{m}}\) of the model \(\mathcal{X}\) is simply \(\operatorname{Spec}(K[x_{i,J}]/\overline{J}_{\mathfrak{m}})\), where \(\overline{J}_{\mathfrak{m}}\) is the ideal in \(K[x_{i,J}]\) generated by \(\{\psi_{\mathfrak{m}}\circ\pi(P)\::P\in J\}\).
### Tropical differential algebra
Let \(K\) be a field of characteristic zero and let \(n,m\geq 1\) be integers. Recall that \(F_{m,n}\) denotes the ring of polynomials in the variables \(\{x_{i,J}\}\) and coefficients in \(K(\!(\mathfrak{t})\!)\).
In order to define our notion of (initial) degeneration in this setting, we need some concepts from tropical differential algebra, for which we follow [1, SS7.1].
We denote by \(D=\{\frac{\partial}{\partial t_{i}}\::\:i=1,\ldots,m\}\) the usual partial derivations defined on \(K[\![\mathfrak{t}]\!]\). If \(E\) is a monomial in the variables \(\{x_{i,J}\}\) and \(\varphi=(\varphi_{1},\ldots,\varphi_{n})\in K[\![\mathfrak{t}]\!]^{n}\), we define the differential evaluation \(E(\varphi)\) by replacing the variable \(x_{i,J}\) for \(J=(j_{1},\ldots,j_{m})\) with \(\frac{\partial^{j_{1}+\cdots+j_{m}}\varphi_{i}}{\partial t_{1}^{j_{1}}\cdots\partial t_{m}^{j_{m}}}\). If \(P=\sum_{M}\frac{a_{M}}{b_{M}}E_{M}\) is an element in \(F_{m,n}\), where \(a_{M},b_{M}\in K[\![\mathfrak{t}]\!]\) and \(E_{M}\) are differential monomials, we define \(\operatorname{ev}_{P}(\varphi)=\sum_{M}\frac{a_{M}}{b_{M}}E_{M}(\varphi)\).
Denote by \(e_{K}:\mathbb{B}[\![\mathfrak{t}]\!]\to K[\![\mathfrak{t}]\!]\) the section of the map \(\operatorname{Supp}:K[\![\mathfrak{t}]\!]\to\mathbb{B}[\![\mathfrak{t}]\!]\) that sends \(A\subset\mathbb{N}^{m}\) to the formal power series \(e_{K}(A)=\sum_{I\in A}\mathfrak{t}^{I}\). If \(E\) is a monomial in the variables \(\{x_{i,J}\}\) and \(w=(w_{1},\ldots,w_{n})\in\mathbb{B}[\![\mathfrak{t}]\!]^{n}\), then we have
\[V(E(w))=\operatorname{trop}(E(e_{K}(w))), \tag{5}\]
where \(e_{K}(w)=(e_{K}(w_{1}),\ldots,e_{K}(w_{n}))\in K[\![\mathfrak{t}]\!]^{n}\).
The next step is to use these tools from differential algebra together with the tropical seminorm \(\operatorname{trop}:K(\!(\mathfrak{t})\!)\to V\mathbb{B}(\mathfrak{t})\) to define a seminorm \(\operatorname{trop}_{w}:F_{m,n}\longrightarrow V\mathbb{B}(\mathfrak{t})\) that depends on a fixed weight vector \(w=(w_{1},\ldots,w_{n})\in\mathbb{B}[\![\mathfrak{t}]\!]^{n}\).
**Definition 3.4**.: Given a fixed weight vector \(w\in\mathbb{B}[\![\mathfrak{t}]\!]^{n}\), we define the map \(\operatorname{trop}_{w}:F_{m,n}\longrightarrow V\mathbb{B}(\mathfrak{t})\) by sending \(P=\sum_{M}\frac{a_{M}}{b_{M}}E_{M}\) to
\[\operatorname{trop}_{w}(P):=\bigoplus_{M}\operatorname{trop}\biggl{(}\frac{a_{M }}{b_{M}}E_{M}(e_{K}(w))\biggr{)}=\bigoplus_{M}\biggl{(}\frac{\operatorname{ trop}(a_{M})}{\operatorname{trop}(b_{M})}\odot V(E_{M}(w))\biggr{)}. \tag{6}\]
The equality between the two expressions appearing in (6) follows from (5). It was shown in [1, Theorem 4.1] that \(\operatorname{trop}_{w}\) is a \(K\)-algebra seminorm.
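As a small illustration of (6) (our own example, in the ordinary case \(m=n=1\)): take \(w=\{0,2\}\in\mathbb{B}[\![t]\!]\), so that \(e_{K}(w)=1+t^{2}\), and \(P=x_{1,1}+t\,x_{1,0}\in F_{1,1}\). The differential evaluations of the two monomials are \(\frac{d}{dt}(1+t^{2})=2t\) and \(1+t^{2}\), hence

\[\operatorname{trop}_{w}(P)=\operatorname{trop}(2t)\oplus\big(\operatorname{trop}(t)\odot\operatorname{trop}(1+t^{2})\big)=t\oplus(t\odot 1)=t.\]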
Given a weight vector \(w=(w_{1},\ldots,w_{n})\in\mathbb{B}[\![\mathfrak{t}]\!]^{n}\) we will now use the seminorm (6) to construct a \(w\)-translation map
\[\operatorname{t}\!\operatorname{r}_{w}:F_{m,n}\to R_{m,n}. \tag{7}\]
The following construction is inspired by the works [11, 12] in the ordinary case. We will also see in (10) that the explicit expression for the map (7) does not fall very far from the usual expression of the translation to the origin of a torus, which is operative in classical tropical geometry, see [10, §5].
Let us go back to the differential evaluation \(E(e_{K}(w))\) which appears in the definition of (6). Recall that for \(\varphi\in K[\![\mathbf{t}]\!]\setminus\{0\}\), we denote by \(\overline{\varphi}\in K[\![\mathbf{t}]\!]\) the restriction of \(\varphi\) to the vertices of its Newton polyhedron. Then \(\varphi=\overline{\varphi}+\widetilde{\varphi}\), where the _initial form_ of \(\varphi\) is \(\overline{\varphi}\), and it is a polynomial.
For every variable \(x_{i,J}\) appearing in \(E\), we write \(\Theta(J)(e_{K}(w_{i}))=\frac{\partial^{j_{1}+\cdots+j_{m}}e_{K}(w_{i})}{ \partial t_{1}^{j_{1}}\cdots\partial t_{m}^{j_{m}}}\). Thus the differential evaluation \(E(e_{K}(w))\) equals the usual algebraic evaluation of \(E\) at the vector \((\Theta(J)(e_{K}(w_{i}))\::\:i,J)\), and we have a decomposition
\[E(e_{K}(w))=E(\overline{\Theta(J)(e_{K}(w_{i}))}\::\:i,J)+R, \tag{8}\]
where again the _initial form_ of \(E(e_{K}(w))\) is stored in the first term of the right hand side of (8).
If \(P\in F_{m,n}\) satisfies \(\operatorname{trop}_{w}(P)=\frac{a}{b}\neq 0\) with \(A=\operatorname{trop}(a)\) and \(B=\operatorname{trop}(b)\), we set
\[T(\operatorname{trop}_{w}(P))^{-1}:=\frac{B}{A}\in K(\!(\mathbf{t})\!)\]
where \(A,B\) are considered in \(K(\!(\mathbf{t})\!)\) via the natural embedding. Since \(A\) and \(B\) are uniquely determined by \(a\) and \(b\), respectively, \(T(\operatorname{trop}_{w}(P))^{-1}\) exists. Moreover, \(T(\operatorname{trop}_{w}(P))^{-1}\) is well-defined by taking into account the multiplicativity of \(\operatorname{trop}\).
We now specify the polynomial \(\operatorname{tr}_{w}(P)=P_{w}\) by substituting every instance of \(x_{i,J}\) in \(P\) by \(\overline{\Theta(J)(e_{K}(w_{i}))}x_{i,J}\) and then multiplying the result by \(T(\operatorname{trop}_{w}(P))^{-1}\):
\[P_{w}:=\begin{cases}T(\operatorname{trop}_{w}(P))^{-1}P(\overline{\Theta(J)(e _{K}(w_{i}))}x_{i,J}),&\text{ if }\operatorname{trop}_{w}(P)\neq 0\\ 0,&\text{ if }\operatorname{trop}_{w}(P)=0.\end{cases} \tag{9}\]
**Proposition 3.5**.: _If \(P=\sum_{M}a_{M}E_{M}\in F_{m,n}\), then \(P_{w}\) is an element in \(R_{m,n}\). Also, if \(\operatorname{trop}_{w}(P)\neq 0\), then_
\[P_{w}=\sum_{M}[T(\operatorname{trop}_{w}(P))^{-1}a_{M}E_{M}(\overline{\Theta(J )(e_{K}(w_{i}))})]E_{M}, \tag{10}\]
_where \(E_{M}(\overline{\Theta(J)(e_{K}(w_{i}))})\) denotes the usual algebraic evaluation of the monomial \(E_{M}\) at the vector \((\overline{\Theta(J)(e_{K}(w_{i}))}\::\:i,J)\)._
Proof.: It was shown in [10, Lemma 7.14] that (9) is an element in \(R_{m,n}\) when instead of using \((\overline{\Theta(J)(e_{K}(w_{i}))}\::\:i,J)\) one uses \((T(w_{i},J)\::\:i,J)\), where \(T(w_{i},J)=e_{K}(\operatorname{Vert}((w_{i}-J)_{\geq 0}))\). These two expressions are linked as follows
\[T(w_{i},J)=e_{K}(\operatorname{Supp}(\overline{\Theta(J)(e_{K}(w_{i}))})), \quad\text{for all }i,J.\]
Thus the first part follows since \(T(w_{i},J)\) and \(\overline{\Theta(J)(e_{K}(w_{i}))}\) have the same support for all \(i,J\) and non-negative integer coefficients.
The second part follows from (8), since replacing every instance of \(x_{i,J}\) in \(P\) by \(\overline{\Theta(J)(e_{K}(w_{i}))}x_{i,J}\) sends the monomial \(E_{M}\) of \(P\) to \(E_{M}(\overline{\Theta(J)(e_{K}(w_{i}))})E_{M}\).
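Continuing the small example given after Definition 3.4 (again our own illustration): the initial forms of the derivatives are \(\overline{\Theta(0)(e_{K}(w))}=1\) and \(\overline{\Theta(1)(e_{K}(w))}=2t\), and \(\operatorname{trop}_{w}(P)=t\) gives \(T(\operatorname{trop}_{w}(P))^{-1}=\frac{1}{t}\), so that

\[P_{w}=\frac{1}{t}\big(2t\,x_{1,1}+t\,x_{1,0}\big)=2x_{1,1}+x_{1,0}\in R_{1,1}.\]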
**Remark 3.6**.: Note that the following polynomial
\[\widetilde{P_{w}}=\sum_{M}[T(\operatorname{trop}_{w}(P))^{-1}a_{M}E_{M}(e_{K}( w))]E_{M},\]
is a modified version of (10), and it represents the exact copy in this setting of the translation to the origin of a torus appearing in [1, §5]. However, our definition is simpler since it extracts only the initial data (which is a polynomial) of the evaluations \(E_{M}(e_{K}(w))\) (which are series), as we showed above.
**Definition 3.7**.: Let \(w=(w_{1},\dots,w_{n})\in\mathbb{B}\llbracket\textbf{t}\rrbracket^{n}\). For a given ideal \(G\subset F_{m,n}\), we define its _\(w\)-translation_ by \(G_{w}\) as the ideal in \(R_{m,n}\) generated by \(\{P_{w}\,:\,P\in G\}\).
**Proposition 3.8**.: _Let \(w=(w_{1},\dots,w_{n})\in\mathbb{B}\llbracket\textbf{t}\rrbracket^{n}\) and \(G\subset F_{m,n}\) be an ideal. If \(R_{m,n}/G_{w}\) is torsion-free, then_
\[\mathcal{X}(w):=\operatorname{Spec}(R_{m,n}/G_{w})\]
_is a model over \(K(\!(\textbf{t})\!)^{\circ}\) for the generic fibre \(\mathcal{X}(w)_{\eta}\)._
Proof.: This is a consequence of Lemma 3.1.
Let \(w\in\mathbb{B}\llbracket\textbf{t}\rrbracket^{n}\) and \(G\subset F_{m,n}\) be such that \(R_{m,n}/G_{w}\) is torsion-free. The fibre \(\mathcal{X}(w)_{\mathfrak{p}}\) of \(\mathcal{X}(w)\) over \(\mathfrak{p}\in\operatorname{Spec}(K(\!(\textbf{t})\!)^{\circ})\) is the spectrum of the ring \((R_{m,n}/G_{w})\otimes_{K(\!(\textbf{t})\!)^{\circ}}\kappa(\mathfrak{p})\).
**Definition 3.9**.: Let \(w=(w_{1},\dots,w_{n})\in\mathbb{B}\llbracket\textbf{t}\rrbracket^{n}\) and \(G\subset F_{m,n}\) be such that \(R_{m,n}/G_{w}\) is torsion-free. Further let \(\mathcal{X}(w)=\operatorname{Spec}(R_{m,n}/G_{w})\) with generic fibre \(X=\mathcal{X}(w)_{\eta}\). Given a point \(\mathfrak{p}\in\operatorname{Spec}(K(\!(\textbf{t})\!)^{\circ})\setminus\{\eta\}\) the fibre \(\mathcal{X}(w)_{\mathfrak{p}}\) of \(\mathcal{X}(w)\) over \(\mathfrak{p}\in\operatorname{Spec}(K(\!(\textbf{t})\!)^{\circ})\) is called the _initial degeneration_ of \(X\) at the pair \((w,\mathfrak{p})\).
**Theorem 3.10**.: _Let \(w=(w_{1},\dots,w_{n})\in\mathbb{B}\llbracket\textbf{t}\rrbracket^{n}\) and \(G\subset F_{m,n}\) be an ideal such that \(R_{m,n}/G_{w}\) is torsion-free. Then for any maximal ideal \(\mathfrak{m}\subset K(\!(\textbf{t})\!)^{\circ}\) we have that_
\[\mathcal{X}(w)_{\mathfrak{m}}\cong\operatorname{Spec}\big{(}K[x_{i,J}]/ \overline{G}_{w}\!\big{)}\]
_where \(\overline{G}_{w}\subset K[x_{i,J}]\) denotes the extended ideal under the morphism (4)._
Proof.: We have that \(\mathcal{X}(w)_{\mathfrak{m}}\) is the spectrum of \((R_{m,n}/G_{w})\otimes_{K(\!(\textbf{t})\!)^{\circ}}(K(\!(\textbf{t})\!)^{ \circ}/\mathfrak{m})\), which is isomorphic to \(K[x_{i,J}]/(\overline{G_{w}}\!)_{\mathfrak{m}}\) by Lemma 3.3.
**Remark 3.11**.: In the case of \(m=1\) (i.e. the case of discrete valuation rings) since \(K\llbracket\textbf{t}\rrbracket=K(\!(\textbf{t})\!)^{\circ}\subset K(\!(\textbf{ t})\!)\) it follows that \(\operatorname{Spec}(K(\!(\textbf{t})\!)^{\circ})=\{(0),\mathfrak{m}=(t)\}\). So there is only one initial degeneration \(\mathcal{X}(w)_{\mathfrak{m}}\) for every \(w=(w_{1},\dots,w_{n})\in\mathbb{B}\llbracket\textbf{t}\rrbracket^{n}\). As the residue field of \(K(\!(\textbf{t})\!)^{\circ}\) is \(\kappa(\mathfrak{m})=K\) we have that \(\mathcal{X}(w)_{\mathfrak{m}}\) is a scheme over \(K\).
We will study the maximal spectrum of \(K(\!(\textbf{t})\!)^{\circ}\) for the general case of \(m>1\) in §4.
### Initial ideals along maximal ideals
Let \(G\subset F_{m,n}\) be any ideal and \(w\in\mathbb{B}\llbracket\textbf{t}\rrbracket^{n}\), so that \(G_{w}\) is an ideal of \(R_{m,n}\). If \(\mathfrak{m}\subset K(\!(\textbf{t})\!)^{\circ}\) is a maximal ideal, in this section we will focus on the extended ideal \((\overline{G_{w}}\!)_{\mathfrak{m}}\subset K[x_{i,J}]\) under the morphism (4), no matter if \(R_{m,n}/G_{w}\) is flat or not. We will show that these extended ideals share many properties with the construction of initial ideals in classical tropical geometry. We start with the following result
**Proposition 3.12**.: _Let \(G\subset F_{m,n}\) be an ideal, \(w=(w_{1},\dots,w_{n})\in\mathbb{B}\llbracket\textbf{t}\rrbracket^{n}\) and \(\mathfrak{m}\subset K(\!(\textbf{t})\!)^{\circ}\) a maximal ideal. Then the extended ideal \((\overline{G_{w}}\!)_{\mathfrak{m}}\subset K[x_{i,J}]\) under the morphism (4) is the ideal in \(K[x_{i,J}]\) generated by \(\{\psi_{\mathfrak{m}}\circ\pi(P_{w})\,:\,P\in G\}\)._
Proof.: If \(P=\sum_{M}a_{M}E_{M}\in F_{m,n}\) and \(\operatorname{trop}_{w}(P)\neq 0\), we have from (10) that \(P_{w}=\sum_{M}[T(\operatorname{trop}_{w}(P))^{-1}a_{M}E_{M}(\overline{\Theta(J) (e_{K}(w_{i}))})]E_{M}\), thus
\[\psi_{\mathfrak{m}}\circ\pi(P_{w})=\sum_{M}\psi_{\mathfrak{m}}\circ\pi[T( \operatorname{trop}_{w}(P))^{-1}a_{M}E_{M}(\overline{\Theta(J)(e_{K}(w_{i}))}) ]E_{M}. \tag{11}\]
Now, it is clear that the ideal in \(K[x_{i,J}]\) generated by \(\{\psi_{\mathfrak{m}}\circ\pi(P_{w})\,:\,P\in G\}\) is contained in \((\overline{G_{w}})_{\mathfrak{m}}\). Conversely, an element of \(G_{w}\) is a finite sum \(R=\sum_{P}Q_{P}P_{w}\) with \(P\in G\) and \(Q_{P}\in R_{m,n}\), thus every generator \((\overline{G_{w}})_{\mathfrak{m}}\) is of the form
\[\psi_{\mathfrak{m}}\circ\pi(R)=\psi_{\mathfrak{m}}\circ\pi\big{(}\sum_{P}Q_{P }P_{w}\big{)}=\sum_{P}\psi_{\mathfrak{m}}\circ\pi(Q_{P})\psi_{\mathfrak{m}} \circ\pi(P_{w}),\]
since the map \(\psi_{\mathfrak{m}}\circ\pi\) from (4) is a homomorphism of rings.
**Remark 3.13**.: Note that the following polynomial
\[\psi_{\mathfrak{m}}\circ\pi(\widetilde{P_{w}})=\sum_{M}\psi_{\mathfrak{m}} \circ\pi[T(\operatorname{trop}_{w}(P))^{-1}a_{M}E_{M}(e_{K}(w))]E_{M},\]
is a modified version of (11), and it would be the exact copy in this setting of the initial form appearing in [1, Remark 5.7], which is the usual definition of initial form which is operative in tropical geometry. Once again, our definition is simpler for the same reasons described in Remark 3.6.
**Definition 3.14**.: For a pair \((w,\mathfrak{m})\) of a weight \(w=(w_{1},\ldots,w_{n})\in\mathbb{B}\llbracket\mathfrak{t}\rrbracket^{n}\) and a maximal ideal \(\mathfrak{m}\subset K(\llbracket\mathfrak{t}\rrbracket)^{\circ}\), we denote (11) by \(\operatorname{in}_{(w,\mathfrak{m})}(P):=\psi_{\mathfrak{m}}\circ\pi(P_{w})\), and call it the _initial form of \(P\in F_{m,n}\) at \((w,\mathfrak{m})\)_._
Similarly, if \(G\subset F_{m,n}\) is an ideal, we denote by \(\operatorname{in}_{(w,\mathfrak{m})}(G)\) the ideal in \(K[x_{i,J}]\) generated by \(\{\operatorname{in}_{(w,\mathfrak{m})}(P)\,:\,P\in G\}\), and call it the _initial ideal of \(G\) at \((w,\mathfrak{m})\)_._
We now return to the differential setting. The set \(D=\{\frac{\partial}{\partial t_{i}}\,:\,i=1,\ldots,m\}\) of partial derivations defined on \(K\llbracket\mathfrak{t}\rrbracket\) can be extended to \(K(\!(\mathfrak{t})\!)\) in the usual way, and also to \(F_{m,n}\). If we denote them also by \(D\), the pair \((F_{m,n},D)\) is a differential ring, and an ideal \(G\subset F_{m,n}\) is differential if it is closed under the action of \(D\).
We are particularly interested in the case in which \(G\subset F_{m,n}\) is a differential ideal and \(m>1\).
**Example 3.15**.: Let \(P=x_{(1,1)}-tx_{(0,0)}\in K(\!(t,u)\!)\{x\}=F_{2,1}\) and \(w=\mathbb{N}^{2}\setminus\{(1,1)\}\in\mathbb{B}\llbracket t,u\rrbracket\). Then \(\operatorname{trop}_{w}(P)=\{(1,0),(0,1)\}\) and
\[P_{w}=x_{(1,1)}-\frac{t}{t+u}x_{(0,0)}.\]
Consider a maximal ideal \(\mathfrak{m}\subset K(\llbracket t,u\rrbracket)^{\circ}\). We see from Example 4.10 that
\[P_{w}\mod\mathfrak{m}=\begin{cases}x_{(1,1)},&\frac{t}{t+u}\in\mathfrak{m} \\ x_{(1,1)}-x_{(0,0)},&\frac{t}{t+u}\notin\mathfrak{m}\end{cases}\]
The derivatives of \(P\) are of the form
\[P_{(j_{1},j_{2})}=x_{(1+j_{1},1+j_{2})}-tx_{(j_{1},j_{2})}-j_{1}x_{(j_{1}-1,j_ {2})}.\]
We obtain for every \((j_{1},j_{2})\neq(0,0)\) that \(\operatorname{trop}_{w}(P_{(j_{1},j_{2})})=\{(0,0)\}\) and
\[(P_{(j_{1},j_{2})})_{w}=\begin{cases}x_{(1+j_{1},1+j_{2})}-t(t+u)x_{(j_{1},j_{2 })}-j_{1}x_{(j_{1}-1,j_{2})},&(j_{1},j_{2})=(1,1)\\ x_{(1+j_{1},1+j_{2})}-tx_{(j_{1},j_{2})}-j_{1}(t+u)x_{(j_{1}-1,j_{2})},&(j_{1},j _{2})=(2,1)\\ x_{(1+j_{1},1+j_{2})}-tx_{(j_{1},j_{2})}-j_{1}x_{(j_{1}-1,j_{2})},&\text{ otherwise}.\end{cases}\]
Since the differential ideal generated by \(P\) in \(F_{2,1}\), denoted as \([P]\), is prime, the algebraic ideal generated by the \((P_{(j_{1},j_{2})})_{w}\) already gives us \([P]_{w}\). Moreover, there is no element in \([P]\) that is in \(K(\!(t,u)\!)[x_{(0,0)},x_{(1,0)},x_{(0,1)}]\) and since all differential monomials of \((P_{(j_{1},j_{2})})_{w}\) are the same as that of \(P_{(j_{1},j_{2})}\), there is also no such element in \([P]_{w}\). All differential monomials involving higher derivatives can be reduced to one of order one or zero such that we obtain
\[R_{2,1}/[P]_{w}\cong K(\!(t,u)\!)^{\circ}[x_{(0,0)},x_{(1,0)},x_{(0,1)}].\]
A characterization of prime ideals of such a polynomial ring is given e.g. in [11].
## 4. Maximal ideals of the unit ball
In SS3 we saw that for any point \(\mathfrak{p}\in\operatorname{Spec}(K(\!(\mathfrak{t})\!)^{\circ})\setminus\{\eta\}\), we can construct a degeneration of a generic fibre. Among the most important prime ideals are the maximal ideals, and in this section we characterize the maximal ideals of the Bezout domain \(K(\!(\mathfrak{t})\!)^{\circ}\). We fix an integer \(m\geq 1\). First we recall the definition of monomial order.
**Definition 4.1**.: Let \(<\) be a total order on the monoid \((\mathbb{N}^{m},+,0)\). Then \(<\) is a _monomial order_ if
* \(0<a\) for all \(a\in\mathbb{N}^{m}\setminus\{0\}\),
* \(a<b\) implies \(a+c<b+c\) for all \(c\in\mathbb{N}^{m}\).
**Lemma 4.2**.: _Every monomial order on \(\mathbb{N}^{m}\) satisfies that for any non-empty subset \(S\subset\mathbb{N}^{m}\) there is a unique minimum \(\min_{<}(S)\) which moreover is a vertex of the convex hull of \(S\). In particular, a monomial order is a well-order._
Proof.: Consider \(a\in\mathbb{N}^{m}\) and observe that, by Definition 4.1(b), \(a=\min_{<}\{a+\mathbb{N}^{m}\}\). For a subset \(\emptyset\neq S\subset\mathbb{N}^{m}\), this observation generalizes as follows. Let \(P(S)\) be the convex hull of \(S+\mathbb{N}^{m}\) in \(\mathbb{R}^{m}\) and let \(U(S)\) be the union of all bounded faces of \(P(S)\). Then \(U(S)\cap\mathbb{N}^{m}\) contains the vertex set of \(S\) (which agrees with the set of vertices of \(P(S)\)) and moreover
\[\min{}_{<}\{U(S)\cap\mathbb{N}^{m}\}=\min{}_{<}\{S\}.\]
We need to show that this minimum is achieved at a unique vertex of \(S\). As \(U(S)\cap\mathbb{N}^{m}\) is finite, the monomial order \(<\) can be represented by a weight vector \(w\in\mathbb{R}^{m}\), i.e. \(\min_{<}\{U(S)\cap\mathbb{N}^{m}\}=\min_{w}\{U(S)\cap\mathbb{N}^{m}\}\) for some \(w\in\mathbb{R}^{m}\). Let \(m_{1}<\cdots<m_{k}\) be the elements of \(U(S)\cap\mathbb{N}^{m}\). Then any \(w\in\mathbb{R}^{m}\) satisfying \(w\cdot m_{1}<w\cdot m_{2}<\cdots<w\cdot m_{k}\) represents \(<\); moreover, we may assume \(w\in\mathbb{N}^{m}_{>0}\) (see e.g. Lemma 3.1.1 in [13]).
Suppose \(m=\min_{<}\{U(S)\cap\mathbb{N}^{m}\}\) is not a vertex. Then there exist \(0<\lambda_{i}<1\) with \(\sum_{i=1}^{k}\lambda_{i}=1\) such that \(m=\sum_{i=1}^{k}\lambda_{i}m_{i}\). In particular,
\[w\cdot m=\sum_{i=1}^{k}\lambda_{i}\,w\cdot m_{i}.\]
So either \(\lambda_{1}=\cdots=\lambda_{k}\) and \(w\cdot m=w\cdot m_{1}=\cdots=w\cdot m_{k}\), but this contradicts \(w\cdot m_{1}<\cdots<w\cdot m_{k}\); or \(w\cdot m_{i}<w\cdot m<w\cdot m_{j}\) for some \(1\leq i,j\leq k\) which contradicts the assumption that \(m=\min_{w}\{U(S)\cap\mathbb{N}^{m}\}\).
**Remark 4.3**.: Lemma 4.2 is closely related to [1, Lemma 3.7], yet slightly different: Aroca and Rond are dealing with total orders \(<\) in \(\mathbb{R}^{m}\) compatible with
its group structure, and which can be seen as monomial orders on \(\mathbb{R}^{m}\). In loc. cit. it is shown that for any such order \(<\) and any rational polyhedral cone \(\sigma\subset\mathbb{R}^{m}\) which is non-negative, i.e., \(0\leq a\) for all \(a\in\sigma\), the set \(\sigma\cap\mathbb{Z}^{m}\) is well ordered.
The aim of this section is to establish the following bijection between the set of monomial orders on the monoid \((\mathbb{N}^{m},+,0)\) and the set of maximal ideals of \(K(\!(\mathfrak{t})\!)^{\circ}\).
**Theorem 4.4**.: _There is an identification between maximal ideals \(\mathfrak{m}\) of \(K(\!(\mathfrak{t})\!)^{\circ}\) and monomial orders on \(\mathbb{N}^{m}\) defined, for \(I,J\in\mathbb{N}^{m}\), as_
\[I<J\quad\Longleftrightarrow\quad\frac{\mathfrak{t}^{J}}{\mathfrak{t}^{I}+ \mathfrak{t}^{J}}\in\mathfrak{m}\quad\Longleftrightarrow\quad\frac{ \mathfrak{t}^{I}}{\mathfrak{t}^{I}+\mathfrak{t}^{J}}\notin\mathfrak{m}. \tag{12}\]
We will use the remainder of this section to prove this result; the respective identifications are given in Proposition 4.5 and Proposition 4.11.
Given \(f\in K[\![\mathfrak{t}]\!]\) and \(I\in\mathbb{N}^{m}\), we may denote by \(f_{I}\) the coefficient of \(\mathfrak{t}^{I}\) in \(f\).
**Proposition 4.5**.: _Given a monomial order \(<\) on \(\mathbb{N}^{m}\), the set_
\[\mathfrak{m}_{<}:=\{f/g\in K(\!(\mathfrak{t})\!)^{\circ}:f_{\min_{<}(\mathrm{ Supp}(g))}=0\}\]
_is a maximal ideal of \(K(\!(\mathfrak{t})\!)^{\circ}\)._
Proof.: We claim that \(\mathfrak{m}_{<}\) is the kernel of the map
\[\begin{split} c_{<}:K(\!(\mathfrak{t})\!)^{\circ}& \to K\\ \frac{f}{g}&\longmapsto\frac{f_{\min_{<}(\mathrm{ Supp}(g))}}{g_{\min_{<}(\mathrm{Supp}(g))}}.\end{split} \tag{13}\]
Let us first show that the map \(c_{<}\) is well-defined. Let \(f/g=h/\ell\) with \(f,g,h,\ell\in K[\![\mathfrak{t}]\!]\), we need to show that
\[f_{\min_{<}(\mathrm{Supp}(g))}\ell_{\min_{<}(\mathrm{Supp}(\ell))}=h_{\min_{< }(\mathrm{Supp}(\ell))}g_{\min_{<}(\mathrm{Supp}(g))}.\]
If \(f_{\min_{<}(\mathrm{Supp}(g))}=0\) or \(h_{\min_{<}(\mathrm{Supp}(\ell))}=0\), the equality follows easily. Let us suppose that \(f_{\min_{<}(\mathrm{Supp}(g))}\neq 0\) and \(h_{\min_{<}(\mathrm{Supp}(\ell))}\neq 0\). Note that \(\min_{<}(\mathrm{Supp}(g))\leq\min_{<}(\mathrm{Supp}(f))\) and \(\min_{<}(\mathrm{Supp}(g))<\min_{<}(\mathrm{Supp}(f))\) if and only if \(f_{\min_{<}(\mathrm{Supp}(g))}=0\). Thus, \(\min_{<}(\mathrm{Supp}(f))=\min_{<}(\mathrm{Supp}(g))\) and \(\min_{<}(\mathrm{Supp}(h))=\min_{<}(\mathrm{Supp}(\ell))\) for the same reason. Since \(f\ell=gh\), we have
\[X:=\operatorname{trop}(f\ell)=\operatorname{trop}(f)\operatorname{trop}(\ell)= \operatorname{trop}(hg)=\operatorname{trop}(h)\operatorname{trop}(g),\]
and \(I\in X\) if and only if there exist unique \(J,K,L,M\) in the respective vertex sets such that \(I=J+K=L+M\). In particular, for every \(I\in X\) we have
\[(f\ell)_{I}=f_{J}\ell_{K}=g_{L}h_{M}=(gh)_{I}.\]
We set
\[I_{<}:=\min_{<}(\mathrm{Supp}(f))+\min_{<}(\mathrm{Supp}(\ell))=\min_{<}( \mathrm{Supp}(g))+\min_{<}(\mathrm{Supp}(h)).\]
We need to show that \(I_{<}\in X\). Suppose that \(I_{<}=A+B\) for some \(A\in\operatorname{trop}(f)\) and \(B\in\operatorname{trop}(\ell)\); then \(\min_{<}(\mathrm{Supp}(f))\leq A\) and \(\min_{<}(\mathrm{Supp}(\ell))\leq B\). Thus,
\[I_{<}=\min_{<}(\mathrm{Supp}(f\ell))=\min_{<}(\mathrm{Supp}(f))+\min_{<}( \mathrm{Supp}(\ell))\leq A+B=I_{<},\]
which holds only for the uniquely determined \(A=\min_{<}(\mathrm{Supp}(f))\) and \(B=\min_{<}(\mathrm{Supp}(\ell))\). Thus, \(I_{<}\in X\).
The additivity and multiplicativity of \(c_{<}\) can be shown in a similar way. Thus, \(c_{<}\) defines a surjective homomorphism such that, by the first isomorphism theorem,
\[K(\!(\mathfrak{t})\!)^{\circ}/\mathfrak{m}_{<}=K(\!(\mathfrak{t})\!)^{\circ}/ \mathrm{ker}(c_{<})\cong K,\]
which is the case exactly if \(\mathfrak{m}_{<}\) is a maximal ideal.
Note that given a monomial order \(<\) on \(\mathbb{N}^{m}\), expressions of the form \(\frac{\mathfrak{t}^{J}}{\mathfrak{t}^{I}+\mathfrak{t}^{J}}\) are in \(\mathfrak{m}_{<}\) if and only if \(I<J\). Later we will use exactly those fractions to define a monomial order on \(\mathbb{N}^{m}\) from a given maximal ideal \(\mathfrak{m}\subset K(\!(\mathfrak{t})\!)^{\circ}\). Before we do that, we first establish that \(K(\!(\mathfrak{t})\!)^{\circ}/\mathfrak{m}\cong K\) for every maximal ideal \(\mathfrak{m}\) of \(K(\!(\mathfrak{t})\!)^{\circ}\).
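As a concrete illustration, take \(m=2\) and \(q=\frac{t}{t+u}\). If \(<\) is a monomial order with \((1,0)<(0,1)\), then \(\min_{<}(\operatorname{Supp}(t+u))=(1,0)\), so \(c_{<}(q)=1\) and \(q\notin\mathfrak{m}_{<}\); if instead \((0,1)<(1,0)\), then \(\min_{<}(\operatorname{Supp}(t+u))=(0,1)\), so \(c_{<}(q)=0\) and \(q\in\mathfrak{m}_{<}\). This is exactly the dichotomy that reappears in Example 4.10 below.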
**Definition 4.6**.: Let \(a\leq b\) in \(V\mathbb{B}[\mathfrak{t}]\), with supports \(A\),\(B\) respectively. We say that \(a\) is _irrelevant_ for \(b\) if \(A\cap B=\emptyset\), and we write \(a\ll b\). Otherwise we say that \(a\) is relevant for \(b\). If \(\frac{a}{b}\leq\frac{c}{d}\) in \(V\mathbb{B}(\mathfrak{t})\), then \(\frac{a}{b}\ll\frac{c}{d}\) if and only if \(a\odot d\ll c\odot b\) in \(V\mathbb{B}[\mathfrak{t}]\).
**Lemma 4.7**.: _Let \(q\in K(\!(\mathfrak{t})\!)^{\circ}\) be such that \(\mathrm{trop}(q)\ll 1\). Then \(q\) is contained in every maximal ideal of \(K(\!(\mathfrak{t})\!)^{\circ}\)._
Proof.: Let \(\mathfrak{m}\) be a maximal ideal of \(K(\!(\mathfrak{t})\!)^{\circ}\). Suppose it does not contain \(q\). Then \((1)=\mathfrak{m}+(q)\) by maximality of \(\mathfrak{m}\). So there are \(r\in\mathfrak{m}\) and \(s\in K(\!(\mathfrak{t})\!)^{\circ}\) such that \(1=r+qs\). Since \(\mathrm{trop}(q)\ll 1\) and \(\mathrm{trop}(s)\leq 1\) we have \(\mathrm{trop}(qs)\ll 1\). Now write \(r=f/g\) and \(qs=h/g\) for \(f,g,h\in K[\![\mathfrak{t}]\!]\) (we have already equalized the denominators here). Then \(1=r+qs=(f+h)/g\). But \(\mathrm{trop}(h/g)\ll 1\) means that the support of \(h\) has no vertices of \(g\). So \(f\) has in its support all the vertices of \(g\), meaning that \(\mathrm{trop}(f)=\mathrm{trop}(g)\), so that \(r=f/g\) is a unit in \(K(\!(\mathfrak{t})\!)^{\circ}\). But \(r\in\mathfrak{m}\), so \(\mathfrak{m}=(1)\), which is not a maximal ideal of \(K(\!(\mathfrak{t})\!)^{\circ}\).
**Lemma 4.8**.: _Given \(q\in K(\!(\mathfrak{t})\!)^{\circ}\), there exists an integer \(n\geq 1\) and elements \(\alpha_{1},\ldots,\alpha_{n}\in K\) such that \(\mathrm{trop}\left(\prod_{k=1}^{n}(q-\alpha_{k})\right)\ll 1\)._
Proof.: Let \(q=f/g\) with \(f,g\in K[\![\mathfrak{t}]\!]\). If \(f\ll g\), then \((q-0)=q\ll 1\). Otherwise, if \(\{I_{1}\ldots,I_{n}\}\) is the vertex set of \(g\), then \(\mathrm{trop}(f)\cap\{I_{1}\ldots,I_{n}\}\neq\emptyset\). For each \(k=1,\ldots,n\), we set \(\alpha_{k}:=\frac{f_{I_{k}}}{g_{I_{k}}}\). Notice that the denominator \(g_{I_{k}}\) is never zero, but the numerator, and thus \(\alpha_{k}\), might be zero. Let \(r=\prod_{k=1}^{n}(q-\alpha_{k})\). We claim that \(\mathrm{trop}(r)\ll 1\). First, we compute
\[r=\prod_{k=1}^{n}(q-\alpha_{k})=\frac{\prod_{k=1}^{n}[f-\alpha_{k}g]}{g^{n}}.\]
We need to show that \(\mathrm{trop}(\prod_{k=1}^{n}[f-\alpha_{k}g])\ll\mathrm{trop}(g^{n})\). Note that the vertices of \(g^{n}\) are of the form \(nI_{k}\) for \(k=1,\ldots,n\). Fix a value for \(k\); we prove that \(nI_{k}\) is not a vertex of \((f-\alpha_{1}g)\cdots(f-\alpha_{n}g)\): If \(I_{k}\notin\mathrm{trop}(f)\cap\{I_{1}\ldots,I_{n}\}\), then \(\alpha_{k}=0\) and \((f-\alpha_{k}g)_{I_{k}}=f_{I_{k}}=0\). If \(I_{k}\in\mathrm{trop}(f)\cap\{I_{1}\ldots,I_{n}\}\), then \((f-\alpha_{k}g)_{I_{k}}=0\).
Pick any weight vector \(w\in\mathbb{R}^{m}\) such that \(I_{k}\) is the \(w\)-minimal vertex of \(\mathrm{Supp}(g)\). Since \(\mathrm{trop}(f)\leq\mathrm{trop}(g)\) we have that \(\mathrm{trop}(f-\alpha_{i}g)\leq\mathrm{trop}(g)\) for each \(i\), and so either \(I_{k}\) appears in \(\mathrm{trop}(f-\alpha_{i}g)\), in which case it is the \(w\)-minimal vertex of it, or \(I_{k}\) does not appear, in which case the \(w\)-minimal vertex of \(f-\alpha_{i}g\) is strictly bigger than \(I_{k}\). The latter happens at least once, namely for \(i=k\), because by construction the coefficient \((f-\alpha_{k}g)_{I_{k}}\) is zero. The \(w\)-minimal vertex of the product \(\prod_{i}(f-\alpha_{i}g)\) is therefore strictly bigger (with respect to the \(w\)-ordering) than \(nI_{k}\). Since \(nI_{k}\) is the \(w\)-minimal vertex of \(\mathrm{Supp}(g^{n})\), it follows that \(nI_{k}\) is not a vertex of the product.
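To illustrate the construction in this proof, take \(m=2\) and \(q=\frac{1+t}{t+u}\). The vertex set of \(g=t+u\) is \(\{(1,0),(0,1)\}\), which gives \(\alpha_{1}=1\) and \(\alpha_{2}=0\), and
\[(q-\alpha_{1})(q-\alpha_{2})=\frac{(1-u)(1+t)}{(t+u)^{2}}=\frac{1+t-u-tu}{t^{2}+2tu+u^{2}}.\]
The vertices of the denominator are \((2,0)\) and \((0,2)\), and neither is a vertex of the numerator (whose only vertex is \((0,0)\)), so \(\operatorname{trop}((q-\alpha_{1})(q-\alpha_{2}))\ll 1\), as the lemma asserts.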
**Corollary 4.9**.: _Let \(\mathfrak{m}\) be a maximal ideal of \(K(\!(\mathfrak{t})\!)^{\circ}\). Then the composition of the inclusion \(K\hookrightarrow K(\!(\mathfrak{t})\!)^{\circ}\) with the quotient map \(K(\!(\mathfrak{t})\!)^{\circ}\to K(\!(\mathfrak{t})\!)^{\circ}/\mathfrak{m}\) is an isomorphism. Consequently, \(K(\!(\mathfrak{t})\!)^{\circ}/\mathfrak{m}\cong K\)._
Proof.: Let \(q\in K(\!(\mathfrak{t})\!)^{\circ}\) and pick \(\alpha_{1},\dots,\alpha_{n}\in K\) as in Lemma 4.8. Then Lemma 4.7 implies that \(\prod_{i}(q-\alpha_{i})\in\mathfrak{m}\). Since \(\mathfrak{m}\) is prime, at least one of the factors is in \(\mathfrak{m}\). Thus for every \(q\in K(\!(\mathfrak{t})\!)^{\circ}\) there is an \(\alpha\in K\) such that \(q-\alpha\in\mathfrak{m}\). This is equivalent to the statement that the composition \(K\to K(\!(\mathfrak{t})\!)^{\circ}\to K(\!(\mathfrak{t})\!)^{\circ}/ \mathfrak{m}\) is surjective and an isomorphism as injectivity follows from the fact that \(K\cap\mathfrak{m}=\{0\}\).
Now let us show that a maximal ideal \(\mathfrak{m}\) of \(K(\!(\mathfrak{t})\!)^{\circ}\) induces a monomial order on \(\mathbb{N}^{m}\). For any \(I,J\in\mathbb{N}^{m}\) we define
\[I\preceq_{\mathfrak{m}}J\quad\Longleftrightarrow\quad\frac{\mathfrak{t}^{I} }{\mathfrak{t}^{I}+\mathfrak{t}^{J}}\notin\mathfrak{m}.\]
**Example 4.10**.: We continue with Example 3.15. Consider a maximal ideal \(\mathfrak{m}\subset K(\!(t)\!)^{\circ}\). Then either \(u\prec_{\mathfrak{m}}t\) or \(t\prec_{\mathfrak{m}}u\). In the first case, we have \(\frac{t}{t+u}\in\mathfrak{m}\), so \(P_{w}\mod\mathfrak{m}=x_{(1,1)}\). In the second case, \(\frac{u}{t+u}\in\mathfrak{m}\) and so
\[\frac{t}{t+u}-1=\frac{u}{t+u}\in\mathfrak{m}.\]
Hence, in this case \(P_{w}\mod\mathfrak{m}=x_{(1,1)}-x_{(0,0)}\).
**Proposition 4.11**.: _The relation \(\preceq_{\mathfrak{m}}\) on \(\mathbb{N}^{m}\) is a monomial order._
Proof.: We note that the relations \(I\preceq_{\mathfrak{m}}I\) and \(0\preceq_{\mathfrak{m}}J\) follow from the observations that \(1/2\) and \(1/(1+\mathfrak{t}^{J})\) are units in \(K(\!(\mathfrak{t})\!)^{\circ}\) and therefore not in \(\mathfrak{m}\).
Let \(I,J\in\mathbb{N}^{m}\) with \(I\preceq_{\mathfrak{m}}J\) and let \(K\in\mathbb{N}^{m}\). Since \(\frac{\mathfrak{t}^{I}}{\mathfrak{t}^{I}+\mathfrak{t}^{J}}\notin\mathfrak{m}\), we obtain that
\[\frac{\mathfrak{t}^{K}}{\mathfrak{t}^{K}}\cdot\frac{\mathfrak{t}^{I}}{ \mathfrak{t}^{I}+\mathfrak{t}^{J}}=\frac{\mathfrak{t}^{I+K}}{\mathfrak{t}^{I+ K}+\mathfrak{t}^{J+K}}\notin\mathfrak{m}\]
and thus, \(I+K\preceq_{\mathfrak{m}}J+K\). Now let \(I,J\in\mathbb{N}^{m}\) be arbitrary. Note that
\[\frac{\mathfrak{t}^{I}}{\mathfrak{t}^{I}+\mathfrak{t}^{J}}+\frac{\mathfrak{t }^{J}}{\mathfrak{t}^{I}+\mathfrak{t}^{J}}=1\notin\mathfrak{m}\]
so at least one of the two terms is not in \(\mathfrak{m}\). So we have \(I\preceq_{\mathfrak{m}}J\) or \(J\preceq_{\mathfrak{m}}I\). Suppose that both terms are not in \(\mathfrak{m}\). If \(I\neq J\) then we have \(\operatorname{trop}(\mathfrak{t}^{J}/(\mathfrak{t}^{I}+\mathfrak{t}^{J}))= \operatorname{trop}(\mathfrak{t}^{J}/(\mathfrak{t}^{I}-\mathfrak{t}^{J}))\), so by Corollary 2.10 we find that \(\mathfrak{t}^{J}/(\mathfrak{t}^{I}-\mathfrak{t}^{J})\) is also not in \(\mathfrak{m}\). Now consider the product
\[q=\frac{\mathfrak{t}^{I}}{\mathfrak{t}^{I}+\mathfrak{t}^{J}}\cdot\frac{ \mathfrak{t}^{J}}{\mathfrak{t}^{I}-\mathfrak{t}^{J}}=\frac{\mathfrak{t}^{I+J} }{\mathfrak{t}^{2I}-\mathfrak{t}^{2J}}.\]
This is a product of two elements of \(K(\!(\mathfrak{t})\!)^{\circ}\setminus\mathfrak{m}\), and therefore not in \(\mathfrak{m}\). But we have \(\operatorname{trop}(q)\ll 1\) since \(I+J\) is not a vertex of \(\{2I,2J\}\). This contradicts Lemma 4.7. So we conclude that \(I=J\), which yields the anti-symmetry.
Now it remains to show that \(\preceq_{\mathfrak{m}}\) is transitive. Let \(I,J,L\in\mathbb{N}^{m}\) all distinct with \(I\preceq_{\mathfrak{m}}J\) and \(J\preceq_{\mathfrak{m}}L\). We have
\[\frac{\mathfrak{t}^{J}}{\mathfrak{t}^{I}+\mathfrak{t}^{J}+\mathfrak{t}^{L}}= \frac{\mathfrak{t}^{J}}{\mathfrak{t}^{I}+\mathfrak{t}^{J}}\frac{\mathfrak{t}^ {I}+\mathfrak{t}^{J}}{\mathfrak{t}^{I}+\mathfrak{t}^{J}+\mathfrak{t}^{L}}\in \mathfrak{m}\]
since the first factor is in \(\mathfrak{m}\) and the second in \(K(\!(\mathfrak{t})\!)^{\circ}\). Similarly, we have
\[\frac{\mathfrak{t}^{L}}{\mathfrak{t}^{I}+\mathfrak{t}^{J}+\mathfrak{t}^{L}}= \frac{\mathfrak{t}^{L}}{\mathfrak{t}^{J}+\mathfrak{t}^{L}}\frac{\mathfrak{t}^ {J}+\mathfrak{t}^{L}}{\mathfrak{t}^{I}+\mathfrak{t}^{J}+\mathfrak{t}^{L}}\in \mathfrak{m}\]
Since \(1=(\mathbf{t}^{I}+\mathbf{t}^{J}+\mathbf{t}^{L})/(\mathbf{t}^{I}+\mathbf{t}^{J}+ \mathbf{t}^{L})\) is not in \(\mathfrak{m}\), we must have that \(\mathbf{t}^{I}/(\mathbf{t}^{I}+\mathbf{t}^{J}+\mathbf{t}^{L})\) is not in \(\mathfrak{m}\). From the same factorization
\[\frac{\mathbf{t}^{I}}{\mathbf{t}^{I}+\mathbf{t}^{J}+\mathbf{t}^{L}}=\frac{ \mathbf{t}^{I}}{\mathbf{t}^{I}+\mathbf{t}^{L}}\frac{\mathbf{t}^{I}+\mathbf{t}^ {L}}{\mathbf{t}^{I}+\mathbf{t}^{J}+\mathbf{t}^{L}}\]
it then follows that \(\mathbf{t}^{I}/(\mathbf{t}^{I}+\mathbf{t}^{L})\) is not in \(\mathfrak{m}\), so that \(I\preceq_{\mathfrak{m}}L\).
For the case \(m=1\), we have that \(K(\!(t)\!)^{\circ}=K[\![t]\!]\) and \(\mathfrak{m}=(t)\), and we recover the fact that the unique maximal ideal encodes the unique monomial order on \(\mathbb{N}\).
**Corollary 4.12**.: _Every maximal \(k\)-ideal of \(V\mathbb{B}(\mathbf{t})^{\circ}\) is of the form_
\[\mathfrak{m}_{<}^{\dagger}=\{\frac{a}{b}\in V\mathbb{B}(\mathbf{t})^{\circ}\, :\,a_{\min_{<}(b)}=0\}\]
_for some monomial order \(<\) on \(\mathbb{N}^{m}\)._
Proof.: By Theorem 4.4 and Proposition 4.5, every maximal ideal of \(K(\!(\mathbf{t})\!)^{\circ}\) is of the form \(\mathfrak{m}_{<}=\{f/g\in K(\!(\mathbf{t})\!)^{\circ}:f_{\min_{<}(\mathrm{Supp }(g))}=0\}\) for some monomial order \(<\) on \(\mathbb{N}^{m}\).
By Theorem 2.12 and Proposition 2.19, every maximal ideal of \(V\mathbb{B}(\mathbf{t})^{\circ}\) is the image under trop of a maximal ideal of \(K(\!(\mathbf{t})\!)^{\circ}\), and we have \(\mathrm{trop}(\mathfrak{m}_{<})=\mathfrak{m}_{<}^{\dagger}\).
An important consequence is that by construction taking initial forms commutes with multiplication. We derive this fact from Theorem 4.4 as follows.
**Proposition 4.13**.: _Consider a pair \((w,\mathfrak{m})\) of a weight \(w=(w_{1},\ldots,w_{n})\in\mathbb{B}[\![\mathbf{t}]\!]^{n}\) and a maximal ideal \(\mathfrak{m}\subset K(\!(\mathbf{t})\!)^{\circ}\). If \(P,Q\in F_{m,n}\), then_
\[\text{in}_{(w,\mathfrak{m})}(PQ)=\text{in}_{(w,\mathfrak{m})}(P)\text{in}_{( w,\mathfrak{m})}(Q).\]
Proof.: Let \(P=\sum_{M}a_{M}E_{M}\) and \(Q=\sum_{N}b_{N}E_{N}\), with \(\mathrm{trop}_{w}(P)=\frac{a}{b}\) and \(\mathrm{trop}_{w}(Q)=\frac{c}{d}\), so that \(T(\mathrm{trop}_{w}(P))^{-1}=\frac{B}{A}\) and \(T(\mathrm{trop}_{w}(Q))^{-1}=\frac{D}{C}\). Then by (10), we have
\[P_{w}Q_{w} =[\sum_{M}[\frac{B}{A}a_{M}E_{M}(\overline{\Theta(J)(e_{K}(w_{i}) )})]E_{M}][\sum_{N}[\frac{D}{C}b_{N}E_{N}(\overline{\Theta(J)(e_{K}(w_{i}))})]E _{N}]=\] \[=\sum_{O}[\sum_{M+N=O}\frac{BD}{AC}a_{M}b_{N}E_{M}(\overline{ \Theta(J)(e_{K}(w_{i}))}))E_{N}(\overline{\Theta(J)(e_{K}(w_{i}))}))]E_{M}E_{N}.\]
Now let \(PQ=\sum_{O}(\sum_{M+N=O}a_{M}b_{N})E_{M}E_{N}\) with \(\mathrm{trop}_{w}(PQ)=\frac{e}{f}\), so that \(T(\mathrm{trop}_{w}(PQ))^{-1}=\frac{F}{E}\). Then
\[(PQ)_{w}=\sum_{O}[\sum_{M+N=O}\tfrac{F}{E}a_{M}b_{N}E_{M}(\overline{\Theta(J) (e_{K}(w_{i}))})E_{N}(\overline{\Theta(J)(e_{K}(w_{i}))}))]E_{M}E_{N}.\]
Let \(<_{\mathfrak{m}}\) be the monomial order obtained by applying Theorem 4.4, then by Proposition 4.5, the reduction map \(\psi_{\mathfrak{m}}\circ\pi:K(\!(\mathbf{t})\!)^{\circ}\to K\) appearing in (4) can be described concretely by
\[\psi_{\mathfrak{m}}\circ\pi(\tfrac{f}{g})=\frac{f_{\min_{<_{\mathfrak{m}}}( \mathrm{Supp}(g))}}{g_{\min_{<_{\mathfrak{m}}}(\mathrm{Supp}(g))}}. \tag{14}\]
Clearly \(P_{w}Q_{w}\) and \((PQ)_{w}\) only differ by the constants \(\frac{BD}{AC}\) and \(\frac{F}{E}\). Now we have \(\frac{a}{b}\odot\frac{c}{d}=\frac{e}{f}\), since \(\mathrm{trop}_{w}\) is multiplicative. In particular we have \(AC=E+R\) and \(BD=F+S\) where \(R\) (respectively \(S\)) does not have any vertex of \(E\) (respectively of \(F\)). Thus \((BD)_{\min_{<_{\mathfrak{m}}}(\mathrm{Supp}(AC))}=F_{\min_{<_{\mathfrak{m}}}( \mathrm{Supp}(E))}\) and \((AC)_{\min_{<_{\mathfrak{m}}}(\mathrm{Supp}(AC))}=E_{\min_{<_{\mathfrak{m}}}( \mathrm{Supp}(E))}\), which yields \(\psi_{\mathfrak{m}}\circ\pi(\frac{BD}{AC})=\psi_{\mathfrak{m}}\circ\pi(\frac{F }{E})\). We now apply (11), which finishes the proof.
One last remark concerning the concept of initial form defined in [12] for polynomials with coefficients in \(K[\![\mathfrak{t}]\!]\) is in order. Given a weight \(w=(w_{1},\ldots,w_{n})\in\mathbb{B}[\![\mathfrak{t}]\!]^{n}\) and \(P=\sum_{M}a_{M}E_{M}\in K[\![\mathfrak{t}]\!][x_{i,J}]\), its initial form was defined as
\[\mathrm{in}_{w}(P)=\sum_{\operatorname{trop}_{w}(a_{M}E_{M})\cap\operatorname{ trop}_{w}(P)\neq\emptyset}\overline{a_{M}}E_{M}.\]
Given a maximal ideal \(\mathfrak{m}\subset K(\!(\mathfrak{t})\!)^{\circ}\) corresponding to the monomial order \(<_{\mathfrak{m}}\), by (14) we have
\[\operatorname{in}_{(w,\mathfrak{m})}(P)=\sum_{I+J=\min_{<_{\mathfrak{m}}}(\operatorname{trop}_{w}(P))\in\operatorname{trop}_{w}(a_{M}E_{M})}[c(w)_{J}(a_{M})_{I}]E_{M} \tag{15}\]
where \(c(w)_{J}\in\mathbb{N}\) is a constant that depends only on the weight vector \(w\). Certainly, every \(\mathrm{in}_{(w,\mathfrak{m})}(P)\) can be recovered from \(\mathrm{in}_{w}(P)\).
It remains future work to verify whether non-maximal prime ideals of \(K(\!(\mathfrak{t})\!)^{\circ}\) can also be encoded as some type of orders of \(\mathbb{N}^{m}\).
## Acknowledgments
This paper was finished during a research visit from S.F. and C.G. at the Unidad Oaxaca IM-UNAM in the context of a CIMPA School in Oaxaca June 2023. We are thankful for the financial support of the Apoyo Especial Alfonso Napoles Gandara (IM-UNAM) and the great hospitality during this time. We would like to acknowledge the support and input of Mercedes Haiech in the beginning of this project.
L.B. is partially supported by PAPIIT project IA100122 dgapa UNAM. S.F. is partially supported by the grant PID2020-113192GB-I00 (Mathematical Visualization: Foundations, Algorithms and Applications) from the Spanish MICINN and by the OeAD project FR 09/2022.
|
2309.07209 | A Tilted Dark Halo Origin of the Galactic Disk Warp and Flare | The outer disk of the Milky Way Galaxy is warped and flared. Several
mechanisms have been proposed to explain these phenomena, but none have
quantitatively reproduced both features. Recent work has demonstrated that the
Galactic stellar halo is tilted with respect to the disk plane, suggesting that
at least some component of the dark matter halo may also be tilted. Here we
show that a dark halo tilted in the same direction as the stellar halo can
induce a warp and flare in the Galactic disk at the same amplitude and
orientation as the data. In our model the warp is visible in both the gas and
stars of all ages, which is consistent with the breadth of observational
tracers of the warp. These results, in combination with data in the stellar
halo, provide compelling evidence that our Galaxy is embedded in a tilted dark
matter halo. This misalignment of the dark halo and the disk holds clues to the
formation history of the Galaxy, and represents the next step in the dynamical
modeling of the Galactic potential. | Jiwon Jesse Han, Charlie Conroy, Lars Hernquist | 2023-09-13T18:00:00Z | http://arxiv.org/abs/2309.07209v1 | # A Tilted Dark Halo Origin of the Galactic Disk Warp and Flare
###### Abstract
The outer disk of the Milky Way Galaxy is warped and flared. Several mechanisms have been proposed to explain these phenomena, but none have quantitatively reproduced both features. Recent work has demonstrated that the Galactic stellar halo is tilted with respect to the disk plane, suggesting that at least some component of the dark matter halo may also be tilted. Here we show that a dark halo tilted in the same direction as the stellar halo can induce a warp and flare in the Galactic disk at the same amplitude and orientation as the data. In our model the warp is visible in both the gas and stars of all ages, which is consistent with the breadth of observational tracers of the warp. These results, in combination with data in the stellar halo, provide compelling evidence that our Galaxy is embedded in a tilted dark matter halo. This misalignment of the dark halo and the disk holds clue to the formation history of the Galaxy, and represents the next step in the dynamical modeling of the Galactic potential.
We construct a model of the gravitational potential of the Galaxy in which 30% of the dark halo mass is comprised of a triaxial distribution that is tilted \(25^{\circ}\) with respect to the disk plane. The potential also includes bulge and disk components, where the latter increases in time to mimic the growth of the Galactic disk with time. We calculate the orbits of collisionless (stars) and collisional (gas) particles over 5 Gyr in this potential. In Figure 1 we show the present-day distribution of stars in the simulation. In the top panel we plot the Galactocentric vertical height (\(Z\)) and signed cylindrical radius (\(R\)), clearly revealing an S-shaped _warp_ of the disk. Negative \(R\) indicates the Galactic quadrants that are within \(\pm 90^{\circ}\) of the positive peak (the "Northern" warp) and positive \(R\) indicates Galactic quadrants that are within \(\pm 90^{\circ}\) of the negative peak (the "Southern" warp). In the bottom panels, we plot the vertical deviation (\(Z-Z_{warp}\)) of tracer particles from the mean warp. The average vertical deviation grows markedly towards outer radii, demonstrating the _flare_ of the disk. Particles are color-coded by age, which shows that stars of all ages exhibit a warp and flare. The warp is most pronounced for the youngest stars, consistent with observations [1, 2].
In Figure 2 we compare the warp and flare of the simulated stars (left panel) and gas (right panel) to observations. For conciseness of comparison, we focus on the HI [3, 4] gas and Cepheid [3] stars. We note that a similar warp has also been observed for ionized gas [5], molecular gas [6], dust [7], stars [8, 9, 2, 10], and star clusters [11]. In the top panels we plot the observed warp in open circles, and a maximum likelihood fit to the simulation in solid colored lines, with \(1\sigma\) uncertainty of the fit shaded. The onset radius \(R_{w}\) of the fit is fixed to the observed \(R_{w}\) values. The flare is measured by fitting an exponential scale height to the vertical deviation from the warp at each radius. In both stars and gas, the simulated warp and flare quantitatively match the observations. Additionally, the tilted halo simulation produces a population of stars on circular orbits that reach high-latitude (\(|Z|>2\) kpc) on either side of the warp extrema. This population of stars is reminiscent of known stellar overdensities towards the Galactic anticenter [12, 13, 14, 10].
The parameters chosen for the simulation were not tuned to match the Galactic warp or flare. The two key parameters affecting the predicted warp and flare are: (1) the shape and extent of the tilted halo, and (2) the scale radius of newly-formed stars. For the former, we adopt the scale radius, triaxiality, and tilt angle of the halo directly from the shape of the accreted stellar halo [15]. For the latter, we adopt the scale radius of the molecular disk of the Galaxy [16]. Details of the simulation setup and variations to the adopted parameters are given in the Supplementary Material.
The large-scale warp of the Galactic disk has been known for over half a century [17, 18], and there is a commensurately rich history of warp and flare models [19, 20, 21, 22]. For example, studies have investigated the warp as a result of perturbed bending modes [23], misaligned angular momenta of the halo and the disk [24, 25], repeated impact from the Sagittarius dwarf galaxy [26], misaligned gas accretion [27], or quadrupolar torque from a tumbling triaxial halo [28]. However, previous studies have been unable to quantitatively reproduce the warp (and simultaneously the flare) of the Galactic disk. Among the recently investigated models is the tidal influence of the Large Magellanic Cloud (LMC) [29]. In Figure 3 we plot the disk warp at \(R=16\) kpc produced by the tilted halo model and an LMC model as a function of Galactic longitude. The LMC was modeled as a live halo on its first infall into a live Milky Way halo and disk [29]. The HI data [3] at this radius are marked in open circles. Even for the highest LMC mass (which is 80% higher than other models [30]), the warp amplitude is less than a third of the observed amplitude. In the right panel, we sum the amplitudes of the warp from the LMC and the tilted halo models. The combined model matches the observations better than the tilted halo alone, although we caution that this simple summation does not capture the possible coupling between the effect of the LMC and the tilted halo.
Tilted dark halos are common in galaxy simulations that include baryonic physics [31; 32], and several independent lines of evidence have pointed toward a tilted Galactic dark halo [33; 34; 35]. Furthermore, Han et al. (in prep) show that such tilted dark halos are long-lived and can warp Galactic disks in Illustris TNG50 [36; 37]. In a realistic model we can expect the tilt angle to change with time, for example due to interaction between the (growing) disk and the halo. While the tilt of the halo at earlier epochs can leave interesting observational signatures in old disk stars, the main point of this paper is to demonstrate that the young disk (and gas) is responsive to the tilt on \(<1\) Gyr timescales, and on such timescales the tilt is constrained observationally to be \(\sim 25^{\circ}\). A plausible origin of a tilted dark halo in the Galaxy is a major merger [38; 39]\(8-10\) Gyr ago [40; 41] that deposited a significant fraction of dark matter on an eccentric, tilted orbit [35; 42]. Dynamical models of the Galaxy often assume a spherical dark halo at all scales, or a flattened halo that is aligned with the disk. A tilted dark halo at \(\sim 30\) kpc would have novel applications for Galactic dynamics. For example, a tilted and triaxial halo influences the shape of the stellar halo [35], and can affect orbit reconstruction of stellar streams [43]. Furthermore, the tilt and triaxiality of the inner halo (\(\sim 0.1R_{\rm virial}\)) encodes information about the self-interacting properties of dark matter [44] that is unique compared to larger-scale probes such as galaxy clusters or large scale structure. Future work will investigate the imprint of dark matter self-interaction through the tilt and triaxiality of the dark halo at \(30\) kpc. In addition, a global tilt in the dark halo implies an anisotropic velocity distribution of the dark matter particles. The resulting asymmetric velocity distribution should affect ground-based dark matter detection experiments [45].
Finally, while we have focused on demonstrating that the warp and flare are likely manifestations of a tilted dark halo, we can also reverse the argument: precise measurements of the warp may further constrain the tilt of the dark halo. We have intentionally avoided "fitting" a dark halo to match the warp, but there is much to explore in allowing for more flexible halo models. For example, the warp is sensitive to the tilt angle of the dark halo, and the fraction of mass in the tilted component (see Methods). Jointly modeling the numerous tracers of the disk warp at various ages and Galactic radii is the next step in uncovering the distribution of dark matter in the Galaxy.
Figure 1: Present-day distribution of simulated stars in Galactocentric cylindrical coordinates. Negative R indicates azimuthal angles that are within \(90^{\circ}\) of the Northern warp, and positive R indicates azimuthal angles that are within \(90^{\circ}\) of the Southern warp. The top panels show the warp, and the bottom panels show the vertical deviation from the average warp. The vertical deviation systematically increases toward the outer Galaxy, demonstrating the flare of the disk. Points that are in the outer Galaxy are plotted with larger markers. Particles colored by stellar age clearly demonstrate a warp at all ages, and most strongly at the youngest population.
Figure 3: The amplitude and orientation of the disk warp at \(R=16\) kpc. In the left panel we plot the simulated warp from the tilted dark halo in blue, the LMC models[29] in red, and the HI warp in open circles[3]. Shaded regions indicate \(1\sigma\) uncertainty of the fit. The individual data points from the LMC simulation are marked in faint dots, and a polynomial fit to the points are drawn as solid lines. In the right panel, we show the sum of the warps induced by a tilted halo and the LMC.
Figure 2: Comparison of the simulation to the observed warp and flare. The top panels show the warp, and bottom panels show the flare. The observed points (open circle) are extracted directly from the cited papers[1, 3, 4]. (a) In the top panel, the observed warp of Cepheids[1] is plotted in open circles, and the maximum-likelihood fit of stellar warp in the simulation is plotted in red. We fix the onset radius of the warp to the observed value. Shaded regions indicate \(1\sigma\) uncertainty of the fit. In the bottom panel, the observed scale height of Cepheids are plotted in black dashed lines, and the scale height of the simulated stars are plotted in red. (b) In the top panel, the observed warp of HI[3, 4] is plotted in open circles, and the maximum-likelihood fit of the gaseous warp in the simulation is plotted in blue. In the bottom panel, the observed scale height of HI is plotted in black dashed lines, and the scale height of the simulated gas are plotted in blue.
## Methods
### Simulation details
Here we describe the time-dependent Galactic potential in which we compute the orbits of tracer particles. The potential is a summation of the following components. The halo has a total mass of \(8\times 10^{11}\)M\({}_{\odot}\), 70% of which is in a spherical halo with NFW[46] radial profile with scale radius \(r_{s}=15\) kpc, and 30% of which is in a tilted, triaxial halo. The triaxiality is 10:8.1:7.3[15] and the tilt angle is \(25^{\circ}\)[15]. The radial density along the principal axes of the triaxial halo follows an NFW profile with scale radius \(r_{s}=30\) kpc[35]. The disk has two components: a "thick" disk with fixed mass at \(6\times 10^{9}\)M\({}_{\odot}\), scale radius 2 kpc, and scale height 0.9 kpc[47], and a "thin" disk that linearly increases in mass up to \(3.5\times 10^{10}\)M\({}_{\odot}\) at present day, with fixed scale radius 2.6 kpc and scale height 0.3 kpc[47] assuming a Miyamoto-Nagai potential[48]. The initial mass of the thin disk is \(1.3\times 10^{10}\)M\({}_{\odot}\). Lastly, we include a spherical bulge with Hernquist radial profile[49] and fixed mass \(1.8\times 10^{10}\)M\({}_{\odot}\) and scale radius of 1 kpc.
The tracer particles are initialized on circular orbits at radii sampled from a truncated exponential distribution, discarding any stars sampled within 1 kpc or beyond eight times the scale radius from the Galactic center. The scale radius increases linearly with time to 8 kpc at present day. The final scale radius is chosen to be where the H\({}_{2}\) mass surface density[16] is \(1/e\) of the maximum value. At each time step of the simulation, we spawn new tracer particles at a rate that is commensurate with the growth of the disk, culminating in 50,000 star particles and 50,000 gas particles. We calculate orbits using the gala python package[50] using a fixed 1 Myr timestep and standard Leapfrog integrator.
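The sketch below assembles a present-day snapshot of this setup with the gala package referenced above. The component masses and scale lengths are the values quoted in this section; everything else, in particular the use of gala's axis-ratio and rotation-matrix keywords to stand in for the tilted triaxial halo, the choice of tilt axis, and the neglect of the disk's growth in time and of the particle-spawning schedule, is a simplifying assumption of the sketch rather than a detail of the actual simulation.

```python
import astropy.units as u
import numpy as np
import gala.dynamics as gd
import gala.potential as gp
from gala.units import galactic
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)

# Dark halo: 70% spherical NFW + 30% tilted component.  gala's axis-ratio
# (a, b, c) and rotation-matrix (R) arguments are used here as a stand-in for
# the tilted triaxial density of the text; the 'y'-axis tilt is an assumption.
tilt = Rotation.from_euler("y", 25, degrees=True).as_matrix()
pot = gp.CCompositePotential()
pot["halo"] = gp.NFWPotential(m=0.7 * 8e11 * u.Msun, r_s=15 * u.kpc, units=galactic)
pot["tilted_halo"] = gp.NFWPotential(m=0.3 * 8e11 * u.Msun, r_s=30 * u.kpc,
                                     a=1.0, b=0.81, c=0.73, R=tilt, units=galactic)
pot["thick_disk"] = gp.MiyamotoNagaiPotential(m=6e9 * u.Msun, a=2 * u.kpc,
                                              b=0.9 * u.kpc, units=galactic)
pot["thin_disk"] = gp.MiyamotoNagaiPotential(m=3.5e10 * u.Msun, a=2.6 * u.kpc,
                                             b=0.3 * u.kpc, units=galactic)  # present-day mass
pot["bulge"] = gp.HernquistPotential(m=1.8e10 * u.Msun, c=1 * u.kpc, units=galactic)

# Tracer particles on circular orbits, radii from a truncated exponential.
n_tracer, r_scale = 1000, 8.0                      # present-day scale radius [kpc]
R_kpc = rng.exponential(r_scale, size=4 * n_tracer)
R_kpc = R_kpc[(R_kpc > 1.0) & (R_kpc < 8 * r_scale)][:n_tracer]
phi = rng.uniform(0.0, 2 * np.pi, size=R_kpc.size)
xyz = np.stack([R_kpc * np.cos(phi), R_kpc * np.sin(phi), np.zeros_like(R_kpc)]) * u.kpc

vc = pot.circular_velocity(xyz).to_value(u.km / u.s)
vxyz = np.stack([-vc * np.sin(phi), vc * np.cos(phi), np.zeros_like(vc)]) * u.km / u.s

w0 = gd.PhaseSpacePosition(pos=xyz, vel=vxyz)
orbits = gp.Hamiltonian(pot).integrate_orbit(w0, dt=1 * u.Myr, n_steps=5000)  # leapfrog by default
```

In the actual runs the thin-disk mass and the tracer scale radius grow linearly in time and new tracers are spawned continuously, so both the potential and the set of initial conditions would be updated at every step rather than fixed as in this snapshot.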
While the star particles are collisionless, the gas particles are collisional and follow an inelastic scattering prescription. If a gas particle comes within \(0.1\) kpc of another gas particle and they have negative relative speeds, they exchange relative velocities and lose \(10\%\) of the collective kinetic energy. This method of simulating gas particles is valid when the velocity dispersion is low[51, 52], as it is the case for circular orbits. We note that all of our disk particles remain on circular orbits throughout all 5 Gyr, with the most eccentric orbits exhibiting less than 0.1% of their angular momentum off of the Galactic \(Z\) axis. Lastly, at each timestep, we allow for a probabilistic change of radius of each particle commensurate to what is measured for the Galaxy[53, 54].
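One possible reading of the collision prescription above is sketched here in plain numpy: approaching pairs within 0.1 kpc swap their relative velocities and have the kinetic energy in their centre-of-mass frame damped by 10%, which conserves momentum exactly. Interpreting the 10% loss as acting on the centre-of-mass-frame energy, and the brute-force pair search, are assumptions of the sketch.

```python
import numpy as np

def collide_pair(v1, v2, energy_loss=0.10):
    """Inelastic collision of two equal-mass gas particles.

    The pair exchanges relative velocities and the kinetic energy measured in
    the centre-of-mass frame is reduced by `energy_loss`; momentum is conserved.
    """
    v_cm = 0.5 * (v1 + v2)              # centre-of-mass velocity (equal masses)
    u1, u2 = v1 - v_cm, v2 - v_cm       # velocities in the COM frame
    damp = np.sqrt(1.0 - energy_loss)   # scale factor for the COM-frame energy loss
    # Exchange the relative velocities, then damp them.
    return v_cm + damp * u2, v_cm + damp * u1

def collision_step(pos, vel, radius=0.1):
    """Apply the rule to every approaching pair closer than `radius` (kpc).

    pos, vel: (n, 3) arrays.  A neighbour search (e.g. a k-d tree) would be
    used in practice; the double loop is kept for clarity.
    """
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            dr = pos[j] - pos[i]
            dv = vel[j] - vel[i]
            if np.linalg.norm(dr) < radius and np.dot(dr, dv) < 0:  # approaching
                vel[i], vel[j] = collide_pair(vel[i], vel[j])
    return vel
```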
### Warp Modeling
In Figure 2 we plot a fit to the warp in the simulation using an analytical formula that is a power-law in radius and a sinusoid in azimuth:
\[Z(R\geq R_{w}) =A\times(R-R_{w})^{b}\times\sin{(\phi-\phi_{w})}\] \[Z(R<R_{w}) =0\]
Here, \(R\) and \(Z\) are Galactocentric cylindrical coordinates, \(A\) is the amplitude of the warp, \(b\) the power-law index, \(\phi_{w}\) the orientation of the warp, and \(R_{w}\) the onset radius of the warp. This function has also been used to fit the Cepheids data[1]. The fit is performed using a maximum likelihood method.
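The warp model above translates directly into a small helper function. The fit shown after it is only an illustrative stand-in: the function and parameter names, the initial guess, and the use of scipy's least-squares `curve_fit` are choices made here (a Gaussian maximum-likelihood fit with constant errors and \(R_{w}\) held fixed reduces to the same least-squares problem).

```python
import numpy as np
from scipy.optimize import curve_fit

def warp_model(R, phi, A, b, phi_w, R_w):
    """Vertical displacement Z(R, phi) of the piecewise warp model above.

    R, phi : Galactocentric cylindrical radius [kpc] and azimuth [rad].
    A, b   : warp amplitude and power-law index.
    phi_w  : azimuthal orientation of the warp [rad].
    R_w    : onset radius of the warp [kpc]; Z = 0 inside R_w.
    """
    R, phi = np.asarray(R, float), np.asarray(phi, float)
    return A * np.clip(R - R_w, 0.0, None) ** b * np.sin(phi - phi_w)

def fit_warp(R, phi, Z, R_w):
    """Illustrative least-squares fit with the onset radius R_w fixed."""
    f = lambda X, A, b, phi_w: warp_model(X[0], X[1], A, b, phi_w, R_w)
    popt, _ = curve_fit(f, np.vstack([R, phi]), Z, p0=[0.2, 1.0, 0.0])
    return popt  # A, b, phi_w
```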
### Which Stars comprise the Warp?
In Extended Data Figure 1 we color the star particles by their birth radius. We find a clear correlation in the birth radius of the star and its final warp amplitude. We can thus understand the prominence of the warp in young stars as a result of the "inside-out" growth of the disk[55], in which the birth radius of a particle correlates inversely with age. Since young stars can be born at larger radii than old stars, they trace a cleaner and larger warp. Due to radial migration[54], old stars that are born in the inner Galaxy can also migrate outwards to eventually trace the warp. This mechanism can explain why older stars appear to have smaller warp amplitudes in observations[2].
### Timescale of the Warp
In Extended Data Figure 2 we show the time evolution of the warp amplitude at a fixed radius \(R=16\) kpc. The error bars indicate \(1\sigma\) statistical uncertainties in the warp fit. At \(t=0\), the disk is initialized to have no warp. Within the first few hundred Myr, the warp reaches its maximum amplitude. This is consistent with the rotation period of the disk at 16 kpc, which is approximately \(400\) Myr. Thus, this Figure shows that the disk responds to the tilted dark halo quickly, within one rotation period of the disk. The warp amplitude experiences a transient oscillatory phase until \(1500\) Myr, then converges to a steady state. This transient phase is likely a numerical effect, since there are not many stars out at \(R=16\) kpc at these times (the tracer particle scale length is 3 kpc at \(t=0\) and 4.5 kpc at \(t=1500\) Myr) and the warp amplitude is determined by a small number of particles.
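For reference, with a circular speed of roughly \(230\) km s\({}^{-1}\) at this radius (a representative value, not one quoted in this work), the rotation period is \(T=2\pi R/v_{c}\approx 2\pi\times 16\ \mathrm{kpc}/(230\ \mathrm{km\,s^{-1}})\approx 0.43\) Gyr, consistent with the \(\approx 400\) Myr figure used above.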
### Variations to Model Parameters
In this study, we have intentionally avoided "fitting" model parameters to the data, in order to demonstrate that no tuning is required for a tilted dark halo to reproduce the observed warp/flare. In Extended Data Figures 3 and 4 we show how changes in the model parameters can affect the warp. In the former, we vary (1) the scale length of the tilted dark halo and (2) the present-day scale-length of the tracer particles. In the latter, we vary (1) the tilt angle of the dark halo and (2) the mass fraction of the tilted component of the dark halo compared to the spherical component. Aside from the parameters being modulated, all other parameters are fixed to their values from the simulation presented in Figure 1. In each panel, we calculate the fraction of stars that are off the \(Z=0\) plane by more than 0.25 kpc, \(N_{\rm warp}/N_{\rm plane}\). If \(N_{\rm warp}/N_{\rm plane}\) is greater than \(0.5\%\), we fit a warp model (dotted lines) and show the warp amplitude at 20 kpc as \(Z_{20}\). The disk warps in all models except for the case of a non-tilted halo (last row of Extended Data Figure 4). These Figures demonstrate that warping is a general response of the disk to a tilted dark halo. Furthermore, a more complete observational picture of the Galactic warp can help constrain properties of the tilted dark matter halo, such as its mass fraction and tilt angle.
### Context in Cosmological Simulations
A key assumption of our idealized simulation is the fixed orientation of the halo with respect to the growing disk. It is thus important to understand to what extent these assumptions are applicable in a simulation with a live halo and self-consistently growing disk. Specifically, the time that it takes for a tilted dark halo to eventually align with the disk is an open question. If this
timescale is substantially longer than the warp onset timescale (a few hundred Myr, see Extended Data Fig. 2), then the mechanisms studied in our idealized simulation can apply more generally to slowly-changing halos.
To address this question, Han et al. (in prep) analyze Milky Way-like galaxies in the Illustris TNG50 cosmological magneto-hydrodynamic simulations [36, 37]. They first show that a significant fraction of MW analogs in TNG50 have present-day tilted dark halos at \(r<50\) kpc. About 50% of halos are tilted more than \(10^{\circ}\), and 25% of halos are tilted more than \(20^{\circ}\). Then, they identify a galaxy that experiences a major merger at \(\sim 7\) Gyr ago that results in a tilted dark halo. For this galaxy, the angle of misalignment of the disk and the dark halo reduces from \(50^{\circ}\) to \(20^{\circ}\) over 5 Gyr. Furthermore, the disk of this galaxy warps shortly after the onset of the tilted dark halo with a delay time of \(<1\) Gyr. The galaxy does not have any massive satellites at the time of the onset of the warp. TNG50 was not tuned in any way to produce these results; rather, the long-lived tilted dark halos and warps emerge naturally in the simulations. These results show that tilted dark halos can (1) be long-lived in a live, cosmological environment, and (2) generate warps in galactic disks on timescales shorter than the change in the tilt angle. Furthermore, this result shows that the Milky Way's dark halo was likely more tilted in the past and has decreased to its current value (\(\sim 25^{\circ}\)) at present day.
|
2307.16870 | An entanglement-aware quantum computer simulation algorithm | The advent of quantum computers promises exponential speed ups in the
execution of various computational tasks. While their capabilities are hindered
by quantum decoherence, they can be exactly simulated on classical hardware at
the cost of an exponential scaling in terms of number of qubits. To circumvent
this, quantum states can be represented as matrix product states (MPS), a
product of tensors separated by so-called bond dimensions. Limiting bond
dimensions growth approximates the state, but also limits its ability to
represent entanglement. Methods based on this representation have been the most
popular tool at simulating large quantum systems. But how to trust resulting
approximate quantum states for such intractable systems sizes ? I propose here
a method for inferring the fidelity of an approximate quantum state without
direct comparison to its exact counterpart, and use it to design an
``entanglement-aware'' (EA) algorithm for both pure and mixed states. As
opposed to state of the art methods which limit bond dimensions up to an
arbitrary maximum value, this algorithm receives as input a fidelity, and
adapts dynamically its bond dimensions to both local entanglement and noise
such that the final quantum state fidelity at least reaches the input fidelity.
I show that this algorithm far surpasses standard fixed bond dimension
truncation schemes. In particular, a noiseless random circuit of 300 qubits and
depth 75 simulated using MPS methods takes one week of computation time, while
EA-MPS only needs 2 hours to reach similar quantum state fidelity. | Maxime Oliva | 2023-07-31T17:27:04Z | http://arxiv.org/abs/2307.16870v1 | # An entanglement-aware quantum computer simulation algorithm
###### Abstract
The advent of quantum computers promises exponential speed ups in the execution of various computational tasks. While their capabilities are hindered by quantum decoherence, they can be exactly simulated on classical hardware at the cost of an exponential scaling in terms of number of qubits. To circumvent this, quantum states can be represented as matrix product states [1; 2] (MPS), a product of tensors separated by so-called bond dimensions. Limiting bond dimensions growth approximates the state, but also limits its ability to represent entanglement. Methods based on this representation have been the most popular tool at simulating large quantum systems. But how to trust resulting approximate quantum states for such intractable systems sizes? I propose here a method for inferring the fidelity of an approximate quantum state without direct comparison to its exact counterpart, and use it to design an "entanglement-aware" (EA) algorithm for both pure and mixed states. As opposed to state of the art methods which limit bond dimensions up to an arbitrary maximum value, this algorithm receives as input a fidelity, and adapts dynamically its bond dimensions to both local entanglement and noise such that the final quantum state fidelity at least reaches the input fidelity. I show that this algorithm far surpasses standard fixed bond dimension truncation schemes. In particular, a noiseless random circuit of 300 qubits and depth 75 simulated using MPS methods takes one week of computation time, while EA-MPS only needs 2 hours to reach similar quantum state fidelity.
## I Introduction
Tensor network methods have been among the most popular avenues for circumventing the exponential scaling of exact quantum simulations [1; 2; 3; 4; 5]. While lowly entangled pure states can be efficiently simulated as matrix product states[1; 2], mixed states can be simulated as matrix product operators[2; 4]. Both approaches have allowed the simulation of system sizes far beyond what exact computation can achieve.
Any pure state can be represented as a matrix product state (MPS). The state vector of a quantum state of \(N\) qubits is cast into a factorized form of \(N\) tensors connected to each other with what are generally called "bond dimensions". Physically, bond dimensions can be thought of as the amount of entanglement a quantum state can encapsulate[1]. Contracting every tensor along its bond dimensions gives back the quantum state in its standard state vector form. The usual approach to approximating MPS consists in limiting these bond dimensions up to a maximum value set arbitrarily prior to the simulation. The more entanglement, the bigger the truncation errors, which limits MPS effectiveness to lowly entangled, but possibly very large quantum states. Such methods suffer two limitations. First, there is no way to assess how well the resulting quantum state approximates the exact quantum state when the system size is out of exact simulation reach. Secondly, there is no general method for guessing which maximum bond dimension is best for a given problem.
I show that for both pure and mixed states simulations, an approximate quantum state fidelity with respect to its exact counterpart can be indirectly computed, that is without ever having to compute the exact quantum state itself. This computation is done in real-time throughout the simulation, by efficiently computing the fidelity of every bond dimension truncation.
Based on this result, an entanglement-aware simulation algorithm can be designed, fully leveraging the presence or absence of both noise and entanglement. Rather than limiting bond dimensions uniformly as is the case in standard algorithms, bond dimensions are allowed to increase or decrease solely based on how this affects the resulting quantum state fidelity. As such the algorithm does not receive as input a maximum bond dimension, but instead a fidelity the final quantum state has to at least reach. Decreasing (resp. increasing) the desired fidelity decides how aggressive (resp. conservative) the algorithm will be at truncating bond dimensions.
Since bond dimensions are kept as low as possible for every operation, the algorithm efficiency far surpasses that of regular fixed maximum bond dimensions algorithms, while simultaneously allowing trustworthy output quantum states.
## II The matrix product formalism
Any pure quantum state can be written as a state vector \(\ket{\psi}\) defined by :
\[\ket{\psi}=\sum_{\sigma}C_{\sigma_{1},...,\sigma_{N}}\ket{\sigma_{1}}\otimes...\otimes\ket{\sigma_{N}} \tag{1}\]
with \(C_{\sigma_{1},...,\sigma_{N}}\) a 1D tensor containing \(2^{N}\) complex values and \(\{\ket{\sigma_{i}}\}\) forming an orthonormal basis.
As described in Vidal's article[1], \(\ket{\psi}\) can be decomposed as an MPS via successive singular value decomposition (SVD) of \(|\psi\rangle\) in Eq. (1):
\[|\psi\rangle=\sum_{\sigma,\chi}A^{[1]\sigma_{1}}_{1,\chi_{1}}A^{[2]\sigma_{2}}_{ \chi_{1},\chi_{2}}...A^{[N]\sigma_{N}}_{\chi_{N-1},1}\ \ |\sigma_{1}\rangle\otimes...\otimes|\sigma_{N}\rangle \tag{2}\]
We obtain a product of \(N\) complex-valued tensors \(\{A^{[i]\sigma_{i}}\}\), separated by bond dimensions \(\{\chi_{1},\chi_{2},...,\chi_{N-1}\}\). Assuming the \(\chi\) are bounded by a maximum bond dimension \(\chi_{max}\), the number of values contained in the MPS in Eq. (2) scales in \(\mathcal{O}(N\chi_{max}^{2})\) with \(N\) the number of qubits of the system.
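A minimal numpy sketch of this construction is given below: the state vector is reshaped and SVD-decomposed site by site, reproducing Eq. (2) exactly when no truncation is applied. The tensor index convention (left bond, physical index, right bond) and the left-to-right sweep order are choices made for the sketch.

```python
import numpy as np

def statevector_to_mps(psi, n_qubits):
    """Decompose a length-2**n state vector into an exact, untruncated MPS.

    Returns a list of tensors A[i] with shape (chi_left, 2, chi_right),
    obtained by sweeping successive SVDs from left to right.
    """
    tensors = []
    rest = psi.reshape(1, -1)                  # (chi_left, remaining physical dims)
    for site in range(n_qubits - 1):
        chi_left = rest.shape[0]
        rest = rest.reshape(chi_left * 2, -1)  # split off one physical index
        U, S, Vh = np.linalg.svd(rest, full_matrices=False)
        tensors.append(U.reshape(chi_left, 2, -1))
        rest = S[:, None] * Vh                 # carry the singular values to the right
    tensors.append(rest.reshape(rest.shape[0], 2, 1))
    return tensors

def mps_to_statevector(tensors):
    """Contract the MPS back into a dense state vector (small systems only)."""
    out = tensors[0]
    for A in tensors[1:]:
        out = np.einsum("...a,apb->...pb", out, A)
    return out.reshape(-1)

# Quick self-check on a random 5-qubit state.
psi = np.random.randn(2**5) + 1j * np.random.randn(2**5)
psi /= np.linalg.norm(psi)
mps = statevector_to_mps(psi, 5)
assert np.allclose(mps_to_statevector(mps), psi)
```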
Similarly, for mixed states, instead of considering the state vector \(|\psi\rangle\), we consider the density matrix \(\hat{\rho}\):
\[\hat{\rho}=\sum_{\sigma,\sigma^{\prime}}C_{\sigma_{1}\sigma^{\prime}_{1},..., \sigma_{N}\sigma^{\prime}_{N}}|\sigma_{1}\rangle\langle\sigma^{\prime}_{1}| \otimes...\otimes|\sigma_{N}\rangle\langle\sigma^{\prime}_{N}| \tag{3}\]
with \(C_{\sigma_{1}\sigma^{\prime}_{1},...,\sigma_{N}\sigma^{\prime}_{N}}\) a 2D tensor containing \(2^{N}\times 2^{N}\) complex values and \(\{|\sigma_{i}\rangle\}\) forming an orthonormal basis. Again, \(\hat{\rho}\) can be cast into a matrix product operator (MPO) [2] via successive SVD decompositions:
\[\hat{\rho}=\sum_{\chi_{1}...\chi_{N}}A^{[1]\sigma_{1},\sigma^{ \prime}_{1}}_{1,\chi_{1}}A^{[2]\sigma_{2},\sigma^{\prime}_{2}}_{\chi_{1},\chi _{2}}...A^{[N]\sigma_{N},\sigma^{\prime}_{N}}_{\chi_{N-1},1}\] \[|\sigma_{1}\rangle\langle\sigma^{\prime}_{1}|\otimes...\otimes| \sigma_{N}\rangle\langle\sigma^{\prime}_{N}| \tag{4}\]
Limiting the growth of bond dimensions \(\chi\!\leq\!\chi_{max}\) approximates the state, and lets the MPO scale as \(\mathcal{O}(N^{2}\chi_{max}^{3})\).
When it comes to quantum circuit simulations, such representations can only simulate circuits with linear nearest neighbour (LNN) topology, meaning gates have to be applied on neighbouring qubits only. Bond dimensions grow only by application of operators acting on multiple qubits.
## II Canonicalization
Casting quantum state vectors or density matrices into their matrix product representations allows for various simplifications. Most of these simplifications use the concept of canonicalization[2; 6], which is a consequence of the SVD or QR operations required for constructing and truncating quantum states in matrix product representations. Taking the SVD of a matrix \(M\), we have \(M=U\Lambda V^{\dagger}\), where \(\Lambda\) is diagonal, \(U^{\dagger}U=I\) and \(V^{\dagger}V=I\). As such, \(U\) is left-normalized, while \(V^{\dagger}\) is right-normalized. Similarly, a QR decomposition \(M=QR\) yields a left-normalized factor \(Q\), and an RQ decomposition yields a right-normalized one. By controlling which tensors are left- or right-normalized, many operations can be done on a small subset of tensors, rather than on the entire quantum state.
To illustrate this, let us compute the expectation value of an observable \(\hat{O}\) whose support is on \(|\sigma_{k}\rangle\) and \(|\sigma_{k+1}\rangle\) for a quantum state \(|\psi\rangle\) in the MPS form defined in Eq. (2):
\[\langle\hat{O}\rangle_{\psi}=\sum_{\sigma,\sigma^{\prime},\chi,\chi^{\prime}}\bar{A}^{[1]\sigma^{\prime}_{1}}_{1,\chi^{\prime}_{1}}...\bar{A}^{[k]\sigma^{\prime}_{k}}_{\chi^{\prime}_{k-1},\chi^{\prime}_{k}}\bar{A}^{[k+1]\sigma^{\prime}_{k+1}}_{\chi^{\prime}_{k},\chi^{\prime}_{k+1}}...\bar{A}^{[N]\sigma^{\prime}_{N}}_{\chi^{\prime}_{N-1},1}\;O^{\sigma^{\prime}_{k},\sigma^{\prime}_{k+1}}_{\sigma_{k},\sigma_{k+1}}\;A^{[1]\sigma_{1}}_{1,\chi_{1}}...A^{[k]\sigma_{k}}_{\chi_{k-1},\chi_{k}}A^{[k+1]\sigma_{k+1}}_{\chi_{k},\chi_{k+1}}...A^{[N]\sigma_{N}}_{\chi_{N-1},1}\prod_{i\neq k,k+1}\langle\sigma^{\prime}_{i}|\sigma_{i}\rangle \tag{5}\]
Here \(\bar{A}\) denotes the complex conjugate of \(A\). For now the MPS \(|\psi\rangle\) is in arbitrary form. We apply a QR decomposition to each tensor \(A^{[i]}\) for \(i\in~{}[1,k[\) such that \(\sum_{\sigma_{i}}(A^{[i]\sigma_{i}})^{\dagger}A^{[i]\sigma_{i}}=I\) (left-normalized), and similarly apply an RQ decomposition for \(i\in]k+1,N]\) such that \(\sum_{\sigma_{i}}A^{[i]\sigma_{i}}(A^{[i]\sigma_{i}})^{\dagger}=I\) (right-normalized). The state is now in "canonical form". The left-normalized tensors contracted with their conjugates simplify to the identity, and similarly for the right-normalized tensors.
The expectation value of \(\hat{O}\) can then be computed by contracting only the tensors at sites \(k\) and \(k+1\):

\[\langle\hat{O}\rangle_{\psi}=\sum_{\sigma,\sigma^{\prime},\chi,\chi^{\prime}}\bar{A}^{[k]\sigma^{\prime}_{k}}_{\chi_{k-1},\chi^{\prime}_{k}}\bar{A}^{[k+1]\sigma^{\prime}_{k+1}}_{\chi^{\prime}_{k},\chi_{k+1}}\;O^{\sigma^{\prime}_{k},\sigma^{\prime}_{k+1}}_{\sigma_{k},\sigma_{k+1}}\;A^{[k]\sigma_{k}}_{\chi_{k-1},\chi_{k}}A^{[k+1]\sigma_{k+1}}_{\chi_{k},\chi_{k+1}} \tag{6}\]
The canonical form is needed for efficiently computing the truncation fidelities, and is enforced throughout the entanglement-aware simulation.
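As a small numerical check of this canonical simplification, one can left-normalize a single MPS tensor with a QR decomposition and verify that contracting it with its conjugate yields the identity. The helper below is a minimal sketch under our own naming conventions.

```python
import numpy as np

def left_normalize(site_tensor):
    """QR-based left normalization of one MPS tensor of shape
    (chi_left, d, chi_right); returns (Q, R) with Q left-normalized."""
    chi_l, d, chi_r = site_tensor.shape
    q, r = np.linalg.qr(site_tensor.reshape(chi_l * d, chi_r))
    q = q.reshape(chi_l, d, -1)
    # check: sum over the physical and left indices gives the identity
    check = np.einsum('lsr,lsq->rq', q.conj(), q)
    assert np.allclose(check, np.eye(q.shape[2]))
    return q, r   # R is absorbed into the next tensor to the right
```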
## IV Quantum state fidelity
The quantum state fidelity is a measure of the "closeness" of two quantum states.
For pure states, the fidelity \(\mathcal{F}\) between two states \(|\psi\rangle\) and \(|\phi\rangle\) is defined by the square modulus of their overlap:
\[\mathcal{F}(\psi,\phi)=|\langle\psi|\phi\rangle|^{2} \tag{7}\]
Similarly, Jozsa [7] defined the quantum state fidelity between two mixed states \(\rho\) and \(\sigma\) as:
\[\mathcal{F}(\rho,\sigma)=Tr\left(\sqrt{\sqrt{\rho}\sigma\sqrt{\rho}}\right)^{2} \tag{8}\]
However, truncations of mixed states in MPO representation can lead to the loss of their positive semi-definite (PSD) property, which is a necessary condition for computing the matrix square roots in Eq. (8).
An alternative formulation obeying all four of Jozsa's axioms [7] has been given by X. Wang _et al._ [8] for mixed states. It is defined by:
\[\mathcal{F}(\rho,\sigma)=\frac{|Tr(\rho\sigma)|}{\sqrt{Tr(\rho^{2})Tr(\sigma^{2})}} \tag{9}\]
This definition removes the need for the PSD property of the quantum state, and is usable for non-normalized density matrices.
For the rest of the article, the quantum state fidelity for pure state will refer to Eq. (7), while it will refer to Eq. (9) for mixed states.
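For reference, both definitions are straightforward to evaluate on small dense states; the sketch below (ours, for illustration only) transcribes Eq. (7) and Eq. (9) with NumPy.

```python
import numpy as np

def pure_fidelity(psi, phi):
    """Eq. (7): squared overlap of two normalized state vectors."""
    return abs(np.vdot(psi, phi)) ** 2

def mixed_fidelity(rho, sigma):
    """Eq. (9): PSD-free fidelity, usable for non-normalized (and
    slightly non-physical, truncated) density matrices."""
    num = abs(np.trace(rho @ sigma))
    den = np.sqrt((np.trace(rho @ rho) * np.trace(sigma @ sigma)).real)
    return num / den
```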
## V Truncation fidelity
We call truncation fidelity the quantum state fidelity between the state before and after a truncation. We show that truncation fidelities can be efficiently computed using only the singular values associated with the bond dimension that is to be truncated.
Let us assume we want to truncate a pure state \(|\psi\rangle\) at site \(T\), to obtain the truncated state \(|\tilde{\psi}\rangle\). The state \(|\psi\rangle\) is in canonical form. Contracting both tensors neighbouring the bond dimension \(T\), and performing its SVD, we obtain the diagonal matrix \(\Lambda\), containing the singular values in decreasing order. Truncating this matrix reduces the overall bond dimension at site \(T\), and approximates the state.
Since the state is in canonical form, the truncation fidelity of such truncation is given by:
\[f_{T}(\tilde{\Lambda})=|\langle\psi|\tilde{\psi}\rangle|^{2}=\sum_{i}\Lambda_{ ii}^{2}\tilde{\Lambda}_{ii}^{2} \tag{10}\]
with \(\Lambda\) and \(\tilde{\Lambda}\) the diagonal matrices representing the singular values obtained at site \(T\)_before_ and _after_ truncation respectively.
Similarly, truncating a mixed state \(\rho\) in canonical form at site \(T\), we obtain the truncated mixed state \(\tilde{\rho}\). Using Eq. (9), and simplifying the tensors by using the canonical simplifications previously described, we obtain:
\[f_{T}(\tilde{\Lambda})=\frac{\sum_{i}\Lambda_{ii}^{2}\tilde{\Lambda}_{ii}^{2} }{\sum_{i}\Lambda_{ii}^{2}\sum_{i}\tilde{\Lambda}_{ii}^{2}} \tag{11}\]
In both cases, the truncation fidelity \(f_{T}\) depends only on the actual truncated singular values in \(\tilde{\Lambda}\). Computing the truncation fidelity is now simply a scalar operation between singular values _before_ and _after_ truncation.
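In code, this scalar operation is a one-liner; the helpers below are a literal transcription of Eq. (10) and Eq. (11) (the function names are ours), taking the singular values before truncation and the retained values after truncation, in decreasing order and with whatever normalization convention the article adopts for \(\tilde{\Lambda}\).

```python
import numpy as np

def truncation_fidelity_pure(lam, lam_trunc):
    """Eq. (10): lam and lam_trunc are the diagonals of Lambda before and
    after truncation; only the retained values contribute to the sum."""
    k = lam_trunc.size
    return float(np.sum(lam[:k] ** 2 * lam_trunc ** 2))

def truncation_fidelity_mixed(lam, lam_trunc):
    """Eq. (11): same quantities for an MPO bond of a mixed state."""
    k = lam_trunc.size
    num = np.sum(lam[:k] ** 2 * lam_trunc ** 2)
    return float(num / (np.sum(lam ** 2) * np.sum(lam_trunc ** 2)))
```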
## VI Entanglement-aware simulations
Throughout a matrix product-based simulation algorithm, the quantum state endures successive truncations. The more entangled the state, the bigger the truncation errors, and thus the higher the bond dimensions need to be. As shown in the article by Zhou _et al._ [9] and further confirmed in [10], the overall quantum fidelity \(\mathcal{F}\) between an exact quantum state \(|\psi\rangle\) and its truncated counterpart \(|\tilde{\psi}\rangle\) can be approximated for noiseless simulations by:
\[\mathcal{F}(n)\approx\prod_{i=1}^{n}f_{i} \tag{12}\]
with \(n\) the number of truncations and \(f_{i}\) the individual truncation fidelities obtained at each truncation. This approximation proves remarkably robust at all regimes studied[9; 10]. In the case of noisy systems, Eq. (12) underestimates \(\mathcal{F}\) since the additional noise reduces the truncation errors committed. In the general case we have:
\[\mathcal{F}(n)\gtrsim\prod_{i=1}^{n}f_{i} \tag{13}\]
By enforcing the canonical form at all times throughout the simulation, we can efficiently compute truncation fidelities, and thus have direct access to how close the resulting approximated quantum state is to its exact non-truncated counterpart. Using Eq. (10) for pure states or Eq. (11) for mixed states, we know how many singular values have to be truncated in order to reach a specific target truncation fidelity. Since we know in advance how many 2-qubit gates are to be applied onto the quantum state, we can invert Eq. (12) and define the target truncation fidelities \(\{f_{t}^{(i)}\}\) to reach at every truncation so that a fidelity of at least \(\mathcal{F}_{min}\) is obtained. The bond dimension truncation scheme becomes adaptive.
The simplest way to define the target truncation fidelities \(\{f_{t}^{(i)}\}\) is to define them uniformly based on a fidelity \(\mathcal{F}_{min}\) the quantum state has to at least reach:
\[f_{t}^{(i)}(n)=\mathcal{F}_{min}^{1/n}\quad\forall i\ \in\ [0,n] \tag{14}\]
Here, \(n\) is the number of 2-qubit gates of the quantum circuit to simulate, i.e. the number of possible truncations. Throughout the simulation, at every possible truncation, we truncate bond dimensions such that the truncation fidelity given by Eq. (10) for pure states or Eq. (11) for mixed states stays as close to, but not lower than, \(f_{t}\). In practice the actual truncation fidelity will always be higher than the target truncation fidelity \(f_{t}\), resulting in a final quantum state fidelity higher than the chosen input fidelity \(\mathcal{F}_{min}\). To achieve a quantum state fidelity closer to the initially desired fidelity, one can update target truncation fidelities dynamically after every truncation so that the product of truncation fidelities is as close to \(\mathcal{F}_{min}\) as possible, see Fig. 1.

Figure 1: Comparison between the target fidelity and the actual quantum state fidelity obtained for a 25-qubit Haar-random circuit of depth 20. The “naive” strategy corresponds to no target truncation fidelity update throughout the simulation. The “nearest” strategy corresponds to the update of only the upcoming target truncation fidelity, while the “global” strategy updates all upcoming target truncation fidelities.
Since bond dimensions now depend on a direct truncation fidelity measure, they increase or decrease based on how they affect the final quantum state fidelity. The smaller the \(\mathcal{F}_{min}\) value, the lower the target truncation fidelities, and the more aggressive the truncations will be. As such, the algorithm is "entanglement-aware" since it is able to detect local entanglement changes, and adapt bond dimensions accordingly.
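The adaptive scheme itself reduces to simple bookkeeping. The sketch below (our own naming; a pluggable fidelity function stands in for Eq. (10) or Eq. (11)) derives the uniform targets of Eq. (14), picks the smallest number of singular values compatible with the current target, and shows one plausible reading of the "global" retargeting strategy of Fig. 1.

```python
import numpy as np

def uniform_targets(f_min, n_truncations):
    """Eq. (14): identical target truncation fidelities whose product is F_min."""
    return np.full(n_truncations, f_min ** (1.0 / n_truncations))

def adaptive_keep(lam, f_target, fidelity_fn):
    """Smallest number of singular values (sorted in decreasing order) to keep
    so that fidelity_fn(lam, lam[:k]) stays at or above f_target."""
    for k in range(1, lam.size + 1):
        if fidelity_fn(lam, lam[:k]) >= f_target:
            return k
    return lam.size          # cannot truncate without violating the target

def updated_target(f_min, fidelities_so_far, truncations_left):
    """'Global' retargeting: spread the remaining fidelity budget uniformly
    over the truncations still to come (one reading of Fig. 1)."""
    budget = f_min / np.prod(fidelities_so_far)   # np.prod([]) == 1.0
    return min(budget, 1.0) ** (1.0 / max(truncations_left, 1))
```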
Fig. 2 demonstrates this adaptivity on a circuit composed of two subcircuits, the second subcircuit being the adjoint of the first. The initial state is a product state requiring only bond dimensions of 1. In the first subcircuit, the bond dimensions increase as the entanglement increases, and they then decrease in the second subcircuit. At the end of the simulation, the state returns to the initial product state with a bond dimension of 1 everywhere along the tensor.
In the noisy simulation of Fig. 2(_b_), bond dimensions increase as long as entanglement dominates. But eventually noise takes over and reduces the overall state entanglement, leading to decreased bond dimension needs. The lower the desired fidelity, the faster the algorithm is at picking up these changes.
This brings new insights on the dynamics of quantum systems. One could for example expect that the bond dimension requirements in a Haar-random circuit would be maximal around the middle junction of the MPS. But this is not always the case, as shown in Fig. 3(_a_). Noisy systems tend to display large bond dimension variations that cannot be exploited using fixed maximum bond dimension truncation schemes, see Fig. 3(_b_). This makes adaptive truncation schemes particularly effective for noisy systems.
Benchmarks of noiseless and noisy entanglement-aware simulations are presented in Fig. 4, and compared to standard methods. The random circuit used is defined based on Cheng's article[5], which alternates randomly chosen layers of one- and two-qubit gates. The standard MPS simulation requires one week of simulation time for a 300-qubit random circuit of depth 75, while the entanglement-aware version only needs 2 hours. For a depth of 25, the standard MPO simulation also requires one week, but EA-MPO only needs 40 minutes. Fig. 4 also shows the bond dimensions of the resulting approximate quantum state for 300 qubits. One has to keep in mind that, assuming it is possible to guess which maximum bond dimension was needed in the first place, standard MPS and MPO simulation algorithms have to simulate the state using very large bond dimensions over the entire tensor network for a final quantum state fidelity gain lower than 0.001. Note also that while it may be tempting to truncate some of the high peaks displayed in Fig. 4, it is exactly these truncations which would lead to the largest fidelity losses on the final quantum state, since that is where the entanglement is highest, and thus where the truncation errors are maximal.
Both EA-MPS and EA-MPO simulation algorithms vastly outperform regular MPS and MPO simulations in all cases studied, be it in terms of quantum state fidelity, computation time or memory footprint.

Figure 2: (_a_) Evolution of the bond dimensions at each MPS site throughout the simulation of a 40-qubit circuit composed of a Haar-random circuit of 20 layers followed by its adjoint. All quantum gates are defined by Haar-random unitary matrices. The bond dimensions increase as the entanglement increases. When the adjoint circuit is reached, entanglement entropy decreases, and the bond dimensions follow. (_b_) Simulation of a noisy quantum circuit of 8 qubits with depolarizing noise parameters \(\epsilon_{1}=\epsilon_{2}=0.05\) for both one- and two-qubit gates. Noise increases as the depth of the circuit increases. This leads to reduced entanglement needs, which is then exploited by the entanglement-aware algorithm to reduce the state bond dimensions. The lower the desired fidelity \(\mathcal{F}_{min}\), the more aggressive the algorithm will be at truncating bond dimensions. Inversely, the higher \(\mathcal{F}_{min}\), the more conservative the truncations will be.

Figure 3: (_a_) Simulation of a 40-qubit Haar-random quantum circuit of depth 20 with a target fidelity of \(\mathcal{F}_{min}=0.9\). Contrary to intuition, bond dimensions are not higher at the center of the MPS. (_b_) Noisy simulation of a 30-qubit Haar-random quantum circuit of depth 10 with a target fidelity of \(\mathcal{F}_{min}=0.9\). Large local entanglement variations are exploited and induce large bond dimension variations along the MPO.
It is however difficult to assess the exact scaling improvement over regular fixed maximum bond dimension methods, as it is problem dependent. Also, as bond dimensions are allowed to grow indefinitely, a quantum circuit inducing too much entanglement will remain almost impossible to simulate. For that reason, it is advisable to cap adaptive bond dimensions up to a maximum value which, if reached, enforces bond dimension truncations. In that case, the desired fidelity can no longer be guaranteed.
## VII Conclusion
In conclusion, I have shown that an adaptive bond dimension method far outperforms methods based on arbitrarily chosen maximum bond dimensions. The metric used in this article is the quantum state fidelity because of its straightforward relationship with truncation fidelities, but other metrics could be used. This method only adds the computation of the truncation fidelities, but this additional cost is largely compensated for by the overall bond dimension reduction it allows. Since bond dimension growth can still be constrained by a maximum bond dimension, and because the loss in final quantum state fidelity is negligible, I expect adaptive truncation methods to push even further the quantum system simulation capabilities of tensor network-based methods on classical hardware.
|
2309.04700 | From Programming Bugs to Multimillion-Dollar Scams: An Analysis of
Trapdoor Tokens on Decentralized Exchanges | We investigate in this work a recently emerging type of scam token called
Trapdoor, which has caused the investors hundreds of millions of dollars in the
period of 2020-2023. In a nutshell, by embedding logical bugs and/or owner-only
features to the smart contract codes, a Trapdoor token allows users to buy but
prevent them from selling. We develop the first systematic classification of
Trapdoor tokens and a comprehensive list of their programming techniques,
accompanied by a detailed analysis on representative scam contracts. We also
construct the very first dataset of 1859 manually verified Trapdoor tokens on
Uniswap and build effective opcode-based detection tools using popular machine
learning classifiers such as Random Forest, XGBoost, and LightGBM, which
achieve at least 0.98% accuracies, precisions, recalls, and F1-scores. | Phuong Duy Huynh, Thisal De Silva, Son Hoang Dau, Xiaodong Li, Iqbal Gondal, Emanuele Viterbo | 2023-09-09T06:47:23Z | http://arxiv.org/abs/2309.04700v3 | # From Programming Bugs to Multimillion-Dollar Scams: An Analysis of Trapdoor Tokens on Uniswap
###### Abstract
We investigate in this work a recently emerging type of scam token called Trapdoor, which has cost investors hundreds of millions of dollars in the period of 2020-2023. In a nutshell, by embedding logical bugs and/or owner-only features into the smart contract code, a Trapdoor token allows users to buy but prevents them from selling. We develop the first systematic classification of Trapdoor tokens and a comprehensive list of their programming techniques, accompanied by a detailed analysis of representative scam contracts. We also construct the very first dataset of 1859 manually verified Trapdoor tokens on Uniswap and build effective opcode-based detection tools using popular machine learning classifiers such as Random Forest, XGBoost, and LightGBM, which achieve accuracies, precisions, recalls, and F1-scores of at least 0.98.
## 1 Introduction
The widespread adoption of the blockchain technology together with the ever-growing demand for trading digital assets has led to the emergence of hundreds of cryptocurrency _centralized exchanges_ (CEXs) such as Binance [3], Coinbase [12], and KuCoin [27], all of which employ the traditional trading mechanism with the vital role of a central authority. By contrast, _decentralized exchanges_ (DEXs) such as Uniswap [39], Pancakeswap [30], and Sushiswap [32], have been developed to facilitate decentralization and enhance user privacy. Governed solely by a set of smart contracts, DEXs allow users to trade their digital assets directly with each other _without_ any intermediary, hence providing them with full control of their assets, better anonymity, as well as censorship resistance. On the other hand, the lack of a central authority on DEXs also means little quality control, regulation, and customer support, making their users susceptible to a plethora of issues, most notably price slippage, smart contract bugs and vulnerabilities, and low-quality or downright malicious scam tokens [4, 5, 13, 17, 28, 45].
Founded in 2018, Uniswap has become one of the most popular DEXs operating on top of the Ethereum blockchain, with the daily trading volume exceeding US$40 million [15] at the time of writing. This DEX, however, has been reported to be littered with scam tokens. For example, the recent study by Xia _et al._ [45] indicated that about 95% among more than 10,000 tokens collected by their team from Uniswap are actually scam tokens. Mazorra _et al._ [28] extended this dataset further to include more than 26,000 tokens and also concluded that around 97.7% of them are malicious. In a recent significant cryptocurrency-related court case [22, 41], six investors from North Carolina, Idaho, New York, and Australia, who lost money after investing in various scam tokens on Uniswap, decided to sue Universal Navigation Inc., the company behind the exchange, and its founder/CEO Hayden Z. Adams in the US District Court of New York. Uniswap's counsel argued that making them liable for scam tokens on their DEX is like holding "a developer of self-driving cars liable for a third party's use of the car to commit a traffic violation or to rob a bank", to which the judge agreed. The case was dismissed by the judge in August 2023, but has surely become a legal landmark on cryptocurrency investment scams.
In this manuscript, we set out to investigate _Trapdoor scam_, a recently emerging type of investment scam on DEXs that has cost the investors hundreds of millions of dollars1, according to our analysis on a collection of nearly 2,000 scam smart contracts collected in the period from 2020 to 2023 on Uniswap (see our labeled dataset [1]). We also found in our initial analysis that around 50,000 investors (unique addresses) had lost their investment to these contracts. The top three losses reached 3,452 ETHs or US$5,702,117, whereas the top three scam contracts WallStreetBets, Hashmasks, and Soulja made in total 6,992 ETHs or US$11,541,904 in this period. In a nutshell, a Trapdoor smart contract employs programming logical bugs such as an "if" condition that is never satisfied, a fee-manipulation mechanism that can only be called by the contract owner, or numerical exceptions such as division by zero, in order to allow the investors to buy newly created tokens (paid with ETH or a valuable token) but prevent them from selling the tokens back to the contract to earn a profit.
Footnote 1: How much of this is due to _wash trading_ is the subject of separate ongoing research.
Trapdoor scam is very often misclassified as _Honeypot_, which is a closely related type of cryptocurrency scam [37]. A Honeypot scam, as its name suggests, lures the investors (victims) in by exposing a vulnerability in the contract code that could potentially be exploited, which turns out to be a fake one. A Trapdoor scam, on the contrary, hides the bugs and pretends to be a legitimate and high-yield token. A more detailed comparison between these two types of scam token is given in Section 3.1. Trapdoor scam is also sometimes treated as a sub-type of Rug-pull scams (see, e.g. [45, 28]), which was the most common cryptocurrency financial scam and responsible for 37% of all cryptocurrency scam revenue in 2021 according to the 2022 Crypto Crime Report from Chainalysis [7]. In fact, Rug-pull is an umbrella term that refers to scams in which the project developers first lure the investors into buying a new and seemingly profitable token and then disappear with all of the funds, leaving the victims with worthless assets. Trapdoor, on the other hand, refers to a particular set of techniques that ensure that the investors cannot sell back their bought tokens, which is one specific way to allow the liquidity pool to be rug-pulled later.
To the best of our knowledge, there has been no systematic study of Trapdoor scam tokens on Uniswap. We aim to address that gap in this manuscript. Our main contributions are given below.
* We provided the very first comprehensive analysis of Trapdoor tokens, including a classification and a full list of scam techniques and maneuvers. As part of the discussion, we also dissected a number of scam contracts to demonstrate how they actually work. Additionally, full contract analyses are provided for 20 representative scam tokens4. Footnote 4: [https://github.com/bsdp2023/trapdoor_reports](https://github.com/bsdp2023/trapdoor_reports)
* We built the very first dataset5 of 1,859 scam Trapdoor tokens on Uniswap, which were first tested by a buy-sell simulation and then manually verified. Footnote 5: [https://github.com/bsdp2023/trapdoor_tool](https://github.com/bsdp2023/trapdoor_tool)
* We also developed machine learning detection tools6 that can detect the malicious tokens with very high accuracy. These tools rely on the frequencies of opcodes extracted from the contract's bytecode, which will work even when the token Solidity codes are not available. We also extracted a list of the most important opcodes and gave an interpretation of why such opcodes play a more important role in distinguishing a Trapdoor token. Footnote 5: [https://github.com/bsdp2023/trapdoor_tool](https://github.com/bsdp2023/trapdoor_tool)
The paper is organized as follows. In Section 2 we provide the background knowledge of the Ethereum blockchain, ERC-20, decentralised exchanges (DEXs), and the Uniswap interaction flow. In Section 3, we first define Trapdoor and then describe how our dataset of Trapdoor tokens on Uniswap was created. We present in Section 4 a classification of Trapdoor tokens based on the three key techniques they use to prevent investors from selling. This is followed in Section 5 by the analysis of various tricks the scammers use to place and conceal the traps in the code. In Section 6, we explain in detail our machine learning based detection tools, including feature aggregation, experimental configuration, evaluation metrics, and feature analysis. The paper is concluded in Section 7.
## 2 Background
### Ethereum smart contract and ERC-20
The concept of a smart contract, an automated executable program, was introduced in 1997 by Nick Szabo [34]. However, this concept only got its first practical implementation in the release of Ethereum in 2015. Smart contracts on Ethereum can be implemented using a Turing-complete programming language called Solidity [21]. A contract's source code is then compiled into low-level _bytecode_ and deployed onto the Ethereum blockchain through a transaction. Once the contract's _bytecode_ is stored successfully on the chain, it becomes immutable and publicly available on the chain for everyone to freely access. Besides, a unique address is provided to identify a newly deployed smart contract, which users can use later for interacting with the contract. Any communication with
the contract, e.g. function invoking, fund transferring, will be recorded in its transaction history with complete information about the function called, function input, success status, execution time, etc.
Fungible (interchangeable) tokens are the most popular and notable smart contracts; they can represent virtual assets such as company shares, lottery tickets, or even fiat currencies. These tokens are implemented by following the **ERC-20 standard**, which defines a list of rules that developers must follow [20]. These rules include standard methods and events (see Table 1) that should come with any token.
Among the signatures mentioned in Table 1, transfer and transferFrom are the two functions that we primarily focus on in this study because they are the fundamental methods used to transfer tokens between two arbitrary accounts. Moreover, the token exchange operations on DEX platforms also rely on these two transfer functions for swapping tokens back and forth. Thus, scammers often aim at these two functions, or related functions called from them, to embed their malicious code.
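As an illustration of how little is needed to read an ERC-20 token programmatically, the metadata methods of Table 1 can be queried with a minimal ABI. The snippet below is our own hedged sketch using web3.py (v6 naming), not code from the paper.

```python
from web3 import Web3

# Minimal read-only ERC-20 ABI covering the metadata methods of Table 1
ERC20_ABI = [
    {"name": "name", "inputs": [], "outputs": [{"name": "", "type": "string"}],
     "stateMutability": "view", "type": "function"},
    {"name": "symbol", "inputs": [], "outputs": [{"name": "", "type": "string"}],
     "stateMutability": "view", "type": "function"},
    {"name": "decimals", "inputs": [], "outputs": [{"name": "", "type": "uint8"}],
     "stateMutability": "view", "type": "function"},
    {"name": "totalSupply", "inputs": [], "outputs": [{"name": "", "type": "uint256"}],
     "stateMutability": "view", "type": "function"},
]

def read_token_metadata(w3: Web3, token_address: str) -> dict:
    """Return the basic ERC-20 metadata of a token contract."""
    token = w3.eth.contract(address=Web3.to_checksum_address(token_address), abi=ERC20_ABI)
    return {
        "name": token.functions.name().call(),
        "symbol": token.functions.symbol().call(),
        "decimals": token.functions.decimals().call(),
        "total_supply": token.functions.totalSupply().call(),
    }
```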
### Decentralized Exchange
Decentralised exchanges (DEXs) allow users to exchange their digital assets without the involvement of central authorities [44]. Instead, all trades on a DEX are stored permanently on the chain and can be audited publicly. A DEX executes a digital trade without storing user funds (non-custodial exchange) or personal data on a centralised server; the mechanism of pairing buyers and sellers for a trade is instead defined clearly in the smart contracts.
DEXs are primarily operated based on one of two different trading price determination mechanisms: **order-book** and **automated market maker (AMM)**. Like the traditional stock exchanges, an order-book-based DEX performs a trade by recording traders' orders into the order-book and waiting until the DEX finds
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Type** & **Signature** & **Description** \\ \hline \multirow{6}{*}{Method} & name() & Return name of the token (e.g., Dogecoin) \\ & symbol() & Return symbol of the token (e.g., DOGE) \\ & decimals() & Returns the number of decimals the token uses \\ & totalSupply() & Return the total amount of the token in circulation \\ & balanceOf() & Return the amount of token owned by given address \\ & transfer() & Transfers amount of tokens to given address from message caller \\ & transferFrom() & Transfers amount of tokens between two given accounts \\ & approve() & Allow a _spender_ spend token on behalf of _owner_ \\ & allowance() & Return amount that the _spender_ will be allowed to spend \\ \hline \multirow{2}{*}{Event} & Transfer() & Trigger when tokens are transferred, including zero value transfers. \\ & Approval() & Trigger on any successful call to _approve()_ \\ \hline \end{tabular}
\end{table}
Table 1: The ERC-20 Standard
a suitable order that matches the preset price, allowing traders to buy or sell their digital asset at the expected price. Unlike the order-book model, digital asset prices in the AMM model are calculated using a mathematical formula. This model is more popularly adopted in blockchain environments due to its computational performance [44]. In general, AMM DEXs work on the concept of a **liquidity pool** that allows traders to exchange their virtual assets without any permission. Particularly, users exchange assets by transferring one asset into the pool and withdrawing another at a corresponding ratio (an asset price). Different DEXs use different formulas to determine an asset price.
### Token Exchange on Uniswap
Uniswap [39] is the largest decentralized exchange that has successfully adopted the AMM model. This exchange platform was launched onto the Ethereum network in 2018, and currently it has three different versions that operate independently. Among these three versions, UniswapV2 far outperforms the other versions in terms of the number of listed tokens and the number of exchange pools (liquidity pools) [14]. Moreover, UniswapV2 has more than 450 forks [19] across different blockchains due to its public source code and license [40]. For the reasons mentioned above, this research only focuses on UniswapV2, but the approaches in this study are applicable to other versions and forks.
In UniswapV2, each liquidity pool is like an automatic exchange counter for a pair of two different ERC-20 tokens. Imagine the pool consists of two boxes, each storing an exchangeable amount of a single token so that the amount of those two is equivalent in terms of market value. To clarify how an exchange pool works, we look into the interaction flow between users and UniswapV2 as depicted clearly in Figure 1. It is worth noting that UniswapV2 operates mainly through three smart contracts: UniswapV2Factory7, UniswapV2Pair8, and UniswapV2Router9.
Footnote 7: [https://github.com/Uniswap/v2-core/blob/master/contracts/UniswapV2Factory.sol](https://github.com/Uniswap/v2-core/blob/master/contracts/UniswapV2Factory.sol)
Footnote 8: [https://github.com/Uniswap/v2-core/blob/master/contracts/UniswapV2Pair.sol](https://github.com/Uniswap/v2-core/blob/master/contracts/UniswapV2Pair.sol)
Footnote 9: [https://github.com/Uniswap/v2-periphery/blob/master/contracts/UniswapV2Router02.sol](https://github.com/Uniswap/v2-periphery/blob/master/contracts/UniswapV2Router02.sol)
1. Everything starts from the UniswapV2Factory. On a particular blockchain, there is only one UniswapV2Factory contract, whose primary mission is to generate a UniswapV2Pair contract (liquidity pool) for two given arbitrary ERC-20 tokens. For example, if a user wants to create a new exchange pool and launch it onto Uniswap, he must call the function createPair() in the UniswapV2Factory contract with the two corresponding token addresses as inputs; the user who creates a pool is referred to as the **pool creator (PC)**.
2. After the liquidity pool is launched successfully, any user (even the PC) is able to deposit the two paired tokens into the pool as exchange liquidity, and a user who adds liquidity into the pool is called a **liquidity provider (LP)**. Since a UniswapV2Pair itself is also an ERC-20 token, the pool **mints** its own tokens (**LP-tokens**) as proof of liquidity contribution and sends them back to the liquidity contributor every time it receives tokens from LPs. These LP-tokens are then used to withdraw the funds back by **burning** them. In the UniswapV2Pair contract, two corresponding functions, **mint()** and **burn()**, are provided to support the aforementioned features.
3. When a user wants to exchange one of the tokens in the pair for another, he must use the swap() function of a liquidity pool. The exchange rate is calculated based on the ratio of tokens available in the pool, following the constant product formula below. \[R_{x}*R_{y}=(R_{x}+\Delta_{x}*(1-\gamma))*(R_{y}-\Delta_{y}),\] (1) where \(R_{x}\) and \(R_{y}\) are the current amounts (reserves) of the two tokens in the pool. \(\Delta_{y}\) is the amount of tokens that the user receives while \(\Delta_{x}\) is the amount the user needs to deposit into the pool. For each exchange on the pool, \(\gamma=0.003\) is the fee charged from the trader and proportionally distributed to all LPs in the pool as rewards for their liquidity contribution. Hence, when users exchange one token for another token, the price of the latter will rise. In a swap, a user must first transfer his token to the pool. Then the corresponding amount of the target token will be calculated based on the exchange rate and sent to the buyer. In UniswapV2, UniswapV2Router acts as an intermediary between the user and the UniswapV2Pair, which helps the user estimate the amount of output token before swapping or exchange ETH to WETH or vice versa.
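To make Eq. (1) above concrete, solving it for \(\Delta_{y}\) gives the amount a trader receives for a given input. The helper below is our own illustration of this arithmetic (not Uniswap code), with the 0.3% fee.

```python
def get_amount_out(delta_x, reserve_x, reserve_y, fee=0.003):
    """Output amount delta_y for an input delta_x, from the constant product
    rule R_x * R_y = (R_x + delta_x * (1 - fee)) * (R_y - delta_y) of Eq. (1)."""
    effective_in = delta_x * (1 - fee)          # the fee stays in the pool
    return reserve_y * effective_in / (reserve_x + effective_in)

# Example: a pool holding 100 WETH and 1,000,000 newly created tokens
print(get_amount_out(1.0, 100.0, 1_000_000.0))  # ~9871 tokens for 1 WETH
```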
Generally, a newly created token often pairs with a popular and valuable token for easier approaching traders on the market. Hence, we refer to the swap from a popular token to a newly created token as buying a new token, while the reversed swap is referred to as selling a new token.
Figure 1: Interaction flow between users and UniswapV2
## 3 Building a Dataset of Trapdoor Tokens
In this section, we first introduce the concept of Trapdoor tokens and describe the main steps for a scammer to execute the scam on a DEX. We then present in detail the process of creating a reliable dataset of 1,859 Trapdoor tokens on Uniswap, in particular, how we collected tokens on this DEX, what filtering criteria we used to extract tokens that are very likely scams, and how we tested and labeled them.
### Definition of Trapdoor
Note that scam tokens are always paired with high-valued tokens, a general definition of which is given below.
Definition 1 (High-value Token): A high-value token is a token that is very popular in the cryptocurrency market and trusted by investors worldwide. Moreover, such a token must have a consistently high market cap and has been paired with many other tokens on exchanges.
Definition 2 (Trapdoor Token): Trapdoor token is a digital token pretending to be a potential cryptocurrency that seemingly brings a high profit to investors (victims) if exchanged. However, the investment fund (in the form of a high-valued token) from investors will be trapped in the liquidity pool due to intentional programming tricks, which can only be withdrawn by the token creator (scammer) or some specific addresses allowed by the creator.
A Trapdoor scam can be carried out on a DEX in four steps (see Fig. 2).
1. The scammer deploys a Trapdoor token onto the chain and creates a liquidity pool on DEX that includes the Trapdoor token and a high-value one.
2. Once the liquidity pool is active, the investors can buy the Trapdoor token by transferring the high-value token to the pool. The scammer often sets the buying fee minimal to encourage the investor to buy more.
Figure 2: The execution of a Trapdoor scam in four steps. An investor can buy the scam token using a high-value token, but cannot sell the scam token back to the pool.
3. As more investors buy the Trapdoor token from and add more high-value token to the pool, the value of the scam token rises with respect to the high-value one. However, the investors cannot sell to gain profit as they expected.
4. The scammer now withdraws all high-value tokens from the liquidity pool, including what investors have invested, and disappears.
We note that Trapdoor scams are sometimes referred to as Honeypot scams in several sources. However, these two types are quite different with respect to the targeted victims, how they attract them, and how they embed malicious code to achieve their goal. More specifically, Honeypot scams often target investors with some level of experience, who can read the smart contracts, while Trapdoor scams target novice investors who cannot understand the contract very well. Honeypot scams lure the victims by intentionally exposing an easy-to-spot loophole that seemingly could be exploited by investors to gain a big profit from the contract, which turns out to be a fake one: the hopeful investor observes the (fake) loophole, invests into the contract, and ends up losing all the investment. A Trapdoor token, on the contrary, aims to hide the malicious/buggy code, making it harder for users to detect. The techniques used in a Honeypot scam, as reported in [38], are also very different from those in a Trapdoor scam.
### Collection of Trapdoor Tokens
**Data collection.** Data in this study was collected by using the web3 library [43] to query data from an Infura archive node [24]. As depicted in Fig. 3, we first gathered all token addresses listed on UniswapV2 from May 2020, when this platform was launched, to January 2023. We queried from the chain all PairCreated events of the UniswapV2Factory10 contract, which store the creation information of generated liquidity pools. Next, we scanned over each event and collected 131,172 unique token addresses listed on this platform. Besides, we also retrieved information on each token, such as the creator address, token name, token symbol, token decimals, and token total supply with the help of the Etherscan API [36] for further analyses.
Footnote 10: [https://etherscan.io/address/0x5c69bee701ef814a2b6a3edd4b1652cb9cc5aa6f](https://etherscan.io/address/0x5c69bee701ef814a2b6a3edd4b1652cb9cc5aa6f)
Figure 3: Data collection process. Tokens listed on the Uniswap exchange are downloaded and categorized into high-value and suspicious lists
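The event scan described above can be sketched with web3.py roughly as follows. This is our own hedged sketch (web3.py v6 naming): the provider URL is a placeholder, the starting block is approximate, and in practice the block range must be chunked to respect provider limits.

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://mainnet.infura.io/v3/<PROJECT_ID>"))  # archive node
FACTORY = Web3.to_checksum_address("0x5c69bee701ef814a2b6a3edd4b1652cb9cc5aa6f")
# keccak hash of the PairCreated(address,address,address,uint256) event signature
PAIR_CREATED_TOPIC = w3.keccak(text="PairCreated(address,address,address,uint256)").hex()

logs = w3.eth.get_logs({
    "address": FACTORY,
    "topics": [PAIR_CREATED_TOPIC],
    "fromBlock": 10_000_000,      # roughly the UniswapV2 launch (May 2020)
    "toBlock": "latest",
})

tokens = set()
for log in logs:
    # token0 and token1 are indexed parameters, stored in topics[1] and topics[2]
    tokens.add("0x" + log["topics"][1].hex()[-40:])
    tokens.add("0x" + log["topics"][2].hex()[-40:])
print(len(tokens), "unique token addresses")
```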
Afterwards, we filtered all non-malicious tokens (high-value tokens) in the list according to their popularity. The popularity of a token is based on its market capitalization and number of liquidity pools it is involved in. Following information on the cryptocurrency ranking website [16], we collected 843 high-value tokens using the below criteria.
\[\text{is\_high\_value\_token}=\begin{cases}\text{TRUE},&\text{if market\_cap}\geq\text{US\$1 million},\\ \text{TRUE},&\text{if number\_of\_pairing\_pools}\geq 50,\\ \text{FALSE},&\text{otherwise}.\end{cases}\]
Finally, to retrieve suspicious Trapdoor tokens, we conducted two consecutive steps. In the first step, we gathered tokens that pair with high-value tokens, because the main goal of a Trapdoor token is to trap valuable assets from users. In the second step, we filtered and selected tokens that are highly likely to be scam tokens by following the filtering criteria listed below:
1. Number of sellers \(\leq\) 1
2. Number of buying transactions \(\geq\) 0
3. Number of selling transactions / Number of buying transactions \(\leq\) 5%
4. Time period from token deployment to data collection \(\geq\) 1 month
According to the nature of the Trapdoor token, nobody can sell a Trapdoor token to get a high-value token back except the token creator (scammer). Therefore, we have criterion (1) to reflect this characteristic; the value zero occurs when the creator withdraws all liquidity from the pool instead of selling the Trapdoor tokens for the high-value tokens. Criteria (2) and (3) ensure that the token was successful in attracting investors and eliminate all under-performing tokens. Criterion (4) is used to discard all newly listed tokens, because there is a high probability that a user who just bought them will not sell yet due to insufficient profit, leaving them with trading histories similar to a Trapdoor's. As a result, we gathered **4150** suspicious tokens, but only **2723** tokens have source code available on Etherscan [35]. We would like to emphasize that our aim is to build a reliable dataset, not to detect all Trapdoor tokens on Uniswap. Therefore, we only include tokens that we are certain are malicious and remove all uncertain cases.
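Read together, the four criteria amount to a single predicate over per-token trading statistics. The transcription below uses our own field names, and reads criterion (2) as strictly positive so that the ratio in criterion (3) is well defined.

```python
def is_suspicious(stats: dict, min_age_days: int = 30) -> bool:
    """Transcription of filtering criteria (1)-(4); `stats` holds the
    per-token trading history aggregated from on-chain events."""
    if stats["num_sellers"] > 1:                          # criterion (1)
        return False
    if stats["num_buys"] <= 0:                            # criterion (2), read as > 0
        return False
    if stats["num_sells"] / stats["num_buys"] > 0.05:     # criterion (3)
        return False
    if stats["age_days"] < min_age_days:                  # criterion (4)
        return False
    return True
```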
**Limitation.** The proposed filtering criteria are used for retrieving tokens with a high probability of being malicious. That does not mean a token that does not satisfy the criteria is non-malicious. Such tokens can be frauds, but it is very hard to discriminate them from normal tokens based only on the auto-filtering method. It requires an intensive analysis of the contract source code to prove that a token is malicious, which is very time-consuming. Although applying those criteria may skip some Trapdoor variants that adopt a wash-trading technique, it helps reduce the workload significantly, and the number of matching tokens is enough for this scam's first dataset, which is the foundation for building a machine learning-based detection tool.
**Trapdoor ground truth labelling.** To construct a ground truth dataset, we must ensure every token in this dataset has the Trapdoor malicious nature that
allows investors to buy but prevents them from selling. To verify each token in the retrieved list, we follow the process displayed in Figure 4, noting that we only focus on the **2723** tokens with available source code, since we have to analyse the source code and transaction history of each to prove that a considered token is a scam token.
First of all, to determine whether a suspicious token really acts as a Trapdoor token, we simulate buy and sell transactions using Brownie [6], a Python-based development and testing framework for smart contracts. The detail of the verification method is presented below.
```python
investor = brownie_prepared_account[0]       # pre-funded Brownie test account
token_pool_addresses = load_test_data()      # (token, pool) address pairs to verify
for (token, pool) in token_pool_addresses:
    buying_test = TokenBuyingTest(investor, pool, token)     # simulate a buy
    selling_test = TokenSellingTest(investor, pool, token)   # simulate a sell
    save_test_result(token, pool, buying_test, selling_test)
```
**Procedure 1 TrapdoorVerification**
The test scenario for token buying and selling includes two parts: trading result assertion and trading fee assertion. For the trading result assertion, we aim to check whether the token transfer succeeds. We expect that token buying completes successfully while token selling does not, because that proves the Trapdoor characteristic. The trading fee assertion is only conducted if the transfer succeeds. In this assertion, we check whether the trading fee, which is charged directly from the transferred amount while trading, is affordable. In our study, we set the acceptable threshold for this fee at 30% of the sending amount. Similarly to the previous assertion, we expect a very small fee to be applied to buying transactions to encourage users to buy as many tokens as possible, while the selling fee should be extremely high to prevent them from retrieving the high-value tokens back. The test result of each token is then stored and used for the next comprehensive analysis.
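A single buy check against the UniswapV2 router can be written with Brownie on a mainnet fork roughly as below. This is a simplified sketch, not the paper's full test harness: it assumes the Brownie project declares IUniswapV2Router02 and IERC20 interfaces, and it uses the well-known mainnet router and WETH addresses.

```python
from brownie import accounts, chain, interface, Wei

ROUTER = "0x7a250d5630b4cf539739df2c5dacb4c659f2488d"   # UniswapV2Router02 (mainnet)
WETH = "0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2"     # WETH (mainnet)

def token_buying_test(investor, token_addr, eth_in=Wei("0.1 ether")):
    """Buy the suspicious token with ETH; return (success, tokens_received).
    Usage on a fork: investor = accounts[0] (a pre-funded account)."""
    router = interface.IUniswapV2Router02(ROUTER)
    token = interface.IERC20(token_addr)
    before = token.balanceOf(investor)
    try:
        router.swapExactETHForTokensSupportingFeeOnTransferTokens(
            0,                              # accept any amount out
            [WETH, token_addr],             # swap path WETH -> token
            investor,
            chain.time() + 600,             # deadline
            {"from": investor, "value": eth_in},
        )
    except Exception:
        return False, 0
    return True, token.balanceOf(investor) - before
```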
In the next step, we only focus on tokens that failed the sell test regardless of the buy test outcome, because, in some situations, a scammer will disable the token trading altogether (triggering the trap) to prevent the sale from investors, or apply a considerable trading fee to both buying and selling transactions that makes the users lose their funds instantly after exchanging this scam token. Then every selected token was manually assessed to figure out all tricks used by scammers. The detail of Trapdoor tricks is categorized and explained in Section 4. Ultimately, our ground truth dataset is constructed from **1859** verified Trapdoor tokens.

Figure 4: Trapdoor verification process. The trading process is simulated for checking the buy-ability and sell-ability of a token. If the token behaves as a Trapdoor, its source code is analysed to determine all tricks that have been used in the contract.
### General Overview of Trapdoor Tokens and Impacts
In this section, we analyze the collected Trapdoor tokens from different perspectives to obtain more insights into this type of scam.
**The lifetime of Trapdoor tokens.** One of the characteristics of a blockchain is immutability, which means that once a token is deployed onto Ethereum, it will be there forever. Thus, we define the _lifetime_ of a token as follows: the lifetime of a token starts at the block where the token was created and finishes at the block where the last Transfer event of the token was emitted. We found that 1098 tokens (59%) in our dataset had a lifetime of less than 24 hours. In particular, 309 of them had a lifetime of even less than 1 hour, but they were still exchanged by users. For example, the token EternalMoon11 was purchased by 132 different users although its lifetime lasted only 68 blocks. The remaining 41% are long-lived Trapdoor tokens that seem to be more hidden and difficult to detect. Surprisingly, 17 tokens among them lived longer than 1 year. The most successful scam token is Mommy Milkers, which stole money from about 1,400 different investors.
**Same-name-fake-token strategy.** While tokens should have a unique name and symbol, scammers can still name their Trapdoor token similarly to a popular token to make users mistakenly think that they are exchanging with the original one. By comparing token names and symbols, we found 303 tokens (15.7%) that have the same name or symbol as high-value tokens. For instance, 35 different Trapdoor tokens among them have the same symbol as the high-value token Bone ShibaSwap (BONE)12, and those tokens have successfully scammed 2,723 investors in total. This strategy was also observed in an earlier work by Xia _et al._ [45] for general Rug-pull scams.
Footnote 12: [https://coinmarketcap.com/currencies/bone-shibaswap/](https://coinmarketcap.com/currencies/bone-shibaswap/)
**Contract cloning.** In our analysis, we also found 1,647 different scammers (unique addresses) behind the 1,859 scam tokens; 96% of them created only one token. A scammer might have used different strategies, such as using multiple accounts for creating multiple scam tokens or generating a fresh profile for each newly deployed scam token. To dig further, we examined tokens that have
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Representative token** & **Tokens** & **Creators** \\ \hline
0xc51a16e0573c796a445bfd2ec33d9ab9151dfbd & 84 & 25 \\
0x520ad62c62fa93799b68c9095e3402db4236847 & 52 & 52 \\
0x15489e5e954f91f530f5521dec51d9e704e931e9 & 48 & 48 \\
0xa9fba963eb3bc7d6dd9b73c377a46d26a86ee148 & 34 & 34 \\
0xaf9f277eda18928414934dc93c63e8687aed01b3 & 26 & 7 \\ \hline \end{tabular}
\end{table}
Table 2: Top-five token groups, each has the same source code across all tokens.
exactly the same source code and grouped them into 52 different groups. The largest group contains 84 different tokens which have the same source code as the token Fei Protocol13. Those tokens were created by 25 different accounts. Furthermore, the second largest token group contains 52 tokens with the same source code, created by 52 different accounts. Table 2 presents more information about the five largest token groups.
Footnote 13: [https://etherscan.io/address/0xc51a16e0573c796a445bfda2ec33d9ab9151dfbd](https://etherscan.io/address/0xc51a16e0573c796a445bfda2ec33d9ab9151dfbd)
**Trapdoor impacts.** We provide in Table 3 some economic statistics of the 1,859 Trapdoor tokens, with the top five most profitable tokens identified. In total, the scam tokens in our dataset have collected over 133,676 ETHs (or 220 million US dollars) from 53,318 unique investor addresses. Among them, the top three tokens WallStreetBets, Hashmasks, and Soulja have gained a total of 6,992 ETHs from nearly 700 investors. Moreover, the top three losses reached 3,452 ETHs (US$5,702,117), in which the investors purchased at least 50 different scam tokens.
## 4 Trapdoor classification
In this section we classify Trapdoor tokens into three broad groups based on the key techniques that they use to prevent the investors from selling the token back to the pool: conditional assertion, trading fee manipulation, and numerical exception.
### Conditional assertion
The contracts of tokens in the first group often define a condition that must be satisfied before any selling. To illustrate this technique, let's consider the contract of a Trapdoor token called AquaDrops (see Fig. 5). At first glance, the contract code looks quite normal. It provides two functions transfer and transferFrom to support token transferring between two arbitrary addresses. Both functions call the common function _beforeTokenTransfer before starting the transfer, which performs a special check when someone sends tokens to the liquidity pool uniswapV2Pair. More specifically, it uses an assert function require for checking
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Address** & **Name** & **ETH** & **Victims** \\ \hline
0xbbff2cccd0e774478b2316fbbb22913f1f1475a7 & WallStreetBets & 3024 & 323 \\
0xf836bcea14bfd5fcc449539ad9bcd58032cd5aef & Hashmasks & 2234 & 205 \\
0x34787b6b81173385729242caad95837321cc1bf9 & Soulja & 1664 & 161 \\
0x34173aa873f754169e269d5ec3526573c1433200 & Doge Set Dollar & 1516 & 218 \\
0x4c334f280f9e440788b66c4e5260c03e4477267a & Hop.Exchange & 1510 & 169 \\ \hline
**Total** & 1859 tokens & 133677 & 53318 \\ \hline \end{tabular}
\end{table}
Table 3: Top-five most profitable tokens and the number of victims.
if the sender address is in the eligible list _enable (see line 657). If the sender's address is _invalid_, an error will be raised and the transaction will be terminated. However, _enable is used nowhere except in the contract's constructor where the address of the creator itself is added to this list. Thus, only the contract creator (scammer) can transfer a Trapdoor token to the liquidity pool uniswapV2Pair to receive the high-value token from the pool.
Another example is the token LGT (see Fig. 6). Similar to AquaDrops, the function _transfer is used as the core function of both mandatory functions transfer and transferFrom. The first if-else block (line 394-408) seems to be used for preventing any transaction from snipers, i.e. auto-trading bots. Such a check is clearly necessary to prevent instant buying and selling from bots that may drive the price of the token go down dramatically. However, it turns out that this is just smoke and mirrors. By examining the _transaction history_ of this token18, we discovered that the token owner had added the address of the corresponding liquidity pool into the list snipers. As snipers now contains the pool address, the first if and else if conditions are satisfied only if both to and from are the pool address (automatedMarketMakerPairs also contains the pool address), which never happens. Therefore, if an investor tries to sell, as snipers[to] = TRUE, the if condition in the last else block will be satisfied, and the function is returned without executing the sell transfer.
Figure 5: AquaDrops15 token adopts the _conditional assertion_ technique. It will raise an error if an investor tries to sell the token back to the pool. No investors were in _enable.
### Trading fee manipulation
Unlike conditional assertion, tokens in the trading fee manipulation group will charge senders an extremely high trading fee, e.g. by burning a large amount of Trapdoor token every time the investor buys or sells. As a consequence, after a buy-sell cycle, investors will lose almost all, e.g. 99%, of the investment. The burning percentage/trading fee ratio can be either defined explicitly in the contract or updated later by the token creator. Let's examine the contract of Seppuku as an example of the former (see Fig. 7). In this contract, both functions transfer and transferFrom call findNinetyPercent to calculate the amount of token to be burned (permanently removed) from the transferring amount. Scammer deceives investors by naming the output of findNinetyPercent as tenPercent but it is actually almost 90%. Thus, only 10% of what was sent will be transferred to the receiver. As the deduction is applied to every buy/sell transaction, investors will be charged 99% when conducting a buy-sell cycle, and the remaining 1% will be insufficient to cover even for the transaction fee.
Figure 6: LGT token17 is another member in the condition assertion group which ignore all selling transactions from investors silently, instead of raising any error, to prevent the trace-back from experienced users.
A less obvious fee manipulation strategy was employed by the token 88 Dollar Millionaire (see Fig. 8, which started with low fee but raised it later to steal all the investment fund. In the contract, the buying fee is the sum of buymktFee (marketing fee) and buyliqFee (liquidity fee) whereas the selling fee is combined from sellmktFee and sellliqFee. At the beginning, the value of buymktFee and buyliqFee were 1% and 6% (at the beginning of the code, not captured in Fig. 8), respectively, which are reasonable. sellmktFee and sellliqFee also had the same initial values. However, sellmktFee was later updated to 99% by the scammer21 using the backdoor function sellFee at line 439, making the fee rate rise up to 105% (99% + 6%). As a consequence, the investor won't receive any payment back.
Footnote 21: [https://etherscan.io/tx/0xb61b83676396791d4edc4243fb8ae6d6725efd8578809d41efe25f8354c77b60](https://etherscan.io/tx/0xb61b83676396791d4edc4243fb8ae6d6725efd8578809d41efe25f8354c77b60)
### Numerical exceptions
Scam tokens in the third group also tend to cause errors to terminate transactions on pre-set conditions. However, instead of causing an error actively using
Figure 7: Seppuku20 token applies a blatant 90% trading fee to both buying and selling transactions, hence stealing 99% funds of investors for one round of buy & sell.
require, a _numerical exception_ occurs due to the calculation of invalid values intentionally updated by the scammer. That makes those tricks harder to detect. For instance, the contract of CPP4U (Figure 9) uses a variable totalSellTax to record the trading fee percentage, which was originally set to 0 (line 505-507). However, the token creator later secretly increased24 this value to 1000%, resulting in the fee ten times higher than the value _amount of the transaction (line 676). The subtraction of fee from _amount, therefore, will be a negative number (line 677), which is then assigned to rAmount, an unsigned integer (line 636). This assignment immediately causes the exception "Integer overflow".
Footnote 24: [https://etherscan.io/tx/0x034963d54d10c801d9674224ab41c305111214f708983c37c8ce60c2c0837548](https://etherscan.io/tx/0x034963d54d10c801d9674224ab41c305111214f708983c37c8ce60c2c0837548)
In the case of CPP4U, an investor who came and checked the transaction history of the token after the fee had been increased to 1000% would be immediately alerted. The token Squid-X (Figure 10) has another way to trick even these more careful investors. It intentionally updated the buying fee27 and
Figure 8: 88 Dollar Millionaire23 token initially introduced a small commission fee (6%) as a bait. The token creator later increased this fee to 105% via a transaction, effectively making the investors lose their entire investment.
selling fee to zero28 to attract those investors who actually looked into the transaction. To make the contract look even more legitimate, trading fee value checks are given in setBuyFees and setSellFees, line 613 and 625, respectively, to ensure updated fees do not exceeds 30%. However, by inspecting the function swapBack (line 500-520) more closely, realTotalFee turns out to be zero because buyTotalFee, sellTotalFee, stakingFee, and sellFeeStaking are all zeros, which then cause "division by zero" numeric exception at line 505. Moreover, in transfer and transferFrom, the swapBack function is called based on the shouldSwapBack's return value. Regardless of swap status and token's balance checking, the function shouldSwapBack only returns true if the sender is not the liquidity pool (line 476). Therefore, investors can buy Squid-X tokens without any problem but cannot sell them back.
Figure 9: CPP406 token intentionally increased the trading fee percentage, originally 0%, to 1000%, causing an "Integer overflow" exception on every selling transaction.
## 5 Trapdoor Maneuvers Analysis
Although we only list three types of Trapdoor token above, there are several important aspects of these tokens that should also be investigated. We structure our discussion around three essential factors of a Trapdoor token: how the trap is created (trap mechanism), where the trap is located (trap location), and how it is concealed (trap concealment).
Figure 10: Squid-X\({}^{30}\) intentionally changed the selling fee to zero to attract careful investors who looked into the transaction history. However, this update led to a “division by zero” error when investors try to sell.
**Trap mechanisms.** There are a number of simple mechanisms that could be used to create a trap in the token contract, which will prevent the investors to sell the token back to the pool and get paid.
The simplest trap can be constructed by using a boolean variable as a switch, which was originally off so that investors can exchange tokens back and forth. At a particular point, the creator can turn the switch on to activate the trap, effectively preventing the tokens being sold from investors. Some good case studies for this trap include the ILLUSION31 token (lines 255-259) and the miniKeiko32 token (lines 293-295). The scammer can put a switch in the assertion method require that will raise an error and terminate all transfers from users if the value of the switch is false (switch off). Another way is to put it in an if-else condition to throw exception when it's off.
Footnote 31: [https://etherscan.io/address/0x950247e6697d3e62b80cb49ffd5cb78a1cab7233](https://etherscan.io/address/0x950247e6697d3e62b80cb49ffd5cb78a1cab7233)
Footnote 32: [https://etherscan.io/address/0xFD6A7390c424A2c2c3cb06433B7D29926FfAf09F](https://etherscan.io/address/0xFD6A7390c424A2c2c3cb06433B7D29926FfAf09F)
Instead of using a switch, scammers can also employ an exchange limit (integer number) for restricting the transfer amount in each transaction from investors and open a backdoor function to manipulate this exchange limit. In the early stage, the limit is set to a large number, letting investors buy the scam token without any restriction. Subsequently, the scammer updates this limit to a very small number (e.g, 0 or 1), effectively preventing selling. For example, the creator of the token Cobrashi33 set the sell limit to 1 and the buy limit to \(10^{27}\), making it technically a buy-only token. Apart from applying a limit on the sell amount, a scammer can also use a trading fee to drain the expected transfer amount as mentioned in the section on trading fee manipulation.
Footnote 33: [https://etherscan.io/address/0x579aA9419741eb4842A4Bc2439176A34260A259f](https://etherscan.io/address/0x579aA9419741eb4842A4Bc2439176A34260A259f)
Using a blacklist or a whitelist is also very typical among scam tokens. While a blacklist contains _prohibited_ addresses that are forbidden from exchanging tokens, a whitelist stores addresses that have permission to trade in some circumstances. While these two lists were originally designed to prohibit transactions from auto-trading programs (sniper bots) or to refuse transactions from angel investors in the early stages (prior to DEX listing), they are often manipulated in scam contracts to restrict selling from investors while still creating a false perception of safety. For example, a scammer can embed malicious code or call an owner-only function to automatically add investor addresses to a blacklist in their first (buy) transaction. This is the case for the token EasyINU34 (lines 277-290). Another example is where the scammer marks an exchange pool address as a black address, which makes the contract reject all (selling) transactions sent to the pool address (see [35]). The scammer can also add his own address to a whitelist so that only he can sell the Trapdoor token to withdraw the high-value one from the pool (e.g., see [1], [2]).

| **Types** | **Functions** |
| --- | --- |
| Switch | Turn the trap on/off |
| Exchange-limit | Limit/Deduct the quantity in a sell transaction |
| Blacklist | Prohibit all transfers from addresses in the list |
| Whitelist | Accept only transfers from addresses in the list |

Table 4: Simple trap mechanisms in Trapdoor contracts.
Footnote 34: [https://etherscan.io/address/0x93fe5eabd054524fdaaeaeae7913a90bf73889ebf9](https://etherscan.io/address/0x93fe5eabd054524fdaaeaeae7913a90bf73889ebf9)
**Trap placement.** In Trapdoor tokens, malicious code can be placed anywhere as long as it is used by the transfer functions. More complex Trapdoor tokens may place their traps in a location other than the transfer function, making finding the trap in thousands of lines of code like finding a needle in a haystack. According to our analysis, there are three different locations where a trap is often placed:
* **Modifier:** A modifier is a special type of function whose mission is to perform an additional task before a function is executed (e.g., to validate function inputs). To use a modifier, a developer must attach it to a function by placing its signature next to the function definition, where it is often overlooked by users while scanning a contract. Hence, a scammer often puts the trap in a modifier and attaches this modifier to the transfer functions, thereby making the trap more hidden (see, e.g. [1]).
* **Function:** Similar to the idea of using a modifier, a scammer can place the trap anywhere and then call it from the transfer functions indirectly across multiple code layers to hide the association between the trap and the transfer functions, making investors lose track while analyzing the source code. We can take The Reckoning Force38 as a good example, in which a transfer fee is calculated from multiple sources and from different nested functions that make it very challenging to know where the fee comes from and how it is calculated. Footnote 35: [https://etherscan.io/address/0x93fe5eabd054524fdaaeaeae7913a90bf73889ebf9](https://etherscan.io/address/0x93fe5eabd054524fdaaeaeae7913a90bf73889ebf9)
* **Contract:** In an even more complicated scenario, scammers can place their trap in _another_ contract. This requires more advanced detection techniques than mere source code inspection to discover the trap. For example, [1] calls functions in another contract botProtection to manage its trap. Those functions' names are already encoded, which makes it hard to tell from the calls what those functions actually perform. Moreover, the source code of the contract botProtection is _unavailable_, making it rather difficult to explore the malicious logic used by the scammer. In such cases, we can first check the transaction history to find the address of botProtection and then use the tool Panoramix40 to decompile the contract bytecode to obtain readable source code. Footnote 36: [https://etherscan.io/address/0x5a8003ee9cae173c8c2dcb7e8b6e897c3021ba8a](https://etherscan.io/address/0x5a8003ee9cae173c8c2dcb7e8b6e897c3021ba8a)
Footnote 37: [https://etherscan.io/address/0x03ed8890912679a0796c759e0224F32E1A3b2F0B7](https://etherscan.io/address/0x03ed8890912679a0796c759e0224F32E1A3b2F0B7)
Footnote 38: [https://etherscan.io/address/0xa027eb7d1f17a6f888a504c5fb32fe42ed0d7d8e](https://etherscan.io/address/0xa027eb7d1f17a6f888a504c5fb32fe42ed0d7d8e)
Footnote 39: [https://etherscan.io/address/0x348bb716bc4378560cd269fa039aba957e24d1b](https://etherscan.io/address/0x348bb716bc4378560cd269fa039aba957e24d1b)
Footnote 40: [https://oko.palleko.com/0xe20EC16A3B574Fd639ecec29c6886bf35A0CCc7/code/](https://oko.palleko.com/0xe20EC16A3B574Fd639ecec29c6886bf35A0CCc7/code/)
**Trap concealment.** Embedding malicious code in different locations is not the only trick for hiding a trap. Below we list the concealment tricks that we have encountered in our analysis of the dataset.
* **Blank error message:** Using a revert or a require method with a blank message that gives investors no information when they receive an error, making them clueless about the real cause of the (sell) transaction failure. Some tokens using this technique are Takeoff41 and ElonTheDoughboy42. Footnote 41: [https://etherscan.io/address/0x8196464fb1319b4dad2f4d569089554c78e17b3](https://etherscan.io/address/0x8196464fb1319b4dad2f4d569089554c78e17b3)
* **Single-character names:** Using hard-to-see letters such as i, t or l for a switch's name that could be overlooked when reading a contract. Furthermore, searching for a single letter in a contract will yield many unrelated results, making it hard to track the updates of the switch's value. A good example of a scam token using this trick is Teen Mutant Turtle43. Footnote 42: [https://etherscan.io/address/0x86e11ef3ed33577a1ce9948a9e594b882b6e2778](https://etherscan.io/address/0x86e11ef3ed33577a1ce9948a9e594b882b6e2778)
* **Misleading names:** Typically, scammers name their malicious variables or functions with misleading names to deceive the investors. For instance, they often name a blacklist as a bot list or an exchange pool address as zero address, which misleads the users. An example of such tokens is The Art44. Footnote 43: [https://etherscan.io/address/0x457A0677d206970A20212f95f35378Cfc68eaA0C](https://etherscan.io/address/0x457A0677d206970A20212f95f35378Cfc68eaA0C)
* **Dummy function:** Fraudsters create dummy functions to show users that the value of a particular variable is updated correctly. These functions are often named as initialization functions (e.g. init()). However, this function will never be called and the variable will always have the value the scammer wants, serving the purpose of the scam (e.g. AIRSHIB45). Footnote 45: [https://etherscan.io/address/0x457A0677d206970A20212f95f35378Cfc68eaA0C](https://etherscan.io/address/0x457A0677d206970A20212f95f35378Cfc68eaA0C)
* **Incomplete renouncement:** Contract ownership renouncement is one way to build trust from users, showing that the contract creator no longer owns the token. However, scammers can actually exploit this action to create _false_ trust. For example, before giving up ownership, the scammer can secretly add another of his addresses (unknown to the investors), which still allows him to manipulate the contract, e.g., modify the transaction fees, without the ownership. Therefore, although investors can see that the contract creator has relinquished the ownership, the token is still secretly under his control (see, e.g. DYBZ46 and POLY ELON47). Footnote 45: [https://etherscan.io/address/0x46e11ef3ed33577a1ce9948a9e594b882b6e2778](https://etherscan.io/address/0x46e11ef3ed33577a1ce9948a9e594b882b6e2778)
* **Invalid callback:** This maneuver works as follows. Every time investors call the transfer function to sell a token, a special function is used to call back the transfer function with _invalid_ inputs that cause the transaction to fail and rollback. An example of this sophisticated maneuver is ELONAJA48. Footnote 48: [https://etherscan.io/address/0x8196464fb1319b4dad2f4d569089554c78e17b3](https://etherscan.io/address/0x8196464fb1319b4dad2f4d569089554c78e17b3)
## 6 Trapdoor Tokens Detection
### Non-malicious Data
Unlike Trapdoor tokens, non-malicious tokens cannot be verified based on buying/selling tests. This is because a token that can be bought and sold now can be turned into a Trapdoor token in the future when a scammer activates a trap, or the token can simply be another type of scam (e.g., Rug-Pull, Ponzi). Thus, we gathered non-malicious data based on the _high-value token_ dataset, including 843 tokens collected in Section 3.2, and a non-malicious dataset of 631 tokens from previous studies [28, 29]. We obtained **621** non-malicious tokens by taking the intersection of these datasets. In fact, these tokens are highly popular and ranked on reputable websites (e.g., www.coinmarketcap.com, www.coingecko.com), and have been purchased by many investors. Moreover, those tokens have also been audited by external companies such as Certik, Quantstamp, and Hacken (see [28, Section 6.1.2]). Finally, we combined them with **1,859** Trapdoor tokens collected in Section 3.2 to create the experimental dataset of **2,480** tokens used for our detection experiments in Section 6.4.
### Code Features
Scam detection tools often use two main data sources: smart contract code (source code, bytecode, or opcodes) and transaction history. Using transaction features for detecting Trapdoor tokens has some advantages: the transaction history is permanently available on the chain and captures several aspects not visible in the contract code, such as trading patterns and the contract owner's actions. However, we need to be cautious in building the dataset when using transaction histories for training machine learning models. In fact, it is quite hard to select the non-malicious group because if we pick the most popular ones, their transaction histories will surely look very different from those of the scam contracts, which often have far less activity and exist for much shorter periods. This will cause overfitting. On the other hand, if we also pick less popular contracts, then the problem is that it is not easy to verify that they are definitely _not_ a scam.
To avoid the aforementioned dilemma, in this research we decided to build our detection tool based on the smart contract _opcodes_, an approach which has proved quite successful in detecting Ponzi scams on the blockchain [9, 11, 10, 25, 42]. The intuition for using opcode features is that they should look quite different between contracts that have different purposes. Moreover, opcode features are not affected by token popularity and can be extracted even when the source code (in Solidity) of a smart contract has been removed. To aggregate opcode features, we first collect all contracts' bytecode from the Etherscan APIs [36]. After that, we disassemble the bytecode into opcodes and calculate the occurrence frequency of each unique opcode. As a result, **283** different code features are collected from the **2,480** contracts' opcodes.
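The extraction step can be summarized in a few lines. The sketch below assumes the hex-encoded runtime bytecode has already been downloaded; the opcode table is deliberately truncated to the mnemonics discussed later, whereas the real extractor covers the full EVM instruction set.

```python
from collections import Counter

# Tiny subset of the EVM opcode table, enough to illustrate the idea; the
# full extractor covers all opcodes, including the numbered PUSH/DUP/SWAP/LOG
# variants that make up the 283 features in our dataset.
OPCODES = {0x02: "MUL", 0x10: "LT", 0x11: "GT", 0x17: "OR", 0x19: "NOT",
           0x35: "CALLDATALOAD", 0x36: "CALLDATASIZE", 0x54: "SLOAD",
           0xA1: "LOG1"}

def opcode_frequencies(bytecode_hex: str) -> Counter:
    """Disassemble raw bytecode and count how often each opcode occurs."""
    code = bytes.fromhex(bytecode_hex.removeprefix("0x"))
    counts, i = Counter(), 0
    while i < len(code):
        op = code[i]
        if 0x60 <= op <= 0x7F:                 # PUSH1..PUSH32 carry inline data
            counts[f"PUSH{op - 0x5F}"] += 1
            i += 1 + (op - 0x5F)               # skip the pushed bytes
        else:
            counts[OPCODES.get(op, f"UNKNOWN_{op:#04x}")] += 1
            i += 1
    return counts
```

Running this over every contract and stacking the resulting counters yields the opcode-frequency feature vectors used in the experiments below.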
### Classification Models
Our goal is to train and test a few well-known machine learning classification methods on our proposed dataset and compare their performance to find the most suitable classification model for this problem. The models are listed below:
* **Random Forest (RF) [33]** is a computationally efficient classification algorithm that works effectively in several domains [31] including fraud detection [2]. The key idea of this algorithm is to get a better result by aggregating the predictions from all trees in the forest, which is generated by using the Bootstrap resampling technique.
* **XGBoost (XGB) [8]** is a gradient-boosting-based algorithm that creates gradient-boosted decision trees in sequential form and then groups these trees to form a strong model. Unlike RF, which aggregates the results from all trees to get the final result, the result of XGB is the prediction of the last model, which addresses the data misclassified by previous models.
* \(K\)**-nearest neighbour (KNN) [18]** is a non-parametric classifier that uses proximity to estimate the likelihood that a data point will become a member of one group.
* **Support vector machine (SVM) [23]** is the algorithm that has been widely applied in binary classification or fraud detection problems. SVM performs classification by establishing a hyperplane that enlarges the boundary between two categories in a multi-dimensional feature space.
* **LightGBM (LGBM) [26]** is also a gradient-boosting-based algorithm that grows a tree vertically (leaf-wise). By adopting leaf-wise algorithms, LGBM often has better accuracy and shorter training time than other gradient-boosting-based algorithms.
### Detection Experiment
**Datasets.** In our experiment, instances in the dataset are shuffled randomly and split into a training set (80%) and a test set (20%). While a training set is used for training our detection models, a test set is used for validating the model's performance. Furthermore, we also adopt the 5-fold cross-validation to train and validate the selected model on the training set. Finally, a trained model was used to classify the tokens in the unseen test dataset. To make our result more reliable, we repeated the experiment process ten times, and the final result was obtained by taking the average. We note that the same hyperparameters were used for the same models to make a fair comparison. Our Python code and dataset are available online at [https://github.com/bsdp2023/trapdoor_tool](https://github.com/bsdp2023/trapdoor_tool).
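The core of this protocol fits in a few lines of Python. The sketch below assumes `X` is the matrix of 283 opcode-frequency features and `y` the binary labels (1 for Trapdoor); the classifier hyperparameters are library defaults rather than the exact settings of our runs.

```python
from sklearn.model_selection import train_test_split, cross_val_score, StratifiedKFold
from sklearn.metrics import f1_score
from lightgbm import LGBMClassifier

def run_once(X, y, seed):
    # 80/20 split, stratified so both classes appear in train and test.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=seed)
    clf = LGBMClassifier(random_state=seed)
    # 5-fold cross-validation on the training portion only.
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    cv_f1 = cross_val_score(clf, X_tr, y_tr, cv=cv, scoring="f1").mean()
    clf.fit(X_tr, y_tr)                    # final fit on the full training set
    return cv_f1, f1_score(y_te, clf.predict(X_te))

# Repeat ten times and average, as in the paper:
# results = [run_once(X, y, seed) for seed in range(10)]
# mean_cv_f1 = sum(r[0] for r in results) / len(results)
# mean_test_f1 = sum(r[1] for r in results) / len(results)
```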
**Evaluation metrics.** Every selected classifier's performance is evaluated using four standard metrics calculated from the numbers of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). More specifically, **Accuracy** is the fraction of correct predictions, calculated as (TP + TN)/(TP + FP + TN + FN); **Precision** is the fraction of actual scams among all the scams predicted by the method, calculated as TP/(TP + FP); **Recall** is the fraction of detected scams among all actual scams, calculated as TP/(TP + FN); and **F1-score** is the harmonic mean of Precision and Recall, calculated as \((2\cdot\text{Precision}\cdot\text{Recall})/(\text{Precision}+\text{Recall})\).
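Equivalently, all four scores can be computed directly from the confusion counts; a minimal helper (the function name is ours):

```python
def classification_scores(tp: int, fp: int, tn: int, fn: int):
    """Accuracy, Precision, Recall and F1 from the four confusion counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```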
**Experimental results.** A comparison of the scores of the different detection models is provided in Table 5. The best scores are shown in bold font. Overall, all selected classification algorithms achieve an F1-score above 0.90, indicating that models built on contract opcode features are able to detect Trapdoor tokens quite well. Among the implemented models, we noticed that classifiers built on decision trees were more effective in Trapdoor token detection than the others. In particular, Random Forest, XGBoost and LightGBM achieved values greater than or equal to 0.98 on all four metrics. LightGBM had the best performance not only in F1-score but also in Accuracy, Precision and Recall.
**Feature analysis.** To understand the effectiveness of opcode features in detecting Trapdoor tokens and to interpret the results, we retrieved the _importance_ of each opcode from the best-performing model in the previous experiment, LightGBM. The importance of a feature is defined as the number of times the feature is used to split the data across all decision trees. Hence, the most important feature is the opcode that is most effective at discriminating between non-malicious and Trapdoor tokens. To this end, we extracted the top ten most important features and counted their occurrences in each token in our dataset. The statistics are provided in Table 6.
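This ranking can be reproduced from the fitted model: LightGBM's default `importance_type='split'` corresponds to the split-count definition above. The sketch assumes `model` is the fitted `LGBMClassifier`, `X` is a pandas DataFrame of opcode frequencies and `y` the label vector (1 for Trapdoor).

```python
import pandas as pd

# 'split' importance = number of times a feature is used to split the data,
# which is the definition of feature importance used above.
importance = pd.Series(model.feature_importances_, index=X.columns)
top10 = importance.sort_values(ascending=False).head(10)

# Average per-contract frequency of those opcodes in each class (cf. Table 6).
summary = pd.DataFrame({
    "Trapdoor": X[y == 1][top10.index].mean().round(),
    "Normal":   X[y == 0][top10.index].mean().round(),
})
print(summary)
```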
| **Opcode** | **Description** | **Trapdoor** | **Normal** |
| --- | --- | --- | --- |
| CALLDATALOAD | Get input data of environment | 19 | 23 |
| SLOAD | Load from storage | 66 | 42 |
| CALLDATASIZE | Get size of input data in environment | 13 | 10 |
| GT | Greater-than comparison | 20 | 11 |
| PUSH16 | Place 16 byte item on stack | 0 | 1 |
| MUL | Multiplication operation | 28 | 12 |
| OR | Bitwise OR operation | 10 | 4 |
| LOG1 | Append log record with one topic | 0 | 2 |
| LT | Less-than comparison | 24 | 16 |
| NOT | Bitwise NOT operation | 14 | 9 |

Table 6: Average frequencies of top ten important opcode features in Trapdoor and non-malicious (normal) contracts.
| **Classifier** | **Accuracy** | **Precision** | **Recall** | **F1-score** |
| --- | --- | --- | --- | --- |
| KNN | 0.891 | 0.886 | 0.980 | 0.931 |
| SVM | 0.855 | 0.839 | 0.998 | 0.911 |
| Random Forest | 0.980 | 0.993 | 0.981 | 0.987 |
| XGBoost | 0.980 | 0.986 | 0.987 | 0.987 |
| LightGBM | **0.983** | **0.989** | **0.988** | **0.989** |

Table 5: Comparison of different detection models for Trapdoor tokens.
From Table 6, we can observe that the opcodes GT, LT, OR, and NOT appear in Trapdoor contracts more often than in normal contracts. This is not surprising, since Trapdoor tokens often use comparisons as the condition to intentionally throw an exception, such as comparing a transfer amount with a transfer limit or checking whether a receiver's address is an exchange pool's address. Besides, the count of the MUL opcode, which is often used to calculate and apply the transfer fee, is also higher in Trapdoor tokens than in normal contracts. Finally, the SLOAD opcode appears in Trapdoor tokens more frequently than in non-malicious tokens, which could be because the blacklist and whitelist in Trapdoor tokens are always loaded to check transfer permission.
## 7 Conclusion
We provide in this work a comprehensive analysis of Trapdoor tokens, a new financial fraud on Uniswap, one of the most popular decentralized exchanges. The study reveals a range of simple to advanced techniques successfully employed by the scammers and the devastating economic impact they have had on the victims. The verified dataset of Trapdoor scam tokens and the machine learning detection models, together with the detailed analysis provided in this paper, will facilitate future studies of crypto scams, which are currently running rampant across different investment platforms.
|
2309.05378 | Steps Towards Satisficing Distributed Dynamic Team Trust | Defining and measuring trust in dynamic, multiagent teams is important in a
range of contexts, particularly in defense and security domains. Team members
should be trusted to work towards agreed goals and in accordance with shared
values. In this paper, our concern is with the definition of goals and values
such that it is possible to define 'trust' in a way that is interpretable, and
hence usable, by both humans and robots. We argue that the outcome of team
activity can be considered in terms of 'goal', 'individual/team values', and
'legal principles'. We question whether alignment is possible at the level of
'individual/team values', or only at the 'goal' and 'legal principles' levels.
We argue for a set of metrics to define trust in human-robot teams that are
interpretable by human or robot team members, and consider an experiment that
could demonstrate the notion of 'satisficing trust' over the course of a
simulated mission. | Edmund R. Hunt, Chris Baber, Mehdi Sobhani, Sanja Milivojevic, Sagir Yusuf, Mirco Musolesi, Patrick Waterson, Sally Maynard | 2023-09-11T10:58:51Z | http://arxiv.org/abs/2309.05378v2 | # Steps Towards Satisficing Distributed Dynamic Team Trust
###### Abstract
Defining and measuring trust in dynamic, multiagent teams is important in a range of contexts, particularly in defense and security domains. Team members should be trusted to work towards agreed goals and in accordance with shared values. In this paper, our concern is with the definition of goals and values such that it is possible to define 'trust' in a way that is interpretable, and hence usable, by both humans and robots. We argue that the outcome of team activity can be considered in terms of 'goal', 'individual/team values', and 'legal principles'. We question whether alignment is possible at the level of 'individual/team values', or only at the 'goal' and 'legal principles' levels. We argue for a set of metrics to define trust in human-robot teams that are interpretable by human or robot team members, and consider an experiment that could demonstrate the notion of 'satisficing trust' over the course of a simulated mission.
1School of Engineering Mathematics and Technology, University of Bristol
2School of Computer Science, University of Birmingham
3Bristol Digital Futures Institute, University of Bristol
4Department of Computer Science, University College London
5Human Factors and Complex Systems Group, Loughborough University
[email protected], [email protected], [email protected], [email protected],
[email protected], [email protected], [email protected], [email protected]
## Introduction
Defining the interpersonal and technical factors that relate to trust in human-AI/robot teaming is an open problem in the research community [2]. In particular, a key problem is the definition of 'trust' in these scenarios. We expect trust to vary according to the developing situation faced by each teammate. Thus, obtaining a trust level sufficient for a given situation will always involve satisficing, i.e. obtaining a minimally acceptable level to progress a team's mission. We use the metaphor of a _Ladder of Trust_, whereby teammates may 'climb' down or up the ladder to satisfice situation-dependent requirements [1].
To study trust in human-robot teams, it is necessary to define the concept of trust in a manner which is meaningful for both humans and robots. Previously, trust has been seen as a multi-dimensional concept that focuses on human perceptions [1, 13], e.g., through self-report questionnaires for the human team members [1, 12], but this assumes cognitive and cultural capabilities beyond those of robots. While humanlike robots can be perceived to be capable of making 'moral' decisions [14], in general, people do not accept the idea of machines making moral decisions [1]. We believe that there has been less attention given to trust held by robots of their human teammates. In our project, we seek metrics appropriate for both humans and robots to quantify, and hence regulate, their trust in their teammates.
### Principles, Values and Goals
We see a hierarchy of _principles_, _values_ and _goals_ (Fig. 1). For human-robot teams, trust requires team members to behave consistently with the team's goals, team values and legal principles to a standard of performance agreed by the team [1]. Goals are pre-defined ends, aims or objectives for humans or robots in a team. A team goal, for example, could be to defuse a bomb. Values, on the other hand, have a sense of purpose. They are _'those ends deemed worth pursuing'_[17, p.559]. Values define desirable outputs: what motivates us, what is worth striving for. As such, they can be individual, team, and social. Social values, such as liberty, freedom, justice, democracy, and respect for fundamental rights, are accepted by society and are aspirational. Legal principles are 'norms laying down essential elements of a legal order' (van Bogdandy, cited in Williams 2009, p.559). Following Habermas, we consider legal principles to be general propositions from which norms arise, _'certain standards that might be based in law or practice, which contribute to forming a framework for decision-making and action'_[17, p.559]. Acting according to legal principles, in our example, could be to defuse the bomb without a loss of human life. Legal principles have a sense of obligation attached to them; they set the bounds or constraints within which activities are permitted. Values could fill the gap where legal principles fail to provide guidance. For example, acting according to social values would be to defuse a bomb even if that means a loss of human life, where such action saves the lives of many others. Acting according to _team_ values would be defusing a bomb with a loss of human life, but with preservation of the capability of the human-robot team to continue its mission. Values are moral in character. They are what is the best for the individual, the team, or society. Principles command; values recommend; goals direct. Principles are binary (valid or invalid), values are not. As such, values can be considered in terms of a trade-off in which individual, team and societal values influence the choice of goal and definition of acceptable outcome in a specific situation.
### Values in Human-Robot Teams
To speak of 'values' in a human-robot team might imply that these can be defined with sufficient clarity that they can be quantified. Indeed, the Artificial Intelligence (AI) literature has discussed 'value alignment' [1], through which the 'values' that an AI system pursues match those of the human stakeholders affected by the system [11]. An early example of explicit ethics in AI systems can be seen in the 'ethical governor' [13]. This was proposed for a weapons targeting system in which selection of target could be over-ruled if there were rules which the decision violated. In this instance, the 'rules' could be defined in a similar manner to our notion of legal principles. We might be able to codify legal principles, as legal norms setting the essential elements of the legal order. But 'values' are, as Habermas (cited in Williams 2009) suggests, _intersubjectively shared preferences_. The concepts of 'value alignment' and 'explicit ethics' in AI seem to rest on an assumption that it is possible to codify 'social values' with sufficient clarity that an action might be evaluated. We are skeptical that this might be possible. We believe that alignment of goals and principles (between humans and robots in a system) is possible, but that 'social values' must remain the province of the human actors who are either directly participating in the system or acting as overseers, regulators, or other stakeholders. Our emphasis on context could make such an assumption difficult to articulate (because the 'values' might vary with context). We note that approaches that use, for example, an 'ethical consequence engine' [12], might deal with context through the simulation of the outcome of an action. However, this still implies the codification of 'values' within a normative moral framework. Nonetheless, while our position is that 'values' remain in the human domain, we accept that it might be possible for engineered systems to seek '_goal_ alignment' (i.e. between humans and robots).
The interpretation of a goal (in terms of the desirability of its outcome) will depend on the context in which it is performed. From this perspective, we might define the problem as one of inferring goals from the activity of other teammates as a problem of Inverse Reinforcement Learning [14]. In a recent study, the tension between selfish and social choices (as a basic version of moral problems in social dilemma games) was explored using Reinforcement Learning [15]. In this study, Reinforcement Learning agents are provided with intrinsic rewards that reflect different views of ethics (i.e., utilitarian, deontological, consequentialist) and play a variety of iterated social dilemma games (i.e., Prisoner's Dilemma, Volunteer's Dilemma, Stag Hunt). Within a game, an agent seeks to respond to the state of the game by performing an action that will maximize both the game reward and an intrinsic _moral_ reward. Pitching agents with different moral stances against each other revealed systematic differences in strategy, particularly in terms of cooperative or exploitative activity. Alternative strategies or stances can be considered in terms of counterfactual analysis, which is a fundamental element of causality analysis [16, 17], and can be used to understand which changes to a particular data model would change the model decision. Counterfactual analysis in a multi-agent reinforcement learning environment [18] could support performance in mission-critical environments through the ability to review alternative courses of action. We regard goals as explicit statements of intent that team members can choose to pursue, and also recognize their pursual by others. Thus, if an actor (human or robot) sees a teammate performing a sequence of tasks, it might be reasonable for it to assume that this sequence is directed toward achieving a specific goal, and that an alternative goal might be more desirable (to achieve particular team or social values) in that context. This area has been of great interest for the community in the recent years [1, 15].
## Trust Specification
To define 'trust', we follow Lewis and Marsh (2022) in claiming that a minimal specification of trust involves:
1. _Capability_, i.e., is the teammate most appropriate for a given task in that situation?
2. _Predictability_, i.e., is the teammate acting in a way that fits the team goals and set principles, and is appropriate to its situation?
3. _Integrity_, i.e., is the teammate acting to support the team and the set principles?
In order to define metrics for the three-element view of trust outlined above, we consider what could be sensed or perceived when humans and robots interact in a collaborative task. The literature on trust generally refers to dyadic relationships between a 'trustor' and 'trustee' (e.g. Hurley 2006; Kim, Dirks, and Cooper 2009) and this can be represented as a network of directed edges, where each edge represents one of the three elements (Figure 2). We do not suppose that the three elements can be measured with equal certainty (some aspects might need to be inferred rather than perceived), but each agent will track the relative increase or decrease of these elements over time to adjust their 'trust' in a teammate.

Figure 1: Alignment of goals and principles between humans and robots is possible; expressing ‘values’ in contextualized goal priorities remains the province of human actors.
Recent work has introduced the terms system-wide trust and component-level trust (Walliser, de Visser, and Shaw 2023). These map well to our view of distributed, dynamic team trust, where we would clarify component-level trust as comprising the three elements of estimated capability, predictability and integrity, as contextualized by the agents' situation. As Huang et al. (2021) note, it is important to understand the context in which human-AI-robot teaming occurs "including the tasks, environment, the stakeholders, and artificial agents involved...[as well as]...the kinds of interactions that are available between the entities involved in a situation context, where transitive properties of trust take place." (Huang et al., p.307). From this, Huang et al. (2021) propose a 4-step process that involves identification of context and stakeholders and defining and measuring trust relationships in the trust network. In our approach, we define a mission using Cognitive Work Analysis (CWA) (Rasmussen, Pejtersen, and Goodstein (1994); Vicente (1999); Jenkins, Stanton, and Walker (2009)).
From the Work Domain Analysis phase of CWA, a mission can be decomposed into an Abstraction Hierarchy that (reading from top-to-bottom) describes the _purpose_ of the system and (reading from bottom-to-top) describes the _activity_ of the system. The claim is that any system is intended to achieve an outcome (or set of outcomes) that can be evaluated in terms of desirable consequences. Such consequences reflect the values of the stakeholders working with and affected/impacted by the system and can serve as constraints on the goals that the system is seeking to achieve. Goals could be defined by more than one 'value', and the values might conflict. Where there is conflict, this either requires negotiation between teammates or intervention by a higher authority. Legal principles could, to some extent, define the constraints within which a mission is performed, but these will need to be filtered through values appropriate to the situation.
With our metaphor of a 'ladder of trust' (Baber et al. 2023), it is these sub-component-level trust metrics (e.g. \(C_{12}\), \(P_{12}\), \(I_{12}\), indicating Agent 1's estimation of capability, predictability, integrity of Agent 2 in Fig. 2) that will change as more information is gathered from their interactions. To satisfice trust, then, means for the necessary combination of component (agent) level trust estimates, each comprising these three elements, to reach a certain threshold before a mission can proceed effectively. In practical terms, this might mean Agent 1 improving its estimate of Agent 2's integrity (\(I_{12}\), an element of \(T_{12}\)) before it believes that Agent 2 will carry out a certain task within the mission in an acceptable manner. As a working hypothesis, we assume that system-wide trust is limited by its weakest component. In cases where an action only requires a subset of team members to be carried out, deficiencies in system-wide trust need not hold back progress, but instead the demand is to satisfice the relevant combination of component-level trust between the situated team members.
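As a concrete (and deliberately simplified) reading of this, the sketch below treats each directed edge of the trust network as a triple of running estimates and gates a task on the relevant edges clearing a threshold. The minimum-based combination rule and the threshold value are illustrative assumptions, not part of the framework itself.

```python
from dataclasses import dataclass

@dataclass
class ComponentTrust:
    capability: float      # C_ij, running estimate in [0, 1]
    predictability: float  # P_ij
    integrity: float       # I_ij

    def level(self) -> float:
        # Assumption: the trustor's overall estimate is limited by the
        # weakest of the three elements (a conservative combination rule).
        return min(self.capability, self.predictability, self.integrity)

def can_proceed(edges, threshold=0.6):
    """Satisficing check over the subset of trust edges a task relies on.

    System-wide trust is limited by its weakest component, so the task may
    proceed only if every relevant component-level estimate clears the bar.
    """
    return all(edge.level() >= threshold for edge in edges)
```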
## Motivating Use-Case
We have designed an environment in which the concept developed in this paper can be experimented upon. Human participants work with wheeled rover robots (Figure 3) to collaborate on search tasks in an environment. In the environment, Augmented Reality (AR) Tags are positioned at approximately 1 m height, for the human to scan, and also at approximately 20 cm height, for the robots to scan (Figure 3). The tags provide information about the environment (whether an area is safe or hazardous to the human) and the status of objects in the environment (e.g., whether these are operational, risky, or require repair). For instance, the tags could indicate whether a location has low or high levels of radiation or could indicate whether there is a suspect package that needs to be investigated. The team might comprise two robots and one human in the field and one human working as a coordinator (remotely). Together, the robots and humans need to coordinate a search mission in an experimental environment. As the mission is performed, data can be collected from the human (using a computer tablet interface) or from sensors on the robot. The data from human and robot actors follow the same structure, e.g., {agent ID, time, location, object, action...}. All the data and communication between team members (robot(s) and human(s)) are relayed through a Django server and recorded in a database. For instance, the mobile application ('app') used as an interface for communicating with the robot is connected to the server so that any commands from a human to a robot will be recorded along with other relevant information such as time. The robot messages to human teammates also go through the server before appearing on the app. The locations of all team members are also published on the server to record the tracking data before being passed to the interface.

Figure 3: A ‘Leo Rover’ robot equipped with 2D LiDAR and built-in front facing camera, which can scan ARTags.

Figure 2: Agent 1’s trust in Agent 2, \(T_{12}\), depends on its running estimates of Agent 2’s capability, predictability and integrity; these are contextualized by the local situation.
## Defining Metrics for Trust
In order to interpret a sequence of tasks and their relation to a goal, it is necessary to define metrics through which observation of tasks and inference of goals can be performed by team members. Table 1 indicates the variables that we seek to capture and whether these can be directly measured or inferred. In Table 1, entries that are in plain text can be directly obtained from the agents, their activity, or the database recorded during the experiment; entries in italic text can be inferred from the mission plan, i.e., there will be goals that are achieved by performing combinations of tasks. Finally, the 'why' will be derived from the competing values of individuals, the team and society.
Having outlined a set of metrics (Table 1), we can relate these to the three-element model as follows:
* Capability (of an Agent) is a function of WHAT (task, object) and WHO (goals). We assume that a given agent will perceive the affordances of the environment in terms of their own and others' ability to perform a task on an object in pursuit of a goal.
* Predictability (of an Action) is a function of Capability and Context (i.e., WHERE and WHEN). We assume that an agent will perceive a teammate in a context and infer the likelihood of success of an agent (with a perceived capability) achieving an outcome.
* Integrity (of an Action) is a function of Predictability and WHY. For _robot actors_, an action is interpreted by a robot and any onlooking robot teammates in terms of the likelihood of success relative to the individual/team _goal priorities_ and compliance with codified principles. For _human actors_, an action is also interpreted by the human and any onlooking human teammates in relation to fulfilling relevant goal priorities in compliance with legal principles; _and also_ performance criteria, constraints, and determinants of acceptable outcomes, _as influenced by individual, team and social values_.
Agents will begin a mission with initial teammate-specific estimates for each of these elements, which could be based on factors such as prior experiences with the teammates over the longer-term (e.g. de Visser et al., 2020), or proxies for trustworthiness (e.g. Lewis and Marsh, 2022).
Notice that we are claiming that integrity dynamics are linked to each distinct, situated action rather than an agent's identity. This is a very different perspective to conventional definitions of integrity; in our view, this allows us to attribute changes in integrity without requiring a theory of mind (as in e.g. de Visser et al.; Mou et al., 2020). Our argument is two-fold. First, integrity arises from the action performed in a context (which we capture in the predictability function) and the individual and team values, as well as set principles by which the action can be judged. For example, assume that some actions can benefit the agent ('selfish') and others benefit the team. In a context in which no other teammate will be affected, a 'selfish' action might be judged neutrally, but in a context where the action might be chosen in preference to one which could aid a teammate, the judgment might be negative. Or, in a context where some actions benefit the agent and the team, but clash with set principles ('do not break the law by X'), the judgment will be negative. Second, each agent will have a _reputation_ which reflects the history of these instances of integrity. As other teammates learn the history of the actions performed by an agent, so the reputation of that agent will be formed. Given an understanding of a teammate's reputation, one can define an expectation of the action that they might perform. This can be modeled as a reinforcement learning problem (Anastassacos et al., 2020; Anastassacos et al., 2021).
### Integrity, Reputation and Values
In our definition of trust, each member of a team will form estimates of their teammates' capability, predictability and integrity as task performance is observed. Observation might be of the actual performance of the task or an outcome of this performance (either the result of the task or a report from another agent). As information is accumulated, an agent will infer the _reputation_ of its teammates. Reputation is a quantifiable factor, and (as we noted above) is based on the history of performing tasks in pursuit of goals relative to the individual and team values. However, rather than the expectation of prosocial behavior being fixed solely by reputation, we assume that this will be moderated by context. For example, a teammate might have a reputation for performing 'selfish' actions, i.e., seeking to gain rewards for themselves at the expense of their teammate. However, in a given situation, there might not be an option to assist a teammate or the outcome of the action might not be detrimental to the team. In this case, the severity of the outcome of a selfish action might be minimal. When there is sufficient information about a task being performed, the integrity of this task can be defined in terms of selfish or team values. When there is insufficient information, knowing the relationship between task and goal, and the context in which the task is performed, can enable the prediction of the most likely goal being pursued. We describe this by updating a Bayesian Belief Network that is held by each agent (Figure 4). Note, here we tend to think of reputation as being a short-term, mission-bounded metric, but we also recognize that this metric could interact with other factors, e.g. long-term trust weightings such as relationship equity (de Visser et al., 2020), or short-term weightings such as robot appearance (which may have a proxy trust effect; Lewis and Marsh, 2022); such weightings could be added into the BBN.
## Trust and Distributed Situation Awareness
We assume agents have a partial view of the context. This view consists of their perception of the environment, their inference of which goal is appropriate to perform in the context, their belief in the reputation of their teammates, and the goal that they expect their teammates to be pursuing. Situation awareness in a team is likely to be distributed (Stanton et al., 2006) and we have previously demonstrated that Distributed Situation Awareness can be formally described
using a Bayesian Belief Network (BBN) model (Yusuf and Baber 2022).
In a BBN, the system can be modeled using a graph \(G(N,E)\) where \(N\) is a set of nodes connected by causal directed edges \(E\). Each node represents an element of component-level trust with a defined number of states (i.e., selected situations as illustrated in Figure 4). States are defined as probabilities, i.e., between 0% and 100%, to reflect the assumption that these are estimates formed by an agent. These probabilities can be updated based on mission information (e.g., Equation 1), or learned over time (Yusuf and Baber 2022). As such, the trust elements (e.g., goals, tasks, reputation etc.) can be modeled using BBN nodes with assigned probabilities and causal relationships based on the operating mission context. For example, assume a scenario where an agent has two goals (\(G_{A}\) and \(G_{B}\)), e.g., \(G_{A}\): to mark the locations of hazards in an environment by scanning AR Tags, and \(G_{B}\): to construct the map of the environment using Simultaneous Localization and Mapping algorithms (SLAM).
Each of these goals can be achieved by completing a number of tasks \(\alpha_{i},\forall i\in[1,N]\); and these tasks can be spread across goals (e.g., the SLAM task to search for a hazard is the same as the one for a mapping task), i.e., \(G_{A}\to\alpha_{i}\times\alpha_{j}\), such that \(\alpha_{i}\cap\alpha_{j}\neq\{\},\forall\alpha_{i},\alpha_{j}\in G_{A}\cap G_{B},i,j\in[1,N]\), or mapped singly to a goal \(G_{i}\to\alpha_{i},\forall i\in[1,N]\), as illustrated in Figure 4. From Figure 4, an agent is capable or incapable of achieving a goal, and the goal achievement depends on the assigned task(s) and context (defined by time, location, and opportunity).
Thus, each task achieved will increase the goal's probability of success, i.e., a task with a 90% 'achieved' state has a higher contribution to the goal achievement than the one with 50% (though this depends on the criticality of the task towards the goal achievement Yusuf and Baber (2022)). Equation 1 is an example of a protocol-based mode of updating a probability of each state of the BBN after every mission event (e.g., sensor sampling by the agent):
\[P(R_{i})=P(R_{i-1})+f(w_{c}) \tag{1}\]
where \(P(R_{i})\) is the updated prior of the state (e.g., after the event occurrence), \(P(R_{i-1})\) is the previous prior of the state, \(f(w_{c})\) is the probability decrement/increment weight function (to be assigned values based on subject matter expert (SME) judgments of the context \(c\), e.g., information from a reliable sensor weights a complement of 100%), \(c\) is the mission context (e.g., as defined by time, location, and opportunity in Figure 4), and \(i\) is the event number, such that \(i\in[1,N]\). For example, if a goal has two tasks with a weight ratio of 9:1 (e.g., as assigned by \(f(w_{c})\) of Equation 1) towards goal achievement, accomplishing task A contributes \(90\%\), i.e., \(0\%\) (the current probability value of the 'achieved' goal state, assuming no prior progress on task B) \(+90\%=90\%\) for the designated goal achieved state. Note that \(f(w_{c})\) reduces the prior \(P(R_{i})\) for the non-occurring states and sums the state probabilities to \(100\%\).
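A direct transcription of Equation 1 is shown below, assuming the context weight \(f(w_{c})\) has already been elicited from SME judgments; the probabilities of the remaining states are rescaled proportionally so that the node still sums to one, which is one simple way to realize the reduction of the non-occurring states described above.

```python
def update_state(priors, state, weight):
    """Protocol-based update of one BBN node after a mission event.

    priors : dict mapping each state name to its current probability
    state  : the state supported by the new evidence
    weight : f(w_c), the SME-assigned increment for this context
    """
    posteriors = dict(priors)
    posteriors[state] = min(1.0, priors[state] + weight)   # Equation (1)
    # Reduce the non-occurring states so the node still sums to 1.
    others = [s for s in posteriors if s != state]
    remaining = 1.0 - posteriors[state]
    total_others = sum(priors[s] for s in others)
    for s in others:
        share = priors[s] / total_others if total_others > 0 else 1 / len(others)
        posteriors[s] = remaining * share
    return posteriors

# e.g. update_state({"achieved": 0.0, "not_achieved": 1.0}, "achieved", 0.9)
# -> {'achieved': 0.9, 'not_achieved': 0.1}
```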
The probability of a parent node can be defined by the child(ren) node(s) states using the conditional probability table (CPT), i.e., a table mapping parent node states probabilities with the joint child(ren) states. One of the advantages of modeling the system concepts using a BBN is the ability to predict states using conditional probabilities (Equation 2):
\[P(A)=P(A\cap B)/P(A|B) \tag{2}\]
where \(P(A)\) is the expected probability of the querying state \(A\), \(P(A|B)\) is the conditional probability of state A given B, and \(P(A\cap B)\) is the joint probability of \(A\) and \(B\). As such, based on the previous reputation of an agent, its capability on a particular task can be predicted. Expectation maximization algorithms can be used to improve the prediction accuracy, i.e., by checking the agent's performance history (Yusuf and Baber 2022).
## Illustrating a Ladder of Trust in the Experimental Environment
Having defined trust and proposed how it might be measured and captured in a BBN, we now explain how this framework can be tested in our experimental environment. From the motivating use-case, scanning the AR tags to define the safety of an area could be defined as a 'team' goal, in that it will benefit other team members. This goal could be further emphasized by giving the robot a competing 'selfish' goal: each scan could involve a cost to the robot (e.g., each scan requires energy and the robot will need to leave the environment to recharge when energy levels are below a threshold) and the robot could be rewarded for minimizing cost (e.g., by staying as long as possible in the environment). The human could be rewarded for scanning all AR Tags within a time limit, but there could be a cost to entering unsafe areas. Scanning an AR Tag in an unsafe area could also represent a cost to robots, who would need to enter that area to help the human (e.g., by guiding the human to safety).

| Category | Variable | Derivation |
| --- | --- | --- |
| WHEN | Time at which an action is performed | Clock reading for logged data |
| | Prediction of future action to be performed | _Inferred from mission plan_ |
| | History of previous actions | Store of logged data |
| WHERE | Fixed location | Location of, e.g., AR Tag |
| | Path | Waypoints recorded from, e.g., ultrasonic tracking |
| WHAT | Object | Identification of object used |
| | Task | Action performed on an object |
| WHO | Agent | Agent ID |
| | Goal | _Inferred from mission plan_ |
| | Reputation | Store of integrity from prior actions |
| WHY | Robots: Compatibility of own and other's goal priorities and compliance with codified principles | _Inferred individual/team goal priorities_ |
| | Humans: As above, plus judged performance criteria, constraints, and determinants of acceptable outcomes | _Inferred from individual, team and social values_ |

Table 1: Initial set of metrics to be measured or inferred. Similarly applicable to humans and robots, except for ‘Why’.
Defining the potential for conflict between team members allows us to manipulate selfish and team goals, and hence to manipulate the 'values' of the team performance. Each team member will be asked, at set intervals, to rate what their teammates are doing (in terms of whether they expect the actions to be directed at selfish or team goals). It is possible that the task is at the limits of the agent's ability or demands significant resources. In this case, the expectation of a trustor is that the teammate (trustee) is exerting themself to achieve the outcome. For a 'selfish' goal this could be interpreted as the agent recklessly pursuing a reward at personal cost (leading to an increase in distrust); for a 'team' goal this could be interpreted as the agent risking themself for their teammates (leading to an increase in trust across trustors). As well as expectations about capability, the task will be interpreted in terms of predictability of outcome, i.e., is the outcome likely to be completely successful? As with the previous example, one might expect the reputation assigned to a teammate to be affected by the success of the outcome. Successful outcome(s) will increase perceptions of that teammate's capability [11]; though if not calibrated this could lead to overtrust [10]. The final element, integrity, relates to the interpretation of the goal against the goal priorities (robots and humans) and values (humans only) that the individual or team applies.
## Discussion
Distributed, Dynamic Team Trust [14] requires metrics that reflect the activity and interactions of members of a team. In this paper we share our definition of such metrics and illustrate how these can be applied to the conduct of a mission.
We contribute to the debate on trust in human-robot teams in the following ways. First, we propose that trust arises from a hierarchy of principles, values and goals. Second, we argue that 'integrity' (as a component of trust) should be judged in terms of the observed choice of task in a given context; this can lead to an inference regarding the choice of goal in that context. Where the goal might be considered selfish and where this might have negative consequences for teammates, this will lead to a lower perception of the integrity of the action. Over a history of observations of such choices made by a specific actor, the 'reputation' of that actor will be formed. It is likely that such a reputation will (a) reflect the observations of specific agents and, thus, might differ between agents, but (b) could be shared between agents. In contexts where there is no history to draw upon and hence no evidence on which to define a 'reputation', this will either have to be assumed by the observing agent (e.g. 'proxy trust'), or communicated by the other agent. For example, a robot might be programmed to assume that its human teammates will behave in a prosocial manner. This would lead it to ascribe a positive reputation to its human teammate - until it collected sufficient evidence (from observation or from other reports) to the contrary. Third, we argue that 'trust' is dynamic and context-dependent. In this, we are in agreement with Huang et al. (2021). Our approach has been to define the metrics for which each member of a team is able to acquire information. We aim to define these metrics in a way that allows humans or robots to perceive sufficient data to update similar Bayesian Belief Networks (BBN). This is not to assume that human reasoning is reducible to a BBN but allows the human to infer the robots' choices (from a BBN that represents the robots' behavior) and the robot to infer human choices. Fourth, the concept of a ladder of trust (on which perceptions of teammates can move up and down) provides a metaphor for the ways in which trust changes during a mission.
We intend to develop these concepts in a concrete way by carrying out a form of the experiment described in this paper, to observe whether our operationalization of trust metrics obtains plausible dynamics and supports our notion of 'satisficing trust' on a 'ladder of trust' [1]. We hope to obtain insights that can guide both human and robot actors in dynamic trust building.
Figure 4: A teammate’s Bayesian Belief Network (BBN) based on their observation of Agent 1’s choice of action in a context.
## Acknowledgements
The research reported in this paper is supported by grant EP/X028569/1 'Satisficing Trust in Human Robot Teams' from the UK Engineering and Physical Sciences Research Council. This project runs from 2023 to 2026 and involves the Universities of Birmingham, Bristol, Loughborough and UCL.
|
2309.04066 | Class Number Formulas for Certain Biquadratic Fields | We consider the class numbers of imaginary quadratic extensions
$F(\sqrt{-p})$, for certain primes $p$, of totally real quadratic fields $F$
which have class number one. Using seminal work of Shintani, we obtain two
elementary class number formulas for many such fields. The first expresses the
class number as an alternating sum of terms that we generate from the
coefficients of the power series expansions of two simple rational functions
that depend on the arithmetic of $F$ and $p$. The second makes use of
expansions of $1/p$, where $p$ is a prime such that $p \equiv 3 \pmod{4}$ and
$p$ remains inert in $F$. More precisely, for a generator $\varepsilon_F$ of
the totally positive unit group of $\mathcal{O}_F$, the base-$\varepsilon_{F}$
expansion of $1/p$ has period length $\ell_{F,p}$, and our second class number
formula expresses the class number as a finite sum over disjoint cosets of size
$\ell_{F,p}$. | Elizabeth Athaide, Emma Cardwell, Christina Thompson | 2023-09-08T01:47:33Z | http://arxiv.org/abs/2309.04066v1 | # Class number formulas for certain biquadratic fields
###### Abstract.
We consider the class numbers of imaginary quadratic extensions \(F(\sqrt{-p})\), for certain primes \(p\), of totally real quadratic fields \(F\) which have class number one. Using seminal work of Shintani, we obtain two elementary class number formulas for many such fields. The first expresses the class number as an alternating sum of terms that we generate from the coefficients of the power series expansions of two simple rational functions that depend on the arithmetic of \(F\) and \(p\). The second makes use of expansions of \(1/p\), where \(p\) is a prime such that \(p\equiv 3\pmod{4}\) and \(p\) remains inert in \(F\). More precisely, for a generator \(\varepsilon_{F}\) of the totally positive unit group of \(\mathcal{O}_{F}\), the base-\(\varepsilon_{F}\) expansion of \(1/p\) has period length \(\ell_{F,p}\), and our second class number formula expresses the class number as a finite sum over disjoint cosets of size \(\ell_{F,p}\).
## 1. Introduction
The theory of class numbers has a rich history, beginning with Gauss's effort to understand how primes could be represented by positive definite binary quadratic forms [2]. Gauss recognized that \(\mathrm{SL}_{2}(\mathbb{Z})\) acts naturally on positive definite integral binary quadratic forms \(f(X,Y)=aX^{2}+bXY+cY^{2}\) with fixed discriminant \(-d=b^{2}-4ac\). He proved that the set of equivalence classes under this action is a finite abelian group; the order of this group is known as the _class number_\(h(-d)\). The class group for quadratic forms of discriminant \(d\) is also isomorphic to the ideal class group for the ring of integers of the quadratic field \(\mathbb{Q}(\sqrt{d})\). Therefore, it is natural to ask whether results about Gauss's class numbers are glimpses of results for the class numbers \(h_{K}\) of more general number fields \(K\). In this spirit, we recall two surprising results for Gauss's class numbers.
In the 1970s, Hirzebruch [6] and Zagier [11] found an elegant formula for \(h(-p)\), when \(7\leq p\equiv 3\pmod{4}\) is prime and \(h(4p)=1\). If the simple continued fraction for \(\sqrt{p}\) is written as
\[\sqrt{p}=a_{0}+\frac{1}{a_{1}+\frac{1}{a_{2}+\frac{1}{\ddots}}}=[a_{0}, \overline{a_{1},\ldots,a_{2t}}],\]
where the repeating period begins with \(a_{1}\) and has minimal even length \(2t\), they proved that
\[h(-p)=\frac{1}{3}\sum_{k=1}^{2t}(-1)^{k}a_{k}. \tag{1.1}\]
More recently in the 1990s, Girstmair [4] found another elegant formula as an alternating sum of numbers that are even simpler to describe. Namely, if \(g\) is a primitive root modulo \(p\), he examines the base \(g\) expansion of \(1/p\), which is eventually periodic with period length \(p-1\) (see [5], Section 9.6). If this period is \(\overline{x_{1}x_{2}...x_{p-1}}\), where \(0\leq x_{i}\leq g-1\), then he proved that
\[h(-p)=\frac{1}{g+1}\sum_{k=1}^{p-1}(-1)^{k}x_{k}. \tag{1.2}\]
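For a quick illustration of (1.2), the digits \(x_{k}\) can be produced by long division. The following minimal Python sketch (our own illustration, not part of the original results; it uses the standard facts that \(g=3\) is a primitive root modulo \(7\) and that \(h(-7)=1\)) evaluates the right-hand side of (1.2):

```python
# Evaluate the right-hand side of (1.2) for p = 7 with the primitive root g = 3.
p, g = 7, 3

digits, r = [], 1
for _ in range(p - 1):            # one full period of the base-g expansion of 1/p
    r *= g
    digits.append(r // p)
    r %= p

h = sum((-1) ** k * x for k, x in enumerate(digits, start=1)) // (g + 1)
print(digits, h)                  # [0, 1, 0, 2, 1, 2] and h(-7) = 1
```

Replacing \(p\) and \(g\) by \(23\) and \(5\) recovers \(h(-23)=3\) in the same way.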
A priori, these results are unexpected relationships between combinatorial sums and class numbers of binary quadratic forms with discriminant \(-p\). Since class numbers of binary quadratic forms are also examples of class numbers of number fields, it is natural to ask whether (1.1) and (1.2) are glimpses of a more general theory where class numbers of number fields can be described as alternating sums of combinatorial numbers. We show that this is indeed the case for a large class of imaginary quadratic extensions of real quadratic fields \(F\). To make this precise, suppose that \(F=\mathbb{Q}(\sqrt{d})\), where \(d>1\) is square-free. Throughout, we assume that its ring of integers \(\mathcal{O}_{F}\) has class number \(1\). We note that \(\mathcal{O}_{F}=\mathbb{Z}[\theta_{F}]\), where we let
\[\theta_{F}\coloneqq\begin{cases}\sqrt{d}&\text{ if }d\equiv 3\pmod{4}\\ \frac{1+\sqrt{d}}{2}&\text{ if }d\equiv 1\pmod{4}.\end{cases}\]
The imaginary quadratic extensions of \(F\) that we consider are of the form \(F(\sqrt{-p})\), where \(p\) is a prime for which \(7\leq p\equiv 3\pmod{4}\) and \(\left(\frac{d}{p}\right)=-1\). These conditions imply that the relative discriminant ideal is the prime ideal \(p\mathcal{O}_{F}\) (see Lemma 2.3). Moreover, for convenience, we fix a generator \(\rho_{F,p}\coloneqq a+b\theta_{F}\in\mathcal{O}_{F}\) such that \((\mathcal{O}_{F}/p\mathcal{O}_{F})^{\times}=\langle\rho_{F,p}+p\mathcal{O}_{F}\rangle\cong\mathbb{F}_{p^{2}}^{\times}\).
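Such a generator \(\rho_{F,p}\) can be found and verified by brute force in \(\mathcal{O}_{F}/p\mathcal{O}_{F}\cong\mathbb{F}_{p^{2}}\). A minimal Python sketch (ours, for illustration only; the data \(F=\mathbb{Q}(\sqrt{3})\), \(p=7\), \(\rho_{F,p}=6+\sqrt{3}\) anticipates the example below):

```python
# Brute-force check that rho_{F,p} = 6 + theta_F generates (O_F/7O_F)^x for
# F = Q(sqrt(3)), computing in F_7[theta]/(theta^2 - 3), i.e. T = 0, N = -3.
p, T, N = 7, 0, -3

def mul(u, v):
    # (u0 + u1*theta)(v0 + v1*theta) mod p, using theta^2 = T*theta - N
    return ((u[0] * v[0] - N * u[1] * v[1]) % p,
            (u[0] * v[1] + u[1] * v[0] + T * u[1] * v[1]) % p)

rho = (6, 1)
x, order = rho, 1
while x != (1, 0):
    x = mul(x, rho)
    order += 1
print(order)                      # 48 = 7**2 - 1, so rho generates the group
```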
In this setting, we derive a class number formula for \(F(\sqrt{-p})\) as an alternating sum that arises from \(p\) and invariants of \(F\). Our key observation is that the combinatorial structure that underlies (1.1) and (1.2) can be reformulated in terms of recurrence relations that can be captured by the coefficients of distinguished rational functions. Therefore, our goal is to define two rational functions (reflecting that \(F\) has degree \(2\) over \(\mathbb{Q}\)) whose coefficients can be incorporated into an alternating sum that yields the class number \(h_{F(\sqrt{-p})}\).
To this end, we use \(\rho_{F,p}=a+b\theta_{F}\) to define integers
\[C_{F,p} \coloneqq a^{2}+ab\cdot\operatorname{Tr}_{F/\mathbb{Q}}(\theta_{ F})+\operatorname{Norm}_{F/\mathbb{Q}}(\theta_{F})b^{2}, \tag{1.3}\] \[D_{F,p} \coloneqq 2a+b\cdot\operatorname{Tr}_{F/\mathbb{Q}}(\theta_{F}), \tag{1.4}\]
and in turn, to define the rational functions as
\[X_{F,p}(z) =\sum_{m\geq 1}x(m)z^{m}\coloneqq\frac{az-C_{F,p}z^{2}}{C_{F,p}z^{ 2}-D_{F,p}z+1}, \tag{1.5}\] \[Y_{F,p}(z) =\sum_{m\geq 1}y(m)z^{m}\coloneqq\frac{bz}{C_{F,p}z^{2}-D_{F,p}z+1}. \tag{1.6}\]
Moreover, we must delicately take into account the presence of nontrivial units as they inform class number calculations. To make this precise, we recall that Dirichlet's Unit Theorem implies that \(\mathcal{O}_{F}^{\times}=\{\pm\varepsilon_{F}^{j},j\in\mathbb{Z}\}\), where \(\varepsilon_{F}=s+t\theta_{F}\) is the totally positive fundamental unit. We then define \(t\) pairs of sequences, say \(\{(x_{i}(m),y_{i}(m))\ :\ \ m\geq 1\}\), where \(t\) is the coefficient of \(\theta_{F}\) in \(\varepsilon_{F}\), that encode the action of \(\varepsilon_{F}\) by means of expressions involving \(x(m)\) and \(y(m)\) (see (3.4)). Finally, we find that the analogues of the right hand side of (1.2) turn out to be obtained from the quadratic form
\[Q_{F}(Y_{1},Y_{2})\coloneqq\operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon_{F})Y _{1}^{2}+4Y_{1}Y_{2}+\operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon_{F})Y_{2}^ {2}.\]
In terms of this data, we obtain the following theorem, which gives a formula for the class number \(h_{F(\sqrt{-p})}\).
**Theorem 1.1**.: _Assuming the notation and hypotheses above, we have_
\[h_{F(\sqrt{-p})}=\frac{1}{16t^{2}p^{2}}\sum_{\begin{subarray}{c}1\leq m\leq p ^{2}-1\\ 1\leq i\leq t\end{subarray}}(-1)^{m}Q_{F}\left(x_{i}(m),y_{i}(m)\right).\]
**Remark.** For real quadratic fields \(F\) with \(h_{F}=1\), Theorem 1.1 applies for one-fourth of the primes. This follows from the strong version of Dirichlet's Theorem on primes in arithmetic progressions, which implies that the primes \(p\) such that \(p\equiv 3\pmod{4}\) and \(\left(\frac{d}{p}\right)=-1\) have density \(1/4\).
**Example.** Here we illustrate Theorem 1.1 with \(F=\mathbb{Q}(\sqrt{3})\) and \(p=7\). The field \(F(\sqrt{-7})\) has class number \(h_{F(\sqrt{-7})}=2\). Note that \(F\) has class number \(1\), and its totally positive fundamental unit is \(\varepsilon_{F}=2+\sqrt{3}\), and so \(t=1\). The prime \(p=7\) satisfies the required conditions that \(p\equiv 3\pmod{4}\) and \(\left(\frac{3}{7}\right)=-1\). Therefore, we have that the principal ideal \(7\mathcal{O}_{F}\subset\mathcal{O}_{F}=\mathbb{Z}[\sqrt{3}]\) is prime, and so we have that \(\mathcal{O}_{F}/7\mathcal{O}_{F}\cong\mathbb{F}_{49}\). One can check that \(\rho_{F,p}=6+\sqrt{3}\) generates the multiplicative cyclic group \((\mathcal{O}_{F}/7\mathcal{O}_{F})^{\times}\cong\mathbb{F}_{49}^{\times}\). Thus we have \(a=6,b=1\), and using (1.3) and (1.4), we find that \(C_{F,p}=33\) and \(D_{F,p}=12\), which in turn by (1.5) and (1.6) give
\[X_{F,p}(z) =\sum_{m\geq 1}x(m)z^{m}=6z+39z^{2}+270z^{3}+1953z^{4}+\ldots= \frac{6z-33z^{2}}{33z^{2}-12z+1},\] \[Y_{F,p}(z) =\sum_{m\geq 1}y(m)z^{m}=z+12z^{2}+111z^{3}+936z^{4}+\ldots= \frac{z}{33z^{2}-12z+1}.\]
Theorem 1.1 offers a formula for \(h_{F(\sqrt{-7})}\) as an alternating sum of \(7^{2}-1=48\) terms that are assembled from the first \(48\) coefficients of \(X_{F,p}(z)\) and \(Y_{F,p}(z)\). Furthermore, because \(t=1\), the relevant pairs \(\{x_{1}(m),y_{1}(m)\}\) are merely reductions of the pairs of coefficients \(\{x(m),y(m)\}\) to a specific _fundamental domain_, as given in (3.4). One finds that
\[\begin{array}{llll}x_{1}(1)=1,&x_{1}(2)=-5,&\ldots,&x_{1}(48)=-5,\\ y_{1}(1)=-5,&y_{1}(2)=3,&\ldots,&y_{1}(48)=-7.\end{array}\]
We now use Theorem 1.1 to calculate \(h_{F(\sqrt{-7})}\):
\[h_{F(\sqrt{-7})} =\frac{1}{784}\sum_{1\leq m\leq 48}(-1)^{m}\Big{[}4x_{1}(m)^{2}+4x _{1}(m)y_{1}(m)+4y_{1}(m)^{2}\Big{]}\] \[=\frac{1}{784}(-84+76-300+52-28+\cdots+436)=2.\]
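This computation can be reproduced mechanically. The following Python sketch (ours; it is not the authors' code) follows Theorem 1.1 verbatim for this example, using the fact that for \(t=1\) the single kernel shift leaves \((\tilde{x}(m),\tilde{y}(m))\) unchanged:

```python
from fractions import Fraction
from math import floor

# Theorem 1.1 for F = Q(sqrt(3)), p = 7: rho = 6 + theta_F, eps = 2 + theta_F,
# so s = 2, t = 1, Tr(eps) = 4, and the single kernel shift fixes (x~, y~).
p, a, b = 7, 6, 1                 # rho = a + b*theta_F
T, N = 0, -3                      # Tr(theta_F), Norm(theta_F)
s, t = 2, 1
tr_eps = 2 * s + t * T            # Tr(eps) = 4

def reduce_to(q, left_open):
    """Reduce a rational q into (0, 1] if left_open else into [0, 1)."""
    f = q - floor(q)
    return Fraction(1) if left_open and f == 0 else f

total = Fraction(0)
x, y = a, b                       # rho^m = x(m) + y(m)*theta_F
for m in range(1, p * p):
    xt = reduce_to(Fraction(t * x - s * y, t * p), True)    # \tilde{x}(m)
    yt = reduce_to(Fraction(y, t * p), False)               # \tilde{y}(m)
    xm, ym = t * p * (2 * xt - 1), t * p * (2 * yt - 1)     # (3.4), i = 1
    total += (-1) ** m * (tr_eps * xm * xm + 4 * xm * ym + tr_eps * ym * ym)
    x, y = a * x - N * b * y, b * x + (a + T * b) * y       # rho^(m+1)

print(total / (16 * t * t * p * p))                         # 2 = h_{F(sqrt(-7))}
```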
We circle back to the fact that the class number formula in (1.2) makes use of the base \(g\) expansion of \(1/p\). We stress that the number of terms in the sum, which is \(p-1\), is the length of the repeating period of this expansion. Therefore, we ask whether the expression in Theorem 1.1 can be reformulated so that the number of terms in the sum equals the period length of an analogous expansion of \(1/p\). We find, indeed, that this is the case.
In the setting of Theorem 1.1, it is natural to consider the _base-\(\varepsilon_{F}\) expansion_ of elements \(\alpha\in F\). To be precise, there is a unique sequence of integers \(a_{n},a_{n-1},\ldots,a_{0},a_{-1},a_{-2},\ldots\), with \(0\leq a_{i}\leq\lfloor\varepsilon_{F}\rfloor\), for which
\[\alpha=a_{n}\varepsilon_{F}^{n}+a_{n-1}\varepsilon_{F}^{n-1}+\ldots+a_{0}+a_ {-1}\varepsilon_{F}^{-1}+a_{-2}\varepsilon_{F}^{-2}+\ldots. \tag{1.7}\]
The above expression is called the _base-\(\varepsilon_{F}\) expansion_ of \(\alpha\), and it is well-known that such expansions are eventually periodic (see, for example, [9]). To recast Theorem 1.1 in terms of these expansions, we require the following finite set:
\[R_{F,p}:=\left\{r_{1}+r_{2}\varepsilon_{F}\in\frac{1}{p}\mathcal{O}_{F}:r_{1} \in\mathbb{Q}\cap(0,1],r_{2}\in\mathbb{Q}\cap[0,1)\right\}, \tag{1.8}\]
which is known as the _Shintani set_ for \(F\) at \(p\), when \(p\equiv 3\pmod{4}\) and \(\left(\frac{d}{p}\right)=-1\). The totally positive units define a group action of \(\mathcal{O}_{F}^{\times,+}:=\langle\varepsilon_{F}\rangle\) on \(R_{F,p}\) as follows.
\[\varepsilon_{F}*(r_{1}+r_{2}\varepsilon_{F})\coloneqq(1-r_{2})+\{r_{1}+r_{2} \mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon_{F})\}\varepsilon_{F},\]
where \(\{x\}\coloneqq x-\lfloor x\rfloor\) is the fractional part of \(x\). Under this action, the set \(R_{F,p}\) is a finite disjoint union of orbits, say
\[R_{F,p}=\bigsqcup_{r\in\mathcal{O}_{F}^{\times,+}\backslash R_{F,p}}\mathcal{ O}_{F}^{\times,+}*r.\]
For \(r\in R_{F,p}\backslash\mathcal{O}_{F}\), we prove (see Lemma 4.8) that the number of elements in the orbit of \(r\) under \(\varepsilon_{F}\) is equal to the period length of \(1/p\) in base \(\varepsilon_{F}\), which we denote \(\ell_{F,p}.\) This allows us to now state the desired class number formula as a sum over \(\ell_{F,p}\) terms, where we make the following abuse of notation:
\[Q_{F}(r_{1}+r_{2}\varepsilon_{F})=Q_{F}(r_{1},r_{2}).\]
**Theorem 1.2**.: _Assuming the notation and hypotheses from Theorem 1.1, we have_
\[h_{F(\sqrt{-p})}=\frac{1}{4}\sum_{i=1}^{\ell_{F,p}}\ \sum_{r\in\mathcal{O}_{F}^{ \times,+}\backslash R_{F,p}}\chi_{F(\sqrt{-p})/F}\left(rp\mathcal{O}_{F} \right)Q_{F}(\varepsilon_{F}^{i}*r),\]
_where \(\chi_{F(\sqrt{-p})/F}\) is the unique quadratic Hecke character of conductor \(p\mathcal{O}_{F}\)._
**Example.** Now we illustrate Theorem 1.2 with \(F=\mathbb{Q}(\sqrt{3})\) and \(p=7\), where \(h_{F}=1\) and \(\varepsilon_{F}=2+\sqrt{3}\) (so \(t=1\)). One can check (for example, using SageMath) that the base-\(\varepsilon_{F}\) expansion of \(1/7\) is
\[\frac{1}{7} =\varepsilon_{F}^{-2}+\sum_{i=0}^{\infty}\left(3\varepsilon_{F}^ {-8i-3}+2\varepsilon_{F}^{-8i-4}+2\varepsilon_{F}^{-8i-5}+2\varepsilon_{F}^{ -8i-7}+2\varepsilon_{F}^{-8i-8}+3\varepsilon_{F}^{-8i-9}\right)\] \[=0.01\overline{32202230}.\]
Thus, we see that the base \(\varepsilon_{F}\) expansion of \(1/7\) has period length \(\ell_{F,7}=8\). Since \(|R_{F,7}-\mathcal{O}_{F}|=tp^{2}-t=48\) (see Lemmas 2.6 and 2.9), we deduce that \(\mathcal{O}_{F}^{\times,+}\backslash(R_{F,7}-\mathcal{O}_{F})\) contains \(48/\ell_{F,7}=6\) disjoint orbits. One can also verify that the set
\[\left\{\frac{1}{7}+\frac{1}{7}\varepsilon_{F},\ \frac{1}{7},\ \frac{1}{7}+\frac{4}{7}\varepsilon_{F},\ \frac{1}{7}+\frac{5}{7}\varepsilon_{F},\ \frac{2}{7}+\frac{2}{7}\varepsilon_{F},\ \frac{3}{7}\right\}\]
is a complete set of orbit representatives for \(\mathcal{O}_{F}^{\times,+}\backslash(R_{F,7}-\mathcal{O}_{F})\). Equipped with these values, Theorem 1.2 states that
\[h_{F(\sqrt{-7})} =\frac{1}{4}\sum_{i=1}^{8}\ \ \sum_{r\in\mathcal{O}_{F}^{\times,+}\backslash R_{F,p}}\chi_{F(\sqrt{-7})/F}\left(rp\mathcal{O}_{F}\right)Q_{F}(\varepsilon_{F}^{i}*r)\] \[=\frac{1}{4}\Big{(}-\frac{220}{7}+\frac{228}{7}-\frac{188}{7}+\frac{212}{7}-\frac{180}{7}+\frac{204}{7}\Big{)}=2.\]
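The base-\(\varepsilon_{F}\) expansion of \(1/7\) used above can also be reproduced without SageMath. The sketch below (ours) works in exact arithmetic in \(\mathbb{Q}(\sqrt{3})\); the helper `floor_q3` and the cutoff of 18 digits are ad hoc choices:

```python
from fractions import Fraction
import math

def leq(n, u, v):
    """Exact test n <= u + v*sqrt(3) for an integer n and rationals u, v."""
    A = u - n
    if v >= 0:
        return A >= 0 or A * A <= 3 * v * v
    return A >= 0 and A * A >= 3 * v * v

def floor_q3(u, v):
    """Exact floor of u + v*sqrt(3): float guess, then exact correction."""
    n = math.floor(float(u) + float(v) * math.sqrt(3))
    while not leq(n, u, v):
        n -= 1
    while leq(n + 1, u, v):
        n += 1
    return n

# Digits of 1/7 in base eps = 2 + sqrt(3): repeatedly multiply by eps and
# strip off the integer part.  alpha is stored as u + v*sqrt(3).
u, v = Fraction(1, 7), Fraction(0)
digits = []
for _ in range(18):
    u, v = 2 * u + 3 * v, u + 2 * v      # alpha <- eps * alpha
    d = floor_q3(u, v)
    digits.append(d)
    u -= d
print(digits)   # [0, 1, 3, 2, 2, 0, 2, 2, 3, 0, ...]: "0.01" then period 32202230
```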
Theorems 1.1 and 1.2 are generalizations of the results from Hirzebruch-Zagier and Girstmair to the setting of imaginary quadratic extensions of real quadratic fields \(F\) with \(h_{F}=1\). Within this new setting, we prove our theorems by working with a class number formula analogous to the one used in the quadratic setting by Hirzebruch, Zagier, and Girstmair.
Both (1.1) and (1.2) arise from a finite version of Dirichlet's class number formula, which relates the Dirichlet \(L\)-function, an infinite series, to the class number of \(\mathbb{Q}(\sqrt{-d})\):
\[L(1,\chi_{d})=\frac{2\pi}{\omega\sqrt{d}}h(-d),\]
where \(\omega\) represents the number of roots of unity in \(\mathbb{Q}(\sqrt{-d})\), and \(\chi_{d}\) is a primitive Dirichlet character of conductor \(d\). Using the functional equation of this \(L\)-function, the above equation can be written in terms of \(L(0,\chi_{d})\), which in turn allows us to use the Hurwitz \(\zeta\)-function and the periodicity of \(\chi_{d}\) to rewrite this class number formula as a finite sum of values of Bernoulli polynomials. Our work uses an analogous formula of Shintani [10], which expresses the class numbers of totally imaginary quadratic extensions of totally real fields as finite sums assembled from values of Bernoulli polynomials.
In Section 2, we review the background needed to state Shintani's class number formula for imaginary quadratic extensions of real quadratic fields \(F\) with \(h_{F}=1\). These formulae involve "Shintani sets," which are something like fundamental domains for the action of the totally positive units on \(\frac{1}{p}\mathcal{O}_{F}\). The crux of our work relies on combinatorial properties of these sets, which we derive in Section 2. Then, we prove Theorem 1.1 in Section 3 and Theorem 1.2 in Section 4. Finally, in Section 5, we use Theorems 1.1 and 1.2 to calculate class numbers of \(\mathbb{Q}(\sqrt{3},\sqrt{-p})\), where \(p\equiv 3\pmod{4}\) is prime, \(\left(\frac{d}{p}\right)=-1\), and \(p<100\).
## Acknowledgements
The authors were participants in the 2023 UVA REU in Number Theory. They are grateful for the support of grants from Jane Street Capital, the National Science Foundation (DMS-2002265 and DMS-2147273), the National Security Agency (H98230-23-1-0016), and the Templeton World Charity Foundation. The authors thank Ken Ono, Wei-Lun Tsai, Alejandro De Las Penas Castano, and Eleanor McSpirit for suggesting the problem and for their mentorship and support. They would also like to thank Marie-Helene Tome and the other participants of the 2023 UVA REU for many thoughtful discussions.
## 2. Shintani's Class Number Formula and Properties of Shintani Sets
In this section, we discuss the background needed to state Shintani's class number formula. While Shintani's theorem is true for totally imaginary quadratic extensions of a totally real field of arbitrary degree, we restrict the following commentary and definitions to the case that \(F\) is quadratic with class number \(1.\) Throughout this section, we fix a real quadratic field \(F\) with class number \(1\) and a totally imaginary quadratic extension of \(F\), which we denote \(K=F(\sqrt{-p})\), where \(p\equiv 3\pmod{4}\).
### Algebraic Background
Shintani's formula can be used to calculate the relative class number \(h_{K}/h_{F}\) in terms of invariants of \(F\), \(K\), and the extension \(K/F\) itself. Before stating the formula, we review the definitions of these invariants.
The _regulator_ \(R_{L}\) of a number field \(L\) measures the density of units in its ring of integers. The regulator can be determined by considering the matrix
\[[N_{j}\log|\sigma_{j}(u_{i})|],\]
where each \(u_{i}\) is a fundamental unit from a set \(u_{1},\cdots,u_{k}\) generating the free part of the unit group of \(\mathcal{O}_{L}\), each \(\sigma_{j}\) is a distinct Archimedean place of \(L\), and \(N_{j}\) is defined to be \(1\) if \(\sigma_{j}\) is real, and \(2\) if \(\sigma_{j}\) is complex. If we let \(r_{1}\) and \(r_{2}\) respectively denote the number of real and complex places of \(L\), by Dirichlet's unit theorem, we see that this matrix has dimension \((r_{1}+r_{2}-1)\times(r_{1}+r_{2})\). The regulator \(R_{L}\) is the determinant of the square submatrix which is formed by deleting any single column of this matrix.
Since the sum of the entries in each row of this matrix is \(0\), this determinant is independent of which column is deleted. If we consider the rows of this matrix as forming a lattice in \(\mathbb{R}^{r_{1}+r_{2}-1}\), then the regulator is directly proportional to the volume of the fundamental domain associated to this lattice.
Next, we examine the unit groups of \(\mathcal{O}_{K}\) and \(\mathcal{O}_{F}.\) Since \(F\) is a real quadratic field, and \(K\) is a totally imaginary quadratic extension of \(F\), Dirichlet's unit theorem implies that the unit groups \(\mathcal{O}_{F}^{\times}\) and \(\mathcal{O}_{K}^{\times}\) both have free part of rank \(1\). More precisely, if we let \(\mu_{F}\) and \(\mu_{K}\) represent the groups of roots of unity in \(F\) and \(K\) respectively, there exists \(\varepsilon_{F}\in\mathcal{O}_{F}\) and \(\varepsilon_{K}\in\mathcal{O}_{K}\) such that \(\mathcal{O}_{F}^{\times}=\mu_{F}\times\langle\varepsilon_{F}\rangle\) and \(\mathcal{O}_{K}^{\times}=\mu_{K}\times\langle\varepsilon_{K}\rangle.\) Since \(F\) is real quadratic and any of \(\varepsilon_{F},-\varepsilon_{F},\varepsilon_{F}^{-1},-\varepsilon_{F}^{-1}\) can generate the free part of \(\mathcal{O}_{F}^{\times}\), we can choose \(\varepsilon_{F}\) to be totally positive and greater than \(1\).
**Lemma 2.1**.: _We have \(\mathcal{O}_{K}^{\times}=\mathcal{O}_{F}^{\times}.\) In particular, we may choose \(\varepsilon_{K}=\varepsilon_{F}.\)_
Proof.: A theorem by Frolich and Taylor shows that \([\mathcal{O}_{K}^{\times}:\mathcal{O}_{F}^{\times}\mu_{K}]=1\) or \(2\) (see Theorem 42 in [3]). Since \(K=\mathbb{Q}(\sqrt{d},\sqrt{-p})\) for \(p\geq 7\), \(\mu_{K}=\{\pm 1\}.\) Thus \(\mu_{K}=\mu_{F}\), so \([\mathcal{O}_{K}^{\times}:\mathcal{O}_{F}^{\times}]=1\) or \(2\).
Now, assume for the sake of contradiction that \([\mathcal{O}_{K}^{\times}:\mathcal{O}_{F}^{\times}]=2\), so \(\varepsilon_{K}\notin\mathcal{O}_{F}^{\times}\), and \(\varepsilon_{K}^{2}\in\mathcal{O}_{F}^{\times}\). Since \(K\cap\mathbb{R}=F,\) we see that \(\varepsilon_{K}\not\in\mathbb{R}.\) However, we know that \(\varepsilon_{K}^{2}\in\mathcal{O}_{F}\subset\mathbb{R}\). Observe that both \(\varepsilon_{K}\in\mathbb{C}-\mathbb{R}\) and \(\varepsilon_{K}^{2}\in\mathbb{R}\) if and only if \(\operatorname{Re}(\varepsilon_{K})=0\). Additionally, \(\operatorname{Norm}_{K/\mathbb{Q}}(\varepsilon_{K})=\pm 1\), which implies that \(\varepsilon_{K}=\pm i.\) However, this is a contradiction since \(\pm i\notin K,\) so we see that \([\mathcal{O}_{K}^{\times}:\mathcal{O}_{F}^{\times}]=1\), and hence \(\mathcal{O}_{K}^{\times}=\mu_{F}\times\langle\varepsilon_{F}\rangle=\mu_{K} \times\langle\varepsilon_{F}\rangle\), so we can choose \(\varepsilon_{K}=\varepsilon_{F}.\)
Equipped with the fact that \(\varepsilon_{K}=\varepsilon_{F}\), we may now relate the regulators \(R_{K}\) and \(R_{F}\) of \(K\) and \(F,\) which we do in the lemma which follows.
**Lemma 2.2**.: _For \(F\) and \(K\) as defined in the beginning of this section, we have that \(R_{K}=2R_{F}.\)_
Proof.: Since \(\varepsilon_{F}=\varepsilon_{K}\) by Lemma 2.1, the regulators of the fields \(F\) and \(K\) as previously defined are determined using the following matrices:
\[R_{F}:\left[\log|\varepsilon_{F}|\qquad\log|\varepsilon_{F}^{\prime}|\right]\] \[R_{K}:\left[2\log|\varepsilon_{F}|\qquad 2\log|\varepsilon_{F}^{\prime}|\right]\]
where \(\varepsilon_{F}^{\prime}\) denotes the image of \(\varepsilon_{F}\) under the nontrivial automorphism of \(F\).
Thus \(R_{F}=\log|\varepsilon_{F}|\) and \(R_{K}=2\log|\varepsilon_{F}|=2R_{F}.\)
Next, we review the definition of the _relative discriminant ideal_\(D_{K/F}\) for our fields \(K\) and \(F\). Recall that \(F\) has class number \(1\), so \(D_{K/F}\) is principal. Since \(K/F\) is quadratic, it is Galois, and its Galois group consists of two elements: the identity and complex conjugation. In this setting, \(D_{K/F}\) is given by
\[D_{K/F}\coloneqq\left(\det\begin{bmatrix}\omega_{1}&\omega_{2}\\ \overline{\omega_{1}}&\overline{\omega_{2}}\end{bmatrix}\right)^{2}\mathcal{O}_{F},\]
where \(\{\omega_{1},\omega_{2}\}\) is an integral basis of \(K/F\) and \(\overline{\,\cdot\,}\) denotes the nontrivial element of \(\operatorname{Gal}(K/F)\) (complex conjugation). We know that an integral basis will exist in our case by the following argument. From the structure theorem for finitely generated modules over a Dedekind domain, we have that \(\mathcal{O}_{K}\cong\mathcal{O}_{F}^{n}\oplus\mathfrak{a},\) where \(\mathfrak{a}\) is an ideal of \(\mathcal{O}_{F}\) and \(n\in\mathbb{Z}_{\geq 0}\) (see Theorem 1.32 of [7]). Since \(h_{F}=1\), implying \(\mathcal{O}_{F}\) is a principal ideal domain, \(\mathcal{O}_{K}\) must be a free \(\mathcal{O}_{F}\)-module of rank \(2=[K:F].\)
**Lemma 2.3**.: _The set \(\{1,\frac{1+\sqrt{-p}}{2}\}\) is an integral basis of \(K/F,\) and thus we have \(D_{K/F}=p\mathcal{O}_{F}.\)_
Proof.: Let \(A\) be the change-of-basis matrix from the integral basis \(\{\omega_{1},\omega_{2}\}\) to the \(F\)-basis \(\{1,\frac{1+\sqrt{-p}}{2}\}.\) We see that
\[p\mathcal{O}_{F}=\left(\det\begin{bmatrix}1&\frac{1+\sqrt{-p}}{2}\\ 1&\frac{1-\sqrt{-p}}{2}\end{bmatrix}\right)^{2}\mathcal{O}_{F}=(\det A)^{2}D_ {K/F}.\]
Since \(\mathcal{O}_{F}\) is a Dedekind domain, ideals in \(\mathcal{O}_{F}\) factor uniquely. Therefore since \(p\mathcal{O}_{F}\) is prime by assumption, \(\det A\) must be a unit in \(\mathcal{O}_{F}\), so \(A\in\operatorname{GL}_{2}(\mathcal{O}_{F})\). Thus, \(\{1,\frac{1+\sqrt{-p}}{2}\}\) is an integral basis of \(K/F\). Using this integral basis, we see that
\[D_{K/F}=\left(\frac{1-\sqrt{-p}}{2}-\frac{1+\sqrt{-p}}{2}\right)^{2}\mathcal{O }_{F}=p\mathcal{O}_{F}.\]
Finally, since \(\operatorname{Gal}(K/F)\cong\mathbb{Z}/2\mathbb{Z}\), there is a unique nontrivial character \(\chi:\operatorname{Gal}(K/F)\to\mathbb{C}^{\times}\). By class field theory, we can consider the precomposition of \(\chi\) with the Artin symbol to obtain a character \(\chi_{K/F}\) of the group of fractional ideals that are relatively prime to \(D_{K/F}\). This is known as the _Hecke character of \(K/F\) with conductor \(D_{K/F}\)_. By definition of the Artin symbol (see, for example, [2] page 106), we can explicitly compute the value of \(\chi_{K/F}\) for any prime ideal \(\mathfrak{p}\):
\[\chi_{K/F}(\mathfrak{p})=\begin{cases}1&\mathfrak{p}\text{ splits in }\mathcal{O}_{K} \\ -1&\mathfrak{p}\text{ remains inert in }\mathcal{O}_{K}\\ 0&\mathfrak{p}\text{ ramifies in }\mathcal{O}_{K}.\end{cases}\]
**Remark**.: _Shintani's class number formula relies on the narrow ideal class group character with conductor \(D_{K/F}\) evaluated at fractional ideals. This corresponds to a primitive Grossencharakter with modulus \(D_{K/F}\) (see Prop. 6.9 in [8]). Since \(\operatorname{Gal}(K/F)\cong\mathbb{Z}/2\mathbb{Z}\), the nontrivial character \(\chi:\operatorname{Gal}(K/F)\to\mathbb{C}^{\times}\) is unique and injective, so class field theory implies that the primitive Grossencharakter with modulus \(D_{K/F}\) is unique and corresponds to \(\chi\). Hence, we can see that the character used in Shintani's formula is exactly the Grossencharakter. For more details, see Sections 6 and 10 in [8]._
**Remark**.: _For any unit \(u\in\mathcal{O}_{K}^{\times}\) and any ideal \(\mathfrak{a}\subset\mathcal{O}_{K}\), \(u\cdot\mathfrak{a}=\mathfrak{a},\) and hence \(\chi_{K/F}(u\cdot\mathfrak{a})=\chi_{K/F}(\mathfrak{a})\)._
### Shintani's Class Number Formula
In this section, we prove a simplified version of Shintani's formula for real quadratic base fields \(F\) with \(h_{F}=1\).
**Proposition 2.4**.: _For a totally real quadratic extension \(F\) of \(\mathbb{Q}\) with \(h_{F}=1\) and \(K=F(\sqrt{-p})\) a totally imaginary quadratic extension of \(F\) where \(7\leq p\equiv 3\pmod{4}\) remains inert in \(\mathcal{O}_{F}\), Shintani's formula simplifies to the following:_
\[h_{K}=\frac{1}{2}\sum_{r\in R_{F,p}}\chi_{K/F}\Big{(}(r_{1}+r_{2}\varepsilon_{F})D_{K/F}\Big{)}\sum_{\begin{subarray}{c}0\leq l_{1},l_{2}\leq 2\\ l_{1}+l_{2}=2\end{subarray}}\frac{B_{l_{1}}(r_{1})}{l_{1}!}\frac{B_{l_{2}}(r_{2})}{l_{2}!}\mathrm{Tr}_{F/\mathbb{Q}}\left(\varepsilon_{F}^{l_{2}-1}\right),\]
_where_
\[R_{F,p}=\left\{r=r_{1}+r_{2}\varepsilon_{F}\;:\;0<r_{1}\leq 1,0\leq r_{2}<1,r \in\tfrac{1}{p}\mathcal{O}_{F}\right\},\]
_and \(B_{n}(x)\) is the degree \(n\) Bernoulli polynomial. As \([F:\mathbb{Q}]=2\), Shintani's formula only requires the following Bernoulli polynomials:_
\[B_{0}(x)=1,\quad B_{1}(x)=x-\frac{1}{2},\quad B_{2}(x)=x^{2}-x+\frac{1}{6}.\]
Proof.: We follow [10] by first considering the embedding \(F\to\mathbb{R}^{2}\) via
\[F\hookrightarrow\mathbb{R}^{2}\quad\alpha\mapsto\left(\alpha,\alpha^{\prime} \right),\]
where \(\alpha\mapsto\alpha^{\prime}\) is the nontrivial automorphism in \(\mathrm{Gal}(F/\mathbb{Q})\). Shintani shows that the first quadrant \(\mathbb{R}_{+}^{2}\coloneqq\{(x,y)\in\mathbb{R}^{2}\;:\;x,y>0\}\) can be decomposed as the following disjoint union:
\[\mathbb{R}_{+}^{2}=\bigcup_{\eta\in\mathcal{O}_{F}^{\times,+}}\eta C_{1}\sqcup \bigcup_{\eta\in\mathcal{O}_{F}^{\times,+}}\eta C_{2}\]
where \(C_{1}\) is generated by the images of \(1,\varepsilon_{F}\) in \(\mathbb{R}^{2}\) and \(C_{2}\) is generated by the image of \(1\):
\[C_{1}=\{\lambda_{1}(1,1)+\lambda_{2}(\varepsilon_{F},\varepsilon_{F}^{\prime} )\in\mathbb{R}^{2}\;:\;\lambda_{1},\lambda_{2}>0\},\quad C_{2}=\{\lambda(1,1) \in\mathbb{R}^{2}\;:\;\lambda>0\},\]
and \(\eta\in\mathcal{O}_{F}^{\times,+}\) acts by component-wise multiplication. Next, for each cone \(C_{i}\), Shintani defines the set \(R(i,\frac{1}{p}\mathcal{O}_{F})\) as the following vectors with components in \(\mathbb{Q}\cap(0,1]\):
\[R\left(1,\frac{1}{p}\mathcal{O}_{F}\right) \coloneqq\left\{(r_{1},r_{2})\in\mathbb{Q}^{2}\;:\;0<r_{1},r_{2} \leq 1,\;r_{1}+r_{2}\varepsilon_{F}\in\frac{1}{p}\mathcal{O}_{F}\right\}\] \[R\left(2,\frac{1}{p}\mathcal{O}_{F}\right) \coloneqq\left\{r_{3}\in\mathbb{Q}\;:\;0<r_{3}\leq 1,\;r_{3}\in \frac{1}{p}\mathcal{O}_{F}\right\}.\]
Let \(\chi_{K/F}\) be the unique quadratic character of the narrow ideal class group of \(F\) with conductor \(p\mathcal{O}_{F}\), associated to \(K\). Then, assuming the notation above, we have the class number formula
\[h_{K}=\frac{2\omega_{K}R_{F}}{R_{K}\left[\mathcal{O}_{F}^{\times}:\mathcal{O}_{F}^{\times,+}\right]}\left(\sum_{r\in R\left(1,\frac{1}{p}\mathcal{O}_{F}\right)}\chi_{K/F}\Big{(}\left(r_{1}+r_{2}\varepsilon_{F}\right)p\mathcal{O}_{F}\Big{)}\sum_{\begin{subarray}{c}(l_{1},l_{2})\in\mathbb{Z}_{\geq 0}^{2}\\ l_{1}+l_{2}=2\end{subarray}}\frac{B_{l_{1}}(r_{1})B_{l_{2}}(r_{2})}{2\cdot l_{1}!\,l_{2}!}\mathrm{Tr}_{F/\mathbb{Q}}\left(\varepsilon_{F}^{l_{2}-1}\right)\right.\] \[-\left.\sum_{r_{3}\in R\left(2,\frac{1}{p}\mathcal{O}_{F}\right)}\chi_{K/F}(r_{3}p\mathcal{O}_{F})B_{1}(r_{3})\right),\]
where \(\omega_{K}\) is the number of roots of unity in \(K\) ([10], Theorem 2).
We first simplify the coefficient term in this formula. Recall from Lemma 2.2 that \(R_{F}/R_{K}\;=\;1/2\). Since \(K=\mathbb{Q}(\sqrt{d},\sqrt{-p})\) for \(p\geq 7\), we have \(\omega_{K}=2.\) Furthermore, since we may choose the fundamental unit of \(F\) to be totally positive, we see that \(\mathcal{O}_{F}^{\times}=\{\pm 1\}\times\mathcal{O}_{F}^{\times,+}\), so we get \([\mathcal{O}_{F}^{\times}:\mathcal{O}_{F}^{\times,+}]=2\). Then,
\[\frac{2\omega_{K}R_{F}}{R_{K}\left[\mathcal{O}_{F}^{\times}:\mathcal{O}_{F}^{ \times,+}\right]}=1.\]
Next, we reindex the sum. First we split the set \(R(1,\frac{1}{p}\mathcal{O}_{F})\) into two parts. Consider the sets \(R_{1},R_{2}\) given by
\[R_{1} \coloneqq\left\{(r_{1},r_{2})\in\mathbb{Q}^{2}\;:\;0<r_{1}\leq 1,0<r_ {2}<1,r_{1}+r_{2}\varepsilon_{F}\in\frac{1}{p}\mathcal{O}_{F}\right\}\] \[R_{2} \coloneqq\left\{(r_{1},r_{2})\in\mathbb{Q}^{2}\;:\;0<r_{1}\leq 1,r_ {2}=1,r_{1}+r_{2}\varepsilon_{F}\in\frac{1}{p}\mathcal{O}_{F}\right\}.\]
Additionally, for simplicity we denote the inner sum of Shintani's formula by:
\[\mathcal{B}(r_{1}+r_{2}\varepsilon_{F})\coloneqq\sum_{\begin{subarray}{c}(l_{1 },l_{2})\in\mathbb{Z}_{\geq 0}^{2}\\ l_{1}+l_{2}=2\end{subarray}}\frac{B_{l_{1}}(r_{1})B_{l_{2}}(r_{2})}{2\cdot l_{1 }!l_{2}!}\mathrm{Tr}_{F/\mathbb{Q}}\left(\varepsilon_{F}^{l_{2}-1}\right).\]
By splitting the sum with \(R(1,\frac{1}{p}\mathcal{O}_{F})=R_{1}\bigsqcup R_{2},\) we see that
\[h_{K}=\sum_{r\in R_{1}}\chi_{K/F}\Big{(}\left(r_{1}+r_{2}\varepsilon _{F}\right)p\mathcal{O}_{F}\Big{)}\mathcal{B}(r_{1}+r_{2}\varepsilon_{F})\\ +\sum_{r\in R_{2}}\chi_{K/F}\Big{(}\left(r_{1}+\varepsilon_{F} \right)p\mathcal{O}_{F}\Big{)}\mathcal{B}(r_{1}+\varepsilon_{F})-\sum_{r\in R \left(2,\frac{1}{p}\mathcal{O}_{F}\right)}\chi_{K/F}(rp\mathcal{O}_{F})(r-1/2). \tag{2.1}\]
Since \(\varepsilon_{F}\in\mathcal{O}_{F}^{\times}\) and \(\chi_{K/F}\) has conductor \(p\mathcal{O}_{F},\) we have \(\chi_{K/F}(r_{1}p\mathcal{O}_{F})=\chi_{K/F}\big{(}(r_{1}+\varepsilon_{F})p \mathcal{O}_{F}).\) Moreover, comparing \(\mathcal{B}(r_{1})\) and \(\mathcal{B}(r_{1}+\varepsilon_{F}),\) we see that
\[\mathcal{B}(r_{1}) =\frac{r_{1}^{2}-r_{1}+1/3}{4}\mathrm{Tr}_{F/\mathbb{Q}}( \varepsilon_{F})-\frac{r_{1}-1/2}{2}\] \[\mathcal{B}(r_{1}+\varepsilon_{F}) =\frac{r_{1}^{2}-r_{1}+1/3}{4}\mathrm{Tr}_{F/\mathbb{Q}}( \varepsilon_{F})+\frac{r_{1}-1/2}{2}=\mathcal{B}(r_{1})+r_{1}-1/2.\]
Thus (2.1) simplifies to
\[h_{K}=\frac{1}{2}\sum_{r\in R_{F,p}}\chi_{K/F}\Big{(}\left(r_{1}+r_{2} \varepsilon_{F}\right)p\mathcal{O}_{F}\Big{)}\sum_{\begin{subarray}{c}(l_{1},l_{2})\in\mathbb{Z}_{\geq 0}^{2}\\ l_{1}+l_{2}=2\end{subarray}}\frac{B_{l_{1}}(r_{1})B_{l_{2}}(r_{2})}{l_{1}!l_{2}! }\mathrm{Tr}_{F/\mathbb{Q}}\left(\varepsilon_{F}^{l_{2}-1}\right)\]
where \(R_{F,p}\) is given by
\[R_{F,p}=\left\{r=r_{1}+r_{2}\varepsilon_{F}\;:\;0<r_{1}\leq 1,0\leq r_{2}<1,r \in\tfrac{1}{p}\mathcal{O}_{F}\right\}.\]
**Definition 2.5**.: _We call \(R_{F,p}\) the **Shintani set** associated to \(F\) and \(p\)._
### Properties of Shintani Sets
In this subsection, we identify a correspondence between \(R_{F,p}\) and the finite field \(\mathbb{F}_{p^{2}},\) which will play an important role in our proof of Theorem 1.1. Namely, we make use of this correspondence and the cyclic structure of the multiplicative group \(\mathbb{F}_{p^{2}}^{\times}\) to enumerate the elements of \(R_{F,p}-\mathcal{O}_{F}\) using the powers of a generator of \(\mathbb{F}_{p^{2}}^{\times}.\)
Throughout this subsection, we fix a totally real quadratic field \(F\) and an imaginary quadratic extension \(K=F(\sqrt{-p}),\) where \(p\equiv 3\pmod{4}\) and \(p\) remains inert in \(\mathcal{O}_{F}\). We let \(\varepsilon_{F}=s+t\theta_{F},\) and to simplify notation, we denote
\[\varepsilon\coloneqq\varepsilon_{F}\quad\text{and}\quad R\coloneqq R_{F,p}.\]
We begin by giving an explicit construction of the Shintani set:
**Lemma 2.6**.: _The Shintani set \(R\) can be written as:_
\[R=\left\{\frac{A}{tp}+\frac{B}{tp}\varepsilon\;\;:\;A+sB\equiv 0\pmod{t},\;A \in(0,tp]\cap\mathbb{Z},B\in[0,tp)\cap\mathbb{Z}\right\}.\]
Proof.: By Lemma 2.3, we have \(D_{K/F}=p\mathcal{O}_{F}.\) As such, for any element \(r_{1}+r_{2}\varepsilon\in\frac{1}{p}\mathcal{O}_{F},\) we have
\[r_{1}+r_{2}\varepsilon=(r_{1}+sr_{2})+tr_{2}\theta_{F}\in\frac{1}{p}\mathcal{O }_{F}.\]
The set \(\{1,\theta_{F}\}\) constitutes an integral basis of \(\mathcal{O}_{F}\), meaning we can write any element of the Shintani set as \(r_{1}+r_{2}\varepsilon=\frac{A^{\prime}}{p}+\frac{B}{p}\theta_{F}\in\frac{1}{p} \mathcal{O}_{F}\), for some \(A^{\prime},B\in\mathbb{Z}\). Note that
\[\frac{A^{\prime}}{p}=r_{1}+sr_{2}\quad\text{and}\quad\frac{B}{p}=tr_{2}.\]
In particular, we have
\[r_{2}=\frac{B}{tp},\]
and since \(r_{2}\in[0,1)\), we see that \(B\in[0,tp)\). Additionally, we see that
\[r_{1}=\frac{A^{\prime}}{p}-sr_{2}=\frac{tA^{\prime}-sB}{tp}=\frac{A}{tp}\]
where \(A\coloneqq tA^{\prime}-sB.\) We know that \(r_{1}\in(0,1]\), so \(A\in(0,tp]\). Moreover, since
\[\frac{A}{tp}+\frac{B}{tp}\varepsilon=\frac{A+sB}{tp}+\frac{B}{p}\theta_{F}\in \frac{1}{p}\mathcal{O}_{F},\]
we must also have \(A+sB\equiv 0\pmod{t}.\) From the expression above, we can see that every element of the form \(\frac{A}{tp}+\frac{B}{tp}\varepsilon\) with \(A,B\in\mathbb{Z}\), \(A\in(0,tp]\), \(B\in[0,tp)\), \(A+sB\equiv 0\pmod{t}\) is in the Shintani set. This finishes the proof.
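Lemma 2.6 makes \(R\) easy to enumerate directly. As a consistency check (a sketch of ours; the second data set, \(F=\mathbb{Q}(\sqrt{2})\) with \(\varepsilon_{F}=3+2\sqrt{2}\), i.e. \(s=3\), \(t=2\), and the inert prime \(p=11\), is assumed here purely for illustration), the following code confirms that \(|R|=tp^{2}\) and that exactly \(t\) of its elements lie in \(\mathcal{O}_{F}\), in line with Lemmas 2.8 and 2.9 below:

```python
from fractions import Fraction

def shintani_set(s, t, p):
    """Enumerate R_{F,p} via Lemma 2.6, writing r = r1 + r2*eps, eps = s + t*theta_F."""
    return [(Fraction(A, t * p), Fraction(B, t * p))
            for A in range(1, t * p + 1)
            for B in range(t * p)
            if (A + s * B) % t == 0]

# (s, t, p) = (2, 1, 7):  F = Q(sqrt(3)),  eps = 2 + sqrt(3)    (the running example)
# (s, t, p) = (3, 2, 11): F = Q(sqrt(2)),  eps = 3 + 2*sqrt(2)  (assumed data), p = 11 inert
for s, t, p in [(2, 1, 7), (3, 2, 11)]:
    R = shintani_set(s, t, p)
    # r lies in O_F exactly when r1 + s*r2 and t*r2 are integers
    kernel = [r for r in R
              if (r[0] + s * r[1]).denominator == 1 and (t * r[1]).denominator == 1]
    print(len(R), len(kernel))    # t*p^2 and t:  "49 1" then "242 2"
```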
Next, we want to identify \(R\) with the finite field \(\mathbb{F}_{p^{2}}\). We begin with the work of Barquero-Sanchez, Masri, and Tsai, who proved that \(R\) is a finite abelian group with respect to the following operation:
\[r\oplus r^{\prime}\coloneqq r+r^{\prime}+\mathbb{Z}[\varepsilon]\]
(see Proposition 4.3 in [1]). This allows us to prove the following proposition relating \(R\) to \(\mathbb{F}_{p^{2}}\), a property that is central to our proof of Theorem 1.1.
**Proposition 2.7**.: _The Shintani set \(R\) has a structure as a \(\mathbb{Z}[\varepsilon]\)-module. This structure admits a surjective \(\mathbb{Z}[\varepsilon]\)-module homomorphism \(\pi:R\to\mathbb{F}_{p^{2}}\)._
Proof.: We begin with the \(\mathbb{Z}[\varepsilon]\)-module structure on \(R\). By definition, the fractional ideal \(\frac{1}{p}\mathcal{O}_{F}\) is an \(\mathcal{O}_{F}\)-module, and since \(\mathbb{Z}[\varepsilon]\) is a subring of \(\mathcal{O}_{F}\), we observe that \(\frac{1}{p}\mathcal{O}_{F}\) is a \(\mathbb{Z}[\varepsilon]\)-module by restriction of scalars. Furthermore, since \(\mathbb{Z}[\varepsilon]\) is a \(\mathbb{Z}[\varepsilon]\)-submodule of \(\frac{1}{p}\mathcal{O}_{F}\), we have that \(\frac{1}{p}\mathcal{O}_{F}/\mathbb{Z}[\varepsilon]\) is a \(\mathbb{Z}[\varepsilon]\)-module. Moreover, \(R\subset\frac{1}{p}\mathcal{O}_{F}\) is a complete reduced set of coset representatives for \(\frac{1}{p}\mathcal{O}_{F}/\mathbb{Z}[\varepsilon]\) (see Proposition 4.1 in [1]). Thus \(R\) has the structure of a \(\mathbb{Z}[\varepsilon]\)-module and can be identified with \(\frac{1}{p}\mathcal{O}_{F}/\mathbb{Z}[\varepsilon]\).
Since \(R\subseteq\frac{1}{p}\mathcal{O}_{F}\), multiplication by \(p\) defines an injective \(\mathbb{Z}[\varepsilon]\)-module homomorphism
\[R\longrightarrow\mathcal{O}_{F},\quad\ r\longmapsto pr+p\mathcal{O}_{F}.\]
If we compose this map with the projection \(\mathcal{O}_{F}\longrightarrow\mathcal{O}_{F}/p\mathcal{O}_{F}\), we obtain the map
\[\pi:R\longrightarrow\mathcal{O}_{F}/p\mathcal{O}_{F},\quad\ r \longmapsto pr,\]
which is surjective, as shown in Proposition 4.4 of [1]. If \(p\) remains inert in \(F\), then \(p\mathcal{O}_{F}\) is prime and thus maximal, so \(\mathcal{O}_{F}/p\mathcal{O}_{F}\) is a finite field. Then, since \(F\) is quadratic and \(\mathcal{O}_{F}=\mathbb{Z}[\theta_{F}]\), we know that
\[\mathcal{O}_{F}/p\mathcal{O}_{F}\cong\mathbb{Z}[\theta_{F}]/p\mathbb{Z}[ \theta_{F}]\cong\mathbb{F}_{p}[\theta]\cong\mathbb{F}_{p^{2}}.\]
**Remark**.: By the First Isomorphism Theorem,
\[R/\ker(\pi)\cong\mathcal{O}_{F}/p\mathcal{O}_{F}.\]
Note that this is an isomorphism of groups, and therefore pertains only to the structure of \(R\) as an additive abelian group. We do not require a multiplicative structure within \(R\) here; rather we point out that the map
\[\pi:R/\ker(\pi)\to\mathbb{F}_{p^{2}}\]
is bijective. We will make use of this bijective correspondence in the proof of Theorem 1.1.
**Lemma 2.8**.: _The elements of \(\ker(\pi)\) are exactly those elements of \(R\) which are in \(\mathcal{O}_{F}.\)_
Proof.: Assume \(r\in\ker(\pi)\). Then, since \(\operatorname{Im}(\pi)=\mathcal{O}_{F}/p\mathcal{O}_{F}\), we have
\[\pi(r)=0\iff pr\in p\mathcal{O}_{F}\iff r\in\mathcal{O}_{F}.\]
Now we are in a position to explicitly describe the elements in \(\ker(\pi)\).
**Lemma 2.9**.: _The kernel of the map \(\pi\) is given by_
\[\ker(\pi)=\left\{1-\left\{\frac{si}{t}\right\}_{[0,1)}+\frac{i}{t}\varepsilon \mid 0\leq i\leq t-1\right\}\]
_In particular, we have that \(|\ker(\pi)|=t\)._
Proof.: Consider \(r\in\ker(\pi).\) Using Lemma 2.6 and noting that \(\varepsilon=s+t\theta_{F},\) we see that \(r\) has the form
\[r=\frac{A}{tp}+\frac{B}{tp}\varepsilon=\frac{A+sB}{tp}+\frac{B}{p}\theta_{F}.\]
By Lemma 2.8, \(r\in\ker(\pi)\iff r\in R\cap\mathcal{O}_{F},\) so we have that \(\frac{A+sB}{tp}\in\mathbb{Z}\) and \(\frac{B}{p}\in\mathbb{Z}.\) The second condition implies \(p|B,\) and since \(B\in[0,tp)\cap\mathbb{Z}\) by Lemma 2.6, we see that \(B=pi\) for \(i\in[0,t)\cap\mathbb{Z}\). The condition that \(\frac{A+sB}{tp}\in\mathbb{Z}\) implies that \(A\equiv-sB=spi\pmod{tp}.\) Since \(A\in(0,tp]\cap\mathbb{Z}\) by Lemma 2.6, \(A\) is uniquely determined by \(B.\) More precisely,
\[A=tp-(spi\pmod{tp})\]
where \(spi\pmod{tp}\) is the least positive residue of \(spi\in\mathbb{Z}\) modulo \(tp.\) We can further simplify this expression; since
\[spi\pmod{tp}=spi-tp\Big{\lfloor}\frac{spi}{tp}\Big{\rfloor},\]
we have that
\[\ker(\pi)\subseteq\left\{1-\left\{\frac{si}{t}\right\}_{[0,1)}+\frac{i}{t} \varepsilon\;:\;i\in[0,t)\cap\mathbb{Z}\right\},\]
where \(\{\cdot\}\) denotes the fractional part function \(\{x\}_{I}\), defined as the unique element of \(I\) satisfying \(x-\{x\}_{I}\in\mathbb{Z}\). The converse containment is seen immediately from the fact that \(\varepsilon=s+t\theta_{F}\) and from the definition of \(spi\pmod{tp}.\) Thus, \(\ker(\pi)\) has size exactly \(t.\)
We will denote the elements of \(\ker(\pi)\) as
\[\kappa_{i}\coloneqq 1-\left\{\frac{si}{t}\right\}_{[0,1)}+\frac{i}{t} \varepsilon\quad\text{for}\quad i\in\{0,1,\ldots,t-1\}.\]
## 3. Proof of Theorem 1.1
Equipped with these facts about the Shintani set described in the previous section, we now prove Theorem 1.1. Our proof relies on features of the structure of the Shintani set which come from the bijection between \(R_{F,p}/\ker(\pi)\) and \(\mathbb{F}_{p^{2}}\), as well as some properties of the Hecke character across the Shintani set that we derive. Again, to simplify notation, we let \(\varepsilon=\varepsilon_{F}\), \(R=R_{F,p}\), and \(\rho=\rho_{F,p}\).
We are now able to describe the Shintani set using the multiplicative structure of \(\mathbb{F}_{p^{2}}^{\times}=\langle\rho+p\mathcal{O}_{F}\rangle\). Using the bijection from \(R/\ker(\pi)\) to \(\mathbb{F}_{p^{2}}\), we have
\[R=\ker(\pi)\sqcup\left(\bigcup_{m=1}^{p^{2}-1}\pi^{-1}(\rho^{m}+p\mathcal{O}_ {F})\right).\]
For each \(m\) between \(1\) and \(p^{2}-1\), choose one element in the coset \(\pi^{-1}(\rho^{m}+p\mathcal{O}_{F})\), which we denote \(\tilde{x}(m)+\tilde{y}(m)\varepsilon\in R\).
Next, we explicitly calculate each \(\tilde{x}(m)\) and \(\tilde{y}(m)\) in terms of \(\rho^{m}\). Note that \(\{1,\theta_{F}\}\) is an \(\mathbb{F}_{p}\)-basis of \(\mathbb{F}_{p^{2}}\), so we can write \(\rho^{m}+p\mathcal{O}_{F}=x(m)+y(m)\theta_{F}\) for some integers \(x(m),y(m)\). Observe that, since \(\varepsilon=s+t\theta_{F}\), we have
\[\rho^{m}+p\mathcal{O}_{F}=x(m)-\frac{s\cdot y(m)}{t}+\frac{y(m)}{t}\varepsilon.\]
Under multiplication by \(p\) and reduction modulo \(p\), the point
\[\frac{x(m)}{p}-\frac{s\cdot y(m)}{tp}+\frac{y(m)}{tp}\varepsilon\in\frac{1}{p }\mathcal{O}_{F}\]
maps to \(\rho^{m}+p\mathcal{O}_{F}\). Thus, if we subtract a suitable element of \(\mathbb{Z}[\varepsilon]\) from this point, we obtain a point \(\tilde{x}(m)+\tilde{y}(m)\varepsilon\in R\) that lies in \(\pi^{-1}(\rho^{m}+p\mathcal{O}_{F})\). In particular, we see that
\[\tilde{x}(m)=\left\{\frac{x(m)}{p}-\frac{s\cdot y(m)}{tp}\right\}_{(0,1]}, \hskip 14.226378pt\tilde{y}(m)=\left\{\frac{y(m)}{tp}\right\}_{[0,1)}.\]
Since \(\pi(\tilde{x}(m)+\tilde{y}(m)\varepsilon)=\rho^{m}+p\mathcal{O}_{F}\), we can construct the entire coset from this element:
\[\pi^{-1}(\rho^{m}+p\mathcal{O}_{F})=\Big{\{}(\tilde{x}(m)+\tilde{y}(m) \varepsilon)\oplus\kappa_{i}:1\leq i\leq t\Big{\}}.\]
For simplicity, we write
\[\tilde{x}_{i}(m)+\tilde{y}_{i}(m)\varepsilon\coloneqq(\tilde{x}(m)+\tilde{y}( m)\varepsilon)\oplus\kappa_{i}. \tag{3.1}\]
Using our explicit construction of \(\ker(\pi)\) given in Lemma 2.9, we can similarly explicitly construct each \(\tilde{x}_{i}(m)\), \(\tilde{y}_{i}(m)\). We see that
\[\tilde{x}_{i}(m) =\left\{\frac{x(m)}{p}-\frac{s\cdot y(m)}{tp}+1-\Big{\{}\frac{si} {t}\Big{\}}_{[0,1)}\right\}_{(0,1]}\] \[\tilde{y}_{i}(m) =\left\{\frac{y(m)}{tp}+\frac{i}{t}\right\}_{[0,1)}.\]
Thus, we can write \(R\) as the following disjoint union:
\[R=\ker(\pi)\sqcup\left(\bigcup_{m=1}^{p^{2}-1}\Big{\{}\tilde{x}_{i}(m)+ \tilde{y}_{i}(m)\varepsilon:1\leq i\leq t\Big{\}}\right).\]
By Proposition 2.4, we simplify Shintani's class number formula to obtain
\[h_{K}=\frac{1}{2}\sum_{\begin{subarray}{c}1\leq m\leq p^{2}-1\\ 1\leq i\leq t\end{subarray}}\chi_{K/F}\Big{(}(\tilde{x}_{i}(m)+\tilde{y}_{i}(m)\varepsilon)\cdot p\mathcal{O}_{F}\Big{)}\cdot\sum_{\begin{subarray}{c}0\leq l_{1},l_{2}\leq 2\\ l_{1}+l_{2}=2\end{subarray}}\frac{B_{l_{1}}(\tilde{x}_{i}(m))}{l_{1}!}\frac{B_{l_{2}}(\tilde{y}_{i}(m))}{l_{2}!}\mathrm{Tr}_{F/\mathbb{Q}}\left(\varepsilon_{F}^{l_{2}-1}\right). \tag{3.2}\]
We can also simplify the Hecke character term. Consider any element \(r_{1}+r_{2}\varepsilon\in R\) and any element \(k_{1}+k_{2}\varepsilon\in\ker(\pi)\). By Lemma 2.8, we have \(k_{1}+k_{2}\varepsilon\in\mathcal{O}_{F}.\) Since the Hecke character has conductor \(p\mathcal{O}_{F}\), we have
\[\chi_{K/F}\Big{(}(r_{1}+r_{2}\varepsilon+k_{1}+k_{2}\varepsilon)p\mathcal{O}_{F}\Big{)}=\chi_{K/F}\Big{(}(r_{1}+r_{2}\varepsilon)p\mathcal{O}_{F}\Big{)},\] since \((k_{1}+k_{2}\varepsilon)p\in p\mathcal{O}_{F}\) and both generators are totally positive.
Thus, the value of \(\chi_{K/F}(r_{1}+r_{2}\varepsilon)\) depends only on the coset of \(r_{1}+r_{2}\varepsilon\) in \(R/\ker(\pi)\). Therefore, we have that
\[\chi_{K/F}\Big{(}(\tilde{x}_{i}(m)+\tilde{y}_{i}(m)\varepsilon)p\mathcal{O}_ {F}\Big{)}=\chi_{K/F}\Big{(}(\tilde{x}(m)+\tilde{y}(m)\varepsilon)p\mathcal{O} _{F}\Big{)}.\]
By definition,
\[p(\tilde{x}(m)+\tilde{y}(m)\varepsilon)-\rho^{m}\in p\mathcal{O}_{F}.\]
Using this and the multiplicativity of the Hecke character, we get
\[\chi_{K/F}((\tilde{x}_{i}(m)+\tilde{y}_{i}(m)\varepsilon)p\mathcal{O}_{F})= \chi_{K/F}((\rho^{m}+p\mathcal{O}_{F})\mathcal{O}_{F})=\chi_{K/F}((\rho+p \mathcal{O}_{F})\mathcal{O}_{F})^{m}.\]
If \(\chi_{K/F}((\rho+p\mathcal{O}_{F})\mathcal{O}_{F})=0\), then since \((\mathcal{O}_{F}/p\mathcal{O}_{F})^{\times}=\langle\rho+p\mathcal{O}_{F}\rangle\), we would have that \(\chi_{K/F}(rp\mathcal{O}_{F})=0\) for all \(r\in R.\) However, this contradicts the definition of \(\chi_{K/F}\). Moreover, \(\chi_{K/F}((\rho+p\mathcal{O}_{F})\mathcal{O}_{F})\neq 1\), since we would then similarly have that \(\chi_{K/F}(rp\mathcal{O}_{F})=1\) for all \(r\in R-\mathcal{O}_{F}\), but \(\chi_{K/F}\) is a non-trivial character by construction. Thus, we see that \(\chi_{K/F}((\rho+p\mathcal{O}_{F})\mathcal{O}_{F})=-1\), which implies
\[\chi_{K/F}\Big{(}(\tilde{x}_{i}(m)+\tilde{y}_{i}(m)\varepsilon)p\mathcal{O}_ {F}\Big{)}=\chi_{K/F}((\rho+p\mathcal{O}_{F})\mathcal{O}_{F})^{m}=(-1)^{m}.\]
Thus, equation (3.2) simplifies further:
\[h_{K}=\frac{1}{2}\sum_{\begin{subarray}{c}1\leq m\leq p^{2}-1\\ 1\leq i\leq t\end{subarray}}(-1)^{m}\sum_{\begin{subarray}{c}0\leq l_{1},l_{2}\leq 2\\ l_{1}+l_{2}=2\end{subarray}}\frac{B_{l_{1}}(\tilde{x}_{i}(m))}{l_{1}!}\frac{B_{l_{2}}(\tilde{y}_{i}(m))}{l_{2}!}\mathrm{Tr}_{F/\mathbb{Q}}\left(\varepsilon_{F}^{l_{2}-1}\right).\]
Next, we simplify the Bernoulli polynomial part of the class number formula. We consider
\[\sum_{\begin{subarray}{c}0\leq l_{1},l_{2}\leq 2\\ l_{1}+l_{2}=2\end{subarray}}\frac{B_{l_{1}}(\tilde{x}_{i}(m))}{l_{1}!}\frac{B_{l_{2}}(\tilde{y}_{i}(m))}{l_{2}!}\mathrm{Tr}_{F/\mathbb{Q}}\left(\varepsilon^{l_{2}-1}\right)\] \[\qquad\qquad=\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon)\frac{\tilde{x}_{i}(m)^{2}-\tilde{x}_{i}(m)+1/6}{2}+2\left(\tilde{x}_{i}(m)-\frac{1}{2}\right)\left(\tilde{y}_{i}(m)-\frac{1}{2}\right)+\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon)\frac{\tilde{y}_{i}(m)^{2}-\tilde{y}_{i}(m)+1/6}{2}\] \[\qquad\qquad=\frac{\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon)}{2}\left(\tilde{x}_{i}(m)-\frac{1}{2}\right)^{2}+2\left(\tilde{x}_{i}(m)-\frac{1}{2}\right)\left(\tilde{y}_{i}(m)-\frac{1}{2}\right)+\frac{\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon)}{2}\left(\tilde{y}_{i}(m)-\frac{1}{2}\right)^{2}+c_{0},\]
for some constant \(c_{0}\). Since for any constant \(c\),
\[\sum_{\begin{subarray}{c}1\leq m\leq p^{2}-1\\ 1\leq i\leq t\end{subarray}}(-1)^{m}\cdot c=0,\]
we can ignore the constant term \(c_{0}\) that arises in the inner sum of Bernoulli polynomials. We can write the class number \(h_{K}\) as
\[h_{K}=\frac{1}{4}\sum_{\begin{subarray}{c}1\leq m\leq p^{2}-1\\ 1\leq i\leq t\end{subarray}}(-1)^{m}\bigg{[}\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon )\left(\tilde{x}_{i}(m)-\frac{1}{2}\right)^{2}+4\left(\tilde{x}_{i}(m)-\frac{ 1}{2}\right)\left(\tilde{y}_{i}(m)-\frac{1}{2}\right)+\mathrm{Tr}_{F/\mathbb{Q }}(\varepsilon)\left(\tilde{y}_{i}(m)-\frac{1}{2}\right)^{2}\bigg{]}. \tag{3.3}\]
If we define
\[x_{i}(m)\coloneqq tp(2\tilde{x}_{i}(m)-1)\quad\text{and}\quad y_{i}(m) \coloneqq tp(2\tilde{y}_{i}(m)-1), \tag{3.4}\]
we can then rewrite the above equation as
\[h_{K}=\frac{1}{16t^{2}p^{2}}\sum_{\begin{subarray}{c}1\leq m\leq p^{2}-1\\ 1\leq i\leq t\end{subarray}}(-1)^{m}\Big{[}\mathrm{Tr}_{F/\mathbb{Q}}( \varepsilon)\big{(}x_{i}(m)\big{)}^{2}+4\big{(}x_{i}(m)\big{)}\big{(}y_{i}(m) \big{)}+\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon)\big{(}y_{i}(m)\big{)}^{2} \Big{]}. \tag{3.5}\]
Finally, by defining the quadratic form
\[Q_{F}(Y_{1},Y_{2})\coloneqq\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon)Y_{1}^{2}+4 Y_{1}Y_{2}+\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon)Y_{2}^{2},\]
we rewrite (3.5) as
\[h_{K}=\frac{1}{16t^{2}p^{2}}\sum_{\begin{subarray}{c}1\leq m\leq p^{2}-1\\ 1\leq i\leq t\end{subarray}}(-1)^{m}Q_{F}\left(x_{i}(m),y_{i}(m)\right). \tag{3.6}\]
The last step is to derive recurrence relations for \(x(m),y(m)\), the coefficients of \(\rho^{m}+p\mathcal{O}_{F}=x(m)+y(m)\theta_{F}\). The minimal polynomial of \(\theta_{F}\) is \(x^{2}-\mathrm{Tr}_{F/\mathbb{Q}}(\theta_{F})x+\mathrm{Norm}_{F/\mathbb{Q}}(\theta_{F})\), which implies
\[\theta_{F}^{2}=\mathrm{Tr}_{F/\mathbb{Q}}(\theta_{F})\theta_{F}-\mathrm{Norm }_{F/\mathbb{Q}}(\theta_{F}).\]
To simplify notation, let \(T=\mathrm{Tr}_{F/\mathbb{Q}}(\theta_{F})\) and \(N=\mathrm{Norm}_{F/\mathbb{Q}}(\theta_{F})\). Since \(\rho\coloneqq a+b\theta_{F}\), we have the initial conditions \(x(1)=a\) and \(y(1)=b\). Then, we get
\[\rho^{m+1}=x(m+1)+y(m+1)\theta_{F} =(x(m)+y(m)\theta_{F})\cdot(a+b\theta_{F})\] \[=a\cdot x(m)-Nb\cdot y(m)+\Big{(}b\cdot x(m)+\Big{(}a+Tb\Big{)} \cdot y(m)\Big{)}\theta_{F}.\]
This implies the following recurrence relations:
\[x(m+1) =a\cdot x(m)-Nb\cdot y(m)\] \[y(m+1) =b\cdot x(m)+(a+Tb)\cdot y(m).\]
Then, consider functions \(X(z)\), \(Y(z)\) given by
\[X(z)=\sum_{m=1}^{\infty}x(m)\cdot z^{m},\quad Y(z)=\sum_{m=1}^{\infty}y(m) \cdot z^{m}. \tag{3.7}\]
Using our recurrence relations, we can set up a system of equations to find explicit expressions for \(X(z),Y(z)\) as rational functions determined by \(x(1)\) and \(y(1)\). We see that
\[X(z) =z\cdot[a\cdot X(z)-Nb\cdot Y(z)]+az\] \[Y(z) =z\cdot\Big{[}b\cdot X(z)+\Big{(}a+Tb\Big{)}\cdot Y(z)\Big{]}+bz,\]
which gives
\[X(z) =\frac{az-(a^{2}+abT+Nb^{2})z^{2}}{(a^{2}+abT+Nb^{2})z^{2}-(2a+bT)z+1}\] \[Y(z) =\frac{bz}{(a^{2}+abT+b^{2}N)z^{2}-(2a+bT)z+1}.\]
We simplify these by letting \(C_{F,p}\coloneqq a^{2}+abT+Nb^{2}\) and \(D_{F,p}\coloneqq 2a+bT\) to get
\[X(z)=\frac{az-C_{F,p}z^{2}}{C_{F,p}z^{2}-D_{F,p}z+1},\quad Y(z)=\frac{bz}{C_{F,p}z^{2}-D_{F,p}z+1}.\]
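As a sanity check on this step, note that the common denominator \(C_{F,p}z^{2}-D_{F,p}z+1\) forces the two-term recursion \(u(m)=D_{F,p}u(m-1)-C_{F,p}u(m-2)\) (for \(m\geq 3\)) on the coefficients of either series, which must reproduce the same values as the first-order system above. A short Python sketch (ours) for the example \(F=\mathbb{Q}(\sqrt{3})\), \(p=7\):

```python
from itertools import islice

# Example F = Q(sqrt(3)), p = 7: a = 6, b = 1, T = 0, N = -3, so C = 33, D = 12.
a, b, T, N = 6, 1, 0, -3
C, D = a * a + a * b * T + N * b * b, 2 * a + b * T

def series(u1, u2):
    """Coefficients u(1), u(2), ... with u(m) = D*u(m-1) - C*u(m-2) for m >= 3."""
    while True:
        yield u1
        u1, u2 = u2, D * u2 - C * u1

x_from_X = list(islice(series(a, D * a - C), 4))   # forced by X(z)'s denominator
y_from_Y = list(islice(series(b, D * b), 4))       # forced by Y(z)'s denominator

xs, ys, x, y = [], [], a, b                        # first-order recurrences above
for _ in range(4):
    xs.append(x); ys.append(y)
    x, y = a * x - N * b * y, b * x + (a + T * b) * y

print(x_from_X, xs)   # [6, 39, 270, 1953] twice
print(y_from_Y, ys)   # [1, 12, 111, 936] twice
```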
Note that the coefficients \(x(m),y(m)\) of the power series of these rational functions correspond to those \(x(m),y(m)\) which we use to generate each \(x_{i}(m),y_{i}(m)\) using the formulas
\[x_{i}(m)\coloneqq tp(2\tilde{x}_{i}(m)-1)\quad\text{and}\quad y_{i}(m) \coloneqq tp(2\tilde{y}_{i}(m)-1).\]
This concludes the proof of Theorem 1.1.
## 4. Proof of Theorem 1.2
Here we prove Theorem 1.2, which relies heavily on the structure of the Shintani set as a \(\mathbb{Z}[\varepsilon_{F}]\)-module and the related attributes of the base-\(\varepsilon_{F}\) expansions of its elements. Through a series of preliminary lemmas, we set up the proof of Theorem 1.2 by relating the base-\(\varepsilon_{F}\) expansion of \(1/p\) to the orbit of elements in \(R_{F,p}-\ker(\pi)\) under the action of \(\varepsilon_{F}\). This allows us to derive a finite sum analogous to Girstmair's (1.2), in which the number of summands is equal to the period length of the base \(\varepsilon_{F}\) expansion of \(1/p\).
Throughout this section, we fix a totally real quadratic field \(F\) and an imaginary quadratic extension \(K\coloneqq F(\sqrt{-p})\), where \(p\equiv 3\pmod{4}\) and \(p\) remains inert in \(\mathcal{O}_{F}\). To simplify notation, we also let
\[\varepsilon\coloneqq\varepsilon_{F}\quad\text{and}\quad R\coloneqq R_{F,p}.\]
Additionally, we denote \(r\in R\) as \(r\coloneqq r_{1}+r_{2}\varepsilon\).
### Shintani Cycles
Recall from Section 2 that we can identify \(R\) with \(\frac{1}{p}\mathcal{O}_{F}/\mathbb{Z}[\varepsilon]\) to make it into a \(\mathbb{Z}[\varepsilon]\)-module. In particular, the multiplicative group \(\langle\varepsilon\rangle\) acts on \(\frac{1}{p}\mathcal{O}_{F}\) via scalar multiplication. If we denote the map for this group action by
\[\mu:\langle\varepsilon\rangle\times\frac{1}{p}\mathcal{O}_{F}\longrightarrow \frac{1}{p}\mathcal{O}_{F},\]
we can compose \(\mu\) with the projection map
\[\nu:\frac{1}{p}\mathcal{O}_{F}\longrightarrow\frac{1}{p}\mathcal{O}_{F}/ \mathbb{Z}[\varepsilon]\]
to yield
\[\mu^{\prime}\coloneqq\nu\circ\mu:\langle\varepsilon\rangle\times\frac{1}{p} \mathcal{O}_{F}\longrightarrow\frac{1}{p}\mathcal{O}_{F}/\mathbb{Z}[ \varepsilon].\]
Note that, since \(\nu\) is a \(\mathbb{Z}[\varepsilon]\)-module homomorphism, \(\mu^{\prime}\) constitutes a group action of \(\langle\varepsilon\rangle\) on \(\frac{1}{p}\mathcal{O}_{F}/\mathbb{Z}[\varepsilon]\).
**Lemma 4.1**.: _The map_
\[\overline{\mu}:\langle\varepsilon\rangle\times\frac{1}{p}\mathcal{O}_{F}/ \mathbb{Z}[\varepsilon]\longrightarrow\frac{1}{p}\mathcal{O}_{F}/\mathbb{Z}[ \varepsilon],\ \ (\varepsilon,\alpha+\mathbb{Z}[\varepsilon])\longmapsto\mu^{\prime}( \varepsilon,\alpha)\]
_is a well-defined group action._
Proof.: Since \(\mu^{\prime}\) is a group action, it is sufficient to show that \(\overline{\mu}\) is well-defined. Take \(\alpha,\alpha^{\prime}\in\frac{1}{p}\mathcal{O}_{F}\) such that \(\alpha+\mathbb{Z}[\varepsilon]=\alpha^{\prime}+\mathbb{Z}[\varepsilon]\). Thus, for any \(n\in\mathbb{Z}\), we have
\[\overline{\mu}(\varepsilon^{n},\alpha+\mathbb{Z}[\varepsilon]) =\mu^{\prime}(\varepsilon^{n},\alpha+\mathbb{Z}[\varepsilon])\] \[=\varepsilon^{n}\alpha+\mathbb{Z}[\varepsilon]=\varepsilon^{n} \alpha^{\prime}+\mathbb{Z}[\varepsilon]\] \[=\overline{\mu}(\varepsilon^{n},\alpha^{\prime}+\mathbb{Z}[ \varepsilon]).\]
Thus, \(\overline{\mu}\) is well-defined, so \(\overline{\mu}\) constitutes a group action of \(\langle\varepsilon\rangle\) on \(\frac{1}{p}\mathcal{O}_{F}/\mathbb{Z}[\varepsilon]\).
We know from Proposition 4.1 in [1] that \(R\) is a complete and reduced set of representatives of \(\frac{1}{p}\mathcal{O}_{F}/\mathbb{Z}[\varepsilon]\). Thus, we obtain a group action \(\langle\varepsilon\rangle\curvearrowright R\) given by \((\varepsilon,r)\mapsto\varepsilon*r\), where \(\varepsilon*r=\varepsilon r+z\), and \(z\) is the unique element of \(\mathbb{Z}[\varepsilon]\) such that \(\varepsilon r+z\in R\). We define the _Shintani cycle_ of any element \(r\in R\) to be the orbit of \(r\) under this action, and we denote this set as \(C_{r}\coloneqq\langle\varepsilon\rangle*r\).
**Remark**.: Note that \(\varepsilon*r\in\mathcal{O}_{F}\iff r\in\mathcal{O}_{F}\). Thus, for any \(r\in R\cap\mathcal{O}_{F}\), every element in the Shintani cycle of \(r\) is an element of \(\mathcal{O}_{F}\). We will call Shintani cycles containing elements in \(R-\mathcal{O}_{F}\) the _nontrivial Shintani cycles_ of \(R\). We refer to Shintani cycles of elements in \(R\cap\mathcal{O}_{F}\) as _trivial Shintani cycles_ because the elements in these cycles are weighted by a factor of \(0\) in Shintani's class number formula (see the remark in Section 4.3), and hence for our purposes are "trivial."
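For the running example \(F=\mathbb{Q}(\sqrt{3})\), \(p=7\), a nontrivial Shintani cycle can be computed directly from the explicit form of the action given in the Introduction (equivalently, Lemma 4.4 below). The Python sketch below (ours) traces the cycle of \(\tfrac{1}{7}+\tfrac{1}{7}\varepsilon\) and finds \(8\) elements, matching the period length \(\ell_{F,7}=8\) computed earlier:

```python
from fractions import Fraction
from math import floor

# Shintani cycle of r = 1/7 + (1/7)*eps for F = Q(sqrt(3)), p = 7, Tr(eps) = 4,
# using  eps * (r1 + r2*eps) = (1 - r2) + {r1 + r2*Tr(eps)} * eps.
tr_eps = 4

def act(r1, r2):
    u = r1 + r2 * tr_eps
    return (1 - r2, u - floor(u))

r0 = (Fraction(1, 7), Fraction(1, 7))
cycle, r = [r0], act(*r0)
while r != r0:
    cycle.append(r)
    r = act(*r)
print(len(cycle))    # 8, matching the period length ell_{F,7} of 1/7 in base eps
```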
### Epsilon Expansions
A base-\(\varepsilon\) expansion is an analogue to the usual decimal expansion. The base-\(\varepsilon\) expansion of any element \(\alpha\in F\) is computed in the following way. Let \(n\coloneqq\lfloor\log_{\varepsilon}(\alpha)\rfloor.\) Then we have
\[\alpha=a_{n}\varepsilon^{n}+a_{n-1}\varepsilon^{n-1}+\ldots+a_{0}+a_{-1} \varepsilon^{-1}+\ldots,\]
where
\[a_{n}\coloneqq\lfloor\alpha/\varepsilon^{n}\rfloor,a_{n-1}\coloneqq\lfloor( \alpha-a_{n}\varepsilon^{n})/\varepsilon^{n-1}\rfloor,\ldots,a_{i}\coloneqq \lfloor(\alpha-a_{n}\varepsilon^{n}-\ldots-a_{i+1}\varepsilon^{i+1})/ \varepsilon^{i}\rfloor,\ldots\]
We observe that \(\varepsilon\) is an algebraic integer which is real since \(F\) is real quadratic, and that \(\varepsilon\) must be \(>1\) since it is a totally positive fundamental unit. Moreover, \(\varepsilon\) must have Galois conjugate with absolute value \(<1\) since \(F\) is a real quadratic field and \(\varepsilon\) has norm \(1\). Thus, \(\varepsilon\) is a Pisot number by definition, and by consequence, Theorem 3.1 in [9] shows that any element of \(R\) has an eventually periodic base-\(\varepsilon\) expansion.
For some \(\alpha\in F\) whose base \(\varepsilon\) expansion can be written as
\[\alpha=a_{n}\varepsilon^{n}+\ldots+a_{0}+a_{-1}\varepsilon^{-1}+\ldots+a_{-k} \varepsilon^{-k}+\overline{a_{-k-1}\varepsilon^{-k-1}+\ldots+a_{-k-P_{\alpha}} \varepsilon^{-k-P_{\alpha}}},\]
we call \(P_{\alpha}\) the _period length_ of the base-\(\varepsilon\) expansion of \(\alpha\). Additionally, we will call the ordered set
\[\{a_{-k-1},\ldots,a_{-k-P_{\alpha}}\}\]
the _period set_ of the base-\(\varepsilon\) expansion of \(\alpha\). We can further observe that any element of \(F\) whose base-\(\varepsilon\) expansion is finite is an element of \(\mathbb{Z}[\varepsilon]\), by the following argument.
**Lemma 4.2**.: _If \(\alpha\in F\) has a finite base-\(\varepsilon\) expansion, then \(\alpha\in\mathbb{Z}[\varepsilon]\)._
Proof.: If \(\alpha\) has a finite base-\(\varepsilon\) expansion, we can express it as
\[\alpha=\sum_{i=K_{1}}^{K_{2}}m_{i}\varepsilon^{i}\]
where \(K_{1},K_{2}\) are integers. Using that \(\varepsilon^{2}=\operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon)\varepsilon-1\) and that \(\varepsilon^{-1}=\operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon)-\varepsilon\), we can perform the following replacement on any term \(m_{k}\varepsilon^{k}\) where \(k\neq 0\) or \(1\):
\[m_{k}\varepsilon^{k}=\begin{cases}m_{k}(\operatorname{Tr}_{F/\mathbb{Q}}( \varepsilon)\varepsilon-1)^{k/2}&\text{if $k$ is an even positive integer}\\ m_{k}(\operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon)\varepsilon-1)^{(k-1)/2} \varepsilon&\text{if $k$ is an odd positive integer}\\ m_{k}(\operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon)-\varepsilon)^{-k}&\text{if $k$ is a negative integer}.\end{cases}\]
The third equality implies that any negative integer power of \(\varepsilon\) can be converted to a \(\mathbb{Z}\)-linear combination of nonnegative integer powers of \(\varepsilon\). Therefore, it suffices to show that a linear combination of nonnegative integer powers of \(\varepsilon\) can be expressed as an element of \(\mathbb{Z}[\varepsilon]\). The first two equalities guarantee that any power \(\varepsilon^{k}\) with \(k\geq 2\) can be rewritten in terms of strictly lower powers of \(\varepsilon\). Thus by induction, any finite linear combination of (possibly negative) powers of \(\varepsilon\) can be expressed as an element of \(\mathbb{Z}[\varepsilon]\).
**Proposition 4.3**.: _The repeating part in the base-\(\varepsilon\) expansion of any two elements in the same Shintani cycle is the same._
To prove this lemma, we require some preliminaries. Consider some \(r\in R\), where \(r=r_{1}+r_{2}\varepsilon\), and recall that \(0<r_{1}\leq 1\) and \(0\leq r_{2}<1\), \(r_{1},r_{2}\in\mathbb{Q}\). Hence the action of \(\varepsilon\) on \(R\) amounts to:
\[\varepsilon*r=\varepsilon\cdot(r_{1}+r_{2}\varepsilon)+z_{1}+z_{2}\varepsilon\]
where \(z_{1}+z_{2}\varepsilon\) is the unique element in \(\mathbb{Z}[\varepsilon]\) such that \(\varepsilon\cdot(r_{1}+r_{2}\varepsilon)+z_{1}+z_{2}\varepsilon\in R\). We can explicitly compute bounds for \(z_{1}\) and \(z_{2}\):
**Lemma 4.4**.: _If \(\varepsilon*(r_{1}+r_{2}\varepsilon)=\varepsilon\cdot(r_{1}+r_{2}\varepsilon) +z_{1}+z_{2}\varepsilon\), then_
\[z_{1}=1,\quad\text{and}\quad z_{2}=-\lfloor r_{1}+r_{2}\mathrm{Tr}_{F/ \mathbb{Q}}(\varepsilon)\rfloor.\]
Proof.: The minimal polynomial of \(\varepsilon\) is
\[x^{2}-\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon)x+\mathrm{Norm}_{F/\mathbb{Q}}( \varepsilon)=x^{2}-\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon)x+1,\]
and thus
\[\varepsilon^{2}=\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon)\varepsilon-1.\]
Consider
\[\varepsilon(r_{1}+r_{2}\varepsilon)=r_{1}\varepsilon+r_{2}\varepsilon^{2}=-r_{ 2}+(r_{1}+r_{2}\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon))\varepsilon.\]
To find \(\varepsilon*(r_{1}+r_{2}\varepsilon)\), we must shift \(\varepsilon(r_{1}+r_{2}\varepsilon)\) by some \(z_{1}+z_{2}\varepsilon\in\mathbb{Z}[\varepsilon]\) such that
\[-r_{2}+z_{1}\in(0,1],\quad\text{and}\quad r_{1}+r_{2}\mathrm{Tr}_{F/\mathbb{Q} }(\varepsilon)+z_{2}\in[0,1).\]
It is immediately apparent that \(z_{2}=-\lfloor r_{1}+r_{2}\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon)\rfloor.\) Further, since \(r_{1}+r_{2}\varepsilon\in R\), we have that \(r_{2}\in[0,1)\), so we see that \(z_{1}=1\).
Note that the above lemma and our bounds on \(r_{1}\) and \(r_{2}\) imply that \(-\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon)\leq z_{2}\leq 0\). Before we show that the repeating part of the base-\(\varepsilon\) expansion of elements in the same Shintani cycle is the same, we require one more fact about \(\varepsilon\), which we now prove.
**Lemma 4.5**.: _We have that \(\lceil\varepsilon\rceil=\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon)\)._
Proof.: Let \(\varepsilon=s+t\sqrt{d}\). We start by showing \(\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon)\geq\lceil\varepsilon\rceil\). Observe that
\[\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon)=\varepsilon+\frac{1}{\varepsilon}\implies \mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon)>\varepsilon\implies\mathrm{Tr}_{F/ \mathbb{Q}}(\varepsilon)\geq\lceil\varepsilon\rceil\,,\]
because the trace of an algebraic integer is always an element of \(\mathbb{Z}\).
Now we will show that \(\operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon)\leq\lceil\varepsilon\rceil\). Assume for the sake of contradiction that \(\operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon)\geq\lceil\varepsilon\rceil+1\). Then we have that
\[\operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon)\geq\lceil\varepsilon\rceil+1\implies \frac{\varepsilon^{2}+1}{\varepsilon}\geq\lceil\varepsilon\rceil+1\implies 1-\varepsilon\geq\varepsilon\lceil\varepsilon\rceil-\varepsilon^{2}\geq 0 \implies 1\geq\varepsilon.\]
However, by definition, \(\varepsilon>1\), so we see that
\[\operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon)<\lceil\varepsilon\rceil+1\implies \operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon)\leq\lceil\varepsilon\rceil.\]
Now we proceed to prove that the repeating part in the base-\(\varepsilon\) expansion of any two elements in the same Shintani cycle is the same.
Proof of Proposition 4.3.: Consider some element \(r\in R\), with base-\(\varepsilon\) expansion
\[r=a_{1}\varepsilon+a_{0}+a_{-1}\varepsilon^{-1}+a_{-2}\varepsilon^{-2}+a_{-3} \varepsilon^{-3}+\ldots\]
Note that since \(r\in R\), \(\lfloor\log_{\varepsilon}(r)\rfloor=0\) or \(1\), so the highest power of \(\varepsilon\) appearing in the base-\(\varepsilon\) expansion of \(r\) is at most \(1\). Given this base-\(\varepsilon\) expansion of \(r\), we have that
\[\varepsilon*r=\varepsilon\cdot r+z_{2}\varepsilon+1=a_{1}\varepsilon^{2}+(a_{ 0}+z_{2})\varepsilon^{1}+(a_{-1}+1)\varepsilon^{0}+a_{-2}\varepsilon^{-1}+a_ {-3}\varepsilon^{-2}+\ldots \tag{4.1}\]
Recall that in a base-\(\varepsilon\) expansion, each digit (in this case \(a_{i}\) for \(i\in\mathbb{Z}\)) must be an element of the set \(A\coloneqq\{0,1,\ldots,\lfloor\varepsilon\rfloor\}\). We now consider the following two cases: in Case 1, both \(a_{0}+z_{2}\) and \(a_{-1}+1\) are in \(A\); in Case 2, one or both of \(a_{0}+z_{2}\) and \(a_{-1}+1\) is not in \(A\).
**Case 1.** In Case 1, the expression in (4.1) is already a valid base-\(\varepsilon\) expansion of \(\varepsilon*r\). We can see that only a finite number of digits differ between the base-\(\varepsilon\) expansion of \(\varepsilon*r\) and the base-\(\varepsilon\) expansion of \(r\), so in this case the repeating part of \(\varepsilon*r\) must be the same as \(r\).
**Case 2.** Now we address Case 2, which we can split into Case 2.1 and Case 2.2. In Case 2.1, \(a_{0}+z_{2}\not\in A\); in Case 2.2, \(a_{-1}+1\not\in A\).
**Case 2.1.** Assume that \(a_{0}+z_{2}\not\in A\). Since \(a_{0}\) is a digit in the base-\(\varepsilon\) expansion of \(r\), \(0\leq a_{0}\leq\lfloor\varepsilon\rfloor\) by definition. Additionally, \(-\operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon)\leq z_{2}\leq 0\) by Lemma 4.4. Thus it is always true that \(-\operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon)\leq a_{0}+z_{2}\leq\operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon)\). Therefore if \(a_{0}+z_{2}\not\in A\), it must be that \(-\operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon)\leq a_{0}+z_{2}\leq-1\). Then we have that \(0\leq a_{0}+z_{2}+\operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon)\leq\operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon)-1\), so \(a_{0}+z_{2}+\operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon)\in A\) and is hence an acceptable digit. Since \(\operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon)=\varepsilon+\varepsilon^{-1}\), we can rewrite (4.1) as
\[\varepsilon*r =(a_{1}-1)\varepsilon^{2}+(a_{0}+z_{2}+\operatorname{Tr}_{F/ \mathbb{Q}}(\varepsilon))\varepsilon+(a_{-1}+1-1)+a_{-2}\varepsilon^{-1}+a_{-3 }\varepsilon^{-2}+\ldots\] \[=(a_{1}-1)\varepsilon^{2}+(a_{0}+z_{2}+\operatorname{Tr}_{F/ \mathbb{Q}}(\varepsilon))\varepsilon+a_{-1}+a_{-2}\varepsilon^{-1}+a_{-3} \varepsilon^{-2}+\ldots \tag{4.2}\]
Since \(0\leq a_{0}+z_{2}+\operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon)<\operatorname {Tr}_{F/\mathbb{Q}}(\varepsilon)\), the above base-\(\varepsilon\) expansion is valid as long as \(0\leq a_{1}-1\leq\lfloor\varepsilon\rfloor\). Since \(0\leq a_{1}\leq\lfloor\varepsilon\rfloor\), we know \(-1\leq a_{1}-1\leq\lfloor\varepsilon\rfloor-1\). Thus unless \(a_{1}-1=-1\), it must be true that \(0\leq a_{1}-1\leq\lfloor\varepsilon\rfloor\). Let us assume for the sake of contradiction that \(a_{1}-1=-1\). If we let
\[\alpha =\varepsilon^{2}\text{ and }\] \[\beta =(a_{0}+z_{2}+\operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon)) \varepsilon+a_{-1}+a_{-2}\varepsilon^{-1}+a_{-3}\varepsilon^{-2}+\ldots,\]
then (4.2) implies that \(\varepsilon*r=-\alpha+\beta\). However, it follows directly from the definition of a base-\(\varepsilon\) expansion and the fact that \(\varepsilon\) is a Pisot number that \(\alpha>\beta\). Thus \(\varepsilon*r=-\alpha+\beta<0\), which contradicts the fact that \(\varepsilon*r\in R\). Thus we have that \(0\leq a_{1}-1\leq\lfloor\varepsilon\rfloor\), so (4.2) is a valid base-\(\varepsilon\) expansion of \(\varepsilon*r\). We can see that only a finite number of digits differ between the base-\(\varepsilon\) expansion of \(\varepsilon*r\) and the base-\(\varepsilon\) expansion of \(r\). Therefore the repeating part of \(\varepsilon*r\) must be the same as \(r\).
**Case 2.2** Assume \(a_{-1}+1\not\in A\). By Case 2.1, we may assume without loss of generality that \(a_{0}+z_{2}\in A\). Since \(0\leq a_{-1}\leq\lfloor\varepsilon\rfloor\), we know that \(1\leq a_{-1}+1\leq\lfloor\varepsilon\rfloor+1\). Thus if \(a_{-1}+1\not\in A\)
it must be that \(a_{-1}+1=\lfloor\varepsilon\rfloor+1\), so \(a_{-1}=\lfloor\varepsilon\rfloor\). Since \(\lceil\varepsilon\rceil=\operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon)\) by Lemma 4.5, we have \(a_{-1}+1=\operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon)\). Again using that \(\operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon)=\varepsilon+\varepsilon^{-1}\), we can rewrite (4.1) as
\[\varepsilon*r=a_{1}\varepsilon^{2}+(a_{0}+z_{2}+1)\varepsilon+(a_{-2}+1) \varepsilon^{-1}+a_{-3}\varepsilon^{-2}+\ldots \tag{4.3}\]
Since \(0\leq a_{0}+z_{2}\leq\lfloor\varepsilon\rfloor\) by assumption, if \(a_{0}+z_{2}+1\not\in A\), then \(a_{0}+z_{2}+1=\lfloor\varepsilon\rfloor+1\). This would imply that \(\varepsilon*r>\varepsilon+1\), which contradicts the fact that \(\varepsilon*r\in R\). Thus it must be that \(0\leq a_{0}+z_{2}+1\leq\lfloor\varepsilon\rfloor\). With this, we see that if \(a_{-2}+1\in A\), then 4.3 is a valid base-\(\varepsilon\) expansion of \(\varepsilon*r\). Otherwise, if \(a_{-2}+1\not\in A\), then since \(a_{-2}\in A\), it must be that \(a_{-2}=\lfloor\varepsilon\rfloor\), so \(a_{-2}+1=\lceil\varepsilon\rceil=\operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon)\). Using that \(\operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon)=\varepsilon+\varepsilon^{-1}\), we can rewrite 4.3 as
\[\varepsilon*r=a_{1}\varepsilon^{2}+(a_{0}+z_{2}+1)\varepsilon+1+(a_{-3}+1) \varepsilon^{-2}+\ldots \tag{4.4}\]
We note that by the same argument used before, if \(a_{i}+1\not\in A\) for any \(i\in\mathbb{Z}\), then \(a_{i}=\lfloor\varepsilon\rfloor=\operatorname{Tr}_{F/\mathbb{Q}}(\varepsilon )-1\). Thus if we let \(j\) be the smallest positive integer such that \(a_{-j}\neq\lfloor\varepsilon\rfloor\), then continuing in the same manner, we see that
\[\varepsilon*r=a_{1}\varepsilon^{2}+(a_{0}+z_{2}+1)\varepsilon+1+\varepsilon^{-1}+\varepsilon^{-2}+\ldots+\varepsilon^{-j+3}+(a_{-j}+1)\varepsilon^{-j+1}+a_{-j-1}\varepsilon^{-j}+\ldots \tag{4.5}\]
Since the base-\(\varepsilon\) expansion of \(r\) must be finite or periodic, it is certainly possible to choose such an index \(j\). We can assume that \(r\) does not have repeating part \(\overline{\lfloor\varepsilon\rfloor}\), since \(\lfloor\varepsilon\rfloor\varepsilon^{i}+\lfloor\varepsilon\rfloor\varepsilon^ {i-1}+\lfloor\varepsilon\rfloor\varepsilon^{i-2}+\ldots=\varepsilon^{i+1}\) for any \(i\in\mathbb{Z}\). Thus in Case 2.2, we see that only a finite number of digits differ between the base-\(\varepsilon\) expansion of \(\varepsilon*r\) and the base-\(\varepsilon\) expansion of \(r\). Therefore the repeating part of \(\varepsilon*r\) must be the same as \(r\) in this case.
Now we have seen that in all cases, the repeating part of the base-\(\varepsilon\) expansion of \(\varepsilon*r\) is the same as that of \(r\), which finishes the proof.
In many of the results which follow, it will prove useful for us to note the following fact about the map \(\pi\) as defined in Proposition 2.7.
**Lemma 4.6**.: _The map \(\pi\) is equivariant under the action of \(\langle\varepsilon\rangle\)._
Proof.: Since \(\frac{1}{p}\mathcal{O}_{F}\) is an ideal of \(\mathcal{O}_{F}\), it is also a \(\mathcal{O}_{F}\)-module. Moreover, \(\mathcal{O}_{F}\) is trivially an \(\mathcal{O}_{F}\)-module, and \(p\mathcal{O}_{F}\) is a submodule of \(\mathcal{O}_{F}\) since \(p\mathcal{O}_{F}\) is an ideal of \(\mathcal{O}_{F}\). Thus the map
\[\phi:\frac{1}{p}\mathcal{O}_{F}\to\mathcal{O}_{F}/p\mathcal{O}_{F}\]
defined by multiplication by \(p\) is an \(\mathcal{O}_{F}\)-module homomorphism. The kernel of this map is \(\mathcal{O}_{F}\), so by the first isomorphism theorem, \(\frac{1}{p}\mathcal{O}_{F}/\mathcal{O}_{F}\stackrel{{\sim}}{{\to}} \mathcal{O}_{F}/p\mathcal{O}_{F}\) is an isomorphism of \(\mathcal{O}_{F}\)-modules. Since \(\mathbb{Z}[\varepsilon]\) is a subring of \(\mathcal{O}_{F}\), by restriction of scalars, \(\frac{1}{p}\mathcal{O}_{F}/\mathcal{O}_{F}\cong\mathcal{O}_{F}/p\mathcal{O}_{F}\) is also an isomorphism of \(\mathbb{Z}[\varepsilon]\)-modules.
We can also observe that since \(\mathcal{O}_{F}\) is a submodule of \(\frac{1}{p}\mathcal{O}_{F}\), the projection map
\[\frac{1}{p}\mathcal{O}_{F}/\mathbb{Z}[\varepsilon]\to\frac{1}{p}\mathcal{O}_{ F}/\mathbb{Z}[\varepsilon]\Big{/}\mathcal{O}_{F}/\mathbb{Z}[\varepsilon]\]
is a surjective \(\mathbb{Z}[\varepsilon]\)-module homomorphism. By the third isomorphism theorem, we further have that
\[\frac{1}{p}\mathcal{O}_{F}/\mathbb{Z}[\varepsilon]\Big{/}\mathcal{O}_{F}/ \mathbb{Z}[\varepsilon]\cong\frac{1}{p}\mathcal{O}_{F}/\mathcal{O}_{F},\]
which implies
\[\psi:\frac{1}{p}\mathcal{O}_{F}/\mathbb{Z}[\varepsilon]\to\frac{1}{p}\mathcal{ O}_{F}/\mathcal{O}_{F}\]
is a surjective \(\mathbb{Z}[\varepsilon]\)-module homomorphism. Since \(R\subset\frac{1}{p}\mathcal{O}_{F}\) and \(R\) constitutes a complete set of coset representatives for \(\frac{1}{p}\mathcal{O}_{F}/\mathbb{Z}[\varepsilon]\) (see [1], Proposition 4.1), the identity map
\[\iota:R\to\frac{1}{p}\mathcal{O}_{F}/\mathbb{Z}[\varepsilon]\]
is a \(\mathbb{Z}[\varepsilon]\)-module isomorphism. Now we can see that since \(\pi=\phi\circ\psi\circ\iota\), \(\pi\) is a surjective \(\mathbb{Z}[\varepsilon]\)-module homomorphism. Thus \(\pi\) is equivariant under the action of \(\langle\varepsilon\rangle\).
With this result, we may now examine more closely the action of \(\varepsilon\) on \(R.\) Namely, we can deduce the following fact about nontrivial Shintani cycles.
**Lemma 4.7**.: _All nontrivial Shintani cycles have length equal to the multiplicative order of \(\varepsilon+p\mathcal{O}_{F}\) in \((\mathcal{O}_{F}/p\mathcal{O}_{F})^{\times}\)._
Proof.: Let \(M\) denote the multiplicative order of \(\varepsilon+p\mathcal{O}_{F}\) in \(\mathcal{O}_{F}/p\mathcal{O}_{F},\) and consider \(r\in R-\mathcal{O}_{F}.\) We will show that \(|C_{r}|=M.\) Suppose that \(\varepsilon^{m}\ast r=r\) for some \(m\in\mathbb{Z}^{+}.\) Then by Lemma 4.6,
\[\pi(r)=\pi(\varepsilon^{m}\ast r)=(\varepsilon^{m}+p\mathcal{O}_{F})\pi(r) \implies(\varepsilon^{m}-1+p\mathcal{O}_{F})\pi(r)=0.\]
By assumption \(r\not\in\mathcal{O}_{F},\) so \(\pi(r)\neq 0\) by Lemma 2.8. Since \(\mathcal{O}_{F}/p\mathcal{O}_{F}\) is a field, it must be that \((\varepsilon^{m}-1+p\mathcal{O}_{F})=0.\) Therefore \(\varepsilon^{m}\equiv 1\mod p\mathcal{O}_{F},\) so \(M|m.\) As a consequence, if we denote the stabilizer subgroup associated to \(r\) under the action of \(\varepsilon\) as \(\langle\varepsilon\rangle_{r},\) then \(\langle\varepsilon\rangle_{r}\subset\langle\varepsilon^{M}\rangle.\) Moreover, we note that since \(\langle\varepsilon\rangle_{r}\) is a subgroup of \(\langle\varepsilon^{M}\rangle,\) it must be that \(\langle\varepsilon\rangle_{r}=\langle\varepsilon^{M^{\prime}}\rangle\) for some \(M^{\prime}\) such that \(M|M^{\prime}.\) Moreover, we also have that
\[\varepsilon^{M^{\prime}}\ast r=r\implies(\varepsilon^{M^{\prime}}-1+p\mathcal{ O}_{F})\pi(r)=0\]
where \(\pi(r)\neq 0,\) so that \(\varepsilon^{M^{\prime}}\equiv 1\mod p\mathcal{O}_{F}.\) Thus it must also be that \(M^{\prime}|M,\) so \(M^{\prime}=M,\) and hence \(\langle\varepsilon\rangle_{r}=\langle\varepsilon^{M}\rangle.\) In other words, \(\langle\varepsilon^{M}\rangle\) is the stabilizer subgroup of \(r\) for all \(r\in R-\mathcal{O}_{F}\). By the orbit-stabilizer theorem, we then have that \(|C_{r}|=[\langle\varepsilon\rangle:\langle\varepsilon^{M}\rangle]=M.\)
In the lemma which follows, we equate \(M\) with the minimal period length of the base-\(\varepsilon\) expansion of \(r\) for all \(r\in R-\mathcal{O}_{F}.\) This fact, combined with Lemma 4.7, will then imply that for all \(r\in R-\mathcal{O}_{F},\)\(|C_{r}|=P_{r},\) where \(P_{r}\) denotes the minimal period length of the base-\(\varepsilon\) expansion of \(r.\)
**Lemma 4.8**.: _For any \(r\in R-\mathcal{O}_{F}\), the minimal period length of the base-\(\varepsilon\) expansion of \(r\) is equal to the multiplicative order of \(\varepsilon+p\mathcal{O}_{F}\) in \((\mathcal{O}_{F}/p\mathcal{O}_{F})^{\times}\)._
Proof.: Consider an element \(r=r_{1}+r_{2}\varepsilon\in R-\mathcal{O}_{F}.\) We start by showing that \(M|P_{r}\). As mentioned at the beginning of Section 4.2, the base-\(\varepsilon\) expansion of \(r\) is always eventually periodic, say
\[r=\sum_{i=-1}^{N-1}a_{i}^{\prime}\varepsilon^{-i}+\varepsilon^{-N}\sum_{j=0}^{ \infty}\big{(}a_{1}\varepsilon^{-jP_{r}}+a_{2}\varepsilon^{-jP_{r}-1}+\ldots+ a_{P_{r}}\varepsilon^{-jP_{r}-P_{r}+1}\big{)}. \tag{4.6}\]
We remind the reader that since \(r\in R,\lfloor\log_{\varepsilon}(r)\rfloor=0\text{ or }1,\) so the highest power of \(\varepsilon\) in (4.6) is \(1.\) Multiplying (4.6) by \(\varepsilon^{P_{r}},\) we obtain
\[\varepsilon^{P_{r}}r=\sum_{i=-1}^{N-1}a_{i}^{\prime}\varepsilon^{-i+P_{r}}+ \varepsilon^{-N}\sum_{j=0}^{\infty}\Big{(}a_{1}\varepsilon^{-(j-1)P_{r}}+a_{2} \varepsilon^{-(j-1)P_{r}-1}+\ldots+a_{P_{r}}\varepsilon^{-(j-1)P_{r}-P_{r}+1} \Big{)}.\]
Reindexing (4.6), we get
\[r=\sum_{i=-1}^{N-1}a_{i}^{\prime}\varepsilon^{-i}+\varepsilon^{-N}\sum_{j=1}^{ \infty}\Big{(}a_{1}\varepsilon^{-(j-1)P_{r}}+a_{2}\varepsilon^{-(j-1)P_{r}-1}+ \ldots+a_{P_{r}}\varepsilon^{-(j-1)P_{r}-P_{r}+1}\Big{)}.\]
And thus
\[\varepsilon^{P_{r}}r-r=\left(\sum_{i=-1}^{N-1}a^{\prime}_{i}\varepsilon^{-i+P_{r}} -a^{\prime}_{i}\varepsilon^{-i}\right)+\varepsilon^{-N}\left(a_{1}\varepsilon^ {P_{r}}+a_{2}\varepsilon^{P_{r}-1}+\ldots+a_{P_{r}}\varepsilon\right).\]
Let
\[\alpha=\sum_{i=-1}^{N-1}a^{\prime}_{i}\varepsilon^{-i+P_{r}},\quad\text{and} \quad\beta=-\sum_{i=-1}^{N-1}a^{\prime}_{i}\varepsilon^{-i},\quad\text{and} \quad\gamma=\varepsilon^{-N}\left(a_{1}\varepsilon^{P_{r}}+a_{2}\varepsilon^ {P_{r}-1}+\ldots+a_{P_{r}}\varepsilon\right).\]
Note that, because \(\alpha\), \(\beta\), and \(\gamma\) are finite integer linear combinations of powers of \(\varepsilon\), the argument of Lemma 4.2 gives \(\alpha,\beta,\gamma\in\mathbb{Z}[\varepsilon]\), and thus
\[\alpha+\beta+\gamma=\varepsilon^{P_{r}}r-r\in\mathbb{Z}[\varepsilon].\]
By definition, we know that
\[\varepsilon^{P_{r}}*r=\varepsilon^{P_{r}}r+z\]
for some \(z\in\mathbb{Z}[\varepsilon]\). Thus, using Lemmas 2.8 and 4.6, we have
\[\varepsilon^{P_{r}}*r-r-z=\varepsilon^{P_{r}}r-r\implies\pi(\varepsilon^{P_{r }}*r-r-z)=\pi(\varepsilon^{P_{r}}r-r)\implies(\varepsilon^{P_{r}}+p\mathcal{O }_{F}-1)\pi(r)=0.\]
Since \(\mathcal{O}_{F}/p\mathcal{O}_{F}\) is a field and \(\pi(r)\neq 0\) (as \(r\not\in\mathcal{O}_{F}\)), we have that
\[\varepsilon^{P_{r}}-1+p\mathcal{O}_{F}=0\implies\varepsilon^{P_{r}}\equiv 1 \pmod{p\mathcal{O}_{F}}.\]
for any \(r\in R-\mathcal{O}_{F}\). Recall that \(M\) is the multiplicative order of \(\varepsilon\) in \(\mathcal{O}_{F}/p\mathcal{O}_{F}\), so we see that \(M|P_{r}\).
Next, we show that \(P_{r}|M\). Since both \(r\) and \(\varepsilon*r\) have periodic base-\(\varepsilon\) expansions, we let \(N_{1}\) represent the smallest integer such that the repeating part of the base-\(\varepsilon\) expansion of \(r\) begins in the \(\varepsilon^{-N_{1}}\) place. Similarly, let \(N_{2}\) represent the smallest integer such that the repeating part of the base-\(\varepsilon\) expansion of \(\varepsilon*r\) begins in the \(\varepsilon^{-N_{2}}\) place.
Let \(S=\max(N_{1},N_{2})\), so that the base-\(\varepsilon\) expansions of both \(r\) and \(\varepsilon*r\) are periodic for all indices greater than \(S\). Thus, the digits in the \(\varepsilon^{-S},\varepsilon^{-S-1},\ldots,\varepsilon^{-S-P_{r}+1}\) places of the base-\(\varepsilon\) expansion of \(r\) constitute a full period, and we let the ordered set
\[\{x_{1},x_{2},\ldots,x_{P_{r}}\}\]
represent the period set of \(r\). As shown in the proof of Proposition 4.3, the operation \(\varepsilon*r\) shifts the digits within the repeating part of the base-\(\varepsilon\) expansion of \(r\) to the left by one index. In other words, the period set of \(\varepsilon*r\) is the ordered set
\[\{x_{2},\ldots,x_{P_{r}},x_{1}\}.\]
Note that moving between the period set of \(r\) and the period set of \(\varepsilon*r\) can be represented by applying the permutation
\[\tau=(1\ \ 2\ \cdots\ \ P_{r})\in S_{P_{r}}\]
to the period set of \(r.\) Additionally, we have that \(r=\varepsilon^{M}*r\), so the period sets of \(r\) and \(\varepsilon^{M}*r\) must be equal. Thus,
\[\tau^{M}\{x_{1},x_{2},\ldots,x_{P_{r}}\}=\{x_{1},x_{2},\ldots,x_{P_{r}}\},\]
which implies that \(\tau^{M}\) is the identity permutation. Since the order of \(\tau\in S_{P_{r}}\) is \(P_{r}\), we have that \(P_{r}|M\). So, we see that \(P_{r}=M\).
In our final lemma before we prove Theorem 1.2, we show that for any nontrivial Shintani cycle, the sum of the coefficients \(r_{1}^{\prime}\) and \(r_{2}^{\prime}\), taken over all elements \(r^{\prime}=r_{1}^{\prime}+r_{2}^{\prime}\varepsilon\) of the cycle, is a constant; in fact, this sum equals \(M\).
**Lemma 4.9**.: _For any \(r\in R-\mathcal{O}_{F}\), write each \(r^{\prime}\in C_{r}\) as \(r^{\prime}=r_{1}^{\prime}+r_{2}^{\prime}\varepsilon\). Then,_
\[\sum_{r^{\prime}\in C_{r}}\left(r_{1}^{\prime}+r_{2}^{\prime}\right)=M.\]
Proof.: Let \(\varepsilon^{i}*r\coloneqq r_{1}(i)+r_{2}(i)\varepsilon,\) so \(r_{1}(i+1)+r_{2}(i+1)\varepsilon=\varepsilon*(r_{1}(i)+r_{2}(i)\varepsilon).\) Recall that
\[\varepsilon*(r_{1}(i)+r_{2}(i)\varepsilon)=(1-r_{2}(i))+\{r_{1}(i)+\mathrm{Tr }_{F/\mathbb{Q}}(\varepsilon)r_{2}(i)\}\varepsilon\]
by Lemma 4.4, where \(\{\,\cdot\,\}\) denotes the fractional part. Comparing the coefficients of \(1\), we see that \(r_{1}(i+1)+r_{2}(i)=1\) for all \(i\in\mathbb{Z}\). Moreover, since \(M=|C_{r}|\) for all \(r\in R-\mathcal{O}_{F}\) by Lemma 4.7, we have that \(r_{1}(m)=r_{1}(m+M)\) for any integer \(m\). Using these facts, we see that
\[\sum_{r^{\prime}\in C_{r}}\left(r_{1}^{\prime}+r_{2}^{\prime}\right) =\sum_{i=1}^{M}\left(r_{1}(i)+r_{2}(i)\right)=r_{1}(1)+r_{2}(M)+ \sum_{i=1}^{M-1}r_{1}(i+1)+\sum_{j=1}^{M-1}r_{2}(j)\] \[=r_{1}(M+1)+r_{2}(M)+\sum_{i=1}^{M-1}\left(r_{1}(i+1)+r_{2}(i) \right)=1+(M-1)=M.\]
### Proof of Theorem 1.2
Since \(\langle\varepsilon\rangle\) acts on \(R\), \(R\) decomposes into a disjoint union of Shintani cycles under this action. Letting \(\mathcal{L}\) denote a complete reduced set of Shintani cycle representatives for \(R\), recalling that \(C_{r}\) denotes the Shintani cycle of \(r\), and writing \(\varepsilon^{i}*r=r_{1}(i)+r_{2}(i)\varepsilon\) as in the proof of Lemma 4.9, we can rewrite Shintani's formula as follows:
\[h_{K} =\frac{1}{2}\sum_{r\in R}\chi_{K/F}(rp\mathcal{O}_{F})\sum_{\begin{subarray}{c}0\leq l_{1},l_{2}\leq 2\\ l_{1}+l_{2}=2\end{subarray}}\frac{B_{l_{1}}(r_{1})}{l_{1}!}\frac{B_{l_{2}}(r_{2})}{l_{2}!}\mathrm{Tr}_{F/\mathbb{Q}}\left(\varepsilon^{l_{2}-1}\right)\] \[=\frac{1}{2}\sum_{r\in\mathcal{L}}\sum_{i=1}^{|C_{r}|}\chi_{K/F}(\varepsilon^{i}*r\cdot p\mathcal{O}_{F})\sum_{\begin{subarray}{c}0\leq l_{1},l_{2}\leq 2\\ l_{1}+l_{2}=2\end{subarray}}\frac{B_{l_{1}}(r_{1}(i))}{l_{1}!}\frac{B_{l_{2}}(r_{2}(i))}{l_{2}!}\mathrm{Tr}_{F/\mathbb{Q}}\left(\varepsilon^{l_{2}-1}\right) \tag{4.7}\]
First, we show that \(\chi_{K/F}(r^{\prime}\cdot p\mathcal{O}_{F})\) is constant for all \(r^{\prime}\in C_{r}\). By definition of \(\varepsilon*r\), we see that \(\varepsilon*r=\varepsilon r+z\) for some \(z\in\mathcal{O}_{F}.\) Thus
\[\chi_{K/F}(\varepsilon^{i}*r\cdot p\mathcal{O}_{F})=\chi_{K/F}((\varepsilon^{ i}r+z)\cdot p\mathcal{O}_{F})=\chi_{K/F}(\varepsilon^{i}rp\mathcal{O}_{F}+zp \mathcal{O}_{F}).\]
Since \(zp\mathcal{O}_{F}\subset p\mathcal{O}_{F},\)\(p\mathcal{O}_{F}|zp\mathcal{O}_{F},\) and since \(p\mathcal{O}_{F}\) is the conductor of this Hecke character, we see that
\[\chi_{K/F}(\varepsilon^{i}*r\cdot p\mathcal{O}_{F})=\chi_{K/F}(\varepsilon^{ i}rp\mathcal{O}_{F}).\]
Additionally, \(\varepsilon\) is a unit, so we know
\[r\mathcal{O}_{F}=\varepsilon^{i}r\mathcal{O}_{F}\]
for any integer \(i.\) Therefore
\[\chi_{K/F}(\varepsilon^{i}*r\cdot p\mathcal{O}_{F})=\chi_{K/F}(\varepsilon^{ i}rp\mathcal{O}_{F})=\chi_{K/F}(rp\mathcal{O}_{F}),\]
and thus the Hecke character value in (4.7) is constant throughout each Shintani cycle.
By Lemma 4.7, all nontrivial Shintani cycles in \(R\) contain the same number of elements, namely \(M\). Since \(1/p\) is an element of \(R-\mathcal{O}_{F}\), Lemma 4.8 shows that the period length \(\ell_{F,p}\) of the base-\(\varepsilon\) expansion of \(1/p\) equals \(M\), and hence equals the length of each nontrivial cycle.
**Remark**.: Note that, for all \(r\in R\cap\mathcal{O}_{F},\) the Hecke character \(\chi_{K/F}(rp\mathcal{O}_{F})\) evaluates to \(0\), so elements \(r\in R\cap\mathcal{O}_{F}\) are all weighted by a factor of \(0\) in (4.7). Hence, we can ignore them in our calculations.
Using these facts, we obtain
\[h_{K}=\frac{1}{2}\sum_{i=1}^{\ell_{F,p}}\sum_{r\in\mathcal{L}}\chi_{K/F}(rp\mathcal{O}_{F})\sum_{\begin{subarray}{c}0\leq l_{1},l_{2}\leq 2\\ l_{1}+l_{2}=2\end{subarray}}\frac{B_{l_{1}}(r_{1}(i))}{l_{1}!}\frac{B_{l_{2}}(r_{2}(i))}{l_{2}!}\mathrm{Tr}_{F/\mathbb{Q}}\left(\varepsilon^{l_{2}-1}\right). \tag{4.8}\]
As shown in [1], we have that
\[\sum_{r\in R}\chi_{K/F}(rp{\mathcal{O}}_{F})=\sum_{r\in R\cap{\mathcal{O}}_{F} }\chi_{K/F}(rp{\mathcal{O}}_{F})+\sum_{r\in R-{\mathcal{O}}_{F}}\chi_{K/F}(rp{ \mathcal{O}}_{F})=0.\]
For all \(r\in{\mathcal{O}}_{F}\), we have already seen that \(\chi_{K/F}(rp{\mathcal{O}}_{F})=0\). Thus,
\[0=\sum_{r\in R-{\mathcal{O}}_{F}}\chi_{K/F}(rp{\mathcal{O}}_{F})=\sum_{r\in \mathcal{L}-{\mathcal{O}}_{F}}\chi_{K/F}(rp{\mathcal{O}}_{F})\cdot\ell_{F,p} =\ell_{F,p}\sum_{r\in\mathcal{L}-{\mathcal{O}}_{F}}\chi_{K/F}(rp{\mathcal{O}} _{F}),\]
which yields
\[\sum_{r\in\mathcal{L}-{\mathcal{O}}_{F}}\chi_{K/F}(rp{\mathcal{O}}_{F})=0.\]
In other words, we have character orthogonality across the elements \(r\in\mathcal{L}-{\mathcal{O}}_{F}\). With this, we consider the sum over Bernoulli polynomials within this formula. Letting
\[\mathcal{B}(r_{1}+r_{2}\varepsilon)\coloneqq\sum_{\begin{subarray}{c}0\leq l _{1},l_{2}\leq 2\\ l_{1}+l_{2}=2\end{subarray}}\frac{B_{l_{1}}(r_{1})B_{l_{2}}(r_{2})}{l_{1}!l_{2}!}\mathrm{Tr}_{F/\mathbb{Q}}\left(\varepsilon^{l_{2}-1}\right),\]
we see that
\[\mathcal{B}(r_{1}+r_{2}\varepsilon) =\frac{r_{1}^{2}-r_{1}+\frac{1}{6}}{2}\mathrm{Tr}_{F/\mathbb{Q} }(\varepsilon)+2\Big{(}r_{1}-\frac{1}{2}\Big{)}\Big{(}r_{2}-\frac{1}{2} \Big{)}+\frac{r_{2}^{2}-r_{2}+\frac{1}{6}}{2}\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon)\] \[=\frac{\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon)}{2}\Big{(}r_{1}^{2 }+r_{2}^{2}-(r_{1}+r_{2})+\frac{1}{3}\Big{)}+2r_{1}r_{2}-(r_{1}+r_{2})+\frac{ 1}{2}. \tag{4.9}\]
Recall that by Lemma 4.9, for all \(r\in\mathcal{L}-\mathcal{O}_{F}\),
\[\sum_{r^{\prime}\in C_{r}}\left(r_{1}^{\prime}+r_{2}^{\prime}\right)=M.\]
Thus, summing over each Shintani cycle and using Lemma 4.9 (additive constants are harmless thanks to the character orthogonality established above), we may replace (4.9) by
\[\mathcal{B}(r_{1}+r_{2}\varepsilon)=\frac{\mathrm{Tr}_{F/\mathbb{Q}}( \varepsilon)}{2}\Big{(}r_{1}^{2}+r_{2}^{2}-M+\frac{1}{3}\Big{)}+2r_{1}r_{2}-M- \frac{1}{2}. \tag{4.10}\]
Because we have character orthogonality over \(\mathcal{L}-{\mathcal{O}}_{F}\), we can add a constant to the inner Bernoulli sum of (4.8) without changing the value of the whole expression. In particular, if we let
\[c\coloneqq\frac{\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon)}{2}\Big{(}-M+\frac{1} {3}\Big{)}-M-\frac{1}{2},\]
we see that (4.10) can be rewritten as
\[\mathcal{B}(r_{1}+r_{2}\varepsilon) =\frac{\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon)}{2}\Big{(}r_{1}^{2 }+r_{2}^{2}\Big{)}+2r_{1}r_{2}+c\] \[=\frac{1}{2}\Big{(}\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon)r_{1}^{ 2}+4r_{1}r_{2}+\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon)r_{2}^{2}\Big{)}+c.\]
Using these results and recalling that \(\varepsilon^{i}*r=r_{1}(i)+r_{2}(i)\varepsilon\), we obtain
\[h_{K}=\frac{1}{4}\sum_{i=1}^{\ell_{F,p}}\sum_{r\in\mathcal{L}- \mathcal{O}_{F}}\chi_{K/F}\Big{(}rp\mathcal{O}_{F}\Big{)}\Big{(}\mathrm{Tr}_{F/ \mathbb{Q}}(\varepsilon)r_{1}(i)^{2}+4r_{1}(i)r_{2}(i)+\mathrm{Tr}_{F/\mathbb{ Q}}(\varepsilon)r_{2}(i)^{2}\Big{)}.\]
Recall that \(Q_{F}(Y_{1},Y_{2})\coloneqq\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon)Y_{1}^{2}+4Y_{1}Y_{2}+\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon)Y_{2}^{2}\). Thus, if we make a slight abuse of notation by letting \(Q_{F}(\varepsilon^{i}*r)=\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon)r_{1}(i)^{2}+4r_{1}(i)r_{2}(i)+\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon)r_{2}(i)^{2}\), then we can express \(h_{K}\) as
\[h_{K}=\frac{1}{4}\sum_{i=1}^{\ell_{F,p}}\sum_{r\in\mathcal{L}-\mathcal{O}_{F}}\chi_{K/F}(rp\mathcal{O}_{F})Q_{F}(\varepsilon^{i}*r).\]
## 5. Examples
Here we illustrate Theorems 1.1 and 1.2 for \(\mathbb{Q}(\sqrt{3},\sqrt{-p})\), where \(p\) is prime. Note that the ring of integers of \(F=\mathbb{Q}(\sqrt{3})\) is given by \(\mathbb{Z}[\sqrt{3}]\), and its totally positive unit group \(\mathcal{O}_{F}^{\times,+}\) is generated by \(\varepsilon_{F}=2+\sqrt{3}\). We require that \(p\equiv 3\pmod{4},\big{(}\frac{3}{p}\big{)}=-1,\) and \(7\leq p.\) The first two conditions imply that the relative discriminant ideal is the prime ideal \(p\mathbb{Z}[\sqrt{3}]\). Consequently, \(\mathbb{Z}[\sqrt{3}]/p\mathbb{Z}[\sqrt{3}]\ \cong\ \mathbb{F}_{p}[\sqrt{3}]\).
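The admissible primes below \(100\) can be listed with a few lines of code; this is only a convenience sketch, using Euler's criterion for the Legendre symbol.

```python
# Sketch: list the primes p < 100 with p = 3 (mod 4) and (3/p) = -1.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    s = pow(a, (p - 1) // 2, p)
    return -1 if s == p - 1 else s

suitable = [p for p in range(7, 100)
            if is_prime(p) and p % 4 == 3 and legendre(3, p) == -1]
print(suitable)   # [7, 19, 31, 43, 67, 79], matching the rows of Table 1
```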
### Theorem 1.1 with \(F=\mathbb{Q}(\sqrt{3})\)
Let \(\rho_{F,p}=a+b\sqrt{3}\) be a generator of \(\mathbb{F}_{p}[\sqrt{3}]^{\times}\). Table 1 lists values of \(\rho_{F,p}\) as computed with SageMath. Using these values, we apply (1.3) and (1.4) to calculate \(C_{F,p}\) and \(D_{F,p}\), and then (1.5) and (1.6) to find the corresponding rational functions \(X_{F,p}(z)\) and \(Y_{F,p}(z)\), which are also displayed in Table 1.
We extract the first \(p^{2}-1\) coefficients from our rational functions by taking the \(k^{\mathrm{th}}\) derivative of \(X_{F,p}(z)\) and \(Y_{F,p}(z)\), evaluating each at \(z=0\), and dividing by \(k!\). Note that in this case, since \(t=1\), we obtain only one sequence, \(x_{1}(m)\) and \(y_{1}(m)\), from \(X_{F,p}(z)\) and \(Y_{F,p}(z)\), respectively. Since \(\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon_{F})=4\), we have
\[Q_{F}(Y_{1},Y_{2})=4Y_{1}^{2}+4Y_{1}Y_{2}+4Y_{2}^{2}.\]
Now we may apply Theorem 1.1 to obtain
\[h_{F(\sqrt{-p})}=\frac{1}{16p^{2}}\sum_{1\leq m\leq p^{2}-1}(-1)^{m}Q_{F} \Big{(}x_{1}(m),y_{1}(m)\Big{)}.\]
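The coefficients of the rational functions can be read off without symbolic differentiation, since the denominator of each function induces a linear recurrence on its Taylor coefficients. The sketch below is illustrative only; it extracts the first few coefficients for the \(p=7\) entries of Table 1 and does not reproduce the normalizations in (1.3)–(1.6), which are not restated here.

```python
from fractions import Fraction

def series_coeffs(num, den, n):
    """
    First n Taylor coefficients a_0, ..., a_{n-1} of num(z)/den(z), where num
    and den are coefficient lists [c_0, c_1, ...] and den[0] != 0.  Uses the
    identity den(z) * sum a_m z^m = num(z), i.e. a linear recurrence.
    """
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    a = []
    for m in range(n):
        c = num[m] if m < len(num) else Fraction(0)
        for j in range(1, min(m, len(den) - 1) + 1):
            c -= den[j] * a[m - j]
        a.append(c / den[0])
    return a

# p = 7 entries of Table 1 (denominator written lowest degree first):
# X(z) = (6z - 33z^2) / (1 - 12z + 33z^2),  Y(z) = z / (1 - 12z + 33z^2)
x = series_coeffs([0, 6, -33], [1, -12, 33], 8)
y = series_coeffs([0, 1], [1, -12, 33], 8)
print([int(c) for c in x])   # [0, 6, 39, 270, 1953, 14526, 109863, 838998]
print([int(c) for c in y])   # [0, 1, 12, 111, 936, 7569, 59940, 469503]
```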
The smallest suitable prime for which we can apply Theorem 1.1 here is \(p=7\), for which we calculate
\[h_{F(\sqrt{-7})}=\frac{1}{784}( -84+76-300+52-28+436-100+148-196+52-108+124-84+148\] \[-36+172-28+124-12+76-196+172-4+156-84+76-300+52-28\] \[+156-100+316-196+52-108+124-84+316-36+228-28+124-12\] \[+76-196+228-4+436)=2.\]
In Table 1, we list some terms of our alternating sum for the class numbers of all such primes less than \(100\), along with the corresponding class numbers calculated using Theorem 1.1 and verified using SageMath.
### Theorem 1.2 with \(F=\mathbb{Q}(\sqrt{3})\)
We illustrate Theorem 1.2 in the same setting. Letting \(F=\mathbb{Q}(\sqrt{3})\), we calculate \(h_{K}\) for \(p\equiv 3\pmod{4}\) with \(p\geq 7\) and \(\left(\frac{3}{p}\right)=-1\). We remind the reader that \(\varepsilon_{F}=2+\sqrt{3}\), so \(t=1\). Thus by Lemma 2.9, \(\ker(\pi)=R_{F,p}\cap\mathcal{O}_{F}=\{1\}\).
In the case that \(p=7\), we first calculate the base-\(\varepsilon_{F}\) expansion of \(1/7\),
\[\frac{1}{7} =\varepsilon_{F}^{-2}+3\varepsilon_{F}^{-3}+2\varepsilon_{F}^{-4}+2\varepsilon_{F}^{-5}+2\varepsilon_{F}^{-7}+2\varepsilon_{F}^{-8}+3\varepsilon_{F}^{-9}+3\varepsilon_{F}^{-11}+2\varepsilon_{F}^{-12}+2\varepsilon_{F}^{-13}+2\varepsilon_{F}^{-15}+2\varepsilon_{F}^{-16}+3\varepsilon_{F}^{-17}+\ldots\] \[= 0.01\overline{32202230}.\]
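Assuming the base-\(\varepsilon_{F}\) expansion is produced by the standard greedy digit algorithm, the digits of \(1/7\) can be generated numerically; the output reproduces the digit string above. This is an illustrative sketch only, and the precision setting is an arbitrary choice.

```python
# Sketch of the greedy digit algorithm for base-eps expansions, assuming the
# expansion used here coincides with the standard greedy (beta-expansion) one.
from decimal import Decimal, getcontext

getcontext().prec = 80
EPS = 2 + Decimal(3).sqrt()        # eps_F = 2 + sqrt(3)

def digits(x, n):
    """First n digits after the point of the base-EPS expansion of x in [0, 1)."""
    out = []
    for _ in range(n):
        x *= EPS
        d = int(x)                 # floor, since x >= 0
        out.append(d)
        x -= d
    return out

print(digits(Decimal(1) / 7, 20))
# -> [0, 1, 3, 2, 2, 0, 2, 2, 3, 0, 3, 2, 2, 0, 2, 2, 3, 0, 3, 2]
# i.e. 0.01 followed by the repeating block 32202230, of period 8.
```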
Noticing that \(1/7\) has period length \(\ell_{F,7}=8\), Lemmas 4.7 and 4.8 allow us to deduce that there are
\[\frac{|R_{F,7}-\mathcal{O}_{F}|}{\ell_{F,7}}=\frac{1\cdot 7^{2}-1}{8}=6\]
disjoint Shintani cycles which comprise \(R_{F,7}-\mathcal{O}_{F}\). We can generate these Shintani cycles explicitly, by calculating \(\varepsilon_{F}^{i}*r\) for \(0\leq i<8\) for \(r\in R_{F,7}-\mathcal{O}_{F}\). One can verify that
\[\mathcal{L}=\left\{\frac{1}{7}+\frac{1}{7}\varepsilon_{F},\ \frac{1}{7},\ \frac{1}{7}+\frac{4}{7} \varepsilon_{F},\ \frac{1}{7}+\frac{5}{7}\varepsilon_{F},\ \frac{2}{7}+\frac{2}{7} \varepsilon_{F},\ \frac{3}{7}\right\}\]
is a complete reduced set of representatives for all \(6\) distinct nontrivial cycles in \(R_{F,7}\).
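These cycles can be recovered computationally by iterating the explicit action from Lemma 4.4 (with \(\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon_{F})=4\)) over the \(48\) elements of \(R_{F,7}-\mathcal{O}_{F}\), represented by coordinate pairs in \(\tfrac{1}{7}\mathbb{Z}\) with \(0<r_{1}\leq 1\) and \(0\leq r_{2}<1\). The sketch below is not the authors' code; it also checks Lemma 4.9, namely that the coordinates of each cycle sum to \(M=8\).

```python
# Computational sketch: iterate the action from Lemma 4.4,
# eps*(r1 + r2*eps) = (1 - r2) + frac(r1 + Tr*r2)*eps with Tr = 4,
# over R_{F,7} - O_F and recover the Shintani cycles for p = 7.
from fractions import Fraction

P, TR = 7, 4

def act(r):
    r1, r2 = r
    s = r1 + TR * r2
    return (1 - r2, s - int(s))     # (1 - r2, fractional part of r1 + Tr*r2)

# R_{F,7} - O_F: r1 in {1/7, ..., 7/7}, r2 in {0, ..., 6/7}, excluding r = 1.
elements = {(Fraction(a, P), Fraction(b, P))
            for a in range(1, P + 1) for b in range(P)} - {(Fraction(1), Fraction(0))}

cycles, seen = [], set()
for r in elements:
    if r in seen:
        continue
    orbit, x = [], r
    while x not in orbit:
        orbit.append(x)
        x = act(x)
    seen.update(orbit)
    cycles.append(orbit)

print(len(cycles), [len(c) for c in cycles])            # 6 cycles, each of length 8
print([int(sum(r1 + r2 for r1, r2 in c)) for c in cycles])  # each sum is 8 = M (Lemma 4.9)
```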
| \(p\) | \(\rho_{F,p}\) | \(X_{F,p}(z)\) | \(Y_{F,p}(z)\) | \(h_{F(\sqrt{-p})}\) Calculation |
| :-: | :-: | :-: | :-: | :-: |
| \(7\) | \(6+\sqrt{3}\) | \(\frac{6z-33z^{2}}{33z^{2}-12z+1}\) | \(\frac{z}{33z^{2}-12z+1}\) | \(\frac{1}{784}\Big(-84+76-\ldots+436\Big)=2\) |
| \(19\) | \(1+4\sqrt{3}\) | \(\frac{6z+47z^{2}}{-47z^{2}-12z+1}\) | \(\frac{4z}{-47z^{2}-12z+1}\) | \(\frac{1}{5776}\Big(-364+252-\ldots+3892\Big)=2\) |
| \(31\) | \(1+6\sqrt{3}\) | \(\frac{z+107z^{2}}{-107z^{2}-2z+1}\) | \(\frac{6z}{-107z^{2}-2z+1}\) | \(\frac{1}{15376}\Big(-1084+676-\ldots+10804\Big)=6\) |
| \(43\) | \(1+5\sqrt{3}\) | \(\frac{z+74z^{2}}{-74z^{2}-2z+1}\) | \(\frac{5z}{-74z^{2}-2z+1}\) | \(\frac{1}{29584}\Big(-3556+4836-\ldots+21172\Big)=6\) |
| \(67\) | \(2+5\sqrt{3}\) | \(\frac{2z+71z^{2}}{-71z^{2}-4z+1}\) | \(\frac{5z}{-71z^{2}-4z+1}\) | \(\frac{1}{71824}\Big(-11772+2212-\ldots+52276\Big)=6\) |
| \(79\) | \(2+6\sqrt{3}\) | \(\frac{2z+104z^{2}}{-104z^{2}-4z+1}\) | \(\frac{6z}{-104z^{2}-4z+1}\) | \(\frac{1}{99856}\Big(-16068+7372-\ldots+73012\Big)=30\) |

Table 1. Theorem 1.1 for primes \(p<100\).
With these values, we now calculate \(h_{F(\sqrt{-7})}\) using Theorem 1.2. Noting that \(\mathrm{Tr}_{F/\mathbb{Q}}(\varepsilon_{F})=4\), so that \(Q_{F}(Y_{1},Y_{2})=4Y_{1}^{2}+4Y_{1}Y_{2}+4Y_{2}^{2}\), we compute
\[h_{F(\sqrt{-7})} =\frac{1}{4}\sum_{i=1}^{8}\ \ \sum_{r\in\mathcal{L}}\chi_{F(\sqrt{-7})/F} \left(rp\mathcal{O}_{F}\right)\left(4r_{1}(i)^{2}+4r_{1}(i)r_{2}(i)+4r_{2}(i)^{2 }\right)\] \[=\frac{1}{4}\Big{(}-\frac{220}{7}+\frac{228}{7}-\frac{188}{7}+ \frac{212}{7}-\frac{180}{7}+\frac{204}{7}\Big{)}=2.\]
In Table 2, we carry out the same procedure for all suitable primes less than \(100\).
|
2309.03709 | Mpemba Effect and Superuniversality across Orders of Magnetic Phase
Transition | The quicker freezing of hotter water, than a colder sample, when quenched to
a common lower temperature, is referred to as the Mpemba effect (ME). While
this counter-intuitive fact has remained a surprise for a long time, efforts have begun
to identify similar effects in other systems. Here we investigate the ME in a
rather general context concerning magnetic phase transitions. From Monte Carlo
simulations of model systems, viz., the $q$-state Potts model and the Ising
model, with varying range of interaction and space dimension, we assert that
hotter paramagnets undergo ferromagnetic ordering faster than the colder ones.
The above conclusion we have arrived at following the analyses of the
simulation results on decay of energy and growth in ordering following quenches
from different starting temperatures, to fixed final temperatures below the
Curie points. We have obtained a unique scaling picture, on the strength of the
effect, with respect to the variation in spatial correlation in the initial
states. These results are valid irrespective of the order of transition and
relevant to the understanding of ME in other systems, including water. | Sohini Chatterjee, Soumik Ghosh, Nalina Vadakkayil, Tanay Paul, Sanat K. Singha, Subir K. Das | 2023-09-07T13:37:05Z | http://arxiv.org/abs/2309.03709v1 | # Mpemba Effect and Superuniversality across Orders of Magnetic Phase Transition
###### Abstract
The quicker freezing of hotter water, compared to a colder sample, when quenched to a common lower temperature, is referred to as the Mpemba effect (ME). While this counter-intuitive fact has remained a surprise for a long time, efforts have begun to identify similar effects in other systems. Here we investigate the ME in a rather general context concerning magnetic phase transitions. From Monte Carlo simulations of model systems, viz., the \(q\)-state Potts model and the Ising model, with varying range of interaction and space dimension, we assert that hotter paramagnets undergo ferromagnetic ordering faster than colder ones. We have arrived at the above conclusion following analyses of the simulation results on the decay of energy and the growth of ordering following quenches from different starting temperatures to fixed final temperatures below the Curie points. We have obtained a unique scaling picture for the strength of the effect with respect to the variation in spatial correlation in the initial states. These results are valid irrespective of the order of the transition and are relevant to the understanding of the ME in other systems, including water.
## I Introduction
If two bodies of liquid water, differing in temperature, are placed in contact with a thermal reservoir, operating at a subzero temperature (\(<0^{\circ}\)C), the most common prediction will be that the colder between the two will freeze faster. The report in Ref. [1], by Mpemba and Osborne, however, contradicts such an expectation. There has been a surge [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29] in interest in further investigating this forgotten counter-intuitive fact, which found mention even in the works of Aristotle [30], now referred to as the Mpemba effect (ME). In recent times, questions relevant to the ME are posed in
more general ways [2], going much beyond the domain of water [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11]: When two samples of the same material, from two different temperatures, are quenched to a common lower temperature, which one will reach the new equilibrium quicker? If there exists a point of phase transition between the initial and the final temperatures, for which starting temperature will the transformation occur earlier? There is a rapid growth in interest in studying systems of different types. Experimental studies on colloidal systems [10], clathrate hydrates [11], carbon nanotube resonators [12] and magnetic alloys [13] show the presence of the ME. In the theoretical and computational literature, the varieties that are studied include granular matter [14; 15; 16; 17], spin glasses [18] and a few other systems of magnetic origin [19; 20; 21; 22]. Nevertheless, the underlying reason remains a puzzle, inviting the need for stronger theoretical intervention. The pertinent new questions are fundamental from statistical mechanical and other theoretical points of view. In addition, the effect can be exploited to much practical advantage [28].
In the case of water, for a transformation from liquid to solid, overcoming metastability is a serious problem [31]. Nucleation there, as well as in general, is strongly influenced by the choice of the final temperature (\(T_{f}\)), the type and the volume fraction of impurity, etc. However, how the role of the starting temperature (\(T_{s}\)) enters the picture is an important new fundamental issue. Is it that the above mentioned metastable aspect gets affected by the nature of the initial thermodynamic state in an unexpected manner? With in-built frustration in the model Hamiltonian, some of the theoretical works perhaps set the objective of exploring this angle. Such studies primarily involve magnetic systems with anti-ferromagnetic interactions, including spin glasses. However, simpler systems must also be considered. If the effect is found to be present in their evolution dynamics, the route to a proper understanding can be easier.
In fact, the standard ferromagnetic Ising model, without any impurity, is seen to exhibit the effect [21]. The positive observation of the ME there was attributed to the structural changes associated with the critical divergence of the spatial correlation with the variation of \(T_{s}\) in the paramagnetic phase [2; 21]. To justify the validity of the attribution and estimate the corresponding quantitative influence, it is important to study the model in other situations, such as in different space dimensions (\(d\)) and with varying ranges of interaction. Very importantly, the effects of the order of the transition should be checked [2; 29; 22]. This is motivated by the fact that in the case of water the transition is of first-order character, while for the Ising model the problem was designed [21] in such a way that the influence of a second-order transition is captured. Keeping these in mind [2; 22; 29], here we chose the \(q\)-state Potts model [32; 33], for which the order of the transition changes with the variation of its number of states \(q\). Interestingly, we observe the presence of the ME in all the above mentioned cases. For the Potts model, it appears that with the
increase of \(q\) the effect gets weaker, as far as the variation in \(T_{s}\) is concerned. Interestingly, however, regarding the effects of spatial correlation in the initial states there exists a unique scaling, not only for the simple variation of \(q\), but also with the change of the order of transition and space dimension. We believe that, in addition to being important for the class of systems considered, this study also provides a crucial angle with respect to the interpretation of the effect in water.
## II Models and methods
As already mentioned, here we study a class of discrete spin systems with pure ferromagnetic interactions. We investigate the Ising model [32] in different dimensions, having short and long-range inter-site potentials [34]. Furthermore, generalizations of this two-component system into multi-component ones have also been considered. In general, the Hamiltonian can be written as \(H=-\sum_{i(\neq)j}J(r_{ij})\delta_{S_{i},S_{j}};\ S_{i},S_{j}=1,2,...,q\); \(r_{ij}\) being the separation between lattice sites. For \(q=2\), this Potts model Hamiltonian corresponds to the Ising model, differing only by a factor of 2. The latter, after correcting for this factor, we have studied in \(d=2\) and 3 with nearest-neighbor (NN) interactions, setting the interaction strength \(J\) to unity, on square and simple cubic lattices, respectively. In \(d=2\), we have also presented results for the long-range (LR) version of the Ising model with [34] \(J(r_{ij})=1/r_{ij}^{2+\sigma}\), for \(\sigma=0.8\), again using the square lattice. Most extensive results are obtained for the (short-range) Potts model, \(q\) varying between 2 and 10. The critical temperature for this model has the \(q\)-dependence [32] \(T_{c}=J/[k_{B}\ln(1+\sqrt{q})]\), \(k_{B}\) being the Boltzmann constant, to be set to unity. Depending on the value of \(q\), the order of the transition can change. For \(q>4\) the model loses its "critical" character [33], the transition being of first order. For the Ising case, the values of \(T_{c}\) in \(d=2\) and 3 are \(\simeq 2.269\,J/k_{B}\) and \(\simeq 4.51\,J/k_{B}\), respectively [32]. For the long-range case we have used [35] \(T_{c}=9.765\,J/k_{B}\).
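A minimal sketch of the single-spin-update (Glauber-type) quench dynamics described in the next paragraph is given below. It is illustrative only: the lattice size, number of sweeps, and the absence of any cluster-algorithm equilibration of the initial state are assumptions made for brevity, not the settings used in the paper.

```python
# Minimal sketch (not the authors' code) of a Glauber-type quench for the 2D
# q-state Potts model on an L x L square lattice with periodic boundaries.
# Illustrative parameters only; the paper uses L = 256 and heavy averaging.
import random
import math

L, q, J, kB = 32, 5, 1.0, 1.0
Tc = J / (kB * math.log(1.0 + math.sqrt(q)))
Tf = 0.5 * Tc
beta = 1.0 / (kB * Tf)

# T_s = infinity initial condition: uncorrelated random Potts states
spins = [[random.randrange(q) for _ in range(L)] for _ in range(L)]

def local_energy(i, j, s):
    """Energy of site (i, j) if it held state s: -J per aligned neighbour."""
    e = 0.0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if spins[(i + di) % L][(j + dj) % L] == s:
            e -= J
    return e

def glauber_sweep():
    """One Monte Carlo step: L*L attempted single-spin updates."""
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        old = spins[i][j]
        new = random.randrange(q)
        dE = local_energy(i, j, new) - local_energy(i, j, old)
        # Glauber acceptance probability
        if random.random() < 1.0 / (1.0 + math.exp(beta * dE)):
            spins[i][j] = new

def energy_per_site():
    """Count each NN bond once (right and down neighbours only)."""
    e = 0.0
    for i in range(L):
        for j in range(L):
            if spins[i][j] == spins[(i + 1) % L][j]:
                e -= J
            if spins[i][j] == spins[i][(j + 1) % L]:
                e -= J
    return e / (L * L)

for t in range(1, 51):
    glauber_sweep()
    if t % 10 == 0:
        print(t, energy_per_site())
```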
The kinetics of transition from para to ferro states are studied via Monte Carlo simulations [32], with the Glauber spin-turn mechanism [32]. The preparation of the initial configurations near the critical point encounters critical slowing down [32]. To avoid this, we have used cluster algorithms. In the case of Ising or Potts model, this is done by implementing the Wolff algorithm [36], and for the LR Ising model, we have used the Fukui-Todo algorithm [35; 37]. The presence of the correlated spatial fluctuations in a system and its variation with temperature can be quantified via the calculation of the structure factor: \(S(k,t)=\left\langle\psi_{k}\left(t\right)\psi_{-k}(t)\right\rangle\), \(\psi_{k}\left(t\right)\) being the Fourier transform of the order parameter [38]\(\psi\left(\vec{r},t\right)\) (\(=\exp(i\,\theta(\vec{r}));\ \theta=2\pi n/q,\ n=1,\ldots,q\)). In the small \(k\) regime [\(\in(0,0.4)\) or shorter], for the short-range cases, \(S(k,t)\) is described well by the Ornstein-Zernike relation [39; 40], \(S(k)=k_{B}\,T_{s}\,\chi/(1+k^{2}\,\xi^{2})\), \(\chi\) being the susceptibility and \(\xi\) the correlation length. The average domain length, \(\ell(t)\), of the clusters, during an
evolution towards the ferromagnetic state, has been estimated via the first moment of the domain size distribution function [41], \(P(\ell_{d})\), i.e., \(\ell(t)=\int P(\ell_{d},t)\ell_{d}d\ell_{d}\), where \(\ell_{d}\) is the distance between two consecutive interfaces along a given direction.
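The domain-length measurement can likewise be sketched as the mean distance between consecutive interfaces along rows and columns, i.e., the first moment of the domain-size distribution. The example configuration below is a random placeholder, and the wrap-around at the periodic boundary is ignored for simplicity.

```python
# Sketch of the domain-length measurement: average distance between
# consecutive interfaces ("domain walls") along every row and column.
import random

def domain_length(config):
    L = len(config)
    lines = [row for row in config]                                   # rows
    lines += [[config[i][j] for i in range(L)] for j in range(L)]     # columns
    sizes = []
    for line in lines:
        run = 1
        for k in range(1, L):
            if line[k] == line[k - 1]:
                run += 1
            else:
                sizes.append(run)
                run = 1
        sizes.append(run)            # last run of the line
    return sum(sizes) / len(sizes)

# Placeholder configuration (random two-state field), just to show usage.
example = [[random.randrange(2) for _ in range(16)] for _ in range(16)]
print(domain_length(example))
```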
## III Results
In Fig. 1 we depict how the choice of \(T_{s}\), in the case of Potts model, can influence the structural features in the initial configurations, for different values of \(q\). For both the considered cases, viz., \(q=2\) and \(5\), the configurations at higher \(T_{s}\) [see the snapshots at the bottom of the columns in (a) and (b)] appear random or structureless. With the decrease of \(T_{s}\), spatial correlations emerge. This is more clearly identifiable in the case of \(q=2\) for which one
Figure 1: Typical equilibrium configurations, for \(q=2\) and \(5\) state Potts models, are shown in (a) and (b) from different starting temperatures \(T_{s}\) that are located above the respective critical temperatures \(T_{c}\). (c) The plots of the correlation lengths, \(\xi\), versus \(\epsilon\) (\(=(T_{s}-T_{c})/T_{c}\)), for the same Potts models. For \(q=2\), \(\xi\) diverges as \(\epsilon^{-\nu}\), with \(\nu=1\) (see the solid line). These results are obtained for \(L=256\).
expects the critical divergence [39; 40]\(\xi\sim\epsilon^{-\nu}\), with \(\nu=1\). For a wide range of \(\epsilon\) (\(=|T_{s}-T_{c}|/T_{c}\)) such a behavior can be appreciated from Fig. 1(c). For \(q=5\), the phase transition is of first order [33], and we do not associate any exponent with the data set. The enhancement in the value of \(\xi\) can be appreciated for this \(q\) as well. In this case, the bending on the log-log scale over the whole presented range can well be, in addition to the finite-size effects, due to the first-order nature of the transition. Our expectation is that for this \(q\) the effect will be weaker. We proceed with the objective of quantifying it and to investigate if there exists any scaling rule [2] for arbitrary \(q\) that can as well comply with the other considered models.
We quench the systems from different [42]\(T_{s}\) to \(T_{f}=0.5\,T_{c}\), for a large set of \(q\) values. In Fig. 2 we show results
Figure 2: Demonstration of relaxation in the 5-state Potts model, following quenches to \(T_{f}=0.5\,T_{c}\), from different \(T_{s}\) values, with equal fraction of different spin states in boxes having \(L=256\). In (a) and (b) we show evolution snapshots, taken at different times, in units of standard Monte Carlo steps, for the systems initially at (a) \(T_{s}=0.9\) and (b) \(T_{s}=1.1\). Different colours represent \(q\) different Potts states. (c) Plots of energy versus time, following quenches from several \(T_{s}\) values. The upper half corresponds to early time behavior and the lower one shows the late time behavior. The dashed horizontal lines are drawn to extract the times, \(t_{c,E^{\mathrm{ref},i}},i=1,2\), corresponding to the crossings of certain values of energy \(E^{\mathrm{ref},i}\) by the energy curves of the systems with different \(T_{s}\). (d) Plots of \(t_{c,E^{\mathrm{ref},1}}\) (upper panel) and \(t_{c,E^{\mathrm{ref},2}}\) (lower panel), versus \(T_{s}\).
obtained during evolutions following such quenches, for the 5-state Potts model. In parts (a) and (b) we show the snapshots at different stages of evolutions for \(T_{s}=0.9\) and \(1.1\), respectively. It is evident that the system from the higher \(T_{s}\) reaches the final equilibrium faster. This comparative picture is true not only for the chosen set of initial configurations; the observation indeed holds for a vast majority of the combinations of starting configurations. In part (c) we look at the decay of the average energy (\(E\)) of the systems during the relaxation processes. We have included results for several \(T_{s}\). Each of the data sets is presented after averaging over 300000 independent initial configurations. For clarity, we have enlarged the early and late time behavior separately, in the upper and lower panels, respectively. The orders of appearances of the plots, in terms of \(T_{s}\), are systematic and opposite in the two panels. This implies that there are crossings amongst the energy curves, due to faster equilibrations of configurations prepared at higher \(T_{s}\). This is the essence of the Mpemba effect [18; 21; 1; 2]. For a demonstration of systematicity, we perform the following exercise. The dashed horizontal lines in these panels correspond to two reference energy values, \(E^{\rm ref,1}\) and \(E^{\rm ref,2}\). We calculate the crossing time between a dashed line and the energy curve of the systems starting from each of the \(T_{s}\) values. Such a crossing time is denoted by \(t_{c,E^{\rm ref,i}},i=1,2\). Part (d) of Fig. 2 shows \(t_{c,E^{\rm ref,i}}\) as a function of \(T_{s}\). The upper and lower panels capture the early and late time behavior, respectively. The early time quantity, i.e., \(t_{c,E^{\rm ref,1}}\), increases with the increase in \(T_{s}\), but at late times we see a different behavior, i.e., \(t_{c,E^{\rm ref,2}}\) decreases with the increase in \(T_{s}\). This implies faster relaxation of the systems with higher \(T_{s}\), indicating the presence of the ME.
The faster relaxation of the higher \(T_{s}\) systems can as well be quantified by calculating the average domain length, \(\ell(t)\), a key probe to investigate coarsening dynamics [41; 42; 43]. In Fig. 3(a), we plot \(\ell(t)\), vs \(t\), for different \(T_{s}\) values, for \(q=5\). The early time behavior for different \(T_{s}\) are presented in the lower part of the divided graph. The late time comparisons are in the upper part. The systems starting at higher \(T_{s}\) tend to approach the new equilibrium earlier. This conveys a picture same as that derived from the energy decay, further strongly suggesting the presence of the Mpemba effect. We record the times at which the domain lengths of the systems at different finite \(T_{s}\) (\(<\infty\)) values are crossed or overtaken by the corresponding plots for the systems starting from \(T_{s}=\infty\). We denote this by \(t_{c,\ell_{\infty}}\). In Fig. 3(b) we have plotted \(t_{c,\ell_{\infty}}\) as a function of \(T_{s}-T_{c}\), for a few values of \(q\), covering transitions of first as well as second order varieties. For each \(q\), the crossing time increases with the approach of \(T_{s}\) to the corresponding \(T_{c}\). Given that depending upon the value of \(q\) the nature of critical fluctuation is different, presence of any unique scaling behavior may not emerge from this figure. It appears, nevertheless, that for a given distance of \(T_{s}\) from \(T_{c}\), the crossing time is longer for higher \(q\). A quantitative picture for this is shown in Fig. 3(c). This is a signature that
the ME gets weaker with the increase of \(q\). Considering the influence of both \(q\) and \(T_{s}\), the issue, however, is complex; we address it later.
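For completeness, reading off a crossing time \(t_{c,\ell_{\infty}}\) from two sampled growth curves amounts to locating the first recorded time at which the \(T_{s}=\infty\) curve exceeds the finite-\(T_{s}\) one; the arrays in the sketch below are placeholders, not data from the paper.

```python
# Sketch: reading off the crossing time between two sampled growth curves,
# i.e. the first recorded time at which the T_s = infinity curve overtakes
# the finite-T_s curve. The arrays below are placeholders only.

def crossing_time(times, ell_inf, ell_finite):
    for t, a, b in zip(times, ell_inf, ell_finite):
        if a > b:
            return t
    return None   # no crossing within the sampled window

times      = [1, 2, 5, 10, 20, 50, 100]
ell_inf    = [1.0, 1.5, 2.4, 3.6, 5.5, 9.0, 13.0]
ell_finite = [1.6, 2.0, 2.7, 3.8, 5.3, 8.2, 11.5]
print(crossing_time(times, ell_inf, ell_finite))   # -> 20
```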
Next we check whether the same scenario is true for the case of the LR Ising model. Due to the demanding computation, we analyze results for this case after averaging over 100 independent initial configurations. Note that the LR systems encounter finite-size effects much faster than its short-range counterpart, due to faster growth [34; 44; 45] with the decrease of \(\sigma\). To avoid this problem, we choose big systems and a large value of \(\sigma\), viz., \(\sigma=0.8\), which, nevertheless, falls well within the long-range interaction domain [34]. In Fig. 4(a) we plot \(\ell(t)\), vs \(t\), for quenches to \(T_{f}=0.3\,T_{c}\), from three \(T_{s}\) values, with \(L=1024\). From these plots it is clear that the systems with the highest \(T_{s}\) have the largest \(\ell(t)\), at late times. Thus, ME appears to be present in the LR Ising model as well. Note that because of the above mentioned reasons we have used Ewald summation [35; 46], and parallelized our codes, in this case, to speed up the output.
So far we have dealt with 2D systems. Now we present results from the 3D NN Ising model in Fig. 4(b), where also the faster relaxation of the systems for the higher \(T_{s}\) value is quite clear. Here we have quenched the systems from different initial \(T_{s}\) values to \(T_{f}=0.6\,T_{c}\). These results are presented after averaging over runs with 1440 independent
Figure 3: (a) The plots of \(\ell(t)\) versus \(t\), for the 5-state Potts model, for quenches from various \(T_{s}\) to \(T_{f}=0.5\,T_{c}\). The lower frame captures the early time trend, while the upper frame depicts the late time behavior. (b) Plots of time, \(t_{c,\ell_{\infty}}\), corresponding to crossing between growth curves for systems starting at \(T_{s}=\infty\) and a finite \(T_{s}\), as a function of \(T_{s}-T_{c}\), for different \(q\) values. (c) Plot of \(t_{c,\ell_{\infty}}\) versus \(q\) for systems prepared at \(T_{s}=1.3\,T_{c}\). All results correspond to \(L=256\).
initial configurations, with \(L=256\).
Returning to the Potts results in Fig. 3, we recall that an important objective of our work is to obtain a scaling picture [2]. Note that for different \(q\) values, one expects differing fluctuations in the critical vicinity. Thus, a unique behavior of the data sets in Fig. 3(b) should not be expected. It is more instructive to replace the abscissa variable there by \(\xi\). Results from such an exercise is shown in Fig. 5(a). On a log-log scale it appears that the data sets from different \(q\) are reasonably parallel to each other. In Fig. 5(b), thus, we introduce a prefactor \(a\), for the abscissa, constant for a particular value of \(q\), to obtain an overlap of the data sets in Fig. 5(a). A nice collapse of the data sets can be appreciated. In fact, the results for the 2D and 3D Ising models also comply with that. It is worth mentioning here that accurate estimations of the crossing times require huge statistics.
Figure 4: (a) Plots of \(\ell(t)\) versus \(t\), corresponding to a few different \(T_{s}\) values, for quenches to \(0.3\,T_{c}\), for the LR Ising model. The value of \(\sigma\) is \(0.8\) and we have \(L=1024\). (b) Same as (a) but here the results are for the 3D nearest-neighbor (NN) Ising model with \(T_{f}=0.6\,T_{c}\), and \(L=256\).
## IV Conclusion
We have investigated the presence of the Mpemba effect [1; 2; 3; 4; 5] during para- to ferromagnetic transitions in several model systems with discrete spin values. These include the short-range Ising model in \(d=2\) and \(3\), as well as the long-range Ising model in \(d=2\). A very extensive set of results is presented for the \(q\)-state Potts model for a wide range of \(q\) values. It is important to note that in none of the considered models is there any in-built frustration. Irrespective of the space dimension, range of interaction and order of transition, we have observed the Mpemba effect. It has an interesting connection with the length of spatial correlations at the considered initial temperatures. The relative delay in the approach to the final equilibrium, following quenches from the para to the ferro region, with the lowering of starting temperatures, has a unique dependence upon \(\xi\). For second order transitions we have obtained a universal scaling for models with critical exponent \(\nu\) varying nearly by a factor of \(1.6\). More interestingly, the scaling is valid for
Figure 5: (a) Plots of \(t_{c,\ell_{\infty}}\) versus \(\xi\), for the Potts model, with a few different \(q\) values, on a double-log scale. (b) Same as (a) but here the abscissa of the data sets are scaled by constant factors to obtain a “possible” overlap. In addition to the results from the Potts cases (\(q\geq 3\)), here we have included data for the NN Ising model, from different space dimensions. We expect discrepancy in scaling very close to \(T_{c}\) due to strong finite-size effects from multiple sources. Dashed lines represent power-laws.
even first order transitions. This implies that, for a given model, if two initial temperatures possess nearly the same spatial correlations (which is possible for large \(q\)), configurations from these states will equilibrate almost simultaneously at the final temperature, showing no detectable ME in the times taken to reach the final destination, even for large differences in \(T_{s}\). We believe that our results contain an important message for the understanding of the effect in water. In particular, the observation of the effect in cases of first order transition can shed light on the mystery surrounding the latter. It will be interesting to investigate how the power-law dependence upon \(\xi\), with exponent 0.9, may be connected to the scaling picture described in Ref. [2].
**Author contributions**: SKD proposed the topic, designed the problem, participated in the analyses, supervised the work and wrote the manuscript. NV oversaw a few coding details and took part in progress on all the models at the initial stages, alongside contributing to the writing. SC obtained all the final results on the Potts model, analyzed these, and contributed to the writing. SG and TP obtained and analyzed the results on the long-range and the 3D Ising models, respectively. Simulations of SKS provided the first hints of the Mpemba effect in the 3D Ising model.
**Acknowledgments**: SKD acknowledges a discussion with R. Pandit at an early stage and partial financial support from Science and Engineering Research Board, India, via Grant No. MTR/2019/001585. The authors are thankful to the supercomputing facility, PARAM Yukti, at JNCASR, under National Supercomputing Mission.
|
2301.13338 | Continuous Spatiotemporal Transformers | Modeling spatiotemporal dynamical systems is a fundamental challenge in
machine learning. Transformer models have been very successful in NLP and
computer vision where they provide interpretable representations of data.
However, a limitation of transformers in modeling continuous dynamical systems
is that they are fundamentally discrete time and space models and thus have no
guarantees regarding continuous sampling. To address this challenge, we present
the Continuous Spatiotemporal Transformer (CST), a new transformer architecture
that is designed for the modeling of continuous systems. This new framework
guarantees a continuous and smooth output via optimization in Sobolev space. We
benchmark CST against traditional transformers as well as other spatiotemporal
dynamics modeling methods and achieve superior performance in a number of tasks
on synthetic and real systems, including learning brain dynamics from calcium
imaging data. | Antonio H. de O. Fonseca, Emanuele Zappala, Josue Ortega Caro, David van Dijk | 2023-01-31T00:06:56Z | http://arxiv.org/abs/2301.13338v2 | # Continuous Spatiotemporal Transformers
###### Abstract
Modeling spatiotemporal dynamical systems is a fundamental challenge in machine learning. Transformer models have been very successful in NLP and computer vision where they provide interpretable representations of data. However, a limitation of transformers in modeling continuous dynamical systems is that they are fundamentally discrete time and space models and thus have no guarantees regarding continuous sampling. To address this challenge, we present the Continuous Spatiotemporal Transformer (CST), a new transformer architecture that is designed for the modeling of continuous systems. This new framework guarantees a continuous and smooth output via optimization in Sobolev space. We benchmark CST against traditional transformers as well as other spatiotemporal dynamics modeling methods and achieve superior performance in a number of tasks on synthetic and real systems, including learning brain dynamics from calcium imaging data.
## 1 Introduction
The theory of dynamical systems has found profound applications throughout the sciences, both theoretical and applied. Traditionally, dynamical system analysis aims to find the rules that govern the dynamics of an underlying system. In this setting, we first obtain a model that describes the given system, either through theoretical principles (model-based) or through experimental data (data-driven) (Ghadami and Epureanu, 2022), and then study its mathematical properties. Having a model of the dynamical system grants a deeper understanding of the phenomena, allowing for predictions of the system state continuously in time (Ghadami and Epureanu, 2022; Krishnapriyan et al., 2022). Such dynamical systems can be found throughout engineering and science. In biology, the brain is a notably complex dynamical system (Wang and Kennedy, 2016). The spiking activity of neurons within the neural population produces complex spatiotemporal patterns (Muller et al., 2018). These neural activity patterns represent a dynamical system that evolves over time, where the state of the system is defined in terms of the joint firing patterns of the neural populations (Vyas et al., 2020). For complex systems such as the brain, model-based approaches for learning dynamics are not tractable; thus the dynamics have to be learned directly from collected data. However, learning continuous dynamics from discretely sampled data is challenging and is an active area of study in machine learning (Willard et al., 2020).
Among the continuous time model approaches, Neural ODEs (Chen et al., 2018; Rubanova et al., 2019) have found important applications. While effective in modeling temporal dynamics, these models are unable to capture long-range spatiotemporal relations in the data and do not provide interpretable models (Zappala et al., 2022). In the meantime, Transformers (Vaswani et al., 2017) have become state-of-the-art in several tasks across domains (Lu et al., 2021), wherein their performance is mainly attributed to their capacity to capture long-range dependencies in the data as well as their training scalability (Bertasius et al., 2021).
from instances of the system. In such circumstances, we are interested in learning an operator that corresponds to the system. An example could be learning an operator that maps the function representing a system at time \(t=0\) to the system at later time points. This is the setting of operator learning problems, and several approaches, including using deep learning, have been presented [13, 14, 15, 16]. Operator learning problems are often formulated on finite grids, and passing to the continuous limit is a significant issue. Moreover, in practical cases such as in physics, it is of interest to be able to compute derivatives of the model's output, in which case smoothness is needed. Our main goal and contribution in this article are to introduce an operator learning framework for continuous and smooth functions on space-time domains.
Figure 1: Diagram of CST’s workflow. (A) The model receives a mix of real and “dummy” data points. These points are initialized via a linear interpolation of the real data points. (B) All points are perturbed with Gaussian noise. (C) Each point is treated as a token of the sequence. The points and their positional information are encoded to a latent space and fed to a multi-head self-attention module. (D) The model’s output is a prediction for each input coordinate. The model is trained to minimize the Sobolev loss.
### Sobolev spaces
Sobolev spaces were introduced as a framework for solving differential equations (Brezis and Brezis, 2011). In such spaces, one studies weak solutions of differential equations, i.e. solutions that hold almost everywhere in an integral sense over the domain of interest. Then, a regular enough weak solution is also a strong solution, i.e. a solution in the usual sense, where equality holds at each point of the domain without the integral sign. The study of such spaces also leads to the notion of weak differentiability and the Sobolev norm, which is a norm that takes into account the function itself as well as its derivatives. Sobolev spaces are a fundamental object of study in functional analysis, especially in relation to differential equation theory. More recently, they have found important applications in machine learning, where minimizing the Sobolev norm with respect to target data as well as its derivatives has shown good regularization effects (Czarnecki et al., 2017; Son et al., 2021; Kissel and Diepold, 2020; Cardona and Hecht, 2022; Fischer and Steinwart, 2020; Vlassis and Sun, 2021). Our optimization task is formulated in the Sobolev space to ensure that the learned operator outputs functions that are both continuous and smooth. However, our approach differs from previous methods in that we do not use the derivatives of the target data functions explicitly; rather, we minimize the \(p\)-norm of the higher derivatives at sampled points, without directly comparing them to data. Therefore, our approach does not require extra knowledge or computation of the derivatives of the data.
### Continuous time models
A fundamental issue in machine learning is that of modeling continuous systems from discretely sampled data. Mathematical modeling of dynamical systems in the sciences and engineering, in fact, is performed through continuous and differentiable functions, due to their favorable analytical properties. We are therefore interested in machine learning models whose output is continuous and smooth, and that can therefore be interpolated with accuracy even when the data set is irregularly sampled. Several methods have been proposed, e.g. Chen et al. (2018); Rubanova et al. (2019); Poli et al. (2020); Zappala et al. (2022), based on the idea of solvers. In contrast, our approach combines operator learning techniques based on transformers (Vaswani et al., 2017) and Sobolev norm (Brezis and Brezis, 2011) to obtain an operator that outputs smooth functions with a high degree of accuracy on interpolation tasks for irregularly sampled data.
### Transformers
The self-attention mechanism and Transformer models were introduced in Vaswani et al. (2017) and have shown excellent performance in sequence modeling problems, such as natural language processing (NLP). Since their first appearance, Transformers have excelled in several domains (Lu et al., 2021a). The Transformer uses self-attention mechanisms to learn the relationships between the elements of a sequence and uses this information to make contextualized predictions. When trained on large corpora, Transformers can learn to abstract semantics from the text (Devlin et al., 2018). The state-of-the-art performance of Transformers in NLP is attributed to their capacity to capture long-range dependencies among words (i.e. extract contextual meaning) as well as their training scalability (Bertasius et al., 2021). More recently, studies focused on the computation performed by self-attention have shown it acts as a learnable integral kernel with non-local properties. This makes the Transformer especially fit for learning complex sequential data with long-range dependencies (Cao, 2021; Cao et al., 2022), while also being computationally efficient for long sequences (Choromanski et al., 2020).
### Modeling brain dynamics
Modeling brain dynamics has been a focal point of neuroscience since its start (Hodgkin and Huxley, 1952; Rall, 1959). However, until recently, technological limitations have significantly hindered the field in two respects: 1) difficulties in collecting high-throughput data, and 2) computational limitations in modeling complex non-linear dynamics (Stevenson and Kording, 2011). Recently, several neural-network-based methods have been developed to model the temporal dynamics of neuronal circuits. One framework is based on inferring latent neural dynamics via dynamic models. Within this framework, LFADS has shown great success on spiking neuronal datasets. This model consists of a sequential variational autoencoder that is tasked with reconstructing its input from a low-dimensional set of factors (Pandarinath et al., 2018; Zhu et al., 2022). For continuous models, PLNDE has been successful in modeling spiking neuronal dynamics via a Poisson neural differential equations model (Kim et al., 2021). Another approach has been to use encoding models to understand how neurons represent sensory inputs (Sinz et al., 2018; Walker et al., 2019; Bashiri et al., 2021). These models are trained to reconstruct neuronal activity based on inputs such as images or sound sequences. This approach has been applied to spiking and 2-photon calcium data. However, it has not been used for whole-brain 1-photon calcium dynamics. Another category consists of goal-driven models, which are models trained to perform tasks that require human-like cognition in order to produce outputs that are correlated with neuronal brain
dynamics (Yamins et al., 2014; Yamins and DiCarlo, 2016; Tang et al., 2018; Cadena et al., 2019; Li et al., 2022). While such models have been widely employed to predict neuronal activity, they require complex experimental validations to infer meaningfulness.
## 3 Method
One of the essential components of the Transformer model is the positional encoding function, which specifies the order of the elements (or 'tokens') in the sequence and combines it with the encoded representation of the tokens. This is a successful approach for NLP and computer vision tasks, but too restrictive for datasets that are intrinsically continuous such as brain activity. Thus, we redesigned the Transformer to work more appropriately on the continuous domain. These modifications result in a new framework, called Continuous Spatiotemporal Transformer (CST) (Figure 1). During training, the model receives a mix of real (i.e., sampled data) and randomly sampled in-between-data ("dummy") coordinates. The dummy points are initialized via a linear interpolation fitted on the sampled data points and evaluated at the dummy coordinates (Figure 1A). Next, this sequence of points is augmented via the addition of Gaussian noise (Figure 1B). Each point of the sequence is treated as a token, which is encoded via a linear encoder to a latent space and then fed into the Multi-Head Attention module (Figure 1C), which computes the self-attention between the tokens of the sequence (Vaswani et al., 2017). Finally, a linear decoder projects the tokens of the sequence from latent space to data space, resulting in the model's prediction. The model is optimized to minimize a Sobolev loss where the \(p\)-norm is computed between output and target data, while simultaneously minimizing the \(p\)-norm of higher derivatives. This prevents the formation of cusp points and other singularities in the interpolation output.
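As a minimal illustration of steps (A)-(D), the following PyTorch sketch assembles real and dummy coordinates into one token sequence, initializes the dummy values by linear interpolation, perturbs all tokens with Gaussian noise, and encodes content together with a continuous positional signal before self-attention and decoding. The layer sizes, the learned linear time encoding, and all names are illustrative choices rather than the exact architecture used in the experiments.

```python
import numpy as np
import torch
import torch.nn as nn

class CSTSketch(nn.Module):
    def __init__(self, data_dim=2, d_model=32, n_heads=4, n_layers=4):
        super().__init__()
        self.value_enc = nn.Linear(data_dim, d_model)   # token content encoder
        self.time_enc = nn.Linear(1, d_model)           # continuous positional encoding (assumed form)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.attn = nn.TransformerEncoder(layer, n_layers)
        self.dec = nn.Linear(d_model, data_dim)         # back to data space

    def forward(self, t_real, x_real, t_dummy, noise_std=0.1):
        # (A) initialize dummy points by linear interpolation of the real points
        #     (t_real is assumed to be sorted in increasing order)
        x_dummy = torch.stack(
            [torch.as_tensor(np.interp(t_dummy.numpy(), t_real.numpy(), x_real[:, d].numpy()))
             for d in range(x_real.shape[1])], dim=-1).float()
        t = torch.cat([t_real, t_dummy])
        x = torch.cat([x_real, x_dummy])
        # (B) perturb every token with Gaussian noise
        x = x + noise_std * torch.randn_like(x)
        # (C) encode content + continuous position, then multi-head self-attention
        h = self.attn((self.value_enc(x) + self.time_enc(t.unsqueeze(-1))).unsqueeze(0))
        # (D) decode each token back to data space
        return self.dec(h).squeeze(0), t

# example usage on a toy curve with 10 real and 10 dummy coordinates
t_real = torch.linspace(0, 1, 10)
x_real = torch.stack([torch.sin(2 * torch.pi * t_real), torch.cos(2 * torch.pi * t_real)], dim=1)
pred, t_all = CSTSketch()(t_real, x_real, torch.rand(10))
print(pred.shape)   # torch.Size([20, 2])
```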
To better elucidate the aforementioned Sobolev loss optimization, first recall that a function \(f\in C([a,b])\) is said to be weakly differentiable if there exists an integrable function \(f^{\prime}\) such that \(\int_{a}^{b}f\phi^{\prime}=-\int_{a}^{b}f^{\prime}\phi\) for all differentiable functions \(\phi\in C^{1}([a,b])\) vanishing at the endpoints. Note that if a function is differentiable in the usual sense, then its usual derivative is easily seen to coincide with its weak derivative (see e.g. Brezis and Brezis (2011)). For higher dimensional spaces a similar definition can be introduced as well. Then, the Sobolev space \(W^{k,p}\) is inductively defined as the space of weakly differentiable functions \(f\) with \(f^{\prime}\in W^{k-1,p}\), and with norm given by
\[||f||_{W^{k,p}}=||f||_{p}+\sum_{q=1}^{k}||D^{q}f||_{p}.\]
As base of the induction definition, the case \(k=1\) is defined as the space of weakly differentiable functions and equipped with norm given by
\[||f||_{W^{1,p}}=||f||_{p}+||f^{\prime}||_{p},\]
where \(f^{\prime}\) indicates the weak derivative. When the domain \(\Omega\subset\mathbb{R}^{n}\) is higher dimensional, the definition is the same as above, but we take into account all the partial derivatives indexed by multi-indices \(\mathbf{q}\). For example, for \(\mathbf{y}(x_{1},x_{2},x_{3})\), we have \(D^{(1,0,3)}\mathbf{y}:=\partial_{1}\partial_{3}^{3}\mathbf{y}\), where we have used the notation \(\partial_{j}^{k}:=\frac{\partial^{k}}{\partial x_{j}^{k}}\).
Our optimization is performed in the Sobolev space, where we minimize the loss \(\mathcal{L}\) defined as
\[\mathcal{L}(\mathbf{y},D)=||\mathbf{y}_{D}||_{p}+\mu\sum_{|\mathbf{q}|=1}^{k}||D^{\mathbf{q}}(\mathbf{y})||_{p},\]
where \(\mathbf{y}\) is the output of the model, \(D\) indicates the data, \(\mathbf{y}_{D}\) is the function obtained as \(\mathbf{y}_{D}:=\mathbf{y}-D\), and \(\mu\) is a hyperparameter that regulates the contribution of higher derivatives to the optimization. This parameter determines the desired trade-off between the accuracy with which the model fits the data and the bound on the derivatives. In addition, \(k\) and \(p\) are also hyperparameters that define the Sobolev space in which the optimization is performed. Observe that while the zeroth-order term takes into account the data, the higher-degree terms do not refer to the data, contrary to other approaches such as Czarnecki et al. (2017). This allows us to sample arbitrarily many points from the domain for the evaluation of the derivatives, for which we do not have data points.
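The loss can be sketched in PyTorch as follows for the common case of a scalar time input; the higher derivatives are obtained by repeated automatic differentiation at freely sampled coordinates, with no derivative targets. Names and default values are illustrative, and differentiating the summed outputs is a simplification adopted for brevity.

```python
import torch

def sobolev_loss(model, t_data, x_data, t_sample, mu=1e-3, p=2, k=2):
    # t_data: (M, 1) observed coordinates, x_data: (M, D) observed values,
    # t_sample: (N, 1) freely sampled coordinates for the derivative terms.
    # zeroth-order (data) term: ||y - D||_p at the observed coordinates
    data_term = torch.linalg.vector_norm(model(t_data) - x_data, ord=p)

    # higher-order terms: p-norms of d^q y / dt^q at the sampled coordinates,
    # computed by repeated automatic differentiation (no derivative targets used)
    t = t_sample.clone().requires_grad_(True)
    out = model(t).sum()          # differentiate the summed outputs (a simplification)
    deriv_term = 0.0
    for _ in range(k):
        out = torch.autograd.grad(out, t, create_graph=True)[0]
        deriv_term = deriv_term + torch.linalg.vector_norm(out, ord=p)
        out = out.sum()
    return data_term + mu * deriv_term
```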
While it is conceptually desirable to have a model whose output, and its derivatives, can be sampled continuously, we also demonstrate by means of experimentation (see Section 4 below) that simply interpolating the output of a model does not necessarily give good interpolation results. In fact, in the presence of noise or irregularly sampled data, interpolating the output of the model using traditional polynomial methods can be negatively affected by fluctuations that cause overshooting. Our approach shows that when CST outputs the interpolated points through evaluation of the model itself, a lower interpolation error is obtained. As a further conceptual gain in our approach with CST, we can upsample the attention weights of CST via evaluation at any arbitrary point within the domain. As the model is shown to accurately predict the interpolated points, this attention results in a meaningful upsampling.
Because CST combines both content and positional information of the data points to make predictions, the training forces the in-between coordinates to carry meaningful information about the modeled data. This allows us to use discretely sampled data to make predictions while generalizing for any arbitrary time point. This is important for computing smooth transitions between data points, facilitating interpolations, and eliminating the dependence upon regularly sampled data.
## 4 Experiments
To benchmark CST with respect to continuity and smoothness, we have considered several synthetic and real-world datasets for which we have evaluated the interpolation error. Our experiments consistently show that while all models can fit the given datasets, CST outperforms them in interpolation tasks for noisy and irregularly sampled data.
### Synthetic 2D spirals dataset
To clearly showcase the properties of CST in comparison to the conventional Transformer, we use both methods for modeling 2D spirals generated by integral equations. This dataset consists of 500 2D spirals of 100 time points each. The data was split into 70% of the spirals for training and 30% for validation. For training, 10 data points were sampled from each curve while the remaining points were reserved for the interpolation test. Details about the data generation are described in Appendix A, and an example of a curve from this dataset is shown in Figure 6.
To train the Transformer model, we followed the training procedure used by the authors of BERT (Devlin et al., 2018). At each training step, we randomly select 30% of the points for masking. The selected points are either replaced by a constant (80% of the time), replaced by another random point (10% of the time), or not replaced at all (10% of the time). The model is trained to predict the data points selected for masking. Both CST and the Transformer have 4 layers, 4 heads, and \(d_{model}\)=32 (see Table 5 for more details).
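A minimal sketch of this 80/10/10 masking scheme is shown below; the masking constant and the tensor layout are illustrative choices.

```python
import torch

def bert_style_mask(x, mask_frac=0.3, mask_value=0.0):
    # x: (N, D) sequence of points; returns the corrupted copy and the indices
    # of the points the model must predict.
    x = x.clone()
    n = x.shape[0]
    idx = torch.randperm(n)[: max(1, int(mask_frac * n))]
    for i in idx:
        r = torch.rand(1).item()
        if r < 0.8:                                   # 80%: replace with a constant
            x[i] = mask_value
        elif r < 0.9:                                 # 10%: replace with another random point
            x[i] = x[torch.randint(n, (1,)).item()]
        # remaining 10%: keep the original value
    return x, idx
```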
To inspect the models, we sampled 1000 new time coordinates within the time interval of the data. The results obtained for CST and the Transformer model are shown in Figure 2. We show that a Transformer model trained with the framework used in language modeling results in a step-like output, which yields poor interpolation performance. On the other hand, CST provides an output that better represents the original data. To evaluate the performance of both models in learning the dynamics of the dataset, we use the trained models to interpolate for the unseen data coordinates and compute the mean of the error per interpolated point. We show that CST has a significantly lower interpolation error (\(P<0.0001\), N=150 spirals of the validation dataset) than the Transformer (Figure 2C, Table 4).
Next, we compare CST's interpolation performance to common interpolation methods, such as linear and cubic spline interpolations. While simply performing interpolation is not our primary goal, we show that CST is more robust to noise than commonly used interpolation methods. To test this, we perturb the spirals with Gaussian noise \(\mathcal{N}(0,0.1)\). We show that CST has a significantly lower interpolation error (\(P<0.0001\), N=150 spirals of the validation dataset, Figure 9) than linear and cubic spline interpolation. Examples of outputs are shown in Figure 8.
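The spirit of this baseline comparison is captured by the following NumPy/SciPy sketch, in which noisy samples of a toy curve (not the integral-equation spirals used above) are interpolated linearly and with a cubic spline and scored at held-out time points.

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
t_all = np.linspace(0, 1, 100)
x_all = np.stack([np.sin(4 * np.pi * t_all), np.cos(4 * np.pi * t_all)], axis=1)

train_idx = np.sort(rng.choice(len(t_all), size=10, replace=False))
test_idx = np.setdiff1d(np.arange(len(t_all)), train_idx)
t_tr = t_all[train_idx]
x_tr = x_all[train_idx] + rng.normal(0, 0.1, (10, 2))   # noisy training samples

lin = np.stack([np.interp(t_all, t_tr, x_tr[:, d]) for d in range(2)], axis=1)
spl = CubicSpline(t_tr, x_tr)(t_all)

for name, pred in [("linear", lin), ("cubic spline", spl)]:
    err = np.linalg.norm(pred[test_idx] - x_all[test_idx], axis=1).mean()
    print(f"{name} interpolation error: {err:.3f}")
```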
Figure 2: Continuous sampling of Transformer and CST. The Transformer shows step-like behavior whereas CST is smooth. A) Example of model fits to 2D spirals. B) Individual spiral dimensions over time. Both CST and the Transformer were trained to fit the data sampled from the spirals (‘Train’ blue points). During inference, the models were evaluated at 1000 coordinates along the spiral (lines shown for CST (red) and the Transformer (blue)). C) Zoomed-in view, emphasizing the difference in smoothness between CST and the Transformer. D) The interpolation error (L2-norm) for the test points (green) shows that CST has significantly (\(P<0.0001\)) better interpolation than the Transformer.
Next, we show that CST can up-sample self-attention weights better than up-sampling via interpolation. To test this, we use a model trained with 10 real points and 10 randomly sampled dummy points. During inference, we provide the same 10 points used during training and 10 extra fixed-coordinate points with their real values, thus providing a ground-truth self-attention between the input points, as illustrated in the first panel of Figure 3 for a given input. To up-sample self-attention with CST, we provide the 10 points used during training and the time coordinates of the 10 extra points. We use CST to obtain outputs for all 20 points with their respective self-attention, as described in Sec. 3. The self-attention obtained in this way is also illustrated in Figure 3 for the same curve. Another commonly used approach to up-sample attention is the use of interpolation methods (Caron et al., 2021). Here we use linear interpolation to up-sample the self-attention weights of the model's output for the 10 training points to 20 points; the result is likewise shown in Figure 3. More examples are shown in Figure 10. We evaluate the up-sampling performance of both approaches in terms of the attention error for the time coordinates not used during training. Figure 11 shows the error distribution for CST and the linear interpolation in the up-sampling task for the validation curves. We observe that CST significantly (\(P<0.0001\)) outperforms the linear interpolation in terms of lower approximation error of self-attention up-sampling.
### Modeling dynamics in a video dataset
Video recordings are a common type of data that benefits from continuous modeling. Although frames are discrete in time, they represent samples of a continuous process, and therefore, the dynamics in videos are conveniently modeled as such. In this section, we use CST to model dynamics in the KITTI video dataset (Geiger et al., 2013). This dataset consists of recordings captured by a vehicle moving in the city of Karlsruhe. We utilized the version of the dataset presented in PredNet (Lotter et al., 2016) and split the dataset into 70% for training and 30% for validation. We turned the task into a video inpainting task by extracting 10 frames of the video sequence and adding Gaussian noise (\(\mathcal{N}(0,\sigma=0.5)\)) to 40-60% of each frame; this noise perturbs the information in the frames (see Figure 13 for an example of a video sequence). Then, we trained the model to reconstruct the uncorrupted sequence based on the masked input.
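A minimal sketch of this corruption step is shown below; the use of a contiguous block of pixel rows as the corrupted region is an illustrative assumption about the mask shape.

```python
import torch

def corrupt_frames(frames, frac_range=(0.4, 0.6), sigma=0.5):
    # frames: (T, C, H, W) video clip; returns a copy in which a random 40-60%
    # region of each frame is perturbed with Gaussian noise N(0, sigma).
    frames = frames.clone()
    T, C, H, W = frames.shape
    for t in range(T):
        frac = torch.empty(1).uniform_(*frac_range).item()
        h = int(frac * H)
        top = torch.randint(0, H - h + 1, (1,)).item()
        frames[t, :, top:top + h, :] += sigma * torch.randn(C, h, W)
    return frames
```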
We compared CST to other neural-network-based models that are able to model spatiotemporal dynamics: ConvGRU (Ballas et al., 2015), 3D-ViT, and ViViT (Arnab et al., 2021) (see Table 6 for architecture details). ConvGRU was trained to recursively predict the frames from a single frame as input for every timepoint. 3D-ViT is a model based on the transformer architecture and has 3-dimensional tokens for a 3D-tensor input. The ViViT model was trained following their factorized-encoder approach, wherein space and time are modeled by two separate Transformers. All models were trained on an RTX 3090 NVIDIA GPU for up to 150 epochs or until convergence. Table 1 reports the validation mean squared error of the models trained on the video inpainting task. We can observe that CST has a lower validation mean squared error than all other models. Furthermore, the reconstructed frames generated by CST preserve much more of the high-frequency content of the original frames than those of the other models (see examples in Figure 13).
### Navier-Stokes equations
We consider a \((2+1)D\) PDE system, namely the Navier-Stokes equation (Chorin, 1968; Fefferman, 2000), to evaluate the capability of CST to continuously model dynamical systems. The dataset consists of \(5K\) instances of numerical solutions of the Navier-Stokes equation with random initial conditions. Further details on the dataset can be found in
Figure 3: CST can accurately up-sample self-attention weights. Shown are, from left to right, attention maps for: ground truth data, down-sampled input data, up-sampling via CST, and up-sampling via linear interpolation (as performed in Caron et al. (2021)). We observe that CST provides up-sampled self-attention weights that more closely match the ground truth (\(P<0.0001\)) compared to linear interpolation (Figure 11). More examples are shown in Figure 10.
Appendix A.2. We trained CST on \(1K\) dynamics and then tested the model on \(300\) unseen noisy dynamics. Training is performed on \(10\) time points of the dynamics, while testing is performed on a time sequence that includes \(10\) additional time points that were unseen during training. Therefore, this is both an extrapolation task (with respect to the new initial condition of the dynamics), and an interpolation task (with respect to the unseen time points).
We compare CST with a Transformer model whose output is interpolated to obtain the predictions at data points between the given frames, and FNO2D and FNO3D (Li et al., 2020). We see that interpolation methods applied to the output of the transformers do not perform as well as CST, since they are negatively affected by noisy data. Moreover, we observe that while FNO3D is known to have achieved excellent results in interpolation tasks, the presence of irregularly sampled time points and noise greatly decreases the interpolation capabilities of the model, resulting in poor interpolation. We were unable to obtain good interpolations for FNO3D, despite properly fitting the data during training. The results of this experiment are shown in Table 2, and a list of parameters is given in Table 7. Overall, this experiment shows that CST is able to learn continuous dynamics and that this model is a powerful tool when operating on noisy and sparsely sampled data.
### Learning brain dynamics from calcium imaging data
Understanding how brain activity dynamics relate to cognition is a major open question in neuroscience (MacDowell and Buschman, 2020; Cardin et al., 2020; Vyas et al., 2020). Since CST is able to model complex non-local continuous spatiotemporal dynamics from data, we use CST to learn brain dynamics from widefield calcium imaging in freely behaving mice.
Widefield calcium imaging measures neuronal activity by measuring the amount of calcium influx in the cells (Chen et al., 2013; Cardin et al., 2020). We use the data from Lohani et al. (2020), in which the mice are presented with visual stimuli of varying contrasts.
The widefield imaging generates videos of shape \(x\in\mathbb{R}^{H\times W\times T}\), where \((H,W)\) represents the spatial resolution of a frame and \(T\) is the length of the video. To model brain dynamics, we want to account for how the different regions of the brain influence each other over time. To model the interaction between regions in space and time, we split the images into patches \(x_{P}\in\mathbb{R}^{N\times P_{0}\times P_{1}\times T}\), where \(N=\left(HW\right)/P_{0}P_{1}\) is the total number of patches per frame for a patch of size \(P_{0}\times P_{1}\). A similar approach is used in Dosovitskiy et al. (2020) and He et al. (2022).
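A minimal sketch of this patching step is shown below; with the values used later in this section, a \(184\times 208\) frame yields \(8\times 13=104\) patches. The tensor layout and function name are illustrative.

```python
import torch

def patchify(video, p0=23, p1=16):
    # video: (H, W, T) widefield recording; returns tokens of shape (N, p0, p1, T)
    # with N = (H*W)/(p0*p1) patches per frame (H, W assumed divisible by p0, p1).
    H, W, T = video.shape
    patches = video.reshape(H // p0, p0, W // p1, p1, T)   # split both spatial axes
    patches = patches.permute(0, 2, 1, 3, 4)               # (H/p0, W/p1, p0, p1, T)
    return patches.reshape(-1, p0, p1, T)                  # (N, p0, p1, T)

tokens = patchify(torch.zeros(184, 208, 120))
print(tokens.shape)   # torch.Size([104, 23, 16, 120])
```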
Next, the patches are perturbed and treated as tokens, as described in Section 3 and illustrated in Figure 4. For this experiment, the frames of size \(H=184\) and \(W=208\) were divided into patches of size \(P_{0}=23\) and \(P_{1}=16\), making a total of \(N=104\) patches per frame. The video is presented to the model as multiple segments of 10 frames. The recording is split 70/30 for training and validation. During training, the patches are perturbed with noise \(\mathcal{N}(0,\sigma=0.1)\). Figure 5 shows CST's performance in modeling a segment from the validation set, and the corresponding \(R^{2}\) coefficients. In addition to neural activity prediction, we also inspect the learned attention weights of the last frame
\begin{table}
\begin{tabular}{l c} \hline \hline Model & Mean Squared Error \\ \hline ConvGRU & 0.363 \\ ViViT & 0.3651 \\ 3D-ViT & 0.2505 \\ **CST** & **0.1138** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Mean squared error on the video inpainting task for the KITTI dataset.
\begin{table}
\begin{tabular}{l c} \hline \hline Model & \(MSE\) \\ \hline Linear & \((5.12\pm 0.27)\times 10^{-3}\) \\ Spline & \((5.54\pm 0.31)\times 10^{-3}\) \\ FNO3D & \((1.38\pm 0.08)\times 10^{0}\) \\ FNO2D & \((1.88\pm 0.11)\times 10^{-2}\) \\ **CST** & **(4.88 \(\pm\) 0.21)\(\times\)10\({}^{-3}\)** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results for the interpolation task on the Navier-Stokes dataset reported as (mean \(\pm\) std). The models were trained using \(10\) time points per curve, and during inference, \(20\) points randomly selected from the dynamics were predicted for unseen curves (i.e. new initial conditions).
of the sequence. Among the learned attention weights, some show patterns that match specific locations of the brain, such as the visual cortex (Figure 5).
Next, we test to what extent the learned attention weights encode information about the visual stimulus in comparison to the raw data (baseline). To test this, we used PCA to reduce the dimensionality of the attention distribution from its original flattened shape (tokens \(\times\) tokens) to 10 principal components and used a regression model to predict the contrast of the visual stimulus associated with the last frame of the sequence. We used 500 segments from the validation set, of which 70% were used to train a KNN regressor (k=3) and the remaining 30% were used for testing. The process of partitioning the data and fitting the regressor was repeated 10 times. To evaluate the influence of the number of parameters, we report CST-base (Transformer with 12 layers, 12 heads) and CST-small (6 layers, 6 heads) (see Table 8 for more details). The mean squared error (MSE) and the coefficient of determination (\(R^{2}\)) between the contrast of the visual stimulus presented and the prediction are shown in Table 3. Both CST-base and CST-small showed significantly higher \(R^{2}\) and significantly lower MSE than the baseline.
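This decoding analysis can be sketched with scikit-learn as follows; random arrays stand in for the actual attention maps and stimulus contrasts, and all names are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
attn = rng.normal(size=(500, 104 * 104))      # 500 segments, flattened tokens x tokens maps
contrast = rng.uniform(0, 1, size=500)        # stimulus contrast per segment

feats = PCA(n_components=10).fit_transform(attn)
scores = []
for rep in range(10):                          # repeat the 70/30 split 10 times
    Xtr, Xte, ytr, yte = train_test_split(feats, contrast, test_size=0.3, random_state=rep)
    pred = KNeighborsRegressor(n_neighbors=3).fit(Xtr, ytr).predict(Xte)
    scores.append((mean_squared_error(yte, pred), r2_score(yte, pred)))
mse, r2 = np.mean(scores, axis=0)
print(f"MSE={mse:.3f}, R2={r2:.3f}")
```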
Figure 4: Diagram of CST spatiotemporal encoding for calcium imaging recordings. Each frame (time point) of the recording is partitioned into patches. Dummy frames are inserted using the procedure described in Sec. 4.4. Each patch is treated as a token of the sequence and is combined with its positional information (i.e., position in space and time). These tokens are encoded by CST as described in Figure 1B and C. Loss of the model output w.r.t. the input data is computed in Sobolev space.
Figure 5: Modeling widefield calcium imaging with CST. The first row illustrates the input data. The second row shows CST’s prediction. The coefficient of determination (\(R^{2}\)) indicates the quality of the prediction. The third row shows the corresponding attention weights obtained between the last frame of the sequence and the different regions of the previous frames. The fourth row shows changes in the visual stimuli, which is used to illustrate changes in neural activity and attention caused by changes in stimuli. See Figure 12 for more examples.
We compare CST to other neural-network-based models that are capable of learning a latent representation of the dynamics. The models and their performances are listed in Table 3 (see Table 8 for architecture details). All architectures were chosen so that the models have roughly the same number of parameters and can be trained to convergence within 2 days on an RTX 3090 NVIDIA GPU. The LatentODE (Rubanova et al., 2019) was trained with a reversed RNN encoder running from the last to the first frame and then predicted the whole sequence from the encoded latent space. The LSTM (Hochreiter and Schmidhuber, 1997) was trained to recursively predict the frames of the sequence with windows of 5 frames as input. LFADS was trained to reconstruct 10 frames of the sequence through a bottleneck layer of 100 dimensions (factors). The results shown in Table 3 reflect the poor capacity of these models to learn the dynamics of brain activity, which stems from the low information content of their latent spaces. We hypothesize that the non-local spatial dynamics induced by high-contrast stimuli are responsible for the decreased performance of the other models compared to CST.
Next, we trained two models that account for spatially structured data: ConvLSTM (Shi et al., 2015) and ViViT. The ConvLSTM was trained similarly to the setup described for the LSTM. Despite ViViT being significantly larger than CST (Table 8), we observe that both CST-base and CST-small present more meaningful latent spaces with respect to encoding relevant information about the dynamics that describe the response to visual stimuli (Table 3).
## 5 Conclusions
We have presented the Continuous Spatiotemporal Transformer (CST), a new framework for applying Transformers to continuous systems. We have demonstrated CST's ability to learn the dynamics of several systems and the benefits that stem from a continuous representation. CST shows better interpolation performance compared to other methods and is capable of effectively up-sampling self-attention weights. Finally, we demonstrated CST on modeling brain dynamics from calcium imaging recordings. We showed that the latent space learned by CST is more informative of behaviorally relevant variables, such as visual stimulus contrast, compared to other models. Furthermore, the self-attention weights learned by CST provide a biologically meaningful representation of the underlying brain dynamics. We anticipate that CST can be used to model spatiotemporal dynamical systems from other domains.
|
2309.06842 | The Pulsar Magnetosphere with Machine Learning: Methodology | In this study, we introduce a novel approach for deriving the solution of the
ideal force-free steady-state pulsar magnetosphere in three dimensions. Our
method involves partitioning the magnetosphere into the regions of closed and
open field lines, and subsequently training two custom Physics Informed Neural
Networks (PINNs) to generate the solution within each region. We periodically
modify the shape of the boundary separating the two regions (the separatrix) to
ensure pressure balance throughout. Our approach provides an effective way to
handle mathematical contact discontinuities in Force-Free Electrodynamics
(FFE). We present preliminary results in axisymmetry, which underscore the
significant potential of our method. Finally, we discuss the challenges and
limitations encountered while working with Neural Networks, thus providing
valuable insights from our experience. | Ioannis Dimitropoulos, Ioannis Contopoulos, Vassilis Mpisketzis, Evangelos Chaniadakis | 2023-09-13T09:46:41Z | http://arxiv.org/abs/2309.06842v4 | # The Pulsar Magnetosphere with Machine Learning: Methodology
###### Abstract
We propose a new method for obtaining the general solution of the ideal force-free steady-state pulsar magnetosphere in 3D. We divide the magnetosphere in the regions of closed and open field lines and train two custom Physics Informed Neural Networks (PINNs) to yield the solution in each of these two regions. We also periodically adjust the shape of the separatrix between the two regions to satisfy pressure balance everywhere. Our method introduces several innovations over traditional methods that are based on numerical grids and finite differences. In particular, it introduces a proper treatment of mathematical contact discontinuities in FFE. We present preliminary results in axisymmetry which confirm the significant potential of our method.
keywords: pulsars - magnetic fields - numerical methods - machine learning
## 1 Introduction
55 years after the discovery of pulsars (Hewish et al., 1968), there still does not exist a complete physical model for their strongly magnetized plasma-filled magnetospheres. The ground for their modelling was set by Goldreich & Julian (1969), who suggested that one part of the magnetosphere rigidly corotates with the central star (the 'closed line region' or 'dead zone'), while the parts around the magnetic poles (the 'polar caps') open up to infinity beyond the light cylinder. An electric field generated by the stellar rotation fills the magnetosphere with electric charges and the open field lines with electric currents. For 30 years, everyone's focus was on the light cylinder, which was considered to be the source of magnetospheric dissipation. Our understanding changed dramatically when the first solution of the force-free ideal aligned pulsar magnetosphere was obtained (Contopoulos, Kazanas & Fendt, 1999; hereafter CKF). It was clearly proposed then that magnetospheric dissipation most probably takes place along the large scale magnetospheric electric current sheet that the new solution clearly identified, and not along the light cylinder.
CKF introduced a novel iterative numerical method that relaxed to the steady-state solution of the axisymmetric pulsar magnetosphere. The non-axisymmetric problem, however, was solved with a different approach. Most studies of 3D pulsar magnetospheres are performed with time-dependent numerical simulations that start with a dipole magnetic field that is set into rotation at \(t=0\). The simulations run for long enough times to attain a rotating steady-state configuration which is then presented as the solution of the pulsar magnetosphere. This has been the approach of the pioneering FFE (Force-Free Electrodynamics) simulations of Spitkovsky (2006) and Kalapotharakos & Contopoulos (2009), the MHD (Magneto-Hydro-Dynamics) simulations of Tchekhovskoy, Spitkovsky & Li (2013), the 'ab initio' PIC (Particle-In-Cell) simulations of Philippov & Spitkovsky (2014), Philippov, Spitkovsky & Cerutti (2015), and Kalapotharakos et al. (2018), and most recently, the hybrid PIC-MHD simulations of Cerutti et al. (2022). These solutions have confirmed that the bulk of the magnetosphere operates under ideal force-free conditions, while electromagnetic dissipation, particle acceleration and high-energy radiation take place mostly along the equatorial current sheet. The steady-state problem was also solved with spectral methods (Parfrey, Beloborodov & Hui, 2012, Petri, 2012) which, unfortunately, behave very poorly in the equatorial current sheet.
Time-dependent 3D numerical simulations have evolved dramatically in the past 10 years. Unfortunately, there exists no reference 3D ideal force-free solution with which to compare and validate their results. Thus, several important issues are left unanswered and new problems arose:
1. In all PIC simulations the magnetosphere opens up a significant distance inside the light cylinder (their so-called 'Y-point' may appear down to 85% of the light cylinder radius; e.g. Hu & Beloborodov 2022). If true, this would dramatically modify the pulsar spindown rate, the pulsar braking index, as well as the shape and spectrum of the high-energy pulsar radiation originating around the Y-point. We do not understand the physical origin of this effect in PIC simulations. We do not agree with the common explanation that it is due to the artificially high inertia of the PIC superparticles. Instead, we suspect that it is related to a subtle minimum of the magnetospheric electromagnetic energy (Contopoulos, Notsikas & Gourgouliatos, 2023). We still need to understand where
the Y-point truly lies, and how fast it moves out as the pulsar spins down.
2. One major complication of pulsar magnetospheres is the appearance of electric current sheets inside which the ideal MHD approximation breaks down. Current sheets exist as mathematical contact-discontinuities across which \(B^{2}-E^{2}\) is everywhere continuous (see below). MHD/FFE methods smooth out such discontinuities across several computational grid cells. Unfortunately, ideal FFE conditions are not valid inside current sheets. Moreover, different MHD numerical methods treat the equatorial current sheet differently (e.g. CKF and Timokhin 2006 erroneously place it inside the closed line region), and it is not possible in general to differentiate between artificial (numerical) and true (physical) features in the published solutions (e.g. interchanging multiple positive and negative charge layers as in Kalapotharakos et al. 2012, Mahlmann et al. 2022). Unfortunately, there does not exist a reference solution of the ideal (dissipation-less) force-free 3D problem against which to evaluate our numerical simulations. A reference solution exists only in 2D (CKF, Timokhin 2006).
3. Recent global PIC simulations (Cerutti, Philippov & Dubus 2020, Hakobyan, Philippov & Spitkovsky 2023) show that the electromagnetic (Poynting) flux remaining in the pulsar magnetosphere decreases as the logarithm of the distance r over the radius of the light cylinder \(R_{\rm LC}\), namely \(\propto 1-\beta_{\rm rec}\ln(r/R_{\rm LC})\). With \(\beta_{\rm rec}\sim 0.1\) as obtained from local PIC simulations of relativistic reconnection layers (e.g. Lyubarsky 2005, Werner et al. 2018), this results in the full pulsar magnetosphere completely disappearing via dissipation in the equatorial current sheet within a few hundred light cylinder radii. According to Hakobyan, Philippov & Spitkovsky (2023), this logarithmic gradual electromagnetic energy dissipation applies to the full magnetosphere, not only to its undulating non-axisymmetric component as Cerutti, Philippov & Dubus (2020) claim. This leaves no pulsar wind to extend to the termination shock to power the X-ray pulsar nebula. We do not understand this result.
4. Time dependent simulations relax to one final steady-state solution. In nature, however, pulsars often exhibit mode-switching which cannot be accounted for by a unique solution. We must look for alternative global pulsar magnetospheric solutions, and we need to understand why and how pulsars may spontaneously jump between them (e.g. Ntotsikas et al. 2023).
5. The resolution of current PIC simulations is inadequate by several orders of magnitude to properly model the microphysics of the equatorial current sheet. In order to generate pulsar light curves and spectra that may be compared with observations, the simulation results (particle Lorentz factors, magnetic field values, electromagnetic spectra) are extrapolated by several orders of magnitude. Unfortunately, there is no agreement among different research groups on the particular method of extrapolation, and as a result, there is still no generally accepted understanding of the physical origin of the high-energy radiation from pulsars (e.g. synchrotron as in Hakobyan, Philippov & Spitkovsky 2023; gamma-ray fundamental plane as in Kalapotharakos, Wadiasingh, Harding & Kazanas 2022; inverse Compton as in Richards & Lyutikov 2018; etc.). The remedy would be to investigate the trajectories of accelerated particles for realistic physical parameters (not PIC parameters) in the current sheet obtained in a reference solution.
We believe that it is probably too early to derive safe conclusions about the pulsar magnetosphere from current global ('ab initio') PIC simulations, thus we can only trust them qualitatively (not quantitatively) to make meaningful comparisons with observations. Their parameters are orders of magnitude away from realistic values, and except for the aligned (non-pulsar) rotator, there is currently no way to validate their results by comparison with a reference solution. Given the inherent problems of all numerical methods discussed above, we believe that this is a dangerous development that may strongly mislead pulsar research. We thus propose to return to the basics and independently obtain the reference ideal force-free magnetosphere with a novel numerical method. In the present paper we will present our first solutions of the axisymmetric problem, but our results will be directly generalized to the case of the 3D inclined rotator in a forthcoming publication. We present the general mathematical formulation of the problem in § 2. In § 4 we propose two novel numerical approaches that will allow us to obtain the solution in 3D. In § 5 we formulate the problem in the axisymmetric case of the aligned rotator and present our first solutions of the pulsar equation with a trained Physics Informed Neural Network. We conclude in § 6 with a discussion of our results and forthcoming investigations.
## 2 General mathematical formulation
The magnetospheres of isolated neutron stars are generally considered to be dominated by the magnetic field. The physical conditions in the magnetosphere, i.e. the low mass of the charge carriers and the low density of the magnetospheric plasma, allow us to neglect gravity, thermal pressure and particle inertia as they are several orders of magnitude smaller than the electromagnetic forces. Particle inertia may only become important at the tip of the closed line region very close to the light cylinder. Other than that, force balance in the bulk of the pulsar magnetosphere is reduced to
\[\rho_{\rm e}{\bf E}+{\bf J}\times{\bf B}=0\;, \tag{1}\]
where \({\bf B}\) and \({\bf E}\) are the magnetic and the electric field respectively, \(\rho_{\rm e}\) is the electric charge density, and \({\bf J}\) is the electric current
\[{\bf J} \equiv \frac{c}{4\pi}\nabla\cdot{\bf E}\;\frac{{\bf E}\times{\bf B}}{B^{2}}+\frac{c}{4\pi}\frac{{\bf B}\cdot\nabla\times{\bf B}-{\bf E}\cdot\nabla\times{\bf E}}{B^{2}}\,{\bf B}\;. \tag{2}\]
The above general expression was obtained by Gruzinov (1999) and Blandford (2002), and simplifies considerably in steady-state.
All fields are calculated in the non-rotating inertial lab frame. In that frame, electromagnetic fields need to satisfy Maxwell's equations, namely
\[\nabla\cdot{\bf B}=0\;,\;\nabla\times{\bf B}=\frac{1}{c}\frac{ \partial{\bf E}}{\partial t}+\frac{4\pi}{c}{\bf J}\;,\] \[\nabla\cdot{\bf E}=\frac{4\pi}{c}\rho_{\rm e}\;,\;\nabla\times{ \bf E}=-\frac{1}{c}\frac{\partial{\bf B}}{\partial t}\;. \tag{3}\]
We will be searching for the steady-state solution to the above set of equations. Following the approach of Muslimov & Harding (2009), we will define the transformation of partial time derivatives between our lab frame, and a mathematical (_not physical_) reference frame with the pulsar angular velocity \(\Omega\) around the axis \(\theta=0\), namely
\[\frac{\partial{\bf B}}{\partial t} = \frac{\partial{\bf B}}{\partial t}\bigg{|}_{\rm corot}-\nabla\times\left(r\sin\theta\,\Omega\,\hat{\phi}\times{\bf B}\right)\;,\] \[\frac{\partial{\bf E}}{\partial t} = \frac{\partial{\bf E}}{\partial t}\bigg{|}_{\rm corot}-\nabla\times\left(r\sin\theta\,\Omega\,\hat{\phi}\times{\bf E}\right)+r\sin\theta\,\Omega\,\hat{\phi}\,\nabla\cdot{\bf E}\;,\] \[\frac{\partial\rho_{\rm e}}{\partial t} = \frac{\partial\rho_{\rm e}}{\partial t}\bigg{|}_{\rm corot}+r\sin\theta\,\Omega\,\hat{\phi}\cdot\nabla\rho_{\rm e}\;. \tag{4}\]
Time derivatives in that frame (denoted by the subscript 'corot') vanish in steady state, and therefore, the Maxwell's equations that
describe the steady state of the force-free pulsar magnetosphere become
\[\nabla\times\left(\mathbf{E}+\frac{r\sin\theta\,\Omega}{c}\hat{\phi}\times\mathbf{B}\right) = 0\,, \tag{5}\] \[\nabla\times\left(\mathbf{B}-\frac{r\sin\theta\,\Omega}{c}\hat{\phi}\times\mathbf{E}\right) = \frac{4\pi}{c}\mathbf{J}-\frac{r\sin\theta\,\Omega}{c}\hat{\phi}\,\nabla\cdot\mathbf{E}\;. \tag{6}\]
Let us define here the light cylinder radius \(R_{LC}\equiv c/\Omega\). From eq. (5) above we obtain that
\[\mathbf{E}\equiv\mathbf{E}_{p}=-\frac{r\sin\theta}{R_{LC}}\hat{\phi}\times\mathbf{B}=\frac{r\sin\theta}{R_{LC}}(B_{\theta},-B_{r},0) \tag{7}\]
in component form, where the subscript 'p' denotes a poloidal component. Following eq. (7), \(\mathbf{E}\cdot\nabla\times\mathbf{E}=0\), and the expression for \(\mathbf{J}\) in eq. (2) simplifies considerably. Substituting everything back in eq. (6) we obtain \(\left\{\nabla\times\left[\mathbf{B}_{p}(1-(r\sin\theta/R_{LC})^{2})+B_{\phi}\hat{\phi}\right]\right\}\times\mathbf{B}=0\) or equivalently,
\[\nabla\times\left\{\mathbf{B}_{p}\left(1-\left(\frac{r\sin\theta}{R_{LC}}\right)^{2}\right)+B_{\phi}\hat{\phi}\right\}=\alpha\mathbf{B}\;. \tag{8}\]
Here, \(\alpha\) is the force-free parameter that obeys the constraint
\[\mathbf{B}\cdot\nabla\alpha=0\;, \tag{9}\]
namely that \(\alpha\) is constant along magnetic field lines. In other words, field lines lie along iso-contours of \(\alpha\) in 3D. Notice that solving for the steady state configuration in the co-rotating frame has an important advantage over time-dependent simulations in the nonrotating (lab) frame. In the latter, the final configuration is rotating, i.e. time-dependent. All its complex features (current sheets, Y-points, etc.) rotate in the simulation frame of reference, hence it is difficult to treat them and to guarantee that a steady-state is truly reached. On the other hand, solving for the steady-state configuration in the co-rotating frame is a relaxation approach that allows a better treatment of current sheets and more naturally reaches the final steady state. Eq. (8) has first been obtained by Endean (1974) and Mestel (1975) but has never been solved before in 3D.
We now need to introduce the vector potential \(\mathbf{A}\) such that \(\mathbf{B}=\nabla\times\mathbf{A}\). We will work in spherical coordinates \((r,\theta,\phi)\) centered onto the central star. Our notation will be slightly simplified if we introduce the vector magnetic flux components
\[\Psi_{r}\equiv rA_{r}\;,\;\Psi_{\theta}\equiv rA_{\theta}\;,\;\Psi_{\phi}\equiv r\sin\theta\,A_{\phi}\;. \tag{10}\]
In that notation,
\[B_{r} \equiv \frac{1}{r^{2}\sin\theta}\left(\frac{\partial\Psi_{\phi}}{\partial\theta}-\frac{\partial\Psi_{\theta}}{\partial\phi}\right)\;,\] \[B_{\theta} \equiv \frac{1}{r^{2}\sin\theta}\left(\frac{\partial\Psi_{r}}{\partial\phi}-r\frac{\partial\Psi_{\phi}}{\partial r}\right)\;,\] \[B_{\phi} \equiv \frac{1}{r}\frac{\partial\Psi_{\theta}}{\partial r}-\frac{1}{r^{2}}\frac{\partial\Psi_{r}}{\partial\theta}\;, \tag{11}\]
and eq. (8) becomes
\[\frac{\partial^{2}\Psi_{r}}{\partial\phi^{2}}-\frac{\partial^{2} \Psi_{\phi}}{\partial\partial\phi}=\frac{r\sin\theta}{1-(r\sin\theta/R_{LC})^ {2}}\left\{-\alpha\left(\frac{\partial\Psi_{\phi}}{\partial\theta}-\frac{ \partial\Psi_{\phi}}{\partial\phi}\right)\right.\] \[\left.+\cos\theta\,\frac{\partial\Psi_{\phi}}{\partial r}-\frac{ \cos\theta}{r}\frac{\partial\Psi_{r}}{\partial\theta}+\sin\theta\,\frac{ \partial^{2}\Psi_{\phi}}{\partial\theta\partial r}-\frac{\sin\theta}{r}\frac{ \partial^{2}\Psi_{r}}{\partial\theta^{2}}\right\}\;, \tag{12}\]
\[\frac{\partial^{2}\Psi_{\theta}}{\partial\phi^{2}}-r\frac{\partial^{2} \Psi_{\phi}}{\partial\theta\partial\phi}=\frac{r\sin\theta}{1-(r\sin\theta/R_ {LC})^{2}}\left\{-\alpha\left(\frac{\partial\Psi_{r}}{\partial\phi}-r\frac{ \partial\Psi_{\phi}}{\partial r}\right)\right.\] \[-\left.r\sin\theta\,\frac{\partial\Psi_{\phi}}{\partial r^{2}}- \frac{\sin\theta}{r}\frac{\partial\Psi_{r}}{\partial\theta}+\sin\theta\,\frac{ \partial^{2}\Psi_{r}}{\partial\theta\partial r}\right\}\;, \tag{13}\]
\[\frac{\partial^{2}\Psi_{\theta}}{\partial r^{2}}+\frac{1}{r^{2} }\frac{\partial^{2}\Psi_{\phi}}{\partial\theta^{2}}-\frac{1}{r^{2}}\frac{ \partial^{2}\Psi_{\theta}}{\partial\theta\phi}-\frac{1}{r}\frac{\partial^{2} \Psi_{r}}{\partial\vartheta\phi}\] \[+\frac{1}{r^{2}}\frac{\partial\Psi_{r}}{\partial\phi}-\frac{\cos \theta}{r^{2}\sin\theta}\left(\frac{\partial\Psi_{\phi}}{\partial\theta}- \frac{\partial\Psi_{\phi}}{\partial\phi}\right)=\] \[\frac{\sin\theta}{1-(r\sin\theta/R_{LC})}\left\{-\alpha\left( \frac{\partial\Psi_{\phi}}{\partial r}-\frac{1}{r}\frac{\partial\Psi_{r}}{ \partial\theta}\right)\right.\] \[+\left.\frac{2\cos\theta}{R_{LC}^{2}}\left(\frac{\partial\Psi_{ \phi}}{\partial\theta}-\frac{\partial\Psi_{\phi}}{\partial\theta}\right)- \frac{2\sin\theta}{R_{LC}^{2}}\left(\frac{\partial\Psi_{r}}{\partial\phi}-r \frac{\partial\Psi_{\phi}}{\partial r}\right)\right\} \tag{14}\]
From the r.h.s. of eq. (14), the reader can check directly that the general regularization condition at the light cylinder \(r\sin\theta=R_{LC}\) becomes
\[\alpha=\left.2\,\frac{\cos\theta\,B_{r}-\sin\theta\,B_{\theta}}{B_{\phi}R_{LC}}\right|_{\mathrm{LC}}\equiv\left.\frac{2B_{z}}{B_{\phi}R_{LC}}\right|_{\mathrm{LC}}\;. \tag{15}\]
Eq. (15) determines the value of the function \(\alpha\) along all field lines that cross the light cylinder and never return to the star. In an untwisted magnetosphere, all other field lines that do not cross the light cylinder and form the closed line region have \(\alpha=0\).
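As an independent consistency check of the flux-function notation, the following sympy snippet verifies that the standard curl of \(\mathbf{A}\) in spherical coordinates, rewritten with the definitions of eq. (10), reproduces the component expressions of eq. (11); it is a verification sketch and not part of our numerical implementation.

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
Pr, Pt, Pp = [sp.Function(n)(r, th, ph) for n in ('Psi_r', 'Psi_theta', 'Psi_phi')]
Ar, At, Ap = Pr / r, Pt / r, Pp / (r * sp.sin(th))      # eq. (10) inverted

# standard curl components of A in spherical coordinates (r, theta, phi)
B_r  = (sp.diff(sp.sin(th) * Ap, th) - sp.diff(At, ph)) / (r * sp.sin(th))
B_th = (sp.diff(Ar, ph) / sp.sin(th) - sp.diff(r * Ap, r)) / r
B_ph = (sp.diff(r * At, r) - sp.diff(Ar, th)) / r

# the corresponding expressions quoted in eq. (11)
B_r_11  = (sp.diff(Pp, th) - sp.diff(Pt, ph)) / (r**2 * sp.sin(th))
B_th_11 = (sp.diff(Pr, ph) - r * sp.diff(Pp, r)) / (r**2 * sp.sin(th))
B_ph_11 = sp.diff(Pt, r) / r - sp.diff(Pr, th) / r**2

for lhs, rhs in [(B_r, B_r_11), (B_th, B_th_11), (B_ph, B_ph_11)]:
    print(sp.simplify(lhs - rhs) == 0)   # expect: True, True, True
```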
## 3 Machine Learning
We propose to solve the steady-state pulsar magnetosphere problem using Machine Learning techniques, and in particular by training a custom Neural Network (hereafter NN). In § 4 below we will introduce several innovations that will lead to a better solution than what was obtained before in both 2D and 3D. Our formulation is general, but our method will also be implemented in axisymmetry in § 5.
(iv) \(\alpha=0\) along field lines that do not cross the light cylinder (we will assume here an untwisted closed line region)
(v) Boundary magnetic field conditions on the stellar surface. If we consider a central dipole magnetic field on a star of radius \(r_{*}\) with the axis of the dipole along some direction \(\hat{m}\) with a star-centered spherical coordinate system \((r,\theta_{m},\phi_{m})\) along that direction, then
\[\Psi_{r}(r_{*},\theta_{m},\phi_{m}) = \Psi_{\theta_{m}}(r_{*},\theta_{m},\phi_{m})=0\] \[\Psi_{\phi_{m}}(r_{*},\theta_{m},\phi_{m}) = \Psi_{\rm max}\sin^{2}\theta_{m} \tag{16}\]
From the above star-centered components \(\Psi_{r},\Psi_{\theta_{m}},\Psi_{\phi_{m}}\) of the vector potential on the stellar surface, we will obtain the actual components \(\Psi_{r},\Psi_{\theta},\Psi_{\phi}\) of the vector potential on the stellar surface as boundary conditions of our problem.
One nice thing about NNs is that we can consider any number of conditions that the physical problem requires and, given enough internal layers and nodes, training points and training steps, the NN will manage to satisfy all of them by minimizing a global loss function which is just the sum of the various constraints. Note that boundary conditions are also introduced in the form of extra constraints. For example, the loss functions that need to be minimized to zero and describe eq. (16) are \(|\Psi_{r}|,|\Psi_{\theta_{m}}|\), and \(|\Psi_{\phi_{m}}-\Psi_{\rm max}\sin^{2}\theta_{m}|\) summed over the \((r_{*},\theta_{m},\phi_{m})\) training points chosen randomly along the stellar surface. One other condition that needs to be satisfied everywhere is \(E<B\). We have checked that this condition is automatically satisfied when magnetic field lines that cross the light cylinder open up to infinity, thus it is not necessary to impose it through a loss function term.
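As an illustration of how such boundary constraints enter the global loss, the following PyTorch sketch evaluates the stellar-surface terms for an aligned dipole (so that \(\theta_{m}=\theta\)); the network interface, the mean-absolute-error form, and all numerical values are illustrative choices.

```python
import torch

def surface_bc_loss(model, n_pts=800, r_star=0.1, psi_max=1.0):
    # model is assumed to map (r, theta, phi) points to (Psi_r, Psi_theta, Psi_phi);
    # the loss sums |Psi_r|, |Psi_theta|, and |Psi_phi - Psi_max sin^2(theta)|
    # over random points on the stellar surface r = r_star.
    theta = torch.pi * torch.rand(n_pts, 1)
    phi = 2 * torch.pi * torch.rand(n_pts, 1)
    pts = torch.cat([torch.full_like(theta, r_star), theta, phi], dim=1)
    psi_r, psi_t, psi_p = model(pts).unbind(dim=1)
    return (psi_r.abs().mean()
            + psi_t.abs().mean()
            + (psi_p - psi_max * torch.sin(theta.squeeze(1))**2).abs().mean())
```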
In our present PINN implementation, we used the PyTorch Deep Learning library. Code development was carried out in Python on Anaconda Jupyter Notebooks running on local GPUs, while production runs were performed in the Cloud on Google's Colab. The optimal number of internal nodes and training steps, the optimal activation functions, and the NN optimization parameters are not known a priori, and are determined after several trial trainings of the problem (see one particular implementation in § 5.5 below). We implemented Adam optimizers, SiLU activation functions, and ReduceLROnPlateau schedulers from the PyTorch library. Another nice thing about NNs is that they are mesh-less, and the distribution of training points in the computational domain may be randomly chosen. This allows us to train the NN more precisely around the critical regions of interest (e.g. the separatrix between closed and open field lines, the Y-point at the tip of the closed line region, the equatorial current sheet, etc.). It also allows us to easily change the computational domain when the separatrix changes shape, as we will see below. Both of the above benefits are very hard to implement in classical finite-difference numerical integration techniques with adaptive mesh refinement in a moving and deformable numerical grid. In our 2D axisymmetric runs (see § 5 below) we used 500 random training points in each magnetospheric domain (closed and open field lines), 800 random training points along the boundaries \(r=r_{*}\) (the stellar surface), \(q=0\) (infinity), \(\theta=\pi/2\) (the equator), and 1,000 random training points along the separatrix. These random training points were updated every 1,000 training steps. We trained our PINNs for a total of about 100,000 training steps before we were satisfied with the loss convergence. One particularly encouraging result was the reproduction of important known features of the solution, such as the distribution of the poloidal electric current in the open line region (see figure 3 below).
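The training setup can be sketched as follows: a fully connected network with SiLU activations, an Adam optimizer, a ReduceLROnPlateau scheduler, and random collocation points redrawn every 1,000 steps. The layer sizes, learning rate, and the placeholder `total_loss` are illustrative; the actual loss sums the equation-residual and boundary terms discussed above.

```python
import torch
import torch.nn as nn

def make_pinn(n_in=3, n_out=3, width=64, depth=6):
    layers = []
    for i in range(depth):
        layers += [nn.Linear(n_in if i == 0 else width, width), nn.SiLU()]
    layers += [nn.Linear(width, n_out)]
    return nn.Sequential(*layers)

def total_loss(model, pts):
    # placeholder: the sum of PDE-residual and boundary-condition terms goes here
    return model(pts).pow(2).mean()

model = make_pinn()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.5, patience=2000)

for step in range(100_000):
    if step % 1_000 == 0:                      # redraw the random training points
        pts = torch.rand(500, 3, requires_grad=True)
    loss = total_loss(model, pts)
    opt.zero_grad()
    loss.backward()
    opt.step()
    sched.step(loss.item())
```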
## 4 Innovations
The general 3D PINN numerical procedure described in § 3 should relax to a solution where the closed line region extends up to some distance inside the light cylinder. The main problem with all numerical methods is that the solution of the pulsar magnetosphere contains electric current sheets along the separatrix and along the equator where the force-free ideal MHD condition breaks down. Unfortunately, the pulsar equation is not informed of their presence, and therefore, without special attention on the part of the programmer, it is incapable of supporting the mathematical contact discontinuities along them. This problem is addressed in the following subsection. One other problem is that, in past numerical methods, the imposition of the constraint \(E<B\) was used to open up magnetic field lines that cross the light cylinder (\(E<B\) is automatically satisfied inside the light cylinder where \(r\sin\theta<R_{\rm LC}\)). Therefore, if we want to study solutions where the closed line region does not extend up to the light cylinder, we must find other ways to open up the magnetosphere outside the closed line region. This problem will be addressed in subsection 4.2 below.
### Decomposition of the computational domain
The first thing we want to do is to decompose the computational domain into the regions of closed and open field lines. This is a central element of our methodology, namely that we choose from the very beginning of our simulation which field lines will be open and which ones will be closed. This is equivalent to an ad hoc determination from the very beginning of the extent of the polar cap (see below). In general, the solution that we will obtain will contain a closed line region that does not extend all the way to the light cylinder. It will, however, be a valid solution, as has been known since Contopoulos (2005) and Timokhin (2006). To this day it is not clear why in some time-dependent simulations the closed line region extends all the way to the light cylinder (e.g. Spitkovsky 2006, Kalapotharakos & Contopoulos 2009, etc.), while in others it extends to only about 85% of the light cylinder (e.g. Hu & Beloborodov 2022, Hakobyan et al. 2023). Which size of the polar cap corresponds to the physical (true) solution is a very important question that we want to elucidate in our next paper.
The two domains are separated by the separatrix surface characterized by the line where it originates on the surface of the star (see figure 1). In principle, one may obtain an analytic expression for the shape of the separatrix surface, namely
\[r_{\rm S}=r_{\rm S}(\theta,\phi)\;. \tag{17}\]
Obviously, the shape of the separatrix surface is unknown and must be determined. Let us assume for the moment that we choose ad hoc some initial form of \(r_{\rm S}\) in eq. (17) that originates around the magnetic poles of the central star at inclined polar angles \(\theta_{m}=\theta_{\rm pc}\), \(0\leq\phi_{m}\leq 2\pi\) and \(\theta_{m}=\pi-\theta_{\rm pc}\), \(0\leq\phi_{m}\leq 2\pi\). Here, \(\theta_{\rm pc}\) is the angular opening of the polar cap, and \(\theta_{m},\phi_{m}\) are spherical coordinates centered around the inclined magnetic axis \(\hat{m}\) of the central star. Notice that for angles \(\theta_{m}<\theta_{\rm pc}\) and \(\theta_{m}>\pi-\theta_{\rm pc}\), the radius connecting the point \((r,\theta,\phi)\) with the origin does not cross the separatrix. Thus, in the open line region, the domain of integration becomes \(r\geq r_{*}\), \(0\leq\theta_{m}\leq\theta_{\rm pc}\) and \(\pi-\theta_{\rm pc}\leq\theta_{m}\leq\pi\), together with \(r\geq r_{\rm S}\), \(\theta_{\rm pc}\leq\theta_{m}\leq\pi-\theta_{\rm pc}\). In the closed line region, the domain of integration becomes \(r_{*}\leq r\leq r_{\rm S}\), \(\theta_{\rm pc}\leq\theta_{m}\leq\pi-\theta_{\rm pc}\). The above \(r,\theta_{m},\phi_{m}\) intervals correspond to \(r,\theta,\phi\) intervals and express mathematically what is obvious geometrically as the closed line and open line regions in the schematic of figure 1.
The reason we decided to separate the region of open and closed field lines is that, due to the electric current sheet that flows along the separatrix between the two regions, a strong contact discontinuity (of practically infinitesimal thickness) in the magnetic and electric fields exists along the separatrix. Uzdensky (2003) (see also Lyubarskii 1990) integrated eq. (1) across current sheets and obtained that
\[(B^{2}-E^{2})_{\rm below}=(B^{2}-E^{2})_{\rm above}. \tag{18}\]
Treating such discontinuities inside any computational domain is very problematic and any computation method would generate spurious Gibbs oscillations around the separatrix. This is seen in all previous solutions of the pulsar magnetosphere (e.g. Kalapotharakos et al. 2012, Hu & Beloborodov 2022, Mahlmann et al. 2022). We propose here a better way to treat such discontinuities:
1. Solve eqs. (12)-(14) in the two regions independently for an initial arbitrary choice of the separatrix surface between them. An informed but not necessarily unique initial choice would be the dipole magnetic field surface that originates at \(\theta_{m}=\theta_{\rm pc}\) and \(\theta_{m}=\pi-\theta_{\rm pc}\) on the stellar surface, which corresponds to \[\Psi_{\rm dipole}\equiv\Psi_{\rm max}\frac{r_{*}}{r}\sin^{2}\theta_{m}=\Psi_{\rm max}\sin^{2}\theta_{\rm pc}\, \tag{19}\] and which yields \[r_{\rm S}=r_{*}\,\frac{\sin^{2}\theta_{m}}{\sin^{2}\theta_{\rm pc}}\ \ {\rm if}\ \theta_{\rm pc}\leq\theta_{m}\leq\pi-\theta_{\rm pc}. \tag{20}\]
Obviously, for such an ad hoc shape of the separatrix surface, the quantity \(B^{2}-E^{2}\) will in general be discontinuous across that surface.
2. Iteratively adjust the shape of the separatrix surface \(r=r_{\rm S}(\theta,\phi)\) at all points \((\theta,\phi)\) so as to satisfy the condition that \(B^{2}-E^{2}\) is continuous at all points of the separatrix.
In other words, we choose an initial dipolar shape for the separatrix surface, train the two PINNs in the open and closed line regions for a number of steps, calculate \(B^{2}-E^{2}\) at each point of the separatrix in the adjacent closed and open line regions, and displace that point proportionally to the difference between the two corresponding values, in the direction of the smaller value. We must acknowledge that the shape of the separatrix \(r_{\rm S}(\theta,\phi)\) may well not be a single-valued function of \(\theta\) and \(\phi\) for highly inclined rotators. We will thus take particular care in 3D to displace each separatrix point along the direction _perpendicular_ to its surface. After each readjustment of the separatrix surface we continue the training of the PINNs for a few more steps, and then repeat the readjustment. This procedure has never been tried before and, as we will see below, it yields a final solution in which \(B^{2}-E^{2}\) is continuous everywhere across the separatrix.
### The numerical 'disappearance' of the equatorial current sheet
A second improvement to the solution method has to do with the numerical treatment of the equatorial current sheet that originates at the tip of the closed line region. The reason there exists an equatorial current sheet is that magnetic field lines leave one pole of the star, open up to infinity, and return from infinity to the other pole of the star. In doing so, they also carry the same amount of poloidal electric current in each hemisphere from each pole of the star to infinity. These two electric currents return to the star through the equatorial current sheet. In other words, the equatorial current sheet is there to close the global poloidal electric current circuit (CKF). It is thus obvious that, if we artificially (numerically) invert the direction of the field lines that leave the star from the southern pole, the electric current direction in the southern hemisphere will be inverted, and therefore there will be no need to close the global poloidal electric current circuit through an equatorial current sheet. This configuration is clearly artificial (it is equivalent to a magnetic monopole), but it is mathematically and dynamically equivalent to the configuration that we are investigating in the open line region, only without the mathematical discontinuity of the equatorial current sheet! We are able to implement this trick because we are treating the open line region independently from the closed line region. This is the same trick assumed by Bogovalov (1999) when he obtained the solution for the tilted split monopole. We here generalize his approach and show that it is also valid (and very helpful) in the numerical treatment of the open line region in the more general dipole magnetosphere. We have thus found a way to make the equatorial current sheet discontinuity 'numerically disappear'.
A numerical implementation of this trick is to assume that, in the open line region outside the closed line region, the boundary conditions for the magnetic flux function on the stellar polar caps will be
\[\begin{aligned}\Psi_{\phi_{m}}(r_{*},\theta_{m},\phi_{m})&=\Psi_{\rm max}\sin^{2}\theta_{m}\qquad{\rm for}\ 0\leq\theta_{m}\leq\theta_{\rm pc}\,\\ \Psi_{\phi_{m}}(r_{*},\theta_{m},\phi_{m})&=2\Psi_{\rm S}-\Psi_{\rm max}\sin^{2}\theta_{m}=\Psi_{\rm max}(2\sin^{2}\theta_{\rm pc}-\sin^{2}\theta_{m})\qquad{\rm for}\ \pi-\theta_{\rm pc}\leq\theta_{m}\leq\pi\,\\ \Psi_{r}(r_{*},\theta_{m},\phi_{m})&=0\,\\ \Psi_{\theta_{m}}(r_{*},\theta_{m},\phi_{m})&=0.\end{aligned} \tag{21}\]
What we have done here is to express the boundary conditions in terms of the vector magnetic flux components \(\Psi_{r},\Psi_{\theta_{m}},\Psi_{\phi_{m}}\) in the inclined spherical system of coordinates \((r,\theta_{m},\phi_{m})\), in which a pure magnetic dipole is described by \(\Psi_{r}=\Psi_{\theta_{m}}=0\) and \(\Psi_{\phi_{m}}=\Psi_{\rm max}\sin^{2}\theta_{m}\). Obviously, in the open line region on the surface of the star, \(0\leq\Psi_{\phi_{m}}\leq 2\Psi_{\rm S}\), whereas in the closed line region on the surface of the star, \(\Psi_{\rm S}\leq\Psi_{\phi_{m}}\leq\Psi_{\rm max}\) (\(\Psi_{\rm S}\equiv\Psi_{\rm max}\sin^{2}\theta_{\rm pc}\)). This mathematical trick simplifies tremendously the numerical treatment of the large-scale open line region. Notice that, with or without the
Figure 1: Schematic of the decomposition of the computational domain into a region of closed and open field lines. Closed line region: between the central star \(r=r_{\rm s}\) and the separatrix \(r=r_{\rm S}(\theta,\phi)\) (thick red line). Open line region: outside the separatrix \((r>r_{\rm S}(\theta,\phi))\). In the open line region all field lines are artificially outflowing and there is no equatorial current sheet during the PINN training. When the PINN is trained, the open field lines in the southern hemisphere are reversed and the equatorial current sheet in the open line region appears along the surface of field-direction reversal (dotted red line).
artificial flux reversal in the southern hemisphere, the pressure balance condition along the separatrix remains the same. After the solution is obtained, the field will be reversed in the southern hemisphere, and the true magnetic field configuration will be obtained with an equatorial current sheet along the surface of flux direction reversal.
There is an important advantage of this method over standard numerical methods that include the equatorial current sheet in their domain of numerical integration. As \(\theta\) approaches the equatorial current sheet at \(\theta=\theta_{\rm e.c.s.}\) from above, the first term in the expression for the field component \(B_{r}\) in eq. (11), namely \(\partial\Psi_{\phi}/\partial\theta/(r^{2}\sin\theta)\), reaches in general a non-zero value that is immediately reversed below the equatorial current sheet. In other words,
\[\frac{\partial\Psi_{\phi}}{\partial\theta}(r,\theta\to\theta_{\rm e.c.s.}^{-})=-\frac{\partial\Psi_{\phi}}{\partial\theta}(r,\theta\to\theta_{\rm e.c.s.}^{+})\neq 0. \tag{22}\]
Unfortunately, due to the symmetry of \(\Psi_{\phi}\) and its derivatives above and below the equatorial current sheet, in a numerical scheme without artificial field reversal in the southern hemisphere, the latitudinal derivative \(\partial\Psi_{\phi}/\partial\theta\) would be equal to zero in the middle of the current sheet, which is not true in general. The numerical trick that we propose in this subsection naturally allows for \(\partial\Psi_{\phi}/\partial\theta(r,\theta\to\theta_{\rm e.c.s.})\neq 0\), which, after the final field reversal, naturally satisfies eq. (22).
Finally, by choosing the angular size \(\theta_{\rm pc}\) of the polar cap, this approach allows us to choose the amount of magnetic flux \(\Psi_{\rm S}\equiv\Psi_{\rm max}\sin^{2}\theta_{\rm pc}\) that will open to infinity.1 Due to the artificial field reversal, this amount is 'forced' to extend to infinity, since it has no way to return to the star in the open line region. As we acknowledged above, we checked that the condition \(E<B\) is automatically satisfied in the open line region beyond the light cylinder.
Footnote 1: Notice that we can even choose a more general non-circular polar cap by specifying a non-uniform polar cap angular distribution \(\theta_{\rm pc}(\phi_{m})\), as e.g. in Petri (2018). This will be implemented when we adjust the azimuthal shape of the polar cap so that the separatrix surface touches the light cylinder at all azimuthal angles \(\phi\). Notice also that, because we assume no twisting in the closed line region, the angular sizes of the north and south polar caps are the same in azimuth.
In these new functions, the pulsar equation (eq. 27) may be rewritten as
\[\begin{aligned}&(q^{2}-(1-\mu^{2}))\left(q^{2}\frac{\partial^{2}f}{\partial q^{2}}+(1-\mu^{2})\frac{\partial^{2}f}{\partial\mu^{2}}\right)+2q^{3}\frac{\partial f}{\partial q}-\left(4\mu^{2}+2(q^{2}-(1-\mu^{2}))\right)f\\ &\qquad+\left(2\mu(1-\mu^{2})-4\mu(q^{2}-(1-\mu^{2}))\right)\frac{\partial f}{\partial\mu}+\mathcal{S}=0\,\end{aligned} \tag{29}\]
while the condition that \(I=I(\Psi)\) (eq. 25) may be rewritten as
\[\frac{\partial f}{\partial q}\left(-2\mu\mathcal{I}+(1-\mu^{2})\frac{\partial\mathcal{I}}{\partial\mu}\right)-\frac{\partial\mathcal{I}}{\partial q}\left(-2\mu f+(1-\mu^{2})\frac{\partial f}{\partial\mu}\right)=0 \tag{30}\]
The source term in eq. (29) is equal to
\[\begin{aligned}\mathcal{S}&\equiv I\frac{\mathrm{d}I}{\mathrm{d}\Psi}=I\frac{|\nabla I|}{|\nabla\Psi|}\ \mathrm{sgn}\left(\nabla I\cdot\nabla\Psi\right)\\ &=I\,\frac{\sqrt{q^{2}(1-\mu^{2})\left(\frac{\partial\mathcal{I}}{\partial q}\right)^{2}+\left(2\mu\mathcal{I}-(1-\mu^{2})\frac{\partial\mathcal{I}}{\partial\mu}\right)^{2}}}{\sqrt{q^{2}(1-\mu^{2})\left(\frac{\partial f}{\partial q}\right)^{2}+\left(2\mu f-(1-\mu^{2})\frac{\partial f}{\partial\mu}\right)^{2}}}\\ &\quad\cdot\,\mathrm{sgn}\left(q^{2}(1-\mu^{2})\frac{\partial\mathcal{I}}{\partial q}\frac{\partial f}{\partial q}+\left(2\mu\mathcal{I}-(1-\mu^{2})\frac{\partial\mathcal{I}}{\partial\mu}\right)\left(2\mu f-(1-\mu^{2})\frac{\partial f}{\partial\mu}\right)\right)\end{aligned} \tag{31}\]
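The change of variables behind \((f,\mathcal{I})\) is introduced earlier in the text and is not repeated here; we only note that eqs. (33) and (34) below are consistent with the identifications \(\Psi=f\,(1-\mu^{2})\) and \(I=\mathcal{I}\,(1-\mu^{2})\). Under these identifications,
\[\frac{\partial\Psi}{\partial q}=(1-\mu^{2})\frac{\partial f}{\partial q},\qquad\frac{\partial\Psi}{\partial\mu}=(1-\mu^{2})\frac{\partial f}{\partial\mu}-2\mu f,\]
and similarly for \(I\), so that the Jacobian
\[\frac{\partial I}{\partial q}\frac{\partial\Psi}{\partial\mu}-\frac{\partial I}{\partial\mu}\frac{\partial\Psi}{\partial q}=(1-\mu^{2})\left[\frac{\partial\mathcal{I}}{\partial q}\left((1-\mu^{2})\frac{\partial f}{\partial\mu}-2\mu f\right)-\left((1-\mu^{2})\frac{\partial\mathcal{I}}{\partial\mu}-2\mu\mathcal{I}\right)\frac{\partial f}{\partial q}\right]\]
vanishes exactly when eq. (30) holds, i.e. eq. (30) is the condition \(\nabla I\times\nabla\Psi=0\) expressed in the new variables, while the two square roots in eq. (31) are \(|\nabla I|\) and \(|\nabla\Psi|\) stripped of the common factor \(q\sqrt{1-\mu^{2}}\).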
In this reformulation of the problem, the inputs of the PINN are the spatial coordinates \((q,\mu)\), and the outputs are \((f,\mathcal{I})\). The loss functions must then enforce the following conditions (a minimal sketch of the corresponding loss assembly is given after the list):
1. Eq. (29)
2. Eq. (30)
3. \(f=1\) on the stellar surface
4. \(\mathcal{I}=0\) in the closed line region
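The following is a minimal sketch (ours, with hypothetical names) of how these four conditions can be assembled into a single weighted loss. The two residual functions are placeholders wired the way a real autograd implementation of eqs. (29) and (30) would be, but evaluating only toy expressions; the weights are purely illustrative.

```python
import torch
import torch.nn as nn

def _grad(y, x):
    """First derivatives of y with respect to the columns of x (autograd)."""
    return torch.autograd.grad(y.sum(), x, create_graph=True)[0]

def residual_eq29(model, pts):
    # Placeholder: structured like a real residual of eq. (29), but
    # evaluating only the toy expression df/dq + df/dmu.
    pts = pts.clone().requires_grad_(True)
    f = model(pts)[:, 0:1]
    df = _grad(f, pts)
    return df[:, 0] + df[:, 1]

def residual_eq30(model, pts):
    # Placeholder for the I = I(Psi) condition (30): a toy Jacobian-like
    # combination df/dq * dI/dmu - dI/dq * df/dmu.
    pts = pts.clone().requires_grad_(True)
    out = model(pts)
    df, dI = _grad(out[:, 0:1], pts), _grad(out[:, 1:2], pts)
    return df[:, 0] * dI[:, 1] - dI[:, 0] * df[:, 1]

def total_loss(model, domain_pts, surface_pts, closed_pts,
               weights=(1.0, 1.0, 10.0, 10.0)):
    """Weighted sum of the four conditions; model maps (q, mu) -> (f, script-I)."""
    w1, w2, w3, w4 = weights
    l1 = residual_eq29(model, domain_pts).pow(2).mean()    # condition 1: eq. (29)
    l2 = residual_eq30(model, domain_pts).pow(2).mean()    # condition 2: eq. (30)
    l3 = (model(surface_pts)[:, 0] - 1.0).pow(2).mean()    # condition 3: f = 1 on the star
    l4 = model(closed_pts)[:, 1].pow(2).mean()             # condition 4: script-I = 0
    return w1 * l1 + w2 * l2 + w3 * l3 + w4 * l4

# Toy usage:
model = nn.Sequential(nn.Linear(2, 64), nn.SiLU(), nn.Linear(64, 2))
pts = torch.rand(100, 2)
print(total_loss(model, pts, pts, pts).item())
```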
### Domain decomposition
The domain decomposition is simpler in the case of the aligned pulsar where the direction \(\hat{m}\) of the stellar magnetic dipole is along \(\theta=0\), and therefore \(\theta_{\mathrm{m}}\equiv\theta\) and \(\phi_{\mathrm{m}}\equiv\phi\) (see figure 2). The open line region is \(r\geq r_{\mathrm{S}}\), \(0\leq\theta\leq\pi\). The closed line region is \(r_{*}\leq r\leq r_{\mathrm{S}}\), \(\theta_{\mathrm{pc}}\leq\theta\leq\pi-\theta_{\mathrm{pc}}\). Here, the initial choice of the separatrix radius \(r_{\mathrm{S}}\) is
\[r_{\mathrm{S}}(\theta)=r_{*}\,\frac{\sin^{2}\theta}{\sin^{2}\theta_{\mathrm{pc}}}\quad\mathrm{if}\ \theta_{\mathrm{pc}}<\theta<\pi-\theta_{\mathrm{pc}} \tag{32}\]
The shape of \(r_{\mathrm{S}}(\theta)\) will be adjusted iteratively in order to attain continuity of \(B^{2}-E^{2}\equiv B_{\mathrm{p}}^{2}\,(1-(r\sin\theta)^{2})+B_{\phi}^{2}\), or equivalently continuity of
\[(q^{2}-(1-\mu^{2}))\left(q^{2}(1-\mu^{2})\left(\frac{\partial f }{\partial q}\right)^{2}+4\mu^{2}f^{2}\right.\] \[+\left.(1-\mu^{2})^{2}\left(\frac{\partial f}{\partial\mu} \right)^{2}-4\mu(1-\mu^{2})f\frac{\partial f}{\partial\mu}\right)+(1-\mu^{2}) \mathcal{I}^{2} \tag{33}\]
across the separatrix as described in subsection 4.1 above. In the latter expression, \(\mathcal{I}=0\) in the closed line region, and \(\mathcal{I}\neq 0\) in the open line region across the separatrix.
We want to impose that \(\Psi(r_{\mathrm{S}},\theta)=\Psi_{\mathrm{S}}=\Psi_{\mathrm{m}}\sin^{2}\theta_{\mathrm{pc}}\) on the separatrix between the two domains. We therefore introduce one more loss function in both domains, namely one that requires
\[f\left(q=\frac{1}{r_{\mathrm{S}}},\mu\right)=\Psi_{\mathrm{m}}\frac{\sin^{2} \theta_{\mathrm{pc}}}{1-\mu^{2}}. \tag{34}\]
### Field reversal
The boundary conditions for the magnetic field in the closed line region \(r_{*}\leq r\leq r_{\mathrm{S}}(\theta)\), \(\theta_{\mathrm{pc}}<\theta<\pi-\theta_{\mathrm{pc}}\) are
\[\Psi(r_{*},\theta) = \Psi_{\mathrm{m}}\sin^{2}\theta\,\] \[\Psi(r_{\mathrm{S}}(\theta),\theta) = \Psi_{\mathrm{S}}. \tag{35}\]
As before, \(\theta_{\mathrm{pc}}\) is the opening of the polar cap, and \(\Psi_{\mathrm{S}}\equiv\Psi_{\mathrm{m}}\sin^{2}\theta_{\mathrm{pc}}\). We implement the mathematical field reversal in the open line region outside the closed line region with boundary conditions
\[\Psi(r_{*},\theta) = \Psi_{\mathrm{m}}\sin^{2}\theta\qquad\mathrm{for}\ 0\leq\theta\leq\theta_{\mathrm{pc}}\,\] \[\Psi(r_{*},\theta) = 2\Psi_{\mathrm{S}}-\Psi_{\mathrm{m}}\sin^{2}\theta\] \[= \Psi_{\mathrm{m}}(2\sin^{2}\theta_{\mathrm{pc}}-\sin^{2}\theta)\ \ \mathrm{for}\ \pi- \theta_{\mathrm{pc}}\leq\theta\leq\pi\] \[\Psi(r_{\mathrm{S}}(\theta),\theta) = \Psi_{\mathrm{S}}\qquad\qquad\ \mathrm{for}\ \theta_{\mathrm{pc}}<\theta<\pi-\theta_{\mathrm{pc}}. \tag{36}\]
Obviously, in the open line region, \(0\leq\Psi\leq 2\Psi_{\mathrm{S}}\), whereas in the closed line region, \(\Psi_{\mathrm{S}}\leq\Psi\leq\Psi_{\mathrm{m}}\). This mathematical trick simplifies tremendously the numerical treatment of the large-scale open line region. There is no problem with the closed line region because the two regions are in contact along \(\Psi(r,\theta)=\Psi_{\mathrm{S}}\). After the solution is obtained, the field will be reversed in the open line region where \(\Psi_{\mathrm{S}}\leq\Psi\leq 2\Psi_{\mathrm{S}}\), and the true magnetic field configuration will be presented with the equatorial current sheet along \(\Psi=\Psi_{\mathrm{S}}\).
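As a quick consistency check of eq. (36), the reversed southern boundary value joins the unreversed northern one continuously through the separatrix value:
\[\Psi(r_{*},\theta_{\mathrm{pc}})=\Psi_{\mathrm{m}}\sin^{2}\theta_{\mathrm{pc}}=\Psi_{\mathrm{S}},\qquad\Psi(r_{*},\pi-\theta_{\mathrm{pc}})=\Psi_{\mathrm{m}}\left(2\sin^{2}\theta_{\mathrm{pc}}-\sin^{2}\theta_{\mathrm{pc}}\right)=\Psi_{\mathrm{S}},\]
while \(\Psi(r_{*},\pi)=2\Psi_{\mathrm{S}}\). The flux on the southern cap therefore runs from \(\Psi_{\mathrm{S}}\) at the cap rim to \(2\Psi_{\mathrm{S}}\) at the southern magnetic pole, consistent with the range \(0\leq\Psi\leq 2\Psi_{\mathrm{S}}\) quoted above.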
As we discussed above, one extra benefit of this approach is that \(B_{r}(r>1,\theta=\pi/2^{-})\) is naturally allowed to be non-zero and equal to \(-B_{r}(r>1,\theta=\pi/2^{+})\) across the equatorial current sheet. However, without the mathematical field reversal that we propose in the open line region, due to symmetry, \(B_{r}=(\partial\Psi/\partial\theta)/(r^{2}\sin\theta)=0\) along the equator \(\theta=\pi/2\). One might think that in a typical MHD/FFE grid simulation, the equatorial field reversal takes place within one grid cell. Unfortunately, in all known solutions in the literature the equatorial current sheet extends over several grid cells in thickness, which is not correct. This unfortunate result has the following repercussion: all components of the magnetic field are zero in a region of finite thickness immediately outside the Y-point. As a result, the magnetic (and electric) field pressure from immediately inside the tip of the closed line region pushes and protrudes outwards, forming a true Y-point as seen in all MHD/FFE numerical simulations. As is shown in detail in Contopoulos, Ntotsikas & Gourgouliatos (2023) however, this is not true, and the Y-point is in fact a T-point as first proposed by Uzdensky (2003). This is captured in our present PINN solution with a proper treatment of the separatrix and equatorial current sheet via domain decomposition and field direction reversal in the open line region (see below).
### First results
We implemented three independent NNs with multiple internal (hidden) layers:
1. One PINN with two inputs \(q,\mu\), three hidden layers with 256 nodes each, and two outputs \(f,\mathcal{I}\). This PINN solves the pulsar equation in the open line region outside the separatrix and is the most complex among the three. It encounters no problems at the light cylinder where the functional form \(I|_{\mathrm{LC}}=I(\Psi|_{\mathrm{LC}})\) is determined. Its training requires tens of thousands of steps, mainly because the condition that \(I=I(\Psi)\) everywhere is very slowly enforced. The types of the individual activation functions and the weights of the individual losses are carefully chosen so as to yield consistent and stable convergence of the training.
2. One PINN with two inputs \(q,\mu\), two hidden layers with 256 nodes each, and one output \(f\). This PINN solves the pulsar equation in the closed line region inside the separatrix where \(I=0\). The training of this PINN is stable and fast.
3. The most crucial part of our method is the readjustment of the shape of the separatrix. We choose random angles \(\theta_{\rm pc}<\theta<\pi-\theta_{\rm pc}\) where we know the values of \(B^{2}(r_{\rm S},\theta)-E^{2}(r_{\rm S},\theta)\) both inside and outside the separatrix from the two PINNs trained in the closed and open line regions respectively. In general, these two values are not equal, so the radius of the separatrix must be adjusted at those angles. We implemented the adjustment \[r_{\rm S\ new}=r_{\rm S}+2\beta\,\left(\frac{r_{\rm S}-r_{\rm s}}{1-r_{\rm s}}\right)^{2}\,\frac{(B^{2}-E^{2})_{r\to r_{\rm S}^{-}}-(B^{2}-E^{2})_{r\to r_{\rm S}^{+}}}{(B^{2}-E^{2})_{r\to r_{\rm S}^{-}}+(B^{2}-E^{2})_{r\to r_{\rm S}^{+}}} \tag{37}\]
at those angles \(\theta\). In the above expression, the separatrix is displaced according to the pressure imbalance in the third factor of the right-hand-side product. The first factor is introduced to avoid moving the separatrix footpoint on the stellar surface, and \(\beta\) is an adjustable positive parameter. After the new position of the separatrix is defined at these randomly chosen angles \(\theta_{\rm pc}<\theta<\pi-\theta_{\rm pc}\), a third NN with one input \(\mu\), two hidden layers with 128 nodes each, and one output \(r_{\rm S}\) is trained to yield the shape of the separatrix at all angles \(\theta\) required by the first two main PINNs (a minimal numerical sketch of this step is given after the list). We have assumed here that the separatrix shape \(r_{\rm S}(\theta)\) is a single-valued function of \(\theta\). This is indeed true in axisymmetry, but is not in general true in highly-inclined oblique rotators. In that case we will displace each point of the separatrix in the direction perpendicular to its surface, not in the radial direction as in eq. (37).
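A minimal numerical sketch of this readjustment step (ours): the displacement follows eq. (37), and a small one-input network is then refitted to represent \(r_{\rm S}(\mu)\). The illustrative stellar radius `r_star=0.1`, the value of `beta`, the fitting schedule and the synthetic pressure values (which stand in for the two trained PINNs) are all assumptions made only for this example.

```python
import torch
import torch.nn as nn

def displace_separatrix(r_s, p_inside, p_outside, r_star=0.1, beta=0.05):
    """One application of eq. (37): move r_S toward the side with the smaller B^2 - E^2."""
    imbalance = (p_inside - p_outside) / (p_inside + p_outside)
    return r_s + 2.0 * beta * ((r_s - r_star) / (1.0 - r_star)) ** 2 * imbalance

# Small NN giving r_S as a single-valued function of mu (valid in axisymmetry):
# one input, two hidden layers of 128 nodes, one output.
separatrix_nn = nn.Sequential(nn.Linear(1, 128), nn.SiLU(),
                              nn.Linear(128, 128), nn.SiLU(),
                              nn.Linear(128, 1))

def refit_separatrix(mu, r_s_new, steps=2000, lr=1e-3):
    opt = torch.optim.Adam(separatrix_nn.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (separatrix_nn(mu) - r_s_new).pow(2).mean()
        loss.backward()
        opt.step()
    return loss.item()

# Toy usage with synthetic pressures:
mu = torch.linspace(-0.9, 0.9, 50).unsqueeze(1)
r_s = 0.8 * torch.ones_like(mu)
p_in, p_out = torch.ones_like(mu), 0.9 * torch.ones_like(mu)
refit_separatrix(mu, displace_separatrix(r_s, p_in, p_out))
```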
The number of internal nodes and layers in the two PINNs that solve the pulsar equation was chosen by trial and error to be able to reproduce known features of the solution (e.g. the functional form of the electric current distribution \(I=I(\Psi)\) shown in figure 3). Our number of internal nodes is higher than the number of internal nodes in the PINN implementation by Stefanou et al. (2023b). The number of internal nodes in the third NN was chosen so that it describes in detail the deformed shape of the separatrix.
In figure 4 we show the evolution of our training losses as a function of the number of training steps. The various losses that we implemented to be minimized to zero are the averages of \(|f-1|\) over the stellar surface, of \(|\Psi-\Psi_{\rm S}|\) along the separatrix and the equator in the open line region, and of course the average residuals of the central PDEs of the problem, eqs. (29) and (30). The total loss is the sum of the above residuals, each one weighted by a corresponding weighting factor.
In practice, the two main PINNs are initially trained for 50,000 steps before satisfactory convergence is achieved in the two regions. After that initial training stage, the separatrix is displaced based on the resulting pressure differences between the closed and open line regions, and then the third NN is trained. The two main PINNs are re-trained for another 5,000 steps, and the process is repeated 10 times. In total, we run 100,000 training steps for the two main PINNs. In the resulting configuration, pressure balance is achieved across the separatrix to within 1%.
We present our preliminary results in figure 3. We first show the solution for \(\theta_{\rm pc}=1.13(r_{\rm s}/R_{\rm LC})^{1/2}\) and \(\Psi_{\rm open}=\Psi_{\rm S}=\Psi_{\rm max}\sin^{2}\theta_{\rm pc}\). This particular value of \(\theta_{\rm pc}\) was specially chosen with the following in mind: It is straightforward to calculate that, in a dipolar magnetic field configuration, the magnetic field line that crosses the light cylinder corresponds to \(\Psi_{\rm dipole\ LC}=\Psi_{\rm max}\,r_{\rm s}/R_{\rm LC}\), and \(\theta_{\rm dipole\ pc}=\arcsin[(r_{\rm s}/R_{\rm LC})^{1/2}]\approx(r_{\rm s}/R_{\rm LC})^{1/2}\). In previous high-resolution solutions of the pulsar equation (e.g. Timokhin, 2006), the magnetic field line that reaches the light cylinder corresponds to \(\Psi_{\rm LC\,Timokhin}=1.23\Psi_{\rm dipole\ LC}\) and \(\theta_{\rm pc\,Timokhin}=\arcsin[(1.23r_{\rm s}/R_{\rm LC})^{1/2}]=1.13(r_{\rm s}/R_{\rm LC})^{1/2}\). Therefore, this particular value of \(\theta_{\rm pc}\) (and correspondingly \(\Psi_{\rm open}\)) was chosen because in the standard solution (e.g. Timokhin, 2006) the Y-point lies very close to the light cylinder. Instead, we found that \(R_{\rm Y}=0.76R_{\rm LC}\), which not only lies inside \(R_{\rm LC}\), it is even _inside_ \(R_{\rm dipole}(\Psi_{\rm S})=r_{\rm s}/\sin^{2}\theta_{\rm pc}=0.81R_{\rm LC}\). This unexpected result is in tension with _all_ previous numerical MHD/FFE solutions and certainly merits further investigation. It was found previously that the dipolar magnetic field configuration is stretched outwards, not inwards as in the present configuration. We suspect that this tension is due to the improper treatment of the separatrix contact discontinuity in all previous MHD/FFE solutions.
We also show the solution for \(\theta_{\rm pc}=0.94(r_{\rm s}/R_{\rm LC})^{1/2}\) that yields \(\Psi_{\rm open}=0.87\Psi_{\rm dipole\ LC}\) and \(\dot{E}=0.75\dot{E}_{\rm vacuum}(90^{\circ})\). Here, \(\dot{E}_{\rm vacuum}(\lambda)\) is the spindown rate of a vacuum dipole rotator with inclination angle \(\lambda\). We may tentatively generalize this solution to non-zero pulsar inclination angles according to Spitkovsky (2006) as \(\dot{E}(\lambda)\approx 0.75\dot{E}_{\rm vacuum}(90^{\circ})(1+\sin^{2}\lambda)\), and since \(\dot{E}_{\rm vacuum}(\lambda)=\dot{E}_{\rm vacuum}(90^{\circ})\sin^{2}\lambda\), we obtain that
\[\frac{\dot{E}(\lambda)}{\dot{E}_{\rm vacuum}(\lambda)}\approx 0.75\,\frac{1+\sin^{2}\lambda}{\sin^{2}\lambda}\geq 1.5. \tag{38}\]
It is interesting that in all previous solutions of the FFE pulsar magnetosphere, the above ratio was found to be greater than 3 (e.g. Li, Spitkovsky & Tchekhovskoy, 2012). This value is significantly larger than the ratio of spindown rates \(\dot{E}_{\rm ON}/\dot{E}_{\rm OFF}\) observed in the intermittent pulsars PSR B1931+24, PSR J1832+0029 and PSR J1841-0500 (1.5, 1.7 and 2.5 respectively; e.g. Rea et al., 2008, Wang et al., 2020). The inability to account for observed values lower than 3 is what led to the development of resistive magnetospheric solutions (e.g. Kalapotharakos et al., 2012b, Li, Spitkovsky & Tchekhovskoy, 2012). With our new solutions it seems that there is no need for resistivity to explain intermittent pulsars. This result certainly merits further investigation.
Another interesting point is the squeezing of the separatrix surface in the latitudinal direction right above the stellar surface. This is due to the pressure of the nonzero toroidal field \(B_{\phi}^{2}\) in the open line region. Furthermore, as expected, the Y-point is indeed a T-point, i.e. the separatrix crosses the equator vertically, and \(B_{r}(r>R_{\rm Y},\theta=\pi/2^{-})=-B_{r}(r>R_{\rm Y},\theta=\pi/2^{+})\neq 0\) (superscripts \(-/+\) are used to denote 'right above/below the equator' respectively). The above observations merit further investigation in a future publication.
Figure 2: Same as figure 1 in axisymmetry. Note that we have implemented artificial field-direction reversal in the open line region.
## 6 Discussion and Conclusions
In this work, we have presented a novel numerical method for solving the generalized pulsar equation with PINNs. We have observed that PINNs behave satisfactorily around the light cylinder, but fail to properly treat the separatrix current sheet. For this reason, we have also introduced two innovations, namely the decomposition of the computation domain into two regions (one inside the separatrix current sheet and one outside), and the mathematical field reversal in the open line region. The latter allows us to completely avoid the equatorial current sheet discontinuity outside the light cylinder. The former allows PINNs to relax to a continuous and smooth solution that does not contain any mathematical singularities. The domain decomposition and field reversal that we introduced may also be implemented in more general situations with MHD current sheets, as for example in active regions of the solar corona. Regarding computational cost, axisymmetric PINN training is completed in a few hours on one GPU, whereas standard high-resolution solu
Figure 3: Representative axisymmetric solutions with domain decomposition and field reversal in the southern hemisphere. Top: Solution for \(\theta_{\rm pc}=1.13(r_{*}/R_{\rm LC})^{1/2}\) (left) and \(\theta_{\rm pc}=0.94(r_{*}/R_{\rm LC})^{1/2}\) (right). Shown are poloidal magnetic field lines as \(\Psi\) (and \(I(\Psi)\)) iso-contours (same isocontours shown in both solutions). Thick line: separatrix \(\Psi_{\rm open}\equiv\Psi_{\rm S}=\Psi_{\rm max}\sin^{2}\theta_{\rm pc}\) and equatorial current sheet. Notice that the right solution contains 30% less open magnetic flux than the left solution. Vertical line: light cylinder. Notice the shape of the tip of the closed line region: in agreement with Uzdensky 2003, its true shape is a T-point. Bottom left: electric current distribution \(I(\Psi)_{\rm BCL}/\Psi_{\rm S}\) across open magnetic field lines \(0\leq\Psi/\Psi_{\rm S}\leq 1\) for the top left solution. Dotted line: analytic expression for split monopole \(\Psi/\Psi_{\rm S}(2-\Psi/\Psi_{\rm S})\). Notice that \(I(\Psi_{\rm S})=0\) in agreement with eq. (15) at \(r=R_{\rm LC}\) and \(\theta=\pi/2\). Bottom right: the distribution of \(B_{\rm r}(r,\theta)/B_{\rm r}(r,0)\) for \(r=R_{\rm LC}\) for the top left solution. Notice that \(B_{\rm r}(r,\theta=\pi/2^{-})=-B_{\rm r}(r,\theta=\pi/2^{+})\neq 0\) (without field reversal, \(B_{\rm r}(r,\theta=\pi/2)\) would erroneously be equal to zero).
tions of the pulsar equation require about 1 day on a standard PC. Standard HPC time-dependent 3D simulations require days, and we can only hope that extrapolation of our PINN methodology to an oblique rotator will be faster. Although PINNs are fast, their benefit is not as much reduced computational cost, but their capability to find the solution for a deformable surface separating open from closed field lines.
We have obtained preliminary solutions of the pulsar equation for particular values of the open magnetic flux \(\Psi_{\rm open}=\Psi_{\rm max}\sin^{2}\theta_{\rm pc}\) with \(\theta_{\rm pc}=1.13(r_{*}/R_{\rm LC})^{1/2}\) and \(\theta_{\rm pc}=0.94(r_{*}/R_{\rm LC})^{1/2}\), in which the closed line region is found to extend radially up to \(0.8R_{\rm LC}\) and \(0.9R_{\rm LC}\) respectively. We would like to correct here an important misunderstanding found in the literature. With the solution obtained in § 5 above, we confirm Uzdensky (2003) that, in the presence of the separatrix return current sheet, the Y-point is in fact a T-point at its tip. This means that the tip of the closed line region has a finite height and the separatrix does not meet the equator at a non-vertical angle, as Gruzinov (2005) surmises. This realization makes a huge difference in the calculation of the electromagnetic field energy contained in this region of finite height at the tip of the closed line region, which diverges as the Y-point approaches the light cylinder (see Contopoulos, Ntotsikas & Gourgouliatos 2023 for a detailed analysis of this effect). This result differs from the conclusion of Gruzinov (2005) that the magnetospheric energy contained around the Y-point is finite (that result was based on his conclusion that the separatrix arrives on the equator at a non-vertical angle; in that case, the electromagnetic energy contained in the tip of the closed line region indeed remains finite). This may explain the placement of the Y-point well inside the light cylinder in all high-resolution global PIC simulations of the last decade.
In summary, our implementation differs from that of Stefanou et al. (2023b), who were the first to solve the pulsar equation with a PINN and whose work was an inspiration for our present work. We were the first to introduce a proper treatment of mathematical contact discontinuities in FFE. This yielded important details that differ from previous solutions of the pulsar magnetosphere (e.g. the shape and the extent of the tip of the closed line region). A more detailed description of our results in the axisymmetric magnetosphere will be presented in a follow-up publication. A detailed solution of the 3D magnetosphere, where the real power of our method will be manifested, will also be presented in the near future.
## Acknowledgements
We would like to thank Petros Stefanou and Jose Pons for interesting and inspiring discussions. We would also like to thank the International Space Science Institute (ISSI) for providing financial support for the organization of the meeting of ISSI Team No 459 led by I. Contopoulos and D. Kazanas where the ideas presented in the paper originated. This research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the 4\({}^{\rm th}\) Call for HFRI PhD Fellowships (Fellowship Number: 9239).
## Data Availability Statement
The data underlying this article will be shared on reasonable request to the corresponding author.
|
2309.12837 | Pre-foliations of co-degree one on $\mathbb{P}^{2}_{\mathbb{C}}$ with a
flat Legendre transform | A holomorphic pre-foliation $\mathscr{F}=\ell\boxtimes\mathcal{F}$ of
co-degree $1$ and degree $d$ on $\mathbb{P}^{2}_{\mathbb{C}}$ is the data of a
line $\ell$ of $\mathbb{P}^{2}_{\mathbb{C}}$ and a holomorphic foliation
$\mathcal{F}$ on $\mathbb{P }^{2}_{\mathbb{C}}$ of degree $d-1.$ We study
pre-foliations of co-degree $1$ on $\mathbb{P}^{2}_{\mathbb{ C}}$ with a flat
Legendre transform (dual web). After having established some general results on
the flatness of the dual $d$-web of a homogeneous pre-foliation of co-degree
$1$ and degree $d$, we describe some explicit examples and we show that up to
automorphism of $\mathbb{P}^{2}_{\mathbb{C}}$ there are two families and six
examples of homogeneous pre-foliations of co-degree $1$ and degree $3$ on
$\mathbb {P}^{2}_{\mathbb{C}}$ with a flat dual web. This allows us to prove an
analogue for pre-foliations of co-degree $1$ and degree~$3$ of a result,
obtained in collaboration with D. Mar\'{\i}n, on foliations of degree $3$ with
non-degenerate singularities and a flat Legendre transform. We also show that
the dual web of a reduced convex pre-foliation of co-degree $1$ on
$\mathbb{P}^{2}_{\mathbb{C}}$ is flat. This is an analogue of a result on
foliations of $\mathbb{P}^{2}_{\mathbb{C}}$ due to D. Mar\'{\i}n and J. V.
Pereira. | Samir Bedrouni | 2023-09-22T12:52:53Z | http://arxiv.org/abs/2309.12837v4 | # Pre-Foliations of co-degree one on \(\mathbb{P}^{2}_{\mathbb{C}}\) with a flat Legendre transform
###### Abstract
A holomorphic pre-foliation \(\mathscr{F}=\ell\boxtimes\mathcal{F}\) of co-degree 1 and degree \(d\) on \(\mathbb{P}^{2}_{\mathbb{C}}\) is the data of a line \(\ell\) of \(\mathbb{P}^{2}_{\mathbb{C}}\) and a holomorphic foliation \(\mathcal{F}\) on \(\mathbb{P}^{2}_{\mathbb{C}}\) of degree \(d-1\). We study pre-foliations of co-degree 1 on \(\mathbb{P}^{2}_{\mathbb{C}}\) with a flat Legendre transform (dual web). After having established some general results on the flatness of the dual \(d\)-web of a homogeneous pre-foliation of co-degree 1 and degree \(d\), we describe some explicit examples and we show that up to automorphism of \(\mathbb{P}^{2}_{\mathbb{C}}\) there are two families and six examples of homogeneous pre-foliations of co-degree 1 and degree 3 on \(\mathbb{P}^{2}_{\mathbb{C}}\) with a flat dual web. This allows us to prove an analogue for pre-foliations of co-degree 1 and degree 3 of a result, obtained in collaboration with D. Marin, on foliations of degree 3 with non-degenerate singularities and a flat Legendre transform. We also show that the dual web of a reduced convex pre-foliation of co-degree 1 on \(\mathbb{P}^{2}_{\mathbb{C}}\) is flat. This is an analogue of a result on foliations of \(\mathbb{P}^{2}_{\mathbb{C}}\) due to D. Marin and J. V. Pereira.
2010 Mathematics Subject Classification: -- _14C21, 32S65, 53A60._
## Introduction
This article is a continuation of a series of joint works with D. Marin[4, 5, 6, 7] on holomorphic foliations on the complex projective plane. For the definitions and notations used (web, discriminant \(\Delta(\mathscr{W})\), homogeneous foliation, inflection divisor \(\mathrm{I}_{\mathscr{F}}\), radial singularity, etc.) we refer to [4, Sections 1 and 2].
Definition A: _Let \(0\leq k\leq d\) be integers. A holomorphic pre-foliation \(\mathscr{F}\) on \(\mathbb{P}^{2}_{\mathbb{C}}\) of co-degree \(k\) and degree \(d\), or simply of type \((k,d)\), is the data of a reduced complex projective curve \(\mathcal{C}\subset\mathbb{P}^{2}_{\mathbb{C}}\) of degree \(k\) and a holomorphic foliation \(\mathcal{F}\) on \(\mathbb{P}^{2}_{\mathbb{C}}\) of degree \(d-k\). We write \(\mathscr{F}=\mathcal{C}\boxtimes\mathcal{F}\) and call \(\mathcal{C}\) (resp. \(\mathcal{F}\)) the associated curve (resp. the associated foliation) to \(\mathscr{F}\)._
Such a pre-foliation is given in homogeneous coordinates \([x:y:z]\in\mathbb{P}^{2}_{\mathbb{C}}\) by a \(1\)-form of type \(\Omega=F(x,y,z)\Omega_{0},\) where \(F\in\mathbb{C}[x,y,z]_{k}\) and \(F(x,y,z)=0\) is a homogeneous equation of the curve \(\mathcal{C}\), and where \(\Omega_{0}\) is a homogeneous \(1\)-form of degree \(d-k+1\) defining the foliation \(\mathcal{F}\), _i.e._
\[\Omega_{0}=a(x,y,z)\mathrm{d}x+b(x,y,z)\mathrm{d}y+c(x,y,z)\mathrm{d}z,\]
where \(a\), \(b\) and \(c\) are homogeneous polynomials of degree \(d-k+1\) without common factor, satisfying the Euler condition \(i_{\mathrm{R}}\Omega_{0}=0\), where \(\mathrm{R}=x\frac{\partial}{\partial x}+y\frac{\partial}{\partial y}+z\frac{ \partial}{\partial z}\) denotes the radial vector field and \(i_{\mathrm{R}}\) is the interior product by \(\mathrm{R}\).
The pre-foliations of type \((0,d)\) are precisely the foliations of degree \(d\) on \(\mathbb{P}^{2}_{\mathbb{C}}\).
By [13], to every pre-foliation \(\mathscr{F}=\mathcal{C}\boxtimes\mathcal{F}\) of degree \(d\geq 1\) and co-degree \(k<d\) on \(\mathbb{P}^{2}_{\mathbb{C}}\) we can associate a \(d\)-web of degree \(1\) on the dual projective plane \(\tilde{\mathbb{P}}^{2}_{\mathbb{C}}\), called Legendre transform (or dual web) of \(\mathscr{F}\) and denoted by \(\mathrm{Leg}\mathscr{F}\); if \(\mathscr{F}\) is given in an affine chart \((x,y)\) of \(\mathbb{P}^{2}_{\mathbb{C}}\) by a \(1\)-form \(\omega=f(x,y)\left(A(x,y)\mathrm{d}x+B(x,y)\mathrm{d}y\right)\) then, in the affine chart \((p,q)\) of \(\tilde{\mathbb{P}}^{2}_{\mathbb{C}}\) associated to the line \(\{y=px-q\}\subset\mathbb{P}^{2}_{\mathbb{C}}\), \(\mathrm{Leg}\mathscr{F}\) is described by the implicit differential equation
\[F(p,q,x):=f(x,px-q)\left(A(x,px-q)+pB(x,px-q)\right)=0,\qquad\text{with} \qquad x=\frac{\mathrm{d}q}{\mathrm{d}p}.\]
When \(k\geq 1\), \(\mathrm{Leg}\mathscr{F}\) decomposes as \(\mathrm{Leg}\mathscr{F}=\mathrm{Leg}\mathcal{C}\boxtimes\mathrm{Leg}\mathcal{F}\), where \(\mathrm{Leg}\mathcal{C}\) is the algebraic \(k\)-web on \(\tilde{\mathbb{P}}^{2}_{\mathbb{C}}\) defined by the equation \(f(x,px-q)=0\) and \(\mathrm{Leg}\mathcal{F}\) is the irreducible \((d-k)\)-web of degree \(1\) on \(\tilde{\mathbb{P}}^{2}_{\mathbb{C}}\) given by \(A(x,px-q)+pB(x,px-q)=0\).
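As a simple illustration (an example chosen here for concreteness, not taken from the references), consider the pre-foliation of type \((1,2)\) given in the affine chart \((x,y)\) by \(\omega=y\,(x\,\mathrm{d}x+y\,\mathrm{d}y)\), i.e. \(\ell=\{y=0\}\) and \(\mathcal{F}\) the degree-\(1\) foliation by circles centered at the origin. Then
\[F(p,q,x)=(px-q)\bigl(x(1+p^{2})-pq\bigr)=0,\qquad x=\frac{\mathrm{d}q}{\mathrm{d}p}:\]
the first factor gives \(\mathrm{Leg}\,\ell\), the pencil \(p\,\frac{\mathrm{d}q}{\mathrm{d}p}-q=0\) of lines through the dual point \(\check{\ell}\) (the origin of the chart \((p,q)\)), while the second factor gives the irreducible \(1\)-web \(\mathrm{Leg}\mathcal{F}\), \((1+p^{2})\,\frac{\mathrm{d}q}{\mathrm{d}p}-pq=0\).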
Conversely, every decomposable \(d\)-web of degree \(1\) on \(\tilde{\mathbb{P}}^{2}_{\mathbb{C}}\) is necessarily the Legendre transform of a certain pre-foliation on \(\mathbb{P}^{2}_{\mathbb{C}}\) of type \((k,d)\), with \(1\leq k<d\).
Thus, understanding the geometry of decomposable \(d\)-webs of degree \(1\) on \(\tilde{\mathbb{P}}^{2}_{\mathbb{C}}\) comes down to understanding the geometry of pre-foliations on \(\mathbb{P}^{2}_{\mathbb{C}}\) of type \((k,d)\), with \(1\leq k<d\). We are interested here in the problem of the flatness of dual webs of co-degree \(1\) pre-foliations, _i.e._ whose associated curve is a line.
In [4, Sections 3, 4 and 5] the authors studied the flatness of dual webs of homogeneous foliations of \(\mathbb{P}^{2}_{\mathbb{C}}\), then they showed that it is possible to reduce the study of the flatness of dual webs of certain inhomogeneous foliations to the homogeneous framework, _see_[4, Section 6]. It seemed natural to us to adapt this approach to the case of co-degree one pre-foliations.
**Definition B**.: A pre-foliation on \(\mathbb{P}^{2}_{\mathbb{C}}\) is said to be homogeneous if there is an affine chart \((x,y)\) of \(\mathbb{P}^{2}_{\mathbb{C}}\) in which it is invariant under the action of the group of homotheties \((x,y)\longmapsto\lambda(x,y),\ \lambda\in\mathbb{C}^{*}\).
A homogeneous pre-foliation \(\mathscr{H}\) of type \((1,d)\) on \(\mathbb{P}^{2}_{\mathbb{C}}\) is then of the form \(\mathscr{H}=\ell\boxtimes\mathcal{H},\) where \(\mathcal{H}\) is a homogeneous foliation of degree \(d-1\) on \(\mathbb{P}^{2}_{\mathbb{C}}\) and where \(\ell\) is a line passing through the origin \(O\) or \(\ell=L_{\infty}\).
Theorem 3.1 of [4] states that the web \(\mathrm{Leg}\mathcal{H}\) is flat if and only if its curvature is holomorphic on the transverse part of its discriminant \(\Delta(\mathrm{Leg}\mathcal{H})\). We prove in Section 3 a similar result (Theorem 3.7) for the web \(\mathrm{Leg}\mathscr{H}\).
When \(\ell\) passes through the origin, we establish effective criteria for the holomorphy of the curvature of \(\mathrm{Leg}\mathscr{H}\) on certain irreducible components of the discriminant \(\Delta(\mathrm{Leg}\mathscr{H})\) (Theorems 3.13 and 3.18). In fact, Theorems 3.7, 3.13 and 3.18 provide a complete characterization of the flatness of \(\mathrm{Leg}\mathscr{H}\).
When \(\ell=L_{\infty}\) we show (Theorem 3.1) that the webs \(\mathrm{Leg}\mathscr{H}\) and \(\mathrm{Leg}\mathcal{H}\) have the same curvature; in particular the flatness of \(\mathrm{Leg}\mathscr{H}\) is equivalent to that of \(\mathrm{Leg}\mathcal{H}\). In degree \(d=3\), in particular, the web \(\mathrm{Leg}\mathscr{H}\) is flat (Corollary 3.2).
Recall (_see_[13]) that a holomorphic foliation on \(\mathbb{P}^{2}_{\mathbb{C}}\) is said to be convex if its leaves other than straight lines have no inflection points. Note (_see_[14]) that if \(\mathcal{F}\) is a foliation of degree \(d\geq 1\) on \(\mathbb{P}^{2}_{\mathbb{C}}\), then \(\mathcal{F}\) cannot have more than \(3d\) (distinct) invariant lines. Moreover, if this bound is reached, then \(\mathcal{F}\) is necessarily convex; in this case \(\mathcal{F}\) is said to be reduced convex. We naturally extend the notions of convexity and reduced convexity of foliations to pre-foliations by putting:
**Definition C**.: Let \(\mathscr{F}=\mathcal{C}\boxtimes\mathcal{F}\) be a pre-foliation on \(\mathbb{P}^{2}_{\mathbb{C}}\). We say that \(\mathscr{F}\) is convex (resp. reduced convex) if the foliation \(\mathcal{F}\) is convex (resp. reduced convex) and if moreover the curve \(\mathcal{C}\) is invariant by \(\mathcal{F}\).
From this definition and Theorem 3.7 we will deduce the following corollary, which is an analogue of Corollary 3.4 of [4].
**Corollary D** (Corollary 3.11).: The dual web of a homogeneous convex pre-foliation of co-degree \(1\) on \(\mathbb{P}^{2}_{\mathbb{C}}\) is flat.
In § 4 we give an application of the results of § 3 to homogeneous pre-foliations \(\mathscr{H}=\ell\boxtimes\mathcal{H}\) of co-degree \(1\) such that the degree of type of \(\mathcal{H}\) is equal to \(2\), _i.e._ \(\deg T_{\mathcal{H}}=2\) (_see_[4, Definition 2.3] for the definitions of the type \(T_{\mathcal{H}}\) and the degree of type \(\deg T_{\mathcal{H}}\)). More precisely, we describe, up to automorphism of \(\mathbb{P}^{2}_{\mathbb{C}}\), all homogeneous pre-foliations \(\mathscr{H}=\ell\boxtimes\mathcal{H}\) of co-degree \(1\) and degree \(d\geq 3\) such that \(\deg T_{\mathcal{H}}=2\) and the \(d\)-web \(\operatorname{Leg}\!\mathscr{H}\) is flat (Proposition 4.4). We obtain in particular, for \(d=3\), the classification up to automorphism of homogeneous pre-foliations of type \((1,3)\) on \(\mathbb{P}^{2}_{\mathbb{C}}\) whose dual \(3\)-web is flat: up to automorphism of \(\mathbb{P}^{2}_{\mathbb{C}}\), there are two families and six examples of homogeneous pre-foliations of co-degree \(1\) and degree \(3\) on \(\mathbb{P}^{2}_{\mathbb{C}}\) with a flat Legendre transform, _see_ Corollary 4.5.
In 2013 Marin and Pereira[13, Theorem 4.2] proved that the dual web of a reduced convex foliation on \(\mathbb{P}^{2}_{\mathbb{C}}\) is flat. We show in § 5 the following analogous result for co-degree one pre-foliations.
**Theorem E**.: Let \(\mathscr{F}=\ell\boxtimes\mathcal{F}\) be a reduced convex pre-foliation of co-degree \(1\) and degree \(d\geq 3\) on \(\mathbb{P}^{2}_{\mathbb{C}}\). Then the \(d\)-web \(\operatorname{Leg}\!\mathscr{F}\) is flat.
The following problem then arises.
**Problem**.: Let \(\mathcal{F}\) be a reduced convex foliation of degree greater than or equal to \(2\) on \(\mathbb{P}^{2}_{\mathbb{C}}\) and let \(\ell\) be a line of \(\mathbb{P}^{2}_{\mathbb{C}}\) which is not invariant by \(\mathcal{F}\). Determine the relative position of the line \(\ell\) with respect to the invariant lines of \(\mathcal{F}\) such that the dual web of the pre-foliation \(\ell\boxtimes\mathcal{F}\) is flat.
To our knowledge the only reduced convex foliations known in the literature are those presented in [13, Table 1.1]: the Fermat foliation \(\mathcal{F}^{d-1}_{0}\) of degree \(d-1\), the Hesse foliation \(\mathcal{F}^{4}_{H}\) of degree \(4\), the Hilbert modular foliation \(\mathcal{F}^{5}_{H}\) of degree \(5\) and the Hesse foliation \(\mathcal{F}^{7}_{H}\) of degree \(7\), defined in an affine chart by the respective \(1\)-forms
\[\varpi^{d-1}_{0}=x\mathrm{d}y-y\mathrm{d}x+y^{d-1}\mathrm{d}x-x^{d-1}\mathrm{ d}y,\]
\[\omega^{4}_{H}=y(2x^{3}-y^{3}-1)\mathrm{d}x+x(2y^{3}-x^{3}-1)\mathrm{d}y,\]
\[\omega^{5}_{H}=(y^{2}-1)(y^{2}-(\sqrt{5}-2)^{2})(y+\sqrt{5}x)\mathrm{d}x-(x^{ 2}-1)(x^{2}-(\sqrt{5}-2)^{2})(x+\sqrt{5}y)\mathrm{d}y,\]
\[\omega^{7}_{H}=(y^{3}-1)(y^{3}+7x^{3}+1)y\mathrm{d}x-(x^{3}-1)(x^{3}+7y^{3}+1 )x\mathrm{d}y.\]
The following two propositions, which will be proved in § 5, give an answer to the above problem in the case of the Fermat foliation \(\mathcal{F}_{0}^{d-1}\) and the Hesse foliation \(\mathcal{F}_{H}^{4}\).
Proposition F: Let \(d\geq 3\) be an integer and let \(\ell\) be a line of \(\mathbb{P}_{\mathbb{C}}^{2}\). Assume that \(\ell\) is not invariant by the Fermat foliation \(\mathcal{F}_{0}^{d-1}\) and that the \(d\)-web \(\operatorname{Leg}(\ell\boxtimes\mathcal{F}_{0}^{d-1})\) is flat. Then \(d\in\{3,4\}\) and the line \(\ell\) joins two (resp. three) singularities (necessarily non-radial) of \(\mathcal{F}_{0}^{d-1}\) if \(d=3\) (resp. if \(d=4\)).
Proposition G: Let \(\ell\) be a line of \(\mathbb{P}_{\mathbb{C}}^{2}\) which is not invariant by the Hesse foliation \(\mathcal{F}_{H}^{4}\). Assume that the 5-web \(\operatorname{Leg}(\ell\boxtimes\mathcal{F}_{H}^{4})\) is flat. Then the line \(\ell\) passes through four (necessarily non-radial) singularities of \(\mathcal{F}_{H}^{4}\).
The idea of the proofs of Propositions F and G will be to reduce to the homogeneous case, by showing that the closures of the \(\operatorname{Aut}(\mathbb{P}_{\mathbb{C}}^{2})\)-orbits of the pre-foliations \(\ell\boxtimes\mathcal{F}_{0}^{d-1}\) and \(\ell\boxtimes\mathcal{F}_{H}^{4}\) contain homogeneous pre-foliations.
Theorem 6.1 of [4] says that every foliation of degree 3 on \(\mathbb{P}_{\mathbb{C}}^{2}\) with non-degenerate singularities and a flat Legendre transform is linearly conjugate to the Fermat foliation \(\mathcal{F}_{0}^{3}\). We prove in § 6 the following similar result for pre-foliations of co-degree 1 and degree 3.
Theorem H: Let \(\mathscr{F}=\ell\boxtimes\mathcal{F}\) be a pre-foliation of co-degree 1 and degree 3 on \(\mathbb{P}_{\mathbb{C}}^{2}\). Assume that the foliation \(\mathcal{F}\) has only non-degenerate singularities and that the 3-web \(\operatorname{Leg}\mathscr{F}\) is flat. Then \(\mathcal{F}\) is linearly conjugate to the Fermat foliation \(\mathcal{F}_{0}^{2}\), and the line \(\ell\) is either invariant by \(\mathcal{F}\) or it joins two non-radial singularities of \(\mathcal{F}\).
The proof of this theorem will essentially use the classification of homogeneous pre-foliations of type \((1,3)\) on \(\mathbb{P}_{\mathbb{C}}^{2}\) whose dual web is flat (Corollary 4.5).
## 1. Reminders on the fundamental form and curvature of a web
In this section, we briefly recall the definitions of the fundamental form and the curvature of a \(d\)-web \(\mathcal{W}\). Let us first assume that \(\mathcal{W}\) is a germ of completely decomposable \(d\)-web on \((\mathbb{C}^{2},0)\), \(\mathcal{W}=\mathcal{F}_{1}\boxtimes\cdots\boxtimes\mathcal{F}_{d}\). For \(i=1,\ldots,d\), let \(\omega_{i}\) be a \(1\)-form with an isolated singularity at \(0\) defining the foliation \(\mathcal{F}_{i}\). Following [15], for each triple \((r,s,t)\) with \(1\leq r<s<t\leq d\), one defines \(\eta_{rst}=\eta(\mathcal{F}_{r}\boxtimes\mathcal{F}_{s}\boxtimes\mathcal{F}_{t})\) as the unique meromorphic \(1\)-form satisfying the following equalities:
\[\left\{\begin{array}{rcl}\mathrm{d}(\delta_{st}\,\omega_{r})&=&\eta_{rst}\wedge\delta_{st}\,\omega_{r}\\ \mathrm{d}(\delta_{tr}\,\omega_{s})&=&\eta_{rst}\wedge\delta_{tr}\,\omega_{s}\\ \mathrm{d}(\delta_{rs}\,\omega_{t})&=&\eta_{rst}\wedge\delta_{rs}\,\omega_{t}\end{array}\right. \tag{1.1}\]
where \(\delta_{ij}\) denotes the function defined by \(\omega_{i}\wedge\omega_{j}=\delta_{ij}\,\mathrm{d}x\wedge\mathrm{d}y\). One calls fundamental form of the web \(\mathcal{W}=\mathcal{F}_{1}\boxtimes\cdots\boxtimes\mathcal{F}_{d}\) the \(1\)-form
\[\eta(\mathcal{W})=\eta(\mathcal{F}_{1}\boxtimes\cdots\boxtimes\mathcal{F}_{d} )=\sum_{1\leq r<s<t\leq d}\eta_{rst}. \tag{1.2}\]
One can easily check that \(\eta(\mathcal{W})\) is a meromorphic \(1\)-form with poles along the discriminant \(\Delta(\mathcal{W})\) of \(\mathcal{W}\), and that it is well-defined up to addition of a closed logarithmic \(1\)-form \(\dfrac{\mathrm{d}f}{f}\) with \(f\in\mathcal{O}^{*}(\mathbb{C}^{2},0)\) (_cf._[17, 4]).
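As a simple illustration (an example chosen here for concreteness): for the completely decomposable \(3\)-web \(\mathcal{W}=\mathcal{F}_{1}\boxtimes\mathcal{F}_{2}\boxtimes\mathcal{F}_{3}\) on \((\mathbb{C}^{2},0)\) defined by \(\omega_{1}=\mathrm{d}x\), \(\omega_{2}=\mathrm{d}y\), \(\omega_{3}=\mathrm{d}x+\mathrm{d}y\), one finds \(\delta_{12}=1\), \(\delta_{13}=1\), \(\delta_{23}=-1\). All the \(\delta_{ij}\) are constants, so the left-hand sides of (1.1) vanish and \(\eta_{123}=0\) satisfies the three equalities; hence \(\eta(\mathcal{W})=0\), \(K(\mathcal{W})=\mathrm{d}\eta(\mathcal{W})=0\), and this web of three pencils of parallel lines is flat.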
Now, if \(\mathcal{W}\) is a (not necessarily completely decomposable) \(d\)-web on a complex surface \(M\) then its pull-back by a suitable ramified Galois covering is completely decomposable. The invariance of the fundamental form of this new web by the action of the Galois group allows us to descend it to a global meromorphic \(1\)-form \(\eta(\mathcal{W})\) on \(M\), with poles along the discriminant of \(\mathcal{W}\) (_see_[13]).
The curvature of the web \(\mathcal{W}\) is by definition the \(2\)-form
\[K(\mathcal{W})=\mathrm{d}\eta(\mathcal{W}).\]
It is a meromorphic \(2\)-form with poles along the discriminant \(\Delta(\mathcal{W})\) of \(\mathcal{W}\), canonically associated to \(\mathcal{W}\). More precisely, for any dominant holomorphic map \(\varphi\), one has \(K(\varphi^{*}\mathcal{W})=\varphi^{*}K(\mathcal{W})\).
A \(d\)-web \(\mathcal{W}\) is said to be flat if its curvature \(K(\mathcal{W})\) vanishes identically.
Let us finally note that a \(d\)-web \(\mathcal{W}\) on \(\mathbb{P}^{2}_{\mathbb{C}}\) is flat if and only if its curvature is holomorphic over the generic points of the irreducible components of \(\Delta(\mathcal{W})\). This follows from the holomorphy of \(K(\mathcal{W})\) on \(\mathbb{P}^{2}_{\mathbb{C}}\setminus\Delta(\mathcal{W})\) and from the fact that there are no holomorphic \(2\)-forms on \(\mathbb{P}^{2}_{\mathbb{C}}\) other than the zero \(2\)-form.
## 2. Discriminant of the dual web of a co-degree one pre-foliation
Recall that if \(\mathcal{F}\) is a foliation on \(\mathbb{P}^{2}_{\mathbb{C}}\), the Gauss map of \(\mathcal{F}\) is the rational map \(\mathcal{G}_{\mathcal{F}}:\mathbb{P}^{2}_{\mathbb{C}}\dashrightarrow\breve{ \mathbb{P}}^{2}_{\mathbb{C}}\) defined at every regular point \(m\) of \(\mathcal{F}\) by \(\mathcal{G}_{\mathcal{F}}(m)=\mathbb{T}^{\mathbb{P}}_{m}\mathcal{F}\,,\) where \(\mathbb{T}^{\mathbb{P}}_{m}\mathcal{F}\) denotes the tangent line to the leaf of \(\mathcal{F}\) passing through \(m\). If \(\mathcal{C}\) is a curve on \(\mathbb{P}^{2}_{\mathbb{C}}\) passing through some singular points of \(\mathcal{F}\), one defines \(\mathcal{G}_{\mathcal{F}}(\mathcal{C})\) as the closure of \(\mathcal{G}_{\mathcal{F}}(\mathcal{C}\setminus\text{Sing}\,\mathcal{F})\).
**Lemma 2.1**.: _Let \(\mathscr{F}=\ell\boxtimes\mathcal{F}\) be a pre-foliation of co-degree \(1\) on \(\mathbb{P}^{2}_{\mathbb{C}}\)._
**1.** _If the line_ \(\ell\) _is invariant by_ \(\mathcal{F}\)_, then_
\[\Delta(\operatorname{Leg}\!\mathscr{F})=\Delta(\operatorname{Leg}\!\mathcal{F})\cup\check{\Sigma}^{\ell}_{\mathcal{F}},\]
_where_ \(\check{\Sigma}^{\ell}_{\mathcal{F}}\) _denotes the set of lines dual to the points of_ \(\Sigma^{\ell}_{\mathcal{F}}:=\operatorname{Sing}\!\mathcal{F}\cap\ell\)_._
**2.** _If the line_ \(\ell\) _is not invariant by_ \(\mathcal{F}\)_, then_
\[\Delta(\operatorname{Leg}\!\mathscr{F})=\Delta(\operatorname{Leg}\!\mathcal{F})\cup\mathcal{G}_{\mathcal{F}}(\ell).\]
Proof.: We have
\[\Delta(\operatorname{Leg}\!\mathscr{F})=\Delta(\operatorname{Leg}\!\mathcal{F})\cup\operatorname{Tang}(\operatorname{Leg}\!\ell,\operatorname{Leg}\!\mathcal{F}).\]
When \(\ell\) is not invariant by \(\mathcal{F},\) we obtain by an argument of [1, page 33] that
\[\operatorname{Tang}(\operatorname{Leg}\!\ell,\operatorname{Leg}\!\mathcal{F} )=\mathcal{G}_{\mathcal{F}}(\ell).\]
Let us assume that \(\ell\) is invariant by \(\mathcal{F}\) and show that \(\operatorname{Tang}(\operatorname{Leg}\!\ell,\operatorname{Leg}\!\mathcal{F} )=\check{\Sigma}^{\ell}_{\mathcal{F}}.\) Let \(s\in\Sigma^{\ell}_{\mathcal{F}}.\) The fact that \(s\in\ell\) (resp. \(s\in\operatorname{Sing}\!\mathcal{F}\)) implies that the line \(\check{s}\) dual to \(s\) is invariant by \(\operatorname{Leg}\!\ell\) (resp. by \(\operatorname{Leg}\!\mathcal{F}\)). Thus \(\check{s}\subset\operatorname{Tang}(\operatorname{Leg}\!\ell,\operatorname{ Leg}\!\mathcal{F}),\) hence \(\check{\Sigma}^{\ell}_{\mathcal{F}}\subset\operatorname{Tang}(\operatorname{Leg} \!\ell,\operatorname{Leg}\!\mathcal{F}).\) Conversely, let \(\mathcal{C}\) be an irreducible component of \(\operatorname{Tang}(\operatorname{Leg}\!\ell,\operatorname{Leg}\!\mathcal{F}).\) Let us show that \(\mathcal{C}\) is invariant by \(\operatorname{Leg}\!\mathcal{F}.\) Assume by contradiction that \(\mathcal{C}\) is transverse to \(\operatorname{Leg}\!\mathcal{F}.\) Let \(m\) be a generic point of \(\mathcal{C}.\) Denote by \(\check{\ell}\in\check{\mathbb{P}}^{2}_{\mathbb{C}}\) the dual point of \(\ell;\) then the line \((\check{\ell}m)\) is not invariant by \(\operatorname{Leg}\!\mathcal{F}\) and is tangent to \(\operatorname{Leg}\!\mathcal{F}\) at \(m.\) Since \(\ell\) is \(\mathcal{F}\)-invariant, the point \(\check{\ell}\) is singular for \(\operatorname{Leg}\!\mathcal{F};\) it is therefore also a tangency point between \(\operatorname{Leg}\!\mathcal{F}\) and \((\check{\ell}m).\) The number of tangency points between \(\operatorname{Leg}\!\mathcal{F}\) and \((\check{\ell}m)\) is then \(\geq 2,\) which contradicts the equality \(\operatorname{deg}(\operatorname{Leg}\!\mathcal{F})=1.\) Hence the invariance of \(\mathcal{C}\) by \(\operatorname{Leg}\!\mathcal{F}\) is proved. Then \(\mathcal{C}\) is also invariant by \(\operatorname{Leg}\!\ell\) and is therefore a line passing through \(\check{\ell}.\) There therefore exists \(s\in\operatorname{Sing}\!\mathcal{F}\) such that \(\mathcal{C}=\check{s};\) since \(\check{\ell}\in\mathcal{C},\) we have \(s\in\ell\) and therefore \(s\in\Sigma^{\ell}_{\mathcal{F}}.\) Consequently, \(\mathcal{C}\subset\check{\Sigma}^{\ell}_{\mathcal{F}}.\)
We will now apply the above lemma to the case of a homogeneous pre-foliation \(\mathscr{H}=\ell\boxtimes\mathcal{H}\) of co-degree \(1\) on \(\mathbb{P}^{2}_{\mathbb{C}}.\) If \(\operatorname{deg}\mathscr{H}=d,\) the homogeneous foliation \(\mathcal{H}\) is given, for a suitable choice of affine coordinates \((x,y),\) by a homogeneous \(1\)-form
\[\omega=A(x,y)\mathrm{d}x+B(x,y)\mathrm{d}y,\quad\text{where}\;\;A,B\in \mathbb{C}[x,y]_{d-1}\;\;\text{with}\;\;\gcd(A,B)=1.\]
If \(\ell=L_{\infty}\) then \(\ell\) is invariant by \(\mathcal{H}\) and Lemma 2.1 ensures that
\[\Delta(\operatorname{Leg}\mathscr{H})=\Delta(\operatorname{Leg}\mathcal{H})\cup\check{\Sigma}^{\infty}_{\mathcal{H}},\]
where \(\check{\Sigma}^{\infty}_{\mathcal{H}}\) denotes the set of lines dual to the points of \(\Sigma^{\infty}_{\mathcal{H}}:=\operatorname{Sing}\!\mathcal{H}\cap L_{\infty}.\)
Assume that \(\ell\) passes through the origin. If \(\ell\) is not invariant by \(\mathcal{H},\) then (Lemma 2.1)
\[\Delta(\operatorname{Leg}\mathscr{H})=\Delta(\operatorname{Leg}\mathcal{H})\cup\mathcal{G}_{\mathcal{H}}(\ell).\]
If \(\ell\) is invariant by \(\mathcal{H},\) then the point \(s:=L_{\infty}\cap\ell\) is singular for \(\mathcal{H}\) and, according to [4, Proposition 2.2], we have \(\Sigma^{\ell}_{\mathcal{H}}=\{O,s\}.\) Denoting by \(\check{O}\) (resp. \(\check{s}\)) the dual line of the point \(O\) (resp. \(s\)), Lemma 2.1 therefore implies that
\[\Delta(\operatorname{Leg}\mathscr{H})=\Delta(\operatorname{Leg}\mathcal{H})\cup\check{O}\cup\check{s}=\Delta(\operatorname{Leg}\mathcal{H})\cup\check{s},\]
because \(\check{O}\subset\Delta(\operatorname{Leg}\!\mathcal{H}).\) In fact, according to [4, Lemma 3.2], the discriminant of \(\operatorname{Leg}\!\mathcal{H}\) decomposes as
\[\Delta(\operatorname{Leg}\!\mathcal{H})=\mathcal{G}_{\mathcal{H}}(\mathrm{I}^{ \mathrm{tr}}_{\mathcal{H}})\cup\check{\Sigma}^{\mathrm{rad}}_{\mathcal{H}}\cup \check{O},\]
where \(I^{tr}_{\mathcal{H}}\) denotes the transverse inflection divisor of \(\mathcal{H}\) and \(\check{\Sigma}^{rad}_{\mathcal{H}}\) is the set of lines dual to the radial singularities of \(\mathcal{H}\) (_see_[4, SS1.3] for precise definitions of these notions). Recall however that to the homogeneous foliation \(\mathcal{H}\) one can also associate the rational map \(\underline{G}_{\mathcal{H}}:\mathbb{P}^{1}_{\mathbb{C}}\to\mathbb{P}^{1}_{ \mathbb{C}}\) defined by
\[\underline{G}_{\mathcal{H}}([y:x])=[-A(x,y):B(x,y)],\]
and that this map allows us to completely determine the divisor \(I^{tr}_{\mathcal{H}}\) and the set \(\Sigma^{rad}_{\mathcal{H}}\) (_see_[4, Section 2]):
* \(\Sigma^{rad}_{\mathcal{H}}\) consists of \([b:a:0]\in L_{\infty}\) such that \([a:b]\in\mathbb{P}^{1}_{\mathbb{C}}\) is a fixed critical point of \(\underline{G}_{\mathcal{H}}\);
* \(I^{tr}_{\mathcal{H}}=\prod_{i}T^{n_{i}}_{i}\), where \(T_{i}=(b_{i}y-a_{i}x=0)\) and \([a_{i}:b_{i}]\in\mathbb{P}^{1}_{\mathbb{C}}\) is a non-fixed critical point of \(\underline{G}_{\mathcal{H}}\) of multiplicity \(n_{i}\).
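In explicit examples this dichotomy is easy to test with a computer algebra system. The following SymPy sketch (an illustration only, not part of the original text) classifies the affine critical points of \(\underline{\mathcal{G}}_{\mathcal{H}}(z)=-A(1,z)/B(1,z)\) as fixed or non-fixed; the polynomials \(A\) and \(B\) below are hypothetical sample data, and the point at infinity of \(\mathbb{P}^{1}_{\mathbb{C}}\) is not inspected.

```python
# Illustrative SymPy sketch: classify the affine critical points of
# G_H(z) = -A(1,z)/B(1,z) as fixed (radial singularities of H) or
# non-fixed (transverse inflection lines of H).  A and B are sample data.
import sympy as sp

x, y, z = sp.symbols('x y z')
A = y**2 * (y - x)        # homogeneous of degree d - 1 = 3 (example data)
B = (y - 2*x)**3          # gcd(A, B) = 1

G = sp.cancel(-A.subs({x: 1, y: z}) / B.subs({x: 1, y: z}))
num, den = sp.fraction(sp.together(G))

# affine critical points of G_H = roots of the Wronskian num'*den - num*den'
wronskian = sp.expand(sp.diff(num, z)*den - num*sp.diff(den, z))
for z0, mult in sp.roots(sp.Poly(wronskian, z)).items():
    value = sp.limit(G, z, z0)                       # G_H(z0), possibly infinite
    kind = 'fixed' if sp.simplify(value - z0) == 0 else 'non-fixed'
    print(f'critical point z = {z0} (Wronskian multiplicity {mult}): {kind}')
```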
From the above considerations, we deduce the following lemma.
**Lemma 2.2**.: _Let \(\mathscr{H}=\ell\boxtimes\mathcal{H}\) be a homogeneous pre-foliation of co-degree \(1\) on \(\mathbb{P}^{2}_{\mathbb{C}}\). **1.** If \(\ell=L_{\infty}\) then_
\[\Delta(\operatorname{Leg}\mathscr{H})=\Delta(\operatorname{Leg}\mathcal{H})\cup\check{\Sigma}^{\infty}_{\mathcal{H}}=\mathcal{G}_{\mathcal{H}}(I^{tr}_{\mathcal{H}})\cup\check{\Sigma}^{\infty}_{\mathcal{H}}\cup\check{O}.\]
**2.** If the line \(\ell\) passes through the origin, then
\[\Delta(\operatorname{Leg}\mathscr{H})=\Delta(\operatorname{Leg}\mathcal{H})\cup D_{\ell}=\mathcal{G}_{\mathcal{H}}(I^{tr}_{\mathcal{H}})\cup\check{\Sigma}^{rad}_{\mathcal{H}}\cup\check{O}\cup D_{\ell},\]
where the component \(D_{\ell}\) is defined as follows. If \(\ell\) is invariant by \(\mathcal{H}\), then \(D_{\ell}:=\check{s}\) is the dual line of the point \(s=L_{\infty}\cap\ell\in\operatorname{Sing}\mathcal{H}\). If \(\ell\) is not invariant by \(\mathcal{H}\), then \(D_{\ell}:=\mathcal{G}_{\mathcal{H}}(\ell)\).
## 3 Flatness of the dual web of a co-degree one homogeneous pre-foliation
Our first result shows that, for a homogeneous foliation \(\mathcal{H}\) on \(\mathbb{P}^{2}_{\mathbb{C}}\), the webs \(\operatorname{Leg}\mathcal{H}\) and \(\operatorname{Leg}(L_{\infty}\boxtimes\mathcal{H})\) have the same curvature, so that we have equivalence between the flatness of \(\operatorname{Leg}\mathcal{H}\) and that of \(\operatorname{Leg}(L_{\infty}\boxtimes\mathcal{H})\).
**Theorem 3.1**.: _Let \(d\geq 3\) be an integer and let \(\mathcal{H}\) be a homogeneous foliation of degree \(d-1\) on \(\mathbb{P}^{2}_{\mathbb{C}}\). Then_
\[K(\operatorname{Leg}(L_{\infty}\boxtimes\mathcal{H}))=K(\operatorname{Leg} \mathcal{H}).\]
In particular, the \(d\)-web \(\operatorname{Leg}(L_{\infty}\boxtimes\mathcal{H})\) is flat if and only if the \((d-1)\)-web \(\operatorname{Leg}\mathcal{H}\) is flat.
**Corollary 3.2**.: _Let \(\mathcal{H}\) be a homogeneous foliation of degree \(2\) on \(\mathbb{P}^{2}_{\mathbb{C}}\). Then the \(3\)-web \(\operatorname{Leg}(L_{\infty}\boxtimes\mathcal{H})\) is flat._
To establish Theorem 3.1, we will need the following definition and theorem.
**Definition 3.3** ([12]).: _Let \(\mathcal{W}=\mathcal{F}_{1}\boxtimes\cdots\boxtimes\mathcal{F}_{d}\) be a regular \(d\)-web on \((\mathbb{C}^{2},0)\). A transverse symmetry of \(\mathcal{W}\) is a germ of vector field \(\operatorname{X}\) which is transverse to the foliations \(\mathcal{F}_{i}\) (\(i=1,\ldots,d\)) and whose local flow \(\exp(t\operatorname{X})\) preserves the \(\mathcal{F}_{i}\)'s._
**Theorem 3.4**.: _Let \(d\geq 3\) be an integer and let \(\mathcal{W}_{d-1}\) be a regular \((d-1)\)-web on \((\mathbb{C}^{2},0)\) which admits a transverse symmetry \(\operatorname{X}\). Denote by \(\mathcal{F}_{\operatorname{X}}\) the foliation defined by \(\operatorname{X}\). Then_
\[K(\mathcal{F}_{\operatorname{X}}\boxtimes\mathcal{W}_{d-1})=K(\mathcal{W}_{d- 1}).\]
_In particular, the \(d\)-web \(\mathcal{F}_{\operatorname{X}}\boxtimes\mathcal{W}_{d-1}\) is flat if and only if the \((d-1)\)-web \(\mathcal{W}_{d-1}\) is flat._
Before proving this theorem, let us briefly recall the definition of the rank \(\operatorname{rk}(\mathcal{W})\) of a regular \(d\)-web \(\mathcal{W}=\mathcal{F}_{1}\boxtimes\cdots\boxtimes\mathcal{F}_{d}\) on \((\mathbb{C}^{2},0)\). For \(1\leq i\leq d\), let \(\omega_{i}\) be a \(1\)-form defining the foliation \(\mathcal{F}_{i}\). One defines the \(\mathbb{C}\)-vector space \(\mathcal{A}(\mathcal{W})\) of abelian relations of \(\mathcal{W}\) by
\[\mathcal{A}(\mathcal{W}):=\Big{\{}(\eta_{1},\ldots,\eta_{d})\in(\Omega^{1}( \mathbb{C}^{2},0))^{d}\ \Big{|}\ \forall i=1,\ldots,d,\ \mathrm{d}\eta_{i}=0,\ \eta_{i}\wedge\omega_{i}=0\ \ \text{and}\ \ \sum_{i=1}^{d}\eta_{i}=0\Big{\}}.\]
Then \(\operatorname{rk}(\mathcal{W}):=\dim_{\mathbb{C}}\mathcal{A}(\mathcal{W})\). One has the following optimal bound (_cf._[16, Chapter 2]):
\[\operatorname{rk}(\mathcal{W})\leq\pi_{d}:=\frac{(d-1)(d-2)}{2}.\]
Recall also that every \(d\)-web of maximal rank (_i.e._ of rank \(\pi_{d}\)) is necessarily flat by Mihăileanu's criterion (_cf._[16, Theorem 6.3.4]).
In fact, Theorem 3.4 is an analogue for flat webs of a result on webs of maximal rank, due to Marín-Pereira-Pirio, namely:
**Theorem 3.5** ([12], Theorem 1).: -- With the notations of Theorem 3.4, one has
\[\operatorname{rk}(\mathcal{F}_{\mathrm{X}}\boxtimes\mathcal{W}_{d-1})= \operatorname{rk}(\mathcal{W}_{d-1})+(d-2).\]
In particular, \(\mathcal{F}_{\mathrm{X}}\boxtimes\mathcal{W}_{d-1}\) is of maximal rank if and only if \(\mathcal{W}_{d-1}\) is of maximal rank.
The proof of Theorem 3.4 consists essentially in applying this result for \(d=3\).
Proof of Theorem 3.4.: -- Writing \(\mathcal{W}_{d-1}=\mathcal{F}_{1}\boxtimes\cdots\boxtimes\mathcal{F}_{d-1}\), we have
\[K(\mathcal{F}_{\mathrm{X}}\boxtimes\mathcal{W}_{d-1})=K(\mathcal{W}_{d-1})+ \sum_{1\leq i<j\leq d-1}K(\mathcal{W}_{3}^{i,j}),\]
where \(\mathcal{W}_{3}^{i,j}:=\mathcal{F}_{\mathrm{X}}\boxtimes\mathcal{F}_{i} \boxtimes\mathcal{F}_{j}\). Moreover, since \(\mathrm{X}\) is a transverse symmetry of the \(2\)-web \(\mathcal{F}_{i}\boxtimes\mathcal{F}_{j}\) and since every \(2\)-web is of maximal rank, equal to \(0\), Theorem 1 of [12] (_cf._ Theorem 3.5 above) implies that the \(3\)-web \(\mathcal{W}_{3}^{i,j}\) is of maximal rank, equal to \(1\), so that \(K(\mathcal{W}_{3}^{i,j})\equiv 0\), hence the announced equality holds.
Proof of Theorem 3.1.: -- By [4, Section 2], we can locally decompose the \(d\)-web \(\operatorname{Leg}(L_{\infty}\boxtimes\mathcal{H})\) as
\[\operatorname{Leg}(L_{\infty}\boxtimes\mathcal{H})=\operatorname{Leg}(L_{ \infty})\boxtimes\mathcal{W}_{d-1},\]
where \(\mathcal{W}_{d-1}=\mathcal{F}_{1}\boxtimes\cdots\boxtimes\mathcal{F}_{d-1}\) and, for any \(i\in\{1,\ldots,d-1\}\), \(\mathcal{F}_{i}\) is given by \(\hat{\omega}_{i}:=\lambda_{i}(p)\mathrm{d}q-q\mathrm{d}p\), with \(\lambda_{i}(p)=p-p_{i}(p)\) and \(\{p_{i}(p)\}=\underline{\mathcal{G}}_{d}^{-1}(p)\). Now, the vector field \(\mathrm{X}:=q\frac{\partial}{\partial q}\) defines the radial foliation \(\operatorname{Leg}(L_{\infty})\) and is a transverse symmetry of the web \(\mathcal{W}_{d-1}\). Therefore, \(K(\operatorname{Leg}(L_{\infty}\boxtimes\mathcal{H}))=K(\operatorname{Leg} \mathcal{H})\) by Theorem 3.4.
**Remark 3.6**.: -- We can also prove Theorem 3.1 directly, without using results on webs of maximal rank. Indeed, putting \(\mathcal{W}_{3}^{i,j}:=\operatorname{Leg}(L_{\infty})\boxtimes\mathcal{F}_{i} \boxtimes\mathcal{F}_{j}\), for all \(i,j\in\{1,\ldots,d-1\}\) with \(i\neq j,\) we have
\[K(\operatorname{Leg}(L_{\infty}\boxtimes\mathcal{H}))=K(\operatorname{Leg} \mathcal{H})+\sum_{1\leq i<j\leq d-1}K(\mathcal{W}_{3}^{i,j}).\]
The foliation \(\operatorname{Leg}(L_{\infty})\) being defined by \(\hat{\omega}_{0}:=\mathrm{d}p,\) a direct computation using formula (1.1) shows that
\[\eta(\mathcal{W}_{3}^{i,j})=\frac{\mathrm{d}\Big{(}(\lambda_{i}\lambda_{j})(p) \Big{)}}{(\lambda_{i}\lambda_{j})(p)}+\frac{\mathrm{d}q}{q},\]
so that \(K(\mathcal{W}_{3}^{i,j})=\mathrm{d}\eta(\mathcal{W}_{3}^{i,j})\equiv 0\), hence \(K(\operatorname{Leg}(L_{\infty}\boxtimes\mathcal{H}))=K(\operatorname{Leg} \mathcal{H})\).
The following theorem gives an important characterization of the flatness of the dual web of a co-degree one homogeneous pre-foliation.
**Theorem 3.7**.: _Let \(\mathscr{H}=\ell\boxtimes\mathcal{H}\) be a homogeneous pre-foliation of type \((1,d)\) on \(\mathbb{P}^{2}_{\mathbb{C}}\) with \(d\geq 3\). If the line \(\ell\) is invariant (resp. not invariant) by \(\mathcal{H}\), then the \(d\)-web \(\operatorname{Leg}\mathscr{H}\) is flat if and only if its curvature \(K(\operatorname{Leg}\mathscr{H})\) is holomorphic on \(\mathcal{G}_{\mathcal{H}}(\operatorname{I}^{\operatorname{tr}}_{\mathcal{H}})\) (resp. on \(\mathcal{G}_{\mathcal{H}}(\operatorname{I}^{\operatorname{tr}}_{\mathcal{H}})\cup D_{\ell}=\mathcal{G}_{\mathcal{H}}(\operatorname{I}^{\operatorname{tr}}_{\mathcal{H}}\cup\ell)\))._
To prove this theorem, we will need the following lemma, which is a reformulation of Lemma 3.1 of [2] in terms of homogeneous pre-foliations.
**Lemma 3.8** ([2], Lemma 3.1).: _Let \(\mathscr{H}\) be a homogeneous pre-foliation on \(\mathbb{P}^{2}_{\mathbb{C}}\). If the curvature of \(\operatorname{Leg}\mathscr{H}\) is holomorphic on \(\breve{\mathbb{P}}^{2}_{\mathbb{C}}\setminus\breve{O},\) then \(\operatorname{Leg}\mathscr{H}\) is flat._
We will also need the following proposition, which has its own interest.
**Proposition 3.9**.: _Let \(\mathcal{W}_{\nu}\) be a germ of \(\nu\)-web on \((\mathbb{C}^{2},0)\) with \(\nu\geq 2\). Assume that \(\Delta(\mathcal{W}_{\nu})\) has an irreducible component \(C\) totally invariant by \(\mathcal{W}_{\nu}\) and of minimal multiplicity \(\nu-1\). Let \(\mathcal{F}\) be a germ of foliation on \((\mathbb{C}^{2},0)\) leaving \(C\) invariant and let \(\mathcal{W}_{d-\nu-1}\) be a germ of regular \((d-\nu-1)\)-web on \((\mathbb{C}^{2},0)\) transverse to \(C\). Then the curvature of the \(d\)-web \(\mathcal{W}=\mathcal{F}\boxtimes\mathcal{W}_{\nu}\boxtimes\mathcal{W}_{d-\nu-1}\) is holomorphic along \(C\)._
Proof.: As in the beginning of the proof of [13, Proposition 2.6], we choose a local coordinate system \((U,(x,y))\) such that \(C\cap U=\{y=0\},\,\mathbb{T}\mathcal{F}\,|_{U}=\{\mathrm{d}y+yh(x,y)\mathrm{ d}x=0\},\)
\[\operatorname{T}\mathcal{W}_{\nu}|_{U}=\left\{\mathrm{d}y^{\nu}+y\left(a_{ \nu-1}(x,y)\mathrm{d}y^{\nu-1}\mathrm{d}x+\cdots+a_{0}(x,y)\mathrm{d}x^{\nu} \right)=0\right\}\quad\text{and}\quad\operatorname{T}\mathcal{W}_{d-\nu-1}|_{ U}=\left\{\prod_{l=1}^{d-\nu-1}(\mathrm{d}x+g_{l}(x,y)\mathrm{d}y)=0\right\}.\]
Then, by passing to the ramified covering \(\pi:(x,y)\mapsto(x,y^{\nu})\), we obtain that \(\pi^{*}\mathcal{F}=\mathcal{F}_{0},\ \pi^{*}\mathcal{W}_{\nu}=\boxtimes_{k=1}^{\nu} \mathcal{F}_{k}\) and \(\pi^{*}\mathcal{W}_{d-\nu-1}=\boxtimes_{l=1}^{d-\nu-1}\mathcal{F}_{\nu+l},\) where
\[\mathcal{F}_{0}:\mathrm{d}y+\tfrac{1}{\nu}yh(x,y^{\nu})\mathrm{d}x=0,\ \ \ \ \ \mathcal{F}_{k}:\mathrm{d}x+y^{\nu-2}f(x,\zeta^{k}y)\zeta^{-k}\mathrm{d}y=0, \ \ \ \ \ \mathcal{F}_{\nu+l}:\mathrm{d}x+y^{\nu-1}g_{l}(x,y^{\nu})\mathrm{d}y=0,\]
with \(\zeta=\exp(\tfrac{2i\pi}{\nu})\). Therefore we have
\[K(\pi^{*}\mathcal{W})=K\big{(}\pi^{*}(\mathcal{W}_{\nu}\boxtimes\mathcal{W}_{d -\nu-1})\big{)}+\sum_{1\leq i<j\leq d-1}K(\mathcal{F}_{0}\boxtimes\mathcal{F} _{i}\boxtimes\mathcal{F}_{j}).\]
Now, on the one hand, [13, Proposition 2.6] ensures that \(K(\mathcal{W}_{\nu}\boxtimes\mathcal{W}_{d-\nu-1})\) is holomorphic along \(\{y=0\}\); therefore so \(K\big{(}\pi^{*}(\mathcal{W}_{\nu}\boxtimes\mathcal{W}_{d-\nu-1})\big{)}=\pi^{* }\big{(}K(\mathcal{W}_{\nu}\boxtimes\mathcal{W}_{d-\nu-1})\big{)}.\) On the other hand, since \(\{y=0\}\) is invariant by \(\mathcal{F}_{0}\) and \(\{y=0\}\not\subset\operatorname{Tang}(\mathcal{F}_{0},\mathcal{F}_{i}\boxtimes \mathcal{F}_{j})\), then \(K(\mathcal{F}_{0}\boxtimes\mathcal{F}_{i}\boxtimes\mathcal{F}_{j})\) is holomorphic on \(\{y=0\}\) by applying [13, Theorem 1], _see_ also [2, Theorem 1.1] or [3, Corollary 1.30]. It follows that \(\pi^{*}K(\mathcal{W})=K(\pi^{*}\mathcal{W})\) is holomorphic on \(\{y=0\}\). As a consequence \(K(\mathcal{W})\) is holomorphic along \(C\).
**Remark 3.10**.: _--_ Similarly, we obtain an analogue of Proposition 3.9 by replacing the foliation \(\mathcal{F}\) by a \(2\)-web \(\mathcal{W}_{2}^{\prime}=\mathcal{F}_{1}\boxtimes\mathcal{F}_{2}\) leaving the component \(C\subset\Delta(\mathcal{W}_{\nu})\) totally invariant._
Proof of Theorem 3.7.: _i._ First assume that \(\ell=L_{\infty}\). Then Theorem 3.1 ensures that \(K(\operatorname{Leg}\mathscr{H})=K(\operatorname{Leg}\mathcal{H})\). Now, we know from [4, Theorem 3.1] that the flatness of the web \(\operatorname{Leg}\mathcal{H}\) is characterized by the holomorphy of its curvature \(K(\operatorname{Leg}\mathcal{H})\) on \(\mathcal{G}_{\mathcal{H}}(\operatorname{I}^{\operatorname{tr}}_{\mathcal{H}})\). Therefore the same is true for the web \(\operatorname{Leg}\mathscr{H}\), _i.e._\(\operatorname{Leg}\mathscr{H}\) is flat if and only if \(K(\operatorname{Leg}\mathscr{H})\) is holomorphic along \(\mathcal{G}_{\mathcal{H}}(\operatorname{I}^{\operatorname{tr}}_{\mathcal{H}})\).
_ii._ Now assume that \(\ell\) passes through the origin. Let us fix \(s\in\Sigma^{\infty}_{\mathscr{H}}\) and describe the \(d\)-web \(\operatorname{Leg}\mathscr{H}\) in a neighborhood of a generic point \(m\) of the line \(\tilde{s}\) dual to \(s\). Denote by \(\nu-1\geq 0\) the radiality order of \(s\); by [13, Proposition 3.3], in a neighborhood of \(m\), we can decompose \(\operatorname{Leg}\mathscr{H}\) as
\[\operatorname{Leg}\mathscr{H}=\operatorname{Leg}\ell\boxtimes\mathcal{W}_{\nu} \boxtimes\mathcal{W}_{d-\nu-1}, \tag{3.1}\]
where \(\mathcal{W}_{\nu}\) is an irreducible \(\nu\)-web leaving \(\check{s}\) invariant and whose discriminant \(\Delta(\mathcal{W}_{\nu})\) has minimal multiplicity \(\nu-1\) along \(\check{s}\), and where \(\mathcal{W}_{d-\nu-1}\) is a \((d-\nu-1)\)-web transverse to \(\check{s}\). More explicitly, up to linear conjugation, we can write \(\ell=(y=\alpha x)\), \(s=[1:\rho:0]\), \(\check{s}=\{p=\rho\}\), \(m=(\rho,q)\) and \(\underline{\mathcal{G}}_{\mathcal{H}}^{-1}(\rho)=\{\rho,r_{1},\ldots,r_{d-\nu- 1}\},\) so that (_see_[4, Section 2])
\[\operatorname{Leg}\ell:(p-\alpha)\mathrm{d}q-q\mathrm{d}p=0,\qquad\qquad \mathcal{W}_{\nu}\Big{|}_{\check{s}}:\mathrm{d}p=0,\qquad\qquad\mathcal{W}_{d -\nu-1}\Big{|}_{\check{s}}:\prod_{i=1}^{d-\nu-1}\Big{(}(\rho-r_{i})\mathrm{d}q -q\mathrm{d}p\Big{)}=0.\]
We deduce, in particular, the two following properties:
1. if \(\check{s}\not\subset\mathcal{G}_{\mathcal{H}}(\mathrm{I}_{\mathcal{H}}^{ \mathrm{tr}})\), the web \(\mathcal{W}_{d-\nu-1}\) is regular in a neighborhood of \(m\), because we then have \(r_{i}\neq r_{j}\) if \(i\neq j\);
2. if \(\check{s}\neq D_{\ell}=\{p=\underline{\mathcal{G}}_{\mathcal{H}}(\alpha)\},\) then \(\operatorname{Leg}\ell\) is transverse to \(\check{s}\) and \(\check{s}\not\subset\operatorname{Tang}(\operatorname{Leg}\ell,\mathcal{W}_{d- \nu-1})\).
If \(s\in\Sigma^{\mathrm{rad}}_{\mathcal{H}}\) is such that \(\check{s}\not\subset\mathcal{G}_{\mathcal{H}}(\mathrm{I}_{\mathcal{H}}^{\mathrm{tr}})\cup D_{\ell}\), then properties (a) and (b) ensure that the \((d-\nu)\)-web \(\mathcal{W}_{d-\nu}:=\operatorname{Leg}\ell\boxtimes\mathcal{W}_{d-\nu-1}\) is transverse to \(\check{s}\) and is regular in a neighborhood of \(m\). Therefore the curvature of \(\operatorname{Leg}\mathscr{H}=\mathcal{W}_{\nu}\boxtimes\mathcal{W}_{d-\nu}\) is holomorphic in a neighborhood of \(m\) by applying [13, Proposition 2.6]. It follows that \(K(\operatorname{Leg}\mathscr{H})\) is holomorphic on \(\check{\Sigma}_{\mathcal{H}}^{\mathrm{rad}}\setminus(\mathcal{G}_{\mathcal{H}}(\mathrm{I}_{\mathcal{H}}^{\mathrm{tr}})\cup D_{\ell})\). Thus, according to the second assertion of Lemma 2.2 and Lemma 3.8, \(\operatorname{Leg}\mathscr{H}\) is flat if and only if \(K(\operatorname{Leg}\mathscr{H})\) is holomorphic along \(\mathcal{G}_{\mathcal{H}}(\mathrm{I}_{\mathcal{H}}^{\mathrm{tr}})\cup D_{\ell}\).
Let us show that in the particular case where \(\ell\) is invariant by \(\mathcal{H}\), the flatness of \(\operatorname{Leg}\mathscr{H}\) is equivalent to the holomorphy of \(K(\operatorname{Leg}\mathscr{H})\) on \(\mathcal{G}_{\mathcal{H}}(\mathrm{I}_{\mathcal{H}}^{\mathrm{tr}})\). From the above discussion, it suffices to prove that if \(D_{\ell}\) is not contained in \(\mathcal{G}_{\mathcal{H}}(\mathrm{I}_{\mathcal{H}}^{\mathrm{tr}})\), then \(K(\operatorname{Leg}\mathscr{H})\) is holomorphic on \(D_{\ell}\). The invariance of \(\ell\) by \(\mathcal{H}\) implies the existence of \(s\in\Sigma^{\infty}_{\mathcal{H}}\) such that \(\ell=(Os)\); then \(D_{\ell}=\check{s}\) is invariant by the radial foliation \(\operatorname{Leg}\ell\). Moreover, the condition \(D_{\ell}\not\subset\mathcal{G}_{\mathcal{H}}(\mathrm{I}_{\mathcal{H}}^{\mathrm{tr}})\) implies that \(\mathcal{W}_{d-\nu-1}\) is regular in a neighborhood of every generic point \(m\) of \(D_{\ell}\) (property (a)). By applying Theorem 1 of [13] if \(\nu=1\) and Proposition 3.9 if \(\nu\geq 2,\) we deduce that \(K(\operatorname{Leg}\mathscr{H})\) is holomorphic along \(D_{\ell}\).
From Theorem 3.7 we deduce the two following corollaries.
**Corollary 3.11**.: _Let \(\mathscr{H}\) be a homogeneous convex pre-foliation of co-degree \(1\) and degree \(d\geq 3\) on \(\mathbb{P}_{\mathbb{C}}^{2}\). Then the \(d\)-web \(\operatorname{Leg}\mathscr{H}\) is flat._
**Corollary 3.12**.: _Let \(\mathscr{H}=\ell\boxtimes\mathcal{H}\) be a homogeneous pre-foliation of co-degree \(1\) and degree \(d\geq 3\) on \(\mathbb{P}_{\mathbb{C}}^{2}\). Assume that the homogeneous foliation \(\mathcal{H}\) is convex and that the line \(\ell\) is not invariant by \(\mathcal{H}\). Then the \(d\)-web \(\operatorname{Leg}\mathscr{H}\) is flat if and only if its curvature \(K(\operatorname{Leg}\mathscr{H})\) is holomorphic on \(D_{\ell}=\mathcal{G}_{\mathcal{H}}(\ell)\)._
The following theorem is an effective criterion for the holomorphy of the curvature of the web dual to a homogeneous pre-foliation \(\mathscr{H}=\ell\boxtimes\mathcal{H}\) (with \(O\in\ell\)) along an irreducible component of \(\Delta(\operatorname{Leg}\mathscr{H})\setminus(D_{\ell}\cup\check{O})\).
**Theorem 3.13**.: _Let \(\mathscr{H}=\ell\boxtimes\mathcal{H}\) be a homogeneous pre-foliation of co-degree \(1\) and degree \(d\geq 3\) on \(\mathbb{P}_{\mathbb{C}}^{2}\), defined by the \(1\)-form_
\[\omega=(\alpha x+\beta y)\left(A(x,y)\mathrm{d}x+B(x,y)\mathrm{d}y\right),\quad A,B\in\mathbb{C}[x,y]_{d-1},\ \ \gcd(A,B)=1.\]
_Let \((p,q)\) be the affine chart of \(\check{\mathbb{P}}_{\mathbb{C}}^{2}\) associated to the line \(\{y=px-q\}\subset\mathbb{P}_{\mathbb{C}}^{2}\) and let \(D=\{p=p_{0}\}\) be an irreducible component of \(\Delta(\operatorname{Leg}\mathscr{H})\setminus(D_{\ell}\cup\check{O})\). Write \(\underline{\mathcal{G}}_{\mathcal{H}}^{-1}([p_{0}:1])=\{[a_{1}:b_{1}],\ldots,[a_{n}:b_{n}]\}\) and denote by \(\nu_{i}\) the ramification index of \(\underline{\mathcal{G}}_{\mathcal{H}}\) at the point \([a_{i}:b_{i}]\in\mathbb{P}_{\mathbb{C}}^{1}\). For \(i\in\{1,\ldots,n\}\), define the polynomials \(P_{i}\in\mathbb{C}[x,y]_{d-\nu_{i}-1}\) and \(Q_{i}\in\mathbb{C}[x,y]_{2d-\nu_{i}-3}\) by_
\[P_{i}(x,y;a_{i},b_{i}):=\frac{\left|\begin{array}{cc}A(x,y)&A(b_{i},a_{i})\\ B(x,y)&B(b_{i},a_{i})\end{array}\right|}{(b_{i}y-a_{i}x)^{\nu_{i}}}\quad\text{and}\quad Q_{i}(x,y;a_{i},b_{i}):=(\nu_{i}-2)\left(\frac{\partial B}{\partial x}-\frac{\partial A}{\partial y}\right)P_{i}(x,y;a_{i},b_{i})+2(\nu_{i}+1)\left|\begin{array}{cc}\frac{\partial P_{i}}{\partial x}&A(x,y)\\ \frac{\partial P_{i}}{\partial y}&B(x,y)\end{array}\right|. \tag{3.2}\]
Then the curvature of \(\mathrm{Leg}\mathscr{H}\) is holomorphic on \(D\) if and only if
\[\sum_{i=1}^{n}\left(1-\frac{1}{\nu_{i}}\right)\left(p_{0}b_{i}-a_{i}\right) \left(\frac{Q_{i}(b_{i},a_{i};a_{i},b_{i})}{B(b_{i},a_{i})P_{i}(b_{i},a_{i};a_{i },b_{i})}+\frac{3\nu_{i}(\alpha+p_{0}\beta)}{\alpha b_{i}+\beta a_{i}}\right)=0.\]
Proof. -- Let \(\delta\in\mathbb{C}\) be such that \(\beta+\alpha\delta\neq 0\) and \(b_{i}-a_{i}\delta\neq 0\) for all \(i=1,\ldots,n\). Up to conjugating \(\omega\) by the linear transformation \((x+\delta y,y)\), we can assume that none of the lines \(\ell=(\alpha x+\beta y=0)\) and \(L_{i}=(b_{i}y-a_{i}x=0)\) is vertical, _i.e._ that \(\beta\neq 0\) and \(b_{i}\neq 0\) for all \(i=1,\ldots,n\). Let us then put \(\rho:=-\frac{\alpha}{\beta}\) and \(r_{i}:=\frac{a_{i}}{b_{i}}\); we have \(\underline{G}_{\mathcal{H}}^{-1}(p_{0})=\{r_{1},\ldots,r_{n}\}\) with \(\underline{G}_{\mathcal{H}}(z)=-\frac{A(1,z)}{B(1,z)}\). According to [7, Lemma 3.5], there therefore exists a constant \(c\in\mathbb{C}^{*}\) such that
\[-A(1,z)=p_{0}B(1,z)-c\prod_{i=1}^{n}(z-r_{i})^{\nu_{i}}.\]
Moreover, the \(d\)-web \(\mathrm{Leg}\mathcal{H}\) is given in the affine chart \((p,q)\) by the differential equation
\[\left((p-\rho)x-q\right)\left(A(x,px-q)+pB(x,px-q)\right)=0,\qquad\text{with} \qquad x=\frac{\mathrm{d}q}{\mathrm{d}p}; \tag{3.3}\]
since \(A,B\in\mathbb{C}[x,y]_{d-1},\) this equation can then be rewritten as
\[0 =x^{d-1}\left((p-\rho)x-q\right)\left(A(1,p-\frac{q}{x})+pB(1,p- \frac{q}{x})\right)\] \[=x^{d}\left(p-\frac{q}{x}-\rho\right)\left((p-p_{0})B(1,p-\frac{ q}{x})+c\prod_{i=1}^{n}(p-\frac{q}{x}-r_{i})^{\nu_{i}}\right),\qquad\text{with} \qquad x=\frac{\mathrm{d}q}{\mathrm{d}p}.\]
Put \(\ddot{x}:=q,\ \ddot{y}:=p-p_{0}\ \text{ and }\ \ddot{p}:=\frac{ \mathrm{d}\ddot{y}}{\mathrm{d}\ddot{x}}=\frac{1}{x}\); in these new coordinates \(D=\{\ddot{y}=0\}\) and \(\mathrm{Leg}\mathcal{H}\) is described by the differential equation
\[F(\ddot{x},\ddot{y},\ddot{p}):=\left(\ddot{y}+p_{0}-\ddot{p}\ddot{x}-\rho \right)\left(\ddot{y}B(1,\ddot{y}+p_{0}-\ddot{p}\ddot{x})+c\prod_{i=1}^{n} \bigl{(}\ddot{y}+p_{0}-\ddot{p}\ddot{x}-r_{i}\bigr{)}^{\nu_{i}}\right)=0.\]
We have \(F(\ddot{x},0,\ddot{p})=c(-\ddot{x})^{d}\bigl{(}\ddot{p}-\varphi_{0}(\ddot{x}) \bigr{)}\prod_{i=1}^{n}\bigl{(}\ddot{p}-\varphi_{i}(\ddot{x})\bigr{)}^{\nu_{i}},\) where \(\varphi_{0}(\ddot{x})=\frac{p_{0}-\rho}{\ddot{x}}\) and \(\varphi_{i}(\ddot{x})=\frac{p_{0}-r_{i}}{\ddot{x}}\); the hypothesis that \(D\neq D_{\ell}=\{p=\underline{G}_{\mathcal{H}}(\rho)\}\) translates into the fact that, for all \(i\in\{1,\ldots,n\}\), \(r_{i}\neq\rho\) and therefore \(\varphi_{i}\not\equiv\varphi_{0}\). Note that if \(\nu_{i}\geq 2,\) then \(\partial_{\ddot{y}}F\bigl{(}\ddot{x},0,\varphi_{i}(\ddot{x})\bigr{)}=(r_{i}- \rho)B(1,r_{i})\neq 0\); since \(\partial_{\ddot{p}}F\bigl{(}\ddot{x},0,\varphi_{0}(\ddot{x})\bigr{)}\not\equiv 0\) and \(\partial_{\ddot{p}}F\bigl{(}\ddot{x},0,\varphi_{i}(\ddot{x})\bigr{)}\not\equiv 0\) if \(\nu_{i}=1,\) we deduce that the surface
\[S_{\mathrm{Leg}\mathcal{H}}:=\bigl{\{}(\ddot{x},\ddot{y},\ddot{p})\in\mathbb{P} \mathrm{T}^{*}\dot{\mathbb{P}}_{\mathbb{C}}^{2}\ |\ F(\ddot{x},\ddot{y},\ddot{p})=0\bigr{\}}\]
is smooth along \(D=\{\ddot{y}=0\}\). Thus, according to [7, Theorem 2.1], the curvature of \(\mathrm{Leg}\mathcal{H}\) is holomorphic on \(D=\{\ddot{y}=0\}\) if and only if \(\sum_{i=1}^{n}(\nu_{i}-1)\varphi_{i}(\ddot{x})\psi_{i}(\ddot{x})\equiv 0\) and \(\sum_{i=1}^{n}(\nu_{i}-1)\frac{\mathrm{d}}{\mathrm{d}\dot{x}}\psi_{i}(\ddot{x})\equiv 0\), where, for all \(i\in\{1,\ldots,n\}\) such that \(\nu_{i}\geq 2\),
\[\psi_{i}(\ddot{x})=\frac{1}{\nu_{i}}\left[(\nu_{i}-2)\left(d-\varphi_{i}(\ddot{x })\frac{\partial_{\ddot{p}}\partial_{\ddot{y}}F\bigl{(}\ddot{x},0,\varphi_{i}( \ddot{x})\bigr{)}}{\partial_{\ddot{y}}F\bigl{(}\ddot{x},0,\varphi_{i}(\ddot{x}) \bigr{)}}\right)-2(\nu_{i}+1)\left(\frac{\varphi_{0}(\ddot{x})}{\varphi_{i}( \ddot{x})-\varphi_{0}(\ddot{x})}+\sum_{j=1,j\neq i}^{n}\frac{\nu_{j}\varphi_{j}( \ddot{x})}{\varphi_{i}(\ddot{x})-\varphi_{j}(\ddot{x})}\right)\right].\]
Now, if \(\nu_{i}\geq 3\) then \(\partial_{\ddot{p}}\partial_{\ddot{y}}F\bigl{(}\ddot{x},0,\varphi_{i}(\ddot{x}) \bigr{)}=-\ddot{x}\Bigl{(}B(1,r_{i})+(r_{i}-\rho)\partial_{y}B(1,r_{i})\Bigr{)}\). It follows that
\[\psi_{i}(\ddot{x})=\psi_{i}:=\frac{1}{\nu_{i}}\left[(\nu_{i}-2)\left(d+\frac{ \Bigl{(}p_{0}-r_{i}\Bigr{)}\Bigl{(}B(1,r_{i})+(r_{i}-\rho)\partial_{y}B(1,r_{i}) \Bigr{)}}{(r_{i}-\rho)B(1,r_{i})}\right)+2(\nu_{i}+1)\left(\frac{p_{0}-\rho}{r _{i}-\rho}+\sum_{j=1,j\neq i}^{n}\frac{\nu_{j}(p_{0}-r_{j})}{r_{i}-r_{j}}\right) \right].\]
Therefore \(K(\mathrm{Leg}\mathcal{H})\) is holomorphic on \(D=\{\tilde{y}=0\}\) if and only if \(\sum_{i=1}^{n}(\nu_{i}-1)\varphi_{i}(\tilde{x})\psi_{i}\equiv 0\). On the other hand, arguing as in the proof of [7, Theorem 3.1], we obtain that
\[\sum_{j=1,j\neq i}^{n}\frac{\nu_{j}(p_{0}-r_{j})}{r_{i}-r_{j}}=\frac{\left| \begin{array}{cc}\partial_{x}P_{i}(1,r_{i};r_{i},1)&A(1,r_{i})\\ \partial_{y}P_{i}(1,r_{i};r_{i},1)&B(1,r_{i})\end{array}\right|}{B(1,r_{i})P_ {i}(1,r_{i};r_{i},1)}\]
and that, for all \(i\in\{1,\ldots,n\}\) such that \(\nu_{i}\geq 2,\)
\[(d-1)B(1,r_{i})+(p_{0}-r_{i})\partial_{y}B(1,r_{i})=\partial_{x}B(1,r_{i})- \partial_{y}A(1,r_{i}),\]
so that
\[\psi_{i} =\frac{1}{\nu_{i}}\left[(\nu_{i}-2)\left(\frac{p_{0}-\rho}{r_{i}-\rho}+\frac{\partial_{x}B(1,r_{i})-\partial_{y}A(1,r_{i})}{B(1,r_{i})}\right)+2(\nu_{i}+1)\left(\frac{p_{0}-\rho}{r_{i}-\rho}+\frac{\left|\begin{array}{cc}\partial_{x}P_{i}(1,r_{i};r_{i},1)&A(1,r_{i})\\ \partial_{y}P_{i}(1,r_{i};r_{i},1)&B(1,r_{i})\end{array}\right|}{B(1,r_{i})P_{i}(1,r_{i};r_{i},1)}\right)\right]\] \[=\frac{Q_{i}(1,r_{i};r_{i},1)}{\nu_{i}B(1,r_{i})P_{i}(1,r_{i};r_{i},1)}+\frac{3(p_{0}-\rho)}{r_{i}-\rho}.\]
As a result, \(K(\mathrm{Leg}\mathcal{H})\) is holomorphic along \(D=\{\tilde{y}=0\}\) if and only if
\[\frac{1}{\tilde{x}}\sum_{i=1}^{n}\left(1-\frac{1}{\nu_{i}}\right)\left(p_{0}- r_{i}\right)\left(\frac{Q_{i}(1,r_{i};r_{i},1)}{B(1,r_{i})P_{i}(1,r_{i};r_{i},1)}+ \frac{3\nu_{i}(p_{0}-\rho)}{r_{i}-\rho}\right)=0,\]
hence the theorem follows.
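For computations in concrete examples, the polynomials \(P_{i}\), \(Q_{i}\) of (3.2) and the individual terms of the sum in Theorem 3.13 can be evaluated symbolically. The following SymPy helpers are a minimal sketch under the notations of the theorem (they are not part of the original argument); the caller supplies \(A\), \(B\), a fiber point \([a_{i}:b_{i}]\), its ramification index \(\nu_{i}\), the value \(p_{0}\) and the coefficients \(\alpha\), \(\beta\) of \(\ell\).

```python
# Illustrative SymPy helpers: the polynomials P_i, Q_i of (3.2) and the
# i-th term of the sum appearing in the criterion of Theorem 3.13.
import sympy as sp

x, y = sp.symbols('x y')

def P(A, B, a, b, nu):
    """P_i(x,y;a,b) = (A(x,y)B(b,a) - A(b,a)B(x,y)) / (b*y - a*x)**nu."""
    det = A*B.subs({x: b, y: a}) - A.subs({x: b, y: a})*B
    return sp.cancel(det / (b*y - a*x)**nu)

def Q(A, B, a, b, nu):
    """Q_i = (nu-2)(B_x - A_y) P_i + 2(nu+1)(P_{i,x} B - P_{i,y} A)."""
    Pi = P(A, B, a, b, nu)
    return sp.expand((nu - 2)*(sp.diff(B, x) - sp.diff(A, y))*Pi
                     + 2*(nu + 1)*(sp.diff(Pi, x)*B - sp.diff(Pi, y)*A))

def term(A, B, a, b, nu, p0, alpha, beta):
    """(1 - 1/nu)(p0*b - a)(Q_i(b,a;a,b)/(B(b,a)P_i(b,a;a,b))
       + 3*nu*(alpha + p0*beta)/(alpha*b + beta*a))."""
    Pi = P(A, B, a, b, nu).subs({x: b, y: a})
    Qi = Q(A, B, a, b, nu).subs({x: b, y: a})
    Bi = B.subs({x: b, y: a})
    return sp.simplify((1 - sp.Rational(1, nu))*(p0*b - a)
                       * (Qi/(Bi*Pi) + 3*nu*(alpha + p0*beta)/(alpha*b + beta*a)))
```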
**Remarks 3.14**.:
* (i) We recover the fact (_cf._ step _ii_. of the proof of Theorem 3.7) that the curvature of \(\mathrm{Leg}\mathscr{H}\) is always holomorphic along \(\check{\Sigma}_{\mathcal{H}}^{\mathrm{rad}}\setminus(\mathcal{G}_{\mathcal{H}}(\mathrm{I}_{\mathcal{H}}^{\mathrm{tr}})\cup D_{\ell}).\) Indeed, if \(D\) is contained in \(\check{\Sigma}_{\mathcal{H}}^{\mathrm{rad}}\setminus(\mathcal{G}_{\mathcal{H}}(\mathrm{I}_{\mathcal{H}}^{\mathrm{tr}})\cup D_{\ell}),\) then the fiber \(\underline{\mathcal{G}}_{\mathcal{H}}^{-1}([p_{0}:1])\) does not contain any non-fixed critical point of \(\underline{\mathcal{G}}_{\mathcal{H}}\), so that we have \(p_{0}b_{i}-a_{i}=0\) if \(\nu_{i}\geq 2,\) which implies (Theorem 3.13) that \(K(\mathrm{Leg}\mathscr{H})\) is holomorphic on \(D\).
* We know from [7, Theorem 3.1] that the curvature of \(\mathrm{Leg}\mathcal{H}\) is holomorphic on \(D\) if and only if \[\sum_{i=1}^{n}\left(1-\frac{1}{\nu_{i}}\right)\frac{(p_{0}b_{i}-a_{i})Q_{i}(b _{i},a_{i};a_{i},b_{i})}{B(b_{i},a_{i})P_{i}(b_{i},a_{i};a_{i},b_{i})}=0.\] From this result and Theorem 3.13, we deduce the following properties:
* If the curvature of \(\mathrm{Leg}\mathcal{H}\) is holomorphic on \(D\), then the curvature of \(\mathrm{Leg}\mathscr{H}\) is holomorphic on \(D\) if and only if \[(\alpha+p_{0}\beta)\sum_{i=1}^{n}\frac{(\nu_{i}-1)(p_{0}b_{i}-a_{i})}{\alpha b_{i}+\beta a_{i}}=0.\]
* In particular, when \(d=3\) the fiber \(\underline{\mathcal{G}}_{\mathcal{H}}^{-1}([p_{0}:1])\) is reduced to a single point, say \([a:b],\) and the holomorphy of the curvature of \(\mathrm{Leg}\mathscr{H}\) on \(D\) is equivalent to \((\alpha+p_{0}\beta)(p_{0}b-a)=0\), _i.e._ to \(\alpha+p_{0}\beta=0\) or \([a:b]=[p_{0}:1]\), and therefore to \((1,p_{0})\in\ell\) or \([p_{0}:1]\) is fixed by \(\underline{\mathcal{G}}_{\mathcal{H}}\).
* If \((1,p_{0})\in\ell\) then we have equivalence between the holomorphy on \(D\) of \(K(\mathrm{Leg}\mathscr{H})\) and that of \(K(\mathrm{Leg}\mathcal{H})\).
3. Assume that \(\nu_{i}=\nu\geq 2\) for all \(i\in\{1,\ldots,n\}\). Then the curvature of \(\operatorname{Leg}\mathcal{H}\) is holomorphic on \(D\) if and only if \[\sum_{i=1}^{n}\left(p_{0}b_{i}-a_{i}\right)\left(\frac{\left(\nu-2\right)\left( \partial_{x}B(b_{i},a_{i})-\partial_{y}A(b_{i},a_{i})\right)}{B(b_{i},a_{i})}+ \frac{3\nu(\alpha+p_{0}\,\beta)}{\alpha b_{i}+\beta\,a_{i}}\right)=0.\] Indeed, in the above proof, put \(\delta_{i,j}=\frac{(p_{0}-r_{i})(p_{0}-r_{j})}{(r_{i}-r_{j})}\) and note that \[\sum_{i=1}^{n}\left((\nu-1)\varphi_{i}(\tilde{x})\sum_{j=1,j\neq i}^{n}\frac{ \nu(p_{0}-r_{j})}{r_{i}-r_{j}}\right)=\frac{\nu(\nu-1)}{\tilde{x}}\sum_{i=1}^{ n}\sum_{j=1,j\neq i}^{n}\delta_{i,j}=\frac{\nu(\nu-1)}{\tilde{x}}\sum_{1\leq i<j \leq n}(\delta_{i,j}+\delta_{j,i})\equiv 0.\] In particular, if the fiber \(\underline{\mathcal{G}}_{\mathcal{H}}^{-1}([p_{0}:1])\) contains a single non-fixed critical point of \(\underline{\mathcal{G}}_{\mathcal{H}}\), say \([a:b]\), then * either \(\underline{\mathcal{G}}_{\mathcal{H}}^{-1}([p_{0}:1])=\{[a:b]\}\), in which case \(\nu=d-1\); * or \(\#\underline{\mathcal{G}}_{\mathcal{H}}^{-1}([p_{0}:1])=2\), in which case \(d\) is necessarily odd, \(d=2k+1,\) and \(\nu=k\). In both cases, the curvature of \(\operatorname{Leg}\mathcal{H}\) is holomorphic on \(D\) if and only if \[(\nu-2)(\alpha b+\beta\,a)\Big{(}\partial_{x}B(b,a)-\partial_{y}A(b,a)\Big{)} +3\nu(\alpha+p_{0}\,\beta)B(b,a)=0.\]
**Example 3.15**.: Let us consider the homogeneous pre-foliation \(\mathscr{H}=\ell\boxtimes\mathcal{H}\) of co-degree \(1\) and odd degree \(2k+1\geq 5\) on \(\mathbb{P}_{\mathbb{C}}^{2}\) defined by the \(1\)-form
\[\omega=(x-\tau y)\left(y^{k}(y-x)^{k}\mathrm{d}x+(y-\lambda x)^{k}(y-\mu x)^{ k}\mathrm{d}y\right),\quad\text{where }\lambda,\mu\in\mathbb{C}\setminus\{0,1\}\ \text{ and }\ \tau\in\mathbb{C}\setminus\{1\}.\]
We know from [7, Example 3.4] that \(D:=\{p=0\}\subset\Delta(\operatorname{Leg}\mathcal{H})\) and that the fiber \(\underline{\mathcal{G}}_{\mathcal{H}}^{-1}([0:1])\) consists of the two points \([0:1]\) and \([1:1]\): the point \([0:1]\) (resp. \([1:1]\)) is critical and fixed (resp. non-fixed) for \(\underline{\mathcal{G}}_{\mathcal{H}}\) with multiplicity \(k-1\). Moreover, since \(\tau\neq 1\), we have \([1:\tau]\not\in\underline{\mathcal{G}}_{\mathcal{H}}^{-1}([0:1])\), so that \(D\neq D_{\ell}=\{[p:1]=\underline{\mathcal{G}}_{\mathcal{H}}([1:\tau])\}\). From Remark 3.14 (iii), we deduce that the curvature of \(\operatorname{Leg}\mathscr{H}\) is holomorphic along \(D\) if and only if
\[0=(k-2)(1-\tau)\Big{(}\partial_{x}B(1,1)-\partial_{y}A(1,1)\Big{)}+3kB(1,1)=k( 1-\lambda)^{k}(1-\mu)^{k}\left(\frac{(k-2)(\tau-1)(\lambda+\mu-2\lambda\mu)}{( \lambda-1)(\mu-1)}+3\right),\]
_i.e._ if and only if the quadruple \((k,\lambda,\mu,\tau)\) satisfies the equation \((k-2)(\tau-1)(\lambda+\mu-2\lambda\mu)+3(\lambda-1)(\mu-1)=0\). Note that, according to [7, Example 3.4], the holomorphy of the curvature of \(\operatorname{Leg}\mathcal{H}\) along \(D\) is characterized by the equation \((k-2)(\lambda+\mu-2\lambda\mu)=0\). It follows, in particular, that if the curvature of \(\operatorname{Leg}\mathcal{H}\) is holomorphic on \(D\), then the curvature of \(\operatorname{Leg}\mathscr{H}\) is not holomorphic on \(D\), since the two characterizing equations are incompatible when \(\lambda,\mu\neq 1\).
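The displayed identity can be confirmed by a direct symbolic computation; the following SymPy sketch carries out the check for the concrete value \(k=3\) (any fixed \(k\geq 2\) can be tested in the same way).

```python
# SymPy check of the computation in Example 3.15 for k = 3 (sketch only).
import sympy as sp

x, y, lam, mu, tau = sp.symbols('x y lambda mu tau')
k = 3

A = y**k * (y - x)**k
B = (y - lam*x)**k * (y - mu*x)**k

lhs = ((k - 2)*(1 - tau)*(sp.diff(B, x) - sp.diff(A, y)) + 3*k*B).subs({x: 1, y: 1})
rhs = k*(1 - lam)**k*(1 - mu)**k*((k - 2)*(tau - 1)*(lam + mu - 2*lam*mu)
                                  /((lam - 1)*(mu - 1)) + 3)

print(sp.simplify(lhs - rhs))   # prints 0, confirming the displayed identity
```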
**Corollary 3.16**.: Let \(\mathscr{H}=\ell\boxtimes\mathcal{H}\) be a homogeneous pre-foliation of co-degree \(1\) and degree \(d\geq 3\) on \(\mathbb{P}_{\mathbb{C}}^{2}\), defined by the \(1\)-form
\[\omega=(\alpha x+\beta\,y)\left(A(x,y)\mathrm{d}x+B(x,y)\mathrm{d}y\right), \quad A,B\in\mathbb{C}[x,y]_{d-1},\ \ \gcd(A,B)=1.\]
Assume that the foliation \(\mathcal{H}\) has a transverse inflection line \(T=(ax+by=0)\) of order \(\nu-1\). Assume moreover that \([-a:b]\in\mathbb{P}_{\mathbb{C}}^{1}\) is the only non-fixed critical point of \(\underline{\mathcal{G}}_{\mathcal{H}}\) in its fiber \(\underline{\mathcal{G}}_{\mathcal{H}}^{-1}(\underline{\mathcal{G}}_{\mathcal{H}}([-a:b]))\) and that \([-\alpha:\beta]\not\in\underline{\mathcal{G}}_{\mathcal{H}}^{-1}(\underline{\mathcal{G}}_{\mathcal{H}}([-a:b]))\). Then the curvature of \(\operatorname{Leg}\mathscr{H}\) is holomorphic on \(T^{\prime}=\mathcal{G}_{\mathcal{H}}(T)\) if and only if
\[(\alpha b-\beta\,a)Q(b,-a;a,b)+3\nu\Big{(}\alpha B(b,-a)-\beta A(b,-a)\Big{)}P( b,-a;a,b)=0,\]
where
\[Q(x,y;a,b):=(\nu-2)\left(\frac{\partial B}{\partial x}-\frac{\partial A}{\partial y }\right)P(x,y;a,b)+2(\nu+1)\left|\begin{array}{cc}\frac{\partial P}{ \partial x}&A(x,y)\\ \frac{\partial P}{\partial y}&B(x,y)\end{array}\right|\quad\text{and}\quad P(x,y;a,b):=\frac{\left|\begin{array}{cc}A(x,y)&A(b,-a)\\ B(x,y)&B(b,-a)\end{array}\right|}{(ax+by)^{\nu}}.\]
Proof. -- Up to linear conjugation, we can assume that \(T^{\prime}\neq L_{\infty}\); then \(T^{\prime}\) has the equation \(p=p_{0}\), where \(p_{0}=-\frac{A(b,-a)}{B(b,-a)}\). According to Theorem 3.13, the curvature of \(\mathrm{Leg}\mathcal{H}\) is holomorphic on \(T^{\prime}\) if and only if
\[\left(1-\tfrac{1}{\nu}\right)(p_{0}b+a)\left(\frac{Q(b,-a;a,b)}{B(b,-a)P(b,-a ;a,b)}+\frac{3\nu(\alpha+p_{0}\beta)}{\alpha b-\beta a}\right)=0.\]
Now, the hypothesis that the point \([-a:b]\) is not fixed by \(\underline{G}_{\mathcal{H}}\) translates into \(p_{0}b+a\neq 0\). It follows that \(K(\mathrm{Leg}\mathcal{H})\) is holomorphic on \(T^{\prime}\) if and only if
\[\frac{Q(b,-a;a,b)}{P(b,-a;a,b)}+\frac{3\nu\big{(}\alpha B(b,-a)-\beta A(b,-a) \big{)}}{\alpha b-\beta a}=0,\]
hence the corollary holds.
In particular, we have:
**Corollary 3.17**.: _Let \(\mathscr{H}=\ell\boxtimes\mathcal{H}\) be a homogeneous pre-foliation of co-degree \(1\) and degree \(d\geq 3\) on \(\mathbb{P}^{2}_{\mathbb{C}}\), defined by the \(1\)-form_
\[\omega=\left(\alpha x+\beta y\right)\left(A(x,y)\mathrm{d}x+B(x,y)\mathrm{d}y \right),\quad A,B\in\mathbb{C}[x,y]_{d-1},\ \ \gcd(A,B)=1.\]
Assume that \(\mathcal{H}\) admits a transverse inflection line \(T=(ax+by=0)\) of maximal order \(d-2\) and that \(T\neq\ell\). Then the curvature of \(\mathrm{Leg}\mathscr{H}\) is holomorphic along \(T^{\prime}=\mathcal{G}_{\mathcal{H}}(T)\) if and only if
\[(d-3)(\alpha b-\beta a)\Big{(}\partial_{x}B(b,-a)-\partial_{y}A(b,-a)\Big{)} +3(d-1)\Big{(}\alpha B(b,-a)-\beta A(b,-a)\Big{)}=0.\]
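In examples, the left-hand side of this criterion is immediate to evaluate once \(A\), \(B\), \(T\) and \(\ell\) are given. The following SymPy helper is a minimal sketch (not part of the original text); verifying the hypotheses of Corollary 3.17 is left to the user.

```python
# Illustrative SymPy helper: the left-hand side of the criterion of
# Corollary 3.17, to be tested for vanishing.
import sympy as sp

x, y = sp.symbols('x y')

def corollary_3_17_lhs(A, B, a, b, alpha, beta, d):
    """(d-3)(alpha*b - beta*a)(B_x - A_y)(b,-a) + 3(d-1)(alpha*B - beta*A)(b,-a)."""
    first = (sp.diff(B, x) - sp.diff(A, y)).subs({x: b, y: -a})
    second = (alpha*B - beta*A).subs({x: b, y: -a})
    return sp.simplify((d - 3)*(alpha*b - beta*a)*first + 3*(d - 1)*second)
```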
The following theorem is an effective criterion for the holomorphy of the curvature of the web dual to a homogeneous pre-foliation \(\mathscr{H}=\ell\boxtimes\mathcal{H}\) (with \(O\in\ell\)) along the component \(D_{\ell}\subset\Delta(\mathrm{Leg}\mathscr{H})\).
**Theorem 3.18**.: _Let \(\mathscr{H}=\ell\boxtimes\mathcal{H}\) be a homogeneous pre-foliation of co-degree \(1\) and degree \(d\geq 3\) on \(\mathbb{P}^{2}_{\mathbb{C}}\), defined by the \(1\)-form_
\[\omega=\left(\alpha x+\beta y\right)\left(A(x,y)\mathrm{d}x+B(x,y)\mathrm{d}y \right),\quad A,B\in\mathbb{C}[x,y]_{d-1},\ \ \gcd(A,B)=1.\]
_Write \(\underline{G}_{\mathcal{H}}^{-1}(\underline{G}_{\mathcal{H}}([-\alpha:\beta]) )=\{[-\alpha:\beta],[a_{1}:b_{1}],\ldots,[a_{n}:b_{n}]\}\) and denote by \(\nu_{i}\) (resp. \(\nu_{0}\)) the ramification index of \(\underline{G}_{\mathcal{H}}\) at the point \([a_{i}:b_{i}]\) (resp. \([-\alpha:\beta]\)). Define the polynomials \(P_{0}\in\mathbb{C}[x,y]_{d-\nu_{0}-1}\) and \(Q_{0}\in\mathbb{C}[x,y]_{2d-\nu_{0}-3}\) by_
\[P_{0}(x,y;\alpha,\beta):=\frac{\left|\begin{array}{cc}A(x,y)&A(\beta,-\alpha )\\ B(x,y)&B(\beta,-\alpha)\end{array}\right|}{(\alpha x+\beta y)^{\nu_{0}}}\quad \text{and}\quad Q_{0}(x,y;\alpha,\beta):=(\nu_{0}-1)\left(\frac{\partial B}{ \partial x}-\frac{\partial A}{\partial y}\right)P_{0}(x,y;\alpha,\beta)+(2\nu_ {0}+1)\left|\begin{array}{cc}\frac{\partial P_{0}}{\partial x}&A(x,y)\\ \frac{\partial P_{0}}{\partial y}&B(x,y)\end{array}\right|.\]
_Assume that \(\underline{G}_{\mathcal{H}}([-\alpha:\beta])\neq\infty\) and let \(p_{0}\in\mathbb{C}\) be such that \([p_{0}:1]=\underline{G}_{\mathcal{H}}([-\alpha:\beta])\). Then the curvature of \(\mathrm{Leg}\mathscr{H}\) is holomorphic on \(D_{\ell}\) if and only if_
\[\left(1+\frac{1}{\nu_{0}}\right)\frac{(\alpha+p_{0}\beta)Q_{0}(\beta,-\alpha; \alpha,\beta)}{B(\beta,-\alpha)P_{0}(\beta,-\alpha;\alpha,\beta)}+\sum_{i=1}^ {n}\left(1-\frac{1}{\nu_{i}}\right)\left(p_{0}b_{i}-a_{i}\right)\left(\frac{Q_{i }(b_{i},a_{i};a_{i},b_{i})}{B(b_{i},a_{i})P_{i}(b_{i},a_{i};a_{i},b_{i})}+ \frac{3\nu_{i}(\alpha+p_{0}\beta)}{\alpha b_{i}+\beta a_{i}}\right)=0,\]
_where the \(P_{i}\)'s and the \(Q_{i}\)'s (\(i=1,\ldots,n\)) are the polynomials given by (3.2)._
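As with the polynomials of (3.2), \(P_{0}\) and \(Q_{0}\) are readily computed symbolically in concrete examples; the following SymPy sketch (illustrative only) encodes their defining formulas.

```python
# Illustrative SymPy helpers: the polynomials P_0, Q_0 of Theorem 3.18.
import sympy as sp

x, y = sp.symbols('x y')

def P0(A, B, alpha, beta, nu0):
    """P_0(x,y;alpha,beta) = (A(x,y)B(beta,-alpha) - A(beta,-alpha)B(x,y))
       / (alpha*x + beta*y)**nu0."""
    det = A*B.subs({x: beta, y: -alpha}) - A.subs({x: beta, y: -alpha})*B
    return sp.cancel(det / (alpha*x + beta*y)**nu0)

def Q0(A, B, alpha, beta, nu0):
    """Q_0 = (nu0-1)(B_x - A_y) P_0 + (2*nu0+1)(P_{0,x} B - P_{0,y} A)."""
    Pol0 = P0(A, B, alpha, beta, nu0)
    return sp.expand((nu0 - 1)*(sp.diff(B, x) - sp.diff(A, y))*Pol0
                     + (2*nu0 + 1)*(sp.diff(Pol0, x)*B - sp.diff(Pol0, y)*A))
```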
Note that the \(d\)-web \(\operatorname{Leg}\mathscr{H}=\operatorname{Leg}\ell\boxtimes\operatorname{Leg}\mathcal{H}\) is not smooth along the component \(D_{\ell}\subset\operatorname{Tang}(\operatorname{Leg}\ell,\operatorname{Leg}\mathcal{H})\) and therefore we cannot apply Theorem 2.1 of [7] to \(\operatorname{Leg}\mathscr{H}\) as we did in the proof of Theorem 3.13. To prove Theorem 3.18, we will first establish, for a foliation \(\mathcal{F}\) and a web \(\mathcal{W}\) smooth along an irreducible component \(D\) of \(\operatorname{Tang}(\mathcal{F},\mathcal{W}),\) an effective criterion for the holomorphy of the curvature of \(\mathcal{F}\boxtimes\mathcal{W}\) along \(D.\)
**Theorem 3.19**.: _Let \(\mathcal{W}\) be a holomorphic \((d-1)\)-web on a complex surface \(M.\) Let \(\mathcal{F}\) be a holomorphic foliation on \(M.\) Assume that \(\mathcal{W}\) is smooth along an irreducible component \(D\) of \(\operatorname{Tang}(\mathcal{F},\mathcal{W}).\) Then the fundamental form \(\eta(\mathcal{F}\boxtimes\mathcal{W})\) has simple poles along \(D.\) More precisely, choose a local coordinate system \((x,y)\) on \(M\) such that \(D=\{y=0\}\) and let \(F(x,y,p)=0,\)\(p=\frac{\mathrm{d}y}{\mathrm{d}x},\) be an implicit differential equation defining \(\mathcal{W}.\) Write \(F(x,0,p)=a_{0}(x)\prod\limits_{\alpha=1}^{n}(p-\varphi_{\alpha}(x))^{\nu_{ \alpha}},\) with \(\varphi_{\alpha}\not\equiv\varphi_{\beta}\) if \(\alpha\neq\beta,\) and assume that \(\mathcal{F}\) is given by a \(1\)-form \(\omega\) of type \(\omega=\mathrm{d}y-(\varphi_{1}(x)+yf(x,y))\,\mathrm{d}x.\) Define \(Q(x,p)\) by \(F(x,0,p)=(p-\varphi_{1}(x))^{\nu_{1}}Q(x,p)\) and put
\[h(x)=\frac{1}{\nu_{1}}\left[(\nu_{1}-1)\left(d-1-\varphi_{1}(x)\frac{\partial _{p}\partial_{y}F(x,0,\varphi_{1}(x))+2\delta_{\nu_{1},2}f(x,0)Q(x,\varphi_{1} (x))}{\partial_{y}F(x,0,\varphi_{1}(x))}\right)-(2\nu_{1}+1)\sum\limits_{ \alpha=2}^{n}\frac{\nu_{\alpha}\varphi_{\alpha}(x)}{\varphi_{1}(x)-\varphi_{ \alpha}(x)}\right]\]
(where \(\delta_{\nu_{1},2}=1\) if \(\nu_{1}=2\) and \(0\) otherwise). Let \(\psi_{\alpha}\) be a function of the coordinate \(x\) defined, for all \(\alpha\in\{1,\ldots,n\}\) such that \(\nu_{\alpha}\geq 2,\) by
\[\psi_{\alpha}(x)=\frac{1}{\nu_{\alpha}}\left[(\nu_{\alpha}-2)\left(d-1-\varphi _{\alpha}(x)\frac{\partial_{p}\partial_{y}F\big{(}x,0,\varphi_{\alpha}(x) \big{)}}{\partial_{y}F\big{(}x,0,\varphi_{\alpha}(x)\big{)}}\right)-2(\nu_{ \alpha}+1)\sum\limits_{\beta=1,\beta\neq\alpha}^{n}\frac{\nu_{\beta}\varphi_{ \beta}(x)}{\varphi_{\alpha}(x)-\varphi_{\beta}(x)}\right].\]
Then the \(1\)-form \(\eta(\mathcal{F}\boxtimes\mathcal{W})-\frac{\theta}{6y}\) is holomorphic along \(D=\{y=0\},\) where
\[\theta=(\nu_{1}+1)\left[h(x)\big{(}\mathrm{d}y-\varphi_{1}(x)\mathrm{d}x \big{)}+(\nu_{1}-1)\mathrm{d}y\right]+\sum\limits_{\alpha=2}^{n}(\nu_{\alpha}- 1)\left[\left(\psi_{\alpha}(x)+\frac{3\varphi_{1}(x)}{\varphi_{1}(x)-\varphi_{ \alpha}(x)}\right)(\mathrm{d}y-\varphi_{\alpha}(x)\mathrm{d}x)+(\nu_{\alpha}- 2)\mathrm{d}y\right].\]
In particular, the curvature \(K(\mathcal{F}\boxtimes\mathcal{W})\) is holomorphic along \(D\) if and only if
\[(\nu_{1}+1)\varphi_{1}(x)h(x)+\sum\limits_{\alpha=2}^{n}(\nu_{\alpha}-1)\varphi _{\alpha}(x)\left(\psi_{\alpha}(x)+\frac{3\varphi_{1}(x)}{\varphi_{1}(x)- \varphi_{\alpha}(x)}\right)\equiv 0\]
and
\[\frac{\mathrm{d}}{\mathrm{d}x}\left((\nu_{1}+1)h(x)+\sum\limits_{\alpha=2}^{n} (\nu_{\alpha}-1)\left(\psi_{\alpha}(x)+\frac{3\varphi_{1}(x)}{\varphi_{1}(x)- \varphi_{\alpha}(x)}\right)\right)\equiv 0.\]
Proof.: In a neighborhood of a generic point \(m\) of \(D,\) the web \(\mathcal{W}\) decomposes as \(\mathcal{W}=\boxtimes_{\alpha=1}^{n}\mathcal{W}_{\alpha},\) where \(\mathcal{W}_{\alpha}=\boxtimes_{i=1}^{\nu_{\alpha}}\mathcal{F}_{i}^{\alpha}\) and \(\mathcal{F}_{i}^{\alpha}|_{y=0}:\mathrm{d}y-\varphi_{\alpha}(x)\mathrm{d}x=0.\) Then \(\eta(\mathcal{F}\boxtimes\mathcal{W})=\eta(\mathcal{W})+\eta_{1}+\eta_{2}+\eta_{3 }+\eta_{4},\) where
\[\eta_{1}=\sum\limits_{1\leq i<j\leq\nu_{1}}\eta(\mathcal{F} \boxtimes\mathcal{F}_{i}^{1}\boxtimes\mathcal{F}_{j}^{1}), \eta_{2}=\sum\limits_{\alpha=2}^{n}\sum\limits_{1\leq i\leq\nu_{1} \atop 1\leq j\leq\nu_{\alpha}}\eta(\mathcal{F}\boxtimes\mathcal{F}_{i}^{1} \boxtimes\mathcal{F}_{j}^{\alpha}),\] \[\eta_{3}=\sum\limits_{\alpha=2}^{n}\sum\limits_{1\leq i<j\leq\nu_{ \alpha}}\eta(\mathcal{F}\boxtimes\mathcal{F}_{i}^{\alpha}\boxtimes\mathcal{F}_{j}^ {\alpha}), \eta_{4}=\sum\limits_{2\leq\alpha<\beta\leq n}\sum\limits_{1\leq i\leq\nu_{ \alpha}\atop 1\leq j\leq\nu_{\beta}}\eta(\mathcal{F}\boxtimes\mathcal{F}_{i}^{\alpha} \boxtimes\mathcal{F}_{j}^{\beta}).\]
According to [7, Theorem 2.1], the principal part of the Laurent series of \(\eta(\mathcal{W})\) at \(y=0\) is given by \(\frac{\theta_{0}}{y},\) where
\[\theta_{0}=\frac{1}{6}\sum\limits_{\alpha=1}^{n}\big{(}\nu_{\alpha}-1\big{)} \Big{[}\psi_{\alpha}(x)\big{(}\mathrm{d}y-\varphi_{\alpha}(x)\mathrm{d}x\big{)}+ \big{(}\nu_{\alpha}-2\big{)}\mathrm{d}y\Big{]}.\]
As for the \(1\)-forms \(\eta_{1},\ldots,\eta_{4}\), first note that, as in the proof of [7, Theorem 2.1], the slope \(p_{j}\) (\(j=1,\ldots,\nu_{\alpha}\)) of \(\Upsilon_{(x,y)}\mathcal{F}_{j}^{\alpha}\) can be written as
\[p_{j}=\lambda_{\alpha,j}(x,y):=\varphi_{\alpha}(x)+\sum_{k\geq 1}f_{\alpha,k}(x )\zeta_{\alpha}^{jk}y^{\frac{k}{\nu_{\alpha}}},\quad\text{where }f_{\alpha,k}\in\mathbb{C}\{x\},\]
with \(f_{\alpha,1}\not\equiv 0\) and \(\zeta_{\alpha}=\exp(\frac{2i\pi}{\nu_{\alpha}})\). Moreover, for \(\alpha=1\), if \(\nu_{1}\geq 2\), then
\[(f_{1,1}(x))^{\nu_{1}}=-\frac{\partial_{y}F\left(x,0,\varphi_{1}(x)\right)}{Q( x,\varphi_{1}(x))} \tag{3.4}\]
and, for all \(\alpha\in\{1,\ldots,n\}\) such that \(\nu_{\alpha}\geq 2\), we have
\[\frac{f_{\alpha,2}(x)}{(f_{\alpha,1}(x))^{2}}=\frac{1}{\nu_{\alpha}}\left[\frac{\partial_{p}\partial_{y}F\left(x,0,\varphi_{\alpha}(x)\right)}{\partial_{y}F\left(x,0,\varphi_{\alpha}(x)\right)}-\sum_{\beta=1,\beta\not=\alpha}^{n}\frac{\nu_{\beta}}{\varphi_{\alpha}(x)-\varphi_{\beta}(x)}\right]. \tag{3.5}\]
Put \(\lambda_{0}(x,y)=\varphi_{1}(x)+yf(x,y)\); according to [7, Lemma 2.8], we have \(\eta(\mathcal{F}\boxtimes\mathcal{F}_{i}^{1}\boxtimes\mathcal{F}_{j}^{1})=a _{i,j}(x,y)\mathrm{d}x+b_{i,j}(x,y)\mathrm{d}y\), where
\[a_{i,j}=-\frac{(\partial_{y}(\lambda_{1,i}\lambda_{1,j})-\partial_{x}\lambda_{0})\lambda_{0}}{(\lambda_{1,i}-\lambda_{0})(\lambda_{1,j}-\lambda_{0})}-\frac{(\partial_{y}(\lambda_{1,i}\lambda_{0})-\partial_{x}\lambda_{1,j})\lambda_{1,j}}{(\lambda_{1,i}-\lambda_{1,j})(\lambda_{0}-\lambda_{1,j})}-\frac{(\partial_{y}(\lambda_{1,j}\lambda_{0})-\partial_{x}\lambda_{1,i})\lambda_{1,i}}{(\lambda_{1,j}-\lambda_{1,i})(\lambda_{0}-\lambda_{1,i})}\]
and
\[b_{i,j}=\frac{\partial_{y}(\lambda_{1,i}\lambda_{1,j})-\partial_{x}\lambda_{0}}{(\lambda_{1,i}-\lambda_{0})(\lambda_{1,j}-\lambda_{0})}+\frac{\partial_{y}(\lambda_{1,i}\lambda_{0})-\partial_{x}\lambda_{1,j}}{(\lambda_{1,i}-\lambda_{1,j})(\lambda_{0}-\lambda_{1,j})}+\frac{\partial_{y}(\lambda_{1,j}\lambda_{0})-\partial_{x}\lambda_{1,i}}{(\lambda_{1,j}-\lambda_{1,i})(\lambda_{0}-\lambda_{1,i})}.\]
Writing \(f(x,y)=\sum_{k\geq 0}f_{0,k}(x)y^{k}\) and putting \(w_{1}=y^{\frac{1}{\nu_{1}}}\), a straightforward computation leads to the following equalities:
\[\partial_{y}(\lambda_{1,i}\lambda_{1,j})-\partial_{x}\lambda_{0} =\frac{1}{\nu_{1}y}\left[(\zeta_{1}^{i}+\zeta_{1}^{j})\varphi_{1 }f_{1,1}w_{1}+2\left(\zeta_{1}^{i+j}f_{1,1}^{2}+(\zeta_{1}^{2i}+\zeta_{1}^{2j })\varphi_{1}f_{1,2}-\delta_{\nu_{1},2}\varphi_{1}^{{}^{\prime}}\right)w_{1}^ {2}+\cdots\right],\] \[\partial_{y}(\lambda_{1,i}\lambda_{0})-\partial_{x}\lambda_{1,j} =\frac{1}{\nu_{1}y}\left[\zeta_{1}^{i}\varphi_{1}f_{1,1}w_{1}+2 \left(\zeta_{1}^{2i}\varphi_{1}f_{1,2}+(\varphi_{1}f_{0,0}-\varphi_{1}^{{}^{ \prime}})\delta_{\nu_{1},2}\right)w_{1}^{2}+\cdots\right],\] \[(\lambda_{1,i}-\lambda_{0})(\lambda_{1,j}-\lambda_{0})=\zeta_{1}^ {i+j}f_{1,1}^{2}w_{1}^{2}+(\zeta_{1}^{2i+j}+\zeta_{1}^{i+2j})f_{1,1}f_{1,2}w_{1 }^{3}+\cdots,\] \[(\lambda_{1,i}-\lambda_{1,j})(\lambda_{0}-\lambda_{1,j})=(\zeta_{ 1}^{2j}-\zeta_{1}^{i+j})f_{1,1}^{2}w_{1}^{2}+f_{1,1}\left((2\zeta_{1}^{3j}- \zeta_{1}^{2i+j}-\zeta_{1}^{i+2j})f_{1,2}-2\delta_{\nu_{1},2}f_{0,0}\right)w_{ 1}^{3}+\cdots.\]
These equalities allow us to check that \(a_{i,j}\) and \(b_{i,j}\) can be written as
\[a_{i,j}=\frac{\varphi_{1}\left(\varphi_{1}f_{1,2}-f_{1,1}^{2}-\delta_{\nu_{1}, 2}\varphi_{1}f_{0,0}\right)+w_{1}A_{i,j}}{\nu_{1}yf_{1,1}^{2}},\qquad\qquad b _{i,j}=\frac{2f_{1,1}^{2}-\varphi_{1}f_{1,2}+\delta_{\nu_{1},2}\varphi_{1}f_{0,0}+w_{1}B_{i,j}}{\nu_{1}yf_{1,1}^{2}},\]
where \(A_{i,j},B_{i,j}\in\mathbb{C}\{x,w_{1}\}\). Since \(\eta_{1}\) is a uniform and meromorphic \(1\)-form, we deduce that the principal part of the Laurent series of \(\eta_{1}\) at \(y=0\) is given by \(\frac{\theta_{1}}{y}\), where
\[\theta_{1} =\binom{\nu_{1}}{2}\left(\frac{\varphi_{1}(x)\left(\varphi_{1}(x)f_{1,2}(x)-f_{1,1}(x)^{2}-\delta_{\nu_{1},2}\varphi_{1}(x)f_{0,0}(x)\right)}{\nu_{1}f_{1,1}(x)^{2}}\mathrm{d}x+\frac{2f_{1,1}(x)^{2}-\varphi_{1}(x)f_{1,2}(x)+\delta_{\nu_{1},2}\varphi_{1}(x)f_{0,0}(x)}{\nu_{1}f_{1,1}(x)^{2}}\mathrm{d}y\right)\] \[=\frac{1}{2}(\nu_{1}-1)\left[\varphi_{1}(x)\left(\frac{\delta_{\nu_{1},2}f_{0,0}(x)-f_{1,2}(x)}{f_{1,1}(x)^{2}}\right)\left(\mathrm{d}y-\varphi_{1}(x)\mathrm{d}x\right)+2\mathrm{d}y-\varphi_{1}(x)\mathrm{d}x\right].\]
Thanks to (3.4), (3.5) and the equality \(f_{0,0}(x)=f(x,0)\), the \(1\)-form \(\theta_{1}\) can be rewritten as
\[\theta_{1}=\frac{1}{2}\left(1-\frac{1}{\nu_{1}}\right)\left(d-1-\varphi_{1}(x)\frac{\partial_{p}\partial_{y}F(x,0,\varphi_{1}(x))+2\delta_{\nu_{1},2}f(x,0)Q(x,\varphi_{1}(x))}{\partial_{y}F(x,0,\varphi_{1}(x))}+\sum_{\alpha=2}^{n}\frac{\nu_{\alpha}\varphi_{\alpha}(x)}{\varphi_{1}(x)-\varphi_{\alpha}(x)}\right)(\mathrm{d}y-\varphi_{1}(x)\mathrm{d}x)+\frac{1}{2}(\nu_{1}-1)\mathrm{d}y.\]
Let us now pass to \(\eta_{2}\). Put \(w_{\alpha,1}=y^{\frac{1}{\nu_{1}\nu_{\alpha}}}\); again by [7, Lemma 2.8], we have \(\eta(\mathcal{F}\boxtimes\mathcal{F}_{i}^{1}\boxtimes\mathcal{F}_{j}^{\alpha})= a_{i,j}^{\alpha}(x,y)\mathrm{d}x+b_{i,j}^{\alpha}(x,y)\mathrm{d}y\), where
\[a_{i,j}^{\alpha} =-\frac{\left(\partial_{y}(\lambda_{1,i}\lambda_{\alpha,j})-\partial_{x}\lambda_{0}\right)\lambda_{0}}{(\lambda_{1,i}-\lambda_{0})(\lambda_{\alpha,j}-\lambda_{0})}-\frac{\left(\partial_{y}(\lambda_{1,i}\lambda_{0})-\partial_{x}\lambda_{\alpha,j}\right)\lambda_{\alpha,j}}{(\lambda_{1,i}-\lambda_{\alpha,j})(\lambda_{0}-\lambda_{\alpha,j})}-\frac{\left(\partial_{y}(\lambda_{\alpha,j}\lambda_{0})-\partial_{x}\lambda_{1,i}\right)\lambda_{1,i}}{(\lambda_{\alpha,j}-\lambda_{1,i})(\lambda_{0}-\lambda_{1,i})}\] \[=\frac{1}{\nu_{1}y}\left(\frac{\varphi_{1}\varphi_{\alpha}}{\varphi_{1}-\varphi_{\alpha}}+w_{\alpha,1}A_{i,j}^{\alpha}\right)\]
and
\[b_{i,j}^{\alpha} =\frac{\partial_{y}(\lambda_{1,i}\lambda_{\alpha,j})-\partial_{x}\lambda_{0}}{(\lambda_{1,i}-\lambda_{0})(\lambda_{\alpha,j}-\lambda_{0})}+\frac{\partial_{y}(\lambda_{1,i}\lambda_{0})-\partial_{x}\lambda_{\alpha,j}}{(\lambda_{1,i}-\lambda_{\alpha,j})(\lambda_{0}-\lambda_{\alpha,j})}+\frac{\partial_{y}(\lambda_{\alpha,j}\lambda_{0})-\partial_{x}\lambda_{1,i}}{(\lambda_{\alpha,j}-\lambda_{1,i})(\lambda_{0}-\lambda_{1,i})}\] \[=-\frac{1}{\nu_{1}y}\left(\frac{\varphi_{\alpha}}{\varphi_{1}-\varphi_{\alpha}}+w_{\alpha,1}B_{i,j}^{\alpha}\right),\]
where \(A_{i,j}^{\alpha},B_{i,j}^{\alpha}\in\mathbb{C}\{x,w_{\alpha,1}\}\). The 1-form \(\eta_{2}\) being uniform and meromorphic, it follows that the principal part of the Laurent series of \(\eta_{2}\) at \(y=0\) is given by \(\frac{\theta_{2}}{y}\), where
\[\theta_{2} =\sum_{\alpha=2}^{n}\nu_{1}\nu_{\alpha}\left(\frac{\varphi_{1}(x )\varphi_{\alpha}(x)}{\nu_{1}\left(\varphi_{1}(x)-\varphi_{\alpha}(x)\right)} \mathrm{d}x-\frac{\varphi_{\alpha}(x)}{\nu_{1}\left(\varphi_{1}(x)-\varphi_{ \alpha}(x)\right)}\mathrm{d}y\right)\] \[=-(\mathrm{d}y-\varphi_{1}(x)\mathrm{d}x)\,\sum_{\alpha=2}^{n} \frac{\nu_{\alpha}\varphi_{\alpha}(x)}{\varphi_{1}(x)-\varphi_{\alpha}(x)}.\]
Similarly, putting \(w_{\alpha}=y^{\frac{1}{\nu_{\alpha}}}\) and using [7, Lemma 2.8], we obtain that
\[\eta(\mathcal{F}\boxtimes\mathcal{F}_{i}^{\alpha}\boxtimes\mathcal{F}_{j}^{ \alpha})=\frac{1}{\nu_{\alpha}y}\left[\left(-\frac{\varphi_{1}(x)\varphi_{ \alpha}(x)}{\varphi_{1}(x)-\varphi_{\alpha}(x)}+w_{\alpha}\tilde{A}_{i,j}^{ \alpha}(x,w_{\alpha})\right)\mathrm{d}x+\left(\frac{\varphi_{1}(x)}{\varphi_ {1}(x)-\varphi_{\alpha}(x)}+w_{\alpha}\tilde{B}_{i,j}^{\alpha}(x,w_{\alpha}) \right)\mathrm{d}y\right],\]
where \(\tilde{A}_{i,j}^{\alpha},\tilde{B}_{i,j}^{\alpha}\in\mathbb{C}\{x,w_{\alpha}\}\), so that the principal part of the Laurent series of \(\eta_{3}\) at \(y=0\) is given by \(\frac{\theta_{3}}{y}\), where
\[\theta_{3} =\sum_{\alpha=2}^{n}\binom{\nu_{\alpha}}{2}\left(-\frac{\varphi_{ 1}(x)\varphi_{\alpha}(x)}{\nu_{\alpha}\left(\varphi_{1}(x)-\varphi_{\alpha}(x) \right)}\mathrm{d}x+\frac{\varphi_{1}(x)}{\nu_{\alpha}\left(\varphi_{1}(x)- \varphi_{\alpha}(x)\right)}\mathrm{d}y\right)\] \[=\frac{1}{2}\sum_{\alpha=2}^{n}\frac{(\nu_{\alpha}-1)\varphi_{1}(x )\left(\mathrm{d}y-\varphi_{\alpha}(x)\mathrm{d}x\right)}{\varphi_{1}(x)- \varphi_{\alpha}(x)}.\]
Finally, since \((\varphi_{1}-\varphi_{\alpha})(\varphi_{\alpha}-\varphi_{\beta})(\varphi_{\beta }-\varphi_{1})\not\equiv 0\) for all \(\beta>\alpha\geq 2\), [7, Lemma 2.8] implies that the 1-form \(\eta(\mathcal{F}\boxtimes\mathcal{F}_{i}^{\alpha}\boxtimes\mathcal{F}_{j}^{ \beta})\) has no poles along \(y=0\); therefore the same is true for the 1-form \(\eta_{4}\).
As a result, the principal part of the Laurent series of \(\eta(\mathcal{F}\boxtimes\mathcal{W})\) at \(y=0\) is given by \(\frac{\widehat{\theta}}{y}\), where
\[\widehat{\theta} =\theta_{0}+\theta_{1}+\theta_{2}+\theta_{3}\] \[=\frac{1}{6}\Big{(}(\nu_{1}+1)h(x)-(\nu_{1}-1)\psi_{1}(x)\Big{)} \big{(}\mathrm{d}y-\varphi_{1}(x)\mathrm{d}x\big{)}+\frac{1}{2}(\nu_{1}-1) \mathrm{d}y+\frac{1}{6}\sum_{\alpha=1}^{n}\big{(}\nu_{\alpha}-1\big{)}\Big{[} \psi_{\alpha}(x)\big{(}\mathrm{d}y-\varphi_{\alpha}(x)\mathrm{d}x\big{)}+ \big{(}\nu_{\alpha}-2\big{)}\mathrm{d}y\Big{]}\] \[\quad+\frac{1}{2}\sum_{\alpha=2}^{n}\frac{(\nu_{\alpha}-1)\varphi_{ 1}(x)\left(\mathrm{d}y-\varphi_{\alpha}(x)\mathrm{d}x\right)}{\varphi_{1}(x)- \varphi_{\alpha}(x)}\] \[=\frac{1}{6}(\nu_{1}+1)\left[h(x)\Big{(}\mathrm{d}y-\varphi_{1}(x) \mathrm{d}x\Big{)}+(\nu_{1}-1)\mathrm{d}y\right]+\frac{1}{6}\sum_{\alpha=2}^{n} (\nu_{\alpha}-1)\left[\left(\psi_{\alpha}(x)+\frac{3\varphi_{1}(x)}{\varphi_{1} (x)-\varphi_{\alpha}(x)}\right)\big{(}\mathrm{d}y-\varphi_{\alpha}(x)\mathrm{d}x \big{)}+(\nu_{\alpha}-2\big{)}\mathrm{d}y\right]\] \[=\frac{\theta}{6},\]
hence the theorem follows.
Proof of Theorem 3.18.: As in the proof of Theorem 3.13, up to linear conjugation, we can assume that \(\beta\neq 0\) and \(b_{i}\neq 0\) for all \(i\in\{1,\ldots,n\}\). Then, by putting \(r_{0}:=-\frac{\alpha}{\beta}\) and \(r_{i}:=\frac{a_{i}}{b_{i}}\) for \(i\in\{1,\ldots,n\}\), [7, Lemma 3.5] implies the existence of a constant \(c\in\mathbb{C}^{*}\) such that
\[-A(1,z)=p_{0}B(1,z)-c\prod_{i=0}^{n}(z-r_{i})^{\nu_{i}}.\]
Since \(A,B\in\mathbb{C}[x,y]_{d-1}\), the differential equation (3.3) describing \(\operatorname{Leg}\!\mathcal{H}\) in the affine chart \((p,q)\) then becomes
\[x^{d}\left(p-\frac{q}{x}-r_{0}\right)\left((p-p_{0})B(1,p-\frac{q}{x})+c\prod_{ i=0}^{n}(p-\frac{q}{x}-r_{i})^{\nu_{i}}\right)=0,\qquad\text{with}\qquad x= \frac{\mathrm{d}q}{\mathrm{d}p}.\]
Put \(\ddot{x}:=q\), \(\ddot{y}:=p-p_{0}\) and \(\ddot{p}:=\frac{\mathrm{d}\ddot{y}}{\mathrm{d}\ddot{x}}=\frac{1}{x}\); in this new coordinate system \(D_{\ell}=\{\ddot{y}=0\}\) and \(\operatorname{Leg}\mathscr{H}=\operatorname{Leg}\ell\boxtimes\operatorname{Leg}\mathcal{H}\) is given by the differential equation \((\ddot{y}+p_{0}-\ddot{p}\ddot{x}-r_{0})F(\ddot{x},\ddot{y},\ddot{p})=0\), where
\[F(\ddot{x},\ddot{y},\ddot{p})=\ddot{y}B(1,\ddot{y}+p_{0}-\ddot{p}\ddot{x})+c \prod_{i=0}^{n}(\ddot{y}+p_{0}-\ddot{p}\ddot{x}-r_{i})^{\nu_{i}}.\]
We have \(F(\ddot{x},0,\ddot{p})=c(-\ddot{x})^{d-1}\prod_{i=0}^{n}\left(\ddot{p}-\varphi _{i}(\ddot{x})\right)^{\nu_{i}}\), where \(\varphi_{i}(\ddot{x})=\frac{p_{0}-r_{i}}{\dot{x}}\). Furthermore the radial foliation \(\operatorname{Leg}\!\ell\) is described by \(\ddot{\omega}_{0}=\mathrm{d}\dot{y}-\left(\varphi_{0}(\ddot{x})+\frac{\ddot{y }}{\dot{x}}\right)\mathrm{d}\dot{x}\); in particular we have \(D_{\ell}\subset\operatorname{Tang}(\operatorname{Leg}\!\ell,\operatorname{ Leg}\!\mathcal{H})\). Note that if \(\nu_{i}\geq 2\), then \(\partial_{\ddot{y}}F\left(\ddot{x},0,\varphi_{i}(\ddot{x})\right)=B(1,r_{i})\neq 0\); since \(\partial_{\ddot{p}}F\left(\ddot{x},0,\varphi_{i}(\ddot{x})\right)\not\equiv 0\) if \(\nu_{i}=1\), it follows that the surface
\[S_{\operatorname{Leg}\!\mathcal{H}}:=\left\{(\ddot{x},\ddot{y},\ddot{p})\in \mathbb{PT}^{*}\dot{\mathbb{P}}_{\mathbb{C}}^{2}\mid F\left(\ddot{x},\ddot{y}, \ddot{p}\right)=0\right\}\]
is smooth along \(D_{\ell}=\{\ddot{y}=0\}\). Therefore, according to Theorem 3.19, the curvature of \(\operatorname{Leg}\!\mathcal{H}\) is holomorphic on \(D_{\ell}=\{\ddot{y}=0\}\) if and only if
\[(\nu_{0}+1)\varphi_{0}(\ddot{x})h(\ddot{x})+\sum_{i=1}^{n}(\nu_{i}-1)\varphi_ {i}(\ddot{x})\left(\psi_{i}(\ddot{x})+\frac{3\varphi_{0}(\ddot{x})}{\varphi_{0 }(\ddot{x})-\varphi_{i}(\ddot{x})}\right)\equiv 0\]
and
\[\frac{\mathrm{d}}{\mathrm{d}\ddot{x}}\left((\nu_{0}+1)h(\ddot{x})+\sum_{i=1}^ {n}(\nu_{i}-1)\left(\psi_{i}(\ddot{x})+\frac{3\varphi_{0}(\ddot{x})}{\varphi_ {0}(\ddot{x})-\varphi_{i}(\ddot{x})}\right)\right)\equiv 0,\]
where
\[h(\ddot{x})=\frac{1}{\nu_{0}}\left[(\nu_{0}-1)\left(d-1-\varphi_{0}(\ddot{x}) \frac{\partial_{\ddot{p}}\partial_{\ddot{y}}F\left(\ddot{x},0,\varphi_{0}( \ddot{x})\right)-2c\delta_{\nu_{0},2}(-\ddot{x})^{d-2}\prod_{j=1}^{n}\left( \varphi_{0}(\ddot{x})-\varphi_{j}(\ddot{x})\right)^{\nu_{j}}}{\partial_{ \ddot{y}}F\left(\ddot{x},0,\varphi_{0}(\ddot{x})\right)}\right)-(2\nu_{0}+1) \sum_{j=1}^{n}\frac{\nu_{j}\varphi_{j}(\ddot{x})}{\varphi_{0}(\ddot{x})- \varphi_{j}(\ddot{x})}\right]\]
and, for all \(i\in\{1,\ldots,n\}\) such that \(\nu_{i}\geq 2\),
\[\psi_{i}(\ddot{x})=\frac{1}{\nu_{i}}\left[(\nu_{i}-2)\left(d-1-\varphi_{i}( \ddot{x})\frac{\partial_{\ddot{p}}\partial_{\ddot{y}}F\left(\ddot{x},0,\varphi_{ i}(\ddot{x})\right)}{\partial_{\ddot{y}}F\left(\ddot{x},0,\varphi_{i}(\ddot{x} )\right)}\right)-2(\nu_{i}+1)\sum_{j=0,j\neq i}^{n}\frac{\nu_{j}\varphi_{j}( \ddot{x})}{\varphi_{i}(\ddot{x})-\varphi_{j}(\ddot{x})}\right].\]
Now, if \(\nu_{i}\geq 2\) then \(\partial_{\ddot{p}}\partial_{\ddot{y}}F\left(\ddot{x},0,\varphi_{i}(\ddot{x}) \right)=-\ddot{x}\left(\partial_{y}B(1,r_{i})+2c\delta_{\nu_{i},2}\prod_{j=0,j \neq i}^{n}\left(r_{i}-r_{j}\right)^{\nu_{j}}\right)\). From this we deduce that
\[h(\ddot{x})=h_{0}:=\frac{1}{\nu_{0}}\left[(\nu_{0}-1)\left(d-1+\frac{(p_{0}-r_{ 0})\partial_{y}B(1,r_{0})}{B(1,r_{0})}\right)+(2\nu_{0}+1)\sum_{j=1}^{n}\frac{ \nu_{j}(p_{0}-r_{j})}{r_{0}-r_{j}}\right]\]
and
\[\psi_{i}(\ddot{x})=\psi_{i}:=\frac{1}{\nu_{i}}\left[(\nu_{i}-2)\left(d-1+\frac{ (p_{0}-r_{i})\partial_{y}B(1,r_{i})}{B(1,r_{i})}\right)+2(\nu_{i}+1)\sum_{j=0,j \neq i}^{n}\frac{\nu_{j}(p_{0}-r_{j})}{r_{i}-r_{j}}\right].\]
Thus \(K(\operatorname{Leg}\!\mathcal{H})\) is holomorphic along \(D_{\ell}=\{\ddot{y}=0\}\) if and only if
\[(\nu_{0}+1)(p_{0}-r_{0})h_{0}+\sum_{i=1}^{n}(\nu_{i}-1)(p_{0}-r_{i})\left(\psi_{ i}+\tfrac{3(p_{0}-r_{0})}{r_{i}-r_{0}}\right)=0.\]
Moreover, we have (_cf._ proof of [7, Theorem 3.1])
\[\sum_{j=1}^{n}\frac{\nu_{j}(p_{0}-r_{j})}{r_{0}-r_{j}}=\frac{\left|\begin{array}{cc}\partial_{x}P_{0}(1,r_{0};-r_{0},1)&A(1,r_{0})\\ \partial_{y}P_{0}(1,r_{0};-r_{0},1)&B(1,r_{0})\end{array}\right|}{B(1,r_{0})P_{0}(1,r_{0};-r_{0},1)},\qquad\sum_{j=0,j\neq i}^{n}\frac{\nu_{j}(p_{0}-r_{j})}{r_{i}-r_{j}}=\frac{\left|\begin{array}{cc}\partial_{x}P_{i}(1,r_{i};r_{i},1)&A(1,r_{i})\\ \partial_{y}P_{i}(1,r_{i};r_{i},1)&B(1,r_{i})\end{array}\right|}{B(1,r_{i})P_{i}(1,r_{i};r_{i},1)}\quad(\text{for }i=1,\ldots,n)\]
and, for all \(i\in\{0,\ldots,n\}\) such that \(\nu_{i}\geq 2,\)
\[(d-1)B(1,r_{i})+(p_{0}-r_{i})\partial_{y}B(1,r_{i})=\partial_{x}B(1,r_{i})- \partial_{y}A(1,r_{i}).\]
By the definition of the polynomials \(Q_{i}\)'s, it follows that
\[h_{0}=\frac{Q_{0}(1,r_{0};-r_{0},1)}{\nu_{0}B(1,r_{0})P_{0}(1,r_{0};-r_{0},1)} \qquad\text{ and }\qquad\psi_{i}=\frac{Q_{i}(1,r_{i};r_{i},1)}{\nu_{i}B(1,r_{i})P_{i}(1,r_{i };r_{i},1)}.\]
As a consequence, \(K(\operatorname{Leg}\!\mathcal{H})\) is holomorphic on \(D_{\ell}=\{\ddot{y}=0\}\) if and only if
\[\left(1+\frac{1}{\nu_{0}}\right)\frac{(p_{0}-r_{0})Q_{0}(1,r_{0};-r_{0},1)}{ B(1,r_{0})P_{0}(1,r_{0};-r_{0},1)}+\sum_{i=1}^{n}\left(1-\frac{1}{\nu_{i}} \right)(p_{0}-r_{i})\left(\frac{Q_{i}(1,r_{i};r_{i},1)}{B(1,r_{i})P_{i}(1,r_{i };r_{i},1)}+\frac{3\nu_{i}(p_{0}-r_{0})}{r_{i}-r_{0}}\right)=0.\]
This ends the proof of the theorem.
**Corollary 3.20**.: _Let \(\mathcal{H}=\ell\boxtimes\mathcal{H}\) be a homogeneous pre-foliation of co-degree \(1\) and degree \(d\geq 3\) on \(\mathbb{P}^{2}_{\mathbb{C}}\), defined by the \(1\)-form_
\[\omega=(\alpha x+\beta y)\left(A(x,y)\mathrm{d}x+B(x,y)\mathrm{d}y\right), \quad A,B\in\mathbb{C}[x,y]_{d-1},\ \ \gcd(A,B)=1.\]
_Assume that the line \(\ell\) is not invariant by \(\mathcal{H}\) and that the fiber \(\underline{\mathcal{G}}_{\mathcal{H}}^{-1}\left(\underline{\mathcal{G}}_{ \mathcal{H}}([-\alpha:\beta])\right)\) does not contain any non-fixed critical point of \(\underline{\mathcal{G}}_{\mathcal{H}}\). Then the curvature of \(\operatorname{Leg}\!\mathcal{H}\) is holomorphic on \(D_{\ell}=\mathcal{G}_{\mathcal{H}}(\ell)\) if and only if \(Q(\beta,-\alpha;\alpha,\beta)=0,\) where_
\[Q(x,y;\alpha,\beta):=\left|\begin{array}{cc}\frac{\partial P}{\partial x}&A( \beta,-\alpha)\\ \frac{\partial P}{\partial y}&B(\beta,-\alpha)\end{array}\right|\qquad\text{ and }\qquad P(x,y;\alpha,\beta):=\frac{\left|\begin{array}{cc}A(x,y)&A( \beta,-\alpha)\\ B(x,y)&B(\beta,-\alpha)\end{array}\right|}{\alpha x+\beta y}. \tag{3.6}\]
**Remark 3.21**.: _In particular, in degree \(d=3\), the curvature of \(\operatorname{Leg}\!\mathcal{H}\) is holomorphic along \(D_{\ell}\) if and only if the line with equation \(A(\beta,-\alpha)x+B(\beta,-\alpha)y=0\) is invariant by \(\mathcal{H}\), or equivalently, if and only if \(\underline{\mathcal{G}}_{\mathcal{H}}\left(\underline{\mathcal{G}}_{ \mathcal{H}}([-\alpha:\beta])\right)=\underline{\mathcal{G}}_{\mathcal{H}}([- \alpha:\beta])\)._
_Indeed, putting \(a=A(\beta,-\alpha)\), \(b=B(\beta,-\alpha)\) and \(P(x,y;\alpha,\beta)=f(\alpha,\beta)x+g(\alpha,\beta)y\) we obtain_
\[Q(\beta,-\alpha;\alpha,\beta)=f(\alpha,\beta)b-g(\alpha,\beta)a=P(b,-a;\alpha, \beta)=-\frac{bA(b,-a)-aB(b,-a)}{\beta a-\alpha b}=-\frac{\operatorname{C}_{ \mathcal{H}}(b,-a)}{\operatorname{C}_{\mathcal{H}}(\beta,-\alpha)},\]
_where \(\operatorname{C}_{\mathcal{H}}=xA+yB\) denotes the tangent cone of \(\mathcal{H}\) at the origin \(O,\) see [4, Section 2]._
Combining Corollaries 3.12 and 3.20, we obtain:
**Corollary 3.22**.: _Let \(\mathcal{H}=\ell\boxtimes\mathcal{H}\) be a homogeneous pre-foliation of co-degree \(1\) and degree \(d\geq 3\) on \(\mathbb{P}^{2}_{\mathbb{C}}\), defined by the \(1\)-form_
\[\omega=\left(\alpha x+\beta y\right)\left(A(x,y)\mathrm{d}x+B(x,y)\mathrm{d}y \right),\quad A,B\in\mathbb{C}[x,y]_{d-1},\ \ \gcd(A,B)=1.\]
_Assume that the homogeneous foliation \(\mathcal{H}\) is convex and that the line \(\ell\) is not invariant by \(\mathcal{H}\). Then the \(d\)-web \(\mathrm{Leg}\mathcal{H}\) is flat if and only if \(Q(\beta,-\alpha;\alpha,\beta)=0,\) where \(Q\) is the polynomial given by (3.6)._
Here are two examples that will be useful in Section 5.
**Example 3.23**.: Let us consider the homogeneous foliation \(\mathcal{H}^{d-1}_{0}\) defined in the affine chart \(z=1\) by the \(1\)-form
\[\omega_{0}^{d-1}=(d-2)y^{d-1}\mathrm{d}x+x\left(x^{d-2}-(d-1)y^{d-2}\right) \mathrm{d}y.\]
We know from [4, Example 6.5] that \(\mathcal{H}^{d-1}_{0}\) is convex, of type \(1\cdot\mathrm{R}_{d-2}+(d-2)\cdot\mathrm{R}_{1}\) and with inflection divisor
\[\mathrm{I}_{\mathcal{H}^{d-1}_{0}}=\mathrm{I}^{\mathrm{inv}}_{\mathcal{H}^{d-1}_{0}}=-(d-1)(d-2)xzy^{d-1}\left(y^{d-2}-x^{d-2}\right)^{2}.\]
If \(\ell\) is one of the invariant lines of \(\mathcal{H}^{d-1}_{0}\), _i.e._ if \(\ell\in\{xyz(y-\zeta^{k}x)=0,\)\(k=0,\ldots,d-3\},\) where \(\zeta=\exp\left(\frac{2\mathrm{i}\pi}{d-2}\right),\) then the \(d\)-web \(\mathrm{Leg}(\ell\boxtimes\mathcal{H}^{d-1}_{0})\) is flat by Corollary 3.11.
If \(\ell=(y-\rho x=0)\) is not invariant by \(\mathcal{H}^{d-1}_{0}\), then the \(d\)-web \(\mathrm{Leg}(\ell\boxtimes\mathcal{H}^{d-1}_{0})\) is flat if and only if \(\rho^{d-2}=\frac{1}{2(d-2)}\), _i.e._ if and only if \(\ell\in\left\{y-\rho_{0}\zeta^{k}x=0,\)\(k=0,\ldots,d-3\right\},\) where \(\rho_{0}=\sqrt[d-2]{\frac{1}{2d-4}}.\) Indeed, with the notations of Corollary 3.20, we have
\[Q(x,y;-\rho,1)=\left(1-(d-1)\rho^{d-2}\right)\frac{\partial P}{\partial x}-(d -2)\rho^{d-1}\frac{\partial P}{\partial y}\]
and
\[P(x,y;-\rho,1)=-(d-2)\left((d-1)(\rho y)^{d-2}-\frac{y^{d-1}-(\rho x)^{d-1}}{ y-\rho x}\right)=-(d-2)\left((d-1)(\rho y)^{d-2}-\sum_{i=0}^{d-2}\rho^{i}x^{i}y^{d-2 -i}\right),\]
so that, according to Corollary 3.22, the flatness of \(\mathrm{Leg}(\ell\boxtimes\mathcal{H}^{d-1}_{0})\) is characterized by
\[0=Q(1,\rho;-\rho,1)=\frac{1}{2}(d-1)(d-2)^{2}\rho^{d-2}\left(\rho^{d-2}-1 \right)\left((2d-4)\rho^{d-2}-1\right)\Longleftrightarrow\rho^{d-2}=\frac{1} {2(d-2)}.\]
In all cases, for any line \(\ell\subset\mathbb{P}^{2}_{\mathbb{C}}\) such that \(O\in\ell\) or \(\ell=L_{\infty},\) the \(d\)-web \(\mathrm{Leg}(\ell\boxtimes\mathcal{H}^{d-1}_{0})\) is flat if and only if, up to linear conjugation, \(\ell=L_{\infty}\) or \(\ell\in\{xy(y-x)(y-\rho_{0}x)=0\}\). Indeed, putting \(\varphi(x,y)=(x,\zeta^{k}y)\), we have
\[\varphi^{*}\left(\left(y-\zeta^{k}x\right)\omega_{0}^{d-1}\right)=\zeta^{2k} \big{(}y-x\big{)}\omega_{0}^{d-1}\qquad\text{ and }\qquad\varphi^{*}\left(\left(y-\rho_{0}\zeta^{k}x\right)\omega_{0}^{d-1} \right)=\zeta^{2k}\big{(}y-\rho_{0}x\big{)}\omega_{0}^{d-1}.\]
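The last computation lends itself to a quick symbolic sanity check. The following sketch (not part of the original argument; it assumes the Python library sympy) evaluates the polynomial \(Q\) of Corollary 3.20 for \(d=4\) and recovers the factor \((2d-4)\rho^{d-2}-1\):

```python
import sympy as sp

x, y, rho = sp.symbols('x y rho')
d = 4  # Example 3.23 in the first non-trivial degree

# omega_0^{d-1} = A dx + B dy
A = (d - 2) * y**(d - 1)
B = x * (x**(d - 2) - (d - 1) * y**(d - 2))

# line ell = {y - rho x = 0}: alpha = -rho, beta = 1, so (beta, -alpha) = (1, rho)
A0, B0 = A.subs({x: 1, y: rho}), B.subs({x: 1, y: rho})

# P and Q as defined in (3.6)
P = sp.cancel((A * B0 - B * A0) / (y - rho * x))
Q = sp.diff(P, x) * B0 - sp.diff(P, y) * A0

# flatness criterion of Corollary 3.22: Q(1, rho; -rho, 1) = 0
print(sp.factor(Q.subs({x: 1, y: rho})))
# equals 6*rho**2*(rho**2 - 1)*(4*rho**2 - 1); for ell not invariant this forces rho**2 = 1/4
```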
**Example 3.24**.: For \(d\geq 4\), let \(\mathcal{H}^{d-1}_{4}\) be the homogeneous foliation defined in the affine chart \(z=1\) by the \(1\)-form
\[\omega_{4}^{d-1}=y(\sigma_{d}x^{d-2}-y^{d-2})\mathrm{d}x+x(\sigma_{d}\,y^{d-2 }-x^{d-2})\mathrm{d}y,\qquad\text{ where }\sigma_{d}=1+\tfrac{2}{d-3}.\]
This foliation is convex of type \((d-2)\cdot\mathrm{R}_{2}\); indeed, a straightforward computation shows that
\[\mathrm{I}_{\mathcal{H}^{d-1}_{4}}=\mathrm{I}^{\mathrm{inv}}_{\mathcal{H}^{d-1 }_{4}}=\sigma_{d}(\sigma_{d}-1)xyz\big{(}x^{d-2}+y^{d-2}\big{)}^{3}.\]
Let \(\ell\) be a line of \(\mathbb{P}^{2}_{\mathbb{C}}\) such that \(O\in\ell\) or \(\ell=L_{\infty}.\) If \(\ell\) is invariant by \(\mathcal{H}^{d-1}_{4},\) then Corollary 3.11 ensures that \(\operatorname{Leg}(\ell\boxtimes\mathcal{H}^{d-1}_{4})\) is flat, and we have \(\ell\in\{xyz(y-\xi^{2k+1}\,x)=0,\,k=0,\ldots,d-3\},\) where \(\xi=\exp\big{(}\frac{\mathrm{i}\pi}{d-2}\big{)}.\) If \(\ell\) is not invariant by \(\mathcal{H}^{d-1}_{4},\) then \(\ell=\{y-\rho\,x=0\}\) with \(\rho(\rho^{d-2}+1)\neq 0;\) by applying Corollary 3.22, we obtain that the \(d\)-web \(\operatorname{Leg}(\ell\boxtimes\mathcal{H}^{d-1}_{4})\) is flat if and only if
\[0=Q(1,\rho;-\rho,1)=-\sigma_{d}(d-2)(\rho^{d-2}+1)^{2}(\rho^{d-2}-1),\]
hence if and only if \(\rho^{d-2}=1,\) which is equivalent to \(\ell\in\{y-\xi^{2k}x=0,\,k=0,\ldots,d-3\}.\)
Note that, in all cases, the \(d\)-web \(\operatorname{Leg}(\ell\boxtimes\mathcal{H}^{d-1}_{4})\) is flat if and only if, up to linear conjugation, \(\ell=L_{\infty}\) or \(\ell\in\{x(y-x)(y-\xi x)=0\}.\) Indeed, putting \(\varphi(x,y)=(y,x)\) and \(\psi(x,y)=(x,\xi^{2k}y),\) we have
\[\varphi^{*}(y\omega_{4}^{d-1})=x\omega_{4}^{d-1},\quad\quad\psi^{*}\left((y- \xi^{2k}x)\omega_{4}^{d-1}\right)=\xi^{4k}\big{(}y-x)\omega_{4}^{d-1},\quad\quad \psi^{*}\left((y-\xi^{2k+1}x)\omega_{4}^{d-1}\right)=\xi^{4k}\big{(}y-\xi x \big{)}\omega_{4}^{d-1}.\]
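The same kind of symbolic check (again a sketch assuming sympy, not part of the original text) applies to this example with \(d=4\), where \(\sigma_{4}=3\):

```python
import sympy as sp

x, y, rho = sp.symbols('x y rho')
d = 4
sigma = sp.Rational(d - 1, d - 3)   # sigma_d = 1 + 2/(d-3) = 3 for d = 4

A = y * (sigma * x**(d - 2) - y**(d - 2))
B = x * (sigma * y**(d - 2) - x**(d - 2))
A0, B0 = A.subs({x: 1, y: rho}), B.subs({x: 1, y: rho})

P = sp.cancel((A * B0 - B * A0) / (y - rho * x))
Q = sp.diff(P, x) * B0 - sp.diff(P, y) * A0
print(sp.factor(Q.subs({x: 1, y: rho})))
# equals -6*(rho**2 + 1)**2*(rho**2 - 1), matching -sigma_d*(d-2)*(rho^{d-2}+1)^2*(rho^{d-2}-1)
```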
**Corollary 3.25**.: _Let \(d\geq 3\) be an integer and let \(\mathcal{H}\) be a homogeneous foliation of degree \(d-1\) on \(\mathbb{P}^{2}_{\mathbb{C}}\) defined by the \(1\)-form_
\[\omega=A(x,y)\mathrm{d}x+B(x,y)\mathrm{d}y,\quad A,B\in\mathbb{C}[x,y]_{d-1}, \ \ \gcd(A,B)=1.\]
_Assume that \(\mathcal{H}\) admits a transverse inflection line \(\ell=(\alpha x+\beta y=0)\) of order \(\nu-1\). Assume moreover that \([-\alpha:\beta]\in\mathbb{P}^{1}_{\mathbb{C}}\) is the only non-fixed critical point of \(\underline{G}_{\mathcal{H}}\) in its fiber \(\underline{G}_{\mathcal{H}}^{-1}(\underline{G}_{\mathcal{H}}([-\alpha:\beta])).\) Put \(\mathcal{H}:=\ell\boxtimes\mathcal{H}.\) Then the curvature of \(\operatorname{Leg}\mathcal{H}\) is holomorphic along \(D_{\ell}\) if and only if \(Q(\beta,-\alpha;\alpha,\beta)=0,\) where_
\[Q(x,y;\alpha,\beta):=(\nu-1)\left(\frac{\partial B}{\partial x}-\frac{\partial A}{\partial y}\right)P(x,y;\alpha,\beta)+(2\nu+1)\left|\begin{array}{cc}\frac{\partial P}{\partial x}&A(x,y)\\ \frac{\partial P}{\partial y}&B(x,y)\end{array}\right|\quad\text{and}\quad P(x,y;\alpha,\beta):=\frac{\left|\begin{array}{cc}A(x,y)&A(\beta,-\alpha)\\ B(x,y)&B(\beta,-\alpha)\end{array}\right|}{(\alpha x+\beta y)^{\nu}}.\]
**Corollary 3.26**.: _Let \(d\geq 3\) be an integer and let \(\mathcal{H}\) be a homogeneous foliation of degree \(d-1\) on \(\mathbb{P}^{2}_{\mathbb{C}}\) defined by the \(1\)-form_
\[\omega=A(x,y)\mathrm{d}x+B(x,y)\mathrm{d}y,\quad A,B\in\mathbb{C}[x,y]_{d-1}, \ \ \gcd(A,B)=1.\]
_Assume that \(\mathcal{H}\) has a transverse inflection line \(\ell=(\alpha x+\beta y=0)\) of maximal order \(d-2\). Put \(\mathcal{H}:=\ell\boxtimes\mathcal{H}.\) Then the curvature of \(\operatorname{Leg}\mathcal{H}\) is holomorphic along \(D_{\ell}\) if and only if the \(2\)-form \(\mathrm{d}\omega\) vanishes on the line \(\ell.\)_
**Remark 3.27**.: When \(d\geq 4\) the condition (\(\mathrm{d}\omega\) vanishes on the line \(\ell\)) also expresses the holomorphy of the curvature of \(\operatorname{Leg}\mathcal{H}\) along \(D_{\ell},\) thanks to [4, Theorem 3.8]. Thus Corollary 3.26 establishes the equivalence between the holomorphy on \(D_{\ell}\) of \(K(\operatorname{Leg}(\ell\boxtimes\mathcal{H}))\) and that of \(K(\operatorname{Leg}\mathcal{H}).\)
## 4 Flatness and homogeneous pre-foliations \(\ell\boxtimes\mathcal{H}\) of co-degree \(1\) such that \(\deg\mathcal{T}_{\mathcal{H}}=2\)
In this section we propose to classify, up to automorphism of \(\mathbb{P}^{2}_{\mathbb{C}}\), all homogeneous pre-foliations \(\mathcal{H}=\ell\boxtimes\mathcal{H}\) of co-degree \(1\) and degree \(d\geq 3\) on \(\mathbb{P}^{2}_{\mathbb{C}}\) such that \(\deg\mathcal{T}_{\mathcal{H}}=2\) and the \(d\)-web \(\operatorname{Leg}\mathcal{H}\) is flat. The equality \(\deg\mathcal{T}_{\mathcal{H}}=2\) holds if and only if the type \(\mathcal{T}_{\mathcal{H}}\) of \(\mathcal{H}\) is of one of the following three forms: \(2\cdot\operatorname{R}_{d-2}\), \(2\cdot\operatorname{T}_{d-2}\), \(1\cdot\operatorname{R}_{d-2}+1\cdot\operatorname{T}_{d-2}\). According to [4, Proposition 4.1], every homogeneous foliation of type \(2\cdot\operatorname{R}_{d-2}\) is linearly conjugate to the convex foliation \(\mathcal{H}_{1}^{d-1}\) defined by the \(1\)-form
\[\omega_{1}^{d-1}=y^{d-1}\mathrm{d}x-x^{d-1}\mathrm{d}y.\]
The homogeneous foliations of type \(2\cdot\operatorname{T}_{d-2}\), resp. \(1\cdot\operatorname{R}_{d-2}+1\cdot\operatorname{T}_{d-2}\), are given, up to linear conjugation, by
\[\omega_{2}^{d-1}(\lambda,\mu)=(x^{d-1}+\lambda y^{d-1})\mathrm{d}x +(\mu x^{d-1}-y^{d-1})\mathrm{d}y,\quad\text{where }\lambda,\mu\in\mathbb{C}, \operatorname{with}\lambda\mu\neq-1,\] \[\text{resp. }\omega_{3}^{d-1}(\lambda)=(x^{d-1}+\lambda y^{d-1}) \mathrm{d}x+x^{d-1}\mathrm{d}y,\quad\text{where }\lambda\in\mathbb{C}^{*},\]
_cf._ proof of [4, Proposition 4.1]. We will denote by \(\mathcal{H}_{2}^{d-1}(\lambda,\mu)\), resp. \(\mathcal{H}_{3}^{d-1}(\lambda)\), the foliation defined by \(\omega_{2}^{d-1}(\lambda,\mu)\), resp. \(\omega_{3}^{d-1}(\lambda)\).
In the following three lemmas, \(\ell\) denotes a line of \(\mathbb{P}^{2}_{\mathbb{C}}\) such that \(O\in\ell\) or \(\ell=L_{\infty}\).
**Lemma 4.1**.: The \(d\)-web \(\operatorname{Leg}(\ell\boxtimes\mathcal{H}_{1}^{d-1})\) is flat if and only if, up to linear conjugation, \(\ell=L_{\infty}\) or \(\ell\in\{x(y-x)(y-\xi x)=0\},\) where \(\xi=\exp\big{(}\frac{\mathrm{i}\pi}{d-2}\big{)}\).
Proof.: Note first of all that the foliation \(\mathcal{H}_{1}^{d-1}\) has inflection divisor
\[\operatorname{I}_{\mathcal{H}_{1}^{d-1}}=\operatorname{I}_{\mathcal{H}_{1}^{d- 1}}^{\mathrm{inv}}=(d-1)zx^{d-1}y^{d-1}\big{(}y^{d-2}-x^{d-2}\big{)}.\]
\(i\). If \(\ell\) is invariant by \(\mathcal{H}_{1}^{d-1}\), then \(\ell\in\{xyz(y-\xi^{2k}x)=0,\)\(k=0,\ldots,d-3\}\) and the \(d\)-web \(\operatorname{Leg}(\ell\boxtimes\mathcal{H}_{1}^{d-1})\) is flat (Corollary 3.11).
\(ii\). Assume that \(\ell\) is not invariant by \(\mathcal{H}_{1}^{d-1}\); then \(\ell=(y-\rho x=0)\) with \(\rho(\rho^{d-2}-1)\neq 0\). According to Corollary 3.22, the \(d\)-web \(\operatorname{Leg}(\ell\boxtimes\mathcal{H}_{1}^{d-1})\) is flat if and only if \(Q(1,\rho;-\rho,1)=0,\) where
\[Q(x,y;-\rho,1)=-\frac{\partial P}{\partial x}-\rho^{d-1}\frac{\partial P}{ \partial y}\qquad\text{and}\qquad P(x,y;-\rho,1)=-\frac{y^{d-1}-(\rho x)^{d-1 }}{y-\rho x}=-\sum_{i=0}^{d-2}\rho^{i}x^{i}y^{d-2-i}.\]
Thus \(Q(1,\rho;-\rho,1)=\frac{1}{2}(d-1)(d-2)\rho^{d-2}(\rho^{d-2}+1)\), and the flatness of \(\operatorname{Leg}(\ell\boxtimes\mathcal{H}_{1}^{d-1})\) is equivalent to \(\rho^{d-2}=-1\) and therefore to \(\ell\in\{y-\xi^{2k+1}x=0,\)\(k=0,\ldots,d-3\}\).
In the two cases considered, \(\operatorname{Leg}(\ell\boxtimes\mathcal{H}_{1}^{d-1})\) is flat if and only if, up to conjugation, \(\ell=L_{\infty}:=(z=0)\) or \(\ell\in\{x(y-x)(y-\xi x)=0\}.\) Indeed, putting \(\varphi_{1}(x,y)=(y,x)\) and \(\varphi_{2}(x,y)=(x,\xi^{2k}y)\), we have
\[\varphi_{1}^{*}(y\omega_{1}^{d-1})=-x\omega_{1}^{d-1},\quad\varphi_{2}^{*} \big{(}(y-\xi^{2k}x)\omega_{1}^{d-1}\big{)}=\xi^{4k}(y-x)\omega_{1}^{d-1},\quad \varphi_{2}^{*}\Big{(}(y-\xi^{2k+1}x)\omega_{1}^{d-1}\Big{)}=\xi^{4k}(y-\xi x )\omega_{1}^{d-1}.\]
**Lemma 4.2**.: The \(d\)-web \(\operatorname{Leg}\big{(}\ell\boxtimes\mathcal{H}_{2}^{d-1}(\lambda,\mu)\big{)}\) is flat if and only if, up to linear conjugation, one of the following cases occurs:
1. \(\ell=L_{\infty}\) and \(d=3\);
2. \(\ell=L_{\infty}\), \(d\geq 4\) and \(\lambda=\mu=0\);
3. \(\ell=(x=0)\) and \(\lambda=\mu=0\);
4. \(\ell=(y-x=0),\)\(d\geq 4\) and \((\lambda,\mu)=\big{(}\frac{3}{d},-\frac{3}{d}\big{)}\);
5. \(\ell=(y-\xi^{\prime}x=0),\)\(d\geq 4\) and \((\lambda,\mu)=\Big{(}\frac{3\xi^{\prime}}{d},-\frac{3}{d\xi^{\prime}}\Big{)}\), where \(\xi^{\prime}=\exp\big{(}\frac{i\pi}{d}\big{)}\).
Proof. -- We have \(\omega_{2}^{d-1}(\lambda,\mu)=A(x,y)\mathrm{d}x+B(x,y)\mathrm{d}y\), where \(A(x,y)=x^{d-1}+\lambda y^{d-1}\) and \(B(x,y)=\mu x^{d-1}-y^{d-1}\); an immediate computation shows that
\[\mathrm{I}_{\mathcal{H}_{2}^{d-1}(\lambda,\mu)}^{\mathrm{inv}}=z(x^{d}+\mu x^{d- 1}y+\lambda xy^{d-1}-y^{d})\qquad\qquad\text{ and }\qquad\qquad\mathrm{I}_{\mathcal{H}_{2}^{d-1}( \lambda,\mu)}^{\mathrm{tr}}=x^{d-2}y^{d-2}.\]
**1.** If \(\ell=L_{\infty}\) and \(d=3\), then the web \(\mathrm{Leg}(\ell\boxtimes\mathcal{H}_{2}^{2}(\lambda,\mu))\) is flat by Corollary 3.2.
**2.** Assume that \(\ell=L_{\infty}\) and \(d\geq 4\). Then, according to [4, Theorem 3.1 and 3.8], the web \(\mathrm{Leg}(\mathcal{H}_{2}^{d-1}(\lambda,\mu))\) is flat if and only if \(\mathrm{d}\big{(}\omega_{2}^{d-1}(\lambda,\mu)\big{)}\) vanishes on the two lines \(xy=0\). Now,
\[\mathrm{d}\big{(}\omega_{2}^{d-1}(\lambda,\mu)\big{)}\Big{|}_{x=0}=-(d-1) \lambda y^{d-2}\mathrm{d}x\wedge\mathrm{d}y\qquad\text{ and }\qquad\mathrm{d}\big{(}\omega_{2}^{d-1}(\lambda,\mu)\big{)} \Big{|}_{y=0}=(d-1)\mu x^{d-2}\mathrm{d}x\wedge\mathrm{d}y.\]
Therefore \(\mathrm{Leg}(\mathcal{H}_{2}^{d-1}(\lambda,\mu))\) is flat if and only if \(\lambda=\mu=0\); hence the same holds for \(\mathrm{Leg}(\ell\boxtimes\mathcal{H}_{2}^{d-1}(\lambda,\mu))\) (Theorem 3.1).
**3.** Let us consider the case where \(\ell\in\{xy=0\}\). Up to permuting the coordinates \(x\) and \(y\), we can assume that \(\ell=(x=0)\). According to Theorem 3.7, the \(d\)-web \(\mathrm{Leg}(\ell\boxtimes\mathcal{H}_{2}^{d-1}(\lambda,\mu))\) is flat if and only if its curvature is holomorphic on \(\mathcal{G}_{\mathcal{H}_{2}^{d-1}(\lambda,\mu)}(\{xy=0\})\). Now, on the one hand, \(K(\mathrm{Leg}(\ell\boxtimes\mathcal{H}_{2}^{d-1}(\lambda,\mu)))\) is holomorphic on \(D_{\ell}=\mathcal{G}_{\mathcal{H}_{2}^{d-1}(\lambda,\mu)}(\ell)\) if and only if \(\mathrm{d}\big{(}\omega_{2}^{d-1}(\lambda,\mu)\big{)}\) vanishes on \(\ell=(x=0)\) (Corollary 3.26), _i.e._ if and only if \(\lambda=0\). On the other hand, according to Corollary 3.17, \(K(\mathrm{Leg}(\ell\boxtimes\mathcal{H}_{2}^{d-1}(\lambda,\mu)))\) is holomorphic on \(\mathcal{G}_{\mathcal{H}_{2}^{d-1}(\lambda,\mu)}(\{xy=0\})\) if and only if
\[0=(d-3)\big{(}\partial_{x}B(1,0)-\partial_{y}A(1,0)\big{)}+3(d-1)B(1,0)=d(d-1 )\mu\Longleftrightarrow\mu=0.\]
It follows that \(\mathrm{Leg}(\ell\boxtimes\mathcal{H}_{2}^{d-1}(\lambda,\mu))\) is flat if and only if \(\lambda=\mu=0\).
**4.** Let us examine the case where \(\ell=(y-\rho x=0)\) with \(\rho\neq 0\). By Corollary 3.17, \(K(\mathrm{Leg}(\ell\boxtimes\mathcal{H}_{2}^{d-1}(\lambda,\mu)))\) is holomorphic on \(\mathcal{G}_{\mathcal{H}_{2}^{d-1}(\lambda,\mu)}(\{xy=0\})\) if and only if
\[\left\{\begin{array}{l}0=-(d-3)\big{(}\partial_{x}B(0,-1)-\partial_{y}A(0,-1 )\big{)}-3(d-1)\Big{(}A(0,-1)+\rho B(0,-1)\Big{)}=(-1)^{d}(d-1)(d\lambda-3\rho) \\ \\ 0=-\rho(d-3)\Big{(}\partial_{x}B(1,0)-\partial_{y}A(1,0)\Big{)}-3(d-1)\Big{(} A(1,0)+\rho B(1,0)\Big{)}=-(d-1)(d\rho\mu+3),\end{array}\right.\]
_i.e._ if and only if \(\lambda=\lambda_{0}:=\frac{3\rho}{d},\ \mu=\mu_{0}:=-\frac{3}{d\rho}\ \text{ and }\ d\neq 3\), because \(\lambda\mu\neq-1\). We now distinguish two cases according to whether or not \(\ell\) is invariant by \(\mathcal{H}_{2}^{d-1}(\lambda_{0},\mu_{0})\).
**4.1.** Assume that \(\ell\) is invariant by \(\mathcal{H}_{2}^{d-1}(\lambda_{0},\mu_{0})\). Then the dual web of \(\ell\boxtimes\mathcal{H}_{2}^{d-1}(\lambda_{0},\mu_{0})\) is flat by Theorem 3.7. Since \(\mathrm{I}_{\mathcal{H}_{2}^{d-1}(\lambda_{0},\mu_{0})}^{\mathrm{inv}}\Big{|}_{y=\rho x}=\big{(} \frac{3}{d}-1\big{)}\left(\rho^{d}-1\right)zx^{d}\), the invariance of \(\ell\) by \(\mathcal{H}_{2}^{d-1}(\lambda_{0},\mu_{0})\) is equivalent to \(\rho^{d}=1\); as a consequence \((\rho,\lambda_{0},\mu_{0})\in\left\{\left(\xi^{\prime 2k},\frac{3\xi^{\prime 2k}}{d},- \frac{3}{d\xi^{\prime 2k}}\right),k=0,\ldots,d-1\right\}\). Up to conjugation, \((\rho,\lambda_{0},\mu_{0})=(1,\frac{3}{d},-\frac{3}{d})\); indeed, putting \(\varphi(x,y)=(x,\xi^{\prime 2k}y)\) we have
\[\varphi^{*}\left((y-\xi^{\prime 2k}x)\omega_{2}^{d-1}\left(\frac{3\xi^{\prime 2k}}{d},- \frac{3}{d\xi^{\prime 2k}}\right)\right)=\xi^{\prime 2k}\left(y-x\right)\omega_{2}^{d-1} \left(\frac{3}{d},-\frac{3}{d}\right).\]
**4.2.** Assume that \(\ell\) is not invariant by \(\mathcal{H}_{2}^{d-1}(\lambda_{0},\mu_{0})\). Then, by Theorem 3.7 and Corollary 3.20, the flatness of \(\mathrm{Leg}(\ell\boxtimes\mathcal{H}_{2}^{d-1}(\lambda_{0},\mu_{0}))\) translates into \(Q(1,\rho;-\rho,1)=0,\) where
\[Q(x,y;-\rho,1)=\left(\mu_{0}-\rho^{d-1}\right)\frac{\partial P}{\partial x}- \left(\lambda_{0}\rho^{d-1}+1\right)\frac{\partial P}{\partial y}\]
and
\[P(x,y;-\rho,1)=\frac{\left(\lambda_{0}\mu_{0}+1\right)\left(y^{d-1}-(\rho x)^ {d-1}\right)}{y-\rho x}=\left(\lambda_{0}\mu_{0}+1\right)\sum_{i=0}^{d-2}\rho ^{i}x^{i}y^{d-2-i}.\]
Hence \(Q(1,\rho;-\rho,1)=\frac{1}{2}\left(\frac{3}{d}-1\right)\left(\frac{3}{d}+1 \right)^{2}(d-1)(d-2)\rho^{d-3}(\rho^{d}+1)\), and consequently \(\mathrm{Leg}(\ell\boxtimes\mathcal{H}_{2}^{d-1}(\lambda_{0},\mu_{0}))\) is flat if and only if \(\rho^{d}=-1\), hence if and only if \((\rho,\lambda_{0},\mu_{0})\in\left\{\left(\xi^{\prime 2k+1},\frac{3\xi^{\prime 2k+ 1}}{d},-\frac{3}{d\xi^{\prime 2k+ 1}}\right),k=0,\ldots,d-1\right\}\), or equivalently, if and only if, up to conjugation, \((\rho,\lambda_{0},\mu_{0})=\left(\xi^{\prime},\frac{3\xi^{\prime}}{d},-\frac{ 3}{d\xi^{\prime}}\right)\), because
\[\varphi^{*}\left(\left(y-\xi^{\prime 2k+1}x\right)\omega_{2}^{d-1}\left(\frac{3 \xi^{\prime 2k+1}}{d},-\frac{3}{d\xi^{\prime 2k+ 1}}\right)\right)=\xi^{\prime 2k}\left(y-\xi^{\prime}x\right)\omega_{2}^{d-1} \left(\frac{3\xi^{\prime}}{d},-\frac{3}{d\xi^{\prime}}\right).\]
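The factorization of \(Q(1,\rho;-\rho,1)\) used in case **4.2** can be double-checked symbolically for \(d=4\) (a sketch assuming sympy, not part of the original proof):

```python
import sympy as sp

x, y, rho = sp.symbols('x y rho')
d = 4
lam0, mu0 = 3 * rho / d, -sp.Rational(3, d) / rho   # lambda_0 = 3 rho/d, mu_0 = -3/(d rho)

A = x**(d - 1) + lam0 * y**(d - 1)
B = mu0 * x**(d - 1) - y**(d - 1)
A0, B0 = A.subs({x: 1, y: rho}), B.subs({x: 1, y: rho})

P = sp.cancel((A * B0 - B * A0) / (y - rho * x))
Q = sp.diff(P, x) * B0 - sp.diff(P, y) * A0
print(sp.factor(Q.subs({x: 1, y: rho})))
# equals -(147/64)*rho*(rho**4 + 1): away from the invariant case, flatness forces
# rho**4 = -1 (i.e. rho**d = -1), as claimed
```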
**Lemma 4.3**.: _The \(d\)-web \(\mathrm{Leg}\big{(}\ell\boxtimes\mathcal{H}_{3}^{d-1}(\lambda)\big{)}\) is flat if and only if one of the following cases holds:_
* \(\ell=L_{\infty}\) _and_ \(d=3\)_;_
* \(\ell=(dy+3x=0),\)__\(d\geq 4\) _and_ \(\lambda=\frac{(-1)^{d}(d-3)d^{d-2}}{3^{d-1}}\)_;_
* \(\ell=(dy+3x=0)\) _and_ \(\lambda=\frac{(-1)^{d}(d+3)d^{d-2}}{3^{d-1}}\)_._
Proof.: We have \(\omega_{3}^{d-1}(\lambda)=A(x,y)\mathrm{d}x+B(x,y)\mathrm{d}y\), where \(A(x,y)=x^{d-1}+\lambda y^{d-1}\) and \(B(x,y)=x^{d-1}\); an immediate computation leads to
\[I_{\mathcal{H}_{3}^{d-1}(\lambda)}^{\mathrm{inv}}=zx^{d-1}\left(x^{d-1}+x^{d-2 }y+\lambda y^{d-1}\right)\qquad\qquad\text{ and }\qquad\qquad I_{\mathcal{H}_{3}^{d-1}(\lambda)}^{ \mathrm{tr}}=y^{d-2}.\]
_1._ Assume that \(\ell=L_{\infty}.\) If \(d=3\), then the web \(\mathrm{Leg}(\ell\boxtimes\mathcal{H}_{3}^{2}(\lambda))\) is flat, thanks to Corollary 3.2. For \(d\geq 4,\) the webs \(\mathrm{Leg}(\mathcal{H}_{3}^{d-1}(\lambda))\) and \(\mathrm{Leg}(\ell\boxtimes\mathcal{H}_{3}^{d-1}(\lambda))\) have the same curvature (Theorem 3.1) and cannot be flat. Indeed, we have
\[\mathrm{d}\big{(}\omega_{3}^{d-1}(\lambda)\big{)}\Big{|}_{y=0}=(d-1)x^{d-2} \mathrm{d}x\wedge\mathrm{d}y\not\equiv 0;\]
this implies, according to [4, Theorem 3.8], that \(K(\mathrm{Leg}(\mathcal{H}_{3}^{d-1}(\lambda)))\) cannot be holomorphic along \(\mathcal{G}_{\mathcal{H}_{3}^{d-1}(\lambda)}(\{y=0\})\).
_2._ If \(\ell=(y=0),\) then the fact that \(\mathrm{d}\big{(}\omega_{3}^{d-1}(\lambda)\big{)}\) does not vanish on \(\ell\) implies, by Corollary 3.26, that \(K(\mathrm{Leg}(\ell\boxtimes\mathcal{H}_{3}^{d-1}(\lambda)))\) cannot be holomorphic on \(\mathcal{G}_{\mathcal{H}_{3}^{d-1}(\lambda)}(\ell),\) so that \(\mathrm{Leg}(\ell\boxtimes\mathcal{H}_{3}^{d-1}(\lambda))\) cannot be flat.
_3._ Assume that \(\ell=(x-\rho y=0),\) where \(\rho\in\mathbb{C}.\) By Corollary 3.17, \(K(\mathrm{Leg}(\ell\boxtimes\mathcal{H}_{3}^{d-1}(\lambda)))\) is holomorphic on \(\mathcal{G}_{\mathcal{H}_{3}^{d-1}(\lambda)}(\{y=0\})\) if and only if
\[0=(d-3)\Big{(}\partial_{x}B(1,0)-\partial_{y}A(1,0)\Big{)}+3(d-1)\Big{(}B(1,0)+ \rho A(1,0)\Big{)}=(d-1)(3\rho+d),\]
hence if and only if \(\rho=-\frac{d}{3}\), which is equivalent to \(\ell=\ell_{0}\) where \(\ell_{0}=(dy+3x=0).\) Then we have to distinguish two cases:
**3.1.** If \(\ell_{0}\) is invariant by \(\mathcal{H}_{3}^{d-1}(\lambda)\), then Theorem 3.7 ensures that the \(d\)-web \(\operatorname{Leg}(\ell_{0}\boxtimes\mathcal{H}_{3}^{d-1}(\lambda))\) is flat; since
\[\operatorname{I}_{\mathcal{H}_{3}^{d-1}(\lambda)}^{\operatorname{inv}}\Big{|}_ {x=-\frac{d}{3}y}=-d^{d-1}z\left(\frac{y}{3}\right)^{2d-2}\left((-1)^{d}3^{d-1 }\lambda-(d-3)d^{d-2}\right),\]
the invariance of \(\ell_{0}\) by \(\mathcal{H}_{3}^{d-1}(\lambda)\) is characterized by \(\lambda=\frac{(-1)^{d}(d-3)d^{d-2}}{3^{d-1}}\) and \(d\neq 3\), because \(\lambda\neq 0\).
**3.2.** Assume that \(\ell_{0}\) is not invariant by \(\mathcal{H}_{3}^{d-1}(\lambda)\). Then, according to Theorem 3.7 and Corollary 3.20, the \(d\)-web \(\operatorname{Leg}(\ell_{0}\boxtimes\mathcal{H}_{3}^{d-1}(\lambda))\) is flat if and only if \(Q(d,-3;3,d)=0,\) where
\[Q(x,y;3,d)=d^{d-1}\frac{\partial P}{\partial x}-\left(d^{d-1}+(-3)^{d-1} \lambda\right)\frac{\partial P}{\partial y}\]
and
\[P(x,y;3,d)=\frac{\lambda\left((dy)^{d-1}-(-3x)^{d-1}\right)}{dy+3x}=\lambda \sum_{i=0}^{d-2}(-3x)^{i}(dy)^{d-2-i}.\]
Thus \(Q(d,-3;3,d)=-\frac{1}{6}\lambda(d-1)(d-2)(3d)^{d-2}\left(3^{d-1}\lambda-(-1)^ {d}(d+3)d^{d-2}\right)\) and the flatness of \(\operatorname{Leg}(\ell_{0}\boxtimes\mathcal{H}_{3}^{d-1}(\lambda))\) translates into \(\lambda=\frac{(-1)^{d}(d+3)d^{d-2}}{3^{d-1}}\).
Lemmas 4.1, 4.2 and 4.3 imply the following proposition.
**Proposition 4.4.** -- Let \(\mathcal{H}=\ell\boxtimes\mathcal{H}\) be a homogeneous pre-foliation of co-degree \(1\) and degree \(d\geq 3\) on \(\mathbb{P}_{\mathbb{C}}^{2}\). Assume that \(\deg\mathcal{T}_{\mathcal{H}}=2,\) or equivalently that the map \(\underline{\mathcal{G}}_{\mathcal{H}}\) has exactly two critical points. Then, for \(d\geq 4\) the web \(\operatorname{Leg}\mathcal{H}\) is flat if and only if \(\mathcal{H}\) is linearly conjugate to one of the ten following pre-foliations
1. \(\mathcal{H}_{1}^{d}=L_{\infty}\boxtimes\mathcal{H}_{1}^{d-1}\);
2. \(\mathcal{H}_{2}^{d}=\{x=0\}\boxtimes\mathcal{H}_{1}^{d-1}\);
3. \(\mathcal{H}_{3}^{d}=\{y-x=0\}\boxtimes\mathcal{H}_{1}^{d-1}\);
4. \(\mathcal{H}_{4}^{d}=\{y-\xi x=0\}\boxtimes\mathcal{H}_{1}^{d-1}\), where \(\xi=\exp\left(\frac{\mathrm{i}\pi}{d-2}\right)\);
5. \(\mathcal{H}_{5}^{d}=\{x=0\}\boxtimes\mathcal{H}_{2}^{d-1}(0,0)\);
6. \(\mathcal{H}_{6}^{d}=\{dy+3x=0\}\boxtimes\mathcal{H}_{3}^{d-1}(\lambda_{0}),\) where \(\lambda_{0}=\frac{(-1)^{d}(d+3)d^{d-2}}{3^{d-1}}\);
7. \(\mathcal{H}_{7}^{d}=\{dy+3x=0\}\boxtimes\mathcal{H}_{3}^{d-1}(\lambda_{1}),\) where \(\lambda_{1}=\frac{(-1)^{d}(d-3)d^{d-2}}{3^{d-1}}\);
8. \(\mathcal{H}_{8}^{d}=L_{\infty}\boxtimes\mathcal{H}_{2}^{d-1}(0,0)\);
9. \(\mathcal{H}_{9}^{d}=\{y-x=0\}\boxtimes\mathcal{H}_{2}^{d-1}\left(\frac{3}{d}, -\frac{3}{d}\right)\);
10. \(\mathcal{H}_{10}^{d}=\{y-\xi^{\prime}x=0\}\boxtimes\mathcal{H}_{2}^{d-1}\left( \frac{3\xi^{\prime}}{d},-\frac{3}{d\xi^{\prime}}\right)\), where \(\xi^{\prime}=\exp\left(\frac{\mathrm{i}\pi}{d}\right)\).
For \(d=3\) the web \(\operatorname{Leg}\mathcal{H}\) is flat if and only if, up to linear conjugation, either \(\mathcal{H}\) is one of the six pre-foliations \(\mathcal{H}_{1}^{3},\mathcal{H}_{2}^{3},\ldots,\mathcal{H}_{6}^{3},\) or \(\mathcal{H}\) is of one of the following two types
11. \(\mathcal{H}_{7}^{3}(\lambda)=L_{\infty}\boxtimes\mathcal{H}_{3}^{2}(\lambda),\) where \(\lambda\in\mathbb{C}^{*}\);
12. \(\mathcal{H}_{8}^{3}(\lambda,\mu)=L_{\infty}\boxtimes\mathcal{H}_{2}^{2}( \lambda,\mu),\) where \(\lambda,\mu\in\mathbb{C}\) with \(\lambda\mu\neq-1\).
Combining Proposition 4.4 with the fact that every homogeneous foliation of degree \(2\) on \(\mathbb{P}^{2}_{\mathbb{C}}\) has a type \(\mathcal{T}_{\mathcal{H}}\) of degree \(2\), we obtain the classification, up to automorphism of \(\mathbb{P}^{2}_{\mathbb{C}}\), of homogeneous pre-foliations of type \((1,3)\) on \(\mathbb{P}^{2}_{\mathbb{C}}\) whose dual web is flat.
**Corollary 4.5**.: _Up to automorphism of \(\mathbb{P}^{2}_{\mathbb{C}}\), there are six examples and two families of homogeneous pre-foliations of co-degree \(1\) and degree \(3\) on \(\mathbb{P}^{2}_{\mathbb{C}}\) with a flat Legendre transform, namely:_
1. \(\mathcal{H}^{3}_{1}=L_{\infty}\boxtimes\mathcal{H}^{2}_{1}\)_;_
2. \(\mathcal{H}^{3}_{2}=\{x=0\}\boxtimes\mathcal{H}^{2}_{1}\)_;_
3. \(\mathcal{H}^{3}_{3}=\{y-x=0\}\boxtimes\mathcal{H}^{2}_{1}\)_;_
4. \(\mathcal{H}^{3}_{4}=\{y+x=0\}\boxtimes\mathcal{H}^{2}_{1}\)_;_
5. \(\mathcal{H}^{3}_{5}=\{x=0\}\boxtimes\mathcal{H}^{2}_{2}(0,0)\)_;_
6. \(\mathcal{H}^{3}_{6}=\{y+x=0\}\boxtimes\mathcal{H}^{2}_{3}(-2)\)_;_
7. \(\mathcal{H}^{3}_{7}(\lambda)=L_{\infty}\boxtimes\mathcal{H}^{2}_{3}(\lambda)\)_, where_\(\lambda\in\mathbb{C}^{*}\)_;_
8. \(\mathcal{H}^{3}_{8}(\lambda,\mu)=L_{\infty}\boxtimes\mathcal{H}^{2}_{2}( \lambda,\mu)\)_, where_\(\lambda,\mu\in\mathbb{C}\) _with_\(\lambda\mu\neq-1\)_._
In Section 6 we will need, for \(\mathcal{H}\in\{\mathcal{H}^{2}_{1},\mathcal{H}^{2}_{2}(0,0),\mathcal{H}^{2}_{ 3}(-2)\}\), the values of the Camacho-Sad indices \(\operatorname{CS}(\mathcal{H},L_{\infty},s)\), \(s\in\operatorname{Sing}\mathcal{H}\cap L_{\infty}\). For this, we have computed, for each of these three foliations, the following polynomial (called Camacho-Sad polynomial of the homogeneous foliation \(\mathcal{H}\))
\[\operatorname{CS}_{\mathcal{H}}(\lambda)=\prod_{s\in\operatorname{Sing} \mathcal{H}\cap L_{\infty}}(\lambda-\operatorname{CS}(\mathcal{H},L_{\infty},s)).\]
The following table summarizes the types and the Camacho-Sad polynomials of the foliations \(\mathcal{H}^{2}_{1}\), \(\mathcal{H}^{2}_{2}(0,0)\) and \(\mathcal{H}^{2}_{3}(-2)\).
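These Camacho–Sad indices can be recovered by linearizing the defining vector fields at the singular points lying on \(L_{\infty}\). The following sketch (not part of the original text; it assumes sympy and uses the convention \(\operatorname{CS}=\) transverse eigenvalue divided by tangent eigenvalue) treats \(\mathcal{H}_{1}^{2}\):

```python
import sympy as sp

x, z = sp.symbols('x z')

def cs_indices(vx, vz, points):
    """CS indices along the invariant line {z = 0} of the vector field
    v = vx d/dx + vz d/dz at the non-degenerate singular points (x0, 0)."""
    vals = []
    for x0 in points:
        J = sp.Matrix([[sp.diff(vx, x), sp.diff(vx, z)],
                       [sp.diff(vz, x), sp.diff(vz, z)]]).subs({x: x0, z: 0})
        vals.append(J[1, 1] / J[0, 0])   # transverse eigenvalue / tangent eigenvalue
    return vals

# H_1^2: omega_1^2 = y^2 dx - x^2 dy.  In the chart y = 1 (with L_infty = {z = 0})
# the foliation is given by z dx + (x^2 - x) dz, i.e. by v = -(x^2 - x) d/dx + z d/dz,
# singular on {z = 0} at x = 0 and x = 1.
print(cs_indices(-(x**2 - x), z, [0, 1]))   # [1, -1]

# The remaining singular point [1:0:0] is seen in the chart x = 1, where the analogous
# vector field is (x^2 - x) d/dx - z d/dz (x now denotes the second chart coordinate):
print(cs_indices(x**2 - x, -z, [0]))        # [1]

# Hence CS_{H_1^2}(lam) = (lam - 1)^2 (lam + 1).  The same procedure appears to give
# (lam - 1/3)^3 for H_2^2(0,0) and (lam - 1)(lam - 1/3)(lam + 1/3) for H_3^2(-2);
# each set of indices sums to 1, as required by the Camacho-Sad formula.
```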
## 5 Pre-foliations of co-degree \(1\) whose associated foliation is reduced convex
We now give the proofs of Theorem E and Propositions F and G stated in the Introduction.
Proof of Theorem E.: Since by hypothesis \(\mathcal{F}\) is reduced convex, all its singularities are non-degenerate ([4, Lemma 6.8]). According to [2, Lemma 2.2], the discriminant of \(\operatorname{Leg}\mathcal{F}\) then consists of the lines dual to the radial singularities of \(\mathcal{F}\). The first assertion of Lemma 2.1 therefore implies that
\[\Delta(\operatorname{Leg}\mathcal{F})=\check{\Sigma}^{\operatorname{rad}}_{ \mathcal{F}}\cup\check{\Sigma}^{\ell}_{\mathcal{F}}.\]
To show that the curvature of \(\operatorname{Leg}\mathcal{F}\) is identically zero, it suffices therefore to show that it is holomorphic along the dual line of every point of \(\Sigma^{\operatorname{rad}}_{\mathcal{F}}\cup\Sigma^{\ell}_{\mathcal{F}}\). Let \(s\) be an arbitrary point of \(\Sigma^{\operatorname{rad}}_{\mathcal{F}}\cup\Sigma^{\ell}_{\mathcal{F}}\). Denote by \(\nu=\tau(\mathcal{F},s)\)
the tangency order of \(\mathcal{F}\) with a generic line passing through \(s\); then \(\nu-1\) denotes the radiality order of \(s\), and \(s\in\Sigma^{\mathrm{rad}}_{\mathcal{F}}\) if and only if \(\nu\geq 2\), _see_[4, SS1.3]. By [13, Proposition 3.3], locally near the line \(\check{s}\) dual to \(s\), we can decompose \(\mathrm{Leg}\mathcal{F}\) as \(\mathrm{Leg}\mathcal{F}=\mathcal{W}_{\nu}\boxtimes\mathcal{W}_{d-\nu-1},\) where \(\mathcal{W}_{\nu}\) is an irreducible \(\nu\)-web leaving \(\check{s}\) invariant and whose discriminant \(\Delta(\mathcal{W}_{\nu})\) has minimal multiplicity \(\nu-1\) along \(\check{s}\), and where \(\mathcal{W}_{d-\nu-1}\) is a \((d-\nu-1)\)-web transverse to \(\check{s}\). Furthermore, the convexity of \(\mathrm{Leg}\mathcal{F}\) implies, by an argument of the proof of [13, Theorem 4.2], that the web \(\mathcal{W}_{d-\nu-1}\) is regular near \(\check{s}\), _i.e._ that through a generic point of \(\check{s}\) pass \((d-\nu-1)\) distinct tangent lines to \(\mathcal{W}_{d-\nu-1}\).
Thus, near the line \(\check{s}\), we have the decomposition
\[\mathrm{Leg}\mathcal{F}=\mathrm{Leg}\ell\boxtimes\mathcal{W}_{\nu}\boxtimes \mathcal{W}_{d-\nu-1}.\]
We now distinguish two cases:
_1._ If \(s\in\ell\) then \(\check{s}\) is invariant by \(\mathrm{Leg}\ell\); by applying Theorem 1 of [13] if \(\nu=1\) and Proposition 3.9 if \(\nu\geq 2\), it follows that \(K(\mathrm{Leg}\mathcal{F})\) is holomorphic along \(\check{s}\).
_2._ Assume that \(s\not\in\ell\); then \(s\in\Sigma^{\mathrm{rad}}_{\mathcal{F}}\setminus\Sigma^{\ell}_{\mathcal{F}}\). In this case the radial foliation \(\mathrm{Leg}\ell\) is transverse to \(\check{s}\). From the above discussion, the \((d-\nu)\)-web \(\mathcal{W}_{d-\nu}:=\mathrm{Leg}\ell\boxtimes\mathcal{W}_{d-\nu-1}\) is therefore also transverse to \(\check{s}\) and we have \(\mathrm{Leg}\mathcal{F}=\mathcal{W}_{\nu}\boxtimes\mathcal{W}_{d-\nu}\). Moreover, since \(\ell\) is \(\mathcal{F}\)-invariant, \(\mathrm{Tang}(\mathrm{Leg}\ell,\mathrm{Leg}\mathcal{F})=\check{\Sigma}^{ \ell}_{\mathcal{F}}\) (_cf._ proof of Lemma 2.1); in particular, \(\mathrm{Tang}(\mathrm{Leg}\ell,\mathcal{W}_{d-\nu-1})\subset\check{\Sigma}^{ \ell}_{\mathcal{F}}\) and therefore \(\check{s}\not\subset\mathrm{Tang}(\mathrm{Leg}\ell,\mathcal{W}_{d-\nu-1})\). It follows that the web \(\mathcal{W}_{d-\nu}\) is regular near \(\check{s}\), because \(\mathcal{W}_{d-\nu-1}\) is so. As a consequence the curvature of \(\mathrm{Leg}\mathcal{F}\) is holomorphic along \(\check{s}\) by applying [13, Proposition 2.6].
Proof of Proposition F. -- The Fermat foliation \(\mathcal{F}_{0}^{d-1}\) is given in homogeneous coordinates by the 1-form
\[\overline{\Omega}_{0}^{d-1}=x^{d-1}(y\mathrm{d}z-z\mathrm{d}y)+y^{d-1}(z \mathrm{d}x-x\mathrm{d}z)+z^{d-1}(x\mathrm{d}y-y\mathrm{d}x).\]
It has the following \(3(d-1)\) invariant lines:
\[x=0,\quad y=0,\quad z=0,\quad y=\zeta^{k}x,\quad y=\zeta^{k}z,\quad x=\zeta^{k }z,\quad\text{where }k\in\{0,\ldots,d-3\}\;\text{ and }\;\zeta=\exp(\tfrac{2i\pi}{d-2}).\]
Since the coordinates \(x,y\) and \(z\) play a symmetric role and since \(\ell\) is not invariant by \(\mathcal{F}_{0}^{d-1}\), we can assume that \(\ell=\{\alpha x+\beta y-z=0\}\) with \(\beta\neq 0\). Then \(\overline{\mathcal{O}(\ell\boxtimes\mathcal{F}_{0}^{d-1})}\) contains the following homogeneous pre-foliations:
\[\mathcal{H}_{1}=\{y-\alpha x=0\}\boxtimes\mathcal{H}_{1}^{d-1},\quad\quad \mathcal{H}_{2}=\{y-\beta x=0\}\boxtimes\mathcal{H}_{1}^{d-1},\quad\quad \mathcal{H}_{3}=\left\{x-(\alpha+\beta)y=0\right\}\boxtimes\mathcal{H}_{0}^{d -1}.\]
Indeed, \(\ell\boxtimes\mathcal{F}_{0}^{d-1}\) is described in the affine chart \(z=1\) by \(\omega=(\alpha x+\beta y-1)\overline{\omega}_{0}^{d-1}\); putting \(\varphi_{1}=\left(\frac{x}{y},\frac{\varepsilon}{y}\right)\), \(\varphi_{2}=\left(\frac{\varepsilon}{y},\frac{x}{y}\right)\) and \(\varphi_{3}=\left(\frac{y+\varepsilon}{x},\frac{y}{x}\right)\), we obtain that
\[\lim_{\varepsilon\to 0}\varepsilon^{-1}y^{d+2}\varphi_{1}^{*}\omega=(y-\alpha x) \omega_{1}^{d-1},\quad\quad\lim_{\varepsilon\to 0}\varepsilon^{-1}y^{d+2}\varphi_{2}^{*} \omega=(\beta x-y)\omega_{1}^{d-1},\quad\quad\lim_{\varepsilon\to 0}\varepsilon^{-1}x^{d+2} \varphi_{3}^{*}\omega=\left((\alpha+\beta)y-x\right)\omega_{0}^{d-1}.\]
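The first of these limits can be verified symbolically, for instance for \(d=4\) (a sketch assuming sympy; not part of the original proof):

```python
import sympy as sp

x, y, eps, alpha, beta, dx, dy = sp.symbols('x y epsilon alpha beta dx dy')
d = 4

# affine 1-form of the Fermat foliation F_0^{d-1} in the chart z = 1
def omega_fermat(X, Y, dX, dY):
    return (Y**(d - 1) - Y) * dX + (X - X**(d - 1)) * dY

# phi_1 = (x/y, eps/y); pull back omega = (alpha x + beta y - 1) * omega_fermat
X, Y = x / y, eps / y
dX = sp.diff(X, x) * dx + sp.diff(X, y) * dy
dY = sp.diff(Y, x) * dx + sp.diff(Y, y) * dy
pulled = (alpha * X + beta * Y - 1) * omega_fermat(X, Y, dX, dY)

limit_form = sp.expand(pulled * y**(d + 2) / eps).subs(eps, 0)
print(sp.factor(limit_form))
# equals (y - alpha*x)*(y**(d-1)*dx - x**(d-1)*dy), i.e. (y - alpha x) * omega_1^{d-1}
```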
The hypothesis that \(\mathrm{Leg}(\ell\boxtimes\mathcal{F}_{0}^{d-1})\) is flat implies that the webs \(\mathrm{Leg}\mathcal{H}_{i}\;(i=1,2,3)\) are also flat. Let us show that the flatness of \(\mathrm{Leg}\mathcal{H}_{1}\) and \(\mathrm{Leg}\mathcal{H}_{2}\) implies that, up to linear conjugation,
\[(\alpha,\beta)\in E:=\left\{(0,\xi),(1,1),(1,\xi),(\xi,\xi)\right\},\quad \text{where }\xi=\exp\left(\tfrac{i\pi}{d-2}\right).\]
First of all, the \(d\)-web \(\mathrm{Leg}\mathcal{H}_{1}\), resp. \(\mathrm{Leg}\mathcal{H}_{2}\), is flat if and only if (_cf._ proof of Lemma 4.1)
\[\alpha(\alpha^{d-2}-1)(\alpha^{d-2}+1)=0,\qquad\qquad\text{ resp. }(\beta^{d-2}-1)(\beta^{d-2}+1)=0,\]
_i.e._ if and only if \(\alpha\in\{0,\zeta^{k},\xi\zeta^{k},\,k=0,\ldots,d-3\},\) resp. \(\beta\in\{\zeta^{k},\xi\zeta^{k},\,k=0,\ldots,d-3\}.\) If \(\alpha=0\) then \(\beta\neq\zeta^{k},\) because otherwise \(\ell\) would be invariant by \(\mathcal{F}_{0}^{d-1}.\) It follows that
\[(\alpha,\beta)\in\Big{\{}(0,\xi\zeta^{k}),(\zeta^{k},\zeta^{k^{\prime}}),(\zeta^{ k},\xi\zeta^{k^{\prime}}),(\xi\zeta^{k},\zeta^{k^{\prime}}),(\xi\zeta^{k},\xi\zeta^{k^{\prime}}), \,\,k,k^{\prime}=0,\ldots,d-3\Big{\}}.\]
If, for \(k,k^{\prime}\in\{0,\ldots,d-3\},\)
\[(\alpha,\beta)=(0,\xi\zeta^{k}),\quad\text{ resp. }(\alpha,\beta)\in\Big{\{}(\zeta^{k}, \zeta^{k^{\prime}}),(\zeta^{k},\xi\zeta^{k^{\prime}}),(\xi\zeta^{k},\xi\zeta^{k^{\prime }})\Big{\}},\qquad\quad\text{ resp. }(\alpha,\beta)=(\xi\zeta^{k},\zeta^{k^{\prime}}),\]
then by conjugating \(\omega\) by \(\Big{(}x,\frac{y}{\zeta^{k}}\Big{)},\) resp. \(\Big{(}\frac{x}{\zeta^{k}},\frac{y}{\zeta^{k^{\prime}}}\Big{)},\) resp. \(\Big{(}\frac{y}{\zeta^{k}},\frac{x}{\zeta^{k^{\prime}}}\Big{)},\) we reduce ourselves to \((\alpha,\beta)=(0,\xi),\) resp. \((\alpha,\beta)\in\{(1,1),(1,\xi),(\xi,\xi)\},\) resp. \((\alpha,\beta)=(1,\xi).\) Thus, up to conjugation, \((\alpha,\beta)\) belongs to \(E.\)
Moreover, according to Example 3.23, the flatness of \(\text{Leg}\mathcal{H}_{3}\) is equivalent to
\[0=(\alpha+\beta)\Big{(}(\alpha+\beta)^{d-2}-1\Big{)}\Big{(}(\alpha+\beta)^{d- 2}-2(d-2)\Big{)}=:f_{d}(\alpha,\beta).\]
Since
\[f_{d}(0,\xi) =2\xi(2d-3)\neq 0, f_{d}(1,1) =4(2^{d-2}-1)(2^{d-3}-d+2)=0\Longleftrightarrow d\in\{3,4\},\] \[f_{d}(\xi,\xi) =2\xi(2^{d-2}+1)(2^{d-2}+2d-4)\neq 0, f_{d}(1,\xi) =(\xi+1)\Big{(}(\xi+1)^{d-2}-1\Big{)}\Big{(}(\xi+1)^{d-2}-2(d-2) \Big{)}=0\Longleftrightarrow d=3,\]
we deduce that \(d\in\{3,4\}\) and, up to conjugation,
\[(\alpha,\beta)\in\{(1,1),(1,-1)\}\text{ if }d=3\qquad\qquad\text{ and }\qquad\qquad(\alpha,\beta)=(1,1)\text{ if }d=4,\]
_i.e._, putting \(\ell_{1}=\{x+y-z=0\}\) and \(\ell_{2}=\{x-y-z=0\},\) we have \(\ell\in\{\ell_{1},\ell_{2}\}\) if \(d=3\) and \(\ell=\ell_{1}\) if \(d=4.\)
To conclude, it suffices to note that \(\ell_{1}=(s_{1}s_{2}s_{4})\) and \(\ell_{2}=(s_{1}s_{3}),\) where \(s_{1}=[1:0:1],\)\(s_{2}=[0:1:1],\)\(s_{3}=[1:1:0]\) and \(s_{4}=[-1:1:0]\): the points \(s_{1},s_{2}\) and \(s_{3}\) are singular for \(\mathcal{F}_{0}^{d-1},\) and \(s_{4}\in\text{Sing}\mathcal{F}_{0}^{d-1}\) if and only if \(d\) is even; in particular the point \(s_{4}\) is singular for \(\mathcal{F}_{0}^{3}\) but not for \(\mathcal{F}_{0}^{2}.\)
Proof of Proposition G.: The Hesse foliation \(\mathcal{F}_{H}^{4}\) is described in homogeneous coordinates by the 1-form
\[\Omega_{H}^{4}=yz(2x^{3}-y^{3}-z^{3})\text{d}x+xz(2y^{3}-x^{3}-z^{3})\text{d}y +xy(2z^{3}-x^{3}-y^{3})\text{d}z.\]
Its 12 invariant lines are given by
\[xyz(x+y+z)(\zeta x+y+z)(x+\zeta y+z)(x+y+\zeta z)(\zeta^{2}x+y+z)(x+\zeta^{2}y+ z)(x+y+\zeta^{2}z)(\zeta^{2}x+\zeta y+z)(\zeta x+\zeta^{2}y+z)=0,\]
where \(\zeta=\exp(\frac{2i\pi}{3}).\) As above, we can assume that \(\ell=\{\alpha x+\beta y-z=0\}\) with \(\beta\neq 0.\) Then the closure of the \(\text{Aut}(\mathbb{P}_{\mathbb{C}}^{2})\)-orbit of \(\ell\boxtimes\mathcal{F}_{H}^{4}\) contains the following three homogeneous pre-foliations:
\[\mathcal{H}_{1}=\{y-\alpha x=0\}\boxtimes\mathcal{H}_{4}^{4},\qquad\quad \mathcal{H}_{2}=\{y-\beta x=0\}\boxtimes\mathcal{H}_{4}^{4},\qquad\quad\mathcal{ H}_{3}=\big{\{}ax+by=0\big{\}}\boxtimes\mathcal{H}_{4}^{4},\]
where \(a=\alpha+\beta-1\) and \(b=\alpha+\zeta^{2}\beta-\zeta.\) Indeed, in the affine chart \(z=1,\) the pre-foliation \(\ell\boxtimes\mathcal{F}_{H}^{4}\) is given by \(\omega=(\alpha x+\beta y-1)\omega_{H}^{4};\) putting \(\psi_{1}=\Big{(}\frac{x}{y},\frac{\varepsilon}{y}\Big{)},\)\(\psi_{2}=\Big{(}\frac{\varepsilon}{y},\frac{x}{y}\Big{)}\) and \(\psi_{3}=\Big{(}\frac{x+y}{x+\zeta y+\varepsilon},\frac{x+\zeta^{2}y}{x+\zeta y +\varepsilon}\Big{)},\) a straightforward computation shows that
\[\lim_{\varepsilon\to 0}\varepsilon^{-1}y^{7}\psi_{1}^{*}\omega=(\alpha x-y)\omega_{4}^{ 4},\qquad\lim_{\varepsilon\to 0}\varepsilon^{-1}y^{7}\psi_{2}^{*}\omega=(\beta x-y)\omega_{4}^{ 4},\qquad\lim_{\varepsilon\to 0}\varepsilon^{-1}(x+\zeta y+\varepsilon)^{7}\psi_{3}^{*}\omega=-9 \zeta(ax+by)\omega_{4}^{4}.\]
Since the 5-web \(\text{Leg}(\ell\boxtimes\mathcal{F}_{H}^{4})\) is flat by hypothesis, so are the 5-webs \(\text{Leg}\mathcal{H}_{i},\,i=1,2,3.\) Now, according to Example 3.24, for any line \(\ell_{0}\) passing through the origin, \(\text{Leg}(\ell_{0}\boxtimes\mathcal{H}_{4}^{4})\) is flat if and only if \(\ell_{0}=\{x=0\}\) or \(\ell_{0}=\{y-\rho x=0\}\) with \(\rho(\rho^{3}-1)(\rho^{3}+1)=0,\)_i.e._\(\rho\in E:=\{0,\zeta^{k},-\zeta^{k},\,k=0,1,2\}.\) Therefore, the
flatness of \(\operatorname{Leg}\mathcal{H}_{1}\) (resp. \(\operatorname{Leg}\mathcal{H}_{2}\)) is equivalent to \(\alpha\in E\) (resp. \(\beta\in E\setminus\{0\}\) because \(\beta\neq 0\)). Note that \((\alpha,\beta)\neq(-\zeta^{k},-\zeta^{k^{\prime}})\), for otherwise \(\ell\) would be invariant by \(\mathcal{F}_{H}^{4}\). As a result
\[(\alpha,\beta)\in\Big{\{}(0,\zeta^{k}),(0,-\zeta^{k}),(\zeta^{k},\zeta^{k^{ \prime}}),(\zeta^{k},-\zeta^{k^{\prime}}),(-\zeta^{k},\zeta^{k^{\prime}}),\ k,k^{ \prime}=0,1,2\Big{\}}.\]
If, for \(k,k^{\prime}\in\{0,1,2\}\),
\[(\alpha,\beta)\in\Big{\{}(0,\zeta^{k}),(0,-\zeta^{k})\Big{\}},\quad\text{ resp. }(\alpha,\beta)\in\Big{\{}(\zeta^{k},\zeta^{k^{\prime}}),(\zeta^{k},-\zeta^{k^{ \prime}})\Big{\}},\quad\qquad\text{resp. }(\alpha,\beta)=(-\zeta^{k},\zeta^{k^{\prime}}),\]
then by conjugating \(\omega\) by \(\Big{(}x,\frac{y}{\zeta^{k}}\Big{)}\), resp. \(\Big{(}\frac{x}{\zeta^{k}},\frac{y}{\zeta^{k^{\prime}}}\Big{)}\), we reduce ourselves to \((\alpha,\beta)\in\{(0,1),(0,-1)\}\), resp. \((\alpha,\beta)\in\{(1,1),(1,-1)\}\), resp. \((\alpha,\beta)=(1,-1)\). It follows that, up to linear conjugation,
\[(\alpha,\beta)\in F:=\big{\{}(0,1),(0,-1),(1,1),(1,-1)\big{\}}\,.\]
Now, for \((\alpha,\beta)\in F\), \(\operatorname{Leg}\mathcal{H}_{3}\) is flat if and only if \((\alpha,\beta)=(0,1),\) since putting \(\tau(\alpha,\beta)=-\frac{a}{b}\), we have
\[\tau(0,1)=0\in E\,,\qquad\quad\tau(0,-1)=2\not\in E\,,\qquad\quad\tau(1,1)= \tfrac{\zeta^{2}}{2}\not\in E,\qquad\quad\tau(1,-1)=\tfrac{1}{2}\not\in E.\]
Therefore, up to conjugation, \((\alpha,\beta)=(0,1)\), _i.e._\(\ell=\{y-z=0\}\); then \(\ell\) passes through four singular points of \(\mathcal{F}_{H}^{4}\), namely the points \(s_{1}=[1:0:0],\)\(s_{2}=[1:1:1]\), \(s_{3}=[\zeta:1:1]\) and \(s_{4}=[\zeta^{2}:1:1]\).
## 6 Pre-foliations of type \((1,3)\) whose associated foliation has only non-degenerate singularities
In this section, we prove Theorem H stated in the Introduction. To do this, we need two preliminary results, the first of which holds in any degree.
Let us first recall that in Section 5 we have proved Propositions F and G by reducing to the homogeneous case; in fact this argument is implicitly based on the following proposition.
**Proposition 6.1**.: _Let \(\mathcal{F}=\ell\boxtimes\mathcal{F}\) be a pre-foliation of co-degree \(1\) and degree \(d\geq 2\) on \(\mathbb{P}^{2}_{\mathbb{C}}\). Assume that the foliation \(\mathcal{F}\) has an invariant line \(D\) and that all its singularities on \(D\) are non-degenerate. There is a homogeneous pre-foliation \(\mathcal{H}=\ell_{0}\boxtimes\mathcal{H}\) of co-degree \(1\) and degree \(d\) on \(\mathbb{P}^{2}_{\mathbb{C}}\) such that:_
1. \(\mathcal{H}\in\overline{\mathcal{O}(\mathcal{F})}\) and \(\ell_{0}\boxtimes\mathcal{H}\in\overline{\mathcal{O}(\ell\boxtimes\mathcal{F})}\)_;_
2. _if_ \(\ell=D\) _(resp._ \(\ell\neq D\)_), then_ \(\ell_{0}=L_{\infty}\) _(resp._ \(\ell_{0}\neq L_{\infty}\)_);_
3. \(D\) _is invariant by_ \(\mathcal{H}\)_;_
4. \(\operatorname{Sing}\mathcal{H}\cap D=\operatorname{Sing}\mathcal{F}\cap D\)_;_
5. \(\forall\,s\in\operatorname{Sing}\mathcal{H}\cap D,\,\mu(\mathcal{H},s)=1\) _and_ \(\operatorname{CS}(\mathcal{H},D,s)=\operatorname{CS}(\mathcal{F},D,s)\)_._
If, moreover, \(\operatorname{Leg}\mathcal{F}\) (resp. \(\operatorname{Leg}(\ell\boxtimes\mathcal{F})\)) is flat, then \(\operatorname{Leg}\mathcal{H}\) (resp. \(\operatorname{Leg}(\ell_{0}\boxtimes\mathcal{H})\)) is also flat.
This proposition is an analogue for co-degree one pre-foliations of Proposition 6.4 of [4] on foliations of \(\mathbb{P}^{2}_{\mathbb{C}}\).
Proof.: Choose a homogeneous coordinate system \([x:y:z]\in\mathbb{P}^{2}_{\mathbb{C}}\) such that \(D=L_{\infty}=(z=0)\). Since \(D\) is \(\mathcal{F}\)-invariant, \(\mathcal{F}\) is defined in the affine chart \(z=1\) by a \(1\)-form \(\omega\) of type
\[\omega=\sum_{i=0}^{d-1}(A_{i}(x,y)\mathrm{d}x+B_{i}(x,y)\mathrm{d}y),\]
where \(A_{i},B_{i}\) are homogeneous polynomials of degree \(i\). According to [4, Proposition 6.4], since all the singularities of \(\mathcal{F}\) on \(D\) are non-degenerate, the \(1\)-form \(\omega_{d-1}=A_{d-1}(x,y)\mathrm{d}x+B_{d-1}(x,y)\mathrm{d}y\) defines a homogeneous foliation \(\mathcal{H}\) of degree \(d-1\) on \(\mathbb{P}^{2}_{\mathbb{C}}\) belonging to \(\overline{\mathcal{O}(\mathcal{F})}\) and satisfying the stated properties (iii), (iv) and (v).
Now, write \(\ell=\{\alpha x+\beta y+\gamma z=0\}\); in homogeneous coordinates, \(\mathcal{F},\) resp. \(\mathcal{H},\) is given by
\[\Omega_{d+1}=(\alpha x+\beta y+\gamma z)\sum_{i=0}^{d-1}z^{d-i-1} \Big{(}A_{i}(x,y)(z\mathrm{d}x-x\mathrm{d}z)+B_{i}(x,y)(z\mathrm{d}y-y\mathrm{ d}z)\Big{)},\] \[\mathrm{resp.}\ \Omega_{d}=A_{d-1}(x,y)(z\mathrm{d}x-x\mathrm{d}z)+B_{d-1} (x,y)(z\mathrm{d}y-y\mathrm{d}z).\]
Putting \(\varphi=\varphi_{\mathrm{e}}=\left[\frac{x}{\varepsilon}:\frac{y}{\varepsilon} :z\right],\) we see that if \((\alpha,\beta)=(0,0),\) resp. \((\alpha,\beta)\neq(0,0),\) then
\[\lim_{\varepsilon\to 0}\varepsilon^{d}\gamma^{-1}\varphi^{*}\Omega_{d+1}=z \Omega_{d},\qquad\qquad\qquad\mathrm{resp.}\ \lim_{\varepsilon\to 0}\varepsilon^{d+1}\varphi^{*}\Omega_{d+1}=( \alpha x+\beta y)\Omega_{d}.\]
It follows that the closure of the \(\mathrm{Aut}(\mathbb{P}^{2}_{\mathbb{C}})\)-orbit of \(\mathcal{F}\) contains the homogeneous pre-foliation \(\mathcal{H}=\ell_{0}\boxtimes\mathcal{H},\) where \(\ell_{0}=L_{\infty}\) if \(\ell=D\) and \(\ell_{0}=\{\alpha x+\beta y=0\}\neq L_{\infty}\) if \(\ell\neq D.\)
The following technical lemma is an analogue for pre-foliations of type \((1,3)\) of Lemma 6.7 of [4] on foliations of degree \(3.\) It plays a key role in the proof of Theorem H.
**Lemma 6.2**.: _Let \(\mathcal{F}=\ell\boxtimes\mathcal{F}\) be a pre-foliation of co-degree \(1\) and degree \(3\) on \(\mathbb{P}^{2}_{\mathbb{C}}.\) Assume that the \(3\)-web \(\mathrm{Leg}\mathcal{F}\) is flat and that the foliation \(\mathcal{F}\) has a non-degenerate singularity \(m\) satisfying \(\mathrm{BB}(\mathcal{F},m)\neq 4\). Then, through the point \(m\) pass exactly two \(\mathcal{F}\)-invariant lines._
Proof.: The hypotheses \(\mu(\mathcal{F},m)=1\) and \(\mathrm{BB}(\mathcal{F},m)\neq 4\) ensure the existence of an affine chart \((x,y)\) of \(\mathbb{P}^{2}_{\mathbb{C}}\) in which \(m=(0,0)\) and \(\mathcal{F}\) is defined by a \(1\)-form \(\omega_{0}\) of type \(\omega_{0}=\omega_{0,1}+\omega_{0,2}+\omega_{0,3},\) where
\[\omega_{0,1}=\lambda\mathrm{yd}x+\mu\mathrm{x}\mathrm{d}y,\qquad\omega_{0,2}= \left(\sum_{i=0}^{2}a_{i}x^{2-i}y^{i}\right)\mathrm{d}x+\left(\sum_{i=0}^{2}b_ {i}x^{2-i}y^{i}\right)\mathrm{d}y,\qquad\omega_{0,3}=\left(\sum_{i=0}^{2}c_{i} x^{2-i}y^{i}\right)(x\mathrm{d}y-y\mathrm{d}x),\]
with \(\lambda\mu(\lambda+\mu)\neq 0.\)
The only lines passing through \(m\) and which can be invariant by \(\mathcal{F}\) are \((x=0)\) and \((y=0).\) Indeed, denote by \(\mathrm{R}=x\frac{\partial}{\partial x}+y\frac{\partial}{\partial y}\) the radial vector field centered at \(m;\) if \(L=(ux+vy=0)\) is \(\mathcal{F}\)-invariant, then \(ux+vy\) must divide the tangent cone \(\mathrm{C}_{\omega_{0,1}}:=\omega_{0,1}(\mathrm{R})=(\lambda+\mu)xy,\) so that \(u=0\) or \(v=0.\)
We will show that indeed \((x=0)\) and \((y=0)\) are invariant by \(\mathcal{F},\) which will establish the lemma. We have to prove that \(a_{0}=b_{2}=0,\) since the invariance by \(\mathcal{F}\) of \((x=0),\) resp. \((y=0),\) is equivalent to the vanishing of \(b_{2},\) resp. \(a_{0}.\)
If \(\ell=\{\alpha x+\beta y+\gamma=0\}\) then, in the affine chart \((p,q)\) of \(\hat{\mathbb{P}}^{2}_{\mathbb{C}}\) corresponding to the line \(\{y=px-q\}\subset\mathbb{P}^{2}_{\mathbb{C}},\) the \(3\)-web \(\mathrm{Leg}\mathcal{F}\) is described by the symmetric \(3\)-form
\[\tilde{\omega}=\big{(}(\gamma-\beta q)\mathrm{d}p+(\alpha+\beta p)\mathrm{d}q \big{)}\tilde{\omega}_{0},\]
where
\[\tilde{\omega}_{0}=\mu\,p\mathrm{d}p\mathrm{d}q+(a_{0}+b_{0}p+c_{0}q)\mathrm{d} q^{2}+\Big{(}\lambda\mathrm{d}p+(a_{1}+b_{1}p+c_{1}q)\mathrm{d}q\Big{)}(p \mathrm{d}q-q\mathrm{d}p)+\big{(}a_{2}+b_{2}p+c_{2}q\big{)}(p\mathrm{d}q-q \mathrm{d}p)^{2}.\]
Assume by contradiction that \(a_{0}\neq 0.\) Consider the family of automorphisms \(\varphi=\varphi_{\mathrm{e}}=(a_{0}\varepsilon p,a_{0}\varepsilon^{2}q).\) We see that if \(\gamma\neq 0,\) resp. \(\gamma=0\) and \(\alpha\neq 0,\) resp. \(\gamma=\alpha=0,\) then
\[\lim_{\varepsilon\to 0}\varepsilon^{-5}\gamma^{-1}a_{0}^{-4}\varphi^{*} \tilde{\omega}=\theta_{1}\eta,\quad\mathrm{resp.}\ \lim_{\varepsilon\to 0}\varepsilon^{-6}\alpha^{-1}a_{0}^{-4}\varphi^{*} \tilde{\omega}=\theta_{2}\eta,\quad\mathrm{resp.}\ \lim_{\varepsilon\to 0}\varepsilon^{-7}\beta^{-1}a_{0}^{-5} \varphi^{*}\tilde{\omega}=\theta_{3}\eta,\]
where
\[\theta_{1}=\mathrm{d}p,\qquad\theta_{2}=\mathrm{d}q,\qquad\quad\theta_{3}=p \mathrm{d}q-q\mathrm{d}p,\qquad\quad\eta=-\lambda q\mathrm{d}p^{2}+(\lambda+ \mu)p\mathrm{d}p\mathrm{d}q+\mathrm{d}q^{2}.\]
For \(i=1,2,3,\) put \(\mathcal{W}^{(i)}_{3}=\mathcal{F}_{i}\boxtimes\mathcal{W}^{2}_{2},\) where \(\mathcal{W}^{2}_{2}\) (resp. \(\mathcal{F}_{i}\)) denotes the \(2\)-web (resp. the foliation) defined by \(\eta\) (resp. by \(\theta_{i}\)). It follows that if \(\gamma\neq 0,\) resp. \(\gamma=0\) and \(\alpha\neq 0,\) resp. \(\gamma=\alpha=0,\) then the closure of the \(\mathrm{Aut}(\hat{\mathbb{P}}^{2}_{\mathbb{C}})\)-orbit of \(\mathrm{Leg}\mathcal{F}\) contains the \(3\)-web \(\mathcal{W}^{(1)}_{3},\) resp. \(\mathcal{W}^{(2)}_{3},\) resp. \(\mathcal{W}^{(3)}_{3}.\) Now, since \(\mathrm{Leg}\mathcal{F}\) is
flat by hypothesis, every \(3\)-web belonging to \(\overline{\mathcal{O}(\mathrm{Leg}\mathcal{F})}\) is also flat. Therefore, to obtain a contradiction, it suffices to show that for every \(i=1,2,3,\ \mathcal{W}_{3}^{(i)}\) is not flat. Since \(\Delta(\eta)=f(p,q):=4\lambda q+(\lambda+\mu)^{2}p^{2}\), it suffices again to show that for every \(i=1,2,3\), the curvature of \(\mathcal{W}_{3}^{(i)}\) is not holomorphic along the component \(\mathcal{C}=\{f(p,q)=0\}\subset\Delta(\mathcal{W}_{2}),\) which is a parabola, because \(\lambda(\lambda+\mu)\neq 0\).
First of all, let us note that \(\mathcal{C}\) is not invariant by \(\mathcal{W}_{2}\), since putting \(\eta_{0}=(\lambda+\mu)p\mathrm{d}p+2\mathrm{d}q\), we have
\[\eta\big{|}_{\mathcal{C}}=\left(\frac{\eta_{0}}{2}\right)^{2}\qquad\qquad \text{and}\qquad\qquad\eta_{0}\wedge\mathrm{d}f=-4\mu(\lambda+\mu)p\mathrm{d} p\wedge\mathrm{d}q\not\equiv 0.\]
Let us consider the case where \(i\in\{1,2\}.\) Since
\[\eta_{0}\wedge\theta_{1}\Big{|}_{\mathcal{C}}=-2\mathrm{d}p\wedge\mathrm{d}q \not\equiv 0\qquad\qquad\text{and}\qquad\qquad\eta_{0}\wedge\theta_{2} \Big{|}_{\mathcal{C}}=(\lambda+\mu)p\mathrm{d}p\wedge\mathrm{d}q\not\equiv 0,\]
we have \(\mathcal{C}\not\subset\mathrm{Tang}(\mathcal{W}_{2},\mathcal{F}_{i}).\) Therefore, according to [13, Theorem 1] (_cf._[2, Theorem 1.1]), the curvature \(K(\mathcal{W}_{3}^{(i)})\) is holomorphic on \(\mathcal{C}\) if and only if \(\mathcal{C}\) is invariant by \(\mathcal{F}_{i},\) which is impossible, because each \(\mathcal{F}_{i}\) is a pencil of lines and hence cannot admit a parabola as an invariant curve.
Let us now examine the case where \(i=3.\) In this case \(\mathcal{C}\subset\mathrm{Tang}(\mathcal{W}_{2},\mathcal{F}_{3})\) if and only if \(\lambda=\mu,\) because
\[\eta_{0}\wedge\theta_{3}\Big{|}_{\mathcal{C}}=\frac{1}{2\lambda}(\lambda-\mu) (\lambda+\mu)p^{2}\mathrm{d}p\wedge\mathrm{d}q\equiv 0\Longleftrightarrow \lambda=\mu.\]
If \(\lambda\neq\mu,\) then, as above, we can apply Theorem 1 of [13] and assert that \(K(\mathcal{W}_{3}^{(3)})\) cannot be holomorphic on \(\mathcal{C}.\)
We therefore assume that \(\lambda=\mu\) and prove that \(K(\mathcal{W}_{3}^{(3)})\not\equiv 0.\) The pull-back of \(\mathcal{W}_{3}^{(3)}\) by a suitable rational map \(\psi\) writes as \(\psi^{*}\mathcal{W}_{3}^{(3)}=\mathcal{F}_{3}^{(1)}\boxtimes\mathcal{F}_{3}^{ (2)}\boxtimes\mathcal{F}_{3}^{(3)},\) where
\[\mathcal{F}_{3}^{(1)}:(p^{2}+q^{2})\mathrm{d}p-2pq\mathrm{d}q=0,\qquad\mathcal{ F}_{3}^{(2)}:(p+q)\mathrm{d}p-2q\mathrm{d}q=0,\qquad\mathcal{F}_{3}^{(3)}:(p-q) \mathrm{d}p-2q\mathrm{d}q=0.\]
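One rational map realizing this pull-back (an assumption on our part rather than a quotation of the original choice; it presupposes the normalization \(\lambda=\mu=1\), obtained by the linear change of coordinates \((p,q)\mapsto(\sqrt{\lambda}\,p,q)\), which does not affect flatness) is \(\psi(p,q)=(p,\,q^{2}-p^{2})\). A short sympy check, again only a sketch:

```python
import sympy as sp

p, q, dp, dq = sp.symbols('p q dp dq')

# candidate map (our assumption): psi(p, q) = (p, q**2 - p**2), with lambda = mu = 1
P_, Q_ = p, q**2 - p**2
dP = sp.diff(P_, p) * dp + sp.diff(P_, q) * dq
dQ = sp.diff(Q_, p) * dp + sp.diff(Q_, q) * dq

pull_theta3 = P_ * dQ - Q_ * dP                        # psi^*(p dq - q dp)
pull_eta    = -Q_ * dP**2 + 2 * P_ * dP * dQ + dQ**2   # psi^*(eta), with lambda = mu = 1

w1 = (p**2 + q**2) * dp - 2 * p * q * dq               # defines F_3^(1)
w2 = (p + q) * dp - 2 * q * dq                         # defines F_3^(2)
w3 = (p - q) * dp - 2 * q * dq                         # defines F_3^(3)

print(sp.expand(pull_theta3 + w1))     # 0: psi^* theta_3 and F_3^(1) have proportional 1-forms
print(sp.expand(pull_eta - w2 * w3))   # 0: psi^* eta splits as the product defining F_3^(2), F_3^(3)
```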
Using formula (1.1), a direct computation leads to
\[\eta(\psi^{*}\mathcal{W}_{3}^{(3)})=-\frac{p\mathrm{d}p}{q^{2}}+\frac{4 \mathrm{d}q}{q}+\frac{\mathrm{d}(p^{2}-q^{2})}{p^{2}-q^{2}},\]
so that
\[K(\psi^{*}\mathcal{W}_{3}^{(3)})=\mathrm{d}\eta(\psi^{*}\mathcal{W}_{3}^{(3)}) =-\frac{2p}{q^{3}}\mathrm{d}p\wedge\mathrm{d}q\not\equiv 0,\]
hence \(\psi^{*}K(\mathcal{W}_{3}^{(3)})=K(\psi^{*}\mathcal{W}_{3}^{(3)})\not\equiv 0\) and therefore \(K(\mathcal{W}_{3}^{(3)})\not\equiv 0.\)
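The exterior derivative computed in the last step is easy to confirm symbolically (a sketch assuming sympy):

```python
import sympy as sp

p, q = sp.symbols('p q')
# dp- and dq-components of eta(psi^* W_3^(3)) as displayed above
f = -p / q**2 + 2 * p / (p**2 - q**2)
g = 4 / q - 2 * q / (p**2 - q**2)
print(sp.simplify(sp.diff(g, p) - sp.diff(f, q)))   # -2*p/q**3, the coefficient of dp^dq in d(eta)
```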
We have thus shown that \(a_{0}=0,\) which means that the line \((y=0)\) is invariant by \(\mathcal{F}.\) Exchanging the roles of the coordinates \(x\) and \(y,\) the same argument shows that \(b_{2}=0,\)_i.e._ that the line \((x=0)\) is also invariant by \(\mathcal{F}.\)
Before starting the proof of Theorem H, let us recall (_cf._[8]) that if \(\mathcal{F}\) is a foliation of degree \(d\) on \(\mathbb{P}_{\mathbb{C}}^{2}\) then
\[\sum_{s\in\mathrm{Sing}\mathcal{F}}\mu(\mathcal{F},s)=d^{2}+d+1\qquad\qquad \qquad\text{and}\qquad\qquad\sum_{s\in\mathrm{Sing}\mathcal{F}}\mathrm{BB}( \mathcal{F},s)=(d+2)^{2}. \tag{6.1}\]
Proof of Theorem H. -- Write \(\operatorname{Sing}\mathcal{F}=\Sigma^{1}\cup\Sigma^{2}\), where
\[\Sigma^{1}=\{s\in\operatorname{Sing}\mathcal{F}\ :\operatorname{BB}( \mathcal{F},s)=4\}\qquad\qquad\text{ and }\qquad\qquad\Sigma^{2}=\operatorname{Sing}\mathcal{F}\setminus\Sigma^{1}.\]
For \(i=1,2\), put \(\kappa_{i}=\#\,\Sigma^{i}\). By hypothesis, we have \(\deg\mathcal{F}=2\) and, for any \(s\in\operatorname{Sing}\mathcal{F}\), \(\mu(\mathcal{F},s)=1\). Formulas (6.1) then give
\[\#\operatorname{Sing}\mathcal{F}=\kappa_{1}+\kappa_{2}=7\qquad\qquad\qquad \text{ and }\qquad\qquad 4\kappa_{1}+\sum_{s\in\Sigma^{2}}\operatorname{BB}( \mathcal{F},s)=16. \tag{6.2}\]
It follows that \(\Sigma^{2}\) is non-empty. Let \(m\) be a point of \(\Sigma^{2}\); by Lemma 6.2 through the point \(m\) pass exactly two \(\mathcal{F}\)-invariant lines \(D_{m}^{(1)}\) and \(D_{m}^{(2)}\). Then, for \(i=1,2\), Proposition 6.1 ensures the existence of a homogeneous pre-foliation \(\mathcal{H}_{m}^{(i)}=\ell_{m}^{(i)}\boxtimes\mathcal{H}_{m}^{(i)}\) of type \((1,3)\) on \(\mathbb{P}_{\mathbb{C}}^{2}\) belonging to \(\overline{\mathcal{O}(\mathcal{F})}\) and such that the line \(D_{m}^{(i)}\) is \(\mathcal{H}_{m}^{(i)}\)-invariant. Since \(\operatorname{Leg}\mathcal{F}\) is flat by hypothesis, so are \(\operatorname{Leg}\mathcal{H}_{m}^{(1)}\) and \(\operatorname{Leg}\mathcal{H}_{m}^{(2)}\). Therefore, \(\mathcal{H}_{m}^{(i)}\) (\(i=1,2\)) is linearly conjugate to one of the eight models of Corollary 4.5. For \(i=1,2\), Proposition 6.1 also ensures that
1. if \(\ell\neq D_{m}^{(i)}\), then \(\ell_{m}^{(i)}\neq L_{\infty}\);
2. \(\operatorname{Sing}\mathcal{F}\cap D_{m}^{(i)}=\operatorname{Sing}\mathcal{H} _{m}^{(i)}\cap D_{m}^{(i)}\);
3. \(\forall\,s\in\operatorname{Sing}\mathcal{H}_{m}^{(i)}\cap D_{m}^{(i)},\quad \mu(\mathcal{H}_{m}^{(i)},s)=1\) and \(\operatorname{CS}(\mathcal{H}_{m}^{(i)},D_{m}^{(i)},s)=\operatorname{CS}( \mathcal{F},D_{m}^{(i)},s)\).
Since \(\operatorname{CS}(\mathcal{F},D_{m}^{(1)},m)\operatorname{CS}(\mathcal{F},D_{ m}^{(2)},m)=1,\) we have
\[\operatorname{CS}(\mathcal{H}_{m}^{(1)},D_{m}^{(1)},m)\operatorname{CS}( \mathcal{H}_{m}^{(2)},D_{m}^{(2)},m)=1. \tag{6.3}\]
Let us first assume that \(\ell\neq D_{m}^{(i)}\) for \(i=1,2\); this is obviously the case if \(\ell\) is not invariant by \(\mathcal{F}\). Then, by (a), we have \(\ell_{m}^{(i)}\neq L_{\infty}\) for \(i=1,2\). Therefore, each of the \(\mathcal{H}_{m}^{(i)}\) is conjugate to one of the five pre-foliations \(\mathcal{H}_{j}^{3},j=2,\dots,6\), so that each of the \(\mathcal{H}_{m}^{(i)}\) is conjugate to one of the three foliations \(\mathcal{H}_{1}^{2}\), \(\mathcal{H}_{2}^{2}(0,0),\)\(\mathcal{H}_{3}^{2}(-2)\) (Corollary 4.5). Consulting Table 1 and using equality (6.3) as well as relations (b) and (c), we see that
\[\operatorname{CS}(\mathcal{H}_{m}^{(1)},D_{m}^{(1)},m)=\operatorname{CS}( \mathcal{H}_{m}^{(2)},D_{m}^{(2)},m)=-1,\qquad\#\,(\Sigma^{1}\cap D_{m}^{(1)} )=\#\,(\Sigma^{1}\cap D_{m}^{(2)})=2,\qquad\Sigma^{2}\cap D_{m}^{(1)}=\Sigma^ {2}\cap D_{m}^{(2)}=\{m\}. \tag{6.4}\]
Let us now assume that the line \(\ell\) is equal to one of the lines \(D_{m}^{(i)}\), say \(\ell=D_{m}^{(2)}\). Let us show that equalities (6.4) still hold. Since \(\ell\neq D_{m}^{(1)}\), \(\mathcal{H}_{m}^{(1)}\) is conjugate to one of the foliations \(\mathcal{H}_{1}^{2}\), \(\mathcal{H}_{2}^{2}(0,0),\)\(\mathcal{H}_{3}^{2}(-2)\). Moreover \(\Sigma^{2}\cap D_{m}^{(1)}=\{m\}\); indeed, if \(\Sigma^{2}\cap D_{m}^{(1)}\) contained another point \(m^{\prime}\neq m\), we would have \(\ell\neq D_{m^{\prime}}^{(i)}\) for \(i=1,2,\) so that \(\{m^{\prime}\}=\Sigma^{2}\cap D_{m^{\prime}}^{(i)}=\Sigma^{2}\cap D_{m}^{(1)} \supset\{m,m^{\prime}\}\), which is impossible. From Table 1, we deduce that
\[\operatorname{CS}(\mathcal{H}_{m}^{(1)},D_{m}^{(1)},m)=\operatorname{CS}( \mathcal{H}_{m}^{(2)},\ell,m)=-1\qquad\qquad\text{ and }\qquad\qquad\#\,(\Sigma^{1}\cap D_{m}^{(1)})=2,\]
hence
\[\operatorname{CS}(\mathcal{F},D_{m}^{(1)},m)=\operatorname{CS}( \mathcal{F},\ell,m)=-1.\]
Since these equalities are valid for any choice of \(m\in\Sigma^{2}\cap\ell\) and since a line of \(\mathbb{P}_{\mathbb{C}}^{2}\) cannot contain more than \(\deg\mathcal{F}+1=3\) singular points of \(\mathcal{F}\), the Camacho-Sad formula (_see_ [10]) \(\sum_{s\in\operatorname{Sing}\mathcal{F}\cap\ell}\operatorname{CS}(\mathcal{F},\ell,s)=1\) implies that
\[\#\,(\Sigma^{1}\cap\ell)=2\qquad\qquad\text{ and }\qquad\qquad\Sigma^{2}\cap\ell=\{m\}.\]
Equalities (6.4) are thus established in all cases. It follows in particular that \(\operatorname{BB}(\mathcal{F},m)=0\). The point \(m\in\Sigma^{2}\) being arbitrary, \(\Sigma^{2}\) consists of the points \(s\in\operatorname{Sing}\mathcal{F}\) such that \(\operatorname{BB}(\mathcal{F},s)=0\). System (6.2) then becomes \(\kappa_{1}+\kappa_{2}=7\) and \(4\kappa_{1}=16\), whose unique solution is \((\kappa_{1},\kappa_{2})=(4,3)\), that is \(\operatorname{Sing}\mathcal{F}=\Sigma^{1}\cup\Sigma^{2},\) \(\#\,\Sigma^{1}=4\) and \(\#\,\Sigma^{2}=3\). Since \(\Sigma^{2}\cap(D_{m}^{(1)}\cup D_{m}^{(2)})=\{m\}\), \(\mathcal{F}\) has \(3\cdot 2=6\) invariant lines, which means that \(\mathcal{F}\) is reduced convex.
It then follows from the classification of convex foliations of degree two (_cf._[11, Proposition 7.4] or [5, Theorem A]) that \(\mathcal{F}\) is linearly conjugate to the Fermat foliation \(\mathcal{F}_{0}^{2}\). We conclude by noting that if the line \(\ell\) is not invariant by \(\mathcal{F}\), the flatness of \(\operatorname{Leg}\!\mathcal{F}\) and Proposition F imply that \(\ell\) must join two non-radial singularities of \(\mathcal{F}\).
|
2310.00388 | TODDLERS: A new UV-mm emission library for star-forming regions. 1.
Integration with SKIRT and public release | We present and publicly release a new star-forming regions emission library
TODDLERS (Time evolution of Observables including Dust Diagnostics and Line
Emission from Regions containing young Stars) for the publicly available
radiative transfer code SKIRT. The library generation involves the spherical
evolution of a homogeneous gas cloud around a young stellar cluster that
accounts for stellar feedback processes including stellar winds, supernovae,
and radiation pressure, as well as the gravitational forces on the gas. The
semi-analytical evolution model is coupled with the photoionization code Cloudy
to calculate time-dependent UV-mm spectral energy distributions (SEDs) from
star-forming regions of varying metallicity, star-formation efficiency,
birth-cloud density, and mass. The calculated SEDs include the stellar,
nebular, and dust continuum emission along with a wide range of emission lines
originating from H ii, photo-dissociation, and molecular gas regimes tabulated
at high resolution. The SEDs incorporated in SKIRT are generated by calculating
a stellar-mass normalized luminosity, which assumes that each emission source
is composed of a power-law population of star-forming clouds. When compared to
the previous treatment of star-forming regions in SKIRT, TODDLERS shows a
better agreement with low-redshift observational data in the IR wavelength
range while offering a more comprehensive line-emission support. This paves the
way for a variety of applications using simulated galaxies at low and high
redshift. | Anand Utsav Kapoor, Maarten Baes, Arjen van der Wel, Andrea Gebek, Peter Camps, Angelos Nersesian, Sharon E. Meidt, Aaron Smith, Sebastien Vicens, Francesco D'Eugenio, Marco Martorano, Daniela Barrientos, Nina Sanches Sartorio | 2023-09-30T14:12:16Z | http://arxiv.org/abs/2310.00388v2 | TODDLERS: A new UV-mm emission library for star-forming regions. I. Integration with SKIRT and public release
###### Abstract
We present and publicly release a new star-forming regions emission library TODDLERS (Time evolution of Observables including Dust Diagnostics and Line Emission from Regions containing young Stars) for the publicly available radiative transfer code SKIRT. The library generation involves the spherical evolution of a homogeneous gas cloud around a young stellar cluster that accounts for stellar feedback processes including stellar winds, supernovae, and radiation pressure, as well as the gravitational forces on the gas. The semi-analytical evolution model is coupled with the photoionization code Cloudy to calculate time-dependent UV-mm spectral energy distributions (SEDs) from star-forming regions of varying metallicity, star-formation efficiency, birth-cloud density, and mass. The calculated SEDs include the stellar, nebular, and dust continuum emission along with a wide range of emission lines originating from H ii, photo-dissociation, and molecular gas regimes tabulated at high resolution. The SEDs incorporated in SKIRT are generated by calculating a stellar-mass normalized luminosity, which assumes that each emission source is composed of a power-law population of star-forming clouds. When compared to the previous treatment of star-forming regions in SKIRT, TODDLERS shows a better agreement with low-redshift observational data in the IR wavelength range while offering a more comprehensive line-emission support. This paves the way for a variety of applications using simulated galaxies at low and high redshift.
keywords: radiative transfer - methods: numerical - dust, extinction - H ii regions - ISM: lines and bands - galaxies: star formation.
## 1 Introduction
Galaxy formation and evolution is a complex problem involving multi-scale and multi-physics phenomena. Such complexity necessitates the use of numerical experiments to track the interplay of many involved processes (Somerville & Dave, 2015; Naab & Ostriker, 2017; Vogelsberger et al., 2020). One of the key products of such numerical experiments is the distribution of baryonic mass and its associated properties, e.g. the distribution of dark matter, gas, metals, dust, stars, and black holes throughout the Universe over cosmic history. A fair comparison between the observable and numerical universes necessarily requires a conversion of mass to light, or vice-versa. The generation of synthetic/mock observations utilizes the former methodology. In this forward modeling approach, not only are the effects of the complex interplay of radiation, dust, and gas in realistic geometries realized, but instrumental effects are also considered. This makes it possible to meaningfully compare observations with simulated data (Guidi et al., 2015; Torrey et al., 2015; Diemer et al., 2019; Popping et al., 2021; Kapoor et al., 2021; Camps et al., 2022; Trčka et al., 2022; Gebek et al., 2023). In this framework, multi-wavelength comparison of simulations with observations can provide an increasingly comprehensive understanding of the properties and behavior of astronomical objects, as different wavelengths of radiation can reveal complementary aspects of an object's characteristics.
The spectral energy distribution (SED) is the rate of energy emitted by luminous sources at different wavelengths of the electromagnetic spectrum. The SED encodes information about the physical processes occurring within a galaxy, such as its star formation rate (SFR), star formation history (SFH), the presence of an active galactic nucleus, or the amount and characteristics of dust and gas (see, for example Conroy, 2013; Leja et al., 2017; Smith & Hayward, 2018; Leja et al., 2019; Boquien et al., 2019). The SED of a galaxy is significantly influenced by the presence of massive stars in the galaxy, the most massive of which live for only a few Myr; their radiation is efficiently reprocessed by the dust and gas present in their
natal environments (star-forming regions1), making stellar clusters containing massive stars significant contributors to the UV and IR continuum (Maeder and Conti, 1994; Churchwell, 2002; Churchwell et al., 2009; Hanaoka et al., 2019). These wavelengths offer complementary approaches to accurately measure a galaxy's SFR, one of the most fundamental properties of a galaxy. Apart from this, massive young stars also serve as engines for the recombination and collisionally excited lines, which serve as diagnostic tools for determining SFR, densities, temperatures, chemical compositions, and ionization states (Bellfior et al., 2016; Kreckel et al., 2019; Kewley et al., 2019; Grasha et al., 2022). Synthetic observations involving line and continuum emission, in the UV, optical, and IR have thus become increasingly important in the current era of integral field unit (IFU) instruments such as JWST-NIRSpec, KMOS, MUSE at VLT, SINFONI. Here mock observations play a crucial role in interpreting the observational data and understanding various biases that affect the overall systematics and statistical scatter (Hirschmann et al., 2022; Jang et al., 2022; Barrientos Acevedo et al., 2023). At the same time, mock observables shed light on the fidelity and limitations of the numerical simulations when it comes to the mass buildup and kinematics of galaxies. Modeling the dust and gas emission from star-forming regions requires, among other things, the intrinsic spectra of the radiation sources, the dust/gas geometry, and a description of the physical properties of the interstellar medium (ISM). For large-box simulations with a sizeable galaxy sample, this sub-pc information is generally not available from the simulation snapshots due to the lack of resolution and the fact that the small-scale physics and the feedback processes are usually dealt with in a very approximate manner. Thus, sub-grid models describing the emission from young clusters are usually necessary in order to generate synthetic data products from simulated galaxies (see, for example Yang et al., 2023). At the same time, galaxy formation simulations continue to become increasingly realistic due to improved physics and resolution (see, for example, Kannan et al., 2020; Tress et al., 2020; Feldmann et al., 2022; Katz et al., 2022; Smith et al., 2023; Hopkins et al., 2023) and there is an ongoing effort to produce synthetic observables in a self-consistent manner without employing sub-grid models (Smith et al., 2022; Tacchella et al., 2022). However, these efforts remain limited to isolated galaxy simulations in the current state of affairs.
Footnote 1: We use the term star-forming region to refer to the gas in various phases i.e., ionized, neutral, and molecular gas surrounding a stellar cluster.
The SKIRT radiative transfer code (Baes et al., 2011; Camps and Baes, 2015, 2020) is a Monte Carlo radiative transfer (RT) code that has been used extensively to generate multi-wavelength synthetic data for simulated galaxies. Apart from dust RT, the current version of the code is designed to perform Lyman-\(\alpha\) RT (Camps et al., 2021), X-ray RT (Vander Meulen et al., 2023), and non-LTE line RT (Matsumoto et al., 2023) without any constraints on geometrical complexity while accounting for polarization and kinematics. When generating synthetic UV-mm observations of galaxies with SKIRT, emission sources (particles from the parent simulation snapshot) are typically separated by age, with young stars (age below 10 Myr) assumed to be enshrouded by dust and gas. Such particles are assigned an SED from the library discussed in Groves et al. (2008) generated using the MAPPINGS-III code. These templates model emission from both the immediate H ii region and the surrounding photodissociation region (PDR), including the dust contained within each of these regions. We refer to the version of this library implemented in SKIRT (Jonsson et al., 2010) as HiiM32 throughout this work. HiiM3 has been used to generate synthetic broadband fluxes for simulated galaxies from the EAGLE simulation suite (Camps et al., 2016; Baes et al., 2019; Trčka et al., 2020). More recently it has been applied in a similar fashion to TNG-50 galaxies (Trčka et al., 2022, Gebek et al. in preparation). It has also been used to generate synthetic high-resolution multi-wavelength images for zoom-in simulations, Auriga (Kapoor et al., 2021) and Artemis (Camps et al., 2022). However, it has been recognized by the aforementioned authors that the use of HiiM3 could be partly responsible for the high FUV and MIR fluxes, and the tension in MIR-FIR colors of the simulated galaxies when compared with their observational counterparts. This motivates the development of updated options for the treatment of emission from star-forming regions in SKIRT.
Footnote 2: Document online at [https://skirt.ugent.be/skirt9/class_mappings_s_e_d_family.html](https://skirt.ugent.be/skirt9/class_mappings_s_e_d_family.html)
The aim of this work is to construct a physically motivated, time-resolved model for the UV-mm emission from star-forming regions. In order to post-process simulated galaxies, we need a model that encompasses a large parameter space while leveraging the simulation's information. The model should incorporate relevant physics and remain computationally feasible for parameter sweeping3. To this end, two relevant state-of-the-art models currently available are HiiM3 and the recent model presented in Pellegrini et al. (2020, WARPFIELD-EMP). The former model is not time-resolved and is designed for modeling the integrated spectra of starburst galaxies by luminosity-weighted averaging of young clusters of different ages (\(<\) 10 Myr). The latter model couples the evolution of gas clouds under stellar feedback (Rahner et al., 2017, WARPFIELD) with a photoionization code to generate time-dependent observables. Given its suitability for our work, we adopt an approach similar to the second model mentioned above, i.e., a semi-analytical calculation for the evolution of a spherical, homogeneous gas cloud exposed to stellar feedback coupled with the photoionization code Cloudy to generate observables. Our approach expands on WARPFIELD/WARPFIELD-EMP by covering a broader range of metallicities and incorporating an additional feedback channel into our semi-analytical calculations, namely radiation pressure from the resonant scattering of Lyman-\(\alpha\) (Ly\(\alpha\)) photons. The resulting emission spectra span the UV-mm electromagnetic spectrum, including features like the sub-mm CO lines.
Footnote 3: A toddler is a child aged 1–3 years old. The word is derived from “to toddle”, which means to walk unsteadily, like a child of this age. The toddler years are a time of great cognitive, emotional, and social development.
Our custom-made model is designed to seamlessly integrate into SKIRT. We consider the fact that young stellar particles in simulations do not represent a single star-forming region but rather a population with a range of ages. This custom model also facilitates the incorporation of additional modifications, such as substituting the stellar library, initial mass function (IMF), dust models, and so on.
This is the first of a two-part series of papers with the main goal of presenting the new library, TODDLERS4. The paper organizes its contents as follows: In Sec. 2, we describe the semi-analytic evolution model and its output. Sec. 3 discusses the methodology used for generating the observables using Cloudy. Sec. 4 showcases key diagnostics resulting from the coupling of the evolution model and Cloudy post-processing. In Sec. 5, we integrate the TODDLERS observables within SKIRT. Sec. 6 focuses on comparing TODDLERS and HiiM3, particularly the IR colors resulting from the application of
these two libraries without any other changes. Finally, in Sec. 7, we summarize and conclude.
## 2 Evolution of gas cloud under feedback from young stars
We model the evolution of a homogeneous gas cloud around a young stellar cluster under stellar feedback in spherical symmetry inspired by the work presented by Rahner et al. (2019). The semi-analytical model calculates the evolution of a finite gas cloud under stellar feedback, accounting for stellar winds, supernovae (SNe), and radiation pressure due to ionizing radiation and dust. The bubble expansion is initially mediated by the shocked stellar wind and is pressure-driven, but a switch to momentum-driven evolution takes place based on prescriptions for instabilities that could lead to a rapid loss of bubble pressure. The gravitational force on the gas, both due to the self-gravity of the gas and due to the central stellar cluster is taken into consideration. This allows for multiple star-formation events to take place in the event that the gravitational force overpowers the feedback of the cluster. In contrast, if the stellar feedback is strong enough, it could eventually lead to the dissolution of the cloud. This is schematically shown in Fig. 1. We refer to the semi-analytical treatment as the shell-evolution model throughout this work.
The equations of motion for the shell during various evolutionary phases are discussed briefly next. The stellar feedback data (quantities such as mass-loss rates, terminal velocities of the stellar ejecta, ionizing/non-ionizing luminosities, etc.) used in this work come from STARBURST99 models whose details are given in Sec. 3.1. The evolution of two such quantities, the force due to stellar ejecta (\(F_{\rm ram}\)) and the rate of production of Hydrogen ionizing photons by the cluster (\(Q_{\rm H}\)) are shown as a function of cluster metallicity in Fig. 2. Initially (\(t<3\)Myr), \(F_{\rm ram}\) comes almost entirely from the stellar winds of OB stars, which increases with metallicity. W-R stars can contribute significantly to \(F_{\rm ram}\) starting around \(3-4\) Myr, lasting for a period of \(0.25-2\) Myr depending on the metallicity. Once the W-R phase is over, \(F_{\rm ram}\) comes mostly from the SNe. The production rate of Hydrogen ionizing photons drops as the massive stars die. Increasing the metallicity leads to increasing line blanketing, lowering the production rate of ionizing photons.
Figure 1: Schematic of the key ingredients of the shell-evolution model setup consisting of the rapid formation of a shell (dark blue) under stellar feedback. The stellar feedback and radiation pressure push the shell, while the gravitational force due to the central cluster and the shell’s self-gravity oppose the expansion. If the shell is optically thin to ionizing photons, then any remaining unswept gas cloud also opposes the expansion. These forces lead to two possible outcomes for this system: either the feedback is strong enough to push the gas, so that the cloud is assumed to be dissolved, or the gravitational force dominates and leads to a secondary collapse episode that forms another stellar cluster. In the latter case, the model equations are solved again assuming the presence of the younger cluster formed with the same \(\epsilon_{\rm SF}\).
Figure 2: Evolution of two key stellar feedback properties used in this work shown as a function of metallicity. This assumes a coeval population of stellar mass \(10^{6}~{}M_{\odot}\) and a well sampled IMF (see Sec. 3.1). The solid curves represent the force due to mass loss, \(F_{\rm ram}\) as a function of time and metallicity for the employed stellar library. The dashed curves are the production rate of Hydrogen ionizing photons in units of \(10^{20}~{}{\rm s}^{-1}\).
### 2.1 Dynamics of the shell
It is assumed that feedback from a central star cluster interacts with a finite, massive cloud of number density \(n_{\rm cl}\) surrounding it. The central source is an instantaneously born, young stellar cluster. The amount of stellar mass (\(M_{\star}\)) and the cloud gas mass susceptible to the stellar feedback are dictated by the star formation efficiency parameter (\(\rm\epsilon_{SF}\)).
\[M_{\star}=\epsilon_{\rm SF}M_{\rm cl}\quad{\rm and}\quad M_{\rm cl,r}=(1- \epsilon_{\rm SF})M_{\rm cl}\,, \tag{1}\]
where \(M_{\rm cl}\) is the initial mass of the gas cloud, while \(M_{\rm cl,r}\) is the remaining cloud mass around the cluster. The cloud and the central stellar cluster/s are assumed to have the same metallicity. The gas has a mean mass per nucleus \(\mu_{\rm n}=(14/11)\,m_{\rm H}\) and a mean mass per particle \(\mu_{\rm p}=(14/23)\,m_{\rm H}\), where \(m_{\rm H}\) is the proton mass. The cloud's density is given as \(\rho_{\rm cl}=\mu_{\rm n}n_{\rm cl}\), where \(n_{\rm cl}\) is the Hydrogen number density of the cloud. We note that while we have adopted \(Z=0.02\) values for \(\mu_{\rm n}\) and \(\mu_{\rm p}\) throughout, this choice is expected to have minimal impact on the shell dynamics.
In order to solve for the shell's dynamics, we solve the equations of the conservation of mass, momentum, and energy. For the conservation of mass, it is assumed that as the shell expands, the unswept cloud quickly settles and becomes a part of the shell. This allows us to write the mass conservation as:
\[\frac{dM_{\rm sh}}{dt}=\left\{\begin{array}{ll}\rho_{\rm cl}A_{\rm sh}v_{ \rm sh}&\mbox{ if }M_{\rm sh}<M_{\rm cl,r}\mbox{ and }v_{\rm sh}>0\\ 0&\mbox{ if }M_{\rm sh}=M_{\rm cl,r}\mbox{ or }v_{\rm sh}\leq 0\end{array}\right.\,. \tag{2}\]
In Eqn. (2), \(A_{\rm sh}\) and \(v_{\rm sh}\) refer to the shell's surface area and velocity, respectively. We consider only homogeneous clouds in this work, hence \(\rho_{\rm cl}\) is a constant for a given cloud. Note that during infall (see Sec. 2.4), no mass change occurs.
\[\frac{\rm d}{\rm d\,t}\left(M_{\rm sh}v_{\rm sh}\right)=F_{\rm w,sn}-F_{\rm grav }+F_{\rm rad}^{\rm UV,IR}+F_{\rm rad}^{\rm Ly\alpha}-F_{\rm ext}\,. \tag{3}\]
In Eqn. (3), \(F_{\rm w,sn}\) is the term attributed to the stellar winds and SNe, and its exact form depends on the evolutionary phase (Sec. 2.1.1, 2.1.2) and is discussed along with the specific phase. \(F_{\rm grav}\) is the gravitational force on a thin shell of radius \(r_{\rm sh}\) due to the star cluster and its self-gravity. \(F_{\rm grav}\) can be written as:
\[F_{\rm grav}=\frac{GM_{\rm sh}}{r_{\rm sh}^{2}}\left(M_{\star}+\frac{M_{\rm sh }}{2}\right). \tag{4}\]
The last three terms in Eqn. (3) are dependent on the shell structure. The third and the fourth terms are the forces due to radiation pressure on the shell. We have two components of the radiation pressure acting on the shell. The first radiation pressure term, \(F_{\rm rad}^{\rm UV,IR}\), is due to photoionization and dust, and includes the additional momentum provided by dust scattering. The second radiation pressure term, \(F_{\rm rad}^{\rm Ly\alpha}\), is due to the resonant scattering of the Ly\(\alpha\) photons by neutral hydrogen. This component becomes increasingly important as the metallicity of the system decreases. We describe the methodology to calculate this force in Sec. 2.2. The fifth term is the force due to the ISM (cloud or diffuse) outside of the shell. These terms are described along with the shell structure in Sec. 2.1.3.
#### 2.1.1 Pressure driven phase
The initial bubble evolution is dominated by the shocked stellar ejecta from the massive stars in the cluster. This evolutionary phase continues till the shell fragments and the shell loses pressure support due to the hot gas in the bubble. We refer to this evolutionary phase as the pressure-driven phase. During this phase, the stellar winds and/or the supernovae are shocked and their energy feeds the hot bubble interior. The bubble pressure then pushes the shell. Due to the high temperatures and short sound crossing time within the hot bubble, the bubble interior is assumed to be isobaric. The energy equation can be written as follows:
\[\frac{dE_{\rm b}}{dt}=L_{\rm mech}-L_{\rm cool}-P_{\rm b}A_{\rm sh}v_{\rm sh}. \tag{5}\]
The mechanical luminosity from the wind and the SNe is given as:
\[L_{\rm mech}=L_{\rm w}+L_{\rm sn}=\frac{1}{2}\left(\dot{M}_{\rm w}v_{\rm w}^{2}+\dot{M}_{\rm sn}v_{\rm sn}^{2}\right), \tag{6}\]
where \(\dot{M}_{\rm w}\) and \(\dot{M}_{\rm sn}\) are the mass loss rates due to stellar winds and supernovae (SNe), respectively, and \(v_{\rm w}\) and \(v_{\rm sn}\) are the terminal velocities of the winds and SNe ejecta, respectively. The bubble pressure, \(P_{\rm b}\) is given as:
\[P_{\rm b}=(\gamma-1)\frac{E_{\rm b}}{\frac{4\pi}{3}\left(r_{\rm sh}^{3}-r_{\rm w}^{3}\right)}\,, \tag{7}\]
with \(\gamma=5/3\) being the adiabatic index for an ideal gas. Here \(r_{\rm w}\), \(r_{\rm sh}\) are the free-streaming radius and the overall bubble radius, respectively. The region between these two radii contains the shocked stellar wind (see Fig. 1 in Weaver et al., 1977). \(r_{\rm w}\) can be found by equating \(F_{\rm ram}\) (see Eqn.(10)) and the force due to the bubble pressure. The first term in Eqn. (3) for this phase is given as: \(F_{\rm w,sn}=P_{\rm b}A_{\rm sh}\). Geen & de Koter (2022) find that the radiative cooling rate from the wind bubble is \(\leq 1\)% of the wind luminosity. Thus, for the pressure-driven phase, we set \(L_{\rm cool}=0\). We do note that there could be other channels for cooling the hot bubble, which could slow down its radial expansion. For example, the presence of a turbulent mixing layer at the contact discontinuity at the hot bubble-shell interface could serve as a means of efficient radiative cooling (see, for example, El-Badry et al., 2019; Fielding et al., 2020; Tan et al., 2021; Lancaster et al., 2021, 2021). This cooling could be added as an additional contribution to the cooling in the energy equation following El-Badry et al. (2019). However, we do not address this complexity in the present work in order to limit the number of free variables in the model.
The equation for the conservation of energy is coupled to the system only when the shocked stellar ejecta drives the expansion of the shell. In this case, the energy of the stellar ejecta feeds the bubble pressure which pushes the shell. The coupling of the energy equation is, therefore, limited to the period when the hot stellar ejecta is strongly confined within the bubble. If the shell fragments, which we describe next, the hot gas is assumed to escape at a time scale determined by the sound crossing time. Once the bubble is devoid of hot gases, the expansion is due to the direct impingement of the stellar ejecta onto the shell. Since no energy build-up takes place in the bubble, the energy conservation equation is no longer coupled to the rest of the equations.
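To make the coupled structure of Eqns. (2)–(7) during this phase concrete, a minimal Python sketch of the corresponding right-hand side is given below. It is not the library implementation: the radiation-pressure and external-pressure terms of Eqn. (3) are omitted for brevity, the free-streaming radius \(r_{\rm w}\) is neglected in the bubble volume, and the numerical inputs (mechanical luminosity, cloud density, masses) are illustrative placeholders rather than STARBURST99 values.

```python
import numpy as np

# Physical constants (cgs)
G    = 6.674e-8               # gravitational constant [cm^3 g^-1 s^-2]
M_H  = 1.6726e-24             # proton mass [g]
MU_N = (14.0 / 11.0) * M_H    # mean mass per nucleus (Sec. 2.1)
MSUN = 1.989e33               # solar mass [g]

# Illustrative, fixed inputs (time-dependent STARBURST99 quantities in the library)
L_MECH = 1.0e38               # mechanical luminosity of winds/SNe [erg s^-1]
M_STAR = 5.0e4 * MSUN         # stellar cluster mass [g]
N_CL   = 160.0                # cloud number density [cm^-3]
RHO_CL = MU_N * N_CL          # cloud mass density [g cm^-3]
M_CLR  = 0.95 * 10**5.75 * MSUN  # remaining cloud mass for eps_SF = 5% [g]
GAMMA  = 5.0 / 3.0

def shell_rhs_pressure_driven(t, y):
    """RHS of the pressure-driven system; y = [r_sh, v_sh, M_sh, E_b]."""
    r_sh, v_sh, M_sh, E_b = y
    A_sh = 4.0 * np.pi * r_sh**2

    # Eqn. (7), with the free-streaming radius r_w << r_sh neglected here
    P_b = (GAMMA - 1.0) * E_b / (4.0 * np.pi / 3.0 * r_sh**3)

    # Eqn. (2): sweep up cloud material while the cloud lasts and the shell expands
    dM_dt = RHO_CL * A_sh * v_sh if (M_sh < M_CLR and v_sh > 0.0) else 0.0

    # Eqn. (4): gravity of the central cluster plus the shell's self-gravity
    F_grav = G * M_sh / r_sh**2 * (M_STAR + 0.5 * M_sh)

    # Eqn. (3) with F_w,sn = P_b A_sh and the radiation/external terms dropped
    dv_dt = (P_b * A_sh - F_grav - dM_dt * v_sh) / M_sh

    # Eqn. (5) with L_cool = 0 during the pressure-driven phase
    dE_dt = L_MECH - P_b * A_sh * v_sh

    return [v_sh, dv_dt, dM_dt, dE_dt]

# Example evaluation at an illustrative early-time state
pc = 3.086e18
print(shell_rhs_pressure_driven(0.0, [0.1 * pc, 1.0e6, 1.0e35, 1.0e49]))
```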
#### 2.1.2 Momentum driven phase
We terminate the pressure-driven phase assuming efficient cooling at the contact discontinuity and/or shell fragmentation when the conditions for the Rayleigh-Taylor (RT) instability or the gravitational instability are met. The RT instability occurs when a dense fluid is accelerated with respect to a lighter fluid. Thus, an accelerating dense shell in the cloud/ISM would be RT unstable. RT instability causes disruption of sharp density jumps at contact discontinuities
and promotes turbulent mixing (Chevalier and Klein, 1978; Duffell, 2016). Similarly, the system is expected to be gravitationally unstable when the thermal and kinetic energy of a gas parcel are overcome by its local gravitational binding energy (Ostriker and Cowie, 1981; Elmegreen, 2011). Additionally, it is assumed that shell fragmentation occurs when the entire cloud is swept by the shell during the pressure-driven phase.
In order to identify the time of onset of the aforementioned instabilities (\(t_{\rm frag}\)), and consequently, the end of the pressure-driven phase, we use the prescriptions given in Rahner et al. (2019) and the references therein. For the RT instability, this is simply an acceleration condition for the dense shell, i.e., \(\dot{v}_{\rm sh}>0\), while the gravitational instability is assumed to occur when
\[0.67\frac{3\,G\,M_{\rm sh}}{4\pi\,v_{\rm sh}r_{\rm sh}c_{\rm s,sh}}>1, \tag{8}\]
where, \(c_{\rm s,sh}\) is the minimum sound speed in the shell. Based on Eqn. (8), it is clear that neutral shells (characterized by a lower sound speed) with high mass and low velocities are vulnerable to gravitational instability.
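As a schematic illustration, the two fragmentation triggers could be checked at every time step along the lines of the sketch below; the 0.67 prefactor follows Eqn. (8), the RT condition is taken simply as a positive shell acceleration, all quantities are in cgs units, and this is a sketch rather than the exact bookkeeping used for the library.

```python
import numpy as np

G = 6.674e-8  # gravitational constant [cm^3 g^-1 s^-2]

def is_rt_unstable(a_sh):
    """Rayleigh-Taylor trigger: the dense shell accelerates outwards."""
    return a_sh > 0.0

def is_grav_unstable(M_sh, r_sh, v_sh, c_s_sh):
    """Gravitational trigger, Eqn. (8); c_s_sh is the minimum sound speed in the shell."""
    return 0.67 * 3.0 * G * M_sh / (4.0 * np.pi * v_sh * r_sh * c_s_sh) > 1.0

def pressure_driven_phase_ends(a_sh, M_sh, r_sh, v_sh, c_s_sh, cloud_fully_swept):
    """Fragmentation is also assumed once the entire cloud has been swept (Sec. 2.1.2)."""
    return (is_rt_unstable(a_sh)
            or is_grav_unstable(M_sh, r_sh, v_sh, c_s_sh)
            or cloud_fully_swept)
```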
The termination of the pressure-driven phase is associated with a rapid loss of the bubble's pressure, leading to a rapid decrease of density at the inner face of the shell. In this case, the cooling term in Eqn. (5) is:
\[L_{\rm cool}=\frac{E_{\rm b}(t_{\rm frag})}{\Delta t_{c}}\,, \tag{9}\]
\(E_{\rm b}(t_{\rm frag})\) is the bubble energy at the moment of shell fragmentation. \(\Delta t_{c}\) is the sound crossing time scale assuming a volume averaged bubble temperature of \(10^{6}\) K and the radius of the bubble at the time of the fragmentation, \(\Delta t_{c}=r_{\rm sh}(t_{\rm frag})/c_{\rm s,b}\). Here \(c_{\rm s,b}\) is the volume averaged speed of sound in the bubble. This transition lasts till all of the bubble's energy is lost, and generally lasts less than a Myr. Once the pressure-driven phase is over, it is assumed that there are no intervening media in the interior of the bubble, leading to direct impingement of the cluster's ejecta on the shell. Since there is no energy buildup in the bubble, the evolution is determined by the momentum conservation equation alone. The first term in Eqn. (3) for this phase is given as: \(F_{\rm w,sn}=F_{\rm ram}\), where \(F_{\rm ram}\) is given as:
\[F_{\rm ram}=\dot{M}_{\rm w}v_{\rm w}+\dot{M}_{\rm sn}v_{\rm sn}\,. \tag{10}\]
Momentum-driven evolution is weaker in comparison to pressure-driven evolution. At constant stellar feedback, the momentum-driven bubble radius scales as \(\propto t^{1/2}\), while the pressure-driven scaling is \(\propto t^{3/5}\)(Weaver et al., 1977). Momentum-driven H ii regions are a plausible explanation for weak X-ray emission (Harper-Clark and Murray, 2009; Lopez et al., 2011; Verdolini et al., 2013) and low shell expansion velocities of observed sources (Lancaster et al., 2021). Generally, when the evolution switches to the momentum-driven phase, the shell is massive enough to have a significant amount of gravitational force. Thus, the bubble either dissolves if the feedback is strong enough (Sec. 2.3), or collapses under gravity (Sec. 2.4).
#### 2.1.3 Shell structure
To calculate the shell's structure, we use the model presented in Draine (2011). The model couples the number density in the shell, \(n_{\rm sh}(r)\), the attenuation function for the ionizing radiation, \(\phi(r)\), and the optical depth of the dust, \(\tau_{\rm d}\). These equations are written in two energy regimes: ionizing radiation (photons with energies above 13.6 eV), which is absorbed by hydrogen and dust, and non-ionizing radiation, which is absorbed by dust alone. Thus for the ionized region of the shell, we have:
\[\frac{\rm d}{{\rm d}r}\left(\frac{\mu_{\rm n}}{\mu_{\rm p}}n_{\rm sh}kT_{\rm i}\right)=-\frac{1}{4\pi r^{2}c}\frac{\rm d}{{\rm d}r}\left(L_{\rm n}e^{-\tau_{\rm d}}+L_{\rm i}\phi\right), \tag{11}\]
\[\frac{\rm d\phi}{{\rm d}r}=-\frac{4\pi r^{2}}{Q_{\rm H}}\alpha_{\rm B}n_{\rm sh}^{2}-n_{\rm sh}\sigma_{\rm d}\phi,\quad\rm{and} \tag{12}\]
\[\frac{\rm d\tau_{\rm d}}{{\rm d}r}=n_{\rm sh}\sigma_{\rm d}\,. \tag{13}\]
Eqn. (11) is an equation of hydrostatic equilibrium of the shell in the limit of low magnetic and turbulent pressures. The two components on the right side of Eqn. (11) quantify the absorption of neutral radiation due to the dust in the shell and of the ionizing radiation due to the gas; \(L_{\rm n}(t)\) and \(L_{\rm i}(t)\) are the neutral and ionizing luminosities, respectively. Eqn. (12) describes the two ways to attenuate ionizing radiation in the shell, i.e., by ionizing the gas (which can be written in terms of the recombination rate), and by dust. Here, \(Q_{\rm H}(t)\) is the rate of hydrogen ionizing photons emitted by the cluster, \(\alpha_{\rm B}\) is the case B recombination coefficient \(=2.59\times 10^{-13}\) cm\({}^{3}\)s\({}^{-1}\) at T = \(10^{4}\)K (Osterbrock and Ferland, 2006), \(\sigma_{\rm d}\) is the dust cross-section, and \(c\) is the speed of light. It is assumed that the quantity of dust scales linearly with metallicity, hence \(\sigma_{\rm d}=\sigma_{0}Z/Z_{\odot}\), where \(\sigma_{0}=1.5\times 10^{-21}\) cm\({}^{2}\)H\({}^{-1}\) (Draine, 2011).
The density values at the inner edge of the shell result from the assumption of hydrostatic equilibrium between the forces resulting from winds/SNe and the thermal pressure at the inner edge of the shell. Depending on whether the shell is pressure-driven or momentum-driven, the winds/SNe force at the shell's inner edge differs and results in different density initial conditions for the coupled differential equations, given as:
\[n_{\rm sh}(r_{\rm sh}^{\rm in})=\left\{\begin{array}{ll}\frac{\mu_{\rm p}}{\mu_{\rm n}kT_{\rm i}}P_{\rm b}&\rm{Energy\ driven}\\ \frac{\mu_{\rm p}}{\mu_{\rm n}kT_{\rm i}}\frac{F_{\rm ram}}{A_{\rm sh}}&\rm{Momentum\ driven}\end{array}\right.\,. \tag{14}\]
In Eqn. (14), as the inner edge of the shell is part of an H ii region, its temperature is assumed to be \(T_{\rm i}=10^{4}\)K. As noted in Sec. 2.1.2, a switch from pressure-driven to momentum-driven shells is expected
Figure 3: The shell density normalized by the density at its inner edge as a function of the normalized shell depth (i.e., shell depth divided by shell thickness) at selected times for a system with \(Z=0.02\), \(\epsilon_{\rm SF}=5\%\), \(n_{\rm cl}=160\) cm\({}^{-3}\) and \(\log M_{\rm cl}=5.75\).
based on the weak X-ray emission and low shell expansion velocities. Another important point about the switch to the momentum-driven phase can be made based on the inner edge density of the shell. This density value, along with the flux of ionizing photons, is a key parameter that determines the emission line ratios from H ii regions. As noted by Dopita et al. (2005), pressure-driven shells typically exhibit high inner shell density and expand to large radii, resulting in a low ratio of ionizing photon flux to the inner edge density of the shell. This is inconsistent with observations. Transitioning to the momentum-driven phase helps resolve this issue, a point elaborated upon in Sec. 4.1.
The initial conditions for the other two variables follow from zero initial attenuation:
\[\phi(r_{\rm sh}^{\rm in})=1,\ \ \tau_{\rm d}(r_{\rm sh}^{\rm in})=0\,. \tag{15}\]
The solution process is terminated either if the attenuation function drops to zero, or if the entire shell's mass is accounted for. If the former happens, a modified set of equations is then solved using the \(n_{\rm sh}\) and \(\tau_{\rm d}\) at the termination radius as initial conditions. Following Martinez-Gonzalez et al. (2014), we have:
\[\frac{\rm d}{{\rm d}r}\left(n_{\rm sh}kT_{\rm n}\right)=-\frac{1}{4\pi r^{2}c}\frac{\rm d}{{\rm d}r}\left(L_{\rm n}e^{-\tau_{\rm d}}\right)\,, \tag{16}\]
\[\frac{\rm d\tau_{\rm d}}{{\rm d}r}=n_{\rm sh}\sigma_{\rm d}\,. \tag{17}\]
These are essentially the same equations as those for the ionized shell, except that the terms related to ionizing radiation have been dropped (as \(\phi(r)=0\)), and the neutral shell temperature, \(T_{\rm n}=100\) K is employed.
These equations are terminated at a radius where the computed mass (integrating the shell density structure) equals the mass of the shell determined by integrating Eqn. (2), allowing us to calculate \(F_{\rm rad}^{\rm UV,IR}\) as
\[F^{\rm UV,IR}_{\rm rad}\approx f_{\rm abs}\,\frac{L_{\rm bol}}{c}\left(1+\tau_{\rm IR}\right)\,. \tag{18}\]
Here \(\tau_{\rm IR}\) is the IR optical depth of the shell given as
\[\tau_{\rm IR}=\kappa_{\rm IR}\int_{r_{\rm sh}^{\rm in}}^{r_{\rm sh}}\mu_{\rm n }n_{\rm sh}\rm d\tau\,, \tag{19}\]
where \(\kappa_{\rm IR}=(Z/Z_{\odot})\kappa_{\rm IR,0}\) with \(\kappa_{\rm IR,0}=4\) cm\({}^{2}\)g\({}^{-1}\), noting that for gas-to-dust ratio \(\sim 100\), values of \(\kappa_{\rm IR}\) are likely to be in the range \(1-5\) cm\({}^{2}\) g\({}^{-1}\)(Semenov et al., 2003; Skinner & Ostriker, 2015).
Only the first term in Eqn. (18) would show up if each photon interacts just once with the medium and then escapes the system. The presence of an additional term allows for momentum exchange between trapped IR radiation and the shell, assuming that the gas and the dust are dynamically well coupled. Here, \(L_{\rm bol}=L_{\rm i}+L_{\rm n}\). The absorption fraction \(f_{\rm abs}\) is a luminosity-weighted average of absorption fractions in the ionizing and neutral wavebands at the outer edge of the shell, \(r_{\rm sh}^{\rm out}\), given as:
\[f_{\rm abs}=\frac{f_{\rm abs,i}L_{\rm i}+f_{\rm abs,n}L_{\rm n}}{L_{\rm bol}}\,, \tag{20}\]
where, \(f_{\rm abs,i}=1-\phi(r_{\rm sh}^{\rm out})\), and \(f_{\rm abs,n}=1-e^{-\tau_{\rm d}(r_{\rm sh}^{\rm out})}\).
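For reference, Eqns. (18)–(20) can be evaluated for a tabulated shell profile along the lines of the following sketch; the radius/density arrays, luminosities, and metallicity passed in are placeholders, and \(\kappa_{\rm IR}\) follows the scaling quoted above.

```python
import numpy as np

C_LIGHT = 2.998e10              # speed of light [cm s^-1]
M_H     = 1.6726e-24            # proton mass [g]
MU_N    = (14.0 / 11.0) * M_H   # mean mass per nucleus [g]

def _trapz(y, x):
    # simple trapezoidal rule (kept explicit to avoid NumPy version differences)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def f_rad_uv_ir(r, n_sh, phi_out, tau_d_out, L_i, L_n, Z, Z_sun=0.02, kappa_ir0=4.0):
    """Radiation force on the shell, Eqn. (18).

    r, n_sh   : radius [cm] and H number density [cm^-3] through the shell
    phi_out   : attenuation function at the outer edge (Eqn. 12)
    tau_d_out : dust optical depth at the outer edge (Eqn. 13)
    L_i, L_n  : ionizing and non-ionizing luminosities [erg s^-1]
    """
    # Eqn. (19): IR optical depth of the shell
    kappa_ir = kappa_ir0 * Z / Z_sun
    tau_ir = kappa_ir * _trapz(MU_N * n_sh, r)

    # Eqn. (20): luminosity-weighted absorption fraction
    L_bol = L_i + L_n
    f_abs = ((1.0 - phi_out) * L_i + (1.0 - np.exp(-tau_d_out)) * L_n) / L_bol

    # Eqn. (18): single-absorption term plus the trapped-IR boost
    return f_abs * L_bol / C_LIGHT * (1.0 + tau_ir)
```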
Fig. 3 shows examples of the density profile calculated using the method discussed here. The notable increase in density signifies the transition from the ionized to the neutral regions of the shell. The decline in the maximum relative density with age is largely due to the thinning of the shell as the gas is pushed out.
We note that the density structure in the H ii regions of the shell resulting from the use of Eqns. (11), (12), and (13) is fairly consistent with that obtained using detailed calculations in Cloudy. We explicitly compared these profiles for a subset of the parameter space and found the difference in the mass-weighted density to be within 10%, implying a similar position of the ionization front when using the approximate calculations. On the other hand, the neutral parts of the shell can show significant deviations between the profiles calculated using Cloudy and those calculated using Eqns. (16), (17). The differences originate from the approximate shell density profiles using a constant grain cross-section, assuming a constant temperature of 100 K, and ignoring the absorption of Lyman-Werner band radiation by H\({}_{2}\) when the shell is dense enough to form molecules. The deviations in the density profiles of the neutral shells are unlikely to affect the calculation of \(F^{\rm UV,IR}_{\rm rad}\) as the density profiles differ the most in optically thick cases where molecules form. In such cases, the absorption fraction associated with neutral radiation, \(f_{\rm abs,n}\), tends to unity whether or not we use Cloudy.
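To illustrate how the ionized part of the shell profile can be obtained in practice, a stand-alone sketch of the integration of Eqns. (11)–(13) is given below, with \(T_{\rm i}\) held at \(10^{4}\) K. The cluster properties and inner-edge conditions are illustrative placeholders (in the library they follow from Eqn. (14) and the STARBURST99 tables), and only the first of the two stopping conditions (the attenuation function reaching zero) is implemented here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Constants (cgs)
K_B     = 1.3807e-16          # Boltzmann constant [erg K^-1]
M_H     = 1.6726e-24          # proton mass [g]
C       = 2.998e10            # speed of light [cm s^-1]
MU_N    = (14.0 / 11.0) * M_H
MU_P    = (14.0 / 23.0) * M_H
ALPHA_B = 2.59e-13            # case-B recombination coefficient [cm^3 s^-1]
T_I     = 1.0e4               # ionized shell temperature [K]

# Illustrative cluster/shell inputs (placeholders, not library values)
Q_H     = 1.0e52              # ionizing photon rate [s^-1]
L_I     = 5.0e42              # ionizing luminosity [erg s^-1]
L_N     = 5.0e42              # non-ionizing luminosity [erg s^-1]
Z       = 0.02
SIGMA_D = 1.5e-21 * Z / 0.02  # dust cross-section [cm^2 per H]
R_IN    = 3.0e19              # inner shell radius [cm], ~10 pc
N_IN    = 3.0e3               # density at the inner edge [cm^-3]

def ionized_shell_rhs(r, y):
    """y = [n_sh, phi, tau_d]; Eqns. (11)-(13) with T_i held constant."""
    n, phi, tau = y
    dphi = -(4.0 * np.pi * r**2 / Q_H) * ALPHA_B * n**2 - n * SIGMA_D * phi
    dtau = n * SIGMA_D
    # d/dr of the attenuated luminosities entering the RHS of Eqn. (11)
    dlum = -L_N * np.exp(-tau) * n * SIGMA_D + L_I * dphi
    dn = -(MU_P / (MU_N * K_B * T_I)) * dlum / (4.0 * np.pi * r**2 * C)
    return [dn, dphi, dtau]

def ionization_front(r, y):
    return y[1]               # phi = 0 marks the ionization front
ionization_front.terminal = True

sol = solve_ivp(ionized_shell_rhs, (R_IN, 10.0 * R_IN), [N_IN, 1.0, 0.0],
                events=ionization_front, max_step=R_IN / 200.0, rtol=1e-8)
print("ionization front at r =", sol.t[-1] / 3.086e18, "pc")
```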
#### 2.1.4 External pressure
If the shell is completely ionized, ionizing radiation leaks out to ionize the cloud behind it, photo-heating it to a temperature of \(\sim 10^{4}\) K. This represents a significant increase in external force in Eqn. (3) in comparison to the force due to the cloud when the shell is neutral and the cloud has a temperature of \(\simeq 10^{2}\) K. The different cases can be written as:
\[F_{\rm ext}=\left\{\begin{array}{ll}\frac{\mu_{\rm n}}{\mu_{\rm p}}n_{\rm cl}kT_{\rm i}A_{\rm sh}&\mbox{if $M_{\rm sh}<M_{\rm cl,r}$ and $\phi(r_{\rm sh}^{\rm out})>0$}\\ n_{\rm cl}kT_{\rm n}A_{\rm sh}&\mbox{if $M_{\rm sh}<M_{\rm cl,r}$ and $\phi(r_{\rm sh}^{\rm out})=0$}\\ 0&\mbox{if $v_{\rm sh}<0$}\end{array}\right.\,. \tag{21}\]
Additionally, once the shell has swept through the entire cloud, we assume that an external pressure from the diffuse ISM, given by \(P_{\rm ext}/k=10^{3}\) cm\({}^{-3}\) K, acts on the expanding shell.
We remark that we assume that the cloud is in virial equilibrium, which implies that we do not allow the natal cloud to fall in. Including this would lead to a significant difference in the evolution of the system for the highest density and cloud mass cases. This is exemplified by the change in the minimum stellar mass required to unbind the cloud when this additional force is taken into account, as in Kourniotis et al. (2023). We additionally note that the cloud itself is assumed not to be affected by the ionized gas pressure, to simplify the calculations.
Figure 4: \(M_{\rm F}\) in the case of dusty gas as a function of the Ly\(\alpha\) optical depth at line center. The solid lines are the fits provided in Kimm et al. (2018) at the metallicities considered in this work.
### 2.2 Ly\(\alpha\) radiation pressure
Ly\(\alpha\) photons resonantly scatter in optically thick regions due to the large absorption cross-section of neutral hydrogen before they escape or get absorbed by dust. As the metallicity of the system decreases, the central cluster's population exhibits weakened line-driven winds and undergoes lower mass loss, which impacts the main sequence lifetimes of its stars. As compared to their metal-rich counterparts, the low-metallicity clusters exhibit a more gradual decrease in the rate of production of ionizing photons. Therefore, there is a shift in the mode of stellar feedback. Decreasing the metallicity shifts the dominant mode of energy removal from the central cluster from stellar winds to radiation. At the same time, the gas around the lower metallicity clusters is increasingly devoid of dust grains in our models. This translates to a lower coupling to radiation by absorption and scattering by dust grains. On the other hand, the decreased presence of dust grains ensures reduced destruction of Ly\(\alpha\) photons in the neutral medium, opening another feedback channel. When destruction is avoided, the trapping of Ly\(\alpha\) photons can represent a significant radiative force (Dijkstra & Loeb, 2008, 2009; Smith et al., 2017; Kimm et al., 2018).
Tomaselli & Ferrara (2021) argue for the implementation of Ly\(\alpha\) pressure in galaxy formation due to its greater influence compared to photoionization and UV radiation pressure in initiating gas acceleration around bright sources. Their conclusions bracket a broad range of gas columns and metallicities (\(16<\log N_{\rm HI}<23\) ; \(\;-4<\log Z/Z_{\odot}<0\)). The trapping of the Ly\(\alpha\) results in force multiplication. The multiplication factor (\(M_{\rm F}\)) represents the number of times, on average, a photon contributes to the momentum deposition, with respect to the case in which only a single scattering takes place.
In this work, we use the approach presented in Kimm et al. (2018) to calculate \(M_{\rm F}\) in dusty media. They provide fitting formulas for \(M_{\rm F}\) in the dusty case based on calculations performed using 3D Monte Carlo Ly\(\alpha\) radiative transfer assuming a central source in a uniform medium. The fitting formulas use the multiplication factor in the dust-free case, \(M_{\rm F,no\,dust}\), and the escape fraction that mimics the destruction of Ly\(\alpha\) by dust, \(f_{\rm esc,dust}^{Ly\alpha}\). They are written as follows:
\[\begin{split} M_{\rm F,\,no\,dust}\approx 29\left(\frac{\tau_{0}}{10^{6}}\right)^{0.29}&\left(\tau_{0}\geq 10^{6}\right),\\ \log M_{\rm F,\,no\,dust}\approx&-0.433+0.874\log \tau_{0}-0.173\left(\log\tau_{0}\right)^{2}\\ &+0.0133\left(\log\tau_{0}\right)^{3}&\left(\tau_{0}<10^{6}\right),\end{split} \tag{22}\]
\[\tau_{0}^{\rm peak}=4.06\times 10^{6}\,T_{4}^{-1/4}\left(\frac{\sigma_{\rm d,-21}}{3}\right)^{-3/4}, \tag{23}\]
\[\tau_{0}^{\prime}=\min\left(\tau_{0},\tau_{0}^{\rm peak}\right),\quad N_{\rm HI }^{\prime}=\min\left(N_{\rm HI},N_{\rm HI}^{\rm peak}\right), \tag{24}\]
\[f_{\rm esc,dust}^{Ly\alpha}=1/\cosh\left\{\frac{\sqrt{3}}{\pi^{5/12}\xi}\left[\left(a_{\rm V}\tau_{0}\right)^{1/3}\tau_{\rm da}\right]^{1/2}\right\}, \tag{25}\]
\[\tau_{\rm da}=N_{\rm HI}\sigma_{\rm d}\left(1-\mathcal{A}_{\rm b}\right), \tag{26}\]
\[M_{\rm F}=M_{\rm F,\,no\,dust}\left(\tau_{0}^{\prime}\right)\times f_{\rm esc,dust}^{Ly\alpha}\left(N_{\rm HI}^{\prime},\xi\right). \tag{27}\]
In the equations above, \(\tau_{0}\) is the Ly\(\alpha\) optical depth at the line center using the cross-section \(\sigma_{0}=5.88\times 10^{-14}\) cm\({}^{2}T_{4}^{-1/2}\) (\(T_{4}\equiv T_{\rm n}/10^{4}\)K), \(N_{\rm HI}\) is the atomic Hydrogen column density, \(Z^{\prime}\) is the metallicity in units of solar metallicity, \(\sigma_{\rm d,-21}\equiv\sigma_{\rm d}/10^{-21}\) cm\({}^{2}\)/H\({}^{5}\), \(f_{\rm d/m}\) is the dust to metal ratio, \(\mathcal{A}_{\rm b}=0.46\) is the dust albedo, \(a_{\rm V}=4.7\times 10^{-4}T_{4}^{-1/2}\) is the Voigt parameter in the optically thick regime, and \(\xi=1.78\) is a fitting parameter.
Note that \(\tau_{0}\) is calculated using the shell profile given in Sec. 2.1.3. We use Eqn. (17) to estimate \(N_{\rm HI}\) after removing the contribution of the ionized column. In order to not overestimate \(N_{\rm HI}\), we use the model presented in Krumholz (2013) to get an approximate value of the molecular hydrogen fraction in the shell as a function of depth. We compute \(N_{\rm HI}\) by considering the neutral gas column up to the point in the shell where the molecular hydrogen fraction becomes non-zero. The calculation method for the molecular fraction is provided in Appendix B, while examples of the atomic column density can be found in Appendix C. In practice, the Ly\(\alpha\) multiplication factor at all metallicities saturates at neutral column depths lower than where any molecular hydrogen could be present. The \(M_{\rm F}\) values as a function of the Ly\(\alpha\) optical depth in the case of the five metallicities considered in this work are shown in Fig. 4.
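The fitting formulas (22)–(27) translate directly into a few lines of Python; the sketch below simply transcribes them as written above, assuming \(T_{4}=1\), taking \(\sigma_{\rm d}\) to scale linearly with metallicity as in Sec. 2.1.3, and relating \(N_{\rm HI}^{\rm peak}\) to \(\tau_{0}^{\rm peak}\) through \(\tau_{0}=N_{\rm HI}\sigma_{0}\) (an assumption of this sketch rather than a statement from Kimm et al. 2018).

```python
import numpy as np

A_B     = 0.46                    # dust albedo
XI      = 1.78                    # fitting parameter
T4      = 1.0                     # T_n / 10^4 K (assumed here)
SIGMA_0 = 5.88e-14 * T4**-0.5     # Lya cross-section at line centre [cm^2]
A_V     = 4.7e-4 * T4**-0.5       # Voigt parameter

def mf_no_dust(tau0):
    """Eqn. (22): dust-free force multiplication factor."""
    if tau0 >= 1e6:
        return 29.0 * (tau0 / 1e6)**0.29
    lt = np.log10(tau0)
    return 10.0**(-0.433 + 0.874 * lt - 0.173 * lt**2 + 0.0133 * lt**3)

def f_esc_dust(n_hi, sigma_d, tau0):
    """Eqns. (25)-(26): escape fraction mimicking destruction by dust."""
    tau_da = n_hi * sigma_d * (1.0 - A_B)
    x = np.sqrt(3.0) / (np.pi**(5.0 / 12.0) * XI) * np.sqrt((A_V * tau0)**(1.0 / 3.0) * tau_da)
    return 1.0 / np.cosh(x)

def mf_dusty(n_hi, Z, Z_sun=0.02):
    """Eqns. (23)-(24) and (27) combined, for a neutral column n_hi [cm^-2]."""
    sigma_d = 1.5e-21 * Z / Z_sun
    sigma_d21 = sigma_d / 1e-21
    tau0 = n_hi * SIGMA_0
    tau0_peak = 4.06e6 * T4**-0.25 * (sigma_d21 / 3.0)**-0.75
    n_hi_peak = tau0_peak / SIGMA_0   # sketch assumption: tau0 = N_HI * sigma_0
    tau0_p = min(tau0, tau0_peak)
    n_hi_p = min(n_hi, n_hi_peak)
    return mf_no_dust(tau0_p) * f_esc_dust(n_hi_p, sigma_d, tau0_p)

# Example: a neutral column of 10^21 cm^-2 at solar metallicity
print(mf_dusty(1e21, Z=0.02))
```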
At each time step during the shell evolution, we calculate the Ly\(\alpha\) luminosity, \(L_{\rm Ly\alpha}\) as:
\[L_{\rm Ly\alpha}=4\pi\int_{r(\phi=1)}^{r(\phi=0)}\mathcal{P}_{\rm B}\,E_{\rm Ly\alpha}\,\alpha_{\rm B}\,n_{\rm sh}^{2}\,r^{2}\,\mathrm{d}r\,. \tag{28}\]
The integral is carried out from the inner edge of the shell, where the attenuation function for the ionizing radiation (\(\phi\)) is unity, till the point where the shell turns neutral (details of the shell structure are given in Sec. 2.1.3). This takes into account the absorption of ionizing photons by the dust present in the system. The probability \(\mathcal{P}_{\rm B}\) of an absorbed Lyman continuum photon resulting in the emission of a Ly\(\alpha\) photon is taken to be a fixed, Case-B value of 0.68 at \(T=10^{4}\) K. Similarly, \(\alpha_{\rm B}\) represents the Case-B recombination coefficient and is assigned a value of \(2.59\times 10^{-13}\) cm\({}^{3}\)s\({}^{-1}\) (Osterbrock & Ferland, 2006). \(E_{\rm Ly\alpha}\) is the energy of an individual Ly\(\alpha\) photon (10.2
Figure 5: Evolution of the shell density for a system with \(Z=0.02\), \(\epsilon_{\rm SF}=5\%\), \(n_{\rm cl}=160\) cm\({}^{-3}\) and \(\log M_{\rm cl}=5.75\). Density values at the inner edge and the maximum in the shell are shown, obtained using the profiles such as those in Fig. 3. The shell is considered dissolved when the maximum density falls below the threshold value (dashed black line) for a period of more than 1 Myr. The darker shaded area represents the period when the shell is expanding due to the pressure of the shocked gases in the bubble, while the lighter one is that when the expansion is momentum-driven. The non-shaded region between the two is the transition between the two regimes.
eV). Given this, we calculate \(F_{\rm rad}^{\rm Ly\alpha}\) as:
\[F_{\rm rad}^{\rm Ly\alpha}=f_{\rm esc,\,v}^{\rm Ly\alpha}\,M_{\rm F}\,\frac{L_{\rm Ly\alpha}}{c}\,. \tag{29}\]
Here we have added a shell velocity-dependent escape fraction, \(f_{\rm esc,v}^{\rm Ly\alpha}\), to mimic the reduction in the opacity of moving shells to Ly\(\alpha\) photons. This is based on the scaling given in Dijkstra & Loeb (2008) (refer to Fig. 6 in that paper), which suggests a drop in the multiplication factor by an order of magnitude if the absolute velocity of the shells increases to 100 km/s. In practice, \(f_{\rm esc,v}^{\rm Ly\alpha}\) is a linear function that drops from 1 to 0.1 as the absolute value of the shell velocity increases from 0 to 100 km/s. We note that the shell velocity rarely exceeds 100 km/s in our parameter space, in which case \(f_{\rm esc,v}^{\rm Ly\alpha}\) is assumed to be zero.
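Schematically, Eqns. (28) and (29) combine with the velocity scaling described above as in the sketch below; the shell profile arrays are placeholders, and the linear ramp of \(f_{\rm esc,v}^{\rm Ly\alpha}\) is implemented exactly as stated (from 1 to 0.1 between 0 and 100 km/s, and zero beyond).

```python
import numpy as np

C       = 2.998e10            # speed of light [cm s^-1]
ALPHA_B = 2.59e-13            # case-B recombination coefficient [cm^3 s^-1]
P_B     = 0.68                # case-B Lya conversion probability
E_LYA   = 10.2 * 1.602e-12    # Lya photon energy [erg]

def _trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def lya_luminosity(r, n_sh):
    """Eqn. (28) over the ionized part of the shell (r, n_sh arrays in cgs)."""
    return 4.0 * np.pi * _trapz(P_B * E_LYA * ALPHA_B * n_sh**2 * r**2, r)

def f_esc_velocity(v_sh_kms):
    """Linear drop from 1 to 0.1 between 0 and 100 km/s, zero above."""
    v = abs(v_sh_kms)
    return 1.0 - 0.9 * v / 100.0 if v <= 100.0 else 0.0

def f_rad_lya(r, n_sh, v_sh_kms, m_f):
    """Eqn. (29): Lya radiation force on the shell for a given M_F."""
    return f_esc_velocity(v_sh_kms) * m_f * lya_luminosity(r, n_sh) / C

# Example: a thin ionized layer of constant density
r = np.linspace(3.0e19, 3.1e19, 200)
print(f_rad_lya(r, np.full_like(r, 1.0e3), v_sh_kms=10.0, m_f=50.0))
```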
Apart from the caveats highlighted in Kimm et al. (2018), we note additional caveats of our approach here. We calculate \(N_{\rm HI}\) assuming the ionized and neutral parts of the shell to be in hydrostatic equilibrium, not accounting for any force gradients that would necessarily be present. We have also ignored the force on the cloud if the ionization front lies outside the shell. Another aspect we have not considered here is the leakage of Ly\(\alpha\) photons once the shell fragments. This is difficult to handle in our model without introducing another free parameter, for example, the escape fraction of Lyman Continuum, which would propagate to Ly\(\alpha\) pressure. Additionally, fragmentation of the shell could alter the Ly\(\alpha\) radiation pressure by changing the geometry to one with escape channels (see, for example Gronke et al., 2017; Smith et al., 2019, for Ly\(\alpha\) escape mechanisms through clumpy gas.)
To address these limitations, a comprehensive approach using Monte Carlo radiation hydrodynamics is required (see, for example, Smith et al., 2020). However, this is outside the purview of this study, which necessitates a broad exploration of the parameter space.
### 2.3 Shell dissolution
Following Rahner et al. (2017), it is assumed that the shell is dissolved if the entire cloud has been swept and the maximum density in the shell falls below 1 cm\({}^{-3}\) for a period greater than 1 Myr.
The maximum density in the shell is determined by the density at the inner edge and the gas/dust column density through the shell-structure equations given in Sec. 2.1.3. The former is determined by the winds/SNe feedback intensity, and the latter by the shell's mass and the impinging radiation intensity. Both of these quantities are affected by the shell's radius. As the shell expands, the density at the inner edge generally decreases due to the increasing radius and the aging of the stellar cluster, except during the Supernovae (SNe) and Wolf-Rayet (W-R) phases, when the feedback intensity experiences a significant increase. This is shown in Fig. 5, with additional examples given in Figs. 1 and 2. Keeping in mind that the models presented here employ finite clouds, the shell begins to thin and the column density decreases once the entire cloud is swept up and no more mass is added to the shell. These cumulative effects result in a decrease in the maximum shell density as the shell expands beyond the cloud, which is the case when the stellar feedback provides sufficient outward momentum to overcome the cloud's binding energy. Fig. 5 also shows the declining maximum number density in the shell for a case where the shell eventually dissolves.
In Sec. 2.1.3, we emphasized that the impact of utilizing approximate profiles for the shell structure is expected to be limited. Additionally, we observe that as the shells become thinner, the discrepancies in the density profiles in the neutral regions of the shell, calculated using the approximate method of the evolutionary model, tend to decrease compared to the results from Cloudy. This suggests that the criteria for shell dissolution would also not be significantly affected even if we were to transition to more sophisticated calculations.
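In practice, the dissolution criterion can be evaluated on the stored time series of the maximum shell density, e.g. as in the following sketch (times in Myr, densities in cm\({}^{-3}\); the thresholds are the values quoted above, and the below-threshold duration is measured from the last sample that still exceeded the threshold).

```python
import numpy as np

def is_dissolved(t_myr, n_max, cloud_fully_swept, n_thresh=1.0, dt_thresh=1.0):
    """True if the whole cloud has been swept and the maximum shell density
    has stayed below n_thresh [cm^-3] for longer than dt_thresh [Myr]."""
    if not cloud_fully_swept:
        return False
    t_myr, n_max = np.asarray(t_myr), np.asarray(n_max)
    below = n_max < n_thresh
    if not below[-1]:
        return False
    # time elapsed since the density last exceeded the threshold
    last_above = np.where(~below)[0]
    t_start = t_myr[last_above[-1]] if last_above.size else t_myr[0]
    return (t_myr[-1] - t_start) > dt_thresh

# Example: a density history that drops below the threshold at t = 20 Myr
t = np.linspace(0.0, 25.0, 251)
print(is_dissolved(t, np.where(t < 20.0, 50.0, 0.5), cloud_fully_swept=True))
```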
### 2.4 Shell collapse and multi-generational star formation
During the momentum-driven phase, it is possible that the feedback is not strong enough to dissolve the shell. In such a scenario, the shell starts to shrink under the gravitational force. We follow the evolution of such a system till the moment the shell radius becomes equal to the initial radius given by Eqn. (31). At this point, we restart the evolution (initially pressure-driven and later momentum-driven) with an additional star cluster. The new cluster is formed with the same \(\epsilon_{\rm SF}\) out of the available cloud mass. This process can take place multiple times during the total evolution time, \(t_{\rm evo}=\min\left(t_{\rm diss},30\ {\rm Myr}\right)\). The remaining cloud mass surrounding a system containing \(N\) generations of stars is \(M_{\rm cl,\,N}=(1-\epsilon_{\rm SF})^{N}M_{\rm cl}\).
Miura et al., 2012; Colombo et al., 2014; Miville-Deschenes et al., 2017). We use all five metallicities available for the STARBURST99 templates. Finally, the star formation efficiency parameter runs from \(1-15\%\)(Franco et al., 1994; Kim et al., 2016). The parameter space is summarized in Tab. 1 and represents a total of 2520 models that were evolved for the library. The model ODEs are written in Python and solved using the stiff ODE solver support in the scipy.integrate.solve_ivp6 function.
Footnote 6: Documented online at [https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html)
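For orientation, the size of the library grid of Tab. 1 and the way each model can be handed to SciPy's stiff integrators are sketched below; `evolve_shell` and its `rhs` argument are hypothetical placeholders standing in for the full system of Sec. 2.1, not part of the released code.

```python
import itertools
from scipy.integrate import solve_ivp

# Tab. 1: the grid evolved for the library
n_cl   = [10, 20, 40, 80, 160, 320, 640, 1280, 2560]       # cm^-3
logMcl = [5.00, 5.25, 5.50, 5.75, 6.00, 6.25, 6.50, 6.75]  # log Msun
eps_sf = [1.0, 2.5, 5.0, 7.5, 10.0, 12.5, 15.0]            # per cent
Z      = [0.001, 0.004, 0.008, 0.02, 0.04]

grid = list(itertools.product(n_cl, logMcl, eps_sf, Z))
print(len(grid))   # 9 * 8 * 7 * 5 = 2520 models

def evolve_shell(params, rhs, y0, t_end_myr=30.0):
    """Hypothetical driver: integrate one model with a stiff solver."""
    myr = 3.156e13  # seconds
    return solve_ivp(rhs, (0.0, t_end_myr * myr), y0,
                     method="LSODA", dense_output=True, args=(params,))
```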
### 2.6 Trends in the feedback channels driving the shell evolution
To elucidate the impact of changing the model parameters on the evolution of the system, we define the relative total outward momentum deposition on the shell by a force due to a given feedback channel \(i\) at the end of the calculation period, \(t_{\rm evo}=\min\left(t_{\rm diss},30\ {\rm Myr}\right)\). This could be written as:
\[\mathcal{R}_{\rm i}=\frac{\int_{0}^{t_{\rm evo}}F_{\rm i}\ {\rm d}t}{\int_{0}^{t_{\rm evo}} \left(F_{\rm w,sn}+F_{\rm rad}^{\rm UV,IR}+F_{\rm rad}^{\rm Ly\alpha}\right) \ {\rm d}t}\,. \tag{32}\]
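Given the stored force histories of a single model, Eqn. (32) amounts to a ratio of time integrals; a minimal sketch (with placeholder input arrays) is:

```python
import numpy as np

def _trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def momentum_fractions(t, F_wsn, F_uvir, F_lya):
    """Relative momentum deposition per channel, Eqn. (32); the inputs are
    time series (arrays) sampled over 0 <= t <= t_evo for one model."""
    total = _trapz(F_wsn + F_uvir + F_lya, t)
    return {"w,sn":  _trapz(F_wsn, t) / total,
            "UV,IR": _trapz(F_uvir, t) / total,
            "Lya":   _trapz(F_lya, t) / total}
```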
Before we look at the results from the models, it is worth looking at the expected values of \(\mathcal{R}_{\rm i}\) as a function of the cluster age and metallicity resulting directly from the template library. Fig. 6 shows the values of \(\mathcal{R}_{\rm Ly\alpha}\) and \(\mathcal{R}_{\rm w,sn}\) assuming: (i) the maximum value of \(M_{\rm F}\) (see Fig. 4) and conversion of all ionizing photons to Ly\(\alpha\), (ii) \(F_{\rm w,sn}=F_{\rm ram}\), which assumes that the shell evolution is momentum-driven throughout (cf. Sec. 2.1.2), and (iii) \(F_{\rm rad}^{\rm UV,IR}=L_{\rm bol}/c\). As momentum deposition by the shocked gases during the pressure-driven phase tends to be higher in comparison to the direct deposition by \(F_{\rm ram}\)7, the values of \(\mathcal{R}_{\rm Ly\alpha}\) in Fig. 6 serve as an upper limit on the value expected in our models. Additional factors, such as the lack of neutral gas, the absorption of ionizing photons by dust, and lower opacity to Ly\(\alpha\) photons due to shell thinning and/or high shell velocity would lower the values of \(\mathcal{R}_{\rm Ly\alpha}\) depending upon the model parameters. The metallicity-based reduction in \(\mathcal{R}_{\rm Ly\alpha}\) is clear in Fig. 6. It can also be inferred that shells that dissolve slowly are expected to exhibit lower \(\mathcal{R}_{\rm Ly\alpha}\) or higher \(\mathcal{R}_{\rm w,sn}\).
Footnote 7: In Fig. 5, the reduction of force acting on the shell can be inferred by comparing the densities at the shell’s inner edge as the switch to momentum-driven phase takes place.
In the following, we examine \(\mathcal{R}_{\rm i}\) and discuss how the variations in the model parameters affect the feedback mechanisms and the dissolution and fragmentation times of the shells. The top three rows in Figs. 7 and 8 show the contours of \(\mathcal{R}_{\rm i}\) due to the forces attributed to winds and supernovae (\(\mathcal{R}_{\rm w,sn}\)), the absorption of UV and trapped IR (\(\mathcal{R}_{\rm UV,IR}\)), and the Ly\(\alpha\) radiation pressure (\(\mathcal{R}_{\rm Ly\alpha}\)), respectively. The contours for \(t_{\rm frag}\) (row 4) and those for \(t_{\rm evo}\) (row 5) are also shown. In Fig. 7, the contours are shown as a function of \(n_{\rm cl}\) and \(M_{\rm cl}\) at a fixed \(\epsilon_{\rm SF}=5\%\), while in Fig. 8, they are shown as a function of \(\epsilon_{\rm SF}\) and \(M_{\rm cl}\) at a fixed \(n_{\rm cl}=320\ {\rm cm}^{-3}\). Some additional examples of the time evolution of key shell properties as a function of the model parameters are provided in the Appendix, in Figs. F1 and F2.
#### 2.6.1 Metallicity
It can be inferred from the \(\mathcal{R}_{\rm i}\) values in Figs. 7 and 8 that at the lower end of the metallicity range considered in this work (\(Z\leq 0.004\)), a significant amount of the total momentum imparted to the shells can come from Ly\(\alpha\) scattering pressure. At the higher metallicity end, the main drivers are the winds and the supernovae. This departure from Ly\(\alpha\)-driven shells with increasing metallicity is a result both of the increased energy input by the stellar winds relative to the radiative energy and of the increased destruction of Ly\(\alpha\) photons by dust. Since Ly\(\alpha\) radiation pressure affects the shell dynamics only if neutral hydrogen is present, even at the lowest metallicity values the impact of Ly\(\alpha\) can be subdominant at the lower end of cloud densities, as we discuss below along with the other effects of changing the cloud density.
\(\mathcal{R}_{\rm UV,IR}\) remains subdominant across the parameter space considered in this work, although its values rise as the cloud density and mass are increased owing to increasing dust optical depths. For the metallicities in the range \(Z\leq 0.008\) shown in Fig. 7, the ratio \(\mathcal{R}_{\rm Ly\alpha}/\mathcal{R}_{\rm UV,IR}\) tends to fall in the range \(5-20\), while this ratio is around \(0.5-3\) at the higher metallicity end.
#### 2.6.2 Cloud density
Cloud density plays a role in determining when the switch to the momentum-driven phase takes place (fourth row, Fig. 7). The pressure-driven phase lasts longer for low-density clouds as it takes longer for them to become susceptible to either gravitational fragmentation or the Rayleigh-Taylor instability, which leads to shell fragmentation when the entire cloud has been swept and the shell
| Parameter | Values |
| --- | --- |
| \(n_{\rm cl}\) [cm\({}^{-3}\)] | 10, 20, 40, 80, 160, 320, 640, 1280, 2560 |
| \(\log M_{\rm cl}\) [\(M_{\odot}\)] | 5.00, 5.25, 5.50, 5.75, 6.00, 6.25, 6.50, 6.75 |
| \(\epsilon_{\rm SF}\) [%] | 1.0, 2.5, 5.0, 7.5, 10.0, 12.5, 15.0 |
| \(Z\) | 0.001, 0.004, 0.008, 0.02, 0.04 |

Table 1: The parameter space used for the evolutionary models in this work, which includes the cloud density \(n_{\rm cl}\), cloud mass \(M_{\rm cl}\), star-formation efficiency \(\epsilon_{\rm SF}\), and stellar/gas metallicity \(Z\).
Figure 6: Upper limit on the relative contribution of Ly\(\alpha\) radiation pressure to the total outward momentum deposition, \(\mathcal{R}_{\rm Ly\alpha}\) (solid) and the associated lower limit on the \(\mathcal{R}_{\rm w,sn}\) (dashed) as a function of metallicity and age based on the stellar template data. See the discussion in Sec. 2.6.
expands in the low-density ISM (see Sec. 2.1.2). As mentioned before, the deposition of momentum by winds and SNe in the pressure-driven phase is significantly higher. While this is true at all metallicities, at the lower end of the metallicities, the shells of low-density clouds (\(n_{\rm cl}<100\) cm\({}^{-3}\)) tend to be optically thin to ionizing photons for prolonged periods and couple less efficiently with Ly\(\alpha\) radiation due to a lack of neutral columns. Thus, \({\cal R}_{\rm w,sn}\) remains high at lower cloud densities in general. The contribution of \(F_{\rm rad}^{\rm UV,IR}\) towards overall momentum transfer to the shell also tends to increase as the cloud density is increased.
Increasing \(n_{\rm cl}\) while keeping \(M_{\rm cl}\) fixed increases the gravitational binding energy; thus, a higher amount of outward momentum is required to dissolve the clouds. The dominant feedback mechanisms in the case of high metallicity systems (winds and SNe) scale with the stellar mass present in the system; thus, increasing the gravitational binding energy while keeping the stellar mass fixed leads to slower dissolution of the shell, as reflected by the \(t_{\rm evo}\) contours. For the lower metallicity cases, Ly\(\alpha\) radiation pressure and the external pressure due to the ionized cloud can play a significant role in shaping the expansion of the shell. Both of these quantities depend on the density structure of the shell. The contribution of the Ly\(\alpha\) radiation pressure to the total momentum deposition increases with cloud density at a fixed \(M_{\rm cl}\), as shown in Fig. 7, as the shells formed out of dense clouds have higher column densities of neutral gas and expand slowly. These factors favor the presence of neutral gas and an increase in the effective \(M_{\rm F}\) in our models. This can be understood by considering the fact that the shells carved out of denser clouds have smaller radii, higher inner-edge densities, and are more massive during the initial period of expansion. For a purely wind-driven shell in the pressure-driven phase, \(r_{\rm sh}\propto n_{\rm cl}^{-0.2}\), \(n_{\rm sh}(r_{\rm sh}^{\rm in})\propto n_{\rm cl}^{0.6}\), and \(M_{\rm sh}\propto n_{\rm cl}^{0.4}\) (see, for example, Weaver et al., 1977), promoting higher neutral column densities, as can be inferred from Figs. 12 and 13. This difference in neutral column densities during the initial period of expansion is particularly important as it plays out when the ionizing radiation is the strongest.
At the lower cloud-density end, other parameters fixed, shells have lower inner-edge densities and larger radii compared to their higher-density counterparts; both of these effects can push the ioniza
Figure 7: The top three rows depict contours of \({\cal R}_{\rm i}\), which quantifies the relative total outward momentum deposition by channel \(i\) at the end of the evolution period. Specifically, the rows show \({\cal R}_{\rm w,sn}\) (top row), \({\cal R}_{\rm UV,IR}\) (second row), and \({\cal R}_{\rm Ly\alpha}\) (third row) as functions of cloud density \(n_{\rm cl}\) and mass \(M_{\rm cl}\). The fourth row illustrates the fragmentation time, \(t_{\rm frag}\), marking the switch to momentum-driven evolution for the first generation of star formation (if multiple star formation events take place), and the bottom row displays the total evolution time, \(t_{\rm evo}=\min\left(t_{\rm diss},30\ {\rm Myr}\right)\). All panels assume \(\epsilon_{\rm SF}=5\) per cent.
tion front deeper into the shell, or make the shell density-bounded. Shells that are optically thin to ionizing radiation while they are still in the natal cloud can be subject to a strong ambient force, which reduces the effective outward momentum deposition, slowing the shell expansion and delaying its dissolution. The effects of shell structure, which manifest distinctly in different density regimes (retardation due to external pressure in low-density cases and Ly\(\alpha\) radiation pressure in higher-density cases), are especially pronounced for lower metallicity models. This results in non-monotonic trends in the evolutionary time, \(t_{\rm evo}\), with cloud density.
#### 2.6.3 Cloud mass
Increasing \(M_{\rm cl}\) at a fixed \(n_{\rm cl}\) increases the cloud's binding energy, which varies as \(M_{\rm cl}^{2}\). At a fixed \(\epsilon_{\rm SF}\) and \(n_{\rm cl}\), more massive clouds therefore generally take longer to dissolve. This effect is clear in the contours of \(t_{\rm evo}\).
For the low metallicity, low-density models, increasing the cloud mass generally increases \({\cal R}_{\rm Ly\alpha}\). This is due to the increasing atomic gas columns with increasing cloud mass (cf. Fig. 22) and the increase in \(M_{\rm F}\), whose saturation values tend to lie at higher \(N_{\rm H}\) than those encountered in this regime. At high cloud densities, increasing the cloud mass tends to have a non-monotonic effect on \({\cal R}_{\rm Ly\alpha}\), as the shell neutral gas columns can exceed those at which \(M_{\rm F}\) saturates. As noted above, massive clouds take longer to dissolve or, in some cases, contain multiple generations. The late dissolution leads to higher contributions by \(F_{\rm w,sn}\), increasing \({\cal R}_{\rm w,sn}\), as can be gauged from Fig. 6. Increasing the metallicity decreases the \(N_{\rm H}\) value at which \(M_{\rm F}\) saturates, leading to the downward shift of the peak of \({\cal R}_{\rm Ly\alpha}\).
Increasing \(M_{\rm cl}\) at a fixed \(n_{\rm cl}\) generally tends to increase the contribution of \(F_{\rm rad}^{\rm UV,IR}\) to the overall outward momentum deposition due to the increasingly large dust columns encountered with higher \(M_{\rm cl}\).
#### 2.6.4 Star formation efficiency
Fig. 8 shows the effect of changing \(\epsilon_{\rm SF}\) on the quantities discussed here while keeping the cloud density fixed at 320 cm\({}^{-3}\). As expected, the increase in the amount of stellar mass relative to the cloud mass leads to a faster dissolution of the clouds across all metallicities.
The quantities that scale with stellar mass increase with increasing \(\epsilon_{\rm SF}\) at fixed \(M_{\rm cl}\). These include the force due to winds and SNe, and the rate of production of ionizing photons. At the lower end of \(M_{\rm cl}\), increasing \(\epsilon_{\rm SF}\) results in faster-moving, rapidly thinning shells.
Figure 8: Same as Fig. 7, but here the x-axis represents \(\epsilon_{\rm SF}\). The cloud density is fixed at 320 cm\({}^{-3}\).
This results in an increase in the relative contribution of momentum deposition by the winds and the SNe with respect to the Ly\(\alpha\) radiation pressure, reflected by the increase in \(\mathcal{R}_{\rm w,sn}\) and the decrease in \(\mathcal{R}_{\rm Ly\alpha}\) at low cloud masses with increasing \(\epsilon_{\rm SF}\). At the higher metallicity end, the above-mentioned increase in \(\mathcal{R}_{\rm w,sn}\) is slightly offset by the increased contribution of \(F_{\rm rad}^{\rm UV,IR}\). At the higher cloud mass end, increasing \(\epsilon_{\rm SF}\) does not lead to an appreciable change in the relative amount of momentum deposition by the various feedback channels in Fig. 8; this reflects the fact that the various forces show a similar scaling with stellar mass. This is expected in this particular example, where massive clouds of relatively dense gas are considered. In such cases, the shells are likely to possess the highest \(M_{\rm F}\) values due to large gas columns, making \(F_{\rm rad}^{\rm Ly\alpha}\) scale with the stellar mass. Note that the models at \(\epsilon_{\rm SF}\leq 2.5\%\) harbour multiple generations of star clusters at the end of \(t_{\rm evo}\) (cf. Fig. 9, top panel).
The top panel in Fig. 9 shows the contours of the minimum star formation efficiency, \(\epsilon_{\rm SF,min}\), required to dissolve a cloud of given mass and density. While similar values are obtained for the low-mass, low-density clouds across all metallicities, we find that the low-metallicity systems are able to destroy high-mass, high-density clouds with a lower burst stellar mass due to the Ly\(\alpha\) radiation pressure. At the higher density, intermediate cloud mass end of the low-metallicity systems, a distinct sequence of events emerges. The momentum-driven phase is initiated early when the shell is still within the cloud. Once the inner edge's density diminishes, shells with lower mass become ionized, having insufficient mass to stay radiation-bound. Confronted by the cloud's elevated external pressure and gravitational force, these shells decelerate and recollapse. Conversely, the more massive shells, which remain neutral during the phase transition, are propelled by the Ly\(\alpha\) radiation pressure, resisting recollapse. This leads to a contour pattern seen at the highest cloud density and intermediate cloud mass values.
In order to compare our results to those of Rahner et al. (2019), we ran tests with Ly\(\alpha\) radiation feedback turned off at solar metallicity using a similar parameter space. We find very good agreement between the \(\epsilon_{\rm SF,min}\) values derived in that work and our results; the details of the comparison are given in Appendix A.
We also calculate the star formation efficiency per free-fall time (\(\epsilon_{\rm ff}\)) for recollapsing models at \(t_{\rm evo}\) as:
\[\epsilon_{\rm ff}=\frac{M_{\star}/t_{\rm evo}}{M_{\rm cl}/\tau_{\rm ff}},\quad\mbox{where}\quad\tau_{\rm ff}=\sqrt{\frac{3\pi}{32\,G\,\rho_{\rm cl}}} \tag{33}\]
is the gravitational free-fall time, and \(M_{\star}(t_{\rm evo})\) is the total stellar mass in the system at \(t_{\rm evo}\). This quantity is a measure of the star formation rate relative to the maximum rate dictated by gravity, thus representing the opposition offered by stellar feedback. The contours of this quantity for \(Z=0.02\) and a selected set of \(n_{\rm cl}\) are shown in the bottom panel of Fig. 9.
The recollapsing models exhibit \(0.20\%\leq\epsilon_{\rm ff}\leq 1.65\%\), which encompasses the range of values exhibited by star formation in giant molecular clouds of nearby galaxies (Utomo et al., 2018, \(0.40\%\leq\epsilon_{\rm ff}\leq 1.10\%\)).
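For concreteness, Eqn. (33) can be evaluated directly; the sketch below uses illustrative inputs (including an assumed mean mass per hydrogen nucleus of \(1.4\,m_{\rm H}\)) rather than values taken from a specific library model:

```python
import numpy as np

G     = 6.674e-8    # cm^3 g^-1 s^-2
m_H   = 1.673e-24   # g
mu    = 1.4         # mean mass per H nucleus in units of m_H (assumed)
Myr   = 3.156e13    # s

n_cl   = 320.0              # cm^-3
rho_cl = mu * m_H * n_cl    # g cm^-3

# Free-fall time of Eqn. (33), converted to Myr.
tau_ff = np.sqrt(3.0 * np.pi / (32.0 * G * rho_cl)) / Myr

# epsilon_ff for an illustrative recollapsing model.
M_cl   = 10**6.0            # M_sun
M_star = 0.05 * M_cl        # total stellar mass formed by t_evo (assumed)
t_evo  = 30.0               # Myr
eps_ff = (M_star / t_evo) / (M_cl / tau_ff)

print(f"tau_ff = {tau_ff:.2f} Myr, eps_ff = {100 * eps_ff:.2f} %")
# -> roughly 2.4 Myr and 0.4 %, within the range quoted above.
```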
## 3 Post processing and library generation
The output from the evolutionary model is passed on to the photo-ionization code Cloudy (Ferland et al., 2017) in the second step. This allows us to produce various observables following the transition from H ii to H\({}_{2}\) regions while self-consistently accounting for gas, dust, and molecular microphysics.
Footnote 8: Available at [https://gitlab.nublado.org/cloudy/cloudy/-tree/master](https://gitlab.nublado.org/cloudy/cloudy/-tree/master),
Commit SHA: 69c3fa5871da3262341910e37c6ed2e5fb76dd3c
We use a closed, spherical geometry for all the models generated
Figure 9: Top: Contours of the minimum single burst efficiency (\(\epsilon_{\rm SF,min}\)) required to disrupt the clouds as a function of the cloud mass, density, and metallicity. At all \(\epsilon_{\rm SF}\) values smaller than this value, more than one generation of stars is present at the end of \(t_{\rm evo}\). The dashed line corresponds to \(n_{\rm cl}=320\) cm\({}^{-3}\), which is the value used in Fig. 8. Bottom: Contours of the star formation efficiency per free-fall time (\(\epsilon_{\rm ff}\)) given as percentage values for \(Z=0.02\) and a selection of \(n_{\rm cl}\) values listed above each panel along with the associated free-fall time \(\tau_{\rm ff}\). The contours are calculated according to Eqn. (33). The blue dashed line represents the minimum \(\epsilon_{\rm SF}\) required to disrupt the cloud. The white region represents the models not exhibiting recollapse.
using Cloudy. The data required for running the Cloudy models is summarized in Fig. 10. As the shell expands and sweeps its birth cloud, two gas configurations arise: (i) the shell is embedded within the cloud, or (ii) the shell has swept up the entire cloud. During this post-processing, the shell's density profile is derived within Cloudy assuming hydrostatic equilibrium. The density calculation starts from the inner face of the shell and, therefore, requires the initial density condition. This is given using Eqn. (14). We rely on Cloudy to compute the density profile with detailed chemical and thermal calculations instead of using the approximate profiles computed in Sec. 2.1.3, although, in both cases, hydrostatic equilibrium is assumed.
In order to limit the parameter space of our models, we do not account for the turbulent and magnetic pressures in the shell models. In the cases where the unswept cloud is present, the stellar spectra go through initial processing due to the shell, followed by subsequent processing of the shell output due to the unswept cloud beyond the shell. We use the dlaw table command in Cloudy, which allows the code to use arbitrary density values as a function of radius. Each of the cases where the shell is embedded in the birth cloud involves two Cloudy simulations: (i) a shell-only simulation, to obtain the density structure of the shell, and (ii) a unified shell-and-cloud simulation, in which the dlaw table command is used to input an overall density structure for the shell-cloud system. In the latter case, the density structure obtained from the shell-only simulation is augmented with a constant cloud density profile at radii beyond the shell. We assume a transition length equal to 10% of the shell depth. We note one could model the unswept cloud using the transmitted continuum from the shell as input SED; however, this is problematic as Cloudy lacks knowledge of the shell model's optical depth effects, which can result in inaccuracies (van Hoof, 2022, private communication).
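The stitching of the shell profile onto the constant-density unswept cloud for the dlaw table input can be sketched as follows; the toy shell profile and the geometric ramp are placeholders, and only the 10% transition length is taken from the description above:

```python
import numpy as np

def combined_density_profile(r_shell, n_shell, n_cloud, r_cloud, n_points=200):
    """Append a constant cloud density beyond the shell, ramping from the
    shell's outer-edge density to n_cloud over 10% of the shell depth."""
    depth = r_shell[-1] - r_shell[0]
    r_trans = np.linspace(r_shell[-1], r_shell[-1] + 0.1 * depth, 20)
    n_trans = np.geomspace(n_shell[-1], n_cloud, 20)
    r_cl = np.linspace(r_trans[-1], r_cloud, n_points)
    n_cl = np.full_like(r_cl, n_cloud)
    r = np.concatenate([r_shell, r_trans[1:], r_cl[1:]])
    n = np.concatenate([n_shell, n_trans[1:], n_cl[1:]])
    return r, n  # radius-density pairs to be written out as a dlaw table

# Toy shell profile (pc, cm^-3): dense inner edge decaying outward.
r_sh = np.linspace(10.0, 11.0, 50)
n_sh = 1e4 * np.exp(-(r_sh - 10.0) / 0.3)
r, n = combined_density_profile(r_sh, n_sh, n_cloud=320.0, r_cloud=20.0)
print(len(r), n[0], n[-1])
```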
In the case where the entire cloud has been swept into the shell, only a single simulation is required. The stellar data is consistent with that used for the shell-evolution model and is discussed in Sec. 3.1. Only a mass-based stopping criterion is employed for all models, i.e., the radial extent of the model is determined by the density structure and the total mass of the gas.
### Stellar evolutionary tracks and spectra
The current work makes use of the high mass-loss Geneva tracks (Meynet et al., 1994) in the STARBURST99 population synthesis code (Leitherer et al., 1999). These do not consider binary populations or stellar rotation. The reasoning behind this choice is two-fold: (i) they offer a better sampling of the metallicity. The newer models, including the ones considering stellar rotation (see Leitherer et al., 2014), are only available for two metallicities, \(Z=0.002\) and \(0.014\) (\(Z_{\odot}=0.014\)), whereas the older ones are available for five metallicities, \(Z=0.001,\ 0.004,\ 0.008,\ 0.02,\ \rm{and}\ 0.04\) (\(Z_{\odot}=0.02\)). (ii) For the instantaneous burst models considered in this work, the high mass-loss rates produce a better agreement with observational data when comparing emission-line diagnostics (Levesque et al., 2010). Thus, the alternative set of "standard" mass-loss tracks is not used.
The spectra used for the Cloudy models employ STARBURST99's Pauldrach/Hillier model atmospheres, which use the WMBASIC wind models of Pauldrach et al. (2001) for younger ages when O stars dominate the luminosity (\(<3\) Myr), and the CMFGEN (Hillier & Miller, 1998) atmospheres for later ages when W-R stars are dominant. A Kroupa initial mass function (IMF) between \(0.1-100\ M_{\odot}\), with a power-law break at \(0.5\ M_{\odot}\), has been employed. The power-law exponent is 1.3 for the lower mass end and 2.3 for the higher mass end. The higher mass threshold is consistent with the Auriga (Grand et al., 2017), EAGLE (Schaye et al., 2015), and IllustrisTNG (Pillepich et al., 2018) models, although they rely on the Chabrier IMF (Chabrier, 2003). The results would not change appreciably if the IMFs were interchanged as we are only concerned with the early evolution driven by massive stars, whose number does not differ between these IMFs. All other parameters are set to the default values recommended on the STARBURST99 webpage.
Footnote 9: www.stsci.edu/science/starburst99/docs/default.htm
The left panel in Fig. 11 shows the spectral hardness by means of the ratio of the rate of helium ionizing photons (first ionization, 24.6 eV) to the rate of hydrogen ionizing photons. The right panel shows the ratio of the rate of hydrogen ionizing photons to the compressive force on the shell, \(F_{\rm ram}\). This ratio scales with the ionization parameter, \(U\), during the momentum-driven phase, as discussed in Sec. 4.1.
### Chemical and dust abundances
We use the solar abundance set from Grevesse et al. (2010) (the GASS abundance set in Cloudy).
We scale the abundances according to the metallicity value of the stellar templates using the GASS value of \(Z_{\odot}=0.014\). The abundances of some specific elements are modified. For helium, we use the relation given in Dopita et al. (2002)
\[{\rm He}/{\rm H}=0.0737+0.024\,(Z/Z_{\odot})\,. \tag{34}\]
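Evaluated over the metallicity grid of Tab. 1, Eqn. (34) gives the following helium abundances (a sketch; the choice of solar normalization in the ratio \(Z/Z_{\odot}\) is an assumption here):

```python
# Helium abundance of Eqn. (34) over the model metallicity grid.
# The solar normalization in Z/Z_sun is assumed here to be the template
# value Z_sun = 0.02; the relation's original normalization may differ.
Z_sun = 0.02
for Z in (0.001, 0.004, 0.008, 0.02, 0.04):
    he_h = 0.0737 + 0.024 * (Z / Z_sun)
    print(f"Z = {Z:<5}  He/H = {he_h:.4f}")
```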
Figure 10: A schematic describing the kinds and ingredients of the Cloudy models generated for this work. The models’ input parameters describing the density at the inner face, the radius of the shell, the mass of the shell, and the age and number of stellar clusters present come directly from the shell-evolution model. Other parameters, such as the elemental abundances, metal depletion factors, the dust model, and cosmic-ray spectra are also required for these models. Depending on the mass of the shell, one or two models are run in order to generate the observables. In the case where the shell has not overtaken the entire gas cloud, a second model which includes both the shell density profile and a constant density profile of the cloud is run.
For carbon and nitrogen, we use the prescription described in Dopita et al. (2013), interpolating between the values in Tab. 3 of that work. The depletion factors used here are the "classic" Cloudy set (refer to the Cloudy documentation). We note that the depletion factors employed are independent of the metallicity of the system. Finally, we note that the aforementioned modifications to the abundances are done before applying the depletion factors.
The dust model employed in this work specifies the graphite and silicate grains with size distribution and abundance appropriate for those along the line of sight to the Trapezium stars in Orion. The grain population is modeled by a power law of index \(-3.5\), resolved by ten bins running from \(0.03-0.25\ \mu\)m. The Orion size distribution is deficient in small particles and so produces relatively grey extinction. The polycyclic aromatic hydrocarbons (PAHs) are added to the dust model as a fixed fraction of the dust mass, scaled by the H i abundance at a given location in the nebula. This is based on the assumption that PAHs are present only in PDRs, i.e., assuming PAH destruction in ionized parts of the gas cloud, while depleting onto larger grains in molecular regions. The maximum possible value for the PAH-to-dust mass fraction (\(q_{\rm PAH}\)) is taken to be the same as the Galactic diffuse dust value of 4.6% (Li & Draine, 2001; Weingartner & Draine, 2001). The PAH population is a power law of index \(-3.5\), resolved by ten bins in the range of \(4.3-11\) Å (\(30-500\) C atoms). The grain abundances are scaled along with the stellar template metallicity. Further details about the dust modeling in Cloudy can be found in van Hoof et al. (2004), Abel et al. (2008), and references therein. We note that while there is evidence of a decline in the dust-to-metal ratio at low metallicities on galaxy-wide scales and on smaller scales (Remy-Ruyer et al., 2014; Chiang et al., 2018), we adopt a constant dust-to-metal ratio in our study. This simplification is supported by the idea that regions of active star formation, where our study primarily focuses, should have a dust content that is relatively enhanced compared to the broader galactic environment (Priestley et al., 2022). Considering the limitations of current observations, it is reasonable to assume a constant dust-to-metal ratio in these star-forming regions.
Finally, for the highest metallicity models (\(Z=0.02,~{}0.04\)), we employ a cosmic ray abundance of \(2\times\) and \(5\times\) the Galactic background value, respectively, when the shell has not fully swept the natal cloud. This is done for the stability of the chemical network in the cases where the simulation goes deep into the molecular region.
### Time sampling
The temporal sampling of the library represents a balance between the number of Cloudy models and the need to resolve major changes in the physical conditions of the system. For the sake of simplicity, we employ a single time grid for the generation of a look-up table used by SKIRT. About \(15\%\) of the models in our parameter space do not dissolve by the end of the evolution period, \(t_{\rm evo}=30\) Myr. Based on this we fix the endpoint of the template library at \(t_{\rm lib}=30\) Myr. On the other hand, nearly \(80\%\) of the collapse events in our models occur before \(15\) Myr. Keeping that in mind, we deploy a higher number of points in the period between \(0-15\) Myr. For each of the models, we use \(90\) time points between \(0-30\) Myr to resolve the evolution of the system, where \(60\) points are uniformly deployed in the first \(15\) Myr, and the rest are distributed uniformly in the last \(15\) Myr. This gives us a temporal resolution of \(0.25\) Myr in the first half of \(t_{\rm lib}\), and \(0.5\) Myr in the subsequent half. At each of these time steps, we run the Cloudy model(s) as previously described to get the observables. For all the time steps where the shells have dissolved, stellar spectra without any gas/dust reprocessing are added to the look-up table.
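The resulting time grid can be reproduced in a couple of lines (a sketch; whether the first sample sits at 0 or 0.25 Myr is an assumption):

```python
import numpy as np

# 60 points at 0.25 Myr spacing over the first 15 Myr, and 30 points at
# 0.5 Myr spacing over the last 15 Myr, for 90 samples in total.
early = 0.25 * np.arange(1, 61)          # 0.25 ... 15.0 Myr
late  = 15.0 + 0.5 * np.arange(1, 31)    # 15.5 ... 30.0 Myr
t_grid = np.concatenate([early, late])
assert t_grid.size == 90
print(t_grid[:4], t_grid[-4:])
```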
## 4 H ii region diagnostics: emission line ratios and IR colors
As a means to highlight the parameter space of the model observables, we focus on two H ii region diagnostics. Firstly, the BPT diagram using the [N ii]\(\lambda\)6584/H\(\alpha\) and [O iii]\(\lambda\)5007/H\(\beta\) emission line ratios, and secondly, the dust emission continuum arising from the H ii regions, using a color-color diagram that makes use of the four IRAS bands centered at 12, 25, 60, 100 \(\mu\)m. For each of these diagnostics, we discuss the impact of the evolutionary model's free parameters (primary variables) and the connection with other parameters that result either from the evolutionary model, like the shell's radius or velocity, or from the Cloudy post-processing, like the PAH-to-gas fraction or dust temperature. We refer to the latter as secondary variables.
Figure 11: Evolution of stellar cluster properties that affect the emission from the surrounding gas as a function of metallicity. The properties assume a coeval population of stars having a well-sampled IMF (see Sec. 3.1). Left: The ratio of the rate of helium ionizing photons to that of hydrogen ionizing photons. This serves as a proxy for spectral hardness. This value decreases as the OB stars die, but it is affected by the presence of W-R stars. The extreme mass loss suffered by W-R stars can expose their stellar cores and significantly harden the ionizing spectrum of the stellar populations. The highest metallicity systems shown here show the highest spectral hardening driven by W-R stars. Right: The ratio of the ionizing photons to the compressive force at the inner face of the shell; this ratio scales with the ionization parameter during the momentum-driven phase due to the density condition given by Eqn. (14).
### BPT diagram
The classic BPT diagram (Baldwin et al., 1981) has been extensively used to classify objects based on the emission excitation mechanism. The [N ii]/H\(\alpha\)-[O iii]/H\(\beta\) BPT diagram exploits the fact that N and O have similar second ionization potentials, and the proximity of [N ii] \(\lambda\)6584 and [O iii] \(\lambda\)5007 to the hydrogen recombination lines H\(\alpha\) and H\(\beta\). This allows O\({}^{++}\) to serve as a proxy for N\({}^{++}\), implying that any increase in the O\({}^{++}\) abundance must come at the expense of the N\({}^{+}\) abundance, thus shedding light on the photo-ionizing source's spectral properties. The spectral proximity of the [O iii] and [N ii] lines to the hydrogen recombination lines makes these ratios almost completely unaffected by dust effects exterior to the ionized region.
Traditionally, the BPT diagram is populated by grids of H ii models with varying metallicity, stellar cluster age, and ionization parameter. The ionization parameter is generally defined as the value at the inner edge of the nebula and is given as:
\[U\equiv\frac{Q_{\rm H}}{4\pi R^{2}\ n_{\rm H}\ c}\,, \tag{35}\]
where \(Q_{\rm H}\) is the total ionizing photon rate incident on the inner edge. For a given density, \(U\) acts as a normalization for a given ionizing spectrum shape and combines the intensity of the ionizing source, the density, and the geometry of the gas cloud. As \(U\) folds three physical parameters into one, each can be modified while keeping the rest constant (e.g., Moy et al., 2001; Levesque et al., 2010; Byler et al., 2017). The grids generally assume no correlation among the grid parameters. In contrast, the current work introduces relations between the quantities listed above by connecting cluster evolution and the state of the gas around it through stellar feedback and gravity.
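As a quick check of the magnitudes involved, Eqn. (35) can be evaluated for representative inner-edge conditions (the numbers below are illustrative only):

```python
import numpy as np

c  = 2.998e10   # cm s^-1
pc = 3.086e18   # cm

def ionization_parameter(Q_H, R_pc, n_H):
    """U at the inner edge, Eqn. (35): Q_H / (4 pi R^2 n_H c)."""
    R = R_pc * pc
    return Q_H / (4.0 * np.pi * R**2 * n_H * c)

# Illustrative inner-edge conditions for a young shell: this combination
# yields U of order 1e-3, within the range discussed later in this section.
print(ionization_parameter(Q_H=1e50, R_pc=10.0, n_H=300.0))
```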
Fig. 12 shows how the models populate the [N ii]/H\(\alpha\)-[O iii]/H\(\beta\) plane as a function of the primary variables. The top row in this figure shows the histogram for our models in the BPT diagram space. Also shown as the green dashed curve is the polynomial fit to H\(\alpha\) spaxels found by Rousseau-Nepton et al. (2018). Examining the locus of the most frequent data points in this plot as a function of metallicity allows us to see the dual-valued nature of the [O iii]/H\(\beta\) ratio (Pilyugin & Thuan, 2005; Kewley & Ellison, 2008; Byler et al., 2017), which is driven by the fact that this ratio is a function of the oxygen abundance, but also of the temperature of the gas and the spectral hardness of the ionizing source. While increasing the metallicity of the system boosts the amount of oxygen, it also lowers the equilibrium temperature. This leads to the lower [O iii]/H\(\beta\) ratio at the higher end of the metallicity. In contrast, on the lower end, oxygen abundance is the limiting factor when it comes to oxygen emission. On the higher end of the metallicity, it is noteworthy that the amount of ionizing photons is lower due to line-blanketing in stellar atmospheres and the spectra are softer except for the W-R phase (see fig. 2). [N ii]/H\(\alpha\), on the other hand, does not show a
Figure 12: The BPT diagram with the colormaps representing the relative number of models in a given bin (top row) and median values of the primary variables (second row onward). The columns represent the different metallicity of the models as listed in the first row. In all panels, the solid black curve shows the demarcation between starburst galaxies and AGN defined in Kewley et al. (2001), the dashed black curve shows the revised demarcation from Kauffmann et al. (2003) and the green curve shown at \(Z=0.02\) is a polynomial fit to the H ii regions’ line ratios obtained by Rousseau-Nepton et al. (2018) for the nearby galaxy NGC-628.
double-valued trend with respect to the metallicity, and the locus of the most frequent data points tends to shift rightwards with increasing metallicity.
The second row in Fig. 12 shows the line ratios for our models as a function of the age of the system. Broadly speaking, the systems move from high to low values of the [O iii]/H\(\beta\) ratio as \(U\) falls with age. The BPT diagram is populated as a function of \(U\) in the first row of Fig. 13. The decreasing \(U\) leads to an increase in [N ii]/H\(\alpha\), with a saturation based on the metallicity. As mentioned previously, \(U\) has three parameters in it: the flux of the ionizing photons, the shell's radius, and the gas density. Given Eqn. (14), \(U\) for our models wraps together the gas compression due to the stellar feedback and the cluster's ionizing flux. The gas compression depends on whether the expansion is pressure-driven or momentum-driven. Following Dopita et al. (2006), one could write a scaling for \(U\) during the pressure-driven phase as:
Footnote 10: In this work, \(U\) is defined at the shell’s inner edge. We don’t use the spherical ionization parameter, evaluated at the Strömgren radius. In Cloudy, this corresponds to where the neutral Hydrogen fraction is 0.5 for radiation-bounded gas or the cloud’s edge for density-bounded gas, requiring photoionization calculations.
\[U(t)\propto\,\delta(t)^{3/2}\frac{\sum Q_{\rm H}(t)}{\sum L_{\rm mech}(t)}\,, \tag{36}\]
\(\delta(t)\) is the instantaneous ratio of the shell's inner face density to the cloud density. The presence of this factor in the above equation leads to a coupling between \(U\) and both the cloud density and the mass of the central cluster/s. In comparison, \(U\) during the momentum-driven phase is given as:
\[U(t)\propto\frac{\sum Q_{\rm H}(t)}{\sum F_{\rm ram}(t)}\,. \tag{37}\]
The summation in Eqn. (36) and (37) are due to the possibility of shell recollapse in our models, accounting for the feedback and ionizing flux from all the generations present. In the case of systems containing only a single generation of stars, Eqn. (36) shows a \(U\propto n_{\rm cl}{}^{-1/5}(\epsilon_{\rm SF}M_{\rm cl})^{1/5}\) scaling. On the other hand, \(U\) in the case
Figure 13: The BPT diagram with the colormaps representing the median values of the secondary variables. The columns represent the different metallicity of the models as listed in the first row.
of the momentum-driven phase and for a system containing a single stellar population is free from additional dependencies on stellar mass and cloud density at a given time and metallicity. \(U\) also shows a positive correlation with the fraction of ionizing photons absorbed by dust (\(f_{\rm abs,dust}^{\rm ion}\)) in H ii regions (for example, Inoue, 2002; Hunt & Hirashita, 2009). This quantity is shown in the third row of Fig. 13. In our models, at the higher end of \(U\) for metallicity values \(Z=0.02\) and \(Z=0.04\), up to 52 % of the ionizing photons can be absorbed by dust. We mention that Cloudy does not directly report \(f_{\rm abs,dust}^{\rm ion}\). To estimate this quantity, we adopt the method outlined by Draine (2011). An explanation of this method can be found in Appendix D.
The third row in Fig. 12 shows the line ratios as \(\epsilon_{\rm SF}\) is varied. This parameter controls the amount of stellar mass relative to the cloud mass that needs to be pushed by the stellar feedback. Due to the dependence on stellar mass (\(U\propto M_{\star}^{1/5}\) during the initial pressure-dominated phase), all other parameters kept the same, systems with higher \(\epsilon_{\rm SF}\) possess somewhat higher \(U\) values and thus higher values of [O iii]/H\(\beta\). Furthermore, \(\epsilon_{\rm SF}\) determines the feedback strength and, therefore, the number of generations present in the system at a given time. We discuss the impact of the presence of multiple stellar generations on \(U\) along with the discussion on cloud density and mass below.
Another interesting impact of varying \(\epsilon_{\rm SF}\) arises due to the presence of leaky H ii regions. Models with low density (\(<320\) cm\({}^{-3}\)), low mass (\(<10^{5.75}M_{\odot}\)), and high \(\epsilon_{\rm SF}\) (\(\geq 10\%\)) exhibit a tendency to sweep the entire cloud and lower the shell column densities rapidly (\(t<5\) Myr for \(Z=0.02\)) while the central cluster continues producing significant ionizing photons. This results in shells with low surface density, susceptible to ionizing radiation leakage, leading to reduced [N ii]/H\(\alpha\) values. The absence of intervening gas to soften the spectra diminishes [N ii] production. These models occupy a distinct region on the BPT diagram, especially prominent for higher metallicity systems when \(Z\geq 0.02\). The escape fraction of the hydrogen-ionizing radiation and the shell surface densities, as demonstrated in Fig. 13, further elucidate this behavior. For \(Z=0.02\), these models partially span the region defined by \(-2.5<\log([\rm N\,II]/H\alpha)<-1\) and \(-0.5<\log([\rm O\,III]/H\beta)<0.5\). For \(Z=0.04\), this region is \(-2.5<\log([\rm N\,II]/H\alpha)<-1\) and \(-2.4<\log([\rm O\,III]/H\beta)<-0.6\). Analogous models with lower metallicity reside approximately within the regions given by \(\log([\rm N\,II]/H\alpha)<-2,-2.5,-3\) and \(\log([\rm O\,III]/H\beta)>0\) for \(Z=0.008,~{}0.004,~{}0.001\), respectively. It is noteworthy that the rapid thinning of the shell, driven by the intensity of stellar feedback, offers an additional avenue by which stellar feedback modifies the line ratios, beyond the influence exerted by \(U\).
Footnote 11: This is calculated using incident and transmitted source SED in the range 1–1.8 \(E_{\rm H}\), where \(E_{\rm H}=13.6\rm{eV}\).
In contrast, low \(\epsilon_{\rm SF}\) systems with elevated density and mass tend to harbor multiple generations of stars with their shells still embedded within the unswept cloud. This characteristic manifests in notably high \(A_{\rm v}\) values for these models, as highlighted in the last row of Fig. 13.
The influence of dust on line ratios in high \(A_{\rm v}\) nebulae merits attention. Although the BPT diagram remains uninfluenced by a dust screen, additional effects are present in the case of high surface density shells that exhibit high \(A_{\rm v}\) values. Models with these characteristics are located in a distinct region of the BPT diagram. The primary reason for this positioning is the dominance of a secondary H\(\alpha\) emission, originating from non-recombination contributions outside the H ii region, over the highly attenuated emission from within the H ii region itself. Consequently, there is an increase in the denominator of the line ratios under discussion. Further details are given in Appendix E. The high \(A_{\rm v}\) branch is seen prominently in the case of \(Z=0.008,~{}0.02\), around \(-3.0<\log([\rm N\,II]/H\alpha)<-1.5\) and \(-3.0<\log([\rm O\,III]/H\beta)<0.0\). At lower metallicities, the overall \(A_{\rm v}\) is lower, while in the case of \(Z=0.04\), these models fall outside the range of the BPT diagram shown here. It is important to keep in mind that these embedded systems are unlikely to be observed in the optical due to the very high \(A_{\rm v}\).
The BPT diagram as a function of the cloud density and the cloud mass is shown in the fourth and fifth row of Fig. 12, respectively. Both of these parameters play a role in determining the gravitational binding energy of the system. The cloud mass and density impact the position on the BPT diagram by determining the number of stellar generations present in the system and the time difference between star formation events. The impact of the presence of multiple generations on \(U\) could be understood by considering that the rate of production of ionizing photons for an instantaneous burst of stars decreases as the most massive stars die, while the mechanical luminosity and the ram force decrease less rapidly as they are sustained by the SNe (cf. Fig. 2). Thus, in the case of a system with multiple generations (if the generations have a sufficient difference in age), the youngest one contributes the most to the ionizing photons, while the contribution to the denominator can come from all generations, resulting in a lower \(U\) in comparison to systems with a single population of the same age. The evolution of \(U\) as a function of the number of generations and the age of the youngest cluster in the system is shown in Fig. 14. Systems with multiple generations tend to exhibit a lower \(U\) based on the argument above. The
Figure 14: \(U\) at the shell inner edge as a function of the age of the youngest cluster for all the models in our parameter space. The shape of the curve enveloping the hexbin plot in each panel and its color is the same as that shown in the right panel of Fig. 11, but scaled as \(U=\frac{\mu_{\rm H}}{\mu}\frac{k_{\rm B}T_{\rm i}}{c}\frac{Q_{\rm H}}{F_{\rm ram}}\) using the shell's inner face density condition during the momentum-driven phase (Eqn. (14)).
larger range of \(U\) encountered during the first few Myr in Fig. 14 is due to the aforementioned dependence of \(U\) on the stellar mass and cloud mass during the pressure-driven phase. On the other hand, the later evolution is momentum-driven, where the range of \(U\) is significantly reduced. In general, \(U\) during the pressure-dominated phase is lower than that during the momentum-driven phase in our models. The highest possible \(U\) is shown by the green curve enveloping the hexbin plots in Fig. 14 and is given by \(\frac{\mu_{\rm H}}{\mu}\frac{k_{\rm B}T_{\rm i}}{c}\frac{Q_{\rm H}(t)}{F_{\rm ram}(t)}\). Given that the switch to the momentum-driven phase is faster for the densest clouds (cf. Fig. 7), they exhibit high \(U\) values in our models.
Based on Fig. 14, we also note that the \(U\) values of our models are consistent with the range of \(3\times 10^{-4}\leq U\leq 3\times 10^{-3}\) reported by Kewley & Dopita (2002). As mentioned in Sec. 2.1.3, this range of \(U\) is not reproduced if only a pressure-driven expansion is employed (Dopita et al., 2005).
Finally, a few remarks can be made on the basis of a comparison between our models and the polynomial fit to NGC-628's H ii regions. The fit (going from high to low [O iii]/H\(\beta\)) follows an increase in the gas metallicity. We note that our models within the metallicity range \(Z=0.008\) to \(0.04\) appear to be consistent with this fit. The shell expansion velocity of these models falls in the range \(5-15\) km s\({}^{-1}\), along with an \(A_{\mathrm{v}}<3\).
### IRAS colours
In this section we consider the infrared regime to identify trends in our models. This serves as a complementary diagnostic to the BPT diagram, especially for highly extincted sources. We use the four IRAS bands with wavelengths centered at 12, 25, 60, 100 \(\mu\)m to populate the \(F_{\nu}(60)/F_{\nu}(12)\) vs. \(F_{\nu}(100)/F_{\nu}(25)\) color-color plane with our models. In the following, we refer to the color-color plane as the IRAS plane, while the flux densities in the bands are simply written as \(F_{12}\), \(F_{25}\), \(F_{60}\), and \(F_{100}\). \(F_{12}\) represents the PAH emission from the PDRs, and \(F_{25}\) comes predominantly from the hot dust in ionized regions within the PAH-containing zones. \(F_{60}\) and \(F_{100}\) track the relatively cooler dust components of the shells. \(F_{60}/F_{12}\) could be considered a proxy for \(q_{\mathrm{PAH}}^{-1}\), and \(F_{100}/F_{25}\) tracks the amount of cooler dust relative to the hot dust.
Figs. 15 and 16 show the IRAS plane populated by our models as a function of the primary and secondary variables, respectively. In each sub-panel, the dashed blue line marks the boundary derived by Yan et al. (2018) to distinguish Galactic H ii regions from other sources. This criterion suggests that the points lying in the region \(\log(F_{60}/F_{12})\geq(-0.19\times\log(F_{100}/F_{25})+1.52)\) are those containing dust illuminated by young stars. In a manner akin to the analysis performed for the BPT diagram, we explore the distribution of the models across the IRAS plane in relation to the primary variables, invoking the secondary variables as needed.
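The criterion can be applied directly to model or observed band fluxes; a minimal sketch (with placeholder flux densities) is:

```python
import numpy as np

def hii_region_like(F12, F25, F60, F100):
    """Yan et al. (2018) IRAS colour criterion: True where the colours are
    consistent with dust illuminated by young stars (Galactic H ii regions)."""
    x = np.log10(F100 / F25)
    y = np.log10(F60 / F12)
    return y >= -0.19 * x + 1.52

# Illustrative band flux densities (the numbers are placeholders).
print(hii_region_like(F12=1.0, F25=5.0, F60=60.0, F100=90.0))
```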
The top row in Fig. 15 shows the histogram for our models on the IRAS plane. Overall, the models move upward and rightward with
Figure 15: Same as Fig. 12, but using the IRAS colors. The dashed blue line in each panel marks the boundary derived by Yan et al. (2018) distinguishing Milky-Way H ii regions from other sources.
increasing metallicity. We attribute this to the dust in the shell/cloud systems becoming colder and the FIR peak moving rightward with increasing metallicity. The evolution of the effective dust temperatures can be seen through the FIR SED peak shown in the second row of Fig. 16. Apart from this, the \(F_{60}/F_{12}\) is impacted by the \(F_{12}\) emission which comes predominantly from PAHs. As described in Sec. 3.2 the maximum value of \(q_{\rm PAH}\) in our models is 4.6%, but it is allowed to vary throughout the dust-containing gas based on its chemical composition, i.e., it is scaled by the atomic hydrogen abundance. At all metallicities, models with low \(q_{\rm PAH}\) tend to separate out, forming a recognizable branch with higher \(F_{60}/F_{12}\) values.
The second row of Fig. 15 shows the model IRAS colors as a function of the age of the system. The evolution of the models' colors with age can be understood by considering the instantaneous incident flux at the inner edge of the shell,
\[{\cal F}_{\rm in}=\frac{L_{\rm\star,total}}{r_{\rm sh,in}^{2}}\,, \tag{38}\]
which is directly linked to the dust temperature. This parameter is shown in the top row of Fig. 16. \({\cal F}_{\rm in}\) falls with age for systems with no recollapse events as the luminosity of the central cluster falls and the shells expand with age. Broadly speaking, models of this kind initially move up, reach a maximum, and then move to lower values of the \(F_{60}/F_{12}\) color as the FIR peak moves rightward and \(F_{60}\) decreases. Furthermore, as the shells expand and the stellar population ages, shells start to diffuse out. The diffused shells, irradiated with softer radiation after the death of the most massive stars, tend to be rich in atomic hydrogen, thus possessing a \(q_{\rm PAH}\) close to the saturation value of 4.6%, which also contributes to lowering the \(F_{60}/F_{12}\) value. The time evolution of the models on the \(F_{100}/F_{25}\) axis can also be understood in terms of the rightward movement of the FIR peak.
Figure 16: Similar to Fig. 13, but using the IRAS colors and a different set of secondary variables. The top three rows are variables relevant to IR emission, with colormaps representing the median values of \({\cal F}_{\rm in}\), peak wavelength in the \(20-1000\,\mu\)m range, and the model mass-weighted average value \(q_{\rm PAH}\). The next four rows show the IRAS plane with colormaps for shell radius, velocity, mean surface density, and the age of the youngest stellar cluster.
In contrast, systems that undergo multiple collapse events tend to have, on average, shells that are more compact during their evolution. This results in higher values of \(\mathcal{F}_{\rm in}\), allowing these models to remain above the blue demarcation line as long as the youngest cluster within the system contains massive stars. Most of these shells start infalling after they have swept through the entire cloud. Infalling shells are generally high-density, high-cloud mass systems that possess high column densities. In the case of higher metallicity systems, the shrinking radii lead to increased \(\mathcal{F}_{\rm in}\) and lower \(q_{\rm PAH}\) as the high gas and dust columns allow for the formation of molecular hydrogen. This leads to higher \(F_{60}/F_{12}\) and lower values of \(F_{100}/F_{25}\) for the systems undergoing re-collapse forming a distinct region in the IRAS plane for these metallicities (cf. row 5 in Fig. 16 for shell velocities). On the other hand, low metallicity clusters do not show such a distinct region on the IRAS plane populated by infalling shells.
The third, fourth, and fifth rows in Fig. 15 show the trends on the IRAS plane as a function of \(\epsilon_{\rm SF}\), \(n_{\rm cl}\) and \(M_{\rm cl}\), respectively. The trends on the IRAS plane due to these three parameters can be understood by considering the combinations of their extreme values in the parameter space. Fig. 17 shows the evolution of these extreme systems for \(Z=0.02\). The green line shows the evolution on the IRAS plane for the parameter trio listed at the bottom of each sub-panel. The scatter points are the results from the points adjacent to the mentioned trio on both the lower and higher sides in the parameter space. The values at the top right and top left corners are the times at which the bubble begins the transition to the momentum-driven phase (for the first time if recollapse/s occur), and the dissolution time for the shell, respectively. To facilitate comparison with observational data, the histogram representing the IRAS colors of Galactic H ii regions, as compiled by Yan et al. (2018), is presented as grey hexbins in each sub-panel. It can be gathered from Fig. 17 that shells formed from low-mass clouds have a tendency to move away from the Yan et al. (2018) demarcation and towards higher \(F_{60}/F_{12}\) values as they evolve. This is due to the falling \(q_{\rm PAH}\) as these clouds are rapidly thinned out. Only in the cases where the parent clouds are dense and the star formation efficiency is low are these shells optically thick to the ionizing radiation throughout their lifetime. In contrast, the shells formed out of high-mass clouds tend to remain optically thick throughout their lifetimes, except for the shells carved out of the lowest-density clouds containing the highest stellar content. The higher overall \(q_{\rm PAH}\) keeps these kinds of shells from possessing high \(F_{60}/F_{12}\) as the shell expands. High \(F_{60}/F_{12}\) values are only encountered in the high-density, low star formation efficiency cases where the system exhibits recollapse events. As these shells live longer than their low-mass counterparts, if the stellar feedback is successful, they are quite diffuse at later times and exhibit low \(F_{100}/F_{25}\). Without a detailed comparison with the observational data, we note that the parameter space of our \(Z=0.02\) models is able to capture the variety in the IRAS colors of Galactic H ii regions.
## 5 Integrating TODDLERS into SKIRT
The inclusion of the observables from the Cloudy post-processing in SKIRT consists of two steps:
1. Normalisation of the data. This is to make the library widely applicable to simulations of varying mass resolutions.
2. The generation of SEDs including the stellar, nebular, and dust continua, with high resolution around the included emission lines.
We describe these two steps below.
### Star-forming cloud complexes
A key objective of this work is to generate a library for post-processing simulated galaxies of different resolutions using SKIRT.
Figure 17: The time evolution of the models at the extremes of the parameter space, as discussed in Sec. 4.2. In each sub-panel, the black triangle marks the start of the evolution, and the blue one represents the end. The dissolution time (in Myr) is given on the top left; a value of 30 corresponds to the shell not being dissolved during the period of evolution. The cross is the point when the shell switches from being pressure-driven to momentum-driven, which for the lowest density cases is also the moment when the shell has swept the entire cloud. The associated time (in Myr) is given on the top right. For systems undergoing multiple shell re-collapse events, only the first switch time is given. The text at the bottom specifies the model parameters in the order \(\epsilon_{\rm SF}[\%]\), \(n_{\rm cl}[{\rm cm}^{-3}]\), \(\log(M_{\rm cl}/M_{\odot})\). The hexbins represent the histogram for Galactic H ii regions as given in Yan et al. (2018).
In order to do so, we generate star-forming complexes consisting of a family of clouds (referred to as the components of a cloud complex) for each point in the parameter space.
We assume that stellar clusters originate in a cloud population that follows a power-law distribution in masses (see the discussion in Heyer and Dame, 2015, and references therein). This power-law is given as:
\[\frac{\mathrm{d}N_{\mathrm{cl}}}{\mathrm{d}M}\propto M^{\alpha_{\mathrm{cl}}} \quad\mathrm{with}\ M\in\left[10^{5},10^{6.75}\right]\mathrm{M}_{\odot},\ \mathrm{and}\ \alpha_{\mathrm{cl}}=-1.8 \tag{39}\]
We then normalize the observables by the stellar mass present in the system at that time, making the particle mass simply a scaling factor for the assigned SED. By doing so, we aim to conserve the total mass of young stellar particles in a given simulation being post-processed.
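One way to realize such a complex on the discrete cloud-mass grid of Tab. 1 is to weight each component by the number of clouds the power law of Eqn. (39) assigns to its 0.25 dex mass bin; the sketch below illustrates this idea and is not the library's exact bookkeeping:

```python
import numpy as np

alpha = -1.8
logM = np.arange(5.00, 6.75 + 1e-9, 0.25)   # component cloud masses of Tab. 1
M = 10**logM

# Relative number of clouds per 0.25 dex mass bin: dN/dlogM ∝ M^(alpha + 1).
n_rel = M**(alpha + 1.0)
n_rel /= n_rel.sum()

# Fraction of the complex's gas mass carried by each component.
m_frac = n_rel * M / np.sum(n_rel * M)

for lm, nr, mf in zip(logM, n_rel, m_frac):
    print(f"log M = {lm:4.2f}   N fraction = {nr:.3f}   mass fraction = {mf:.3f}")
```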
We opted for a power-law population to minimize the models' parameter space while maximizing the utilization of simulation data. This approach allows us to directly extract information on age, metallicity, and star and cloud density from the simulation. The cloud density could potentially be estimated from the cold gas density derived in the sub-grid effective equation of state. However, it is important to note that alternative methods for reducing and averaging over the parameter space could be conceived. For instance, one could use an age average, average over a log-normal density distribution (for example Burkhart et al., 2015; Kobayashi et al., 2022), or relate cloud mass and density through Larson's laws (Larson, 1981), among other possibilities. This approach leads to a realistic representation of star-forming regions, consisting of several embedded and unembedded sources, including those containing multi-generation stellar populations. We note that although such a cloud complex generation is a physically motivated way of normalizing the SEDs assigned to simulation particles, it effectively reduces the spatial resolution by carrying out a mass-weighted average over individual clouds. An effect of this, for example, is on the emergent H\(\alpha\) luminosity from a star-forming complex. As the emergent H\(\alpha\) luminosity is going to be dominated by unembedded components, the Balmer decrement-based correction would lead to an underestimation of intrinsic H\(\alpha\) for the particle. This effect is similar to the one discussed in Vale Asari et al. (2020). Therefore, individual cloud models could be used for applications requiring higher resolution, which are also available to use in SKIRT.
Fig. 18 shows the stellar mass normalized emergent H\(\alpha\) luminosity for a selected part of our parameter space. In any given row, moving from left to right increases the density by a factor of four, while moving top to bottom in the same column increases the star-formation efficiency. Both the emergent and intrinsic H\(\alpha\) line luminosities are shown for two metallicities, \(Z=0.02\) and \(0.004\). In Fig. 18, increasing the density at a given \(\epsilon_{\mathrm{SF}}\) increases the likelihood of re-collapse events and the overall dissolution of the complex takes longer. The bumps in the intrinsic H\(\alpha\) luminosity are seen when one of the component shells undergoes a re-collapse event. The emergent H\(\alpha\) does not necessarily show an immediate rise in its value due to high \(A_{\mathrm{v}}\) values which occur at large shell densities (see Fig.13) and the presence of the unswept cloud around the shells after they are initially formed. Note that metallicity is directly linked to the amount of dust in our models, and high metallicity models show an overall higher extinction. Increasing the \(\epsilon_{\mathrm{SF}}\) at a fixed density, on average, pushes out the gas more rapidly leading to a quicker dissolution of the component shells. The presence of enhanced Ly\(\alpha\) pressure in the
Figure 18: The emergent (solid curves) and intrinsic (dashed curves) stellar mass normalized H\(\alpha\) luminosity from various star-forming complexes consisting of a population described in Sec. 5.1 with \(Z=0.02\) (red) and \(Z=0.004\) (purple). The cloud density from left to right: \(n_{\mathrm{cl}}=20,\,80,\,320,\,1280\) cm\({}^{-3}\); star formation efficiency from top to bottom: \(\epsilon_{\mathrm{SF}}=2.5,\,5,\,7.5,\,10\,\,\%\).
low metallicity case leads to a faster dissolution of the component clouds, especially at the higher-density and intermediate to high star formation efficiency end (cf. \(t_{\rm evo}\) in Figs. 7 and 8).
Fig. 19 shows examples of the evolution of the UV-mm SED for \(Z=0.004\) (top row) and \(Z=0.02\) (bottom row) for the parameters in the second row of Fig. 18, excluding the lowest density case of 20 cm\({}^{-3}\). In the two lowest-density cases, where the population of shells does not exhibit any recollapse, the time evolution is dictated by a monotonic expansion of the component shells and the aging cluster population. This lowers the UV extinction, and the overall dust temperatures fall, shifting the IR peak rightward (see the discussion in Sec. 4.2). Increasing the density leads to an overall higher \(\mathcal{F}_{\rm in}\), due to more compact components and the likelihood of re-collapse, and thus a lower rate of fall of the dust temperature. At \(Z=0.02\), in the highest density case shown here, all components with \(M_{\rm cl}>10^{6.0}\)\(M_{\odot}\) show a re-collapse, while this threshold moves up to \(10^{6.25}\)\(M_{\odot}\) at \(Z=0.004\). For the higher metallicity case, the presence of populations younger than 10 Myr in the highest density case is reflected by the less prominent red supergiant near-IR hump at 10 Myr.
It is worth noting that different wavelength ends of the UV-mm SED track different mass components of the star-forming cloud complex. The lower-mass shells, which expel the gas and thin out most rapidly, contribute to the UV end, while the embedded components emit predominantly in the IR. This effect is seen in the UV slope being nearly the same at the same age in all three densities, while the normalizations are different. The highest-density case, where a significant portion of the complex remains embedded at all times shown, exhibits lower UV. A small fraction of unembedded clusters can thus have an outsized impact on the UV slope (see the discussion in Popping et al., 2017).
### Line and continuum emission integration
We include the line emission from the mass normalized star-forming cloud complexes in SKIRT by converting the line luminosities into a continuum SED by assigning a Gaussian profile. The linewidth is selected to ensure that the lines in our list do not blend. The Gaussian profile is truncated at \(4\sigma\), which means that \(\Delta\lambda=4\sigma\). This choice of \(\Delta\lambda\) is made by a comparison with a triangular profile of a resolution \(R=5\times 10^{4}\). To resolve the Gaussian profile, \(\Delta\lambda\) is sampled at 37 equidistant points, implying that we achieve a spectral resolution for the lines that exceeds \(5\times 10^{4}\).
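To make the construction concrete, the following is a minimal sketch (not the actual TODDLERS/SKIRT code) of how a single line luminosity can be spread over such a truncated, discretely sampled Gaussian; the velocity width used here is purely an illustrative assumption standing in for the blending criterion described above.

```python
import numpy as np

C_KMS = 299_792.458  # speed of light [km/s]

def gaussian_line_profile(lam0_um, line_lum, sigma_v_kms=25.0, n_points=37):
    """Spread an integrated line luminosity over a truncated Gaussian.

    The profile is cut to a window Delta_lambda = 4 sigma and sampled at
    n_points equidistant wavelengths; sigma_v_kms is an assumed linewidth,
    chosen only so that neighbouring lines do not blend.
    """
    sigma = lam0_um * sigma_v_kms / C_KMS                # Gaussian sigma [micron]
    delta_lam = 4.0 * sigma                              # truncation window
    lam = np.linspace(lam0_um - 0.5 * delta_lam,
                      lam0_um + 0.5 * delta_lam, n_points)
    profile = np.exp(-0.5 * ((lam - lam0_um) / sigma) ** 2)
    profile /= profile.sum() * (lam[1] - lam[0])         # unit integral (rectangle rule)
    sampling_resolution = lam0_um / (lam[1] - lam[0])    # ~9 c / sigma_v, i.e. > 5e4 here
    return lam, line_lum * profile, sampling_resolution

# e.g. H-alpha with an (invented) luminosity of 1e6 L_sun:
lam, l_lambda, R = gaussian_line_profile(0.65628, 1.0e6)
```

The resulting luminosity densities would then be co-added with the continuum on a common wavelength grid before being handed to the radiative transfer.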
This approach of slightly broadening the lines is valid given that the major sources of line broadening in a simulated galaxy are likely to be the bulk motion of a simulated particle and the sub-grid gas motions in a complex. The bulk motion of the different particles with respect to each other and with respect to the observer is taken care of within SKIRT, given the 3D velocity vector of each particle (Camps and Baes, 2020). The second source of Doppler broadening linked to the sub-grid motions of the gas can also be accounted for within SKIRT by assigning a user-defined value for each emitting entity. Given that the requisite data concerning sub-grid motions of the gas can be straightforwardly derived from the shell velocities computed in Sec. 2 and incorporated via the SKIRT interface, we opted not to include its broadening effect on the lines in the SKIRT tables.
We add about 150 lines emanating from various phases of the gas. The list is essentially a merger of the line lists appropriate for low-density H ii regions, PDRs, and the molecular gas supplied with
Figure 19: UV–mm SED emerging from star-forming complexes described in Sec. 5.1. Top row: \(Z=0.004\), bottom row: \(Z=0.02\). From left to right, the densities are 80, 320, 1280 cm\({}^{-3}\) with the star formation efficiency fixed at 5%. The SEDs shown here have a spectral resolution \(R=3000\).
the version of Cloudy used in this work. The consolidated list can be found at www.toddlers.ugent.be. We directly use the stellar, nebular, and dust continua reported by Cloudy which have a spectral resolution of \(R\approx 300\).
## 6 Comparison with HiiM3
In this section, we compare the TODDLERS library as implemented in SKIRT with HiiM3, focusing on their outputs in the MIR-FIR wavelength regime. A direct comparison is made difficult by the wide variety of differences that exist between the two libraries. TODDLERS uses physical parameters (\(t,Z,\epsilon_{\rm SF},n_{\rm cl}\)) and considers a finite gas reservoir, with the forces of gravity and a shell state-dependent external pressure accounted for in the dynamical evolution. The evolution can be pressure- or momentum-driven, with the possibility of shell re-collapse in the momentum-driven phase. On the other hand, the parameters in HiiM3 (\(Z,f_{\rm PDR},\log C,P_{\rm ISM}\)) result from the evolution of an adiabatic, pressure-driven bubble without gravity. For such systems, combinations of the cluster's stellar mass and the external gas pressure serve as a scaling on the ionization parameter and control the dust temperature. The combination controlling the dust temperature in the ionized region is \(\log C\), which is given as:
\[\log C=\frac{3}{5}\log\left(\frac{M_{\star}}{M_{\odot}}\right)+\frac{2}{5}\log \left(\frac{P_{\rm ISM}/k}{\rm cm^{-3}~{}K}\right). \tag{40}\]
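As an illustrative evaluation of Eq. (40) (the cluster mass below is an assumed example value, not one taken from HiiM3 itself):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def log_compactness(m_star_msun, p_ism_pa):
    """Evaluate Eq. (40): log C from the cluster stellar mass and the ISM pressure."""
    p_over_k = p_ism_pa / K_B * 1e-6   # P/k converted from m^-3 K to cm^-3 K
    return 0.6 * math.log10(m_star_msun) + 0.4 * math.log10(p_over_k)

# For the P_ISM = 1e-12 Pa adopted in the comparison below, P/k ~ 7.2e4 cm^-3 K, so an
# assumed 1e5 M_sun cluster gives log C ~ 4.9, close to the "Fiducial" log C = 5.0
# marked in Fig. 20.
print(round(log_compactness(1.0e5, 1.0e-12), 2))   # -> 4.94
```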
Increasing \(\log C\) at a fixed \(P_{\rm ISM}\) increases the stellar mass in the system and leads to hotter dust, moving the IR peak leftward. \(f_{\rm PDR}\) is a time-averaged quantity accounting for the absorption of radiation arising from the ionized region by a surrounding neutral medium (similar to our neutral cloud around the shell). Cases in which the PDR entirely surrounds the ionized region correspond to \(f_{\rm PDR}=1\), while completely uncovered ionized regions have \(f_{\rm PDR}=0\). Increasing \(f_{\rm PDR}\) towards 1 leads to stronger PAH emission and cooler dust on account of increasing UV absorption. The two libraries also differ in their dust models, e.g., TODDLERS uses a dust mix with a lower-end cutoff in grain sizes at 0.03 \(\mu\)m, while this value is significantly lower in HiiM3 at 0.004 \(\mu\)m. PAHs in TODDLERS are not associated with ionized or molecular gas, whereas HiiM3 includes them in the molecular cloud covering the ionized region. Further differences in the gas density law also exist; we refer the reader to the discussion in Sec. 3 and that in Groves et al. (2008). We remark that although this list of differences is useful, it is unlikely to be exhaustive.
Due to the wide array of differences that exist between the two libraries, we compare them by simply contrasting a selected set of observables mapped by their input parameters. For each of the two libraries, we scan their respective parameter spaces. The parameter values studied are listed in Tab. 2. In the case of HiiM3, we fix \(P_{\rm ISM}\) to a value of \(10^{-12}\) Pa. Changing \(P_{\rm ISM}\) at a fixed \(\log C\) does not impact the broadband continua, which are the basis of the comparison here. We also note that \(f_{\rm PDR}\) is only available at the values of 0 and 1, and is linearly interpolated at all other values in between. We restrict the comparison of the two libraries to the metallicity regime that represents massive galaxies in the nearby universe and the Milky Way. This is also the regime where the majority of the past work (Baes et al., 2019; Trcka et al., 2020, 2022; Kapoor et al., 2021; Camps et al., 2022) has recognized the shortcomings of HiiM3. Therefore, we consider three metallicities, \(Z\in[0.008,0.02,0.04]\), to confront the results from the two libraries with observational data. We mention that the highest metal fraction available in the case of HiiM3 is 0.025; thus, in all the plots where a comparison is made with the TODDLERS value of \(Z=0.04\), HiiM3 data is missing. As HiiM3 considers a luminosity-weighted average of various ages between \(0-10\) Myr, we generate fluxes for TODDLERS by uniformly sampling the period between \(0-30\) Myr. We note that the SKIRT implementation of TODDLERS is normalized by the stellar mass of the system (see Sec. 5), while HiiM3 is normalized by the SFR. Thus, for an arbitrary SFR, we ensure that \(\rm SFR[\it M_{\odot}\rm yr^{-1}]\times 10^{7}[\rm yr]=\it M_{\rm TODDLERS}\). In practice, we use an SFR of unity and sample \(10^{4}\) emitting entities with ages lying in the range \(0-30\) Myr; the SED of each is then scaled by the mass \(\it M_{\rm TODDLERS}/10^{4}\). We focus on the comparison in the IR regime and use two comparison strategies: (i) the IRAS color plane, similar to the one discussed in Sec. 4.2; (ii) the MIR--FIR colour plane using the IRAC 8 \(\mu\)m, MIPS 24 \(\mu\)m, PACS 70 \(\mu\)m, and SPIRE 500 \(\mu\)m bands. We mention that the IR bands used here are affected by the presence of diffuse dust outside the star-forming
Figure 20: The IRAS colour-colour diagram for TODDLERS and HiiM3. The red and blue colors represent the free parameters for the TODDLERS library, \(n_{\rm cl}\) and \(\epsilon_{\rm SF}\), respectively. The green and orange colours represent the free parameters of HiiM3, \(\log C\) and \(f_{\rm PDR}\), respectively. Increasing opacity of the curves represents increasing values of the variables, which are given in Tab. 2. The data is missing at \(Z=0.04\) for HiiM3, where the upper limit is \(Z=0.025\). The star labeled as “Fiducial” gives the colors for HiiM3 parameters \(\log C=5.0\) and \(f_{\rm PDR}=0.2\). These numbers have been employed by various authors as their fiducial values when post-processing simulated galaxies using HiiM3 (Jonsson et al., 2010; Rodriguez-Gomez et al., 2019; Vogelsberger et al., 2020; Trcka et al., 2022). The grey hexbins represent the Galactic H ii regions discussed in Yan et al. (2018). The colormap gives the median value of the total flux density in the four IRAS bands.
regions, but it is worthwhile to investigate the behavior of our models in isolation. A detailed comparison using simulated galaxies, including the impact of diffuse dust, is the subject of the second paper in this series. As the UV emission is also subject to attenuation by diffuse dust, we also defer the comparison in the UV regime to that paper.
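As a concrete illustration of the age sampling and mass normalization described above, the sketch below builds a luminosity-weighted SED from single-age templates; `single_age_sed` is a hypothetical stand-in for a lookup into the tabulated TODDLERS models, not an actual SKIRT or TODDLERS call.

```python
import numpy as np

def composite_sed(single_age_sed, sfr_msun_per_yr=1.0, n_samples=10_000, seed=0):
    """Luminosity-weighted SED for a constant SFR, following the text above.

    single_age_sed(age_myr) -- placeholder returning the SED (an array) of a
    star-forming complex per unit stellar mass at the given age.
    """
    m_toddlers = sfr_msun_per_yr * 1.0e7        # SFR [M_sun/yr] x 1e7 yr = M_TODDLERS
    rng = np.random.default_rng(seed)
    ages = rng.uniform(0.0, 30.0, n_samples)    # uniform sampling over 0-30 Myr
    sed = sum(single_age_sed(age) for age in ages)
    return sed * (m_toddlers / n_samples)       # each entity carries M_TODDLERS / 1e4
```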
### IRAS colours
Following the discussion in Sec. 4.2, we populate the IRAS plane with the luminosity-weighted models, as shown in Fig. 20. The TODDLERS parameter space is shown in red-blue, while that of HiiM3 is shown in orange-green. Higher opacity of the curves is associated with higher variable values, which are listed in Tab. 2. The Hii region demarcation is shown along with the colors from individual Galactic Hii regions. The individual Hii regions' data is shown as hexbins of the median value of the total flux density in all four IRAS bands, serving as a proxy for the total-IR flux density. Note that this dataset comprises individual Hii regions, likely composed of different cluster ages, stellar/gas masses, and metallicities. In Fig. 17, we showed that the individual TODDLERS models are able to span the range of colors exhibited by the observational data. In contrast, Fig. 20 shows time-averaged model data where the colors are dominated by the younger, brighter components of the mix. Thus, the observational data is shown simply to give an idea of the parameter space on the IRAS plane.
The two models cover somewhat different regions on the IRAS plane, with the HiiM3 data generally exhibiting lower values of both \(F_{100}/F_{25}\) and \(F_{60}/F_{12}\) than TODDLERS. The TODDLERS models occupy the region marked by bright Galactic Hii regions, with a better coverage of the IRAS plane. The offset between HiiM3 and TODDLERS is in part driven by two factors: (i) the differences in the lower end of the grain size distribution. The model employed in TODDLERS lacks non-PAH small grains which could emit efficiently at 25 \(\mu\)m (see, e.g., Robitaille et al. (2012) for the dust size dependence of emission in various bands). In contrast, the larger grains in our model emit predominantly at longer wavelengths, shifting the TODDLERS data upwards and rightwards. (ii) In Dopita et al. (2005), the mechanical luminosity is reduced by a factor of 10 to ensure that the bubble expands slowly and the internal pressure remains low. We suspect that this alteration of the dust radius relative to the stellar cluster could also be playing a role in driving the higher dust temperatures in the case of HiiM3, resulting in the different regions of the IRAS color-color plot covered by TODDLERS and HiiM3.
### MIR-FIR colours
We further examine the shape of the IR SED using the color-color plot employing the IRAC 8 \(\mu\)m, MIPS 24 \(\mu\)m, PACS 70 \(\mu\)m, and SPIRE 500 \(\mu\)m bands. This is inspired by the work carried out by Calzetti et al. (2018); Gregg et al. (2022), where this colour-colour plot is generated at a kpc resolution for nearby galaxies. For the rest of this section, we use the notation \(F_{n}\) to represent \(\nu F_{\nu}(n)\), the flux in a band with pivot wavelength \(n\).
Fig. 21 shows how the two libraries cover this plane. We also show the relation found by Calzetti et al. (2018) along with the colors for individual pixels for the “Regime-1” galaxies discussed in Gregg et al. (2022), which exhibit uniform and high star-formation surface densities. Note that the curve from Calzetti et al. (2018) uses \(F_{1100}\) instead of \(F_{500}\); here we have made the conversion based on the factor given in Gregg et al. (2022).
As we compare the trends on this color-color plot, it is worth noting that \(F_{500}\) and \(F_{8}\) are impacted by the dust emission originating outside of the star-forming regions. The observational sample, despite being the one with high star-formation surface density, is affected by the presence of cold dust (\(F_{500}\)) and, to some extent, dust heating by
\begin{table}
\begin{tabular}{l c} \hline Parameter & Values \\ \hline \multicolumn{2}{c}{TODDLERS} \\ \(Z\) & 0.008, 0.02, 0.04 \\ \(n\) [cm\({}^{-3}\)] & 10.0, 20.0, 80.0, 320.0, 640.0, 1280.0, 2560.0 \\ \(\epsilon_{\rm SF}\) [\%] & 1.0, 2.5, 5.0, 7.5, 10, 12.5, 15.0 \\ Age & Uniform sampling between 0-30 Myr \\ \hline \multicolumn{2}{c}{HiiM3} \\ \(Z\) & 0.008, 0.02 \\ \(P_{\rm ISM}\) [Pa] & \(10^{-12}\) \\ \(f_{\rm PDR}\) & 0.0, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0 \\ \(\log C\) & 4.0, 4.5, 5.0, 5.5, 6.0, 6.5 \\ \hline \end{tabular}
\end{table}
Table 2: The parameters used for comparing TODDLERS and HiiM3.
Figure 21: Same as Fig. 20, but for the MIR-FIR colour-colour plane discussed in Sec. 6.2. The grey hexbins represent the colors from the “Regime-1” galaxies discussed in Gregg et al. (2022). The hexbin colormap depicts the number of spaxels in a given bin. The black broken curve in each panel represents the relation from Calzetti et al. (2018).
old stars (\(F_{8}\)). Our model data will move downwards and slightly rightwards once the dust emission originating outside of our model star-forming regions is taken into account. This trend is also expected based on the nature of the Calzetti et al. (2018) curve, where the addition of diffuse, cooler dust moves the data points rightwards and downwards.
In the models considered in Fig. 21, an increase in \(n_{\rm cl}\) and/or \(\epsilon_{\rm SF}\) leads to an increase in the luminosity-weighted \({\cal F}_{\rm in}\), resulting in an increase in \(F_{\rm 70}/F_{\rm 500}\), a trend observed with increasing \(\log C\) (and to some extent with increasing \(f_{\rm PDR}\)) in the case of HiiM3. The lower \(\epsilon_{\rm SF}\) systems tend to have a higher number of shell recollapse events, leading to an increasingly narrow range of values in \(F_{\rm 70}/F_{\rm 500}\), as the effective star formation efficiency (\(M_{\star}/M_{\rm cl}\) at the end of 30 Myr) can reach values closer to those of higher \(\epsilon_{\rm SF}\) models. The HiiM3 parameter \(f_{\rm PDR}\) is completely decoupled from the evolution of the bubbles and serves as a geometrical factor on the non-ionizing UV absorbed by the molecular cloud, assuming a fixed column depth. At a fixed \(\log C\), this leads to models moving rightward and downward on the MIR-FIR plane due to the increasing contribution of PAHs and cold dust. In contrast, TODDLERS calculates the IR emission in a single model, and column depths are affected by the state of the component shells of the star-forming complexes. The youngest and the most massive components tend to have the highest impact on the IR SED. These components are the ones that are still embedded in their birth cloud and have high dust column depths. If we consider \(F_{8}/F_{24}\) as a proxy for \(q_{\rm PAH}\), at lower densities the unswept clouds are more diffuse and contain neutral hydrogen. This leads to a higher luminosity-weighted PAH abundance, moving the models rightward with decreasing \(n_{\rm cl}\) at a fixed \(\epsilon_{\rm SF}\). Increasing \(\epsilon_{\rm SF}\) at a fixed \(n_{\rm cl}\) tends to decrease the PAH fraction, leading to leftward movement on the MIR-FIR plane.
On this MIR-FIR color plane, the two libraries cover markedly different regions, with the TODDLERS library showing a closer match to the observational colors. We confirm that this trend is also seen in the colors generated with simulated galaxies accounting for dust outside the star-forming regions (the subject of Paper 2 in this series). We attribute the large differences in \(F_{8}/F_{24}\) to the differences in the dust models used in the two libraries; in particular, the dust model employed in HiiM3 appears to be more emissive around the 24 \(\mu\)m band.
## 7 Summary and Outlook
In this work, we have presented a new emission library, TODDLERS, with the primary aim of producing time-dependent emission diagnostics from gas and dust around young stars for simulated galaxies. To achieve this, we have run a large suite of semi-analytic calculations that allow us to infer the gas-star geometry as the gas evolves under stellar feedback. The calculations assume a cloud with a constant density profile and a finite mass evolving under the influence of the feedback of a central cluster. This idealized approach allows us to sweep a large parameter space while accounting for complex feedback physics in a simplified fashion. The dominant stellar feedback channel evolves as a function of metallicity. At high metallicities, the gas is pushed predominantly by stellar winds and the subsequent SNe, whereas the dominant feedback channel at metallicities below \(Z\lesssim 0.004\) is the Ly\(\alpha\) radiation pressure due to multiple resonant scatterings. This is especially the case at the higher end of cloud densities and masses, where there is ample neutral gas for prolonged periods. At higher metallicities, the role of Ly\(\alpha\) radiation pressure is subdominant but non-negligible. The clues from this idealized Ly\(\alpha\) radiation pressure setup point to the importance of this feedback channel in young star-forming clouds, warranting more detailed studies, even at higher metallicities.
The semi-analytic calculations are passed on to Cloudy to carry out calculations involving detailed chemistry, enabling us to produce time-dependent UV-mm observables. The tabulated observables include emission lines originating from the ionized, photo-dissociation, and molecular regions along with the nebular, stellar, and dust continuum emission. We have used the BPT diagram and the IRAS color-color diagram to map the parameter space of the models onto the observable space, demonstrating that they populate the expected regions when compared to observational data from nearby galaxies and the Milky Way. We have integrated these observables into SKIRT for use in post-processing simulations in which star-forming regions are not resolved, by assuming a power-law cloud population. We carried out a comparison of the IR colors produced using this TODDLERS implementation with those of HiiM3, which until now was the only star-forming regions' library available in SKIRT. When confronted with observations, the TODDLERS IR colors are in better agreement with the data than those of HiiM3. In a companion paper, we use TODDLERS to produce UV-submm diagnostics for simulated galaxies, enabling a direct and detailed comparison of broadband and line-emission data from simulated galaxies with those from recent observational studies, such as Gregg et al. (2022) and Groves et al. (2023).
This work serves as a proof of concept: we followed the evolution of homogeneous clouds using a single set of stellar templates, and the observable generation employs a single chemical abundance set and dust model. Thus, several areas remain open for exploration within the framework employed in this work. As far as the evolutionary model is concerned, the cloud density profile dictates its binding energy, and modifying it could lead to significant changes in the system's evolution (Rahner et al., 2019). For example, consider two clouds with identical mean density and mass. One exhibits a Bonnor-Ebert profile with a high-density core (Ebert, 1955; Bonnor, 1956), while the other has a constant density. The cloud with the Bonnor-Ebert profile possesses a greater gravitational binding energy than its constant-density counterpart, making it more resistant to destruction.
We have assumed that the unswept cloud is not dynamically affected by stellar feedback, which, for example, lowers the effective outward momentum deposition at low metallicities when the ionization front lies outside the shell. We have also neglected the density and velocity gradients that would appear as a result of the Ly\(\alpha\) feedback, which could lower the gas columns seen by Ly\(\alpha\) photons in non-trivial ways. While the problem of how feedback affects the gas in star-forming regions is intrinsically 3D, relaxing these assumptions in spherical symmetry would be an interesting iteration of the current work (see, for example, Kourniotis et al., 2023).
We have shown that one of the key parameters that determine the emission properties of a nebula, \(U\), is critically dependent on the properties resulting directly from the stellar templates (\(Q_{\rm H}\) and \(F_{\rm mm}\)). At the same time, the dust temperature is a function of \({\cal F}_{\rm in}\), which depends on the stellar feedback. Such dependencies demand a thorough investigation of the results presented here while changing the stellar library. Modifying the stellar library could mean, among other things, varying the initial elemental abundances, the physical processes employed in the stellar models, and its IMF. Grasha et al. (2021) have shown the need to use abundance sets beyond the metallicity-scaled solar abundances for massive stars, especially due
to their effects on the line emission at the lower metallicity end. Similarly, the presence of stellar rotation, binarity, and changes in the IMF play important roles in controlling the ionizing photon production rate and mass loss from the stellar clusters (Leitherer et al., 2014; Stanway & Eldridge, 2019). Such model choices will have an impact on the various feedback channels incorporated here, leading to interesting effects on the emission lines and dust emission. Thus, considering variations of the stellar templates based on the abundance sets, physical processes, and IMF represents an interesting avenue to explore.
Another intriguing prospect is to consider variations in the dust-to-metal ratio as a function of metallicity and environment, which are expected based on numerical modeling and observations (Asano et al., 2013; Remy-Ruyer et al., 2013; Schneider et al., 2016; De Vis et al., 2019). The computational efficiency offered by the 1D calculations makes it feasible to generate models with the aforementioned variations; we plan to pursue this in the future.
## 8 Data availability
The TODDLERS library for post-processing galaxy formation simulations is available for download along with the SKIRT code. All other data used in this work, including the Cloudy output for individual models, are publicly available at www.toddlers.ugent.be.
## 9 Acknowledgements
We thank Ilse De Looze, Jeremy Chastenet, and Brian Van Den Noortgate for useful discussions. We thank Benjamin Gregg for providing the MIR-FIR colors' data from the KINGFISH sample. We thank Qing-Zeng Yan for providing the IRAS colors' data. AUK, AN, MB acknowledge the financial support of the Flemish Fund for Scientific Research (FWO-Vlaanderen). The simulations carried out for this work used the Tier-2 facilities of the Flemish Supercomputer Center ([https://www.vscentrum.be/](https://www.vscentrum.be/)) located at Ghent University. We are grateful to the Cloudy team members, particularly Gary Ferland and Peter van Hoof, for addressing numerous questions on the Cloudy online forum. We also thank the anonymous referee for their valuable comments and suggestions.
|
2309.13297 | OATS: Opinion Aspect Target Sentiment Quadruple Extraction Dataset for
Aspect-Based Sentiment Analysis | Aspect-based sentiment analysis (ABSA) delves into understanding sentiments
specific to distinct elements within a user-generated review. It aims to
analyze user-generated reviews to determine a) the target entity being
reviewed, b) the high-level aspect to which it belongs, c) the sentiment words
used to express the opinion, and d) the sentiment expressed toward the targets
and the aspects. While various benchmark datasets have fostered advancements in
ABSA, they often come with domain limitations and data granularity challenges.
Addressing these, we introduce the OATS dataset, which encompasses three fresh
domains and consists of 27,470 sentence-level quadruples and 17,092
review-level tuples. Our initiative seeks to bridge specific observed gaps: the
recurrent focus on familiar domains like restaurants and laptops, limited data
for intricate quadruple extraction tasks, and an occasional oversight of the
synergy between sentence and review-level sentiments. Moreover, to elucidate
OATS's potential and shed light on various ABSA subtasks that OATS can solve,
we conducted experiments, establishing initial baselines. We hope the OATS
dataset augments current resources, paving the way for an encompassing
exploration of ABSA (https://github.com/RiTUAL-UH/OATS-ABSA). | Siva Uday Sampreeth Chebolu, Franck Dernoncourt, Nedim Lipka, Thamar Solorio | 2023-09-23T07:39:16Z | http://arxiv.org/abs/2309.13297v2 | OATS: Opinion Aspect Target Sentiment Quadruple Extraction Dataset for Aspect-Based Sentiment Analysis
###### Abstract
Aspect-based sentiment analysis (ABSA) delves into understanding sentiments specific to distinct elements within textual content. It aims to analyze user-generated reviews to determine a) the target entity being reviewed, b) the high-level aspect to which it belongs, c) the sentiment words used to express the opinion, and d) the sentiment expressed toward the targets and the aspects. While various benchmark datasets have fostered advancements in ABSA, they often come with domain limitations and data granularity challenges. Addressing these, we introduce the OATS dataset, which encompasses three fresh domains and consists of 20,000 sentence-level quadruples and 13,000 review-level tuples. Our initiative seeks to bridge specific observed gaps: the recurrent focus on familiar domains like restaurants and laptops, limited data for intricate quadruple extraction tasks, and an occasional oversight of the synergy between sentence and review-level sentiments. Moreover, to elucidate OATS's potential and shed light on various ABSA subtasks that OATS can solve, we conducted in-domain and cross-domain experiments, establishing initial baselines. We hope the OATS dataset augments current resources, paving the way for an encompassing exploration of ABSA.
## 1 Introduction
User-generated reviews on e-commerce and other platforms are invaluable for both consumers and product makers or service providers. These reviews provide insights into past customer experiences, guiding potential consumers in their decision-making process. On the other hand, producers and merchants benefit from understanding user feedback on specific product features, allowing them to strategize improvements. Given the explosive growth in the volume of online reviews, discerning valuable insights from the vast amounts of data becomes increasingly challenging. A local survey highlighted that a staggering 91% of young consumers place significant trust in online reviews, often equating them to personal recommendations [14].
The trend in online reviews has shifted from a broad understanding of a product's overall performance to a more granular examination of individual product aspects. This shift demands a different approach to analyzing reviews. Aspect-based sentiment analysis (ABSA) emerged as the answer to this nuanced requirement, focusing on sentiment pertaining to specific aspects of an entity [13].
However, current datasets often fall short of capturing the complete spectrum of ABSA. One of the significant limitations is the inability to perform joint detection of all ABSA elements due to the absence of at least one critical component in collected reviews. This limitation stunts the potential of ABSA tasks. Furthermore, despite the SemEval datasets' popularity, many of these sentences often group multiple aspects under a single sentiment polarity. As noted by [10], this approach simplifies the ABSA task, effectively reducing it to mere sentence-level sentiment analysis. As a solution, they proposed a new large-scale Multi-Aspect Multi-Sentiment (MAMS) dataset, in which each phrase has at least two independent aspects with different sentiment polarity. However, their approach of introducing "miscellaneous" categories or neutral sentiments when a sentence contains only one opinion tuple is impractical. While it adds complexity, it does not necessarily reflect real-world sentiments in reviews.
Recently, ABSA datasets such as ACOS [11] and ASQP [12] have been at the forefront, providing comprehensive annotations for all four elements. However, they
are largely limited to the long-standing traditional domains of restaurants and laptops, a trend that has been observed since 2014. In contrast, newer datasets like DM_ASTE and DE_ASTE (Domain Expanded ASTE) have made strides by introducing annotations from fresh domains, encompassing home appliances, fashion, groceries, and more, moving beyond the typical restaurant and laptop reviews Xu et al. (2023); Chia et al. (2023). Yet, these innovative datasets have their limitation: they lack the critical aspect category annotations required for a holistic joint detection of all elements.
Several crucial insights emerge when delving into the landscape of ABSA datasets (Chebolu et al., 2023). Firstly, there needs to be more consistency in how ABSA components are defined and structured across various sources, accentuating an urgent need for a universal standardized format. Secondly, the challenge of accurately detecting opinion words, which are central to discerning sentiment polarity, and their corresponding aspect terms still needs to be solved. The current datasets, while extensive, do not always reflect real-world complexities. It is imperative to have datasets encompassing various domains or languages to achieve a truly comprehensive ABSA. This need is further emphasized by recent advancements in unified models using generative frameworks (Chebolu et al., 2021; Zhang et al., 2021), highlighting the immense potential of models that jointly solve ABSA tasks. These models necessitate datasets with richer annotations, similar to Zhang et al. (2021). Another stark observation is the overwhelming predominance of English in ABSA datasets. This monopolization overlooks the vast linguistic landscape, sidelining low-resource languages. Addressing this gap is vital for a more inclusive and global ABSA application. Lastly, the prevailing ABSA research seems overly fixated on a sentence-centric approach, often ignoring that user reviews usually extend beyond single sentences. This calls for a renewed focus on review-level ABSA datasets to capture sentiment in its entire breadth and depth.
Given these challenges, the OATS dataset has been developed with a vision to rectify existing gaps. The contributions of OATS are fourfold:
* Introduces both sentence-level quadruples and review-level tuples, capturing sentiments in all their granularity.
* The dataset spans multiple domains, ensuring its applicability across a diverse range of ABSA tasks.
* OATS will be publicly available in two formats: XML (as introduced by Pontiki et al. (2014)), catering to detailed character-level annotations, and Text, following the format set by Zhang et al. (2021).
* We provide a few baselines for our dataset, focusing on three primary tasks (ASTE, TASD, and ASQP) from the 14 ABSA subtasks in two distinct settings: in-domain ABSA and cross-domain ABSA.
## 2 OATS Dataset
In this section, we explore the resources behind the dataset and then discuss the annotation process, its statistics, and its relevance to ABSA and the wider NLP community.
### Data Resources and Annotation Procedure
We first describe the sources from which we gathered the data for our corpus, followed by a detailed account of the annotation procedure.
#### 2.1.1 Data Resources
We derived three distinct datasets from multiple sources. Our corpus is primarily built from text reviews available on Kaggle (consistent with the CC0 and ODbL licenses) and materials provided by Yin et al. (2017).
**Amazon Fine Foods Dataset.** We refer to this dataset as Amazon_FF; it is extracted from a Kaggle competition 1 containing around 500k reviews of fine foods from Amazon. These reviews span topics such as the quality of food or products, promptness of delivery, packaging standards, and product availability, among others. We curated 1521 complete reviews from this dataset, consisting of over 8200 sentences and approximately 5600 opinion quadruples.
Footnote 1: [https://www.kaggle.com/snap/amazon-fine-food-reviews](https://www.kaggle.com/snap/amazon-fine-food-reviews)
**Coursera Dataset.** Originating from a Kaggle competition, this dataset contains reviews scraped from the Coursera website, totaling close to 100k reviews. These reviews reflect diverse perspectives on the course's quality, content, comprehensiveness, and the alignment of faculty lessons with
course content. For our work, we selected 1211 comprehensive reviews, which include roughly 6K sentences and about 7K opinion quadruples.
**TripAdvisor Dataset.** This dataset is based on data from [22], featuring over 100k hotel reviews. The reviews comment on various facets of the hotel experience, such as pricing, design attributes, and more. We collated 1206 complete reviews from this extensive collection, resulting in approximately 6.3K sentences and nearly 8.3K opinion quadruples.
#### 2.1.2 Annotation Procedure
After the data collection, we utilized the services of annotators from the Upwork platform and employed the BRAT tool for annotation. Three annotators were chosen for this task. We provided them with a guidelines document to acquaint themselves with the topic, dataset, BRAT tool, and specific annotation requirements. Initially, annotator (A) took the lead in annotating a subset of the data, which was subsequently reviewed by annotator (B) for corrections. This process was iterated twice, with pairs (A, C) and then (B, C) for checks and balances. Any emerging disagreements were addressed and resolved through consultations with one of our NLP experts. The remaining dataset sentences were equally divided among the three annotators. They were also given supplementary instructions derived from earlier discrepancies. When consensus was elusive, collective decisions were made. When all three annotators held differing views, a consensus was achieved in collaboration with an expert annotator. Majority voting was the chosen mechanism to finalize the annotations. For clarity, an illustrative example of an opinion quadruple from the Coursera domain in BRAT format and the finalized XML format of sentence-level and review-level annotations of a review are presented in Figure 4 (located in the Appendix).
#### 2.1.3 Inter-Annotator Agreement
According to Hripcsak and Rothschild (2005), the widely used Kappa metric [20, 21] may not be the best fit for measuring inter-annotator agreement on span-extraction annotation in textual data. The limitation arises from Kappa's requirement to count the number of negative cases, which is not identifiable for spans, as they constitute sequences of words without a predetermined quantity of items to annotate in a text. The F-measure is often more suitable for gauging inter-annotator agreement in span-extraction annotation tasks such as target and opinion phrase extraction [3]. We can calculate the F1-score by considering one annotator's annotations as the reference (gold annotations or ground truth) and the other's as a system's responses (predictions). We computed the inter-annotator agreement scores, following this F1 metric, for each domain using several combinations of the ABSA elements, as shown in Table 1. The average quadruple extraction agreement F1 score for the three domains is 69.39%, indicating high consistency among the annotators.
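A minimal sketch of this computation (not the authors' exact script): one annotator's spans are treated as gold and the other's as predictions, counting only exact matches; the toy spans below are invented for illustration.

```python
def agreement_f1(reference, predictions):
    """Span-level agreement F1: exact matches between two annotators.

    Each argument is a set of hashable annotations, e.g. character-offset
    target spans (sentence_id, start, end) or (aspect_category, sentiment)
    combinations for a sentence.
    """
    reference, predictions = set(reference), set(predictions)
    tp = len(reference & predictions)
    precision = tp / len(predictions) if predictions else 0.0
    recall = tp / len(reference) if reference else 0.0
    return 2 * precision * recall / (precision + recall) if tp else 0.0

# Invented example: annotator B disagrees with A on one span boundary.
ann_a = {(0, 5, 12), (1, 3, 9)}
ann_b = {(0, 5, 12), (1, 4, 9)}
print(agreement_f1(ann_a, ann_b))  # 0.5
```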
### Dataset Statistics
In Table 2, we provide the statistics for the ASQP dataset [23], which is the only existing dataset with all the elements of ABSA together, for comparison with the proposed dataset. We present several statistics in this section that give an overview of the OATS dataset, providing both the sentence-level and review/text-level statistics for all the domains in Tables 3-6.
To begin with, we present some basic statistics of each of the domains in the dataset in Table 3. It provides information about the total number of reviews, sentences, review-level opinion tuples, and sentence-level opinion quadruples for every domain. Table 4 has the averaged statistics such as the sentences per review, length of sentence (and review), and the number of opinions per sentence (and review) with the domains as columns. We also provide fine-grained statistics in Table 5, such as the number of sentences and reviews with zero, one, two, three, and more than three opinions. The
\begin{table}
\begin{tabular}{l c c c c} \hline
**Dataset** & **Tgt. Span** & **Opi. Ph. Span** & **Asp-Sen** & **Quadruple** \\ \hline Amazon\_FF & 72.57 & 69.72 & 85.43 & 65.78 \\ Coursera & 78.26 & 71.26 & 79.63 & 68.56 \\ Hotels & 74.78 & 72.05 & 87.32 & 73.84 \\ \hline \end{tabular}
\end{table}
Table 1: Inter-Annotator agreement F1-scores for the OATS Datasets. Tgt. Span: Target span extraction. Opi. Ph. Span: Opinion Phrase span extraction. Asp-Sent: Aspect Category and Sentiment combination categorization.
\begin{table}
\begin{tabular}{l c c c c c} \hline
**Dataset** & **\#sents** & **\#pos** & **\#neg** & **\#neu** & **\#Total** \\ \hline Rest-15 & 1562 & 1710 & 701 & 85 & 2496 \\ \hline Rest-16 & 2024 & 2293 & 877 & 125 & 3295 \\ \hline
**Total** & **3586** & **4003** & **1578** & **210** & **5791** \\ \hline \end{tabular}
\end{table}
Table 2: Current ASQP Dataset Statistics
numbers provided inside "()" for the number of opinions per sentence and per review (in Table 4) are computed after excluding the zero-opinion sentences and reviews. The statistics for the number of opinions with different sentiment polarities (positive, negative, neutral, and conflict) at both the sentence and review levels are presented in Table 6.
The hotels domain has more opinions per sentence as well as per review than the other two domains. Despite having the most reviews (see Table 3), the fine foods domain has the fewest opinions per sentence and per review among the three. One of the main reasons is that the number of zero-opinion sentences in the fine foods domain is higher than in the hotels and Coursera domains (see Table 5). Furthermore, the ratio of two-opinion sentences to one-opinion sentences for the Amazon_FF and Coursera domains (\(\approx 1:5\)) pushed the average numbers closer to one opinion per sentence. The requirement of having a minimum of 2 distinct aspect categories at the review level (as defined in Appendix Section A) is clearly evident from the average number of opinions per review; however, no such pattern is evident at the sentence level.
Another important factor for ABSA datasets is the distribution of aspect categories, which is presented in Figures 1, 2, and 3. The _General_ attribute for different entities in all the datasets has the highest number of opinions, such as _FOOD#GENERAL_ in the Amazon_FF domain, _COURSE#GENERAL_ and _FACULTY#GENERAL_ for the Coursera domain, and _HOTEL#GENERAL_, _LOCATION#GENERAL_, and _SERVICE#GENERAL_ in the hotel domain. The review-level and the sentence-level opinions have similar distributions for the aspect categories.
## 3 Experimental Evaluation
This section will first list the tasks on which we conduct experiments using our dataset. Then, we
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Stats/Domain** & **Amazon_FF** & **Coursera** & **Hotels** \\ \hline Avg. Sentences/Review & 4.98 & 4.9 & 5.2 \\ \hline Avg. Length of Sentence & 71.34 & 79.02 & 75.95 \\ Avg. Length of Review & 359.9 & 391.55 & 405.29 \\ \hline Avg. Opinions/Sentence & 0.92 (1.25) & 0.94 (1.23) & 1.3 (1.69) \\ \hline Avg. Opinions/Review & 2.4 (2.48) & 3.16 (3.18) & 4.59 (5.07) \\ \hline \hline \end{tabular}
\end{table}
Table 4: OATS Average Statistics. The values inside the () are the average opinions per sentence/review, excluding those with zero opinions.
Figure 1: Distribution of Aspect Categories for Sentence-Level and Review-Level Annotations for Amazon_FF Domain
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Domain** & **\#Revs.** & **\#Sent.** & **\#Rev.Op.** & **\#Sent.Op.** & **\#Total Op** \\ \hline Amazon\_FF & 1616 & 8016 & 3879 & 7628 & 11507 \\ Coursera & 1211 & 5852 & 3745 & 5360 & 9105 \\ Hotels & 1204 & 6377 & 5874 & 8948 & 14882 \\
**Total** & **4031** & **20245** & **13498** & **21936** & **35434** \\ \hline \hline \end{tabular}
\end{table}
Table 3: OATS Dataset Overall Statistics
Figure 2: Distribution of Aspect Categories for Sentence-Level and Review-Level Annotations for Coursera Domain
point to the baseline methods used for each task, dividing them into task-specific and unified methods, followed by the evaluation metrics and the experimental results.
### Tasks
We selected three major joint detection tasks for our sentence-level ABSA experiments, which are target-aspect-sentiment detection (TASD) Wan et al. (2020), aspect sentiment triplet extraction (ASTE) Peng et al. (2020), and aspect sentiment quadruple prediction (ASQP) Zhang et al. (2021). The rationale for choosing these tasks is that they cover all four elements of ABSA in different combinations. Moreover, the emphasis was placed on tasks involving joint extraction rather than single-element extraction, which stems from findings in Chebolu et al. (2021). The authors have shown that joint models, designed to handle multiple interrelated elements simultaneously, consistently outperformed explicitly fine-tuned models for extracting a single element.
For review-level (or text-level) ABSA, where we have to extract aspect category and sentiment tuples, we chose the aspect sentiment joint detection task (ASD) (Sun et al., 2019) for our experiments. The review-level task poses two main challenges: first, we have to use the entire review context, rather than each sentence in isolation, to detect the aspect categories and the sentiment polarity of the review; second, there can be multiple opinions on the same aspect category with different polarities, from which we must assign the dominating sentiment to each category.
### Baseline Methods
We implement several representative models from various frameworks, including MRC-based, generation-based, and BERT-based frameworks, for the task evaluations.
#### Task-specific methods
Here, we present various approaches used to evaluate a specific task.
For ASTE, we use the following methods:
* BMRC (Liu et al., 2022): an MRC-based method. It extracts aspect-oriented triplets and opinion-oriented triplets, and then obtains the final results by merging the two directions.
* GEN-SCL-NAT (Peper and Wang, 2022): a generative-based method. It combines a new generative format with a supervised contrastive learning objective to predict the ASTE triplets and quadruples.
* BDTF (BERT) Zhang et al. (2022): a BERT-based method that uses table-filling to solve the problem. It transforms the ASTE task into detecting and classifying relation regions in the 2D table representing each triplet in addition to an effective relation representation learning approach to understand word and relation interactions.
These are the methods for ASQP:
\begin{table}
\begin{tabular}{l|r r r|r r r r} \hline \hline \multirow{2}{*}{**Domain**} & \multicolumn{3}{c|}{**Sentence-Level ABSA**} & \multicolumn{4}{c}{**Review-Level ABSA**} \\ & **\#pos** & **\#neg** & **\#neu** & **\#pos** & **\#neg** & **\#neu** & **\#conf** \\ \hline Amazon\_FF & 5577 & 1187 & 234 & 2900 & 606 & 74 & 82 \\ \hline Coursera & 4403 & 1008 & 213 & 2910 & 721 & 129 & 71 \\ \hline Hotels & 6952 & 1207 & 169 & 4557 & 817 & 110 & 53 \\ \hline
**Total** & **16932** & **3402** & **616** & **10367** & **2144** & **313** & **206** \\ \hline \hline \end{tabular}
\end{table}
Table 6: OATS Dataset Statistics for the total number of positive, negative, neutral, and conflict sentiment polarities observed for the aspect categories at Review-Level and the aspect terms, aspect category, and opinion phrase triplets at the Sentence-Level in each domain.
\begin{table}
\begin{tabular}{l|r r r r r|r r r r r} \hline \hline \multirow{2}{*}{**Domain**} & \multicolumn{5}{c|}{**Sentence-Level ABSA**} & \multicolumn{5}{c}{**Review-Level ABSA**} \\ & **0-Op** & **1-Op** & **2-Op** & **3-Op** & **>3-Op** & **0-Op** & **1-Op** & **2-Op** & **3-Op** & **>3-Op** \\ \hline Amazon\_FF & 1957 & 4529 & 890 & 170 & 42 & 47 & 175 & 626 & 490 & 183 \\ Coursera & 1395 & 3685 & 690 & 134 & 36 & 8 & 107 & 298 & 349 & 449 \\ Hotels & 1454 & 2910 & 1167 & 499 & 337 & 116 & 11 & 58 & 138 & 883 \\ \hline
**Total** & **4806** & **11124** & **2747** & **803** & **415** & **171** & **293** & **982** & **977** & **1515** \\ \hline \hline \end{tabular}
\end{table}
Table 5: OATS Dataset Statistics for the total number of sentences/reviews with the respective number of opinions
* Template-ILO (Hu et al., 2022): a generative-based method. Similar to Zhang et al. (2021) with an additional step to identify the most proper orders at the _instance level_, and further combine multiple proper templates as data augmentation, instead of having a fixed order template, that is passed as input to the generative model.
* Template-DLO (Hu et al., 2022): a generative-based method. Similar to Zhang et al. (2021), but with an additional step to identify the most proper orders at the _dataset level_, and further combine multiple proper templates as data augmentation, instead of having a fixed order template that is passed as input to the generative model.
For the TASD task, we use the TAS-BERT (Wan et al., 2020), a BERT-based method. It fine-tunes the pre-trained BERT model to solve the aspect-sentiment detection task using the classification token and then detects the targets corresponding to those tuples using the token classification with CRF/softmax decoding.
#### Methods for review-level ASD
* BERT-pair-NLI-B (Sun et al., 2019): a BERT-based method. It generates an auxiliary sentence from the aspect and sentiment to transform ABSA into a sentence-pair classification task similar to natural language inference or question-answering.
#### Unified methods
The following methods are unified generative frameworks that could be adapted to any NLP problem. In our case, these are employed for all the sentence-level and review-level tasks:
* GEN-SCL-NAT (Peper and Wang, 2022): a generative-based method. It combines a new generative format with a supervised contrastive learning objective to predict the ASTE triplets and ASQP quadruples.
* GAS (Zhang et al., 2021): a generation-based method. It transforms the ASTE, ASQP, TASD, and ASD tasks into a text generation problem that inputs the review and generates all the respective combinations of ABSA opinion elements as output.
* Paraphrase (Zhang et al., 2021): a generative-based method. It is similar to GAS but additionally transforms the output opinion elements into paraphrases that read as natural sentences. For implicit targets and opinions, we substituted the corresponding element with the word "it" (a minimal sketch of both target formats is shown after this list).
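The sketch below illustrates one possible way such generation targets can be linearized; the exact templates and separators of the cited works may differ, and the example quadruple is invented.

```python
def gas_target(quads):
    """GAS-style target: quadruples linearized as bracketed tuples."""
    return "; ".join(f"({t}, {c}, {o}, {s})" for t, c, o, s in quads)

def paraphrase_target(quads):
    """Paraphrase-style target: each quadruple rendered as a natural sentence;
    implicit targets/opinions are written as 'it', as described above."""
    word = {"positive": "great", "negative": "bad", "neutral": "ok"}
    return " [SSEP] ".join(
        f"{c} is {word.get(s, s)} because {t or 'it'} is {o or 'it'}"
        for t, c, o, s in quads
    )

# Invented Coursera-style example.
quads = [("instructor", "FACULTY#GENERAL", "engaging", "positive")]
print(gas_target(quads))         # (instructor, FACULTY#GENERAL, engaging, positive)
print(paraphrase_target(quads))  # FACULTY#GENERAL is great because instructor is engaging
```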
### Evaluation
Following Zhang et al. (2021), Xu et al. (2023), and Wan et al. (2020), we employ the F1 score to measure the performance of the different approaches on all the tasks from Section 3.1. All experimental results are reported as the average of 5 runs with distinct random seeds. We divided each domain dataset into train, validation, and test sets with 80%, 10%, and 10% splits, respectively. A tuple, triplet, or quadruple is considered correct only if all of its predicted elements match the gold-standard labels; following Zhang et al. (2021), any partial match counts as a wrong prediction. We adopt the base versions of all the transformer models for our experiments, including BERT-base (Devlin et al., 2019), T5-base (Raffel et al., 2020), RoBERTa-base (Liu et al., 2019), and others.
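A minimal sketch of this exact-match scoring (not the authors' released evaluation script), grouping predicted and gold quadruples by sentence id and micro-averaging the counts; the reported number would be the mean of this score over the 5 seeds.

```python
def exact_match_f1(pred_by_sent, gold_by_sent):
    """Micro-averaged F1 where a quadruple counts only if all four elements
    (target, aspect category, opinion, sentiment) match the gold label exactly;
    partially correct quadruples contribute nothing."""
    tp = n_pred = n_gold = 0
    for sid in set(pred_by_sent) | set(gold_by_sent):
        pred = set(pred_by_sent.get(sid, ()))
        gold = set(gold_by_sent.get(sid, ()))
        tp += len(pred & gold)
        n_pred += len(pred)
        n_gold += len(gold)
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gold if n_gold else 0.0
    return 2 * precision * recall / (precision + recall) if tp else 0.0
```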
### Results
In this section, we present the results of different baseline systems on the selected tasks, highlighting the benefits and challenges of current methods on the OATS dataset and the specific task.
#### ASQP
Table 7 reports the performance on the ASQP task (quadruple extraction) using five different baseline systems on the three datasets. At a high level, the overall performance on the Hotels domain is better than on the other two domains across all the baseline systems.
For the Amazon_FF domain, even though Template-ILO gave the best performance, all the other methods are not far from it. Similarly, for the Coursera domain, GAS-T5 performed better than the Paraphrase and GEN-SCL-NAT, but the Template method is close to its performance. Paraphrase outperformed all the other methods in the Hotels domain with a 34.51% F1 score.
Prior works have shown that the Template method significantly outperformed Paraphrase and GAS in the restaurant domain (Hu et al., 2022). That trend only translated to the Amazon_FF domain, leaving out Coursera and Hotels. Similarly, GEN-SCL-NAT, which outperformed the Paraphrase method on the restaurants and laptops
domains, failed to carry that advantage over to any of the three domains.
#### ASTE
Table 8 shows the performance on the ASTE task, i.e., the extraction of (aspect term, opinion term, sentiment polarity) triplets, using five different baseline systems on the three datasets. Unlike ASQP, a single method (BMRC) outperformed all the other baselines on all domains for this task. The GEN-SCL-NAT method consistently underperformed on all the domains.
#### TASD
We present the results of the TASD task experiments on all the domains in Table 9. The generative-based models outperformed the BERT-based methods on the Amazon_FF and Hotels domains, while TAS-BERT performed best in the Coursera reviews.
## 4 Significance of OATS
Given that the ASQP task was introduced very recently, there is a need for a benchmark dataset to evaluate the task effectively. Although [22] created a dataset when they proposed the task, they built it based on several SemEval Shared Challenges [10, 20] that had annotations for aspect terms, aspect categories, and their sentiment polarity. The opinion term and aspect category annotations are derived from [23] and [24], respectively. Then, they align the samples from these two sources, merge the annotations using the same aspect term in each sentence as the anchor, and add the implicit targets whose opinion terms were excluded by [23].
The main differences between the current ASQP datasets and OATS are:
1. we annotate the data from scratch with all the elements together instead of aligning from several sources
2. we include all the implicit aspect terms and opinion phrases, unlike ASQP, which excluded implicit opinion terms
3. we provide both the review-level and sentence-level annotations, facilitating analysis and experiments for review/text-level aspect-based sentiment analysis, which differentiates our dataset from [22]
4. there are \(\approx 5.8K\) opinion quadruples in total for the ASQP corpus, which includes two restaurant datasets that are comparable to the \(\approx 5.7K\) opinion quadruples of the Coursera domain itself, which is the smallest in the OATS corpus. OATS has a total of \(\approx 21K\) opinion quadruples, almost four times larger than the ASQP corpus. Also, if we include the review-level annotations, we have nearly \(34K\) opinions in OATS.
The OATS corpus will help analyze and understand the inherent relationships among all the elements of ABSA and exploit them to solve the ABSA task holistically. The comprehensive nature of the OATS corpus not only allows the evaluation of the ASQP task but also unlocks the potential analysis and exploration of all the subtasks of ABSA, including TASD, ASTE, TOWE, TAD, and many more. As pointed out by [22], the characteristic of tackling various ABSA tasks in a unified framework enables the knowledge to be easily transferred across related subtasks, which is especially beneficial under low-resource settings. It also allows cross-task transfer for subtasks that underperform when trained using only a
\begin{table}
\begin{tabular}{l r r r} \hline \hline
**Method** & **Amazon_FF** & **Coursera** & **Hotels** \\ \hline GAS-T5 & 19.62 & **22.23** & 26.33 \\ \hline Paraphrase & 20.84 & 19.78 & **34.51** \\ \hline Template-ILO & **21.01** & 21.24 & 24.62 \\ \hline Template-DLO & 20.39 & 21.96 & 26.59 \\ GEN-SCL-NAT & 20.36 & 20.20 & 28.58 \\ \hline \hline \end{tabular}
\end{table}
Table 7: F1 scores of in-domain ASQP on OATS and the best results are highlighted in bold font.
\begin{table}
\begin{tabular}{l r r r} \hline \hline
**Method** & **Amazon_FF** & **Coursera** & **Hotels** \\ \hline TAS-BERT-BIO & 40.13 & **42.49** & 30.89 \\ \hline TAS-BERT-TO & 41.06 & 38.18 & 30.01 \\ GAS & 43.04 & 41.53 & **50.69** \\ Paraphrase & **44.89** & 40.24 & 49.81 \\ \hline \hline \end{tabular}
\end{table}
Table 9: F1 scores of in-domain TASD on OATS and the best results are highlighted in bold font.
task-specific dataset. Many researchers would like to explore ABSA for whole reviews rather than individual sentences or single-sentence reviews. For such settings, OATS will help push the research ahead with all the review-level annotations we provide as a part of it.
## 5 Related Works
In this section, we explain the several tasks of ABSA and their related datasets, followed by the task-specific methods for a few triplet and quadruple extraction tasks.
### Current Datasets
When ABSA was first introduced as an NLP task by Hu and Liu (2004), the main aim was to extract the different aspects in a sentence and then assign polarities to those aspects. A decade later, Pontiki et al. (2014) added another important element of ABSA: aspect categories. In the SemEval-2014 shared task on ABSA, two sub-tasks were drafted in addition to aspect term extraction and its sentiment polarity: aspect category detection and aspect category sentiment polarity. Two datasets from the restaurant and laptop domains were released as part of this shared task.
In the consecutive years, Pontiki et al. (2015, 2016) proposed two more ABSA shared tasks at the SemEval, redefining the subtasks of ABSA and giving them a more concrete structure. Aspect categories were a combination of entity and attribute pairs, aspect terms were called targets, and the sentiment polarity was assigned jointly for the targets and the aspect categories. Several datasets with the three elements from multiple languages and domains were published, including English, Dutch, Spanish, French, Russian, Turkish restaurants, English laptops, Arabic hotels, and many others.
Later, Wan et al. (2020) utilized the data from the SemEval-2015 and SemEval-2016 to define the target-aspect-sentiment joint detection task (TASD). After that, researchers felt the need for opinion phrases to detect the sentiment polarity and understand the relationship with targets and aspect categories. This resulted in three new subtasks with new datasets that stemmed from the SemEval challenges, namely aspect-opinion-pair extraction (AOPE) (Chen et al., 2020), target-opinion-word extraction (TOWE) (Fan et al., 2019), and aspect-sentiment-triplet-extraction (ASTE) (Peng et al., 2020). More recently, the ABSA research has shifted towards the joint detection of all four elements of ABSA, which proved to better identify the inter-relationships among the elements, thereby enhancing the performance of other subtasks. This task is called aspect sentiment quadruple prediction (ASQP) (Zhang et al., 2021) or aspect-category-opinion-sentiment joint detection (ACOS) (Cai et al., 2021). The ASQP task is evaluated on the dataset with quadruples created from the SemEval challenges, ASTE triplets, and TOWE and AOPE tuples. The ACOS task introduced two datasets from the restaurant and the laptop domain with implicit and explicit aspect terms and opinion phrases.
The current ABSA dataset landscape is notably skewed towards reviews and particular domains such as restaurants and electronics. This stems from the easy availability and volume of data on review websites and other online platforms. Although these datasets have led to many successful applications, this narrow focus restricts the wider application of ABSA, limiting its ability to deliver insights across different sectors. Expanding the diversity of ABSA datasets is essential to push research and applications beyond the boundaries of mere reviews. We propose three new datasets for the quadruple extraction task from three new domains, along with implicit and explicit targets and opinion phrase annotations, to address this limitation. We also include the review-level tuples for identifying the overall sentiment polarity for different aspect categories in a review. This dataset could be used to solve all the above-mentioned subtasks of ABSA.
### Related Methods
We focus on a few tasks from the joint detection tasks arena: the TASD, ASTE, and ASQP tasks for our research. Chebolu et al. (2023) have provided a detailed survey of the other datasets, tasks, and their challenges.
**Triplet Extraction Tasks.** In ASTE research, three primary methodologies have emerged: MRC-based techniques (Liu et al., 2022), methods anchored on BERT and table-filling (Zhang et al., 2022), and generative approaches (Peper and Wang, 2022; Zhang et al., 2021, 2021). The MRC-based methods involve crafting a specific query for each component in the triplet, subsequently extracting them based on the response to this query. Generative methods, in contrast, frame the ASTE challenge as a sequence generation task and employ sequence-to-sequence (seq2seq) models. The triplets are then decoded using a specially tailored algorithm. In this study, we employ representative techniques from these three categories and investigate OATS using these methods.
For the TASD task, there are two main directions: BERT-based methods (Wan et al., 2020) and generative approaches (Chebolu et al., 2021; Zhang et al., 2021). The BERT-based methods extract the aspect-sentiment tuple from a sentence using sentence-pair classification as a backbone (Sun et al., 2019), and then extract the targets for each pair via token classification with a BIO or softmax classifier. The generative approaches cast the task as abstractive summarization and use sequence-to-sequence models such as T5 and BART (Raffel et al., 2020; Lewis et al., 2019) to predict the triplets.
**Quadruple Extraction Task.** Researchers have pointed out two promising directions. Cai et al. (2021) propose a two-stage method by extracting the aspect and opinion terms first. Then, these items are utilized to classify aspect categories and sentiment polarity. Another method is based on the generation model (Zhang et al., 2021). By paraphrasing the input sentence, the quadruplet can be extracted end-to-end. In this work, we follow the generative direction and consider the order-free property of the quadruplet. A few other studies also proposed generative-based models with this paraphrasing as a backbone, including Hu et al. (2022); Peper and Wang (2022).
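To illustrate the generative formulation concretely, the sketch below shows one way a quadruple can be linearized into a natural-language target and decoded back; the template wording and helper names are our own illustrative assumptions, not the exact format used by the cited systems.

```python
# Hypothetical paraphrase-style linearization of an ABSA quadruple for a seq2seq model;
# the template and function names are illustrative assumptions.

def quad_to_target(category, sentiment, target, opinion):
    """Linearize one (aspect category, sentiment, target, opinion) quadruple."""
    target = target if target != "NULL" else "it"          # implicit target
    opinion = opinion if opinion != "NULL" else sentiment   # implicit opinion
    return f"{category} is {sentiment} because {target} is {opinion}"

def target_to_quad(text):
    """Invert the template to recover the quadruple from a generated sequence."""
    left, _, opinion = text.rpartition(" is ")
    head, _, target = left.rpartition(" because ")
    category, _, sentiment = head.rpartition(" is ")
    return category, sentiment, target, opinion

# A model such as T5 is trained to map the review sentence to one or more such
# targets (joined by a separator); decoding simply applies target_to_quad.
quad = ("food quality", "great", "pizza", "delicious")
assert target_to_quad(quad_to_target(*quad)) == quad
```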
## 6 Conclusion
ABSA still presents many difficulties in the rapidly developing field of generative artificial intelligence. We present a thoroughly curated ABSA dataset of quadruples, covering both explicit and implicit aspect terms and opinion phrases, spanning three distinct domains. In addition, the dataset incorporates review-level sentiment polarity for each aspect category, providing a comprehensive perspective of the sentiments expressed in the reviews. Our annotations outnumber those found in previous datasets. We conduct an in-depth analysis, give comprehensive annotation guidelines, and provide dataset statistics. We also enable evaluation of generative and non-generative benchmarks on a range of common ABSA tasks.
|
2306.17674 | X-RiSAWOZ: High-Quality End-to-End Multilingual Dialogue Datasets and
Few-shot Agents | Task-oriented dialogue research has mainly focused on a few popular languages
like English and Chinese, due to the high dataset creation cost for a new
language. To reduce the cost, we apply manual editing to automatically
translated data. We create a new multilingual benchmark, X-RiSAWOZ, by
translating the Chinese RiSAWOZ to 4 languages: English, French, Hindi, Korean;
and a code-mixed English-Hindi language. X-RiSAWOZ has more than 18,000
human-verified dialogue utterances for each language, and unlike most
multilingual prior work, is an end-to-end dataset for building
fully-functioning agents.
The many difficulties we encountered in creating X-RiSAWOZ led us to develop
a toolset to accelerate the post-editing of a new language dataset after
translation. This toolset improves machine translation with a hybrid entity
alignment technique that combines neural with dictionary-based methods, along
with many automated and semi-automated validation checks.
We establish strong baselines for X-RiSAWOZ by training dialogue agents in
the zero- and few-shot settings where limited gold data is available in the
target language. Our results suggest that our translation and post-editing
methodology and toolset can be used to create new high-quality multilingual
dialogue agents cost-effectively. Our dataset, code, and toolkit are released
open-source. | Mehrad Moradshahi, Tianhao Shen, Kalika Bali, Monojit Choudhury, Gaël de Chalendar, Anmol Goel, Sungkyun Kim, Prashant Kodali, Ponnurangam Kumaraguru, Nasredine Semmar, Sina J. Semnani, Jiwon Seo, Vivek Seshadri, Manish Shrivastava, Michael Sun, Aditya Yadavalli, Chaobin You, Deyi Xiong, Monica S. Lam | 2023-06-30T14:03:30Z | http://arxiv.org/abs/2306.17674v1 | # X-RiSAWOZ: High-Quality End-to-End Multilingual Dialogue Datasets and Few-shot Agents
###### Abstract
Task-oriented dialogue research has mainly focused on a few popular languages like English and Chinese, due to the high dataset creation cost for a new language. To reduce the cost, we apply manual editing to automatically translated data. We create a new multilingual benchmark, X-RiSAWOZ, by translating the Chinese RiSAWOZ to 4 languages: English, French, Hindi, Korean; and a code-mixed English-Hindi language. X-RiSAWOZ has more than 18,000 human-verified dialogue utterances for each language, and unlike most multilingual prior work, is an end-to-end dataset for building fully-functioning agents.
The many difficulties we encountered in creating X-RiSAWOZ led us to develop a toolset to accelerate the post-editing of a new language dataset after translation. This toolset improves machine translation with a hybrid entity alignment technique that combines neural with dictionary-based methods, along with many automated and semi-automated validation checks.
We establish strong baselines for X-RiSAWOZ by training dialogue agents in the zero- and few-shot settings where limited gold data is available in the target language. Our results suggest that our translation and post-editing methodology and toolset can be used to create new high-quality multilingual dialogue agents cost-effectively. Our dataset, code, and toolkit are released open-source.1
Footnote 1: [https://github.com/stanford-oval/dialogues](https://github.com/stanford-oval/dialogues)
\(\clubsuit\)Co-first authors \(\clubsuit\)Co-corresponding authors
## 1 Introduction
In recent years, tremendous effort has been put into the research and development of task-oriented dialogue agents; yet, it has been mainly focused on only a handful of popular languages, hindering the adoption of dialogue technology around the globe. Collecting dialogue data from scratch for a new language is ideal but prohibitively expensive and time-consuming, leading to the current lack of reliable multilingual dialogue benchmarks.
In recent years, several non-English task-oriented dialogue (ToD) datasets have been created. These datasets are either collected from scratch [23, 24], synthesized using a state machine with manually written templates, and paraphrased for fluency by crowdworkers [10], or manually translated from another language [11]. All of these approaches are labor-intensive, costly, and time-consuming; such investment is unlikely to be made for less widely spoken languages.
This motivates the development of zero- and few-shot techniques that can produce a usable agent in a new language with no or only a few gold training dialogues in the target language. Concurrent with this work, several efforts [25, 26, 27] adopt a _translation and manual post-editing_ process where data is first translated with neural machine translation models and then post-edited by crowdworkers. This approach has shown promise on MultiWOZ; however, reported zero- and few-shot accuracies show a large degradation compared to full-shot accuracy in the source language. Moreover, the performance of the agent in the original language was not good to begin with, in part due to misannotations in the dataset [1]. Lastly, these datasets either focus only on the subtask of Dialogue State Tracking (DST) [26] or on auxiliary tasks such as Response Retrieval [27], or are too small [26] to train end-to-end dialogue agents that require policy, database interaction, and response generation components.
Our overall goal is to make task-oriented dialogue research in major languages available to low-resource languages. The key is to produce high-quality few-shot training, validation, and test set
with as little manual effort as possible to enable zero-shot or few-shot training. We describe below our contributions towards this goal.
### Data Translation Techniques and Toolset
Machine translation followed by human post-editing has been used as a method for extending monolingual NLP datasets to new languages Yang et al. (2019); Ziemski et al. (2016); Giannakopoulos et al. (2011); Conneau et al. (2018). However, we discovered human post-editing to be the main pain point in creating new dialogue datasets. The process is costly, requiring a lot of back-and-forth among developers, translators, and annotators. Even after several rounds, the results are still not adequate. To alleviate this, we devised a scalable methodology and an associated toolkit that automates parts of this process, and aids translators and annotators to iteratively check their work themselves without developer supervision. This allows fast and accurate creation of a new dialogue dataset annotated with slot values for a new language.
We show that the entity-aware translation technique proposed by Moradshahi et al. (2023) is also applicable to other end-to-end dialogue datasets. We combine this technique with a dictionary-based alignment where multiple translations are generated for each entity individually (i.e. without context), using the same translation model used to translate the sentence. Then, the translated sentence is scanned to match any of the translation candidates, resulting in an improvement in the agent's performance.
Furthermore, we automatically check each step of data translation to ensure annotation consistency between dialogue utterances and API calls to the database. We are releasing this toolkit open-source for reproducibility as well as a resource for others.
### X-RiSAWOZ Dataset
We created X-RiSAWOZ, a multi-domain, large-scale, and high-quality task-oriented dialogue benchmark, produced by translating the Chinese RiSAWOZ data to four diverse languages: English, French, Hindi, and Korean; and one code-mixed English-Hindi language. X-RiSAWOZ is an improvement over previous works in several aspects:
* **End-to-End**: Contains translations for all parts of dialogue including user and agent utterances, dialogue state, agent dialogue acts, and database results.
* **Larger**: RiSAWOZ is larger than MultiWOZ and covers a total of 11,200 dialogues with 151,982 turns. It also covers 12 domains compared to 7. In addition to translating validation and test data, we also sample 100 dialogue examples from the training set and translate them using the same process to use as few-shot training data. This way, X-RiSAWOZ can be used to experiment with few-shot techniques as well as zero-shot.
* **Higher Quality**: We choose RiSAWOZ as it exhibits the lowest misannotation rate among popular dialogue benchmarks as shown by Moradshahi et al. (2021). The data translation methodology described above reduces the mismatch between entities in the sentence and annotations, meaning that our translation process does not introduce new misannotations.
* **Cheaper**: First, the methodology and toolset reduce the amount of post-editing effort needed. Second, instead of using commercial translation systems such as Google Translate, we rely on open-source multilingual translation models such as MBART Liu et al. (2020) for the translation of training data. This reduces the translation cost by at least 100x which could otherwise be a prohibiting factor when building datasets for new languages.
### Experimental Results
We establish strong baseline results for our new X-RiSAWOZ dataset. In the full-shot setting, our model produces a new SOTA on the original Chinese dataset. With few-shot training, across languages, our model achieves between 60.7-84.6% accuracy for Dialogue State Tracking (DST), 38.0-70.5% accuracy for Dialogue Act (DA), and 28.5-46.4% BLEU score when evaluated using gold data as the conversational context. Cumulatively over a conversation, our model achieves 17.2%, 11.9%, 11.3%, 10.6%, and 2.3% Dialogue Success Rate (DSR) for English, French, Hindi, Korean, and English-Hindi, respectively. The remaining gap between zero- or few-shot results on new languages and the full-shot results on Chinese creates opportunities for research and new techniques to further improve dialogue agent performance.
| | Few-shot | Validation | Test |
|---|---|---|---|
| # Domains | 12 | 12 | 12 |
| # Dialogues | 100 | 600 | 600 |
| # Utterances | 1,318 | 8,116 | 9,286 |
| # Slots | 140 | 148 | 148 |
| # Values | 658 | 2,358 | 3,571 |

Table 1: Statistics for the few-shot, validation, and test sets.
## 2 Related Work
### Multilingual Dialogue Datasets
MultiWOZ Budzianowski et al. (2018); Ramadan et al. (2018); Eric et al. (2019), CrossWOZ Zhu et al. (2020), and RiSAWOZ Quan et al. (2020) are three monolingual Wizard-Of-Oz multi-domain dialogue datasets for travel dialogue agents. For the 9th Dialog System Technology Challenge (DSTC-9) Gunasekara et al. (2020), MultiWOZ was translated to Chinese and CrossWOZ was translated to English using Google Translate. A portion of their evaluation and test sets were post-edited by humans, while the training set remained entirely machine translated. Moradshahi et al. (2021) translated RiSAWOZ to English and German using open-source machine translation models with alignment. However, the validation and test data were not verified by humans, resulting in potentially over-estimating the accuracy of agents. Several works Ding et al. (2022); Zuo et al. (2021); Hung et al. (2022) continued translation of MultiWOZ to other languages. For example, GlobalWOZ translates to several languages, with human translators post-editing machine-translated dialogue templates, and filling them with newly collected local entities. However, these works address only one or two subtasks of a full dialogue, and therefore training an end-to-end agent is not possible with them.
Different from these translation-based approaches, Lin et al. (2021) introduced BiToD, the first bilingual dataset for _end-to-end_ ToD modeling. BiToD uses a dialogue simulator to generate dialogues in English and Chinese, and asks crowdworkers to paraphrase them for naturalness. This simulation-based approach eliminates the need for translation but requires hand-engineered templates and savvy developers with knowledge of the target language and dialogue systems. Besides, paraphrasing the entire dataset is costly.
### Cross-Lingual Approaches for ToD
With the advent of pre-trained language models, contextual embeddings obtained from pre-trained multilingual language models Devlin et al. (2018); Xue et al. (2021); Liu et al. (2020) have been used to enable cross-lingual transfer in many natural language tasks, including task-oriented dialogue agents. Unfortunately, most of this work has only focused on the DST subtask, which is a limitation we aim to rectify with this paper.
To further improve the cross-linguality of these embeddings, Tang et al. (2020) and Moghe et al. (2021) proposed fine-tuning multilingual BERT on a synthetic code-switching dataset. Glavas et al. (2020) performed language adaptation by using intermediate masked language modeling in the target languages and improving zero-shot cross-lingual transfer for hate speech detection task.
Using machine translation for multilingual dialogue tasks has also been studied. Uhrig et al. (2021) used machine translation during inference to translate to English for semantic parsing. Instead, Sherborne et al. (2020) use machine translation to generate semantic parsing data to train a semantic parser in the target language which leads to better results. Moradshahi et al. (2023); Nicosia et al. (2021) proposed using alignment to improve the quality of translated data by ensuring entities are translated faithfully.
## 3 The End-to-End ToD Task
In end-to-end task-oriented dialogues, a user speaks freely with an agent over several turns to accomplish their goal according to their intents (e.g., "book a hotel with at least 5 stars"). In each turn, the agent must access its database if necessary to find the requested information (e.g., find a hotel that meets user constraints), decide on an action (e.g., present the information to the user or ask for additional information), and finally respond to the user in natural language based on the action it chooses. Following Moradshahi et al. (2023), we decompose a dialogue agent into four subtasks:
1. _Dialogue State Tracking (DST)_: Generate the new belief state for the current turn based on the previous belief state, the last two agent dialogue acts, and the current user utterance.
2. _API Call Detection (ACD)_: Determine if an API call is necessary to query the database.
3. _Dialogue Act Generation (DAG)_: Generate the agent dialogue act based on the current belief state, the last two agent dialogue acts, the user utterance, and the result from the API call.
4. _Response Generation (RG)_: Convert the agent dialogue act to produce the new agent utterance (see the sketch after this list).
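A minimal sketch of how these four subtasks might compose within one dialogue turn is given below; the `model(task, context)` wrapper, the task names, and the database interface are assumptions made for illustration rather than the interface of our released code.

```python
# Hypothetical single-turn pipeline over the four subtasks; `model(task, context)`
# is an assumed wrapper around a seq2seq model prompted with a task-specific prefix.

def dialogue_turn(model, belief_state, last_acts, user_utterance, database):
    # 1. Dialogue State Tracking: update the belief state.
    belief_state = model("DST", {"state": belief_state, "acts": last_acts,
                                 "user": user_utterance})

    # 2. API Call Detection: decide whether the database must be queried.
    api_results = None
    if model("ACD", {"state": belief_state, "user": user_utterance}) == "yes":
        api_results = database.query(belief_state)   # constraints from the belief state

    # 3. Dialogue Act Generation: choose the agent's next act.
    agent_acts = model("DAG", {"state": belief_state, "acts": last_acts,
                               "user": user_utterance, "api": api_results})

    # 4. Response Generation: verbalize the chosen act.
    response = model("RG", {"acts": agent_acts})
    return belief_state, agent_acts, response
```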
## 4 The Common Dialogue Interface
Over the years, various ToD datasets have been introduced Budzianowski et al. (2018); Byrne et al. (2019); Zhu et al. (2020); Quan et al. (2020); Lin et al. (2021), each with its own representation, making it difficult for researchers to experiment with
different datasets. To facilitate experimentation, we have developed Common Dialogue, a standard interface for ToD tasks. This interface defines a unified format for datasets, their annotations, ontologies, and API interfaces. We show that the most widely-used recent dialogue datasets (such as MultiWoZ, RiSAWOZ, and BiToD) can be converted to this representation with a simple script. The standardization lets all different datasets be processed with the same software and models, significantly reducing the implementation time and cost.
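As an illustration of what such a unified representation could look like, the sketch below encodes a single annotated turn as a plain dictionary; the field names are hypothetical and only mirror the elements discussed in this paper, not the exact schema of the released library.

```python
# Hypothetical unified turn record in the spirit of the Common Dialogue interface;
# the keys are illustrative and do not reproduce the released schema.
turn = {
    "dialogue_id": "restaurant-0001",
    "turn_id": 3,
    "user_utterance": "I want a moderately priced restaurant in the city centre.",
    "belief_state": {"restaurant": {"price-range": "moderate", "area": "centre"}},
    "api_call": {"domain": "restaurant",
                 "constraints": {"price-range": "moderate", "area": "centre"}},
    "api_results": [{"name": "Riverside Bistro", "rating": 4.5}],
    "agent_acts": [("inform", "restaurant", "name", "Riverside Bistro")],
    "agent_utterance": "Riverside Bistro is a moderately priced place in the centre.",
}
```

Once every dataset is converted to the same shape, the same preprocessing, training, and evaluation code can be reused across corpora and languages.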
Previously, other libraries such as ParlAI Miller et al. (2017), ConvLab Zhu et al. (2020, 2022), and Nemo Kuchaiev et al. (2019) were introduced so researchers can work with different dialogue datasets and interact with the trained models. However, these libraries are limited. They either do not provide a standard abstraction, making it difficult to add new datasets, or a modular interface that can connect with other code bases, requiring new models to be implemented in their repository before they can be used. Additionally, the training code needs to be modified to support a new dataset or language for an existing dataset.
## 5 Dataset Creation
In this section, we describe the process used to extend RiSAWOZ to the new languages. The original RiSAWOZ dataset is in Chinese. We manually translate the validation data (600 dialogues), test data (600 dialogues), and 1% of the training dataset (100 dialogues), which we refer to as _few-shot_, from Chinese to English. For other languages, we use English as the source language, since bilingual speakers of English and the target language are more accessible than Chinese and the target language. Since the English data is manually translated, this approach avoids double translationese Vanmassenhove et al. (2021) and ensures the best data quality. We machine-translate the English data and manually post-edit the translation for fluency and correctness. Besides the few-shot data, we also machine-translate all of the Chinese training data into each of the languages (including English) and train with them; we refer to training with just this data set as _zero-shot_, since no human labor is used during dataset creation.
In the following, we discuss the steps and methods for preparing data for translation, including building alignment between entities and performing iterative quality checks. We also describe how to create the target language ontology, which serves as a database for API calling and provides a mapping between source and target language entities.
### Translation and Alignment for Few-Shot, Validation, and Test Data
#### 5.1.1 From Chinese to English
Figure 1 shows the process used to translate the Chinese dataset to English. First, human professional translators manually translate the Chinese dialogue utterances and ontology in the validation, test, and few-shot training data sets to English. We provide the translators with an annotation tool (Figure 2) to navigate through data examples, perform translation, and highlight entity spans in the translated sentence. The tool helps verify the consistency of slot value translations between user/agent utterances and their annotations after translation.
For each utterance in a dialogue, our tool automatically identifies the values in dialogue states and user/agent actions. Slots are _canonicalized_ before calling the database, meaning that their values must lexically match those in the ontology. Since slot values appearing in the utterances may differ from the canonicalized version, we ask translators to manually identify and mark the non-canonicalized form of slot values and their word spans in the utterances.
The tool automatically checks the number of highlighted spans to prevent missing entity translations. After checking, the annotation tool outputs the English dialogue texts and a correspondence (i.e. alignment) between source and target language slot values.
#### 5.1.2 From English to Other Languages
**Automatic Translation.** For validation, test, and few-shot data, we use commercial translation models since (1) translation is done only once, (2) the data size is smaller so it is affordable, and (3) higher data quality reduces post-editing effort.
**Manual Post-Editing.** We hire bilingual speakers of English and the target language to post-edit the translations for fluency and correctness. We instruct them to update the alignment if they modify the translated entities. We provide several tools that automatically check their work and help them during the process. We describe the details in Section 5.4.2.
### Zero-Shot Training Data Translation & Alignment
To create the zero-shot training datasets for the target languages (including English), we use open-source machine translation models to translate the Chinese data to the target language. We pick open-source models since (1) their results are reproducible, (2) open-source models provide access to model weights necessary for hybrid alignment (described below), (3) they allow tuning text generation hyperparameters such as temperature (Ficler and Goldberg, 2017) or beam size (Freitag and Al-Onaizan, 2017) and (4) they cost less, thus allowing effective scaling to more languages.
**Hybrid Alignment for NMT.** Previous work (Moradshahi et al., 2021; Li et al., 2021) proposed using alignment for tracking the position of entities during translation to ensure they can be replaced with the desired translation both in the utterance and the belief state. For this purpose, the encoder-decoder cross-attention weights of the neural machine translation model were used in a method called _neural alignment_. Although neural alignment often works well, it can produce incorrect spans as it is a probabilistic approach and has particularly low recall on long multi-token entities.
Ideally, if there exists a dictionary that provides a mapping between each source entity and all possible translations in the target language, we can directly scan the translated sentence to see if there is a match. We call such an approach _dictionary alignment_. Unfortunately, there is no such dictionary. We propose to build such a dictionary for each sentence on-the-fly. To do so, we first extract the entities from the sentence, then translate each individually and use nucleus sampling (Holtzman et al., 2019) with different temperature values to generate \(K\) translation candidates. This way, we build a mapping between each entity and possible translations which serves as the dictionary for dictionary alignment. Finally, we combine the two methods in a _hybrid_ approach: We try to use dictionary alignment first, and if there is no matching translation in the output, we fall back to neural alignment.
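The following sketch illustrates the hybrid strategy in simplified form; `translate` stands in for the NMT model and `neural_align` for the cross-attention aligner, both assumed interfaces rather than actual library calls.

```python
# Simplified sketch of hybrid (dictionary-first, neural-fallback) entity alignment.
# `translate` and `neural_align` are assumed interfaces, not real library functions.

def dictionary_candidates(entity, translate, temperatures=(0.3, 0.7, 1.0), k=4):
    """Translate the entity out of context several times with nucleus sampling."""
    candidates = set()
    for t in temperatures:
        candidates.update(translate(entity, num_samples=k, temperature=t))
    return candidates

def hybrid_align(sentence, translated, entities, translate, neural_align):
    """Return a map from each source entity to its span in the translated sentence."""
    alignment = {}
    for entity in entities:
        span = None
        # Dictionary alignment: scan the translation for any candidate rendering.
        for cand in sorted(dictionary_candidates(entity, translate), key=len, reverse=True):
            pos = translated.find(cand)
            if pos != -1:
                span = (pos, pos + len(cand))
                break
        # Fallback: neural alignment from encoder-decoder cross-attention.
        if span is None:
            span = neural_align(sentence, translated, entity)
        alignment[entity] = span
    return alignment
```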
### Creating English-Hindi Code-Mixed Zero-Shot Training Data
To generate the English-Hindi code-mixed training set, we implemented a pipeline combining GCM (Rizvi et al., 2021) and alignment-based word substitution. An overview of the pipeline is shown in Fig. 3. GCM automatically generates code-mixed text given parallel data in two languages, based on two linguistic theories of code-mixing, the Equivalence Constraint theory (Poplack, 1980) and the Matrix Language theory (Scotton, 1993).
Figure 1: The translation and annotation process of X-RiSAWOZ from Chinese to English. There are 4 major steps: (1) Translate utterances and provide entity alignments between source and target sentences using the UI tool. (2) Create the value mapping using entity alignments. (3) Create slot and domain mappings by manually translating them from Chinese. (4) Translate slot values in the annotations and ontology using the value mapping.
We take the Chinese training set as source and translate user and agent utterances to English (en) and Hindi (hi). The translated sentences are fed as input to GCM, which produces code-mix utterances. For sentences where GCM fails to generate any candidate, we rely on word-alignment-based word substitution to generate a code-mixed utterance. Alignments are generated using cosine similarities between sub-word representations from mBERT in a parallel sentence pair (Dou and Neubig, 2021).
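A rough sketch of this fallback substitution step is shown below; `embed_tokens`, the similarity threshold, and the substitution policy are illustrative assumptions rather than the exact configuration of the tools cited above.

```python
# Rough sketch: align English and Hindi tokens via cosine similarity of mBERT
# sub-word embeddings, then substitute aligned words to form a code-mixed utterance.
# `embed_tokens` is an assumed helper returning one vector per token.
import numpy as np

def align_tokens(en_tokens, hi_tokens, embed_tokens, threshold=0.6):
    en_vecs, hi_vecs = embed_tokens(en_tokens), embed_tokens(hi_tokens)
    pairs = []
    for i, u in enumerate(en_vecs):
        sims = [np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)) for v in hi_vecs]
        j = int(np.argmax(sims))
        if sims[j] >= threshold:
            pairs.append((i, j))
    return pairs

def code_mix(en_tokens, hi_tokens, pairs, substitute_every=2):
    """Replace every second aligned English word with its Hindi counterpart."""
    mixed = list(en_tokens)
    for n, (i, j) in enumerate(pairs):
        if n % substitute_every == 0:
            mixed[i] = hi_tokens[j]
    return " ".join(mixed)
```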
### Translation of Annotations
The next step is to translate the slot values in the belief state, user and agent acts, and database search results from the source language to the target language. Since the translations of the same slot value may vary according to the context (e.g., the same affirmative Chinese word may correspond to _is_, _does_, _has_, or other affirmative expressions), we create a one-to-many mapping between source-language slot values and their candidate translations based on the slot value alignments obtained above. We ask human translators to select the most appropriate expression from all candidate translations as the canonicalized translation. We follow two basic principles in this process:
**Part-of-Speech (POS) Consistency.** The translator should pick, for each slot, values with the same POS tags where possible. For example, for the "production country/region" slot in the TV series domain, we will use the unified noun form (i.e., "America"/"India") instead of mixing the noun and adjective form (i.e., "American"/"India").
**Value Consistency.** The translator should use the same translation across domains and slots. For example, the Chinese word for a mid-level "price-range" can be translated into either "moderate" or "medium"; we consistently map it to "moderate" for all "price-range" slots across all domains.
#### 5.4.1 Creating Ontology and Databases
We found that ontology construction should be done in tandem with dataset translation. In prior work, using a predefined ontology limited fluency and diversity of the translations (Zuo et al., 2021), and replacing entities in sentences after translation without careful attention to parts of speech or context resulted in grammatically incorrect sentences (Moradshahi et al., 2020; Ding et al., 2022). Each value in the source database is automatically mapped to its canonicalized translation. Note that since not all slot values are seen in the training dataset, translators are asked to provide canonicalized translations for those values.
The original RiSAWOZ dataset only provides final search results from databases instead of intermediate API calls. We hence also restore the API calls through the dialogue state, database, and search results for complete database interactions. This improves the extensibility of the dataset and helps to generalize RiSAWOZ to other languages and domains in the future.
#### 5.4.2 Annotation Checker
Manual errors are inevitable, especially for translators who are unfamiliar with the process. We have developed an annotation checker to automatically flag and correct errors where possible:
**Entity Checking.** Our annotation checker ensures that changes made in the English translation of entities are propagated to the downstream translations for other target languages. It compares the revised annotations with the current annotations and deletes incorrect or redundant slots. Additionally, it locates missing entities or entities that need re-annotation, helping annotators quickly synchronize the latest changes.
**API Checking.** Some datasets such as RiSAWOZ, include the ground truth database search results. For these datasets, we can check the consistency of the API by comparing the results of the API call with the provided ground truth. Our checker resolves observed discrepancies by automatically deleting redundant slots and values in constraints and adding the differences to the slot value mappings. It also shows the precise locations of changes for annotators to review.
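A simplified sketch of this API consistency check is given below; the data layout and the repair policy are assumptions made for illustration.

```python
# Simplified consistency check between an annotated API call and the database.
# The dictionary layout and repair policy are illustrative assumptions.

def check_api_call(constraints, gold_results, database, value_map):
    """Compare the results implied by `constraints` with the gold search results."""
    predicted = database.query(constraints)
    if predicted == gold_results:
        return constraints, []          # nothing to fix
    issues = []
    repaired = dict(constraints)
    for slot, value in constraints.items():
        # Drop a constraint if removing it reconciles the results (redundant slot).
        relaxed = {s: v for s, v in repaired.items() if s != slot}
        if database.query(relaxed) == gold_results:
            repaired = relaxed
            issues.append(("redundant", slot, value))
        # Otherwise remember the mismatch so it can be added to the slot value mapping.
        elif value not in value_map.get(slot, []):
            issues.append(("unmapped-value", slot, value))
    return repaired, issues
```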
## 6 Experiment
The goal of our experiments is to create an agent in a _target_ language, given full training data in the source (Chinese) language, and a varying amount of training data in the target language. We also assume we have access to a machine translation model from Chinese to the target language. We perform our experiments on different target languages in X-RiSAWOZ. Table 1 shows statistics of different data splits used in the experiments, which is the same across all _target_ languages.
### Setting
**Full-Shot (mono-lingual).** This setting is only possible for Chinese since we do not have full training data for the target languages. In the full-shot experiments, all of the original Chinese training data is used for training. Note that this setting is not a cross-lingual experiment per se, but a point of comparison for other settings.
**Zero-Shot (cross-lingual).** In our zero-shot experiments, no manually created target-language data is available for training. Instead, we automatically create training data by machine translation of the source language as described in Section 5.1.2. Additionally, we perform two ablations on our automatic training data translation approach: (1) only using neural alignment (\(-\) Dictionary Align), and (2) no alignment of any type (\(-\) Neural Align).
**Few-Shot (cross-lingual).** In the few-shot setting, we start from a zero-shot model (with its various ablations) and further fine-tune it on the few-shot dataset in the target language. Thus the model is trained on both the machine-translated data and the manually created few-shot dataset. In this setting, we also perform an ablation where we train only on the few-shot training data without any machine-translated data (_Few-shot Only_).
### Models
In all our experiments, we use the m2m100 Fan et al. (2020) model for Korean and mBART Liu et al. (2020) for all other languages. We found mBART to be especially effective in zero-shot settings as the language of its outputs can be controlled by providing a language-specific token at the beginning of decoding. Additionally, its denoising pre-training objective improves its robustness to the remaining translation noise. In each setting, all four dialogue subtasks are done with a single model, where we specify the task by prepending a special token to the input.
Since the dataset for target languages is introduced in this paper, there is only prior work on the Chinese dataset. In Section 7.3, we compare our results to the best previously reported result on RiSAWOZ from Moradshahi et al. (2021) that achieved SOTA on the DST subtask using an mBART model, and from Quan et al. (2020) for other subtasks which use DAMD Zhang et al. (2020), a Seq2Seq RNN end-to-end dialogue model. We use seven widely-used automatic metrics to compare different models. Please see Section A.2 for details of each metric.
## 7 Results and Discussion
We first evaluate the models for each turn, assuming that all previous subtasks and steps are correct. We then evaluate the end-to-end accuracy for the whole conversation.
### Turn by Turn Evaluation
To understand how each component performs independently, our first experiment uses the gold data of all the previous turns and subtasks as input in our evaluation (Table 2). In this scenario, errors do not propagate from one subtask to the next in each turn. _Ours_ refers to our main approach, which combines all techniques. Each ablation incrementally takes away one of the techniques.
In the zero-shot setting, results vary across added languages, where the agent achieves between 34.6-84.2% on DST, 42.8-67.3% on DA, and 10.2-29.9% on BLEU score. Fine-tuning on the few-shot data improves all metrics for all languages, with the agent achieving between 60.7-84.6% on DST, 38.0-70.5% on DA, and 28.5-46.4% on BLEU score. The improvement in DST is particularly prominent for Hindi, Korean, and English-Hindi, where the quality of machine translation may not be as good. Nonetheless, adding automatically translated data to training greatly improves the accuracy for these languages over the "few-shot only" result.
### Error Analysis
To better understand the inference limitations of our trained agents, we manually inspected the model predictions by randomly selecting 100 validation turns for each domain where the prediction was incorrect. The following are the most common error patterns we observed across all languages:
**Implicit Entities**: In X-RiSAWOZ dialogues, some entities are not mentioned explicitly in the user's utterance and need to be _inferred_. Examples include the price range corresponding to a luxury dinner, the attraction a speaker would like for a date with their partner, and hotel ratings. These errors are partly due to the limited common-sense capability of the pre-trained language model used Zhou et al. (2020) and partly due to the training data encouraging the model to copy entities verbatim from the input instead of performing logical reasoning. This category accounts for 27% of the errors observed.
**Multiple Correct Dialogue Acts**: In X-RiSAWOZ, the agent often provides an answer as soon as it receives the API call results. However, in some
cases, the agent asks follow-up questions (e.g., "how many seats do you want for the car?") to narrow down the search results. Since the dataset is constructed via human interactions and not simulation, there are no well-defined policies governing the agent's behavior. Thus, there are examples where multiple dialogue acts can be correct given the input and API constraints. Since during evaluation we can only check the model output against the one response provided as gold, another perfectly fine response can be deemed as incorrect. We discovered that 38% of errors are of this nature.
**Incorrect Entities**: In DST and DA subtasks, the accuracy is highly dependent on identifying the correct entities in the input. However, there are cases where the model (1) predicts a wrong entity, (2) predicts part of the entity, (3) predicts the entity along with prepositions, articles, etc. (4) omits the entity, or (5) fully hallucinates an entity. We found (1) and (2) to be the most common patterns. (3) can be addressed by a simple pre-processing or text lemmatization. (4) happens with complex sentences with many entities where the model often mispredicts the slot names as well as slot values. (5) is usually caused by data mis-annotations or errors in data processing, where a slot is missing from the input and the model generates the most probable value for it. The remaining 35% of errors fall under this category.
For each language, we also performed a similar analysis to understand if there are language-specific attributes that affect the accuracy and quality of the translated datasets. The result of these analyses is included in the appendix (A.4-A.7).
### Full Conversation Evaluation
The main results of our experiments are reported in Table 3. Following Lin et al. (2021), the evaluation for these experiments is performed end-to-end, meaning that in each turn the model output from the previous subtask is used as input for the next subtask. This reflects a real-world scenario where an agent converses with the user interactively.
Overall, in the full-shot setting, when training on the Chinese dataset, we improve the state of the art in Joint Goal Accuracy (JGA) by 1.33%, Task Success Rate (TSR) by 5.04%, Dialogue Success Rate (DSR) by 5.35%, and BLEU by 6.82%.
| Language | Setting | DST Acc. ↑ | DA Acc. ↑ | BLEU ↑ |
|---|---|---|---|---|
| **Full-Shot** | | | | |
| Chinese | Ours | **96.43** | **91.74** | **51.99** |
| **Zero-Shot** | | | | |
| English | Ours | **84.23** | **67.27** | **27.14** |
| | – Dictionary Align | 83.42 | 66.51 | 22.67 |
| | – Neural Align | 82.33 | 67.79 | 13.24 |
| French | Ours | **70.75** | **59.27** | **29.88** |
| | – Dictionary Align | 68.22 | 56.32 | 25.43 |
| | – Neural Align | 64.53 | 53.33 | 18.12 |
| Hindi | Ours | **52.09** | **56.06** | **27.42** |
| | – Dictionary Align | 50.12 | 54.34 | 23.43 |
| | – Neural Align | 48.11 | 53.21 | 18.32 |
| Korean | Ours | **34.55** | 49.56 | **10.17** |
| | – Dictionary Align | 31.47 | **50.17** | 9.87 |
| | – Neural Align | 29.87 | 49.51 | 4.59 |
| **Few-Shot** | | | | |
| Chinese | Few-shot Only | **82.75** | **77.33** | **38.87** |
| English | Ours | **84.62** | 69.44 | **46.37** |
| | – Dictionary Align | 83.37 | 69.74 | 46.16 |
| | – Neural Align | 82.01 | **70.45** | 45.43 |
| | Few-shot Only | 74.52 | 58.97 | 45.53 |
| French | Ours | **73.12** | **61.11** | **42.21** |
| | – Dictionary Align | 71.12 | 60.21 | 40.12 |
| | – Neural Align | 69.68 | 57.12 | 38.14 |
| | Few-shot Only | 67.55 | 50.96 | 44.77 |
| Hindi | Ours | 75.16 | **59.02** | **38.38** |
| | – Dictionary Align | **75.32** | 57.66 | 37.54 |
| | – Neural Align | 73.21 | 54.32 | 34.32 |
| | Few-shot Only | 55.77 | 49.88 | 38.18 |
| Korean | Ours | **71.17** | **53.52** | **34.93** |
| | – Dictionary Align | 69.57 | 52.37 | 34.75 |
| | – Neural Align | 69.91 | 52.00 | 33.80 |
| | Few-shot Only | 60.65 | 41.47 | 32.76 |
| English-Hindi | Ours | **60.67** | **37.97** | 26.77 |
| | Few-shot Only | 56.53 | 36.50 | **28.54** |

Table 2: Results on the validation set of X-RiSAWOZ, obtained by feeding the gold input for each subtask in each turn. The best result in each section is in bold. ↑ indicates that a higher number is better.
The improvements are due to the improved and succinct dialogue representation we have created (Section 4), and contextual representations of transformer models.
In the zero-shot setting, results vary across languages, where the English, French, Hindi, Korean, and English-Hindi agents achieve 35%, 16%, 9%, 11%, and 4% of the DSR score of the full-shot Chinese agent, respectively. In the few-shot setting, the ratio improves to 38%, 26%, 25%, 23%, and 5%. The smallest and biggest improvements are on the English and Hindi dataset respectively. This suggests that the impact of few-shot data is greater when the quality of the pretraining data is lower, which is related to the quality of the translation model between Chinese and the target language.
The Response Generation subtask receives the largest improvement in performance when provided with human supervision in the few-shot data, with a BLEU score improvement of over 10%. This suggests that while translation with alignment is effective for understanding user input, it is not as effective for generating output text. This is partly due to the agent model used, mBART, which is trained with a denoising objective and is thus able to handle noisy input text better.
## 8 Conclusion
This paper presents a solution for balancing the trade-offs between standard machine translation and human post-editing. By standardizing and establishing best practices for "translation with manual post-editing", and releasing associated toolkits, post-editing can be made faster, more efficient, and cost-effective. We use our methodology to create X-RiSAWOZ, a new end-to-end, high-quality, and large multi-domain multilingual dialogue dataset, covering 5 diverse languages and 1 code-mixed language. We also provide strong baselines for zero/few-shot creation of dialogue agents via cross-lingual transfer. In the few-shot setting, our agents achieve between 60.7-84.6% on DST, 38.0-70.5% on DA, and 28.5-46.4% on RG subtasks across different languages. Overall, our work paves the way for more efficient and cost-effective development of multilingual task-oriented dialogue systems.
| Language | Setting | JGA ↑ | TSR ↑ | DSR ↑ | API ↑ | DAA ↑ | BLEU ↑ | SER ↓ |
|---|---|---|---|---|---|---|---|---|
| **Full-Shot** | | | | | | | | |
| Chinese | Ours | **78.23** | **53.67** | **45.67** | **72.70** | **73.68** | **34.72** | **26.41** |
| | SOTA | 76.90 | 48.63 | 40.32 | – | – | 27.90 | 30.32 |
| **Zero-Shot** | | | | | | | | |
| English | Ours | **43.64** | **22.46** | **16.00** | **44.95** | **40.81** | **14.12** | **47.08** |
| | – Dictionary Align | 38.70 | 19.22 | 13.50 | 39.84 | 37.35 | 11.34 | 49.64 |
| | – Neural Align | 38.96 | 9.50 | 5.67 | 40.95 | 41.96 | 8.21 | 59.90 |
| French | Ours | **24.04** | **12.58** | **7.17** | **34.20** | **38.32** | **10.88** | **58.45** |
| | – Dictionary Align | 20.32 | 5.43 | 4.18 | 28.51 | 35.78 | 9.72 | 60.25 |
| | – Neural Align | 19.43 | 3.23 | 2.11 | 24.64 | 28.36 | 6.81 | 68.89 |
| Hindi | Ours | **20.32** | **10.11** | **4.32** | **32.32** | **34.23** | **9.13** | **60.43** |
| | – Dictionary Align | 18.31 | 5.15 | 3.98 | 30.12 | 32.31 | 8.11 | 65.43 |
| | – Neural Align | 17.32 | 3.12 | 3.10 | 28.51 | 28.13 | 7.00 | 67.25 |
| Korean | Ours | **21.41** | **10.75** | **5.00** | **32.08** | **36.57** | 7.27 | **64.33** |
| | – Dictionary Align | 19.53 | 9.46 | 4.83 | 27.75 | 36.33 | **7.55** | 35.84 |
| | – Neural Align | 17.77 | 8.77 | 3.67 | 27.19 | 25.45 | 7.12 | 38.98 |
| English-Hindi | Ours | **9.22** | **4.81** | **2.03** | **10.43** | **26.47** | **5.41** | **63.26** |
| **Few-Shot** | | | | | | | | |
| Chinese | Few-shot Only | **37.69** | **28.04** | **21.00** | **40.73** | **42.30** | **13.89** | **45.44** |
| English | Ours | **48.91** | **23.13** | **17.17** | **50.06** | **42.45** | **26.33** | **44.93** |
| | – Dictionary Align | 48.40 | 22.79 | 16.67 | 50.03 | 42.26 | 25.29 | 45.01 |
| | – Neural Align | 46.31 | 22.68 | 16.50 | 47.61 | 42.54 | 25.78 | 44.78 |
| | Few-shot Only | 29.87 | 16.09 | 10.50 | 32.30 | 30.45 | 20.00 | 52.79 |
| French | Ours | **30.85** | **17.17** | **11.83** | **39.97** | **45.03** | **20.92** | **46.26** |
| | – Dictionary Align | 28.51 | 16.11 | 9.54 | 38.11 | 43.41 | 19.91 | 48.35 |
| | – Neural Align | 26.45 | 15.54 | 9.13 | 35.74 | 42.15 | 16.99 | 49.26 |
| | Few-shot Only | 19.43 | 3.23 | 2.11 | 24.64 | 28.36 | 6.81 | 68.89 |
| Hindi | Ours | **25.62** | **15.67** | **11.31** | **37.54** | **41.32** | **18.51** | **44.26** |
| | – Dictionary Align | 23.12 | 15.11 | 10.32 | 35.14 | 39.51 | 16.34 | 46.76 |
| | – Neural Align | 21.12 | 13.22 | 8.61 | 34.11 | 34.12 | 15.33 | 48.97 |
| | Few-shot Only | 18.48 | 8.16 | 4.50 | 19.09 | 23.41 | 13.15 | 62.24 |
| Korean | Ours | **26.24** | **14.32** | **10.60** | **35.42** | **38.42** | **20.32** | **43.21** |
| | – Dictionary Align | 24.13 | 12.53 | 8.45 | 23.42 | 33.34 | 19.32 | 47.32 |
| | – Neural Align | 23.54 | 10.23 | 7.54 | 22.31 | 30.42 | 18.34 | 50.33 |
| | Few-shot Only | 20.66 | 9.16 | 5.17 | 19.39 | 23.56 | 17.81 | 54.57 |
| English-Hindi | Ours | **21.80** | **4.13** | 1.83 | **22.64** | **21.69** | **5.29** | **66.31** |
| | Few-shot Only | 16.07 | 3.69 | **2.33** | 15.65 | | | |

Table 3: Main end-to-end results, where the evaluation of each subtask in a turn uses the model output of the previous subtasks. ↑ (↓) indicates that higher (lower) is better.
## 9 Limitations
We would have liked to evaluate the generalization of our cross-lingual approach on more languages. For instance, we partially rely on machine translation models for Chinese-to-English translation. Available translation models for other language pairs, especially from/to low-resource languages have much lower quality, and it would be desirable to measure the effect of that in our experiments.
The ontology used for new languages is derived by translating the Chinese ontology. As a result, the entities are not localized. Creating local ontology requires manual effort as one would need to identify websites or databases for scraping or collecting the entities. Once the local entities are collected, we can automatically replace translated entities with local ones to localize the dataset.
Another limitation is the lack of human evaluation for agent responses. BLEU score does not correlate well with human judgment [21], and SER only accounts for the factuality of the response but not grammar or fluency. In future work, we wish to address this by conducting human evaluations in addition to automatic metrics.
## 10 Ethical Considerations
We do not foresee any harmful or malicious misuse of the technology developed in this work. The data used to train models is about seeking information about domains like restaurants, hotels and tourist attractions, does not contain any offensive content, and is not unfair or biased against any demographic. This work does focus on widely-spoken languages, but we think the cross-lingual approach we proposed can improve future dialogue language technologies for a wider range of languages.
We fine-tune multiple medium-sized (several hundred million parameters) neural networks for our experiments. We took several measures to avoid wasted computation, like performing one run instead of averaging multiple runs (since the numerical difference between different models is large enough to draw meaningful conclusions), and improving batching and representation that improved training speed, and reduced needed GPU time. Please refer to Appendix A.1 for more details about the amount of computation used in this paper.
## Acknowledgements
We would like to thank Ruchi Jain for helping us validate the automatically translated Hindi dialogues. This work is supported in part by the National Science Foundation under Grant No. 1900638, the Alfred P. Sloan Foundation under Grant No. G-2020-13938, the Verdant Foundation, Microsoft, KDDI, JPMorgan Chase, and the Stanford Human-Centered Artificial Intelligence (HAI) Institute. This work is also co-funded by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (No. 2022D01D43), Zhejiang Lab (No. 2022KH0AB01). This project has also received funding from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No. 101021797 (Starlight), and the European Union's Horizon Europe research and innovation programme under grant agreement N\({}^{\circ}\) 101070192 (CORTEX\({}^{2}\)). This work is also supported in part by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government(MSIT) (No.2020-0-01373 and IITP-2022-2021-0-01817).
|
2309.10946 | Closure algebras of depth two with extremal relations: Their frames,
logics, and structural completeness | We consider varieties generated by finite closure algebras whose canonical
relations have two levels, and whose restriction to a level is an "extremal"
relation, i.e. the identity or the universal relation. The corresponding logics
have frames of depth two, in which a level consists of a set of simple clusters
or of one cluster with one or more elements. | Ivo Düntsch, Wojciech Dzik | 2023-09-19T21:56:56Z | http://arxiv.org/abs/2309.10946v1 | Closure algebras of depth two with extremal relations: their frames, logics, and structural completeness
###### Abstract.
We consider varieties generated by finite closure algebras whose canonical relations have two levels, and whose restriction to a level is an "extremal" relation, i.e. the identity or the universal relation. The corresponding logics have frames of depth two, in which a level consists of a set of simple clusters or of one cluster with one or more elements.
_Dedicated to our dear colleague Ivo G. Rosenberg_
## 1. Introduction
In [10] we have investigated _ideal algebras_\(\langle B,f\rangle\) which are closure algebras in which the set \(B^{\mathrm{c}}\) of closed elements is an ideal with the top element added. Our starting point was the unary discriminator \(f^{\mathbf{1}}\), for which
\[f^{\mathbf{1}}(x)\stackrel{{\mathrm{df}}}{{=}}\begin{cases}0,& \text{if $x=0$,}\\ 1,&\text{otherwise.}\end{cases} \tag{1.1}\]
\(f^{\mathbf{1}}\) is the largest element in the additive semilattice of modal operators on \(B\) which was investigated in [11], and the set \(B^{\mathrm{c}}\) of its closed elements is \(\{0,1\}\); in other words, \(\langle B,f^{\mathbf{1}}\rangle\) is an ideal algebra with associated ideal \(\{0\}\). The other extreme is the closure algebra in which every element is closed, i.e. where \(f\) is the identity, and \(B^{\mathrm{c}}=I=B\). It turned out that the equational class generated by all ideal algebras is a locally finite positive universal class, and that the canonical frame of a non-simple finite ideal algebra has depth two and consists of a set of simple clusters on the first level and a single cluster on the second level; in other words, the canonical frame relation restricted to the lower level is the identity and its restriction to the upper level is the universal relation.
In the present paper we investigate closure algebras related to ideal algebras and their logics. Our first case are closure algebras in which the set of closed elements is a filter \(F\) with the smallest element added, followed by two other classes of algebras for which the set of closed elements may also be related to ideals or filters. All these have the common property that their depth is at most two, and that the canonical relation of a finite algebra consists of at most two levels in which the restriction of the frame relation to a level is either universal or the identity.
The structure of the paper is as follows: After the introductory section, we briefly recall some facts about ideal algebras and their logic, followed by a section on filter algebras and their logic. Then we consider the remaining cases for algebras and frames of depth two with extremal relations and their logics. These sections are followed by a section in which we exhibit meet and join of the varieties generated by the classes of algebras we have
considered. We close with some remarks on quasivarieties of algebras of depth two and structural completeness.
## 2. Notation and First Definitions
A _frame_ is a structure \(\mathfrak{F}=\langle W,R\rangle\), where \(W\) is a nonempty set, and \(R\) is a binary relation on \(W\). The identity relation on \(W\) is denoted by \(1^{\prime}\). For \(x\in W\) we set \(R(x)\stackrel{{\mathrm{df}}}{{=}}\{y:x\ R\ y\}\). The _converse of \(R\)_ is the relation \(R\char 10^{\prime}\stackrel{{\mathrm{df}}}{{=}}\{(y,x):x\ R\ y\}\). \(R\) is called _convergent_ if \(x\ R\ y\) and \(x\ R\ z\) imply the existence of some \(w\in W\) such that \(y\ R\ w\) and \(z\ R\ w\). \(R\) is called _directed_, if for all \(x,y\in W\) there is some \(z\in W\) such that \(x\ R\ z\) and \(y\ R\ z\). A reflexive and transitive relation is called a _quasiorder_. An antisymmetric quasiorder is called a _partial order_. A partial order can be obtained from a quasiorder \(R\) on the classes of the equivalence relation defined by \(x\ \theta_{R}\ y\stackrel{{\mathrm{df}}}{{\Longleftrightarrow}}x \ R\ y\) and \(y\ R\ x\), sometimes called the \(T_{0}\)_quotient of \(R\)_. Following [29, p. 75f], we call the classes of \(\theta_{R}\)_clusters_. A cluster is _simple_, if \(|\theta_{R}(x)|=1\), otherwise it is called _proper_. The relation \(\leq_{R}\) defined by \(\theta_{R}(x)\leq_{R}\theta_{R}(y)\stackrel{{\mathrm{df}}}{{ \Longleftrightarrow}}x\ R\ y\) is a partial order. If \(W\) is finite, then \(\leq_{R}\) has minimal and maximal elements, i.e. a smallest and a largest level. A frame is called a _frame of depth \(n,1\leq n\)_, if the length of any maximal chain of \(\leq_{R}\) is at most \(n\), and this is attained for some maximal chain.
An algebra is called _trivial_, if it has exactly one element. If \(\mathfrak{A}\) and \(\mathfrak{B}\) are algebras of the same type, we write \(\mathfrak{A}\leq\mathfrak{B}\), if \(\mathfrak{A}\) is a subalgebra of \(\mathfrak{B}\). \(\mathbf{Sub}(\mathfrak{B})\) denotes the collection of subalgebras of \(\mathfrak{B}\).
As no generality is lost, we shall tacitly assume that a class of algebras is closed under isomorphic copies. If no confusion can arise, we will often refer to an algebra simply by its base set. If \(\mathbf{K}\) is a class of algebras of the same similarity type, we denote by \(\mathbf{H}(\mathbf{K})\) the collection of all homomorphic images of \(\mathbf{K}\), by \(\mathbf{S}(\mathbf{K})\) the collection of all subalgebras of \(\mathbf{K}\), by \(\mathbf{P}(\mathbf{K})\) the collection of all products of elements of \(\mathbf{K}\), by \(\mathbf{Pu}(\mathbf{K})\) the class of all ultraproducts of members of \(\mathbf{K}\), and by \(\mathbf{Si}(\mathbf{K})\) the class of all subdirectly irreducible members of \(\mathbf{K}\). The equational class (variety) \(\mathbf{HSP}(\mathbf{K})\) generated by \(\mathbf{K}\) is denoted by \(\mathbf{Var}(\mathbf{K})\). It is well known that an equational class is generated by its finitely generated members. A class of algebras is called _locally finite_ if every finitely generated member is finite. Thus, a locally finite variety is generated by its finite members. A variety \(\mathbf{V}\) is called _tabular_, if it is generated by a finite algebra, and _pretabular_ if it is not tabular, but every proper subvariety is tabular [4, p. 104].
A _quasivariety_ is a class \(\mathbf{K}\) of algebras which contains a trivial algebra, and for which \(\mathbf{SPPu}(\mathbf{K})=\mathbf{K}\). It is well known that \(\mathbf{K}\) is a quasivariety if and only if it can be axiomatized by quasiidentities, i.e. by sentences of the form \((\tau_{1}=\sigma_{1}\ \&\ldots\&\ \tau_{n}=\sigma_{n})\Rightarrow\tau=\sigma\), see [8, Theorem 2.25]. The quasivariety generated by \(\mathbf{K}\) is denoted by \(\mathbf{QVar}(\mathbf{K})\).
Throughout, \(\mathfrak{B}=\langle B,+,\cdot,-,0,1\rangle\) is a Boolean algebra (BA) with at least two elements. We shall usually identify a Boolean algebra with its underlying set, so that we may write \(B\) instead of \(\mathfrak{B}\). If \(M\subseteq B\), then \(M^{+}\stackrel{{\mathrm{df}}}{{=}}\{x\in M:x\neq 0\}\), and \(-M\stackrel{{\mathrm{df}}}{{=}}\{-x:x\in M\}\). If \(a\in B\), then \(\big{\downarrow}\ a\stackrel{{\mathrm{df}}}{{=}}\{x:x\leq a\}\) is the ideal generated by \(a\), and \(\upuparrow a\stackrel{{\mathrm{df}}}{{=}}\{x:x\geq a\}\) is the filter generated by \(a\). Moreover, \(\mathrm{At}(B)\) denotes the set of atoms of \(B\), and \(\mathrm{Ult}(B)\) is the set of ultrafilters of \(B\).
A _modal operator_ on \(B\) is a mapping \(f:B\to B\) which satisfies, for all \(a,b\in B\),
\[f(0) =0, \text{normality},\] \[f(a+b) =f(a)+f(b), \text{additivity}.\]
If \(f\) is a modal operator, \(\langle B,f\rangle\) is called a _modal algebra_. Modal algebras were investigated by Jonsson and Tarski [22] under the name of _Boolean algebras with operators_, where many of their properties can be found. The _dual of a modal operator_\(f\) is the function \(f^{\partial}:B\to B\), defined by \(f^{\partial}(a)=-f(-a)\). An ideal \(I\) of \(B\) is _closed_, if \(f(a)\in I\) for every \(a\in I\), and a filter \(F\) is called _open_, if \(f^{\partial}(a)\in F\) for all \(a\in F\). We denote the variety of modal algebras by MOA, and the lattice of its subvarieties by \(\Lambda(\mathsf{MOA})\); join and meet in \(\Lambda(\mathsf{MOA})\) are denoted by \(\vee\) and by \(\wedge\), respectively. The class of algebras of the form \(\langle B,f^{1}\rangle\) is denoted by \(\mathsf{DMA}\), see (1.1). It is well known that \(\mathsf{DMA}\) is the class of simple monadic algebras, see e.g. [27].
We shall frequently make use of an instance of Jonsson's Lemma:
**Lemma 2.1**.: _[_21_, Corollary 3.2]_ _If \(\mathbf{K}\) is a class of modal algebras, then \(\mathbf{Si}\,\mathbf{Var}(\mathbf{K})\subseteq\mathbf{HSP_{u}}(\mathbf{K})\)._
In particular, if \(\mathbf{K}\) is axiomatizable by first order positive universal sentences, then \(\mathbf{Si}\,\mathbf{Var}(\mathbf{K})\subseteq\mathbf{K}\), see e.g. [9, Chapter 5].
If \(\mathfrak{B}=\langle B,f\rangle\) is a modal algebra, then the structure \(\langle\operatorname{Ult}(B),R_{f}\rangle\) with
\[u\ R_{f}\ v\Longleftrightarrow f[v]\subseteq u\]
is called the _canonical frame_ or _ultrafilter extension_ of \(\mathfrak{B}\). For finite algebras this has a simple form: If \(a,b\in\operatorname{At}(B)\), and \(F_{a},F_{b}\) are the principal ultrafilters generated by \(a\), respectively, by \(b\), then
\[F_{a}\ R_{f}\ F_{b}\Longleftrightarrow\{f(p):b\leq p\}\subseteq F_{a} \Longleftrightarrow a\leq f(b). \tag{2.1}\]
If \(\mathfrak{F}=\langle W,R\rangle\) is a frame, then the structure \(\mathsf{Cm}(\mathfrak{F})\stackrel{{\mathrm{df}}}{{=}}\langle 2^{W}, \langle R\rangle\rangle\) is called the _complex algebra of \(\mathfrak{F}\)_. Here, for \(X\subseteq W\),
\[\langle R\rangle(X)\stackrel{{\mathrm{df}}}{{=}}\{x\in W:R(x) \cap X\neq\emptyset\}.\]
**Theorem 2.2**.: _[_22_, Theorem 3.9]___
1. \(\mathsf{Cm}(\mathfrak{F})\) _is complete and atomic, and_ \(\langle R\rangle\) _is a completely additive normal operator._
2. _If_ \(\mathfrak{B}=\langle B,f\rangle\) _is a complete atomic Boolean algebra and_ \(f\) _is completely additive, then_ \(\mathfrak{B}\) _is isomorphic to some_ \(\mathsf{Cm}(\langle W,R\rangle)\)_._
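For finite frames both constructions are easy to carry out mechanically. The following Python sketch is ours and not part of the original text: it represents the elements of \(\mathsf{Cm}(\mathfrak{F})\) as subsets of \(W\), and the helper names (`powerset`, `diamond`, `canonical_relation`) are assumptions made for illustration only. It checks, on a two-element chain, that \(\langle R\rangle\) is a normal additive operator and that the canonical relation computed via (2.1) recovers \(R\).

```python
from itertools import chain, combinations

def powerset(W):
    """All subsets of W, as frozensets (the elements of Cm(W, R))."""
    return [frozenset(s) for s in chain.from_iterable(combinations(W, r) for r in range(len(W) + 1))]

def diamond(R, X):
    """The operator <R> of the complex algebra: {x : R(x) meets X}."""
    return frozenset(x for x, y in R if y in X)

def canonical_relation(W, f):
    """Canonical relation on atoms of a finite modal algebra, cf. (2.1): p sees q iff p <= f(q)."""
    return {(p, q) for p in W for q in W if p in f(frozenset([q]))}

# Example: the two-element chain 0 -> 1 (reflexive and transitive).
W = {0, 1}
R = {(0, 0), (1, 1), (0, 1)}
f = lambda X: diamond(R, X)

assert f(frozenset()) == frozenset()                                            # normality
assert all(f(X | Y) == f(X) | f(Y) for X in powerset(W) for Y in powerset(W))   # additivity
assert canonical_relation(W, f) == R                                            # (2.1) recovers R
```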
Two modal operators \(f\), \(g\) on \(B\) are called _conjugate_, if, for all \(a,b\in B\),
\[f(a)\cdot b=0\Longleftrightarrow g(b)\cdot a=0. \tag{2.2}\]
**Theorem 2.3**.: _[_22_, Theorem 3.6.]_ _If \(f,g\) are conjugate modal operators on \(B\), then \(f\) and \(g\) are completely additive, and \(R_{f}\) is the converse of \(R_{g}\). Conversely, if \(R\) is a binary relation on \(W\), then \(\langle R\rangle\) and \(\langle R^{\sim}\rangle\) are conjugate._
A modal operator \(f\) is called a _closure operator_[26], if it satisfies
\(\mathrm{Cl}_{1}\ \ a\leq f(a)\),
\(\mathrm{Cl}_{2}\ \ f(f(a))=f(a)\).
In this case, \(\langle B,f\rangle\) is called a _closure algebra_; observe that the complex algebra of a quasiordered frame is a closure algebra. Closure algebras have equationally definable congruences [6, Example 18, p. 204f]. It is well known that the congruences of a closure algebra correspond to the closed ideals of \(B\). The dual \(\langle B,g\rangle\) of a closure algebra is called an _interior algebra_, and \(g\) an interior operator. Thus, \(g(1)=1\), \(g\) is multiplicative, and \(g\) satisfies
\[\begin{array}{ll}\mathrm{Int}_{1}&g(a)\leq a,\\ \mathrm{Int}_{2}&g(g(a))=g(a).\end{array}\]
The classes of closure algebras and interior algebras are definitionally equivalent. If \(\langle B,f\rangle\) is a closure algebra we denote its corresponding interior algebra \(\langle B,f^{\partial}\rangle\) by \(B^{\partial}\). An element \(x\) of a closure algebra \(\langle B,f\rangle\) is called _closed_ if \(f(x)=x\), and _open_, if \(f^{\partial}(x)=x\). The trivial algebra is denoted by \(\mathbf{1}\), and the two element closure algebra is denoted by \(\mathbf{2}\).
For later use we mention
**Lemma 2.4**.: _[_2, 3_]___
1. _A closure algebra is subdirectly irreducible if and only if it has a smallest nontrivial closed ideal._
2. _If_ \(\mathbf{V}_{1}\) _and_ \(\mathbf{V}_{2}\) _are varieties of closure algebras, then_ \(\mathbf{Si}(\mathbf{V}_{1}\vee\mathbf{V}_{2})=\mathbf{Si}(\mathbf{V}_{1}) \cup\mathbf{Si}(\mathbf{V}_{2})\)_._
The following result will be useful in the sequel:
**Theorem 2.5**.: _[_5_, p. 190f]_ _Suppose that \(D\) is a bounded sublattice of \(B\). Then, \(D\) is the set of closed elements of a closure operator \(f\) on \(B\) if and only if \(\uparrow b\cap D\) has a smallest element for each \(b\in B\). In this case, \(f(b)=\min\ (\uparrow b\cap D)\)._
The following property is decisive for the classes of algebras which we consider: A closure algebra \(\langle B,f\rangle\) is said to have _depth two_, if it satisfies
\[\mathrm{B}_{2}\ \ f(f^{\partial}(x)\cdot f(f^{\partial}(y))\cdot-y)\,\leq x.\]
The name comes from the fact that the canonical relation of a finite closure algebra satisfying \(\mathrm{B}_{2}\) has depth two, see Table 1.
For unexplained notation we refer the reader to [8] for universal algebra, and to [23] for Boolean algebras. A thorough investigation of varieties of interior algebras is [2].
## 3. Modal logics and their semantics
Modal logic extends classical propositional logic by a unary logical connective \(\Diamond\) representing "possibility". Formulas and terms are defined recursively in the usual way. The _dual of \(\Diamond\)_ is the operator \(\Box\) ("necessity") defined by \(\Box\varphi=\neg\Diamond(\neg\varphi)\). A _modal logic_ \(L\) is a set of modal formulas that contains all propositional tautologies and is closed under modus ponens and substitution. \(L\) is _normal_, if it contains the formula
\[\mathbb{K}\ \ \Box(p\to q)\rightarrow(\Box p\rightarrow\Box q)\]
and is closed under necessitation, i.e. \(\varphi\in L\) implies \(\Box\varphi\in L\). Equivalently, \(L\) is normal if it contains the formulas \(\Diamond\bot\leftrightarrow\bot\) and \(\Diamond(\varphi\vee\psi)\leftrightarrow\Diamond\varphi\vee\Diamond\psi\). In the sequel a logic is assumed to be normal with a set \(\mathsf{Var}\) of variables and a set \(\mathsf{Fml}\) of formulas in a suitable language \(\mathsf{Lan}\). If \(\Gamma\subseteq\mathsf{Fml}\), then the smallest normal logic containing \(\Gamma\) is denoted by \(L_{\Gamma}\). If \(L\subseteq L^{\prime}\), then \(L^{\prime}\) is called an _extension of \(L\)_. It is well known that the class of normal modal logics forms a lattice under \(\subseteq\), dually isomorphic to the lattice of varieties of modal algebras, see e.g. [3].
A class \(\mathsf{M}\) of modal algebras _validates_ or _determines a logic_\(L\) if \(\mathfrak{B}\models\varphi\) for all theorems \(\varphi\) of \(L\) and all \(\mathfrak{B}\in\mathsf{M}\). With some abuse of notation, we let \(\mathbf{Var}(L)\) be the class of modal algebras that satisfy all theorems of \(L\). In the other direction, the set \(\Delta_{\mathsf{M}}\) of all formulas valid in all members of \(\mathsf{M}\) is a normal logic \(L_{\mathsf{M}}\)[20, p. 178]. If \(\mathsf{M}=\{\mathfrak{B}\}\), then \(L_{\mathsf{M}}=L_{\mathbf{Var}(\{\mathfrak{B}\})}\).
Next, we turn to frame semantics. Validity in a frame is defined as follows: A _model_\(\mathfrak{M}\) (for \(\mathsf{Fml}\)) is a structure \(\langle W,R,v\rangle\) where \(\langle W,R\rangle\) is a frame, and \(v:\mathsf{Var}\to 2^{W}\) is a valuation function which is extended over \(\mathsf{Fml}\) as follows:
\[v(\neg\varphi) =W\setminus v(\varphi)=\{w\in W:w\notin v(\varphi)\},\] \[v(\varphi\land\psi) =v(\varphi)\cap v(\psi),\] \[v(\top) =W,\] \[v(\Diamond\varphi) =\{w\in W:(\exists u)[w\ R\ u\text{ and }u\in v(\varphi)]\}.\]
A formula \(\varphi\) is _true in the model_ \(\mathfrak{M}=\langle W,R,v\rangle\), written as \(\mathfrak{M}\models_{v}\varphi\), if \(v(\varphi)=W\). \(\varphi\) is _true in the frame_ \(\mathfrak{F}=\langle W,R\rangle\), written as \(\mathfrak{F}\models\varphi\), if \(\langle W,R,v\rangle\models\varphi\) for all valuations \(v:\mathsf{Var}\to 2^{W}\). If \(K\) is a class of frames, we write \(K\models\varphi\) just in case \(\mathfrak{F}\models\varphi\) for all \(\mathfrak{F}\in K\). Furthermore, if \(\Gamma\) is a set of formulas, then we write \(\mathfrak{F}\models\Gamma\) if \(\mathfrak{F}\models\varphi\) for all \(\varphi\in\Gamma\), and we write \(K\models\Gamma\), if \(\mathfrak{F}\models\Gamma\) for all \(\mathfrak{F}\in K\).
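These clauses can be checked by brute force on small frames. The sketch below is ours and not from the paper; the tuple encoding of formulas and the function names are assumptions made for illustration. It extends a valuation over formulas following the clauses above and tests frame validity by quantifying over all valuations.

```python
from itertools import chain, combinations, product

def powerset(W):
    return [frozenset(s) for s in chain.from_iterable(combinations(W, r) for r in range(len(W) + 1))]

# Formulas as nested tuples: ("var", p), ("top",), ("not", x), ("and", x, y), ("dia", x).
def extend(W, R, v, phi):
    """Extend a valuation v (variables -> subsets of W) over formulas, following the clauses above."""
    if phi[0] == "var":
        return v[phi[1]]
    if phi[0] == "top":
        return frozenset(W)
    if phi[0] == "not":
        return frozenset(W) - extend(W, R, v, phi[1])
    if phi[0] == "and":
        return extend(W, R, v, phi[1]) & extend(W, R, v, phi[2])
    if phi[0] == "dia":
        X = extend(W, R, v, phi[1])
        return frozenset(w for w in W if any((w, u) in R and u in X for u in W))
    raise ValueError(phi)

def true_in_frame(W, R, phi, variables):
    """phi is true in the frame (W, R) iff it is true under every valuation of its variables."""
    return all(extend(W, R, dict(zip(variables, vs)), phi) == frozenset(W)
               for vs in product(powerset(W), repeat=len(variables)))

# Axiom T (p -> <>p), written here as not(p and not <>p), is valid on a reflexive frame:
W, R = {0, 1}, {(0, 0), (1, 1), (0, 1)}
T = ("not", ("and", ("var", "p"), ("not", ("dia", ("var", "p")))))
assert true_in_frame(W, R, T, ["p"])
```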
A frame \(\mathfrak{F}\)_determines the logic_\(L_{\mathfrak{F}}=\{\varphi:\mathfrak{F}\models\varphi\}\), and a class \(K\) of frames determines the logic \(L_{K}=\{\varphi:K\models\varphi\}=\cap\{L_{\mathfrak{F}}:\mathfrak{F}\in K\}\). Conversely, if \(\Gamma\) is a set of formulas, then \(K_{\Gamma}\stackrel{{\mathrm{df}}}{{=}}\{\mathfrak{F}:\mathfrak{F} \models\Gamma\}\) is the class of frames that determine \(\Gamma\); note that \(K_{\Gamma}=K_{L_{\Gamma}}\). A class \(K^{\prime}\) of frames is called _modally definable_ if it is of the form \(K_{L}\) for some logic \(L\), i.e. of the form \(K_{L_{K^{\prime}}}\).
A logic \(L^{\prime}\) is _Kripke (or frame) complete_, if it is determined by a class \(K^{\prime}\) of frames, i.e. if \(L^{\prime}=L_{K^{\prime}}\). This is true if and only if \(L^{\prime}=L_{K_{L^{\prime}}}\). Although not all logics are Kripke complete, see e.g. [7] Section 19, and [3], in this paper we will deal only with Kripke complete logics. Table 1 shows frame conditions for various extensions of \(\mathbf{K}\). There, the axioms are named in \(\mathtt{teletype}\), and the corresponding logics generated by them are denoted in **boldface**. Since the nomenclature used in the literature is not uniform, we list some alternative denotation:
\[\mathbf{S4}\stackrel{{\mathrm{df}}}{{=}}\mathbf{KT4},\ \mathbf{S4.1}\stackrel{{\mathrm{df}}}{{=}}\mathbf{S4M},\ \mathbf{S4.2}\stackrel{{\mathrm{df}}}{{=}}\mathbf{S4G},\ \mathbf{S4.3}\stackrel{{\mathrm{df}}}{{=}}\mathbf{S4H},\ \mathbf{S4.4}\stackrel{{\mathrm{df}}}{{=}}\mathbf{S4R},\ \mathbf{S5}\stackrel{{\mathrm{df}}}{{=}}\mathbf{S4B}.\]
We note that the \(\mathbf{T4}\) extension of \(\mathbf{KM}\), i.e. \(\mathbf{S4M}\), is Kripke complete: If \(R\) is a quasiorder, then the condition that verifies \(\mathbf{M}\) is \((\forall x)(\exists y)[x\ R\ y\text{ and }(\forall z)(y\ R\ z\Rightarrow y=z)]\). As the logics whose canonical frames have depth two are of particular importance in the sequel, we shall mention the following:
**Theorem 3.1**.: \(\mathbf{S4.2B_{2}}=\mathbf{S4.3B_{2}}\)_._
Proof.: Since both logics are Kripke complete, it is enough to show that for a quasiorder \(R\) of depth two, .2 holds if and only if .3 holds.
"\(\Rightarrow\)": Suppose that \(R\) is convergent, and let \(x\ R\ y\) and \(x\ R\ z\); we need to show that \(y\ R\ z\) or \(z\ R\ y\). If \(y\ R\ x\), then \(y\ R\ z\), since \(R\) is transitive; similarly, \(z\ R\ x\) implies \(z\ R\ y\). Suppose that \(y\ (-R)\ x\) and \(z\ (-R)\ x\). Since \(R\) is convergent, there is some \(w\) such that \(y\ R\ w\) and \(z\ R\ w\); furthermore, \(\mathsf{B}_{2}\) and our assumption imply \(w\ R\ y\) and \(w\ R\ z\). Since \(y\ R\ w\), and by the transitivity of \(R\), we obtain \(y\ R\ z\), and, similarly, \(z\ R\ y\).
"\(\Leftarrow\)": Suppose that \(R\) satisfies.3, and let \(x\ R\ y\) and \(x\ R\ z\). By.3, we can w.l.o.g. assume that \(y\ R\ z\) otherwise we exchange \(y\) and \(z\). Since \(R\) is reflexive, we also have \(z\ R\ z\), and we set \(w\stackrel{{\mathrm{df}}}{{=}}z\). Observe that \(\mathsf{B}_{2}\) was not needed for this direction.
## 4. Ideal algebras
Our starting point are modal algebras \(\langle B,f\rangle\) for which there is an ideal \(I\) of \(B\) such that
\[f(x)=\begin{cases}x,&\text{if }x\in I,\\ 1,&\text{otherwise}.\end{cases}\]
Algebras of this form are called _ideal algebras_; the class of all ideal algebras is denoted by \(\mathsf{IMA}\). Ideal algebras generalize the discriminator algebra \(\langle B,f^{1}\rangle\): If \(I=\{0\}\), then \(f=f^{1}\). In [10] we have investigated the class \(\mathsf{IMA}\), and its main properties are as follows:
**Theorem 4.1**.:
1. \(\mathsf{IMA}\) _is a locally finite positive universal class, axiomatized by_ (4.1) \[(\forall x)\,[f(x)=x\ or\ f(x)=1].\]
2. \(\langle B,f\rangle\) _is an ideal algebra if and only if_ _the set_ \(B^{c}\) _of closed elements has the form_ \(I\cup\{1\}\)_, where_ \(I\) _is an ideal of_ \(B\)_._
3. _An ideal algebra is a closure algebra of depth at most two._
4. _If_ \(\langle B,f\rangle\in\mathbf{Var}(\mathsf{IMA})\) _is subdirectly irreducible, then it is an ideal algebra. An ideal algebra_ \(\langle B,f\rangle\) _is subdirectly irreducible if and only if_ _its defining ideal_ \(I\) _has the form_ \(I=\{0\}\)_, or it is generated by an atom of_ \(B\)_._
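As an illustration of (4.1) and of Theorem 4.1(2), the following sketch (ours, not from the paper) builds the ideal algebra on the powerset of a four-element set determined by the principal ideal \(\downarrow\{0,1\}\) and verifies the stated properties mechanically; all names are assumptions made for this example.

```python
from itertools import chain, combinations

def powerset(W):
    return [frozenset(s) for s in chain.from_iterable(combinations(W, r) for r in range(len(W) + 1))]

W = frozenset({0, 1, 2, 3})
B = powerset(W)                      # the Boolean algebra of all subsets of W
a = frozenset({0, 1})
I = [X for X in B if X <= a]         # the principal ideal generated by a

def f_iu(X):
    """The iu-operator: identity on the ideal, 1 everywhere else."""
    return X if X in I else W

assert all(f_iu(X) == X or f_iu(X) == W for X in B)                      # axiom (4.1)
assert all(X <= f_iu(X) for X in B)                                      # Cl_1
assert all(f_iu(f_iu(X)) == f_iu(X) for X in B)                          # Cl_2
assert f_iu(frozenset()) == frozenset()                                  # normality
assert all(f_iu(X | Y) == f_iu(X) | f_iu(Y) for X in B for Y in B)       # additivity
assert {X for X in B if f_iu(X) == X} == set(I) | {W}                    # closed elements: I together with 1
```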
Our initial concern was algebraic, and it was something of a surprise to us that ideal algebras constitute the algebraic semantics of Sobocinski's logic **S4.4**[31], also known as **S4.3DumB2**.
\begin{table}
\begin{tabular}{l l l} Name & Axiom & Frame condition \\ \hline \(\mathsf{D}\) & \(\Box p\to\Diamond p\) & \(R\) is serial, i.e. \(\operatorname{dom}(R)=W\). \\ \(\mathsf{T}\) & \(p\to\Diamond p\) & \(R\) is reflexive. \\ \(\mathsf{4}\) & \(\Diamond\Diamond p\to\Diamond p\) & \(R\) is transitive. \\ \(\mathsf{B}\) & \(p\to\Box\Diamond p\) & \(R\) is symmetric. \\ \(\mathsf{B}_{2}\) & \(\Diamond(\Box q\wedge\Diamond p\wedge\neg p)\to q\) & \((x\ R\ y\text{ and }y\ R\ z)\Rightarrow(y\ R\ x\text{ or }z\ R\ y)\). \\ \(\mathsf{Dum}\) & \(\Box(\Box(p\to\Box p)\to p)\wedge\Diamond\Box p\to p\) & \(R\) is a quasiorder in which all but the final cluster are simple. \\ \(\mathsf{Grz}\) & \(\Box(\Diamond(p\wedge\Diamond\neg p)\lor p)\to p\) & There is no infinite strictly ascending \(R\)-chain. \\ \(\mathsf{M}\) or .1 & \(\Box\Diamond p\to\Diamond\Box p\) & _None; the logic **KM** is not Kripke complete._ \\ \(\mathsf{G}\) or .2 & \(\Diamond\Box p\to\Box\Diamond p\) & \(R\) is convergent. \\ \(\mathsf{H}\) or .3 & \(\Box(\Box p\to q)\vee\Box(\Box q\to p)\) & \((x\ R\ y\text{ and }x\ R\ z)\Rightarrow(y\ R\ z\text{ or }z\ R\ y)\). \\ \(\mathsf{R}\) & \(p\wedge\Diamond\Box p\to\Box p\) & \((x\ R\ y\text{ and }x\ R\ z\text{ and }x\neq z)\Rightarrow y\ R\ z\). \\ \end{tabular}
\end{table}
Table 1. Modal logic axioms
**Theorem 4.2**.:
1. \(\mathbf{Var}(\mathsf{IMA})\) _is generated by the complex algebras of finite frames_ \(\langle W,R\rangle\) _where_ \(R\) _is a quasiorder with two levels, such that the lower level consists of simple clusters, and the upper level consists of one cluster, see Figure_ 1_._
2. \(\mathbf{Var}(\mathsf{IMA})=\mathbf{Var}(\mathbf{S4.3DumB_{2}})\)_._
The canonical relation of an ideal algebra depicted in Figure 1 is called an _iu-relation_, and its corresponding operator an _iu-operator_, denoted by \(f^{\mathrm{iu}}\). This notation indicates that the restriction of the canonical relation \(R_{f}\) to the lower level is the identity, and the restriction to the upper level is the universal relation. This observation motivated us to consider the corresponding situations of ui-, uu-, and ii-relations and operators.
## 5. Filter algebras
Ideal algebras are characterized by the property that the set \(B^{c}\) of closed elements has the form \(B^{c}=I\cup\{1\}\) for some ideal of \(B\). Similarly, we call a closure algebra \(\langle B,f\rangle\) a _filter algebra_, if \(B\in\{\mathbf{1},\mathbf{2}\}\) or \(B^{c}\) has the form \(\{0\}\cup F\) for some filter \(F\) of \(B\) which is not equal to \(\{1\}\). In the sequel we shall use \(F\) for the determining filter. \(\langle B,f\rangle\) is called _proper_, if \(F\neq B\). We denote the class of all filter algebras by \(\mathsf{FMA}\), and the class of proper filter algebras by \(\mathsf{FMA}_{\mathsf{p}}\). Note that apart from the trivial algebra, \(\mathbf{2}\) is the only discriminator algebra in \(\mathsf{FMA}\).
Even though, for any proper ideal \(I\), the set \(I\cup\{1\}\) can be the set of closed elements of a closure operator, the analogous statement fails for filters:
**Theorem 5.1**.: _Let \(\langle B,f\rangle\) be a filter algebra with respect to \(F\). Then, \(F\) is principal._
Proof.: If \(F=B\), then \(F=\uparrow 0\). Suppose that \(F\neq B\); then, \(\{0\}\cup F\) is a proper \(0\), \(1\)-sublattice of \(B\). We already know from Theorem 2.5 that a \(0,1\)-sublattice \(D\) of \(B\) is the set of closed elements if and only if \(\uparrow b\cap D\) has a smallest element for all \(b\in B\); in this case, \(f(b)=\min\ (\uparrow b\cap D)\). Choose some \(x\in F\setminus\{1\}\); then, \(-x\neq 0\) and \(f(-x)\in F\), since \(f(-x)\) is closed. It follows that \(x\cdot f(-x)\in F\).
Assume, for a contradiction, that \(\uparrow(x\cdot f(-x))\subsetneq F\). Then, there is some \(y\in F\) such that \(y\leq x\cdot f(-x)\) and \(y\neq x\cdot f(-x)\): take \(z\in F\setminus\uparrow(x\cdot f(-x))\) and set \(y\stackrel{{\mathrm{df}}}{{=}}z\cdot x\cdot f(-x)\). If \(-x+y=f(-x)\), then \(y=x\cdot(-x+y)=x\cdot f(-x)\), a contradiction. Hence, \(-x+y\) is strictly below \(f(-x)\),
Figure 1. The canonical relation of a finite ideal algebra of depth two
and now \(-x+y\in F\) shows that \(f(-x)\) is not the smallest element of \(F\) above \(-x\), a contradiction. Therefore, \(F=\uparrow(x\cdot f(-x))\), and \(F\) is principal.
**Theorem 5.2**.: _If \(\langle B,f\rangle\) is a filter algebra with determining filter \(\uparrow a\), then for all \(x\in B\),_
\[f(x)=\begin{cases}0,&\text{if }x=0,\\ a+x,&\text{otherwise}.\end{cases} \tag{5.1}\]
Proof.: Let \(x\neq 0\). Then, \(x\leq f(x)\), and \(a+x\) is the smallest element of \(\{0\}\cup\uparrow a\) above \(x\).
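Analogously to the sketch for ideal algebras, the following fragment (ours, with all names chosen for illustration only) implements (5.1) on the powerset of a four-element set and confirms that the resulting operator is a closure operator whose closed elements are \(\{0\}\cup\uparrow a\), as required by the definition of a filter algebra.

```python
from itertools import chain, combinations

def powerset(W):
    return [frozenset(s) for s in chain.from_iterable(combinations(W, r) for r in range(len(W) + 1))]

W = frozenset({0, 1, 2, 3})
B = powerset(W)
a = frozenset({0, 1})                # the determining filter is the principal filter generated by a
empty = frozenset()

def f_ui(X):
    """The ui-operator of (5.1): 0 at 0, and a + x elsewhere."""
    return empty if X == empty else a | X

assert all(X <= f_ui(X) and f_ui(f_ui(X)) == f_ui(X) for X in B)              # closure axioms Cl_1, Cl_2
assert all(f_ui(X | Y) == f_ui(X) | f_ui(Y) for X in B for Y in B)            # additivity (normality is built in)
assert {X for X in B if f_ui(X) == X} == {empty} | {X for X in B if a <= X}   # closed elements: {0} and the filter
```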
**Theorem 5.3**.: _A filter algebra \(\langle B,f\rangle\) is subdirectly irreducible if and only if \(|B|=2\) or it is proper._
Proof.: "\(\Rightarrow\)": Suppose that \(\langle B,f\rangle\) is subdirectly irreducible. If \(\langle B,f\rangle\) is not proper, then \(F=B\), and each element is closed. Thus, \(|B|=2\).
"\(\Leftarrow\)": If \(|B|=2\), then it is simple, hence, subdirectly irreducible. Otherwise, suppose that \(\langle B,f\rangle\in\mathsf{FMA}\) with respect to some \(a\neq 0\). Then, \(\downarrow a\) is a nontrivial closed ideal, since \(f(x)=a\) if and only if \(0\leq x\leq a\). Since every nontrivial closed ideal contains \(a\), \(\downarrow a\) is the smallest nontrivial closed ideal, hence, \(B\) is subdirectly irreducible.
**Theorem 5.4**.: \(\mathsf{FMA}\) _is a positive universal class._
Proof.: Suppose that \(\langle B,f\rangle\in\mathsf{FMA}\) with respect to \(a\). If \(a=0\), i.e. \(f=1^{\prime}\), then, clearly, each subalgebra and homomorphic image of \(\langle B,f\rangle\) is in \(\mathsf{FMA}\). Thus, suppose that \(a\neq 0\), and let \(\langle A,g\rangle\) be a subalgebra of \(\langle B,f\rangle\); if \(|A|=2\), then \(g\) is the identity and \(\langle A,g\rangle\in\mathsf{FMA}\), so suppose that \(|A|\geq 4\). Let \(F^{\prime}\stackrel{{\mathrm{df}}}{{=}}\uparrow a\cap A\); then, \(F^{\prime}\) is a filter of \(A\), and \(A^{\mathrm{c}}=B^{\mathrm{c}}\cap A=\{0\}\cup F^{\prime}\). Since \(|A|\geq 4\), there is some \(x\in A\) such that \(x\neq 0\) and \(-x\neq 0\). Now, \(g(x)\cdot g(-x)=f(x)\cdot f(-x)=(a+x)\cdot(a+-x)=a\in A\), and thus, \(a\in F^{\prime}\); moreover, \(a\neq 1\), since the determining filter \(\uparrow a\) of \(\langle B,f\rangle\) is not \(\{1\}\). Hence, \(F^{\prime}\neq\{1\}\), and \(\langle A,g\rangle\in\mathsf{FMA}\).
Next, suppose that \(I\) is a nontrivial congruence ideal of \(\langle B,f\rangle\); then, \(a\in I\). Let \(A\stackrel{{\mathrm{df}}}{{=}}B/I\) with \(\pi:B\twoheadrightarrow A\) the natural homomorphism; w.l.o.g. we suppose that \(A\neq 2\). Let \(g:A\to A\) be the induced mapping defined by \(g(\pi(x))=\pi(f(x))\). This is well defined, since \(I\) is a congruence ideal. We will show that \(g\) is the identity, which implies that \(\langle A,g\rangle\in\mathsf{FMA}\): If \(x=0\), then \(g(\pi(0))=\pi(f(0))=\pi(0)\). Otherwise, for any \(y\in B\) we have
\[y\in g(\pi(x)) \iff y\in\pi(f(x)), \text{since }\pi\text{ is a homomorphism},\] \[\iff y\in\pi(a+x), \text{since }x\neq 0,\] \[\iff(\exists z\in I)\,[y+z=x+a+z],\] \[\iff(\exists z^{\prime}\in I)\,[y+z^{\prime}=x+z^{\prime}], \text{set }z^{\prime}\stackrel{{\mathrm{df}}}{{=}}a+z\text{, which is in }I,\] \[\iff y\in\pi(x).\]
This completes the proof.
**Corollary 5.5**.:
1. _Every subdirectly irreducible algebra in_ \(\mathbf{Var}(\mathsf{FMA})\) _is a filter algebra._
2. \(\mathbf{Var}(\mathsf{FMA})\) _is generated by_ \(\mathsf{FMA}_{\mathsf{p}}\)_._
Proof.: 1. Suppose that \(\langle B,f\rangle\in\mathbf{Var}(\mathsf{FMA})\) is subdirectly irreducible. By Jonsson's Lemma 2.1, \(\langle B,f\rangle\in\mathbf{HSP}_{\mathbf{u}}(\mathsf{FMA})\). Theorem 5.4 shows that \(\mathsf{FMA}\) is closed under taking homomorphic images, subalgebras, and ultraproducts, hence, \(\langle B,f\rangle\in\mathsf{FMA}\).
2. \(\mathbf{Var}(\mathsf{FMA})\) is generated by its subdirectly irreducible algebras. From 1. above and Theorem 5.3 we obtain that \(\mathbf{Var}(\mathsf{FMA})\) is generated by \(\mathsf{FMA}_{\mathsf{p}}\cup\{\mathbf{2}\}\). Now observe that \(\mathbf{2}\) is a homomorphic image of, for example, the four element proper filter algebra (which is unique up to isomorphism).
Next we show that \(\mathbf{Var}(\mathsf{FMA})\) is locally finite; it will be more convenient to work with interior algebras as in [5]. Let \(\langle B,f\rangle\) be a filter algebra with \(f\) determined by \(\uparrow a\); then the set \(B^{\mathrm{o}}\) of open elements is the set \(\downarrow-a\cup\{1\}\): Clearly, \(1\) is open. Let \(x\neq 1\), i.e. \(-x\neq 0\). Then,
\[f^{\partial}(x)=x\Longleftrightarrow-f(-x)=x\Longleftrightarrow f(-x)=-x \Longleftrightarrow a+-x=-x\Longleftrightarrow a\leq-x\Longleftrightarrow x \leq-a.\]
Thus, \(\langle B,f\rangle\) is a filter algebra if and only if \(B^{\mathrm{o}}\) has the form \(\downarrow b\cup\{1\}\) for some \(b\in B\).
For each \(n\in\omega^{+}\), we define \(K_{n}\) to be (isomorphic to) the interior algebra generated by its atoms \(a_{1},\ldots,a_{n}\) with open elements \(b_{i}:i\leq n\), where \(b_{0}\stackrel{{\mathrm{df}}}{{=}}0\) and \(b_{i}\stackrel{{\mathrm{df}}}{{=}}a_{1}+\ldots+a_{i}\) for \(0\leq i\leq n\)[3, p. 32]; thus, \(B^{\mathrm{o}}\) is a chain of length \(n+1\). The induced interior operator is obtained by \(i(x)\stackrel{{\mathrm{df}}}{{=}}\max(\downarrow x\cap\{b_{i}:i \leq n\})\).
For the class \(1\) of interior algebras we define \((1:K_{n})\stackrel{{\mathrm{df}}}{{=}}\{A\in 1:K_{n}\not\in\mathbf{Sub}(A)\}\). We will use the characterization of locally finite interior algebras of [2, 4.3, p. 181]:
**Theorem 5.6**.: _Suppose that \(\mathsf{V}\) is a variety of interior algebras. Then, \(\mathsf{V}\) is locally finite if and only if \(\mathsf{V}\subseteq(1:K_{n})\) for some \(n\)._
**Theorem 5.7**.: \(\mathbf{Var}(\mathsf{FMA})\) _is locally finite._
Proof.: We will show that the class \(\mathsf{K}\) of algebras dual to those in \(\mathsf{FMA}_{\mathsf{p}}\) is contained in \((1:K_{3})\). Suppose that \(\langle B,i\rangle\) is an interior algebra with open elements \(B^{\mathrm{o}}=\downarrow a\cup\{1\}\), \(a\neq 1\), i.e. \(\langle B,i\rangle\) is the dual of a proper filter algebra. Assume that \(K_{3}\) is embeddable into \(B\). We may suppose that \(K_{3}\) is a subalgebra of \(B\) with atoms \(b_{1},b_{2},b_{3}\) and open elements \(K_{3}^{\mathrm{o}}\stackrel{{\mathrm{df}}}{{=}}\{0,b_{1},b_{1}+b_{2},1\}\); then, \(K_{3}^{\mathrm{o}}\setminus\{1\}\subseteq\downarrow a\). Since \(\downarrow a\) is an ideal and \(b_{1}+b_{2}\in K_{3}^{\mathrm{o}}\setminus\{1\}\subseteq\downarrow a\), we have \(b_{2}\in\downarrow a\cap K_{3}\subseteq B^{\mathrm{o}}\cap K_{3}\), hence, \(b_{2}\in K_{3}^{\mathrm{o}}\), a contradiction. Thus, each algebra dual to a proper filter algebra is in \((1:K_{3})\). Since \((1:K_{3})\) is a variety, \(\mathbf{Var}(\mathsf{K})\subseteq(1:K_{3})\). By Theorem 5.6, we obtain that \(\mathbf{Var}(\mathsf{K})\) is locally finite, thus, so is \(\mathbf{Var}(\mathsf{FMA})\).
The following is now immediate from Theorem 5.3 and the previous theorem:
**Corollary 5.8**.: \(\mathbf{Var}(\mathsf{FMA})\) _is generated by the finite members of \(\mathsf{FMA}_{\mathsf{p}}\)._
Proof.: Each variety is generated by its finitely generated subdirectly irreducible members. By the previous theorem, these are finite, and by Corollary 5.5 (2) we may choose these to be in \(\mathsf{FMA}_{\mathsf{p}}\).
Next, we shall look at the canonical frame of a proper finite filter algebra \(\langle B,f\rangle\). Suppose that \(f\) is determined by the filter \(\uparrow a\), \(a\notin\{0,1\}\), and that \(B\) is generated as a Boolean algebra by its atoms \(a_{0},\ldots,a_{n}\). Let w.l.o.g. \(a=a_{0}+\ldots+a_{m}\) for some \(m\leq n\). Let \(F,G\in\mathrm{Ult}\)\((B)\), and recall that the canonical relation \(R\stackrel{{\mathrm{df}}}{{=}}R_{f}\) is defined by \(F\ R\ G\) if and only if \(f[G]\subseteq F\), i.e. if and only if \(\uparrow a\cap G\subseteq F\). There are two cases:
1. \(a\in F\): Then \(\uparrow a\subseteq F\), and thus, \(F\ R\ G\) for all \(G\in\operatorname{Ult}(B)\). Observe that \(a\in F\) if and only if \(\ F=\uparrow a_{i}\) for some \(i\leq m\).
2. \(a\notin F\): Then, \(F=\uparrow a_{i}\) for some \(m<i\leq n\). If \(G=\uparrow a_{j}\), then \(F\ R\ G\) entails \(a+a_{j}\in F\), and subsequently, \(a_{j}\in F\), since \(a\notin F\) and \(F\) is an ultrafilter. It follows that \(j=i\).
The graph of \(R\) is shown in Figure 2. It is easy to see that \((\operatorname{Ult}(B),R)\) satisfies the conditions
\[(F\ R\ G\ \text{and}\ G\ R\ H)\Rightarrow(G\ R\ F\ \text{or}\ H\ R\ G), \tag{5.2}\] \[(\forall F)(\exists G)\,[F\ R\ G\ \text{and}\ (\forall H)(G\ R\ H\Rightarrow G=H)]. \tag{5.3}\]
Quasiorders that satisfy (5.2) and (5.3) have depth two, and consist of a single cluster on level one and simple clusters on level two. If \(U\) is the lower level and \(V\) is the upper level, then \(R=(U\times U)\cup(U\times V)\cup 1^{\prime}\). We see that \(R\ \upharpoonright U^{2}\) is the universal relation on \(U\) and \(R\ \upharpoonright V^{2}\) is the identity on \(V\). Based on this observation we call a relation of this form a _ui-relation_. If \(\langle B,f\rangle\) is a filter algebra we call \(f\) a _ui-operator_ and denote it by \(f^{\operatorname{ui}}\). Note that a \(\operatorname{ui}\)-relation is the converse of an \(\operatorname{iu}\)-relation, and thus, their operators are conjugate functions in the sense of [22].
According to [7, p. 45f] the reflexive and transitive frames that satisfy (5.2) and (5.3) are those that validate the logic \(\mathbf{S4MB_{2}}\). Conversely, it is easy to show that the complex algebra of a \(\operatorname{ui}\)-relation is a filter algebra. Thus, applying Corollary 5.8, we obtain
**Theorem 5.9**.: \(\mathbf{Var}(\mathbf{FMA})=\mathbf{Var}(\mathbf{S4MB_{2}})\)_._
Owing to the relationship between the canonical relations we may say that, in some sense, the logic \(\mathbf{S4.3DumB_{2}}\) of ideal algebras is the converse of the logic \(\mathbf{S4MB_{2}}\) of filter algebras.
## 6. \(\operatorname{uu}\)- and \(\operatorname{ii}\)-algebras and their logics
In this section we consider the two remaining types of closure algebras of depth at most two which may be related to ideals. Suppose that \(a\neq 0\), and that \(f\) is a closure operator on \(B\) such that \(B^{\mathrm{c}}=\{0,a,1\}\). Then,
\[f(x)\stackrel{{\mathrm{df}}}{{=}}\begin{cases}0,&\text{if }x=0,\\ a,&\text{if }0<x\leq a,\\ 1,&\text{otherwise.}\end{cases}\]
Figure 2. The canonical relation of a finite filter algebra of depth two
Clearly, \(f\) is a closure operator. Considering the ideal \(I\stackrel{{\rm df}}{{=}}\downarrow a\), we see that \(f\) maps the nonzero elements of \(I\) to its maximum, and therefore we call \(\langle B,f\rangle\) a _MaxId algebra_. Technically, we could also allow \(a=0\), i.e. \(I=\{0\}\), for a MaxId algebra. In this case \(f\) is the unary discriminator, and \(f\) can also be described by \(a=1\). So, we shall always assume \(a>0\). The class of MaxId algebras is denoted by \(\mathsf{MMA}\).
**Theorem 6.1**.: MMA _is a positive universal class._
Proof.: We will show that a closure algebra \(\langle B,f\rangle\) with \(|B|\geq 2\) is a MaxId-algebra if and only if it satisfies
\[(\forall x,y)[(0<f(x),f(y)<1)\Rightarrow f(x)=f(y)], \tag{6.1}\] \[(\forall x,y)[(y\neq 0\text{ and }x\nleq f(y))\Rightarrow f(x)=1]. \tag{6.2}\]
Clearly, (6.1) and (6.2) are equivalent to positive universal sentences in disjunctive form.
"\(\Rightarrow\)": If \(a=1\), then (6.1) and (6.2) are vacuously satisfied. Suppose that \(a\neq 1\); then, \(f\) has exactly the values \(\{0,a,1\}\). If \(0\leq f(x),f(y)\leq 1\), we have \(f(x)=f(y)=a\), and (6.1) is satisfied.
Suppose that \(y\neq 0\) and \(x\nleq f(y)\); then, \(x\neq 0\) and \(f(y)\neq 1\). Since \(0<y\leq f(y)\) and \(f(y)\neq 1\), we have \(f(y)=a\), and \(x\nleq f(y)\) implies \(f(x)=1\).
"\(\Leftarrow\)": Suppose that \(f\) is a closure operator that fulfils (6.1) and (6.2). If \(B^{\rm c}=\{0,1\}\), then the fact that \(f\) is expanding implies that \(f\) is the unary discriminator, and we set \(a\stackrel{{\rm df}}{{=}}1\). Otherwise, there is some \(b\in B\) such that \(0\leq f(b)\leq 1\), and we set \(a\stackrel{{\rm df}}{{=}}f(b)\). This is well defined by (6.1). Let \(x\neq 0\). There are two cases:
1. \(x\leq f(b)\): Then \(0<f(x)\leq f(f(b))=f(b)<1\), and thus, \(f(x)=f(b)\) by (6.1).
2. \(x\nleq f(b)\): Then, \(f(b)\neq 0\) implies that \(b\neq 0\), and thus, \(f(x)=1\) by (6.2).
This completes the proof.
Theorem 6.1 implies that \(\textsf{MMA}=\mathbf{HSP_{u}}(\textsf{MMA})\), and applying Jonsson's Lemma we obtain
**Corollary 6.2**.: _If \(\langle B,f\rangle\in\mathbf{Var}(\textsf{MMA})\) is subdirectly irreducible, then \(\langle B,f\rangle\in\textsf{MMA}\)._
**Theorem 6.3**.: _Each MaxId algebra \(\langle B,f\rangle\) with \(|B|\geq 2\) is subdirectly irreducible._
Proof.: If \(a\) is the smallest element of \(B^{\rm c}\setminus\{0\}\), then \(\downarrow a\) is the smallest closed nontrivial ideal of \(B\), and thus, \(B\) is subdirectly irreducible.
**Theorem 6.4**.: \(\mathbf{Var}(\textsf{MMA})\) _is locally finite._
Proof.: For any algebra \(B\) in \(\mathsf{MMA}\), \(B^{\rm o}\) has at most three elements, and thus, \(K_{3}\) cannot be embedded in it. Now use Theorem 5.6.
Next, we consider the canonical frames of MaxId algebras.
**Theorem 6.5**.:
1. _The canonical frame of a MaxId algebra is a chain of length at most two of two clusters._
2. _The complex algebra of a chain of at most two clusters is a MaxId algebra._
Proof.: 1. Let \(\langle B,f\rangle\) be a MaxId algebra determined by \(a\), and set \(U\stackrel{{\mathrm{df}}}{{=}}\{F\in\mathrm{Ult}(B):a\in F\}\), and \(V\stackrel{{\mathrm{df}}}{{=}}\{F\in\mathrm{Ult}(B):a\notin F\}\). Let \(F,G\in\mathrm{Ult}(B)\), and recall that \(F\ R_{f}\ G\) if and only if \(f[G]\subseteq F\). First, suppose that \(f\) is the unary discriminator, i.e. that \(a=1\). Then, \(V=\emptyset\) and \(R_{f}\) is the universal relation on \(\mathrm{Ult}(B)\), which shows that the canonical frame consists of one cluster. Otherwise, suppose that \(0<a<1\); then neither \(U\) nor \(V\) are empty. Suppose that \(F\in U\). Then, \(f[G]\subseteq\{a,1\}\subseteq F\), and thus, \(F\ R_{f}\ G\); this implies \(U\times\mathrm{Ult}(B)\subseteq R_{f}\). If \(F\in V\), i.e. \(a\notin F\), then \(f[G]\subseteq F\) if and only if \(a\notin G\), which shows that \(V\times V\subseteq R_{f}\), and \((V\times U)\cap R_{f}=\emptyset\). Altogether, \(R_{f}=(U\times U)\cup(U\times V)\cup(V\times V)\), and so \(R_{f}\) has the desired form.
2. Suppose that \(\langle W,R\rangle\) is a chain of at most two clusters. If \(R\) has just one cluster, then \(R\) is the universal relation on \(W\), and \(\langle R\rangle\) is the unary discriminator; hence, \(\langle 2^{W},\ \langle R\rangle\rangle\) is a MaxId algebra. Next, suppose that \(R\) has two levels, \(U,V\), and \(R=(U\times U)\ \cup(U\times V)\ \cup(V\times V)\). We will show that \(\langle R\rangle(X)\in\{\emptyset,U,W\}\) for all \(X\subseteq W\). Since \(\langle R\rangle\) is completely additive, it is sufficient to consider singletons. Let \(x\in U\); then, \(\langle R\rangle(\{x\})=R^{\sim}(x)=U\). If \(x\in V\), then \(R^{\sim}(x)=W\). Hence \(\langle R\rangle\) is a MaxId operator.
It follows that \(\mathbf{Var}(\mathsf{MMA})\) is generated by the complex algebras of finite frames of depth at most two, each level of which contains one cluster. It is well known from e.g. [16, Theorem 1] or [7, p. 52] that frames of this form validate the logic \(\mathbf{S4.3B_{2}}\), so that we obtain
**Theorem 6.6**.: \(\mathbf{Var}(\mathsf{MMA})=\mathbf{Var}(\mathbf{S4.3B_{2}})\)_._
If \(\langle B,f\rangle\) is a MaxId-algebra we shall indicate this by writing \(f^{\mathrm{uu}}\) for \(f\), where the superscript \(\mathrm{uu}\) indicates that the restriction of the canonical relation \(R_{f}\) to a level is the universal relation. We note in passing that for \(a\in B\), \(a\notin\{0,1\}\), and the closure operators \(f^{\mathrm{iu}},f^{\mathrm{ui}}\) and \(f^{\mathrm{uu}}\) associated with the principal ideals and filters given by \(a\), we have \(f^{\mathrm{iu}}(x)+f^{\mathrm{ui}}(x)=f^{\mathrm{uu}}(x)\), and \(R_{f^{\mathrm{iu}}}\cup R_{f^{\mathrm{ui}}}=R_{f^{\mathrm{uu}}}\).
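The identity \(f^{\mathrm{iu}}(x)+f^{\mathrm{ui}}(x)=f^{\mathrm{uu}}(x)\) can be checked mechanically on a small example. The sketch below is ours, not part of the paper; it represents \(B\) as the powerset of a four-element set and takes \(a=\{0,1\}\), with join realized as set union.

```python
from itertools import chain, combinations

def powerset(W):
    return [frozenset(s) for s in chain.from_iterable(combinations(W, r) for r in range(len(W) + 1))]

W = frozenset({0, 1, 2, 3})
B = powerset(W)
a = frozenset({0, 1})
empty = frozenset()

f_iu = lambda X: X if X <= a else W                                 # ideal algebra operator for the ideal below a
f_ui = lambda X: empty if X == empty else a | X                     # filter algebra operator for the filter above a
f_uu = lambda X: empty if X == empty else (a if X <= a else W)      # MaxId operator with closed elements {0, a, 1}

# The remark above: the join of the iu- and the ui-operator is the uu-operator.
assert all(f_iu(X) | f_ui(X) == f_uu(X) for X in B)
```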
Thus far we have considered algebras whose associated frames have depth at most two and in which at least one level contains a proper cluster. We will now consider the remaining case where each level contains only simple clusters. In accordance with our previous procedure, we shall approach this from an algebraic viewpoint first. Let \(b\in B\) and let \(f\) be defined as follows:
\[f(x)\stackrel{{\mathrm{df}}}{{=}}\begin{cases}x,&\text{if }x\leq b,\\ b+x,&\text{if }x\nleq b.\end{cases} \tag{6.3}\]
Clearly, \(f\) is a closure operator, and \(B^{\mathrm{c}}=\downarrow b\ \cup\uparrow b\). Note that \(f\) is the identity if and only if \(\ b\in\{0,1\}\). The class of algebras \(\langle B,f\rangle\) that satisfy (6.3) is denoted by \(\mathsf{GMA}\). Clearly, \(\mathsf{GMA}\) is first order axiomatizable.
**Theorem 6.7**.: \(\mathbf{Var}(\mathsf{GMA})\) _is locally finite._
Proof.: This is analogous to the proof of Theorem 5.7, using \(K_{4}\) instead of \(K_{3}\).
Unlike the classes we have considered previously, \(\mathsf{GMA}\) is not closed under subalgebras, i.e. it is not a universal class:
**Example 6.8**.: Suppose that \(B\) is an atomless Boolean algebra, \(b\in B\setminus\{0,1\}\), and let \(f\) be determined by \(b\). Choose some non-principal ideal \(I\subsetneq\downarrow b\), and set \(A\stackrel{{\mathrm{df}}}{{=}}I\cup-I\); then, \(A\) is a Boolean subalgebra of \(B\) [23, Lemma 5.33], and \(I\) is a prime ideal of \(A\). Let \(g\stackrel{{\mathrm{df}}}{{=}}f\upharpoonright A\); we show that \(g[A]\subseteq A\). If \(x\in I\), then \(x\leq b\), and therefore \(g(x)=f(x)=x\in I\). If \(x\in-I\), then \(-b\leq x\), and \(g(x)=f(x)=b+x=1\). Thus, \(\langle A,g\rangle\) is a subalgebra of \(\langle B,f\rangle\) but it is not in \(\mathsf{GMA}\), since \(I\) is non-principal.
Note that \(\langle A,g\rangle\) above is an ideal algebra. The following observation shows that the form of \(A\) in the previous example is no accident:
**Theorem 6.9**.: _Let \(\langle B,f\rangle\in\mathsf{GMA}\) with respect to \(b\) and let \(\langle A,g\rangle\) be a subalgebra of \(\langle B,f\rangle\). Then, \(b\in A\) and \(\langle A,g\rangle\in\mathsf{GMA}\), or there is a prime ideal \(J\) of \(A\) such that \(J\subseteq\downarrow b\), \(A=J\cup-J\) and \(\langle A,g\rangle\in\mathsf{IMA}\)._
Proof.: If \(b\in A\), then clearly \(\langle A,g\rangle\in\mathsf{GMA}\). Thus, suppose that \(b\not\in A\). Assume that there is some \(x\in A\) such that \(x\nleq b\) and \(-x\nleq b\). Then,
\[g(x)\cdot g(-x)=f(x)\cdot f(-x)=(b+x)\cdot(b+-x)=b,\]
contradicting \(b\not\in A\). Thus, \(x\leq b\) or \(-x\leq b\) for all \(x\in A\). Set \(J\stackrel{{\mathrm{df}}}{{=}}\{x\in A:x\leq b\}\); then, \(J\) is a closed prime ideal of \(A\) since \(g\upharpoonright J\) is the identity, and \(A=J\cup-J\). Finally, if \(x\in A\setminus J\), then \(x\nleq b\), and therefore, \(-x\leq b\), and \(g(x)=f(x)=x+b\geq x+(-x)=1\). Hence, \(\langle A,g\rangle\in\mathsf{IMA}\).
**Corollary 6.10**.: _Let \(\langle B,f\rangle\in\mathsf{GMA}\) and let \(\langle A,g\rangle\) be a finite subalgebra of \(\langle B,f\rangle\). Then, \(\langle A,g\rangle\in\mathsf{GMA}\)._
Proof.: If \(b\not\in A\), choose \(a\stackrel{{\mathrm{df}}}{{=}}\max\ J\) in the proof above, which exists and belongs to \(J\) since \(A\) is finite; then, \(\langle A,g\rangle\in\mathsf{GMA}\) with respect to \(a\). Indeed, if \(x\leq a\), then \(x\leq b\), since \(a\in J\), and therefore, \(g(x)=f(x)=x\). Otherwise, \(x\nleq a\), hence \(x\not\in J\), which implies \(-x\in J\), since \(J\) is prime, and therefore, \(-x\leq a\); hence \(x+a=1\). Furthermore, \(x\not\in J\) implies \(x\nleq b\), and therefore, \(g(x)=f(x)=x+b=1=x+a\).
**Theorem 6.11**.: \(\mathsf{GMA}\) _is a positive class._
Proof.: We show that \(\mathsf{GMA}\) is closed under homomorphic images. Let \(\langle B,f\rangle\in\mathsf{GMA}\) with respect to \(b\), \(I\) be a congruence ideal of \(B\), and suppose w.l.o.g. that \(I\) is nontrivial and proper. Let \(A\stackrel{{\mathrm{df}}}{{=}}B/I,\pi:B\twoheadrightarrow A\) be the canonical homomorphism, and \(g(\pi(x))\stackrel{{\mathrm{df}}}{{=}}\pi(f(x))\). We suppose w.l.o.g. that \(|A|\geq 4\). There are two cases:
1. \(b\in I\): Our aim is to show \(g\) is the identity, i.e. that \(g(\pi(x))=\pi(x)\) for all \(x\in B\). Let \(x\in B\). If \(x\in I\), then \(\pi(x)=0\) and there is nothing more to show. Suppose that \(x\not\in I\); then, in particular, \(x\nleq b\), since \(b\in I\) and \(I\) is an ideal. Now, \[y\in g(\pi(x)) \Longleftrightarrow y\in\pi(f(x)),\] \[\Longleftrightarrow(\exists z\in I)[y+z=f(x)+z],\] \[\Longleftrightarrow(\exists z\in I)[y+z=b+x+z], \text{since }x\nleq b,\] \[\Longleftrightarrow(\exists z^{\prime}\in I)[y+z^{\prime}=x+z^{ \prime}], \text{set }z^{\prime}\stackrel{{\mathrm{df}}}{{=}}z+b\in I,\] \[\Longleftrightarrow y\in\pi(x).\]
2. \(b\notin I\): Then, \(I\subsetneq\downarrow b\) since \(I\) is closed, and \(B^{c}=\downarrow b\cup\uparrow b\); furthermore, \(\pi(b)\neq 0\). Suppose that \(\pi(x)\leq\pi(b)\); then, there is some \(y\in I\) such that \(x+y\leq b+y\). Since \(I\subseteq\downarrow b\), it follows that \(y\leq b\), and therefore, \(x\leq b\) which implies \(f(x)=x\); hence, \(g(\pi(x))=\pi(f(x))=\pi(x)\). If \(\pi(x)\not\leq\pi(b)\), then, in particular, \(x\not\leq b\) which implies \(f(x)=b+x\). It follows that \(g(\pi(x))=\pi(f(x))=\pi(b)+\pi(x)\). Thus, \(g\) is determined by \(\pi(b)\) as in (6.3), and \(\langle A,g\rangle\in\mathsf{GMA}\).
This completes the proof.
To describe the subdirectly irreducibles in \(\mathbf{Var}(\mathsf{GMA})\), we first describe those in \(\mathsf{GMA}\).
**Lemma 6.12**.: _Suppose that \(\langle B,f\rangle\in\mathsf{GMA}\) with respect to \(b\). Then, \(\langle B,f\rangle\) is subdirectly irreducible if and only if \(\ |B|=2\) or \(b\) is an atom of \(B\)._
Proof.: "\(\Rightarrow\)": Suppose that \(|B|\geq 4\). If \(b\in\{0,1\}\), then \(f=1^{\prime}\), and \(\langle B,f\rangle\) is not subdirectly irreducible. Hence, \(0\leq b\leq 1\), and, \(\downarrow b\) is a nontrivial closed ideal. If \(b\) is not an atom, then there are \(0\leq x,y\leq b\) such that \(x\cdot y=0\). Then, \(\downarrow x\) and \(\downarrow y\) are nontrivial closed ideals of \(B\) strictly contained in \(\downarrow b\). On the other hand, \(\downarrow x\cap\downarrow y=\{0\}\), and therefore, \(B\) is not subdirectly irreducible.
"\(\Leftarrow\)": If \(|B|=2\), then \(\langle B,f\rangle\) is simple, hence, subdirectly irreducible. If \(b\) is an atom of \(B\), then \(\downarrow b\) is the smallest nontrivial closed ideal of \(B\), since \(f(x)=x\) if and only if \(\ x\leq b\) or \(b\leq x\).
**Theorem 6.13**.: _If \(\langle B,f\rangle\in\mathbf{Var}(\mathsf{GMA})\) is subdirectly irreducible, then \(\langle B,f\rangle\in\mathsf{GMA}\)._
Proof.: Suppose that \(\langle B,f\rangle\in\mathbf{Si}\,\mathbf{Var}(\mathsf{GMA})\). By Jonsson's Lemma 2.1, \(\langle B,f\rangle\in\mathbf{HS}(\mathsf{GMA})\). It is well known that \(\mathbf{HS}(\mathbf{K})=\mathbf{SH}(\mathbf{K})\) for any class \(\mathbf{K}\) of modal algebras [3], and therefore, \(\mathbf{HS}(\mathsf{GMA})=\mathbf{S}(\mathsf{GMA})\) by Theorem 6.11. Assume that \(\langle B,f\rangle\notin\mathsf{GMA}\). Then, \(|B|\geq 4\), and there is some \(\langle B^{\prime},f^{\prime}\rangle\in\mathsf{GMA}\) such that \(\langle B,f\rangle\) is a subalgebra of \(\langle B^{\prime},f^{\prime}\rangle\). By Theorem 6.9, there is some closed prime ideal \(I\) of \(B\) such that \(B=I\cup-I\). Since \(\langle B,f\rangle\) is subdirectly irreducible, \(I\) must be generated by an atom by Theorem 4.1, say, by \(b\); the case \(I=\{0\}\) is impossible, since \(B=I\cup-I\) and \(|B|\geq 4\). Then, \(B\) is the four element Boolean algebra with \(B^{c}=\{0,b,1\}\) since \(B=I\cup-I\), and thus, \(\langle B,f\rangle\in\mathsf{GMA}\). This contradicts our assumption.
Therefore, since \(\mathbf{Var}(\mathsf{GMA})\) is locally finite, it is generated by the finite subdirectly irreducible algebras in \(\mathsf{GMA}\).
Next we turn to the canonical frames of algebras in \(\mathsf{GMA}\).
**Theorem 6.14**.:
1. _Suppose that_ \(\langle B,f\rangle\in\mathsf{GMA}\) _with respect to_ \(a\)_, and that_ \(a\notin\{0,1\}\)_. Then_ \(R_{f}\) _has two levels_ \(U,V\)_, and_ \(R_{f}=(U\times V)\cup 1^{\prime}\)_._
2. _If_ \(\langle W,R\rangle\) _is a frame where_ \(R\) _has two levels_ \(U,V\) _and_ \(R=(U\times V)\cup 1^{\prime}\)_, then_ \(\langle R\rangle\) _satisfies (_6.3_)._
Proof.: 1. Let \(U\stackrel{{\mathrm{df}}}{{=}}\{F\in\mathrm{Ult}(B):a\in F\}\), and \(V\stackrel{{\mathrm{df}}}{{=}}\{F\in\mathrm{Ult}(B):a\notin F\}\), and \(F,G\in\mathrm{Ult}(B)\).
"\(\subseteq\)": Let \(F\ R_{f}\ G\) and \(F\neq G\); then, there is some \(x\in G\), \(x\notin F\). Assume that \(a\in G\). Since \(G\) is a filter and \(x\in G\), we have \(x\cdot a\in G\). By (6.3), \(f(x\cdot a)=x\cdot a\), and \(F\ R_{f}\ G\) implies that \(x\cdot a\in F\). Since \(F\) is a filter, we obtain \(x\in F\), a contradiction. It follows that \(G\in V\). Next, assume that \(a\notin F\); then, \(x+a\not\in F\), since \(x\notin F\), and \(F\) is an ultrafilter. However, \(f[G]\subseteq F\) implies that \(f(x+a)=x+a\in F\), a contradiction. It follows that \(F\in U\), and altogether we have proved \(R_{f}\subseteq(U\times V)\cup 1^{\prime}\).
"\(\supseteq\)": We first show that \(F\)\(R_{f}\)\(F\), i.e. that \(f[F]\subseteq F\). Let \(x\in F\). Since \(f\) is a closure operator, we have \(x\leq f(x)\), which implies \(f(x)\in F\). Next, let \(F\in U\) and \(G\in V\). Suppose that \(x\in G\). Since \(-a\in G\), we have \(x\nleq a\), and thus, \(f(x)=a+x\). Now, \(a\in F\) implies \(a+x\in F\), and therefore, \(F\)\(R_{f}\)\(G\). It follows that \((U\times V)\cup 1^{\prime}\subseteq R_{f}\).
2. Suppose that \(R\) has two levels \(U,V\) and \(R=(U\times V)\cup 1^{\prime}\). If \(x\in U\), then \(R^{\sim}(x)=\{x\}\), and if \(x\in V\), then \(R^{\sim}(x)=U\cup\{x\}\). Altogether, \(\langle R\rangle(Y)=Y\), if \(Y\subseteq U\), and \(\langle R\rangle(Y)=U\cup Y\) otherwise.
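Theorem 6.14(1) can again be illustrated computationally. In the sketch below (ours, with the same powerset representation as in the earlier sketches, and all names chosen for illustration), the canonical relation of a finite algebra satisfying (6.3) is computed on atoms via (2.1) and turns out to be an ii-relation.

```python
from itertools import chain, combinations

W = frozenset({0, 1, 2, 3})
B = [frozenset(s) for s in chain.from_iterable(combinations(W, r) for r in range(len(W) + 1))]
a = frozenset({0, 1})

f_ii = lambda X: X if X <= a else a | X        # the operator of (6.3), determined by a

# Closure, additivity, and the closed elements: the ideal below a together with the filter above a.
assert all(X <= f_ii(X) and f_ii(f_ii(X)) == f_ii(X) for X in B)
assert all(f_ii(X | Y) == f_ii(X) | f_ii(Y) for X in B for Y in B)
assert {X for X in B if f_ii(X) == X} == {X for X in B if X <= a or a <= X}

# Canonical relation on atoms, cf. (2.1): p sees q iff {p} <= f_ii({q}).
R = {(p, q) for p in W for q in W if p in f_ii(frozenset([q]))}
U = {p for p in W if p in a}                   # lower level: the atoms below a
V = W - U                                      # upper level
assert R == {(u, v) for u in U for v in V} | {(p, p) for p in W}   # R = (U x V) together with the identity
```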
We call a relation with two distinct levels \(U,V\) for which \(R=(U\times V)\cup 1^{\prime}\) an _ii-relation_, indicated by \(R^{\rm ii}\), and an operator which satisfies (6.3) an _ii-operator_. Example 6.8 shows that the canonical extension - i.e. the complex algebra of the ultrafilter extension - of an ideal algebra \(\mathfrak{A}\) can be in \(\mathsf{GMA}\), even though \(\mathfrak{A}\) is not.
It follows from the previous results that \(\mathbf{Var}(\mathsf{GMA})\) is generated by the complex algebras of ii-relations where the lower level has exactly one element. It is well known that the logic generated by these frames is the pretabular logic \(\mathbf{S4GrzB_{2}}\) of [7, p. 53], see also [15] and [25]. It is known that \(\mathbf{S4GrzB_{2}}=\mathbf{S4MDumB_{2}}\), see e.g. [29, p. 107]; in the sequel we shall use \(\mathbf{S4MDumB_{2}}\) instead of \(\mathbf{S4GrzB_{2}}\), since it demonstrates better the relationship among the logics we have considered. From the previous result we now obtain
**Theorem 6.15**.: \(\mathbf{Var}(\mathsf{GMA})=\mathbf{Var}(\mathbf{S4MDumB_{2}})\)_._
Figures 3 and 4 summarize the corresponding frame relations of depth two:
* Simple clusters on both levels. If the lower level contains just one simple cluster, the complex algebra is subdirectly irreducible. The corresponding logic is the pretabular logic \(\mathbf{S4MDumB_{2}}\), and \(\mathbf{Var}(\mathsf{GMA})\) is the class of its algebraic models.
* Simple clusters on the lower level, one cluster on the upper level. If the lower level contains just one simple cluster, the complex algebra is subdirectly irreducible. The corresponding logic is \(\mathbf{S4.3DumB_{2}}\), and \(\mathbf{Var}(\mathsf{IMA})\) is the class of its algebraic models.
* One cluster on the lower level, simple clusters on the upper level. Since such a frame is rooted, its complex algebra is subdirectly irreducible. The corresponding logic is \(\mathbf{S4MB_{2}}\), and \(\mathbf{Var}(\mathsf{FMA})\) is the class of its algebraic models.
* One cluster on the lower level, one cluster on the upper level. Since such a frame is rooted, its complex algebra is subdirectly irreducible. The corresponding logic is \(\mathbf{S4.3B_{2}}\), and \(\mathbf{Var}(\mathsf{MMA})\) is the class of its algebraic models.
## 7. Meet and join of classes of algebras of depth two
The four varieties we have considered are locally finite with the subdirectly irreducibles contained in the generating class, and they all contain the two element closure algebra. Table 2 lists the algebra classes, the corresponding logics, and the closed elements of the subdirectly irreducibles when \(|B|\geq 4\). We see that \(\mathbf{Si}(\mathsf{GMA})\subseteq\mathbf{Si}(\mathsf{FMA})\) and \(\mathbf{Si}(\mathsf{IMA})\subseteq\mathbf{Si}(\mathsf{MMA})\), which implies \(\mathbf{Var}(\mathsf{GMA})\subseteq\mathbf{Var}(\mathsf{FMA})\) and \(\mathbf{Var}(\mathsf{IMA})\subseteq\mathbf{Var}(\mathsf{MMA})\).
In the sequel we shall suppose that \(\langle B,f\rangle\) is finite with at least four elements unless stated otherwise. To simplify notation we shall index the operators with a subscript that indicates the special element generating the associated filter or ideal, depending on the superscript; for example, \(f^{\rm iu}_{a}\) is the ideal algebra with associated ideal \(\downarrow a\), and \(f^{\rm ui}_{b}\) is the filter algebra
with associated filter \(\uparrow b\). As a preparation for the description of the meet of \(\mathbf{Var}(\mathsf{MMA})\) and \(\mathbf{Var}(\mathsf{FMA})\) we observe the following:
**Lemma 7.1**.: _If \(\langle B,f\rangle\in\mathsf{MMA}\cap\mathsf{FMA}\) is subdirectly irreducible such that \(f=f_{a}^{uu}=f_{b}^{ui}\) and \(a,b\notin\{0,1\}\), then \(b\) is an antiatom, and \(a=b\)._
Proof.: Suppose that \(\langle B,f\rangle\in\mathsf{MMA}\cap\mathsf{FMA}\), \(f=f_{a}^{uu}=f_{b}^{ui}\). Since \(0<a\) and \(b<1\), \(f\) is neither the identity nor the unary discriminator. Thus, \(B^{c}=\{0,a,1\}=\{0\}\cup\uparrow b\) which implies \(\{a,1\}=\uparrow b\), hence, \(a=b\), and \(b\) is an antiatom.
The canonical relation of an algebra satisfying the conditions of Lemma 7.1 has two levels with one cluster on the lower level, and a simple cluster on the second level. The logic determined by this type of frame is the pretabular logic \(\mathbf{S4.3MB_{2}}\), see [7, p. 53, \(\langle W_{2},R_{2}\rangle\)]. Thus, we obtain
**Theorem 7.2**.: \(\mathbf{Var}(\mathsf{MMA})\wedge\mathbf{Var}(\mathsf{FMA})=\mathbf{Var}( \mathbf{S4.3MB_{2}})\)_._
This also follows from Theorems 5.9 and 6.6, since the lattice of normal modal logics is dually isomorphic to the lattice of equational classes of modal algebras, see e.g. [3].
\begin{table}
\begin{tabular}{l l l l} Class & Logic & Closed elements when \(f\neq f^{\mathbf{1}}\) & \\ \hline \(\mathsf{GMA}\) & \(\mathbf{S4MDumB_{2}}\) & \(\{0\}\cup\uparrow b\), \(b\) an atom & (Lemma 6.12) \\ \(\mathsf{FMA}\) & \(\mathbf{S4MB_{2}}\) & \(\{0\}\cup\uparrow b\) & (Theorem 5.9) \\ \(\mathsf{IMA}\) & \(\mathbf{S4.3DumB_{2}}\) & \(\{0,a,1\}\), \(a\) an atom & (Theorem 4.1) \\ \(\mathsf{MMA}\) & \(\mathbf{S4.3B_{2}}\) & \(\{0,a,1\}\) & (Theorem 6.3) \\ \end{tabular}
\end{table}
Table 2. Algebras, logics, subdirectly irreducibles
Figure 3. The relations of depth two
If \(B\) is generated by its atoms \(a,b\), and \(I=\downarrow a\), then \(B\) has two ultrafilters \(F_{a},F_{b}\) and \(F\ R_{f}\ G\) if and only if \(\ I\cap G\subseteq F\)[10]. If \(G=F_{b}\), then \(I\cap G=\emptyset\), and it follows that \(F_{b}\ R_{f}\ F_{b}\) and also \(F_{a}\ R_{f}\ F_{b}\). Furthermore, \(\{a\}\subseteq F_{a}\) implies \(F_{a}\ R_{f}\ F_{a}\). On the other hand, \(\{a\}=I\cap F_{a}\not\subseteq F_{b}\), and thus, \(F_{b}(-\ R_{f})F_{a}\). This shows that the canonical relation of \(\langle B,f\rangle\) has two levels each containing a simple cluster; the element in the lower level is \(R_{f}\)-related to the element on the upper level, but not vice versa; in other words, \(R_{f}\) is the two element chain. Let us denote a frame of this type by \(\mathfrak{K}_{2}\), and its complex algebra by \(\mathfrak{K}_{2}^{+}\). Note that \(\mathfrak{K}_{2}^{+}\) is subdirectly irreducible, but not simple. Furthermore, the set of closed elements of \(\mathfrak{K}_{2}^{+}\) is a chain of length three. Conversely, if \(|B|=4\), and \(B^{c}\) has three elements, then \(\langle B,f\rangle\cong\mathfrak{K}_{2}^{+}\). Clearly, \(\mathfrak{K}_{2}^{+}\) is in all four classes of algebras which we have considered.
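For concreteness, the following sketch (ours, not from the paper) constructs \(\mathfrak{K}_{2}^{+}\) as the complex algebra of the two-element chain and checks that its closed elements form a three-element chain and that its operator is simultaneously of iu- and ui-type with respect to the atom \(\{0\}\); the variable names are assumptions for this example.

```python
from itertools import chain, combinations

W = frozenset({0, 1})
B = [frozenset(s) for s in chain.from_iterable(combinations(W, r) for r in range(len(W) + 1))]
R = {(0, 0), (1, 1), (0, 1)}                             # the two-element chain K_2

f = lambda X: frozenset(x for x, y in R if y in X)       # the operator <R> of Cm(K_2)

closed = {X for X in B if f(X) == X}
assert closed == {frozenset(), frozenset({0}), W}        # three closed elements, forming a chain

# With a = {0}, the operator of K_2+ is both an iu- and a ui-operator,
# so K_2+ lies in all four classes considered here.
a = frozenset({0})
assert all(f(X) == (X if X <= a else W) for X in B)                              # iu-form
assert all(f(X) == (frozenset() if X == frozenset() else a | X) for X in B)      # ui-form
```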
**Theorem 7.3**.: \(\mathbf{Var}(\mathsf{GMA})\wedge\mathbf{Var}(\mathsf{MMA})=\mathbf{Var}( \mathfrak{K}_{2}^{+})\)_._
Proof.: Since \(\mathfrak{K}_{2}^{+}\in\mathbf{Var}(\mathsf{GMA})\cap\mathbf{Var}(\mathsf{MMA})\), we need only show the "\(\subseteq\)" direction. Suppose that \(\langle B,f\rangle\in\mathbf{Var}(\mathsf{GMA})\wedge\mathbf{Var}(\mathsf{MMA})\) is subdirectly irreducible. Then, by Corollary 6.2 and Theorem 6.13, \(\langle B,f\rangle\) belongs to \(\mathsf{GMA}\) and \(\mathsf{MMA}\). If \(|B|=2\), then it is isomorphic to a subalgebra of \(\mathfrak{K}_{2}^{+}\). Otherwise, \(\langle B,f\rangle\in\mathbf{Var}(\mathsf{GMA})\) implies that there is an atom \(b\) of \(B\) such that \(b\neq 1\) and \(B^{c}=\{0\}\cup\uparrow b\) by Lemma 6.12. Thus, \(B^{c}\) has at least three elements. On the other hand, \(\langle B,f\rangle\in\mathsf{MMA}\) implies that \(B^{c}\) has at most three elements, and therefore, \(B^{c}\) has exactly three elements. This implies that \(|\!\uparrow\!b|=2\), hence, \(|B|=4\), since \(b\) is an atom. Thus, \(\langle B,f\rangle\cong\mathfrak{K}_{2}^{+}\).
Similarly, we can show:
**Theorem 7.4**.: \(\mathbf{Var}(\mathsf{IMA})\wedge\mathbf{Var}(\mathsf{FMA})=\mathbf{Var}( \mathfrak{K}_{2}^{+})\)_._
It is well known, e.g. from [33], that the logic determined by \(\mathbf{Var}(\mathfrak{K}_{2}^{+})\) is the logic \(\mathbf{S4.3MDumB_{2}}\), called _K4_ by Sobocinski [30], not to be confused with the nowadays common notation \(\mathbf{K4}\) for the class of normal logics determined by transitive frames.
The meet sub-semilattice of the lattice of modal varieties generated by the four varieties we have considered is shown in Figure 5.
Some remarks on pretabularity are in order. \(\mathsf{IMA}\) has as an important subclass the class of discriminator algebras \(\mathsf{DMA}\) (corresponding to the ideal \(\{0\}\)). If we were to add to Figure 5 the class \(\mathbf{Var}(\mathsf{DMA})\) belonging to \(\mathbf{S5}\) in the lower right hand corner, we realize that the classes of closure algebras that we study include all three types of finite closure algebras
Figure 5. The meets of classes generated by algebras of depth two and extremal frame relations
belonging to pretabular logics of depth at most two: **S5** (_clots_), **S4.3MB2** (_tacks_), and **S4MDumB2**, i.e. **S4GrzB2** (_fans_). We do not capture the remaining two types: _Chains_, which may have any finite depth, and _tops_ which have depth three, see e.g. [15].
For the meet of the logics (or join of corresponding varieties) we apply Theorem 3 from [32], adapted for S4 logics. If \(\varphi,\psi\) are modal formulas we denote by \(\varphi\not\subseteq\psi\) the formula obtained by replacing the variables in \(\varphi\vee\psi\) in such a way that \(\varphi\) and \(\psi\) have no variables in common.
**Theorem 7.5**.: _Suppose that \(\mathbf{L}_{1}\) and \(\mathbf{L}_{2}\) are S4 logics, say, \(\mathbf{L}_{1}=\mathbf{K}\cup\{\varphi_{i}:i\in I\}\), \(\mathbf{L}_{2}=\mathbf{K}\cup\{\psi_{j}:j\in J\}\). Then,_
\[\mathbf{L}_{1}\wedge\mathbf{L}_{2}=\mathbf{K}\cup\{\Box\varphi_{i}\not \subseteq\Box\psi_{j}:i\in I,j\in J\}. \tag{7.1}\]
Using Theorem 7.5 we obtain
**Corollary 7.6**.:
1. _[label=()]_
2. \(\mathbf{S4.3MB_{2}\wedge S4.3DumB_{2}=S4.3B_{2}(\Box M\not\subseteq\Box\mathbf{ num})}\)_._
3. \(\mathbf{S4.3DumB_{2}\wedge S4MDumB_{2}=S4DumB_{2}(\Box M\not\subseteq\Box.3)}\)_._
4. \(\mathbf{S4MB_{2}\wedge S4.3B_{2}=S4B_{2}(\Box M\not\subseteq\Box.3)}\)_._
By Theorem 3.1, **S4.3** may be replaced by **S4.2**. The subdirectly irreducibles of the corresponding equational classes of modal algebras may be obtained by Lemma 2.4(2).
Including or excluding some particular "boundary conditions" may give rather unexpected changes in the resulting varieties and logics. For instance, if in the definition of filter algebras (which form the class \(\mathsf{FMA}\)) one allows \(F\) to be equal to \(\{1\}\), we obtain the class \(\mathsf{FMA}\vee\mathsf{DMA}\), whose meet \(\mathsf{FMA}\wedge\mathsf{DMA}\) contains only the closure algebras with at most two elements; in terms of logic, \(\mathbf{S4MB_{2}\vee S5=Triv}\). Moreover, the logic \(\mathbf{S4MB_{2}\wedge S5}\) determined by \(\mathsf{FMA}\vee\mathsf{DMA}\) differs from \(\mathbf{S4MB_{2}}\) substantially. Also, Figure 5 becomes much more complicated.
## 8. Quasivarieties of depth two algebras: Structural completeness
In this section we consider rules of inference in some of the logics we have considered, their related algebras, and their quasivarieties. In general, structural completeness concerns a deductive system \(\mathcal{L}\) (where \(\mathcal{L}\) can be determined by a set of axioms and a set of rules of inference) or its consequence operation \(\vdash_{\mathcal{L}}\), not just a logic understood as a set of formulas closed with respect to some rules. A deductive system \(\mathcal{L}\) (or its consequence operation \(\vdash_{\mathcal{L}}\)) is called _structurally complete_ (SC) if every admissible rule in \(\mathcal{L}\) is also derivable in \(\mathcal{L}\). The set of theorems of a system \(\mathcal{L}\) is denoted by \(L\), hence,
\[\vdash_{\mathcal{L}}\varphi\text{ if and only if }\varphi\text{ is derivable in }\mathcal{L}\text{ if and only if }\varphi\in L.\]
A consequence operation \(\vdash\) is SC if and only if \(\vdash\) is maximal among all \(\vdash^{\prime}\) such that: \(\emptyset\vdash^{\prime}\psi\) if and only if \(\emptyset\vdash\psi\) for all \(\psi\), i.e. if they have the same set of theorems. A rule \(r:\varphi_{1},\ldots,\varphi_{n}/\psi\) is _passive_ if for every substitution \(\varepsilon\), \(\{\varepsilon\varphi_{1},\ldots,\varepsilon\varphi_{n}\}\not\subseteq L\). For example, the rule \(P_{2}\)
\[\Diamond\varphi\wedge\Diamond\neg\varphi/\psi \tag{8.1}\]
or, equivalently, \(\Diamond\varphi\wedge\Diamond\neg\varphi/\bot\), is passive, hence admissible (but not derivable in many modal logics, for example, in **S5**). Therefore, we call \(\mathcal{L}\)_almost structurally complete_
(ASC), if every rule which is admissible and not passive is derivable in \(\mathcal{L}\), see [12].\({}^{2}\) Slightly abusing the terminology, we say that a modal logic \(L\) is SC (ASC) if its standard consequence operation, understood as based on the axioms of \(L\) plus Modus Ponens and the Necessitation rule only, denoted here by \(\vdash_{L}\), is SC (ASC). For instance **S5** and, indeed, every extension of **S4.3** is ASC but, in general, not SC, see [13].
Footnote 2: Recently “almost structural completeness” was also called “active structural completeness”, since passive rules are neglected.
For a variety **K** of algebras, let \(\mathcal{F}_{\textbf{K}}(\lambda)\) be its \(\lambda\)-generated free algebra; we omit **K** if no confusion can arise. The following descriptions of SC and ASC (adapted here to closure algebras) are known, see [1], [12].
**Theorem 8.1**.: _Let **K** be a locally finite variety of closure algebras. Then_
1. **K** _is SC if and only if for every finite subdirectly irreducible algebra_ \(\mathfrak{A}\) _in_ **K** _,_ \(\mathfrak{A}\) _embeds into_ \(\mathcal{F}(\omega)\)_._
2. **K** _is ASC if and only if for every finite subdirectly irreducible algebra_ \(\mathfrak{A}\) _in_ **K** _,_ \(\mathfrak{A}\times\mathcal{F}(0)\) _embeds into_ \(\mathcal{F}(\omega)\)_._
A substitution \(\varepsilon\) of formulas is called a _unifier_ for a formula \(\varphi\) in the logic \(L\) if \(\varepsilon\varphi\in L\). A formula \(\varphi\) is _unifiable_ in \(L\), if \(\varepsilon\varphi\in L\) for some substitution \(\varepsilon\). Therefore, a rule \(r:\varphi_{1},\ldots,\varphi_{n}/\psi\) is passive if and only if \(\varphi_{1}\wedge\cdots\wedge\varphi_{n}\) is not unifiable. In case of logics extending **S4** it is enough to consider rules of the form \(r:\varphi/\psi\) only. A _projective unifier_ for \(\varphi\) in \(L\) is a unifier \(\varepsilon\) for \(\varphi\) such that \(\varphi\vdash_{L}\varepsilon(\psi)\leftrightarrow\psi\), for each formula \(\psi\); and one says that a logic \(L\) enjoys _projective unification_ if each \(L\)-unifiable formula has a projective unifier in \(L\). Projective unifiers (formulas, substitutions) were introduced and extensively used by Silvio Ghilardi in his papers on unification of 1997-2004, see e.g. [18, 19]. We have the following result (see [13]):
**Theorem 8.2**.: _Let \(L\) be a logic which enjoys projective unification. Then \(L\) is almost structurally complete. If, in addition, any formula which is not \(L\)-unifiable is inconsistent in \(L\), then \(L\) is structurally complete._
Proof.: Let \(r:\varphi/\psi\) be an admissible rule in \(L\) with a unifiable premise \(\varphi\) and let \(\varepsilon\) be a projective unifier for \(\varphi\). Then \(\varepsilon\varphi\in L\) and \(\varphi\vdash_{L}\varepsilon(\psi)\leftrightarrow\psi\), hence \(\varphi\vdash_{L}\psi\), i.e. \(r:\varphi/\psi\) is derivable in \(L\) and \(L\) is ASC. Now assume, in addition, that any formula which is not \(L\)-unifiable is inconsistent in \(L\) and consider any admissible rule in \(L\), \(r:\varphi/\psi\) with a premise \(\varphi\) which is not \(L\)-unifiable. \(\varphi\) is then inconsistent in \(L\) and \(\varphi\vdash_{L}\psi\), for every formula \(\psi\), i.e. \(L\) is SC.
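For a concrete illustration of these notions (our example, not taken from the text): in **S5** the formula \(\varphi=\Box p\vee\Box\neg p\) is unifiable (e.g. by \(p\mapsto\top\)), and the substitution \(\varepsilon(p)=\Box p\) is a projective unifier for it, since

\[\varepsilon\varphi=\Box\Box p\vee\Box\neg\Box p\in\mathbf{S5},\qquad\Box p\vee\Box\neg p\ \vdash_{\mathbf{S5}}\ \Box p\leftrightarrow p.\]

The first holds because \(\Box p\vee\neg\Box p\) is a tautology and **S5** proves \(\Box p\to\Box\Box p\) and \(\neg\Box p\to\Box\neg\Box p\); the second because each disjunct of \(\varphi\) settles the truth values of both \(\Box p\) and \(p\) (and hence \(\varphi\vdash_{\mathbf{S5}}\varepsilon(\psi)\leftrightarrow\psi\) for every formula \(\psi\)).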
In [14, 5.1-5.2] and in [24, 3.3] it is shown that
**Lemma 8.3**.: _Let \(L\) be a logic extending **S4**. If a formula \(\varphi\) is not unifiable in \(L\), then \(\varphi\vdash_{L}\Diamond\theta\wedge\Diamond\neg\theta\), for some formula \(\theta\)._
**Corollary 8.4**.: _Let \(L\) be a logic extending **S4**._
1. _If_ \(L\) _enjoys projective unification and_ \(\texttt{M:}\ \Box\Diamond p\rightarrow\Diamond\Box p\) _is in_ \(L\)_, then_ \(L\) _is structurally complete._
2. _If a rule_ \(r:\varphi/\psi\) _is passive in_ \(L\)_, then_ \(r\) _is derivable by an application of the rule_ \(P_{2}\)
Proof.: 1. Assume that \(L\supseteq\mathbf{S4}\) enjoys projective unification, \(\mathbb{M}\colon\Box\Diamond p\to\Diamond\Box p\) is in \(L\) and \(\varphi\) is not unifiable in \(L\). Then, by Lemma 8.3, \(\varphi\vdash_{L}\Diamond\theta\wedge\Diamond\neg\theta\), for some \(\theta\). Applying the necessitation rule we obtain \(\varphi\vdash_{L}\Box\Diamond\theta\wedge\Box\Diamond\neg\theta\), i.e. \(\varphi\vdash_{L}\neg(\Box\Diamond\theta\to\Diamond\Box\theta)\), which is the negation of an instance of \(\mathbb{M}\). This, together with \(\mathbb{M}\in L\), shows that \(\varphi\vdash_{L}\psi\) for every formula \(\psi\). Finally, Theorem 8.2 yields that \(L\) is SC.
2. Let \(r:\varphi/\psi\) be passive in \(L\), i.e. \(\varphi\) is not unifiable in \(L\). By Lemma 8.3, \(\varphi\vdash_{L}\Diamond\theta\wedge\Diamond\neg\theta\) for some formula \(\theta\). Now, by an application of \(P_{2}:\Diamond\theta\wedge\Diamond\neg\theta/\psi\), we obtain \(\varphi\vdash_{L}\psi\), i.e. \(r\) is derivable.
In [13, 3.19] (see also [14]) it is shown that
**Theorem 8.5**.: _A modal logic \(L\) containing \(\mathbf{S4}\) enjoys projective unification if and only if \(\mathbf{S4.3}\subseteq L\)._
Hence, together with [13, Corollary 4.2], Theorem 8.2 and Corollary 8.4 we obtain
**Corollary 8.6**.:
1. _The logics_ \(\mathbf{S4.3B_{2}}\) _and_ \(\mathbf{S4.3DumB_{2}}\) _as well as all their extensions enjoy projective unification and are almost structurally complete. In other words, each admissible rule in_ \(\mathbf{S4.3B_{2}}\) _(in_ \(\mathbf{S4.3DumB_{2}}\)_) is derivable in_ \(\mathbf{S4.3B_{2}}\) _(in_ \(\mathbf{S4.3DumB_{2}}\)_) or passive (and then derivable by the rule_ \(P_{2}\)_)._
2. _The logics_ \(\mathbf{S4.3MB_{2}}\) _and_ \(\mathbf{S4.3MDumB_{2}}\) _as well as all their extensions enjoy projective unification and are structurally complete. In other words, each admissible rule in_ \(\mathbf{S4.3MB_{2}}\)_(in_ \(\mathbf{S4.3MDumB_{2}}\)_) is derivable in_ \(\mathbf{S4.3MB_{2}}\)_(in_ \(\mathbf{S4.3MDumB_{2}}\)_)._
Some further remarks are in order:
1. Neither \(\mathbf{S4.3B_{2}}\) nor \(\mathbf{S4.3DumB_{2}}\) are structurally complete (the premise of the rule \(P_{2}\) is consistent).
2. ASC for \(\mathbf{S4.3B_{2}}\) and \(\mathbf{S4.3DumB_{2}}\) holds only for rules with _finite_ premises, in accordance with the standard definition. To show that the extension of ASC for rules with infinite premises does not hold, let us consider the rule with the scheme (8.2) \[\frac{\{\Box(\varphi_{i}\leftrightarrow\varphi_{j})\to\varphi_{0}:0<i<j< \omega\}}{\varphi_{0}}\] The rule is not passive. Observe that for each finite modal algebra \(B\), if \(\varphi_{0}\) is false in \(B\), then \(\{\Box(\varphi_{i}\leftrightarrow\varphi_{j})\to\varphi_{0}:0<i<j<\omega\}\) is false in \(B\); in other words, whenever premises are valid in \(B\), the conclusion is also valid in \(B\). Since both \(\mathbf{S4.3B_{2}}\) and \(\mathbf{S4.3DumB_{2}}\) have the finite model property, the rule is admissible in both logics. But \(p_{0}\) can be derived neither in \(\mathbf{S4.3B_{2}}\) nor in \(\mathbf{S4.3DumB_{2}}\) from any finite subset of the set \(\{\Box(p_{i}\leftrightarrow p_{j})\to p_{0}:0<i<j<\omega\}\), hence the rule is not derivable.
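The "Observe" step in (2) rests on a pigeonhole argument, which we spell out (our addition):

\[\text{if }B\text{ is finite and }v(\varphi_{0})\neq 1\text{ for some valuation }v,\text{ then }v(\varphi_{i})=v(\varphi_{j})\text{ for some }0<i<j<\omega,\]
\[\text{so }v(\Box(\varphi_{i}\leftrightarrow\varphi_{j})\to\varphi_{0})=1\to v(\varphi_{0})=v(\varphi_{0})\neq 1,\]

i.e. one of the premises fails in \(B\); contrapositively, whenever all premises are valid in \(B\), so is \(\varphi_{0}\).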
Recall that for a variety \(\mathbb{V}\) of algebras, \(\mathcal{F}_{\mathbb{V}}(\lambda)\) denotes its \(\lambda\)-generated free algebra. Since in our context \(\mathcal{F}_{\mathbb{V}}(0)=\mathbf{2}\), we obtain the following Corollary from Theorem 8.1, Corollary 8.6, and [13]
**Corollary 8.7**.:
1. _For_ \(\mathbb{V}\in\{\mathbf{Var}(\mathsf{IMA}),\mathbf{Var}(\mathsf{MMA})\}\) _and for every finite subdirectly irreducible algebra_ \(B\) _in_ \(\mathbb{V},\,B\,\times\,\mathbf{2}\) _embeds into_ \(\mathcal{F}_{\mathbb{V}}(\omega)\)_._
2. _For_ \(\mathbb{V}\in\{\mathbf{Var}(\mathsf{FMA})\wedge\mathbf{Var}(\mathsf{MMA}), \mathbf{Var}(\mathsf{GMA})\wedge\mathbf{Var}(\mathsf{IMA})\}\)_, every finite subdirectly irreducible algebra_ \(B\in\mathbb{V}\) _embeds into_ \(\mathcal{F}_{\mathbb{V}}(\omega)\)
Next, we consider (almost) structural completeness in quasivarieties \(\mathbf{Q}_{L}\) determined by logics \(L\). A rule \(r:\varphi_{1},\ldots,\varphi_{n}/\psi\) in a logic \(L\) can be translated into a quasiidentity (quasiequation) in the language of the class of algebras corresponding to \(L\) by \((\varphi_{1}^{\prime}=1\ \&\ldots\ \&\ \varphi_{n}^{\prime}=1)\Rightarrow\psi^{ \prime}=1\); here, & and \(\Rightarrow\) are connectives of the metalanguage. In particular, for \(r:\varphi/\psi\) we have \(\varphi^{\prime}=1\ \Rightarrow\psi^{\prime}=1\). Under this translation, a rule \(r:\varphi_{1},\ldots,\varphi_{n}/\psi\) is admissible in \(L\) if and only if the quasiidentity \((\varphi_{1}^{\prime}=1\ \&\ldots\ \&\ \varphi_{n}^{\prime}=1)\Rightarrow\psi^{ \prime}=1\) holds in the free algebra \(\mathcal{F}_{\mathsf{V}_{L}}(\omega)\), that is, in the Lindenbaum-Tarski algebra of \(L\); \(r\) is derivable in \(L\) if and only if the quasiidentity \((\varphi_{1}^{\prime}=1\ \&\ldots\ \&\ \varphi_{n}^{\prime}=1)\Rightarrow\psi^{ \prime}=1\) holds in the quasivariety \(\mathbf{Q}_{L}\) corresponding to \(L\), see e.g. [1]. Hence, we have the following definition: A quasivariety \(\mathbf{Q}\) is _structurally complete_ (SC), if every quasidentity which holds in \(\mathcal{F}_{\mathbf{Q}}(\omega)\) also holds in \(\mathbf{Q}\) see [1, 12]. \(\mathbf{Q}\) is _almost structurally complete_ (ASC), if every active quasiidentity which holds in \(\mathcal{F}_{\mathbf{Q}}(\omega)\) also holds in \(\mathbf{Q}\), where \((\varphi_{1}^{\prime}=1\ \&\ldots\ \&\ \varphi_{n}^{\prime}=1)\Rightarrow\psi^{ \prime}=1\) is active if \(\neg(\varphi_{1}^{\prime}=1\ \&\ldots\ \&\ \varphi_{n}^{\prime}=1)\) does not hold in \(\mathcal{F}_{\mathbf{Q}}(\omega)\), see [12, Section 3].
It is well known, see e.g. [1, Proposition 1.2], that every consequence operation \(\vdash\) (or the corresponding logic) has a unique SC-extension \(\vdash^{\prime}\), that is, \(\vdash^{\prime}\) having the same set of theorems as \(\vdash\) such that \(\vdash^{\prime}\) is SC. Note that \(\vdash^{\prime}\) extends \(\vdash\) with the rules only. Hence every quasivariety \(\mathbf{Q}\) has a unique SC subquasivariety \(\mathbf{Q}^{\prime}\) such that \(\mathbf{Var}(\mathbf{Q})=\mathbf{Var}(\mathbf{Q}^{\prime})\). Below we describe the SC subquasivariety of \(\mathbf{Q}\) for those \(\mathbf{Q}\) in the paper which are ASC. From Corollary 8.6 and from [14, Corollary 6.6] we have
**Corollary 8.8**.: _The structurally complete subquasivariety of \(\mathsf{Q}(\mathsf{IMA})\), respectively, \(\mathsf{Q}(\mathsf{MMA})\), is axiomatized by equations obtained from the axioms of \(\mathbf{S4.3DumB_{2}}\), respectively, of \(\mathbf{S4.3B_{2}}\), and the single quasiidentity \((\Diamond x\wedge\Diamond\neg x\ =1)\ \Rightarrow\ 0=1\)._
Observe that \((\Diamond x\wedge\Diamond\neg x\ =1)\ \Rightarrow\ 0=1\) is the algebraic translation of the passive rule \(P_{2}\), see, e.g. [14, p. 530] which was introduced by Rybakov [28]. Descriptions of admissible rules in \(\mathsf{FMA}\) and in \(\mathsf{GMA}\) are rather complicated and the statements analogous to the descriptions above of structurally complete subquasivarieties do not hold. For instance, unification in \(\mathbf{S4MB_{2}}\) and in \(\mathbf{S4MDumB_{2}}\) is not unitary, hence not projective, and \(\mathbf{S4MDumB_{2}}\) is not even almost structurally complete. According to [28, 4.3.33], the admissible rules in \(\mathbf{S4MDumB_{2}}\) have no finite basis, indeed, no basis in finitely many variables. Hence a simple description of the structurally complete subquasivariety of \(\mathsf{Q}(\mathsf{GMA})\) similar to Corollary 8.8 is not possible. Instead of \((\Diamond x\wedge\Diamond\neg x\ =1)\ \Rightarrow\ 0=1\) infinitely many complicated quasiidentities are needed.
## 9. Summary and outlook
We have investigated varieties of closure algebras of depth two, the canonical relations of which are the identity or the universal relation when restricted to the levels, and have described their associated logics. We have also discussed the quasivarieties generated by the four classes of algebras and structural completeness. In future work we shall investigate algebras and logics of depth two whose canonical frames are irreflexive, in particular the ones connected to extensions of the provability logic \(\mathbf{GL}\).
|
2309.11480 | Effects of disorder on the magnetic properties of the Heusler alloy
V$_{2}$FeAl | Magnetic properties of multicomponent alloys depend sensitively on the degree
of atomic order on the different crystallographic sites. In this work we
demonstrate the magnetic contrast between bulk and thin-film samples of the
Heusler alloy V$_{2}$FeAl. Arc-melted bulk ingots show practically no site
preference of the elements (A2 structure), whereas magnetron-sputtered
thin-film samples display a higher degree of atomic ordering with a tendency
towards XA-type order. Electronic structure calculations favour ferrimagnetic
XA-type ordering, and the effect of different pairwise atomic disorder on the
element specific and net magnetic moments are evaluated to reproduce
experimental observations. XA-type thin-films with iron moment of 1.24
$\mu_{\mathrm{B}}$ determined by X-ray magnetic circular dichroism are in
agreement with calculation, but the measured net moment of 1.0
$\mu_{\mathrm{B}}$ per formula unit and average vanadium moment are smaller
than expected from calculations. The measured Curie temperature is
approximately 500 K. Films with a higher degree of disorder have a
T$_{\mathrm{C}}$ close to 300 K with a net moment of 0.1 $\mu_{\mathrm{B}}$ at
low temperature. The large calculated vanadium moments are destroyed by partial
disorder on $4d$ vanadium sites. By contrast, the arc-melted and annealed bulk
alloy with a fully-disordered A2 structure shows no spontaneous magnetization;
it is a Pauli paramagnet with dimensionless susceptibility
$\chi_{\mathrm{v}}=-2.95\times10^{-4}$. | Ross Smith, Zsolt Gercsi, Rui Zhang, Katarzyna Estera Siewierska, Karsten Rode, J. M. D. Coey | 2023-09-20T17:25:12Z | http://arxiv.org/abs/2309.11480v1 | # Effects of Disorder on the Magnetic Properties of the Heusler Alloy V\({}_{2}\)FeAl
###### Abstract
Magnetic properties of multicomponent alloys depend sensitively on the degree of atomic order on the different crystallographic sites. In this work we demonstrate the magnetic contrast between bulk and thin-film samples of the Heusler alloy V\({}_{2}\)FeAl. Arc-melted bulk ingots show practically no site preference of the elements (A2 structure), whereas magnetron-sputtered thin-film samples display a higher degree of atomic ordering with a tendency towards XA-type order. Electronic structure calculations favour ferrimagnetic XA-type ordering, and the effect of different pairwise atomic disorder on the element specific and net magnetic moments are evaluated to reproduce experimental observations. XA-type thin-films with iron moment of 1.24 \(\mu_{\mathrm{B}}\) determined by X-ray magnetic circular dichroism are in agreement with calculation, but the measured net moment of 1.0 \(\mu_{\mathrm{B}}\) per formula unit and average vanadium moment are smaller than expected from calculations. The measured Curie temperature is approximately 500 K. Films with a higher degree of disorder have a T\({}_{\rm C}\) close to 300 K with a net moment of 0.1 \(\mu_{\mathrm{B}}\) at low temperature. The large calculated vanadium moments are destroyed by partial disorder on 4\(d\) vanadium sites. By contrast, the arc-melted and annealed bulk alloy with a fully-disordered A2 structure shows no spontaneous magnetization; it is a Pauli paramagnet with dimensionless susceptibility \(\chi_{\rm v}=-2.95\times 10^{-4}\).
Heusler alloys; Order-disorder phenomena; processing; Sputter deposition; Magnetic thin films
## I Introduction
Heusler alloys are materials with formula X\({}_{2}\)YZ (where X and Y are transition metals and Z is a p-block element), which consist of four interpenetrating face-centered cubic lattices [1]. The Heusler family is vast, with thousands of possible combinations, and over 800 papers are published on the topic annually. The alloys exhibit a wide variety of mechanical, electronic, and magnetic properties, and have applications in many disparate areas of condensed matter physics ranging from spintronics [2][3] to thermoelectric power generation [4] and shape-memory behaviour [5]. The growing global demand for cobalt has driven a recent focus on cobalt-free magnetic Heusler alloys, which, aside from economic and environmental concerns, also offer advantages for certain spintronics applications, such as reduced magnetisation and high coercive fields, with the goal of achieving faster dynamics for current-induced magnetization switching, for example. The magnetic and transport properties of these materials are highly dependent upon the crystal structure and symmetry, as well as the level of atomic disorder. Here we contrast the structural and magnetic properties of V\({}_{2}\)FeAl in bulk and thin-film form; full Heusler alloys can adopt one of two possible fully-ordered crystal structures, L2\({}_{1}\) and XA (or inverse)-type, presented in Figure 1, as well as partially-ordered structures, depending on the processing conditions.
In the L2\({}_{1}\)-type crystal structure all of the vanadium atoms occupy the crystallographically equivalent 8\(c\) Wyckoff sites, with the iron and aluminium occupying the 4\(b\) and 4\(a\) sites respectively. The L2\({}_{1}\) structure has space group Fm\(\bar{3}\)m (No. 225), with corresponding centrosymmetric point group symmetry m\(\bar{3}\)m. Alternatively, in the XA-type structure, the vanadium atoms occupy two crystallographically inequivalent Wyckoff positions, 4\(b\) and 4\(d\), the iron now occupying the 4\(c\) sites and aluminium remaining in the 4\(a\) sites. This structure has space group F\(\bar{4}\)3m
Figure 1: The two possible fully-ordered crystal structures of V\({}_{2}\)FeAl; L2\({}_{1}\) (top) where the vanadium, iron, and aluminium atoms sit in the 8\(c\), 4\(b\), and 4\(a\) Wyckoff sites respectively, and XA-type (bottom) where the iron and aluminium occupy the 4\(c\) and 4\(a\) sites respectively and the vanadium atoms sit in 4\(b\) and 4\(d\) sites.
(No. 216), with corresponding non-centrosymmetric point group symmetry \(\bar{4}\)3m. As the vanadium atoms in the XA-type crystal structure occupy two crystallographically inequivalent sites, they form two inequivalent magnetic sublattices. These are the fully-ordered crystal structures of V\({}_{2}\)FeAl, but there are also possible fully or partially disordered structures listed below and summarised in Table 1;
1. B32 structure in which there is partial disorder between the aluminium atoms in the \(4a\) sites and the vanadium atoms in the \(4b\) and \(4d\) sites.
2. DO\({}_{3}\) structure in which there is partial disorder between the iron atoms in the \(4c\) sites and the vanadium atoms in the \(4b\) and \(4d\) sites.
3. B2 structure in which there is partial disorder between the iron in the \(4c\) sites and the aluminium in the \(4a\) sites.
4. A2 structure in which the atoms on all sites are fully disordered.
V\({}_{2}\)FeAl has not been reported experimentally either in bulk or thin-film form, in contrast to a number of published electronic structure calculations. Watson et al. [6] placed V\({}_{2}\)FeAl in the L2\({}_{1}\) structure and reported an unrealistically large unit cell parameter of 1103.3 pm with ferrimagnetic ordering. They concluded that the system is unstable due to a low heat of formation of -0.02 eV. Kumar et al. [7] also placed V\({}_{2}\)FeAl in the L2\({}_{1}\) structure with a more realistic lattice parameter and found that ferrimagnetic ordering is preferred. Zhang et al. [8] and Skaftouros et al. [9] both determined the XA-type structure to be energetically preferred to the L2\({}_{1}\) structure, and predicted ferrimagnetism to be the preferred magnetic ordering, a result corroborated by density functional theory (DFT) calculations. The published electronic structure calculation results are summarised in Table 2.
## 2 Experimental
Epitaxial thin-films of V\({}_{2}\)FeAl were grown by DC magnetron sputtering on \(10\mathrm{mm}\times 10\mathrm{mm}\) single-crystal MgO(001) substrates in the ultra-high vacuum Trifolium Dubium sputtering system with a base pressure less than \(1\times 10^{-9}\) mbar. The films were co-sputtered from high-purity 50 mm targets of vanadium and binary iron-aluminium (50:50) with a confocal sputtering geometry. Films were grown at a range of deposition temperatures from 300 \({}^{\circ}\)C to 700 \({}^{\circ}\)C. The MgO substrates were degassed for 1 h at 600 \({}^{\circ}\)C to reduce surface contamination and encourage epitaxial film growth. Deposition rates were calculated based upon the thicknesses and densities of V and FeAl calibration samples measured by X-ray reflectivity. Following sample deposition, the thin-films were transferred under ultra-high vacuum to the dedicated _in-situ_ X-ray photoelectron spectroscopy chamber equipped with a Specs PHOIBOS 150 hemispherical energy analyser and monochromated Al K\({}_{\mathrm{x1}}\) X-ray source. The analyser was set to transmission mode with an acceptance area determined by the X-ray spot size (2 mm); all spectra were captured with a pass energy of 10 eV, step size of 0.1 eV, and dwell time of 1 s. Fitting of XPS spectra was carried out using the CasaXPS software package in order to confirm the chemical composition of the films and check for contamination. A fluorescent RHEED screen was used to probe surface crystallinity. A 2 nm capping layer of Al\({}_{2}\)O\({}_{3}\) was then deposited by radio-frequency magnetron sputtering to protect the sample against oxidation. Bulk ingots were prepared by arc-melting high-purity elemental samples of vanadium, iron, and aluminium followed by grinding the ingot into a powder and annealing at temperatures ranging from 600 \({}^{\circ}\)C to 900 \({}^{\circ}\)C.
A Panalytical X'Pert Pro X-ray diffractometer using Cu K\({}_{\alpha 1}\) radiation (\(\lambda\) = 0.15406 nm) was used to capture diffraction patterns of both bulk powder and thin-film samples. Low-angle X-ray reflectivity was also measured on the thin-films, and the open source GenX XRR refinement programme [10] was used to fit the measured data and determine sample density, thickness, and roughness. Roughness was confirmed by atomic force microscopy (Veeco Nanoscope III atomic force microscope with Multimode software suite). Reciprocal space mapping of the thin-film samples was performed on a Bruker D8 Discover high-resolution X-ray diffractometer with a Cu K\({}_{\alpha 1}\) beam from a double-bounce Ge(220) monochromator. Rietveld refinement of the powder diffraction patterns was carried out using FullProf [11].
Magnetization measurements were performed with a 5 T Quantum Design MPMS-XL SQUID magnetometer. A 57.2 mg sample of bulk powder was measured, as were thin-films with the field applied perpendicular to the film plane. The diamagnetic contribution from the MgO substrate was corrected for, as was a paramagnetic contribution from Mn\({}^{2+}\)/Fe\({}^{3+}\) impurities in the MgO which appears below about 100 K [12]. A model consisting of a Curie-Weiss law paramagnetic component (\(m_{Para}\)), a diamagnetic component (\(m_{Dia}\)), and a spontaneous magnetic moment arising from the V\({}_{2}\)FeAl thin-film (\(m_{Sample}\)) was used to fit magnetization curves at 5 K, 100 K, and 300 K as well as temperature scans in 2 T to ensure the sample was saturated. The paramagnetism in the samples is modelled using a Brillouin function;
\begin{table}
\begin{tabular}{c c c} \hline Struct. & Space Group & Disorder \\ \hline XA & F\(\bar{4}\)3m (216) & Fully-Ordered \\ L2\({}_{1}\) & Fm3m (225) & Fully-Ordered \\ DO\({}_{3}\) & Fm\(\bar{3}\)m (225) & V\({}^{4d}\leftrightarrow\) Fe\({}^{4c}\) \\ B32 & Fd\(\bar{3}\)m (227) & V\({}^{4bkdd}\leftrightarrow\) Al\({}^{4a}\) \\ B2 & Pm\(\bar{3}\)m (221) & Fe\({}^{4c}\leftrightarrow\) Al\({}^{4a}\) \\ A2 & Im\(\bar{3}\)m (229) & V\({}^{4bkdd}\leftrightarrow\) Fe\({}^{4c}\leftrightarrow\) Al\({}^{4a}\) \\ \hline \end{tabular}
\end{table}
Table 1: The possible ordered and disordered structures of V\({}_{2}\)FeAl where \(\leftrightarrow\) represents disorder between elements in a given site, space group number presented in parentheses.
\[m_{Para}=ng\mu_{B}J\bigg{[}\frac{2J+1}{2J}\coth\bigg{(}\frac{2J+1}{2J} \cdot x\bigg{)}\] \[-\frac{1}{2J}\coth\bigg{(}\frac{x}{2J}\bigg{)}\bigg{]}, \tag{1}\] \[\text{where}\;\;x=\frac{g\mu_{B}J}{k_{B}(T-\theta_{P})}\cdot\mu_{0} H_{Applied},\]
with \(J=\frac{5}{2}\). We then refine the number of paramagnetic ions (\(n\)), and the paramagnetic Curie temperature (\(\theta_{P}\)), to account for magnetic interactions between the ions (the magnitude of this effect is typically less than \(1\,\mathrm{K}\)). The diamagnetic substrate contribution is simply;
\[m_{Dia}=\chi_{V}\cdot H_{Applied}, \tag{2}\]
where \(\chi_{V}\leq 0\) is the diamagnetic susceptibility which we refine, ensuring the value doesn't differ appreciably from values cited in literature. Finally, the temperature dependence of the magnetization is modelled using Bloch's T\({}^{3/2}\) law;
\[m_{Ferro}=m_{0}\bigg{[}1-\bigg{(}\frac{T}{Tc}\bigg{)}^{3/2}\bigg{]}, \tag{3}\]
where we fit the spontaneous magnetization at zero temperature (\(m_{0}\)), and the Curie temperature (\(T_{C}\)). Note that 3\(\sigma\) noise removal has been performed on the magnetometry data.
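The three-component model of Eqs. (1)-(3) can be fitted by standard nonlinear least squares. The following sketch is our illustration only, not the authors' analysis code; all function and variable names are ours and units/prefactors are schematic. It shows how a fixed-field temperature scan could be fitted with scipy:

```python
import numpy as np
from scipy.optimize import curve_fit

MU_B = 9.274e-24   # Bohr magneton (J/T)
K_B  = 1.381e-23   # Boltzmann constant (J/K)

def m_para(T, n, theta_p, B0, g=2.0, J=2.5):
    """Brillouin-function paramagnetism of n impurity ions, Eq. (1)."""
    x = g * MU_B * J * B0 / (K_B * (T - theta_p))
    a = (2 * J + 1) / (2 * J)
    return n * g * MU_B * J * (a / np.tanh(a * x) - 1.0 / (2 * J) / np.tanh(x / (2 * J)))

def m_total(T, n, theta_p, chi_v, m0, Tc, B0=2.0):
    """Sum of paramagnetic, diamagnetic and Bloch-law components, Eqs. (1)-(3),
    along a temperature scan at fixed applied field B0 (tesla)."""
    m_dia = chi_v * B0                                         # Eq. (2), substrate diamagnetism
    m_ferro = m0 * np.clip(1.0 - (T / Tc) ** 1.5, 0.0, None)   # Eq. (3), Bloch T^{3/2} law
    return m_para(T, n, theta_p, B0) + m_dia + m_ferro

# hypothetical usage on measured data (T_data in K, m_data in A m^2):
# popt, pcov = curve_fit(m_total, T_data, m_data,
#                        p0=[1e16, 1.0, -1e-9, 2e-8, 400.0])
```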
Iron L-edge X-ray absorption near-edge structure (XANES) and X-ray magnetic dichroism (XMCD) of V\({}_{2}\)FeAl thin-films were measured at the VEKMAG beamline of the BESSY II light source of Helmholtz-Zentrum Berlin. Measurements were performed on two samples, one ordered and the other disordered. The XANES of the Fe-LIII and Fe-LII edges were measured over an energy range of 680 eV to 780 eV in an applied field of \(2\,\mathrm{T}\). The absorption is measured in total electron yield, whereby a drain current from the sampleis measured, and normalised to a mirror current from the final optical component along the beampath before the sample (this mirror shouldn't show any absorption and the mirror current should therefore be directly proportional to the intensity of incident X-rays). The absorption spectra are measured for both left circularly polarised (LCP) and right circularly polarised (RCP) X-rays, then again with the applied field direction reversed as the spectrum measured with RCP X-rays and positive applied field should be equivalent to that measured with LCP and negative applied field. The sum of these spectra gives the XANES spectrum, while the difference gives the XMCD spectrum. A background function is subtracted from the XANES spectra prior to analysis. This function consists of a component linear in energy with a pair of arctangent functions at the LIII and LII edges, the amplitude of the latter being half that of the former. The integrated areas of the XANES and XMCD spectra are then calculated and the spin and orbital moments deduced using the XMCD sum rules [13]. The moments must be scaled to the number of holes in the 3d band. In this work we chose a value of n\({}_{\text{h}}\) = 3.3, based upon electronic structure calculations of the XA-type structure for the ordered thin-film sample. The number of holes for the disordered sample was then determined by comparing the magnitude of the edge-jumps against the ordered sample and then scaling the value of n\({}_{\text{h}}\) accordingly. The number of holes in the disordered sample was determined to be approximately half that of the ordered sample.
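The moment extraction described here reduces to three numerical integrals of the background-corrected spectra. The snippet below is a schematic illustration, not the analysis code actually used: it implements one common form of the spin and orbital sum rules (e.g. Chen et al.), neglects the magnetic dipole term, and all names are ours.

```python
import numpy as np

def xmcd_sum_rules(E, mu_plus, mu_minus, n_h, E_split):
    """Spin and orbital moments (mu_B/atom) from background-corrected L-edge
    spectra taken with opposite circular polarizations; E_split (eV) separates
    the L3 and L2 regions."""
    xmcd = mu_plus - mu_minus            # dichroic difference
    xas  = mu_plus + mu_minus            # summed absorption, step background removed
    L3 = E < E_split
    p = np.trapz(xmcd[L3], E[L3])        # XMCD integral over L3 only
    q = np.trapz(xmcd, E)                # XMCD integral over L3 + L2
    r = np.trapz(xas, E)                 # XAS integral over L3 + L2
    m_orb  = -4.0 * q * n_h / (3.0 * r)
    m_spin = -(6.0 * p - 4.0 * q) * n_h / r   # effective spin moment, <T_z> neglected
    return m_spin, m_orb
```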
Thin-film samples were patterned photolithographically into Hall bars with length and width of \(50\,\mu\mathrm{m}\) and \(10\,\mu\mathrm{m}\) respectively. Ru/Ta/Pt contact pads were fabricated by a lift-off technique, which were subsequently cold-welded to \(50\,\mu\mathrm{m}\) diameter silver wire using high purity indium. Anomalous Hall effect measurements were carried out using a \(5.5\,\mathrm{T}\) superconducting magnet with a cryostat capable of reaching temperatures down to \(10\,\mathrm{K}\). Transport measurements were performed with a DC current of \(0.5\,\mathrm{mA}\).
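The Hall conductivities quoted in the results below follow from the measured longitudinal and transverse resistances by the usual tensor inversion; a minimal sketch (our own, with assumed geometry variables, and up to the sign convention chosen for the Hall resistivity):

```python
def conductivities(R_xx, R_xy, t, L, w):
    """sigma_xx and sigma_xy (Ohm^-1 cm^-1) from Hall-bar resistances (Ohm);
    film thickness t, voltage-contact separation L and bar width w in cm."""
    rho_xx = R_xx * w * t / L      # longitudinal resistivity
    rho_xy = R_xy * t              # Hall resistivity
    d = rho_xx**2 + rho_xy**2
    return rho_xx / d, rho_xy / d
```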
Ab-initio calculations based on density functional theory were carried out using norm-conserving pseudopotentials and pseudo-atomic localized basis functions implemented in the OpenMX software package [14]. The generalized gradient approximation (GGA-PBE) [15] was used for all calculations. We used a 16 atom convenient cell for the cubic structure using \(15\times 15\times 15\) k-points to evaluate the total energies of the ordered and site-disordered structures. Pre-generated fully relativistic pseudopotentials and the pseudo-atomic orbitals with a typical cut-off radius of 7 atomic units (a.u.) with s3p3d3 were used for all elements, respectively. An energy cut-off of 300 Ry was used for the numerical integrations. The convergence criterion for the energy minimization procedure was set to \(10^{7}\) Hartree. Spin-orbit interaction (SOI) was turned off and only collinear spin configurations were considered.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline Reference & Structure & Magnetic-Ordering & Lattice Constant & \(\mathrm{\SIUnitSymbolMicro F}_{\text{Fe}}\) & \(\mathrm{\SIUnitSymbolMicro W}\) & \(\mathrm{\SIUnitSymbolMicro B}_{\text{Al}}\) & \(\mathrm{\SIUnitSymbolMicro T}_{\text{tot.}}\) \\ \hline Watson et al. [6] & L2\({}_{1}\) & FiM & 1103.4 & 1.15 & -0.19 & 0.00 & 0.77 \\ Kumar et al.* [7] & L2\({}_{1}\) & FiM & 597.8 & 1.91 & -0.57 & 0.00 & 0.46 \\ Skaftouros et al. [9] & XA & FiM & 593.0 & 1.20 & 2.11 & -0.31 & -0.09 & 2.92 \\ Zhang et al. [8] & XA & FiM & 592.0 & 1.18 & 2.46 & -0.46 & -0.18 & 3.00 \\ \hline \end{tabular}
\end{table}
Table 2: Results of density functional theory calculations from different works with the preferred structure, magnetic ordering, cubic lattice parameter (pm), element specific moments (\(\mathrm{\SIUnitSymbolMicro m}\)), and total moment (\(\mathrm{\SIUnitSymbolMicro m}\) per unit cell). * Element specific moments add up to \(0.77\,\mathrm{\SIUnitSymbolMicro m}\), not \(0.46\,\mathrm{\SIUnitSymbolMicro m}\).
## 3 Results
The results of density functional theory calculations performed for different crystal structures and magnetic-orderings of V\({}_{2}\)FeAl are shown in Table 3. Figure 2 (a) shows that the XA-type structure is energetically favourable compared to the L2\({}_{1}\) structure, and that ferrimagnetic (FiM) ordering is preferred over non-magnetic or ferromagnetic ordering in both cases. The calculation of the XA-type structure with ferromagnetic ordering did not converge suggesting a highly unfavorable energetic spin configuration. The results for FiM XA-type V\({}_{2}\)FeAl agree with the previous reports by Skaftouros et al. [9] and Zhang et al. [8], only with a slightly larger cubic lattice parameter of 595.0 pm, a slightly higher moment on the iron atoms, and lower moment on the vanadium. The L2\({}_{1}\) results predict a larger lattice parameter than Kumar et al. [7], with larger element specific moments, but a lower net moment. In this structure, the iron atoms in the 4\(b\) sites and take on a large moment of 2.10 \(\rm{\mu_{B}}\) which is mostly cancelled by the antiferromagnetically coupled vanadium moments (-0.86 \(\rm{\mu_{B}}\)) in the 8\(c\) sites, giving a small net moment of 0.31 \(\rm{\mu_{B}}\) per unit cell. In the XA-type structure, the Fe atoms occupy the 4\(c\) site previously taken by V, showing a smaller moment of 1.26 \(\rm{\mu_{B}}\). The vanadium atoms remaining in the 4\(d\) sites take on a large moment of 1.88 \(\rm{\mu_{B}}\) coupled ferromagnetically to the Fe moment with the remaining vanadium in the 4\(b\) sites having a small negative moment of -0.29 \(\rm{\mu_{B}}\). This gives the XA-type structure a total moment of 2.90 \(\rm{\mu_{B}}\) per formula unit, which matches the moment of 3 \(\rm{\mu_{B}}\) predicted by the Slater-Pauling rule (V\({}_{2}\)FeAl has 21 valence electrons \(\Rightarrow\)\(M=24-Z=3\)\(\rm{\mu_{B}}\)).
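The valence-electron count entering the quoted Slater-Pauling estimate is simply (our arithmetic, using the generalized Slater-Pauling rule \(M=|Z_{t}-24|\) for full Heusler compounds):

\[Z_{t}=2\times 5\,(\mathrm{V})+8\,(\mathrm{Fe})+3\,(\mathrm{Al})=21,\qquad M=|Z_{t}-24|\,\mu_{\mathrm{B}}=3\,\mu_{\mathrm{B}}\ \text{per formula unit}.\]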
The effects of binary disorder on the magnetic properties of V\({}_{2}\)FeAl have also been explored. Table 4 shows the site and element specific moments as one goes from the XA-type structure toward the L2\({}_{1}\) structure (i.e. starting from the fully-ordered XA-type structure, one sequentially swaps iron atoms in the 4\(c\) sites with vanadium atoms in the 4\(b\) sites until the fully-ordered L2\({}_{1}\) structure is reached.) It is apparent that as disorder is introduced into the system, the magnetization decreases relative to the two fully-ordered structures. A drastic change is observed in the net moment between 50% XA and 25% XA, largely due to the vanadium atoms in the 4\(d\) sites now coupling antiferromagnetically to the iron atoms in the 4\(b\) and 4\(c\) sites. Note that in the XA-type structure, the vanadium atoms in the 4\(d\) sites take on a large moment of 1.88 \(\rm{\mu_{B}}\) coupled ferromagnetically to the iron in the 4\(c\) sites. This moment is much greater than has been previously reported for vanadium in metallic systems. For example, spin-polarised electronic structure calculations on binary FeV reveal a moment of -0.29 \(\rm{\mu_{B}}\)[16], which is consistent with what we observe for vanadium atoms in 4\(b\) and 4\(c\) sites. Polarised neutron scattering studies on bulk bcc Fe-V binary systems have shown that vanadium can have a moment up to approximately -3 \(\rm{\mu_{B}}\), but this is only observed for very dilute concentrations of vanadium (< 1 at.%) [17]. At higher vanadium concentrations the moment decreases dramatically and at 10 at.% a moment of approximately -1 \(\rm{\mu_{B}}\) is observed [18], when then falls monotonically with increasing vanadium concentration, with the alloys becoming non-magnetic at 77 at% vanadium [19]. V\({}_{2}\)FeAl contain 50% vanadium, so one might expect a moment of about 0.4 \(\rm{\mu_{B}}\) or less. We therefore believe that the calculated moment of the vanadium in the 4\(d\) sites (1.88 \(\rm{\mu_{B}}\) in fully-ordered XA structure) to be overestimated in partially-disordered thin-films, and that in reality they have a moment closer in magnitude to those of the 4\(b\) and 4\(c\) sites (0.29 \(\rm{\mu_{B}}\)) or to what is observed in the L2\({}_{1}\) structure (0.86 \(\rm{\mu_{B}}\)). Making this assumption, we find the total moment of the XA-type structure falls in the range from 1.31 \(\rm{\mu_{B}}\) to 1.88 \(\rm{\mu_{B}}\) per formula unit.
Figure 3 (a) shows X-ray diffraction patterns for thin-film V\({}_{2}\)FeAl as a function of deposition temperature. The best quality films are obtained at a temperature of 400 \({}^{\circ}\)C, with this sample showing Laue oscillations about the (004) reflection indicating a large out-of-plane crystalline coherence length, the out-of-plane lattice parameter for this sample was found to be 583.36 pm. All subsequent data presented for ordered V\({}_{2}\)FeAl thin-films is measured on this sample. Reciprocal space mapping of the V\({}_{2}\)FeAl (204) and (224) reflections confirm an epitaxial relationship with the MgO(001) single-crystal substrate (i.e. a = b = 595.55 pm \(\approx\)\(\sqrt{2}\)-a\({}_{\rm MgO}\)). The thin-films therefore show a large tetragonal distortion induced by the substrate with c/a = 0.9795. As can clearly be seen for the sample deposited at 700 \({}^{\circ}\)C, phase separation is beginning to occur with the (004) reflection showing a clear shoulder peak. Based upon the measured (002)/(004) ratio, the binary compound FeAl was determined to be the phase responsible for this peak. Figure 3 (b) shows the powder diffraction pattern of a bulk V\({}_{2}\)FeAl sample, Rietveld analysis confirms that it forms the fully-disordered A2 structure (as indicated by the absence of a (002) reflection) with a cubic lattice parameter of 594.87 pm, which corresponds well with the cubic lattice parameter predicted for the ferrimagnetic configuration of XA-type structured V\({}_{2}\)FeAl. A number of thin-film samples deposited at lower temperatures also showed an unusually high (002)/(004) ratio, indicating the presence of FeAl, which can likely be attributed to poor epitaxial growth of V\({}_{2}\)FeAl. These samples also contain V\({}_{2}\)FeAl in the fully-disordered A2 form which we observe for bulk powders. The disordered sample used for subsequent measurements
\begin{table}
\begin{tabular}{c c c c c c c} \hline Struct. & Ord. & a & \(\rm{\mu_{Fe}}\) & \(\rm{\mu_{V}}\) & \(\rm{\mu_{Al}}\) & \(\rm{\mu_{Tot.}}\) \\ \hline L2\({}_{1}\) & NM & 593.5 & — & — & — & — \\ L2\({}_{1}\) & FM & 602.3 & 1.90 & 0.20 & 0.00 & 2.30 \\ L2\({}_{1}\) & FiM & 602.3 & 2.10 & -0.86 & -0.07 & 0.31 \\ XA & NM & 590.6 & — & — & — & — \\ XA & FiM & 595.0 & 1.26 & 1.88 & -0.29 & 0.05 & 2.90 \\ \hline \end{tabular}
\end{table}
Table 3: Results of density functional theory calculations for the possible fully-ordered structures and magnetic-orderings of V\({}_{2}\)FeAl, showing the cubic lattice parameters (pm), element specific moments (\(\rm{\mu_{B}}\)), and total moments (\(\rm{\mu_{B}}\) per formula unit).
was deposited at 400 \({}^{\circ}\)C.
\(\omega-2\theta\) X-ray diffraction measurements were performed on an ordered V\({}_{2}\)FeAl thin-film to determine the integrated intensities of ten independent reflections. After applying appropriate corrections to account for the illuminated sample area, the Lorentz polarization correction, and geometrical factors, the integrated peak intensities can be compared to those calculated from their structure factor. Rietveld refinement can be performed to determine which structure type best matches the sample as shown in Figure 3 (c). It was determined that the thin-film sample was closer to XA-type structure than L2\({}_{1}\) type structure, with the best fit lying somewhere between XA-type and the partially-disordered B2 structure. However, owing to the similar X-ray scattering cross-sections of vanadium and iron, there are no significant differences in intensity between reflections from the two structures. It is therefore impossible to determine the crystal structure decisively by X-ray diffraction using a laboratory source alone. X-ray reflectivity measurements found the films to have thickness of approximately 15 nm, the ordered sample being 14.70 nm thick and the disordered being 15.78 nm thick. The films were all found to be smooth with RMS roughness values less than 0.5 nm, and the thin-film densities were found to be around 5.80 g cm\({}^{-3}\) (assuming full site occupancy, the nominal density is 5.83 g cm\({}^{-3}\)).
The magnetic properties of both bulk and thin-film samples of V\({}_{2}\)FeAl were measured using SQUID magnetometry. The measured magnetic moments of the thin-films were then corrected as described in the experimental section, and the anhysteretic spontaneous magnetization curves are presented in Figure 4. The ordered thin-film shows a magnetization of approximately 1.0 \(\mu_{\mathrm{B}}\) per formula unit at 5 K, which falls gradually to approximately 0.9 \(\mu_{\mathrm{B}}\) per formula unit at 300 K, indicating a Curie temperature well in excess of room temperature (\(\mathrm{T_{C}}\sim 400\) K). The disordered thin-film shows a magnetization of approximately 0.1 \(\mu_{\mathrm{B}}\) per formula unit at 5 K, about 10% of the ordered film's magnetization, with the magnetization falling to nearly zero at room temperature (\(\mathrm{T_{C}}\sim 330\) K). The bulk sample was found to be a Pauli paramagnet with
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline Structural & \(\mathrm{\SIUnitSymbolMicro}_{\mathrm{Fe}}\) & & \(\mathrm{\SIUnitSymbolMicro}_{\mathrm{V_{1}}}\) & & & \(\mathrm{\SIUnitSymbolMicro}_{\mathrm{VI_{1}}}\) & \(\mathrm{\SIUnitSymbolMicro}_{\mathrm{AI}}\) & \(\mathrm{\SIUnitSymbolMicro}_{\mathrm{Pt_{0}}}\). \\ Ordering & (46 & 4c) & \(\mathrm{\SIUnitSymbolMicro}_{\mathrm{IF_{6}}}\)avg & (46 & 4c) & \(\mathrm{\SIUnitSymbolMicro}_{\mathrm{IV_{1}}}\)avg & (4d) & (4a) & f.u. \\ \hline
100\% XA & — & 1.26 & 1.26 & -0.29 & — & -0.29 & 1.88 & 0.05 & 2.90 \\
75\% XA / 25\% L2\({}_{1}\) & 1.98 & 1.64 & 1.73 & -0.35 & -0.51 & -0.39 & 1.47 & 0.00 & 2.81 \\
50\% XA / 50\% L2\({}_{1}\) & 2.02 & 1.87 & 1.95 & -0.26 & -0.46 & -0.36 & 0.99 & -0.02 & 2.56 \\
25\% XA / 75\% L2\({}_{1}\) & 2.16 & 0.20 & 1.67 & 0.16 & -0.49 & -0.33 & -1.07 & -0.07 & 0.20 \\
100\% L2\({}_{1}\) & 2.10 & — & 2.10 & — & -0.86 & -0.86 & -0.86 & -0.07 & 0.31 \\ \hline \end{tabular}
\end{table}
Table 4: Effects of disorder on the magnetic moments of V\({}_{2}\)FeAl, element specific moments are in \(\mathrm{\SIUnitSymbolMicro}_{\mathrm{II}}\) per atom, total moment is in \(\mathrm{\SIUnitSymbolMicro}_{\mathrm{B}}\) per formula unit.
Figure 2: (a) Total energy difference among the different structures and magnetic orderings of V\({}_{2}\)FeAl as a function of lattice parameter calculated by density functional theory, the star markers represent the lattice constant corresponding to the lowest total energy (b) Density of states for L2\({}_{1}\) structured V\({}_{2}\)FeAl with ferrimagnetic ordering, (c) Density of states for XA-type structured V\({}_{2}\)FeAl with ferrimagnetic ordering.
a volume susceptibility of \(\chi_{\text{v}}=-2.95\times 10^{-4}\).
The XPS spectra of both ordered and disordered thin-films of V\({}_{2}\)FeAl are presented in Figure 5. Comparing the integrated areas of the Fe 2p, V 2p, and Al 2s peaks and applying their respective relative sensitivity factors, we find that the ordered sample contains slightly less vanadium than predicted based upon sample densities measured by XRR, but was still within 20% of the nominal composition V\({}_{2}\)FeAl. The disordered sample was found to contain approximately 17% oxygen, significantly more than the ordered sample which contains approximately 3% oxygen. No significant contaminants other than oxygen were observed for either sample. The vanadium, iron, and
Figure 4: Out-of-plane magnetization loops with the ferromagnetic component of the fitting model for ordered and disordered thin-films of V\({}_{2}\)FeAl at 5 K, 100 K, and 300 K.
Figure 3: (a) X-ray diffraction patterns of V\({}_{2}\)FeAl thin-films grown at various deposition temperatures, (b) Rietveld refinement of V\({}_{2}\)FeAl powder diffraction pattern, (c) Rietveld refinement of several integrated peak intensities captured using grazing-incidence X-ray diffraction on V\({}_{2}\)FeAl thin-films
aluminium peaks all appear to be mostly metallic in nature, indicating both samples are electrically conductive.
Iron L-edge X-ray absorption spectroscopy and X-ray magnetic circular dichroism are shown in Figure 6. The panel on the left shows the XANES signal in blue (with the background signal removed), and the XMCD signal in red. The panel on the right shows the spin and orbital moments (and their sum) obtained using the sum rules. The total iron moment for the ordered film at 10 K is found to be 1.24 \(\mu_{\mathrm{B}}\) per iron atom, which is in good agreement with the predicted moment for the ordered XA-type structure of 1.26 \(\mu_{\mathrm{B}}\). Comparing this to the total moment measured by SQUID of 1.0 \(\mu_{\mathrm{B}}\) per formula unit at 5 K, we infer an average moment of -0.12 \(\mu_{\mathrm{B}}\) per vanadium atom, which is somewhat lower than predicted for vanadium in the 4\(b\) and 4\(c\) sites (-0.29 \(\mu_{\mathrm{B}}\)) and much lower than the 1.88 \(\mu_{\mathrm{B}}\) calculated for the 4\(d\) site. At a temperature of 125 K, the iron moment is reduced to 0.77 \(\mu_{\mathrm{B}}\), whereas the total moment measured by SQUID is approximately 0.9 \(\mu_{\mathrm{B}}\) at 100 K. Conversely, the disordered sample shows an iron moment of 1.14 \(\mu_{\mathrm{B}}\) per atom at 10 K. Given the small total moment of 0.1 \(\mu_{\mathrm{B}}\) at 5 K, the inferred moment on vanadium is -0.65 \(\mu_{\mathrm{B}}\), which decreases with temperature to -0.33 \(\mu_{\mathrm{B}}\) as the iron moment drops to 0.77 \(\mu_{\mathrm{B}}\) at 125 K.
The magnetotransport properties of an ordered V\({}_{2}\)FeAl thin-film patterned into a Hall bar are shown in Figure 7. The longitudinal conductivity is of order \(1\times 10^{3}\,\Omega^{-1}\,\mathrm{cm}^{-1}\), indicating that V\({}_{2}\)FeAl is a bad metal, with the anomalous Hall effect being primarily due to impurity and defect scattering. The temperature dependence of the anomalous Hall conductivity follows the magnetization, falling gradually from 3.64 \(\Omega^{-1}\,\mathrm{cm}^{-1}\) to 2.59 \(\Omega^{-1}\,\mathrm{cm}^{-1}\) in the temperature range 10 K to 300 K.
## 4 Discussion
When grown in bulk and annealed at \(\mathrm{T_{a}}\) = 650 \(\mathrm{\SIUnitSymbolMicro C}\), \(\mathrm{V_{2}FeAl}\) forms in the disordered cubic A2 structure. This is expected given the low enthalpy of formation of both ordered forms predicted by electronic structure calculations, and the entropy of disorder of \(R\ln\Omega\) per mole (R is the universal gas constant, \(\Omega\) the number of possible configurations of the system, in this case \(\Omega=12\)), which reduces the free energy by \(RT_{a}\ln\Omega\) = 19 kJ mol\({}^{-1}\) at the annealing temperature of 923 K. Contrast this to the calculated total energy difference per atom shown in Figure 2, which shows an energy difference of 159 meV between the ferrimagnetic L2\({}_{1}\) and XA-type structures corresponding to a value of approximately 15 kJ mol\({}^{-1}\). The energy associated with disordering the system is therefore of the same order of magnitude as the energy difference between the different structures, indicating that formation of fully-ordered alloys is unlikely.
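Both energy scales quoted in this paragraph can be checked directly (our unit conversion):

\[RT_{a}\ln\Omega=8.314\,\mathrm{J\,mol^{-1}\,K^{-1}}\times 923\,\mathrm{K}\times\ln 12\approx 19\,\mathrm{kJ\,mol^{-1}},\qquad 0.159\,\mathrm{eV}\times 96.49\,\mathrm{kJ\,mol^{-1}\,eV^{-1}}\approx 15\,\mathrm{kJ\,mol^{-1}}.\]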
Bulk \(\mathrm{V_{2}FeAl}\) may be regarded as a high-entropy alloy like CrVTiAl quaternaries that are also Pauli paramagnets [20]. However, when deposited in thin-film form on an appropriate substrate, in our case MgO(001), the induced tetragonal distortion stabilises a more highly ordered form of \(\mathrm{V_{2}FeAl}\), allowing for ferrimagnetic ordering between the iron and vanadium atoms, as opposed to the paramagnetic behaviour observed in bulk samples. These ordered thin-film sam
Figure 5: XPS spectra of the Fe 2p (left), V 2p and O 1s (centre), and Al 2s peaks (right) measured on both ordered (top row) and disordered (bottom row) \(\mathrm{V_{2}FeAl}\) thin films. The peaks shaded in green are metallic in nature, whereas those in red originate from oxygen or oxide species.
ples show a structure which lies somewhere between the fully-ordered XA-type and the partially ordered B2 structure. In all cases, XA-type structure appears to be preferred to L2\({}_{1}\) structure, which corroborates the predicted stability of the different configurations shown in Figure 2. X-ray photoelectron spectroscopy indicates that the disordered thin-films have a much higher concentration of surface oxygen than the ordered films. It is unclear whether this oxygen is the result of contamination during the deposition process, or diffusion from poor quality MgO substrates. The larger fraction of oxide species in the disordered sample means there will be less electrons available in the conduction band as they are bonding with the oxygen,
Figure 6: The X-ray absorption spectrum (with the background removed) and the dichroic XMCD signal measured on an ordered and disordered thin-film of V\({}_{2}\)FeAl (left), and the spin, orbital and total magnetic moments calculated using the sum rules as a function of temperature (right).
Figure 7: (a) Hall conductivity field loops at various temperatures, (b) magnetoresistance at various temperatures, (c) Hall conductivity (blue) and longitudinal conductivity (red) as a function of temperature measured in an applied field of 5.5 T.
and as a result there is a charge transfer between the iron and the other elements, leaving more iron 3d core holes. This explains the approximately 40% more holes in the disordered sample, which was determined from the magnitude of the XANES edge-jumps. The magnetic properties of the thin-film samples are highly dependent upon their ordering, with the ordered thin-film sample showing a saturation magnetization an order of magnitude greater than the disordered film at 5 K as well as a higher Curie temperature.
It is apparent that disorder has a significant effect on the magnetic properties of V\({}_{2}\)FeAl. The ordered thin-film at low temperature shows a total moment of 1.0 \(\mu_{\mathrm{B}}\) per formula unit and an iron moment of 1.24 \(\mu_{\mathrm{B}}\) per atom, from which we infer a vanadium moment of -0.12 \(\mu_{\mathrm{B}}\) per atom. Comparing this to the calculated moments presented in Table 4, we find the iron moment to be in good agreement with the predicted moment for a fully-ordered XA-type structure. At the same time, there is no indication of a large vanadium moment, and the measured moments are also slightly lower than those predicted for vanadium in the 4\(b\) and 4\(c\) sites. This is consistent with our assumption that DFT overestimates the vanadium moments in this system. Conversely, the disordered thin-film sample shows a low-temperature total moment of 0.1 \(\mu_{\mathrm{B}}\) per formula unit and an iron moment of 1.40 \(\mu_{\mathrm{B}}\) per atom, from which we infer a vanadium moment of about -0.65 \(\mu_{\mathrm{B}}\). The larger iron and vanadium moments and lower total moment relative to the ordered sample are consistent with what is observed in the DFT results as disorder is introduced to the system. Comparison with Table 4 suggests that the disordered sample is approximately 25% XA-type structure.
## 5 Conclusion
Spontaneous magnetism in V\({}_{2}\)FeAl depends critically on establishing some ordering of the atomic constituents on the four face-centered cubic crystallographic sites of the Heusler structure. Our electronic structure calculations show that the energy difference between the L2\({}_{1}\) and XA-type ordered structure is approximately 15 kJ mol\({}^{-1}\). Annealed bulk material with a high-entropy, fully-disordered A2 structure is found experimentally and it is a Pauli paramagnet. Tetragonally-distorted thin-films grown on MgO substrates exhibit spontaneous magnetism that depends on the degree of disorder and oxidation. Substrate templated thin-film samples with tetragonal distortion can stabilise a partly XA-ordered V\({}_{2}\)FeAl structure which shows a spontaneous magnetization with moments on both iron and vanadium. The measured iron moments match well with DFT predictions, but the total moment is lower than predicted for a fully-ordered XA-type structure. Based upon comparison to the binary FeV system with similar vanadium concentrations, we attribute this discrepancy to an overestimation of the moments of vanadium in the 4\(d\) sites when they are not fully occupied by vanadium. The formation of the metastable ordered structure appears to be hindered by the presence of oxygen in the thin-film. We believe that in reality the 4\(d\) vanadium moments are closer to those predicted for the 4\(b\) and 4\(c\) sites, resulting in a lower total moment. A drawback of electronic structure calculations based on small superstructures is that to capture the nuances of disorder in imperfectly-ordered alloys they would need to consider much larger unit cells than that of the basic Heusler structure.
## 6 Acknowledgements
The authors acknowledge financial support from Science Foundation Ireland through contract 16/IA/4534 ZEMS. The authors acknowledge the support of Dr. Alevtina Smekhova of Helmholtz-Zentrum Berlin for performing X-ray absorption and dichroism measurements.
|
2306.17660 | A converse theorem for Borcherds products and the injectivity of the
Kudla-Millson theta lift | We prove a converse theorem for the multiplicative Borcherds lift for
lattices of square-free level whose associated discriminant group is
anisotropic. This can be seen as generalization of Bruinier's results in
\cite{Br2}, which provides a converse theorem for lattices of prime level. The
surjectivity of the Borcherds lift in our case follows from the injectivity of
the Kudla-Millson theta lift. We generalize the corresponding results in
\cite{BF1} to the aforementioned lattices and thereby in particular to lattices
which are not unimodular and not of type $(p,2)$. Along the way, we compute the
contribution of both, the non-Archimedean and Archimedean places of the
$L^2$-norm of the Kudla-Millson theta lift. As an application we refine a
theorem of Scheithauer on the non-existence of reflective automorphic products. | Oliver Stein | 2023-06-30T13:45:00Z | http://arxiv.org/abs/2306.17660v2 | # A converse theorem for Borcherds products and the injectivity of the Kudla-Millson theta lift
###### Abstract.
We prove a converse theorem for the multiplicative Borcherds lift for lattices of square-free level whose associated discriminant group is anisotropic. This can be seen as generalization of Bruinier's results in [Br2], which provides a converse theorem for lattices of prime level. The surjectivity of the Borcherds lift in our case follows from the injectivity of the Kudla-Millson theta lift. We generalize the corresponding results in [BF1] to the aforementioned lattices and thereby in particular to lattices which are not unimodular and not of type \((p,2)\). Along the way, we compute the contribution of both, the non-Archimedean and Archimedean places of the \(L^{2}\)-norm of the Kudla-Millson theta lift. As an application we refine a theorem of Scheithauer on the non-existence of reflective automorphic products.
## 1. Introduction
In his celebrated paper [B], Borcherds constructed a multiplicative lifting (referred to as _Borcherds lift_) from the space of vector valued weakly holomorphic modular forms of weight \(1-\frac{p}{2}\) for the Weil representation associated to a lattice \(L\) of type \((p,2)\) and rank \(m\in 2\mathbb{Z}\) to meromorphic modular forms for the orthogonal group of \(L\). The orthogonal modular forms arising this way are of special interest as they allow an infinite product expansion at each cusp and have a divisor being a linear combination of so-called Heegner divisors. Therefore, the following question, initially raised by Borcherds (see [B], Problem 16.10), is of great importance: Let \(F\) be a meromorphic modular form for the orthogonal group of \(L\) whose divisor is a linear combination of Heegner divisors. Is there a weakly holomorphic modular form of weight \(1-\frac{p}{2}\) for the Weil representation attached to the lattice \(L\) whose Borcherds lift is (up to a multiplicative constant) the form \(F\)? This question has been addressed in several papers: It is pointed out in [Br2], that in the case of lattices with signature \((1,2)\) there are orthogonal modular forms which cannot be obtained as a Borcherds lift of some vector valued modular form. However, an affirmative answer for a large class of lattices is given in [Br1] and [Br2]. The most general results in this direction can be found in [Br2]:
* In Theorem 1.2 a converse theorem is given under the assumption that the lattice \(L\) allows a decomposition over \(\mathbb{Z}\) of the form (1.1) \[L=M\oplus U(N)\oplus U,\] where \(M\) is a lattice of type \((p-2,0)\), \(U\) is a hyperbolic plane and \(U(N)\) is a scaled hyperbolic plane, i. e. the hyperbolic plane \(U\) equipped with the quadratic form \(Q_{N}((x_{1},x_{2}))=Nx_{1}x_{2}\).
* The Theorem 1.4 also states a converse theorem. But it does not rely on the assumption that the lattice splits a hyperbolic plane. It only requires that the level of the lattice is a prime number.
In both of the above theorems it is presumed that \(p\geq 3\).
In [BF], a converse theorem under the hypothesis that \(L\) is _unimodular_ is proved. The main portion of the proof shows that the Kudla-Millson theta lift, a map from the space of cusp forms of weight \(\frac{m}{2}\) transforming with the Weil representation into the space of closed differential \(2\)-forms on the modular variety \(X\), is injective (see Theorem 4.9). This is achieved by evaluating the \(L^{2}\)-norm of the Kudla-Millson theta lift, showing that it is non-zero for most choices of \(m\) and the Witt rank \(r_{0}\) of \(L\). The important paper [BF1] establishes a relation between the regularized theta lift (see (1.3)) and the Kudla-Millson theta lift (see [BF1], Theorem 6.1). Based on this relation and a weak converse theorem in [Br1], Theorem 4.23, the surjectivity of the Borcherds lift is derived.
The purpose of this paper is threefold.
* We give a further converse theorem for the Borcherds lift. This theorem may be seen as a generalization of Bruinier's Theorem 1.4 in [Br2]. Our converse theorem does not rely on the decomposition (1.1). However, it assumes that the level of the underlying lattice \(L\) is square-free and the associated discriminant group \(L^{\prime}/L\) is anisotropic. This condition means that each \(p\)-group of \(L^{\prime}/L\) is of the form (2.1). Although we have a restriction on the structure of the \(p\)-groups of \(L^{\prime}/L\), our results may be interpreted as a generalisation of Theorem 1.4 in [Br2] to lattices with square-free level (to the best of my knowledge such a result has not yet been established).
* As a byproduct, but probably interesting in its own right, we show the injectivity of the Kudla-Millson theta lift associated to lattices which are neither unimodular nor necessarily of type \((p,2)\). This generalizes the results in [BF] and [Zu].
* As an application we refine a result of Scheithauer (Theorem 12.3 in [Sch]), which states that there are only finitely many reflective and symmetric automorphic products of singular weight. As far as I am aware, most of the recent papers on the classification of reflective modular forms are based on Bruinier's converse theorems in [Br2] and thereby rely either on the assumption that the involved lattice splits \(U\oplus U(N)\) over \(\mathbb{Z}\) or has prime number level. It is conceivable that these assumptions can be weakened by employing the converse theorem of the present paper. I hope to come back to these topics in the future.
Let us explain the content of the paper in some more detail. Let \((L,(\cdot,\cdot))\) be a non-degenerate even lattice of type \((p,2)\) equipped with a bilinear form \((\cdot,\cdot)\) and the associated quadratic form \(x\mapsto Q(x)=\frac{1}{2}(x,x)\). We denote the rank of \(L\) with \(m\) and assume in the whole paper that \(m\) is even. Moreover, let \(V(\mathbb{Q})\) be the rational quadratic space \(L\otimes\mathbb{Q}\), \(L^{\prime}\) the dual lattice of \(L\) and \(A=L^{\prime}/L\) be the associated discriminant group. Since \(L\) is even, the quadratic form \(Q\) induces a \(\mathbb{Q}/\mathbb{Z}\)-valued quadratic form on \(A\), which thereby becomes also a quadratic module. The (finite) Weil representation \(\rho_{A}\) is a unitary representation of \(\operatorname{SL}_{2}(\mathbb{Z})\) on the group ring \(\mathbb{C}[A]\),
\[\rho_{A}:\operatorname{SL}_{2}(\mathbb{Z})\longrightarrow\operatorname{GL}( \mathbb{C}[A]).\]
We denote the standard basis of \(\mathbb{C}[A]\) by \((\mathfrak{e}_{\mu})_{\mu\in A}\). A weakly holomorphic modular form of weight \(\kappa\in\mathbb{Z}\) for \(\operatorname{SL}_{2}(\mathbb{Z})\) of type \(\rho_{A}\) is a holomorphic function \(f\) on the upper half plane \(\mathbb{H}\) which satisfies \(f(\gamma\tau)=(c\tau+d)^{\kappa}\rho_{A}(\gamma)f(\tau)\) for all \(\gamma=\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\in\operatorname{SL}_{2}(\mathbb{Z})\). Additionally, \(f\) is meromorphic at the cusp \(\infty\) (see Section 3 for further details). We denote the space of these forms by \(M^{!}_{\kappa,A}\). A modular form of this type is called holomorphic if it is holomorphic instead of meromorphic at \(\infty\). We write \(M_{\kappa,A}\) for the space of all such modular forms and
use the symbol \(S_{\kappa,A}\) for the subspace of cusp forms. Finally, let \(V(\mathbb{R})=L\otimes\mathbb{R}\) and \(D\) the Grassmannian of \(2\)-dimensional negative definite subspaces in \(V(\mathbb{R})\).
A vital step in the proof of the surjectivity of the Borcherds lift is the proof of the injectivity of the Kudla-Millson theta lift \(\Lambda\). It maps a cusp form \(f\in S_{\kappa,A}\) of weight \(\kappa=\frac{m}{2}\) to a closed differential \(2\)-form on \(D\). More precisely, we have
\[\Lambda:S_{\kappa,A}\longrightarrow\mathcal{Z}^{2}(D),\quad f\mapsto\Lambda(f)\]
with
\[\Lambda(f)(z)=\int_{\operatorname{SL}_{2}(\mathbb{Z})\backslash\mathbb{H}} \langle f(\tau),\Theta_{A}(\tau,z,\varphi_{\operatorname{KM}})\rangle\frac{ dudv}{v^{2}}\]
with \(\tau=u+iv\in\mathbb{H}\). Here \(\langle\cdot,\cdot\rangle\) is the standard scalar product on the group ring,
\[\varphi_{\operatorname{KM}}\in\left[S(V)\otimes\mathcal{A}^{2}(D)\right]^{O (V(\mathbb{R}))}, \tag{1.2}\]
is a Schwartz _form_ constructed by Kudla and Millson in [KM1] and \(\Theta_{A}\) is a theta series associated to \(\varphi_{\operatorname{KM}}\), where \(\mathcal{A}^{2}(D)\) means the space of smooth differential \(2\)-forms on \(D\) and \(\mathcal{Z}^{2}(D)\) the subspace of closed \(2\)-forms. See Section 4 for the definition of the Kudla-Millson form and Section 7 for the definition of \(\Theta_{A}\). In [BF] the injectivity of \(\Lambda\) is proved in a more general setting:
* Instead of \(\varphi_{\operatorname{KM}}\) a more generalized Schwartz form \(\varphi_{q,l}\) due to [FM] is considered.
* An adelic set-up for the involved Eisenstein series and theta series is utilized.
* Instead of working with the Grassmannian \(D\) the authors work with a Shimura variety \(X_{K}\) of orthogonal type, whose complex points are given by \[X_{K}(\mathbb{C})=H(\mathbb{Q})\backslash(D\times H(\mathbb{A}_{f}))/K,\] where \(H=\operatorname{GSpin}(V)\), \(\mathbb{A}_{f}\) the finite adeles of \(\mathbb{Q}\) and \(K\subset H(\mathbb{A}_{f})\) is a compact open subgroup which leaves \(L\) stable and acts trivially on \(A\).
However, the proof is given under the assumption that \(L\) is _unimodular_. Very recently, Zuffetti proved in [Zu] the injectivity of \(\Lambda\) for _non-unimodular_ lattices but of signature \((p,2)\). In the present paper, we prove the injectivity of \(\Lambda\) in the same general setting, but we drop the aforementioned restrictions on \(L\) and assume only that the associated discriminant group \(A\) is anisotropic. To this end, we produce for all relevant results in Section 4 of [BF] a corresponding result in the vector valued setting. This includes a vector valued version of the Siegel-Weil formula, which can be easily deduced from the classical Siegel-Weil formula in [KR1] and [KR2]. Apart from Proposition 4.7 and Theorem 4.9, the transfer to the vector valued approach is mostly straightforward. The main difference is that we have to work with a special family of Schwartz-Bruhat functions \(\varphi_{\mu}\in S(V(\mathbb{A}_{f})),\ \mu\in A,\) on the finite adeles \(\mathbb{A}_{f}\) (see (5.13)). It takes a lot more effort to obtain the analogue of Proposition 4.7 and Theorem 4.9 in [BF]. The former result requires a vector valued version of the classical doubling formula (see [Bo]), which has been given in a separate paper by the author (see [St2]). Theorem 4.9 relies on the existence of the standard \(L\)-function associated to a Hecke eigenform \(f\) of all Hecke operators \(T(n^{2})\). Such an \(L\)-function has been defined in another paper by the author and the usual fundamental properties have been proved therein (see [St3]). Thus, the following theorem and consequently this paper can be seen as a result of several papers of the author ([St1]-[St3]).
**Theorem 1.1**.: _[Cor. 7.8] Let \(m>\max(6,2l-2,3+r_{0})\) and \(s_{0}=(m-3)/2\). Further, let \(p>1\) and \(q+l\) even and assume that \(A\) is anisotropic. Then the theta lift \(\Lambda:S_{\kappa,A}\to\mathcal{Z}^{q}(X_{K},\widetilde{\operatorname{Sym}}^{l }(V))\) is injective._
Based on Theorem 1.1, subsequently the surjectivity of the Borcherds lift is established. The main result regarding the Borcherds lift is Theorem 13.3 of [B]. It states that, given a weakly holomorphic modular form \(g\) of weight \(\kappa=1-\frac{p}{2}\) and type \(\rho_{A}\) with Fourier coefficients \(c(\mu,n)\) with \(\mu\in A\) and \(n\in\mathbb{Z}+Q(\mu)\) (and \(c(\mu,n)\in\mathbb{Z}\) for \(n<0\)), there exists a meromorphic modular form \(\Psi_{L}\) for some subgroup of the orthogonal group of \(L\) with
* weight \(\dfrac{c(0,0)}{2}\) and
* a divisor given by \[\sum_{\mu\in A}\sum_{\begin{subarray}{c}n\in\mathbb{Z}+Q(\mu)\\ n<0\end{subarray}}c(\mu,n)Z(\mu,n),\] where \(Z(\mu,n)\) is given by (8.14).
The modular form \(\Psi_{L}\) is defined by means of the regularized theta lift
\[\Phi_{L}(g)(\tau,z)=\int_{\operatorname{SL}_{2}(\mathbb{Z})\backslash\mathbb{ H}}^{\operatorname{reg}}\langle g(\tau),\Theta_{A}(\tau,z,\varphi_{\infty}^{p,2}) \rangle\dfrac{dudv}{v^{2}}, \tag{1.3}\]
where \(\Theta_{A}\) is the theta series associated to the Gaussian \(\varphi_{\infty}^{p,2}\) of signature \((p,2)\) and \(\int_{\operatorname{SL}_{2}(\mathbb{Z})\backslash\mathbb{H}}^{\operatorname{reg}}\) is a regularization of the integral \(\int_{\operatorname{SL}_{2}(\mathbb{Z})\backslash\mathbb{H}}\) according to [B]. The proof of our converse theorem makes use of the same approach as the one taken for Cor. 1.7 in [BF]. It is based on Thm. 6.1 in [BF1] and Thm. 4.23 in [Br1]. We stick with the adelic setup we employed to establish the injectivity of the Kudla-Millson theta lift and generalize both of the aforementioned theorems to this setting. Based on these results we obtain
**Corollary 1.2** (Cor. 8.8).: _Let \(m\) be the even rank of the lattice \(L\) satisfying \(m>\max(6,3+r_{0})\) and \(m\equiv 0\bmod 4\). Moreover, let the associated discriminant form \(A\) be anisotropic and \(F:D^{+}\to\mathbb{C}\) be a meromorphic modular form of weight \(r\) and character \(\chi\) (of finite order) for the discriminant kernel \(\Gamma(L)\) whose divisor is a linear combination of Heegner divisors. Then there exists a weakly holomorphic modular form \(f\in M^{!}_{1-\frac{p}{2},A^{-}}\) such that \(F\) is, up to a constant multiple, the Borcherds lift \(\Psi_{L}\) of \(f\)._
## 2. Preliminaries and notations
In this section we fix some notation, which will be used this way throughout the paper unless it is stated otherwise, and recall some basic facts, which will be vital for the rest of the paper. For details the reader may consult [BF] and [BF1].
We start with a non-degenerate lattice \((L,(\cdot,\cdot))\) of type \((p,q)\) and even rank \(m=p+q\). The signature of \(L\) is defined by \(\operatorname{sig}(L)=p-q\). Associated to the bilinear form \((\cdot,\cdot)\) we have the quadratic form \(Q(\cdot)=\frac{1}{2}(\cdot,\cdot)\). We assume that \(L\) is even, i. e. \(Q(x)\in\mathbb{Z}\) for all \(x\in L\). Let
\[L^{\prime}:=\{x\in V=L\otimes\mathbb{Q}\;:\;(x,y)\in\mathbb{Z}\quad\text{ for all }\;y\in L\}\]
be the dual lattice of the even lattice \(L\). Since \(L\subset L^{\prime}\), the elementary divisor theorem implies that \(L^{\prime}/L\) is a finite group. We denote this group by \(A\). The modulo \(1\) reduction of both the bilinear form \((\cdot,\cdot)\) and the associated quadratic form defines a \(\mathbb{Q}/\mathbb{Z}\)-valued bilinear form \((\cdot,\cdot)\) with corresponding \(\mathbb{Q}/2\mathbb{Z}\)-valued quadratic form on \(A\). We call \(A\) combined with \((\cdot,\cdot)\)
a discriminant group or a quadratic module. By \(\operatorname{sig}(A)=\operatorname{sig}(L)=p-q\) we denote the signature of \(A\). We call \(A\) anisotropic if \(Q(\mu)=0\) holds only for \(\mu=0\). It is well known that any discriminant group can be decomposed into a direct sum of quadratic modules of the following form (cf. [BEF])
\[\begin{split}&\mathcal{A}^{t}_{p^{k}}=\left(\mathbb{Z}/p^{k}\mathbb{Z},\ \frac{tx^{2}}{p^{k}}\right),\ p>2,\quad\mathcal{A}^{t}_{2^{k}}=\left(\mathbb{Z}/2^{k}\mathbb{Z},\ \frac{tx^{2}}{2^{k+1}}\right),\\ &\mathcal{B}_{2^{k}}=\left(\mathbb{Z}/2^{k}\mathbb{Z}\oplus\mathbb{Z}/2^{k}\mathbb{Z};\ \frac{x^{2}+xy+y^{2}}{2^{k}}\right),\quad\mathcal{C}_{2^{k}}=\left(\mathbb{Z}/2^{k}\mathbb{Z}\oplus\mathbb{Z}/2^{k}\mathbb{Z};\ \frac{xy}{2^{k}}\right).\end{split} \tag{2.1}\]
The structure of anisotropic finite quadratic modules is well known: In particular, for an odd prime \(p\) each \(p\)-group \(A_{p}\) of an anisotropic discriminant form \(A\) can be written either as
\[\mathcal{A}^{t}_{p}\text{ or as a direct sum }\mathcal{A}^{t}_{p}\oplus \mathcal{A}^{1}_{p}. \tag{2.2}\]
For further details we refer to [BEF].
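To make the anisotropy condition concrete, the following short Python sketch (a purely illustrative aside, not used anywhere in the arguments of this paper) tests anisotropy of the modules \(\mathcal{A}^{t}_{p^{k}}\) from (2.1) by brute force; the parameters \(p=3,5\), \(k=1,2\) and \(t=1,2\) are ad hoc choices.

```python
def is_anisotropic_A(p, k, t):
    """Brute-force anisotropy test for A_{p^k}^t = (Z/p^k Z, Q(x) = t*x^2/p^k), p odd.

    Q(x) vanishes modulo 1 exactly when p^k divides t*x^2, so the module is
    anisotropic iff this happens only for x = 0.
    """
    n = p ** k
    return all((t * x * x) % n != 0 for x in range(1, n))


if __name__ == "__main__":
    print(is_anisotropic_A(3, 1, 1))  # True: A_3^1 is anisotropic
    print(is_anisotropic_A(3, 2, 1))  # False: x = 3 gives Q(3) = 9/9 = 1, an integer
    print(is_anisotropic_A(5, 1, 2))  # True
```

In accordance with (2.2), the test succeeds for \(k=1\) (and \(p\nmid t\)), while every module with \(k\geq 2\) contains the isotropic element \(x=p^{k-1}\); this is essentially why the anisotropy assumption on \(A\) forces the level of \(L\) to be square-free.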
We put \(V=V(\mathbb{Q})=L\otimes\mathbb{Q}\) and choose an orthogonal basis \(\{v_{i}\}\) of \(V(\mathbb{R})=L\otimes\mathbb{R}\) such that \((v_{\alpha},v_{\alpha})=1,\ \alpha=1,\ldots,p\) and \((v_{\mu},v_{\mu})=-1\) for \(\mu=p+1,\ldots,m\). For \(x\in V\) we may write
\[x=\sum_{\alpha=1}^{p}x_{\alpha}v_{\alpha}+\sum_{\mu=p+1}^{m}x_{\mu}v_{\mu}\]
in terms of its coordinates with respect to the basis \(\{v_{i}\}\) such that
\[(x,x)=\sum_{\alpha=1}^{p}x_{\alpha}^{2}-\sum_{\mu=p+1}^{m}x_{\mu}^{2}. \tag{2.3}\]
By \(r_{0}\in\mathbb{Z}\) we mean the _Witt index_ of \(V\), i. e. the dimension of a maximal isotropic subspace of \(V\). Now pick the fixed subspace
\[z_{0}=\operatorname{span}\{v_{\mu}\mid\mu=p+1,\ldots,m\} \tag{2.4}\]
and let
\[D=\{z\subset V(\mathbb{R})\mid\dim(z)=q,\ (\cdot,\cdot)_{|z}<0\} \tag{2.5}\]
be the Grassmannian of oriented negative definite \(q\)-planes in \(V(\mathbb{R})\). It is well known that \(D\) is a real analytic manifold. We denote by
\[\mathcal{A}^{q}(D)\text{ and }\mathcal{Z}^{q}(D)\]
the smooth differential \(q\)-forms and the smooth closed differential \(q\)-forms, respectively, on \(D\). Note that \(D\) has two connected components
\[D=D^{+}\sqcup D^{-}\]
given by the two possible choices of an orientation. Clearly, the orthogonal group \(G(\mathbb{R})=O(V(\mathbb{R}))\) acts on \(D\). We denote by \(K_{\infty}\) the subgroup of \(G(\mathbb{R})\) which stabilizes \(z_{0}\). As \(G(\mathbb{R})\) acts transitively on \(D\), we have \(G(\mathbb{R})/K_{\infty}\cong D\). Note that \(K_{\infty}\cong O(p)\times O(q)\). Let \(S(V(\mathbb{R}))\) be the space of Schwartz functions on \(V(\mathbb{R})\) and \(G^{\prime}(1)(\mathbb{R})=\operatorname{SL}_{2}(\mathbb{R})\). The Weil representation \(\omega_{\infty}\) in the Schrödinger model is a representation of \(G^{\prime}(1)(\mathbb{R})\times G(\mathbb{R})\) on \(S(V(\mathbb{R}))\) (see e. g. [We1], [Ku3] or [St1]). The group \(G(\mathbb{R})\) acts on \(S(V(\mathbb{R}))\) in a natural way by
\[\omega(g)\varphi(x)=\varphi(g^{-1}x). \tag{2.6}\]
It suffices to describe the action of \(G^{\prime}(1)(\mathbb{R})\) on its generators:
\[\begin{split}&\omega_{\infty}(m(a))\varphi(x)=a^{m/2}\varphi(ax) \text{ for }a>0\text{ and }m(a)=\left(\begin{smallmatrix}a&0\\ 0& a^{-1}\end{smallmatrix}\right),\\ &\omega_{\infty}(n(b))\varphi(x)=e^{\pi ib(x,x)}\varphi(x)\text{ with }n(b)=\left(\begin{smallmatrix}1&b\\ 0&1\end{smallmatrix}\right)\text{ and }\\ &\omega_{\infty}(S)\varphi(x)=e(-\operatorname{sig}(L)/8)\widehat{ \varphi}(-x)\text{ with }S=\left(\begin{smallmatrix}0&-1\\ 1&0\end{smallmatrix}\right),\end{split} \tag{2.7}\]
where \(\widehat{\varphi}(y)=\int_{V(\mathbb{R})}\varphi(t)e^{2\pi i(t,y)}dt\) is the Fourier transform of \(\varphi\). Note that \(\omega_{\infty}\) can be defined in a much more general adelic setting as a representation of \(G^{\prime}(n)(\mathbb{A})\times G(\mathbb{A})\) on the space of Schwartz-Bruhat functions \(S(V(\mathbb{A})^{n})\), where \(\mathbb{A}\) is the ring of adeles of \(\mathbb{Q}\) and \(G^{\prime}(n)(\mathbb{A})=\operatorname{Sp}(n,\mathbb{A})\) (see the literature cited above). We denote this more general Weil representation with \(\omega\) and use the notation \(K^{\prime}(n)=K^{\prime}(n)_{\infty}\prod_{p}K^{\prime}(n)_{p}\) with \(K^{\prime}(n)_{p}=G^{\prime}(n)(\mathbb{Z}_{p})\) for the maximal compact subgroup in \(G^{\prime}(n)(\mathbb{A})\). Here,
\[K^{\prime}(n)_{\infty}=\left\{k=\begin{pmatrix}a&b\\ -b&a\end{pmatrix}\in\operatorname{Sp}(n,\mathbb{R})\;\middle|\;\mathbf{k}=a+ib \in U(n)\right\}, \tag{2.8}\]
where \(U(n)\) means the unitary group.
Theta series will play a vital role in this paper. Associated to \(\varphi\in S(V(\mathbb{R}))\) and \(\mu\in A\) we define
\[\vartheta(g^{\prime},\varphi,\mu)=\sum_{\lambda\in\mu+L}\omega_{\infty}(g^{ \prime})\varphi(\lambda). \tag{2.9}\]
As \(\omega_{\infty}(g^{\prime})\varphi\) is rapidly decreasing, \(\vartheta(g^{\prime},\varphi,\mu)\) is well defined. The standard majorant \((\cdot,\cdot)_{z}\) associated to \(z\in D\) is defined by
\[(x,x)_{z}=(x_{z^{\perp}},x_{z^{\perp}})-(x_{z},x_{z}), \tag{2.10}\]
where \(x=x_{z^{\perp}}+x_{z}\) is the decomposition of \(x\) with respect to \(V=z^{\perp}+z\). Now let the standard Gaussian on \(V(\mathbb{R})\times D\) be
\[\varphi_{\infty}^{p,q}(x,z)=\exp\left(-\pi(x,x)_{z}\right), \tag{2.11}\]
\((p,q)\) emphasizing the type of \(Q\). Since \((x,x)_{z}\) is positive definite, the Gaussian is rapidly decreasing and thus an element of \(S(V(\mathbb{R}))\). More generally, for \(\mathbf{x}=(x_{1},\ldots,x_{n})\in V(\mathbb{R})^{n}\)
\[\varphi_{\infty}^{p,q}(\mathbf{x},z)=\exp\left(-\pi\sum_{i=1}^{n}(x_{i},x_{i}) _{z}\right) \tag{2.12}\]
is the standard Gaussian on \(V(\mathbb{R})^{n}\times D\).
We choose for \(g^{\prime}\in G^{\prime}(1)(\mathbb{R})\) the matrix \(g_{\tau}=n(u)m(\sqrt{v})\). It moves the base point \(i\) to the element \(\tau=u+iv\) in the upper half plane \(\mathbb{H}\). With this choice of \(g^{\prime}\) and \(\varphi\) we can define a Siegel theta function by means of \(\vartheta\):
\[\begin{split}\theta(\tau,z,\mu)&=\vartheta(g_{\tau},\varphi_{\infty}^{p,q}(\cdot,z),\mu)\\ &=\sum_{\lambda\in L+\mu}\varphi_{\infty}^{p,q}(\lambda,\tau,z).\end{split} \tag{2.13}\]
with
\[\begin{split}\varphi_{\infty}^{p,q}(\lambda,\tau,z)& =v^{q/2}\exp\left(\pi i((\lambda,\lambda)u+(\lambda,\lambda)_{z }iv)\right)\\ &=v^{q/2}e\left(Q(\lambda_{z^{\perp}})\tau+Q(\lambda_{z}) \overline{\tau}\right)\end{split} \tag{2.14}\]
The last equation of (2.13) can be obtained immediately by employing the explicit formulas of the Weil representation (cf.(2.7)). It can be shown (see e. g. [B], Theorem 4.1) that the vector valued theta series
\[\Theta_{A}(\tau,z,\varphi_{\infty}^{p,q})=\sum_{\mu\in A}\theta(\tau,z,\mu)\mathfrak{e}_{\mu} \tag{2.15}\]
is a real-analytic function with respect to both variables \(\tau\) and \(z\). Additionally, it transforms with respect to the modular group
\[\Gamma=\operatorname{SL}_{2}(\mathbb{Z})\]
like a vector valued modular form of weight \((p-q)/2\) for the finite Weil representation (see Section 3 for the definition of vector valued modular forms) and is invariant in \(z\) under the action of the discriminant kernel of the orthogonal group of \(L\). Later in the paper, we will utilize the notation \(\mathbb{1}_{M}\) for the characteristic function of a set \(M\) and we denote by \(\iota\) the embedding
\[\iota:G^{\prime}(1)\times G^{\prime}(1)\to G^{\prime}(2),\quad\iota\left( \begin{pmatrix}a&b\\ c&d\end{pmatrix},\begin{pmatrix}a^{\prime}&b^{\prime}\\ c^{\prime}&d^{\prime}\end{pmatrix}\right)=\begin{pmatrix}a&0&b&0\\ 0&a^{\prime}&0&b^{\prime}\\ c&0&d&0\\ 0&c^{\prime}&0&d^{\prime}\end{pmatrix}. \tag{2.16}\]
In some sections of the present work the variable \(p\) has two meanings. On the one hand, it is part of the type \((p,q)\) of the lattice \(L\). On the other hand, \(p\) stands for a prime parametrizing a non-Archimedean place. However, there is no danger of confusion since the meaning of \(p\) is always clear from the context.
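Before turning to vector valued modular forms, the definitions (2.13)–(2.15) can be made concrete by a small numerical experiment. The following Python sketch (an illustrative aside with ad hoc data, not part of the paper) evaluates a truncated Siegel theta function for the toy lattice with Gram matrix \(\operatorname{diag}(2,-2)\) of type \((1,1)\), with \(z\) the negative line spanned by the second basis vector, and checks the elementary transformation \(\theta(\tau+1,z,\mu)=e(Q(\mu))\theta(\tau,z,\mu)\), which holds term by term; note that the toy lattice does not satisfy the running assumptions imposed later (its level is even).

```python
import cmath

# Toy lattice of type (1, 1): Gram matrix diag(2, -2), so Q(x1, x2) = x1^2 - x2^2.
# Its dual lattice is (1/2)Z x (1/2)Z and A = L'/L has order 4.  We take z to be the
# negative line spanned by the second basis vector, so that Q(lambda_{z^perp}) = l1^2
# and Q(lambda_z) = -l2^2 in the notation of (2.14).

def theta(tau, mu, cutoff=25):
    """Truncated Siegel theta function theta(tau, z, mu) of (2.13)/(2.14) with q = 1."""
    total = 0.0
    for a in range(-cutoff, cutoff + 1):
        for b in range(-cutoff, cutoff + 1):
            l1, l2 = mu[0] + a, mu[1] + b
            total += cmath.exp(2j * cmath.pi * (l1 * l1 * tau - l2 * l2 * tau.conjugate()))
    return tau.imag ** 0.5 * total


if __name__ == "__main__":
    tau = complex(0.3, 1.2)
    mu = (0.5, 0.0)                              # Q(mu) = 1/4 modulo 1
    lhs = theta(tau + 1, mu)
    rhs = cmath.exp(2j * cmath.pi * 0.25) * theta(tau, mu)
    print(abs(lhs - rhs))                        # of the order of the rounding error
```

The factor \(e(Q(\mu))\) appearing here is exactly the action of \(\rho_{A}(T)\) in Definition 3.1 below; the full weight \((p-q)/2\) transformation behaviour under \(S\) would require Poisson summation and is not checked by this naive truncation.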
## 3. The finite Weil representation and weak Maass forms
In this section we recapitulate the background material needed to define harmonic weak Maass forms as introduced e. g. in [BF1]. We assume the conventions and notation from Section 2. In [Br1] Bruinier explained how to extend the Borcherds lift to this type of modular forms. They can be seen as a natural generalization of weakly holomorphic modular forms transforming according to the finite Weil representation. Later on, weak Maass forms will play a crucial role in the proof of the surjectivity of the Borcherds lift.
Recall from Section 2 that the discriminant group \(A=L^{\prime}/L\) equipped with the modulo \(1\) reduction of \(Q\) defines a quadratic module. Associated to \(A\) there is a representation \(\rho_{A}\) of \(\Gamma\) on the group ring \(\mathbb{C}[A]\), which we call the "finite" Weil representation. We denote the standard basis of \(\mathbb{C}[A]\) by \(\{\mathfrak{e}_{\mu}\}_{\mu\in A}\). The standard scalar product on \(\mathbb{C}[A]\) is given by
\[\left\langle\sum_{\mu\in A}a_{\mu}\mathfrak{e}_{\mu},\sum_{\mu\in A}b_{\mu} \mathfrak{e}_{\mu}\right\rangle=\sum_{\mu\in A}a_{\mu}\overline{b_{\mu}}. \tag{3.1}\]
Note that the group rings \(\mathbb{C}[A^{2}]\) and \(\mathbb{C}[A]\) are related by the following isomorphism
\[\mathbb{C}[A^{2}]\longrightarrow\mathbb{C}[A]\otimes\mathbb{C}[A],\quad \mathfrak{e}_{(\mu,\nu)}\mapsto\mathfrak{e}_{\mu}\otimes\mathfrak{e}_{\nu}. \tag{3.2}\]
This will be used later in Section 7.
As \(\Gamma\) is generated by the matrices
\[S=\begin{pmatrix}0&-1\\ 1&0\end{pmatrix},\quad T=\begin{pmatrix}1&1\\ 0&1\end{pmatrix}, \tag{3.3}\]
it is sufficient to define \(\rho_{A}\) by the action on these generators.
**Definition 3.1**.: The representation \(\rho_{A}\) of \(\Gamma\) on \(\mathbb{C}[A]\), defined by
\[\begin{split}\rho_{A}(T)\mathfrak{e}_{\mu}&:=e(Q(\mu))\mathfrak{e}_{\mu},\\ \rho_{A}(S)\mathfrak{e}_{\mu}&:=\frac{e(-\operatorname{sig}(A)/8)}{|A|^{1/2}}\sum_{\nu\in A}e(-(\nu,\mu))\mathfrak{e}_{\nu},\end{split} \tag{3.4}\]
is called the Weil representation. Note that \(\rho_{A}\) can be identified with a subrepresentation of the Weil representation \(\omega\). We denote the dual representation of \(\rho_{A}\) by \(\rho_{A}^{*}\). It is obtained from \(\rho_{A}\) by passing from \((L,(\cdot,\cdot))\) to \((L,-(\cdot,\cdot))\) or simply by complex conjugation when considered as matrices. Thus \(\rho_{A}^{*}=\rho_{A^{-}}=\overline{\rho_{A}}\), where \(A^{-}\) means the discriminant group \(A\) equipped with \(-(\cdot,\cdot)\).
Let \(Z=\left(\begin{smallmatrix}-1&0\\ 0&-1\end{smallmatrix}\right)\). The action of \(Z\) is given by
\[\rho_{A}(Z)(\mathfrak{e}_{\mu})=e(-\operatorname{sig}(A)/4)\mathfrak{e}_{- \mu}. \tag{3.5}\]
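The formulas (3.4) and (3.5) are easy to verify numerically for small discriminant groups. The following Python sketch (again only an illustration with an ad hoc example, not part of the paper) builds the matrices of \(\rho_{A}(T)\), \(\rho_{A}(S)\) and \(\rho_{A}(Z)\) for \(A=\mathbb{Z}/3\mathbb{Z}\) with \(Q(x)=x^{2}/3\), whose signature is \(2\) modulo \(8\), and checks the relations \(\rho_{A}(S)^{2}=\rho_{A}(Z)=(\rho_{A}(S)\rho_{A}(T))^{3}\) as well as unitarity.

```python
import numpy as np

def e(x):
    return np.exp(2j * np.pi * x)

# Ad hoc example: A = Z/3Z with Q(x) = x^2/3, bilinear form (x, y) = 2xy/3, sig(A) = 2 mod 8.
n, sig = 3, 2
T = np.diag([e(mu * mu / 3) for mu in range(n)])                         # rho_A(T), cf. (3.4)
S = (e(-sig / 8) / np.sqrt(n)) * np.array(
    [[e(-2 * nu * mu / 3) for mu in range(n)] for nu in range(n)])        # rho_A(S), cf. (3.4)
Z = e(-sig / 4) * np.array(
    [[1.0 if (nu + mu) % n == 0 else 0.0 for mu in range(n)] for nu in range(n)])  # cf. (3.5)

print(np.allclose(S @ S, Z))                               # rho_A(S)^2 = rho_A(Z)
print(np.allclose(np.linalg.matrix_power(S @ T, 3), Z))    # (ST)^3 = Z in SL_2(Z)
print(np.allclose(S.conj().T @ S, np.eye(n)))              # rho_A is unitary
```

That all three checks succeed reflects the fact that for even signature the finite Weil representation is a genuine unitary representation of \(\operatorname{SL}_{2}(\mathbb{Z})\).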
We denote by \(N\) the level of the lattice \(L\). It is the smallest positive integer such that \(NQ(\lambda)\in\mathbb{Z}\) for all \(\lambda\in L^{\prime}\). For the rest of this paper we suppose that \(N\) is _odd_. For later use we introduce a Gauss sum associated to the discriminant group \(A\). For an integer \(d\) we write
\[g_{d}(A)=\sum_{\mu\in A}e(dQ(\mu)) \tag{3.6}\]
and put \(g(A)=g_{1}(A)\). By Milgram's formula we have \(g(A)=\sqrt{|A|}e(\operatorname{sig}(A)/8)\).
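As a small numerical companion to (3.6) and Milgram's formula (illustrative only; the example \(A=\mathbb{Z}/5\mathbb{Z}\) with \(Q(x)=x^{2}/5\) is an ad hoc choice), one may evaluate the Gauss sums directly:

```python
import cmath

def e(x):
    return cmath.exp(2j * cmath.pi * x)

def gauss_sum(d, n, t=1):
    """The Gauss sum g_d(A) of (3.6) for A = Z/nZ with Q(x) = t*x^2/n."""
    return sum(e(d * t * x * x / n) for x in range(n))


if __name__ == "__main__":
    n = 5
    g = gauss_sum(1, n)
    print(abs(g) - n ** 0.5)                    # ~ 0: |g(A)| = sqrt(|A|)
    print(cmath.phase(g) * 8 / (2 * cmath.pi))  # ~ 0: sig(A) = 0 modulo 8 by Milgram
    print(gauss_sum(n, n))                      # ~ 5: g_p(A) = |A| for this module
```

The last value illustrates the identity \(g_{p^{r}}(A_{p})=|A_{p}|\), which enters the proof of Theorem 6.1 below.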
We now define vector valued modular forms of type \(\rho_{A}\). With respect to the standard basis of \(\mathbb{C}[A]\) a function \(f:\mathbb{H}\to\mathbb{C}[A]\) can be written in the form
\[f(\tau)=\sum_{\mu\in A}f_{\mu}(\tau)\mathfrak{e}_{\mu}.\]
The following operator generalises the usual Petersson slash operator to the space of all those functions. For \(\kappa\in\mathbb{Z}\) we put
\[f\mid_{\kappa,A}\gamma=j(\gamma,\tau)^{-\kappa}\rho_{A}(\gamma)^{-1}f(\gamma \tau), \tag{3.7}\]
where
\[j(g,\tau)=\det(g)^{-1/2}(c\tau+d)\]
is the usual automorphy factor if \(g=\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\in\operatorname{GL}_{2}^{+}(\mathbb{R})\).
A holomorphic function \(f:\mathbb{H}\to\mathbb{C}[A]\) is called a _weakly holomorphic_ modular form of weight \(\kappa\) and type \(\rho_{A}\) for \(\Gamma\) if \(f\mid_{\kappa,A}\gamma=f\) for all \(\gamma\in\Gamma\), and if \(f\) is meromorphic at the cusp \(\infty\). Here the last condition means that \(f\) admits a Fourier expansion of the form
\[\sum_{\mu\in A}\sum_{n\in\mathbb{Z}+Q(\mu)}c(\mu,n)e(n\tau)\mathfrak{e}_{\mu}, \tag{3.8}\]
where all but finitely many Fourier coefficients \(c(\mu,n)\) with \(n<0\) vanish. If in addition \(c(\mu,n)=0\) for all \(n<0\) (\(n\leq 0\)), we call the corresponding modular form _holomorphic_ (a _cusp form_). We denote by \(M_{\kappa,A}^{!}\) the space of all weakly holomorphic modular forms, by \(M_{\kappa,A}\) the space of holomorphic modular forms and by \(S_{\kappa,A}\) the subspace of cusp forms. We write \(M_{\kappa,A^{-}}^{!},M_{\kappa,A^{-}}\) and \(S_{\kappa,A^{-}}\) for the corresponding spaces with respect to the dual Weil representation \(\rho_{A^{-}}\). For more details see e.g. [Br1]. Note that formula (3.5) implies that \(M_{\kappa,A}=\{0\}\) unless
\[2\kappa\equiv\operatorname{sig}(L)\pmod{4}. \tag{3.9}\]
Therefore, if the signature of \(L\) is even, non-trivial spaces can occur only for integral weight.
The Petersson scalar product on \(S_{\kappa,A}\) is given by
\[(f,g)=\int_{\Gamma\backslash\mathbb{H}}\langle f(\tau),g(\tau)\rangle(\operatorname{Im}\tau)^{\kappa}d\mu(\tau) \tag{3.10}\]
where
\[d\mu(\tau)=\frac{du\;dv}{v^{2}}\]
denotes the hyperbolic volume element with \(\tau=u+iv\). In view of (3.2) we have for \(f\in S_{\kappa_{1},A}\) and \(g\in S_{\kappa_{2},A}\) that
\[f\otimes g=\sum_{\mu,\nu\in A}f_{\mu}g_{\nu}\mathfrak{e}_{\mu}\otimes \mathfrak{e}_{\nu}\in S_{\kappa_{1}+\kappa_{2},A^{2}}. \tag{3.11}\]
Following [BF1], a harmonic weak Maass form of weight \(\kappa\) with representation \(\rho_{A}\) is a twice continuously differentiable function \(f:\mathbb{H}\longrightarrow\mathbb{C}[A]\) such that
* \(f\mid_{\kappa,A}\gamma=f\) for all \(\gamma\in\Gamma\),
* \(\Delta_{\kappa}f=0\), where (3.12) \[\Delta_{\kappa}=-v^{2}\left(\frac{\partial^{2}}{\partial u^{2}}+\frac{\partial^{2}}{\partial v^{2}}\right)+i\kappa v\left(\frac{\partial}{\partial u}+i\frac{\partial}{\partial v}\right)\] is the hyperbolic Laplace operator of weight \(\kappa\).
* \(f(\tau)=O(e^{\epsilon v})\) for \(v\to\infty\) for some \(\epsilon>0\) (uniformly in \(u\)).
By \(H_{\kappa,A}\) we mean the space of weak Maass forms. According to [BF1] each such Maass form possesses a unique decomposition \(f=f^{+}+f^{-}\) into a holomorphic part \(f^{+}\) and a non-holomorphic part \(f^{-}\) having Fourier expansions of the form
\[f^{+}(\tau)=\sum_{\mu\in A}\sum_{\begin{subarray}{c}n\in\mathbb{Z}+Q(\mu)\\ n\gg-\infty\end{subarray}}c^{+}(\mu,n)e(n\tau)\mathfrak{e}_{\mu}\text{ and }\]
\[f^{-}(\tau)=\sum_{\mu\in A}\left(c^{-}(\mu,0)+\sum_{\begin{subarray}{c}n\in\mathbb{Z}+Q(\mu)\\ n\ll\infty\end{subarray}}c^{-}(\mu,n)H_{\kappa}(2\pi nv)e(nu)\right)\mathfrak{e}_{\mu}.\]
Here \(H_{\kappa}(w)\) is given by \(e^{-w}\int_{-2w}^{\infty}e^{-t}t^{-\kappa}dt\). For \(w<0\) we have \(H_{\kappa}(w)=e^{-w}\Gamma(1-\kappa,-2w)\), where \(\Gamma(a,x)\) means the incomplete Gamma function. If \(w>0\), the integral converges only for \(\kappa<1\), but it can be continued analytically to all \(\kappa\in\mathbb{C}\) in the same way as the Gamma function.
For \(\tau=u+iv\in\mathbb{H}\) we define the Maass lowering and raising operators of weight \(\kappa\) on non-holomorphic modular forms by
\[L_{\kappa}=-2iv^{2}\frac{\partial}{\partial\overline{\tau}},\quad R_{\kappa} =2i\frac{\partial}{\partial\tau}+\kappa v^{-1}. \tag{3.13}\]
It is shown in [BF1], Prop. 3.2, that the differential operator \(\xi_{\kappa}(f)(\tau)=v^{\kappa-2}\overline{L_{\kappa}(f)(\tau)}\) maps weak Maass forms of weight \(\kappa\) to weakly holomorphic modular forms of weight \(2-\kappa\) transforming with \(\rho_{A^{-}}\). Denote with \(H_{\kappa,A}^{+}\) the subspace of weak Maass forms which are
mapped by \(\xi_{\kappa}\) to the space of cusp forms \(S_{2-\kappa,A^{-}}\). It is proved in [BF1], Theorem 3.7, that \(\xi_{\kappa}\) is surjective, which implies that the sequence
\[0\longrightarrow M_{\kappa,A}^{!}\longrightarrow H_{\kappa,A}^{+}\stackrel{{\xi_{\kappa}}}{{\longrightarrow}}S_{2-\kappa,A^{-}}\longrightarrow 0 \tag{3.14}\]
is exact. By means of [BF1], Lemma 3.1, we see that the Fourier expansion of \(f\in H_{\kappa,A}^{+}\) is given by
\[f(\tau)=\sum_{\mu\in A}\left(\sum_{\begin{subarray}{c}n\in\mathbb{Z}+Q(\mu)\\ n\gg-\infty\end{subarray}}c^{+}(\mu,n)e(n\tau)+\sum_{\begin{subarray}{c}n\in\mathbb{Z}+Q(\mu)\setminus\{0\}\\ n<0\end{subarray}}c^{-}(\mu,n)\Gamma(1-\kappa,4\pi|n|v)e(n\tau)\right)\mathfrak{e}_{\mu}. \tag{3.15}\]
The principal part of such a Maass form \(f\) is then given by
\[P_{f}(\tau)=\sum_{\mu\in A}\sum_{\begin{subarray}{c}n\in\mathbb{Z}+Q(\mu)\\ n<0\end{subarray}}c^{+}(\mu,n)e(n\tau)\mathfrak{e}_{\mu}. \tag{3.16}\]
## 4. Special Schwartz forms
In this paper the Kudla-Millson form \(\varphi_{\text{KM}}\) ([KM1]) and the more general Schwartz form \(\varphi_{q,l}\) ([FM]) play a prominent role as they constitute the theta kernel of the Kudla-Millson lift \(\Lambda\). The proof of the injectivity of \(\Lambda\) makes use of the related Schwartz functions \(\phi_{q,l}\) and \(\xi\) and some of their properties.
Schwartz forms are a generalization of Schwartz functions. In this section we review the construction of the Schwartz forms \(\varphi_{\text{KM}}\), \(\varphi_{q,l}\) and the aforementioned Schwartz functions \(\phi_{q,l}\) and \(\xi\). We additionally list some of their fundamental properties. Our main sources are [KM1], [FM] and [BF].
Schwartz forms are Schwartz functions on \(V\) with values in the differential \(q\)-forms on the Grassmannian \(D\). They can be considered as elements of \(S(V)\otimes\mathcal{A}^{q}(D)\). Note that \(G(\mathbb{R})\) acts on elements of \(S(V)\otimes\mathcal{A}^{q}(D)\) via
\[L_{g}^{*}\varphi(gx,z), \tag{4.1}\]
where \(L_{g}^{*}\) means the pullback on \(\mathcal{A}^{q}(D)\) induced by left translations of \(G(\mathbb{R})\) on \(D\).
The construction of the Kudla-Millson Schwartz form \(\varphi_{\text{KM}}\) makes use of the isomorphism
\[\left[S(V)\otimes\mathcal{A}^{r}(D)\right]^{G(\mathbb{R})}\cong\left[S(V) \otimes\bigwedge^{r}\mathfrak{p}^{*}\right]^{K_{\infty}}, \tag{4.2}\]
which is given by mapping any \(\varphi(x,z)\in[S(V)\otimes\mathcal{A}^{r}(D)]^{G(\mathbb{R})}\) to \(\varphi(x,z_{0})\in[S(V)\otimes\bigwedge^{r}\mathfrak{p}^{*}]^{K_{\infty}}\), where \(z_{0}\in D\) denotes a fixed base point and \(\mathfrak{p}\) is part of the Cartan decomposition \(\mathfrak{g}=\mathfrak{p}+\mathfrak{t}\) of the Lie algebra \(\mathfrak{g}\) of \(G(\mathbb{R})\) with \(\mathfrak{t}\) being the Lie algebra of \(K_{\infty}\). The isomorphism (4.2) is based on the well known fact (see e. g. [He], Chap. IV, Sec. 3) that \(\mathfrak{g}/\mathfrak{t}\cong\mathfrak{p}\) is isomorphic to the tangent space \(T(D)_{z_{0}}\) at the chosen base point \(z_{0}\). As usual, \(\mathfrak{p}^{*}\) means the dual space of \(\mathfrak{p}\) and \(\bigwedge^{r}\) the \(r\)-fold exterior product of \(\mathfrak{p}^{*}\). In view of (4.2), it suffices to specify \(\varphi_{\text{KM}}\) at the base point \(z_{0}\), i. e. as an element of the right-hand side of (4.2). To this end, we utilize the explicit description of \(\mathfrak{p}\) by
\[\left\{\begin{pmatrix}0&X\\ X^{t}&0\end{pmatrix}\ \bigg{|}\ X\in M_{p,q}(\mathbb{R})\right\}\cong M_{p,q}( \mathbb{R}).\]
By means of this isomorphism, the standard basis of \(M_{p,q}(\mathbb{R})\) gives rise to a basis \(X_{\alpha,\mu}\) of \(\mathfrak{p}\) with respect to the above chosen basis of \(V\). For \(1\leq\alpha\leq p\) and \(p+1\leq\mu\leq p+q\) we have
\[X_{\alpha,\mu}(v_{i})=\begin{cases}v_{\mu},&i=\alpha,\\ v_{\alpha},&i=\mu,\\ 0,&\text{otherwise}.\end{cases}\]
Moreover, by \(\{\omega_{\alpha,\mu}\mid\alpha=1,\dots,p\text{ and }\mu=p+1,\dots,p+q\}\subset \mathfrak{p}^{*}\) we mean the associated dual basis.
Let \(\varphi_{\infty}^{p,q}\) be the standard Gaussian (see Section 2). For the rest of this section we drop the superscript \(p,q\) to lighten the notation. It can be verified that
\[\varphi_{\infty}(gx,gz)=\varphi_{\infty}(x,z)\]
for all \(g\in G(\mathbb{R})\), i. e.
\[\varphi_{\infty}\in\left[S(V)\otimes C^{\infty}(D)\right]^{G(\mathbb{R})}.\]
For the sake of clarity we write
\[\varphi_{0}(x)\text{ instead of }\varphi_{\infty}(x,z_{0}). \tag{4.3}\]
Then \(\varphi_{\text{KM}}\) is defined (see [KM1], Chap. 3 and Chap. 5 and [FM], Chap. 5.2) by applying the Howe operator
\[\begin{split}&\mathcal{D}:S(V)\otimes\bigwedge^{\bullet}( \mathfrak{p}^{*})\longrightarrow S(V)\otimes\bigwedge^{\bullet+q}(\mathfrak{p} ^{*}),\\ &\mathcal{D}=\frac{1}{2^{q/2}}\prod_{\mu=p+1}^{m}\left[\sum_{ \alpha=1}^{p}\left(x_{\alpha}-\frac{1}{2\pi}\frac{\partial}{\partial x_{ \alpha}}\right)\otimes A(\omega_{\alpha\mu})\right]\end{split} \tag{4.4}\]
to \(\varphi_{0}\otimes 1\in\left[S(V)\otimes\bigwedge^{0}(\mathfrak{p}^{*})\right]^{K_{\infty}}\):
\[\varphi_{KM}=\mathcal{D}(\varphi_{0}\otimes 1). \tag{4.5}\]
Here \(A(\omega_{\alpha\mu})\) denotes the exterior left multiplication by \(\omega_{\alpha\mu}\).
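For orientation, the effect of the Howe operator (4.4) can be computed symbolically in the smallest case \(p=q=1\), where \(\mathcal{D}\) consists of the single factor \(\left(x_{1}-\frac{1}{2\pi}\frac{\partial}{\partial x_{1}}\right)\otimes A(\omega_{1,2})\). The following sympy sketch (an illustrative toy computation, not part of the construction) applies this factor to the Gaussian \(\varphi_{0}\) and suppresses the differential-form part as well as the normalization \(2^{-q/2}\).

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)

# Standard Gaussian phi_0 for signature (1, 1) at the base point z_0, where
# (x, x)_{z_0} = x1^2 + x2^2, cf. (2.10) and (2.11).
phi0 = sp.exp(-sp.pi * (x1**2 + x2**2))

def howe_factor(f):
    """The Schwartz-function part x1 - (1/(2*pi)) d/dx1 of the Howe operator (4.4)."""
    return x1 * f - sp.diff(f, x1) / (2 * sp.pi)

# Schwartz-function component of phi_KM = D(phi_0 x 1) for p = q = 1.
phi_km_component = sp.simplify(howe_factor(phi0))
print(phi_km_component)   # 2*x1*exp(-pi*(x1**2 + x2**2)), up to sympy's formatting
```

The output exhibits the familiar shape of the Kudla-Millson form: a polynomial in the positive definite variables (here of degree \(q=1\)) times the Gaussian.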
The definition of the more general Schwartz form \(\varphi_{q,l}\) can be found in [FM], Chap. 5.2 (for \(n=1\) in our case). As a differential form on \(D\), \(\varphi_{q,l}\) takes values in \(S(V)\otimes\operatorname{Sym}^{l}(V)\), where \(\operatorname{Sym}^{l}(V)\) is the \(l\)-th symmetric power of \(V\). As pointed out in [BMM], Lemma 8.2 and Theorem 8.3, the isomorphism in (4.2) extends to these more general forms. We have
\[\left[S(V)\otimes\mathcal{A}^{q}(D)\otimes\operatorname{Sym}^{l}(V)\right]^{ G(\mathbb{R})}\cong\left[S(V)\otimes\bigwedge^{q}\mathfrak{p}^{*}\otimes \operatorname{Sym}^{l}(V)\right]^{K_{\infty}}. \tag{4.6}\]
Consider the operator
\[\mathcal{D}^{\prime}=\left[\frac{1}{2}\sum_{\alpha=1}^{p}\left(x_{\alpha}- \frac{1}{2\pi}\frac{\partial}{\partial x_{\alpha}}\right)\otimes 1\otimes A(v_{ \alpha})\right]^{l}\in\operatorname{End}_{\mathbb{C}}\left(S(V)\otimes \bigwedge^{\bullet}(\mathfrak{p}^{*})\otimes\operatorname{Sym}^{l}(V)\right), \tag{4.7}\]
where \(A(v_{\alpha})\) means the multiplication in \(\operatorname{Sym}^{l}(V)\) by \(v_{\alpha}\). Then \(\varphi_{q,l}\) is obtained from \(\varphi_{\text{KM}}\in\left[S(V)\otimes\bigwedge^{q}\mathfrak{p}^{*}\right]^{ K_{\infty}}\) by
\[\varphi_{q,l}=\mathcal{D}^{\prime}(\varphi_{KM}). \tag{4.8}\]
If \(l\) is equal to zero, \(\varphi_{q,l}\) simplifies to \(\varphi_{\text{KM}}\). Setting \(\mathcal{D}_{\alpha}=\left(x_{\alpha}-\frac{1}{2\pi}\frac{\partial}{\partial x_{ \alpha}}\right)\), we may write \(\varphi_{q,l}\) in a more explicit form
\[\varphi_{q,l}(x)=\sum_{\begin{subarray}{c}\alpha_{1},\dots,\alpha_{q}=1\\ \beta_{1},\dots,\beta_{l}=1\end{subarray}}^{p}\prod_{i=1}^{q}\prod_{j=1}^{l}\mathcal{D}_{\alpha_{i}}\mathcal{D}_{\beta_{j}}\varphi_{0}(x)\otimes\left(\omega_{\alpha_{1},p+1}\wedge\dots\wedge\omega_{\alpha_{q},p+q}\right)\otimes\left(v_{\beta_{1}}\otimes\dots\otimes v_{\beta_{l}}\right). \tag{4.9}\]
The following theorem summarizes the most fundamental properties of \(\varphi_{q,l}\) (and thereby of \(\varphi_{\text{KM}}\)). They are proved in [FM], Section 6 and [KM1], Thm. 3.1, see also [BF] Section 3.
**Theorem 4.1**.: _The Schwartz form \(\varphi_{q,l}\) satisfies:_
* \(\varphi_{q,l}\) _is invariant under the action of_ \(K_{\infty}\)_, i. e._ \(\varphi_{q,l}\in\left[S(V)\otimes\bigwedge^{q}(\mathfrak{p}^{*})\otimes\text{ Sym}^{l}(V)\right]^{K_{\infty}}\)_._
* \(\varphi_{q,l}\) _is an eigenvector of weight_ \(\frac{m}{2}+l\) _under the action of_ \(K_{\infty}^{\prime}(1)\)_, i. e._ \[\omega(k)\varphi_{q,l}=\det(\mathbf{k})^{\frac{m}{2}+l}\varphi_{q,l},\] \(\mathbf{k}\in U(1)\) _corresponding to_ \(k\in K_{\infty}^{\prime}(1)\) _(cf. (_2.8_))._
* \(\varphi_{q,l}\) _is a closed differential_ \(q\)_- form on_ \(D\) _with values in_ \(\text{Sym}^{l}(V)\)_._
The Schwartz function \(\phi_{q,l}\in S(V(\mathbb{R})^{2})\) is constructed by applying the Hodge star operator \(*\) on \(D\) to \(\varphi_{q,l}\). More precisely,
\[\phi_{q,l}((x_{1},x_{2}),z)\mu=\varphi_{q,l}(x_{1},z)\wedge*(\varphi_{q,l})(x_ {2},z), \tag{4.10}\]
where \(\mu\) is the \(G(\mathbb{R})\)-invariant volume form on \(D\) induced by the Riemann metric coming from the Killing form on \(\mathfrak{g}\).
The definition of the Schwartz function \(\xi\) is more involved: Let \(\mathfrak{g}^{\prime}\) be the complexified Lie algebra of \(G^{\prime}(2)(\mathbb{R})\). The Harish-Chandra decomposition of \(\mathfrak{g}^{\prime}\) is then given by
\[\mathfrak{g}^{\prime}=\mathfrak{p}_{+}\oplus\mathfrak{p}_{-}\oplus\mathfrak{t }^{\prime}\]
with \(\mathfrak{t}^{\prime}\) being the complexified Lie algebra of \(K^{\prime}(2)_{\infty}\). We have an explicit description of \(\mathfrak{p}_{+}\):
\[\mathfrak{p}_{+}=\left\{p_{+}(X)=\frac{1}{2}\begin{pmatrix}X&iX\\ iX&-X\end{pmatrix}\ \bigg{|}\ X\in M_{2,2}(\mathbb{C}),\ X^{t}=X\right\}.\]
It is generated by
\[R_{11}=p_{+}\left(\begin{smallmatrix}1&0\\ 0&0\end{smallmatrix}\right),\quad R_{22}=p_{+}\left(\begin{smallmatrix}0&0\\ 0&1\end{smallmatrix}\right),\quad R_{12}=\frac{1}{2}p_{+}\left(\begin{smallmatrix} 0&1\\ 1&0\end{smallmatrix}\right).\]
It is well known (see e. g. [Bu], Chap. 2.1 and Chap. 2.2) that \(R_{11}=\iota(R,0)\) and \(R_{22}=\iota(0,R)\) under the embedding of Lie algebras induced by \(\iota\), where
\[R=\frac{1}{2}\begin{pmatrix}1&i\\ i&-1\end{pmatrix}\]
corresponds to the Maass raising operator. Now we are in the position to specify the Schwartz function \(\xi\in S(V(\mathbb{R})^{2})\). We set
\[\xi=\frac{p^{l}(-1)^{q+l}}{2^{l}\pi^{q+l}}\omega(\alpha_{q+l})\varphi_{0} \tag{4.11}\]
with
\[\alpha_{q+l}=\begin{cases}\left(R_{12}^{2}-R_{11}^{2}R_{22}^{2}\right)^{\frac {q+l}{2}},&\text{if $q+l$ is even},\\ R_{12}\left(R_{12}^{2}-R_{11}^{2}R_{22}^{2}\right)^{[\frac{q+l}{2}]},&\text{ if $q+l$ is odd}\end{cases}\]
being an element of \(\mathfrak{p}_{+}\).
Note that the functions \(\phi_{q,l}\) and \(\xi\) are related by the identity
\[\phi_{q,l}=\xi+\omega(R_{11})\omega(R_{22})\psi, \tag{4.12}\]
where \(\psi\) is specified in [BF], Prop. 3.9.
We close this section with
**Lemma 4.2** ([BF], Section 3).: _We have_
* \(\xi\) _is_ \(K_{\infty}\)_-invariant, i. e. an element in_ \(\left[S(V(\mathbb{R})^{2})\otimes\bigwedge^{0}(\mathfrak{p}^{*})\right]^{K_{ \infty}}\)_,_
* \(\xi\) _is a weight_ \(\frac{m}{2}+l\) _eigenvector with respect to the action of_ \(K_{\infty}^{\prime}(2)\) _via the Weil representation._
* \(\xi\) _vanishes identically if and only if_ \(p=1\) _and_ \(q+l>1\)_._
## 5. The Siegel-Weil formula
This section summarizes the necessary background to state the Siegel-Weil formula in a vector valued setup. We mainly follow [Ku2] and [Ku3]. This formula connects the integral over a theta function associated to a Schwartz-Bruhat function with a special value of an adelic Eisenstein series. It is stated in a global setting which is more general than that of Section 4.
Recall that \(S(V(\mathbb{A})^{n})\) is the space of Schwartz-Bruhat functions, \(G(\mathbb{A})=O(V(\mathbb{A}))\) and \(G^{\prime}(n)(\mathbb{A})=\operatorname{Sp}(n,\mathbb{A})\). Within \(G^{\prime}(n)(\mathbb{A})\) we have the following subgroups
\[N(\mathbb{A}) =\left\{n(b)=\begin{pmatrix}1&b\\ 0&1\end{pmatrix}\ \big{|}\ b\in\operatorname{Sym}_{n}(\mathbb{A})\right\}, \tag{5.1}\] \[M(\mathbb{A}) =\left\{m(a)=\begin{pmatrix}a&0\\ 0&(a^{-1})^{t}\end{pmatrix}\ \big{|}\ a\in\operatorname{GL}_{n}(\mathbb{A})\right\}. \tag{5.2}\]
These define the Siegel parabolic subgroup \(P(\mathbb{A})=N(\mathbb{A})M(\mathbb{A})\) of \(G^{\prime}(n)(\mathbb{A})\), which is part of the Iwasawa decomposition
\[G^{\prime}(n)(\mathbb{A})=N(\mathbb{A})M(\mathbb{A})K^{\prime}(n) \tag{5.3}\]
of \(G^{\prime}(n)(\mathbb{A})\), where \(K^{\prime}(n)=\prod_{v}K_{v}^{\prime}(n)\) is the maximal compact subgroup of \(G^{\prime}(n)(\mathbb{A})\). Here for a non-Archimedean place \(v=p\) the group \(K_{p}^{\prime}(n)\) is given by \(\operatorname{Sp}(n,\mathbb{Z}_{p})\) and \(K^{\prime}(n)_{\infty}\) is the maximal compact in \(G^{\prime}(n)(\mathbb{R})\) (cf. (2.8)).
For \(\varphi\in S(V(\mathbb{A})^{n})\) we have a generalization of the theta function (2.9) to the genus \(n\) in the adelic setting:
\[\vartheta(g,h,\varphi)=\sum_{\mathbf{x}\in V(\mathbb{Q})^{n}}\omega(g)\varphi (h^{-1}\mathbf{x}), \tag{5.4}\]
where \(g\in G^{\prime}(n)(\mathbb{A})\) and \(h\in G(\mathbb{A})\). Here \(G^{\prime}(n)(\mathbb{A})\) acts on \(S(V(\mathbb{A})^{n})\) via the global Weil representation \(\omega\) (see e. g. [We1], [Ku3] or [St1]). This action commutes with the action
\[\varphi(\mathbf{x})\mapsto\varphi(h^{-1}\mathbf{x})\]
of \(G(\mathbb{A})\). Sometimes the action of \(G(\mathbb{A})\) is written as \(\omega(h)\varphi\). It can be shown that \(\vartheta(g,h,\varphi)\) is left invariant under \(G^{\prime}(n)(\mathbb{Q})\times G(\mathbb{Q})\) and slowly increasing on \((G^{\prime}(n)(\mathbb{Q})\times G(\mathbb{Q}))\backslash(G^{\prime}(n)(\mathbb{A})\times G(\mathbb{A}))\). Moreover, it defines a smooth function on \(G^{\prime}(n)(\mathbb{A})\times G(\mathbb{A})\). The integral
\[I(g,\varphi)=\int_{G(\mathbb{Q})\backslash G(\mathbb{A})}\vartheta(g,h, \varphi)dh \tag{5.5}\]
can be interpreted as the average value of \(\vartheta\) with respect to \(G(\mathbb{A})\). Here \(dh\) is the Haar measure on \(G(\mathbb{Q})\backslash G(\mathbb{A})\) normalized such that \(\operatorname{vol}(G(\mathbb{Q})\backslash G(\mathbb{A}))=1\), thus half of the Tamagawa measure. By Weil's convergence criterion (see [We2], Chap. VI, Prop. 8), the integral in (5.5) is absolutely convergent for all \(\varphi\) whenever either \(V\) is anisotropic or the rank of \(L\) and the Witt index of \(V\) satisfy the inequality \(m-r_{0}>n+1\). Also, if \(\varphi\) is \(K^{\prime}(n)\)-finite, \(g\mapsto I(g,\varphi)\) defines an automorphic form on \(G^{\prime}(n)(\mathbb{A})\) ([Ku3]).
To describe the Eisenstein series involved in the Siegel-Weil formula, we let
\[\chi_{V}:\mathbb{A}^{\times}/\mathbb{Q}^{\times}\longrightarrow\mathbb{C}; \quad x\mapsto\chi_{V}(x)=(x,(-1)^{m/2}\det(V))_{\mathbb{A}} \tag{5.6}\]
the quadratic character defined by the global Hilbert symbol \((\cdot,\cdot)_{\mathbb{A}}\), where \(\det(V)\) is the Gram determinant of \(V\).
For \(s\in\mathbb{C}\), we denote by \(I_{n}(s,\chi_{V})\) the normalized induced representation from \(P(\mathbb{A})\) to \(G^{\prime}(n)(\mathbb{A})\). It consists of smooth functions \(g\mapsto\Phi(g,s)\) on \(G^{\prime}(n)(\mathbb{A})\) satisfying
\[\Phi(n(b)m(a)g,s)=\chi_{V}(\det(a))|\det(a)|^{s+\rho_{n}}\Phi(g,s) \tag{5.7}\]
for all \(a\in\operatorname{GL}_{n}(\mathbb{A})\) and \(b\in\operatorname{Sym}_{n}(\mathbb{A})\), where
\[\rho_{n}=\frac{n+1}{2}. \tag{5.8}\]
An element \(\Phi\in I_{n}(s,\chi_{V})\) is called a standard section if its restriction to \(K^{\prime}(n)\) is independent of \(s\). For any \(\Phi\in I_{n}(s,\chi_{V})\) and \(g\in G^{\prime}(n)(\mathbb{A})\), we define the (adelic) Siegel Eisenstein series of genus \(n\) by
\[E(g,s,\Phi)=\sum_{\gamma\in P(\mathbb{Q})\backslash G^{\prime}(n)(\mathbb{Q}) }\Phi(\gamma g,s). \tag{5.9}\]
If \(\Phi\) is a standard section, one can prove that \(E(g,s,\Phi)\) converges absolutely for \(\operatorname{Re}(s)>\rho_{n}\) and thereby defines an automorphic form on \(G^{\prime}(n)(\mathbb{A})\) provided \(\Phi\) is \(K^{\prime}(n)\)-finite. Moreover, \(E(g,s,\Phi)\) can be continued meromorphically to the whole \(s\)-plane (for these results see e. g. [A]) and satisfies a functional equation.
One way to construct a standard section \(\Phi\) is by means of the intertwining map
\[\lambda:S(V(\mathbb{A})^{n})\longrightarrow I_{n}(s_{0},\chi_{V}),\quad \varphi\mapsto\lambda(\varphi)(g,s_{0})=\Phi(g,s_{0})=(\omega(g)\varphi)(0), \tag{5.10}\]
where
\[s_{0}=\frac{m}{2}-\rho_{n} \tag{5.11}\]
and \(\omega\) is the (adelic) Weil representation. Using the Iwasawa decomposition, write \(g\in G^{\prime}(n)(\mathbb{A})\) as \(g=n(b)m(a)k\) and put
\[|a(g)|=|\det(a)|_{\mathbb{A}}.\]
It can then be proved that \(\lambda(\varphi)\in I_{n}(s_{0},\chi_{V})\) has a unique extension to a standard section of \(I_{n}(s,\chi_{V})\) given by
\[\Phi(g,s)=\lambda(\varphi)(g,s)=|a(g)|^{s-s_{0}}(\omega(g)\varphi)(0)\in I_{n }(s,\chi_{V}). \tag{5.12}\]
The Siegel-Weil formula was originally stated by Weil in a very general manner for dual reductive pairs (see [We2], Chap. IV, Théorème 5). Here, we consider the dual pair \((\operatorname{Sp}(n),O(V))\) and follow [BF], [KR1] and [KR2].
**Theorem 5.1**.: _Assume that \(r_{0}=0\) or \(m-r_{0}>n+1\). Then for all \(K^{\prime}(n)\)-finite \(\varphi\in S(V(\mathbb{A})^{n})\)_
1. \(E(g,s,\lambda(\varphi))\) _is holomorphic at_ \(s=s_{0}\) _and_
2. \(E(g,s_{0},\lambda(\varphi))=\alpha I(g,\varphi)\) _for all_ \(g\in G^{\prime}(n)(\mathbb{A})\)_, where_ \(\alpha=\begin{cases}1,&m>n+1,\\ 2,&m\leq n+1.\end{cases}\)__
We would like to give a vector valued version of Theorem 5.1 in the same vein as in Prop. 2.2 of [BY], but for lattices of signature \((p,q)\) and the group \(\operatorname{Sp}(n)\) (see also [St1], Section 3, for the following notations and definitions). To this end, we define a Schwartz-Bruhat function \(\varphi_{\mu}\in S(V(\mathbb{A}_{f})^{n})\) associated to \(\mu\in(L^{\prime}/L)^{n}=A^{n}\) by
\[\varphi_{\mu}=\mathbb{1}_{\mu+\hat{L}^{n}}=\prod_{p<\infty}\varphi_{p}^{(\mu) }=\prod_{p<\infty}\mathbb{1}_{\mu+L_{p}^{n}}. \tag{5.13}\]
Here \(L_{p}=L\otimes\mathbb{Z}_{p}\), which is the \(p\)-part of \(\hat{L}=L\otimes\hat{\mathbb{Z}}\) with \(\hat{\mathbb{Z}}=\prod_{p<\infty}\mathbb{Z}_{p}\). For the Archimedean place we choose the Gaussian \(\varphi_{0}^{p,q}\) evaluated at the base point \(z_{0}\in D\) (see (4.3)). We associate to \(\varphi_{0}^{p,q}\prod_{p<\infty}\varphi_{p}^{(\mu)}\) the standard section \(\Phi_{\infty}^{(p-q)/2}(s)\,\prod_{p<\infty}\Phi_{p}^{(\mu)}(s)\in I_{n}(s, \chi_{V})\), where
\[\begin{split}&\Phi_{\infty}^{(p-q)/2}(g_{\infty},s)=\lambda_{ \infty}(\varphi_{0}^{p,q})(g_{\infty},s)\text{ and }\\ &\Phi_{p}^{(\mu)}(g_{p},s)=\lambda_{p}(\varphi_{p}^{(\mu)})(g_{p},s)\end{split} \tag{5.14}\]
for \(g\in G(\mathbb{A})\). We use the notation
\[\Phi_{\mu}(s)=\prod_{p<\infty}\Phi_{p}^{(\mu)}(s).\]
Note that the Schwartz-Bruhat functions \(\varphi_{\mu},\ \mu\in A^{n}\), generate a finite dimensional subspace
\[S_{L}=\bigoplus_{\mu\in A^{n}}\mathbb{C}\varphi_{\mu} \tag{5.15}\]
of \(S(V(\mathbb{A}_{f})^{n})\), which is stable under the action of \(G^{\prime}(n)(\widehat{\mathbb{Z}})\) via the Weil representation (see [St1], Lemma 3.4 and the discussion after the Lemma).
We are now ready to define a vector valued theta function and a vector valued Eisenstein series. Let \(\phi\in S(V(\mathbb{R})^{n})\) with associated standard section \(\Phi(g_{\infty},s)=\lambda(\phi)(g_{\infty},s)\) at the place infinity and \(\varphi_{\mu}\) and \(\Phi_{\mu}\) as above. Then we put
\[\vartheta_{L}(g,h,\phi)=\sum_{\mu\in A^{n}}\vartheta(g,h,\phi\otimes\varphi_{ \mu})\varphi_{\mu} \tag{5.16}\]
and
\[E_{L}(g,s,\Phi)=\sum_{\mu\in A^{n}}E(g,s,\Phi\otimes\Phi_{\mu})\varphi_{\mu}. \tag{5.17}\]
In terms of \(\vartheta_{L}\) and \(E_{L}\), based on the same assumptions and notations as in Theorem 5.1, the Siegel-Weil formula then reads as follows.
**Corollary 5.2**.: _For \(\phi\in S(V(\mathbb{A})^{n})\) with induced section \(\Phi=\lambda(\phi)\) we have_
\[\alpha\int_{O(V)(\mathbb{Q})\backslash O(V)(\mathbb{A})}\vartheta_{L}(g,h, \phi)dh=E_{L}(g,s_{0},\Phi). \tag{5.18}\]
We close this section with a result taken from [BF], which will be important later in the paper. To that end, let \(\kappa\in\mathbb{N}\) and denote with
\[\Phi_{\infty}^{\kappa}(k,s)=\det(\mathbf{k})^{\kappa},\quad k\in K^{\prime}(n)_{\infty}, \tag{5.19}\]
the standard section of weight \(\kappa\) at the place infinity. It turns out that the induced section \(\Xi\) associated to the Schwartz function \(\xi\) (see (4.11)) is essentially \(\Phi_{\infty}^{m/2+l}\):
**Proposition 5.3** ([BF],Prop. 3.12).: _Let \(q,l\) be as in Section 4 and \(q+l\) even. Then \(\Xi(g,s)=\lambda(\xi)(g,s)\) is the standard section (5.19) of weight \(m/2+l\) (at the infinite place), i. e._
\[\Xi(s)=C(s)\Phi_{\infty}^{m/2+l}(s).\]
_Here \(C(s)\) is a polynomial, which is non-zero for \(s_{0}=\frac{m}{2}-\frac{3}{2}\) and which vanishes identically for \(p=1\)._
## 6. A standard \(L\)-function and zeta function
In [St2] and [St3] a zeta function and a standard \(L\)-function, respectively, are assigned to a vector valued common Hecke eigenform \(f\) of weight \(\kappa\) and type \(\rho_{A}\). The usual basic properties for both functions are shown. In this section we briefly review some of the aforementioned material and show that the standard \(L\)-function is non-zero at some \(s\in\mathbb{C}\). This result will turn out to be an important step in the proof of the injectivity of the Kudla-Millson theta lift. All details can be found in [St2] and [St3].
Because a lot of notation is used in this section, we list most of it here and explain the rest within the text or refer to the main sources. There are several groups involved. The group \(\mathcal{Q}_{p}\) is a subgroup of \(\operatorname{GL}_{2}(\mathbb{Q}_{p})\) and \(\mathcal{K}_{p}\) means a subgroup of \(\operatorname{GL}_{2}(\mathbb{Z}_{p})\). We write \(\mathcal{M}_{p}\) and \(\mathcal{D}_{p}\) for a subgroup of diagonal matrices in \(\mathcal{Q}_{p}\) and \(\mathcal{K}_{p}\), respectively, and denote with \(N(\mathbb{Z}_{p})\) the group \(\{(\begin{smallmatrix}1&x\\ 0&1\end{smallmatrix})\mid x\in\mathbb{Z}_{p}\}\). Recall that \(\omega=\bigotimes_{p\leq\infty}\omega_{p}\) is the global Weil representation of \(G^{\prime}(1)(\mathbb{A})\times G(\mathbb{A})\). The Schrödinger model of \(\omega\) acts on \(S(V(\mathbb{A}))\) and leaves the subspace \(S_{L}=\bigotimes_{p}S_{L_{p}}\) (see (5.15)) invariant. \(\varphi_{p}^{(0)}\in S_{L_{p}}\) is defined by (5.13) (with \(\mu=0\)). As usual, we abbreviate the diagonal matrix \(\left(\begin{smallmatrix}d_{1}&0\\ 0&d_{2}\end{smallmatrix}\right)\) with the symbol \(m(d_{1},d_{2})\).
The zeta function \(Z(s,f)\) is defined via the eigenvalues \(\lambda_{f}(m(d^{2},1))\) of the Hecke operators \(T(m(d^{2},1))\):
\[Z(s,f)=\sum_{d\in\mathbb{N}}\lambda_{f}(m(d^{2},1))d^{-2s}. \tag{6.1}\]
Its analytic properties are linked to those of a vector valued Siegel Eisenstein series \(E^{2}_{\kappa,0}\) of genus two transforming according to the Weil representation of \(G^{\prime}(2)(\mathbb{Z})\) by a Rankin-Selberg type integral formula (see [St2], Thm. 6.4)
\[\begin{split}\sum_{\lambda\in L^{\prime}/L}\left(\int_{\Gamma\backslash\mathbb{H}}\langle f(\tau)\otimes\mathfrak{e}_{\lambda},E^{2}_{\kappa,0}(\left(\begin{smallmatrix}\tau&0\\ 0&-\zeta\end{smallmatrix}\right),\overline{s})\rangle_{2}\operatorname{Im}(\tau)^{\kappa}d\mu(\tau)\right)\mathfrak{e}_{\lambda}\\ =K(\kappa,s)Z(2s+\kappa,f)f(\zeta),\end{split} \tag{6.2}\]
which holds in the region of convergence of \(E^{2}_{\kappa,0}\), i. e. for all \(s\) with \(\operatorname{Re}(s)>\frac{3-\kappa}{2}\). Thus, \(Z(s,f)\) converges for all \(s\) with \(\operatorname{Re}(s)>\frac{3+\kappa}{2}\). A more general zeta function is specified in [St3]. It is defined as an Euler product
\[\mathcal{Z}(s,f)=\prod_{p\text{ prime}}\mathcal{Z}_{p}(s,f),\]
where for each prime \(p\) the local zeta function \(\mathcal{Z}_{p}(s,f)\) is given by
\[\mathcal{Z}_{p}(s,f)=\sum_{(k,l)\in\Lambda_{+}}\lambda_{f}(m(p^{k},p^{l}))p^{-s( k+l)}.\]
Here \(\Lambda_{+}\) means the set
\[\Lambda_{+}=\left\{(k,l)\in\mathbb{Z}^{2}\mid 0\leq k\leq l\text{ and }k+l\in 2 \mathbb{Z}\right\}.\]
The zeta function \(\mathcal{Z}(s,f)\) can be expressed in terms of \(Z(s,f)\) by the following relation
\[\mathcal{Z}(s,f)=\prod_{p\mid|A|}\left(\left(\frac{e(\operatorname{sig}(A_{p})/8)}{|A_{p}|^{1/2}}-1\right)+L_{p}(2s+\kappa-2,\chi_{A_{p}^{\perp}})\right)L(2s+\kappa-2,\chi_{A})Z(s+\kappa-2,f), \tag{6.3}\]
where
* \(\chi_{A}\) and \(\chi_{A_{p}^{\perp}}\) are Dirichlet characters, which are defined in [St3], Section 3,
* \[L_{p}(s,\chi_{A_{p}^{\perp}})=(1-\chi_{A_{p}^{\perp}}(p)p^{-s})^{-1},\]
* \(L(s,\chi_{A})\) is the Dirichlet \(L\)-series associated to \(\chi_{A}\).
From (6.3) we can immediately conclude that \(\mathcal{Z}(s,f)\) converges for \(s\in\mathbb{C}\) with \(\text{Re}(s)>\frac{7-\kappa}{2}\).
In [St3] an isomorphism between \(S_{\kappa,A}\) and a space of vector valued automorphic forms \(A_{\kappa}(\omega_{f})\) of type \(\omega\) is established. Vector valued spherical Hecke algebras depending on a representation are well studied objects. In [St3] for each prime \(p\) the structure of a subalgebra of the vector valued spherical Hecke algebra \(\mathcal{H}(\mathcal{Q}_{p}//\mathcal{K}_{p},\omega_{p})\) depending on the Weil representation \(\omega_{p}\) is determined. Two cases have to be considered. In both cases we let
\[\{T_{k,l}\mid(k,l)\in\Lambda_{+}\}\]
be a set of generators. If \(p\) divides \(|A|\), this algebra is denoted with \(\mathcal{H}^{+}(\mathcal{Q}_{p}//\mathcal{K}_{p},\omega_{p})\) and
\[T_{k,l}(k_{1}m(p^{r},p^{s})k_{2})=\omega_{p}(k_{1})\circ T(k,l)\circ\omega_{p} (k_{2})\]
with
\[T(k,l)(m(p^{r},p^{s}))\varphi_{p}^{(\mu_{p})}=\frac{g(A_{p})}{g_{p^{l}}(A_{p}) }\mathbb{1}_{\mathcal{K}_{p}m(p^{k},p^{l})\mathcal{K}_{p}}\varphi_{p}^{(p^{(l -k)/2}\mu_{p})}=\frac{g(A_{p})}{g_{p^{l}}(A_{p})}\mathbb{1}_{\mathcal{K}_{p}m( p^{k},p^{l})\mathcal{K}_{p}}\varphi_{p}^{(0)}\]
for \(k<l\) and
\[T(k,k)(m(p^{r},p^{s}))=\frac{g(A_{p})}{g_{p^{k}}(A_{p})}\mathbb{1}_{\mathcal{K }_{p}m(p^{k},p^{k})\mathcal{K}_{p}}\operatorname{id}_{S_{L_{p}}}\]
for \(k=l\). If \(p\nmid|A|\), the whole spherical Hecke algebra \(\mathcal{H}(\mathcal{Q}_{p}//\mathcal{K}_{p},\omega_{p})\) is considered. As \(\omega_{p}\) acts trivially in this case, its structure is much easier and well known. In this case we have
\[T_{k,l}=\mathbb{1}_{\mathcal{K}_{p}m(p^{k},p^{l})\mathcal{K}_{p}} \operatorname{id}_{S_{L_{p}}}.\]
Here \(g_{p^{k}}(A_{p})\) (and \(g(A_{p})\)) is given by (3.6). Each local Hecke algebra acts on \(A_{\kappa}(\omega_{f})\) by a Hecke operator \(\mathcal{T}^{T_{k,l}}\). It is important to realize that this action is compatible with the action of Hecke operators on \(S_{\kappa,A}\). More specifically, the identity
\[\mathcal{T}^{T_{k,l}}(F_{f})=F_{p^{(k+l)(\frac{\kappa}{2}-1)}T(m(p^{-k},p^{-l} ))f} \tag{6.4}\]
holds, where \(F_{f}\in A_{\kappa}(\omega_{f})\) corresponds to \(f\in S_{\kappa,A}\) and \(T(m(p^{-k},p^{-l}))\) means the Hecke operator attached to \(m(p^{-k},p^{-l})\) (cf. [St3], Thm. 6.9). If \(F\) is an automorphic eigenform of all Hecke operators \(\mathcal{T}^{T_{k,l}}\), the corresponding eigenvalues give rise to an algebra homomorphism
\[\lambda_{F,p}:\mathcal{H}^{+}(\mathcal{Q}_{p}//\mathcal{K}_{p},\omega_{p}) \to\mathbb{C},\quad T\mapsto\lambda_{F,p}(T).\]
This algebra homomorphism determines in turn an unramified character \(\chi_{F,p}\) on the group \(\mathcal{M}_{p}\). The standard \(L\)-function \(L(s,F)\) of a common eigenform \(F\) is constructed as an Euler product
\[L(s,F)=\prod_{p<\infty}L_{p}(s,F) \tag{6.5}\]
with
\[L_{p}(s,F)=\begin{cases}\frac{1+\chi_{F,p}^{(1)}(p)\chi_{F,p}^{(2)}(p)p^{-2s+1 }}{(1-\chi_{F,p}^{(1)}(p^{2})p^{-2s+1})(1-\chi_{F,p}^{(2)}(p^{2})p^{-2s+1})}, &(p,|A|)=1,\\ \\ \frac{1}{C(A_{p})}\frac{1+\chi_{F,p}^{(1)}(p)\chi_{F,p}^{(2)}(p)p^{-2s+1}}{(1- \chi_{F,p}^{(1)}(p^{2})p^{-2s+1})(1-\chi_{F,p}^{(2)}(p^{2})p^{-2s+1})},&p\mid|A |.\end{cases} \tag{6.6}\]
Here \(\chi_{F,p}=(\chi_{F,p}^{(1)},\chi_{F,p}^{(2)})\) is the unramified character mentioned before and \(C(A_{p})\) is some constant depending on the \(p\)-group \(A_{p}\) (see [St3], Lemma 7.3). If \(F_{f}\) is the automorphic form belonging to \(f\in S_{\kappa,A}\) and \(L(s,F_{f})\) the standard \(L\)-function of \(F_{f}\), we define the standard \(L\)-function of \(f\) naturally by
\[L(s,f)=L(s,F_{f}).\]
There is a relation between the standard zeta function \(\mathcal{Z}(s,f)\) and the standard \(L\)-function \(L(s,f)\). We have
\[\mathcal{Z}(s+\frac{\kappa}{2}-1,f)=L(s,f). \tag{6.7}\]
As \(\mathcal{Z}(s,f)\) converges for all \(s\in\mathbb{C}\) with \(\operatorname{Re}(s)>\frac{7-\kappa}{2}\), it follows from (6.7) that \(L(s,f)\) converges for all \(s\in\mathbb{C}\) with \(\operatorname{Re}(s)>\frac{9}{2}-\kappa\).
**Theorem 6.1**.: _Let \(\kappa=\frac{m}{2}+l\) with \(m\) and \(l\) as in Sections 2 and 4 satisfying \(\frac{m}{2}>l+3\). Then the standard \(L\)-function \(L(s,f)\) is non-zero for \(s=-\frac{m}{4}-\frac{3}{2}l+3\)._
Proof.: The condition \(\frac{m}{2}>l+3\) is equivalent to \(\frac{m}{2}>\frac{3}{2}+\frac{\kappa}{2}\). Therefore, \(\frac{m}{2}\) lies inside the region of convergence of \(Z(s,f)\). The identity (6.7) implies immediately that \(L(s,f)\) converges for \(s=-\frac{m}{4}-\frac{3}{2}l+3\). Alternatively, this can be confirmed directly by checking that
\[3-\frac{m}{4}-\frac{3l}{2}>\frac{9}{2}-\frac{m}{2}-l.\]
\(L(s,f)\) is defined as an Euler product. Thus, it suffices to prove that each factor of this product is non-zero at the point \(s\) in question. In view of (6.6) this boils down to show that
\[1+\chi_{F,p}(m(p,p))p^{\frac{m}{2}+3l-5}=1+\chi_{F,p}^{(1)}(p)\chi_{F,p}^{(2)}( p)p^{\frac{m}{2}+3l-5} \tag{6.8}\]
is non-zero. For this we need to calculate the character \(\chi_{F,p}\) evaluated at \(m(p,p)\). It is determined by the algebra homomorphism \(\lambda_{F,p}\) via the relation
\[\lambda_{F,p}(T_{k,l})=\begin{cases}\sum_{(r,s)\in\mathbb{Z}^{2}}S(\langle T_{ k,l},\varphi_{p}^{(0)}\rangle)(m(p^{r},p^{s}))\chi_{F,p}(m(p^{r},p^{s})),&(p,|A|)=1, \\ \sum_{(r,s)\in\mathbb{Z}^{2}}\langle\mathcal{S}(T_{k,l})(m(p^{r},p^{s})), \varphi_{p}^{(0)}\rangle\chi_{F,p}(m(p^{r},p^{s})),&p\mid|A|,\end{cases} \tag{6.9}\]
cf. equation (7.21) in [St3]. Here \(S\) means the classical Satake map and \(\mathcal{S}\) the Satake map introduced in [St3], (5.12). These equations simplify significantly if we choose \(T_{k,l}=T_{1,1}\). We find immediately from the calculations in the proof of Theorem 5.11 of [St3] that
\[\begin{split}(\mathcal{S}T_{k,k})(m(p^{k},p^{k}))&=T_{k,k}(m(p^{k},p^{k}))_{|S_{L_{p}}^{N(\mathbb{Z}_{p})}}\\ &=\frac{g(A_{p})}{g_{p^{k}}(A_{p})}\mathbb{1}_{\mathcal{D}_{p}m(p^{k},p^{k})\mathcal{D}_{p}}\operatorname{id}_{S_{L_{p}}^{N(\mathbb{Z}_{p})}}.\end{split} \tag{6.10}\]
The same arguments lead in the case of \(p\nmid|A|\) to
\[(ST_{k,k})(m(p^{k},p^{k}))=\mathbb{1}_{\mathcal{D}_{p}m(p^{k},p^{k})\mathcal{D}_{p}}\operatorname{id}_{S_{L_{p}}^{N(\mathbb{Z}_{p})}}. \tag{6.11}\]
Replacing the Satake maps in (6.9) in both cases with the right-hand side of (6.10) and (6.11), we obtain
\[\lambda_{F,p}(T_{1,1})=\begin{cases}\frac{g(A_{p})}{g_{p}(A_{p})}\chi_{F,p}(m(p,p)),&p\mid|A|,\\ \chi_{F,p}(m(p,p)),&(p,|A|)=1.\end{cases} \tag{6.12}\]
Combining the equations (3.14) and (3.15) of [St3] with (6.4) yields a relation between the eigenvalues \(\lambda_{F,p}(T_{k,l})\) and \(\lambda_{f}(T(m(p^{l-k},1)))\):
\[\lambda_{F,p}(T_{k,l})=\begin{cases}p^{(k-l)(\kappa/2-1)}\frac{g_{p^{k}}(A)}{g(A)}\frac{g(A)}{g_{p^{k+l}}(A)}\lambda_{f}(m(p^{l-k},1)),&p\mid|A|,\\ p^{(k-l)(\kappa/2-1)}\lambda_{f}(m(p^{l-k},1)),&p\nmid|A|.\end{cases} \tag{6.13}\]
In the case \(p\mid|A|\) we would like to further simplify the expression on the right-hand side of (6.13). First we keep in mind that \(p^{k+l}\) is a square as \((k,l)\) is an element of \(\Lambda_{+}\). This implies that \(g_{p^{k+l}}(A)=g(A)\). To evaluate the fraction \(\frac{g_{p^{k}}(A)}{g(A)}\), we decompose \(A\) in the following way
\[A=A_{p}\oplus A_{p}^{\perp},\]
where \(A_{p}^{\perp}\) is the orthogonal complement of the \(p\)-group \(A_{p}\) in \(A\). Following [St3], p. 10 ff, we have for any \(r\in\mathbb{N}\)
\[\begin{split}\frac{g(A)}{g_{p^{r}}(A)}&=\frac{g(A_{p})}{g_{p^{r}}(A_{p})}\frac{g(A_{p}^{\perp})}{g_{p^{r}}(A_{p}^{\perp})}\\ &=\frac{e(\operatorname{sig}(A_{p})/8)}{|A_{p}|^{1/2}}\chi_{A_{p}^{\perp}}(p^{r}),\end{split}\]
where \(\chi_{A_{p}^{\perp}}\) is the quadratic character \(n\mapsto\left(\frac{n}{|A_{p}^{\perp}|}\right)\). For the evaluation of \(\frac{g(A_{p})}{g_{p^{r}}(A_{p})}\) we have used Milgram's formula and the fact that \(g_{p^{r}}(A_{p})=|A_{p}|\). Taking this into account, we obtain in the case \(p\mid|A|\)
\[\lambda_{F,p}(T_{k,l})=p^{(k-l)(\kappa/2-1)}\frac{g_{p^{k}}(A_{p})}{g(A_{p})}\chi_{A_{p}^{\perp}}(p^{k})\lambda_{f}(m(p^{l-k},1)). \tag{6.14}\]
Combining the equations (6.12) and (6.14), we find
\[\begin{split}\chi_{F,p}(m(p,p))&=\begin{cases}\left( \frac{g_{p}(A_{p})}{g(A_{p})}\right)^{2}\chi_{A_{p}^{\perp}}(p)\lambda_{f}(m(1,1 )),&p\mid|A|,\\ \chi_{A_{p}^{\perp}}(p)\lambda_{f}(m(1,1)),&(p,|A|)=1\end{cases}\\ &=\begin{cases}|A_{p}|e(-\operatorname{sig}(A_{p})/4)\chi_{A_{p}^{ \perp}}(p),&p\mid|A|,\\ \chi_{A_{p}^{\perp}}(p),&(p,|A|)=1.\end{cases}\end{split} \tag{6.15}\]
Replacing \(\chi_{F,p}(m(p,p))\) in (6.8) with the right-hand side of (6.15) gives
\[1+\chi_{F,p}(m(p,p))p^{\frac{m}{2}+3l-5}=\begin{cases}1+|A_{p}|e(-\operatorname{sig}(A_{p})/4)\chi_{A_{p}^{\perp}}(p)p^{\frac{m}{2}+3l-5},&p\mid|A|,\\ 1+\chi_{A_{p}^{\perp}}(p)p^{\frac{m}{2}+3l-5},&(p,|A|)=1.\end{cases} \tag{6.16}\]
Note that \(A_{p}\) is anisotropic and \(\frac{m}{2}>l+3\). Thus, as \(|A_{p}|\geq p\), we have the estimate
\[|A_{p}|p^{\frac{m}{2}+3l-5}\geq p\]
and we can easily conclude that \(1+\chi_{F,p}(m(p,p))p^{\frac{m}{2}+3l-5}\) is non-zero in both of the cases treated above.
## 7. Injectivity of the Kudla-Millson theta lift
In this section, we generalize the results in [BF], Chap. 4, to cusp forms of type \(\rho_{A}\) where \(L\) is not unimodular. We follow quite closely the steps of the proof in [BF], which carry over with some modifications to the general case. To this end, we keep the notation of Section 4 and return to the adelic setup of Section 5 as we want to make use of the adelic version of the Siegel-Weil formula. Throughout this section we suppose that the weight \(\kappa\) is fixed and equal to
\[\kappa=\frac{m}{2}+l.\]
Let \(C(V)\) be the Clifford algebra of the quadratic space \(V\). It splits into a direct sum
\[C(V)=C^{+}(V)\oplus C^{-}(V),\]
where \(C^{+}(V)\) is the subalgebra of even elements of \(C(V)\). We write \(C^{+}(V)^{\times}\) for the group of invertible elements in \(C^{+}(V)\). The general spin group \(H=\operatorname{GSpin}(V)\) is defined by
\[\operatorname{GSpin}(V)=\left\{g\in C^{+}(V)^{\times}\bigm{|}gVg^{-1}=V\right\}.\]
One can show that the action \(\alpha\) of \(H\) on \(V\) given by
\[g\mapsto\alpha(g),\quad\alpha(g)(v)=gvg^{-1} \tag{7.1}\]
leaves the quadratic form \(Q\) invariant. In fact, the group \(H\) is connected to \(\operatorname{SO}(V)\) by the following exact sequence
\[1\longrightarrow\mathbb{G}_{m}\longrightarrow H\overset{\alpha}{ \longrightarrow}\operatorname{SO}(V)\longrightarrow 1.\]
Note that the same construction generalizes to lattices \(L\) in \(V\) with the inclusion \(C(L)\subset C(V)\) and \(\operatorname{GSpin}(L)\subset\operatorname{GSpin}(V)\). A good reference for further details is [AD]. Let \(K_{f}^{H}=\prod_{p}K_{p}^{H}\) be an open compact subgroup of \(H(\mathbb{A}_{f})\) which leaves \(L\) invariant and acts trivially on \(A\) (see Remark 7.1 for the action on \(L\) and \(L^{\prime}/L\)). To lighten the notation, we write \(K\) instead
of \(K_{f}^{H}\). Then there is a Shimura variety \(X_{K}\) over \({\mathbb{Q}}\) associated to the Shimura datum \((D,H)\) whose \({\mathbb{C}}\)-points are of the form
\[X_{K}({\mathbb{C}})=H({\mathbb{Q}})\backslash(D\times H({\mathbb{A}}_{f}))/K. \tag{7.2}\]
We identify \(X_{K}\) with \(X_{K}({\mathbb{C}})\). It is well known (see [Mi], Lemma 5.13, or [Ku1], p. 44-45) that the Shimura variety \(X_{K}\) allows a finite decomposition into connected components. To describe these components, we note that by the strong approximation theorem one has
\[H({\mathbb{A}}_{f})=\bigsqcup_{i}H({\mathbb{Q}})^{+}h_{i}K, \tag{7.3}\]
with \(h_{i}\in H({\mathbb{A}}_{f})\), where \(H({\mathbb{Q}})^{+}=H({\mathbb{R}})^{+}\cap H({\mathbb{Q}})\) and \(H({\mathbb{R}})^{+}\) is the component of the identity of \(H({\mathbb{R}})\). Then
\[X_{K}\cong\bigsqcup_{i}\Gamma_{i}\backslash D^{+} \tag{7.4}\]
with \(\Gamma_{i}=H({\mathbb{Q}})^{+}\cap h_{i}Kh_{i}^{-1}\) being a congruence subgroup of \(H({\mathbb{Q}})^{+}\). Throughout the rest of the paper we assume that the image of \(H({\mathbb{Q}})^{+}\cap K\) in \(SO^{+}(V)({\mathbb{A}}_{f})\) is isomorphic to a subgroup of finite index of the discriminant kernel \(\Gamma(L)\) (see Remark 8.2). It is well known that \(X_{K}\) has the analytic structure of a complex orbifold and is a complex manifold if \(K\) is neat. The same holds for the locally symmetric spaces \(\Gamma_{i}\backslash D^{+}\). The isomorphism (7.4) yields
\[{\mathcal{A}}^{q}(X_{K})\cong[{\mathcal{A}}^{q}(D)\otimes C^{\infty}(H({ \mathbb{A}}_{f}))]^{H({\mathbb{Q}})\times K}\cong\bigoplus_{i}{\mathcal{A}}^{ q}(D^{+})^{\Gamma_{i}}, \tag{7.5}\]
where the second isomorphism is obtained by mapping a differential form \(\eta(z,h)\) to the vector \((\eta(z,h_{i}))_{i}\), see [Ku1], p. 69.
**Remark 7.1**.:
1. The action of \(H({\mathbb{A}}_{f})\) on \(L\) and \(L^{\prime}\) via \(\alpha\) in (7.1) is understood as action on \(L=\cap_{p}\left(V({\mathbb{Q}})\cap L_{p}\right)=V({\mathbb{Q}})\cap\hat{L}\) and \(L^{\prime}=\cap_{p}\left(V({\mathbb{Q}})\cap L^{\prime}_{p}\right)\), respectively. We write
\[L^{h}=\alpha(h)(L).\]
As \((\alpha(h)(L))^{\prime}=\alpha(h)(L^{\prime})\), the action of \(h\in H({\mathbb{A}}_{f})\) on \(L\) and \(L^{\prime}\) induces an action on the discriminant group \(A\). We obviously have \(A\cong(L^{h})^{\prime}/L^{h}\) and write \(A^{h}\) for \((L^{h})^{\prime}/L^{h}\) and \(h\mu=\alpha(h)(\mu)\) for any \(\mu\in A\). As is pointed out in [HMP], the map \[h\cdot\sum_{\mu\in A}c_{\mu}\mathfrak{e}_{\mu}=\sum_{\mu\in A}c_{\mu} \mathfrak{e}_{h\mu}\] defines an isomorphism \({\mathbb{C}}[A]\cong{\mathbb{C}}[A^{h}]\) and isomorphic representations \(\rho_{A}\) and \(\rho_{A^{h}}\) for each \(h\in H({\mathbb{A}}_{f})\). Consequently, the spaces \(H_{\kappa,A}\) and \(H_{\kappa,A^{h}}\) of weak Maass forms are isomorphic (by the map \(f\mapsto h\cdot f\)) and clearly the same is true for all specified subspaces in Section 3. We write \(f^{h}\) for \(h\cdot f\). We will later make use of the following fact, which can be found in [Ho], Lemma (10.2.8): If the image of \(\Gamma_{K}:=\Gamma_{1}=H({\mathbb{Q}})^{+}\cap K\) is exactly \(\Gamma(L)\), then the image of \(\Gamma_{i}=H({\mathbb{Q}})^{+}\cap h_{i}Kh_{i}^{-1}\) is given by \(\Gamma(L^{h_{i}})\).
2. Let \(\varphi_{q,l}\) be the Schwartz form (4.8) and \(\varphi_{\mu},\ \mu\in A\) as in (5.13). Attached to \(\varphi_{q,l}\) and \(\varphi_{\mu}\), we define a vector valued Siegel theta function on \(D\times H({\mathbb{A}}_{f})\). Similar to (5.4)
we put
\[\vartheta((g_{\tau},1_{f}),(z,h),\varphi_{q,l}\otimes\varphi_{\mu})=\sum_{x\in V( \mathbb{Q})}\varphi_{\mu}(h^{-1}x)\omega((g_{\tau},1_{f}))\varphi_{q,l}(x,z),\]
where \(h\in H(\mathbb{A}_{f})\) and \(g_{\tau}=n(u)m(\sqrt{v})\in G^{\prime}(1)\) moves \(i\) to \(\tau=u+iv\in\mathbb{H}\). Analogous to (2.13), we subsequently set
\[\begin{split}\Theta_{A}(\tau,(z,h),\varphi_{q,l})& =\sum_{\mu\in A}\vartheta((g_{\tau},1_{f}),(z,h),\varphi_{q,l} \otimes\varphi_{\mu})\varphi_{\mu}\\ &=v^{-\kappa/2}\sum_{\mu\in A}\sum_{\lambda\in h(\mu+L)}\varphi_{q,l}(\sqrt{v}\lambda,z)e^{\pi i(\lambda,\lambda)u}\varphi_{\mu},\end{split} \tag{7.6}\]
where \(h(\mu+L)\) is meant in sense of part i) of this remark. We sometimes use the notation \(\Theta_{\mu}(\tau,(z,h),\varphi_{q,l})\) for the \(\mu\)-th component of \(\Theta_{A}\).
Note that \(\Theta_{A}\) descends to a \(q\)-form on \(X_{K}\) since \(K\) stabilizes \(L\) and acts trivially on \(A\). In this case \(\varphi_{\mu}\) is also invariant under the action of \(K\). Based on Theorem 4.1, the usual arguments then yield the following important properties of \(\Theta_{A}(\tau,(z,h),\varphi_{q,l})\) (cf. [FM], Prop. 7.1, and [Ku2], p. 301).
**Theorem 7.2**.: _Let \(K\subset H(\mathbb{A}_{f})\) be as above. Then \(\Theta_{A}(\tau,(z,h),\varphi_{q,l})\) defines a \(\operatorname{Sym}^{l}(V)\)-valued closed \(q\)-form on the Shimura variety \(X_{K}\). Also, as a function on \(\mathbb{H}\) it is a non-holomorphic vector-valued modular form of weight \(\kappa\) transforming according to \(\rho_{A}\)._
In view of this theorem and the fact that \(\vartheta((g_{\tau},1_{f}),h,\varphi_{q,l}(z)\otimes\varphi_{\mu})\) is slowly increasing in \(\tau\) (see [Ku2], p. 324), the following definition makes sense.
**Definition 7.3**.: Let \(f=\sum_{\lambda\in L^{\prime}/L}f_{\lambda}\mathfrak{e}_{\lambda}\in S_{ \kappa,A}\) be a cusp form. Then \(f\mapsto\Lambda(f)\) with
\[\Lambda(f)(z,h):=\int_{\Gamma\setminus\mathbb{H}}\langle f(\tau),\Theta_{A}( \tau,(z,h),\varphi_{q,l})\rangle\operatorname{Im}(\tau)^{\kappa}d\mu(\tau) \tag{7.7}\]
defines a linear map
\[\Lambda:S_{\kappa,A}\longrightarrow\mathcal{Z}^{q}(X_{K},\widetilde{ \operatorname{Sym}}^{l}(V))\]
Here \(\widetilde{\operatorname{Sym}}^{l}(V)\) is the local system on \(D\) associated to \(\operatorname{Sym}^{l}(V)\).
Following [BF], the \(L^{2}\)-norm of \(\Lambda\) is defined as
\[\|\Lambda(f)\|_{2}^{2}=\int_{X_{K}}\Lambda(f)\wedge*\overline{\Lambda(f)}. \tag{7.8}\]
The subsequent proposition ensures that \(\Lambda(f)\) is square integrable. It follows from the scalar valued companion statement in Prop. 4.1, [BF]. To phrase this result, we introduce the following notation:
In accordance with (7.6) we write
\[\begin{split}&\Theta_{A^{2}}(\tau_{1},\tau_{2},(z,h),\phi_{q,l})\\ &=(v_{1}v_{2})^{-\kappa/2}\sum_{(\lambda,\nu)\in A^{2}}\left(\sum_{\mathbf{x}\in V^{2}(\mathbb{Q})}\varphi_{(\lambda,\nu)}(h^{-1}\mathbf{x})\omega_{\infty}(\iota(g_{\tau_{1}},g_{\tau_{2}}))\phi_{q,l}(\mathbf{x},z)\right)\varphi_{(\lambda,\nu)},\end{split} \tag{7.9}\]
where \(\phi_{q,l}\) is the Schwartz function in (4.10), \(\varphi_{(\lambda,\nu)}=\varphi_{\lambda}\otimes\varphi_{\nu}\) and \(\iota\) is the standard embedding in (2.16). Finally, we set
\[I(\tau_{1},\tau_{2},\phi_{q,l})=\int_{X_{K}}\Theta_{A^{2}}(\tau_{1},\tau_{2}, \phi_{q,l})\mu, \tag{7.10}\]
where \(\mu\) is the volume form on \(D\) specified in Section 4. A similar theta integral is also studied in [Ku2]. Given Proposition 7.6, this integral exists if Weil's convergence criterion is fulfilled.
**Proposition 7.4**.: _Let the assumptions of Definition 7.3 hold and let \(m>r_{0}+3\) be such that Theorem 5.1 holds. Then \(\Lambda(f)\) is square integrable and_
\[\|\Lambda(f)\|_{2}^{2}=\left(f(\tau_{1})\otimes\overline{f(\tau_{2})},I(\tau_ {1},-\overline{\tau}_{2},\phi_{q,l})\right), \tag{7.11}\]
_where \((\cdot,\cdot)\) is the Petersson scalar product on \(S_{\kappa,A}\otimes S_{\kappa,A}\)._
Proof.: As in Prop. 4.1 of [BF], we argue that \(\|\Lambda(f)\|_{2}^{2}\) indeed exists if (7.11) holds. As proved in [Ku2], Theorem 3.1, in a more general situation, and as done in the scalar valued case in Prop. 4.1 of [BF], we may exchange the order of the integrals over \(X_{K}\) and \(\Gamma\backslash\mathbb{H}\). Following the rest of the proof of Prop. 4.1, for any pair \(\lambda,\nu\in A\) we also have
\[\Theta_{\lambda}(\tau_{1},\varphi_{q,l}\otimes\varphi_{\lambda})\wedge\overline {\Theta_{\nu}(\tau_{2},*\varphi_{q,l}\otimes\varphi_{\nu})}=\Theta_{(\lambda, \nu)}(\tau_{1},-\overline{\tau_{2}},\phi_{q,l}\otimes(\varphi_{\lambda}\otimes \varphi_{\nu}))\mu,\]
where \(\Theta_{(\lambda,\nu)}\) is the component of \(\Theta_{A^{2}}\) belonging to the index \((\lambda,\nu)\in A^{2}\). In view of (3.11) we obtain the assertion.
In the next result we want to replace the theta kernel \(\phi_{q,l}\) in the integral \(I\) with the Schwartz function \(\xi\) in (4.11) with the help of the relation (4.12). To this end, we have to interpret \(\xi\) as an element of \(\left[S(V(\mathbb{R})^{2})\otimes C^{\infty}(D)\right]^{G(\mathbb{R})}\). In view of Lemma 4.2, \(i)\), we may set
\[\xi(\mathbf{x},z)=\xi(g^{-1}\mathbf{x}) \tag{7.12}\]
with \(g\in G(\mathbb{R})\) such that \(gz_{0}=z\), where \(z_{0}\in D\) is a fixed base point.
**Proposition 7.5**.: _Let \(\xi\) be the Schwartz function in (7.12), and let \(\Theta_{A^{2}}(\tau_{1},\tau_{2},(z,h),\xi)\) and \(I(\tau_{1},\tau_{2},\xi)\) be as in (7.9) and (7.10), respectively, with \(\phi_{q,l}\) replaced by \(\xi\). Then_
\[\|\Lambda(f)\|_{2}^{2}=\left(f(\tau_{1})\otimes\overline{f(\tau_{2})},I(\tau_ {1},-\overline{\tau_{2}},\xi)\right). \tag{7.13}\]
_Moreover, \(\Lambda\) vanishes identically if \(p=1\) and \(q+l>1\)._
Proof.: The result is an immediate consequence of Prop. 4.3 and Corollary 4.4 in [BF] combined with (7.11). The second assertion is due to Lemma 4.2, \(iii)\).
The next proposition shows that the theta integral \(I\) over \(X_{K}\) can be written in terms of the theta integral in the Siegel-Weil formula in (7.13). This justifies the convergence of \(I\), which was required in Prop. 7.4. The analogous statement in a scalar valued setting can be found in [BF], Prop. 4.6, [Ku2], Prop. 4.17. These papers presume that \(X_{K}\) is a manifold and require the condition
\[Z(\mathbb{A}_{f})\cap K=\widehat{\mathbb{Z}}^{\times} \tag{7.14}\]
to be satisfied, where \(Z(\mathbb{A}_{f})\) means the center of \(H(\mathbb{A}_{f})\). However, the proof of Prop. 4.17 in [Ku2] should still work if \(X_{K}\) is an orbifold. But for our purposes it is sufficient for \(X_{K}\) to be a complex manifold.
**Proposition 7.6**.: _Let \(m>r_{0}+3\) such that Theorem 5.1 holds. Suppose further that the image \(\alpha(H(\mathbb{Q})^{+}\cap K)\) in \(SO(V)(\mathbb{A}_{f})\) is isomorphic to the discriminant kernel and that \(Z(\mathbb{A}_{f})\cap K\cong\widehat{\mathbb{Z}}^{\times}\). Then_
\[\frac{1}{\operatorname{vol}(X_{K},\mu)}I(\tau_{1},\tau_{2},\xi)=(v_{1}v_{2})^{- \kappa/2}\int_{O(V)(\mathbb{Q})\setminus O(V)(\mathbb{A})}\vartheta_{L}( \iota(g_{\tau_{1}},g_{\tau_{2}}),h,\xi)dh. \tag{7.15}\]
Proof.: By Remark 7.1, i), \(\alpha(\Gamma_{i})\cong\Gamma(L^{h_{i}})\) for all \(i\). It is stated in [Br1], p. 115, that \(\Gamma(L^{h_{i}})\backslash D^{+}\) is a Riemannian manifold, which implies that \(X_{K}\) is a complex manifold. Applied to each component on both sides of (7.15), Prop. 4.6 in [BF1] and Prop. 4.17 in [Ku2] yield the claimed assertion.
The Siegel-Weil formula in Corollary 5.2 combined with Proposition 7.6 allows us to express the \(L^{2}\)-norm of \(\Lambda(f)\) as a Rankin-Selberg type integral. The doubling method for our setup then leads to a formula for \(\|\Lambda(f)\|_{2}^{2}\) in terms of a special value of the standard zeta function associated to a common Hecke eigenform \(f\). For the next theorem we use the following notation:
\[K(A_{p},m,l)=\left(\left(\frac{e(\operatorname{sig}(A_{p})/8)}{|A_{p}|^{1/2}}-1\right)+L_{p}\left(\frac{m}{2}-l+2,\chi_{A_{p}^{\perp}}\right)\right)^{-1}\]
(see Section 6). Under the assumption \(\frac{m}{2}>l-1\), taking into account that \(\chi_{A_{p}^{\perp}}\) is a quadratic Dirichlet character, we find that
\[\left(\frac{e(\operatorname{sig}(A_{p})/8)}{|A_{p}|^{1/2}}-1\right)+L_{p}\left( \frac{m}{2}-l+2,\chi_{A_{p}^{\perp}}\right)\neq 0\]
for each \(p\) in the above product.
**Theorem 7.7**.: _Let \(m,q,l\) be as before with \(m>\max(6,2l-2,3+r_{0})\) and \(s_{0}=(m-3)/2\). Furthermore, we assume that the conditions of Proposition 7.6 are satisfied and that \(q+l\), \(\kappa=\frac{m}{2}+l\) are even and \(A\) is an anisotropic quadratic module. If \(f\in S_{\kappa,A}\) is a common eigenform of all Hecke operators \(T\left(\begin{smallmatrix}d^{2}&0\\ 0&1\end{smallmatrix}\right)\), we have_
\[\begin{split}\frac{1}{\operatorname{vol}(X_{K},\mu)}\frac{\| \Lambda(f)\|_{2}^{2}}{\|f\|_{2}^{2}}&=C(s_{0})K(\kappa,-l/2)L \left(\frac{m}{2}-l+2,\chi_{A}\right)^{-1}\times\\ &\times\prod_{p||A|}K(A_{p},m,l)L(-\frac{m}{4}-\frac{3l}{2}+3,f), \end{split} \tag{7.16}\]
_where_
\[K(\kappa,s)=\frac{e(\operatorname{sig}(A)/8)}{|A|^{1/2}}(-1)^{s+\frac{\kappa} {2}}2^{2-2s-\kappa+1}\frac{\Gamma(\kappa+s-1)}{\Gamma(\kappa+s)}. \tag{7.17}\]
Proof.: By the Propositions 7.4, 7.6 and the Siegel-Weil formula in Corollary 5.2 we have
\[\begin{split}&\frac{1}{\operatorname{vol}(X_{K},\mu)}\|\Lambda(f)\|_{2}^{2} =\left(f(\tau_{1})\otimes\overline{f(\tau_{2})},\frac{1}{\operatorname{vol}(X _{K},\mu)}I(\tau_{1},-\tau_{2},\xi)\right)\\ &=\left(f(\tau_{1})\otimes\overline{f(\tau_{2})},(v_{1}v_{2})^{- \kappa/2}\int_{O(V)(\mathbb{Q})\setminus O(V)(\mathbb{A})}\vartheta_{L}( \iota(g_{\tau_{1}},g_{-\overline{\tau_{2}}}),h,\xi)dh\right)\\ &=\left(f(\tau_{1})\otimes\overline{f(\tau_{2})},(v_{1}v_{2})^{- \kappa/2}\sum_{(\lambda,\nu)\in A^{2}}E(\iota(g_{\tau_{1}},g_{-\overline{\tau_ {2}}}),s_{0},\Xi\otimes\Phi_{(\lambda,\nu)})\varphi_{(\lambda,\nu)}\right). \end{split} \tag{7.18}\]
Bearing Proposition 5.3 in mind, we see that
\[(v_{1}v_{2})^{-\kappa/2}\sum_{(\lambda,\nu)\in A^{2}}E(\iota(g_{\tau_{1}},g_{- \overline{\tau_{2}}}),s_{0},\Xi\otimes\Phi_{(\lambda,\nu)})\varphi_{(\lambda,\nu)}\]
is nothing else but the Eisenstein series of genus \(2\) defined in [St1], Def. 3.13. That being said, we may apply Lemma 3.14 of [St1] and obtain for the right-hand side of (7.18)
\[C(s_{0})\left(f(\tau_{1})\otimes\overline{f(\tau_{2})},E_{\kappa,0}^{2}(\left( \begin{smallmatrix}\tau_{1}&0\\ 0&-\overline{\tau_{2}}\end{smallmatrix}\right),-\frac{l}{2})\right).\]
By means of [St2], Theorem 6.4, this becomes
\[C(s_{0})K(\kappa,-l/2)Z(2(-\frac{l}{2})+\kappa,f)\int_{\Gamma\backslash\mathbb{ H}}\sum_{\lambda\in A}\overline{f_{\lambda}(\tau_{2})}f_{\lambda}(\tau_{2})\operatorname {Im}(\tau_{2})^{\kappa}d\mu(\tau_{2}).\]
As \(\frac{m}{2}>l-1\), it is a classical result that \(L\left(\frac{m}{2}-l+2,\chi_{A}\right)\neq 0\) and we may then express \(Z(\frac{m}{2},f)\) in terms of \(\mathcal{Z}(-l+2,f)\) using equation (3.24) of [St3]. Subsequently, employing (7.25) of [St3] yields
\[Z(\frac{m}{2},f)=\prod_{p||A|}\left(\left(\frac{e(\operatorname{ sig}(A_{p})/8)}{|A_{p}|^{1/2}}-1\right)+L_{p}\left(\frac{m}{2}-l+2,\chi_{A_{p}^{ \perp}}\right)\right)^{-1}\times\] \[L\left(\frac{m}{2}-l+2,\chi_{A}\right)^{-1}L\left(-\frac{m}{4}- \frac{3}{2}l+3,f\right). \tag{7.19}\]
As a corollary we can deduce the injectivity of the lifting \(\Lambda\).
**Corollary 7.8**.: _Under the conditions of Theorem 7.7 the Kudla-Millson theta lift \(\Lambda\) (7.7) is injective._
Proof.: It suffices to prove that \(\|\Lambda(f)\|_{2}^{2}\) is non-zero for every non-zero \(f\). In view of (7.16) and (7.17) we need to show that \(L(-\frac{m}{4}-\frac{3l}{2}+3,f)\) is non-zero. But this is just the assertion of Theorem 6.1.
## 8. Surjectivity of the Borcherds lift
In this section we pick up the question from the introduction of whether a modular form for some orthogonal group with zeroes and poles located on Heegner divisors can be realized as a Borcherds lift of weakly holomorphic modular forms. The most general results in this direction are given in [Br2]. We focus here on Theorem 1.4 in [Br2], which essentially only assumes that the level of the lattice \(L\) is a prime number. In particular, it is not required that \(L\) splits a lattice of the form \(U\oplus U(N)\) over \(\mathbb{Z}\). Our assertion is in the same vein. We do not impose any further restrictions on the lattice, but assume that the discriminant group \(A=L^{\prime}/L\) is anisotropic. Thus,
\[A=\bigoplus_{p}A_{p},\]
where each \(p\)-component \(A_{p}\) is anisotropic (see (2.2)).
Before stating our results, we briefly gather the necessary facts on modular forms on orthogonal groups and review in some detail the involved lifts in an adelic setting suited to our needs. We follow loosely [Br1], [Ku2]. We adopt the notation from Section 4 and 7, but restrict ourselves to the Hermitian case and assume that \(l\) (the parameter of the Schwartz form \(\varphi_{q,l}\)) is
zero. Accordingly, \((V(\mathbb{R}),Q)\) is a quadratic space of type \((p,2)\) and \(D\) is the Grassmannian of negative definite oriented subspaces \(z\subset V(\mathbb{R})\) of dimension \(2\). By \(D^{+}\) we mean one of its two connected components. Let \(G(\mathbb{R})^{+}\) be the subgroup of \(G(\mathbb{R})=O(V)(\mathbb{R})\) which preserves \(D^{+}\) and \(D^{-}\). It acts transitively on \(D^{+}\). Also, we define and write \(X_{K}=H(\mathbb{Q})\backslash D\times H(\mathbb{A}_{f})/K\) with the same meaning as in the section before. In particular, \(K\) is an open compact subgroup of \(H(\mathbb{A}_{f})\) which preserves \(L\) and acts trivially on \(A\). Furthermore, there are two weights involved in this section. On the one hand we reserve \(\kappa\) for \(\frac{m}{2}=1+\frac{p}{2}\), on the other hand we use \(\ell=2-\kappa=1-\frac{p}{2}\). We stick with this notation throughout this section.
To define an analogue of the upper half plane in the orthogonal setting, we give \(D\) a complex structure. For this purpose, we consider the complexified space \(V(\mathbb{C})=V\otimes_{\mathbb{Q}}\mathbb{C}\) of \(V\) and extend \((\cdot,\cdot)\) to a \(\mathbb{C}\)-bilinear form. Then
\[\mathcal{K}=\{[z]\in P(V(\mathbb{C}))\mid(z,z)=0\text{ and }(z,\overline{z})<0\} \tag{8.1}\]
is a complex manifold with two connected components, which are exchanged by \(z\mapsto\overline{z}\). For \([z]\in\mathcal{K}\) we utilize the notation \(z=x+iy\). In terms of this decomposition one can show that \([z]\mapsto\mathbb{R}x+\mathbb{R}y\) is a bijection between \(\mathcal{K}\) and \(D\) inducing a complex structure on \(D\).
We write \(\mathcal{K}^{+}\) for the component in \(\mathcal{K}\), which corresponds to \(D^{+}\). Further, denote with \(\widetilde{\mathcal{K}}^{+}\) the preimage in \(V(\mathbb{C})\) of \(\mathcal{K}^{+}\) under the natural projection into the projective space \(P(V(\mathbb{C}))\). The following definition is taken from [Eh], Def. 1.5.21.
**Definition 8.1**.: A function \(F:\widetilde{\mathcal{K}}^{+}\times H(\mathbb{A}_{f})\to\mathbb{C}\) is called meromorphic modular form of weight \(r\in\mathbb{Z}\), level \(K\subset H(\mathbb{A}_{f})\) and unitary character \(\chi\) of finite order for \(H(\mathbb{Q})\) if
* \(z\mapsto F(z,h)\) is meromorphic for any fixed \(h\in H(\mathbb{A}_{f})\),
* \(F(z,hk)=F(z,h)\) for all \(k\in K\),
* \(F(tz,h)=t^{-r}F(z,h)\) for all \(t\in\mathbb{C}\backslash\{0\}\),
* \(F(\gamma z,\gamma h)=\chi(\gamma)F(z,h)\) for all \(\gamma\in H(\mathbb{Q})\),
* and \(F\) is meromorphic at the boundary components of \(\widetilde{\mathcal{K}}^{+}\).
**Remark 8.2**.: Let \(L\) be a lattice, which is additionally assumed to be a maximal lattice (that is, there is no even lattice \(M\) with \(L\subsetneq M\subset V\)), and set
\[K=H(\mathbb{A}_{f})\cap C^{+}(\hat{L})^{\times}, \tag{8.2}\]
where \(\hat{L}=L\otimes\hat{\mathbb{Z}}\) as in Section 5 and \(C^{+}(\hat{L})^{\times}\) is the unit group of the even Clifford algebra of the lattice \(\hat{L}\). With this choice Andreatta et al. showed that the set of \(\mathbb{C}\)-points of the GSpin Shimura variety (7.2) is connected if \(p\geq 2\) or the order of \(A\) is square-free (see [AGHM], Prop. 4.1.1). In this case and in view of (7.4) we consequently have
\[X_{K}(\mathbb{C})=\Gamma_{K}\backslash D^{+}.\]
It turns out that the group \(\Gamma_{K}=H(\mathbb{Q})^{+}\cap K\) can be specified explicitly: The map \(\alpha\) restricted to \(K\) defines a homomorphism \(K\to\operatorname{SO}(\hat{L})\) whose image is exactly the subgroup of elements acting trivially on the discriminant group \(L^{\prime}/L\cong\hat{L}^{\prime}/\hat{L}\). Therefore \(\alpha(\Gamma_{K})\) can be identified with the discriminant kernel \(\Gamma(L)\) as defined in [Br1] or [Br2]. We may conclude that in this setting for any meromorphic modular form \(F\) in the sense of Definition 8.1 the function \(F(\cdot,1)\) behaves like a meromorphic modular form of the same weight for the orthogonal group \(\Gamma(L)\). Finally, it is worth mentioning that owing to its definition, \(K\) satisfies (7.14).
Examples of modular forms of Definition 8.1 can be constructed by means of the celebrated Borcherds lift. In its original form ([B], Theorem 13.3), it takes weakly holomorphic modular
forms \(f\in M^{!}_{\ell,A}\) of weight \(\ell\) to meromorphic modular forms for orthogonal modular groups. Bruinier ([Br1]) extended the lift to harmonic weak Maass forms \(f\in H^{+}_{\ell,A}\) as inputs. Here we recall, in line with Def. 7.3, the regularized theta lift on \(D\times H(\mathbb{A}_{f})\). Its definition is based on an integral of the form
\[\Phi_{L}(f)(z,h)=\int_{\Gamma\backslash\mathbb{H}}^{\text{reg}}\langle f(\tau),\Theta_{A}(\tau,(z,h),\varphi_{\infty}^{p,2})\rangle d\mu(\tau), \tag{8.3}\]
where \(\Theta_{A}\) is defined in (7.6) with \(\varphi_{q,l}\) replaced with the Gaussian \(\varphi_{\infty}^{p,2}\) (cf. (2.14)). Since \(f\) grows exponentially for \(\text{Im}(\tau)\to\infty\), the corresponding integral over \(\Gamma\backslash\mathbb{H}\) has to be regularized according to Borcherds [B], p. 514 ff. The regularized integral is denoted with \(\int_{\Gamma\backslash\mathbb{H}}^{\text{reg}}\).
**Remark 8.3**.: Let \(f\in H^{+}_{\ell,A}\) (or an element of \(M^{!}_{\ell,A}\)) and let \(\Phi_{L}(f)(z,h)\) be the regularized theta lift (8.3). Howard observed in [HMP] that this adelic theta lift can be expressed as a classical regularized theta lift. More precisely, in terms of the notation of Remark 7.1 we have
\[\begin{split}\Phi_{L}(f)(z,h)&=\Phi_{L^{h}}(f^{h})(z )\\ &=\int_{\Gamma\backslash\mathbb{H}}^{\text{reg}}\langle f^{h}( \tau),\Theta_{A^{h}}(\tau,z,\varphi_{\infty}^{p,2})\rangle d\mu(\tau),\end{split} \tag{8.4}\]
which can be easily confirmed with the help of (7.6). For the same reasons, the same holds for the Kudla-Millson theta lift, i. e.
\[\begin{split}\Lambda(f)(z,h)&=\Lambda(f^{h})(z)\\ &=\int_{\Gamma\backslash\mathbb{H}}\langle f^{h}(\tau),\Theta_{A ^{h}}(\tau,z,\varphi_{q,l})\rangle d\mu(\tau),\end{split} \tag{8.5}\]
where the underlying lattice for the lift on the right-hand side is \(L^{h}\) as well.
It is a fundamental theorem of Borcherds ([B], Thm. 6.2, or [Br2], Thm. 2.12) that \(\Phi_{L}\) is a smooth function on \(X_{K}\) apart from logarithmic singularities along a certain divisor on \(X_{K}\). This divisor, denoted with \(Z(f)\), is determined by the principal part of the lifted form \(f\) and is a linear combination of so-called _Heegner divisors_, which we will now briefly describe. Following [Ku1], let \(x\in V(\mathbb{Q})\) with \(Q(x)>0\) and \(V_{x}=x^{\perp}\) the orthogonal complement of \(x\) in \(V(\mathbb{Q})\). We write \(H_{x}\) for the stabilizer of \(x\) in \(H\). As is noted in [Ku2], we have \(H_{x}\cong\text{GSpin}(V_{x})\). Further,
\[D_{x}=\{z\in D\mid(z,x)=0\} \tag{8.6}\]
defines an analytic set of codimension one in \(D\). We put \(D_{x}^{+}=D_{x}\cap D^{+}\). For \(h\in H(\mathbb{A}_{f})\) we define a divisor \(Z(x,h,K)\) on \(X_{K}\) by the image of the map
\[\begin{split} H_{x}(\mathbb{Q})\backslash D_{x}\times H_{x}(\mathbb{A}_{f})/\left(H_{x}(\mathbb{A}_{f})\cap hKh^{-1}\right)&\longrightarrow H(\mathbb{Q})\backslash D\times H(\mathbb{A}_{f})/K,\\ &(z,h_{1})\mapsto(z,h_{1}h).\end{split} \tag{8.7}\]
It can be shown that \(Z(x,h,K)\) is rational over \(\mathbb{Q}\). Let \(\mu\in A\). In terms of an \(n\in\mathbb{Q}_{>0}\) and the Schwartz function \(\varphi_{\mu}\) (see (5.13)) we introduce a weighted sum \(Z(n,\varphi_{\mu},K)\) of these divisors. To that end, we consider the set
\[\Omega_{n}=\left\{x\in V\mid Q(x)=n\right\}. \tag{8.8}\]
According to [Ku2], we may write for a fixed \(x_{0}\in\Omega_{n}\)
\[\operatorname{supp}(\varphi_{\mu})\cap\Omega_{n}(\mathbb{A}_{f})=\bigsqcup_{r} Kx_{r}^{-1}\cdot x_{0} \tag{8.9}\]
with a finite set of elements \(x_{r}\in H(\mathbb{A}_{f})\). We then define \(Z(n,\varphi_{\mu},K)\) by
\[Z(n,\varphi_{\mu},K)=\sum_{r}\varphi_{\mu}(x_{r}^{-1}\cdot x_{0})Z(x_{0},x_{r},K) \tag{8.10}\]
and \(Z(n,\varphi_{\mu},K)=0\) if \(\Omega_{n}(\mathbb{Q})\) is empty. Finally, \(Z(f)\) is given by
\[Z(f)=\frac{1}{2}\sum_{\mu\in A}\sum_{n>0}c^{+}(-n,\mu)Z(n,\varphi_{\mu},K). \tag{8.11}\]
Here \(c^{+}(-n,\mu)\) are the Fourier coefficients of the principal part of \(f\) (see (3.16)).
With respect to (7.4) the divisor \(Z(n,\varphi_{\mu},K)\) can be decomposed into a finite sum of divisors \(Z_{i}(n,\varphi_{\mu},K)\), where \(Z_{i}(n,\varphi_{\mu},K)\) is a divisor on the connected component \(\Gamma_{i}\backslash D^{+}\):
\[Z(n,\varphi_{\mu},K)=\sum_{i}Z_{i}(n,\varphi_{\mu},K)\text{ with }Z_{i}(n, \varphi_{\mu},K)=\sum_{x\in\Gamma_{i}\backslash\Omega_{n}(\mathbb{Q})}\varphi _{\mu}(h_{i}^{-1}x)\operatorname{pr}_{i}(D_{x}^{+}), \tag{8.12}\]
where \(\operatorname{pr}_{i}\) maps \(z\in D_{x}^{+}\) to \(\Gamma_{i}z\) in \(\Gamma_{i}\backslash D^{+}\). Note that \(Z_{i}(n,\varphi_{\mu},K)\) can be written in a more explicit way by
\[Z_{i}(n,\varphi_{\mu},K)=\sum_{\begin{subarray}{c}\lambda_{i}\in\Gamma_{i} \backslash h_{i}\mu+L^{h_{i}}\\ Q(\lambda_{i})=n\end{subarray}}\operatorname{pr}_{i}(D_{\lambda_{i}}^{+}). \tag{8.13}\]
Also, as \(K\) leaves \(L\) invariant and stabilizes \(A\), the same holds for \(\Gamma_{i}\) regarding \(L^{h_{i}}\). We use the notation \(\mu_{i}=h_{i}\mu\). Thus, the sum in (8.13) is invariant under the action of \(\Gamma_{i}\) and consequently
\[\begin{split} Z_{i}(n,\varphi_{\mu},K)&=\operatorname{pr}_{i}\left(\sum_{\begin{subarray}{c}\lambda_{i}\in h_{i}\mu+L^{h_{i}}\\ Q(\lambda_{i})=n\end{subarray}}D_{\lambda_{i}}^{+}\right)\\ &=\sum_{\begin{subarray}{c}\lambda\in\mu_{i}+L^{h_{i}}\\ Q(\lambda)=n\end{subarray}}D_{\lambda}^{+}.\end{split} \tag{8.14}\]
Note that the last expression in (8.14) is just the Heegner divisor \(H(\mu_{i},n)\) as defined in [Br1]. For simplicity, we write \(Z(n,\mu)\) and \(Z_{i}(n,\mu_{i})\) instead of \(Z(n,\varphi_{\mu},K)\) and \(Z_{i}(n,\varphi_{\mu},K)\), respectively.
Now Theorem 13.3 in [B] can be transferred to our adelic setting (see [Ku2], Thm 1.3 or [Eh], Thm. 1.8.1), which reads as follows:
**Theorem 8.4**.: _Let \(f\in M_{\ell,L}^{!}\) with \(c(\mu,n)\in\mathbb{Z}\) for all \(n<0\) and \(c(\mu,n)\in\mathbb{Q}\) for all \(n\in\mathbb{Z}+Q(\mu)\). Then there is a function \(\Psi_{L}(f)(z,h)\) on \(D\times H(\mathbb{A}_{f})\) such that_
* \(\Psi_{L}(f)(z,h)\) _is a meromorphic modular form of weight_ \(c(0,0)/2\)_, level_ \(K\) _and some unitary character of finite order for_ \(H(\mathbb{Q})\)_._
* _the divisor of_ \(\Psi_{L}(f)(z,h)\) _on_ \(X_{K}\) _is given by_ \(Z(f)\)
* \(\Psi_{L}\) _is related to_ \(\Phi_{L}\) _by the equation_ (8.15) \[-4\log|\Psi_{L}(f)(z,h)|=\Phi_{L}(f)(z,h)+c(0,0)(2\log|\operatorname{Im}(z)|+\log(2\pi)+\Gamma^{\prime}(1))\] _Equivalently, we may write_ (8.16) \[-2\log\|\Psi_{L}(f)(z,h)\|_{Pet,c(0,0)/2}^{2}=\Phi_{L}(f)(z,h)+c(0,0)(\log(2\pi)+\Gamma^{\prime}(1)),\] _where_ \(\|\Psi_{L}(f)(z,h)\|_{Pet,r}=|\Psi_{L}(f)(z,h)||\operatorname{Im}(z)|^{r}\) _is the Petersson norm of weight_ \(r\) _with_ \(|\operatorname{Im}(z)|=|(\operatorname{Im}(z),\operatorname{Im}(z))|^{1/2}\)_._
Note that we get for \(h=1\) the original regularized theta lift \(\Phi_{L}(f)(z,1)\) and the original Borcherds lift \(\Psi_{L}(f)(z,1)\) back. Also, in view of Remark 8.3, we have \(\Psi_{L}(f)(z,h)=\Psi_{L^{h}}(f^{h})(z)\), i. e. the classical Borcherds lift of \(f^{h}\) for the orthogonal group attached to the lattice \(L^{h}\).
We now address the main question of this paper. We proceed as in [BF], Sect. 1.1. The proof given therein is essentially based on [BF1], Thm. 6.1, and [Br1], Thm. 4.23. We present for both theorems a version in our adelic setup fitting to the statement of the Kudla-Millson theta lift and its injectivity in Section 7. As both theorems rely on the fact that the underlying Hermitian space is a Riemannian manifold, we adopt this assumption (although the theorem below should also hold for the more general situation where \(X_{K}\) is a complex orbifold).
**Theorem 8.5**.: _Let \(V\) be of type \((p,2)\) and \(f\in H^{+}_{\ell,A^{-}}\) with the Fourier expansion as in (3.15) and suppose that \(\alpha(\Gamma_{K})=\Gamma(L)\). Then for all \((z,h)\in(D\backslash Z(f))\times H(\mathbb{A}_{f})\) the identity_
\[dd^{c}\Phi_{L}(f)(z,h)=\Lambda(\xi_{\ell}(f))(z,h)+c^{+}(0,0)\Omega \tag{8.17}\]
_holds, where \(\Omega\) is the Kahler form on \(D\) as specified in [BF1]._
Proof.: Let \(X_{K}\cong\bigsqcup_{i}\Gamma_{i}\backslash D^{+}\) with \(\Gamma_{i}=H(\mathbb{Q})^{+}\cap h_{i}Kh_{i}^{-1}\) and \(h_{i}\in H(\mathbb{A}_{f})\) (cf. (7.4)). By virtue of Remark 8.3 for each \(h_{i}\) we have that \(f^{h_{i}}\in H^{+}_{\ell,(A^{h_{i}})^{-}}\). According to [B], Thm. 13.3, the divisor of \(\Psi_{L^{h_{i}}}(f^{h_{i}})\) is given by
\[Z(f^{h_{i}})=\frac{1}{2}\sum_{\mu\in A}\sum_{n>0}c^{+}(-n,\mu)Z_{i}(n,\varphi_ {\mu_{i}},K)\]
with \(Z_{i}(n,\varphi_{\mu_{i}},K)\) being defined in (8.14). Theorem 6.1 in [BF1] combined with Remark 8.3 now yields
\[dd^{c}\Phi_{L}(f^{h_{i}})(z)=\Lambda(\xi_{\ell}(f^{h_{i}}))(z)+c^{+}(0,0)\Omega \tag{8.18}\]
for all \(z\in D\backslash Z_{i}(f^{h_{i}})\). Now the function \(f\) corresponds via (7.5) to the collection of functions \((f(\cdot,h_{i}))_{i}\). As \(\Phi_{L}(f)\) and \(\Lambda(\xi_{\ell}(f))\) descend to differential forms on \(X_{K}\), they are invariant under the left-action of \(H(\mathbb{Q})\) and the right-action of \(K\). Therefore, the equations (8.18) lift to the corresponding equation (8.17) on \(X_{K}\). This equation holds for all \((z,h)\in X_{K}\setminus\sum_{i}Z(f^{h_{i}})\), where \(\sum_{i}Z(f^{h_{i}})\) is nothing else than the divisor \(Z(f)\) in (8.11) implying the claimed result.
The following theorem is a generalization of Theorem 4.23 in [Br1] to meromorphic modular forms on \(D\times H(\mathbb{A}_{f})\).
**Theorem 8.6**.: _Assume that \(\alpha(\Gamma_{K})\) is the discriminant kernel \(\Gamma(L)\). Let \(F:D\times H(\mathbb{A}_{f})\to\mathbb{C}\) be a meromorphic modular form of weight \(r\), character \(\chi\) and level \(K\subset H(\mathbb{A}_{f})\) with respect to \(H(\mathbb{Q})\) whose divisor is of the form_
\[\operatorname{Div}(F)=\frac{1}{2}\sum_{\mu\in L^{\prime}/L}\sum_{n>0}c^{+}(-n, \mu)Z(n,\mu). \tag{8.19}\]
_Then there exists a weak Maass form \(f\in H^{+}_{\ell,L^{-}}\) with principal part \(\sum_{\mu\in L^{\prime}/L}\sum_{n>0}c^{+}(-n,\mu)e(-n\tau)\mathfrak{e}_{\mu}\) such that_
\[\Phi_{L}(f)(z,h_{i})=-2\log\|F(z,h_{i})\|_{\text{Pet},\frac{r}{2}}+c_{i}, \tag{8.20}\]
_where \((h_{i})_{i}\) with \(h_{i}\in H(\mathbb{A}_{f})\) is a set of coset representatives of the double coset space in (7.4) and \((c_{i})_{i}\) is a set of constants._
Proof.: Let \((F_{h_{i}}=F(\cdot,h_{i}))_{i}\) be the collection of functions on \(D^{+}\) associated to \(F\) (as in (7.5)) with respect to the decomposition (7.4). By Remark 7.1, i), we have that \(\alpha(\Gamma_{i})=\Gamma(L^{h_{i}})\) for all \(i\). Taking this into account, the assumptions on \(F\) imply that \(F_{h_{i}}\) (restricted to \(D^{+}\)) is a meromorphic modular form for the orthogonal group \(\Gamma(L^{h_{i}})\) in the sense of [Br1], p. 83. According to (8.12) and (8.13) the component of \(\operatorname{Div}(F)\) on \(\Gamma_{i}\backslash D^{+}\) (identified with \(\Gamma(L^{h_{i}})\backslash D^{+}\)) is given by
\[Z_{i}(F_{h_{i}})=\frac{1}{2}\sum_{\mu\in L^{\prime}/L}\sum_{n>0}c^{+}(-n,\mu)Z _{i}(n,\mu_{i}). \tag{8.21}\]
Note that \(H(\mathbb{Q})^{+}\) acts on \(F_{h_{i}}\) by multiplication with the character \(\chi\). Thus, \(Z_{i}(F_{h_{i}})\) may be interpreted as divisor of \(F_{h_{i}}\) (although \(F_{h_{i}}\) is not a function on \(\Gamma_{i}\backslash D^{+}\)). Each component function \(F_{h_{i}}\) satisfies the assumptions of Theorem 4.23 in [Br1]. It follows from this theorem that the regularized theta lift
\[\Phi_{L}(z,h_{i})=-\frac{1}{8}\sum_{\mu\in L^{\prime}/L}\sum_{\begin{subarray}{c}n\in\mathbb{Z}+Q(\mu)\\ n>0\end{subarray}}c^{+}(-n,\mu)\Phi_{\mu_{i},n}(z) \tag{8.22}\]
satisfies the equation
\[\Phi_{L}(z,h_{i})=\log\|F_{h_{i}}(z)\|_{\text{Pet},\frac{r}{2}}+c_{i}, \tag{8.23}\]
where \(c_{i}\) is some constant. Here \(\Phi_{\mu_{i},n}\) denotes the regularized theta lift of the Poincare-Maass form of index \((\mu_{i},n)\) (see [Br1], Def. 1.8).
**Theorem 8.7**.: _Let \(m\) be the even rank of the lattice \(L\) satisfying \(m>\max(6,3+r_{0})\) and \(m\equiv 0\bmod 4\). Moreover, assume that the associated discriminant group \(A\) is anisotropic and that \(\alpha(\Gamma_{K})=\Gamma(L)\). Further, let \(F:D\times H(\mathbb{A}_{f})\to\mathbb{C}\) be a meromorphic modular form of weight \(r\) and level \(K\subset H(\mathbb{A}_{f})\) with respect to \(H(\mathbb{Q})\) whose divisor is a linear combination of Heegner divisors \(Z(n,\mu)\) (as in (8.19)). Then there exists a weakly holomorphic modular form \(f\in M^{!}_{\ell,L^{-}}\) such that \(F\) is up to a constant multiple the Borcherds lift \(\Psi_{L}\) of \(f\)._
Proof.: The proof is basically the same as in [BF], Corollary 1.7. From Theorem 8.6 we know that there is a harmonic weak Maass form \(f\in H^{+}_{\ell,L^{-}}\) with
\[\Phi_{L}(f)(z,h_{i})=-2\log\|F(z,h_{i})\|_{\text{Pet},\frac{r}{2}}+c_{i}\]
for some constant \(c_{i}\) for all \(i\). Applying the operator \(dd^{c}\) to both sides yields
\[dd^{c}\Phi_{L}(f)(z,h_{i})=-2dd^{c}\log\|F(z,h_{i})\|_{\operatorname{Pet},\frac{ r}{2}}=c^{+}(0,0)\Omega \tag{8.24}\]
for all \(i\). On the other hand, by Theorem 8.5 the identity (8.17) holds for all \((z,h)\in X_{K}\backslash Z(f)\). Comparing the equations (8.17) and (8.24), we may conclude that \(\Lambda(\xi_{\ell}(f))(z,h)=0\) for all \((z,h)\). By the injectivity of the Kudla-Millson theta lift, Corollary 7.8, it follows \(\xi_{\ell}(f)=0\). The exactness of the sequence in (3.14) then implies \(f\in M^{!}_{\ell,L^{-}}\), giving the desired result.
As a corollary we obtain
**Corollary 8.8**.: _Let \(m\) be the even rank of the lattice \(L\) satisfying \(m>\max(6,3+r_{0})\) and \(m\equiv 0\bmod 4\). Moreover, let the associated discriminant form \(A\) be anisotropic and \(F:D^{+}\to\mathbb{C}\) be a meromorphic modular form of weight \(r\) and character \(\chi\) (of finite order) for the discriminant kernel \(\Gamma(L)\) whose divisor is a linear combination of Heegner divisors. Then there exists a weakly holomorphic modular form \(f\in M^{!}_{\ell,L^{-}}\) such that \(F\) is up to a constant multiple the Borcherds lift \(\Psi_{L}\) of \(f\)._
Proof.: In view of [Ni], Prop. 1.4.1, we may infer that \(L\) is maximal. Otherwise an overlattice would lead to an isotropic subgroup of \(A\), which is clearly a contradiction considering that \(A\) is anisotropic. Let \(X_{K}=H(\mathbb{Q})\backslash D\times H(\mathbb{A}_{f})/K\) be the GSpin Shimura variety as specified in Remark 8.2. Then we know from this remark that \(X_{K}\cong\Gamma(L)\backslash D^{+}\). This implies that \(H(\mathbb{A}_{f})=H(\mathbb{Q})^{+}K\) because otherwise \(X_{K}\) could not be connected (see the proof of Lemma 5.13 in [Mi]). In this situation we can identify the modular forms in Definition 8.1 with modular forms for the orthogonal group \(\Gamma(L)\). Moreover, we get back the original Kudla-Millson theta lift \(\Lambda(f)(z,1)\) and the original Borcherds lift \(\Psi_{L}(f)(z,1)\). The application of Theorem 8.7 (for \(h=1\)) then concludes the proof.
## 9. Non-existence of reflective automorphic forms
In this section we apply the converse theorem Corollary 8.8 to refine a theorem of Scheithauer in [Sch] on the non-existence of reflective automorphic products. We first briefly recall the necessary material to present our results. We use [Di] and [Sch] as main references and keep the setting and notation of Section 8.
**Definition 9.1**.: We say that a weakly holomorphic modular form \(f\) for \(\rho_{A}\) is _symmetric_ if \(\sigma(f)=f\) for all \(\sigma\in\operatorname{Aut}(A)\). We can associate to any weakly holomorphic modular form \(f\) a symmetric vector valued modular form of the same weight by
\[f^{\operatorname{Sym}}=\frac{1}{|\operatorname{Aut}(A)|}\sum_{\sigma\in \operatorname{Aut}(A)}\sigma(f). \tag{9.1}\]
\(f^{\operatorname{Sym}}\) is called the symmetrization of \(f\).
Based on this definition we explain the concept of a symmetric modular form for some orthogonal group.
**Definition 9.2**.: Let \(\Gamma_{L}\leq\Gamma(L)\) be a subgroup of finite index of the discriminant kernel. A holomorphic modular form \(F\) of weight \(r\in\mathbb{Z}\) and character \(\chi\) for \(\Gamma_{L}\) (as defined e.g. in [Br1], p. 83) is called
1. _symmetric_ if it is the Borcherds lift of a symmetric weakly holomorphic modular form in the sense of Definition 9.1.
2. _reflective_ if all its zeros are located on divisors of the form \(\lambda^{\perp}\) where \(\lambda\) is a root in \(L\).
Reflective modular forms can be obtained by applying the Borcherds lift to a vector valued modular form of a certain shape. To phrase this relation, we introduce some subsets of the discriminant group \(A\): For \(c\in\mathbb{Z}_{>0}\) and \(x\in\mathbb{Q}\) we put
\[A_{c,x}=\left\{\mu\in A\mid\,\text{ord}(\mu)=c\text{ and }Q(\mu)=x+\mathbb{Z} \right\}. \tag{9.2}\]
In terms of this subset we have
**Proposition 9.3** ([Sch], Sect. 9, [Di], Prop. 3.18).: _Assume that \(L\) has square-free level and let \(f\) be a modular form in \(M^{!}_{\ell,A}\) satisfying:_
1. _For_ \(\mu\in A_{l,1/l}\) _the Fourier expansion of_ \(f_{\mu}\) _is given by_ \(f_{\mu}=c(\mu,1/l)+O(1)\) _with_ \(c(\mu,1/l)\in\mathbb{Z}_{>0}\)_._
2. \(f_{\mu}\) _is holomorphic at_ \(\infty\) _for all other_ \(\mu\in A\)_._
_Then the Borcherds lift \(\Psi_{L}(f)\) is a reflective orthogonal modular form._
Dittmann proved the converse of Proposition 9.3 if \(L\) additionally splits a hyperbolic plane \(U\) over \(\mathbb{Z}\).
**Proposition 9.4**.: _Let \(L\) be a lattice of square-free level and suppose that \(L\) splits a hyperbolic plane \(U\). If the Borcherds lift \(\Psi_{L}(f)\) is reflective, then the vector valued modular form \(f\in M^{!}_{\ell,A}\) satisfies:_
1. _For_ \(\mu\in A_{l,1/l}\) _the Fourier expansion of_ \(f_{\mu}\) _is given by_ \(f_{\mu}=c(\mu,1/l)+O(1)\) _with_ \(c(\mu,1/l)\in\mathbb{Z}_{>0}\)_._
2. \(f_{\mu}\) _is holomorphic at_ \(\infty\) _for all other_ \(\mu\in A\)_._
At this point it is worth mentioning that in [Sch] Scheithauer refers to the vector valued modular forms in Prop. 9.3 as _reflective modular forms_. Moreover, he calls a modular form \(F\) for some orthogonal group \(\Gamma_{L}\) a _reflective automorphic product_ if \(F\) is the Borcherds lift of some vector valued modular form as specified in Prop. 9.3. In terms of these definitions an important assertion of [Sch] is given in the following theorem.
**Theorem 9.5**.: _The number of automorphic products of singular weight which are symmetric and reflective on lattices of type \((p,2)\) with \(p>2\), square-free level and \(q\)-ranks at most \(p+1\) is finite._
By means of the converse theorem of this paper, we are able to extend this result to reflective modular forms as given in Definition 9.2 without assuming that the underlying lattice \(L\) splits a sublattice of the form \(U\oplus U(N)\) over \(\mathbb{Z}\). Nevertheless, it is necessary that the conditions of the converse theorem in Corollary 8.8 are satisfied.
We first note that the assumptions on the lattice \(L\) in this paper are sufficient to prove that \(L\) splits a hyperbolic plane \(U\) over \(\mathbb{Z}\). Therefore, we may employ Prop. 9.4 for subsequent arguments.
**Lemma 9.6**.: _Let \(L\) be non-degenerate even lattice of type \((p,2)\) with \(p\geq 3\) whose associated discriminant group \(A=L^{\prime}/L\) is anisotropic. Then \(L\) splits a hyperbolic plane over \(\mathbb{Z}\)._
Proof.: We first note that the level \(N\) of \(L\) is square-free as \(A\) is anisotropic and \(N\) is odd (see e.g. [BEF], Lemma 4.9). In view of the type of \(L\) we clearly have an isotropic vector \(z\in L\), which we can choose to be primitive. It is well known (arguing the same way as in the proof of Lemma 4.6 in [Eb]) that there is a vector \(z^{\prime}\in L^{\prime}\) associated to \(z\) with \((z,z^{\prime})=1\).
Dittmann shows in [Di], Prop. 2.73, that \(z^{\prime}\) can be chosen to be isotropic and to satisfy \(lz^{\prime}\in L\) for some positive integer \(l\) dividing \(N\). In particular, we have a decomposition of the form \(L\cong K\oplus U(l)\), where \(K=L\cap z^{\perp}\cap w^{\perp}\) with \(w=lz^{\prime}\). It is easily seen that \(U(l)^{\prime}/U(l)\cong(\mathbb{Z}/l\mathbb{Z})^{2}\) and that this subgroup of \(A\) contains isotropic elements (cf. [Di], Prop. 1.71). But this is not possible as \(A\) is anisotropic. Hence, \(l\) must be equal to \(1\). In this case we have \(L\cong K\oplus U\), giving the desired assertion.
We are now in a position to state and prove the main result of this section.
**Theorem 9.7**.: _There are only finitely many lattices (up to isomorphism) that satisfy all conditions of Corollary 8.8 and that admit a reflective modular form of singular weight for \(\Gamma(L)\)._
Proof.: Let \(F\) be a reflective modular form of weight \(\frac{p}{2}-1\) and character of finite order with respect to the discriminant kernel \(\Gamma(L)\). By Corollary 8.8, there is a weakly holomorphic modular form \(f\in M^{!}_{\ell,L^{-}}\) such that \(F\) is a constant multiple of \(\Psi_{L}(f)\). As \(F\) has singular weight, the Fourier coefficient \(c(0,0)\) is equal to \(p-2\). By Lemma 9.6, we know that \(L\) splits a hyperbolic plane \(U\). Prop. 9.4 then implies that \(f\) is a reflective vector valued modular form. The symmetrization of \(f\) is symmetric and reflective (see (3.2) in [Di]). It is easily checked that the constant coefficient of \((f^{\text{Sym}})_{0}\) is also equal to \(c(0,0)=p-2\). Finally, by means of (2.2) it is clear that the \(q\)-rank of each \(q\)-component of \(A\) is smaller than \(p+1\). Thus, \(f^{\text{Sym}}\) meets all conditions of Thm. 11.1 in [Sch]. Then arguing the same way as in [Sch], Sect. 11 and 12, yields that the rank and the level of \(L\) are bounded, which gives the claimed assertion.
|
2309.10952 | LMDX: Language Model-based Document Information Extraction and
Localization | Large Language Models (LLM) have revolutionized Natural Language Processing
(NLP), improving state-of-the-art and exhibiting emergent capabilities across
various tasks. However, their application in extracting information from
visually rich documents, which is at the core of many document processing
workflows and involving the extraction of key entities from semi-structured
documents, has not yet been successful. The main obstacles to adopting LLMs for
this task include the absence of layout encoding within LLMs, which is critical
for high quality extraction, and the lack of a grounding mechanism to localize
the predicted entities within the document. In this paper, we introduce
Language Model-based Document Information Extraction and Localization (LMDX), a
methodology to reframe the document information extraction task for a LLM. LMDX
enables extraction of singular, repeated, and hierarchical entities, both with
and without training data, while providing grounding guarantees and localizing
the entities within the document. Finally, we apply LMDX to the PaLM 2-S and
Gemini Pro LLMs and evaluate it on VRDU and CORD benchmarks, setting a new
state-of-the-art and showing how LMDX enables the creation of high quality,
data-efficient parsers. | Vincent Perot, Kai Kang, Florian Luisier, Guolong Su, Xiaoyu Sun, Ramya Sree Boppana, Zilong Wang, Zifeng Wang, Jiaqi Mu, Hao Zhang, Chen-Yu Lee, Nan Hua | 2023-09-19T22:32:56Z | http://arxiv.org/abs/2309.10952v2 | # LMDX: Language Model-based Document Information Extraction And Localization
###### Abstract
Large Language Models (LLM) have revolutionized Natural Language Processing (NLP), improving state-of-the-art on many existing tasks and exhibiting emergent capabilities. However, LLMs have not yet been successfully applied on semi-structured document information extraction, which is at the core of many document processing workflows and consists of extracting key entities from a visually rich document (VRD) given a predefined target schema. The main obstacles to LLM adoption in that task have been the absence of layout encoding within LLMs, critical for a high quality extraction, and the lack of a grounding mechanism ensuring the answer is not hallucinated. In this paper, we introduce _Language Model-based Document Information EXtraction and Localization_ (LMDX), a methodology to adapt arbitrary LLMs for document information extraction. LMDX can do extraction of singular, repeated, and hierarchical entities, both with and without training data, while providing grounding guarantees and localizing the entities within the document. In particular, we apply LMDX to the PaLM 2-S LLM and evaluate it on VRDU and CORD benchmarks, setting a new state-of-the-art and showing how LMDX enables the creation of high quality, data-efficient parsers.
## 1 Introduction
The recent advent of transformers (Vaswani et al., 2017) and self-supervised pretraining procedures has led to significant progress in Visually Rich Document (VRD) Understanding. Within that field, the task of document information extraction (IE), which consists of extracting key entities within a semi-structured document (e.g. invoice, tax form, paystub, receipt, etc) given a predefined schema, has received a lot of attention from industry and academia due to its importance and wide applicability to intelligent document processing workflows. However, document information extraction still remains challenging for current generation systems. In particular, information in semi-structured forms is organized in complex layouts across many possible templates, which requires understanding of the document context, spatial alignment among the different segments of text, and tabular arrangement of structured entities (e.g. line items on an invoice, deduction items on a paystub, etc.). Content on the document can be printed or handwritten, with scanning artefacts like rotation and contrast issues. Moreover, since some business automation workflows require a certain level of accuracy, they are often integrated with human-in-the-loop interactions for auditing and correction of predictions, which requires knowing the precise location of extracted entities to make the task tractable for a human rater. Finally, since a quasi-infinite number of document types exist and organizations have limited annotation resources, most parsers are built with a very small amount of data.
From those complexities emerge the following desiderata of document information extraction systems: they should support high-quality extraction of singular, repeated, and hierarchical entities, while localizing those entities in the document, and doing so with very low or no amount of training data. So far, no publicly disclosed system has been able to address all of those desiderata.
Many current approaches divide the problem into two stages: a text recognition/serialization step, usually achieved by an off-the-shelf Optical Character Recognition (OCR) service, followed by a parsing step, which finds the relevant entity values from the recognized text. Since the text serialization is imperfect, much attention has been given to fusing the text and layout together in the parsing step (Majumder et al., 2020; Garncarek et al., 2021; Hwang et al., 2021; Katti et al., 2018; Denk and Reisswig, 2019). Hong et al. (2021) proposes to encode the relative 2D distances of text blocks in the attention of the transformer, and to learn from unlabeled documents with an area-masking strategy. Lee et al. (2022) proposes encoding the relative token positions with a graph neural network with edges constructed from a beta-skeleton algorithm. It further frames information extraction as a NER sequence tagging task with an IOB scheme (Ramshaw and Marcus, 1995; Palm et al., 2017) which allows them to localize the entities. However, IOB does not support extracting hierarchical entities, and is not robust to text serialization errors, where an entity is broken into disjoint segments.
Since text and layout do not contain all the information in the document (e.g. table boundaries, logos), leveraging the image modality has also been extensively explored (Xu et al., 2021; Lee et al., 2023; Appalaraju et al., 2021, 2023; Zhang et al., 2022). Xu et al. (2020) uses a separate image encoder before adding the output as a feature to the token encodings, while Huang et al. (2022) jointly models the page image patches alongside the tokens, using a word-patch alignment self-supervised pretraining task to learn an alignment between the modalities.
Other approaches treat extraction as a sequence generation problem. Powalski et al. (2021) adds an auto-regressive decoder on top of a text-layout-image encoder, all initialized from T5 (Raffel et al., 2020). Kim et al. (2022) foregoes the text recognition step completely, using a Vision Transformer encoder with an auto-regressive decoder pretrained on a pseudo-OCR task on a large document image corpora, and finetuned on the final extraction parse tree with XML tags for the target extraction schema. While this approach allows to predict hierarchical entities, it does not allow localizing entities in the document.
None of the previously discussed approaches attempt to understand the semantics of the schema and its entity types, and instead opt to encode the schema in the model weights through training, hence requiring training data for unseen schemas and document types. QueryForm (Wang et al., 2023b) utilizes a prompt encoding both the schema and entity types, allowing the model to do zero-shot extraction. Likewise, PPN (Wei et al., 2023) inputs the raw entity types in the encoder itself, and uses a scoring matrix to predict the link classes between document tokens and types, with great few-shot performance. However, both approaches are not able to predict hierarchical entities.
In parallel, Large Language Models (OpenAI, 2023; Google et al., 2023; Hoffmann et al., 2022) have revolutionized Natural Language Processing, showing the capabilities to solve a task with simply an instruction (Wei et al., 2022) or a few examples added to the prompt (Brown et al., 2020). This paradigm opens the possibility of extracting entities with very little to no training data. Wang et al. (2023a) transforms the NER task to a sequence generation task suitable for LLMs by incorporating special tokens in the sequence, marking the entity boundaries, and proposes a self-verification strategy limiting the LLM hallucinations. However, this is applicable to text-only scenarios, with hallucinations still a possibility.
This motivates us to introduce _Language Model-based Document Information EXtraction and Localization_ (LMDX), a methodology for leveraging off-the-shelf LLMs for information extraction on semi-structured documents. Our contributions can be summarized as follows:
* We propose a prompt that enables LLMs to perform the document IE task on leaf and hierarchical entities with precise localization, including without any training data.
* We also propose a layout encoding scheme that communicates spatial information to the LLM without any change to its architecture.
* We introduce a decoding algorithm transforming the LLM responses into extracted entities and their bounding boxes on the document, while discarding all hallucinations.
* We systematically evaluate the data efficiency of LMDX on multiple public benchmarks and establish a new state-of-the-art on those by a wide margin, especially at low-data regimes.
A comparison of LMDX characteristics and other popular document information extraction systems can be found in Table 1.
## 2 Methodology
### Overview
Overall, our pipeline is divided into five stages: OCR, chunking, prompt generation, LLM inference and decoding, detailed in the following sections. An overview with a simple example can be found in Figure 1, with the input and output of each stage showcased.
### Optical Character Recognition
We first use an off-the-shelf OCR service on the document image to obtain word and line segments, along with their corresponding spatial position (bounding box) on the document. An example of output from that stage on a sample document is given in Appendix A.6.
### Chunking
Since a document can be arbitrarily long and LLMs have a limited input token length, the document is divided into document chunks so that each is small enough to be processed by the LLM. To achieve this, we first divide the document into individual pages, then we iteratively remove the last line segments (coming from OCR) until the prompt containing this chunk is below the maximum input token length of the LLM. Lastly, we group those removed lines as a new document page, and repeat the same logic until all chunks are below the input token limit of the LLM. At the end of this stage, we have \(N\) chunks. The decision to first divide the document by page stems from the observation that entities rarely cross page boundaries, so this chunking scheme has minimal impact on the final extraction quality. The algorithm is described in pseudo-code in Appendix A.1.
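As an illustration, here is a minimal Python sketch of this chunking logic; the `count_prompt_tokens` callback (which would build the candidate prompt around a chunk and count its tokens) is a placeholder, not the actual implementation from Appendix A.1:

```python
def chunk_document(pages, max_tokens, count_prompt_tokens):
    """Split a document (list of pages, each a list of OCR line segments) into
    chunks whose prompts fit within the LLM input token limit."""
    chunks = []
    for page in pages:
        remaining = list(page)
        while remaining:
            chunk = list(remaining)
            overflow = []
            # Iteratively drop trailing line segments until the prompt fits.
            while chunk and count_prompt_tokens(chunk) > max_tokens:
                overflow.insert(0, chunk.pop())
            if not chunk:
                # A single segment exceeds the budget; keep it anyway to avoid looping.
                chunk, overflow = overflow[:1], overflow[1:]
            chunks.append(chunk)
            # The removed lines form a new "page" and are chunked next.
            remaining = overflow
    return chunks
```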
### Prompt Generation
The prompt generation stage takes in the \(N\) document chunks and creates an LLM prompt for each of them. As seen in Figure 2, our prompt design contains the document representation, a description
| Document Information Extraction System | Hierarchical entity | Entity localization | Zero-shot support |
| --- | --- | --- | --- |
| FormNet, LayoutLM, DocFormer, Glean, ... | ✗ | ✓ | ✗ |
| QueryForm, PPN | ✗ | ✓ | ✓ |
| Donut | ✓ | ✗ | ✗ |
| **LMDX (Ours)** | ✓ | ✓ | ✓ |

Table 1: Comparison of document information extraction systems.
Figure 1: Overview of the LMDX methodology.
of the task, and the target schema representation containing the entities to extract. XML-like tags are used to define the start and end of each component.
Document Representation. The chunk content is represented in the prompt as the concatenation of all its segment texts, each suffixed with the coordinates of that segment in the following format: \(<segment\ text>\ XX|YY_{segment}\). Coordinate tokens are built by normalizing the segment's X and Y coordinates and quantizing them into \(B\) buckets, assigning the index of that bucket as the token for a coordinate.
This coordinate-as-tokens scheme allows us to communicate the layout modality to the LLM, without any change to its architecture. There are many variations of this scheme: using OCR lines versus OCR words as segments, the granularity of the quantization, and the number of coordinates to use per segment (e.g. \([x_{\text{center}},y_{\text{center}}]\) versus \([x_{\text{min}},y_{\text{min}},x_{\text{max}},y_{\text{max}}]\)). Appendix A.4 shows how those variations affect the prompt token length. In practice, since LLM context length is still limited, we use line-level segments with 2 coordinates and \(B=100\) quantization buckets in all our experiments.
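For concreteness, a small Python sketch of this coordinate-as-tokens encoding with line-level segments, two coordinates and \(B=100\) buckets; the exact token formatting and page normalization used in the real system may differ:

```python
def quantize(value, extent, num_buckets=100):
    """Normalize a coordinate by the page extent and map it to a bucket index."""
    value = min(max(value / extent, 0.0), 1.0)
    return min(int(value * num_buckets), num_buckets - 1)

def encode_segment(text, x_min, y_min, x_max, y_max, page_w, page_h, num_buckets=100):
    """Represent one OCR line as '<segment text> XX|YY' using its center point."""
    x = quantize((x_min + x_max) / 2, page_w, num_buckets)
    y = quantize((y_min + y_max) / 2, page_h, num_buckets)
    return f"{text} {x:02d}|{y:02d}"

# e.g. encode_segment("Invoice date: 01/01/2023", 85, 190, 340, 205, 612, 792)
# -> "Invoice date: 01/01/2023 34|24"   (bucket indices, not pixels)
```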
Task Description. The task description is simply a short explanation of the task to accomplish. In our experiments, we hard-code it to the following: _From the document, extract the text values and tags of the following entities:_.
Schema Representation. The schema is represented as a structured JSON object, where the keys are the entity types to be extracted, and the values correspond to their occurrence (single or multiple) and sub-entities (for hierarchical entities). For instance, _{"foo": "", "bar": [{"baz": []}]}_ means that the LLM should extract only a single entity of type _foo_ and multiple hierarchical entities of type _bar_, each of which can hold multiple entities of type _baz_.
After this step, we have \(N\) prompts, one for each document chunk. A full example of a prompt on a document can be found in Appendix A.6.
### Completion Targets
In this section, we describe the expected LLM completion format, which can be observed in Figure 1. Like the schema, the completion is a JSON structured object with the keys being the entity types, and values being the extracted information from the document chunk. JSON was chosen as a format for the completion and schema since it supports hierarchical objects, is very token-efficient, and is usually present in LLM training data mixtures. Note that the keys in the completion have the same ordering, occurrence and class (hierarchical or leaf) as the entity types in the schema. The values of leaf entities must follow a specific format:
\[<\text{text on segment}_{1}>\ XX|YY_{segment_{1}}\backslash n<\text{text on segment}_{2}>\ XX|YY_{segment_{2}}\backslash n\ \ldots\]
An entity can span multiple (potentially disjoint) OCR segments (lines or words). For each segment of the entity, the value contains the entity text on that segment, along with the coordinate tokens of that segment, which act as a _segment identifier_, allowing us to localize the entities and ground the model prediction (e.g. making sure the extracted value is not a hallucination), as will be detailed in Section 2.7.
Figure 2: Structure of the LLM prompts.
Missing entity types are completed by the model with \(null\) for singular types, and \([]\) for repeated types. Samples of completions can be found in Appendix A.6.
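For illustration, a hypothetical schema and a corresponding well-formed completion in this format, written as Python literals; the entity names, texts and coordinate tokens below are invented, not taken from any benchmark:

```python
# Hypothetical schema: one singular 'invoice_id' entity and repeated
# hierarchical 'line_item' entities, each with two leaf sub-entities.
schema = {"invoice_id": "", "line_item": [{"description": "", "amount": ""}]}

# A corresponding completion: every leaf value is "<text on segment> XX|YY";
# a missing singular entity would be null (None), a missing repeated one [].
completion = {
    "invoice_id": "INV-0042 71|08",
    "line_item": [
        {"description": "Radio spot 30s 12|54", "amount": "$300.00 88|54"},
    ],
}
```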
### LLM Inference
In this stage of the pipeline, we run inference on the LLM with the \(N\) prompts. For each prompt, we sample \(K\) completions from the LLM (for a total of \(NK\) completions for the entire document) using TopK sampling. This randomness in the sampling allows error correction (e.g. if a response is not valid JSON, has a hallucinated segment coordinate identifier, etc.) and increases the extraction quality, as will be shown in further sections. Note that we still want the inference to be fully deterministic so that LMDX's extractions are the same across two identical documents. To do so, we rely on pseudo-random sampling using a fixed seed.
### Decoding
In this stage, we parse the raw LLM completions into structured entities and their locations.
Conversion to structured entities. We begin by parsing each model completion as a JSON object. Completions that fail to parse are discarded. For each key-value pair in the JSON object, we interpret the key as the entity type and parse the value to get the entity text and bounding box (as detailed in the next paragraph). Predicted entity types that are not in the schema are discarded. If the model unexpectedly predicts multiple values for single-occurrence entity types, we use the most frequent value as the final predicted value. Hierarchical JSON objects are recursively parsed as hierarchical entities in a similar manner. This algorithm is described in pseudo-code in Appendix A.3.
Entity Value Parsing. We expect the JSON value to include both text extractions and segment identifiers for each predicted entity, as described in Section 2.5. We first parse the value into its \((segment\ text,\ segment\ identifier)\) pairs. For each pair, we look up the corresponding segment in the original document using the segment identifier and verify that the extracted text is exactly included on that segment. Finally, once we have the entity location on all its segments, we get the entity bounding box by computing the smallest bounding box encompassing all the OCR words included in the entity. Entity values with any segments that fail to ground (invalid entity value format, non-existent segment identifier, or non-matching segment text) in the original document are discarded. The entity value parsing algorithm is described in pseudo-code in Appendix A.2.
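A simplified Python sketch of this parsing and grounding step; here `segments` is assumed to map a coordinate-token identifier such as `"34|24"` to its OCR line (a dict with a `"text"` field), and the final bounding-box union over the grounded words is left to the caller:

```python
import re

VALUE_LINE = re.compile(r"^(?P<text>.*\S)\s+(?P<seg_id>\d{1,2}\|\d{1,2})$")

def ground_entity_value(value, segments):
    """Parse '<text> XX|YY\n...' and verify each piece against the source OCR.
    Returns the list of (text, segment) pairs, or None if any piece fails."""
    grounded = []
    for line in value.split("\n"):
        match = VALUE_LINE.match(line.strip())
        if match is None:
            return None                      # malformed entity value format
        seg = segments.get(match["seg_id"])
        if seg is None:
            return None                      # non-existent (hallucinated) segment id
        if match["text"] not in seg["text"]:
            return None                      # extracted text not on that segment
        grounded.append((match["text"], seg))
    return grounded                          # caller unions the word bounding boxes
```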
Prediction Merging. We first merge the predicted entities for the same document chunk from the \(K\) LLM completions through majority voting (Wang et al., 2022). For each entity type, we gather the predicted entities, including empty predictions, across the \(K\) completions. The most common prediction(s) are selected as the predicted value for that entity type. We then merge the predictions among the \(N\) document chunks by concatenating them to obtain the document level predictions.
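A minimal Python sketch of this majority vote for a single leaf entity type over the \(K\) completions of one chunk, assuming predicted values have already been decoded into hashable tuples (a missing prediction counts as an explicit vote for "empty"):

```python
from collections import Counter

def merge_leaf_predictions(completions, entity_type):
    """Majority-vote the value of one leaf entity type across K decoded completions."""
    votes = Counter()
    for completion in completions:
        prediction = completion.get(entity_type)   # None, a tuple, or a tuple of tuples
        votes[prediction] += 1
    winner, _ = votes.most_common(1)[0]
    return winner
```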
Prediction Merging for hierarchical entities. For hierarchical entities, we use the entire predicted tree value from a single LLM completion, as this method best preserves the parent-child relationship predicted by the model. For each top-level hierarchical entity type, we perform majority voting on all affiliated leaf, intermediate and top-level entity types among \(K\) completions as if they are flattened. We then tally the votes with equal weight to determine which completion to use for the prediction, and select the most common one for that hierarchical entity.
## 3 Evaluation
We evaluate the methodology explained in section 2 on public benchmarks using the PaLM 2-S LLM, which we call LMDX\({}_{\text{PaLM 2-S}}\). Note that we use the small version of this LLM due to limited accelerator resources, but larger versions could be used, likely leading to higher extraction quality.
Our training process is composed of two phases as shown in Figure 3. In the first phase we finetune PaLM 2-S on a data mixture containing a variety of (_document, schema, extraction)_ tuples. In particular, this data mixture contains the _Payment_ dataset (Majumder et al., 2020), along with a diverse set of publicly available PDF form templates obtained from government websites that we filled with
synthetic data using an internal tool, and annotated for schema and entities to extract. The goal of this phase is to train the model to interpret the semantics of the entity types and extraction hierarchy specified in the schema, and find them within the document, along with learning the extraction syntax. Hence, the variety of schemas and documents in this phase is of utmost importance.
During the second phase, starting from the base entity extractor checkpoint from the previous phase, we finetune the LLM to specialize it for high quality extraction on the target benchmark. At this stage, only the target benchmark data is included in the training mixture. Note that, for zero-shot experiments, this second phase is skipped. Furthermore, no document or schema contained in the base extraction training phase overlaps with the documents and schemas used in the specialization training phase. For all training phases, we follow the input and target syntax described in Sections 2.4 and 2.5.
### Parameters
For training, we finetune PaLM 2-S using a batch size of 8, a dropout probability of 0.1 and a learning rate of \(10^{-6}\) with a standard cross-entropy loss. Once training is done, we select the checkpoint with the lowest loss on the dev set, and report performance on the test set. For LLM inference, we use a temperature of 0.5 and a Top\({}_{\text{K}}\) of 40, sampling 16 responses for each chunk processed by the LLM, as described in Section 2.6. Finally, for both training and inference, we use an input token length of 6144 and an output token length of 2048. We use line-level segments and only two coordinates [x\({}_{\text{center}}\), y\({}_{\text{center}}\)] with 100 quantization buckets to save on the number of input and output tokens consumed by the coordinate-as-tokens scheme.
### Datasets
Visually Rich Document Understanding (VRDU). Wang et al. (2023c) introduces a public benchmark for entity extraction from visually-rich documents that includes two datasets: Registration Form, containing 6 semantically rich entity types, and Ad-buy Form, containing 14 entity types with one hierarchical _line_item_ entity. For each dataset, VRDU proposes samples of 10, 50, 100 and 200 train documents to evaluate the data efficiency of models. It also offers different tasks to evaluate the generalization powers of extraction systems: Single Template Learning (STL) where train/test share the same single template, Mixed Template Learning (MTL) where train/test contain overlapping sets of templates, and Unseen Template Learning (UTL) where train/test contain disjoint sets of templates. For our experiments, we finetune LMDX\({}_{\text{PaLM\,2-S}}\) for 4000 steps on each dataset, training data size, and task setup independently and report Micro-F1 through the provided evaluation tool. We then compare LMDX\({}_{\text{PaLM\,2-S}}\) to the published state-of-the-art baselines.
Consolidated Receipt Dataset (CORD). Park et al. (2019) introduces a benchmark of Indonesian receipts from shops and restaurants, with a target schema of 30 fine-grained entities, grouped into _menu_, _total_ and _subtotal_ hierarchical entities. CORD does not provide a standard evaluation toolkit, so we adopt the normalized Tree Edit Distance accuracy metric (Zhang & Shasha, 1989), previously introduced by Kim et al. (2022) on that benchmark, since it is agnostic to the output scheme used and considers the hierarchical entities as part of the metric. For our experiments, we use the official 800 train / 100 dev / 100 test split, but also sample the first \(D=10/50/100/200\) documents from the train split to assess the data efficiency of LMDX on this benchmark. For each data setup, we finetune LMDX for 12000 steps.
Figure 3: LMDX training phases.
For comparison, we also train and evaluate the state-of-the-art baselines \(\mathrm{LayoutLMv3_{LARGE}}\) and Donut. Those baselines are detailed in Appendix A.7.
### Results
Results for VRDU are presented in Table 2. For all data regimes and tasks, LMDX\({}_{\text{PaLM 2-S}}\) sets a new state-of-the-art by a wide margin. In particular, we find that LMDX\({}_{\text{PaLM 2-S}}\) can extract decently with no training data: at zero-shot, it exhibits extraction quality similar to baselines trained on 10-100 documents (for instance 39.74% Micro-F1 on Ad-Buy Form Mixed Template vs 40.68% for FormNet at 50 train documents, or 73.81% Micro-F1 on Registration Single Template vs 74.22% for FormNet at 10 train documents). LMDX\({}_{\text{PaLM 2-S}}\) is also much more data efficient than the baselines: it is within 5.06% Micro-F1 of its peak performance at 10 training documents for Registration Form Mixed Template (87.72% vs 92.78% Micro-F1), while LayoutLMv2, the strongest baseline, is within 19.75% of its peak performance (69.44% vs 89.19% Micro-F1). Finally, we notice that LMDX\({}_{\text{PaLM 2-S}}\) generalizes much better to unseen templates than baselines: on Registration Form, LMDX\({}_{\text{PaLM 2-S}}\) drops less than 5% Micro-F1 on Unseen Template compared to Single Template across all data regimes, while baselines like LayoutLMv2 see a drop between 19.38% and 27.32%.
On CORD, with results in Table 3, we observe similar trends, highlighting the generality of the results. At 10 documents, LMDX\({}_{\text{PaLM 2-S}}\) is within 4.03% of its peak performance attained at 800 documents, versus 22.34% for the strongest baseline \(\mathrm{LayoutLMv3_{LARGE}}\), showcasing the data efficiency of the LMDX methodology.
Performance on Hierarchical Entities. As seen on the Ad-Buy Form tasks, LMDX\({}_{\text{PaLM 2-S}}\) is capable of grouping line items much better than the baselines (which use heuristics) for all data regimes. In particular, LMDX\({}_{\text{PaLM 2-S}}\) at zero-shot has line-item grouping performance similar to the best baseline at 200 train documents (21.21% versus 25.46% F1 respectively). With all the training data, LMDX\({}_{\text{PaLM 2-S}}\) scores a 72.09% F1 on line_item, an absolute improvement of 46.63% over the best baseline LayoutLMv2.
| \(\lvert\mathcal{D}\rvert\) | Model | Registration Single Micro-F1 | Registration Mixed Micro-F1 | Registration Unseen Micro-F1 | Ad-buy Mixed Micro-F1 | Ad-buy Mixed Line Item F1 | Ad-buy Unseen Micro-F1 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | **LMDX\({}_{\text{PaLM 2-S}}\)** | **73.81** | **71.65** | **74.94** | **39.74** | **21.21** | **39.33** |
| 10 | FormNet | 74.22 | 63.61 | 50.53 | 20.47 | 5.72 | 20.28 |
| 10 | LayoutLM | 65.91 | 36.41 | 25.54 | 20.20 | 6.95 | 19.92 |
| 10 | LayoutLMv2 | 80.05 | 69.44 | 54.21 | 25.36 | 9.96 | 25.17 |
| 10 | LayoutLMv3 | 72.51 | 60.72 | 21.17 | 10.16 | 5.92 | 10.01 |
| 10 | **LMDX\({}_{\text{PaLM 2-S}}\)** | **90.88** | **87.72** | **86.87** | **54.35** | **39.35** | **54.82** |
| 50 | FormNet | 89.38 | 85.38 | 68.29 | 40.68 | 19.06 | 39.52 |
| 50 | LayoutLM | 86.21 | 80.15 | 55.86 | 39.76 | 19.50 | 38.42 |
| 50 | LayoutLMv2 | 88.68 | 84.13 | 61.36 | 42.23 | 20.98 | 41.59 |
| 50 | LayoutLMv3 | 87.24 | 81.36 | 47.85 | 39.49 | 19.53 | 38.43 |
| 50 | **LMDX\({}_{\text{PaLM 2-S}}\)** | **93.06** | **91.42** | **88.43** | **75.08** | **65.42** | **75.70** |
| 100 | FormNet | 90.91 | 88.13 | 72.58 | 40.38 | 18.80 | 39.88 |
| 100 | LayoutLM | 88.70 | 86.02 | 63.68 | 42.38 | 21.26 | 41.46 |
| 100 | LayoutLMv2 | 90.45 | 88.36 | 65.96 | 44.97 | 23.52 | 44.35 |
| 100 | LayoutLMv3 | 89.23 | 87.32 | 57.69 | 42.63 | 22.08 | 41.54 |
| 100 | **LMDX\({}_{\text{PaLM 2-S}}\)** | **93.97** | **92.41** | **89.70** | **78.05** | **69.77** | **75.99** |
| 200 | FormNet | 92.12 | 90.51 | 77.29 | 43.23 | 21.86 | 42.87 |
| 200 | LayoutLM | 90.47 | 87.94 | 70.47 | 44.66 | 23.90 | 44.18 |
| 200 | LayoutLMv2 | 91.41 | 89.19 | 72.03 | 46.54 | 25.46 | 46.31 |
| 200 | LayoutLMv3 | 90.89 | 89.77 | 62.58 | 45.16 | 24.51 | 44.43 |
| 200 | **LMDX\({}_{\text{PaLM 2-S}}\)** | **93.97** | **92.78** | **90.22** | **79.82** | **72.09** | **78.42** |

Table 2: Results of LMDX\({}_{\text{PaLM 2-S}}\) on the different tasks and train data size setups \(|\mathcal{D}|\) of VRDU, with best performing model results in bold. Unlike the baselines, LMDX can do zero-shot extraction.
### Ablations
In this section, we ablate different facets of the LMDX methodology to highlight their relative importance. The results can be found in Table 4 and are discussed below. For all ablations, we evaluate on the VRDU Ad-Buy Form Mixed Template task, only changing the ablated facet.
Effects of Base Entity Extraction Training. In this ablation, we remove the first stage training on the varied data mixture from Figure 3 and directly finetune on the VRDU target task. As seen in columns 3-4 of Table 4, ablating that training stage leads to a significant drop in extraction quality in few-shot scenarios and the complete loss of zero-shot extraction ability, due to the model not respecting the extraction format and hence failing decoding. As the train set size increases, the performance degradation lessens, from -11.44% to -7.57%, as the model learns the extraction task and the desired completion format.
Effects of Coordinate Tokens. In this ablation, we replace the coordinate tokens, which communicate the position of each line within the document, with the index of that line. This index still acts as a unique identifier for the line segment (required for entity localization and grounding) but does not communicate any position information. An example of a prompt with line indices can be found in Appendix A.6. As can be seen in columns 5-6 of Table 4, the coordinate tokens are substantially important to the extraction quality, accounting for a 12.15% to 14.98% absolute Micro-F1 improvement across the data regimes.
Effects of Sampling Strategy. In this ablation, we discard our sampling strategy and instead sample a single response from the model. As seen in columns 7-8 of Table 4, this leads to a 0.21% to 1.5% drop in Micro-F1. While the effect on quality is overall minor, the sampling strategy also allows correcting extraction format mistakes.
| \(\lvert\mathcal{D}\rvert\) | LMDX\({}_{\text{PaLM 2-S}}\) Micro-F1 | Without Base EE Training Micro-F1 | \(\Delta\) (%) | Without Coordinate Tokens Micro-F1 | \(\Delta\) (%) | Without Sampling Strategy Micro-F1 | \(\Delta\) (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 39.74 | 0.00 | -39.74 | 27.59 | -12.15 | 39.53 | -0.21 |
| 10 | 54.35 | 42.91 | -11.44 | 39.37 | -14.98 | 52.85 | -1.50 |
| 50 | 75.08 | 66.51 | -8.57 | 62.35 | -12.73 | 73.88 | -1.20 |
| 100 | 78.05 | 68.87 | -9.18 | 65.14 | -12.91 | 77.30 | -0.75 |
| 200 | 79.82 | 72.25 | -7.57 | 65.70 | -14.12 | 78.43 | -1.39 |

Table 4: Ablations of Base Entity Extraction Training, Coordinate Tokens, and Sampling Strategy and their relative effects on extraction quality. All ablations are done on VRDU Ad-Buy Mixed Template.
| \(\lvert\mathcal{D}\rvert\) | Model | n-TED Accuracy |
| --- | --- | --- |
| 0 | **LMDX\({}_{\text{PaLM 2-S}}\)** | **67.47** |
| 10 | Donut | 33.01 |
| 10 | LayoutLMv3\({}_{\text{LARGE}}\) | 73.87 |
| 10 | **LMDX\({}_{\text{PaLM 2-S}}\)** | **92.27** |
| 50 | Donut | 75.44 |
| 50 | LayoutLMv3\({}_{\text{LARGE}}\) | 87.29 |
| 50 | **LMDX\({}_{\text{PaLM 2-S}}\)** | **93.80** |
| 100 | Donut | 82.17 |
| 100 | LayoutLMv3\({}_{\text{LARGE}}\) | 91.83 |
| 100 | **LMDX\({}_{\text{PaLM 2-S}}\)** | **93.64** |
| 200 | Donut | 84.49 |
| 200 | LayoutLMv3\({}_{\text{LARGE}}\) | 94.44 |
| 200 | **LMDX\({}_{\text{PaLM 2-S}}\)** | **94.73** |
| 800 | Donut | 90.23 |
| 800 | LayoutLMv3\({}_{\text{LARGE}}\) | 96.21 |
| 800 | **LMDX\({}_{\text{PaLM 2-S}}\)** | **96.30** |

Table 3: LMDX\({}_{\text{PaLM 2-S}}\) results on CORD. Normalized Tree Edit Distance Accuracy is reported.
### Error Analysis and Limitations
In this section, we perform an error analysis on the test set to identify common error patterns of LMDX. A very common error type we observe is caused by OCR lines grouping multiple semantically different segments. We show two instances of those cases observed with LMDX\({}_{\text{PaLM 2-S}}\) on the VRDU Ad-Buy Form in Figure 4. In the first example, the prediction for the entity _line_item/program_desc_ includes text from the previous column "Channel" along with the value in the column "Description". From the OCR line bounding boxes, we can see that these two columns are grouped into the same OCR line. In the second example, the model confuses the adjacent keys "Invoice Period" and "Flight Dates" and extracts invoice dates as flight dates. Similar to the first example, the OCR line bounding boxes show that the invoice dates and the key "Flight Dates" are grouped together in the same line although they are semantically different. As LMDX\({}_{\text{PaLM 2-S}}\) uses only coarse line layout information ([x\({}_{\text{center}}\), y\({}_{\text{center}}\)] with 100 quantization buckets), the model fails in these cases. This is a current limitation of the LMDX system. We believe that incorporating the image modality will make LMDX more performant and robust to such OCR errors.
## 4 Conclusion
In this paper, we have introduced LMDX, a methodology that enables using LLMs for information extraction on visually rich documents, setting a new state-of-the-art on the public benchmarks VRDU and CORD. LMDX is the first methodology to allow the extraction of singular, repeated and hierarchical entities, while grounding its predictions and localizing the entities in the document. LMDX is extremely data efficient, and even allows high quality extraction at zero-shot on entirely new document types and schemas. Nonetheless, since it relies on an LLM, LMDX is more resource-intensive than previous approaches, and its coordinate-as-tokens scheme requires long inputs and outputs. As future work, we will explore applying the methodology to open-source LLMs and adding the image modality to the system using Large Vision-Language Models.
## 5 Reproducibility Statement
In order to increase reproducibility, we've provided all details of the LMDX methodology. We've included our LLM prompts and completions in Appendix A.6, along with all our algorithms for chunking and decoding in Appendices A.1, A.2 and A.3. Furthermore, we've provided the exact target schemas used in our experiments in Appendix A.5. For CORD specifically, we've used a metric with a public implementation ([https://github.com/clovaai/donut/blob/master/donut/util.py](https://github.com/clovaai/donut/blob/master/donut/util.py)) and an easy-to-reproduce sampling strategy for the data-efficiency splits (first \(D\) train documents). Finally, our baselines are publicly available ([https://github.com/microsoft/unilm/tree/master/layoutlmv3](https://github.com/microsoft/unilm/tree/master/layoutlmv3), [https://github.com/clovaai/donut](https://github.com/clovaai/donut)) and thoroughly detailed in Appendix A.7.
Figure 4: Typical error pattern of LMDX\({}_{\text{PaLM\,2\,S}}\). In both examples, the detected OCR lines are shown in red, the model predicted entities are shown in blue, and the groundtruth entities are shown in green. In both cases, the detected OCR lines merge two semantically distinct segments, causing the model to wrongly associate them in its predictions. |
2307.16657 | Cell decomposition and dual boundary complexes of character varieties | The weak geometric P=W conjecture of L. Katzarkov, A. Noll, P. Pandit, and C.
Simpson asserts that for any smooth Betti moduli space $\mathcal{M}_B$ of
complex dimension $d$ over a punctured Riemann surface, the dual boundary
complex $\mathbb{D}\partial\mathcal{M}_B$ is homotopy equivalent to a
$(d-1)$-dimensional sphere. Here, we consider $\mathcal{M}_B$ as a generic
$GL_n(\mathbb{C})$-character variety defined on a Riemann surface of genus $g$,
with local monodromies specified by generic semisimple conjugacy classes at $k$
punctures.
In this article, we establish the weak geometric P=W conjecture for all
\emph{very generic} $\mathcal{M}_B$ in the sense that at least one conjugacy
class is regular semisimple. A crucial step is to establish a stronger form of
A. Mellit's cell decomposition theorem, i.e. we decompose $\mathcal{M}_B$
(without passing to a vector bundle) into locally closed subvarieties of the
form $(\mathbb{C}^{\times})^{d-2b}\times\mathcal{A}$, where $\mathcal{A}$ is
stably isomorphic to $\mathbb{C}^b$. A second ingredient involves a motivic
characterization of the integral cohomology of dual boundary complexes
developed in a subsequent article [Su24]. Following C. Simpson's strategy, the
proof is now an inductive computation of the dual boundary complexes from such
a cell decomposition. | Tao Su | 2023-07-31T13:39:03Z | http://arxiv.org/abs/2307.16657v4 | # Cell decomposition, dual boundary complexes and motives of character varieties
###### Abstract.
The homotopy type conjecture (weak form of the geometric P=W conjecture) states: for any (smooth) Betti moduli space \(\mathcal{M}_{B}\) of complex dimension \(d\) over a (punctured) Riemann surface, the dual boundary complex \(\mathbb{D}\partial\mathcal{M}_{B}\) is homotopy equivalent to a sphere of dimension \(d-1\). Say, \(\mathcal{M}_{B}\) is a generic \(GL_{n}(\mathbb{C})\)-character variety on a genus \(g\) Riemann surface with local monodromies at \(k\) punctures in prescribed generic semisimple conjugacy classes. First, we prove the homotopy type conjecture for \(\mathcal{M}_{B}\) if it's very generic: at least one conjugacy class is in addition regular semisimple.
Second, the main result is obtained by proving a strong form of A. Mellit's cell decomposition: \(\mathcal{M}_{B}\) itself is decomposed into locally closed subvarieties of the form \((\mathbb{C}^{\times})^{d-2b}\times\mathcal{A}\), where \(\mathcal{A}\) is stably isomorphic to \(\mathbb{C}^{b}\). We expect that \(\mathcal{A}\) is in general a counterexample to the Zariski cancellation problem for dimension \(b\geq 3\) in characteristic zero.
Third, we propose a conjectural formula \(A\) for Voevodsky's motive with compact support of any generic \(\mathcal{M}_{B}\). This directly generalizes the HLRV conjecture. It also suggests an integral curious Poincaré duality conjecture \(B\) for the singular weight cohomology with compact support of the same variety. Some partial results are: 1. we prove a weak form of \(A\) for very generic \(\mathcal{M}_{B}\), i.e. the formula for its class in the Grothendieck ring of effective pure Chow motives; 2. for generic \(\mathcal{M}_{B}\), \(B\) implies that \(\mathbb{D}\partial\mathcal{M}_{B}\) is an integral homology sphere of the right dimension. Finally, we verify all the conjectures in simple examples.
###### Contents
* 1 Setup
* 1.1 Generic character varieties
* 1.2 Very generic character varieties
* 1.3 Diagram calculus of matrices
* 1.4 Braid varieties
* 2 Cell decomposition of character varieties
* 2.1 Diagram calculus for punctures
* 2.2 Diagram calculus for genera
* 2.3 Connection to braid varieties
* 2.4 The cell decomposition
* 2.5 Examples
* 3 Dual boundary complexes of character varieties
* 3.1 Dual boundary complexes
* 3.2 Fundamental group of the dual boundary complex
* 3.3 Dual boundary complexes of very generic character varieties
* 3.4 Dual boundary complexes of generic character varieties
* 4 Motives of character varieties
* 4.1 Chow and geometric motives
* 4.2 Motivic weight complexes
* 4.3 Motives with compact support of generic character varieties
* 4.4 Examples
* A Cell decomposition of braid varieties
* B Modified Macdonald symmetric functions
* C Quotients of varieties
* C.1 Various quotients for algebraic group actions
* C.2 Principal bundles
* C.3 Associated fiber bundles
* C.4 Functorial properties and applications
## Introduction
Let \(\Sigma\) be a genus \(g\) closed Riemann surface with \(k\) punctures \(\sigma=\{q_{1},\cdots,q_{k}\}\), \(k\geq 1,2g+k\geq 3\), and \(G=GL_{n}(\mathbb{C})\). Modulo extra input, the tame nonabelian Hodge correspondence over noncompact curves [71, 45] induces a diffeomorphism
\[\operatorname{NAH}:\mathcal{M}_{\operatorname{DoI}}\simeq\mathcal{M}_{B}\]
between two moduli spaces: the Dolbeault moduli space \(\mathcal{M}_{\operatorname{DoI}}\) of stable filtered regular (parabolic) \(G\)-Higgs bundles on \((\Sigma,\sigma)\) with parabolic degree 0; and the Betti moduli space \(\mathcal{M}_{B}\) of stable filtered \(G\)-local systems on \(\Sigma\setminus\sigma\) with parabolic degree 0. For more on nonabelian Hodge theory, see [11, 72, 73, 74, 3, 59, 60, 35, 41].
The geometric P=W conjecture of L. Katzarkov, A. Noll, P. Pandit and C. Simpson [43, 75] predicts that, under NAH, the "Hitchin fibration at infinity" of \(\mathcal{M}_{\operatorname{DoI}}\) matches, up to homotopy, with the "fibration at infinity" of \(\mathcal{M}_{B}\) over the dual boundary complex \(\mathbb{D}\partial\mathcal{M}_{B}\) (Definition 3.2).
More concretely, on the Dolbeault side, the Hitchin fibration \(h:\mathcal{M}_{\mathrm{Dol}}\to\mathbb{A}\) induces:
\[\overline{h}:N^{*}_{\mathrm{Dol}}=\mathcal{M}_{\mathrm{Dol}}\setminus h^{-1}(B_{R} (0))\xrightarrow{h}\mathbb{A}\setminus B_{R}(0)\to(\mathbb{A}\setminus B_{R}(0))/ \mathrm{scaling}\ =S^{d-1},R\gg 0,d=\dim\mathcal{M}_{\mathrm{Dol}};\]
On the Betti side, there is a fibration (up to homotopy) via the dual boundary complex
\[\alpha:N^{*}_{B}\to\mathbb{D}\partial\mathcal{M}_{B},\]
where \(N^{*}_{B}\) is the punctured tubular neighborhood of the simple normal crossing boundary divisor \(\partial\mathcal{M}_{B}\) in a log compactification \(\overline{\mathcal{M}}_{B}\) of \(\mathcal{M}_{B}\). See Remark 3.8 for the details. Then
**Conjecture 0.1** ([43, 75], Geometric P=W).: _There is a homotopy commutative square_
\[\begin{CD}N^{*}_{\mathrm{Dol}}@>{\simeq}>{\mathrm{NAH}}>N^{*}_{B}\\ @V{\overline{h}}VV@V{}V{\alpha}V\\ S^{d-1}@>{\simeq}>{}>\mathbb{D}\partial\mathcal{M}_{B}\end{CD}\]
As a weak form of the geometric P=W conjecture, we in particular have
**Conjecture 0.2** ([43, 75], the homotopy type conjecture).: _The dual boundary complex \(\mathbb{D}\partial\mathcal{M}_{B}\) is homotopy equivalent to the sphere \(S^{d-1}\), where \(d=\dim_{\mathbb{C}}\mathcal{M}_{B}\)._
By a **folklore conjecture**, all (smooth) Betti moduli spaces (i.e. character varieties) \(\mathcal{M}_{B}\) are expected to be log Calabi-Yau (**CY**). This has been verified for the case \(G=SL_{2}(\mathbb{C})\)[83, 21]. Then, the homotopy type conjecture is closely related to the following conjecture:
**Conjecture 0.3** (M. Kontsevich).: _The dual boundary complex of a log **CY** variety is a sphere._
The geometric P=W conjecture was originally inspired by and aimed at a geometric interpretation of the (cohomological) P=W conjecture of M. de Cataldo, T. Hausel and L. Migliorini [14]. The latter states that, NAH exchanges the weight filtration (algebraic geometry) on \(H^{*}(\mathcal{M}_{B},\mathbb{Q})\) with the Perverse-Leray filtration (topology) on \(H^{*}(\mathcal{M}_{\mathrm{Dol}},\mathbb{Q})\):
\[\mathrm{NAH}^{*}(W_{2k}H^{*}(\mathcal{M}_{B},\mathbb{Q})=W_{2k+1}H^{*}( \mathcal{M}_{B},\mathbb{Q}))=P_{k}H^{*}(\mathcal{M}_{\mathrm{Dol}},\mathbb{Q}).\]
After the results for rank 2 [14] and respectively for genus 2 [10], the cohomological P=W conjecture has recently been resolved independently by two groups [63, 38] for the major case of twisted \(GL_{n}(\mathbb{C})\)-character varieties (in particular, \(k=1\)). See also [15, Question 4.1.7], [13, Conj.B], [52, 24], [58, Conj.4.2.7], for the extensions of the P=W conjectures to the singular or stacky character varieties.
Under an assumption, the geometric P=W conjecture recovers the cohomoloical P=W conjecture for the weight in top degree [58, Thm.6.2.6]. In general, however, the full geometric P=W conjecture stated here is not sufficient to imply the latter [58, Rmk.6.2.11]. Conversely, for
example, the latter concerns only the cohomology with rational coefficients, hence captures only \(H^{*}(\mathbb{D}\partial\mathcal{M}_{B},\mathbb{Q})\) via the identification (see e.g. [66])
\[\widetilde{H}_{i-1}(\mathbb{D}\partial\mathcal{M}_{B},\mathbb{Q})\cong\operatorname {Gr}_{2d}^{W}H^{2d-i}(\mathcal{M}_{B},\mathbb{Q}),\quad d=\dim_{\mathbb{C}} \mathcal{M}_{B}.\]
On the other hand, the former does capture \(\widetilde{H}^{*}(\mathbb{D}\partial\mathcal{M}_{B},\mathbb{Z})\). Altogether, to interpret the cohomological P=W conjecture in all weights, a refinement of the geometric P=W conjecture is required. Let's make a complementary remark. Assuming the folklore conjecture, we may consider only (dlt) log **CY** compactifications. Then we obtain a _refined dual boundary complex_\(\operatorname{DMR}(\overline{\mathcal{M}}_{B},\partial\mathcal{M}_{B})\), well-defined up to PL-homeomorphism [18, Prop.11]. By Conjecture 0.2, \(\operatorname{DMR}(\overline{\mathcal{M}}_{B},\partial\mathcal{M}_{B})\) is a PL-sphere of dimension \(d-1\). It's expected that [75, SS1.2] this PL-sphere is closed related to the Kontsevich-Soibelman picture [46]: the Kontsevich-Soibelman chambers in the Hitchin base \(\mathbb{A}\) of \(\mathcal{M}_{\operatorname{Dol}}\) should correspond to the cells in \(\operatorname{DMR}(\overline{\mathcal{M}}_{B},\partial\mathcal{M}_{B})\).
Currently, the full geometric P=W conjecture is known for: the Painleve cases [65, 79, 80]; the case \((g,k)=(1,0)\) or \((k,n)=(0,1)\)[58, Thm.B]. Our major interest in this paper is its weak form, the homotopy type conjecture 0.2. Previously, this is only known in a few cases: \(G=SL_{2}(\mathbb{C})\)[44, 75, 21]; the Painleve cases as above; singular character variety of any rank with \(g=1\) and \(k=0\)[58]; smooth wild character variety of any rank with \(g=0\) and \(k=1\)[78]. Concerning Kontsevich's conjecture 0.3, the only known general result is due to J. Kollar and C. Xu [48]: If \(X\) is log Calabi-Yau of dimension \(\leq 5\), then \(\mathbb{D}\partial X\) is a finite quotient of a sphere.
### Results
As a complement to the discussion above, our main result is the following:
**Theorem 0.4** (Theorem 3.15).: _Let \((\Sigma,\sigma=\{q_{1},\cdots,q_{k}\})\) be a \(k\)-punctured genus \(g\) Riemann surface, and \(\mathcal{M}_{B}=\mathcal{M}_{\mu}\) be its \(G\)-character variety of type \(\mu\in\mathcal{P}_{n}^{k}\). If \(\mathcal{M}_{\mu}\) is very generic, then the homotopy type conjecture 0.2 holds for \(\mathcal{M}_{\mu}\)._
Here, \(\mathcal{M}_{\mu}\) is a \(GL_{n}(\mathbb{C})\)-character variety on \(\Sigma\setminus\sigma\) with fixed semisimple conjugacy class \(\mathcal{C}_{i}\) around \(q_{i}\); \(\mu=(\mu^{1},\cdots,\mu^{k})\), where \(\mu^{i}=(\mu_{1}^{i}\geq\mu_{2}^{i}\geq\cdots)\in\mathcal{P}_{n}\) encodes the multiplicities of the eigenvalues of \(\mathcal{C}_{i}\). '_generic_' means: \((\mathcal{C}_{1},\cdots,\mathcal{C}_{k})\) is _generic_ in the sense of [36, Def.2.1.1] (Definition 1.1); '_very generic_' means: \(\mathcal{C}_{k}\) is in addition regular semisimple (Assumption 1.4).
If \(\mathcal{M}_{\mu}\) is only _generic_, by the curious hard Lefschetz property [54], we know that \(\mathbb{D}\partial\mathcal{M}_{\mu}\) is a rational homology \((d_{\mu}-1)\)-sphere, \(d_{\mu}=\dim\mathcal{M}_{\mu}\). See Proposition 3.21 or [58, Rmk.7.0.7].
To prove the main result, we need to improve A. Mellit's cell decomposition [54, SS.7]. In fact, our second main result answers Mellit's question in [54, SS1.4], i.e. we give a honest cell decomposition for very generic character varieties \(\mathcal{M}_{\mu}\) (without passing to a vector bundle):
**Theorem 0.5** (Theorem 2.10).: _Any very generic \(\mathcal{M}_{\mu}\) admits a cell decomposition:_
\[\mathcal{M}_{\mu}=\sqcup_{(\vec{w},p)\in\mathcal{W}^{*}}\mathcal{M}_{\mu}(\vec{w},p),\]
_where each \(\mathcal{M}_{\mu}(\vec{w},p)\) is a locally closed affine subvariety, such that_
\[\mathcal{M}_{\mu}(\vec{w},p)\cong(\mathbb{K}^{\times})^{\overline{a}(\vec{w},p)} \times\mathcal{A}_{\mu}(\vec{w},p),\quad\mathcal{A}_{\mu}(\vec{w},p)\times \mathbb{K}^{|U|}\cong\mathbb{K}^{b(\vec{w},p)},\]
_where \(U\subset G\) is the subgroup of unipotent upper triangular matrices, \(\overline{b}(\vec{w},p):=b(\vec{w},p)-|U|\), and \(\overline{a}(\vec{w},p)+2\overline{b}(\vec{w},p)=d_{\mu}=\dim_{\mathbb{C}} \mathcal{M}_{\mu}\) is a constant._
_Moreover, there exists a unique \((\vec{w}_{\max},p_{\max})\) such that \(\mathcal{M}_{\mu}(\vec{w}_{\max},p_{\max})\) is of maximal dimension \(d_{\mu}\). Equivalently, \(\overline{a}(\vec{w}_{\max},p_{\max})=d_{\mu}\) (resp. \(\overline{b}(\vec{w}_{\max},p_{\max})=0\)). In particular, \(\mathcal{M}_{\mu}(\vec{w}_{\max},p_{\max})\) is an open dense algebraic torus:_
\[\mathcal{M}_{\mu}(\vec{w}_{\max},p_{\max})\cong(\mathbb{K}^{\times})^{d_{\mu} },\quad\mathcal{A}_{\mu}(\vec{w}_{\max},p_{\max})=\{\mathrm{pt}\}.\]
**Note**: \(\mathcal{A}_{\mu}(\vec{w},p)\) is stably isomorphic to \(\mathbb{A}^{\overline{b}(\vec{w},p)}\). We expect that, \(\mathcal{A}_{\mu}(\vec{w},p)\) is in general not isomorphic to \(\mathbb{A}^{\overline{b}(\vec{w},p)}\), hence gives a counterexample to the Zariski cancellation problem for dimension \(b=\overline{b}(\vec{w},p)\geq 3\) in characteristic zero. See Remark 2.12 for a further discussion.
Similar to [54, §7], Theorem 0.5 is proved via a connection to braid varieties. However, a more careful analysis is required. For the sake of clarity, we will give a self-contained proof. In fact, we use a somewhat different language (diagram calculus of matrices). To prove Theorem 0.4 via Theorem 0.5, the idea is to apply a **remove lemma** (Lemma 3.6): If \(X\) is a connected smooth quasi-projective variety, and \(Z\subset X\) is a smooth irreducible closed subvariety with nonempty open complement \(U\), such that \(\mathbb{D}\partial Z\) is contractible, then we have a homotopy equivalence \(\mathbb{D}\partial X\sim\mathbb{D}\partial U\). We can indeed apply the remove lemma because of a key property (Lemma 3.16): if \(\mathcal{A}_{\mu}\) is stably isomorphic to \(\mathbb{A}^{b}\) for some \(b\geq 1\), then \(\mathbb{D}\partial\mathcal{A}_{\mu}\) is contractible.
Partly motivated by the question of capturing the (co)homology with integer coefficients of \(\mathbb{D}\partial X\) for any smooth quasi-projective variety \(X\), especially for any generic character variety \(X=\mathcal{M}_{\mu}\), we work with Voevodsky's geometric motives with compact support \(\mathrm{M}^{c}(X)\in\mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}}(\mathbb{K})\)[64]. \(\mathrm{M}^{c}(X)\) is defined for any variety \(X\in\mathbf{Var}(\mathbb{K})\). When \(X\) is smooth quasi-projective, by taking a log compactification with simple normal crossing boundary divisor, \(\mathrm{M}^{c}(X)\) admits a more concrete description in terms of those of smooth projective varieties (Lemma 4.2). On the other hand, by the classical work of H.Gillet and C.Soule [30, Thm.2] (recalled in Theorem 4.3), one can associate a _motivic weight complex_\(\mathrm{W}(X)\in K^{b}(\mathbf{Chow}_{\mathrm{rat}}^{\mathrm{eff}}(\mathbb{K})^{ \mathrm{op}})\) to any \(X\in\mathbf{Var}(\mathbb{K})\). In other words, \(\mathrm{W}(X)\) is a bounded cochain complex of (contravariant) effective pure Chow motives over \(\mathbb{K}\). When \(X\) is smooth projective, \(\mathrm{W}(X)=\mathrm{M}_{\mathrm{rat}}(X)\) reduces to the effective pure Chow motive associated to \(X\). The connection of \(\mathrm{W}(X)\) with \(\mathrm{M}^{c}(X)\) is as follows: Let \(\mathrm{W}_{\mathrm{c}}(X)^{-}\in K^{b}(\mathbf{Chow}_{\mathrm{rat}}^{\mathrm{ eff}}(\mathbb{K}))\) be the bounded cochain complex of (covariant) effective pure Chow motives obtained from \(\mathrm{W}(X)\) by reversing the arrows and negating the degrees. Then
\[\mathrm{W}_{\mathrm{c}}(X)^{-}=t\circ\mathrm{M}^{c}(X),\]
where \(t\) is Bondarko's weight complex functor [5] (See Theorem 4.4).
The point of interest to us is that \(\mathrm{W}(X)\) knows \(H^{*}(\mathbb{D}\partial X;\mathbb{Z})\). As a cohomological analogue of [47, SS.3.1], one can define the _Betti weight cohomology with compact support_ with coefficients in any commutative ring \(A\) (see Definition 4.7):
\[H^{a,b}_{\mathrm{W},c}(X,\mathfrak{R}_{B};A):=H^{a}(H^{b}(\mathrm{W}(X);A))\in A \text{-Mod}.\]
When \(A=\mathbb{Q}\), this recovers Deligne's weight filtration (Lemma 4.8):
\[H^{a,b}_{\mathrm{W},c}(X,\mathfrak{R}_{B};\mathbb{Q})=H^{a,b}_{\mathrm{W},c}(X,\mathfrak{R}_{B};\mathbb{Z})\otimes\mathbb{Q}\cong\mathrm{Gr}^{b}_{b}H^{a+b} _{c}(X,\mathbb{Q}).\]
When \(A=\mathbb{Z}\) and \(X\) is connected smooth quasi-projective, \(H^{a,b}_{\mathrm{W},c}(X,\mathfrak{R}_{B})=H^{a,b}_{\mathrm{W},c}(X, \mathfrak{R}_{B};\mathbb{Z})\) extends the reduced integral cohomology of the dual boundary complex \(\mathbb{D}\partial X\) (Proposition 4.10):
\[H^{a,0}_{\mathrm{W},c}(X,\mathfrak{R}_{B})\cong\widetilde{H}^{a-1}(\mathbb{D} \partial X,\mathbb{Z}).\]
So, Conjecture 0.2 implies that \(H^{\bullet,0}_{\mathrm{W},c}(\mathcal{M}_{\mu},\mathfrak{R}_{B})=\mathbb{Z}[- (d_{\mu}-1)]\).
Inspired by the above, one may wonder whether \(H^{a,b}_{\mathrm{W},c}(\mathcal{M}_{\mu},\mathfrak{R}_{B})\) is always torsion-free. Based on examples, we tend to believe that the answer is affirmative. Combined with the HLRV conjecture [36, Conj.1.2.1] (Conjecture 4.16) on mixed Hodge polynomials of \(\mathcal{M}_{\mu}\), Theorem 0.4 and some other thoughts, this leads us to a conjectural formula computing \(\mathrm{M}^{c}(\mathcal{M}_{\mu})\):
**Conjecture 0.6** (Conjecture 4.20).: _Let \(\mathcal{M}_{\mu}\) be any generic character variety of type \(\mu\), then:_
1. _(a reformulation of Conjecture_ 4.16_._(1)) The rational function_ \(\mathbb{H}_{\mu}(-z,w)\) _is of the form_ (0.0.1) \[\mathbb{H}_{\mu}(-z,w)=\sum_{0\leq i,j\leq d_{\mu},i+j\leq d_{\mu}}c^{\mu}_{ij }z^{i}w^{j},\quad c^{\mu}_{ij}\in\mathbb{Z}_{\geq 0},\]
_such that \(c^{\mu}_{ij}\neq 0\Rightarrow j-i\) is even. Besides, \(c^{\mu}_{0d_{\mu}}=c^{\mu}_{d_{\mu}0}=1\)._
2. \(\mathcal{M}_{\mu}\) _contains an open dense algebraic torus. Thus, any log compactification of_ \(\mathcal{M}_{\mu}\) _is rational._
3. _(main part) The motive with compact support of_ \(\mathcal{M}_{\mu}\) _is_ (0.0.2) \[\mathrm{M}^{c}(\mathcal{M}_{\mu})=\oplus_{0\leq i,j\leq d_{\mu},i+j\leq d_{ \mu}}(\mathbf{L}^{\frac{d_{\mu}+j-i}{2}}[i])^{\oplus c^{\mu}_{ij}}\in\mathbf{ DM}^{\mathrm{eff}}_{\mathrm{gm}}(\mathbb{K}).\]
4. _(Integral form of curious Poincare duality) We have an isomorphism:_ (0.0.3) \[H^{a,d_{\mu}-2m}_{\mathrm{W},c}(\mathcal{M}_{\mu},\mathfrak{R}_{B})\xrightarrow{ z}H^{a-2m,d_{\mu}+2m}_{\mathrm{W},c}(\mathcal{M}_{\mu},\mathfrak{R}_{B}).\]
Here, \(\mathbb{H}_{\mu}(z,w)\in\mathbb{Q}(z,w)\) is the HLRV function of type \(\mu\)[36, (1.1.3)] (Definition 4.14); \(\mathbf{L}^{m}:=\mathbf{L}^{\otimes m},\forall m\geq 0\), where \(\mathbf{L}:=\mathrm{M}^{c}(\mathbb{A}^{1})=\mathbb{Z}(1)[2]\in\mathbf{DM}^{ \mathrm{eff}}_{\mathrm{gm}}(\mathbb{K})\) is the (geometric) Lefschetz motive.
By Theorem 0.4, (2) is already true in the very generic case. One may also wonder a more general question: when is \(\mathcal{M}_{\mu}\) a cluster variety? For related discussions, see [22, 77]. If \(\mathcal{M}_{\mu}\) is only generic, unfortunately, we do not have any good evidence for (2) (except for Example 4.27). However, for the application to Conjecture 0.2, we only need a much weaker statement:
\((2^{\prime})\): \(\pi_{1}(\overline{\mathcal{M}}_{\mu})\) is abelian, where \(\overline{\mathcal{M}}_{\mu}\) is a log compactification.
Then, \((2^{\prime})\Rightarrow\pi_{1}(\mathbb{D}\partial\mathcal{M}_{\mu})\) is abelian, if \(d_{\mu}>2\). This follows from the surjection (Corollary 3.14):
\[d_{\mu}>2\ \ \Rightarrow\ \pi_{1}(\mathcal{M}_{\mu})\twoheadrightarrow\pi_{1}( \overline{\mathcal{M}}_{\mu})\twoheadrightarrow\pi_{1}(\mathbb{D}\partial \mathcal{M}_{\mu}).\]
Another potential strategy for proving \((2^{\prime})\) when \(\mathcal{M}_{\mu}\) is only generic is via gauge theory as in [19] (see Remark 4.25). For example, this implies that \((2^{\prime})\) holds at least for twisted character varieties. See also [58, Thm.7.0.3]. Now, we mention the bonus of (4):
**Proposition 0.7** (Proposition 4.24).: _For any generic character variety \(\mathcal{M}_{\mu}\) of type \(\mu\), we have_
\[H^{\bullet,2d_{\mu}}_{\mathrm{W},c}(\mathcal{M}_{\mu},\Re_{B})\cong\mathbb{Z}.\quad(\text{i.e., }H^{a,2d_{\mu}}_{\mathrm{W},c}(\mathcal{M}_{\mu},\Re_{B})=0,\forall a \neq 0;\ \ H^{0,2d_{\mu}}_{\mathrm{W},c}(\mathcal{M}_{\mu},\Re_{B})\cong\mathbb{Z})\]
_Thus,_
* _Conjecture_ 0.6_._(_4_)_ \(\Rightarrow\mathbb{D}\partial\mathcal{M}_{\mu}\) _is an_ _integral_ _homology sphere of dimension_ \(d_{\mu}-1\)_;_
* _Conjecture_ 0.6_._(_4_)_ \(+\)__\((2^{\prime})\Rightarrow\) _the homotopy type conjecture_ 0.2 _holds for_ \(\mathcal{M}_{\mu}\)_._
Recall that, the weight complex functor \(t\) induces an isomorphism on Grothendieck rings: \(K_{0}(t):K_{0}(\mathbf{DM}^{\mathrm{eff}}_{\mathrm{gm}}(\mathbb{K}))\cong K_{0 }(\mathbf{Chow}^{\mathrm{eff}}_{\mathrm{rat}}(\mathbb{K}))\)[5, Thm.6.4.2] (recalled in Theorem 4.4.(_b_)). So, denote
\[K_{0}(X):=K_{0}(\mathrm{M}^{c}(X))\in K_{0}(\mathbf{DM}^{\mathrm{eff}}_{ \mathrm{gm}}(\mathbb{K}))\cong K_{0}(\mathbf{Chow}^{\mathrm{eff}}_{\mathrm{rat }}(\mathbb{K})),\quad\forall X\in\mathbf{Var}(\mathbb{K}).\]
Let \(\mathbb{L}:=K_{0}(\mathbf{L})=K_{0}(\mathbb{A}^{1})\), \(\mathbb{1}:=K_{0}(\mathrm{pt})\). As partial evidence for Conjecture 0.6.(3), we have:
**Proposition 0.8** (Proposition 4.22).: _If \(\mathcal{M}_{\mu}\) is a very generic character variety of type \(\mu\), then:_
\[K_{0}(\mathcal{M}_{\mu})=\sum_{\ell=0}^{d_{\mu}}c^{\mu}_{\ell}\mathbb{L}^{ \ell}\in K_{0}(\mathbf{Chow}^{\mathrm{eff}}_{\mathrm{rat}}(\mathbb{K}));\ \ \ \sum_{\ell=0}^{d_{\mu}}c^{\mu}_{\ell}q^{\ell}:=\sqrt{q}^{d_{\mu}}\mathbb{H}_{\mu} (\frac{1}{\sqrt{q}},\sqrt{q})\in\mathbb{Z}[q].\]
_That is, a weak form of Conjecture 0.6.(3) holds for \(\mathcal{M}_{\mu}\)._
As direct evidence, it's possible to verify Conjecture 0.6 in concrete examples. For \((g,k,n)=(0,4,2)\) and \(\mu=((1^{2}),(1^{2}),(1^{2}),(1^{2}))\in\mathcal{P}^{4}_{2}\) (Example 2.13, 4.26), \(\mathcal{M}_{\mu}\) is a rank 2 (very generic) character variety of dimension \(d_{\mu}=2\) over the four-punctured two-sphere. In a special case, it's identified with the classical Fricke-Klein cubic [23]:
\[\mathcal{M}_{\mu}\cong V_{t}:=\{x^{2}+y^{2}+z^{2}-xyz-2=t\}\subset\mathbb{A}^ {3}_{x,y,z},\ \ t\neq\pm 2.\]
The cell decomposition (Theorem 2.10) gives
\[\mathcal{M}_{\mu}=\mathbb{K}^{\sqcup 6}\sqcup(\mathbb{K}^{\times})^{2}.\]
By a direct computation, Conjecture 4.20 holds for \(\mathcal{M}_{\mu}\):
\[\mathrm{M}^{c}(\mathcal{M}_{\mu})=\mathbf{L}^{0}[2]\oplus(\mathbf{L}^{1})^{\oplus 4}\oplus\mathbf{L}^{2}\in\mathbf{DM}^{\mathrm{eff}}_{\mathrm{gm}}(\mathbb{K});\quad\mathbb{H}_{\mu}(-z,w)=z^{2}+4+w^{2}.\]
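As a consistency check, Proposition 0.8 (which applies here, since \(\mathcal{M}_{\mu}\) is very generic), together with the fact that \(\mathbb{H}_{\mu}\) above is even in \(z\), yields the class
\[\sqrt{q}^{\,d_{\mu}}\,\mathbb{H}_{\mu}\left(\tfrac{1}{\sqrt{q}},\sqrt{q}\right)=q\left(\frac{1}{q}+4+q\right)=1+4q+q^{2},\quad\text{i.e.}\quad K_{0}(\mathcal{M}_{\mu})=\mathbb{1}+4\mathbb{L}+\mathbb{L}^{2},\]
which agrees both with the displayed \(\mathrm{M}^{c}(\mathcal{M}_{\mu})\) and with the cell decomposition above, as \([\mathbb{K}^{\sqcup 6}\sqcup(\mathbb{K}^{\times})^{2}]=6\mathbb{L}+(\mathbb{L}-\mathbb{1})^{2}=\mathbb{L}^{2}+4\mathbb{L}+\mathbb{1}\).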
As a rank 3 example, take \((g,k,n)=(0,3,3)\) and \(\mu=((1^{3}),(1^{3}),(1^{3}))\) (Example 2.14, 4.28), then \(\mathcal{M}_{\mu}\) is a rank 3 (very generic) character variety of dimension \(d_{\mu}=2\) over the pair of pants. More concretely, take \(\mathcal{C}_{j}=G\cdot C_{j},1\leq j\leq 3\), \(\zeta=\exp(\frac{2\pi i}{3})\), with
\[C_{j}:=\mathrm{Diag}(1,\zeta,\zeta^{2}),j\leq 2;\ C_{3}:=\mathrm{Diag}(a_{1},a_{2},a_{3 }),\ \mathrm{s.t.}\ a_{1}+a_{2}+a_{3}=0,a_{1}a_{2}a_{3}=1,a_{1}^{-1}+a_{2}^{-1}+a_{3}^{- 1}=t,\]
where \(\tau:=4t^{3}+27\neq 0,27\). Then \(\mathcal{M}_{\mu}\) is identified with a smooth cubic affine surface:
\[\mathcal{M}_{\mu}\cong\{\widetilde{y}_{1}\widetilde{y}_{2}\widetilde{y}_{3}+ \widetilde{y}_{1}^{3}+\widetilde{y}_{2}^{3}+\widetilde{y}_{3}^{2}-\frac{9}{2} \widetilde{y}_{1}\widetilde{y}_{2}+\frac{\tau}{4}=0\}\subset\mathbb{A}_{ \widetilde{y}_{1},\widetilde{y}_{2},\widetilde{y}_{3}}^{3}.\]
The cell decomposition (Theorem 2.10) gives
\[\mathcal{M}_{\mu}=\mathbb{K}^{\sqcup 8}\sqcup(\mathbb{K}^{\times})^{2}.\]
By a direct computation, Conjecture 4.20 holds for \(\mathcal{M}_{\mu}\):
\[\mathrm{M}^{c}\left(\mathcal{M}_{\mu}\right)=\mathbf{L}^{0}[2]\oplus(\mathbf{ L}^{1})^{\oplus 6}\oplus\mathbf{L}^{2}\in\mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}}( \mathbb{K});\quad\mathbb{H}_{\mu}(-z,w)=z^{2}+6+w^{2}.\]
As an example which is **not** very generic, take \((g,k,n)=(1,1,n)\), and \(\mu=((n))\) (Example 4.27), then \(\mathcal{M}_{\mu}\) is a rank \(n\) (generic) character variety of dimension \(d_{\mu}=2\) over the punctured-torus of type \(\mu=((n))\). By the generic assumption, we may denote \(\mathcal{C}_{1}=G\cdot C_{1},C_{1}=\zeta_{n}^{-c_{1}}I_{n}\) where \(\zeta_{n}=\exp(\frac{2\pi i}{n}),\gcd(n,c_{1})=1\). By [40, Thm.2.2.17], we have \(\mathcal{M}_{\mu}\cong(\mathbb{K}^{\times})^{2}\), and hence
\[\mathrm{M}^{c}(\mathcal{M}_{\mu})=\mathbf{L}^{0}[2]\oplus(\mathbf{L}^{1}[1])^ {\oplus 2}\oplus\mathbf{L}^{2}\in\mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}}( \mathbb{K}).\]
So, Conjecture 4.20 becomes \(\mathbb{H}_{\mu}(-z,w)=(z+w)^{2}\), i.e., \(\mathbb{H}_{\mu}(z,w)=(z-w)^{2}\). For arbitrary \(n\), this is still a conjecture ([40, Conj.4.3.2] or [36, Conj.1.5.1]). For lower ranks, by a direct computation, we indeed have \(\mathbb{H}_{((1))}(z,w)=\mathbb{H}_{((2))}(z,w)=(z-w)^{2}\), etc.
However, all three examples above are only 2-dimensional. Thus, an interesting problem is to compute higher-dimensional examples. Besides, when \(d_{\mu}\geq 6\), another new phenomenon might appear: the cell decomposition for very generic \(\mathcal{M}_{\mu}\) might involve nontrivial stably affine spaces \(\mathcal{A}_{\mu}(\vec{w},p)\). As a final remark, we mention that the same strategy in this article can be applied to wild character varieties [4, 7]. This will be pursued elsewhere.
### Organization
We have already explained the main ideas above. Now, we sketch the organization.
In Section 1, we set up the basic notions related to character varieties. To clarify the computations in Section 2, we introduce some diagram calculus of matrices in Section 1.3. Morally, it's about braid matrix diagrams generalizing braid matrices associated to positive braids. In Section 1.4, we review braid varieties, complemented by Appendix A.
In Section 2, we prove a strong form of the cell decomposition for \(\mathcal{M}_{\mu}\) (Theorem 2.10). This involves some routine diagram calculus in Sections 2.1-2.2. In Sections 2.3-2.4, we have borrowed statements on quotients of varieties from Appendix C. In Section 2.5, we illustrate Theorem 2.10 by two examples.
In Section 3, we study dual boundary complexes of character varieties. Section 3.1 reviews the basics on dual boundary complexes. Section 3.2 discusses some properties of fundamental groups of dual boundary complexes. In particular, it's used in the proof of Lemma 3.16. In Section 3.3-3.4, we prove our main results: Theorem 3.15 and Proposition 3.21.
In Section 4, we discuss motives of character varieties. In Section 4.1, we give a crash review on Chow and geometric motives. Section 4.2 introduces the motivic weight complex, the Betti/singular weight cohomology with compact support, and discusses some properties. In Section 4.3, we propose a motivic HLRV conjecture (Conjecture 4.20), and prove some compatible results (Proposition 4.24, 4.22). Here, we use Appendix B (modified Macdonald symmetric functions). In Section 4.4, we verify Conjecture 4.20 for three 2-dimensional examples.
### Acknowledgements
The author would like to thank his Ph.D advisor, Vivek Shende, for the interest and the suggestion to work along the direction of this article. He thanks Chenglong Yu, Penghui Li, Mingchen Xia, and Yu Zhao for helpful (online) discussions, and Mirko Mauri for some useful comments after the first arXiv version. Furthermore, the author is very grateful to David Nadler, Lenhard Ng, Shing-Tung Yau, and Shaoyuan Zheng, for the valuable support in his career. During this work, the author is a visiting postdoc at the Institute of Mathematical Sciences, the Chinese University of Hong Kong.
## 1. Setup
Fix the base field \(\mathbb{K}\) to be algebraically closed of characteristic \(0\). For simplicity, \(\mathbb{K}=\mathbb{C}\). A _\(\mathbb{K}\)-variety_ means a _reduced separated scheme of finite type_ over \(\mathbb{K}\).
**Convention 1**: Let \(H\) be a linear algebraic group acting on a variety \(X\) over \(\mathbb{K}\), then
* A principal \(H\)-bundle (or a fiber bundle) means so in the etale topology, unless stated otherwise.
* For \(H\) reductive and \(X\) affine, \(X//H=\operatorname{Spec}\,\mathcal{O}_{X}(X)^{H}\) denotes the affine GIT quotient.
* \([X/H]\) denotes the quotient stack of \(X\) by \(H\).
* If \(H\) acts freely on \(X\), and \(\pi:X\to Y\) is a principal \(H\)-bundle over a \(\mathbb{K}\)-variety, hence also a geometric quotient (see Definition C.7, Proposition C.24), then denote \(X/H:=Y\).
Equivalently, this says that the quotient stack \([X/H]\) is representable by \(Y\). Thus, we may also use the identification \([X/H]=X/H\).
We refer to Appendix C for more background on various quotients of varieties.
### Generic character varieties
Recall that, \((\Sigma,\sigma=\{q_{1},\cdots,q_{k}\})\) is a \(k\)-punctured genus \(g\) Riemann surface, \(k\geq 1,2g+k\geq 3\). Let \(T\subset G=GL_{n}(\mathbb{K})\) be the diagonal maximal torus. Let \((C_{1},\cdots,C_{k})\in T^{k}\) be semisimple elements of type \(\mu:=(\mu^{1},\cdots,\mu^{k})\in\mathcal{P}_{n}^{k}\). That is, the multiplicities of eigenvalues of \(C_{i}\) define a partition of \(n\): \(\mu^{i}=(\mu^{i}_{1},\cdots,\mu^{i}_{r_{i}})\in\mathcal{P}_{n}\).
Let \(\mathcal{M}_{\mu}=\mathcal{M}_{B}=\mathcal{M}_{B}(\Sigma,\sigma,G;C_{1},\cdots,C_{k})\) be the character variety of \(G\)-local systems on \(\Sigma\) whose local monodromy around \(q_{i}\) is conjugate to \(C_{i}\). More precisely, define the affine \(\mathbb{K}\)-variety
\[M_{B}=M_{B}(\Sigma,\sigma,G;C_{1},\cdots,C_{k}):=\{((A_{j})_{j=1}^{2g},x_{1},\cdots,x_{k})\in G^{2g+k}:\prod_{j=1}^{g}(A_{2j-1},A_{2j})\prod_{i=1}^{k}x_{i}C_{i}x_{i}^{-1}=\mathsf{id}\},\]
where \((-,-)\) stands for the multiplicative commutator. It has an action of the reductive group
\[G_{\mathrm{par}}:=G\times\prod_{i=1}^{k}Z(C_{i}),\quad Z(C_{i})=\text{the centralizer of }C_{i},\]
with the action given by:
\[(h_{0},h_{1},\cdots,h_{k})\cdot(A_{1},\cdots,A_{2g},x_{1},\cdots,x_{k}):=(h_{0}A_{1}h_{0}^{-1},\cdots,h_{0}A_{2g}h_{0}^{-1},h_{0}x_{1}h_{1}^{-1},\cdots,h_{0}x_{k}h_{k}^{-1}).\]
Then, the diagonal \(\mathbb{K}^{\times}\) acts trivially on \(M_{B}\), and \(\mathcal{M}_{\mu}:=M_{B}//G_{\mathrm{par}}\) is the affine GIT quotient. Denote \(PG_{\mathrm{par}}:=G_{\mathrm{par}}/\mathbb{K}^{\times}\).
**Definition 1.1** ([54, Def.4.6.1]).: \((C_{1},\cdots,C_{k})\in T^{k}\) is _generic_ if: \(\prod_{i=1}^{k}\det C_{i}=1\), and for any \(1\leq n^{\prime}<n\) and any choice of \(n^{\prime}\) eigenvalues \(\alpha_{i,1},\cdots,\alpha_{i,n^{\prime}}\) of each \(C_{i}\), we have
\[\prod_{i=1}^{k}\prod_{j=1}^{n^{\prime}}\alpha_{i,j}\neq 1.\]
In this case, \(\mathcal{M}_{\mu}\) is called a _generic character variety_.
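For instance, for \(n=2\) and \(C_{i}=\mathrm{Diag}(a_{i},b_{i})\), genericity amounts to \(\prod_{i=1}^{k}a_{i}b_{i}=1\) together with \(\prod_{i=1}^{k}c_{i}\neq 1\) for every choice of eigenvalues \(c_{i}\in\{a_{i},b_{i}\}\); this holds for eigenvalues in sufficiently general position.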
**Lemma 1.2** ([36, Thm.2.1.5], [37, Thm.5.1.1]).: _If \((C_{1},\cdots,C_{k})\) is generic of type \(\mu\), then \(\mathcal{M}_{\mu}=M_{B}/PG_{\mathrm{par}}\) (if nonempty) is a connected smooth affine \(\mathbb{K}\)-variety of dimension_
\[d_{\mu}:=n^{2}(2g-2+k)-\sum_{i,j}(\mu_{j}^{i})^{2}+2, \tag{1.1.1}\]
_and the quotient map \(\pi:M_{B}\to\mathcal{M}_{\mu}=M_{B}/PG_{\mathrm{par}}\) is a principal \(PG_{\mathrm{par}}\)-bundle._
We remark that the connectedness of \(\mathcal{M}_{\mu}\) is proved by a computation of the E-polynomial.
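To illustrate the dimension formula, take \(g=0\), \(k=4\), \(n=2\), with all \(C_{i}\) regular semisimple, so \(\mu^{i}=(1,1)\) for each \(i\); then
\[d_{\mu}=4\cdot(0-2+4)-4\cdot(1^{2}+1^{2})+2=8-8+2=2,\]
so a generic character variety of the four-punctured sphere with regular semisimple local monodromies is a smooth affine surface.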
**Definition 1.3**.: \(C_{i}\in T\) is _ordered nicely_ if it's of the form \(\mathrm{Diag}(\lambda_{i,1}I_{\mu_{1}^{i}},\cdots,\lambda_{i,r_{i}}I_{\mu_{r_{ i}}^{i}})\).
Without loss of generality, we may assume that each \(C_{i}\in T\) is _ordered nicely_, so that \(Z(C_{i})\subset G\) is the Levi subgroup of block-diagonal matrices of type \(\mu^{i}\).
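For instance, for \(n=3\) and \(\mu^{i}=(2,1)\), a nicely ordered \(C_{i}=\mathrm{Diag}(\lambda_{i,1}I_{2},\lambda_{i,2})\) has centralizer \(Z(C_{i})\cong GL_{2}\times GL_{1}\), embedded block-diagonally in \(G=GL_{3}\).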
### Very generic character varieties
Let \(B\subset G\) be the standard Borel subgroup of upper triangular elements, with unipotent radical \(U\subset B\). We make the _very generic_ assumption:
**Assumption 1.4**.: \(C_{k}\) is regular semisimple. So, \(Z(C_{k})=T\) and \(\mu^{k}=(1^{n})\in\mathcal{P}_{n}\).
Then, \(\mathcal{M}_{\mu}\) is called a _very generic character variety_. The nice feature in this case is a cell decomposition [54], and an enhanced version will be proved in Theorem 2.10.
The first step is as follows: Taking the diagonal gives the quotient morphism
\[D:B\twoheadrightarrow B/U\cong T.\]
Define a closed (affine) subvariety of \(M_{B}\) by
\[M_{B}^{\prime}:=M_{B}\cap(G^{2g+k-1}\times U), \tag{1.2.1}\]
and a closed subgroup of \(G_{\rm par}\) by
\[B_{\rm par}=B\times\prod_{i=1}^{k-1}\!\!\!Z(C_{i})\hookrightarrow G_{\rm par}:(b,h _{1},\ldots,h_{k-1})\mapsto(b,h_{1},\ldots,h_{k-1},D(b)). \tag{1.2.2}\]
Denote \(PB_{\rm par}:=B_{\rm par}/\mathbb{K}^{\times}\). We have mutually inverse isomorphisms of \(G_{\rm par}\)-varieties
\[PG_{\rm par}\times^{PB_{\rm par}}M^{\prime}_{B}:=(PG_{\rm par} \times M^{\prime}_{B})/PB_{\rm par}\xrightarrow{\simeq}M_{B}:[g_{\rm par},m^{ \prime}_{B}=(A_{1},\ldots,x_{k})]\mapsto g_{\rm par}\cdot m^{\prime}_{B},\] \[M_{B}\to PG_{\rm par}\times^{PB_{\rm par}}M^{\prime}_{B}:(A_{1}, \ldots,x_{k})\mapsto[(x_{k},\mathsf{id},\ldots,\mathsf{id}),((x_{k}^{-1}A_{j }x_{k})_{j=1}^{2g},(x_{k}^{-1}x_{i})_{i=1}^{k})],\]
where \(PG_{\rm par}\times^{PB_{\rm par}}M^{\prime}_{B}\) is well-defined by Proposition C.24. Then by Proposition C.29 and Proposition C.24, we obtain a natural isomorphism of \(\mathbb{K}\)-varieties:
\[M^{\prime}_{B}/PB_{\rm par}\cong(PG_{\rm par}\times^{PB_{\rm par}}M^{\prime}_{ B})/PG_{\rm par}\cong M_{B}/PG_{\rm par}=\mathcal{M}_{\mu},\]
and the quotient \(\pi^{\prime}:M^{\prime}_{B}\to\mathcal{M}_{\mu}=M^{\prime}_{B}/PB_{\rm par}\) is a principal \(PB_{\rm par}\)-bundle. Observe that we have a quotient group \(PB_{\rm par}/U\cong PT_{\rm par}:=(T\times\prod_{i=1}^{k-1}Z(C_{i}))/\mathbb{K }^{\times}\).
Take the coordinate change
\[U\to U:x_{k}\mapsto u_{k}:=x_{k}C_{k}x_{k}^{-1}C_{k}^{-1}. \tag{1.2.3}\]
We can re-write
\[M^{\prime}_{B}=\{(A_{1},\cdots,x_{k-1},u_{k})\in G^{2g+k-1}\times U:(\prod_{j =1}^{g}(A_{2j-1},A_{2j}))(\prod_{i=1}^{k-1}\!\!x_{i}C_{i}x_{i}^{-1})u_{k}C_{k} =\mathsf{id}\} \tag{1.2.4}\]
with the action of \(B_{\rm par}\) given by:
\[(b,h_{1},\ldots,h_{k-1})\cdot(A_{1},\cdots,x_{k-1},u_{k}):=(bA_{1}b^{-1}, \ldots,bx_{k-1}h_{k-1}^{-1},bu_{k}(b^{C_{k}})^{-1}),\ b^{C_{k}}:=C_{k}bC_{k}^{ -1}. \tag{1.2.5}\]
Next, for any _nicely ordered_\(C\in T\), take the parabolic subgroup \(P=BZ(C)\subset G\). Denote the Weyl groups \(W\cong S_{n}\subset G\), \(W(C):=W(Z(C))\). Recall the Bruhat cell decomposition
\[G=\sqcup_{w\in W/W(C)}B\dot{w}P,\]
where \(\dot{w}\in W\) denotes the _shortest_ representative of \(w\in W/W(C)\).
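For instance, for \(n=3\) and \(C\) of type \(\mu=(2,1)\), so \(W(C)=\langle\mathrm{s}_{1}\rangle\), the cosets in \(W/W(C)\) are \(\{e,\mathrm{s}_{1}\}\), \(\{\mathrm{s}_{2},\mathrm{s}_{2}\mathrm{s}_{1}\}\), \(\{\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2}\mathrm{s}_{1}\}\), with shortest representatives \(e\), \(\mathrm{s}_{2}\), \(\mathrm{s}_{1}\mathrm{s}_{2}\); thus \(G=BP\sqcup B\mathrm{s}_{2}P\sqcup B\mathrm{s}_{1}\mathrm{s}_{2}P\).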
In our setting, \(P_{i}:=BZ(C_{i})\subset G\). For each sequence
\[\widetilde{w}=(\tau_{1},\cdots,\tau_{2g},w_{1},\cdots,w_{k-1})\in W^{2g} \times\prod_{i=1}^{k-1}\!\!W/W(C_{i}), \tag{1.2.6}\]
we obtain a locally closed affine \(B_{\rm par}\)-subvariety of \(M^{\prime}_{B}\):
\[M^{\prime}_{B}(\widetilde{w}):=M^{\prime}_{B}\cap(\prod_{j=1}^{2g}\!\!B\tau_{ j}B\times\prod_{i=1}^{k-1}\!\!B\dot{w}_{i}P_{i}\times U)=M_{B}\cap(\prod_{j=1}^{2g}\! \!B\tau_{j}B\times\prod_{i=1}^{k-1}\!\!B\dot{w}_{i}P_{i}\times U). \tag{1.2.7}\]
Define
\[\mathcal{M}_{\mu}(\widetilde{w}):=\pi^{\prime}(M^{\prime}_{B}(\widetilde{w})) \hookrightarrow\mathcal{M}_{\mu}=M^{\prime}_{B}/PB_{\rm par}. \tag{1.2.8}\]
By Corollary C.28, \(\mathcal{M}_{\mu}(\widetilde{w})\subset\mathcal{M}_{\mu}\) is a locally closed \(\mathbb{K}\)-subvariety, and the quotient map
\[\pi^{\prime}_{\widetilde{w}}:=\pi^{\prime}|_{M^{\prime}_{B}(\widetilde{w})}:M^ {\prime}_{B}(\widetilde{w})\to\mathcal{M}_{\mu}(\widetilde{w})=M^{\prime}_{B }(\widetilde{w})/PB_{\rm par}\]
is a principal \(PB_{\text{par}}\)-bundle.
To obtain a cell decomposition for \(\mathcal{M}_{\mu}\), the idea is to decompose \(M^{\prime}_{B}(\vec{w})\) further, via a connection to the so-called _braid varieties_. We will come back to this point after some preparations.
### Diagram calculus of matrices
To relate character and braid varieties, we introduce some diagram calculus of matrices. Denote
\[\text{FBr}_{n}^{+}:=\langle\sigma_{1},\cdots,\sigma_{n-1}\rangle;\ \text{ (free monoid of $n$-strand (positive) braid presentations)}\] \[\text{Br}_{n}^{+}:=\text{FBr}_{n}^{+}/(\sigma_{i}\sigma_{i+1}\sigma_{i}=\sigma_{i+1}\sigma_{i}\sigma_{i+1},\forall i;\ \sigma_{i}\sigma_{j}=\sigma_{j}\sigma_{i},\forall|i-j|>1);\ \text{ ($n$-strand positive braids)}\] \[\text{s}:\text{Br}_{n}^{+}\to\text{Br}_{n}^{+}/(\sigma_{i}^{2}=1,\forall i)\cong S_{n}:\sigma_{k}\mapsto\text{s}_{k}:=\text{s}(\sigma_{k})=(k\ k+1).\ \text{ (underlying permutations)}\]
**Convention 2**: As in Figure 1.3.1, any \(\beta=\sigma_{i_{\ell}}\cdots\sigma_{i_{1}}\in\text{FBr}_{n}^{+}\) is represented by the _braid diagram \([\sigma_{i_{1}}|\cdots|\sigma_{i_{\ell}}]\)_ going _from left to right_, where \([\beta_{1}|\beta_{2}]\) is the concatenation of \(\beta_{1}\) with \(\beta_{2}\). Label the left (resp. right) ends _from bottom to top_ by \(1,2,\cdots,n\). Denote \([n]:=\{1,2,\cdots,n\}\).
**Definition 1.5**.: Let \(e_{i,j}\in M_{n\times n}(\mathbb{K})\) be the matrix so that \((e_{i,j})_{a,b}=\delta_{a,i}\delta_{b,j}\). Define
\[\mathrm{K}_{k}(\epsilon):=\sum_{i\neq k}e_{i,i}+\epsilon e_{k,k}\in G,\ 1\leq k\leq n,\ \epsilon\in\mathbb{K}^{\times};\qquad\mathrm{H}_{i,j}(\epsilon^{\prime}):=I_{n}+\epsilon^{\prime}e_{i,j}\in G,\ i\neq j,\ \epsilon^{\prime}\in\mathbb{K},\]
with \([\mathrm{K}_{k}(\epsilon)]\) (a _scaling_) and \([\mathrm{H}_{i,j}(\epsilon^{\prime})]\) (a _handleslide_) denoting the corresponding decorated diagrams (Figure 1.3.2).
1. Let \(\mathrm{FBD}_{n}\) be the free monoid generated by the crossings \(\sigma_{k}\) (\(1\leq k\leq n-1\)), the scalings \([\mathrm{K}_{k}(\epsilon)]\), and the handleslides \([\mathrm{H}_{i,j}(\epsilon^{\prime})]\); its elements are called _braid matrix diagram presentations_ (**bmdp**).
2. Define a morphism of monoids \[g_{-}:\mathrm{FBD}_{n}\to G:\sigma_{k}\mapsto\mathrm{s}_{k},\ \ [\mathrm{K}_{k}( \epsilon)]\mapsto\mathrm{K}_{k}(\epsilon),\ \ [\mathrm{H}_{i,j}(\epsilon^{\prime})]\mapsto\mathrm{H}_{i,j}( \epsilon^{\prime}).\]
Two **bmdp**'s \(\Gamma_{1},\Gamma_{2}\in\mathrm{FBD}_{n}\) are _weakly equivalent_ if \(g_{\Gamma_{1}}=g_{\Gamma_{2}}\), denoted as \(\Gamma_{1}\stackrel{{\mathfrak{w}}}{{\sim}}\Gamma_{2}\).
**Lemma 1.7** (_Elementary moves_ of **bmdp**'s).: _Denote \(i^{\prime}:=\mathrm{s}_{k}(i)\), then_
\[\mathrm{K}_{k}(\epsilon_{1})\circ\mathrm{K}_{\ell}(\epsilon_{2})=\left\{\begin{array}{ll}\mathrm{K}_{k}(\epsilon_{1}\epsilon_{2})&k=\ell,\\ \mathrm{K}_{\ell}(\epsilon_{2})\circ\mathrm{K}_{k}(\epsilon_{1})&k\neq\ell;\end{array}\right.\qquad\mathrm{H}_{i,j}(\epsilon_{1})\circ\mathrm{K}_{k}(\epsilon_{2})=\left\{\begin{array}{ll}\mathrm{K}_{k}(\epsilon_{2})\circ\mathrm{H}_{i,j}(\epsilon_{2}^{-1}\epsilon_{1})&k=i,\\ \mathrm{K}_{k}(\epsilon_{2})\circ\mathrm{H}_{i,j}(\epsilon_{1}\epsilon_{2})&k=j,\\ \mathrm{K}_{k}(\epsilon_{2})\circ\mathrm{H}_{i,j}(\epsilon_{1})&k\neq i,j.\end{array}\right.\]
_Each identity is either trivial (commutative), or represented by an (elementary) move in Figure 1.3.3._
Proof.: Let \(e_{1},\cdots,e_{n}\) be the _standard basis_ for \(\mathbb{K}^{n}\). It is straightforward to check that both sides of each identity give the same result when applied to each \(e_{i}\). See also Figure 1.3.3 for an illustration.
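As a concrete instance of the case \(k=i\), applying \(g_{-}\) in \(GL_{2}\):
\[\mathrm{H}_{1,2}(\epsilon_{1})\mathrm{K}_{1}(\epsilon_{2})=\begin{pmatrix}1&\epsilon_{1}\\ 0&1\end{pmatrix}\begin{pmatrix}\epsilon_{2}&0\\ 0&1\end{pmatrix}=\begin{pmatrix}\epsilon_{2}&\epsilon_{1}\\ 0&1\end{pmatrix}=\begin{pmatrix}\epsilon_{2}&0\\ 0&1\end{pmatrix}\begin{pmatrix}1&\epsilon_{2}^{-1}\epsilon_{1}\\ 0&1\end{pmatrix}=\mathrm{K}_{1}(\epsilon_{2})\mathrm{H}_{1,2}(\epsilon_{2}^{-1}\epsilon_{1}).\]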
**Definition 1.8** (Braid matrix diagrams).:
1. Let \(\underline{\mathrm{FBD}}_{n}\) be the quotient of \(\mathrm{FBD}_{n}\) by elementary moves. Then \(\exists\) a morphism of monoids \[\beta_{-}:\underline{\mathrm{FBD}}_{n}\to\mathrm{FBr}_{n}^{+}:\sigma_{k}\mapsto \sigma_{k},\ \ [\mathrm{K}_{k}(\epsilon)]\mapsto\mathsf{id}_{n},\ \ [\mathrm{H}_{i,j}( \epsilon^{\prime})]\mapsto\mathsf{id}_{n}.\]
2. The monoid \(\mathrm{BD}_{n}\) of rank \(n\) _braid matrix diagrams_ (**bmd**) is the quotient of \(\underline{\mathrm{FBD}}_{n}\) by the usual _braid relations/moves_. So, \(\mathrm{BD}_{n}=\mathrm{FBD}_{n}/\!\!\stackrel{{\mathfrak{b}}}{{\sim}}\), with \(\stackrel{{\mathfrak{b}}}{{\sim}}\) generated by elementary and braid moves.
3. \(\beta_{-}:\underline{\mathrm{FBD}}_{n}\to\mathrm{FBr}_{n}^{+}\) and \(g_{-}:\mathrm{FBD}_{n}\to G\) induce morphisms of monoids: \[\beta_{-}:\mathrm{BD}_{n}\to\mathrm{Br}_{n}^{+},\quad g_{-}:\mathrm{BD}_{n}\to G.\]
Figure 1.3.3. _Elementary moves_ for **bmdp**’s: The trivial ones are skipped. Each move is a weak equivalence in \(\mathrm{FBD}_{n}\) representing a composition identity in Lemma 1.7, and vice versa. The composition goes from left to right: \(\Gamma_{2}\circ\Gamma_{1}=[\Gamma_{1}|\Gamma_{2}]\).
**Remark 1.9**.: We have natural morphisms of monoids
\[i:\operatorname{FBr}_{n}^{+}\to\underline{\operatorname{FBD}}_{n}:\sigma_{k}\mapsto \sigma_{k},\ \ \ \leadsto\ \ \ i:\operatorname{Br}_{n}^{+}\to\operatorname{BD}_{n}:\sigma_{k}\mapsto \sigma_{k}.\]
\(\beta_{-}\circ i=\operatorname{Id}\), so \(i\) induces embeddings \(\operatorname{FBr}_{n}^{+}\hookrightarrow\underline{\operatorname{FBD}}_{n}\) and \(\operatorname{Br}_{n}^{+}\hookrightarrow\operatorname{BD}_{n}\). This morally explains the terminology. Altogether, we get a commutative diagram of monoids:
\[\begin{array}{ccccccc} & & \operatorname{FBr}_{n}^{+} & \twoheadrightarrow & \operatorname{Br}_{n}^{+} & \xrightarrow{\ \mathrm{s}\ } & S_{n}\\ & & \downarrow{\scriptstyle i} & & \downarrow{\scriptstyle i} & & \downarrow\\ \operatorname{FBD}_{n} & \twoheadrightarrow & \underline{\operatorname{FBD}}_{n} & \twoheadrightarrow & \operatorname{BD}_{n} & \xrightarrow{\ g_{-}\ } & G\end{array}\]
For \(w\in W=S_{n}\), \([w]\in\operatorname{Br}_{n}^{+}\subset\operatorname{BD}_{n}\) denotes the positive braid lift of (any reduced word of) \(w\); for \(b\in B\), \([b]\in\underline{\operatorname{FBD}}_{n}\) denotes the diagram obtained by writing \(b\) as a product of scalings and handleslides, so that \(g_{[b]}=b\) and \(\beta_{[b]}=\mathsf{id}_{n}\). For \(x=b_{1}wb_{2}\in BwB\), set \([x]:=[b_{1}]\circ[w]\circ[b_{2}]\in\operatorname{BD}_{n}\).
**Proposition 1.10**.: _The assignment \(x\mapsto[x]\) above is well-defined, and \(g_{-}\circ[-]=\mathsf{id}\). Moreover, for \(x\in BwB\), \(x^{\prime}\in Bw^{\prime}B\), the following are equivalent: \([x]\circ[x^{\prime}]=[xx^{\prime}]\in\operatorname{BD}_{n}\); \(\ell(ww^{\prime})=\ell(w)+\ell(w^{\prime})\); \(\operatorname{inv}(ww^{\prime})=w^{\prime-1}(\operatorname{inv}(w))\sqcup\operatorname{inv}(w^{\prime})\) (in the notation below). In this case, we moreover obtain unique decompositions of \(Bww^{\prime}B\) (see the end of the proof)._
Denote \(\mathcal{I}:=\{(i,j):1\leq i<j\leq n\}\). A subset \(\mathcal{J}\subset\mathcal{I}\) is _multiplicative_ if \((i,j),(j,k)\in\mathcal{J}\) implies \((i,k)\in\mathcal{J}\); for such \(\mathcal{J}\), set \(U_{\mathcal{J}}:=\mathsf{id}+\sum_{(i,j)\in\mathcal{J}}\mathbb{K}e_{i,j}\).
**Lemma 1.11**.: _Let \(\mathcal{J}\subset\mathcal{I}\) be a multiplicative subset. Then_
1. \(U_{\mathcal{J}}\subset U\) _is a closed subgroup._
2. _Any fixed total order_ \(\preceq\) _on_ \(\mathcal{J}\) _induces an isomorphism of_ \(\mathbb{K}\)_-varieties_ \[\phi_{\preceq}:\mathbb{A}^{|\mathcal{J}|}\to U_{\mathcal{J}}:(\epsilon_{i,j})_{(i,j)\in\mathcal{J}}\mapsto\prod_{(i,j)\in\mathcal{J}}\mathrm{H}_{i,j}(\epsilon_{i,j}),\] _where the product is taken in the order_ \(\preceq\)_._
Proof.: We prove (2). (1) is similar. Say, \(\mathcal{J}=\{(i_{1},j_{1})<\dots<(i_{N},j_{N})\}\). Then
\[\prod_{(i,j)\in\mathcal{J}}\mathrm{H}_{i,j}(\epsilon_{i,j})=\prod_{k=1}^{N}(I_{n}+\epsilon_{i_{k},j_{k}}e_{i_{k},j_{k}})=I_{n}+\sum_{m\geq 1}\ \sum_{(k_{0},k_{1})<\dots<(k_{m-1},k_{m})\;\text{in $\mathcal{J}$}}\epsilon_{k_{0},k_{1}}\cdots\epsilon_{k_{m-1},k_{m}}e_{k_{0},k_{m}}.\]
The equation \(I_{n}+\sum_{(i,j)\in\mathcal{J}}a_{ij}e_{i,j}=\prod_{(i,j)\in\mathcal{J}} \mathrm{H}_{i,j}(\epsilon_{i,j})\) becomes
\[a_{i,j}=\epsilon_{i,j}+\sum_{m\geq 2\;(i,k_{1})<\dots<(k_{m-1},j)\;\text{in $ \mathcal{J}$}}\epsilon_{i,k_{1}}\dots\epsilon_{k_{m-1},j},\quad\forall(i,j)\in \mathcal{J}.\]
This uniquely determines the \(\epsilon_{i,j}\)'s inductively, in the increasing (partial) order on \(|j-i|\).
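For instance, for \(n=3\), \(\mathcal{J}=\{(1,2),(1,3),(2,3)\}\), and the order \((1,2)\prec(1,3)\prec(2,3)\),
\[\mathrm{H}_{1,2}(\epsilon_{1,2})\mathrm{H}_{1,3}(\epsilon_{1,3})\mathrm{H}_{2,3}(\epsilon_{2,3})=I_{3}+\epsilon_{1,2}e_{1,2}+(\epsilon_{1,3}+\epsilon_{1,2}\epsilon_{2,3})e_{1,3}+\epsilon_{2,3}e_{2,3},\]
so \(I_{3}+\sum a_{i,j}e_{i,j}=\phi_{\preceq}(\vec{\epsilon})\) for \(\epsilon_{1,2}=a_{1,2}\), \(\epsilon_{2,3}=a_{2,3}\), \(\epsilon_{1,3}=a_{1,3}-a_{1,2}a_{2,3}\), illustrating the inductive inversion.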
For any permutation \(w\in W=S_{n}\), denote
\[\mathrm{inv}(w):=\{(i,j)\in\mathcal{I}:i<j,w(i)>w(j)\};\quad\mathrm{noinv}(w):= \{(i,j)\in\mathcal{I}:i<j,w(i)<w(j)\}.\]
Then \(\ell(w)=|\mathrm{inv}(w)|\), and \(\mathrm{inv}(w)\), \(\mathrm{noinv}(w)\) are multiplicative subsets of \(\mathcal{I}\). So,
\[U^{+}_{w}:=U_{\mathrm{noinv}(w)}=\mathsf{id}+\sum_{(i,j)\in\mathrm{noinv}(w)} \mathbb{K}e_{i,j};\quad U^{-}_{w}:=U_{\mathrm{inv}(w)}=\mathsf{id}+\sum_{(i,j )\in\mathrm{inv}(w)}\mathbb{K}e_{i,j},\]
are closed subgroups of \(U\), to which Lemma 1.11 applies. Alternatively, we have
\[U^{+}_{w}=U\cap w^{-1}Uw=w^{-1}U^{+}_{w^{-1}}w,\quad U^{-}_{w}=U\cap w^{-1}U^{ -}w, \tag{1.3.5}\]
with \(U^{-}\subset G\) the opposite unipotent subgroup. By Lemma 1.11 (2), we have decompositions:
\[U=U^{+}_{w}U^{-}_{w}=U^{-}_{w}U^{+}_{w};\quad BwB=U^{-}_{w^{-1}}wB=BwU^{-}_{w}. \tag{1.3.6}\]
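For instance, for \(n=3\) and \(w=\mathrm{s}_{1}\): \(\operatorname{inv}(\mathrm{s}_{1})=\{(1,2)\}\), \(\operatorname{noinv}(\mathrm{s}_{1})=\{(1,3),(2,3)\}\), so \(U^{-}_{\mathrm{s}_{1}}=\mathsf{id}+\mathbb{K}e_{1,2}\), \(U^{+}_{\mathrm{s}_{1}}=\mathsf{id}+\mathbb{K}e_{1,3}+\mathbb{K}e_{2,3}\), and indeed \(U=U^{+}_{\mathrm{s}_{1}}U^{-}_{\mathrm{s}_{1}}\) and \(B\mathrm{s}_{1}B=B\mathrm{s}_{1}U^{-}_{\mathrm{s}_{1}}\).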
Proof of Proposition 1.10.: Clearly, \([-]\) is well-defined, \(g_{-}\circ[-]=\mathsf{id}\). To show equivalences. "LHS \(\Rightarrow\) Middle": Say, \(xx^{\prime}\in B\widetilde{w}B\), \(\widetilde{w}\in S_{n}\), so \(\beta_{[xx^{\prime}]}=[\widetilde{w}]\). If \([x]\circ[x^{\prime}]=[xx^{\prime}]\), then
\[[\widetilde{w}]=\beta_{[xx^{\prime}]}=\beta_{[x]}\circ\beta_{[x^{\prime}]}=[w ]\circ[w^{\prime}],\quad\widetilde{w}=\mathrm{s}(\beta_{[xx^{\prime}]})= \mathrm{s}(\beta_{[x]})\circ\mathrm{s}(\beta_{[x^{\prime}]})=ww^{\prime}.\]
Thus, \(\ell(ww^{\prime})=\ell([ww^{\prime}])=\ell([w]\circ[w^{\prime}])=\ell([w])+ \ell([w^{\prime}])=\ell(w)+\ell(w^{\prime})\), as desired.
"Middle \(\Rightarrow\) RHS": Denote
\[(a^{\prime},b^{\prime}):=(w^{\prime}(a),w^{\prime}(b)),\quad(a^{\prime\prime },b^{\prime\prime}):=(w(a^{\prime}),w(b^{\prime})),\quad 1\leq a<b\leq n;\]
\[I_{+-}:=\{(a,b)\in\mathcal{I}:a^{\prime}<b^{\prime},a^{\prime\prime}>b^{\prime \prime}\};\quad I_{-+}:=\{(a,b)\in\mathcal{I}:a^{\prime}>b^{\prime},a^{\prime \prime}>b^{\prime\prime}\}.\]
Observe that \(\mathrm{inv}(ww^{\prime})=I_{+-}\sqcup I_{-+}\) and \(w^{\prime}\) induces bijections
\[w^{\prime}:I_{+-}\stackrel{{\simeq}}{{\to}}\mathrm{noinv}(w^{\prime-1})\cap\mathrm{inv}(w);\;\;R\circ w^{\prime}:I_{-+}\stackrel{{\simeq}}{{\to}}\mathrm{inv}(w^{\prime-1})\cap\mathrm{noinv}(w),\;\;R(i,j):=(j,i);\] \[w^{\prime}\sqcup R\circ w^{\prime}:\mathrm{inv}(ww^{\prime})=I_{+-}\sqcup I_{-+}\stackrel{{\simeq}}{{\to}}\mathrm{noinv}(w^{\prime-1})\cap\mathrm{inv}(w)\sqcup\mathrm{inv}(w^{\prime-1})\cap\mathrm{noinv}(w).\]
Hence,
\[\ell(ww^{\prime})=|\mathrm{noinv}(w^{\prime-1})\cap\mathrm{inv}(w)|+|\mathrm{ inv}(w^{\prime-1})\cap\mathrm{noinv}(w)|\leq|\mathrm{inv}(w)|+|\mathrm{inv}(w^{ \prime-1})|=\ell(w)+\ell(w^{\prime}),\]
with equality if and only if \(\operatorname{inv}(w)\cap\operatorname{inv}(w^{\prime-1})=\varnothing\). Then, \(I_{+-}=w^{\prime-1}(\operatorname{inv}(w))\), and \(I_{-+}=(R\circ w^{\prime})^{-1}(\operatorname{inv}(w^{\prime-1}))=\operatorname{inv}(w^{\prime})\). Hence, \(\operatorname{inv}(ww^{\prime})=w^{\prime-1}(\operatorname{inv}(w))\sqcup\operatorname{inv}(w^{\prime})\), as desired.
"RHS \(\Rightarrow\) Middle" is clear, so RHS \(\Leftrightarrow\) Middle.
"Middle \(\Rightarrow\) LHS": By above, \(\operatorname{noinv}(w)\cup\operatorname{noinv}(w^{\prime-1})=\mathcal{I}\). By Lemma 1.11, we get a surjection
\[m:U^{+}_{w}\times U^{+}_{w^{\prime-1}}\to U:(u_{1},u_{2})\mapsto u_{1}u_{2}.\]
Write \(x=b_{1}wb_{2}\), \(x^{\prime}=b^{\prime}_{1}w^{\prime}b^{\prime}_{2}\); by (1.3.6), we may assume \(b_{2}\in U^{-}_{w}\), \(b^{\prime}_{1}\in U^{-}_{w^{\prime-1}}\). By the above, we may write \(b_{2}b^{\prime}_{1}=u_{1}u_{2},\ u_{1}\in U^{+}_{w},u_{2}\in U^{+}_{w^{\prime-1}}\). So, \(xx^{\prime}=(b_{1}wu_{1}w^{-1})ww^{\prime}(w^{\prime-1}u_{2}w^{\prime}b^{\prime}_{2})\in Bww^{\prime}B\), and
\[[xx^{\prime}] = [b_{1}wu_{1}w^{-1}]\circ[ww^{\prime}]\circ[w^{\prime-1}u_{2}w^{ \prime}b^{\prime}_{2}]\in\operatorname{BD}_{n}\quad\text{(by definition)}\] \[= [b_{1}]\circ[wu_{1}w^{-1}]\circ[w]\circ[w^{\prime}]\circ[w^{ \prime-1}u_{2}w^{\prime}]\circ[b^{\prime}_{2}]\quad\text{(by (1.3.2), (1.3.3))}\] \[= [b_{1}]\circ([w]\circ[u_{1}])\circ([u_{2}]\circ[w^{\prime}]) \circ[b^{\prime}_{2}]\quad\text{(by elementary moves as in Figure 1.3.3.(8))}\] \[= [b_{1}]\circ[w]\circ[b_{2}]\circ[b^{\prime}_{1}]\circ[w^{\prime}] \circ[b^{\prime}_{2}]=[x]\circ[x^{\prime}].\quad\text{(by (1.3.3))}\]
It remains to show the decompositions. By Lemma 1.11, we get an isomorphism
\[m:U_{w^{\prime-1}(\operatorname{inv}(w))}\times U_{\operatorname{inv}(w^{ \prime})}=w^{\prime-1}U^{-}_{w}w^{\prime}\times U^{-}_{w^{\prime}}\to U_{ \operatorname{inv}(ww^{\prime})}=U^{-}_{ww^{\prime}}:(u_{1},u_{2})\mapsto u_{ 1}u_{2}\]
So is \(m:U^{-}_{w^{\prime}}\times w^{\prime-1}U^{-}_{w}w^{\prime}\to U^{-}_{ww^{ \prime}}\). Then, \(U^{-}_{ww^{\prime}}=(w^{\prime-1}U^{-}_{w}w^{\prime})U^{-}_{w^{\prime}}=U^{-}_ {w^{\prime}}(w^{\prime-1}U^{-}_{w}w^{\prime})\). Also, the same result applies to \((ww^{\prime})^{-1}=w^{\prime-1}w^{-1}\). Altogether, we get unique decompositions
\[Bww^{\prime}B=Bww^{\prime}U^{-}_{ww^{\prime}}=BwU^{-}_{ww^{\prime}}U^{-}_{w^{ \prime}}=U^{-}_{w^{-1}}wBw^{\prime}U^{-}_{w^{\prime}}=U^{-}_{w^{-1}}wU^{-}_{w^ {\prime-1}}w^{\prime}B=U^{-}_{w^{\prime-1}w^{-1}}ww^{\prime}B.\]
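As an illustration of the additivity condition: for \(n=3\), \(w=\mathrm{s}_{1}\), \(w^{\prime}=\mathrm{s}_{2}\), we have \(\ell(ww^{\prime})=2=\ell(w)+\ell(w^{\prime})\), and indeed \(\operatorname{inv}(ww^{\prime})=\{(1,3),(2,3)\}=w^{\prime-1}(\{(1,2)\})\sqcup\{(2,3)\}=w^{\prime-1}(\operatorname{inv}(w))\sqcup\operatorname{inv}(w^{\prime})\); whereas for \(w=w^{\prime}=\mathrm{s}_{1}\) the condition fails, since \(\ell(\mathrm{s}_{1}\mathrm{s}_{1})=0\neq 2\).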
Back to character varieties. Let's reinterpret \(M^{\prime}_{B}(\widetilde{w})\) (see (1.2.7)) via braid matrix diagrams.
First, we set up some notations. We have seen (unique) decompositions:
\[U=U^{+}_{w}U^{-}_{w}:u=L^{+}_{w}(u)L^{-}_{w}(u),\quad B=TU=(TU^{+}_{w})U^{-} _{w}:b=D(b)b_{R}=L^{+}_{w}(b)L^{-}_{w}(b);\] \[U=U^{-}_{w}U^{+}_{w}:u=R^{-}_{w}(u)R^{+}_{w}(u),\quad B=UT=(U^{-}_ {w})(U^{+}_{w}T):b=b_{L}D(b)=R^{-}_{w}(b)R^{+}_{w}(b). \tag{1.3.7}\]
Similarly, each parabolic \(P_{i}\subset G\) has decompositions
\[P_{i}=N_{i}Z(C_{i})=Z(C_{i})N_{i}:p_{i}=L_{i}(p_{i})D_{i}(p_{i})=D_{i}(p_{i})R _{i}(p_{i}),\quad N_{i}:=P_{i}\cap U. \tag{1.3.8}\]
As the shortest representative of \(w_{i}\in W/W(C_{i})\), \(\dot{w}_{i}\) gives
\[U^{-}_{\dot{w}_{i}}\subset N_{i},\quad Z(C_{i})\cap U\subset U^{+}_{\dot{w}_{i }}.\]
Thus, by Lemma 1.11, we obtain decompositions
\[N_{i}=U^{-}_{\dot{w}_{i}}(U^{+}_{\dot{w}_{i}}\cap N_{i})=(U^{+}_{\dot{w}_{i}} \cap N_{i})U^{-}_{\dot{w}_{i}};\ U^{+}_{\dot{w}_{i}}=(U^{+}_{\dot{w}_{i}}\cap N _{i})(Z(C_{i})\cap U)=(Z(C_{i})\cap U)(U^{+}_{\dot{w}_{i}}\cap N_{i}). \tag{1.3.9}\]
Now, we reinterpret \(M^{\prime}_{B}(\widetilde{w})\). To begin with, we have a canonical isomorphism
\[U^{-}_{\dot{w}_{i}^{-1}}\times N_{i}\times Z(C_{i})\xrightarrow{\cong}U^{-}_{\dot{w}_{i}^{-1}}\dot{w}_{i}P_{i}=B\dot{w}_{i}P_{i}:(\nu_{i},n_{i},z_{i})\mapsto x_{i}=\nu_{i}\dot{w}_{i}n_{i}z_{i}. \tag{1.3.10}\]
Then, \(x_{i}C_{i}x_{i}^{-1}=\nu_{i}\dot{w}_{i}n_{i}C_{i}n_{i}^{-1}\dot{w}_{i}^{-1}\nu_{i}^{-1}=\nu_{i}\dot{w}_{i}n_{i}^{\prime}C_{i}\dot{w}_{i}^{-1}\nu_{i}^{-1}\). Here, we use the isomorphism
\[N_{i}\xrightarrow{\cong}N_{i}:n_{i}\mapsto n_{i}^{\prime}=n_{i}C_{i}n_{i}^{-1}C _{i}^{-1}. \tag{1.3.11}\]
Recall that for \(b\in B\), \([b]\in\underline{\text{FBD}}_{n}\) (see (1.3.3)), \(\beta_{[b]}=\text{id}_{n}\in\text{Br}_{n}^{+}\) and \(g_{[b]}=b\). Define
\[[x_{i}C_{i}x_{i}^{-1}]^{\prime}:=[\nu_{i}]\circ[\dot{w}_{i}]\circ[n_{i}^{ \prime}]\circ[C_{i}]\circ[\dot{w}_{i}^{-1}]\circ[\nu_{i}^{-1}]=[\nu_{i}^{-1}| \dot{w}_{i}^{-1}|C_{i}|n_{i}^{\prime}|\dot{w}_{i}|\nu_{i}]\in\underline{\text{ FBD}}_{n}. \tag{1.3.12}\]
Recall that \(A_{j}\in B\tau_{j}B\), with Proposition 1.10 in mind, define \([\text{M}_{\widetilde{w}}]=[\text{M}_{\widetilde{w}}((A_{j})_{j},(x_{i})_{i}, u_{k})]\) by
\[[\text{M}_{\widetilde{w}}]:=\prod_{j=1}^{g}([A_{2j-1}]\circ[A_{2j}]\circ[A_{ 2j-1}^{-1}]\circ[A_{2j}^{-1}])\circ\prod_{i=1}^{k-1}([x_{i}C_{i}x_{i}^{-1}]^{ \prime})\circ[u_{k}C_{k}]\in\underline{\text{FBD}}_{n}. \tag{1.3.13}\]
Then by (1.2.7) and (1.2.4), the defining equation for \(M_{B}^{\prime}(\widetilde{w})\) reads
\[[\mathrm{M}_{\widetilde{w}}]=[\mathrm{M}_{\widetilde{w}}((A_{j})_{j=1}^{2g},(x_{i})_{i=1}^{k-1},u_{k})]\stackrel{{\mathfrak{w}}}{{\sim}}\mathsf{id}_{n}\in\underline{\mathrm{FBD}}_{n}, \tag{1.3.14}\]
where \([\text{M}_{\widetilde{w}}]\) is a **bmdp** (with varying coefficients) but with fixed _shape_:
\[\beta(\widetilde{w}):=\beta_{[\text{M}_{\widetilde{w}}]}=\prod_{j=1}^{g}([\tau_{2j-1}]\circ[\tau_{2j}]\circ[\tau_{2j-1}^{-1}]\circ[\tau_{2j}^{-1}])\circ\prod_{i=1}^{k-1}([\dot{w}_{i}]\circ[\dot{w}_{i}^{-1}])\in\text{FBr}_{n}^{+}. \tag{1.3.15}\]
The next idea is to canonicalize \([\text{M}_{\widetilde{w}}]\) by diagram calculus: push every handleslide or scaling in \([\text{M}_{\widetilde{w}}]=[C_{k}|u_{k}|[x_{k-1}C_{k-1}x_{k-1}^{-1}]^{\prime} |\cdots|A_{2}^{-1}|A_{1}^{-1}|A_{2}|A_{1}]\) via elementary moves, to the right as far as possible. Later on, we'll see this leads to braid varieties.
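For instance, for \(g=0\), \(n=2\), \(k=4\), with all \(C_{i}\) regular semisimple and \(\dot{w}_{1}=\dot{w}_{2}=\dot{w}_{3}=\mathrm{s}_{1}\), the shape is \(\beta(\widetilde{w})=\prod_{i=1}^{3}([\mathrm{s}_{1}]\circ[\mathrm{s}_{1}^{-1}])=\sigma_{1}^{6}\in\mathrm{FBr}_{2}^{+}\), of length \(\ell(\beta(\widetilde{w}))=6\).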
### Braid varieties
As already mentioned above, we now review some basics on braid varieties.
**Definition 1.12**.: The _braid matrix (resp._ **bmdp**_) with coefficient \(\epsilon\in\mathbb{K}\) associated to \(\sigma_{k}\) is
\[\text{B}_{k}(\epsilon):=\text{s}_{k}\text{H}_{k}(\epsilon)\in G;\quad[\text{B} _{k}(\epsilon)]=\sigma_{k}\circ[\text{H}_{k}(\epsilon)]\in\text{FBD}_{n}\ \ (\text{Figure \ref{fig:braid_matrix}}). \tag{1.4.1}\]
For \(\beta=\sigma_{i_{\ell}}\cdots\sigma_{i_{1}}\in\text{FBr}_{n}^{+}\), and \(\vec{\epsilon}=(\epsilon_{i})_{i=\ell}^{1}\in\mathbb{A}^{\ell}\), define
\[\text{B}_{\beta}(\vec{\epsilon}):=\text{B}_{i_{\ell}}(\epsilon_{\ell})\cdots \text{B}_{i_{1}}(\epsilon_{1})\in G;\quad[\text{B}_{\beta}(\vec{\epsilon})]^{ \prime}:=[\text{B}_{i_{\ell}}(\epsilon_{\ell})]\circ\cdots\circ[\text{B}_{i_ {1}}(\epsilon_{1})]\in\text{FBD}_{n}. \tag{1.4.2}\]
As an easy application of diagram calculus of matrices, we obtain the following:
**Lemma 1.13** (Braid relations for braid matrices).: _We have_
* \([\text{B}_{i}(\epsilon_{1})]\circ[\text{B}_{j}(\epsilon_{2})]=[\text{B}_{j}( \epsilon_{2})]\circ[\text{B}_{i}(\epsilon_{1})]\in\text{BD}_{n},\ \ |i-j|>1,\epsilon_{\bullet}\in\mathbb{K};\)__
* \([\text{B}_{i}(\epsilon_{1})]\circ[\text{B}_{i+1}(\epsilon_{2})]\circ[\text{B}_{i }(\epsilon_{3})]=[\text{B}_{i+1}(\epsilon_{3})]\circ[\text{B}_{i}(\epsilon_{2}- \epsilon_{3}\epsilon_{1})]\circ[\text{B}_{i+1}(\epsilon_{1})]\in\text{BD}_{n}, \epsilon_{\bullet}\in\mathbb{K}.\)__
Figure 1.4.1. The braid matrix diagram with coefficient \(\epsilon\) associated to \(\sigma_{k}\).
Proof.: \((a)\) is clear. We prove \((b)\) by Figure 1.4.2: \((i)\) is a composition of elementary moves Figure 1.3.3 (8) and a trivial move switching \(\epsilon_{1},\epsilon_{2}\); \((ii)\) is a composition of an elementary move Figure 1.3.3 (7), a trivial move switching \(-\epsilon_{3}\epsilon_{1},\epsilon_{3}\), and an elementary move Figure 1.3.3 (5); \((iii)\) is a braid move; \((iv)\) is a composition of elementary moves Figure 1.3.3 (8).
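As an independent sanity check at the level of matrices (reading \(\mathrm{H}_{k}(\epsilon)\) as \(\mathrm{H}_{k,k+1}(\epsilon)\), so that \(\mathrm{B}_{k}(\epsilon)\) acts as \(\left(\begin{smallmatrix}0&1\\ 1&\epsilon\end{smallmatrix}\right)\) on the strands \(k,k+1\)): applying \(g_{-}\) for \(n=3\), \(i=1\), both sides of \((b)\) give
\[\mathrm{B}_{1}(\epsilon_{1})\mathrm{B}_{2}(\epsilon_{2})\mathrm{B}_{1}(\epsilon_{3})=\mathrm{B}_{2}(\epsilon_{3})\mathrm{B}_{1}(\epsilon_{2}-\epsilon_{3}\epsilon_{1})\mathrm{B}_{2}(\epsilon_{1})=\begin{pmatrix}0&0&1\\ 0&1&\epsilon_{1}\\ 1&\epsilon_{3}&\epsilon_{2}\end{pmatrix},\]
consistent with the stronger identity in \(\mathrm{BD}_{n}\).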
**Definition 1.14** (Braid varieties).: Let \(\beta=\sigma_{i_{\ell}}\cdots\sigma_{i_{1}}\in\mathrm{FBr}_{n}^{+}\). For each \(1\leq j\leq\ell\), denote
\[f_{j}:\mathbb{A}^{j}\to G:(\epsilon_{j},\cdots,\epsilon_{1})\mapsto\mathrm{B} _{i_{j}}(\epsilon_{j})\cdots\mathrm{B}_{i_{1}}(\epsilon_{1}). \tag{1.4.3}\]
The _(resp. restricted) braid variety_ associated to \(\beta\) is a closed subvariety of \(\mathbb{A}^{\ell}\):
\[X(\beta):=f_{\ell}^{-1}(B);\quad\text{resp. }X(\beta,C):=\mu\mathfrak{mon}^{-1} (C),C\in T,\ \ \mu\mathfrak{mon}:=D\circ f_{\ell}:X(\beta)\to B\to T. \tag{1.4.4}\]
By Lemma 1.13, \(X(\beta)\) (resp. \(X(\beta,C)\)) depends only on \(\beta\in\mathrm{Br}_{n}^{+}\), up to a canonical isomorphism.
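For instance (with the same reading \(\mathrm{B}_{1}(\epsilon)=\left(\begin{smallmatrix}0&1\\ 1&\epsilon\end{smallmatrix}\right)\) as above), for \(n=2\) and \(\beta=\sigma_{1}^{2}\):
\[f_{2}(\epsilon_{2},\epsilon_{1})=\mathrm{B}_{1}(\epsilon_{2})\mathrm{B}_{1}(\epsilon_{1})=\begin{pmatrix}1&\epsilon_{1}\\ \epsilon_{2}&1+\epsilon_{2}\epsilon_{1}\end{pmatrix}\in B\iff\epsilon_{2}=0,\]
so \(X(\sigma_{1}^{2})\cong\mathbb{A}^{1}\), with \(\mu\mathfrak{mon}\equiv\mathsf{id}\).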
\(b\in B\) _acts_ on \(X(\beta)\ni\vec{\epsilon}\) as follows: \(b\cdot\vec{\epsilon}=(\widetilde{\epsilon}_{\ell},\cdots,\widetilde{\epsilon}_{1})\) is uniquely determined by
\[[\mathrm{B}_{i_{\ell}}(\epsilon_{\ell})]\circ\cdots\circ[\mathrm{B}_{i_{1}}( \epsilon_{1})]\circ[b^{-1}]=[\widetilde{b}_{\ell}]\circ[\mathrm{B}_{i_{\ell}} (\widetilde{\epsilon}_{\ell})]\circ\cdots\circ[\mathrm{B}_{i_{1}}(\widetilde {\epsilon}_{1})]\in\underline{\mathrm{FBD}}_{n},\widetilde{b}_{\ell}\in B. \tag{1.4.5}\]
That is, in \([b^{-1}|\mathrm{B}_{i_{1}}(\epsilon_{1})|\cdots|\mathrm{B}_{i_{\ell}}( \epsilon_{\ell})]\), push \([b^{-1}]\) by elementary moves (Figure 1.3.3) to the right as far as possible, the outcome is \([\mathrm{B}_{i_{1}}(\widetilde{\epsilon}_{1})|\cdots|\mathrm{B}_{i_{\ell}}( \widetilde{\epsilon}_{\ell})|\widetilde{b}_{\ell}]\).
**Convention \(4\)**: If the context is clear, for a group action of \(h\in H\) on any variety \(Y\ni y\), denote
\[\widehat{y}:=h\cdot y. \tag{1.4.6}\]
Next, we recall the cell decomposition of braid varieties. Fix \(\beta=\sigma_{i_{\ell}}\cdots\sigma_{i_{1}}\in\mathrm{FBr}_{n}^{+}\). Define
\[p:\mathbb{A}^{\ell}\to W^{\ell+1}:\vec{\epsilon}\mapsto p(\vec{\epsilon})=(p_{\ell},\cdots,p_{0}),\quad\mathrm{B}_{i_{m}}(\epsilon_{m})\cdots\mathrm{B}_{i_{1}}(\epsilon_{1})\in Bp_{m}B. \tag{1.4.7}\]
Alternatively by Proposition 1.10: \([\mathrm{B}_{i_{m}}(\epsilon_{m})\cdots\mathrm{B}_{i_{1}}(\epsilon_{1})]\in[ B]\circ[p_{m}]\circ[B]\subset\mathrm{BD}_{n}\).
**Definition 1.15** ([54, SS.5.4]).: Let \(p=(p_{\ell},\cdots,p_{0})\in W^{\ell+1}\). If for any _position_\(1\leq m\leq\ell\):
\[p_{m}=\left\{\begin{array}{cl}\mathrm{s}_{i_{m}}p_{m-1}\text{ (go-up)}&\text{ if }\mathrm{s}_{i_{m}}p_{m-1}>p_{m-1},\\ \mathrm{s}_{i_{m}}p_{m-1}\text{ (go-down) or }p_{m-1}\text{ (stay)}&\text{ if }\mathrm{s}_{i_{m}}p_{m-1}<p_{m-1}, \end{array}\right.\]
and \(p_{0}=p_{\ell}=\mathsf{id}\), we say \(p\) is a _walk_ of \(\beta\). Denote:
\[U_{p}:=\{\text{go-up's}\},\ \ S_{p}:=\{\text{stays}\},\ \ D_{p}:=\{\text{go- down's}\}.\Rightarrow[\ell]=\{1,\cdots,\ell\}=U_{p}\sqcup D_{p} \sqcup S_{p}.\]
Figure 1.4.2. Braid relation for braid matrix diagrams with coefficients.
By a length count: \(|U_{p}|-|D_{p}|=\ell(p_{\ell})-\ell(p_{0})=0\). Denote \(\mathcal{W}(\beta):=\{\text{walks of $\beta$}\}\).
For any \(1\leq m\leq\ell=\ell(\beta)\), denote
\[\mathrm{s}_{<m}(\beta):=\prod_{q=m-1}^{1}\mathrm{s}_{i_{q}},\quad\mathrm{s}_{> m}(\beta):=\prod_{q=\ell}^{m+1}\mathrm{s}_{i_{q}}. \tag{1.4.8}\]
We use **Convention** 4. Recall that \(x^{y}=yxy^{-1}\) in \(G\), and we write \(t=\mathrm{Diag}(t_{1},\cdots,t_{n})\in T\).
**Proposition 1.16**.: _We have \(B\)-equivariant decompositions into locally closed subvarieties:_
\[X(\beta)=\sqcup_{p\in\mathcal{W}(\beta)}X_{p}(\beta),\quad \varphi:X_{p}(\beta):=X_{p}^{\ell}(\beta)\cong(\mathbb{K}^{\times})^{|S_{p}|} \times\mathbb{K}^{|U_{p}|}:\vec{\varepsilon}\mapsto(\epsilon^{\prime}_{m})_{m \in S_{p}\sqcup U_{p}},\] \[X(\beta,C)=\sqcup_{p\in\mathcal{W}(\beta)}X_{p}(\beta,C),\quad X _{p}(\beta,C):=X_{p}(\beta)\cap X(\beta,C), \tag{1.4.9}\]
_such that_
1. _The inherited action of_ \(b\in B\) _on_ \((\epsilon^{\prime}_{m})_{m\in S_{p}\sqcup U_{p}}\in(\mathbb{K}^{\times})^{|S_{p}|}\times\mathbb{K}^{|U_{p}|}\) _satisfies:_ 1. _If_ \(b=u\in U\subset B\)_, then_ \(\widehat{\epsilon}^{\prime}_{m}=\epsilon^{\prime}_{m},\forall m\in S_{p}\)_._ 2. _If_ \(b=t\in T\subset B\)_, then_ (1.4.10) \[\widehat{\epsilon}^{\prime}_{m}=(t^{p_{m-1}})_{i_{m}}(t^{p_{m-1}})_{i_{m}+1}^{-1}\epsilon^{\prime}_{m},\forall m\in S_{p}\sqcup U_{p}.\]
2. \(\mu\mathfrak{mon}:X_{p}(\beta)\to T\) _is identified with_ (1.4.11) \[\mu\mathfrak{mon}((\epsilon^{\prime}_{m})_{m\in S_{p}\sqcup U_{p}})=\mu\mathfrak{mon}((\epsilon^{\prime}_{m})_{m\in S_{p}})=\prod_{m\in S_{p}}(\mathrm{K}_{i_{m}}(-\epsilon^{\prime-1}_{m})\mathrm{K}_{i_{m}+1}(\epsilon^{\prime}_{m}))^{\mathrm{s}_{>m}(\beta)}.\]
_In particular, \(\det(\mu\mathfrak{mon}((\epsilon^{\prime}_{m})_{m\in S_{p}\sqcup U_{p}}))=(-1)^{|S_{p}|}\)._
Proof.: By diagram calculus, the proof is straightforward. See Appendix A for the details.
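To illustrate, take \(n=2\), \(\beta=\sigma_{1}^{3}\) (again reading \(\mathrm{B}_{1}(\epsilon)=\left(\begin{smallmatrix}0&1\\ 1&\epsilon\end{smallmatrix}\right)\)). The unique walk is \(p=(\mathsf{id},\mathrm{s}_{1},\mathrm{s}_{1},\mathsf{id})\), with \(U_{p}=\{1\}\), \(S_{p}=\{2\}\), \(D_{p}=\{3\}\). Directly, \(\mathrm{B}_{1}(\epsilon_{3})\mathrm{B}_{1}(\epsilon_{2})\mathrm{B}_{1}(\epsilon_{1})\in B\) iff \(1+\epsilon_{3}\epsilon_{2}=0\), so \(X(\sigma_{1}^{3})\cong\mathbb{K}^{\times}\times\mathbb{K}\), and \(\mu\mathfrak{mon}=\mathrm{Diag}(\epsilon_{2},-\epsilon_{2}^{-1})\), of determinant \((-1)^{|S_{p}|}=-1\), consistent with Proposition 1.16.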
## 2. Cell decomposition of character varieties
In this section, we prove a strong form (Theorem 2.10) of A. Mellit's cell decomposition for very generic character varieties [54]. This will be used to prove our main result, Theorem 3.15.
Recall that we have obtained a decomposition (see (1.2.8),(1.2.7)):
\[\mathcal{M}_{\mu}=\sqcup_{\widetilde{w}\in W^{2g}\times\prod_{i=1}^{k-1}W/W(C _{i})}\mathcal{M}_{\mu}(\widetilde{w}),\quad\mathcal{M}_{\mu}(\widetilde{w}) =M^{\prime}_{B}(\widetilde{w})/PB_{\mathrm{par}},\]
and the defining equation of \(M^{\prime}_{B}(\widetilde{w})\) has been interpreted via braid matrix diagrams as (1.3.14):
\[[\mathrm{M}_{\widetilde{w}}]=\prod_{j=1}^{g}([A_{2j-1}]\circ[A_{2j}]\circ[A_{2j-1}^{-1}]\circ[A_{2j}^{-1}])\circ\prod_{i=1}^{k-1}([x_{i}C_{i}x_{i}^{-1}]^{\prime})\circ[u_{k}]\circ[C_{k}]\stackrel{{\mathfrak{w}}}{{\sim}}\mathsf{id}_{n}\in\underline{\mathrm{FBD}}_{n}.\]
To obtain the actual cell decomposition of \(\mathcal{M}_{\mu}\), we would like to decompose \(\mathcal{M}_{\mu}(\widetilde{w})\) further. This amounts to decomposing \(M^{\prime}_{B}(\widetilde{w})\) equivariantly. For that, as mentioned in the end of Section 1.3, the next step is to canonicalize \([\mathrm{M}_{\widetilde{w}}]\) by diagram calculus. This will be done in the next three subsections: Section 2.1 and Section 2.2 do the puncture and genus calculations respectively;
Section 2.3 combines these local calculations to describe \(M^{\prime}_{B}(\widetilde{w})\) via braid varieties. Finally, in Section 2.4, we prove a strong form (Theorem 2.10) of the cell decomposition for \(\mathcal{M}_{\mu}\).
To simplify our computations, we use the following **trick**.
* Fix an embedding \(S_{n}\hookrightarrow\operatorname{FBr}^{+}_{n}\subset\operatorname{FBD}_{n}\) lifting \([-]:S_{n}\to\operatorname{Br}^{+}_{n}\) (as a map of sets).
* Via Lemma 1.11, we also fix an embedding \([-]:B=U\rtimes T\hookrightarrow\operatorname{FBD}_{n}\) (not as a morphism of monoids) lifting \([-]:B\to\underline{\operatorname{FBD}}_{n}\).
**Definition/Proposition 2.1**.: _Fix \(\mathcal{S}\subset[N]=\{1,\dots,N\}\). Take \(Y_{1},\dots,Y_{N}\) such that: \(Y_{i}\subset B\) is a locally closed \(\mathbb{K}\)-subvariety for \(i\notin\mathcal{S}\), and \(Y_{i}\) is a single permutation\({}^{2}\) in \(S_{n}\), for \(i\in\mathcal{S}\)._
Footnote 2: By a slight abuse of notation, we identify \(w\in S_{n}\) with \(\{w\}\subset S_{n}\). Similarly, we identify \(b\in B\) with \(\{b\}\subset B\).
1. _For any_ \(\mathbb{K}\)_-variety_ \(Z\)_, define_ \[[Y_{1}]\circ\dots\circ[Y_{N}]\times Z:=\{[y_{1}]\circ\dots\circ[y_{N}]\in \operatorname{FBD}_{n}:y_{i}\in Y_{i}\}\times Z.\]
_As a \(\mathbb{K}\)-variety, \([Y_{1}]\circ\dots\circ[Y_{N}]\times Z\) is \(\prod_{i\notin\mathcal{S}}Y_{i}\times Z\). Define the obvious composition_
\[E:[Y_{1}]\circ\dots\circ[Y_{N}]\times Z\to[Y_{1}]\circ\dots\circ[Y_{N}] \hookrightarrow\operatorname{FBD}_{n}\to\underline{\operatorname{FBD}}_{n}.\]
2. _If_ \(\varphi:[Y_{1}]\circ\dots\circ[Y_{N}]\times Z\to[Y^{\prime}_{1}]\circ\dots \circ[Y^{\prime}_{M}]\times Z^{\prime}\) _is an isomorphism of_ \(\mathbb{K}\)_-varieties respecting_ \(E\)_, we say_ \(\varphi\) _is_ _elementary_. In this case, we write_ \[[Y_{1}]\circ\dots\circ[Y_{N}]\times Z\cong[Y^{\prime}_{1}]\circ\dots\circ[Y^{ \prime}_{M}]\times Z^{\prime}.\]
_In particular, any elementary isomorphism respects the maps_
\[g_{-}:[Y_{1}]\circ\dots\circ[Y_{N}]\times Z\to G=GL(n,\mathbb{K}):([y_{1}] \circ\dots\circ[y_{N}],z)\mapsto y_{1}\dots y_{N},\] \[\beta_{-}:[Y_{1}]\circ\dots\circ[Y_{N}]\times Z\to\operatorname{FBr }^{+}_{n}:([y_{1}]\circ\dots\circ[y_{N}],z)\mapsto\beta_{[y_{1}]}\circ\dots \circ\beta_{[y_{N}]}.\]
3. _Clearly, we have the following_ _elementary isomorphisms_:_
1. _Let_ \(Y,Y_{1},Y_{2}\subset B\)_. If the multiplication induces an isomorphism_ \(m:Y_{1}\times Y_{2}\xrightarrow{\cong}Y:(y_{1},y_{2})\mapsto y_{1}y_{2}\)_, then_ \([Y_{1}]\circ[Y_{2}]\cong[Y]\)_._
2. _If_ \(Y_{1}\subset Y_{2}\subset B\) _and_ \(Y_{2}\) _is a closed subgroup, then_ \([Y_{1}]\circ[Y_{2}]\cong[Y_{2}]\times Y_{1}\cong[Y_{2}]\circ[Y_{1}]\)_._
3. _If_ \(w\in S_{n}\) _and_ \(Y\subset U^{+}_{w}\)_, then_ \([w]\circ[Y]\cong[wYw^{-1}]\circ[w]\)_._
4. _For any_ \(C\in T\) _and any_ \(Y\subset B\)_, we have_ \([C]\circ[Y]\cong[CYC^{-1}]\circ[C]\)_._
### Diagram calculus for punctures
Back to the end of Section 1.3. Recall (1.3.10), (1.3.12):
\[U^{-}_{\dot{w}^{-1}_{i}}\times N_{i}\times Z(C_{i})\cong B\dot{w}_{i}P_{i}:(\nu_{i},n_{i},z_{i})\mapsto x_{i}=\nu_{i}\dot{w}_{i}n_{i}z_{i};\ \ N_{i}\cong N_{i}:n_{i}\mapsto n^{\prime}_{i}=n_{i}C_{i}n_{i}^{-1}C_{i}^{-1}.\] \[[x_{i}C_{i}x_{i}^{-1}]^{\prime}=[\nu_{i}]\circ[\dot{w}_{i}]\circ[n^{\prime}_{i}]\circ[C_{i}]\circ[\dot{w}_{i}^{-1}]\circ[\nu_{i}^{-1}]\in\underline{\operatorname{FBD}}_{n}.\]
Using the notations in (1.3.7), define \({}^{+}n^{\prime}_{i}:=R^{+}_{\dot{w}_{i}}(n^{\prime}_{i})\in U^{+}_{\dot{w}_{i} }\cap N_{i}\) by (1.3.9).
**Lemma 2.2**.: _For any \(D_{i+1}\in T\), we have a natural isomorphism of \(\mathbb{K}\)-varieties of the form_
\[B\dot{w}_{i}P_{i}\times U\xrightarrow{\simeq}U\times U^{-}_{\dot{w}_{i}}\times U ^{-}_{\dot{w}^{-1}_{i}}\times(U^{+}_{\dot{w}_{i}}\cap N_{i})\times Z(C_{i}):(x_ {i},u_{i+1})\mapsto(u_{i},\xi^{\prime}_{i},\xi_{i},{}^{+}n^{\prime}_{i},z_{i}) \tag{2.1.1}\]
_such that_
\[[x_{i}C_{i}x_{i}^{-1}]^{\prime}\circ[u_{i+1}]\circ[D_{i+1}]=[u_{i}]\circ[D_{i }]\circ[\dot{w}_{i}]\circ[\xi^{\prime}_{i}]\circ[\dot{w}^{-1}_{i}]\circ[\xi_{ i}]\in\underline{\mathrm{FBD}}_{n},\;\exists!\,D_{i}=C^{\dot{w}_{i}}_{i}D_{i+1}\in T. \tag{2.1.2}\]
_Moreover, the above equation uniquely determines \((u_{i},\xi^{\prime}_{i},\xi_{i})\in U\times U^{-}_{\dot{w}_{i}}\times U^{-}_{ \dot{w}^{-1}_{i}}\)._
**Note**: Up to a canonical isomorphism, we have
\[[\dot{w}_{i}]\circ[\xi^{\prime}_{i}]=[\mathrm{B}_{[\dot{w}_{i}]}(\xi^{\prime }_{i})]\in\underline{\mathrm{FBD}}_{n},\quad[\dot{w}^{-1}_{i}]\circ[\xi_{i}]= [\mathrm{B}_{[\dot{w}^{-1}_{i}]}(\xi_{i})]\in\underline{\mathrm{FBD}}_{n}.\]
This will pave the way to braid varieties.
Proof of Lemma 2.2.: Observe that we have an isomorphism of \(\mathbb{K}\)-varieties
\[B\dot{w}_{i}P_{i}\times U\xrightarrow{\simeq}U^{-}_{\dot{w}^{-1}_{i}}\times N_{i}\times Z(C_{i})\times U:(x_{i},u_{i+1})\mapsto(\nu_{i},n_{i},z_{i},u^{\prime}_{i+1}:=\nu^{-1}_{i}u_{i+1}).\]
such that \([x_{i}C_{i}x_{i}^{-1}]^{\prime}\circ[u_{i+1}]=[\nu_{i}]\circ[\dot{w}_{i}]\circ[n^{\prime}_{i}]\circ[C_{i}]\circ[\dot{w}^{-1}_{i}]\circ[u^{\prime}_{i+1}]\in\underline{\mathrm{FBD}}_{n}\). By our trick (Definition/Proposition 2.1), it suffices to compute \([U^{-}_{\dot{w}^{-1}_{i}}]\circ[\dot{w}_{i}]\circ[N_{i}]\circ[C_{i}]\circ[\dot{w}^{-1}_{i}]\circ[U]\circ[D_{i+1}]\ni[\nu_{i}]\circ[\dot{w}_{i}]\circ[n^{\prime}_{i}]\circ[C_{i}]\circ[\dot{w}^{-1}_{i}]\circ[u^{\prime}_{i+1}]\circ[D_{i+1}]\):
\[[U^{-}_{\dot{w}^{-1}_{i}}]\circ[\dot{w}_{i}]\circ[N_{i}]\circ[C_ {i}]\circ[\dot{w}^{-1}_{i}]\circ[U]\circ[D_{i+1}]\] \[\cong [U^{-}_{\dot{w}^{-1}_{i}}]\circ[\dot{w}_{i}]\circ[U^{-}_{\dot{w} _{i}}]\circ[U^{+}_{\dot{w}_{i}}\cap N_{i}]\circ[C_{i}]\circ[\dot{w}^{-1}_{i} ]\circ[U]\circ[D_{i+1}]\;(n^{\prime}_{i}={}^{-}n^{\prime}_{i}\cdot{}^{+}n^{ \prime}_{i})\] \[\cong [U^{-}_{\dot{w}^{-1}_{i}}]\circ[\dot{w}_{i}]\circ[U^{-}_{\dot{w} _{i}}]\circ[C_{i}]\circ[\dot{w}^{-1}_{i}]\circ[\dot{w}_{i}C^{-1}_{i}(U^{+}_{ \dot{w}_{i}}\cap N_{i})C_{i}\dot{w}^{-1}_{i}]\circ[U]\circ[D_{i+1}]\] \[\cong [U^{-}_{\dot{w}^{-1}_{i}}]\circ[\dot{w}_{i}]\circ[U^{-}_{\dot{w} _{i}}]\circ[C_{i}]\circ[\dot{w}^{-1}_{i}]\circ[U]\circ[D_{i+1}]\times(U^{+}_{ \dot{w}_{i}}\cap N_{i})\;(\text{direct factor}:{}^{+}n^{\prime}_{i}\in U^{+}_{ \dot{w}_{i}}\cap N_{i})\] \[\cong [U^{-}_{\dot{w}^{-1}_{i}}]\circ[\dot{w}_{i}]\circ[U^{-}_{\dot{w} _{i}}]\circ[C_{i}]\circ[\dot{w}^{-1}_{i}]\circ[U^{+}_{\dot{w}^{-1}_{i}}]\circ[ U^{-}_{\dot{w}^{-1}_{i}}]\circ[D_{i+1}]\times(U^{+}_{\dot{w}_{i}}\cap N_{i})\;(U=U^{+}_{ \dot{w}^{-1}_{i}}U^{-}_{\dot{w}^{-1}_{i}})\] \[\cong [U^{-}_{\dot{w}^{-1}_{i}}]\circ[\dot{w}_{i}]\circ[U^{-}_{\dot{w} _{i}}]\circ[U^{+}_{\dot{w}_{i}}]\circ[C_{i}]\circ[\dot{w}^{-1}_{i}]\circ[U^{-}_{ \dot{w}^{-1}_{i}}]\circ[D_{i+1}]\times(U^{+}_{\dot{w}_{i}}\cap N_{i})\] \[\cong [U^{-}_{\dot{w}^{-1}_{i}}]\circ[\dot{w}_{i}]\circ[U^{+}_{\dot{w} _{i}}]\circ[U^{-}_{\dot{w}_{i}}]\circ[C_{i}]\circ[\dot{w}^{-1}_{i}]\circ[U^{-}_{ \dot{w}^{-1}_{i}}]\circ[D_{i+1}]\times(U^{+}_{\dot{w}_{i}}\cap N_{i})\] \[\cong [U]\circ[D_{i}]\circ[\dot{w}_{i}]\circ[U^{-}_{\dot{w}_{i}}]\circ[U^{-}_{ \dot{w}_{i}}]\circ[\dot{w}^{-1}_{i}]\circ[U^{-}_{\dot{w}^{-1}_{i}}]\circ[U^{-}_{ \dot{w}^{-1}_{i}}]\times(U^{+}_{\dot{w}_{i}}\cap N_{i}).\]
The uniqueness part is clear by definition of \(\underline{\mathrm{FBD}}_{n}\). This finishes the proof.
The upshot is that Lemma 2.2 provides the inductive step for the diagram calculus for punctures. In our case, we have \(u_{k}\in U\); take \(D_{k}:=C_{k}\) and define the \(D_{i}\)'s inductively as above. Thus,
\[D_{1}=(\prod_{i=1}^{k-1}C^{\dot{w}_{i}}_{i})C_{k}\in T. \tag{2.1.3}\]
Altogether, the defining equation (1.3.14) for \(M^{\prime}_{B}(\vec{w})\) reduces to:
\[[\mathrm{M}_{\vec{w}}]=\prod_{j=1}^{g}([A_{2j-1}]\circ[A_{2j}]\circ[A_{2j-1}^{-1}]\circ[A_{2j}^{-1}])\circ[u_{1}]\circ[D_{1}]\circ\prod_{i=1}^{k-1}([\dot{w}_{i}\xi^{\prime}_{i}]\circ[\dot{w}_{i}^{-1}\xi_{i}])\stackrel{{\mathfrak{w}}}{{\sim}}\mathsf{id}_{n}. \tag{2.1.4}\]
For the diagram calculus for genera below, denote \(u^{g}:=u_{1}\in U,\quad D^{g}:=D_{1}\in T\).
### Diagram calculus for genera
For each \(1\leq j\leq g\), take the isomorphisms
\[U^{-}_{\tau_{2j-1}^{-1}}\times T\times U\cong B\tau_{2j-1}B:(\mu_ {2j-1},y_{2j-1},\eta_{2j-1})\mapsto A_{2j-1}=\mu_{2j-1}\tau_{2j-1}y_{2j-1}\eta _{2j-1},\] \[U\times T\times U^{-}_{\tau_{2j}}\cong B\tau_{2j}B:(\eta_{2j},y_ {2j},\mu_{2j})\mapsto A_{2j}=\eta_{2j}y_{2j}\tau_{2j}\mu_{2j}. \tag{2.2.1}\]
Here, recall that \(A_{m}\in B\tau_{m}B\), \(\forall 1\leq m\leq 2g\). By Proposition 1.10, we have
\[[A_{2j-1}]=[\mu_{2j-1}]\circ[\tau_{2j-1}]\circ[y_{2j-1}]\circ[ \eta_{2j-1}],\ [A_{2j}]=[\eta_{2j}]\circ[y_{2j}]\circ[\tau_{2j}]\circ[\mu_{2j}]\in \underline{\mathrm{FBD}}_{n},\] \[[A_{2j-1}^{-1}]=[\eta_{2j-1}^{-1}]\circ[y_{2j-1}^{-1}]\circ[\tau _{2j-1}^{-1}]\circ[\mu_{2j-1}^{-1}],\ [A_{2j}^{-1}]=[\mu_{2j}^{-1}]\circ[\tau_{2j}^{-1}]\circ[y_{2j}^{-1}]\circ[ \eta_{2j}^{-1}]\in\underline{\mathrm{FBD}}_{n}.\]
**Lemma 2.3**.: _For any \(D^{j}\in T\), we have a natural isomorphism of \(\mathbb{K}\)-varieties_
\[B\tau_{2j-1}B\times B\tau_{2j}B\times U\stackrel{{\simeq}}{{\to}}U\times T^{2}\times(U^{-}_{\tau_{2j-1}}\times U^{-}_{\tau_{2j}}\times U^{-}_{\tau_{2j-1}^{-1}}\times U^{-}_{\tau_{2j}^{-1}})\times(U^{+}_{\tau_{2j}^{-1}}\times U^{+}_{\tau_{2j-1}}),\] \[(A_{2j-1},A_{2j},u^{j})\mapsto(u^{j-1},y_{2j-1},y_{2j},\zeta^{\prime}_{2j-1},\zeta^{\prime}_{2j},\zeta_{2j-1},\zeta_{2j},{}^{+}n^{2j},{}^{+}n^{2j-1}), \tag{2.2.2}\]
_such that_
\[[A_{2j-1}]\circ[A_{2j}]\circ[A_{2j-1}^{-1}]\circ[A_{2j}^{-1}]\circ[u^{j}]\circ[D^{j}]\] \[=[u^{j-1}]\circ[D^{j-1}]\circ[\tau_{2j-1}]\circ[\zeta^{\prime}_{2j-1}]\circ[\tau_{2j}]\circ[\zeta^{\prime}_{2j}]\circ[\tau_{2j-1}^{-1}]\circ[\zeta_{2j-1}]\circ[\tau_{2j}^{-1}]\circ[\zeta_{2j}]\in\underline{\mathrm{FBD}}_{n}, \tag{2.2.3}\]
_for a unique \(D^{j-1}\in T\):_
\[D^{j-1}:=y_{2j-1}^{\tau_{2j-1}}y_{2j}^{\tau_{2j-1}}(y_{2j-1}^{-1})^{\tau_{2j- 1}\tau_{2j}}(y_{2j}^{-1})^{\tau_{2j-1}\tau_{2j}\tau_{2j-1}^{-1}\tau_{2j}^{-1}} (D^{j})^{\tau_{2j-1}\tau_{2j}\tau_{2j-1}^{-1}\tau_{2j}^{-1}}\in T. \tag{2.2.4}\]
_Moreover, (2.2.3) uniquely determines \((u^{j-1},\zeta^{\prime}_{2j-1},\zeta^{\prime}_{2j},\zeta_{2j-1},\zeta_{2j})\in U\times U^{-}_{\tau_{2j-1}}\times U^{-}_{\tau_{2j}}\times U^{-}_{\tau_{2j-1}^{-1}}\times U^{-}_{\tau_{2j}^{-1}}\)._
This provides the inductive step for the diagram calculus for genera. In our case, we have seen that \(u^{g}\in U\), take \(D^{g}=D_{1}\in T\), and define \(D^{j}\)'s inductively as above. Thus,
\[D^{0}((y_{j})_{j=1}^{2g})=\prod_{j=1}^{g}\bigl(y_{2j-1}^{\tau_{2j-1}}(y_{2j-1}^{-1})^{\tau_{2j-1}\tau_{2j}}y_{2j}^{\tau_{2j-1}}(y_{2j}^{-1})^{(\tau_{2j-1},\tau_{2j})}\bigr)^{\prod_{m=1}^{j-1}(\tau_{2m-1},\tau_{2m})}D_{1}^{\prod_{m=1}^{g}(\tau_{2m-1},\tau_{2m})}. \tag{2.2.5}\]
Altogether, the defining equation (1.3.14) for \(M^{\prime}_{B}(\vec{w})\), i.e. \([\mathrm{M}_{\vec{w}}]\stackrel{{\mathfrak{w}}}{{\sim}}\mathsf{ id}_{n}\in\underline{\mathrm{FBD}}_{n}\), reduces to:
\[[u^{0}D^{0}]\circ\prod_{j=1}^{g}([\tau_{2j-1}\zeta^{\prime}_{2j-1}]\circ[\tau _{2j}\zeta^{\prime}_{2j}]\circ[\tau_{2j-1}^{-1}\zeta_{2j-1}]\circ[\tau_{2j}^{-1} \zeta_{2j}])\circ\prod_{i=1}^{k-1}([\dot{w}_{i}\xi^{\prime}_{i}]\circ[\dot{w}_{i }^{-1}\xi_{i}])\stackrel{{\mathfrak{w}}}{{\sim}}\mathsf{id}_{n}.\]
Proof of Lemma 2.3.: As one could expect, the proof is done by diagram calculus.
**Step \(1\)**. Denote \(u^{\prime}{}^{j}:=\eta_{2j}^{-1}u^{j}\in U,\eta_{2j}^{\prime}:=\eta_{2j-1}\eta_{2 j}\in U,\eta_{2j-1}^{\prime}:=\eta_{2j-1}\mu_{2j}^{-1}\in U\). So, \(\eta_{2j-1}^{\prime-1}=\mu_{2j}\eta_{2j-1}^{-1}\). Then \((A_{2j-1},A_{2j},u^{j})\mapsto(\mu_{2j-1},y_{2j-1},\eta_{2j-1}^{\prime},\eta_{ 2j}^{\prime},y_{2j},\mu_{2j},u^{\prime}{}^{j})\) defines an isomorphism
\[B\tau_{2j-1}B\times B\tau_{2j}B\times U\xrightarrow{\cong}U_{\tau_{2j-1}^{-1}}^ {-}\times T\times U^{2}\times T\times U_{\tau_{2j}}^{-}\times U\]
such that we obtain an identity in \(\underline{\text{FBD}}_{n}\):
\[[A_{2j-1}]\circ[A_{2j}]\circ[A_{2j-1}^{-1}]\circ[A_{2j}^{-1}] \circ[u^{j}]\] \[= [\mu_{2j-1}]\circ[\tau_{2j-1}]\circ[y_{2j-1}]\circ[\eta_{2j}^{ \prime}]\circ[y_{2j}]\circ[\tau_{2j}]\] \[\circ[\eta_{2j-1}^{\prime-1}]\circ[y_{2j-1}^{-1}]\circ[\tau_{2j-1 }^{-1}]\circ[\mu_{2j-1}^{-1}]\circ[\mu_{2j}^{-1}]\circ[\tau_{2j}^{-1}]\circ[y _{2j}^{-1}]\circ[u^{\prime j}]. \tag{2.2.6}\]
**Note**: The variable \(\mu_{2j}^{-1}\in U_{\tau_{2j}}^{-}\) becomes free (appears only once).
**Step \(2\)**. We firstly compute \([\mu_{2j-1}^{-1}]\circ[\mu_{2j}^{-1}]\circ[\tau_{2j}^{-1}]\circ[y_{2j}^{-1}] \circ[u^{\prime j}]\). By Definition/Proposition 2.1,
\[[\mu_{2j-1}^{-1}]\circ[U_{\tau_{2j}}^{-}]\circ[\tau_{2j}^{-1}] \circ[y_{2j}^{-1}]\circ[U]\cong[\mu_{2j-1}^{-1}]\circ[U_{\tau_{2j}}^{-}]\circ[ \tau_{2j}^{-1}]\circ[U]\circ[y_{2j}^{-1}]\] \[\cong [\mu_{2j-1}^{-1}]\circ[U_{\tau_{2j}}^{-}]\circ[\tau_{2j}^{-1}] \circ[U_{\tau_{2j}^{-1}}^{-}]\circ[y_{2j}^{-1}]\cong[\mu_{2j-1}^{-1}]\circ[U] \circ[\tau_{2j}^{-1}]\circ[U_{\tau_{2j}^{-1}}^{-}]\circ[y_{2j}^{-1}]\] \[\cong [U]\circ[\tau_{2j}^{-1}]\circ[U_{\tau_{2j}^{-1}}^{-}]\circ[y_{2j}^{ -1}].\]
This means we obtain an isomorphism of \(\mathbb{K}\)-varieties
\[U_{\tau_{2j-1}^{-1}}^{-}\times T\times U^{2}\times T\times U_{ \tau_{2j}}^{-}\times U\xrightarrow{\cong}U_{\tau_{2j-1}^{-1}}^{-}\times T \times U^{3}\times T\times U_{\tau_{2j}^{-1}}^{-},\] \[(\mu_{2j-1},y_{2j-1},\eta_{2j-1}^{\prime},\eta_{2j}^{\prime},y_{ 2j},\mu_{2j},u^{\prime j})\mapsto(\mu_{2j-1},y_{2j-1},\eta_{2j-1}^{\prime}, \eta_{2j}^{\prime},u_{3}^{j},y_{2j},L_{1}^{-}), \tag{2.2.7}\]
such that \([\mu_{2j-1}^{-1}]\circ[\mu_{2j}^{-1}]\circ[\tau_{2j}^{-1}]\circ[y_{2j}^{-1}] \circ[u^{\prime j}]=[u_{3}^{j}]\circ[\tau_{2j}^{-1}]\circ[L_{1}^{-}]\circ[y_ {2j}^{-1}]\in\underline{\text{FBD}}_{n}\). Hence,
\[[A_{2j-1}]\circ[A_{2j}]\circ[A_{2j-1}^{-1}]\circ[A_{2j}^{-1}] \circ[u^{j}]\] \[= [\mu_{2j-1}]\circ[\tau_{2j-1}]\circ[y_{2j-1}]\circ[\eta_{2j}^{ \prime}]\circ[y_{2j}]\circ[\tau_{2j}]\] \[\circ[\eta_{2j-1}^{\prime-1}]\circ[y_{2j-1}^{-1}]\circ[\tau_{2j-1 }^{-1}]\circ[u_{3}^{j}]\circ[\tau_{2j}^{-1}]\circ[L_{1}^{-}]\circ[y_{2j}^{-1}]. \tag{2.2.8}\]
**Note**: the non-torus variables \(\mu_{2j-1}\in U_{\tau_{2j-1}^{-1}}^{-}\), \(\eta_{2j}^{\prime},\eta_{2j-1}^{\prime-1},u_{3}^{j}\in U\), \(L_{1}^{-}\in U_{\tau_{2j}^{-1}}^{-}\) all become free.
**Step \(3\)**. By our trick (Definition/Proposition 2.1), the computation of \([A_{2j-1}]\circ[A_{2j}]\circ[A_{2j-1}^{-1}]\circ[A_{2j}^{-1}]\circ[u^{j}]\) then reduces to that of the following variety in \(\underline{\text{FBD}}_{n}\):
\[[U_{\tau_{2j-1}^{-1}}^{-}]\circ[\tau_{2j-1}]\circ[y_{2j-1}]\circ[U]\circ[y_{2j} ]\circ[\tau_{2j}]\circ[U]\circ[y_{2j-1}^{-1}]\circ[U]\circ[\tau_{2j}^{-1}] \circ[U_{\tau_{2j}^{-1}}^{-}]\circ[y_{2j}^{-1}]. \tag{2.2.9}\]
The idea is to 'canonicalize'. According to the expression above, reorder the variables:
\[U_{\tau_{2j-1}^{-1}}^{-}\times T\times U^{3}\times T\times U_{\tau_{2j}^{-1}}^ {-}\xrightarrow{\cong}U_{\tau_{2j-1}^{-1}}^{-}\times(T\times U)^{2}\times U \times U_{\tau_{2j}^{-1}}^{-},\] \[(\mu_{2j-1},y_{2j-1},\eta_{2j-1}^{\prime},\eta_{2j}^{\prime},u_{3}^ {j},y_{2j},L_{1}^{-})\mapsto(\mu_{2j-1},y_{2j-1},\eta_{2j}^{\prime},y_{2j},\eta _{2j-1}^{\prime-1},u_{3}^{j},L_{1}^{-}) \tag{2.2.10}\]
**Step 3.1**.: We firstly compute \([U]\circ[y_{2j-1}^{-1}]\circ[\tau_{2j-1}^{-1}]\circ[U]\ni[\eta_{2j-1}^{\prime-1}] \circ[y_{2j-1}^{-1}]\circ[\tau_{2j-1}^{-1}]\circ[u_{3}^{j}]\):
\[[U]\circ[y_{2j-1}^{-1}]\circ[\tau_{2j-1}^{-1}]\circ[U]\cong[U]\circ[y_{2j-1}^{-1}]\circ[\tau_{2j-1}^{-1}]\circ[U_{\tau_{2j-1}^{-1}}^{+}]\circ[U_{\tau_{2j-1}^{-1}}^{-}]\ \ (U=U_{\tau_{2j-1}^{-1}}^{+}U_{\tau_{2j-1}^{-1}}^{-})\] \[\cong[U]\circ[U_{\tau_{2j-1}}^{+}]\circ[y_{2j-1}^{-1}]\circ[\tau_{2j-1}^{-1}]\circ[U_{\tau_{2j-1}^{-1}}^{-}]\cong[U]\circ[y_{2j-1}^{-1}]\circ[\tau_{2j-1}^{-1}]\circ[U_{\tau_{2j-1}^{-1}}^{-}]\times U_{\tau_{2j-1}}^{+},\]
with the direct factor \(U_{\tau_{2j-1}}^{+}\ni{}^{+}n^{2j-1}\). This means we obtain an isomorphism of \(\mathbb{K}\)-varieties
\[U_{\tau_{2j-1}^{-1}}^{-}\times(T\times U)^{2}\times U\times U_{ \tau_{2j}^{-1}}^{-}\xrightarrow{\simeq}U_{\tau_{2j-1}^{-1}}^{-}\times(T \times U)^{2}\times U_{\tau_{2j-1}^{-1}}^{-}\times U_{\tau_{2j}^{-1}}^{-},\] \[(\mu_{2j-1},y_{2j-1},\eta_{2j}^{\prime},y_{2j},\eta_{2j-1}^{ \prime-1},u_{3}^{j},L_{1}^{-})\mapsto(\mu_{2j-1},y_{2j-1},\eta_{2j}^{\prime},y _{2j},u_{4}^{j},L_{3}^{-},L_{1}^{-},{}^{+}n^{2j-1}), \tag{2.2.11}\]
such that \([\eta_{2j-1}^{\prime-1}]\circ[y_{2j-1}^{-1}]\circ[\tau_{2j-1}^{-1}]\circ[u_{3 }^{j}]=[u_{4}^{j}]\circ[y_{2j-1}^{-1}]\circ[\tau_{2j-1}^{-1}]\circ[L_{3}^{-}] \in\underline{\text{FBD}}_{n}\).
**Step 3.2**.: We compute \([U]\circ[y_{2j}]\circ[\tau_{2j}]\circ[U]\ni[\eta_{2j}^{\prime}]\circ[y_{2j} ]\circ[\tau_{2j}]\circ[u_{4}^{j}]\), which is similar:
\[[U]\circ[y_{2j}]\circ[\tau_{2j}]\circ[U]\cong[U]\circ[y_{2j}]\circ[\tau_{2j}]\circ[U_{\tau_{2j}}^{+}]\circ[U_{\tau_{2j}}^{-}]\ \ (U=U_{\tau_{2j}}^{+}U_{\tau_{2j}}^{-})\] \[\cong[U]\circ[U_{\tau_{2j}^{-1}}^{+}]\circ[y_{2j}]\circ[\tau_{2j}]\circ[U_{\tau_{2j}}^{-}]\cong[U]\circ[y_{2j}]\circ[\tau_{2j}]\circ[U_{\tau_{2j}}^{-}]\times U_{\tau_{2j}^{-1}}^{+},\]
with the direct factor \(U_{\tau_{2j}^{-1}}^{+}\ni{}^{+}n^{2j}\). This means we obtain an isomorphism of \(\mathbb{K}\)-varieties
\[U_{\tau_{2j-1}^{-1}}^{-}\times(T\times U)^{2}\times U_{\tau_{2j-1}^{-1}}^{-}\times U_{\tau_{2j}^{-1}}^{-}\times U_{\tau_{2j-1}}^{+}\xrightarrow{\simeq}\] \[U_{\tau_{2j-1}^{-1}}^{-}\times T\times U\times T\times U_{\tau_{2j}}^{-}\times U_{\tau_{2j-1}^{-1}}^{-}\times U_{\tau_{2j}^{-1}}^{-}\times U_{\tau_{2j}^{-1}}^{+}\times U_{\tau_{2j-1}}^{+},\] \[(\mu_{2j-1},y_{2j-1},\eta_{2j}^{\prime},y_{2j},u_{4}^{j},L_{3}^{-},L_{1}^{-},{}^{+}n^{2j-1})\mapsto(\mu_{2j-1},y_{2j-1},u_{5}^{j},y_{2j},L_{5}^{-},L_{3}^{-},L_{1}^{-},{}^{+}n^{2j},{}^{+}n^{2j-1}),\]
such that \([\eta_{2j}^{\prime}]\circ[y_{2j}]\circ[\tau_{2j}]\circ[u_{4}^{j}]=[u_{5}^{j}] \circ[y_{2j}]\circ[\tau_{2j}]\circ[L_{5}^{-}]\in\underline{\text{FBD}}_{n}\).
**Step 3.3**.: Now, we compute \([U_{\tau_{2j-1}^{-1}}^{-}]\circ[\tau_{2j-1}]\circ[y_{2j-1}]\circ[U]\ni[\mu_{2j -1}]\circ[\tau_{2j-1}]\circ[y_{2j-1}]\circ[y_{2j-1}]\circ[u_{5}^{j}]\):
\[[U_{\tau_{2j-1}^{-1}}^{-}]\circ[\tau_{2j-1}]\circ[y_{2j-1}]\circ[U]\cong[U_{\tau_{2j-1}^{-1}}^{-}]\circ[\tau_{2j-1}]\circ[y_{2j-1}]\circ[U_{\tau_{2j-1}}^{+}]\circ[U_{\tau_{2j-1}}^{-}]\ \ (U=U_{\tau_{2j-1}}^{+}U_{\tau_{2j-1}}^{-})\] \[\cong[U_{\tau_{2j-1}^{-1}}^{-}]\circ[U_{\tau_{2j-1}^{-1}}^{+}]\circ[\tau_{2j-1}]\circ[y_{2j-1}]\circ[U_{\tau_{2j-1}}^{-}]\cong[U]\circ[\tau_{2j-1}]\circ[y_{2j-1}]\circ[U_{\tau_{2j-1}}^{-}].\]
This means we obtain an isomorphism of \(\mathbb{K}\)-varieties
\[U_{\tau_{2j-1}^{-1}}^{-}\times T\times U\times T\times U_{\tau_{2j}}^{-}\times U_{\tau_{2j-1}^{-1}}^{-}\times U_{\tau_{2j}^{-1}}^{-}\times U_{\tau_{2j}^{-1}}^{+}\times U_{\tau_{2j-1}}^{+}\xrightarrow{\simeq}\] \[U\times T\times U_{\tau_{2j-1}}^{-}\times T\times U_{\tau_{2j}}^{-}\times U_{\tau_{2j-1}^{-1}}^{-}\times U_{\tau_{2j}^{-1}}^{-}\times U_{\tau_{2j}^{-1}}^{+}\times U_{\tau_{2j-1}}^{+},\] \[(\mu_{2j-1},y_{2j-1},u_{5}^{j},y_{2j},L_{5}^{-},L_{3}^{-},L_{1}^{-},{}^{+}n^{2j},{}^{+}n^{2j-1})\mapsto(u^{j-1},y_{2j-1},L_{7}^{-},y_{2j},L_{5}^{-},L_{3}^{-},L_{1}^{-},{}^{+}n^{2j},{}^{+}n^{2j-1}),\]
such that \([\mu_{2j-1}]\circ[\tau_{2j-1}]\circ[y_{2j-1}]\circ[u_{5}^{j}]=[u^{j-1}]\circ[ \tau_{2j-1}]\circ[y_{2j-1}]\circ[L_{7}^{-}]\in\underline{\text{FBD}}_{n}\).
In summary, we have obtained
\[[A_{2j-1}]\circ[A_{2j}]\circ[A_{2j-1}^{-1}]\circ[A_{2j}^{-1}]\circ[u^{j}]\] \[= [u^{j-1}]\circ[\tau_{2j-1}]\circ[y_{2j-1}L_{7}^{-}y_{2j}]\circ[\tau_{ 2j}]\circ[L_{5}^{-}y_{2j-1}^{-1}]\circ[\tau_{2j-1}^{-1}]\circ[L_{3}^{-}]\circ[\tau_{2j }^{-1}]\circ[L_{1}^{-}y_{2j}^{-1}]. \tag{2.2.12}\]
and
\[[\tau_{2j-1}]\circ[y_{2j-1}L_{7}^{-}y_{2j}]\circ[\tau_{2j}]\circ[L_{5}^{-}y_{2j-1}^{-1}]\circ[\tau_{2j-1}^{-1}]\circ[L_{3}^{-}]\circ[\tau_{2j}^{-1}]\circ[L_{1}^{-}y_{2j}^{-1}]=[\tau_{2j-1}]\circ[\zeta^{\prime}_{2j-1}]\circ[\tau_{2j}]\circ[\zeta^{\prime}_{2j}]\circ[\tau_{2j-1}^{-1}]\circ[\zeta_{2j-1}]\circ[\tau_{2j}^{-1}]\circ[\zeta_{2j}]\circ[D^{j-1}]\in\underline{\text{FBD}}_{n},\]
where \((\zeta^{\prime}_{2j-1},\zeta^{\prime}_{2j},\zeta_{2j-1},\zeta_{2j})\in U^{-}_{ \tau_{2j-1}}\times U^{-}_{\tau_{2j}}\times U^{-}_{\tau_{2j-1}^{-1}}\times U^{- }_{\tau_{2j}^{-1}}\), and \(D^{j-1}\) is indeed given by (2.2.4).
In other words, we have obtained an isomorphism of the form (2.2.2) in Lemma 2.3 such that (2.2.3) and (2.2.4) hold. By definition of \(\underline{\text{FBD}}_{n}\), the uniqueness part is clear. Done.
**Remark 2.4**.: By a careful check of the proof, there are formulas for \({}^{+}n^{2j-1},{}^{+}n^{2j}\) using (1.3.7):
\[{}^{+}n^{2j-1}=y_{2j-1}^{-1}\tau_{2j-1}^{-1}L^{+}_{\tau_{2j-1}}(\mu_{2j-1}^{- 1}\mu_{2j}^{-1}\tau_{2j}^{-1}L^{+}_{\tau_{2j}^{-1}}(y_{2j}^{-1}\eta_{2j}^{-1}u^ {j}y_{2j})\tau_{2j-1}y_{2j-1}, \tag{2.2.13}\]
\[{}^{+}n^{2j}=y_{2j}\tau_{2j}L^{+}_{\tau_{2j}}(\mu_{2j}\eta_{2j-1}^{-1}{}^{+}n^ {2j-1})\tau_{2j}^{-1}y_{2j}^{-1}.\]
### Connection to braid varieties
We relate \(M^{\prime}_{B}(\vec{w})\) to braid varieties. Summing up Sections 2.1-2.2, we have obtained an isomorphism
\[(\prod_{j=1}^{2g}B\tau_{j}B)\times(\prod_{i=1}^{k-1}B\dot{w}_{i}P_{i})\times U\xrightarrow{\cong}\] \[U\times\prod_{j=1}^{g}(T^{2}\times(U^{-}_{\tau_{2j-1}}\times U^{-}_{\tau_{2j}}\times U^{-}_{\tau_{2j-1}^{-1}}\times U^{-}_{\tau_{2j}^{-1}})\times(U^{+}_{\tau_{2j}^{-1}}\times U^{+}_{\tau_{2j-1}}))\times\prod_{i=1}^{k-1}(U^{-}_{\dot{w}_{i}}\times U^{-}_{\dot{w}_{i}^{-1}}\times(U^{+}_{\dot{w}_{i}}\cap N_{i})\times Z(C_{i})),\] \[((A_{j})_{j},(x_{i})_{i},u_{k})\mapsto(u^{0},(y_{2j-1},y_{2j},\zeta^{\prime}_{2j-1},\zeta^{\prime}_{2j},\zeta_{2j-1},\zeta_{2j},{}^{+}n^{2j},{}^{+}n^{2j-1})_{j=1}^{g},(\xi^{\prime}_{i},\xi_{i},{}^{+}n^{\prime}_{i},z_{i})_{i=1}^{k-1}), \tag{2.3.1}\]
such that we obtain an equality in \(\underline{\text{FBD}}_{n}\):
\[[\text{M}_{\widetilde{w}}]=\prod_{j=1}^{g}([A_{2j-1}]\circ[A_{2j}]\circ[A_{2j-1}^{-1}]\circ[A_{2j}^{-1}])\circ\prod_{i=1}^{k-1}[x_{i}C_{i}x_{i}^{-1}]^{\prime}\circ[u_{k}]\circ[D_{k}]\] \[=[u^{0}D^{0}]\circ\prod_{j=1}^{g}([\tau_{2j-1}]\circ[\zeta^{\prime}_{2j-1}]\circ[\tau_{2j}]\circ[\zeta^{\prime}_{2j}]\circ[\tau_{2j-1}^{-1}]\circ[\zeta_{2j-1}]\circ[\tau_{2j}^{-1}]\circ[\zeta_{2j}])\circ\prod_{i=1}^{k-1}([\dot{w}_{i}]\circ[\xi^{\prime}_{i}]\circ[\dot{w}_{i}^{-1}]\circ[\xi_{i}]).\]
Recall by (1.3.15), \(\beta(\vec{w})=\prod_{j=1}^{g}([\tau_{2j-1}]\circ[\tau_{2j}]\circ[\tau_{2j-1}^{-1}]\circ[\tau_{2j}^{-1}])\circ\prod_{i=1}^{k-1}([\dot{w}_{i}]\circ[\dot{w}_{i}^{-1}])\in\underline{\text{FBr}}^{+}_{n}\). By Definition 1.12, the defining equation (1.3.14) for \(M^{\prime}_{B}(\vec{w})\) in \(\underline{\text{FBD}}_{n}\) becomes
\[[\text{B}_{\beta(\vec{w})}(\vec{\epsilon})]^{\prime}=[(D^{0})^{-1}]\circ[(u^{0})^{-1}],\ \ \vec{\epsilon}:=((\zeta^{\prime}_{2j-1},\zeta^{\prime}_{2j},\zeta_{2j-1},\zeta_{2j})_{j=1}^{g},(\xi^{\prime}_{i},\xi_{i})_{i=1}^{k-1})\in\mathbb{K}^{\ell(\beta(\vec{w}))}. \tag{2.3.2}\]
By (2.2.5), (2.1.3), and the generic assumption (Definition 1.1), \(\det D^{0}=\det D_{1}=1\). Define
\[\phi_{\vec{w}}:T^{2g}\times X(\beta(\vec{w}))\to T_{1}:((y_{j})_{j=1}^{2g},\vec{ \epsilon})\mapsto D^{0}\mu\mathfrak{mon}(\vec{\epsilon});\ \ \ T_{1}:=\{t\in T:\det t=1\}. \tag{2.3.3}\]
Here, \(\det\circ\mu\mathfrak{m}\mathfrak{on}=1\) by Proposition 1.16: \(\forall p\in\mathcal{W}(\beta(\widetilde{w}))\Rightarrow|S_{p}|=\ell(\beta( \widetilde{w}))-2|U_{p}|=\sum_{j=1}^{g}2\ell(\tau_{j})+\sum_{i=1}^{k-1}2\ell( \dot{w}_{i})-2|U_{p}|\), which is even. Define a closed \(\mathbb{K}\)-subvariety
\[\widetilde{X}(\beta(\widetilde{w})):=\phi_{\widetilde{w}}^{-1}(I_{n})\subset T ^{2g}\times X(\beta(\widetilde{w})). \tag{2.3.4}\]
Then by (2.3.2), \(M_{B}^{\prime}(\widetilde{w})\) is related to the restricted twisted braid variety \(\widetilde{X}(\beta(\widetilde{w}))\):
\[M_{B}^{\prime}(\widetilde{w})\cong\widetilde{X}(\beta(\widetilde{w}))\times \prod_{j=1}^{g}(U_{\tau_{2j}^{-1}}^{+}\times U_{\tau_{2j-1}}^{+})\times\prod_{ i=1}^{k-1}((U_{\dot{w}_{i}}^{+}\cap N_{i})\times Z(C_{i})). \tag{2.3.5}\]
Now, \((b,(h_{i})_{i=1}^{k-1})\in B_{\text{par}}\) acts on \((u^{0},(y_{j},\zeta_{j}^{\prime},\zeta_{j},{}^{+}n^{j})_{j=1}^{2g},(\xi_{i}^{ \prime},\xi_{i},{}^{+}n_{i}^{\prime},z_{i})_{i=1}^{k-1})\in M_{B}^{\prime}( \widetilde{w})\). Our major interest is that of \(T_{\text{par}}:=T\times\prod_{i=1}^{k-1}Z(C_{i})\subset B_{\text{par}}\). Let's study the action. We use **Convention 4**. Recall by (1.2.5) and (1.3.10) that
\[\widehat{A}_{j}=bA_{j}b^{-1},\quad\widehat{x}_{i}=bx_{i}h_{i}^{-1},\quad \widehat{u}_{k}=bu_{k}(b^{C_{k}})^{-1};\quad x_{i}=\nu_{i}\dot{w}_{i}n_{i}z_{i}.\]
Notice that \(h_{i}\in Z(C_{i})\) acts only on \(x_{i}\), hence only on \(z_{i}\in Z(C_{i})\) via: \(z_{i}\mapsto z_{i}h_{i}^{-1}\). Thus, define
\[M_{B}^{\prime\prime}(\widetilde{w}):=\{(u^{0},(y_{j})_{j=1}^{2g},\bar{\epsilon },({}^{+}n^{j})_{j=1}^{2g},({}^{+}n_{i}^{\prime})_{i=1}^{k-1}):u^{0}D^{0} \text{B}_{\beta(\widetilde{w})}(\bar{\epsilon})=\text{id}_{n}\}. \tag{2.3.6}\]
Then it admits an induced action of \(b\in B\), and the above computation shows that
\[M_{B}^{\prime\prime}(\widetilde{w})\cong\widetilde{X}(\beta(\widetilde{w})) \times\prod_{j=1}^{g}(U_{\tau_{2j}^{-1}}^{+}\times U_{\tau_{2j-1}}^{+})\times \prod_{i=1}^{k-1}(U_{\dot{w}_{i}}^{+}\cap N_{i});\quad M_{B}^{\prime}( \widetilde{w})=M_{B}^{\prime\prime}(\widetilde{w})\times\prod_{i=1}^{k-1}Z(C _{i}). \tag{2.3.7}\]
Recall that \(PB_{\text{par}}=B_{\text{par}}/\mathbb{K}^{\times}\) acts freely on \(M_{B}^{\prime}(\widetilde{w})\), and \(M_{B}^{\prime}(\widetilde{w})\to\mathcal{M}_{\mu}(\widetilde{w})=M_{B}^{\prime }(\widetilde{w})/PB_{\text{par}}\) is a principal \(PB_{\text{par}}\)-bundle. Observe that \(\prod_{i=1}^{k-1}Z(C_{i})\cong(\mathbb{K}^{\times}\times\prod_{i=1}^{k-1}Z(C_ {i}))/\mathbb{K}^{\times}\hookrightarrow PB_{\text{par}}=(B\times\prod_{i=1}^{k- 1}Z(C_{i}))/\mathbb{K}^{\times}\) is a normal closed subgroup, with quotient \(PB_{\text{par}}/\prod_{i=1}^{k-1}Z(C_{i})\cong PB:=B/\mathbb{K}^{\times}\). Clearly, we have \(M_{B}^{\prime}(\widetilde{w})/\prod_{i=1}^{k-1}Z(C_{i})\cong M_{B}^{\prime \prime}(\widetilde{w})\). Then, by Proposition C.31 and Proposition C.24,
\[\mathcal{M}_{\mu}(\widetilde{w})=M_{B}^{\prime}(\widetilde{w})/PB_{\text{par}} \cong M_{B}^{\prime\prime}(\widetilde{w})/PB, \tag{2.3.8}\]
and the induced quotient map
\[\pi_{\widetilde{w}}^{\prime\prime}:M_{B}^{\prime\prime}(\widetilde{w})\to \mathcal{M}_{\mu}(\widetilde{w})=M_{B}^{\prime\prime}(\widetilde{w})/PB\]
is a principal \(PB\)-bundle. This leads us to study the \(B\)-action on \(M_{B}^{\prime\prime}(\widetilde{w})\).
**Lemma 2.5**.: _The induced action of \(b\in B\) on \((u^{0},(y_{j})_{j=1}^{2g},\bar{\epsilon},({}^{+}n^{j})_{j=1}^{2g},({}^{+}n_{i}^{ \prime})_{i=1}^{k-1})\in M_{B}^{\prime\prime}(\widetilde{w})\) satisfies: a) The canonical projection \(M_{B}^{\prime\prime}(\widetilde{w})\to\widetilde{X}(\beta(\widetilde{w})) \subset T^{2g}\times X(\beta(\widetilde{w}))\) is \(B\)-equivariant. Here, \(b\in B\) acts on \(((y_{j})_{j=1}^{2g},\bar{\epsilon})\in T^{2g}\times X(\beta(\widetilde{w}))\) diagonally: \(B\curvearrowright X(\beta(\widetilde{w}))\) via Definition 1.14, and_
\[\widehat{y}_{2j-1}=D(b)^{\tau_{2j-1}^{-1}}D(b)^{-1}y_{2j-1},\ \ \widehat{y}_{2j}=D(b)(D(b)^{\tau_{2j}})^{-1}y_{2j}. \tag{2.3.9}\]
_b) If \(b=t\in T\), then_
\[{}^{+}\widehat{n}_{i}^{\prime}=t^{\dot{w}_{i}^{-1}}\,{}^{+}n_{i}^{\prime}\,(t^{-1})^{\dot{w}_{i}^{-1}};\quad{}^{+}\widehat{n}^{2j-1}=t\,{}^{+}n^{2j-1}\,t^{-1};\quad{}^{+}\widehat{n}^{2j}=t\,{}^{+}n^{2j}\,t^{-1};\] \[\widehat{y}_{2j-1}=t^{\tau_{2j-1}^{-1}}t^{-1}y_{2j-1};\quad\widehat{y}_{2j}=t(t^{\tau_{2j}})^{-1}y_{2j};\quad\widehat{u}^{0}=tu^{0}t^{-1};\quad\widehat{D}^{0}=tD^{0}(t^{-1})^{\prod_{m=1}^{g}(\tau_{2m-1},\tau_{2m})};\] \[[\widehat{u}^{0}]\circ[\widehat{D}^{0}]\circ[\mathrm{B}_{\beta(\widetilde{w})}(\widehat{\overline{\epsilon}})]^{\prime}=[t]\circ[u^{0}]\circ[D^{0}]\circ[\mathrm{B}_{\beta(\widetilde{w})}(\overline{\epsilon})]^{\prime}\circ[t^{-1}]\in\underline{\mathrm{FBD}}_{n}. \tag{2.3.10}\]
_Here, the last equation uniquely determines \((\widehat{u}^{0},\widehat{D}^{0},\widehat{\overline{\epsilon}})\)._
Proof.: Now, \(\widehat{A}_{j}=bA_{j}b^{-1}\), \(\widehat{x}_{i}=bx_{i}\). So, \([\widehat{A}_{j}]=[b]\circ[A_{j}]\circ[b^{-1}]\), \([\widehat{x}_{i}C_{i}\widehat{x}_{i}^{-1}]^{\prime}=[b]\circ[x_{i}C_{i}x_{i}^ {-1}]^{\prime}\circ[b^{-1}]\). \(a)\). By Section 2.2, \((\widehat{n}_{2j-1},\widehat{y}_{2j-1},\widehat{n}_{2j-1})\in U_{\tau_{2j-1}^{- 1}}^{-}\times T\times U\) is uniquely determined by:
\[\widehat{\mu}_{2j-1}\tau_{2j-1}\widehat{y}_{2j-1}\widehat{\eta}_{2j-1}= \widehat{A}_{2j-1}=bA_{2j-1}b^{-1}=b\mu_{2j-1}\tau_{2j-1}y_{2j-1}\eta_{2j-1}b ^{-1}.\]
Thus, \(\widehat{y}_{2j-1}=D(b)^{\tau_{2j-1}^{-1}}y_{2j-1}D(b)^{-1}\in T\), as desired. Similarly, \(\widehat{y}_{2j}=D(b)y_{2j}(D(b)^{-1})^{\tau_{2j}}\in T\).
Recall that \(\widehat{u}_{k}=bu_{k}(b^{C_{k}})^{-1}\), \(D_{k}=C_{k}\), then we have equalities in \(\underline{\mathrm{FBD}}_{n}\):
\[[\widehat{u}^{0}\widehat{D}^{0}]\circ[\mathrm{B}_{\beta(\widetilde{w})}( \widehat{\overline{\epsilon}})]=[\widehat{\mathrm{M}}_{\widetilde{w}}]=\prod_ {j=1}^{g}([\widehat{A}_{2j-1}]\circ[\widehat{A}_{2j}]\circ[\widehat{A}_{2j-1 }^{-1}]\circ[\widehat{A}_{2j}^{-1}])\circ\prod_{i=1}^{k-1}[\widehat{x}_{i}C_{ i}\widehat{x}_{i}^{-1}]^{\prime}\circ[\widehat{u}_{k}D_{k}]\] \[= [b]\circ[u^{0}D^{0}]\circ[\mathrm{B}_{\beta(\widetilde{w})}( \overline{\epsilon})]\circ[D_{k}^{-1}u_{k}^{-1}b^{-1}\widehat{u}_{k}D_{k}]=[b] \circ[u^{0}D^{0}]\circ[\mathrm{B}_{\beta(\widetilde{w})}(\overline{\epsilon})] \circ[b^{-1}]. \tag{2.3.11}\]
This equation uniquely determines \((\widehat{u}^{0},\widehat{D}^{0},\widehat{\overline{\epsilon}})\), and we see that \(\widehat{\overline{\epsilon}}\) coincides with the action of \(b\in B\) on \(\overline{\epsilon}\in X(\beta(\widetilde{w}))\) in Definition 1.14. This shows \(a)\) and most of \(b)\).
\(b)\). By above, it remains to check the action of \(b=t\in T\) on \(((^{+}n^{j})_{j},(^{+}n_{i}^{\prime})_{i})\).
By the equation \(\widehat{\nu}_{i}\dot{w}_{i}\widehat{n}_{i}\widehat{z}_{i}=\widehat{x}_{i}= tx_{i}=t\nu_{i}\dot{w}_{i}n_{i}z_{i}\), we see that \(\widehat{n}_{i}=t^{\dot{w}_{i}^{-1}}n_{i}(t^{\dot{w}_{i}^{-1}})^{-1}\). It follows that \(\widehat{n}_{i}^{\prime}=\widehat{n}_{i}C_{i}\widehat{n}_{i}^{-1}C_{i}^{-1}=t ^{\dot{w}_{i}^{-1}}n_{i}^{\prime}(t^{\dot{w}_{i}^{-1}})^{-1}\). Then \({}^{+}\widehat{n}_{i}^{\prime}=R_{\dot{w}_{i}}^{+}(\widehat{n}_{i}^{\prime})=t ^{\dot{w}_{i}^{-1}+}n_{i}^{\prime}(t^{\dot{w}_{i}^{-1}})^{-1}\), as desired.
It remains to compute \({}^{+}\widehat{n}^{j}\). By (2.1.4) and similar to (2.3.11), we get by induction that \(\widehat{u}_{i}=tu_{i}t^{-1}\). Similarly, by (2.2.3) and induction, we get \(\widehat{u}^{j}=tu^{j}t^{-1}\). By the equation \(\widehat{\mu}_{2j-1}\tau_{2j-1}\widehat{y}_{2j-1}\widehat{\eta}_{2j-1}=t\mu_{2j-1}\tau_{2j-1}y_{2j-1}\eta_{2j-1}t^{-1}\), we see \(\widehat{\mu}_{2j-1}=t\mu_{2j-1}t^{-1}\), \(\widehat{\eta}_{2j-1}=t\eta_{2j-1}t^{-1}\). Similarly, \(\widehat{\mu}_{2j}=t\mu_{2j}t^{-1}\), \(\widehat{\eta}_{2j}=t\eta_{2j}t^{-1}\). Now, by Remark 2.4, we see \({}^{+}\widehat{n}^{j}=t\,{}^{+}n^{j}\,t^{-1}\), as desired.
### The cell decomposition
Recall that \(\pi_{\widetilde{w}}^{\prime\prime}:M_{B}^{\prime\prime}(\widetilde{w})\to \mathcal{M}_{\mu}(\widetilde{w})=M_{B}^{\prime\prime}(\widetilde{w})/PB\) is a principal \(PB\)-bundle. Moreover, \(M_{B}^{\prime\prime}(\widetilde{w})\cong\widetilde{X}(\beta(\widetilde{w})) \times\prod_{j=1}^{g}(U_{\tau_{2j}^{-1}}^{+}\times U_{\tau_{2j-1}}^{+})\times \prod_{i=1}^{k-1}(U_{\dot{w}_{i}}^{+}\cap N_{i})\) by (2.3.7). Say, \(\beta:=\beta(\widetilde{w})=\sigma_{i_{\ell}}\cdots\sigma_{i_{1}}\in\mathrm{FBr}_{n }^{+}\). Define a \(B\)-invariant closed subvariety of \(T^{2g}\times X(\beta)\) by
\[\widetilde{X}_{p}(\beta):=\widetilde{X}(\beta)\cap(T^{2g}\times X_{p}(\beta)) \subset T^{2g}\times X(\beta),\quad p\in\mathcal{W}(\beta). \tag{2.4.1}\]
Then by Lemma 2.5, the \(B\)-equivariant decomposition \(\widetilde{X}(\beta)=\sqcup_{p\in\mathcal{W}(\beta)}\widetilde{X}_{p}(\beta)\) induces a (resp. \(B\)-equivariant) decomposition of \(\mathcal{M}_{\mu}(\widetilde{w})\) (resp. \(M_{B}^{\prime\prime}(\widetilde{w})\)), as desired. It remains to give a more concrete description of each piece in these decompositions. This reduces to describing \(\widetilde{X}_{p}(\beta)\).
We start with the following observation: Remember that \(PB\) acts freely on \(M_{B}^{\prime\prime}(\widetilde{w})\). By (2.3.10), we then see that \(PT=T/\mathbb{K}^{\times}\subset PB\) preserves and acts _freely_ on \(\widetilde{X}(\beta)\cong\widetilde{X}(\beta)\times\{0\}\subset M_{B}^{\prime \prime}(\widetilde{w})\). It turns out that, this free action leads to a more concrete description of \(\widetilde{X}(\beta)\), which we now pursue.
By Proposition 1.16, \(X_{p}(\beta)\cong(\mathbb{K}^{\times})^{|S_{p}|}\times\mathbb{K}^{|U_{p}|}:( \widetilde{\epsilon})=(\epsilon_{m})_{m=\ell}^{1}\mapsto(\epsilon_{m}^{\prime })_{m\in S_{p}\cup U_{p}}\). Define
\[\begin{split}&\overline{\phi}_{\widetilde{w},p}:T^{2g}\times(\mathbb{K}^{\times})^{|S_{p}|}\to T_{1}:(\vec{y}=(y_{j})_{j=1}^{2g},(\epsilon_{m}^{\prime})_{m\in S_{p}})\mapsto D^{0}(\vec{y})\mu\mathfrak{mon}((\epsilon_{m}^{\prime})_{m\in S_{p}}),\\ &\widetilde{T}_{p}(\beta):=\overline{\phi}_{\widetilde{w},p}^{-1}(I_{n})\subset T^{2g}\times(\mathbb{K}^{\times})^{|S_{p}|}.\end{split} \tag{2.4.2}\]
Here, \(D^{0}(\vec{y})\) is given by (2.2.5), and \(\mu\mathfrak{mon}((\epsilon_{m}^{\prime})_{m\in S_{p}})\) is given by Proposition 1.16 (2). Clearly, \(\phi_{\widetilde{w},p}:=\phi_{\widetilde{w}}|_{T^{2g}\times X_{p}(\beta)}\) is the composition of \(\overline{\phi}_{\widetilde{w},p}\) with the obvious projection:
\[\phi_{\widetilde{w},p}:T^{2g}\times X_{p}(\beta)\cong T^{2g}\times(\mathbb{K} ^{\times})^{|S_{p}|}\times\mathbb{K}^{|U_{p}|}\twoheadrightarrow T^{2g}\times( \mathbb{K}^{\times})^{|S_{p}|}\xrightarrow{\overline{\phi}_{\widetilde{w},p}} T_{1}.\]
Hence, we have
\[\widetilde{X}_{p}(\beta)=\phi_{\widetilde{w},p}^{-1}(I_{n})=\widetilde{T}_{p} (\beta)\times\mathbb{K}^{|U_{p}|}. \tag{2.4.3}\]
In particular, \(\widetilde{X}_{p}(\beta)\neq\varnothing\) if and only \(\widetilde{T}_{p}(\beta)\neq\varnothing\).
Also, by Proposition 1.16 and (2.3.9), \(T^{2g}\times(\mathbb{K}^{\times})^{|S_{p}|}\) inherits an action of \(T\) such that the projection is equivariant with respect to the projection \(B\to T:b\mapsto D(b)\). Besides, observe that
\[PT\text{ acts freely on }\widetilde{X}_{p}(\beta)\text{ if and only if it does so on }\widetilde{T}_{p}(\beta). \tag{2.4.4}\]
Indeed, this is the case if \(\widetilde{X}_{p}(\beta)\neq\varnothing\) by the observation above.
The key property is the following
**Lemma 2.6**.: _The \(PT\)-action on \(\widetilde{T}_{p}(\beta)\) is free if and only if \(\overline{\phi}_{\widetilde{w},p}:T^{2g}\times(\mathbb{K}^{\times})^{|S_{p}|} \to T_{1}\) is surjective._
Proof.: Firstly, we consider the \(PT\)-action. For that, we make some preparations:
1. For any \(\tau\in S_{n}\), define a subgroup of \(T\) by (2.4.5) \[{}^{\tau}T:=\{t\in T:t^{\tau}=t,\text{ i.e., }t_{a}=t_{\tau(a)},\forall a\in[n]\}\subset T.\] So, \({}^{(a\ b)}T=\{t\in T:t_{a}=t_{b}\}\) for any transposition \((a\ b)\in S_{n}\). Define a subgroup of \(T\) by (2.4.6) \[{}^{I}T:=\{t\in T:t^{\tau}=t,\forall\tau\in I\}\subset T,\ \ I\subset S_{n};\ \Leftrightarrow\ {}^{I}T=\cap_{\tau\in I}{}^{T}T=\cap_{a\in[n],\tau\in I}{}^{(a\ \tau(a))}T.\]
2. By definition, \({}^{\tau}T={}^{\tau^{-1}}T\). Also, \(t=t^{\tau_{1}},t=t^{\tau_{2}}\) implies that \(t=t^{\tau_{1}}=(t^{\tau_{2}})^{\tau_{1}}=t^{\tau_{1}\tau_{2}}\), so by (1), (2.4.7) \[{}^{I}T={}^{\langle I\rangle}T={}^{\overline{\langle I\rangle}}T.\]
Here, we denote by \(\langle I\rangle\subset S_{n}\) the subgroup generated by \(I\), and
\[\overline{\langle I\rangle}:=\langle(a\ \tau(a)),\forall a\in[n],\tau\in I\rangle \subset S_{n}. \tag{2.4.8}\]
3. Let \([n]=O_{1}\sqcup\cdots\sqcup O_{r}\) be any partition, denoted by \(\vec{\mathcal{O}}\). Define subgroups of \(T\) and \(S_{n}\): (2.4.9) \[{}^{\vec{\mathcal{O}}}T:=\{t\in T:a,b\in O_{m}\Rightarrow t_{a}=t_{b},\forall 1\leq m\leq r\}\subset T;\quad S_{\vec{\mathcal{O}}}:=S_{|O_{1}|}\times\cdots\times S_{|O_{r}|}\subset S_{n}.\] Then for any subset \(I\subset S_{n}\), we have (2.4.10) \[{}^{I}T={}^{\vec{\mathcal{O}}}T\Leftrightarrow\ \{O_{i}:1\leq i\leq r\}=\{\langle I\rangle\text{-orbits on }[n]\}\ \Leftrightarrow\overline{\langle I\rangle}=S_{\vec{\mathcal{O}}}.\] Now, we see that \({}^{I}T={}^{J}T\) if and only if \(\overline{\langle I\rangle}=\overline{\langle J\rangle}\). In particular, \({}^{I}T=\mathbb{K}^{\times}I_{n}\) if and only if \(\overline{\langle I\rangle}=S_{n}\).
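For instance (a small illustration of (2.4.5)-(2.4.10)): take \(n=3\) and \(I=\{(1\ 2\ 3)\}\). Then \(\langle I\rangle\) has a single orbit on \([3]\), and

\[{}^{I}T=\{t\in T:t_{1}=t_{2}=t_{3}\}=\mathbb{K}^{\times}I_{3},\qquad\overline{\langle I\rangle}=\langle(1\ 2),(2\ 3),(1\ 3)\rangle=S_{3}.\]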
We come back to the proof. The free \(PT\)-action on \(\widetilde{T}_{p}(\beta)\) means: \(\forall t\in T\), \(\forall((y_{j})_{j=1}^{2g},(\epsilon^{\prime}_{m})_{m\in S_{p}})\in\widetilde{T}_{p}(\beta)\), we have \(t\cdot((y_{j})_{j=1}^{2g},(\epsilon^{\prime}_{m})_{m\in S_{p}})=((y_{j})_{j=1}^{2g},(\epsilon^{\prime}_{m})_{m\in S_{p}})\Rightarrow t\in\mathbb{K}^{\times}I_{n}\). We use **Convention 4**. Recall by (2.3.10) that \(\widehat{y}_{j}=y_{j}\) if and only if \(t^{\tau_{j}}=t\). By Proposition 1.16 (1), for any \(m\in S_{p}\), \(\widehat{\epsilon}^{\prime}_{m}=\epsilon^{\prime}_{m}\) if and only if \((t^{p_{m-1}})_{i_{m}}=(t^{p_{m-1}})_{i_{m}+1}\). Equivalently, \((t^{p_{m-1}})^{s_{i_{m}}}=t^{p_{m-1}}\), i.e., \(t^{\underline{s}_{i_{m}}}=t\), where:
\[\underline{s}_{i_{m}}:=p_{m-1}^{-1}s_{i_{m}}p_{m-1}\in S_{n}. \tag{2.4.11}\]
Thus, denote
\[J_{p}:=\{\tau_{j}:1\leq j\leq 2g,\ \underline{s}_{i_{m}}:m\in S_{p}\}\subset S_{n}. \tag{2.4.12}\]
By the preparation above, we conclude that
\[PT\text{ acts freely on }\widetilde{T}_{p}(\beta)\ \ \Leftrightarrow\ \ {}^{J_{p}}T=\mathbb{K}^{\times}I_{n}\Leftrightarrow\overline{\langle J_{p} \rangle}=S_{n}. \tag{2.4.13}\]
On the other hand, we consider the image of \(\overline{\phi}_{\widetilde{w},p}\). Again, we begin with some preparations:
1. For any \(\tau\in S_{n}\), define a subgroup of \(T_{1}\) by (2.4.14) \[{}_{\tau}T:=\{y(y^{-1})^{\tau}:y\in T\}\subset T_{1}.\] So, \({}_{(a\ b)}T=\{\mathrm{K}_{a}(\lambda)\mathrm{K}_{b}(\lambda^{-1}):\lambda \in\mathbb{K}^{\times}\}\) for any transposition \((a\ b)\in S_{n}\). For any \(I\subset S_{n}\), define \({}_{I}T\subset T_{1}\) as the subgroup generated by \({}_{\tau}T,\tau\in I\), and write \({}_{I}T:=\langle{}_{\tau}T:\tau\in I\rangle\). Equivalently, \[{}_{I}T=\langle{}_{(a\ \tau(a))}T:a\in[n],\tau\in I\rangle=\langle\mathrm{K}_{a}( \lambda)\mathrm{K}_{\tau(a)}(\lambda^{-1}):\lambda\in\mathbb{K}^{\times},a\in[ n],\tau\in I\rangle\subset T_{1}.\]
2. By definition, \({}_{\tau}T={}_{\tau^{-1}}T\). Also, as \(y(y^{-1})^{\tau_{1}\tau_{2}}=(y(y^{-1})^{\tau_{2}})(y^{\tau_{2}}((y^{-1})^{\tau_{2}})^{\tau_{1}})\), we have by (1): \[{}_{I}T={}_{\langle I\rangle}T={}_{\overline{\langle I\rangle}}T.\]
3. Let \([n]=O_{1}\sqcup\cdots\sqcup O_{r}\) be any partition, denoted by \(\vec{\mathcal{O}}\). Define a subgroup \[{}_{\vec{\mathcal{O}}}T:=\{t\in T:\prod_{q\in O_{a}}t_{q}=1,\forall 1\leq a\leq r\}\subset T_{1}.\] Then for any subset \(I\subset S_{n}\), we have \[{}_{I}T={}_{\vec{\mathcal{O}}}T\Leftrightarrow\ \{O_{i}:1\leq i\leq r\}=\{\langle I\rangle\text{-orbits on }[n]\}\ \Leftrightarrow\overline{\langle I\rangle}=S_{\vec{\mathcal{O}}}.\] Now, we see that \({}_{I}T={}_{J}T\) if and only if \(\overline{\langle I\rangle}=\overline{\langle J\rangle}\). In particular, \({}_{I}T=T_{1}\) if and only if \(\overline{\langle I\rangle}=S_{n}\).
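Dually, in the same small case \(n=3\), \(I=\{(1\ 2\ 3)\}\) (again only an illustration), the single \(\langle I\rangle\)-orbit gives

\[{}_{I}T=\{t\in T:t_{1}t_{2}t_{3}=1\}=T_{1}.\]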
Now, come back to our setting. Observe that \(\mathrm{s}(\beta)=\prod_{j=1}^{g}(\tau_{2j-1},\tau_{2j})\). Inspired by Proposition 1.16 (2) and equation (2.2.5), we define an isomorphism
\[m_{p}:T_{1}\xrightarrow{\simeq}T_{1}:t\mapsto t(D_{1}^{-1})^{\mathrm{s}(\beta)} \prod_{m\in S_{p}}(\mathrm{K}_{i_{m}}(-1))^{\mathrm{s}_{>m}(\beta)}.\]
So \(\mathrm{Im}(\overline{\phi}_{\widetilde{w},p})=T_{1}\) if and only if \(\mathrm{Im}(m_{p}\circ\overline{\phi}_{\widetilde{w},p})=T_{1}\). It suffices to consider the latter.
For any \(1\leq j\leq g\), denote
\[c_{<j}=c_{<j}(\widetilde{\tau}):=\prod_{m=1}^{j-1}(\tau_{2m-1},\tau_{2m}).\]
In (2.2.5), observe that
\[(y_{2j-1}^{\tau_{2j-1}}(y_{2j-1}^{-1})^{\tau_{2j-1}\tau_{2j}})^{\prod_{m=1}^{j-1}(\tau_{2m-1},\tau_{2m})}=\widetilde{y}_{2j-1}(\widetilde{y}_{2j-1}^{-1})^{c_{<j+1}\tau_{2j}c_{<j}^{-1}},\quad\widetilde{y}_{2j-1}:=y_{2j-1}^{c_{<j}\tau_{2j-1}}\in T,\] \[(y_{2j}^{\tau_{2j-1}}(y_{2j}^{-1})^{(\tau_{2j-1},\tau_{2j})})^{\prod_{m=1}^{j-1}(\tau_{2m-1},\tau_{2m})}=\widetilde{y}_{2j}(\widetilde{y}_{2j}^{-1})^{c_{<j+1}\tau_{2j-1}^{-1}c_{<j}^{-1}},\quad\widetilde{y}_{2j}:=y_{2j}^{c_{<j}\tau_{2j-1}}\in T.\]
In Proposition 1.16 (2), for each \(m\in S_{p}\), observe that
\[\{(\mathrm{K}_{i_{m}}(\epsilon_{m}^{\prime-1})\mathrm{K}_{i_{m}+1}(\epsilon_{m}^{\prime}))^{\mathrm{s}_{>m}(\beta)}=\mathrm{K}_{\mathrm{s}_{>m}(\beta)(i_{m})}(\epsilon_{m}^{\prime-1})\mathrm{K}_{\mathrm{s}_{>m}(\beta)(i_{m}+1)}(\epsilon_{m}^{\prime}):\epsilon_{m}^{\prime}\in\mathbb{K}^{\times}\}\] \[={}_{(\mathrm{s}_{>m}(\beta)(i_{m})\ \ \mathrm{s}_{>m}(\beta)(i_{m}+1))}T={}_{\mathrm{s}_{>m}(\beta)\mathrm{s}_{i_{m}}\mathrm{s}_{>m}(\beta)^{-1}}T\subset T_{1}.\]
Now, by above, (2.2.5), and Proposition 1.16 (2), we have \(\mathrm{Im}(m_{p}\circ\overline{\phi}_{\widetilde{w},p})={}_{\widetilde{J}_{p}}T\), where
\[\widetilde{J}_{p}:=\{c_{<j+1}\tau_{2j}c_{<j}^{-1},c_{<j+1}\tau_{2j-1}^{-1}c_{<j }^{-1},1\leq j\leq g;\mathrm{s}_{>m}(\beta)\mathrm{s}_{i_{m}}\mathrm{s}_{>m}( \beta)^{-1},m\in S_{p}\}. \tag{2.4.15}\]
Then by the preparation above, we conclude that
\[\mathrm{Im}(m_{p}\circ\overline{\phi}_{\widetilde{w},p})={}_{\widetilde{J}_{p}}T=T_{1}\Leftrightarrow\overline{\langle\widetilde{J}_{p}\rangle}=S_{n}. \tag{2.4.16}\]
Combined with (2.4.13), this shows that the lemma is equivalent to the following statement:
\[\overline{\langle J_{p}\rangle}=S_{n}\Leftrightarrow\overline{\langle \widetilde{J}_{p}\rangle}=S_{n},\]
which will follow from the claim below.
**Claim**: we have \(\langle J_{p}\rangle=\langle\widetilde{J}_{p}\rangle\).
_Proof of Claim_. Denote
\[J_{\tau}:=\{\tau_{m}:1\leq m\leq 2g\}\subset S_{n},\quad\widetilde{J}_{\tau}:=\{ \widetilde{\tau}_{2j}:=c_{<j+1}\tau_{2j}c_{<j}^{-1},\widetilde{\tau}_{2j-1}:=c_ {<j+1}\tau_{2j-1}^{-1}c_{<j}^{-1},1\leq j\leq g\}\subset S_{n}.\]
Firstly, we prove by induction that, for each \(1\leq j\leq g\), we have
\[\langle\tau_{m}:1\leq m\leq 2j\rangle=\langle\widetilde{\tau}_{m}:1\leq m\leq 2j \rangle\subset S_{n}. \tag{2.4.17}\]
In particular, \(j=g\) gives \(\langle J_{\tau}\rangle=\langle\widetilde{J}_{\tau}\rangle\).
For \(j=1\), we have \(\widetilde{\tau}_{2}=\tau_{1}\tau_{2}\tau_{1}^{-1},\quad\widetilde{\tau}_{1}= \tau_{1}\tau_{2}\tau_{1}^{-1}\tau_{2}^{-1}\tau_{1}^{-1}\). So, \(\widetilde{\tau}_{2}^{-1}\widetilde{\tau}_{1}\widetilde{\tau}_{2}=\tau_{1}^{-1 }\in\langle\widetilde{\tau}_{1},\widetilde{\tau}_{2}\rangle\). It follows that \(\langle\widetilde{\tau}_{1},\widetilde{\tau}_{2}\rangle=\langle\tau_{1}^{-1 },\widetilde{\tau}_{2}=\tau_{1}\tau_{2}\tau_{1}^{-1}\rangle=\langle\tau_{1},\tau_{ 2}\rangle\), as desired.
Suppose (2.4.17) holds for \(`<j\)', so
\[c_{<j}\in\langle\tau_{m}:1\leq m\leq 2(j-1)\rangle=\langle\widetilde{\tau}_{m}:1 \leq m\leq 2(j-1)\rangle.\]
Observe that \(\widetilde{\tau}_{2j}^{-1}\widetilde{\tau}_{2j-1}\widetilde{\tau}_{2j}=c_{<j} \tau_{2j}^{-1}\tau_{2j-1}^{-1}(\tau_{2j-1},\tau_{2j})\tau_{2j}c_{<j}^{-1}=c_{< j}\tau_{2j-1}^{-1}c_{<j}^{-1}\). It follows that
\[\langle\widetilde{\tau}_{m}:1\leq m\leq 2j\rangle=\langle\tau_{m}:1 \leq m\leq 2(j-1),c_{<j}\tau_{2j-1}^{-1}c_{<j}^{-1},\widetilde{\tau}_{2j}=c_{<j+1 }\tau_{2j}c_{<j}^{-1}\rangle\] \[= \langle\tau_{m}:1\leq m\leq 2(j-1),\tau_{2j-1}^{-1},(\tau_{2j-1}, \tau_{2j})\tau_{2j}=\tau_{2j-1}\tau_{2j}\tau_{2j-1}^{-1}\rangle=\langle\tau_{m}: 1\leq m\leq 2j\rangle.\]
This finishes the induction, and hence proves (2.4.17).
Now, observe that \(\mathrm{s}(\beta)=\prod_{j=1}^{g}\,(\tau_{2j-1},\tau_{2j})\in\langle \widetilde{J}_{\tau}\rangle=\langle J_{\tau}\rangle\). Thus,
\[\langle\widetilde{J}_{p}\rangle=\langle J_{\tau},\mathrm{s}(\beta)^{-1} \mathrm{s}_{>m}(\beta)\mathrm{s}_{i_{m}}\mathrm{s}_{>m}(\beta)^{-1}\mathrm{s} (\beta),m\in S_{p}\rangle=\langle J_{\tau},\mathrm{s}_{<m}(\beta)^{-1} \mathrm{s}_{i_{m}}\mathrm{s}_{<m}(\beta)=:\widetilde{s}_{i_{m}},m\in S_{p}\rangle.\]
Say, \(S_{p}=\{m_{1}<\dots<m_{N}\}\). Set \(J_{s}:=\{\mathrm{s}_{i_{m}}:m\in S_{p}\}\subset W=S_{n}\), \(\widetilde{J}_{s}:=\{\widetilde{s}_{i_{m}}:m\in S_{p}\}\subset W=S_{n}\). We will show that
\[\langle\mathrm{s}_{i_{m_{j}}},1\leq j\leq L\rangle=\langle\widetilde{s}_{i_{m_ {j}}},1\leq j\leq L\rangle. \tag{2.4.18}\]
In particular, \(L=N\) gives \(\langle J_{s}\rangle=\langle\widetilde{J}_{s}\rangle\), and the Claim will follow immediately.
For that, we firstly prove by induction that, for each \(1\leq j\leq N\), we have
\[\langle J_{\tau},\underline{\mathrm{s}}_{i_{m_{l}}}:1\leq l\leq j\rangle=\langle J_{\tau},\widetilde{s}_{i_{m_{l}}}:1\leq l\leq j\rangle. \tag{2.4.19}\]

In particular, \(j=N\) gives \(\langle J_{p}\rangle=\langle\widetilde{J}_{p}\rangle\), which proves the Claim. This completes the proof of Lemma 2.6.

**Remark 2.7**.: By (2.4.13), \(PT\) acts freely on \(\widetilde{T}_{p}(\beta)\) if and only if \(\overline{\langle J_{p}\rangle}=S_{n}\), i.e., if and only if \(\langle J_{p}\rangle\) acts transitively on \([n]\). Accordingly, denote

\[\mathcal{W}^{*}(\beta):=\{p\in\mathcal{W}(\beta):\overline{\langle J_{p}\rangle}=S_{n}\}\subset\mathcal{W}(\beta), \tag{2.4.20}\]

where \(J_{p}=\{\tau_{j}:1\leq j\leq 2g,\ \underline{\mathrm{s}}_{i_{m}}=p_{m-1}^{-1}\mathrm{s}_{i_{m}}p_{m-1}:m\in S_{p}\}\subset S_{n}\). Alternatively, denote
\[J_{p}^{\prime}:=\{\tau_{j}:1\leq j\leq 2g,\ \operatorname{\widetilde{s}}_{i_{m}}= \mathrm{s}_{<m}(\beta)^{-1}\mathrm{s}_{i_{m}}\mathrm{s}_{<m}(\beta):m\in S_{p} \}\subset S_{n}.\]
We have seen that \(\langle J_{p}\rangle=\langle J_{p}^{\prime}\rangle\), then the same holds if we replace \(J_{p}\) above by \(J_{p}^{\prime}\).
As an immediate corollary of Lemma 2.6, we obtain
**Corollary 2.8**.: _If \(\widetilde{X}_{p}(\beta)\neq\varnothing\), then we have a (\(B\)-equivariant) isomorphism_
\[\widetilde{X}_{p}(\beta)=\widetilde{T}_{p}(\beta)\times\mathbb{K}^{|U_{p}|} \cong(\mathbb{K}^{\times})^{|S_{p}|+2gn-n+1}\times\mathbb{K}^{|U_{p}|}. \tag{2.4.21}\]
Proof.: If \(\widetilde{X}_{p}(\beta)\neq\varnothing\), then we know that the \(PT\)-action on \(\widetilde{X}_{p}(\beta)\) is free. By Lemma 2.6, the map
\[\overline{\phi}_{\widetilde{w},p}:T^{2g}\times(\mathbb{K}^{\times})^{|S_{p}| }\cong(\mathbb{K}^{\times})^{|S_{p}|+2gn}\to T_{1}\cong(\mathbb{K}^{\times}) ^{n-1}.\]
is surjective. By (2.3.3), Proposition 1.16 (2), and (2.2.5), the composition \(m_{p}\circ\overline{\phi}_{\widetilde{w},p}\) is a surjective group homomorphism of algebraic tori. It follows that
\[\widetilde{T}_{p}(\beta)=\overline{\phi}_{\widetilde{w},p}^{-1}(I_{n})=(m_{p} \circ\overline{\phi}_{\widetilde{w},p})^{-1}((D_{1}^{-1})^{\mathrm{s}(\beta) }\prod_{m\in S_{p}}(\mathrm{K}_{i_{m}}(-1))^{\mathrm{s}_{>m}(\beta)})\cong( \mathbb{K}^{\times})^{|S_{p}|+2gn-n+1}.\]
Thus, \(\widetilde{X}_{p}(\beta)=\widetilde{T}_{p}(\beta)\times\mathbb{K}^{|U_{p}|} \cong(\mathbb{K}^{\times})^{|S_{p}|+2gn-n+1}\times\mathbb{K}^{|U_{p}|}\), as desired.
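For instance (a small numerical illustration, anticipating Example 2.13 below with \(g=0\), \(n=2\)): a walk with \(|S_{p}|=4\) and \(|U_{p}|=1\) gives

\[\widetilde{X}_{p}(\beta)\cong(\mathbb{K}^{\times})^{4+0-2+1}\times\mathbb{K}^{1}=(\mathbb{K}^{\times})^{3}\times\mathbb{K},\]

before taking the further quotients carried out in the rest of this section.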
Now, denote
\[n_{\widetilde{w}}:=\sum_{j=1}^{2g}|U_{\tau_{j}}^{+}|+\sum_{i=1}^{k-1}|U_{ \widetilde{w}_{i}}^{+}\cap N_{i}|=2g|U|+\sum_{i=1}^{k-1}|N_{i}|-\frac{1}{2} \ell(\beta). \tag{2.4.22}\]
By Lemma 2.5 and Corollary 2.8, we have shown the following
**Proposition 2.9**.: _We have a \(B\)-equivariant decomposition into locally closed \(\mathbb{K}\)-subvarieties_
\[M_{B}^{\prime\prime}(\widetilde{w})=\sqcup_{p\in\mathcal{W}^{*}(\beta)}M_{B}^{\prime\prime}(\widetilde{w},p),\quad M_{B}^{\prime\prime}(\widetilde{w},p):=\widetilde{X}_{p}(\beta)\times\prod_{j=1}^{g}(U_{\tau_{2j}^{-1}}^{+}\times U_{\tau_{2j-1}}^{+})\times\prod_{i=1}^{k-1}(U_{\dot{w}_{i}}^{+}\cap N_{i}).\]
_In particular,_
\[M_{B}^{\prime\prime}(\widetilde{w},p)\cong(\mathbb{K}^{\times})^{a(\widetilde {w},p)}\times\mathbb{K}^{b(\widetilde{w},p)}=(\mathbb{K}^{\times})^{|S_{p}|+2 gn-n+1}\times\mathbb{K}^{|U_{p}|+n_{\widetilde{w}}},\]
_and \(a(\widetilde{w},p)+2b(\widetilde{w},p)=4g|U|+2\sum_{i=1}^{k-1}|N_{i}|+2gn-n+1\) is a constant independent of \(\widetilde{w}\), \(p\)._
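For instance, in the situation of Example 2.13 below (\(g=0\), \(n=2\), \(k=4\), so \(|U|=1\) and \(|N_{i}|=1\) for the regular \(C_{i}\)), this constant equals

\[4g|U|+2\sum_{i=1}^{k-1}|N_{i}|+2gn-n+1=0+6+0-2+1=5,\]

matching \((a,b)=(3,1)\) and \((a,b)=(1,2)\) for the two kinds of cells appearing there.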
Recall by (2.3.8) that, \(PB\) acts freely on \(M_{B}^{\prime\prime}(\widetilde{w})\) and \(\pi_{\widetilde{w}}^{\prime\prime}:M_{B}^{\prime\prime}(\widetilde{w})\to \mathcal{M}_{\boldsymbol{\mu}}(\widetilde{w})=M_{B}^{\prime\prime}(\widetilde{ w})/PB\) is a principal \(PB\)-bundle. Define
\[\mathcal{M}_{\boldsymbol{\mu}}(\widetilde{w},p):=\pi_{\widetilde{w}}^{\prime \prime}(M_{B}^{\prime\prime}(\widetilde{w},p))\hookrightarrow\mathcal{M}_{ \boldsymbol{\mu}}(\widetilde{w}) \tag{2.4.23}\]
as a locally closed \(\mathbb{K}\)-subvariety. By base change (see Corollary C.28), the restriction of \(\pi_{\widetilde{w}}^{\prime\prime}\)
\[\pi_{\widetilde{w},p}:M_{B}^{\prime\prime}(\widetilde{w},p)\cong(\mathbb{K}^{ \times})^{a(\widetilde{w},p)}\times\mathbb{K}^{b(\widetilde{w},p)}\to\mathcal{ M}_{\boldsymbol{\mu}}(\widetilde{w},p)=M_{B}^{\prime\prime}(\widetilde{w},p)/PB,\]
is a principal \(PB\)-bundle as well as a geometric quotient.
Our first main result improves A. Mellit's cell decomposition theorem [54, §7]:
**Theorem 2.10**.: _If \((C_{1},\cdots,C_{k})\in T^{k}\) is very generic (Definition 1.1 and Assumption 1.4) of type \(\mu\), and \(\mathcal{M}_{\mu}\) is nonempty, then there is a decomposition into locally closed affine \(\mathbb{K}\)-subvarieties_
\[\mathcal{M}_{\mu}=\sqcup_{\widetilde{w}\in W^{2g}\times\prod_{i=1}^{k-1}W/W(C _{i})}\mathcal{M}_{\mu}(\widetilde{w})=\sqcup_{\widetilde{w}\in W^{2g}\times \prod_{i=1}^{k-1}W/W(C_{i})}\ \sqcup_{p\in\mathcal{W}^{*}(\beta(\widetilde{w}))}\ \mathcal{M}_{\mu}( \widetilde{w},p), \tag{2.4.24}\]
_such that:_
(1) _We have_
\[\mathcal{M}_{\mu}(\widetilde{w},p)\cong(\mathbb{K}^{\times})^{\overline{a}(\widetilde{w},p)}\times\mathcal{A}_{\mu}(\widetilde{w},p),\quad\mathcal{A}_{\mu}(\widetilde{w},p)\times\mathbb{K}^{|U|}\cong\mathbb{K}^{b(\widetilde{w},p)}, \tag{2.4.25}\]
\[\overline{a}(\widetilde{w},p)=a(\widetilde{w},p)-n+1=|S_{p}|+2gn-2n+2,\quad\overline{b}(\widetilde{w},p):=b(\widetilde{w},p)-|U|.\]
_In particular, \(\mathcal{M}_{\mu}(\widetilde{w},p)\) is of dimension \(\overline{a}(\widetilde{w},p)+\overline{b}(\widetilde{w},p)\), and_
\[\overline{a}(\widetilde{w},p)+2\overline{b}(\widetilde{w},p)=d_{\mu}=n^{2}(2 g-2+k)-\sum_{i,j}(\mu_{j}^{i})^{2}+2\]
_is a constant independent of \((\widetilde{w},p)\)._
(2) _There exists a unique_ \((\widetilde{w}_{\max},p_{\max})\) _such that_ \(\mathcal{M}_{\mu}(\widetilde{w}_{\max},p_{\max})\) _is of maximal dimension_ \(d_{\mu}\)_. Equivalently,_ \(\overline{a}(\widetilde{w}_{\max},p_{\max})=d_{\mu}\) _(resp._ \(\overline{b}(\widetilde{w}_{\max},p_{\max})=0\)_). In particular,_ \(\mathcal{M}_{\mu}(\widetilde{w}_{\max},p_{\max})\) _is an open dense algebraic torus:_ \[\mathcal{M}_{\mu}(\widetilde{w}_{\max},p_{\max})\cong(\mathbb{K}^{\times})^{d_{\mu}},\quad\mathcal{A}_{\mu}(\widetilde{w}_{\max},p_{\max})=\{pt\}.\]
Proof.: (0). By (1.2.8) and Proposition 2.9, the decomposition of \(\mathcal{M}_{\mu}\) is clear. Next, we show that \(\mathcal{M}_{\mu}(\widetilde{w})\) is affine. Inspired by (1.2.7), we define a closed (affine) subvariety of \(M^{\prime}_{B}(\widetilde{w})\):
\[\underline{M}^{\prime}_{B}(\widetilde{w}):=M^{\prime}_{B}\cap(\prod_{j=1}^{2g} \!B\tau_{j}B\times\prod_{i=1}^{k-1}\!B\dot{w}_{i}P_{i}\times\{I_{n}\})\subset M ^{\prime}_{B}(\widetilde{w}).\]
Observe that we have mutually inverse \(U\)-equivariant isomorphisms:
\[U\times\underline{M}^{\prime}_{B}(\widetilde{w})\to M^{\prime}_{B}( \widetilde{w}):(u,(A_{1},\cdots,x_{k-1},I_{n}))\mapsto(uA_{1}u^{-1},\cdots,ux_{ k-1},u(u^{C_{k}})^{-1});\] \[M^{\prime}_{B}(\widetilde{w})\to U\times\underline{M}^{\prime}_{B}( \widetilde{w}):(A_{1},\cdots,x_{k-1},u_{k}=\widetilde{u}(\widetilde{u}^{C_{k}} )^{-1})\mapsto(\widetilde{u},(\widetilde{u}^{-1}A_{1}\widetilde{u},\cdots, \widetilde{u}^{-1}x_{k-1},I_{n})), \tag{2.4.26}\]
where \(\widetilde{u}\in U\) is uniquely determined by the equation \(u_{k}=\widetilde{u}(\widetilde{u}^{C_{k}})^{-1}\). Thus, \(\underline{M}^{\prime}_{B}(\widetilde{w})=M^{\prime}_{B}(\widetilde{w})/U\). Recall that \(U\subset PB_{\mathrm{par}}\) is a normal closed subgroup with quotient \(PB_{\mathrm{par}}/U\cong PT_{\mathrm{par}}=(T\times\prod_{i=1}^{k-1}Z(C_{i}))/\mathbb{K}^{\times}\). Then by Proposition C.31 and Proposition C.24, we get an isomorphism
\[\mathcal{M}_{\mu}(\widetilde{w})=M^{\prime}_{B}(\widetilde{w})/PB_{\mathrm{ par}}\cong\underline{M}^{\prime}_{B}(\widetilde{w})/(PB_{\mathrm{par}}/U)= \underline{M}^{\prime}_{B}(\widetilde{w})/PT_{\mathrm{par}},\]
and the quotient map \(\underline{M}^{\prime}_{B}\to\mathcal{M}_{\mu}(\widetilde{w})\cong\underline{M}^{\prime}_{B}(\widetilde{w})/PT_{\mathrm{par}}\) is a principal \(PT_{\mathrm{par}}\)-bundle as well as a geometric quotient, which is unique up to unique isomorphism. Since \(\underline{M}^{\prime}_{B}(\widetilde{w})\) is an affine \(\mathbb{K}\)-variety and \(PT_{\mathrm{par}}\) is reductive, we must have
\[\mathcal{M}_{\mu}(\widetilde{w})\cong\underline{M}^{\prime}_{B}(\widetilde{w} )/PT_{\mathrm{par}}\cong\underline{M}^{\prime}_{B}(\widetilde{w})//PT_{\mathrm{ par}}=\mathrm{Spec}\ \mathcal{O}(\underline{M}^{\prime}_{B}(\widetilde{w}))^{PT_{\mathrm{par}}}.\]
In particular, \(\mathcal{M}_{\mu}(\widetilde{w})\) is affine, as desired. It suffices to prove (1) and (2).
(1). We firstly show that \(\mathcal{M}_{\mu}(\widetilde{w},p)\) is affine; the argument is similar to (0).
Recall by (2.3.1), we have a \(PB_{\text{par}}\)-equivariant isomorphism of \(\mathbb{K}\)-varieties
\[(\prod_{j=1}^{2g}B\tau_{j}B)\times(\prod_{i=1}^{k-1}B\dot{w}_{i}P_{i})\times U\xrightarrow{\cong}U\times\prod_{j=1}^{g}(T^{2}\times(U^{-}_{\tau_{2j-1}}\times U^{-}_{\tau_{2j}}\times U^{-}_{\tau_{2j-1}^{-1}}\times U^{-}_{\tau_{2j}^{-1}})\times(U^{+}_{\tau_{2j}^{-1}}\times U^{+}_{\tau_{2j-1}}))\times\prod_{i=1}^{k-1}(U^{-}_{\dot{w}_{i}}\times U^{-}_{\dot{w}_{i}^{-1}}\times(U^{+}_{\dot{w}_{i}}\cap N_{i})\times Z(C_{i})),\]

as in (2.3.1). Restricting to \(M^{\prime\prime}_{B}(\widetilde{w},p)\) and taking the quotient by \(U\), set \(\mathcal{P}(\widetilde{w},p):=M^{\prime\prime}_{B}(\widetilde{w},p)/U\), so that \(\mathcal{M}_{\mu}(\widetilde{w},p)=\mathcal{P}(\widetilde{w},p)/PT\).
Thirdly, let's prove the first isomorphism in (2.4.25). By the uniqueness of geometric quotient, we obtain an isomorphism
\[\mathcal{P}(\vec{w},p)=((\mathbb{K}^{\times})^{a(\vec{w},p)}\times\mathbb{K}^{b(\vec{w},p)})/U\cong(\mathbb{K}^{\times})^{a(\vec{w},p)}\times(\mathbb{K}^{b(\vec{w},p)}/U)=(\mathbb{K}^{\times})^{a(\vec{w},p)}\times\mathcal{A}_{\mu}(\vec{w},p).\]
Recall by (2.4.4) that \(PT\) acts freely on the torus factor \(\widetilde{T}_{p}(\beta(\vec{w}))\cong(\mathbb{K}^{\times})^{a(\vec{w},p)}=(\mathbb{K}^{\times})^{|S_{p}|+2gn-n+1}\) of \(M_{B}^{\prime\prime}(\vec{w},p)\cong(\mathbb{K}^{\times})^{a(\vec{w},p)}\times\mathbb{K}^{b(\vec{w},p)}\). By Lemma 2.11 below, this is equivalent to an injective algebraic group homomorphism \(PT\cong(\mathbb{K}^{\times})^{n-1}\hookrightarrow(\mathbb{K}^{\times})^{a(\vec{w},p)}\). So, up to a coordinate change, we may assume \((\mathbb{K}^{\times})^{a(\vec{w},p)}\cong(\mathbb{K}^{\times})^{\overline{a}(\vec{w},p)}\times PT\), where \(\overline{a}(\vec{w},p)=a(\vec{w},p)-n+1\), and \(PT\) acts via translation on the second factor. Now, we have
\[\mathcal{M}_{\mu}(\vec{w},p)=\mathcal{P}(\vec{w},p)/PT\cong(\mathbb{K}^{ \times})^{\overline{a}(\vec{w},p)}\times(PT\times\mathcal{A}_{\mu}(\vec{w},p) )/PT\cong(\mathbb{K}^{\times})^{\overline{a}(\vec{w},p)}\times\mathcal{A}_{\mu }(\vec{w},p),\]
as desired. Here, in the last step: By Proposition C.24, the natural map \((PT\times\mathcal{A}_{\mu}(\vec{w},p))/PT\to PT/PT=\text{Spec }\mathbb{K}\) is a fiber bundle with fiber \(\mathcal{A}_{\mu}(\vec{w},p)\), hence \((PT\times\mathcal{A}_{\mu}(\vec{w},p))/PT\cong\mathcal{A}_{\mu}(\vec{w},p)\).
Finally, it remains to compute \(\overline{a}(\vec{w},p)+2\overline{b}(\vec{w},p)\). By Proposition 2.9, we have
\[\overline{a}(\vec{w},p)+2\overline{b}(\vec{w},p)=a(\vec{w},p)-n+1 +2b(\vec{w},p)-2|U|\] \[= (4g|U|+2{\sum\limits_{i=1}^{k-1}}|N_{i}|+2gn-n+1)-n+1-2|U|=(4g-4)|U |+2{\sum\limits_{i=1}^{k}}|N_{i}|+2gn-2n+2\] \[= (2g-2)(n^{2}-n)+{\sum\limits_{i=1}^{k}}(n^{2}-{ \sum\limits_{j}}(\mu_{j}^{i})^{2})+2gn-2n+2=(2g-2+k)n^{2}-{ \sum\limits_{i,j}}(\mu_{j}^{i})^{2}+2=d_{\mu},\]
which is clearly independent of \(\vec{w},p\). This completes the proof of (1).
(2). By Lemma 1.2, \(\mathcal{M}_{\mu}\) (if nonempty) is connected smooth affine of dimension \(d_{\mu}\). So, in the decomposition, there exists a unique \((\vec{w},p)\) such that \(\mathcal{M}_{\mu}(\vec{w},p)\subset\mathcal{M}_{\mu}\) is (open dense) of dimension \(d_{\mu}\). Moreover, for all \((\vec{w},p)\) in the decomposition,
\[\dim\mathcal{M}_{\mu}(\vec{w},p)=\overline{a}(\vec{w},p)+\overline{b}(\vec{w},p)\leq\overline{a}(\vec{w},p)+2\overline{b}(\vec{w},p)=d_{\mu}.\]
Thus, \(\dim\mathcal{M}_{\mu}(\vec{w},p)=d_{\mu}\Leftrightarrow\overline{b}(\vec{w},p)=0\Leftrightarrow\overline{a}(\vec{w},p)=d_{\mu}\). The rest follows immediately.
**Lemma 2.11**.: _Let \(H\) be a connected algebraic group and let \(T^{\prime}=(\mathbb{K}^{\times})_{z_{1},\dots,z_{m}}^{m}\) be an algebraic torus. Then any algebraic \(H\)-action on \(T^{\prime}\) is given by an algebraic group homomorphism_
\[\rho:H\to T^{\prime},\]
_where \(T^{\prime}\) acts on itself by translation. Thus, if \(H\) is unipotent, then the \(H\)-action on \(T^{\prime}\) is trivial._
Proof.: For any \(a\in H\), the \(H\)-action \(\rho\) on \(T^{\prime}\) induces an isomorphism
\[\rho_{a}:T^{\prime}\xrightarrow{\simeq}T^{\prime}:z_{j}\mapsto c_{j}(a)\prod_{i=1}^{m}z_{i}^{a_{ij}},\]
for some \((a_{ij})\in GL(m,\mathbb{Z})\) and \(c_{j}(a)\in\mathbb{K}^{\times}\). As \(\rho_{1}=\text{id}\), by continuity, \(a_{ij}=\delta_{ij}\). So, \(\rho:H\to T^{\prime}:a\mapsto\rho_{a}=\text{Diag}(c_{1}(a),\cdots,c_{m}(a))\) becomes an algebraic group morphism. This gives the desired identification. In addition, if \(H\) is unipotent, then any \(a\in H\) is unipotent, hence so is \(\rho_{a}\in T^{\prime}\), which is also semisimple. It follows that \(\rho_{a}=\text{id}\), \(\forall a\in H\). We're done.
**Question**: If \(\mu\) is only generic, does \(\mathcal{M}_{\mu}\) contain an open algebraic torus? (Conjecture 4.20.(2)).
**Remark 2.12**.: If \((\widetilde{w},p)\neq(\widetilde{w}_{m},p_{m})\) in Theorem 2.10, then \(\mathcal{A}_{\mu}(\widetilde{w},p)\) is smooth affine of dimension \(\widetilde{b}(\widetilde{w},p)>0\), and \(\mathcal{A}_{\mu}(\widetilde{w},p)\) is _stably isomorphic_ to \(\mathbb{A}_{\mathbb{K}}^{\widetilde{b}(\widetilde{w},p)}\): \(\mathcal{A}_{\mu}(\widetilde{w},p)\times\mathbb{A}_{\mathbb{K}}^{\frac{n^{2} -n}{2}}\cong\mathbb{A}_{\mathbb{K}}^{\widetilde{b}(\widetilde{w},p)+\frac{n^{ 2}-n}{2}}\). This is closely related to the famous **Zariski cancellation problem** (**ZCP**):
If \(Y\) is an affine \(\mathbb{K}\)-variety of dimension \(d\) such that \(Y\times\mathbb{A}^{1}\cong\mathbb{A}^{d+1}\), is \(Y\) always isomorphic to \(\mathbb{A}^{d}\)?
The answer is positive if \(d=1,2\)[2, 62, 68, 25], and negative for \(d\geq 3\) in positive characteristic [31, 32]; For \(d\geq 3\) and char \(\mathbb{K}=0\), the problem is still open.
We tend to believe that \(\mathcal{A}_{\mu}(\widetilde{w},p)\) is not isomorphic to \(\mathbb{A}_{\mathbb{K}}^{\widetilde{b}(\widetilde{w},p)}\) in general, thus providing a systematic way of constructing counterexamples to **ZCP** for \(d\geq 3\) and char \(\mathbb{K}=0\).
### Examples
We illustrate Theorem 2.10 by two examples. See Section 4.4 for more aspects.
**Example 2.13** (\((g,k,n,\mu)=(0,4,2,((1^{2}),(1^{2}),(1^{2}),(1^{2})))\): Fricke-Klein cubic).: Let \(\Sigma_{0,4}:=(\Sigma_{0},\sigma=\{q_{1},q_{2},q_{3},q_{4}\})\) be a four-punctured two-sphere, \(G=GL_{2}(\mathbb{K})\). So, \(T\cong(\mathbb{K}^{\times})^{2}\), and \(W=S_{2}=\{1,s_{1}=(1\ 2)\}\). Let \(\mu=((1^{2}),(1^{2}),(1^{2}),(1^{2}))\), so \(\mu\) is very generic (Definition 1.1, Assumption 1.4) and
\[C_{i}=\operatorname{Diag}(a_{i,1},a_{i,2})\in T,a_{i,1}\neq a_{i,2},\quad 1 \leq i\leq 4;\quad\prod_{i,j}a_{i,j}=1,\quad\prod_{i=1}^{4}a_{i,\psi_{i}(1)} \neq 1,\forall\psi_{i}\in W. \tag{2.5.1}\]
Clearly, \(\dim\mathcal{M}_{\mu}=d_{\mu}=n^{2}(2g-2+k)-\sum_{i=1}^{k}\sum_{j}(\mu_{j}^{i })^{2}+2=2\). The example goes back to [23].
1. We firstly compute the cell decomposition of \(\mathcal{M}_{\mu}\) (Theorem 2.10): \[\mathcal{M}_{\mu}=\sqcup_{(\widetilde{w},p)\in W^{*}}\mathcal{M }_{\mu}(\widetilde{w},p),\quad\mathcal{W}^{*}:=\{(\widetilde{w},p):\widetilde{ w}\in W^{k-1}=W^{3},p\in\mathcal{W}^{*}(\beta(\widetilde{w}))\},\] \[\mathcal{M}_{\mu}(\widetilde{w},p)\cong(\mathbb{K}^{\times})^{ \overline{a}(\widetilde{w},p)}\times\mathcal{A}_{\mu}(\widetilde{w},p),\ \ \mathcal{A}_{\mu}(\widetilde{w},p)\times\mathbb{A}^{|U|}\cong\mathbb{A}^{ \widetilde{b}(\widetilde{w},p)+|U|},\] where \(|U|=1\), and \[\overline{a}(\widetilde{w},p)=|S_{p}|+2gn-2n+2=|S_{p}|-2\geq 0,\ \ \overline{b}( \widetilde{w},p)=\frac{1}{2}(d_{\mu}-\overline{a}(\widetilde{w},p))=\frac{1}{ 2}(4-|S_{p}|)\geq 0.\] Thus, \(|S_{p}|=2\) or \(4\). Accordingly, \((\overline{a}(\widetilde{w},p),\overline{b}(\widetilde{w},p))=(0,1)\) or \((2,0)\). By the affirmative answer [2, 25, 62, 68, 2] to the Zariski cancellation problem in \(\dim\leq 2\), we have \[\mathcal{A}_{\mu}(\widetilde{w},p)\cong\mathbb{A}^{\widetilde{b}(\widetilde{w },p)}\ \ \Rightarrow\ \mathcal{M}_{\mu}(\widetilde{w},p)\cong(\mathbb{K}^{\times})^{ \overline{a}(\widetilde{w},p)}\times\mathbb{K}^{\widetilde{b}(\widetilde{w},p) }=\mathbb{K}\ \text{or}\ (\mathbb{K}^{\times})^{2}.\] We would like to compute \(\mathcal{W}^{*}\). For each \(\widetilde{w}=(\dot{w}_{1},\dot{w}_{2},\dot{w}_{3})=(w_{1},w_{2},w_{3})\in W^{k- 1}=W^{3}=S_{2}^{3}\), denote \(\beta:=\beta(\widetilde{w})\) and \(\ell:=\ell(\beta)\). Recall that, for any \(p\in\mathcal{W}(\beta)\subset W^{\ell+1}\), denote
\[J_{p}=\{\underline{\mathrm{s}}_{i_{m}}=p_{m-1}^{-1}\mathrm{s}_{i_{m}}p_{m-1},m\in S_{p}\}\subset W,\]
then \(p\in\mathcal{W}^{*}(\beta)\) if and only if the group \(\langle J_{p}\rangle\) acts transitively on \([2]=\{1,2\}\).
**Note**: In this case, it means that \(\langle J_{p}\rangle=W\), equivalently, \(S_{p}\neq\varnothing\). That is,
\[\mathcal{W}^{*}(\beta)=\{p\in\mathcal{W}(\beta):S_{p}\neq\varnothing\}\subset \mathcal{W}(\beta).\]
Recall that, \(\beta=\beta(\vec{w})=[w_{1}]\circ[w_{1}^{-1}]\circ[w_{2}]\circ[w_{2}^{-1}]\circ[w_{ 3}]\circ[w_{3}^{-1}]\), so \(\ell=\ell(\beta)=2\ell(w_{1})+2\ell(w_{2})+2\ell(w_{3})\). If \(\exists p\in\mathcal{W}(\beta)\) such that \(S_{p}\neq\varnothing\), then \(\ell(\beta)>0\), so \(\ell(\beta)\geq 2\), and \(w_{i}=\mathrm{s}_{1}=(1\ 2)\) for at least one \(i\in\{1,2,3\}\). Recall that \(p\) is of the form \((p_{\ell}=\mathsf{id},\ldots,p_{1},p_{0}=\mathsf{id})\in W^{\ell+1}\). By Definition 1.15 of a walk, there's no \(i\) such that \((p_{i+1},p_{i})=(\mathsf{id},\mathsf{id})\) (if we could go up, then we must go up). In particular, we must have \(p_{1}=\mathrm{s}_{1},p_{\ell-1}=\mathrm{s}_{1}\). As \(S_{p}\neq\varnothing\), we must have \(\ell\geq 4\). This means that \(w_{i}=\mathrm{s}_{1}=(1\ 2)\) for at least two \(i\in\{1,2,3\}\). We're left with \(4\) cases:
1. \(\vec{w}=(\mathrm{s}_{1},\mathrm{s}_{1},\mathsf{id})\in W^{3}\). Then, \(\beta:=\beta(\vec{w})=\sigma_{1}^{4}\in\mathrm{FBr}_{2}^{+}\) and \(\ell=\ell(\beta)=4\). Observe that \[\mathcal{W}(\beta)=\{(\mathsf{id},\mathrm{s}_{1},\mathsf{id},\mathrm{s}_{1}, \mathsf{id}),p^{1}=(\mathsf{id},\mathrm{s}_{1},\mathrm{s}_{1},\mathsf{s}_{1}, \mathsf{id})\}\subset W^{\ell+1}=W^{5}.\]
Then \(\mathcal{W}^{*}(\beta)=\{p^{1}\}\), with \(S_{p^{1}}=\{2,3\}\subset[\ell]=[4]=\{1,2,3,4\}\), \(U_{p^{1}}=\{1\}\subset[4]\), and \(D_{p^{1}}=\{4\}\subset[4]\). Denote \(\vec{w}^{1}:=(\mathrm{s}_{1},\mathrm{s}_{1},\mathsf{id})\). By the previous computation, we have
\[\overline{a}(\vec{w}^{1},p^{1})=|S_{p^{1}}|-2=0,\ \ \overline{b}(\vec{w}^{1},p^{1})=\frac{1}{2}(4-|S_{p^{1}}|)=1,\ \ \mathcal{M}_{\mu}(\vec{w}^{1},p^{1})\cong\mathbb{K}.\]
2. Similarly, for \(\vec{w}^{2}=(\mathrm{s}_{1},\mathsf{id},\mathrm{s}_{1})\in W^{3}\) (resp. \(\vec{w}^{3}=(\mathsf{id},\mathrm{s}_{1},\mathrm{s}_{1})\in W^{3}\)), we have \(\mathcal{W}^{*}(\beta(\vec{w}^{2}))=\{p^{2}=(\mathsf{id},\mathrm{s}_{1}, \mathrm{s}_{1},\mathrm{s}_{1},\mathsf{id})\}\) (resp. \(\mathcal{W}^{*}(\beta(\vec{w}^{3}))=\{p^{3}=(\mathsf{id},\mathrm{s}_{1}, \mathrm{s}_{1},\mathrm{s}_{1},\mathsf{id})\}\)), and \[\mathcal{M}_{\mu}(\vec{w}^{2},p^{2})\cong\mathbb{K},\quad\mathcal{M}_{\mu}( \vec{w}^{3},p^{3})\cong\mathbb{K}.\]
3. \(\vec{w}=(\mathrm{s}_{1},\mathrm{s}_{1},\mathrm{s}_{1})=:\vec{w}^{4}\in W^{3}\). Then, \(\beta:=\beta(\vec{w}^{4})=\sigma_{1}^{6}\in\mathrm{FBr}_{2}^{+}\) and \(\ell(\beta)=6\). Observe that \[\mathcal{W}(\beta) = \{(\mathsf{id},\mathrm{s}_{1},\mathsf{id},\mathrm{s}_{1},\mathsf{ id},\mathrm{s}_{1},\mathsf{id}),p^{4}=(\mathsf{id},\mathrm{s}_{1},\mathrm{s}_{1}, \mathsf{s}_{1},\mathsf{id},\mathrm{s}_{1},\mathsf{id}),p^{5}=(\mathsf{id}, \mathrm{s}_{1},\mathrm{s}_{1},\mathsf{id},\mathrm{s}_{1},\mathsf{id}),\] \[p^{6}=(\mathsf{id},\mathrm{s}_{1},\mathsf{id},\mathrm{s}_{1}, \mathrm{s}_{1},\mathrm{s}_{1},\mathsf{id}),p^{7}=(\mathsf{id},\mathrm{s}_{1}, \mathrm{s}_{1},\mathrm{s}_{1},\mathrm{s}_{1},\mathsf{id})\}\subset W^{\ell+1}=W^ {7}.\]
Thus, \(\mathcal{W}^{*}(\beta)=\{p\in\mathcal{W}(\beta):S_{p}\neq\varnothing\}=\{p^{j} :4\leq j\leq 7\}\). Denote \(\vec{w}^{j}:=(\mathrm{s}_{1},\mathrm{s}_{1},\mathrm{s}_{1}),4\leq j\leq 7\). So,
\[\mathcal{M}_{\mu}(\vec{w}^{j},p^{j})\cong\mathbb{K},\forall 4\leq j\leq 6;\quad \mathcal{M}_{\mu}(\vec{w}^{7},p^{7})\cong(\mathbb{K}^{\times})^{2}.\]
In summary, \(\mathcal{W}^{*}=\{(\vec{w}^{j},p^{j}),1\leq j\leq 7\}\), and the cell decomposition of \(\mathcal{M}_{\mu}\) reads:
\[\mathcal{M}_{\mu}=\sqcup_{j=1}^{7}\mathcal{M}_{\mu}(\vec{w}^{j},p^{j})=\mathbb{K}^{\sqcup 6}\sqcup(\mathbb{K}^{\times})^{2}.\]
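As a quick illustration of how the strata combine (assuming everything is defined over a finite field \(\mathbb{F}_{q}\)): the six copies of \(\mathbb{K}\) and the torus \((\mathbb{K}^{\times})^{2}\) contribute

\[6q+(q-1)^{2}=q^{2}+4q+1\]

points, a polynomial in \(q\) of degree \(2=d_{\mu}\), in line with \(\dim\mathcal{M}_{\mu}=2\).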
The order on indices is _admissible_ (Definition 3.17). Our example matches with [54, §1.4].
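The walk enumerations in this example are small enough to check by hand, but they can also be generated mechanically. The following is a minimal sketch (our own illustration, under the reading of Definition 1.15 used above: walks move by left multiplication \(p_{m}=\mathrm{s}_{i_{m}}p_{m-1}\), are forced up whenever the length increases, and otherwise either go down or stay).

```python
def compose(p, q):
    # (p * q)(x) = p(q(x)); permutations of {0, ..., n-1} as tuples of images
    return tuple(p[q[x]] for x in range(len(p)))

def length(p):
    # Coxeter length = number of inversions
    n = len(p)
    return sum(1 for a in range(n) for b in range(a + 1, n) if p[a] > p[b])

def walks(word, n):
    """Enumerate walks along beta = sigma_{i_l}...sigma_{i_1}, where
    word = [i_1, ..., i_l] lists the letters from position 1 upward.
    A walk starts and ends at the identity; at position m it must go up
    when the length increases, and may either go down or stay otherwise."""
    e = tuple(range(n))
    out = []

    def step(m, p, path, U, S, D):
        if m == len(word):
            if p == e:
                out.append((path, U, S, D))
            return
        i = word[m]
        s = e[:i - 1] + (i, i - 1) + e[i + 1:]   # simple reflection s_i (i is 1-based)
        q = compose(s, p)                        # left multiplication s_{i_m} * p_{m-1}
        if length(q) > length(p):                # forced to go up
            step(m + 1, q, path + [q], U | {m + 1}, S, D)
        else:                                    # may go down or stay
            step(m + 1, q, path + [q], U, S, D | {m + 1})
            step(m + 1, p, path + [p], U, S | {m + 1}, D)

    step(0, e, [e], set(), set(), set())
    return out

# beta = sigma_1^4 over S_2: prints the two walks of case 1 (listed p_0, ..., p_4)
for path, U, S, D in walks([1, 1, 1, 1], 2):
    print(path, "U =", sorted(U), "S =", sorted(S), "D =", sorted(D))
```

On the word \([1,1,1,1]\) (i.e., \(\beta=\sigma_{1}^{4}\)) over \(S_{2}\) this returns exactly the two walks of case 1, with \(U_{p^{1}}=\{1\}\), \(S_{p^{1}}=\{2,3\}\), \(D_{p^{1}}=\{4\}\); on \([1,1,1,1,1,1]\) it returns the five walks of case 3.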
2. Let's give a concrete description of the variety \(\mathcal{M}_{\mu}\). The defining equation of \(M_{B}\) reads \[x_{1}C_{1}x_{1}^{-1}x_{2}C_{2}x_{2}^{-1}x_{3}C_{3}x_{3}^{-1}x_{4}C_{4}x_{4}^{-1}=\mathsf{id},\quad x_{i}\in G,\] with an action of \(G_{\mathrm{par}}=G\times T^{4}\) via conjugation: \[h\cdot(x_{1},x_{2},x_{3},x_{4})=(h_{0}x_{1}h_{1}^{-1},\,h_{0}x_{2}h_{2}^{-1},\,h_{0}x_{3}h_{3}^{-1},\,h_{0}x_{4}h_{4}^{-1}),\quad h=(h_{0},h_{1},h_{2},h_{3},h_{4})\in G\times T^{4}.\] Denote \[\underline{M}^{\prime}_{B}:=M_{B}\cap(G^{3}\times\{I_{2}\})\subset M_{B},\] so the defining equation of \(\underline{M}^{\prime}_{B}\) becomes \[x_{1}C_{1}x_{1}^{-1}x_{2}C_{2}x_{2}^{-1}x_{3}C_{3}x_{3}^{-1}C_{4}=\mathsf{id},\quad x_{1},x_{2},x_{3}\in G.\]
Now by Proposition C.24 and Proposition C.29, we have
\[\mathcal{M}_{\mu}=M_{B}/PG_{\mathrm{par}}\cong\underline{M}^{\prime}_{B}/PT_{ \mathrm{par}},\quad PT_{\mathrm{par}}=T^{4}/\mathbb{K}^{\times}\hookrightarrow PG _{\mathrm{par}}=(G\times T^{4})/\mathbb{K}^{\times},\]
where \(PT_{\mathrm{par}}\hookrightarrow PG_{\mathrm{par}}:(h_{0},h_{1},h_{2},h_{3}) \mapsto(h_{0},h_{1},h_{2},h_{3},h_{0})\) acts on \(\underline{M}^{\prime}_{B}\) by:
\[(h_{0},h_{1},h_{2},h_{3})\cdot(x_{1},x_{2},x_{3}):=(h_{0}x_{1}h_{1}^{-1},h_{0} x_{2}h_{2}^{-1},h_{0}x_{3}h_{3}^{-1}).\]
Denote \(X^{i}:=x_{i}C_{i}x_{i}^{-1}\in G\cdot C_{i}\cong G/T\). Clearly, \(G\cdot C_{i}\) is affine. Define
\[\underline{M}^{\prime}_{B}(\vec{X}):=\{(X^{i})_{i=1}^{3}\in\prod_{i=1}^{3}G\cdot C_{i}:X^{1}X^{2}X^{3}C_{4}=\mathrm{id}\},\]
equipped with the action of \(T\ni h_{0}\) via conjugation. Then we obtain a cartesian diagram
By base change, \(\underline{M}^{\prime}_{B}\to\underline{M}^{\prime}_{B}(\vec{X})\) is a principal \(T^{3}\)-bundle. As \(T^{3}\hookrightarrow PT_{\mathrm{par}}:(h_{1},h_{2},h_{3})\mapsto[\mathrm{id}, h_{1},h_{2},h_{3}]\) is a normal subgroup with quotient \(PT\ni[h_{0}]\), by Proposition C.31, we have
\[\mathcal{M}_{\mu}\cong\underline{M}^{\prime}_{B}/PT_{\mathrm{par}}\cong( \underline{M}^{\prime}_{B}/T^{3})/PT\cong\underline{M}^{\prime}_{B}(\vec{X})/ PT.\]
Clearly, the conjugate action of \(h_{0}=\mathrm{Diag}(h_{0,1},h_{0,2})\in T\) on \(X^{i}\) is:
\[h_{0}\cdot X^{i}=\left(\begin{array}{cc}X^{i}_{11}&h_{0,1}X^{i}_{12}h_{0,2}^ {-1}\\ h_{0,2}X^{i}_{21}h_{0,1}^{-1}&X^{i}_{22}\end{array}\right).\]
Denote
\[y_{3}:=\mathrm{Tr}(X^{1}X^{2}=(X^{3}C_{4})^{-1}),y_{1}:=\mathrm{Tr}(X^{2}X^{3} =(C_{4}X^{1})^{-1}),y_{2}:=\mathrm{Tr}(C_{4}^{-1}X^{3}C_{4}X^{1}=C_{4}^{-1}(X^ {2})^{-1}).\]
Then a direct computation shows that \(\mathcal{M}_{\mu}\) is a (smooth) affine cubic surface defined by
\[\sum\limits_{i=1}^{3}\det C_{i}y_{i}^{2}+y_{1}y_{2}y_{3}-\det C_{4 }^{-1}\sum\limits_{\text{cyclic rotation on $1,2,3$}}(\mathrm{Tr}C_{4}\mathrm{Tr}C_{1}+\mathrm{Tr}C_{2}^{-1}\mathrm{Tr}C _{3}^{-1})y_{1}\] \[+\det C_{4}^{-1}(\sum\limits_{i=1}^{4}\mathrm{Tr}C_{i}\mathrm{Tr}C _{i}^{-1}+\mathrm{Tr}C_{1}\mathrm{Tr}C_{2}\mathrm{Tr}C_{3}\mathrm{Tr}C_{4}-4)=0. \tag{2.5.2}\]
**Note**: when \(\det C_{i}=1,\forall 1\leq i\leq 4\), this recovers the Fricke-Klein cubic [23], [28, §5].
Next, we consider a rank 3 example.
**Example 2.14** (\((g,k,n,\mu)=(0,3,3,((1^{3}),(1^{3}),(1^{3})))\)).: Let \(\Sigma_{0,3}:=(\Sigma_{0},\sigma=\{q_{1},q_{2},q_{3}\})\) be the pair of pants, \(G=GL_{3}(\mathbb{K})\). So, \(W=S_{3}\). Let \(\mu=((1^{3}),(1^{3}),(1^{3}))\). So, \(\mu\) is very generic and
\[C_{i}=\mathrm{Diag}(a_{i,1},a_{i,2},a_{i,3})\in T,\ \ a_{i,1},a_{i,2},a_{i,3}\ \text{pairwise distinct};\ \ \prod_{i,j}a_{i,j}=1,\ \ \prod_{i=1}^{3}a_{i,\,\psi_{i}(1)}\neq 1,\ \forall\psi_{i}\in W.\]
Clearly, \(\dim\mathcal{M}_{\mu}=d_{\mu}=n^{2}(2g-2+k)-\sum_{i=1}^{k}\sum_{j}(\mu_{j}^{i })^{2}+2=9-3\times 3+2=2\).
1. We firstly compute the cell decomposition of \(\mathcal{M}_{\mu}\) (Theorem 2.10): \[\mathcal{M}_{\mu}=\sqcup_{(\widetilde{w},p)\in\mathcal{W}^{*}} \mathcal{M}_{\mu}(\widetilde{w},p),\quad\mathcal{W}^{*}:=\{(\widetilde{w},p): \widetilde{w}\in W^{k-1}=W^{2},p\in\mathcal{W}^{*}(\beta(\widetilde{w}))\},\] \[\mathcal{M}_{\mu}(\widetilde{w},p)\cong(\mathbb{K}^{\times})^{ \overline{a}(\widetilde{w},p)}\times\mathcal{A}_{\mu}(\widetilde{w},p),\ \ \mathcal{A}_{\mu}(\widetilde{w},p)\times\mathbb{A}^{|U|}\cong\mathbb{A}^{ \widetilde{\overline{b}}(\widetilde{w},p)+|U|},\] where \(|U|=3\), and \[\overline{a}(\widetilde{w},p)=|S_{p}|+2gn-2n+2=|S_{p}|-4\geq 0,\ \ \overline{b}(\widetilde{w},p)=\frac{1}{2}(d_{\mu}-\overline{a}(\widetilde{w},p ))=\frac{1}{2}(6-|S_{p}|)\geq 0.\] Thus, \(|S_{p}|=4\) or \(6\). Accordingly, \((\overline{a}(\widetilde{w},p),\overline{b}(\widetilde{w},p))=(0,1)\) or \((2,0)\). As in Example 2.13, \[\mathcal{A}_{\mu}(\widetilde{w},p)\cong\mathbb{A}^{\widetilde{\overline{b}}( \widetilde{w},p)}\ \ \Rightarrow\ \ \mathcal{M}_{\mu}(\widetilde{w},p)\cong(\mathbb{K}^{\times})^{\overline{a}( \widetilde{w},p)}\times\mathbb{K}^{\widetilde{\overline{b}}(\widetilde{w},p)}= \mathbb{K}\ \text{or}\ (\mathbb{K}^{\times})^{2}.\] We would like to compute \(\mathcal{W}^{*}\). For each \(\widetilde{w}=(\dot{w}_{1},\dot{w}_{2})=(w_{1},w_{2})\in W^{k-1}=W^{2}=S_{3}^ {2}\), denote \(\beta:=\beta(\widetilde{w})\) and \(\ell:=\ell(\beta)\). Recall that, for any \(p\in\mathcal{W}(\beta)\subset W^{\ell+1}\), denote \[J_{p}=\{\underline{s}_{i_{m}}=p_{m-1}^{-1}s_{i_{m}}p_{m-1},m\in S_{p}\}\subset W,\] then \(p\in\mathcal{W}^{*}(\beta)\) if and only if the group \(\langle J_{p}\rangle\) acts transitively on \([3]=\{1,2,3\}\). Recall that, \(\beta=\beta(\widetilde{w})=[w_{1}]\circ[w_{1}^{-1}]\circ[w_{2}]\circ[w_{2}^{- 1}]\in\mathrm{FBr}_{3}^{+}\), so \(\ell=\ell(\beta)=2\ell(w_{1})+2\ell(w_{2})\). For any \(p\in\mathcal{W}(\beta)\), recall that \(p\) is of the form \((p_{\ell}=\mathsf{id},\cdots,p_{1},p_{0}=\mathsf{id})\in W^{\ell+1}\). By Definition 1.15 of a walk, we necessarily have that, the positions in \([w_{2}^{-1}]\) are contained in \(U_{p}\), and the positions in \([w_{1}]\) are contained in \(D_{p}\). As \(|U_{p}|=|D_{p}|\), we then have \(|U_{p}|=|D_{p}|\geq\max\{\ell(w_{1}),\ell(w_{2})\}\). It follows that
\[4\leq|S_{p}|=\ell-2|U_{p}|\leq 2\ell(w_{1})+2\ell(w_{2})-2\max\{\ell(w_{1}),\ell(w_{2})\}=2\min\{\ell(w_{1}),\ell(w_{2})\}.\]
So, \(\ell(w_{1}),\ell(w_{2})\geq 2\). If \(\ell(w_{1})=\ell(w_{2})=2\), then \(S_{p}\subset[\ell]\) occupies all positions of \([w_{1}^{-1}]\circ[w_{2}]\). So, \(\ell(s_{1}w_{2}^{-1})<\ell(w_{2}^{-1})\) and \(\ell(s_{2}w_{2}^{-1})<\ell(w_{2}^{-1})\), a contradiction. Thus, \(\max\{\ell(w_{1}),\ell(w_{2})\}=3\).
To simplify the computation a little bit, we observe the following **symmetries**: for any \(\beta=\sigma_{i_{\ell}}\cdots\sigma_{i_{1}}\), consider \(\beta^{\mathrm{op}}:=\sigma_{i_{1}}\cdots\sigma_{i_{\ell}}\) and \(\beta^{\prime}:=\sigma_{n-i_{\ell}}\cdots\sigma_{n-i_{1}}\), then we obtain bijections
1. \(\mathcal{W}(\beta)\stackrel{{\simeq}}{{\to}}\mathcal{W}(\beta^{\mathrm{op}}):p\mapsto p^{\mathrm{op}},\ \ U_{p^{\mathrm{op}}}:=\ell+1-D_{p},\ \ S_{p^{\mathrm{op}}}:=\ell+1-S_{p}.\)
2. \(\mathcal{W}(\beta)\stackrel{{\simeq}}{{\to}}\mathcal{W}(\beta^{ \prime}):p\mapsto p^{\prime},\ \ U_{p^{\prime}}:=U_{p},\ \ D_{p^{\prime}}:=D_{p},\ \ S_{p^{\prime}}:=S_{p}.\)
Now, we divide the computation into 3 cases:
1. If \(\ell(w_{1})=3,\ell(w_{2})=2\), then \(w_{1}=w_{1}^{-1}=s_{1}s_{2}s_{1}=s_{2}s_{1}s_{2}\).
2. If \(w_{2}=s_{1}s_{2}\), we may write \(\beta=\sigma_{1}\sigma_{2}\sigma_{1}\sigma_{1}\sigma_{2}\sigma_{1}\sigma_{1} \sigma_{2}\sigma_{2}\sigma_{1}\in\mathrm{FBr}_{3}^{+}\). By a direct check, we have \[\mathcal{W}(\beta) = \{p^{1}=(\mathsf{id},s_{1},s_{2}s_{1},s_{1}s_{2}s_{1},s_{1}s_{2}s_ {1},s_{1}s_{2}s_{1},s_{1}s_{2}s_{1},s_{2}s_{1},s_{2}s_{1},s_{1},\mathsf{id}),\] \[(\mathsf{id},s_{1},s_{2}s_{1},s_{1}s_{2}s_{1},s_{2}s_{1},s_{2}s_{1},s _{1}s_{2}s_{1},s_{2}s_{1},s_{1},\mathsf{id}),\] \[(\mathsf{id},s_{1},s_{2}s_{1},s_{1}s_{2}s_{1},s_{2}s_{1},s_{1},s_{1},s _{1},s_{1},s_{2}s_{1},\mathsf{id}),\ \ (\mathsf{id},s_{1},s_{2}s_{1},s_{1}s_{2}s_{1},s_{1}, \mathsf{id},s_{1},s_{2}s_{1},s_{1},\mathsf{id})\}.\] Clearly, \(p^{1}\in\mathcal{W}(\beta)\) is the unique walk such that \(|S_{p^{1}}|\geq 4\). Indeed, \(S_{p^{1}}=\{3,5,6,7\}\), and \(p^{1}\in\mathcal{W}^{*}(\beta)\). Denote \(\widetilde{w}^{1}:=(s_{1}s_{2}s_{1},s_{1}s_{2})\), then \((\widetilde{w}^{1},p^{1})\in\mathcal{W}^{*}\), and \(\mathcal{M}_{\mu}(\widetilde{w}^{1},p^{1})\cong\mathbb{K}\).
(ii) If \(w_{2}=\mathrm{s}_{2}\mathrm{s}_{1}\), we may write \(\beta=\sigma_{2}\sigma_{1}\sigma_{2}\sigma_{2}\sigma_{1}\sigma_{2}\sigma_{2}\sigma_{1}\sigma_{1}\sigma_{2}\in\mathrm{FBr}^{+}_{3}\). Then by the \(\mathbf{h}\)-symmetry, \[\mathcal{W}(\beta)=\{p^{2}=(\mathsf{id},\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2}\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2}\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2}\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2}\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2},\mathsf{id}),\] \[(\mathsf{id},\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2}\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2}\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2},\mathsf{id}),\] \[(\mathsf{id},\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2}\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2},\mathrm{s}_{2},\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2},\mathsf{id}),\ \ (\mathsf{id},\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2}\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2},\mathsf{id},\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2},\mathsf{id})\}.\]
Clearly, \(p^{2}\in\mathcal{W}(\beta)\) is the unique walk such that \(|S_{p^{2}}|\geq 4\). Indeed, \(S_{p^{2}}=\{3,5,6,7\}\), and \(p^{2}\in\mathcal{W}^{*}(\beta)\). Denote \(\widetilde{w}^{2}:=(\mathrm{s}_{2}\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2}\mathrm{s}_{1})\), then \((\widetilde{w}^{2},p^{2})\in\mathcal{W}^{*}\), and \(\mathcal{M}_{\boldsymbol{\mu}}(\widetilde{w}^{2},p^{2})\cong\mathbb{K}\).
2. If \(\ell(w_{1})=2,\ell(w_{2})=3\), then \(w_{2}=w_{2}^{-1}=\mathrm{s}_{1}\mathrm{s}_{2}\mathrm{s}_{1}=\mathrm{s}_{2}\mathrm{s}_{1}\mathrm{s}_{2}\).
(i) If \(w_{1}=\mathrm{s}_{1}\mathrm{s}_{2}\), we may write \(\beta=\sigma_{1}\sigma_{2}\sigma_{2}\sigma_{1}\sigma_{1}\sigma_{2}\sigma_{1}\sigma_{1}\sigma_{2}\sigma_{1}\in\mathrm{FBr}^{+}_{3}\). By the \(\mathbf{v}\)-symmetry, we get \[\mathcal{W}(\beta)=\{p^{3}=(\mathsf{id},\mathrm{s}_{1},\mathrm{s}_{2}\mathrm{s}_{1},\mathrm{s}_{2}\mathrm{s}_{1},\mathrm{s}_{1}\mathrm{s}_{2}\mathrm{s}_{1},\mathrm{s}_{1}\mathrm{s}_{2}\mathrm{s}_{1},\mathrm{s}_{1}\mathrm{s}_{2}\mathrm{s}_{1},\mathrm{s}_{1}\mathrm{s}_{2}\mathrm{s}_{1},\mathrm{s}_{2}\mathrm{s}_{1},\mathrm{s}_{1},\mathsf{id}),\] \[(\mathsf{id},\mathrm{s}_{1},\mathrm{s}_{2}\mathrm{s}_{1},\mathrm{s}_{2}\mathrm{s}_{1},\mathrm{s}_{1}\mathrm{s}_{2}\mathrm{s}_{1},\mathrm{s}_{2}\mathrm{s}_{1},\mathrm{s}_{2}\mathrm{s}_{1},\mathrm{s}_{1}\mathrm{s}_{2}\mathrm{s}_{1},\mathrm{s}_{2}\mathrm{s}_{1},\mathrm{s}_{1},\mathsf{id}),\] \[(\mathsf{id},\mathrm{s}_{1},\mathrm{s}_{2}\mathrm{s}_{1},\mathrm{s}_{1},\mathrm{s}_{1},\mathrm{s}_{1},\mathrm{s}_{2}\mathrm{s}_{1},\mathrm{s}_{1}\mathrm{s}_{2}\mathrm{s}_{1},\mathrm{s}_{2}\mathrm{s}_{1},\mathrm{s}_{1},\mathsf{id}),\ \ (\mathsf{id},\mathrm{s}_{1},\mathrm{s}_{2}\mathrm{s}_{1},\mathrm{s}_{1},\mathsf{id},\mathrm{s}_{1},\mathrm{s}_{2}\mathrm{s}_{1},\mathrm{s}_{1}\mathrm{s}_{2}\mathrm{s}_{1},\mathrm{s}_{2}\mathrm{s}_{1},\mathrm{s}_{1},\mathsf{id})\}.\]
Clearly, \(p^{3}\in\mathcal{W}(\beta)\) is the unique walk such that \(|S_{p^{3}}|\geq 4\). Indeed, \(S_{p^{3}}=\{4,5,6,8\}\), and \(p^{3}\in\mathcal{W}^{*}(\beta)\). Denote \(\widetilde{w}^{3}:=(\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2}\mathrm{s}_{1})\), then \((\widetilde{w}^{3},p^{3})\in\mathcal{W}^{*}\), and \(\mathcal{M}_{\boldsymbol{\mu}}(\widetilde{w}^{3},p^{3})\cong\mathbb{K}\).
(ii) If \(w_{1}=\mathrm{s}_{2}\mathrm{s}_{1}\), we may write \(\beta=\sigma_{2}\sigma_{1}\sigma_{1}\sigma_{2}\sigma_{2}\sigma_{1}\sigma_{2}\sigma_{2}\sigma_{1}\sigma_{2}\in\mathrm{FBr}^{+}_{3}\). Then by the \(\mathbf{h}\)-symmetry, \[\mathcal{W}(\beta)=\{p^{4}=(\mathsf{id},\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2}\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2}\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2}\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2}\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2},\mathsf{id}),\] \[(\mathsf{id},\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2}\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2}\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2},\mathsf{id}),\] \[(\mathsf{id},\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2},\mathrm{s}_{2},\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2}\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2},\mathsf{id}),\ \ (\mathsf{id},\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2},\mathsf{id},\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2}\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{1}\mathrm{s}_{2},\mathrm{s}_{2},\mathsf{id})\}.\]
Clearly, \(p^{4}\in\mathcal{W}(\beta)\) is the unique walk such that \(|S_{p^{4}}|\geq 4\). Indeed, \(S_{p^{4}}=\{4,5,6,8\}\), and \(p^{4}\in\mathcal{W}^{*}(\beta)\). Denote \(\widetilde{w}^{4}:=(\mathrm{s}_{2}\mathrm{s}_{1},\mathrm{s}_{2}\mathrm{s}_{1}\mathrm{s}_{2})\), then \((\widetilde{w}^{4},p^{4})\in\mathcal{W}^{*}\), and \(\mathcal{M}_{\boldsymbol{\mu}}(\widetilde{w}^{4},p^{4})\cong\mathbb{K}\).
3. If \(\ell(w_{1})=\ell(w_{2})=3\), then \(w_{1}=w_{1}^{-1}=w_{2}=w_{2}^{-1}=\mathrm{s}_{1}\mathrm{s}_{2}\mathrm{s}_{1}=\mathrm{s}_{2}\mathrm{s}_{1}\mathrm{s}_{2}\). We may write \(\beta=\sigma_{1}\sigma_{2}\sigma_{1}\sigma_{1}\sigma_{2}\sigma_{1}\sigma_{1}\sigma_{2}\sigma_{1}\sigma_{1}\sigma_{2}\sigma_{1}\). By a direct check, we have \[\mathcal{W}(\beta)=\{p^{9}=\cdots\}.\]
Altogether, the cell decomposition of \(\mathcal{M}_{\mu}\) reads:
\[\mathcal{M}_{\mu}=\sqcup_{j=1}^{9}\mathcal{M}_{\mu}(\widetilde{w}^{j},p^{j})=\mathbb{K}^{\sqcup 8}\sqcup(\mathbb{K}^{\times})^{2}. \tag{2.5.3}\]
We have ordered the indices so that we get an admissible total order (Definition 3.17).
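The walk sets above were found by direct inspection. As an independent sanity check (our addition, not part of the original argument), the following Python sketch enumerates \(\mathcal{W}(\beta)\) for the braid word of case 1(i); it assumes the step convention that can be read off from the walks listed above: a step is forced "up" whenever \(\ell(\mathrm{s}_{i_{m}}p_{m-1})>\ell(p_{m-1})\), and otherwise it is either "down" or "static" (contributing \(m\) to \(S_{p}\)).

```python
def compose(u, v):                 # (u*v)(x) = u(v(x)); permutations of {0,1,2} as tuples
    return tuple(u[v[x]] for x in range(3))

def length(w):                     # Coxeter length = number of inversions
    return sum(1 for i in range(3) for j in range(i + 1, 3) if w[i] > w[j])

e = (0, 1, 2)
s = {1: (1, 0, 2), 2: (0, 2, 1)}
word = [1, 2, 2, 1, 1, 2, 1, 1, 2, 1]   # (i_1, ..., i_10): the word of case 1(i), read right to left

def walks(prefix):                 # prefix = [p_0, ..., p_m]
    m = len(prefix) - 1
    if m == len(word):
        if prefix[-1] == e:        # require p_ell = id
            yield prefix
        return
    q = compose(s[word[m]], prefix[-1])
    if length(q) > length(prefix[-1]):
        yield from walks(prefix + [q])            # forced "up" step
    else:
        yield from walks(prefix + [q])            # "down" step
        yield from walks(prefix + [prefix[-1]])   # "static" step, m+1 lies in S_p

all_walks = list(walks([e]))
print(len(all_walks))              # 4
for p in all_walks:
    print([m for m in range(1, 11) if p[m] == p[m - 1]])  # exactly one walk has S_p = [3, 5, 6, 7]
```

Under this convention the sketch reports the four walks of case 1(i), exactly one of which has \(|S_{p}|\geq 4\), namely \(S_{p}=\{3,5,6,7\}\).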
2. Now, let's give a concrete description of the variety \(\mathcal{M}_{\mu}\). Similar to Example 2.13, define \(\underline{M}^{\prime}_{B}(\vec{X}):=\{(X^{i})_{i=1}^{2}\in\prod_{i=1}^{2}\cdots\}\).
It follows that
\[\left(\begin{array}{c}X_{11}^{1}\\ X_{22}^{1}\\ X_{33}^{1}\end{array}\right)=-\frac{\det C_{3}}{\delta(C_{3})}\left(\begin{array} []{c}(a_{3,3}-a_{3,2})y_{1}+(\frac{a_{3,2}}{a_{3,3}}-\frac{a_{3,3}}{a_{3,2}}) \mbox{Tr}(C_{1})+(\frac{1}{a_{3,2}}-\frac{1}{a_{3,3}})\mbox{Tr}(C_{2}^{-1})\\ (a_{3,1}-a_{3,3})y_{1}+(\frac{a_{3,3}}{a_{3,1}}-\frac{a_{3,1}}{a_{3,3}})\mbox{ Tr}(C_{1})+(\frac{1}{a_{3,3}}-\frac{1}{a_{3,1}})\mbox{Tr}(C_{2}^{-1})\\ (a_{3,2}-a_{3,1})y_{1}+(\frac{a_{3,1}}{a_{3,2}}-\frac{a_{3,2}}{a_{3,1}})\mbox{ Tr}(C_{1})+(\frac{1}{a_{3,1}}-\frac{1}{a_{3,2}})\mbox{Tr}(C_{2}^{-1})\end{array} \right)=:\left(\begin{array}{c}c_{1,1}^{1}y_{1}+c_{1,1}^{0}\\ c_{2,2}^{1}y_{1}+c_{2,2}^{0}\\ c_{3,3}^{1}y_{1}+c_{3,3}^{0}\end{array}\right)\]
and
\[\left(\begin{array}{c}\sigma_{1,1}(X^{1})\\ \sigma_{2,2}(X^{1})\\ \sigma_{3,3}(X^{1})\end{array}\right)=-\frac{\det C_{1}\det C_{3}}{\delta(C_{3 })}\left(\begin{array}{c}(a_{3,3}-a_{3,2})\mbox{Tr}(C_{2})+(\frac{a_{3,2}}{a _{3,3}}-\frac{a_{3,3}}{a_{3,2}})\mbox{Tr}(C_{1}^{-1})+(\frac{1}{a_{3,2}}-\frac {1}{a_{3,3}})y_{2}\\ (a_{3,1}-a_{3,3})\mbox{Tr}(C_{2})+(\frac{a_{3,3}}{a_{3,1}}-\frac{a_{3,1}}{a_{3,3}})\mbox{Tr}(C_{1}^{-1})+(\frac{1}{a_{3,3}}-\frac{1}{a_{3,1}})y_{2}\\ (a_{3,2}-a_{3,1})\mbox{Tr}(C_{2})+(\frac{a_{3,1}}{a_{3,2}}-\frac{a_{3,2}}{a_{3,1}})\mbox{Tr}(C_{1}^{-1})+(\frac{1}{a_{3,1}}-\frac{1}{a_{3,2}})y_{2}\end{array} \right).\]
We may denote
\[X_{23}^{1}X_{32}^{1} = X_{22}^{1}X_{33}^{1}-\sigma_{1,1}(X^{1})=:c_{2,3}^{2,0}y_{1}^{2}+c_{2,3}^{1,0}y_{1}+c_{2,3}^{0,1}y_{2}+c_{2,3}^{0,0},\] \[X_{31}^{1}X_{13}^{1} = X_{33}^{1}X_{11}^{1}-\sigma_{2,2}(X^{1})=:c_{3,1}^{2,0}y_{1}^{2}+c_{3,1}^{1,0}y_{1}+c_{3,1}^{0,1}y_{2}+c_{3,1}^{0,0},\] \[X_{12}^{1}X_{21}^{1} = X_{11}^{1}X_{22}^{1}-\sigma_{3,3}(X^{1})=:c_{1,2}^{2,0}y_{1}^{2}+c_{1,2}^{1,0}y_{1}+c_{1,2}^{0,1}y_{2}+c_{1,2}^{0,0}.\]
\(c_{3,1}^{\bullet,\bullet}\) (resp. \(c_{1,2}^{\bullet,\bullet}\)) is \(c_{2,3}^{\bullet,\bullet}\) with \((a_{3,1},a_{3,2},a_{3,3})\) replaced by \((a_{3,2},a_{3,3},a_{3,1})\) (resp. \((a_{3,3},a_{3,1},a_{3,2})\)). Finally, we have \(\det C_{1}=\det X^{1}=X_{11}^{1}X_{22}^{1}X_{33}^{1}-\sum_{\text{cyclic permutation}}X_{11}^{1}X_{23}^{1}X_{32}^{1}+X_{12}^{1}X_{23}^{1}X_{31}^{1}+X_{21}^{1}X_{32}^{1}X_{13}^{1}\). So,
\[X_{21}^{1}X_{32}^{1}X_{13}^{1}=\det C_{1}-X_{11}^{1}X_{22}^{1}X_{ 33}^{1}+\sum_{\mbox{\scriptsize cyclic permutation}}X_{11}^{1}X_{23}^{1}X_{32}^{1}-y_{3}\] \[= \det C_{1}-\prod_{j=1}^{3}(c_{j,j}^{1}y_{1}+c_{j,j}^{0})+\sum_{ \mbox{\scriptsize cyclic permutation}}(c_{1,1}^{1}y_{1}+c_{1,1}^{0})(c_{2,3}^{2,0}y_{1 }^{2}+c_{2,3}^{1,0}y_{1}+c_{2,3}^{0,1}y_{2}+c_{2,3}^{0,0})-y_{3}\] \[=: c^{3,0}y_{1}^{3}+c^{1,1}y_{1}y_{2}+c^{0,1}y_{2}+c^{2,0}y_{1}^{2} +c^{1,0}y_{1}+c^{0,0}-y_{3}.\]
Observe that \(\mathcal{O}(M_{B}^{\prime}(\vec{X}))^{PT}\) is generated by \(X_{jj}^{1},1\leq j\leq 3\), \(X_{ij}^{1}X_{ji}^{1},1\leq i<j\leq 3\), \(X_{12}^{1}X_{23}^{1}X_{31}^{1}\), \(X_{21}^{1}X_{32}^{1}X_{13}^{1}\). By above, \(\mathcal{O}(M_{B}^{\prime}(\vec{X}))^{PT}\) is isomorphic to the ring on \(y_{1},y_{2},y_{3}\) with a single relation
\[(X_{23}^{1}X_{32}^{1})(X_{31}^{1}X_{13}^{1})(X_{12}^{1}X_{21}^{1})=(X_{12}^{1}X_{ 23}^{1}X_{31}^{1})(X_{21}^{1}X_{32}^{1}X_{13}^{1}), \tag{2.5.9}\]
equivalently, \(\mathcal{M}_{\mu}\cong\mbox{Spec }\mathcal{O}(M_{B}^{\prime}(\vec{X}))^{PT}\) is a smooth _degree \(6\) affine surface_ in \(\mathbb{A}_{y_{1},y_{2},y_{3}}^{3}\) defined by
\[\prod_{\mbox{\scriptsize cyclic permutation}}(c_{2,3}^{2,0}y_{1}^{2}+c_{2,3}^{1,0 }y_{1}+c_{2,3}^{0,1}y_{2}+c_{2,3}^{0,0})=y_{3}(c^{3,0}y_{1}^{3}+c^{1,1}y_{1}y_{ 2}+c^{0,1}y_{2}+c^{2,0}y_{1}^{2}+c^{1,0}y_{1}+c^{0,0}-y_{3}).\]
3. To be concise, we consider a **simplification**: Denote \(\zeta:=\exp(\frac{2\pi i}{3})\), and take
\[C_{i}:=\mbox{Diag}(1,\zeta,\zeta^{2}),1\leq i\leq 2,\ \ C_{3}:=\mbox{Diag}(a_{3,1},a_{3,2},a_{3,3})=:\mbox{Diag}(a_{1},a_{2},a_{3}), \tag{2.5.10}\]
such that
\[\left\{\begin{array}{l}\mbox{Tr}(C_{3})=a_{1}+a_{2}+a_{3}=0,\\ \mbox{Tr}(C_{3}^{-1})=\frac{1}{a_{1}}+\frac{1}{a_{2}}+\frac{1}{a_{3}}=t,\\ \det C_{3}=a_{1}a_{2}a_{3}=1.\end{array}\right. \tag{2.5.11}\]
where \(t\in\mathbb{K}\) is an auxiliary parameter. That is, \(a_{1},a_{2},a_{3}\) are the roots of the polynomial \(f(\lambda):=\lambda^{3}+t\lambda-1\). So, \(f^{\prime}(\lambda)=3\lambda^{2}+t\). Clearly, \(f^{\prime}(a_{1})=(a_{1}-a_{2})(a_{1}-a_{3})=3a_{1}^{2}+t\), etc. Thus,
\[\tau:=-\delta(C_{3})^{2}=\prod_{j=1}^{3}f^{\prime}(a_{j})=\prod_{j=1}^{3}(t+3a_{j}^{2})=4t^{3}+27. \tag{2.5.12}\]
Observe that, our assumption on \(C_{1},C_{2},C_{3}\) becomes: \(a_{1},a_{2},a_{3}\) are pairwise distinct, and \(a_{i}\neq 1,\zeta,\zeta^{2}\). In terms of \(t\), this means: \(t\notin\{0,-\frac{3}{\sqrt[3]{4}},-\frac{3}{\sqrt[3]{4}}\zeta,-\frac{3}{\sqrt[3] {4}}\zeta^{2}\}\), i.e. \(\tau\neq 0,27\). Also, notice that \(\mathrm{Tr}(C_{i})=0=\mathrm{Tr}(C_{i}^{-1})\) and \(\det C_{i}=1\), for \(i=1,2\). Under this simplification, we now compute
\[\left(\begin{array}{c}X_{11}^{1}\\ X_{22}^{1}\\ X_{33}^{1}\end{array}\right)=-\frac{1}{\delta(C_{3})}\left(\begin{array}{c} a_{3}-a_{2}\\ a_{1}-a_{3}\\ a_{2}-a_{1}\end{array}\right)y_{1}=:\left(\begin{array}{c}c_{1,1}^{1}y_{1}+c _{1,1}^{0}\\ c_{2,2}^{1}y_{1}+c_{2,2}^{0}\\ c_{3,3}^{1}y_{1}+c_{3,3}^{0}\end{array}\right),\left(\begin{array}{c} \sigma_{1,1}(X^{1})\\ \sigma_{2,2}(X^{1})\\ \sigma_{3,3}(X^{1})\end{array}\right)=-\frac{1}{\delta(C_{3})}\left(\begin{array} []{c}\frac{1}{a_{2}}-\frac{1}{a_{3}}\\ \frac{1}{a_{3}}-\frac{1}{a_{1}}\\ \frac{1}{a_{1}}-\frac{1}{a_{2}}\end{array}\right)y_{2}.\]
In particular, \(c_{j,j}^{0}=0\), and \(\prod_{j=1}^{3}c_{j,j}^{1}=-\frac{1}{\delta(C_{3})^{2}}=\frac{1}{4t^{3}+27}\). Secondly,
\[X_{23}^{1}X_{32}^{1}=\frac{1}{\delta(C_{3})(a_{3}-a_{2})}y_{1}^{ 2}+\frac{a_{3}-a_{2}}{\delta(C_{3})}a_{1}y_{2}=:c_{2,3}^{2,0}y_{1}^{2}+c_{2,3} ^{1,0}y_{1}+c_{2,3}^{0,1}y_{2}+c_{2,3}^{0,0},\] \[X_{31}^{1}X_{13}^{1}=\frac{1}{\delta(C_{3})(a_{1}-a_{3})}y_{1}^{ 2}+\frac{a_{1}-a_{3}}{\delta(C_{3})}a_{2}y_{2}=:c_{3,1}^{2,0}y_{1}^{2}+c_{3,1} ^{1,0}y_{1}+c_{3,1}^{0,1}y_{2}+c_{3,1}^{0,0},\] \[X_{12}^{1}X_{21}^{1}=\frac{1}{\delta(C_{3})(a_{2}-a_{1})}y_{1}^{ 2}+\frac{a_{2}-a_{1}}{\delta(C_{3})}a_{3}y_{2}=:c_{1,2}^{2,0}y_{1}^{2}+c_{1,2} ^{1,0}y_{1}+c_{1,2}^{0,1}y_{2}+c_{1,2}^{0,0}.\]
In particular, \(c_{2,3}^{1,0}=c_{2,3}^{0,0}=0\), etc. Besides, as \((a_{3}-a_{2})^{2}a_{1}=(a_{3}+a_{2})^{2}a_{1}-4=a_{1}^{3}-4=-ta_{1}-3\), and \((a_{1}-a_{3})^{2}(a_{2}-a_{1})^{2}a_{2}a_{3}=(ta_{2}+3)(ta_{3}+3)=\frac{t^{2}} {a_{1}}-3ta_{1}+9\), etc., we have
\[(X_{23}^{1}X_{32}^{1})(X_{31}^{1}X_{13}^{1})(X_{12}^{1}X_{21}^{1})\] \[= \frac{1}{\delta(C_{3})^{4}}y_{1}^{6}+\sum_{\text{cyclic permutation}}[ \frac{(a_{2}-a_{1})^{2}a_{3}}{\delta(C_{3})^{4}}y_{1}^{4}y_{2}+\frac{(a_{1}-a _{3})^{2}(a_{2}-a_{1})^{2}a_{2}a_{3}}{\delta(C_{3})^{4}}y_{1}^{2}y_{2}^{2}]+ \frac{1}{\delta(C_{3})^{2}}y_{2}^{3}\] \[= \frac{1}{(4t^{3}+27)^{2}}y_{1}^{6}-\frac{9}{(4t^{3}+27)^{2}}y_{1} ^{4}y_{2}+\frac{t^{3}+27}{(4t^{3}+27)^{2}}y_{1}^{2}y_{2}^{2}-\frac{1}{4t^{3}+2 7}y_{2}^{3}. \tag{2.5.13}\]
Thirdly, observe that \(c_{1,1}^{1}c_{2,3}^{2,0}=-\frac{1}{\delta(C_{3})^{2}}=\frac{1}{4t^{3}+27}\), and \(c_{1,1}^{1}c_{2,3}^{0,1}=\frac{(a_{3}-a_{2})^{2}a_{1}}{4t^{3}+27}=\frac{-ta_{1} -3}{4t^{3}+27}\), etc. Then,
\[X_{21}^{1}X_{32}^{1}X_{13}^{1}=1-\prod_{j=1}^{3}(c_{j,j}^{1}y_{1} )+\sum_{\text{cyclic permutation}}(c_{1,1}^{1}y_{1})(c_{2,3}^{2,0}y_{1}^{2}+c_{2,3} ^{0,1}y_{2})-y_{3}\] \[= \frac{2}{4t^{3}+27}y_{1}^{3}-\frac{9}{4t^{3}+27}y_{1}y_{2}+1-y_{3 }=:c^{3,0}y_{1}^{3}+c^{1,1}y_{1}y_{2}+c^{0,1}y_{2}+c^{2,0}y_{1}^{2}+c^{1,0}y_{1 }+c^{0,0}-y_{3}.\]
In particular, \(c^{0,1}=c^{2,0}=c^{1,0}=0\), \(c^{0,0}=1\). So, the defining equation (2.5.9) of \(\mathcal{M}_{\mu}\) becomes
\[\frac{1}{(4t^{3}+27)^{2}}y_{1}^{6}-\frac{9}{(4t^{3}+27)^{2}}y_{1} ^{4}y_{2}+\frac{t^{3}+27}{(4t^{3}+27)^{2}}y_{1}^{2}y_{2}^{2}-\frac{1}{4t^{3}+2 7}y_{2}^{3}\] \[= y_{3}(c^{3,0}y_{1}^{3}+c^{1,1}y_{1}y_{2}+1-y_{3})=\frac{2}{4t^{3} +27}y_{1}^{3}y_{3}-\frac{9}{4t^{3}+27}y_{1}y_{2}y_{3}+y_{3}-y_{3}^{2}.\]
Thus, \(\mathcal{M}_{\mu}\subset\mathbb{A}^{3}_{y_{1},y_{2},y_{3}}\) is, a priori, a smooth degree \(6\) affine surface defined by
\[F(y_{1},y_{2},y_{3}):=y_{1}^{6}-9y_{1}^{4}y_{2}-2\tau y_{1}^{3}y_{3}+\frac{\tau+ 81}{4}y_{1}^{2}y_{2}^{2}+9\tau y_{1}y_{2}y_{3}-\tau y_{2}^{3}+\tau^{2}y_{3}^{2} -\tau^{2}y_{3}=0. \tag{2.5.14}\]
We can simplify the description. Observe that
\[F(y_{1},y_{2},y_{3})=(\tau y_{3}-y_{1}^{3})^{2}+9y_{1}y_{2}(\tau y _{3}-y_{1}^{3})+\frac{\tau+81}{4}y_{1}^{2}y_{2}^{2}-\tau y_{2}^{3}-\tau^{2}y_{3}\] \[= (\tau y_{3}-y_{1}^{3}+\frac{9}{2}y_{1}y_{2})^{2}+\frac{\tau}{4}y_ {1}^{2}y_{2}^{2}-\tau(\tau y_{3}-y_{1}^{3}+\frac{9}{2}y_{1}y_{2})+\frac{9\tau }{2}y_{1}y_{2}-\tau(y_{1}^{3}+y_{2}^{3})\] \[= (\tau y_{3}-y_{1}^{3}+\frac{9}{2}y_{1}y_{2}-\frac{\tau}{2})^{2}- \frac{\delta(C_{3})^{2}}{4}y_{1}^{2}y_{2}^{2}-\tau(y_{1}^{3}+y_{2}^{3})+\frac{9 \tau}{2}y_{1}y_{2}-\frac{\tau^{2}}{4}.\]
Inspired by this, we take the coordinate change \((y_{1},y_{2},y_{3})\rightarrow(\widetilde{y}_{1}=y_{1},\widetilde{y}_{2}=y_{2 },\widetilde{y}_{3})\), with
\[\delta(C_{3})\widetilde{y}_{3}:=\tau y_{3}-y_{1}^{3}+\frac{9}{2}y_{1}y_{2}- \frac{\tau}{2}-\frac{\delta(C_{3})}{2}y_{1}y_{2}. \tag{2.5.15}\]
Then, \(F=\delta(C_{3})\widetilde{y}_{3}\delta(C_{3})(\widetilde{y}_{3}+y_{1}y_{2})- \tau(y_{1}^{3}+y_{2}^{3})+\frac{9\tau}{2}y_{1}y_{2}-\frac{\tau^{2}}{4}=-\tau (\widetilde{y}_{1}\widetilde{y}_{2}\widetilde{y}_{3}+\widetilde{y}_{1}^{3}+ \widetilde{y}_{2}^{3}+\widetilde{y}_{3}^{2}-\frac{9}{2}\widetilde{y}_{1} \widetilde{y}_{2}+\frac{\tau}{4})\).
**In conclusion**: \(\mathcal{M}_{\mu}\subset\mathbb{A}^{3}_{\widetilde{y}_{1},\widetilde{y}_{2}, \widetilde{y}_{3}}\) becomes a _smooth cubic affine surface_ defined by
\[\widetilde{F}(\widetilde{y}_{1},\widetilde{y}_{2},\widetilde{y}_{3}):=\widetilde{y}_{1}\widetilde{y}_{2}\widetilde{y}_{3}+\widetilde{y}_{1}^{3}+\widetilde{y}_{2}^{3}+\widetilde{y}_{3}^{2}-\frac{9}{2}\widetilde{y}_{1}\widetilde{y}_{2}+\frac{\tau}{4}=0. \tag{2.5.16}\]
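The passage from (2.5.14) to (2.5.16) can be double-checked mechanically. The following sympy sketch (our addition; it only uses \(\delta(C_{3})^{2}=-\tau\) and treats \(\delta(C_{3})\) as a free symbol) verifies that \(F=-\tau\widetilde{F}\) under the coordinate change (2.5.15):

```python
import sympy as sp

y1, y2, y3, d = sp.symbols('y1 y2 y3 delta')   # d stands for delta(C_3)
tau = -d**2                                     # tau = -delta(C_3)^2 = 4t^3 + 27

# F from (2.5.14)
F = (y1**6 - 9*y1**4*y2 - 2*tau*y1**3*y3 + sp.Rational(1, 4)*(tau + 81)*y1**2*y2**2
     + 9*tau*y1*y2*y3 - tau*y2**3 + tau**2*y3**2 - tau**2*y3)

# y3_tilde from the coordinate change (2.5.15)
y3t = (tau*y3 - y1**3 + sp.Rational(9, 2)*y1*y2 - tau/2 - d*y1*y2/2) / d

# F_tilde from (2.5.16), with y1_tilde = y1 and y2_tilde = y2
Ft = y1*y2*y3t + y1**3 + y2**3 + y3t**2 - sp.Rational(9, 2)*y1*y2 + tau/4

print(sp.simplify(F + tau*Ft))   # expected output: 0
```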
**Note**: as a side remark, we observe that, a further computation in fact gives:
\[F(y_{1},y_{2},y_{3})=-\tau(y_{1}+y_{2}+3)^{3}+\frac{\tau}{4}(y_{1}y_{2}+6(y_{1 }+y_{2})+9)^{2}+(\tau y_{3}-y_{1}^{3}+\frac{9}{2}y_{1}y_{2}-\frac{\tau}{2})^{ 2}+\frac{(27-\tau)\tau}{4}. \tag{2.5.17}\]
Thus, \(\mathcal{M}_{\mu}\) is a branched double-cover over the smooth cubic affine surface: \(\{z_{1}^{3}+z_{2}^{2}+z_{3}^{2}-1=0\}\).
## 3. Dual boundary complexes of character varieties
### Dual boundary complexes
#### 3.1.1. Setup
Let \(X\) be a smooth quasi-projective \(\mathbb{K}\)-variety.
**Definition 3.1**.: A _log compactification_ of \(X\) is a smooth projective variety \(\overline{X}\) with simple normal crossing (snc) boundary divisor \(\partial X=\overline{X}\setminus X\). Moreover, we say that \(\partial X\) is _very simple normal crossing_, if every nonempty finite intersection of its irreducible components is _connected_.
By Hironaka's resolution of singularities, a log compactification \((\overline{X},\partial X)\) always exists. By blowing up further if necessary, \(\partial X\) will be very simple normal crossing. We will assume this is the case from now on.
**Definition 3.2**.: The _dual boundary complex_\(\mathbb{D}\partial X\) is a simplicial complex such that:
* Vertices of \(\mathbb{D}\partial X\) are in one-to-one correspondence with the irreducible components of \(\partial X\);
* \(k\) vertices of \(\mathbb{D}\partial X\) span a \((k-1)\)-simplex if and only if the corresponding irreducible components have non-empty intersection.
**Proposition 3.3** ([12]).: _The homotopy type of \(\mathbb{D}\partial X\) is an invariant of \(X\), i.e. it is independent of the choice of the log compactification \((\overline{X},\partial X)\)._
**Example 3.4**.: For any two quasi-projective smooth varieties \(X,Y\), take the log compactifications \(\overline{X},\overline{Y}\) separately, then \(\overline{X}\times\overline{Y}\) is a log compactification of \(X\times Y\) with \(\partial(X\times Y)=\partial X\times\overline{Y}\cup_{\partial X\times \partial Y}\overline{X}\times\partial Y\). By a direct calculation, we then have a homotopy equivalence:
\[\mathbb{D}\partial(X\times Y)\sim\mathbb{D}\partial X\star\mathbb{D}\partial Y, \tag{3.1.1}\]
where '\(\star\)' stands for the _join_ of simplicial complexes. Clearly, \(\mathbb{D}\partial\mathbb{K}\sim*\) and \(\mathbb{D}\partial\mathbb{K}^{*}\sim S^{0}\). Thus,
\[\mathbb{D}\partial(\mathbb{A}^{1}\times Y)\sim*,\]
i.e. the dual boundary complex of \(\mathbb{A}^{1}\times Y\) is contractible. As another example, we have
\[\mathbb{D}\partial(\mathbb{K}^{\times})^{d}\sim S^{0}\star\mathbb{D}\partial (\mathbb{K}^{\times})^{d-1}\sim\cdots\sim S^{d-1}.\]
The cohomology with rational coefficients of a dual boundary complex is computed by:
**Proposition 3.5** (See e.g. [66]).: _Let \(X\) be a connected smooth quasi-projective complex variety of dimension \(d\), then the reduced rational (co)homology of the dual boundary complex captures one piece of the weight filtration:_
\[\widetilde{H}_{i-1}(\mathbb{D}\partial X,\mathbb{Q})\cong\operatorname{Gr}_{2 d}^{W}H^{2d-i}(X(\mathbb{K}),\mathbb{Q}),\quad\widetilde{H}^{i-1}(\mathbb{D} \partial X,\mathbb{Q})\cong\operatorname{Gr}_{0}^{W}H_{c}^{i}(X(\mathbb{K}), \mathbb{Q}).\]
The latter is equivalent to the former by Poincare duality. More generally, see Proposition 4.10.
#### 3.1.2. A removal lemma
In some cases, one can simplify the computation of dual boundary complexes:
**Lemma 3.6**.: _[_75_, Lem.2.3]_ _Let \(X\) be a smooth irreducible quasi-projective \(\mathbb{K}\)-variety, and \(Z\subset X\) be a smooth irreducible closed subvariety of smaller dimension with complement \(U\). If \(N_{Z}\) is the normal bundle of \(Z\) in \(X\), then we have a natural homotopy cofiber sequence_
\[\mathbb{D}\partial Z\sim\mathbb{D}\partial\mathbb{P}(N_{Z})\to\mathbb{D}\partial X\sim\mathbb{D}\partial\mathrm{Bl}_{Z}(X)\to\mathbb{D}\partial U.\]
_In particular, if \(\mathbb{D}\partial Z\sim\operatorname{pt}\), then the natural map \(\mathbb{D}\partial X\to\mathbb{D}\partial U\) is a homotopy equivalence._
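For instance (a toy illustration added here for concreteness): take \(X=\mathbb{A}^{2}\) and \(Z=\{y=0\}\cong\mathbb{A}^{1}\), so \(U=\mathbb{A}^{1}\times\mathbb{K}^{\times}\). Since \(\mathbb{D}\partial Z\sim\operatorname{pt}\), the lemma gives a homotopy equivalence \(\mathbb{D}\partial\mathbb{A}^{2}\to\mathbb{D}\partial(\mathbb{A}^{1}\times\mathbb{K}^{\times})\); indeed both are contractible by Example 3.4.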
By an inductive procedure, one may obtain a further simplification:
**Lemma 3.7**.: _[_75_, Prop.2.6]_ _Let \(X\) be a smooth irreducible quasi-projective \(\mathbb{K}\)-variety with a non-empty open subset \(U\). If the complement \(Z=X\setminus U\) admits a finite decomposition into locally closed smooth subvarieties \(Z_{j}\) such that: \(\mathbb{D}\partial Z_{j}\sim*\); there is a total order on the indices such that \(\cup_{j\leq a}Z_{j}\) is closed for all \(a\). Then the natural map \(\mathbb{D}\partial X\to\mathbb{D}\partial U\) is a homotopy equivalence._
This will be our key tool for computing the dual boundary complex of the character variety \(\mathcal{M}_{\mu}\).
### Fundamental group of the dual boundary complex
We adapt [48, §5] to our setting. Let \(X\) be a connected smooth quasi-projective \(\mathbb{K}\)-variety. Fix a log compactification \((\overline{X},D=\partial X)\) with \(D\) very simple normal crossing. Say, \(\overline{X}\subset\mathbb{P}^{N}\). Fix a Riemannian metric on \(\mathbb{P}^{N}\). For \(0<\delta\ll 1\), let \(N_{\delta}(D)\subset\overline{X}\) be the \(\delta\)-neighborhood of \(D\). Then \(D\) is a deformation retract of \(N_{\delta}(D)\), and
\[\pi_{j}(D)\cong\pi_{j}(N_{\delta}(D)),\forall j\geq 0.\]
Let \(D_{i}\), \(i\in I=\{1,2,\cdots,m\}\) be the irreducible components of \(D\). For each \(i\), let \(N_{\delta}(D_{i})\subset\overline{X}\) be the \(\delta\)-neighborhood of \(D_{i}\), which is a tubular neighborhood of \(D_{i}\). Then \(\{N_{\delta}(D_{i})\}_{i\in I}\) is an open cover of \(N_{\delta}(D)\), and for \(0<\delta\ll 1\), we have
\[\cap_{1\leq j\leq r}N_{\delta}(D_{i_{j}})\neq\varnothing\Leftrightarrow\cap_ {1\leq j\leq r}D_{i_{j}}\neq\varnothing. \tag{3.2.1}\]
Now, take any partition of unity \(\{\rho_{i}\}_{i\in I}\) associated to this cover. That is, \(\rho_{i}\in C^{\infty}(N_{\delta}(D))\) s.t.:
1. \(0\leq\rho_{i}\leq 1\) and \(\sum_{i\in I}\rho_{i}=1\).
2. For each \(i\), the support \(\operatorname{Supp}\!\rho_{i}\) is a compact subset contained in \(N_{\delta}(D_{i})\).
Then define a map
\[\overline{\alpha}=\overline{\alpha}[\{\rho_{i}\}]:N_{\delta}(D)\to\mathbb{D}D,\quad\overline{\alpha}(x):=\sum_{i=1}^{m}\rho_{i}(x)v_{i}, \tag{3.2.2}\]
where \(v_{i}\) is the **vertex** of \(\mathbb{D}D\) corresponding to \(D_{i}\). By (3.2.1), the map is indeed well-defined.
**Claim**: the map \(\overline{\alpha}\), up to homotopy, is independent of the choice of partition of unity.
_Proof of Claim._ Indeed, for any other partition of unity \(\{\widetilde{\rho_{i}}\}\), we get a continuous family of partitions of unity \(\{\rho_{i}^{t}:=(1-t)\rho_{i}+t\widetilde{\rho_{i}}\}_{i\in I}\), parameterized by \(t\in[0,1]\). Apply the above definition, we then obtain a continuous family of maps \(\overline{\alpha}_{t}=\sum\rho_{i}^{t}v_{i}:N_{\delta}(D)\to\mathbb{D}D\). \(\Box\)
**Remark 3.8**.: By the **Claim** and Proposition 3.3, the map obtained by composition
\[\alpha:N_{\delta}^{*}(D)=N_{\delta}(D)\setminus D\hookrightarrow N_{\delta}( D)\xrightarrow{\overline{\alpha}}\mathbb{D}\partial X, \tag{3.2.3}\]
is well-defined up to homotopy.
**Definition 3.9**.: Let \(Y\) be a (reasonable) locally compact Hausdorff space (e.g. \(Y=X\) equipped with analytic topology). Consider the direct system \(\{K\}\) of compact subsets with inclusions.
1. The _set of ends_ of \(Y\) is \[\pi_{0}^{\infty}(Y):=\lim_{\longleftarrow}\pi_{0}(Y\setminus K).\] We say \(Y\) is _connected at infinity_, if \(Y\) has a single end.
2. For simplicity, suppose \(Y\) has a single end.3 The _fundamental group at infinity_ of \(Y\) is \[\pi_{1}^{\infty}(Y):=\lim_{\longleftarrow}\pi_{1}(Y\setminus K).\]
In our case, observe that for \(0<\delta\ll 1\) and \(i=0,1\), we have a canonical isomorphism
\[\pi_{i}^{\infty}(X)\cong\pi_{i}(N_{\delta}^{*}(D)).\]
Then by Remark 3.8, we obtain canonical maps
\[\pi_{i}^{\infty}(X)\cong\pi_{i}(N_{\delta}^{*}(D))\to\pi_{i}(D)\cong\pi_{i}(N_ {\delta}(D))\to\pi_{i}(\mathbb{D}D)=\pi_{i}(\mathbb{D}\partial X).\]
For \(i=0\), we clearly have natural bijections
\[\pi_{0}^{\infty}(X)\simeq\pi_{0}(D)\simeq\pi_{0}(\mathbb{D}D)=\pi_{0}(\mathbb{ D}\partial X). \tag{3.2.4}\]
The first constraint on the fundamental group of the dual boundary complex is:
**Lemma 3.10**.: _We have natural surjections: \(\pi_{1}^{\infty}(X)\twoheadrightarrow\pi_{1}(D)\twoheadrightarrow\pi_{1}( \mathbb{D}D)=\pi_{1}(\mathbb{D}\partial X).\)_
Proof.: (1). For the first surjection: Take \(N_{\delta}^{*}(D):=N_{\delta}(D)\setminus D\). As \(D\) has complex codimension \(1\) (real codimension \(2\)) in \(N_{\delta}(D)\), we can deform any loop in \(N_{\delta}(D)\) to avoid \(D\), i.e. we have a natural surjection \(\pi_{1}(N_{\delta}^{*}(D))\twoheadrightarrow\pi_{1}(N_{\delta}(D))\cong\pi_{ 1}(D)\). However, for \(0<\delta\ll 1\), \(\pi_{1}(N_{\delta}^{*}(D))\cong\pi_{1}^{\infty}(X)\) is the fundamental group at infinity of \(X\). Thus, we get a surjection \(\pi_{1}^{\infty}(X)\twoheadrightarrow\pi_{1}(D)\).
(2). For the second surjection: \(\forall[\gamma]\in\pi_{1}(\mathbb{D}D)\), we want to show \([\gamma]\in\overline{\alpha}_{*}(\pi_{1}(N_{\delta}(D))),0<\delta\ll 1\).
We may assume \([\gamma]\) is represented by a loop \(\gamma:S^{1}=[0,1]/(0\sim 1)\to(\mathbb{D}D)^{1}\) with \(\gamma(0)\) a vertex of \(\mathbb{D}D\). Here, \((\mathbb{D}D)^{1}\) is the \(1\)-skeleton. Up to a homotopy, one can further assume that
\[[\gamma]=[\gamma_{\ell}\circ\cdots\circ\gamma_{1}],\]
where \(\gamma_{j}:[0,1]\to e_{j}\) is a path in an edge \(e_{j}\) of \((\mathbb{D}D)^{1}\) with source \(v_{i_{j-1}}\) and target \(v_{i_{j}}\), s.t. \(v_{i_{\ell}}=v_{i_{0}}\). Then, it suffices to find a loop in \(N_{\delta}(D)\) of the form \(\widetilde{\gamma}=\widetilde{\gamma}_{\ell}\circ\cdots\circ\widetilde{\gamma }_{1}\) such that: \(\widetilde{\gamma}_{j}:[0,1]\to N_{\delta}(D)\) is a path such that \(\overline{\alpha}\circ\widetilde{\gamma}_{j}\) is homotopic to \(\gamma_{j}\) relative to the endpoints.
Indeed, for each vertex \(v_{j}\), we can fix a point \(p_{j}\in D_{j}\setminus\cup_{i\neq j}N_{\delta}(D_{i})\) so that \(\overline{\alpha}(p_{j})=v_{j}\); for each edge \(e_{j}\), we can clearly take a path \(\widetilde{r}_{j}\) in \((D_{i_{j-1}}\cup D_{i_{j}})\setminus\cup_{i\neq i_{j-1},i_{j}}N_{\delta}(D_{i})\) which firstly goes from \(p_{i_{j-1}}\) to some point \(p_{i_{j-1},i_{j}}\in D_{i_{j-1}}\cap D_{i_{j}}\) in \(D_{i_{j-1}}\), then goes from \(p_{i_{j-1},i_{j}}\) to \(p_{i_{j}}\) in \(D_{i_{j}}\). Done.
On the other hand, recall the following generalization of the Lefschetz hyperplane theorem:
**Theorem 3.11** (Relative Lefschetz theorem with large fibers, [27, part II, Thm.1.1]).: _Let \(U\) be a purely \(n\)-dimensional smooth connected algebraic variety. Let \(f:U\to\mathbb{P}^{N}\) be an algebraic map and \(H\subset\mathbb{P}^{N}\) be a linear subspace of codimension \(c\). Let \(H_{\delta}\) be the \(\delta\)-neighborhood of \(H\) with respect to some (real analytic) Riemann metric. Define_
\[\phi(k):=\dim\{z\in\mathbb{P}^{N}\setminus H:\dim f^{-1}(z)=k\},\text{ or }-\infty\text{ if the set is empty}.\]
_Then for \(0<\delta\ll 1\), the natural morphism_
\[\pi_{i}(f^{-1}(H_{\delta}))\to\pi_{i}(U)\]
_is an isomorphism for \(i<\widehat{n}\), and is a surjection for \(i=\widehat{n}\), where_
\[\widehat{n}:=n-\sup_{k}(2k-(n-\phi(k))+\inf(\phi(k),c-1))-1.\]
**Corollary 3.12**.: _Let \(U\) be a connected smooth complex affine variety of dimension \(n\geq 3\). Then_
\[\pi_{i}^{\infty}(U)\xrightarrow{\simeq}\pi_{i}(U),\]
_for \(i=0,1\). In particular, \(\pi_{0}^{\infty}(U)=\pi_{0}(\mathbb{D}\partial U)=\operatorname{pt}\) by (3.2.4)._
Proof.: Say, \(U\hookrightarrow\mathbb{A}^{N}\) is a closed embedding. Let \(f:U\hookrightarrow\mathbb{A}^{N}\hookrightarrow\mathbb{P}^{N}\) be the composition and \(H=\mathbb{P}^{N}\setminus\mathbb{A}^{N}\) be the hyperplane at infinity (of codimension \(c=1\)). Clearly, \(\phi(0)=\dim U=n\) and \(\phi(k)=-\infty\) for all \(k\geq 1\). So, \(\widehat{n}=n-\sup_{k}(2k-(n-\phi(k))+\inf(\phi(k),c-1))-1=n-1>1\). Then by Theorem 3.11, \(\pi_{i}(f^{-1}(H_{\delta}))\xrightarrow{\simeq}\pi_{i}(U)\), for \(0<\delta\ll 1\) and all \(i\leq 1\). Observe that for any \(R>0\), \(\overline{B}_{R}(0)\cap U\hookrightarrow\overline{B}_{R}(0)\subset\mathbb{A} ^{N}\) is a closed, hence compact subset of \(\overline{B}_{R}(0)\). It's then easy to see that
\[\pi_{i}^{\infty}(U)=\lim_{R\to\infty}\pi_{i}(U\setminus(\overline{B}_{R}(0)\cap U))=\lim_{\delta\to 0}\pi_{i}(f^{-1}(H_{\delta})).\]
Therefore, we obtain a natural isomorphism \(\pi_{i}^{\infty}(U)\xrightarrow{\simeq}\pi_{i}(U)\) for \(i\leq 1\), as desired.
**Corollary 3.13**.: _Let \(X\) be a smooth connected affine algebraic variety of dimension \(n\geq 3\), with any log compactification \((\overline{X},D=\overline{X}-X)\). Then we have a natural isomorphism_
\[\pi_{i}(D)\xrightarrow{\simeq}\pi_{i}(\overline{X}),\]
_for \(i=0,1\). In particular, \(\pi_{0}(D)=\operatorname{pt}\)._
Proof.: Clearly, \(\overline{X}\) is connected. By Corollary 3.12, we see that \(\pi_{0}(D)\simeq\pi_{0}^{\infty}(X)\simeq\pi_{0}(X)=\operatorname{pt}\). This shows that \(\pi_{0}(D)\simeq\pi_{0}(\overline{X})\simeq\operatorname{pt}\). It suffices to show \(\pi_{1}(D)\simeq\pi_{1}(\overline{X})\).
Fix a Riemannian metric on \(\overline{X}\). For \(0<\delta\ll 1\), applying the Seifert-van Kampen theorem to the open cover \(\overline{X}=X\cup_{N_{\delta}^{*}(D)}N_{\delta}(D)\), we obtain for any \(x\in N_{\delta}^{*}(D)\) a pushout diagram
\[\begin{array}{ccc}\pi_{1}(N_{\delta}^{*}(D),x)&\longrightarrow&\pi_{1}(N_{\delta}(D),x)\\ \downarrow&&\downarrow\\ \pi_{1}(X,x)&\longrightarrow&\pi_{1}(\overline{X},x)\end{array}\]
The first column is the natural morphism \(\pi_{1}^{\infty}(X)\to\pi_{1}(X)\), which is an isomorphism by Corollary 3.12. Thus, the second column \(\pi_{1}(N_{\delta}(D))\cong\pi_{1}(D)\to\pi_{1}(\overline{X})\) is also an isomorphism.
As an immediate corollary of Lemma 3.10, Corollary 3.12, and Corollary 3.13, we have
**Corollary 3.14**.: _Let \(X\) be a smooth connected affine algebraic variety of dimension \(\geq 3\), with any log compactification \((\overline{X},D=\overline{X}-X)\). Then we have a natural surjection_
\[\pi_{1}(X)\twoheadrightarrow\pi_{1}(\overline{X})\simeq\pi_{1}(D)\twoheadrightarrow\pi_{1}( \mathbb{D}\partial X).\]
### Dual boundary complexes of very generic character varieties
Here's our main result:
**Theorem 3.15**.: _The homotopy type conjecture 0.2 for very generic character varieties holds: If \((C_{1},\cdots,C_{k})\in T^{k}\) is very generic (Definition 1.1, Assumption 1.4) of type \(\boldsymbol{\mu}\), and \(\mathcal{M}_{\boldsymbol{\mu}}\) is nonempty, then we have a homotopy equivalence_
\[\mathbb{D}\partial\mathcal{M}_{\boldsymbol{\mu}}\sim S^{d_{\boldsymbol{\mu}}-1 },\quad d_{\boldsymbol{\mu}}=\dim\mathcal{M}_{\boldsymbol{\mu}}.\]
It relies on the following key lemma, whose proof will be postponed until Section 4.2.3:
**Lemma 3.16**.: _If \(Y\) is a \(\mathbb{K}\)-variety stably isomorphic to \(\mathbb{A}^{\ell}\), \(\ell\geq 1\), then \(\mathbb{D}\partial Y\) is contractible._
Proof of Theorem 3.15.: By Theorem 2.10, we get a decomposition into locally closed subvarieties
\[\mathcal{M}_{\boldsymbol{\mu}}=\sqcup_{\widetilde{w}\in W^{2g}\times\prod_{i=1}^{k-1}W/W(C_{i})}\mathcal{M}_{\boldsymbol{\mu}}(\widetilde{w})=\sqcup_{\widetilde{w}\in W^{2g}\times\prod_{i=1}^{k-1}W/W(C_{i})}\sqcup_{p\in\mathcal{W}^{*}(\beta(\widetilde{w}))}\mathcal{M}_{\boldsymbol{\mu}}(\widetilde{w},p),\] \[\mathcal{M}_{\boldsymbol{\mu}}(\widetilde{w},p)\cong(\mathbb{K}^{\times})^{\overline{a}(\widetilde{w},p)}\times\mathcal{A}_{\boldsymbol{\mu}}(\widetilde{w},p),\quad\mathcal{A}_{\boldsymbol{\mu}}(\widetilde{w},p)\times\mathbb{A}^{|U|}\cong\mathbb{A}^{\overline{b}(\widetilde{w},p)+|U|},\]
and \(\dim\mathcal{M}_{\boldsymbol{\mu}}(\widetilde{w},p)=d_{\boldsymbol{\mu}}\) if and only if \((\widetilde{w},p)=(\widetilde{w}_{\max},p_{\max})\), that is, \(\overline{a}(\widetilde{w},p)=d_{\boldsymbol{\mu}}\), i.e., \(\overline{b}(\widetilde{w},p)=\dim\mathcal{A}_{\boldsymbol{\mu}}(\widetilde{ w},p)=0\). Thus, for any \((\widetilde{w},p)\neq(\widetilde{w}_{\max},p_{\max})\), by (3.1.1) and Lemma 3.16, we have
\[\mathbb{D}\partial\mathcal{M}_{\boldsymbol{\mu}}(\widetilde{w},p)\sim\mathbb{ D}\partial(\mathbb{K}^{\times})^{\overline{a}(\widetilde{w},p)}\star \mathbb{D}\partial\mathcal{A}_{\boldsymbol{\mu}}(\widetilde{w},p)\sim\mathbb{ D}\partial(\mathbb{K}^{\times})^{\overline{a}(\widetilde{w},p)}\star\text{ pt}\sim\text{pt}.\]
To apply Lemma 3.7, it remains to produce an _admissible total order_ (Definition 3.17) on
\[\mathcal{W}^{*}\coloneqq\{(\widetilde{w},p):\widetilde{w}\in W^{2g}\times \prod_{i=1}^{k-1}W/W(C_{i}),p\in\mathcal{W}^{*}(\beta(\widetilde{w}))\}. \tag{3.3.1}\]
This is done in Corollary 3.20 below. Thus, by Lemma 3.7, with \(X=\mathcal{M}_{\boldsymbol{\mu}}\) and \(U=\mathcal{M}_{\boldsymbol{\mu}}(\widetilde{w}_{m},p_{m})=(\mathbb{K}^{\times })^{d_{\boldsymbol{\mu}}}\), we get a homotopy equivalence \(\mathbb{D}\partial\mathcal{M}_{\boldsymbol{\mu}}\xrightarrow{\sim}\mathbb{D} \partial\mathcal{M}_{\boldsymbol{\mu}}(\widetilde{w}_{m},p_{m})\sim S^{d_{ \boldsymbol{\mu}}-1}\), as desired.
To finish the proof of our main theorem, we're left with the question of admissible total orders.
**Definition 3.17**.: Let \(X=\sqcup_{i\in I}Z_{i}\) be a finite decomposition of a \(\mathbb{K}\)-variety into locally closed subvarieties. We say that a total order \(\leq\) on \(I\) is _admissible_, if
\[Z_{\leq a}:=\cup_{i\in I:i\leq a}Z_{i}\subset X\]
is closed for all \(a\in I\). In particular, for the maximal index \(m\in I\), \(U\coloneqq Z_{m}\subset X\) is open.
In this case, we call \((X=\sqcup_{i\in I}Z_{i},\leq)\) an _admissible decomposition_.
For simplicity, we also denote
\[I_{\leq a}:=\{i\in I:i\leq a\},\quad I_{<a}:=\{i\in I:i<a\},\quad Z_{<a}:=\cup _{i\in I_{<a}}Z_{i}.\]
Observe that \(I_{<i}=I_{\leq m_{<i}}\), where \(m_{<i}\) is the maximal element of \(I_{<i}\). It follows by definition that \(Z_{<i}\subset Z_{\leq i}\subset X\) are two closed subsets. Hence, the complement \(Z_{i}=Z_{\leq i}\setminus Z_{<i}\subset Z_{\leq i}\) is _open_.
**Lemma 3.18**.: _Let \((X=\sqcup_{i\in I}Z_{i},\leq)\) be an admissible decomposition. Suppose that, for each \(i\in I\), we have an admissible decomposition \((Z_{i}=\sqcup_{j\in J_{i}}Z_{i,j},\leq)\). Denote \(\widetilde{I}:=\{(i,j):i\in I,j\in J_{i}\}\cong\sqcup_{i\in I}J_{i}\), and define a total order on \(\widetilde{I}\) by_
\[(i,j)\leq(i^{\prime},j^{\prime})\Leftrightarrow i<i^{\prime},\text{ or }i=i^{\prime}\text{ and }j\leq j^{\prime}.\]
_Then \((X=\sqcup_{(i,j)\in\widetilde{I}}Z_{i,j},\leq)\) is an admissible decomposition._
Proof.: Clearly, we have a decomposition of \(X\) into locally closed subvarieties \(X=\sqcup_{(i,j)\in\widetilde{I}}Z_{i,j}\). It suffices to show that the total order \(\leq\) on \(\widetilde{I}\) is admissible. Indeed, we have
\[Z_{\leq(i,j)}=Z_{<i}\cup(Z_{i})_{\leq j}\hookrightarrow Z_{<i}\cup Z_{i}=Z_{ \leq i}\hookrightarrow X.\]
By assumption, \(Z_{\leq i}\subset X\) and \((Z_{i})_{\leq j}\subset Z_{i}\) are closed. By the observation above, the composition
\[Z_{\leq i}\setminus Z_{\leq(i,j)}=Z_{i}\setminus(Z_{i})_{\leq j}\subset Z_{i}\subset Z_{\leq i}\]
is then open. So, the complement \(Z_{\leq(i,j)}\) is closed in \(Z_{\leq i}\), hence also closed in \(X\). Done.
Next, consider the Bruhat cell decomposition \(G=\sqcup_{\dot{w}\in W/W(P)}B\dot{w}P\), where \(W(P)\) is the Weyl group of a Levi subgroup of \(P\). Recall that there is a Bruhat partial order on \(W/W(P)\): \(\dot{\lambda}\leq\dot{\mu}\) if and only if \(B\dot{\lambda}P\subset\overline{B\dot{\mu}P}\). That is, \(\overline{B\dot{\mu}P}=\sqcup_{\dot{\lambda}\leq\dot{\mu}}B\dot{\lambda}P\). It follows that any total order extending the Bruhat partial order is admissible. From now on, we always fix such an extension.
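For instance, for \(G=\mathrm{GL}_{3}\) and \(P=B\), one admissible choice is any length-compatible total order such as \(\mathsf{id}<\mathrm{s}_{1}<\mathrm{s}_{2}<\mathrm{s}_{1}\mathrm{s}_{2}<\mathrm{s}_{2}\mathrm{s}_{1}<\mathrm{s}_{1}\mathrm{s}_{2}\mathrm{s}_{1}\): each closure \(\overline{BwB}\) only involves Bruhat-smaller cells, so every union \(\cup_{w^{\prime}\leq w}Bw^{\prime}B\) is closed.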
Let \(\beta\in\mathrm{Br}_{n}^{+}\) be an \(n\)-strand positive braid with a braid presentation \(\beta=\sigma_{i_{\ell}}\cdots\sigma_{i_{1}}\) as usual. Recall that the braid variety \(X(\beta)\) has a \(B\)-equivariant decomposition (1.4.9):
\[X(\beta)=\sqcup_{p\in\mathcal{W}(\beta)}X_{p}(\beta).\]
**Lemma 3.19**.: _There exists a natural admissible total order on \(\mathcal{W}(\beta)\)._
Proof.: For each \(p=(p_{\ell}=\mathsf{id},\cdots,p_{1},p_{0}=\mathsf{id})\in\mathcal{W}(\beta)\), define locally closed subvarieties of \(X(\beta)\):
\[X_{(p_{i},\cdots,p_{0})}(\beta)=\cap_{1\leq j\leq i}\overline{f}_{j}^{-1}(Bp_ {j}B);\quad\overline{f}_{j}:X(\beta)\to G:\widetilde{\epsilon}=(\epsilon_{i})_ {i=\ell}^{1}\mapsto\mathrm{B}_{i_{j}}(\epsilon_{j})\cdots\mathrm{B}_{i_{1}}( \epsilon_{1}). \tag{3.3.2}\]
So, \(X_{(p_{i},\cdots,p_{0})}(\beta)=\sqcup_{p_{i+1}\in W}X_{(p_{i+1},\cdots,p_{0})}(\beta)\). Then by the Bruhat cell decomposition, we have
\[X_{(\leq p_{i+1},p_{i},\cdots,p_{0})}(\beta)=\cup_{w\leq p_{i+1}}X_{(p_{i}, \cdots,p_{0})}(\beta)\cap\overline{f}_{i+1}^{-1}(\overline{BwB}),\]
hence is closed in \(X_{(p_{i},\cdots,p_{0})}(\beta)\). In other words, the Bruhat total order on
\[W_{(p_{i},\cdots,p_{0})}\coloneqq\{p_{i+1}\in W:X_{(p_{i+1},\cdots,p_{0})}( \beta)\neq\varnothing\}\subset W\]
is admissible. Then by induction, Lemma 3.18 induces an admissible total order \(\leq\) on \(\mathcal{W}(\beta)\).
Finally, as promised in the proof of Theorem 3.15, we obtain
**Corollary 3.20**.: _There is a natural admissible total order on the cell decomposition:_
\[\mathcal{M}_{\boldsymbol{\mu}}=\sqcup_{(\widetilde{w},p)\in\mathcal{W}^{*}} \mathcal{M}_{\boldsymbol{\mu}}(\widetilde{w},p),\]
_such that \((\widetilde{w}_{m},p_{m})\) is the maximal index._
Proof.: This is more or less a consequence of Lemma 3.19, and the argument is similar.
Firstly, consider the decomposition
\[\mathcal{M}_{\boldsymbol{\mu}}=\sqcup_{\widetilde{w}\in W^{2g}\times\prod_{i=1}^ {k-1}W/W(C_{i})}\mathcal{M}_{\boldsymbol{\mu}}(\widetilde{w}).\]
Let's show that it's _admissible_: there exists an admissible total order on \(W^{2g}\times\prod_{i=1}^{k-1}W/W(C_{i})\). Equivalently by definition (1.2.8) and (1.2.7), it means the equivariant decomposition
\[M^{\prime}_{B}=\sqcup_{\widetilde{w}\in W^{2g}\times\prod_{i=1}^{k-1}W/W(C_{i })}M^{\prime}_{B}(\widetilde{w})\]
is admissible. Indeed, similar to the proof of Lemma 3.19, by Lemma 3.18, we obtain an admissible total order on \(W^{2g}\times\prod_{i=1}^{k-1}W/W(C_{i})\) as the compositions of the total Bruhat orders on \(G=\sqcup_{w\in W}BwB\) and \(G=\sqcup_{\widetilde{w}_{i}\in W/W(C_{i})}B\dot{w}_{i}P_{i}\).
Now, again by Lemma 3.18, it suffices to show that the decomposition
\[\mathcal{M}_{\boldsymbol{\mu}}(\widetilde{w})=\sqcup_{p\in W^{\ast}(\beta( \widetilde{w}))}\mathcal{M}_{\boldsymbol{\mu}}(\widetilde{w},p)\]
is admissible. By definition (2.4.23), it suffices to show that the equivariant decomposition
\[M^{\prime\prime}_{B}(\widetilde{w})=\sqcup_{p\in W^{\ast}(\beta(\widetilde{w }))}M^{\prime\prime}_{B}(\widetilde{w},p)\]
in Proposition 2.9 is admissible. By (2.3.7), it amounts to show that the equivariant decomposition
\[\widetilde{X}(\beta(\widetilde{w}))=\sqcup_{p\in W^{\ast}(\beta(\widetilde{w }))}\widetilde{X}_{p}(\beta(\widetilde{w}))\]
defined by (2.3.4), (2.4.1) is admissible. By definition, this follows from Lemma 3.19.
### Dual boundary complexes of generic character varieties
If \(\mathcal{M}_{\boldsymbol{\mu}}\) is only generic, we get:
**Proposition 3.21** (See also [58, Rmk.7.0.7]).: _If \((C_{1},\cdots,C_{k})\in T^{k}\) is generic (Definition 1.1) of type \(\boldsymbol{\mu}\), and \(\mathcal{M}_{\boldsymbol{\mu}}\) is nonempty, then \(\mathbb{D}\partial\mathcal{M}_{\boldsymbol{\mu}}\) is a rational homology \((d_{\boldsymbol{\mu}}-1)\)-sphere, \(d_{\boldsymbol{\mu}}=\dim\mathcal{M}_{\boldsymbol{\mu}}\)._
For completeness, we give a proof. First, Deligne's mixed Hodge structures [16, 17] satisfy:
**Proposition 3.22**.: _Let \(X\) be a complex variety of dimension \(d\)._
1. _(See_ _[_67_, Thm.5.39]__) The mixed Hodge structure (MHS) on_ \(H^{k}(X;\mathbb{Q})\) _satisfies:_ \[\operatorname{Gr}_{i}^{F}\operatorname{Gr}_{i+j}^{W}H^{k}(X;\mathbb{C})\neq 0 \ \Rightarrow\ \left\{\begin{array}{ll}0\leq i,j\leq k;\\ k-d\leq i,j\leq d,&\text{if $k>d$};\\ i+j\geq k,&\text{if $X$ is smooth};\\ i+j\leq k,&\text{if $X$ is proper}.\end{array}\right.\]
2. _(See_ _[_67_, Cor.5.47,Def.5.52]__) The MHS on_ \(H^{k}_{c}(X;\mathbb{Q})\) _has at most weights in_ \([0,k]\)_._
3. _If_ \(X\) _is smooth connected, then the Poincare duality_ \[H^{k}_{c}(X;\mathbb{Q})\times H^{2d-k}(X;\mathbb{Q})\to H^{2d}_{c}(X; \mathbb{Q})\cong\mathbb{Q}(-d)[-2d],\]
_is a perfect pairing compatible with MHS's, where \(\mathbb{Q}(-d)\) is the pure Hodge structure of weight \(2d\) on \(\mathbb{Q}\), with Hodge filtration \(F^{d}=\mathbb{Q},F^{d+1}=0\)._
(4) _(Andreotti-Frankel theorem [1, 42], [27, §5.1]) If \(X\) is irreducible affine, then \(X\) has the homotopy type of a finite CW complex of dimension \(\leq d\). So, \(H^{k}(X,\mathbb{Q})\neq 0\Rightarrow 0\leq k\leq d\)._
(5) _By (1)-(4), we conclude that, if \(X\) is a smooth affine variety of dimension \(d\), then_
\[\operatorname{Gr}_{i}^{\mathrm{F}}\operatorname{Gr}_{i+j}^{\mathrm{W}}H^{k}_{ c}(X;\mathbb{C})\neq 0\ \ \Rightarrow\ d\leq k\leq 2d,\ \ k-d\leq i,j\leq d,\ \ i+j\leq k. \tag{3.4.1}\]
Proof of Proposition 3.21.: By Lemma 1.2, \(\mathcal{M}_{\mu}\) is smooth connected affine of dimension \(d_{\mu}\), with \(d_{\mu}\) even. Recall that \(\mathcal{M}_{\mu}\) satisfies the curious hard Lefschetz property [54, Thm.1.5.3]:
\[\operatorname{Gr}_{d_{\mu}-2m}^{\mathrm{W}}H^{j}_{c}(\mathcal{M}_{\mu}, \mathbb{Q})\xrightarrow{\simeq}\operatorname{Gr}_{d_{\mu}+2m}^{\mathrm{W}}H^ {j+2m}_{c}(\mathcal{M}_{\mu},\mathbb{Q}).\]
So by Proposition 3.5, \(\widetilde{H}^{a-1}(\mathbb{D}\partial\mathcal{M}_{\mu},\mathbb{Q})\cong \operatorname{Gr}_{0}^{\mathrm{W}}H^{a}_{c}(\mathcal{M}_{\mu},\mathbb{Q}) \cong\operatorname{Gr}_{2d_{\mu}}^{\mathrm{W}}H^{a+d_{\mu}}_{c}(\mathcal{M}_{ \mu},\mathbb{Q})\). Hence,
\[\widetilde{H}^{a-1}(\mathbb{D}\partial\mathcal{M}_{\mu},\mathbb{Q})\neq 0 \Rightarrow 2d_{\mu}\leq a+d_{\mu}\leq 2d_{\mu}\Leftrightarrow a=d_{\mu},\]
by (3.4.1). Moreover, in this case, i.e. \(a=d_{\mu}\), we have
\[\widetilde{H}^{d_{\mu}-1}(\mathbb{D}\partial\mathcal{M}_{\mu},\mathbb{Q}) \cong\operatorname{Gr}_{2d_{\mu}}^{\mathrm{W}}H^{2d_{\mu}}_{c}(\mathcal{M}_{ \mu},\mathbb{Q})=\mathbb{Q}.\]
This finishes the proof.
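For instance, for the two-dimensional character variety of Section 2.5 (\(d_{\mu}=2\)), the proposition says that \(\mathbb{D}\partial\mathcal{M}_{\mu}\) is a rational homology circle; when the eigenvalue data is moreover very generic, Theorem 3.15 upgrades this to a homotopy equivalence \(\mathbb{D}\partial\mathcal{M}_{\mu}\sim S^{1}\).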
## 4. Motives of character varieties
In this section, the purpose is twofold:
\(\bullet\) First, we review Gillet-Soule's motivic weight complexes and Betti weight cohomology with compact support. The latter generalizes Deligne's weight filtration on compactly supported rational cohomology (Lemma 4.8). It's used to capture the integral cohomology of dual boundary complexes (Proposition 4.10), a key tool in the proof of our main theorem 3.15.
\(\bullet\) Second, we propose a conjectural formula (Conjecture 4.20) for the motive with compact support of generic character varieties \(\mathcal{M}_{\mu}\), as a motivic promotion of the HLRV conjecture [36, Conj.1.2.1-1.2.2] on their mixed Hodge polynomials. It captures information with integer coefficients, in a way compatible with the homotopy type conjecture 0.2 (Proposition 4.24). As a partial evidence, we prove a weak form of the conjectural formula for very generic character varieties (Proposition 4.22). Besides, we verify the conjecture in three simple examples.
### Chow and geometric motives
#### 4.1.1. Chow motives
We review the basics on Chow motives and introduce some notations:
1. Let \(\mathbf{SmProj}(\mathbb{K})\subset\mathbf{Sm}(\mathbb{K})\subset\mathbf{Var}(\mathbb{K})\) be the categories of smooth projective varieties, smooth varieties, and varieties over \(\mathbb{K}\), respectively. For any \(X\in\mathbf{SmProj}(\mathbb{K})\) with connected components \(X_{i},i\in I\), and \(r\geq 0\), its Chow homology (resp. cohomology) group of algebraic cycles of dimension \(r\) (resp. codimension \(r\)) is \[\operatorname{CH}_{r}(X)=\oplus_{i\in I}\operatorname{CH}_{r}(X_{i})\quad(\text{resp. }\operatorname{CH}^{r}(X)=\oplus_{i\in I}\operatorname{CH}^{r}(X_{i})).\]
2. Let \(\mathbf{Cor}_{\text{rat}}(\mathbb{K})\) be the category of (deg 0) rational correspondences over \(\mathbb{K}\), whose objects are the same as \(\mathbf{SmProj}(\mathbb{K})\), but with morphisms (4.1.1) \[\operatorname{Hom}_{\mathbf{Cor}_{\text{rat}}(\mathbb{K})}(X,Y)=\oplus_{j\in J }\operatorname{CH}^{\dim Y_{j}}(Y_{j}\times X),\] where \(Y_{j}\), \(j\in J\) are the connected components of \(Y\).
3. Let \(\mathbf{Chow}_{\text{rat}}^{\text{eff}}(\mathbb{K})\) be the category of (covariant) effective pure Chow motives over \(\mathbb{K}\), defined as the idempotent completion of \(\mathbf{Cor}_{\text{rat}}(\mathbb{K})\). The objects are pairs \((X,p)\), where \(X\in\mathbf{SmProj}(\mathbb{K})\) and \(p\in\operatorname{Hom}_{\mathbf{Cor}_{\text{rat}}(\mathbb{K})}(X,X)\) is a projector. The morphisms are \[\operatorname{Hom}_{\mathbf{Chow}_{\text{rat}}^{\text{eff}}(\mathbb{K})}((X, p),(Y,q))=q\operatorname{Hom}_{\mathbf{Cor}_{\text{rat}}}(X,Y)p=\{f\in \operatorname{Hom}_{\mathbf{Cor}_{\text{rat}}(\mathbb{K})}(X,Y):q\circ f=f=f \circ p\}.\] By definition, \(\mathbf{Chow}_{\text{rat}}^{\text{eff}}(\mathbb{K})\) is a _pseudo-abelian_ category, i.e. it's additive, and projectors have images (equivalently, kernels). Moreover, it's symmetric monoidal with direct sum \[(X,p)\oplus(Y,q):=(X\sqcup Y,p+q),\] and tensor product \[(X,p)\otimes(Y,q):=(X\times Y,[p\times q]).\]
4. We have a canonical covariant graph functor \[\operatorname{M}_{\text{rat}}:\mathbf{SmProj}(\mathbb{K})\to\mathbf{Chow}_{ \text{rat}}^{\text{eff}}(\mathbb{K}):X\mapsto\operatorname{M}_{\text{rat}}(X )=(X,[\operatorname{id}_{X}]=[\Delta_{X}]),\quad(f:X\to Y)\mapsto[\Gamma_{f}],\]
5. The opposite category \(\mathbf{Chow}_{\text{rat}}^{\text{eff}}(\mathbb{K})^{\text{op}}\) will be called the category of _contravariant effective pure Chow motives_. Let \(K^{b}(\mathbf{Chow}_{\text{rat}}^{\text{eff}}(\mathbb{K})^{\text{op}})= \operatorname{Ho}(\mathbf{Coch}^{b}(\mathbf{Chow}_{\text{rat}}^{\text{eff}}( \mathbb{K})^{\text{op}}))\) be the homotopy category of bounded cochain complexes in \(\mathbf{Chow}_{\text{rat}}^{\text{eff}}(\mathbb{K})^{\text{op}}\). Unfortunately, the literature involves different and somewhat confusing conventions. To clarify our situation here, we make the following **Convention**5:
\(\bullet\) By reversing the arrows, we can always identify \(K^{b}(\mathbf{Chow}_{\text{rat}}^{\text{eff}}(\mathbb{K})^{\text{op}})\) with the homotopy category \(K_{b}(\mathbf{Chow}_{\text{rat}}^{\text{eff}}(\mathbb{K}))=\operatorname{Ho}( \mathbf{Ch}_{b}(\mathbf{Chow}_{\text{rat}}^{\text{eff}}(\mathbb{K})))\) of bounded chain complexes in \(\mathbf{Chow}_{\text{rat}}^{\text{eff}}(\mathbb{K})\):
\[K_{b}(\mathbf{Chow}_{\text{rat}}^{\text{eff}}(\mathbb{K}))\cong K^{b}(\mathbf{ Chow}_{\text{rat}}^{\text{eff}}(\mathbb{K})^{\text{op}}):C\mapsto C^{ \text{op}},\quad C_{i}=(C^{\text{op}})^{i}.\]
\(\bullet\) By negating the degrees, we have the following identification:
\[K_{b}(\mathbf{Chow}_{\text{rat}}^{\text{eff}}(\mathbb{K}))\cong K^{b}(\mathbf{ Chow}_{\text{rat}}^{\text{eff}}(\mathbb{K})):C\mapsto C^{-},\quad C_{i}=(C^{-})^{-i}.\]
#### 4.1.2. Geometric motives
We give a crash review on geometric motives [81, 64]:
1. \(\mathbf{FinCor}(\mathbb{K})\): category of finite correspondences over \(\mathbb{K}\), with objects in \(\mathbf{Sm}(\mathbb{K})\), and morphisms: \[\operatorname{Hom}_{\mathbf{FinCor}(\mathbb{K})}(X,Y):=\oplus_{i\in I} \operatorname{Hom}_{\mathbf{FinCor}(\mathbb{K})}(X_{i},Y),\quad X_{i}\in\text{ connected components of }X,\] where \(\operatorname{Hom}_{\mathbf{FinCor}(\mathbb{K})}(X_{i},Y)\) is the free abelian group on integral closed subschemes of \(X_{i}\times Y\) which are finite surjective over \(X_{i}\).
2. \(\mathbf{PreSh}(\mathbf{FinCor}(\mathbb{K}))\): category of abelian presheaves (_presheaves with transfers_) on \(\mathbf{FinCor}(\mathbb{K})\). \(\mathbf{Sh}_{\mathrm{Nis}}(\mathbf{FinCor}(\mathbb{K}))\): (abelian) category of _Nisnevich sheaves with transfers_ over \(\mathbb{K}\), i.e. presheaf with transfers that restricts to a sheaf on \(\mathbf{Sm}(\mathbb{K})\) with Nisnevich topology. \(D^{-}\mathbf{Sh}_{\mathrm{Nis}}(\mathbf{FinCor}(\mathbb{K}))\): its derived category of cohomologically bounded above cochain complexes. By Yoneda lemma, we obtain embeddings of \(\mathbf{Sm}(\mathbb{K})\) and \(\mathbf{FinCor}(\mathbb{K})\) into \(\mathbf{PreSh}(\mathbf{FinCor}(\mathbb{K}))\): (4.1.2) \[\mathbf{Sm}(\mathbb{K})\to\mathbf{FinCor}(\mathbb{K})\to\mathbf{PreSh}( \mathbf{FinCor}(\mathbb{K})):X\mapsto X\mapsto\mathbb{Z}_{\mathrm{tr}}(X).\]
\(\mathbb{Z}_{\mathrm{tr}}\) extends to \(\mathbf{Var}(\mathbb{K})\)[64, 2.11]. By [64, Lem.6.2], \(\mathbb{Z}_{\mathrm{tr}}(X)\in\mathbf{Sh}_{\mathrm{Nis}}(\mathbf{FinCor}( \mathbb{K})),\forall X\in\mathbf{Var}(\mathbb{K})\).
3. The _(tensor) triangulated category of effective (integral) motives over \(\mathbb{K}\)_ is the localization
\[\mathbf{DM}^{\mathrm{eff},-}_{\mathrm{Nis}}(\mathbb{K}):=\mathbf{D}^{-} \mathbf{Sh}_{\mathrm{Nis}}(\mathbf{FinCor}(\mathbb{K}))[\mathscr{W}_{\mathbb{ A}}^{-1}],\]
with \(\mathscr{W}_{\mathbb{A}}\) the class of \(\mathbb{A}^{1}\)-_weak equivalences_, generated by \(\mathbb{Z}_{\mathrm{tr}}(X\times\mathbb{A}^{1})\to\mathbb{Z}_{\mathrm{tr}}(X)\), \(X\in\mathbf{Sm}(\mathbb{K})\).
4. There are two functors (4.1.3) \[\mathrm{M}:\mathbf{Var}(\mathbb{K})\to\mathbf{DM}^{\mathrm{eff},-}_{ \mathrm{Nis}}(\mathbb{K}):X\mapsto\mathrm{M}(X)=\mathbb{Z}_{\mathrm{tr}}(X).\] (4.1.4) \[\mathrm{M}^{c}:\mathbf{Var}^{\mathrm{prop}}(\mathbb{K})\to\mathbf{DM }^{\mathrm{eff},-}_{\mathrm{Nis}}(\mathbb{K}):X\mapsto\mathrm{M}^{c}(X).\] ((effective) motives with compact support)
where \(\mathbf{Var}^{\mathrm{prop}}(\mathbb{K})\) denotes the full subcategory of \(\mathbf{Var}(\mathbb{K})\) with only proper morphisms.
Also, \(\mathrm{M}^{c}(X)\) is contravariant for etale morphisms, and \(\mathrm{M}(X)=\mathrm{M}^{c}(X)\) if \(X\in\mathbf{SmProj}(\mathbb{K})\).
(5) The _(tensor) triangulated category \(\mathbf{DM}^{\mathrm{eff}}_{\mathrm{gm}}(\mathbb{K})\) of effective geometric motives over \(\mathbb{K}\)_ is the thick subcategory of \(\mathbf{DM}^{\mathrm{eff},-}_{\mathrm{Nis}}(\mathbb{K})\) generated by \(\mathrm{M}(X)\), \(X\in\mathbf{Sm}(\mathbb{K})\).
Recall that \(\mathrm{M}^{c}(X)\in\mathbf{DM}^{\mathrm{eff}}_{\mathrm{gm}}(\mathbb{K})\) for all \(X\in\mathbf{Var}(\mathbb{K})\) (see [64, Cor.16.17]).
(6) Let \(\mathbb{Z}(1):=\mathrm{M}^{c}(\mathbb{A}^{1})[-2]=\mathrm{Coker}(\mathrm{M}( \mathrm{pt})\to\mathrm{M}(\mathbb{P}^{1}))[-2]\in\mathbf{DM}^{\mathrm{eff}}_{ \mathrm{gm}}(\mathbb{K})\). The _Tate twist_ functor is
\[-\otimes\mathbb{Z}(1):\mathbf{DM}^{\mathrm{eff},-}_{\mathrm{Nis}}(\mathbb{K}) \to\mathbf{DM}^{\mathrm{eff},-}_{\mathrm{Nis}}(\mathbb{K}):\mathrm{M}\mapsto \mathrm{M}(1):=\mathrm{M}\otimes\mathbb{Z}(1).\]
Denote \(\mathrm{M}(m):=\mathrm{M}\otimes\mathbb{Z}(1)^{\otimes m}\). The _(geometric) Lefschetz motive_ is \(\mathbf{L}:=\mathrm{M}^{c}(\mathbb{A}^{1})=\mathbb{Z}(1)[2]\).
(7) For any \(X\in\mathbf{Var}(\mathbb{K})\), the _motivic cohomology_ (with integer coefficients) of \(X\) is
\[H^{n,i}(X,\mathbb{Z}):=\mathrm{Hom}_{\mathbf{DM}^{\mathrm{eff},-}_{\mathrm{Nis }}(\mathbb{K})}(\mathrm{M}(X),\mathbb{Z}(i)[n])\in\mathcal{A}b.\]
Among many other things, we list some key properties of effective geometric motives.
**Theorem 4.1** (See e.g. [64]).: _The category \(\mathbf{DM}^{\mathrm{eff},-}_{\mathrm{Nis}}(\mathbb{K})\) satisfies the following properties:_
1. _For all_ \(X,Y\in\mathbf{Var}(\mathbb{K})\)_, we have_ \[\mathrm{M}(X\times\mathbb{A}^{1})\cong\mathrm{M}(X),\quad\mathrm{M} (X\times Y)\cong\mathrm{M}(X)\otimes\mathrm{M}(Y).\] \[\mathrm{M}^{c}(X\times\mathbb{A}^{1})\cong\mathrm{M}^{c}(X)(1)[2], \quad\mathrm{M}^{c}(X\times Y)\cong\mathrm{M}^{c}(X)\otimes\mathrm{M}^{c}(Y).\]
2. _(Mayer-Vietoris)_ \(X\in\mathbf{Sm}_{\mathbb{K}}\) _with open cover_ \(\{U,V\}\) _induces an exact triangle in_ \(\mathbf{DM}^{\mathrm{eff},-}_{\mathrm{Nis}}(\mathbb{K})\)_:_ \[\mathrm{M}(U\cap V)\to\mathrm{M}(U)\oplus\mathrm{M}(V)\to\mathrm{M}(X)\to \mathrm{M}(U\cap V)[1].\]
_(3) (Triangle for \(\mathrm{M}^{c}\)) Any closed subvariety \(i:Z\hookrightarrow X\) with open complement \(j:U\hookrightarrow X\) induces an exact triangle:_
\[\mathrm{M}^{c}(Z)\xrightarrow{i_{*}}\mathrm{M}^{c}(X)\xrightarrow{j^{*}} \mathrm{M}^{c}(U)\to\mathrm{M}^{c}(Z)[1].\]
_(4) (Vector bundle) If \(E\to X\) is a vector bundle, then_
\[\mathrm{M}(E)\xrightarrow{\cong}\mathrm{M}(X).\]
_(5) (Projective bundle) Let \(\mathcal{E}\) be a vector bundle of rank \(n+1\) over \(X\) and \(\mathbb{P}(\mathcal{E})\) be the associated projective bundle, then the canonical map induces an isomorphism_
\[\oplus_{i=0}^{n}M(X)(i)[2i]\xrightarrow{\cong}M(\mathbb{P}(\mathcal{E})).\]
_(6) (Blow-up triangle) Let \(X^{\prime}\to X\) be a blow-up with center \(Z\), and \(Z^{\prime}=Z\times_{X}X^{\prime}\), then there is a blow-up triangle:_
\[M(Z^{\prime})\to M(X^{\prime})\oplus M(Z)\to M(X)\to M(Z^{\prime})[1].\]
_If moreover \(X\) and \(Z\) are smooth, and \(Z\) has codimension \(c\), then_
\[M(X^{\prime})\cong M(X)\oplus(\oplus_{i=1}^{c-1}M(Z)(i)[2i]).\]
_(7) (Duality) For \(T\in\mathbf{Sm}(\mathbb{K})\) of dimension \(d\), \(X,Y\in\mathbf{Var}(\mathbb{K})\), we get canonical isomorphisms_
\[\mathrm{Hom}_{\mathbf{DM}_{\mathrm{Nis}}^{\mathrm{eff},-}(\mathbb{K})}( \mathrm{M}(X\times T)[n],\mathrm{M}^{c}(Y))\cong\mathrm{Hom}_{\mathbf{DM}_{ \mathrm{Nis}}^{\mathrm{eff},-}(\mathbb{K})}(\mathrm{M}(X)(d)[d+n],\mathrm{M}^{ c}(T\times Y)).\]
_(8) (Cancellation_ _[_82_]_) The Tate twist_ \(-\otimes\mathbb{Z}(1):\mathbf{DM}_{\mathrm{Nis}}^{\mathrm{eff},-}(\mathbb{K}) \to\mathbf{DM}_{\mathrm{Nis}}^{\mathrm{eff},-}(\mathbb{K})\) _is_ fully faithful.
(9) _(Chow motives) By [81], there exists a covariant embedding_
\[\iota:\mathbf{Chow}_{\mathrm{rat}}^{\mathrm{eff}}(\mathbb{K})\to\mathbf{DM}_{ \mathrm{gm}}^{\mathrm{eff}}(\mathbb{K}), \tag{4.1.5}\]
_such that \(\iota(\mathrm{M}_{\mathrm{rat}}(X))=\mathrm{M}(X)=\mathrm{M}^{c}(X)\) for all \(X\in\mathbf{SmProj}(\mathbb{K})\)._
(10) _(Higher Chow groups) For any \(X\in\mathbf{Sm}(\mathbb{K})\), any \(n\), and \(i\geq 0\), there is a natural isomorphism_
\[H^{n,i}(X,\mathbb{Z})\xrightarrow{\cong}\mathrm{CH}^{i}(X,2i-n).\]
_In particular, we have \(H^{2i,i}(X,\mathbb{Z})\cong\mathrm{CH}^{i}(X)\)._
_(11) (Vanishing theorem) By_ _[_64_, Thm.3.6,Thm.19.3]__, for any \(X\in\mathbf{Sm}(\mathbb{K})\), we have_
\[H^{n,i}(X,\mathbb{Z})=0,\ \ \forall n>\min\{i+\dim X,2i\}.\]
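As a quick illustration of properties (1), (3) and (10) (a standard computation, recorded here for convenience): applying (3) to \(\{0\}\subset\mathbb{A}^{1}\) gives an exact triangle
\[\mathbb{Z}=\mathrm{M}^{c}(\mathrm{pt})\to\mathrm{M}^{c}(\mathbb{A}^{1})=\mathbf{L}\to\mathrm{M}^{c}(\mathbb{K}^{\times})\to\mathbb{Z}[1].\]
Since \(\mathrm{Hom}(\mathbb{Z},\mathbf{L})=H^{2,1}(\mathrm{Spec}\,\mathbb{K},\mathbb{Z})\cong\mathrm{CH}^{1}(\mathrm{Spec}\,\mathbb{K})=0\) by (10), the first map vanishes and the triangle splits, so \(\mathrm{M}^{c}(\mathbb{K}^{\times})\cong\mathbf{L}\oplus\mathbb{Z}[1]\); by (1), \(\mathrm{M}^{c}((\mathbb{K}^{\times})^{a})\cong(\mathbf{L}\oplus\mathbb{Z}[1])^{\otimes a}\cong\oplus_{j=0}^{a}(\mathbf{L}^{\otimes j}[a-j])^{\oplus\binom{a}{j}}\).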
When \(X\) is smooth quasi-projective, the motive with compact support \(\mathrm{M}^{c}(X)\) admits a concrete description in terms of smooth projective varieties. Suppose \(X\) is of pure dimension \(d\). We fix a log compactification \((\overline{X},\partial X)\) with very simple normal crossing divisor (Definition 3.1). Say, \(\partial X=\cup_{i=1}^{n}Y_{i}\) is the union of irreducible components. For any subset \(I\subset[n]=\{1,\cdots,n\}\), denote \(Y_{I}:=\cap_{i\in I}Y_{i}\). Then, \(Y_{I}\subset\overline{X}\) is either empty or smooth connected. For any integer \(k\geq 1\), denote
\[Y^{(k)}:=\sqcup_{I\subset[n]:|I|=k}Y_{I},\]
and \(Y^{(0)}=Y_{\varnothing}:=\overline{X}\). In particular, \(\dim Y^{(k)}=\dim X-k=d-k\) (if nonempty).
For any \(1\leq j\leq k\), let \(\delta_{j}:Y^{(k)}\to Y^{(k-1)}\) be the disjoint union of inclusions \(Y_{I}\hookrightarrow Y_{I\setminus\{i_{j}\}}\), \(I=\{i_{1}<\ldots<i_{k}\}\subset[n]\). Then
\[\partial=\partial^{(k)}:=\sum_{j=1}^{k}(-1)^{j-1}\delta_{j}:\operatorname{M} (Y^{(k)})\to\operatorname{M}(Y^{(k-1)}). \tag{4.1.6}\]
defines a morphism in \(\operatorname{\mathbf{DM}}^{\operatorname{eff},-}_{\operatorname{Nis}}( \mathbb{K})\). It's direct to see that \(\partial^{2}=0\).
In addition, observe that \(\delta:\operatorname{M}(Y^{(1)})=\operatorname{M}^{c}(Y^{(1)})\to \operatorname{M}(\overline{X})=\operatorname{M}^{c}(\overline{X})\) factors through \(\operatorname{M}^{c}(\partial X)\), so by Theorem 4.1.(3), the composition \(\operatorname{M}(Y^{(1)})\xrightarrow{\partial}\operatorname{M}(\overline{X })=\operatorname{M}^{c}(\overline{X})\xrightarrow{\epsilon}\operatorname{M}^{ c}(X)\) is zero.
**Lemma 4.2**.: _The total complex of_
\[\operatorname{M}(Y^{(d)})\xrightarrow{\partial}\ldots\xrightarrow{\partial} \operatorname{M}(Y^{(1)})\xrightarrow{\partial}\operatorname{M}(\overline{X })\xrightarrow{\epsilon}\operatorname{M}^{c}(X)\]
_is acyclic in \(\operatorname{\mathbf{DM}}^{\operatorname{eff},-}_{\operatorname{Nis}}( \mathbb{K})\). In other words, \(\operatorname{M}^{c}(X)\) is naturally isomorphic to the cochain complex_
\[\operatorname{M}(Y^{(\bullet)}):=[\operatorname{M}(Y^{(d)})\xrightarrow{ \partial}\ldots\xrightarrow{\partial}\operatorname{M}(Y^{(1)})\xrightarrow{ \partial}\operatorname{M}(Y^{(0)})],\]
_where \(\operatorname{M}(Y^{(k)})\) lies in degree \(-k\)._
Proof.: We prove by induction on \(n\). The case \(n=1\) is Theorem 4.1.(3). For the induction, suppose the statement holds for '\(<n\)', we prove the case '\(n\)'.
Denote \(Y:=\partial X=\cup_{i=1}^{n}Y_{i}\subset\overline{X}\), \(Y^{\prime}:=\cup_{i=1}^{n-1}Y_{i}\subset Y\subset\overline{X}\), and \(X^{\prime}:=\overline{X}-Y^{\prime}\). Define \(Y^{\prime(k)}:=\sqcup_{I\subset[n-1],|I|=k}Y_{I}\). Consider the following commutative diagram in \(\operatorname{\mathbf{DM}}^{\operatorname{eff},-}_{\operatorname{Nis}}(\mathbb{K})\)
with each row a cochain complex. By inductive hypothesis, both of the two rows are acyclic in \(\operatorname{\mathbf{DM}}^{\operatorname{eff},-}_{\operatorname{Nis}}( \mathbb{K})\). In short, we may rewrite the commutative diagram as
In particular, the induced morphism \(\epsilon:\operatorname{Cone}(\delta_{\bullet})\to\operatorname{Cone}(i^{ \prime}_{*})\) is an isomorphism in \(\operatorname{\mathbf{DM}}^{\operatorname{eff},-}_{\operatorname{Nis}}( \mathbb{K})\).
Notice that \(Y^{(k)}=Y^{\prime(k)}\sqcup(Y_{n}\cap Y^{\prime(k-1)})\), hence \(\operatorname{M}(Y^{(k)})=\operatorname{M}(Y^{\prime(k)})\oplus\operatorname{ M}(Y_{n}\cap Y^{\prime(k-1)})\). We then see that \(\operatorname{M}(Y^{\bullet})=\operatorname{Cone}(\delta_{\bullet}).\) By Theorem 4.1.(3), we have a distinguished triangle in \(\operatorname{\mathbf{DM}}^{\operatorname{eff},-}_{\operatorname{Nis}}( \mathbb{K})\):
\[\operatorname{M}^{c}(Y_{n}\setminus Y^{\prime})\xrightarrow{i^{\prime}_{*}} \operatorname{M}^{c}(X^{\prime})\to\operatorname{M}^{c}(X)\to\operatorname{M}^ {c}(Y_{n}\setminus Y^{\prime})[1].\]
This means the induced isomorphism is in fact
\[\epsilon\,:\,\mathrm{M}(Y^{\bullet})=\mathrm{Cone}(\delta_{\bullet})\xrightarrow{ \cong}\mathrm{Cone}(i^{\prime}_{*})\cong\mathrm{M}^{c}(X).\]
We're done.
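For instance, take \(X=\mathbb{G}_{m}\), \(\overline{X}=\mathbb{P}^{1}\), \(\partial X=\{0\}\sqcup\{\infty\}\). Lemma 4.2 then gives \(\operatorname{M}^{c}(\mathbb{G}_{m})\cong[\operatorname{M}(\mathrm{pt})^{\oplus 2}\xrightarrow{\partial}\operatorname{M}(\mathbb{P}^{1})]\). Both points induce the same map \(\mathbb{Z}\to\operatorname{M}(\mathbb{P}^{1})=\mathbb{Z}\oplus\mathbb{Z}(1)[2]\) (their difference lies in \(\operatorname{Hom}(\mathbb{Z},\mathbb{Z}(1)[2])=\operatorname{CH}^{1}(\operatorname{Spec}\mathbb{K})=0\)), so this complex is isomorphic to \(\mathbb{Z}[1]\oplus\mathbb{Z}(1)[2]\), recovering the value of \(\operatorname{M}^{c}(\mathbb{K}^{\times})\) used in Example 4.27 below.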
### Motivic weight complexes
#### 4.2.1. Weight complexes
Now, we recall an important result of Gillet and Soule, which associates to any \(\mathbb{K}\)-variety \(X\) a canonical (motivic) weight chain complex \(\mathrm{W}_{\mathrm{c}}(X)\in K_{b}(\mathbf{Chow}_{\mathrm{rat}}^{\mathrm{eff}}(\mathbb{K}))\) (equivalently, weight cochain complex \(\mathrm{W}(X)=\mathrm{W}_{\mathrm{c}}(X)^{\mathrm{op}}\in K^{b}(\mathbf{Chow}_{\mathrm{rat}}^{\mathrm{eff}}(\mathbb{K})^{\mathrm{op}})\) in [30], or \(\mathrm{W}_{\mathrm{c}}^{-}(X)=(\mathrm{W}_{\mathrm{c}}(X))^{-}\in K^{b}(\mathbf{Chow}_{\mathrm{rat}}^{\mathrm{eff}}(\mathbb{K}))\)). What matters for us is that \(\mathrm{W}_{\mathrm{c}}(X)\) knows the _integral_ homology of the dual boundary complex of \(X\) (Proposition 4.10).
Up to a notation change, we have the following:
**Theorem 4.3** ([30, Thm.2]).: _For any \(\mathbb{K}\)-variety \(X\), there exists a **weight complex**\(\mathrm{W}_{\mathrm{c}}(X)\in K_{b}(\mathbf{Chow}_{\mathrm{rat}}^{\mathrm{ eff}}(\mathbb{K}))\), well-defined up to canonical isomorphism, such that:_
(i) \(\mathrm{W}_{\mathrm{c}}(X)\) _is isomorphic to a bounded chain complex of the form_ \[\mathrm{M}_{\mathrm{rat}}(X_{k})\to\cdots\to\mathrm{M}_{\mathrm{rat}}(X_{1})\to\mathrm{M}_{\mathrm{rat}}(X_{0}),\quad X_{i}\in\mathbf{SmProj}(\mathbb{K}),\]
_where \(\mathrm{M}_{\mathrm{rat}}(X_{i})\) is in degree \(i\), and \(\dim(X_{i})\leq\dim X-i\). In particular, \(k\leq\dim X\)._
_In addition, if \(X\in\mathbf{SmProj}(\mathbb{K})\), then \(\mathrm{W}_{\mathrm{c}}(X)=\mathrm{M}_{\mathrm{rat}}(X)\)._
(ii) \(\mathrm{W}_{\mathrm{c}}(X)\) _is covariant for proper morphisms, and contravariant for open inclusions._
(iii) _For any open subset_ \(j:U\hookrightarrow X\) _with closed complement_ \(i:Z\hookrightarrow X\)_, there is a canonical triangle in_ \(K_{b}(\mathbf{Chow}_{\mathrm{rat}}^{\mathrm{eff}}(\mathbb{K}))\)_:_ \[\mathrm{W}_{\mathrm{c}}(Z)\xrightarrow{i_{*}}\mathrm{W}_{\mathrm{c}}(X)\xrightarrow{j^{*}}\mathrm{W}_{\mathrm{c}}(U)\to\mathrm{W}_{\mathrm{c}}(Z)[1].\]
(iv) _Any cover_ \(X=A\cup B\) _by two closed subvarieties gives a canonical triangle in_ \(K_{b}(\mathbf{Chow}_{\mathrm{rat}}^{\mathrm{eff}}(\mathbb{K}))\)_:_ \[\mathrm{W}_{\mathrm{c}}(A\cap B)\to\mathrm{W}_{\mathrm{c}}(A)\oplus\mathrm{W}_{\mathrm{c}}(B)\to\mathrm{W}_{\mathrm{c}}(X)\to\mathrm{W}_{\mathrm{c}}(A\cap B)[1].\]
_Any cover_ \(X=U\cup V\) _by two open subsets gives a canonical triangle in_ \(K_{b}(\mathbf{Chow}_{\mathrm{rat}}^{\mathrm{eff}}(\mathbb{K}))\)_:_
\[\mathrm{W}_{\mathrm{c}}(X)\to\mathrm{W}_{\mathrm{c}}(U)\oplus\mathrm{W}_{ \mathrm{c}}(V)\to\mathrm{W}_{\mathrm{c}}(U\cap V)\to\mathrm{W}_{\mathrm{c}}(X)[ 1].\]
(v) _For any_ \(X,Y\in\mathbf{Var}(\mathbb{K})\)_, we have_ \[\mathrm{W}_{\mathrm{c}}(X\times Y)=\mathrm{W}_{\mathrm{c}}(X)\otimes\mathrm{W}_{\mathrm{c}}(Y).\]
Alternatively, the weight complex can be described via Bondarko's weight complex functor.
**Theorem 4.4**.: _[_5_, 3.3.1,6.3.1,6.4.2,6.6.2]_ _There is a conservative exact functor_
\[t:\mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}}(\mathbb{K})\to K^{b}(\mathbf{Chow}_{ \mathrm{rat}}^{\mathrm{eff}}(\mathbb{K})), \tag{4.2.1}\]
_called the **weight complex functor**, such that:_
(a) _The composition_ \(t\circ\iota:\mathbf{Chow}_{\mathrm{rat}}^{\mathrm{eff}}(\mathbb{K})\hookrightarrow\mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}}(\mathbb{K})\to K^{b}(\mathbf{Chow}_{\mathrm{rat}}^{\mathrm{eff}}(\mathbb{K}))\) _is the obvious inclusion._
(b) \(t\) _induces an isomorphism on the Grothendieck rings:_ \[K_{0}(t):K_{0}(\mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}}(\mathbb{K}))\xrightarrow{\cong}K_{0}(\mathbf{Chow}_{\mathrm{rat}}^{\mathrm{eff}}(\mathbb{K})).\]
(c) _There exists a natural isomorphism_ \[t\circ\mathrm{M}^{c}\cong\mathrm{W}_{\mathrm{c}}^{-}:\mathbf{Var}(\mathbb{K})\to K^{b}(\mathbf{Chow}_{\mathrm{rat}}^{\mathrm{eff}}(\mathbb{K})).\]
_Here, by **Convention** 5,_ \(\mathrm{W}_{\mathrm{c}}^{-}(X)=(\mathrm{W}_{\mathrm{c}}(X))^{-}\)_._
By Lemma 4.2 and Theorem 4.4, we obtain an alternative proof of the following
**Proposition 4.5** ([30, Prop.3]).: _If \(X\) is a smooth quasi-projective \(\mathbb{K}\)-variety of pure dimension \(d\) as in Lemma 4.2, then \(\mathrm{W}_{\mathrm{c}}(X)\) is isomorphic to the following chain complex in \(K_{b}(\mathbf{Chow}_{\mathrm{rat}}^{\mathrm{eff}}(\mathbb{K}))\):_
\[\mathrm{M}_{\mathrm{rat}}(Y^{(d)})\xrightarrow{\partial}\ldots\xrightarrow{ \partial}\mathrm{M}_{\mathrm{rat}}(Y^{(1)})\xrightarrow{\partial}\mathrm{M}_ {\mathrm{rat}}(\overline{X}),\]
_where \(M_{\mathrm{rat}}(Y^{(k)})\) is concentrated in homological degree \(k\)._
Alternatively, by **Convention** 5, \(\mathrm{W}(X)=\mathrm{W}_{\mathrm{c}}(X)^{\mathrm{op}}\) is the cochain complex in \(K^{b}(\mathbf{Chow}_{\mathrm{rat}}^{\mathrm{eff}}(\mathbb{K})^{\mathrm{op}})\):
\[\mathrm{M}_{\mathrm{rat}}(\overline{X})\xrightarrow{\partial^{*}}\mathrm{M}_{ \mathrm{rat}}(Y^{(1)})\xrightarrow{\partial^{*}}\ldots\xrightarrow{\partial^{* }}\mathrm{M}_{\mathrm{rat}}(Y^{(d)}), \tag{4.2.2}\]
where \(M_{\mathrm{rat}}(Y^{(k)})\) is concentrated in cohomological degree \(k\).
#### 4.2.2. Weight cohomology with compact support
Let \(\mathfrak{R}:\mathbf{Chow}_{\mathrm{rat}}^{\mathrm{eff}}(\mathbb{K})\to \mathcal{A}\) be a contravariant additive functor into an abelian category. Equivalently, \(\mathfrak{R}:\mathbf{Chow}_{\mathrm{rat}}^{\mathrm{eff}}(\mathbb{K})^{\mathrm{ op}}\to\mathcal{A}\) is a covariant additive functor.
**Definition 4.6** (Weight cohomology with compact support).: For any \(X\in\mathbf{Var}(\mathbb{K})\), the _\(a\)-th weight cohomology with compact support_ of \(X\) associated to \(\mathfrak{R}\) is
\[H^{a}_{\mathrm{W},c}(X,\mathfrak{R}):=H^{a}(\mathfrak{R}(\mathrm{W}_{\mathrm{c} }(X)^{\mathrm{op}}))=H^{a}(\mathfrak{R}(\mathrm{W}(X)))\in\mathcal{A}.\]
By definition, the weight cohomology with compact support satisfies the following properties:
1. \(H^{a}_{\mathrm{W},c}(X,\mathfrak{R})\) is contravariant in \(X\), and equals zero for \(a<0\) or \(a>\dim X\).
2. For \(X\in\mathbf{SmProj}(\mathbb{K})\), \(H^{a}_{\mathrm{W},c}(X,\mathfrak{R})=\mathfrak{R}(\mathrm{M}_{\mathrm{rat}}(X))\) for \(a=0\), and vanishes for all \(a\neq 0\).
3. If \(i:Z\hookrightarrow X\) is a closed subvariety with open complement \(j:U\hookrightarrow X\), then there is a long exact sequence \[\cdots\to H^{a}_{\mathrm{W},c}(U,\mathfrak{R})\xrightarrow{j_{*}}H^{a}_{ \mathrm{W},c}(X,\mathfrak{R})\xrightarrow{i^{*}}H^{a}_{\mathrm{W},c}(Z, \mathfrak{R})\to H^{a+1}_{\mathrm{W},c}(U,\mathfrak{R})\to\cdots.\]
The main example of interest is the singular cohomology with compact support:
Let \(\mathbb{K}=\mathbb{C}\), and \(b\in\mathbb{N}\). For any \(X\in\mathbf{SmProj}(\mathbb{K})\), let \(H^{*}(X):=H^{*}(X(\mathbb{C}),\mathbb{Z})\) denote the singular cohomology with integer coefficients.
1) If \(X,Y\in\mathbf{SmProj}(\mathbb{K})\) are of pure dimension \(d_{X}\), \(d_{Y}\) respectively, and \(p\in\operatorname{Hom}_{\mathbf{Cor}_{\operatorname{rat}}(\mathbb{K})}(X,Y)=\operatorname{CH}^{d_{Y}}(Y\times X)\), with cycle class \([p]\in H^{2d_{Y}}(Y\times X)\), we define a morphism \(p^{*}:H^{b}(Y)\to H^{b}(X)\) by the composition:
\[H^{b}(Y)\xrightarrow{\pi_{Y}^{*}}H^{b}(Y\times X)\xrightarrow{[p]\cup-}H^{b+2 d_{Y}}(Y\times X)\xrightarrow{\operatorname{PD}_{Y\times X}}H_{2d_{X}-b}(Y \times X)\xrightarrow{\pi_{X}}H_{2d_{X}-b}(X)\xrightarrow{\operatorname{PD}_{ X}^{-1}}H^{b}(X),\]
where \(\pi_{Y}:Y\times X\to Y\) and \(\pi_{X}:Y\times X\to X\) are the obvious projections.
2) Now, suppose \((X,p)\in\mathbf{Chow}_{\operatorname{rat}}^{\operatorname{eff}}(\mathbb{K})\), and \(X\) has connected components \(X_{i},i\in I\). Say, \(\dim X_{i}=d_{i}\). So, \(p=\sum_{i,j\in I}p_{ji}\in\operatorname{Hom}_{\mathbf{Cor}_{\operatorname{rat }}(\mathbb{K})}(X,X)=\oplus_{i,j\in I}\operatorname{CH}^{d_{j}}(X_{j}\times X _{i})\) with \(p_{ji}\in\operatorname{CH}^{d_{j}}(X_{j}\times X_{i})\). Besides, \(H^{*}(X)=\oplus_{i\in I}H^{*}(X_{i})\) and \(H^{*}(X\times X)=\oplus_{i,j\in I}H^{*}(X_{j}\times X_{i})\). As a consequence, the maps \(p^{*}_{ji}:H^{b}(X_{j})\to H^{b}(X_{i})\) induce an action of \(p\) on the singular cohomology of \(X\):
\[p^{*}=(p^{*}_{ji}):H^{b}(X)=\oplus_{j\in I}H^{b}(X_{j})\to\oplus_{i\in I}H^{b} (X_{i})=H^{b}(X).\]
Notice that \((p^{*})^{2}=p^{*}\), since \(p^{2}=p\).
**Definition 4.7** (Betti weight cohomology with compact support).:
1. The \(b\)_-th singular/Betti cohomology (with integer coefficients)_ of effective pure Chow motives is the contravariant additive functor into abelian groups: (4.2.3) \[\mathfrak{R}^{b}_{B}:\mathbf{Chow}_{\operatorname{rat}}^{\operatorname{eff}}( \mathbb{K})\to\mathcal{A}=\mathcal{A}b:\mathfrak{R}^{b}_{B}(X,p):=p^{*}H^{b}(X).\]
In particular, for any \(X\in\mathbf{SmProj}(\mathbb{K})\), we have \(\mathfrak{R}^{b}_{B}(X)=\mathfrak{R}^{b}_{B}(X,\operatorname{id}_{X})=H^{b}(X)\). By definition, \(\mathfrak{R}^{b}_{B}(X,p)\) is always a finitely generated abelian group.
2. For any \(X\in\mathbf{Var}(\mathbb{K})\), its \((a,b)\)_-th singular/Betti weight cohomology with compact support_ is: \[H^{a,b}_{\operatorname{W},c}(X,\mathfrak{R}_{B}):=H^{a}_{\operatorname{W},c}( X,\mathfrak{R}^{b}_{B})=H^{a}(\mathfrak{R}^{b}_{B}(W(X)))\in\mathcal{A}b.\]
For any commutative ring \(A\) instead of \(\mathbb{Z}\), define \(H^{a,b}_{\operatorname{W},c}(X,\mathfrak{R}_{B};A)\in A\)-Mod similarly.
**Lemma 4.8** ([30, Thm.3]).: _There is a canonical cohomological descent spectral sequence_
\[E^{a,b}_{2}=H^{a,b}_{\operatorname{W},c}(X,\mathfrak{R}_{B};A)\ \Rightarrow\ H^{a+b}_{c}(X(\mathbb{C});A).\]
_So, it defines a canonical increasing weight filtration on \(H^{k}_{c}(X(\mathbb{C});A)\). When \(A=\mathbb{Q}\), the spectral sequence degenerates at \(E_{2}\) and we recover Deligne's weight filtration:_
\[H^{a,b}_{\operatorname{W},c}(X,\mathfrak{R}_{B})\otimes\mathbb{Q}=\operatorname {Gr}^{W}_{b}H^{a+b}_{c}(X(\mathbb{C});\mathbb{Q}).\]
**Remark 4.9**.: Let \(X\) be a connected smooth quasi-projective variety of dimension \(d\). Fix a log compactification \((\overline{X},\partial X)\) with very simple normal crossing boundary divisor (Definition 3.1). We use the notations before Lemma 4.2. So, \(\partial X=\cup_{i=1}^{n}Y_{i}\). By Proposition 4.5, \(H^{a,b}_{\operatorname{W},c}(X,\mathfrak{R}_{B})\) is the \(a\)-th cohomology of the following cochain complex:
\[\mathfrak{R}^{b}_{B}(W(X))=[H^{b}(\overline{X})\xrightarrow{\partial^{*}}H^{b} (Y^{(1)})\xrightarrow{\partial^{*}}\ldots\xrightarrow{\partial^{*}}H^{b}(Y^{ (d)})].\]
**Proposition 4.10**.: _If \(X\) is a connected smooth quasi-projective \(\mathbb{K}\)-variety, then_
\[\widetilde{H}^{a-1}(\mathbb{D}\partial X,\mathbb{Z})\cong H^{a,0}_{ \operatorname{W},c}(X,\mathfrak{R}_{B}). \tag{4.2.4}\]
**Note**: By Lemma 4.8, this generalizes Proposition 3.5.
Proof.: As in Remark 4.9, \(H^{a,0}_{\mathrm{W},c}(X,\mathfrak{R}_{B})\) is the \(a\)-th cohomology of the cochain complex:
\[\mathfrak{R}^{0}_{B}(W(X))=[H^{0}(\overline{X})\xrightarrow{\partial^{*}}H^{0}( Y^{(1)})\xrightarrow{\partial^{*}}\dots\xrightarrow{\partial^{*}}H^{0}(Y^{(d)})].\]
As \(H^{0}(\overline{X})=\mathbb{Z}\), and \(H^{0}(Y^{(i)})=\oplus_{I\subset[n]:|I|=i,Y_{I}\neq\varnothing}\mathbb{Z}\) for \(i\geq 1\), we get an identification:
\[\mathfrak{R}^{0}_{B}(W(X))\cong\widetilde{C}^{\bullet}(\mathbb{D}\partial X) [-1],\]
with the latter the reduced simplicial cochain complex of \(\mathbb{D}\partial X\). The result then follows.
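For instance, for \(X=\mathbb{G}_{m}\) with \(\overline{X}=\mathbb{P}^{1}\) and \(\partial X=\{0\}\sqcup\{\infty\}\), the complex \(\mathfrak{R}^{0}_{B}(W(X))\) is \([\mathbb{Z}\xrightarrow{(1,1)}\mathbb{Z}^{2}]\), so \(H^{1,0}_{\mathrm{W},c}(\mathbb{G}_{m},\mathfrak{R}_{B})\cong\mathbb{Z}\cong\widetilde{H}^{0}(\mathbb{D}\partial\mathbb{G}_{m},\mathbb{Z})\), as \(\mathbb{D}\partial\mathbb{G}_{m}\) consists of two points.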
#### 4.2.3. An application
As promised in Section 3.3, we're ready to prove Lemma 3.16.
**Lemma 4.11**.: _If \(X,Y\) are stably isomorphic \(\mathbb{K}\)-varieties: \(X\times\mathbb{A}^{j}\cong Y\times\mathbb{A}^{j}\) for some \(j\geq 0\), then_
\[\mathrm{M}^{c}(X)\cong\mathrm{M}^{c}(Y)\in\mathbf{DM}^{\mathrm{eff},-}_{ \mathrm{Nis}}(\mathbb{K}).\]
_In particular, \(\mathrm{W}_{\mathrm{c}}(X)\cong\mathrm{W}_{\mathrm{c}}(Y)\in K^{b}(\mathbf{ Chow}^{\mathrm{eff}}_{\mathrm{rat}}(\mathbb{K})^{\mathrm{op}})\), and hence \(H^{a,b}_{\mathrm{W},c}(X,\mathfrak{R}_{B})\cong H^{a,b}_{\mathrm{W},c}(Y, \mathfrak{R}_{B})\)._
Proof.: By Theorem 4.1.(1), the isomorphism \(\phi:X\times\mathbb{A}^{j}\cong Y\times\mathbb{A}^{j}\) induces an isomorphism
\[\mathrm{M}^{c}(\phi):\mathrm{M}^{c}(X\times\mathbb{A}^{j})=\mathrm{M}^{c}(X) (j)[2j]\xrightarrow{\cong}\mathrm{M}^{c}(Y\times\mathbb{A}^{j})=\mathrm{M}^{c }(Y)(j)[2j],\]
hence equivalently, \(\mathrm{M}^{c}(\phi):\mathrm{M}^{c}(X)(j)\xrightarrow{\cong}\mathrm{M}^{c}(Y) (j)\). But by the Cancellation theorem [82] (see Theorem 4.1.(8)), we have a natural isomorphism
\[-\otimes\mathbb{Z}(j):\mathrm{Hom}_{\mathbf{DM}^{\mathrm{eff},-}_{\mathrm{Nis} }(\mathbb{K})}(\mathrm{M}^{c}(X),\mathrm{M}^{c}(Y))\cong\mathrm{Hom}_{\mathbf{ DM}^{\mathrm{eff},-}_{\mathrm{Nis}}(\mathbb{K})}(\mathrm{M}^{c}(X)(j), \mathrm{M}^{c}(Y)(j))\ni\mathrm{M}^{c}(\phi).\]
This shows that \(\mathrm{M}^{c}(\phi)\) induces an isomorphism \(\mathrm{M}^{c}(X)\cong\mathrm{M}^{c}(Y)\), as desired.
**Remark 4.12**.: To prove Theorem 3.15, only the equality \(H^{a,b}_{\mathrm{W},c}(X)\cong H^{a,b}_{\mathrm{W},c}(Y)\) in Lemma 4.11 is needed. This admits an alternative proof without using geometric motives: It suffices to show
\[H^{a,b}_{\mathrm{W},c}(X\times\mathbb{A}^{1},\mathfrak{R}_{B})\cong H^{a,b-2} _{\mathrm{W},c}(X,\mathfrak{R}_{B}). \tag{4.2.5}\]
By Proposition 4.5, \(\mathrm{W}(\mathbb{A}^{1})=[\mathrm{M}_{\mathrm{rat}}(\mathbb{P}^{1}) \xrightarrow{i^{*}}\mathrm{M}_{\mathrm{rat}}(\mathrm{pt})]\), where \(\mathrm{M}_{\mathrm{rat}}(\mathbb{P}^{1})\) lies in degree \(0\). Observe that \(H^{*}(\mathbb{P}^{1})=\mathbb{Z}\oplus\mathbb{Z}[-2]\xrightarrow{i^{*}}H^{*}( \mathrm{pt})=\mathbb{Z}\) with the obvious projection. So, \(H^{\bullet,\star}_{\mathrm{W},c}(\mathbb{A}^{1})=\mathbb{Z}[0]\{-2\}\), by which we mean \(\mathbb{Z}\) concentrated in bidegree \((a,b)=(0,2)\). By Theorem 4.3 \((v)\), we have:
\[\mathrm{W}(X\times\mathbb{A}^{1})\cong\mathrm{W}(X)\otimes\mathrm{W}(\mathbb{ A}^{1})\in K^{b}(\mathbf{Chow}^{\mathrm{eff}}_{\mathrm{rat}}(\mathbb{K})^{ \mathrm{op}}).\]
It follows that we obtain the desired bi-graded isomorphism
\[H^{\bullet,\star}_{\mathrm{W},c}(X\times\mathbb{A}^{1},\mathfrak{R}_{B})\cong H ^{\bullet,\star}_{\mathrm{W},c}(X,\mathfrak{R}_{B})\otimes H^{\bullet,\star}_ {\mathrm{W},c}(\mathbb{A}^{1},\mathfrak{R}_{B})=H^{\bullet,\star}_{\mathrm{W},c }(X,\mathfrak{R}_{B})[0]\{-2\}.\]
Finally, recall Lemma 3.16: \(Y\) is a \(\mathbb{K}\)-variety stably isomorphic to \(\mathbb{A}^{\ell}\), \(\ell\geq 1\Rightarrow\mathbb{D}\partial Y\sim\mathrm{pt}\).
Proof of Lemma 3.16.: Say, \(Y\times\mathbb{A}^{j}\cong\mathbb{A}^{\ell+j}\). As \(Y\cong Y\times 0\hookrightarrow Y\times\mathbb{A}^{j}\cong\mathbb{A}^{\ell+j}\) is a closed subvariety, \(Y\) is affine. As \(Y\times\mathbb{A}^{j}\cong\mathbb{A}^{\ell+j}\) is smooth, so is \(Y\). Clearly, \(Y\) is also connected.
If \(\ell\leq 2\), then \(Y\cong\mathbb{A}^{\ell}\) by [25, 62], so \(\mathbb{D}\partial Y\simeq\mathbb{D}\partial\mathbb{A}^{\ell}=\text{pt}\), as desired. If \(\ell\geq 3\), clearly, \(Y\) is contractible as an analytic topological space. So, by Corollary 3.12 and Corollary 3.14, we have \(\pi_{i}(Y)\cong\pi_{i}(\mathbb{D}\partial Y)=0\) for \(i=0,1\). In addition, by Lemma 4.11 and Proposition 4.10,
\[\widetilde{H}^{i}(\mathbb{D}\partial Y,\mathbb{Z})\cong H^{i+1,0}_{\mathbb{W},c }(Y,\mathfrak{R}_{B})\cong H^{i+1,0}_{\mathbb{W},c}(\mathbb{A}^{\ell}, \mathfrak{R}_{B})=0,\]
for all \(i\in\mathbb{N}\). Thus, \(H_{i}(\mathbb{D}\partial Y,\mathbb{Z})\cong H_{i}(\text{pt},\mathbb{Z})\) and \(\pi_{1}(\mathbb{D}\partial Y)=0\). Now, by the Hurewicz theorem, \(\pi_{i}(\mathbb{D}\partial Y)\simeq 0\) for all \(i\geq 0\). Then by the Whitehead theorem for CW complexes, \(\mathbb{D}\partial Y\sim\text{pt}\).
### Motives with compact support of generic character varieties
In this subsection, we aim to formulate a conjectural formula for the motives with compact support of generic character varieties (Conjecture 4.20). For symmetric functions, see Appendix B for a quick review.
#### 4.3.1. Plethystic exponential and plethystic log
Let \(\mathbf{x}_{1}=\{x_{1,1},x_{1,2},\cdots\},\cdots,\mathbf{x}_{k}=\{x_{k,1},x_{ k,2},\cdots\}\) be \(k\) sets of infinitely many independent variables. Let \(\Lambda(\mathbf{x}_{1},\cdots,\mathbf{x}_{k}):=\Lambda(\mathbf{x}_{1})\otimes_ {\mathbb{Z}}\cdots\otimes\Lambda(\mathbf{x}_{k})\) be the _usual lambda-ring_ (Definition B.1) of functions separately symmetric in each set of variables.
**Convention** 6: If the context is clear, we write \(\Lambda=\oplus_{n\geq 0}\Lambda_{n}\) for various _usual lambda-rings_: \(\Lambda(\mathbf{x})\); \(\Lambda(\mathbf{x}_{1},\cdots,\mathbf{x}_{k})\); \(\Lambda(\mathbf{x}_{1},\cdots,\mathbf{x}_{k})\otimes_{\mathbb{Z}}\mathbb{Q}(q,t)\); etc. Here, \(\Lambda_{n}\) is the homogeneous degree \(n\) part.
Let \(T\) be another indeterminate. The **plethystic exponential** is the map
\[\mathrm{Exp}:T\Lambda[[T]]\to 1+T\Lambda[[T]]:V\mapsto\exp\Big(\sum_{r\geq 1}\tfrac{1}{r}p_{r}[V]\Big)=\exp\Big(\sum_{r\geq 1}\tfrac{1}{r}V(\mathbf{x}_{1}^{r},\cdots,\mathbf{x}_{k}^{r},q^{r},t^{r},T^{r})\Big). \tag{4.3.1}\]
So, \(\text{Exp}=\exp\circ(\sum_{r\geq 1}\frac{p_{r}[-]}{r})\). If \(V=\sum_{n\geq 1}a_{n},a_{n}\in\Lambda_{n}\), then \(\text{Exp}(V):=\text{Exp}(\sum_{n\geq 1}a_{n}T^{n})|_{T=1}\).
Exp has an inverse map \(\text{Log}\), called **plethystic log**, defined as follows: \(\forall F\in 1+T\Lambda[[T]]\), let
\[\log(F)=:\sum_{m\geq 1}U_{m}(\mathbf{x}_{1},\cdots,\mathbf{x}_{k};q,t)\frac{T^{m}} {m};V_{n}(\mathbf{x}_{1},\cdots,\mathbf{x}_{k};q,t):=\frac{1}{n}\underset{d|n}{ \sum}\mu(d)U_{n/d}(\mathbf{x}_{1}^{d},\cdots,\mathbf{x}_{k}^{d},q^{d},t^{d}) \in\Lambda,\]
where \(\mu\) is the ordinary _Mobius function_ (see Remark 4.13 below). Then define
\[\text{Log}(F):=\underset{n\geq 1}{\sum}V_{n}(\mathbf{x}_{1},\cdots,\mathbf{x}_{k};q,t )T^{n}\in T\Lambda[[T]]. \tag{4.3.2}\]
So, \(\text{Log}=(\sum_{m\geq 1}\frac{\mu(m)}{m}p_{m}[-])\circ\log\). If \(F=\sum_{n\geq 0}a_{n},a_{n}\in\Lambda_{n},a_{0}=1\), then \(\text{Log}(F):=\text{Log}(\sum_{n\geq 0}a_{n}T^{n})|_{T=1}\).
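For instance, if there are no alphabet variables and \(V=T\), then \(\mathrm{Exp}(T)=\exp(\sum_{r\geq 1}\frac{T^{r}}{r})=\frac{1}{1-T}\). Conversely, \(\log(\frac{1}{1-T})=\sum_{m\geq 1}\frac{T^{m}}{m}\) gives \(U_{m}=1\) for all \(m\), hence \(V_{n}=\frac{1}{n}\sum_{d|n}\mu(d)=\delta_{n,1}\) and \(\mathrm{Log}(\frac{1}{1-T})=T\), as expected.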
**Remark 4.13** (**Mobius function**).: For any \(n\in\mathbb{Z}_{>0}\), recall that \(\mu(n):=(-1)^{r}\), if \(n=p_{1}\cdots p_{r}\) is a product of distinct primes, and \(\mu(n):=0\) otherwise. Alternatively, \(\mu(n)=\sum_{1\leq j\leq n,\ \gcd(j,n)=1}e^{2\pi i\frac{j}{n}}\). Recall also the **Mobius inversion formula**: For any two functions \(g,f:\mathbb{Z}_{>0}\to\mathbb{C}\), we have
\[g(n)=\underset{d|n}{\sum}f(d),\ \ \forall n\geq 1,\ \ \Rightarrow\ f(n)= \underset{d|n}{\sum}\mu(d)g(\frac{n}{d}),\ \ \forall n\geq 1. \tag{4.3.3}\]
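For example, \(\mu(1)=1\), \(\mu(2)=\mu(3)=\mu(5)=-1\), \(\mu(4)=\mu(9)=0\), and \(\mu(6)=1\). A basic consequence of the definition is \(\sum_{d|n}\mu(d)=\delta_{n,1}\); this identity underlies the fact that \(\mathrm{Log}\) is inverse to \(\mathrm{Exp}\) above.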
#### 4.3.2. The HLRV functions
For each partition \(\lambda\in\mathcal{P}\), let \(\widetilde{H}_{\lambda}(\mathbf{x};q,t)\in\Lambda(\mathbf{x})\otimes_{\mathbb{Z}} \mathbb{Q}(q,t)\) be the _modified Macdonald symmetric function_ of type \(\lambda\) (See [26, I.(11)] or Definition B.6).
**Definition 4.14** ([36, SS.1.1]).: We use the notations in Appendix B.1.
1. For each partition \(\lambda\in\mathcal{P}\), define the _genus \(g\) deformed hook polynomial_ of type \(\lambda\): (4.3.4) \[\mathcal{H}_{\lambda}(z,w):=\underset{s\in\mathcal{D}(\lambda,t)}{\prod}\frac {(z^{2\alpha(s)+1}-w^{2\ell(s)+1})^{2g}}{(z^{2\alpha(s)+2}-w^{2\ell(s)})(z^{2 \alpha(s)}-w^{2\ell(s)+2})}\in\mathbb{Q}(z,w).\]
2. Define the _\(k\)-point genus \(g\) Cauchy function_ (4.3.5) \[\Omega(z,w):=\underset{\lambda\in\mathcal{P}}{\sum}\mathcal{H}_{\lambda}(z,w )\underset{i=1}{\overset{k}{\prod}}\widetilde{H}_{\lambda}(\mathbf{x}_{i};z^ {2},w^{2})\in\Lambda(\mathbf{x}_{1},\dots,\mathbf{x}_{k})\otimes_{\mathbb{Z}} \mathbb{Q}(z,w).\]
3. (extended Hall pairing) Define the _extended Hall pairing_ on \(\Lambda(\mathbf{x}_{1},\dots,\mathbf{x}_{k})\) by (4.3.6) \[\langle a_{1}(\mathbf{x}_{1})\dots a_{k}(\mathbf{x}_{k}),b_{1}(\mathbf{x}_{1}) \dots b_{k}(\mathbf{x}_{k})\rangle:=\langle a_{1},b_{1}\rangle\dots\langle a_ {k},b_{k}\rangle,\]
where \(\langle-,-\rangle\) is the scalar product/Hall pairing on \(\Lambda(\mathbf{x})\) so that \((h_{\lambda})\) and \((m_{\lambda})\) are dual bases.
4. For each **type \(\boldsymbol{\mu}=(\mu^{1},\dots,\mu^{k})\in\mathcal{P}^{k}\)**, define the _HLRV function_ of type \(\boldsymbol{\mu}\): (4.3.7) \[\mathbb{H}_{\boldsymbol{\mu}}(z,w):=-(z^{2}-1)(w^{2}-1)\langle\operatorname{Log}\Omega(z,w),h_{\boldsymbol{\mu}}\rangle\in\mathbb{Q}(z,w),\] where \(h_{\boldsymbol{\mu}}:=h_{\mu^{1}}(\mathbf{x}_{1})\dots h_{\mu^{k}}(\mathbf{x}_{k})\in\Lambda(\mathbf{x}_{1},\dots,\mathbf{x}_{k})\).
**Note**: By definition, we have \(\mathcal{H}_{\lambda}(z,w)=\mathcal{H}_{\lambda^{\prime}}(w,z)=\mathcal{H}_{ \lambda}(-z,-w)\). Then by the duality (B.4.12), we have \(\Omega(z,w)=\Omega(w,z)=\Omega(-z,-w)\), and hence
\[\mathbb{H}_{\boldsymbol{\mu}}(z,w)=\mathbb{H}_{\boldsymbol{\mu}}(w,z)=\mathbb{ H}_{\boldsymbol{\mu}}(-z,-w). \tag{4.3.8}\]
#### 4.3.3. A review on the HLRV conjecture
Let \(X\) be a complex variety of dimension \(d\).
**Definition 4.15**.: (1) The _compactly supported mixed Hodge polynomial_ of \(X\) is
\[H_{c}(X;x,y,t):=\underset{i,j,k}{\sum}h_{c}^{i,j;k}(X)x^{i}y^{j}t^{k}\in \mathbb{Z}[x,y,t];\quad h_{c}^{i,j;k}(X):=\dim_{\mathbb{C}}\operatorname{Gr}_ {i}^{\mathrm{F}}\operatorname{Gr}_{i+j}^{\mathrm{W}}H_{c}^{k}(X;\mathbb{C}). \tag{4.3.9}\]
The _\(E\)-polynomial_ of \(X\) and the _pure part_ of \(H_{c}(X;x,y,t)\) are respectively:
\[E(X;x,y):=H_{c}(X;x,y,-1)\in\mathbb{Z}[x,y];\quad PH_{c}(X;x,y):=\underset{i,j}{\sum}h_{c}^{i,j;i+j}(X)x^{i}y^{j}\in\mathbb{Z}[x,y]. \tag{4.3.10}\]
(2) Our _weight polynomial_ of \(X\) will be defined as (4.3.11) \[\mathbb{W}(X;q,t):=\underset{a,b}{\sum}\dim\operatorname{Gr}_{b}^{\mathrm{W}}H_{c}^{a+b}(X,\mathbb{C})q^{\frac{b}{2}}t^{a}\in\mathbb{Z}[q^{\frac{1}{2}},t].\quad\text{(by Proposition 2.2.(2))}\]
(3) \(X\) is of _Hodge-Tate type_ if \(h_{c}^{i,j;k}(X)=0,\forall i\neq j\). Then, \(\mathbb{W}(X;q,t)\in\mathbb{Z}[q,t]\), and we may denote \[H_{c}(X;q,t):=H_{c}(X;\sqrt{q},\sqrt{q},t)=\mathbb{W}(X;qt^{2},t);\ \ E(X;q):=\mathbb{W}(X;q,-1);\ \ PH_{c}(X;q):=\mathbb{W}(X;q,0).\]
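For instance, \(\mathbb{A}^{1}\) and \(\mathbb{G}_{m}\) are of Hodge-Tate type, with \(\mathbb{W}(\mathbb{A}^{1};q,t)=q\) and \(\mathbb{W}(\mathbb{G}_{m};q,t)=q+t\) (the terms \(q\) and \(t\) coming from \(H^{2}_{c}\) of weight \(2\) and \(H^{1}_{c}\) of weight \(0\), respectively); accordingly, \(E(\mathbb{G}_{m};q)=q-1\) and \(PH_{c}(\mathbb{G}_{m};q)=q\).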
Now, recall the HLRV conjecture for MHS's on generic character varieties \(\mathcal{M}_{\boldsymbol{\mu}}\):
**Conjecture 4.16** (HLRV conjecture [40, Conj.4.2.1-4.2.7], [36, Conj.1.2.1-1.2.2]).:
1. \(\mathbb{H}_{\mu}(-z,w)\in\mathbb{Z}_{\geq 0}[z,w]\)_, and it has degree_ \(d_{\mu}\) _in each variable._
2. \(\mathcal{M}_{\mu}\) _is of Hodge-Tate type. The mixed Hodge polynomial_ \(H_{c}(\mathcal{M}_{\mu};x,y,t)=\mathbb{W}(\mathcal{M}_{\mu};xyt^{2},t)\) _is independent of the choice of generic eigenvalues of multiplicities_ \(\mu\)_._
3. _(Main part) Moreover, we have:_ \(\mathbb{W}(\mathcal{M}_{\mu};q,t)=\sqrt{q}^{d_{\mu}}\mathbb{H}_{\mu}(-\frac{t }{\sqrt{q}},\sqrt{q})\)_._
4. _The pure part of_ \(H_{c}\) _is:_ \(PH_{c}\left(\mathcal{M}_{\mu};q\right)=\mathbb{W}(\mathcal{M}_{\mu};q,0)= \sqrt{q}^{d_{\mu}}\mathbb{H}_{\mu}(0,\sqrt{q})\)_._
5. _(Curious Poincare duality) We have:_ \(\mathbb{W}(\mathcal{M}_{\mu};qt,t)=q^{d_{\mu}}\mathbb{W}(\mathcal{M}_{\mu}; \frac{t}{q},t)\)_._
**Remark 4.17**.: There is a string theoretic interpretation of the HLRV conjecture [8, 9]. Moreover,
(a) Concerning (1), by [53, Cor.7.2], it's known that \(\mathbb{H}_{\mu}(z,w)\in\mathbb{Z}[z^{2},w^{2},zw,(zw)^{-1}]\).
(b) By [40], Conjecture 4.16 holds for all rank 2 twisted character varieties.
(c) By [70], [54, Thm.1.5.2], \(\mathcal{M}_{\mu}\) is of Hodge-Tate type. So, \(\mathbb{W}(\mathcal{M}_{\mu};q,t)\in\mathbb{Z}[q,t]\).
(d) Clearly, (3) \(\Rightarrow\) (4). By the symmetry (4.3.8), we also have (3) \(\Rightarrow\) (5).
(e) By [36, Thm.1.2.3], the \(E\)-polynomial version of (3) holds: (4.3.12) \[E(\mathcal{M}_{\mu};q)=\mathbb{W}(\mathcal{M}_{\mu};q,-1)=\sqrt{q}^{d_{\mu}}\mathbb{H}_{\mu}(\frac{1}{\sqrt{q}},\sqrt{q}).\]
(f) By [55, Thm.1.1], [56, Thm.7.12], the Poincare polynomial version of (3) holds: (4.3.13) \[P(\mathcal{M}_{\mu};t)=\mathbb{W}(\mathcal{M}_{\mu};t^{2},t)=t^{d_{\mu}}\mathbb{H}_{\mu}(-1,t).\]
(g) By [54, Thm.1.5.3], a stronger form of (5), i.e. the _curious hard Lefschetz property_ holds: (4.3.14) \[(\omega\cup-)^{m}:\operatorname{Gr}^{\mathrm{W}}_{d_{\mu}-2m}H^{j}_{c}(\mathcal{M}_{\mu},\mathbb{C})\xrightarrow{\sim}\operatorname{Gr}^{\mathrm{W}}_{d_{\mu}+2m}H^{j+2m}_{c}(\mathcal{M}_{\mu},\mathbb{C}),\ \ \omega:\ \text{holo. symplectic form.}\]
#### 4.3.4. A conjecture for motives with compact support of generic character varieties
We will propose a motivic conjecture promoting the HLRV conjecture.
**Lemma 4.18**.: _For any \(a,b,m\in\mathbb{Z}_{\geq 0}\), we have_
\[\operatorname{Hom}_{\mathbf{DM}^{\mathrm{eff}}_{\mathrm{gm}}(\mathbb{K})}( \mathbb{Z}(a)[2a],\mathbb{Z}(b)[2b+m])=\left\{\begin{array}{ll}\mathbb{Z}& \text{if $a=b,m=0$;}\\ 0&\text{else.}\end{array}\right. \tag{4.3.15}\]
Proof.: 1. If \(a\leq b\), then by Theorem 4.1.(8) (Cancellation) and (10) (Higher Chow groups),
\[\operatorname{Hom}_{\mathbf{DM}^{\mathrm{eff}}_{\mathrm{gm}}(\mathbb{K})}( \mathbb{Z}(a)[2a],\mathbb{Z}(b)[2b+m])\cong\operatorname{Hom}_{\mathbf{DM}^{ \mathrm{eff}}_{\mathrm{gm}}(\mathbb{K})}(\mathbb{Z},\mathbb{Z}(b-a)[2b-2a+m]) \cong\operatorname{CH}^{b-a}(\operatorname{Spec}\mathbb{K},-m).\]
If \(m>0\), then \(\operatorname{CH}^{b-a}(\operatorname{Spec}\mathbb{K},-m)=0\) by definition or Theorem 4.1.(11) (Vanishing theorem). If \(m=0\), then \(\operatorname{CH}^{b-a}(\operatorname{Spec}\mathbb{K},-m)=\operatorname{CH}^{b-a }(\operatorname{Spec}\mathbb{K})=\mathbb{Z}\) if \(a=b\), and \(0\) otherwise.
2. If \(a>b\), we will show that \(\operatorname{Hom}_{\mathbf{DM}^{\mathrm{eff}}_{\mathrm{gm}}(\mathbb{K})}( \mathbb{Z}(a)[2a],\mathbb{Z}(b)[2b+m])=0\). Indeed, by Theorem 4.1.(8) (Cancellation), we may assume \(b=0\). Now, by Theorem 4.1.(5), we have
\[\operatorname{M}(\mathbb{P}^{a})\cong\operatorname{M}(\mathbb{P}^{a-1})\oplus \mathbb{Z}(a)[2a]\cong\oplus_{j=0}^{a}\mathbb{Z}(j)[2j],\]
where \(\operatorname{M}(\mathbb{P}^{a-1})\to\operatorname{M}(\mathbb{P}^{a})\) is induced by the inclusion \(i:\mathbb{P}^{a-1}\hookrightarrow\mathbb{P}^{a}\). It follows that
\[\operatorname{Hom}_{\mathbf{DM}^{\mathrm{eff}}_{\mathrm{gm}}(\mathbb{K})}( \operatorname{M}(\mathbb{P}^{a}),\mathbb{Z}[m])=\operatorname{Hom}_{\mathbf{ DM}^{\mathrm{eff}}_{\mathrm{gm}}(\mathbb{K})}(\operatorname{M}( \mathbb{P}^{a-1}),\mathbb{Z}[m])\oplus\operatorname{Hom}_{\mathbf{DM}^{ \mathrm{eff}}_{\mathrm{gm}}(\mathbb{K})}(\mathbb{Z}(a)[2a],\mathbb{Z}[m]).\]
By Theorem 4.1.(10), we have \(\operatorname{Hom}_{\mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}}(\mathbb{K})}( \operatorname{M}(\mathbb{P}^{a}),\mathbb{Z}[m])\cong\operatorname{CH}^{0}( \mathbb{P}^{a},-m)\), and similarly for \(\mathbb{P}^{a-1}\). If \(m>0\), then \(\operatorname{CH}^{0}(\mathbb{P}^{a},-m)=0\), hence so is \(\operatorname{Hom}_{\mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}}(\mathbb{K})}( \mathbb{Z}(a)[2a],\mathbb{Z}(b)[2b+m])\). If \(m=0\), then
\[\operatorname{CH}^{0}(\mathbb{P}^{a})=\mathbb{Z}\xrightarrow[i^{*}]{\cong} \operatorname{CH}^{0}(\mathbb{P}^{a-1})=\mathbb{Z},\]
as \(a>0\). This ensures that \(\operatorname{Hom}_{\mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}}(\mathbb{K})}( \mathbb{Z}(a)[2a],\mathbb{Z}(b)[2b+m])=0\), as desired.
**Note**: By definition [64, Def.17.1], for all \(i\geq 0\), we have \(\operatorname{CH}^{0}(\mathbb{P}^{i},0)=\mathbb{Z}\), and \(\operatorname{CH}^{0}(\mathbb{P}^{i},j)=0\) for all \(j\neq 0\). Thus, the same argument as above also shows that
\[\operatorname{Hom}_{\mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}}(\mathbb{K})}( \mathbb{Z}(a)[2a],\mathbb{Z}(b)[2b+m])=0,\text{ if }a>b,\text{ or }a=b,m\neq 0. \tag{4.3.16}\]
**Lemma 4.19** ([20, Lem.2.3.(10)]).: _If \(X\in\mathbf{Var}(\mathbb{K})\) has a decomposition \(X=\sqcup_{i=1}^{s}X_{i}\) such that: \(\cup_{i\leq r}X_{i}\) is closed, \(\forall 1\leq r\leq s\); \(X_{i}\) is stably isomorphic to some affine space \(\mathbb{A}^{n_{i}}\), then_
\[\operatorname{M}^{c}(X)\cong\oplus_{i=1}^{s}\mathbb{Z}(n_{i})[2n_{i}]\in\mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}}(\mathbb{K}).\]
_A geometric motive of this form is called **pure Tate**._
Proof.: By Lemma 4.11, \(\operatorname{M}^{c}(X_{i})\cong\operatorname{M}^{c}(\mathbb{A}^{n_{i}})= \mathbb{Z}(n_{i})[2n_{i}]\). Now, it suffices to show that, for any \(X\in\mathbf{Var}(\mathbb{K})\), any closed subvariety \(Z\), with open complement \(U=X-Z\), if \(\operatorname{M}^{c}(Z)\) and \(\operatorname{M}^{c}(U)\) are pure Tate, then so is \(\operatorname{M}^{c}(X)\). Indeed, by Theorem 4.1.(3), we have a triangle in \(\mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}}(\mathbb{K})\):
\[\operatorname{M}^{c}(Z)\to\operatorname{M}^{c}(X)\to\operatorname{M}^{c}(U) \xrightarrow[]{+1}\]
If \(\operatorname{M}^{c}(Z)\) and \(\operatorname{M}^{c}(U)\) are pure Tate, then \(\operatorname{Hom}_{\mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}}(\mathbb{K})}( \operatorname{M}^{c}(U),\operatorname{M}^{c}(Z)[1])=0\) by Lemma 4.18. Therefore, \(\operatorname{M}^{c}(X)\cong\operatorname{M}^{c}(Z)\oplus\operatorname{M}^{c}(U)\), and the statement follows.
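For example, stratifying \(\mathbb{P}^{n}=\sqcup_{i=0}^{n}\mathbb{A}^{i}\) so that the partial unions \(\cup_{i\leq r}\mathbb{A}^{i}=\mathbb{P}^{r}\) are closed, Lemma 4.19 gives \(\operatorname{M}^{c}(\mathbb{P}^{n})\cong\oplus_{i=0}^{n}\mathbb{Z}(i)[2i]\), recovering Theorem 4.1.(5) for the trivial bundle over a point.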
Inspired by Lemma 4.19 and Theorem 2.10, one may wonder whether there exists a simple description of \(\operatorname{M}^{c}(\mathcal{M}_{\boldsymbol{\mu}})\) for any (very) generic character variety \(\mathcal{M}_{\boldsymbol{\mu}}\). In the very generic case, we have a cell decomposition \(X=\mathcal{M}_{\boldsymbol{\mu}}=\sqcup_{1\leq i\leq s}X_{i}\) such that: \(\cup_{i\leq r}X_{i}\) is closed, \(\forall 1\leq r\leq s\); \(X_{i}\) is stably isomorphic to \(\mathbb{G}_{m}^{a_{i}}\times\mathbb{A}^{b_{i}}\). Now, \(\operatorname{M}^{c}(\mathbb{G}_{m}^{a_{i}}\times\mathbb{A}^{b_{i}})=(\mathbb{Z}[1]\oplus\mathbb{Z}(1)[2])^{\otimes a_{i}}\otimes\mathbb{Z}(b_{i})[2b_{i}]\). Besides, notice that the triangle as in Lemma 4.19 no longer splits in general. Nevertheless, Conjecture 0.2 and Proposition 4.10 imply that the Betti weight cohomology with compact support \(H_{\mathrm{W},c}^{a,0}(\mathcal{M}_{\boldsymbol{\mu}},\mathfrak{R}_{B})\) is torsion-free. On the other hand, it's expected that \(H_{c}^{*}(\mathcal{M}_{\boldsymbol{\mu}};\mathbb{Z})\) is always torsion-free. For example, this is the case for twisted character varieties in ranks \(2,3\) [84, Thm.4.5]. By Lemma 4.8, one may then wonder whether the spectral sequence \(E_{2}^{a,b}=H_{\mathrm{W},c}^{a,b}(\mathcal{M}_{\boldsymbol{\mu}},\mathfrak{R}_{B})\Rightarrow H_{c}^{a+b}(\mathcal{M}_{\boldsymbol{\mu}};\mathbb{Z})\) takes some simple form; in particular, whether \(H_{\mathrm{W},c}^{a,b}(\mathcal{M}_{\boldsymbol{\mu}},\mathfrak{R}_{B})\) is torsion-free in all degrees. Our examples do lead to affirmative answers. Combined with the cell decomposition above, this suggests that \(\operatorname{M}^{c}(\mathcal{M}_{\boldsymbol{\mu}})\) is a direct sum, with direct summands of the form \(\mathbb{Z}(a)[2a+b],a,b\geq 0\). Taking Conjecture 4.16 into account, we are led to the motivic Conjecture 4.20 below. As the HLRV conjecture 4.16
only concerns rational cohomology, it does not capture cohomology with integer coefficients of \(\mathbb{D}\partial\mathcal{M}_{\mu}\). The latter is morally our major motivation for pursuing such a motivic promotion.
First, observe that, if \(X\) is smooth affine of dimension \(d\) and of Hodge-Tate type, then by (3.4.1),
\[\operatorname{Gr}_{b}^{\operatorname{W}}H_{c}^{a+b}(X,\mathbb{C})\neq 0\ \ \Rightarrow\ \ b\ \text{is even,}\ a,b\geq 0;a+b\geq d;2a+b\leq 2d. \tag{4.3.17}\]
**Assuming** Conjecture 4.16.(3), we then have
\[\mathbb{H}_{\mu}(-z,w)=\sum\limits_{a,b\geq 0}\dim\operatorname{Gr}_{b}^{\mathrm{W}}H_{c}^{a+b}(\mathcal{M}_{\mu};\mathbb{C})z^{a}w^{a+b-d_{\mu}}=:\sum\limits_{i,j}c_{ij}^{\mu}z^{i}w^{j}.\]
As \(d_{\mu}\) is even, by (4.3.17), we see that
\[c_{ij}^{\mu}=\dim\operatorname{Gr}_{d_{\mu}+j-i}^{\mathrm{W}}H_{c}^{d_{\mu}+j}(\mathcal{M}_{\mu};\mathbb{C})\neq 0\ \ \Rightarrow\ \ c_{ij}^{\mu}\in\mathbb{Z}_{\geq 0},\ j-i\ \text{is even},\ 0\leq i,j\leq d_{\mu},\ i+j\leq d_{\mu}. \tag{4.3.18}\]
Moreover, recall that \(\operatorname{Gr}_{0}^{\operatorname{W}}H_{c}^{d_{\mu}}(\mathcal{M}_{\mu}; \mathbb{C})\cong\operatorname{Gr}_{2d_{\mu}}^{\operatorname{W}}H_{c}^{2d_{ \mu}}(\mathcal{M}_{\mu};\mathbb{C})=\mathbb{C}\), then \(c_{0d_{\mu}}^{\mu}=c_{d_{\mu}0}^{\mu}=1\).
Now, we're ready to state the conjectural formula computing the motives with compact support of generic character varieties. Recall that
\[\mathbf{L}:=\operatorname{M}^{c}(\mathbb{A}^{1})=\mathbb{Z}(1)[2]\in\mathbf{ DM}_{\operatorname{gm}}^{\operatorname{eff}}(\mathbb{K});\ \ \Rightarrow\ \mathbf{L}^{m}=\operatorname{M}^{c}(\mathbb{A}^{m})=\mathbb{Z}(m)[2m],\ \ \forall m\geq 0. \tag{4.3.19}\]
**Conjecture 4.20**.: _Let \(\mathcal{M}_{\mu}\) be any generic character variety of type \(\mu\), then:_
1. _(a reformulation of Conjecture_ 4.16_.(1)) The rational function_ \(\mathbb{H}_{\mu}(-z,w)\) _is of the form_ (4.3.20) \[\mathbb{H}_{\mu}(-z,w)=\sum\limits_{0\leq i,j\leq d_{\mu},i+j\leq d_{\mu}}c_{ ij}^{\mu}z^{i}w^{j},\quad c_{ij}^{\mu}\in\mathbb{Z}_{\geq 0},\]
_such that \(c_{ij}^{\mu}\neq 0\Rightarrow j-i\) is even. Besides, \(c_{0d_{\mu}}^{\mu}=c_{d_{\mu}0}^{\mu}=1\)._
2. \(\mathcal{M}_{\mu}\) _contains an open dense algebraic torus. Thus, any log compactification of_ \(\mathcal{M}_{\mu}\) _is rational._
3. _(main part) The motive with compact support of_ \(\mathcal{M}_{\mu}\) _is_ (4.3.21) \[\operatorname{M}^{c}(\mathcal{M}_{\mu})=\oplus_{0\leq i,j\leq d_{\mu},i+j\leq d_{\mu}}(\mathbf{L}^{\frac{d_{\mu}+j-i}{2}}[i])^{\oplus c_{ij}^{\mu}}\in\mathbf{DM}_{\operatorname{gm}}^{\operatorname{eff}}(\mathbb{K}).\]
4. _(Integral form of curious Poincare duality) We have an isomorphism:_ (4.3.22) \[H_{\operatorname{W},c}^{a,d_{\mu}-2m}(\mathcal{M}_{\mu},\mathfrak{R}_{B}) \stackrel{{\simeq}}{{\to}}H_{\operatorname{W},c}^{a-2m,d_{\mu}+2 m}(\mathcal{M}_{\mu},\mathfrak{R}_{B}).\]
**Note**: In a different direction, Mozgovoy has proposed a conjectural formula [61, Conj.3] for the Chow motive (with \(\mathbb{Q}\)-coefficients) of \(\mathcal{M}_{\operatorname{Dol}}\), the Dolbeault moduli space corresponding to \(\mathcal{M}_{\mu}\) under NAH. This generalizes the Poincare polynomial version of Conjecture 4.16.(3).
**Remark 4.21**.: a) By Theorem 0.4, (2) is already true in the very generic case. Besides, (2) \(\Rightarrow\overline{\mathcal{M}}_{\mu}\) is rational (\(\Rightarrow\overline{\mathcal{M}}_{\mu}\) is stably rational) \(\Rightarrow\pi_{1}(\overline{\mathcal{M}}_{\mu})=1\). Unfortunately, if \(\mathcal{M}_{\mu}\) is only generic, currently our only evidence for (2) is Example 4.27 below. However, for our purposes concerning Conjecture 0.2, a much weaker statement suffices:
(2'): \(\pi_{1}(\overline{\mathcal{M}}_{\mu})\) is abelian, where \(\overline{\mathcal{M}}_{\mu}\) is a log compactification.
By Corollary 3.14, (2') \(\Rightarrow\pi_{1}(\mathbb{D}\partial\mathcal{M}_{\mu})\) is abelian, if \(d_{\mu}>2\).
b) By Lemma 4.8, Conjecture 4.20 \(\Rightarrow\) the HLRV conjecture 4.16: Indeed, \[\mathrm{W}_{c}^{-}(\mathcal{M}_{\mu})=t\circ\mathrm{M}^{c}(\mathcal{M}_{\mu})=\oplus_{i,j}\mathrm{W}_{c}^{-}(\mathbb{A}^{\frac{d_{\mu}+j-i}{2}})[i]^{\oplus c_{ij}^{\mu}}\Rightarrow\mathrm{W}(\mathcal{M}_{\mu})=\oplus_{i,j}\mathrm{W}(\mathbb{A}^{\frac{d_{\mu}+j-i}{2}})[-i]^{\oplus c_{ij}^{\mu}}.\] As \(H^{a}(\mathfrak{R}_{B}^{b}(\mathrm{W}(\mathbb{A}^{m})[a^{\prime}]))=H_{\mathrm{W},c}^{a+a^{\prime},b}(\mathbb{A}^{m},\mathfrak{R}_{B})=\mathbb{Z}\) in bi-degree \((a,b)=(-a^{\prime},2m)\), we see that \[H_{\mathrm{W},c}^{\bullet,\star}(\mathcal{M}_{\mu},\mathfrak{R}_{B})=\oplus_{i,j}\mathbb{Z}[-i]\{-d_{\mu}-j+i\}^{\oplus c_{ij}^{\mu}},\;\;\Rightarrow\;\;H_{c}^{*}(\mathcal{M}_{\mu},\mathbb{Q})=\oplus_{i,j}\mathbb{Q}(\tfrac{-d_{\mu}-j+i}{2})[-d_{\mu}-j]^{\oplus c_{ij}^{\mu}}.\] Here, \(\mathbb{Z}[a^{\prime}]\{b^{\prime}\}\) denotes \(\mathbb{Z}\) in bi-degree \((a,b)=(-a^{\prime},-b^{\prime})\). This implies that \[\mathbb{W}(\mathcal{M}_{\mu};q,t)=\sum\limits_{i,j}c_{ij}^{\mu}q^{\frac{d_{\mu}+j-i}{2}}t^{i}=\sqrt{q}^{d_{\mu}}\mathbb{H}_{\mu}(-\frac{t}{\sqrt{q}},\sqrt{q}).\] c) By the same Lemma 4.8, after tensoring with \(\mathbb{Q}\), (4) reduces to Conjecture 4.16.(5). This explains the name. By the computation above and (4.3.14), we see that (3) \(\Rightarrow\) (4).
By Theorem 4.4.(\(b\)), recall that \(K_{0}(t):K_{0}(\mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}}(\mathbb{K}))\cong K_{0 }(\mathbf{Chow}_{\mathrm{rat}}^{\mathrm{eff}}(\mathbb{K}))\). Denote
\[K_{0}(X):=K_{0}(\mathrm{M}^{c}(X))\in K_{0}(\mathbf{DM}_{\mathrm{gm}}^{ \mathrm{eff}}(\mathbb{K}))\cong K_{0}(\mathbf{Chow}_{\mathrm{rat}}^{\mathrm{ eff}}(\mathbb{K})),\quad\forall X\in\mathbf{Var}(\mathbb{K}). \tag{4.3.23}\]
Denote \(\mathbb{L}:=K_{0}(\mathbf{L})=K_{0}(\mathbb{A}^{1})\), and \(\mathbb{1}:=K_{0}(\mathrm{pt})\).
**Proposition 4.22**.: _If \(\mu\) is very generic, then a weak form of Conjecture 4.20.(3) holds for \(\mathcal{M}_{\mu}\):_
\[K_{0}(\mathcal{M}_{\mu})=\sum\limits_{\ell=0}^{d_{\mu}}c_{\ell}^{\mu}\mathbb{L }^{\ell}\in K_{0}(\mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}}(\mathbb{K}))\cong K _{0}(\mathbf{Chow}_{\mathrm{rat}}^{\mathrm{eff}}(\mathbb{K})),\]
_where \(\sum\limits_{\ell=0}^{d_{\mu}}c_{\ell}^{\mu}q^{\ell}:=\sqrt{q}^{d_{\mu}} \mathbb{H}_{\mu}(\frac{1}{\sqrt{q}},\sqrt{q})\in\mathbb{Z}[q]\)._
Proof.: By the cell decomposition theorem 2.10, Theorem 4.1.(3), and Lemma 4.11, we have
\[K_{0}(\mathcal{M}_{\mu})=\sum\limits_{(\vec{w},p)\in\mathcal{W}^{*}}(\mathbb{L }-\mathbb{1})^{\overline{a}(\vec{w},p)}\mathbb{L}^{\overline{b}(\vec{w},p)}= \sum\limits_{0\leq j\leq\frac{d_{\mu}}{2}}f_{j}(\mathbb{L}-\mathbb{1})^{d_{\mu }-2j}\mathbb{L}^{j}=:\sum\limits_{\ell=0}^{d_{\mu}}\widetilde{c}_{\ell}^{\mu} \mathbb{L}^{\ell}, \tag{4.3.24}\]
where \(f_{j}:=|\{(\vec{w},p)\in\mathcal{W}^{*}:\overline{b}(\vec{w},p)=j\}|\in \mathbb{Z}_{\geq 0}\), and
\[\widetilde{c}_{\ell}^{\mu}=\sum\limits_{j=0}^{\ell}f_{\ell-j}\binom{d_{\mu}-2 \ell+2j}{j}(-1)^{j}\in\mathbb{Z};\quad f_{i}:=0,\forall\frac{d_{\mu}}{2}<i \leq d_{\mu}. \tag{4.3.25}\]
Then by Remark 4.17.(\(e\)) ([36, Thm.1.2.3]), we have
\[E(\mathcal{M}_{\mu},q)=\sum\limits_{\ell=0}^{d_{\mu}}\widetilde{c}_{\ell}^{ \mu}q^{\ell}=\sqrt{q}^{d_{\mu}}\mathbb{H}_{\mu}(\frac{1}{\sqrt{q}},\sqrt{q})= \sum\limits_{\ell=0}^{d_{\mu}}c_{\ell}^{\mu}q^{\ell}.\]
It follows that \(\widetilde{c}_{\ell}^{\mu}=c_{\ell}^{\mu}\). We're done.
**Remark 4.23**.: There is a formula for \(f_{j}\) in terms of the \(c_{\ell}^{\mu}\)'s: Denote \(d:=d_{\mu},c_{\ell}:=c_{\ell}^{\mu}\). Then,
\[q^{-\frac{d}{2}}\sum\limits_{\ell=0}^{d}c_{\ell}q^{\ell}=q^{-\frac{d}{2}}\sum \limits_{0\leq j\leq\frac{d}{2}}f_{j}(q-1)^{d-2j}q^{j}=\sum\limits_{0\leq j \leq\frac{d}{2}}f_{j}z^{d-2j},\quad z:=\sqrt{q}-\frac{1}{\sqrt{q}}.\]
Besides, the curious Poincare duality (Remark 4.17.(\(g\))) implies that \(c_{\ell}=c_{d-\ell}\). So,
\[q^{-\frac{d}{2}}\sum_{\ell=0}^{d}c_{\ell}q^{\ell}=c_{\frac{d}{2}}+\sum_{1\leq m\leq\frac{d}{2}}c_{\frac{d}{2}-m}(q^{m}+q^{-m})=\sum_{0\leq m\leq\frac{d}{2}}c_{\frac{d}{2}-m}p_{m}(e_{1},e_{2},0,\cdots);\ \ e_{1}:=z^{2}+2,\ e_{2}:=1.\]
Here, \(p_{m}\) is the \(m\)-th power sum (Appendix B.2), with \(p_{0}:=1\), and \(e_{i}:=0,\forall i>2\). Recall that
\[p_{m}=(-1)^{m}m\sum_{r_{1}+2r_{2}+\cdots+mr_{m}=m,\ r_{i}\geq 0}\frac{(r_{1}+r_{2}+\cdots+r_{m}-1)!}{r_{1}!r_{2}!\cdots r_{m}!}\prod_{i=1}^{m}(-e_{i})^{r_{i}},\quad\forall m\geq 1.\]
By a direct computation, we then obtain, for all \(0\leq j\leq\frac{d_{\boldsymbol{\mu}}}{2}\):
\[f_{j}=\sum_{\ell=0}^{j}c_{\ell}^{\boldsymbol{\mu}}\Big[\sum_{0\leq i\leq\frac{j-\ell}{2}}(-1)^{i}\frac{(\frac{d_{\boldsymbol{\mu}}}{2}-\ell-i)!\;2^{j-\ell-2i}}{(\frac{d_{\boldsymbol{\mu}}}{2}-j)!\,i!\,(j-\ell-2i)!}+\sum_{1\leq i\leq\frac{j-\ell}{2}}(-1)^{i}\frac{(\frac{d_{\boldsymbol{\mu}}}{2}-\ell-i-1)!\;2^{j-\ell-2i}}{(\frac{d_{\boldsymbol{\mu}}}{2}-j)!\,(i-1)!\,(j-\ell-2i)!}\Big]. \tag{4.3.26}\]
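As a quick sanity check: for \(d_{\mu}=2\) with \((c_{0}^{\mu},c_{1}^{\mu},c_{2}^{\mu})=(1,4,1)\) (the values of Example 4.26 below), comparing coefficients in \(f_{0}(q-1)^{2}+f_{1}q=1+4q+q^{2}\) gives \(f_{0}=1\) and \(f_{1}=6\), which agrees with (4.3.26).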
The bonus of Conjecture 4.20.(4) is the following:
**Proposition 4.24**.: _For any generic character variety \(\mathcal{M}_{\mathbf{\mu}}\) of type \(\mathbf{\mu}\), we have_
\[H^{\bullet,2d_{\mathbf{\mu}}}_{\mathrm{W},c}(\mathcal{M}_{\mathbf{\mu}},\mathfrak{R}_ {B})\cong\mathbb{Z}.\quad(\text{i.e., }H^{a,2d_{\mathbf{\mu}}}_{\mathrm{W},c}(\mathcal{M}_{\mathbf{\mu}},\mathfrak{R}_{B})= 0,\forall a\neq 0;\ \ H^{0,2d_{\mathbf{\mu}}}_{\mathrm{W},c}(\mathcal{M}_{\mathbf{\mu}}, \mathfrak{R}_{B})\cong\mathbb{Z})\]
_Thus,_
* _Conjecture_ 4.20.(_4_) \(\Rightarrow\) \(\mathbb{D}\partial\mathcal{M}_{\mathbf{\mu}}\) _is an integral homology sphere of dimension_ \(d_{\mathbf{\mu}}-1\)_;_
* _Conjecture_ 4.20.(_4_)_ \(+\)__\((2^{\prime})\) _\(\Rightarrow\) _the homotopy type conjecture_ 0.2 _holds for_ \(\mathcal{M}_{\mathbf{\mu}}\)_._
Proof.: Recall that \(\mathcal{M}_{\mathbf{\mu}}\) is _connected_ smooth affine of even dimension \(d=d_{\mathbf{\mu}}\). Take any log compactification \((\overline{\mathcal{M}}_{\mathbf{\mu}},\partial\mathcal{M}_{\mathbf{\mu}})\) with very simple normal crossing boundary divisor, as in Remark 4.9. So, \(\partial\mathcal{M}_{\mathbf{\mu}}=\cup_{i=1}^{n}Y_{i}\) is a union of \(n\) irreducible components of dimension \(d-1\), and \(H^{a,2d}_{\mathrm{W},c}(\mathcal{M}_{\mathbf{\mu}},\mathfrak{R}_{B})\) is the \(a\)-th cohomology of the cochain complex:
\[\mathfrak{R}_{B}^{2d}(W(X))=[H^{2d}(\overline{X})\xrightarrow{\partial^{*}}H^ {2d}(Y^{(1)})\xrightarrow{\partial^{*}}\cdots\xrightarrow{\partial^{*}}H^{2d} (Y^{(d)})]=[\mathbb{Z}\to 0\to\cdots\to 0].\]
Thus, \(H^{\bullet,2d_{\mathbf{\mu}}}_{\mathrm{W},c}(\mathcal{M}_{\mathbf{\mu}},\mathfrak{R}_ {B})\cong\mathbb{Z}\), as desired. If Conjecture 4.20.(4) holds for \(\mathcal{M}_{\mathbf{\mu}}\), then
\[H^{\bullet,0}(\mathcal{M}_{\mathbf{\mu}},\mathfrak{R}_{B})=H^{\bullet,d-2\frac{d}{ 2}}(\mathcal{M}_{\mathbf{\mu}},\mathfrak{R}_{B})\cong H^{\bullet-2\frac{d}{2},d+2 \frac{d}{2}}(\mathcal{M}_{\mathbf{\mu}},\mathfrak{R}_{B})=\mathbb{Z}[-d].\]
Then by Proposition 4.10, \(\widetilde{H}^{\bullet}(\mathbb{D}\partial\mathcal{M}_{\mathbf{\mu}},\mathbb{Z}) \cong H^{\bullet+1,0}(\mathcal{M}_{\mathbf{\mu}},\mathfrak{R}_{B})=\mathbb{Z}[-(d -1)]\), as desired.
**Remark 4.25**.: Let \(\mathcal{N}(n,c_{1})\) be the moduli space of stable rank \(n\) holomorphic bundles of degree \(c_{1}\) on a Riemann surface \(\Sigma_{g}\) of genus \(g\). In [19, Thm.3.1], using the Yang-Mills functional as a Morse-Bott function, it's shown that \(\pi_{1}(\mathcal{N}(n,c_{1}))\) is abelian, if \(n,c_{1}\) are coprime and \((g,n)\neq(2,2)\). In this case, \(T^{*}\mathcal{N}(n,c_{1})\) is open dense in \(\mathcal{M}_{\mathrm{Dol}}(n,c_{1})\), the Dolbeault moduli space of stable rank \(n\) Higgs bundles of degree \(c_{1}\) on \(\Sigma_{g}\). So, we obtain a surjection \(\pi_{1}(\mathcal{N}(n,c_{1}))\simeq\pi_{1}(T^{*}\mathcal{N}(n,c_{1}))\twoheadrightarrow\pi_{1}(\mathcal{M}_{\mathrm{Dol}}(n,c_{1}))\), and hence \(\pi_{1}(\mathcal{M}_{\mathrm{Dol}}(n,c_{1}))\) is also abelian. Under NAH, we get a diffeomorphism \(\mathcal{M}_{\mathrm{Dol}}(n,c_{1})\simeq\mathcal{M}_{B}(n,c_{1})\), with \(\mathcal{M}_{B}(n,c_{1})=\mathcal{M}_{\mathbf{\mu}}\), \(k=1,\mathbf{\mu}=((n))\), and \(C_{1}=\exp(-\frac{2\pi ic_{1}}{n})\). Then, \(\pi_{1}(\mathcal{M}_{B}(n,c_{1}))\) and \(\pi_{1}(\overline{\mathcal{M}}_{B}(n,c_{1}))\) are abelian. See also [24, Prop.6.30]. By Proposition 4.24,
the integral curious Poincare duality conjecture 4.20.(4) \(\Rightarrow\) the homotopy type conjecture 0.2 for \(\mathcal{M}_{B}(n,c_{1})\). We expect that such an argument works more generally.
### Examples
We verify Conjecture 4.20 in three examples.
**Example 4.26** (Example 2.13 revisited).: \((g,k,n,\mu)=(0,4,2,((1^{2}),(1^{2}),(1^{2}),(1^{2})))\). For simplicity, assume \(\det C_{j}=1\). So, \(C_{j}=\operatorname{Diag}(a_{j,1},a_{j,2}=a_{j,1}^{-1}),a_{j,1}\neq\pm 1\); \(\prod_{j=1}^{4}a_{j,\psi_{j}(1)}\neq 1,\forall\psi_{j}\in W=S_{2}\). Then, \(\mathcal{M}_{\mu}\) is a smooth cubic affine surface defined by
(4.4.1) \[\sum_{j=1}^{3}y_{j}^{2}+y_{1}y_{2}y_{3}-\cdots=0,\]
where the omitted lower-order terms have coefficients determined by the eigenvalues \(a_{j,1}\).
**Step \(2\)**. Now, we're ready to compute \(\mathrm{M}^{c}(\mathcal{M}_{\mu})\in\mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}}( \mathbb{K})\). Denote \(p_{\star\star^{\prime}}:=L_{\star}\cap L_{\star^{\prime}}\cong\mathrm{Spec} \ \mathbb{K}=\mathrm{pt}\) for \(\star\neq\star^{\prime}\). By Lemma 4.2, we have
\[\mathrm{M}^{c}(\mathcal{M}_{\mu})\cong[\mathrm{M}(p_{YZ})\oplus\mathrm{M}(p_{ XZ})\oplus\mathrm{M}(p_{XY})\xrightarrow{\partial^{(2)}}\mathrm{M}(L_{X})\oplus \mathrm{M}(L_{Y})\oplus\mathrm{M}(L_{Z})\xrightarrow{\partial^{(1)}}\mathrm{M} (\overline{\mathcal{M}}_{\mu})], \tag{4.4.2}\]
where \(\mathrm{M}(\overline{\mathcal{M}}_{\mu})\) is placed in degree \(0\). Denote \(\mathrm{M}(p_{\star\star^{\prime}})=\mathbb{Z}_{\star\star^{\prime}}\cong \mathbb{Z}\in\mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}}(\mathbb{K})\). As \(\mathrm{M}(\mathbb{P}^{1})\cong\mathrm{M}^{c}(\mathbb{A}^{1})\oplus\mathrm{M }(\mathrm{pt})\in\mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}}(\mathbb{K})\), we may write \(\mathrm{M}(L_{\star})=\mathbf{L}_{\star}\oplus\mathbb{Z}_{\star}\), with \(\mathbf{L}_{\star}\cong\mathrm{M}^{c}(\mathbb{A}^{1})=\mathbb{Z}(1)[2]\). Then by (4.1.6), \(\partial^{(2)}\) is given by:
\[\partial^{(2)}:\mathbb{Z}_{YZ}\oplus\mathbb{Z}_{XZ}\oplus\mathbb{Z}_{XY} \rightarrow\mathbb{Z}_{X}\oplus\mathbb{Z}_{Y}\oplus\mathbb{Z}_{Z}\oplus \mathbf{L}_{X}\oplus\mathbf{L}_{Y}\oplus\mathbf{L}_{Z}:(u,v,w)\mapsto(-w-v,w- u,u+v,0,0,0). \tag{4.4.3}\]
By Theorem 4.1.(5), we have \(\mathrm{M}(\mathbb{P}^{2})\cong\mathbb{Z}\oplus\mathbb{Z}(1)[2]\oplus\mathbb{ Z}(2)[4]\). Denote its direct summand \(\mathbb{Z}(1)[2]\) by \(\mathbf{L}_{H}\). By Theorem 4.1.(6), we have
\[\mathrm{M}(\overline{\mathcal{M}}_{\mu})\cong\mathrm{M}(\mathbb{P}^{2})\oplus \oplus_{\star=X,Y,Z}(\mathrm{M}(p_{\star,1})(1)[2]\oplus\mathrm{M}(p_{\star,3} )(1)[2]).\]
Denote \(\mathbf{L}_{\star,j}:=\mathrm{M}(p_{\star,j})(1)[2]\). Then we may write
\[\mathrm{M}(\overline{\mathcal{M}}_{\mu})\cong\mathbb{Z}\oplus\mathbb{Z}(2)[4] \oplus\mathbf{L}_{H}\oplus\oplus_{\star=X,Y,Z}(\mathbf{L}_{\star,1}\oplus \mathbf{L}_{\star,3}).\]
It remains to compute \(\partial^{(1)}=\delta_{1}\), equivalently, \(\delta_{\star}:=\delta_{1}|_{L_{\star}}:\mathrm{M}(L_{\star})\rightarrow\mathrm{ M}(\overline{\mathcal{M}}_{\mu})\) induced by the inclusion \(L_{\star}\hookrightarrow\overline{\mathcal{M}}_{\mu}\), for each \(\star\in\{X,Y,Z\}\). Alternatively, we may compute for \(m\geq 0\):
\[\delta_{\star}^{*}:\mathrm{CH}^{m}(\overline{\mathcal{M}}_{\mu})\cong\mathrm{ Hom}_{\mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}}(\mathbb{K})}(\mathrm{M}( \overline{\mathcal{M}}_{\mu}),\mathbb{Z}(m)[2m])\rightarrow\mathrm{Hom}_{ \mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}}(\mathbb{K})}(\mathrm{M}(L_{\star}), \mathbb{Z}(m)[2m])\cong\mathrm{CH}^{m}(L_{\star}).\]
Here, we use Theorem 4.1.(10). Now, the map \(\delta_{\star}^{*}\) is simply the pullback of Chow cycles along \(L_{\star}\hookrightarrow\overline{\mathcal{M}}_{\mu}\). By dimensional reason, \(\delta_{\star}^{*}=0\) for \(m>1\). We're left with the case \(m=0\) or \(1\).
If \(m=0\), we obtain an isomorphism \(\delta_{\star}^{*}:\mathrm{CH}^{0}(\overline{\mathcal{M}}_{\mu})=\mathbb{Z} \xrightarrow{\cong}\mathrm{CH}^{0}(L_{\star})=\mathbb{Z}\). By Lemma 4.18, this means that we get an isomorphism \(\delta_{\star}|_{\mathrm{M}(\mathrm{pt})}:\mathrm{M}(\mathrm{pt})=\mathbb{Z}_{ \star}\xrightarrow{\cong}\mathrm{M}(\mathrm{pt})=\mathbb{Z}\subset\mathrm{M}( \overline{\mathcal{M}}_{\mu})\).
If \(m=1\), we have \(\mathrm{CH}^{1}(\overline{\mathcal{M}}_{\mu})=\mathrm{Pic}(\overline{\mathcal{M}}_{\mu})\cong H^{2}(\overline{\mathcal{M}}_{\mu},\mathbb{Z})=\mathbb{Z}^{7}\), with a basis given by \([\widetilde{H}]\), \([L_{\bullet,j}]\), \(\bullet\in\{X,Y,Z\},j=1,3\), where \(\widetilde{H}\) is the proper transform under \(\pi:\overline{\mathcal{M}}_{\mu}\rightarrow\mathbb{P}^{2}\) of a line \(H\subset\mathbb{P}^{2}\). Besides, \(\mathrm{CH}^{1}(L_{\star})=\mathrm{Pic}(L_{\star})=\mathbb{Z}\cdot[\mathrm{pt}]\). Then \(\delta_{\star}^{*}\) becomes the intersection product with \(L_{\star}\):
\[\delta_{\star}^{*}=L_{\star}\cdot(-):\mathrm{CH}^{1}(\overline{\mathcal{M}}_{\mu} )=\mathbb{Z}\cdot[\widetilde{H}]\oplus\oplus_{\bullet\in\{X,Y,Z\},j=1,3}\mathbb{Z }\cdot[L_{\bullet,j}]\rightarrow\mathrm{CH}^{1}(L_{\star})=\mathbb{Z}\cdot[ \mathrm{pt}]:\]
That is, \(\delta_{\star}^{*}([\widetilde{H}])=1\), \(\delta_{\star}^{*}([L_{\bullet,j}])=1\) if \(\bullet=\star\), and \(0\) otherwise. Thus, by Lemma 4.18, we get
\[\delta_{\star}:\mathrm{M}(L_{\star})=\mathbb{Z}_{\star}\oplus\mathbf{L}_{\star}\to\mathrm{M}(\overline{\mathcal{M}}_{\mu})=\mathbb{Z}\oplus\mathbf{L}_{H}\oplus\oplus_{\bullet\in\{X,Y,Z\},j=1,3}\mathbf{L}_{\bullet,j}\oplus\mathbb{Z}(2)[4],\] \[\delta_{\star}|_{\mathbb{Z}_{\star}}:\mathbb{Z}_{\star}\overset{\cong}{\to}\mathbb{Z};\quad\delta_{\star}|_{\mathbf{L}_{\star}}:\mathbf{L}_{\star}\xrightarrow{(1,1,1)^{\mathrm{T}}}\mathbf{L}_{H}\oplus\mathbf{L}_{\star,1}\oplus\mathbf{L}_{\star,3}. \tag{4.4.4}\]
Altogether, combining (4.4.3) and (4.4.4), a direct computation now gives:
\[\mathrm{M}^{c}(\mathcal{M}_{\mu})\cong[\mathbb{Z}\overset{0}{\to}0\overset{0 }{\to}\mathbf{L}^{\oplus 4}\oplus\mathbb{Z}(2)[4]]=\mathbf{L}^{0}[2]\oplus( \mathbf{L}^{1})^{\oplus 4}\oplus\mathbf{L}^{2}\in\mathbf{DM}_{\mathrm{gm}}^{ \mathrm{eff}}(\mathbb{K}). \tag{4.4.5}\]
Here, recall that \(\mathbf{L}^{m}=\mathbf{L}^{\otimes m}=\mathbb{Z}(m)[2m]\), for all \(m\geq 0\).
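Indeed, by Lemma 4.18 the computation reduces to complexes of abelian groups: on the \(\mathbb{Z}\)-summands one gets \([\mathbb{Z}^{\oplus 3}\xrightarrow{\partial^{(2)}}\mathbb{Z}_{X}\oplus\mathbb{Z}_{Y}\oplus\mathbb{Z}_{Z}\xrightarrow{\mathrm{sum}}\mathbb{Z}]\), which is exact except for \(\ker\partial^{(2)}\cong\mathbb{Z}\) in degree \(-2\), contributing \(\mathbf{L}^{0}[2]\); each \(\mathbf{L}_{\star}\) embeds by \((1,1,1)^{\mathrm{T}}\) into \(\mathbf{L}_{H}\oplus\mathbf{L}_{\star,1}\oplus\mathbf{L}_{\star,3}\), with total cokernel \(\mathbf{L}^{\oplus 4}\); and the summand \(\mathbb{Z}(2)[4]=\mathbf{L}^{2}\) is untouched.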
**Step \(3\)**. For \(\mu=((1^{2}),(1^{2}),(1^{2}),(1^{2}))\), we know that (see e.g. [36, Rmk.1.5.5]):
\[\mathbb{H}_{\mu}(-z,w)=z^{2}+4+w^{2}=:\underset{i,j}{\sum}c_{ij}^{\mu}z^{i}w^ {j}. \tag{4.4.6}\]
So, \(c_{2,0}^{\mu}=c_{0,2}^{\mu}=1\), \(c_{00}^{\mu}=4\), and \(c_{ij}^{\mu}=0\) otherwise. It follows that
\[\oplus_{i,j}(\mathbf{L}^{\frac{d\mu+j-i}{2}}[i])^{\oplus c_{ij}^{\mu}}= \mathbf{L}^{0}[2]\oplus(\mathbf{L}^{1})^{\oplus 4}\oplus\mathbf{L}^{2}=\mathrm{M }^{c}\left(\mathcal{M}_{\mu}\right)\in\mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}} (\mathbb{K}).\]
The result matches with the computation above, hence verifies Conjecture 4.20 for this example.
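For the record, the exponent bookkeeping in this comparison can be done mechanically; here is a tiny Python sketch (our addition, not part of the argument):

```python
# Sketch: recover the summands of (4.4.5) from d_mu = 2 and H_mu(-z,w) = z^2 + 4 + w^2.
d_mu = 2
c = {(2, 0): 1, (0, 0): 4, (0, 2): 1}   # the nonzero coefficients c^mu_{ij}
summands = [(f"L^{(d_mu + j - i) // 2}[{i}]", mult) for (i, j), mult in c.items()]
print(summands)   # [('L^0[2]', 1), ('L^1[0]', 4), ('L^2[0]', 1)]
```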
**Example 4.27** (\((g,k,n,\mu)=(1,1,n,((n)))\)).: Let \(\Sigma_{1,1}:=(\Sigma_{1},\sigma=\{q_{1}\})\) be the punctured torus, \(G=GL_{n}(\mathbb{K})\). So, \(W=S_{n}\). Let \(C_{1}\in T\) be of type \(\mu=((n))\): \(\mu\) is **not** very generic and
\[C_{1}=a_{1,1}I_{n}\in T\cong(\mathbb{K}^{\times})^{n}.\]
The _generic_ assumption (Definition 1.1) becomes \(a_{1,1}^{n}=1\), and \(a_{1,1}^{n^{\prime}}\neq 1\), for all \(1\leq n^{\prime}<n\). So, \(a_{1,1}\) is a primitive \(n\)-th root of unity. Say, \(a_{1,1}=\exp(-\frac{2\pi c_{1}i}{n})\), \(\gcd(c_{1},n)=1\). Recall that \(\dim\mathcal{M}_{\mu}=d_{\mu}=n^{2}(2g-2+k)-\underset{i=1}{\sum}\Sigma_{j}( \mu_{j}^{i})^{2}+2=n^{2}-n^{2}+2=2\). In fact, by [40, Thm.2.2.17], we have \(\mathcal{M}_{\mu}\cong(\mathbb{K}^{\times})^{2}\). Thus, by Theorem 4.1.(1), we have
\[\mathrm{M}^{c}\left(\mathcal{M}_{\mu}\right)=\mathrm{M}^{c}(\mathbb{K}^{\times })^{\otimes 2}\cong(\mathbb{Z}[1]\oplus\mathbf{L})^{\otimes 2}=\mathbf{L}^{0}[2] \oplus(\mathbf{L}^{1}[1])^{\oplus 2}\oplus\mathbf{L}^{2}\in\mathbf{DM}_{ \mathrm{gm}}^{\mathrm{eff}}(\mathbb{K}).\]
In this case, Conjecture 4.20.(3) means that,
\[\mathrm{M}^{c}\left(\mathcal{M}_{\mu}\right)=\mathbf{L}^{0}[2]\oplus(\mathbf{L }^{1}[1])^{\oplus 2}\oplus\mathbf{L}^{2}=\oplus_{i,j}(\mathbf{L}^{\frac{d_{\mu}+j-i}{2}}[ i])^{\oplus c_{i,j}^{\mu}},\quad\mathbb{H}_{(n)}(-z,w)=\underset{i,j}{\sum}c_{i,j}^{\mu}z^{i}w^ {j}.\]
That is, \(c_{2,0}^{\mu}=c_{0,2}^{\mu}=1,c_{1,1}^{\mu}=2\), i.e., \(\mathbb{H}_{(n)}(z,w)=(z-w)^{2}\). For arbitrary \(n\), this is still a _conjecture_ ([40, Conj.4.3.2], [36, Conj.1.5.1]). For lower ranks, however, one can compute \(\mathbb{H}_{(n)}(z,w)\) by hand, via Definition B.6, 4.14, or the formula below [36, (2.3.28)]:
\[\underset{\lambda\in\mathcal{P}}{\sum}\mathcal{H}_{\lambda}(z,w)T^{|\lambda|}=\mathrm{Exp}(\underset{n\geq 1}{\sum}\frac{\mathbb{H}_{(n)}(z,w)}{(z^{2}-1)(1-w^{2})}T^{n}). \tag{4.4.7}\]
Indeed, \(\mathbb{H}_{((1))}(z,w)=\mathbb{H}_{((2))}(z,w)=(z-w)^{2}\), and so on.
**Example 4.28** (Example 2.14 revisited).: \((g,k,n,\mu)=(0,3,3,((1^{3}),(1^{3}),(1^{3})))\). For the simplified version, \(\zeta:=\exp(\frac{2\pi i}{3})\), \(C_{i}:=\operatorname{Diag}(1,\zeta,\zeta^{2}),1\leq i\leq 2,\ \ C_{3}:= \operatorname{Diag}(a_{1},a_{2},a_{3})\), such that
\[\left\{\begin{array}{l}\operatorname{Tr}(C_{3})=a_{1}+a_{2}+a_{3}=0,\\ \operatorname{Tr}(C_{3}^{-1})=\frac{1}{a_{1}}+\frac{1}{a_{2}}+\frac{1}{a_{3}}= t,\\ \det C_{3}=a_{1}a_{2}a_{3}=1,\end{array}\right.\]
where \(\tau=-\delta(C_{3})^{2}=4t^{3}+27\neq 0,27\). Then, \(\mathcal{M}_{\mu}\subset\mathbb{A}_{\widetilde{y}_{1},\widetilde{y}_{2},\widetilde{y}_{3}}^{3}\) is a _smooth cubic affine surface_ given by \(\widetilde{F}(\widetilde{y}_{1},\widetilde{y}_{2},\widetilde{y}_{3}):=\widetilde{y}_{1}\widetilde{y}_{2}\widetilde{y}_{3}+\widetilde{y}_{1}^{3}+\widetilde{y}_{2}^{3}+\widetilde{y}_{3}^{2}-\frac{9}{2}\widetilde{y}_{1}\widetilde{y}_{2}+\frac{\tau}{4}=0\).
1) We would like to construct a concrete log compactification of \(\mathcal{M}_{\mu}\). Take the projective closure \(\widetilde{\mathcal{M}}_{\mu}\) of \(\mathcal{M}_{\mu}\) in \(\mathbb{P}_{[Y_{1}:Y_{2}:Y_{3}:Y_{4}]}^{3}\supset\{Y_{4}\neq 0\}\cong\mathbb{A}_{ \widetilde{y}_{1},\widetilde{y}_{2},\widetilde{y}_{3}}^{3}\), with \(\widetilde{y}_{i}=\frac{Y_{i}}{Y_{4}}\). So, \(\widetilde{\mathcal{M}}_{\mu}\) is defined by
\[\widetilde{\mathcal{M}}_{\mu}=\{\widetilde{F}(Y_{1},Y_{2},Y_{3},Y_{4})=Y_{1}Y _{2}Y_{3}+Y_{1}^{3}+Y_{2}^{3}+Y_{3}^{2}Y_{4}-\frac{9}{2}Y_{1}Y_{2}Y_{4}+\frac{ \tau}{4}Y_{4}^{3}=0\};\]
\[\widetilde{\partial}\mathcal{M}_{\mu}:=\widetilde{\mathcal{M}}_{\mu}-\mathcal{ M}_{\mu}=\{H(Y_{1},Y_{2},Y_{3}):=Y_{1}Y_{2}Y_{3}+Y_{1}^{3}+Y_{2}^{3}=0\}\subset \mathbb{P}_{[Y_{1}:Y_{2}:Y_{3}]}^{2}=\{Y_{4}=0\}\subset\mathbb{P}_{[Y_{1}:Y_{ 2}:Y_{3}:Y_{4}]}^{3}.\]
By a direct check, \(\widetilde{\mathcal{M}}_{\mu}\) is a _smooth cubic projective surface_, and \(\widetilde{\partial}\mathcal{M}_{\mu}\subset\mathbb{P}_{[Y_{1}:Y_{2}:Y_{3}]}^{2}\) is a _cubic nodal rational curve_ with a unique nodal singularity at \(p_{3}:=[0:0:1]\).
Denote \({}^{1}\widetilde{\mathcal{M}}_{\mu}:=\widetilde{\mathcal{M}}_{\mu}\), and \({}^{1}D_{0}:=\widetilde{\partial}\mathcal{M}_{\mu}\). Let \(\pi_{2}:{}^{2}\widetilde{\mathcal{M}}_{\mu}\to{}^{1}\widetilde{\mathcal{M}}_{\mu}\) be the blowing up at \(p_{3}\). Let \({}^{2}E_{1}\) be the exceptional divisor, and \({}^{2}D_{0}\) be the proper transform of \({}^{1}D_{0}\). Then \({}^{2}\widetilde{\partial}\mathcal{M}_{\mu}={}^{2}\widetilde{\mathcal{M}}_{ \mu}-\mathcal{M}_{\mu}\) consists of two rational curves \({}^{2}D_{0},{}^{2}E_{1}\) intersecting at two points, say \({}^{2}p_{3,1},{}^{2}p_{3,2}\). Let \(\pi_{3}:\widetilde{\mathcal{M}}_{\mu}:={}^{3}\widetilde{\mathcal{M}}_{\mu} \to{}^{2}\widetilde{\mathcal{M}}_{\mu}\) be the blowing up at \({}^{2}p_{3,1}\). Let \({}^{3}E_{2}\) be the exceptional divisor, and \({}^{3}D_{0},{}^{3}E_{1}\) be the proper transform of \({}^{2}D_{0},{}^{2}E_{1}\) respectively. Then \(\partial\mathcal{M}_{\mu}=\overline{\mathcal{M}}_{\mu}-\mathcal{M}_{\mu}\) consists of a triangle of three rational curves \({}^{3}D_{0},{}^{3}E_{1},{}^{3}E_{2}\). In particular, this matches with Theorem 3.15. Now, \((\overline{\mathcal{M}}_{\mu},\partial\mathcal{M}_{\mu})\) is a log compactification with very simple normal crossing boundary divisor.
2) We would like to determine the configuration of 27 lines in \(\widetilde{\mathcal{M}}_{\mu}\). Recall that \(\tau=-\delta(C_{3})^{2}=4t^{3}+27\) and \(\tau\neq 0,27\). So, \(-t^{3}=\frac{\delta(C_{3})^{2}+27}{4}=\frac{\delta(C_{3})+3\sqrt{3}i}{2}\cdot \frac{\delta(C_{3})-3\sqrt{3}i}{2}\). Fix a **cubic root**\(\delta^{\prime}=\delta^{\prime}(C_{3})\) of \(\frac{\delta(C_{3})+3\sqrt{3}i}{2}\), i.e., \(\delta^{\prime 3}=\frac{\delta(C_{3})+3\sqrt{3}i}{2}\). Then, \(\frac{\delta(C_{3})-3\sqrt{3}i}{2}=-(\frac{t}{\delta^{\prime}})^{3}\).
**Lemma 4.29**.: _The 27 lines of \(\widetilde{\mathcal{M}}_{\mu}\) are given by:_
\[L_{1,u,v} := \{Y_{3}+3(ut+\frac{3}{2})Y_{4}=0,(ut+3)Y_{4}+vY_{1}+v^{2}Y_{2}=0\};\] \[L_{2,u,v} := \{Y_{3}-\frac{\sqrt{3}i}{2}(\delta^{\prime}u+\frac{t}{\delta^{ \prime}u})vY_{1}+\frac{1}{2}((\delta^{\prime}u)^{2}+(\frac{t}{\delta^{\prime}u })^{2})v^{2}Y_{2}=0,\] \[Y_{4}-\frac{\delta^{\prime}u-\frac{t}{\delta^{\prime}u}}{\delta(C _{3})}vY_{1}-\frac{(\delta^{\prime}u-\frac{t}{\delta^{\prime}u})(\delta^{ \prime}u+\frac{t}{\delta^{\prime}u})i}{\sqrt{3}\delta(C_{3})}v^{2}Y_{2}=0\};\] \[L_{3,u,v} := \{Y_{3}+\frac{1}{2}((\delta^{\prime}u)^{2}+(\frac{t}{\delta^{ \prime}u})^{2})v^{2}Y_{1}-\frac{\sqrt{3}i}{2}(\delta^{\prime}u+\frac{t}{ \delta^{\prime}u})vY_{2}=0,\] \[Y_{4}-\frac{(\delta^{\prime}u-\frac{t}{\delta^{\prime}u})(\delta^{ \prime}u+\frac{t}{\delta^{\prime}u})i}{\sqrt{3}\delta(C_{3})}v^{2}Y_{1}-\frac{ \delta^{\prime}u-\frac{t}{\delta^{\prime}u}}{\delta(C_{3})}vY_{2}=0\}, \tag{4.4.8}\]
_where \(u^{3}=v^{3}=1\). Moreover, the pairwise intersection numbers are as follows:_
\[L_{1,u,v}\cdot L_{1,u^{\prime},v^{\prime}} =\delta_{u,u^{\prime}}+\delta_{v,v^{\prime}}-\delta_{u,u^{\prime}} \delta_{v,v^{\prime}},\ \forall(u,v)\neq(u^{\prime},v^{\prime})\ \text{in}\ \{1,\zeta,\zeta^{2}\}; \tag{4.4.11}\] \[L_{2,u,v}\cdot L_{2,u^{\prime},v^{\prime}} =L_{3,u,v}\cdot L_{3,u^{\prime},v^{\prime}}=(1-\delta_{u,u^{\prime }})(1-\delta_{v,v^{\prime}}),\ \forall(u,v)\neq(u^{\prime},v^{\prime})\ \text{in}\ \{1,\zeta,\zeta^{2}\};\] \[L_{2,u,v}\cdot L_{3,u^{\prime},v^{\prime}} =\delta_{u,u^{\prime}},\quad L_{1,u,v}\cdot L_{2,u^{\prime},v^{ \prime}}=L_{1,u,v^{2}}\cdot L_{3,u^{\prime},v^{\prime}}=\delta_{\frac{v}{u},v ^{\prime}},\ \forall u,v,u^{\prime},v^{\prime}\in\{1,\zeta,\zeta^{2}\},\]
_where \(\delta_{a,b}\) is the Kronecker symbol: \(\delta_{a,b}=1\) if \(a=b\), and is \(0\) otherwise._
**Note**: As for any smooth cubic surface, each of the \(27\) lines has self-intersection number \(-1\):
\[L_{j,u,v}^{2}=-1,\quad\forall 1\leq j\leq 3,\forall u,v\in\{1,\zeta,\zeta^{2}\}. \tag{4.4.12}\]
Proof.: The proof of the lemma is a straightforward computation.4
Footnote 4: Though it takes more than \(10\) pages.
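Although we do not reproduce the computation here, each individual claim of Lemma 4.29 is easily checked by computer algebra. As an illustration, the following SymPy sketch (our addition, not part of the original proof) confirms that the line \(L_{1,1,1}\) of (4.4.8) lies on the cubic surface \(\widetilde{\mathcal{M}}_{\mu}\):

```python
import sympy as sp

# Check that L_{1,u,v} lies on {Ftilde = 0} for u = v = 1 (a cube root of unity).
Y2, Y4, t = sp.symbols('Y2 Y4 t')
u, v = 1, 1
tau = 4*t**3 + 27
# On L_{1,u,v}:  Y3 = -3(ut + 3/2) Y4  and  Y1 = -((ut + 3) Y4 + v^2 Y2)/v.
Y3 = -3*(u*t + sp.Rational(3, 2))*Y4
Y1 = -((u*t + 3)*Y4 + v**2*Y2)/v
Ftilde = Y1*Y2*Y3 + Y1**3 + Y2**3 + Y3**2*Y4 - sp.Rational(9, 2)*Y1*Y2*Y4 + tau/4*Y4**3
assert sp.expand(Ftilde) == 0
```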
**Note**: Observe that \(L_{2,\zeta,v},L_{3,\zeta^{2},v^{\prime}},v,v^{\prime}\in\{1,\zeta,\zeta^{2}\}\) are \(6\) disjoint \((-1)\)-curves in \(\widetilde{\mathcal{M}}_{\mu}\). For our purposes, all we need are these six lines, not the whole of Lemma 4.29.
By _Castelnuovo's contraction theorem_, we obtain a birational morphism \(\pi_{1}:\widetilde{\mathcal{M}}_{\mu}\to B\) which contracts \(L_{2,\zeta,v},L_{3,\zeta^{2},v^{\prime}},v,v^{\prime}\in\{1,\zeta,\zeta^{2}\}\) such that \(B\) is still smooth. As any smooth cubic surface is the blowing up of \(\mathbb{P}^{2}\) at \(6\) points in general position, we see that \(B\cong\mathbb{P}^{2}\).
3) For our purpose, let's compute the intersection numbers \({}^{1}D_{0}\cdot L_{j,u,v}\) for \(j=2,3\). Recall that \({}^{1}D_{0}=\{Y_{1}Y_{2}Y_{3}+Y_{1}^{3}+Y_{2}^{3}=0,Y_{4}=0\}\). By symmetry, we have \({}^{1}D_{0}\cdot L_{2,u,v}={}^{1}D_{0}\cdot L_{3,u,v}\). It suffices to compute \({}^{1}D_{0}\cdot L_{2,u,v}\). Denote \(x_{u}:=\delta^{\prime}u+\frac{t}{\delta^{\prime}u}\), so \((\delta^{\prime}u)^{2}+(\frac{t}{\delta^{\prime}u})^{2}=x_{u}^{2}-2t\). Denote
\[b_{3,1}:=\frac{\sqrt{3}i}{2}x_{u}v,\ \ b_{3,2}:=-\frac{1}{2}(x_{u}^{2}-2t)v^{2}; \ \ b_{4,1}:=\frac{\delta^{\prime}u-\frac{t}{\delta^{\prime}u}}{\delta(C_{3})}v, \ \ b_{4,2}:=\frac{(\delta^{\prime}u-\frac{t}{\delta^{\prime}u})x_{u}i}{\sqrt{3} \delta(C_{3})}v^{2}. \tag{4.4.13}\]
Then
\[{}^{1}D_{0}\cap L_{2,u,v} = \{Y_{3}=b_{3,1}Y_{1}+b_{3,2}Y_{2},Y_{4}=b_{4,1}Y_{1}+b_{4,2}Y_{2} =0,Y_{1}Y_{2}Y_{3}+Y_{1}^{3}+Y_{2}^{3}=0\}\]
Recall that \(b_{4,1},b_{4,2}\neq 0\). By the equations, \(Y_{4}=0,Y_{1}=-\frac{b_{4,2}}{b_{4,1}}Y_{2},Y_{3}=(b_{3,2}-b_{3,1}\frac{b_{4,2 }}{b_{4,1}})Y_{2}\), and
\[0 = Y_{1}Y_{2}Y_{3}+Y_{1}^{3}+Y_{2}^{3}=Y_{2}^{3}((-\frac{b_{4,2}}{b _{4,1}})^{3}+b_{3,1}(-\frac{b_{4,2}}{b_{4,1}})^{2}+b_{3,2}(-\frac{b_{4,2}}{b_{ 4,1}})+1).\]
That is, \({}^{1}D_{0}\cap L_{2,u,v}\neq\varnothing\) if and only if
\[(-\frac{b_{4,2}}{b_{4,1}})^{3}+b_{3,1}(-\frac{b_{4,2}}{b_{4,1}})^{2}+b_{3,2}(- \frac{b_{4,2}}{b_{4,1}})+1=0. \tag{4.4.14}\]
Observe that \(-\frac{b_{4,2}}{b_{4,1}}=-\frac{x_{u}iv}{\sqrt{3}}\). Then, the equation on \(b_{i,j}\)'s becomes
\[0=\frac{i}{3\sqrt{3}}x_{u}^{3}-\frac{i}{2\sqrt{3}}x_{u}^{3}+\frac{i}{2\sqrt{3} }(x_{u}^{2}-2t)x_{u}+1=\frac{i}{3\sqrt{3}}(x_{u}^{3}-3tx_{u}-3\sqrt{3}i).\]
Recall that \(3\sqrt{3}i=\delta^{\prime 3}+(\frac{t}{\delta^{\prime}})^{3}=x_{u}(x_{u}^{2}-3t)\), so the equation indeed holds. Also, we compute
\[b_{3,2}-b_{3,1}\frac{b_{4,2}}{b_{4,1}}=-\frac{v^{2}}{2}(x_{u}^{2}-2t)-\frac{ \sqrt{3}ix_{u}v}{2}\frac{ivx_{u}}{\sqrt{3}}=tv^{2}.\ \ \ \Rightarrow\ \ ^{1}D_{0}\cap L_{2,u,v}=\{[\frac{-ivx_{u}}{\sqrt{3}}:1:tv^{2}:0]\}.\]
The unique intersection point is away from the node \(p_{3}=[0:0:1:0]\) of \({}^{1}D_{0}\). Therefore,
\[{}^{1}D_{0}\cdot L_{2,u,v}={}^{1}D_{0}\cdot L_{3,u,v}=1,\ \ \ \forall u,v\in\{1,\zeta,\zeta^{2}\}. \tag{4.4.15}\]
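The vanishing used above, namely \(x_{u}^{3}-3tx_{u}-3\sqrt{3}i=0\), can also be checked symbolically by reducing modulo this relation; a minimal SymPy sketch (our addition, with \(v=1\) and \(x\) standing for \(x_{u}\)):

```python
import sympy as sp

t, x = sp.symbols('t x')     # x stands for x_u, which satisfies x^3 = 3 t x + 3 sqrt(3) i
Y1, Y2, Y3 = -sp.I*x/sp.sqrt(3), 1, t        # the claimed point [-i x/sqrt(3) : 1 : t : 0]
F = sp.expand(Y1*Y2*Y3 + Y1**3 + Y2**3)      # the equation of 1D_0 inside {Y4 = 0}
assert sp.expand(F.subs(x**3, 3*t*x + 3*sp.sqrt(3)*sp.I)) == 0
```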
4) Let's compute various intersection numbers in \(\overline{\mathcal{M}}_{\mu}\). Recall that \((\overline{\mathcal{M}}_{\mu},\partial\mathcal{M}_{\mu})\) is a log compactification of \(\mathcal{M}_{\mu}\) with \(\partial\mathcal{M}_{\mu}\) very simple normal crossing, obtained via iterated blowing ups
\[\pi:\overline{\mathcal{M}}_{\mu}={}^{3}\widetilde{\mathcal{M}}_{\mu}\xrightarrow{ \pi_{3}}{}^{2}\widetilde{\mathcal{M}}_{\mu}\xrightarrow{\pi_{2}}{}^{1} \widetilde{\mathcal{M}}_{\mu}=\widetilde{\mathcal{M}}_{\mu}\xrightarrow{\pi_ {1}}B=\mathbb{P}^{2},\]
such that:
1. \(\pi_{1}\) contracts \(L_{2,\zeta,v},L_{3,\zeta^{2},v^{\prime}},v,v^{\prime}\in\{1,\zeta,\zeta^{2}\}\). For our convenience, let \[(m_{3},m_{4},m_{5},m_{6},m_{7},m_{8}):=\pi_{1}(L_{2,\zeta,1},L_{2,\zeta,\zeta},L_{2,\zeta,\zeta^{2}},L_{3,\zeta^{2},1},L_{3,\zeta^{2},\zeta},L_{3,\zeta^{2},\zeta^{2}}).\]
denote the \(6\) points where we blow up \(B=\mathbb{P}^{2}\).
2. \(\pi_{2}\) is the blowing up at \(m_{2}:=p_{3}=[0:0:1:0]\) with exceptional divisor \({}^{2}E_{1}\). \({}^{2}D_{0}\) is the proper transform of \({}^{1}D_{0}=\widetilde{\mathcal{M}}_{\mu}-\mathcal{M}_{\mu}\), and \({}^{2}D_{0}\) intersects \({}^{2}E_{1}\) at two distinct points \({}^{2}p_{3,1},{}^{2}p_{3,2}\);
3. \(\pi_{3}\) is the blowing up at \(m_{1}:={}^{2}p_{3,1}\) with exceptional divisor \({}^{3}E_{2}\). \({}^{3}D_{0},{}^{3}E_{1}\) are the proper transforms of \({}^{2}D_{0},{}^{2}E_{1}\) respectively.
4. For our convenience later on, **denote** (4.4.16) \[D_{0}:={}^{3}D_{0},\ \ D_{1}:={}^{3}E_{2},\ \ D_{2}:={}^{3}E_{1}.\ \ \ (\Rightarrow\pi_{3}(D_{1})=m_{1},\pi_{2}\circ\pi_{3}(D_{2})=m_{2})\]
Then \(\partial\mathcal{M}_{\mu}=D_{0}\cup D_{1}\cup D_{2}\), with \(D_{i}\cdot D_{j}=1,\forall i\neq j\).
Let \({}^{2}L_{j,u,v}\) be the proper transform of \(L_{j,u,v}\) under \(\pi_{2}\), and \({}^{3}L_{j,u,v}\) be the proper transform of \({}^{2}L_{j,u,v}\) under \(\pi_{3}\). By the previous discussion, we have
\[D_{0}\cdot{}^{3}L_{j,u,v}={}^{2}D_{0}\cdot{}^{2}L_{j,u,v}={}^{1}D_{0}\cdot L_ {j,u,v}=1,\ \ \ \forall j=2,3,\forall u,v\in\{1,\zeta,\zeta^{2}\}. \tag{4.4.17}\]
and \(D_{i}\cdot{}^{3}L_{j,u,v}=0,\forall i=1,2,j=2,3,u,v\in\{1,\zeta,\zeta^{2}\}\). For our convenience, denote
\[(D_{3},D_{4},D_{5},D_{6},D_{7},D_{8}):=({}^{3}L_{2,\zeta,1},{}^{3}L_{2,\zeta, \zeta},{}^{3}L_{2,\zeta,\zeta^{2}},{}^{3}L_{3,\zeta^{2},1},{}^{3}L_{3,\zeta^{2 },\zeta},{}^{3}L_{3,\zeta^{2},\zeta^{2}}). \tag{4.4.18}\]
Let \(H\subset\mathbb{P}^{2}\) be a general line, and \(\widetilde{H}\) be the proper transform of \(H\) under \(\pi:\overline{\mathcal{M}}_{\mu}\to\mathbb{P}^{2}\). By the blowing up formula for Picard groups, we have
\[\operatorname{CH}^{1}(\overline{\mathcal{M}}_{\mu})=\operatorname{Pic}( \overline{\mathcal{M}}_{\mu})\cong\mathbb{Z}[\widetilde{H}]\oplus\oplus_{i=1} ^{8}\mathbb{Z}[D_{i}].\]
As \(H\) is general, we have \(\widetilde{H}=\pi^{*}H\), and hence
\[\widetilde{H}^{2}=H^{2}=1,\ \ \ \widetilde{H}\cdot D_{i}=0,\ \ \ \forall 1\leq i\leq 8. \tag{4.4.19}\]
As \(D_{j}=\pi^{-1}(m_{j})\) for all \(3\leq j\leq 8\), and \(D_{j}\) is disjoint from \(D_{1},D_{2}\), we see that
\[D_{i}\cdot D_{j}=0,\ \ \forall 1\leq i<j\leq 8,j\geq 3,\ \ \ D_{j}^{2}=\pi_{1}^{-1}(m_{j})\cdot\pi_{1}^{-1}(m_{j})=-1,\ \ \forall 3\leq j\leq 8. \tag{4.4.20}\]
By above, \(D_{1}\cdot D_{2}=1\), \(D_{1}^{2}={}^{3}E_{2}\cdot{}^{3}E_{2}=-1\), and \({}^{2}E_{1}\cdot{}^{2}E_{1}=-1\). As \(\pi_{3}^{*}(^{2}E_{1})\cdot{}^{3}E_{2}=0\), we obtain
\[\pi_{3}^{*}(^{2}E_{1})={}^{3}E_{1}+{}^{3}E_{2}=D_{2}+D_{1}\ \ \Rightarrow\ \ -1={}^{2}E_{1}\cdot{}^{2}E_{1}=(D_{1}+D_{2})^{2}=D_{2}^{2}+1 \Rightarrow D_{2}^{2}=-2.\]
In summary, relative to the \(\mathbb{Z}\)-basis \(([\widetilde{H}],[D_{1}],\cdots,[D_{8}])\), the intersection form on \(\operatorname{Pic}(\overline{\mathcal{M}}_{\mu})\) is
\[I_{1}\oplus\left(\begin{array}{cc}-1&1\\ 1&-2\end{array}\right)\oplus-I_{6}. \tag{4.4.21}\]
Now, we compute \([D_{0}]\in\operatorname{Pic}(\overline{\mathcal{M}}_{\mu})\). As \(D_{0}\cdot D_{i}=1\) for \(1\leq i\leq 8\), it suffices to compute \(D_{0}^{2}\).
i) Let \(i:\widetilde{\mathcal{M}}_{\mu}\hookrightarrow\mathbb{P}^{3}_{[Y_{1}:Y_{2}:Y_{ 3}:Y_{4}]}\) be the inclusion. Denote \(H_{Y_{4}}:=\{Y_{4}=0\}\subset\mathbb{P}^{3}_{[Y_{1}:Y_{2}:Y_{3}:Y_{4}]}\). Recall that \({}^{1}D_{0}=H_{Y_{4}}\cap\widetilde{\mathcal{M}}_{\mu}=i^{-1}(H_{Y_{4}})\) is the scheme theoretic inverse image. By [76, Section 0B0I], then
\[i^{*}[H_{Y_{4}}]=[i^{-1}(H_{Y_{4}})]=[{}^{1}D_{0}]\in\operatorname{CH}^{1}( \widetilde{\mathcal{M}}_{\mu}).\]
By the _projection formula for Chow cycle classes_ ([76, Lemma 0B2X]), we have
\[{}^{1}D_{0}\cdot{}^{1}D_{0}=i_{*}(i^{*}[H_{Y_{4}}]\cdot_{\widetilde{\mathcal{ M}}_{\mu}}[{}^{1}D_{0}])=[H_{Y_{4}}]\cdot_{\mathbb{P}^{3}}i_{*}[{}^{1}D_{0}]=3,\]
as \({}^{1}D_{0}\subset H_{Y_{4}}\) is a (nodal) cubic curve, hence linearly equivalent to \(3[\ell]\) in \(\mathbb{P}^{3}\), where \(\ell\subset\mathbb{P}^{3}\) is a general line. As \(\pi_{2}\circ\pi_{3}:\overline{\mathcal{M}}_{\mu}\to\widetilde{\mathcal{M}}_{\mu}\) contracts \(D_{1},D_{2}\), we have \((\pi_{2}\circ\pi_{3})^{*1}D_{0}=D_{0}+aD_{1}+bD_{2}\) for some \(a,b\in\mathbb{Z}\). As \(D_{1}\cdot(\pi_{2}\circ\pi_{3})^{*1}D_{0}=D_{2}\cdot(\pi_{2}\circ\pi_{3})^{*1}D _{0}=0\), we obtain
\[0=D_{1}\cdot(\pi_{2}\circ\pi_{3})^{*1}D_{0}=1-a+b,\ \ 0=D_{2}\cdot(\pi_{2}\circ\pi_{3})^{ *1}D_{0}=1+a-2b,\ \ \Rightarrow\ \ a=3,b=2.\]
Again by the projection formula for Chow cycles, we have
\[3={}^{1}D_{0}\cdot{}^{1}D_{0}=(\pi_{2}\circ\pi_{3})^{*}({}^{1}D_{0})\cdot(\pi _{2}\circ\pi_{3})^{*}({}^{1}D_{0})=(D_{0}+3D_{1}+2D_{2})^{2}=D_{0}^{2}+5 \Rightarrow D_{0}^{2}=-2.\]
ii) Now, suppose \([D_{0}]=c_{0}[\widetilde{H}]+\sum_{i=1}^{8}c_{i}[D_{i}]\), \(c_{i}\in\mathbb{Z}\) in \(\operatorname{Pic}(\overline{\mathcal{M}}_{\mu})=\operatorname{CH}^{1}( \overline{\mathcal{M}}_{\mu})\). Then,
\[D_{0}\cdot D_{i}=1\ \ \Rightarrow\ \ c_{i}=-1,\forall 3\leq i\leq 8,\ \ 1=-c_{1}+c_{2},1=c_{1}-2c_{2},\ \ \Rightarrow\ \ c_{1}=-3,c_{2}=-2.\]
As \(D_{0}\) and \(\widetilde{H}\) are distinct irreducible curves, we have \(c_{0}=D_{0}\cdot\widetilde{H}\geq 0\). Now,
\[-2=D_{0}^{2}=(c_{0}\widetilde{H}+\sum_{i=1}^{8}c_{i}D_{i})^{2}=c_{0}^{2}+(-3D_ {1}-2D_{2})^{2}-6=c_{0}^{2}-11\Rightarrow c_{0}=3.\]
That is,
\[[D_{0}]=3[\widetilde{H}]-3[D_{1}]-2[D_{2}]-\sum_{i=3}^{8}[D_{i}]\in \operatorname{Pic}(\overline{\mathcal{M}}_{\mu});\ \ \ D_{0}\cdot\widetilde{H}=3. \tag{4.4.22}\]
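The intersection-theoretic bookkeeping in (4.4.21) and (4.4.22) can be double-checked numerically; here is a minimal NumPy sketch (our addition):

```python
import numpy as np

# Basis order (Htilde, D1, ..., D8); M is the intersection form (4.4.21).
M = np.diag([1, -1, -2, -1, -1, -1, -1, -1, -1])
M[1, 2] = M[2, 1] = 1                                  # D1 . D2 = 1
d0 = np.array([3, -3, -2, -1, -1, -1, -1, -1, -1])     # coordinates of [D0] from (4.4.22)
assert d0 @ M @ d0 == -2                               # D0^2 = -2
assert d0 @ M[:, 0] == 3                               # D0 . Htilde = 3
assert all(d0 @ M[:, i] == 1 for i in range(1, 9))     # D0 . Di = 1 for 1 <= i <= 8
```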
5) Finally, we're ready to compute \(\operatorname{M}^{c}(\mathcal{M}_{\mu})\in\mathbf{DM}_{\operatorname{gm}}^{ \operatorname{eff}}(\mathbb{K})\).
Denote \(m_{i,j}:=D_{i}\cap D_{j}\cong\operatorname{Spec}\mathbb{K}=\operatorname{pt}\) for \(0\leq i<j\leq 2\). By Lemma 4.2, we have
\[\operatorname{M}^{c}(\mathcal{M}_{\mu})\cong[\operatorname{M}(m_{1,2})\oplus \operatorname{M}(m_{0,2})\oplus\operatorname{M}(m_{0,1})\xrightarrow{\partial^{(2 )}}\operatorname{M}(D_{0})\oplus\operatorname{M}(D_{1})\oplus\operatorname{M}(D _{2})\xrightarrow{\partial^{(1)}}\operatorname{M}(\overline{\mathcal{M}}_{\mu})], \tag{4.4.23}\]
where \(\operatorname{M}(\overline{\mathcal{M}}_{\mu})\) is placed in degree \(0\).
**Step 1**. We first identify \(\mathrm{M}(\overline{\mathcal{M}}_{\mu})\). By Theorem 4.1.(5), \(\mathrm{M}(\mathbb{P}^{2})\cong\mathbb{Z}\oplus\mathbb{Z}(1)[2]\oplus\mathbb{Z}(2)[4]\). Denote its direct summand \(\mathbb{Z}(1)[2]\) by \(\mathbb{Z}_{H}(1)[2]\) or \(\mathbf{L}_{H}\). By Theorem 4.1.(6),
\[\mathrm{M}(\overline{\mathcal{M}}_{\mu}) \cong \mathbb{Z}_{m_{1}}(1)[2]\oplus\mathrm{M}(^{2}\widetilde{\mathcal{ M}}_{\mu})\cong\mathbb{Z}_{m_{1}}(1)[2]\oplus\mathbb{Z}_{m_{2}}(1)[2]\oplus \mathrm{M}(^{1}\widetilde{\mathcal{M}}_{\mu})\] \[\cong \oplus_{i=1}^{8}\mathbb{Z}_{m_{i}}(1)[2]\oplus\mathrm{M}(\mathbb{ P}^{2})\cong\oplus_{i=1}^{8}\mathbb{Z}_{m_{i}}(1)[2]\oplus\mathbb{Z}_{H}(1)[2] \oplus\mathbb{Z}\oplus\mathbb{Z}(2)[4].\]
Here, \(\mathbb{Z}_{m_{i}}(1)[2]\cong\mathbb{Z}(1)[2]\). Denote \(\mathrm{Hom}(-,-):=\mathrm{Hom}_{\mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}}(\mathbb{K})}(-,-)\). By Lemma 4.18, we have
\[\mathrm{Hom}(\mathrm{M}(\overline{\mathcal{M}}_{\mu}),\mathbb{Z}( 1)[2]) = \mathrm{Hom}(\mathbb{Z}_{H}(1)[2],\mathbb{Z}(1)[2])\oplus_{i=1}^ {8}\mathrm{Hom}(\mathbb{Z}_{m_{i}}(1)[2],\mathbb{Z}(1)[2])\] \[= \mathbb{Z}[p_{H}]\oplus_{i=1}^{8}\mathbb{Z}[p_{m_{i}}],\]
where \(p_{H},p_{m_{i}}\) denote the projections to the direct summands \(\mathbb{Z}_{H}(1)[2],\mathbb{Z}_{m_{i}}(1)[2]\) respectively. On the other hand, by Theorem 4.1.(10), we have a canonical isomorphism
\[\mathbb{Z}[p_{H}]\oplus_{i=1}^{8}\mathbb{Z}[p_{m_{i}}]=\mathrm{Hom}(\mathrm{M}(\overline{\mathcal{M}}_{\mu}),\mathbb{Z}(1)[2])\cong\mathrm{CH}^{1}(\overline{\mathcal{M}}_{\mu})=\mathrm{Pic}(\overline{\mathcal{M}}_{\mu})=\mathbb{Z}[\widetilde{H}]\oplus_{i=1}^{8}\mathbb{Z}[D_{i}].\]
By Theorem 4.1.(6), we compute the base change as follows:
(i) First, in the isomorphism \(\mathrm{M}(\overline{\mathcal{M}}_{\mu})\cong\mathbb{Z}_{m_{1}}(1)[2]\oplus \mathrm{M}(^{2}\widetilde{\mathcal{M}}_{\mu})\): the projection \(p_{m_{1}}:\mathrm{M}(\overline{\mathcal{M}}_{\mu})\to\mathbb{Z}_{m_{1}}(1)[2]\) corresponds to the exceptional divisor \([p_{m_{1}}]\cong[\pi_{3}^{-1}(m_{1})]=[D_{1}]\); \(\pi_{3}\) induces the natural projection \(\mathrm{M}(\pi_{3}):\mathrm{M}(\overline{\mathcal{M}}_{\mu})\to\mathrm{M}(^{2 }\widetilde{\mathcal{M}}_{\mu})\), then
\[(-)\circ\mathrm{M}(\pi_{3})\cong\pi_{3}^{*}:\mathrm{Hom}(\mathrm{M}(^{2} \widetilde{\mathcal{M}}_{\mu}),\mathbb{Z}(1)[2])\cong\mathrm{Pic}(^{2} \widetilde{\mathcal{M}}_{\mu})\to\mathrm{Hom}(\mathrm{M}(\overline{ \mathcal{M}}_{\mu}),\mathbb{Z}(1)[2])\cong\mathrm{Pic}(\overline{\mathcal{M}}_ {\mu}).\]
(ii) Second, in the isomorphism \(\mathrm{M}(^{2}\widetilde{\mathcal{M}}_{\mu})\cong\mathbb{Z}_{m_{2}}(1)[2] \oplus\mathrm{M}(^{1}\widetilde{\mathcal{M}}_{\mu})\): let \({}^{2}p_{m_{2}}\) denote the projection \(\mathrm{M}(^{2}\widetilde{\mathcal{M}}_{\mu})\to\mathbb{Z}_{m_{2}}(1)[2]\), then as above, \([{}^{2}p_{m_{2}}]\cong[\pi_{2}^{-1}(m_{2})]=[{}^{2}E_{1}]\), and \((-)\circ\mathrm{M}(\pi_{2})\cong\pi_{2}^{*}\). Thus, \([p_{m_{2}}]=[{}^{2}p_{m_{2}}\circ\mathrm{M}(\pi_{3})]\cong\pi_{3}^{*}[{}^{2}p_{ m_{2}}]\cong\pi_{3}^{*}[{}^{2}E_{1}]=[D_{1}]+[D_{2}]\).
(iii) Now, similarly, \([p_{m_{i}}]\cong(\pi_{2}\circ\pi_{3})^{*}[\pi_{1}^{-1}(m_{i})]=[D_{i}],3\leq i \leq 8\), and \([p_{H}]\cong\pi^{*}[H]=[\widetilde{H}]\).
Altogether, we have \(([p_{H}],[p_{m_{1}}],[p_{m_{2}}],[p_{m_{3}}],\cdots,[p_{m_{8}}])=([\widetilde{H}],[D_{1}],[D_{1}]+[D_{2}],[D_{3}],\cdots,[D_{8}])\).
**Step 2**. Denote \(\mathbb{Z}_{i,j}:=\mathrm{M}(m_{i,j})\cong\mathbb{Z}\in\mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}}(\mathbb{K})\). As \(\mathrm{M}(\mathbb{P}^{1})\cong\mathrm{M}(\mathrm{pt})\oplus\mathrm{M}^{c}(\mathbb{A}^{1})\), we may write
\[\mathrm{M}(D_{i})=\mathbb{Z}_{i}\oplus\mathbf{L}_{i},\ \ 0\leq i\leq 8;\ \ \ \mathbb{Z}_{i}\cong\mathrm{M}(\mathrm{pt})=\mathbb{Z},\ \ \mathbf{L}_{i}\cong\mathrm{M}^{c}(\mathbb{A}^{1})=\mathbb{Z}(1)[2]\in\mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}}(\mathbb{K}). \tag{4.4.24}\]
Then by (4.1.6), \(\partial^{(2)}\) is given by:
\[\partial^{(2)}:\mathbb{Z}_{1,2\oplus}\mathbb{Z}_{0,2\oplus}\mathbb{Z}_{0,1}\to \mathbb{Z}_{0}\oplus\mathbb{Z}_{1}\oplus\mathbb{Z}_{2}\oplus\mathbf{L}_{0}\oplus \mathbf{L}_{1}\oplus\mathbf{L}_{2}:(a,b,c)\mapsto(-b-c,c-a,a+b,0,0,0). \tag{4.4.25}\]
**Step 3**. It remains to compute \(\partial^{(1)}=\delta_{1}\), equivalently, \(\iota_{i}:=\delta_{1}|_{D_{i}}:\mathrm{M}(D_{i})\to\mathrm{M}(\overline{ \mathcal{M}}_{\mu})\) induced by the inclusion \(D_{i}\hookrightarrow\overline{\mathcal{M}}_{\mu}\), for each \(0\leq i\leq 2\). Alternatively, we may compute for \(q\geq 0\):
\[\iota_{i}^{*}:\mathrm{CH}^{q}(\overline{\mathcal{M}}_{\mu})\cong\mathrm{Hom}_{\mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}}(\mathbb{K})}(\mathrm{M}(\overline{\mathcal{M}}_{\mu}),\mathbb{Z}(q)[2q])\to\mathrm{Hom}_{\mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}}(\mathbb{K})}(\mathrm{M}(D_{i}),\mathbb{Z}(q)[2q])\cong\mathrm{CH}^{q}(D_{i}).\]
Here, we use Theorem 4.1.(10). Now, the map \(\iota_{i}^{*}\) is simply the pullback of Chow cycles along \(D_{i}\hookrightarrow\overline{\mathcal{M}}_{\mu}\). For dimension reasons, \(\iota_{i}^{*}=0\) for \(q>1\). We are left with the cases \(q=0\) and \(q=1\).
If \(q=0\), we obtain an isomorphism \(\iota_{i}^{*}:\mathrm{CH}^{0}(\overline{\mathcal{M}}_{\mu})=\mathbb{Z}\xrightarrow{\cong}\mathrm{CH}^{0}(D_{i})=\mathbb{Z}\). By Lemma 4.18, this means that we get an isomorphism
\[\partial^{(1)}|_{\mathbb{Z}_{t}}=\iota_{i}|_{\mathbb{Z}_{t}}:\mathbb{Z}_{i}= \mathrm{M}(\mathrm{pt})\xrightarrow{\cong}\mathrm{M}(\mathrm{pt})=\mathbb{Z} \subset\mathrm{M}(\overline{\mathcal{M}}_{\mu}). \tag{4.4.26}\]
If \(q=1\), then \(\iota_{i}^{*}\) becomes the intersection product with \(D_{i}\):
\[\iota_{i}^{*}=D_{i}\cdot(-):\mathrm{CH}^{1}(\overline{\mathcal{M}}_{\mu})=\mathbb{Z}\cdot[\widetilde{H}]\oplus\oplus_{j=1}^{8}\mathbb{Z}\cdot[D_{j}]\to\mathrm{CH}^{1}(D_{i})=\mathbb{Z}\cdot[\mathrm{pt}].\]
Thus, by the previous computation, we have
\[\iota_{0}^{*}([p_{H}],[p_{m_{1}}],\cdots,[p_{m_{8}}])=(3,1,2,1,1,1,1,1,1);\] \[\iota_{1}^{*}([p_{H}],[p_{m_{1}}],\cdots,[p_{m_{8}}])=(0,-1,0,0,0,0,0,0,0);\] \[\iota_{2}^{*}([p_{H}],[p_{m_{1}}],\cdots,[p_{m_{8}}])=(0,1,-1,0,0,0,0,0,0).\]
Now, by Lemma 4.18, \(\iota_{i}|_{\mathbf{L}_{i}}:\mathbf{L}_{i}\to\mathbb{Z}_{H}(1)[2]\oplus\oplus_{j=1}^{8}\mathbb{Z}_{m_{j}}(1)[2]\subset\mathrm{M}(\overline{\mathcal{M}}_{\mu})\) is nothing but the transpose of \(\iota_{i}^{*}([p_{H}],[p_{m_{1}}],\cdots,[p_{m_{8}}])\). Thus, \(\partial^{(1)}|_{\oplus_{i=0}^{2}\mathbf{L}_{i}}\) is:
\[\partial^{(1)}=\left(\begin{array}{ccccccccc}3&1&2&1&1&1&1&1&1\\ 0&-1&0&0&0&0&0&0&0\\ 0&1&-1&0&0&0&0&0&0\end{array}\right)^{\mathrm{T}}:\mathbf{L}_{0}\oplus\mathbf{L}_{1}\oplus\mathbf{L}_{2}\to\mathbb{Z}_{H}(1)[2]\oplus\oplus_{j=1}^{8}\mathbb{Z}_{m_{j}}(1)[2]. \tag{4.4.27}\]
Notice that by Lemma 4.18, \(\mathrm{Aut}(\mathbb{Z}(q)[2q]^{\oplus r})\cong GL_{r}(\mathbb{Z})\). Then by a direct check, after a twist by automorphisms of \(\mathbf{L}_{0}\oplus\mathbf{L}_{1}\oplus\mathbf{L}_{2}\) and \(\mathbb{Z}_{H}(1)[2]\oplus\oplus_{j=1}^{8}\mathbb{Z}_{m_{i}}(1)[2]\), we have \(\partial^{(1)}|_{\mathbf{L}_{0}\oplus\mathbf{L}_{1}\oplus\mathbf{L}_{2}} \cong(I_{3},0)^{\mathrm{T}}\).
Altogether, by (4.4.25), (4.4.26), and (4.4.27), a direct computation now gives:
\[\mathrm{M}^{c}(\mathcal{M}_{\mu})\cong[\mathbb{Z}\xrightarrow{0}0\xrightarrow{ 0}\mathbf{L}^{\oplus 6}\oplus\mathbb{Z}(2)[4]]=\mathbf{L}^{0}[2]\oplus( \mathbf{L}^{1})^{\oplus 6}\oplus\mathbf{L}^{2}\in\mathbf{DM}_{\mathrm{gm}}^{ \mathrm{eff}}(\mathbb{K}). \tag{4.4.28}\]
Here, recall that \(\mathbf{L}^{a}=\mathbf{L}^{\otimes a}=\mathbb{Z}(a)[2a]\), for all \(a\geq 0\).
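As an independent consistency check (our addition; we only compare classes in the Grothendieck ring of varieties, writing \(\mathbf{L}\) for the class of the affine line), inclusion-exclusion over the boundary strata of \(\overline{\mathcal{M}}_{\mu}\) reproduces the same answer:

```python
import sympy as sp

L = sp.symbols('L')
Mbar = 1 + 9*L + L**2              # class of P^2 blown up at 8 points
D = 1 + L                          # each boundary curve D_0, D_1, D_2 is a P^1
M_mu = sp.expand(Mbar - 3*D + 3)   # subtract the three curves, add back the three corner points
assert sp.expand(M_mu - (1 + 6*L + L**2)) == 0   # matches L^0[2] + 6 L^1 + L^2 in (4.4.28)
```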
**Step 4**. For \(\mu=((1^{3}),(1^{3}),(1^{3}))\), we know that (see e.g. [36, §1.5.5, p.340]): (4.4.29) \[\mathbb{H}_{\mu}(-z,w)=z^{2}+6+w^{2}=:\sum\limits_{i,j}c_{ij}^{\mu}z^{i}w^{j}.\] So, \(c_{2,0}^{\mu}=c_{0,2}^{\mu}=1\), \(c_{00}^{\mu}=6\), and \(c_{ij}^{\mu}=0\) otherwise. It follows that \[\oplus_{i,j}(\mathbf{L}^{\frac{d_{\mu}+j-i}{2}}[i])^{\oplus c_{ij}^{\mu}}=\mathbf{L}^{0}[2]\oplus(\mathbf{L}^{1})^{\oplus 6}\oplus\mathbf{L}^{2}=\mathrm{M}^{c}(\mathcal{M}_{\mu})\in\mathbf{DM}_{\mathrm{gm}}^{\mathrm{eff}}(\mathbb{K}).\]
The result matches with the computation above, hence verifies Conjecture 4.20 for this example.
## Appendix A Proof of Proposition 1.16

We'll prove Proposition 1.16. We use the notation of Definitions 1.14 and 1.15. Let \(p\in\mathcal{W}(\beta)\), \(0\leq m\leq\ell\). Define a closed subvariety of \(\mathbb{A}^{m}\) and subsets of \([\ell]\):
(A.0.1) \[X^{m}_{p}(\beta):=\cap_{1\leq j\leq m}f_{j}^{-1}(Bp_{j}B)\subset\mathbb{A}^{m}, \quad X^{0}_{p}(\beta)=\text{pt}.\]
\[U^{m}_{p}:=U_{p}\cap[m],\ \ S^{m}_{p}:=S_{p}\cap[m],\ \ D^{m}_{p}:=D_{p}\cap[m]. \ \ \Rightarrow\ \ [m]=U^{m}_{p}\sqcup D^{m}_{p}\sqcup S^{m}_{p}.\]
**Lemma A.1**.: _We have \(p(X(\beta))\subset\mathcal{W}(\beta)\). Moreover, for any \(p\in\mathcal{W}(\beta)\) and \(1\leq m\leq\ell\),_
\[X^{m}_{p}(\beta)\cong\left\{\begin{array}{ll}\mathbb{K}_{\epsilon^{\prime}_{m}}\times X^{m-1}_{p}(\beta)&\text{if $p_{m}=\mathrm{s}_{i_{m}}p_{m-1}>p_{m-1}$ \ \ (go-up);}\\ X^{m-1}_{p}(\beta)&\text{if $p_{m}=\mathrm{s}_{i_{m}}p_{m-1}<p_{m-1}$ \ \ (go-down);}\\ \mathbb{K}^{\times}_{\epsilon^{\prime}_{m}}\times X^{m-1}_{p}(\beta)&\text{if $\mathrm{s}_{i_{m}}p_{m-1}<p_{m-1}$ and $p_{m}=p_{m-1}$ \ \ (stay).}\end{array}\right.\]
Proof.: For any \(\widetilde{\epsilon}\in\mathbb{A}^{\ell}\), denote \(p:=p(\widetilde{\epsilon})\in W^{\ell+1}\).
(1) If \(\mathrm{s}_{i_{m}}p_{m-1}>p_{m-1}\), that is, \(\ell(\mathrm{s}_{i_{m}}p_{m-1})=\ell(p_{m-1})+1\). By Proposition 1.10, we have \[[\mathrm{B}_{i_{m}}(\epsilon_{m})\cdots\mathrm{B}_{i_{1}}(\epsilon_{1})]=[\mathrm{B}_{i_{m}}(\epsilon_{m})]\circ[\mathrm{B}_{i_{m-1}}(\epsilon_{m-1})\cdots\mathrm{B}_{i_{1}}(\epsilon_{1})]\in\mathrm{BD}_{n};\ \Rightarrow\ p_{m}=\mathrm{s}_{i_{m}}p_{m-1}\ (\text{go-up}).\]
Let's define a parameter \(\epsilon^{\prime}_{m}\in\mathbb{K}\): By the unique decomposition
\[Bp_{m-1}B=Bp_{m-1}U^{-}_{p_{m-1}}:\mathrm{B}_{i_{m-1}}(\epsilon_{m-1})\cdots \mathrm{B}_{i_{1}}(\epsilon_{1})=b_{m-1}(\epsilon_{m-1},\cdots,\epsilon_{1})p _{m-1}L^{-}_{p_{m-1}}(\epsilon_{m-1},\cdots,\epsilon_{1}),\]
we get \(b_{m-1}\in B\). Then \(\epsilon^{\prime}_{m}\in\mathbb{K}\) and \(b_{m}\in B\) are uniquely determined by the equation
(A.0.2) \[[\mathrm{B}_{i_{m}}(\epsilon_{m})]\circ[b_{m-1}]=[b_{m}]\circ[\mathrm{B}_{i_{ m}}(\epsilon^{\prime}_{m})]\in\underline{\mathrm{FBD}}_{n}.\]
Or, by Proposition 1.10: \([\mathrm{B}_{i_{m}}(\epsilon_{m})\cdots\mathrm{B}_{i_{1}}(\epsilon_{1})]=[b_{m }]\circ[\mathrm{B}_{i_{m}}(\epsilon^{\prime}_{m})]\circ[p_{m-1}]\circ[L^{-}_{p _{m-1}}]\in\mathrm{BD}_{n}\). In fact,
(A.0.3) \[\epsilon^{\prime}_{m}=(b_{m-1})^{-1}_{i_{m},i_{m}}(b_{m-1})_{i_{m}+1,i_{m}+1} \epsilon_{m}+(b_{m-1})^{-1}_{i_{m},i_{m}}(b_{m-1})_{i_{m},i_{m}+1}.\]
This shows that \(X^{m}_{p}(\beta)\cong\mathbb{K}_{\epsilon^{\prime}_{m}}\times X^{m-1}_{p}(\beta)\).
(2) If \(p^{\prime}_{m-1}:=\mathrm{s}_{i_{m}}p_{m-1}<p_{m-1}\), so that \(\ell(p_{m-1}=\mathrm{s}_{i_{m}}p^{\prime}_{m-1})=\ell(p^{\prime}_{m-1})+1\). By Proposition 1.10,
\[B\mathrm{s}_{i_{m}}p^{\prime}_{m-1}B=U^{-}_{\mathrm{s}_{i_{m}}}\mathrm{s}_{i_{ m}}Bp^{\prime}_{m-1}U^{-}_{p^{\prime}_{m-1}}:\mathrm{B}_{i_{m-1}}(\epsilon_{i_{m-1}}) \cdots\mathrm{B}_{i_{1}}(\epsilon_{1})=\mathrm{H}_{i_{m}}(c_{m-1})\mathrm{s}_{ i_{m}}b^{\prime}_{m-1}p^{\prime}_{m-1}L^{-}_{p^{\prime}_{m-1}}.\]
So, \([\mathrm{B}_{i_{m}}(\epsilon_{m})\cdots\mathrm{B}_{i_{1}}(\epsilon_{1})]=[ \mathrm{s}_{i_{m}}\mathrm{H}_{i_{m}}(\epsilon_{m}+c_{m-1})\mathrm{s}_{i_{m}}b^{ \prime}_{m-1}p^{\prime}_{m-1}L^{-}_{p^{\prime}_{m-1}}]\in\mathrm{BD}_{n}\). Define \(\epsilon^{\prime}_{m}\in\mathbb{K}\) by
(A.0.4) \[B=TU:b^{\prime}_{m-1}=D^{\prime}_{m-1}u^{\prime}_{m-1};\quad\mathrm{s}_{i_{m}} \mathrm{H}_{i_{m}}(\epsilon_{m}+c_{m-1})\mathrm{s}_{i_{m}}D^{\prime}_{m-1}=D^{ \prime}_{m-1}\mathrm{s}_{i_{m}}\mathrm{H}_{i_{m}}(\epsilon^{\prime}_{m})\mathrm{s }_{i_{m}}.\]
More concretely, write \(D^{\prime}_{m-1}=\mathrm{Diag}((D^{\prime}_{m-1})_{1},\cdots,(D^{\prime}_{m-1}) _{n})\in T\), then we have
(A.0.5) \[\epsilon^{\prime}_{m}=(D^{\prime}_{m-1})^{-1}_{i_{m}+1}(\epsilon_{m}+c_{m-1})(D^ {\prime}_{m-1})_{i_{m}}.\]
By the Bruhat cell decomposition for \(GL(2,\mathbb{K})\), we have:
(A.0.6) \[\mathrm{s}_{i_{m}}\mathrm{H}_{i_{m}}(\epsilon^{\prime}_{m})\mathrm{s}_{i_{m}}= \left\{\begin{array}{ll}I_{n}&\epsilon^{\prime}_{m}=0;\\ a^{\prime}_{m}\mathrm{s}_{i_{m}}d^{\prime}_{m}\in Bs_{i_{m}}U^{-}_{i_{m}},\ \epsilon^{\prime}_{m}\neq 0.\end{array}\right.\]
according to the following computation:
\[\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right)\left(\begin{array}{cc}1&\epsilon^{\prime}_{m}\\ 0&1\end{array}\right)\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right)=\left(\begin{array}{cc}-(\epsilon^{\prime}_{m})^{-1}&0 \\ 0&\epsilon^{\prime}_{m}\end{array}\right)\left(\begin{array}{cc}1&-\epsilon^{ \prime}_{m}\\ 0&1\end{array}\right)\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right)\left(\begin{array}{cc}1&(\epsilon^{\prime}_{m})^{-1}\\ 0&1\end{array}\right).\]
(2.1) If \(\epsilon^{\prime}_{m}=0\), then \[[\mathrm{B}_{i_{m}}(\epsilon_{m})\cdots\mathrm{B}_{i_{1}}(\epsilon_{1})]=[b^{\prime}_{m-1}p^{\prime}_{m-1}L^{-}_{p^{\prime}_{m-1}}]\Rightarrow p_{m}=p^{\prime}_{m-1}=\mathrm{s}_{i_{m}}p_{m-1}\ (\text{go-down}).\] This shows that \(X^{m}_{p}(\beta)\cong X^{m-1}_{p}(\beta)\).
(2.2) If \(\epsilon^{\prime}_{m}\neq 0\), then by Proposition 1.10, we have: \[[\mathrm{B}_{i_{m}}(\epsilon_{m})\cdots\mathrm{B}_{i_{1}}(\epsilon_{1})]=[D^{\prime}_{m-1}a^{\prime}_{m}\mathrm{s}_{i_{m}}d^{\prime}_{m}u^{\prime}_{m-1}p^{\prime}_{m-1}L^{-}_{p^{\prime}_{m-1}}]=[D^{\prime}_{m-1}a^{\prime}_{m}\mathrm{s}_{i_{m}}d^{\prime}_{m}]\circ[u^{\prime}_{m-1}p^{\prime}_{m-1}L^{-}_{p^{\prime}_{m-1}}]\in\mathrm{BD}_{n},\] and \(\mathrm{B}_{i_{m}}(\epsilon_{m})\cdots\mathrm{B}_{i_{1}}(\epsilon_{1})\in B\mathrm{s}_{i_{m}}p^{\prime}_{m-1}B\). So, \(p_{m}=p_{m-1}\) (stay), and \(X^{m}_{p}(\beta)\cong\mathbb{K}^{\times}_{\epsilon^{\prime}_{m}}\times X^{m-1}_{p}(\beta)\). Now, if \(\overline{\epsilon}\in X(\beta)\), then \(p_{0}=p_{\ell}=\mathsf{id}\) by definition. So, \(p\in\mathcal{W}(\beta)\). Done.
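For the reader's convenience, the \(2\times 2\) matrix identity behind (A.0.6) can be confirmed by a short SymPy sketch (our addition):

```python
import sympy as sp

e = sp.symbols('epsilon', nonzero=True)      # plays the role of epsilon'_m != 0
s = sp.Matrix([[0, 1], [1, 0]])
def H(c):                                    # the unipotent matrix H(c)
    return sp.Matrix([[1, c], [0, 1]])
lhs = s * H(e) * s
rhs = sp.Matrix([[-1/e, 0], [0, e]]) * H(-e) * s * H(1/e)
assert sp.simplify(lhs - rhs) == sp.zeros(2, 2)
```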
Now, we fulfill our promise:
Proof of Proposition 1.16.: (0). The action of \(b\in B\) on \(\overline{\epsilon}\in X(\beta)\) is uniquely determined by
\[[\widetilde{b}_{m}]\circ[\mathrm{B}_{i_{m}}(\widehat{\epsilon}_{m})]\circ \cdots\circ[\mathrm{B}_{i_{1}}(\widehat{\epsilon}_{1})]=[\mathrm{B}_{i_{m}}( \epsilon_{m})]\circ\cdots\circ[\mathrm{B}_{i_{1}}(\epsilon_{1})]\circ[b^{-1}] \in\underline{\mathrm{FBD}}_{n}.\ (\forall 1\leq m\leq\ell)\]
Applying \(g_{-}\) (Definition 1.6), we see that \(X_{p}(\beta)\) is \(B\)-invariant. The rest follows from Lemma A.1.
(1). By above, \(\widetilde{b}_{m}\in B\) and \(\widehat{\epsilon}_{m}\in\mathbb{K}\) are determined inductively by
(A.0.7) \[[\widetilde{b}_{m}]\circ[\mathrm{B}_{i_{m}}(\widehat{\epsilon}_{m})]=[\mathrm{ B}_{i_{m}}(\epsilon_{m})]\circ[\widetilde{b}_{m-1}]\in\underline{\mathrm{FBD}}_{n}.\]
By diagram calculus (Lemma 1.7), \(\widetilde{b}_{m}\in TU^{+}_{\mathrm{s}_{i_{m}}}\), and by induction, \(\widetilde{b}_{m}\in U^{+}_{\mathrm{s}_{i_{m}}}\) if \(b=u\in U\).
(1.a). If \(b=u\in U\), then \(\widetilde{b}_{m}=\widetilde{u}_{m}\in U^{+}_{\mathrm{s}_{i_{m}}}\). Again by diagram calculus, we have \(U^{+}_{\mathrm{s}_{i_{m}}}X=XU^{+}_{\mathrm{s}_{i_{m}}}\) for \(X\in T\), \(X=\mathrm{s}_{i_{m}}\), or \(X=\mathrm{H}_{i_{m}}(c)\). In particular, we can define \(\widetilde{u}^{\prime}_{m}\in U\) by
\[\widetilde{u}^{-1}_{m}D^{\prime}_{m-1}\mathrm{s}_{i_{m}}\mathrm{H}_{i_{m}}( \epsilon^{\prime}_{m})\mathrm{s}_{i_{m}}=D^{\prime}_{m-1}\mathrm{s}_{i_{m}} \mathrm{H}_{i_{m}}(\epsilon^{\prime}_{m})\mathrm{s}_{i_{m}}\widetilde{u}^{ \prime-1}_{m}.\]
If \(m\in S_{p}\), so \(p_{m}=p_{m-1}=s_{i_{m}}p^{\prime}_{m-1}\) and \(\epsilon^{\prime}_{m}\neq 0\). By the decomposition (1.3.7), we compute
\[[\mathrm{B}_{i_{m}}(\widetilde{\epsilon}_{m})\cdots\mathrm{B}_{i_{1}}(\widetilde{\epsilon}_{1})]=[\widetilde{u}^{-1}_{m}\mathrm{B}_{i_{m}}(\epsilon_{m})\cdots\mathrm{B}_{i_{1}}(\epsilon_{1})u^{-1}]\] \[=[\widetilde{u}^{-1}_{m}D^{\prime}_{m-1}\mathrm{s}_{i_{m}}\mathrm{H}_{i_{m}}(\epsilon^{\prime}_{m})\mathrm{s}_{i_{m}}u^{\prime}_{m-1}p^{\prime}_{m-1}L^{-}_{p^{\prime}_{m-1}}u^{-1}]\quad(\text{by (2.2) in the proof of Lemma A.1})\] \[=[D^{\prime}_{m-1}\mathrm{s}_{i_{m}}\mathrm{H}_{i_{m}}(\epsilon^{\prime}_{m})\mathrm{s}_{i_{m}}\widetilde{u}^{\prime-1}_{m}u^{\prime}_{m-1}(p^{\prime}_{m-1}L^{+}_{p^{\prime}_{m-1}}(L^{-}_{p^{\prime}_{m-1}}u^{-1})p^{\prime-1}_{m-1})p^{\prime}_{m-1}L^{-}_{p^{\prime}_{m-1}}(L^{-}_{p^{\prime}_{m-1}}u^{-1})]\] \[=[\widetilde{D}^{\prime}_{m-1}\mathrm{s}_{i_{m}}\mathrm{H}_{i_{m}}(\widetilde{\epsilon}^{\prime}_{m})\mathrm{s}_{i_{m}}\widetilde{u}^{\prime}_{m-1}p^{\prime}_{m-1}\widetilde{L}^{-}_{p^{\prime}_{m-1}}]\in\mathrm{BD}_{n},\]
where the last equality gives: \(\widetilde{L}^{-}_{p^{\prime}_{m-1}}\in U^{-}_{p^{\prime}_{m-1}}\), \(\widetilde{u}^{\prime}_{m-1}\in U\), \(\widetilde{D}^{\prime}_{m-1}=D^{\prime}_{m-1}\), and \(\widetilde{\epsilon}^{\prime}_{m}=\epsilon^{\prime}_{m}\), as desired.

(1.b). By diagram calculus (Lemma 1.7), for any \(\lambda=\mathrm{Diag}(\lambda_{1},\cdots,\lambda_{n})\in T\), we have
\[[\mathrm{B}_{j}(\epsilon)]\circ[\lambda^{-1}]=[\mathrm{s}_{j}\mathrm{H}_{j}( \epsilon)]\circ[\lambda^{-1}]=[(\lambda^{\mathrm{s}_{j}})^{-1}]\circ[\mathrm{s }_{j}\mathrm{H}_{j}(\lambda_{j}\lambda_{j+1}^{-1}\epsilon)]=[(\lambda^{ \mathrm{s}_{j}})^{-1}]\circ[\mathrm{B}_{j}(\lambda_{j}\lambda_{j+1}^{-1} \epsilon)].\]
Then by (A.0.7) and induction, we obtain
\[\widehat{\epsilon}_{m}=(t^{\mathrm{s}_{i_{m-1}}\cdots\mathrm{s}_{i_{1}}})_{i_{m}} (t^{\mathrm{s}_{i_{m-1}}\cdots\mathrm{s}_{i_{1}}})^{-1}_{i_{m}+1}\epsilon_{m}=t _{(\mathrm{s}_{i_{m-1}}\cdots\mathrm{s}_{i_{1}})^{-1}(i_{m})}t^{-1}_{( \mathrm{s}_{i_{m-1}}\cdots\mathrm{s}_{i_{1}})^{-1}(i_{m}+1)}\epsilon_{m},\quad \widetilde{b}_{m}=(t^{\mathrm{s}_{i_{m}}\cdots\mathrm{s}_{i_{1}}})^{-1}.\]
(1.b.1). If \(m\in U_{p}\), then \(p_{m}={\rm s}_{i_{m}}p_{m-1}\). By (1) in the proof of Lemma A.1,
\[\widehat{b}_{m-1}p_{m-1}\widehat{L}_{p_{m-1}}^{-}={\rm B}_{i_{m-1}}(\widehat{ \epsilon}_{m-1})\dots{\rm B}_{i_{1}}(\widehat{\epsilon}_{1})=\widetilde{b}_{m- 1}^{-1}{\rm B}_{i_{m-1}}(\epsilon_{m-1})\dots{\rm B}_{i_{1}}(\epsilon_{1})t^{ -1}=\widetilde{b}_{m-1}^{-1}b_{m-1}p_{m-1}L_{p_{m-1}}^{-}t^{-1}.\]
Then a simple computation gives \(\widehat{b}_{m-1}=t^{{\rm s}_{i_{m-1}}\cdots{\rm s}_{i_{1}}}\,b_{m-1}(t^{p_{m- 1}})^{-1}\), \(\widehat{L}_{p_{m-1}}^{-}=tL_{p_{m-1}}^{-}t^{-1}\). So,
\[[{\rm B}_{i_{m}}(\widehat{\epsilon}_{m})]\circ[\widehat{b}_{m-1} ]=[t^{{\rm s}_{i_{m}}\cdots{\rm s}_{i_{1}}}{\rm B}_{i_{m}}(\epsilon_{m})(t^{{ \rm s}_{m-1}\cdots{\rm s}_{i_{1}}})^{-1}]\circ[t^{{\rm s}_{i_{m-1}}\cdots{\rm s }_{i_{1}}}b_{m-1}(t^{p_{m-1}})^{-1}]\] \[= [t^{{\rm s}_{i_{m}}\cdots{\rm s}_{i_{1}}}]\circ[b_{m}]\circ[{\rm B }_{i_{m}}(\epsilon_{m}^{\prime})]\circ[(t^{p_{m-1}})^{-1}]=[\widehat{b}_{m}] \circ[{\rm B}_{i_{m}}(\widetilde{\epsilon}_{m}^{\prime})]\in\underline{{\rm FBD }}_{n}.\]
This implies that
\[\widetilde{\epsilon}_{m}^{\prime}=(t^{p_{m-1}})_{i_{m}}(t^{p_{m-1}})_{i_{m}+1} ^{-1}\epsilon_{m}^{\prime},\quad\widehat{b}_{m}=t^{{\rm s}_{i_{m}}\cdots{\rm s }_{i_{1}}}b_{m}(t^{p_{m}})^{-1}.\]
(1.b.2). If \(m\in S_{p}\), so \(p_{m}=p_{m-1}=s_{i_{m}}p_{m-1}^{\prime}\) and \(\epsilon_{m}^{\prime}\neq 0\). By (2.2) in the proof of Lemma A.1,
\[[{\rm B}_{i_{m}}(\widehat{\epsilon}_{m})\dots{\rm B}_{i_{1}}( \widehat{\epsilon}_{1})]=[\widetilde{b}_{m}^{-1}{\rm B}_{i_{m}}(\epsilon_{m}) \dots{\rm B}_{i_{1}}(\epsilon_{1})t^{-1}]\] \[= [\widetilde{b}_{m}^{-1}D_{m-1}^{\prime}{\rm s}_{i_{m}}{\rm H}_{i_{ m}}(\epsilon_{m}^{\prime}){\rm s}_{i_{m}}u_{m-1}^{\prime}p_{m-1}^{\prime}L_{p_{m-1} ^{\prime}}^{-}t^{-1}]=[\widetilde{D}_{m-1}^{\prime}{\rm s}_{i_{m}}{\rm H}_{i_{ m}}(\widetilde{\epsilon}_{m}^{\prime}){\rm s}_{i_{m}}\widetilde{u}_{m-1}^{ \prime}p_{m-1}^{\prime}\widetilde{L}_{p_{m-1}^{\prime}}^{-}]\in{\rm BD}_{n}.\]
Then a simple computation gives
\[\widetilde{D}_{m-1}^{\prime}=t^{{\rm s}_{i_{m}}\cdots{\rm s}_{i_{ 1}}}D_{m-1}^{\prime}(t^{-1})^{p_{m-1}^{\prime}},\quad\widetilde{\epsilon}_{m}^ {\prime}=(t^{p_{m-1}})_{i_{m}}(t^{p_{m-1}})_{i_{m}+1}^{-1}\epsilon_{m}^{\prime},\] \[\widetilde{u}_{m-1}^{\prime}=t^{p_{m-1}^{\prime}}u_{m-1}^{\prime} (t^{p_{m-1}^{\prime}})^{-1},\quad\widetilde{L}_{p_{m-1}^{\prime}}^{-}=tL_{p_{ m-1}^{\prime}}^{-}t^{-1}.\]
Altogether, we have proved (1).
(2). By the decomposition \(Bp_{m}B=Bp_{m}U_{p_{m}}^{-}:{\rm B}_{i_{m}}(\epsilon_{m})\dots{\rm B}_{i_{1}}( \epsilon_{1})=b_{m}p_{m}L_{p_{m}}^{-}\), we define
(A.0.8) \[\mu\mathfrak{m}\mathfrak{on}_{m}:X_{p}^{m}(\beta)\to T:(\epsilon_{m},\cdots, \epsilon_{1})\mapsto D(b_{m})\in T.\]
In particular, \(\mu\mathfrak{mon}=\mu\mathfrak{mon}_{\ell}\), and \(\mu\mathfrak{mon}_{0}=I_{n}\). We prove (1.4.11) by induction on \(m\).
Case 1. If \(\ell({\rm s}_{i_{m}}p_{m-1})=\ell(p_{m-1})+1\), i.e. \(m\in U_{p}\) and \(p_{m}={\rm s}_{i_{m}}p_{m-1}\). By (2) in Lemma A.1,
\[{\rm B}_{i_{m-1}}(\epsilon_{m-1})\dots{\rm B}_{i_{1}}(\epsilon_{1})=b_{m-1}p_{ m-1}L_{p_{m-1}}^{-};\quad[{\rm B}_{i_{m}}(\epsilon_{m})]\circ[b_{m-1}]=[b_{m}] \circ[{\rm B}_{i_{m}}(\epsilon_{m}^{\prime})]\in\underline{{\rm FBD}}_{n}.\]
It follows that
(A.0.9) \[\mu\mathfrak{m}\mathfrak{on}_{m}(\epsilon_{m},\cdots,\epsilon_{1})=D(b_{m})=D(b_ {m-1})^{{\rm s}_{i_{m}}}=(\mu\mathfrak{m}\mathfrak{on}_{m-1}(\epsilon_{m-1}, \cdots,\epsilon_{1}))^{{\rm s}_{i_{m}}}\in T.\]
Case 2. If \(p_{m-1}=s_{i_{m}}p_{m-1}^{\prime}\) with \(\ell(p_{m-1})=\ell(p_{m-1}^{\prime})+1\). We use (2) in Lemma A.1. So,
\[{\rm B}_{i_{m-1}}(\epsilon_{m-1})\dots{\rm B}_{i_{1}}(\epsilon_{1})={\rm H}_{i_{ m}}(c_{m-1}){\rm s}_{i_{m}}b_{m-1}^{\prime}p_{m-1}^{\prime}L_{p_{m-1}^{\prime}}^{-}=b_{m-1 }p_{m-1}L_{p_{m-1}}^{-}.\]
Observe that \(D(b_{m-1})=(D(b_{m-1}^{\prime}))^{{\rm s}_{i_{m}}}=(D_{m-1}^{\prime})^{{\rm s}_{i_ {m}}}\in T\). Again, by (2) in Lemma A.1,
\[{\rm B}_{i_{m}}(\epsilon_{m})\dots{\rm B}_{i_{1}}(\epsilon_{1})=D_{m-1}^{\prime}{ \rm s}_{i_{m}}{\rm H}_{i_{m}}(\epsilon_{m}^{\prime}){\rm s}_{i_{m}}u_{m-1}^{ \prime}p_{m-1}^{\prime}L_{p_{m-1}^{\prime}}^{-}.\]
Case 2.1. If \(m\in D_{p}\), then \(p_{m}=p_{m-1}^{\prime}\) and \(\epsilon_{m}^{\prime}=0\). Thus,
\[{\rm B}_{i_{m}}(\epsilon_{m})\dots{\rm B}_{i_{1}}(\epsilon_{1})=b_{m-1}^{\prime}p_ {m-1}^{\prime}L_{p_{m-1}^{\prime}}^{-}=b_{m}p_{m}L_{p_{m}}^{-},\]
with \(b_{m}=b_{m-1}^{\prime}\) and \(L_{p_{m}}^{-}=L_{p_{m-1}^{\prime}}^{-}\). By above, it follows that
(A.0.10) \[\mu\mathfrak{mon}_{m}(\epsilon_{m},\cdots,\epsilon_{1})=D(b_{m})=(D(b_{m-1}))^{\mathrm{s}_{i_{m}}}=(\mu\mathfrak{mon}_{m-1}(\epsilon_{m-1},\cdots,\epsilon_{1}))^{\mathrm{s}_{i_{m}}}\in T.\]
Case 2.2. If \(m\in S_{p}\), then \(p_{m}=p_{m-1}\) and \(\epsilon_{m}^{\prime}\neq 0\). By (2.2) in Lemma A.1,
\[\mathrm{B}_{i_{m}}(\epsilon_{m})\cdots\mathrm{B}_{i_{1}}(\epsilon_{1})=D_{m-1 }^{\prime}a_{m}^{\prime}\mathrm{s}_{i_{m}}d_{m}^{\prime}u_{m-1}^{\prime}p_{m-1 }^{\prime}L_{p_{m-1}^{\prime}}^{-}=b_{m}p_{m}L_{p_{m}}^{-},\]
where \(a_{m}^{\prime},d_{m}^{\prime}\) are given by (A.0.6). So \(D(b_{m})=D(a_{m}^{\prime})D_{m-1}^{\prime}=D(a_{m}^{\prime})D(b_{m-1})^{ \mathrm{s}_{im}}\). That is,
(A.0.11) \[\mu\mathfrak{mon}_{m}(\epsilon_{m},\cdots,\epsilon_{1})=\mathrm{K}_{i_{m}}(-\epsilon_{m}^{\prime-1})\mathrm{K}_{i_{m}+1}(\epsilon_{m}^{\prime})\,\mu\mathfrak{mon}_{m-1}(\epsilon_{m-1},\cdots,\epsilon_{1})^{\mathrm{s}_{i_{m}}}\in T.\]
Now, (1.4.11) follows by induction from (A.0.9), (A.0.10), and (A.0.11). This proves (2).
## Appendix B Modified Macdonald symmetric functions
In this section, we give a quick review of modified/transformed Macdonald symmetric functions (Definition B.6); in particular, we fix notation. The main references are [50, 26].
### Partitions
Recall that \(\mathcal{P}_{n}\) is the set of all partitions of \(n\), and \(\mathcal{P}=\cup_{n\geq 0}\mathcal{P}_{n}\), where \(\mathcal{P}_{0}=\{0\}\).
(1) Let \(\lambda=(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{\ell}>0)\) denote a _partition_ of \(n\), so \(\sum\lambda_{i}=n\). Each \(\lambda_{i}\) is called a _part_; \(\ell\) is called the _length_ of \(\lambda\); and \(|\lambda|:=n\) is called the _weight_ of \(\lambda\). Alternatively, we may write \(\lambda=(1^{m_{1}}2^{m_{2}}\ldots)\), where \(m_{i}=m_{i}(\lambda):=|\{j:\lambda_{j}=i\}|\) is called the _multiplicity_ of \(i\).
(2) We follow the **matrix notation** in [50, I.1]. For any partition \(\lambda\in\mathcal{P}\), \(D(\lambda):=\{(i,j)\in\mathbb{Z}_{\geq 1}^{2}:1\leq j\leq\lambda_{i}\}\) denotes its _Ferrers diagram_, or _Young diagram_ if each'matrix' position/point is replaced by a box. For any box \(z\in D(\lambda)\), its _arm length_\(a(z)=a_{\lambda}(z)\) (resp. _leg length_\(\ell(z)=\ell_{\lambda}(z)\)) is the number of boxes strictly to the right of \(z\) (resp. below \(z\)). Its _hook length_ is:
(B.1.1) \[h(z)=h_{\lambda}(z):=a(z)+\ell(z)+1.\]
The _dual partition_ (or _conjugate_) \(\lambda^{\prime}\) of \(\lambda\) is defined by \(D(\lambda^{\prime}):=D(\lambda)^{\mathrm{T}}\). For example, if \(\lambda=(4,4,3,1)\in\mathcal{P}_{12}\), then \(\lambda^{\prime}=(4,3,3,2)\).
(3) Given \(\lambda\in\mathcal{P}_{n}\), a _semistandard/column-strict Young tableau_ (**ssYT**) of _shape_\(\lambda\) is a map (or _label_) \(T:D(\lambda)\to\mathbb{Z}_{\geq 1}:(i,j)\mapsto T_{i,j}\) (_Young tableau_) such that:
(a) \(T\) is non-decreasing from left to right along each row;
(b) \(T\) is (strictly) increasing down each column.
The sequence \((|T^{-1}(1)|,|T^{-1}(2)|,\cdots)\) is called the _weight_ of \(T\). **Note**: \(T_{1,1}\) is _not required_ to be \(1\).
(4) A _standard Young tableau_ (**sYT**) of _shape_\(\lambda\) is a semistandard Young tableau \(T\) such that each number \(1,2,\cdots,n\) appears exactly once as a label. Equivalently, \(T_{1,1}=1\), and \(T\) is a Young tableau that is (strictly) increasing along each row and down each column.
(5) For any \(\lambda,\mu\in\mathcal{P}\), we say \(\mu\subset\lambda\) if \(D(\mu)\subset D(\lambda)\). The set-theoretic difference \(\lambda-\mu:=D(\lambda)-D(\mu)\) is called a _skew diagram_. Replacing \(\lambda\) by \(\lambda-\mu\) in (3) (resp. (4)), we obtain the notion of a _semistandard/column-strict tableau_ (resp. _standard tableau_) of _shape_ \(\lambda-\mu\).
(6) The _natural/dominance partial ordering_\(\geq\) on \(\mathcal{P}_{n}\) is: \(\lambda\geq\mu\Leftrightarrow\Sigma_{j=1}^{i}\lambda_{j}\geq\Sigma_{j=1}^{i} \mu_{j},\forall i\geq 1\). Given two partitions \(\lambda,\mu\in\mathcal{P}\), define a _pairing_ by: \(\langle\lambda,\mu\rangle:=\sum_{j\geq 1}\lambda^{\prime}_{j}\mu^{\prime}_{j}\). Then \(\langle\lambda,\lambda\rangle=2n(\lambda)+|\lambda|\) with
(B.1.2) \[n(\lambda):=\sum_{z\in D(\lambda)}\ell(z)=\sum_{j\geq 1}\binom{\lambda^{ \prime}_{j}}{2}=\sum_{i\geq 1}(i-1)\lambda_{i}.\]
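To keep the conventions straight, here is a small Python sketch (our addition) implementing the arm, leg, and hook lengths above and checking (B.1.2) for \(\lambda=(4,4,3,1)\):

```python
from math import comb

lam = (4, 4, 3, 1)

def conjugate(lam):
    # lam'_j = number of parts of lam that are >= j
    return tuple(sum(1 for part in lam if part >= j) for j in range(1, lam[0] + 1))

def arm(lam, i, j):   # boxes strictly to the right of (i, j), 1-based matrix coordinates
    return lam[i - 1] - j

def leg(lam, i, j):   # boxes strictly below (i, j)
    return conjugate(lam)[j - 1] - i

def hook(lam, i, j):  # h(z) = a(z) + l(z) + 1
    return arm(lam, i, j) + leg(lam, i, j) + 1

def n_of(lam):        # n(lambda) = sum of all leg lengths
    return sum(leg(lam, i, j) for i, part in enumerate(lam, 1) for j in range(1, part + 1))

assert conjugate(lam) == (4, 3, 3, 2)
# The three expressions in (B.1.2) agree:
assert n_of(lam) == sum(comb(c, 2) for c in conjugate(lam)) == sum(i * part for i, part in enumerate(lam))
# <lambda, lambda> = 2 n(lambda) + |lambda|:
assert sum(c * c for c in conjugate(lam)) == 2 * n_of(lam) + sum(lam)
print(conjugate(lam), n_of(lam), hook(lam, 1, 1))   # (4, 3, 3, 2) 13 7
```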
### Symmetric functions
Let \(\mathbf{x}=\{x_{1},x_{2},\cdots\}\) be a set of infinitely many independent variables. For any commutative ring \(R\), let \(\Lambda=\Lambda(\mathbf{x})\) (resp. \(\Lambda_{R}:=\Lambda\otimes_{\mathbb{Z}}R\)) denote the _ring of symmetric functions_ in \(\mathbf{x}\) (resp. with coefficients in \(R\)). For any sequence \(\lambda=(\lambda_{1},\lambda_{2},\cdots)\) of natural numbers (including a partition), denote \(\mathbf{x}^{\lambda}:=x_{1}^{\lambda_{1}}x_{2}^{\lambda_{2}}\cdots\).
(1) For any partition \(\lambda\in\mathcal{P}\):

(a) \(m_{\lambda}\in\Lambda\) is the _monomial symmetric function_ of type \(\lambda\). So, \(m_{\lambda}(\mathbf{x}):=\Sigma_{\text{permutations}}\mathbf{x}^{\lambda}\), and \(m_{0}=1\);

(b) \(e_{\lambda}\in\Lambda\) is the _elementary symmetric function_ of type \(\lambda\). So, \(e_{\lambda}:=e_{\lambda_{1}}e_{\lambda_{2}}\cdots\), \(e_{r}:=m_{(1^{r})}\);

(c) \(h_{\lambda}\in\Lambda\) is the _complete symmetric function_ of type \(\lambda\). So, \(h_{\lambda}:=h_{\lambda_{1}}h_{\lambda_{2}}\cdots\), \(h_{r}:=\Sigma_{|\mu|=r}m_{\mu}\);

(d) \(p_{\lambda}\in\Lambda\) is the _power sum_ of type \(\lambda\). So, \(p_{\lambda}:=p_{\lambda_{1}}p_{\lambda_{2}}\cdots\), with \(p_{r}:=m_{(r)}\) the \(r\)-_th power sum_;

(e) \(s_{\lambda}\in\Lambda\) is the _Schur function_ of type \(\lambda\). So, \(s_{\lambda}:=\varprojlim s_{\lambda}(x_{1},x_{2},\cdots,x_{n})\), \(s_{\lambda}(x_{1},x_{2},\cdots,x_{n}):=\det(x_{i}^{\lambda_{j}+n-j})/\det(x_{i}^{n-j})\).
Recall that \(\{m_{\lambda}\}_{\lambda\in\mathcal{P}}\), \(\{e_{\lambda}\}_{\lambda\in\mathcal{P}}\), \(\{h_{\lambda}\}_{\lambda\in\mathcal{P}}\), and \(\{s_{\lambda}\}_{\lambda\in\mathcal{P}}\) form four \(R\)-bases of \(\Lambda_{R}\). In addition, \(\{p_{\lambda}\}_{\lambda\in\mathcal{P}}\) forms an \(R\)-basis of \(\Lambda_{R}\) if \(R\supset\mathbb{Q}\).
(2) The graded involution \(\omega:\Lambda=\mathbb{Z}[e_{1},e_{2},\cdots]\to\Lambda\) is \(\omega(e_{r}):=h_{r}\). Then, (B.2.1) \[\omega(h_{r})=e_{r};\quad\omega(p_{\lambda})=\epsilon_{\lambda}p_{\lambda},\ \ \epsilon_{\lambda}:=(-1)^{|\lambda|-\ell(\lambda)};\quad\omega(s_{\lambda})=s_{\lambda^{\prime}}.\]
(3) Denote \(h_{r}:=0,e_{r}:=0\), for all \(r<0\). Among various relations, we recall the following: (B.2.2) \[h_{n}=\sum_{|\lambda|=n}z_{\lambda}^{-1}p_{\lambda},\quad e_{n}=\sum_{|\lambda|=n}\epsilon_{\lambda}z_{\lambda}^{-1}p_{\lambda},\quad z_{\lambda}:=\prod_{i\geq 1}i^{m_{i}}\cdot m_{i}!.\] (B.2.3) \[s_{\lambda}=\det(h_{\lambda_{i}-i+j})_{1\leq i,j\leq n},\ \ \forall n\geq\ell(\lambda);\quad s_{\lambda}=\det(e_{\lambda^{\prime}_{i}-i+j})_{1\leq i,j\leq m},\ \ \forall m\geq\ell(\lambda^{\prime}).\]
(4) \(\Lambda\) has a _scalar product/Hall pairing_: \(\langle h_{\lambda},m_{\mu}\rangle:=\delta_{\lambda\mu}\). So, \(\langle p_{\lambda},p_{\mu}\rangle=z_{\lambda}\delta_{\lambda\mu}\), \(\langle s_{\lambda},s_{\mu}\rangle=\delta_{\lambda\mu}\).
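The identities above are classical; the following SymPy sketch (our addition) checks the small cases of (B.2.2), and one instance of (B.2.3), in three variables:

```python
import sympy as sp
from itertools import combinations, combinations_with_replacement

x = sp.symbols('x1 x2 x3')

def p(r):   # power sum p_r
    return sum(xi**r for xi in x)

def e(r):   # elementary symmetric function e_r
    return sp.expand(sum(sp.Mul(*c) for c in combinations(x, r)))

def h(r):   # complete symmetric function h_r
    return sp.expand(sum(sp.Mul(*c) for c in combinations_with_replacement(x, r)))

# (B.2.2) for n = 2:  z_(2) = 2, z_(1,1) = 2, eps_(2) = -1, eps_(1,1) = +1
assert sp.expand(h(2) - (p(2)/2 + p(1)**2/2)) == 0
assert sp.expand(e(2) - (-p(2)/2 + p(1)**2/2)) == 0
# (B.2.2) for n = 3:  z_(3) = 3, z_(2,1) = 2, z_(1^3) = 6
assert sp.expand(h(3) - (p(3)/3 + p(2)*p(1)/2 + p(1)**3/6)) == 0
# (B.2.3) for lambda = (2,1) in three variables: s_(2,1) = det [[h2, h3], [h0, h1]]
num = sp.Matrix(3, 3, lambda i, j: x[i]**((2, 1, 0)[j] + 2 - j))
den = sp.Matrix(3, 3, lambda i, j: x[i]**(2 - j))
schur_21 = sp.cancel(num.det() / den.det())
assert sp.expand(schur_21 - (h(2)*h(1) - h(3)*h(0))) == 0
```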
### lambda-rings and plethysm
**Definition B.1** (lambda-rings).: Fix a commutative ring \(\Gamma\supset\mathbb{Q}\).
(1) \(\Gamma\) is called a _lambda-ring_ (or \(\lambda\)_-ring_) if there are morphisms \(p_{n}:\Gamma\to\Gamma:g\mapsto p_{n}[g]\), \(n\geq 1\), called _Adams operations_, such that, (B.3.1) \[p_{1}[g]=g,\quad p_{m}[p_{n}[g]]=p_{mn}[g],\quad\forall m,n\geq 1,g\in\Gamma.\]
Given two \(\lambda\)-rings \(\Gamma,\Gamma^{\prime}\), a ring homomorphism \(\varphi:\Gamma\to\Gamma^{\prime}\) respecting the Adams operations, is called a \(\lambda\)_-ring homomorphism_.
(2) If \(\Gamma\) is a ring of polynomials, rational functions, power series, or Laurent series in some variables \(x_{1},x_{2},\cdots\), the _usual lambda-ring structure_ on \(\Gamma\) is defined by
(B.3.2) \[p_{n}[x_{i}]:=x_{i}^{n}.\]
(3) The _trivial lambda-ring structure_ on \(\Gamma\) is defined by \(p_{n}[g]:=g\).
(4) If \(R\supset\mathbb{Q}\) is a lambda-ring, define the _usual lambda-ring structure_ on \(\Lambda_{R}:=\Lambda\otimes_{\mathbb{Z}}R\) such that, the Adams operation \(p_{n}[-]\) on \(\Lambda_{R}\) restricts to that on \(R\), and
(B.3.3) \[p_{n}[p_{m}]:=p_{mn}.\]
Then, the inclusion \(R\subset\Lambda_{R}\) is a lambda-ring homomorphism.
**Definition/Proposition B.2** (Plethystic substitution).: _Fix a base lambda-ring \(R\supset\mathbb{Q}\), and a lambda-ring homomorphism \(R\subset\Gamma\). Then, for any \(g\in\Gamma\), there exists a lambda-ring homomorphism \(\Lambda_{R}=R[p_{1},p_{2},\cdots]\to\Gamma:f\mapsto f[g]\) defined by_
(B.3.4) \[r[g]:=r,\forall r\in R;\quad p_{n}[g]:=p_{n}[g]\ \ (\text{the latter is the Adams operation on $\Gamma$}).\]
Proof.: By definition, \(f\mapsto f[g]\) is a \(R\)-algebra homomorphism. It suffices to verify that, for \(f=p_{m}\), \(p_{n}[p_{m}[g]]=(p_{n}[p_{m}])[g]\). That is, \(p_{n}[p_{m}[g]]=p_{nm}[g]\) in \(\Gamma\). This is (B.3.1).
**Example B.3**.: Let \(R=\mathbb{Q}\) equipped with the trivial lambda-ring structure.
1. Let \(F:=\mathbb{Q}(q,t)\) be the _usual lambda-ring_ of rational functions in \(q,t\), then \[\frac{q}{1-t}=q+qt+qt^{2}+\cdots\in F,\ \ \Rightarrow\ \ p_{n}[\frac{q}{1-t}]=q^{n}+q^{n}t^{n}+q^{n}t^{2n}+\cdots=p_{n}(q,qt,qt^{2 },\cdots)\in F.\] Thus, \(f[\frac{q}{1-t}]=f(q,qt,qt^{2},\cdots)\), for all \(f\in\Lambda_{\mathbb{Q}}\).
2. Take the _usual lambda-ring_\(\Lambda_{F}=\Lambda\otimes_{\mathbb{Z}}F\). Denote \(X:=x_{1}+x_{2}+\cdots\in\Lambda\). Then, \[g=\frac{qX}{1-t}=\sum\limits_{i\geq 1,j\geq 0}qx_{i}t^{j}\in\Lambda_{F},\ \ \Rightarrow\ \ p_{n}[\frac{qX}{1-t}]=\sum\limits_{i\geq 1,j\geq 0}q^{n}x_{i}^{n}t^{nj }=p_{n}(qx_{i}t^{j}:i\geq 1,j\geq 0)\in\Lambda_{F}.\] Thus, \(f[\frac{qX}{1-t}]=f(qx_{i}t^{j}:i\geq 1,j\geq 0)\in\Lambda_{F}\), for all \(f\in\Lambda_{\mathbb{Q}}\).
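Example B.3.(1) can also be checked on truncated power series; a minimal SymPy sketch (our addition):

```python
import sympy as sp

q, t = sp.symbols('q t')
N, n = 6, 3
trunc = sum(q * t**j for j in range(N))           # truncation of q/(1-t)
adams = trunc.subs({q: q**n, t: t**n})            # p_n[q/(1-t)] in the usual lambda-ring
direct = sum((q * t**j)**n for j in range(N))     # p_n(q, qt, qt^2, ...) truncated
assert sp.expand(adams - direct) == 0
```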
### Macdonald symmetric functions
\(F:=\mathbb{Q}(q,t)\). The _\((q,t)\)-deformed scalar product_ on \(\Lambda_{F}\) is:
(B.4.1) \[\langle p_{\lambda},p_{\mu}\rangle_{(q,t)}:=z_{\lambda}(q,t)\delta_{\lambda\mu},\quad z_{\lambda}(q,t):=z_{\lambda}\prod_{i=1}^{\ell(\lambda)}\frac{1-q^{\lambda_{i}}}{1-t^{\lambda_{i}}}.\]

**Definition B.4** (Macdonald symmetric functions).: The _Macdonald symmetric functions_ \(P_{\lambda}=P_{\lambda}(\mathbf{x};q,t)\in\Lambda_{F}\) form the unique basis of the form \(P_{\lambda}=m_{\lambda}+\sum_{\mu<\lambda}u_{\lambda\mu}m_{\mu}\) that is orthogonal for \(\langle-,-\rangle_{(q,t)}\). Set \(b_{\lambda}(q,t):=\langle P_{\lambda},P_{\lambda}\rangle_{(q,t)}^{-1}\) and \(Q_{\lambda}:=b_{\lambda}P_{\lambda}\), so that \(\langle P_{\lambda},Q_{\mu}\rangle_{(q,t)}=\delta_{\lambda\mu}\). Let \(\omega_{q,t}\) denote the \(F\)-algebra automorphism of \(\Lambda_{F}\) with \(\omega_{q,t}(p_{r})=(-1)^{r-1}\frac{1-q^{r}}{1-t^{r}}p_{r}\); then \(\omega_{q,t}(P_{\lambda}(q,t))=Q_{\lambda^{\prime}}(t,q)\). We refer to [50, Ch.VI] for details.
**Definition B.5** (Integral forms of Macdonald symmetric functions).:
1. For each partition \(\lambda\in\mathcal{P}\), define (B.4.8) \[c_{\lambda}(q,t):=\underset{s\in\lambda}{\prod}(1-q^{a(s)}t^{\ell(s)+1});\quad c ^{\prime}_{\lambda}(q,t):=\underset{s\in\lambda}{\prod}(1-q^{a(s)+1}t^{\ell(s) }).\] Observe that \(c^{\prime}_{\lambda}(q,t)=c_{\lambda^{\prime}}(t,q);\quad b_{\lambda}(q,t)=c_{ \lambda}(q,t)/c^{\prime}_{\lambda}(q,t)\).
2. The _Integral form of the Macdonald symmetric function_ of type \(\lambda\) is: (B.4.9) \[J_{\lambda}=J_{\lambda}(\mathbf{x};q,t):=c_{\lambda}(q,t)P_{\lambda}(\mathbf{ x};q,t)=c^{\prime}_{\lambda}(q,t)Q_{\lambda}(\mathbf{x};q,t)\in\Lambda_{F}.\]
**Note**: \(\omega_{q,t}(J_{\lambda}(q,t))=J_{\lambda^{\prime}}(t,q)\), and \(\langle J_{\lambda},J_{\lambda}\rangle_{(q,t)}=c_{\lambda}(q,t)c^{\prime}_{ \lambda}(q,t)=\langle J_{\lambda^{\prime}},J_{\lambda^{\prime}}\rangle\).
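The relation \(c^{\prime}_{\lambda}(q,t)=c_{\lambda^{\prime}}(t,q)\) observed in Definition B.5.(1) is also easy to verify by machine; a short sketch (our addition) for \(\lambda=(3,1)\), with the arm/leg conventions of §B.1:

```python
import sympy as sp

q, t = sp.symbols('q t')

def conjugate(lam):
    return tuple(sum(1 for part in lam if part >= j) for j in range(1, lam[0] + 1))

def boxes(lam):
    return [(i, j) for i, part in enumerate(lam, 1) for j in range(1, part + 1)]

def arm(lam, i, j): return lam[i - 1] - j
def leg(lam, i, j): return conjugate(lam)[j - 1] - i

def c_low(lam, a, b):    # c_lambda(a, b)  = prod_z (1 - a^{arm(z)} b^{leg(z)+1})
    return sp.Mul(*[1 - a**arm(lam, i, j) * b**(leg(lam, i, j) + 1) for (i, j) in boxes(lam)])

def c_up(lam, a, b):     # c'_lambda(a, b) = prod_z (1 - a^{arm(z)+1} b^{leg(z)})
    return sp.Mul(*[1 - a**(arm(lam, i, j) + 1) * b**leg(lam, i, j) for (i, j) in boxes(lam)])

lam = (3, 1)
assert sp.expand(c_up(lam, q, t) - c_low(conjugate(lam), t, q)) == 0
```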
Recall that \(F=\mathbb{Q}(q,t)\) is equipped with the _usual lambda-ring structure_. Finally, we have
**Definition B.6** (modified Macdonald symmetric functions).: For each \(\lambda\in\mathcal{P}\), define
(B.4.10) \[H_{\lambda}(\mathbf{x};q,t):=J_{\lambda}[\frac{X}{1-t};q,t]\ \text{( Definition/Proposition B.2)};\ \widetilde{H}_{\lambda}(\mathbf{x};q,t):=t^{n(\lambda)}H_{\lambda}(\mathbf{x}; q,1/t)\in\Lambda_{F}.\]
\(\widetilde{H}_{\lambda}\) is called the _modified (or transformed) Macdonald symmetric function_ of type \(\lambda\). \(\forall\lambda,\mu\in\mathcal{P}\), define the \((q,t)\)_-Kostka polynomial_\(\widetilde{K}_{\lambda\mu}(q,t)\in F\) by
(B.4.11) \[\widetilde{H}_{\mu}(\mathbf{x};q,t)=:\underset{\lambda}{\sum}s_{\lambda}( \mathbf{x})\widetilde{K}_{\lambda\mu}(q,t).\]
**Note**: we have a duality [26, Cor.3.2]:
(B.4.12) \[\widetilde{H}_{\lambda}(\mathbf{x};q,t)=\widetilde{H}_{\lambda^{\prime}}( \mathbf{x};t,q).\]
As a final remark, \(\widetilde{H}_{\mu}(\mathbf{x};q,t)\) also admits a geometric interpretation via \(\operatorname{Hilb}^{n}(\mathbb{C}^{2})\)[33, 34].
## Appendix C Quotients of varieties
We collect some results on quotients of varieties; hopefully, this helps the reader digest the main body of this article. Most statements below are standard, so we skip the proofs whenever possible.
Recall our **Convention 1**: A _\(\mathbb{K}\)-variety_ means a _reduced separated scheme of finite type_ over \(\mathbb{K}\).
**Convention 7**: Fix \(G,H\) as linear algebraic groups over \(\mathbb{K}\), unless otherwise stated.
### Various quotients for algebraic group actions
#### C.1.1. Categorical quotient
Let \(X\) be a \(G\)-scheme over \(\mathbb{K}\), unless otherwise stated. Let \(\operatorname{Irr}(X)\) (resp. \(\operatorname{Conn}(X)\)) be the set of irreducible (resp. connected) components of \(X\). Let \(G^{0}\) be the identity component of \(G\), then \(\pi_{0}(G):=G/G^{0}\) is a finite group acting on \(\operatorname{Irr}(X)\) and \(\operatorname{Conn}(X)\).
**Definition C.1**.: As usual, define the _\(G\)-topology_ on \(X\) via _\(G\)-open (or \(G\)-closed) subsets_. Let \(G\)-\(\operatorname{Irr}(X)\) (resp. \(G\)-\(\operatorname{Conn}(X)\)) be the set of _\(G\)-irreducible (resp. \(G\)-connected) components_ of \(X\).
**Corollary C.2**.: \(G\)-\(\operatorname{Irr}(X)=\operatorname{Irr}(X)/\pi_{0}(G)\)_, \(G\)-\(\operatorname{Conn}(X)=\operatorname{Conn}(X)/\pi_{0}(G)\)._
**Definition C.3** (categorical quotient).: Let \(\mathcal{C}\) be the category of \(\mathbb{K}\)-schemes. A _categorical quotient_ of \(X\) by \(G\) is a \(G\)-invariant morphism \(\pi:X\to Y\) into a \(\mathbb{K}\)-scheme \(Y\) that is _universal_ in \(\mathcal{C}\): every \(G\)-invariant morphism from \(X\) to a \(\mathbb{K}\)-scheme factors uniquely through \(\pi\).
The categorical quotient, denoted \(X/^{\mathrm{ca}}G:=Y\), if it exists, is unique up to a unique isomorphism.
**Proposition C.4**.: _If \(\pi:X\to Y\) is a \(G\)-invariant morphism into a \(\mathbb{K}\)-scheme \(Y\), then:_
1. (_local on the target_) _If_ \(Y=\cup U_{i}\) _is an open cover such that each_ \(\pi|_{\pi^{-1}(U_{i})}:\pi^{-1}(U_{i})\to U_{i}\) _is a categorical quotient, then so is_ \(\pi\)_._
2. _From now on, suppose_ \(\pi:X\to Y\) _is a categorical quotient for_ \(G\curvearrowright X\)_._
3. _If_ \(Y=\mathrm{Spec}\ A\) _is affine, then_ \(\pi\) _induces a natural isomorphism_ 5 _of_ \(\mathbb{K}\)_-algebras_ Footnote 5: \(Y\) is only a \(\mathbb{K}\)-scheme (without finite type condition), as \(\mathcal{O}_{X}(X)^{G}\) may not be finitely generated even for nice \(X\). \[\pi^{\#}:A=\mathcal{O}_{Y}(Y)\xrightarrow{\simeq}(\pi_{*}\mathcal{O}_{X})^{G }(Y)=\mathcal{O}_{X}(X)^{G}\subset\mathcal{O}_{X}(X).\]
3. \(X\) _is reduced_ \(\Rightarrow\) _Y is reduced._
_From now on, we further assume_ \(X\) _is reduced_ 6_, hence so is_ \(Y\)_._
Footnote 6: For \((4)-(6)\), we need \(X\) to be reduced, then any morphism \(\pi:X\to Y\) factors through \(X=X_{\mathrm{red}}\to\overline{\pi(X)}_{\mathrm{red}}\hookrightarrow Y\).
4. \(\pi:X\to Y\) _is_ dominant_. In particular, we have:_
\[X\ \text{is irreducible}\ \Rightarrow\ X\ \text{is}\ G\text{-irreducible}\ \Rightarrow\ Y\ \text{is irreducible};\qquad X\ \text{is connected}\ \Rightarrow\ X\ \text{is}\ G\text{-connected}\ \Rightarrow\ Y\ \text{is connected}.\]
5. _If_ \(X=\sqcup_{i\in I}X_{i}\) _with each_ \(X_{i}\)__\(G\)_-connected, denote_ \(\iota_{i}:Y_{i}:=\overline{\pi(X_{i})}_{\mathrm{red}}\hookrightarrow Y\)_. Then_ \(\sqcup_{i\in I}Y_{i}\xrightarrow{\simeq}Y\)_, and_ \(\pi_{i}:=\pi|_{X_{i}}:X_{i}\to Y_{i}\) _is a categorical quotient for_ \(G\curvearrowright X_{i}\)_. In particular, we get a bijection_ \[C_{0}(\pi):G\text{-}\mathrm{Conn}(X)\xrightarrow{\simeq}\mathrm{Conn}(Y):[X_ {i}]\mapsto[Y_{i}=\overline{\pi(X_{i})}]\]
6. _If_ \(X\) _is a reduced Noetherian_ 7 _scheme over_ \(\mathbb{K}\)_, then:_ \(X\) _is normal_ \(\Rightarrow\) _Y is normal._ Footnote 7: Why Noetherian condition?: By [76, Lemma 033M], if \(X\) is a normal (hence reduced) Noetherian scheme over \(\mathbb{K}\), then \(X=\sqcup_{i\in I}X_{i}\) is a finite disjoint union of integral normal Noetherian schemes \(X_{i}\) over \(\mathbb{K}\).
#### c.1.2. Good and geometric quotients
**From now on,**\(X\) is a \(G\)-variety over \(\mathbb{K}\), unless stated otherwise. The action is called _free_, if \(G\times X\to X\times X:(g,x)\mapsto(x,gx)\) is a closed embedding.
**Definition C.5** (good quotient. [39, Def.3.27]).: A morphism \(\pi:X\to Y\) of \(\mathbb{K}\)-varieties is a _good quotient_ for the action \(G\curvearrowright X\) if:
1. \(\pi\) is \(G\)-invariant (and surjective).
2. \(\pi\) is affine.
3. If \(W_{1},W_{2}\subset X\) are disjoint \(G\)-invariant closed subsets, then \(\pi(W_{1})\) and \(\pi(W_{2})\) are disjoint closed subsets in \(Y\). (In particular, can take \(W_{2}=\varnothing\)).
4. We get isomorphisms \(\pi^{\#}:\mathcal{O}_{Y}(U)\to\mathcal{O}_{X}(\pi^{-1}(U))^{G}\), \(\forall U\subset Y\) open. I.e., \(\pi^{\#}:\mathcal{O}_{Y}\xrightarrow{\simeq}(\pi_{*}\mathcal{O}_{X})^{G}\). In this case, we may denote \(Y\) by \(X/\!\!/G\).
**Remark C.6**.: In the definition above:
\(\bullet\)_Surjectivity_ follows from _(3),(4)_: (4)\(\Rightarrow\pi\) is dominant; (3)\(\Rightarrow\pi(X)\subset Y\) is closed.
\(\bullet\) (1) and (3) imply that \(Y\) has the **quotient topology** defined by \(\pi:X\to Y\).
**Definition C.7** (geometric quotient. [39, Def.3.27]).: Let \(G\curvearrowright X\) as above.
1. A \(G\)-invariant morphism \(\pi:X\to Y\) of \(\mathbb{K}\)-varieties is an _orbit map_ if it's surjective, and for each \(\mathbb{K}\)-point \(y\in Y\), the fiber \(\pi^{-1}(y)\) is a single \(G\)-orbit.
**Note**: In this case, \(\pi\) is surjective and the \(G\)-orbits in \(X\) are all closed.
2. A morphism \(\pi:X\to Y\) of \(\mathbb{K}\)-varieties is a _geometric quotient_8 of \(G\curvearrowright X\) if it's both a good quotient and an orbit map. Denote such a variety \(Y\) by \(X/G\).
**Note**: 'Orbit map' and Definition C.5 (3)\(\Rightarrow\) the map \(\pi:X\to Y\) is _open_.
Footnote 8: The definition we adopt here is slightly stronger than [57, Def.0.6]: we also require that \(\pi\) is affine.
**Lemma C.8** ([39, Prop.3.30, Cor.3.32, Cor.3.33]).: _Let \(G\curvearrowright X\), and let \(\pi:X\to Y\) be a \(G\)-invariant morphism of \(\mathbb{K}\)-varieties._
1. _If_ \(\pi\) _satisfies Definition C.5.(3), (4), then it's a categorical quotient. So any good (resp. geometric) quotient is a categorical quotient and, if it exists, is unique up to a unique isomorphism._
2. _If_ \(\pi:X\to Y\) _is a good quotient of_ \(G\curvearrowright X\)_, then:_ (a) _For any two_ \(\mathbb{K}\)_-points_ \(x_{1},x_{2}\in X\)_,_ \(\overline{G\cdot x_{1}}\cap\overline{G\cdot x_{2}}\neq\varnothing\) _if and only if_ \(\pi(x_{1})=\pi(x_{2})\)_;_ (b) _for each_ \(\mathbb{K}\)_-point_ \(y\in Y\)_, the fiber_ \(\pi^{-1}(y)\) _contains a unique closed orbit. In particular, if all orbits are closed, then_ \(\pi:X\to Y\) _is a geometric quotient._
3. _(_local on the target_) If_ \(Y=\cup U_{i}\) _is an open cover such that each_ \(\pi|_{\pi^{-1}(U_{i})}:\pi^{-1}(U_{i})\to U_{i}\) _is a good (resp. geometric) quotient, then so is_ \(\pi\)_._
4. _(_base change by open embeddings_) If_ \(\pi:X\to Y\) _is a good (resp. geometric) quotient, then so is_ \(\pi|_{\pi^{-1}(U)}:\pi^{-1}(U)\to U\)_, for any open subset_ \(U\subset Y\)_._
#### c.1.3. Affine GIT quotient
Let \(G\curvearrowright X\), with \(G\) _linearly reductive_ and \(X\) an affine variety over \(\mathbb{K}\).
**Definition C.9** (Affine GIT quotient).: The _affine GIT quotient_ of \(G\curvearrowright X\) is the natural morphism \(\pi:X\to X//G:=\operatorname{Spec}\,\mathcal{O}(X)^{G}\) induced by the inclusion \(\mathcal{O}(X)^{G}\hookrightarrow\mathcal{O}(X)\).
**Lemma C.10** ([39, Thm.4.30]).: _The affine GIT quotient \(\pi:X\to X//G\) is a good quotient. So, the affine GIT quotient \(X//G\) is a geometric quotient if and only if all \(G\)-orbits are closed._
E.g., \(X//G\) is a geometric quotient if the action is _free_, as all orbits are closed [6, Prop.1.8].
**Definition C.11** (stable points).: A \(\mathbb{K}\)-point \(x\in X\) is _stable_ if \(G\cdot x\) is closed and \(\dim G_{x}=0\). Denote by \(X^{s}\) the set of stable points in \(X\).
**Lemma C.12** ([39, Prop.4.36]).: \(X^{s}\subset X\) _is a \(G\)-invariant open subset, \(Y^{s}:=\pi(X^{s})\) is an open subset of \(Y\), and \(X^{s}=\pi^{-1}(Y^{s})\). Moreover, \(\pi|_{X^{s}}:X^{s}\to Y^{s}\) is a geometric quotient._
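A standard illustration of Definition C.9, Lemma C.10, and Lemma C.12 (included for orientation, not taken from the references): let \(G=\mathbb{G}_{m}\) act on \(X=\mathbb{A}^{2}=\operatorname{Spec}\,\mathbb{K}[x,y]\) by \(t\cdot(a,b)=(ta,t^{-1}b)\). Then \(\mathcal{O}(X)^{G}=\mathbb{K}[xy]\), so \(X//G\cong\mathbb{A}^{1}\) with \(\pi(a,b)=ab\). For \(c\neq 0\) the fiber \(\pi^{-1}(c)=\{xy=c\}\) is a single closed orbit, while \(\pi^{-1}(0)\) consists of three orbits (the two punctured axes and the origin), only the last of which is closed; so \(\pi\) is a good but not a geometric quotient. Here \(X^{s}=\{xy\neq 0\}\), and \(\pi|_{X^{s}}:X^{s}\to\mathbb{A}^{1}\setminus\{0\}\) is a geometric quotient.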
The following is a consequence of Luna's etale slice theorem [49]:
**Lemma C.13**.: _Let \(G\) be a linearly reductive algebraic group acting **freely** on an affine variety \(X\) over \(\mathbb{K}\). Then the quotient map \(\pi:X=X^{s}\to X//G=X/G\) is a principal \(G\)-bundle._
### Principal bundles
**Definition C.14** (Principal bundles).: Let \(G\) be a linear algebraic group over \(\mathbb{K}\). A _principal \(G\)-bundle_ over \(Y\) (in the _etale topology_) is a morphism \(\pi:P\to Y\) of \(\mathbb{K}\)-varieties such that:
1. \(P\) is a \(G\)-variety9 over \(\mathbb{K}\). Footnote 9: If necessary, a left \(G\)-action may be viewed as a right action: \(P\times G\to P:(p,g)\mapsto p\cdot g:=g^{-1}\cdot p\).
2. \(\forall y\in Y(\mathbb{K})\Rightarrow\exists\) etale neighborhood \(\iota:U\to Y\) and a \(G\)-isomorphism \(U\times_{Y}P\cong U\times G\) over \(U\): \[\begin{CD}U\times G@>{\cong}>{}>U\times_{Y}P@>{}>{}>P\\ @V{p_{1}}V{}V@V{}V{}V@V{}V{\pi}V\\ U@=U@>{\iota}>{}>Y\end{CD}\]
**Note**: for \(\tau=\)_Zariski_, _smooth_, _fppf_, or _fpqc_, define _principal \(G\)-bundles_ in \(\tau\)-topology similarly.
**Proposition C.15**.: _If \(G\curvearrowright X\)**freely**, and \(\pi:X\to Y\) is a **flat** orbit map into a \(\mathbb{K}\)**-variety**, then:_
1. _The fiber product_ \(X\times_{Y}X\) _is_ \(G\)_-isomorphic to_ \(X\times G\)_, i.e. we get a cartesian diagram:_ (C.2.1) \[\begin{CD}X\times G@>{(x,g)\mapsto xg}>{}>X\\ @V{p_{1}}V{}V@V{}V{\pi}V\\ X@>{\pi}>{}>Y\end{CD}\]
2. \(\pi:X\to Y\) _is smooth and affine._
3. \(\pi:X\to Y\) _is a principal_ \(G\)_-bundle (in the etale topology)._
**Remark C.16**.: In above, \(\pi:X\to Y\) is in fact a geometric quotient. See Proposition C.24.
Proof of Proposition C.15.: (1). The proof is covered by the three commutative diagrams:
\[\begin{CD}X\times_{Y}X@>{\widetilde{\Delta}_{Y}}>{}>X\times X\\ @V{}V{}V@V{}V{\pi\times\pi}V\\ Y@>{\Delta_{Y}}>{}>Y\times Y\end{CD}\qquad\begin{CD}X\times G@>{c}>{}>X\times_{Y}X\\ @|@V{}V{\widetilde{\Delta}_{Y}}V\\ X\times G@>{a}>{}>X\times X\end{CD}\qquad\begin{CD}X\times G@>{c}>{}>X\times_{Y}X\\ @V{p_{1}}V{}V@V{}V{p_{1}}V\\ X@=X\end{CD}\]
First, consider the left cartesian diagram. As \(Y\) is **separated**, \(\Delta_{Y}\) is a closed embedding, so is \(\widetilde{\Delta}_{Y}:X\times_{Y}X\to X\times X\) by base change. A free action means the morphism \(a:X\times G\to X\times X:(x,g)\mapsto(x,xg)\) is a closed embedding. \(\pi\) is \(G\)-invariant means \(\pi\circ p_{1}=\pi\circ p_{1}\circ a=\pi\circ p_{2}\circ a:X\times G\to Y\).
This induces the middle commutative diagram. As \(a\) and \(\widetilde{\Delta}_{Y}\) are closed embeddings, so is \(c:X\times G\to X\times_{Y}X:(x,g)\mapsto(x,xg)\), by the **cancellation property** for closed embeddings.
Now, consider the right commutative diagram. By assumption, each closed fiber of \(\pi\) is isomorphic to \(G\). Then so is \(p_{1}:X\times_{Y}X\to X\) by base change. Thus, \(c\) is an isomorphism on the closed fibers over \(X\), hence a dominant morphism.
By base change and composition, \(X\times_{Y}X\) is of finite type over \(\mathbb{K}\), and hence \(p_{1}:X\times_{Y}X\to X\) is a surjective **flat** morphism of schemes **of finite type** over \(\mathbb{K}\), with _reduced_ closed fibers and _reduced_ base \(X\). Then, \(X\times_{Y}X\) is reduced by Lemma C.17 below. Now, \(c\) is a closed embedding and a dominant morphism between reduced schemes, hence an isomorphism. This proves (1).
(2). Clearly, \(\pi:X\to Y\) is **fpqc**. By [76, Lemma 02VL] (resp. [76, Lemma 02L5]), a morphism being smooth (resp. affine) is fpqc local on the target. Then by the cartesian diagram in (1), \(\pi\) is smooth and affine, as \(p_{1}:X\times G\to X\) is.
(3). By (2), \(\pi:X\to Y\) is a _smooth covering_. By [76, Lemma 055V] (_slicing smooth morphisms and refining a smooth covering by an etale covering_), there exists a morphism \(s:\mathcal{V}\to X\) such that the composition \(\pi\circ s:\mathcal{V}\to X\to Y\) is an etale covering. Now, by (1), the base change of \(\pi:X\to Y\) along \(\pi\circ s:\mathcal{V}\xrightarrow{s}X\xrightarrow{\pi}Y\) is \(G\)-isomorphic to \(\mathcal{V}\times G\) over \(\mathcal{V}\). This gives a local trivialization of \(\pi:X\to Y\) in the etale topology. This finishes the proof.
**Lemma C.17** (Reducedness).: _If \(f:X\to Y\) is a surjective morphism of schemes **of finite type** over any field \(k\), and all closed fibers (i.e. \(X_{y}\), \(\forall\) closed point \(y\in Y\)) of \(f\) are **reduced**, then_
1. _Any fiber_ \(X_{y}\) _of_ \(f\) _is reduced (the point_ \(y\in Y\) _may not be closed)._
2. _If in addition_ \(Y\) _is reduced and_ \(f\) _is flat, then_ \(X\) _is also reduced._
**Remark C.18** (Jacobson schemes).: Let \(Y\) be a scheme of finite type over any field \(k\).
(1) Recall by [76, Definition 02J1] that, a point \(y\in Y\) is a **finite type point** if the canonical morphism \(\operatorname{Spec}\,k(y)\to Y\) is of finite type. Equivalently by [76, Lemma 01TA], \(y\) is a closed point in some affine open subset \(U=\operatorname{Spec}\,R\) of \(Y\), and the field extension \(k\hookrightarrow k(y)\) is finite.
(2) By [76, Lemma 02J6], \(Y\) is **Jacobson**: the closed points are dense in every closed subset [76, Definition 01P2]. Equivalently, every nonempty locally closed subset contains a closed point.
(3) Now by [76, Lemma 01TB], the closed points in \(Y\) are precisely the finite type points.
In particular, if \(k\) is an algebraically closed field, then the closed points of \(Y\) are the \(k\)-points, and every nonempty locally closed subset contains a closed (i.e. \(k\)-) point.
Proof of Lemma C.17.: (1). Take any point \(y\in Y\), define \(Y^{\prime}:=\overline{\{y\}}\hookrightarrow Y\) equipped with the reduced closed subscheme structure. Then \(Y^{\prime}\) is integral and \(y\) is the generic point of \(Y^{\prime}\). Let \(f^{\prime}:X^{\prime}=X\times_{Y}Y^{\prime}\to Y^{\prime}\) be the base change of \(f\) along \(Y^{\prime}\hookrightarrow Y\), which is still a surjective morphism of schemes of finite type over \(k\). By base change and our assumption, all the closed fibers of \(f^{\prime}\) are also reduced. Moreover, \(X^{\prime}_{y}=(f^{\prime})^{-1}(y)=X_{y}\).
If \(X^{\prime}_{y}\) is non-reduced, then by [76, Lemma 0575], there exists a nonempty open subset \(V\subset Y^{\prime}\) such that, for all \(v\in V\) the fiber \(X^{\prime}_{v}\) is non-reduced. However, by Remark C.18, \(Y\) is Jacobson10, hence \(V\) contains a closed point. This is a contradiction.
Footnote 10: It’s this step that the finite type assumption in the lemma becomes essential.
(2). This follows from (1) and [29, Cor.3.3.5] (alternatively, see [51, Thm.23.9, Cor.]):
_Let \(f:X\to Y\) be a flat morphism between two locally Noetherian schemes. If \(Y\) is reduced at the points of \(f(X)\), and \(f^{-1}(y)\) is a reduced \(k(y)\)-scheme, \(\forall\) point \(y\in f(X)\), then \(X\) is reduced. _
**Remark C.19**.: As in the proof of Proposition C.15 (3), a similar argument also shows:
If \(\pi:X\to Y\) is a morphism of \(\mathbb{K}\)-varieties and \(F\) is a \(\mathbb{K}\)-variety, then \(\pi\) is a fiber bundle with fiber \(F\) in the etale topology if and only if \(\pi\) is a fiber bundle with fiber \(F\) in the smooth topology.
The flatness condition in Proposition C.15 can be relaxed when the base is normal:
**Proposition C.20**.: _If \(G\) acts **freely** on a **pure dimensional** variety \(X\) over \(\mathbb{K}\) (of char. \(0\)), and \(\pi:X\to Y\) is an orbit map, with \(Y\) a **normal pure dimensional**\(\mathbb{K}\)-variety, then:_
1. \(\pi\) _is flat. So Proposition_ C.15 _applies:_ \(\pi\) _is smooth, affine, and a principal_ \(G\)_-bundle, etc._
2. \(X\) _is normal._
Proof.: (1). By Lemma C.21, \(\pi\) is flat, so Proposition C.15 applies.
(2). This follows from (1) and [76, Lemma 034F]: _a morphism being normal is local in the smooth topology_: If \(S^{\prime}\to S\) is a smooth morphism, then \(S\) is normal \(\Rightarrow\) so is \(S^{\prime}\). If \(S^{\prime}\to S\) is smooth surjective, then \(S^{\prime}\) is normal \(\Rightarrow\) so is \(S\).
**Lemma C.21** (**A variant of miracle flatness**. [69, Thm.3.3.27]).: _If \(R\to S\) is a local morphism of Noetherian local rings, \(R\) is an excellent normal local domain with perfect residue field, and the closed fiber is regular of dimension \(\dim S-\dim R\), then \(R\to S\) is faithfully flat._
As a partial converse to Lemma C.8 (1), we have
**Corollary C.22**.: _Let \(G\curvearrowright X\)**freely** with \(X\)**normal**. If \(\pi:X\to Y\) is a **categorical quotient** and an **orbit map** into a \(\mathbb{K}\)-variety, then \(Y\) is normal, and the results of Proposition C.20 hold._
Proof.: By Proposition C.4 (6), \(Y\) is normal. By [76, Lemma 033M], \(Y=\sqcup Y_{i}\) is a finite disjoint union of integral normal \(\mathbb{K}\)-varieties \(Y_{i}\). Let \(X_{i}:=\pi^{-1}(Y_{i})\), then \(X=\sqcup X_{i}\),with each \(X_{i}\subset X\) a
normal \(G\)-variety over \(\mathbb{K}\). Again by [76, Lemma 033M], \(X_{i}\) is a finite disjoint union of integral normal \(\mathbb{K}\)-varieties. In particular, the irreducible components of \(X_{i}\) are the connected components of \(X_{i}\), hence \(G\text{-}\text{Irr}(X_{i})=G\text{-}\text{Conn}(X_{i})\) by Corollary C.2. By Proposition C.4 (5), the restriction \(\pi_{i}:=\pi|_{X_{i}}:X_{i}\to Y_{i}\) is also a categorical quotient, and \(X_{i}\) is \(G\)-connected, equivalently, \(G\)-irreducible by above. This in particular implies that \(X_{i}\) is _pure dimensional_. Now, \(\pi_{i}:X_{i}\to Y_{i}\) satisfies the hypotheses of Proposition C.20. This suffices.
As a moral converse to Proposition C.15, we have
**Lemma C.23**.: _If \(\pi:X\to Y\) is a principal \(G\)-bundle in **fpqc topology**, with \(X,Y\)\(\mathbb{K}\)-varieties, then:_
1. \(\pi\) _is smooth and affine._
2. _We have a natural_ \(G\)_-isomorphism_ \(c:X\times G\xrightarrow{\simeq}X\times_{Y}X:(x,g)\mapsto(x,xg)\)_, i.e. the cartesian diagram (_C.2.1_) holds. In addition, the_ \(G\)_-action on_ \(X\) _is free._
3. \(\pi:X\to Y\) _is an orbit map and a principal_ \(G\)_-bundle in the etale topology._
Proof.: (1). The proof is similar to that of Proposition C.15 (2).
(2). As in the proof of Proposition C.15 (1), we get a \(G\)-morphism \(c:X\times G\to X\times_{Y}X:(x,g)\mapsto(x,xg)\) over \(X\), fitting into the analogous commutative diagram,
and a closed embedding \(X\times_{Y}X\hookrightarrow X\times X\) (\(Y\) is _separated_). It remains to show \(c\) is an isomorphism.
By our assumption and base change, \(p_{1}:X\times_{Y}X\to X\) is a principal \(G\)-bundle in the fpqc topology. So, there exists an fpqc covering \(\tau:\mathcal{V}\to X\) and a \(G\)-isomorphism \(\phi_{\tau}:\mathcal{V}\times_{Y}X\xrightarrow{\simeq}\mathcal{V}\times G\) over \(\mathcal{V}\). Writing \(c_{\tau}:\mathcal{V}\times G\to\mathcal{V}\times_{Y}X\) for the base change of \(c\) along \(\tau\), we obtain the following commutative diagram:
As a \(G\)-morphism over \(\mathcal{V}\), \(\widetilde{c}_{\tau}:=\phi_{\tau}\circ c_{\tau}:\mathcal{V}\times G\to\mathcal{ V}\times G\) is then a \(G\)-isomorphism over \(\mathcal{V}\). Thus, so is \(c_{\tau}\). Now, by [76, Lemma 02L4], _a morphism being an isomorphism is fpqc local on the target_. It follows that \(c:X\times G\to X\times_{Y}X\) is a \(G\)-isomorphism over \(X\). We're done.
(3). As \(\pi:X\to Y\) is smooth surjective, by base change, the closed fibers of \(\pi:X\to Y\) are the same as those of \(p_{1}:X\times_{Y}X\to X\), which are isomorphic to \(G\) by (2). Thus, \(\pi:X\to Y\) is an orbit map. Now, the result follows from Proposition C.15 (3).
### Associated fiber bundles
Recall **Convention**7. As promised in Remark C.16, the following proposition complements Proposition C.15, Proposition C.20, and Lemma C.23.
**Proposition C.24** (Associated fiber bundles).: _Let \(F\) be an **affine**\(H\)-variety over \(\mathbb{K}\), and \(\mathfrak{p}:P\to B\) be an (etale) principal \(H\)-bundle11. \(H\) acts on \(P\times F\) diagonally: \(h\cdot(a,z):=(ah^{-1},hz)\). Then:_
Footnote 11: By Proposition C.15, it suffices to assume: \(\mathfrak{p}\) is a flat orbit map of \(\mathbb{K}\)-varieties for a free action \(H\curvearrowright P\).
1. _The action_ \(H\curvearrowright P\times F\) _admits a_ geometric quotient _\(\pi:P\times F\to P\times^{H}F=(P\times F)/H\), with_ \(P\times^{H}F\) _a_ \(\mathbb{K}\)_-variety. In particular,_ \(\mathfrak{p}:P\to B\) _is a geometric quotient, with_ \(F=\operatorname{Spec}\,k\)_._
2. _The canonical map_ \(\pi:P\times F\to P\times^{H}F\) _is a principal_ \(H\)_-bundle._
3. _The induced map_ \(q:P\times^{H}F\to B:[a,z]\mapsto\mathfrak{p}(a)\) _is an (etale)_ fiber bundle with fiber_ \(F\)_._
4. _We have a fiber product diagram_
(C.3.1) \[\begin{CD}P\times F@>{\pi}>{}>P\times^{H}F\\ @V{p_{1}}V{}V@V{}V{q}V\\ P@>{\mathfrak{p}}>{}>B\end{CD}\]
_Here, \(p_{i}\) always stands for the projection to the \(i\)-th factor._
Proof.: By Lemma C.23, \(\mathfrak{p}\) is smooth, affine, an orbit map, and the action \(H\curvearrowright P\) is free. By gluing, it suffices to consider the case when \(B\) is affine.
**Step 1**: As \(B\) is affine, so are \(P\) and \(X=P\times F\). Let's define a candidate for \(Y=P\times^{H}F\).
**Note**: By [76, Lemma 02L5], a morphism being affine is fpqc local on the base. If \(Y\) exists, then by (2), \(q\) and hence \(Y\) are affine. So, \(Y=\operatorname{Spec}\,\mathcal{O}_{X}(X)^{H}\) (Lemma C.8.(1), Proposition C.4.(2)).
Thus, define \(Y:=\operatorname{Spec}\,\mathcal{O}_{X}(X)^{H}\), then we obtain a canonical \(H\)-invariant morphism
\[\pi:X=P\times F\to Y\quad\Leftrightarrow\quad\pi^{\#}:\mathcal{O}_{Y}(Y)= \mathcal{O}_{X}(X)^{H}\hookrightarrow\mathcal{O}_{X}(X).\]
and
\[q:Y=\operatorname{Spec}\,\mathcal{O}_{X}(X)^{H}\to B\quad\Leftrightarrow\quad q^{\#}:\mathcal{O}_{B}(B)\xrightarrow{(\mathfrak{p}\circ p_{1})^{\#}}\mathcal{O}_{X}(X)^{H}\subset\mathcal{O}_{X}(X),\]
where \((\mathfrak{p}\circ p_{1})^{\#}\) lands in the \(H\)-invariants because \(\mathfrak{p}\circ p_{1}\) is \(H\)-invariant.
As \(X\) is reduced, so is \(Y\). Now, \(Y\) is a reduced affine. In **Step 3**, we will show that \(Y\) is of finite type over \(\mathbb{K}\), i.e. \(\mathcal{O}_{Y}(Y)=\mathcal{O}_{X}(X)^{H}\) is finitely generated over \(\mathbb{K}\). Then, \(Y\) is indeed a \(\mathbb{K}\)-variety.
**Step 2**: As \(\mathfrak{p}:P\to B\) is a principal \(H\)-bundle, let \(\eta:U\to B\) be etale surjective with \(U\) affine such that, \(U\times_{B}P\) is \(H\)-isomorphic to \(U\times H\) over \(U\). So we get a commutative diagram:
(C.3.2) \[\begin{CD}U\times F\times H@>{(u,z,h)\mapsto(u,h,h^{-1}z)}>{}>U\times H\times F@>{\bar{\eta}\times\mathrm{id}_{F}}>{}>X=P\times F\\ @V{}V{}V@V{\mathsf{id}_{U}\times a_{F}}V{}V@V{}V{\pi}V\\ U\times F@=U\times F@>{\eta^{\prime}}>{}>Y=\operatorname{Spec}\,\mathcal{O}_{X}(X)^{H}\\ @V{}V{}V@V{p_{1}}V{}V@V{}V{q}V\\ U@=U@>{\eta}>{}>B\end{CD}\]
Here, \(a_{F}:H\times F\to F\) denotes the action map. By the left half diagram, \(\mathsf{id}_{U}\times a_{F}:U\times H\times F\to U\times F\) is a trivial principal \(H\)-bundle, so a geometric quotient (also a categorical quotient). By its universal property, we obtain a unique morphism \(\eta^{\prime}:U\times F\to Y\), equivalently,
\[(\eta^{\prime})^{\#}:\mathcal{O}_{Y}(Y)=\mathcal{O}_{X}(X)^{H}\to\mathcal{O}_{ X}(U\times H\times F)^{H}=\mathcal{O}_{U\times F}(U\times F).\]
Observe that the right half outer diagram of (C.3.2) is cartesian.
**Claim**: The lower right square is also cartesian, equivalently, we have
(C.3.3) \[\mathcal{O}(U)\otimes_{\mathcal{O}(B)}\mathcal{O}(X)^{H}=(\mathcal{O}(U) \otimes_{\mathcal{O}(B)}\mathcal{O}(X))^{H}=\mathcal{O}(U\times F).\]
As a consequence, every square in (C.3.2) is cartesian.
Proof of Claim.: It suffices to show the first equality. Denote \(R:=\mathcal{O}(B)\), \(S:=\mathcal{O}(U)\), \(T:=\mathcal{O}(X)\). By our assumption on \(U\), \(S\) is flat over \(R\). By composition, \(q\circ\pi=\mathfrak{p}\circ p_{1}:P\times F\xrightarrow{p_{1}}P\xrightarrow{ \mathfrak{p}}B\) is flat, i.e. \(T=\mathcal{O}(X)\) is flat over \(R=\mathcal{O}(B)\). We have the following short exact sequences
\[0 \to T^{H}\to T\xrightarrow{\delta=(1-h)_{h\in H}}\prod_{h\in H}T, \quad\delta_{h}:=1-h:x\mapsto x-hx,\] \[0 \to(S\otimes_{R}T)^{H}\to S\otimes_{R}T\xrightarrow{\delta=(1-h)_{h \in H}}\prod_{h\in H}S\otimes_{R}T.\]
By the exactness of \(S\otimes_{R}-\), we get a commutative diagram with exact solid rows:
Here, \(I_{1}:=\operatorname{Im}(\widetilde{\delta})\) and \(I_{2}:=\operatorname{Im}(\delta)\), and \(i_{1},i_{2},j\) are the induced maps. Clearly, \(i_{1},i_{2}\) are injective.
As \(T\) is flat over \(R\), by Lemma C.25 below, \(\phi_{S}\) is injective. Then so is \(j\) by the identity \(\phi_{S}\circ i_{1}=i_{2}\circ j\). Thus, we obtain the following commutative diagram with short exact rows
By the snake lemma, we see that \(\psi\) is an isomorphism. This finishes the proof of the claim.
**Step 3**: By [76, Lemma 02KZ], _a morphism being of finite type is fpqc local on the base_. So, \(q:Y\to B\) is of finite type by the cartesian diagram (C.3.2) in **Step 2**. Then by composition, \(Y=\operatorname{Spec}\mathcal{O}_{X}(X)^{H}\) itself is of finite type over \(\mathbb{K}\). In conclusion, \(Y\) is an affine \(\mathbb{K}\)-variety.
By base change, \(\eta^{\prime}\) in (C.3.2) is an etale covering for \(Y\). By the same cartesian diagram (C.3.2), \(q:Y\to B\) is a fiber bundle with fiber \(F\), and \(\pi:X=P\times F\to Y=P\times^{H}F\) is a principal \(H\)-bundle in the etale topology. This shows (2) and (3).
Let's also verify (4). By commutativity, we get a natural morphism \(\theta_{B}:X=P\times F\to Y\times_{B}P\). By [76, Lemma 02L4], _a morphism being an isomorphism is fpqc local on the target_. Then, by base change along the etale covering \(\eta:U\to B\) and the cartesian diagram (C.3.2), we see that \(\theta_{B}\) is an isomorphism. This shows (4).
**Step 4**: It remains to show that \(\pi:X=P\times F\to Y=P\times^{H}F\) is a geometric quotient.
By Lemma C.23 or check the diagram (C.3.2) again, \(\pi:X\to Y\) is \(H\)-invariant and surjective, smooth, affine, and an orbit map. In particular, \(\pi\) is _flat locally of finite presentation_, hence open by [76, Lemma 01UA]. Then, \(Y\) has the quotient topology defined by \(\pi\), and \(W=\pi^{-1}(\pi(W))\) for any \(H\)-invariant closed subset \(W\subset X\). We see that \(\pi\) satisfies Definition C.5 (1)-(3).
It suffices to check Definition C.5 (4) for \(\pi\). \(\pi^{\#}:\mathcal{O}_{Y}\to(\pi_{*}\mathcal{O}_{X})^{H}\) is already a morphism of sheaves of \(\mathcal{O}_{Y}\)-modules. It suffices to show that, for any open affine subset \(B^{\prime}\subset Y\), the natural map \(\pi^{\#}:\mathcal{O}_{Y}(B^{\prime})=\mathcal{O}(B^{\prime})\to\mathcal{O}_{ X}(\pi^{-1}(B^{\prime}))^{H}\) is an isomorphism.
Denote \(X^{\prime}=P^{\prime}:=\pi^{-1}(B^{\prime})\) and \(F^{\prime}:=\text{Spec }k\). So \(X^{\prime}=P^{\prime}\times F^{\prime}\) and \(\mathfrak{p}^{\prime}:=\pi|_{P^{\prime}}:P^{\prime}\to B^{\prime}\) is a principal \(H\)-bundle by base change. In particular, \(X^{\prime}=P^{\prime}\) is an affine \(\mathbb{K}\)-variety. Define \(Y^{\prime}:=\text{Spec }\mathcal{O}_{X^{\prime}}(X^{\prime})^{H}\). Then we obtain a canonical \(H\)-invariant morphism
\[\pi^{\prime}:X^{\prime}=P^{\prime}\times F^{\prime}\to Y^{\prime}\quad \Leftrightarrow\quad(\pi^{\prime})^{\#}:\mathcal{O}_{Y^{\prime}}(Y^{\prime})= \mathcal{O}_{X^{\prime}}(X^{\prime})^{H}\hookrightarrow\mathcal{O}_{X^{\prime }}(X^{\prime}).\]
and
\[q^{\prime}:Y^{\prime}=\text{Spec }\mathcal{O}_{X^{\prime}}(X^{\prime})^{H}\to B^{\prime}\quad\Leftrightarrow\quad(q^{\prime})^{\#}:\mathcal{O}_{B^{\prime}}(B^{\prime})\xrightarrow{(\mathfrak{p}^{\prime})^{\#}}\mathcal{O}_{X^{\prime}}(X^{\prime})^{H}\subset\mathcal{O}_{X^{\prime}}(X^{\prime}),\]
where \((\mathfrak{p}^{\prime})^{\#}\) lands in the \(H\)-invariants since \(\mathfrak{p}^{\prime}\) is \(H\)-invariant.
It suffices to show that the natural morphism \(q^{\prime}:Y^{\prime}\to B^{\prime}\) is an isomorphism.
As in **Step 1**, we see that \(Y^{\prime}\) is a reduced affine (so separated) scheme over \(\mathbb{K}\). As \(\mathfrak{p}^{\prime}:P^{\prime}\to B^{\prime}\) is a principal \(H\)-bundle, let \(\eta^{\prime}:U^{\prime}\to B^{\prime}\) be etale surjective with \(U^{\prime}\) affine such that, \(U^{\prime}\times_{B^{\prime}}P^{\prime}\) is \(H\)-isomorphic to \(U^{\prime}\times H\) over \(U^{\prime}\). As in **Step 2**, we get a commutative diagram:
(C.3.4) [the analogue of diagram (C.3.2), with \(U^{\prime}\), \(P^{\prime}\), \(F^{\prime}=\text{Spec }k\), \(Y^{\prime}\), \(B^{\prime}\) in place of \(U\), \(P\), \(F\), \(Y\), \(B\)]
Again, as in **Step 2**, we have \(\mathcal{O}(U^{\prime})\otimes_{\mathcal{O}(B^{\prime})}\mathcal{O}(X^{ \prime})^{H}=(\mathcal{O}(U^{\prime})\otimes_{\mathcal{O}(B^{\prime})}\mathcal{ O}(X^{\prime}))^{H}=\mathcal{O}(U^{\prime}\times F^{\prime})\), and (C.3.4) is cartesian. By [76, Lemma 02L4], _a morphism being an isomorphism is fpqc local on the target_. Thus, by the same cartesian (C.3.4), \(q^{\prime}\) is an isomorphism. Done.
**Lemma C.25**.: _Let \(R\) be a Noetherian ring and \(H\) be a set. Let \(S,T\) be two \(R\)-modules, and consider the natural morphism \(\phi_{S}:S\otimes_{R}\prod_{h\in H}T\to\prod_{h\in H}S\otimes_{R}T\). Then:_
1. _If_ \(S\) _is finitely generated (hence finitely presented) over_ \(R\)_, then_ \(\phi_{S}\) _is an isomorphism._
2. _If_ \(T\) _is flat over_ \(R\)_, then_ \(\phi_{S}\) _is injective._
Proof.: (1). We have a short exact sequence \(R^{J}\to R^{I}\to S\to 0\), for some finite sets \(I,J\). Observe that we have a commutative diagram with exact solid rows:
Here, \(K_{i}=\ker(d_{i})\), and \(p_{1}\), \(p_{2}\), \(k\) are the induced maps. So, \(p_{1}\), \(p_{2}\) are surjective, then so is \(k\) by the equality \(k\circ p_{1}=p_{2}\). Then we obtain the following commutative diagram with exact rows:
Applying the snake lemma, we see that \(\phi_{S}\) is an isomorphism.
(2). Let \(x\in\ker(\phi_{S})\), we may write \(x=\sum_{i=1}^{N}s_{i}\otimes_{R}(t_{i}^{h})_{h\in H}\), \(s_{i}\in S\), \((t_{i}^{h})_{h\in H}\in\prod_{h\in H}T\). Let \(M\) be the \(R\)-submodule of \(S\) generated by \(s_{1},\cdots,s_{N}\). We then obtain a short exact sequence \(0\to M\xrightarrow{i_{M}}S\). By assumption, the functor \(\prod_{h\in H}(-\otimes_{R}T)\) is exact, which then induces a short exact sequence
\[0\to\prod_{h\in H}M\otimes_{R}T\xrightarrow{\widetilde{i}_{M}}\prod_{h\in H}S \otimes_{R}T.\]
On the other hand, we have a commutative diagram, in which \(\phi_{M}\) is an isomorphism by (1):
Now, \(x\in M\otimes_{R}\prod_{h\in H}T\) and \(0=\phi_{S}\circ\widetilde{i}_{M}^{\prime}(x)=\widetilde{i}_{M}\circ\phi_{M}(x)\). Then \(x=0\) by above.
**Remark C.26**.: Proposition C.24 holds for **any \(H\)-variety \(F\) covered by open affine \(H\)-varieties**.
**Example C.27** (Homogeneous spaces. [6, II.Thm.6.8]).: Let \(H\subset G\) be a closed subgroup. Then the quotient \(G/H\) exists as a \(\mathbb{K}\)-variety, and \(q:G\to G/H\) is a geometric quotient and a principal \(H\)-bundle.
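Another standard instance of Proposition C.24 (recorded for orientation): if \(P\) is the frame bundle of a rank-\(r\) vector bundle \(E\) on \(B\) (a principal \(\mathrm{GL}_{r}\)-bundle) and \(F=\mathbb{A}^{r}\) with its standard \(\mathrm{GL}_{r}\)-action, then \(P\times^{\mathrm{GL}_{r}}\mathbb{A}^{r}\) is the total space of \(E\); more generally, taking \(F\) to be any finite-dimensional \(\mathrm{GL}_{r}\)-representation recovers the usual associated vector bundle construction (e.g. \(F=\wedge^{k}\mathbb{A}^{r}\) gives \(\wedge^{k}E\)).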
**Corollary C.28** (base change).: _If \(\mathfrak{p}:P\to B\) is a principal \(H\)-bundle, with \(P^{\prime}\subset P\) a locally closed \(H\)-subvariety, then \(B^{\prime}:=\mathfrak{p}(P^{\prime})\subset B\) is a locally closed subvariety, and we get a cartesian square_
_In particular, \(\mathfrak{p}^{\prime}:P^{\prime}\to B^{\prime}\) is a principal \(H\)-bundle as well as a geometric quotient._
Proof.: By the composition \(P^{\prime}\hookrightarrow\overline{P}^{\prime}\hookrightarrow P\), it suffices to consider the case when \(P^{\prime}\subset P\) is an open or a closed \(H\)-subvariety. By Proposition C.24, \(\mathfrak{p}:P\to B\) is a geometric quotient. Then as an \(H\)-invariant subset, \(P^{\prime}\subset P\) is open (resp. closed) if and only if \(B^{\prime}\subset B\) is open (resp. closed), and \(P^{\prime}=\mathfrak{p}^{-1}(B^{\prime})\). It remains to show that the obviously commutative diagram is cartesian.
The open case is trivial. For the closed case, take \(P^{\prime\prime}:=B^{\prime}\times_{B}P\) and \(\mathfrak{p}^{\prime\prime}:=\mathfrak{p}|_{P^{\prime\prime}}:P^{\prime\prime}\to B^{\prime}\) in the category of schemes. By base change, \(\mathfrak{p}^{\prime\prime}\) is a principal \(H\)-bundle. In particular, \(\mathfrak{p}^{\prime\prime}\) is flat, and its closed fibers are isomorphic to \(H\), hence reduced. Of course, \(B^{\prime}\) is reduced, then so is \(P^{\prime\prime}\) by Lemma C.17. On the closed subset \(P^{\prime}=\mathfrak{p}^{-1}(B^{\prime})\), there is a unique reduced closed subscheme structure. Thus, the natural morphism \(P^{\prime}\to P^{\prime\prime}\) is an isomorphism.
**Proposition C.29** (Reduction of principal bundles).: _Let \(H\subset G\) be a closed subgroup, \(X\) be a \(G\)-variety, and \(i:Z\hookrightarrow X\) be a closed \(H\)-subvariety. So \(H\curvearrowright G\times Z:h\cdot(g,z)=(gh^{-1},h\cdot z)\). If:_
1. \(\pi:X\to B\) _be a principal_ \(G\)_-bundle over a_ \(\mathbb{K}\)_-variety_ \(B\)_._
2. \(\widetilde{a}:G\times Z\to X:(g,z)\mapsto gz\) _is a principal_ \(H\)_-bundle._
_Then \(\mathfrak{r}:=\pi\circ i:Z\to B\) is a principal \(H\)-bundle, and we have a cartesian diagram:_
(C.4.1)
Proof.: Clearly, \(\mathfrak{r}:Z\to B\) is an \(H\)-invariant morphism. We have the following cartesian diagram:
Here, \(p_{i}\) denotes the projection to the \(i\)-th factor. \(a:G\times X\to X:(g,x)\mapsto gx\) is the action map. So, \(\widetilde{a}=a\circ(\mathsf{id}_{G}\times i)\). The isomorphism \(G\times X\xrightarrow{\simeq}X\times_{B}X\) follows from Lemma C.23. Then by assumption and base change, \(\mathfrak{r}:Z\to B\) is a principal \(H\)-bundle in the smooth topology, hence in the etale topology by Lemma C.23. It remains to show (C.4.1): the right square is cartesian by above; By Lemma C.23, so is the outer square. The rest is due to Proposition C.24.
**Remark C.30**.: In above, by Propositions C.15, C.20, the conditions (1)-(2) can be relaxed to:
\(\bullet\): The \(G\)-action on \(X\) is free.
\(\bullet\): \(\pi:X\to B\) and \(\widetilde{a}:G\times Z\to X\) are either _flat_ orbit maps, or orbit maps between _pure dimensional_ varieties with \(B\)_normal_.
**Proposition C.31** (Quotient of a principal bundle by a subgroup).: _Let \(H\subset G\) be a closed subgroup such that \(G/H\) is **affine**. Let \(\mathfrak{p}:P\to B=P/G\) be a principal \(G\)-bundle, then:_
1. _There exists a geometric quotient_ \(\mathfrak{p}_{H}:P\to P/H\)_, and_ \(\mathfrak{p}_{H}\) _is a principal_ \(H\)_-bundle._
2. _The natural map_ \(q_{H}:P/H\to B\) _is a fiber bundle with fiber_ \(G/H\) _in the etale topology._
3. _If in addition,_ \(H\triangleleft G\) _is a normal subgroup, then_ \(q_{H}:P/H\to B\) _is a principal_ \(G/H\)_-bundle._
Proof.: Let \(F:=G/H\) and \(X:=P\times F\). So, \(G\) acts diagonally on \(X\): \(g\cdot(p,zH):=(pg^{-1},gzH)\). As \(F=G/H\) is **affine**, by Proposition C.24, we conclude that \(G\curvearrowright X\) admits a geometric quotient \(\pi:X=P\times F\to Y:=P\times^{G}F\), which is a principal \(G\)-bundle, and the induced map \(q_{H}=q:Y=P\times^{G}F\to B\) is a fiber bundle with fiber \(F\) in the etale topology. Moreover, we obtain the following cartesian diagram
(1). By the left cartesian square, \(\mathfrak{p}_{H}:P\to Y=P\times^{G}G/H\) is a principal \(H\)-bundle. By Proposition C.24 (1), \(\mathfrak{p}_{H}\) is also a geometric quotient. So we can write \(P/H=Y=P\times^{G}G/H\).
(2). By the right cartesian square above, \(q_{H}:P/H=Y\to B\) is a fiber bundle with fiber \(G/H\) in the smooth topology, hence in the etale topology by Remark C.19.
(3). The action map \(a:P\times G\to P\) induces an action \(P/H\times G/H\to P/H:(pH,gH)\mapsto pHgH=pgH\). Then by the right cartesian square above \(q_{H}:Y=P/H\to B\) is a principal \(G/H\)-bundle in the smooth topology, hence in the etale topology by Lemma C.23.
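For a concrete instance of Proposition C.31 (an illustration, not taken from the text): let \(G=\mathrm{GL}_{r}\) and \(H=\mathrm{SL}_{r}\triangleleft G\), so that \(G/H\cong\mathbb{G}_{m}\) is affine. If \(P\) is the frame bundle of a rank-\(r\) vector bundle \(E\) on \(B\), then \(P/\mathrm{SL}_{r}\to B\) is the principal \(\mathbb{G}_{m}\)-bundle of the line bundle \(\det E\).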
**Proposition C.32**.: _Let \(G\) be a **unipotent** algebraic group and \(B\) be an **affine** variety over \(\mathbb{K}\). Then any principal \(G\)-bundle \(\mathfrak{p}:P\to B\) is trivial._
**Note**: \(G\) is connected: \(\pi_{0}(G)=G/G^{0}\) is a finite unipotent algebraic group, which must be trivial.
Proof.: First, assume \(G\) is commutative. Then, \(G=G_{a}^{m}\) for some \(m\geq 0\), as its (abelian) Lie algebra is a \(\mathbb{K}\)-vector space with trivial Lie bracket. Now, the principal \(G\)-bundles over \(B\) are classified by \(H_{\text{\'{e}t}}^{1}(B,G_{a}^{m})\) (see [76, Lemma 03F7]). By [76, Proposition 03DW] and the affine vanishing property, \(H_{\text{\'{e}t}}^{1}(B,G_{a}^{m})\cong H^{1}(B,\mathcal{O}_{B}^{m})=0\). Thus, \(\mathfrak{p}\) is a trivial principal \(G\)-bundle.
In general, we prove by induction on \(\dim G\). Say, \(n:=\dim G>0\). Let \(H:=Z(G)\) be the center of \(G\). Then, \(H\) is a commutative unipotent algebraic group of positive dimension, and
\(\overline{G}:=G/H\) is a unipotent algebraic group of dimension \(<n\). By Proposition C.31, there exists a \(\mathbb{K}\)-variety \(\overline{P}:=P/H\) such that, \(\mathfrak{p}_{H}:P\rightarrow\overline{P}\) is a principal \(H\)-bundle, and \(q_{H}:\overline{P}\to B\) is a principal \(\overline{G}\)-bundle. By induction, \(q_{H}\) is trivial, hence admits a section \(s:B\rightarrow\overline{P}\). We then obtain the following commutative diagram in which the square is cartesian:
By base change, \(\widetilde{p}_{H}:s^{-1}P\to B\) is a principal \(H\)-bundle. As \(H\) is a commutative unipotent algebraic group, \(\widetilde{p}_{H}\) is trivial by above, hence admits a section \(s^{\prime}:B\to s^{-1}P\). Then, \(\widetilde{s}\circ s^{\prime}:B\to P\) defines a section of \(\mathfrak{p}:P\to B\), where \(\widetilde{s}:s^{-1}P=B\times_{\overline{P}}P\to P\) denotes the projection from the cartesian square. This implies that \(\mathfrak{p}\) is a trivial principal \(G\)-bundle.
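The affineness of \(B\) in Proposition C.32 is essential (a standard cautionary example, included for orientation): if \(B\) is a smooth projective curve of genus \(g\geq 1\), then \(H^{1}_{\text{\'{e}t}}(B,G_{a})\cong H^{1}(B,\mathcal{O}_{B})\neq 0\), so \(B\) carries non-trivial principal \(G_{a}\)-bundles.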
|
2309.11494 | Investigating the Atmospheric Mass Loss of the Kepler-105 Planets
Straddling the Radius Gap | An intriguing pattern among exoplanets is the lack of detected planets
between approximately $1.5$ R$_\oplus$ and $2.0$ R$_\oplus$. One proposed
explanation for this "radius gap" is the photoevaporation of planetary
atmospheres, a theory that can be tested by studying individual planetary
systems. Kepler-105 is an ideal system for such testing due to the ordering and
sizes of its planets. Kepler-105 is a sun-like star that hosts two planets
straddling the radius gap in a rare architecture with the larger planet closer
to the host star ($R_b = 2.53\pm0.07$ R$_\oplus$, $P_b = 5.41$ days, $R_c =
1.44\pm0.04$ R$_\oplus$, $P_c = 7.13$ days). If photoevaporation sculpted the
atmospheres of these planets, then Kepler-105b would need to be much more
massive than Kepler-105c to retain its atmosphere, given its closer proximity
to the host star. To test this hypothesis, we simultaneously analyzed radial
velocities (RVs) and transit timing variations (TTVs) of the Kepler-105 system,
measuring disparate masses of $M_b = 10.8\pm2.3$ M$_\oplus$ ($ \rho_b =
0.97\pm0.22$ g cm$^{-3}$) and $M_c = 5.6\pm1.2$ M$_\oplus $ ($\rho_c =
2.64\pm0.61$ g cm$^{-3}$). Based on these masses, the difference in gas
envelope content of the Kepler-105 planets could be entirely due to
photoevaporation (in 76\% of scenarios), although other mechanisms like
core-powered mass loss could have played a role for some planet albedos. | Aaron Householder, Lauren M. Weiss, James E. Owen, Howard Isaacson, Andrew W. Howard, Daniel Fabrycky, Leslie A. Rogers, Hilke E. Schlichting, Benjamin J. Fulton, Erik A. Petigura, Steven Giacalone, Joseph M. Akana Murphy, Corey Beard, Ashley Chontos, Fei Dai, Judah Van Zandt, Jack Lubin, Malena Rice, Alex S. Polanski, Paul Dalba, Sarah Blunt, Emma V. Turtelboom, Ryan Rubenzahl, Casey Brinkman | 2023-09-20T17:50:16Z | http://arxiv.org/abs/2309.11494v2 | # Investigating the Atmospheric Mass Loss of the Kepler-105 Planets Straddling the Radius Gap
###### Abstract
An intriguing pattern among exoplanets is the lack of detected planets between approximately 1.5 R\({}_{\oplus}\) and 2.0 R\({}_{\oplus}\). One proposed explanation for this "radius gap" is the photoevaporation of planetary atmospheres, a theory that can be tested by studying individual planetary systems. Kepler-105 is an ideal system for such testing due to the ordering and sizes of its planets. Kepler-105 is a sun-like star that hosts two planets straddling the radius gap in a rare architecture with the larger planet closer to the host star (\(R_{b}=2.53\pm 0.07\) R\({}_{\oplus}\), \(P_{b}=5.41\) days, \(R_{c}=1.44\pm 0.04\) R\({}_{\oplus}\), \(P_{c}=7.13\) days). If photoevaporation sculpted the atmospheres of these planets, then Kepler-105b would need to be much more massive than Kepler-105c to retain its atmosphere, given its closer proximity to the host star. To test this hypothesis, we simultaneously analyzed radial velocities (RVs) and transit timing variations (TTVs) of the Kepler-105 system, measuring disparate masses of \(M_{b}=10.8\pm 2.3\) M\({}_{\oplus}\) (\(\rho_{b}=3.68\pm 0.84\) g cm\({}^{-3}\)) and \(M_{c}=5.6\pm 1.2\) M\({}_{\oplus}\) (\(\rho_{c}=10.4\pm 2.39\) g cm\({}^{-3}\)). Based on these masses, the difference in gas envelope content of the Kepler-105 planets could be entirely due to photoevaporation (in 76% of scenarios), although other mechanisms like core-powered mass loss could have played a role for some planet albedos.
## 1 Introduction
In one of the most significant exoplanet discoveries in recent years, Fulton et al. (2017) identified a gap in the occurrence rate of exoplanets between approximately 1.5 R\({}_{\oplus}\) and 2.0 R\({}_{\oplus}\). Various theories have been proposed to explain this "radius gap," two of which are particularly prominent: core-powered mass loss (Ginzburg et al., 2018; Gupta and Schlichting, 2019) and photoevaporation (Owen and Wu, 2017). As planets form and accrete gas and dust from the protoplanetary disk, they can become surrounded by a gaseous envelope. However, this primordial envelope can be removed. Core-powered mass loss facilitates the loss of planetary atmospheres due to the cooling luminosity of a planet's core. X-ray and ultraviolet (XUV) radiation from the host star can also drive atmospheric mass loss via photoevaporation, where the XUV radiation ionizes and heats up the gas in the planetary atmosphere, causing it to escape into space. Both of these processes can cause planets to lose a substantial amount of their gas envelopes, leading to a significant reduction in their overall radii. Theoretical models suggest that planets within the radius gap (1.5 R\({}_{\oplus}-\)2.0 R\({}_{\oplus}\)) lose their gas envelopes on short timescales, leading to a reduction in their radii, whereas planets larger than the gap have much longer timescales for atmospheric mass-loss (Owen and Wu, 2013; Lopez and Fortney, 2013; Owen and Wu, 2017; Mordasini, 2020). Furthermore, planets smaller than 1.5 R\({}_{\oplus}\) that are close to their stars are typically thought to be rocky, in which case they have no atmospheres left to lose (Weiss and Marcy, 2014; Rogers, 2015).
The explanation of the radius gap is sometimes framed as a binary choice between core-powered mass loss and photoevaporation. However, such a simplistic interpretation likely does not encompass the full complexity of atmospheric mass-loss for the planets in this regime. A more nuanced approach to understanding the radius gap likely involves a combination of both core-powered mass loss and photoevaporation, each playing a role in sculpting planetary architectures. Therefore, instead of seeking a definitive answer to which theory explains the radius gap, it is more prudent to explore how each mechanism contributes to shaping different types of planetary architectures. Studying individual planetary systems offers a unique advantage in this context. By focusing on planets that share the same host star properties (i.e. mass, radius, temperature, age, XUV radiation history, and metallicity), we can eliminate a multitude of confounding factors that limit broader population studies. Thus, individual planetary systems provide us with a more robust testbed for examining theories such as photoevaporation and core-powered mass loss.
Kepler-105 is a system that serves as an excellent natural laboratory for investigating the role of photoevaporation in sculpting planetary architectures. In the case of photoevaporation-driven atmospheric mass loss, the time-integrated XUV radiation that a planet receives affects how much atmosphere the planet loses. Thus, the best systems for testing photoevaporation have an unusual architecture in which a gas-rich sub-Neptune is interior to a rocky planet (Owen and Campos Estrada, 2020). Kepler-105 has two confirmed planets that follow this architecture: a sub-Neptune (\(R_{b}=2.53\pm 0.07R_{\oplus}\)) with a period of 5.41 days and a super-Earth (\(R_{c}=1.44\pm 0.04R_{\oplus}\)) with a period of 7.13 days (Fulton and Petigura, 2018). Based on previous mass measurements of planets with similar sizes, we expect Kepler-105c to have a predominantly rocky composition, while Kepler-105b likely has a significant gaseous envelope. If their compositions are typical for their sizes, it is unclear how the inner planet, which likely received more XUV flux from the host star, managed to retain a significant gaseous envelope while the smaller outer planet did not.
A similar problem was posed for Kepler-36, a benchmark system that played an important role in developing radius valley predictions and photoevaporation models (Carter et al., 2012; Lopez and Fortney, 2013; Owen and Campos Estrada, 2020). Kepler-36 hosts two confirmed planets near the 6:7 mean motion resonance (MMR), with a Neptune-sized planet exterior to a super-Earth. The Neptune-sized planet was found to possess a much more massive core, thereby making it more likely to retain a gaseous envelope (Lopez and Fortney, 2013). These findings prompted us to explore a similar scenario for the Kepler-105 system. If Kepler-105b and Kepler-105c formed in situ, then Kepler-105b would receive \(\sim 44\%\) more cumulative XUV radiation than Kepler-105c. Thus, similar to the Kepler-36 system, the mass of Kepler-105b must be substantially larger than that of Kepler-105c for the latter to have lost its envelope due to photoevaporation while the former retained it.
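The quoted \(\sim 44\%\) figure is consistent with a simple flux-scaling estimate (our arithmetic, assuming in-situ formation so that the time-integrated XUV flux scales as \(a^{-2}\propto P^{-4/3}\) by Kepler's Third Law): \((P_{c}/P_{b})^{4/3}=(7.13/5.41)^{4/3}\approx 1.44\), i.e. Kepler-105b intercepts roughly 44% more XUV fluence than Kepler-105c.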
To test this hypothesis, we first analyzed TTVs (Section 2) and RVs (Section 3) to measure the masses of both planets. We then jointly modeled the TTVs and RVs to refine these mass measurements in Section 4 and explored potential planetary compositions (Section 5). Section 6 compares our measured masses with numerical predictions from EvaMass(Owen and Campos Estrada, 2020). This allows us to assess the viability of photoevaporation to explain the observed difference in gas content between Kepler-105b and Kepler-105c. We also
explore core-powered mass loss as an alternative mechanism to explain the different gas compositions of these planets. Finally, in Section 7, we provide a summary of our findings and outline potential avenues for future research related to Kepler-105.
## 2 Mass from TTVs
TTVs are variations in the orbital period of a planet caused by the gravitational influence of other objects in the same system such as planets or moons. Since the amplitude of the TTV of a planet depends on the mass of the companion (Lithwick et al., 2012), we can use TTVs to measure the masses of Kepler-105b and Kepler-105c, which are interior to the 4:3 MMR (Fulton and Petigura, 2018). We analyzed 246 transit times for Kepler-105b and 179 transit times for Kepler-105c from Q1-Q17 short and long cadence data from the _Kepler Space Telescope_(_priv. communication Jason Rowe, 2022_, based on Rowe et al., 2015), shown in Figure 1 with a linear ephemeris subtracted. To model the TTVs, we used TTVFaster(Agol and Deck, 2016), which uses perturbation theory to model all terms to first-order in eccentricity. This semi-analytic approach has been demonstrated to produce accurate results for planets that are low-mass, low-eccentricity and not too deep within resonance (Agol and Deck, 2016), such as Kepler-105b and Kepler-105c. To find the best fit to the transit times of Kepler-105b and Kepler-105c using TTVFaster, we maximized the following log-likelihood function:
\[\log(\mathcal{L})=-0.5\sum_{i}\frac{\left(\mathrm{TT}_{i}-\mathrm{TT}_{m,i} \right)^{2}}{\sigma_{\mathrm{TT},i}^{2}} \tag{1}\]
where \(\mathrm{TT}_{i}\) and \(\mathrm{TT}_{m,i}\) are the observed and model-predicted transit times for the \(i\)-th observation, respectively, and \(\sigma_{\mathrm{TT},i}\) is the observational uncertainty for that transit. \(\sum_{i}\) indicates that we sum over all observed transits for both Kepler-105b and Kepler-105c.
To explore various solutions in our parameter space we used emcee(Foreman-Mackey et al., 2013): a Python package that runs a Markov Chain Monte Carlo (MCMC) algorithm with an affine-invariant ensemble sampler (Goodman and Weare, 2010). We varied the masses (\(M\)), orbital periods (\(P\)), \(\sqrt{e}\) cos\(\omega_{p}\), \(\sqrt{e}\) sin\(\omega_{p}\), and the initial times of transit (\(t_{0}\)). We re-parameterized \(e\) and \(\omega\) in this way to mitigate against an artificial build up of eccentricities near zero due to the boundary condition at \(e=0\)(Eastman et al., 2013). We allowed stellar mass to vary as well, using a Gaussian prior of \(0.99\pm 0.03M_{\odot}\) based on previous stellar characterization (Fulton and Petigura, 2018). Since the planets in Kepler-105 have very close orbits, we used a Hill stability prior to prevent orbit crossing:
\[\begin{split}& a_{c}\left(1-e_{c}\right)>a_{b}\ \ (1+e_{b})+\\ &\max\left(a_{b}\ \left(1-e_{b}\right)\ \left(\frac{M_{b}}{3M_{*}} \right)^{\frac{1}{3}},\\ & a_{c}\ \left(1-e_{c}\right)\ \left(\frac{M_{c}}{3M_{*}} \right)^{\frac{1}{3}}\right)\end{split} \tag{2}\]
where \(a\) represents the semi-major axes for Kepler-105b and Kepler-105c (denoted with subscripts b and c) and \(M_{*}\) represents the stellar mass. Gaussian priors were placed on \(P\) and \(t_{0}\) and uniform priors were placed on \(e\), \(\omega\) and \(M\) (see Table 1). We determined \(a\) in Equation 2 using Kepler's Third Law from \(P\) and \(M_{*}\). We also fixed the orbital inclinations of the planets in an edge-on configuration (\(i=90^{\circ}\)). This is because TTVFaster assumes coplanar orbits for each planet since the amplitude of TTVs scales with mutual inclination to second-order (Lithwick et al., 2012). With this set-up, we ran the MCMC until convergence, discarding the first \(10^{5}\) steps as burn-in. To check for convergence, we used the potential scale reduction factor (PSRF, Gelman and Rubin, 1992), requiring each parameter in our model to have a PSRF less than 1.01.

Figure 1: Observed transit times minus a linear ephemeris (black) for Kepler-105b (top) and Kepler-105c (bottom). The plot also includes the best fit TTVFaster (Agol and Deck, 2016) solution to the TTVs (dark green) as well as the 1\(\sigma\) confidence intervals from our model (light green). Our TTV model is strongly preferred to a linear ephemeris (\(\Delta AIC=-27\), where \(AIC\) is the Akaike Information Criterion, Akaike, 1974), indicating the presence of dynamical perturbations affecting the transit times. Based on the TTVs alone, we detected Kepler-105c with 4\(\sigma\) confidence (\(5.9\pm 1.4\) M\({}_{\oplus}\)) and Kepler-105b with 2\(\sigma\) confidence (\(9.3^{+4.9}_{-4.6}\) M\({}_{\oplus}\)). Furthermore, we ran two additional MCMC runs where the mass of Kepler-105c was constrained to be either \(\leq 3.1\) M\({}_{\oplus}\) or \(\geq 8.7\) M\({}_{\oplus}\). Our best-fit TTVFaster analytic solution was strongly preferred over these MCMC models (\(\Delta AIC=-11\) for both cases).
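As an illustration of the fitting set-up just described (a minimal sketch, not the code used in this analysis), the snippet below wires the log-likelihood of Equation 1 and the Hill-stability criterion of Equation 2 into emcee. The TTV model is left as a user-supplied callable `transit_model`, standing in for TTVFaster, whose exact Python interface we do not reproduce here; the parameter packing and the simple positivity/eccentricity bounds are likewise illustrative assumptions (the Gaussian and uniform priors of Table 1 are omitted).

```python
import numpy as np
import emcee  # affine-invariant ensemble sampler (Foreman-Mackey et al. 2013)


def hill_stable(a_b, e_b, m_b, a_c, e_c, m_c, m_star):
    """Equation 2: the outer perihelion must clear the inner aphelion by at
    least the larger of the two Hill radii (masses in the same units as m_star)."""
    r_hill = max(a_b * (1 - e_b) * (m_b / (3 * m_star)) ** (1 / 3),
                 a_c * (1 - e_c) * (m_c / (3 * m_star)) ** (1 / 3))
    return a_c * (1 - e_c) > a_b * (1 + e_b) + r_hill


def make_log_prob(transit_model, t_obs, t_err):
    """Build the log-probability of Equation 1 plus the Hill-stability prior.

    `transit_model(theta)` must return model transit times for both planets,
    concatenated in the same order as `t_obs`; in the analysis described in
    the text this role is played by TTVFaster.
    """
    def log_prob(theta):
        # Illustrative packing: (m_b, m_c, a_b, a_c, e_b, e_c, m_star, ...).
        m_b, m_c, a_b, a_c, e_b, e_c, m_star = theta[:7]
        if min(m_b, m_c, m_star) <= 0 or not (0 <= e_b < 1 and 0 <= e_c < 1):
            return -np.inf
        if not hill_stable(a_b, e_b, m_b, a_c, e_c, m_c, m_star):
            return -np.inf
        resid = t_obs - transit_model(theta)
        return -0.5 * np.sum(resid ** 2 / t_err ** 2)   # Equation 1
    return log_prob


def run_fit(transit_model, theta0, t_obs, t_err, nwalkers=64, nsteps=100_000):
    """Sample with emcee; walkers start in a small ball around theta0."""
    ndim = len(theta0)
    p0 = theta0 + 1e-4 * np.random.default_rng(0).normal(size=(nwalkers, ndim))
    sampler = emcee.EnsembleSampler(nwalkers, ndim,
                                    make_log_prob(transit_model, t_obs, t_err))
    sampler.run_mcmc(p0, nsteps, progress=True)
    return sampler
```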
Our MCMC analysis of the TTVs yielded a mass of \(9.3^{+4.9}_{-4.6}\) M\({}_{\oplus}\) for Kepler-105b and a mass of \(5.9\pm 1.4\) M\({}_{\oplus}\) for Kepler-105c. These findings place strong constraints on the mass of Kepler-105c (\(4\sigma\)), but not Kepler-105b (only \(2\sigma\)). This outcome is consistent with previous TTV analyses of the system (Hadden and Lithwick, 2017; Jontof-Hutter et al., 2016). However, this may seem surprising, given that Kepler-105b is more massive and should theoretically induce larger and more easily detectable TTVs than Kepler-105c. The explanation for this is two-fold. Firstly, the mass error for Kepler-105c scales with the transit midpoint error of the larger Kepler-105b. Secondly, Kepler-105b produces a deeper transit, making it easier to precisely measure the midpoint of each transit. Thus, the higher precision in transit midpoint measurements of Kepler-105b leads to better constraints on the mass of Kepler-105c. By the same logic, the mass of Kepler-105b is not as well determined from TTVs due to the smaller transit depth of Kepler-105c, which leads to larger transit midpoint uncertainties for Kepler-105c.
## 3 Mass from RVs
The RV method is a commonly used exoplanet detection technique that involves measuring the Doppler shift in the emitted light of a star caused by the gravitational influence of orbiting planets. In this paper, we measured 92 RV observations of Kepler-105 (Table 2) with the High Resolution Echelle Spectrometer (HIRES) on Keck I (Vogt et al., 1994). The observations of Kepler-105 were performed using the C2 Decker with typical exposure times of 1800 seconds (median S/N at 5500 Å = 89 pix\({}^{-1}\)). The data was processed through the standard HIRES RV data reduction pipeline (Howard et al., 2010).
### 3.1 Simple Two-Planet Model
Given the challenges in accurately deducing planetary properties from RV data, which is often complicated by the presence of stellar activity, we faced a decision on how to model the RVs. For instance, we could include a Gaussian process (GP) into our RV model to help model the correlated noise from stellar activity (e.g. Haywood et al., 2014; Rajpaul et al., 2015; Grunblatt et al., 2015). However, Kepler-105 is a low activity star (\(\log R^{\prime}_{\rm HK}=-5.19\)), so incorporating a correlated noise model may introduce unnecessary free parameters (e.g. Blunt et al., 2023). Thus, we chose to model the RVs twice, both with and without a GP, to determine which model produced a more reliable fit to the data. For our first approach, we implemented a simple two-planet Keplerian model using the Radial Velocity Modeling Toolkit RadVel(Fulton et al., 2018). We allowed \(P\), \(\sqrt{e}\) cos\(\omega_{*}\), \(\sqrt{e}\) sin\(\omega_{*}\), \(t_{0}\) and the RV semi-amplitude (\(K\)) to vary for both Kepler-105b and Kepler-105c. Similar to our TTV model, we used Gaussian priors on \(P\) and \(t_{0}\) and uniform priors on \(e\), \(\omega\), and \(K\) (Table 1). Additionally, we include two nuisance parameters, jitter (\(\sigma_{jit}\)) and gamma (\(\gamma\)), to account for additional astronomical and instrumental noise and the RV offset, respectively. Lastly, we used a Hill stability prior to prevent orbit crossing (Equation 2).
### 3.2 GP model
We also extended the simple two-planet model in 3.1 by including a GP to model correlated noise in the RVs caused by stellar activity. To fully specify a GP, one must define a covariance function, often referred to as a "kernel" (Rasmussen and Williams, 2006). Given the quasi-periodic nature of stellar activity (Rajpaul et al., 2015; Nicholson and Aigrain, 2022), we used a quasi-periodic kernel with RadVel:
\[C_{ij}=A^{2}\exp\left[-\frac{\left|t_{i}-t_{j}\right|^{2}}{\lambda_{e}^{2}}-\frac{\sin^{2}\left(\pi\left|t_{i}-t_{j}\right|/P_{rot}\right)}{2\lambda_{p}^{2}}\right] \tag{3}\]
Here, \(i\) and \(j\) are indexes of the covariance matrix \(C\) and \(A\), \(\lambda_{e}\), \(P_{rot}\), and \(\lambda_{p}\) are the hyperparameters of our quasi-periodic kernel, representing the GP amplitude, the exponential decay timescale (a proxy for the lifetime of star spots), the stellar rotation period, and the harmonic complexity, respectively. While GPs provide the flexibility to fit complex data sets, they are notorious for overfitting. To mitigate this issue, we imposed physically motivated priors on the GP hyperparameters:
\[\begin{split} 0\leq A\leq\sigma_{RV}\\ 0.5P_{rot}\leq\lambda_{e}\leq 10P_{rot}\\ 0\leq P_{rot}\leq 50.5\ \text{days}\\ 0.5\leq\lambda_{p}\leq 5\end{split} \tag{4}\]
For \(A\), this broad prior prevents the overall GP amplitude from exceeding the standard deviation of the RVs. The prior on \(P_{rot}\) is set between 0 and the \(3\sigma\) upper bound reported in McQuillan et al. (2013). For the priors on \(\lambda_{e}\) and \(\lambda_{p}\), we follow the recommendations of Rajpaul (2017).
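To make the covariance structure concrete, the kernel of Equation 3 can be evaluated directly with NumPy. The sketch below builds the covariance matrix for a set of observation times; the hyperparameter values are illustrative placeholders (chosen to satisfy the priors in Equation 4), not fitted quantities.

```python
import numpy as np

def quasi_periodic_kernel(t, A, lam_e, P_rot, lam_p):
    """Covariance matrix C_ij of Equation 3 (as written) for observation times t."""
    dt = np.abs(t[:, None] - t[None, :])                  # |t_i - t_j|
    decay = dt**2 / lam_e**2                              # spot-lifetime (exponential decay) term
    periodic = np.sin(dt / P_rot)**2 / (2.0 * lam_p**2)   # rotation (periodic) term
    return A**2 * np.exp(-(decay + periodic))

# Illustrative hyperparameters consistent with the priors in Equation 4
t_obs = np.linspace(0.0, 500.0, 92)   # 92 epochs, mimicking the size of the HIRES data set
C = quasi_periodic_kernel(t_obs, A=2.0, lam_e=100.0, P_rot=25.0, lam_p=1.0)
print(C.shape)                        # (92, 92)
```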
### Three-Planet Models
In addition to the two-planet models described in Sections 3.1 and 3.2, we also ran _two_ different three-planet RV models. For these models, we adopt the same approaches as Section 3.1 (non-GP) and Section 3.2 (GP), with the addition of the third candidate planet. Specifically, we included the 0.55 R\({}_{\oplus}\) candidate planet at 3.43 days, with a Gaussian prior on \(P\) and \(t_{0}\) based on Thompson et al. (2018).
### Model Comparison
Based on these set-ups, we ran the MCMC code embedded within RadVel for these models to maximize the following log-likelihood function:
\[\log(\mathcal{L})=-0.5\bigg{(}\sum_{k}\frac{\left(\mathrm{RV}_{k}-\mathrm{RV} _{m,k}(t_{k})\right)^{2}}{\sigma_{\mathrm{RV},k}^{2}}+\log(\sigma_{\mathrm{RV },k}^{2})\bigg{)} \tag{5}\]
where \(\mathrm{RV}_{k}\) is the \(k\)-th observed RV measurement, and \(\mathrm{RV}_{m,k}(t_{k})\) is the Keplerian-modeled RV at time \(t_{k}\). \(\sigma_{RV,k}\) is the uncertainty for the \(k\)-th RV measurement, which is defined as the observational uncertainty (\(\sigma_{k}\)) added together in quadrature with a jitter (\(\sigma_{jit}\)) term: \(\sqrt{(\sigma_{k})^{2}+(\sigma_{jit})^{2}}\). We ran this MCMC algorithm with 50 walkers until convergence. The initial 10% of steps were discarded as burn-in. To check for convergence, we once again required the PSRF to be less than 1.01 for each parameter.
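Equation 5 translates directly into code. In the sketch below, rv_model stands in for the Keplerian model evaluated at the observation epochs, and the jitter term is added in quadrature exactly as described above.

```python
import numpy as np

def rv_log_likelihood(rv_obs, rv_model, sigma_obs, sigma_jit):
    """Log-likelihood of Equation 5, with jitter added in quadrature."""
    sigma2 = sigma_obs**2 + sigma_jit**2    # sigma_RV,k^2 = sigma_k^2 + sigma_jit^2
    resid = rv_obs - rv_model
    return -0.5 * np.sum(resid**2 / sigma2 + np.log(sigma2))
```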
Our two-planet non-GP RadVel fit to the RV data yielded a mass of \(10.7\pm 2.8\) M\({}_{\oplus}\) for Kepler-105b. We were unable to strongly detect Kepler-105c in the RVs, so we only placed a 95% upper limit of 4.6 M\({}_{\oplus}\) for this planet. In comparison, our two planet fit with a GP yielded masses of \(M_{b}=10.2\pm 2.6\) M\({}_{\oplus}\) and \(M_{c}<3.8\) M\({}_{\oplus}\) (95% upper limit). Thus, both models produced similar masses for Kepler-105b and Kepler-105c. Given that the results were nearly identical, we determined that the added complexity of a GP is not justified for Kepler-105 since the simpler model is strongly favored (\(\Delta AIC=-32\)). Furthermore, the two different three planet models did not detect the 0.55 R\({}_{\oplus}\) candidate planet, placing 95% upper limits of 4.6 (non-GP) and 4.8 M\({}_{\oplus}\) (GP). These models also failed to detect an RV signal for Kepler-105c, only placing 95% upper limits of 4.6 (non-GP) and 4.7 M\({}_{\oplus}\) (GP). Since these models are strongly disfavored to our two-planet non-GP fit (\(\Delta AIC>38\) for both models) and no additional planetary signals were detected in the Lomb-Scargle periodogram of the RVs (Lomb, 1976; Scargle, 1982), we conclude that the simple two-planet model is the best fit to the RVs of Kepler-105.
### Why didn't we detect Kepler-105c in the RVs?
In an effort to understand why Kepler-105c was not detected in the RVs, we conducted an injection-recovery test where we injected a synthetic planetary signal into the RVs based on the posteriors of Kepler-105c from the TTVs. Even with the injected signal, we still do not strongly detect Kepler-105c in the RV data. This is somewhat surprising: Kepler-105c should generate a signal of 2 m s\({}^{-1}\) based on its TTV mass (\(5.9\pm 1.4\) M\({}_{\oplus}\)). According to Equation 7 of Howard and Fulton (2016), a 2 m s\({}^{-1}\) signal is expected to be detectable with 6\(\sigma\) confidence in a sample of 92 RVs. However, it is important to note that the Howard and Fulton (2016) relation is primarily derived from the RVs of giant planets. This could limit its relevance to smaller planets like Kepler-105b and Kepler-105c, which may explain why we only detected the more massive Kepler-105b with 4\(\sigma\) confidence and did not strongly detect Kepler-105c. Other factors, such as the presence of additional planets
or unmitigated stellar activity, may also contribute to our failure to confidently detect Kepler-105c in the RVs. Furthermore, discrepancies between RV and TTV mass estimates are not unprecedented and have been a subject of ongoing study for many years (Weiss and Marcy, 2014; Steffen, 2016; Mills and Mazeh, 2017; Otegi et al., 2020). As a result, this discrepancy between the RV- and TTV-determined mass of Kepler-105c merits further scrutiny, both to understand the specific case of Kepler-105c as well as the broader issue of reconciling measured RV and TTV masses.
Figure 2: Phase-folded RVs (red) and the RadVel(Fulton et al., 2018) fit to the RVs (blue) for Kepler-105b (left) and Kepler-105c (right). The RadVel two-planet fit without a GP yielded a 4\(\sigma\) detection of Kepler-105b (\(10.7\pm 2.8\) M\({}_{\oplus}\)) but did not strongly detect Kepler-105c, despite imposing a planetary signal at its known orbital period. Thus, we can only place a 95% upper limit of 4.6 M\({}_{\oplus}\) on the mass of Kepler-105c based on only the RVs.
## 4 Joint Modeling of RVs and TTVs
In the previous sections, we analyzed the RVs and TTVs separately. The TTVs placed strong constraints on the mass of Kepler-105c but not Kepler-105b, while the opposite was true for the RVs. To obtain precise mass measurements for both planets, we combined these two methods using a joint RV and TTV model. To do this, we used TTVFaster and RadVel to maximize the log-likelihood function
\[\begin{split}\log(\mathcal{L})=-0.5\bigg{(}\sum_{i}\frac{\left( \mathrm{TT}_{i}-\mathrm{TT}_{m,i}\right)^{2}}{\sigma_{\mathrm{TT},i}^{2}}+\\ \sum_{k}\frac{\left(\mathrm{RV}_{k}-\mathrm{RV}_{m,k}(t_{k}) \right)^{2}}{\sigma_{\mathrm{RV},k}^{2}}+\log(\sigma_{\mathrm{RV},k}^{2}) \bigg{)}\end{split} \tag{6}\]
With this set-up, we used the python package emcee to vary the masses, orbital periods, \(\sqrt{e}\cos\omega_{*}\), \(\sqrt{e}\sin\omega_{*}\), and initial transit times for both planets, as well as the nuisance parameters \(\gamma\) and \(\sigma\). It is important to note that TTVFaster and RadVel use different conventions where the ascending node and value of \(\omega_{*}\) differ by \(180^{\circ}\)(Householder and Weiss, 2022). Here, we adopt the RadVel convention, which uses a \(\hat{Z}\) unit vector that points away from the observer and defines the ascending node as the point where the planet pierces the sky plane moving away from the observer. This is opposite to the coordinate system used in TTVFaster, where \(\hat{Z}\) points toward the observer and the planet approaches the observer at the ascending node. To account for this difference, we added \(180^{\circ}\) to the value of \(\omega_{*}\) in the TTVFaster component of our model (note that TTVFaster specifically models \(\omega_{p}\), but it is straightforward to convert between \(\omega_{p}\) and \(\omega_{*}\): \(\omega_{*}=\omega_{p}+180^{\circ}\)). Similarly to our other models, we implemented uniform priors on \(e\), \(M\), \(\omega\) and Gaussian priors on \(p\) and \(t_{0}\). We also used a Gaussian prior on stellar mass as well as a Hill stability prior. We ran this MCMC for \(8\times 10^{5}\) steps, discarding the first \(10^{5}\) steps as burn-in. To ensure that our chains converged, we required the PSRF to be less than 1.01 for each parameter in our model. All of the chains met this PSRF threshold. This MCMC model yielded masses of \(10.8\pm 2.3\) M\({}_{\oplus}\) and \(5.6\pm 1.2\) M\({}_{\oplus}\) for Kepler-105b and Kepler-105c, respectively (Table 1).
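The joint objective of Equation 6 is the sum of the TTV chi-square term and the RV term of Equation 5, and the convention difference between the two packages amounts to a 180\({}^{\circ}\) shift in \(\omega\). A schematic version, with tt_model and rv_model standing in for the TTVFaster and RadVel evaluations, is:

```python
import numpy as np

def joint_log_likelihood(tt_obs, tt_model, sigma_tt,
                         rv_obs, rv_model, sigma_obs, sigma_jit):
    """Combined TTV + RV log-likelihood of Equation 6."""
    tt_term = np.sum((tt_obs - tt_model)**2 / sigma_tt**2)
    sigma2 = sigma_obs**2 + sigma_jit**2
    rv_term = np.sum((rv_obs - rv_model)**2 / sigma2 + np.log(sigma2))
    return -0.5 * (tt_term + rv_term)

def omega_star_from_omega_p(omega_p_deg):
    """Convert TTVFaster's omega_p to the RadVel convention: omega_* = omega_p + 180 deg."""
    return (omega_p_deg + 180.0) % 360.0
```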
## 5 Planet Interiors
Using the radius measurements from Fulton and Petigura (2018), we can now plot Kepler-105b and Kepler-105c on a mass-radius diagram (Figure 3). While the masses and radii alone cannot reveal the composition of the planets, this figure does provide some insight into their potential compositions. The mass and radius of Kepler-105c are consistent with a rocky planet without a significant gaseous envelope. Kepler-105b, on the other hand, lies above the 100% rocky composition line, suggesting that Kepler-105b has a substantial volatile envelope. Assuming that Kepler-105b has an Earth-like core mass fraction of 67.5% MgSiO\({}_{3}\) and 32.5% Fe (Seager et al., 2007), the envelope mass fraction of H\({}_{2}\)-He of Kepler-105b would be between 0.5%-2% (Lopez and Fortney, 2014).
Another possible composition that has been suggested for planets of similar masses and sizes to Kepler-105b is that of a "water world:" a rocky planet with hundreds or thousands of kilometers of water, although the existence of such planets remains a topic of debate (Bean et al., 2021; Neil et al., 2022; Rogers et al., 2023). If Kepler-105b is a water-world, it likely would have formed beyond the H\({}_{2}\)O snow line and migrated inward to its present orbit via Type I migration. This scenario is supported by the fact that Kepler-105b and Kepler-105c are near the 4:3 mean motion resonance, as Type I migration is a common mechanism for the formation of planets in mean motion resonances (Kley and Nelson, 2012). However, this would require Kepler-105c to form beyond the snow line, which is inconsistent with its high density (\(\rho_{c}=10.4\pm 2.39\) g cm\({}^{-3}\)). It may have been possible for Kepler-105c to form in-situ and for Kepler-105b to migrate from beyond the snow-line, but this would necessitate a fast orbit crossing between the planets. Given these challenges, it seems more plausible that Kepler-105b has a H\({}_{2}\)-He dominate envelope rather than being a water-world, but it is difficult to make a definitive assertion without better observational evidence. Unfortunately, it will be difficult to determine the precise composition of the atmosphere of Kepler-105b, even with potential follow-up observations. Its atmospheric characterization with the _James Webb Space Telescope (JWST)_ is not feasible (see Transit Spectroscopy Metric (TSM), Kempton et al., 2018), primarily due to the faint magnitude of Kepler-105 (J \(\approx\) 11.8).
## 6 Scenarios for Atmospheric Mass Loss
### Photoevaporation
Recently, there has been a growing interest in testing photoevaporation models in systems like Kepler-105 where rocky super-Earths are exterior to gaseous sub-Neptunes (Owen and Campos Estrada, 2020). Such planetary architectures offer a unique testbed for photoevaporation because the sub-Neptune retained its gaseous envelope despite being subject to more cumulative XUV flux (assuming in situ formation). With the masses from our joint RV and TTV analysis, we can assess if the Kepler-105 planets have a formation history that is consistent with photoevaporation. In this context, "consistent with photoevaporation" means that the measured masses and radii of the Kepler-105 planets support the hypothesis that Kepler-105c lost its gaseous envelope due to photoevaporation, while Kepler-105b retained a significant gaseous envelope. To evaluate the validity of this hypothesis, we used the publicly available code EvapMass(Owen and Campos Estrada, 2020).
We provide a brief outline of the EvapMass numerical procedure, which is more fully described in Owen and Campos Estrada (2020). EvapMass assumes that both Kepler-105b and Kepler-105c formed in-situ with H\({}_{2}\)-He envelopes and that Kepler-105c was _just_ able to lose its envelope entirely due to photoevaporation, maximizing its atmospheric mass-loss time-scale. The atmospheric loss time scale, \(t_{m}\), is given by \(t_{m}=M_{env}/\dot{M}_{env}\) where the equation for the rate of atmospheric mass loss (\(\dot{M}_{env}\)) is expressed as follows (Owen and Campos Estrada, 2020):
\[\dot{M}_{env}=\frac{\eta L_{\rm XUV}R_{p}^{3}}{4GM_{p}a_{p}^{2}} \tag{7}\]
The variables \(\eta\), \(L_{\rm XUV}\), and \(G\), represent the mass loss efficiency, XUV luminosity of the host star, and the gravitational constant, respectively. Since the Kepler-105 planets have \(\sim 99\%\) of their mass in solid materials (section 4), we can assume that \(M_{p}\approx M_{core}\), so the equation for \(t_{m}\) can be written as:
\[t_{m}=\frac{4GM_{p}^{2}a_{p}^{2}X}{\eta L_{\rm XUV}R_{p}^{3}} \tag{8}\]
where \(X\) is the envelope mass fraction: \(X\equiv M_{env}/M_{core}\). For Kepler-105c, our goal is to find the envelope mass at which the mass-loss timescale is maximized. Since we assumed that Kepler-105c formed in situ and that \(M_{p}\approx M_{core}\), \(L_{\rm XUV}\), \(M_{p}\), and \(a_{p}\) are independent of the envelope mass. Thus, we can maximize the following:
\[t_{m}\propto\frac{X}{\eta R_{p}^{3}} \tag{9}\]
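The analytic scalings above are a one-line computation; the sketch below evaluates Equation 8 in SI units together with the proxy of Equation 9 (the full EvapMass calculation additionally solves for \(X\) and \(\eta\) numerically).

```python
G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def mass_loss_timescale(M_p, a_p, X, eta, L_xuv, R_p):
    """Equation 8: t_m = 4 G M_p^2 a_p^2 X / (eta L_XUV R_p^3), with inputs in SI units."""
    return 4.0 * G * M_p**2 * a_p**2 * X / (eta * L_xuv * R_p**3)

def timescale_proxy(X, eta, R_p):
    """Equation 9: the quantity maximised over the envelope mass fraction of Kepler-105c."""
    return X / (eta * R_p**3)
```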
EvapMass solves for these dependencies numerically (i.e. computing \(X\) as a function of \(M\), \(a\), and \(R\) involves numerically evaluating several integrals) and then compares the mass-loss timescales for the two planets (Owen & Campos Estrada, 2020).
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Parameter & RV-only & TTV-only & Joint RV and TTV & Prior \\ \hline \(P_{b}\) (days) & \(5.412207\pm 0.000002\) & \(5.41220324\pm 0.0000003\) & \(5.4122034\pm 0.0000004\) & Norm(5.412207130,0.000002488) \\ \hline \(e_{b}\) & \(0.05\pm 0.04\) & \(0.01\pm 0.01\) & \(0.02\pm 0.02\) & Unif(0,1) \\ \hline \(\omega_{b}\) (\({}^{\circ}\)) & \(358.6^{+92.8}_{-152.0}\) & \(61.2^{+60.0}_{-142.5}\) & \(225.0^{+68.4}_{-137.1}\) & Unif(0,360) \\ \hline \(t_{0,b}\) (BJD) & \(2454955.3185\pm 0.0006\) & \(2454955.3186\pm 0.0003\) & \(2454955.3186\pm 0.0002\) & Norm(2454955.318609,0.000536) \\ \hline \(M_{b}\) (M\({}_{\oplus}\)) & \(10.7\pm 2.8\) & \(9.3^{+4.9}_{-4.6}\) & \(10.8\pm 2.3\) & Unif(0,50) \\ \hline \(R_{b}\) (R\({}_{\oplus}\)) & \(2.53\pm 0.07\) & \(2.53\pm 0.07\) & \(2.53\pm 0.07\) & - \\ \hline \(\rho_{b}\) (g cm\({}^{-3}\)) & \(3.65\pm 1.01\) & \(3.17^{+1.69}_{-1.59}\) & \(3.68\pm 0.84\) & - \\ \hline \(P_{c}\) (days) & \(7.12594\pm 0.00001\) & \(7.12592\pm 0.00001\) & \(7.12592\pm 0.00001\) & Norm(7.125945910,0.000012500) \\ \hline \(e_{c}\) & \(0.04\pm 0.04\) & \(0.02\pm 0.02\) & \(0.02\pm 0.02\) & Unif(0,1) \\ \hline \(\omega_{c}\) (\({}^{\circ}\)) & \(310.9^{+156.4}_{-112.4}\) & \(124.3^{+156.1}_{-67.4}\) & \(298.9^{+135.7}_{-59.5}\) & Unif(0,360) \\ \hline \(t_{0,c}\) (BJD) & \(2454957.753\pm 0.0001\) & \(2454957.754\pm 0.0003\) & \(2454957.753\pm 0.0001\) & Norm(2454957.753432,0.001687) \\ \hline \(M_{c}\) (M\({}_{\oplus}\)) & \(4.6\) (95\% Upper Limit) & \(5.9\pm 1.4\) & \(5.6\pm 1.2\) & Unif(0,50) \\ \hline \(R_{c}\) (R\({}_{\oplus}\)) & \(1.44\pm 0.04\) & \(1.44\pm 0.04\) & \(1.44\pm 0.04\) & - \\ \hline \(\rho_{c}\) (g cm\({}^{-3}\)) & \(2.31^{+3.54}_{-1.97}\) & \(10.9\pm 2.75\) & \(10.4\pm 2.39\) & - \\ \hline \end{tabular}
\end{table}
Table 1: Dynamical parameters of Kepler-105b and Kepler-105c from an RV-only fit (RadVel), a TTV-only fit (TTVFaster), and a simultaneous fit to RVs and TTVs (RadVel and TTVFaster). Planet parameters were derived based on the stellar parameters reported in Fulton and Petigura (2018). We also report the radii of both planets from Fulton and Petigura (2018) to compute their densities, although the planet radii were not directly measured in this paper. It is also worth noting that in TTVFaster, \(180^{\circ}\) was added to the argument of periastron of the planets (\(\omega_{b}\) and \(\omega_{c}\)) to address the inconsistency in the modeling of \(\omega\) between RadVel and TTVFaster.
Since Kepler-105b has a significant gaseous envelope, we require that its atmospheric mass loss timescale is greater than or equal to the maximum atmospheric mass loss timescale of the rocky Kepler-105c. This approach effectively minimizes the mass-loss timescale for Kepler-105b, providing us with a mass lower limit for Kepler-105b that is consistent with photoevaporation. If Kepler-105b had a mass below this value, its mass-loss timescale would be too short to sustain its current gaseous envelope, given that Kepler-105c was stripped of its envelope due to photoevaporation.
EvapMass was specifically designed to compute a minimum mass without measured masses and is often used to report a 95% limit that the planet mass must be bigger than to be consistent with photoevaporation (Owen & Campos Estrada, 2020). However, with the availability of our measured posterior distributions of the Kepler-105 planets, we can adopt a slightly different approach. By randomly selecting samples from these posterior distributions, EvapMass can compute a minimum mass for each sample. We can compare each minimum mass with the corresponding measured mass to determine the percentage of samples where the measured mass of Kepler-105b is greater than its EvapMass predicted minimum mass. A higher percentage of cases where the measured mass is greater than the EvapMass predicted minimum mass indicates a higher consistency with photoevaporation.
Figure 3: The mass-radius relationship for transiting exoplanets with combined fractional mass and radius uncertainties less than 50% (plotted in grey as a function of fractional uncertainty), based on the NASA Exoplanet Archive (Akeson et al., 2013, queried on September 22, 2022). We also depict the radius gap (Fulton et al., 2017) from 1.5 R\({}_{\oplus}-2.0\) R\({}_{\oplus}\) (light grey) as well as the planetary compositions (light blue, dashed) from Zeng et al. (2019). Additionally, we include the 1\(\sigma\) radius (Fulton & Petigura, 2018) and mass measurements of Kepler-105b (blue, \(2.53\pm 0.07R_{\oplus}\), \(10.8\pm 2.3M_{\oplus}\)) and Kepler-105c (brown, \(1.44\pm 0.04R_{\oplus}\), \(5.6\pm 1.2M_{\oplus}\)). We also show the EvapMass (Owen & Campos Estrada, 2020) predicted 1\(\sigma\) minimum mass distribution for Kepler-105b that is consistent with photoevaporation (red, \(8.7\pm 2.4M_{\oplus}\)). If the measured mass of Kepler-105b is greater than the EvapMass prediction, then the difference in gas content between Kepler-105b and Kepler-105c can be explained by photoevaporation. Comparing these distributions, we conclude that the difference in gas envelopes of the Kepler-105 planets is entirely attributable to photoevaporation in 76% of scenarios (i.e. \(M_{b,\rm measured}>M_{b,\rm predicted}\)).
Since the EvaPMass computation depends on both the mass and radius of Kepler-105b and Kepler-105c as well as the properties of their host star (i.e. temperature, mass, radius, age), we evaluated 50,000 randomly drawn samples from these measured distributions. We adopted a value of 1.8 R\({}_{\oplus}\) for the location of the radius gap, a value that is generally accepted for FGK stars, although it can be lower (\(\sim 1.5\) R\({}_{\oplus}\)) for M-dwarfs (Van Eylen et al., 2018, 2021). This selection means that the entire radius distribution of Kepler-105c falls below the radius gap. Our calculations for \(\eta\) are based on the hydrodynamical models from Owen and Jackson (2012). Using Fulton and Petigura (2018) for our host star properties and planet radii, combined with our measured mass distribution of Kepler-105c, we computed a minimum mass distribution of \(8.7\pm 2.4\) M\({}_{\oplus}\) for Kepler-105b, assuming Kepler-105c was stripped of its envelope due to photoevaporation (Figure 3). For each of our 50,000 samples, we compared the measured mass sample of Kepler-105b with the predicted mass sample of Kepler-105b using EvaPMass. Our analysis revealed that 76% of the compared samples were consistent with photoevaporation (i.e. \(M_{b,\mathrm{measured}}>M_{b,\mathrm{predicted}}\)). Thus, we conclude that it is probable that the difference in gas content of the Kepler-105 planets is consistent with a history of photoevaporation.
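The sample-by-sample comparison reduces to counting how often the measured mass exceeds the predicted minimum mass. The sketch below reproduces this logic with Gaussian stand-ins built from the summary values quoted above; the analysis itself uses the actual posterior and EvapMass samples rather than Gaussian approximations.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 50_000

# Gaussian stand-ins for the joint-fit posterior and the EvapMass minimum-mass
# distribution of Kepler-105b, built from the summary values quoted in the text (M_Earth).
m_measured = rng.normal(10.8, 2.3, n)
m_predicted = rng.normal(8.7, 2.4, n)

fraction = np.mean(m_measured > m_predicted)
print(f"Fraction consistent with photoevaporation: {fraction:.2f}")   # close to the 76% above
```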
For the 24% of cases that are inconsistent with photoevaporation (i.e. \(M_{b,\mathrm{measured}}<M_{b,\mathrm{predicted}}\)), we find that these scenarios are also inconsistent with core-powered mass loss in 99% of cases (details of this procedure can be found in Section 6.2). In these scenarios, stochastic events such as giant impacts (Inamdar and Schlichting, 2015; Bonomo et al., 2019) could explain the differing envelope fractions of the planets. It is also possible that the Kepler-105 planets underwent migration, in which case their present gas envelopes need not be consistent with in-situ mass loss predictions.
### Core-Powered Mass Loss
Since we tested the viability of photoevaporation to explain the difference in gas content between Kepler-105b and Kepler-105c, it is natural to explore another frequently cited mechanism for the radius gap: core-powered mass loss. Core-powered mass loss relies on the internal heat from a planet's core and the thermal radiation from the host star to drive the evaporation of its atmosphere (Ginzburg et al., 2018). Rather than conduct a full numerical procedure like we did for photoevaporation, we follow the simpler approach of Cloutier et al. (2020). Specifically, we required the timescale for core-powered atmospheric mass loss for Kepler-105b to be greater than or equal to that of Kepler-105c. This condition provides the following constraint on planetary parameters (derived in Appendix B of Cloutier et al., 2020):
\[\begin{split} 1&\leq\left(\frac{M_{\mathrm{core},b}}{M_{ \mathrm{core},c}}\right)\left(\frac{T_{\mathrm{eq},b}}{T_{\mathrm{eq},c}}\right) ^{-3/2}\\ &\times\exp\left[c^{\prime}\left(\frac{M_{\mathrm{core},b}}{T_{ \mathrm{eq},b}R_{b}}-\frac{M_{\mathrm{core},c}}{T_{\mathrm{eq},c}R_{c}}\right) \right]\end{split} \tag{10}\]
where \(M_{\mathrm{core}}\approx M_{p}\), \(T_{\mathrm{eq}}\) is the equilibrium temperature of the planet and \(c^{\prime}\) is a constant: \(\sim 10^{4}\) K R\({}_{\odot}\) M\({}_{\odot}\)\({}^{-1}\). We use host star properties to compute \(T_{\mathrm{eq}}\) for both planets:
\[T_{\mathrm{eq}}=T_{\mathrm{eff}}\sqrt{\frac{R_{*}}{2a}}(1-A_{B})^{1/4} \tag{11}\]
where \(T_{\mathrm{eff}}\) and \(R_{*}\) are the temperature and radius of the star, and \(A_{B}\) is the Bond albedo. Assuming Gaussian distributions for \(T_{\mathrm{eff}}\) and \(R_{*}\) (\(5933\pm 60\) K, \(1.03\pm 0.02\) R\({}_{\odot}\)) based on Fulton and Petigura (2018) and choosing a Bond albedo of 0.3 for both planets, we compute \(T_{\mathrm{eq},b}=1076\pm 15\) K and \(T_{\mathrm{eq},c}=981\pm 13\) K. When we apply Equation 10 to these equilibrium temperatures and the mass and radius distributions of Kepler-105b and Kepler-105c, we find that these planets satisfy the condition for core-powered mass loss (Equation 10) in 48% of scenarios.
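Both ingredients of this test are short formulas. The sketch below evaluates the equilibrium temperature of Equation 11 and the right-hand side of Equation 10; the semi-major axes and the units of the constant \(c^{\prime}\) are not quoted here, so all inputs are left as parameters that must be supplied in mutually consistent units. Applied to posterior samples, the fraction of draws with a right-hand side of at least 1 gives the 48% figure quoted above.

```python
import numpy as np

def t_eq(T_eff, R_star, a, A_B):
    """Equation 11: equilibrium temperature (R_star and a must share the same units)."""
    return T_eff * np.sqrt(R_star / (2.0 * a)) * (1.0 - A_B)**0.25

def core_powered_rhs(M_b, M_c, Teq_b, Teq_c, R_b, R_c, c_prime):
    """Right-hand side of Equation 10; the condition is satisfied when this is >= 1."""
    ratio = (M_b / M_c) * (Teq_b / Teq_c)**(-1.5)
    return ratio * np.exp(c_prime * (M_b / (Teq_b * R_b) - M_c / (Teq_c * R_c)))
```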
### Varying Bond Albedo
While our analysis suggests that core-powered mass loss is a plausible explanation for the atmospheric differences in the Kepler-105 planets, it is important to consider the role of Bond albedo in our computation. For instance, if we use a Bond albedo for Kepler-105c that is similar to Venus (\(A_{b}=0.8\)) instead of 0.3, \(T_{\mathrm{eq},c}=717\pm 10\) K. With this single alteration, the consistency of these planets with core-powered mass loss decreases from 48% to 12%. Conversely, if we instead change the Bond albedo of Kepler-105b to 0.8, the consistency increases to 86%. Thus, while our analysis suggests that core-powered mass loss could potentially explain the differences in gas content between Kepler-105b and Kepler-105c, better measurements of \(T_{\mathrm{eq}}\) or \(A_{b}\) will be necessary for a more definitive assessment.
We also explored the implications of varying the Bond albedo on the photoevaporation models in Section 6.1. EvapMass assumes a Bond albedo of 0 for both planets when computing their equilibrium temperature. We found that setting both \(A_{b}\) and \(A_{c}\) to 0.3 for the calculation of \(T_{eq}\) resulted in 83% consistency with photoevaporation. Altering these values to \(A_{b}=0.3\), \(A_{c}=0.8\) and \(A_{b}=0.8\), \(A_{c}=0.3\) led to slightly different consistencies of 81% and 93%, respectively. Since the equilibrium temperature is essentially a proxy for stellar flux in EvapMass, we can also modify the atmosphere's response by varying the opacity, \(\kappa\), given by the following:
\[\kappa=\kappa_{0}P^{\alpha}T^{\beta} \tag{12}\]
Here, \(\kappa_{0}\) is the opacity constant and \(\alpha\) and \(\beta\) describe the pressure (P) and temperature (T) dependence of opacity. By default, EvaPMass sets \(\kappa_{0}=10^{-7.32}\), \(\alpha=0.68\), and \(\beta=0.45\), where pressure and temperature are expressed in cgs units (Rogers & Seager, 2010). We varied \(\kappa_{0}\) by an order of magnitude (i.e. \(\kappa_{0}=10^{-6.32},\kappa_{0}=10^{-8.32}\)). For these scenarios, the consistency with the photoevaporation model remained 76%. Thus, the photoevaporation models are less sensitive to changes in Bond albedo compared to core-powered mass loss models. This result aligns with findings from Owen & Jackson (2012), which demonstrate that photoevaporation mass-loss rates are not highly sensitive to variations in the underlying planetary atmospheric temperature.
Interestingly, systems like Kepler-105 present an opportunity to indirectly constrain the Bond albedo for sub-Neptunes and super-Earths. By jointly modeling photoevaporation and core-powered mass loss in systems like Kepler-105, it may be possible to identify the range of Bond albedos that would allow Kepler-105b to sustain its envelope given that Kepler-105c lost its envelope. This approach could provide us with some of the first Bond albedo constraints for smaller planets, since Bond albedo can typically only be constrained for larger planets with detectable secondary eclipses.
## 7 Summary and Discussion
In this paper, we investigated the unusual architecture of the Kepler-105 planetary system, with two planets straddling the exoplanet radius gap in an ideal way for testing photoevaporation. By combining precise radial velocity measurements from HIRES on Keck I with transit timing variations acquired from the _Kepler Space Telescope_ during Q1-Q17, we measured masses of 10.8 \(\pm\) 2.3 M\({}_{\oplus}\) (\(\rho_{b}=3.68\pm 0.84\) g cm\({}^{-3}\)) and \(5.6\pm 1.2\) M\({}_{\oplus}\) (\(\rho_{c}=10.4\pm 2.39\) g cm\({}^{-3}\)) for Kepler-105b and Kepler-105c, respectively. Our numerical mass predictions with EvaPMass suggest that in 76% of scenarios, the difference in gas envelope content between Kepler-105b and Kepler-105c can be explained by photoevaporation (i.e. \(M_{\rm b,measured}>M_{b,predicted}\)). However, we acknowledge that alternative mechanisms, such as core-powered mass loss, cannot be definitively ruled out at this stage and warrant further investigation. Furthermore, our mass measurements reveal a \(\sim 2\sigma\) mass difference between the cores of Kepler-105b and Kepler-105c. While photoevaporation sculpts the gas envelopes of exoplanets, it does not generate differences in the mass of solid materials, leading to an unresolved question: what mechanism produced the difference in solid mass between Kepler-105b and Kepler-105c? Further investigations into the formation and evolution of Kepler-105b and Kepler-105c will be required to determine the underlying mechanisms responsible for the origin of these planets.
## Acknowledgements
We thank the anonymous referee whose insights and suggestions significantly enhanced the quality of this manuscript.
This material is based on work supported by the National Science Foundation REU Program (grant no. 2050527). AH thanks Beatriz Campos Estrada, Greg Laughlin, Andrew W. Mayo, and the Astroweiss group for useful conversations and feedback. AH also thanks Jason Rowe for generously sharing the transit times used in this paper. We are also grateful to Miki Nakajima for her contributions to the proposal that enabled the acquisition of the RV data presented in this work.
L.M.W. acknowledges support from the NASA-Keck Key Strategic Mission Support program (grant no. 80NSSC19K1475) and the NASA Exoplanet Research Program (grant no. 80NSSC23K0269). R.A.R. is supported by the NSF Graduate Research Fellowship, grant No. DGE 1745301. J.M.A.M. acknowledges support from the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1842400 and from NASA'S Interdisciplinary Consortia for Astrobiology Research (NNH19ZDA001N-ICAR) under award number 19-ICAR19_2-0041. This work was supported by a NASA Keck PI Data Award, administered by the NASA Exoplanet Science Institute. Data presented herein were obtained at the W. M. Keck Observatory from telescope time allocated to (1) the University of Hawai'i, and (2) the National Aeronautics and Space Administration through the agency's scientific partnership with the California Institute of Technology and the University of California. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation.
The authors also wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to
have the opportunity to conduct observations from this mountain.
Facilities: _Kepler_, Keck-HIRES
Software: RadVel(Fulton et al., 2018), TTVFaster(Agol & Deck, 2016), TTVFast(Deck et al., 2014), NumPy(van der Walt et al., 2011), Matplotlib(Hunter, 2007),
Pandas(McKinney, 2010). |
2309.12823 | Group divisible designs with block size 4 and group sizes 4 and 7 | In this paper, we consider the existence of group divisible designs (GDDs)
with block size $4$ and group sizes $4$ and $7$. We show that there exists a
4-GDD of type $4^t 7^s$ for all but a finite specified set of feasible values
for $(t, s)$. | R. Julian R. Abel, Thomas Britz, Yudhistira A. Bunjamin, Diana Combe | 2023-09-22T12:25:56Z | http://arxiv.org/abs/2309.12823v4 | # Group divisible designs with block size \(4\) and group sizes \(4\) and \(7\)
R. Julian R. Abel
School of Mathematics and Statistics
UNSW Sydney
NSW 2052, Australia
[email protected]
Thomas Britz
School of Mathematics and Statistics
UNSW Sydney
NSW 2052, Australia
[email protected]
Yudhistira A. Bunjamin
School of Mathematics and Statistics
UNSW Sydney
NSW 2052, Australia
[email protected]
Diana Combe
School of Mathematics and Statistics
UNSW Sydney
NSW 2052, Australia
[email protected]
**Abstract:** In this paper, we consider the existence of group divisible designs (GDDs) with block size \(4\) and group sizes \(4\) and \(7\). We show that there exists a \(4\)-GDD of type \(4^{t}7^{s}\) for all but a finite specified set of feasible values for \((t,s)\).
**Keywords:** group divisible design (GDD), feasible group type.
**Mathematics Subject Classification:** 05B05
## 1 Introduction
Let \(X\) be a finite set of _points_ with a partition into parts which we call _groups_. Any \(k\)-element subset of \(X\) is called a _block_. A collection of _blocks_ is a _group divisible design_ with block size \(k\), or \(k\)-GDD if (i) no two points from the same group appear together in any block and (ii) any two points from distinct groups appear together in exactly one block. The _group type_ (or _type_) of a \(k\)-GDD is the multiset \(\{|G|:G\) is a part of the partition\(\}\). The group type can also be expressed in 'exponential' notation where the type \(t_{1}^{u_{1}}t_{2}^{u_{2}}\ldots t_{m}^{u_{m}}\) means there are \(u_{i}\) groups of size \(t_{i}\) for \(i=1,2,\ldots,m\). A \(k\)-GDD of type \(g^{k}\), in which there are \(k\) groups all of the same size \(g\), is commonly referred to as a _transversal design_, denoted TD\((k,g)\).
There are known necessary conditions for the existence of a \(4\)-GDD of type \(\{g_{1},g_{2},\ldots,g_{m}\}\). These are given in Theorem 1.1. These necessary conditions are not sufficient.
**Theorem 1.1**.: [4, 5, 18, 19] _Suppose there exists a \(4\)-GDD of type \(\{g_{1},g_{2},\ldots,g_{m}\}\) where \(g_{1}\geq g_{2}\geq\cdots\geq g_{m}>0\). Set \(v=\sum_{i=1}^{m}g_{i}\). Then_
1. \(m\geq 4\);
2. \(v\equiv g_{1}\equiv g_{2}\equiv\cdots\equiv g_{m}\pmod{3}\);
3. \(\sum_{i=1}^{m}g_{i}(v-g_{i})\equiv 0\pmod{4}\);
4. \(3g_{i}+g_{j}\leq v\) for all \(i,j\in\{1,2,\ldots,m\}\), \(i\neq j\);
5. if \(m=4\), then \(g_{i}=g_{j}\) for all \(i,j\in\{1,2,\ldots,m\}\);
6. if \(m=5\), then the group type is of the form \(h^{4}n^{1}\) where \(n\leq 3h/2\);
7. if \(3g_{1}+g_{2}=v\) and \(g_{1}>g_{2}\), then \(g_{3}\leq 2g_{1}/3\);
8. if the group type is of the form \(h_{1}^{1}\)\(h_{2}^{x}\)\(h_{3}^{1}\)\(h_{4}^{1}\cdots h_{n}^{1}\) where \(x\geq 1\) and \(3h_{1}+h_{2}=v\) and \(h_{2}>h_{3}\geq\cdots\geq h_{n}\), then \(n\geq 6\). If further \(n=6\), then for \(i,j\in\{3,4,5,6\}\) we have \(h_{i}(h_{2}-h_{i})=h_{j}(h_{2}-h_{j})\).
In this paper we are concerned with \(4\)-GDDs with block size \(k=4\). We say that a multiset \(\{g_{1},g_{2},\ldots,g_{m}\}\) of positive integers is a _feasible_ group type for a \(4\)-GDD if it satisfies the conditions of Theorem 1.1. Much work on \(4\)-GDDs has concentrated on the existence question, determining whether there exists a GDD with a particular type - sometimes looking at GDDs on small sets, sometimes looking at infinite families. Some work on \(4\)-GDDs has considered the enumeration question - for a particular type determining how many nonisomorphic designs can be constructed and comparing their automorphism groups.
It is sometimes convenient to consider alternative forms of some of the necessary conditions in Theorem 1.1. In Lemmas 1.2 and 1.3, we show the equivalence of some simpler conditions that are particularly useful in this paper.
**Lemma 1.2**.: _In the necessary conditions for the existence of a \(4\)-GDD of type \(\{g_{1},g_{2},\ldots,g_{m}\}\), the condition that \(v\equiv g_{1}\equiv g_{2}\equiv\cdots\equiv g_{m}\pmod{3}\) can be replaced by the condition that \(g_{1}\equiv g_{2}\equiv\cdots\equiv g_{m}\pmod{3}\) and either \(g_{1}\equiv 0\pmod{3}\) or \(m\equiv 1\pmod{3}\)._
Proof.: Firstly, suppose that \(v\equiv g_{1}\equiv g_{2}\equiv\cdots\equiv g_{m}\pmod{3}\). Recall that \(v=g_{1}+g_{2}+\cdots+g_{m}\). Thus, \(v\equiv g_{1}+g_{2}+\cdots+g_{m}\equiv mg_{1}\pmod{3}\). This means that \(v\equiv g_{1}\equiv mg_{1}\pmod{3}\). Hence, either \(g_{1}\equiv 0\pmod{3}\) or \(m\equiv 1\pmod{3}\).
Next, suppose that \(g_{1}\equiv g_{2}\equiv\cdots\equiv g_{m}\pmod{3}\) and \(g_{1}\equiv 0\pmod{3}\). Then \(g_{1}\equiv g_{2}\equiv\cdots\equiv g_{m}\equiv 0\pmod{3}\). This means that \(v\equiv g_{1}+g_{2}+\cdots+g_{m}\equiv 0\pmod{3}\). Hence, \(v\equiv g_{1}\equiv g_{2}\equiv\cdots\equiv g_{m}\pmod{3}\).
Finally, suppose that \(g_{1}\equiv g_{2}\equiv\cdots\equiv g_{m}\pmod{3}\) and \(m\equiv 1\pmod{3}\). Then \(v\equiv g_{1}+g_{2}+\cdots+g_{m}\equiv mg_{1}\equiv g_{1}\pmod{3}\). Hence, \(v\equiv g_{1}\equiv g_{2}\equiv\cdots\equiv g_{m}\pmod{3}\).
**Lemma 1.3**.: _In the necessary conditions for the existence of a \(4\)-GDD of type \(\{g_{1},g_{2},\ldots,g_{m}\}\), the condition that \(\sum_{i=1}^{m}g_{i}(v-g_{i})\equiv 0\pmod{4}\) can be replaced by the condition that the number of groups of odd size is either \(0\) or \(1\) modulo \(4\)._
Proof.: Consider the sum of point-block pairs \(S=\sum_{i=1}^{m}g_{i}(v-g_{i})\) and suppose that
\[n=\left|\{i\in\{1,\ldots,m\}\,:\,g_{i}\equiv 1\pmod{2}\}\right|.\]
Then \(n\) is the number of groups of odd size. Note that \(v\equiv n\pmod{2}\).
Suppose that \(v\) is even. If \(g_{i}\equiv 0\pmod{2}\), then \(g_{i}(v-g_{i})\equiv 0\pmod{4}\). If \(g_{i}\equiv 1\pmod{2}\), then \(g_{i}(v-g_{i})\equiv 1\pmod{4}\) when \(v\equiv 2\pmod{4}\) and \(g_{i}(v-g_{i})\equiv-1\pmod{4}\) when \(v\equiv 0\pmod{4}\). Thus, \(S\equiv\pm n\pmod{4}\). Hence, when \(v\) is even, \(S\equiv 0\pmod{4}\) if and only if \(n\equiv 0\pmod{4}\).
Now, suppose that \(v\) is odd. If \(a\), \(b\) and \(c\) are the numbers of groups of sizes congruent to \(v\), \(2\) and \(v+2\) modulo \(4\) respectively, then \(n=a+c\). If \(g_{i}\equiv 0\) or \(v\pmod{4}\), then \(g_{i}(v-g_{i})\equiv 0\pmod{4}\). If \(g_{i}\equiv 2\) or \(v+2\pmod{4}\), then \(g_{i}(v-g_{i})\equiv 2\pmod{4}\). Thus, \(S\equiv 2(b+c)\pmod{4}\). Next, counting the number of points in the \(4\)-GDD modulo \(4\) gives \(v\equiv va+2b+(v+2)c\pmod{4}\) so rearranging gives \(2(b+c)\equiv v(1-a-c)\pmod{4}\). Thus, \(S\equiv 2(b+c)\equiv v(1-a-c)\equiv v(1-n)\pmod{4}\). Hence, when \(v\) is odd, \(S\equiv 0\pmod{4}\) if and only if \(n\equiv 1\pmod{4}\).
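The conditions in Lemmas 1.2 and 1.3, together with conditions (1) and (4) of Theorem 1.1, are easy to check by computer. A short Python sketch of such a feasibility test is given below; the finer conditions (5)-(8) of Theorem 1.1 are not tested here.

```python
def satisfies_basic_conditions(group_type):
    """Check conditions (1) and (4) of Theorem 1.1 together with the
    reformulated conditions of Lemmas 1.2 and 1.3 for a 4-GDD."""
    groups = sorted(group_type, reverse=True)
    m, v = len(groups), sum(groups)
    if m < 4:                                              # condition (1)
        return False
    if len({g % 3 for g in groups}) > 1:                   # all g_i congruent mod 3
        return False
    if groups[0] % 3 != 0 and m % 3 != 1:                  # Lemma 1.2
        return False
    if sum(g % 2 for g in groups) % 4 not in (0, 1):       # Lemma 1.3
        return False
    if 3 * groups[0] + groups[1] > v:                      # condition (4), largest case
        return False
    return True

# Type 4^6 7^4 passes these tests, while 4^5 7^4 fails (it has 9 groups, not 1 mod 3).
print(satisfies_basic_conditions([4]*6 + [7]*4), satisfies_basic_conditions([4]*5 + [7]*4))
```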
The main aim of this paper is to answer the existence question for \(4\)-GDDs of type \(4^{\prime}7^{s}\). This paper is organised as follows. In Section 2, we state some prior-known results regarding \(4\)-GDDs, especially those on \(4\)-GDDs with up to \(50\) points and \(4\)-GDDs of type \(g^{p}\) and \(g^{p}n^{1}\) which will be used later on. In Section 3, we give constructions for all feasible types of \(4^{\prime}7^{s}\) except for a short and finite list of exceptions. Finally, in Section 4, we state the main result of this paper and provide a corollary to this result.
## 2 Some known results about \(4\)-GDDs
### Designs on sets with \(v\leq 50\)
The existence of \(4\)-GDDs with no more than \(30\) points has been well investigated. In [18], the existence question was answered for all but three types, and the existence results for those three types were completed in [2, 4].
**Theorem 2.1**.: [2, 4, 18] _The feasible group types for a \(4\)-GDD on at most \(30\) points are \(1^{4}\), \(2^{4}\), \(3^{4}\), \(1^{13}\), \(1^{9}4^{1}\), \(2^{7}\), \(3^{5}\), \(1^{16}\), \(1^{12}4^{1}\), \(1^{8}4^{2}\), \(1^{14}3^{4}\), \(4^{4}\), \(2^{6}5^{1}\), \(2^{10}\), \(5^{4}\), \(3^{5}6^{1}\), \(1^{15}7^{1}\), \(2^{9}5^{1}\), \(3^{8}\), \(3^{4}6^{2}\), \(6^{4}\), \(1^{25}\), \(1^{21}4^{1}\), \(1^{17}4^{2}\), \(1^{13}4^{3}\), \(1^{9}4^{4}\), \(1^{5}4^{5}\), \(1^{1}4^{6}\), \(2^{13}\), \(2^{3}5^{4}\), \(2^{9}8^{1}\), \(3^{9}\), \(3^{5}6^{2}\), \(3^{1}6^{4}\), \(1^{28}\), \(1^{24}4^{1}\), \(1^{20}4^{2}\), \(1^{16}4^{3}\), \(1^{12}4^{4}\), \(1^{8}4^{5}\), \(1^{4}4^{6}\), \(4^{7}\), \(1^{14}7^{2}\), \(1^{10}4^{1}7^{2}\), \(1^{6}4^{2}7^{2}\), \(1^{2}4^{3}7^{2}\), \(7^{4}\), \(2^{12}5^{1}\), \(2^{2}5^{5}\), \(2^{8}5^{1}8^{1}\), \(3^{8}6^{1}\), \(3^{4}6^{3}\), \(6^{5}\), \(3^{7}9^{1}\). A \(4\)-GDD exists for all these types with the definite exception of types \(2^{4}\), \(2^{6}5^{1}\) and \(6^{4}\)._
Existence results have been extended to point sets with up to \(v=50\) points. For \(31\leq v\leq 50\) and \(v\equiv 0\pmod{3}\) there exist designs for all feasible types; this result was completed in [5]. In [3], the feasible types for \(31\leq v\leq 50\) and \(v\equiv 1,2\pmod{3}\) are considered. The questions of existence
were completed for \(v\equiv 1\pmod{3}\); and for \(v\equiv 2\pmod{3}\) the known results were extended leaving unknown the question of existence for types \(2^{11}8^{1}11^{1}\), \(2^{1}5^{4}8^{1}11^{1}\), \(2^{6}5^{2}11^{2}\), \(2^{5}5^{3}8^{1}11^{1}\), \(2^{2}5^{2}8^{1}11^{2}\), \(2^{1}5^{3}8^{2}11^{1}\), \(2^{5}5^{3}11^{2}\), \(2^{2}5^{2}11^{3}\), \(2^{1}5^{3}8^{1}11^{2}\), \(2^{9}5^{2}11^{2}\), \(2^{8}5^{3}8^{1}11^{1}\), \(2^{6}5^{1}11^{3}\), \(2^{5}5^{2}8^{1}11^{2}\), \(2^{4}5^{4}8^{1}14^{1}\), \(2^{4}5^{3}8^{2}11^{1}\), \(2^{3}11^{4}\), \(2^{2}5^{7}11^{1}\), \(2^{2}5^{1}8^{1}11^{3}\), \(2^{1}5^{2}8^{2}11^{2}\) and \(5^{3}8^{3}11^{1}\).
### Existence results for infinite families of types
Extensive work has been done on the existence of \(4\)-GDDs of types \(g^{p}\) and \(g^{p}n^{1}\). For type \(g^{p}\), existence has been completely determined as given in Lemma 2.2.
**Lemma 2.2**.: [7] _There exists a \(4\)-GDD of type \(g^{p}\) if and only if (1) \(p\geq 4\); (2) \(g(p-1)\equiv 0\pmod{3}\); (3) \(g^{2}p(p-1)\equiv 0\pmod{12}\) and (4) \((g,p)\notin\{(2,4),(6,4)\}\)._
For type \(g^{p}n^{1}\) with \(p\geq 4\), necessary conditions for existence are \(g\equiv n\pmod{3}\), \(gp\equiv 0\pmod{3}\), \(gp\big{(}g(p-1)+2n\big{)}\equiv 0\pmod{4}\) and \(0\leq n\leq g(p-1)/2\). The existence of such designs has been determined in many cases; see for instance Forbes [8, 9, 11], Ge and Rees [14], Ge et al. [15], Rees [19], and Wei and Ge [24]. However, for a number of these feasible types, the question of the existence of a corresponding design remains unanswered.
In this paper we make use of the existence results given in Lemmas 2.3 to 2.7.
**Lemma 2.3**.: [11, 23, 24] _Suppose that \(g\equiv 0\pmod{6}\), \(g\geq 6\), \(p\geq 4\), \(n\equiv 0\pmod{3}\) and \(0\leq n\leq g(p-1)/2\). Then there exists a \(4\)-GDD of type \(g^{p}n^{1}\) except when \((g,p,n)=(6,4,0)\)._
**Lemma 2.4**.: [8, 24] _Suppose that \(g\equiv 3\pmod{6}\), \(g<141\), \(p\geq 4\) and one of the following conditions hold_:__
* \(p\equiv 0\pmod{4}\) _and_ \(n\equiv 0\pmod{3}\) _where_ \(0\leq n\leq(g(p-1)-3)/2,\) _or_
* \(p\equiv 1\pmod{4}\) _and_ \(n\equiv 0\pmod{6}\) _where_ \(0\leq n\leq g(p-1)/2,\) _or_
* \(p\equiv 3\pmod{4}\) _and_ \(n\equiv 3\pmod{6}\) _where_ \(3\leq n\leq g(p-1)/2.\)__
_Then there exists a \(4\)-GDD of type \(g^{p}n^{1}\)._
**Lemma 2.5**.: [13] _There exists a \(4\)-GDD of type \(4^{p}n^{1}\) if and only if \(p\equiv 0\pmod{3}\) where \(p\geq 4\) and \(n\equiv 1\pmod{3}\) where \(1\leq n\leq 2(p-1)\)._
Proof.: The necessary conditions follow from Theorem 1.1. Thus, it remains to show that they are sufficient. If \(p\in\{6,9\}\), then they exist by [13, Lemma 3.17]. If \(p\geq 12\), then they exist by [13, Theorem 3.18].
**Lemma 2.6**.: [15, 21, 22] _There exists a \(4\)-GDD of type \(7^{p}n^{1}\) if and only if \(p\geq 4\) and one of the following conditions hold_:__
* \(p\equiv 0\pmod{12}\) _and_ \(n\equiv 1\pmod{3}\) _where_ \(1\leq n\leq(7(p-1)-3)/2\)_, or_
* \(p\equiv 3\pmod{12}\) _and_ \(n\equiv 1\pmod{6}\) _where_ \(1\leq n\leq 7(p-1)/2\)_, or_
* \(p\equiv 9\pmod{12}\) _and_ \(n\equiv 4\pmod{6}\) _where_ \(4\leq n\leq 7(p-1)/2\)_._
Proof.: The necessary conditions follow from Theorem 1.1. Thus, it remains to show that they are sufficient. If \(p=9\), then \(n\in\{4,10,16,22,28\}\), so they exist by [21, Lemma 2.18] or [22, Lemma 3.14]. If \(p\geq 12\) and \(n\in\{1,4\}\), then they exist by [15, Theorem 2.11]. If \(p\geq 12\) and \(n\geq 7\), then they exist by [22, Theorem 3.15].
**Lemma 2.7**.: [9, 21] _Suppose that \(p\equiv 0\pmod{3}\) where \(p\geq 4\) and that \(n\equiv 1\pmod{3}\) where \(n\leq 14(p-1)\). Then there exists a \(4\)-GDD of type \(28^{p}n^{1}\)._
Proof.: Constructions for types \(28^{9}19^{1}\), \(28^{9}25^{1}\) and \(28^{9}31^{1}\) are given in [9, Lemma 2.6]. Otherwise, they exist by [21, Theorem 5.17].
Other work on the existence of \(4\)-GDDs has concentrated on whether the group sizes are congruent to \(0,1\) or \(2\pmod{3}\) and on GDDs whose groups are of only two or three different sizes. See, for example, [1], [2], [3], [4], [5], [6], [7], [9], [10] and [24].
### Some known enumeration results
In [17], the \(4\)-GDDs of type \(2^{10}\) were enumerated; up to isomorphism there are five different such \(4\)-GDDs and each has a non-trivial automorphism group. The automorphism groups have orders \(8\), \(12\), \(16\), \(72\) and \(720\). The \(4\)-GDDs of type \(7^{4}\) were enumerated in [3]; up to isomorphism there are seven different such \(4\)-GDDs and, as in the case of the designs of type \(2^{10}\), each has a non-trivial automorphism group. Two of the \(4\)-GDDs of type \(7^{4}\) have automorphism groups of order \(6\) and the remaining ones have orders \(24\), \(48\), \(126\), \(2352\) and \(3528\). In [6], the \(4\)-GDDs of type \(3^{5}6^{2}\) were enumerated and only \(3\) out of \(22\) had any non-trivial automorphisms. Of those that did have non-trivial automorphisms, one had an automorphism group of order \(2\) and two had automorphism groups of order \(3\).
### Some direct constructions
In the past, several GDDs have been found directly by assuming the existence of a cyclic automorphism group, \(\mathbb{G}\). See for instance [1], [2], [3], [4], [5], [6], [8], [9], [10], [11], [23] and [24]. In these designs, one assigns a selection of base blocks and then develops these base blocks to produce the blocks of the required design. For the \(4\)-GDDs constructed directly in this paper, the point set of the design consists of one or more copies of the group \(\mathbb{G}\) (here, \(\mathbb{G}\) is \(\mathbb{Z}_{5}\), \(\mathbb{Z}_{6}\), \(\mathbb{Z}_{7}\) or \(\mathbb{Z}_{14}\)) plus possibly a few copies of \(\mathbb{Z}_{2}\) or \(\mathbb{Z}_{3}\) and possibly one or more infinite points. Blocks in these designs
are obtained by developing the subscripts of the points from copies of \(\mathbb{G}\) over \(\mathbb{G}\); the infinite points remain unaltered when developed. When the point set includes any extra copies of \(\mathbb{Z}_{2}\) or \(\mathbb{Z}_{3}\), those points are developed over \(\mathbb{Z}_{2}\) or \(\mathbb{Z}_{3}\) as the others are developed over \(\mathbb{G}\). Also there are usually a few blocks which remain invariant when some nonzero element of \(\mathbb{G}\) is added to it; the number of blocks generated by any one of those blocks is less than the size of \(\mathbb{G}\).
As an example, we give a \(4\)-GDD of type \(1^{3}4^{1}7^{6}\) from [3, Table 27] using this method. The points are \(a_{i},b_{i},c_{i},d_{i},e_{i},f_{i},g_{i}\) for \(i\in\mathbb{Z}_{6}\), \(y_{i}\) for \(i\in\mathbb{Z}_{3}\), and \(\infty_{1}\), \(\infty_{2}\), \(\infty_{3}\), \(\infty_{4}\). The groups are \(\{a_{i},b_{i},c_{i},d_{i},e_{i},f_{i},g_{i}\}\) for \(i\in\mathbb{Z}_{6}\), \(\{y_{i}\}\) for \(i\in\mathbb{Z}_{3}\) and \(\{\infty_{1},\infty_{2},\infty_{3},\infty_{4}\}\). The blocks are obtained by developing the blocks in the following array \((\bmod\,6)\). The six blocks in the first column generate three blocks each.
\begin{tabular}{|l|l|l|l|l|l|} \hline \(\{a_{0},a_{3},y_{0},\infty_{1}\}\) & \(\{a_{0},a_{1},b_{2},e_{3}\}\) & \(\{a_{0},c_{4},f_{3},f_{5}\}\) & \(\{b_{0},b_{1},d_{3},f_{2}\}\) & \(\{c_{0},c_{1},f_{3},y_{2}\}\) & \(\{d_{0},f_{2},f_{3},g_{1}\}\) \\ \hline \(\{b_{0},b_{3},y_{0},\infty_{2}\}\) & \(\{a_{0},a_{2},c_{3},e_{1}\}\) & \(\{a_{0},d_{5},g_{2},y_{1}\}\) & \(\{b_{0},c_{4},d_{5},e_{3}\}\) & \(\{c_{0},c_{2},g_{4},g_{5}\}\) & \(\{e_{0},e_{1},g_{3},g_{5}\}\) \\ \hline \(\{c_{0},c_{3},y_{0},\infty_{3}\}\) & \(\{a_{0},b_{3},c_{2},d_{1}\}\) & \(\{a_{0},d_{4},g_{3},\infty_{2}\}\) & \(\{b_{0},c_{2},e_{5},\infty_{4}\}\) & \(\{c_{0},d_{2},e_{1},\infty_{1}\}\) \\ \hline \(\{d_{0},d_{3},y_{0},\infty_{4}\}\) & \(\{a_{0},b_{5},f_{2},g_{4}\}\) & \(\{a_{0},e_{4},f_{1},y_{2}\}\) & \(\{b_{0},d_{1},f_{5},y_{2}\}\) & \(\{c_{0},e_{2},f_{4},\infty_{2}\}\) \\ \hline \(\{e_{0},e_{3},f_{1},f_{4}\}\) & \(\{a_{0},b_{4},g_{1},\infty_{3}\}\) & \(\{a_{0},f_{4},g_{5},\infty_{4}\}\) & \(\{b_{0},e_{2},e_{4},y_{1}\}\) & \(\{d_{0},d_{2},e_{3},g_{4}\}\) \\ \hline \(\{g_{0},g_{3},y_{0},y_{1}\}\) & \(\{a_{0},c_{5},d_{2},d_{3}\}\) & \(\{b_{0},b_{2},c_{3},g_{4}\}\) & \(\{b_{0},f_{4},g_{1},\infty_{1}\}\) & \(\{d_{0},e_{2},f_{1},\infty_{3}\}\) \\ \hline \end{tabular}
All \(4\)-GDDs of type \(4^{t}7^{s}\) constructed directly in this paper are given in the Appendix.
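The development-and-checking procedure described above is straightforward to mechanise. The sketch below develops a base block over \(\mathbb{Z}_{n}\), with points written as (label, residue) pairs, and verifies the defining conditions of a \(4\)-GDD for any proposed block set; the single base block used in the example is taken from the array above and is for illustration only.

```python
from itertools import combinations

def develop(base_block, n):
    """The n translates of a base block over Z_n; points are (label, residue) pairs."""
    return [frozenset((lab, (i + d) % n) for lab, i in base_block) for d in range(n)]

def is_4gdd(groups, blocks):
    """Check that blocks have size 4, that no block meets a group twice, and
    that every pair of points from distinct groups lies in exactly one block."""
    group_of = {p: gi for gi, grp in enumerate(groups) for p in grp}
    pair_count = {}
    for blk in blocks:
        if len(blk) != 4:
            return False
        for p, q in combinations(sorted(blk), 2):
            if group_of[p] == group_of[q]:
                return False
            pair_count[(p, q)] = pair_count.get((p, q), 0) + 1
    for p, q in combinations(sorted(group_of), 2):
        want = 0 if group_of[p] == group_of[q] else 1
        if pair_count.get((p, q), 0) != want:
            return False
    return True

# Developing the base block {a_0, a_1, b_2, e_3} over Z_6 yields six of the blocks above.
for blk in develop([("a", 0), ("a", 1), ("b", 2), ("e", 3)], 6):
    print(sorted(blk))
```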
## 3 Constructions of \(4\)-GDDs of type \(4^{t}7^{s}\)
One tool for constructing GDDs from other GDDs is that of 'filling in groups'. This is given in Construction 3.1 in the form applicable to \(k\)-GDDs. This construction is valid by [12, Theorem 1.4.12]; see also the proof of that theorem.
**Construction 3.1**.: _Suppose that a \(k\)-GDD of type \(\{g_{1},g_{2},\ldots,g_{m}\}\) exists and let \(u\) be a non-negative integer. Suppose also that, for each integer \(i=1,2,\ldots,m-1\), there exists a \(k\)-GDD of type \(\{g(i,1),g(i,2),\ldots,g(i,s_{i}),u\}\) for which \(\sum_{j=1}^{s_{i}}g(i,j)=g_{i}\). Then there exists a \(k\)-GDD of type \(\{g(1,1),g(1,2),\ldots,g(1,s_{1}),g(2,1),g(2,2),\ldots,g(2,s_{2}),\ldots,g(m-1,1),g(m-1,2),\ldots,g(m-1,s_{m-1}),g_{m}+u\}\). If further, there exists a \(k\)-GDD on \(g_{m}+u\) points of type \(\{g(m,1),g(m,2),\ldots,\)\(g(m,s_{m})\}\), then there exists a \(k\)-GDD of type \(\{g(1,1),g(1,2),\ldots,g(1,s_{1}),g(2,1),g(2,2),\ldots,g(2,s_{2}),\)\(\ldots,g(m,1),g(m,2),\ldots,g(m,s_{m})\}\)._
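The group-type bookkeeping in Construction 3.1 can be automated. The sketch below computes the type that results from a given master design, value of \(u\) and choice of fill-in types, assuming that the required ingredient \(4\)-GDDs exist.

```python
from collections import Counter

def fill_in_groups(master_type, u, fill_types, last_fill=None):
    """Group type produced by Construction 3.1 with k = 4.

    master_type : sizes g_1, ..., g_m of the master GDD (g_m listed last)
    fill_types  : for i = 1, ..., m-1, a partition of g_i such that the
                  partition together with a group of size u is a 4-GDD type
    last_fill   : optional partition of g_m + u used to fill the last group
    """
    *first, g_m = master_type
    assert len(fill_types) == len(first)
    result = []
    for g_i, parts in zip(first, fill_types):
        assert sum(parts) == g_i, "each fill-in must partition its group"
        result.extend(parts)
    if last_fill is None:
        result.append(g_m + u)
    else:
        assert sum(last_fill) == g_m + u
        result.extend(last_fill)
    return Counter(result)

# Example (cf. Table 2): a 4-GDD of type 28^4 with u = 0, three groups filled
# with 7^4 and the last with 4^7, gives a 4-GDD of type 4^7 7^12.
print(fill_in_groups([28, 28, 28, 28], 0, [[7]*4, [7]*4, [7]*4], last_fill=[4]*7))
```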
Another tool for constructing GDDs from other GDDs is _Wilson's fundamental GDD construction_[25]. A version of this construction is given in Construction 3.2 in the form in which it will be used in Lemma 3.6.
**Construction 3.2**.: _Suppose there exists a \(k\)-GDD of type \(g_{1}^{u_{1}}g_{2}^{u_{2}}\cdots g_{m}^{u_{m}}\) and there exists a \(\operatorname{TD}(k,h)\). Then there exists a \(k\)-GDD of type \((hg_{1})^{u_{1}}(hg_{2})^{u_{2}}\cdots(hg_{m})^{u_{m}}\)._
In Lemma 3.4, we give the necessary conditions for the existence of a \(4\)-GDD of type \(4^{t}7^{s}\). However, we first need to consider the result in Lemma 3.3 which simplifies some of the necessary conditions in Theorem 1.1, specifically when the \(4\)-GDD has up to two group sizes.
**Lemma 3.3**.: _Suppose there exists a \(4\)-GDD of type \(g_{1}^{t}g_{2}^{s}\). Then \(t\geq 4\) or \(s\geq 4\) or \(t=s=3\)._
Proof.: By Condition (1) in Theorem 1.1, \(t+s\geq 4\). If \(t+s=4\), then by Condition (5), either \(t=4\) or \(s=4\). If \(t+s=5\), then by Condition (6), either \(t=4\) or \(s=4\). If \(t+s=6\) and neither \(t\geq 4\) nor \(s\geq 4\), then \(t=s=3\). If \(t+s\geq 7\), then either \(t\geq 4\) or \(s\geq 4\).
**Lemma 3.4**.: _If a \(4\)-GDD of type \(4^{t}7^{s}\) exists then all of the following conditions hold\(:\)_
* \(t+s\equiv 1\pmod{3};\)__
* \(s\equiv 0\) _or_ \(1\pmod{4};\) _and_
* _either_ \(t\geq 4\) _or_ \(s\geq 4\)_._
Proof.: Firstly, note that a \(4\)-GDD of type \(4^{t}7^{s}\) has \(t+s\) groups. Since \(4\not\equiv 0\pmod{3}\) and \(7\not\equiv 0\pmod{3}\), it follows from Lemma 1.2 that \(t+s\equiv 1\pmod{3}\).
Next, observe that the only groups of odd size are the groups of size \(7\). Hence, by Lemma 1.3, it follows that \(s\equiv 0\) or \(1\pmod{4}\).
Finally, consider when \(t=s=3\). Then \(t+s=6\), which contradicts \(t+s\equiv 1\pmod{3}\). Hence, by Lemma 3.3, it follows that either \(t\geq 4\) or \(s\geq 4\).
**Lemma 3.5**.: _Suppose that \(v=4t+7s\). Then the conditions that \(t+s\equiv 1\pmod{3}\) and \(s\equiv 0\pmod{4}\) are equivalent to the condition that \(v\equiv 4\pmod{12}\). Similarly, the conditions that \(t+s\equiv 1\pmod{3}\) and \(s\equiv 1\pmod{4}\) are equivalent to the condition that \(v\equiv 7\pmod{12}\)._
Proof.: Firstly, note that \(t+s\equiv 1\pmod{3}\) implies that \(4t+4s\equiv 1\pmod{3}\), so \(v\equiv 4t+7s\equiv 1\pmod{3}\). Also, observe that \(s\equiv 0\) or \(1\pmod{4}\) implies that \(3s\equiv 0\) or \(3\pmod{4}\), so \(v\equiv 4t+7s\equiv 0\) or \(3\pmod{4}\). If \(v\equiv 1\pmod{3}\) and \(v\equiv 0\pmod{4}\), then \(v\equiv 4\pmod{12}\). If \(v\equiv 1\pmod{3}\) and \(v\equiv 3\pmod{4}\), then \(v\equiv 7\pmod{12}\).
Now, note that \(v\equiv 4t+7s\equiv t+s\pmod{3}\) and that \(v\equiv 4t+7s\equiv-s\pmod{4}\). If \(v\equiv 4\pmod{12}\) then \(v\equiv 1\pmod{3}\) and \(v\equiv 0\pmod{4}\). This means that \(t+s\equiv 1\pmod{3}\) and \(-s\equiv 0\pmod{4}\), so \(s\equiv 0\pmod{4}\). If \(v\equiv 7\pmod{12}\) then \(v\equiv 1\pmod{3}\) and \(v\equiv 3\pmod{4}\). This means that \(t+s\equiv 1\pmod{3}\) and \(-s\equiv 3\pmod{4}\), so \(s\equiv 1\pmod{4}\).
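The equivalence in Lemma 3.5 can also be confirmed by an exhaustive computational check over a range of values of \(t\) and \(s\):

```python
ok = all(
    (((t + s) % 3 == 1 and s % 4 in (0, 1)) == ((4 * t + 7 * s) % 12 in (4, 7)))
    for t in range(200) for s in range(200)
)
print("Lemma 3.5 equivalence holds for all 0 <= t, s < 200:", ok)
```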
We can now begin to construct \(4\)-GDDs of type \(4^{t}7^{s}\). Following from the result in Lemma 3.5, we find it convenient to organise the constructions by the number of points \(v=4t+7s\). Specifically, we organise the constructions by the value of \(v\) modulo \(84\) where \(v\equiv 4\) or \(7\pmod{12}\).
Note that throughout this paper, the parameters \(t\) and \(s\) are always non-negative integers.
First, in Lemma 3.6, we establish the existence of two \(4\)-GDDs that will be used for some constructions in Lemma 3.7.
**Lemma 3.6**.: _There exist \(4\)-GDDs of types \(4^{10}28^{3}\) and \(4^{13}28^{3}\)._
Proof.: For type \(4^{10}28^{3}\), start with a \(4\)-GDD of type \(1^{10}7^{3}\) which can be obtained by forming a block on each group of size \(4\) in a \(4\)-GDD of type \(1^{2}4^{2}7^{3}\) which exists by [5, Table 8]. For type \(4^{13}28^{3}\), start with a \(4\)-GDD of type \(1^{13}7^{3}\) which can be obtained by forming a block on each group of size \(4\) in a \(4\)-GDD of type \(1^{1}4^{3}7^{3}\) which exists by [3, Table 3]. Then, in both cases, apply Construction 3.2 with \(h=4\).
**Lemma 3.7**.: _Suppose that \(v=4t+7s\) where \(v\equiv 4\) or \(7\pmod{12}\) and either \(t\geq 4\) or \(s\geq 4\). Suppose also that \(v\leq 151\). Then there exists a \(4\)-GDD of type \(4^{t}7^{s}\) except possibly when_
* \(v=76:(t,s)=(12,4)\) _or_ \((5,8)\);
* \(v=79:(t,s)=(11,5)\);
* \(v=100:(t,s)=(11,8)\);
* \(v=103:(t,s)=(17,5)\), \((10,9)\) _or_ \((3,13)\);
* \(v=115:(t,s)=(20,5)\) _or_ \((13,9)\);
* \(v=124:(t,s)=(3,16)\);
* \(v=127:(t,s)=(23,5)\), \((16,9)\), \((9,13)\) _or_ \((2,17)\);
* \(v=139:(t,s)=(26,5)\), \((19,9)\), \((12,13)\) _or_ \((5,17)\).
Proof.: Constructions of these \(4\)-GDDs are given in Table 1 for \(v\leq 76\) and in Table 2 for \(79\leq v\leq 151\).
Some of the \(4\)-GDDs in Table 2 are constructed using Construction 3.1. Start with the given input \(4\)-GDD which exists by the reference given in the last column of the table. Then, apply Construction 3.1 using the listed value of \(u\). The required fill-in \(4\)-GDDs are also given and references for these are given in Table 1.
The \(4\)-GDDs in Table 1 and the remaining \(4\)-GDDs in Table 2 are constructed directly in this paper or are constructed elsewhere. A reference is given for each of these \(4\)-GDDs. The direct constructions in this paper can be found in the Appendix. As explained in Section 2.4, these were found by assuming the existence of a cyclic automorphism group, \(\mathbb{G}\), assigning a selection of base blocks, and then developing these base blocks to produce the blocks of the required design.
We use the \(4\)-GDDs in Lemma 3.7 to construct infinite families of the remaining \(4\)-GDDs of type \(4^{t}7^{s}\). Note that \(v\equiv 4\) or \(7\pmod{12}\) implies that \(v\equiv 4,7,16,19,\ldots,64\) or \(67\pmod{84}\). We
first consider the cases when \(v\not\equiv 19,76\) or \(79\pmod{84}\) in Lemma 3.8. The cases when \(v\equiv 19,76\) or \(79\pmod{84}\) are then handled in Lemmas 3.9 to 3.14.
**Lemma 3.8**.: _Suppose that \(v=4t+7s\) where \(v\equiv 4\) or \(7\pmod{12}\). Suppose also that \(v\geq 172\) where \(v\not\equiv 19,76\) or \(79\pmod{84}\). Then there exists a \(4\)-GDD of type \(4^{t}7^{s}\)._
Proof.: If \(t=0\) or \(s\in\{0,1\}\), then existence is given by Lemmas 2.2 and 2.5, so we may assume that \(t>0\) and \(s\geq 2\).
Suppose that \(\ell\equiv 4t+7s\pmod{84}\) where \(0\leq\ell<84\) and \(\ell\neq 19,76\) or \(79\). Recall that \(v\not\equiv 19,76\) or \(79\pmod{84}\), so it follows that \(0\leq\ell\leq 67\). Then set \(m=(4t+7s-\ell)/28\). Now, note that \(m\equiv 0\pmod{3}\), so \(v\geq 172\) implies that \(m\geq 6\). This means that \(\ell\leq 67\leq 14(m-1)\). Hence, by Lemma 2.7, there exists a \(4\)-GDD of type \(28^{m}\ell^{1}\).
Start with a \(4\)-GDD of type \(28^{m}\ell^{1}\). Then, apply Construction 3.1 with \(u=0\).
If \(\ell=4\), then set \(x=s/4\).
If \(\ell=7\), then set \(x=(s-1)/4\).
If \(\ell=16\), then fill in the group of size \(\ell\) with a \(4\)-GDD of type \(4^{4}\) and set \(x=s/4\).
If \(\ell\in\{28,40,52\}\), then there are at least four groups of size \(7\). Fill in the group of size \(\ell\) with a \(4\)-GDD of type \(7^{4}\), \(4^{3}7^{4}\) or \(4^{6}7^{4}\) for \(\ell=28,40,52\), respectively, and set \(x=(s-4)/4\).
If \(\ell\in\{43,55\}\), then there are at least five groups of size \(7\). Fill in the group of size \(\ell\) with a \(4\)-GDD of type \(4^{2}7^{5}\) or \(4^{5}7^{5}\) for \(\ell=43,55\), respectively, and set \(x=(s-5)/4\).
If \(\ell=64\), then there are at least four groups of size \(7\). If \(s=4\), then fill in the group of size \(\ell\) with a \(4\)-GDD of type \(4^{9}7^{4}\) and set \(x=0\). If \(s\geq 8\), then fill in the group of size \(\ell\) with a \(4\)-GDD of type \(4^{2}7^{8}\) and set \(x=(s-8)/4\).
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline \(v\) & Types & Reference & \(v\) & Types & Reference \\ \hline \(16\) & \(4^{4}\) & [7, Lemma 6.13] & 55 & \(4^{12}7^{1}\) & [13, Theorem 3.18] \\ \hline \(28\) & \(4^{7}\) & [7, Lemma 6.13] & & \(4^{5}7^{5}\) & Table 8 \\ \hline & \(7^{4}\) & [16, Lemma 3.5] & 64 & \(4^{16}\) & [7, Lemma 6.13] \\ \hline \(31\) & \(4^{6}7^{1}\) & [13, Lemma 3.17] & & \(4^{9}7^{4}\) & Table 9 \\ \hline \(40\) & \(4^{10}\) & [7, Lemma 6.13] & & \(4^{2}7^{8}\) & Table 10 \\ \hline & \(4^{3}7^{4}\) & [3, Table 8] & 67 & \(4^{15}7^{1}\) & [13, Theorem 3.18] \\ \hline \(43\) & \(4^{9}7^{1}\) & [13, Lemma 3.17] & & \(4^{8}7^{5}\) & Table 11 \\ \hline & \(4^{2}7^{5}\) & [3, Table 15] & & \(4^{17}9\) & [15, Theorem 2.11] \\ \hline \(52\) & \(4^{13}\) & [7, Lemma 6.13] & 76 & \(4^{19}\) & [7, Lemma 6.13] \\ \hline & \(4^{6}7^{4}\) & Table 7 & & \(4^{12}7^{4}\) & Unknown \\ \hline & & & & \(4^{5}7^{8}\) & Unknown \\ \hline \end{tabular}
\end{table}
Table 1: Constructions for \(4\)-GDDs of type \(4^{t}7^{s}\) with \(v\leq 76\) points in Lemma 3.7.
If \(\ell=67\), then there are at least five groups of size \(7\). If \(s=5\), then fill in the group of size \(\ell\) with a \(4\)-GDD of type \(4^{8}7^{5}\) and set \(x=0\). If \(s\geq 9\), then fill in the group of size \(\ell\) with a \(4\)-GDD of type \(4^{1}7^{9}\) and set \(x=(s-9)/4\).
Finally, fill in \(x\) groups of size \(28\) with a \(4\)-GDD of type \(7^{4}\) and the remaining \(m-x\) groups of size \(28\), if any, with a \(4\)-GDD of type \(4^{7}\).
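To make the bookkeeping of this construction concrete, the following small Python sketch (a hypothetical helper, not part of the paper) computes the parameters \(\ell\) and \(m\) of the master design \(28^{m}\ell^{1}\) used above and checks the hypothesis \(\ell\leq 14(m-1)\) of Lemma 2.7.

```python
# Bookkeeping sketch for the master design in the proof of Lemma 3.8 (hypothetical
# helper): compute l = v mod 84 and m = (v - l)/28, and check that the input 4-GDD
# of type 28^m l^1 required by Lemma 2.7 is available.

def master_design(v):
    assert v % 12 in (4, 7), "v must be congruent to 4 or 7 (mod 12)"
    assert v >= 172 and v % 84 not in (19, 76, 79)
    l = v % 84              # size of the exceptional group, 0 <= l <= 67
    m = (v - l) // 28       # number of groups of size 28 (a multiple of 3, so m >= 6)
    assert l <= 14 * (m - 1), "hypothesis of Lemma 2.7"
    return m, l

for v in (172, 196, 235, 352):
    m, l = master_design(v)
    print(f"v = {v}: start from a 4-GDD of type 28^{m} {l}^1")
```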
**Lemma 3.9**.: _Suppose that \(v=4t+7s\) where \(v\equiv 19\pmod{84}\) and \(187\leq v\leq 523\). Then there exists a \(4\)-GDD of type \(4^{t}7^{s}\)._
Proof.: Constructions for these \(4\)-GDDs are given in Table 3. For each pair \((t,s)\), we start with a
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
\(v\) & Types & Input \(4\)-GDD & \(u\) & Fill-in \(4\)-GDD & Reference \\ \hline
\(79\) & \(4^{18}7^{1}\) & & & & Lemma 2.5 \\ \hline
 & \(4^{11}7^{5}\) & & & & Unknown \\ \hline
 & \(4^{4}7^{9}\) & \(7^{9}16^{1}\) & 0 & \(4^{4}\) & Lemma 2.6 \\ \hline
\(88\) & \(4^{22}\), \(4^{15}7^{4}\) & \(4^{15}28^{1}\) & 0 & \(4^{7},7^{4}\) & Lemma 2.5 \\ \hline
 & \(4^{8}7^{8}\) & & & & Table 12 \\ \hline
 & \(4^{1}7^{12}\) & & & & Lemma 2.6 \\ \hline
\(91\) & \(4^{21}7^{1}\) & & & & Lemma 2.5 \\ \hline
 & \(4^{14}7^{5}\) & & & & Table 13 \\ \hline
 & \(4^{7}7^{9}\), \(7^{13}\) & \(7^{9}28^{1}\) & 0 & \(4^{7},7^{4}\) & Lemma 2.6 \\ \hline
\(100\) & \(4^{25}\), \(4^{18}7^{4}\) & \(4^{18}28^{1}\) & 0 & \(4^{7},7^{4}\) & Lemma 2.5 \\ \hline
 & \(4^{11}7^{8}\) & & & & Unknown \\ \hline
 & \(4^{4}7^{12}\) & \(7^{12}16^{1}\) & 0 & \(4^{4}\) & Lemma 2.6 \\ \hline
\(103\) & \(4^{24}7^{1}\) & & & & Lemma 2.5 \\ \hline
 & \(4^{17}7^{5}\), \(4^{10}7^{9}\), \(4^{3}7^{13}\) & & & & Unknown \\ \hline
\(112\) & \(4^{28}\), \(4^{21}7^{4}\), \(4^{14}7^{8}\), \(4^{7}7^{12}\), \(7^{16}\) & \(28^{4}\) & 0 & \(4^{7},7^{4}\) & Lemma 2.2 \\ \hline
\(115\) & \(4^{27}7^{1}\) & & & & Lemma 2.5 \\ \hline
 & \(4^{20}7^{5}\), \(4^{13}7^{9}\) & & & & Unknown \\ \hline
 & \(4^{6}7^{13}\) & \(7^{12}31^{1}\) & 0 & \(4^{6}7^{1}\) & Lemma 2.6 \\ \hline
\(124\) & \(4^{31}\), \(4^{24}7^{4}\), \(4^{17}7^{8}\), \(4^{10}7^{12}\) & \(4^{10}28^{3}\) & 0 & \(4^{7},7^{4}\) & Lemma 3.6 \\ \hline
 & \(4^{3}7^{16}\) & & & & Unknown \\ \hline
\(127\) & \(4^{30}7^{1}\) & & & & Lemma 2.5 \\ \hline
 & \(4^{23}7^{5}\), \(4^{16}7^{9}\), \(4^{9}7^{13}\), \(4^{2}7^{17}\) & & & & Unknown \\ \hline
\(136\) & \(4^{34}\), \(4^{27}7^{4}\), \(4^{20}7^{8}\), \(4^{13}7^{12}\) & \(4^{13}28^{3}\) & 0 & \(4^{7},7^{4}\) & Lemma 3.6 \\ \hline
 & \(4^{6}7^{16}\) & \(7^{15}31^{1}\) & 0 & \(4^{6}7^{1}\) & Lemma 2.6 \\ \hline
\(139\) & \(4^{33}7^{1}\) & & & & Lemma 2.5 \\ \hline
 & \(4^{26}7^{5}\), \(4^{19}7^{9}\), \(4^{12}7^{13}\), \(4^{5}7^{17}\) & & & & Unknown \\ \hline
\(148\) & \(4^{37}\), \(4^{30}7^{4}\), \(4^{23}7^{8}\), \(4^{16}7^{12}\), \(4^{9}7^{16}\) & \(36^{4}\) & 4 & \(4^{10},4^{3}7^{4}\) & Lemma 2.2 \\ \hline
 & \(4^{2}7^{20}\) & \(7^{15}43^{1}\) & 0 & \(4^{2}7^{5}\) & Lemma 2.6 \\ \hline
\(151\) & \(4^{36}7^{1}\), \(4^{29}7^{5}\), \(4^{22}7^{9}\), \(4^{15}7^{13}\), \(4^{8}7^{17}\) & \(36^{4}\) & 7 & \(4^{9}7^{1},4^{2}7^{5}\) & Lemma 2.2 \\ \hline
 & \(4^{1}7^{21}\) & & & & Lemma 2.6 \\ \hline
\end{tabular}
\end{table}
Table 2: Constructions for \(4\)-GDDs of type \(4^{t}7^{s}\) with \(79\leq v\leq 151\) points in Lemma 3.7.
given input \(4\)-GDD and then apply Construction 3.1 using the given value of \(u\). We also give the required fill-in designs for each construction. Some input \(4\)-GDDs have a type of the form \(g^{p}n^{1}\) with \(g\in\{7,36,39\}\) in which case they exist by Lemma 2.3, 2.4 or 2.6. All other input \(4\)-GDDs have a type of the form \(g^{p}\) and exist by Lemma 2.2.
**Lemma 3.10**.: _Suppose that \(v=4t+7s\) where \(v\equiv 19\pmod{84}\) and \(v\geq 607\). Then there exists a \(4\)-GDD of type \(4^{t}7^{s}\)._
Proof.: If \(t=1\) or \(s=1\), then existence is given by Lemmas 2.5 and 2.6. Since \(v\equiv 19\pmod{84}\), it follows that \(v\equiv 7\pmod{12}\). Thus, by Lemma 3.5, we have \(s\equiv 1\pmod{4}\) so we may assume that \(t>1\) and \(s\geq 5\).
Set \(m=(4t+7s-187)/28\). Note that \(v\geq 607\) implies that \(m\geq 15\). This means that \(187\leq 14(m-1)\). Hence, by Lemma 2.7, there exists a \(4\)-GDD of type \(28^{m}187^{1}\).
Start with a \(4\)-GDD of type \(28^{m}187^{1}\). Then, apply Construction 3.1 with \(u=0\).
If \(s\leq 21\), then fill in the group of size \(187\) with a \(4\)-GDD of type \(4^{(187-7s)/4}7^{s}\) which exists by Lemma 3.9. Then, fill in all \(m\) groups of size \(28\) with a \(4\)-GDD of type \(4^{7}\).
If \(s\geq 25\), then fill in the group of size \(187\) with a \(4\)-GDD of type \(4^{3}7^{25}\) which exists by Lemma 3.9. Then, fill in \((s-25)/4\) groups of size \(28\) with a \(4\)-GDD of type \(7^{4}\) and fill in the remaining \(m-(s-25)/4=(t-3)/7\) groups of size \(28\) with a \(4\)-GDD of type \(4^{7}\).
**Lemma 3.11**.: _Suppose that \(v=4t+7s\) where \(v\equiv 76\pmod{84}\) and \(160\leq v\leq 496\). Then there exists a \(4\)-GDD of type \(4^{t}7^{s}\)._
Proof.: Constructions for these \(4\)-GDDs are given in Table 4. For each pair \((t,s)\), we start with a given input \(4\)-GDD and then apply Construction 3.1 using the given value of \(u\). We also give the
\begin{table}
\begin{tabular}{|c|l|l|l|l|}
\hline
\(v\) & Types & Input \(4\)-GDD & \(u\) & Fill-in \(4\)-GDDs \\ \hline
\(187\) & \(4^{45}7^{1},4^{38}7^{5},4^{31}7^{9},4^{24}7^{13},4^{17}7^{17}\) & \(36^{5}\) & \(7\) & \(4^{9}7^{1},4^{2}7^{5}\) \\ \hline
 & \(4^{10}7^{21},4^{3}7^{25}\) & \(7^{21}40^{1}\) & \(0\) & \(4^{10},4^{3}7^{4}\) \\ \hline
\(271\) & \(4^{66}7^{1},4^{59}7^{5},4^{52}7^{9},\ldots,4^{17}7^{29}\) & \(36^{6}51^{1}\) & \(4\) & \(4^{10},4^{3}7^{4},4^{12}7^{1},4^{5}7^{5}\) \\ \hline
 & \(4^{10}7^{33},4^{3}7^{37}\) & \(7^{33}40^{1}\) & \(0\) & \(4^{10},4^{3}7^{4}\) \\ \hline
\(355\) & \(4^{87}7^{1},4^{80}7^{5},4^{73}7^{9},\ldots,4^{17}7^{41}\) & \(36^{8}63^{1}\) & \(4\) & \(4^{10},4^{3}7^{4},4^{15}7^{1},4^{1}7^{9}\) \\ \hline
 & \(4^{10}7^{45},4^{3}7^{49}\) & \(7^{45}40^{1}\) & \(0\) & \(4^{10},4^{3}7^{4}\) \\ \hline
\(439\) & \(4^{108}7^{1},4^{101}7^{5},4^{94}7^{9},\ldots,4^{24}7^{49}\) & \(36^{10}75^{1}\) & \(4\) & \(4^{10},4^{3}7^{4},4^{18}7^{1},4^{4}7^{9}\) \\ \hline
 & \(4^{17}7^{53}\) & \(39^{9}84^{1}\) & \(4\) & \(4^{2}7^{5},4^{8}7^{8}\) \\ \hline
 & \(4^{10}7^{57},4^{3}7^{61}\) & \(7^{57}40^{1}\) & \(0\) & \(4^{10},4^{3}7^{4}\) \\ \hline
\(523\) & \(4^{129}7^{1},4^{122}7^{5},4^{115}7^{9},\ldots,4^{24}7^{61}\) & \(36^{12}87^{1}\) & \(4\) & \(4^{10},4^{3}7^{4},4^{21}7^{1},7^{13}\) \\ \hline
 & \(4^{17}7^{65}\) & \(39^{12}51^{1}\) & \(4\) & \(4^{2}7^{5},4^{5}7^{5}\) \\ \hline
 & \(4^{10}7^{69},4^{3}7^{73}\) & \(7^{69}40^{1}\) & \(0\) & \(4^{10},4^{3}7^{4}\) \\ \hline
\end{tabular}
\end{table}
Table 3: Constructions for \(4\)-GDDs of type \(4^{t}7^{s}\) with \(187\leq 4t+7s\leq 523\) in Lemma 3.9.
required fill-in designs for each construction. Some input \(4\)-GDDs have a type of the form \(g^{p}n^{1}\) with \(g\in\{7,36\}\) in which case they exist by Lemma 2.3 or 2.6. All other input \(4\)-GDDs have a type of the form \(g^{p}\) and exist by Lemma 2.2.
**Lemma 3.12**.: _Suppose that \(v=4t+7s\) where \(v\equiv 76\pmod{84}\) and \(v\geq 580\). Then there exists a \(4\)-GDD of type \(4^{t}7^{s}\)._
Proof.: If \(t=0\) or \(s=0\), then existence is given by Lemma 2.2. Since \(v\equiv 76\pmod{84}\), it follows that \(v\equiv 4\pmod{12}\). Thus, by Lemma 3.5, we have \(s\equiv 0\pmod{4}\) so we may assume that \(t>0\) and \(s\geq 4\).
Set \(m=(4t+7s-160)/28\). Note that \(v\geq 580\) implies that \(m\geq 15\). This means that \(160\leq 14(m-1)\). Hence, by Lemma 2.7, there exists a \(4\)-GDD of type \(28^{m}160^{1}\).
Start with a \(4\)-GDD of type \(28^{m}160^{1}\). Then, apply Construction 3.1 with \(u=0\).
If \(s\leq 16\), then fill in the group of size \(160\) with a \(4\)-GDD of type \(4^{(160-7s)/4}7^{s}\) which exists by Lemma 3.11. Then, fill in all \(m\) groups of size \(28\) with a \(4\)-GDD of type \(4^{7}\).
If \(s\geq 20\), then fill in the group of size \(160\) with a \(4\)-GDD of type \(4^{5}7^{20}\) which exists by Lemma 3.11. Then, fill in \((s-20)/4\) groups of size \(28\) with a \(4\)-GDD of type \(7^{4}\) and fill in the remaining \(m-(s-20)/4=(t-5)/7\) groups of size \(28\) with a \(4\)-GDD of type \(4^{7}\).
**Lemma 3.13**.: _Suppose that \(v=4t+7s\) where \(v\equiv 79\pmod{84}\) and \(v\in\{163,247\}\). Then there exists a \(4\)-GDD of type \(4^{t}7^{s}\)._
Proof.: Constructions for these \(4\)-GDDs are given in Table 5. For each pair \((t,s)\), we start with a given input \(4\)-GDD and then apply Construction 3.1 using the given value of \(u\). We also give the required fill-in designs for each construction. All input \(4\)-GDDs have a type of the form \(g^{p}n^{1}\) with \(g\in\{7,12,36,39\}\) in which case they exist by Lemma 2.3, 2.4 or 2.6.
Table 4: Constructions for \(4\)-GDDs of type \(4^{t}7^{s}\) with \(160\leq v\leq 496\) points in Lemma 3.11 (columns: \(v\), Types, Input \(4\)-GDD type, \(u\), Fill-in \(4\)-GDD types).
**Lemma 3.14**.: _Suppose that \(v=4t+7s\) where \(v\equiv 79\pmod{84}\) and \(v\geq 331\). Then there exists a \(4\)-GDD of type \(4^{t}7^{s}\)._
Proof.: If \(t=0\) or \(s=1\), then existence is given by Lemma 2.2 or 2.5. Since \(v\equiv 79\pmod{84}\), it follows that \(v\equiv 7\pmod{12}\). Thus, by Lemma 3.5, we have \(s\equiv 1\pmod{4}\) so we may assume that \(t>0\) and \(s\geq 5\).
Set \(m=(4t+7s-79)/28\). Note that \(v\geq 331\) implies that \(m\geq 9\). This means that \(79\leq 14(m-1)\). Hence, by Lemma 2.7, there exists a \(4\)-GDD of type \(28^{m}79^{1}\).
Start with a \(4\)-GDD of type \(28^{m}79^{1}\). Then, apply Construction 3.1 with \(u=0\).
If \(s=5\), then fill in the group of size \(79\) with a \(4\)-GDD of type \(4^{18}7^{1}\) which exists by Lemma 2.5. Then, fill in one group of size \(28\) with a \(4\)-GDD of type \(7^{4}\) and the remaining \(m-1\) groups of size \(28\) with a \(4\)-GDD of type \(4^{7}\).
If \(s\geq 9\), then fill in the group of size \(79\) with a \(4\)-GDD of type \(4^{4}7^{9}\) which exists by Lemma 3.7. Then, fill in \((s-9)/4\) groups of size \(28\) with a \(4\)-GDD of type \(7^{4}\) and fill in the remaining \(m-(s-9)/4=(t-4)/7\) groups of size \(28\) with a \(4\)-GDD of type \(4^{7}\).
## 4 Summary
**Theorem 4.1**.: _Suppose that \(t+s\equiv 1\pmod{3}\) and \(s\equiv 0\) or \(1\pmod{4}\) where either \(t\geq 4\) or \(s\geq 4\). Then there exists a \(4\)-GDD of type \(4^{t}7^{s}\) except possibly when_
* \(v=\ 76:4^{12}7^{4}\) _or_ \(4^{5}7^{8}\);
* \(v=\ 79:4^{11}7^{5}\);
* \(v=100:4^{11}7^{8}\);
* \(v=103:4^{17}7^{5}\)_,_ \(4^{10}7^{9}\) _or_ \(4^{3}7^{13}\);
* \(v=115:4^{20}7^{5}\) _or_ \(4^{13}7^{9}\);
* \(v=124:4^{3}7^{16}\);
* \(v=127:4^{23}7^{5}\)_,_ \(4^{16}7^{9}\)_,_ \(4^{9}7^{13}\) _or_ \(4^{2}7^{17}\);
* \(v=139:4^{26}7^{5}\)_,_ \(4^{19}7^{9}\)_,_ \(4^{12}7^{13}\) _or_ \(4^{5}7^{17}\).
\begin{table}
\begin{tabular}{|c|l|l|l|l|}
\hline
\(v\) & Types & Input \(4\)-GDD type & \(u\) & Fill-in \(4\)-GDD types \\ \hline
\(163\) & \(4^{39}7^{1}\) & \(12^{13}3^{1}\) & \(4\) & \(4^{4}\) \\ \hline
 & \(4^{32}7^{5},4^{25}7^{9},4^{18}7^{13},4^{11}7^{17},4^{4}7^{21}\) & \(39^{4}3^{1}\) & \(4\) & \(4^{2}7^{5},4^{9}7^{1}\) \\ \hline
\(247\) & \(4^{60}7^{1},4^{53}7^{5},4^{46}7^{9},\ldots,4^{11}7^{29}\) & \(36^{5}63^{1}\) & \(4\) & \(4^{10},4^{3}7^{4},4^{15}7^{1},4^{1}7^{9}\) \\ \hline
 & \(4^{4}7^{33}\) & \(7^{33}16^{1}\) & \(0\) & \(4^{4}\) \\ \hline
\end{tabular}
\end{table}
Table 5: Constructions for \(4\)-GDDs of type \(4^{t}7^{s}\) with \(v\in\{163,247\}\) in Lemma 3.13.
Proof.: Recall from Lemma 3.5 that the conditions that \(t+s\equiv 1\pmod{3}\) and \(s\equiv 0\) or \(1\pmod{4}\) are equivalent to the condition that \(v\equiv 4\) or \(7\pmod{12}\) where \(v=4t+7s\). If \(v\leq 151\) where either \(t\geq 4\) or \(s\geq 4\), then the \(4\)-GDDs exists by Lemma 3.7 except possibly for the values of \(t\) and \(s\) listed above. Otherwise, if \(v\geq 160\), then for each congruence class of \(v\) modulo \(84\), a reference for the relevant \(4\)-GDD is given in Table 6.
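As an illustration, the following short Python sketch (an assumed helper, not part of the paper) enumerates, for a given \(v\equiv 4\) or \(7\pmod{12}\), the types \(4^{t}7^{s}\) satisfying the conditions of Theorem 4.1 and marks the possible exceptions listed above.

```python
# Enumerate the admissible types 4^t 7^s for a given v (hypothetical helper): the
# conditions are t + s = 1 (mod 3), s = 0 or 1 (mod 4), and t >= 4 or s >= 4;
# the dictionary records the possible exceptions listed in Theorem 4.1.
EXCEPTIONS = {76: [(12, 4), (5, 8)], 79: [(11, 5)], 100: [(11, 8)],
              103: [(17, 5), (10, 9), (3, 13)], 115: [(20, 5), (13, 9)],
              124: [(3, 16)], 127: [(23, 5), (16, 9), (9, 13), (2, 17)],
              139: [(26, 5), (19, 9), (12, 13), (5, 17)]}

def admissible_types(v):
    assert v % 12 in (4, 7)
    out = []
    for s in range(v // 7 + 1):
        if (v - 7 * s) % 4:
            continue
        t = (v - 7 * s) // 4
        if (t + s) % 3 == 1 and s % 4 in (0, 1) and (t >= 4 or s >= 4):
            status = "open" if (t, s) in EXCEPTIONS.get(v, []) else "exists"
            out.append((t, s, status))
    return out

print(admissible_types(88))   # [(22, 0, 'exists'), (15, 4, 'exists'), (8, 8, 'exists'), (1, 12, 'exists')]
```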
By forming a block on each group of size \(4\) in the known \(4\)-GDDs in Theorem 4.1, we get Corollary 4.2.
**Corollary 4.2**.: _Suppose that \(t+s\equiv 1\pmod{3}\) and \(s\equiv 0\) or \(1\pmod{4}\) where either \(t\geq 4\) or \(s\geq 4\). Then there exists a \(4\)-GDD of type \(1^{4t}7^{s}\) except for the listed possible exceptions in Theorem 4.1._
## 5 Acknowledgements
This research used the computational cluster Katana supported by Research Technology Services at UNSW Sydney. The third author acknowledges the support from an Australian Government Research Training Program Scholarship and from the School of Mathematics and Statistics, UNSW Sydney.
## Orcid
R. J. R. Abel: [https://orcid.org/0000-0002-3632-9612](https://orcid.org/0000-0002-3632-9612)
T. Britz: [https://orcid.org/0000-0003-4891-3055](https://orcid.org/0000-0003-4891-3055)
Y. A. Bunjamin: [https://orcid.org/0000-0001-6849-2986](https://orcid.org/0000-0001-6849-2986)
D. Combe: [https://orcid.org/0000-0002-1055-3894](https://orcid.org/0000-0002-1055-3894)
\begin{table}
\begin{tabular}{|l|l|l|} \hline \(v\pmod{84}\) & Range of \(v\) & Reference \\ \hline \(4,7,16\) & \(v\geq 172\) & Lemma 3.8 \\ \hline \(19\) & \(187\leq v\leq 523\) & Lemma 3.9 \\ \hline & \(v\geq 607\) & Lemma 3.10 \\ \hline \(28,31,40,43,52,55,64,67\) & \(v\geq 196\) & Lemma 3.8 \\ \hline \(76\) & \(160\leq v\leq 496\) & Lemma 3.11 \\ \hline & \(v\geq 580\) & Lemma 3.12 \\ \hline \(79\) & \(163\leq v\leq 247\) & Lemma 3.13 \\ \hline & \(v\geq 331\) & Lemma 3.14 \\ \hline \end{tabular}
\end{table}
Table 6: Constructions for \(4\)-GDDs of type \(4^{t}7^{s}\) with \(v\geq 160\) points in Theorem 4.1. |
2309.06856 | Qualitative properties of the fourth-order hyperbolic equations | We investigate the qualitative properties of the weak solutions to the
boundary value problems for the hyperbolic fourth-order linear equations with
constant coefficients in the plane bounded domain convex with respect to
characteristics. The main question is to prove the analogue of maximum
principle, solvability and uniqueness results for the weak solutions of initial
and boundary value problems in the case of weak regularities of initial data
from $L^2.$ | K. Buryachenko | 2023-09-13T10:03:11Z | http://arxiv.org/abs/2309.06856v1 | # Qualitative properties of the fourth-order hyperbolic equations
###### Abstract
We investigate the qualitative properties of the weak solutions to the boundary value problems for the hyperbolic fourth-order linear equations with constant coefficients in the plane bounded domain convex with respect to characteristics. The main question is to prove the analogue of maximum principle, solvability and uniqueness results for the weak solutions of initial and boundary value problems in the case of weak regularities of initial data from \(L^{2}.\)
**Keywords:** Cauchy problem, Goursat problem, Dirichlet problem, maximum principle, hyperbolic fourth-order PDEs, weak solutions, duality equation-domain, L-traces, characteristic billiard, John's mapping, Fredholm property.
## 1 Introduction
This paper is devoted to the problem of proving an analog of the maximum principle and to its further application to questions of uniqueness and existence of weak solutions of the Goursat, Cauchy and Dirichlet problems for fourth-order linear hyperbolic equations with constant coefficients and a homogeneous non-degenerate symbol in a plane bounded domain \(\Omega\subset\mathbb{R}^{2}\) convex with respect to the characteristics:
\[L(\partial_{x})u=a_{0}\frac{\partial^{4}u}{\partial x_{1}^{4}}+a_{1}\frac{ \partial^{4}u}{\partial x_{1}^{3}\partial x_{2}}+a_{2}\frac{\partial^{4}u}{ \partial x_{1}^{2}\partial x_{2}^{2}}+a_{3}\frac{\partial^{4}u}{\partial x_{ 1}\partial x_{2}^{3}}+a_{4}\frac{\partial^{4}u}{\partial x_{2}^{4}}=f(x). \tag{1.1}\]
Here the coefficients \(a_{j},\,j=0,\,1,...,\,4\) are constant, \(f(x)\in L^{2}(\Omega)\), and \(\partial_{x}=\left(\frac{\partial}{\partial x_{1}},\,\frac{\partial}{\partial x_{2}}\right).\) We assume that Eq. (1.1) is hyperbolic, which means that all roots of the characteristic equation

\[L(1,\,\lambda)=a_{0}\lambda^{4}+a_{1}\lambda^{3}+a_{2}\lambda^{2}+a_{3}\lambda +a_{4}=0\]

are simple, real and not equal to \(\pm i\); that is, the symbol of Eq. (1.1) is non-degenerate, or, equivalently, Eq. (1.1) is a principal-type equation. The equations for which the roots of the corresponding characteristic equation are multiple or can take the values \(\pm i\) are called equations with degenerate symbol (see [7]).
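For a quick sanity check of this hyperbolicity condition one can compute the roots of the characteristic equation numerically. The following short Python sketch is purely illustrative (the sample coefficients are not taken from the paper, and standard numpy is assumed); it verifies that the roots are real and simple and returns the slope angles \(\varphi_{j}\) introduced below.

```python
import numpy as np

# Illustrative check of the hyperbolicity condition for Eq. (1.1): the characteristic
# equation a0*l^4 + a1*l^3 + a2*l^2 + a3*l + a4 = 0 must have four real simple roots
# (in particular none equal to +-i); the slope angles satisfy -tan(phi_j) = lambda_j.
a0, a1, a2, a3, a4 = 1.0, 0.0, -5.0, 0.0, 4.0      # sample symbol, factoring as (l^2-1)(l^2-4)
lam = np.roots([a0, a1, a2, a3, a4])

real = np.max(np.abs(lam.imag)) < 1e-9
simple = real and len(set(np.round(lam.real, 9))) == 4
print("roots lambda_j:", np.sort(lam.real) if real else lam)
print("hyperbolic with non-degenerate symbol:", real and simple)
if real:
    print("slope angles phi_j (radians):", np.arctan(-np.sort(lam.real)))
```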
The main novelty of the paper is an analog of the maximum principle for fourth-order hyperbolic equations. This question is important because the maximum principle usually has a natural physical interpretation and helps to establish qualitative properties of solutions (in our case, uniqueness and existence of weak solutions). As is well known, however, maximum principles even in the simplest hyperbolic case (the one-dimensional wave equation [22]) are quite different from those for elliptic and parabolic equations, for which they are natural facts; in this way the role of characteristic curves and surfaces becomes evident in the hyperbolic situation.

We call the solution \(\varphi_{j}\) of the equation \(-\tan\varphi_{j}=\lambda_{j}\) the slope angle of the \(j\)-th characteristic and assume that the angle between the \(j\)-th and \(k\)-th characteristics satisfies \(\varphi_{k}-\varphi_{j}\neq\pi l,\,l\in\mathbb{Z}\), where \(\lambda_{j}\neq\pm i\) are the real simple roots of the characteristic equation, \(j,\,k=1,\,2,\,3,\,4\).
Many such equations serve as mathematical models of physical processes and attract the interest of researchers. The best known among them are elastic beam equations (Timoshenko beam equations with and without internal damping) [9], the short laser pulse equation [7], equations describing structures subjected to moving loads, and the equation of an Euler-Bernoulli beam resting on a two-parameter Pasternak foundation and subjected to a moving load or mass [23], among others.

Because of their evident practical applications, these models require more precise analytical tools and, as a consequence, attract fundamental studies. At present, most of these models are studied by analytical-numerical methods (Galerkin-type methods).
The range of problems studied in this work belongs to the class of quite topical problems on the well-posedness of so-called general boundary-value problems for higher-order differential equations, originating from the works by L. Hormander and M. Vishik, who used the theory of extensions to prove the existence of well-posed boundary-value problems for linear differential equations of arbitrary order with constant complex coefficients in a bounded domain with smooth boundary. This theory received its present-day development in the works by G. Grubb [12], L. Hormander [13], and A. Posilicano [21]. Later, the well-posedness of boundary-value problems for various types of second-order differential equations was studied by V. Burskii and A. Zhedanov [2], [3], who developed a method of traces associated with a differential operator and applied this method to the Poncelet, Abel and Goursat problems. In previous works of the author (see [6]), qualitative methods were developed for studying the Cauchy problem and the Dirichlet and Neumann problems (which are nonstandard for hyperbolic equations) for linear fourth-order equations (moreover, for equations of any even order \(2m,\,m\geq 2\)) with the help of operator methods (L-traces, the theory of extensions, moment problems, the method of equation-domain duality and others), [4]. There, existence and uniqueness results were proved and criteria of nontrivial solvability of the Dirichlet and Neumann problems in a disk were obtained for principal-type equations and equations with degenerate symbol; in particular, an interrelation was established between the multiplicity of the roots of the characteristic equation and the existence of a nontrivial solution of the corresponding Dirichlet and Neumann problems (as a consequence, the Fredholm property of the operators of such problems was established).
As concerns the maximum principle, at present there are no results for fourth-order equations even in the linear case. As mentioned above, maximum principles even for the simple case of the one-dimensional wave equation [22] and for the second-order telegraph equation [17]--[20] are quite different from those for the elliptic and parabolic cases. In the monograph of Protter and Weinberger [22] it was shown that solutions of hyperbolic equations and inequalities do not obey the classical formulation of the maximum principle. Even in the simplest case of the wave equation in two independent variables, \(u_{tt}-u_{xx}=0\), the maximum of the nonconstant solution \(u=\sin x\sin t\) in the rectangle \(\{(x,t):\,x\in[0,\pi],\,t\in[0,\pi]\}\) occurs at the interior point \(\left(\frac{\pi}{2},\,\frac{\pi}{2}\right)\). In Chapter 4 of [22] a maximum principle for linear second-order hyperbolic equations of general type, with variable coefficients, is obtained for Cauchy problems and for boundary value problems with data on characteristics (the Goursat problem).
Following R. Ortega and A. Robles-Perez [20], we introduce a "weak form" of the maximum principle suitable for hyperbolic equations, which will be used later.

_Definition 1._ [20] Let \(L\) be a linear differential operator acting on functions \(u:\,D\to\mathbb{R}\) in some domain \(D\). These functions belong to a certain family \(B\), which encodes boundary conditions or other requirements. It is said that \(L\) satisfies the maximum principle if

\[Lu\geq 0,\ u\in B,\]
implies \(u\geq 0\) in \(D\).
In further works of these authors (see [17], [18], [19]) the maximum principle was studied for weak bounded doubly periodic solutions from the space \(L^{\infty}\) of the telegraph equation with a parameter \(\lambda\) in the lower-order term, in one, two and three space dimensions, including the case of variable coefficients. A precise condition on \(\lambda\) under which the maximum principle remains valid was found. A method of upper and lower solutions associated with the nonlinear equation was also introduced, which allows one to obtain analogous results (uniqueness, existence and regularity theorems) for telegraph equations with an external nonlinear forcing by applying the maximum principle. The case when the external forcing belongs to a certain space of measures was considered as well.

The maximum principle for general quasilinear hyperbolic systems with dissipation was proved by Kong De Xing [16]. Two estimates for the solution of a general quasilinear hyperbolic system were given, the concept of dissipation (strong dissipation and weak dissipation) was introduced, and maximum principles for quasilinear hyperbolic systems with dissipation were stated. Using the maximum principle, the existence and uniqueness theorems for the global smooth solution of the Cauchy problem for the considered quasilinear hyperbolic systems were reproved.
Thus the problem of proving a maximum principle for weak solutions becomes still more complicated, and at the same time more interesting, in the case of fourth-order hyperbolic equations, especially for non-classical boundary value problems with data of weak regularity. There are no results on the maximum principle even for the model case of linear two-dimensional fourth-order hyperbolic equations with constant coefficients and without lower-order terms. Moreover, one cannot use the usual notion of traces in the case of initial data of weak regularity, and we arrive at the notion of \(L-\)traces, the traces associated with the differential operator. Let us recall (see, for example, [2]) that \(L-\)traces exist for weak solutions from the space \(L^{2}\) even in situations where the classical notion of trace does not work for such solutions.
An interesting interpretation of the violation of the Fredholm property arises via the periodicity of the characteristic billiard, or John's mapping. In the simple case of second-order hyperbolic equations it was proved in [14], [3] that the periodicity of John's algorithm is sufficient for the violation of the Fredholm property of the Dirichlet problem. An analogous result is true for fourth-order hyperbolic equations and will be proved in the present paper.

Therefore, establishing the maximum principle and obtaining results on uniqueness, existence and regularity, kernel dimension and the Fredholm property for weak solutions of fourth-order hyperbolic operators and of boundary value problems for them is very important, both for applications and for the further study of these problems, and this is the main goal of the paper.
## 2 Statement of the problem and auxiliary definitions
Let us start by establishing the maximum principle for weak solutions to the Cauchy problem for Eq. (1.1) in an admissible planar domain. It is expected that, in the hyperbolic case, the characteristics of the equation play a crucial role.

Let \(C_{j},\,j=1,\,2,\,3,\,4\) be characteristics, \(\Gamma_{0}:=\{x_{1}\in[a,\,b],\,x_{2}=0\}\), and define \(\Omega\) as the domain bounded by the characteristics \(C_{j},\,j=1,\,2,\,3,\,4\) and \(\Gamma_{0}.\) Consider also the following Cauchy problem for Eq. (1.1) on \(\Gamma_{0}:\)

\[u|_{\Gamma_{0}}=\varphi(x),\,u^{\prime}_{\nu}|_{\Gamma_{0}}=\psi(x),\,u^{\prime\prime}_{\nu\nu}|_{\Gamma_{0}}=\sigma(x),\,u^{\prime\prime\prime}_{\nu\nu\nu}|_{\Gamma_{0}}=\chi(x), \tag{2.1}\]

where \(\varphi,\,\psi,\,\sigma\) and \(\chi\) are given weakly regular functions on \(\Gamma_{0}\); in the general case \(\varphi,\,\psi,\,\sigma,\,\chi\in L^{2}(\Gamma_{0})\), and \(\nu\) is the outer normal to \(\Gamma_{0}.\)
_Definition 2_. We call a domain \(D:=\{(x_{1},\,x_{2}):\,x_{1}\in(-\infty,\,+\infty),\,x_{2}>0\}\) in the half-plane \(x_{2}>0\) admissible if for each point \(C\in D\) the corresponding characteristic domain \(\Omega\) also lies in \(D\). More generally, \(D\) is admissible if it is a finite or countable union of characteristic pentagons (in the case of fourth-order equations with constant coefficients, i.e., when there exist four distinct real characteristic lines).

Establishing the maximum principle in this situation allows us to obtain local properties of the solution to the Cauchy problem (1.1)--(2.1) at an arbitrary interior point \(C\in D\).

We will consider weak solutions to the problem (1.1)--(2.1) from \(D(L)\), the domain of definition of the maximal operator associated with the differential operation \(L\) in Eq. (1.1). Following [6], [12] and [13], we recall the corresponding definitions.
In the bounded domain \(\Omega\) we consider the linear differential operation \(\mathcal{L}\) of the order \(m,\,m\geq 2,\) and formally adjoint \(\mathcal{L}^{+}\):
\[\mathcal{L}(D_{x})=\sum_{|\alpha|\leq m}a_{\alpha}D^{\alpha},\,\mathcal{L}^{+ }(D_{x})=\sum_{|\alpha|\leq m}D^{\alpha}(a_{\alpha}), \tag{2.2}\]
where \(\alpha=(\alpha_{1},\,\alpha_{2},...,\alpha_{n})\), \(|\alpha|=\alpha_{1}+\alpha_{2}+...+\alpha_{n}\), is a multi-index. Note that for Eq. (1.1) we have \(n=2,\,m=4.\)
**Definition 3**: _Minimum operator_ [6]. Consider the differential operation (2.2) on functions from the space \(C_{0}^{\infty}(\Omega).\) The minimum operator \(L_{0}\) is the extension of this operation from \(C_{0}^{\infty}(\Omega)\) to the set \(D(L_{0}):=\overline{C_{0}^{\infty}(\Omega)}\), where the closure is taken in the graph norm of the operator \(L\): \(||u||_{L}^{2}:=||u||_{L_{2}(\Omega)}^{2}+||Lu||_{L_{2}(\Omega)}^{2}.\)

**Definition 4**: _Maximum operator_ [6]. The maximum operator \(L\) is defined as the restriction of the differential operation \({\cal L}(D_{x})\) to the set \(D(L):=\{u\in L^{2}(\Omega):\,Lu\in L^{2}(\Omega)\}.\)

**Definition 5**: [6]. The operator \(\tilde{L}\) is defined as the extension of the minimum operator \(L_{0}\) to the set \(D(\tilde{L}):=\overline{C^{\infty}(\Omega)}.\)
**Definition 6**: _Regular operator._[6]. The maximum operator is called regular if \(D(L)=D(\tilde{L}).\)
It is easy to see that in the case of the fourth-order operation (1.1) the maximal operator is regular and \(D(L)=D(\tilde{L})=H^{4}(\Omega)\), \(D(L_{0})=\overset{0}{H}{}^{4}(\Omega)\), where \(H^{4}(\Omega)\) is the Hilbert Sobolev space of functions from \(L^{2}(\Omega)\) having four weak derivatives in \(L^{2}(\Omega).\)
The definition of a weak solution to the problem (1.1)--(2.1) from the space \(D(L)\) is closely connected with the notion of \(L-\)traces, that is, traces associated with the differential operator \(L.\)

**Definition 7**: _\(L\)-traces_ [5]. Assume that for a function \(u\in D(\tilde{L})\) there exist linear continuous functionals \(L_{(p)}u\) over the space \(H^{m-p-1/2}(\partial\Omega),\)\(p=0,1,2,...,m-1,\) such that the following equality is satisfied:
\[(Lu,v)_{L^{2}(\Omega)}-(u,L^{+}v)_{L^{2}(\Omega)}=\sum_{j=0}^{m-1}(L_{(m-1-j) }u,\,\partial_{\nu}^{(j)}v). \tag{2.3}\]
The functionals \(L_{(p)}u\) are called the \(L_{(p)}-\)traces of the function \(u\in D(\tilde{L}).\) Here \((\cdot,\,\cdot)_{L^{2}(\Omega)}\) is the scalar product in the Hilbert space \(L^{2}(\Omega).\)

For \(L^{2}-\)solutions the notion of \(L_{(p)}-\)traces can be realized in the following way.
**Definition 8**: \(L\)_-traces._
Finally, we turn to the definition of a weak solution to the problem (1.1)--(2.1):
**Definition 9**: _We will call the function \(u\in D(L)\) a weak solution to the Cauchy problem (1.1)-(2.1) if it satisfies the following integral identity_
\[(f,\,v)_{L^{2}(\Omega)}-(u,\,L^{+}v)_{L^{2}(\Omega)}=\sum_{j=0}^{3}(L_{(3-j)}u,\,\partial_{\nu}^{(j)}v), \tag{2.4}\]
_for any function \(v\in C_{0}^{\infty}(\Omega).\) The functionals \(L_{(p)}u\) are called the \(L_{(p)}-\) traces of the function \(u,\,p=0,\,1,\,2,\,3,\) and are completely determined by the initial functions \(\varphi,\,\psi,\,\sigma,\,\chi\) in the following way:_
\[\begin{array}{c}L_{(0)}u=-L(x)u|_{\partial\Omega}=-L(\nu)\varphi;\\ \\ L_{(1)}u=L(\nu)\psi+\alpha_{1}\varphi_{\tau}^{\prime}+\alpha_{2}\varphi;\\ \\ L_{(2)}u=-L(\nu)\sigma+\beta_{1}\psi_{\tau}^{\prime}+\beta_{2}\psi+\beta_{3} \varphi_{\tau\tau}^{\prime\prime}+\beta_{4}\varphi_{\tau}^{\prime}+\beta_{5} \varphi;\\ \\ L_{(3)}u=L(\nu)\chi+\delta_{1}\varphi_{\tau\tau\tau}^{\prime\prime}+\delta_{2} \sigma+\delta_{3}\psi_{\tau\tau}^{\prime\prime}+\delta_{4}\psi_{\tau}^{ \prime}+\delta_{5}\psi+\delta_{6}\varphi_{\tau\tau}^{\prime\prime}+\delta_{7} \varphi_{\tau}^{\prime}+\delta_{8}\varphi.\end{array} \tag{2.5}\]
_Here \(\alpha_{i},\,i=1,\,2,\,\beta_{j},\,j=1,\,2,...,\,5,\) and \(\delta_{k},\,k=1,\,...,\,9\) are smooth functions, completely determined by the coefficients of the Eq.(1.1)._
_Remark 1._ We can use a general form of the operators \(\gamma_{j}\) on the right-hand side of the identity (2.4) instead of the differentiation operators \(\partial_{\nu}^{(j)}v\). Indeed, we define \(\gamma_{j}=p_{j}\gamma\), where \(\gamma:\,u\in H^{m}(\Omega)\rightarrow(u|_{\partial\Omega},\,...,\,u_{\nu}^{(m-1)}|_{\partial\Omega})\in H^{(m)}=H^{m-1/2}(\partial\Omega)\times H^{m-3/2}(\partial\Omega)\times...\times H^{1/2}(\partial\Omega)\), and \(p_{j}:\,H^{(m)}\to H^{m-j-1/2}(\partial\Omega)\) is the projection.
As mentioned above, examples show (see [2]) that in the general case solutions \(u\in D(L)\) may have no ordinary traces in the sense of distributions even for the simplest hyperbolic equations. Indeed, for the wave equation \(Lu=\frac{\partial^{2}u}{\partial x_{1}\partial x_{2}}=0\) in the unit disk \(K:\,|x|=1\), the solution \(u(x)=(1-x_{1}^{2})^{-\frac{5}{8}}\) belongs to \(L^{2}(K)\), but \(<u|_{\partial K},1>_{\partial K}=\infty\); this means that \(\lim_{r\to 1-0}\int\limits_{|x|=r}u(x)ds_{x}=\infty\), so the trace \(u|_{\partial K}\) does not exist even as a distribution. However, for every solution \(u\in L^{2}(K)\) the \(L_{(0)}-\)trace \(L_{(0)}u:=-L(x)u(x)|_{|x|=1}=-x_{1}x_{2}u(x)|_{|x|=1}\in L^{2}(\partial K).\) Likewise, the \(L_{(1)}-\)trace \(L_{(1)}u\) exists for every \(u\in L^{2}(K)\):
\[L_{(1)}u=\left(L(x)u_{\nu}^{\prime}+L_{\tau}^{\prime}u_{\tau}^{\prime}+\frac{ 1}{2}L_{\tau\tau}^{\prime\prime}u\right)|_{\partial K}\in H^{-\frac{3}{2}}( \partial K).\]
where \(\tau\) is the angular coordinate and \(u_{\tau}^{\prime}\) is the tangential derivative, and \(L(x)=x_{1}x_{2}-\) symbol of the operator \(L=\frac{\partial^{2}}{\partial x_{1}\partial x_{2}}\).
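The divergence of the boundary integrals in this example is easy to check numerically. The following sketch is illustrative only (it assumes the standard numpy and scipy packages): it confirms that \(u(x)=(1-x_{1}^{2})^{-5/8}\) has a finite \(L^{2}(K)\) norm while \(\int_{|x|=r}u\,ds\) grows without bound as \(r\to 1-0\).

```python
import numpy as np
from scipy.integrate import quad

# u(x) = (1 - x1^2)^(-a) with a = 5/8 solves u_{x1 x2} = 0 in the unit disk K,
# belongs to L^2(K) (since 2a - 1/2 < 1), but has no trace on the boundary.
a = 5.0 / 8.0

# ||u||^2_{L^2(K)} = int_{-1}^{1} (1 - x1^2)^(-2a) * 2*sqrt(1 - x1^2) dx1  (finite)
l2_norm_sq, _ = quad(lambda x1: 2.0 * (1.0 - x1**2) ** (0.5 - 2.0 * a), -1.0, 1.0)
print("||u||^2 over K  ~", round(l2_norm_sq, 3))

# int_{|x|=r} u ds = r * int_0^{2 pi} (1 - r^2 cos^2 t)^(-a) dt  blows up as r -> 1-0
for r in (0.9, 0.99, 0.999, 0.9999):
    val, _ = quad(lambda t: (1.0 - (r * np.cos(t)) ** 2) ** (-a), 0.0, 2.0 * np.pi)
    print(f"r = {r}:  boundary integral ~ {r * val:.1f}")
```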
## 3 Maximum principle for the weak solutions of Cauchy problem. Existence, uniqueness and regularity of solution
We prove here the first simple case: the maximum principle for weak solutions of the Cauchy problem (1.1)--(2.1) in an admissible plane domain \(\Omega\) bounded by the distinct and non-congruent characteristics \(C_{j}\), \(j=1,\,2,...,\,4,\) and the initial line \(\Gamma_{0}\).
_Theorem 1. Maximum principle._ Let \(u\in D(L)\) satisfy the following inequalities:
\[Lu=f\leq 0,\,\,\,x\in D, \tag{3.1}\]
and
\[L_{(0)}u\mid_{\Gamma_{0}}\geq 0,\,L_{(1)}u|_{\Gamma_{0}}\geq 0,\,L_{(2)}u|_{ \Gamma_{0}}\geq 0,\,L_{(3)}u|_{\Gamma_{0}}\geq 0, \tag{3.2}\]
then \(u\leq 0\) in \(D\).
_Proof._
Due to homogeneity of the symbol in Eq. (1.1), \(L(\xi)=a_{0}\xi_{1}^{4}+a_{1}\xi_{1}^{3}\xi_{2}+a_{2}\xi_{1}^{2}\xi_{2}^{2}+a_ {3}\xi_{1}\xi_{2}^{3}+a_{4}\xi_{2}^{4}=\)\(<\xi,\,a^{1}><\xi,\,a^{2}><\xi,\,a^{3}><\xi,\,a^{4}>,\,\xi=(\xi_{1},\,\xi_{2})\in \mathbb{R}^{2}\), we can rewrite this equation in the following form:
\[<\nabla,\,a^{1}><\nabla,\,a^{2}><\nabla,\,a^{3}><\nabla,\,a^{4}>u=f(x). \tag{3.3}\]
The vectors \(a^{j}=(a_{1}^{j},\,a_{2}^{j}),\,j=1,\,2,\,3,\,4\) are determined by the coefficients \(a_{i},\,i=0,\,1,\,2,\,3,\,4\), and \(<a,\,b>=a_{1}\bar{b}_{1}+a_{2}\bar{b}_{2}\) is the scalar product in \(\mathbb{C}^{2}\). It is easy to see that the vector \(a^{j}\) is a tangent vector of the \(j-\)th characteristic, whose slope \(\varphi_{j}\) is determined by \(-\tan\varphi_{j}=\lambda_{j},\,j=1,\,2,\,3,\,4\). In what follows, we also consider the vectors \(\tilde{a}^{j}=(-\bar{a}_{2}^{j},\,\bar{a}_{1}^{j}),\,j=1,\,2,\,3,\,4.\) It is obvious that \(<\tilde{a}^{j},\,a^{j}>=0\), so \(\tilde{a}^{j}\) is a normal vector of the \(j-\)th characteristic.
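As a numerical illustration of this factorization (with sample coefficients and assuming real \(a_{i}\), so that the Hermitian product reduces to the real dot product), one possible normalization is \(a^{j}=(1,\,-\lambda_{j})\) and \(\tilde{a}^{j}=(\lambda_{j},\,1)\); the sketch below checks the factorization of the symbol and the orthogonality \(<\tilde{a}^{j},\,a^{j}>=0\).

```python
import numpy as np

# Sketch of the factorization (3.3): with the real roots lambda_j of L(lambda, 1) = 0
# one may take a^j = (1, -lambda_j) (tangent to the j-th characteristic) and
# tilde-a^j = (lambda_j, 1) (normal to it), so that L(xi) = a0 * prod_j <xi, a^j>.
coeffs = [1.0, 0.0, -5.0, 0.0, 4.0]          # a0, a1, a2, a3, a4
a0, lam = coeffs[0], np.roots(coeffs).real
A  = np.array([[1.0, -l] for l in lam])      # tangent vectors a^j
An = np.array([[l, 1.0] for l in lam])       # normal vectors tilde-a^j

L = lambda xi: sum(c * xi[0] ** (4 - k) * xi[1] ** k for k, c in enumerate(coeffs))
xi = np.array([0.7, -1.3])
print(np.isclose(L(xi), a0 * np.prod(A @ xi)))     # True: the symbol factors
print(np.allclose(np.sum(A * An, axis=1), 0.0))    # True: <tilde-a^j, a^j> = 0
```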
We use Definitions 7 and 9 for the case \(m=4\), that is, for the fourth-order operator in Eq. (1.1), and for the domain \(\Omega\) bounded by the characteristics \(C_{j}\), \(j=1\), 2, 3, 4 and \(\Gamma_{0}:\)
\[\int\limits_{\Omega}\{Lu\cdot\bar{v}-u\cdot\overline{L^{+}v}\}dx=\sum\limits_ {k=0}^{3}\int\limits_{\partial\Omega}L_{(3-k)}u\cdot\partial_{\nu}^{(k)}v\,ds=\]
\[=\sum\limits_{k=0}^{3}\int\limits_{C_{1}}L_{(3-k)}u\cdot\partial_{\nu}^{(k)}v \,ds+\sum\limits_{k=0}^{3}\int\limits_{C_{2}}L_{(3-k)}u\cdot\partial_{\nu}^{(k )}v\,ds+\sum\limits_{k=0}^{3}\int\limits_{C_{3}}L_{(3-k)}u\cdot\partial_{\nu}^{( k)}v\,ds+\sum\limits_{k=0}^{3}\int\limits_{C_{4}}L_{(3-k)}u\cdot\partial_{\nu}^{(k )}v\,ds+\]
\[+\sum\limits_{k=0}^{3}\int\limits_{\Gamma_{0}}L_{(3-k)}u\cdot\partial_{\nu}^{( k)}v\,ds. \tag{3.4}\]
Using the representation (3.3), we arrive to
\[\int\limits_{\Omega}Lu\cdot\bar{v}\,dx=\int\limits_{\Omega}<\nabla,\,a^{1}>< \nabla,\,a^{2}><\nabla,\,a^{3}><\nabla,\,a^{4}>u\cdot\bar{v}\,dx=\]
\[\int\limits_{\partial\Omega}<\nu,\,a^{1}>\cdot<\nabla,\,a^{2}><\nabla,\,a^{3}> <\nabla,\,a^{4}>u\cdot\bar{v}\,ds-\]
\[\int\limits_{\Omega}<\nabla,\,a^{2}><\nabla,\,a^{3}><\nabla,\,a^{4}>u\cdot \overline{<\nabla,\,a^{1}>v}\,dx.\]
Integrating by parts further, we obtain:
\[\int\limits_{\Omega}Lu\cdot\bar{v}\,dx=\int\limits_{\partial\Omega}<\nu,\,a^{ 1}>\cdot<\nabla,\,a^{2}><\nabla,\,a^{3}><\nabla,\,a^{4}>u\cdot\bar{v}\,ds-\]
\[\int\limits_{\partial\Omega}<\nu,\,a^{2}>\cdot<\nabla,\,a^{3}><\nabla,\,a^{4 }>u\cdot\overline{<\nabla,\,a^{1}>v}\,ds+\]
\[+\int\limits_{\partial\Omega}<\nu,\,a^{3}>\cdot<\nabla,\,a^{4}>u\cdot \overline{<\nabla,\,a^{2}><\nabla,\,a^{1}>v}\,ds-\]
\[-\int\limits_{\partial\Omega}<\nu,\,a^{4}>u\cdot\overline{<\nabla,\,a^{3}>< \nabla,\,a^{2}><\nabla,\,a^{1}>v}\,ds+\]
\[+\int\limits_{\Omega}u\cdot\overline{<\nabla,\,a^{4}><\nabla,\,a^{3}><\nabla, \,a^{2}><\nabla,\,a^{1}>v}\,dx.\]
Since \(<\nabla,\,a^{4}><\nabla,\,a^{3}><\nabla,\,a^{2}><\nabla,\,a^{1}>v=L^{+}v\), we define
\[\tilde{L}_{(0)}u:=<\nu,\,a^{4}>u,\,\,\tilde{L}_{(1)}u:=<\nu,\,a^{3}>\cdot< \nabla,\,a^{4}>u,\]
\[\tilde{L}_{(2)}u:=<\nu,\,a^{2}>\cdot<\nabla,\,a^{3}><\nabla,\,a^{4}>u,\]
\[\tilde{L}_{(3)}u=L_{(3)}u=<\nu,\,a^{1}>\cdot<\nabla,\,a^{2}><\nabla,\,a^{3}> <\nabla,\,a^{4}>u\]
which are analogues of the \(L-\)traces in the formula (3.4). In this way we obtain
\[\int\limits_{\Omega}\{Lu\cdot\bar{v}-u\cdot\overline{L^{+}v}\}\,dx=\int \limits_{\partial\Omega}L_{(3)}u\cdot\bar{v}\,ds-\int\limits_{\partial\Omega} \tilde{L}_{(2)}u\cdot\overline{<\nabla,\,a^{1}>v}\,ds+\]
\[+\int\limits_{\partial\Omega}\tilde{L}_{(1)}u\cdot\overline{<\nabla,\,a^{2}><\nabla, \,a^{1}>v}\,ds-\int\limits_{\partial\Omega}\tilde{L}_{(0)}u\cdot\overline{< \nabla,\,a^{3}><\nabla,\,a^{2}><\nabla,\,a^{1}>v}\,ds. \tag{3.5}\]
The difference between formulas (3.4) and (3.5) is that the natural \(L_{(3-k)}\) traces in (3.4) are multiplied by the \(k\)-th derivative along the outer normal \(\nu\) of the test function \(v\), \(\partial_{\nu}^{(k)}v\), while in (3.5) we denote by \(\tilde{L}_{(3-k)}\) certain expressions multiplied by differential operators \(L_{k}^{+}v\) of order \(k\), which can serve as analogues of the natural \(L_{(3-k)}\) traces, \(k=0,\,1,\,2,\,3.\) Here

\[L_{1}^{+}v:=<\nabla,\,a^{1}>v,\,L_{2}^{+}v:=<\nabla,\,a^{2}><\nabla,\,a^{1}>v,\]
\[L_{0}^{+}v=v,\,L_{3}^{+}v:=<\nabla,\,a^{3}><\nabla,\,a^{2}><\nabla,\,a^{1}>v.\]
Let \(v\in KerL^{+}\) in (3.5) and calculate the \(L-\)traces on \(\partial\Omega=C_{1}\cup C_{2}\cup C_{3}\cup C_{4}\cup\Gamma_{0}\). For instance, for \(L_{(3)}u\) we obtain \(L_{(3)}u=<\nu,\,a^{1}><\nabla,\,a^{2}><\nabla,\,a^{3}><\nabla,\,a^{4}>u\), and use that
\(<\nabla,\,a^{j}>u=<\nu,\,a^{j}>u^{\prime}_{\nu}+<\tau,\,a^{j}>u^{\prime}_{\tau},\,j=1,\,2,\,3,\,4\), where \(\nu\) is the normal vector and \(\tau\) the tangent vector. Due to the presence of the factor \(<\nu,\,a^{1}>\), we have \(L_{(3)}u=0\) on the characteristic \(C_{1}\), whose normal vector \(\tilde{a}^{1}\) is orthogonal to the vector \(a^{1}\). On the other parts of \(\partial\Omega\), the terms containing \(<\nu,a^{j}>\) vanish on \(C_{j}\). After that
\[\int\limits_{\partial\Omega}<\nu,\,a^{1}><\nabla,\,a^{2}><\nabla,\,a^{3}>< \nabla,\,a^{4}>u=\int\limits_{\Gamma_{0}}L_{(3)}u\,ds+\]
\[<\tilde{a}^{2},\,a^{1}><a^{2},\,a^{2}><\tilde{a}^{2},\,a^{3}><\tilde{a}^{2},\, a^{4}>\int\limits_{C_{2}}u_{\nu\tau}\,ds+\]
\[<\tilde{a}^{3},\,a^{1}><\tilde{a}^{3},\,a^{2}><a^{3},\,a^{3}><\tilde{a}^{3}, \,a^{4}>\int\limits_{C_{3}}u_{\nu\tau}\,ds+\]
\[<\tilde{a}^{4},\,a^{1}><\tilde{a}^{4},\,a^{2}><\tilde{a}^{4},\,a^{3}><a^{4}, \,a^{4}>\int\limits_{C_{4}}u_{\nu\tau}\,ds+\]
\[\{<\tilde{a}^{2},\,a^{1}><a^{2},\,a^{2}><\tilde{a}^{2},\,a^{3}><a^{2},\,a^{4}> +<\tilde{a}^{2},\,a^{1}><a^{2},\,a^{2}><a^{2},\,a^{3}><\tilde{a}^{2},\,a^{4}>\} \int\limits_{C_{2}}u_{\tau\tau\nu}\,ds+\]
\[\{<\tilde{a}^{3},\,a^{1}><\tilde{a}^{3},\,a^{2}><a^{3},\,a^{3}><a^{3},\,a^{4}> +<\tilde{a}^{3},\,a^{1}><a^{3},\,a^{2}><a^{3},\,a^{3}><\tilde{a}^{3},\,a^{4}> \}\int\limits_{C_{3}}u_{\tau\tau\nu}\,ds+\]
\[\{<\tilde{a}^{4},\,a^{1}><\tilde{a}^{4},\,a^{2}><a^{4},\,a^{3}><a^{4},\,a^{4}> +<\tilde{a}^{4},\,a^{1}><a^{4},\,a^{2}><\tilde{a}^{4},\,a^{3}><a^{4},\,a^{4}> \}\int\limits_{C_{4}}u_{\tau\tau\nu}\,ds+\]
\[<\tilde{a}^{2},\,a^{1}><a^{2},\,a^{2}><a^{2},\,a^{3}><a^{2},\,a^{4}>\int \limits_{C_{2}}u_{\tau\tau\tau}\,ds+\]
\[<\tilde{a}^{3},\,a^{1}><a^{3},\,a^{2}><a^{3},\,a^{3}><a^{3},\,a^{4}>\int \limits_{C_{3}}u_{\tau\tau\tau}\,ds+\]
\[<\tilde{a}^{4},\,a^{1}><a^{4},\,a^{2}><a^{4},\,a^{3}><a^{4},\,a^{4}>\int \limits_{C_{4}}u_{\tau\tau\tau}\,ds+\alpha_{4,1}\int\limits_{C_{2}}u_{\nu\nu} \,ds+\alpha_{4,2}\int\limits_{C_{3}}u_{\nu\nu}\,ds+\alpha_{4,3}\int\limits_{C_{4 }}u_{\nu\nu}\,ds+\]
\[\alpha_{5,1}\int\limits_{C_{2}}u_{\nu\tau}\,ds+\alpha_{5,2}\int\limits_{C_{3}}u_{ \nu\tau}\,ds+\alpha_{5,3}\int\limits_{C_{4}}u_{\nu\tau}\,ds+\alpha_{6,1}\int \limits_{C_{2}}u_{\tau\tau}\,ds+\alpha_{6,2}\int\limits_{C_{3}}u_{\tau\tau}\, ds+\alpha_{6,3}\int\limits_{C_{4}}u_{\tau\tau}\,ds+\]
\[\alpha_{7,1}\int\limits_{C_{2}}u_{\nu}\,ds+\alpha_{7,2}\int\limits_{C_{3}}u_{ \nu}\,ds+\alpha_{7,3}\int\limits_{C_{4}}u_{\nu}\,ds+\alpha_{8,1}\int\limits_{ C_{2}}u_{\tau}\,ds+\alpha_{8,2}\int\limits_{C_{3}}u_{\tau}\,ds+\alpha_{8,3} \int\limits_{C_{4}}u_{\tau}\,ds.\]
Here the coefficients \(\alpha_{i,j}\) are numbered as follows: the first index \(i\) indicates the derivative of \(u\): \(1)\,u_{\nu\nu\tau},\,2)\,u_{\nu\tau\tau},\,3)\,u_{\tau\tau\tau},\,4)\,u_{\nu\nu},\,5)\,u_{\nu\tau},\,6)\,u_{\tau\tau},\,7)\,u_{\nu},\,8)\,u_{\tau}\); the second index \(j=1,\,2,\,3\) corresponds to the characteristics \(C_{2},\,C_{3},\,C_{4}\), respectively.
## 4 Method of equation-domain duality and its application to the Goursat problem
We consider here the method of equation-domain duality (see also [6], [2]) for the study of the Goursat problem. This method allows us to reduce the Cauchy problem (1.1)--(2.1) in a bounded domain \(\Omega\) to an equivalent Goursat boundary value problem. We will show that the method of equation-domain duality can also be applied to boundary value problems in the generalized statement, and we will extend previously obtained results on the Goursat problem for specific equations with constant coefficients. First of all we consider the method of equation-domain duality in the case of classical, smooth solutions.
### The method of equation-domain duality for the case of classical, smooth solutions.
Let \(\Omega\subset\mathbb{R}^{n}\) be a bounded domain defined by the inequality \(P(x)>0\) with some real polynomial \(P(x)\). The equation \(P(x)=0\) describes the boundary \(\partial\Omega\). It is assumed that the boundary of the domain is nondegenerate for \(P\), i.e., \(|\nabla P|\neq 0\) on \(\partial\Omega\). Consider the general boundary value problem with \(\gamma\) conditions on the boundary for the differential operator \(L\) (2.2) of order \(m\), with \(\gamma\leq m\):
\[L(D_{x})u=f(x),\,u|_{\partial\Omega}=0,\,u^{\prime}_{\nu}|_{\partial\Omega}=0,\,...,\,u^{(\gamma-1)}_{\nu}|_{\partial\Omega}=0. \tag{4.1}\]
By the equation-domain duality we mean (see [6]) a correspondence (in the sense of the Fourier transform) between problem (4.1) and the equation
\[P^{m-\gamma}(-D_{\xi})\{L(\xi)w(\xi)\}=\hat{f}(\xi). \tag{4.2}\]
This correspondence is described in the following lemma.
**Lemma 1.** For any nontrivial solution of problem (4.1) in the space of smooth functions \(C^{m}(\bar{\Omega})\), there exists a nontrivial analytic solution \(w\) of equation (4.2) on \(\mathbb{C}^{n}\) in the class \(Z^{m}_{\Omega}\) of entire functions. The class \(Z^{m}_{\Omega}\) is defined as the space of Fourier transforms of functions of the form \(\theta_{\Omega}\eta\), where \(\eta\in C^{m}(\mathbb{R}^{n})\) and \(\theta_{\Omega}\) is the characteristic function of the domain \(\Omega\); \(w(\xi)=\widehat{\theta_{\Omega}u}.\) The function \(f(x)\) is assumed to be extended by zero beyond the boundary of the domain.
_Proof._ Let \(m=4,\,\gamma=2\) and consider the following Dirichlet problem for the fourth-order operator in (1.1):
\[L(D_{x})u=f,\,u|_{P(x)=0}=0,\,u^{\prime}_{\nu}|_{P(x)=0}=0. \tag{4.3}\]
Let also \(u\in C^{4}(\bar{\Omega})\) be a classical solution to the problem (4.3). Denote by \(\tilde{u}\in C^{4}(\mathbb{R}^{2})\) an extension of \(u\), and apply the fourth-order operator \(L(D_{x})\) in (1.1) to the product \(\tilde{u}\theta_{\Omega}\), where \(\theta_{\Omega}\) is the characteristic function of the domain \(\Omega\): \(\theta_{\Omega}=1\) in \(\Omega\) and \(\theta_{\Omega}=0\) outside \(\Omega.\) We obtain:
\[L(D_{x})(\tilde{u}\theta_{\Omega})=\theta_{\Omega}L(D_{x})\tilde{u}+\tilde{u} L(D_{x})\theta_{\Omega}+\]
\[+L^{(1)}_{3}(D_{x})\tilde{u}<\nabla,\,a^{1}>\theta_{\Omega}+L^{(2)}_{3}(D_{x} )\tilde{u}<\nabla,\,a^{2}>\theta_{\Omega}+\]
\[+L_{3}^{(3)}(D_{x})\tilde{u}<\nabla,\,a^{3}>\theta_{\Omega}+L_{3}^{(4)}(D_{x}) \tilde{u}<\nabla,\,a^{4}>\theta_{\Omega}+\]
\[+L_{3}^{(1)}(D_{x})\theta_{\Omega}<\nabla,\,a^{1}>\tilde{u}+L_{3}^{(2)}(D_{x}) \theta_{\Omega}<\nabla,\,a^{2}>\tilde{u}+\]
\[+L_{3}^{(3)}(D_{x})\theta_{\Omega}<\nabla,\,a^{3}>\tilde{u}+L_{3}^{(4)}(D_{x}) \theta_{\Omega}<\nabla,\,a^{4}>\tilde{u}+\]
\[+L_{2}^{(1,2)}(D_{x})\tilde{u}<\nabla,\,a^{1}><\nabla,\,a^{2}>\theta_{\Omega}+ L_{2}^{(1,3)}(D_{x})\tilde{u}<\nabla,\,a^{1}><\nabla,\,a^{3}>\theta_{\Omega}+\]
\[+L_{2}^{(1,4)}(D_{x})\tilde{u}<\nabla,\,a^{1}><\nabla,\,a^{4}>\theta_{\Omega}+ L_{2}^{(2,3)}(D_{x})\tilde{u}<\nabla,\,a^{2}><\nabla,\,a^{3}>\theta_{\Omega}+\]
\[+L_{2}^{(2,4)}(D_{x})\tilde{u}<\nabla,\,a^{2}><\nabla,\,a^{4}>\theta_{\Omega}+ L_{2}^{(3,4)}(D_{x})\tilde{u}<\nabla,\,a^{3}><\nabla,\,a^{4}>\theta_{\Omega}+\]
\[+L_{2}^{(1,2)}(D_{x})\theta_{\Omega}<\nabla,\,a^{1}><\nabla,\,a^{2}>\tilde{u}+ L_{2}^{(1,3)}(D_{x})\theta_{\Omega}<\nabla,\,a^{1}><\nabla,\,a^{3}>\tilde{u}+\]
\[+L_{2}^{(1,4)}(D_{x})\theta_{\Omega}<\nabla,\,a^{1}><\nabla,\,a^{4}>\tilde{u}+ L_{2}^{(2,3)}(D_{x})\theta_{\Omega}<\nabla,\,a^{2}><\nabla,\,a^{3}>\tilde{u}+\]
\[+L_{2}^{(2,4)}(D_{x})\theta_{\Omega}<\nabla,\,a^{2}><\nabla,\,a^{4}>\tilde{u}+ L_{2}^{(3,4)}(D_{x})\theta_{\Omega}<\nabla,\,a^{3}><\nabla,\,a^{4}>\tilde{u}.\]
Here \(L_{3}^{(j)}(D_{x})\) and \(L_{2}^{(j,k)}(D_{x})\), \(j,\,k=1,2,3,4\) are some differential operations of the \(3-\) and \(2-\) order correspondingly, defined by the fourth-order differential operation \(L(D_{x})\) in (1.1):
\[L_{3}^{(j)}(D_{x})=\frac{L(D_{x})}{<\nabla,\,a^{j}>},\,j=1,...,\,4,\]
\[L_{2}^{(j,k)}(D_{x})=\frac{L(D_{x})}{<\nabla,\,a^{j}><\nabla,\,a^{k}>},\,j\neq k,\,j,k=1,..\,4.\]
Since \(\tilde{u}\) is a solution of the equation (1.1), we arrive to
\[L(D_{x})(\tilde{u}\theta_{\Omega})=\theta_{\Omega}f+\tilde{u}L(D_{x})\theta_{\Omega}+A^{(1)}(x)(\delta_{\partial\Omega})^{\prime\prime}_{\nu\nu}+A^{(2)}(x)(\delta_{\partial\Omega})^{\prime}_{\nu}+A^{(3)}(x)\delta_{\partial\Omega}, \tag{4.4}\]
where \(A^{(j)}(x)\) are some smooth functions depending on the coefficients \(a^{k},\,k=1,...,\,4\) of the equation (1.1) and on the derivatives of the function \(u\) of order \(j\) along the outer normal \(\nu\), \(u_{\nu}^{(j)}\), and along the tangent direction \(\tau\), \(u_{\tau}^{(j)},\,j=1,2,3.\) Taking into account the conditions (4.3) and the identity \(<(\delta_{\partial\Omega})^{\prime}_{\nu},\phi>=-<\delta_{\partial\Omega},\phi^{\prime}_{\nu}>=-\int\limits_{\partial\Omega}\phi^{\prime}_{\nu}(s)\,ds,\,\forall\,\phi\in{\cal D}(\mathbb{R}^{2}),\) we have \(\tilde{u}L(D_{x})\theta_{\Omega}+A^{(1)}(x)(\delta_{\partial\Omega})^{\prime\prime}_{\nu\nu}=0\) and \(A^{(2)}(x)(\delta_{\partial\Omega})^{\prime}_{\nu}=-\int\limits_{\partial\Omega}(A^{(2)}(s))^{\prime}_{\nu}\,ds=\tilde{A}^{(3)}(x)\delta_{\partial\Omega},\) so from (4.4) we obtain
\[L(D_{x})(\tilde{u}\theta_{\Omega})=\theta_{\Omega}f+B^{(3)}(x)\delta_{\partial\Omega}, \tag{4.5}\]
where \(B^{(3)}(x)=\tilde{A}^{(3)}(x)+A^{(3)}(x)\) is a smooth function depending on the coefficients \(a^{k},\,k=1,...,\,4\) of the equation (1.1) and on the third derivatives of the function \(u\) along the outer normal \(\nu\), \(u_{\nu}^{\prime\prime\prime}\), and along the tangent direction \(\tau\), \(u_{\tau}^{\prime\prime\prime}.\) Let us multiply (4.5) by \(P^{2}(x),\) so that \(P^{2}(x)B^{(3)}(x)\delta_{\partial\Omega}=0\) due to \(P(x)=0\) on \(\partial\Omega,\) and then apply the Fourier transform:
\[P^{2}(-D_{\xi})(v(\xi))=\hat{f}.\]
Here \(v(\xi)=L(\xi)w(\xi)\), where \(w(\xi)=\widehat{\tilde{u}\theta_{\Omega}}\) is the Fourier transform of the function \(\tilde{u}\theta_{\Omega}.\) In this way we arrive at the dual problem (4.2). The function \(w(\xi)\in Z^{4}_{\Omega},\) the space of entire functions defined as the space of Fourier transforms of functions of the form \(\theta_{\Omega}\tilde{u},\) where \(\tilde{u}\in C^{4}(\mathbb{R}^{n})\); see, for example, [13]. The Lemma is proved.
As an application of Lemma 1, let us consider the Dirichlet problem for the fourth-order hyperbolic equation (1.1) in the unit disk \(K=\{x\in\mathbb{R}^{2}:\,|x|<1\}:\)
\[u|_{|x|=1}=0,\,u^{\prime}_{\nu}|_{|x|=1}=0. \tag{4.6}\]
In this case \(m=4,\,\gamma=2,\,m-\gamma=2\), and we arrive at the following dual problem:
\[\Delta^{2}v=\hat{f}(\xi),\,\,v|_{L(\xi)=0}=0, \tag{4.7}\]
Here \(v=L(\xi)w(\xi)\). Taking into account the representation (3.3), the condition \(w|_{L(\xi)=0}=0\) is equivalent to the following four conditions:
\[w|_{<\xi,\,a^{1}>=0}=0,\,w|_{<\xi,\,a^{2}>=0}=0,\,w|_{<\xi,\,a^{3}>=0}=0,\,w|_ {<\xi,\,a^{4}>=0}=0. \tag{4.8}\]
Since \(<\xi,\,a^{j}>=0\), \(j=1,\,2,...,4\), is a characteristic, we conclude that problem (4.7) is a Goursat problem. The method of equation-domain duality allows us to reduce the question of solvability of a boundary value problem for a typeless equation (in particular, of hyperbolic type) to an analogous problem for an equation of possibly less complicated structure and lower order (in particular, for an equation of elliptic type, see (4.7)).
### The method of equation-domain duality for the case of weak solutions and solutions from \(D(L).\)
We prove here the analog of Lemma 1 for solutions \(u\in D(L).\) From Definition 8 and formulae (2.5), for any function \(u\in H^{m}(\Omega),\,m\geq 4,\) the \(L_{(p)}u-\)traces can be expressed in the following way: \(L_{(p)}u=\sum\limits_{k=0}^{p}\alpha_{p,k}\partial_{\nu}^{k}u|_{\partial\Omega},\,p=0,1,2,3\). For \(p=0,\)\(L_{(0)}u=u|_{\partial\Omega},\) the \(L_{(0)}-\)trace coincides with the usual trace. If \(u\in D(L),\) then we consider the following boundary value problem
\[L(D_{x})u=f(x),\,L_{(0)}u=0,\,L_{(1)}u=0,\,...,\,L_{(\gamma-1)}u=0,\,\gamma \leq m. \tag{4.9}\]
instead of (4.1). For example, in the case of the Dirichlet problem (4.3) for the fourth-order operator (1.1), and for \(u\in D(L)\) we have
\[L(D_{x})u=f(x),\,L_{(0)}u=0,\,L_{(1)}u=0,\gamma=2<m=4. \tag{4.10}\]
The equation-domain duality principle for solutions \(u\in D(L)\) of the problem (4.9) is understood as the correspondence (in the sense of the Fourier transform) between the problem (4.9) and the equation (4.2), realized by the following statement, which is an analog of Lemma 1 for solutions \(u\in D(L):\)

**Lemma 2.** For any nontrivial solution of problem (4.9) in the space \(D(L),\) there exists a nontrivial analytic solution \(w\) of equation (4.2) on \(\mathbb{C}^{n}\) in the class \(Z_{\Omega}\) of entire functions. The class \(Z_{\Omega}\) is defined as the space of Fourier transforms of functions from the set \(V=\{v:\,\mbox{there exists a function}\,u\in D(L)\,\mbox{such that}\,v=u\,\mbox{in}\,\Omega,\,v=0\,\mbox{outside}\,\bar{\Omega}\},\,w(\xi)=\widehat{v}.\) The function \(f(x)\) is assumed to be extended by zero beyond the boundary of the domain.

The proof follows from Definition 9 by substituting into the equality (2.4) the functions \(v(x)=P^{m-\gamma}(x)e^{i(x,\hat{x}^{j})}\in ker(L^{+}),\,j=1,...,4.\) The function \(w(\xi)=\widehat{v}\in Z_{\Omega},\) the space of entire functions (see, for instance, the Paley-Wiener theorem, [13]).
## 5 Connection between Cauchy and Dirichlet problems. Existence and uniqueness of solutions for the hyperbolic equations.
The main result of this section is the following theorem on existence and solution uniqueness of the Cauchy problem (1.1)-(2.1).
**Theorem 2.** Let us assume that there exist four functions \(L_{3},\,L_{2},\,L_{1},\,L_{0}\in L^{2}(\partial\Omega),\) satisfying conditions
\[\int\limits_{\partial\Omega}\{L_{3}(x)Q(-\tilde{a}^{j}\cdot x)+L_{2}(x)Q^{ \prime}(-\tilde{a}^{j}\cdot x)+L_{1}(x)Q^{\prime\prime}(-\tilde{a}^{j}\cdot x )+L_{0}(x)Q^{\prime\prime\prime}(-\tilde{a}^{j}\cdot x)\}dS_{x}=\int\limits_{ \Omega}f(x)\overline{Q(-\tilde{a}^{j}\cdot x)}dx, \tag{5.1}\]
for any polynomial \(Q\in C[z]\) such that \(Q(-\tilde{a}^{j}\cdot x)\) belongs to the kernel \(KerL^{+}\) of the operator \(L^{+}\), \(j=1,\,2,\,3,\,4.\)

Then there exists a unique solution \(u\in D(L)\) of the Cauchy problem (1.1)-(2.1) whose \(L-\)traces are the given functions \(L_{3},\,L_{2},\,L_{1},\,L_{0}\): \(L_{j}=L_{(j)}-\)trace, \(j=0,\,1,\,2,\,3,\) and they are connected with the initial data \(\varphi,\,\psi,\sigma,\,\chi\) by the relations (2.5).
_Proof._ First of all, we prove the existence of the solution to the Cauchy problem (1.1)-(2.1) from the space \(D(L).\)
Let us consider the auxiliary Dirichlet problem for the properly elliptic eighth-order operator \(\Delta^{4}\) with the given boundary conditions \(\varphi,\,\psi,\sigma,\,\chi:\)

\[\Delta^{4}\omega=0,\,\,\omega|_{\partial\Omega}=\varphi,\,\omega_{\nu}|_{\partial\Omega}=\psi,\,\omega_{\nu\nu}|_{\partial\Omega}=\sigma,\,\omega_{\nu\nu\nu}|_{\partial\Omega}=\chi. \tag{5.2}\]
It is well known that a solution of the problem (5.2) exists and belongs to the space \(H^{m}(\Omega),\,m\geq 4.\) We seek the solution \(u\) of the Cauchy problem in the form
\[u=\omega+v, \tag{5.3}\]
where \(v\) is a solution of the following problem with null boundary data:
\[L(D_{x})v=-L(D_{x})\omega+f(x),\,v|_{\partial\Omega}=0,\,v_{\nu}|_{\partial \Omega}=0,\,v_{\nu\nu}|_{\partial\Omega}=0,\,v_{\nu\nu\nu}|_{\partial\Omega}=0. \tag{5.4}\]
Since the \(L-\)traces of the function \(v\) are zero and the operator \(L\) is regular, we conclude that \(v\in D(L_{0})\), and it remains to prove the resolvability of the operator equation with the minimum operator \(L_{0}(D_{x}):\)
\[L_{0}(D_{x})v=-L\omega+f(x) \tag{5.5}\]
in the space \(D(L_{0}).\)
For the resolvability of the operator equation (5.5) with the minimum operator \(L_{0}(D_{x})\) it is necessary and sufficient that the right-hand side satisfies the following Fredholm condition
\[\int\limits_{\Omega}\{-L\omega+f(x)\}\overline{Q(x)}dx=0, \tag{5.6}\]
for any \(Q\in Ker\,L^{+}.\)
We use formula (2.3) for the function \(\omega\) and the fourth-order operator, \(m=4\). Taking into account the boundary conditions (5.2), which mean that the functions \(L_{0},\,L_{1},\,L_{2},\,L_{3}\) are the \(L-\)traces of the function \(\omega\), and the conditions (5.1), we obtain that (5.6) holds for any \(Q\in Ker\,L^{+}\), and consequently Eq. (5.5) is resolvable in \(D(L_{0}).\) Thus, taking into account the representation (5.3), we conclude that a solution \(u\in D(L)\) exists.

Uniqueness of the solution follows from the maximum principle for solutions of the Cauchy problem established above. The theorem is proved.
_Remark 3._ For given boundary data \((L_{3},\,L_{2},\,L_{1},\,L_{0})\in H^{m-7/2}(\partial\Omega)\times H^{m-5/2}(\partial\Omega)\times H^{m-3/2}(\partial\Omega)\times H^{m-1/2}(\partial\Omega),\,m\geq 4,\) and \(f\in H^{m-4}(\Omega),\,m\geq 4,\) in the case when equation (1.1) is elliptic, the solution satisfies \(u\in H^{m}(\Omega),\,m\geq 4\) (see [5]). In the hyperbolic case this is no longer true: the symbol \(L(\xi)\) has four real roots, and using the Fourier transform and Lemma 2, we arrive at a loss of regularity of the solution, as stated in Theorem 2.

_Remark 4._ The problem of resolvability of the Cauchy problem (1.1)-(2.1) is reduced to the integral moment problem (5.1).
### The Dirichlet problem.
In a bounded domain \(\Omega\subset\mathbb{R}^{2}\) with elliptic boundary \(\partial\Omega=\{x:\,P(x)=0\}\) we consider the following Dirichlet problem for the fourth-order hyperbolic equation (1.1):

\[L_{(0)}u|_{P(x)=0}=\varphi,\,\,L_{(1)}u|_{P(x)=0}=\psi. \tag{5.7}\]

The connection between the Dirichlet problem (1.1), (5.7) and the corresponding Cauchy problem is as follows: if there exists a solution \(u^{*}\in D(L)\) of the Dirichlet problem (1.1), (5.7), then we can construct its \(L_{(j)}u^{*}-\)traces, i.e., the functions \(L_{3},\,L_{2},\,L_{1},\,L_{0}\) from Theorem 2, which satisfy the condition (5.1). By Theorem 2 this means that the Cauchy problem is solvable in \(D(L)\). To prove the solvability of the Dirichlet problem (1.1), (5.7) in \(D(L)\), we have to show that there exists a pair of functions \(L_{2},\,L_{3}\in L^{2}(\partial\Omega)\) which is uniquely determined by the \(L_{(0)},\,L_{(1)}-\)traces of the Dirichlet problem (5.7). In this way we arrive at the following inhomogeneous moment problem of determining the unknown functions \(L_{3},\,L_{2}\) from the known right-hand side:
\[\int\limits_{\partial\Omega}\{L_{3}(x)Q(-\tilde{a}^{j}\cdot x)+L_{2}(x)Q^{ \prime}(-\tilde{a}^{j}\cdot x)\}dS_{x}= \tag{5.8}\]
\[=\int\limits_{\Omega}f(x)\overline{Q(-\tilde{a}^{j}\cdot x)}dx-\int\limits_{ \partial\Omega}\{L_{(1)}(x)Q^{\prime\prime}(-\tilde{a}^{j}\cdot x)+L_{(0)}(x) Q^{\prime\prime\prime}(-\tilde{a}^{j}\cdot x)\}dS_{x}\]
for any polynomial \(Q\in C[z]\) from the kernel \(Ker\,L^{+}\) of the operator \(L^{+}\), taken at the points \(-\tilde{a}^{j}\cdot x,\,j=1,\,2,\,3,\,4.\) Thus, the solvability of the Dirichlet problem (5.7) in \(D(L)\) reduces to the solvability of the moment problem (5.8).
**Theorem 3.** For the solvability of the Dirichlet problem (1.1), (5.7) in \(D(L),\) it is necessary and sufficient that there exist a solution \((L_{3}^{*}(x),\,L_{2}^{*}(x))\in L^{2}(\partial\Omega)\times L^{2}(\partial\Omega)\) of the moment problem (5.8). In this case \(L_{3}^{*}(x)\) is the \(L_{(3)}-\)trace and \(L_{2}^{*}(x)\) is the \(L_{(2)}-\)trace.
_Remark 5._ For particular domains \(\Omega\), explicit formulas can be found for computing the pair of functions \((L_{3}^{*}(x),\,L_{2}^{*}(x))\in L^{2}(\partial\Omega)\times L^{2}(\partial\Omega)\) from the known \(L_{(0)},\,L_{(1)}-\)traces. For example, the case of the unit disk was considered in [5].
## 6 The role of the characteristic billiard for the Fredholm property.
In this section we consider the case when uniqueness of solutions of the problems considered above breaks down and, moreover, the Fredholm property fails. In [6] a theorem on the violation of the Fredholm property of the Dirichlet problem in \(C^{m}(\Omega),\,m\geq 4,\) was proved for typeless PDEs. Taking Lemma 2 into account, we arrive at the analogous result in \(L^{2}(\Omega):\)
**Theorem 4.** The homogeneous Dirichlet problem \((1.1)^{0},\,(5.7)^{0}\) has a nontrivial solution in \(L^{2}(\Omega)\) if and only if
\[\varphi_{j}-\varphi_{k}=\frac{\pi p_{jk}}{q}, \tag{6.1}\]
for some \(p_{jk},\,q\in\mathbb{Z},\,j,k=1,2,3,4.\) Under conditions (6.1) there exists a countable set of linearly independent polynomial solutions of the form:
\[u(x)=\sum_{j=1}^{4}C_{j}\left(\frac{1}{2q}T_{q}(-\tilde{a}^{j}\cdot x)-\frac{ 1}{2(q-2)}T_{q-2}(-\tilde{a}^{j}\cdot x)\right). \tag{6.2}\]
Here \(T_{q}(-\tilde{a}^{j}\cdot x)\) are Chebyshev's polynomials, and \(\frac{1}{2q}T_{q}(-\tilde{a}^{j}\cdot x)-\frac{1}{2(q-2)}T_{q-2}(-\tilde{a}^{ j}\cdot x)\in KerL^{+},\,j=1,2,3,4.\)
The necessity of condition (6.1) follows from the equation-domain duality (in the case of the unit disk); see Lemma 2. Sufficiency is proved by constructing the nontrivial polynomial solutions (6.2). It is remarkable that Theorem 4 holds for all types of the operator \(L\). Here we discuss conditions (6.1) in the hyperbolic case, where they mean periodicity of the characteristic billiard, i.e., of John's mapping.
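For illustration, the polynomial solutions (6.2) can be evaluated numerically. The following minimal Python sketch (an addition, not part of the original text) builds the kernel element \(\frac{1}{2q}T_{q}(z)-\frac{1}{2(q-2)}T_{q-2}(z)\) with NumPy's Chebyshev routines and assembles \(u(x)\) at a point of the unit disk; the characteristic directions \(\tilde{a}^{j}\), the coefficients \(C_{j}\), and the choice \(q=5\) are hypothetical values chosen only for the example.

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

def kernel_poly(z, q):
    # (1/(2q)) T_q(z) - (1/(2(q-2))) T_{q-2}(z), the building block of (6.2)
    T_q = cheb.chebval(z, [0] * q + [1])
    T_qm2 = cheb.chebval(z, [0] * (q - 2) + [1])
    return T_q / (2 * q) - T_qm2 / (2 * (q - 2))

q = 5
# hypothetical unit characteristic directions a~^j and coefficients C_j
angles = (0.1, 0.9, 2.0, 2.7)
a = np.array([[np.cos(t), np.sin(t)] for t in angles])
C = np.array([1.0, -0.5, 0.3, 0.2])

x = np.array([0.2, -0.4])            # a point in the unit disk
u = sum(c_j * kernel_poly(-(a_j @ x), q) for c_j, a_j in zip(C, a))
print(u)
```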
### Characteristic billiard.
For a domain \(\Omega\) that is convex with respect to the characteristics, we construct the mappings \(T_{j},\,j=1,...,4,\) for the fourth-order hyperbolic equation as follows. Let \(M_{j}\) be a point on \(\partial\Omega.\) Drawing through the point \(M_{j}\) the \(j-\)th characteristic, whose angle of slope is \(\varphi_{j},\) we obtain a point \(M_{j+1}\in\partial\Omega.\) Thus \(T_{j}\) is the mapping that takes \(M_{j}\) to \(M_{j+1}\) along the \(j-\)th characteristic direction with slope angle \(\varphi_{j},\,j=1,2,3,4.\) We apply the mapping \(T_{1}\) to the point \(M_{1}\in\partial\Omega\) and obtain the point \(M_{2}\). Then we apply the mapping \(T_{2}\) to the point \(M_{2}\) and obtain the point \(M_{3}.\) We take \(M_{3}\) to \(M_{4}\) along the characteristic whose angle of slope equals \(\varphi_{3},\) and finally we take \(M_{4}\) to \(M_{5}\) along the fourth characteristic. Denote by \(T=T_{4}\circ T_{3}\circ T_{2}\circ T_{1}:M_{1}\in\partial\Omega\to M_{5}\in \partial\Omega\) the John mapping. The characteristic billiard is understood as the discrete dynamical system on \(\partial\Omega\) generated by \(T,\) i.e., an action of the group \(\mathbb{Z}.\)
A point \(M\in\partial\Omega\) is called periodic if there exists \(n\in\mathbb{N}\) such that \(T^{n}(M)=M.\) The minimal \(n\) for which \(T^{n}(M)=M\) holds is called the period of the point \(M.\) For second-order hyperbolic equations it was proved in [3] that periodicity of John's algorithm is sufficient for the violation of the Fredholm property of the Dirichlet problem. The analogous result holds for the fourth-order hyperbolic equation (1.1). Let us consider the model case of the domain \(\Omega=K,\) the unit disk in \(\mathbb{R}^{2}.\)
Let us show that conditions (6.1) are necessary and sufficient for the periodicity of John's algorithm. It is clear that
\[T_{j}(M(\tau))=2\varphi_{j}-\tau, \tag{6.3}\]
where \(\tau\) is the angular parameter of the point \(M\in\partial K.\) From (6.3) it follows that
\[T^{n}(M)=2n(\varphi_{4}-\varphi_{3}+\varphi_{2}-\varphi_{1})+\tau=2n(\varphi_{ 4}-\varphi_{3}+\varphi_{2}-\varphi_{1})+2\pi m+\tau,\]
for any \(m\in\mathbb{Z}.\) Under conditions (6.1) every point \(M\in\partial K\) is periodic; thus John's algorithm is periodic. Conversely, if the mapping \(T\) is periodic for some \(n\in\mathbb{N},\) then \(\varphi_{4}-\varphi_{3}+\varphi_{2}-\varphi_{1}\in\pi\mathbb{Q},\) which implies that conditions (6.1) are satisfied.
Thus we arrive at the following statement.
**Theorem 5.** The periodicity of the characteristic billiard on the unit disk is necessary and sufficient for the violation of the Fredholm property of the Dirichlet problem (1.1)\({}^{0},\) (5.7)\({}^{0}\) in \(L^{2}(K),\) and its kernel consists of a countable set of linearly independent polynomial solutions of the form (6.2).
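In the model case of the unit disk, each map \(T_{j}\) acts on the angular parameter as \(\tau\mapsto 2\varphi_{j}-\tau\), so the periodicity condition of Theorem 5 can be checked directly. The following Python sketch (an illustration added here, not part of the original text) iterates the John mapping for hypothetical slope angles satisfying (6.1) with \(q=5\) and reports the period; the angles, starting point, and tolerance are assumptions.

```python
import math

def john_map(tau, phis):
    # Compose the four boundary reflections T_j(tau) = 2*phi_j - tau (T_1 is applied first).
    for phi in phis:
        tau = 2.0 * phi - tau
    return tau % (2.0 * math.pi)

# Hypothetical slope angles: phi_j - phi_k are rational multiples of pi (condition (6.1), q = 5).
phis = [0.0, math.pi / 5, 2 * math.pi / 5, 3 * math.pi / 5]
tau0, tau = 0.7, 0.7
for n in range(1, 1000):
    tau = john_map(tau, phis)
    d = (tau - tau0) % (2.0 * math.pi)
    if min(d, 2.0 * math.pi - d) < 1e-9:
        print(f"John mapping is periodic with period n = {n}")  # expected: n = 5
        break
```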
## Funding
This work is supported by the Volkswagen Foundation (the project numbers are A131968 and 9C624) and by the Ministry of Education and Science of Ukraine (project number is 0121U109525).
|
2309.07590 | Revisiting Supertagging for Faster HPSG Parsing | We present new supertaggers trained on English grammar-based treebanks and
test the effects of the best tagger on parsing speed and accuracy. The
treebanks are produced automatically by large manually built grammars and
feature high-quality annotation based on a well-developed linguistic theory
(HPSG). The English Resource Grammar treebanks include diverse and challenging
test datasets, beyond the usual WSJ section 23 and Wikipedia data. HPSG
supertagging has previously relied on MaxEnt-based models. We use SVM and
neural CRF- and BERT-based methods and show that both SVM and neural
supertaggers achieve considerably higher accuracy compared to the baseline and
lead to an increase not only in the parsing speed but also the parser accuracy
with respect to gold dependency structures. Our fine-tuned BERT-based tagger
achieves 97.26\% accuracy on 950 sentences from WSJ23 and 93.88% on the
out-of-domain technical essay The Cathedral and the Bazaar (cb). We present
experiments with integrating the best supertagger into an HPSG parser and
observe a speedup of a factor of 3 with respect to the system which uses no
tagging at all, as well as large recall gains and an overall precision gain. We
also compare our system to an existing integrated tagger and show that although
the well-integrated tagger remains the fastest, our experimental system can be
more accurate. Finally, we hope that the diverse and difficult datasets we used
for evaluation will gain more popularity in the field: we show that results can
differ depending on the dataset, even if it is an in-domain one. We contribute
the complete datasets reformatted for Huggingface token classification. | Olga Zamaraeva, Carlos Gómez-Rodríguez | 2023-09-14T10:49:16Z | http://arxiv.org/abs/2309.07590v2 | # Revisiting Supertagging for HPSG Parsing
###### Abstract
We present new supertaggers trained on HPSG-based treebanks. These treebanks feature high-quality annotation based on a well-developed linguistic theory and include diverse and challenging test datasets, beyond the usual WSJ section 23 and Wikipedia data. HPSG supertagging has previously relied on MaxEnt-based models. We use SVM and neural CRF- and BERT-based methods and show that both SVM and neural supertaggers achieve considerably higher accuracy compared to the baseline. Our fine-tuned BERT-based tagger achieves 97.26% accuracy on 950 sentences from WSJ23 and 93.88% on the completely out-of-domain _The Cathedral and the Bazaar_ (_cb_). We conclude that it therefore makes sense to integrate these new supertaggers into modern HPSG parsers, and we also hope that the diverse and difficult datasets we used here will gain more popularity in the field. We contribute the complete dataset reformatted for token classification.
## 1 Introduction
We present new supertaggers for English which can be used in particular to improve parsing efficiency for HPSG grammars. Head-Driven Phrase Structure Grammar [23, HPSG] is a theory of syntax that has been applied in computational linguistic research (see Bender and Emerson 2021 SS3-SS4 for a recent overview). At the core of such research are precision grammars which encode a strict notion of grammaticality -- their purpose is to only cover and generate grammatical structures. They include a relatively small set of phrase-structure rules and a large lexicon where lexical entries contain information about the word's syntactic behavior. HPSG treebanks (and the grammars that produce them) encode not only constituency but also dependency and semantic relations and have proven useful in natural language processing, e.g. in grammar coaching [15, 16, 17], natural language generation [18], and creating training data for parsers [18, 19].
HPSG parsing is relatively slow, sometimes prohibitively so for long sentences [1, 15]. Thus the true potential of HPSG parsing in NLP remains not fully realized. Approaches to speed up HPSG parsing include local ambiguity packing [14, 11, 15], on the one hand, and forgoing exact search and reducing the parser search space, on the other [15, 16, 17]. Here we contribute to the second line of research, aka supertagging, a technique to discard unlikely interpretations of tokens. Dridan et al. (2008); Dridan (2009, 2013) used maximum entropy-based models trained on a combination of gold and automatically labeled data from English, and they report an efficiency improvement of a factor of 3 for the parser they worked with [14].
We present new models for HPSG supertagging, an SVM-based one, a neural CRF-based one, and a fine-tuned-BERT one, and compare their tagging accuracy with a MaxEnt baseline. We now have more English gold training data thanks to the HPSG grammar engineering consortium's treebanking efforts [15, 16, 17].1 It makes sense to train modern models on this wealth of gold data. Once more accurate taggers such as the one we present here are integrated into HPSG parsers, the parsers can be used to create even larger HPSG treebanks and also more widely in NLP, leading to higher precision and interpretability and reducing computational resources spent in
training of purely statistical and neural parsers.
We trained the neural models on an NVIDIA GeForce RTX 2080 GPU with CUDA version 11.2. The SVM model and the MaxEnt baseline were trained on an Intel Core i7-9700K 3.60GHz CPU. The code and configurations for the reported results are on GitHub.2 The data we used is publicly available.1 Further details about model training and reproducibility are in the Appendix.
Footnote 2: [https://github.com/olzama/neural-supertagging](https://github.com/olzama/neural-supertagging)
## 2 Background
Below we explain HPSG lexical types (SS2.1), and in SS2.2, we give the background on the English treebanks which served as our training and evaluation data and the quality of which we think is key to our results. SS2.3 is a summary for HPSG parsing.
### Lexical types
Any HPSG grammar consists of a hierarchy of types, including phrasal and lexical types, and of a large lexicon which can be used to map surface tokens to lexical types. Each token in the text is recognized by the parser as belonging to one or more of the lexical entries in the lexicon (assuming such an orthographic form is present at all). Lexical entries, in turn, belong to lexical types. Lexical types are similar to part-of-speech (POS) tags but are more fine grained (e.g. a precision grammar may distinguish between multiple types of proper nouns or multiple types of _wh_-words, etc). After the lexical analysis stage, the bottom-up parser runs a constraint unification-based algorithm Carpenter (1992) to return a (possibly empty) set of possible parses for the sentence.
### The ERG treebanks
The English Resource Grammar (ERG; Flickinger (2000, 2011)) is a broad-coverage precision grammar of English implemented in the HPSG formalism. The latest release is from 2023. Its intrinsic evaluation relies on a set of English text corpora. Each release of the ERG includes a treebank of those texts parsed by the current version. The parses are created automatically and treebanked manually. Treebanking in the ERG context is the process of choosing linguistically (semantically) correct structures from the multiple trees corresponding to one string that the grammar may produce. In the case of the ERG, most of the treebanking was done by its main developer, Dan Flickinger. Fast treebanking is made possible by automatically comparing parse forests and by discriminant-based bulk elimination of unwanted trees (Oepen, 1999; Packard, 2015). The treebanks are stored as databases that require specialized software for processing e.g. Pydelphin3.
Footnote 3: [https://pydelphin.readthedocs.io/](https://pydelphin.readthedocs.io/)
The 2020 ERG release comes with 30 treebanked corpora containing over 1.5 million tokens and 105,155 sentences. In principle, there are 43,505 different lexical types in the ERG (cf. 48 tags in the Penn Treebank POS tagset (PTB; Marcus et al., 1993)) however only 1299 of them are found in the training portion of the treebank. The genres include well-edited text (news, Wikipedia articles, fiction, travel brochures, and technical essays) as well as customer service emails and transcribed phone conversations. It also includes constructed test suites illustrating linguistic phenomena such as raising and control.4 As such, these treebanks present more challenging test data compared to the conventional WSJ23 (which is also included). The ERG 2020's average treebanked coverage over all the corpora is 92.95% (raw coverage before treebanking is 96.14%). The ERG uses PTB-style punctuation tokens and includes PTB POS tags in all tokens, along with a lexical type (SS2.1).
Footnote 4: [https://github.com/delph-in/docs/wiki/RedwoodStop](https://github.com/delph-in/docs/wiki/RedwoodStop)
### HPSG parsing
Several parsers for different variations of the HPSG formalism exist. We work with the DELPH-IN formalism (Copestake, 2002) which is deliberately restricted; it only encodes the unification operation natively. The DELPH-IN restrictions were motivated by efficiency considerations, among other things, though its worst-case complexity is intractable (Oepen and Carroll, 2002). Carroll (1993, SS3.2.3) (cited in Bender and Emerson 2021, p.1109) states that the worst-case parsing time for HPSG feature structures is proportional to \(C^{2}n^{\rho+1}\) where \(\rho\) is the maximum number of children in a phrase structure rule and C is the (potentially large) maximum number of feature structures. The unification operator takes two feature structures as input and outputs one feature structure which satisfies the constraints encoded
in both inputs. Given the complex nature of such structures, implementing a fast unification parser is a hard problem. As it is, the existing parsers may take prohibitively long to parse a long sentence (see e.g. Marimon et al., 2014).
## 3 Methodology
Supertagging (Bangalore and Joshi, 1999) reduces the parser search space by discarding the less likely interpretations of an orthography. For example, the word _bark_ in English can be a verb or a noun, and in _the dog barks_ it is a lot less likely to be a noun than a verb. In HPSG, there are fine-grained lexical types within the POS class (e.g. subtypes of common nouns or _wh_-words), so the search space can be reduced further compared to POS tagging.
In precision grammars, this methodology comes at a cost to coverage; selecting a wrong lexical type means the entire sentence will not be parsed correctly. Thus accuracy is important.
### Previous and related work
Bangalore and Joshi (1999) introduced the concept of supertagging. Clark and Curran (2003) showed mathematically that supertagging improves parsing efficiency for a lexicalized formalism (CCG). They used a maximum entropy model; Xu et al. (2015) introduced a neural supertagger for CCG. Vaswani et al. (2016) and Tian et al. (2020) further improved the accuracy of neural-based CCG supertagging achieving an accuracy of 96.25% on WSJ23. Liu et al. (2021) use finer categories within the CCG tagset and report 95.5% accuracy on in-domain test data and 81% and 92.4% accuracy on two out-of-domain datasets (Bioinfer and Wikipedia).5
Footnote 5: These works do not report experiments on parsing speed.
Supertagging experiments with HPSG parsing speed using hand-engineered grammars (Prins and van Noord, 2004) are summarized in Table 1. In addition, there were experiments on the use of supertagging for parse ranking with statistically derived HPSG-like grammars (Ninomiya et al., 2007; Matsuzaki et al., 2007; Miyao and Tsujii, 2008; Zhang et al., 2009, 2010; Zhang and Krieger, 2011; Zhang et al., 2012). These statistically derived systems are principally different from the ERG as they do not represent HPSG theory as understood by syntacticians. In the context of the ERG, Dridan et al. 2008 represents our baseline SOTA.6
Footnote 6: Dridan 2013 is a related work on “ubertagging”, which includes multi-word expressions.
### Data
We train and evaluate our taggers, both for the baseline (SS3.3) and for the experiment (SS3.4), on gold lexical types from the ERG 2020 release (SS2.2). We use the train-dev-test split recommended in the release.7 There are 84,894 sentences in the training data (83,607 after removing one-word sentences), 2,045 in dev, and 7,918 in test. WSJ section 23 is used as test data, as is traditional, but so are a number of other corpora, notably _The Cathedral and the Bazaar_(Raymond, 1999), a technical essay which serves as the out-of-domain test data. See Table 2 for the details about the data.
Footnote 7: Download redwoods.xls from the ERG repository for details and see [https://github.com/delph-in/docs/wiki/RedwoodsTop](https://github.com/delph-in/docs/wiki/RedwoodsTop). This split is different than in Dridan 2009.
### Baseline
For our baseline, we use a MaxEnt model similar to Dridan 2009. While Dridan (2009) used the off-the-shelf TnT (Brants, 2000) and C&C (Clark and Curran, 2003) taggers, we use the off-the-shelf logistic regression implementation from scikit-learn (Pedregosa et al., 2011), a popular and relatively recent tool for classic machine learning algorithms. The baseline tagger accuracy is included in Table 2 in SS4. The details of how the best baseline model was chosen are given in Appendix A. Here, we note that we tried autoregressive and non-autoregressive models and, since the non-autoregressive models were not inferior in terms of accuracy and were much faster in decoding, we use a non-autoregressive MaxEnt model as our baseline.
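For concreteness, a minimal sketch of such a non-autoregressive logistic-regression supertagger built with scikit-learn is shown below; it is illustrative rather than the actual training code used in this work, and the feature template, the toy sentence, and the placeholder tag names standing in for ERG lexical types are assumptions.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def token_features(sent, i):
    # simple, position-local feature template (an assumption, not the exact features used)
    w = sent[i]
    return {"w": w, "lower": w.lower(), "pre3": w[:3], "suf3": w[-3:],
            "prev": sent[i - 1] if i > 0 else "<s>",
            "next": sent[i + 1] if i < len(sent) - 1 else "</s>"}

# toy (token sequence, lexical-type sequence) pairs; tags are placeholders for ERG lexical types
train = [(["the", "dog", "barks"], ["DET_TYPE", "N_COMMON_TYPE", "V_INTRANS_TYPE"])]
X = [token_features(s, i) for s, _ in train for i in range(len(s))]
y = [t for _, tags in train for t in tags]

tagger = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
tagger.fit(X, y)

sent = ["the", "dog", "barks"]
print(tagger.predict([token_features(sent, i) for i in range(len(sent))]))
```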
### SVM, LSTM+CRF, and fine-tuned BERT
We train a liblinear SVM model with default parameters (L2 Squared Hinge loss, C=1, one-v-rest, up to 1000 training iterations) using scikit-learn library. To train an lstm sequence labeling model, we use the NCRF++ library (Yang and Zhang, 2018). We choose the model by training and validating 31 models up to 100 iterations with the starting learning rate of 0.009 and the batch size of 3 (the latter parameters are the largest that are feasible for the combination of our data and the library code). The best NCRF++ model is described
in Table 3 in the Appendix. To fine-tune BERT, we use the Huggingface transformers library [20] and PyTorch [21]. We try both the 'bert-base-cased' and 'bert-base-uncased' pretrained models, which we fine-tune for up to 50 epochs (stopping once there is no improvement for 5 epochs) with weight decay=0.01. The 'cased' model with learning rate 2e-5 achieves the best accuracy on the dev set (see Table 7 in the Appendix).
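A compressed sketch of this token-classification fine-tuning setup with the transformers library is shown below; it is illustrative rather than our exact training script, and the toy data, placeholder label names, and output directory are assumptions. The main detail is aligning word-level lexical types with sub-word tokens: only the first sub-token of each word receives a label, the rest are masked with -100.

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          DataCollatorForTokenClassification, Trainer, TrainingArguments)

# Toy stand-in for the ERG token/lexical-type data (the real tagset has 1299 types).
label_list = ["DET_TYPE", "N_COMMON_TYPE", "V_INTRANS_TYPE"]
label2id = {l: i for i, l in enumerate(label_list)}
data = Dataset.from_dict({"tokens": [["the", "dog", "barks"]],
                          "tags": [["DET_TYPE", "N_COMMON_TYPE", "V_INTRANS_TYPE"]]})

tok = AutoTokenizer.from_pretrained("bert-base-cased")

def encode(ex):
    enc = tok(ex["tokens"], is_split_into_words=True, truncation=True)
    labels, prev = [], None
    for wid in enc.word_ids():
        # label only the first sub-token of each word; mask everything else with -100
        labels.append(-100 if wid is None or wid == prev else label2id[ex["tags"][wid]])
        prev = wid
    enc["labels"] = labels
    return enc

ds = data.map(encode, remove_columns=["tokens", "tags"])
model = AutoModelForTokenClassification.from_pretrained("bert-base-cased",
                                                         num_labels=len(label_list))
args = TrainingArguments("supertagger", learning_rate=2e-5, weight_decay=0.01,
                         num_train_epochs=1, per_device_train_batch_size=8)
Trainer(model=model, args=args, train_dataset=ds, tokenizer=tok,
        data_collator=DataCollatorForTokenClassification(tok)).train()
```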
## 4 Results: Tagger accuracy and speed
The results are presented in Table 2. The names of the datasets are given as in the ERG 2020 release. The number of sentences is the number of sentences for which there is a gold parse. The 'train tok' column gives the number of training tokens from similar datasets -- e.g. for WSJ23 it corresponds to WSJ sections 1-22.
Table 2 shows that the baseline models achieve similar performance to Dridan 2009 (D2009 in Table 2) on in-domain data and are better on out-of-domain data. This may indicate that these models are close to their maximum performance on in-domain data on this task but adding more training data still helps for out-of-domain data. Dridan's (2009) models were trained on a subset of our data. In particular, the test data did not include the non* corpus which was instead included in the training data. Dridan (2009, p.84) reports getting 91.47% accuracy on the in-domain data (which loosely corresponds to row 'jh*, tg*, ps*, non*') using the TnT off-the-shelf tagger [1].8
Footnote 8: As a sanity check, we obtain 91.94% using an autoregressive one-versus-rest L1 SAGA MaxEnt model trained with the scikit-learn library [2] on training and test datasets very similar to the ones used by Dridan (2009) (this experiment is not included in Table 2). On _The Cathedral and the Bazaar_ with the same setup, we obtain 73.85% compared to Dridan’s (2009) 74.61%. We attribute the difference to the differences between TnT and scikit-learn.
The SVM and the neural models are better than the baseline models on all test datasets, and fine-tuned BERT is the best overall. On the portion of WSJ23 for which we have gold data, fine-tuned BERT achieves 97.26%. The neural models are slower than the baseline models (using GPU for decoding); on the other hand, SVM is remarkably fast (at over 7000 sen/sec).
## 5 Conclusion and future work
We used the advancements in HPSG treebanking to train more accurate supertaggers, which we tested on diverse data well beyond the usual WSJ23. The ERG is a major project in syntactic theory and an important resource for semantic parsing and grammar coaching. It has the potential to contribute to NLP tasks that require high precision and/or interpretability, and thus making HPSG parsing faster is strategic for NLP. The next step is integrating the new supertagging models into state-of-the-art HPSG parsers and achieving improvements in HPSG parsing speed allowing us to create even larger treebanks. As the methodologies discussed here are applicable to any grammar for which there is sufficient treebank data, with further
Table 1: Supertagging effects on HPSG parsing speed (a summary of prior work comparing N-gram, HMM, and MEMM supertagging models; the table body is not recoverable from the source).
\begin{table}
\begin{tabular}{l l l l l|l l l l|l}
dataset & description & sent & tok & train tok & MaxEnt & SVM & NCRF++ & BERT & D2009 \\ \hline
cb & technical essay & 713 & 17,244 & 0 & 88.96 & 89.53 & 91.94 & **93.88** & 74.61 \\
ecpr & e-commerce & 1088 & 11,550 & 24,934 & 91.80 & 91.99 & 95.09 & **96.09** & \\
jh*, tg*, ps*, non* & travel brochures & 2116 & 34,098 & 147,166 & 90.45 & 91.21 & 95.44 & **96.11** & 91.47 \\
petet & textual entailment & 581 & 7135 & 1578 & 92.88 & 95.31 & 96.93 & **97.71** & \\
vm32 & phone conv. & 1000 & 8730 & 86,630 & 93.57 & 94.29 & 95.62 & **96.64** & \\
ws213-214 & Wikipedia & 1470 & 29,697 & 161,623 & 91.31 & 92.02 & 93.66 & **95.59** & \\
wsj23 & Wall Street J. & 950 & 22,987 & 959,709 & 94.27 & 94.72 & 96.05 & **97.26** & \\ \hline
all & all test sets as one & 7,918 & 131,441 & 1,381,645 & 91.57 & 92.28 & 94.46 & **96.02** & \\
all & average & 7,918 & 131,441 & 1,381,645 & 91.89 & 92.72 & 94.96 & **96.18** & \\ \hline
speed (sen/sec) & average & 7,918 & 131,441 & 1,381,645 & 1024 & **7414** & 125 & 346 & \\
\end{tabular}
\end{table}
Table 2: Baseline (MaxEnt) and experimental supertaggers’ accuracy and speed on test data; tagset size is 1299. The D2009 column gives the accuracies reported by Dridan (2009) where available.
advancements in multilingual grammar engineering we will see higher precision in NLP for many languages. In the meantime, we contribute the complete ERG dataset converted to the Huggingface transformers format intended for token classification, along with the code, which can be adapted for other purposes.
## 6 Limitations
Our paper is concerned with training supertagging models on an English HPSG treebank. The limitations therefore are associated mainly with the training of the models including neural networks, and with the building of broad-coverage grammars such as the English Resource Grammar. Crucially, while our method does not require industry-scale computational resources, training a neural classifier such as ours still requires a certain amount of training data, and this means that our method assumes that a large HPSG treebank is available for training. The availability of such a treebank, in turn, depends directly on the availability of a broad-coverage grammar. While choosing the gold trees for the treebank can be done relatively fast using treebanking tools once the grammar parsed the corpus, building a broad-coverage grammar itself requires an investment of years of expert work. At the moment, such an investment was made only for a few languages (English, Spanish, Japanese, Chinese), English being the largest one. Furthermore, the coverage of a precision grammar is never perfect and regular grammar updates are needed. A limitation related to using neural networks is that while the NCRF++ library can in principle be very efficient on some tasks (e.g. POS tagging), with our data and large label set it proved relatively slow, and so ideally a more efficient neural architecture may be required for future work in this direction.
## Acknowledgments
We acknowledge the European Research Council (ERC), which has funded this research under the Horizon Europe research and innovation programme (SALSA, grant agreement No 101100615, and GAUSS, grant agreement No 101063104), ERDF/MICINN-AEI (SCANNER-UDC, PID2020-113230RB-C21), Xunta de Galicia (ED431C 2020/11), and Centro de Investigacion de Galicia "CITIC", funded by the Xunta de Galicia through the collaboration agreement between the Conselleria de Cultura, Educacion, Fornacion Professional e Universidades and the Galician universities for the reinforcement of the research centres of the Galician University System (CIGUS).
|
2309.03906 | A-Eval: A Benchmark for Cross-Dataset Evaluation of Abdominal
Multi-Organ Segmentation | Although deep learning has revolutionized abdominal multi-organ
segmentation, models often struggle with generalization due to training on
small, specific datasets. With the recent emergence of large-scale datasets,
some important questions arise: \textbf{Can models trained on these datasets
generalize well on different ones? If yes/no, how to further improve their
generalizability?} To address these questions, we introduce A-Eval, a benchmark
for the cross-dataset Evaluation ('Eval') of Abdominal ('A') multi-organ
segmentation. We employ training sets from four large-scale public datasets:
FLARE22, AMOS, WORD, and TotalSegmentator, each providing extensive labels for
abdominal multi-organ segmentation. For evaluation, we incorporate the
validation sets from these datasets along with the training set from the BTCV
dataset, forming a robust benchmark comprising five distinct datasets. We
evaluate the generalizability of various models using the A-Eval benchmark,
with a focus on diverse data usage scenarios: training on individual datasets
independently, utilizing unlabeled data via pseudo-labeling, mixing different
modalities, and joint training across all available datasets. Additionally, we
explore the impact of model sizes on cross-dataset generalizability. Through
these analyses, we underline the importance of effective data usage in
enhancing models' generalization capabilities, offering valuable insights for
assembling large-scale datasets and improving training strategies. The code and
pre-trained models are available at
\href{https://github.com/uni-medical/A-Eval}{https://github.com/uni-medical/A-Eval}. | Ziyan Huang, Zhongying Deng, Jin Ye, Haoyu Wang, Yanzhou Su, Tianbin Li, Hui Sun, Junlong Cheng, Jianpin Chen, Junjun He, Yun Gu, Shaoting Zhang, Lixu Gu, Yu Qiao | 2023-09-07T17:59:50Z | http://arxiv.org/abs/2309.03906v1 | # A-Eval: A Benchmark for Cross-Dataset Evaluation of Abdominal Multi-Organ Segmentation
###### Abstract
Although deep learning have revolutionized abdominal multi-organ segmentation, models often struggle with generalization due to training on small, specific datasets. With the recent emergence of large-scale datasets, some important questions arise: **Can models trained on these datasets generalize well on different ones? If yes/no, how to further improve their generalizability?** To address these questions, we introduce A-Eval, a benchmark for the cross-dataset Evaluation ('Eval') of Abdominal ('A') multi-organ segmentation. We employ training sets from four large-scale public datasets: FLARE22, AMOS, WORD, and TotalSegmentator, each providing extensive labels for abdominal multi-organ segmentation. For evaluation, we incorporate the validation sets from these datasets along with the training set from the BTCV dataset, forming a robust benchmark comprising five distinct datasets. We evaluate the generalizability of various models using the A-Eval benchmark, with a focus on diverse data usage scenarios: training on individual datasets independently, utilizing unlabeled data via pseudo-labeling, mixing different modalities, and joint training across all available datasets. Additionally, we explore the impact of model sizes on cross-dataset generalizability. Through these analyses, we underline the importance of effective data usage in enhancing models' generalization capabilities, offering valuable insights for assembling large-scale datasets and improving training strategies. The code and pre-trained models are available at [https://github.com/uni-medical/A-Eval](https://github.com/uni-medical/A-Eval).
Figure 1: Comparison of the original evaluation approach versus our proposed A-Eval benchmark for assessing model generalizability. (a) Original evaluation involves training and testing on the same dataset, providing good results but leaving uncertainty when applied to other datasets. (b) A-Eval, on the other hand, trains and tests across different datasets, offering a more comprehensive evaluation of model performance and its potential for generalizability.
## 1 Introduction
Accurate segmentation of abdominal organs is essential for clinical applications, especially in the diagnosis and treatment of prevalent abdominal cancers [35, 38]. Traditionally, this labor-intensive and often tedious task has been manually carried out by specialists [29]. However, such a manual segmentation approach inevitably brings inaccurate results, particularly when imaging protocols and anatomical structures vary significantly in abdominal organs [8]. Deep learning has addressed this issue and revolutionized this field by introducing efficient and reliable methods [5, 7, 28, 33, 41].
The success of deep learning in abdominal organ segmentation significantly relies on the quality and quantity of available training datasets [37, 39]. Initially, the focus was mainly on the segmentation of individual organs along with their related tumors [19, 27, 45]. This trend was influenced by the constraints of early datasets such as MSD [1], LiTS [2], and KiTS [10, 11]. The advent of multi-organ datasets like BTCV [18] allows for more holistic and complex abdominal studies, but their small size has limited their utility. More recently, several large-scale datasets for abdominal organ segmentation, such as FLARE22 [24], AMOS [16], WORD [21], and TotalSegmentator [42] have emerged. These datasets are distinguished by their scale and the diversity of organs they include, thereby greatly expanding the possibilities for model development, refinement, and performance evaluation. However, while these models demonstrate impressive performance in segmenting abdominal organs within their original datasets, their generalizability across different datasets remains an open question, as shown in Figure 1 (a).
The uncertainty in model generalizability can be attributed to several contributing factors, often referred to as 'domain gaps' or 'domain shifts' [43, 21, 40]. First, variations in imaging protocols across different medical centers introduce inconsistency. Second, a broad range of diseases represented in the training cohorts complicates the models' ability to generalize. Third, inconsistent image characteristics, such as contrast and resolution, create additional layers of variability. Finally, inconsistent annotation practices across different oncologists or radiologists further compromise the integrity of ground truth data. Although efforts have been made to include diverse data in these large abdominal datasets, their validation often remains confined to their own scope [16, 42]. Even when some studies venture to test these models on external datasets, these efforts are often limited in scale and lack standardization, serving mainly as supplementary validations rather than comprehensive assessments [21, 24, 25]. Such limitations restrict the full evaluation of model generalizability, highlighting the need for more comprehensive benchmarks to assess performance across diverse datasets.
To bridge this gap, we introduce A-Eval, a comprehensive benchmark specifically designed for cross-dataset evaluation in abdominal multi-organ segmentation. In A-Eval, 'A signifies 'Abdomen,' and 'Eval' denotes 'Evaluation.' As illustrated in Figure 1 (b), A-Eval incorporates the official training sets from four major public datasets: FLARE22 [24], AMOS [16], WORD [21], and TotalSegmentator [42]. Importantly, these datasets are large-scale and comprehensively cover multiple abdominal organs, ensuring a well-rounded evaluation of abdominal organ segmentation. Since the labels for some official test sets are not publicly available, we use the official validation sets from these four datasets for evaluation purposes and augment them with the training set from the BTCV [18] dataset to enhance diversity. By combining these diverse collections, we establish a robust benchmark that includes five unique datasets. This design enables A-Eval to explicitly evaluate model generalizability across different datasets, offering a more comprehensive assessment than single-dataset benchmarks.
Based on the A-Eval benchmark, we conduct a thorough investigation into the factors affecting the ability of deep learning models to generalize in abdominal multi-organ segmentation. Initially, we train models on each of the four major datasets within A-Eval and test their performance across all five datasets. This approach offers a straightforward evaluation of how well models trained on existing large-scale datasets generalize. Subsequently, we delve into additional data-related factors influencing generalizability, including the utilization of unlabeled data from FLARE22 [24], handling multi-modality with CT and MR images from AMOS [16], and the impact of joint training across multiple datasets. We also consider the role of model size in cross-dataset generalizability. These analyses clarify how data usage and model architecture can improve performance and offer key insights for future work, making these models more reliable for real-world applications.
The main contributions of our study are two-fold:
1. We introduce A-Eval, a comprehensive benchmark designed for cross-dataset generalizability in abdominal multi-organ segmentation. The benchmark integrates training sets from FLARE22 [24], AMOS [16], WORD [21], and TotalSegmentator [42], and employs their validation sets supplemented by BTCV [18] for a robust evaluation across five distinct datasets.
2. Using A-Eval, we investigate model generalizability across diverse data usage scenarios, including individual dataset training, unlabeled data utilization, multi-modality handling, and joint dataset training. We also explore the role of model size, providing key insights for enhancing generalizability and reliability in real-world settings.
## 2 Related Work
### Abdominal Multi-Organ Segmentation Benchmarks
Early benchmarks in abdominal organ segmentation primarily focused on individual organs and associated tumors. This focus is exemplified by datasets from the Medical Segmentation Decathlon (MSD) [1], including MSD Liver, MSD Lung, MSD Pancreas, MSD Hepatic Vessel, MSD Spleen, and MSD Colon, as well as datasets like LiTS [2], KiTS [10, 11] and Pancreas-CT [32]. The BTCV [18] dataset, an initial step towards multi-organ segmentation, was constrained by its limited size. The CHAOS [17] dataset, although providing multi-modality segmentation from both CT and MRI data, also suffers from limited volume.
Recent progress in the field has focused on the creation of numerous large-scale datasets for abdominal multi-organ segmentation [21, 23, 24, 25, 30, 31, 42]. These datasets are characterized by their substantial volume and variety, encompassing numerous instances and diverse organ types. AMOS [16] stands out with its multimodal data inclusion. FLARE22 [24] distinguishes itself by providing a small number of labeled cases and a substantially larger pool of unlabeled cases in its training set. Lastly, the TotalSegmentator dataset [42] further extends the scope by offering full-body organ segmentation.
Despite the growing diversity and size of abdominal multi-organ segmentation datasets, most existing benchmarks focus on intra-dataset evaluation, leaving the models' cross-dataset generalizability unexplored. Our work addresses this gap by introducing a benchmark explicitly designed for cross-dataset generalizability assessments.
### Model Generalizability
Generalizability, the ability of a machine learning model to perform effectively on unseen data, is critical for medical image analysis models due to their anticipated applications in diverse real-world clinical scenarios [3, 26, 44]. To enhance model generalizability, researchers typically pursue two main avenues: data-centric methods [4, 47, 15, 34, 36, 4] and model tweaks [6, 48, 20].
Data augmentation is a commonly used data-centric strategy [15]. Generative Adversarial Networks, specifically CycleGAN [47], have been employed to augment CT data, leading to a significant boost in performance on non-contrast CT images [34]. Dual-Energy CT images and novel image fusion techniques have also been used to surpass traditional single-energy CT methods in segmentation accuracy, particularly in abdominal organs, thus boosting generalizability across different CT protocols and scanners [4]. SLAug [36] employs class-level distributions and gradient guidance for enhanced data augmentation, reducing generalizability risk in unseen domains. Another simple but effective method to improve generalizability is not only increasing the volume but also enhancing the diversity and multi-center nature of training data [16, 24, 25]. However, the generalizability of models trained on these datasets is often only evaluated within the same dataset, lacking a cross-dataset evaluation.
On the architectural side, unsupervised domain adaptation frameworks have been developed that include specialized modules like Domain Adaptation (DAM) and Domain Critic Modules (DCM), aiming to improve cross-modality biomedical image segmentation [6]. Boundary-weighted domain adaptive networks like BOWDA-Net have been proposed to increase models' sensitivity to object boundaries in prostate MR images [48]. MS-Net [20] incorporates Domain-Specific Batch Normalization layers and aggregates data from multiple sites, offering a robust solution for prostate segmentation in heterogeneous MRI data.
Unlike prior work focused on data augmentation or model tweaks, our study, using the A-Eval benchmark, centers on the impact of data diversity and model size on generalizability across multiple large-scale abdominal datasets.
## 3 A-Eval Benchmark
We present A-Eval, a benchmark explicitly aimed at standardizing the evaluation of abdominal organ segmentation across diverse large-scale datasets. This section details our approach to training, testing, data pre-processing, model architectures, and evaluation metrics.
### Datasets for A-Eval
A-Eval incorporates five representative datasets, each carefully selected for its extensive scale, comprehensive organ annotations, diverse sources and diseases, as well as different image characteristics. This provides a reliable foundation for our evaluation. Details of these selected datasets are as follows: FLARE22 [24], AMOS [16], WORD [21], TotalSegmentator [42], and BTCV [18]. For model training, we employ the official training sets from the first four datasets. The corresponding official validation sets from these, along with the training set from BTCV, are used for evaluation. A summary of these datasets, including their modalities, case numbers, the number of organ categories, and regions, is provided in Table 1.
Some datasets encompass organ classes that others do not, as illustrated in Table 2. Therefore, to ensure a meaningful and fair comparison across datasets, we evaluate the models' performance based on a set of eight organ classes shared by all five datasets. These organ classes are _liver, right kidney, left kidney, spleen, pancreas, gallbladder, esophagus, and stomach_. This selection enables a direct and consistent comparison across all datasets.
### Cross-Dataset Protocols
To comprehensively evaluate the generalizability of abdominal multi-organ segmentation models, we define a set of data usage scenarios termed as cross-dataset protocols. These protocols are designed to reflect diverse data scenarios commonly encountered in real-world applications. These protocols include:
**Protocol 1. Training on Individual Dataset:** Initially, we train separate models on each dataset, focusing on utilizing the labeled CT data available in each one. This serves as a baseline for assessing the generalizability of models trained on individual large-scale datasets.
**Protocol 2. Using Unlabeled Data:** FLARE22 [24] provides a set of unlabeled scans in addition to its labeled data. Through the assignment of pseudo-labels to these scans, we aim to investigate the impact of utilizing unlabeled data on model generalizability.
**Protocol 3. Using Multi-modal Data:** AMOS [16] offers both CT and MR scans, providing a unique opportunity to explore multi-modal training. We evaluate the effects of training models using exclusively CT scans, exclusively MR scans, and a combination of both.
**Protocol 4. Joint Training Across Datasets:** As the most comprehensive approach, we train a unified model across all available datasets. This enables us to assess the model's ability to generalize across diverse, large-scale datasets.
By exploring these protocols within our A-Eval, we aim to examine cross-dataset generalizability in abdominal multi-organ segmentation and highlight the importance of diverse dataset usage.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Dataset** & **Modality** & **\# Train** & **\# Test** & **\# Organs** & **Region** \\ \hline \multirow{2}{*}{FLARE22 [24]} & \multirow{2}{*}{CT} & 50 labeled & \multirow{2}{*}{50} & \multirow{2}{*}{13} & North American \\ & & 2000 unlabeled & & & European \\ \multirow{2}{*}{AMOS [16]} & \multirow{2}{*}{CT \& MR} & 200 CT & 100 CT & \multirow{2}{*}{15} \\ & & 40 MR & 20 MR & & \\ WORD [21] & & 100 & 20 & 16 & Asian \\ TotalSegmentator [42] & & 1082 & 57 & 104 & European \\ BTCV [18] & & - & 30 & 13 & North American \\ \multirow{2}{*}{A-Eval Totals} & \multirow{2}{*}{CT \& MR} & 1432 labeled CT & \multirow{2}{*}{257 CT} & North American \\ & & 2000 unlabeled CT & 20 MR & & European \\ \multirow{2}{*}{} & \multirow{2}{*}{40 MR} & 40 MR & & Asian \\ \hline \hline \end{tabular}
\end{table}
Table 1: Overview of the datasets used in A-Eval. The official training sets of FLARE22 [24], AMOS [16], WORD [21], and TotalSegmentator [42] are used for model training. Their official validation sets, along with the training set from BTCV [18], are employed as the test sets for model evaluation. The ’# Train’ and ’# Test’ columns indicate the number of labeled CT cases for training and testing, unless stated otherwise. The last row provides a cumulative summary of the datasets used in A-Eval.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Organ Class** & **FLARE22 [24]** & **AMOS [16]** & **WORD [21]** & **TotalSegmentator [42]** & **BTCV [18]** & **A-Eval** \\ \hline Liver & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) \\ Kidney Right & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) \\ Kidney Left & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) \\ Spleen & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) \\ Pancreas & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) \\ Aorta & \(\surd\) & \(\surd\) & \(\times\) & \(\surd\) & \(\surd\) & \(\times\) \\ Inferior Vena Cava & \(\surd\) & \(\surd\) & \(\times\) & \(\surd\) & \(\surd\) & \(\times\) \\ Adrenal Gland Right & \(\surd\) & \(\surd\) & \(\times\) & \(\surd\) & \(\surd\) & \(\times\) \\ Adrenal Gland Left & \(\surd\) & \(\surd\) & \(\times\) & \(\surd\) & \(\surd\) & \(\times\) \\ Gallbladder & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) \\ Esophagus & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) \\ Stomach & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) \\ Duodenum & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) & \(\times\) & \(\times\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of the presence of 13 organ classes across five different datasets and their intersection in our A-Eval evaluation, based on the FLARE22 [24] label system. Each dataset column indicates whether a particular organ class is present (”\(\surd\)”) or absent (”\(\times\)”). Notably, the WORD [21] dataset includes a general annotation for adrenal glands without distinguishing between left and right. The ’A-Eval’ column provides a summary, highlighting the organ classes that are consistently present across all datasets and are thus included in our A-Eval evaluation.
### Model Architecture and Training Procedure
To ensure a fair comparison during our exploration of different data usage strategies, we consistently use the STU-Net [12] model architecture, a scalable and transferable derivative of the nnU-Net [14] specifically designed for medical image segmentation tasks.
STU-Net retains the fundamental symmetric encoder-decoder structure of nnU-Net, each containing residual blocks [9] as their basic units. Each residual block is composed of two Conv-IN-LeakyReLU layers. The model incorporates six resolution stages and isotropic kernels (3,3,3) for all tasks. These features enhance the model's scalability and transferability. To avoid weight mismatch during task transfers, the model employs a downsampling operation within the first residual block of each stage and uses weight-free interpolation for upsampling.
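To make the block structure concrete, a minimal PyTorch sketch of a Conv-IN-LeakyReLU residual block of this kind is given below; it is an illustration rather than the exact STU-Net code, and the 1x1x1 projection on the skip path for channel or resolution changes is an assumption.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # Two Conv-IN-LeakyReLU layers with a residual connection; the optional stride
    # performs the in-block downsampling used at the start of each resolution stage.
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv3d(in_ch, out_ch, 3, stride=stride, padding=1)
        self.in1 = nn.InstanceNorm3d(out_ch, affine=True)
        self.conv2 = nn.Conv3d(out_ch, out_ch, 3, padding=1)
        self.in2 = nn.InstanceNorm3d(out_ch, affine=True)
        self.act = nn.LeakyReLU(0.01, inplace=True)
        self.skip = (nn.Identity() if stride == 1 and in_ch == out_ch
                     else nn.Conv3d(in_ch, out_ch, 1, stride=stride))

    def forward(self, x):
        out = self.act(self.in1(self.conv1(x)))
        out = self.in2(self.conv2(out))
        return self.act(out + self.skip(x))
```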
In most of our experiments, we specifically employ STU-Net-L. This model variant was chosen due to its substantial parameter capacity of approximately 440M parameters, which ensures that our evaluations are not limited by model size. To investigate how the model size impacts cross-dataset generalizability, we study four STU-Net variants. These variants cover a range of parameter scales, from a compact 14M to a substantial 1.4B.
For preprocessing, each image is standardized to the same spacing via resampling, the value of which is automatically determined based on the dataset. We use different normalization methods depending on the modality: CT images are first clipped to a predetermined intensity range and then normalized using the dataset's mean and standard deviation values, whereas MR images and mixed CT and MR images undergo intensity normalization based on the mean and standard deviation calculated for each image.
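A simplified version of this normalization step might look as follows; the CT clipping window and dataset statistics shown are placeholders, since in practice these values are derived automatically from each training dataset.

```python
import numpy as np

def normalize(img, modality, ct_window=(-160.0, 240.0), ct_mean=77.0, ct_std=140.0):
    # CT: clip to a dataset-derived intensity window, then z-score with dataset statistics.
    # MR (or mixed CT+MR): per-image z-score normalization.
    if modality == "CT":
        img = np.clip(img, *ct_window)
        return (img - ct_mean) / ct_std
    return (img - img.mean()) / (img.std() + 1e-8)
```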
The training process adheres to the standard nnU-Net pipeline, which incorporates on-the-fly data augmentation techniques, such as random rotations, scaling, and mirroring. The loss function is a combination of Dice and cross-entropy losses [22]. Furthermore, to ensure convergence on the corresponding datasets, we adjust the number of training epochs according to the different data usage protocols we employ. This adaptive training strategy further enhances the fairness and robustness of our evaluation and comparison process.
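The combined objective can be sketched as a generic soft-Dice plus cross-entropy loss, as below; this is an illustration rather than the exact nnU-Net implementation.

```python
import torch
import torch.nn.functional as F

def dice_ce_loss(logits, target, eps=1e-5):
    # logits: (B, C, D, H, W); target: (B, D, H, W) integer (long) class labels
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes=logits.shape[1]).permute(0, 4, 1, 2, 3).float()
    dims = (0, 2, 3, 4)
    inter = (probs * onehot).sum(dims)
    denom = probs.sum(dims) + onehot.sum(dims)
    dice = (2 * inter + eps) / (denom + eps)
    return ce + (1 - dice.mean())
```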
### Evaluation Metrics and Inference Procedure
For the evaluation of model performance across different cross-dataset protocols, we rely on two robust evaluation metrics - Dice Similarity Coefficient (DSC) and Normalized Surface Dice (NSD). These metrics offer complementary perspectives on model performance, with DSC quantifying the overlap between predicted and ground-truth segmentations, and NSD providing a measure of agreement between the predicted and ground-truth boundaries.
The DSC is defined as follows:
\[DSC=\frac{2|P\cap G|}{|P|+|G|} \tag{1}\]
Where \(P\) is the predicted segmentation, and \(G\) is the ground-truth segmentation. DSC values range between 0 and 1, with higher values indicating better performance.
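For a single organ class represented as a binary mask, the DSC computation is straightforward; returning 1 when both masks are empty is a convention assumed here.

```python
import numpy as np

def dice_coefficient(pred, gt):
    # pred, gt: boolean masks of the same shape for one organ class
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0
```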
The NSD is defined as:
\[\text{NSD}(G,S)=\frac{|B_{\partial G}(\tau)\cap\partial S|+|B_{\partial S}( \tau)\cap\partial G|}{|\partial G|+|\partial S|} \tag{2}\]
Where \(B_{\partial G}(\tau)\) and \(B_{\partial S}(\tau)\) denote the border regions of the ground truth \(G\) and the segmentation surface \(S\) at a tolerance \(\tau\), respectively. \(\partial G\) and \(\partial S\) represent the boundaries of the ground truth and the segmentation. \(\tau\) is a tolerance defined based on clinical requirements or consistency between radiologists [24].
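A simplified voxel-based approximation of the NSD can be written with distance transforms, as sketched below; official surface-Dice implementations operate on surface elements with areas, so this is only an approximation, and the boundary extraction via binary erosion is an assumption.

```python
import numpy as np
from scipy import ndimage

def boundary(mask):
    # boundary voxels of a boolean mask = mask minus its binary erosion
    return mask & ~ndimage.binary_erosion(mask)

def normalized_surface_dice(pred, gt, tau, spacing=(1.0, 1.0, 1.0)):
    bp, bg = boundary(pred), boundary(gt)
    total = bp.sum() + bg.sum()
    if total == 0:
        return 1.0
    # distance of every voxel to the nearest boundary voxel of the other mask (in mm)
    dist_to_bg = ndimage.distance_transform_edt(~bg, sampling=spacing)
    dist_to_bp = ndimage.distance_transform_edt(~bp, sampling=spacing)
    close_p = (dist_to_bg[bp] <= tau).sum()  # predicted boundary within tau of the GT boundary
    close_g = (dist_to_bp[bg] <= tau).sum()  # GT boundary within tau of the predicted boundary
    return (close_p + close_g) / total
```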
During inference, we follow the standard nnU-Net framework, utilizing a sliding window approach with a step size of 0.5. This approach ensures comprehensive coverage of the input volume, thereby allowing the model to exploit all available information. To further enhance evaluation robustness, we employ test-time augmentation (TTA) involving mirroring across the sagittal, coronal, and axial planes.
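The mirroring-based TTA can be sketched as follows; the sliding-window aggregation is omitted for brevity, and averaging softmax outputs over the eight flip combinations is an assumption about the aggregation rule.

```python
import torch

@torch.no_grad()
def predict_with_mirroring(model, x):
    # x: (B, C, D, H, W); average softmax predictions over all 8 combinations of
    # flips along the spatial axes (dims 2, 3, 4), undoing each flip before averaging.
    axes_sets = [(), (2,), (3,), (4,), (2, 3), (2, 4), (3, 4), (2, 3, 4)]
    probs = None
    for axes in axes_sets:
        xf = torch.flip(x, dims=axes) if axes else x
        p = torch.softmax(model(xf), dim=1)
        p = torch.flip(p, dims=axes) if axes else p
        probs = p if probs is None else probs + p
    return probs / len(axes_sets)
```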
## 4 Experiments and Results
### Implementation Details
All the experiments were conducted in a CentOS 7 environment using Python 3.9 and PyTorch 2.0 with nnU-Net 2.1. We followed the default data preprocessing and training procedures provided by nnU-Net. Importantly, optimal patch sizes and target spacing were automatically configured by nnU-Net based on the characteristics of each training dataset. We employed the SGD optimizer, setting Nesterov momentum at 0.99 with a weight decay of 1e-3. We kept the batch size constant at 2 and designed each epoch to comprise 250 iterations. The initial learning rate was set at 0.01 for training from scratch and was adjusted over time according to the poly learning rate policy, expressed as \((1-\text{epoch}/\text{total epochs})^{0.9}\). Regarding the training duration, we adopted different strategies tailored to each dataset, acknowledging that the required training time for models to converge can vary across different datasets. In line with the practice suggested by TotalSegmentator [42] and STU-Net [12], we extended the training on the TotalSegmentator dataset to 4000 epochs, exploiting its substantial scale and complexity to the fullest. However, for the remaining datasets - FLARE22 [24], AMOS [16], and WORD [21], we adhered to nnU-Net's default configuration of 1000 epochs, as our testing revealed no substantial performance gains beyond this point.
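The polynomial learning-rate decay used above is simple to state in code; for example, a quarter of the way through training it yields roughly 77% of the initial rate.

```python
def poly_lr(base_lr, epoch, total_epochs, exponent=0.9):
    # lr = base_lr * (1 - epoch / total_epochs) ** 0.9
    return base_lr * (1 - epoch / total_epochs) ** exponent

print(poly_lr(0.01, 250, 1000))  # ~0.0077
```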
### Cross-Dataset Evaluation for Models Trained on Individual Datasets
In this section, we assess the generalizability of models that are individually trained on FLARE22 [24], AMOS CT [16], WORD [21], and TotalSegmentator [42] datasets (Protocol 1). It is important to note that only labeled CT data were used for FLARE22 [24] and exclusively CT data for AMOS [16].
As shown in Table 3, there exists significant variation in model performance across different datasets. When examining the datasets from a training perspective, it becomes evident that models trained on individual datasets generally perform best when tested on the same datasets but often underperform when evaluated on others. These trends are visually depicted in Figure 2. Overall, the model trained on the TotalSegmentator [42] dataset achieves the highest average performance (DSC of 89.82% and NSD of 93.70%) with the lowest standard deviations (SD for DSC is 3.00% and for NSD is 1.94%). This indicates superior generalizability and stability of this model across different test datasets. In contrast, the FLARE22-trained [24] model lags behind, scoring lower in both average performance and stability
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c} \hline \hline Train/Test & \multicolumn{2}{c}{FLARE22} & \multicolumn{2}{c}{AMOS CT} & \multicolumn{2}{c}{WORD} & \multicolumn{2}{c}{TotalSegmentator} & \multicolumn{2}{c}{BTCV} & \multicolumn{2}{c}{Mean \(\pm\) SD} \\ \cline{2-13} & DSC & NSD & DSC & NSD & DSC & NSD & DSC & NSD & DSC & NSD & DSC & NSD \\ \hline FLARE22 & 89.20 & 90.19 & 76.53 & 80.25 & 85.94 & 90.76 & 74.06 & 76.56 & 86.11 & 89.28 & 82.37 \(\pm\) 5.94 & 85.41 \(\pm\) 5.85 \\ AMOS CT & 89.14 & 89.49 & **93.02** & **96.47** & 89.01 & 94.82 & 86.59 & 89.28 & 86.84 & 91.65 & 88.88 \(\pm\) 2.35 & 92.34 \(\pm\) 2.87 \\ WORD & 86.86 & 88.73 & 87.53 & 92.34 & **90.92** & **94.75** & 80.95 & 83.47 & 84.69 & 88.74 & 86.12 \(\pm\) 3.42 & 89.81 \(\pm\) 4.01 \\ TotalSegmentator & **90.32** & **91.96** & 89.65 & 94.02 & 86.30 & 92.46 & **95.12** & **97.33** & **87.73** & **92.72** & **89.82 \(\pm\) 3.00** & **93.20 \(\pm\) 1.94** \\ \hline Mean \(\pm\) SD & 88.88 \(\pm\) 1.26 & 90.09 \(\pm\) 1.20 & 86.68 \(\pm\) 6.18 & 90.77 \(\pm\) 6.25 & 88.04 \(\pm\) 2.04 & 93.45 \(\pm\) 1.96 & 84.04 \(\pm\) 7.74 & 86.66 \(\pm\) 7.63 & 86.34 \(\pm\) 1.11 & 90.60 \(\pm\) 1.64 & 86.80 \(\pm\) 4.88 & 90.31 \(\pm\) 5.07 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance comparison of the STU-Net-L model trained individually on different datasets and evaluated across various CT datasets. Displayed values represent average DSC (%) and NSD (%) for the eight shared classes across these datasets. The ’Mean \(\pm\) SD’ values, summarized both per row and per column, reflect the model’s variability across various training and testing conditions. The overall mean and standard deviation values at the bottom right corner provide a summary of the model’s generalization capabilities.
Figure 2: Visualization of the performance of STU-Net-L models trained individually on different datasets (FLARE22 [24], AMOS CT [16], AMOS MR [16], WORD [21], TotalSegmentator [42]) and validated on multiple datasets (FLARE22 [24], AMOS CT [16], AMOS MR [16], WORD [21], TotalSegmentator [42], BTCV [18]) within the A-Eval Benchmark. Each row corresponds to testing on a different dataset, while each column depicts various elements: the original image, ground truth, and the segmentation results obtained from models trained individually on different datasets.
Its mean DSC is 82.37% and mean NSD is 85.41%, with standard deviations of 5.94% and 5.85%, respectively. Interestingly, there seems to be a direct correlation between the size of the training dataset and the model's generalizability, following the order FLARE22 \(<\) WORD \(<\) AMOS CT \(<\) TotalSegmentator. This observation suggests that larger training datasets enhance the model's generalizability.
From a testing perspective, the BTCV dataset shows low variability (SD for DSC of 1.11% and for NSD of 1.64%), indicating consistent but less discriminating evaluations. In contrast, the TotalSegmentator and AMOS CT datasets show greater performance variability: 7.74% for DSC and 7.63% for NSD in TotalSegmentator, and 6.18% for DSC and 6.25% for NSD in AMOS CT. This suggests these datasets contain unique, challenging validation samples that resist easy generalization. For example, some TotalSegmentator validation cases may exclude the abdominal region, raising the bar for model generalization.
### Impact of Pseudo-Labeling on Model Generalizability
This section centers on Protocol 2, which investigates the influence of the pseudo-labeling (PL) technique on the generalization capability of models. The protocol focuses on the FLARE22 dataset, which comprises 2000 unlabeled and 50 labeled images.
Following the champion solution of FLARE22 [13], we first train the STU-Net-L model exclusively on the 50 labeled images from the FLARE22 dataset. This model is then used to generate pseudo-labels for the unlabeled images, creating an augmented dataset that includes these new labels. To keep the procedure simple, we do not perform any label selection. Finally, we retrain the model on this augmented dataset to examine whether the unlabeled data can enhance the model's generalization ability.
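The two-round shape of this procedure is sketched below. This is a deliberately tiny, self-contained toy: a nearest-centroid classifier on 2-D points stands in for the segmentation network and the CT volumes, so every helper here is illustrative rather than part of the actual FLARE22/nnU-Net pipeline; only the 50-labeled/2000-unlabeled split is echoed from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n_per_class):
    """Two Gaussian blobs standing in for foreground/background samples."""
    x = np.vstack([rng.normal(-2.0, 1.0, size=(n_per_class, 2)),
                   rng.normal(+2.0, 1.0, size=(n_per_class, 2))])
    y = np.repeat([0, 1], n_per_class)
    return x, y

def fit_centroids(x, y):
    """'Training': store the per-class mean."""
    return np.stack([x[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, x):
    """'Inference': assign each point to the nearest centroid."""
    d = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=-1)
    return d.argmin(axis=1)

x_lab, y_lab = make_data(25)      # 50 labeled points   (cf. the 50 labeled cases)
x_unlab, _   = make_data(1000)    # 2000 unlabeled points (cf. the 2000 unlabeled cases)

teacher = fit_centroids(x_lab, y_lab)            # round 1: supervised training only
pseudo  = predict(teacher, x_unlab)              # pseudo-labels, no selection or filtering
student = fit_centroids(np.vstack([x_lab, x_unlab]),        # round 2: retrain on the
                        np.concatenate([y_lab, pseudo]))    # labeled + pseudo-labeled union
```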
As presented in Table 4, the utilization of pseudo-labeling (PL) demonstrates a clear improvement in the model's generalization capability. The mean DSC increased from 82.37% (without PL) to 87.91% (with PL), while the mean NSD increased from 85.41% to 91.12%. Concurrently, the standard deviations of DSC and NSD decreased from 5.94% and 5.85% (without PL) to 2.15% and 1.69% (with PL), respectively. These changes indicate not only a boost in performance but also an increase in stability across various datasets. The significant and consistent improvement across all datasets underscores the efficacy of pseudo-labeling in enhancing model performance.
### Impact of Multi-Modality Data on Model Generalizability
In this section, we delve into the Protocol 3 of multi-modality training to evaluate the cross-modality generalizability of existing models. For this purpose, we utilize the AMOS [16] dataset, comprising both CT and MR data. We explore three training scenarios: using only the 100 available CT images, only the 40 MR images, and a combination of both modalities. To ensure a comprehensive assessment of cross-modality generalizability, we integrate the official AMOS MR validation set into our cross-dataset evaluation.
As demonstrated in Table 5 and Figure 2, models trained solely on one modality exhibit limitations when evaluated on the other. Specifically, a model trained exclusively on AMOS CT data excels in other CT datasets but falls short on the AMOS MR dataset, with a DSC of 70.08% and an NSD of 72.92%. Conversely, a model trained only on AMOS MR data performs well on its own MR validation set but poorly on CT datasets.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Train/Test} & \multicolumn{2}{c}{FLARE22} & \multicolumn{2}{c}{AMOS CT} & \multicolumn{2}{c}{WORD} & \multicolumn{2}{c}{TotalSegmentator} & \multicolumn{2}{c}{BTCV} & \multicolumn{2}{c}{Mean \(\pm\) SD} \\ \cline{2-13} & DSC & NSD & DSC & NSD & DSC & NSD & DSC & NSD & DSC & NSD & DSC & NSD \\ \hline FLARE22 w/o PL & 89.20 & 90.19 & 76.53 & 80.25 & 85.94 & 90.76 & 74.06 & 76.56 & 86.11 & 89.28 & 82.37 \(\pm\) 5.94 & 85.41 \(\pm\) 5.85 \\ FLARE22 w/ PL & **91.98** & **93.46** & **87.53** & **90.92** & **87.15** & **92.01** & **85.55** & **88.29** & **87.35** & **90.94** & **87.91 \(\pm\) 2.15** & **91.12 \(\pm\) 1.69** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance comparison of the STU-Net-L model trained with and without Pseudo Labeling (PL) on the FLARE22 dataset. Displayed values represent average DSC (%) and NSD (%) for the eight shared classes across various CT datasets. The ’Mean \(\pm\) SD’ values summarize the average performance and variability across all datasets for each training method.
\begin{table}
\end{table}
Table 5: Performance comparison of the STU-Net-L model trained on AMOS CT only, AMOS MR only, and combined AMOS CT+MR data, evaluated across the FLARE22, AMOS CT, AMOS MR, WORD, TotalSegmentator, and BTCV datasets. Displayed values represent average DSC (%) and NSD (%) for the eight shared classes (tabular body omitted).
Notably, a model trained on both CT and MR data from the AMOS dataset (AMOS CT+MR) outperforms those trained on individual modalities across all datasets. This result highlights the utility of simple mixed-modality training in multi-modal learning for medical image segmentation and suggests the potential benefits of integrating MR data into traditionally CT-focused training procedures.
### Improving Generalizability Through Joint Training Across Multiple Datasets
In this section, we explore the improvement of model generalizability through joint training across multiple datasets, as per Protocol 4. We employ a jointly trained model that utilizes a comprehensive set of 3472 images available in the A-Eval benchmark. This set includes 1432 labeled CT images, 2000 unlabeled CT images, and 40 labeled MR images, as detailed in Table 1. We compare the performance of this jointly trained model to models trained on individual datasets, evaluated on their respective validation sets. Additionally, we assess its ability to generalize using the BTCV dataset, which was excluded from the training process.
Initially, we train a model on labeled data from three datasets: FLARE22, AMOS, and TotalSegmentator, with labels that are aligned to match the FLARE22 label system. Using this pre-trained model, we generate pseudo labels for the WORD dataset's four missing categories. A second round of training incorporates these newly annotated data. Finally, we create pseudo labels for 2000 unlabeled images with the updated model and use them in a last round of joint training from all datasets.
As shown in Table 6, the model trained through joint training consistently matches or surpasses the performance of models trained on individual datasets. For example, it outperforms the FLARE22-specific model, achieving a DSC of 91.98% vs. 89.20% and NSD of 93.58% vs. 90.19%. Notably, the joint-trained model also performs well on the AMOS MR dataset, even though MR images make up only 1.15% of the total training set. This suggests that despite an imbalanced mix of imaging modalities, the model retains its effectiveness in processing MR data through joint training.
The efficacy of joint training is further validated on the untested BTCV dataset, as shown in Tables 7 and 8. Despite BTCV being excluded from the training process, the joint model surpasses all models trained on individual datasets when tested on it, achieving the highest mean DSC (88.90%) and NSD (93.81%) scores. This indicates that joint training with multiple datasets improves the model's generalizability to unseen data, surpassing single-dataset models.
### Impact of Model Size on Model Generalizability

We conduct cross-dataset generalizability tests on each STU-Net variant (S, B, L, and H), similar to those described for STU-Net-L in Table 3. Each model is trained individually on each of the four training datasets and evaluated on the five test datasets, with the Dice Similarity Coefficient (DSC) and Normalized Surface Dice (NSD) calculated across eight shared classes. The mean and standard deviation (SD) of these 20 values represent each model's generalizability. Detailed results are provided in the Appendix for reference.
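As a reminder of how these numbers are produced, the DSC is the overlap measure \(2|P\cap G|/(|P|+|G|)\). The numpy sketch below computes it per organ label for a single (randomly generated, toy) prediction/ground-truth pair and averages over the eight shared classes; NSD additionally requires surface-distance computations and is not reproduced here.

```python
import numpy as np

def dice_per_class(pred, gt, labels):
    """Dice similarity coefficient 2|P ∩ G| / (|P| + |G|) for each label."""
    scores = {}
    for c in labels:
        p, g = (pred == c), (gt == c)
        denom = p.sum() + g.sum()
        scores[c] = 2.0 * np.logical_and(p, g).sum() / denom if denom else np.nan
    return scores

# Toy label volumes standing in for a predicted and a ground-truth segmentation.
rng = np.random.default_rng(0)
pred = rng.integers(0, 9, size=(32, 32, 32))   # labels 1..8 = the eight shared organs
gt   = rng.integers(0, 9, size=(32, 32, 32))

dsc = np.array(list(dice_per_class(pred, gt, labels=range(1, 9)).values()))
print(f"mean DSC = {100 * np.nanmean(dsc):.2f}% ± {100 * np.nanstd(dsc):.2f}%")
```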
Figure 3 presents the average DSC and NSD values derived from all test datasets for each STU-Net variant. A clear trend is noticeable: as the model size increases from 'small' to 'base' to 'large', there is a steady rise in performance. For DSC values, there is a successive increase from 84.65% for STU-Net-S, to 85.41% for STU-Net-B, and further to 86.64% for STU-Net-L. A similar progression is detected in NSD values, with a rise from 88.29% for STU-Net-S, to 89.12% for STU-Net-B, and peaking at 90.14% for STU-Net-L.
However, when we scale up the model to the 'huge' variant (STU-Net-H), the performance does not proportionally increase. Despite its size, STU-Net-H's DSC (86.16%) and NSD (89.95%) do not surpass the 'large' variant. This suggests that merely enlarging the model does not always improve generalizability and may cause overfitting, particularly when training data is limited in size and diversity.
## 5 Conclusion
In this paper, we introduce A-Eval, a rigorous benchmark specifically designed for evaluating the cross-dataset generalizability of models in abdominal multi-organ segmentation. A-Eval acts as a standardized platform for in-depth investigation into various factors affecting model generalizability, including model architecture, training strategies, and training data. By offering such a comprehensive framework, A-Eval serves as an invaluable blueprint for future investigations in the development of models with superior generalizability.
Our A-Eval-based findings underscore the critical importance of data-centric strategies for achieving superior model generalizability. We have found that utilizing larger training datasets, integrating unlabeled data via pseudo-labeling, employing multi-modal learning, and conducting joint training across multiple datasets significantly enhances both model performance and generalizability. Furthermore, our results suggest that appropriately increasing a model's size contributes to better performance, thereby highlighting the potential of larger models in achieving enhanced generalizability. These insights offer actionable recommendations for assembling future large-scale datasets and serve as a roadmap for the design of models with robust generalizability, laying the foundation for advancements in abdominal multi-organ segmentation and beyond.
\begin{table}
\begin{tabular}{l l l l l l l l l l} \hline \hline Train data & Liver & Right Kidney & Spleen & Pancreas & Gallbladder & Esophagus & Stomach & Left Kidney & Mean \\ \hline FLARE22 & 94.48 & 86.68 & 91.27 & 82.00 & 76.31 & 81.29 & 89.31 & 87.51 & 86.11 \\ FLARE22 w/ PL & 95.33 & 88.29 & 92.00 & **83.90** & 78.17 & **82.40** & 90.88 & 87.81 & 87.35 \\ AMOS CT & 95.47 & 89.74 & 93.24 & 82.64 & 76.92 & 81.35 & 87.03 & 88.37 & 86.84 \\ AMOS CT+MR & 95.59 & 90.10 & 92.28 & 82.04 & **79.97** & 81.72 & 90.57 & 89.00 & 87.66 \\ WORD & 94.24 & 88.04 & 91.61 & 80.26 & 73.71 & 79.51 & 83.25 & 86.93 & 84.69 \\ TotalSegmentator & 96.29 & 91.38 & 93.24 & 81.98 & 73.65 & 79.73 & 92.03 & **93.54** & 87.73 \\ \hline Joint Train & **96.51** & **92.06** & **94.48** & 83.40 & 78.36 & 81.11 & **93.15** & 92.11 & **88.90** \\ \hline \hline \end{tabular}
\end{table}
Table 7: The DSC (%) scores of the STU-Net-L models trained individually or jointly on all datasets for the shared eight organ categories in the BTCV dataset. The ’Mean’ column denotes the average DSC across all organ categories.
\begin{table}
\begin{tabular}{l l l l l l l l l l} \hline \hline Train data & Liver & Right Kidney & Spleen & Pancreas & Gallbladder & Esophagus & Stomach & Left Kidney & Mean \\ \hline FLARE22 & 94.43 & 85.39 & 93.40 & 94.19 & 76.80 & 91.03 & 91.98 & 87.02 & 89.28 \\ FLARE22 w/ PL & 95.57 & 87.30 & 94.54 & **95.75** & 79.84 & 92.37 & 95.25 & 86.89 & 90.94 \\ AMOS CT & 96.55 & 93.09 & 95.35 & 93.93 & 78.63 & 92.42 & 91.67 & 91.54 & 91.65 \\ AMOS CT+MR & 96.90 & 94.07 & 94.37 & 93.06 & **81.56** & **93.03** & 94.42 & 92.80 & 92.53 \\ WORD & 93.26 & 90.09 & 93.18 & 93.39 & 73.34 & 91.35 & 87.35 & 87.95 & 88.74 \\ TotalSegmentator & 97.82 & **95.48** & 94.98 & 94.08 & 74.00 & 91.93 & 96.01 & **97.47** & 92.72 \\ \hline Joint Train & **97.83** & 95.44 & **96.92** & 94.75 & 80.18 & 92.56 & **97.08** & 95.68 & **93.81** \\ \hline \hline \end{tabular}
\end{table}
Table 8: The NSD (%) scores of the STU-Net-L models trained individually or jointly on all datasets for the shared eight organ categories in the BTCV dataset. The ’Mean’ column denotes the average NSD across all organ categories. |
2309.09595 | $\mathbb{F}$-valued trace of a finite-dimensional commutative
$\mathbb{F}$-algebra | A non-zero $\mathbb{F}$-valued $\mathbb{F}$-linear map on a finite
dimensional $\mathbb{F}$-algebra is called an $\mathbb{F}$-valued trace if its
kernel does not contain any non-zero ideals. However, given an
$\mathbb{F}$-algebra such a map may not always exist. We find an infinite class
of finite-dimensional commutative $\mathbb{F}$-algebras which admit an
$\mathbb{F}$-valued trace. In fact, in these cases, we explicitly construct a
trace map. The existence of an $\mathbb{F}$-valued trace on a finite
dimensional commutative $\mathbb{F}$-algebra induces a non-degenerate bilinear
form on the $\mathbb{F}$-algebra which may be helpful both theoretically and
computationally. In this article, we suggest a couple of applications of an
$\mathbb{F}$-valued trace map of an $\mathbb{F}$-algebra to algebraic coding
theory. | Anuj Kr Bhagat, Ritumoni Sarma | 2023-09-18T09:04:54Z | http://arxiv.org/abs/2309.09595v2 | # \(\mathbb{F}\)-valued trace of a finite-dimensional commutative \(\mathbb{F}\)-algebra
###### Abstract.
A non-zero \(\mathbb{F}\)-valued \(\mathbb{F}\)-linear map on a finite dimensional \(\mathbb{F}\)-algebra is called an \(\mathbb{F}\)-valued trace if its kernel does not contain any non-zero ideals. However, given an \(\mathbb{F}\)-algebra such a map may not always exist. We find an infinite class of finite-dimensional commutative \(\mathbb{F}\)-algebras which admit an \(\mathbb{F}\)-valued trace. In fact, in these cases, we explicitly construct a trace map. The existence of an \(\mathbb{F}\)-valued trace on a finite dimensional commutative \(\mathbb{F}\)-algebra induces a non-degenerate bilinear form on the \(\mathbb{F}\)-algebra which may be helpful both theoretically and computationally. In this article, we suggest a couple of applications of an \(\mathbb{F}\)-valued trace map of an \(\mathbb{F}\)-algebra to algebraic coding theory.
## 1. Introduction
Throughout this manuscript, \(\mathbb{F}\) denotes a field and \(\mathbb{F}_{q}\) denotes the finite field of order \(q,\) where \(q\) is a prime power.
The \(\mathbb{F}\)-valued trace map of an extension of \(\mathbb{F}\) plays an important role in the theory of both finite and infinite fields. For instance, every functional of a finite extension \(\mathbb{K}\) over \(\mathbb{F}\) can be described as \(\alpha\mapsto\operatorname{Tr}_{\mathbb{K}/\mathbb{F}}(\alpha\beta),\) for a unique \(\beta\in\mathbb{K}.\) In fact, \(\operatorname{Tr}_{\mathbb{K}/\mathbb{F}}\) induces a non-degenerate bilinear form, namely, \((\alpha,\beta)\mapsto\operatorname{Tr}_{\mathbb{K}/\mathbb{F}}(\alpha\beta)\) on \(\mathbb{K}\) and consequently, given any basis \(\{\alpha_{1}<\alpha_{2}<\cdots<\alpha_{n}\}\) for \(\mathbb{K}\) over \(\mathbb{F}\), there is a unique basis \(\{\beta_{1}<\beta_{2}<\cdots<\beta_{n}\}\) for \(\mathbb{K}\) over \(\mathbb{F}\) such that \(\operatorname{Tr}_{\mathbb{K}/\mathbb{F}}(\alpha_{i}\beta_{j})=\delta_{ij}.\) Because of these properties, the trace map turns out to be an important tool in the theory of fields. As its application, in coding theory, the trace map is used to construct "trace codes" and it also helps in the computation of "subfield codes" [1].
Motivated by the wide range of applications of the trace map, we got curious to find if there is an analogue to the trace map for a finite-dimensional \(\mathbb{F}\)-algebra. In this manuscript, we study the trace maps of finite dimensional commutative \(\mathbb{F}\)-algebras. For a commutative ring \(R\) with unity, and an \(R\)-algebra \(A\), a surjective \(R\)-linear map \(\tau:A\to R\) is called a \(R\)-valued trace if \(\ker(\tau)\) contains no non-zero left ideals of \(A\) (see [3], [6], [7]). Not every finite-dimensional \(\mathbb{F}\)-algebra has an \(\mathbb{F}\)-valued trace. For example, \(\mathbb{F}_{2}[u,v]/\langle u^{2},v^{2},uv\rangle\) does not admit any \(\mathbb{F}_{2}\)-valued trace. In this article, we show by construction that a finite-dimensional commutative \(\mathbb{F}\)-algebra of the form \(\mathcal{R}=\mathbb{F}[x_{1},x_{2},\ldots,x_{n}]/\langle g_{1}(x_{1}),g_{2}(x _{2}),\ldots,g_{n}(x_{n})\rangle,\)\(g_{i}(x_{i})\in\mathbb{F}[x_{i}],\) has an \(\mathbb{F}\)-valued trace.
Let \(\mathbb{K}\) be a finite field extension of \(\mathbb{F}\), then \(\mathbb{K}\) can be viewed as a vector space over \(\mathbb{F}\). The multiplication by \(\alpha\in\mathbb{K}\), \(\mathfrak{m}_{\alpha}:\mathbb{K}\to\mathbb{K}\) given by \(x\mapsto\alpha x\) is an \(\mathbb{F}\)-linear transformation. Then \(\operatorname{Tr}_{\mathbb{K}/\mathbb{F}}(\alpha)\) is defined as the trace of the linear operator \(\mathfrak{m}_{\alpha}\). One may try to extend this definition to finite-dimensional \(\mathbb{F}\)-algebras as they are \(\mathbb{F}\)-vector spaces. But, sometimes this map turns out to be the zero map. For instance, if \(\mathcal{R}=\mathbb{F}_{2}[x]/\langle x^{2}\rangle,\) then \(\operatorname{Tr}_{\mathcal{R}/\mathbb{F}_{2}}(a+bx+\langle x^{2}\rangle)=0,\) for all \(a,b\in\mathbb{F}_{2}.\) Thus, the obvious
generalization of the field trace need not produce an \(\mathbb{F}\)-valued trace in the sense defined below.

## 2. Preliminaries
**Proposition 2.3**.: Let \(\mathcal{R}\) be a finite-dimensional commutative \(\mathbb{F}\)-algebra and let \(\tau:\mathcal{R}\to\mathbb{F}\) be a non-zero \(\mathbb{F}\)-linear map. Then the following statements are equivalent.
1. \(\ker(\tau)\) does not contain any non-zero ideals of \(\mathcal{R}\).
2. \(f:\mathcal{R}\times\mathcal{R}\to\mathbb{F}\) defined by \(f(\boldsymbol{x},\boldsymbol{y})=\tau(\boldsymbol{x}\boldsymbol{y})\) is a non-degenerate bilinear form, that is, \(\tau(\boldsymbol{x}\boldsymbol{y})=0,\forall\,\boldsymbol{x}\in\mathcal{R}\implies\boldsymbol{y}=\boldsymbol{0}\).
**Definition 2.4**.: Let \(\mathcal{R}\) be a finite-dimensional commutative \(\mathbb{F}\)-algebra. A non-zero \(\mathbb{F}\)-linear map \(\tau:\mathcal{R}\to\mathbb{F}\) is called an \(\mathbb{F}\)_-valued trace_ of \(\mathcal{R}\), if it satisfies any one of the statements of Proposition 2.3.
**Remark 2.5**.: In [3], the notion of trace (or Generalized Frobenius trace) is defined for a general ring. Let \(S\) be a ring (need not be commutative) with unity and \(R\) be a subring of \(S\) sharing the same unity. Then a homomorphism of left \(R\)-modules \(\mathrm{Tr}_{R}^{S}:S\to R\) is called a trace from \(S\) to \(R\) if it is surjective and \(\ker(\mathrm{Tr}_{R}^{S})\) does not contain any non-zero left ideals of \(S.\) We note that if \(R=\mathbb{F}\) and \(S\) is an \(\mathbb{F}\)-algebra, then Definition 2.4 is a special case of the definition in [3].
Let \(V\) be a finite-dimensional vector space over \(\mathbb{F}\) and let \(T\) be a linear operator on \(V.\)
**Definition 2.6**.: [4] If \(\boldsymbol{\alpha}\) is a vector in \(V\), then the subspace \(Z(\boldsymbol{\alpha};T):=\{g(T)\boldsymbol{\alpha}:g(x)\in\mathbb{F}[x]\}\) is called the \(T\)_-cyclic subspace generated by_\(\boldsymbol{\alpha}.\) If \(Z(\boldsymbol{\alpha};T)=V,\) then \(\boldsymbol{\alpha}\) is called a _cyclic vector_ for \(T.\)
**Definition 2.7**.: [4] If \(\boldsymbol{\alpha}\) is a vector in \(V\), then the ideal \(M(\boldsymbol{\alpha};T):=\{g(x)\in\mathbb{F}[x]:g(T)\boldsymbol{\alpha}=0\}\) of \(\mathbb{F}[x]\) is called the \(T\)_-annihilator of_ \(\boldsymbol{\alpha}.\)
Since \(\mathbb{F}[x]\) is an Euclidean domain, there is a unique monic polynomial \(p_{\boldsymbol{\alpha}}(x)\) which generates \(M(\boldsymbol{\alpha};T);\) the polynomial \(p_{\boldsymbol{\alpha}}(x)\) is also called the \(T\)-annihilator of \(\boldsymbol{\alpha}\). With these notations we note the following result.
**Theorem 2.8** ([4]).: _Let \(\boldsymbol{\alpha}\in V\setminus\{\boldsymbol{0}\}.\) Then_
1. \(\dim_{\mathbb{F}}Z(\boldsymbol{\alpha};T)=\deg p_{\boldsymbol{\alpha}}(x)\)__
2. _If_ \(\deg p_{\boldsymbol{\alpha}}(x)=k,\) _then_ \(\{\boldsymbol{\alpha},T\boldsymbol{\alpha},\ldots T^{k-1}\boldsymbol{\alpha}\}\) _is a basis for_ \(Z(\boldsymbol{\alpha};T).\)__
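Theorem 2.8 is easy to check numerically for a concrete operator. In the sketch below the matrix \(T\) is an illustrative nilpotent shift over \(\mathbb{R}\) (chosen for this example, not taken from the text); the Krylov vectors \(\boldsymbol{\alpha},T\boldsymbol{\alpha},T^{2}\boldsymbol{\alpha},\ldots\) are accumulated until a dependence appears, and the count of independent vectors equals both \(\deg p_{\boldsymbol{\alpha}}(x)\) and \(\dim_{\mathbb{F}}Z(\boldsymbol{\alpha};T)\).

```python
import numpy as np

T = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])   # a nilpotent shift; its minimal polynomial is x^3
alpha = np.array([0., 0., 1.])

# Krylov vectors alpha, T·alpha, T²·alpha, ... until the new vector becomes dependent.
vectors = [alpha]
while True:
    extended = vectors + [T @ vectors[-1]]
    if np.linalg.matrix_rank(np.column_stack(extended)) == len(vectors):
        break                  # the latest vector depends on the previous ones
    vectors = extended

k = len(vectors)
print(f"deg of the T-annihilator of alpha = dim Z(alpha; T) = {k}")
# Here k = 3 = dim V, so alpha is a cyclic vector for T (its T-annihilator is x^3).
```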
The following result that describes a basis of a tensor product of vector spaces is helpful for computation.
**Proposition 2.9**.: Let \(V\) and \(W\) be finite-dimensional vector spaces over \(\mathbb{F}\) with bases \(\{\boldsymbol{\alpha}_{1}\ldots,\boldsymbol{\alpha}_{m}\}\) and \(\{\boldsymbol{\beta}_{1}\ldots,\boldsymbol{\beta}_{n}\}\) respectively. Then \(\{\boldsymbol{\alpha}_{i}\otimes\boldsymbol{\beta}_{j}:1\leq i\leq m,1\leq j \leq n\}\) is a basis for \(V\otimes_{\mathbb{F}}W.\)
**Theorem 2.10**.: _Let, for \(1\leq i\leq n,\)\(g_{i}(x)\in\mathbb{F}[x]\). Then_
\[\frac{\mathbb{F}[x_{1},x_{2},\ldots,x_{n}]}{\langle g_{1}(x_{1}),g_{2}(x_{2}), \ldots,g_{n}(x_{n})\rangle}\cong\frac{\mathbb{F}[x_{1}]}{\langle g_{1}(x_{1}) \rangle}\otimes_{\mathbb{F}}\frac{\mathbb{F}[x_{2}]}{\langle g_{2}(x_{2}) \rangle}\otimes_{\mathbb{F}}\cdots\otimes_{\mathbb{F}}\frac{\mathbb{F}[x_{n} ]}{\langle g_{n}(x_{n})\rangle}.\]
## 3. Trace on finite dimensional commutative \(\mathbb{F}\)-algebras
**Theorem 3.1**.: _Let \(g(x)\in\mathbb{F}[x]\) be an irreducible polynomial over \(\mathbb{F}\) of degree \(n\), and let \(T:\mathbb{F}[x]/\langle g(x)\rangle\rightarrow\mathbb{F}\) be a non-zero functional. Set \(\mathcal{R}=\mathbb{F}[x]/\langle g(x)^{r}\rangle,\) for \(r\in\mathbb{N}.\) Then the map \(\tau:\mathcal{R}\rightarrow\mathbb{F}\) given by_
\[p_{0}(x)+p_{1}(x)g(x)+\cdots+p_{r-1}(x)g(x)^{r-1}+\langle g(x)^{r}\rangle \mapsto\sum_{i=0}^{r-1}T(p_{i}(x)+\langle g(x)\rangle)\]
_is an \(\mathbb{F}\)-valued trace of \(\mathcal{R}.\)_
Proof.: Note that \(\mathcal{B}:=\{\boldsymbol{e}_{i,j}=x^{i}g(x)^{j}:0\leq i\leq n-1,0\leq j\leq r -1\}\) is a basis for \(\mathcal{R}\) and hence, every element \(p(x)+\langle g(x)^{r}\rangle\) of \(\mathcal{R}\) can be uniquely expressed in the form \(p_{0}(x)+p_{1}(x)g(x)+\cdots+p_{r-1}(x)g(x)^{r-1}+\langle g(x)^{r}\rangle,\) where \(p_{i}(x)\in\mathbb{F}[x]\) with \(\deg p_{i}(x)\leq n-1.\) For convenience, identify \(\mathbb{F}[x]/\langle g(x)\rangle\) with \(\mathbb{F}^{n}\) via the map \(f(x)+\langle g(x)\rangle\mapsto\boldsymbol{f}=(a_{0},a_{1},\ldots,a_{n-1})\in \mathbb{F}^{n},\) if \(f(x)=a_{0}+a_{1}x+\cdots+a_{n-1}x^{n-1}.\) In turn, get the identification of \(\mathcal{R}\) with \(\mathbb{F}^{nr}\) as \(\mathbb{F}\)-vector spaces via the vector space isomorphism
\[p(x)+\langle g(x)^{r}\rangle\mapsto\overline{\boldsymbol{p}}=(\boldsymbol{p} _{0},\boldsymbol{p}_{1},\ldots,\boldsymbol{p}_{r-1}) \tag{3.1}\]
where \(\boldsymbol{p}_{i}=(p_{i,0},p_{i,1},\ldots,p_{i,(n-1)})\in\mathbb{F}^{n}.\)
Since \(T\) is a non-zero functional on \(\mathbb{F}[x]/\langle g(x)\rangle,\) there exists \(\boldsymbol{s}=(s_{0},\ldots,s_{n-1})\in\mathbb{F}^{n}\setminus\{\boldsymbol{0}\}\) such that \(T(\boldsymbol{a})=\boldsymbol{s}\cdot\boldsymbol{a},\) the usual dot product of \(\boldsymbol{s}\) and \(\boldsymbol{a}.\) With the above identification, \(\tau\) can be expressed as
\[\tau(\overline{\boldsymbol{p}})=\sum_{i=0}^{r-1}\boldsymbol{s}\cdot\boldsymbol {p}_{i}. \tag{3.2}\]
Next consider the operator
\[\mathfrak{m}_{x}:\mathcal{R} \rightarrow\mathcal{R}\] \[p(x)+\langle g(x)^{r}\rangle \mapsto xp(x)+\langle g(x)^{r}\rangle.\]
Order \(\mathcal{B}\) such that \(\boldsymbol{e}_{i,j}<\boldsymbol{e}_{k,l}\) if either \(j=l\) and \(i<k\) or \(j<l\). Suppose \(A=[\mathfrak{m}_{x}]_{\mathcal{B}}\) is the matrix of \(\mathfrak{m}_{x}\) relative to the ordered basis \(\mathcal{B}\). Now, for \(1\leq j\leq r-1\), compute \(g(x)^{j}\left(p(x)+\langle g(x)^{r}\rangle\right)\). Observe,
\[g(x)^{j}\left(\underset{i=0}{\overset{r-1}{\sum}}p_{i}(x)g(x)^{i}\right)= \underset{i=j}{\overset{r-1}{\sum}}p_{i-j}(x)g(x)^{i}\]
and
\[g(x)^{r}\left(\underset{i=0}{\overset{r-1}{\sum}}p_{i}(x)g(x)^{i}\right)= \boldsymbol{0}.\]
Hence,
\[g(A)=\left(\begin{array}{ccccc}O&O&\cdots&O&O\\ I&O&\cdots&O&O\\ O&I&\cdots&O&O\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ O&O&\cdots&I&O\end{array}\right)_{nr\times nr}\]
where \(O\) is the null matrix of order \(n\times n\) and \(I\) is the identity matrix of order \(n\times n\) over \(\mathbb{F}\). Observe that \(r\) is the least positive integer such that \(g(A)^{r}=O\). Hence, the minimal polynomial of \(\mathfrak{m}_{x}\) divides \(g(x)^{r}\). Since \(g(x)\) is irreducible and \(g(x)^{i}\) does not annihilate \(\mathfrak{m}_{x}\) for \(0\leq i\leq r-1\), the minimal polynomial of \(\mathfrak{m}_{x}\) is \(g(x)^{r}\).
Consider \(\overline{\boldsymbol{s}}:=(\boldsymbol{s},\boldsymbol{s},\ldots,\boldsymbol {s})\in\mathbb{F}^{nr}\), where \(\boldsymbol{s}\) is identified with \(s_{0}+s_{1}x+\cdots+s_{n-1}x^{n-1}\in\mathbb{F}[x]/\langle g(x)\rangle\). Since \(\mathfrak{m}_{x}\)-annihilator of \(\overline{\boldsymbol{s}}\) is \(g(x)^{r}\) and \(\deg g(x)^{r}=nr=\dim_{\mathbb{F}}\mathcal{R}\), the vector \(\overline{\boldsymbol{s}}\) is a cyclic vector for \(\mathfrak{m}_{x}\) and \(\mathcal{B}^{\prime}=\{\overline{\boldsymbol{s}}^{t},A\overline{\boldsymbol{s }}^{t},\ldots,A^{nr-1}\overline{\boldsymbol{s}}^{t}\}\) is a basis for \(\mathcal{R}\) via the identification given in (3.1).
Now,
\[\tau\left(x^{j}(p(x)+\langle g(x)^{r}\rangle)\right)=\tau(A^{j}\overline{ \boldsymbol{p}})=\overline{\boldsymbol{s}}\cdot\left(A^{j}\overline{ \boldsymbol{p}}\right)=\left(\overline{\boldsymbol{s}}A^{j}\right)\cdot \overline{\boldsymbol{p}}=(A^{j}\overline{\boldsymbol{s}}^{t})^{t}\cdot \overline{\boldsymbol{p}}.\]
If \(\tau\left(x^{j}(p(x)+\langle g(x)^{r}\rangle)\right)=0,\forall\,0\leq j\leq nr-1\), then \((A^{j}\overline{\boldsymbol{s}}^{t})^{t}\cdot\overline{\boldsymbol{p}}=0,\forall\,0\leq j\leq nr-1\), and hence \(\overline{\boldsymbol{p}}=\overline{\boldsymbol{0}}\), since \(\mathcal{B}^{{}^{\prime}}\) is a basis for \(\mathcal{R}\) and \(\overline{\boldsymbol{s}}\) is also a cyclic vector for \(A^{t}\). Consequently, \(\ker(\tau)\) does not contain any non-zero ideal of \(\mathcal{R}\), that is, \(\tau\) is an \(\mathbb{F}\)-valued trace of \(\mathcal{R}\).
**Remark 3.2**.: The usual trace map \(T=\operatorname{Tr}_{\mathbb{F}[\alpha]/\mathbb{F}}\), where \(\alpha=x+\langle g(x)\rangle\) is always a choice of \(T\) in Theorem 3.1.
**Lemma 3.3**.: Let \(\mathcal{R}_{1},\mathcal{R}_{2},\ldots,\mathcal{R}_{s}\) be finite-dimensional commutative \(\mathbb{F}\)-algebras. If each \(\mathcal{R}_{i}\) admits an \(\mathbb{F}\)-valued trace, then so does \(\prod_{i=1}^{s}\mathcal{R}_{i}\).
Proof.: Suppose for each \(i,\)\(\tau_{i}:\mathcal{R}_{i}\rightarrow\mathbb{F}\) is an \(\mathbb{F}\)-valued trace of \(\mathcal{R}_{i}\). Set \(\mathcal{R}=\prod_{i=1}^{s}\mathcal{R}_{i}\) and define
\[\tau:\mathcal{R} \rightarrow\mathbb{F}\] \[\boldsymbol{x} =(x_{1},\ldots,x_{s}) \mapsto\sum_{i=1}^{s}\tau_{i}(x_{i}).\]
Then \(\tau\) is clearly a non-zero \(\mathbb{F}\)-linear map from \(\mathcal{R}\) to \(\mathbb{F}.\) Suppose \(\tau(\boldsymbol{x})=0,\) where \(\boldsymbol{x}=(x_{1},\ldots,x_{s})\in\mathcal{R}\setminus\{\boldsymbol{0}\}.\) Without loss of generality, assume that \(x_{1}\neq 0.\) Then there is \(y_{x_{1}}\in\mathcal{R}_{1}\setminus\{0\}\) such that \(\tau_{1}(x_{1}y_{x_{1}})\neq 0.\) If \(\boldsymbol{y}=(y_{x_{1}},0,\ldots,0)\in\mathcal{R},\) then
\[\tau(\boldsymbol{x}\boldsymbol{y})=\tau(x_{1}y_{x_{1}},0,\ldots,0)=\tau_{1}(x _{1}y_{x_{1}})\neq 0.\]
**Theorem 3.4**.: _Let \(\mathcal{R}\) be a finite-dimensional commutative \(\mathbb{F}\)-algebra generated by only one element. Then there is an \(\mathbb{F}\)-valued trace of \(\mathcal{R}.\)_
Proof.: Since \(\mathcal{R}\) is a finite-dimensional commutative \(\mathbb{F}\)-algebra generated by one element, \(\mathcal{R}\cong\mathbb{F}[x]/\langle h(x)\rangle\) for some \(h(x)\in\mathbb{F}[x]\setminus\{\boldsymbol{0}\}.\) Let \(h(x)=\prod_{i=1}^{s}h_{i}(x)^{r_{i}}\) be the irreducible factorization of \(h(x)\) in \(\mathbb{F}[x].\) Then by Chinese Remainder Theorem,
\[\mathcal{R}\cong\prod_{i=1}^{s}\mathcal{R}_{i}\]
where \(\mathcal{R}_{i}=\mathbb{F}[x]/\langle h_{i}(x)^{r_{i}}\rangle.\) By Theorem 3.1, for each \(i,\) there exists an \(\mathbb{F}\)-valued trace of \(\mathcal{R}_{i}\) and consequently, by Lemma 3.3 there is an \(\mathbb{F}\)-valued trace of \(\mathcal{R}.\)
However, if \(\mathcal{R}\) is a finite-dimensional commutative \(\mathbb{F}\)-algebra generated by two elements, then there may not exist any \(\mathbb{F}\)-valued trace of \(\mathcal{R}.\)
**Example 3.5**.: Consider the \(\mathbb{F}_{2}\)-algebra \(\mathcal{R}=\mathbb{F}_{2}[x,y]/\langle x^{2},y^{2},xy\rangle.\) Let \(u\) denote \(x+\langle x^{2},y^{2},xy\rangle\) and \(v\) denote \(y+\langle x^{2},y^{2},xy\rangle\). Then any \(\mathbb{F}_{2}\)-linear map \(\tau:\mathcal{R}\rightarrow\mathbb{F}_{2}\) is of the form:
\[\tau(a+bu+cv)=\alpha a+\beta b+\gamma c,\]
for some \(\alpha,\beta,\gamma\in\mathbb{F}_{2}\). If one of \(\beta\) or \(\gamma\) is non-zero, set \(\boldsymbol{s}=\gamma u+\beta v\in\mathcal{R}.\) Then
\[\tau\left(\boldsymbol{r}\boldsymbol{s}\right)=\tau\left(a\gamma u+a\beta v \right)=a\beta\gamma+a\beta\gamma=0,\forall\,\boldsymbol{r}=a+bu+cv\in \mathcal{R}.\]
If both \(\gamma\) and \(\beta\) are zero, then set \(\boldsymbol{s}=u\) and note that \(\tau(\boldsymbol{r}\boldsymbol{s})=0\) for all \(\boldsymbol{r}\in\mathcal{R}.\) Hence, for any non-zero \(\tau\), there is \(\boldsymbol{s}\in\mathcal{R}\setminus\{\boldsymbol{0}\}\) such that \(\tau(\boldsymbol{r}\boldsymbol{s})=0,\)\(\forall\,\boldsymbol{r}\in\mathcal{R},\) proving that there is no \(\mathbb{F}_{2}\)-valued trace of \(\mathcal{R}.\)
We note from the following example that writing elements of \(\mathcal{R}=\mathbb{F}[x]/\langle g(x)^{r}\rangle\) in a special form as described in Theorem 3.1 is crucial to determine an \(\mathbb{F}\)-valued trace of \(\mathcal{R}.\)
**Example 3.6**.: Let \(\mathcal{R}=\mathbb{F}_{2}[x]/\langle(1+x)^{2}\rangle\) and let \(u=x+\langle(1+x)^{2}\rangle.\) Then \(\sigma:\mathcal{R}\rightarrow\mathbb{F}_{2}\) given by \(\sigma(a+bu)=a+b\) for \(a,b\in\mathbb{F}_{2}\) is not an \(\mathbb{F}_{2}\)-valued trace of \(\mathcal{R}\) as \(\ker(\sigma)=\langle 1+u\rangle.\) But by Theorem 3.1, for \(a,b\in\mathbb{F}_{2},\) the map \(\tau:\mathcal{R}\rightarrow\mathbb{F}_{2}\) given by \(\tau(a+bu)=\tau(a+b+b(1+u))=a+b+b=a\) is an \(\mathbb{F}_{2}\)-valued trace of \(\mathcal{R}.\)
**Example 3.7**.: Consider the \(\mathbb{F}_{2}\)-algebra \(\mathcal{R}=\mathbb{F}_{2}[x]/\langle x^{3}-x\rangle\cong\mathbb{F}_{2}\times \mathbb{F}_{2}[x]/\langle(1+x)^{2}\rangle,\) where the isomorphism is given by:
\[a+bx+cx^{2}+\langle x^{3}-x\rangle\mapsto\left(a,a+b+c+b(1+x)+\langle(1+x)^{2} \rangle\right).\]
If \(u\) denotes \(x+\langle x^{3}-x\rangle,\) then
\[\tau:\mathcal{R} \to\mathbb{F}_{2}\] \[a+bu+cu^{2} \mapsto a+(a+b+c+b)=c\]
is an \(\mathbb{F}_{2}\)-valued trace of \(\mathcal{R}\) by Theorem 3.1 and Theorem 3.4.
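Because \(\mathcal{R}\) in Example 3.7 has only eight elements, the trace property can be verified exhaustively. The Python sketch below (written purely for illustration) encodes \(a+bu+cu^{2}\) as the tuple \((a,b,c)\), multiplies using \(u^{3}=u\), and checks that for every non-zero \(\boldsymbol{x}\in\mathcal{R}\) some \(\boldsymbol{y}\) satisfies \(\tau(\boldsymbol{x}\boldsymbol{y})\neq 0\), i.e., that \(\ker(\tau)\) contains no non-zero ideal.

```python
from itertools import product

def mul(p, q):
    """Multiply (a1 + b1*u + c1*u^2)(a2 + b2*u + c2*u^2) in F_2[x]/<x^3 - x>, using u^3 = u, u^4 = u^2."""
    a1, b1, c1 = p
    a2, b2, c2 = q
    return ((a1 * a2) % 2,
            (a1 * b2 + b1 * a2 + b1 * c2 + c1 * b2) % 2,   # coefficient of u
            (a1 * c2 + b1 * b2 + c1 * a2 + c1 * c2) % 2)   # coefficient of u^2

def tau(p):
    """tau(a + b*u + c*u^2) = c, as in Example 3.7."""
    return p[2]

R = list(product((0, 1), repeat=3))
for x in R:
    if x != (0, 0, 0):
        assert any(tau(mul(x, y)) == 1 for y in R), f"ker(tau) contains the ideal generated by {x}"
print("tau is an F_2-valued trace of F_2[x]/<x^3 - x>")
```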
**Theorem 3.8**.: _Let \(\mathcal{R}\) and \(\mathcal{S}\) be finite-dimensional commutative \(\mathbb{F}\)-algebras. If both \(\mathcal{R}\) and \(\mathcal{S}\) admit \(\mathbb{F}\)-valued trace, then so does \(\mathcal{R}\otimes_{\mathbb{F}}\mathcal{S}.\)_
Proof.: Let \(\tau_{\mathcal{R}}\) and \(\tau_{\mathcal{S}}\) be \(\mathbb{F}\)-valued trace maps of \(\mathcal{R}\) and \(\mathcal{S}\) respectively. Suppose \(\mathcal{B}=\{\boldsymbol{\alpha}_{1}<\boldsymbol{\alpha}_{2}<\cdots< \boldsymbol{\alpha}_{m}\}\) and \(\mathcal{B}^{{}^{\prime}}=\{\boldsymbol{\beta}_{1}<\boldsymbol{\beta}_{2} \cdots<\boldsymbol{\beta}_{n}\}\) are ordered bases for \(\mathcal{R}\) and \(\mathcal{S}\) respectively over \(\mathbb{F}\) so that \(\{\boldsymbol{\alpha}_{i}\otimes\boldsymbol{\beta}_{j}:1\leq i\leq m,1\leq j \leq n\}\) is a basis for \(\mathcal{R}\otimes_{\mathbb{F}}\mathcal{S}\) over \(\mathbb{F}.\)
Let \(\boldsymbol{x}=x_{1}\boldsymbol{\alpha}_{1}+\cdots+x_{m}\boldsymbol{\alpha}_{m}\in \mathcal{R}\) and let \([\boldsymbol{x}]_{\mathcal{B}}\) be the coordinate vector of \(\boldsymbol{x}\) relative to \(\mathcal{B}.\) We often write \([\boldsymbol{x}]_{\mathcal{B}}\) as \(\mathbf{x}=\begin{bmatrix}x_{1}&x_{2}&\cdots&x_{m}\end{bmatrix}^{t}.\) Define \(\mathfrak{m}_{\boldsymbol{\alpha}_{i}}:\mathcal{R}\to\mathcal{R}\) by \(\mathfrak{m}_{\boldsymbol{\alpha}_{i}}(\boldsymbol{x})=\boldsymbol{\alpha}_{i }\boldsymbol{x}.\) Then \(\mathfrak{m}_{\boldsymbol{\alpha}_{i}}\) is an \(\mathbb{F}\)-linear map. Denote the matrix of \(\mathfrak{m}_{\boldsymbol{\alpha}_{i}}\) relative to \(\mathcal{B}\) by \(A_{i}.\) Then \([\boldsymbol{\alpha}_{i}\boldsymbol{x}]_{\mathcal{B}}=A_{i}\mathbf{x}.\) Thus,
\[\tau_{\mathcal{R}}(\boldsymbol{x})=\sum_{i=1}^{m}x_{i}\tau_{ \mathcal{R}}(\boldsymbol{\alpha}_{i})=\mathbf{x}^{t}\boldsymbol{\tau}_{ \mathcal{R}},\text{where}\ \boldsymbol{\tau}_{\mathcal{R}}=\begin{bmatrix}\tau_{ \mathcal{R}}(\boldsymbol{\alpha}_{1})&\tau_{\mathcal{R}}(\boldsymbol{\alpha}_{ 2})&\cdots&\tau_{\mathcal{R}}(\boldsymbol{\alpha}_{m})\end{bmatrix}^{t}\]
and consequently, \(\tau_{\mathcal{R}}(\boldsymbol{\alpha}_{i}\boldsymbol{x})=\mathbf{x}^{t}A_{i} ^{t}\boldsymbol{\tau}_{\mathcal{R}}.\) Since \(\tau_{\mathcal{R}}\) is an \(\mathbb{F}\)-valued trace of \(\mathcal{R},\)
\[\tau_{\mathcal{R}}(\boldsymbol{\alpha}_{i}\boldsymbol{x})=0, \forall\,1\leq i\leq m\implies\mathbf{x}=\mathbf{0}. \tag{3.3}\]
Let \(\boldsymbol{y}=y_{1}\boldsymbol{\beta}_{1}+\cdots+y_{n}\boldsymbol{\beta}_{n} \in\mathcal{S}\) and \([\boldsymbol{y}]_{\mathcal{B}^{{}^{\prime}}}\) be the coordinate vector of \(\boldsymbol{y}\) relative to \(\mathcal{B}^{{}^{\prime}}.\) If \(B_{j}\) is the matrix of the \(\mathbb{F}\)-linear map \(\mathfrak{M}_{\boldsymbol{\beta}_{j}}:\mathcal{S}\to\mathcal{S}\) defined by \(\mathfrak{M}_{\boldsymbol{\beta}_{j}}(\boldsymbol{y})=\boldsymbol{\beta}_{j} \boldsymbol{y}\) relative to \(\mathcal{B}^{{}^{\prime}}\) and writing \([\boldsymbol{y}]_{\mathcal{B}^{{}^{\prime}}}\) as \(\mathbf{y}=\begin{bmatrix}y_{1}&y_{2}&\cdots&y_{n}\end{bmatrix}^{t},\) then by a similar argument as above, we obtain \(\tau_{\mathcal{S}}(\boldsymbol{\beta}_{j}\boldsymbol{y})=\mathbf{y}^{t}B_{j}^{t }\boldsymbol{\tau}_{\mathcal{S}},\) where \(\boldsymbol{\tau}_{\mathcal{S}}=\begin{bmatrix}\tau_{\mathcal{S}}(\boldsymbol{ \beta}_{1})&\tau_{\mathcal{S}}(\boldsymbol{\beta}_{2})&\cdots&\tau_{\mathcal{S}} (\boldsymbol{\beta}_{n})\end{bmatrix}^{t},\) and,
\[\tau_{\mathcal{S}}(\boldsymbol{\beta}_{j}\boldsymbol{y})=0, \forall\,1\leq j\leq n\implies\mathbf{y}=\mathbf{0}. \tag{3.4}\]
Define
\[T:\mathcal{R}\otimes_{\mathbb{F}}\mathcal{S} \to\mathbb{F}\] \[\boldsymbol{z}:=\sum_{i=1}^{m}\sum_{j=1}^{n}z_{i,j}\boldsymbol{ \alpha}_{i}\otimes\boldsymbol{\beta}_{j} \mapsto\sum_{i=1}^{m}\sum_{j=1}^{n}z_{i,j}\tau_{\mathcal{R}}( \boldsymbol{\alpha}_{i})\tau_{\mathcal{S}}(\boldsymbol{\beta}_{j}).\]
Then,
\[T(\mathbf{z}) =\sum_{j=1}^{n}\left(\sum_{i=1}^{m}z_{i,j}\tau_{\mathcal{R}}(\mathbf{ \alpha}_{i})\right)\tau_{\mathcal{S}}(\mathbf{\beta}_{j})\] \[=\sum_{j=1}^{n}\left(\mathbf{z}_{\cdot j}^{t}\mathbf{\tau}_{\mathcal{R }}\right)\tau_{\mathcal{S}}(\mathbf{\beta}_{j}) \text{where}\;\mathbf{z}_{\cdot j}=\begin{bmatrix}z_{1,j}&z_{2,j}& \cdots&z_{m,j}\end{bmatrix}^{t}\] \[=\sum_{j=1}^{n}a_{j}\tau_{\mathcal{S}}(\mathbf{\beta}_{j}) \text{where}\;a_{j}=\mathbf{z}_{\cdot j}^{t}\mathbf{\tau}_{\mathcal{R}}\] \[=\mathbf{a}^{t}\mathbf{\tau}_{\mathcal{S}} \text{where}\;\mathbf{a}=\begin{bmatrix}a_{1}&a_{2}&\cdots&a_{n} \end{bmatrix}^{t}\]
Thus, if \(\mathbf{z}=[z_{i,j}]_{m\times n}\), then
\[T(\mathbf{z})=\mathbf{\tau}_{\mathcal{R}}^{t}\mathbf{z}\mathbf{\tau}_{\mathcal{S}}\,. \tag{3.5}\]
Let \(\mathbf{e}_{k}^{(r)}=\begin{bmatrix}0&0&\cdots&1&\cdots&0\end{bmatrix}\) denote the row vector of size \(r\) whose \(k\)-th component is \(1\) and all other components are \(0\). Then \(\mathbf{e}_{k}^{(m)}A_{i}\) is the \(k\)th row of \(A_{i}\) and \(B_{j}\mathbf{e}_{k}^{(n)}{}^{t}\) is the \(k\)th column of \(B_{j}.\) Now, for \(1\leq r\leq m\),
\[\mathbf{\alpha}_{r}\mathbf{z} =\sum_{j=1}^{n}\left(\sum_{i=1}^{m}z_{i,j}\mathbf{\alpha}_{i}\mathbf{ \alpha}_{r}\right)\otimes\mathbf{\beta}_{j}\] \[=\sum_{j=1}^{n}\sum_{i=1}^{m}\mathbf{e}_{i}^{(m)}A_{r}\mathbf{z}_ {\cdot j}(\mathbf{\alpha}_{i}\otimes\mathbf{\beta}_{j})\]
Similarly, for \(1\leq s\leq n\), we have
\[\mathbf{\beta}_{s}\mathbf{z}=\sum_{j=1}^{n}\sum_{i=1}^{m}\mathbf{e}_{j}^{(n)}B_{s}\bm {z}_{i\cdot}^{t}(\mathbf{\alpha}_{i}\otimes\mathbf{\beta}_{j})\]
where \(\mathbf{z}_{i\cdot}=\begin{bmatrix}z_{i,1}&z_{i,2}&\cdots&z_{i,n}\end{bmatrix}\) and consequently
\[\mathbf{\beta}_{s}\mathbf{\alpha}_{r}\mathbf{z}=\sum_{j=1}^{n}\sum_{i=1}^{m}\mathbf{e}_{j }^{(n)}B_{s}\mathbf{w}_{i\cdot}^{t}(\mathbf{\alpha}_{i}\otimes\mathbf{\beta}_{j})\]
where
\[\mathbf{w}_{i\cdot}^{t} =\begin{bmatrix}\mathbf{e}_{i}^{(m)}A_{r}\mathbf{z}_{\cdot 1}& \mathbf{e}_{i}^{(m)}A_{r}\mathbf{z}_{\cdot 2}&\cdots&\mathbf{e}_{i}^{(m)}A_{r} \mathbf{z}_{\cdot n}\end{bmatrix}^{t}\] \[=\begin{pmatrix}\mathbf{e}_{i}^{(m)}A_{r}\mathbf{z}\end{pmatrix} ^{t}\] \[=\mathbf{z}^{t}A_{r}^{t}\mathbf{e}_{i}^{(m)}{}^{t}\]
Hence,
\[\mathbf{\beta}_{s}\mathbf{\alpha}_{r}\mathbf{z}=\sum_{j=1}^{n}\sum_{i=1}^{m}u_{i,j}(\mathbf{ \alpha}_{i}\otimes\mathbf{\beta}_{j})\]
where \(u_{i,j}=\mathbf{e}_{j}^{(n)}B_{s}\mathbf{z}^{t}A_{r}^{t}\mathbf{e}_{i}^{(m)^{t}}.\) Then by (3.5),
\[T(\boldsymbol{\beta}_{s}\boldsymbol{\alpha}_{r}\boldsymbol{z})=\boldsymbol{\tau }_{\mathcal{R}}^{t}\mathbf{u}\boldsymbol{\tau}_{\mathcal{S}},\,\text{where}\,\, \boldsymbol{u}=[u_{i,j}]_{m\times n}. \tag{3.6}\]
Observe, \(\mathbf{u}=A_{r}\mathbf{z}B_{s}^{t}.\) Hence, by (3.6)
\[T(\boldsymbol{\beta}_{s}\boldsymbol{\alpha}_{r}\boldsymbol{z})=\left(\mathbf{z }^{t}A_{r}^{t}\boldsymbol{\tau}_{\mathcal{R}}\right)^{t}B_{s}^{t}\boldsymbol{ \tau}_{\mathcal{S}}. \tag{3.7}\]
Suppose \(T(\boldsymbol{\beta}_{s}\boldsymbol{\alpha}_{r}\boldsymbol{z})=0,\forall\,1 \leq r\leq m\) and \(\forall\,1\leq s\leq n.\) Then
\[\left(\mathbf{z}^{t}A_{r}^{t}\boldsymbol{\tau}_{\mathcal{R}} \right)^{t}B_{s}^{t}\boldsymbol{\tau}_{\mathcal{S}}=0,\forall\,1\leq s\leq n.\] \[\implies\mathbf{z}^{t}A_{r}^{t}\boldsymbol{\tau}_{\mathcal{R}}= \mathbf{0},\forall\,1\leq r\leq m \text{(using (\ref{eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eqeq:eq:eqeq:eq:eq:eq
:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eq:eq:eq:eqeq:eq:eqeq:eq:eq:eq:eqeq:eq:eq:eq:eqeq:eq:eq:eqeq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eqeq:eq:eq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eq:eq:eqeq:eqeq:eqeq:eq:eqeq:eq:eq:eqeq:eqeq:eq:eq:eq:eqeq:eqeq:eq:eqeq:eq:eqeq:eqeq:eq:eqeq:eq:eqeq:eqeqeq:eqeq:eq:eqeq:eq:eqeq:eqeq:eqeq:eq:eq:eq:eqeq:eqeq:eqeq:eq:eqeq:eqeq:eqeqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeq:eqeq:eq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eq:eqeq:eqeqeq:eqeq:eqeq:eqeqeq:eqeq:eqeq:eqeqeq:eq:eqeq:eqeq:eqeq:eq:eqeq:eqeqeq:eqeq:eqeqeq:eqeq:eqeq:eqeq:eqeqeq:eqeq:eqeqeq:eq:eqeqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeqeq:eqeqeq:eqeq:eqeq:eqeqeq:eqeq:eqeq:eqeq:eqeqeq:eq:eq:eqeqeq:eqeq:eqeq:eqeq:eqeqeq:eqeqeq:eqeq:eqeq:eqeqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeq:eqeqeq:eqeq:eqeqeq:eqeq:eqeq:eqeq:eqeqeq:eq:eqeq:eqeq:eqeqeq:eqeqeq:eqeq:eqeq:eqeqeq:eqeq:eqeq:eqeqeq:eqeqeq:eqeq:eqeqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeqeq:eqeq:eqeqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeqeq:eqeqeq:eq:eqeqeq:eqeqeq:eq
**Corollary 3.11**.: Let \(\mathcal{R}_{1},\mathcal{R}_{2},\ldots,\mathcal{R}_{n}\) be finite-dimensional commutative \(\mathbb{F}\)-algebras. If there is an \(\mathbb{F}\)-valued trace of each \(\mathcal{R}_{i},\) then there is an \(\mathbb{F}\)-valued trace of \(\mathcal{R}_{1}\otimes_{\mathbb{F}}\mathcal{R}_{2}\otimes_{\mathbb{F}}\cdots \otimes_{\mathbb{F}}\mathcal{R}_{n}.\)
**Corollary 3.12**.: If \(\mathcal{R}=\mathbb{F}[x_{1},x_{2},\ldots,x_{n}]/\langle g_{1}(x_{1}),g_{2}(x_ {2}),\ldots,g_{n}(x_{n})\rangle,\) where \(g_{i}(x_{i})\in\mathbb{F}[x_{i}]\setminus\{\mathbf{0}\},\) then \(\mathcal{R}\) admits an \(\mathbb{F}\)-valued trace.
Proof.: By Theorem 2.10, \(\mathcal{R}\cong\mathbb{F}[x_{1}]/\langle g_{1}(x_{1})\rangle\otimes_{ \mathbb{F}}\mathbb{F}[x_{2}]/\langle g_{2}(x_{2})\rangle\otimes_{\mathbb{F}} \cdots\otimes_{\mathbb{F}}\mathbb{F}[x_{n}]/\langle g_{n}(x_{n})\rangle.\) By Theorem 3.4, there is an \(\mathbb{F}\)-valued trace of each \(\mathbb{F}[x_{i}]/\langle g_{i}(x_{i})\rangle,\) hence, by Corollary 3.11, there is an \(\mathbb{F}\)-valued trace of \(\mathcal{R}.\)
**Example 3.13**.: Let \(\mathcal{R}=\mathbb{F}[x,y]/\langle x^{2},y^{2}\rangle\) and set \(w=x+\langle x^{2}\rangle.\) Then \(\tau:\mathbb{F}[x]/\langle x^{2}\rangle\to\mathbb{F}\) defined by \(\tau(a+bw)=a+b\) is an \(\mathbb{F}\)-valued trace on \(\mathbb{F}[x]/\langle x^{2}\rangle\) by Theorem 3.1 and hence \(T:\mathcal{R}\to\mathbb{F}\) defined by \(T(a+bu+cv+duv)=a+b+c+d\) is an \(\mathbb{F}\)-valued trace on \(\mathcal{R}\) by Corollary 3.10, where \(u=x+\langle x^{2},y^{2}\rangle\) and \(v=y+\langle x^{2},y^{2}\rangle.\)
## 4. Applications
### Bases and dual basis
Let \(\mathcal{R}\) be a finite-dimensional commutative \(\mathbb{F}\)-algebra that admits an \(\mathbb{F}\)-valued trace \(\tau.\) For \(\boldsymbol{\beta}\in\mathcal{R},\) define
\[f_{\boldsymbol{\beta}}:\mathcal{R}\to\mathbb{F}\] \[f_{\boldsymbol{\beta}}(\boldsymbol{\alpha})=\tau(\boldsymbol{ \beta}\boldsymbol{\alpha}),\,\forall\,\boldsymbol{\alpha}\in\mathcal{R}.\]
By \(\mathbb{F}\)-linearity of \(\tau,\)\(f_{\boldsymbol{\beta}}\) is a linear functional on \(\mathcal{R}.\) Moreover, we have
**Theorem 4.1**.: _If \(\mathcal{R}\) is a finite-dimensional commutative \(\mathbb{F}\)-algebra that admits an \(\mathbb{F}\)-valued trace \(\tau,\) then for every functional \(f:\mathcal{R}\to\mathbb{F},\) there exists \(\boldsymbol{\beta}\in\mathcal{R}\) such that \(f=f_{\boldsymbol{\beta}}.\)_
Proof.: Let \(\boldsymbol{\beta},\boldsymbol{\gamma}\in\mathcal{R},\) with \(\boldsymbol{\beta}\neq\boldsymbol{\gamma}.\) Since \(\boldsymbol{\beta}-\boldsymbol{\gamma}\neq\mathbf{0},\) it follows from the definition of \(\tau\) that there is some \(\boldsymbol{\alpha}\in\mathcal{R}\) such that \(\tau((\boldsymbol{\beta}-\boldsymbol{\gamma})\boldsymbol{\alpha})\neq 0.\) Hence, \(f_{\boldsymbol{\beta}}(\boldsymbol{\alpha})-f_{\boldsymbol{\gamma}}(\boldsymbol{\alpha})=\tau((\boldsymbol{\beta}-\boldsymbol{\gamma})\boldsymbol{\alpha})\neq 0,\) and so the maps \(f_{\boldsymbol{\beta}}\) and \(f_{\boldsymbol{\gamma}}\) are different. Thus \(\boldsymbol{\beta}\mapsto f_{\boldsymbol{\beta}}\) is an injective \(\mathbb{F}\)-linear map from \(\mathcal{R}\) into the dual space of \(\mathcal{R}\); since both spaces have the same finite dimension over \(\mathbb{F},\) this map is also surjective, and hence every functional on \(\mathcal{R}\) equals \(f_{\boldsymbol{\beta}}\) for some \(\boldsymbol{\beta}\in\mathcal{R}.\)
Let \(\mathcal{R}\) be a commutative \(\mathbb{F}\)-algebra of dimension \(n\) that admits an \(\mathbb{F}\)-valued trace \(\tau.\) If \(\{\boldsymbol{\alpha}_{1},\ldots,\boldsymbol{\alpha}_{n}\}\) is a basis for \(\mathcal{R}\) over \(\mathbb{F},\) then the projection map \(\pi_{j}:\mathcal{R}\to\mathbb{F},\sum_{i=1}^{n}x_{i}\boldsymbol{\alpha}_{i}\mapsto x_{j}\) is an \(\mathbb{F}\)-linear map from \(\mathcal{R}\) to \(\mathbb{F},\) hence by Theorem 4.1, there is \(\boldsymbol{\beta}_{j}\in\mathcal{R}\) such that \(\pi_{j}(\boldsymbol{\alpha})=\tau(\boldsymbol{\beta}_{j}\boldsymbol{\alpha})\) for all \(\boldsymbol{\alpha}\in\mathcal{R}.\) Putting \(\boldsymbol{\alpha}=\boldsymbol{\alpha}_{i},1\leq i\leq n,\) we have
\[\tau(\boldsymbol{\beta}_{j}\boldsymbol{\alpha}_{i})=\begin{cases}0&\text{if }i \neq j\\ 1&\text{if }i=j\end{cases}\]
Thus, \(\{\boldsymbol{\beta}_{1},\ldots,\boldsymbol{\beta}_{n}\}\) is also a basis for \(\mathcal{R}\) over \(\mathbb{F}\): indeed, if \(\sum_{j=1}^{n}c_{j}\boldsymbol{\beta}_{j}=\boldsymbol{0}\) for some \(c_{j}\in\mathbb{F},\) then multiplying by \(\boldsymbol{\alpha}_{i}\) and applying \(\tau\) yields \(c_{i}=0\) for each \(i.\)
**Definition 4.2**.: Let \(\mathcal{R}\) be a finite-dimensional commutative \(\mathbb{F}\)-algebra that admits an \(\mathbb{F}\)-valued trace \(\tau.\) Then the two bases \(\{\boldsymbol{\alpha}_{1},\ldots,\boldsymbol{\alpha}_{n}\}\) and \(\{\boldsymbol{\beta}_{1},\ldots,\boldsymbol{\beta}_{n}\}\) for \(\mathcal{R}\) over \(\mathbb{F}\) are said to be \(\tau\)_-dual (or \(\tau\)- complementary) bases_ if \(\tau(\boldsymbol{\alpha}_{i}\boldsymbol{\beta}_{j})=\delta_{ij},\) where \(\delta_{ij}\) denotes the Kronecker symbol.
From the above discussion, we have the following proposition.
**Proposition 4.3**.: Let \(\mathcal{R}\) be a finite-dimensional commutative \(\mathbb{F}\)-algebra. If there is an \(\mathbb{F}\)-valued trace \(\tau\) of \(\mathcal{R},\) then for any basis for \(\mathcal{R}\) over \(\mathbb{F},\) there exists a unique \(\tau\)-dual basis.
**Example 4.4**.: Let \(\mathcal{R}=\mathbb{F}[x_{1},x_{2},\ldots,x_{n}]/\langle g_{1}(x_{1}),g_{2}(x_{2}), \ldots,g_{n}(x_{n})\rangle,\) where \(g_{i}(x_{i})\in\mathbb{F}[x_{i}].\) Then by Corollary 3.12, there is an \(\mathbb{F}\)-valued trace \(\tau\) of \(\mathcal{R}.\) Hence, for any basis for \(\mathcal{R},\) there is a \(\tau\)-dual basis.
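As a concrete instance of Proposition 4.3, the sketch below (a small illustration using the same tuple encoding of \(\mathbb{F}_{2}[x]/\langle x^{3}-x\rangle\) as in the sketch after Example 3.7) finds the \(\tau\)-dual basis of \(\{1,u,u^{2}\}\) for the trace \(\tau(a+bu+cu^{2})=c\) by exhaustive search; it turns out to be \(\{1+u^{2},\,u,\,1\}\).

```python
from itertools import product

def mul(p, q):
    """Multiplication in F_2[x]/<x^3 - x>; elements (a, b, c) stand for a + b*u + c*u^2."""
    a1, b1, c1 = p
    a2, b2, c2 = q
    return ((a1 * a2) % 2,
            (a1 * b2 + b1 * a2 + b1 * c2 + c1 * b2) % 2,
            (a1 * c2 + b1 * b2 + c1 * a2 + c1 * c2) % 2)

def tau(p):
    """The F_2-valued trace of Example 3.7."""
    return p[2]

R = list(product((0, 1), repeat=3))
alphas = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]     # the basis {1, u, u^2}

betas = []
for j in range(3):
    # beta_j must satisfy tau(alpha_i * beta_j) = delta_ij for all i.
    matches = [b for b in R
               if all(tau(mul(alphas[i], b)) == (1 if i == j else 0) for i in range(3))]
    assert len(matches) == 1                   # uniqueness, as in Proposition 4.3
    betas.append(matches[0])
print("tau-dual basis of {1, u, u^2}:", betas)  # -> [(1, 0, 1), (0, 1, 0), (1, 0, 0)]
```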
**Definition 4.5**.: Let \(\mathcal{R}\) be a commutative \(\mathbb{F}\)-algebra of dimension \(n\) that admits an \(\mathbb{F}\)-valued trace \(\tau.\) Then the discriminant \(\Delta_{\tau}(\boldsymbol{\alpha}_{1},\ldots,\boldsymbol{\alpha}_{n})\) of the elements \(\boldsymbol{\alpha}_{1},\ldots,\boldsymbol{\alpha}_{n}\in\mathcal{R}\) is defined by the determinant of order \(n\) given by
\[\Delta_{\tau}(\boldsymbol{\alpha}_{1},\ldots,\boldsymbol{\alpha}_{n})=\begin{vmatrix}\tau(\boldsymbol{\alpha}_{1}\boldsymbol{\alpha}_{1})&\tau(\boldsymbol{\alpha}_{1}\boldsymbol{\alpha}_{2})&\cdots&\tau(\boldsymbol{\alpha}_{1}\boldsymbol{\alpha}_{n})\\ \tau(\boldsymbol{\alpha}_{2}\boldsymbol{\alpha}_{1})&\tau(\boldsymbol{\alpha}_{2}\boldsymbol{\alpha}_{2})&\cdots&\tau(\boldsymbol{\alpha}_{2}\boldsymbol{\alpha}_{n})\\ \vdots&\vdots&\ddots&\vdots\\ \tau(\boldsymbol{\alpha}_{n}\boldsymbol{\alpha}_{1})&\tau(\boldsymbol{\alpha}_{n}\boldsymbol{\alpha}_{2})&\cdots&\tau(\boldsymbol{\alpha}_{n}\boldsymbol{\alpha}_{n})\end{vmatrix}\]
Note that \(\Delta_{\tau}(\boldsymbol{\alpha}_{1},\ldots,\boldsymbol{\alpha}_{n})\in \mathbb{F}.\)
A simple characterization of bases for \(\mathcal{R}\) which admits an \(\mathbb{F}\)-valued trace can be given as follows:
**Theorem 4.6**.: _Let \(\mathcal{R}\) be a commutative \(\mathbb{F}\)-algebra of dimension \(n\) that admits an \(\mathbb{F}\)-valued trace \(\tau.\) Then \(\{\boldsymbol{\alpha}_{1},\ldots,\boldsymbol{\alpha}_{n}\}\) is a basis for \(\mathcal{R}\) over \(\mathbb{F}\) iff \(\Delta_{\tau}(\boldsymbol{\alpha}_{1},\ldots,\boldsymbol{\alpha}_{n})\neq 0.\)_
Proof.: Let \(\{\boldsymbol{\alpha}_{1},\ldots,\boldsymbol{\alpha}_{n}\}\) be a basis for \(\mathcal{R}\) over \(\mathbb{F}.\) We show that the rows of the determinant \(\Delta_{\tau}(\boldsymbol{\alpha}_{1},\ldots,\boldsymbol{\alpha}_{n})\) are linearly independent. Suppose that \(c_{1}\tau(\boldsymbol{\alpha}_{1}\boldsymbol{\alpha}_{j})+\cdots+c_{n}\tau(\boldsymbol{\alpha}_{n}\boldsymbol{\alpha}_{j})=0\) for \(1\leq j\leq n,\) where \(c_{i}\in\mathbb{F}.\) If \(\boldsymbol{\beta}=c_{1}\boldsymbol{\alpha}_{1}+\cdots+c_{n}\boldsymbol{\alpha}_{n},\) then \(\tau(\boldsymbol{\beta}\boldsymbol{\alpha}_{j})=0,\) for \(1\leq j\leq n.\) However, this implies \(\boldsymbol{\beta}=\boldsymbol{0}\), since \(\{\boldsymbol{\alpha}_{1},\ldots,\boldsymbol{\alpha}_{n}\}\) spans \(\mathcal{R}\) and \(\tau\) is an \(\mathbb{F}\)-valued trace of \(\mathcal{R}.\) Hence, \(c_{1}=\cdots=c_{n}=0,\) which ensures that \(\Delta_{\tau}(\boldsymbol{\alpha}_{1},\ldots,\boldsymbol{\alpha}_{n})\neq 0.\)
Conversely, suppose that \(\Delta_{\tau}(\boldsymbol{\alpha}_{1},\ldots,\boldsymbol{\alpha}_{n})\neq 0.\) Let \(c_{1}\boldsymbol{\alpha}_{1}+\cdots+c_{n}\boldsymbol{\alpha}_{n}=\boldsymbol{0},\) for some \(c_{i}\in\mathbb{F}.\) Multiplying by \(\boldsymbol{\alpha}_{j}\) and applying \(\tau,\) we obtain
\[c_{1}\tau(\boldsymbol{\alpha}_{1}\boldsymbol{\alpha}_{j})+\cdots+c_{n}\tau( \boldsymbol{\alpha}_{n}\boldsymbol{\alpha}_{j})=0\text{ for}\,1\leq j\leq n.\]
Since \(\Delta_{\tau}(\boldsymbol{\alpha}_{1},\ldots,\boldsymbol{\alpha}_{n})\neq 0,\) it follows that \(c_{1}=\cdots=c_{n}=0.\)
### Applications to Algebraic coding theory
Let \(R\) be a finite commutative ring. Then \(R^{n}\) is a free \(R\)-module of rank \(n.\) Any non-empty subset \(\mathcal{C}\) of \(R^{n}\) is called a _code_ of length \(n\) over \(R\). If, in addition, \(\mathcal{C}\) is a \(R\)-submodule of \(R^{n},\) then the code \(\mathcal{C}\) is called a _linear code_. One may refer to [5] and [10] for more on codes over rings and fields.
Let \(\mathcal{R}_{q}\) be a finite-dimensional commutative \(\mathbb{F}_{q}\)-algebra that admits an \(\mathbb{F}_{q}\)-valued trace \(\tau.\) Then the trace map \(\tau\) can be used to go down from a code defined over \(\mathcal{R}_{q}\) to a code over \(\mathbb{F}_{q}\) as follows:
**Definition 4.7** (\(\tau\)-trace code).: For a linear code \(\mathcal{C}\) of length \(n\) over \(\mathcal{R}_{q},\) define the \(\tau\)-trace code of \(\mathcal{C}\) by
\[\tau\left(\mathcal{C}\right):=\left\{(\tau(c_{1}),\ldots,\tau(c_{n})):(c_{1}, \ldots,c_{n})\in\mathcal{C}\right\},\]
which is a linear code of length \(n\) over \(\mathbb{F}_{q}.\)
If \(\varphi:\mathbb{F}_{q}\hookrightarrow\mathcal{R}_{q}\) defines the \(\mathbb{F}_{q}\)-algebra structure of \(\mathcal{R}_{q},\) then we identify \(\mathbb{F}_{q}\) with \(\varphi(\mathbb{F}_{q}).\)
Another way to go down from a code defined over \(\mathcal{R}_{q}\) to a code over \(\mathbb{F}_{q}\) is the following:
**Definition 4.8** (Subfield subcode).: Let \(\mathcal{C}\) be a linear code of length \(n\) over \(\mathcal{R}_{q}.\) The code \(\mathcal{C}|_{\mathbb{F}_{q}}:=\mathcal{C}\cap\mathbb{F}_{q}^{n},\) is called the subfield subcode.
If \(\mathrm{Tr}\) is the usual trace map from \(\mathbb{F}_{q^{m}}\) to \(\mathbb{F}_{q},\) then there is a nice relationship between the trace and subfield subcodes. Interestingly, the same relationship holds between \(\tau\)-trace code and subfield subcodes.
**Theorem 4.9**.: _Let \(\tau\) be an \(\mathbb{F}_{q}\)-valued trace of \(\mathcal{R}_{q}.\) Suppose \(\mathcal{C}\) is a linear code of length \(n\) over \(\mathcal{R}_{q},\) then_
\[\tau\left(\mathcal{C}^{\perp}\right)=\left(\mathcal{C}|_{\mathbb{F}_{q}} \right)^{\perp}\]
_where \(\mathcal{C}^{\perp}=\{\mathbf{x}=(x_{1},\ldots,x_{n})\in\mathcal{R}_{q}^{n}:\mathbf{x }\cdot\mathbf{y}=\sum_{i=1}^{n}x_{i}y_{i}=0,\)\(\forall\,\mathbf{y}=(y_{1},\ldots,y_{n})\in\mathcal{C}\}.\)_
Proof.: Let \(\mathbf{c}=(c_{1},\ldots,c_{n})\in\mathcal{C}|_{\mathbb{F}_{q}}\) and \(\mathbf{a}=(a_{1},\ldots,a_{n})\in\mathcal{C}^{\perp}\). Then
\[\mathbf{c}\cdot\tau(\mathbf{a})=\sum_{i=1}^{n}c_{i}\tau(a_{i})=\tau\left(\sum_{i=1}^{n }c_{i}a_{i}\right)=\tau(\mathbf{c}\cdot\mathbf{a})=0,\]
showing that \(\tau\left(\mathcal{C}^{\perp}\right)\subseteq\left(\mathcal{C}|_{\mathbb{F}_{ q}}\right)^{\perp}.\) Next, we show that \(\tau\left(\mathcal{C}^{\perp}\right)\supseteq\left(\mathcal{C}|_{\mathbb{F}_{q}} \right)^{\perp}.\) This assertion is equivalent to
\[\left(\tau\left(\mathcal{C}^{\perp}\right)\right)^{\perp}\subseteq\mathcal{C} |_{\mathbb{F}_{q}}.\]
Suppose the above relationship does not hold. Then there exists some \(\mathbf{u}\in\left(\tau\left(\mathcal{C}^{\perp}\right)\right)^{\perp}\setminus\mathcal{C}|_{\mathbb{F}_{q}}\); since \(\mathbf{u}\in\mathbb{F}_{q}^{n}\) but \(\mathbf{u}\notin\mathcal{C}|_{\mathbb{F}_{q}},\) we have \(\mathbf{u}\notin\mathcal{C},\) and hence there is some \(\mathbf{v}\in\mathcal{C}^{\perp}\) with \(\mathbf{u}\cdot\mathbf{v}\neq 0\). As \(\tau\) is an \(\mathbb{F}_{q}\)-valued trace of \(\mathcal{R}_{q},\) there is an element \(\alpha\in\mathcal{R}_{q}\) such that \(\tau(\alpha(\mathbf{u}\cdot\mathbf{v}))\neq 0\). Hence,
\[\mathbf{u}\cdot\tau(\alpha\mathbf{v})=\tau(\mathbf{u}\cdot\alpha\mathbf{v})=\tau(\alpha(\mathbf{u }\cdot\mathbf{v}))\neq 0.\]
But, on the other hand, we have \(\mathbf{u}\cdot\tau(\alpha\mathbf{v})=0\) because \(\mathbf{u}\in\left(\tau\left(\mathcal{C}^{\perp}\right)\right)^{\perp}\) and \(\alpha\mathbf{v}\in C^{\perp}.\) The desired result follows from this contradiction.
An \(\mathbb{F}_{q}\)-valued trace \(\tau\) of \(\mathcal{R}_{q}\) can also be used as a tool to construct linear codes over \(\mathbb{F}_{q}\) as follows:
Let \(D=\{\{\mathbf{d}_{1}<\mathbf{d}_{2}<...<\mathbf{d}_{n}\}\}\) be an ordered multiset, where each \(\mathbf{d}_{i}\in\mathcal{R}_{q}.\) Define
\[\mathcal{C}_{D}:=\{(\tau(\mathbf{x}\mathbf{d}_{1}),\tau(\mathbf{x}\mathbf{d}_{2}),\ldots,\tau( \mathbf{x}\mathbf{d}_{n})):\mathbf{x}\in\mathcal{R}_{q}\}\]
Then \(\mathcal{C}_{D}\) is a linear code of length \(n\) over \(\mathbb{F}_{q}\) and we call \(D\) the _defining sequence_ of the code \(\mathcal{C}_{D}.\)
**Example 4.10**.: Consider \(\mathcal{R}_{2}=\mathbb{F}_{2}[x]/\langle x^{3}-x\rangle\) and let \(u=x+\langle x^{3}-x\rangle.\) Then by Example 3.7, \(\tau(a+bu+cu^{2})=c\) is an \(\mathbb{F}_{2}\)-valued trace of \(\mathcal{R}_{2}.\) Let \(D=\{\{1<u<1+u<1+u^{2}<u+u^{2}<1+u+u^{2}\}\}.\) Then \(\mathcal{C}_{D}\) is a binary \([6,3,3]\)-_quasicyclic_ linear code of degree \(2.\)
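To make the construction concrete, the following short Python sketch recomputes Example 4.10: elements of \(\mathcal{R}_{2}\) are stored as coefficient triples \((a,b,c)\) of \(1,u,u^{2}\) (this bookkeeping and the brute-force enumeration are ours, not part of the original text), and the script confirms the \([6,3,3]\) parameters of \(\mathcal{C}_{D}\).

```python
from itertools import product

# elements of R_2 = F_2[x]/(x^3 - x) as coefficient triples (a, b, c) of 1, u, u^2
def mul(p, q):
    """Multiply in R_2, reducing with u^3 = u and u^4 = u^2, coefficients mod 2."""
    a, b, c = p
    d, e, f = q
    return (a * d % 2,
            (a * e + b * d + b * f + c * e) % 2,   # u and reduced u^3 terms
            (a * f + b * e + c * d + c * f) % 2)   # u^2 and reduced u^4 terms

def tau(p):
    return p[2]          # tau(a + b*u + c*u^2) = c, as in Example 3.7

# defining sequence D = {1, u, 1+u, 1+u^2, u+u^2, 1+u+u^2}
D = [(1, 0, 0), (0, 1, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]

# C_D = {(tau(x d_1), ..., tau(x d_6)) : x in R_2}
C_D = {tuple(tau(mul(x, d)) for d in D) for x in product((0, 1), repeat=3)}

nonzero_weights = [sum(w) for w in C_D if any(w)]
print(len(C_D), min(nonzero_weights))   # prints "8 3": 2^3 codewords, minimum weight 3
```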
#### 4.2.1. Construction of subfield codes
An \(\mathbb{F}_{q}\)-valued trace \(\tau\) of \(\mathcal{R}_{q}\) is useful in the computation of subfield codes.
Suppose that \(\mathcal{C}\) is a code of length \(n\) over \(\mathcal{R}_{q}\) generated by the (full rank) matrix \(G\) and let \(\mathcal{B}\) be a basis for \(\mathcal{R}_{q}\) over \(\mathbb{F}_{q}.\) The code \(\mathcal{C}^{(q)}\) over \(\mathbb{F}_{q}\) generated by the matrix obtained by replacing each entry of \(G\) with its column representation relative to \(\mathcal{B}\) is called the _subfield code_ of \(\mathcal{C}\)[1]. In fact, \(\mathcal{C}^{(q)}\) is independent of the choice of \(\mathcal{B}.\)
Let \(\{\boldsymbol{\alpha}_{1},\ldots,\boldsymbol{\alpha}_{m}\}\) be a basis for \(\mathcal{R}_{q}\) over \(\mathbb{F}_{q}\) and let \(\{\boldsymbol{\beta}_{1},\ldots,\boldsymbol{\beta}_{m}\}\) be its \(\tau\)-dual basis. If \(\boldsymbol{r}=\sum_{i=1}^{m}r_{i}\boldsymbol{\alpha}_{i}\in\mathcal{R}_{q},\) with \(r_{i}\in\mathbb{F}_{q},\) then
\[r_{i}=\tau(\boldsymbol{r}\boldsymbol{\beta}_{i}).\]
With the above discussion, Theorem 2.4 of [1] generalizes to \(\mathcal{R}_{q}\) as follows.
**Theorem 4.11**.: _Let \(\tau:\mathcal{R}_{q}\to\mathbb{F}_{q}\) be an \(\mathbb{F}_{q}\)-valued trace of \(\mathcal{R}_{q}\) and let \(\mathcal{B}=\{\boldsymbol{\alpha}_{1},\ldots,\boldsymbol{\alpha}_{m}\}\) be a basis for \(\mathcal{R}_{q}\) over \(\mathbb{F}_{q}.\) Suppose that \(\mathcal{C}\) is a linear code of length \(n\) over \(\mathcal{R}_{q}\) generated by:_
\[G=\begin{bmatrix}\boldsymbol{g}_{11}&\boldsymbol{g}_{12}&\cdots&\boldsymbol{ g}_{1n}\\ \boldsymbol{g}_{21}&\boldsymbol{g}_{22}&\cdots&\boldsymbol{g}_{2n}\\ \vdots&\vdots&\vdots&\vdots\\ \boldsymbol{g}_{k1}&\boldsymbol{g}_{k2}&\cdots&\boldsymbol{g}_{kn}\end{bmatrix}\]
_Then the subfield code \(\mathcal{C}^{(q)}\) of \(\mathcal{C}\) is generated by_
\[G^{(q)}=\begin{bmatrix}G_{1}^{(q)}\\ G_{2}^{(q)}\\ \vdots\\ G_{k}^{(q)}\end{bmatrix}\]
_where for \(1\leq i\leq k,\)_
\[G_{i}^{(q)}=\begin{bmatrix}\tau(\boldsymbol{g}_{i1}\boldsymbol{\alpha}_{1})& \tau(\boldsymbol{g}_{i2}\boldsymbol{\alpha}_{1})&\ldots&\tau(\boldsymbol{g}_ {in}\boldsymbol{\alpha}_{1})\\ \tau(\boldsymbol{g}_{i1}\boldsymbol{\alpha}_{2})&\tau(\boldsymbol{g}_{i2} \boldsymbol{\alpha}_{2})&\ldots&\tau(\boldsymbol{g}_{in}\boldsymbol{\alpha}_{ 2})\\ \vdots&\vdots&\vdots&\vdots\\ \tau(\boldsymbol{g}_{i1}\boldsymbol{\alpha}_{m})&\tau(\boldsymbol{g}_{i2} \boldsymbol{\alpha}_{m})&\ldots&\tau(\boldsymbol{g}_{in}\boldsymbol{\alpha}_{ m})\end{bmatrix}.\]
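As an illustration of Theorem 4.11, the sketch below builds \(G^{(q)}\) for the algebra \(\mathcal{R}_{2}\) of Example 4.10 with the basis \(\{1,u,u^{2}\}\); the \(1\times 3\) generator matrix \(G\) used here is an arbitrary choice made only for the example and does not come from the paper.

```python
# multiplication and trace in R_2 = F_2[x]/(x^3 - x), elements as triples (a, b, c)
def mul(p, q):
    a, b, c = p
    d, e, f = q
    return (a * d % 2,
            (a * e + b * d + b * f + c * e) % 2,
            (a * f + b * e + c * d + c * f) % 2)

def tau(p):
    return p[2]

basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]          # {1, u, u^2}
G = [[(1, 1, 0), (0, 0, 1), (1, 0, 0)]]            # hypothetical G = [1+u, u^2, 1]

# each row g_i of G contributes an m x n block with rows [tau(g_ij * alpha_l)]_j
G_q = [[tau(mul(g, alpha)) for g in row] for row in G for alpha in basis]

for r in G_q:
    print(r)
```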
## 5. Conclusion and Discussion
In this manuscript, we studied the \(\mathbb{F}\)-valued trace of a finite-dimensional commutative \(\mathbb{F}\)-algebra. Given a finite-dimensional commutative \(\mathbb{F}\)-algebra, an \(\mathbb{F}\)-valued trace may not exist; however, we proved that a finite-dimensional commutative \(\mathbb{F}\)-algebra of the form \(\mathcal{R}=\mathbb{F}[x_{1},x_{2},\ldots,x_{n}]/\langle g_{1}(x_{1}),g_{2}(x_ {2}),\ldots,g_{n}(x_{n})\rangle,\)\(g_{i}(x_{i})\in\mathbb{F}[x_{i}],\) possesses an \(\mathbb{F}\)-valued trace, and we constructed such a map on \(\mathcal{R}\). We showed that an \(\mathbb{F}\)-valued trace on a finite-dimensional commutative \(\mathbb{F}\)-algebra induces a non-degenerate bilinear form on \(\mathcal{R}\) and hence determines all linear transformations from \(\mathcal{R}\) to \(\mathbb{F}\), and it is helpful in the characterisation of bases for \(\mathcal{R}.\) In the field of algebraic coding theory, we presented how an \(\mathbb{F}_{q}\)-valued trace can be used as a tool to descend from codes over an \(\mathbb{F}_{q}\)-algebra to codes over \(\mathbb{F}_{q}.\)
A non-zero \(\mathbb{F}\)-valued \(\mathbb{F}\)-linear map on a non-commutative \(\mathbb{F}\)-algebra is called an \(\mathbb{F}\)-valued trace if its kernel contains no non-zero left ideals. Let \(\mathcal{R}=M_{n}(\mathbb{F}),\) the ring of all \(n\times n\) matrices over \(\mathbb{F}.\) Then \(\mathcal{R}\) is a non-commutative \(\mathbb{F}\)-algebra for \(n\geq 2\). Define \(\tau:\mathcal{R}\to\mathbb{F}\) by \(A=[a_{ij}]\mapsto\sum_{i=1}^{n}a_{ii},\) the usual trace of \(A.\) Let \(\mathcal{I}\) be any non-zero left
ideal of \(\mathcal{R}\) and suppose that \(B=(b_{ij})\in\mathcal{I}\) is such that \(b_{ij}\neq 0\) for some \(1\leq i,j\leq n\). If \(E_{ij}\in\mathcal{R}\) denotes the matrix whose \((i,j)\)th entry is \(1\) and whose other entries are all \(0\), then \(E_{jj}\sum_{k=1}^{n}E_{ki}B\in\mathcal{I}\) and \(\tau(E_{jj}\sum_{k=1}^{n}E_{ki}B)=b_{ij}\neq 0\), and consequently, \(\mathcal{I}\nsubseteq\ker(\tau).\) Hence, \(\ker(\tau)\) contains no non-zero left ideals of \(\mathcal{R}\), proving that \(\tau\) is an \(\mathbb{F}\)-valued trace of \(\mathcal{R}.\)
The non-commutative \(\mathbb{F}_{2}\)-algebra \(\,\mathcal{U}=\left\{A=\begin{bmatrix}a&b\\ 0&c\end{bmatrix}:a,b,c\in\mathbb{F}_{2}\right\}\) does not admit any \(\mathbb{F}_{2}\)-valued trace map.
Consider the non-commutative non-unital ring \(E=\langle a,b\,|\,2a=2b=0,a^{2}=a,b^{2}=b,ab=b,ba=a\rangle.\) Note that \(E\) has no \(\mathbb{F}_{2}\)-algebra structure and the underlying set of \(E\) is \(\{0,a,b,c=a+b\}.\) Consider the following action of \(\mathbb{F}_{2}\) on \(E\): \(0e=e0=0\) and \(1e=e1=e\) for all \(e\in E\). Then every element of \(E\) can be expressed as \(as+ct\) for \(s,t\in\mathbb{F}_{2}.\) The only non-zero projections of \(E\) onto \(\mathbb{F}_{2}\) are \(\tau_{i}:E\to\mathbb{F}_{2},i=1,2,3,\) where
\[\tau_{1}(as+ct)=s,\qquad\tau_{2}(as+ct)=t,\qquad\tau_{3}(as+ct)=s+t.\]
It is not difficult to verify that \(\ker(\tau_{i})\) contains non-zero left ideals of \(E\), for \(i=1,2,3\).
### Conflict of Interest
Both authors declare that they have no conflict of interest.
## Acknowledgements
The work of the first author was supported by Council of Scientific and Industrial Research (CSIR) India, under the grant no. 09/0086(13310)/2022-EMR-I.
|
2309.08853 | Computational Enhancement for Day-Ahead Energy Scheduling with Sparse
Neural Network-based Battery Degradation Model | Battery energy storage systems (BESS) play a pivotal role in future power
systems as they contribute to achieving the net-zero carbon emission
objectives. The BESS systems, predominantly employing lithium-ion batteries,
have been extensively deployed. The degradation of these batteries
significantly affects system efficiency. Deep neural networks can accurately
quantify the battery degradation, however, the model complexity hinders their
applications in energy scheduling for various power systems at different
scales. To address this issue, this paper presents a novel approach,
introducing a linearized sparse neural network-based battery degradation model
(SNNBD), specifically tailored to quantify battery degradation based on the
scheduled BESS daily operational profiles. By leveraging sparse neural
networks, this approach achieves accurate degradation prediction while
substantially reducing the complexity associated with a dense neural network
model. The computational burden of integrating battery degradation into
day-ahead energy scheduling is thus substantially alleviated. Case studies,
conducted on both microgrids and bulk power grids, demonstrated the efficiency
and suitability of the proposed SNNBD-integrated scheduling model that can
effectively address battery degradation concerns while optimizing day-ahead
energy scheduling operations. | Cunzhi Zhao, Xingpeng Li | 2023-09-16T03:11:05Z | http://arxiv.org/abs/2309.08853v1 | Computational Enhancement for Day-Ahead Energy Scheduling with Sparse Neural Network-based Battery Degradation Model
###### Abstract
Battery energy storage systems (BESS) play a pivotal role in future power systems as they contribute to achieving the net-zero carbon emission objectives. The BESS systems, predominantly employing lithium-ion batteries, have been extensively deployed. The degradation of these batteries significantly affects system efficiency. Deep neural networks can accurately quantify battery degradation; however, the model complexity hinders their applications in energy scheduling for various power systems at different scales. To address this issue, this paper presents a novel approach, introducing a linearized sparse neural network-based battery degradation model (SNNBD), specifically tailored to quantify battery degradation based on the scheduled BESS daily operational profiles. By leveraging sparse neural networks, this approach achieves accurate degradation prediction while substantially reducing the complexity associated with a dense neural network model. The computational burden of integrating battery degradation into day-ahead energy scheduling is thus substantially alleviated. Case studies, conducted on both microgrids and bulk power grids, demonstrate the efficiency and suitability of the proposed SNNBD-integrated scheduling model that can effectively address battery degradation concerns while optimizing day-ahead energy scheduling operations.
Battery degradation modeling, Bulk power grids, Day-ahead scheduling, Energy management, Machine learning, Microgrids, Optimization, Sparse neural network.
## Nomenclature
_Indices:_
\(g\): Generator index.
\(s\): Battery energy storage system index.
\(k\): Transmission line index.
\(l\): Load index.
\(wt\): Wind turbine index.
\(pv\): Photovoltaic index.
_Sets:_
\(T\): Set of time intervals.
\(G\): Set of controllable micro generators.
\(S\): Set of energy storage systems.
\(WT\): Set of wind turbines.
\(PV\): Set of PV systems.
_Parameters:_
\(c_{g}\): Linear cost for controllable unit \(g\).
\(c_{g}^{NL}\): No load cost for controllable unit \(g\).
\(c_{g}^{SU}\): Start-up cost for controllable unit \(g\).
\(\Delta T\): Length of a single dispatch interval.
\(R_{prcnt}\): Ratio of the backup power to the total power.
\(E_{s}^{Max}\): Maximum energy capacity of ESS \(s\).
\(E_{s}^{min}\): Minimum energy capacity of ESS \(s\).
\(c_{t}^{Buy}\): Wholesale electricity purchase price in time interval \(t\).
\(c_{t}^{Sell}\): Wholesale electricity sell price in time interval \(t\).
\(P_{g}^{Max}\): Maximum capacity of generator \(g\).
\(P_{g}^{Min}\): Minimum capacity of generator \(g\).
\(P_{k}^{Max}\): Maximum thermal limit of transmission line \(k\).
\(b_{k}\): Susceptance (inverse of reactance) of branch \(k\).
\(P_{tie}^{Max}\): Maximum thermal limit of the tie-line between the main grid and the microgrid.
\(P_{g}^{Ramp}\): Ramping limit of diesel generator \(g\).
\(P_{s}^{Max}\): Maximum charge/discharge power of BESS \(s\).
\(P_{s}^{Min}\): Minimum charge/discharge power of BESS \(s\).
\(\eta_{s}^{Disc}\): Discharge efficiency of BESS \(s\).
\(\eta_{s}^{char}\): Charge efficiency of BESS \(s\).
_Variables:_
\(U_{t}^{Buy}\): Status of buying power from main grid in time interval \(t\).
\(U_{t}^{Sell}\): Status of selling power to main grid status in time \(t\).
\(U_{s,t}^{Char}\): Charging status of energy storage system \(s\) in time interval \(t\). It is 1 if charging; otherwise 0.
\(U_{s,t}^{Disc}\): Discharging status of energy storage system \(s\) in time interval \(t\). It is 1 if discharging; otherwise 0.
\(U_{g,t}\): Status of generator \(g\) in time interval \(t\). It is 1 if on; otherwise 0.
\(V_{g,t}\): Startup indicator of generator \(g\) in time interval \(t\). It is 1 if unit \(g\) starts up; otherwise 0.
\(P_{g,t}\): Output of generator \(g\) in time interval \(t\).
\(\theta_{n(k)}^{t}\): Phase angle of sending bus \(n\) of branch \(k\).
\(\theta_{m(k)}^{t}\): Phase angle of receiving bus \(m\) of branch \(k\).
\(P_{k,t}\): Line flow at transmission line \(k\) at time period \(t\).
\(P_{t}^{Buy}\): Amount of power purchased from the main grid in time interval \(t\).
\(P_{t}^{Sell}\): Amount of power sold to the main grid in time interval \(t\).
\(P_{l,t}\): Demand of load \(l\) in time interval \(t\).
\(P_{s,t}^{Disc}\): Discharging power of energy storage system \(s\) at time \(t\).
\(P_{s,t}^{Char}\): Charging power of energy storage system \(s\) at time \(t\).
## I Introduction
Renewable energy sources (RES) have emerged as a pivotal component of the future power system, due to their environmental friendly attributes in contrast to conventional fossil fuels. By producing clean, sustainable, and inexhaustible electric energy, RES plays a transformative role in reducing greenhouse gas emissions in the electricity sector and thus mitigating climate change [1]. Nonetheless, the escalating utilization of RES for power generation has introduced inherent stability challenges in the system, primarily due to the unpredictable and intermittent nature of deeply integrated RES [2]-[4]. In response to this challenge, battery energy storage
systems (BESS) are being extensively adopted as an effective and practically viable solution [5].
BESS effectively addresses the variability and uncertainty inherent in RES by efficiently storing excess renewable energy during peak periods and releasing it during off-peak periods of renewable generation [6]. This capability not only promotes a seamless integration of renewable energy in the grid but also reinforces the resilience of the system. Furthermore, BESS plays a pivotal role in providing essential ancillary services such as frequency regulation, voltage control, and peak shaving, thereby enhancing the stability and efficiency of the overall power system [7]-[8].
Numerous studies have demonstrated the successful integration of BESS into both bulk power systems and microgrids, particularly those integrating high penetrations of RES. For instance, [9]-[10] demonstrate the microgrid's ability to support the main grid with integrated BESS. Moreover, [11] highlights the significant benefits of incorporating BESS into the power system. Another notable example is the offshore BESS system presented in [12], which effectively reduces carbon emissions. Various other models have been proposed to incorporate BESS to mitigate fluctuations caused by renewable energy sources, as presented in [13]-[16]. In summary, the deployment of BESS is indispensable for the successful integration of renewable energy into the power system. It not only improves the system's stability and efficiency but also paves the way for a cleaner and more sustainable energy future.
The primary component utilized in BESS presently available in the market is lithium-ion batteries [17]. However, these batteries' chemical characteristics make them susceptible to degradation over cycling, ultimately resulting in a negative impact on their performance and efficiency. The degradation of lithium-ion batteries is primarily attributed to the depletion of Li-ions and electrolyte and the escalation of internal resistance. Those changes increase the internal resistance and decrease the overall available energy capacity during daily cycling [19]-[20]. Multiple factors contribute to battery aging, including ambient temperature, charging/discharging rate, state of charge (SOC), state of health (SOH), and depth of discharge (DOD), each playing a pivotal role in the degradation process over the battery cycling [21]-[22]. Nevertheless, accurately assessing the internal state of the battery remains a difficult challenge. This complexity is particularly amplified by the escalating significance of batteries functioning as energy storage systems in both microgrid systems and bulk power systems. Thus, accurately quantifying battery degradation is an urgent task, particularly when BESS operates in diverse conditions and environments in the power system.
Previous studies have extensively developed battery degradation models. However, these models fail to comprehensively address battery degradation across diverse operational conditions. One widely used approach is the DOD-based battery degradation model. Papers [23]-[27] proposed battery degradation models that depend on the DOD of each individual cycle. While this approach may be effective under consistent operating environments, it falls short when applied to the complex and diverse daily cycling scenarios of BESS. The DOD-based model omits various factors that can significantly contribute to substantial prediction errors in degradation. Another frequently employed model is the linear degradation model. As discussed in [28], this model incorporates a linear degradation cost based on power usage or energy consumption within the battery degradation model. However, similar to the DOD-based model, it can only offer a rough estimation of battery degradation and is not suitable for accurate predictions in daily day-ahead scheduling problems due to its limited accuracy. Therefore, despite the existence of previous research on battery degradation models, none of these approaches adequately addresses the battery aging factors in BESS operations comprehensively.
In our previous research work [29], we applied a data-driven approach that utilized a neural network to accurately quantify the battery degradation value. Distinct from the DOD-based and linear degradation models, our neural network-based battery degradation (NNBD) model takes into account various factors such as SOC, DOD, ambient temperature, charge or discharge rate, and SOH for each cycle, resulting in more precise degradation quantification. However, the highly non-linear and non-convex nature of the NNBD model poses challenges when seeking direct solutions to the day-ahead scheduling optimization problem. To address this challenge, we proposed a neural network and optimization decoupled heuristic algorithm in our previous work, which solves the complex neural network-embedded optimization problem iteratively. While the proposed iterative methodology exhibited commendable efficiency on simple problems, unfortunately, its performance faltered when confronted with the complexities of a multi-BESS day-ahead scheduling optimization problem. The iterative method failed to converge when applied to a system with multiple integrated BESSs.
To overcome the non-linearity characteristic of the neural network-based day-ahead scheduling problem, we present a piecewise linear model in this paper. This model enables us to directly solve the optimization problem without relying on an iterative method mentioned in our previous work. The non-linearity within the NNBD model arises from the adoption of the rectified linear unit (ReLU) activation function in the hidden layer neurons. The linearized model, using the BigM method, is designed to linearize a ReLU activation function through the introduction of four linear constraints with an auxiliary binary variable. This allows for the direct solution of the NNBD-integrated day-ahead scheduling problem. However, when multiple BESSs are present in the system, a severe computational challenge would be observed. As the number of BESS units escalates, the computational complexity rises exponentially due to the corresponding increase in the number of constraints and binary variables. This escalation made the optimization problem much more challenging to solve and may take an impractically long time to obtain a feasible solution. Thus, the research gap lies in finding methods to reduce the computational burden associated with neural network integrated optimization problems.
Heuristic methods have been proposed to reduce the complexity of neural network models. For instance, in [30], a low-complexity neural belief propagation decoder is constructed using network pruning techniques. This approach involves removing unimportant edges from the network structure. However, it should be noted that these techniques may inadvertently decrease the training accuracy. Another approach to reducing complexity is the utilization of a sparse feature learning model [31]. This model focuses on learning useful representations and decreasing the complexity of hierarchical deep neural network models. In [32], the effectiveness of sparse convolutional neural networks for single-instruction-multiple-data (SIMD)-like accelerators is demonstrated. Pruning can also alleviate the computational burden by eliminating unnecessary connections within fully connected structures, as exemplified in [33] for wideband power amplifiers' neural network models. Similarly, pruning techniques are also employed in [34] to compact the deep neural networks for SCADA applications. Furthermore, [35]-[36] suggest that modern deep neural networks often suffer from overparameterization, with a large number of learnable parameters. One potential solution to this issue is model compression through the use of sparse connections. These approaches contribute to reducing the complexity and computational burden associated with neural network models, enabling more efficient and streamlined implementations.
Since the sparsity and pruning techniques have proved to be efficient to reduce the complexity of neural networks in many other applications, it may be a perfect solution to obtain a low computational complexity model in battery degradation prediction. Thus, we propose a sparse neural network-based battery degradation model (SNNBD) to quantify the battery degradation in BESS daily operations. SNNBD is designed to be significantly less complex than the traditional fully-connected dense neural network model. SNNBD is designed to reduce the computation burden induced by the ReLU activation function. Achieving this entails a strategic process of pruning during training, whereby a predetermined percentage of neurons is systematically pruned. The sparsity percentage is defined as the ratio of pruned neurons to the total neurons in the neural network. A higher percentage of sparsity may decrease the computation complexity significantly, but the accuracy of the battery degradation prediction may decrease as compared with a less-sparse or dense model. It will be a trade-off between the sparsity and the training accuracy. Compared to the NNBD model [29], the proposed SNNBD model contains only a percentage of NNBD's neurons which may reduce the computational burden significantly while maintaining accurate battery degradation prediction.
The main contributions of this paper are as follows:
* _Refined Battery Degradation Modeling_: The proposed SNNBD model significantly refines the existing NNBD model, elevating its proficiency in quantifying battery degradation within the day-ahead scheduling model.
* _Computational Augmentation with SNNBD_: To efficiently address the day-ahead scheduling optimization challenge, this paper proposes an innovative SNNBD-assisted computational enhancement model. Capitalizing on the capabilities of the SNNBD model, this enhancement substantially improves the computational efficiency of the optimization process. This, in turn, translates into more responsive and informed decision-making procedures.
* _Linearization Technique for Practicality:_ The integration of the SNNBD model into the day-ahead scheduling framework is accompanied by a pertinent linearization technique. This technique simplifies the model's analysis and evaluation, making it more practical and feasible for real-world application scenarios.
* _Versatile Performance Evaluation_: This paper showcases the SNNBD model's efficacy across various levels of sparsity, and highlights its adaptability in capturing battery degradation under diverse operational scenarios. The day-ahead scheduling, enriched by the SNNBD model, is rigorously assessed on both expansive bulk power systems and local microgrid systems. These validation trials substantiate the SNNBD model's robustness and effectiveness in real-world power system environments.
* _In-depth Economic Insights:_ This paper provides an insightful market analysis. By comparing locational marginal prices (LMPs) across three scenarios: (1) zero-degradation BESS, (2) degraded BESS, and (3) no BESS integration, the economic implications and advantages of incorporating BESS into the power system and capturing its degradation are explored. This analysis provides a comprehension of the economic landscape, enriching decision-making processes within the energy market.
The rest of the paper is organized as follows. Section II describes the sparse neural network model and training strategy. Section III presents the traditional day-ahead scheduling model. Section IV presents the SNNBD integrated day-ahead scheduling model. Section V presents case studies and Section VI concludes the paper.
## II Sparse Neural Network Based Battery Degradation
This section outlines the training process for the proposed SNNBD model. We proposed two training schemes: (i) Warm Start that trains the SNNBD based on the pre-trained NNBD model, and (ii) Cold Start that trains the SNNBD model directly with random initial weights. Both models consist of 5 input neurons, 20 neurons in hidden layer 1, 10 neurons in hidden layer 2, and 1 neuron in the output layer. The hidden layers utilize the ReLU as the activation function for each neuron.
### _Warm Start_
The training process for Warm Start is illustrated in the algorithm explained below. Initially, the weights derived from the trained neural network model are utilized as the initial weights for the SNNBD model. During the training of the SNNBD model, a pruning mask is generated based on a certain predetermined sparsity percentage value. This mask is then applied to prune the weights after each training epoch to achieve the desired sparsity. The pruning masks are binary matrices that indicate which neurons are pruned (set to zero) in order to achieve sparsity throughout the entire structure.
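A minimal PyTorch sketch of this Warm Start procedure is given below. The 5-20-10-1 architecture and the per-epoch mask application follow the description above, while the magnitude-based rule for building the mask, the optimizer, the learning rate, the epoch count, and the synthetic data are our own assumptions made only to keep the sketch self-contained; the commented-out file name for the pre-trained NNBD weights is likewise hypothetical.

```python
import torch
import torch.nn as nn

# 5 inputs (SOC, DOD, ambient temperature, C rate, SOH) -> 20 -> 10 -> 1, ReLU hidden layers
model = nn.Sequential(
    nn.Linear(5, 20), nn.ReLU(),
    nn.Linear(20, 10), nn.ReLU(),
    nn.Linear(10, 1),
)
# Warm Start: initialize from the trained dense NNBD weights when available, e.g.
# model.load_state_dict(torch.load("nnbd_dense.pt"))    # hypothetical file name

loader = [(torch.randn(64, 5), torch.rand(64, 1)) for _ in range(10)]  # stand-in data
sparsity = 0.5                                  # fraction of weights forced to zero
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                          # mean squared error, as in eq. (1)

def make_masks(net, s):
    """One binary mask per weight matrix, pruning the smallest-magnitude entries."""
    masks = {}
    for name, p in net.named_parameters():
        if p.dim() == 2:                        # prune weight matrices, keep biases dense
            threshold = torch.quantile(p.detach().abs().flatten(), s)
            masks[name] = (p.detach().abs() > threshold).float()
    return masks

for epoch in range(200):
    for x, y in loader:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    masks = make_masks(model, sparsity)         # mask regenerated at every epoch
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])             # zero out the pruned connections
```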
### _Cold Start_
Cold Start offers a simple approach compared to Warm Start. Instead of training the neural network based on the fine-tuned NNBD weights, Cold Start directly trains a sparse neural network using random initial weights. In essence, the key difference between Warm Start and Cold Start lies in the
choice of initial weights. However, all other training techniques remain consistent between the two options. The performance and efficiency of both Warm Start and Cold Start will be evaluated and compared.
### _NNBD Model_
The training of deep neural networks requires a substantial amount of data. In our study, we utilize MATLAB Simulink [37] to perform battery aging tests by implementing a battery model. By employing a battery cycle generator, we simulate charging and discharging cycles at predefined rates and replicate various battery types, conditions, operating profiles, ambient temperature effects, and internal resistance. These battery aging tests are conducted at different initial SOC and DOD levels, as well as under different ambient temperatures and charging/discharging rates.
In order to enhance the training efficiency and accuracy of the model, the battery degradation data collected from Simulink needs to be normalized before being fed into the training process. The original data consists of SOC, DOD, temperature, charging/discharging rate, and SOH. The C rate denotes the speed at which a battery is charged or discharged. SOH data is collected at the end of each charging/discharging cycle when the battery returns to its pre-set SOC value. Each cycle represents the process of discharging a battery from a specific SOC to a lower SOC and then recharging it back to the original SOC.
### _Pruning Method_
Pruning is a technique employed in neural networks to reduce the size and complexity of a model by eliminating unnecessary connections or neurons [38]. The objective of pruning is to enhance the efficiency of the training model, minimize memory requirements, and potentially improve its generalization capabilities. During the pruning process, a pruning mask is applied to identify and eliminate neurons that contribute less to the overall network performance, as depicted in Fig. 1. The pruning mask is regenerated at each epoch, so the set of pruned neurons is updated as training proceeds; this also helps the robustness of the proposed SNNBD model. These pruning masks enable a compact representation of the sparse neural network. Instead of storing and computing all connection weights, only the active connections are considered, resulting in reduced memory usage and computational demands. By incorporating pruning masks, sparse neural networks strike a balance between model complexity and efficiency, making them a valuable approach for various applications, particularly in scenarios with limited computational resources or deployment constraints.
### _Fine Tuning and Setup_
After the pruning stage, the network undergoes retraining to restore and fine-tune its performance in the next epoch. During retraining, the pruning mask plays a crucial role in removing the pruned connections, effectively fixing their weights at zero. Only the remaining active connections are updated during the retraining process. This allows the network to relearn and redistribute its capacity among the remaining connections, compensating for the pruned ones.
During the training phase, the sparse neural network is trained using the mini-batch gradient descent strategy. The performance of the deep neural network is assessed based on its capacity to make precise predictions, which is evaluated using the mean squared error metric as shown in equation (1). The mean squared error is computed by averaging the squared differences between the actual and predicted values across all the training data points, serving as the loss function throughout the training process.
\[\textit{Mean Square Error}=\frac{1}{n}\sum_{t=1}^{n}(y_{t}-\bar{y}_{t})^{2} \tag{1}\]
## III Traditional Day-Ahead Scheduling Model
This section presents the day-ahead scheduling problem for both the bulk power system and the microgrid system. The bulk power system is inherently more complex than the microgrid system due to the presence of multiple buses and transmission lines. It is important to note that neither of the models listed in this section considers battery degradation.
### _Bulk Power System Energy Scheduling Model_
The day-ahead scheduling problem of the bulk power system is represented by the traditional security constrained unit commitment (SCUC) model. The objective of the traditional SCUC model is to minimize the total operating cost of the system as defined in equation (2).
_Objective function_:
\[f^{cost}=\sum_{t\in T}\sum_{g\in G}\left(P_{g,t}c_{g,t}+U_{g,t}c_{g}^{NL}+V_{g,t}c_{g}^{SU}\right) \tag{2}\]
_Constraints_:
\[\sum\nolimits_{g\in S_{G}}P_{g,t}+\sum\nolimits_{wt\in S_{WT}}P_{wt,t}+\sum\nolimits_{pv\in S_{PV}}P_{pv,t}+\sum\nolimits_{s\in S_{S}}P_{s,t}^{Disc}+\sum\nolimits_{k\in S_{n-}}P_{k,t}=\sum\nolimits_{k\in S_{n+}}P_{k,t}+\sum\nolimits_{l\in S_{L}}P_{l,t}+\sum\nolimits_{s\in S_{S}}P_{s,t}^{Char},\forall n,t \tag{3}\]
\[P_{g}^{Min}\leq P_{g,t}\leq P_{g}^{Max},\forall g,t \tag{4}\]
\[P_{g,t+1}-P_{g,t}\leq\Delta T\cdot P_{g}^{Ramp},\forall g,t \tag{5}\]
\[P_{g,t}-P_{g,t+1}\leq\Delta T\cdot P_{g}^{Ramp},\forall g,t \tag{6}\]
\[V_{g,t}\geq U_{g,t}-U_{g,t-1},\forall g,t \tag{7}\]
\[V_{g,t+1}\leq 1-U_{g,t},\forall g,t \tag{8}\]
\[V_{g,t}\leq U_{g,t},\forall g,t \tag{9}\]
\[-P_{k}^{Max}\leq P_{k,t}\leq P_{k}^{Max},\forall k,t \tag{10}\]
\[P_{k,t}-b_{k}\left(\theta_{n(k)}^{t}-\theta_{m(k)}^{t}\right)=0,\forall k,t \tag{11}\]
\[U_{s,t}^{Disc}+U_{s,t}^{Char}\leq 1,\forall s,t \tag{12}\]
\[U_{s,t}^{Char}\cdot P_{s}^{Min}\leq P_{s,t}^{Char}\leq U_{s,t}^{Char}\cdot P_{s}^{Max},\forall s,t \tag{13}\]
\[U_{s,t}^{Disc}\cdot P_{s}^{Min}\leq P_{s,t}^{Disc}\leq U_{s,t}^{Disc}\cdot P_{s}^{Max},\forall s,t \tag{14}\]
Fig. 1: Pruning of a sample neural network model.
\[E_{s,t}-E_{s,t-1}+\Delta T\big{(}P_{s,t-1}^{Disc}/\eta_{s}^{Disc}-P_{s,t-1}^{Char}\eta_{s}^{char}\big{)}=0,\forall s,t \tag{15}\]
\[E_{s,t=24}=E_{s}^{initial},\forall s \tag{16}\]
\[0\leq E_{s,t}\leq E_{s}^{Max},\forall s,t \tag{17}\]
The power balance equation for bus n incorporates synchronous generators, renewable energy sources, battery energy storage systems, and load demand, as represented by equation (3). Constraints (4-6) define the power output limits and ramping limits for each generator. To establish the relationship between a generator's start-up status and its on/off status, equations (7)-(9) are employed. Equation (10) enforces the thermal limit of the transmission lines. Constraint (11) calculates the power flow within the network.
For the BESS, constraint (12) ensures that charging and discharging cannot occur simultaneously. Constraints (13)-(14) maintain the charging/discharging power of the BESS within specified limits. Equation (15) calculates the stored energy of the BESS for each time interval. Equation (16) mandates that the final SOC level of the BESS matches the initial value. Equation (17) establishes the upper limit for the stored energy of the BESS.
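Since the paper's scheduling problems are implemented in Pyomo and solved with Gurobi (as noted in the case studies), the snippet below sketches how the BESS constraints (12)-(17) can be written for a single storage unit. The parameter values and variable names are illustrative placeholders rather than the paper's actual data or code, and the lower bounds in (13)-(14) are omitted because \(P_{s}^{Min}=0\) is assumed here.

```python
import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.T = pyo.RangeSet(1, 24)                                  # hourly intervals
E_max, P_max, eta_c, eta_d, dT, E_init = 10.0, 4.0, 0.9, 0.9, 1.0, 4.0  # placeholders

m.u_c = pyo.Var(m.T, within=pyo.Binary)                    # charging status
m.u_d = pyo.Var(m.T, within=pyo.Binary)                    # discharging status
m.p_c = pyo.Var(m.T, within=pyo.NonNegativeReals)          # charging power
m.p_d = pyo.Var(m.T, within=pyo.NonNegativeReals)          # discharging power
m.E = pyo.Var(m.T, bounds=(0.0, E_max))                    # stored energy, eq. (17)

# (12): no simultaneous charging and discharging
m.exclusive = pyo.Constraint(m.T, rule=lambda m, t: m.u_c[t] + m.u_d[t] <= 1)
# (13)-(14): power limited by the charge/discharge status (lower bounds are zero here)
m.char_lim = pyo.Constraint(m.T, rule=lambda m, t: m.p_c[t] <= m.u_c[t] * P_max)
m.disc_lim = pyo.Constraint(m.T, rule=lambda m, t: m.p_d[t] <= m.u_d[t] * P_max)

def energy_balance(m, t):                                  # eq. (15)
    if t == 1:
        return m.E[t] == E_init
    return m.E[t] == m.E[t - 1] - dT * (m.p_d[t - 1] / eta_d - m.p_c[t - 1] * eta_c)
m.balance = pyo.Constraint(m.T, rule=energy_balance)

m.final_soc = pyo.Constraint(rule=lambda m: m.E[24] == E_init)   # eq. (16)
```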
### _Microgrid Energy Scheduling Model_
The traditional microgrid day-ahead scheduling problem shares some constraints of the bulk power system model, excluding the power flow constraints. The objective function for microgrids aims to minimize the total cost, incorporating the cost of traditional generators and the cost of tie-line power exchange, as depicted in (18).
_Objective function_:
\[f^{cost}=\sum_{t\in T}\Big[\sum_{g\in G}\left(P_{g,t}c_{g,t}+U_{g,t}c_{g}^{NL}+V_{g,t}c_{g}^{SU}\right)+P_{t}^{Buy}c_{t}^{Buy}-P_{t}^{Sell}c_{t}^{Sell}\Big] \tag{18}\]
_Constraints:_
The power balance equation for the microgrid is presented in (19). To ensure the appropriate status of power exchange between the microgrid and the main grid, (20) is utilized, specifying the status of being a buyer, seller, or idle. Constraints (21) and (22) enforce the thermal limits of the tie-line. Lastly, equation (23) sets up the emergency reserve of the system. The traditional microgrid day-ahead scheduling constraints encompass (4)-(9) and (12)-(23). Instead of the power flow constraints present in the bulk power system model, the microgrid model incorporates tie-line exchange equations within the day-ahead scheduling framework.
\[P_{t}^{Buy}+\sum\nolimits_{g\in S_{G}}P_{g,t}+\sum\nolimits_{wt\in S_{WT}}P_{wt,t}+\sum\nolimits_{pv\in S_{PV}}P_{pv,t}+\sum\nolimits_{s\in S_{S}}P_{s,t}^{Disc}=P_{t}^{Sell}+\sum\nolimits_{l\in S_{L}}P_{l,t}+\sum\nolimits_{s\in S_{S}}P_{s,t}^{Char},\forall t \tag{19}\]
## IV SNNBD-Integrated Day-Ahead Scheduling Model
\[a_{h}^{i}=\mathrm{relu}\big{(}x_{h}^{i}\big{)}=\max(0,x_{h}^{i}) \tag{29}\]
\[a_{h}^{i}\leq x_{h}^{i}+BigM\cdot(1-\delta_{h}^{i}) \tag{30}\]
\[a_{h}^{i}\geq x_{h}^{i} \tag{31}\]
\[a_{h}^{i}\leq BigM\cdot\delta_{h}^{i} \tag{32}\]
\[a_{h}^{i}\geq 0 \tag{33}\]
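The following Pyomo fragment shows how (30)-(33) can be written for a single hidden-neuron activation. The value of BigM and the bound on the pre-activation \(x\) are assumptions chosen only for illustration; in the full model, \(x_{h}^{i}\) would be the affine expression formed from the fixed SNNBD weights and the preceding layer's activations.

```python
import pyomo.environ as pyo

BIG_M = 100.0                                    # assumed bound on |x| for this neuron
m = pyo.ConcreteModel()
m.x = pyo.Var(bounds=(-BIG_M, BIG_M))            # pre-activation of the neuron
m.a = pyo.Var(within=pyo.NonNegativeReals)       # post-activation a = relu(x), eq. (33)
m.delta = pyo.Var(within=pyo.Binary)             # 1 if the neuron is active (x >= 0)

m.c30 = pyo.Constraint(expr=m.a <= m.x + BIG_M * (1 - m.delta))   # eq. (30)
m.c31 = pyo.Constraint(expr=m.a >= m.x)                           # eq. (31)
m.c32 = pyo.Constraint(expr=m.a <= BIG_M * m.delta)               # eq. (32)
```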
The SNNBD-integrated day-ahead energy scheduling models for different systems are shown in the Table I.
Table I The proposed day-ahead scheduling models
\begin{tabular}{|c|c|} \hline Systems & Equations \\ \hline Bulk Power Grid & (2)-(17), (24)-(33) \\ \hline Microgrid & (4)-(9), (12)-(33) \\ \hline \end{tabular}
### _Benchmark Model_
To evaluate the performance of the SNNBD model in day-ahead scheduling problems, a benchmark model will be employed. The benchmark model utilizes the NNBD model, which has been previously introduced in studies. The purpose of this benchmark model is to provide a basis for comparison and assess the effectiveness of the SNNBD model.
It is important to note that the day-ahead scheduling modeling remains consistent across both the NNBD and SNNBD models. Both models are applied within the same day-ahead scheduling framework, sharing the same variables and constraints. The distinction between the two models lies in the methodology used to quantify battery degradation. The NNBD model, used in the benchmark model, employs a conventional deep neural network approach to predict battery degradation based on the input variables. In contrast, the SNNBD model, under evaluation, utilizes a sparse neural network architecture. By comparing the performance of the SNNBD model against the benchmark NNBD model, it becomes possible to evaluate the effectiveness of the sparse architecture introduced in the SNNBD model. This comparison aids in determining the advancements made by the SNNBD model in compacting the neural network structure, which in turn can reduce the computational complexity of neural network-integrated day-ahead scheduling problems.
## V Case Studies
### _Training Strategies: Warm Start vs. Cold Start_
The analysis of training outcomes, as illustrated in Table II, distinctly highlights the superiority of Warm Start over Cold Start concerning training accuracy. However, it is noteworthy that Cold Start requires fewer epochs to complete the training process. It is important to mention that the training epochs for Warm Start represent the combined training epochs required by the NNBD model and the SNNBD model, while for Cold Start, it refers to the training epochs of the sparse neural network alone. Training the sparse neural network from random initial weights (Cold Start) proves to be notably challenging when it comes to achieving a level of accuracy equivalent to that of Warm Start. In contrast, Warm Start is designed to take advantage of the pre-trained NNBD model, which serves as a stable starting point. The SNNBD model is then applied to further refine and sparsify the structure of the already trained model. This suggests that the pre-trained NNBD model provides a beneficial foundation for the SNNBD model. The initial training with the NNBD model establishes a solid baseline, and the subsequent application of the SNNBD model enables fine-tuning with sparsity. By leveraging the existing knowledge encoded in the NNBD model, Warm Start demonstrates superior training accuracy compared to Cold Start.
Table II Results between Warm Start and Cold Start
\begin{tabular}{|c|c|c|} \hline Training Options & Accuracy & Epochs \\ \hline Warm Start & 94\% & 550 \\ \hline Cold Start & 77\% & 300 \\ \hline \end{tabular}
### _SNNBD Model Training_
All the results presented here are based on Warm Start training, as it outperforms Cold Start. The training results of the proposed SNNBD model are depicted in Fig. 2 and Table III. In Fig. 2, the 0% sparsity represents the original NNBD model without any sparsity applied. The markers for 5%, 10%, and 15%, tinted in blue, red, and green, respectively, signify distinct error tolerance thresholds. Notably, a clear pattern unfolds in the interplay between sparsity and prediction accuracy. As the sparsity percentage scales up, the precision of battery degradation value predictions undergoes a gradual decrement across all tolerance thresholds. This trend continues until the 70% sparsity mark is attained. When comparing the 0% sparsity model (NNBD model) and the 50% sparsity model, the accuracy stands at 94.5% and 93.7% respectively, considering a 15% error tolerance. However, the 50% sparsity model significantly reduces the computational complexity compared to the original NNBD model since half of the neurons are pruned to be zero, thereby eliminating their connections. This reduction in computational complexity is exponential, as all connections associated with zero-valued neurons are discarded.
### _Microgrid Test Case_
To evaluate the performance of the integrated SNNBD day-ahead scheduling model, a typical grid-connected microgrid with renewable energy sources was employed as a testbed, as demonstrated in Section IV. The microgrid configuration consists of several components, including a traditional diesel generator, wind turbines, residential houses equipped with solar panels, and a lithium-ion BESS with a charging/discharging efficiency of 90%. The parameters for these main components are provided in Table IV.
To simulate realistic conditions, the load data for the microgrid is based on the electricity consumption of 1000 residential houses. The ambient temperature and available solar power for a 24-hour period are sourced from the Pecan Street Dataport [39], ensuring accurate representation of real-world environmental conditions. The wholesale electricity price data is obtained from ERCOT [40], allowing the model to consider market dynamics in the day-ahead scheduling decisions.
The optimization problem, formulated as part of the day-ahead scheduling model, was solved on a computer with the following specifications: an AMD Ryzen 7 3800X processor, 32 GB RAM, and an Nvidia Quadro RTX 2700 Super GPU with 8 GB of memory. The Pyomo [41] package, a powerful optimization modeling framework, was utilized to formulate and solve the day-ahead optimization problem. The high-performance mathematical programming solver Gurobi [42] was employed to efficiently find optimal solutions. By utilizing this realistic microgrid test platform and the computational resources mentioned, the SNNBD-integrated day-ahead scheduling model can accurately capture the dynamics of the renewable energy sources, optimize the scheduling decisions, and assess the performance of the proposed approach.
Table V presents the validation results for different sparsity levels of the SNNBD models in the microgrid day-ahead scheduling problem. The table provides insights into the performance of these models across various metrics. "Pseudo Total" represents the total cost with the SNNBD model, which serves as the objective of the day-ahead scheduling including the operating cost and degradation cost in optimization problem. "BD Cost" represents the equivalent battery degradation cost estimated using the SNNBD model. "Operation" shows the microgrid operating cost, including the cost associated with generators and power trading. "OG BD Cost" indicates the battery degradation cost obtained from the original NNBD model, which does not incorporate sparsity. "Updated Total" represents the sum of the operation cost and the "OG BD Cost". "0% sparsity" is considered as the benchmark model, used to evaluate the performance of the other SNNBD models with different sparsity levels.
From the information in Table V, it appears that the SNNBD model does not significantly reduce the solving time in the microgrid model. Furthermore, there is no substantial difference observed in the total cost and updated total cost among the various SNNBD models compared to the benchmark model. These findings suggest that the inclusion of sparsity in the SNNBD model does not significantly impact the overall cost in the microgrid day-ahead scheduling problem. Fig. 3 illustrates the output curves of the BESS under different battery degradation models. The figure shows that the BESS charge/discharge power profiles largely overlap across most time intervals. The only notable difference is observed in the 10% and 20% sparsity models, where the BESS charges at 20 kW during the 7-8 pm. Overall, these results demonstrate that the SNNBD model is capable of finding solutions for the day-ahead scheduling problem. Based on these findings, it can be concluded that the SNNBD model is reliable and able to identify optimal solutions compared to the non-sparse NNBD model in the microgrid day-ahead scheduling problem. However, it should be noted that the SNNBD model does not yield efficiency improvements, even with higher sparsity levels. One possible reason for this observation could be the small scale of the microgrid case and the presence of only one BESS, which does not impose a heavy computational burden.
### _Bulk Power System Test Case_
To evaluate the day-ahead scheduling of the bulk power grid model, a typical IEEE 24-bus system (Fig. 4) [43] is employed as a test bed. This system consists of 33 generators and serves as a representative model for large-scale power grids. In addition to the existing infrastructure, the test bed incorporates several BESSs and wind farms to evaluate their impact on the day-ahead scheduling. Fig. 4 illustrates the layout of the IEEE 24-bus system, showcasing the interconnected buses and the corresponding transmission lines. The objective of this evaluation is to optimize the scheduling decisions considering the presence of the multiple BESS and wind farm within the larger power grid system.
Similar to the microgrid case discussed earlier, the day-ahead scheduling problem for the bulk power grid is solved using the same software packages. The integration of the BESSs and wind farms within the IEEE 24-bus system enables the evaluation of their impact on optimizing power generation, transmission, and scheduling decisions at a larger scale.
Table VI provides the parameters of the BESSs installed at different buses within the IEEE 24-bus system. These parameters characterize the specifications of each BESS, including
Figure 3. BESS output in a microgrid system.
their energy capacities and power output capabilities. Notably, BESS number four possesses the largest energy capacity and the highest output power among the five BESSs considered in the system. The minimum power for charging or discharging is set to zero. Additionally, the IEEE 24-bus system incorporates five wind farms, each comprising a varying number of wind turbines. The capacity of each wind turbine is fixed at 200 kW. To obtain suitable wind profiles for this study, the wind profile data sourced from the Pecan Street Dataport [41] are appropriately scaled. The inclusion of these parameters and data in the evaluation allows for a comprehensive analysis of the day-ahead scheduling problem within the IEEE 24-bus system.
The outcomes for the IEEE 24-bus system with different sparsity levels of the SNNBD model are presented in Table VII. It is vital to recognize that all tabulated results are anchored on a relative MipGap of 0.001, which is a critical gauge of the optimality gap. The table clearly demonstrates that the solving time decreases exponentially as the sparsity level of the SNNBD model increases. The results based on the 60% and 70% sparsity have not been included as the BESS output curve deviates significantly from the solutions based on lower sparsity level models. The 0% and 10% sparsity model results are not listed here since they cannot be solved within the given time frame, whereas the 50% sparsity model requires only 455 seconds for solution. Similarly, for the 20% sparsity model, the day-ahead scheduling problem cannot be solved to optimality within the span of 20 hours, resulting in a reported non-optimal benchmark result.
We also found that the 50% sparsity model leads to the minimum total cost. However, the total cost does not change significantly despite the variation in solving time. This indicates that while the solving time is reduced to an acceptable level with high-sparsity SNNBD models, the overall cost remains relatively stable. By analyzing these results, it becomes evident that increasing the sparsity level in the SNNBD model significantly reduces the solving time without significantly impacting the overall cost. However, it is crucial to validate the BESS power output pattern when assessing the performance of the SNNBD model. Examining the BESS power output pattern ensures that the model captures the desired behavior and produces outputs consistent with expectations.
Figs. 5 and 6 display the SOC curves of BESS #4 and #5 under different sparsity levels of the proposed SNNBD model. These two BESS units are particularly active among the five units considered in the testbed. For benchmarking purposes, the SOC curve is also plotted when there is no battery degradation considered in the day-ahead scheduling problem. The SOC curve provides insights into the utilization of the BESS, with more fluctuation indicating more active use and flatter curves indicating less active use. When degradation is not considered, the BESS units are utilized to their maximum capacity since there is no equivalent degradation cost factored into the optimization problem. We found that both BESS #4 and #5 are scheduled to discharge to 0% SOC twice when degradation is not considered. In Figure 5, the output curves of BESS #4 with SNNBD models significantly shrink compared to the case where degradation is not considered. However, the output patterns of BESS #4 with different sparsity levels of the SNNBD model exhibit a similar pattern and overlap for most time periods, demonstrating the effectiveness of the proposed SNNBD model.
The output patterns of BESS #5 in Figure 6 appear different from those of BESS #4. However, similar to BESS #4, BESS #5 discharges significantly less when degradation is considered. Table III provides insights into the tradeoff between sparsity and accuracy. A higher sparsity level leads to lower accuracy, while a lower sparsity level results in longer solving times for day-ahead scheduling. Thus, a balance must be struck between sparsity and accuracy. Overall, the 50% sparsity model performs the best since its SOC curve closely resembles those of the 20%, 30%, and 40% sparsity models while having the lowest total cost.
Fig. 4: Illustration of the modified IEEE 24-bus system [43].
Fig. 5: SOC curves of BESS #4 in the 24-bus bulk power system.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline BESS & Bus & Energy Capacity & Power Rating & Initial \\ _No._ & _No._ & (MWh) & (MW) & SOC \\ \hline
1 & 21 & 50 & 20 & 40\% \\ \hline
2 & 22 & 10 & 4 & 40\% \\ \hline
3 & 7 & 10 & 4 & 40\% \\ \hline
4 & 14 & 200 & 100 & 40\% \\ \hline
5 & 9 & 30 & 10 & 50\% \\ \hline \end{tabular}
\end{table} TABLE VI: Setting of BESSs
### _Market Analysis_
Fig. 7 presents sample results demonstrating the influence of locational marginal price (LMP) when integrating BESSs into the bulk power system. Our exploration encompassed 3 models including "no BESS model", "BESS considered with degradation", and "BESS considered without degradation". A comparison was made with the "no BESS model" to assess the system's ability to reduce line congestion when a BESS is integrated. The LMP results in Fig. 7 specifically focus on bus 14, the location of the largest BESS unit, BESS #4. From the figure, it is evident that during most time periods, such as 1 am to 5 am and 12 pm to 6 pm, the LMP values are consistent across the different cases, indicating there is no line congestion at bus 14 during those periods. However, as the clock strikes 3 pm to 6 pm, a surge in LMP is evident, which suggests that the line is loaded higher than in the previous hours but is not yet congested. During the normal daily peak load periods of 7 am to 9 am and 7 pm to 8 pm, the LMP values differ among the proposed models. In comparison to the "no BESS model," the models with integrated BESS units can significantly reduce the LMP, indicating that the BESS can alleviate line congestion. Note that when battery degradation is not considered, the BESS exhibits a higher capability to mitigate line congestion, leading to the lowest LMP during those congested hours. This analysis of LMP with the integration of a BESS system provides valuable insights for both grid operators and BESS investors, as BESS installations play a crucial role in addressing line congestion within the grid.
### _Sensitivity Analysis of Relative Optimization Gaps_
A sensitivity test was conducted to examine the impact of different relative gaps. Fig. 8 displays SOC curves of BESS #4 based on the optimal solution obtained using different relative MipGap values. The results presented in Figure 8 are based on the 50% sparsity SNNBD model. The solving times for MipGap values of 0.01, 0.005, and 0.001 are 339 seconds, 380 seconds, and 450 seconds, respectively. The solving time increases as the MipGap value decreases because a more accurate optimal solution is sought. However, upon analyzing the SOC curve depicted in Figure 8, it becomes evident that the SOC curves mostly overlap, indicating minimal differences between the solutions obtained under different MipGap values. Consequently, for the 50% sparsity model, a higher MipGap value is preferred as it reduces the computation time while maintaining a comparable solution quality.
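For reference, the relative optimality gap examined in this sensitivity test can be set through the solver options when the Pyomo model is handed to Gurobi; the tiny stand-in model below only keeps the snippet runnable and is not the SNNBD-integrated scheduling model.

```python
import pyomo.environ as pyo

model = pyo.ConcreteModel()                   # stand-in for the actual scheduling model
model.x = pyo.Var(within=pyo.Binary)
model.obj = pyo.Objective(expr=model.x, sense=pyo.maximize)

solver = pyo.SolverFactory("gurobi")
solver.options["MIPGap"] = 0.001              # relative gap tested: 0.01, 0.005, or 0.001
results = solver.solve(model, tee=False)
```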
## VI Conclusions
This paper introduces a novel sparse neural network-based battery degradation model that can accurately quantify battery degradation in BESS and largely address the computational challenges associated with traditional dense neural networks when being incorporated into optimization models. By leveraging the sparse technique, the proposed SNNBD achieves accurate degradation prediction while significantly reducing the computational complexity. It has been proven that the accuracy of the SNNBD model does not decrease significantly until the sparsity is increased to 80%. It can obtain 91% accuracy at 60% sparsity, compared to 95% when no sparsity is implemented.
The results also show that our proposed SNNBD model can significantly reduce the computational burden, making neural network-integrated day-ahead energy scheduling directly solvable, even for complicated multi-BESS integrated systems. Furthermore, the results have been proven to be accurate and feasible with a high sparsity SNNBD model in both microgrid and bulk power system. Choosing different sparsity levels for the proposed SNNBD model provides flexibility for the grid operator, as it involves a tradeoff between accuracy and solving time. Overall, the SNNBD model opens up new possibilities for efficiently addressing battery degradation in day-ahead energy scheduling for multi-BESS systems.
|
2301.13489 | Effects of turbulent diffusion and back-reaction on the dust
distribution around two resonant planets | In evolved and dusty circumstellar discs, two planets with masses comparable
to Jupiter and Saturn that migrate outwards while maintaining an orbital
resonance can produce distinctive features in the dust distribution. Dust
accumulates at the outer edge of the common gas gap, which behaves as a dust
trap, where the local dust concentration is significantly enhanced by the
planets' outward motion. Concurrently, an expanding cavity forms in the dust
distribution inside the planets' orbits, because dust does not filter through
the common gaseous gap and grain depletion in the region continues via inward
drifting. There is no cavity in the gas distribution because gas can filter
through the gap, although ongoing gas accretion on the planets can reduce the
gas density in the inner disc. Such behaviour was demonstrated by means of
simulations neglecting the effects of dust diffusion due to turbulence and of
dust backreaction on the gas. Both effects may alter the formation of the dust
peak at the gap outer edge and of the inner dust cavity, by letting grains
filter through the dust trap. We performed high resolution hydrodynamical
simulations of the coupled evolution of gas and dust species, the latter
treated as pressureless fluids, in the presence of two giant planets. We show
that diffusion and backreaction can change some morphological aspects of the
dust distribution but do not alter some main features, such as the outer peak
and the expanding inner cavity. These findings are confirmed for different
parametrizations of gas viscosity. | Francesco Marzari, Gennaro DAngelo | 2023-01-31T09:15:04Z | http://arxiv.org/abs/2301.13489v1 | Effects of turbulent diffusion and back-reaction on the dust distribution around two resonant planets
###### Abstract
In evolved and dusty circumstellar discs, two planets with masses comparable to Jupiter and Saturn that migrate outwards while maintaining an orbital resonance can produce distinctive features in the dust distribution. Dust accumulates at the outer edge of the common gas gap, which behaves as a dust trap, where the local dust concentration is significantly enhanced by the planets' outward motion. Concurrently, an expanding cavity forms in the dust distribution inside the planets' orbits, because dust does not filter through the common gaseous gap and grain depletion in the region continues via inward drifting. There is no cavity in the gas distribution because gas can filter through the gap, although ongoing gas accretion on the planets can reduce the gas density in the inner disc. Such behaviour was demonstrated by means of simulations neglecting the effects of dust diffusion due to turbulence and of dust backreaction on the gas. Both effects may alter the formation of the dust peak at the gap outer edge and of the inner dust cavity, by letting grains filter through the dust trap. We performed high resolution hydrodynamical simulations of the coupled evolution of gas and dust species, the latter treated as pressureless fluids, in the presence of two giant planets. We show that diffusion and backreaction can change some morphological aspects of the dust distribution but do not alter some main features, such as the outer peak and the expanding inner cavity. These findings are confirmed for different parametrizations of gas viscosity.
keywords: accretion, accretion discs -- methods: numerical -- planets and satellites: gaseous planets -- planet-disc interactions
## 1 Introduction
In a recent study, Marzari et al. (2019) examined the distribution of dust particles around two resonant planets embedded in a circumstellar disc. The two planets, with masses comparable to those of Jupiter and Saturn, had orbits in a resonant configuration, with ratios of the mean motion equal to either 2:1 or 3:2. Because of the type of resonance and of the applied disc conditions, the planets tend to migrate outwards and dust particles tend to accumulate outside of the orbit of the exterior planet. Concurrently, the inward migration of dust grains that move inside of the orbit of the interior planet leads to an enlargement of the dust gap compared to the gap in the gas and to a dynamical decoupling between the gaps in the gas and dust distributions. The build-up of the dust density at the outer edge of the gap surrounding the planets is markedly higher in the case of the 2:1 mean-motion resonance and may appear as a bright ring (at appropriate wavelengths) in resolved observations of discs. A similar phenomenon was also found for lower-mass planets (in the Super-Earth mass range, Marzari & D'Angelo, 2020), although less pronounced. All those simulations were performed without including the effects of possible dust diffusion due to gas turbulence and of dust back-reaction on the gas. Here we explore the consequences of these two mechanisms on the accumulation of dust grains at the outer edge of the gap and on the formation of a wider gap in the dust distribution compared to the density gap in the gas. This latter feature may lead to the formation of a transition disc, if the planets are close enough to the star (low gas surface density inside the planets' orbits can be caused by ongoing accretion of gas onto the star and planets).
Circumstellar discs are likely turbulent (e.g., Hughes et al., 2011). Various mechanisms have been proposed as drivers of turbulence, such as convective instability (e.g., Klahr & Hubbard, 2014; Lyra, 2014), vertical shear instability (e.g., Urpin, 2003; Nelson et al., 2013; Stoll et al., 2017), and magneto-rotational instability (e.g., Balbus & Hawley, 1991, 1998, 2003). Gas turbulence may force dust grains to diffuse (over a length-scale dictated by the type of turbulence), a process that not only may affect dust accumulation but can also hinder the efficiency of dust entrapment at radial location of gas pressure maxima. In fact, according to Sierra et al. (2019) and Pinilla et al. (2020), dust diffusion may reduce, or even prevent, significant concentrations of grains at locations of gas "bumps", since the grains can disperse out of the region. This process might affect the conclusions of our previous results on the accumulation of dust at the outer edge of the gaseous gap of two planets in resonance by letting dust seep through the gap and reach the inner disc regions. If the effect is large enough, the concentration of dust at those radial locations, obtained in the numerical simulations by Marzari et al. (2019) and Marzari & D'Angelo (2020), may be severely depleted or even largely absent.
In addition to dust diffusion, the back-reaction of dust on gas can also impact the formation of grain concentration at a local pressure maximum. According to Taki et al. (2016), dust back-reaction can deform the pressure gradient of the gas when high-enough values of the dust-to-gas mass ratio are reached. This may be the case of the dust concentration attained at the outer edge of gaseous gaps, observed in the simulations of two planets in resonance migrating away from the star.
To test the relevance of these two mechanisms, diffusion and back-reaction, to the formation of dust over-dense regions caused by the outward migration of two planets in resonance, we performed simulations of the evolution of two resonant planets in which both mechanisms are taken into account. In Section 2, we describe the numerical model exploited to study the coupled evolution of dust and gas in the presence of the two resonant planets. In Section 3, we outline the dust behaviour when the planets evolve in the 3:2 mean-motion resonance whereas, in Section 4, we analyse the case of the 2:1 mean-motion resonance. In Section 5, we test the robustness of our results by changing the viscosity parametrization from a constant kinematic viscosity to one that applies a constant value of the \(\alpha\) parameter. Finally, in Section 6, we discuss our results.
## 2 Methods and algorithms
In past work on the coupled evolution of dust and gas in protoplanetary discs, we adopted a Lagrangian description of the solid component. Instead, an Eulerian formalism is applied in the present study, since solids are treated as pressureless fluids. Some details on the involved equations are provided below to highlight the differences between the two approaches. Marzari et al. (2019) and Marzari & D'Angelo (2020) used the two-dimensional (2D) FARGO hydrodynamics code (Masset, 2000), modified to include the dynamical evolution of dust particles embedded in the gaseous disc in a Lagrangian fashion (Picogna & Kley, 2015; Picogna et al., 2018).
The drag force on the particles was computed from the local gas density according to the equation (Woitke & Helling, 2003)
\[\mathbf{F}=\left(\frac{3K}{3K+1}\right)^{2}\mathbf{F}_{\mathrm{E}}+\left( \frac{1}{3K+1}\right)^{2}\mathbf{F}_{\mathrm{S}}, \tag{1}\]
where \(\mathbf{F}_{\mathrm{E}}\) is the Epstein drag contribution, given by
\[\mathbf{F}_{\mathrm{E}}=-\frac{4}{3}\left(1+\frac{9\pi}{128}M^{2}\right)^{1/ 2}s^{2}\rho_{\mathrm{g}}v_{\mathrm{th}}\mathbf{v}_{\mathrm{rel}}, \tag{2}\]
and \(\mathbf{F}_{\mathrm{S}}\) is the Stokes drag component, given by
\[\mathbf{F}_{\mathrm{S}}=-\frac{1}{2}C_{D}\pi s^{2}\rho_{\mathrm{g}}v_{ \mathrm{rel}}\mathbf{v}_{\mathrm{rel}}. \tag{3}\]
In the above equations, \(\rho_{\mathrm{g}}\) is the gas density, \(s\) is the radius of the particle, \(v_{\mathrm{th}}\) is the local thermal velocity of the gas and \(\mathbf{v}_{\mathrm{rel}}\) is the velocity of the dust particle relative to the gas. The quantity \(K\) is the Knudsen number and \(M\) is the Mach number (computed from the particle's relative velocity \(v_{\mathrm{rel}}\)). Quantity \(C_{D}\) is the drag coefficient for the Stokes regime.
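For illustration, the drag law of Equations (1)-(3) can be transcribed directly into a short routine. The Python sketch below is not part of the original code; the Knudsen number \(K\), Mach number \(M\) and the Stokes drag coefficient \(C_{D}\) (whose explicit form is not repeated here) must be supplied by the caller.

```python
import numpy as np

def drag_force(s, rho_g, v_th, v_rel, K, M, C_D):
    """Drag force on a single grain following Eqs. (1)-(3).
    s: grain radius, rho_g: gas density, v_th: gas thermal speed,
    v_rel: dust velocity relative to the gas (vector)."""
    v_rel = np.asarray(v_rel, dtype=float)
    v_mag = np.linalg.norm(v_rel)
    F_E = -(4.0 / 3.0) * np.sqrt(1.0 + (9.0 * np.pi / 128.0) * M**2) \
          * s**2 * rho_g * v_th * v_rel                      # Epstein contribution, Eq. (2)
    F_S = -0.5 * C_D * np.pi * s**2 * rho_g * v_mag * v_rel  # Stokes contribution, Eq. (3)
    w_E = (3.0 * K / (3.0 * K + 1.0))**2                     # Epstein weight, Eq. (1)
    w_S = (1.0 / (3.0 * K + 1.0))**2                         # Stokes weight, Eq. (1)
    return w_E * F_E + w_S * F_S
```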
In this paper, to test the relevance of diffusion and back-reaction on the formation of dust over-dense regions caused by the outward migration of two planets in resonance, we carry out simulations with the code FARGO3D (Benitez-Llambay et al., 2019). In this version of the code, the dust particles are treated as additional pressureless fluids where momentum is transferred between the gas and each of the dust species (but not among dust species). The dust fluid is affected by Epstein drag, which imparts a force _per unit volume_ to a dust species given by
\[\mathbf{F}_{d}=-\rho_{d}\frac{\Omega}{\tau_{s}}(\mathbf{v}_{d}-\mathbf{v}_{g}), \tag{4}\]
where \(\rho_{d}\) is the dust density, \(\tau_{s}\) is the Stokes number of the particle and \(\Omega\) is the Keplerian frequency of the gas. An equal and opposite force is imparted to the gas. Note that, in Equation (4), information regarding the drag coefficient is incorporated into \(\tau_{s}\).
A term is added to the continuity equation to model the diffusion of the dust species within the gas (Morfill & Voelk, 1984)
\[\frac{\partial\rho_{d}}{\partial t}=\nabla\cdot\left(D_{d}\rho\nabla\frac{ \rho_{d}}{\rho}\right), \tag{5}\]
where \(\rho=\rho_{g}+\rho_{d}\) and \(\rho_{d}\) is the density of individual dust species. Equation (5) is only applied to the pressureless fluids and it assumes the same diffusion coefficient for all dust species, which is taken equal to the value of the gas kinematic viscosity (Brauer et al., 2008)
\[D_{d}=\nu. \tag{6}\]
The effects of this choice are not tested.
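To make the two source terms concrete, the following one-dimensional sketch shows an explicit update for the drag coupling of Equation (4) (with the equal and opposite force applied to the gas) and for the gradient-diffusion term of Equation (5) with \(D_{d}=\nu\). This is an illustrative scheme on a Cartesian grid, not the actual FARGO3D implementation, which operates on the full polar mesh with its own time integrator.

```python
import numpy as np

def drag_step(v_d, v_g, rho_d, rho_g, Omega, tau_s, dt):
    """Explicit momentum exchange between one dust species and the gas, Eq. (4)."""
    a_d = -(Omega / tau_s) * (v_d - v_g)        # acceleration of the dust fluid
    v_d_new = v_d + dt * a_d
    v_g_new = v_g - dt * a_d * rho_d / rho_g    # equal and opposite momentum change on the gas
    return v_d_new, v_g_new

def diffusion_step(rho_d, rho_g, nu, x, dt):
    """Explicit update of d(rho_d)/dt = div(D rho grad(rho_d/rho)), Eqs. (5)-(6), with D = nu."""
    rho = rho_g + rho_d
    conc = rho_d / rho
    # diffusive flux -D * rho * d(conc)/dx evaluated at cell interfaces
    flux = -nu * 0.5 * (rho[1:] + rho[:-1]) * np.diff(conc) / np.diff(x)
    drho_dt = np.zeros_like(rho_d)
    drho_dt[1:-1] = -np.diff(flux) / (0.5 * (x[2:] - x[:-2]))
    return rho_d + dt * drho_dt
```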
The original version of the code applies constant Stokes numbers, \(\tau_{s}\), in Equation (4). The code was modified so that each pressureless fluid (dust species) is selected according to the particle size rather than to a fixed Stokes number; the Stokes number then varies as a function of the local properties of the gas (density, temperature and velocity).
## 3 Dust distribution near planets in the 3:2 resonance
We first investigate the case of a pair of planets that become captured in the 3:2 mean-motion resonance and migrate outward thereafter. The interior planet has the mass of Jupiter whereas the exterior planet has the mass of Saturn. A more massive inner planet is a condition conducive to outward migration. The planets orbit in a cold, local-isothermal disc with a fixed aspect ratio \(H/r=h=0.02\). (Calculations with a larger ratio, \(h=0.05\), are also presented.)
The disc extends from 0.4 to 12 au and is discretised over a grid of \(512\times 1024\) area elements (in the radius and azimuth, respectively). The initial surface density of the gas declines as
\[\Sigma(r)=\Sigma_{0}\left(\frac{r_{0}}{r}\right), \tag{7}\]
where \(\Sigma_{0}=200\,\mathrm{g}\,\mathrm{cm}^{-2}\) is the density at the reference radius \(r_{0}=1\,\mathrm{au}\).
Three different populations of icy grains (bulk density \(1\,\mathrm{g}\,\mathrm{cm}^{-3}\)) are included in the simulations, whose sizes are \(100\,\mu\)m, \(1\,\mathrm{mm}\) and \(1\,\mathrm{cm}\). For the applied disc conditions, these particles have Stokes numbers less than \(\approx 0.1\).
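As a rough consistency check (not part of the original calculations), one can estimate the Stokes numbers of the three grain populations with the standard Epstein-regime expression \(\tau_{s}\simeq(\pi/2)\,\rho_{\bullet}s/\Sigma_{\rm g}\), where \(\rho_{\bullet}\) is the bulk density of the grains:

```python
import numpy as np

rho_bulk = 1.0                         # g cm^-3, icy grains
sizes = np.array([1e-2, 1e-1, 1.0])    # cm: 100 micron, 1 mm, 1 cm
for r_au in (1.0, 5.0, 12.0):
    sigma_gas = 200.0 / r_au           # g cm^-2, Eq. (7) with Sigma_0 = 200 at r_0 = 1 au
    St = 0.5 * np.pi * rho_bulk * sizes / sigma_gas
    print(f"r = {r_au:4.1f} au  St = {St}")
# even at 12 au the cm-sized grains only reach St ~ 0.09, consistent with the quoted bound of ~0.1
```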
The initial dust-to-gas mass ratio for each of the three dust species is 0.0033, so that the overall dust-to-gas mass ratio adds up to 0.01, which is a typical value adopted for circumstellar discs and is based on values found in the interstellar medium. However, dust need not be primordial in origin, that is, part of the inventory of solids from which the planets formed. In fact, the dust populations may represent, or contain, second generation dust produced by collisions among left-over planetesimals, after the planets became massive enough (Turrini et al., 2019; D'Angelo & Marzari, 2022; Marta Bernabó et al., 2022). The equations describing the evolution of the system are solved in a non-inertial reference frame centered on the star, including the indirect terms arising from the planets' and disc's gravity.
Various values of the gas kinematic viscosity, \(\nu\), are considered because this parameter affects the tidal interactions between the planets and the gas, and also determines the amount of dust diffusion through Equation (6). Additionally, it can also affect the speed of the planets' outward migration. In these models, we adopt a constant value of kinematic viscosity. The impact of \(\nu\) on the efficiency of the outward migration is illustrated in Figure 1, for a given value of the initial gas density at the reference radius \(r_{0}\). In these models, the planets start to migrate at the beginning of the simulations, when the distributions of gas and dust are unperturbed (hence the initial rapid inward migration of the planets). As the outer planet approaches the inner planet and their orbits become caught in resonance, the tidal perturbations exerted by the outer planet on the circumstellar material alter the torque balance on the inner planet. Consequently, the inner planet first slows down and then migrates away from the star, pushing the outer planet outward through the resonance forcing.
The evolution of the semi-major axis of the outer planet, \(a_{2}\), is shown for three different values of the gas kinematic viscosity: \(\nu=10^{-6}\), \(5\times 10^{-6}\), and \(10^{-5}\), in units of \(r_{0}^{2}\Omega_{0}\) (\(\Omega_{0}\) is the Keplerian frequency at \(r_{0}\)). Note that a constant kinematic viscosity corresponds to a variable \(\alpha\) parameter, \(\alpha\propto\nu/(h^{2}\sqrt{r})\). With our choice of parameters, at \(r=r_{0}\), said values of \(\nu\) would correspond to \(\alpha_{0}=0.0025\), \(0.0125\), and \(0.025\), respectively. However, in the disc regions where the planets orbit, \(\alpha\) would be smaller by a factor of 2 or more.
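The quoted \(\alpha_{0}\) values follow directly from \(\nu=\alpha H^{2}\Omega\) with \(H=hr\) and \(\Omega=r^{-3/2}\) in code units (\(r_{0}=\Omega_{0}=1\)); the short check below (ours, for illustration only) reproduces them:

```python
h = 0.02
for nu in (1e-6, 5e-6, 1e-5):
    alpha_r0 = nu / h**2                  # alpha at r = r_0: 0.0025, 0.0125, 0.025
    alpha_6au = nu / (h**2 * 6.0**0.5)    # ~2.4 times smaller near r ~ 6 au, where the planets end up
    print(nu, alpha_r0, alpha_6au)
```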
After a different behaviour at the beginning of the evolution, prior to or shortly after the capture into the 3:2 orbital resonance (see Figure 1), the planets undergo sustained outward migration locked in mean-motion resonance. The migration speed of the pair is related to \(\nu\) and determined by the shape of the common (or overlapping) gaseous gap of the two planets. A similar outcome is obtained for the cases involving the 2:1 resonance, as shown in the next section. Notice that the outer planet is subject to a negative torque exerted by the disc material exterior to its orbit, hence it would tend to move inward, whereas the resonance forcing pushes it outward. These two opposing torques allow the resonance to be maintained during the outward migration of the pair.
In the calculations presented herein, all disc material exerts torques on the planets, including material within the planets' Hill spheres. Since the numerical resolution is limited and density variations close to the planets may not be properly resolved, some spurious effects may arise that affect outward migration. To quantify possible differences, one model was also performed by removing the torques exerted from within the planets' Hill spheres. The orbital evolution is comparable to that of the calculation with default setup, although at a somewhat reduced outward migration speed. The amount of kinematic viscosity can also alter the local distribution of material around the planets and unresolved density gradients can possibly impact the resulting migration velocity. Nonetheless, it must be pointed out that for the purposes of this study the details of the outward migration process are not important, hence tests on the response of the system to numerical parameters are unnecessary. The only requirement is that the planet pair becomes locked in resonance and moves away from the star for a prolonged amount of time.
When the outer planet reaches 6.5 au from the star, we compare the dust distributions in the three cases with different viscosity. This comparison is shown in Figure 2. The dust density profiles show a peak at the outer border of the gas gap carved by the planets' tidal perturbations. This peak is more marked in the distributions of the largest grains, which are less coupled to the gas (i.e., they have a larger Stokes number and therefore a longer coupling timescale). Dust-to-gas mass ratios at the peak location range from 0.04 to 0.08, increasing as \(\nu\) decreases. In the region inside the inner edge of the gap there is a significant depletion of dust due to drifting motion towards the star. Re-supply of dust to this region is reduced, or halted, by the dust trap at the outer edge of the gas gap. After some time, the disc would appear as a transition disc with an inner cavity in the dust density, which expands outward over time due to the outward migration of the planets. These effects appear more evident at lower gas viscosity, which may be due to the lower level of diffusion but it may also be related to the different migration velocity of the planet pair (see Figure 1), as discussed below. Beyond \(\approx 10\) au the disc is depleted of mm- and cm-grains (but not of the smallest grains), an effect associated with the inward drift of the particles via gas drag (which does not affect the smallest grains as much). This is a boundary effect related to the fact that particles are not flowing inward from greater distances (i.e., there is no re-supply of solids at the outer boundary). Test simulations not reported here confirm this issue.
The accumulation efficiency of particles at the outer edge of the gas gap depends on the stopping time of the particles (\(\tau_{s}/\Omega\)) and the timescale over which the radial pressure gradient of the gas moves (in our case, due to planet migration). For a given stopping time, the shorter the outward migration timescale, the less time dust grains have to accumulate. Stated differently, for a similar orbital
Figure 1: The top panel shows the semi-major axis of pairs of planets during their migration, for three values of a constant kinematic viscosity, \(\nu\), of the gas (in units of \(r_{0}^{2}\Omega_{0}\), see text). In these cases, the planet pair is locked in the 3:2 mean-motion resonance. The bottom panel illustrates the evolution of the orbital eccentricity. The inner, more massive planet has a lower eccentricity in all cases. Data are averaged over a 250 yr window; see text for further details.
configuration of the planets, i.e., similar orbital frequencies \(\Omega\), a more rapid outward migration can facilitate the filtering process of dust toward the star. The overall outcome is a less depleted inner disc in the cases with more vigorous outward migration.
For the same reason, the different migration speed also controls the build up of the dust at the outer edge of the gas gap. As a consequence, the reduction in the peak density at the outer edge of the gap observed at higher viscosity values, shown in Figure 2, is likely due to the combination of the two effects: a higher diffusion rate and a decrease in the trapping efficiency of dust grains caused by the faster outward migration rate of the planets.
To better characterize the role of diffusion and back-reaction in the formation, size and shape of the outer peak in dust density, we ran a simulation in which both processes were neglected, adopting a kinematic viscosity \(\nu=10^{-5}\ r_{0}^{2}\Omega_{0}\) (see Figure 3). In this model, the migration speed is broadly consistent with that of the model illustrated in the bottom panel of Figure 2, but the peak in the dust density distributions of the largest particles tends to be higher and sharper than the corresponding one in the bottom panel of Figure 2. The implication is that diffusion is indeed affecting the concentration of the dust by spreading large grains over a broader radial region. Nonetheless, the reduced efficiency in collecting dust caused by diffusion is not strong enough to prevent the formation of a prominent density peak outside the planets' orbits, which may be detected by high resolution observations.
In Figure 4 we plot the surface density distribution of the gas and of dust grains of different sizes for the case with lowest viscosity, \(\nu=10^{-6}\ r_{0}^{2}\Omega_{0}\). The gas gap is significantly narrower than that of the dust and the width of the latter is larger for larger grain size. For 1 cm size dust grains, the density distribution is confined in an over-dense ring at the outer edge of the gas gap. Therefore, according to these results, dust diffusion and back-reaction are not able to prevent the formation of narrow rings in the dust distribution at the outer edge of the gas gap. In our simulations there is no re-supply of dust at the outer boundary of the simulated disc, but it is expected that a continuous distribution of solids beyond the grid boundary would supply dust to the inner disc regions. In this case, at the outer edge of the gap, we would observe an enhanced density, as predicted by our simulations. Beyond this density peak, however, there would be a continuous distribution of dust originating from more distant regions. Over timescales much longer than those simulated here, dust drifting
Figure 3: Dust and gas profiles (as in Figure 2) with \(\nu=10^{-5}\ r_{0}^{2}\Omega_{0}\), but without the inclusion of diffusion and back-reaction.
Figure 2: Surface density (averaged in azimuth around the star) of dust particles of different sizes, ranging from 100 \(\mu\)m to 1 cm. These profiles are compared to the re-scaled gas density (i.e., multiplied by 0.0033; the total dust-to-gas mass ratio is 0.01). The top panel refers to a kinematic viscosity equal to \(\nu=10^{-6}\), the middle panel to \(\nu=5\times 10^{-6}\) and the bottom panel to \(\nu=10^{-5}\), in units of \(r_{0}^{2}\Omega_{0}\).
from larger distances may also affect the density peaks at the outer edge of the gas gap. In fact, an additional simulation with a wider radial boundary (not reported here) does show enhanced peaks in large grains, due to solids drifting from farther distances. Also the population of small grains would increase at the peaks over longer times, but at a slower rate, dictated by the drift velocity. Nonetheless, the depletion of dust within the inner edge of the gas gap would not be affected by this process because the outer dust trap appears to be efficient enough to halt (or severely impede) refilling of grains. In fact, if refilling of the inner disc was sustained, it would occur within the time of our simulations but it is not observed.
Continued supply of dust from farther out in the disc may, at some point, raise the density in the peak regions beyond some threshold value to make the peaks unstable. For example, the dust-to-gas mass ratio may become large enough to induce a back-reaction response on the gas that redistributes the particles over some radial region via collisions and/or enhanced dust coagulation may ensue (coagulation into larger particles would reduce the back-reaction force exerted on the gas, because of the lower surface-to-mass ratio). The back-reaction response may also smooth out the gas density gradient in the radial direction, altering the radial pressure gradient and reducing its ability to retain grains. Such processes are not considered in this study.
To test the impact of a larger aspect ratio (i.e., warmer disc) on the formation of the inner dust cavity and of the peak external to the gas gap, we performed two additional simulations adopting \(h=0.05\). In the first model we used a higher gas surface density (\(\Sigma_{0}=800\) g cm\({}^{-2}\)), in order to increase the speed of planet migration, while in the second we used the same density as in the previous cases (\(\Sigma_{0}=200\) g cm\({}^{-2}\)). In the latter simulation, since the rate of outward migration is smaller, the planets are located closer to the star in the plot. In the top panel of Figure 5, the density profiles are shown for the different grain sizes in the high density case. The bottom panel illustrates those profiles for the low density case. In both simulations the density patterns are similar to those in Figure 2, suggesting that a higher aspect ratio does not impair the ability of two planets in resonance to carve an inner gap in the distribution of the larger grains and to trap dust at the gap's outer edge (dust-to-gas mass ratios around the peak region are 0.02-0.03). Thus, dust filtering through the outer edge of the gas gap is not increased by the different morphology of the dust trap in these warmer discs.
## 4 Dust distribution near planets in the 2:1 resonance
To test the evolution of the dust when the planets are captured in a 2:1 mean-motion resonance, we decreased the gas density to \(\Sigma_{0}=40\) g/cm\({}^{2}\) in order to induce orbital locking in this resonance. It is known that capture in this mean-motion resonance is a more delicate process than capture in the 3:2 resonance. If the relative migration velocity with which the pair of planets approach each other is above a certain threshold, the resonance forcing is overcome and convergent migration continues. Additionally, once the 2:1 orbital resonance is established, migration of the two planets typically proceeds inward because of the way the two gas gaps overlap. In this configuration, outward migration may be obtained by choosing an appropriately low kinematic viscosity so that a wide, common gaseous gap forms around the orbits of the two planets.
As in the models of the previous section, the planets start to migrate in unperturbed distributions of gas and therefore the initial inward migration of the planets is artificially rapid. This choice
Figure 4: Surface density maps illustrating gas and dust distributions around two planets locked in the 3:2 resonance (\(\nu=10^{-6}\)\(r_{\rm c}^{2}\Omega_{0}\)). The top panel shows the gas density, whereas the second, third and fourth panels represent 100 \(\mu\)m, 1 mm and 1 cm particles, respectively.
affects the radius at which the planets are trapped in resonance but is otherwise not much relevant for the purposes of this study. As for the models discussed in Section 3, the model with a larger value of the kinematic viscosity results in a more vigorous outward migration of the planets once they become trapped in resonance. This is illustrated in Figure 6 for cases with \(\nu=10^{-6}\) and \(\nu=10^{-7}\)\(r_{0}^{2}\Omega_{0}\) (\(\alpha_{0}=2.5\times 10^{-3}\) and \(2.5\times 10^{-4}\), respectively). Note, however, that the orbits also become significantly eccentric, which also affects outward migration (D'Angelo et al., 2006). We also tested a larger value, \(\nu=5\times 10^{-6}\)\(r_{0}^{2}\Omega_{0}\), but the outer planet crosses the 2:1 resonance with the inner planet. The pair is temporarily trapped in the 5:3 resonance at which point it begins migrating outwards. That resonance is then broken and the pair becomes finally captured in the 3:2 resonance, continuing to migrate outward. For even higher values of the kinematic viscosity, there is no trapping in the 2:1 resonance, which might be attained by further reducing \(\Sigma_{0}\). However, this possibility was not tested since it could lead to initial dust-to-gas mass ratios too dissimilar from the other simulations presented herein.
The dust distributions are shown in Figure 7 for the two different values of kinematic viscosity and, in the bottom panel, for a model without the inclusion of diffusion and back-reaction. Because of the significantly different migration speed, we only compare the dust density distributions at the end of the simulation (when the planets are located at different orbital radii). For this resonance, too, a significant dust enhancement develops at the outer edge of the gas gap, where dust-to-gas mass ratios can achieve values of order unity. The inner cavity is larger at all grain sizes, compared to that produced by the 3:2 resonant configuration. This is possibly related to the slower outward migration rate of the planets in the 2:1 resonance, which can reduce filtering of the dust through the orbits of the planets and toward the star. Additionally, the lower gas density increases the Stokes numbers of the grains, reducing their drift timescale and facilitating grain removal from the inner disc.
Comparing the top and bottom panels of Figure 7, one can notice significant differences in the dusty features exterior to the planets' orbits. Since the gas density is also different in the two models (at those times), it is unclear how much of the difference is caused by the action of diffusion and back-reaction of the solids.
The orbital eccentricity of the planets can drive asymmetries in the distribution of the disc material. Consequently, both the gas gap outer edge and dusty rings can become asymmetric (see Figure 8). We did not perform a detailed analysis of ring asymmetries. However, one possible explanation for the more asymmetric features arising from the planets in the 2:1 resonance (compared to those in the 3:2 resonance, see Figure 4), may be the larger eccentric perturbation
Figure 5: Dust and gas density profiles (as in Figure 2) in a disc with \(\nu=10^{-5}\)\(r_{0}^{2}\Omega_{0}\) and aspect ratio \(H/r=h=0.05\). In the top panel \(\Sigma_{0}=800\,\mathrm{g}\,\mathrm{cm}^{-2}\) whereas \(\Sigma_{0}=200\,\mathrm{g}\,\mathrm{cm}^{-2}\) in the bottom panel.
Figure 6: The top panel shows the migration of the exterior planet of a pair, for two values of a constant kinematic viscosity of the gas, \(\nu\), as indicated in the legend. The planet pair is locked in the 2:1 mean-motion resonance. Only the semi-major axis of the exterior planet is plotted because of the larger mutual distance compared, for example, to the 3:2 case. The orbital eccentricity of both interior and exterior planets is illustrated in the bottom panel. Data are averaged over a 250 yr window as in the case of the 3:2 resonance (Figure 1).
driven by the inner (more massive) planet, which tends to have a larger orbital eccentricity in the 2:1 resonance than in the 3:2 resonance (the outer planets can have comparable eccentricities).
In Figure 8, the dust density distributions are shown for grains of various sizes for the case with lowest viscosity, \(\nu=10^{-7}\)\(r_{0}^{2}\Omega_{0}\). As mentioned, the width of the gap is significantly larger compared to the case of the 3:2 resonance and the dusty ring at the outer edge of the gas gap is evident for all particles, more than in the 3:2 resonance. For the 2:1 resonance, the effects of the dust trap appear more marked, both at the inner and at the outer edge of the gas gap. It is expected that after sufficient time from the beginning of the outward migration, the disc reduces to a single overdense ring at the outer edge of the gap, close to the outer planet orbit. As discussed in the previous section, this feature would result from the lack of dust supply from larger orbital radii, beyond the outer boundary of the grid.
## 5 Comparison with Alpha Viscosity
To test the robustness of our results, we performed two additional simulations, for the 2:1 resonance, in which the kinematic viscosity is set as \(\nu=\alpha H^{2}\Omega\), where \(H\) is the local pressure scale height of the disc and the parameter \(\alpha\) is assumed to be a constant. Since we work with a local-isothermal disc and a constant aspect ratio, \(H\propto r\) and \(\nu\propto\alpha\sqrt{r}\). In the first model, we set \(\alpha=10^{-3}\) whereas we set \(\alpha=10^{-4}\) in the second. These parameters result in \(\nu\approx 10^{-6}\) and \(\approx 10^{-7}\)\(r_{0}^{2}\Omega_{0}\) around the middle radius of the computational domain.
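The correspondence between the two parametrizations can be checked with a few lines, assuming the aspect ratio \(h=0.02\) used in the previous sections (again, an illustrative back-of-the-envelope computation rather than part of the simulations):

```python
import numpy as np

h = 0.02
r = np.array([0.4, 1.0, 3.0, 6.0, 12.0])      # au
for alpha in (1e-3, 1e-4):
    nu_of_r = alpha * h**2 * np.sqrt(r)       # nu = alpha * H^2 * Omega in units of r_0^2 * Omega_0
    print(alpha, nu_of_r)
# near the middle of the domain (r ~ 6 au) this gives nu ~ 1e-6 and ~1e-7, as quoted above
```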
The outcomes of these simulations are illustrated in Figure 9, after \(4\times 10^{4}\) yrs of evolution. The density peak at the outer edge of the gas gap is clearly visible in both cases, although there are differences in over-density morphology. In the top panel, at \(\alpha=10^{-3}\), the peak appears similar at all sizes while, in the bottom panel (\(\alpha=10^{-4}\)), the peak is split in two for the largest particles (\(s=1\) mm and \(s=1\) cm) and composed of three separate maxima for \(s=100\,\mu\)m. This behaviour was not observed for the 3:2 resonance but it was already present in the simulation involving the 2:1 mean-motion resonance obtained with a constant kinematic viscosity (see Figure 7, top panel). A possible interpretation is that for the 2:1 resonance multiple dust traps develop at the outer border of the gap. By comparing the gas density distribution in the two cases shown in Figure 9, this hypothesis appears to be confirmed since, in the case with \(\alpha=10^{-4}\), \(\Sigma\) at the outer border of the gas gap appears more variable compared to that of the case with \(\alpha=10^{-3}\).
The splitting of the peak into various maxima can also be observed in the simulations of Marzari et al. (2019), for the case involving a pair of planets locked in the 2:1 resonance, even if their physical model (radiative disc), the initial parameters for the gas density and the viscosity values are different.
## 6 Conclusions
According to the models presented by Marzari et al. (2019), two planets in the mass range of Jupiter and Saturn, embedded in a circumstellar disc and migrating outwards, can strongly affect the dust distribution in their surroundings. They can create a transition disc with an inner cavity in the dust distribution that expands following the outward migration of the planets. They can also build a strong peak in the dust density at the outer edge of the gas gap carved by the resonant planets, which would appear as a bright dust ring. However, these findings may be altered by the dust diffusion due
Figure 7: Density profiles of dust particles of different sizes ranging from 100 \(\mu\)m to 1 cm with the planets in a 2:1 resonance. These profiles are compared to the re-scaled gas density (multiplied by 0.0033). The top panel is for a kinematic viscosity equal to \(\nu=10^{-6}\)\(r_{0}^{2}\Omega_{0}\) and the middle panel for \(\nu=10^{-7}\)\(r_{0}^{2}\Omega_{0}\). The bottom panel refers to a model with \(\nu=10^{-6}\)\(r_{0}^{2}\Omega_{0}\), but without diffusion and back-reaction.
to gas turbulence and by a change in gap morphology due to the back-reaction of the dust on the gas. It is, therefore, important to test if these two processes weaken the dust trap at the outer edge of the gap, allowing the dust to filter through the gap and preventing the formation of an inner dust cavity and of the outer over-density ring.
We used the code FARGO3D (Benitez-Llambay et al., 2019), in which dust species are treated as additional pressureless fluids, and performed a set of local-isothermal, high resolution simulations involving giant planets locked in the 3:2 and 2:1 orbital resonances and including the effects of diffusion and dust back-reaction. For the 3:2 resonance, the diffusion and back-reaction slightly affect the dust distribution by reducing the height of the overdense region at the outer edge of the gas gap but without removing it. This has been tested for different values of the disc viscosity, which determines the amount of diffusion. This outcome proves that the dust trap is still strong enough to halt the inward drift of the dust and to lead to the formation of an inner cavity in the dust distribution and an overdense ring at the outer edge of the gap. The details of peak heights can also be affected by continued supply of solids from large radial distances and, therefore, depend on boundary effects and evolution timescales. Given the limited radial extent of the models presented herein, the possible feedback of very dense rings of solids on the gas distribution was not investigated.
Figure 8: Density maps illustrating gas and dust distributions around planets locked in the 2:1 orbital resonance (\(\nu=10^{-7}\)\(r_{0}^{2}\Omega_{0}\)). The top panel refers to the surface density of the gas. The other panels refer, respectively, to the distributions of 100 \(\mu\)m, 1 mm and 1 cm dust particles.
Figure 9: Dust and gas density profiles (as in Figure 7) around a pair of planets locked in a 2:1 orbital resonance, after \(4\times 10^{4}\) yrs, in an \(\alpha\)-viscosity disc. In the top panel \(\alpha=10^{-3}\) and in the bottom panel \(\alpha=10^{-4}\).
The 2:1 resonance also provides a robust mechanism capable of creating an efficient dust trap. The height of the outer peak (outside the planets' orbits) is not much affected by diffusion. Its morphology, however, appears more complex than it does in the 3:2 resonance situation, since in some cases two dust peaks form at the outer edge of the gas gap, possibly due to the development of multiple dust traps. The robustness of these results for this resonance was tested by performing two additional simulations in which we switched from a constant kinematic viscosity, \(\nu\), to a constant \(\alpha\) viscosity parameter. In terms of large-scale features, the outcomes of these simulations do not significantly differ from those with constant \(\nu\), showing that the formation of the inner dust cavity and the outer peak(s) are not due to the viscosity parametrization (although details can depend on the type of viscosity).
In the models presented herein, the gas distribution interior to the planets' orbits is not significantly depleted. However, ongoing accretion on the planets, neglected here, is expected to reduce the gas mass flux toward the inner disc (Lubow & D'Angelo, 2006), possibly leading to the formation of an inner cavity in the gas distribution as well.
## Acknowledgements
We thank the reviewer, Clement Baruteau, whose comments helped us improve this paper. GD acknowledges support provided by NASA's Research Opportunities in Space and Earth Science. Computational resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center.
## Data Availability
The data underlying the research results described in the article will be shared upon reasonable request to the authors.
|
2310.20702 | A simple range characterization for spherical mean transform in odd
dimensions and its applications | This article provides a novel and simple range description for the spherical
mean transform of functions supported in the unit ball of an odd dimensional
Euclidean space. The new description comprises a set of symmetry relations
between the values of certain differential operators acting on the coefficients
of the spherical harmonics expansion of the function in the range of the
transform. As a central part of the proof of our main result, we derive a
remarkable cross product identity for the spherical Bessel functions of the
first and second kind, which may be of independent interest in the theory of
special functions. Finally, as one application of the range characterization,
we construct an explicit counterexample proving that unique continuation type
results cannot hold for the spherical mean transform in odd dimensional spaces. | Divyansh Agrawal, Gaik Ambartsoumian, Venkateswaran P. Krishnan, Nisha Singhal | 2023-10-31T17:58:54Z | http://arxiv.org/abs/2310.20702v3 | # A simple range characterization for spherical mean transform in odd dimensions and its applications
###### Abstract.
This article provides a novel and simple range description for the spherical mean transform of functions supported in the unit ball of an odd dimensional Euclidean space. The new description comprises a set of symmetry relations between the values of certain differential operators acting on the coefficients of the spherical harmonics expansion of the function in the range of the transform. As one application of this range characterization, we construct an explicit counterexample proving that unique continuation type results cannot hold for the spherical mean transform in odd dimensional spaces. Finally, as an auxiliary result of one of our proofs, we derive a remarkable cross product identity for the spherical Bessel functions of the first and second kind, which may be of independent interest in the theory of special functions.
Key words and phrases: Spherical mean transform; Range characterization; unique continuation; Bessel functions.

2020 Mathematics Subject Classification: 44A12, 45Q05, 44A15, 44A20, 33C10.

\({}^{1}\)After one of the authors presented a preliminary version of this work at Isaac Newton Institute for Mathematical Sciences (Cambridge, UK) in May 2023, Peter Kuchment and Leonid Kunyansky, who were in the audience, took interest in this work and have recently informed us that they have derived a range characterization for the same SMT that we study, using only information from radii \(0\leq r\leq 1\) in all odd and even dimensions.
## 1. Introduction
In this paper, we study the range of the spherical mean transform of functions supported in the unit ball of an odd-dimensional Euclidean space. Writing \(x=r\theta\) with \(r=|x|\) and \(\theta\in\mathbb{S}^{n-1}\), we expand a function \(f\in C_{c}^{\infty}(\mathbb{B})\) into spherical harmonics:

\[f(r\theta)=\sum_{m=0}^{\infty}\sum_{l=1}^{d_{m}}f_{m,l}(r)Y_{m,l}(\theta),\]
where
\[f_{m,l}(r)=\int\limits_{\mathbb{S}^{n-1}}f(r\theta)\overline{Y}_{m,l}(\theta) \mathrm{d}\theta,\]
and
\[d_{m}=\frac{(2m+n-2)(n+m-3)!}{m!(n-2)!},\quad d_{0}=1.\]
Since \(f\in C_{c}^{\infty}(\mathbb{B})\), we have that \(f_{m,l}\in C^{\infty}([0,1))\) with support strictly away from \(1\).
Likewise, we expand \(g=\mathcal{R}f\) into spherical harmonics:
\[g(\theta,t)=\sum_{m=0}^{\infty}\sum_{l=1}^{d_{m}}g_{m,l}(t)Y_{m,l}(\theta),\]
with \(g_{m,l}\in C_{c}^{\infty}((0,2))\).
**Theorem 1.4** (Range characterization - general case).: _Let \(\mathbb{B}\) denote the unit ball in \(\mathbb{R}^{n}\) for an odd \(n\geq 3\), and \(k:=(n-3)/2\). A function \(g\in C_{c}^{\infty}(\mathbb{S}^{n-1}\times(0,2))\) is representable as \(g=\mathcal{R}f\) for \(f\in C_{c}^{\infty}(\mathbb{B})\) if and only if for each \((m,l),m\geq 0,0\leq l\leq d_{m}\), \(h_{m,l}(t)=t^{n-2}g_{m,l}(t)\) satisfies the following two conditions:_
* _there is a function_ \(\phi_{m,l}\in C_{c}^{\infty}((0,2))\) _such that_ \[h_{m,l}(t)=D^{m}\phi_{m,l}(t),\] (1.3)
* _the function_ \(\phi_{m,l}(t)\) _satisfies_ \[[\mathcal{L}_{m+k}\phi_{m,l}](1-t)=[\mathcal{L}_{m+k}\phi_{m,l}](1+t).\] (1.4)
**Remark 1.5**.: One interesting consequence of Theorem 1.4 is that the range of SMT in a fixed odd dimension is characterized by the range of radial functions in higher odd dimensions. More specifically, let \(\mathcal{R}_{n}\) denote the SMT in \(n\)-dimensions. Then, combining the above two results, we deduce
\[\left\{t^{n-2}\mathcal{R}_{n}f(p,t):f(x)\coloneqq f_{m,l}(|x|)Y_{m,l}\left(\frac{x}{|x|}\right)\text{for some }f_{m,l}\in C_{c}^{\infty}([0,1))\right\}\] \[=\Bigg{\{}\left(\frac{1}{t}\frac{\mathrm{d}}{\mathrm{d}t}\right) ^{m}\left(t^{2m+n-2}\mathcal{R}_{2m+n}\phi(t)\right):\phi\in C_{c}^{\infty}( \mathbb{B}^{2m+n})\text{ is radial}\Bigg{\}}Y_{m,l}(p).\]
**Remark 1.6**.: A condition about oddness of a differential operator applied to a function appears in [25] (see Proposition 8 in [25]), where the authors use this result to characterize the range of the solution map of the wave equation (see the proof of Theorem 3 there). We have verified that the characterization in Theorem 1.1 is equivalent to [25, Proposition 8] for \(n=3,5\), and we believe, with some effort, one can verify that these are equivalent in general odd dimensions as well. In fact, Theorem 1.1 can also be proved using [25, Theorem 3] (see Section 4).
**Remark 1.7**.: It is well known that in the spherical geometry of data acquisition (i.e. when the centers \(p\) of the integration spheres are restricted to the boundary of the unit ball containing the support of the function \(f\)), one can uniquely recover \(f\) from \(\mathcal{R}f(p,t)\) using only half of the radial data, i.e. when \(p\in\mathbb{S}^{n-1}\) and \(t\in(0,1)\) or \(t\in(1,2)\) (e.g. see [10, 11, 12]). In other words, the knowledge of \(\mathcal{R}f(p,t)\) for \(t\in(0,1)\) completely determines \(\mathcal{R}f(p,t)\) for \(t\in(1,2)\), and vice versa. Therefore, the existence of relations between the two halves of the data set is not surprising. The remarkable feature of relations (1.1) and (1.4) is their simplicity.
Our next two results provide counterexamples to UCP for SMT in odd dimensions (see Section 2.3 for the precise definition).
**Theorem 1.8** (Counterexample to UCP for SMT in odd dimensions - symmetric case).: _Let \(n\geq 3\) be odd, \(\epsilon\in(0,1)\) and let \(U=B_{\epsilon}(0)\coloneqq\{x\in\mathbb{R}^{n}:|x|<\epsilon\}\). There exists a non-trivial function \(f\in C_{c}^{\infty}(\mathbb{B})\) such that \(f\) vanishes in \(U\) and \(\mathcal{R}f(p,t)=0\) for all \(p\in\mathbb{S}^{n-1}\) and \(t\in(1-\epsilon,1+\epsilon)\)._
Note that the set \(U\) here is taken to be a ball around the origin. One might wonder whether this is a special case due to radial symmetry of the functions. However, this is not the case, and to disprove the unique continuation in full generality, we also have
**Corollary 1.9** (Counterexample to UCP for SMT in odd dimensions - general case).: _Let \(n\geq 3\) be an odd integer and \(U\subset\mathbb{B}\) be an arbitrary open set. There exists a non-trivial function \(f\in C_{c}^{\infty}(\mathbb{B})\) such that \(f|_{U}=0\) and \(\mathcal{R}f\) vanishes on all spheres passing through \(U\)._
This will be proved by using the symmetric case, see Theorem 1.8.
We finish this section with the statement and a short discussion of a corollary of Theorem 3.2 formulated and proved in Section 3.1.
**Corollary 1.10**.: _Let \(h(t)\) and \(k\) be as defined in Theorem 1.1. Then, for any \(\lambda>0\):_
\[\left(\int\limits_{0}^{\infty}h(t)\,j_{k+\frac{1}{2}}(\lambda t)\,t\,\mathrm{d }t\right)y_{k+\frac{1}{2}}(\lambda)=\left(\int\limits_{0}^{\infty}h(t)\,y_{k+ \frac{1}{2}}(\lambda t)\,t\,\mathrm{d}t\right)j_{k+\frac{1}{2}}(\lambda), \tag{1.5}\]
_where \(j_{\alpha}\) and \(y_{\alpha}\) are the normalized (or spherical) Bessel functions of the first and second kind, respectively (see Section 2.1)._
Formula (1.5) is remarkable for two reasons. First, it provides an infinite family (corresponding to different choices of \(h\)) of "cross product" identities for the spherical Bessel functions of the first and second kind, analogs of which we did not find in literature. Therefore, it may be valuable as a standalone result in the context of theory of special functions. Second, it illuminates the structure of the zeros of the Hankel transform of a function in the range of the SMT, which play an important role in the description of the range of that transform (see [3, 5, 14, 25].)
## 2. Notation and Preliminaries
Let \(n\geq 3\) be an odd integer of the form \(n=2k+3,k\geq 0\) and \(\mathbb{R}^{n}\) denote the \(n\)-dimensional Euclidean space. Let \(\mathbb{B}\) denote the unit ball in \(\mathbb{R}^{n}\) with its boundary denoted as \(\mathbb{S}^{n-1}\).
### Bessel functions and Hankel transform
For \(\alpha\in\mathbb{C}\) such that \(\mathrm{Re}(\alpha)\geq 0\), the Bessel function of the first kind of order \(\alpha\) are defined as (see for instance [48])
\[J_{\alpha}(x)=\left(\frac{x}{2}\right)^{\alpha}\sum_{i=0}^{\infty}\frac{(-1)^ {i}(\frac{x}{2})^{2i}}{i!\Gamma(i+\alpha+1)},\quad\text{for}\quad x\in(0, \infty).\]
Bessel functions of order \(\alpha\) are solutions of the second order differential equation
\[\frac{\mathrm{d}^{2}y}{\mathrm{d}x^{2}}+\frac{1}{x}\frac{\mathrm{d}y}{ \mathrm{d}x}+\left(1-\frac{\alpha^{2}}{x^{2}}\right)y=0,\]
called the Bessel differential equation.
Let us also define the normalized (or spherical) Bessel functions of the first kind. For \(\alpha\in\mathbb{R}\) such that \(\alpha>-1/2\), these are given as
\[j_{\alpha}(x) =\Gamma(\alpha+1)\left(\frac{2}{x}\right)^{\alpha}J_{\alpha}(x)\] \[=\Gamma(\alpha+1)\sum_{i=0}^{\infty}\frac{(-1)^{i}(\frac{x}{2})^{ 2i}}{i!\Gamma(i+\alpha+1)}.\]
We are mostly interested in the case when \(\alpha\) is half of an odd integer. In this case, \(j_{\alpha}\) is also given by Rayleigh's formula [1]
\[j_{\alpha}(x)=-\frac{(-2)^{\alpha+1/2}\Gamma(\alpha+1)}{\sqrt{\pi}}\left( \frac{1}{x}\frac{\mathrm{d}}{\mathrm{d}x}\right)^{\alpha-1/2}\left(\frac{\sin x }{x}\right),\quad\text{when}\quad 2\alpha\in\{1,3,\dots\}. \tag{2.1}\]
We will also need the normalized Bessel function of the second kind of half integer order, which is defined as
\[y_{\alpha}(x)=-\frac{(-2)^{\alpha+1/2}\Gamma(\alpha+1)}{\sqrt{\pi}}\left( \frac{1}{x}\frac{\mathrm{d}}{\mathrm{d}x}\right)^{\alpha-1/2}\left(\frac{\cos x }{x}\right),\quad\text{when}\quad 2\alpha\in\{1,3,\dots\}. \tag{2.2}\]
**Remark 2.1**.: We caution the reader that the normalization of the Bessel functions is not standard. Our normalization differs from the one in [1]. Rayleigh's formula stated above has been modified accordingly. For a comprehensive study of Bessel functions, we refer the reader to the classical treatise of Watson [49].
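For half-integer orders, this normalization can be checked directly against Rayleigh's formula (2.1). The following short Python sketch (a convenience check only, assuming standard `scipy` and `sympy` installations) compares the two expressions numerically.

```python
# Sanity check: the normalized Bessel function j_alpha, computed from the
# definition via J_alpha, agrees with Rayleigh's formula (2.1) for
# alpha = k + 1/2, k = 0, 1, 2, 3.
import sympy as sp
from scipy.special import jv, gamma

def j_from_definition(alpha, x):
    # j_alpha(x) = Gamma(alpha + 1) (2/x)^alpha J_alpha(x)
    return gamma(alpha + 1) * (2.0 / x) ** alpha * jv(alpha, x)

def j_from_rayleigh(k, x):
    # apply D = (1/t) d/dt  k times to sin(t)/t, then multiply by the prefactor in (2.1)
    t = sp.Symbol('t', positive=True)
    expr = sp.sin(t) / t
    for _ in range(k):
        expr = sp.diff(expr, t) / t
    prefactor = -(-2) ** (k + 1) * sp.gamma(k + sp.Rational(3, 2)) / sp.sqrt(sp.pi)
    return float((prefactor * expr).subs(t, x))

for k in range(4):
    for x in (0.7, 1.3, 2.9):
        assert abs(j_from_definition(k + 0.5, x) - j_from_rayleigh(k, x)) < 1e-10
print("Rayleigh's formula (2.1) matches the normalization of Section 2.1.")
```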
The Hankel (also called Fourier-Bessel or Fourier-Hankel) transform of order \(\alpha\) is defined as
\[\mathcal{F}_{\alpha}(g)(\lambda)=\int\limits_{0}^{\infty}g(t)j_{\alpha}(\lambda t )t^{2\alpha+1}\,\mathrm{d}t.\]
Its inverse is given by
\[g(t)=\frac{1}{2^{2\alpha}\Gamma^{2}(\alpha+1)}\int\limits_{0}^{\infty} \mathcal{F}_{\alpha}(g)(\lambda)j_{\alpha}(t\lambda)\lambda^{2\alpha+1}\, \mathrm{d}\lambda.\]
### Spherical mean transform
The SMT of a continuous function in \(\mathbb{R}^{n}\) records the averages of the function over spheres with centers varying over \(\mathbb{R}^{n}\) and radii varying over \((0,\infty)\). A formal dimension count shows that the SMT depends on \(n+1\) variables, while the function itself depends on only \(n\) variables. This, and certain applications in tomography, motivate restricting the centers of the spheres to an \((n-1)\)-dimensional hypersurface, which makes the problem interesting as well as challenging.
We will consider the case when the function is supported in \(\mathbb{B}\) and the centers are fixed on \(\mathbb{S}^{n-1}\). This can be easily generalized to balls and spheres of any radius by a simple dilation. For \(f\in C_{c}^{\infty}(\mathbb{B})\), the _spherical mean transform_ is defined as
\[\mathcal{R}f(p,t)=\frac{1}{\omega_{n}}\int\limits_{\mathbb{S}^{n-1}}f(p+t \theta)\,\mathrm{d}S(\theta),\]
where \(\omega_{n}\) denotes the surface area of \(\mathbb{S}^{n-1}\) and \(\mathrm{d}S\) denotes the surface measure on it. We caution the reader that some authors also define the above transform with weight \(t^{n-1}\), in which case our results need to be modified accordingly. Due to the support restriction on \(f\), \(\mathcal{R}f(\cdot,t)=0\) for \(t\geq 2\). Thus, we have \(\mathcal{R}:C_{c}^{\infty}(\mathbb{B})\to C_{c}^{\infty}(\mathbb{S}^{n-1} \times(0,2))\).
In the setting discussed above, the problem of inverting the SMT has been considered by many authors, and explicit inversion formulas exist. Before stating the relevant inversion formulas, let us point out that when \(f\) is a radial function, \(\mathcal{R}f\) is independent of the center of integration. This can be seen by a simple application of the Funk-Hecke theorem, as follows:
\[\mathcal{R}f(p,t) =\frac{1}{\omega_{n}}\int\limits_{\mathbb{S}^{n-1}}f(|p+t\theta|) \,\mathrm{d}S(\theta)\] \[=\frac{1}{\omega_{n}}\int\limits_{\mathbb{S}^{n-1}}f\left(\sqrt{1 +t^{2}+2t(p\cdot\theta)}\right)\,\mathrm{d}S(\theta).\]
An application of the Funk-Hecke theorem now gives
\[\mathcal{R}f(p,t)=\frac{\omega_{n-1}}{\omega_{n}}\int\limits_{-1}^{1}f\left( \sqrt{1+t^{2}+2st}\right)(1-s^{2})^{k}\,\mathrm{d}s, \tag{2.3}\]
where the right-hand side is independent of \(p\). This observation is not new and has been used to obtain inversion procedures for the SMT. The above equation can be seen as a Volterra integral equation of the first kind with a weakly singular kernel, which can be modified into a Volterra integral equation of the second kind and then solved using Picard's method of successive iterations. This procedure is not specific to radial functions. The case of general functions can also be solved similarly by expansion into spherical harmonics, see [11, 12, 45, 46]. Due to the rotation invariance of the SMT, the \(m\)-th term in the spherical harmonics expansion of \(\mathcal{R}f\) depends only on the \(m\)-th term in the expansion of \(f\) via a Volterra integral equation, which has a unique solution. It follows that if \(\mathcal{R}f\) is independent of the centers of integration, then \(f\) is necessarily a radial function.
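For concreteness, formula (2.3) can also be checked numerically. The sketch below (an illustration only; it assumes `numpy`/`scipy` and uses an arbitrarily chosen radial test function in dimension \(n=5\)) compares the Funk-Hecke reduction with a direct Monte Carlo average over \(\mathbb{S}^{n-1}\).

```python
# Numerical sketch: compare the Funk-Hecke reduction (2.3) of the SMT of a
# radial function with a direct Monte Carlo average over S^{n-1}.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

n = 5
k = (n - 3) // 2                                             # n = 2k + 3
f = lambda u: np.where(u < 1.0, (1.0 - u ** 2) ** 3, 0.0)    # radial, supported in B

def smt_funk_hecke(t):
    # omega_{n-1} / omega_n = Gamma(n/2) / (sqrt(pi) Gamma((n-1)/2))
    c = gamma(n / 2) / (np.sqrt(np.pi) * gamma((n - 1) / 2))
    return c * quad(lambda s: f(np.sqrt(1 + t ** 2 + 2 * s * t)) * (1 - s ** 2) ** k, -1, 1)[0]

def smt_monte_carlo(t, p, samples=200_000, seed=1):
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=(samples, n))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)    # uniform points on S^{n-1}
    return f(np.linalg.norm(p + t * theta, axis=1)).mean()

p = np.zeros(n); p[0] = 1.0                                  # a centre on S^{n-1}
for t in (0.4, 0.9, 1.5):
    print(f"t={t}:  Funk-Hecke {smt_funk_hecke(t):.5f}   Monte Carlo {smt_monte_carlo(t, p):.5f}")
```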
Let us now state an explicit inversion formula in odd dimensions which we use in our proofs.
**Theorem 2.2**.: _[_24_, Theorem 3]_ _A smooth function \(f\in C_{c}^{\infty}(\mathbb{B})\) can be obtained from the knowledge of its spherical mean transform as follows:_
\[f(x) =K(n)\left(\mathcal{N}^{*}\mathcal{D}^{*}\partial_{t}^{2}t\mathcal{D}\mathcal{N}f\right)(x) \tag{2.4}\] \[=K(n)\left(\mathcal{N}^{*}\mathcal{D}^{*}\partial_{t}t\partial_{t}\mathcal{D}\mathcal{N}f\right)(x) \tag{2.5}\] \[=K(n)\Delta_{x}\left(\mathcal{N}^{*}\mathcal{D}^{*}t\mathcal{D}\mathcal{N}f\right)(x),\]
_where \(K(n)=\frac{-\pi}{2\Gamma(n/2)^{2}}\), and the various operators involved are given by_
\[(\mathcal{N}f)(p,t)=t^{n-2}(\mathcal{R}f)(p,t),\]
_and for a function \(G\in C_{c}^{\infty}(\mathbb{S}^{n-1}\times(0,2))\),_
\[(\mathcal{D}G)(p,t) =\left(\frac{1}{2t}\frac{\partial}{\partial t}\right)^{k}(G(p,t)),\] \[(\mathcal{N}^{*}G)(x) =\frac{1}{\omega_{n}}\int\limits_{\mathbb{S}^{n-1}}\frac{G(p,|p- x|)}{|p-x|}\mathrm{d}S(p),\] \[(\mathcal{D}^{*}G)(p,t) =(-1)^{k}t\mathcal{D}\left(\frac{G(p,t)}{t}\right).\]
**Remark 2.3**.: The fact that \(f\) is necessarily a radial function if \(\mathcal{R}f\) is independent of the centers of integration can also be seen from the inversion formula above. If \(\mathcal{R}f\) is independent of \(p\), then so is \(\mathcal{D}^{*}\partial_{t}^{2}t\mathcal{D}\mathcal{N}f\), and hence \(\mathcal{N}^{*}(\mathcal{D}^{*}\partial_{t}^{2}t\mathcal{D}\mathcal{N}f)(x)\) depends only on \(|x|\) (see eq. (4.2)).
One of our proofs of sufficiency is based on the range characterization given in [5], where several equivalent conditions are given.
**Theorem 2.4**.: _[_5_, Theorem 11]_ _Let \(n>1\) be an odd integer. A function \(g\in C_{c}^{\infty}(\mathbb{S}^{n-1}\times[0,2])\) is representable as \(\mathcal{R}f\) for some \(f\in C_{c}^{\infty}(\mathbb{B})\) if and only if for any \(m\), the \(m\)th order spherical harmonic term \(\widehat{g}_{m}(p,\lambda)\) of \(\widehat{g}(p,\lambda)\) vanishes at non-zero zeros of the Bessel function \(J_{m+n/2-1}(\lambda)\), where_
\[\widehat{g}(p,\lambda)=\mathcal{F}_{\frac{n-2}{2}}(g)(p,\lambda)\]
_is the Hankel transform of \(g\) of order \(\alpha=(n-2)/2\), for each fixed \(p\)._
We will make extensive use of the following standard result:
**Theorem 2.5** (Funk-Hecke).: _If \(\int\limits_{-1}^{1}|F(t)|(1-t^{2})^{\frac{n-3}{2}}\mathrm{d}t<\infty\), then_
\[\int\limits_{\mathbb{S}^{n-1}}F\left(\left<\sigma,\eta\right>\right)Y_{l}( \sigma)\mathrm{d}\sigma=\frac{\left|\mathbb{S}^{n-2}\right|}{C_{l}^{\frac{n} {2}-1}(1)}\left(\int\limits_{-1}^{1}F(t)C_{l}^{\frac{n}{2}-1}(t)(1-t^{2})^{ \frac{n-3}{2}}\mathrm{d}t\right)Y_{l}(\eta),\]
_where \(|\mathbb{S}^{n-2}|\) denotes the surface measure of the unit sphere in \(\mathbb{R}^{n-1}\), \(C_{l}^{\frac{n}{2}-1}(t)\) are the Gegenbauer polynomials and \(Y_{l}\) are spherical harmonics._
### Unique continuation principle for spherical mean transform
Let \(\mathcal{P}\) denote any operator. For any open set \(U\), if \(\mathcal{P}u|_{U}=0\) and \(u|_{U}=0\) together imply that \(u\) vanishes identically, then \(\mathcal{P}\) is said to possess a _unique continuation property_. Some examples of operators possessing the UCP are fractional powers of the Laplacian, the normal operators of the X-ray and momentum ray transforms, and the normal operators of \(d\)-plane transforms (for \(d\) odd). In all these examples, the inversion formulas are non-local in nature.
Motivated by the results for X-ray and momentum ray transforms, we propose the following analog of the UCP in the context of SMT:
**Question 1** (Unique continuation for spherical mean transform).: _Let \(U\subset\mathbb{B}\) be an arbitrary open set. Let \(f\in C_{c}^{\infty}(\mathbb{B})\) be such that \(f\) vanishes on \(U\), and the spherical mean transform of \(f\) vanishes on all spheres intersecting \(U\). Does \(f\) vanish identically?_
A closer look at the inversion formula above reveals that, in odd dimensions, the inversion of the SMT is local in nature; that is, the value of the function \(f\) at a point \(x\) depends only on the spherical means of \(f\) over spheres passing through a small neighbourhood of \(x\). This observation suggests that a unique continuation result should not hold for \(\mathcal{R}\) in odd dimensions. This is indeed true and is the content of Theorem 1.8 and Corollary 1.9.
### Some auxiliary lemmas
In this subsection, we collect some basic mathematical results which will be used in the calculations. All these results are well known and are stated for the sake of completeness and easy reference.
Let us begin by recalling Faa di Bruno's formula, which is an identity relating the higher order derivatives of the composition of two functions to the derivatives of the individual functions. It is a generalization of the usual chain rule to higher order derivatives (see, for instance, [33]).
**Lemma 2.6** (Faa di Bruno's formula).: _Let \(F\) and \(G\) be two smooth functions of a real variable. The derivatives of the composite function \(F\circ G\) in terms of the derivatives of \(F\) and \(G\) are given as_
\[\frac{\mathrm{d}^{p}}{\mathrm{d}t^{p}}F(G(t))=\sum_{q=1}^{p}F^{(q)}(G(t))B_{p,q}(G^{(1)}(t),\dots,G^{(p-q+1)}(t)),\]
_where \(B_{p,q}\) are the Bell polynomials given by_
\[B_{p,q}(x_{1},\dots,x_{p-q+1})=\sum\frac{p!}{j_{1}!\dots j_{p-q+1}!}\left( \frac{x_{1}}{1!}\right)^{j_{1}}\dots\left(\frac{x_{p-q+1}}{(p-q+1)!}\right)^{j _{p-q+1}},\]
_with the sum taken over all sequences of non-negative integers \(j_{1},\dots,j_{p-q+1}\) such that the following two conditions are satisfied:_
\[j_{1}+j_{2}+\dots+j_{p-q+1}=q,\] \[j_{1}+2j_{2}+\dots+(p-q+1)j_{p-q+1}=p.\]
We will be working with the operator \(D\) defined as
\[D=\frac{1}{t}\frac{\mathrm{d}}{\mathrm{d}t}.\]
Multiplying the standard chain rule by \(\frac{1}{t}\), we see that the \(D\)-derivative of the composition of two functions can then be re-written as
\[D(F(G(t)))=F^{\prime}(G(t))\cdot DG(t),\]
where \(F^{\prime}\) denotes the usual derivative of \(F\). The following lemma is then an easy verification.
**Lemma 2.7** (Faa di Bruno's formula for the operator \(D\)).: _Let \(F\) and \(G\) be two smooth functions of one real variable. The \(D\)-derivatives of the composite function \(F\circ G\) are given as_
\[D^{p}F(G(t))=\sum_{q=1}^{p}F^{(q)}(G(t))B_{p,q}((DG)(t),\dots,D^{(p-q+1)}G(t)).\]
It can be quite difficult to work with the above formula in its full generality. However, for the case that we have at hand, applying Faa di Bruno's formula becomes much simpler. In our case, we have \(D^{j}G=0\) for \(j\geq 3\), and the formula simplifies to
**Lemma 2.8** (Faa di Bruno's formula - special case).: _Let \(F\) and \(G\) be two smooth functions of one real variable such that \(D^{j}G=0\) for \(j\geq 3\). The following identity holds_
\[D^{p}F(G(t))=\sum_{q\geq p/2}^{p}\frac{p!}{(2q-p)!(p-q)!2^{p-q}}F^{(q)}(G(t) )\left(DG(t)\right)^{2q-p}\left(D^{2}G(t)\right)^{p-q}.\]
Proof.: Due to the existence of only two non-trivial \(D\) derivatives of \(G\), the Bell polynomials are subject to the following two conditions:
\[j_{1}+j_{2}=q,\quad j_{1}+2j_{2}=p.\]
Solving this gives the following unique solution: \(j_{2}=p-q\) and \(j_{1}=2q-p\). Since \(j_{1}\geq 0\), we have the additional requirement that \(q\geq p/2\). With all these considerations, we arrive at the required formula for \(D^{p}(F(G(t)))\).
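As a quick independent confirmation of Lemma 2.8 (a symbolic sketch, not needed for the proof; it assumes `sympy` and uses the particular choices \(F(x)=e^{x}\) and \(G(t)=t^{4}\), for which \(D^{j}G=0\) when \(j\geq 3\)):

```python
# Symbolic spot-check of Lemma 2.8 with F(x) = exp(x) and G(t) = t^4,
# for which DG = 4t^2, D^2 G = 8 and D^j G = 0 for j >= 3.
import sympy as sp

t, x = sp.symbols('t x', positive=True)
F = sp.exp(x)
G = t ** 4

def D(expr):                                   # the operator D = (1/t) d/dt
    return sp.expand(sp.diff(expr, t) / t)

DG, D2G = D(G), D(D(G))
for p in range(1, 6):
    lhs = F.subs(x, G)
    for _ in range(p):                         # left-hand side: D^p (F o G)
        lhs = D(lhs)
    rhs = sum(
        sp.Rational(sp.factorial(p), sp.factorial(2 * q - p) * sp.factorial(p - q) * 2 ** (p - q))
        * sp.diff(F, x, q).subs(x, G) * DG ** (2 * q - p) * D2G ** (p - q)
        for q in range(-(-p // 2), p + 1)      # q runs over ceil(p/2), ..., p
    )
    assert sp.expand(lhs - rhs) == 0, p
print("Lemma 2.8 verified for p = 1, ..., 5.")
```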
Finally, let us record the expression for repeated integration by parts with the operator \(D\).
**Lemma 2.9**.: _For two smooth functions \(F\) and \(G\), the following identity holds:_
\[\int\limits_{a}^{b}\partial_{t}D^{k}F\cdot G\,\mathrm{d}t=\left[\sum_{l=0}^{k -1}(-1)^{l}D^{k-l}F\cdot D^{l}G\right]_{t=a}^{b}+(-1)^{k}\int\limits_{a}^{b} \partial_{t}F\cdot D^{k}G\,\mathrm{d}t, \tag{2.6}\]
_where the sum is interpreted as empty for \(k=0\)._
The proof is straightforward and hence omitted.
## 3. Proof of main results
### Range characterization for radial functions
Before we proceed to the proof of the range characterization, let us explain the idea. When the function \(f\) possesses radial symmetry, some relation between \(\mathcal{R}f\) at points \(1\pm t\) is expected, as the figure below suggests.
Notice that both spheres (of radii \(1\pm t\)) pass through points at which \(f\) takes the same values. Let us consider the three-dimensional case, where the necessary condition is straightforward to obtain. For \(t\in(0,2)\), we have
\[\mathcal{R}f(t)=\frac{2\pi}{4\pi}\int\limits_{-1}^{1}f\left(\sqrt{1+t^{2}+2st }\right)\,\mathrm{d}s.\]
Consider the change of variables \(u=\sqrt{1+t^{2}+2st}\) to obtain
\[\mathcal{R}f(t) =\frac{1}{2t}\int\limits_{|1-t|}^{1+t}uf(u)\,\mathrm{d}u\] \[=\frac{1}{2t}\int\limits_{|1-t|}^{1}uf(u)\,\mathrm{d}u,\]
since \(f\) vanishes outside the unit ball. It follows that the function \(t\mathcal{R}f(t)\) satisfies
\[[t\mathcal{R}f](1-t)=[t\mathcal{R}f](1+t)\text{ for }t\in[0,1],\]
or equivalently
\[[t\mathcal{R}f](t)=[t\mathcal{R}f](2-t)\text{ for }t\in[0,1].\]
This relation also suggests working with \(t^{n-2}\mathcal{R}f\) instead of \(\mathcal{R}f\).
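The symmetry \([t\mathcal{R}f](t)=[t\mathcal{R}f](2-t)\) is easy to observe numerically; the following sketch (an illustration with an arbitrarily chosen radial test function, assuming `numpy`/`scipy`) evaluates both sides from formula (2.3) with \(n=3\).

```python
# Numerical illustration of the symmetry [tRf](t) = [tRf](2-t) in n = 3.
import numpy as np
from scipy.integrate import quad

f = lambda u: np.where(u < 1.0, np.cos(np.pi * u / 2) ** 2, 0.0)   # radial test function

def Rf(t):
    # formula (2.3) with n = 3: k = 0 and omega_2 / omega_3 = 1/2
    return 0.5 * quad(lambda s: f(np.sqrt(1 + t ** 2 + 2 * s * t)), -1, 1)[0]

for t in np.linspace(0.1, 0.9, 5):
    print(f"t={t:.2f}   t*Rf(t) = {t * Rf(t):.8f}   (2-t)*Rf(2-t) = {(2 - t) * Rf(2 - t):.8f}")
```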
Let us move on to the proof of Theorem 1.1. We prove that the condition is necessary and sufficient in the next two subsections respectively.
Figure 1. Relation between SMT of a radial function at radii \((1+t)\) and \((1-t)\) for \(0<t<1\).
#### 3.1.1. Proof that the condition is necessary
In this subsection, we give the necessity part of the proof of the Theorem 1.1.
In order to see what condition to expect, let us consider the case of the spherical Radon transform in five dimensions. Let \(f\) be a smooth radial function supported in the unit ball in \(\mathbb{R}^{5}\), that is, \(f(x)=\widetilde{f}(|x|)\) for a smooth compactly supported function \(\widetilde{f}\) on \([0,\infty)\). In order to avoid a proliferation of new notation, we use the same \(f\) to denote the function of one variable associated to \(f\). That is, we denote \(\widetilde{f}\) by \(f\). We have
\[\mathcal{R}f(p,t)=\frac{1}{\omega_{5}}\int\limits_{\mathbb{S}^{4}}f(p+t\theta) \mathrm{d}\theta. \tag{3.1}\]
In (3.1) above, \(\omega_{5}\) is the surface area of the unit sphere in \(\mathbb{R}^{5}\). Then
\[g(t)=\mathcal{R}f(p,t) =\frac{1}{\omega_{5}}\int\limits_{\mathbb{S}^{4}}f(|p+t\theta|) \mathrm{d}\theta\] \[=\frac{1}{\omega_{5}}\int\limits_{\mathbb{S}^{4}}f(\sqrt{1+t^{2} +2tp\cdot\theta})\mathrm{d}\theta.\]
Applying the Funk-Hecke theorem, we get,
\[g(t)=\frac{\omega_{4}}{\omega_{5}}\int\limits_{t/2}^{1}f(\sqrt{1+t^{2}-2st})( 1-s^{2})\mathrm{d}s. \tag{3.2}\]
For \(t>0\), making the change of variable, \(u=\sqrt{1+t^{2}-2st}\), we get,
\[g(t) =\frac{\omega_{4}}{t\omega_{5}}\int\limits_{|1-t|}^{1}f(u)u\left( 1-\left(\frac{1+t^{2}-u^{2}}{2t}\right)^{2}\right)\mathrm{d}u\] \[=\frac{\omega_{4}}{4t^{3}\omega_{5}}\int\limits_{|1-t|}^{1}f(u)u \left(4t^{2}-\left(1+t^{2}-u^{2}\right)^{2}\right)\mathrm{d}u.\]
Let us denote
\[h(t)=t^{3}g(t)\text{ and }C=\frac{\omega_{4}}{4\omega_{5}}.\]
Then we have
\[h(t) =C\int\limits_{|1-t|}^{1}f(u)u\left(4t^{2}-(1+t^{2}-u^{2})^{2} \right)\mathrm{d}u\] \[=C\int\limits_{|1-t|}^{1}f(u)u\left((1+t)^{2}-u^{2}\right)(u^{2}- (t-1)^{2})\mathrm{d}u.\]
We let \(0<t<1\). Then
\[h(t)=C\int\limits_{1-t}^{1}f(u)u\left((1+t)^{2}-u^{2}\right)(u^{2}-(t-1)^{2}) \mathrm{d}u. \tag{3.3}\]
We replace \(t\) by \(2-t\) in the above expression. We get,
\[h(2-t)=C\int\limits_{1-t}^{1}f(u)u((3-t)^{2}-u^{2})(u^{2}-(t-1)^{2})\mathrm{d}u. \tag{3.4}\]
Let us expand both (3.3) and (3.4). Then
\[h(t)=-C(t^{2}-1)^{2}\int\limits_{1-t}^{1}f(u)u\mathrm{d}u+2C(1+t^{2})\int \limits_{1-t}^{1}f(u)u^{3}\mathrm{d}u-C\int\limits_{1-t}^{1}f(u)u^{5}\mathrm{ d}u. \tag{3.5}\]
\[h(2-t)=-C((3-t)(1-t))^{2}\int\limits_{1-t}^{1}f(u)u{\rm d}u+2C(1+(2-t)^{2})\int \limits_{1-t}^{1}f(u)u^{3}{\rm d}u-C\int\limits_{1-t}^{1}f(u)u^{5}{\rm d}u. \tag{3.6}\]
For simplicity of notation, we will denote
\[\alpha=C\int\limits_{1-t}^{1}f(u)u{\rm d}u,\beta=C\int\limits_{1-t}^{1}f(u)u^{ 3}{\rm d}u,\gamma=C\int\limits_{1-t}^{1}f(u)u^{5}{\rm d}u.\]
Our goal is to find a relation eliminating these unknowns. In this notation, we have
\[h(t)=-(t^{2}-1)^{2}\alpha+2(1+t^{2})\beta-\gamma. \tag{3.7}\]
\[h(2-t)=-((3-t)(1-t))^{2}\alpha+2(1+(2-t)^{2})\beta-\gamma. \tag{3.8}\]
Differentiating the above two expressions, we get,
\[h^{\prime}(t)=-4t(t^{2}-1)\alpha+4t\beta. \tag{3.9}\]
\[h^{\prime}(2-t)=4(t-1)(t-2)(t-3)\alpha-4(t-2)\beta. \tag{3.10}\]
Note that the terms which arise from differentiating the limits of integration add up to \(0\). Solving (3.9) and (3.10), we get,
\[\alpha=-\frac{(t-2)h^{\prime}(t)+th^{\prime}(2-t)}{16t(t-1)(t-2)}. \tag{3.11}\]
\[\beta=-\frac{(t-2)(t-3)h^{\prime}(t)+t(t+1)h^{\prime}(2-t)}{16t(t-2)}. \tag{3.12}\]
Substituting this back into (3.7) and (3.8), we then get (eliminating \(\gamma\)),
\[h(t)+(t^{2}-1)^{2}\alpha-2(1+t^{2})\beta=h(2-t)+((3-t)(1-t))^{2}\alpha-2(1+(2- t)^{2})\beta.\]
Using the expression for \(\alpha\) and \(\beta\), from (3.11) and (3.12), respectively, we have,
\[h(t)+\frac{(1-t)}{t}h^{\prime}(t)=h(2-t)+\frac{(1-t)}{(t-2)}h^{\prime}(2-t) \text{ for all }t\in(0,1). \tag{3.13}\]
In the notation of \(D\) operator, we then get,
\[h(t)+(1-t)[Dh](t)=h(2-t)-(1-t)[Dh](2-t)\text{ for all }t\in(0,1). \tag{3.14}\]
By continuity, we also have
\[h(t)+(1-t)[Dh](t)=h(2-t)-(1-t)[Dh](2-t)\text{ for all }t\in[0,1]. \tag{3.15}\]
This can be rewritten in the final form as follows:
\[h(t)-h(2-t)+(1-t)\left([Dh](t)+[Dh](2-t)\right)=0\text{ for all }t\in[0,1]. \tag{3.16}\]
Note that due to the smoothness condition on \(h\), the expression above is well-defined for \(t=1\) as well.
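Before turning to the general case, the condition (3.16) can be tested numerically. The sketch below (an illustration with an arbitrary radial test function, \(\mathcal{R}f\) computed from (2.3) for \(n=5\), and the \(D\)-derivative approximated by central differences) produces residuals at the level of the discretization error.

```python
# Numerical check of the 5D condition (3.16):
#   h(t) - h(2-t) + (1-t)([Dh](t) + [Dh](2-t)) = 0,  with h(t) = t^3 Rf(t).
import numpy as np
from scipy.integrate import quad

f = lambda u: np.where(u < 1.0, (1.0 - u ** 2) ** 4, 0.0)     # radial, supported in B

def Rf(t):
    # formula (2.3) for n = 5: k = 1 and omega_4 / omega_5 = 3/4
    integrand = lambda s: f(np.sqrt(1 + t ** 2 + 2 * s * t)) * (1 - s ** 2)
    return 0.75 * quad(integrand, -1, 1, epsabs=1e-12, epsrel=1e-12)[0]

h = lambda t: t ** 3 * Rf(t)

def Dh(t, eps=1e-4):                                          # D = (1/t) d/dt
    return (h(t + eps) - h(t - eps)) / (2 * eps * t)

for t in (0.3, 0.5, 0.8):
    residual = h(t) - h(2 - t) + (1 - t) * (Dh(t) + Dh(2 - t))
    print(f"t={t:.1f}   residual = {residual:.2e}")
```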
Our goal next is to generalize the above approach to the odd-dimensional spherical Radon transform set-up. The strategy, as in this specific example, is to eliminate the integral expressions involving \(f\). We also make the following observations:
* We work with \(D\) derivatives instead of the usual derivatives.
* In the general odd dimensional set-up, we can take up to \(k^{\text{th}}\) order \(D\) derivatives and all such derivatives pass through the integral. In other words, the derivative of the integral has no contribution up to the \(k^{\text{th}}\) order.
* Based on the calculations done for the 5D-case, we consider coefficients of \(D\) derivatives as powers of \((1-t)\) multiplied by suitable constants. As in (3.16), these are subtracted when evaluated at \(t\) and \((2-t)\) for even order \(D\) derivatives and added for odd order \(D\) derivatives and set to \(0\) to determine the coefficients.
We carry out this program for the general odd dimensional case now. We should mention here that while the computations done for the 5D case serve as a motivation for our approach below, it is very difficult to generalize it to higher dimensional cases, since the solution to the problem relies on the explicit inversion of a matrix. Nevertheless, finding the correct combination of derivatives leads to a positive answer as we show below. The 3D case is trivial, and the 5D computations done above can be recast as follows: Let us start with the expression for \(h(t)\):
\[h(t) =\frac{\omega_{4}}{4\omega_{5}}\int\limits_{|1-t|}^{1}f(u)u\left( 4t^{2}-(1+t^{2}-u^{2})^{2}\right)\mathrm{d}u\] \[=\frac{\omega_{4}}{4\omega_{5}}\int\limits_{|1-t|}^{1}f(u)u\left( 2(u^{2}+1)t^{2}-t^{4}-(1-u^{2})^{2}\right)\mathrm{d}u.\]
Let
\[P(t,u)=2(u^{2}+1)t^{2}-t^{4}-(1-u^{2})^{2}.\]
It is a straightforward exercise to check that
\[(P(t,u)-P(2-t,u))+(1-t)\left([DP](t,u)+[DP](2-t,u)\right)\equiv 0.\]
This then gives that
\[(h(t)-h(2-t))+(1-t)\left([Dh](t)+[Dh](2-t)\right)\equiv 0.\]
This is exactly what we derived earlier using a slightly different approach. Nevertheless, this serves as a motivation for what follows.
Proof of necessity of Theorem 1.1.: Let \(n\) be of the form \(n=2k+3\) with \(k\geq 0\). Let \(f\in C_{c}^{\infty}(\mathbb{B})\) in \(n\) dimensions be a function depending only on the distance from the origin. Then \(f\) can be written as
\[f(x)=\widetilde{f}(|x|),\text{ for some }\widetilde{f}:[0,\infty)\to\mathbb{R}.\]
We have that \(\widetilde{f}\in C^{\infty}([0,\infty))\) and all odd order derivatives of \(\widetilde{f}\) vanish at the origin. As before, we do not distinguish between \(f\) and \(\widetilde{f}\). The spherical Radon transform of \(f\) is
\[\mathcal{R}f(p,t) =\frac{1}{\omega_{n}}\int\limits_{\mathbb{S}^{n-1}}f(|p+t\theta|)\,\mathrm{d}S(\theta)\] \[=\frac{1}{\omega_{n}}\int\limits_{\mathbb{S}^{n-1}}f(\sqrt{1+t^{2}+2tp\cdot\theta})\,\mathrm{d}S(\theta)\] \[=\frac{\omega_{n-1}}{\omega_{n}}\int\limits_{t/2}^{1}f(\sqrt{1+t^{2}-2ts})(1-s^{2})^{\frac{n-3}{2}}\,\mathrm{d}s.\]
The last equality follows from the Funk-Hecke theorem combined with the fact that the support of \(f\) is contained in the unit ball, which forces \(\frac{t}{2}\leq-p\cdot\theta\leq 1\). Next, employing the change of variable,
\[1+t^{2}-2ts=u^{2},\]
we have
\[h(t):=t^{n-2}\mathcal{R}f(p,t) =\frac{\omega_{n-1}}{4^{k}\omega_{n}}\int\limits_{|1-t|}^{1}uf(u) \left(4t^{2}-(1+t^{2}-u^{2})^{2}\right)^{k}\mathrm{d}u\] \[=\frac{\omega_{n-1}}{4^{k}\omega_{n}}\int\limits_{|1-t|}^{1}uf(u) \left(2(u^{2}+1)t^{2}-t^{4}-(1-u^{2})^{2}\right)^{k}\mathrm{d}u\]
The integral kernel for \(h(t)\) is a polynomial in the variables \(t\) and \(u\). In order to derive a necessary condition for a function \(h(t)\in C_{c}^{\infty}((0,2))\) to be in the range of the spherical Radon transform, a reasonable approach would be to differentiate \(h\) several times and derive a system of equations eliminating integrals with integrand of the form \(f(u)u^{m}\) for certain positive integers \(m\). Before we proceed, we make the following remark. We are interested in taking \(k\) derivatives of \(h(t)\). In fact, \(h\) is infinitely differentiable
in \(t\). This is clear for \(t\neq 1\). However, for \(t=1\), we can argue as follows. We have that \(h(t)\) (involving the spherical Radon transform of a smooth function) is smooth in the \(t\) variable and for \(t\neq 1\), the derivatives of \(h(t)\) can be computed by chain rule. Hence the derivatives of \(h(t)\) at \(t=1\) can be evaluated by taking the limit as \(t\to 1\) of the corresponding derivatives evaluated at \(t\neq 1\). The same remark applies for higher order \(D\) derivatives instead of ordinary derivatives. With this remark in mind, we will not distinguish between \(t=1\) and \(t\neq 1\). With
\[Q(t,u)=2(u^{2}+1)t^{2}-t^{4}-(1-u^{2})^{2},\]
we consider \(P(t,u)=\left(Q(t,u)\right)^{k}\), and we are interested in taking \(D\) derivatives up to order \(k\) of \(h(t)\). Note that up to order \(k\), the derivatives are only evaluated on \(P(t,u)\), since \(Q(t,1-t)=0\).
As a first step, we find an explicit expression for the higher order \(D\) derivatives of \(h(t)\). We use a special case of Faa di Bruno's formula, Lemma 2.8. Let us consider
\[P(t)=t^{k}\text{ and }Q(t,u)=2(u^{2}+1)t^{2}-t^{4}-(1-u^{2})^{2}.\]
In the set-up that we have, we take higher order \(D\) derivatives of \(Q\) and observe that \(D^{p}Q(t,u)=0\) for \(p\geq 3\). Furthermore,
\[DQ(t,u)=4(u^{2}+1-t^{2})=4\left(\frac{Q(2-t,u)-Q(t,u)}{8(1-t)}+2(1-t)\right),\]
and
\[D^{2}Q(t,u)=-8.\]
With all these considerations, we arrive at the following formula for \(D^{p}P(t,u)\):
\[D^{p}P(t,u)=\sum_{q\geq p/2}^{p}\frac{k!}{(k-q)!}\left(Q(t,u)\right)^{k-q} \frac{p!}{(2q-p)!(p-q)!2^{p-q}}\left(\frac{Q(2-t,u)-Q(t,u)}{2(1-t)}+8(1-t) \right)^{2q-p}(-8)^{p-q}.\]
Let us write this as
\[D^{p}P(t,u)=\sum_{q\geq p/2}^{p}\frac{K(p,q)}{(1-t)^{2q-p}}Q(t,u)^{k-q}(Q(2-t,u)-Q(t,u)+16(1-t)^{2})^{2q-p}, \tag{3.17}\]
with
\[K(p,q)=\frac{k!p!(-4)^{p-q}}{(k-q)!(2q-p)!(p-q)!2^{2q-p}}.\]
Since we are only interested in derivatives in the \(t\) variable, we are going to suppress the dependence of \(P,Q\) and their derivatives on \(u\), and simply write \(P(t),Q(t)\), etc. We also recall our convention that \([D^{p}P](\cdot)\) denotes evaluation of the function \(D^{p}P\) at the given point. Based on the necessary condition derived for the 5D case, keeping in mind odd or even order \(D\) derivatives, let us consider
\[(1-t)^{p}\left([D^{p}P](t)+(-1)^{p+1}[D^{p}P](2-t)\right). \tag{3.18}\]
Let us expand \([D^{p}P](t)\) by binomial theorem. We get,
\[[D^{p}P](t) =\sum_{q\geq p/2}^{p}\frac{K(p,q)}{(1-t)^{2q-p}}Q(t)^{k-q}\sum_{r =0}^{2q-p}\binom{2q-p}{r}(Q(2-t)-Q(t))^{2q-p-r}16^{r}(1-t)^{2r}\] \[=\sum_{q\geq p/2}^{p}\sum_{r=0}^{2q-p}\frac{16^{r}K(p,q)}{(1-t)^ {2q-p}}(1-t)^{2r}\binom{2q-p}{r}Q(t)^{k-q}(Q(2-t)-Q(t))^{2q-p-r}.\]
Hence
\[[D^{p}P](2-t){=}\hskip-2.845276pt\sum_{q\geq p/2}^{p}\sum_{r=0}^{2q-p}\frac{ (-1)^{2q-p-r}16^{r}K(p,q)}{(-1)^{2q-p}(1-t)^{2q-p}}(1-t)^{2r}\binom{2q-p}{r}Q( 2-t)^{k-q}(Q(2-t)-Q(t))^{2q-p-r}.\]
Therefore
\[(1-t)^{p}\left([D^{p}P](t)+(-1)^{p+1}[D^{p}P](2-t)\right)=\sum_{q \geq p/2}^{p}\sum_{r=0}^{2q-p}K(p,q)16^{r}(1-t)^{2p-2q+2r}\binom{2q-p}{r}\times\] \[(Q(2-t)-Q(t))^{2q-p-r}\Bigg{\{}Q(t)^{k-q}+(-1)^{p-r+1}Q(2-t)^{k-q }\Bigg{\}}.\]
We want to find coefficients \(\{C(k,p)\}\) for \(0\leq p\leq k\) such that
\[\sum_{p=0}^{k}\sum_{q\geq p/2}^{p}\sum_{r=0}^{2q-p}C(k,p)K(p,q)(-1)^{2q-p-r}16^{r }(1-t)^{2p-2q+2r}\binom{2q-p}{r}\times\]
\[(Q(t)-Q(2-t))^{2q-p-r}\Bigg{\{}Q(t)^{k-q}+(-1)^{p-r+1}Q(2-t)^{k-q}\Bigg{\}}=0.\]
Simplifying the constants in the equality above, we arrive at
\[\sum_{p=0}^{k}\sum_{q\geq p/2}^{p}\sum_{r=0}^{2q-p}C(k,p)\frac{k!p! (-1)^{q-r}2^{3p-4q+4r}}{(k-q)!(p-q)!r!(2q-p-r)!}(1-t)^{2(p-q+r)} \tag{3.19}\] \[\times(Q(t)-Q(2-t))^{2q-p-r}\Big{\{}Q(t)^{k-q}+(-1)^{p-r+1}Q(2-t)^ {k-q}\Big{\}}=0.\]
Our goal is to find constants \(C(k,p)\) such that (3.19) is valid. Let us fix a power of \((1-t)^{2}\) of the form \((1-t)^{2(k-l)}\). Our strategy for determining the coefficients is to set the sum of the terms corresponding to this fixed power \((1-t)^{2(k-l)}\) equal to \(0\). Let us assume \(l\) is odd; the proof for the even case is similar. The maximum possible choices of triples \((p,q,r)\) we have to consider are:
1. \((k,k-l,k-2l),(k,k-l+1,k-2l+1),\cdots,(k,k,k-l)\)
2. \((k-1,k-l,k-2l+1),(k-1,k-l+1,k-2l+2),\cdots,(k-1,k-1,k-l)\) \(\vdots\)
3. \((k-l,k-l,k-l)\).
The maximum number of terms above is \(\frac{(l+1)(l+2)}{2}\).
We prove our result by induction. Let us start by considering the highest power of \((1-t)\) in (3.19).
We claim that a term of the form \((1-t)^{2k}\) does not appear in the expansion above. This can be seen as follows: to get the term \((1-t)^{2(p-q+r)}=(1-t)^{2k}\), we must have \(p-q+r=k\). If \(p<k\), then we must have \(q<r\). But we have \(0\leq r\leq 2q-p<2r-p\). This implies that \(r<2r-p\), which gives \(r>p\); this is impossible, since \(r\leq 2q-p\leq p\). Hence \(p=k\). This then gives \(r=q\), which in turn gives \(p\leq q\). But \(q\leq p\) always, and hence \(q=p=k\). This forces \(r=k\). Due to the presence of \((-1)^{p-r+1}\) in the term above, we conclude that the \((1-t)^{2k}\) term does not appear in the expansion above.
Next we show that a term involving \((1-t)^{2k-2}\) appears exactly twice in the term involving \(C(k,k)\) and once in the term involving \(C(k,k-1)\). First, consider \(p=k\). Then we have to consider \(p-q+r=k-1\), and since \(p=k\), we have \(q-r=1\), which then implies that \(k-1\leq q\). Hence the two choices of \(q\) that are possible are \(q=k-1\) and \(q=k\). If \(q=k-1\), then \(r=k-2\), and if \(q=k\), then \(r=k-1\). If we consider \(p=k-1\), then exactly the same argument as in the previous paragraph leads to \(r=q=p=k-1\). Hence only one choice is possible. Next let \(p=k-2\). We then have \(r-q=1\), and following the same arguments as above, we get that \(q\geq k-1\), which is impossible since \(q\leq p=k-2\). A similar argument follows for all \(p<k-2\). Hence we have established that there are exactly three terms.
Summarizing the content of the above paragraph, there are exactly three terms in the above expansion involving \((1-t)^{2k-2}\). They correspond to the following triples: \((p,q,r)=(k,k-1,k-2),(k,k,k-1)\) and \((k-1,k-1,k-1)\). We have to be careful with terms involving the case \(q=k\), since in this case \(Q(t)^{k-q}+(-1)^{p-r+1}Q(2-t)^{k-q}\) is either \(0\) or \(2\); note that this appears below when dealing with the \(K(k,k)\) term. Setting the term involving \((1-t)^{2k-2}\) equal to \(0\), we get
\[\Bigg{\{}C(k,k)\Bigg{\{}K(k,k-1)8^{k-2}-K(k,k)8^{k-1}k\Bigg{\}}+C(k,k-1)K(k-1, k-1)8^{k-1}\Bigg{\}}(Q(t)-Q(2-t))=0.\]
By setting the terms within the outer parentheses equal to \(0\) and using the values of \(K(k,k-1)\), \(K(k,k)\) and \(K(k-1,k-1)\), we then get the following:
\[-C(k,k)\Bigg{\{}\frac{4k!^{2}8^{k-2}}{(k-2)!}+k!8^{k-1}k\Bigg{\}}+C(k,k-1)k!8^ {k-1}=0. \tag{3.20}\]
Setting \(C(k,k)=1\), we find that \(C(k,k-1)=k(k+1)/2\). Notice that the values of \(\{C(k,p)\}\) are unique up to a normalizing constant. We choose the normalization so that \(C(k,k)=1\).
Let us assume by induction that \(C(k,k-s)=\frac{(k+s)!}{(k-s)!2^{s}s!}\) for all \(0\leq s\leq l-1\). Note that \(C(k,k)=1\). Our goal is to determine \(C(k,k-l)\). For ease of notation, from now on, we let \(A=Q(t)\) and \(B=Q(2-t)\).
The terms corresponding to the triples from (1) above are:
\[-C(k,k)(k!)^{2}2^{3k-4l}\sum_{s=0}^{l}\frac{(A-B)^{s}\left(A^{l-s}+(-1)^{2l+1-s}B ^{l-s}\right)}{((l-s)!)^{2}(k-2l+s)!s!}.\]
The terms corresponding to the triples \((p,q,r)\) from (2) above are:
\[C(k,k-1)k!(k-1)!2^{3k-4l+1}\sum_{s=0}^{l-1}\frac{(A-B)^{s}(A^{l-s}+(-1)^{2l-s-1 }B^{l-s})}{(l-s)!(l-s-1)!(k-2l+1+s)!s!}.\]
The terms corresponding to triples \((p,q,r)\) with \(p=k-2\) are:
\[-C(k,k-2)k!(k-2)!2^{3k-4l+2}\sum_{s=0}^{l-2}\frac{(A-B)^{s}(A^{l-s}+(-1)^{2l-s- 3}B^{l-s})}{(l-s)!(l-s-2)!(k-2l+2+s)!s!}.\]
Continuing in this fashion and summing up all the terms corresponding to \((1-t)^{2(k-l)}\) and setting it to \(0\), we have
\[k!\sum_{m=0}^{l}(-1)^{l-m}C(k,k-m)(k-m)!2^{3k-4l+m}\sum_{s=0}^{l-m}\frac{(A-B) ^{s}(A^{l-s}-(-1)^{2l-s}B^{l-s})}{(l-s)!(l-s-m)!(k-2l+s+m)!s!}=0.\]
We ignore \(k!\) and \(2^{3k-4l}\) in the above expression from now on. Interchanging the order of summation, we get,
\[\sum_{s=0}^{l}\sum_{m=0}^{l-s}(-1)^{l-m}C(k,k-m)(k-m)!2^{m}\frac{(A-B)^{s}(A^ {l-s}-(-1)^{s}B^{l-s})}{(l-s)!(l-s-m)!(k-2l+s+m)!s!}=0. \tag{3.21}\]
Let us split (3.21) as
\[\sum_{s=1}^{l-1}\sum_{m=1}^{l-s}(-1)^{l-m}C(k,k-m)(k-m)!2^{m}\frac {(A-B)^{s}(A^{l-s}-(-1)^{s}B^{l-s})}{(l-s)!(l-s-m)!(k-2l+s+m)!s!}\] \[+\sum_{m=1}^{l-1}\frac{(-1)^{l-m}C(k,k-m)(k-m)!2^{m}(A^{l}-B^{l}) }{l!(l-m)!(k-2l+m)!}-2(A-B)^{l}\binom{k}{l}\] \[-\sum_{s=1}^{l-1}\frac{k!(A-B)^{s}(A^{l-s}-(-1)^{s}B^{l-s})}{((l- s)!)^{2}(k-2l+s)!s!}+\frac{C(k,k-l)2^{l}(A^{l}-B^{l})}{l!}-\frac{k!(A^{l}-B^{l}) }{(l!)^{2}(k-2l)!}=0.\]
Using the fact that \(C(k,k-m)=\frac{(k+m)!}{(k-m)!2^{m}m!}\) for \(0\leq m<l\), we get,
\[\sum_{s=1}^{l-1}\sum_{m=1}^{l-s}(-1)^{l-m}\frac{(k+m)!(A-B)^{s}( A^{l-s}-(-1)^{s}B^{l-s})}{m!(l-s)!(k-m)!(k-2l+s+m)!s!}-\sum_{m=1}^{l-1}(-1)^{m} \frac{(k+m)!(A^{l}-B^{l})}{l!m!(l-m)!(k-2l+m)!}\] \[-2(A-B)^{l}\binom{k}{l}-\sum_{s=1}^{l-1}\frac{k!(A-B)^{s}(A^{l-s }-(-1)^{s}B^{l-s})}{((l-s)!)^{2}(k-2l+s)!s!}+\frac{C(k,k-l)2^{l}(A^{l}-B^{l}) }{l!}-\frac{k!(A^{l}-B^{l})}{(l!)^{2}(k-2l)!}=0.\]
We can write this as
\[-\sum_{s=1}^{l-1}\frac{(2l-s)!(A-B)^{s}(A^{l-s}-(-1)^{s}B^{l-s} )}{((l-s)!)^{2}s!}\sum_{m=1}^{l-s}(-1)^{m}\binom{k+m}{2l-s}\binom{l-s}{m} \tag{3.22}\] \[-\frac{(2l)!(A^{l}-B^{l})}{(l!)^{2}}\sum_{m=1}^{l-1}(-1)^{m}\binom {k+m}{2l}\binom{l}{m}-2(A-B)^{l}\binom{k}{l}\] \[-\sum_{s=1}^{l-1}\binom{k}{2l-s}\binom{2l-s}{l}\binom{l}{s}(A-B) ^{s}(A^{l-s}-(-1)^{s}B^{l-s})\] \[+\frac{C(k,k-l)2^{l}(A^{l}-B^{l})}{l!}-\frac{k!(A^{l}-B^{l})}{(l!) ^{2}(k-2l)!}=0.\]
Our next step is to simplify the first summand in (3.22) above, which we denote by \(\beta\):
\[\beta=-\sum_{s=1}^{l-1}\frac{(2l-s)!(A-B)^{s}(A^{l-s}-(-1)^{s}B^{l-s})}{((l-s)!)^{ 2}s!}\sum_{m=1}^{l-s}(-1)^{m}\binom{k+m}{2l-s}\binom{l-s}{m}.\]
We first simplify the inner sum over \(m\) in \(\beta\). Using the Vandermonde identity [27], we write
\[\binom{k+m}{2l-s}=\sum_{j=1}^{m}\binom{k}{2l-s-j}\binom{m}{j}+\binom{k}{2l-s},\]
the second term on the right being the term corresponding to the index \(j=0\) from the first sum. Using this, we have,
\[\sum_{m=1}^{l-s}(-1)^{m}\binom{k+m}{2l-s}\binom{l-s}{m}= \sum_{m=1}^{l-s}(-1)^{m}\Bigg{\{}\sum_{j=1}^{m}\binom{k}{2l-s-j} \binom{m}{j}+\binom{k}{2l-s}\Bigg{\}}\binom{l-s}{m}\] \[= \sum_{m=1}^{l-s}(-1)^{m}\Bigg{\{}\sum_{j=1}^{m}\binom{k}{2l-s-j} \binom{m}{j}\binom{l-s}{m}+\binom{k}{2l-s}\binom{l-s}{m}\Bigg{\}}\] \[= \sum_{m=1}^{l-s}(-1)^{m}\Bigg{\{}\sum_{j=1}^{m}\binom{k}{2l-s-j} \binom{l-s}{j}\binom{l-s-j}{l-s-m}+\binom{k}{2l-s}\binom{l-s}{m}\Bigg{\}}.\]
In the last equality, we have used the standard fact:
\[\binom{a}{b}\binom{b}{c}=\binom{a}{c}\binom{a-c}{b-c}=\binom{a}{c}\binom{a-c}{ a-b}.\]
Let us interchange the order of summation. We then get,
\[\sum_{m=1}^{l-s}(-1)^{m}\binom{k+m}{2l-s}\binom{l-s}{m} =\sum_{j=1}^{l-s}\sum_{m=j}^{l-s}(-1)^{m}\binom{k}{2l-s-j}\binom{l- s}{j}\binom{l-s-j}{l-s-m} \tag{3.23}\] \[+\sum_{m=1}^{l-s}(-1)^{m}\binom{k}{2l-s}\binom{l-s}{m}. \tag{3.24}\]
Note that
\[\sum_{m=0}^{l-s}(-1)^{m}\binom{l-s}{m}=0.\]
Hence (3.24) simplifies to
\[\sum_{m=1}^{l-s}(-1)^{m}\binom{k}{2l-s}\binom{l-s}{m}=-\binom{k}{2l-s}.\]
Next let us consider the first summand on the right in (3.23). We write
\[\sum_{j=1}^{l-s}\sum_{m=j}^{l-s}(-1)^{m}\binom{k}{2l-s-j}\binom{l -s}{j}\binom{l-s-j}{l-s-m}\] \[=\sum_{j=1}^{l-s-1}\sum_{m=j}^{l-s}(-1)^{m}\binom{k}{2l-s-j} \binom{l-s}{j}\binom{l-s-j}{l-s-m}+(-1)^{l-s}\binom{k}{l}\] \[=\sum_{j=1}^{l-s-1}\binom{k}{2l-s-j}\binom{l-s}{j}\sum_{m=j}^{l-s} (-1)^{m}\binom{l-s-j}{l-s-m}+(-1)^{l-s}\binom{k}{l}.\]
We have that
\[\sum_{m=j}^{l-s}(-1)^{m}\binom{l-s-j}{l-s-m}=0.\]
Hence
\[\sum_{j=1}^{l-s}\sum_{m=j}^{l-s}(-1)^{m}\binom{k}{2l-s-j}\binom{l-s}{j} \binom{l-s-j}{l-s-m}=(-1)^{l-s}\binom{k}{l}.\]
Putting this together, we get
\[\sum_{m=1}^{l-s}(-1)^{m}\binom{k+m}{2l-s}\binom{l-s}{m}=(-1)^{l-s}\binom{k}{l}- \binom{k}{2l-s}. \tag{3.25}\]
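Identity (3.25) can be confirmed independently by a brute-force computation over a small range of parameters; a minimal sketch (in Python, with exact integer arithmetic) is given below.

```python
# Brute-force check of identity (3.25) over a small parameter range.
from math import comb

for k in range(0, 12):
    for l in range(1, 6):
        for s in range(1, l):                  # as in the proof, 1 <= s <= l-1
            lhs = sum((-1) ** m * comb(k + m, 2 * l - s) * comb(l - s, m)
                      for m in range(1, l - s + 1))
            rhs = (-1) ** (l - s) * comb(k, l) - comb(k, 2 * l - s)
            assert lhs == rhs, (k, l, s)
print("Identity (3.25) holds on the tested range.")
```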
Substituting (3.25) into \(\beta\), we have
\[\beta =-\binom{k}{l}\sum_{s=1}^{l-1}(-1)^{l-s}\frac{(2l-s)!(A-B)^{s}(A^{ l-s}-(-1)^{s}B^{l-s})}{((l-s)!)^{2}s!}\] \[+\sum_{s=1}^{l-1}\binom{k}{2l-s}\frac{(2l-s)!(A-B)^{s}(A^{l-s}-(-1 )^{s}B^{l-s})}{((l-s)!)^{2}s!}\]
We write
\[\frac{(2l-s)!}{((l-s)!)^{2}s!}=\binom{2l-s}{l-s}\binom{l}{s}=\binom{2l-s}{l} \binom{l}{s}.\]
With this, we have
\[\beta =-\binom{k}{l}\sum_{s=1}^{l-1}(-1)^{l-s}\binom{2l-s}{l}\binom{l}{ s}(A-B)^{s}(A^{l-s}-(-1)^{s}B^{l-s})\] \[+\sum_{s=1}^{l-1}\binom{k}{2l-s}\binom{2l-s}{l}\binom{l}{s}(A-B)^ {s}(A^{l-s}-(-1)^{s}B^{l-s})\] \[=\binom{k}{l}\sum_{s=1}^{l-1}(-1)^{s}\binom{2l-s}{l}\binom{l}{s}(A -B)^{s}(A^{l-s}-(-1)^{s}B^{l-s})\] \[+\binom{k}{l}\sum_{s=1}^{l-1}\binom{k-l}{l-s}\binom{l}{s}(A-B)^{s} (A^{l-s}-(-1)^{s}B^{l-s}).\]
Note that in the second line from bottom above, we have used the fact that \(l\) is odd. Let us write
\[\binom{k}{l}\sum_{s=1}^{l-1}(-1)^{s}\binom{2l-s}{l}\binom{l}{s}(A- B)^{s}(A^{l-s}-(-1)^{s}B^{l-s})\] \[=\binom{k}{l}\Bigg{\{}\sum_{s=0}^{l}(-1)^{s}\binom{2l-s}{l} \binom{l}{s}(A-B)^{s}(A^{l-s}-(-1)^{s}B^{l-s})-\frac{(2l)!}{(l!)^{2}}(A^{l}- B^{l})+2(A-B)^{l}\Bigg{\}}.\]
Hence
\[\beta =\binom{k}{l}\Bigg{\{}\sum_{s=0}^{l}(-1)^{s}\binom{2l-s}{l} \binom{l}{s}(A-B)^{s}(A^{l-s}-(-1)^{s}B^{l-s})-\frac{(2l)!}{(l!)^{2}}(A^{l}- B^{l})+2(A-B)^{l}\Bigg{\}}\] \[+\binom{k}{l}\sum_{s=1}^{l-1}\binom{k-l}{l-s}\binom{l}{s}(A-B)^{ s}(A^{l-s}-(-1)^{s}B^{l-s}).\]
**Lemma 3.1**.: _We have that for any \(A\) and \(B\) and for any \(l\geq 0\),_
\[\sum_{s=0}^{l}(-1)^{s}\binom{2l-s}{l}\binom{l}{s}(A-B)^{s}(A^{l-s}-(-1)^{s}B ^{l-s})=0.\]
Proof.: We first split the left hand side as follows:
\[\sum_{s=0}^{l}(-1)^{s}\binom{2l-s}{l}\binom{l}{s}(A-B)^{s}(A^{l- s}-(-1)^{s}B^{l-s}) =\sum_{s=0}^{l}\binom{2l-s}{l}\binom{l}{s}(B-A)^{s}A^{l-s}\] \[-\sum_{s=0}^{l}\binom{2l-s}{l}\binom{l}{s}(A-B)^{s}B^{l-s}.\]
Here, and at several instances throughout the rest of the paper, we use the contour integration technique pioneered by Egorychev [22] for evaluating combinatorial sums. We write
\[\binom{2l-s}{l}=\frac{1}{2\pi\mathrm{i}}\int\limits_{|z|=\varepsilon}\frac{(1+z) ^{2l-s}}{z^{l+1}}\mathrm{d}z,\]
and hence
\[\sum\limits_{s=0}^{l}\binom{2l-s}{l}\binom{l}{s}(B-A)^{s}A^{l-s} =\frac{1}{2\pi\mathrm{i}}\int\limits_{|z|=\varepsilon}\sum\limits _{s=0}^{l}\binom{l}{s}(B-A)^{s}A^{l-s}\frac{(1+z)^{2l-s}}{z^{l+1}}\mathrm{d}z\] \[=\frac{1}{2\pi\mathrm{i}}\int\limits_{|z|=\varepsilon}\sum\limits _{s=0}^{l}\binom{l}{s}\left(\frac{B-A}{1+z}\right)^{s}A^{l-s}\frac{(1+z)^{2l}} {z^{l+1}}\mathrm{d}z\] \[=\frac{1}{2\pi\mathrm{i}}\int\limits_{|z|=\varepsilon}\left(A+ \frac{B-A}{1+z}\right)^{l}\frac{(1+z)^{2l}}{z^{l+1}}\mathrm{d}z\] \[=\frac{1}{2\pi\mathrm{i}}\int\limits_{|z|=\varepsilon}\frac{((B+ Az)(1+z))^{l}}{z^{l+1}}\mathrm{d}z.\]
Similarly,
\[\sum\limits_{s=0}^{l}\binom{2l-s}{l}\binom{l}{s}(A-B)^{s}B^{l-s}=\frac{1}{2 \pi\mathrm{i}}\int\limits_{|z|=\varepsilon}\frac{((A+Bz)(1+z))^{l}}{z^{l+1}} \mathrm{d}z.\]
We would like to show then that
\[\frac{1}{2\pi\mathrm{i}}\int\limits_{|z|=\varepsilon}\frac{((B+Az)(1+z))^{l}}{ z^{l+1}}\mathrm{d}z-\frac{1}{2\pi\mathrm{i}}\int\limits_{|z|=\varepsilon}\frac{((A+Bz) (1+z))^{l}}{z^{l+1}}\mathrm{d}z=0.\]
Expanding \((B+Az)^{l}(1+z)^{l}\) using binomial theorem, we get,
\[(B+Az)^{l}(1+z)^{l}=\sum\limits_{u,v=0}^{l}\binom{l}{u}\binom{l}{v}B^{u}A^{l- u}z^{l-u}z^{v}.\]
Then
\[\frac{(B+Az)^{l}(1+z)^{l}}{z^{l+1}}=\frac{\sum\limits_{u,v=0}^{l}\binom{l}{u} \binom{l}{v}B^{u}A^{l-u}z^{v-u}}{z}.\]
Hence
\[\frac{1}{2\pi\mathrm{i}}\int\limits_{|z|=\varepsilon}\frac{((B+Az)(1+z))^{l}} {z^{l+1}}\mathrm{d}z=\sum\limits_{u=0}^{l}\binom{l}{u}^{2}B^{u}A^{l-u},\]
by Cauchy's theorem combined with the fact that for any negative power of \(z\) that is not \(-1\), the integral vanishes, since \(z^{-p}\) has a primitive in a neighborhood of \(|z|=\varepsilon\) for \(p\neq 1\). Similarly,
\[\frac{(A+Bz)^{l}(1+z)^{l}}{z^{l+1}}=\frac{\sum\limits_{u,v=0}^{l}\binom{l}{u} \binom{l}{v}A^{u}B^{l-u}z^{v-u}}{z}.\]
Exactly the same argument gives,
\[\frac{1}{2\pi\mathrm{i}}\int\limits_{|z|=\varepsilon}\frac{(A+Bz)^{l}(1+z)^{l }}{z^{l+1}}\mathrm{d}z=\sum\limits_{u=0}^{l}\binom{l}{u}^{2}A^{u}B^{l-u}.\]
Since \(\binom{l}{u}=\binom{l}{l-u}\), these two sums are the same. This concludes the proof of the lemma.
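Lemma 3.1 can also be verified symbolically for small values of \(l\); the following sketch (a redundant check, with \(A\) and \(B\) kept as indeterminates) does so with `sympy`.

```python
# Symbolic confirmation of Lemma 3.1 for l = 0, ..., 7.
import sympy as sp
from sympy import binomial

A, B = sp.symbols('A B')
for l in range(0, 8):
    total = sum(
        (-1) ** s * binomial(2 * l - s, l) * binomial(l, s)
        * (A - B) ** s * (A ** (l - s) - (-1) ** s * B ** (l - s))
        for s in range(l + 1)
    )
    assert sp.expand(total) == 0, l
print("Lemma 3.1 verified for l = 0, ..., 7.")
```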
Going back to the proof of the result, we have now
\[\beta=\binom{k}{l}\Bigg{\{}-\frac{(2l)!}{(l!)^{2}}(A^{l}-B^{l})+2(A-B)^{l}\Bigg{\}} +\binom{k}{l}\sum_{s=1}^{l-1}\binom{k-l}{l-s}\binom{l}{s}(A-B)^{s}(A^{l-s}-(-1) ^{s}B^{l-s}).\]
Substituting this into (3.22), we then get,
\[0 =\binom{k}{l}\Bigg{\{}-\frac{(2l)!}{(l!)^{2}}(A^{l}-B^{l})+2(A-B)^ {l}\Bigg{\}}+\binom{k}{l}\sum_{s=1}^{l-1}\binom{k-l}{l-s}\binom{l}{s}(A-B)^{s}( A^{l-s}-(-1)^{s}B^{l-s})\] \[-\frac{(2l)!(A^{l}-B^{l})}{(l!)^{2}}\sum_{m=1}^{l-1}(-1)^{m}\binom {k+m}{2l}\binom{l}{m}-2(A-B)^{l}\binom{k}{l}\] \[-\sum_{s=1}^{l-1}\binom{k}{2l-s}\binom{2l-s}{l}\binom{l}{s}(A-B) ^{s}(A^{l-s}-(-1)^{s}B^{l-s})+\frac{C(k,k-l)2^{l}(A^{l}-B^{l})}{l!}-\frac{k!(A ^{l}-B^{l})}{(l!)^{2}(k-2l)!}. \tag{3.26}\]
Finally, let us simplify the third summand on the right:
\[\sum_{m=1}^{l-1}(-1)^{m}\binom{k+m}{2l}\binom{l}{m}.\]
We have, again using Vandermonde identity,
\[\sum_{m=1}^{l-1}(-1)^{m}\binom{k+m}{2l}\binom{l}{m}=\sum_{m=1}^{l-1}(-1)^{m} \sum_{j=1}^{m}\binom{k}{2l-j}\binom{m}{j}\binom{l}{m}+\sum_{m=1}^{l-1}(-1)^{m} \binom{k}{2l}\binom{l}{m}.\]
We note that, in the second sum above, since \(l\) is odd the omitted terms \(m=0\) and \(m=l\) of the full alternating sum \(\sum_{m=0}^{l}(-1)^{m}\binom{l}{m}=0\) are \(1\) and \(-1\), and they cancel each other. Hence the second summand on the right is \(0\). Then
\[\sum_{m=1}^{l-1}(-1)^{m}\binom{k+m}{2l}\binom{l}{m}=\sum_{m=1}^{l-1}(-1)^{m} \sum_{j=1}^{m}\binom{k}{2l-j}\binom{l}{j}\binom{l-j}{m-j}.\]
Interchanging the order of summation, we get,
\[\sum_{m=1}^{l-1}(-1)^{m}\binom{k+m}{2l}\binom{l}{m}= \sum_{j=1}^{l-1}\binom{k}{2l-j}\binom{l}{j}\sum_{m=j}^{l-1}(-1)^ {m}\binom{l-j}{m-j}\] \[= \sum_{j=1}^{l-1}\binom{k}{2l-j}\binom{l}{j}.\]
Note that the second line follows due to the fact that
\[0 =\sum_{m=j}^{l}(-1)^{m}\binom{l-j}{m-j}\] \[=\sum_{m=j}^{l-1}(-1)^{m}\binom{l-j}{m-j}+(-1)^{l}.\]
Since \(l\) is assumed to be odd, we get,
\[\sum_{m=j}^{l-1}(-1)^{m}\binom{l-j}{m-j}=1.\]
We have,
\[\sum_{k}\binom{p}{k}\binom{q-j}{q-k}=\binom{p+q-j}{q}.\]
Using this formula, we get,
\[\sum_{m=1}^{l-1}(-1)^{m}\binom{k+m}{2l}\binom{l}{m}=\binom{k+l}{2l}-\binom{k} {2l}-\binom{k}{l}.\]
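This identity uses the parity of \(l\) in an essential way; the following brute-force check (a sketch over a small range of odd \(l\), with exact integer arithmetic) confirms it.

```python
# Brute-force check of the identity above for odd l.
from math import comb

for k in range(0, 20):
    for l in range(1, 10, 2):                  # l odd, as in the derivation
        lhs = sum((-1) ** m * comb(k + m, 2 * l) * comb(l, m) for m in range(1, l))
        rhs = comb(k + l, 2 * l) - comb(k, 2 * l) - comb(k, l)
        assert lhs == rhs, (k, l)
print("The identity holds on the tested range of odd l.")
```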
Using this in (3.26), we have
\[\begin{split} 0&=\binom{k}{l}\Bigg{\{}-\frac{(2l)!}{(l!)^{2}}( A^{l}-B^{l})+2(A-B)^{l}\Bigg{\}}+\binom{k}{l}\sum_{s=1}^{l-1}\binom{k-l}{l-s} \binom{l}{s}(A-B)^{s}(A^{l-s}-(-1)^{s}B^{l-s})\\ &-\frac{(2l)!(A^{l}-B^{l})}{(l!)^{2}}\Bigg{\{}\binom{k+l}{2l}- \binom{k}{2l}-\binom{k}{l}\Bigg{\}}\\ &-2(A-B)^{l}\binom{k}{l}-\sum_{s=1}^{l-1}\binom{k}{2l-s}\binom{2l -s}{l}\binom{l}{s}(A-B)^{s}(A^{l-s}-(-1)^{s}B^{l-s})\\ &+\frac{C(k,k-l)2^{l}(A^{l}-B^{l})}{l!}-\frac{k!(A^{l}-B^{l})}{( l!)^{2}(k-2l)!}.\end{split} \tag{3.27}\]
The second and fifth terms on the right in (3.27) cancel. Further cancelling out other common terms in (3.27), we arrive at
\[C(k,k-l)=\frac{(k+l)!}{(k-l)!2^{l}l!}.\]
This completes the induction step. A similar argument can be employed for the case of \(l\) even and for this reason we will skip the proof.
Going back to (3.19), we have found coefficients \(C(k,p)\) for which this equation holds. In other words, we have obtained the following:
\[\sum_{p=0}^{k}C(k,p)(1-t)^{p}\left([D^{p}P](t)+(-1)^{p+1}[D^{p}P](2-t)\right)=0,\]
where
\[C(k,p)=\frac{(2k-p)!}{p!2^{k-p}(k-p)!}.\]
Since, as already mentioned above, the \(D\) derivatives up to order \(k\) of \(h(t)\) are applied only to \(P(t,u)\), we have obtained the following necessary condition for a function \(g\in C_{c}^{\infty}((0,2))\) to be the spherical mean transform of a smooth radial function supported in the unit ball in \(\mathbb{R}^{n}\), where \(n=2k+3\): letting \(h(t)=t^{n-2}g(t)\), we have for all \(t\in[0,1]\)
\[\Bigg{\{}\sum_{p=0}^{k}C(k,p)(1-\cdot)^{p}[D^{p}h(\cdot)]\Bigg{\}}(t)=\Bigg{\{} \sum_{p=0}^{k}C(k,p)(1-\cdot)^{p}[D^{p}h(\cdot)]\Bigg{\}}(2-t). \tag{3.28}\]
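The identity established above, namely that \(\sum_{p=0}^{k}C(k,p)(1-t)^{p}\left([D^{p}P](t,u)+(-1)^{p+1}[D^{p}P](2-t,u)\right)\) vanishes identically for \(P=Q^{k}\), can be verified symbolically for small \(k\); a short `sympy` sketch (a redundant confirmation, not part of the proof) is given below.

```python
# Symbolic verification of the identity behind (3.28) for k = 0, ..., 4.
import sympy as sp

t, u = sp.symbols('t u')

def D(expr):                                   # D = (1/t) d/dt
    return sp.expand(sp.diff(expr, t) / t)

for k in range(0, 5):
    Q = 2 * (u ** 2 + 1) * t ** 2 - t ** 4 - (1 - u ** 2) ** 2
    derivs = [sp.expand(Q ** k)]
    for _ in range(k):                         # D^p P for p = 0, ..., k
        derivs.append(D(derivs[-1]))
    total = 0
    for p in range(k + 1):
        Ckp = sp.Rational(sp.factorial(2 * k - p),
                          sp.factorial(p) * 2 ** (k - p) * sp.factorial(k - p))
        total += Ckp * (1 - t) ** p * (derivs[p] + (-1) ** (p + 1) * derivs[p].subs(t, 2 - t))
    assert sp.expand(total) == 0, k
print("The identity behind (3.28) holds for k = 0, ..., 4.")
```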
#### 3.1.2. Proof of sufficiency in Theorem 1.1
In this subsection, we give the proof of sufficiency of Theorem 1.1. We start with a result about special functions that could be of independent interest.
**Theorem 3.2**.: _Let \(h(t)\in C_{c}^{\infty}((0,2))\) satisfy the following evenness condition:_
\[\Bigg{\{}\sum_{p=0}^{k}C(k,p)(1-\cdot)^{p}[D^{p}h](\cdot)\Bigg{\}}(1-t)= \Bigg{\{}\sum_{p=0}^{k}C(k,p)(1-\cdot)^{p}[D^{p}h](\cdot)\Bigg{\}}(1+t)\text{ for all }t\in[0,1]. \tag{3.29}\]
_Then \(h(t)\) satisfies the following identity: For \(\lambda>0\):_
\[\left(\int\limits_{0}^{\infty}j_{k+\frac{1}{2}}(\lambda t)th(t)\mathrm{d}t \right)y_{k+\frac{1}{2}}(\lambda)=\left(\int\limits_{0}^{\infty}y_{k+\frac{1} {2}}(\lambda t)th(t)\mathrm{d}t\right)j_{k+\frac{1}{2}}(\lambda). \tag{3.30}\]
Proof.: We define
\[H(t)=\sum_{p=0}^{k}C(k,p)(1-t)^{p}[D^{p}h](t).\]
We observe the following properties of \(H(t)\).
1. \(H(t)=H(2-t)\) for \(0\leq t\leq 1\),
2. \(H(t)=0\) for \(t>2\).
We claim that the following integral
\[I_{k}=\int\limits_{0}^{\infty}\sum\limits_{p=0}^{k}C(k,p)(1-t)^{p}[D^{p}h](t)j_{ k+\frac{1}{2}}(\lambda(t-1))(t-1)\mathrm{d}t=0. \tag{3.31}\]
To see this, first of all, we observe that due to the support condition on \(h\), \(H(t)\) has non-trivial support only in \((0,2)\). Then
\[I_{k} =\int\limits_{0}^{2}\sum\limits_{p=0}^{k}C(k,p)(1-t)^{p}[D^{p}h](t )j_{k+\frac{1}{2}}(\lambda(t-1))(t-1)\mathrm{d}t\] \[=\int\limits_{0}^{1}\sum\limits_{p=0}^{k}C(k,p)(1-t)^{p}[D^{p}h]( t)j_{k+\frac{1}{2}}(\lambda(t-1))(t-1)\mathrm{d}t\] \[+\int\limits_{1}^{2}\sum\limits_{p=0}^{k}C(k,p)(1-t)^{p}[D^{p}h]( t)j_{k+\frac{1}{2}}(\lambda(t-1))(t-1)\mathrm{d}t.\]
Substituting \(t\) by \(2-t\) in the second integral, noting that \(j_{k+\frac{1}{2}}(x)\) is an even function in \(x\), and using (3.29), we have
\[I_{k} =\int\limits_{0}^{1}\sum\limits_{p=0}^{k}C(k,p)(1-t)^{p}[D^{p}h] (t)j_{k+\frac{1}{2}}(\lambda(t-1))(t-1)\mathrm{d}t\] \[-\int\limits_{0}^{1}\sum\limits_{p=0}^{k}C(k,p)(1-t)^{p}[D^{p}h] (t)j_{k+\frac{1}{2}}(\lambda(t-1))(t-1)\mathrm{d}t=0.\]
Next we have
\[0 =I_{k}=\int\limits_{0}^{\infty}\sum\limits_{p=0}^{k}C(k,p)(1-t)^{ p}[D^{p}h](t)j_{k+\frac{1}{2}}(\lambda(t-1))(t-1)\mathrm{d}t\] \[=\int\limits_{0}^{\infty}\sum\limits_{p=0}^{k}C(k,p)th(t)D^{p} \left(\frac{(t-1)^{p+1}j_{k+\frac{1}{2}}(\lambda(t-1))}{t}\right)\mathrm{d}t.\]
Substituting \(t\) by \(-t\), we get
\[I_{k}=-\int\limits_{-\infty}^{0}\sum\limits_{p=0}^{k}(-1)^{p}C(k,p)D^{p} \left(\frac{(t+1)^{p+1}j_{k+\frac{1}{2}}(\lambda(t+1))}{t}\right)th(-t) \mathrm{d}t.\]
From Theorem 3.3 below, we obtain the following equality. This is a technical result, and in order not to disturb the flow of the proof, we prefer to give it at the end.
\[I_{k}=(-1)^{k+1}\int\limits_{-\infty}^{0}\left\{D^{k}\left(\frac{\sin( \lambda t)}{t}\right)y_{k+\frac{1}{2}}(\lambda)+D^{k}\left(\frac{\cos(\lambda t )}{t}\right)j_{k+\frac{1}{2}}(\lambda)\right\}th(-t)\mathrm{d}t.\]
From the formulas in Lemma 3.4, we see that
\[D^{k}\left(\frac{\sin\lambda(\cdot)}{(\cdot)}\right)(-t) =D^{k}\left(\frac{\sin\lambda(\cdot)}{(\cdot)}\right)(t),\] \[D^{k}\left(\frac{\cos\lambda(\cdot)}{(\cdot)}\right)(-t) =-D^{k}\left(\frac{\cos\lambda(\cdot)}{(\cdot)}\right)(t).\]
Letting \(t\to-t\) in the integral above, we have
\[0=(-1)^{k}\int\limits_{0}^{\infty}\left\{D^{k}\left(\frac{\sin(\lambda t)}{t }\right)y_{k+\frac{1}{2}}(\lambda)-D^{k}\left(\frac{\cos(\lambda t)}{t}\right) j_{k+\frac{1}{2}}(\lambda)\right\}th(t)\mathrm{d}t.\]
Hence
\[\left(\int\limits_{0}^{\infty}D^{k}\left(\frac{\sin(\lambda t)}{t}\right)t\,h(t)\,\mathrm{d}t\right)y_{k+\frac{1}{2}}(\lambda)=\left(\int\limits_{0}^{\infty}D^{k}\left(\frac{\cos(\lambda t)}{t}\right)t\,h(t)\,\mathrm{d}t\right)j_{k+\frac{1}{2}}(\lambda). \tag{3.32}\]
The above formula (3.32) can be written in a more symmetric form as follows. For \(\lambda>0\):
\[\left(\int\limits_{0}^{\infty}j_{k+\frac{1}{2}}(\lambda t)th(t)\mathrm{d}t \right)y_{k+\frac{1}{2}}(\lambda)=\left(\int\limits_{0}^{\infty}y_{k+\frac{1}{ 2}}(\lambda t)th(t)\mathrm{d}t\right)j_{k+\frac{1}{2}}(\lambda).\]
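A numerical illustration of this identity in dimension \(n=5\) (i.e. \(k=1\)) is given below; it is a sketch only (assuming `numpy`/`scipy`), with \(h=t^{3}\mathcal{R}f\) built from an arbitrary radial \(f\) supported in \(\{|x|\leq 0.8\}\), so that \(h\) is compactly supported in \((0,2)\). The two printed columns should agree up to quadrature error.

```python
# Numerical illustration of the cross-product identity (3.30)/(1.5) for n = 5
# (k = 1), with h(t) = t^3 Rf(t) for a radial f supported in |x| <= 0.8.
import numpy as np
from scipy.integrate import quad

f = lambda u: np.where(u < 0.8, (0.64 - u ** 2) ** 4, 0.0)

def Rf(t):
    # formula (2.3) for n = 5: k = 1 and omega_4 / omega_5 = 3/4
    return 0.75 * quad(lambda s: f(np.sqrt(1 + t ** 2 + 2 * s * t)) * (1 - s ** 2), -1, 1)[0]

h = lambda t: t ** 3 * Rf(t)

# normalized spherical Bessel functions of order 3/2 (Section 2.1 normalization)
j32 = lambda x: 3.0 * (np.sin(x) - x * np.cos(x)) / x ** 3
y32 = lambda x: 3.0 * (np.cos(x) + x * np.sin(x)) / x ** 3

# h vanishes outside [0.2, 1.8], so the integrals over (0, infinity) reduce to that interval
for lam in (2.0, 5.0, 9.3):
    I_j = quad(lambda t: h(t) * j32(lam * t) * t, 0.2, 1.8, limit=200)[0]
    I_y = quad(lambda t: h(t) * y32(lam * t) * t, 0.2, 1.8, limit=200)[0]
    print(f"lambda = {lam}:   {I_j * y32(lam):+.6e}   vs   {I_y * j32(lam):+.6e}")
```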
**Theorem 3.3**.: _Let \(j_{k+1/2}(x)=D^{k}\left(\frac{\sin x}{x}\right)\) be the spherical Bessel function of the first kind modulo constants and recall \(C(k,p)=\frac{(2k-p)!}{p!\,2^{k-p}(k-p)!}\). For \(\lambda>0\) and \(t\neq 0\),_
\[\begin{split} M_{k}(\lambda)&:=\sum_{p=0}^{k}C(k, p)(-1)^{p}D^{p}\left(\frac{(1+t)^{p+1}j_{k+\frac{1}{2}}(\lambda(1+t))}{t} \right)\\ &=(-1)^{k}\left\{D^{k}\left(\frac{\sin(\lambda t)}{t}\right)y_{k+ \frac{1}{2}}(\lambda)+D^{k}\left(\frac{\cos(\lambda t)}{t}\right)j_{k+\frac{1 }{2}}(\lambda)\right\},\end{split} \tag{3.33}\]
_where_
\[y_{k+\frac{1}{2}}(x)=D^{k}\left(\frac{\cos x}{x}\right),\]
_is the spherical Bessel function of the second kind modulo constants._
We collect a few formulas first:
**Lemma 3.4**.: _We have_
\[D^{p}\left(\frac{\sin x}{x}\right)=\sum_{l=0}^{p}\frac{C(p,l)x^{ l}}{x^{2p+1}}\left\{\sin x\left(\frac{(-1)^{l}+1}{2}\right)(-1)^{p+\frac{l}{2} }+\cos x\left(\frac{(-1)^{l+1}+1}{2}\right)(-1)^{p+\frac{l+1}{2}}\right\} \tag{3.34}\] \[D^{p}\left(\frac{\cos x}{x}\right)=\sum_{l=0}^{p}\frac{C(p,l)x^{ l}}{x^{2p+1}}\left\{\cos x\left(\frac{(-1)^{l}+1}{2}\right)(-1)^{p+\frac{l}{2} }-\sin x\left(\frac{(-1)^{l+1}+1}{2}\right)(-1)^{p+\frac{l+1}{2}}\right\}\] (3.35) \[D^{m}\left(\frac{1}{t(t+1)^{d}}\right)=(-1)^{m}\sum_{r=0}^{m} \frac{C(m,r)\binom{d+r-1}{r}r!}{t^{2m+1-r}(t+1)^{d+r}},\text{ with the convention that }\binom{n}{0}=1\text{ for }n\in\mathbb{Z}. \tag{3.36}\]
The proofs of these formulas follow in a straightforward manner by induction and will be skipped.
Proof.: We begin the proof of Theorem 3.3. We can assume in what follows that \(t\neq-1\). The result for the case \(t=-1\) will follow from continuity.
We have
\[(-1)^{k}M_{k} =\sum_{p=0}^{k}\sum_{l=0}^{k}\frac{(-1)^{p}C(k,p)C(k,l)}{\lambda ^{2k+1-l}}D^{p}\Bigg{[}\frac{1}{t(1+t)^{2k-l-p}}\] \[\times\Bigg{\{}\cos\lambda t\Big{\{}(-1)^{l/2}\left(\frac{(-1)^{l }+1}{2}\right)\sin\lambda+(-1)^{(l+1)/2}\left(\frac{(-1)^{l+1}+1}{2}\right) \cos\lambda\Big{\}}\] \[+\sin\lambda t\Big{\{}(-1)^{l/2}\left(\frac{(-1)^{l}+1}{2} \right)\cos\lambda-(-1)^{(l+1)/2}\left(\frac{(-1)^{l+1}+1}{2}\right)\sin \lambda\Big{\}}\Bigg{\}}\Bigg{]}.\]
For simplicity of notation, we will denote the following:
\[U_{l} =\Big{\{}(-1)^{l/2}\left(\frac{(-1)^{l}+1}{2}\right)\sin\lambda+( -1)^{(l+1)/2}\left(\frac{(-1)^{l+1}+1}{2}\right)\cos\lambda\Big{\}},\] \[V_{l} =\Big{\{}(-1)^{l/2}\left(\frac{(-1)^{l}+1}{2}\right)\cos\lambda-( -1)^{(l+1)/2}\left(\frac{(-1)^{l+1}+1}{2}\right)\sin\lambda\Big{\}}.\]
Then we have
\[(-1)^{k}M_{k} =\sum_{p=0}^{k}\sum_{l=0}^{k}\frac{(-1)^{p}C(k,p)C(k,l)}{\lambda^{2k+ 1-l}}D^{p}\Bigg{\{}\frac{1}{t(1+t)^{2k-l-p}}\left(\cos\lambda tU_{l}+\sin\lambda t V _{l}\right)\Bigg{\}}\] \[=\sum_{p=0}^{k}\sum_{l=0}^{k}\frac{(-1)^{p}C(k,p)C(k,l)U_{l}}{ \lambda^{2k+1-l}}D^{p}\Bigg{\{}\frac{1}{t(1+t)^{2k-l-p}}\cos\lambda t\Bigg{\}}\] \[+\sum_{p=0}^{k}\sum_{l=0}^{k}\frac{(-1)^{p}C(k,p)C(k,l)V_{l}}{ \lambda^{2k+1-l}}D^{p}\Bigg{\{}\frac{1}{t(1+t)^{2k-l-p}}\sin\lambda t\Bigg{\}}.\]
Using the expressions for the derivatives from Lemma 3.4, and after some rearrangements, we get
\[(-1)^{k}M_{k} =\sum_{p=0}^{k}\sum_{l=0}^{k}\sum_{r=0}^{p}\frac{C(k,p)C(k,l)C(p, r)!{2^{k-l-p+r-1}\choose r}}{\lambda^{2k+1-l}t^{2p+1-r}(1+t)^{2k-l-p+r}}\Big{\{} \cos\lambda tU_{l}+\sin\lambda tV_{l}\Big{\}}\] \[-\sum_{p=1}^{k}\sum_{l=0}^{k}\sum_{m=0}^{p-1}\sum_{r=0}^{m}\sum_{ s=0}^{p-m-1}\frac{C(k,p)C(k,l){p\choose m}C(m,r)C(p-m-1,s)r!{2^{k-p-l+r-1} \choose r}}{\lambda^{2k-l-s}t^{2p-r-s}(1+t)^{2k-l-p+r}}\] \[\times\Bigg{[}U_{l}\Big{\{}(-1)^{1+\frac{s}{2}}\left(\frac{(-1)^ {s}+1}{2}\right)\sin\lambda t+(-1)^{1+\frac{s+1}{2}}\left(\frac{(-1)^{s+1}+1} {2}\right)\cos\lambda t\Big{\}}\] \[-V_{l}\Big{\{}(-1)^{1+\frac{s}{2}}\left(\frac{(-1)^{s}+1}{2} \right)\cos\lambda t-(-1)^{1+\frac{s+1}{2}}\left(\frac{(-1)^{s+1}+1}{2} \right)\sin\lambda t\Big{\}}\Bigg{]}.\]
Note that in the expression above, we have separated the \(m=p\) term. Our motivation for doing so is that we want to use the expressions in Lemma 3.4. When \(m<p\), at least one derivative lands on the \(\sin\) or \(\cos\) term. We carry out one \(D\) derivative and then invoke the expressions from Lemma 3.4 for \(p-m-1\) derivatives of \(\frac{\sin x}{x}\) and \(\frac{\cos x}{x}\). Interchanging the order of summation in the second summand, we get
\[(-1)^{k}M_{k} =\sum_{p=0}^{k}\sum_{l=0}^{k}\sum_{r=0}^{p}\frac{C(k,p)C(k,l)C(p, r)!{2^{k-l-p+r-1}\choose r}}{\lambda^{2k+1-l}t^{2p+1-r}(1+t)^{2k-l-p+r}}\Big{\{} \cos\lambda tU_{l}+\sin\lambda tV_{l}\Big{\}} \tag{3.37}\] \[-\sum_{l=0}^{k}\sum_{s=0}^{k-1}\sum_{p=s+1}^{k}\sum_{r=0}^{p-1-s} \sum_{m=r}^{p-1-s}\frac{C(k,p)C(k,l){p\choose m}C(m,r)C(p-m-1,s)r!{2^{k-p-l+r- 1}\choose r}}{\lambda^{2k-l-s}t^{2p-r-s}(1+t)^{2k-l-p+r}}\] \[\times\Bigg{[}U_{l}\Big{\{}(-1)^{1+\frac{s}{2}}\left(\frac{(-1)^ {s}+1}{2}\right)\sin\lambda t+(-1)^{1+\frac{s+1}{2}}\left(\frac{(-1)^{s+1}+1}{ 2}\right)\cos\lambda t\Big{\}}\] \[-V_{l}\Big{\{}(-1)^{1+\frac{s}{2}}\left(\frac{(-1)^{s}+1}{2} \right)\cos\lambda t-(-1)^{1+\frac{s+1}{2}}\left(\frac{(-1)^{s+1}+1}{2} \right)\sin\lambda t\Big{\}}\Bigg{]}.\]
Next let us simplify the summation in \(m\) in the second summand. We have the following lemma:
**Lemma 3.5**.: _Denote by_
\[C:=\sum_{m=r}^{p-1-s}\frac{1}{p-m}{2m-r\choose m-r}{2(p-1-m)-s\choose p-1-m-s }. \tag{3.38}\]
_Then_
\[C=\frac{1}{s+1}{2p-r-s-1\choose p}.\]
Proof.: This follows directly from the Abel-Aigner identity. For the sake of completeness, we give the proof. The Abel-Aigner identity (see [9, 27]) is as follows:
\[\sum_{k}\frac{r}{tk+r}{tk+r\choose k}{t(n-k)+s\choose n-k}={tn+r+s \choose n}. \tag{3.39}\]
We have
\[C =\sum_{m=r}^{p-1-s}\frac{1}{p-m}\binom{2(m-r)+r}{m-r}\binom{2(p-1-s-m )+s}{p-1-s-m}\] \[=\sum_{m=0}^{p-1-s-r}\frac{1}{p-m-r}\binom{2m+r}{m}\binom{2(p-1-s-r -m)+s}{p-1-s-r-m}\] \[=\sum_{m=0}^{p-1-s-r}\frac{1}{s+1+m}\binom{2(p-1-s-r-m)+r}{p-1-s-r -m}\binom{2m+s}{m}\] \[=\sum_{m=0}^{p-1-s-r}\frac{1}{2m+s+1}\binom{2(p-1-s-r-m)+r}{p-1-s- r-m}\binom{2m+s+1}{m}.\]
In the last but one step, we have replaced the index \(m\) by \(p-1-s-r-m\) and in the last step, we have used the following equality,
\[\frac{1}{m+s+1}\binom{2m+s}{m}=\frac{1}{2m+s+1}\binom{2m+s+1}{m}.\]
Now using Abel-Aigner identity (3.39), we get,
\[C=\frac{1}{s+1}\binom{2(p-1-s-r)+r+s+1}{p-1-s-r}=\frac{1}{s+1}\binom{2p-s-r-1} {p-1-s-r}=\frac{1}{s+1}\binom{2p-s-r-1}{p}.\]
This completes the proof of Lemma 3.5.
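A direct check of Lemma 3.5 on a small range of parameters (a sketch in exact rational arithmetic) is as follows.

```python
# Direct check of Lemma 3.5 for 1 <= p <= 9 and all admissible (r, s).
from math import comb
from fractions import Fraction

for p in range(1, 10):
    for s in range(0, p):
        for r in range(0, p - s):
            lhs = sum(Fraction(1, p - m)
                      * comb(2 * m - r, m - r)
                      * comb(2 * (p - 1 - m) - s, p - 1 - m - s)
                      for m in range(r, p - s))
            rhs = Fraction(1, s + 1) * comb(2 * p - r - s - 1, p)
            assert lhs == rhs, (p, r, s)
print("Lemma 3.5 verified on the tested range.")
```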
Substituting this back in (3.37), we have
\[\frac{(-1)^{k}M_{k}\lambda^{2k}(t+1)^{2k}}{k!} =\sum_{p=0}^{k}\sum_{l=0}^{k}\sum_{r=0}^{p}\frac{C(k,l)\binom{2k-p }{k}\binom{2p-r}{p}\binom{2k-l-p+r-1}{r}\lambda^{l-1}(t+1)^{l+p-r}}{2^{k-r}t^ {2p+1-r}}\] \[\times\Big{\{}\cos\lambda tU_{l}+\sin\lambda tV_{l}\Big{\}}\] \[-\sum_{l=0}^{k}\sum_{s=0}^{k-1}\sum_{p=s+1}^{k}\sum_{r=0}^{p-1-s} \frac{C(k,l)\binom{2k-p}{k}\binom{2p-s-r-1}{p}\binom{2k-p-l+r-1}{r}\lambda^{l+ s}(t+1)^{l+p-r}}{t^{2p-r-s}(s+1)!2^{k-1-r-s}}\] \[\times\Bigg{[}U_{l}\Big{\{}(-1)^{1+\frac{s}{2}}\left(\frac{(-1)^ {s}+1}{2}\right)\sin\lambda t+(-1)^{1+\frac{s+1}{2}}\left(\frac{(-1)^{s+1}+1}{ 2}\right)\cos\lambda t\Big{\}}\] \[-V_{l}\Big{\{}(-1)^{1+\frac{s}{2}}\left(\frac{(-1)^{s}+1}{2} \right)\cos\lambda t-(-1)^{1+\frac{s+1}{2}}\left(\frac{(-1)^{s+1}+1}{2} \right)\sin\lambda t\Big{\}}\Bigg{]}.\]
We note that when \(s=-1\), the term within square parentheses in the second summand above is precisely \(-\left(\cos\lambda t\,U_{l}+\sin\lambda t\,V_{l}\right)\), and the remaining factors match. Therefore the first summand can be absorbed into the second by adding the \(s=-1\) term to the second. We get,
\[\frac{(-1)^{k}M_{k}\lambda^{2k}(t+1)^{2k}}{k!} =-\sum_{l=0}^{k}\sum_{s=-1}^{k-1}\sum_{p=s+1}^{k}\sum_{r=0}^{p-1- s}\frac{C(k,l)\binom{2k-p}{k}\binom{2p-s-r-1}{p}\binom{2k-p-l+r-1}{r} \lambda^{l+s}(t+1)^{l+p-r}}{t^{2p-r-s}(s+1)!2^{k-1-r-s}}\] \[\times\Bigg{[}U_{l}\Big{\{}(-1)^{1+\frac{s}{2}}\left(\frac{(-1)^{ s}+1}{2}\right)\sin\lambda t+(-1)^{1+\frac{s+1}{2}}\left(\frac{(-1)^{s+1}+1}{2} \right)\cos\lambda t\Big{\}}\] \[-V_{l}\Big{\{}(-1)^{1+\frac{s}{2}}\left(\frac{(-1)^{s}+1}{2} \right)\cos\lambda t-(-1)^{1+\frac{s+1}{2}}\left(\frac{(-1)^{s+1}+1}{2} \right)\sin\lambda t\Big{\}}\Bigg{]}.\]
Reindexing in \(s\), we then get,
\[\frac{(-1)^{k}M_{k}\lambda^{2k}(t+1)^{2k}}{k!} =\sum_{l=0}^{k}\sum_{s=0}^{k}\sum_{p=s}^{k}\sum_{r=0}^{p-s}\frac{ C(k,l)\binom{2k-p}{k}\binom{2p-s-r}{p}\binom{2k-p-l+r-1}{r}\lambda^{l+s-1}(t+1)^{l +p-r}}{t^{2p-r-s+1}s!2^{k-r-s}}\] \[\times\Bigg{[}U_{l}\Big{\{}-(-1)^{\frac{s+1}{2}}\left(\frac{(-1) ^{s+1}+1}{2}\right)\sin\lambda t+(-1)^{\frac{s}{2}}\left(\frac{(-1)^{s}+1}{2} \right)\cos\lambda t\Big{\}}\]
\[+V_{l}\Big{\{}(-1)^{\frac{s+1}{2}}\left(\frac{(-1)^{s+1}+1}{2}\right) \cos\lambda t+(-1)^{\frac{s}{2}}\left(\frac{(-1)^{s}+1}{2}\right)\sin\lambda t \Big{\}}\Bigg{]}.\]
For simplicity, we let
\[B_{l,s} =U_{l}\Big{\{}-(-1)^{\frac{s+1}{2}}\left(\frac{(-1)^{s+1}+1}{2} \right)\sin\lambda t+(-1)^{\frac{s}{2}}\left(\frac{(-1)^{s}+1}{2}\right)\cos \lambda t\Big{\}}\] \[+V_{l}\Big{\{}(-1)^{\frac{s+1}{2}}\left(\frac{(-1)^{s+1}+1}{2} \right)\cos\lambda t+(-1)^{\frac{s}{2}}\left(\frac{(-1)^{s}+1}{2}\right)\sin \lambda t\Big{\}},\]
where we recall that
\[U_{l} =\Big{\{}(-1)^{l/2}\left(\frac{(-1)^{l}+1}{2}\right)\sin\lambda+ (-1)^{(l+1)/2}\left(\frac{(-1)^{l+1}+1}{2}\right)\cos\lambda\Big{\}},\] \[V_{l} =\Big{\{}(-1)^{l/2}\left(\frac{(-1)^{l}+1}{2}\right)\cos\lambda- (-1)^{(l+1)/2}\left(\frac{(-1)^{l+1}+1}{2}\right)\sin\lambda\Big{\}}.\]
Replacing \(r\) by \(p-s-r\), we get,
\[\frac{(-2)^{k}\lambda^{2k+1}(1+t)^{2k}M_{k}}{k!}=\sum_{l=0}^{k} \sum_{s=0}^{k}\sum_{p=s}^{k}\sum_{r=0}^{p-s}\frac{2^{p-r}\binom{2k-p}{k}C(k,l )\binom{p+r}{p}\binom{2k-s-l-1-r}{p-s-r}\lambda^{l+s}(1+t)^{l+s+r}}{t^{p+r+1} s!}B_{l,s}.\]
We can let the lower limit of \(p\) be \(0\) without affecting the summation. We then get
\[\frac{(-2)^{k}\lambda^{2k+1}(1+t)^{2k}M_{k}}{k!}=\sum_{l=0}^{k} \sum_{s=0}^{k}\sum_{p=0}^{k}\sum_{r=0}^{p-s}\frac{2^{p-r}\binom{2k-p}{k}C(k,l )\binom{p+r}{p}\binom{2k-s-l-1-r}{p-s-r}\lambda^{l+s}(1+t)^{l+s+r}}{t^{p+r+1} s!}B_{l,s}.\]
Let us restrict the sum to those \((l,s)\) such that \(l+s=u\), where \(0\leq u\leq 2k\). It is straightforward to check that \(B_{l,s}\) depends only on \(l+s\); if \(l+s=u\), we sometimes denote \(B_{l,s}\) by \(B_{u}\) for convenience. We denote this restricted sum on the right above by \(S\). If \(0\leq u\leq k\), then
\[S=S(u)=\frac{(\lambda(1+t))^{u}k!B_{l,s}}{2^{k-u}u!t}\sum_{s=0} ^{u}\sum_{p=0}^{k}\sum_{r=0}^{p}\frac{2^{p-r-s}\binom{2k-p}{k}\binom{2k-u+s}{ k-u+s}\binom{u}{s}\binom{p+r}{r}\binom{2k-u-1-r}{p-s-r}(1+t)^{r}}{t^{p+r}}. \tag{3.40}\]
On the other hand, if \(k<u\leq 2k\), we have
\[S=S(u)=\frac{(\lambda(1+t))^{u}k!B_{l,s}}{2^{k-u}u!t}\sum_{s=u-k} ^{k}\sum_{p=0}^{k}\sum_{r=0}^{p}\frac{2^{p-r-s}\binom{2k-p}{k}\binom{2k-u+s}{ k-u+s}\binom{u}{s}\binom{p+r}{r}\binom{2k-u-1-r}{p-s-r}(1+t)^{r}}{t^{p+r}}. \tag{3.41}\]
With this, we have
\[M_{k}=\frac{(-1)^{k}k!}{2^{k}\lambda^{2k+1}(1+t)^{2k}}\sum_{u=0} ^{2k}S(u). \tag{3.42}\]
Replacing the index \(s\) by \(u-s\) in (3.40), we get,
\[S=\frac{(\lambda(1+t))^{u}k!B_{l,s}}{2^{k}u!t}\sum_{s=0}^{u} \sum_{p=0}^{k}\sum_{r=0}^{p}\frac{2^{p-r+s}\binom{2k-p}{k-p}\binom{2k-s}{k-s} \binom{u}{s}\binom{p+r}{r}\binom{2k-u-1-r}{2k-1-p-s}(1+t)^{r}}{t^{p+r}}. \tag{3.43}\]
Similarly, we replace the index \(s\) by \(u-s\) in (3.41). We then get,
\[S=\frac{(\lambda(1+t))^{u}k!B_{l,s}}{2^{k}u!t}\sum_{s=u-k}^{k} \sum_{p=0}^{k}\sum_{r=0}^{p}\frac{2^{p-r+s}\binom{2k-p}{k}\binom{2k-s}{k-s} \binom{u}{s}\binom{p+r}{r}\binom{2k-u-1-r}{2k-1-p-s}(1+t)^{r}}{t^{p+r}}. \tag{3.44}\]
Our goal next is to simplify the summation given in (3.43) and (3.44). With this in mind, let us focus our attention on
\[S_{1}:=\sum_{p=0}^{k}\sum_{r=0}^{p}\frac{2^{p-r}\binom{2k-p}{k} \binom{p+r}{r}\binom{2k-u-1-r}{2k-1-p-s}(1+t)^{r}}{t^{p+r}}. \tag{3.45}\]
We write \(S_{1}\) as follows:
\[S_{1}=\sum_{p=0}^{k}\sum_{r=0}^{p}\frac{1}{(2\pi\mathrm{i})^{3}} \int\limits_{|z|=\varepsilon_{1}}\int\limits_{|w|=\varepsilon_{2}}\int\limits_ {|v|=\varepsilon_{3}} 2^{p-r}\frac{1}{(1-z)^{k+1}z^{k-p+1}}\frac{1}{(1-w)^{p+1}w^{r+1}} \tag{3.46}\] \[\times\frac{1}{(1-v)^{2k-p-s}v^{-u+s-r+p+1}}\frac{(1+t)^{r}}{t^{p +r}}\mathrm{d}z\mathrm{d}w\mathrm{d}v,\]
for suitably chosen \(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3}\).
Note that the right-hand side of (3.46) vanishes when \(r>p\) or when \(p>k\): when \(r>p\), the integral in \(v\) is \(0\) by Cauchy's theorem, and likewise when \(p>k\), the integral in \(z\) is \(0\) for the same reason. Hence, in computing the integral in (3.46), we may extend the sums in \(r\) and \(p\) to infinity. Later on, we will sum in the \(s\) variable as well. Note that, due to the presence of the combinatorial term \(\binom{2k-s}{k-s}\), we may take the upper limit of \(s\) to be \(u\) regardless of whether \(0\leq u\leq k\) or \(k<u\leq 2k\). Furthermore, in the case \(k<u\leq 2k\) (see (3.44)), we may take the lower limit of \(s\) to be \(0\) as well, since in that case the integral in \(v\) in (3.46) is \(0\).
We now fix the choice of contours in (3.46). The contours are determined with \(t\) held fixed. Recall that \(t\neq 0\) in the statement of the theorem; we also assume that \(t\neq-1\). Equation (3.47) below is obtained by performing the summation in the \(p\) and \(r\) variables. In order for the resulting series to converge, we choose the contours so that
\[|v|<\left|\frac{2tw}{1+t}\right|\text{ and }\left|\frac{2z(1-v)}{(1-w)vt} \right|<1.\]
With \(t\) arbitrary, but fixed, choose \(|w|=\varepsilon_{2}\ll 1\) and \(|v|=\varepsilon_{3}\ll 1\) both positive so that \(\varepsilon_{3}<\frac{2|t|\varepsilon_{2}}{|1+t|}\). Next choose \(|z|=\varepsilon_{1}\ll 1\) so that \(\frac{2\varepsilon_{1}(1+\varepsilon_{3})}{(1-\varepsilon_{2})\varepsilon_{3} |t|}<1\). Then
\[\left|\frac{2z(1-v)}{(1-w)vt}\right|\leq\frac{2\varepsilon_{1}(1+\varepsilon_ {3})}{(1-\varepsilon_{2})\varepsilon_{3}|t|}<1.\]
We have
\[S_{1}=\frac{2t^{2}}{(2\pi\mathrm{i})^{3}}\iiint\frac{1}{(1-z)^{k+1}z^{k+1}} \frac{v^{u-s}}{(1-v)^{2k-s}}\frac{1}{t(1-w)v-2z(1-v)}\frac{1}{2tw-v(1+t)} \mathrm{d}z\mathrm{d}w\mathrm{d}v. \tag{3.47}\]
By choosing \(|z|=\varepsilon_{1}\) small enough, we can make \(w=1-\frac{2z(1-v)}{tv}\) an external pole. Therefore, performing the integration in \(w\) using the residue theorem, we get
\[S_{1} =\frac{2t}{(2\pi\mathrm{i})^{2}}\iint\frac{1}{(1-z)^{k+1}z^{k+1} }\frac{v^{u-s}}{(1-v)^{2k-s}}\frac{1}{2tv-v^{2}(1+t)-4z(1-v)}\mathrm{d}z \mathrm{d}v\] \[=-\frac{2t}{(2\pi\mathrm{i})^{2}(t+1)}\iint\frac{1}{(1-z)^{k+1}z^ {k+1}}\frac{v^{u-s}}{(1-v)^{2k-s}}\frac{1}{v^{2}-\frac{2tv}{t+1}+\frac{4z}{1+ t}(1-v)}\mathrm{d}z\mathrm{d}v\] \[=-\frac{2t}{(2\pi\mathrm{i})^{2}(t+1)}\iint\frac{1}{(1-z)^{k+1}z^ {k+1}}\frac{v^{u-s}}{(1-v)^{2k-s}}\frac{1}{\left(v-\frac{t+2z}{t+1}-\frac{ \sqrt{t^{2}+4z^{2}-4z}}{t+1}\right)}\] \[\times\frac{1}{\left(v-\frac{t+2z}{t+1}+\frac{\sqrt{t^{2}+4z^{2}- 4z}}{t+1}\right)}\mathrm{d}z\mathrm{d}v.\]
We have that
\[v=\frac{t+2z}{t+1}-\frac{\sqrt{t^{2}+4z^{2}-4z}}{t+1}, \tag{3.48}\]
is a simple pole. Reducing \(\varepsilon_{1}\) if necessary, we can ensure that \(v\) is in the interior of \(|v|=\varepsilon_{3}\), since \(v\) in (3.48) can be written in the form,
\[v=\frac{t+2z}{t+1}-\frac{\sqrt{(t+2z)^{2}-4z(t+1)}}{t+1}.\]
The other root of \(v\) can be made an external pole by choosing \(\varepsilon_{1}\) small enough. Integrating in \(v\), we get,
\[S_{1}=\frac{t(t+1)^{2k-u}}{2\pi\mathrm{i}}\int\frac{1}{(1-z)^{k+1}z^{k+1}}\frac{ \left((t+2z)-\sqrt{t^{2}+4z^{2}-4z}\right)^{u-s}}{(1-2z+\sqrt{t^{2}+4z^{2}-4z}) ^{2k-s}}\frac{1}{\sqrt{t^{2}+4z^{2}-4z}}\mathrm{d}z.\]
As in [43], we make the change of variable \(z(1-z)=\eta\), and we have that the image of \(|z|=\varepsilon_{1}\) is a closed contour which makes one complete turn with the origin in its interior and which can be deformed to a circle. We have
\[z=\frac{1-\sqrt{1-4\eta}}{2}.\]
Then
\[S_{1}=\frac{t(t+1)^{2k-u}}{2\pi\mathrm{i}}\int\frac{1}{\eta^{k+1}}\frac{\left( t+1-\sqrt{1-4\eta}-\sqrt{t^{2}-4\eta}\right)^{u-s}}{(\sqrt{1-4\eta}+\sqrt{t^{2} -4\eta})^{2k-s}}\frac{1}{\sqrt{t^{2}-4\eta}\sqrt{1-4\eta}}\mathrm{d}\eta.\]
For simplicity of notation, we let
\[\alpha=\sqrt{1-4\eta},\quad\beta=\sqrt{t^{2}-4\eta}.\]
Next let us perform the summation in the \(s\) variable. Recall from the earlier discussion that we may take the lower and upper limits of \(s\) to be \(0\) and \(u\), respectively, regardless of whether \(0\leq u\leq k\) or \(k<u\leq 2k\). We get
\[S_{2} :=\sum_{s=0}^{u}2^{s}\binom{u}{s}\binom{2k-s}{k}S_{1}\] \[=\frac{t(t+1)^{2k-u}}{(2\pi\mathrm{i})^{2}}\iint\frac{1}{(1-w)^{ k+1}w^{k+1}\eta^{k+1}}\frac{(t+1-\alpha-\beta+2w(\alpha+\beta))^{u}}{(\alpha+ \beta)^{2k}}\frac{1}{\alpha\beta}\mathrm{d}\eta\mathrm{d}w.\]
As before, let us make the change of variable \(w(1-w)=\gamma\). Then we have
\[S_{2} =\frac{t(t+1)^{2k-u}}{(2\pi\mathrm{i})^{2}}\iint\frac{1}{(\gamma \eta)^{k+1}}\frac{\left((t+1-(\sqrt{1-4\gamma})(\alpha+\beta)\right)^{u}}{( \alpha+\beta)^{2k}}\frac{1}{\alpha\beta}\frac{1}{\sqrt{1-4\gamma}}\mathrm{d} \eta\mathrm{d}\gamma\] \[=\frac{t(t+1)^{2k-u}}{(2\pi\mathrm{i})^{2}}\sum_{s=0}^{u}(-1)^{u +s}\binom{u}{s}\iint\frac{1}{(\gamma\eta)^{k+1}}\frac{(t+1)^{s}}{(\alpha+ \beta)^{2k-u+s}}\frac{1}{\alpha\beta}\frac{1}{\left(\sqrt{1-4\gamma}\right)^ {1+s-u}}\mathrm{d}\eta\mathrm{d}\gamma\] \[=\frac{t(t+1)^{2k-u}}{(2\pi\mathrm{i})^{2}}\sum_{s=0}^{u}(-1)^{u +s}\binom{u}{s}(t+1)^{s}\int\frac{\left(\sqrt{1-4\gamma}\right)^{u-s-1}}{ \gamma^{k+1}}\mathrm{d}\gamma\int\frac{1}{(\alpha+\beta)^{2k-u+s}\eta^{k+1} \alpha\beta}\mathrm{d}\eta.\]
Next let us make the change of variable, \(\alpha+\beta=\delta\). The image of the \(\eta\) curve is a closed contour with \(1+t\) in its interior.
We have
\[-2\left(\frac{\alpha+\beta}{\alpha\beta}\right)\mathrm{d}\eta=\mathrm{d}\delta.\]
Also
\[\eta=\frac{4\delta^{2}t^{2}-(\delta^{2}+t^{2}-1)^{2}}{16\delta^{2}}=\frac{(1-( \delta-t)^{2})((\delta+t)^{2}-1)}{16\delta^{2}}.\]
Then
\[S_{2} =-\frac{2^{4k+3}t(t+1)^{2k-u}}{(2\pi\mathrm{i})^{2}}\sum_{s=0}^{u }(-1)^{u+s}\binom{u}{s}(t+1)^{s}\int\frac{\left(\sqrt{1-4\gamma}\right)^{u-s-1 }}{\gamma^{k+1}}\mathrm{d}\gamma\] \[\times\int\frac{\left((1-(\delta-t)^{2})((\delta+t)^{2}-1) \right)^{-k-1}}{\delta^{s-1-u}}\mathrm{d}\delta\] \[=(-1)^{k}\frac{2^{4k+3}t(t+1)^{2k-u}}{(2\pi\mathrm{i})^{2}}\sum_ {s=0}^{u}(-1)^{u+s}\binom{u}{s}(t+1)^{s}\int\frac{\left(\sqrt{1-4\gamma} \right)^{u-s-1}}{\gamma^{k+1}}\mathrm{d}\gamma\] \[\times\int\frac{\delta^{u+1-s}}{\left((\delta^{2}-(t+1)^{2})( \delta^{2}-(t-1)^{2})\right)^{k+1}}\mathrm{d}\delta.\]
Let us introduce one more change of variable to make the computation easier:
\[\delta^{2}-(t+1)^{2}=\beta.\]
Then we have
\[S_{2}=(-1)^{k}\frac{2^{4k+2}t(t+1)^{2k-u}}{(2\pi\mathrm{i})^{2}}\sum_{s=0}^{u}(- 1)^{u+s}\binom{u}{s}(t+1)^{s}\int\frac{\left(\sqrt{1-4\gamma}\right)^{u-s-1}}{ \gamma^{k+1}}\mathrm{d}\gamma\int\frac{\left(\beta+(t+1)^{2}\right)^{\frac{u-s }{2}}}{\left(\beta(\beta+4t)\right)^{k+1}}\mathrm{d}\beta.\]
Note that the contour in the \(\beta\) variable is a simple closed curve with the origin in its interior. We rewrite (replacing \(s\) by \(u-s\) in the summation),
\[S_{2}=(-1)^{k}\frac{2^{4k+2}t(t+1)^{2k}}{(2\pi\mathrm{i})^{2}}\sum_{s=0}^{u}(- 1)^{s}\binom{u}{s}(t+1)^{-s}\int\frac{(1-4\gamma)^{\frac{s-1}{2}}}{\gamma^{k+1 }}\mathrm{d}\gamma\int\frac{\left(\beta+(t+1)^{2}\right)^{\frac{s}{2}}}{\left( \beta(\beta+4t)\right)^{k+1}}\mathrm{d}\beta.\]
We note that only those terms for which \(s\) is even survive. Therefore we can write \(S_{2}\) as
\[S_{2}=(-1)^{k}\frac{2^{4k+2}t(t+1)^{2k}}{(2\pi\mathrm{i})^{2}}\sum_{s=0,s- \mathrm{even}}^{u}(-1)^{s}\binom{u}{s}(t+1)^{-s}\int\frac{(1-4\gamma)^{\frac{s- 1}{2}}}{\gamma^{k+1}}\mathrm{d}\gamma\int\frac{\left(\beta+(t+1)^{2}\right)^{ \frac{s}{2}}}{\left(\beta(\beta+4t)\right)^{k+1}}\mathrm{d}\beta.\]
We now assume that \(u\) is even. The odd case can be dealt with similarly, and we will not give the proof separately. We have
\[S_{2} =(-1)^{k}\frac{2^{4k+2}t(t+1)^{2k}}{(2\pi\mathrm{i})^{2}}\sum_{m= 0}^{u/2}\binom{u}{2m}(t+1)^{-2m}\int\frac{(1-4\gamma)^{\frac{2m-1}{2}}}{\gamma ^{k+1}}\mathrm{d}\gamma\int\frac{\left(\beta+(t+1)^{2}\right)^{m}}{\left( \beta(\beta+4t)\right)^{k+1}}\mathrm{d}\beta\] \[=(-1)^{k}\frac{2^{4k+2}t(t+1)^{2k}}{(2\pi\mathrm{i})^{2}}\sum_{m= 0}^{u/2}\binom{u}{2m}(t+1)^{-2m}\int\frac{(1-4\gamma)^{\frac{2m-1}{2}}}{\gamma ^{k+1}}\mathrm{d}\gamma\int\sum_{q=0}^{m}\binom{m}{q}\frac{\beta^{q}(t+1)^{2m- 2q}}{\left(\beta(\beta+4t)\right)^{k+1}}\mathrm{d}\beta\] \[=(-1)^{k}\frac{2^{2k}(t+1)^{2k}}{t^{k}(2\pi\mathrm{i})^{2}}\sum_{m =0}^{u/2}\sum_{q=0}^{m}\binom{u}{2m}\binom{m}{q}(t+1)^{-2q}\int\frac{(1-4\gamma )^{\frac{2m-1}{2}}}{\gamma^{k+1}}\mathrm{d}\gamma\int\frac{1}{\beta^{k-q+1}} \sum_{p\geq 0}\binom{k+p}{p}\frac{(-\beta)^{p}}{(4t)^{p}}\mathrm{d}\beta\] \[=\frac{(t+1)^{2k-u}}{t^{2k}2\pi\mathrm{i}}\sum_{m=0}^{u/2}\sum_{q =0}^{m}(-1)^{q}(4t)^{q}(t+1)^{u-2q}\binom{u}{2m}\binom{m}{q}\int\frac{(1-4 \gamma)^{\frac{2m-1}{2}}}{\gamma^{k+1}}\mathrm{d}\gamma\binom{2k-q}{k}.\]
We have
\[\frac{1}{2\pi\mathrm{i}}\int\frac{(1-4\gamma)^{\frac{2m-1}{2}}}{\gamma^{k+1}} \mathrm{d}\gamma=\frac{(-1)^{m}\binom{2m}{m}\binom{2k-m}{k}}{\binom{2k-m}{m}}.\]
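As a quick sanity check of this identity: for \(m=1\) and \(k=2\), the left-hand side is the coefficient of \(\gamma^{2}\) in \((1-4\gamma)^{1/2}\), namely \(\binom{1/2}{2}(-4)^{2}=-2\), while the right-hand side equals \(\frac{(-1)\binom{2}{1}\binom{3}{2}}{\binom{3}{1}}=-2\).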
Then
\[S_{2}=\frac{(t+1)^{2k-u}}{t^{2k}}\sum_{m=0}^{u/2}\sum_{q=0}^{m}(-1)^{q+m}(4t)^ {q}(t+1)^{u-2q}\frac{\binom{u}{2m}\binom{m}{q}\binom{2m}{k}\binom{2k-m}{k} \binom{2k-q}{k}}{\binom{2k-m}{m}}.\]
Expanding \((t+1)^{u-2q}\), we get,
\[S_{2}=\frac{(t+1)^{2k-u}}{t^{2k}}\sum_{m=0}^{u/2}\sum_{q=0}^{m}\sum_{r=0}^{u-2q }(-1)^{q+m}(4t)^{q}\binom{u-2q}{r}t^{r}\frac{\binom{u}{2m}\binom{m}{q}\binom{2 m}{m}\binom{2k-m}{k}\binom{2k-q}{k}}{\binom{2k-m}{m}}.\]
We now look at specific coefficients of a fixed power of \(t\) inside the summation. With this in mind, let us set \(q+r=j\). Note that \(0\leq j\leq u\). Then we get the following: The coefficient of \(t^{j}\) in the summation is
\[C(j):=\sum_{m=0}^{u/2}\sum_{q=0}^{m}\frac{(-1)^{q+m}4^{q}\binom{u-2q}{j-q} \binom{u}{2m}\binom{m}{q}\binom{2m}{m}\binom{2k-m}{k}\binom{2k-q}{k}}{\binom{2 k-m}{m}}\]
\[=\sum_{q=0}^{u/2}\sum_{m=q}^{u/2}\frac{(-1)^{q+m}4^{q}\binom{u-2q}{j-q}\binom{u}{2m}\binom{m}{q}\binom{2m}{m}\binom{2k-m}{k}\binom{2k-q}{k}}{\binom{2k-m}{m}}.\]
With this, we have
\[S_{2}=\frac{(t+1)^{2k-u}}{t^{2k}}\sum_{j=0}^{u}C(j)t^{j}.\]
Our final goal is to simplify this summation.
We first make a few straightforward observations about \(C(j)\).
* The sum is invariant when \(j\) is replaced by \(u-j\). Hence it is enough to prove for \(0\leq j\leq u/2\).
* The sum is \(0\) when \(j\geq u+1\).
Due to the third combinatorial term \(\binom{m}{q}\), we can replace the lower limit of the summation in \(m\) by \(0\). We first carry out the summation in \(m\): consider
\[C_{1}:=\sum_{m=0}^{u/2}\frac{(-1)^{m}\binom{u}{2m}\binom{m}{q} \binom{2m}{m}\binom{2k-m}{k}}{\binom{2k-m}{m}}. \tag{3.49}\]
Using \(\frac{\binom{2k-m}{k}}{\binom{2k-m}{m}}=\frac{\binom{2k-2m}{k-m}}{\binom{k}{m}}\), we have
\[C_{1}=\frac{(2k-u)!u!}{k!q!(k-q)!}\sum_{m=0}^{u/2}(-1)^{m}\binom{2k-2m}{2k-u} \binom{k-q}{k-m}.\]
Now, due to the first combinatorial term inside the summation, we can replace the upper limit of the summation by \(k\). Further replacing \(k-m\) by \(m\), we then get,
\[C_{1}=\frac{(-1)^{k}(2k-u)!u!}{k!q!(k-q)!}\sum_{m=0}^{k}(-1)^{m} \binom{2m}{2k-u}\binom{k-q}{m}. \tag{3.50}\]
In (3.50) above, we may take the summation in \(m\) only up to \(k-q\). We then get,
\[C_{1} =\frac{(-1)^{k}(2k-u)!u!}{k!q!(k-q)!(2\pi\mathrm{i})}\int\frac{1}{z^{2k-u+1}}(1-(1+z)^{2})^{k-q}\mathrm{d}z\] \[=\frac{(-1)^{q}(2k-u)!u!}{k!q!(k-q)!(2\pi\mathrm{i})}\int\frac{(z+2)^{k-q}}{z^{k-u+q+1}}\mathrm{d}z\] \[=\frac{(-1)^{q}2^{k-q}(2k-u)!u!}{k!q!(k-q)!(2\pi\mathrm{i})}\int\sum_{r=0}^{k-q}\binom{k-q}{r}\frac{z^{r}}{2^{r}z^{k-u+q+1}}\mathrm{d}z\] \[=\frac{(-1)^{q}2^{k-q}(2k-u)!u!}{2^{k-u+q}k!q!(k-q)!}\binom{k-q}{k-u+q}\] \[=\frac{(-1)^{q}2^{u-2q}(2k-u)!u!}{k!q!(k-q)!}\binom{k-q}{u-2q}.\]
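As a quick check: for \(k=1\), \(u=2\), \(q=1\), the sum (3.49) has only the \(m=1\) term, which equals \(-2\), matching the closed form \(\frac{(-1)\,2^{0}\,0!\,2!}{1!\,1!\,0!}\binom{0}{0}=-2\).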
With this the summation in \(q\) becomes
\[C(j) =\frac{2^{u}(2k-u)!u!}{(k!)^{2}}\sum_{q=0}^{u/2}\binom{u-2q}{j-q} \binom{2k-q}{k}\binom{k-q}{u-2q}\binom{k}{q}\] \[=\frac{2^{u}(2k-u)!u!}{(k!)^{2}}\sum_{q=0}^{u/2}\binom{u-2q}{j-q} \binom{2k-q}{k-q}\binom{k-q}{u-2q}\binom{k}{q}\] \[=\frac{2^{u}(2k-u)!u!}{(k!)^{2}}\sum_{q=0}^{u/2}\binom{u-2q}{j-q} \binom{2k-q}{u-2q}\binom{2k-u+q}{k}\binom{k}{q}\]
\[=\frac{2^{u}(2k-u)!u!}{(k!)^{2}}\binom{2k-j}{k}\sum_{q=0}^{u/2}\binom{2k-q}{j-q}\binom{k-j}{u-q-j}\binom{k}{q}.\]
We consider the following summation. Note that here we have taken the upper limit of the summation index \(q\) to be \(k\). This is justified by the fact, observed earlier, that it is enough to consider \(0\leq j\leq u/2\).
\[C_{2}:=\sum_{q=0}^{k}\binom{2k-q}{j-q}\binom{k-j}{u-q-j}\binom{k}{q}.\]
We have
\[C_{2} =\sum_{q=0}^{k}\frac{1}{(2\pi\mathrm{i})^{2}}\iint\frac{(1+z)^{2k -q}}{z^{j-q+1}}\frac{(1+w)^{k-j}}{w^{u-q-j+1}}\binom{k}{q}\mathrm{d}z\mathrm{d}w\] \[=\frac{1}{(2\pi\mathrm{i})^{2}}\iint\frac{(1+z)^{k}}{z^{j+1}} \frac{(1+w)^{k-j}}{w^{u-j+1}}\left(1+\frac{zw}{1+z}\right)^{k}\mathrm{d}z \mathrm{d}w\] \[=\frac{1}{(2\pi\mathrm{i})^{2}}\iint\frac{(1+z)^{k}}{z^{j+1}} \frac{(1+w)^{k-j}}{w^{u-j+1}}\left(1+z(1+w)\right)^{k}\mathrm{d}z\mathrm{d}w\] \[=\frac{1}{(2\pi\mathrm{i})^{2}}\iint\frac{(1+z)^{k}}{z^{j+1}} \frac{(1+w)^{k-j}}{w^{u-j+1}}\sum_{q=0}^{k}\binom{k}{q}z^{q}(1+w)^{q}\mathrm{d} z\mathrm{d}w\] \[=\sum_{q=0}^{k}\binom{k}{q}\frac{1}{(2\pi\mathrm{i})^{2}}\iint \frac{(1+z)^{k}}{z^{j-q+1}}\frac{(1+w)^{k+q-j}}{w^{u-j+1}}\mathrm{d}z\mathrm{d}w\] \[=\sum_{q=0}^{k}\binom{k}{q}\binom{k}{j-q}\binom{k+q-j}{u-j}\] \[=\sum_{q=0}^{k}\binom{k}{q}\binom{k}{k-j+q}\binom{k+q-j}{u-j}\] \[=\sum_{q=0}^{k}\binom{k}{q}\binom{k}{u-j}\binom{k-u+j}{j-q}\] \[=\binom{k}{u-j}\binom{2k-u+j}{j}.\]
Now we have
\[C(j) =\frac{2^{u}(2k-u)!u!}{(k!)^{2}}\binom{2k-j}{k}\binom{k}{u-j}\binom{2k-u+j}{j}\] \[=\frac{2^{u}(2k-u)!u!}{k!(k-j)!}\frac{(2k-j)!}{k!(u-j)!(k-u+j)!}\frac{(2k-u+j)!}{j!(2k-u)!}\] \[=2^{u}\binom{u}{j}\binom{2k-j}{k}\binom{2k-u+j}{k}.\]
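As a quick numerical check: for \(k=1\), \(u=2\), \(j=1\), the defining double sum for \(C(j)\) evaluates to \(8-8+8=8\), in agreement with \(2^{2}\binom{2}{1}\binom{1}{1}\binom{1}{1}=8\).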
Therefore, going back to (3.42), we now have
\[M_{k}(\lambda)=\frac{(-1)^{k}(k!)^{2}}{4^{k}(\lambda t)^{2k+1}}\sum_{u=0}^{2k}\sum_{j=0}^{u}\frac{2^{u}\lambda^{u}}{u!}\binom{u}{j}\binom{2k-j}{k}\binom{2k-u+j}{k}t^{j}B_{u}.\]
To conclude the proof of Theorem 3.3, let us expand the right-hand side of (3.33), which we denote by \(\widetilde{M}_{k}\). We have
\[\widetilde{M}_{k}=(-1)^{k}\left\{D^{k}\left(\frac{\sin(\lambda t)}{t}\right) y_{\alpha}(\lambda)+D^{k}\left(\frac{\cos(\lambda t)}{t}\right)j_{\alpha}( \lambda)\right\}.\]
Expanding using formulas from Lemma 3.4, we have
\[\widetilde{M}_{k} =\frac{(-1)^{k}}{\lambda^{2k+1}t^{2k+1}}\sum_{l=0}^{k}\sum_{m=0}^{k }C(k,l)C(k,m)\lambda^{l+m}t^{m}\] \[\times\Bigg{\{}\sin\lambda(1+t)(-1)^{\frac{l+m}{2}}\left\{\left( \frac{(-1)^{l}+1}{2}\right)\left(\frac{(-1)^{m}+1}{2}\right)+\left(\frac{(-1) ^{l+1}+1}{2}\right)\left(\frac{(-1)^{m+1}+1}{2}\right)\right\}\] \[+\cos\lambda(1+t)(-1)^{\frac{l+m+1}{2}}\left\{\left(\frac{(-1)^{ l}+1}{2}\right)\left(\frac{(-1)^{m+1}+1}{2}\right)+\left(\frac{(-1)^{l+1}+1}{2} \right)\left(\frac{(-1)^{m}+1}{2}\right)\right\}\Bigg{\}}.\]
Using the expression for \(B_{l,s}\) defined earlier, we have
\[\widetilde{M}_{k}=\frac{(-1)^{k}}{\lambda^{2k+1}t^{2k+1}}\sum_{l=0}^{k}\sum_{m =0}^{k}C(k,l)C(k,m)\lambda^{l+m}t^{m}B_{l,m}.\]
We now restrict the sum to those \((l,m)\) such that \(l+m=u\) with \(0\leq u\leq 2k\). Then
\[\widetilde{M}_{k} =\frac{(-1)^{k}}{\lambda^{2k+1}t^{2k+1}}\sum_{u=0}^{2k}\lambda^{u }\sum_{m=0}^{u}C(k,u-m)C(k,m)t^{m}B_{u}\] \[=\frac{(-1)^{k}(k!)^{2}}{4^{k}\lambda^{2k+1}t^{2k+1}}\sum_{u=0}^{2 k}\sum_{m=0}^{u}\frac{2^{u}\lambda^{u}}{u!}{u\choose m}{2k-m\choose k}{2k-u+m \choose k}t^{m}B_{u}.\]
We have shown that \(M_{k}=\widetilde{M}_{k}\) and this completes the proof of the theorem.
Using Theorem 3.3, we now prove the sufficiency part of Theorem 1.1.
Proof of Sufficiency part of Theorem 1.1.: The sufficiency part of the proof of the main theorem follows as a straightforward consequence of (3.32) combined with Theorem 2.4. Indeed, for \(\lambda>0\), the left-hand side of (3.32) is the product of the Hankel transform of \(g\) (recall that \(h(t)=t^{n-2}g(t)\)) and the spherical Bessel function of the second kind. Theorem 3.2 says that this factors into a product of two functions, one of them being the spherical Bessel function of the first kind in \(\lambda\). Since \(j_{k+\frac{1}{2}}(\lambda)\) and \(y_{k+\frac{1}{2}}(\lambda)\) have no common zeros [1, eq.(9.5.2)], by Theorem 2.4, we have the sufficiency part of Theorem 1.1.
### Range characterization for general functions
We now prove the range characterization for a general (not necessarily radial) function by expansion into spherical harmonics. The calculations of the previous proof are going to be crucially used.
Proof of Theorem 1.4.: Following the calculations done in [46], we have the following:
\[g_{m,l}(t) =\frac{\omega_{n-1}}{4^{\frac{n-3}{2}}t^{n-2}\omega_{n}C_{m}^{ \frac{n-2}{2}}(1)}\int\limits_{|1-t|}^{1}uf_{m,l}(u)C_{m}^{\frac{n-2}{2}}\left( \frac{1+u^{2}-t^{2}}{2u}\right)\Big{\{}\left((1+t)^{2}-u^{2}\right)\left(u^{2} -(1-t)^{2}\right)\Big{\}}^{\frac{n-3}{2}}\mathrm{d}u\] \[=\frac{\omega_{n-1}}{t^{n-2}\omega_{n}C_{m}^{\frac{n-2}{2}}(1)} \int\limits_{|1-t|}^{1}u^{n-2}f_{m,l}(u)C_{m}^{\frac{n-2}{2}}\left(\frac{1+u^{2 }-t^{2}}{2u}\right)\left\{1-\frac{\left(1+u^{2}-t^{2}\right)^{2}}{4u^{2}} \right\}^{\frac{n-3}{2}}\mathrm{d}u. \tag{3.51}\]
We use the following formula for Gegenbauer polynomials:
\[C_{m}^{(\alpha)}(x)=K(1-x^{2})^{-\alpha+\frac{1}{2}}\frac{\mathrm{d}^{m}}{ \mathrm{d}x^{m}}\left(1-x^{2}\right)^{m+\alpha-\frac{1}{2}},\]
where
\[K=\frac{(-1)^{m}\Gamma(\alpha+\frac{1}{2})\Gamma(m+2\alpha)}{2^{m}m!\Gamma(2 \alpha)\Gamma(m+\alpha+\frac{1}{2})}.\]
By repeated application of the chain rule, we have
\[C_{m}^{\frac{n-2}{2}}\left(\frac{1+u^{2}-t^{2}}{2u}\right)=K\left(1-\left( \frac{1+u^{2}-t^{2}}{2u}\right)^{2}\right)^{-\frac{n-3}{2}}(-u)^{m}D^{m}\left( 1-\frac{\left(1+u^{2}-t^{2}\right)^{2}}{4u^{2}}\right)^{m+\frac{n-3}{2}}, \tag{3.52}\]
where, we recall that \(D=\frac{1}{t}\frac{\mathrm{d}}{\mathrm{d}t}\). Substituting (3.52) into (3.51), we get,
\[t^{n-2}g_{m,l}(t)=\frac{K(-1)^{m}\omega_{n-1}}{\omega_{n}C_{m}^{\frac{n-2}{2}} \left(1\right)}\int\limits_{|1-t|}^{1}u^{m+n-2}f_{m,l}(u)D^{m}\left(1-\frac{ \left(1+u^{2}-t^{2}\right)^{2}}{4u^{2}}\right)^{m+\frac{n-3}{2}}\mathrm{d}u.\]
Noting that \(k=\frac{n-3}{2}\) and that \(D^{m}\) can be taken outside the integral, we get,
\[t^{n-2}g_{m,l}(t) =\frac{K(-1)^{m}\omega_{n-1}}{4^{m+k}\omega_{n}C_{m}^{\frac{n-2}{2 }}\left(1\right)}D^{m}\int\limits_{|1-t|}^{1}u^{1-m}f_{m,l}(u)\left(4u^{2}- \left(1+u^{2}-t^{2}\right)^{2}\right)^{m+\frac{n-3}{2}}\mathrm{d}u\] \[=\frac{K(-1)^{m}\omega_{n-1}}{4^{m+k}\omega_{n}C_{m}^{\frac{n-2}{ 2}}\left(1\right)}D^{m}\int\limits_{|1-t|}^{1}u^{1-m}f_{m,l}(u)\left(2(u^{2}+ 1)t^{2}-t^{4}-(1-u^{2})^{2}\right)^{m+\frac{n-3}{2}}\mathrm{d}u.\]
We denote
\[h_{m,l}(t) =t^{n-2}g_{m,l}(t),\] \[\phi_{m,l}(t) =\int\limits_{|1-t|}^{1}u^{1-m}f_{m,l}(u)\left(2(u^{2}+1)t^{2}-t^{4}-(1-u^{2})^{2}\right)^{m+\frac{n-3}{2}}\mathrm{d}u.\]
Then we have
\[h_{m,l}(t)=\frac{K(-1)^{m}\omega_{n-1}}{4^{m+k}\omega_{n}C_{m}^{\frac{n-2}{2} }\left(1\right)}D^{m}\phi_{m,l}(t).\]
We make the following observations:
* \(\phi_{m,l}(t)\in C_{c}^{\infty}((0,2))\),
* \(\phi_{m,l}(t)\) satisfies \[[\mathcal{L}_{m+k}\phi_{m,l}](1-t)=[\mathcal{L}_{m+k}\phi_{m,l}](1+t),\] where, we recall that \[\mathcal{L}_{m+k}=\sum_{p=0}^{m+k}\frac{(m+k+p)!}{(m+k-p)!p!2^{p}}(1-t)^{m+k- p}D^{m+k-p},\qquad D=\frac{1}{t}\frac{\mathrm{d}}{\mathrm{d}t},\]
The smoothness in the first point follows from the fact that \(g_{m,l}(t)\) is a smooth function and \(\phi_{m,l}(t)\) is the solution of a linear ODE with smooth coefficients and zero initial conditions. That the support lies strictly in \((0,2)\) follows because \(f_{m,l}\in C^{\infty}([0,1))\) has support strictly away from \(1\). The second point follows from the necessity part of Theorem 1.1 by replacing \(k\) by \(m+k\). Hence we have the following necessary condition: there is a function \(\phi_{m,l}\in C_{c}^{\infty}((0,2))\) such that \(h_{m,l}(t)=D^{m}\phi_{m,l}(t)\) and \(\phi_{m,l}(t)\) satisfies
\[[\mathcal{L}_{m+k}\phi_{m,l}](1-t)=[\mathcal{L}_{m+k}\phi_{m,l}](1+t).\]
We note that for each \(0\leq l\leq d_{m}\), \(\phi_{m,l}\) satisfies the same ODE.
Next we show that this condition is also sufficient. Since \(\phi_{m,l}(t)\in C_{c}^{\infty}((0,2))\) and \(\phi_{m,l}(t)\) satisfies
\[[\mathcal{L}_{m+k}\phi_{m,l}](1-t)=[\mathcal{L}_{m+k}\phi_{m,l}](1+t),\]
we have by the sufficiency part of the proof of Theorem 1.1 that
\[\left(\int\limits_{0}^{\infty}j_{k+m+\frac{1}{2}}(\lambda t)t\phi_{m,l}(t) \mathrm{d}t\right)y_{k+m+\frac{1}{2}}(\lambda)=\left(\int\limits_{0}^{\infty}y _{k+m+\frac{1}{2}}(\lambda t)t\phi_{m,l}(t)\mathrm{d}t\right)j_{k+m+\frac{1}{2 }}(\lambda).\]
Therefore, we have
\[\left(\int\limits_{0}^{\infty}D^{m}j_{k+\frac{1}{2}}(\lambda t)t\phi_{m,l}(t) \mathrm{d}t\right)y_{k+m+\frac{1}{2}}(\lambda)=\left(\int\limits_{0}^{\infty}D ^{m}y_{k+\frac{1}{2}}(\lambda t)t\phi_{m,l}(t)\mathrm{d}t\right)j_{k+m+\frac{1 }{2}}(\lambda).\]
Integrating by parts, we get,
\[\left(\int\limits_{0}^{\infty}j_{k+\frac{1}{2}}(\lambda t)th_{m,l}(t)\mathrm{d}t \right)y_{k+m+\frac{1}{2}}(\lambda)=\left(\int\limits_{0}^{\infty}y_{k+\frac{1 }{2}}(\lambda t)th_{m,l}(t)\mathrm{d}t\right)j_{k+m+\frac{1}{2}}(\lambda).\]
We have the same expression for each \(0\leq l\leq d_{m}\), and hence the \(m^{\mathrm{th}}\) order spherical harmonic term of the Hankel transform of \(g\) (defined as the orthogonal projection of the Hankel transform of \(g\) onto the subspace of spherical harmonics of degree \(m\)) vanishes at the non-zero zeros of the spherical Bessel function \(j_{k+m+\frac{1}{2}}(\lambda)\), so that [3, Condition 4, Theorem 11] is satisfied. This settles the general case as well.
### Counterexample to UCP
In this subsection we prove Theorem 1.8 and Corollary 1.9. In both cases, we consider functions possessing radial symmetry. The proof presented here uses the range characterization (Theorem 1.1). In fact, this approach has been employed before; see for instance [37, Section VI.4], where it was used to show that the interior problem of computed tomography is not uniquely solvable. The second proof (see Section 4) directly produces the function \(f\) claimed in the theorem. Due to the local nature of the operator, the construction of such an \(f\) is relatively easy. However, in the case of non-local problems, the approach via the range characterization may be better suited.
Proof of Theorem 1.8.: Let \(g\in C_{c}^{\infty}((0,2))\) be a non-trivial function such that \(h(t)=t^{n-2}g(t)\) satisfies (1.1). Let \(\alpha>0\) be such that \(\alpha<1-\epsilon\). Let us choose \(g\) such that \(\operatorname{supp}g\subset(\alpha,1-\epsilon)\cup(1+\epsilon,2-\alpha)\) (see Lemma 3.8 for the existence of such a non-trivial function). By Theorem 1.1, there exists a unique non-trivial function \(f\in C_{c}^{\infty}(\mathbb{B})\) possessing radial symmetry, such that \(\mathcal{R}f(p,t)=g(t)\) and hence \(\mathcal{R}f(p,t)=0\) for all \(p\in\mathbb{S}^{n-1}\) and \(t\in(1-\epsilon,1+\epsilon)\). This \(f\) can be represented by the expressions given in Theorem 2.2. Since the value of \(f\) at a point \(x\in\mathbb{B}\) depends only on the values of \(\mathcal{R}f\) on spheres passing through a neighborhood of \(x\), we have \(f|_{U}=0\). The proof is complete.
**Remark 3.6**.: Since \(\mathcal{R}f(p,t)=0\) for \(t<\alpha\), one can also conclude that \(f(x)=0\) for \(|x|>1-\alpha\), using support-type theorems [10].
Proof of Corollary 1.9.: Let \(U\) be an arbitrary open set in \(\mathbb{B}\), and define \(m\coloneqq\inf_{x\in U}|x|\) and \(M\coloneqq\sup_{x\in U}|x|\). Invoking Theorem 1.8 with \(\epsilon=M\), there exists a non-trivial radial function \(f\) such that \(f\) vanishes in \(\{|x|<M\}\) and \(\mathcal{R}f\) vanishes for all \(t\in(1-M,1+M)\), i.e., \(\mathcal{R}f\) vanishes on all spheres intersecting \(\{|x|<M\}\). In particular, \(f\) vanishes on \(U\) and \(\mathcal{R}f\) vanishes on all spheres intersecting \(U\).
**Remark 3.7**.: In the case of functions possessing radial symmetry, the above counterexample is optimal in the sense that the function necessarily vanishes on all of \(\{|x|<M\}\). This can be seen as follows: Due to radial symmetry, if \(f\) vanishes in \(U\), it vanishes in the annulus \(A_{U}\coloneqq\{x\in\mathbb{B}:m<|x|<M\}\). Similarly, if \(\mathcal{R}f\) vanishes on all spheres intersecting \(U\), it vanishes on all spheres passing through \(A_{U}\). In particular, \(\mathcal{R}f\) vanishes on all spheres passing through \(\{|x|<M\}\). The local nature of the inversion formula implies that \(f\) vanishes on \(\{|x|<M\}\).
The counterexamples to unique continuation given above rely on the existence of a non-trivial function satisfying the range condition, and having appropriate support. We prove the existence of such a function using basic theory of linear ordinary differential equations with variable coefficients.
**Lemma 3.8**.: _Let \(\epsilon\in(0,1)\) and \(\alpha>0\) such that \(\alpha<1-\epsilon\). There exists a non-trivial function \(h\in C_{c}^{\infty}((0,2))\) such that \(\operatorname{supp}h\subset(\alpha,1-\epsilon)\cup(1+\epsilon,2-\alpha)\) and satisfying_
\[[\mathcal{L}_{k}h](1-t)=[\mathcal{L}_{k}h](1+t)\quad\text{for all}\quad t\in(0,1).\]
Proof.: Let us first consider \(k=0\). In this case, we want a function supported in \((\alpha,1-\epsilon)\cup(1+\epsilon,2-\alpha)\) and satisfying
\[h(1-t)=h(1+t)\quad\text{ for all }t\in(0,1).\]
This can be easily done by choosing a smooth function supported in \((1+\epsilon,2-\alpha)\) and then extending it to \((0,1)\) by the relation given above. This idea also works for \(k>0\), with some added technical difficulties.
Let us now assume \(k>0\). The range condition can be written as
\[\sum_{l=0}^{k}\frac{(-1)^{k-l}(k+l)!}{(k-l)!l!2^{l}}t^{k-l}\left( \frac{1}{(1-t)}\frac{\mathrm{d}}{\mathrm{d}t}\right)^{k-l}(h(1-t)) \tag{3.53}\] \[=\sum_{l=0}^{k}\frac{(-1)^{k-l}(k+l)!}{(k-l)!l!2^{l}}t^{k-l}\left( \frac{1}{(1+t)}\frac{\mathrm{d}}{\mathrm{d}t}\right)^{k-l}(h(1+t)).\]
Let \(\tilde{H}\in C_{c}^{\infty}((1,2))\) be such that \(\mathrm{supp}(\tilde{H})\subset(1+\epsilon,2-\alpha)\) to be chosen later and for \(t\in(0,1)\), denote
\[G(t)=\sum_{l=0}^{k}\frac{(-1)^{k-l}(k+l)!}{(k-l)!l!2^{l}}t^{k-l}\left(\frac{1} {(1+t)}\frac{\mathrm{d}}{\mathrm{d}t}\right)^{k-l}(\tilde{H}(1+t)).\]
Then \(G\in C_{c}^{\infty}((0,1))\) and \(\mathrm{supp}(G)\subset(\epsilon,1-\alpha)\). Let us consider the ODE
\[\begin{cases}\sum\limits_{l=0}^{k}\frac{(-1)^{k-l}(k+l)!}{(k-l)!l!2^{l}}t^{k-l }\left(\frac{1}{(1-t)}\frac{\mathrm{d}}{\mathrm{d}t}\right)^{k-l}(H(t))&=G(t) \quad\text{for}\quad t\in(\epsilon,1-\alpha),\\ \left(H(\epsilon),H^{(1)}(\epsilon),\ldots,H^{(k-1)}(\epsilon)\right)&=0. \end{cases} \tag{3.54}\]
The above ODE can be re-written as
\[\begin{cases}\sum\limits_{l=0}^{k}a_{l}(t)\left(\frac{\mathrm{d}}{\mathrm{d}t }\right)^{l}H(t)&=G(t)\quad\text{for}\quad t\in(\epsilon,1-\alpha),\\ \left(H(\epsilon),H^{(1)}(\epsilon),\ldots,H^{(k-1)}(\epsilon)\right)&=0, \end{cases} \tag{3.55}\]
where \(a_{l}\) are rational functions of \(t\) smooth in the interval \((\epsilon,1-\alpha)\). Note that
\[a_{k}(t)=\frac{(-1)^{k}t^{k}}{(1-t)^{k}},\]
and thus \(\frac{1}{a_{k}}\) is also smooth in \((\epsilon,1-\alpha)\). Multiplying throughout by \(1/a_{k}\), the ODE becomes
\[\begin{cases}H^{(k)}(t)+\sum\limits_{l=0}^{k-1}\frac{a_{l}(t)}{a_{k}(t)}\left( \frac{\mathrm{d}}{\mathrm{d}t}\right)^{l}H(t)&=(-1)^{k}\frac{(1-t)^{k}}{t^{k}} G(t)\quad\text{for}\quad t\in(\epsilon,1-\alpha),\\ \left(H(\epsilon),H^{(1)}(\epsilon),\ldots,H^{(k-1)}(\epsilon)\right)&=0. \end{cases} \tag{3.56}\]
Next we use the representation for the solution to the above ODE, given in [18, Ch. 3, eq.(6.2)]. If \(\varphi_{1},\ldots,\varphi_{k}\) is a basis of solutions to the homogeneous equation
\[H^{(k)}(t)+\sum\limits_{l=0}^{k-1}\frac{a_{l}(t)}{a_{k}(t)}\left(\frac{ \mathrm{d}}{\mathrm{d}t}\right)^{l}H(t)=0,\]
then the solution to (3.56) is given by
\[H(t)=\sum\limits_{j=1}^{k}\varphi_{j}(t)\int\limits_{\epsilon}^{t}\frac{W_{j} (s)}{W(\varphi_{1},\ldots,\varphi_{k})(s)}\,(-1)^{k}\frac{(1-s)^{k}}{s^{k}}G(s )\,\mathrm{d}s, \tag{3.57}\]
where \(W(\varphi_{1},\ldots,\varphi_{k})\) is the Wronskian of the basis \(\varphi_{1},\ldots,\varphi_{k}\) and \(W_{j}(s)\) is obtained from \(W(\varphi_{1},\ldots,\varphi_{k})\) by replacing the \(j-\)th column \(\left(\varphi_{j},\varphi_{j}^{(1)},\ldots,\varphi_{j}^{(k-1)}\right)\) by \((0,0,\ldots,1)\) and then taking the determinant. Due to the support restriction of \(G\), \(H\) vanishes in a small interval to the right of \(t=\epsilon\), and hence all its derivatives vanish at \(t=\epsilon\). In particular, \(H(\epsilon)=H^{(1)}(\epsilon)=\cdots=H^{(k-1)}(\epsilon)=0\). Thus, by uniqueness, this is the solution of the ODE (3.56).
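For instance, when \(k=1\), we have \(W(\varphi_{1})=\varphi_{1}\) and \(W_{1}\equiv 1\), and (3.57) reduces to the familiar first-order formula \(H(t)=-\varphi_{1}(t)\int\limits_{\epsilon}^{t}\frac{(1-s)}{s\,\varphi_{1}(s)}G(s)\,\mathrm{d}s\), where \(\varphi_{1}\) is any nonzero solution of the homogeneous equation.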
We also want the function \(H\) and all its derivatives to vanish at \(t=1-\alpha\). To this end, recall that
\[G(t) =\sum\limits_{l=0}^{k}\frac{(-1)^{k-l}(k+l)!}{(k-l)!l!2^{l}}t^{k- l}\left(\frac{1}{(1+t)}\frac{\mathrm{d}}{\mathrm{d}t}\right)^{k-l}(\tilde{H}(1+t)) \tag{3.58}\] \[=\sum\limits_{l=0}^{k}b_{l}(t)\left(\frac{\mathrm{d}}{\mathrm{d} t}\right)^{l}(\tilde{H}(1+t)). \tag{3.59}\]
The exact expression of the coefficients \(b_{l}\) is not important, but note that these are rational functions of \(t\) smooth in the interval \((\epsilon,1-\alpha)\). Substituting this into the expression for \(H\) and performing integration by parts (there are no boundary terms, due to the support condition on \(\tilde{H}\)), we obtain
\[H(1-\alpha)=\int\limits_{\epsilon}^{1-\alpha}\Phi(s)\tilde{H}(1+s)\,\mathrm{d}s \tag{3.60}\]
for some smooth function \(\Phi\).
If \(\Phi\equiv 0\), there is nothing to prove. If not, there exists \(s_{0}\in(\epsilon,1-\alpha)\) at which \(\Phi\) is either positive or negative, and hence, by continuity, \(\Phi\) keeps the same sign in a small interval around \(s_{0}\). Let this interval be \(I_{0}\). Let \(I_{1},I_{2}\subset I_{0}\) be disjoint subintervals. Choose two (non-negative, non-trivial) cut-off functions \(\chi_{1}\) and \(\chi_{2}\) supported in \(I_{1}\) and \(I_{2}\) respectively. For \(t\in(1,2)\), let us choose
\[\tilde{H}(t)=c_{1}\chi_{1}(t-1)+c_{2}\chi_{2}(t-1)\]
for \(c_{1},c_{2}\) to be chosen later. We then have
\[\int\limits_{\epsilon}^{1-\alpha}\Phi(s)\tilde{H}(1+s)\,\mathrm{d}s =c_{1}\int\limits_{\epsilon}^{1-\alpha}\Phi(s)\chi_{1}(s)\, \mathrm{d}s+c_{2}\int\limits_{\epsilon}^{1-\alpha}\Phi(s)\chi_{2}(s)\, \mathrm{d}s\] \[=c_{1}\int\limits_{I_{1}}\Phi(s)\chi_{1}(s)\,\mathrm{d}s+c_{2} \int\limits_{I_{2}}\Phi(s)\chi_{2}(s)\,\mathrm{d}s.\]
Choosing \(c_{1}=-\int\limits_{I_{2}}\Phi(s)\chi_{2}(s)\,\mathrm{d}s\) and \(c_{2}=\int\limits_{I_{1}}\Phi(s)\chi_{1}(s)\,\mathrm{d}s\), we get
\[H(1-\alpha) =\int\limits_{\epsilon}^{1-\alpha}\Phi(s)\tilde{H}(1+s)\,\mathrm{ d}s\] \[=0.\]
In fact, due to the choice of support of \(\tilde{H}\), \(H\) vanishes in a small interval to the left of \(t=1-\alpha\), and hence all its derivatives also vanish at \(t=1-\alpha\). Thus the function \(H\) obtained above, defined in \((\epsilon,1-\alpha)\), can be extended by \(0\) to a smooth function in \((0,1)\). Finally, the function \(h\in C_{c}^{\infty}((0,2))\) defined as
\[h(t)=\begin{cases}H(1-t),\quad\text{for}\quad t\in(0,1),\\ \tilde{H}(t),\quad\text{for}\quad t\in(1,2),\end{cases} \tag{3.61}\]
satisfies the assumptions of the lemma.
## 4. Alternate proof of main theorems
In Section 3.1, we proved the necessary and sufficient condition separately. Our proof for sufficiency was based on showing that our range condition implies the existing range characterization of [5] (see Theorem 2.4). In this section, however, we are going to take a different approach based on the results in [25], which proves both implications directly. Let us explain the main idea now.
Consider the inversion formula (2.4), which can be re-written as
\[f(x)=K(n)\left(\mathcal{N}^{*}\mathcal{D}^{*}\partial_{t}\mathcal{D}\mathcal{ N}f\right)(x)+K(n)\left(\mathcal{N}^{*}\mathcal{D}^{*}\partial_{t}t \partial_{t}\mathcal{D}\mathcal{N}f\right)(x).\]
Comparing this with (2.5), we observe
\[\left(\mathcal{N}^{*}\mathcal{D}^{*}\partial_{t}\mathcal{D}\mathcal{N}f \right)(x)=0,\]
and thus
\[\operatorname{range}\left(\mathcal{D}\mathcal{N}\right)\subset\ker\left( \mathcal{N}^{*}\mathcal{D}^{*}\partial_{t}\right).\]
In fact, the reverse inclusion also holds (see the discussion following [25, Theorem 3]) and we have
\[\operatorname{range}\left(\mathcal{D}\mathcal{N}\right)=\ker\left(\mathcal{N }^{*}\mathcal{D}^{*}\partial_{t}\right). \tag{4.1}\]
This is a key observation for our proof.
Proof of Theorem 1.1.: Let \(g\in C^{\infty}_{c}((0,2))\) and consider \(h(t)\coloneqq t^{n-2}g(t)\) as before. Our first step is to find conditions on \(h\) such that \(\mathcal{D}h\in\ker(\mathcal{N}^{*}\mathcal{D}^{*}\partial_{t})\). Since \(h\in C^{\infty}_{c}((0,2))\), \(\mathcal{N}^{*}\mathcal{D}^{*}\partial_{t}\mathcal{D}h\in C^{\infty}(\mathbb{ R}^{n})\). Thus, it is enough to find conditions on \(h\) such that \(\left(\mathcal{N}^{*}\mathcal{D}^{*}\partial_{t}\mathcal{D}h\right)(x)=0\) for \(x\) such that \(|x|\in(0,1)\).
For \(U=U(t)\in C^{\infty}_{c}((0,2))\), using the Funk-Hecke theorem, we have
\[(\mathcal{N}^{*}U)(x) =\frac{1}{\omega_{n}}\int\limits_{\mathbb{S}^{n-1}}\frac{U(|p-x |)}{|p-x|}\,\mathrm{d}S(p)\] \[=\frac{\omega_{n-1}}{\omega_{n}}\int\limits_{-1}^{1}\frac{U( \sqrt{1+|x|^{2}-2|x|t})}{\sqrt{1+|x|^{2}-2|x|t}}(1-t^{2})^{k}\,\mathrm{d}t.\]
Let \(c(n)\) denote the constant \(\frac{\omega_{n-1}}{\omega_{n}}\). Changing variables to \(u=\sqrt{1+|x|^{2}-2|x|t}\), we get
\[(\mathcal{N}^{*}U)(x)=\frac{c(n)}{2^{2k}|x|^{2k+1}}\int\limits_{1-|x|}^{1+|x| }U(u)[4|x|^{2}-(1+|x|^{2}-u^{2})^{2}]^{k}\,\mathrm{d}u. \tag{4.2}\]
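Indeed, under this substitution \(t=\frac{1+|x|^{2}-u^{2}}{2|x|}\), so that \(\mathrm{d}t=-\frac{u}{|x|}\,\mathrm{d}u\) and \(1-t^{2}=\frac{4|x|^{2}-(1+|x|^{2}-u^{2})^{2}}{4|x|^{2}}\); moreover, as \(t\) increases from \(-1\) to \(1\), \(u\) decreases from \(1+|x|\) to \(1-|x|\) (recall that \(|x|\in(0,1)\)). This accounts for the limits of integration and the factor \(\frac{1}{2^{2k}|x|^{2k+1}}\) in (4.2).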
Let us denote
\[P(x,u) =1+|x|^{2}-u^{2}\] \[\text{and}\quad A(x,u) =4|x|^{2}-P^{2}(x,u).\]
Observe that
\[P(x,1\pm|x|) =\mp 2|x|,\] \[A(x,1\pm|x|) =0.\]
Thus we have for \(x\neq 0\),
\[(\mathcal{N}^{*}U)(x)=\frac{c(n)}{2^{2k}|x|^{2k+1}}\int\limits_{1-|x|}^{1+|x|} U(u)A^{k}(x,u)\,\mathrm{d}u.\]
We also have the expression
\[\mathcal{D}^{*}\partial_{t}\mathcal{D}h=\frac{(-1)^{k}}{2^{2k}}\partial_{t}D ^{2k}h.\]
These yield
\[(\mathcal{N}^{*}\mathcal{D}^{*}\partial_{t}\mathcal{D}h)(x)=\frac{c(n)(-1)^{k }}{2^{4k}|x|^{2k+1}}\int\limits_{1-|x|}^{1+|x|}\partial_{t}D^{2k}h\cdot A^{k} (x,t)\,\mathrm{d}t.\]
Since \(A\) vanishes at \(t=1\pm|x|\), we can perform integration by parts \(k\)-times without picking up the boundary terms to get
\[(\mathcal{N}^{*}\mathcal{D}^{*}\partial_{t}\mathcal{D}h)(x)=\frac{c(n)}{2^{4k }|x|^{2k+1}}\int\limits_{1-|x|}^{1+|x|}\partial_{t}D^{k}h\cdot D^{k}A^{k}(x,t )\,\mathrm{d}t.\]
We want to transfer all the derivatives to \(A\), but now we will pick up the boundary terms. Invoking Lemma 2.9, we obtain
\[(\mathcal{N}^{*}\mathcal{D}^{*}\partial_{t}\mathcal{D}h)(x) =\frac{c(n)}{2^{4k}|x|^{2k+1}}\left[\sum\limits_{l=0}^{k-1}(-1)^{ l}D^{k-l}h\cdot D^{k+l}A^{k}\right]_{t=1-|x|}^{1+|x|}\] \[\qquad+(-1)^{k}\frac{c(n)}{2^{4k}|x|^{2k+1}}\int\limits_{1-|x|}^{ 1+|x|}\partial_{t}h\cdot D^{2k}A^{k}\,\mathrm{d}t.\]
Next, we need an expression for
\[D^{k+l}A^{k}(x,t)\quad\text{for}\quad 0\leq l\leq k.\]
Observe that, since \(DP=\frac{1}{t}\frac{\mathrm{d}P}{\mathrm{d}t}=-2\),
\[DA =4P,\] \[D^{2}A =-8,\] \[\text{and}\quad D^{j}A =0\quad\text{for }j\geq 3.\]
We invoke the special case of Faa di Bruno's formula (see Lemma 2.8) with \(F(t)=t^{k}\) and \(G(t)=A(x,t)\). Notice that \(F\) is a polynomial of degree \(k\) and thus, we obtain
\[D^{k+l}A^{k}(x,t)=\sum_{i\geq\frac{k+l}{2}}^{k}(-1)^{k+l-i}\frac{k!(k+l)!2^{2i} }{(k-i)!(2i-k-l)!(k+l-i)!}P^{2i-k-l}A^{k-i}.\]
Substituting this above, we find
\[(\mathcal{N}^{*}\mathcal{D}^{*}\partial_{t}\mathcal{D}h)(x) =\frac{c(n)}{2^{4k}|x|^{2k+1}}\left[\sum_{l=0}^{k-1}(-1)^{l}D^{k-l }h\cdot\sum_{i\geq\frac{k+l}{2}}^{k}\frac{(-1)^{k+l-i}k!(k+l)!2^{2i}}{(k-i)!(2 i-k-l)!(k+l-i)!}P^{2i-k-l}A^{k-i}\right]_{1-|x|}^{1+|x|}\] \[\qquad+(-1)^{k}\frac{c(n)}{2^{2k}|x|^{2k+1}}\int_{1-|x|}^{1+|x|} \partial_{t}h\cdot\left(\frac{(-1)^{k}k!(2k)!}{k!}\right)\,\mathrm{d}t.\]
Since \(A(x,1\pm|x|)=0\), only \(i=k\) term survives in the boundary term to give
\[(\mathcal{N}^{*}\mathcal{D}^{*}\partial_{t}\mathcal{D}h)(x) =\frac{c(n)}{2^{4k}|x|^{2k+1}}\left[\sum_{l=0}^{k-1}\frac{k!(k+l)! 2^{2k}}{(k-l)!l!}P^{k-l}D^{k-l}h\right]_{1-|x|}^{1+|x|}\] \[\qquad+\frac{c(n)}{2^{2k}|x|^{2k+1}}(2k)![h]_{1-|x|}^{1+|x|}.\]
Writing it out, we have
\[(\mathcal{N}^{*}\mathcal{D}^{*}\partial_{t}\mathcal{D}h)(x)= \frac{c(n)}{2^{2k}|x|^{2k+1}}\Bigg{[}\sum_{l=0}^{k-1}\frac{(-1)^{k-l}2^{k-l}k!(k+l)!}{(k-l)!l!}|x|^{k-l}\left[D^{k-l}h\right](1+|x|) \tag{4.3}\] \[\qquad\qquad\qquad\qquad\qquad\qquad-\sum_{l=0}^{k-1}\frac{2^{k -l}k!(k+l)!}{(k-l)!l!}|x|^{k-l}\left[D^{k-l}h\right](1-|x|)\Bigg{]}\] \[\qquad\qquad\qquad\qquad\qquad\qquad+\frac{c(n)}{2^{2k}|x|^{2k+ 1}}(2k)!\left[h(1+|x|)-h(1-|x|)\right]\]
or
\[(\mathcal{N}^{*}\mathcal{D}^{*}\partial_{t}\mathcal{D}h)(x)= \frac{c(n)k!}{2^{k}|x|^{2k+1}}\left([\mathcal{L}_{k}h](1+|x|)-[\mathcal{L}_{k }h](1-|x|)\right), \tag{4.4}\]
where we recall that \(\mathcal{L}_{k}\) is the linear differential operator of order \(k\), defined as
\[\mathcal{L}_{k}=\sum_{l=0}^{k}\frac{(k+l)!}{(k-l)!l!2^{l}}(1-t)^{k-l}D^{k-l}\]
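For instance, when \(k=0\) (i.e. \(n=3\)), \(\mathcal{L}_{0}h=h\) and (4.4) reads \((\mathcal{N}^{*}\mathcal{D}^{*}\partial_{t}\mathcal{D}h)(x)=\frac{c(n)}{|x|}\left(h(1+|x|)-h(1-|x|)\right)\), so the vanishing condition below is simply \(h(1+t)=h(1-t)\), consistent with the \(k=0\) case treated in the proof of Lemma 3.8.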
Thus \(\mathcal{D}h\in\ker(\mathcal{N}^{*}\mathcal{D}^{*}\partial_{t})=\mathrm{range} (\mathcal{D}\mathcal{N})\)_if and only if_\([\mathcal{L}_{k}h](1+t)=[\mathcal{L}_{k}h](1-t)\) for all \(t\in[0,1]\). This is equivalent to saying that there exists \(f\in C_{c}^{\infty}(\mathbb{B})\) such that
\[\mathcal{D}(t^{n-2}\mathcal{R}f)=\mathcal{D}h.\]
Since \(\mathcal{D}\) is a linear differential operator, it has a trivial kernel in the space of compactly supported smooth functions on \((0,2)\). Thus, the above is equivalent to saying that \(h=t^{n-2}\mathcal{R}f\) or \(g=\mathcal{R}f\).
**Remark 4.1**.: The sufficiency part of Theorem 1.4 can also be proved similarly with minor changes. We omit the proof.
Proof of Theorem 1.8.: Recall that when \(f\) has radial symmetry, we have (2.3):
\[\mathcal{R}f(p,t)=\frac{\omega_{n-1}}{\omega_{n}}\int\limits_{-1}^{1}f\left( \sqrt{1+t^{2}+2st}\right)(1-s^{2})^{k}\,\mathrm{d}s.\]
Consider the change of variables \(u=\sqrt{1+t^{2}+2st}\) to get
\[\mathcal{R}f(p,t)=\frac{\omega_{n-1}}{\omega_{n}}\frac{1}{t}\int\limits_{|1-t |}^{1+t}uf(u)\left(1-\left(\frac{u^{2}-1-t^{2}}{2t}\right)^{2}\right)^{k}\, \mathrm{d}u.\]
Choose \(F\in C_{c}^{\infty}((0,1))\) such that \(\mathrm{supp}(F)\subset(\epsilon,1)\) and take \(f(t)=\frac{\mathrm{d}^{m}}{\mathrm{d}t^{m}}F(t)\) for any \(m\geq 4k+2\). With this choice of \(f\), we have for \(t\in(1-\epsilon,1+\epsilon)\)
\[\mathcal{R}f(p,t)=\frac{\omega_{n-1}}{\omega_{n}}\frac{1}{t}\int\limits_{ \epsilon}^{1}u\left(\frac{\mathrm{d}^{m}}{\mathrm{d}u^{m}}F(u)\right)\left(1 -\left(\frac{u^{2}-1-t^{2}}{2t}\right)^{2}\right)^{k}\,\mathrm{d}u\]
due to the choice of support of \(F\). Performing repeated integration by parts (there are no boundary terms) moves all \(m\) derivatives onto the factor \(u\left(1-\left(\frac{u^{2}-1-t^{2}}{2t}\right)^{2}\right)^{k}\), which is a polynomial in \(u\) of degree \(4k+1\). Since \(m\geq 4k+2\), we obtain that \(\mathcal{R}f(p,t)=0\) for all \(p\in\mathbb{S}^{n-1}\) and \(t\in(1-\epsilon,1+\epsilon)\).
## 5. Further directions
* In this article, we have given a complete range characterization for the SMT in odd dimensions. See also [35] for a related work on this subject. A direction of further research is the derivation of simple range descriptions (e.g. for radial functions) in even dimensions. Once the range conditions are obtained for radial functions, the case of general functions can probably be handled by using the result for the radial case, similar to our approach presented in this paper. Notice that our range conditions in odd dimensions are of a differential nature. Since the operator is non-local in even dimensions, it is conceivable that the range conditions are also non-local in even dimensions (perhaps of an integral nature).
* One of the results of this paper is a counterexample to UCP for SMT in odd dimensions. The authors believe that the UCP (as introduced in this article) should hold in even dimensions, while the interior problem (see [37]) should not have a unique solution there. The authors plan to address these questions in a future work.
* An offshoot of the current work is the discovery of explicit inversion formulas for the SMT that we study, similar in spirit to the works of Norton [40], Norton-Linzer [41], Xu-Wang [50] and others based on Fourier series/spherical harmonics and Hankel transform. Our inversion formulas are valid in all odd and even dimensions, and are simpler than some of the already existing ones. We plan to report this work in an upcoming article.
## Acknowledgements
GA was partially supported by the NIH grant U01-EB029826.
DA was supported by Infosys-TIFR Leading Edge travel grants and National Board of Higher Mathematics travel grant. DA would like to thank the Indian Institute of Science Education and Research, Bhopal for the hospitality during his visit, where part of this work was completed. DA thanks Prof. Sombuddha Bhattacharyya for the kind invitation.
VK would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, UK, for support and hospitality during the workshop, _Rich and Nonlinear Tomography - a multidisciplinary approach_ in 2023 where part of this work was done (supported by EPSRC Grant Number EP/R014604/1).
All the authors thank Mark Agranovsky, Peter Kuchment, Leonid Kunyansky, Todd Quinto, Rakesh and Boris Rubin for several fruitful discussions while this work was being done.
|
2309.15816 | Orbit closures, stabilizer limits and intermediate $G$-varieties | In this paper we study the orbit closure problem for a reductive group
$G\subseteq GL(X)$ acting on a finite dimensional vector space $V$ over $\C$.
We assume that the center of $GL(X)$ lies within $G$ and acts on $V$ through a
fixed non-trivial character. We study points $y,z\in V$ where (i) $z$ is
obtained as the leading term of the action of a 1-parameter subgroup $\lambda
(t)\subseteq G$ on $y$, and (ii) $y$ and $z$ have large distinctive stabilizers
$K,H \subseteq G$. Let $O(z)$ (resp. $O(y)$) denote the $G$-orbits of $z$
(resp. $y$), and $\overline{O(z)}$ (resp. $\overline{O(y)}$) their closures,
then (i) implies that $z\in \overline{O(y)}$. We address the question: under
what conditions can (i) and (ii) be simultaneously satisfied, i.e, there exists
a 1-PS $\lambda \subseteq G$ for which $z$ is observed as a limit of $y$. Using
$\lambda$, we develop a leading term analysis which applies to $V$ as well as
to ${\cal G}= Lie(G)$ the Lie algebra of $G$ and its subalgebras ${\cal K}$ and
${\cal H}$, the Lie algebras of $K$ and $H$ respectively. Through this we
construct the Lie algebra $\hat{\cal K} \subseteq {\cal H}$ which connects $y$
and $z$ through their Lie algebras. We develop the properties of $\hat{\cal K}$
and relate it to the action of ${\cal H}$ on $\overline{N}=V/T_z O(z)$, the
normal slice to the orbit $O(z)$.
We examine the case of {\em alignment} when a semisimple element belongs to
both ${\cal H}$ and ${\cal K}$, and the conditions for the same. We illustrate
some consequences of alignment. Next, we examine the possibility of {\em
intermediate $G$-varieties} $W$ which lie between the orbit closures of $z$ and
$y$, i.e. $\overline{O(z)} \subsetneq W \subsetneq O(y)$. These have a direct
bearing on representation theoretic as well as geometric properties which
connect $z$ and $y$. | Bharat Adsul, Milind Sohoni, K V Subrahmanyam | 2023-09-27T17:43:46Z | http://arxiv.org/abs/2309.15816v2 | # Orbit closures, stabilizer limits and intermediate \(G\)-varieties
###### Abstract
In this paper we study the orbit closure problem for a reductive group \(G\subseteq GL(X)\) acting on a finite dimensional vector space \(V\) over \(\mathbb{C}\). We assume that the center of \(GL(X)\) lies within \(G\) and acts on \(V\) through a fixed non-trivial character. We study points \(y,z\in V\) where (i) \(z\) is obtained as the leading term of the action of a 1-parameter subgroup \(\lambda(t)\subseteq G\) on \(y\), and (ii) \(y\) and \(z\) have large distinctive stabilizers \(K,H\subseteq G\). Let \(O(z)\) (resp. \(O(y)\)) denote the \(G\)-orbits of \(z\) (resp. \(y\)), and \(\overline{O(z)}\) (resp. \(\overline{O(y)}\)) their closures, then (i) implies that \(z\in\overline{O(y)}\). We address the question: under what conditions can (i) and (ii) be simultaneously satisfied, i.e, there exists a 1-PS \(\lambda\subseteq G\) for which \(z\) is observed as a limit of \(y\).
Using \(\lambda\), we develop a leading term analysis which applies to \(V\) as well as to \(\mathcal{G}=Lie(G)\) the Lie algebra of \(G\) and its subalgebras \(\mathcal{K}\) and \(\mathcal{H}\), the Lie algebras of \(K\) and \(H\) respectively. Through this we construct the Lie algebra \(\widehat{\mathcal{K}}\subseteq\mathcal{H}\) which connects \(y\) and \(z\) through their Lie algebras. We develop the properties of \(\hat{\mathcal{K}}\) and relate it to the action of \(\mathcal{H}\) on \(\overline{N}=V/T_{z}O(z)\), the normal slice to the orbit \(O(z)\).
We examine the case of _alignment_ when a semisimple element belongs to both \(\mathcal{H}\) and \(\mathcal{K}\), and the conditions for the same. We illustrate some consequences of alignment and relate it to existing work in the case of the determinant and permanent. Next, we examine the possibility of _intermediate \(G\)-varieties_\(W\) which lie between the orbit closures of \(z\) and \(y\), i.e. \(\overline{O(z)}\subsetneq W\subsetneq O(y)\). These have a direct bearing on representation theoretic as well as geometric properties which connect \(z\) and \(y\).
The paper hopes to contribute to the Geometric Complexity Theory approach of addressing problems in computational complexity in theoretical computer science.
## 1 Introduction
Let \(X\) be a vector space over \(\mathbb{C}\) of dimension \(n\) and let \(G\subseteq GL(X)\) be a reductive algebraic group over \(\mathbb{C}\). Furthermore, if \(Z=\{tI|t\in\mathbb{C}^{*}\}\), the center of \(GL(X)\) of non-zero multiples of the identity matrix \(I\), then we assume that \(Z\) is a subgroup of
\(G\). Let \(V\) be a finite dimensional \(G\)-module via a rational map \(\rho:G\to GL(V)\). For a \(g\in G\) and \(v\in V\), let \(\rho(g)\cdot v\), or simply \(gv\) denote the action of \(g\) on \(v\) via \(\rho\). We assume that \(Z\) acts through a fixed non-trivial character on \(V\). In other words, there is an integer \(c\neq 0\) such that for any \(v\in V\), we have \(tI\cdot v=\rho(tI)v=t^{c}v\).
Let \(y\in V\) and \(O(y)\) denote the \(G\)-orbit of \(y\). Since \(Z\) acts non-trivially, the closure \(\overline{O(y)}\) is also a cone and its ideal \(I_{y}\subseteq\mathbb{C}[V]\) is a homogeneous ideal. Let \(\lambda:\mathbb{C}^{*}\to G\) be a one-parameter subgroup (1-PS) such that
\[\lambda(t)y=\sum_{i=d}^{D}t^{i}y_{i}=t^{d}z+t^{e}y_{e}+\ldots+t^{D}y_{D} \tag{1}\]
where \(z=y_{d}\) and \(y_{e}\) are non-zero vectors and \(d<e<\ldots<D\). We call \(z\) the leading term of \(y\) under \(\lambda\), and \(y_{e}\) the _tangent of approach_.
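As a toy illustration of these notions (not one of the examples studied in this paper): take \(G=GL_{2}(\mathbb{C})\) acting on \(V=\mathbb{C}^{2}\) in the standard way, so that the center acts through the character \(t\mapsto t\). Let \(y=e_{1}+e_{2}\) and \(\lambda(t)=\mathrm{diag}(t^{2},t)\). Then \(\lambda(t)y=t\,e_{2}+t^{2}e_{1}\), so \(d=1\), the leading term is \(z=e_{2}\), and the tangent of approach is \(y_{e}=e_{1}\) with \(e=2\).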
The motivation of this paper is to study the following question: given special elements \(z,y\in V\) with large and distinctive stabilizers \(H,K\subseteq G\), to determine, using just the stabilizer data, whether \(z\) can arise as a leading term of \(y\) under a 1-PS \(\lambda(t)\subseteq G\). By dividing by \(t^{d}\) (which is also achieved by applying a suitable multiple of the identity), it is clear that the leading term \(z\) lies within \(\overline{O(y)}\), the closure of the \(G\)-orbit of \(y\in V\).
### Our Contributions
Let us set up some of the background notation.
Let \(\mathcal{G}\) denote the Lie algebra of \(G\). The central objects are \(\mathcal{G}_{y}=\mathcal{K}\), the Lie algebra of the stabilizer \(K\) of \(y\) and \(\mathcal{G}_{z}=\mathcal{H}\), that of the stabilizer \(H\) of \(z\). Let \(O(z)\) and \(O(y)\) denote the \(G\)-orbits of \(z\) and \(y\) respectively. Let \(\overline{O(z)},\overline{O(y)}\) denote their closures, \(I_{z},I_{y}\subseteq\mathbb{C}[V]\) their ideals and \(A_{z},A_{y}\) their coordinate rings. Let \(T_{z}O(z)=\mathcal{G}\cdot z\), denote the tangent space of the orbit \(O(z)\) at the point \(z\). Let \(\overline{N}\) be the \(\mathcal{H}\)-module \(V/(T_{z}O(z))\) which represents a "normal" slice at \(z\) to the tangent space \(T_{z}O(z)\).
Finally, the 1-PS \(\lambda(t)\) allows a grading of \(V\) by weights under the usual action, and \(\mathcal{G}\) under the adjoint action. We thus have \(V=\oplus_{i}V_{i}\) with \(\lambda(t)v_{i}=t^{i}v_{i}\) for any \(v_{i}\in V_{i}\). We also have \(\mathcal{G}=\oplus_{j}\mathcal{G}_{j}\). For any non-zero \(v=\sum_{i}v_{i}\), let the leading term \(\hat{v}\) be the non-zero term \(v_{a}\) of smallest degree. Similarly, for a non-zero \(\mathfrak{g}\in\mathcal{G}\) with \(\mathfrak{g}=\sum_{j}\mathfrak{g}_{j}\), let \(\hat{\mathfrak{g}}\) denote the non-zero term \(\mathfrak{g}_{b}\) of smallest degree. Thus, in the above notation, we have \(\hat{y}=z\).
We prove:
**Theorem 1.1**: _Let \(\hat{\mathcal{K}}\) be the vector space generated by \(\{\hat{\mathfrak{k}}|\mathfrak{k}\in\mathcal{K}\}\), the collection of leading terms of \(\mathcal{K}\). Then \(\hat{\mathcal{K}}\) is a Lie subalgebra of \(\mathcal{H}\) and \(dim(\hat{\mathcal{K}})=dim(\mathcal{K})\). Moreover, \(\hat{\mathcal{K}}\subseteq\mathcal{H}_{\overline{y_{e}}}\), the Lie algebra stabilizer within \(\mathcal{H}\) of \(\overline{y_{e}}\in\overline{N}\)._
\(\hat{\mathcal{K}}\) is also the limit of \(\lambda(t)\mathcal{K}\lambda(t)^{-1}\), the stabilizer of \(y(t)=\lambda(t)y\). The above theorem brings out the role of \(y_{e}\), the _tangent of approach_ as an element of \(\overline{N}\), a normal section to the orbit \(O(z)\) at \(z\). The injection \(\hat{\mathcal{K}}\subseteq\mathcal{H}\) sets up a direct Lie algebraic connection between \(y\) and its limit \(z\) through their stabilizers.
Even though \({\cal K}\) may be semisimple, \(\hat{\cal K}\) demonstrates a variety of possibilities, depending on the alignment between \(\lambda(t)\) and \({\cal K}\). Let \(P(\lambda)\) be the parabolic subgroup of \(G\) corresponding to \(\lambda\) (see Definition 2.16). Let \(P(\lambda)=L(\lambda)U(\lambda)\) be the Levi decomposition of \(P(\lambda)\). Let \({\cal P}(\lambda),{\cal L}(\lambda),{\cal U}(\lambda)\) be their Lie algebras. Finally, let \(\overline{\ell}\in{\cal G}\) be the toric element such that \(t^{\overline{\ell}}=\lambda(t)\). We prove:
**Theorem 1.2**: _Let \(y,z,\lambda\) and \(\overline{\ell}\) be as above. Then at least one of the following holds:_
**(A)**: _Let_ \({\cal K}^{\prime}=\hat{\cal K}\oplus{\mathbb{C}}\overline{\ell}\)_, then_ \({\cal K}^{\prime}\subseteq{\cal H}\) _is a Lie algebra of rank 1, i.e., the dimension of any maximal torus in_ \({\cal K}^{\prime}\) _is_ \(1\)_._ _or_ _(B)**: _There is a semisimple element_ \(\mathfrak{k}\in{\cal K}\) _and a (unipotent) element_ \(u\in U(\lambda)\) _such that the conjugate_ \(\mathfrak{k}^{u}\in{\cal H}\)_._
_We call such an element \(\mathfrak{k}^{u}\in{\cal H}\cap{\cal K}^{u}\) an alignment between \(z\) and \(y^{u}\)._
Alignment, i.e., the presence of a common semisimple element in \({\cal H}\) and \({\cal K}\) (or its conjugate) has important consequences for the determinant vs. permanent problem as well as co-dimension one orbits on the boundary of the orbit of the determinant.
Let \(X\) be an \(n\times n\)-matrix of indeterminates and \(V=Sym^{n}(X^{*})\). Consider \(y=det_{n}(X)\) with stabilizer \(K_{n}\subseteq GL_{n^{2}}\). It is well known [10] that the boundary of the determinant orbit is a finite union of \(G\)-varieties of codimension 1, that is, \(\overline{O(y)}-O(y)=\cup_{i}W_{i}\) where \(W_{i}\) is a \(G\)-variety of dimension one less than that of \(\overline{O(y)}\).
**Corollary 1.3**: _If \(W_{i}\) equals \(\overline{O(Q_{i})}\) for some form \(Q_{i}\in V\) which is obtained as a limit of a 1-PS \(\lambda_{i}\) acting on \(det_{n}(X)\), then either the stabilizer \({\cal H}_{i}\) of \(Q_{i}\) is of rank \(1\), or there is an alignment between \(Q_{i}\) and \(det_{n}\) (or its conjugate)._
The special case of \(n=3\) is analysed and the extent of alignment between \(Q_{i}\)'s and \(det_{3}\) is illustrated.
We also consider the case when \(Y\cong{\mathbb{C}}^{m^{2}+1}\) is the space of \(m\times m\) matrices along with an auxiliary variable \(Y_{nn}\). Suppose \(\phi:Y\to X\) is such that the pullback of \(det_{n}(X)\) is the padded permanent \(Y_{nn}^{n-m}perm_{m}(Y)\). We show that any alignment provides explicit combinatorial information on the structure of \(\phi\).
**Proposition 1.4**: _Suppose \(\phi:Y\to X\) as above has an alignment then there is a rectangular decomposition \({\cal R}=\{R_{1},\ldots,R_{r}\}\) of the index set of \(Y\) and \({\cal S}=\{S_{1},\ldots,S_{s}\}\) of that of \(X\) and a relation \(\Phi\subseteq{\cal R}\times{\cal S}\) such that: \(\phi(Y_{R_{i}})\subseteq\oplus_{S_{j}\in\Phi(R_{i})}X_{S_{j}}\)._
For both cases above, we show that the absence of an alignment poses exceptional conditions on \(\lambda\).
The second part deals with developing the connection between \({\cal K}\) and \({\cal H}\) through _intermediate \(G\)-varieties_, defined below:
**Definition 1.5**: _We say that the closed variety \(W\) is an intermediate variety between \(\overline{O(y)}\) and \(\overline{O(z)}\) if \(W\) is \(G\)-stable and \(\overline{O(y)}\supseteq W\supseteq\overline{O(z)}\). We say \(W\) is strict if \(\overline{O(y)}\supsetneq W\supsetneq\overline{O(z)}\)._
We provide two recipes to construct intermediate varieties \(\overline{O(y)}\supseteq W\supseteq\overline{O(z)}\).
In the first case, we attempt to construct the smallest \(G\)-variety \(W\) such that there is an \(x\in W\) and a 1-PS \(\mu(t)\subseteq G\) such that \(z\) is the leading term of \(x\) under the action of \(\mu\) and \(y_{e}\) is the tangent of approach. In other words, there is a 1-PS path lying entirely within \(W\) which approaches \(z\) with \(y_{e}\) as the tangent.
Towards this, we construct the associated graded ring for the ideal \(I_{z}\subseteq\mathbb{C}[V]\) as \(R=\oplus R_{i}\), where \(R_{i}=I_{z}^{i}/I_{z}^{i+1}\). Note that \(R\cong\mathbb{C}[V]\) as \(G\)-modules and \(dim(R)=dim(\mathbb{C}[V])\) as algebras over \(\mathbb{C}\). For an ideal \(I\subseteq\mathbb{C}[V]\), let \(\overline{I}\subseteq R\) be the graded ideal corresponding to \(I\).
For any \(w\in O(z)\) and \(v\in T_{w}O(z)\), we define \(D^{k}_{w,v}:I_{z}^{k}/I_{z}^{k+1}\to\mathbb{C}\) where for any \(\overline{f}\in I_{z}^{k}/I_{z}^{k+1}\) and representative \(f\in I_{z}^{k}\), \(D^{k}_{w,v}(\overline{f})\) is the coefficient of \(t^{k}\) in \(f(w+tv)\). Thus \(D^{k}_{w,v}\) are generalized derivations at the point \(w\) in the direction \(v\).
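For instance, if \(f=f_{1}\cdots f_{k}\) with each \(f_{i}\in I_{z}\), then, since each \(f_{i}\) vanishes at \(w\in O(z)\), the coefficient of \(t^{k}\) in \(f(w+tv)\) equals \(\prod_{i=1}^{k}d_{w}f_{i}(v)\), the product of the directional derivatives of the factors at \(w\) along \(v\).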
We then have the following theorem:
**Theorem 1.6**: _Let_
\[\overline{J}_{k}=\{\overline{f}\in I_{z}^{k}/I_{z}^{k+1}|D^{k}_{gz,gy_{e}}( \overline{f})=0\mbox{ for all }g\in G\}\]
_Then:_
1. \(\overline{J}_{z,y_{e}}=\oplus_{k\geq 1}\overline{J}_{k}\) _is a_ \(G\)_-stable ideal of_ \(R\)_. Moreover,_ \(\overline{I}_{z}\supseteq\overline{J}_{z,y_{e}}\supseteq\overline{I}_{y}\)_._
2. _The dimension of_ \(\overline{J}_{z,y_{e}}\) _is_ \(dim(G)-dim(\mathcal{H}_{\overline{y_{e}}})\)_._
We know in general that \(\hat{\mathcal{K}}\subseteq\mathcal{H}_{\overline{y_{e}}}\). By the above theorem, if \(dim(\mathcal{K})<dim(\mathcal{H}_{\overline{y_{e}}})\), then, in the normal cone \(Spec(R)\), there is indeed an intermediate variety between \(\overline{O(z)}\) and \(\overline{O(y)}\).
**Conjecture 1.7**: _If \(dim(\mathcal{K})<dim(\mathcal{H}_{\overline{y_{e}}})\), then there is a strictly intermediate variety \(\overline{O(z)}\subsetneq W\subsetneq\overline{O(y)}\) of dimension \(dim(G)-dim(\mathcal{H}_{\overline{y_{e}}})\)._
In the second construction, we fix \(\lambda\) and look at limits of elements \(y^{\prime}\in O(y)\) which are of the same degree as \(z\). Let \(\pi_{d}:V\to V_{d}\) be the projection onto the weight space. Then, we define:
\[Y_{d}=\{y^{\prime}\in O(y)\mbox{ such that }z^{\prime}=\widehat{y^{\prime}} \mbox{ is of degree }d\}\]
and \(Z_{d}=\pi_{d}(Y_{d})\), the set of leading terms of elements in \(Y_{d}\). Thus \(Z_{d}\) is the space of _co-limits_ of \(z\), obtained from elements of \(O(y)\) using \(\lambda\). It is easy to see:
**Lemma 1.8**: _Let \(O(Z_{d})=\{gz^{\prime}|z^{\prime}\in Z_{d}\mbox{ and }g\in G\}\) and \(\overline{O(Z_{d})}\) be its closure. Then \(\overline{O(Z_{d})}\) is an intermediate variety._
The question is if the strict condition \(\overline{O(z)}\subsetneq\overline{O(Z_{d})}\) holds. We check this by comparing the tangent spaces of degree \(d\), viz., \(T_{z}Z_{d}\) and \((T_{z}O(z))_{d}={\cal G}_{0}z\).
**Definition 1.9**: _Let \(y,\lambda\) and \(d\) be fixed as above. An element \(\mathfrak{g}\in{\cal G}\) is called a \(d\)-stabilizer iff \(\mathfrak{g}=\sum_{i}\mathfrak{g}_{i}\) is such that \((\mathfrak{g}y)_{a}=0\) for all \(a<d\). Let \({\cal G}_{y,d}\subseteq{\cal G}\) be the collection of \(d\)-stabilizers of \(y\)._
In other words, \({\cal G}_{y,d}\) is the collection of "Lie elements" \(g\in G\) such that \(gy\subseteq Y_{d}\).
**Proposition 1.10**: _Let \(k=dim(H)-dim(K)\). Suppose that \(y\) and \(z\) are smooth within \(Y_{d}\) and \(Z_{d}\) respectively. Then there is a subspace \(F\subseteq{\cal G}_{y,d}\) of dimension at most \(k\) such that \({\cal G}_{y,d}={\cal P}(\lambda)+{\cal K}+F\). Moreover, let \(TW_{z}=\pi_{d}(\{\mathfrak{g}\cdot y|\mathfrak{g}\in F\})\) be the leading terms of \(F\cdot y\), then \(T_{z}Z_{d}\) =\(TW_{z}+{\cal G}_{0}z\)._
If \(y\) is in the null cone of \(V\) for the \(G\)-action and \(\lambda\) is the "optimal" 1-PS then \({\cal G}_{y,d}={\cal P}(\lambda)\), see [10], Lemma 4.6. Thus \(F\) measures the deviation of \(\lambda\) from the optimal 1-PS which drives \(y\) to \(0\). If \(\lambda\) were optimal then \(\overline{O(z)}=\overline{O(Z_{d})}\).
### Background
The problem of analysing the stabilizers \(K\) of \(y\) and \(H\) of the limit \(z\), arises in showing lower bounds in algebraic complexity theory, for example the permanent vs. determinant question, see [11], Conjecture 4.3, for details. That the question of orbit closures for such special \(y\) and \(z\) can be settled by using purely the stabilizer data is the central thesis of Geometric Complexity Theory (GCT), see [11] and others ([1], [12]). This paper hopes to contribute to the theory by providing Lie algebraic techniques for the same.
An earlier approach, proposed in [11], was to use the Peter-Weyl condition as follows. For the \(H,K\) as above, the \(G\)-modules which appear in \(A_{y}\) (or \(A_{z}\)) are determined by the Peter-Weyl condition, i.e., these are \(G\)-modules \(V_{\mu}\) such that \(V_{\mu}^{*}\) has a \(K\) (resp. \(H\)) fixed vector. The existence of a 1-PS as above would mean that \(z\in\overline{O(y)}\) and we would have a \(G\)-equivariant surjection \(A_{y}\to A_{z}\). Thus, the proof of the nonexistence of a suitable 1-PS is obtained by the presence of certain \(G\)-modules in \(A_{z}\) as _obstructions_, thereby offering a combinatorial recipe for the problem. However, for the \(H,K\) in question, the mere absence or presence of certain \(G\)-modules as obstructions was shown to be inadequate to prove the required exponential lower bounds, see [1]. As pointed out by them and other authors, a more refined analysis of occurrences and multiplicities of \(G\)-modules in \(A_{y},A_{z}\), [13], [14] may yield the required obstructions.
Besides GCT, there have been other algebraic approaches to lower bounds in algebraic complexity theory, and in particular to the determinant and permanent problem. See, for example, [12], for a quadratic lower bound which uses the curvature data for the zero sets of determinant and permanent as hypersurfaces. The more traditional approach to lower bounds problems has been to study properties of polynomials computed by circuit families with restrictions placed on their size
and/or depth of these circuits. Here combinatorial and algebraic techniques are used and these approaches have met with fair success, see the recent survey by [10]. In [11] the authors study the closure of forms computed by algebraic \(\Sigma^{k}\Pi\Sigma\)-circuits with the top \(\Sigma\)-gate having constant fanin \(k\). They show that forms in the closure of this circuit class have polynomial determinantal complexity, see [12].
Coming to this paper, in [1] we developed Lie algebraic techniques to study when \(z\in\overline{O(y)}\) is obtained as the limit of a general 1-parameter family (1-PF) \(\gamma(t)\subseteq G\). The central objects of study in [1] are the Lie algebras \(\mathcal{H}\) of \(H\), \(\mathcal{K}\) of \(K\) and \(\mathcal{G}\) of \(G\). We gave explicit formulas for the action of \(\mathcal{G}\) on an appropriately chosen slice at \(z\), and called this a local model at \(z\). This was then used to construct the limiting Lie algebra \(\hat{\mathcal{K}}\subseteq\mathcal{H}\) of \(\mathcal{K}\) and to analyse its properties.
The first part of this paper arose as an attempted simplification of the construction of \(\hat{\mathcal{K}}\) and its properties as discussed in [1] when the 1-PF is actually a 1-PS. No familiarity with that paper is assumed.
## 2 Stabilizer Limits
We recall the setting from Section 1. \(G\subseteq GL(X)\) is a reductive algebraic group over \(\mathbb{C}\) where \(X\) is a vector space over \(\mathbb{C}\) of dimension \(n\). The center \(Z\) of \(GL(X)\) is a subgroup of \(G\). \(V\) is a finite dimensional \(G\)-module with the center \(Z\) acting as a nontrivial character on \(V\). We have \(y\in V\) with stabilizer \(K\) and \(z\in\overline{O(y)}\) with stabilizer \(H\). The ideals of \(\overline{O(y)}\) and \(\overline{O(z)}\) within \(\mathbb{C}[V]\) are \(I_{y}\) and \(I_{z}\) respectively.
Let \(\mathcal{G}=Lie(G)\) be the Lie algebra of \(G\). Note that \(\mathcal{G}\) is a subalgebra of \(gl(X)=gl_{n}(\mathbb{C})\), the Lie algebra of \(n\times n\) matrices over \(\mathbb{C}\). We recall the following basic lemma:
**Lemma 2.1**: _For the above data, we have:_
1. \(G\) _acts on_ \(\mathcal{G}\) _by conjugation and this is called the adjoint action:_ \(adj(g)\cdot\mathfrak{g}=g\mathfrak{g}g^{-1}\)_._
2. _For the action_ \(\rho:G\to GL(V)\)_, there is a Lie algebra action_ \(\rho_{1}:\mathcal{G}\to End(V)\) _such that for any_ \(g\in G\) _and_ \(\mathfrak{g}\in\mathcal{G}\)_:_ \[\rho(g)\rho_{1}(\mathfrak{g})\rho(g)^{-1}=\rho_{1}(adj(g)\cdot\mathfrak{g})= \rho_{1}(g\mathfrak{g}g^{-1})\]
For any \(v\in V\) and \(\mathfrak{g}\in\mathcal{G}\) (resp. \(g\in G\)), \(\mathfrak{g}v\) (resp. \(gv\)) will denote the action \(\mathfrak{g}\) (resp. \(g\)) on the element \(v\) via \(\rho_{1}\) (resp. \(\rho\)). For any \(v\in V\), the _stabilizer_ of \(v\) will refer to either \(G_{v}:=\{g\in G|gv=v\}\), the subgroup of \(G\) fixing \(v\), or to \(\mathcal{G}_{v}:=\{\mathfrak{g}\in\mathcal{G}|\mathfrak{g}v=0\}\), the Lie subalgebra of \(\mathcal{G}\) sending \(v\) to zero.
### Leading term modules and algebras
The situation we are interested in is when \(y,z\) are special elements of \(V\) as above and \(\lambda:\mathbb{C}^{*}\to G\) is a 1-parameter subgroup (1-PS) such that:
\[\lambda(t)y=y(t)=y_{d}t^{d}+y_{e}t^{e}+\ldots+y_{D}t^{D}\]
where \(y_{d}=z\neq 0\). We remind the reader that in the expression on the right side above, the exponents of \(t\) are increasing from left to right. The vector \(z\) is the limit of \(y\) under \(\lambda\).
Now \(\lambda\) gives us a grading of \(V\) as well as of the Lie algebra \({\cal G}\) of \(G\). We state this as a lemma without proof.
**Lemma 2.2**: _Under the action of \(\lambda\), we have a \({\mathbb{Z}}\) grading \(V=\oplus_{i}V_{i}\) such that for any \(v\), we have \(v=\sum_{i}v_{i}\) with_
\[\lambda(t)v=\sum_{i}t^{i}v_{i}\]
_Similarly, we have a \({\mathbb{Z}}\)-grading \({\cal G}=\oplus_{j}{\cal G}_{j}\)1 such that for any \({\mathfrak{g}}\in{\cal G}\), we have \({\mathfrak{g}}=\sum_{j}{\mathfrak{g}}_{j}\) and:_
Footnote 1: Note the unfortunate use of the notation \({\cal G}_{y}\) and \({\cal G}_{j}\) for the stabilizer of \(y\) and also the degree \(j\)-component of \({\cal G}\). When \(j\) is an integer it will always mean the latter.
\[\lambda(t){\mathfrak{g}}=\sum_{j}t^{j}{\mathfrak{g}}_{j}\]
_Finally, if \({\mathfrak{g}}v=w=\sum_{i}w_{i}\), then \(w_{i}=\sum_{j}{\mathfrak{g}}_{j}v_{i-j}\)._
The _degree_ of a non-zero element \(v_{i}\in V_{i}\) (resp. \({\mathfrak{g}}_{j}\in{\cal G}_{j}\)) is the number \(i\) (resp. \(j\)).
**Definition 2.3**: _For any non-zero \(v\in V\) as in the above lemma, we define the leading term as \(v_{a}\neq 0\) of smallest degree, and denote it by \(\hat{v}\). The degree \(deg(v)\) is defined as the integer \(a\). Similarly, for any \({\mathfrak{g}}\neq 0\) as above, the leading term of \({\mathfrak{g}}\) is defined as \({\mathfrak{g}}_{b}\neq 0\) of smallest degree, and is denoted by \(\hat{{\mathfrak{g}}}\). Its degree is defined as \(b\). As a convention, we define \(deg(0_{V})=deg(0_{\cal G})=\infty\)._
In our motivating example \(z\) is the leading term of \(y\). Of importance is also the coefficient of second lowest power of \(t\) in the expression, \(y_{e}\), which we call the tangent of approach.
**Lemma 2.4**: _(A) Let \(v,v^{\prime}\in V\) and \(deg(v)<deg(v^{\prime})\) then \(deg(v+v^{\prime})=deg(v)\). If \(deg(v)=deg(v^{\prime})\) then either (i) \(deg(v+v^{\prime})=deg(v)\) and \((\widehat{v+v^{\prime}})=\hat{v}+\hat{v^{\prime}}\) or (ii) \(deg(v+v^{\prime})>deg(v)\). (B) For \({\mathfrak{g}},{\mathfrak{g}}^{\prime}\in{\cal G}\), either (i) \(deg([{\mathfrak{g}},{\mathfrak{g}}^{\prime}])=deg({\mathfrak{g}})+deg({ \mathfrak{g}}^{\prime})\) and then \([\hat{{\mathfrak{g}}},\hat{{\mathfrak{g}}}^{\prime}]=\widehat{[{\mathfrak{g}},{\mathfrak{g}}^{\prime}]}\), or \(deg([{\mathfrak{g}},{\mathfrak{g}}^{\prime}])>deg({\mathfrak{g}})+deg({ \mathfrak{g}}^{\prime})\) and then \([\hat{{\mathfrak{g}}},\hat{{\mathfrak{g}}}^{\prime}]=0\). Finally, if \({\mathfrak{g}}\in{\cal G}\) and \(v\in V\) are arbitrary elements then either \(\hat{{\mathfrak{g}}}\hat{v}=0\) or \(deg({\mathfrak{g}}(v))=deg(v)+deg({\mathfrak{g}})\)_
**Proof**: (A) follows from the linearity of the action of \(\lambda\), i.e., \((v+v^{\prime})(t)=v(t)+v^{\prime}(t)\). For (B), for the first claim, note that for the adjoint action of \(G\) on \({\cal G}\), we have \(\lambda(t)[{\mathfrak{g}},{\mathfrak{g}}^{\prime}]=[\lambda(t){\mathfrak{g}}, \lambda(t){\mathfrak{g}}^{\prime}]\), or in other words, \([{\mathfrak{g}},{\mathfrak{g}}^{\prime}](t)=[{\mathfrak{g}}(t),{\mathfrak{g}} ^{\prime}(t)]\). Thus if \({\mathfrak{g}}(t)=\sum_{i\geq a}{\mathfrak{g}}_{i}t^{i}\) and \({\mathfrak{g}}^{\prime}(t)=\sum_{j\geq b}{\mathfrak{g}}^{\prime}_{j}t^{j}\), then we have:
\[\begin{array}{rcl}[{\mathfrak{g}},{\mathfrak{g}}^{\prime}](t)&=&[{\mathfrak{g}}(t),{\mathfrak{g}}^{\prime}(t)]\\ &=&[{\mathfrak{g}}_{a}t^{a}+\ldots,\;{\mathfrak{g}}^{\prime}_{b}t^{b}+\ldots]\\ &=&[{\mathfrak{g}}_{a},{\mathfrak{g}}^{\prime}_{b}]t^{a+b}+\ldots\\ &=&[\hat{{\mathfrak{g}}},\hat{{\mathfrak{g}}}^{\prime}]t^{a+b}+\ldots\end{array}\]
The two cases are determined by whether \([\hat{\mathfrak{g}},\hat{\mathfrak{g}}^{\prime}]=0\) or not. For the second claim, let \(deg(v)=a\) and \(deg(\mathfrak{g})=b\). We see that:
\[\begin{array}{rcl}\rho(\lambda(t))(\rho_{1}(\mathfrak{g})(v))&=&\rho(\lambda(t ))\rho_{1}(\mathfrak{g})\rho(\lambda(t)^{-1})\rho(\lambda(t))(v)\\ &=&\mathfrak{g}(t)v(t)\\ &=&t^{a+b}\mathfrak{g}_{b}v_{a}+\mbox{ higher degree terms}\\ &=&t^{a+b}\hat{\mathfrak{g}}\hat{v}+\mbox{ higher degree terms}\end{array}\]
Again, which of the two conditions holds depends on whether \(\hat{\mathfrak{g}}\hat{v}=0\) or not. This proves the second claim. \(\Box\)
**Lemma 2.5**: _Let \(M\subseteq V\) be a subspace. Define \(\hat{M}\) as the subspace generated by the set \(\{\hat{m}|m\in M\}\). Then \(\hat{M}\) is a finite dimensional subspace of \(V\) and \(dim_{\mathbb{C}}(M)=dim_{\mathbb{C}}(\hat{M})\). Similarly, for any subspace \({\cal L}\subseteq{\cal G}\), \(\hat{\cal L}\) is a finite dimensional subspace of \({\cal G}\) and has the same dimension as \({\cal L}\)._
**Proof**: Define \(M_{i}=\{m\in M|deg(m)\geq i\}\), i.e., \(M_{i}:=M\cap(\oplus_{j\geq i}V_{j})\). Then \(M_{i}\) is a finite dimensional subspace of \(M\) and \(M_{i}\supseteq M_{i+1}\). Let \(D=\{deg(m)|m\in M\}\) be the set of all degrees which are seen. Let \(D=\{i_{1},\ldots,i_{k},\ldots\}\). Then \(D\) is also the set of indices where \(M_{i_{j}}\supsetneq M_{i_{j}+1}\). Since \(M\) is finite-dimensional, we see that \(D\) is finite, say \(D=\{i_{1},\ldots,i_{k}\}\).
Let \(d_{j}=dim(M_{i_{j}})-dim(M_{i_{j}+1})\) and \(B_{j}=\{m_{j,1},\ldots,m_{j,d_{j}}\}\subseteq M\) be linearly independent elements of \(M\) such that \(M_{i_{j}}=M_{i_{j}+1}+\mathbb{C}\cdot B_{j}\) (where \(\mathbb{C}\cdot B_{j}\) is the linear space generated by the elements of \(B_{j}\)).
Let \(B=\cup_{j=1}^{k}B_{j}\). Our first claim is that (i) the elements of \(B\) are linearly independent. For a contradiction, suppose that \(\sum_{i}\alpha_{i}b_{i}=0\) for some non-zero \(\alpha_{i}\), with \(b_{i}\in B\). Let \(s\) be the minimum of the degrees of the \(b_{i}\)'s in this linear combination. Restricting the combination to those \(b_{i}\)'s of degree \(s\) gives a non-trivial linear dependence in \(M_{s}/M_{s+1}\). This contradicts the choice of the independent elements of \(B\) spanning \(M_{s}\) modulo \(M_{s+1}\). This proves (i).
Next, we claim that (ii) \(B\) is a basis for \(M\). Suppose not; let \(m\in M-\mathbb{C}\cdot B\) be of maximum degree among all such elements. If \(d=deg(m)\), then \(d=i_{j}\) for some \(j\). Subtracting a suitable linear combination of the elements of \(B_{j}\), i.e., setting \(m^{\prime}=m-\sum_{r}\alpha_{r}m_{i_{j},r}\), gives us an element \(m^{\prime}\) of higher degree, again outside \(\mathbb{C}\cdot B\). But this contradicts the choice of \(m\). This proves (ii).
Finally, (iii) we claim that \(\hat{B}\) is a basis for \(\hat{M}\). Clearly, all leading terms of \(\hat{M}\) have degrees from the set \(D\). For any \(\hat{m}\) of degree \(d=i_{j}\), we know that there is an element \(m^{\prime\prime}=\sum_{r}\alpha_{r}m_{i_{j},r}\) of \(M\) such that \(m^{\prime}=m-m^{\prime\prime}\) is either of a higher degree or is zero. In either case, we have \(\hat{m}=\sum_{r}\alpha_{r}\widehat{m_{i_{j},r}}\). Coming to the linear independence of \(\hat{B}\), note that \(\hat{B}=\dot{\cup}_{j}\hat{B}_{j}\), where each \(\hat{B}_{j}\) is a subset of different graded components \(V_{j}\). Thus any linear dependence must be purely within \(\hat{B}_{j}\). But that would force a linear dependence on \(B_{j}\). This proves (iii) and the assertion about \(\hat{M}\). The \(\hat{\cal L}\) assertion is similarly proved. \(\Box\)
With this definition, we have the following lemmas.
**Lemma 2.6**: _Let \({\cal K}\) be a Lie subalgebra of \({\cal G}\) and \(N\subseteq V\) a \({\cal K}\)-module. Then (i) \(\hat{\cal K}\) is a graded Lie subalgebra of \({\cal G}\), and \(dim_{\mathbb{C}}(\hat{\cal K})=dim_{\mathbb{C}}({\cal K})\), (ii) \(\hat{N}\subseteq V\) is a \(\hat{\cal K}\)-module with \(dim_{\mathbb{C}}\hat{N}=dim_{\mathbb{C}}N\)._
**Proof**: Let us first prove that \(\hat{\cal K}\) is a Lie subalgebra. For that, it is adequate to show that \([\mathfrak{k}_{1},\mathfrak{k}_{2}]\in\hat{\cal K}\) for any leading terms \(\mathfrak{k}_{1},\mathfrak{k}_{2}\in\hat{\cal K}\). Given such elements, there are elements \(\mathfrak{k}^{\prime}_{1},\mathfrak{k}^{\prime}_{2}\in{\cal K}\) such that \(\mathfrak{k}_{i}=\hat{\mathfrak{k}^{\prime}_{i}}\). Let the degrees of the leading terms be \(d_{1}\) and \(d_{2}\). Consider the element \(\mathfrak{k}^{\prime}=[\mathfrak{k}^{\prime}_{1},\mathfrak{k}^{\prime}_{2}] \in{\cal K}\). Note that \(\mathfrak{k}^{\prime}(t)=[\mathfrak{k}^{\prime}_{1}(t),\mathfrak{k}^{\prime}_ {2}(t)]\), and hence, by Lemma 2.4, either (a) the leading term of \(\mathfrak{k}^{\prime}\) is of degree \(d_{1}+d_{2}\), in which case, we have \(\mathfrak{k}=[\mathfrak{k}_{1},\mathfrak{k}_{2}]\) is the leading term of \(\mathfrak{k}^{\prime}\), and so is an element of \(\hat{\cal K}\), or (b), the leading term is of a higher degree, in which case \([\mathfrak{k}_{1},\mathfrak{k}_{2}]=0\). This proves that \(\hat{\cal K}\) is a Lie subalgebra. That \(\hat{\cal K}\) is graded is clear since it is generated by leading terms, which are homogeneous. Finally, \(dim_{\mathbb{C}}(\hat{\cal K})=dim_{\mathbb{C}}({\cal K})\) follows from lemma 2.5.
Let us now prove that \(\hat{N}\) is a \(\hat{\cal K}\)-module. For that, take any leading term \(\mathfrak{k}\in\hat{\cal K}\) and \(n\in\hat{N}\). Let \(\mathfrak{k}^{\prime}\in{\cal K}\) and \(n^{\prime}\in N\) be such that \(\hat{\mathfrak{k}}^{\prime}=\mathfrak{k}\) and \(\hat{n^{\prime}}=n\). We see that \((\mathfrak{k}^{\prime}n^{\prime})(t)=\mathfrak{k}^{\prime}(t)n^{\prime}(t)\), whence again, either \(\mathfrak{k}n=0\) or it equals \(\widehat{\mathfrak{k}^{\prime}n^{\prime}}\in\hat{N}\). Again, by lemma 2.5, \(dim_{\mathbb{C}}(N)=dim_{\mathbb{C}}(\hat{N})\). This proves the lemma. \(\square\)
**Lemma 2.7**: _Let \(v\) be an arbitrary element of \(V\) and \(\mathfrak{g}\in{\cal G}\) is such that \(\mathfrak{g}\cdot v=0\). Then \(\hat{\mathfrak{g}}\cdot\hat{v}=0\). In other words \((\widehat{{\cal G}_{v}})\subseteq{\cal G}_{\hat{v}}\)._
**Proof**: Suppose that \(v\) is of degree \(a\) and \(v_{a}=\hat{v}\). Similarly, suppose that \(\mathfrak{g}\in{\cal G}_{v}\), the stabilizer of \(v\), is of degree \(b\) and \(\mathfrak{g}_{b}=\hat{\mathfrak{g}}\). Then \(\mathfrak{g}v=0\) implies \(\mathfrak{g}(t)v(t)=0\) as well. This implies that the terms of every degree in the product are zero, in particular that of degree \(a+b\), viz. \(\mathfrak{g}_{b}v_{a}=0\), and thus \(\hat{\mathfrak{g}}\in{\cal G}_{\hat{v}}\). This proves the assertion. \(\square\)
**Remark 2.8**: _Let \(Mod_{\cal K}(V)\) (resp. \(Mod_{\hat{\cal K}}(V)\)) be the collection of \({\cal K}\) (resp. \(\hat{\cal K}\)) submodules of \(V\). Then Lemmas 2.6 and 2.7 set up a map \(Mod_{\cal K}(V)\stackrel{\lambda}{\rightarrow}Mod_{\hat{\cal K}}(V)\), where the \({\cal K}\)-module \(N\) goes to the \(\hat{\cal K}\)-module \(\hat{N}\), which is of the same dimension. Moreover, if \(N\) has a \({\cal K}\)-fixed point then \(\hat{N}\) has a \(\hat{\cal K}\)-fixed point._
We illustrate Lemma 2.6 with two examples. The first example illustrates the computation of the leading term algebra \(\hat{\cal K}\) from \({\cal K}\), and the dependence of \(\hat{\cal K}\) on the _alignment_ of \({\cal K}\) with respect to the 1-parameter subgroup.
**Example 2.9**: _Let us consider \(G=GL_{4}(\mathbb{C})\) and the 1-parameter subgroup \(\lambda(t)\) below. The action of \(\lambda\) on a typical element \(M\in{\cal G}\) is given below, where each \(M_{ij}\) is a \(2\times 2\)-matrix. Note that the degrees which occur are \(-1,0\) and \(1\), thus we have the spaces \({\cal G}_{-1},{\cal G}_{0}\) and \({\cal G}_{1}\), of matrices with leading terms of degree \(-1,0\) and \(1\) respectively._
\[\lambda(t)=\left[\begin{array}{cccc}1&0&0&0\\ 0&1&0&0\\ 0&0&t&0\\ 0&0&0&t\end{array}\right],\lambda(t)M\lambda(t)^{-1}=\left[\begin{array}{ cc}M_{11}&t^{-1}M_{12}\\ tM_{21}&M_{22}\end{array}\right]\]
_Let \({\cal K}\) be as shown below and let us construct \(\hat{\cal K}\). Let \({\cal K}_{i}={\cal G}_{j\geq i}\cap{\cal K}\). We then have \(dim({\cal K}_{-1})=4,dim({\cal K}_{0})=4\) and \(dim({\cal K}_{1})=0\). Thus, we get a basis \(B=B_{0}\) of \({\cal K}\) of
dimension \(4\). Moreover, since each element of \(B_{0}\) is homogeneous, we have \(\hat{B_{0}}=B_{0}\) and \(\hat{\mathcal{K}}=\mathcal{K}\)._
\[\mathcal{K}=\left[\begin{array}{cccc}a&b&0&0\\ c&d&0&0\\ 0&0&a&b\\ 0&0&c&d\end{array}\right]\ \ \lambda(t)=\left[\begin{array}{cccc}1&0&0&0\\ 0&1&0&0\\ 0&0&t&0\\ 0&0&0&t\end{array}\right]\]
_The leading term Lie algebra \(\hat{\mathcal{K}}\) of \(\mathcal{K}\) depends intimately on \(\lambda(t)\) and may change dramatically under conjugation. Let \(A\) be as shown below and \(\mathcal{K}^{\prime}=A\mathcal{K}A^{-1}\). This gives us \(\mathcal{K}^{\prime}\) as shown below. Let us compute \(\hat{\mathcal{K}}^{\prime}\)._
\[A=\left[\begin{array}{cccc}1&0&0&1\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{array}\right]\ \ \mathcal{K}^{\prime}=\left[\begin{array}{cccc}a&b&c&d-a \\ c&d&0&-c\\ 0&0&a&b\\ 0&0&c&d\end{array}\right]\]
_As before, let \(\mathcal{K}^{\prime}_{i}=\mathcal{K}^{\prime}\cap\mathcal{G}_{j\geq i}\) and note that \(dim(\mathcal{K}^{\prime}_{-1})=4\). Now \(\mathcal{K}^{\prime}_{0}\) consists of all elements \(\mathfrak{k}\in\mathcal{K}^{\prime}\) which have no term of degree \(-1\). This forces \(d=a\) and \(c=0\), making \(dim(\mathcal{K}^{\prime}_{0})=2\). Finally, \(dim(\mathcal{K}^{\prime}_{1})=0\). Thus \(\hat{\mathcal{K}}^{\prime}\) is what is given below, with \(r,s,t,u\in\mathbb{C}\). Note that while \(\hat{\mathcal{K}}\) is reductive, \(\hat{\mathcal{K}}^{\prime}\) is a solvable Lie algebra._
\[\hat{\mathcal{K}}^{\prime}=\left[\begin{array}{cccc}u&t&s&r\\ 0&u&0&-s\\ 0&0&u&t\\ 0&0&0&u\end{array}\right]\]
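The bookkeeping in this example is easy to mechanize. The following sketch (using sympy; the matrices \({\cal K}^{\prime}\) and \(\lambda(t)\) are the ones displayed above) conjugates a generic element of \({\cal K}^{\prime}\) by \(\lambda(t)\), reads off the graded pieces, and prints the spaces of leading terms in degrees \(-1\) and \(0\); together these span exactly the solvable algebra \(\hat{\cal K}^{\prime}\) displayed above.

```python
# A sketch (sympy) of the leading-term computation in Example 2.9.
import sympy as sp

a, b, c, d, t = sp.symbols('a b c d t')

K_prime = sp.Matrix([[a, b, c, d - a],
                     [c, d, 0, -c],
                     [0, 0, a, b],
                     [0, 0, c, d]])        # K' = A K A^{-1}
lam = sp.diag(1, 1, t, t)                  # the 1-PS lambda(t)

conj = (lam * K_prime * lam.inv()).expand()   # graded pieces sit at t^{-1}, t^0, t^1

def component(M, j):
    """Degree-j component: the coefficient of t**j in each entry."""
    return M.applyfunc(lambda e: e.coeff(t, j))

# Leading terms of degree -1: the degree -1 part of a generic element
# (two parameters, c and d - a).
print(component(conj, -1))
# Leading terms of degree 0: the degree 0 part of the elements with no
# degree -1 part, i.e. those with c = 0 and d = a (two more parameters, a and b).
print(component(conj, 0).subs({c: 0, d: a}))
```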
**Example 2.10**: _This example illustrates the consequences of Lemma 2.7. Consider a set of indeterminates \(\{x,y,z\}\) and \(X=\mathbb{C}\cdot\{x,y,z\}\). Let \(GL_{3}\) act on \(X\) in the natural way, and let \(V=Sym^{4}(X)\). Consider \(f=(x^{2}+y^{2}+z^{2})^{2}\in V\). The stabilizer algebra \(\mathcal{G}_{f}\) is \(3\)-dimensional and is given below. Consider next the 1-PS \(\lambda(t)\subseteq GL(X)\) given by \(\lambda(x)=x,\lambda(y)=y\) and \(\lambda(z)=tz\), as shown below. The leading term \(g=\hat{f}\) of \(\lambda(t)f=(x^{2}+y^{2}+t^{2}z^{2})^{2}\) is \((x^{2}+y^{2})^{2}\). The stabilizer \(\mathcal{G}_{g}\) is \(4\)-dimensional and is shown below (with \(a,b,c,d\in\mathbb{C}\)). Note that the last column of \(\mathcal{G}_{g}\) is given by the operator \(\nabla=(bx+cy+dz)\frac{\partial}{\partial z}\) which certainly stabilizes \(g\)._
\[\mathcal{G}_{f}=\left[\begin{array}{ccc}0&a&b\\ -a&0&c\\ -b&-c&0\end{array}\right]\ \ \lambda(t)=\left[\begin{array}{ccc}1&0&0\\ 0&1&0\\ 0&0&t\end{array}\right]\ \ \mathcal{G}_{g}=\left[\begin{array}{ccc}0&a&b\\ -a&0&c\\ 0&0&d\end{array}\right]\]
_Let us compute \(\widehat{\mathcal{G}_{f}}\) as leading terms of \(\mathcal{G}_{f}(t)\) below:_
\[\lambda(t)\mathcal{G}_{f}\lambda(t)^{-1}=\left[\begin{array}{ccc}0&a&t^{-1 }b\\ -a&0&t^{-1}c\\ -tb&-tc&0\end{array}\right]\]
_Reasoning as we did in the previous example, \(\widehat{\mathcal{G}_{f}}\) is the \(3\)-dimensional Lie algebra of
matrices (with \(a,b,c\in\mathbb{C}\)) shown below._
\[\left[\begin{array}{ccc}0&a&b\\ -a&0&c\\ 0&0&0\end{array}\right]\subseteq\mathcal{G}_{g}\]
_Thus the "leading term" operation applied to \(\mathcal{G}_{f}\), the stabilizer of \(f\), inserts \(\widehat{\mathcal{G}_{f}}\) into the stabilizer \(\mathcal{G}_{g}\) of the limit \(g\)._
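The containment \(\widehat{\mathcal{G}_{f}}\subseteq\mathcal{G}_{g}\) in this example can be checked by a direct computation with polynomial derivations. Below is a small sketch (sympy), with a matrix \(\mathfrak{g}=(g_{ij})\) acting on forms as \(\sum_{i,j}g_{ij}x_{i}\partial/\partial x_{j}\), the convention used for \(\nabla\) above.

```python
# A sketch (sympy) for Example 2.10: G_f annihilates f, and the leading
# terms of G_f land inside the stabilizer of the limit g.
import sympy as sp

x, y, z = sp.symbols('x y z')
a, b, c = sp.symbols('a b c')
V = [x, y, z]

def act(M, p):
    """Lie algebra action of M = (m_ij) on a form p: sum m_ij * x_i * dp/dx_j."""
    return sp.expand(sum(M[i, j] * V[i] * sp.diff(p, V[j])
                         for i in range(3) for j in range(3)))

f = (x**2 + y**2 + z**2)**2
g = (x**2 + y**2)**2                                       # limit of f under lambda(z) = t z

G_f     = sp.Matrix([[0, a, b], [-a, 0, c], [-b, -c, 0]])  # stabilizer algebra of f
G_f_hat = sp.Matrix([[0, a, b], [-a, 0, c], [ 0,  0, 0]])  # its leading-term algebra

print(act(G_f, f))       # 0: every element of G_f annihilates f
print(act(G_f_hat, g))   # 0: the leading terms annihilate the limit g
```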
**Proposition 2.11**: _Recall that \(K=G_{y}\) and \(H=G_{z}\) are stabilizer groups of \(y\) and \(z\) resp. and \(\mathcal{K}=\mathcal{G}_{y}\) and \(\mathcal{H}=\mathcal{G}_{z}\), their stabilizer Lie algebras._
1. _Let_ \(T_{z}O(z)=\mathcal{G}\cdot z\)_, be the tangent space of the orbit_ \(O(z)=G\cdot z\) _at the point_ \(z\)_. Then_ \(V/(T_{z}O(z))\) _is an_ \(\mathcal{H}\)_-module. We call this the_ \(\star\)_-action._
2. _Let_ \(\mathcal{H}_{\overline{y_{e}}}\) _be the stabilizer of_ \(\overline{y_{e}}\in V/T_{z}(O(z))\) _for the above action. Then the subalgebra_ \(\hat{\mathcal{K}}\subseteq\mathcal{H}_{\overline{y_{e}}}\)_, the stabilizer of the image of the tangent of approach in_ \(V/T_{z}(O(z))\)_._
**Proof**: Since \(\mathcal{H}=\mathcal{G}_{z}\), it fixes the tangent space \(T_{z}(O(z))\). The space \(V/T_{z}(O(z))\) is a quotient of \(\mathcal{H}\)-modules and hence is itself an \(\mathcal{H}\)-module. Thus (1) is clear.
Next, let \(\mathfrak{k}\in\mathcal{K}\) be arbitrary. Then \(\mathfrak{k}y=0\) implies \(\mathfrak{k}(t)y(t)=0\). If \(\mathfrak{k}(t)=\mathfrak{k}_{a}t^{a}+\mathfrak{k}_{a+1}t^{a+1}+\ldots\) and \(y(t)=y_{d}t^{d}+y_{e}t^{e}+\ldots\), then we have:
\[(\mathfrak{k}_{a}t^{a}+\mathfrak{k}_{a+1}t^{a+1}+\ldots)(y_{d}t^{d}+y_{e}t^{ e}+\ldots)=0\]
Examining the terms of degree \(a+d,a+d+1,\ldots,a+e-1,a+e\) in this product, we have:
\[\mathfrak{k}_{a+i}y_{d}=0\quad\text{ for }i=0,\ldots,e-d-1\]
\[\mathfrak{k}_{a}y_{e}+\mathfrak{k}_{a+e-d}y_{d}=0\]
This tells us that \(\mathfrak{k}_{a+i}\in\mathcal{H}\) for \(i=0,\ldots,e-d-1\). Since \(\mathfrak{k}_{a}\in\mathcal{H}\) and \(\mathfrak{k}_{a}y_{e}\in\mathcal{G}y_{d}\), we have \(\mathfrak{k}_{a}\cdot\overline{y_{e}}=0\) and thus \(\mathfrak{k}_{a}=\hat{\mathfrak{k}}\in\mathcal{H}_{\overline{y_{e}}}\). Since elements of the type \(\hat{\mathfrak{k}}\), with \(\mathfrak{k}\in\mathcal{K}\) generate \(\hat{\mathcal{K}}\), we have proved (2). \(\square\)
**Example 2.12**: _We continue with example 2.10 to illustrate the proposition just proved. The limit of \((x^{2}+y^{2}+z^{2})^{2}\) under the given 1-PS is \(g=(x^{2}+y^{2})^{2}\) and the tangent of approach is \(y_{e}=(x^{2}+y^{2})z^{2}\). The tangent space to the orbit of \(g\) contains the form \((x^{2}+y^{2})(2bxz+2cyz)\) for arbitrary \(b,c\in\mathbb{C}\). This can be seen by applying the following differential operator to \((x^{2}+y^{2})^{2}\)._
\[\left[\begin{array}{ccc}0&0&0\\ 0&0&0\\ b&c&0\end{array}\right]\]
_A generic element of \(\widehat{\mathcal{G}_{f}}\) corresponds to the differential operator_
\[ay\frac{\partial}{\partial x}-ax\frac{\partial}{\partial y}+bx\frac{\partial} {\partial z}+cy\frac{\partial}{\partial z}.\]
_Applying this to \(y_{e}\) gives us \((x^{2}+y^{2})(2bxz+2cyz)\). Now this is zero in \(V/T_{g}(O(g))\), since \((x^{2}+y^{2})(2bxz+2cyz)\in T_{g}(O(g))\) as we have just seen. So \(\overline{y_{e}}\) is stabilized by \(\widehat{\mathcal{G}_{f}}\). In fact, the full stabilizer of \(\overline{y_{e}}\) is \(\widehat{\mathcal{G}_{f}}\)._
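The membership \(\widehat{\mathcal{G}_{f}}\cdot y_{e}\subseteq T_{g}(O(g))\) asserted here can likewise be verified symbolically; the matrix \(E\) below is the element of \(\mathcal{G}\) displayed above, and the identity printed at the end shows that \(\widehat{\mathcal{G}_{f}}\) moves \(y_{e}\) only along the tangent space, so its image \(\overline{y_{e}}\) is fixed. This is a sketch; the derivation convention is the same as in the previous snippet.

```python
# A sketch (sympy) for Example 2.12: the leading-term algebra moves the
# tangent of approach y_e only inside the tangent space T_g O(g).
import sympy as sp

x, y, z = sp.symbols('x y z')
a, b, c = sp.symbols('a b c')
V = [x, y, z]

def act(M, p):
    return sp.expand(sum(M[i, j] * V[i] * sp.diff(p, V[j])
                         for i in range(3) for j in range(3)))

g   = (x**2 + y**2)**2          # the limit
y_e = (x**2 + y**2) * z**2      # the tangent of approach

G_f_hat = sp.Matrix([[0, a, b], [-a, 0, c], [0, 0, 0]])
E       = sp.Matrix([[0, 0, 0], [0, 0, 0], [b, c, 0]])   # the displayed element of G

w = act(G_f_hat, y_e)           # = (x^2 + y^2)(2bxz + 2cyz)
v = act(E, g)                   # an element of T_g O(g) = G.g
print(sp.simplify(2*w - v))     # 0: so G_f_hat . y_e lies in T_g O(g)
```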
**Remark 2.13**: _We use the notation of the motivating example - \(y\in V\) picks up \(z\) as the limit of a 1-PS \(\lambda\). Let \(N\) be a complement to \(T_{z}O(z)\) within \(V\). We denote by \(\overline{N}\), the \(H\)-module (and therefore \(\mathcal{H}\)-module) \(V/T_{z}O(z)\)._
### Alignment
In this subsection, we look at elements which are in both \(\mathcal{H}\) and \(\mathcal{K}\) or in \(\mathcal{H}\) and a conjugate of \(\mathcal{K}\). Such common elements will indicate common nested subspaces in the two stabilizers. We define:
**Definition 2.14**: _For \(y,z\) and \(\lambda\) as above, a semisimple element \(\mathfrak{k}\in\mathcal{H}\cap\mathcal{K}^{g}\), for some \(g\in G\), is called an alignment._
Recall that the center \(Z=\{tI|t\in\mathbb{C}^{*}\}\) acts non-trivially on \(V\). Whence, for our 1-PS \(\lambda(t)\), there is a 1-PS \(\lambda^{\prime}(t)=t^{a}\lambda(t)\) such that:
\[\lambda^{\prime}(t)y=y_{d}+y_{e}t^{e-d}+\ldots+y_{D}t^{D-d}\]
with \(y_{d}=z\) as before. Thus \(z\) is stabilized by \(\lambda^{\prime}(t)\) and thus \(\lambda^{\prime}(t)\subseteq H\).
We begin with a few definitions.
**Definition 2.15**: _Define \(\overline{\ell}\in\mathcal{G}\) as the element such that \(t^{\overline{\ell}}=\lambda^{\prime}(t)\)._
Note that \(\overline{\ell}\cdot v_{i}=(i-d)v_{i}\) for all \(v_{i}\in V_{i}\).
**Definition 2.16**: _Let \(P(\lambda)\) be defined as below:_
\[P(\lambda)=\{g\in G|\lim_{t\to 0}\lambda(t)g\lambda(t)^{-1}\text{ exists}\}\]
_and let \(U(\lambda)\) be its unipotent radical. Let \(L(\lambda)\) be elements of \(G\) which commute with \(\lambda(t)\). Then \(L(\lambda)\) is also a specially identified reductive complement to \(U(\lambda)\). Let \(\mathcal{P}(\lambda),\mathcal{L}(\lambda)\) and \(\mathcal{U}(\lambda)\) be the Lie algebras of \(P(\lambda),L(\lambda)\) and \(U(\lambda)\). 2_
Footnote 2: Note that \(P(\lambda)=P(\lambda^{\prime})\) and so on.
Note that \(\mathcal{P}(\lambda)=\oplus_{a\geq 0}\mathcal{G}_{a}\), \(\mathcal{L}(\lambda)=\mathcal{G}_{0}\) and \(\mathcal{U}(\lambda)=\oplus_{a>0}\mathcal{G}_{a}\); see [Kem78, Section 2].
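For instance, for the 1-PS \(\lambda(t)=diag(1,1,t,t)\) of Example 2.9, writing elements of \(gl_{4}\) in \(2\times 2\) blocks as before, these subalgebras are

\[{\cal P}(\lambda)=\left[\begin{array}{cc}*&0\\ *&*\end{array}\right],\quad{\cal L}(\lambda)=\left[\begin{array}{cc}*&0\\ 0&*\end{array}\right],\quad{\cal U}(\lambda)=\left[\begin{array}{cc}0&0\\ *&0\end{array}\right],\]

since the block \(M_{12}\) sits in degree \(-1\) and the block \(M_{21}\) in degree \(1\).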
We now examine further the connection between \(K\) and \(H\) through \(\mathcal{K}\) and \(\hat{\mathcal{K}}\). More specifically, we examine which elements of \(K\) (or its conjugate) descend into \(H\).
**Lemma 2.17**: _With \(y,z,\lambda\) as above, (i) if \(R\subseteq K\) commutes with \(\lambda\), then \(R\subseteq H\). Furthermore, \(Lie(R)\subseteq\hat{\mathcal{K}}\subseteq\mathcal{H}_{\overline{y_{e}}} \subseteq\mathcal{H}\), (ii) if \(\mathfrak{g}\in\mathcal{L}(\lambda)\cap\mathcal{K}\) then \(\mathfrak{g}\in\hat{\mathcal{K}}\subseteq\mathcal{H}_{\overline{y_{e}}} \subseteq\mathcal{H}\)._
**Proof**: If \(\sigma\in K\) commutes with \(\lambda\), then we have \(\sigma\lambda(t)y=\lambda(t)\sigma y=\lambda(t)y\). Whence \(\sigma\) stabilizes each degree component of \(\lambda(t)y\), and therefore \(z\) and \(y_{e}\) as well. Part (i) follows. For (ii) note that, by its very definition, every element of \({\cal L}(\lambda)\) commutes with \(\lambda\). \(\Box\)
**Proposition 2.18**: _Let \(y,z,\lambda\) be as above. Suppose that \(\mathfrak{k}\in{\cal P}(\lambda)\cap{\cal K}\) is a semi-simple element, then there is a unipotent element \(u\in U(\lambda)\) such that:_
1. \(\mathfrak{k}^{u}=u\mathfrak{k}u^{-1}\in{\cal L}(\lambda)\)_, i.e., it commutes with_ \(\overline{\ell}\) _and_ \(\lambda\)_, and it stabilizes_ \(y^{u}=u\cdot y\)_._
2. _Moreover_ \(z\) _is the leading term of_ \(y^{u}\)_, i.e.,_ \[\lambda(t)y^{u}=t^{d}z+\mbox{ higher degree terms} \tag{2}\] _Thus,_ \(z=\hat{y^{u}}\) _and_ \(\mathfrak{k}^{u}\in{\cal H}\cap{\cal G}_{y^{u}}\)_._
_In other words, \(\mathfrak{k}^{u}\) is an alignment between \(z\) and \(y^{u}\)._
**Proof**: Note that \(P(\lambda)=L(\lambda)U(\lambda)=U(\lambda)L(\lambda)\) is a Levi factorization, with \(L(\lambda)\) as a reductive complement. Since \(\mathfrak{k}\) is a semisimple element of \({\cal P}(\lambda)\) there is a maximal torus \(T^{\prime}\) of \({\cal P}(\lambda)\) containing \(\mathfrak{k}\). Since maximal tori in \(P(\lambda)\) are \(U(\lambda)\)-conjugate, \(T^{\prime}\) is conjugate to the maximal torus in \(L(\lambda)\). So there is a \(u\in U(\lambda)\) so that \(\mathfrak{k}^{u}=u\mathfrak{k}u^{-1}\in{\cal L}(\lambda)\). Moreover, it is straightforward that \(\mathfrak{k}^{u}\) stabilizes \(y^{u}=u\cdot y\). This proves (1).
Suppose \(y\) has the weight decomposition:
\[y=z+y_{e}+w \tag{3}\]
where \(y_{e}\in V_{e}\) and \(w\in\oplus_{i>e}V_{i}\). On applying \(u\) to Eq. 3 we have:
\[u\cdot y=u\cdot z+u\cdot y_{e}+uw\]
Since \(u\) is unipotent, for all \(j\), we have \(u\cdot(\oplus_{i\geq j}V_{i})\subseteq\oplus_{i\geq j}V_{i}\). Thus we have the graded expression:
\[u\cdot y=z+y^{\prime}_{d+1}+w^{\prime}\]
with \(y^{\prime}_{d+1}\in V_{d+1}\) and \(w^{\prime}\in\oplus_{i>d+1}V_{i}\). Thus, \(z\) continues to be the leading term of \(y^{u}\). In other words, we have:
\[\lambda(t)(y^{u})=t^{d}z+t^{d+1}y^{\prime}_{d+1}+\mbox{higher degree terms}\]
Now \(\mathfrak{k}^{u}\) commutes with \(\lambda\), \(\mathfrak{k}^{u}\) stabilizes \(y^{u}\) and \(z=\hat{y^{u}}\) with stabilizer \({\cal H}\). It follows from the previous lemma that \(\mathfrak{k}^{u}\in\widehat{\mathcal{G}_{y^{u}}}\subseteq{\cal H}\). Thus \(\mathfrak{k}^{u}\) is the required alignment. \(\Box\)
Thus, conjugates of semisimple elements \(\mathfrak{k}\in{\cal K}\cap{\cal P}(\lambda)\) are also elements of \({\cal H}\). Note that \(O(y^{u})=O(y)\). The next proposition handles the general case about the intersection \({\cal P}(\lambda)\cap{\cal K}\).
**Remark 2.19**: _Let \({\cal G}_{-}=\oplus_{i<0}{\cal G}_{i}\) be the complement of \({\cal P}(\lambda)\) and \(\Pi_{-}:{\cal G}\to{\cal G}_{-}\) be the projection. The condition that \(\hat{{\cal K}}\cap{\cal P}(\lambda)\neq 0\) is tantamount to saying that \(dim(\Pi_{-}({\cal K}))<dim({\cal K})\), i.e., \(\lambda\) is not generically placed with respect to \({\cal K}\)._
**Proposition 2.20**: _Let \(y,z,\lambda\) and \(\overline{\ell}\) be as above. Then at least one of the following holds:_
**(A)**: _Let_ \({\cal K}^{\prime}=\hat{\cal K}\oplus\mathbb{C}\overline{\ell}\)_, then_ \({\cal K}^{\prime}\) _is a Lie algebra of rank 1, i.e., the dimension of any maximal torus in_ \({\cal K}^{\prime}\) _is_ \(1\)_._ _or_
**(B)**: _there is a unipotent element_ \(u\in U(\lambda)\) _and a semisimple element_ \(\mathfrak{k}\in{\cal K}^{u}\) _such that_ \(\mathfrak{k}\in{\cal H}\)_. In other words, there is an alignment between_ \(z\) _and_ \(y^{u}\)_._
**Proof**: Note that \({\cal K}^{\prime}=\hat{\cal K}\oplus\mathbb{C}\overline{\ell}\) is indeed a Lie algebra since \([\overline{\ell},\hat{\cal K}]\subseteq\hat{\cal K}\).
**Case 1**: Suppose that \(dim(\Pi_{-}({\cal K}))=dim({\cal K})\). If this happens, then \(\hat{\cal K}\subseteq{\cal G}_{-}\) and \(\hat{\cal K}\) is nilpotent. Even more, \({\cal K}^{\prime}=\hat{\cal K}\oplus\mathbb{C}\overline{\ell}\) is a Levi decomposition of \({\cal K}^{\prime}\). Thus (A) holds.
**Case 2**: On the other hand, if \(dim(\Pi_{-}({\cal K}))<dim({\cal K})\), then \({\cal K}\cap{\cal P}(\lambda)\neq 0\).
**2a**: There is a semisimple element \(\mathfrak{k}\in{\cal P}(\lambda)\cap{\cal K}\). Then, by Prop. 2.18, there is indeed a \(u\in U(\lambda)\) such that \(\mathfrak{k}^{u}\in{\cal H}\), and (B) holds.
**2b**: We are left with the case that there are no semisimple elements in \({\cal P}(\lambda)\cap{\cal K}\). Suppose now that the rank of \(\hat{\cal K}\oplus\mathbb{C}\overline{\ell}\) is greater than \(1\). Then, there is an element \(\mathfrak{k}\in{\cal K}\) such that \(\hat{\mathfrak{k}}\) is semisimple and \([\overline{\ell},\hat{\mathfrak{k}}]=0\). This implies that the leading term of \(\mathfrak{k}\) is of degree zero and therefore \(\mathfrak{k}\in{\cal P}(\lambda)\cap{\cal K}\). Thus \(\mathfrak{k}=\hat{\mathfrak{k}}+\mathfrak{k}_{+}\), where \(\mathfrak{k}_{+}\in{\cal U}(\lambda)\).
Note that \({\cal P}(\lambda)\cap{\cal K}\) is the Lie algebra of an algebraic subgroup of \(GL(X)\). So the Jordan-Chevalley decomposition holds for all elements \(\mathfrak{k}\in{\cal P}(\lambda)\cap{\cal K}\), see [Bor91, Chapter 1, Section 4]. Hence, \(\mathfrak{k}\) may be written uniquely as a sum of a semisimple element \(\mathfrak{k}_{s}\) and a nilpotent element \(\mathfrak{k}_{n}\), \(\mathfrak{k}=\mathfrak{k}_{s}+\mathfrak{k}_{n}\), with \([\mathfrak{k}_{s},\mathfrak{k}_{n}]=0\), and \(\mathfrak{k}_{s},\mathfrak{k}_{n}\in{\cal P}(\lambda)\cap{\cal K}\). Since there are no semisimple elements in \({\cal P}(\lambda)\cap{\cal K}\), we must have \(\mathfrak{k}=\mathfrak{k}_{n}\). Since \(\mathfrak{k}\) is now a nilpotent matrix, we have \(\mathfrak{k}^{k}=0\) for some \(k>0\). This implies that \(\hat{\mathfrak{k}}^{k}=0\) as well. That contradicts the assumption that \(\hat{\mathfrak{k}}\) is semisimple.
This proves the proposition. \(\Box\)
### Alignment: The boundary of the general determinant \(det_{n}\) and \(det_{3}\)
In this and the next section we study forms of interest in GCT and ask whether the results developed connect with existing work. We show that this is the case when there is alignment. We separately address the case when there is no alignment.
Let \(X=(X_{ij})\) be an \(n\times n\)-matrix of indeterminates and \(V=Sym^{n}(X^{*})\). Consider \(y=det_{n}(X)\) with stabilizer \(K_{n}\subseteq GL_{n^{2}}\). It is well known that, since the stabilizer of \(det_{n}\) is reductive, \(\overline{O(y)}-O(y)=\cup_{i}W_{i}\) is a union of closed \(G\)-varieties of co-dimension 1; see for example [BLMW11, 4.2].
We then have the corollary:
**Corollary 2.21**: _Suppose that \(W_{i}=\overline{O(Q_{i})}\), for a form \(Q_{i}\in\overline{O(det_{n})}\) obtained as a limit of a 1-PS \(\lambda_{i}\), then either the stabilizer \({\cal H}_{i}\) of \(Q_{i}\) is of rank \(1\), or there is an alignment between \(Q_{i}\) and a conjugate of \(det_{n}\)._
**Proof**: Let \({\cal H}_{i}\) be the stabilizer of \(Q_{i}\). Then, since \(dim({\cal H}_{i})=dim({\cal K}_{n})+1\), we have \({\cal H}_{i}={\cal K}^{\prime}=\hat{\cal K}_{n}\oplus{\mathbb{C}}\bar{\ell}\). By Prop. 2.20, either \({\cal H}_{i}\) must be of rank \(1\) or there must be a semisimple \(\mathfrak{k}\in{\cal K}_{n}\) whose conjugate \(\mathfrak{k}^{u}\in{\cal H}_{i}\). This proves the assertion. \(\square\)
**Corollary 2.22**: _Suppose that there is indeed an alignment. Then \({\cal R}=({\cal H}_{i})_{0}\cap{\cal K}_{n}^{u}\neq 0\), and thus there is a common subgroup \(R\subseteq H_{i}\cap K_{n}^{u}\) of at least rank \(1\) which stabilizes both \(Q\) as well as \((det_{n})^{u}\)._
**Remark 2.23**: _The above result provides a recipe for constructing and testing possible boundary forms \(Q\) which are aligned with \(det_{n}\). The steps are:_
1. _Pick a subgroup_ \(R\subseteq K_{n}\) _which decomposes_ \(X=\oplus_{i=1}^{r}X_{i}\)_._
2. _Pick a coarsening_ \(X=\oplus_{i=1}^{s}Y_{i}\) _of the partition above. Pick_ \(e_{1},\ldots,e_{s}\) _suitably and construct_ \(\lambda(t)\) _such that_ \(\lambda(t)(Y_{i})=t^{e_{i}}Y_{i}\)_._
3. _Compute the leading term_ \(Q=\widehat{det_{n}}^{\lambda}\)_. Compute the dimension of_ \({\cal G}_{Q}\)_. If this equals_ \(dim({\cal K}_{n})+1\)_, then_ \({\cal G}_{Q}=\hat{\cal K}_{n}\oplus{\mathbb{C}}\ell\) _and_ \(Q\) _is a boundary form._
We illustrate the above recipe for the \(3\times 3\)-determinant. These calculations are inspired by the work in [11].
Let \(X=\{x_{1},\ldots,x_{9}\}\) and \(V=Sym^{3}(X)\) be the space of homogeneous forms of degree \(3\) acted upon by \(GL(X)\). The \(3\times 3\)-determinant, \(det_{3}(X)\) is a special element as given below:
\[det_{3}(X)=det\left(\left[\begin{array}{ccc}x_{1}&x_{2}&x_{3}\\ x_{4}&x_{5}&x_{6}\\ x_{7}&x_{8}&x_{9}\end{array}\right]\right)\]
The stabilizer \(K_{3}\subseteq GL(X)\) of \(det_{3}(X)\) is given by transformations (i) \(X\to AXB^{-1}\), where \(A,B\in GL_{3}\) with \(det(AB^{-1})=1\), and (ii) \(X\to X^{T}\). The dimension of \(K_{3}\) is \(16\). Sitting within \(K_{3}\) are two groups \(R_{1}=\{X\to AXA^{-1}|A\in GL_{3}\}\) and \(R_{2}=\{X\to AXA^{T}|A\in SL_{3}\}\).
Huttenhain and Lairez [11] have proved that the boundary of the \(GL(9)\)-orbit of \(det_{3}\) has two irreducible components. Moreover, these are the orbit closures of two forms \(Q_{1}\) and \(Q_{2}\) given below.
\[Q_{1}(X) = det\left(\left[\begin{array}{ccc}x_{1}&x_{2}&x_{3}\\ x_{4}&x_{5}&x_{6}\\ x_{7}&x_{8}&-x_{5}-x_{1}\end{array}\right]\right)\] \[Q_{2}(X) = 2(x_{4}x_{1}^{2}+x_{5}x_{2}^{2}+x_{6}x_{3}^{2}+x_{7}x_{1}x_{2}+x_ {8}x_{2}x_{3}+x_{9}x_{1}x_{3})\]
It is easy to check that \(Q_{1}\) and \(Q_{2}\) arise from \(R_{1}\) and \(R_{2}\) using Remark 2.23.
**Proposition 2.24**: _With above notation, we have:_
1. _The space_ \(X\) _under_ \(R_{1}\) _decomposes as_ \(X=X_{0}\oplus\mathbb{C}I\)_, the space of trace-zero matrices, and multiples of the identity. Under_ \(R_{2}\)_, we have_ \(X=X_{a}\oplus X_{s}\)_, where_ \(X_{a}\) _is the space of antisymmetric matrices and_ \(X_{s}\)_, that of symmetric matrices._
2. _Let_ \(\lambda_{1}\) _be such that_ \(\lambda_{1}(x_{0})=x_{0}\) _for all_ \(x_{0}\in X_{0}\)_, while_ \(\lambda_{1}(I)=tI\)_. Similarly, let_ \(\lambda_{2}(x_{a})=x_{a}\) _and_ \(\lambda_{2}(x_{s})=tx_{s}\) _for all_ \(x_{a}\in X_{a}\) _and_ \(x_{s}\in X_{s}\)_. Then_ \(\widehat{det_{3}}^{\lambda_{1}}=Q_{1}\) _and_ \(\widehat{det_{3}}^{\lambda_{2}}=Q_{2}\) _up to conjugates._
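Both leading-term computations can be carried out mechanically. The sketch below (sympy) realizes \(\lambda_{1}\) and \(\lambda_{2}\) in coordinates adapted to the two decompositions of \(X\); the degree-\(0\) coefficient for \(\lambda_{1}\) is exactly \(Q_{1}\), and the degree-\(1\) coefficient for \(\lambda_{2}\) (the leading term, since the degree-\(0\) coefficient vanishes) agrees with \(Q_{2}\) after the invertible linear substitution written in the last lines.

```python
# A sketch (sympy) of the leading terms in Proposition 2.24.
import sympy as sp

x1, x2, x3, x4, x5, x6, x7, x8, x9, s, t = sp.symbols('x1:10 s t')

# lambda_1: the trace-zero part X_0 has weight 0, the line C.I has weight 1.
M1 = sp.Matrix([[x1 + t*s, x2, x3],
                [x4, x5 + t*s, x6],
                [x7, x8, -x1 - x5 + t*s]])
lead1 = sp.expand(M1.det()).coeff(t, 0)
Q1 = sp.Matrix([[x1, x2, x3], [x4, x5, x6], [x7, x8, -x5 - x1]]).det()
print(sp.simplify(lead1 - Q1))                   # 0: the leading term is Q_1

# lambda_2: the antisymmetric part has weight 0, the symmetric part weight 1.
p, q, r, s11, s12, s13, s22, s23, s33 = sp.symbols('p q r s11 s12 s13 s22 s23 s33')
A = sp.Matrix([[0, p, q], [-p, 0, r], [-q, -r, 0]])
S = sp.Matrix([[s11, s12, s13], [s12, s22, s23], [s13, s23, s33]])
det2 = sp.expand((A + t*S).det())
print(det2.coeff(t, 0))                          # 0: an antisymmetric 3x3 determinant
lead2 = det2.coeff(t, 1)                         # the leading term, of degree 1

Q2 = 2*(x4*x1**2 + x5*x2**2 + x6*x3**2 + x7*x1*x2 + x8*x2*x3 + x9*x1*x3)
change = {x1: r, x2: q, x3: p, x4: s11/2, x5: s22/2, x6: s33/2,
          x7: -s12, x8: -s23, x9: s13}
print(sp.simplify(lead2 - Q2.subs(change)))      # 0: lead2 is Q_2 up to conjugation
```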
Let us now describe the stabilizers of the forms \(Q_{1}\) and \(Q_{2}\) above for \(n=3\).
**Lemma 2.25**: _The stabilizer \(\mathcal{H}_{i}\) of \(Q_{i}\) within \(gl(X)\) has dimension 17. Moreover, it may be expressed as \(\mathcal{H}_{i}=\mathbb{C}\ell^{i}\oplus\widehat{\mathcal{K}}^{i}\), where \(t^{\ell^{i}}=\lambda_{i}(t)\) and \(\widehat{\mathcal{K}}^{i}\) is the leading term algebra of \(\mathcal{K}\) under \(\lambda_{i}(t)\). Moreover, \((\mathcal{H}_{i})_{\overline{y_{e}}}\), the stabilizer of the tangent of approach, equals \(\widehat{\mathcal{K}}^{i}\). Finally, \(\text{Lie}(R_{i})\subseteq\mathcal{H}_{i}\)._
The proofs of Proposition 2.24 and Lemma 2.25 are computations.
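For instance, the dimension claim for \(\mathcal{H}_{1}\) reduces to a rank computation: the stabilizer of \(Q_{1}\) within \(gl_{9}\) is the kernel of the linear map \(\mathfrak{g}\mapsto\mathfrak{g}\cdot Q_{1}\). A sketch of that computation (sympy; Lemma 2.25 predicts the printed nullity to be \(17=dim(\mathcal{K}_{3})+1\)):

```python
# A sketch (sympy) of the computation behind Lemma 2.25 for Q_1: the dimension
# of the stabilizer of Q_1 in gl_9 is the nullity of the map g |-> g.Q_1.
import sympy as sp
from itertools import combinations_with_replacement

xs = sp.symbols('x1:10')
Q1 = sp.Matrix([[xs[0], xs[1], xs[2]],
                [xs[3], xs[4], xs[5]],
                [xs[6], xs[7], -xs[4] - xs[0]]]).det()

# The 165 cubic monomials in 9 variables give coordinates on the space of cubic forms.
monomials = [sp.Mul(*m) for m in combinations_with_replacement(xs, 3)]

def coords(p):
    poly = sp.Poly(sp.expand(p), *xs)
    return [poly.coeff_monomial(m) for m in monomials]

# The elementary matrix E_ij acts on forms as the derivation x_i d/dx_j.
rows = [coords(xs[i] * sp.diff(Q1, xs[j])) for i in range(9) for j in range(9)]

rank = sp.Matrix(rows).rank()
print(81 - rank)    # the nullity; Lemma 2.25 predicts 17 = dim(K_3) + 1
```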
What if \(Q\) has no alignment? Is \(\lambda\) special in this case too? This is partly answered by the following proposition.
**Proposition 2.26**: _Let \(T\subseteq GL(X)\) be a maximal torus containing \(\lambda\). Let \(det_{n}=\sum_{\alpha}a_{\alpha}X^{\alpha}\) be the expression for the determinant in this basis. For any monomial index \(\alpha\), let \(\xi(\alpha)\) denote its \(T\)-weight. Define \(\Xi_{T}(det_{n})=\{\xi(\alpha)|a_{\alpha}\neq 0\}\) as the support of \(det_{n}\) for this \(T\). Similarly, define \(\Xi_{T}(Q)\) as the support of the leading term \(Q\). Then, in the absence of an alignment (i) the dimension of the \(\mathbb{R}\)-vector space formed by \(\Xi_{T}(det_{n})\) is \(n^{2}\) while that formed by \(\Xi_{T}(Q)\) is \(n^{2}-1\). Moreover, \(\langle\overline{\ell},\chi\rangle\geq 0\) for all \(\chi\in\Xi_{T}(det_{n})\) but \(\langle\overline{\ell},\beta\rangle=0\) for all \(\beta\in\Xi_{T}(Q)\)._
It is easy to see that if \(T\cap K_{n}\neq\{I\}\), the identity, then \(dim(\Xi_{T}(det_{n}))<n^{2}\). See Section 4.1 for details.
If a boundary form \(Q\) is obtained by a suitable \(\lambda\) then a conjugate \(\lambda^{\prime}\) is available in any maximal torus \(T\subseteq GL(X)\). Then, the requirement of Prop. 2.26 severely limits the space of suitable \(\lambda^{\prime}\) within this chosen maximal torus \(T\) to a finite and discrete set of possibilities.
### Alignment, weight spaces and the permanent vs. determinant case
Let us consider the case when \(X\cong\mathbb{C}^{r+s}\), \(W\cong\mathbb{C}^{r}\), and \(f\in Sym^{n}(X^{*})\) and \(g\in Sym^{n}(W^{*})\) are special forms with stabilizers \(GL(X)_{f}=K\) and \(GL(W)_{g}=H_{W}\). Let \(\phi:W\to X\) be an injective linear map such that the pullback of \(f\) equals \(g\), or in other words, \(g=f\circ\phi\). Let \(Y=\phi(W)\) and \(Z\) be a suitable complement of \(Y\subseteq X\). Let us also identify \(W\) with \(Y\) and therefore \(H_{W}\) with \(H_{Y}\). Then, we can construct a
\(\lambda(t)\subseteq GL(X)\) such that we have the weight space decomposition \(X=X_{0}\oplus X_{1}\) with \(X_{0}=Y\) and \(X_{1}=Z\). We then have:
\[\lambda(t^{-1})f=t^{0}f_{0}+\ldots+t^{m}f_{m}\]
with \(f_{0}\) as the leading term and \(f_{0}|_{Y}=g\). The group \(GL(X)_{f_{0}}=H\) has the following form:
\[H=\left\{\left[\begin{array}{cc}a&b\\ 0&d\end{array}\right]|a\in GL(Z),b\in Hom(Z,Y),d\in H_{Y}\right\}\]
Let us suppose that there is an alignment between \(f_{0}\) and \(f\), i.e., a semisimple element \(\mathfrak{k}\in\mathcal{H}\cap\mathcal{K}^{u}\) as in Prop. 2.20.
**Proposition 2.27**: _Suppose that \(\mathfrak{k}\) above has rational eigenvalues, then there is a \(\phi^{\prime}:W\to X\), a non-trivial 1-PS \(\mu_{X}\subseteq K\) and a 1-PS \(\mu_{W}\subseteq H_{W}\) such that:_
1. \(\mu_{X}\circ\phi^{\prime}=\phi^{\prime}\circ\mu_{W}\)_._
2. \(f\circ\phi^{\prime}=g\)_._
3. _If_ \(X=\oplus_{i}X_{i}\) _and_ \(W=\oplus W_{i}\) _is the weight space decomposition of_ \(X\) _and_ \(W\) _under_ \(\mu_{X}\) _and_ \(\mu_{W}\)_, respectively, then_ \(W_{i}=(\phi^{\prime})^{-1}(X_{i})\)_, and thus_ \(\phi^{\prime}(W_{i})\subseteq X_{i}\)_._
**Proof**: Let \(\mathfrak{k}\in H\cap\mathcal{K}^{u}\) be a semisimple alignment and let \(\mu_{X}(a)=a^{\mathfrak{k}}\subseteq GL(X)\). Then, since \(\mathfrak{k}\) commutes with \(\lambda\), the 1-PS \(\mu_{X}\) must preserve both \(Y\) and \(Z\). Let \(\mu_{Y}\) be the restriction of \(\mu\) to \(Y\) and define \(\mu_{W}=\phi^{-1}\circ\mu_{Y}\circ\phi\). Finally, let \(\phi^{\prime}=u\circ\phi\). It is easy to check (1) and (2). (3) follows from (1). \(\square\)
Let us apply this to the case where \(X\) is a vector space of dimension \(n^{2}\) with coordinate functions \(\mathcal{X}=(X_{ij})_{i,j=1,\ldots,n}\). Let \(V=Sym^{n}(X^{*})\) with \(f(X)=det_{n}(X)\). Let \(W\) be a space of dimension \(m^{2}+1\) with coordinate functions \(\mathcal{W}=(W_{ij})_{i,j=1,\ldots,m}\cup W_{nn}\). Let \(g_{m,n}=W_{nn}^{n-m}perm_{m}(W)\), the padded permanent.
Suppose we have a \(\phi:W\to X\) such that \(f\circ\phi=g\) and the corresponding \(\lambda\) and the partition \(X=Y\oplus Z\) with \(Y=\phi(W)\), such that:
\[\lambda(t)f=t^{0}f_{0}+t^{1}f_{1}+\ldots+t^{m}f_{m}\]
with \(f_{0}\circ\phi=g_{m,n}\). Prop. 2.27 allows us to connect the weight spaces of stabilizer elements of the padded permanent with those of the determinant.
Towards this, we define:
**Definition 2.28**: _Let \(A=(\{1,\ldots,m\}\times\{1,\ldots,m\})\cup(\{n\}\times\{n\})\) and \(B=\{1,\ldots,n\}\times\{1,\ldots,n\}\) be sets of array indices. For a subset \(R\subseteq A\), let \(W_{R}=\{w\in W|W_{i,j}(w)=0\text{ for all }(i,j)\not\in R\}\). Thus, \(W_{R}\) is the subspace of all vectors whose support is in the set \(R\). Similarly, for \(S\subseteq B\), we define \(X_{S}\). A rectangular partition \(\mathcal{R}=\{R_{1},\ldots,R_{r}\}\) of \(A\) is where each \(R\) is of the form \(I_{i}\times J_{j}\), where \((I_{i})\) and \((J_{j})\) are two partitions of the row set and, respectively, the column set of \(A\). Each rectangular partition \(\mathcal{R}\) gives us a decomposition of \(W=\oplus_{R\in\mathcal{R}}W_{R}\). Similarly, we define a rectangular partition \(\mathcal{S}=\{S_{1},\ldots,S_{s}\}\) of \(B\) and the partition \(X=\oplus_{S\in\mathcal{S}}X_{S}\)._
**Proposition 2.29**: _Let \(\lambda(t)\) be as above and suppose that there is a rational alignment between \(f=det_{n}\) and \(f_{0}=Y_{nn}^{n-m}perm_{m}(Y)\). Then there is:_
1. _a map_ \(\phi^{\prime\prime}:W\to X\) _such that_ \(g_{m,n}=det_{n}\circ\phi^{\prime\prime}\)__
2. _a rectangular partition_ \({\cal R}=\{R_{1},\ldots,R_{r}\}\) _of_ \(A\)_, and a rectangular partition_ \({\cal S}=\{S_{1},\ldots,S_{s}\}\) _of_ \(B\)_,_
3. _a correspondence_ \(\Phi\subseteq{\cal R}\times{\cal S}\)_, such that_ \(\phi^{\prime\prime}(W_{R_{i}})\subseteq\oplus_{S_{j}\in\Phi(R_{i})}X_{S_{j}}\)_._
**Proof**: By Prop. 2.27, we have the 1-PS \(\mu_{X}\) and \(\mu_{W}\) and a \(\phi^{\prime}:W\to X\) such that \(\mu_{W}\subseteq H_{W}\) and \(\mu_{X}\subseteq K\). However, \(\mu_{X}\) may not lie in the image of the standard torus \(D_{n}\times D_{n}\) within \(K_{n}\), the stabilizer of \(det_{n}\). We may, however, find a \(k\in K_{n}\) such that \(\mu_{X}^{k}\) does indeed lie within this torus. Define \(\phi^{\prime\prime}=k\circ\phi^{\prime}\). The 1-PS \(\mu_{X}^{k}\) gives us the rectangular partition \({\cal S}\) of \(B\) and the weight space decomposition of \(X\).
We also have the decomposition of \(W=\oplus_{i}W_{i}\) by the weights of \(\mu_{W}\). Now the connected part of \(H_{W}\), the stabilizer of the padded permanent, is a sub-torus of \(({\mathbb{C}}^{*})^{m}\times({\mathbb{C}}^{*})^{m}\times{\mathbb{C}}^{*}\), where the action of \((\overline{\alpha})\times(\overline{\beta})\times\gamma\) is given by \(W_{ij}\rightarrow\alpha_{i}\beta_{j}W_{ij}\) and \(W_{nn}\rightarrow\gamma W_{nn}\). Thus the 1-PS \(\mu_{W}\subseteq H_{W}\) indeed gives us a rectangular partition \({\cal R}\) of \(A\) and the corresponding weight space decomposition of \(W\). For a given weight, say \(d\), we define \(\Phi_{d}=\{(R,S)\in{\cal R}\times{\cal S}|wt_{\mu_{W}}(W_{R})=wt_{\mu_{X}}(X_{S})=d\}\) and \(\Phi=\cup_{d}\Phi_{d}\). The condition that \(\phi^{\prime\prime}W_{d}\subseteq X_{d}\) then implies (3). \(\Box\)
**Remark 2.30**: _The above proposition shows that a rational alignment \(\mathfrak{k}\) leads to specific information about the function \(\phi\) and the support within the matrix \(X\) for every coordinate \(w_{ij}\in W\)._
_In general, Prop. 2.27 also indicates the importance of weight-spaces for 1-PS within stabilizers of forms and the coupling achieved when there is alignment. This is true even when the leading term is not of degree \(0\). For example, in the case of \(det_{3}\), the form \(Q_{1}\) is a leading term of degree \(0\) and the alignment between the weight spaces is evident. This is seen even for \(Q_{2}\), which is not a leading term of degree \(0\)._
_For both, the permanent as well as the determinant, these rectangular spaces are also linear subspaces within their respective hypersurfaces. Such subspaces are of interest as the following proposition illustrates._
**Proposition 2.31**: _Let \(H_{m}\subseteq{\mathbb{C}}^{m^{2}}\) be the hypersurface of \(perm_{m}\). Suppose that there is a function \(k(m)\) and, for every \(m\), a point \(x_{m}\in H_{m}\) with the guarantee that the dimension of any linear subspace \(W\subseteq H_{m}\) containing \(x_{m}\) is bounded by \(k(m)\). If \(x_{nn}^{n-m}perm_{m}=\widehat{det_{n}}^{\lambda}\) for some \(\lambda(t)\subseteq GL_{n^{2}}({\mathbb{C}})\), then \(n>m^{2}-k(m)-1\)._
**Proof**: Let \(\lambda\) and \(Y\subseteq X={\mathbb{C}}^{n^{2}}\) be as above, so that a suitable conjugate \(det_{n}^{g}\) of \(det_{n}\), when restricted to \(Y\), gives us the padded permanent \(x_{nn}^{n-m}perm_{m}\). Let \(x_{m}\in Y\) be as above. Since \(det_{n}^{g}(x_{m})=x_{nn}^{n-m}perm_{m}(x_{m})=0\), there is a linear subspace \(Z\subseteq X\) containing \(x_{m}\) of dimension \(n^{2}-n\) such that \(det_{n}^{g}(z)=0\) for all \(z\in Z\). Let \(W=Z\cap Y\).
Then \(x_{nn}^{n-m}perm_{m}\) vanishes on \(W\). By the hypothesis, we have \(dim(W)\leq k(m)\). But we also have \(dim(W)\geq(n^{2}-n)+(m^{2}+1)-n^{2}\). Combining the two, we get:
\[m^{2}-1-n\leq k(m)\]
Thus \(n\geq m^{2}-k(m)-1\). \(\Box\)
**Remark 2.32**: _The presence of alignment and its use in proving lower bounds was explored in [17]. They propose a stronger form where, for every element (or a large fraction of the elements) \(h\in H_{Y}\), there is an element \(g\in K\) which preserves \(Y\) and matches \(h\). They obtain lower bounds, exponential in \(m\), for such \(\phi\). The rectangular partitions are visible in the implementation of \(perm_{3}\) via \(det_{7}\) (with \(x_{77}=1\)) given in [1], which is reproduced below:_
\[\left[\begin{array}{ccccccc}0&0&0&0&x_{33}&x_{32}&x_{31}\\ x_{11}&1&0&0&0&0&0\\ x_{12}&0&1&0&0&0&0\\ x_{13}&0&0&1&0&0&0\\ 0&x_{22}&x_{21}&0&1&0&0\\ 0&x_{23}&0&x_{21}&0&1&0\\ 0&0&x_{23}&x_{22}&0&0&1\end{array}\right]\]
_The partitions are, of course, \(I=\{1\}\{2\}\{3\}\{7\}\) and \(J=\{1,2,3\}\{7\}\) for the permanent and \(I=J=\{1\}\{2,3,4\}\{5,6,7\}\) for the determinant._
_The lower bound in Prop. 2.31 has already been shown by [17] using the Hessian at a generic point of the hypersurface of the determinant, and a special point \(x_{m}\in H_{m}\) as above. Our result requires us to compute \(k(m)\). It has been mentioned here for its connection with weight spaces of stabilizer elements._
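The identity underlying this implementation is easy to verify mechanically. A short sketch (sympy; \(M\) below is the \(7\times 7\) matrix displayed above, and the brute-force sum is \(perm_{3}\)):

```python
# A sketch (sympy) verifying the perm_3-via-det_7 implementation reproduced above.
import sympy as sp
from itertools import permutations

X = sp.Matrix(3, 3, sp.symbols('x11 x12 x13 x21 x22 x23 x31 x32 x33'))

M = sp.Matrix([
    [0,       0,       0,       0,       X[2, 2], X[2, 1], X[2, 0]],
    [X[0, 0], 1,       0,       0,       0,       0,       0      ],
    [X[0, 1], 0,       1,       0,       0,       0,       0      ],
    [X[0, 2], 0,       0,       1,       0,       0,       0      ],
    [0,       X[1, 1], X[1, 0], 0,       1,       0,       0      ],
    [0,       X[1, 2], 0,       X[1, 0], 0,       1,       0      ],
    [0,       0,       X[1, 2], X[1, 1], 0,       0,       1      ],
])

perm3 = sum(X[0, s[0]] * X[1, s[1]] * X[2, s[2]] for s in permutations(range(3)))

print(sp.expand(M.det() - perm3))   # 0: det_7 of M equals perm_3 of X
```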
Finally, what about the case when there is no alignment?
**Definition 2.33**: _Let \(Z_{0}=\{\widehat{det_{n}^{9}}^{\lambda}|g\in GL(X)\}\) be the collection of leading terms obtained by applying the special 1-PS \(\lambda\) to all elements of the \(GL(X)\)-orbit \(O(det_{n})\), and let \(\overline{Z_{0}}\) be its closure. We call these the_ **co-limits** _of \(f_{0}=g_{m,n}(Y)\)._
We then have:
**Proposition 2.34**: _For any semisimple element \(s\in K_{n}\), there is a \(\overline{u}\in\overline{U(\lambda)}\), the unipotent radical of \(H\), and a \(u\in U(\lambda)\) such that \(s^{u\overline{u}}\) stabilizes \(f_{0}^{\prime}=\widehat{det_{n}^{\overline{u}}}\in Z_{0}\). Moreover, there is an irreducible component \(Z^{i}\) of \(\overline{Z_{0}}\) containing both \(f_{0}\) and \(f_{0}^{\prime}\)._
The structure of \(Z_{0}\) is discussed in Section 4.2. We also conjecture (Conjecture 4.28) that the \(GL(Y)\)-stability of \(f_{0}\), the padded permanent, and the uniqueness of the form within \(Sym^{n}(Y^{*})\) for its stabilizer, point to a necessary alignment between \(f_{0}\) and \(det_{n}\).
### A computation
By Prop. 2.18, semisimple elements in \({\cal P}(\lambda)\cap{\cal K}\) lead us to the presence of their conjugates within \({\cal H}\). This offers a significant insight into the alignment between \({\cal K}\) and \({\cal H}\). But what if no such elements exist? If \({\cal T}\) is a maximal torus of \({\cal K}\), what if \({\cal T}\cap{\cal P}(\lambda)=0\)? The degree conditions of lemma 2.4 lead us to define the integers:
\[\delta({\mathfrak{g}},{\mathfrak{g}}^{\prime})=deg([{\mathfrak{g}},{ \mathfrak{g}}^{\prime}])-deg({\mathfrak{g}})-deg({\mathfrak{g}}^{\prime})\]
For any \({\mathfrak{g}},{\mathfrak{g}}^{\prime}\in{\cal G}\), we have \(\delta({\mathfrak{g}},{\mathfrak{g}}^{\prime})\geq 0\) and the dichotomy that either (i) \(\delta({\mathfrak{g}},{\mathfrak{g}}^{\prime})=0\), or (ii) \([{\hat{\mathfrak{g}}},{\hat{\mathfrak{g}}}^{\prime}]=0\). These lead to significant combinatorial constraints which we now illustrate.
Assume that there is a subalgebra \({\cal L}\subseteq{\cal K}\) where \({\cal L}\cong sl_{r}\) for some \(r>0\). This happens for example when \(y\) is the determinant polynomial. Let \({\cal T}_{r}\) be an identified maximal torus and suppose that \(deg({\mathfrak{t}})<0\) for all \({\mathfrak{t}}\in{\cal T}_{r}\). We can then find a basis \({\cal C}\) such that \(\hat{\cal C}\) generates \(\hat{\cal T}_{r}\subseteq{\cal G}_{-}\). Let \({\cal C}=\{K_{i}|i=1,\ldots,r-1\}\) be such a basis. Note that such bases exist in an open set of \({\cal T}_{r}^{r-1}\).
In terms of this basis we then have \({\cal X}=\{X_{ij}|1\leq i\neq j\leq r\}\subseteq{\cal L}\), the collection of root vectors. Together \({\cal C}\cup{\cal X}\) form a basis for \({\cal L}\). In terms of this basis, there are the standard Lie bracket relations, some of which are presented below:
\[\begin{array}{rcll}[K_{i},K_{j}]&=&0&(a)\\ [X_{ij},X_{kl}]&=&0\mbox{ when }j\neq k&(b)\\ [K_{i},X_{jk}]&=&c^{\prime}X_{jk}&(c)\\ [X_{ij},X_{jk}]&=&cX_{ik}\mbox{ when }i\neq k&(d)\\ [X_{ij},X_{ji}]&=&K_{ij}=\sum_{k=i}^{j-1}K_{k}\mbox{ when }i<j&(e)\end{array} \tag{4}\]
for some \(c\in\mathbb{R},c\neq 0\) and \(c^{\prime}\in\mathbb{R}\), possibly zero.
For all \(i,j\), let \(d_{ij}=deg(X_{ij})=deg(\hat{X}_{ij})\) and \(k_{ij}=deg(K_{ij})=deg(\hat{K}_{ij})\) and note that \(k_{ij}<0\).
Let us now analyse \(\hat{\cal L}_{\cal C}\subseteq\hat{\cal L}\), the subalgebra generated by the leading terms \(\hat{\cal X}\cup\hat{\cal C}\) of the chosen basis. Note that while \(\hat{\cal T}_{r}\subseteq\hat{\cal L}_{\cal C}\) irrespective of the basis \({\cal C}\), the algebra \(\hat{\cal L}_{\cal C}\) is determined by the choice of \({\cal C}\) and _need not equal_ \(\hat{\cal L}\).
Conditions \((a)\) and \((b)\) give us \([\hat{K}_{i},\hat{K}_{j}]=0\) and \([\hat{X}_{ij},\hat{X}_{kl}]=0\) when \(j\neq k\). Looking at (c), we have \([K_{i},X_{jk}]=c^{\prime}X_{jk}\); comparing degrees, the total degrees on the left and right do not match, since \(d_{jk}+deg(K_{i})\neq d_{jk}\) when \(deg(K_{i})<0\), and hence \([\hat{K}_{i},\hat{X}_{jk}]=0\). In other words, \([\hat{\cal T}_{r},\hat{\cal L}_{\cal C}]=0\) and \(\hat{\cal T}_{r}\) is in the center of \(\hat{\cal L}_{\cal C}\).
For any triple \(r,s,t\), with \(r\neq s\) and \(s\neq t\), let \(\delta_{rst}=d_{rt}-d_{rs}-d_{st}\), corresponding to the equation \([X_{rs},X_{st}]=cX_{rt}\) (with \(c\neq 0\)). We then have \(\delta_{rst}\geq 0\). Now consider a triple \(i,j,k\) of distinct numbers; we have:
\[\begin{array}{rcl}[X_{ij},X_{ji}]&=&K_{ik}\\ [X_{ij},X_{jk}]&=&X_{ik}\end{array}\]
This gives us the degree conditions:
\[\begin{array}{rcl}d_{ij}+d_{ji}+\delta_{iji}&=&k_{ik}\\ d_{ij}+d_{jk}+\delta_{ijk}&=&d_{ik}\end{array}\]
Eliminating \(d_{ij}\) and rearranging, we get:
\[\begin{array}{rcl}d_{ji}+d_{ik}-d_{jk}+\delta_{iji}&=&k_{ik}+\delta_{ijk} \end{array}\]
In other words:
\[\delta_{jik}+\delta_{iji}=k_{ik}+\delta_{ijk}\]
Since \(k_{ik}<0\), this forces the condition \(\delta_{ijk}>0\), or in other words \([\hat{X}_{ij},\hat{X}_{jk}]=0\)! What about \([\hat{X}_{ij},\hat{X}_{ji}]\)? Since \([X_{ij},X_{ji}]=K_{ij}\), we either have \([\hat{X}_{ij},\hat{X}_{ji}]=0\) or \([\hat{X}_{ij},\hat{X}_{ji}]=\hat{K}_{ij}\). Thus for all \(i,j,k,l\), we have \([\hat{X}_{ij},\hat{X}_{kl}]\in\mathbb{C}\cdot\hat{\mathcal{C}}\). Thus \(\hat{\mathcal{L}}_{\mathcal{C}}/\hat{\mathcal{T}}_{r}\) is abelian. So \(\hat{\mathcal{L}}_{\mathcal{C}}\) is an abelian extension of \(\hat{\mathcal{T}}_{r}\).
## 3 The normal cone
In this section, we examine the role of \(y_{e}\), the tangent of approach to the point \(z\) and its orbit \(O(z)\), and define suitable \(G\)-varieties \(W\supseteq\overline{O(z)}\) which allow approaching \(z\) along the tangent \(y_{e}\) while staying within \(W\). This analysis considers a more general 1-parameter family (1-PF) \(\gamma(t)\subseteq G\) for taking limits. This allows us to handle elements \(z\in\overline{O(y)}\) which do not arise as leading terms under a 1-PS.
### Generalities
**Definition 3.1**: \(\mathbb{C}[[t]]\) _will denote the ring of formal power series, and \(\mathbb{C}((t))\) the ring of Laurent series, the quotient field of \(\mathbb{C}[[t]]\). A 1-parameter family (or simply 1-PF) \(\gamma\) is a family of group elements \(\gamma(t)=(g_{ij}(t))\), where each \(g_{ij}(t)\in\mathbb{C}((t))\)._
**Remark 3.2**: _By Theorem 1.4 [Kem78], 1-PF above are adequate to detect closure in the Zariski topology. In other words, if \(z\in\overline{O(y)}\), the Zariski closure, then there is a 1-PF \(\gamma(t)\), where \(\gamma(t)\) is a matrix in \(G\) with power series entries, such that_
\[y(t)=\gamma(t)y=y_{0}+\sum_{i\geq 1}y_{i}t^{i}\]
_with \(z=y_{0}\)._
**Lemma 3.3**: _Let \(\gamma\) be a 1-PF and let \(v\in V\) and \(v\neq 0\). Let \(v(t)=\rho(\gamma(t))(v)\), then there is an \(a\in\mathbb{Z}\) such that:_
\[v(t)=\sum_{i\geq a}v_{i}t^{i}\]
_with \(v_{i}\in V\) for all \(i\) and \(v_{a}\neq 0\). Similarly, for \(\mathfrak{g}\in\mathcal{G},\mathfrak{g}\neq 0\) and \(\mathfrak{g}(t)=adj(\gamma(t))(\mathfrak{g})\), then there is a \(b\in\mathbb{Z}\) such that:_
\[\mathfrak{g}(t)=\sum_{i\geq b}\mathfrak{g}_{i}t^{i}\]
_with \(\mathfrak{g}_{i}\in\mathcal{G}\) for all \(i\) and \(\mathfrak{g}_{b}\neq 0\)._
**Remark 3.4**: _Note that, unlike the case of a 1-PS, we may not have a weight decomposition of the ambient space \(V\), nor of the Lie algebra \(\mathcal{G}\)._
**Proof**: Suppose that \(dim(V)=s\). From the rationality of the representation \(\rho\), and for a chosen basis \(\mathcal{B}\) of \(V\), \(\rho(\gamma(t))(\mathcal{B})=A(t)\mathcal{B}\), where \(\mathcal{B}\) is \(1\times s\) and \(A(t)\) is an \(s\times s\)-matrix with entries in \(\mathbb{C}((t))\). If \(v=c\mathcal{B}\), then \(\gamma(t)\cdot v=v(t)=cA(t)\mathcal{B}\). Thus, there is a row-vector \(c(t)\) such that \(v(t)=c(t)\mathcal{B}\). Unpacking \(c(t)\) by degrees gives us \(v(t)=\sum_{i\geq a}(c_{i}\mathcal{B})t^{i}\), where \(c_{i}\in\mathbb{C}^{1\times s}\) are row vectors. Using these as the coefficients for \(\mathcal{B}\) proves that \(v(t)=\sum_{i\geq a}v_{i}t^{i}\).
Coming to the second statement, let \(\mathcal{B}=\{\mathfrak{g}_{1},\ldots,\mathfrak{g}_{r}\}\) now be a basis for \(\mathcal{G}\). Each matrix \(\mathfrak{g}_{i}\in\mathcal{G}\subseteq gl_{n}\) may be expressed as a column vector \(\mathfrak{c}_{i}\in\mathbb{C}^{n^{2}}\). By the same calculation as above, \(\gamma(t)\cdot\mathfrak{g}=\mathfrak{g}(t)\) is a matrix in \(gl_{n}\) with entries in the Laurent series \(\mathbb{C}((t))\), and may therefore be expressed as a column vector \(\mathfrak{c}(t)\) as well. Let \(A(t)\) be the \(n^{2}\times(r+1)\)-matrix \([\mathfrak{c}_{1},\ldots,\mathfrak{c}_{r},\mathfrak{c}(t)]\). Clearly, since \(\gamma(t)\subseteq G\), we have \(\gamma(t)\cdot\mathfrak{g}\in\mathcal{G}\), and the rank of this matrix (with entries in \(\mathbb{C}((t))\)) is \(r\). Therefore \(\mathfrak{g}(t)\) is expressible as \(\sum_{i=1}^{r}a_{i}(t)\mathfrak{g}_{i}\), for some elements \(a_{i}(t)\in\mathbb{C}((t))\). Collecting terms of the same degree gives us the result. \(\square\)
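As a toy illustration of such expansions (with a 1-PF chosen ad hoc, not one arising in the sequel): take \(\gamma(t)=\left[\begin{array}{cc}t^{-1}&1\\ 0&1\end{array}\right]\) acting on \(V=\mathbb{C}^{2}\). Then

\[\gamma(t)e_{1}=t^{-1}e_{1}\quad\mbox{ and }\quad\gamma(t)e_{2}=e_{1}+e_{2},\]

so for \(v=e_{1}\) we get \(a=-1\) with \(v_{-1}=e_{1}\), while for \(v=e_{2}\) we get \(a=0\) with \(v_{0}=e_{1}+e_{2}\).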
**Definition 3.5**: _For an element \(\mathfrak{k}\in\mathcal{K}\), we define \(\mathfrak{k}(t)\) as the element \(\gamma(t)\cdot\mathfrak{k}\in\mathcal{G}\otimes\mathbb{C}((t))\) and \(\hat{\mathfrak{k}}\) as the leading term of \(\mathfrak{k}(t)\). The space \(\hat{\mathcal{K}}\) will denote the \(\mathbb{C}\)-space formed by all elements \(\{\hat{\mathfrak{k}}|\mathfrak{k}\in\mathcal{K}\}\). The space \(\mathcal{K}(t)\) will denote the \(\mathbb{C}((t))\)-space formed by the elements \(\{\mathfrak{k}(t)|\mathfrak{k}\in\mathcal{K}\}\). In other words, \(\mathcal{K}(t)=\mathcal{K}\otimes\mathbb{C}((t))\)._
Note that \(\mathcal{K}(t)\) is a Lie algebra over \(\mathbb{C}((t))\). We then have the following:
**Proposition 3.6**: _In the above notation, we have:_
1. _There is a basis_ \(\{\mathfrak{k}_{i}(t)\}_{i=1}^{r}\subseteq\mathcal{G}\otimes\mathbb{C}[[t]]\) _of_ \(\mathcal{K}(t)\) _such that (i)_ \(r=dim(\mathcal{K})\)_, (ii) if_ \(\mathcal{K}(0)\) _is the space formed by the elements_ \(\{\mathfrak{k}_{i}(0)\}_{i=1}^{r}\)_, then_ \(\mathcal{K}(0)\) _is a subalgebra of_ \(\mathcal{H}_{\overline{v_{e}}}\subseteq\mathcal{H}\) _of the same dimension._
2. \(\hat{\mathcal{K}}\subseteq\mathcal{K}(0)\)_._
The proof is a careful reworking of the proof of Lemma 2.5; for details see [1, Theorem 3.13], and we omit it here. Note that \(\hat{\mathcal{K}}\subsetneq\mathcal{K}(0)\) is eminently possible.
We continue with the notation \(\gamma(t)\subseteq G\) such that:
\[y(t)=\gamma(t)y=z+y_{e}t^{e}+\text{ higher degree terms}\]
where \(y_{0}=z\) and \(y_{e}\) is the tangent of approach. Note that for the 1-PS \(\lambda\) of earlier sections, we do have the 1-PS \(\lambda^{\prime}\) which is of the above format.
Recall that \(I_{y}\) (resp. \(I_{z}\)) are ideals in \(\mathbb{C}[V]\) for the varieties \(\overline{O(y)}\) (resp. \(\overline{O(z)}\)). The rings \(A_{y}\) (resp. \(A_{z}\)) are the corresponding coordinate rings, i.e., \(\mathbb{C}[V]/I_{y}\) (resp. \(\mathbb{C}[V]/I_{z}\)). Note that since \(\overline{O(y)}\) and \(\overline{O(z)}\) are cones, the ideals \(I_{y}\) and \(I_{z}\) are homogeneous.
We use this limiting 1-PF \(\gamma\) to construct a suitable set of derivations \(\mathcal{D}\) on the ideal \(I_{z}\). We then use this to define a \(G\)-invariant ideal \(\overline{J}\) in the associated graded ring of \(\mathbb{C}[V]\) with respect to \(I_{z}\).
We begin with basic results on graded rings.
**Definition 3.7**: _Let \(R=\oplus_{i\geq 0}R_{i}\) be the associated graded ring for the ideal \(I_{z}\subseteq\mathbb{C}[V]\). In other words, \(R_{i}=I_{z}^{i}/I_{z}^{i+1}\) (with \(I_{z}^{0}=\mathbb{C}[V]\)). For any homogeneous ideal \(I\), set \(I_{i}=I_{z}^{i}\cap I\) and \(\overline{I}_{i}=(I_{z}^{i+1}+I_{i})/I_{z}^{i+1}\). Let \(\overline{I}=\oplus_{i\geq 0}\overline{I}_{i}\) be the associated graded ideal of \(I\)._
Note that \((I_{i})\) is an \(I_{z}\)-stable filtration of the ideal \(I\) and that \(\overline{I}\) is an ideal within \(R\). The ring \(R\) is the ring of functions on the "normal cone" to the variety \(\overline{O(z)}\). We have
**Proposition 3.8**: _Using the above notation:_
1. _The ring_ \(\mathbb{C}[V]\) _is isomorphic to_ \(R\) _as_ \(G\)_-modules. For any_ \(G\)_-invariant ideal_ \(I\subseteq\mathbb{C}[V]\)_,_ \(I\) _and_ \(\overline{I}\) _are isomorphic as_ \(G\)_-modules, and so are_ \(\mathbb{C}[V]/I\) _and_ \(R/\overline{I}\)_._
2. _The ring_ \(A_{y}\) _is isomorphic to_ \(R_{y}=R/\overline{I_{y}}=\sum_{i\geq 0}R_{i}/\overline{(I_{y})}_{i}\) _as_ \(G\)_-modules. Moreover,_ \((R_{y})_{0}=A_{z}\)_._
3. _We have the exact sequence of ideals and rings (as well as_ \(G\)_-modules):_ \[0\rightarrow\overline{I_{z}}/\overline{I_{y}}\to R/\overline{I_{y}} \to R/\overline{I_{z}}\to 0\] _In this sequence_ \((R/\overline{I_{z}})_{i}=0\) _for all_ \(i>0\)_, and thus_ \((\overline{I_{z}}/\overline{I_{y}})_{i}\cong(R/\overline{I_{y}})_{i}\)_, for_ \(i>0\)_._
The proof is clear and is omitted.
### The tangent ideal \(\overline{J}\)
The main result of this subsection is an extension of the Peter-Weyl condition and is given below. It is based on the construction of the ideal \(\overline{J}\subseteq R\).
**Proposition 3.9**: _Let \(z\) be the leading term of \(y\) under the action of the 1-PF \(\gamma\) and let \(y_{e}\) be the tangent of approach. Let \(\mathcal{H}_{\overline{y_{e}}}\subseteq\mathcal{H}\) be the stabilizer of \(\overline{y_{e}}\in V/T_{z}O(z)\). Then for all \(i\geq 1\), the dual \((R_{i}/\overline{(I_{y})}_{i})^{*}\) is non-zero and has an \(\mathcal{H}_{\overline{y_{e}}}\)-fixed vector._
**Remark 3.10**: _For any subgroup \(L\subseteq G\), let \(Rep_{G}(L)\) be those \(G\)-modules \(W\) such that \(W^{*}\) has an \(L\)-fixed vector. The importance of this lies in the Peter-Weyl theorem on closed \(G\)-orbits \(O(v)\) with reductive stabilizer \(L=G_{v}\). In this case, we have the isomorphism of \(G\)-modules \(\mathbb{C}[O(v)]\cong\mathbb{C}[G]^{L}\cong\oplus_{W\in Rep_{G}(L)}n_{W}W\), where \(n_{W}\) is the dimension of the space of \(L\)-fixed vectors in \(W^{*}\). In the above notation, the kernel \(I_{z}/I_{y}\) of the surjection \(A_{y}\to A_{z}\) contains \(G\)-modules in the set \(Rep_{G}(\mathcal{H}_{\overline{y_{e}}})\). The other modules in \(A_{y}\) come from \(A_{z}\) and belong to \(Rep_{G}(\mathcal{H})\)._
The proof of the above proposition needs the construction of the ideal \(\overline{J}\), which we now proceed to do.
**Definition 3.11**: _Let \(w\in V\) be a point, \(T_{w}V\) be the tangent space at \(w\) and \(v\in T_{w}V\) be a tangent vector at \(w\). For an indeterminate \(\epsilon\), we consider the substitution and the expansion:_
\[f(w+\epsilon v)=f_{0}+f_{1}\epsilon+\ldots\]
_where \(f_{i}\in\mathbb{C}\). We define \(D_{w,v}:\mathbb{C}[V]\rightarrow\mathbb{C}\) as \(D_{w,v}(f)=f_{1}\)._
The functional \(D_{w,v}\) is called a derivation at \(w\). It is easy to check that for any \(f,f^{\prime}\in\mathbb{C}[V]\), we have \(D_{w,v}(ff^{\prime})=f(w)D_{w,v}(f^{\prime})+f^{\prime}(w)D_{w,v}(f)\).
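For instance (a one-line numerical check, not needed later): with \(V=\mathbb{C}\), \(w=1\), \(v=1\), \(f=x\) and \(f^{\prime}=x^{2}\), we have

\[(ff^{\prime})(1+\epsilon)=(1+\epsilon)^{3}=1+3\epsilon+\ldots,\]

so \(D_{1,1}(ff^{\prime})=3=f(1)D_{1,1}(f^{\prime})+f^{\prime}(1)D_{1,1}(f)=1\cdot 2+1\cdot 1\).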
For the \(\gamma(t)\subseteq G\) and \(g\in G\), we define \(\gamma^{g}(t)\) as the family \(g\gamma(t)\subseteq G\). Let \(y(t)=\gamma(t)y\) and \(y^{g}(t)=\gamma^{g}(t)y\). Then we have:
\[y^{g}(t)=gz+(gy_{e})t^{e}+\mbox{ higher degree terms}\]
Note that both \(y(t)\) and \(y^{g}(t)\) are elements of \(V\otimes\mathbb{C}[[t]]\), i.e., power series in \(t\) with coefficients in \(V\).
**Lemma 3.12**: _Given any polynomial \(f\in\mathbb{C}[V]\) and for any \(g\in G\), we make the substitution as below:_
\[f(y^{g}(t))=f_{0}+f_{1}t^{1}+\ldots\]
_Then \(f_{0}=f(gz)\), \(f_{1}=\cdots=f_{e-1}=0\) and \(f_{e}=D_{gz,gy_{e}}(f)\)._
The proof is a simple computation. This makes the substitution of \(y^{g}(t)\) within \(f\) behave as a path which evaluates the derivative of \(f\) at \(gz\) along the direction of the tangent of approach \(gy_{e}\).
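In the simplest case the lemma can be seen directly (our gloss, for \(f\) linear): if \(f=\xi\) is a linear functional then

\[\xi(y^{g}(t))=\xi(gz)+t^{e}\,\xi(gy_{e})+\mbox{ higher degree terms},\]

so that \(f_{0}=\xi(gz)\), \(f_{1}=\cdots=f_{e-1}=0\) and \(f_{e}=\xi(gy_{e})=D_{gz,gy_{e}}(\xi)\).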
**Definition 3.13**: _Let \(w\in\overline{O(z)}\), \(v\in T_{w}V\) be arbitrary. For a \(k\geq 1\), and an element \(f\in I_{z}^{k}\), upon substitution of \(w+\epsilon v\) in \(f\), we have:_
\[f(w+\epsilon v)=f_{0}+f_{1}\epsilon+\ldots\]
_Then it is clear that \(f_{0}=\cdots=f_{k-1}=0\). We define \(D_{w,v}^{k}(f)\) as \(f_{k}\), the coefficient of \(\epsilon^{k}\)._
\(D^{k}_{w,v}\) is well defined and \(D^{k}_{w,v}(f)=0\) for \(f\in I^{k+1}_{z}\), whence \(D^{k}_{w,v}\) is effectively a map \(D^{k}_{w,v}:I^{k}_{z}/I^{k+1}_{z}\to\mathbb{C}\).
**Proposition 3.14**: _For the notation as above, let \(f\in I^{k}_{z}\) and \(g\in G\), then \(f(y^{g}(t))\) is a power series in \(t\) of the form:_
\[f(y^{g}(t))=\alpha t^{ke}+\mbox{ higher terms in }t\]
_where \(\alpha=D^{k}_{gz,gy_{e}}(f)\)._
The proof is straightforward.
**Proposition 3.15**: _With the above notation, for any \(w\in\overline{O(z)},v\in T_{w}V\), we have:_
1. _For_ \(f,f^{\prime}\in I^{r}_{z}\) _and_ \(f^{\prime\prime}\in I^{s}_{z}\)_, we have:_ \[\begin{array}{rcl}D^{k}_{w,v}(f)&=&0\mbox{ for all }k<r\\ D^{r}_{w,v}(f+f^{\prime})&=&D^{r}_{w,v}(f)+D^{r}_{w,v}(f^{\prime})\\ D^{r+s}_{w,v}(ff^{\prime\prime})&=&D^{r}_{w,v}(f)D^{s}_{w,v}(f^{\prime\prime}) \end{array}\]
2. _For any_ \(g\in G\)_, and_ \(f\in I^{r}_{z}\)_, let_ \(f^{g}\) _denote the function_ \(f^{g}(x)=f(g^{-1}x)\)_. Then,_ \(f^{g}\in I^{r}_{z}\) _as well. Moreover, for any_ \(g^{\prime}\in G\)_, we have:_ \[D^{r}_{gz,gy_{e}}f^{g^{\prime}}=D^{r}_{(g^{\prime})^{-1}gz,(g^{\prime})^{-1} gy_{e}}(f)\]
**Proof**: The proof of (1) is straightforward. For (2), note that \(f^{g^{\prime}}(y^{g}(t))=f((g^{\prime})^{-1}y^{g}(t))=f(y^{(g^{\prime})^{-1}g }(t))\). This proves the proposition. \(\Box\)
**Lemma 3.16**: _For \(w\in O(z)\) and \(v\in T_{w}V\) let \(v^{\prime}\in T_{w}O(z)\), the tangent space of the orbit \(O(z)\) at \(w\). Then for an \(f\in I^{r}_{z}\) with \(r>0\), we have \(D^{r}_{w,v+v^{\prime}}(f)=D^{r}_{w,v}(f)\)._
**Proof**: Let \(h\in I_{z}\). Since \(v^{\prime}\in T_{w}O(z)\), the tangent space of the orbit \(O(z)\) at \(w\), and since \(h\) vanishes on \(O(z)\), it is easy to check that \(D_{w,v^{\prime}}(h)=0\), and hence \(D_{w,v+v^{\prime}}(h)=D_{w,v}(h)\). Writing \(f\in I_{z}^{r}\) as a sum of \(r\)-fold products of elements of \(I_{z}\) and using the additivity and multiplicativity in Proposition 3.15, we get \(D^{r}_{w,v+v^{\prime}}(f)=D^{r}_{w,v}(f)\). This proves the claim. \(\Box\)
**Remark 3.17**: _Proposition 3.15 tells us that since \(D^{k}_{w,v}\) vanishes on \(I^{k+1}_{z}\), it is effectively a functional on \(I^{k}_{z}/I^{k+1}_{z}\). The above lemma tells us that the functional depends only on the class \(\overline{v}\in\overline{N}=V/T_{w}O(z)\), and not on \(v\) itself._
We now use the tangent of approach \(y_{e}\) to define the modules \(\overline{J}_{k}\).
**Definition 3.18**: _For the above data, let \(\overline{J}_{k}\subseteq R_{k}\) be defined as follows:_
\[\overline{J}_{k}=\{\overline{f}\in I^{k}_{z}/I^{k+1}_{z}|D^{k}_{gz,gy_{e}}( \overline{f})=0\mbox{ for all }g\in G\}\]
**Proposition 3.19**: _With the above definition of \(\overline{J}_{k}\), we have:_
1. \(\overline{J}_{r}I_{z}^{s}\subseteq\overline{J}_{r+s}\) _and_
2. \((\overline{I}_{y})_{k}\subseteq\overline{J}_{k}\)_._
3. _Let_ \(\overline{J}_{z,y_{e}}\) _or simply_ \(\overline{J}=\oplus_{k\geq 1}\overline{J}_{k}\)_. Then_ \(\overline{J}\) _is a_ \(G\)_-invariant ideal of_ \(R\) _and_ \(\overline{I_{y}}\subseteq\overline{J}\subseteq\overline{I_{z}}=\oplus_{i\geq 1 }R_{i}\)_._
_Henceforth, we call \(\overline{J}=\overline{J}_{z,y_{e}}\) as the_ **tangent ideal** _for the data \((z,y_{e})\)._
**Proof**: For (1), we note that for any \(f\in\overline{J}_{r}\) and \(f^{\prime}\in I_{z}^{s}\), we have
\[D^{r+s}_{gz,gy_{e}}(ff^{\prime})=D^{r}_{gz,gy_{e}}(f)D^{s}_{gz,gy_{e}}(f^{ \prime})\]
Since the first term is zero, so is the product. For (2), let \(f^{\prime\prime}\in I_{y}\cap I_{z}^{k}\). Note that, by Prop. 3.14, \(D^{k}_{gz,gy_{e}}(f^{\prime\prime})\) is also obtained by evaluating \(f^{\prime\prime}\) on the path \(y^{g}(t)=\gamma^{g}(t)y\). Since \(y^{g}(t)\in O(y)\) for all \(t>0\), we have \(f^{\prime\prime}(y^{g}(t))=0\) for all \(t>0\). Passing to the limit, we must have \(D^{k}_{gz,gy_{e}}(f^{\prime\prime})=0\). This proves (2). For (3) above, (1) ensures that \((\overline{J}_{k})\) is \(I_{z}\)-stable. Its \(G\)-invariance follows from the fact that if for some \(f\in I_{z}^{k}\), \(D^{k}_{gz,gy_{e}}(f)\) vanishes for all \(g\), then, by Prop. 3.15 (2), so do \(D^{k}_{gz,gy_{e}}(f^{g^{\prime}})\), for any \(g^{\prime}\)-translate of \(f\). This proves the first part of (3). The second part follows from (2). This completes the proof of the proposition. \(\Box\)
**Remark 3.20**: _Thus, the variety \(\overline{W}\subseteq Spec(R)\) of \(\overline{J}\) is an infinitesimal \(G\)-invariant thickening of \(O(z)\) within the normal bundle, and in the direction \(y_{e}\) and its translates. Assuming that \(y_{e}\not\in T_{z}O(z)\), and in light of Remark 3.17, this thickening is in a direction transverse to the tangent space to the orbit at \(z\). A natural question is whether there is an ideal \(J^{\prime}\subseteq\mathbb{C}[V]\) such that its filtered version \(\overline{J^{\prime}}\) equals \(\overline{J}\)? In that case, the variety \(W^{\prime}\subseteq V\) of \(J^{\prime}\) would be a model for \(\overline{W}\) in the normal bundle. By Prop. 3.19, this model would be an intermediate variety and lie between \(\overline{O(y)}\) and \(\overline{O(z)}\). The construction of \(J^{\prime}\) and a recipe for computing its dimension are addressed in the next subsection._
We present three examples which illustrate that with \(y\stackrel{{\gamma}}{{\to}}z\), the ideal \(\overline{J}\) and the existence of an intermediate variety depend crucially on \(y_{e}\). Moreover, the construction allows for \(\overline{O(z)}\) to be singular within \(\overline{O(y)}\). Also see Ex. 4.14 which will be discussed later.
**Example 3.21**: _Let \(V=\mathbb{C}^{2}\), and \(z=[0,0]^{T}\) and \(y=[1,1]^{T}\). Let \(G\) be the parabolic group given below. \(V\) consists of \(3\) orbits, viz., \(O(z)\) of the sole point \(z\), \(O(y)\) of \(\mathbb{C}^{2}\) minus the \(X\)-axis, and \(O([1,0]^{T})\) of the \(X\)-axis minus \(z\). The stabilizer \(H\) of \(z\) is \(G\)._
_Consider the two families \(\gamma^{1}\) and \(\gamma^{2}\) below and the paths \(\gamma^{1}(t)y\) and \(\gamma^{2}(t)y\) which take \(y\) to \(z\) but with tangents \(y_{1}=[1,1]^{T}\) and \(y_{2}=[1,0]^{T}\)._
\[G=\left\{\left[\begin{array}{cc}a&b\\ 0&c\end{array}\right]|ac\neq 0\right\}\ \gamma^{1}(t)=\left[\begin{array}{cc}t&0 \\ 0&t\end{array}\right]\ \gamma^{2}(t)=\left[\begin{array}{cc}t&0\\ 0&t^{2}\end{array}\right]\]
_Let \(\overline{J}_{1}\) and \(\overline{J}_{2}\) be the tangent ideals for the data \((z,y_{1})\) and \((z,y_{2})\) respectively. Then \(\overline{J}_{1}\) consists of all forms \(f\) of degree \(k\) whose \(k\)-th derivatives vanish in the directions
\([1,1]^{T}\) and its \(H\)-orbit. Thus \((\overline{J_{1}})_{k}=(0)\) for all \(k\). Let \(J_{1}=(0)\subseteq\mathbb{C}[V]\), then \(J_{1}\) is an ideal and \(\overline{(J_{1})}=\overline{J}_{1}\) and thus \(W_{1}=\mathbb{C}^{2}\) is the required model. This does lie between \(\overline{O(z)}\) and \(\overline{O(y)}\) but is not strictly intermediate._
_On the other hand, since the orbit of \(y_{2}=[1,0]^{T}\) consists of its own multiples, \((\overline{J}_{2})_{k}\) is spanned by the forms \(\{y^{i}x^{k-i}|i>0\}\). Again, there is the ideal \(J_{2}=(y)\subseteq\mathbb{C}[V]\) such that \(\overline{(J_{2})}=\overline{J}_{2}\). The variety \(W_{2}=\overline{O([1,0]^{T})}\) is the required model. Note that in this case \(\overline{O(z)}\subsetneq W_{2}\subsetneq\overline{O(y)}\) and \(W_{2}\) is strictly intermediate._
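As a quick check in the second case (ours, using the coordinate functions \(x,y\) of the example): here \(O(z)=\{z\}\) and every \(gy_{2}\) is a multiple of \([1,0]^{T}\), so

\[y(z+\epsilon\,gy_{2})=0\quad\mbox{ for all }g\in G,\]

whence the class of the coordinate \(y\) lies in \((\overline{J}_{2})_{1}\), consistent with \(J_{2}=(y)\).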
**Example 3.22**: _Let us consider the group \(G=S^{1}\times\mathbb{R}^{*}\) acting on \(\mathbb{R}^{3}\) given by the matrix \(g(\theta,r)\) as:_
\[g(\theta,r)=\left[\begin{array}{ccc}r\cos\theta&r\sin\theta&0\\ -r\sin\theta&r\cos\theta&0\\ 0&0&r\end{array}\right]\]
_Consider the point \(y=[1,0,1]^{T}\) and its orbit \(O(y)=\{(r\cos\theta,r\sin\theta,r)|\theta\in[0,2\pi),r\neq 0\}\). The point \(z=[0,0,0]^{T}\) is in \(\overline{O(y)}\) via \(\lambda(t)=diag(t,t,t)\), with \(y_{e}=[1,0,1]^{T}\). Let us compute the tangent ideal \(\overline{J}\) for this data._
_Note that \(O(z)=\{z\}\), \(\overline{O(z)}=\{z\}\), \(G_{z}=G=H\) and \(I_{z}=(x,y,z)\). Let \(f_{1}=ax+by+cz\in I_{z}\), for some real \(a,b,c\), \(f_{2}=x^{2}\in I_{z}^{2}\) and \(f_{3}=x^{2}+y^{2}-z^{2}\in I_{z}^{2}\). We have:_
\[\begin{array}{rcl}D_{z,y_{e}}^{1}(f_{1})&=&a+c\\ D_{z,y_{e}}^{2}(f_{2})&=&1\\ D_{z,y_{e}}^{2}(f_{3})&=&0\end{array}\]
_Applying a general group element \(g(\theta,r)\) and evaluating \(D_{gz,gy_{e}}^{k}(f_{i})\) gives us that \(f_{1}\not\in\overline{J}_{1}\) for any non-zero tuple \((a,b,c)\). Indeed \(\overline{J}_{1}=(0)\). On the other hand, \(f_{2}\not\in\overline{J}_{2}\) but \(f_{3}\in\overline{J}_{2}\). In fact, for the ideal \(J^{\prime}=(x^{2}+y^{2}-z^{2})\), we see that \(\overline{J^{\prime}}=\overline{J}\). This gives \(W^{\prime}=\overline{O(y)}\) as the intermediate variety. Note that \(W^{\prime}\) has a singularity at \(z\)._
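Spelling out the verification for \(f_{3}\) at a general translate (our computation): since \(gz=z\) and \(g(\theta,r)y_{e}=(r\cos\theta,-r\sin\theta,r)^{T}\), we have

\[f_{3}(z+\epsilon\,g(\theta,r)y_{e})=\epsilon^{2}\left((r\cos\theta)^{2}+(r\sin\theta)^{2}-r^{2}\right)=0,\]

so \(D^{2}_{gz,gy_{e}}(f_{3})=0\) for every \(g\), i.e., \(f_{3}\in\overline{J}_{2}\) as claimed.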
We give another example of \(\overline{J}\) from the classical representation of matrices under the adjoint action.
**Example 3.23**: _Let us consider \(V=\mathbb{C}^{3\times 3}\) with the coordinate functions \(X=(X_{ij})\) in \((\mathbb{C}^{3\times 3})^{*}\). Consider the action of \(GL_{3}(\mathbb{C})\) on \(\mathbb{C}^{3\times 3}\) by conjugation. Thus, for a matrix \(A\) and \(g\in GL_{3}(\mathbb{C})\), the action of \(g\) on \(A\) is given by \(A\to gAg^{-1}\). Consider the matrix \(C\) and the family \(\lambda(t)\) below:_
\[C=\left[\begin{array}{ccc}1&0&1\\ c_{21}&1&0\\ c_{31}&c_{32}&1\end{array}\right]\qquad\lambda(t)=\left[\begin{array}{ccc}1& 0&0\\ 0&t&0\\ 0&0&t^{2}\end{array}\right]\]
_The entries \(c_{ij}\in\mathbb{C}\) are chosen such that the matrix \(C\) has distinct eigenvalues. Let \(N_{1}\), \(N_{2}\) and \(I\) be the matrices below:_
\[N_{1}=\left[\begin{array}{ccc}0&0&1\\ 0&0&0\\ 0&0&0\end{array}\right]\quad N_{2}=\left[\begin{array}{ccc}0&1&0\\ 0&0&1\\ 0&0&0\end{array}\right]\quad I=\left[\begin{array}{ccc}1&0&0\\ 0&1&0\\ 0&0&1\end{array}\right]\]
_We see that:_
\[\lambda(t)\cdot C=t^{-2}N_{1}+I+\mbox{ terms with higher degree in }t\]
_Thus, in the notation of this section, we have \(y=C,z=N_{1}\) and \(y_{e}=I\). The ideal of orbit closure \(\overline{O(N_{1})}\) is given by the equation \(X^{2}=0\). Moreover, \(\overline{O(C)}\) is given by \(O(C)\cup O(N_{2})\cup O(N_{1})\cup{\bf 0}\), where \({\bf 0}\) is the zero matrix. The action of the family \(\gamma(t)=t^{2}\lambda(t)\) is given as:_
\[\gamma(t)y=N_{1}+t^{2}I+\mbox{ terms with higher degree in }t\]
_Let us evaluate \(D_{z,y_{e}}(X^{2})\) and \(D_{z,y_{e}}(X^{3})\). Since \(X^{2}(N_{1})=X^{3}(N_{1})=0\) these are elements of \(I_{z}\), and we have:_
\[\begin{array}{rcl}D_{z,y_{e}}(X^{2})&=&2N_{1}\neq 0\\ D_{z,y_{e}}(X^{3})&=&N_{1}^{2}I+N_{1}IN_{1}+IN_{1}^{2}=0\end{array}\]
_Thus \(X^{3}\in\overline{J}_{1}\). This points to the variety \(W^{1}=\overline{O(N_{2})}\) as the possible intermediate variety. Note that \(\overline{O(z)}\subsetneq W^{1}\subsetneq\overline{O(y)}\)._
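The two evaluations above can also be read off from the binomial expansions (our rewriting, using \(N_{1}^{2}=0\) and the fact that \(I\) is central):

\[(N_{1}+\epsilon I)^{2}=2\epsilon N_{1}+\epsilon^{2}I,\qquad(N_{1}+\epsilon I)^{3}=3\epsilon^{2}N_{1}+\epsilon^{3}I,\]

so the coefficient of \(\epsilon\) is \(2N_{1}\) for \(X^{2}\) and \(0\) for \(X^{3}\), as recorded.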
We conclude this section with a proof of Proposition 3.9.
**Proposition 3.24**: _In the notation of Prop. 3.8, for each \(i\geq 1\), \(R_{i}/\overline{(I_{y})}_{i}\neq 0\)._
**Proof**: Recall that \(R_{y}=\oplus_{i\geq 0}R_{i}/\overline{(I_{y})}_{i}\) is isomorphic to \(A_{y}\) as a \(G\)-module. Since \(I_{y}\subset I_{z}\), the \(0\)-th term, \((R_{y})_{0}=\mathbb{C}[V]/I_{z}\), is precisely \(A_{z}\). Since \(\overline{(I_{y})}_{i}\subseteq\overline{J}_{i}\) (Prop. 3.19), it suffices to show that \(R_{i}/\overline{J}_{i}\) is non-zero. Now, by the non-singularity of the point \(z\in\overline{O(z)}\subseteq V\), there is an \(f\in I_{z}\) such that \(D_{z,y_{e}}(f)\neq 0\). Then \(f^{i}\in I_{z}^{i}\) and \(D^{i}_{z,y_{e}}(f^{i})=(D_{z,y_{e}}(f))^{i}\neq 0\), so the class of \(f^{i}\) in \(R_{i}\) does not lie in \(\overline{J}_{i}\). \(\Box\)
**Proof of Proposition 3.9**. In view of the above proposition, what remains to be shown is the existence of a non-zero functional on \(R_{i}/\overline{(I_{y})}_{i}\) which is \({\cal H}_{\overline{y_{e}}}\)-invariant. This is of course \(D^{i}_{z,y_{e}}:R_{i}/\overline{(I_{y})}_{i}\rightarrow\mathbb{C}\). Indeed, for any \(\mathfrak{h}\in{\cal H}_{\overline{y_{e}}}\) we have \(\mathfrak{h}y_{e}\in T_{z}O(z)\), and therefore \(D^{i}_{z,\mathfrak{h}y_{e}}(f)=0\) for any \(f\in I^{i}_{z}\); whence \(\mathfrak{h}D^{i}_{z,y_{e}}=D^{i}_{z,y_{e}}\). This proves the invariance of \(D^{i}_{z,y_{e}}\) and the proposition. \(\Box\)
### The structure of the tangent ideal
This section analyses the tangent ideal \(\overline{J}_{z,y_{e}}\) (or simply \(\overline{J}\)) and the question of its dimension.
Let \(B\) be the affine variety \(\overline{O(z)}\times V\) and \(\mathbb{C}[B]\) its coordinate ring. Let \(B^{\prime}=\{(w,v)\in B\;|\;w\in O(z)\}\) be the corresponding open subset of \(B\). For any \((w,v)\in B\), let us define the map \(e(w,v):R\rightarrow\mathbb{C}\) as follows. For \(\overline{f}=\sum_{i=0}^{k}\overline{f}_{i}\in R\), where \(\overline{f}_{i}\in R_{i}\), we define:
\[e(w,v)(\overline{f})=\sum_{i=0}^{k}D^{i}_{w,v}\overline{f}_{i}\]
Thus \(e(w,v)\) treats \(v\) as a member of the tangent space \(T_{w}V\). We now list certain properties of \(E=\{e(w,v)|w\in\overline{O(z)},v\in T_{w}V\}\).
**Proposition 3.25**: _For \(B\) as above, we have:_
1. _For any_ \((w,v)\in B^{\prime}\)_, the map_ \(e_{w,v}\) _is a non-trivial algebra homomorphism. Hence, the kernel_ \(M_{w,v}\subseteq R\) _is a maximal ideal of_ \(R\)_. Thus, we have a map_ \(\phi:B^{\prime}\to MaxSpec(R)\)_._
2. _For any point_ \(w\in O(z)\)_, if_ \(v^{\prime}\in T_{w}O(z)\) _then_ \(\phi(w,v)=\phi(w,v+v^{\prime})\)_. Thus, the fiber of the map_ \(\phi\) _at any point in the_ \(Im(\phi)\) _is a linear space of dimension_ \(dim(G)-dim(H)\)_._
3. _Let_ \(B_{J}=\{(gz,gy_{e}+v^{\prime})|g\in G\) _and_ \(v^{\prime}\in T_{gz}O(z)\}\)_. Then we have:_ \[\overline{J}=\cap_{(w,v)\in B_{J}}M_{w,v}\]
**Proof**: That \(e(w,v)\) is an algebra homomorphism follows from Prop. 3.15. It is then clear that its kernel must be a maximal ideal of \(R\). Hence (1) is clear. The second assertion follows from Lemma 3.16, which says that \(D^{k}_{w,v+v^{\prime}}=D^{k}_{w,v}\) when \(v^{\prime}\in T_{w}O(z)\). The fiber is clearly \(dim(T_{w}O(z))\) which is \(dim(G)-dim(H)\).
Finally, coming to (3), it is clear that \(\overline{J}\subseteq\cap_{(w,v)\in B_{J}}M_{w,v}\). In the other direction, note that \(\lambda(t)\in H\) but \(\lambda(t)y_{e}=t^{e-d}y_{e}\). Thus, not only is \((z,y_{e})\in B_{J}\), but so is \((z,\alpha y_{e})\in B_{J}\), for any \(\alpha\in\mathbb{C}^{*}\). In general, for any \(\alpha\in\mathbb{C}^{*}\), and \((w,v)\in B_{J}\) we have \((w,\alpha v)\in B_{J}\) as well. Now:
\[e(w,\alpha v)(\overline{f})=\sum_{i}\alpha^{i}e(w,v)(\overline{f}_{i})\]
Whence, if \(e(w,\alpha v)(\overline{f})=0\) for all \(\alpha\in\mathbb{C}^{*}\), then each coefficient \(e(w,v)(\overline{f}_{i})=0\) as well. Thus for any \(\overline{f}\in\cap_{(w,v)\in B_{J}}M_{w,v}\), we have \(\overline{f}_{i}\in\cap_{(w,v)\in B_{J}}M_{w,v}\) as well. But that is equivalent to the requirement that \(\overline{f}_{i}\in\overline{J}_{i}\). Thus \(\cap_{(w,v)\in B_{J}}M_{w,v}\subseteq\overline{J}\). This proves (3) and the proposition. \(\square\)
**Proposition 3.26**: _The dimension of \(\overline{J}_{z,y_{e}}\) (i.e., \(\overline{J}\)) is \(dim(G)-dim(H_{\overline{y_{e}}})\)._
**Proof**: We have the map \(\phi:B^{\prime}\to MaxSpec(R)\) whose fiber at each point is of dimension \(dim(G)-dim(H)\). By Prop. 3.25 (3) above, \(\phi^{-1}(MaxSpec(\overline{J}_{z,y_{e}}))=B_{J}\) whose dimension is \(dim(G)-dim(H_{\overline{y_{e}}})+dim(G)-dim(H)\). This implies that the dimension of \(\overline{J}_{z,y_{e}}\) must be \(dim(G)-dim(H_{\overline{y_{e}}})\). \(\square\)
We know in general that \(\hat{\cal K}\subseteq{\cal H}_{\overline{y_{e}}}\). By the above theorem, if \(dim({\cal K})<dim({\cal H}_{\overline{y_{e}}})\), then, in the normal cone \(Spec(R)\), there is indeed an intermediate variety between \(\overline{O(z)}\) and \(\overline{O(y)}\).
**Conjecture 3.27**: _If \(dim({\cal K})<dim({\cal H}_{\overline{y_{e}}})\), then there is a strictly intermediate variety \(\overline{O(z)}\subsetneq W\subsetneq\overline{O(y)}\) of dimension \(dim(G)-dim({\cal H}_{\overline{y_{e}}})\)._
Next, we examine if there is an intermediate ideal \(J^{\prime}\subseteq\mathbb{C}[V]\) such that \(\overline{J^{\prime}}=\overline{J}\) and \(I_{z}\supsetneq J^{\prime}\supsetneq I_{y}\). Whether the ideal of the variety \(W_{z,y_{e}}\) defined below works is not clear to us.
**Definition 3.28**: _For any closed \(G\)-variety \(W\) with \(z\in W\), we say that \(w_{e}\) is a tangent of approach at \(z\) if there is a \(w\in W\) and a 1-PF \(\beta(t)\subseteq G\) such that:_
\[w(t)=\beta(t)w=z+w_{e}t^{e}+\mbox{ higher degree terms}\]
_The collection of all tangents of approach is denoted by \(\overline{T}_{z}W\)._
**Definition 3.29**: _Let \(\mathcal{W}_{z,y_{e}}\) be the collection of all homogeneous \(G\)-varieties \(W\) within \(V\) which (i) contain \(z\) (and therefore \(\overline{O(z)}\)) and (ii) for which \(y_{e}\in\overline{T}_{z}W\)._
**Lemma 3.30**: _Let \(W_{z,y_{e}}=\cap_{W\in\mathcal{W}_{z,y_{e}}}W\). Then \(W_{z,y_{e}}\) is an algebraic variety. Let_
\[I_{z,y_{e}}=\sum_{W\in\mathcal{W}_{z,y_{e}}}I_{W}\]
_where \(I_{W}\subseteq\mathbb{C}[V]\) is the ideal of \(W\). Then \(\overline{I_{z,y_{e}}}\subseteq\overline{J}\)._
**Proof**: It is clear that \(W_{z,y_{e}}\) is an algebraic variety and that \(I_{W}\) is an ideal whose variety is \(W\). Let us first show that for any \(W\) as above \(\overline{I_{W}}\subseteq\overline{J}\). Suppose that \(f\in I_{W}\cap I_{z}^{k}\). By the definition of \(W\), there is a 1-PF \(\gamma(t)\) taking some \(w\in W\) to \(z\) with the tangent of approach being \(y_{e}\). Plugging \(w(t)=\gamma(t)w\) into the expression of \(f\), we have
\[f(w(t))=t^{ke}D_{z,y_{e}}^{k}(f)+\ldots\]
Since \(W\) is a \(G\)-variety, \(w(t)\) lies entirely in \(W\). Hence, \(f\in I_{W}\) implies that \(f(w(t))\) is identically zero, i.e., \(D_{z,y_{e}}^{k}(f)=0\). The same applies to \(D_{gz,gy_{e}}^{k}\) too. Hence \(\overline{f}\in\overline{J}\). Thus \(\overline{I_{W}}\subseteq\overline{J}\) and indeed \(\overline{I_{z,y_{e}}}\subseteq\overline{J}\). \(\square\)
## 4 Intermediate strata and co-limit spaces
As in the previous section, we continue with the search for intermediate varieties. We go back to the 1-PS case with \(\lambda(t)\) and its action as below:
\[\lambda(t)\cdot y=y_{d}t^{d}+y_{e}t^{e}+\ldots+y_{D}t^{D}\]
with \(z=y_{d}\). In the first subsection, we use \(T\), a maximal torus of \(G\) containing \(\lambda\), and elementary polyhedral theory to arrive at possible intermediate varieties. In the second subsection, we consider **co-limits** of \(z\), i.e., leading terms \(\hat{y^{\prime}}\) of degree \(d\) for some \(y^{\prime}\in O(y)\).
### Intermediate polyhedral strata
Let \(H_{0}=L(\lambda)\cap H\) and \(T_{H}\) be a maximal torus in \(H_{0}\). Since \(\lambda(t)\) commutes with \(T_{H}\), we may choose a maximal torus \(T\subseteq G\) containing \(\lambda\) as well as \(T_{H}\). Let its rank be \(r\) which equals the rank of \(G\). We assume that \(T\subseteq D_{n}\cong(\mathbb{C}^{*})^{n}\), the group of diagonal matrices \(diag(t_{1},\ldots,t_{n})\) in \(GL(X)\).
Let the general element of \(T\) be \(\overline{t}=(t_{1},\ldots,t_{r})\in(\mathbb{C}^{*})^{r}\) and \(\rho:T\to D_{n}\) be its realization within \(GL(X)\). Through \(\rho\), we have a finite set \(\Xi(V)\) of characters, i.e., vectors \(\chi=(\chi(i))\) in \(\mathbb{Z}^{r}\), and a weight space decomposition \(V=\sum_{\chi\in\Xi(V)}V_{\chi}\) such that for any \(v_{\chi}\in V_{\chi}\), we have \(\rho(\overline{t})v_{\chi}=(\prod_{1}^{r}t_{i}^{\chi(i)})v_{\chi}\). The product is simply denoted as \(\overline{t}^{\chi}\). For any \(v\in V\) with \(v=\sum_{\chi\in\Xi(V)}v_{\chi}\), we define \(\Xi(v)=\{\chi|v_{\chi}\neq 0\}\).
Let \(\mathcal{T}\) be the Lie algebra of \(T\) and \(\phi(\mathcal{T})\subseteq\mathcal{G}\subseteq gl(X)\) be its realization. Let \((\mathfrak{t}_{1},\ldots,\mathfrak{t}_{r})\) be a basis for \(\mathcal{T}\) such that \(t^{\mathfrak{t}_{i}}=(1,1,\ldots,1,t,1,\ldots,1)\in(\mathbb{C}^{*})^{r}\cong T\), with \(t\) in the \(i\)-th position. Let \(\mathcal{T}_{\mathbb{Z}}\) (resp. \(\mathcal{T}_{\mathbb{R}}\)) be the \(\mathbb{Z}\)-module (resp. \(\mathbb{R}\)-module) generated by \(\{\mathfrak{t}_{1},\ldots,\mathfrak{t}_{r}\}\). Henceforth, by \(\mathcal{T}\), we will mean \(\mathcal{T}_{\mathbb{R}}\).
We fix an element \(\mathbf{1}\in\mathcal{T}_{\mathbb{Z}}\) such that \(\phi(t^{\mathbf{1}})\in Z\), the center of \(GL(X)\). For a \(\mathfrak{t}\in\mathcal{T}_{\mathbb{Z}}\), there are \((\mathfrak{t}(i))\in\mathbb{Z}^{r}\), the coefficients of \((\mathfrak{t}_{1},\ldots,\mathfrak{t}_{r})\), such that \(t^{\mathfrak{t}}=(t^{\mathfrak{t}(1)},\ldots,t^{\mathfrak{t}(r)})\in T\). Moreover, we have \(\chi(t^{\mathfrak{t}})=t^{\langle\chi,\mathfrak{t}\rangle}\), where \(\langle,\rangle\) is the usual inner product on \(\mathbb{R}^{r}\). For any \(1\)-PS \(\mu(t)\subseteq T\), we have \(\log(\mu)\in\mathcal{T}_{\mathbb{Z}}\) such that \(t^{\log(\mu)}=\mu(t)\). Finally, for \(\mathbf{1}\in\mathcal{T}_{\mathbb{Z}}\) there is a \(c\neq 0\) such that for all \(\chi\in\Xi(V)\), \(\langle\chi,\mathbf{1}\rangle=c\).
**Lemma 4.1**: _In the above notation, for any \(v\in V\), with \(v=\sum_{\chi\in\Xi(v)}v_{\chi}\) and \(\mathfrak{t}\in\mathcal{T}\), we have:_
\[\mathfrak{t}v=\sum_{\chi\in\Xi(v)}\langle\chi,\mathfrak{t}\rangle v_{\chi}\]
**Definition 4.2**: _For any \(v\in V\), let \(\mathcal{T}_{v}=\{\mathfrak{t}\in\mathcal{T}|\mathfrak{t}v=0\}\) be the Lie algebra stabilizer of \(v\) within \(\mathcal{T}\). For our \(y\in V\), let \(\mathcal{T}^{+}(y)=\{\mathfrak{t}\in\mathcal{T}|\langle\chi,\mathfrak{t} \rangle\geq 0\text{ for all }\chi\in\Xi(y)\}\). For any \(w\in V\) let \(\mathcal{T}^{+}_{w}\) be \(\mathcal{T}_{w}\cap\mathcal{T}^{+}(y)\)._
**Lemma 4.3**: _For the above data, let_
\[\mathcal{T}_{\mathbb{Z},v}=\{\mathfrak{t}\in\mathcal{T}_{\mathbb{Z}}|\langle \chi,\mathfrak{t}\rangle=0\text{ for all }\chi\in\Xi(v)\}\]
_Then \(\mathcal{T}_{v}\) equals \(\mathcal{T}_{\mathbb{Z},v}\otimes_{\mathbb{Z}}\mathbb{R}\)._
**Definition 4.4**: _A set \(F\subseteq\Xi(y)\) is called a face if there is a \(\mathfrak{t}\in\mathcal{T}^{+}(y)\) such that \(F=\{\chi\in\Xi(y)|\langle\chi,\mathfrak{t}\rangle=0\}\). This face \(F\) is also denoted by \(F(\mathfrak{t})\). The dimension \(dim(F)\) of a face is the dimension of the vector space \(\mathbb{R}\cdot F\subseteq\mathbb{R}^{r}\) spanned by the elements of \(F\). A face \(F\) of co-dimension \(1\) within \(\Xi(y)\) is called a facet. The element \(y_{F}\) is defined as \(\sum_{\chi\in F}y_{\chi}\)._
Note that \(\Xi(y)\) is also a face and that \(\Xi(y_{F})=F\). We have the following simple lemma:
**Lemma 4.5**: _For the above data, we have:_
1. _Let_ \(F\subseteq\Xi(y)\) _and_ \(y_{F}\) _be as above. Then the dimension of the face_ \(F\) _complements the dimension of the sub-torus_ \({\cal T}_{y_{F}}\subseteq{\cal T}\) _which stabilizes it, i.e.,_ \(dim(F)=r-dim({\cal T}_{y_{F}})\)_._
2. _Let_ \({\mathfrak{t}}\in{\cal T}^{+}(y)\) _and_ \(\mu=t^{{\mathfrak{t}}}\)_. Then the leading term_ \(\hat{y}^{\mu}\) _of_ \(y\) _under the action of_ \(\mu(t)\) _equals_ \(y_{F({\mathfrak{t}})}\)_. Conversely, if_ \(\mu(t)\subseteq T\) _is a 1-PS such that_ \(w=\hat{y}^{\mu}\)_, then there is a_ \(c^{\prime}\in{\mathbb{R}}\) _such that_ \({\mathfrak{t}}=\log(\mu)-c^{\prime}{\bf 1}\in{\cal T}^{+}(y)\) _and_ \(w=y_{F({\mathfrak{t}})}\)_._
The proofs of all the above lemmas are straightforward.
We now come to the main proposition of this subsection. We assume there is a 1-PS \(\lambda(t)\subseteq T\), driving \(y\) to \(z\). Recall that for this \(\lambda(t)\), we also have the 1-PS \(\lambda^{\prime}(t)\) (and an \(\overline{\ell}\in{\cal T}\) with \(t^{\overline{\ell}}=\lambda^{\prime}(t)\)) such that \(z\) is the leading term of \(y\) under \(\lambda^{\prime}\) of degree \(0\). This gives us an \(\overline{\ell}\in{\cal T}^{+}(y)\) such that \(z=y_{F(\overline{\ell})}\).
**Proposition 4.6**: _Let \(\lambda(t)\subseteq T\), \(z=\hat{y}\) and \(\overline{\ell}\) be as above. Suppose that \(dim(\Xi(y))-dim(\Xi(z))\geq 2\), then there is a \({\mathfrak{t}}\in{\cal T}^{+}(y)\), its face \(F=F({\mathfrak{t}})\) and a 1-PS \(\mu(t)=t^{{\mathfrak{t}}}\), such that (i) \(y_{F}\) is the leading term of \(y\) of degree \(0\) under \(\mu\) and \(z\) is the leading term of \(y_{F}\) under \(\lambda^{\prime}\) of degree \(0\), and (ii) \(dim(\Xi(y))>dim(\Xi(y_{F}))>dim(\Xi(z))\). Thus, there is an intermediate orbit \(O(y_{F})\) such that \(dim(\Xi(y_{F}))\) is strictly intermediate._
**Proof**: Let \(F_{y}=\Xi(y)\), \(F_{z}=\Xi(z)\), and suppose that \(dim(F_{y})\geq dim(F_{z})+2\). Then, we must have \(dim({\cal T}_{z})\geq dim({\cal T}_{y})+2\). By polyhedral theory, \(\langle\chi,\overline{\ell}\rangle>0\) for all \(\chi\in\overline{F}=F_{y}-F_{z}\). Thus \(\overline{\ell}\in{\cal T}_{z}-{\cal T}_{y}\). Suppose that \({\mathfrak{s}}\) is another element of \({\cal T}_{z}-{\cal T}_{y}\) which is linearly independent of \({\cal T}_{y}+{\mathbb{R}}\overline{\ell}\). Such an element exists since \(dim({\cal T}_{z})\geq dim({\cal T}_{y})+2\). Let \(\overline{a}=(a_{\chi})\) and \(\overline{b}=(b_{\chi})\) be vectors defined on \(\overline{F}\) such that for any \(\chi\in\overline{F}\), \(a_{\chi}=\langle\chi,\overline{\ell}\rangle\) and \(b_{\chi}=\langle\chi,{\mathfrak{s}}\rangle\). Clearly \(\overline{a}>0\) and \(\overline{b}\) is a non-zero vector linearly independent of \(\overline{a}\).
Next, let us consider \({\mathfrak{t}}(\epsilon)=\overline{\ell}+\epsilon{\mathfrak{s}}\). Given the properties of \(\overline{a},\overline{b}\), there is an \(\epsilon>0\) such that (i) \(\langle\chi,{\mathfrak{t}}(\epsilon)\rangle\geq 0\) for all \(\chi\in\overline{F}\)_and_ (ii) there is at least one \(\chi^{\prime}\in\overline{F}\) for which \(\langle\chi^{\prime},{\mathfrak{t}}(\epsilon)\rangle=0\), and finally (iii) a \(\chi^{\prime\prime}\in\overline{F}\) such that \(\langle\chi^{\prime\prime},{\mathfrak{t}}(\epsilon)\rangle>0\). We denote this by \({\mathfrak{t}}^{\prime}\). If \(F=F_{{\mathfrak{t}}^{\prime}}\), then clearly \(F_{y}\supsetneq F\supsetneq F_{z}\) and \({\cal T}_{y}\subsetneq{\cal T}_{y_{F}}\subsetneq{\cal T}_{z}\). Hence, for \(\mu(t)=t^{{\mathfrak{t}}^{\prime}}\) we have \(\hat{y}^{\mu}=y_{F}\) has the required property on dimensions that \(dim(F_{y})>dim(F)>dim(F_{z})\). Finally, it is easy to see that \(\widehat{y_{F}}^{-\lambda^{\prime}}=z\). We come to the last part that \(O(y_{F})\) is intermediate to \(O(y)\) and \(O(z)\). Since \(y_{F}\) is a limit of \(y\) under \(\mu\), we have \(\overline{O(y)}\supseteq O(y_{F})\), and since \(z\) is a limit of \(y_{F}\) under \(\lambda\), we have \(\overline{O(y_{F})}\supseteq O(z)\). Thus \(y_{F}\) is indeed an intermediate orbit. This proves the proposition. \(\Box\)
**Example 4.7**: _Let \(X={\mathbb{C}}^{3},G=GL(X)\) and \(V=Sym^{2}(X^{*})\). Let \(B=\{x_{1},x_{2},x_{3}\}\) be a basis for \(X^{*}\). Let \(y=x_{1}x_{2}+x_{1}x_{3}+x_{2}x_{3}+x_{3}^{2}\) and \(\lambda(t)=diag(1,1,t)\). Thus \(\ell=[0,0,1]\) and \(t^{\ell}=\lambda(t)\). We have \(z=x_{1}x_{2}\) is the limit \(\hat{y}^{\lambda}\). \(H_{0}\) contains a torus \(\mu(u)=u^{[1,-1,0]}=diag(u,u^{-1},1)\). Let \(\ell^{\prime}=[1,-1,0]\). Note that the standard torus \(T=diag(t_{1},t_{2},t_{3})\) contains both \(\lambda(t)\) and \(\mu(u)\). We then choose it as the master maximal torus of \(G\) and use it to construct \(\Xi(V),\Xi(y)\) and \(\Xi(z)\). We record \(\Xi(y)=\{[1,1,0],[1,0,1],[0,1,1],[0,0,2]\}\) and \(\Xi(z)=\{[1,1,0]\}\). We have \({\cal T}_{y}=0\) but \({\cal T}_{z}={\mathbb{R}}\ell+{\mathbb{R}}\ell^{\prime}\) has dimension \(2\). Hence we may apply Prop. 4.6. Indeed, we may choose
_\(\ell^{\prime}\) as the element \(\mathfrak{s}\) in the proof of the above proposition and construct \(\mathfrak{t}(\epsilon)=\ell+\epsilon\ell^{\prime}\). We get \(\epsilon=1\) and \(\mathfrak{t}^{\prime}=[1,-1,1]\). We then see that \(F=\{[1,1,0],[0,1,1]\}\), \(y_{F}=x_{1}x_{2}+x_{2}x_{3}\) and \(O(y_{F})\) is an intermediate orbit._
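Explicitly (our verification of the face computed above), the pairings of \(\mathfrak{t}^{\prime}=[1,-1,1]\) with the characters in \(\Xi(y)\) are

\[\langle[1,1,0],\mathfrak{t}^{\prime}\rangle=0,\quad\langle[1,0,1],\mathfrak{t}^{\prime}\rangle=2,\quad\langle[0,1,1],\mathfrak{t}^{\prime}\rangle=0,\quad\langle[0,0,2],\mathfrak{t}^{\prime}\rangle=2,\]

so that \(F(\mathfrak{t}^{\prime})=\{[1,1,0],[0,1,1]\}\) and \(\hat{y}^{\mu}=x_{1}x_{2}+x_{2}x_{3}=y_{F}\), as stated.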
**Corollary 4.8**: _Let \(z=\hat{y}\) under \(\lambda(t)\) be such that there is no intermediate \(G\)-stable variety between \(O(y)\) and \(O(z)\). Then for any maximal torus \(T\) containing \(\lambda\), there is a \(\lambda^{\prime\prime}(t)\subseteq T,z^{\prime\prime}\in O(z)\) and \(y^{\prime\prime}\in O(y)\) such that \(z^{\prime\prime}\) is the leading term of \(y^{\prime\prime}\) under \(\lambda^{\prime\prime}(t)\) and \(\Xi(z^{\prime\prime})\) is a facet of \(\Xi(y^{\prime\prime})\)._
**Proof**: We consider tuples \((y^{\prime},z^{\prime},\mu^{\prime})\), where \(y^{\prime}\in O(y),z^{\prime}\in O(z),\mu^{\prime}(t)\subseteq T\) and \(z^{\prime}=\hat{y^{\prime}}^{\mu^{\prime}}\), and induct on \(\delta(y^{\prime},z^{\prime},\mu^{\prime})=dim(\Xi(y^{\prime}))-dim(\Xi(z^{\prime}))\). We begin with \((y,z,\lambda^{\prime})\), where \(\lambda^{\prime}(t)\) is the degree \(0\) version of \(\lambda(t)\), i.e., when \(z\) is a limit of \(y\) of degree \(0\) under \(\lambda^{\prime}(t)\). If \(dim(\Xi(y))=dim(\Xi(z))+1\), we are done. If not and \(dim(\Xi(y))>dim(\Xi(z))+1\), then by the above proposition, there is a \(\mu(t)\subseteq T\) and a \(y_{F}\) whose orbit is intermediate and for which \(dim(\Xi(y_{F}))\) is strictly intermediate. Since there is no intermediate variety between \(y\) and \(z\), we must either have (a) \(\overline{O(y_{F})}=\overline{O(y)}\) or (b) \(\overline{O(y_{F})}=\overline{O(z)}\). This implies that either (a) \(y_{F}\in O(y)\) or (b) \(y_{F}\in O(z)\). We thus get either (a) the tuple \((y_{F},z,\lambda)\) or (b) the tuple \((y,y_{F},\mu)\) (as the case may be), with a smaller \(\delta\) and yet implementing the same limit. \(\Box\)
**Example 4.9**: _Let \(X=\mathbb{C}^{9}\) be the space of \(3\times 3\)-matrices, and \(G=GL(X)\). Let \(V=Sym^{3}(X^{*})\) and \(det_{3}(X)\in V\) be \(y\). The functions \(B=\{x_{1},\ldots,x_{9}\}\) form a basis for \(X^{*}\) corresponding to the entries of the matrix. For any matrix \(x\in X\), we have the expression \(x=x_{a}+x_{s}\) decomposing \(x\) as a sum of an antisymmetric and a symmetric matrix. For \(\lambda_{2}(t)(x)=tx_{a}+x_{s}\), we have \(det_{3}(\lambda_{2}(t)x)=tQ_{2}(X)+t^{3}R_{2}(X)\). For a suitable choice of basis, we have:_
\[\lambda_{2}(t)det_{3}(X)=det\left(\left[\begin{array}{ccc}2tx_{6}&tx_{8}+x _{1}&tx_{9}-x_{2}\\ tx_{8}-x_{1}&2tx_{5}&tx_{7}+x_{3}\\ tx_{9}+x_{2}&tx_{7}-x_{3}&2tx_{4}\end{array}\right]\right)=tQ_{2}(X)+t^{3}R_{ 2}(X)\]
_where:_
\[\begin{array}{rcl}Q_{2}(X)&=&2(x_{4}x_{1}^{2}+x_{5}x_{2}^{2}+x_{6}x_{3}^{2}+ x_{7}x_{1}x_{2}+x_{8}x_{2}x_{3}+x_{9}x_{1}x_{3})\\ R_{2}(X)&=&8x_{4}x_{5}x_{6}-2x_{6}x_{7}^{2}-2x_{4}x_{8}^{2}-2x_{5}x_{9}^{2}+2 x_{7}x_{8}x_{9}\end{array}\]
_We know that there is no intermediate orbit between \(O(det_{3}(X))\) and \(O(Q_{2})\). Using the torus \(T\) with the basis \(x_{1},\ldots,x_{9}\), we have:_
\[\Xi(Q_{2})=\framebox{$\begin{array}{cccccccccc}x_{1}&x_{2}&x_{3}&x_{4}&x_{5} &x_{6}&x_{7}&x_{8}&x_{9}\\ \hline\hline 2&0&0&1&0&0&0&0&0\\ 0&2&0&0&1&0&0&0&0\\ 0&0&2&0&0&1&0&0&0\\ 1&1&0&0&0&0&1&0&0\\ 0&1&1&0&0&0&0&1&0\\ 1&0&1&0&0&0&0&0&1\end{array}$}\]
_We may similarly build \(\Xi(R_{2})\). We see that \(dim(\Xi(Q_{2}))=6\), \(dim(\Xi(R_{2}))=4\) while \(dim(\Xi(det_{3}(X)))=7\). Thus \(\Xi(Q_{2})\) is indeed a facet, i.e., a face of co-dimension \(1\) within \(\Xi(det_{3}(X))\)._
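The count \(dim(\Xi(Q_{2}))=6\) can be checked directly (a quick verification, ours): each of the six exponent vectors in the table has its unit entry in a different one of the coordinates \(x_{4},\ldots,x_{9}\), so the six vectors are linearly independent and \(\mathbb{R}\cdot\Xi(Q_{2})\) is \(6\)-dimensional.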
### The Co-limit space \(Z(\lambda)\) and \(O(y)\)
Let us resume our standard assumption of action by a 1-PS:
\[\lambda(t)y=t^{d}y_{d}+t^{e}y_{e}+\ldots+t^{D}y_{D}\]
with the limit \(z=\hat{y}\) and \(y_{e}\) as the tangent of approach.
Let us act \(\lambda\) on \(y^{\prime}\), a conjugate of \(y\) to get:
\[\lambda(t)\cdot y^{\prime}=t^{a}y^{\prime}_{a}+\cdots+t^{b}y^{\prime}_{b}\]
Next, for any \(d^{\prime}\), we define \(Y_{d^{\prime}}\) and \(Y\) as below:
\[Y_{d^{\prime}}=\{y^{\prime}=gy|y^{\prime}_{a}=0\mbox{ for all }a<d^{\prime}\}\ \mbox{ and }Y=\cup_{d^{\prime}}Y_{d^{\prime}}\]
Thus, \(Y_{d^{\prime}}\) consists of those elements \(y^{\prime}\in O(y)\) for which \(deg(\hat{y^{\prime}})\geq d^{\prime}\). Note that the notation and definitions are all with respect to this fixed \(\lambda\).
Let \(V_{d^{\prime}}\) be the degree \(d^{\prime}\) subspace of \(V\) and consider the projection \(\pi_{d^{\prime}}:V\to V_{d^{\prime}}\). We define \(Z_{d^{\prime}}=\pi_{d^{\prime}}(Y_{d^{\prime}})\) and \(Z\) as \(\cup_{d^{\prime}}Z_{d^{\prime}}\). Thus, \(Z_{d^{\prime}}\) is the space of all \(z^{\prime}\) which are degree-\(d^{\prime}\) limits under \(\lambda\) of some conjugate \(y^{\prime}\) of \(y\). Note that \(y\in Y_{d}\) and \(z\in Z_{d}\). We call \(Z_{d}\) the **space of co-limits** of \(z\).
The importance of \(Z_{d}\) comes from the following lemma:
**Lemma 4.10**: _Let \(O(Z_{d})=\{gz^{\prime}|z^{\prime}\in Z_{d}\mbox{ and }g\in G\}\) and \(\overline{O(Z_{d})}\) be its closure. Then \(\overline{O(Z_{d})}\) is an intermediate variety, i.e., \(\overline{O(z)}\subseteq\overline{O(Z_{d})}\subseteq\overline{O(y)}\)._
**Proof**: Since \(z\in Z_{d}\), it is clear that \(\overline{O(z)}\subseteq\overline{O(Z_{d})}\). For the other inclusion, we see that every element \(z^{\prime}\in Z_{d}\) is the leading term \(\hat{y^{\prime}}\) for some \(y^{\prime}\in Y_{d}\subseteq O(y)\). Hence \(z^{\prime}\in\overline{O(y^{\prime})}\). But \(O(y^{\prime})=O(y)\) and is \(G\)-stable and hence \(\overline{O(z^{\prime})}\subseteq\overline{O(y)}\). This proves the second inclusion. \(\square\)
All points in \(\overline{O(Z_{d})}\) will have a representative (up to conjugation) of pure degree \(d\). If \(y\) does not have this property, e.g., when \(y\) is stable, then \(\overline{O(Z_{d})}\subsetneq\overline{O(y)}\).
The more interesting question is whether \(O(Z_{d})\) contains \(z^{\prime}\) which are _not_ in \(O(z)\). To answer this, we will compare \(T_{z}Z_{d}\) and \((T_{z}O(z))_{d}\), the degree \(d\) component of the tangent space \(T_{z}O(z)\). Note that \((T_{z}O(z))_{d}\subseteq T_{z}Z_{d}\).
Recall that we have the parabolic group \(P(\lambda)\), its unipotent radical \(U(\lambda)\) and a special reductive complement \(L(\lambda)\). Recall also that the Lie algebra of \(P(\lambda)\) is \({\cal P}(\lambda)=\sum_{i\geq 0}{\cal G}_{i}\), i.e., the subspace generated by elements within \({\cal G}\) of non-negative degree, and that of \(L(\lambda)\) is \({\cal L}(\lambda)={\cal G}_{0}\). We have the following important lemma:
**Lemma 4.11**: _The map \(\pi_{d}:Y_{d}\to Z_{d}\) is \(P(\lambda)\)-equivariant._
The proof is straightforward. The lemma implies that \(L(\lambda)\) has an action on \(Z_{d}\).
**Assumption 4.12**: _We assume that both \(Y_{d}\) and \(Z_{d}\) are smooth at the points \(y\) and \(z\); see also Remark 4.22._
**Lemma 4.13**: _Let \(y\in Y_{d}\) and \(z=\pi_{d}(y)\in Z_{d}\) be as above. Then we have:_
1. _The tangent space_ \(T_{z}Z_{d}\) _is given by the image of_ \(T_{y}Y_{d}\)_. In other words,_ \((\pi_{d})_{*}T_{y}Y_{d}=T_{z}Z_{d}\)_._
2. _Let_ \(O_{L(\lambda)}(z)\subseteq Z_{d}\) _be the_ \(L(\lambda)\)_-orbit of_ \(z\)_. Let_ \(T_{z}O(z)\) _be the tangent space of the_ \(G\)_-orbit of_ \(z\) _and_ \(T_{z}O_{L(\lambda)}(z)\) _be the tangent space for the_ \(L(\lambda)\)_-orbit of_ \(z\)_. Then_ \(T_{z}O_{L(\lambda)}(z)=(T_{z}O(z))_{d}=\mathcal{G}_{0}z\)_._
**Proof**: The first part follows from the definition of \(Z_{d}\) as the image of \(Y_{d}\) under the projection \(\pi_{d}:V\to V_{d}\). For the second part, note that \(T_{z}O(z)=\sum_{i}\mathcal{G}_{i}z\) and thus \((T_{z}O(z))_{d}=\mathcal{G}_{0}z=\mathcal{L}(\lambda)z\). But this is precisely \(T_{z}O_{L(\lambda)}(z)\). \(\square\)
We illustrate the above lemma with an analysis of an example communicated to us by Professor V. Popov.
**Example 4.14**: _Let \(G=GL_{4}(\mathbb{C})\) act by left multiplication on \(V=\mathbb{C}^{4}\oplus\mathbb{C}^{4}\oplus\mathbb{C}^{4}\), represented as a \(4\times 3\)-matrix. Thus, an element \(v\) of \(V\) may be viewed as a \(4\times 3\) matrix and the action of a \(g\in G\) is \(g.v\) under the usual matrix multiplication. We will use \(e_{1},e_{2},e_{3}\) and \(e_{4}\) to denote the standard basis of \(\mathbb{C}^{4}\) as column vectors._
_We set \(y=[e_{1},e_{2},e_{3}]\in V\) and \(\lambda(t)=\text{diagonal}(1,t,t^{2},t^{2})\). Clearly,_
\[\lambda(t).y=[e_{1},0,0]+t[0,e_{2},0]+t^{2}[0,0,e_{3}]=y_{0}+ty_{1}+t^{2}y_{2}\]
_Thus, \(z=y_{0}=[e_{1},0,0]\) is the leading term of \(\lambda(t).y\) and the tangent of approach is \(y_{e}=y_{1}=[0,e_{2},0]\). The grading induced by \(\lambda(t)\) on \(\mathcal{G}\) as well as the weight-subspaces of \(V\) of \(\lambda(t)\) action are depicted in the following diagram._
\[\left[\begin{array}{cccc}0&-1&-2&-2\\ 1&0&-1&-1\\ 2&1&0&0\\ 2&1&0&0\end{array}\right]\qquad\left[\begin{array}{ccc}0&0&0\\ 1&1&1\\ 2&2&2\\ 2&2&2\end{array}\right]\]
_Weight space for \(gl_{4}\) (left) and weight space for \(V\) (right)._
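_These tables can be read off as follows (our gloss): writing \(w=(0,1,2,2)\) for the exponents of \(\lambda(t)=diag(t^{w_{1}},\ldots,t^{w_{4}})\), the \((i,j)\)-entry of \(gl_{4}\) has degree \(w_{i}-w_{j}\) under the adjoint action, while the \(i\)-th row of \(V\) has degree \(w_{i}\) under left multiplication; the two tables record exactly these numbers._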
_Note that \(O(y)\) is the collection of all matrices \(y^{\prime}\) of rank \(3\) and \(\overline{O(y)}\) equals \(V\). Since \(d=0\), we have \(Y_{0}\) is the collection of all matrices \(y^{\prime}\) which have a non-zero first row. Thus \(Z_{0}\) is the collection of all matrices \(z^{\prime}\) which are non-zero only in the first row. These give rise to an infinite collection of orbits consisting of the space of all rank \(1\) matrices of which \(z\) is one element. All these \(z^{\prime}\in Z_{0}\) are closely related to the point \(z\), indeed, they have identical stabilizers, viz., \(G_{z}=G_{z^{\prime}}\) and their orbits have the same dimension, viz., \(4\)._
_On the other hand, it is easily checked that \(W_{z,y_{e}}\) consists of all matrices where the third column is zero. Thus the \(G\)-stable spaces \(O(Z_{0})\) and \(W_{z,y_{e}}\) are not comparable, and yet \(\overline{O(z)}\subsetneq O(Z_{0}),W_{z,y_{e}}\subsetneq\overline{O(y)}\)._
_Finally, note that \(Y_{1}\) are all rank \(3\) matrices with zero first row. \(Z_{1}\) are all matrices which are non-zero only in the second row. But note that \(O(Z_{1})=O(Z_{0})\)._
Let us now analyse \(T_{y}Y_{d}\) and \(T_{z}Z_{d}\). We begin with a definition:
**Definition 4.15**: _Let \(y,\lambda\) and \(d\) be fixed as above. An element \(\mathfrak{g}\in\mathcal{G}\) is called a \(d\)-stabilizer iff \(\mathfrak{g}=\sum_{i}\mathfrak{g}_{i}\) is such that \((\mathfrak{g}y)_{a}=0\) for all \(a<d\). Let \(\mathcal{G}_{y,d}\subseteq\mathcal{G}\) be the collection of \(d\)-stabilizers of \(y\)._
In other words, \(\mathcal{G}_{y,d}\) is the collection of "Lie elements" \(\mathfrak{g}\in\mathcal{G}\) which move \(y\) into \(\oplus_{i\geq d}V_{i}\), i.e., for which \(\mathfrak{g}y\in\oplus_{i\geq d}V_{i}\). The importance of \(\mathcal{G}_{y,d}\) comes from the following lemma.
**Lemma 4.16**: _The tangent space \(T_{y}Y_{d}\) is given by the set \(\{\mathfrak{g}y|\mathfrak{g}\in\mathcal{G}_{y,d}\}\)._
**Proof**: All elements in a neighborhood of \(y\in Y_{d}\) are given by \(\rho(e^{\mathfrak{g}t})(y)\) for some \(\mathfrak{g}\in\mathcal{G}\). The tangent vector of the path \(\beta_{\mathfrak{g}}(t)=\rho(e^{\mathfrak{g}t})(y)\) at \(y\) is precisely \(\rho_{1}(\mathfrak{g})(y)\). That the path lies in \(\oplus_{i\geq d}V_{i}\) is equivalent to the condition that \(\mathfrak{g}\in\mathcal{G}_{y,d}\). \(\square\)
Let us compute \(T_{z}Z_{d}\) and compare it with \((T_{z}O(z))_{d}=\mathcal{G}_{0}z\). Since \(T_{z}Z_{d}=(\pi_{d})_{*}(T_{y}Y_{d})\), we have \(T_{z}Z_{d}=(\mathcal{G}_{y,d}y)_{d}\), the degree \(d\) component of the tangent space \(T_{y}Y_{d}\). If \(\mathfrak{g}=\mathfrak{g}_{k}+\ldots+\mathfrak{g}_{l}\in\mathcal{G}_{y,d}\), then the action of \(\mathfrak{g}\) on \(y\) gives us:
\[\begin{array}{rcl}\mathfrak{g}\cdot y&=&(\mathfrak{g}_{k}+\ldots+\mathfrak{g}_{k+(e-d)-1}+\mathfrak{g}_{k+(e-d)}+\ldots)(y_{d}+y_{e}+\ldots)\\ &=&\mathfrak{g}_{k}y_{d}+\ldots+\mathfrak{g}_{k+(e-d)-1}y_{d}+(\mathfrak{g}_{k}y_{e}+\mathfrak{g}_{k+(e-d)}y_{d})+\ldots\end{array}\]
If \(k>0\), then the leading term of \(\mathfrak{g}y\) has a degree greater than \(d\), and does not contribute to \(T_{z}Z_{d}\) and therefore is of no interest. If \(k=0\), then the leading term is \(\mathfrak{g}_{0}y_{d}\in\mathcal{G}_{0}z\) which is in \((T_{z}O(z))_{d}\), and thus does not cross the orbit of \(z\). Thus the interesting situation is when \(k<0\). Then the first \(|k|\) terms of the above expression must vanish (this being precisely the requirement that \(\mathfrak{g}\in\mathcal{G}_{y,d}\)). It is the \((|k|+1)\)-th term which gives us an element of \(T_{z}Z_{d}\) which depends not only on \(y_{d}\) but also on higher degree components of \(y\). The analysis of these cancellations motivates us to study the structure of \(\mathcal{G}_{y,d}\).
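In the simplest instance \(d=0\), \(e=1\) and \(k=-1\) (spelling out the bookkeeping above; the notation \(\mathcal{H}_{-1}\) for the degree \(-1\) part of \(\mathcal{H}\) anticipates Remark 4.22), for \(\mathfrak{g}=\mathfrak{g}_{-1}+\mathfrak{g}_{0}+\ldots\in\mathcal{G}_{y,0}\) we get

\[\mathfrak{g}\cdot y=\mathfrak{g}_{-1}y_{0}+(\mathfrak{g}_{-1}y_{1}+\mathfrak{g}_{0}y_{0})+\mbox{ higher degree terms},\]

so membership in \(\mathcal{G}_{y,0}\) forces \(\mathfrak{g}_{-1}y_{0}=0\), i.e., \(\mathfrak{g}_{-1}\in\mathcal{H}_{-1}\), and the degree \(0\) contribution to \(T_{z}Z_{0}\) is \(\mathfrak{g}_{-1}y_{1}+\mathfrak{g}_{0}y_{0}\). This matches the description \(T_{z}Z_{d}=\mathcal{H}_{-1}y_{1}+\mathcal{G}_{0}z\) quoted in Remark 4.22 below for the two-component case.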
Let us begin with three special nested families associated with \(\mathcal{G}_{y,d}\), as specified below:
**Definition 4.17**: _Let \(\mathfrak{g}=\sum_{i\geq a}\mathfrak{g}_{i}\), with \(\mathfrak{g}_{a}\neq 0\), be an element of \(\mathcal{G}_{y,d}\). We define \(\hat{\mathfrak{g}}=\mathfrak{g}_{a}\) as the leading term of \(\mathfrak{g}\). Let:_
\[\mathcal{G}_{y,d}^{\mathcal{H}}=\{\hat{\mathfrak{g}}|\mathfrak{g}\in\mathcal{ G}_{y,d}\text{ such that }\hat{\mathfrak{g}}\in\mathcal{H}\}\]
_Similarly, we define \(\mathcal{G}_{y,d}^{\mathcal{H}_{\overline{y_{e}}}}\) (resp. \(\mathcal{G}_{y,d}^{\hat{\mathcal{K}}}\)) as the set of leading terms \(\hat{\mathfrak{g}}\) of those \(\mathfrak{g}\in\mathcal{G}_{y,d}\) for which \(\hat{\mathfrak{g}}\) belongs to \(\mathcal{H}_{\overline{y_{e}}}\) (resp. \(\hat{\mathcal{K}}\))._
Note that since the members of \(\mathcal{G}_{y,d}^{\mathcal{H}}\) and other spaces are only the leading terms of elements of \(\mathcal{G}_{y,d}\), these are graded subspaces of \(\mathcal{G}\). Since \(\mathcal{H}\supseteq\mathcal{H}_{\overline{y_{e}}}\supseteq\hat{\mathcal{K}}\), we have the corresponding containments \(\mathcal{G}_{y,d}^{\mathcal{H}}\supseteq\mathcal{G}_{y,d}^{\mathcal{H}_{ \overline{y_{e}}}}\supseteq\mathcal{G}_{y,d}^{\hat{\mathcal{K}}}\) and \(dim(\mathcal{G}_{y,d}^{\mathcal{H}}/\mathcal{G}_{y,d}^{\mathcal{H}_{ \overline{y_{e}}}})\leq dim(\mathcal{H}/\mathcal{H}_{\overline{y_{e}}})\) and \(dim(\mathcal{G}_{y,d}^{\mathcal{H}_{\overline{y_{e}}}}/\mathcal{G}_{y,d}^{\hat{ \mathcal{K}}})\leq dim(\mathcal{H}_{\overline{y_{e}}}/\hat{\mathcal{K}})\).
**Proposition 4.18**: _Let \(y,z\) and \(d\) be as above. Suppose that \(dim({\cal H}/{\cal H}_{\overline{y_{e}}})=r\) and \(dim({\cal H}_{\overline{y_{e}}}/\hat{\cal K})=s\). Then there is a subspace \(F\subseteq{\cal G}_{y,d}\) of dimension at most \(r+s\), with a basis \(B=\{\mathfrak{g}_{1},\ldots,\mathfrak{g}_{b}\}\subset{\cal G}_{y,d}\), such that \({\cal G}_{y,d}\subseteq{\cal P}(\lambda)+{\cal K}+F\). Moreover the set \(\hat{B}=\{\hat{\mathfrak{g}}_{i}|\mathfrak{g}_{i}\in B\}\) is linearly independent in \({\cal G}_{y,d}^{\cal H}/{\cal G}_{y,d}^{\hat{\cal K}}\)._
**Proof**: Let \(B^{\prime}=\{\mathfrak{g}_{1},\ldots,\mathfrak{g}_{b}\}\) be elements of \({\cal G}_{y,d}\) such that \(\hat{B^{\prime}}=\{\widehat{\mathfrak{g}}_{i}|i=1,\ldots,b\}\) is a basis for \({\cal G}_{y,d}^{\cal H}/{\cal G}_{y,d}^{\hat{\cal K}}\). Let \(\mathfrak{g}=\sum_{i\geq k}\mathfrak{g}_{i}\in{\cal G}_{y,d}\) be an arbitrary element. We prove the assertion by induction on the degree of \(\hat{\mathfrak{g}}\), and on the containment of \(\hat{\mathfrak{g}}\) in the chain \({\cal G}\supseteq{\cal H}\supseteq{\cal H}_{\overline{y_{e}}}\supseteq\hat{ \cal K}\).
Suppose that \(\mathfrak{g}\) is as above, i.e., \((\mathfrak{g}y)_{a}=0\) for all \(a<d\). We write the action of \(\mathfrak{g}\) on \(y\) as before:
\[\begin{array}{rcl}\mathfrak{g}\cdot y&=&(\mathfrak{g}_{k}+\ldots+\mathfrak{g}_{k+(e-d)-1}+\mathfrak{g}_{k+(e-d)}+\ldots)(y_{d}+y_{e}+\ldots)\\ &=&\mathfrak{g}_{k}y_{d}+\ldots+\mathfrak{g}_{k+(e-d)-1}y_{d}+(\mathfrak{g}_{k}y_{e}+\mathfrak{g}_{k+(e-d)}y_{d})+\ldots\end{array}\]
If \(k\geq 0\) then \(\mathfrak{g}\in{\cal P}(\lambda)\) and the assertion holds. That takes care of the base case. Next consider the case when \(k<0\). We then have \(\mathfrak{g}_{k}y_{d}=0\). Thus \(\hat{\mathfrak{g}}\in{\cal H}\) and \(\mathfrak{g}\in({\cal G}_{y,d}^{\cal H})_{k}\), the \(d\)-stabilizers with leading terms in \({\cal H}\) and of leading degree \(k\). Then, by the choice of \(B^{\prime}\), there is an element \(\mathfrak{g}^{\prime}\) in the span of \(B^{\prime}\) such that \(\mathfrak{g}-\mathfrak{g}^{\prime}\in{\cal G}_{y,d}^{\hat{\cal K}}\), i.e., where the leading term has dropped to \(\hat{\cal K}\).
We are then reduced to the case where \(\hat{\mathfrak{g}}\in\hat{\cal K}\). In that case, there is an element \(\mathfrak{k}\in{\cal K}\) such that \(\hat{\mathfrak{k}}=\hat{\mathfrak{g}}\). Since \(\mathfrak{k}\cdot y=0\), the element \(\mathfrak{g}^{\prime}=\mathfrak{g}-\mathfrak{k}\) again lies in \({\cal G}_{y,d}\) and has a leading term of degree greater than \(k\), so induction applies. The set \(B\) is constructed from \(B^{\prime}\) so as to span the excess of \({\cal G}_{y,d}\) over \({\cal P}(\lambda)+{\cal K}\). This proves the proposition. \(\square\)
**Remark 4.19**: _If \(y\) is in the null cone of \(V\) for the \(G\)-action and \(\lambda\) is the "optimal" 1-PS then \({\cal G}_{y,d}={\cal P}(\lambda)\), see [10], Lemma 4.6. Thus \(F\) measures the deviation of \(\lambda\) from the optimal 1-PS which drives \(y\) to \(0\)._
**Proposition 4.20**: _Let \(TW_{z}=\pi_{d}(\{\mathfrak{g}\cdot y|\mathfrak{g}\in F\})\) be the degree \(d\) components of \(F\cdot y\). Then \(T_{z}Z_{d}=TW_{z}+{\cal G}_{0}z\). If \(\overline{TW}_{z}\) denotes the quotient \((TW_{z}+{\cal G}_{0}z)/{\cal G}_{0}z\), then the codimension of \((T_{z}O(z))_{d}\) in \(T_{z}Z_{d}\) is \(dim(\overline{TW}_{z})\)._
**Proof**: Assuming manifold properties of \(Y_{d}\) at \(y\), we have
\[\begin{array}{c}dim(Y_{d})=dim(T_{y}Y_{d})\geq dim((T_{y}Y_{d})_{d})=dim(T_{ z}Z_{d})=dim(Z_{d})\\ dim((T_{y}Y_{d})_{d})=dim(({\cal G}_{y,d}y)_{d})\end{array}\]
This gives us the equation \(dim(({\cal G}_{y,d}y)_{d})=dim(T_{z}Z_{d})\). But, by Prop. 4.18, \({\cal G}_{y,d}={\cal P}(\lambda)+{\cal K}+F\). Now \((({\cal P}(\lambda)+{\cal K})y)_{d}\subseteq(T_{z}O(z))_{d}\). Thus \(F\) is the only part of \({\cal G}_{y,d}\) which leads to tangent vectors outside \((T_{z}O(z))_{d}\). Whence, \(T_{z}Z_{d}=\pi_{d}(F\cdot y)+(T_{z}O(z))_{d}\). But \((T_{z}O(z))_{d}={\cal G}_{0}z\), hence \(dim(T_{z}Z_{d})-dim({\cal G}_{0}z)\) is given by the expression \(dim(TW_{z}+{\cal G}_{0}z)-dim({\cal G}_{0}z)\) and this is precisely \(dim(\overline{TW}_{z})\). This proves the proposition. \(\square\)
**Corollary 4.21**: _If \(dim(Z_{d})>dim((T_{z}O(z))_{d})\) then \(F\) is non-empty. In other words, there is a \(d\)-stabilizer \(\mathfrak{g}\not\in{\cal P}(\lambda)+{\cal K}\). If \(\lambda\) were optimal then \(\overline{O(z)}=\overline{O(Z_{d})}\)._
**Remark 4.22**: _The above computation of \(T_{z}Z_{d}\) depends on \(z\in Z_{d}\) being smooth. But is that so? If \(z\) were smooth, then there would be the action of \(H_{0}=H\cap L(\lambda)\) on \(T_{z}Z_{d}\) as follows. Since \(\pi_{d}:Y_{d}\to Z_{d}\) is \(L(\lambda)\)-equivariant, and \(H\) stabilizes \(z\), we have the action of \(H_{0}\) on \(Z_{d}\) which keeps \(z\) fixed. This gives us an action of \(H_{0}\) and its Lie algebra, \({\cal H}_{0}\) on \(T_{z}Z_{d}\)._
_Even looking at the simple case when \(\lambda\) has two components, \(d=0\) and \(e=1\), we have \(T_{z}Z_{d}={\cal H}_{-1}y_{1}+{\cal G}_{0}z\). But \({\cal G}_{0}z\) is already preserved by \(H_{0}\). Thus, we do have an \(H_{0}\)-action on \(T_{z}Z_{d}\) if \(H_{0}({\cal H}_{-1}y_{1})\subseteq TW\). This seems to be a precondition for the smoothness of \(z\) in \(Z_{d}\)._
We end this section with some examples.
**Example 4.23**: _Continuing with Ex. 4.14, with \(V={\mathbb{C}}^{4\times 3},y=[e_{1},e_{2},e_{3}]\) and \(\lambda(t)=\mbox{diagonal}(1,t,t^{2},t^{2})\):_
\[\lambda(t).y=[e_{1},0,0]+t[0,e_{2},0]+t^{2}[0,0,e_{3}]=y_{0}+ty_{1}+t^{2}y_{2}\]
_Thus \(d=0,e=1,z=[e_{1},0,0]\) and the tangent of approach is \(y_{1}=[0,e_{2},0]\). The tangent space \(T_{z}O(z)\) is \(\{[c,0,0]|c\in{\mathbb{C}}^{4}\}\), the space of all matrices which are \(0\) in the second and third columns. Also recall the weight spaces:_
\[\left[\begin{array}{c|c|c}0&-1&-2&-2\\ \hline 1&0&-1&-1\\ \hline 2&1&0&0\\ 2&1&0&0\end{array}\right]\qquad\left[\begin{array}{c|c}0&0&0\\ \hline 1&1&1\\ \hline 2&2&2\\ 2&2&2\end{array}\right]\]
_Weight space for \(gl_{4}\) Weight space for \(V\)_
_We see that:_
\[{\cal K}=\hat{\cal K}=\left[\begin{array}{cccc}0&0&0&*\\ 0&0&0&*\\ 0&0&0&*\\ 0&0&0&*\end{array}\right]\quad{\cal H}=\left[\begin{array}{cccc}0&*&*&*\\ 0&*&*&*\\ 0&*&*&*\\ 0&*&*&*\end{array}\right]\quad{\cal H}_{\overline{y_{e}}}=\left[\begin{array} []{cccc}0&0&*&*\\ 0&0&*&*\\ 0&0&*&*\end{array}\right]\]
_Observe that \({\cal H}/{\cal H}_{\overline{y_{e}}}\) and \({\cal H}_{\overline{y_{e}}}/{\cal K}\) have bases \(\{e_{1,2},e_{2,2},e_{3,2},e_{4,2}\}\) and \(\{e_{1,3},e_{2,3},e_{3,3},e_{4,3}\}\) respectively. It is easily verified that \({\cal G}_{y,d}\) equals \({\cal G}\), and is spanned by \({\cal P}(\lambda)\cup{\cal K}\cup F\) where \(F=\{e_{1,2},e_{1,3},e_{2,3}\}\). Further,_
\[e_{1,2}.y=[0,e_{1},0],\;\;e_{1,3}.y=[0,0,e_{1}]\;\;e_{2,3}.y=[0,0,e_{2}]\]
_In the notation of Proposition 4.20, we see that \(TW_{0}\), the vectors of weight \(0\) are spanned by \([0,e_{1},0]\) and \([0,0,e_{1}]\) and that \(TW_{0}=\overline{TW}_{0}\). These are elements of \(T_{z}Z_{0}-T_{z}O(z)\). Indeed, it is easily seen that \(Z_{0}\) consists of all non-zero matrices where the only non-zero row is the first row. Consider for example, \(z^{\prime}=[e_{1},\alpha_{1}e_{1},\alpha_{2}e_{1}]\), where \(\alpha_{1},\alpha_{2}\in{\mathbb{C}}\). Then for \(y^{\prime}=[e_{1},\alpha_{1}e_{1}+e_{2},\alpha_{2}e_{1}+e_{3}]\), we see that \(y^{\prime}\in O(y)\) and \(\widehat{y^{\prime}}=z^{\prime}\). Thus \(z^{\prime}\in Z_{0}\)._
_Note that \(O(z)_{0}=\{[\alpha e_{1},0,0]|\alpha\in\mathbb{C},\alpha\neq 0\}\), and \(O(z)\) are matrices of the form \([v,0,0]\) where \(v\neq 0\). However \(O(Z_{0})\) is the space of matrices of rank \(1\). This has an infinite family of orbits, each of the same dimension as \(O(z)\) (i.e., \(4\)). Thus \(\overline{O(z)}\subsetneq\overline{O(Z_{0})}\subsetneq\overline{O(y)}\) is a strict intermediate variety._
**Example 4.24**: _Consider the \(G=GL_{3}(\mathbb{C})\)-module \(V=Sym^{3}(\mathbb{C}^{3})\) and the form \(p(x,y,z)=x^{2}(x-y)+(x-2y)^{2}z+z^{2}(ax+by)+z^{3}\in V\). We may choose \(a,b\) so that \(\mathcal{K}=\mathcal{G}_{p}=0\). Let \(\lambda(t)\) be such that \(\lambda(t)x=x,\lambda(t)y=y\) but \(\lambda(t)z=tz\). For the action of \(\lambda(t)\) on \(p\), we have \(d=0,e=1\) and \(\hat{p}=(x-y)x^{2}\) is the leading term and \(q=(x-2y)^{2}z\) is the tangent of approach \(y_{e}\). Clearly \(\mathcal{H}\), the stabilizer of \(\hat{p}\) is \(4\) dimensional and contains \(Hom(\mathbb{C}z,\mathbb{C}\cdot\{x,y,z\})\). The other basis element of \(\mathcal{H}\) is the toric one parameter family of \(gl_{3}\) with \(a_{11}=1,a_{21}=3\) and \(a_{22}=-2\), which is of degree \(0\). However none of these elements stabilize \(\overline{q}\) so \(\mathcal{H}_{\overline{q}}=0\). Thus, \(\hat{\mathcal{K}}=\mathcal{H}_{\overline{q}}=0\). The expected dimension of \(\mathcal{W}_{z,y_{e}}\) is \(dim(G)-dim(\mathcal{H}_{\overline{y_{e}}})=dim(G)\). Since this is also the dimension of \(\overline{O(p)}\) there is no strictly intermediate variety of the type \(\mathcal{W}_{z,y_{e}}\)._
_On the other hand the space \(T_{\hat{p}}(O(\hat{p}))_{0}\) is given by \(\mathcal{G}_{0}\hat{p}\), which reduces to the action of \(gl(x,y)\) on \(\hat{p}\). This gives us the \(4\) forms whose coefficients in terms of the basis elements in the first row of the matrix given below are:_
\begin{tabular}{|c||r|r|r|r|} \hline \hline & \(x^{3}\) & \(x^{2}y\) & \(xy^{2}\) & \(y^{3}\) \\ \hline \hline \(x\partial p/\partial x\) & \(3\) & \(-2\) & \(0\) & \(0\) \\ \(y\partial p/\partial x\) & \(0\) & \(3\) & \(-2\) & \(0\) \\ \(x\partial p/\partial y\) & \(-1\) & \(0\) & \(0\) & \(0\) \\ \(y\partial p/\partial y\) & \(0\) & \(-1\) & \(0\) & \(0\) \\ \hline \end{tabular}
_The rank of the above matrix, and therefore the dimension of \((T_{\hat{p}}O(\hat{p}))_{0}\), is \(3\). \(y\partial/\partial z\) is an element of \(\mathcal{H}_{-1}=Hom(\mathbb{C}z,\mathbb{C}\{x,y\})\) and is in \(\mathcal{G}_{y,0}\). Applying this to \(q\) gives us \(y\partial q/\partial z=(x-2y)^{2}y\in(\mathcal{H}_{-1}q)_{0}\in T_{\hat{p}}Z_{0}\). But this element is not in \(T_{\hat{p}}(O(\hat{p}))_{0}\). So \(\overline{O(\hat{p})}\subsetneq\overline{O(Z_{0})}\). Now \(Z_{0}\) contains all forms in \(Sym^{3}(\mathbb{C}\{x,y\})\), the space of degree \(3\) forms in the variables \(x,y\). Thus, every form in \(\overline{O(Z_{0})}\) is stabilized by a conjugate of \(Hom(\mathbb{C}z,\mathbb{C}\{x,y\})\). Therefore \(\overline{O(Z_{0})}\subsetneq\overline{O(p)}\) which gives us a strict intermediate variety._
### Alignment and Co-limits
In this subsection, we connect the space \(Z_{0}\) with the presence of alignments for the case when \(V=Sym^{n}(X^{*})\) and \(X=Y\oplus Z\) are the weight spaces for \(\lambda\), as in Section 2.4. Suppose that \(f_{0}\in Sym^{n}(Y^{*})\) is obtained as a degree \(0\) limit of some \(f\in V\) under \(\lambda(t)\). We show that there is alignment between \(f\) and \(f_{0}^{\prime}\) for some co-limit \(f_{0}^{\prime}\) of \(f_{0}\).
As in Section 2.4, let us suppose that there is a \(1\)-PS \(\lambda(t)\subseteq GL(X)\) such that \(X=Y\oplus Z\), where \(Y=X_{0}\) and \(Z=X_{1}\), are the weight spaces. Suppose that:
\[\lambda(t)f=t^{0}f_{0}+\ldots+t^{n}f_{n}\]
with \(f_{0}\in Sym^{n}(Y^{*})\) as the leading term. As before, let \(GL(X)_{f}=K\) and \(GL(X)_{f_{0}}=H\). Moreover, let \(GL(Y)_{f_{0}}=H_{Y}\) be the restriction of \(H\) to \(Y\).
Let \({\cal Z}=\{z_{1},\ldots,z_{s}\}\) be a basis for \(Z\) and \({\cal Y}=\{y_{1},\ldots,y_{r}\}\) be a basis for \(Y\) and \(X=Y\oplus Z\). For simplicity, let us identify \(Z\) with \(\mathbb{C}^{s+r}\) treated as row vectors, and \(\{e_{1},\ldots,e_{s},e_{s+1},\ldots e_{s+r}\}\) as the standard basis, with \(e_{i}=z_{i}\), for \(i=1,\ldots s\) and \(e_{s+j}=y_{j}\) for \(j>s\).
Then every element \(\mathfrak{h}\in{\cal H}\), and \(h\in H\) is of the form:
\[\mathfrak{h}=\left[\begin{array}{cc}\mathfrak{a}&\mathfrak{b}\\ 0&\mathfrak{d}\end{array}\right]h=\left[\begin{array}{cc}a&b\\ 0&d\end{array}\right]\]
where \(\mathfrak{a}\in End(Z),\mathfrak{b}\in Hom(Z,Y)\) and \(\mathfrak{d}\in{\cal H}_{Y}\) (and \(a\in GL(Z),b\in End(Z,Y)\) and \(d\in H_{Y}\)). Recall also the subgroups \(L(\lambda)\) and \(P(\lambda)\). We will also use the unipotent subgroup \(\overline{U}(\lambda)\), as shown below:
\[\overline{U}(\lambda)=\left\{\left[\begin{array}{cc}I_{s}&b\\ 0&I_{r}\end{array}\right]|b\in End(Z,Y)\right\}\]
Note that \(\overline{U}(\lambda)\subseteq H\).
We begin with a lemma:
**Lemma 4.25**: _Let \(g\in GL(X)\) be a diagonalizable endomorphism. Then there is an \(h\in\overline{U}(\lambda)\) such that \(hgh^{-1}\in P(\lambda)\)._
Let us assume this for the moment. Then we have:
**Proposition 4.26**: _Let \(g\in K\) be a semisimple element, then there is \(u\in\overline{U}(\lambda)\) and a \(u^{\prime}\in U(\lambda)\) such that (i) \((g^{u})^{u^{\prime}}=g^{u^{\prime}u}\) is an alignment between \(f^{u^{\prime}u}\) and \(f^{\prime}_{0}=\widehat{f^{u^{\prime}u}}^{\lambda}\), and (ii) \(\widehat{f^{u^{\prime}u}}^{\lambda}=\widehat{f^{u}}^{\lambda}\). Moreover, (iii) \(f^{\prime}_{0}\in Z_{0}\), is a co-limit of \(f_{0}\) such that there is a common irreducible component \(Z^{i}\) of \(\overline{Z_{0}}\) which contains both \(f_{0}\) and \(f^{\prime}_{0}\)._
**Proof**: Let \(g\in K\) be a semisimple element. By lemma 4.25, there is an \(u\in\overline{U(\lambda)}\subseteq H\) such that \(ugu^{-1}\in P(\lambda)\). Thus, \(f^{u}\) satisfies the hypothesis of Prop. 2.18, and thus, there is a \(u^{\prime}\in U(\lambda)\) such that \(\widehat{f^{u}}\) and \((f^{u})^{u^{\prime}}=f^{u^{\prime}u}\) have an alignment, viz., \(g^{u^{\prime}u}\), without changing the leading term \(\widehat{f^{u}}^{\lambda}=f^{\prime}_{0}\).
Coming to (iii), since \(u\in\overline{U(\lambda)}\) is unipotent, there is an \(X\in{\cal H}_{-1}\), such that \(u=e^{X}\). Consider the 1-parameter _algebraic_ family \(f_{0}(t)=\widehat{e^{tX}f}\subseteq Z_{0}\). This is an algebraic path connecting \(f_{0}\) with \(f^{\prime}_{0}=f_{0}(1)\). This implies that there must be a component \(Z^{i}\) of \(\overline{Z_{0}}\) containing both. This proves the proposition. \(\Box\)
**Remark 4.27**: _We see that (i) \(f_{0}(t)\) are co-limits of \(f_{0}\) for all \(t\), (ii) \(f^{\prime}_{0}=f_{0}(1)\) is aligned with \(f^{g}\), a conjugate of \(f\), and finally (iii) the derivative \(f^{\prime}(0)\) equals \(Xf_{1}\in T_{f_{0}}Z_{0}\) is as identified by Prop. 4.20. The crux, of course is that \(f_{0}(t)\) for \(t>0\) need not lie in the same orbit, so it is not clear that \(f_{0}\in\overline{O(f^{\prime}_{0})}\). This leads us to the following conjecture._
**Conjecture 4.28**: _If \(f_{0}\) is \(L(\lambda)\)-stable and is the only form in \(Sym^{n}(Y)\) with stabilizer \(H_{Y}\), then there is an alignment between \(f_{0}\) and a conjugate \(f^{g}\) of \(f\)._
Let us come to the the proof of Lemma 4.25. Since \(dim(Z)=s\) and \(dim(Y)=r\), \(P(\lambda)\) and \(\overline{U(\lambda)}\) are of the form:
\[P(\lambda)=\left[\begin{array}{cc}A&0\\ C&D\end{array}\right]\quad\overline{U(\lambda)}=\left[\begin{array}{cc}I_{s }&X\\ 0&I_{r}\end{array}\right]\]
The proof then boils down to the linear algebra computation below:
**Lemma 4.29**: _Let \(R\in\mathbb{C}^{(s+r)\times(s+r)}\) be a diagonalizable matrix, then there is a matrix \(S\), of the form shown below, such that \(W=SRS^{-1}\) is block lower triangular, i.e., of the form shown below:_
\[S=\left[\begin{array}{cc}I_{s}&X\\ 0&I_{r}\end{array}\right]\quad SRS^{-1}=W=\left[\begin{array}{cc}W_{11}&0\\ W_{21}&W_{22}\end{array}\right]\]
**Proof**: Let \(v_{1},\ldots,v_{r+s}\) be left eigenvectors of \(R\), with \(v_{i}R=\lambda_{i}v_{i}\) for some \(\lambda_{i}\in\mathbb{C}\). Now, since \(\{v_{1},\ldots,v_{r+s}\}\) form a basis of \(\mathbb{C}^{r+s}\), there are some \(v_{i_{1}}\ldots,v_{i_{s}}\) such that \(\{v_{i_{1}},\ldots,v_{i_{s}},y_{i},\ldots,y_{r}\}\) is also a basis of \(\mathbb{C}^{r+s}\). Let us assume that \(i_{1}=1,\ldots,i_{s}=s\). Let \(B=[v_{1},\ldots,v_{s},y_{1},\ldots,y_{r}]^{T}\). Then \(B\) and \(BRB^{-1}\) are of the form shown below:
\[B=\left[\begin{array}{cc}A&B\\ 0&I_{r}\end{array}\right]\quad BRB^{-1}=\left[\begin{array}{cc}D&0\\ A^{\prime}&B^{\prime}\end{array}\right]\]
where \(D\) is the diagonal matrix \(diag(\lambda_{1},\ldots,\lambda_{r})\). Let \(C\) be as shown below. Then, by suitable operations, shown below we get:
\[C=\left[\begin{array}{cc}A^{-1}&0\\ 0&I_{r}\end{array}\right]\quad(CB)R(CB)^{-1}=\left[\begin{array}{cc}E^{ \prime}&0\\ A^{\prime\prime}&B^{\prime\prime}\end{array}\right]\quad CB=\left[\begin{array} []{cc}I_{s}&A^{-1}B\\ 0&I_{r}\end{array}\right]\]
Thus, the required matrix is \(CB\) above. \(\square\)
## 5 In Conclusion
Geometric Complexity Theory (GCT) as an approach to key lower bounds problems in computational complexity theory, was proposed in [10]. It is the study of specific forms (such as the determinant and the permanent) with distinctive stabilizers, their homogenized versions in different spaces and their orbit closures. It is also a belief that specific structures associated with their stabilizers will eventually lead us to the construction of obstructions which yield lower bounds. Our paper was an attempt to build a bridge between two specific approaches - the representation theoretic approach of [10, 10, 11] and others, and the geometric approach of [11], [12] and others.
We have used the existence of a 1-PS \(\lambda\) which must connect the implementation of the degree homogenized version of the permanent (\(z\)) as a determinant (\(y\)) as the starting point. This study has led us to stabilizer limits, alignment and weight spaces, normal cones, co-limits and other geometric structures which provide interesting insights into the problem. To the best of our knowledge the statement \(\hat{\cal K}\subseteq{\cal H}_{\overline{y_{e}}}\subset{\cal H}\)
is the first explicit connection made between the stabilizer \(K\) of \(y\) and the stabilizer \(H\) of \(z\) in \(y\)'s orbit closure. That allows us to probe the alignment of \(\lambda\) with respect to the two stabilizers. In the case of the determinant versus permanent problem, the presence of alignment leads to combinatorial insights in both, the boundary of the orbit closure of the determinant and the "implementation" map \(\phi\) of the permanent as a determinant. This has been explored in Section 2.
Indeed, the chain of Lie algebras \(\hat{\mathcal{K}}\subseteq\mathcal{H}_{\overline{y_{e}}}\subset\mathcal{H}\) leads us to consider intermediate orbits between \(\overline{O(z)}\) and \(\overline{O(y)}\). The varieties \(Spec(\overline{J}_{z,y_{e}})\) (in the normal cone) and \(Z_{d}\) are steps in that direction. If the intermediate subvariety conjecture is correct, it will allow us to construct a tower \((W_{i})\) of intermediate varieties where each step is "tight", i.e., with equality between \(\hat{\mathcal{K}}_{i}\) and \((\mathcal{H}_{i})_{\overline{y_{e_{i}}}}\). On the other hand, the structure of \(Z_{d}\) appears to be connected with the presence of alignment.
In summary, our approach allows a rich computational framework which connects the GCT approach to classical questions on orbit closures, local group action and stratification.
There are several interesting questions which seem to lie on this path. We briefly list these.
We know that the boundary of the orbit of the determinant is of codimension one. The techniques used here lead us to suspect that the components of this boundary arise from special 1-PS \(\lambda\) which are well-aligned with \(K\), its stabilizer. This is connected to finding matrix families preserved by a large subgroup of \(K\).
There are two stabilizer families which are of interest. The first is \(\mathcal{G}_{y^{\prime}}\) for \(y^{\prime}\in Y_{d}\), the stabilizers of elements within the orbit \(O(y)\) which preserve the degree of the limit. The second, of course, is \(\mathcal{G}_{\widehat{y^{\prime}}}\), the stabilizers of the elements \(z^{\prime}=\widehat{y^{\prime}}\in Z_{d}\). Since \(\widehat{\mathcal{G}_{y^{\prime}}}\subseteq\mathcal{G}_{\widehat{y^{\prime}}}\), this is also related to alignment in the vicinity of \(z\in Z_{d}\). The invariance of \(\hat{\mathcal{K}}\) under the action of \(U(\lambda)\), puts a compact structure on the orbit space of stabilizer limits. The master result on 1-PS is of course that of Kempf [12]. Hence the question of finding "optimal" 1-PS \(\lambda\) which "implement" the limit \(y\xrightarrow{\lambda}z\), if they exist, and their properties, seems important.
Following Kempf, we may define the "null cone" of \(z\) as the space \(\mathcal{N}(z)=\{v\in V|z\in\overline{O(v)}\}\). The stratification of \(\mathcal{N}(z)\) too will yield important information on how stabilizers change. This connects with the construction of a "local model" of [1] which described the \(\mathcal{G}\)-action in a neighbourhood of \(z\).
The last two questions appear to be connected to the varieties \(Spec(\overline{J_{z,y_{e}}})\) and \(Z_{d}\) studied in this paper.
|
2309.08847 | Computational Optimal Transport and Filtering on Riemannian manifolds | In this paper we extend recent developments in computational optimal
transport to the setting of Riemannian manifolds. In particular, we show how to
learn optimal transport maps from samples that relate probability distributions
defined on manifolds. Specializing these maps for sampling conditional
probability distributions provides an ensemble approach for solving nonlinear
filtering problems defined on such geometries. The proposed computational
methodology is illustrated with examples of transport and nonlinear filtering
on Lie groups, including the circle $S^1$, the special Euclidean group $SE(2)$,
and the special orthogonal group $SO(3)$. | Daniel Grange, Mohammad Al-Jarrah, Ricardo Baptista, Amirhossein Taghvaei, Tryphon T. Georgiou, Sean Phillips, Allen Tannenbaum | 2023-09-16T02:59:22Z | http://arxiv.org/abs/2309.08847v2 | # Computational Optimal Transport and Filtering on Riemannian Manifolds
###### Abstract
In this paper we extend recent developments in computational optimal transport to the setting of Riemannian manifolds. In particular, we show how to learn optimal transport maps from samples that relate probability distributions defined on manifolds. Specializing these maps for sampling conditional probability distributions provides an ensemble approach for solving nonlinear filtering problems defined on such geometries. The proposed computational methodology is illustrated with examples of transport and nonlinear filtering on Lie groups, including the circle S1, the special Euclidean group SE(2), and the special orthogonal group SO(3).
Optimal Transportation, Optimal Control, Nonlinear Filtering, Riemannian manifolds.
## 1 Introduction
The theory of optimal transport (OT) has emerged as a powerful mathematical tool in a wide range of engineering and control applications [2]. This is largely due to the fact that it induces a natural and computationally tractable geometry on the space of probability distributions [3, 4]. The metric that the theory provides to quantify distance between distributions, the _Wasserstein metric_, gives rise to natural geodesic flows and transport maps that can be used to interpolate, average, and correspond distributions in a physically meaningful sense. For these reasons, OT has proven enabling for an ever expanding range of applications in machine learning [4, 5, 6], and image processing [7, 8, 9, 10], besides ones in control and estimation [2, 11, 12, 13].
In this rapidly developing landscape of OT techniques and applications, neural networks and stochastic optimization have come to provide a potentially transformative framework for the development of efficient and scalable numerical algorithms [14, 15, 16, 17, 18]. The focus so far on utilizing such techniques however has been limited to applications of OT on Euclidean spaces. Yet, optimal transport can equally well be considered on manifolds, that are especially relevant in control and robotic applications. A manifold structure is naturally imposed by geometric constraints, as in attitude estimation of aircraft [19, 20], localization of mobile robots [21, 22], and visual tracking of humans and objects [23, 24].
Thus, one of the goals of the present paper is to develop a computational framework for OT in the setting of Riemannian manifolds [25, 26] with special attention to matrix Lie-groups, as these encompass the majority of the motivating applications. A second goal of the paper is to use the framework for sampling conditional distributions, in order to perform nonlinear filtering on Riemannian manifolds.
Specifically, we make the following key contributions:
1. We propose a sample-based computational methodology for computing OT maps on Riemannian manifolds. Our proposed methodology extends the min-max formulation of [17] by combining it with McCann's characterization of optimal transport maps [25].
2. We propose a sample-based and likelihood-free method to sample conditional distributions on Riemannian manifolds. In order to do so, we use the recently introduced framework of block-triangular transport maps, that is used in the context of conditional generative models [27, 28] and nonlinear filtering [29, 30].
3. We illustrate our proposed algorithms on several numerical examples on the circle, special Euclidean group \(SE(2)\), and the special orthogonal group \(SO(3)\).
## 2 Problem formulation and background
Let \(\mathcal{M}\) be a smooth connected manifold without boundary that is equipped with the Riemannian metric \(\langle\cdot,\cdot\rangle_{g}\). Let \(d(z,z^{\prime})\) denote the geodesic distance for any \(z,z^{\prime}\in M\). We are interested in solving the following two problems.
**Optimal control:** This is the problem to steer a random process \(X(t)\), taking values in \(\mathcal{M}\), from an initial probability distribution \(P\) to a terminal probability distribution \(Q\). It is formulated as follows:
\[\begin{split}&\min_{u}\ \mathbb{E}\left[\int_{0}^{1}\|u(t)\|_{g}^{2}\,dt \right],\\ &\text{s.t}\quad\dot{X}(t)=u(t),\quad X(0)\sim P,\quad X(1)\sim Q,\end{split} \tag{1}\] |
2301.01270 | Maurer-Cartan characterization, cohomology and deformations of
equivariant Lie superalgebras | In this article, we give Maurer-Cartan characterizations of equivariant Lie
superalgebra structures. We introduce equivariant cohomology and equivariant
formal deformation theory of Lie superalgebras. As an application of
equivariant cohomology we study the equivariant formal deformation theory of
Lie superalgebras. As another application we characterize equivariant central
extensions of Lie superalgebras using second equivariant cohomology. We give
some examples of Lie superalgebras with an action of a group and equivariant
formal deformations of a classical Lie superalgebras. | RB Yadav, Subir Mukhopadhyay | 2022-12-09T11:40:30Z | http://arxiv.org/abs/2301.01270v1 | # Maurer-Cartan characterization, cohomology and deformations of equivariant Lie superalgebras
###### Abstract
In this article, we give Maurer-Cartan characterizations of equivariant Lie superalgebra structures. We introduce equivariant cohomology and equivariant formal deformation theory of Lie superalgebras. As an application of equivariant cohomology we study the equivariant formal deformation theory of Lie superalgebras. As another application we characterize equivariant central extensions of Lie superalgebras using second equivariant cohomology. We give some examples of Lie superalgebras with an action of a group and equivariant formal deformations of a classical Lie superalgebras.
keywords: Lie superalgebra, cohomology, extension, formal deformations, Maurer-Cartan equation Msc: [2020] 17A70, 17B99, 16S80, 13D10, 13D03, 16E40 +
Footnote †: journal:...
## 1 Introduction
Graded Lie algebras have been a topic of interest in physics in the context of "supersymmetries" relating particles of differing statistics. In mathematics, graded Lie algebras have been studied in the context of deformation theory, [1].
Lie superalgebras were studied and a classification was given by Kac [2]. Leits [3] introduced a cohomology for Lie superalgebras. Lie superalgebras are also called \(\mathbb{Z}_{2}\)-graded Lie algebras by physicists.
Algebraic deformation theory was introduced by Gerstenhaber for rings and algebras [4],[5],[6], [7], [8]. Deformation theory of Lie superalgebras was introduced and studied by Binegar [9]. Maurer-Cartan characterization was given for Lie algebra structures by Nijenhuis and Richardson in [10] and for associative algebra structures by Gerstenhaber in [11]. Such characterization for Lie superalgebra structures was given in [12]. Deformation theory of Lie superalgebras was studied in [9].
Aim of the present paper is to give Maurer-Cartan characterization, introduce equivariant cohomology, do some equivariant cohomology computations in lower dimensions, introduce equivariant formal deformation theory of Lie superalgebras and give some examples. Organization of the paper is as follows. In Section 2, we recall definition of Lie superalgebra and give some examples. In Section 4, we give Maurer-Cartan characterization of equivariant Lie superalgebras. In this section we construct a \(\mathbb{Z}\times\mathbb{Z}_{2}\)-graded Lie algebra from a \(\mathbb{Z}_{2}\)-graded \(G\)-vector space. We show that class of Maurer-Cartan elements of this \(\mathbb{Z}\times\mathbb{Z}_{2}\)-graded Lie algebra is the class of \(G\)-equivariant Lie superalgebra structures on \(V.\) In Section 5, we introduce equivariant chain complex and equivariant cohomology of Lie superalgebras. In Section 6, we compute cohomology of Lie superalgebras in degree \(0\) and dimension \(0\), \(1\) and \(2\). In Section 7, we introduce equivariant deformation theory of Lie superalgebras. In this section we see that infinitesimals of equivariant deformations are equivariant cocycles. Also, in this section we give an example of an equivariant formal deformation of a Lie superalgebras. In Section 8, we study equivalence of two equivariant formal deformations and prove that infinitesimals of any two equivalent equivariant deformations are cohomologous.
## 2 Lie Superalgebras
In this section, we recall definitions of Lie superalgebras and modules over a Lie superalgebras. We recall some examples of Lie superalgebras. Throughout the paper we denote a fixed field by \(K\). Also, we denote the ring of formal power series with coefficients in \(K\) by \(K[[t]]\). In any \(\mathbb{Z}_{2}\)-graded vector space \(V\) we use a notation in which we replace degree \(deg(a)\) of an element \(a\in V\) by '\(a^{\prime}\) whenever \(deg(a)\) appears
in an exponent; thus, for example \((-1)^{ab}=(-1)^{deg(a)deg(b)}\).
**Definition 2.1**.: _Let \(V=V_{0}\oplus V_{1}\) and \(W=W_{0}\oplus W_{1}\) be \(\mathbb{Z}_{2}\)-graded vector spaces over a field \(K\). A linear map \(f:V\to W\) is said to be homogeneous of degree \(\alpha\) if \(f(V_{\beta})\subset W_{\alpha+\beta}\), for all \(\beta\in\mathbb{Z}_{2}=\{0,1\}\). We write \((-1)^{\deg(f)}=(-1)^{f}\). Elements of \(V_{\beta}\) are called homogeneous of degree \(\beta\)._
**Definition 2.2**.: _A superalgebra is a \(\mathbb{Z}_{2}\)-graded vector space \(A=A_{0}\oplus A_{1}\) together with a bilinear map \(m:A\times A\to A\) such that \(m(a,b)\in A_{\alpha+\beta},\) for all \(a\in A_{\alpha}\), \(b\in A_{\beta}\)._
**Definition 2.3**.: _A Lie superalgebra is a superalgebra \(L=L_{0}\oplus L_{1}\) over a field \(K\) equipped with an operation \([-,-]:L\times L\to L\) satisfying the following conditions:_
1. \([a,b]=-(-1)^{\alpha\beta}[b,a]\)_,_
2. \([[a,[b,c]]=[[a,b],c]+(-1)^{\alpha\beta}[b,[a,c]]\)_, (Jacobi identity)_
_for all \(a\in L_{\alpha}\)and \(b\in L_{\beta}\). Let \(L_{1}\) and \(L_{2}\) be two Lie superalgebras. A homomorphism \(f:L_{1}\to L_{2}\) is a \(K\)-linear map such that \(f([a,b])=[f(a),f(b)].\) Given a Lie superalgebra \(L\)\([L,L]\) is the vector subspace of \(L\) spanned by the set \(\{[x,y]:x,y\in L\}\). A Lie superalgebra \(L\) is called abelian if \([L,L]=0\)._
**Example 2.1**.: _Let \(V=V_{\bar{0}}\oplus V_{\bar{1}}\) be a \(\mathbb{Z}_{2}\)-graded vector space, \(dimV_{\bar{0}}=m\), \(dimV_{\bar{1}}=n\). Consider the associative algebra \(EndV\) of all endomorphisms of \(V\). Define_
\[End_{i}\ V=\{a\in End\ V\mid aV_{s}\subseteq V_{i+s}\}\,\ i,s\in\mathbb{Z}_{2} \tag{1}\]
_One can easily verify that \(End\ V=End_{\bar{0}}\ V\oplus End_{\bar{1}}\ V\). The bracket \([a,b]=ab-(-1)^{\bar{a}\bar{b}}ba\) makes \(EndV\) into a Lie superalgebra, denoted by \(\ell(V)\) or \(\ell(m,n)\). In some (homogeneous) basis of \(V\), \(\ell(m,n)\) consists of block matrices of the form \(\left(\begin{smallmatrix}\alpha&\beta\\ \gamma&\delta\end{smallmatrix}\right)\), where \(\alpha,\beta,\gamma,\delta\) are matrices of order \(m\times m\), \(m\times n\), \(n\times m\) and \(n\times n,\) respectively._
**Example 2.2**.: _Define a linear function \(str:\ell(V)\to k\), by \(str([a,b])=0,\ a,b\in\ell(V)\), and \(str\ id_{V}=m-n\). \(str(a)\) is called a supertrace of \(a\in\ell(V)\). Consider the subspace_
\[s\ell(m,n)=\{a\in\ell(m,n)\mid str\ a=0\}.\]
_Clearly, \(s\ell(m,n)\) is an ideal of \(\ell(m,n)\) of codimension 1. Therefore \(s\ell(m,n)\) is a subalgebra of \(\ell(m,n).\)_
_For any \(\big{(}\begin{smallmatrix}\alpha&\beta\\ \gamma&\delta\end{smallmatrix}\big{)}\) in \(\ell(m,n)\)\(str\big{(}\begin{smallmatrix}\alpha&\beta\\ \gamma&\delta\end{smallmatrix}\big{)}=tr\;\;\alpha-tr\;\;\delta.\;\;s\ell(n,n)\) contains the one-dimensional ideal \(\{\lambda I_{2n}:\lambda\in K\}.\)_
**Definition 2.4**.: _[_13_]_ _Let \(L=L_{0}\oplus L_{1}\) be a Lie superalgebra. A \(\mathbb{Z}_{2}\)-graded vector space \(M=M_{0}\oplus M_{1}\) over the field \(K\) is called a module over \(L\) if there exists a bilinear map \([-,-]:L\times M\to M\) such that following condition is satisfied_
\[[a,[b,m]]=[[a,b],m]+(-1)^{ab}[b,[a,m]].\]
_for all \(a\in L_{\alpha}\), \(b\in L_{\beta}\), \(\alpha,\beta\in\{0,1\}.\)_
Clearly, every Lie superalgebra is a module over itself.
## 3 \(\mathbb{Z}_{2}\)-graded Groups and their Actions on a Lie Superalgebra
**Definition 3.1**.: _We define a \(\mathbb{Z}_{2}\)-graded group as a group \(G\) having a subgroup \(G_{\bar{0}}\) and a subset \(G_{\bar{1}}\) such that for all \(x\in G_{i},y\in G_{j}\), \(xy\in G_{i+j},\) where \(i,j,i+j\in\mathbb{Z}_{2}.\)_
**Example 3.1**.: _Consider \(\mathbb{Z}_{6}=\{\bar{0},\bar{1},\bar{2},\bar{3},\bar{4},\bar{5}\}\). Take \(G=\mathbb{Z}_{6}\), \(G_{\bar{0}}=\{\bar{0},\bar{2},\bar{4}\}\), \(G_{\bar{1}}=\{\bar{1},\bar{3},\bar{5}\}.\) Clearly, with this choice of \(G_{\bar{0}}\) and \(G_{\bar{1}}\), \(G\) is a \(\mathbb{Z}_{2}\)-graded group._
**Example 3.2**.: _Every group \(G\) can be seen as \(\mathbb{Z}_{2}\)-graded group with \(G_{\bar{0}}=G\) and \(G_{\bar{1}}=\emptyset.\)_
**Definition 3.2**.: _A \(\mathbb{Z}_{2}\)-graded group \(G\) is said to act on a Lie superalgebra \(L=L_{0}\oplus L_{1}\) if there exits a map_
\[\psi:G\times L\to L,\;\;(g,x)\mapsto\psi(g,x)=gx\]
_satisfying following conditions_
1. \(ex=x,\) _for all_ \(x\in L.\) _Here_ \(e\in G\) _is the identity element of_ \(G.\)__
2. \(\forall g\in G_{i}\)_,_ \(i\in\mathbb{Z}_{2}\)__\(\psi_{g}:L\to L\) _given by_ \(\psi_{g}(x)=\psi(g,x)=gx\) _is a homogeneous linear map of degree_ \(i.\)__
3. \(\forall g_{1},g_{2}\in G\)_,_ \(\psi(g_{1}g_{2},x)=\psi(g_{1},\psi(g_{2},x))\)_, that is_ \((g_{1}g_{2})x=g_{1}(g_{2}x).\)
4. _For_ \(x,y\in L\)_,_ \(g\in G\)_,_ \([gx,gy]=g[x,y]\)_._
_We denote an action as above by \((G,L)\)._
**Proposition 3.1**.: _Let \(G\) be a finite \(\mathbb{Z}_{2}\)-graded group and \(L\) be a Lie superalgebra. Then \(G\) acts on \(L\) if and only if there exists a group homomorphism of degree \(0\)_
\[\phi:G\to Iso(L,L),\ \ g\mapsto\phi(g)=\psi_{g}\]
_from the group \(G\) to the group of homogeneous Lie superalgebra isomorphisms from \(L\) to \(L\)._
Proof.: For an action \((G,L)\), we define a map \(\phi:G\to Iso(L,L)\) by \(\phi(g)=\psi_{g}\). One can verify easily that \(\phi\) is a group homomorphism. Now, let \(\phi:G\to Iso(L,L)\) be a group homomorphism. Define a map \(G\times L\to L\) by \((g,a)\mapsto\phi(g)(a)\). It can be easily seen that this is an action of \(G\) on \(L\).
**Note:** In this article we consider action of groups \(G\), that is those \(\mathbb{Z}_{2}\)-graded groups \(G\) for which groups \(G_{0}=G\) and \(G_{1}=\emptyset\). We call a Lie Superalgebra \(L=L_{0}\oplus L_{1}\) with an action of a group \(G\)\(G\)-Lie Superalgebra.
**Example 3.3**.: **Super-Poincare algebra**_The (\(\mathcal{N}=1\)) Super-Poincare algebra \(L=L_{0}\oplus L_{1}\) is given by1_
Footnote 1: Here we have used the following notation. \(\mu,\nu,\rho,...=0,1,2,3\).; \(\sigma^{i}\), \(i=1,2,3\) represent Pauli spin matrices and one introduces \(\sigma^{\mu}=(\mathbf{1},\sigma^{1})\) and \(\bar{\sigma}^{\mu}=(\mathbf{1},-\sigma^{1})\),
\((\sigma^{\mu\nu})_{\alpha}^{\ \beta}=-\frac{1}{4}(\sigma^{\mu}\bar{\sigma}^{ \nu}-\sigma^{\nu}\bar{\sigma}^{\mu})_{\ \alpha}^{\ \beta},(\bar{\sigma}^{\mu\nu})_{\ \alpha}^{\ \beta}=-\frac{1}{4}(\bar{ \sigma}^{\mu}\sigma^{\nu}-\bar{\sigma}^{\nu}\sigma^{\mu})_{\ \dot{\beta}}^{\dot{\alpha}}\). Spinor indices are denoted by \(\alpha,\beta,\dot{\alpha},\dot{\beta}\), they take values from the set \(\{1,2\}\) and are being raised and lowered by \(\epsilon^{\alpha\beta}\) (\(\epsilon^{\dot{\alpha}\dot{\beta}}\)), and \(\epsilon_{\alpha\beta}\) (\(\epsilon_{\dot{\alpha}\dot{\beta}}\)). They are antisymmetric and we have chosen \(\epsilon^{12}=\epsilon^{\dot{1}2}=+1\)
\[i[J^{\mu\nu},J^{\rho\sigma}] =\eta^{\nu\rho}J^{\mu\sigma}-\eta^{\mu\rho}J^{\nu\sigma}-\eta^{ \sigma\mu}J^{\rho\nu}+\eta^{\sigma\nu}J^{\rho\mu},\] \[i[P^{\mu},J^{\rho\sigma}] =\eta^{\mu\rho}P^{\sigma}-\eta^{\mu\sigma}P^{\rho},\quad[P^{\mu}, P^{\rho}]=0,\] \[[Q_{\alpha},J^{\mu\nu}] =(\sigma^{\mu\nu})_{\ \alpha}^{\ \beta}\,Q_{\beta},\quad[\bar{Q}^{\dot{ \alpha}},J^{\mu\nu}]=(\bar{\sigma}^{\mu\nu})^{\dot{\alpha}}_{\ \dot{\beta}}\,\bar{Q}^{\dot{\beta}}\] \[[Q_{\alpha},P^{\mu}] =0,\quad[\bar{Q}^{\dot{\alpha}},P^{\mu}]=0\] \[\{Q_{\alpha},Q_{\beta}\} =0,\quad\{\bar{Q}^{\dot{\alpha}},\bar{Q}^{\dot{\beta}}\}=0,\] \[\{Q_{\alpha},\bar{Q}^{\dot{\beta}}\} =2(\sigma^{\mu})_{\alpha\dot{\beta}}P_{\mu}.\]
_Here \(L_{0}\) is generated by the set \(\{J^{\mu\nu}:\mu,\nu=0,1,2,3\}\cup\{P^{\mu}:\mu=0,1,2,3\}\) over \(\mathbb{C}\). \(L_{1}\) is generated by the set \(\{Q_{\alpha}:\alpha=1,2\}\cup\{\bar{Q}^{\dot{\alpha}}:\dot{\alpha}=1,2\}\) over \(\mathbb{C}\). Consider the group \(\mathbb{Z}_{m}=\{g^{n}:\ n=0,1,\ldots,m-1\}\), where \(g=e^{\frac{2-i}{m}}\). There exists an action of \(\mathbb{Z}_{m}\) on the Super-Poincare algebra \(L=L_{0}\oplus L_{1}\) given by_
\[(g^{n},J^{\mu\nu})\mapsto J^{\mu\nu},\quad(g^{n},P^{\mu})\mapsto P^{\mu},\quad (g^{n},Q_{\alpha})\mapsto g^{n}Q_{\alpha},\quad(g^{n},\bar{Q}^{\dot{\alpha}}) \mapsto g^{m-n}\bar{Q}^{\dot{\alpha}},\]
_for every \(n=0,1,\ldots,m-1\)._
**Example 3.4**.: _Let \(e_{ij}\) denote a \(2\times 2\) matrix with \((i,j)\)th entry \(1\) and all other entries \(0\). Consider \(L_{0}=span\{e_{11},e_{22}\}\), \(L_{1}=span\{e_{12},e_{21}\}\). Then \(L=L_{0}\oplus L_{1}\) is a Lie superalgebra with the bracket \([\,\ ]\) defined by_
\[[a,b]=ab-(-1)^{\bar{a}\bar{b}}ba.\]
_Define a function \(\psi:\mathbb{Z}_{2}\times L\to L\) by \(\psi(0,x)=x,\forall x\in L\), \(\psi(1,e_{11})=e_{22}\), \(\psi(1,e_{22})=e_{11}\), \(\psi(1,e_{12})=e_{21}\), \(\psi(1,e_{21})=e_{12}\). Obviously Conditions \(1-3\) hold for \((\mathbb{Z}_{2},L)\) to be an action. To verify condition \(4\) it is enough to verify for basis elements of \(L_{0}\) and \(L_{1}\). We have_
1. \(1[e_{ii},e_{ii}]=0=[1e_{ii},1e_{ii}]\), \(\forall\ i=1,2\).
2. \(1[e_{ii},e_{jj}]=0=[e_{jj},e_{ii}]=[1e_{ii},1e_{jj}]\), \(\forall\ i,j=1,2,\ i\neq j\).
3. \(1[e_{ij},e_{ji}]=1(e_{ii}-(-1)^{1}e_{jj})=e_{jj}+e_{ii}=[1e_{ij},1e_{ji}]\), \(\forall\ i,j=1,2,\ i\neq j\).
4. \(1[e_{ij},e_{ij}]=0=[e_{ji},1e_{ji}]=[1e_{ij},1e_{ij}]\), \(\forall\ i,j=1,2,\ i\neq j\).
5. \(1[e_{ii},e_{ij}]=1(e_{ij})=e_{ji}=[e_{jj},e_{ji}]=[1e_{ii},1e_{ij}]\), \(\forall\ i,j=1,2,\ i\neq j\).
6. \(1[e_{jj},e_{ij}]=1(-e_{ij})=-e_{ji}=[e_{ii},e_{ji}]=[1e_{jj},1e_{ij}]\), \(\forall\ i,j=1,2,\ i\neq j\).
_From above it is clear that \((\mathbb{Z}_{2},L)\) is an action._
**Definition 3.3**.: _Let \(L=L_{0}\oplus L_{1}\) be a Lie superalgebra. Let \(G\) be a finite group which acts on \(L\). A \(\mathbb{Z}_{2}\)-graded vector space \(M=M_{0}\oplus M_{1}\) with an action of \(G\) is called a \(G\)-module over \(L\) if there exists a \(G\)-equivariant bilinear map \([-,-]:L\times M\to M\) such that following condition is satisfied_
\[[a,[b,m]]=[[a,b],m]+(-1)^{ab}[b,[a,m]],\]
_for all \(a\in L_{\alpha}\), \(b\in L_{\beta}\), \(\alpha,\beta,\in\{0,1\}\)._
**Example 3.5**.: _Every \(G\)-Lie superalgebra is a \(G\)-module over itself._
**Example 3.6**.: _Let \(L=L_{0}\oplus L_{1}\) be the (\(\mathcal{N}=1\)) Super-Poincare algebra, Example 3.3. Let \(M_{0}\) be the span of \(\{P^{\mu}:\mu=0,1,2,3\}\) and \(M_{1}\) be the span of the set \(\{Q_{\alpha}:\alpha=1,2\}\cup\{\bar{Q}^{\dot{\alpha}}:\dot{\alpha}=1,2\}\). Then clearly \(M=M_{0}\oplus M_{1}\) is a \(\mathbb{Z}_{m}\)-module over \(L=L_{0}\oplus L_{1}\)._
## 4 Maurer-Cartan Characterization of Equivariant Lie Superalgebra Structures
**Definition 4.1**.: _A finite group \(G\) is said to act on a \(\mathbb{Z}_{2}\)-graded vector space \(V=V_{0}\oplus V_{1}\) if there exits a map_
\[\psi:G\times V\to V,\ \ (g,x)\mapsto\psi(g,x)=gx\]
_satisfying following conditions_
1. \(ex=x,\) _for all_ \(x\in V\)_. Here_ \(e\) _is the identity element of_ \(G\)_._
2. \(\forall g\in G\)_,_ \(\psi_{g}:V\to V\) _given by_ \(\psi_{g}(x)=\psi(g,x)=gx\) _is a homogeneous linear map of degree_ \(0\)_._
3. \(\forall g_{1},g_{2}\in G\)_,_ \(\psi(g_{1}g_{2},x)=\psi(g_{1},\psi(g_{2},x))\)_, that is_ \((g_{1}g_{2})x=g_{1}(g_{2}x)\)_._
_A \(\mathbb{Z}_{2}\)-graded vector space \(V=V_{0}\oplus V_{1}\) with an action of a group \(G\) is called a \(G\)-vector space._
Let \(V=V_{0}\oplus V_{1}\) and \(W=W_{0}\oplus W_{1}\) be vector spaces over a field \(\mathbb{F}\). An \(n\)-linear map \(f:V\underset{n\ times}{\underbrace{\times\cdots\times}}V\to W\) is said to be homogeneous of degree \(\alpha\) if \(f(x_{1},\cdots,x_{n})\) is homogeneous in \(W\) and \(\deg(f(x_{1},\cdots,x_{n}))-\sum_{i=1}^{n}\deg(x_{i})=\alpha\), for homogeneous \(x_{i}\in V\), \(1\leq i\leq n\). We denote the degree of a homogeneous \(f\) by \(\deg(f)\). We write \((-1)^{\deg(f)}=(-1)^{f}\).
Consider the permutation group \(S_{n}\). For any \(X=(X_{1},\ldots,X_{n})\) with \(X_{i}\in V_{x_{i}}\) and \(\sigma\in S_{n}\), define
\[K(\sigma,X)=card\{(i,j):i<j,\ X_{\sigma(i)}\in V_{1},X_{\sigma(j)}\in V_{1},\ \sigma(j)<\sigma(i)\},\]
\[\epsilon(\sigma,X)=\epsilon(\sigma)(-1)^{K(\sigma,X)},\]
where \(cardA\) denotes cardinality of a set \(A\), \(\epsilon(\sigma)\) is the signature of \(\sigma\). Also, define \(\sigma.X=(X_{\sigma^{-1}(1)},\ldots,X_{\sigma^{-1}(n)})\). We have following Lemma [12]
**Lemma 4.1**.:
1. \(K(\sigma\sigma^{\prime},X)=K(\sigma,X)+K(\sigma^{\prime},\sigma^{-1}X)\)__\((mod2).\)__
2. \(\epsilon(\sigma\sigma^{\prime},X)=\epsilon(\sigma,X)\epsilon(\sigma^{\prime}, \sigma^{-1}X)\)_._
For each \(n\in\mathbb{N},\) define \(\mathcal{F}_{n,\alpha}(V,W)\) as the vector space of all homogeneous \(n\)-linear mappings \(f:V\underbrace{\times\cdots\times V}_{n\ times}\to W\) of degree \(\alpha.\) Define \(\mathcal{F}_{n}(V,W)=\mathcal{F}_{n,0}(V,W)\oplus\mathcal{F}_{n,1}(V,W)\), \(\mathcal{F}_{0}(V,W)=W\) and \(\mathcal{F}_{-n}(V,W)=0\), \(\forall n\in\mathbb{N}.\) Take \(\mathcal{F}(V,W)=\bigoplus_{n\in\mathbb{Z}}\mathcal{F}_{n}(V,W)\).
For \(F\in\mathcal{F}_{n}(V,W)\), \(X\in V^{n},\)\(\sigma\in S_{n},\) define
\[(\sigma.F)(X)=\epsilon(\sigma,X)F(\sigma^{-1}X).\]
By using Lemma 4.1, one concludes that this defines an action of \(S_{n}\) on the \(\mathbb{Z}_{2}\)-graded vector space \(\mathcal{F}_{n}(V,W)\). Define \(\mathcal{E}_{n}\) for \(n\in\mathbb{Z}\) as follows:
Set \(\mathcal{E}_{n}=\{F\in\mathcal{F}_{n+1}(V,V):\sigma.F=F,\ \forall\ \sigma\in S_{n+1}\}\), for \(n\geq 0\) and
\[\mathcal{E}_{n}=\begin{cases}V&\text{if }n=-1\\ 0&\text{if }n<-1\end{cases}.\]
Write \(\mathcal{E}=\bigoplus_{\in\mathbb{Z}}\mathcal{E}_{n}.\) Define a product \(\circ\) on \(\mathcal{E}\) as follows: For \(F\in\mathcal{E}_{n,f}\), \(F^{\prime}\in\mathcal{E}_{n^{\prime},f^{\prime}}\) set
\[F\circ F^{\prime}=\sum_{\sigma\in S_{(n,n^{\prime}+1)}}\sigma.(F*F^{\prime}),\]
where
\[F*F^{\prime}(X_{1},\ldots,X_{n+n^{\prime}+1})=(-1)^{f^{\prime}(x_{1}+\cdots+x _{n})}F(X_{1},\ldots,X_{n},F^{\prime}(X_{n+1},\ldots,X_{n+n^{\prime}+1})),\]
for \(X_{i}\in V_{x_{i}}\), and \(S_{(n,n^{\prime}+1)}\) consists of permutations \(\sigma\in S_{n+n^{\prime}+1}\) such that \(\sigma(1)<\cdots<\sigma(n)\), \(\sigma(n+1)<\cdots<\sigma(n+n^{\prime}+1).\) Clearly, \(F\circ F^{\prime}\in\mathcal{E}_{(n+n^{\prime},f+f^{\prime})}\). We have following Lemma [12].
**Lemma 4.2**.: _For \(F\in\mathcal{E}_{n,f}\), \(F^{\prime}\in\mathcal{E}_{n^{\prime},f^{\prime}}\), \(F^{\prime\prime}\in\mathcal{E}_{n^{\prime\prime},f^{\prime\prime}}\)_
\[(F\circ F^{\prime})\circ F^{\prime\prime}-F\circ(F^{\prime}\circ F^{\prime \prime})=(-1)^{n^{\prime}n^{\prime\prime}+f^{\prime}f^{\prime\prime}}\{(F \circ F^{\prime\prime})\circ F^{\prime}-F\circ(F^{\prime\prime}\circ F^{ \prime})\}.\]
Using Lemma 4.2, we have following theorem [14], [12]
**Theorem 4.1**.: \(\mathcal{E}\) _is a \(\mathbb{Z}\times\mathbb{Z}_{2}\)-graded Lie algebra with the bracket \([\,\ ]\) defined by_
\[[F,F^{\prime}]=F\circ F^{\prime}-(-1)^{nn^{\prime}+ff^{\prime}}F^{\prime}\circ F,\]
_for \(F\in\mathcal{E}_{n,f}\), \(F^{\prime}\in\mathcal{E}_{n^{\prime},f^{\prime}}\)_
Let \(G\) be a finite group acting on the vector spaces \(V=V_{0}\oplus V_{1}\) and \(W=W_{0}\oplus W_{1}\). Denote by \(\mathcal{F}_{n}^{G}(V,W)\) the vector space of \(G\)-equivariant elements of \(\mathcal{F}_{n}(V,W)\), that is \(F(gX_{1},\ldots,gX_{n})=gF(X_{1},\ldots,X_{n})\), for each \(F\in\mathcal{F}_{n}^{G}(V,W)\), \((X_{1},\ldots,X_{n})\in V^{n}\). Write \(\mathcal{F}^{G}(V,W)=\bigoplus_{n\in\mathbb{Z}}\mathcal{F}_{n}^{G}(V,W)\). For \(\sigma\in S_{n}\), \(g\in G\), \((X_{1},\ldots,X_{n})\in V^{n}\), we have
\[\sigma.(gX_{1},\ldots,gX_{n}) = (gX_{\sigma^{-1}(1)},\ldots,gX_{\sigma^{-1}(n)}) \tag{1}\] \[= g(X_{\sigma^{-1}(1)},\ldots,X_{\sigma^{-1}(n)})\] \[= g(\sigma.(X_{1},\ldots,X_{n})).\]
Let \(F\in\mathcal{E}_{n,f}^{G}\), \(F^{\prime}\in\mathcal{E}_{n^{\prime},f^{\prime}}^{G}\). Clearly, \(F\ast F^{\prime}\in\mathcal{E}_{n+n^{\prime},f+f^{\prime}}^{G}\). Using Equation 1, we conclude that \(F\circ F^{\prime}\in\mathcal{E}_{n+n^{\prime},f+f^{\prime}}^{G}\). This implies that \([\,\ ]\) defines a product in \(\mathcal{E}^{G}\). Hence using Theorem 4.1, we have following theorem.
**Theorem 4.2**.: \(\mathcal{E}^{G}\) _is a \(\mathbb{Z}\times\mathbb{Z}_{2}\)-graded Lie algebra with with the bracket \([\,\ ]\) defined by_
\[[F,F^{\prime}]=F\circ F^{\prime}-(-1)^{nn^{\prime}+ff^{\prime}}F^{\prime}\circ F,\]
_for \(F\in\mathcal{E}_{n,f}\), \(F^{\prime}\in\mathcal{E}_{n^{\prime},f^{\prime}}\)_
Using [12], Proposition \((3.1)\), we get following theorem.
**Theorem 4.3**.: _Given \(F_{0}\in\mathcal{E}_{(1,0)}^{G}\), \(F_{0}\) defines on a \(\mathbb{Z}_{2}\)-graded \(G\)-vector space \(V\) a \(G\)-Lie superalgebra structure if and only if \([F_{0},F_{0}]=0\)._
**Remark 4.1**.: _An element \(F_{0}\in\mathcal{E}_{(1,0)}^{G}\) which satisfies the equation_
\[[F,F]=0 \tag{2}\]
_is called a Maurer-Cartan element and the Equation 2 is called Maurer-Cartan equation. Thus the class of Maurer-Cartan elements is the class of \(G\)-Lie superalgebra structures on a \(\mathbb{Z}_{2}\)-graded \(G\)-vector space \(V\)._
## 5 Equivariant Cohomology of Lie Superalgebras
Let \(L=L_{0}\oplus L_{1}\) be a Lie superalgebra and \(M=M_{0}\oplus M_{1}\) be a module over \(L\). For each \(n\geq 0\), a \(K\)-vector space \(C^{n}(L;M)\) is defined as follows: \(C^{0}(L;M)=M\) and for \(n\geq 1\), \(C^{n}(L;M)\) consists of those \(n\)-linear maps \(f\) from \(L^{n}\) to \(M\) which are homogeneous and
\[f(x_{1},\ldots,x_{i},x_{i+1},\ldots,x_{n})=-(-1)^{x_{i}x_{i+1}}f(x_{1},\ldots, x_{i+1},x_{i}\ldots,x_{n}).\]
Clearly, \(C^{n}(L;M)=C^{n}_{0}(L;M)\oplus C^{n}_{1}(L;M)\), where \(C^{n}_{0}(L;M)\) and \(C^{n}_{1}(L;M)\) are vector subspaces of \(C^{n}(L;M)\) containing elements of degree \(0\) and \(1\), respectively. A linear map \(\delta^{n}:C^{n}(L;M)\to C^{n+1}(L;M)\) is defined by ([9], [3])
\[\delta^{n}f(x_{1},\cdots,x_{n+1}) \tag{3}\] \[= \sum_{i<j}(-1)^{i+j+(x_{i}+x_{j})(x_{1}+\cdots+x_{i-1})+x_{j}(x_ {i+1}+\cdots+x_{j-1})}\] (4) \[f([x_{i},x_{j}],x_{1},\ldots,\hat{x_{i}},\ldots,\hat{x_{j}}, \ldots,x_{n+1})\] \[+\sum_{i=1}^{n+1}(-1)^{i-1+x_{i}(f+x_{1}+\cdots+x_{i-1})}[x_{i},f (x_{1},\ldots,\hat{x_{i}},\ldots,x_{n+1})],\]
for all \(f\in C^{n}(L;M)\), \(n\geq 1\), and \(\delta^{0}f(x_{1})=(-1)^{x_{1}f}[x_{1},f]\), for all \(f\in C^{0}(L;M)=M\). Clearly, for each \(f\in C^{n}_{G}(L;M)\), \(n\geq 0\), \(\deg(\delta f)=\deg(f)\). From [9], [3], we have following theorem:
**Theorem 5.1**.: \(\delta^{n+1}\circ\delta^{n}=0\)_, that is, \((C^{*}(L;M),\delta)\), where \(C^{*}(L;M)=\oplus_{n}C^{n}(L;M)\), \(\delta=\oplus_{n}\delta^{n}\), is a cochain complex._
Let \(G\) be a finite group which acts on \(L\). Let \(M\) be a \(G\)-module over \(L\). For each \(n\geq 0\), we define a \(K\)-vector space \(C^{n}_{G}(L;M)\) as follows: \(C^{0}_{G}(L;M)=M\) and for \(n\geq 1\), \(C^{n}_{G}(L;M)\) consists of those \(f\in C^{n}(L,M)\) which are \(G\)-equivariant, that is, \(f(ga_{1},\ldots,ga_{n})=gf(a_{1},\ldots,a_{n})\), for all \((a_{1},\ldots,a_{n})\in L^{n}\), \(g\in G\). Clearly, \(C^{n}_{G}(L;M)=(C^{n}_{G})_{0}(L;M)\oplus(C^{n}_{G})_{1}(L;M)\), where \((C^{n}_{G})_{0}(L;M)\) and \((C^{n}_{G})_{1}(L;M)\) are vector subspaces of \(C^{n}_{G}(L;M)\) containing elements of degree \(0\) and \(1\), respectively. We define a \(K\)-linear map \(\delta^{n}_{G}:C^{n}_{G}(L;M)\to C^{n+1}_{G}(L;M)\) by
\[\delta^{n}_{G}f(x_{1},\ldots,x_{n+1})=\delta^{n}f(x_{1},\ldots,x_{n+1}).\]
Clearly, \(\delta^{n}_{G}f(gx_{1},\ldots,gx_{n+1})=g\delta^{n}f(x_{1},\ldots,x_{n+1})\) for each \(f\in C^{n}_{G}(L;M)\), \(g\in G\). Thus \(\delta^{n}_{G}\) is well defined. Write \(C^{*}_{G}(L;M)=\oplus_{n}C^{n}_{G}(L;M)\), \(\delta_{G}=\oplus_{n}\delta^{n}_{G}\). Using Theorem 5.1 we have following theorem:
Theorem 5.2: \((C^{*}_{G}(L;M),\delta_{G})\) _is a cochain complex._
We denote \(\ker(\delta^{n}_{G})\) by \(Z^{n}_{G}(L;M)\) and image of \((\delta^{n-1}_{G})\) by \(B^{n}_{G}(L;M)\). We call the \(n\)-th cohomology \(Z^{n}_{G}(L;M)/B^{n}_{G}(L;M)\) of the cochain complex \(\{C^{n}_{G}(L;M),\delta^{n}_{G}\}\) as the \(n\)-th equivariant cohomology of \(L\) with coefficients in \(M\) and denote it by \(H^{n}_{G}(L;M)\). Since \(L\) is a module over itself. So we can consider cohomology groups \(H^{n}_{G}(L;L)\). We call \(H^{n}_{G}(L;L)\) as the \(n\)-th equivariant cohomology group of \(L\). We have
\[Z^{n}_{G}(L;M)=(Z^{n}_{G})_{0}(L;M)\oplus(Z^{n}_{G})_{1}(L;M),\ B^{n}_{G}(L;M)= (B^{n}_{G})_{0}(L;M)\oplus(B^{n}_{G})_{1}(L;M),\]
where \((Z^{n}_{G})_{i}(L;M)\)and \((B^{n}_{G})_{i}(L;M)\) are submodules of \((C^{n}_{G})_{i}(L;M)\), \(i=0,1\). Since boundary map \(\delta^{n}_{G}:C^{n}_{G}(L;M)\to C^{n+1}_{G}(L;M)\) is homogeneous of degree \(0\), we conclude that \(H^{n}_{G}(L;M)\) is \(\mathbb{Z}_{2}\)-graded and
\[H^{n}_{G}(L;M)=(H^{n}_{G})_{0}(L;M)\oplus(H^{n}_{G})_{1}(L;M),\]
where \((H^{n}_{G})_{i}(L;M)=(Z^{n}_{G})_{i}(L;M)/(B^{n}_{G})_{i}(L;M)\), \(i=0,1\).
## 6 Equivariant Cohomology of Lie Superalgebras in Low Degrees
Let \(G\) be a finite group and \(L=L_{0}\oplus L_{1}\) be a Lie superalgebra with an action of \(G\). Let \(M=M_{0}\oplus M_{1}\) be a \(G\)-module over \(L\). For \(m\in M_{0}=(C^{0}_{G})_{0}(L;M)\), \(f\in(C^{1}_{G})_{0}(L;M)\) and \(g\in(C^{2}_{G})_{0}(L;M)\)
\[\delta^{0}_{G}m(x)=[x,m], \tag{5}\]
\[\delta^{1}f(x_{1},x_{2})=-f([x_{1},x_{2}])+[x_{1},f(x_{2})]-(-1)^{x_{2}x_{1}}[ x_{2},f(x_{1})], \tag{6}\]
\[\delta^{2}g(x_{1},x_{2},x_{3}) = -g([x_{1},x_{2}],x_{3})+(-1)^{x_{3}x_{2}}g([x_{1},x_{3}],x_{2})-( -1)^{x_{1}(x_{2}+x_{3})}g([x_{2},x_{3}],x_{1}) \tag{7}\] \[+[x_{1},g(x_{2},x_{3})]-(-1)^{x_{2}x_{1}}[x_{2},g(x_{1},x_{3})]\] \[+(-1)^{x_{3}x_{1}+x_{3}x_{2}}[x_{3},g(x_{1},x_{2})].\]
The set \(\{m\in M_{0}|[x,m]=0,\forall x\in L\}\) is called annihilator of \(L\) in \(M_{0}\) and is denoted by \(ann_{M_{0}}L\). We have
\[(H_{G}^{0})_{0}(L;M) = \{m\in M_{0}|[x,m]=0,\ \mbox{for all}\ x\in L\}\] \[= ann_{M_{0}}L.\]
A \(G\)-equivariant homogeneous linear map \(f:L\to M\) is called derivation from \(L\) to \(M\) if \(f([x_{1},x_{2}])=(-1)^{fx_{1}}[x_{1},f(x_{2})]-(-1)^{fx_{2}+x_{2}x_{1}}[x_{2},f(x_{1})]\), that is \(\delta_{G}^{1}f=0\). For every \(m\in M_{0}\) the map \(x\mapsto[x,m]\) is called an inner derivation from \(L\) to \(M\). We denote the vector spaces of equivariant derivations and equivariant inner derivations from \(L\) to \(M\) by \(Der^{G}(L;M)\) and \(Der^{G}_{Inn}(L;M)\) respectively. By using 5, 6 we have
\[(H_{G}^{1})_{0}(L;M)=Der^{G}(L;M)/Der^{G}_{Inn}(L;M).\]
Let \(L\) be a Lie superalgebra with an action of a finite group \(G\) and \(M\) be a \(G\)-module over \(L\). We regard \(M\) as an abelian Lie superalgebra with an action of \(G\). An extension of \(L\) by \(M\) is an exact sequence
(*)
of Lie superalgebras such that
\[[x,i(m)]=[\pi(x),m].\]
The exact sequence \((*)\) regarded as a sequence of \(K\)-vector spaces, splits. Therefore without any loss of generality we may assume that \({\cal E}\) as a \(K\)-vector space coincides with the direct sum \(L\oplus M\) and that \(i(m)=(0,m)\), \(\pi(x,m)=x\). Thus we have \({\cal E}={\cal E}_{0}\oplus{\cal E}_{1}\), where \({\cal E}_{0}=L_{0}\oplus M_{0}\), \({\cal E}_{1}=L_{1}\oplus M_{1}\). The multiplication in \({\cal E}=L\oplus M\) has then necessarily the form
\[[(0,m_{1}),(0,m_{2})]=0,\ [(x_{1},0),(0,m_{1})]=(0,[x_{1},m_{1}]),\]
\[[(0,m_{2}),(x_{2},0)]=-(-1)^{m_{2}x_{2}}(0,[x_{2},m_{2}]),\ [(x_{1},0),(x_{2},0)]=([x_{1},x_{2}],h(x_{1},x_{2})),\]
for some \(h\in(C_{G}^{2})_{0}(L;M)\), for all homogeneous \(x_{1},x_{2}\in L\), \(m_{1},m_{2}\in M\). Thus, in general, we have
\[[(x,m),(y,n)]=([x,y],[x,n]-(-1)^{my}[y,m]+h(x,y)),\]
for all homogeneous \((x,m)\), \((y,n)\) in \(\mathcal{E}=L\oplus M\).
Conversely, let \(h:L\times L\to M\) be a bilinear \(G\)-equivariant homogeneous map of degree \(0\). For homogeneous \((x,m)\), \((y,n)\) in \(\mathcal{E}\) we define multiplication in \(\mathcal{E}=L\oplus M\) by Equation 8. For homogeneous \((x,m)\), \((y,n)\) and \((z,p)\) in \(\mathcal{E}\) we have
\[[[(x,m),(y,n)],(z,p)] \tag{9}\] \[= ([[x,y],z],[[x,y],p]-(-1)^{zx+zn}[z[x,n]]+(-1)^{ym+zy+zm}[z,[y,m] ]+[h(x,y),z]+h([x,y],z))\]
\[[(x,m),[(y,n),(z,p)]] \tag{10}\] \[= ([x,[y,z]],[x,[y,p]]-(-1)^{nz}[x,[z,n]]-(-1)^{my+mz}[[y,z],m]+[x, h(y,z)]+h(x,[y,z])\]
\[[(y,n),[(x,m),(z,p)]]\] \[= ([y,[x,z]],[y,[x,p]]-(-1)^{mz}[y,[z,m]]-(-1)^{nx+nz}[[x,z],n]+[y, h(x,z)]+h(y,[x,z]))\]
From Equations 9, 10, 11 we conclude that \(\mathcal{E}=L\oplus M\) is a Lie superalgebra with product given by Equation 8 if and only if \(\delta_{G}^{2}h=0\). We denote the Lie superalgebra given by Equation 8 using notation \(\mathcal{E}_{h}\). Thus for every cocycle \(h\in(C_{G}^{2})_{0}(L;M)\) there exists an extension
of \(L\) by \(M\), where \(i\) and \(\pi\) are inclusion and projection maps, that is, \(i(m)=(0,m)\), \(\pi(x,m)=x\). We say that two extensions
of \(L\) by \(M\) are equivalent if there is a \(G\)-equivariant Lie superalgebra isomorphism \(\psi:\mathcal{E}^{1}\rightarrow\mathcal{E}^{2}\) such that following diagram commutes:
We use \(F(L,M)\)to denote the set of all equivalence classes of extensions of \(L\) by \(M\). Equation 8 defines a mapping of \((Z_{G}^{2})_{0}(L;M)\) onto \(F(L,M)\). If for \(h,h^{\prime}\in(Z_{G}^{2})_{0}(L;M)\)\(E_{h}\) is equivalent to \(E_{h^{\prime}}\), then commutativity of diagram \((**)\) is equivalent to
\[\psi(x,m)=(x,m+f(x)),\]
for some \(f\in(C_{G}^{1})_{0}(L;M)\). We have
\[\psi([(x_{1},m_{1}),(x_{2},m_{2})]) = \psi([x_{1},x_{2}],[x_{1},m_{2}]+[m_{1},x_{2}]+h(x_{1},x_{2}))\] \[= ([x_{1},x_{2}],[x_{1},m_{2}]+[m_{1},x_{2}]+h(x_{1},x_{2})+f([x_{1},x_{2}])),\]
\[[\psi(x_{1},m_{1}),\psi(x_{2},m_{2})] = [(x_{1},m_{1}+f(x_{1})),(x_{2},m_{2}+f(x_{2}))]\] \[= ([x_{1},x_{2}],[x_{1},m_{2}+f(x_{2})]+[m_{1}+f(x_{1}),x_{2}]+h^{ \prime}(x_{1},x_{2})).\]
Since \(\psi([(x_{1},m_{1}),(x_{2},m_{2})])=[\psi(x_{1},m_{1}),\psi(x_{2},m_{2})]\), we have
\[h(x_{1},x_{2})-h^{\prime}(x_{1},x_{2}) = -f([x_{1},x_{2}])+[x_{1},f(x_{2})]+[f(x_{1}),x_{2}] \tag{14}\] \[= -f([x_{1},x_{2}])+[x_{1},f(x_{2})]-(-1)^{x_{1}x_{2}}[x_{2},f(x_{1 })]\] \[= \delta^{1}(f)(x_{1},x_{2})\]
Thus two extensions \(E_{h}\) and \(E_{h^{\prime}}\) are equivalent if and only if there exists some \(f\in(C_{G}^{1})_{0}(L;M)\) such that \(\delta^{1}f=h-h^{\prime}\). We thus have following theorem:
**Theorem 6.1**.: _The set \(F(L,M)\) of all equivalence classes of extensions of \(L\) by \(M\) is in one to one correspondence with the cohomology group \((H_{G}^{2})_{0}(L;M)\). This correspondence \(\omega:(H_{G}^{2})_{0}(L;M)\to F(L,M)\) is obtained by assigning to each cocycle \(h\in(Z_{G}^{2})_{0}(L;M)\), the extension given by multiplication 8._
## 7 Equivariant Deformation of Lie Superalgebras
Let \(L=L_{0}\oplus L_{1}\) be a Lie superalgebra. We denote the ring of all formal power series with coefficients in \(L\) by \(L[[t]]\). Clearly, \(L[[t]]=L_{0}[[t]]\oplus L_{1}[[t]]\). So every \(a_{t}\in L[[t]]\) is of the form \(a_{t}=(a_{t})_{0}\oplus(a_{t})_{1}\), where \((a_{t})_{0}\in L_{0}[[t]]\) and \((a_{t})_{1}\in L_{1}[[t]]\)
**Definition 7.1**.: _Let \(L=L_{0}\oplus L_{1}\) be a Lie superalgebra with an action of a finite group \(G\). An equivariant formal one-parameter deformation of \(L\) is a \(K[[t]]\)-bilinear map_
\[\mu_{t}:L[[t]]\times L[[t]]\to L[[t]]\]
_satisfying the following properties:_
1. \(\mu_{t}(a,b)=\sum_{i=0}^{\infty}\mu_{i}(a,b)t^{i}\)_, for all_ \(a,b\in L\)_, where_ \(\mu_{i}:L\times L\to L\)_,_ \(i\geq 0\) _are_ \(G\)_-equivariant bilinear homogeneous mappings of degree zero and_ \(\mu_{0}(a,b)=[a,b]\) _is the original product on_ \(L\)_._
2. \(\mu_{t}(a,b)=-(-1)^{ab}\mu_{t}(b,a),\) _for all homogeneous_ \(a,b\in L\)_._
3. \[\mu_{t}(a,\mu_{t}(b,c))=\mu_{t}(\mu_{t}(a,b),c)+(-1)^{ab}\mu_{t}(b,\mu_{t}(a,c )),\] (15) _for all homogeneous_ \(a,b,c\in L\)_._
_The Equation 15 is equivalent to following equation:_
\[\sum_{i+j=r}\mu_{i}(a,\mu_{j}(b,c)) \tag{16}\] \[= \sum_{i+j=r}\{\mu_{i}(\mu_{j}(a,b),c)-(-1)^{ab}\mu_{i}(b,\mu_{j}( a,c))\},\]
_for all homogeneous \(a,b,c\in L\)._
Now we define a formal deformation of finite order of a Lie superalgebra \(L\).
**Definition 7.2**.: _Let \(L\) be a Lie superalgebra with an action of a group \(G\). A formal one-parameter deformation of order \(n\) of \(L\) is a \(K[[t]]\)-bilinear map_
\[\mu_{t}:L[[t]]\times L[[t]]\to L[[t]]\]
_satisfying the following properties:_
1. \(\mu_{t}(a,b)=\sum_{i=0}^{n}\mu_{i}(a,b)t^{i}\)_,_ \(\forall a,b,c\in L\)_, where_ \(\mu_{i}:L\times L\to T\)_,_ \(0\leq i\leq n\)_, are equivariant_ \(K\)_-bilinear homogeneous maps of degree_ \(0\)_, and_ \(\mu_{0}(a,b)=[a,b]\) _is the original product on_ \(L\)
_._
2. \(\mu_{i}(a,b)=-(-1)^{ab}\mu_{i}(b,a),\) _for all homogeneous_ \(a,b\in L\)_,_ \(i\geq 0.\)__
3. \[\mu_{t}(a,\mu_{t}(b,c))=\mu_{t}(\mu_{t}(a,b),c)+(-1)^{ab}\mu_{t}(b,\mu_{t}(a,c)),\] (17) _for all homogeneous_ \(a,b,c\in L\)_._
**Remark 7.1**.:
* _For_ \(r=0\)_, conditions_ 16 _is equivalent to the fact that_ \(L\) _is a Lie superalgebra._
* _For_ \(r=1\)_, conditions_ 16 _is equivalent to_ \[0 = -\mu_{1}(a,[b,c])-[a,\mu_{1}(b,c)]\] \[+\mu_{1}([a,b],c)+(-1)^{ab}\mu_{1}(b,[a,c])+[\mu_{1}(a,b),c]+(-1)^ {ab}[b,\mu_{1}(a,c)]\] \[= \delta^{2}\mu_{1}(a,b,c);\mbox{ for all homogeneous }a,b,c\in L.\] _Thus for_ \(r=1\)_,_ 16 _is equivalent to saying that_ \(\mu_{1}\in C_{0}^{2}(L;L)\) _is a cocycle. In general, for_ \(r\geq 0\)_,_ \(\mu_{r}\) _is just a 2-cochain, that is, in_ \(\mu_{r}\in C_{0}^{2}(L;L)\)_._
**Definition 7.3**.: _The cochain \(\mu_{1}\in C_{0}^{2}(L;L)\) is called infinitesimal of the deformation \(\mu_{t}\). In general, if \(\mu_{i}=0,\) for \(1\leq i\leq n-1\), and \(\mu_{n}\) is a nonzero cochain in \(C_{0}^{2}(L;L)\), then \(\mu_{n}\) is called n-infinitesimal of the deformation \(\mu_{t}\)._
**Proposition 7.1**.: _The infinitesimal \(\mu_{1}\in C_{0}^{2}(L;L)\) of the deformation \(\mu_{t}\) is a cocycle. In general, n-infinitesimal \(\mu_{n}\) is a cocycle in \(C_{0}^{2}(L;L)\)._
Proof.: For \(n=1\), the claim follows immediately from Remark 7.1. For \(n>1\), the proof is similar.
## 8 Equivalence of Equivariant Formal Deformations and Cohomology
Let \(\mu_{t}\) and \(\tilde{\mu_{t}}\) be two formal deformations of a Lie superalgebra \(L=L_{0}\oplus L_{1}\). A formal isomorphism from the deformation \(\mu_{t}\) to \(\tilde{\mu_{t}}\) is a \(K[[t]]\)-linear automorphism \(\Psi_{t}:L[[t]]\to L[[t]]\) of the form \(\Psi_{t}=\sum_{i=0}^{\infty}\psi_{i}t^{i}\), where each \(\psi_{i}\) is a homogeneous \(K\)-linear map \(L\to L\) of degree \(0\), \(\psi_{0}(a)=a\) for all \(a\in L\), and
\[\tilde{\mu_{t}}(\Psi_{t}(a),\Psi_{t}(b))=\Psi_{t}\circ\mu_{t}(a,b),\]
for all \(a,b\in L\).
**Definition 8.1**.: _Two deformations \(\mu_{t}\) and \(\tilde{\mu_{t}}\) of a Lie superalgebra \(L\) are said to be equivalent if there exists a formal isomorphism \(\Psi_{t}\) from \(\mu_{t}\) to \(\tilde{\mu_{t}}\)._
Formal isomorphism defines an equivalence relation on the collection of all formal deformations of a Lie superalgebra \(L\).
**Definition 8.2**.: _Any formal deformation of \(L\) that is equivalent to the deformation \(\mu_{0}\) is said to be a trivial deformation._
**Theorem 8.1**.: _The cohomology class of the infinitesimal of a deformation \(\mu_{t}\) of a Lie Superalgebra \(L\) is determined by the equivalence class of \(\mu_{t}\)._
Proof.: Let \(\Psi_{t}\) be a formal isomorphism from \(\mu_{t}\) to \(\tilde{\mu_{t}}\). So we have, for all \(a,b\in L\), \(\tilde{\mu_{t}}(\Psi_{t}a,\Psi_{t}b)=\Psi_{t}\circ\mu_{t}(a,b)\). This implies that
\[(\mu_{1}-\tilde{\mu_{1}})(a,b) = [\psi_{1}a,b]+[a,\psi_{1}b]-\psi_{1}([a,b])\] \[= \delta^{1}\psi_{1}(a,b).\]
So we have \(\mu_{1}-\tilde{\mu_{1}}=\delta^{1}\psi_{1}.\) This completes the proof.
## 9 Some Examples of Equivariant Deformations
In this section we discuss some examples of equivariant formal deformations of Lie superalgebras.
**Example 9.1**.: _Let \(e_{ij}\) denote a \(2\times 2\) matrix with \((i,j)\)th entry \(1\) and all other entries \(0.\) Consider \(L_{0}=span\{e_{11},e_{22}\}\), \(L_{1}=span\{e_{12},e_{21}\}\). Then \(L=L_{0}\oplus L_{1}\) is a Lie superalgebra with the bracket \([,]\) defined by_
\[[a,b]=ab-(-1)^{\bar{a}\bar{b}}ba.\]
_Define a function \(\psi:\mathbb{Z}_{2}\times L\to L\) by \(\psi(0,x)=x,\forall x\in L\), \(\psi(1,e_{11})=e_{22}\), \(\psi(1,e_{22})=e_{11}\), \(\psi(1,e_{12})=e_{21}\), \(\psi(1,e_{21})=e_{12}\). In Example 3.4, we have seen that this gives an action of \(\mathbb{Z}_{2}\) on \(L.\) Define a bilinear map \(*:L\times L\to L\) by_
\[e_{ij}*e_{kl}=\begin{cases}e_{li},\ \text{if}\ j=k\\ 0,\ \text{otherwise}\end{cases}.\]
_Now define \(\mu_{1}:L\times L\to L\) by_
\[\mu_{1}(a,b)=a*b-(-1)^{ab}b*a,\]
_for all homogeneous \(a\), \(b\) in \(L\). We have_
1. \(1\mu_{1}(e_{ii},e_{ii})=0=\mu_{1}(1e_{ii},1e_{ii}),\,\forall\;i=1,2.\)__
2. \(1\mu_{1}(e_{ii},e_{jj})=0=\mu_{1}(e_{jj},e_{ii})=\mu_{1}(1e_{ii},1e_{jj}),\, \forall\;i,j=1,2,\;i\neq j.\)__
3. \(1\mu_{1}(e_{ij},e_{ji})=1(e_{jj}-(-1)^{1}e_{ii})=e_{ii}+e_{jj}=\mu_{1}(1e_{ij}, 1e_{ji}),\,\forall\;i,j=1,2,\;i\neq j.\)__
4. \(1\mu_{1}(e_{ij},e_{ij})=0=\mu_{1}(e_{ji},e_{ji})=\mu_{1}(1e_{ij},1e_{ij}),\,\forall\;i,j=1,2,\;i\neq j.\)__
5. \(1\mu_{1}(e_{ii},e_{ij})=1(e_{ji})=e_{ij}=\mu_{1}(e_{jj},e_{ji})=\mu_{1}(1e_{ii},1e_{ij}),\,\forall\;i,j=1,2,\;i\neq j.\)__
6. \(1\mu_{1}(e_{jj},e_{ij})=1(-e_{ji})=-e_{ij}=\mu_{1}(e_{ii},e_{ji})=\mu_{1}(1e_{jj},1e_{ij}),\,\forall\;i,j=1,2,\;i\neq j.\)__
_Hence \(\mu_{1}\) is \(\mathbb{Z}_{2}\)-equivariant. Define \(\mu_{t}=\mu_{0}+\mu_{1}t,\) where \(\mu_{0}(a,b)=[a,b].\) We shall show that \(\mu_{t}\) is an equivariant deformation of \(L\) of order \(1.\) To conclude this, the only thing we need to show is that_
\[\delta^{2}\mu_{1}(a,b,c) = -\mu_{1}(a,[b,c])-[a,\mu_{1}(b,c)]\] \[+\mu_{1}([a,b],c)+(-1)^{ab}\mu_{1}(b,[a,c])+[\mu_{1}(a,b),c]+(-1) ^{ab}[b,\mu_{1}(a,c)]\] \[= 0;\quad\mbox{ for all homogeneous $a,b,c\in L$}.\]
_We have_
\[\delta^{2}\mu_{1}(b,c,a) = -\mu_{1}(b,[c,a])-[b,\mu_{1}(c,a)] \tag{18}\] \[+\mu_{1}([b,c],a)+(-1)^{bc}\mu_{1}(c,[b,a])+[\mu_{1}(b,c),a]+(-1) ^{bc}[c,\mu_{1}(b,a)]\] \[= (-1)^{ac}\mu_{1}(b,[a,c])+(-1)^{ac}[b,\mu_{1}(a,c)]\] \[-(-1)^{ab+ac}\mu_{1}(a,[b,c])+(-1)^{ab+ac}\mu_{1}([a,b],c)\] \[-(-1)^{ab+ac}[a,\mu_{1}(b,c)]+(-1)^{ab+ac}[\mu_{1}(a,b),c]\] \[= (-1)^{ab+ac}\{-\mu_{1}(a,[b,c])-[a,\mu_{1}(b,c)]\] \[+\mu_{1}([a,b],c)+(-1)^{ab}\mu_{1}(b,[a,c])+[\mu_{1}(a,b),c]+(-1) ^{ab}[b,\mu_{1}(a,c)]\}\] \[= (-1)^{ab+ac}\delta^{2}\mu_{1}(a,b,c)\]
\[\delta^{2}\mu_{1}(e_{11},e_{12},e_{21}) = -\mu_{1}(e_{11},e_{11}+e_{22})-[e_{11},e_{22}+e_{11}] \tag{19}\] \[+\mu_{1}(e_{12},e_{21})+\mu_{1}(e_{12},-e_{21})+[e_{21},e_{21}]+[e _{12},-e_{12}]\] \[= 0\]
\[\delta^{2}\mu_{1}(e_{11},e_{21},e_{12}) = -\mu_{1}(e_{11},e_{11}+e_{22})-[e_{11},e_{22}+e_{11}] \tag{20}\] \[+\mu_{1}(-e_{21},e_{12})+\mu_{1}(e_{21},e_{12})+[-e_{12},e_{12}]+ [e_{21},e_{21}]\] \[= 0\]
\[\delta^{2}\mu_{1}(e_{11},e_{12},e_{22}) = -\mu_{1}(e_{11},e_{12})-[e_{11},e_{21}] \tag{21}\] \[+\mu_{1}(e_{12},e_{22})+\mu_{1}(e_{12},0)+[e_{21},e_{22}]+[e_{12},0]\] \[= 0\]
\[\delta^{2}\mu_{1}(e_{11},e_{22},e_{12}) = -\mu_{1}(e_{11},-e_{12})-[e_{11},-e_{21}] \tag{22}\] \[+\mu_{1}(0,e_{12})+\mu_{1}(e_{22},-e_{12})+[0,e_{12}]+[e_{22},e_{ 21}]\] \[= 0\]
\[\delta^{2}\mu_{1}(e_{11},e_{22},e_{21}) = -\mu_{1}(e_{11},e_{21})-[e_{11},e_{12}] \tag{23}\] \[+\mu_{1}(0,e_{21})+\mu_{1}(e_{22},-e_{21})+[0,e_{21}]+[e_{22},-e_{ 12}]\] \[= 0\]
\[\delta^{2}\mu_{1}(e_{11},e_{21},e_{22}) = -\mu_{1}(e_{11},-e_{21})-[e_{11},-e_{12}] \tag{24}\] \[+\mu_{1}(-e_{21},e_{22})+\mu_{1}(e_{21},0)+[-e_{12},e_{22}]+[e_{2 1},0]\] \[= 0\]
\[\delta^{2}\mu_{1}(e_{22},e_{12},e_{21}) = -\mu_{1}(e_{22},e_{11}+e_{22})-[e_{22},e_{22}+e_{11}] \tag{25}\] \[+\mu_{1}(-e_{12},e_{21})+\mu_{1}(e_{12},e_{21})-[e_{21},e_{21}]+[e_{12},e_{12}]\] \[= 0\]
\[\delta^{2}\mu_{1}(e_{22},e_{21},e_{12}) = -\mu_{1}(e_{22},e_{11}+e_{22})-[e_{22},e_{22}+e_{11}] \tag{26}\] \[+\mu_{1}(e_{21},e_{12})+\mu_{1}(e_{21},-e_{12})+[e_{12},e_{12}]+[e_ {21},-e_{21}]\] \[= 0\]
_Using Equations 18, 19, 20, 21, 22, 23, 24, 25 and 26 we conclude that \(\delta^{2}\mu_{1}(a,b,c)=0\) for all homogeneous \(a,b,c\in L\). Hence \(\mu_{t}\) is an equivariant deformation of \(L\) of order \(1\)._
2303.18158
Constrained Optimization of Rank-One Functions with Indicator Variables
Optimization problems involving minimization of a rank-one convex function
over constraints modeling restrictions on the support of the decision variables
emerge in various machine learning applications. These problems are often
modeled with indicator variables for identifying the support of the continuous
variables. In this paper we investigate compact extended formulations for such
problems through perspective reformulation techniques. In contrast to the
majority of previous work that relies on support function arguments and
disjunctive programming techniques to provide convex hull results, we propose a
constructive approach that exploits a hidden conic structure induced by
perspective functions. To this end, we first establish a convex hull result for
a general conic mixed-binary set in which each conic constraint involves a
linear function of independent continuous variables and a set of binary
variables. We then demonstrate that extended representations of sets associated
with epigraphs of rank-one convex functions over constraints modeling indicator
relations naturally admit such a conic representation. This enables us to
systematically give perspective formulations for the convex hull descriptions
of these sets with nonlinear separable or non-separable objective functions,
sign constraints on continuous variables, and combinatorial constraints on
indicator variables. We illustrate the efficacy of our results on sparse
nonnegative logistic regression problems.
Soroosh Shafiee, Fatma Kılınç-Karzan
2023-03-31T15:51:56Z
http://arxiv.org/abs/2303.18158v2
# Constrained Optimization of Rank-One Functions with Indicator Variables
###### Abstract
Optimization problems involving minimization of a rank-one convex function over constraints modeling restrictions on the support of the decision variables emerge in various machine learning applications. These problems are often modeled with indicator variables for identifying the support of the continuous variables. In this paper we investigate compact extended formulations for such problems through perspective reformulation techniques. In contrast to the majority of previous work that relies on support function arguments and disjunctive programming techniques to provide convex hull results, we propose a constructive approach that exploits a hidden conic structure induced by perspective functions. To this end, we first establish a convex hull result for a general conic mixed-binary set in which each conic constraint involves a linear function of independent continuous variables and a set of binary variables. We then demonstrate that extended representations of sets associated with epigraphs of rank-one convex functions over constraints modeling indicator relations naturally admit such a conic representation. This enables us to systematically give perspective formulations for the convex hull descriptions of these sets with nonlinear separable or non-separable objective functions, sign constraints on continuous variables, and combinatorial constraints on indicator variables. We illustrate the efficacy of our results on sparse nonnegative logistic regression problems.
Keywords: Mixed-integer nonlinear optimization, indicator variables, perspective function, combinatorial constraints, convex hull
## 1 Introduction
We consider specific classes of the general mixed-binary convex optimization problem with indicator variables
\[\min_{\mathbf{x},\mathbf{z}}\left\{H(\mathbf{x}):\ (\mathbf{x},\mathbf{z})\in\mathcal{X}\times \mathcal{Z},\ x_{i}(1-z_{i})=0,\ \forall i\in[d]\right\}, \tag{1}\]
where \(H:\mathbb{R}^{d}\rightarrow\mathbb{R}\) is a convex function, \(\mathcal{X}\times\mathcal{Z}\subseteq\mathbb{R}^{d}\times\left\{0,1\right\}^{d}\) denotes the feasible set, and \([d]:=\left\{1,\ldots,d\right\}\). Each binary variable \(z_{i}\) determines whether a continuous variable \(x_{i}\) is zero or not by requiring \(x_{i}=0\) when \(z_{i}=0\) and allowing \(x_{i}\) to take any value when \(z_{i}=1\). We are interested in providing the closed convex hull characterization of the epigraph associated with problem (1)
\[\left\{(\tau,\mathbf{x},\mathbf{z})\in\mathbb{R}\times\mathcal{X}\times \mathcal{Z}:\ H(\mathbf{x})\leq\tau,\ x_{i}(1-z_{i})=0,\ \forall i\in[d]\right\}.\]
The primary challenge in this class of problems stems from the complementary constraints between \(x_{i}\) and \(z_{i}\). Whenever an a priori bound on the magnitude of the continuous variables \(|x_{i}|\) is known, these constraints can be linearized via the big-M method. However, finding a suitable bound on the continuous variables to set the big-M parameter can be very challenging, and the resulting big-M formulations are known to be weak. Dating back to Ceria and Soares [14], the perspective functions have played a significant role in offering big-M free reformulations of mixed-binary convex optimization problems. In particular, Frangioni and Gentile [18] introduce perspective cuts based on a linearization of perspective functions. Akturk et al. [1] then show that perspective relaxations can be viewed as implicitly including all (infinitely many) perspective cuts. Gunluk and Linderoth [22] examine a _separable_ structure where the objective function of (1) constitutes a sum of univariate functions, taking the form \(H(x)=\sum_{i\in[d]}h_{i}(x_{i})\) for some lower semicontinuous and convex functions \(h_{i}:\mathbb{R}\rightarrow\mathbb{R}\), and study the mixed-binary set
\[\mathcal{H}:=\left\{(\tau,\mathbf{x},\mathbf{z})\in\mathbb{R}\times\mathcal{X}\times \mathcal{Z}:\begin{array}{l}\sum_{i\in[d]}h_{i}(x_{i})\leq\tau,\\ x_{i}(1-z_{i})=0,\ \forall i\in[d]\end{array}\right\} \tag{2}\]
when \(\mathcal{X}=\mathbb{R}^{d}\) and \(\mathcal{Z}=\left\{0,1\right\}^{d}\). Using the perspective reformulation technique, [22] presents an _ideal_ formulation of the closed convex hull of \(\mathcal{H}\) (\(\operatorname{cl\,conv}(\mathcal{H})\)) in the original space of variables. Xie and Deng [38] revisit the separable structure and give a perspective formulation of \(\operatorname{cl\,conv}(\mathcal{H})\) when \(\mathcal{X}=\mathbb{R}^{d},\mathcal{Z}=\left\{z\in\left\{0,1\right\}^{d}: \mathbf{1}^{\top}\mathbf{z}\leq\kappa\right\}\) for some integer \(1\leq\kappa\leq d\), and the functions \(h_{i}\) are convex and quadratic. Bacci et al. [7] extend these findings to convex differentiable functions under certain constraint qualification conditions. More recently, under this separable function assumption, using a support function argument, Wei et al. [36] further generalize [38] and provide an ideal perspective formulation for \(\operatorname{cl\,conv}(\mathcal{H})\) when \(\mathcal{X}=\mathbb{R}^{d},\mathcal{Z}\subseteq\left\{0,1\right\}^{n}\) modeling combinatorial relations, and the functions \(h_{i}:\mathbb{R}\rightarrow\mathbb{R}\) are lower semicontinuous and convex.
In a number of important applications, including portfolio optimization in finance [13; 34], sparse classification and regression in machine learning [4; 10; 11; 12; 17; 24; 25; 38] and their nonnegative variants [5; 6], and outlier detection in statistics [21], the objective function \(H\) constitutes a finite sum of _rank-one convex functions_, taking the form \(H(\boldsymbol{x})=\sum_{j\in[N]}h_{j}(\boldsymbol{a}_{j}^{\top}\boldsymbol{x})\) for some convex functions \(h_{j}\) and vectors \(\boldsymbol{a}_{j}\in\mathbb{R}^{d}\). For example, in the case of sparse least squares regression, \(N\) denotes the number of samples and \(h_{j}(\boldsymbol{a}_{j}^{\top}\boldsymbol{x})=(\boldsymbol{a}_{j}^{\top} \boldsymbol{x}-b_{j})^{2}\), where \(\boldsymbol{a}_{j}\) represents the feature or input vector and \(b_{j}\) represents the label or output. As a result, a growing stream of research [28; 2; 19; 6; 30; 34] studies (1) when \(H(\boldsymbol{x})\) is a _non-separable_ quadratic function. More recently, a number of papers [3; 5; 23; 35; 36] offer strong perspective-based relaxations for rank-one non-separable objective functions by analyzing the closed convex hull of the mixed-binary set
\[\mathcal{T}:=\left\{(\tau,\boldsymbol{x},\boldsymbol{z})\in\mathbb{R}\times \mathcal{X}\times\mathcal{Z}:\begin{array}{l}h(\boldsymbol{a}^{\top} \boldsymbol{x})\leq\tau,\\ x_{i}(1-z_{i})=0,\ \forall i\in[d]\end{array}\right\} \tag{3}\]
under various assumptions on \(h(\cdot)\), \(\mathcal{X}\), and \(\mathcal{Z}\). In particular, Atamturk and Gomez [3] examine rank-one convex quadratic functions when \(\mathcal{X}=\mathbb{R}^{d}\) and \(\mathcal{Z}=\left\{0,1\right\}^{d}\). Wei et al. [35] extend these findings by allowing constraints on the binary variables, that is, \(\mathcal{Z}\subseteq\left\{0,1\right\}^{d}\). Following up on this, Wei et al. [36] give a perspective formulation of the closed convex hull of \(\mathcal{T}\) (\(\operatorname{cl\,conv}(\mathcal{T})\)) in the original space of variables when \(\mathcal{X}=\mathbb{R}^{d},\mathcal{Z}\subset\left\{0,1\right\}^{d}\), and the function \(h:\mathbb{R}\rightarrow\mathbb{R}\) is lower semicontinuous and convex. Nonetheless, the formulations in [35; 36] require including a nonlinear convex inequality for each facet of \(\operatorname{conv}(\mathcal{Z}\setminus\left\{\boldsymbol{0}\right\})\). Note that \(\operatorname{conv}(\mathcal{Z}\setminus\left\{\boldsymbol{0}\right\})\) itself may be complicated and may require an exponential number of inequalities for its description even when \(\mathcal{Z}\) is a simple boolean set itself. In a separate thread, when \(\mathcal{Z}=\left\{0,1\right\}^{d}\), Atamturk and Gomez [5] examine \(\operatorname{cl\,conv}(\mathcal{T})\) when \(h\) is a convex quadratic function and there are additional nonnegativity requirements on some of the continuous variables, i.e., \(\mathcal{X}=\left\{\boldsymbol{x}\in\mathbb{R}^{d}:\ x_{i}\geq 0,\ \forall i\in \mathcal{I}\right\}\) for some \(\mathcal{I}\subseteq[d]\), and propose classes of valid inequalities for \(\operatorname{cl\,conv}(\mathcal{T})\). However, these inequalities given in the original problem space require a cutting-surface based implementation, which may result in numerical issues. To address such issues, Han and Gomez [23] present compact extended formulations for \(\operatorname{cl\,conv}(\mathcal{T})\) when \(\mathcal{Z}=\left\{0,1\right\}^{d}\). These extended formulations are not only applicable to general lower semicontinuous and convex function \(h\), but are also easier to implement as they can be embedded within standard branch-and-bound based integer programming solvers.
In this paper, by linking the perspective formulations to conic programming, we propose a conic-programming based approach to address constrained optimization of rank-one functions with indicator variables. Our approach generalizes the existing results by studying simultaneously both sign restrictions on continuous variables and combinatorial constraints on binary variables. Specifically, we provide perspective formulations for \(\operatorname{cl\,conv}(\mathcal{H})\) and \(\operatorname{cl\,conv}(\mathcal{T})\)
when \(\mathcal{X}\!=\!\left\{\mathbf{x}\in\mathbb{R}^{d}:x_{i}\geq 0,\,\forall i\in\mathcal{I}\right\}\) for some \(\mathcal{I}\subseteq[d]\), \(\mathcal{Z}\subseteq\left\{0,1\right\}^{d}\) and all functions are proper, lower semicontinuous, and convex. The crux of our approach relies on understanding the recessive and rounding directions associated with these sets involving complementarity constraints. Based on this understanding, we reduce the given complementarity constraints to fewer and much simpler to handle complementarity relations in a lifted space. For example, in the case of the set \(\mathcal{T}\) when \(\mathcal{X}\) does not contain any sign restrictions, following this approach we analyze a set involving a _single_ complementarity constraint with a single new binary variable. In this way, our approach provides an effective way to arrive at compact extended formulations for \(\operatorname{cl\,conv}(\mathcal{H})\) and \(\operatorname{cl\,conv}(\mathcal{T})\) in a more direct manner while simultaneously handling arbitrary \(\mathcal{Z}\subseteq\left\{0,1\right\}^{d}\) and arbitrary sign restrictions included in \(\mathcal{X}\). In contrast, the proof techniques in [3; 5; 35; 23; 36] are based on support function arguments and disjunctive programming methods, which have resulted in addressing restrictions on the sets \(\mathcal{X}\) and \(\mathcal{Z}\) in separate studies through different approaches. The key contributions of this paper and its organization are summarized below.
* We derive the necessary tools for the study of the separable and nonseparable sets in Section 2. We begin by introducing Lemma 1, which provides a tool to handle a simple linking constraint and the closure operation. We then study and characterize the convex hulls of conic mixed-binary sets of the following form \[\mathcal{S}(\Delta,\mathbb{K})=\left\{(\mathbf{\alpha},\mathbf{\delta})\in\mathbb{R}^ {m}\times\Delta:\mathbf{A}_{i}\mathbf{\alpha}_{i}+\mathbf{B}_{i}\delta_{i}\in\mathbb{K}_{ i},\ \forall i\in[p]\right\},\] (4) where \(\Delta\subseteq\left\{0,1\right\}^{n}\), \(\mathbb{K}=\times_{i\in[d]}\mathbb{K}_{i}\), \(\mathbb{K}_{i}\) is convex cone containing the origin for every \(i\in[p]\), and the matrices \(\mathbf{A}_{i},\mathbf{B}_{i}\) have appropriate dimensions in Section 2.1. We use the notation \(\mathbf{\alpha}=(\mathbf{\alpha}_{1},\ldots,\mathbf{\alpha}_{p})\) to denote the vector \(\mathbf{\alpha}\in\mathbb{R}^{m}\) in terms of its subvectors \(\mathbf{\alpha}_{i}\in\mathbb{R}^{m_{i}},i\in[p]\), where \(\sum_{i\in[p]}m_{i}=m\). We show in Proposition 1 that \(\operatorname{conv}(\mathcal{S}(\Delta,\mathbb{K}))=\mathcal{S}(\operatorname {conv}(\Delta),\mathbb{K})\) as long as \(\Delta\subseteq\left\{0,1\right\}^{n}\) and \(\mathbb{K}_{i}\) is a convex cone containing the origin for all \(i\in[d]\). We then establish a simple condition in Theorem 1 under which \(\operatorname{cl\,conv}(\mathcal{S}(\Delta,\mathbb{K}))=\mathcal{S}( \operatorname{conv}(\Delta),\operatorname{cl}(\mathbb{K}))\) holds. These results highlight that the complexity of the convex hull characterizations of sets of the form \(\mathcal{S}(\Delta,\mathbb{K})\) is solely determined by the complexity of the characterization of \(\operatorname{conv}(\Delta)\). We next study perspective functions, and link them to the conic formulations of our interest \(\mathcal{S}(\Delta,\mathbb{K})\) in Section 2.2. Specifically, we analyze the epigraph of perspective functions in Lemmas 2 and 3, and establish the closed convex hull of a mixed-binary set that will serve as the primary substructure for the sets \(\mathcal{H}\) and \(\mathcal{T}\) in Proposition 2.
* In Section 3, we discuss the constrained optimization of rank-one functions with indicator variables through the lens of conic mixed-binary sets. Specifically, we derive compact extended formulations for \(\operatorname{cl\,conv}(\mathcal{H})\) and \(\operatorname{cl\,conv}(\mathcal{T})\) when \(\mathcal{X}=\left\{\mathbf{x}\in\mathbb{R}^{d}:x_{i}\geq 0,\forall i\in \mathcal{I}\right\}\) for some index set \(\mathcal{I}\subseteq[d]\) and \(\mathcal{Z}\subseteq\left\{0,1\right\}^{d}\).
* In Section 3.1, we investigate the separable case and establish \(\operatorname{cl\,conv}(\mathcal{H})\) in Theorem 2. This result extends Wei et al. (36, Theorem 3) to proper
(rather than real-valued) lower semicontinuous convex functions and by allowing for nonnegativity restrictions on \(\vec{x}\). * In Section 3.2, we explore the non-separable case and examine the set \(\mathcal{T}\) when \(\mathcal{X}=\mathbb{R}^{d}\) and \(\mathcal{Z}\subseteq\left\{0,1\right\}^{d}\). We establish an extended description for \(\operatorname{cl\,conv}(\mathcal{T})\) in Theorem 3. Our description requires a single new binary variable \(w\) and relies on the convex hull description of an associated set \(\Delta_{1}\) involving \(w\) and \(\vec{z}\) variables. This therefore reduces the complexity of characterizing \(\operatorname{cl\,conv}(\mathcal{T})\) in this setting to understanding the complexity of \(\operatorname{conv}(\Delta_{1})\). We show that \(\operatorname{conv}(\Delta_{1})\) admits simple descriptions in several cases of interest such as when \(\mathcal{Z}\) is defined by a cardinality constraint or by weak or strong hierarchy constraints. In this setting, \(\operatorname{cl\,conv}(\mathcal{T})\) in the original space was first given in (36, Theorem 1). In contrast to our result, Wei et al. (36, Theorem 1) provide ideal descriptions for \(\operatorname{cl\,conv}(\mathcal{T})\) that rely on explicit linear inequality description of the set \(\operatorname{conv}(\mathcal{Z}\setminus\{\mathbf{0}\})\) and adding a new nonlinear convex constraint based on every facet of \(\operatorname{conv}(\mathcal{Z}\setminus\{\mathbf{0}\})\). Our extended formulation, however, can immediately take advantage of any relaxation of \(\Delta_{1}\) and opens up ways to benefit from the long-line of research on convex hull descriptions of binary sets and related advanced techniques in optimization software. More recently, for \(\mathcal{Z}=\left\{0,1\right\}^{d}\) and \(\mathcal{I}=\varnothing\), Han and Gomez (23, Proposition 3) provide an extended formulation involving \(2d\) new continuous variables, that are subsequently projected out in (23, Proposition 4) to recover (36, Theorem 1). In contrast, our extended formulation works for any boolean set \(\mathcal{Z}\subseteq\left\{0,1\right\}^{d}\) and relies on only one additional binary variable. * In Section 3.3, we continue to explore the non-separable setting of \(\mathcal{T}\) when \(\mathcal{X}=\left\{\vec{x}\in\mathbb{R}^{d}:x_{i}\geq 0,\forall i\in\mathcal{I}\right\}\) for some index set \(\mathcal{I}\subseteq[d]\) and \(\mathcal{Z}\subseteq\left\{0,1\right\}^{d}\). In Theorem 5, we establish an extended formulation for \(\operatorname{cl\,conv}(\mathcal{T})\) using \(d\) new binary variables and \(d\) new continuous variables. Such an extended formulation for \(\operatorname{cl\,conv}(\mathcal{T})\) when \(\mathcal{Z}\) contains combinatorial constraints and \(\mathcal{X}\) contains sign restrictions has not been provided in the literature before. When \(\mathcal{Z}=\left\{0,1\right\}^{d}\), we recover (23, Proposition 3). Moreover, our result extends (23) by allowing combinatorial constraints on binary variables, i.e., \(\mathcal{Z}\subseteq\left\{0,1\right\}^{d}\), and extended-valued functions. Note that the proof techniques from (23) rely on explicit disjunctive programming arguments and the Fourier-Motzkin elimination method, and therefore, they cannot be easily adapted to handle combinatorial constraints on \(\mathcal{Z}\) (see Remark 3).
* Finally, in Section 4, we compare the numerical performance of formulations discussed in Section 3 on sparse nonnegative logistic regression with hierarchy constraints. We observe that our new relaxations are of high quality in terms of leading to both good quality continuous relaxation bounds and also significant improvements in the branch and bound performance.
_Notation._ We use \(\overline{\mathbb{R}}\) to denote the extended real numbers. The indicator function \(\mathds{1}_{\mathcal{I}}(i)=1\) if \(i\in\mathcal{I}\) and \(=0\) otherwise. Given a positive integer \(n\), we let \([n]:=\{1,\ldots,n\}\). We use boldface letters to denote vectors and matrices. We let \(\mathbf{0}\) and \(\mathbf{1}\) denote the vectors with all zeros and ones, respectively, while \(\boldsymbol{e}_{i}\) denote the \(i\)th unit basis vector. Given a vector \(\boldsymbol{x}\in\mathbb{R}^{n}\), we define \(\operatorname{sign}(\boldsymbol{x})\) as vector in \(\mathbb{R}^{n}\) whose \(i\)th elements is \(+1\) if \(x_{i}>0\), \(-1\) if \(x_{i}<0\), and \(0\) if \(x_{i}=0\). For a set \(\mathcal{S}\subseteq\mathbb{R}^{m}\), we denote by \(\operatorname{rint}(\mathcal{S}),\operatorname{rec}(\mathcal{S}), \operatorname{cl}(\mathcal{S}),\operatorname{conv}(\mathcal{S}),\operatorname {cl}\operatorname{conv}(\mathcal{S})\) its relative interior, recessive directions, closure, convex hull and closed convex hull, respectively. Given a boolean set \(\mathcal{Z}\), we denote its continuous relaxation by \(\operatorname{relax}(\mathcal{Z})\), and for a boolean set \(\Delta\) involving binary variables \(\boldsymbol{w}\), the set \(\operatorname{relax}_{\boldsymbol{w}}(\Delta)\) refers to partial continuous relaxation of \(\Delta\) obtained by removing the integrality restriction on only the variables \(\boldsymbol{w}\).
## 2 Technical Tools
In this section we build theoretical tools necessary for our study of separable and nonseparable sets. We begin by introducing the following lemma, which illustrates how to characterize the convex hull of a set that is obtained by adding a simple linking constraint to the direct product of sets and a sufficient condition for taking the closure of a set in an extended formulation form. The proof of this lemma can be found in Appendix A.
Lemma 1: _The following holds:_
1. _Let_ \(\mathcal{V}=\{(\tau,\boldsymbol{\mu}):\exists\boldsymbol{t}\text{ s.t. }(\boldsymbol{\mu},\boldsymbol{t})\in\mathcal{U},\mathbf{1}^{\top} \boldsymbol{t}=\tau\}\) _for some set_ \(\mathcal{U}\)_. Then_ \[\operatorname{conv}(\mathcal{V})=\{(\tau,\boldsymbol{\mu}):\exists \boldsymbol{t}\text{ s.t. }(\boldsymbol{\mu},\boldsymbol{t})\in \operatorname{conv}(\mathcal{U}),\mathbf{1}^{\top}\boldsymbol{t}=\tau\}.\]
2. _Let_ \(\mathcal{V}\!=\!\{\boldsymbol{\mu}:\exists\boldsymbol{\eta}\in\mathcal{U}\text { s.t. }\boldsymbol{A}\boldsymbol{\eta}=\boldsymbol{\mu},\ \boldsymbol{B} \boldsymbol{\eta}=\boldsymbol{b}\}\) _for some non-empty convex set_ \(\mathcal{U}\)_, matrices_ \(\boldsymbol{A}\) _and_ \(\boldsymbol{B}\)_, and vector_ \(\boldsymbol{b}\)_. Suppose that there exists a point_ \(\boldsymbol{\eta}^{*}\in\operatorname{rint}(\mathcal{U})\) _satisfying the condition_ \(\boldsymbol{B}\boldsymbol{\eta}^{*}=\boldsymbol{b}\)_. If additionally_ \(\boldsymbol{\eta}\!=\!\boldsymbol{0}\) _is the only vector in the set_ \(\operatorname{rec}(\{\boldsymbol{\eta}\in\operatorname{cl}(\mathcal{U}): \boldsymbol{A}\boldsymbol{\eta}=\boldsymbol{0},\boldsymbol{B}\boldsymbol{\eta} =\boldsymbol{b}\})\)_, then_ \[\operatorname{cl}(\mathcal{V})=\{\boldsymbol{\mu}:\exists\boldsymbol{\eta}\in \operatorname{cl}(\mathcal{U})\text{ s.t. }\boldsymbol{A}\boldsymbol{\eta}=\boldsymbol{\mu},\boldsymbol{B} \boldsymbol{\eta}=\boldsymbol{b}\}.\]
We next examine a class of conic binary sets and analyze perspective functions.
### Conic binary sets
We start by establishing a description for the convex hull of \(\mathcal{S}(\Delta,\mathbb{K})\).
Proposition 1: _Consider the set \(\mathcal{S}(\Delta,\mathbb{K})\) defined as in (4), where \(\Delta\subseteq\left\{0,1\right\}^{n}\), \(\mathbb{K}=\times_{i\in[p]}\mathbb{K}_{i}\), and each \(\mathbb{K}_{i}\), \(i\in[p]\), is a convex cone containing the origin. Then, we have \(\operatorname{conv}(\mathcal{S}(\Delta,\mathbb{K}))=\mathcal{S}(\operatorname{conv}(\Delta),\mathbb{K})\)._
Proof: We will proceed by showing that \(\operatorname{conv}(\mathcal{S}(\Delta,\mathbb{K}))\subseteq\mathcal{S}( \operatorname{conv}(\Delta),\mathbb{K})\) and \(\operatorname{conv}(\mathcal{S}(\Delta,\mathbb{K}))\supseteq\mathcal{S}( \operatorname{conv}(\Delta),\mathbb{K})\). The first direction trivially holds as \(\mathbb{K}\) is convex and \(\operatorname{conv}(A\cap B)\subseteq\operatorname{conv}(A)\cap\operatorname{ conv}(B)\). Therefore, we focus on establishing \(\operatorname{conv}(\mathcal{S}(\Delta,\mathbb{K}))\supseteq\mathcal{S}( \operatorname{conv}(\Delta),\mathbb{K})\). Consider \((\bar{\boldsymbol{\alpha}},\bar{\boldsymbol{\delta}})\in\mathcal{S}( \operatorname{conv}(\Delta),\mathbb{K})\). Then, as \(\bar{\boldsymbol{\delta}}\in\operatorname{conv}(\Delta)\), by Caratheodory's theorem, we have \(\bar{\boldsymbol{\delta}}=\sum_{k\in[q]}\lambda_{k}\boldsymbol{\delta}^{k}\) for some finite \(q\), \(\boldsymbol{\delta}^{k}\in\Delta\), and \(\lambda_{k}>0\) with \(\sum_{k\in[q]}\lambda_{k}=1\). For each \(k\in[q]\), we construct the vector \(\boldsymbol{\alpha}^{k}\) with subvectors \(\boldsymbol{\alpha}_{i}^{k}\) as follows
\[\boldsymbol{\alpha}_{i}^{k}:=\left\{\begin{aligned} &\bar{\boldsymbol{\alpha}}_{i},& \text{if }\bar{\delta}_{i}=0\\ &\bar{\boldsymbol{\alpha}}_{i}/\bar{\delta}_{i},& \text{if }\bar{\delta}_{i}\neq 0\text{ and }\delta_{i}^{k}=1\,.\\ & 0,&\text{if }\bar{\delta}_{i}\neq 0\text{ and }\delta_{i}^{k}=0\end{aligned}\right.\]
Next, we prove that \((\boldsymbol{\alpha}^{k},\boldsymbol{\delta}^{k})\in\mathcal{S}(\Delta, \mathbb{K})\) by showing that \(\boldsymbol{A}_{i}\boldsymbol{\alpha}_{i}^{k}+\boldsymbol{B}_{i}\delta_{i}^{k} \in\mathbb{K}_{i}\) for all \(k\in[q]\) and \(i\in[p]\). Consider any index \(i\in[p]\). Note that as \((\bar{\boldsymbol{\alpha}},\bar{\boldsymbol{\delta}})\in\mathcal{S}( \operatorname{conv}(\Delta),\mathbb{K})\), we have \(\boldsymbol{A}_{i}\bar{\boldsymbol{\alpha}}_{i}+\boldsymbol{B}_{i}\bar{ \delta}_{i}\in\mathbb{K}_{i}\). If \(\bar{\delta}_{i}=0\), then \(\boldsymbol{A}_{i}\bar{\boldsymbol{\alpha}}_{i}\in\mathbb{K}_{i}\). Also, we have \(\delta_{i}^{k}=0\) for all \(k\in[q]\) because \(\bar{\delta}_{i}=\sum_{k\in[q]}\lambda_{k}\delta_{i}^{k}=0\), \(\lambda_{k}>0\), and \(\delta_{i}^{k}\in\{0,1\}\) for every \(k\in[q]\). Thus, the point \((\boldsymbol{\alpha}_{i}^{k},\delta_{i}^{k})\) satisfies \(\boldsymbol{A}_{i}\boldsymbol{\alpha}_{i}^{k}+\boldsymbol{B}_{i}\delta_{i}^{k} =\boldsymbol{A}_{i}\bar{\boldsymbol{\alpha}}_{i}\in\mathbb{K}_{i}\) for every \(k\in[q]\). If \(\bar{\delta}_{i}\neq 0\), then \(\boldsymbol{A}_{i}(\bar{\boldsymbol{\alpha}}_{i}/\bar{\delta}_{i})+\boldsymbol {B}_{i}\in\mathbb{K}_{i}\) as \(\mathbb{K}_{i}\) is a cone. Moreover, as \(\bar{\delta}_{i}\neq 0\), there exists at least one index \(\hat{k}\in[q]\) with \(\delta_{i}^{\hat{k}}=1\). For any \(k\in[q]\) such that \(\delta_{i}^{k}=1\), by definition \((\boldsymbol{\alpha}_{i}^{k},\delta_{i}^{k})=(\bar{\boldsymbol{\alpha}}_{i}/ \bar{\delta}_{i},1)\) and thus it satisfies \(\boldsymbol{A}_{i}\boldsymbol{\alpha}_{i}^{k}+\boldsymbol{B}_{i}\delta_{i}^{k} =\boldsymbol{A}_{i}(\bar{\boldsymbol{\alpha}}_{i}/\bar{\delta}_{i})+ \boldsymbol{B}_{i}\in\mathbb{K}_{i}\). Finally, for any \(k\in[q]\) such that \(\delta_{i}^{k}=0\), as \(\bar{\delta}_{i}\neq 0\), by definition \((\boldsymbol{\alpha}_{i}^{k},\delta_{i}^{k})=(\boldsymbol{0},0)\) and thus it satisfies \(\boldsymbol{A}_{i}\boldsymbol{\alpha}_{i}^{k}+\boldsymbol{B}_{i}\delta_{i}^{k} =\boldsymbol{0}\in\mathbb{K}_{i}\) as \(\mathbb{K}_{i}\) contains the origin. All in all, we have shown that \(\boldsymbol{A}_{i}\boldsymbol{\alpha}_{i}^{k}+\boldsymbol{B}_{i}\delta_{i}^{k} \in\mathbb{K}_{i}\) for all \(k\in[q]\) and \(i\in[p]\), which implies that \((\boldsymbol{\alpha}^{k},\boldsymbol{\delta}^{k})\in\mathcal{S}(\Delta, \mathbb{K})\) for all \(k\in[q]\).
We next prove that \((\bar{\boldsymbol{\alpha}},\bar{\boldsymbol{\delta}})=\sum_{k\in[q]}\lambda_{k }(\boldsymbol{\alpha}^{k},\boldsymbol{\delta}^{k})\). Recall that \(\bar{\boldsymbol{\delta}}=\sum_{k\in[q]}\lambda_{k}\boldsymbol{\delta}^{k}\) where \(\boldsymbol{\delta}^{k}\in\Delta\) for all \(k\in[q]\). Then, \(\bar{\delta}_{i}=\sum_{k\in[q]}\lambda_{k}\delta_{i}^{k}=\sum_{k\in[q]:\delta_{i }^{k}=1}\lambda_{k}\) holds for all \(i\in[p]\). Moreover, for any \(i\in[p]\) with \(\bar{\delta}_{i}\neq 0\), from the definition of \(\boldsymbol{\alpha}^{k}\), we deduce that
\[\sum_{k\in[q]}\lambda_{k}\boldsymbol{\alpha}_{i}^{k}=\sum_{k\in[q]:\delta_{i}^{ k}=1}\lambda_{k}\boldsymbol{\alpha}_{i}^{k}+\sum_{k\in[q]:\delta_{i}^{k}=0} \lambda_{k}\boldsymbol{\alpha}_{i}^{k}=\frac{\bar{\boldsymbol{\alpha}}_{i}}{ \bar{\delta}_{i}}\sum_{k\in[q]:\delta_{i}^{k}=1}\lambda_{k}=\bar{\boldsymbol {\alpha}}_{i},\]
where the last equality follows from \(\bar{\delta}_{i}=\sum_{k\in[q]:\delta_{i}^{k}=1}\lambda_{k}\). Furthermore, for any \(i\in[p]\) with \(\bar{\delta}_{i}=0\), by the definition of \(\boldsymbol{\alpha}^{k}\) we have \(\sum_{k\in[q]}\lambda_{k}\boldsymbol{\alpha}_{i}^{k}=\sum_{k\in[q]}\lambda_{k} \bar{\boldsymbol{\alpha}}_{i}=\bar{\boldsymbol{\alpha}}_{i}\). Thus, we conclude that \((\bar{\boldsymbol{\alpha}},\bar{\boldsymbol{\delta}})=\sum_{k\in[q]}\lambda_{k }(\boldsymbol{\alpha}^{k},\boldsymbol{\delta}^{k})\), which shows \((\bar{\boldsymbol{\alpha}},\bar{\boldsymbol{\delta}})\in\operatorname{conv}( \mathcal{S}(\Delta,\mathbb{K}))\), as desired.
The main result of this section depends on the following conditions.
Theorem 3.1: _There exist a point \(\boldsymbol{\delta}\in\Delta\) with \(\delta_{i}=1\) for every \(i\in[p]\). For any \((\boldsymbol{\alpha}_{i},\delta_{i})\in\mathbb{R}^{m_{i}}\times\mathbb{R}_{++}\), if \(\boldsymbol{A}_{i}\boldsymbol{\alpha}_{i}+\boldsymbol{B}_{i}\delta_{i}\in \operatorname{cl}(\mathbb{K}_{i})\), then \(\boldsymbol{A}_{i}\boldsymbol{\alpha}_{i}+\boldsymbol{B}_{i}\delta_{i}\in \operatorname{cl}_{i}\). Additionally, \(\boldsymbol{B}_{i}\in\mathbb{K}_{i}\) for every \(i\in[p]\)._
The first condition in Assumption 1 holds without loss of generality. Suppose that there exists an index \(i\in[p]\) such that for any \(\mathbf{\delta}\in\Delta\), \(\delta_{i}=0\). In this case, the continuous subvector \(\mathbf{\alpha}_{i}\) has no link with the binary value \(\delta_{i}\). Hence, the convex hull of \(\mathcal{S}(\Delta,\mathbb{K})\) can be computed from the convex hull of a lower dimensional set obtained by eliminating the subvector \(\mathbf{\alpha}_{i}\) from \(\mathbf{\alpha}\) and the value \(\delta_{i}\) from \(\mathbf{\delta}\). The elimination process continues until the first condition is met. In addition, the second condition implies that \(\mathbb{K}_{i}\) and its closure coincide for any \(\delta_{i}>0\). The next theorem establishes a description for the closure of the convex hull of \(\mathcal{S}(\Delta,\mathbb{K})\) under these conditions.
Theorem 1: _Under the assumptions of Proposition 1 and Assumption 1, we have_
\[\operatorname{cl\,conv}(\mathcal{S}(\Delta,\mathbb{K}))=\mathcal{S}(\operatorname {conv}(\Delta),\operatorname{cl}(\mathbb{K})).\]
Proof: We proceed by showing that \(\operatorname{cl\,conv}(\mathcal{S}(\Delta,\mathbb{K}))\subseteq\mathcal{S}(\operatorname{conv}(\Delta),\operatorname{cl}(\mathbb{K}))\) and \(\operatorname{cl\,conv}(\mathcal{S}(\Delta,\mathbb{K}))\supseteq\mathcal{S}(\operatorname{conv}(\Delta),\operatorname{cl}(\mathbb{K}))\). The first direction trivially holds as \(\operatorname{conv}(\Delta)\) is a bounded polytope, \(\mathbb{K}\) is a convex set, and \(\operatorname{cl\,conv}(A\cap B)\subseteq\operatorname{cl\,conv}(A)\cap\operatorname{cl\,conv}(B)\). Therefore, in the sequel we focus on \(\operatorname{cl\,conv}(\mathcal{S}(\Delta,\mathbb{K}))\supseteq\mathcal{S}(\operatorname{conv}(\Delta),\operatorname{cl}(\mathbb{K}))\). Consider any \((\bar{\mathbf{\alpha}},\bar{\mathbf{\delta}})\in\mathcal{S}(\operatorname{conv}(\Delta),\operatorname{cl}(\mathbb{K}))\). Then as \(\bar{\mathbf{\delta}}\in\operatorname{conv}(\Delta)\), by Caratheodory's theorem, we have \(\bar{\mathbf{\delta}}=\sum_{k\in[q]}\lambda_{k}\mathbf{\delta}^{k}\) for some finite \(q\), \(\mathbf{\delta}^{k}\in\Delta\), and \(\lambda_{k}>0\) with \(\sum_{k\in[q]}\lambda_{k}=1\). Moreover, we also have \(\mathbf{A}_{i}\bar{\mathbf{\alpha}}_{i}+\mathbf{B}_{i}\bar{\delta}_{i}\in\operatorname{cl}(\mathbb{K}_{i})\). For all \(i\in[p]\), we let \(\hat{\mathbf{\alpha}}_{i}:=\bar{\mathbf{\alpha}}_{i}\cdot\operatorname{sign}(\bar{\delta}_{i})\) and \(\tilde{\mathbf{\alpha}}_{i}:=\bar{\mathbf{\alpha}}_{i}\cdot(1-\operatorname{sign}(\bar{\delta}_{i}))\) so that \((\bar{\mathbf{\alpha}}_{i},\bar{\delta}_{i})=(\hat{\mathbf{\alpha}}_{i},\bar{\delta}_{i})+(\tilde{\mathbf{\alpha}}_{i},0)\), and we define the vectors \(\hat{\mathbf{\alpha}}\) and \(\tilde{\mathbf{\alpha}}\) based on the subvectors \(\hat{\mathbf{\alpha}}_{i}\) and \(\tilde{\mathbf{\alpha}}_{i}\), respectively.
We first prove that \((\hat{\mathbf{\alpha}},\bar{\mathbf{\delta}})\in\operatorname{conv}(\mathcal{S}( \Delta,\mathbb{K}))\). Recall that \(\bar{\mathbf{\delta}}\in\operatorname{conv}(\Delta)\). Consider any index \(i\in[p]\). If \(\bar{\delta}_{i}=0\), then \(\hat{\mathbf{\alpha}}_{i}=\mathbf{0}\), which implies that \(\mathbf{A}_{i}\hat{\mathbf{\alpha}}_{i}+\mathbf{B}_{i}\bar{\delta}_{i}=\mathbf{0}\in\mathbb{K }_{i}\) as \(\mathbb{K}_{i}\) contains the origin. Moreover, if \(\bar{\delta}_{i}\neq 0\), then \(\hat{\mathbf{\alpha}}_{i}=\bar{\mathbf{\alpha}}_{i}\) and \(\mathbf{A}_{i}\hat{\mathbf{\alpha}}_{i}+\mathbf{B}_{i}\bar{\delta}_{i}=\mathbf{A}_{i}\bar{ \mathbf{\alpha}}_{i}+\mathbf{B}_{i}\bar{\delta}_{i}\in\operatorname{cl}(\mathbb{K}_{i})\). Then, as \(\bar{\delta}_{i}\neq 0\), by the second condition in Assumption 1, we conclude that \(\mathbf{A}_{i}\hat{\mathbf{\alpha}}_{i}+\mathbf{B}_{i}\bar{\delta}_{i}\in\mathbb{K}_{i}\). Hence, \(\mathbf{A}_{i}\hat{\mathbf{\alpha}}_{i}+\mathbf{B}_{i}\bar{\delta}_{i}\in\mathbb{K}_{i}\) for all \(i\in[p]\), and we have \((\hat{\mathbf{\alpha}},\bar{\mathbf{\delta}})\in\mathcal{S}(\operatorname{conv}(\Delta ),\mathbb{K})\). Then, the claim follows from Proposition 1 as \(\mathcal{S}(\operatorname{conv}(\Delta),\mathbb{K})=\operatorname{conv}( \mathcal{S}(\Delta,\mathbb{K}))\).
We next show that \((\tilde{\mathbf{\alpha}},\mathbf{0})\in\mathcal{S}(\operatorname{conv}(\Delta),\operatorname{cl}(\mathbb{K}))\). Consider any index \(i\in[p]\). Recall that \(\tilde{\mathbf{\alpha}}_{i}=\bar{\mathbf{\alpha}}_{i}\cdot(1-\operatorname{sign}(\bar{\delta}_{i}))\). If \(\bar{\delta}_{i}\neq 0\), then \(\tilde{\mathbf{\alpha}}_{i}=\mathbf{0}\) and we have \(\mathbf{A}_{i}\tilde{\mathbf{\alpha}}_{i}=\mathbf{0}\in\operatorname{cl}(\mathbb{K}_{i})\) as \(\mathbb{K}_{i}\) contains the origin. Additionally, if \(\bar{\delta}_{i}=0\), then \(\tilde{\mathbf{\alpha}}_{i}=\bar{\mathbf{\alpha}}_{i}\) and we have \(\mathbf{A}_{i}\tilde{\mathbf{\alpha}}_{i}=\mathbf{A}_{i}\bar{\mathbf{\alpha}}_{i}+\mathbf{B}_{i}\bar{\delta}_{i}\in\operatorname{cl}(\mathbb{K}_{i})\) (as \((\bar{\mathbf{\alpha}},\bar{\mathbf{\delta}})\in\mathcal{S}(\operatorname{conv}(\Delta),\operatorname{cl}(\mathbb{K}))\) and \((\bar{\mathbf{\alpha}},\bar{\mathbf{\delta}})=(\hat{\mathbf{\alpha}},\bar{\mathbf{\delta}})+(\tilde{\mathbf{\alpha}},\mathbf{0})\)). Thus, these two observations together imply that \((\tilde{\mathbf{\alpha}},\mathbf{0})\in\mathcal{S}(\operatorname{conv}(\Delta),\operatorname{cl}(\mathbb{K}))\).
Next, we write \((\tilde{\mathbf{\alpha}},\mathbf{0})\) as a limit of sums of points in \(\mathcal{S}(\Delta,\mathbb{K})\). By the first condition in Assumption 1, for any \(j\in[d]\), there exists a vector \(\tilde{\mathbf{\delta}}^{j}\in\Delta\) with \(\tilde{\delta}_{j}^{j}=1\). Using these binary vectors and introducing the vectors \(\tilde{\mathbf{\alpha}}^{j},j\in[d]\), with subvectors \(\tilde{\mathbf{\alpha}}_{j}^{j}:=\tilde{\mathbf{\alpha}}_{j}\) and \(\tilde{\mathbf{\alpha}}_{i}^{j}:=\mathbf{0}\) for \(i\neq j\), we have
\[(\tilde{\mathbf{\alpha}},\mathbf{0})=\lim_{\varepsilon\downarrow 0}\sum_{j\in[d]}\varepsilon(\tilde{\mathbf{\alpha}}^{j}/\varepsilon,\tilde{\mathbf{\delta}}^{j}).\]
Note that the points \((\tilde{\mathbf{\alpha}}^{j}/\varepsilon,\tilde{\mathbf{\delta}}^{j})\in\mathcal{S}( \Delta,\mathbb{K})\) for any \(\varepsilon>0\) and \(j\in[d]\). First consider any \(j\in[d]\) such that \(\bar{\delta}_{j}\neq 0\). Then, from \(\tilde{\mathbf{\alpha}}_{j}=\bar{\mathbf{\alpha}}_{j}\cdot(1-\operatorname{sign}(\bar{ \delta}_{j}))\)
we deduce that \(\tilde{\mathbf{\alpha}}_{j}^{j}=\tilde{\mathbf{\alpha}}_{j}=\mathbf{0}\). As by definition \(\tilde{\mathbf{\alpha}}_{i}^{j}=\mathbf{0}\) for all \(i\neq j\), we conclude that for any \(j\in[d]\) such that \(\bar{\delta}_{j}\neq 0\) we must have \(\tilde{\mathbf{\alpha}}^{j}=\mathbf{0}\). Then, for all \(i\in[p]\), \(\mathbf{A}_{i}(\tilde{\mathbf{\alpha}}_{i}^{j}/\varepsilon)+\mathbf{B}_{i}\tilde{\delta}_{i}^{j}=\mathbf{B}_{i}\tilde{\delta}_{i}^{j}\in\mathbb{K}_{i}\) where the last relation follows from \(\mathbf{B}_{i}\in\mathbb{K}_{i}\) (see Assumption 1). Thus, we conclude \((\tilde{\mathbf{\alpha}}^{j}/\varepsilon,\tilde{\mathbf{\delta}}^{j})\in\mathcal{S}(\Delta,\mathbb{K})\) for all \(j\in[d]\) such that \(\bar{\delta}_{j}\neq 0\). Now consider any \(j\in[d]\) such that \(\bar{\delta}_{j}=0\). Then, by definition we have \(\tilde{\mathbf{\alpha}}_{j}^{j}=\bar{\mathbf{\alpha}}_{j}\) and \(\tilde{\mathbf{\alpha}}_{i}^{j}=\mathbf{0}\) for \(i\neq j\). Hence, we have \(\mathbf{A}_{j}\tilde{\mathbf{\alpha}}_{j}^{j}=\mathbf{A}_{j}\bar{\mathbf{\alpha}}_{j}+\mathbf{B}_{j}\bar{\delta}_{j}\in\mathrm{cl}(\mathbb{K}_{j})\) (as \((\bar{\mathbf{\alpha}},\bar{\mathbf{\delta}})\in\mathcal{S}(\mathrm{conv}(\Delta),\mathrm{cl}(\mathbb{K}))\)) and \(\mathbf{B}_{j}\varepsilon\in\mathrm{cl}(\mathbb{K}_{j})\) (as \(\mathbf{B}_{j}\in\mathbb{K}_{j}\) by Assumption 1). Since \(\mathrm{cl}(\mathbb{K}_{j})\) is a convex cone, we deduce that \(\mathbf{A}_{j}\tilde{\mathbf{\alpha}}_{j}^{j}+\mathbf{B}_{j}\varepsilon\in\mathrm{cl}(\mathbb{K}_{j})\). Since \(\varepsilon>0\), by applying the second condition in Assumption 1 we conclude \(\mathbf{A}_{j}(\tilde{\mathbf{\alpha}}_{j}^{j}/\varepsilon)+\mathbf{B}_{j}\tilde{\delta}_{j}^{j}\in\mathbb{K}_{j}\). Moreover, for \(i\neq j\), we have \(\mathbf{A}_{i}(\tilde{\mathbf{\alpha}}_{i}^{j}/\varepsilon)+\mathbf{B}_{i}\tilde{\delta}_{i}^{j}=\mathbf{B}_{i}\tilde{\delta}_{i}^{j}\in\mathbb{K}_{i}\) where the last relation follows from \(\mathbf{B}_{i}\in\mathbb{K}_{i}\) (see Assumption 1). Altogether, these show that \((\tilde{\mathbf{\alpha}}^{j}/\varepsilon,\tilde{\mathbf{\delta}}^{j})\in\mathcal{S}(\Delta,\mathbb{K})\) for all \(j\in[d]\).
Using these relations, we may thus write
\[(\bar{\mathbf{\alpha}},\bar{\mathbf{\delta}})=\lim_{\varepsilon\downarrow 0}\ (1-\varepsilon d)(\hat{\mathbf{\alpha}},\bar{\mathbf{\delta}})+\sum_{j\in[d]}\varepsilon(\tilde{\mathbf{\alpha}}^{j}/\varepsilon,\tilde{\mathbf{\delta}}^{j}).\]
Then, \((\bar{\mathbf{\alpha}},\bar{\mathbf{\delta}})\) is written as the limit of convex combinations of points from \(\mathcal{S}(\Delta,\mathbb{K})\). Thus, \((\bar{\mathbf{\alpha}},\bar{\mathbf{\delta}})\in\mathrm{cl}\,\mathrm{conv}(\mathcal{S }(\Delta,\mathbb{K}))\), as desired.
### Perspective function and its closure
The perspective function plays an important role in our analysis. For a proper lower semicontinuous and convex function \(h:\mathbb{R}^{d}\to\overline{\mathbb{R}}\) with \(h(\mathbf{0})=0\), we define its _perspective function_ \(h^{+}:\mathbb{R}^{d}\times\mathbb{R}_{+}\to\overline{\mathbb{R}}\) as
\[h^{+}(\mathbf{x},w):=\begin{cases}w\,h(\mathbf{x}/w),&\text{if }w>0,\\ 0,&\text{if }w=0\text{ and }\mathbf{x}=\mathbf{0},\\ +\infty,&\text{otherwise}.\end{cases}\]
The epigraph of \(h^{+}\) is given by
\[\mathrm{epi}(h^{+}):=\left\{(t,\mathbf{x},w)\!\in\!\mathbb{R}\times\mathbb{R}^{d} \times\mathbb{R}_{+}:\ t\geq h^{+}(\mathbf{x},w)\right\}.\]
Note that our definition of perspective function \(h^{+}\) is almost matching with the classical definition of the perspective function given in [27; 15] as
\[\widetilde{h}(\mathbf{x},w):=\begin{cases}w\,h(\mathbf{x}/w),&\text{if }w>0,\\ +\infty,&\text{otherwise},\end{cases}\]
where the main distinction between \(h^{+}\) and \(\widetilde{h}\) is that \(h^{+}(\mathbf{0},0)=0\) whereas \(\widetilde{h}(\mathbf{0},0)=+\infty\). We first establish that the epigraph of the perspective function \(h^{+}\) is a convex cone under standard assumptions.
Lemma 2: _Let \(h:\mathbb{R}^{d}\to\overline{\mathbb{R}}\) be a proper lower semicontinuous and convex function with \(h(\mathbf{0})=0\). Then, \(\mathrm{epi}(h^{+})\) is a convex cone containing the origin._
Proof: Since \(h\) is assumed to be proper, lower semicontinuous and convex, the perspective function \(h^{+}\) is proper and convex. This follows from (33, p. 35). Therefore, the set \(\operatorname{epi}(h^{+})\) is convex. Moreover, \((0,\mathbf{0},0)\in\operatorname{epi}(h^{+})\) as \(h^{+}(\mathbf{0},0)=0\). Additionally, \(\operatorname{epi}(h^{+})\) is indeed a cone, i.e., for any \(\lambda>0\) and \((t,\boldsymbol{x},w)\in\operatorname{epi}(h^{+})\), we have \((\lambda t,\lambda\boldsymbol{x},\lambda w)\in\operatorname{epi}(h^{+})\). This is because for any \(w>0\) and any \((t,\boldsymbol{x},w)\in\operatorname{epi}(h^{+})\), we have \(w\,h(\boldsymbol{x}/w)\leq t\), or equivalently \(\lambda w\,h(\lambda\boldsymbol{x}/(\lambda w))\leq\lambda t\); if instead \(w=0\), then necessarily \(\boldsymbol{x}=\mathbf{0}\) and \(t\geq 0\), so \((\lambda t,\mathbf{0},0)\in\operatorname{epi}(h^{+})\) as well. We thus conclude that \(\operatorname{epi}(h^{+})\) is a cone containing the origin.
While \(\operatorname{epi}(h^{+})\) is a convex cone under standard assumptions, it need not be closed. Therefore, we also study the closure of the perspective function. Recall that for a proper lower semicontinuous and convex function \(h:\mathbb{R}^{d}\to\overline{\mathbb{R}}\) with \(h(\mathbf{0})=0\), the _closure of the perspective function_ \(h^{\pi}:\mathbb{R}^{d}\times\mathbb{R}_{+}\to\overline{\mathbb{R}}\) is defined as
\[h^{\pi}(\boldsymbol{x},w):=\begin{cases}w\,h(\boldsymbol{x}/w),&\text{if }w>0,\\ \lim_{s\downarrow 0}s\,h(\boldsymbol{x}/s),&\text{if }w=0,\\ +\infty,&\text{otherwise}.\end{cases}\]
It is well known that the closure of \(\widetilde{h}\), and consequently \(h^{+}\), is given by \(h^{\pi}\); see for example (27, Proposition 2.2.2). The epigraph of \(h^{\pi}\) is given by \(\operatorname{epi}(h^{\pi})\!=\!\{(t,\boldsymbol{x},w)\!\in\!\mathbb{R}\times \mathbb{R}^{d}\times\mathbb{R}_{+}\!\colon\!t\geq h^{\pi}(\boldsymbol{x},w)\}\). We analyze the cone generated by the perspective function and its closure in the next lemma.
Lemma 3: _Let \(h:\mathbb{R}^{d}\to\overline{\mathbb{R}}\) be a proper lower semicontinuous and convex function such that \(h(\mathbf{0})=0\). Then, \(\operatorname{cl}(\operatorname{epi}(h^{+}))=\operatorname{epi}(h^{\pi})\). If, additionally, \(\operatorname{epi}(h)\) does not contain a line, \(\operatorname{epi}(h^{\pi})\) is pointed._
Proof: Note that the perspective function \(h^{+}(\boldsymbol{x},w)\) coincides with its closure \(h^{\pi}(\boldsymbol{x},w)\) for \(w>0\) and \((\mathbf{0},0)\in\operatorname{dom}(h^{\pi})\). Then, by lower semicontinuity of \(h^{\pi}\) (see (33, p. 37 and Theorem 13.3)), we conclude \(\operatorname{cl}(h^{+})=h^{\pi}\). Therefore, we have \(\operatorname{cl}(\operatorname{epi}(h^{+}))=\operatorname{epi}(h^{\pi})\).
To see that \(\operatorname{epi}(h^{\pi})\) is pointed, suppose both \((-\bar{t},-\bar{\boldsymbol{x}},-\bar{w}),(\bar{t},\bar{\boldsymbol{x}},\bar{ w})\!\in\!\operatorname{epi}(h^{\pi})\). Then, we must have \(\bar{w}=0\). The closure of the perspective function at \((\boldsymbol{x},0)\) satisfies
\[h^{\pi}(\boldsymbol{x},0)=\lim_{w\downarrow 0}w\,h(\boldsymbol{x}/w)=\sup \left\{\boldsymbol{g}^{\top}\boldsymbol{x}:\boldsymbol{g}\in\operatorname{ dom}(h^{*})\right\},\]
where \(h^{*}\) denotes the conjugate of \(h\) and the last equality follows from (33, p. 37 and Theorem 13.3). Thus, as both \((-\bar{t},-\bar{\boldsymbol{x}},0),(\bar{t},\bar{\boldsymbol{x}},0)\in \operatorname{epi}(h^{\pi})\), we have
\[\begin{cases}h^{\pi}(\bar{\boldsymbol{x}},0)=\sup\left\{\boldsymbol{g}^{\top} \bar{\boldsymbol{x}}:\boldsymbol{g}\in\operatorname{dom}(h^{*})\right\}\leq \bar{t}\\ h^{\pi}(-\bar{\boldsymbol{x}},0)=\sup\left\{-\boldsymbol{g}^{\top}\bar{ \boldsymbol{x}}:\boldsymbol{g}\in\operatorname{dom}(h^{*})\right\}\leq-\bar{t},\end{cases}\]
which enforces that \(\boldsymbol{g}^{\top}\bar{\boldsymbol{x}}=\bar{t}\) for all \(\boldsymbol{g}\in\operatorname{dom}(h^{*})\). Hence, for any \(\gamma\in\mathbb{R}\), the function \(h\) satisfies
\[h(\gamma\bar{\boldsymbol{x}})=\sup_{\boldsymbol{g}\in\operatorname{dom}(h^{*}) }\gamma\boldsymbol{g}^{\top}\bar{\boldsymbol{x}}-h^{*}(\boldsymbol{g})=\sup_{ \boldsymbol{g}\in\operatorname{dom}(h^{*})}\gamma\bar{t}-h^{*}(\boldsymbol{g })=\gamma\bar{t}-h(\mathbf{0})=\gamma\bar{t},\]
where the first equality holds because \(h=h^{**}\) (as \(h\) is a proper lower semicontinuous and convex function and thus (33, Theorem 12.2) applies), the second equality follows from the observations that \(\mathbf{g}^{\top}\bar{\mathbf{x}}=\bar{t}\), the third equality follows from the definition of biconjugate function and the relation \(h(\mathbf{0})=h^{**}(\mathbf{0})=0\). If \((\bar{t},\bar{\mathbf{x}})\neq\mathbf{0}\), we have \((\gamma\bar{t},\gamma\bar{\mathbf{x}})\in\mathrm{epi}(h)\) for any \(\gamma\in\mathbb{R}\), which contradicts our assumption that \(\mathrm{epi}(h)\) does not contain a line. Hence, we have shown that \((-\bar{t},-\bar{\mathbf{x}},-\bar{w}),(\bar{t},\bar{\mathbf{x}},\bar{w})\in\mathrm{epi} (h^{\pi})\) only if \((\bar{t},\bar{\mathbf{x}},\bar{w})=\mathbf{0}\), implying that \(\mathrm{epi}(h^{\pi})\) is pointed. This then completes the proof.
Lemma 3 extends (32, Proposition 4) to non-differentiable proper functions. For univariate functions, the requirement that \(\mathrm{epi}(h)\) does not contain a line means that \(h\) is a nonlinear function. We next recall that whenever \(\mathrm{epi}(h)\) admits a conic representation so will \(\mathrm{epi}(h^{\pi})\) under a minor condition.
Remark 1: Suppose that the function \(h:\mathbb{R}^{d}\to\overline{\mathbb{R}}\) is lower semicontinuous and convex, and its epigraph admits the conic representation
\[\mathrm{epi}(h)=\left\{(t,\mathbf{x})\in\mathbb{R}\times\mathbb{R}^{d}:\ \exists\mathbf{u}\ \text{s.t.}\ \mathbf{H}_{t}t+\mathbf{H}_{x}\mathbf{x}+\mathbf{H}_{u}\mathbf{u}+\mathbf{H}_{0}\in\mathbb{H}\right\}\]
for some appropriate matrices \(\mathbf{H}_{t},\mathbf{H}_{x},\mathbf{H}_{u}\) and \(\mathbf{H}_{0}\), and a regular cone \(\mathbb{H}\). Provided that \(\mathbf{H}_{u}\mathbf{u}\in\mathbb{H}\) implies that \(\mathbf{H}_{u}\mathbf{u}=\mathbf{0}\), Ben-Tal and Nemirovski (8, Proposition 2.3.2) show that the epigraph of \(h^{\pi}\) admits the conic representation
\[\mathrm{epi}(h^{\pi})\!=\!\left\{(t,\mathbf{x},w)\in\mathbb{R}\times\mathbb{R}^{d} \times\mathbb{R}_{+}\!:\!\exists\mathbf{u}\ \text{s.t.}\ \mathbf{H}_{t}t+\mathbf{H}_{x}\mathbf{x}+\mathbf{H}_{u}\mathbf{u}+\mathbf{H}_{0}w\in \mathbb{H}\right\}.\]
While [8] present this conic representation only for conic quadratic representable functions, the result and its proof immediately extend to regular cones \(\mathbb{H}\) as discussed in (8, Section 2.3.7).
We end this subsection by analyzing the closed convex hull of a mixed-binary set that will serve as the primary substructure in the following section. In the sequel the first component of the subvector \(\mathbf{\beta}_{i}\) is denoted by \(\beta_{i,1}\).
Proposition 2: _Consider the mixed-binary set_
\[\mathcal{W}\!:=\!\left\{(\mathbf{\beta},\mathbf{\gamma},\mathbf{\delta})\!\in\!\mathbb{R} ^{m}\times\mathbb{R}^{p}\times\Delta:\begin{array}{l}h_{i}(\beta_{i,1})\leq \gamma_{i},\ \forall i\in[p],\\ \beta_{i,1}(1-\delta_{i})=0,\ \mathbf{C}_{i}\mathbf{\beta}_{i}\in\mathbb{C}_{i},\ \forall i\in[p]\end{array}\right\}\!, \tag{5}\]
_where \(\Delta\!\subseteq\!\left\{0,1\right\}^{n}\), \(h_{i}:\mathbb{R}\!\to\!\overline{\mathbb{R}}\) is a proper lower semicontinuous convex function with \(h_{i}(0)=0\), and \(\mathbb{C}_{i}\) is a closed convex cone. If for any \(i\in[p]\) there exists a point \(\mathbf{\delta}\in\Delta\) with \(\delta_{i}=1\), then_
\[\mathrm{cl\,conv}(\mathcal{W})=\left\{(\mathbf{\beta},\mathbf{\gamma},\mathbf{\delta}) \in\mathbb{R}^{m}\times\mathbb{R}^{p}\times\mathrm{conv}(\Delta):\begin{array} []{l}h_{i}^{\pi}(\beta_{i,1},\delta_{i})\leq\gamma_{i},\ \forall i\in[p],\\ \mathbf{C}_{i}\mathbf{\beta}_{i}\in\mathbb{C}_{i},\ \forall i\in[p]\end{array}\right\}.\]
Proof: By the definition of the perspective function, the set \(\mathcal{W}\) is reformulated as
\[\mathcal{W}=\left\{(\mathbf{\beta},\mathbf{\gamma},\mathbf{\delta})\in\mathbb{R}^{m} \times\mathbb{R}^{p}\times\Delta:\begin{array}{l}h_{i}^{+}(\beta_{i,1}, \delta_{i})\leq\gamma_{i},\ \forall i\in[p],\\ \mathbf{C}_{i}\mathbf{\beta}_{i}\in\mathbb{C}_{i},\ \forall i\in[p]\end{array} \right\}.\]
We next show that \(\mathcal{W}\) is an instance of the conic mixed-binary set (4). To see this, note that for every \(i\in[p]\) the constraint \(h_{i}^{+}(\beta_{i,1},\delta_{i})\leq\gamma_{i}\) is simply \((\gamma_{i},\beta_{i,1},\delta_{i})\in\mathrm{epi}(h_{i}^{+})\). As \(h_{i}\) is a proper lower semicontinuous convex function with \(h_{i}(0)=0\), by Lemma 2, \(\mathrm{epi}(h_{i}^{+})\) is a convex cone containing the origin. Thus, letting \(\mathbf{\alpha}_{i}:=(\mathbf{\beta}_{i},\gamma_{i})\) for all \(i\in[p]\), we can express the requirements \((\gamma_{i},\beta_{i,1},\delta_{i})\in\mathrm{epi}(h_{i}^{+})\) and \(\mathbf{C}_{i}\mathbf{\beta}_{i}\in\mathbb{C}_{i}\) in the conic form \(\mathbf{A}_{i}\mathbf{\alpha}_{i}+\mathbf{B}_{i}\delta_{i}\in\mathbb{K}_{i}\) with
\[\mathbf{A}_{i}=\begin{bmatrix}\mathbf{0}^{\top}&1\\ \mathbf{e}_{1}^{\top}&0\\ \mathbf{0}^{\top}&0\\ \mathbf{C}_{i}&0\end{bmatrix},\ \mathbf{B}_{i}=\begin{bmatrix}0\\ 0\\ 1\\ \mathbf{0}\end{bmatrix},\ \mathbb{K}_{i}=\mathrm{epi}(h_{i}^{+})\times\mathbb{C}_{i}.\]
Thus, we have \(\mathcal{W}=\mathcal{S}(\Delta,\mathbb{K})\) with \(\mathbb{K}=\times_{i\in[p]}\,\mathbb{K}_{i}\). Moreover, by Lemma 2, \(\mathrm{epi}(h_{i}^{+})\) is a convex cone containing the origin. Hence, \(\mathbb{K}_{i}\) is also a convex cone containing the origin (as \(\mathbb{C}_{i}\) is a closed convex cone as well by assumption). Therefore, the requirements of Proposition 1 are met. In the following we verify the conditions in Assumption 1. The first condition holds as we have assumed that for any \(i\in[p]\) there exists a \(\mathbf{\delta}\in\Delta\) with \(\delta_{i}=1\). For all \(i\in[p]\), by Lemma 3, we have \(\mathrm{cl}(\mathbb{K}_{i})=\mathrm{epi}(h_{i}^{\pi})\times\mathbb{C}_{i}\) as \(\mathbb{C}_{i}\) is assumed to be closed. Thus, the second condition of Assumption 1 easily follows from the definitions of the perspective function and its closure. Finally, notice that \(h_{i}^{+}(0,1)=h_{i}(0)=0\), implying that \((0,0,1)\in\mathrm{epi}(h_{i}^{+})\). Additionally, \(\mathbf{0}\in\mathbb{C}_{i}\) as \(\mathbb{C}_{i}\) is a closed convex cone. Hence, \(\mathbf{B}_{i}\in\mathbb{K}_{i}\) for all \(i\in[p]\), and the last condition of Assumption 1 also holds. Therefore, from Theorem 1 we deduce that \(\mathrm{cl}\,\mathrm{conv}(\mathcal{W})\) is obtained by replacing the set \(\Delta\) and the cones \(\mathbb{K}_{i}\) with \(\mathrm{conv}(\Delta)\) and \(\mathrm{cl}(\mathbb{K}_{i})\), respectively. Finally, as \(\mathbb{K}_{i}=\mathrm{epi}(h_{i}^{+})\times\mathbb{C}_{i}\), from Lemma 3 we deduce that \(\mathrm{cl}(\mathbb{K}_{i})=\mathrm{epi}(h_{i}^{\pi})\times\mathbb{C}_{i}\), which then concludes the proof.
## 3 Applications
In this section we analyze the convex hull of the sets \(\mathcal{H}\) and \(\mathcal{T}\). We assume that all univariate functions vanish at zero. This in fact holds without loss of generality whenever zero is in the domain of the functions, since we can always define a new function by subtracting the constant term \(h(0)\) from \(h\).
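For instance, if \(h(x)=(x-1)^{2}\), then \(h(0)=1\), and one can instead work with \(\tilde{h}(x)=(x-1)^{2}-1=x^{2}-2x\), which remains convex and satisfies \(\tilde{h}(0)=0\).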
### Separable functions
As a warm up for the subsequent sections, we start by analyzing the separable function case in this subsection. Recall the mixed-binary set
\[\mathcal{H}=\left\{(\tau,\mathbf{x},\mathbf{z})\in\mathbb{R}\times\mathcal{X}\times \mathcal{Z}:\begin{array}{c}\sum_{i\in[d]}h_{i}(x_{i})\leq\tau,\\ x_{i}(1-z_{i})=0,\ \forall i\in[d]\end{array}\right\}.\]
This type of set arises as a substructure in a number of applications such as Markowitz portfolio selection [18], network design [22], sparse learning [38; 11],
and low-rank regression [9]. The result of this section relies on the following assumption.
**Assumption 2**: _The set \(\mathcal{X}=\{\boldsymbol{x}\in\mathbb{R}^{d}:\ x_{i}\geq 0,\ \forall i\in\mathcal{I}\}\) for some set \(\mathcal{I}\subseteq[d]\). For any \(i\in[d]\), there exists a point \(\boldsymbol{z}\in\mathcal{Z}\subseteq\{0,1\}^{d}\) such that \(z_{i}=1\). The function \(h_{i}:\mathbb{R}\to\overline{\mathbb{R}}\) is proper, lower semicontinuous and convex with \(h_{i}(0)=0\) for all \(i\in[d]\)._
Theorem 2: _Under Assumption 2, the set \(\mathcal{H}\) defined in (2) satisfies_
\[\operatorname{cl\,conv}(\mathcal{H})=\left\{(\tau,\boldsymbol{x},\boldsymbol {z})\in\mathbb{R}\times\mathcal{X}\times\operatorname{conv}(\mathcal{Z}):\ \sum_{i\in[d]}h_{i}^{\pi}(x_{i},z_{i})\leq\tau\right\}.\]
Theorem 2 extends [36, Theorem 3] by allowing sign-constrained continuous variables and proper functions. Wei et al. [36] prove the convex hull result by showing that the support function of \(\mathcal{H}\) and the set presented in Theorem 2 coincide when \(\mathcal{I}=\varnothing\) and \(h\) is real-valued. In contrast, our proof is constructive and makes use of the hidden conic structure introduced by the perspective function. As a by-product, it allows us to easily include nonnegativity constraints on the set \(\mathcal{X}\).
We demonstrate the importance of the requirement imposed on the binary set \(\mathcal{Z}\) in Assumption 2 with an example. If \(\mathcal{Z}=\{\boldsymbol{0}\}\), then the set \(\mathcal{H}\) simplifies to \(\mathcal{H}=\{(\tau,\boldsymbol{0},\boldsymbol{0}):\tau\geq 0\}\). Thus, \(\operatorname{cl\,conv}(\mathcal{H})=\mathcal{H}\). In contrast, Theorem 2 with no restriction on \(\mathcal{Z}\) would suggest that the set \(\{(\tau,\boldsymbol{x},\boldsymbol{0}):\sum_{i\in[d]}h_{i}^{\pi}(x_{i},0)\leq\tau\}\) gives the closed convex hull of \(\mathcal{H}\), which may not be correct in this simple case. The requirement on the binary set \(\mathcal{Z}\), however, holds without loss of generality. If there exists an index \(i\in[d]\) such that \(z_{i}=0\) for every \(\boldsymbol{z}\in\mathcal{Z}\), then \(x_{i}=0\) due to the logical constraint \(x_{i}(1-z_{i})=0\). Thus, we can eliminate \(x_{i}\) and \(z_{i}\) from the set \(\mathcal{H}\) and compute \(\operatorname{cl\,conv}(\mathcal{H})\) from a lower dimensional set.
Proof of Theorem 2: We first introduce the auxiliary mixed-binary set
\[\overline{\mathcal{H}}:=\left\{(\boldsymbol{x},\boldsymbol{z},\boldsymbol{t} )\in\mathcal{X}\times\mathcal{Z}\times\mathbb{R}^{d}:\begin{array}{l}h_{i}( x_{i})\leq t_{i},\ \forall i\in[d],\\ x_{i}(1-z_{i})=0,\ \forall i\in[d]\end{array}\right\}.\]
By letting \(\boldsymbol{\beta}_{i}:=(x_{i}),\boldsymbol{\gamma}:=\boldsymbol{t},\boldsymbol{\delta}:=\boldsymbol{z},\Delta:=\mathcal{Z},\boldsymbol{C}_{i}=1,\mathbb{C}_{i}=\mathbb{R}_{+}\), the set \(\overline{\mathcal{H}}\) can be represented as an instance of the set \(\mathcal{W}\) defined as in (5). Hence, applying Proposition 2 yields
\[\operatorname{cl\,conv}(\overline{\mathcal{H}})=\left\{(\boldsymbol{x},\boldsymbol{z},\boldsymbol{t})\in\mathcal{X}\times\operatorname{conv}(\mathcal{Z})\times\mathbb{R}^{d}:h_{i}^{\pi}(x_{i},z_{i})\leq t_{i},\ \forall i\in[d]\right\}.\]
In the following we characterize \(\operatorname{cl\,conv}(\mathcal{H})\) in terms of \(\operatorname{cl\,conv}(\overline{\mathcal{H}})\). Notice that \(\mathcal{H}\!=\!\{(\tau,\boldsymbol{x},\boldsymbol{z})\!:\!\exists\boldsymbol{ t}\text{ s.t. }(\boldsymbol{x},\boldsymbol{z},\boldsymbol{t})\in\overline{\mathcal{H}}, \boldsymbol{1}^{\top}\boldsymbol{t}=\tau\}\). Therefore, applying Lemma 1 (i) yields \(\operatorname{conv}(\mathcal{H})\!=\!\{(\tau,\boldsymbol{x},\boldsymbol{z}): \exists\boldsymbol{t}\text{ s.t. }(\boldsymbol{x},\boldsymbol{z},\boldsymbol{t})\in \operatorname{conv}(\overline{\mathcal{H}}),\,\boldsymbol{1}^{\top} \boldsymbol{t}=\tau\}\). By letting
\[\boldsymbol{\mu}:=(\tau,\boldsymbol{x},\boldsymbol{z}),\ \boldsymbol{\eta}:=(\tau,\boldsymbol{x},\boldsymbol{z},\boldsymbol{t}),\ \mathcal{U}:=\mathbb{R}\times\operatorname{conv}(\overline{\mathcal{H}}),\] \[\boldsymbol{A}:=[\boldsymbol{I}_{2d+1},\ \boldsymbol{0}],\ \boldsymbol{B}:=(-1, \boldsymbol{0},\boldsymbol{0},\boldsymbol{1})^{\top},\ \boldsymbol{b}:=0,\]
we can apply Lemma 1 (ii) to conclude that
\[\operatorname{cl\,conv}(\mathcal{H})=\{(\tau,\boldsymbol{x},\boldsymbol{z}): \exists\boldsymbol{t}\text{ s.t. }(\tau,\boldsymbol{x},\boldsymbol{z},\boldsymbol{t})\in\mathbb{R}\times \operatorname{cl\,conv}(\overline{\mathcal{H}}),\,\boldsymbol{1}^{\top} \boldsymbol{t}=\tau\}. \tag{6}\]
This holds because the first requirement of Lemma 1 (ii) is trivially satisfied as the variable \(\tau\) is free to choose from the set \(\mathcal{U}\). In addition, any \(\boldsymbol{\eta}\in\operatorname{cl}(\mathcal{U})\) with \(\boldsymbol{A\eta}=\boldsymbol{0}\) and \(\boldsymbol{B\eta}=\boldsymbol{b}\) belongs to \(\{(0,\boldsymbol{0},\boldsymbol{0},\boldsymbol{t})\in\operatorname{cl}(\mathcal{U}):\boldsymbol{1}^{\top}\boldsymbol{t}=0\}=\{\boldsymbol{0}\}\), where the equality holds because, by definition of \(\mathcal{U}\), \(h_{i}^{\pi}(0,0)=0\leq t_{i}\), which together with \(\boldsymbol{1}^{\top}\boldsymbol{t}=0\) enforces \(\boldsymbol{t}\) to be the vector of all zeros. The proof concludes by using the relation (6) and then applying Fourier-Motzkin elimination [16] to project out \(\boldsymbol{t}\).
We conclude this section with a remark for the case of _totally unimodular_ binary sets. The set \(\mathcal{Z}=\{\boldsymbol{z}\in\{0,1\}^{d}:\boldsymbol{Gz}\leq\boldsymbol{g}\}\) is totally unimodular if the matrix \(\boldsymbol{G}\) is totally unimodular and the vector \(\boldsymbol{g}\) is integer-valued. Recall that every square submatrix of a totally unimodular matrix has determinant \(0\), \(+1\) or \(-1\). Examples of totally unimodular sets include the cardinality constraint set, in which \(\mathcal{Z}=\{\boldsymbol{z}\in\{0,1\}^{d}:\boldsymbol{1}^{\top}\boldsymbol{z}\leq\kappa\}\) for some \(\kappa\in[d]\), the weak hierarchy set, where \(\mathcal{Z}=\{\boldsymbol{z}\in\{0,1\}^{d}:z_{d}\leq\sum_{i\in[d-1]}z_{i}\}\), and the strong hierarchy set, where \(\mathcal{Z}=\{\boldsymbol{z}\in\{0,1\}^{d}:z_{d}\leq z_{i},\ \forall i\in[d-1]\}\).
Remark 2: Suppose that the set \(\mathcal{Z}=\{\boldsymbol{z}\in\{0,1\}^{d}:\boldsymbol{Gz}\leq\boldsymbol{g}\}\) is totally unimodular. Then, under Assumption 2, we have
\[\mathrm{conv}(\mathcal{H})=\left\{(\tau,\boldsymbol{x},\boldsymbol{z})\in \mathbb{R}\times\mathcal{X}\times[0,1]^{d}:\ \sum_{i\in[d]}h_{i}^{\pi}(x_{i},z_{i})\leq\tau,\ \boldsymbol{Gz}\leq \boldsymbol{g}\right\}.\]
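To illustrate Remark 2, take \(h_{i}(x_{i})=x_{i}^{2}\), \(\mathcal{I}=\varnothing\), and the cardinality set \(\mathcal{Z}=\{\boldsymbol{z}\in\{0,1\}^{d}:\boldsymbol{1}^{\top}\boldsymbol{z}\leq\kappa\}\). Since \(h_{i}^{\pi}(x_{i},z_{i})=x_{i}^{2}/z_{i}\) (interpreted as \(0\) when \(x_{i}=z_{i}=0\) and \(+\infty\) when \(x_{i}\neq 0\) and \(z_{i}=0\)), Remark 2 recovers the familiar perspective relaxation
\[\operatorname{conv}(\mathcal{H})=\left\{(\tau,\boldsymbol{x},\boldsymbol{z})\in\mathbb{R}\times\mathbb{R}^{d}\times[0,1]^{d}:\ \sum_{i\in[d]}\frac{x_{i}^{2}}{z_{i}}\leq\tau,\ \boldsymbol{1}^{\top}\boldsymbol{z}\leq\kappa\right\}.\]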
### Rank-1 functions with \(\mathcal{X}=\mathbb{R}^{d}\)
Recall the mixed-binary set
\[\mathcal{T}=\left\{(\tau,\boldsymbol{x},\boldsymbol{z})\in\mathbb{R}\times \mathcal{X}\times\mathcal{Z}:\ h(\boldsymbol{a}^{\top}\boldsymbol{x})\leq \tau,\ x_{i}(1-z_{i})=0,\ \forall i\in[d]\right\},\]
where we assume that the vector \(\boldsymbol{a}\in\mathbb{R}^{d}\) satisfies \(a_{i}\neq 0\) for all \(i\in[d]\). In this section we will consider the case that \(\mathcal{X}=\mathbb{R}^{d}\). This type of set appears as a substructure in sparse regression [10; 38] and sparse classification [11; 36].
For a given set \(\mathcal{Z}\subseteq\{0,1\}^{d}\), we construct a graph \(G_{\mathcal{Z}}=(V,E)\), where \(V=[d]\) denotes its nodes, \(E\) denotes its edges, and \(\{i,j\}\in E\) if and only if \(i\neq j\) and there exists a vector \(\boldsymbol{z}\in\mathcal{Z}\) with \(z_{i}=z_{j}=1\). The structure of \(\mathcal{Z}\), as represented by its associated graph \(G_{\mathcal{Z}}\), plays an important role in the description of \(\mathrm{conv}(\mathcal{T})\).
#### 3.2.1 Connected graph
Our result relies on the following assumption.
**Assumption 3**: _The set \(\mathcal{X}=\mathbb{R}^{d}\). The set \(\mathcal{Z}\subseteq\{0,1\}^{d}\) satisfies \(\mathcal{Z}\neq\{\boldsymbol{0}\}\). The graph \(G_{\mathcal{Z}}\) associated with \(\mathcal{Z}\) is connected. The vector \(\boldsymbol{a}\in\mathbb{R}^{d}\) satisfies \(a_{i}\neq 0\) for all \(i\in[d]\). The function \(h:\mathbb{R}\to\overline{\mathbb{R}}\) is proper, lower semicontinuous, and convex with \(h(0)=0\)._
Theorem 3: _Under Assumption 3, the set \(\mathcal{T}\) defined in (3) satisfies_
\[\operatorname{cl\,conv}(\mathcal{T})=\left\{(\tau,\boldsymbol{x}, \boldsymbol{z})\in\mathbb{R}\times\mathbb{R}^{d}\times\mathbb{R}^{d}:\ \exists w\in\mathbb{R}\ \text{s.t.}\begin{array}{l}h^{\pi}(\boldsymbol{a}^{\top}\boldsymbol{x},w) \leq\tau,\\ (w,\boldsymbol{z})\in\operatorname{conv}(\Delta_{1})\end{array}\right\},\]
_where \(\Delta_{1}=\{(w,\boldsymbol{z})\in\{0,1\}\times\mathcal{Z}:\ w\leq\boldsymbol {1}^{\top}\boldsymbol{z}\}\)._
Wei et al. [36, Theorem 1] give an ideal formulation in the original space of variables for \(\operatorname{cl\,conv}(\mathcal{T})\). This ideal formulation, however, relies on the explicit characterization of \(\operatorname{conv}(\mathcal{Z}\setminus\{\boldsymbol{0}\})\). In particular, [36, Proposition 1] shows that
\[\operatorname{conv}(\mathcal{Z}\setminus\{0\})=\operatorname{conv}(\mathcal{Z})\cap\left\{\boldsymbol{z}\in\mathbb{R}^{d}:\ \boldsymbol{\gamma}^{\top}\boldsymbol{z}\geq 1,\ \forall\boldsymbol{\gamma}\in\mathcal{G}\right\}\]
for some finite set \(\mathcal{G}\subset\mathbb{R}^{d}\), and then Wei et al. [36, Theorem 1] prove that
\[\operatorname{cl\,conv}(\mathcal{T})=\left\{(\tau,\boldsymbol{x},\boldsymbol {z})\in\mathbb{R}\times\mathbb{R}^{d}\times\mathbb{R}^{d}:\begin{array}{l}h( \boldsymbol{a}^{\top}\boldsymbol{x})\leq\tau,\ \boldsymbol{z}\in \operatorname{conv}(\mathcal{Z}),\\ h^{\pi}(\boldsymbol{a}^{\top}\boldsymbol{x},\boldsymbol{\gamma}^{\top} \boldsymbol{z})\leq\tau,\ \forall\boldsymbol{\gamma}\in\mathcal{G}\end{array}\right\}. \tag{7}\]
This description of \(\operatorname{cl\,conv}(\mathcal{T})\) requires access to \(\mathcal{G}\), i.e., the explicit inequality description of \(\operatorname{conv}(\mathcal{Z}\setminus\{0\})\), which may not be easy to attain. Furthermore, the set \(\mathcal{G}\) can be very complex, with possibly exponentially many (in the dimension \(d\)) inequalities, and then the set in (7) will have exponentially many nonlinear inequalities. In contrast, Theorem 3 provides a compact extended formulation for \(\operatorname{cl\,conv}(\mathcal{T})\) by introducing a single binary variable \(w\) that embeds the entire complexity of the description of \(\operatorname{cl\,conv}(\mathcal{T})\) into the complexity of the set \(\operatorname{conv}(\Delta_{1})\). We will soon see that the variable \(w\) models the logical constraint \(\boldsymbol{z}=\boldsymbol{0}\implies\boldsymbol{a}^{\top}\boldsymbol{x}=0\). Another advantage of Theorem 3 is that it does not require an a priori complete description of \(\operatorname{conv}(\Delta_{1})\), allowing us to immediately take advantage of state-of-the-art optimization software, which is very advanced at strengthening formulations of binary sets.
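As a simple illustration of the difference, take \(\mathcal{Z}=\{\boldsymbol{z}\in\{0,1\}^{d}:\boldsymbol{1}^{\top}\boldsymbol{z}\leq\kappa\}\) with \(\kappa\geq 2\), so that \(G_{\mathcal{Z}}\) is connected. Here \(\operatorname{conv}(\mathcal{Z}\setminus\{\boldsymbol{0}\})=\{\boldsymbol{z}\in[0,1]^{d}:1\leq\boldsymbol{1}^{\top}\boldsymbol{z}\leq\kappa\}\), so one may take \(\mathcal{G}=\{\boldsymbol{1}\}\) and (7) amounts to adding the single inequality \(h^{\pi}(\boldsymbol{a}^{\top}\boldsymbol{x},\boldsymbol{1}^{\top}\boldsymbol{z})\leq\tau\) to the natural relaxation, whereas Theorem 3 together with Lemma 5 below yields the equivalent extended description in which the auxiliary variable \(w\in[0,1]\) satisfies \(w\leq\boldsymbol{1}^{\top}\boldsymbol{z}\) and \(h^{\pi}(\boldsymbol{a}^{\top}\boldsymbol{x},w)\leq\tau\).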
To prove Theorem 3, we first make an observation about the recessive direction of \(\mathcal{T}\).
Lemma 4: _Under Assumption 3, if \((\tau,\boldsymbol{x},\boldsymbol{z})\!\in\!\operatorname{cl\,conv}(\mathcal{T})\), then for any \(\bar{\boldsymbol{x}}\!\in\!\mathbb{R}^{d}\) satisfying \(\boldsymbol{a}^{\top}\bar{\boldsymbol{x}}=0\) we have \((\tau,\boldsymbol{x}+\bar{\boldsymbol{x}},\boldsymbol{z})\in\operatorname{cl\,conv}(\mathcal{T})\)._
Proof: As \(G_{\mathcal{Z}}=(V,E)\) is a connected graph, there exists a path of length \(k\geq d\) visiting all nodes in the graph. Let \(i_{1},\ldots,i_{k+1}\) be the nodes we visit in such a path. Then, there is an edge between node \(i_{l}\) and node \(i_{l+1}\), and hence, by definition of \(G_{\mathcal{Z}}\), there exists a binary vector \(\boldsymbol{z}^{l}\in\mathcal{Z}\) such that \(z_{i_{l}}^{l}=z_{i_{l+1}}^{l}=1\) for any \(l\in[k]\). Additionally, let \(\mathcal{M}_{0}:=\varnothing\) and \(\mathcal{M}_{j}\) be the set of nodes we visit after \(j-1\) steps in such a path for any \(j\in[k+1]\). Note that each \(\mathcal{M}_{j}\) is a set containing only unique values from the set \([d]\).
Based on \(\bar{\boldsymbol{x}}\), we construct the vectors \(\boldsymbol{x}^{l},l\in[k]\), as follows
\[\boldsymbol{x}^{l}:=\frac{\sum_{j\in\mathcal{M}_{l}}a_{j}\bar{x}_{j}}{a_{i_{l}} }\boldsymbol{e}_{i_{l}}-\frac{\sum_{j\in\mathcal{M}_{l}}a_{j}\bar{x}_{j}}{a_{i_ {l+1}}}\boldsymbol{e}_{i_{l+1}},\]
where \(\mathbf{e}_{i}\in\mathbb{R}^{d}\) denotes the \(i\)th unit basis of \(\mathbb{R}^{d}\). By construction, for every \(l\in[k]\), \(\mathbf{x}^{l}\) satisfies \(\mathbf{a}^{\top}\mathbf{x}^{l}=0\), and the support of \(\mathbf{x}^{l}\) is covered by the binary vector \(\mathbf{z}^{l}\in\mathcal{Z}\). Note also that
\[\sum_{l\in[k]}\mathbf{x}^{l} =\sum_{l\in[k]}\left(\frac{\sum_{j\in\mathcal{M}_{l}}a_{j}\bar{x} _{j}}{a_{i_{l}}}\mathbf{e}_{i_{l}}-\frac{\sum_{j\in\mathcal{M}_{l}}a_{j}\bar{x}_{ j}}{a_{i_{l+1}}}\mathbf{e}_{i_{l+1}}\right)\] \[=\sum_{l\in[k]}\left(\frac{\sum_{j\in\mathcal{M}_{l}}a_{j}\bar{x} _{j}}{a_{i_{l}}}\mathbf{e}_{i_{l}}-\frac{\sum_{j\in\mathcal{M}_{l}}a_{j}\bar{x}_{ j}}{a_{i_{l+1}}}\mathbf{e}_{i_{l+1}}\right)-\frac{\sum_{j\in\mathcal{M}_{0}}a_{j} \bar{x}_{j}}{a_{i_{1}}}\mathbf{e}_{i_{1}}\] \[\qquad+\frac{\sum_{j\in\mathcal{M}_{k+1}}a_{j}\bar{x}_{j}}{a_{i_ {k+1}}}\mathbf{e}_{i_{k+1}}\] \[=\sum_{l\in[k+1]}\frac{\sum_{j\in\mathcal{M}_{l}}a_{j}\bar{x}_{j} -\sum_{j\in\mathcal{M}_{l-1}}a_{j}\bar{x}_{j}}{a_{i_{l}}}\mathbf{e}_{i_{l}}=\sum_ {i\in[d]}\bar{x}_{i}\mathbf{e}_{i}=\bar{\mathbf{x}}\]
where the first equality follows from the construction of \(\mathbf{x}^{l}\), the second equality holds because \(\sum_{j\in\mathcal{M}_{0}}a_{j}\bar{x}_{j}=0\) (as \(\mathcal{M}_{0}=\varnothing\)) and \(\sum_{j\in\mathcal{M}_{k+1}}a_{j}\bar{x}_{j}=\mathbf{a}^{\top}\bar{\mathbf{x}}=0\) (as \(\mathcal{M}_{k+1}=[d]\)). The third equality holds by rearranging the summation, while the final equality follows from the fact that for any \(l\in[k+1]\) the expression \(\sum_{j\in\mathcal{M}_{l}}a_{j}\bar{x}_{j}-\sum_{j\in\mathcal{M}_{l-1}}a_{j}\bar{x}_{j}\) equals \(a_{i_{l}}\bar{x}_{i_{l}}\) if we visit node \(i_{l}\) for the first time after \(l-1\) steps and equals \(0\) otherwise, together with the fact that the path we consider visits all nodes. Hence, we conclude \(\sum_{l\in[k]}\mathbf{x}^{l}=\bar{\mathbf{x}}\).
As \(h(\mathbf{a}^{\top}\mathbf{x}^{l})=h(0)=0\), and the support of \(\lambda\mathbf{x}^{l}\) and \(\mathbf{z}^{l}\) are the same, we conclude that \((0,\lambda\mathbf{x}^{l},\mathbf{z}^{l})\in\mathcal{T}\) for every \(\lambda\in\mathbb{R}\) and \(l\in[k]\). Since \(\operatorname{cl\,conv}(\mathcal{T})\) is both closed and convex, we have
\[(\tau,\mathbf{x}+\bar{\mathbf{x}},\mathbf{z})=\lim_{\lambda\to+\infty}\left[\frac{\lambda -k}{\lambda}(\tau,\mathbf{x},\mathbf{z})+\sum_{l\in[k]}\frac{1}{\lambda}(0,\lambda\bm {x}^{l},\mathbf{z}^{l})\right]\in\operatorname{cl\,conv}(\mathcal{T}),\]
where the equality holds as \(\sum_{l\in[k]}\mathbf{x}^{l}=\bar{\mathbf{x}}\). Thus, the claim follows.
Inspired by the set \(\mathcal{T}\), we then introduce the mixed-binary set
\[\overline{\mathcal{T}}:=\left\{(\tau,\mathbf{x},\mathbf{z},s,w)\in\mathbb{R}\times\mathbb{R}^{d}\times\mathbb{R}^{d}\times\mathbb{R}\times\mathbb{R}:\begin{array}{l}h(s)\leq\tau,\ \mathbf{a}^{\top}\mathbf{x}=s,\\ s(1-w)=0,\ (w,\mathbf{z})\in\Delta_{1}\end{array}\right\}, \tag{8}\]
where \(\Delta_{1}\) is defined as in Theorem 3.1. Note that \(\mathcal{T}\) admits the representation
\[\mathcal{T}=\left\{(\tau,\mathbf{x},\mathbf{z}):\exists s,w\ \text{s.t}\ (\tau,\mathbf{x},\mathbf{z},s,w) \in\overline{\mathcal{T}},\ x_{i}(1-z_{i})=0,\ \forall i\in[d]\right\}.\]
However, we establish that \(\operatorname{cl\,conv}(\mathcal{T})\) can be obtained solely from \(\overline{\mathcal{T}}\) using the result of Lemma 4. In particular, all individual complementary relations between the continuous variables \(x_{i}\) and the binary variables \(z_{i}\) can be dropped in the description of \(\operatorname{cl\,conv}(\mathcal{T})\). This results in a considerably simpler representation involving only a _single_ complementary relation (between the continuous variable \(s\) and the binary variable \(w\)) given in \(\overline{\mathcal{T}}\).
**Proposition 3**: _Under Assumption 3, we have_
\[\operatorname{cl\,conv}(\mathcal{T})=\operatorname{cl\,conv}\left(\left\{(\tau, \boldsymbol{x},\boldsymbol{z}):\exists s,w\text{ s.t. }(\tau,\boldsymbol{x},\boldsymbol{z},s,w)\in\overline{\mathcal{T}}\right\} \right),\]
_where \(\overline{\mathcal{T}}\) is as defined in (8)._
Proof: By Lemma 4, the set
\[\mathcal{R}:=\left\{(\bar{\tau},\bar{\boldsymbol{x}},\bar{\boldsymbol{z}}) \in\mathbb{R}\times\mathbb{R}^{d}\times\mathbb{R}^{d}:\ \bar{\tau}=0,\ \bar{\boldsymbol{z}}= \boldsymbol{0},\ \boldsymbol{a}^{\top}\bar{\boldsymbol{x}}=0\right\}\]
is contained in \(\operatorname{rec}(\operatorname{cl\,conv}(\mathcal{T}))\). Thus, \(\operatorname{cl\,conv}(\mathcal{T})=\operatorname{cl\,conv}(\mathcal{T}+\mathcal{R})\). Define the set
\[\mathcal{T}^{\prime}:=\left\{(\tau,\boldsymbol{x},\boldsymbol{z})\in\mathbb{ R}\times\mathbb{R}^{d}\times\mathcal{Z}:\ h(\boldsymbol{a}^{\top}\boldsymbol{x})\leq\tau,\ \boldsymbol{z}=\boldsymbol{0}\implies\boldsymbol{a}^{\top}\boldsymbol{x}=0 \right\}.\]
We first prove that \(\mathcal{T}^{\prime}=\mathcal{T}+\mathcal{R}\) by showing that \(\mathcal{T}+\mathcal{R}\subseteq\mathcal{T}^{\prime}\) and \(\mathcal{T}+\mathcal{R}\supseteq\mathcal{T}^{\prime}\).
**(\(\subseteq\))** This is immediate as \(\mathcal{T}\subseteq\mathcal{T}^{\prime}\) and by definition of \(\mathcal{T}^{\prime}\) for any \(\bar{\boldsymbol{x}}\) such that \(\boldsymbol{a}^{\top}\bar{\boldsymbol{x}}=0\) we have \((0,\bar{\boldsymbol{x}},\boldsymbol{0})\) is a recessive direction in \(\mathcal{T}^{\prime}\).
**(\(\supseteq\))** Consider any \((\tau,\boldsymbol{x},\boldsymbol{z})\in\mathcal{T}^{\prime}\). Then, \(h(\boldsymbol{a}^{\top}\boldsymbol{x})\leq\tau\) and \(\boldsymbol{z}\in\mathcal{Z}\). If \(\boldsymbol{z}=\boldsymbol{0}\), then as \((\tau,\boldsymbol{x},\boldsymbol{z})\in\mathcal{T}^{\prime}\) we have \(\boldsymbol{a}^{\top}\boldsymbol{x}=0\) which implies \(h(\boldsymbol{a}^{\top}\boldsymbol{x})=h(0)=0\leq\tau\). Then, \((\tau,\boldsymbol{x},\boldsymbol{z})=(\tau,\boldsymbol{0},\boldsymbol{z})+(0, \boldsymbol{x},\boldsymbol{0})\in\mathcal{T}+\mathcal{R}\) (as \((\tau,\boldsymbol{0},\boldsymbol{z})\in\mathcal{T}\) always holds and also \(\boldsymbol{a}^{\top}\boldsymbol{x}=0\) implies \((0,\boldsymbol{x},\boldsymbol{0})\in\mathcal{R}\)). Else, there exists \(i\in[d]\) such that \(z_{i}=1\). Define \(\bar{\boldsymbol{x}}:=\boldsymbol{x}-\frac{\boldsymbol{a}^{\top}\boldsymbol{ x}}{a_{i}}\boldsymbol{e}_{i}\). Then, \(\boldsymbol{a}^{\top}\bar{\boldsymbol{x}}=0\) implying \(\boldsymbol{a}^{\top}(\boldsymbol{x}-\bar{\boldsymbol{x}})=\boldsymbol{a}^{ \top}\boldsymbol{x}\) and \(h(\boldsymbol{a}^{\top}(\boldsymbol{x}-\bar{\boldsymbol{x}}))=h(\boldsymbol{ a}^{\top}\boldsymbol{x})\leq\tau\). Moreover, since \(\boldsymbol{x}-\bar{\boldsymbol{x}}=\frac{\boldsymbol{a}^{\top}\boldsymbol{x}} {a_{i}}\boldsymbol{e}_{i}\), we have \((x_{i}-\bar{x}_{i})(1-z_{i})=0\) satisfied for all \(i\in[d]\). Therefore, \((\tau,\boldsymbol{x}-\bar{\boldsymbol{x}},\boldsymbol{z})\in\mathcal{T}\) from which we conclude that \((\tau,\boldsymbol{x},\boldsymbol{z})=(\tau,\boldsymbol{x}-\bar{\boldsymbol{x }},\boldsymbol{z})+(0,\bar{\boldsymbol{x}},\boldsymbol{0})\in\mathcal{T}+ \mathcal{R}\) (as \(\boldsymbol{a}^{\top}\bar{\boldsymbol{x}}=0\) implies \((0,\bar{\boldsymbol{x}},\boldsymbol{0})\in\mathcal{R}\)).
Thus, we showed that \(\mathcal{T}^{\prime}=\mathcal{T}+\mathcal{R}\). Now, note that \((\tau,\boldsymbol{x},\boldsymbol{z})\in\mathcal{T}^{\prime}\) if and only if there exists \(w\in\{0,1\}\) such that \(w\leq\boldsymbol{1}^{\top}\boldsymbol{z}\) and \((\boldsymbol{a}^{\top}\boldsymbol{x})(1-w)=0\). This easily holds because there exists \(w\in\{0,1\}\) satisfying \((\boldsymbol{a}^{\top}\boldsymbol{x})(1-w)=0\) and \(w\leq\boldsymbol{1}^{\top}\boldsymbol{z}\) if and only if \(\boldsymbol{z}=0\implies\boldsymbol{a}^{\top}\boldsymbol{x}=0\). Therefore, \((\tau,\boldsymbol{x},\boldsymbol{z})\in\mathcal{T}^{\prime}\) if and only if there exist \(w\in\{0,1\}\) and \(s\in\mathbb{R}\) such that \((\tau,\boldsymbol{x},\boldsymbol{z},s,w)\in\overline{\mathcal{T}}\). Put differently, we have \(\mathcal{T}^{\prime}=\operatorname{Proj}_{\tau,\boldsymbol{x},\boldsymbol{z}}(\overline{\mathcal{T}})\), which implies that
\[\operatorname{cl\,conv}(\mathcal{T})=\operatorname{cl\,conv}(\mathcal{T}+ \mathcal{R})=\operatorname{cl\,conv}(\mathcal{T}^{\prime})=\operatorname{cl \,conv}(\mathrm{Proj}_{\tau,\boldsymbol{x},\boldsymbol{z}}(\overline{\mathcal{T} })).\]
Hence, the claim follows.
Armed with Lemma 3 and Proposition 3, we are ready to prove Theorem 3.
Proof of Theorem 3: Recall the definition of \(\overline{\mathcal{T}}\) and \(\Delta_{1}\). By letting
\[\boldsymbol{\beta}_{i}:=(s,\boldsymbol{x}),\ \boldsymbol{\gamma}:=\tau,\ \boldsymbol{\delta}:=(w, \boldsymbol{z}),\ \Delta:=\Delta_{1},\ \boldsymbol{C}_{i}=[-1,\boldsymbol{a}^{\top}],\ \mathbb{C}_{i}=\{0\}\,,\]
we can represent the set \(\overline{\mathcal{T}}\) as an instance of the set \(\mathcal{W}\) defined as in (5). Then, Proposition 2 yields
\[\operatorname{cl\,conv}(\overline{\mathcal{T}})\!=\!\left\{(\tau,\boldsymbol{x}, \boldsymbol{z},s,w)\!\in\!\mathbb{R}\times\mathbb{R}^{d}\times\mathbb{R}^{d} \times\mathbb{R}\!\times\!\mathbb{R}\!:\begin{array}{l}h^{\pi}(s,w)\leq\tau, \ \boldsymbol{a}^{\top}\boldsymbol{x}=s,\\ (w,\boldsymbol{z})\in\operatorname{conv}(\Delta_{1})\end{array}\right\}.\]
In the following we characterize \(\operatorname{cl}\operatorname{conv}(\mathcal{T})\) in terms of \(\operatorname{cl}\operatorname{conv}(\overline{\mathcal{T}})\). From Proposition 3, we deduce
\[\operatorname{cl}\operatorname{conv}(\mathcal{T})=\operatorname{cl}\big{(} \operatorname{conv}(\operatorname{Proj}_{\tau,\boldsymbol{x},\boldsymbol{z}}( \overline{\mathcal{T}}))\big{)}=\operatorname{cl}\big{(}\operatorname{Proj}_{ \tau,\boldsymbol{x},\boldsymbol{z}}(\operatorname{conv}(\overline{\mathcal{T}}) )\big{)}.\]
By letting
\[\boldsymbol{\mu}:=(\tau,\boldsymbol{x},\boldsymbol{z}),\ \boldsymbol{\eta}:=(\tau, \boldsymbol{x},\boldsymbol{z},s,w),\ \mathcal{U}:=\operatorname{conv}(\overline{\mathcal{T}}),\] \[\boldsymbol{A}:=[\boldsymbol{I}_{2d+1},\ \boldsymbol{0}],\ \boldsymbol{B}:= \boldsymbol{0}^{\top},\ \boldsymbol{b}:=0,\]
we observe that \(\operatorname{Proj}_{\tau,\boldsymbol{x},\boldsymbol{z}}(\operatorname{conv}(\overline{\mathcal{T}}))=\{\boldsymbol{\mu}:\exists\boldsymbol{\eta}\in\mathcal{U}\ \text{s.t.}\ \boldsymbol{A}\boldsymbol{\eta}=\boldsymbol{\mu},\ \boldsymbol{B}\boldsymbol{\eta}=\boldsymbol{b}\}\) as in Lemma 1 (ii). Note also that the first requirement of Lemma 1 (ii) is trivially satisfied as the matrix \(\boldsymbol{B}\) equals zero. In addition, any \(\boldsymbol{\eta}\in\operatorname{cl}(\mathcal{U})\) with \(\boldsymbol{A}\boldsymbol{\eta}=\boldsymbol{0}\) belongs to \(\{(0,\boldsymbol{0},\boldsymbol{0},s,w)\in\operatorname{cl}(\mathcal{U}):s=\boldsymbol{a}^{\top}\boldsymbol{x}=0,\ (w,\boldsymbol{0})\in\operatorname{conv}(\Delta_{1})\}=\{\boldsymbol{0}\}\), where the equality holds because, by definition of \(\Delta_{1}\), we have \(0\leq w\leq\boldsymbol{1}^{\top}\boldsymbol{z}\), which enforces \(w=0\) as \(\boldsymbol{z}=\boldsymbol{0}\). Thus, we can apply Lemma 1 (ii) to conclude that
\[\operatorname{cl}\operatorname{conv}(\mathcal{T})=\{(\tau,\boldsymbol{x}, \boldsymbol{z}):\exists s,w\ \text{s.t.}\ (\tau,\boldsymbol{x},\boldsymbol{z},s,w)\in \operatorname{cl}\operatorname{conv}(\overline{\mathcal{T}})\}. \tag{9}\]
This completes the proof.
It is important to note that the description of \(\operatorname{conv}(\Delta_{1})\) may not be readily available from \(\operatorname{conv}(\mathcal{Z})\) even if \(\mathcal{Z}\) is an integral or a totally unimodular set.
Example 1: Consider the set \(\mathcal{Z}=\{\boldsymbol{z}\in\{0,1\}^{2}:z_{1}=z_{2}\}\), which is integral and totally unimodular. By definition, the resulting \(\Delta_{1}\) is given by \(\Delta_{1}=\big{\{}(w,\boldsymbol{z})\in\{0,1\}^{3}:w\leq z_{1}+z_{2},\ z_{1}= z_{2}\big{\}}\). Furthermore, \(\operatorname{conv}(\Delta_{1})=\{(w,\boldsymbol{z})\in[0,1]^{3}:w\leq z_{1},\ z_{1}= z_{2}\}\). We thus observe that the continuous relaxation of \(\Delta_{1}\) and its convex hull \(\operatorname{conv}(\Delta_{1})\) are different. For example, the point \(z_{1}=z_{2}=0.5\) and \(w=1\) is in the continuous relaxation of \(\Delta_{1}\) but it is not in \(\operatorname{conv}(\Delta_{1})\).
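This observation can also be verified numerically. The following short Python sketch (purely illustrative, assuming NumPy and SciPy are available) enumerates the points of \(\Delta_{1}\) from Example 1 and solves a small feasibility LP to confirm that the point \((w,z_{1},z_{2})=(1,0.5,0.5)\) is not a convex combination of these points.

```python
import itertools

import numpy as np
from scipy.optimize import linprog

# Enumerate Delta_1 = {(w, z1, z2) in {0,1}^3 : w <= z1 + z2, z1 = z2}.
points = [p for p in itertools.product([0, 1], repeat=3)
          if p[0] <= p[1] + p[2] and p[1] == p[2]]
V = np.array(points, dtype=float)      # one row per point of Delta_1

target = np.array([1.0, 0.5, 0.5])     # feasible for the continuous relaxation

# Feasibility LP: find lambda >= 0 with V^T lambda = target and sum(lambda) = 1.
A_eq = np.vstack([V.T, np.ones(len(points))])
b_eq = np.concatenate([target, [1.0]])
res = linprog(c=np.zeros(len(points)), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * len(points), method="highs")

print("points of Delta_1:", points)
print("target in conv(Delta_1):", res.success)  # expected output: False
```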
In the sequel we examine the description of \(\operatorname{conv}(\Delta_{1})\) for some simple integral sets \(\mathcal{Z}\) of interest. The proofs of these results are provided in Appendix A. We start with the case when \(\mathcal{Z}\) is defined by a cardinality constraint, which leads to an immediate totally unimodular representation of \(\Delta_{1}\).
Lemma 5: _Suppose \(\mathcal{Z}=\{\boldsymbol{z}\in\{0,1\}^{d}:\ \boldsymbol{1}^{\top} \boldsymbol{z}\leq\kappa\}\) for some \(\kappa\in[d]\). Then_
\[\operatorname{conv}(\Delta_{1})=\left\{(w,\boldsymbol{z})\in[0,1]^{1+d}:\ w \leq\boldsymbol{1}^{\top}\boldsymbol{z},\ \boldsymbol{1}^{\top} \boldsymbol{z}\leq\kappa\right\}.\]
We next study the case of weak hierarchy constraints; in this case, \(\Delta_{1}\) also admits a totally unimodular representation.
Lemma 6: _Suppose \(\mathcal{Z}=\{\boldsymbol{z}\in\{0,1\}^{d}:\ z_{d}\leq\sum_{i\in[d-1]}z_{i}\}\). Then_
\[\operatorname{conv}(\Delta_{1})=\left\{(w,\boldsymbol{z})\in[0,1]^{1+d}:\ w \leq\sum_{i\in[d-1]}z_{i},\ z_{d}\leq\sum_{i\in[d-1]}z_{i}\right\}.\]
For general \(\mathcal{Z}\), in the same spirit as [36], we can take advantage of the fact that the set \(\operatorname{conv}(\mathcal{Z}\backslash\left\{\mathbf{0}\right\})\) is a polytope to give an explicit description of \(\operatorname{conv}(\Delta_{1})\) based on the description of \(\operatorname{conv}(\mathcal{Z}\backslash\left\{\mathbf{0}\right\})\). In particular, suppose that \(\operatorname{conv}(\mathcal{Z}\backslash\left\{\mathbf{0}\right\})\) admits the following representation
\[\operatorname{conv}(\mathcal{Z}\backslash\left\{\mathbf{0}\right\})=\left\{\boldsymbol{z}\in\mathbb{R}^{d}:\boldsymbol{G}^{0}\boldsymbol{z}\geq\mathbf{0},\ \ \boldsymbol{z}^{\top}\boldsymbol{g}_{k}^{+}\geq 1,\ \forall k\in\mathcal{K},\ \ \boldsymbol{z}^{\top}\boldsymbol{g}_{l}^{-}\leq 1,\ \forall l\in\mathcal{L}\right\}. \tag{10}\]
Note that the representation of \(\operatorname{conv}(\mathcal{Z}\backslash\left\{\mathbf{0}\right\})\) in (10) is without loss of generality as we can always scale each inequality to have right hand side value in \(\left\{-1,0,1\right\}\).
Lemma 7: _Given the representation \(\operatorname{conv}(\mathcal{Z}\backslash\left\{\mathbf{0}\right\})\) in (10), we have_
\[\operatorname{conv}(\Delta_{1})=\left\{(w,\boldsymbol{z})\in\mathbb{R}^{1+d}: \boldsymbol{G}^{0}\boldsymbol{z}\geq\mathbf{0},\ \ \boldsymbol{z}^{\top}\boldsymbol{g}_{l}^{-}\leq 1,\ \forall l\in \mathcal{L},\right.\]
We conclude this section by examining the case of strong hierarchy constraints. Unlike the previous two cases, the set \(\Delta_{1}\) does not immediately admit a totally unimodular representation. Therefore, we utilize Lemma 7 to characterize \(\operatorname{conv}(\Delta_{1})\) for strong hierarchy constraints.
Lemma 8: _Suppose \(\mathcal{Z}=\left\{\boldsymbol{z}\in\left\{0,1\right\}^{d}:\ z_{d}\leq z_{i},\, \forall i\in[d-1]\right\}\). Then,_
\[\operatorname{conv}(\Delta_{1})=\left\{(w,\boldsymbol{z})\in[0,1]^{1+d}:\begin{array} []{l}w\leq\sum_{i\in[d-1]}z_{i}-(d-2)z_{d},\\ z_{d}\leq z_{i},\ \forall i\in[d-1]\end{array}\right\}.\]
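For \(d=3\), for example, Lemma 8 gives
\[\operatorname{conv}(\Delta_{1})=\left\{(w,\boldsymbol{z})\in[0,1]^{4}:\ w\leq z_{1}+z_{2}-z_{3},\ z_{3}\leq z_{1},\ z_{3}\leq z_{2}\right\}.\]
The strengthened inequality \(w\leq z_{1}+z_{2}-z_{3}\) is not implied by the naive constraint \(w\leq\boldsymbol{1}^{\top}\boldsymbol{z}\): the point \(w=1\), \(\boldsymbol{z}=(0.5,0.5,0.5)\) satisfies the latter together with the hierarchy constraints, yet it lies outside \(\operatorname{conv}(\Delta_{1})\), since \(w=1\) forces a convex combination of points of \(\Delta_{1}\) with \(w=1\), whose \(\boldsymbol{z}\)-components cannot average to \((0.5,0.5,0.5)\).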
#### 3.2.2 General graph
We now examine a general graph \(G_{\mathcal{Z}}\) partitioned into \(p\in[d]\) connected subgraphs. Without loss of generality, we assume that the subvector \(\boldsymbol{z}_{i}\) is associated with the variables in the \(i\)th partition. Such indexing allows us to simplify the evaluation of the rank-\(1\) function as
\[h(\boldsymbol{a}^{\top}\boldsymbol{x})=\sum_{i\in[p]}h(\boldsymbol{a}_{i}^{ \top}\boldsymbol{x}_{i}) \tag{11}\]
because it is not possible to have two indices \(j,k\in[d]\) with \(z_{j}=z_{k}=1\) from two different subgraphs. Our result relies on the following assumption.
**Assumption 4**: _The set \(\mathcal{X}=\mathbb{R}^{d}\). The set \(\mathcal{Z}\subseteq\left\{0,1\right\}^{d}\). The graph \(G_{\mathcal{Z}}\) associated with \(\mathcal{Z}\) is partitioned into \(p\in[d]\) connected subgraphs, and the subvector \(\boldsymbol{z}_{i}\) corresponds to the variables in the \(i\)th partition. For any partition \(i\in[p]\), there exists a point \(\boldsymbol{z}\in\mathcal{Z}\) whose subvector satisfies \(\boldsymbol{z}_{i}\neq\mathbf{0}\). The vector \(\boldsymbol{a}\in\mathbb{R}^{d}\) satisfies \(a_{i}\neq 0\) for all \(i\in[d]\). The function \(h:\mathbb{R}\to\overline{\mathbb{R}}\) is proper, lower semicontinuous, and convex with \(h(0)=0\)._
**Theorem 4**: _Under Assumption 4, the set \(\mathcal{T}\) defined in (3) satisfies_
\[\operatorname{cl\,conv}(\mathcal{T})=\left\{(\tau,\boldsymbol{x},\boldsymbol{z})\in\mathbb{R}\times\mathbb{R}^{d}\times\mathbb{R}^{d}:\exists\boldsymbol{w}\in\mathbb{R}^{p}\text{ s.t. }\begin{array}{l}\sum_{i\in[p]}h^{\pi}(\boldsymbol{a}_{i}^{\top}\boldsymbol{x}_{i},w_{i})\leq\tau,\\ (\boldsymbol{w},\boldsymbol{z})\in\operatorname{conv}(\Delta_{p})\end{array}\right\},\]
_where \(\Delta_{p}=\big{\{}(\boldsymbol{w},\boldsymbol{z})\in\{0,1\}^{p}\times \mathcal{Z}:w_{i}\leq\boldsymbol{1}^{\top}\boldsymbol{z}_{i},\ \forall i\in[p]\big{\}}\)._
Wei et al. [36, Theorem 2] give an extended formulation for \(\operatorname{conv}(\mathcal{T})\) that involves \(2p(d+1)\) additional variables. Their formulation relies on having a description of \(\operatorname{conv}(\mathcal{Z}_{i})\), where \(\mathcal{Z}_{i}=\{\boldsymbol{z}\in\mathcal{Z}:\boldsymbol{z}_{j}=\boldsymbol{0},\ \forall j\neq i\}\), as the system of linear inequalities \(\boldsymbol{C}_{i}\boldsymbol{z}\leq\boldsymbol{c}_{i}\) for every \(i\in[p]\). In particular, they suppose that for every \(i\in[p]\) we have access to a finite set \(\mathcal{G}_{i}\subset\mathbb{R}^{d}\) satisfying
\[\operatorname{conv}(\mathcal{Z}_{i}\backslash\left\{0\right\})=\operatorname{conv}(\mathcal{Z}_{i})\cap\Big\{\boldsymbol{z}\in\mathbb{R}^{d}:\ \boldsymbol{\gamma}_{i}^{\top}\boldsymbol{z}\geq 1,\ \forall\boldsymbol{\gamma}_{i}\in\mathcal{G}_{i}\Big\}\,,\]
and based on this Wei et al. [36, Theorem 2] provide \(\operatorname{cl\,conv}(\mathcal{T})\) as the set
\[\Big{\{}(\tau,\boldsymbol{x},\boldsymbol{z})\in\mathbb{R}\times \mathbb{R}^{d}\times[0,1]^{d}:\exists\tilde{\boldsymbol{x}}\in\mathbb{R}^{dp}, \tilde{\boldsymbol{z}}\in[0,1]^{dp},\boldsymbol{t}\in\mathbb{R}^{p}, \boldsymbol{w}\in[0,1]^{p}\text{ s.t. }\] \[\sum_{i\in[p]}\tilde{\boldsymbol{x}}^{i}=\boldsymbol{x},\,\sum_{ i\in[p]}\tilde{\boldsymbol{z}}^{i}=\boldsymbol{z},\,\boldsymbol{1}^{\top} \boldsymbol{t}=\tau,\,\boldsymbol{1}^{\top}\boldsymbol{w}=1,\,\boldsymbol{C}_ {i}\tilde{\boldsymbol{z}}^{i}\leq\boldsymbol{c}_{i}w_{i},\ \forall i\in[p]\] \[h^{\pi}(\boldsymbol{a}^{\top}\tilde{\boldsymbol{x}}^{i},w_{i}) \leq t_{i},\,h^{\pi}(\boldsymbol{a}^{\top}\tilde{\boldsymbol{x}}^{i}, \boldsymbol{\gamma}_{i}^{\top}\boldsymbol{z})\leq\tau,\ \forall \boldsymbol{\gamma}_{i}\in\mathcal{G}_{i},\ \forall i\in[p]\Big{\}}.\]
In contrast, Theorem 4 provides a compact extended formulation for \(\operatorname{cl\,conv}(\mathcal{T})\) by introducing a single vector \(\boldsymbol{w}\) of \(p\) additional binary variables and, more importantly, replacing the complexity of having explicit descriptions of convex hulls of several sets such as \(\operatorname{conv}(\mathcal{Z}_{i})\) and \(\operatorname{conv}(\mathcal{Z}_{i}\backslash\left\{0\right\})\) with the complexity of a single set \(\Delta_{p}\) that is obtained from \(\mathcal{Z}\) by adding \(p\) additional binary variables and \(p\) additional linear constraints.
Inspired by the set \(\mathcal{T}\), we introduce the mixed-binary set
\[\widetilde{\mathcal{T}}:=\left\{(\boldsymbol{x},\boldsymbol{z},\boldsymbol{s},\boldsymbol{w},\boldsymbol{t})\in\mathbb{R}^{d+d+p+p+p}:\begin{array}{l}h(s_{i})\leq t_{i},\ \boldsymbol{a}_{i}^{\top}\boldsymbol{x}_{i}=s_{i},\ \forall i\in[p],\\ s_{i}(1-w_{i})=0,\ \forall i\in[p],\\ (\boldsymbol{w},\boldsymbol{z})\in\Delta_{p}\end{array}\right\}. \tag{12}\]
Theorem 4 relies on the following auxiliary results, whose proofs are omitted for brevity because they follow the same path as those in Section 3.2.1. Additionally, the proof of Theorem 4 is relegated to Appendix A.
**Lemma 9**: _Under Assumption 4, if \((\tau,\boldsymbol{x},\boldsymbol{z})\!\in\!\operatorname{cl\,conv}(\mathcal{T})\), then for any \(\bar{\boldsymbol{x}}\!\in\!\mathbb{R}^{d}\) satisfying \(\boldsymbol{a}_{i}^{\top}\bar{\boldsymbol{x}}_{i}=0\) for all \(i\in[p]\), we have \((\tau,\boldsymbol{x}+\bar{\boldsymbol{x}},\boldsymbol{z})\in\operatorname{cl\,conv}(\mathcal{T})\)._
**Proposition 4**: _Under Assumption 4, we have_
\[\operatorname{cl\,conv}(\mathcal{T})=\operatorname{cl\,conv}\left(\Big\{(\tau,\boldsymbol{x},\boldsymbol{z}):\exists\boldsymbol{s},\boldsymbol{w},\boldsymbol{t}\text{ s.t. }(\boldsymbol{x},\boldsymbol{z},\boldsymbol{s},\boldsymbol{w},\boldsymbol{t})\in\widetilde{\mathcal{T}},\ \boldsymbol{1}^{\top}\boldsymbol{t}=\tau\Big\}\right),\]
_where \(\widetilde{\mathcal{T}}\) is as defined in (12)._
We provide a characterization for \(\Delta_{p}\) in Lemma A.1 in Appendix A.
### Rank-1 functions with sign-constrained continuous variables
Recall the mixed-binary set
\[\mathcal{T}=\left\{(\tau,\mathbf{x},\mathbf{z})\in\mathbb{R}\times\mathcal{X}\times \mathcal{Z}:\ h(\mathbf{a}^{\top}\mathbf{x})\leq\tau,\ x_{i}(1-z_{i})=0,\ \forall i\in[d]\right\}.\]
In this section we consider the case where \(\mathcal{X}=\{\mathbf{x}\in\mathbb{R}^{d}:x_{i}\geq 0,\forall i\in\mathcal{I}\}\) for some \(\mathcal{I}\subseteq[d]\) and \(\mathcal{Z}\subseteq\{0,1\}^{d}\). This type of set appears as a substructure in fixed-charge network problems [37], smooth signal estimation [23], outlier detection [21], and nonnegative least squares regression. Our result relies on the following assumption.
**Assumption 5**: _The set \(\mathcal{X}=\{\mathbf{x}\in\mathbb{R}^{d}:x_{i}\geq 0,\,\forall i\in\mathcal{I}\}\) for some set \(\mathcal{I}\subseteq[d]\). For any \(i,j\in[d]\), there exists a point \(\mathbf{z}\in\mathcal{Z}\subseteq\{0,1\}^{d}\) such that \(z_{i}=z_{j}=1\). The vector \(\mathbf{a}\in\mathbb{R}^{d}\) satisfies \(a_{i}\neq 0\) for all \(i\in[d]\). The function \(h:\mathbb{R}\to\overline{\mathbb{R}}\) is nonlinear, proper, lower semicontinuous, and convex with \(h(0)=0\)._
Examples of Boolean sets \(\mathcal{Z}\) that satisfy Assumption 5 include the cardinality constraint set with parameter \(\kappa\in[d]\backslash\{1\}\), the weak hierarchy set, and the strong hierarchy set. Note that when the parameter of the cardinality constraint is set to \(\kappa=1\), we have \(\mathcal{Z}=\{\mathbf{z}\in\{0,1\}^{d}:\mathbf{1}^{\top}\mathbf{z}\leq 1\}\). This case is simple as \(\mathcal{Z}\) is totally unimodular, and the logical constraint \(x_{i}(1-z_{i})=0,\forall i\in[d]\) along with \(\mathbf{1}^{\top}\mathbf{z}\leq 1\) enforces that at most one \(x_{i}\) can be nonzero. Since \(h(0)=0\), we in fact have \(h(\mathbf{a}^{\top}\mathbf{x})=\sum_{i\in[d]}h(a_{i}x_{i})\). Thus, Remark 2 is applicable and we arrive at
\[\mathrm{conv}(\mathcal{T})=\left\{(\tau,\mathbf{x},\mathbf{z})\in\mathbb{R}\times \mathcal{X}\times[0,1]^{d}:\ \sum_{i\in[d]}h^{\pi}(a_{i}x_{i},z_{i})\leq\tau,\ \sum_{i\in[d]}z_{i}\leq 1 \right\}.\]
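For instance, with \(d=2\), \(h(t)=t^{2}\), and \(\mathcal{I}=\varnothing\), the display above reads
\[\operatorname{conv}(\mathcal{T})=\left\{(\tau,\boldsymbol{x},\boldsymbol{z})\in\mathbb{R}\times\mathbb{R}^{2}\times[0,1]^{2}:\ \frac{a_{1}^{2}x_{1}^{2}}{z_{1}}+\frac{a_{2}^{2}x_{2}^{2}}{z_{2}}\leq\tau,\ z_{1}+z_{2}\leq 1\right\},\]
with the usual convention that \(a_{i}^{2}x_{i}^{2}/z_{i}=0\) when \(x_{i}=z_{i}=0\) and \(+\infty\) when \(x_{i}\neq 0\) and \(z_{i}=0\).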
In the following we consider more challenging binary sets \(\mathcal{Z}\).
Theorem 5: _Under Assumption 5, the set \(\mathcal{T}\) defined in (3) satisfies_
\[\mathrm{cl\,conv}(\mathcal{T})=\left\{(\tau,\mathbf{x},\mathbf{z})\in\mathbb{R}\times\mathcal{X}\times\mathbb{R}^{d}:\ \exists\mathbf{s},\mathbf{w}\ \text{s.t.}\begin{array}{l}\sum_{i\in[d]}h^{\pi}(a_{i}s_{i},w_{i})\leq\tau,\\ 0\leq s_{i}\leq x_{i},\ \forall i\in\mathcal{I},\\ \mathbf{a}^{\top}\mathbf{s}=\mathbf{a}^{\top}\mathbf{x},\\ (\mathbf{w},\mathbf{z})\in\mathrm{conv}(\Omega)\end{array}\right\},\]
_where \(\Omega:=\{(\mathbf{w},\mathbf{z})\in\{0,1\}^{d}\times\mathcal{Z}:\ \mathbf{1}^{\top}\mathbf{w}\leq 1,\ \mathbf{w}\leq\mathbf{z}\}\)._
Remark 3: When \(\mathcal{Z}=\{0,1\}^{d}\) and \(h\) is real-valued, Han and Gomez [23, Proposition 3] give an extended formulation for \(\mathrm{cl\,conv}(\mathcal{T})\). Their proof involves two steps. The first step is based on [23, Theorem 1], which employs a support function argument and a disjunctive programming method to obtain a lifted description of \(\mathrm{cl\,conv}(\mathcal{T})\) with \(2d^{2}+6d+4\) additional variables. The second step employs the Fourier-Motzkin elimination method to reduce the number
of additional variables to \(2d\). Extending this proof technique to include combinatorial constraints on \(\mathcal{Z}\), however, is not straightforward (if possible at all), as both steps heavily rely on the properties of the unconstrained set \(\{0,1\}^{d}\). For example, even if \(\mathcal{I}=\varnothing\), the support function argument requires the underlying graph of \(\mathcal{Z}\) to be connected, as shown in [36, Theorem 1]. In contrast, Theorem 5 introduces \(2d\) additional variables \(\boldsymbol{w},\boldsymbol{s}\) and leverages the relation between the original binary variables \(\boldsymbol{z}\) and the newly introduced binary variables \(\boldsymbol{w}\) through the set \(\Omega\). As a result, Theorem 5 reduces the complexity of characterizing \(\operatorname{cl\,conv}(\mathcal{T})\) to the complexity of characterizing \(\operatorname{conv}(\Omega)\).
To prove Theorem 5, we first make an observation about the recessive direction of \(\mathcal{T}\).
Lemma 10: _Under Assumption 5, if \((\tau,\boldsymbol{x},\boldsymbol{z})\in\operatorname{cl\,conv}(\mathcal{T})\), then for any \(\bar{\boldsymbol{x}}\in\mathcal{X}\) satisfying \(\boldsymbol{a}^{\top}\bar{\boldsymbol{x}}=0\) we have \((\tau,\boldsymbol{x}+\bar{\boldsymbol{x}},\boldsymbol{z})\in\operatorname{ cl\,conv}(\mathcal{T})\)._
Proof: Based on \(\bar{\boldsymbol{x}}\), we introduce the index sets \(\mathcal{K}_{>}=\{k\in[d]:a_{k}\bar{x}_{k}>0\}\) and \(\mathcal{K}_{<}=\{k\in[d]:a_{k}\bar{x}_{k}<0\}\). If \(\mathcal{K}_{>}=\mathcal{K}_{<}=\varnothing\), then \(\bar{\boldsymbol{x}}=\boldsymbol{0}\) and the claim trivially holds. In the following we assume that \(\mathcal{K}_{>}\) and \(\mathcal{K}_{<}\) are both nonempty (since \(\boldsymbol{a}^{\top}\bar{\boldsymbol{x}}=0\), either one of \(\mathcal{K}_{>}\) and \(\mathcal{K}_{<}\) being nonempty implies the other is also nonempty).
For every \(k\in\mathcal{K}_{>}\) and \(l\in\mathcal{K}_{<}\), we construct the vector
\[\boldsymbol{x}^{kl}:=\left(\frac{a_{l}\bar{x}_{l}\bar{x}_{k}}{\sum_{j\in \mathcal{K}_{<}}a_{j}\bar{x}_{j}}\right)\boldsymbol{e}_{k}+\left(\frac{a_{k} \bar{x}_{k}\bar{x}_{l}}{\sum_{q\in\mathcal{K}_{>}}a_{q}\bar{x}_{q}}\right) \boldsymbol{e}_{l}.\]
By construction, \(\boldsymbol{x}^{kl}\) satisfies
\[\boldsymbol{a}^{\top}\boldsymbol{x}^{kl}=(a_{k}a_{l}\bar{x}_{l}\bar{x}_{k})/(\sum_{j\in\mathcal{K}_{<}}a_{j}\bar{x}_{j})+(a_{k}a_{l}\bar{x}_{l}\bar{x}_{k})/(\sum_{q\in\mathcal{K}_{>}}a_{q}\bar{x}_{q})=0\]
since \(\sum_{j\in\mathcal{K}_{<}}a_{j}\bar{x}_{j}=-\sum_{q\in\mathcal{K}_{>}}a_{q}\bar{x}_{q}\) as \(\boldsymbol{a}^{\top}\bar{\boldsymbol{x}}=0\). We also have \(\boldsymbol{x}^{kl}\in\mathcal{X}\) as \(\operatorname{sign}(x_{k}^{kl})=\operatorname{sign}(\bar{x}_{k})\), \(\operatorname{sign}(x_{l}^{kl})=\operatorname{sign}(\bar{x}_{l})\), and \(\bar{\boldsymbol{x}}\in\mathcal{X}\). Furthermore, by Assumption 5, there exists a binary vector \(\boldsymbol{z}^{kl}\in\mathcal{Z}\) such that \(\boldsymbol{z}^{kl}_{k}=\boldsymbol{z}^{kl}_{l}=1\). Thus, the support of \(\boldsymbol{x}^{kl}\) is covered by the binary vector \(\boldsymbol{z}^{kl}\).
\[\sum_{k\in\mathcal{K}_{>}}\sum_{l\in\mathcal{K}_{<}}\boldsymbol {x}^{kl} =\sum_{k\in\mathcal{K}_{>}}\sum_{l\in\mathcal{K}_{<}}\left(\frac{a_ {l}\bar{x}_{l}\bar{x}_{k}}{\sum_{j\in\mathcal{K}_{<}}a_{j}\bar{x}_{j}}\right) \boldsymbol{e}_{k}+\sum_{k\in\mathcal{K}_{>}}\sum_{l\in\mathcal{K}_{<}}\left( \frac{a_{k}\bar{x}_{k}\bar{x}_{l}}{\sum_{q\in\mathcal{K}_{>}}a_{q}\bar{x}_{q}} \right)\boldsymbol{e}_{l}\] \[=\sum_{k\in\mathcal{K}_{>}}\bar{x}_{k}\boldsymbol{e}_{k}+\sum_{l \in\mathcal{K}_{<}}\bar{x}_{l}\boldsymbol{e}_{l}=\sum_{i\in[d]}\bar{x}_{i} \boldsymbol{e}_{i}=\bar{\boldsymbol{x}}.\]
Since \(h(\boldsymbol{a}^{\top}\boldsymbol{x}^{kl})=h(0)=0\) and the support of \(\boldsymbol{x}^{kl}\) is covered by \(\boldsymbol{z}^{kl}\), we conclude \((0,\lambda\boldsymbol{x}^{kl},\boldsymbol{z}^{kl})\in\mathcal{T}\) for every \(\lambda\in\mathbb{R}\), \(k\in\mathcal{K}_{>}\), and \(l\in\mathcal{K}_{<}\). Define \(\kappa:=|\mathcal{K}_{>}|\cdot|\mathcal{K}_{<}|\). Then, as \(\operatorname{cl\,conv}(\mathcal{T})\) is both closed and convex, we have
\[(\tau,\boldsymbol{x}+\bar{\boldsymbol{x}},\boldsymbol{z})\!=\!\lim_{\lambda\to+ \infty}\!\left[\frac{\lambda-\kappa}{\lambda}(\tau,\boldsymbol{x},\boldsymbol{ z})+\sum_{k\in\mathcal{K}_{>}}\sum_{l\in\mathcal{K}_{<}}\frac{1}{\lambda}(0, \lambda\boldsymbol{x}^{kl},\boldsymbol{z}^{kl})\right]\!\in\!\operatorname{ cl\,conv}(\mathcal{T}).\]
Hence, the claim follows.
We next introduce the auxiliary mixed-binary set
\[\overline{\mathcal{T}}_{s}:=\left\{(\tau,\mathbf{x},\mathbf{z}):\exists\mathbf{s},\mathbf{w},\mathbf{t }\text{ s.t. }(\mathbf{x},\mathbf{s},\mathbf{z},\mathbf{w},\mathbf{t})\in\widehat{\mathcal{T}},\mathbf{a}^{\top}\mathbf{x }=\mathbf{a}^{\top}\mathbf{s},\ \mathbf{1}^{\top}\mathbf{t}=\tau\right\},\]
where
\[\widehat{\mathcal{T}}:=\left\{(\mathbf{x},\mathbf{s},\mathbf{z},\mathbf{w},\mathbf{t})\in\mathbb{R}^{d+d+d+d+d}:\begin{array}{l}h(a_{i}s_{i})\leq t_{i},\ \forall i\in[d],\\ s_{i}(1-w_{i})=0,\ \forall i\in[d],\\ \mathbf{s},\mathbf{x}\in\mathcal{X},\ s_{i}\leq x_{i},\ \forall i\in\mathcal{I},\\ (\mathbf{w},\mathbf{z})\in\Omega\end{array}\right\}. \tag{13}\]
and \(\mathbf{a}^{\top}\mathbf{\delta}\neq 0\), recall also the definition of \(\mathcal{K}_{>}\) implies \((a_{k}x_{k})/(\mathbf{a}^{\top}\mathbf{\delta})>0\) for all \(k\) with \(\lambda_{k}>0\) and \(\sum_{k\in\mathcal{K}_{\geq}}\lambda_{k}=1\). Thus, the point \((\tau,\mathbf{x},\mathbf{z})\in\operatorname{conv}(\overline{\mathcal{T}}_{s})\), which establishes \(\mathcal{T}\subseteq\operatorname{conv}(\overline{\mathcal{T}}_{s})\), as desired. Since \(\operatorname{conv}(\mathcal{T})\) is the smallest convex set containing \(\mathcal{T}\), we conclude \(\operatorname{conv}(\mathcal{T})\subseteq\operatorname{conv}(\overline{\mathcal{T}}_{s})\). Taking closures of both sides proves that \(\operatorname{cl\,conv}(\mathcal{T})\subseteq\operatorname{cl\,conv}(\overline{\mathcal{T}}_{s})\).
**(\(\supseteq\))** Let \((\tau,\mathbf{x},\mathbf{z})\in\overline{\mathcal{T}}_{s}\). Then, there exists \(\mathbf{s}\in\mathcal{X}\) with \(s_{i}\leq x_{i}\) for all \(i\in\mathcal{I}\) and \(\mathbf{w}\in\left\{0,1\right\}^{d}\) with \(\mathbf{w}\leq\mathbf{z}\), \(s_{i}(1-w_{i})=0\) for all \(i\in[d]\), and \(\mathbf{1}^{\top}\mathbf{w}\leq 1\) such that \(\mathbf{a}^{\top}\mathbf{x}=\mathbf{a}^{\top}\mathbf{s}\) and \(\sum_{i\in[d]}h(a_{i}s_{i})\leq\tau\). We will next show that \((\tau,\mathbf{x},\mathbf{z})\in\mathcal{T}+\mathcal{R}\). From \(\mathbf{w}\in\left\{0,1\right\}^{d}\) and \(\mathbf{1}^{\top}\mathbf{w}\leq 1\) we deduce that at most one element of \(\mathbf{w}\) is equal to \(1\). Then, through the constraints \(s_{i}(1-w_{i})=0\) for all \(i\in[d]\) and \(\mathbf{w}\leq\mathbf{z}\), we deduce that \(\mathbf{s}\) has at most one nonzero element and \(s_{i}(1-z_{i})=0\) for all \(i\in[d]\) as well. Hence, the constraint \(\sum_{i\in[d]}h(a_{i}s_{i})\leq\tau\) implies that \((\tau,\mathbf{s},\mathbf{z})\in\mathcal{T}\). Moreover, the vector \(\bar{\mathbf{x}}=\mathbf{x}-\mathbf{s}\) satisfies \(\bar{\mathbf{x}}\in\mathcal{X}\) (as \(s_{i}\leq x_{i}\) for all \(i\in\mathcal{I}\)) and \(\mathbf{a}^{\top}\bar{\mathbf{x}}=0\) (as \(\mathbf{a}^{\top}\mathbf{x}=\mathbf{a}^{\top}\mathbf{s}\)). Thus, we have \((0,\bar{\mathbf{x}},\mathbf{0})\in\mathcal{R}\). Therefore, \((\tau,\mathbf{x},\mathbf{z})=(\tau,\mathbf{s},\mathbf{z})+(0,\bar{\mathbf{x}},\mathbf{0})\in\mathcal{T}+\mathcal{R}\), as required. This implies that \(\mathcal{T}+\mathcal{R}\supseteq\overline{\mathcal{T}}_{s}\). This completes the proof.
We next show that \(\operatorname{conv}(\overline{\mathcal{T}}_{s})\) can be obtained from \(\operatorname{conv}(\widehat{\mathcal{T}})\).
Lemma 11: _Under Assumption 5, we have_
\[\operatorname{conv}(\overline{\mathcal{T}}_{s})=\left\{(\tau,\mathbf{x},\mathbf{z}): \exists\mathbf{s},\mathbf{w},\mathbf{t}\text{ s.t. }\begin{array}{c}(\mathbf{x},\mathbf{s},\mathbf{z},\mathbf{w},\mathbf{t})\in \operatorname{conv}(\widehat{\mathcal{T}})\\ \mathbf{1}^{\top}\mathbf{t}=\tau,\ \mathbf{a}^{\top}\mathbf{s}=\mathbf{a}^{\top}\mathbf{x}\end{array} \right\},\]
_where \(\widehat{\mathcal{T}}\) is as defined in (13)._
Proof: Let \(\mathcal{U}:=\mathbb{R}\times\widehat{\mathcal{T}}\), \(\mathcal{V}_{1}:=\{(\tau,\mathbf{x},\mathbf{s},\mathbf{z},\mathbf{w},\mathbf{t}\,)\in\mathbb{R}^{5 d+1}:\mathbf{a}^{\top}\mathbf{x}=\mathbf{a}^{\top}\mathbf{s}\}\), and \(\mathcal{V}_{2}:=\{(\tau,\mathbf{x},\mathbf{s},\mathbf{z},\mathbf{w},\mathbf{t}\,)\in\mathbb{R}^{5 d+1}:\mathbf{1}^{\top}\mathbf{t}\leq\tau\}\). By definition, we have
\[\operatorname{conv}(\overline{\mathcal{T}}_{s})=\operatorname{conv}( \operatorname{Proj}_{\tau,\mathbf{x},\mathbf{z}}(\mathcal{U}\cap\mathcal{V}_{1}\cap \mathcal{V}_{2}))=\operatorname{Proj}_{\tau,\mathbf{x},\mathbf{z}}(\operatorname{ conv}(\mathcal{U}\cap\mathcal{V}_{1}\cap\mathcal{V}_{2})).\]
Hence, the claim will follow if we prove that \(\operatorname{conv}(\mathcal{U}\cap\mathcal{V}_{1}\cap\mathcal{V}_{2})= \operatorname{conv}(\mathcal{U})\cap\mathcal{V}_{1}\cap\mathcal{V}_{2}\). From Lemma 1(i), we have \(\operatorname{conv}(\mathcal{U}\cap\mathcal{V}_{1}\cap\mathcal{V}_{2})= \operatorname{conv}(\mathcal{U}\cap\mathcal{V}_{1})\cap\mathcal{V}_{2}\). Therefore, in the sequel we will show that \(\operatorname{conv}(\mathcal{U}\cap\mathcal{V}_{1})=\operatorname{conv}( \mathcal{U})\cap\mathcal{V}_{1}\).
As \(\mathcal{V}_{1}\) is convex, it is sufficient to show \(\operatorname{conv}(\mathcal{U})\cap\mathcal{V}_{1}\subseteq\operatorname{conv} (\mathcal{U}\cap\mathcal{V}_{1})\). Take a point \((\bar{\tau},\bar{\mathbf{x}},\bar{\mathbf{s}},\bar{\mathbf{z}},\bar{\mathbf{w}},\bar{\mathbf{t}})\in \operatorname{conv}(\mathcal{U})\cap\mathcal{V}_{1}\). Since this point is in \(\mathcal{V}_{1}\), we have \(\mathbf{a}^{\top}\bar{\mathbf{x}}=\mathbf{a}^{\top}\bar{\mathbf{s}}\). Additionally, as the point is in \(\operatorname{conv}(\mathcal{U})\), by Caratheodory's theorem, we have \((\bar{\tau},\bar{\mathbf{x}},\bar{\mathbf{s}},\bar{\mathbf{z}},\bar{\mathbf{w}},\bar{\mathbf{t}})= \sum_{k\in[q]}\lambda_{k}(\tau^{k},\mathbf{x}^{k},\mathbf{s}^{k},\mathbf{z}^{k},\mathbf{w}^{k}, \mathbf{t}^{k})\) for some \(q\leq 5d+2\), \((\tau^{k},\mathbf{x}^{k},\mathbf{s}^{k},\mathbf{z}^{k},\mathbf{w}^{k},\mathbf{t}^{k})\in\mathcal{U}\), and \(\lambda_{k}>0\) with \(\sum_{k\in[q]}\lambda_{k}=1\). Define the sets
\[\mathcal{K}_{<}:=\left\{k\in[q]:\ \mathbf{a}^{\top}\mathbf{s}^{k}<\mathbf{a}^{\top} \mathbf{x}^{k}\right\},\ \mathcal{K}_{=}:=\left\{k\in[q]:\mathbf{a}^{\top}\mathbf{s}^{k}=\mathbf{a}^{\top}\mathbf{x}^{k} \right\},\] \[\mathcal{K}_{>}:=\left\{k\in[q]:\mathbf{a}^{\top}\mathbf{s}^{k}>\mathbf{a}^{ \top}\mathbf{x}^{k}\right\}.\]
We consider two possible scenarios.
**(i)** If \(|\mathcal{K}_{=}|=q\), then \(|\mathcal{K}_{<}|=|\mathcal{K}_{>}|=0\) and \((\tau^{k},\mathbf{x}^{k},\mathbf{s}^{k},\mathbf{z}^{k},\mathbf{w}^{k},\mathbf{t}^{k})\in\mathcal{V}_{1}\) for every \(k\in[q]\). Thus, \((\tau^{k},\mathbf{x}^{k},\mathbf{s}^{k},\mathbf{z}^{k},\mathbf{w}^{k},\mathbf{t}^{k})\in\mathcal{U} \cap\mathcal{V}_{1}\) for every \(k\in[q]\), which implies that \((\bar{\tau},\bar{\mathbf{x}},\bar{\mathbf{s}},\bar{\mathbf{z}},\bar{\mathbf{w}},\bar{\mathbf{t}})\in \operatorname{conv}(\mathcal{U}\cap\mathcal{V}_{1})\).
**(ii)** If \(|\mathcal{K}_{=}|<q\), then \(|\mathcal{K}_{<}|>0\) and \(|\mathcal{K}_{>}|>0\) because \(\boldsymbol{a}^{\top}\bar{\boldsymbol{x}}=\boldsymbol{a}^{\top}\bar{\boldsymbol{s}}\). In the following we will use a rounding scheme that iteratively replaces the points \((\tau^{k},\boldsymbol{x}^{k},\boldsymbol{s}^{k},\boldsymbol{z}^{k},\boldsymbol{w}^{k},\boldsymbol{t}^{k})\) with new points \((\hat{\tau}^{k},\hat{\boldsymbol{x}}^{k},\hat{\boldsymbol{s}}^{k},\hat{\boldsymbol{z}}^{k},\hat{\boldsymbol{w}}^{k},\hat{\boldsymbol{t}}^{k})\). Pick an index \(j\in\mathcal{K}_{<}\) and an index \(l\in\mathcal{K}_{>}\), and consider the following construction
\[(\hat{\tau}^{k},\hat{\boldsymbol{x}}^{k},\hat{\boldsymbol{s}}^{k},\hat{\boldsymbol{z}}^{k},\hat{\boldsymbol{w}}^{k},\hat{\boldsymbol{t}}^{k})=\left\{\begin{array}{ll}(\tau^{j},\boldsymbol{s}^{j},\boldsymbol{s}^{j},\boldsymbol{z}^{j},\boldsymbol{w}^{j},\boldsymbol{t}^{j})&k=j\\ (\tau^{l},\boldsymbol{x}^{l}+(\lambda_{j}/\lambda_{l})(\boldsymbol{x}^{j}-\boldsymbol{s}^{j}),\boldsymbol{s}^{l},\boldsymbol{z}^{l},\boldsymbol{w}^{l},\boldsymbol{t}^{l})&k=l\\ (\tau^{k},\boldsymbol{x}^{k},\boldsymbol{s}^{k},\boldsymbol{z}^{k},\boldsymbol{w}^{k},\boldsymbol{t}^{k})&k\notin\left\{j,l\right\}.\end{array}\right.\]
By construction, we have
\[\sum_{k\in[q]}\lambda_{k}(\tau^{k},\boldsymbol{x}^{k},\boldsymbol{s}^{k}, \boldsymbol{z}^{k},\boldsymbol{w}^{k},\boldsymbol{t}^{k})=\sum_{k\in[q]} \lambda_{k}(\hat{\tau}^{k},\hat{\boldsymbol{x}}^{k},\hat{\boldsymbol{s}}^{k},\hat{\boldsymbol{z}}^{k},\hat{\boldsymbol{w}}^{k},\hat{\boldsymbol{t}}^{k}).\]
Moreover, from \((\tau^{j},\boldsymbol{x}^{j},\boldsymbol{s}^{j},\boldsymbol{z}^{j},\boldsymbol {w}^{j},\boldsymbol{t}^{j})\in\mathcal{U}\), we deduce \(0\leq s_{i}^{j}\leq x_{i}^{j}\) for every \(i\in\mathcal{I}\). Then, \(0\leq s_{i}^{l}\leq x_{i}^{l}\leq x_{i}^{l}+(\lambda_{j}/\lambda_{l})(x_{i}^{j }-s_{i}^{j})=\hat{x}_{i}^{l}\) for every \(i\in\mathcal{I}\). Thus, we conclude that \((\hat{\tau}^{k},\hat{\boldsymbol{x}}^{k},\hat{\boldsymbol{s}}^{k},\hat{ \boldsymbol{z}}^{k},\hat{\boldsymbol{w}}^{k},\hat{\boldsymbol{t}}^{k})\in \mathcal{U}\) for every \(k\in[q]\). Defining the index sets \(\hat{\mathcal{K}}_{=}=\left\{k\in[q]:\boldsymbol{a}^{\top}\hat{s}^{k}= \boldsymbol{a}^{\top}\hat{\boldsymbol{x}}^{k}\right\}\), one can show that \(|\hat{\mathcal{K}}_{=}|>|\mathcal{K}_{=}|\) because the index \(j\) now satisfies the condition \(\boldsymbol{a}^{\top}\hat{\boldsymbol{s}}^{j}=\boldsymbol{a}^{\top}\hat{ \boldsymbol{x}}^{j}\). We next replace the points \((\tau^{k},\boldsymbol{x}^{k},\boldsymbol{s}^{k},\boldsymbol{z}^{k},\boldsymbol {w}^{k},\boldsymbol{t}^{k})\) with the new points \((\hat{\tau}^{k},\hat{\boldsymbol{x}}^{k},\hat{\boldsymbol{s}}^{k},\hat{ \boldsymbol{z}}^{k},\hat{\boldsymbol{w}}^{k},\hat{\boldsymbol{t}}^{k})\), and repeat the same rounding scheme. In this way, after at most \(q\) iterations, we will obtain a set of points \((\hat{\tau}^{k},\hat{\boldsymbol{x}}^{k},\hat{\boldsymbol{s}}^{k},\hat{ \boldsymbol{z}}^{k},\hat{\boldsymbol{w}}^{k},\hat{\boldsymbol{t}}^{k})\), \(k\in[q]\), for which \(|\hat{\mathcal{K}}_{=}|=q\). Hence, \((\hat{\tau}^{k},\hat{\boldsymbol{x}}^{k},\hat{\boldsymbol{s}}^{k},\hat{ \boldsymbol{z}}^{k},\hat{\boldsymbol{w}}^{k},\hat{\boldsymbol{t}}^{k})\in \mathcal{U}\cap\mathcal{V}_{1}\) for every \(k\in[q]\) and we conclude that \((\bar{\tau},\bar{\boldsymbol{x}},\bar{\boldsymbol{s}},\bar{\boldsymbol{z}}, \bar{\boldsymbol{w}},\bar{\boldsymbol{t}})\in\operatorname{conv}(\mathcal{U} \cap\mathcal{V}_{1})\). This completes the proof.
Given Lemmas 3 and 11, and Proposition 5, we are now ready to prove Theorem 5.
Proof of Theorem 5: Recall the definition of \(\widehat{\mathcal{T}}\) and \(\Omega\). By letting
\[\boldsymbol{\beta}_{i}:=(a_{i}s_{i},x_{i}),\ \boldsymbol{\gamma}:= \boldsymbol{t},\ \boldsymbol{\delta}:=(\boldsymbol{w},\boldsymbol{z}),\ \Delta:=\Omega,\] \[\boldsymbol{C}_{i}=\begin{bmatrix}0&\mathds{1}_{\mathcal{I}}(i)\\ \mathds{1}_{\mathcal{I}}(i)/a_{i}&0\\ -\mathds{1}_{\mathcal{I}}(i)/a_{i}&\mathds{1}_{\mathcal{I}}(i)\end{bmatrix}, \ \mathbb{C}_{i}=\mathbb{R}_{+}^{3},\ \forall i\in[d],\]
we can represent the set \(\widehat{\mathcal{T}}\) as an instance of the set \(\mathcal{W}\) defined as in (5). Then, Proposition 2 yields
\[\operatorname{cl\,conv}(\widehat{\mathcal{T}})=\left\{(\boldsymbol{x},\boldsymbol{z},\boldsymbol{s},\boldsymbol{w},\boldsymbol{t})\in\mathbb{R}^{d+d+d+d+d}:\begin{array}{l}h^{\pi}(a_{i}s_{i},w_{i})\leq t_{i},\ \forall i\in[d],\\ \boldsymbol{s},\boldsymbol{x}\in\mathcal{X},\ s_{i}\leq x_{i},\ \forall i\in\mathcal{I},\\ (\boldsymbol{w},\boldsymbol{z})\in\operatorname{conv}(\Omega)\end{array}\right\}.\]
In the following we characterize \(\operatorname{cl\,conv}(\mathcal{T})\) in terms of \(\operatorname{cl\,conv}(\widehat{\mathcal{T}})\). From Proposition 5 and Lemma 11, we deduce that \(\operatorname{cl\,conv}(\mathcal{T})\) coincides with
\[\operatorname{cl\,}\Big{(}\operatorname{Proj}_{\tau,\boldsymbol{x},\boldsymbol {z}}\big{(}\{(\tau,\boldsymbol{x},\boldsymbol{z},\boldsymbol{s},\boldsymbol{w}, \boldsymbol{t})\in\mathbb{R}\times\operatorname{conv}(\widehat{\mathcal{T}}): \boldsymbol{1}^{\top}\boldsymbol{t}=\tau,\ \boldsymbol{a}^{\top}\boldsymbol{s}=\boldsymbol{a}^{\top}\boldsymbol{x}\} \big{)}\Big{)}.\]
By letting
\[\mathbf{\mu}:=(\tau,\mathbf{x},\mathbf{z}),\ \mathbf{\eta}:=(\tau,\mathbf{x},\mathbf{z},\mathbf{s}, \mathbf{w},\mathbf{t}),\ \mathcal{U}:=\mathbb{R}\times\mathrm{conv}(\widehat{\mathcal{T}}),\] \[\mathbf{A}:=[\mathbf{I}_{1+2d},\ \mathbf{0}],\ \mathbf{B}:=\begin{bmatrix}-1&\mathbf{0}^{\top}&\mathbf{0}^{ \top}&\mathbf{0}^{\top}&\mathbf{0}^{\top}&\mathbf{1}^{\top}\\ 0&\mathbf{a}^{\top}&\mathbf{0}^{\top}&-\mathbf{a}^{\top}&\mathbf{0}^{\top}&\mathbf{0}^{\top}\end{bmatrix} ^{\top},\ \mathbf{b}:=\mathbf{0},\]
we observe that
\[\mathrm{Proj}_{\tau,\mathbf{x},\mathbf{z}}\big{(} \{(\tau,\mathbf{x},\mathbf{z},\mathbf{s},\mathbf{w},\mathbf{t})\in\mathbb{R}\times \mathrm{conv}(\widehat{\mathcal{T}}):\mathbf{1}^{\top}\mathbf{t}=\tau,\ \mathbf{a}^{\top}\mathbf{s}=\mathbf{a}^{\top}\mathbf{x}\}\big{)}\] \[=\{\mathbf{\mu}:\exists\mathbf{\eta}\in\mathcal{U}\ \mathrm{s.t.}\ \mathbf{A}\mathbf{\eta}=\mathbf{\mu},\ \mathbf{B}\mathbf{\eta}=\mathbf{b}\}\]
as in Lemma 1 (ii). Note also that the first requirement of Lemma 1 (ii) is trivially satisfied as the variable \(\tau\) is free to choose from the set \(\mathcal{U}\) and the variable \(\mathbf{x}\) is linearly dependent to the variable \(\mathbf{s}\). In addition, we have
\[\mathrm{rec}(\{\mathbf{\eta}\in\mathrm{cl}(\mathcal{U}):\mathbf{A}\mathbf{\eta}=\mathbf{0},\ \mathbf{B}\mathbf{\eta}=\mathbf{b}\})=\mathrm{rec}\left(\left\{(0,\mathbf{0},\mathbf{0},\mathbf{s},\mathbf{w},\mathbf{t}):\begin{array}{l}h^{\pi}(a_{i}s_{i},w_{i})\leq t_{i},\ \forall i\in[d],\\ \mathbf{s}\in\mathcal{X},\ s_{i}\leq 0,\ \forall i\in\mathcal{I},\\ (\mathbf{w},\mathbf{0})\in\mathrm{conv}(\Omega),\\ \mathbf{a}^{\top}\mathbf{s}=0,\ \mathbf{1}^{\top}\mathbf{t}=0\end{array}\right\}\right)\]
\[=\mathrm{rec}\left(\left\{(0,\mathbf{0},\mathbf{0},\mathbf{s},\mathbf{0},\mathbf{t}):\begin{array}{l}h^{\pi}(a_{i}s_{i},0)\leq t_{i},\ \forall i\notin\mathcal{I},\\ s_{i}=0,\ 0\leq t_{i},\ \forall i\in\mathcal{I},\\ \mathbf{a}^{\top}\mathbf{s}=0,\ \mathbf{1}^{\top}\mathbf{t}=0\end{array}\right\}\right),\]
where the first equation holds by the definition of \(\mathrm{conv}(\widehat{\mathcal{T}})\) and the fact that \(\mathbf{A}\mathbf{\eta}=\mathbf{0}\) implies \((\tau,\mathbf{x},\mathbf{z})=(0,\mathbf{0},\mathbf{0})\). The second equation follows from \((\mathbf{w},\mathbf{z})\in\mathrm{conv}(\Omega)\), which implies \(\mathbf{0}\leq\mathbf{w}\leq\mathbf{z}\) and as \(\mathbf{z}=0\) we deduce \(\mathbf{w}=0\). Moreover, \(\mathbf{x}=0\) implies \(0\leq s_{i}\leq x_{i}\) for all \(i\in\mathcal{I}\) and so \(s_{i}=0\) for all \(i\in\mathcal{I}\). This also results in \(h^{\pi}(a_{i}s_{i},0)=h^{\pi}(0,0)=0\leq t_{i}\) for all \(i\in\mathcal{I}\). Since the function \(h\) is nonlinear, proper, lower semicontinuous and convex, the set \(\{(u,v):h^{\pi}(u,0)\leq v\}\) is a convex closed pointed cone thanks to Lemma 3. Consequently, the origin is an extreme point of the set, meaning that \(\sum_{i\in[d]}(a_{i}s_{i},t_{i})=(0,0)\) only if \(s_{i}=t_{i}=0\). Hence, the second requirement of Lemma 1 (ii) also follows, and we can apply Lemma 1 (ii) to conclude that
\[\mathrm{cl\,conv}(\mathcal{T})=\left\{(\tau,\mathbf{x},\mathbf{z}):\exists\mathbf{s},\mathbf{ w},\mathbf{t}\ \ \mathrm{s.t.}\ \ \begin{array}{l}(\mathbf{x},\mathbf{z},\mathbf{s},\mathbf{w},\mathbf{t})\in\mathrm{cl\,conv}( \widehat{\mathcal{T}}),\\ \mathbf{1}^{\top}\mathbf{t}=\tau,\ \mathbf{a}^{\top}\mathbf{x}=\mathbf{a}^{\top}\mathbf{s}\end{array} \right\}.\]
The proof concludes by projecting out the variable \(\mathbf{t}\) using the Fourier-Motzkin elimination approach.
We conclude this section by characterizing \(\mathrm{conv}(\Omega)\) where \(\Omega=\{(\mathbf{w},\mathbf{z})\in\{0,1\}^{d}\times\mathcal{Z}:\ \mathbf{1}^{\top}\mathbf{w}\leq 1,\ \mathbf{w}\leq\mathbf{z}\}\) for some simple integral sets \(\mathcal{Z}\). The proofs of the subsequent results are provided in Appendix A. We start with the case when \(\mathcal{Z}\) is defined by a cardinality constraint. In this case, the resulting set \(\Omega\) admits an immediate totally unimodular representation.
**Lemma 12**: _Suppose \(\mathcal{Z}=\{\mathbf{z}\in\{0,1\}^{d}:\mathbf{1}^{\top}\mathbf{z}\leq\kappa\}\), where \(\kappa\in[d]\backslash\{1\}\). Then_
\[\mathrm{conv}(\Omega)=\{(\mathbf{w},\mathbf{z})\in[0,1]^{d+d}:\ \mathbf{1}^{\top}\mathbf{w} \leq 1,\ \mathbf{w}\leq\mathbf{z},\ \mathbf{1}^{\top}\mathbf{z}\leq\kappa\}.\]
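To make the total-unimodularity claim behind Lemma 12 concrete, the following sketch (not part of the paper; the choices of \(d\), \(\kappa\), the solver, and the trial count are arbitrary) maximizes random linear objectives over the polyhedron above with SciPy and checks that the returned vertices are integral, as total unimodularity predicts.

```python
import numpy as np
from scipy.optimize import linprog

def lemma12_lp_is_integral(d=6, kappa=3, trials=200, seed=0):
    """Empirical check: random LPs over the Lemma 12 polyhedron have 0/1 optima.
    Variables are stacked as x = (w, z) in [0, 1]^{2d}."""
    rng = np.random.default_rng(seed)
    I = np.eye(d)
    # Constraints: 1^T w <= 1,  w - z <= 0,  1^T z <= kappa.
    A_ub = np.vstack([
        np.hstack([np.ones((1, d)), np.zeros((1, d))]),
        np.hstack([I, -I]),
        np.hstack([np.zeros((1, d)), np.ones((1, d))]),
    ])
    b_ub = np.concatenate([[1.0], np.zeros(d), [float(kappa)]])
    for _ in range(trials):
        c = rng.normal(size=2 * d)  # random objective to be maximized
        res = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * (2 * d),
                      method="highs-ds")  # dual simplex returns a vertex solution
        if not np.allclose(res.x, np.round(res.x), atol=1e-7):
            return False
    return True

print(lemma12_lp_is_integral())  # expected output: True
```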
We next characterize \(\mathrm{conv}(\Omega)\) for the weak and strong hierarchy sets. Unlike the previous case, \(\Omega\) does not immediately admit a totally unimodular representation. Nonetheless, it turns out that the set \(\Omega\backslash\{\mathbf{0}\}\) admits a totally unimodular representation under weak hierarchy constraints. Using Lemma A.2 (i) in Appendix A, which provides a description of \(\mathrm{conv}(\Omega)\) based on \(\mathrm{conv}(\Omega\backslash\{\mathbf{0}\})\), the following lemma analyzes the weak hierarchy constraints.
**Lemma 13**: _Suppose \(\mathcal{Z}=\{\mathbf{z}\in\{0,1\}^{d}:\ z_{d}\leq\sum_{i\in[d-1]}z_{i}\}\). Then,_
\[\mathrm{conv}(\Omega)=\left\{(\mathbf{w},\mathbf{z})\in[0,1]^{d+d}:\begin{array}{l} \mathbf{1}^{\top}\mathbf{w}\leq 1,\ \mathbf{w}\leq\mathbf{z},\ z_{d}\leq\sum_{i\in[d-1]}z_{i}\\ \mathbf{1}^{\top}\mathbf{w}\leq\sum_{i\in[d-1]}z_{i}\end{array}\right\}.\]
We conclude this section by examining the strong hierarchy constraints. In this case, neither the set \(\Omega\) nor \(\Omega\backslash\{\mathbf{0}\}\) admits an immediate representation with totally unimodular matrices. Using Lemma A.2 (ii) in Appendix A, the following lemma analyzes the strong hierarchy constraints.
**Lemma 14**: _Suppose \(\mathcal{Z}=\{\mathbf{z}\in\{0,1\}^{d}:\ z_{d}\leq z_{i},\,\forall i\in[d-1]\}\). Then,_
\[\mathrm{conv}(\Omega)=\left\{(\mathbf{w},\mathbf{z}):\exists\,\tilde{\mathbf{z}}^{0},\ldots,\tilde{\mathbf{z}}^{d}\ \text{s.t.}\ \begin{array}{l}\mathbf{w}\in\mathbb{R}_{+}^{d},\ \tilde{z}_{d}^{0}\geq 0,\ \mathbf{1}^{\top}\mathbf{w}\leq 1,\\ \mathbf{z}=\tilde{\mathbf{z}}^{0}+\sum_{i\in[d]}\tilde{\mathbf{z}}^{i},\\ \tilde{z}_{d}^{0}\leq\tilde{z}_{j}^{0}\leq 1-\mathbf{1}^{\top}\mathbf{w},\ \forall j\in[d-1],\\ \tilde{z}_{i}^{i}=w_{i},\ \forall i\in[d-1],\\ 0\leq\tilde{z}_{d}^{i}\leq\tilde{z}_{j}^{i}\leq w_{i},\ \forall i,j\in[d-1],\\ \tilde{z}_{j}^{d}=w_{d},\ \forall j\in[d]\end{array}\right\}.\]
## 4 Numerical Results
In this section we study the numerical performance of our conic formulations on a nonlinear logistic regression problem with quadratic features. The resulting exponential cone programs are modeled with JuMP [31] and solved with MOSEK 10 on a MacBook Pro with a 2.80 GHz processor and 16GB RAM. In these experiments, we set the time limit to 300 seconds and the number of threads of the solver to 4.
In the nonlinear logistic regression problem with quadratic features, for an input data \(\mathbf{\phi}\in\mathbb{R}^{p}\), we construct the lifted feature vector
\[\mathbf{a}=((\phi_{k})_{k\in[p]},(\phi_{k}^{2})_{k\in[p]},(\phi_{k}\phi_{l})_{k,l \in[p]:l>k})\in\mathbb{R}^{d},\]
where \(d=p(p+3)/2\). We denote the coefficients of the nonlinear classifier and its support by the vectors \(\mathbf{x}\in\mathbb{R}^{d}\) and \(\mathbf{z}\in\{0,1\}^{d}\). With slight abuse
of notation, we use the notation \(\mathbf{x}=((x_{k})_{k\in[p]},(x_{kk})_{k\in[p]},(x_{kl})_{k,l\in[p]:l>k})\) to refer to the elements of \(\mathbf{x}\). In a similar fashion, we use the notation \(\mathbf{z}=((z_{k})_{k\in[p]},(z_{kk})_{k\in[p]},(z_{kl})_{k,l\in[p]:l>k})\) for \(\mathbf{z}\). We examine the following sparse logistic regression problem
\[\begin{split}\min&\frac{1}{N}\sum_{j\in[N]}\log(1+ \exp(-b_{j}\mathbf{a}_{j}^{\top}\mathbf{x}))+\lambda\sum_{i\in[d]}z_{i}+\mu\|\mathbf{x}\|_{2 }^{2}\\ \text{s.t.}&\mathbf{x}\in\mathbb{R}_{+}^{d},\ \mathbf{z}\in \mathcal{Z},\ x_{i}(1-z_{i})=0,\ \forall i\in[d],\end{split} \tag{14}\]
where \((\mathbf{a}_{j},b_{j})_{j\in[N]}\) represents the (nonlinear) feature-label pairs constructed from the input vector \((\mathbf{\phi}_{j})_{j\in[N]}\) in the training data, and \(\lambda,\mu\in\mathbb{R}_{+}\) denote the regularization coefficients. We use the set
\[\mathcal{Z}=\left\{\mathbf{z}\in\{0,1\}^{d}:\ z_{kk}\leq z_{k},\ z_{kl}\leq z_{k}, \ z_{kl}\leq z_{l},\ \forall k,l\in[p]\ \text{s.t.}\ l>k\right\}\]
to capture strong hierarchy constraints. We consider various reformulations of (14) using the convex hull results presented in Section 3. Namely, based on Theorem 2, we introduce the _separable_ reformulation as
\[\begin{split}\min&\frac{1}{N}\sum_{j\in[N]}t_{i}+ \log(2)+\lambda\sum_{i\in[d]}z_{i}+\mu\sum_{i\in[d]}r_{i}\\ \text{s.t.}&\mathbf{x}\in\mathbb{R}_{+}^{d},\ \mathbf{z}\in \mathcal{Z},\ \mathbf{u},\mathbf{v}\in\mathbb{R}^{N},\\ &(r_{i},z_{i},x_{i})\in\mathcal{K}_{\text{rsoc}},\
generates valid cuts for the binary set \(\Delta_{b}\). Finally, based on Theorem 5.1, we introduce the _rank\({}_{1}^{+}\)_ formulation as
\[\min \frac{1}{N}\sum_{j\in[N]}\sum_{i\in[d]}t_{ji}+\log(2)+\lambda\sum_{i\in[d]}z_{i}+\mu\sum_{i\in[d]}r_{i}\] s.t. \[\mathbf{x},\mathbf{r}\in\mathbb{R}_{+}^{d},\ (\mathbf{w},\mathbf{z})\in\Omega_{b},\ \mathbf{u},\mathbf{v},\mathbf{t},\mathbf{s}\in\mathbb{R}^{N\times d},\] \[(r_{i},z_{i},x_{i})\in\mathcal{K}_{\text{rsoc}}, \forall i\in[d],\] \[\sum_{i\in[d]}a_{ji}s_{ji}=\mathbf{a}_{j}^{\top}\mathbf{x}, \forall j\in[N], (\text{rank}_{1}^{+})\] \[0\leq s_{ji}\leq\mathds{1}_{\{a_{ji}\neq 0\}}\,x_{i}, \forall i\in[d],\forall j\in[N],\] \[(v_{ji},w_{ji},-t_{ji})\in\mathcal{K}_{\text{exp}}, \forall i\in[d],\forall j\in[N],\] \[(u_{ji},w_{ji},-b_{j}a_{ji}s_{ji}-t_{ji})\in\mathcal{K}_{\text{exp}}, \forall i\in[d],\forall j\in[N],\] \[u_{ji}+v_{ji}\leq 2, \forall i\in[d],\forall j\in[N].\]
where \(\Omega_{b}\!:=\!\{(\mathbf{w},\mathbf{z})\!\in\!\{0,1\}^{N\times d}\!\times\!\mathcal{Z }\!:\!\mathbf{w}\,\mathbf{1}\!\leq\!\mathbf{1},w_{ji}\!\leq\!\mathds{1}_{\{a_{ji}\neq 0\}}\,z_{i}, \forall i\!\in\![d],\!\forall j\!\in\![N]\}\). It is easy to verify that the set
\[\Omega_{v}=\left\{(\mathbf{w},\mathbf{z})\in\mathbb{R}^{N\times d}\times\mathbb{R}^{d}:\ \begin{array}{l}w_{j,k}+w_{j,l}+w_{j,kl}\leq z_{k}+z_{l}-z_{kl},\\ w_{j,k}+w_{j,kk}\leq z_{k},\\ \forall j\in[N],\,\forall k,l\in[p]\ \text{s.t.}\ l>k,\ a_{jk},a_{jl}\neq 0\end{array}\right\}\]
generates valid cuts for the binary set \(\Omega_{b}\).
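As a side note added here (not from the paper), the constant \(\log(2)\) in the objectives can be traced back to the exponential-cone pairs used above: in the simplest, non-lifted case, \(v\geq\exp(-t)\), \(u\geq\exp(w-t)\) and \(u+v\leq 2\) force the smallest feasible \(t\) to equal \(\log(1+\exp(w))-\log 2\), so adding \(\log(2)\) back recovers the logistic loss; the lifted formulations apply perspective versions of the same inequality with \(w_{ji}\) in place of the constant \(1\). A quick numerical confirmation:

```python
import numpy as np

w = np.linspace(-6.0, 6.0, 13)            # stand-in values of -b_j * a_j^T x
t_star = np.log1p(np.exp(w)) - np.log(2)  # smallest t allowed by the two cones and u + v <= 2
assert np.all(np.exp(-t_star) + np.exp(w - t_star) <= 2.0 + 1e-12)
assert np.allclose(t_star + np.log(2), np.log1p(np.exp(w)))  # recovers the logistic loss term
```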
We examine the relaxation quality and branch and bound (B&B) performance of different reformulations of (14) in terms of the optimality gap, solution time, and number of B&B nodes. Namely, we examine the separable, rank-one, and rank-one-plus relaxations obtained by relaxing the integrality restrictions of the boolean sets involved. For example, \(\text{relax}(\Omega_{b})\) corresponds to the continuous relaxation of the set \(\Omega_{b}\), etc. Inspired by [34, Section 6], we conduct a numerical experiment in which the input data \((\mathbf{\phi}_{j})_{j\in[N]}\) is sparse. Specifically, we randomly assign \(\phi_{ji}\) to either a point sampled from a standard Gaussian distribution with a mean of zero and variance of one with probability \(\pi\) or set it to zero with probability \(1-\pi\), using a threshold value \(\pi\in(0,1]\). We then randomly generate a true coefficient vector \(\mathbf{x}_{0}\in[-1,1]^{d}\). Using the vector \(\mathbf{x}_{0}\), we finally generate the label \(b_{j}\in\{-1,1\}\) by sampling from a Bernoulli distribution with \(\mathbb{P}(b_{j}=1|\mathbf{a}_{j})=1/(1+\exp(-\mathbf{a}_{j}^{\top}\mathbf{x}_{0}))\).
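For concreteness, the following NumPy sketch (independent of the paper's JuMP/MOSEK implementation; function names, the seed, and default sizes are placeholders) reproduces the quadratic feature lifting, a strong-hierarchy feasibility check, the objective of (14), and the sparse synthetic data generation described above.

```python
import numpy as np

def lift_features(phi):
    """Lifted vector a = ((phi_k), (phi_k^2), (phi_k * phi_l, l > k)), d = p(p+3)/2."""
    p = len(phi)
    cross = [phi[k] * phi[l] for k in range(p) for l in range(k + 1, p)]
    return np.concatenate([phi, phi ** 2, np.array(cross)])

def strong_hierarchy_ok(z, p):
    """Check z_kk <= z_k, z_kl <= z_k and z_kl <= z_l for z ordered as in lift_features."""
    z_lin, z_sq, z_cross = z[:p], z[p:2 * p], z[2 * p:]
    ok = np.all(z_sq <= z_lin)
    idx = 0
    for k in range(p):
        for l in range(k + 1, p):
            ok = ok and z_cross[idx] <= min(z_lin[k], z_lin[l])
            idx += 1
    return bool(ok)

def objective_14(x, z, A, b, lam, mu):
    """Objective of problem (14): average logistic loss + lam * sum(z) + mu * ||x||^2."""
    margins = -b * (A @ x)
    return np.mean(np.logaddexp(0.0, margins)) + lam * z.sum() + mu * (x @ x)

def generate_instance(N=100, p=50, pi=0.1, seed=0):
    """phi_ji ~ N(0,1) with probability pi (zero otherwise); labels from the logistic model."""
    rng = np.random.default_rng(seed)
    Phi = rng.normal(size=(N, p)) * (rng.random((N, p)) < pi)
    A = np.vstack([lift_features(Phi[j]) for j in range(N)])
    x0 = rng.uniform(-1.0, 1.0, size=A.shape[1])   # true coefficients in [-1, 1]^d
    prob = 1.0 / (1.0 + np.exp(-A @ x0))
    b = np.where(rng.random(N) < prob, 1, -1)
    return A, b
```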
We examine the quality of the following relaxations:
* In _natural relaxation_, we drop the complementary constraints in (14). It is easy to see that the relaxed problem is solved by \(\mathbf{z}^{\star}=0\).
* In _separable relaxation_, we replace \(\mathcal{Z}\) in (separable) with \(\text{relax}(\mathcal{Z})\).
* In _rank\({}_{1}\) relaxation_, we replace \(\Delta_{b}\) in (rank\({}_{1}\)) with \(\text{relax}(\Delta_{b})\).
* In _rank\({}_{1,v}\) relaxation_, we replace \(\Delta_{b}\) in (rank\({}_{1}\)) with \(\text{relax}(\Delta_{b}\cap\Delta_{v})\).
* In _rank\({}_{1}^{+}\) relaxation_, we replace \(\Omega_{b}\) in (rank\({}_{1}^{+}\)) with \(\text{relax}(\Omega_{b})\).
* In _rank\({}_{1,v}^{+}\) relaxation_, we replace \(\Omega_{b}\) in (rank\({}_{1}^{+}\)) with \(\text{relax}(\Omega_{b}\cap\Omega_{v})\).
Note that [36, Theorem 1] cannot be applied directly when a complete description for \(\mathrm{conv}(\mathcal{Z}\backslash\{\mathbf{0}\})\) is not available. Nonetheless, the suggested valid inequalities from set \(\Delta_{v}\) can still be employed to obtain a convex relaxation. As a result, we propose the subsequent relaxation as a substitute for [36, Theorem 1]
\[\min \frac{1}{N}\sum_{j\in[N]}t_{j}+\log(2)+\lambda\sum_{i\in[d]}z_{i}+\mu\sum_{i\in[d]}r_{i}\qquad(\text{rank}_{1,w})\] \[\mathrm{s.t.} \mathbf{x},\mathbf{r}\in\mathbb{R}^{d}_{+},\ (\mathbf{w},\mathbf{z})\in\mathrm{relax}(\Delta_{w}),\ \mathbf{u},\mathbf{v}\in\mathbb{R}^{4\times N},\ \mathbf{t}\in\mathbb{R}^{N},\] \[(r_{i},z_{i},x_{i})\in\mathcal{K}_{\mathrm{rsoc}}, \forall i\in[d],\] \[(v_{kj},w_{kj},-t_{j}),(u_{kj},w_{kj},-b_{j}\mathbf{a}_{j}^{\top}\mathbf{x}-t_{j})\in\mathcal{K}_{\mathrm{exp}}, \forall j\in[N],\forall k\in[4],\] \[u_{kj}+v_{kj}\leq 2, \forall j\in[N],\forall k\in[4],\]
where the set
\[\Delta_{w}=\left\{(\mathbf{w},\mathbf{z})\in\mathbb{R}^{4\times N}\times\{0,1\}^{d}:\ \begin{array}{l}w_{3j}=\sum_{k,l\in[p]:l>k,\,a_{jk},a_{jl}\neq 0}(z_{k}+z_{l}-z_{kl}),\\ w_{4j}=\sum_{k\in[p]:a_{jk}\neq 0}z_{k},\ \forall j\in[N]\end{array}\right\}.\]
Note that there are two main differences between \(\mathrm{rank}_{1,v}\) and \(\mathrm{rank}_{1,w}\) formulations. First and foremost, through Theorem 3 we have the variables \(\mathbf{w}\) in \(\mathrm{rank}_{1,v}\) restricted to be binary whereas in \(\mathrm{rank}_{1,w}\) they are continuous and indeed are simply explicit functions of other variables. And second, while the strengthening via valid inequalities in both formulations involves the same set of valid inequalities, in \(\mathrm{rank}_{1,v}\) this strengthening is done in the lifted space in a linear form, and in \(\mathrm{rank}_{1,w}\) it is essentially done in the original space of the variables through nonlinear inequalities.
In the first experiment we set \(\lambda=10^{-2},\mu=10^{-4},p=50,d=1325,\pi=0.1\) and \(N\in\{10,50,100,200\}\). In Figure 1 we report the optimality gap and solution time for different convex relaxations. In determining the optimality gap, we compare the objective value of the given relaxation against the best known feasible solution for the instance (among the ones found by the B&B method applied to any of the formulations of (14) presented in (separable), (\(\mathrm{rank}_{1}\)) and (\(\mathrm{rank}_{1}^{+}\))). This solution corresponds to the optimal solution if the time limit has not been reached in the B&B algorithm, or the best feasible solution reported by MOSEK if the time limit has been reached.
The results in Figure 1 suggest that the qualities of the convex relaxations based on the (rank\({}_{1}^{+}\)) formulation are the best. In particular, the rank\({}_{1}^{+}\) relaxation attains an average gap smaller than \(5\%\). Moreover, adding valid inequalities to the sets \(\Delta_{b}\) and \(\Omega_{b}\) significantly improves the quality of the rank\({}_{1}\) and rank\({}_{1}^{+}\) relaxations. For example, the rank\({}_{1,v}^{+}\) relaxation attains an average gap smaller than \(1\%\), which is \(5\) times smaller than the gap attained by rank\({}_{1}^{+}\). However, adding these valid inequalities comes with a computational downside. Specifically, the optimization problems now involve more constraints, which results in longer solution times. It is also worth noting that the optimality gap of the rank\({}_{1}\) relaxations is significantly smaller than that of the separable and natural relaxations, albeit at the expense of relatively longer solution times.
Finally, we highlight that the optimality gaps of the continuous relaxations from rank\({}_{1,v}\) and rank\({}_{1,w}\), with the latter being inspired by [36, Theorem 1], are identical. This is due to the fact that when the variables \(\boldsymbol{w}\) are relaxed to be continuous, the projection of rank\({}_{1,v}\) onto the original space leads to precisely the same inequalities as in rank\({}_{1,w}\). Nonetheless, it takes roughly twice as long to solve the rank\({}_{1,w}\) relaxation as the rank\({}_{1,v}\) relaxation. This is expected as the rank\({}_{1,w}\) relaxation introduces a considerably larger number of constraints and variables compared to the rank\({}_{1,v}\) relaxation. It is also important to note that a complete implementation of [36, Theorem 1] requires using a characterization of conv\((\mathcal{Z}\backslash\{\boldsymbol{0}\})\) which may possibly involve an exponential number of constraints. In contrast, the rank\({}_{1,v}\) relaxation handles this complexity through the use of binary variables \(\boldsymbol{w}\), making the rank\({}_{1,v}\) relaxation much more applicable in practice.
We next examine the B&B performance of these alternative formulations of (14) in which we always keep the variables \(\boldsymbol{z}\) as binary but we create two variants for each of the rank\({}_{1}\) and rank\({}_{1}^{+}\) formulations based on whether the variables \(\boldsymbol{w}\) are kept as binary or \(\boldsymbol{w}\) are relaxed to be continuous:
Figure 1: Comparison of different continuous relaxations for \(p=50\) and \(\pi=0.1\) as \(N\) varies.
* In _separable reformulation_, we consider (separable).
* In _rank\({}_{1}\) reformulation_, we consider (rank\({}_{1}\)).
* In _rank\({}_{1,r}\) reformulation_, we replace \(\Delta_{b}\) in (rank\({}_{1}\)) with relax\({}_{\mathbf{w}}(\Delta_{b})\).
* In _rank\({}_{1,v}\) reformulation_, we replace \(\Delta_{b}\) in (rank\({}_{1}\)) with \(\Delta_{b}\cap\Delta_{v}\).
* In _rank\({}_{1,r,v}\) reformulation_, we replace \(\Delta_{b}\) in (rank\({}_{1}\)) with relax\({}_{\mathbf{w}}(\Delta_{b}\cap\Delta_{v})\).
* In _rank\({}_{1,w}\) reformulation_, we consider (rank\({}_{1,w}\)).
* In _rank\({}_{1}^{+}\) reformulation_, we consider (rank\({}_{1}^{+}\)).
* In _rank\({}_{1,r}^{+}\) reformulation_, we replace \(\Omega_{b}\) in (rank\({}_{1}^{+}\)) with relax\({}_{\mathbf{w}}(\Omega_{b})\).
* In _rank\({}_{1,v}^{+}\) reformulation_, we replace \(\Omega_{b}\) in (rank\({}_{1}^{+}\)) with \(\Omega_{b}\cap\Omega_{v}\).
* In _rank\({}_{1,r,v}^{+}\) reformulation_, we replace \(\Omega_{b}\) in (rank\({}_{1}^{+}\)) with relax\({}_{\mathbf{w}}(\Omega_{b}\cap\Omega_{v})\).
Figure 2 reports the true optimality gap computed using the best known heuristic solution (in our experiments this was usually obtained by a variant of (rank\({}_{1}\)) formulation) as discussed earlier, the optimality gap reported by the solver, the number of B&B nodes, and the solution time.
We start by analyzing the true optimality gap and solver gap in Figure 2. As also seen in Figure 1, the formulations based on (rank\({}_{1}^{+}\)) consistently outperform those based on (rank\({}_{1}\)) in terms of the true optimality gap. This is primarily because the relaxations based on (rank\({}_{1}^{+}\)) can produce high-quality lower bounds, even though these formulations take longer to solve and thus result in significantly fewer nodes explored in B&B. Among the rank\({}_{1}\) and rank\({}_{1}^{+}\) variants, the ones where the variables \(\mathbf{w}\) are kept as binary perform better in terms of the optimality gaps than the ones where \(\mathbf{w}\) are relaxed to be continuous, even though having \(\mathbf{w}\) binary results in fewer B&B nodes. This is because when \(\mathbf{w}\) are binary, the solver is able to leverage the structure of the sets \(\Delta_{b}\) and \(\Omega_{b}\) to generate further cuts, which results in higher-quality relaxations. Among the rank\({}_{1}\) variants, the performance of rank\({}_{1,v}\) seems to be the best and rank\({}_{1,w}\) seems to be the worst. As the continuous relaxation of rank\({}_{1,w}\) takes longer to solve compared to those based on (rank\({}_{1}\)), its B&B can explore only a smaller number of nodes and, therefore, results in a worse optimality gap.
When we examine the gaps reported by the solver, we still observe the same phenomena, but this time the associated gaps reported by the solver are considerably larger than the true optimality gaps. This is because, for this class of problems, the heuristic methods utilized by the solver are not very advanced, and good-quality feasible solutions of a formulation are often found at integral nodes in the B&B tree, so essentially by chance. Therefore, the B&B procedure for the formulations that admit quick-to-solve node relaxations often results in high-quality feasible solutions. Consequently, in the case of expensive lifted formulations such as (rank\({}_{1}^{+}\)), while the actual optimality gaps are very close to zero, the solver is unable to report this gap due to the inferior quality of the feasible solution found in the associated B&B tree.
In the second experiment we set \(\lambda=10^{-2},\mu=10^{-4},N=100,\pi=0.1\) and \(p\in\{10,20,30,40,50\}\), which translates to \(d\in\{65,230,495,860,1325\}\). Figure B.1 and Figure B.2 in Appendix B compare the quality of different continuous relaxations and also report the performance of the B&B algorithm,
Figure 2: The B&B performance for \(p=50\) and \(\pi=0.1\) as \(N\) varies.
respectively. The observations from Figure B.1 are very similar to the ones in Figure 1; thus we omit this discussion for brevity. Despite the similarity between the observations in Figure B.2 and Figure 2, it is worth noting that when \(p=10\), the B&B performance is slightly different. In particular, when \(p=10\), all methods except \(\text{rank}^{+}_{1,r}\) and \(\text{rank}^{+}_{1,r,v}\) can solve the optimization problem in less than 20 seconds. This implies that the integer programs may be relatively simple to solve when the dimension is small. As a result, stronger relaxations may not be needed when dealing with small scale instances.
In the last experiment we set \(\lambda=10^{-2},\mu=10^{-4},p=50,d=1325,N=100\) and \(\pi\in\{0.1,0.3,0.5,0.7,0.9\}\); see Figure B.3 and Figure B.4 in Appendix B for the quality of different continuous relaxations and the B&B performance. In all of these instances the time limit was reached in B&B, so the solution time is not reported in Figure B.4. As the value of \(\pi\) increases, we notice that the optimality gap of the \(\text{rank}_{1}\) relaxation gets closer to that of the separable relaxation. This is expected as the binary variable \(w_{j}\) models whether \(\boldsymbol{a}_{j}^{\top}\boldsymbol{x}=0\) or not. When \(\pi\) is large, the probability of such an event is low. As a result, \(w_{j}\) is assigned a value of 1 with high probability, which makes the \(\text{rank}_{1}\) relaxations much less effective. As a final observation, we note that the value of \(\pi\) seems not to affect the quality of the \(\text{rank}_{1}^{+}\) relaxations; in particular, these relaxations continue to be of high quality even for high values of \(\pi\).
###### Acknowledgements.
This research is supported by Early Postdoc Mobility Fellowship SNSF grant P2ELP2_195149 and AFOSR grant FA9550-22-1-0365.
|
2309.05634 | Kernel Interpolation of Incident Sound Field in Region Including
Scattering Objects | A method for estimating the incident sound field inside a region containing
scattering objects is proposed. The sound field estimation method has various
applications, such as spatial audio capturing and spatial active noise control;
however, most existing methods do not take into account the presence of
scatterers within the target estimation region. Although several techniques
exist that employ knowledge or measurements of the properties of the scattering
objects, it is usually difficult to obtain them precisely in advance, and their
properties may change during the estimation process. Our proposed method is
based on the kernel ridge regression of the incident field, with a separation
from the scattering field represented by a spherical wave function expansion,
thus eliminating the need for prior modeling or measurements of the scatterers.
Moreover, we introduce a weighting matrix to induce smoothness of the
scattering field in the angular direction, which alleviates the effect of the
truncation order of the expansion coefficients on the estimation accuracy.
Experimental results indicate that the proposed method achieves a higher level
of estimation accuracy than the kernel ridge regression without separation. | Shoichi Koyama, Masaki Nakada, Juliano G. C. Ribeiro, Hiroshi Saruwatari | 2023-09-11T17:26:00Z | http://arxiv.org/abs/2309.05634v1 | # Kernel Interpolation of Incident Sound Field in Region Including Scattering Objects
###### Abstract
A method for estimating the incident sound field inside a region containing scattering objects is proposed. The sound field estimation method has various applications, such as spatial audio capturing and spatial active noise control; however, most existing methods do not take into account the presence of scatterers within the target estimation region. Although several techniques exist that employ knowledge or measurements of the properties of the scattering objects, it is usually difficult to obtain them precisely in advance, and their properties may change during the estimation process. Our proposed method is based on the kernel ridge regression of the incident field, with a separation from the scattering field represented by a spherical wave function expansion, thus eliminating the need for prior modeling or measurements of the scatterers. Moreover, we introduce a weighting matrix to induce smoothness of the scattering field in the angular direction, which alleviates the effect of the truncation order of the expansion coefficients on the estimation accuracy. Experimental results indicate that the proposed method achieves a higher level of estimation accuracy than the kernel ridge regression without separation.
Shoichi Koyama\({}^{1}\), Masaki Nakada\({}^{2}\), Juliano G. C. Ribeiro\({}^{2}\), and Hiroshi Saruwatari\({}^{2}\)\({}^{1}\) National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan
\({}^{2}\) The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
[email protected]

Index Terms: sound field estimation, kernel ridge regression, acoustic scattering, spherical wave function expansion
## 1 Introduction
Techniques for estimating and interpolating an acoustic field from multiple microphone observations are essential in the field of acoustic signal processing. By estimating a continuous pressure distribution over a target region or expansion coefficients of the wave functions around a target position from the observed signals, various applications become feasible, e.g., the visualization of acoustic fields [1, 2], interpolation of room impulse responses [3, 4], identification of sound sources [5, 6], capturing sound fields for spatial audio [7, 8, 9], spatial active noise control (ANC) [10, 11, 12], among others.
Current sound field estimation methods are typically based on the expansion of the captured sound field into the wave functions, namely eigenfunctions of the Helmholtz equation, such as plane wave and spherical wave functions [7, 13, 14]. However, these methods basically depend on the empirical setting of the truncation order and expansion center because the sound field is decomposed into a finite number of wave functions around a specific expansion center. Sparsity-based extensions are also investigated [15, 16], but the estimation is basically performed by an iterative process because the inference operator becomes nonlinear.
The infinite-dimensional analysis of a sound field is proposed in [17] and extended to incorporate prior information on source directions in [18]. This method is free from the empirical setting of truncation order and expansion center. When estimating a pressure field with omnidirectional microphones, the method based on the infinite-dimensional analysis corresponds to the kernel ridge regression with the constraint that the solution satisfies the homogeneous Helmholtz equation [19]. Furthermore, the estimation is performed by a linear operation using its closed-form solution. This method has been applied to spatial audio capturing [8, 9] and spatial ANC [12].
The main drawback of the above-described sound field estimation methods is that the presence of scattering objects inside the target region is not taken into consideration. This is because the spherical wave functions and kernel functions used in these methods are derived under the free-field assumption inside the target region, although the presence of scatterers or reverberation outside the target region is allowed. Thus, the estimation accuracy can significantly deteriorate when the target region contains acoustically non-transparent objects. However, in practical applications, it is sometimes necessary to estimate the incident sound field in the region including scattering objects. For example, in the spatial ANC, the pressure distribution of primary noise sources must be estimated inside the target control region from the microphone measurements around the surface of the region. One or more ANC users will present and move within the target region, and they can be scatterers. Several techniques to estimate the incident sound field in the region including scattering objects have been proposed [20, 21]; however, these methods require prior knowledge or measurements of the properties of the scatterers. Obviously, it will not be always possible to obtain them precisely in advance.
In this paper, we propose a method to estimate the incident sound field in the region including scattering objects without precise knowledge or measurements of their properties. By jointly estimating the coefficients of the incident field represented by a weighted sum of the kernel functions and the scattering field represented by the finite-dimensional spherical wave function expansion, the incident field is estimated based on the kernel ridge regression with a separation from the scattering field. The proposed estimation can still be performed by a linear operation. This means that the estimation can be implemented by a convolution of a finite impulse response (FIR) filter, which is suitable for many practical applications. We also introduce a weighting factor for the expansion coefficients of the scattering field derived on the basis of its smoothness to alleviate the effect of the truncation of expansion order. We conducted numerical simulations in a three-dimensional (3D) space to evaluate our proposed method.
## 2 Problem Statement and Prior Work
### Problem statement
Suppose that a region of interest \(\Omega\subset\mathbb{R}^{3}\) is a simply connected open subset of \(\mathbb{R}^{3}\). The sound pressure at the position \(\mathbf{r}\in\Omega\) and angular frequency \(\omega\in\mathbb{R}\) is denoted as \(u(\mathbf{r},\omega)\). As shown in Fig. 1, one or more scattering objects of arbitrary shape exist inside a spherical region \(\Omega_{\mathrm{sct}}\subset\Omega\). \(M\) omnidirectional microphones are arbitrarily placed over \(\Omega\backslash\Omega_{\mathrm{sct}}\), whose positions are denoted as \(\{\mathbf{r}_{m}\}_{m=1}^{M}\). We denote the \(m\)th microphone measurement as \(s_{m}\), which is equivalent to \(u(\mathbf{r}_{m},\omega)\) plus sensor noise. The pressure field \(u\) is represented by a sum of incident and scattering fields, \(u_{\mathrm{inc}}\) and \(u_{\mathrm{sct}}\), as

\[u(\mathbf{r},\omega)=u_{\mathrm{inc}}(\mathbf{r},\omega)+u_{\mathrm{sct}}(\mathbf{r},\omega). \tag{1}\]
Our objective is to estimate the incident field \(u_{\mathrm{inc}}\) from the microphone measurements \(\{s_{m}\}_{m=1}^{M}\). Hereafter, \(\omega\) is omitted for notational simplicity.
### Current sound field estimation methods
When no scattering object exists inside \(\Omega\), the incident field \(u_{\mathrm{inc}}\), which is equivalent to \(u\), can be estimated based on spherical wave function expansion:
\[u_{\mathrm{inc}}(\mathbf{r})=\sum_{\nu,\mu}\hat{u}_{\mathrm{inc},\nu,\mu}(\mathbf{r}_ {\mathrm{o}})\varphi_{\mathrm{inc},\nu,\mu}(\mathbf{r}-\mathbf{r}_{\mathrm{o}}), \tag{2}\]
where \(\hat{u}_{\mathrm{inc},\nu,\mu}\) is the expansion coefficients for order \(\nu\) and degree \(\mu\), \(\mathbf{r}_{\mathrm{o}}\in\Omega\) is the expansion center, and \(\varphi_{\mathrm{inc},\nu,\mu}\) is the spherical wave function for interior field defined as
\[\varphi_{\mathrm{inc},\nu,\mu}(\mathbf{r}):=\sqrt{4\pi}j_{\nu}(k\|\mathbf{r}\|)Y_{\nu,\mu}\left(\mathbf{r}/\|\mathbf{r}\|\right) \tag{3}\]
with the \(\nu\)th-order spherical Bessel function \(j_{\nu}(\cdot)\), wave number \(k\) (\(\coloneqq\omega/c\) with sound velocity \(c\)), and spherical harmonic function \(Y_{\nu,\mu}(\cdot)\). The factor \(\sqrt{4\pi}\) is multiplied so that \(\hat{u}_{\mathrm{inc},0,0}(\mathbf{r}_{\mathrm{o}})\) corresponds to \(u(\mathbf{r}_{\mathrm{o}})\). Here, the summation for \(\nu\) and \(\mu\) represents \(\sum_{\nu,\mu}:=\sum_{\nu=0}^{\infty}\sum_{\mu=-\nu}^{\nu}\). The expansion coefficients up to a predefined truncation order \(N\) can be estimated from the microphone measurements by solving a linear equation constructed from \(\{s_{m}\}_{m}\), \(\{\varphi_{\mathrm{inc},\nu,\mu}(\mathbf{r}_{m}-\mathbf{r}_{\mathrm{o}})\}_{m,\nu,\mu}\), and \(\{\hat{u}_{\mathrm{inc},\nu,\mu}\}_{\nu,\mu}\) [7, 13]. Then, the pressure field \(u_{\mathrm{inc}}\) inside \(\Omega\) can be reconstructed by using the estimated expansion coefficients based on (2). Note that the empirical setting of the truncation order \(N\) and expansion center \(\mathbf{r}_{\mathrm{o}}\) is necessary for this estimation procedure.
To avoid the empirical setting of truncation order and expansion center, the kernel ridge regression for a sound field can be applied [17, 18], which is a special case of the analysis based on infinite-dimensional spherical wave function expansion [19]. The kernel function is formulated so that the function space to seek a solution is constrained to the solution space of the homogeneous Helmholtz equation. Based on the representer theorem [22], the pressure distribution \(u\) can be represented as a weighted sum of the reproducing kernel functions as
\[u(\mathbf{r})=\sum_{m=1}^{M}\alpha_{m}\kappa(\mathbf{r},\mathbf{r}_{m}), \tag{4}\]
where \(\{\alpha_{m}\}_{m=1}^{M}\) is the weights, and \(\kappa\) is the kernel function. The kernel function is defined as
\[\kappa(\mathbf{r}_{1},\mathbf{r}_{2})=j_{0}\left(\left[\left(\mathrm{j}\rho\mathbf{\eta}_{ \mathrm{pr}}-k(\mathbf{r}_{1}-\mathbf{r}_{2})\right)^{\mathsf{T}}\left(\mathrm{j}\rho \mathbf{\eta}_{\mathrm{pr}}-k(\mathbf{r}_{1}-\mathbf{r}_{2})\right)\right]^{\frac{1}{2}} \right), \tag{5}\]
where \(\mathbf{\eta}_{\mathrm{pr}}\in\mathbb{S}^{2}\) is the prior information on the source direction, and \(\rho\) is the weighting parameter for the prior information. When no prior information on the source direction is available, \(\rho\) is set to \(0\), and the kernel function is simplified as
\[\kappa(\mathbf{r}_{1},\mathbf{r}_{2})=j_{0}\left(k\|\mathbf{r}_{1}-\mathbf{r}_{2}\|\right). \tag{6}\]
By defining \(\mathbf{\alpha}=[\alpha_{1},\ldots,\alpha_{M}]^{\mathsf{T}}\), \(\mathbf{s}=[s_{1},\ldots,s_{M}]^{\mathsf{T}}\), and
\[\mathbf{K}=\begin{bmatrix}\kappa(\mathbf{r}_{1},\mathbf{r}_{1})&\cdots&\kappa(\mathbf{r}_{1}, \mathbf{r}_{M})\\ \vdots&\ddots&\vdots\\ \kappa(\mathbf{r}_{M},\mathbf{r}_{1})&\cdots&\kappa(\mathbf{r}_{M},\mathbf{r}_{M})\end{bmatrix}, \tag{7}\]
\(\mathbf{\alpha}\) is obtained in a closed form as
\[\mathbf{\alpha}=(\mathbf{K}+\lambda\mathbf{I})^{-1}\mathbf{s}, \tag{8}\]
where \(\lambda\) is the regularization parameter and \(\mathbf{I}\) is the identity matrix. By using the kernel function defined in (5), (6), or their weighted sum [23], the estimate obtained by using \(\mathbf{\alpha}\) is constrained to the solution of the homogeneous Helmholtz equation. The kernel ridge regression (8) is equivalent to Gaussian process regression because the kernel function has no hyperparameters to learn [24].
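As a concrete illustration (not part of the original paper), the following NumPy sketch implements the closed-form estimator (8) with the prior-free kernel (6); the microphone layout, frequency, and regularization value below are placeholder assumptions.

```python
import numpy as np

def j0(x):
    """Zeroth-order spherical Bessel function j0(x) = sin(x)/x (np.sinc handles x = 0)."""
    return np.sinc(np.asarray(x) / np.pi)

def krr_sound_field(mic_pos, s, k, lam):
    """Kernel ridge regression of Eqs. (4), (6), (8): returns a callable estimate u_hat(r)."""
    diff = mic_pos[:, None, :] - mic_pos[None, :, :]
    K = j0(k * np.linalg.norm(diff, axis=-1))                # Gram matrix of Eq. (7)
    alpha = np.linalg.solve(K + lam * np.eye(len(s)), s)     # Eq. (8)
    def u_hat(r):
        kappa = j0(k * np.linalg.norm(mic_pos - r, axis=-1))
        return kappa @ alpha                                 # Eq. (4)
    return u_hat

# toy usage with placeholder values
rng = np.random.default_rng(0)
mic_pos = rng.uniform(-0.5, 0.5, size=(50, 3))               # stand-in microphone positions
k = 2 * np.pi * 300 / 343.0                                  # 300 Hz, c = 343 m/s
s = rng.normal(size=50) + 1j * rng.normal(size=50)           # stand-in measurements
u = krr_sound_field(mic_pos, s, k, lam=1e-3)
print(u(np.zeros(3)))
```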
These methods are applicable only when the target region does not contain any scattering objects because the homogeneous Helmholtz equation is assumed to be satisfied inside \(\Omega\). One of the simple techniques to alleviate the scattering effects is to employ directional microphones whose minimum-gain direction is directed to the scattering objects [17]; however, it is difficult to develop directional microphones having ideal nulls, especially at low frequencies. Several techniques have been proposed to cancel the scattering effects by measuring or modeling them in advance [20, 21]. However, it is not always possible to precisely measure or model the scattering effects in practical situations.
## 3 Proposed Method
We consider extracting and estimating the incident field \(u_{\rm inc}\) from the measurements \(\mathbf{s}\) in \(\Omega\) containing unknown scattering objects, without precise measurement or modeling of their properties in advance. Our approach is based on the representation of the scattering field \(u_{\rm sct}\) in \(\Omega\backslash\Omega_{\rm sct}\) by using spherical wave function expansion.
Figure 1: Estimation of the incident sound field in the region including scattering objects.
Then, the weights of the kernel functions and the expansion coefficients of the spherical wave functions are jointly estimated. Thus, the incident field \(u_{\rm inc}\) can still be estimated as a closed-form solution.
### Model
The sound field \(u\) is represented as the sum of \(u_{\rm inc}\) and \(u_{\rm sct}\) as in (1), and \(u_{\rm inc}\) is represented as a weighted sum of the kernel functions (4). We represent the scattering field \(u_{\rm sct}\) by a finite-dimensional spherical wave function expansion for the exterior field as
\[u(\mathbf{r}) =\sum_{m=1}^{M}\alpha_{m}\kappa(\mathbf{r},\mathbf{r}_{m})+\sum_{\nu,\mu}\hat{u}_{\rm sct,\nu,\mu}(\mathbf{r}_{\rm o})\varphi_{\rm sct,\nu,\mu}(\mathbf{r}-\mathbf{r}_{\rm o})\] \[\approx\sum_{m=1}^{M}\alpha_{m}\kappa(\mathbf{r},\mathbf{r}_{m})+\sum_{\nu,\mu}^{N}\hat{u}_{\rm sct,\nu,\mu}\varphi_{\rm sct,\nu,\mu}(\mathbf{r}), \tag{9}\]
where \(\sum_{\nu,\mu}^{N}:=\sum_{\nu=0}^{N}\sum_{\mu=-\nu}^{\nu}\), \(\hat{u}_{\rm sct,\nu,\mu}\) are the expansion coefficients, and \(\varphi_{\rm sct,\nu,\mu}\) is the spherical wave function for the exterior field defined as
\[\varphi_{\rm{act},\nu,\mu}(\mathbf{r}):=\sqrt{4\pi}h_{\nu}(k\|\mathbf{r}\|)Y_{\nu,\mu} (\mathbf{r}/\|\mathbf{r}\|) \tag{10}\]
with the \(\nu\)th-order spherical Hankel function of the second kind \(h_{\nu}\). Thus, the microphone measurements \(\mathbf{s}\) can be described as
\[\mathbf{s}=\mathbf{K}\mathbf{\alpha}+\mathbf{\Phi}_{\rm sct}\hat{\mathbf{u}}_{\rm sct}+\mathbf{\varepsilon}, \tag{11}\]
where \(\hat{\mathbf{u}}_{\rm sct}\in\mathbb{C}^{(N+1)^{2}}\) is the vector of \(\{\hat{u}_{\rm{sct},\nu,\mu}\}_{\nu,\mu}\), \(\mathbf{\Phi}_{\rm sct}\in\mathbb{C}^{M\times(N+1)^{2}}\) is the matrix of \(\{\varphi_{\rm{sct},\nu,\mu}(\mathbf{r}_{m})\}_{m,\nu,\mu}\), and \(\mathbf{\varepsilon}\in\mathbb{C}^{M}\) is the Gaussian sensor noise.
### Optimization problem and its solution
To estimate \(\mathbf{\alpha}\) from \(\mathbf{s}\), eliminating \(u_{\rm sct}\), we formulate the following joint optimization problem of \(\mathbf{\alpha}\) and \(\hat{\mathbf{u}}_{\rm sct}\):
\[\underset{\mathbf{\alpha},\hat{u}_{\rm sct}}{\text{minimize}}\,\mathcal{J}(\mathbf{ \alpha},\hat{\mathbf{u}}_{\rm sct})\] \[\quad\quad:=\|\mathbf{s}-\mathbf{K}\mathbf{\alpha}-\mathbf{\Phi}_{\rm sct}\hat{ \mathbf{u}}_{\rm sct}\|^{2}+\lambda_{1}\mathbf{\alpha}^{\sf H}\mathbf{K}\mathbf{\alpha}+ \lambda_{2}\hat{\mathbf{u}}_{\rm sct}^{\sf H}\mathbf{W}\hat{\mathbf{u}}_{\rm sct}, \tag{12}\]
where \(\lambda_{1}\) and \(\lambda_{2}\) are the regularization parameters, \(\mathbf{W}\in\mathbb{C}^{(N+1)^{2}\times(N+1)^{2}}\) is the weighting matrix for the expansion coefficients \(\hat{\mathbf{u}}_{\rm sct}\). A specific definition of \(\mathbf{W}\) is given in Sect. 3.3.
The optimization problem (12) can be solved in a closed form. By solving \(\partial\mathcal{J}/\partial\hat{\mathbf{u}}_{\rm sct}^{*}=\mathbf{0}\) and \(\partial\mathcal{J}/\partial\mathbf{\alpha}^{*}=\mathbf{0}\), one can obtain
\[\hat{\hat{\mathbf{u}}}_{\rm sct} =\left(\mathbf{\Phi}_{\rm sct}^{\sf H}\mathbf{\Phi}_{\rm sct}+\lambda_{2} \mathbf{W}\right)^{-1}\mathbf{\Phi}_{\rm sct}^{\sf H}(\mathbf{s}-\mathbf{K}\hat{\mathbf{\alpha}}) \tag{13}\] \[\hat{\mathbf{\alpha}} =(\mathbf{K}+\lambda_{1}\mathbf{I})^{-1}\left(\mathbf{s}-\mathbf{\Phi}_{\rm sct} \hat{\hat{\mathbf{u}}}_{\rm sct}\right). \tag{14}\]
By solving the above simultaneous equation, the estimates \(\hat{\hat{\mathbf{u}}}_{\rm sct}\) and \(\hat{\mathbf{\alpha}}\) are obtained as
\[\hat{\hat{\mathbf{u}}}_{\rm sct} =\left[\mathbf{\Phi}_{\rm sct}^{\sf H}(\mathbf{K}+\lambda_{1}\mathbf{I})^{-1} \mathbf{\Phi}_{\rm sct}+\frac{\lambda_{2}}{\lambda_{1}}\mathbf{W}\right]^{-1}\] \[\cdot\mathbf{\Phi}_{\rm sct}^{\sf H}(\mathbf{K}+\lambda_{1}\mathbf{I})^{-1} \mathbf{s}. \tag{15}\]
and
\[\hat{\mathbf{\alpha}}=\left(\mathbf{K}+\lambda_{1}\mathbf{I}+\frac{\lambda_{1}}{\lambda_{ 2}}\mathbf{\Phi}_{\rm sct}\mathbf{W}^{-1}\mathbf{\Phi}_{\rm sct}^{\sf H}\right)^{-1}\mathbf{s}, \tag{16}\]
respectively. Thus, the incident field \(u_{\rm inc}\) is obtained by using \(\hat{\mathbf{\alpha}}\) and (4).
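A minimal NumPy sketch of the closed-form joint estimator (added here for illustration, not taken from the paper): it assumes the Gram matrix \(\mathbf{K}\), the dictionary \(\mathbf{\Phi}_{\rm sct}\) evaluated at the microphones, and the weighting matrix \(\mathbf{W}\) are already available, and all names are placeholders.

```python
import numpy as np

def joint_estimate(K, Phi, W, s, lam1, lam2):
    """Closed-form solutions of Eqs. (15) and (16).

    K   : (M, M) kernel Gram matrix for the incident field
    Phi : (M, (N+1)^2) exterior spherical wave functions at the microphones
    W   : ((N+1)^2, (N+1)^2) weighting matrix (identity or Eq. (18))
    s   : (M,) microphone measurements
    """
    M = K.shape[0]
    A = K + lam1 * np.eye(M)
    # Eq. (16): weights of the incident-field kernel expansion
    alpha = np.linalg.solve(A + (lam1 / lam2) * Phi @ np.linalg.solve(W, Phi.conj().T), s)
    # Eq. (15): expansion coefficients of the scattering field
    AinvPhi = np.linalg.solve(A, Phi)
    AinvS = np.linalg.solve(A, s)
    u_sct = np.linalg.solve(Phi.conj().T @ AinvPhi + (lam2 / lam1) * W,
                            Phi.conj().T @ AinvS)
    return alpha, u_sct
```

The incident pressure at any point in the target region then follows from Eq. (4) with the returned weights, exactly as in the kernel-only sketch above.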
### Weighting matrix for inducing smoothness
It is possible to set \(\mathbf{W}\) in (12) as \(\mathbf{I}\); however, the estimation accuracy can be highly dependent on the truncation order \(N\) although the optimal \(N\) depends on the geometry of the scatterers and their reflective properties. To alleviate the dependence on \(N\), we define \(\mathbf{W}\) to induce smoothness of the scattering field \(u_{\rm sct}\) in the angular direction. Such weighting factors on the expansion coefficients are also used in the context of the interpolation of head-related transfer functions [25]. We here define \(\mathbf{W}\) so that the third term of (12) corresponds to the following form:
\[\sum_{m=1}^{M}\|\nabla u_{\rm sct}(\mathbf{r}_{m})\|^{2}=\hat{\mathbf{u}}_{\rm sct}^{\sf H}\left(\frac{\partial\mathbf{\Phi}_{\rm sct}^{\sf H}}{\partial\theta}\frac{\partial\mathbf{\Phi}_{\rm sct}}{\partial\theta}+\frac{\partial\mathbf{\Phi}_{\rm sct}^{\sf H}}{\partial\phi}\frac{\partial\mathbf{\Phi}_{\rm sct}}{\partial\phi}\right)\hat{\mathbf{u}}_{\rm sct}, \tag{17}\]
where \(\theta\) and \(\phi\) are the zenith and azimuth angles, respectively, in the spherical coordinates. Therefore, \(\mathbf{W}\) can be written as
\[\mathbf{W}=\frac{\partial\mathbf{\Phi}_{\rm sct}^{\sf H}}{\partial\theta}\frac{\partial \mathbf{\Phi}_{\rm sct}}{\partial\theta}+\frac{\partial\mathbf{\Phi}_{\rm sct}^{\sf H}}{ \partial\phi}\frac{\partial\mathbf{\Phi}_{\rm sct}}{\partial\phi}. \tag{18}\]
Each element of \(\mathbf{W}\) is analytically obtained by using
\[\frac{\partial\varphi_{\rm{sct},\nu\mu}}{\partial\theta}= \tag{19}\] \[\begin{cases}\sqrt{4\pi}h_{\nu}(kr)\mu\cot\theta Y_{\nu,\mu}(\theta, \phi),&\text{if $\nu=\mu$}\\ \sqrt{4\pi}h_{\nu}(kr)\Big{[}\mu\cot\theta Y_{\nu,\mu}(\theta,\phi)\\ +\sqrt{(\nu-\mu)(\nu+\mu+1)}{\rm e}^{-{\rm j}\phi}Y_{\nu,\mu+1}(\theta,\phi) \Big{]},&\text{otherwise}\end{cases} \tag{20}\]
and
\[\frac{\partial\varphi_{\rm{sct},\nu\mu}}{\partial\phi}=\sqrt{4\pi}{\rm j}\mu h_{ \nu}(kr)Y_{\nu,\mu}(\theta,\phi). \tag{21}\]
By using this weighting matrix \(\mathbf{W}\), high-order coefficients are suppressed to small values. Then, the estimation accuracy is not largely affected by the setting of the truncation order \(N\).
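The weighting matrix (18) can also be assembled numerically. The sketch below (added for illustration; it is not the paper's implementation) builds the exterior dictionary \(\mathbf{\Phi}_{\rm sct}\) of Eq. (10) with SciPy and approximates the angular derivatives by central differences instead of the closed forms (19)-(21); the step size and coordinate conventions are assumptions.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn, sph_harm

def phi_sct(r_cart, N, k):
    """Exterior spherical wave functions (10) up to order N at one point,
    returned as a length-(N+1)^2 complex vector ordered by (nu, mu)."""
    x, y, z = r_cart
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arccos(z / r)      # zenith angle
    phi = np.arctan2(y, x)        # azimuth angle
    out = []
    for nu in range(N + 1):
        h2 = spherical_jn(nu, k * r) - 1j * spherical_yn(nu, k * r)   # h_nu^(2)
        for mu in range(-nu, nu + 1):
            # scipy's sph_harm takes (mu, nu, azimuth, zenith)
            out.append(np.sqrt(4 * np.pi) * h2 * sph_harm(mu, nu, phi, theta))
    return np.array(out)

def weighting_matrix(mic_pos, N, k, eps=1e-5):
    """Eq. (18) assembled from central-difference angular derivatives of (10)."""
    rows_th, rows_ph = [], []
    for r in mic_pos:
        rad = np.linalg.norm(r)
        th = np.arccos(r[2] / rad)
        ph = np.arctan2(r[1], r[0])
        def cart(theta, phi):
            return rad * np.array([np.sin(theta) * np.cos(phi),
                                   np.sin(theta) * np.sin(phi),
                                   np.cos(theta)])
        d_th = (phi_sct(cart(th + eps, ph), N, k) - phi_sct(cart(th - eps, ph), N, k)) / (2 * eps)
        d_ph = (phi_sct(cart(th, ph + eps), N, k) - phi_sct(cart(th, ph - eps), N, k)) / (2 * eps)
        rows_th.append(d_th)
        rows_ph.append(d_ph)
    Dth, Dph = np.array(rows_th), np.array(rows_ph)
    return Dth.conj().T @ Dth + Dph.conj().T @ Dph
```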
## 4 Experiments
We conducted numerical experiments in a 3D free field to evaluate the proposed method. For comparison, the method based on kernel ridge regression described in Sect. 2.2 is used. The proposed method and the method based on kernel ridge regression are denoted as Proposed and KRR, respectively.
As shown in Fig. 2, the target region \(\Omega\) was a sphere of radius \(R=0.5\)\(\mathrm{m}\). An acoustically rigid spherical object of radius \(0.3\)\(\mathrm{m}\) was located inside \(\Omega\). 25 omnidirectional microphones were distributed on two spherical surfaces of radius \(0.5\)\(\mathrm{m}\) and \(0.55\)\(\mathrm{m}\) by using spherical \(t\)-design [26], which are indicated by red crosses in Fig. 2; therefore, the total number of microphones was 50. A point source (blue star) was at \((2.0,2.0,0.0)\)\(\mathrm{m}\). Gaussian noise
was added to the observed signals so that the signal-to-noise ratio becomes \(40\ \mathrm{dB}\).
In Proposed, two truncation orders, \(N=\lceil kR\rceil\) and \(2\lceil kR\rceil\) with the radius \(R\) of \(\Omega\), were investigated with or without the weighting matrix \(\boldsymbol{W}\), which is indicated as \(\boldsymbol{W}\) and \(\boldsymbol{I}\), respectively. The kernel function defined in (6) is used for both Proposed and KRR. The regularization parameters \(\lambda\), \(\lambda_{1}\), and \(\lambda_{2}\) were chosen from \(10^{n}\) with \(n\in\mathbb{Z}([-15,9])\) based on the estimation accuracy. As an evaluation measure of the estimation accuracy, we define the following normalized mean square error (NMSE):
\[\mathrm{NMSE}:=\frac{\int_{\Omega}|u_{\mathrm{inc}}(\boldsymbol{r})-\hat{u}_{ \mathrm{inc}}(\boldsymbol{r})|^{2}\mathrm{d}\boldsymbol{r}}{\int_{\Omega}|u_ {\mathrm{inc}}(\boldsymbol{r})|^{2}\mathrm{d}\boldsymbol{r}}, \tag{22}\]
where \(\hat{u}_{\mathrm{inc}}\) denotes the estimated incident pressure distribution, and the integral is approximated as a summation at the evaluation points regularly distributed over \(\Omega\) at intervals of \(0.05\ \mathrm{m}\).
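A direct transcription of this metric (illustrative only), evaluated on whatever grid of evaluation points is used:

```python
import numpy as np

def nmse_db(u_true, u_est):
    """NMSE of Eq. (22) over the evaluation grid, reported in dB."""
    num = np.sum(np.abs(u_true - u_est) ** 2)
    den = np.sum(np.abs(u_true) ** 2)
    return 10.0 * np.log10(num / den)
```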
Fig. 3 shows the NMSE with respect to the frequency. The estimation accuracy of KRR significantly deteriorated owing to the effect of the scattering object because this method relies on the assumption that the target region is free space. In Proposed without the weighting matrix (Proposed (\(\boldsymbol{I}\), \(\lceil kR\rceil\)) and Proposed (\(\boldsymbol{I}\), \(2\lceil kR\rceil\))), the NMSE was improved, but its performance was dependent on the truncation order. The difference of NMSE between Proposed (\(\boldsymbol{I}\), \(\lceil kR\rceil\)) and Proposed (\(\boldsymbol{I}\), \(2\lceil kR\rceil\)) was significantly large between \(200\) and \(600\ \mathrm{Hz}\). The lowest NMSE was achieved by Proposed (\(\boldsymbol{W}\), \(\lceil kR\rceil\)). Even when the truncation order was \(2\lceil kR\rceil\), the deterioration of NMSE remained small.
As an example, the estimated pressure and normalized error distributions of KRR and Proposed (\(\boldsymbol{W}\), \(\lceil kR\rceil\)) on the \(x\)-\(y\) plane at \(z=0\) at the frequency of \(300\ \mathrm{Hz}\) are shown in Figs. 4 and 5, respectively. High estimation accuracy was achieved over the target region \(\Omega\) in Proposed, compared with KRR.
## 5 Conclusion
We proposed a method to estimate the incident sound field in a region including scattering objects without precise knowledge or measurements of their properties. Our proposed method is based on the kernel ridge regression of the incident field with a separation from the scattering field represented by the finite-dimensional spherical wave function expansion. The optimization problem can be solved in a closed form; thus, the estimation can be performed by a convolution with an FIR filter. The weighting matrix for the expansion coefficients to induce smoothness of the scattering field in the angular direction alleviates the dependence of the estimation accuracy on the truncation order for the representation of the scattering field. In the numerical experiments, the proposed method achieved high estimation accuracy compared with the method based on kernel ridge regression without the separation. Future work will involve developing a method to determine regularization parameters in the estimator by using approximate knowledge of the scattering objects.
## 6 Acknowledgment
This work was supported by JSPS KAKENHI Grant Number 22H03608 and JST FOREST Program Grant Number JP-MJFR216M, Japan.
Figure 4: Estimated pressure distributions at \(300\ \mathrm{Hz}\) on the \(x\)-\(y\) plane at \(z=0\). The dashed line indicates the target region.
Figure 5: Normalized error distributions at \(300\ \mathrm{Hz}\) on the \(x\)-\(y\) plane at \(z=0\). NMSEs of KRR and Proposed (\(\boldsymbol{W}\), \(\lceil kR\rceil\)) were \(-9.4\) and \(-26.4\ \mathrm{dB}\), respectively.
Figure 3: NMSE with respect to frequency.
Figure 2: Experimental setup. The spherical target region including a spherical scattering object was set. The red crosses and blue star indicate microphones and sound source, respectively. |
2309.14739 | Designing superhard magnetic material in clathrate \b{eta}-C3N2 through
atom embeddedness | Designing new compounds with the coexistence of diverse physical properties
is of great significance for broad applications in multifunctional electronic
devices. In this work, based on density functional theory, we predict the
coexistence of mechanical superhardness and the controllable magnetism in the
clathrate material \b{eta}-C3N2 through the implant of the external atom into
the intrinsic cage structure. Taking hydrogen-doping (H@\b{eta}-C3N2) and
fluorine-doping (F@\b{eta}-C3N2) as examples, our calculations indicate these
two doped configurations are stable and discovered that they belong to
antiferromagnetic semiconductor and ferromagnetic semi-metal, respectively.
These intriguing magnetic phase transitions originate from their distinctive
band structure around the Fermi level and can be well understood by the 3D
Hubbard model with half-filling occupation and the Stoner model. Moreover, the
high Vickers hardness of 49.0 GPa for H@\b{eta}-C3N2 and 48.2 GPa for
F@\b{eta}-C3N2 are obtained, suggesting they are clathrate superhard materials
as its host. Therefore, the incorporation of H and F in \b{eta}-C3N2 gives rise
to a new type of superhard antiferromagnetic semiconductor and superhard
ferromagnetic semimetal, respectively, which could have potential applications
in harsh conditions. Our work provides an effective strategy to design a new
class of highly desirable multifunctional materials with excellent mechanical
properties and magnetic properties, which may arouse spintronic applications in
superhard materials in the future. | Liping Sun, Botao Fu, Jing Chang | 2023-09-26T08:07:53Z | http://arxiv.org/abs/2309.14739v1 | Designing superhard magnetic material in clathrate \(\beta\)-C\({}_{3}\)N\({}_{2}\) through atom embeddedness+
###### Abstract
Designing new compounds with the coexistence of diverse physical properties is of great significance for broad applications in multifunctional electronic devices. In this work, based on density functional theory, we predict the coexistence of mechanical superhardness and controllable magnetism in the clathrate material \(\beta\)-C\({}_{3}\)N\({}_{2}\) through the implantation of an external atom into the intrinsic cage structure. Taking hydrogen doping (H@\(\beta\)-C\({}_{3}\)N\({}_{2}\)) and fluorine doping (F@\(\beta\)-C\({}_{3}\)N\({}_{2}\)) as examples, our calculations indicate that these two doped configurations are stable and that they are an antiferromagnetic semiconductor and a ferromagnetic semi-metal, respectively. These intriguing magnetic phase transitions originate from their distinctive band structures around the Fermi level and can be well understood by the 3D Hubbard model with half-filling occupation and the Stoner model. Moreover, high Vickers hardnesses of 49.0 GPa for H@\(\beta\)-C\({}_{3}\)N\({}_{2}\) and 48.2 GPa for F@\(\beta\)-C\({}_{3}\)N\({}_{2}\) are obtained, suggesting they are clathrate superhard materials like their host. Therefore, the incorporation of H and F in \(\beta\)-C\({}_{3}\)N\({}_{2}\) gives rise to a new type of superhard antiferromagnetic semiconductor and superhard ferromagnetic semimetal, respectively, which could have potential applications in harsh conditions. Our work provides an effective strategy to design a new class of highly desirable multifunctional materials with excellent mechanical and magnetic properties, which may enable spintronic applications in superhard materials in the future.
Magnetic phase transitions; Superhard materials; Ferromagnetic semi-metal; First-principles calculations +
Footnote †: _E-mail:_ [email protected].
## 1 Introduction
Clathrates are a large group of compounds consisting of cage-like lattice structures that can trap guest atoms or molecules. Due to the variable combination of possible guest atoms and host polyhedral cages, clathrates exhibit tunable physical properties and have received significant attention in condensed matter physics and materials science.[10] For instance, a DFT-predicted novel clathrate boride, made up of face-sharing B\({}_{26}\) cages encapsulating a single La atom, demonstrates phonon-mediated superconductivity of 18 K at ambient pressure,[2] where the transition temperature can be significantly improved by substituting various guest atoms inside the cage.[3, 4, 5]
Moreover, the incorporation of cerium as a guest atom into the intermetallic clathrate is reported, leading to exceedingly low lattice thermal conductivities and remarkable enhancement of thermopower with respect to a rare-earth-free reference material.[6, 7] Besides, the robust ferroelectricity with high transition temperature is recently observed in a new clathrate with a carbon-boron framework, opening the possibility for a new class of ferroelectric materials with potential across a broad range of applications.[8]
On the other hand, superhard materials, identified by a high Vickers hardness (\(H\)v) of more than 40 GPa,[9] are indispensable to fundamental study in condensed-matter physics and have been practically applied to a wide range of technical fields, such as cutting and polishing tools, wear-resistant coatings, etc. In recent years, several clathrate materials such as carbon allotrope,[10, 11] sodalite-like born-carbon,[12] and boron-nitrogen framework,[13, 14] are reported as potential superhard materials. These clathrate superhard materials are made up of relatively small cages formed by strong covalent bonds from light elements (B, C, and N). Distinct from traditional superhard materials with uniformly insulating nature, new clathrate superhard materials are endowed with tunable and multiple functions due to their inner cages that can accommodate various guest atoms. For example, the synthesis of the superhard iron tetraboride superconductor opens a new class of highly desirable clathrate materials combining advanced mechanical properties and superconductivity,[15, 16] Therefore, searching and designing for new clathrate superhard materials combined with other comprehensive properties such as ferromagnetism, ferroelectricity, thermoelectricity, and superconductivity[17, 18, 19] are of great significance for various applications in science and engineering.
In particular, the combination of magnetism with superior mechanical hardness, to fabricate superhard magnetic materials, has garnered significant research interest,[20, 21, 22] as such materials could have potential applications in harsh conditions, such as micro-electromechanical systems, magnetic actuators, and magnetic sensors in high-speed motors. However, only a few compounds, such as transition metal borides,[23] have been reported to possess both magnetism and high hardness. In this paper, based on the flexible adjustability of clathrate materials, we propose an alternative strategy to realize the coexistence of magnetism and superhardness in doped clathrate compounds. Starting from the previously predicted clathrate superhard material \(\beta\)-C\({}_{3}\)N\({}_{2}\),[24, 25] our calculations indicate that the insertion of hydrogen and fluorine atoms might induce antiferromagnetic semiconductor and ferromagnetic metal states, respectively, while maintaining high Vickers hardnesses of 49 GPa and 48.2 GPa. The underlying mechanisms of the super-high hardness and the different magnetic ground states are revealed through the analysis of bonding characteristics, charge transfer, and mechanical constants. The H-doped system (H@\(\beta\)-C\({}_{3}\)N\({}_{2}\)) maps onto a 3D Hubbard model at half-filling, which might spontaneously transform into an antiferromagnetic semiconductor ground state in the large-U limit, while in the F-doped system (F@\(\beta\)-C\({}_{3}\)N\({}_{2}\)) the strong orbital hybridization between guest and host atoms gives rise to a large DOS peak at the Fermi level, which further generates a Stoner ferromagnetic metal due to electronic instability. Therefore, the incorporation of H and F in \(\beta\)-C\({}_{3}\)N\({}_{2}\) might give rise to a new type of superhard antiferromagnetic semiconductor and superhard ferromagnetic semimetal, respectively. Our work provides an effective strategy to design a new class of highly desirable multifunctional materials with excellent mechanical and magnetic properties.
## 2 Computational methods
The first-principles calculations were performed using the Vienna ab initio simulation package (VASP) [26]. The Perdew-Burke-Ernzerhof (PBE) functional in the generalized gradient approximation was used [27]. The projector augmented wave (PAW) pseudopotentials treat 2s\({}^{2}\)2p\({}^{2}\) and 2s\({}^{2}\)2p\({}^{3}\) as valence electrons for C and N atoms, respectively. The plane-wave energy cutoff was set at 800 eV, and the Brillouin-zone (BZ) integration was carried out using a 7\(\times\)7\(\times\)7 Monkhorst-Pack grid in the first BZ for relaxation. During the geometrical optimization, all forces on atoms converged to less than 0.01 eV/Å and the change of the total energy was less than 10\({}^{-5}\) eV. Before calculating the band structure, the self-consistent calculation was carried out with a much denser k-point grid of 10\(\times\)10\(\times\)10 to obtain a more accurate charge density and, ultimately, band structure. Phonon calculations were performed with a 2\(\times\)2\(\times\)2 supercell using the PHONOPY code [28]. Elastic constants were calculated by the strain-stress method [29], and the bulk modulus, shear modulus, and Poisson's ratio of these structures were derived from the Voigt-Reuss-Hill approximation [30].
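For a cubic crystal such as \(\beta\)-C\({}_{3}\)N\({}_{2}\), the Voigt-Reuss-Hill averages reduce to simple closed forms. The sketch below (not from the paper; the elastic constants passed in the call are placeholders) shows how the bulk modulus, shear modulus, and Poisson's ratio follow from C\({}_{11}\), C\({}_{12}\), and C\({}_{44}\).

```python
def vrh_cubic(c11, c12, c44):
    """Voigt-Reuss-Hill bulk/shear moduli and Poisson's ratio for a cubic crystal (GPa)."""
    bulk = (c11 + 2.0 * c12) / 3.0                      # B_V = B_R for cubic symmetry
    g_voigt = (c11 - c12 + 3.0 * c44) / 5.0
    g_reuss = 5.0 * (c11 - c12) * c44 / (4.0 * c44 + 3.0 * (c11 - c12))
    shear = 0.5 * (g_voigt + g_reuss)                   # Hill average
    poisson = (3.0 * bulk - 2.0 * shear) / (2.0 * (3.0 * bulk + shear))
    return bulk, shear, poisson

# placeholder elastic constants, for illustration only (GPa)
print(vrh_cubic(c11=800.0, c12=150.0, c44=400.0))
```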
## 3 Geometric structure and stability
Recently, a group of clathrate carbon-nitride compounds with superhard characteristics was proposed. Among them, \(\beta\)-C\({}_{3}\)N\({}_{2}\) with space group P-43m adopts a sodalite-like cage structure composed of inter-linked strong C-N bonds, which provides a new degree of freedom for manipulating physical properties. As shown in Fig. 1 (a), \(\beta\)-C\({}_{3}\)N\({}_{2}\) crystallizes in a cubic lattice with a fully optimized lattice constant of 5.084 Å, where the twelve carbon atoms are located on Wyckoff positions 12\(i\) (0.156, 0.156, 0.497) and the eight nitrogen atoms at 4\(e\) (0.246, 0.246, 0.246) and 4\(e\) (0.731, 0.731, 0.731). The average radius of the cage is about 1.30 Å, which allows for the intercalation of light elements with a proper atomic radius. Therefore, we considered the elements in the first three periods and in group VIIA of the periodic table and finally obtained several stable structures with various electronic structures and magnetic properties [see supplementary material]. Here we take hydrogen (\(r\)=0.53 Å) and fluorine (\(r\)=0.71 Å) as examples to explore the potentially rich physical properties induced by external intercalation.
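For readers who want to reproduce the geometry, the cell can be rebuilt from the reported Wyckoff data with ASE; the snippet below is an illustrative sketch, and the assignment of P-43m to space group No. 215 is an assumption.

```python
from ase.spacegroup import crystal

# Illustrative reconstruction of the beta-C3N2 cell from the reported Wyckoff positions.
c3n2 = crystal(
    symbols=["C", "N", "N"],
    basis=[(0.156, 0.156, 0.497),   # C, 12i
           (0.246, 0.246, 0.246),   # N, 4e
           (0.731, 0.731, 0.731)],  # N, 4e
    spacegroup=215,                 # P-43m (assumed space group number)
    cellpar=[5.084, 5.084, 5.084, 90, 90, 90],
)
print(len(c3n2), c3n2.get_chemical_formula())  # expect 20 atoms, C12N8
```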
We first considered possible doping sites for the insertion of H and F into cage-like \(\beta\)-C\({}_{3}\)N\({}_{2}\) and found that the central position (0.5, 0.5, 0.5) is locally energetically stable for H [31] and F doping, and robust against perturbations. The corresponding discussions are given in detail in the supplementary materials. In the main body, our primary focus is on the H- and F-center-doped crystal structures, as shown in Fig. 1, which are named H@\(\beta\)-C\({}_{3}\)N\({}_{2}\) and F@\(\beta\)-C\({}_{3}\)N\({}_{2}\), respectively. The lattice constants of H@\(\beta\)-C\({}_{3}\)N\({}_{2}\) and F@\(\beta\)-C\({}_{3}\)N\({}_{2}\) are slightly increased by 0.017 Å and 0.038 Å compared with pristine \(\beta\)-C\({}_{3}\)N\({}_{2}\). The phonon spectra of pristine, H-doped, and F-doped \(\beta\)-C\({}_{3}\)N\({}_{2}\) at 0 GPa were investigated with a 2\(\times\)2\(\times\)2 supercell; these results are shown in Figs. 1(d)-(f). It is obvious that there is no imaginary frequency in the whole Brillouin zone for H@\(\beta\)-C\({}_{3}\)N\({}_{2}\) and F@\(\beta\)-C\({}_{3}\)N\({}_{2}\), which indicates that they are dynamically stable at 0 GPa. In addition, the phonon dispersions of the two doped systems are almost the same as that of their parent \(\beta\)-C\({}_{3}\)N\({}_{2}\), implying that the excellent mechanical properties are retained in the doped systems.
Furthermore, the thermodynamic stability has been confirmed by the calculation of the cohesive energy through the chemical equation \(\Delta H_{c}\)= _E\({}_{total}\)_(X@C\({}_{3}\)N\({}_{2}\))\(-3\)_E\({}_{C}\)_\(-2\)_E\({}_{N}\)_\(-\)_E\({}_{X}\)_, where _E\({}_{total}\)_(X@C\({}_{3}\)N\({}_{2}\)) is the total energy of the compound, _E\({}_{C}\)_ is the energy of a C atom, _E\({}_{N}\)_ is the energy of an N atom, and _E\({}_{X}\)_ is the energy of the doped atom. The calculated negative cohesive energies of -4.202, -4.002 and -3.941 eV/unit cell for \(\beta\)-C\({}_{3}\)N\({}_{2}\), H@\(\beta\)-C\({}_{3}\)N\({}_{2}\) and F@\(\beta\)-C\({}_{3}\)N\({}_{2}\), respectively, show the energy release during compound formation, which also confirms the thermodynamic stability of the studied materials. To further verify the thermodynamic stability of H@\(\beta\)-C\({}_{3}\)N\({}_{2}\) and F@\(\beta\)-C\({}_{3}\)N\({}_{2}\), we performed ab initio molecular dynamics (AIMD) simulations at room temperature. A 2\(\times\)2\(\times\)2 supercell with a time step of 1 fs was used during the simulation. The calculation results are shown in Figs. 1(g)-(i), which indicate that both H@\(\beta\)-C\({}_{3}\)N\({}_{2}\) and F@\(\beta\)-C\({}_{3}\)N\({}_{2}\) remain dynamically stable at 300 K.
## 4 Electronic structures
To gain further insight into the properties of these doped compounds, the electronic structures of pristine, H-doped, and F-doped \(\beta\)-C\({}_{3}\)N\({}_{2}\) are calculated and shown in Figs. 2 (a)-(c), respectively. It is found that pristine \(\beta\)-C\({}_{3}\)N\({}_{2}\) is a wide indirect-band-gap (_E\({}_{\rm g}\)_=3.564 eV) insulator, with the valence-band maximum located at the \(\Gamma\) point and the conduction-band minimum located at the M point. We find that the highest valence bands are almost entirely composed of N-_p_ orbitals, while the lowest conduction bands mainly derive from C-_p_ orbitals with a small contribution from N-_p_ orbitals. For H@\(\beta\)-C\({}_{3}\)N\({}_{2}\) in Fig. 2 (b), a nearly isolated band appears inside the wide band gap of pristine \(\beta\)-C\({}_{3}\)N\({}_{2}\), which crosses the Fermi level and gives an extended Fermi surface crossing the Brillouin-zone boundary, as shown in Fig. 2(e). To reveal the origin of this unique band around the Fermi level of H@\(\beta\)-C\({}_{3}\)N\({}_{2}\), we calculated the band projection, partial density of states (PDOS), and charge density. From the band projection and PDOS, we find this unique band is mainly composed of the H \(s\)-orbital, which is consistent with the charge density plotted in Fig. 2 (h), where the electrons are mainly located on the H atoms in real space. Further, via the Bader charge analysis in Fig. S7, we found
that the inserted H atom only gains 0.028 electrons, which indicates that the interaction between the doped H atom and \(\beta\)-C\({}_{3}\)N\({}_{2}\) is very weak. Since there is only one H atom in the simple cubic cell, this isolated \(s\)-band has to be half-filled by the one electron from the H atom. Therefore, it can be described by a single-band Hamiltonian on the cubic lattice made up of H atoms,
\[H=E_{0}-t\sum\nolimits_{\langle ij\rangle}C_{i\sigma}^{+}C_{j\sigma}+H.C. \tag{1}\]
Here \(E_{0}\) is the onsite energy and \(t\) is the nearest-neighbor hopping. The energy spectrum is analytically given as \(E(\mathbf{k})\)=\(E_{0}\)-2\(t\)(cos\(k_{x}\)+cos\(k_{y}\)+cos\(k_{z}\)), which gives a constant-energy surface similar to that shown in Fig. 2 (g). When the electron-electron interaction (Hubbard U) is included, the system corresponds to a standard three-dimensional Hubbard model on a simple cubic lattice, which exhibits rich phase diagrams as pointed out by previous studies [32].
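A minimal numerical sketch of this dispersion is given below; the values of \(E_{0}\) and \(t\) are illustrative placeholders, not parameters fitted to the first-principles bands.

```python
# Sketch of the nearest-neighbour tight-binding band on the simple-cubic H
# sublattice, E(k) = E0 - 2t(cos kx + cos ky + cos kz).  E0 and t are
# illustrative placeholders.
import numpy as np

E0, t = 0.0, 0.5                          # hypothetical on-site energy and hopping (eV)
k = np.linspace(-np.pi, np.pi, 61)        # k in units of 1/a
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
E = E0 - 2 * t * (np.cos(kx) + np.cos(ky) + np.cos(kz))

print("band width:", E.max() - E.min(), "eV (expected 12t =", 12 * t, "eV)")
print("band centre (half filling):", np.median(E), "eV")
```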
Compared with H@\(\beta\)-C\({}_{3}\)N\({}_{2}\), the band structure of the F@\(\beta\)-C\({}_{3}\)N\({}_{2}\) system is more complicated. As exhibited in Fig. 2(c), there is a conduction band that crosses the Fermi level, giving rise to the Fermi pockets centered at the \(\Gamma\) and M points displayed in Fig. 2 (f). However, this band is entangled with two other valence bands, and it becomes triply degenerate at the R point. From the band projection and PDOS, we find these entangled bands are almost equally composed of F \(p\)-orbitals and N \(p\)-orbitals, indicating strong orbital hybridization between the inserted F and pristine \(\beta\)-C\({}_{3}\)N\({}_{2}\), which is consistent with the charge density plotted in Fig. 2 (i), where the electrons are mainly located on both the F atom and the N atoms in real space. In addition, via the Bader charge analysis in Fig. S7, we find the inserted F atom gains 0.573 electrons, which indicates that the interaction between the doped F atom and \(\beta\)-C\({}_{3}\)N\({}_{2}\) is rather strong. Intriguingly, the relatively dispersionless bands with saddle points lead to a peak of the DOS at the Fermi level, which may further contribute to remarkable correlation effects and novel physical properties [33]. As a whole, both H and F doping induce a metallic state in \(\beta\)-C\({}_{3}\)N\({}_{2}\).
## 5 Magnetic properties
Rich physical phenomena can be induced by doping atoms into cage-like materials, such as ferroelectricity and magnetism. Given the unusual half-filled and flat-band characteristics of H- and F-doped \(\beta\)-C\({}_{3}\)N\({}_{2}\), we further explore the underlying magnetic phase transitions of these systems. Spin-polarized calculations, which partially include the exchange-correlation effect, are performed, and we find that both H@\(\beta\)-C\({}_{3}\)N\({}_{2}\) and F@\(\beta\)-C\({}_{3}\)N\({}_{2}\) have non-zero net magnetic moments, which indicates that the magnetic phases are energetically favorable. The magnetic moment distribution in real space is directly reflected by the effective spin charge density (\(\Delta\)\(\rho\) =\(\rho\)\(\uparrow\)-\(\rho\)\(\downarrow\)) in Fig. 3, where a magnetic moment of 1.00 \(\mu_{\rm B}\) is mainly localized on the H atom for H@\(\beta\)-C\({}_{3}\)N\({}_{2}\), while a magnetic moment of 0.91 \(\mu_{\rm B}\) is extensively distributed over both the F atom and the N atoms for F@\(\beta\)-C\({}_{3}\)N\({}_{2}\). These phenomena are consistent
with the analysis of the partial density of states and the charge density distribution around the Fermi level. In order to probe the most stable magnetic configuration, a \(\sqrt{2}\times\sqrt{2}\times 2\) supercell is built and four types of magnetic configurations are considered in Figs. 4(a)-(d). They involve a ferromagnetic (FM) phase and three antiferromagnetic (AFM) phases, namely type-A, -C, and -G AFM. The total energies of these configurations, with respect to the nonmagnetic (NM) phases, are listed in Table 1. One can find that H@\(\beta\)-C\({}_{3}\)N\({}_{2}\) has the type-G AFM ground state while F@\(\beta\)-C\({}_{3}\)N\({}_{2}\) possesses the FM ground state.
The spin-polarized electronic structures of type-G AFM H@\(\beta\)-C\({}_{3}\)N\({}_{2}\) and of FM F@\(\beta\)-C\({}_{3}\)N\({}_{2}\) are presented in Fig. 4(e) and 4(f), respectively. Thereinto, H@\(\beta\)-C\({}_{3}\)N\({}_{2}\) exhibits an AFM semiconducting nature with a band gap of 2.485 eV. This magnetic phase transition from a normal half-filled metal into an AFM semiconductor can be well understood by the 3D Hubbard model on a simple cubic lattice[34, 35], when a large coupling limit (U/t) is reached in H@\(\beta\)-C\({}_{3}\)N\({}_{2}\). On the contrary, F@\(\beta\)-C\({}_{3}\)N\({}_{2}\) exhibits an FM metal state, as shown in Fig. 4(f), where the spin-down bands move up with the triplet point coincidentally located at the Fermi level, while the spin-up bands move down with the band around the \(\Gamma\) point crossing the Fermi level. From the spin-split DOS, we notice that the Fermi level is mainly occupied by the spin-down channel, giving rise to a high spin polarization of 80%. With only 0.03 electron doping per unit cell (0.263 \(\times\) 10\({}^{-3}\) e/Å\({}^{3}\)), F@\(\beta\)-C\({}_{3}\)N\({}_{2}\) would reach 100% spin polarization, namely an FM half-metal. Different from the AFM ground state of H@\(\beta\)-C\({}_{3}\)N\({}_{2}\), F@\(\beta\)-C\({}_{3}\)N\({}_{2}\) experiences a phase transition from a normal metal state into an FM metal. This transition can be understood via the Stoner criterion for itinerant ferromagnetism[36]: \(N(E_{F})I>\)1, where \(N(E_{F})\) is the DOS at the Fermi energy in the NM state and \(I\) is the Stoner parameter. Here the almost divergent DOS peak at the Fermi level in Fig. 2(c) makes the Stoner criterion satisfied, [37] and the ferromagnetic state is naturally induced with a lower total energy. Moreover, from the PDOS of F@\(\beta\)-C\({}_{3}\)N\({}_{2}\), we further identify that the itinerant ferromagnetism originates from the strong \(p\)-orbital hybridization of the F and N atoms due to their large and extended \(2p\) orbitals. This is also revealed by the distribution of the magnetic moment: the N atoms carry 0.466 \(\mu_{\rm B}\) while the F atom possesses 0.293 \(\mu_{\rm B}\). According to the above analysis, H@\(\beta\)-C\({}_{3}\)N\({}_{2}\) and F@\(\beta\)-C\({}_{3}\)N\({}_{2}\) are confirmed to be an antiferromagnetic semiconductor and a ferromagnetic metal, respectively. The magnetism in doped \(\beta\)-C\({}_{3}\)N\({}_{2}\) mainly derives from the \(p\)-orbital of the inserted atom, which is different from existing magnetic superhard materials [22, 23] (such as transition metal borides), in which the magnetism mainly originates from the unpaired \(d\)-orbitals of the transition-metal atoms.
## 6 Elastic constants and mechanical properties
Atom intercalation not only dramatically modifies the electronic structure of the host but can also remarkably affect the mechanical properties. Given the goal of designing superhard crystals, the mechanical properties of doped \(\beta\)-C\({}_{3}\)N\({}_{2}\) are therefore investigated. The elastic behavior is of key importance for understanding the deformation of superhard materials in response to external forces. The calculated elastic constants are listed in Table 2. For a cubic structure, there are three independent elastic constants, C\({}_{11}\), C\({}_{12}\), and C\({}_{44}\). To be a stable cubic crystal, the elastic constants must obey the following mechanical criteria: [38] C\({}_{11}\) - C\({}_{12}>0\), C\({}_{11}+\) 2C\({}_{12}>0\), C\({}_{44}>0\). From the calculated values of the elastic constants in Table 2, it is obvious that these elastic constants are positive and satisfy the mechanical criteria, revealing that both H@\(\beta\)-C\({}_{3}\)N\({}_{2}\) and F@\(\beta\)-C\({}_{3}\)N\({}_{2}\) are mechanically stable.
The Vickers hardness of materials, \(H_{\rm v}\)(GPa), can be obtained according to the empirical model: [39]
\[H_{\rm V}=2(k^{2}G)^{0.585}-3 \tag{2}\]
where \(k=G/B\). The bulk modulus (\(B\)) and shear modulus (\(G\)), which characterize the resistance to fracture and plastic deformation, can be directly derived from the elastic constants by the Voigt-Reuss-Hill approximation. [30] Pugh's ratio, defined as \(B/G\), similar to Poisson's ratio \(\nu\), can be used to characterize the brittleness or ductility of a material. Generally speaking, a solid with \(B/G<1.75\) and \(\nu<0.26\) behaves in a brittle manner; otherwise, it is ductile. [40] In general, brittle materials with smaller \(B/G\) tend to have higher hardness. From Table 2, the calculated \(B/G\) and Poisson's ratio are 1.11 and 0.150 for H@\(\beta\)-C\({}_{3}\)N\({}_{2}\), and 1.11 and 0.156 for F@\(\beta\)-C\({}_{3}\)N\({}_{2}\), which are comparable with those of pristine \(\beta\)-C\({}_{3}\)N\({}_{2}\) (0.93 and 0.105). All these results indicate that the two doped systems still maintain the favorable brittleness of their parent \(\beta\)-C\({}_{3}\)N\({}_{2}\). As a result, the Vickers hardness based on formula (2) is further obtained. The obtained values of \(H_{\rm v}\) are 49.0 GPa and 48.2 GPa for H@\(\beta\)-C\({}_{3}\)N\({}_{2}\) and F@\(\beta\)-C\({}_{3}\)N\({}_{2}\), respectively. Although these values are slightly smaller than that of pristine \(\beta\)-C\({}_{3}\)N\({}_{2}\) (58.2 GPa), they are higher than 40 GPa, which suggests that both H@\(\beta\)-C\({}_{3}\)N\({}_{2}\) and F@\(\beta\)-C\({}_{3}\)N\({}_{2}\) are superhard materials, as expected. Besides, in order to reveal the origin of their excellent mechanical properties, we also calculated the electron localization functions, as shown in Fig. S7 of the supplementary materials, from which one can identify that the strong C-N covalent bonding is responsible for the super-high hardness in both pristine and doped \(\beta\)-C\({}_{3}\)N\({}_{2}\). Hence, based on the above results, we confirm the coexistence of magnetism and superhardness in doped \(\beta\)-C\({}_{3}\)N\({}_{2}\), providing potential material candidates for superhard magnetic materials.
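For reference, the sketch below evaluates the Voigt-Reuss-Hill moduli of a cubic crystal and the hardness model of Eq. (2); the elastic constants passed in are placeholders, not the values of Table 2.

```python
# Sketch: Voigt-Reuss-Hill moduli and the empirical Vickers hardness of Eq. (2)
# for a cubic crystal.  The elastic constants below are illustrative placeholders.
def cubic_vrh_hardness(C11, C12, C44):
    B = (C11 + 2 * C12) / 3                                      # bulk modulus (Voigt = Reuss for cubic)
    G_V = (C11 - C12 + 3 * C44) / 5                              # Voigt shear modulus
    G_R = 5 * C44 * (C11 - C12) / (4 * C44 + 3 * (C11 - C12))    # Reuss shear modulus
    G = (G_V + G_R) / 2                                          # Hill average
    k = G / B                                                    # ratio entering Eq. (2)
    nu = (3 * B - 2 * G) / (2 * (3 * B + G))                     # Poisson's ratio
    H_V = 2 * (k ** 2 * G) ** 0.585 - 3                          # Vickers hardness, Eq. (2)
    return B, G, nu, H_V

B, G, nu, H_V = cubic_vrh_hardness(C11=900.0, C12=150.0, C44=400.0)  # GPa, illustrative
print(f"B={B:.0f} GPa, G={G:.0f} GPa, nu={nu:.3f}, Hv={H_V:.1f} GPa")
```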
## 7 Conclusions
We have shown that the incorporation of H and F in \(\beta\)-C\({}_{3}\)N\({}_{2}\) can realize the combination of excellent mechanical properties and distinctive magnetic properties. Via first-principles calculations, we clearly reveal that it is the unique electronic structure induced by the intercalated atoms around the Fermi level that leads to the AFM semiconductor state and the FM semimetal state in H@\(\beta\)-C\({}_{3}\)N\({}_{2}\) and F@\(\beta\)-C\({}_{3}\)N\({}_{2}\), respectively. Meanwhile, the high Vickers hardness, as well as the brittleness, is retained in both doped systems. Therefore, we believe that the intercalation of proper atoms into existing clathrate superhard materials can effectively induce magnetism and manipulate the electronic structure. This work extends the magnetic properties of superhard materials and may stimulate spintronic applications of superhard materials.
## Acknowledgements
This work was supported by the National Natural Science Foundation of China (Grant No. 12204330) and the Sichuan Normal University for financial support (No. 341829001). |
2309.17230 | Spurious Feature Diversification Improves Out-of-distribution
Generalization | Generalization to out-of-distribution (OOD) data is a critical challenge in
machine learning. Ensemble-based methods, like weight space ensembles that
interpolate model parameters, have been shown to achieve superior OOD
performance. However, the underlying mechanism for their effectiveness remains
unclear. In this study, we closely examine WiSE-FT, a popular weight space
ensemble method that interpolates between a pre-trained and a fine-tuned model.
We observe an unexpected "FalseFalseTrue" phenomenon, in which WiSE-FT
successfully corrects many cases where each individual model makes incorrect
predictions, which contributes significantly to its OOD effectiveness. To gain
further insights, we conduct theoretical analysis in a multi-class setting with
a large number of spurious features. Our analysis predicts the above phenomenon
and it further shows that ensemble-based models reduce prediction errors in the
OOD settings by utilizing a more diverse set of spurious features. Contrary to
the conventional wisdom that focuses on learning invariant features for better
OOD performance, our findings suggest that incorporating a large number of
diverse spurious features weakens their individual contributions, leading to
improved overall OOD generalization performance. Additionally, our findings
provide the first explanation for the mysterious phenomenon of weight space
ensembles outperforming output space ensembles in OOD. Empirically we
demonstrate the effectiveness of utilizing diverse spurious features on a
MultiColorMNIST dataset, and our experimental results are consistent with the
theoretical analysis. Building upon the new theoretical insights into the
efficacy of ensemble methods, we further propose a novel averaging method
called BAlaNced averaGing (BANG) which significantly enhances the OOD
performance of WiSE-FT. | Yong Lin, Lu Tan, Yifan Hao, Honam Wong, Hanze Dong, Weizhong Zhang, Yujiu Yang, Tong Zhang | 2023-09-29T13:29:22Z | http://arxiv.org/abs/2309.17230v2 | # Spurious Feature Diversification Improves Out-of-distribution Generalization
###### Abstract
Generalization to out-of-distribution (OOD) data is a critical challenge in machine learning. Ensemble-based methods, like weight space ensembles that interpolate model parameters, have been shown to achieve superior OOD performance. However, the underlying mechanism for their effectiveness remains unclear.
In this study, we closely examine WiSE-FT, a popular weight space ensemble method that interpolates between a pre-trained and a fine-tuned model. We observe an unexpected "FalseFalseTrue" phenomenon, in which WiSE-FT successfully corrects many cases where each individual model makes incorrect predictions, which contributes significantly to its OOD effectiveness. To gain further insights, we conduct theoretical analysis in a multi-class setting with a large number of spurious features. Our analysis predicts the above phenomenon and it further shows that ensemble-based models reduce prediction errors in the OOD settings by utilizing a more diverse set of spurious features. Contrary to the conventional wisdom that focuses on learning invariant features for better OOD performance, our findings suggest that incorporating a large number of diverse spurious features weakens their individual contributions, leading to improved overall OOD generalization performance. Empirically we demonstrate the effectiveness of utilizing diverse spurious features on a MultiColorMNIST dataset, and our experimental results are consistent with the theoretical analysis.
Building upon the new theoretical insights into the efficacy of ensemble methods, we further identify an issue of WiSE-FT caused by the overconfidence of fine-tuned models in OOD situations. This overconfidence magnifies the fine-tuned model's incorrect prediction, leading to deteriorated OOD ensemble performance. To remedy
this problem, we propose a novel method called BAlaNced averaGing (BANG) to mitigate the overconfidence problem, which significantly enhances the OOD performance of WiSE-FT.
## 1 Introduction
Machine learning has seen significant advancements recently. However, the assumption that testing samples follow the same distribution as training samples, known as the independently and identically distributed (IID) assumption, can be violated in real-world applications. When a machine learning model encounters novel testing samples that it hasn't seen during training, it faces the out-of-distribution (OOD) generalization problem.
Ensemble-based models (EBM) have achieved significant success in addressing OOD problems in recent years. Specifically, denote the input as \(\mathbf{x}\) and the model as \(f_{\theta}\) with parameter \(\theta\). Given two models \(f_{\bar{\theta}}\) and \(f_{\tilde{\theta}}\), existing EBM works typically consider the output space ensemble (OSE), which outputs \(f_{\bar{\theta}}(\mathbf{x})+f_{\tilde{\theta}}(\mathbf{x})\), and the weight space ensemble (WSE), which outputs \(f_{(\bar{\theta}+\tilde{\theta})/2}(\mathbf{x})\). WSE is also called weight averaging in the literature. [60, 59, 49] show that EBM can significantly improve the OOD performance and that WSE outperforms OSE. Many works, e.g., [12, 49, 6, 46, 59, 56, 33], adopt WSE to repeatedly improve the SOTA performance on many OOD benchmarks such as DomainBed [27] and ImageNet variants [60]. See Appendix B for a detailed discussion on related works.
Consider two types of features for OOD: (1) invariant features that consistently predict the label across distributions, and (2) spurious features that have unstable correlations with the label. Existing OOD theories [5, 51, 57, 2, 67] show that an ERM-trained model relying on spurious features can fail in the worst case. EBM, which combines multiple ERM-trained models, may still heavily depend on such features and potentially fail in worst-case scenarios as well. There have been some previous attempts to explain the effectiveness of model ensembles, but they do not offer satisfactory explanations of the overall OOD improvement of EBM. Furthermore, the difference between the weight and output space ensembles remains under-explored (see Appendix B for a thorough discussion of related works).
**An intriguing phenomenon**. To understand the benefits of EBM, we examine WiSE-FT [60], which interpolates between a pre-trained and a fine-tuned model. When evaluating OOD datasets, we divided them into four groups based on the correctness of predictions made by the individual models. Surprisingly, we found a "FalseFalseTrue" phenomenon: WiSE-FT can correct predictions on samples where both individual models make incorrect predictions. Further, we show that the two individual models learn different feature sets, and WiSE-FT utilizes more diverse features. Based on these observations, we then motivate our theory by a toy example (shown in Figure 1). Suppose we have two models, \(\bar{f}\) and \(\tilde{f}\), for a 3-class classification task. For a sample from the first class, \(\bar{f}\) produces logits of (0.4, 0.6, 0), and \(\tilde{f}\) produces logits of (0.4, 0, 0.6). The ensemble model's prediction would be (0.4, 0.3, 0.3). This phenomenon can happen when \(\bar{f}\) and \(\tilde{f}\) learn different subsets of spurious features, represented as \(\bar{\mathcal{S}}\) and \(\tilde{\mathcal{S}}\), respectively. Recall that the
spurious correlations change in OOD. In the example, \(\bar{f}\) generates a high logit (0.6) for the second class influenced by \(\bar{\mathcal{S}}\), while \(\tilde{f}\) produces a high logit (0.6) for the third class influenced by \(\tilde{\mathcal{S}}\) (details in Section 2).
**A new perspective on OOD generalization**. In Section 3, we extend a popular theoretical setting [51, 57] to a 3-class classification with multiple spurious features. Our theoretical results predict the aforementioned phenomenon. We show that EBM incorporates more diverse spurious features, which weakens the contribution of each individual spurious feature and further leads to improved overall OOD performance. We also shed light on the difference between the weight and output space ensembles. Recall that there has been a significant effort in the OOD community to learn invariant features and discard spurious features [5]. However, these approaches have not shown satisfactory performance when applied to real-world datasets [27], which may be due to the fact that invariant learning requires numerous domains [51] and strong regularization [67], and faces additional difficulties induced by non-linearity [51], overparameterization [35], and optimization challenges [15]. In contrast, our findings offer a new perspective that **spurious feature diversification** actually improves OOD performance, which can be easily implemented as shown in ensemble-based models and has achieved remarkable empirical success. To further verify our findings, we introduce MultiColorMNIST in Section 3.3, a novel variant of CMNIST [5] with multiple spurious features. Through empirical analysis, we show that individual
Figure 1: Illustration of the FalseFalseTrue phenomenon. Consider classifying camels, cows, and dogs. The invariant feature \(\mathbf{x}_{v}\) is the shape of the animal. There are 2 spurious features: 1) the background \(\mathbf{x}_{s,1}\), e.g., camels are always on sand, cows are on grass, and dogs are on the floor; 2) the fur of the animals \(\mathbf{x}_{s,2}\), e.g., camels have brown fur, cows have dotted fur, and dogs are all black in the training dataset. Suppose we fit two models, \(\bar{f}\) and \(\tilde{f}\), on the training dataset independently. Assume that \(\bar{f}\) uses the invariant feature \(\mathbf{x}_{v}\) and \(\mathbf{x}_{s,1}\), and \(\tilde{f}\) uses \(\mathbf{x}_{v}\) and \(\mathbf{x}_{s,2}\). \(\bar{f}\) and \(\tilde{f}\) both correctly predict the label of a sample from the training distribution. Consider an OOD testing sample of a dog with brown fur on the grass. \(\bar{f}\) puts a large logit on the cow class since the background (grass) is spuriously correlated with cows, i.e., \(\bar{f}(\mathbf{x}_{v},\mathbf{x}_{s,1})=[0.4,0.6,0]\). \(\tilde{f}\) puts a large logit on the camel class since the texture (brown fur) is spuriously correlated with camels, i.e., \(\tilde{f}(\mathbf{x}_{v},\mathbf{x}_{s,2})=[0.4,0,0.6]\). **Both \(\bar{f}\) and \(\tilde{f}\) make mistakes on this sample. However, the average of them can make a correct prediction**, i.e., \(1/2\bar{f}(\mathbf{x}_{v},\mathbf{x}_{s,1})+1/2\tilde{f}(\mathbf{x}_{v},\mathbf{x}_{s,2})=[0.4,0.3,0.3]\).
models trained on MultiColorMNIST utilize different spurious features, and their ensemble achieves superior OOD performance by leveraging this diversity. Notably, while several methods promote feature diversity to enhance empirical performance, none of them have explored spurious feature diversification from a perspective similar to ours (details in Appendix B.2).
**An improved method**. Our theoretical results indicate that the scaling of \(\bar{f}\) and \(\tilde{f}\) should be similar to maintain the improvement of the model ensemble. If \(\tilde{f}\) is much more confident than \(\bar{f}\), resulting in a larger scaling for \(\tilde{f}\), the ensemble model can become biased towards \(\tilde{f}\). Unfortunately, the scaling issue arises in WiSE-FT, which combines a pre-trained model and a fine-tuned model in the weight space. Empirical evidence shows that the pre-trained model is well calibrated, whereas the fine-tuned model is highly over-confident on OOD datasets, indicating a larger scaling compared to the pre-trained model. Based on these findings, we propose BAlaNced averaGing (BANG), which combines the pre-trained model with a model fine-tuned by over-confidence preventing methods like Label Smoothing and MixUp. We demonstrate that BANG improves vanilla WiSE-FT by approximately 1.9pp in average OOD performance across five ImageNet variants.
To summarize, the following are the main contributions of the paper:
* By examining WiSE-FT, a popular method of ensemble-based models (EBM) that combines the pre-trained and fine-tuned models in the weight space, we discover an unexpected 'FalseFalseTrue' phenomenon: WiSE-FT can correct a large fraction of OOD samples on which both individual models make wrong predictions. We further show that the two individual models use different sets of features and WiSE-FT utilizes more diverse features.
* Through theoretical analysis on a multi-class classification problem with multiple spurious features, we provide a natural explanation for the observed phenomenon and show EBM can improve OOD performance by leveraging more diverse spurious features. Furthermore, we provide novel insights into the distinction between weight and output space ensemble.
* Contrary to the traditional belief that emphasizes the exclusive learning of invariant features for OOD, our findings suggest that incorporating diverse spurious features weakens their individual contributions, leading to improved overall OOD generalization performance. Through experiments on our MultiColorMNIST dataset, which contains multiple spurious features, we provide concrete evidence for the effectiveness of diverse spurious features.
* Based on our theoretical and empirical findings, we show that WiSE-FT can suffer from the over-confidence problem of the fine-tuned model, which skews the ensemble and deteriorates the OOD performance. We further propose a novel method BANG to remedy this problem, and it significantly improves the OOD performance.
## 2 Understanding Ensemble-based Models via Examining WiSE-FT
**The FalseFalseTrue phenomenon**. In this section, we closely examine WiSE-FT [60] to obtain intuition on why EBM can improve OOD performance. Specifically, [60] ensembles the pre-trained CLIP model and the model fine-tuned on ImageNet in the weight space. In Appendix C.1, we divide each dataset (ImageNet as the ID dataset and five ImageNet variants 1 as OOD datasets) into 8 groups by whether the pre-trained, fine-tuned, and averaged models make correct predictions. We surprisingly find that WiSE-FT can correct a substantial part of the samples on which both the pre-trained and fine-tuned models make mistakes. Specifically, we calculate the number of "FalseFalseTrue" samples, i.e., samples on which WiSE-FT is correct while both the pre-trained and fine-tuned models are incorrect. We then calculate the FalseFalseTrue ratio by dividing the FalseFalseTrue number by the dataset size. Figure 2(Left) shows the FalseFalseTrue ratio on each OOD dataset and compares it with the "overall improvement", which is the accuracy improvement of WiSE-FT over the better of the pre-trained and fine-tuned models. We can see that there is a substantial part of FalseFalseTrue samples in each dataset. Refer to Appendix C.1 for more details. It is interesting that the FalseFalseTrue ratio is even higher than the overall improvement on IN-R and IN-A; we provide in-depth analysis and explanation in Appendix C.1 and E.6.
Footnote 1: They are ImageNet-V2 [50], ImageNet-R [28], ImageNet-A [29], ImageNet Sketch [58] and ObjectNet [7]. We refer to them as IN-V2, IN-R, IN-A, IN-S, and ObjNet for short. More details in Appendix E
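A minimal sketch of this bookkeeping is given below; the toy prediction arrays stand in for the real per-sample outputs of the pre-trained, fine-tuned, and weight-averaged models.

```python
# Sketch of the statistics behind Figure 2 (Left): given per-sample correctness
# of the pre-trained, fine-tuned, and weight-averaged models on one OOD dataset,
# count the samples that only the averaged model gets right.  The arrays below
# are hypothetical stand-ins for real evaluation outputs.
import numpy as np

labels   = np.array([0, 1, 2, 0, 1])          # ground-truth classes (toy data)
pred_pre = np.array([0, 2, 0, 1, 1])          # pre-trained (zero-shot) predictions
pred_ft  = np.array([1, 1, 0, 2, 1])          # fine-tuned predictions
pred_avg = np.array([0, 1, 2, 0, 1])          # weight-averaged (WiSE-FT) predictions

correct_pre, correct_ft, correct_avg = (p == labels for p in (pred_pre, pred_ft, pred_avg))
false_false_true = (~correct_pre) & (~correct_ft) & correct_avg
fft_ratio = false_false_true.mean()
overall_improvement = correct_avg.mean() - max(correct_pre.mean(), correct_ft.mean())
print(f"FalseFalseTrue ratio: {fft_ratio:.2f}, overall improvement: {overall_improvement:.2f}")
```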
**Illustration on when FalseFalseTrue occurs**. In this part, we try to understand the FalseFalseTrue phenomenon. We first consider the output space ensemble to be similar to the weight space ensemble in this part and will present an analysis of their difference in Section 3. Suppose we want to distinguish from camels, cows, and dogs. There is one invariant feature \(\mathbf{x}_{v}\) (the shape of the animal) and two spurious features (the background \(\mathbf{x}_{s,1}\) and the fur of the animal \(\mathbf{x}_{s,2}\)). Camels are typically found on sand, cows on grass, and dogs on the floor. Camels have brown fur, cows have dotted fur, and dogs are all black in the training dataset. See Fig. 1 for illustration. Suppose we fit two different
Figure 2: (Left) FalseFalseTrue ratio; (Right) GradCAM feature visualization.
models, \(\bar{f}\) and \(\tilde{f}\), on the training dataset. Further assume \(\bar{f}\) uses the features \(\mathbf{x}_{v}\) and \(\mathbf{x}_{s,1}\), and \(\tilde{f}\) uses \(\mathbf{x}_{v}\) and \(\mathbf{x}_{s,2}\)2. Both \(\bar{f}\) and \(\tilde{f}\) correctly predict samples from the training distribution. Whereas, for a sample from the testing distribution, e.g., a dog with brown fur (\(\mathbf{x}_{s,2}\)) on the grass (\(\mathbf{x}_{s,1}\)): \(\bar{f}\) puts a large logit on the cow class since the background, grass, is spuriously correlated with cows, i.e., \(\bar{f}(\mathbf{x}_{v},\mathbf{x}_{s,1})=[0.4,0.6,0]\); \(\tilde{f}\) puts a large logit on the camel class since the texture, brown fur, is spuriously correlated with camels, i.e., \(\tilde{f}(\mathbf{x}_{v},\mathbf{x}_{s,2})=[0.4,0,0.6]\). Both \(\bar{f}\) and \(\tilde{f}\) make different mistakes under distributional shifts due to using different spurious features. However, the ensemble of them can make a correct prediction, i.e., \(1/2\bar{f}(\mathbf{x}_{v},\mathbf{x}_{s,1})+1/2\tilde{f}(\mathbf{x}_{v},\mathbf{x}_{s,2})=[0.4,0.3,0.3]\).
Footnote 2: For simplicity of illustration, we assume that \(\bar{f}\) and \(\tilde{f}\) learn the same invariant feature. However, this is not necessary for EBM to outperform both individual models, as demonstrated in Section 3
**Feature visualization**. The reasoning above assumes that individual models utilize different features. GradCAM [53] visualization of the features used by the pre-trained (zero-shot), fine-tuned, and WiSE-FT models in Figure 2(Right) confirms this assumption. The visualization shows that the pre-trained and fine-tuned models rely on different features, while WiSE-FT utilizes more diverse features. Additionally, [3] provides empirical evidence supporting the use of diverse features by different DNNs with the same architecture trained on the same datasets (with different initializations). They also provide a formal theoretical proof for 2-layer DNNs. We include some of [3]'s empirical results in Appendix C.2. Additionally, there is further evidence suggesting that DNNs favor sparse feature representations and discard redundant features [40, 4].
## 3 Analysis on Spurious Feature Diversification
### Theoretical Settings
**Notation**. For simplicity of presentation, we consider a 3-class classification problem, i.e., \(\mathbf{y}\in\{\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\}\), where \(\mathbf{e}_{i}\) denotes the 3-dimensional unit vector with \(i\)th element equaling 1, e.g., \(\mathbf{e}_{2}=[0,1,0]^{\top}\). In Appendix F.2, we extend the setting to \(K\)-class classification. \(\mathbf{a}(k)\) means the \(k\)th element of vector \(\mathbf{a}\), \(\mathbf{A}(k)\) means the \(k\)th column of matrix \(\mathbf{A}\). We use \(\mathbf{I}_{K}\) to represent a \(K\times K\) identity matrix, e.g., \(\mathbf{I}_{3}=[\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}]\). We omit the subscript of \(\mathbf{I}\) when no confusion arises.
Suppose we have \(d_{v}\) invariant features \(\{\mathbf{x}_{v,i}\}_{i=1}^{d_{v}}\) and \(d_{s}\) spurious features \(\{\mathbf{x}_{s,j}\}_{j=1}^{d_{s}}\) where \(\mathbf{x}_{v,i},\mathbf{x}_{s,j}\in\mathbb{R}^{d}\) and the whole feature \(\mathbf{x}\in\mathbb{R}^{d\times(d_{s}+d_{v})}\) is the concatenation of them, i.e.,
\[\mathbf{x}=\text{Concat}\Big{(}\{\mathbf{x}_{v,i}\}_{i=1}^{d_{v}}\cup\{\mathbf{x}_{s,j}\}_ {j=1}^{d_{s}}\Big{)}=[\mathbf{x}_{v,1},\ldots,\mathbf{x}_{v,d_{v}},\mathbf{x}_{s,1},\ldots,\mathbf{x}_{s,d_{s}}].\]
Consider that each model \(f\) is composed of a featurizer \(\Phi\in\{0,1\}^{d_{v}+d_{s}}\) and a classifier \(\mathbf{w}\in\mathbb{R}^{d\times 3}\). \(\Phi\) first selects feature by \(\mathbf{x}\Phi\). For example, suppose \(\mathbf{x}=[\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3}]\) and \(\Phi=[1,1,0]^{\top}\), then \(\mathbf{x}\Phi=\mathbf{x}_{1}+\mathbf{x}_{2}\). Then the classifier \(\mathbf{w}\in\mathbb{R}^{d\times 3}\) is fit based on the features
selected by \(\Phi\) as
\[\mathbf{w}=\operatorname*{arg\,min}_{\mathbf{v}\in\mathbb{R}^{d\times 3}}\mathcal{R}_{id}( \mathbf{v},\Phi)=\operatorname*{arg\,min}_{\mathbf{v}\in\mathbb{R}^{d\times 3}}\mathbb{E}_{(\mathbf{x},\mathbf{y} )\sim\mathcal{D}_{id}}[\ell(\mathbf{v}^{\top}(\mathbf{x}\Phi),\mathbf{y})],\]
where \(\ell\) is the cross-entropy loss function and \(\mathcal{D}_{id}\) is the ID distribution.
**Remark**. Our theoretical models utilize a simplified two-layer structure [5, 67, 51] on concatenated features to mimic the feature learning of modern deep learning models like Vision Transformers [22], which process input images as patches. Refer to Appendix D.1 for further discussions.
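The sketch below illustrates this model family numerically: a binary featurizer that sums the selected feature blocks and a linear classifier fitted on \(\mathbf{x}\Phi\) with the cross-entropy loss (plain gradient descent here, rather than the analytical minimizer used in the proofs; shapes are illustrative).

```python
# Minimal sketch of the featurizer/classifier model used in the theory section.
import numpy as np

def featurize(X_blocks, phi):
    """X_blocks: (n, n_feats, d) concatenated features; phi: (n_feats,) in {0,1}."""
    return np.einsum("nfd,f->nd", X_blocks, phi)        # x Phi = sum of selected blocks

def fit_classifier(Z, y, n_classes=3, lr=0.1, steps=1000):
    n, d = Z.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[y]                             # one-hot labels
    for _ in range(steps):                               # gradient descent on the CE loss
        logits = Z @ W
        P = np.exp(logits - logits.max(1, keepdims=True))
        P /= P.sum(1, keepdims=True)
        W -= lr * Z.T @ (P - Y) / n
    return W

rng = np.random.default_rng(0)
Xb = rng.standard_normal((200, 5, 16))                   # 200 samples, 5 feature blocks (toy data)
y = rng.integers(0, 3, size=200)
phi = np.array([1, 1, 1, 0, 0])                          # featurizer selecting 3 of the 5 blocks
W = fit_classifier(featurize(Xb, phi), y)
```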
Following [51, 57], we consider that each \(\mathbf{x}_{v,i}\) and \(\mathbf{x}_{s,j}\) are generated from the label \(\mathbf{y}\) with the _latent_ invariant features \(\mathbf{\mu}_{v,i}\) and spurious features \(\mathbf{\mu}_{s,i}\), where \(\mathbf{\mu}_{v,i},\mathbf{\mu}_{s,j}\in\mathbb{R}^{d\times 3}\). The full data generation process is:
**Definition 1** (Data Generation Process).: _The whole data generation process is as follows:_
\[\mathbf{y}\sim\text{Unif}\left\{\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3} \right\},\mathbf{x}=\text{Concat}\Big{(}\{\mathbf{x}_{v,i}\}_{i=1}^{d_{v}}\cup\{\mathbf{x}_ {s,j}\}_{j=1}^{d_{s}}\Big{)},\] \[\mathbb{P}_{\theta}(\mathbf{x}_{v,i}\mid\mathbf{y})=\mathcal{N}\left(\bm {\mu}_{v,i}\mathbf{Q}_{v,i}\mathbf{y},\sigma^{2}\mathbf{I}_{d}\right),\mathbb{P}_{\theta} (\mathbf{x}_{s,j}\mid\mathbf{y})=\mathcal{N}\left(\mathbf{\mu}_{s,j}\mathbf{Q}_{s,j}\mathbf{y}, \sigma^{2}\mathbf{I}_{d}\right),\forall i,j. \tag{1}\]
_where \(\mathbf{Q}_{v,i},\mathbf{Q}_{s,j}\in\{0,1\}^{3\times 3}\). Further, \(\mathbf{Q}_{v,i}=\mathbf{I}_{3}=[\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}]\) always holds. In the ID distribution \(\mathcal{D}_{id}\), \(\mathbf{Q}_{s,j}=\mathbf{I}_{3}\); in the OOD distribution \(\mathcal{D}_{ood}\), the \(k\)th column of \(\mathbf{Q}_{s,j}\), i.e., \(\mathbf{Q}_{s,j}(k)\), is as follows for \(k=1,2,3\):_
\[\mathbf{Q}_{s,j}(k)=\begin{cases}\mathbf{e}_{k},\text{ with probability }1-p\\ \text{Unif}\{\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\},\text{ with probability }p.\end{cases}\]
**The intuition of the data generation process**. We consider the example in Figure 1. Figure 3 shows the intuition of \(\mathbf{\mu}_{s,j}\) and \(\mathbf{Q}_{s,j}\). Suppose the spurious feature \(\mathbf{\mu}_{s,j}\) is the background in Figure 1. Here \(\mathbf{\mu}_{s,j}=[\mathbf{\mu}_{s,j}(1),\mathbf{\mu}_{s,j}(2),\mathbf{\mu}_{s,j}(3)]\in \mathbb{R}^{d\times 3}\) and each column \(\mathbf{\mu}_{s,j}(k)\) for \(k=1,2,3\) represents a specific attribute that is associated with class \(k\) in the training set. In other words, \(\mathbf{\mu}_{s,j}(1),\mathbf{\mu}_{s,j}(2)\), and \(\mathbf{\mu}_{s,j}(3)\) represent 3 attributes of the background, namely floor, grass, and sand, which are correlated with dog, cow, and camel, respectively. Consider a dog image (i.e., \(\mathbf{y}=\mathbf{e}_{1}=[1,0,0]\)). We have \(\mathbf{\mu}_{s,j}\mathbf{Q}_{s,j}\mathbf{y}|_{\mathbf{y}=\mathbf{e}_{1}}=\mathbf{\mu}_{s,j}\mathbf{Q}_{s,j}(1)\) and further: Footnote 3: Specifically, \(\mathbf{Q}_{s,j}\mathbf{y}|_{\mathbf{y}=\mathbf{e}_{1}}=\mathbf{Q}_{s,j}[1,0,0]^{\top}=\mathbf{Q}_{s,j}(1)\), where \(\mathbf{Q}_{s,j}(1)\) is the first column of \(\mathbf{Q}_{s,j}\).
1. In the ID distribution \(\mathcal{D}_{\text{id}}\), \(\mathbf{Q}_{s,j}(1)=\mathbf{e}_{1}\) and \(\mathbf{\mu}_{s,j}\mathbf{Q}_{s,j}\mathbf{y}|_{\mathbf{y}=\mathbf{e}_{1}}=\mathbf{\mu}_{s,j}\mathbf{e}_{1}= \mathbf{\mu}_{s,j}(1)\). Then \(\mathbf{x}_{s,j}=\mathcal{N}(\mathbf{\mu}_{s,j}(1),\sigma\mathbf{I})\), indicating that in \(\mathcal{D}_{\text{id}}\) the background of the dog (i.e., \(\mathbf{y}=\mathbf{e}_{1}\)) is the floor (i.e., \(\mathbf{\mu}_{s,j}(1)\)).
2. In the OOD distribution \(\mathcal{D}_{\text{ood}}\), \(\mathbf{Q}_{s,j}(1)=\mathbf{e}_{1}\) with probability \(1-p\) and \(\mathbf{Q}_{s,j}(1)\sim\text{Unif}\{\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\}\) with probability \(p\). Then we have the following: \[\mathbf{\mu}_{s,j}\mathbf{Q}_{s,j}\mathbf{y}|_{\mathbf{y}=\mathbf{e}_{1}}=\begin{cases}\mathbf{\mu}_{s,j}(1),\text{ with probability }1-p\\ \text{Unif}\{\mathbf{\mu}_{s,j}(1),\mathbf{\mu}_{s,j}(2),\mathbf{\mu}_{s,j}(3)\},\text{ with probability }p,\end{cases}\]
indicating that in the OOD distribution the background of the dog (i.e., \(\mathbf{y}=\mathbf{e}_{1}\)) is the floor (i.e., \(\mathbf{\mu}_{s,j}(1)\)) with probability \(1-p\) and is randomly drawn from floor, grass, and sand (i.e., \(\mathbf{\mu}_{s,j}(1)\), \(\mathbf{\mu}_{s,j}(2)\), and \(\mathbf{\mu}_{s,j}(3)\)) with probability \(p\). In other words, \(p\) is the probability that the spurious correlation no longer holds, and a larger \(p\) indicates a larger distributional shift.
**Remark**. Our data generation process extends the setting of [57, 51] to a 3-class classification problem with multiple features. This extension aligns with the intuition behind popular multi-class datasets used in empirical studies on OOD generalization, such as Full-ColorMNIST, ColoredObject, and CifarMNIST [61, 35, 67, 66, 1]. Take ColoredObject for example, correlations between classes and background colors exist in the training dataset but fail with a certain probability in OOD.
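A minimal sampler following Definition 1 is sketched below; the dimensions, noise level, and shift probability are illustrative choices.

```python
# Sketch of the data-generation process in Definition 1: K = 3 classes,
# orthonormal latent features mu in R^{d x K}, invariant features always follow
# the label, and each spurious feature's association Q_{s,j} is drawn once per
# environment, breaking the ID correlation with probability p in OOD.
import numpy as np

rng = np.random.default_rng(0)
K, d, d_v, d_s, sigma, p = 3, 64, 2, 3, 0.01, 0.9

# mutually orthonormal columns for all latent features via one QR factorisation
Q_all, _ = np.linalg.qr(rng.standard_normal((d, (d_v + d_s) * K)))
mu = [Q_all[:, i * K:(i + 1) * K] for i in range(d_v + d_s)]
mu_v, mu_s = mu[:d_v], mu[d_v:]

def draw_Qs(p_shift):
    """For each spurious feature, map class k -> attribute index (Definition 1)."""
    return [np.array([rng.integers(0, K) if rng.random() < p_shift else k
                      for k in range(K)]) for _ in range(d_s)]

def sample(n, Qs):
    ys = rng.integers(0, K, size=n)
    X = np.stack([
        np.concatenate([m[:, y] for m in mu_v] +                # invariant blocks
                       [m[:, Q[y]] for m, Q in zip(mu_s, Qs)])  # spurious blocks
        + sigma * rng.standard_normal(d * (d_v + d_s))
        for y in ys])
    return X, ys

X_id, y_id = sample(1000, draw_Qs(0.0))    # ID: spurious correlations intact
X_ood, y_ood = sample(1000, draw_Qs(p))    # one OOD environment with shift probability p
```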
**Definition 2** (Individual models).: _Denote the whole invariant feature set as \(\mathcal{V}:=\{\mathbf{x}_{v,i}\}_{i=1}^{d_{v}}\) and the spurious feature set as \(\mathcal{S}:=\{\mathbf{x}_{s,j}\}_{j=1}^{d_{s}}\). Consider \(\bar{f}=(\bar{\Phi},\bar{\mathbf{w}})\) and \(\tilde{f}=(\tilde{\Phi},\tilde{\mathbf{w}})\). Suppose \(\bar{\Phi}\) learns \(\bar{\mathcal{V}}\subset\mathcal{V}\) and \(\bar{\mathcal{S}}\subset\mathcal{S}\), and \(\tilde{\Phi}\) learns \(\tilde{\mathcal{V}}\subset\mathcal{V}\) and \(\tilde{\mathcal{S}}\subset\mathcal{S}\). Denote \(|\bar{\mathcal{V}}|=\bar{n}_{v}\), \(|\bar{\mathcal{S}}|=\bar{n}_{s}\), \(|\tilde{\mathcal{V}}|=\tilde{n}_{v}\), \(|\tilde{\mathcal{S}}|=\tilde{n}_{s}\), \(|\bar{\mathcal{V}}\cap\tilde{\mathcal{V}}|=n_{vo}\), and \(|\bar{\mathcal{S}}\cap\tilde{\mathcal{S}}|=n_{so}\). Specifically, we have_
\[\mathbf{x}\bar{\Phi} =\sum_{\mathbf{x}_{v}\in\bar{\mathcal{V}}}\mathbf{x}_{v}+\sum_{\mathbf{x}_{s}\in\bar{\mathcal{S}}}\mathbf{x}_{s},\quad\bar{\mathbf{w}}=\operatorname*{arg\,min}_{\mathbf{v}\in\mathbb{R}^{d\times 3}}\mathcal{R}_{\mathit{id}}(\mathbf{v},\bar{\Phi}),\] \[\mathbf{x}\tilde{\Phi} =\sum_{\mathbf{x}_{v}\in\tilde{\mathcal{V}}}\mathbf{x}_{v}+\sum_{\mathbf{x}_{s}\in\tilde{\mathcal{S}}}\mathbf{x}_{s},\quad\tilde{\mathbf{w}}=\operatorname*{arg\,min}_{\mathbf{v}\in\mathbb{R}^{d\times 3}}\mathcal{R}_{\mathit{id}}(\mathbf{v},\tilde{\Phi}).\]
Figure 3: (a) \(\mathbf{\mu}_{s,j}\in\mathbb{R}^{d\times 3}\) represents a spurious feature, e.g., the background. Each column of \(\mathbf{\mu}_{s,j}\) is an attribute of the spurious feature, e.g., \(\mathbf{\mu}_{s,j}(1)\), \(\mathbf{\mu}_{s,j}(2)\) and \(\mathbf{\mu}_{s,j}(3)\) are the floor, grass, and sand, respectively. (b) \(\mathbf{Q}_{s,j}\in\{0,1\}^{3\times 3}\) represents the relationship between labels and spurious features. In the ID distribution, \(\mathbf{Q}_{s,j}\) equals \(\mathbf{I}\), indicating that each spurious feature is perfectly correlated with the corresponding class. (c) In the OOD distribution, spurious correlation can fail, e.g., \(\mathbf{Q}_{s,j}(1)\) equals \(\mathbf{e}_{2}\) with probability \(p/3\), indicating the background of the dog is the grass.
**Definition 3** (Output space ensemble (OSE)).: _Given the two individual models defined in Definition 2, the prediction of the output space ensemble (**OSE**) is_
\[f_{\text{ose}}(\mathbf{x})=\frac{1}{2}\left(\bar{\mathbf{w}}^{\top}(\mathbf{x}\bar{\Phi})+\tilde{\mathbf{w}}^{\top}(\mathbf{x}\tilde{\Phi})\right).\]
The predicted class of the sample \((\mathbf{x},\mathbf{y})\) is the class with the maximum logit. Specifically, denote the logit as \(\hat{l}=f(\mathbf{x})\). The predicted class is \(\hat{k}=\arg\max_{h\in\{1,2,3\}}\hat{l}(h)\), where \(\hat{l}(h)\) is the \(h\)th dimension of the logit \(\hat{l}\). The model makes a correct prediction if \(\mathbb{I}(\mathbf{e}_{\hat{k}}=\mathbf{y})\) holds, where \(\mathbb{I}\) is the indicator function. The accuracy is \(\mathcal{A}(f)=\mathbb{E}_{\mathbf{x},\mathbf{y}}[\mathbb{I}(\mathbf{e}_{\hat{k}}=\mathbf{y})]\). We denote the OOD accuracy as \(\mathcal{A}_{\text{ood}}(f)=\mathbb{E}_{\mathbf{Q}_{s}}\left[\mathbb{E}_{\mathbf{x},\mathbf{y}}[\mathbb{I}(\mathbf{e}_{\hat{k}}=\mathbf{y})|\mathbf{Q}_{s}]\right],\) where we use \(\mathbf{Q}_{s}\) as a shorthand for \(\mathbf{Q}_{s,1},\ldots,\mathbf{Q}_{s,d_{s}}\). We discuss this metric in Appendix D.4. We defer the analysis of the ID accuracy to Appendix D.5 since we consider infinite samples and the ID accuracies of all considered models are close to 1.
**Assumption 1** (Small Noise).: _Denote \(n^{\prime}_{v}\) and \(n^{\prime}_{s}\) as the maximum number of invariant features and spurious features that a model can learn, respectively. We need the overall noise to be small enough to satisfy \(\mathbf{F}^{K}(\frac{1}{\sigma(n^{\prime}_{v}+n^{\prime}_{s})})\geq 1-\epsilon,\) in which \(\mathbf{F}\) is the cumulative distribution function of a standard Gaussian random variable, and \(K\) refers to the number of classes (here we analyze the case \(K=3\))._
**Remark**. Since we impose random noise on each feature, e.g., \(\mathbf{x}_{v,i}=\mathbf{\mu}_{v,i}+\mathbf{z}\) with \(\mathbf{z}\sim\mathcal{N}(0,\sigma^{2}\mathbf{I}_{d})\), where \(\mathbf{I}_{d}\) is a \(d\)-dimensional identity matrix and \(d\gg d_{v}+d_{s}\), it is natural to assume the overall noise is controlled, e.g., we have \(\epsilon\leq 10^{-6}\) when \(K=10\), \(\sigma=1/100\), \(n^{\prime}_{v}+n^{\prime}_{s}=20\).
**Assumption 2** (Orthogonal features [57, 3]).: _(1) \(\|\mathbf{\mu}_{v,i}(k)\|_{2}=1\) and \(\|\mathbf{\mu}_{s,j}(k)\|_{2}=1\) for \(i=1,\cdots,d_{v}\), \(j=1,\cdots,d_{s}\), \(k=1,2,3\). (2) \(\mathbf{v}_{i}(k)\perp\mathbf{v}_{i^{\prime}}(k^{\prime})\) for any \((i,k)\neq(i^{\prime},k^{\prime})\), \(k,k^{\prime}=1,2,3\), \(\mathbf{v}_{i},\mathbf{v}_{i^{\prime}}\in\{\mathbf{\mu}_{v,1},\cdots,\mathbf{\mu}_{v,d_{v}}, \mathbf{\mu}_{s,1},\ldots,\mathbf{\mu}_{s,d_{s}}\}\)._
### Theoretical Results
We first show the intuition on the simple Example 1 and then extend it to the general setting of Definitions 1-3:
**Example 1** (Illustrative examples).: _Consider that there are in total 4 invariant features \(\{\mathbf{x}_{v,i}\}_{i=1}^{4}\) and 6 spurious features \(\{\mathbf{x}_{s,j}\}_{j=1}^{6}\), and two individual models \((\bar{\mathbf{w}},\bar{\Phi})\) and \((\tilde{\mathbf{w}},\tilde{\Phi})\) that learn non-overlapping features as_
\[\mathbf{x}\bar{\Phi}=\sum_{i=1,2}\mathbf{x}_{v,i}+\sum_{j=1,2,3}\mathbf{x}_{s,j},\text{ and }\mathbf{x}\tilde{\Phi}=\sum_{i=3,4}\mathbf{x}_{v,i}+\sum_{j=4,5,6}\mathbf{x}_{s,j}.\]
**Proposition 1** (Illustrative examples).: _Consider Example 1, suppose Assumption 1 and 2 hold, and there are infinite ID and OOD samples. Omitting small terms containing \(\epsilon\), we have_
\[\mathcal{A}_{ood}(\bar{f})=\mathcal{A}_{ood}(\tilde{f})=1-\frac{1}{9}p^{3},\ \text{and}\ \mathcal{A}_{ood}(f_{OSE})=1-\frac{2p^{5}}{81}-\frac{17p^{6}}{729}.\]
We can see that OSE improves OOD by \(\mathcal{A}_{ood}(f_{\text{OSE}})-\max\{\mathcal{A}_{ood}(\bar{f}),\mathcal{ A}_{ood}(\tilde{f})\}>1/81p^{3}\).
**Intuition of the proof** (Full proof in Appendix F.1). Let's consider the samples of first class \(\mathbf{y}=\mathbf{e}_{1}=[1,0,0]\). Model \((\bar{\mathbf{w}},\bar{\Phi})\) has \(\mathbf{x}\bar{\Phi}|_{\mathbf{y}=\mathbf{e}_{1}}=\sum_{i=1}^{2}\mathbf{\mu}_{v,i}\mathbf{Q}_{v,i}( 1)+\sum_{j=1}^{3}\mathbf{\mu}_{s,j}\mathbf{Q}_{s,j}(1)+z\) where \(z\sim\mathcal{N}(0,5\sigma^{2}\mathbf{I}_{d})\). By Lemma 5, we have \(\bar{\mathbf{w}}(k)=\sum_{i=1}^{2}\mathbf{\mu}_{v,i}(k)+\sum_{j=1}^{3}\mathbf{\mu}_{s,j}(k)\) for each class \(k=1,2,3\). Omitting the small noise term, the predicted logit for class \(k\) is \(\bar{\mathbf{w}}(k)^{\top}(\mathbf{x}\bar{\Phi})|_{\mathbf{y}=\mathbf{e}_{1}}=\sum_{i=1}^{2} \mathbf{\mu}_{v,i}(k)^{\top}(\mathbf{\mu}_{v,i}\mathbf{Q}_{v,i}(1))+\sum_{j=1}^{3}\mathbf{\mu}_ {s,j}(k)^{\top}(\mathbf{\mu}_{s,j}\mathbf{Q}_{s,j}(1))\). The model will mistakenly predict \(\mathbf{e}_{2}\) on the samples with true label \(\mathbf{e}_{1}\) when \(\bar{\mathbf{w}}(1)^{\top}\mathbf{x}\bar{\Phi}|_{y=\mathbf{e}_{1}}<\bar{\mathbf{w}}(2)^{\top} \mathbf{x}\bar{\Phi}|_{y=\mathbf{e}_{1}}\). This will happen when the three events \(\{\mathbf{Q}_{s,j}(1)=\mathbf{e}_{2}\}_{j=1}^{3}\) simultaneously happen in OOD (see Appendix D.7 for detailed discussion). Each event occurs with a probability of \(p/3\), resulting in a combination probability of \(p^{3}/27\). This means that with a probability of \(p^{3}/27\), we encounter an OOD scenario where the model \(\bar{f}=(\bar{\mathbf{w}},\bar{\Phi})\) incorrectly predicts almost all samples from the first class \(\mathbf{e}_{1}\) as the second class \(\mathbf{e}_{2}\). This failure occurs because all three spurious features happen to have values that are spuriously correlated with \(\mathbf{e}_{2}\) in the training dataset. In other words, the three spurious features dominate the prediction of \(\mathbf{e}_{2}\), overshadowing the two invariant features that predict the true label \(\mathbf{e}_{1}\). For the OSE model, we have \(\bar{\mathbf{w}}(k)^{\top}(\mathbf{x}\bar{\Phi})+\tilde{\mathbf{w}}(k)^{\top}(\mathbf{x}\tilde {\Phi})|_{\mathbf{y}=\mathbf{e}_{1}}=\sum_{i=1}^{4}\mathbf{\mu}_{v,i}(k)^{\top}(\mathbf{\mu}_ {v,i}\mathbf{Q}_{v,i}(1))+\sum_{j=1}^{6}\mathbf{\mu}_{s,j}(k)^{\top}(\mathbf{\mu}_{s,j} \mathbf{Q}_{s,j}(1))\). The model will mistakenly predict \(\mathbf{e}_{2}\) on the samples with true label \(\mathbf{e}_{1}\) when at least five of the six events \(\{\mathbf{Q}_{s,j}(1)=\mathbf{e}_{2}\}_{j=1}^{6}\) simultaneously happen in OOD (see Appendix D.6 for details), whose probability is much less than that of \(\bar{f}\). Intuitively, the failure probability of the averaged model is smaller as it utilizes more spurious features, which are less likely to make the same mistakes.
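The counting argument above can be checked numerically. The sketch below draws the spurious attributes directly and compares an individual model (2 invariant, 3 spurious features) with the output ensemble (4 and 6); it is only a rough check of the qualitative claim, since the exact accuracies depend on the noise and tie-breaking conventions used in the proofs.

```python
# Monte-Carlo illustration of the counting argument: with orthonormal features
# and small noise, the logit of class k is (number of invariant features if k
# equals the true class) plus (number of spurious features whose OOD attribute
# points to k).  The ensemble pools more features, so coordinated spurious
# failures become rarer.
import numpy as np

rng = np.random.default_rng(0)
K, p, sigma, trials = 3, 0.9, 0.05, 100_000

def ood_accuracy(n_v, n_s):
    correct = 0
    for _ in range(trials):
        y = 0                                      # fix the true class (by symmetry)
        # each spurious feature keeps its ID attribute w.p. 1 - p, else uniform
        attrs = np.where(rng.random(n_s) < p, rng.integers(0, K, size=n_s), y)
        logits = np.bincount(attrs, minlength=K).astype(float)
        logits[y] += n_v
        logits += sigma * rng.standard_normal(K)   # small noise also breaks ties
        correct += int(np.argmax(logits) == y)
    return correct / trials

print("individual model (2 invariant, 3 spurious):", ood_accuracy(2, 3))
print("output ensemble  (4 invariant, 6 spurious):", ood_accuracy(4, 6))
```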
**Proposition 2** (General Results for OSE).: _Consider Definition 1-3, Assumption 1-2 hold, and infinite ID and OOD samples. Omitting small constants involving \(\epsilon\), we have_
\[\mathcal{A}_{\text{ood}}(\bar{f})=F_{p}\left(\frac{(1-p)\bar{n}_{s}+\bar{n}_{v}}{\sqrt{\bar{n}_{s}}}\right),\quad\mathcal{A}_{\text{ood}}(\tilde{f})=F_{p}\left(\frac{(1-p)\tilde{n}_{s}+\tilde{n}_{v}}{\sqrt{\tilde{n}_{s}}}\right),\] \[\text{and}\ \mathcal{A}_{\text{ood}}(f_{\text{ose}})=F_{p}\left(\frac{(1-p)(\bar{n}_{s}+\tilde{n}_{s})+(\bar{n}_{v}+\tilde{n}_{v})}{\sqrt{\bar{n}_{s}+\tilde{n}_{s}+2n_{so}}}\right).\]
Here \(F_{p}(x)\) is a cumulative distribution function (CDF) parameterized by \(p\), as defined in Appendix F.2, which is monotonically increasing in \(x\), as shown in Figure 4(a). Suppose the two individual models learn the same number of features with no overlap, i.e., \(\tilde{n}_{v}=\bar{n}_{v}=n_{v}\), \(\bar{n}_{s}=\tilde{n}_{s}=n_{s}\), and \(n_{vo}=n_{so}=0\); then we have \(\mathcal{A}_{\text{ood}}(f_{\text{ose}})=F_{p}\left(\sqrt{2}t\right)\) and \(\mathcal{A}_{\text{ood}}(\bar{f})=\mathcal{A}_{\text{ood}}(\tilde{f})=F_{p}(t)\) where \(t=(1-p)\sqrt{n_{s}}+\frac{n_{v}}{\sqrt{n_{s}}}\), indicating that \(f_{\text{ose}}\) is better than \(\bar{f}\) since \(F_{p}(\cdot)\) is monotonically increasing.
**Example 2**.: _Consider \(p=0.9\) and two individual models that learn non-overlapping features, i.e., \(n_{vo}=n_{so}=0\). Fix \(\bar{n}_{v}=5,\bar{n}_{s}=20\), and vary \(\tilde{n}_{v}=0,1,...,5\) and \(\tilde{n}_{s}=0,1,...,20\)._
Figure 4(b) illustrates \(\mathcal{A}_{\text{ood}}(f_{\text{ose}})-\mathcal{A}_{\text{ood}}(\bar{f})\) for Example 2. \(f_{\text{ose}}\) achieves better OOD performance than \(\bar{f}\) in most cases. One exception is that if \(\tilde{f}\) is much weaker than \(\bar{f}\), e.g., \(\bar{f}\) learns 5 invariant features but \(\tilde{f}\) learns 0 invariant features, the ensemble model \(f_{\text{ose}}\) is inferior to \(\bar{f}\).
**The difference between the output and weight space ensemble**. The precise difference between the output space ensemble (OSE) and the WSE (referred to as the OSE-WSE difference) and why WSE is better than OSE in OOD remain open problems [60, 59, 49]. We shed light on this with our bilinear theoretical model \(\boldsymbol{w}^{\top}\boldsymbol{x}\Phi\):
**Definition 4** (Weight space ensemble (WSE)).: _Given the two individual models defined in Definition 2, the prediction of the weight space ensemble( **WSE**) is_
\[f_{\text{wse}}(\boldsymbol{x})=\frac{1}{4}(\bar{\boldsymbol{w}}+\tilde{ \boldsymbol{w}})^{\top}\left(\boldsymbol{x}(\bar{\Phi}+\tilde{\Phi})\right).\]
In Appendix D.2, we show that the OSE-WSE difference in a 2-layer DNN is closely connected with the OSE-WSE difference captured by our models in Definitions 3-4.
**Proposition 3** (General Results for WSE).: _Consider Definition 1-3, Assumption 1-2, and infinite ID and OOD samples. Omitting small constants involving \(\epsilon\), we have_
\[\mathcal{A}_{\text{ood}}(f_{\text{wse}})=F_{p}(\frac{(1-p)(\tilde{n}_{s}+\bar {n}_{s}+2n_{so})+(\tilde{n}_{v}+\bar{n}_{v}+2n_{vo})}{\sqrt{\tilde{n}_{s}+\bar {n}_{s}+14n_{so}}}).\]
Comparing Proposition 2 and 3, we can see that the only difference between \(\mathcal{A}_{\text{ood}}(f_{\text{wse}})\) and \(\mathcal{A}_{\text{ood}}(f_{\text{ose}})\) is the number of overlapped invariant and spurious features learned by
individual models, i.e., \(n_{vo}\) and \(n_{so}\). Specifically, when \(\bar{\Phi}\) and \(\tilde{\Phi}\) select no overlapping features, \(f_{\text{wse}}\) and \(f_{\text{ose}}\) make the same prediction, since \(\mathbf{x}\bar{\Phi}\perp\tilde{\mathbf{w}}\) and \(\mathbf{x}\tilde{\Phi}\perp\bar{\mathbf{w}}\) by Assumption 2 and further \((\bar{\mathbf{w}}+\tilde{\mathbf{w}})^{\top}\left(\mathbf{x}(\bar{\Phi}+\tilde{\Phi})\right)\propto\bar{\mathbf{w}}^{\top}\mathbf{x}\bar{\Phi}+\tilde{\mathbf{w}}^{\top}\mathbf{x}\tilde{\Phi}\). When there are overlapping features: (a) for WSE, the coefficient of an overlapped feature is amplified by 2 in \(\bar{\Phi}+\tilde{\Phi}\), and amplified by 2 again in \(\bar{\mathbf{w}}+\tilde{\mathbf{w}}\). This results in the coefficient of the overlapped feature becoming 4 in \((\bar{\mathbf{w}}+\tilde{\mathbf{w}})^{\top}\mathbf{x}(\bar{\Phi}+\tilde{\Phi})\). (b) For OSE, i.e., \(\bar{\mathbf{w}}^{\top}\mathbf{x}\bar{\Phi}+\tilde{\mathbf{w}}^{\top}\mathbf{x}\tilde{\Phi}\), the coefficient of the overlapped feature is 2. See Appendix D.7.1 for a detailed discussion. In Appendix D.7.2, we provide conditions under which \(f_{\text{wse}}\) outperforms \(f_{\text{ose}}\), together with simulation results and supportive experiments.
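The coefficient argument can be verified with a toy computation, sketched below with unit basis vectors standing in for orthonormal features.

```python
# Toy check of the overlap argument: with orthonormal features (unit basis
# vectors here), a feature selected by both models enters the WSE logit with a
# 4x amplified coefficient (before the 1/4 normalisation) but the OSE logit with
# only a 2x coefficient, so the WSE emphasises shared features more strongly.
import numpy as np

mu = np.eye(4)                              # four orthonormal "features"
bar_idx, tilde_idx = [0, 1, 2], [2, 3]      # feature 2 is learned by both models

w_bar = x_bar = mu[bar_idx].sum(axis=0)     # w_bar and x*Phi_bar for a clean sample
w_til = x_til = mu[tilde_idx].sum(axis=0)   # w_tilde and x*Phi_tilde

f_ose = 0.5 * (w_bar @ x_bar + w_til @ x_til)            # = 2.5
f_wse = 0.25 * (w_bar + w_til) @ (x_bar + x_til)         # = 1.75
# per-feature weights: OSE -> 0.5, 0.5, 1.0, 0.5 ; WSE -> 0.25, 0.25, 1.0, 0.25
print(f"OSE logit {f_ose}, WSE logit {f_wse}")
```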
### Experimental Verification on MultiColorMNIST
Previous efforts in the OOD community have focused on learning invariant features and discarding spurious features [5]. However, these approaches have not performed well on real-world datasets [51]. This could be due to the requirements of invariant learning, such as the need for numerous domains [51] and strong regularization [67], and the challenges posed by non-linearity, overparameterization, and optimization [51, 35, 15]. In contrast, our findings show that learning diverse spurious features also helps with OOD generalization. This approach, as shown in ensemble-based models, is easily implementable and has shown remarkable empirical success.
To further verify our findings, we construct MultiColorMNIST, a 10-class variant of CMNIST [5] with 32 spurious features, following Definition 1. As shown in Figure 5, each sample in MultiColorMNIST consists of 32 color patches, each serving as a spurious feature. We train two neural networks, denoted as \(f_{\theta_{1}}\) and \(f_{\theta_{2}}\), with the same architecture but different initializations on MultiColorMNIST. The results in Table 1 show that the OSE model (\(f_{\theta_{1}}(\mathbf{x})\) + \(f_{\theta_{2}}(\mathbf{x})\)) improves OOD performance over the individual models (\(f_{\theta_{1}}(\mathbf{x})\) and \(f_{\theta_{2}}(\mathbf{x})\)). In Appendix C.3, (1) we show that each individual model learns a subset of the spurious features in MultiColorMNIST and OSE utilizes more diverse spurious features; (2) we construct SingleColorMNIST with only one spurious feature and show that OSE yields little performance gain since both individual models learn the same spurious feature (similar to the results in [49]).
Figure 5: Two samples from MultiColorMNIST
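A sketch of how such a sample can be assembled is given below; the patch layout, patch size, and colour palette are illustrative assumptions and not necessarily those used to build the actual dataset.

```python
# Sketch of a MultiColorMNIST-style sample: a grayscale MNIST digit surrounded
# by 32 colour patches, each acting as a spurious feature that matches the
# label with probability 1 - p_shift.
import numpy as np

def make_sample(digit_img, label, palette, p_shift, rng):
    """digit_img: (28, 28) array in [0, 255]; palette: (10, 3) RGB colours."""
    canvas = np.zeros((56, 56, 3), dtype=np.float32)
    canvas[14:42, 14:42, :] = digit_img[..., None] / 255.0      # digit in the centre
    border_cells = [(r, c) for r in range(8) for c in range(8)
                    if not (2 <= r <= 5 and 2 <= c <= 5)]       # 48 border cells of 7x7 px
    for (r, c) in border_cells[:32]:                            # use 32 of them as patches
        k = rng.integers(0, 10) if rng.random() < p_shift else label
        canvas[r * 7:(r + 1) * 7, c * 7:(c + 1) * 7, :] = palette[k]
    return canvas

rng = np.random.default_rng(0)
palette = rng.random((10, 3))                                   # one colour per class
x_train = make_sample(np.zeros((28, 28)), label=3, palette=palette, p_shift=0.0, rng=rng)
x_ood   = make_sample(np.zeros((28, 28)), label=3, palette=palette, p_shift=0.9, rng=rng)
```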
## 4 BAlaNced averaGing (BANG)
Our previous results show that EBM can boost the OOD performance. An implicit requirement is that the scaling of the two models should be roughly the same. If the two models have different scalings, e.g., one model is much more confident than the other, the EBM improvement is weakened.
**Proposition 4** (Imbalanced scaling weakens WSE).: _Consider Example 1, Definitions 1-4, and Assumptions 1-2. Consider a WSE of two imbalanced models, \(\bar{f}=(\bar{\mathbf{w}},\bar{\Phi})\) and \(\tilde{f}_{\lambda}=(\lambda\tilde{\mathbf{w}},\lambda\tilde{\Phi})\), where \(\lambda\geq 1\). Specifically, \(f_{\text{wse}}(\mathbf{x})=0.25(\bar{\mathbf{w}}+\lambda\tilde{\mathbf{w}})^{\top}\mathbf{x}(\bar{\Phi}+\lambda\tilde{\Phi})\). We have_
\[\mathcal{A}_{ood}(f_{\text{wse}})|_{\lambda>\sqrt{5}}\ -\ \mathcal{A}_{ood}(f_{ \text{wse}})|_{\lambda=1}\ <\ -34/729p^{3}.\]
See Appendix F.3 for proofs and Appendix D.8 for an illustration of the over-confidence characterized by \(\lambda\). When \(\lambda=1\), indicating similar confidence levels between \(\bar{f}\) and \(\tilde{f}_{\lambda}\), the WSE is balanced. However, when \(\lambda>\sqrt{5}\) and \(\tilde{f}_{\lambda}\) is significantly more confident than \(\bar{f}\), \(f_{\text{wse}}\) becomes biased towards \(\tilde{f}_{\lambda}\), resulting in a performance drop of over \(34/729p^{3}\). Here we set \(\lambda=\sqrt{5}\) for illustration purposes; similar results can be obtained for other \(\lambda>1\). Unfortunately, we find that WiSE-FT, which is the WSE of the pre-trained model (PM) and the fine-tuned model (FM), suffers from this imbalanced confidence issue. Specifically, we compare the PM and FM in terms of their confidence and accuracy. The confidence is defined as the largest probability that a model assigns to a class (details in Appendix E.3). Figure 6 shows that the fine-tuned model is highly over-confident, especially on OOD datasets: e.g., on ImageNet-A the accuracy is only 0.37 while the average confidence is over 0.7. Such overconfidence magnifies the FM's incorrect predictions, degrading the OOD ensemble performance (details in Appendix E.6).
A direct fix to the issue of over-confidence is to tune the temperature of the softmax of the fine-tuned model [33]. However, this method cannot be directly applied to WiSE-FT, since WiSE-FT ensembles model weights instead of outputs (see Appendix E.6 for some adaptations of [33], which also do not yield satisfactory results). Moreover, temperature scaling tuned on the ID dataset [33] fails to calibrate the fine-tuned model on OOD datasets, where over-confidence is more severe (Appendix E.5). We propose
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \(p\) & 0.70 & 0.75 & 0.80 & 0.85 & 0.90 \\ \hline model 1 & 71.05\(\pm\)1.04 & 60.07\(\pm\)1.04 & 48.57\(\pm\)0.92 & 36.93\(\pm\)0.70 & 26.01\(\pm\)0.45 \\ model 2 & 71.77\(\pm\)0.94 & 60.75\(\pm\)0.91 & 49.26\(\pm\)0.83 & 37.74\(\pm\)0.66 & 26.63\(\pm\)0.42 \\ \hline model ensemble & **78.64\(\pm\)0.73** & **67.61\(\pm\)0.80** & **55.25\(\pm\)0.75** & **42.34\(\pm\)0.64** & **29.28\(\pm\)0.40** \\ \hline \hline \end{tabular}
\end{table}
Table 1: OOD performance of the (output-space) model ensemble on MultiColorMNIST. The spurious correlation is \(1\) and \(1-p\) in the training and testing set, respectively. A larger \(p\) indicates a larger distributional shift.
BAlaNced averaGing (BANG), which adopts label smoothing and Mixup during fine-tuning to prevent overconfidence. 1) Label smoothing: instead of setting 1 as the target for the true class and 0 for all other classes, the true class is assigned a value below one (e.g., 0.8) and the remaining mass, the smoothing parameter (e.g., 0.2), is distributed evenly among the other classes [37]. 2) Mixup [62] generates new samples during fine-tuning by linearly mixing pairs of training data and their labels.
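Both ingredients are standard and can be sketched in a few lines; the smoothing value of 0.2 and the Beta(0.2, 0.2) mixing distribution below are illustrative defaults, not necessarily the settings used for BANG.

```python
import numpy as np

def smooth_labels(labels, n_classes, smoothing=0.2):
    """One-hot targets with label smoothing: the true class gets 1 - smoothing,
    the remaining mass is spread evenly over the other classes."""
    one_hot = np.eye(n_classes)[labels]
    return one_hot * (1.0 - smoothing) + smoothing / (n_classes - 1) * (1.0 - one_hot)

def mixup(x, y_soft, alpha=0.2, rng=None):
    """Mixup: convex combinations of pairs of inputs and their (soft) labels."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    return lam * x + (1 - lam) * x[perm], lam * y_soft + (1 - lam) * y_soft[perm]

# Example: a batch of 4 images with 10 classes.
x = np.random.rand(4, 3, 224, 224)
y = smooth_labels(np.array([0, 3, 7, 9]), n_classes=10)
x_mix, y_mix = mixup(x, y)
```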
We conduct experiments with CLIP ViT-B/16 [45]. We apply Mixup or Label Smoothing while fine-tuning the pre-trained CLIP on ImageNet (IN), and test OOD performance on ImageNetV2 (IN-V2), ImageNet-R (IN-R), ImageNet-A (IN-A), ImageNet-Sketch (IN-S) and ObjectNet. Following WiSE-FT [60], BANG averages the pre-trained CLIP model and the model fine-tuned with LS and Mixup (more details in Appendix E.4). The results in Table 2 show that BANG effectively improves the performance over WiSE-FT. Specifically, BANG(LS+Mixup), where both LS and Mixup are adopted, achieves 1.9% higher average OOD accuracy than WiSE-FT. Further experimental results in the appendix show that Mixup and Label Smoothing effectively alleviate the over-confidence of the fine-tuned model on both ID and OOD datasets. In Appendix E.5, we include a comparison with [33], and also show that the performance gain of BANG cannot be explained by other data augmentation methods.
Since Mixup and LS also improve the performance of the fine-tuned model itself, a curious reader might wonder whether the improvement of BANG comes from better calibration or simply from the stronger fine-tuned model. We conduct further investigation in Appendix E.6 to confirm the contribution of better calibration: (1) dividing the weights of the vanilla fine-tuned model by various scalars significantly enhances the performance of weight averaging, nearly matching the performance of BANG; (2) BANG corrects substantially more samples that are misclassified by the fine-tuned model.
## 5 Conclusion and Discussion
In this paper, we investigate why ensemble-based models have superior OOD performance. Our empirical analysis of WiSE-FT, coupled with theoretical insights, reveals that such ensembles learn more diverse features, thereby reducing the probability of encountering OOD distributions that fail the model. Further, we improve WiSE-FT by alleviating the fine-tuned model's overconfidence problem, which is identified via theoretical analysis and empirical observations. A limitation of the present work is the lack of theoretical guarantees for general nonlinear feature extractors, which is also an open problem in OOD generalization.
|
2301.13674 | Improved distinct bone segmentation in upper-body CT through
multi-resolution networks | Purpose: Automated distinct bone segmentation from CT scans is widely used in
planning and navigation workflows. U-Net variants are known to provide
excellent results in supervised semantic segmentation. However, in distinct
bone segmentation from upper body CTs a large field of view and a
computationally taxing 3D architecture are required. This leads to
low-resolution results lacking detail or localisation errors due to missing
spatial context when using high-resolution inputs.
Methods: We propose to solve this problem by using end-to-end trainable
segmentation networks that combine several 3D U-Nets working at different
resolutions. Our approach, which extends and generalizes HookNet and MRN,
captures spatial information at a lower resolution and skips the encoded
information to the target network, which operates on smaller high-resolution
inputs. We evaluated our proposed architecture against single resolution
networks and performed an ablation study on information concatenation and the
number of context networks.
Results: Our proposed best network achieves a median DSC of 0.86 taken over
all 125 segmented bone classes and reduces the confusion among similar-looking
bones in different locations. These results outperform our previously published
3D U-Net baseline results on the task and distinct-bone segmentation results
reported by other groups.
Conclusion: The presented multi-resolution 3D U-Nets address current
shortcomings in bone segmentation from upper-body CT scans by allowing for
capturing a larger field of view while avoiding the cubic growth of the input
pixels and intermediate computations that quickly outgrow the computational
capacities in 3D. The approach thus improves the accuracy and efficiency of
distinct bone segmentation from upper-body CT. | Eva Schnider, Julia Wolleb, Antal Huck, Mireille Toranelli, Georg Rauter, Magdalena Müller-Gerbl, Philippe C. Cattin | 2023-01-31T14:46:16Z | http://arxiv.org/abs/2301.13674v1 | # Improved distinct bone segmentation in upper-body CT through multi-resolution networks.
###### Abstract
**Purpose:** Automated distinct bone segmentation from CT scans is widely used in planning and navigation workflows. U-Net variants are known to provide excellent results in supervised semantic segmentation. However, in distinct bone segmentation from upper body CTs a large field of view and a computationally taxing 3D architecture are required. This leads to low-resolution results lacking detail or localisation errors due to missing spatial context when using high-resolution inputs.
**Methods:** We propose to solve this problem by using end-to-end trainable segmentation networks that combine several 3D U-Nets working at different resolutions. Our approach, which extends and generalizes HookNet and MRN, captures spatial information at a lower resolution and skips the encoded information to the target network, which operates on smaller high-resolution inputs. We evaluated our proposed architecture against single resolution networks and performed an ablation study on information concatenation and the number of context networks.
**Results:** Our proposed best network achieves a median DSC of 0.86 taken over all 125 segmented bone classes and reduces the confusion among similar-looking bones in different locations. These results outperform our previously published 3D U-Net baseline results on the task and distinct-bone segmentation results reported by other groups.
**Conclusion:** The presented multi-resolution 3D U-Nets address current shortcomings in bone segmentation from upper-body CT scans by allowing for capturing a larger field of view while avoiding the cubic growth of the input pixels and intermediate computations that quickly outgrow the computational capacities in 3D. The approach thus improves the accuracy and efficiency of distinct bone segmentation from upper-body CT.
Keywords:Multi-resolution, Distinct Bone Segmentation, Deep Learning
## 1 Introduction
Segmentation of bones is used in bone disease diagnosis, in image-based assessment of fracture risks [1], bone-density [2], for planning and navigation of interventions [3], and for post-treatment assessment.
Bone tissue segmentation from CT has been shown to work well using slice-wise 2D CNN-based segmentation algorithms [4; 5; 6]. The tasks and solutions become more varied when moving from bone-tissue segmentation to distinct bone segmentation (our task) where we distinguish individual bones. Vertebrae segmentation has gained much attention, with many of the algorithms using multi-stage approaches and leveraging the sequential structure of the spine [7]. Rib segmentation has been tackled by [8], who use a point cloud approach targeted at leveraging their dataset's spatial sparsity. Carpal bone segmentation is performed from X-rays of hands that were placed on a flat surface [9].
Simultaneous segmentation of distinct bones of multiple groups is still relatively little studied. [10] segment 62 different bones from upper-body CT using an atlas-based approach and kinematic joint models. [11] use a multi-stage approach with a localisation network, shape models, and a segmentation network to segment 49 distinct bones of the upper body. Segmentation of bones of different groups in one shot can be used as a starting point for more fine-grained atlas segmentations [10], or as a guide for a follow-up inner organ segmentation [12]. Segmenting multiple structures at once can also be beneficial for segmentation accuracy: [13] found their network trained on multiple bone classes to outperform the corresponding one-class networks.
The region of interest in upper-body or full-body CT scans is typically larger than the possible input sizes of 3D convolutional neural networks (CNNs). As a result, the input needs to be sampled as patches, restricting the input field of view to the patch size. This problem is exacerbated by the development of CT scanners that produce ever more highly resolved images. While a higher resolution allows for capturing more fine-grained details, it covers smaller body areas within a fixed-size input patch.
In order to extend the field of view, larger input patches can be sampled. Using bigger patches, i.e. more input pixels, does not increase the number of trainable parameters in a fully convolutional network, but it does increase the number of necessary intermediate computations. Doubling the patch size in
all three dimensions leads to at least eight times more forward and backward computations, which are taxing for the generally scarce GPU memory. Countermeasures fall into two categories. A) Keeping the resolution and input pixel size high but reducing the computational load elsewhere. Such measures include reducing the batch size (not to be confused with the patch size), using a simpler model, or reducing the output size; all of them potentially hamper training and inference. B) Keeping a large field of view by feeding down-sampled inputs at a small patch size. This approach allows for a wider field of view at a constant input size while losing detail information.
To decide between the two approaches presented above, the requirements of the task at hand need to be considered. A suitable network for our task of complete distinct bone segmentation from upper-body CT scans (see Fig. 1) should provide the following: its field of view should be sufficiently big to distinguish similar bones at different body locations, e.g. the left from the right humerus or the fourth from the eighth rib, while keeping the computational burden feasible.
The merits of high-resolution inputs - accurate details - and low-resolution inputs - a larger field of view - can be combined in many ways. Cascaded U-Nets consist of two or more individual U-Nets that are trained consecutively. A first model is trained on downsampled input. Its one-hot encoded segmentation results are then upsampled, potentially cropped and used as additional input channels for the following model at higher resolution [14]. These approaches all have the downside of requiring the training and sequential inference of multiple models. Instead of this, we focus on end-to-end trainable models here.
End-to-end trained multi-resolution architectures have been proposed in histopathology whole-slide segmentation. For example, MRN [15] combines a
Figure 1: Task overview: We segment 125 distinct bones from upper-body CT scans using SneakyNet, a multi-encoder-decoder network which incorporates inputs at various resolutions. The example here features one context network, but multiple are possible.
2D target U-Net and one context encoder with drop-skip-connections crossing over at every level. MRN does not contain a context decoder or context loss and is studied on a binary segmentation problem. Another such architecture is HookNet [16], which contains both a target and a context 2D U-Net and two individual losses, but only uses skip connections in the bottleneck layer.
The purpose of our work is to address common segmentation errors that originate from a lack of global context while using 3D U-Nets for distinct bone segmentation. We propose to use a multi-resolution approach and present SneakyNet, an expansion and generalization of the MRN and HookNet architectures. We compare the segmentation accuracy, complexity, and run-time of baseline 3D U-Nets with the SneakyNet. We ablate the model components and find that the use of our generalized architecture improves the results over the HookNet and MRN variants. We will use our bone segmentation in conjunction with 3D rendering of anatomical images in augmented- and virtual reality applications, where segmentations can be used on top or in conjunction with existing transfer functions [17; 18].
## 2 Materials and Methods
To assess the performance of SneakyNet on upper-body distinct bone segmentation, we train it on our in-house upper-body CT dataset, which has been described in [19]. We perform ablation studies on the way context and target information are combined and on the optimal number of context networks.
### SneakyNet Architecture
In general, SneakyNet consists of one target network and one or more context networks. The target network operates on high-resolution data and eventually produces the desired segmentation maps. The context networks operate
\begin{table}
\begin{tabular}{l r l r r r} \hline \hline Config & Target & Context & trainable & input & training time \\ & network & network(s) & param. & pixels & per iteration \\ & FOV & FOV & \(\cdot 10^{7}\) & \(\cdot 10^{4}\) & (s) \\ \hline A 3D U-Net & \(32^{3}\) & — & 5.8 & 3.3 & 0.44 \\ & \(64^{3}\) & — & & 26.2 & 0.57 \\
3D U-Net slim\({}^{*}\) & \(128^{3}\) & — & 1.5 & 209.7 & 4.24 \\ B HookNet & \(32^{3}\) & \(64^{3}\) & 3.7 & 6.6 & 0.41 \\ & \(64^{3}\) & \(128^{3}\) & & 52.4 & 0.72 \\ C MRN & \(32^{3}\) & \(64^{3}\) & 4.7 & 6.6 & 0.43 \\ & \(64^{3}\) & \(128^{3}\) & & 52.4 & 1.27 \\ D Sneakynet (ours) & \(32^{3}\) & \(64^{3}\) & 4.9 & 6.6 & 0.45 \\ & & \(64^{3}-128^{3}\) & 5.8 & 9.9 & 0.70 \\ & & \(64^{3}-128^{3}-256^{3}\) & 6.2 & 13.1 & 3.16 \\ & \(64^{3}\) & \(128^{3}\) & 4.9 & 52.4 & 1.28 \\ & & \(128^{3}-256^{3}\) & 5.8 & 78.6 & 3.11 \\ \hline \hline \end{tabular} \({}^{*}\) Operating the full 3D U-Net on patches of size \(128^{3}\) exceeds the available GPU memory.
\end{table}
Table 1: Comparison of architectures with different field of view (FOV) of their target and context network(s).
on lower resolution inputs spanning a larger field of view. Information is propagated from the context networks to the target network using crop-skip connections presented in Section 2.1.1. We present a detailed visual overview of the architecture with one context network in Figure 1.
In our previous work [20], we explored the suitability of different 2D and 3D network architectures and parameter configurations for upper-body distinct bone segmentation. We found that there is little leeway in architectural choices due to the task's large required field of view and the many classes that are segmented in parallel. A lean 3D U-Net variant was found to work best [20]. We use this variant's architecture for our target and context U-Nets here. In our baseline computations, where we have only a target network and omit the context networks, we select the number of channels such that our variants and the baselines have approximately the same number of trainable parameters, to facilitate comparison. Inputs to the network are required to be multiples of \(2^{M-1}\), where \(M\) denotes the number of levels of the U-Net. We use the basic architecture with \(M=5\) and therefore need multiples of 16 pixels in every dimension as input.
For the target network we use inputs of size \((Sx,Sy,Sz)\) at full resolution. For each of the context networks we use that input plus its surrounding area, which together span a field of view of \(2^{\kappa}\cdot(Sx,Sy,Sz)\). We display the case of \(\kappa=1\) in Figure 1, but also use context networks with \(\kappa=2\) and \(\kappa=3\) in our ablation studies. The context network inputs are down-sampled to reduce their size to \((Sx,Sy,Sz)\). We perform the down-sampling using \((2^{\kappa}\times 2^{\kappa}\times 2^{\kappa})\) average-pooling with a stride of \(2^{\kappa}\). Both target and context network inputs eventually have a size of \((Sx,Sy,Sz)\), but at different resolutions and fields of view.
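As a concrete illustration, the context input can be produced by cropping the enlarged field of view and average-pooling it back to the target patch size; the following is a minimal numpy sketch with our own variable names, assuming an isotropic patch.

```python
import numpy as np

def make_context_input(volume, center, size, kappa):
    """Crop a (2^kappa * size)^3 region around `center` and average-pool it
    with kernel and stride 2^kappa, so it matches the target patch size."""
    k = 2 ** kappa
    half = (k * size) // 2
    sl = tuple(slice(c - half, c - half + k * size) for c in center)
    crop = volume[sl]                                  # (k*size)^3 field of view
    blocks = crop.reshape(size, k, size, k, size, k)   # group k x k x k blocks
    return blocks.mean(axis=(1, 3, 5))                 # -> (size, size, size)

vol = np.random.rand(128, 128, 128)
target = vol[48:80, 48:80, 48:80]                      # 32^3 high-resolution patch
context = make_context_input(vol, center=(64, 64, 64), size=32, kappa=1)
print(target.shape, context.shape)                     # (32, 32, 32) (32, 32, 32)
```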
#### Crop-skip connections
We use crop-skip connections to transfer information from the context to the target branch. We crop the encoder output at the desired level \(m\) such that
Figure 2: Detailed view of the architecture. Displayed are only two out of five levels of the U-Nets. Left: the context U-Net working on low-resolution data with a larger field of view. Right: The U-Net working with the central cropped high-resolution data. After all encoder convolutions of level \(m\), a cropped copy of the output is skipped to the target decoder at level \(m+1\). The decoder receives skip connections from its own encoder and the context network. The intermediate results of the decoder and both skip connections are concatenated along the channel axis before undergoing further convolutions.
only the centre cube of half the size per dimension remains. This centre cube is now spatially aligned to the input of the target branch. We concatenate the centre cube to the next lower level \(m+1\) of the target decoder to match the spatial size. We refer to the central cropping and subsequent concatenation into a lower level of the target branch as crop-skip-connection. A detailed schematic of the crop-skip connection is depicted in Figure 2.
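The crop-skip operation itself reduces to a centre crop followed by channel-wise concatenation into the next-lower target decoder level, as the following sketch illustrates; the array layout and names are ours, not the actual Keras implementation.

```python
import numpy as np

def crop_skip(context_feat, target_feat):
    """Centre-crop the context encoder output to half its spatial size and
    concatenate it to a target decoder feature map along the channel axis.

    context_feat: (D, H, W, C_ctx) context encoder output at level m
    target_feat:  (D/2, H/2, W/2, C_tgt) target decoder activations at level m+1
    """
    d, h, w, _ = context_feat.shape
    qd, qh, qw = d // 4, h // 4, w // 4
    centre = context_feat[qd:qd + d // 2, qh:qh + h // 2, qw:qw + w // 2, :]
    return np.concatenate([target_feat, centre], axis=-1)

ctx = np.random.rand(16, 16, 16, 8)   # context encoder output at level m
tgt = np.random.rand(8, 8, 8, 12)     # target decoder activations at level m+1
print(crop_skip(ctx, tgt).shape)      # (8, 8, 8, 20)
```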
We explore three network configurations which differ in their number of crop-skip connections and their use of a context loss, and compare it to a baseline U-Net. A visual comparison of the architectures is given in Figure 3 and the parameters are provided in Table 1.
* **Baseline:** 3D U-Net with the optimal configuration found for the task [20].
* **HookNet:** One context network with a single crop-skip connection is added to the target network. The crop-skip connection enters the target network at its bottleneck layer. This configuration is used in [16].
* **MRN:** Crop-skip connections connect the context encoder and the target decoder at every level. There is neither a context decoder nor a context loss function. This configuration was used in [15].
* **Proposed SneakyNet:** Crop-skip connections connect all levels of the context and target networks. The context network has a decoder with its own loss function.
### Training
Our dataset is split into 11 scans for training, 2 for validation and 3 for testing. We use 5-fold cross-validation, ensuring that every scan appears in precisely one of the cross-validation folds in the test set.
The loss is composed of an unweighted combination of the target network's loss and the losses of the \(K\) context networks. For both networks, we use the sum of the cross-entropy loss \(\mathcal{L}_{\text{X-Ent}}\) and Dice-Loss \(\mathcal{L}_{\text{DSC}}\)[21]. As in [20], we sum the Dice-Loss for every class separately and normalize by the number of classes. We optimized the network weights using the Adam optimizer with an initial learning rate of 0.001. We trained our networks for 100000 iterations until convergence was observed.
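A sketch of this objective, assuming softmax probabilities and one-hot ground truth flattened to (voxels, classes), is given below; it reflects our reading of the description rather than the exact training code.

```python
import numpy as np

def dice_plus_xent(probs, onehot, eps=1e-6):
    """Per-class Dice loss (averaged over classes) plus cross-entropy.

    probs, onehot: arrays of shape (voxels, classes)."""
    inter = (probs * onehot).sum(axis=0)
    dice = 1.0 - (2.0 * inter + eps) / (probs.sum(axis=0) + onehot.sum(axis=0) + eps)
    xent = -(onehot * np.log(probs + eps)).sum(axis=1).mean()
    return dice.mean() + xent

def total_loss(target_pred, target_gt, context_preds, context_gts):
    """Unweighted sum of the target loss and the K context losses."""
    loss = dice_plus_xent(target_pred, target_gt)
    for p, g in zip(context_preds, context_gts):
        loss += dice_plus_xent(p, g)
    return loss

# Minimal usage example with dummy predictions over 100 voxels and 5 classes:
p = np.full((100, 5), 0.2)
g = np.eye(5)[np.random.randint(0, 5, 100)]
print(total_loss(p, g, [p], [g]))
```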
Figure 3: Schematic of the four network configurations used in our ablation study. A shows a base U-Net, while B, C, D show different possibilities of how to insert information into the target network, see also Section 2.1.1 for a written description.
Our input images are padded by \((S-S_{\text{target}})/2\) all-around using edge value padding. The padding step ensures that we can sample high-resolution patch centres right to the image's border.
We implemented and trained our networks using Tensorflow Keras 2.5. All training and inference were conducted on NVidia Quadro RTX 6000 GPUs of 24 GB RAM size.
### Evaluation
We evaluate the performance of our models using the class-wise Dice Similarity Coefficient (DSC). To indicate the performance over all classes, we give the median and the 16% and 84% quantiles (\(1\sigma\)) over all classes \(c\). To avoid giving a distorted impression of the distribution, we exclude classes for which no true positives have been detected and therefore DSC\({}_{c}=0\). We report the percentage of included classes as 'non-zero DSC' in Table 2 and Table 3 to make up for the omission.
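The reporting scheme can be summarised compactly as follows (a sketch; the per-class DSC values would come from comparing predicted and reference label maps).

```python
import numpy as np

def summarize_dsc(per_class_dsc):
    """Median and 16/84 percentile spread over classes, excluding classes with
    DSC == 0 (no true positives detected); also report the fraction kept."""
    dsc = np.asarray(per_class_dsc)
    nonzero = dsc[dsc > 0]
    return {
        "median": np.median(nonzero),
        "q16": np.percentile(nonzero, 16),
        "q84": np.percentile(nonzero, 84),
        "non_zero_fraction": len(nonzero) / len(dsc),
    }

# Example with random stand-in scores for the 125 bone classes:
print(summarize_dsc(np.random.beta(8, 2, size=125)))
```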
## 3 Results and Discussion
Our experiments show how automated distinct bone segmentation can be improved using a multi-resolution approach. We evaluate our results on multiple target resolutions with different numbers of context networks and field of view sizes and perform an ablation study to determine the most beneficial way to combine context and target network information.
We evaluated some of the most common errors when using a baseline segmentation method. We found that the missing context information leads to similar-looking bones in different body regions being mistaken for one another. In the confusion matrix presented in Figure 4, we observe that when using
Figure 4: Confusion matrix among the long bones of the arms and legs. With our method, there is considerably less confusion between the left and right sides of the body and between arm and leg bones.
a baseline 3D U-Net, humerus pixels were predicted as femur, and the left and right humerus were confused with one another (right confusion matrix). When using context information, these errors are almost entirely eliminated (left confusion matrix).
We performed an ablation study to see how different strategies of combining the context and target information within the network perform. In Table 2 we present the quantitative results. For both target patch sizes, 32 and 64, all strategies (B-D) improve upon the baseline 3D U-Net (A). The observed effect is substantially bigger when using the smaller target patch size of \(32^{3}\), where the median DSC rises from 0.64 to 0.75. On the bigger target patches, the median DSC still increases from 0.83 to 0.86.
The combination of skip connections at every level and a context loss function in our proposed architecture increases the accuracy further, as compared to the HookNet [16] and the MRN [15].
In Table 3 we ablate the influence of different numbers of context networks and input patch sizes. Qualitative results are depicted in Figure 5. Comparing the baseline 3D U-Nets with the SneakyNet results, we see that adding context networks to very small target patches of \(32^{3}\) pixels almost reaches the performance of our baseline networks operating on \(64^{3}\) patches. Going up, the SneakyNet operating on patch size \(64^{3}\) even outperforms the baseline 3D U-Net with patch size \(128^{3}\). We recall that we had to reduce the number of channels in the baseline \(128^{3}\) network due to memory constraints. Our ablation results suggest that adding context networks is most valuable when memory limits are reached. When considering the different FOVs of the context networks, we observe the best results when including context FOVs of \(128^{3}\). This covers roughly half of the L-R and A-P dimensions of the scans and seems to contain the necessary information to correctly locate bones; see e.g. the purple lumbar vertebra in Figure 5, which is correctly located in cases where the context FOV reaches \(128^{3}\).
We provide a comparison to other published results on distinct bone segmentation in Table 4. While a direct comparison is difficult due to the different datasets, our results compare favourably both to the convolutional neural network and shape model approach of [11] and to the hierarchical atlas segmentation of [10].
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline Target patch size & \multicolumn{4}{c}{32} & \multicolumn{4}{c}{64} \\ \cline{2-10} & & & \multicolumn{2}{c}{non-zero} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{non-zero} \\ DSC & Median & \(\sigma\) & \(-\sigma\) & DSC & Median & \(\sigma\) & \(-\sigma\) & DSC \\ \hline A 3D U-Net & 0.64 & +0.19 & -0.34 & 94.5\% & 0.83 & +0.09 & -0.27 & 94.5\% \\ B HookNet & 0.66 & +0.17 & -0.34 & 94.1\% & 0.85 & +0.09 & -0.32 & 95.3\% \\ C MRN & 0.69 & +0.16 & -0.37 & 95.1\% & 0.84 & +0.09 & -0.31 & 96.0\% \\ D SneakyNet (ours) & 0.75 & +0.14 & -0.33 & 95.3\% & 0.86 & +0.08 & -0.28 & 96.7\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation results in DSC for different model configurations.
## 4 Conclusion
This work presents improvements in distinct bone segmentation from upper-body CT. The proposed multi-resolution networks use additional inputs at a lower resolution but with a larger field of view to provide the necessary context information to assign the proper bone classes. We compared three different ways of combining the context and target information and evaluated the results using zero to three context networks. Using context networks improves the segmentation results for all target patch sizes.
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline Config & target FOV & context FOV(s) & \multicolumn{4}{c}{DSC} \\ \cline{3-8} & per dim. & per dim. & Median & \(\sigma\) & \(-\sigma\) & non-zero DSC \\ \hline A & 32 & — & 0.64 & +0.19 & -0.34 & 94.5\% \\ D & 32 & 64 & 0.75 & +0.14 & -0.33 & 95.3\% \\ D & 32 & 64-128 & **0.79** & +0.11 & -0.33 & 94.4\% \\ D & 32 & 64-128-256 & 0.79 & +0.11 & -0.33 & 95.9\% \\ \hline A & 64 & — & 0.83 & +0.09 & -0.27 & 95.6\% \\ D & 64 & 128 & **0.86** & +0.08 & -0.28 & 96.7\% \\ D & 64 & 128-256 & 0.85 & +0.09 & -0.28 & 96.1\% \\ \hline A & 128 & — & 0.82 & +0.11 & -0.30 & 94.3\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation results for the number of context networks in the SneakyNet architecture (D). Zero context networks corresponds to the baseline 3D U-Nets (A) with different input patch sizes.
Figure 5: Qualitative prediction results from our ablation study comparing different numbers of context networks at various resolutions. The first four results from the left were obtained using a target patch size of 32px per dimension (turquoise), and the remaining three scans with target patch sizes of 64px per dimension (light blue). The grey areas indicate the field of view of the context networks. The sizes of the squares are proportional to the prediction sizes.
Acknowledgments. This work was financially supported by the Werner Siemens Foundation through the MIRACLE project. We thank Azhar Zam for valuable discussions that helped shape this work.
## Declarations
* Funding: This work was financially supported by the Werner Siemens Foundation through the MIRACLE project.
* Competing interests: None of the authors have competing interests to declare that are relevant to the content of this article.
* Ethics approval: This research study was conducted retrospectively from CT data routinely obtained from body donors. No ethical approval is required.
* Consent to participate: Informed consent was obtained from all individual body donors included in the study.
* Consent for publication: Body donors signed informed consent regarding publications using their data.
* Availability of data and materials: The upper-body CT dataset is not publicly available. An anonymized version can be shared on request.
* Code availability: Our code is shared at: [https://gitlab.com/cian.unibas.ch/sneakynet](https://gitlab.com/cian.unibas.ch/sneakynet)
|
2305.00499 | Numerical tests of the large charge expansion | We perform Monte-Carlo measurements of two and three point functions of
charged operators in the critical O(2) model in 3 dimensions. Our results are
compatible with the predictions of the large charge superfluid effective field
theory. To obtain reliable measurements for large values of the charge, we
improved the Worm algorithm and devised a measurement scheme which mitigates
the uncertainties due to lattice and finite size effects. | Gabriel Cuomo, J. M. Viana Parente Lopes, José Matos, Júlio Oliveira, Joao Penedones | 2023-04-30T15:00:53Z | http://arxiv.org/abs/2305.00499v4 | # Numerical tests of the large charge expansion
###### Abstract
We perform Monte-Carlo measurements of two and three point functions of charged operators in the critical \(O(2)\) model in 3 dimensions. Our results are compatible with the predictions of the _large charge_ superfluid effective field theory. To obtain reliable measurements for large values of the charge, we improved the Worm algorithm and devised a measurement scheme which mitigates the uncertainties due to lattice and finite size effects.
Let us review the physical picture underlying the large charge expansion. Consider for concreteness a CFT invariant under an internal \(U(1)\) symmetry group in three spacetime dimensions. By the state-operator correspondence, an operator with a large \(U(1)\) charge is associated with a finite density state for the theory quantized on the cylinder. In [6] it was argued that, in the simplest case, the corresponding state is found in a superfluid phase. In this case, one can describe large charge states via the _universal_ effective field theory (EFT) description for the _hydrodynamic_ Goldstone mode of the superfluid [7].2 This allows for the systematic calculation of correlation functions of charged operators, where the derivative expansion coincides with an expansion in inverse powers of the charge.
Footnote 2: Here we are focusing on _generic_ theories, where no additional symmetries are present and thus all other (_radial_) modes are gapped at finite density; a well-studied exception is given by supersymmetric CFTs with moduli spaces [10; 11; 12; 13], where there are additional light modes in the spectrum.
The superfluid EFT is believed to describe the large charge sector of a vast class of theories. Nonetheless, sometimes other phases are possible, e.g. Fermi spheres in fermionic theories [14; 15] or extremal Reissner-Nordstorm black holes in holographic models. Furthermore, in contrast with the large spin expansion, even in specific theories there is no rigorous bootstrap proof of the validity of the superfluid description.3 It is therefore important to check its predictions in theories where explicit calculations are possible. The main purpose of this paper is to provide evidence for the validity of the superfluid EFT in a specific strongly coupled CFT, namely the \(O(2)\) model in three dimensions, via Monte-Carlo calculations.4
Footnote 3: See however [16] for interesting progress in this direction.
Footnote 4: Notice that checking the superfluid EFT provides also an indirect test for the validity of the state-operator correspondence in the \(O(2)\) model. Recently, the state-operator map was directly tested numerically for the Ising model in [17].
### Background and summary
Let us begin discussing some of the predictions of the large charge expansion, with a particular focus on the features which are specific to the superfluid EFT. The scaling dimension \(D(Q)\) of the operator \(\mathcal{O}_{Q}\) with lowest dimension at fixed charge \(Q\) is given by [6; 7]
\[D\left(Q\right)=c_{3/2}Q^{3/2}+c_{\frac{1}{2}}Q^{1/2}+c_{0}+O\left(Q^{-1/2} \right). \tag{1}\]
The coefficients \(c_{3/2}\) and \(c_{1/2}\) in (1) are model-dependent Wilson coefficients, while \(c_{0}\simeq-0.0937\). The leading behaviour at large \(Q\) follows from dimensional analysis. The existence of an expansion in \(1/Q\) is less trivial, but it is not specific to the superfluid EFT only; for instance, the result for a Reissner-Nordstrom black hole admits a similar expansion (see e.g. [18]). The \(Q^{0}\) contribution \(c_{0}\simeq-0.0937\) is associated with the Casimir energy of the Goldstone and it is thus a specific prediction of the superfluid phase [19].
The superfluid EFT also predicts other observables in terms of the **same** Wilson coefficients. For instance, it predicts that the primary operator with the next-to-lowest dimension with charge \(Q\) has spin 2 and scaling dimension \(D(Q)+\sqrt{3}\)[6]. Importantly for this work, the EFT also predicts OPE coefficients of the operators \(\mathcal{O}_{Q}\)[7; 20; 21; 22], defined
as
\[\lambda_{Q_{1},Q_{2},-Q_{1}-Q_{2}}=\lim_{|x|\to\infty}|x|^{D(Q_{1}+Q_{2})}\langle \mathcal{O}_{Q_{1}}(0)\mathcal{O}_{Q_{2}}(1)\mathcal{O}_{-Q_{1}-Q_{2}}(x) \rangle\,. \tag{2}\]
The structure of the prediction depends on whether one considers three large charge operators or two large charge operators and one with a small charge. We refer to these predictions, respectively, as OPE in Regime I and Regime II. The results are
* **Regime I:** the prediction for the OPE of three large charge operators reads [21] \[\lambda_{Q_{1},Q_{2},-Q_{1}-Q_{2}}=\exp\left[\frac{c_{3\!/_{2}}}{2\sqrt{\pi}}Q _{1}^{3/2}f\left(y\right)+O\left(\min\left\{Q_{1},Q_{2}\right\}^{1/2}\right) \right],\quad y=\sqrt{Q_{2}/Q_{1}},\] (3) where \(Q_{1},Q_{2}\gg 1\), and \(f\left(y\right)\) is given in terms of the solution of a non-linear PDE. We shall only need its value at \(y=1\), \(f\left(1\right)\simeq 0.996\). Notice that this prediction depends only on \(c_{3\!/_{2}}\) to leading order in the derivative expansion.
* **Regime II:** the prediction for the OPE of two large charge operators and a small charge operator is [7; 20; 22] \[\lambda_{Q_{1},Q_{2},-Q_{1}-Q_{2}}=C\left(Q_{2}\right)Q_{1}^{D(Q_{2})/2}\left[ 1+0.46c_{3\!/_{2}}\times\frac{Q_{2}^{2}}{\sqrt{Q_{1}}}+O\left(1/Q_{1}\right) \right].\] (4) where \(Q_{1}\gg 1\) and \(Q_{2}=O(1)\). The coefficient \(C(Q_{2})\) is a novel Wilson coefficient, associated with the operator matching for \(\mathcal{O}_{Q_{2}}\) in terms of the superfluid Goldstone [7]. As in (1), the leading scaling with \(Q_{1}\) can be inferred from dimensional analysis [16]. The subleading correction depends on \(c_{3\!/_{2}}\) and it is a specific prediction of the superfluid EFT. The numerical coefficient multiplying \(c_{3\!/_{2}}\) does not admit a simple analytic expression. It was computed in [20] from the (small) shift of the superfluid saddle-point (equivalent to a tadpole diagram) due to the charge \(Q_{2}\) sourced by the operator insertion.5 Footnote 5: The result for this OPE reported in [22] differs by a factor of 2 because of a typo; we thank Nicola Dondi for checking the result.
We now discuss former tests of the large charge expansion in CFTs. The validity of the EFT has been unambiguously demonstrated in several perturbative theories, see e.g. [13; 23; 24; 25]; particularly relevant for us are the results for large charge operators in the \(O(N)\) model in the \(\varepsilon\)-expansion [26; 27] and at large \(N\)[28; 29]
It is of course harder to study directly strongly coupled CFTs. [30] initiated the Monte-Carlo study of the large charge sector of the \(O(2)\) model computing the scaling dimension \(D(Q)\) for \(Q=1,2,\ldots,12\). To perform the calculations the authors applied the worm algorithm [31] to the worldline formulation of the classical \(O(2)\) sigma-model [32] (see appendix A for details). Remarkably, the numerical results agree with the theoretical prediction (1) up to \(Q=O(1)\), providing a determination of the coefficients \(c_{3\!/_{2}}\) and \(c_{\frac{1}{2}}\) from the fit (with \(c_{0}\) assumed as input). Similar calculations have been performed in the \(O(N)\) models for \(N=4\)[33; 34] and \(N=6,8\)[35].
The results of [30] provide strong evidence for the existence of a \(1/Q\) expansion for \(D(Q)\). This is a very nontrivial result, but, as commented earlier, it is not necessarily specific to the superfluid EFT (even if it is admittedly hard to think of alternative descriptions for the large charge sector of the \(O(2)\) model). The main goal of this work is to test specifically the superfluid EFT by studying OPE coefficients of charged operators. Below we give a brief summary of our results.
First, in sec. 2 we compute the scaling dimension \(D(Q)\) for charges up to \(Q=19\), thus extending the pre-existing results for \(Q\leq 12\)[30]. The results are plotted in fig. 2. Our measurements are compatible with those of [30] and provide improved estimates for the values of the Wilson coefficients \(c_{\nicefrac{{3}}{{2}}}\) and \(c_{\frac{1}{2}}\) in the \(O(2)\) model, cfr. eq. (4). Unfortunately, we are not able to reach the precision needed to obtain a reliable estimate for the coefficient \(c_{0}\) in eq. (1); our results are nonetheless compatible with the theoretically predicted value.
In order to test the superfluid EFT, in sec 3 we study the OPE coefficients in eq. (3) and (4). Notice that extracting three-point functions from Monte-Carlo simulations is significantly more involved than computing two-point functions. In Regime I we computed the OPE coefficient for \(Q_{1}=Q_{2}=1,2,3,4\), while in Regime II we obtained results for \(Q_{1}=1,2,3,4,5\) with \(Q_{2}=1\) (fixed). The results are shown in fig. 8. Despite the relative smallness of the charges we find good agreement between the numerical results and the EFT predictions, in both regimes. In particular, from the extrapolation of the result in Regime I to larger values of the charges we extract the coefficient \(c_{\nicefrac{{3}}{{2}}}\), finding remarkable agreement with the value extracted from the measurement of the scaling dimension. The comparison is shown in fig. 9. From the results in Regime II we measure the value of the coefficient \(C(1)\) in eq. (4), see fig. 10. The estimate for \(c_{\nicefrac{{3}}{{2}}}\) extracted from Regime II is encouragingly compatible with the one obtained from \(D(Q)\) and the OPE coefficient in Regime I, but uncertainties are too large for our analysis to be conclusive; see fig. 11.
Overall our results provide encouraging evidence for the validity of the superfluid EFT in the large charge sector of the \(O(2)\) model, but additional data would be helpful to unambiguously confirm the EFT description. In sec. 4 we further speculate on the implications of our findings and comment on possible future directions.
To compute the correlation functions numerically we used the worm algorithm. We introduced two technical improvements with respect to the strategy of [30]. First, we introduced the continuous time update step, which reduces the computational time; details are given in appendix A. Additionally, we devised an improved procedure to take the continuum limit. To this aim, we carefully analysed lattice effects, combining numerical experiments and conformal perturbation theory; some details are given in appendix C.
## 2 Conformal dimension of lightest charged scalar operator
Measuring 2pt functions of operators with large conformal dimensions is challenging. On the one hand, the 2pt function decays quickly when the distance between the operators increases. On the other hand, measurements at short distances are contaminated by large lattice effects.
To make progress, [30] introduced a method that does not require sampling 2pt functions directly. They measure the difference between the conformal dimensions of operators with consecutive charges, \(\Delta(Q)\equiv D(Q+1)-D(Q)\), which scales as \(Q^{1/2}\) instead of \(Q^{3/2}\). This is achieved by rewriting the 2pt function \(C_{Q}\left(x\right)\equiv\left\langle\mathcal{O}_{Q}(0)\mathcal{O}_{-Q}(x)\right\rangle\) as a product of ratios
\[C_{Q}\left(x\right)=\prod_{q=1}^{Q}R_{q}\left(x\right)\,,\qquad R_{q}\left(x \right)=\frac{C_{q}\left(x\right)}{C_{q-1}\left(x\right)}\sim\frac{1}{|x|^{2 \Delta(q)}}\,,\qquad C_{0}(x)\equiv 1. \tag{1}\]
This is useful because, due to the worldline formulation of the \(O(2)\) model, \(R_{q}(x)\), can be sampled directly by computing the expectation value of operators with \(Q=1\) in a background charge distribution
\[R_{q}\left(x\right)=\left\langle e^{i\theta(0)}e^{-i\theta(x)}\right\rangle_{ \left(q-1\right)_{0}-\left(q-1\right)_{x}}, \tag{2}\]
where \(e^{i\theta}\) (\(e^{-i\theta}\)) represents a charge \(1\) (\(-1\)) operator in the nonlinear sigma model and the subscript indicates that the expectation value is computed in the presence of charge (\(q-1\)) at the origin \(0\) and charge \(-(q-1)\) at position \(x\). Check App. A.2 for details.
Having reconstructed the 2pt function, the next step is to extract \(\Delta(Q)\). The naive approach is to compute \(R_{q}(x)\) for different values of \(x\), take the log and fit the slope. Unfortunately, this cannot be done systematically due to both finite-size effects and lattice effects. Lattice effects are due to the discrete nature of the lattice. This introduces another distance scale, the lattice spacing \(a\), such that in the region where \(x/a\sim O(1)\), the discrete
Figure 1: \(\Delta(1)\), extracted with eq.(3), for \(\alpha=2\) and \(L\in[16,32]\). Notice that the region where there are significant deviations from a constant value, on the left of the vertical line, gets smaller as \(L\) increases. This is expected from lattice effects. We include the previous Monte-Carlo result with error bars at \(1\sigma\), taken from Tab.I in [30], as well as the bootstrap result [36].
nature of the lattice spoils the CFT predictions. We set \(a=1\) in the following unless specified otherwise.
In summary, there is an intermediate region, where \(x/a\gg 1\) and \(x/L\ll 1/2\), such that the continuum infinite size CFT predictions hold. As we show next, we are able to drop the second restriction through a choice of observable that eliminates finite-size effects.
Systematic errors coming from finite size effects are eliminated by computing the ratio between 2pt functions measured for different lattice sizes but at the same relative position
\[\frac{C_{Q,L}(x)}{C_{Q,\alpha L}(\alpha x)}=\alpha^{2D(Q)}\quad\text{or}\quad \frac{R_{Q,L}(x)}{R_{Q,\alpha L}(\alpha x)}=\alpha^{2\Delta(Q)}. \tag{3}\]
The right-hand side holds as long as lattice effects are negligible. Notice that these relations are independent of the position \(x\). Both ratios are insensitive to finite size effects since
Figure 2: Here we plot both the measurements of \(D(Q)/Q^{1/2}\) obtained for \(L=32\) (\(10^{7}\) Worm steps), as well as the results of the extrapolation to \(L=\infty\), obtained using \(\alpha=2\) in eq. (3). We test the leading behaviour of eq. (1) by demonstrating the linearity of \(D(Q)/Q^{1/2}\). We reproduce the previous results of [30]. As explained at the end of the section, we performed best-fits of the coefficients \(c_{\nicefrac{{3}}{{2}}}\) and \(c_{\frac{1}{2}}\) in eq. (1), restricting to charges \(Q\geq Q_{min}\) for different choices of \(Q_{min}\). The blue dashed curve is obtained using the coefficients extracted from the average of all the fits with \(Q_{min}=4,\ldots,10\). The orange dashed curve is the best fit obtained using only charges \(Q\geq 8\). Remarkably, the two lines are almost indistinguishable and almost overlap with all the data points. In the table below we list the data used in the fits presented in this section. Check app. B for a detailed discussion.
these are parameterized by the relative position \(x/L\). Deviations from a constant value (as a function of \(x\)) are a proxy for lattice effects.
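In practice, each position \(x\) and each pair of lattice sizes \(L\), \(\alpha L\) yields an independent estimate of \(\Delta(Q)\) through eq. (3); the sketch below illustrates the estimator on synthetic ratios (the noise level and the value used for the synthetic exponent are placeholders, not our data).

```python
import numpy as np

def delta_from_ratio(R_L, R_aL, alpha=2.0):
    """Extract Delta(Q) from ratios of 2pt functions measured at lattice sizes
    L and alpha*L, at the same relative position x/L:
        R_{Q,L}(x) / R_{Q,alpha L}(alpha x) = alpha^(2 Delta(Q))."""
    return np.log(np.asarray(R_L) / np.asarray(R_aL)) / (2.0 * np.log(alpha))

# Synthetic example: ratios behaving as x^(-2*Delta) with Delta = 0.519 and 5% noise.
x = np.arange(8, 16)
delta_true = 0.519
R_L = x ** (-2 * delta_true) * (1 + 0.05 * np.random.randn(x.size))
R_aL = (2 * x) ** (-2 * delta_true) * (1 + 0.05 * np.random.randn(x.size))
est = delta_from_ratio(R_L, R_aL)
print(est.mean(), est.std())   # one estimate per position; the spread gauges the error
```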
In fig. 1 we present the measurements of \(\Delta(1)\), obtained through eq. (3). For small values of \(x\) there are deviations as expected. These are the lattice effects. They disappear when \(x\approx 7\), identified by the vertical lines. In particular, we can also reliably measure the correlation function for distances \(x<L/2\), which we will use to optimize our measurements. In the constant region, to the right of the vertical lines, we have an independent estimate of \(\Delta(1)\) for each position. From these, we estimate the error bars on the final result.
The generalization of this analysis to different values of \(Q\) provides reliable measurements of \(\Delta(Q)\), which are not susceptible to finite-size effects and do not require fits. Similarly, we also obtain accurate estimates of the errors introduced by lattice effects. The measurements of \(\Delta(Q)\) are independent of the lattice size. We explicitly checked this for larger charges. The results for \(L=32\) and \(L=64\) match within the statistical uncertainties, but \(L=32\) has a smaller error6 and we focused on this system size.
Footnote 6: This is because the computational time required to perform a certain number of _worm steps_ increases with the lattice size; a smaller lattice size, therefore, allows to obtain measurements with higher statistical significance.
We measured \(D(Q)\) up to \(Q=19\), see fig. 2. As remarked in the introduction, this represents a considerable improvement with respect to the existing results, which stopped at \(Q=12\)[30]. To obtain results for such high values of the charge we used a continuous-time update step (see App. A). Additionally, we sampled the ratio in eq. (3) for relatively small values of the distance \(x\),7 while [30] performed all measurements for \(x\sim L/2\). Indeed, as explained earlier, lattice effects are negligible already for \(x\gtrsim 7\), with the most precise measurements obtained for \(7\lesssim x\ll L/2\).8
Footnote 7: We control for lattice effects by checking the dependence of the different estimates of \(D(Q)\) on the position.
Footnote 8: A coarse estimate for the precision of a measurement is \(1/\sqrt{N}\), where \(N\) is the number of Worm steps. Then the relative error of \(R_{q}(x)\) should be of order \((1/\sqrt{N})/R_{q}(x)\sim x^{2\Delta(Q)}/\sqrt{N}\). Thus, for larger \(\Delta(Q)\), it is important that we can restrict to small \(x\), since \(N\) is limited by the available computational resources.
Let us now discuss the comparison with the theoretical prediction eq. (1). In doing so, we face some important questions: What is the theoretical error of the large charge expansion? Is this error also under control for small values of Q?
The large charge expansion is believed to be an asymptotic expansion [37]. The series in eq. (1) thus includes both _perturbative_ terms, suppressed by inverse powers of \(Q\),9 as well as _non-perturbative_ corrections, which are exponentially suppressed at large charge \(\sim e^{-\#\sqrt{Q}}\).10 Most importantly, as typical with asymptotic expansions, the series is not expected to converge to the exact result upon including infinitely many terms; rather, the large \(N\) analysis of [37] suggests that the large charge expansion of the scaling dimension \(D(Q)\) admits an optimal truncation after \(n\sim\sqrt{Q}\) terms.
Footnote 9: Sometimes these power corrections may be enhanced by logarithms of the charge, see [20].
Footnote 10: See [12, 38] for some progress in understanding similar corrections in supersymmetric theories.
While the subtleties associated with the asymptotic nature of the series are unimportant for very large charges, they make it challenging to estimate the accuracy of the truncated expansion for our data. In particular, we do not expect the theoretical error to be a simple function of \(Q\) when \(Q\sim O(1)\).
To (partially) account for these effects, we performed fits of \(c_{\nicefrac{{3}}{{2}}}\) and \(c_{\nicefrac{{1}}{{2}}}\) in eq. (1) using charges in the range \([Q_{\rm min},19]\) for different values of \(Q_{\rm min}\). We indeed expect that the theoretical error decreases with \(Q\). The results are shown in fig. 3. The results are independent of \(Q_{\rm min}\) for \(Q_{\rm min}\gtrsim 3\), suggesting that the truncated asymptotic expansion is trustworthy beyond this value of the charge. By averaging over the results for \(Q_{\rm min}\in[4,8]\) we obtain
\[c_{\nicefrac{{3}}{{2}}}=0.339(1)\qquad c_{\nicefrac{{1}}{{2}}}=0.25(1). \tag{4}\]
These values are compatible with the previous estimates of [30]\(c_{\nicefrac{{3}}{{2}}}=0.337(3)\) and \(c_{\nicefrac{{1}}{{2}}}=0.27(4)\).
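With \(c_{0}\) held fixed, the fits described above reduce to a two-parameter linear least-squares problem in \(c_{\nicefrac{{3}}{{2}}}\) and \(c_{\nicefrac{{1}}{{2}}}\); a sketch is given below. It ignores the measurement uncertainties for brevity, and the input \(D(Q)\) values are placeholders generated from the central values in eq. (4) rather than the measured ones.

```python
import numpy as np

def fit_wilson_coeffs(Q, D, c0=-0.0937, Q_min=4):
    """Least-squares fit of D(Q) = c32*Q^(3/2) + c12*Q^(1/2) + c0 for Q >= Q_min."""
    Q = np.asarray(Q, dtype=float)
    D = np.asarray(D, dtype=float)
    mask = Q >= Q_min
    A = np.column_stack([Q[mask] ** 1.5, Q[mask] ** 0.5])
    (c32, c12), *_ = np.linalg.lstsq(A, D[mask] - c0, rcond=None)
    return c32, c12

# Placeholder data generated from the central values quoted in eq. (4):
Q = np.arange(1, 20)
D = 0.339 * Q ** 1.5 + 0.25 * Q ** 0.5 - 0.0937
print(fit_wilson_coeffs(Q, D))   # recovers (0.339, 0.25) on noiseless input
```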
In fig. 4 we plot the difference between the numerical data and the best-fit curve obtained using the averaged parameters. In the same plot, we also show the deviation for the best-fit curve obtained setting \(Q_{\rm min}=8\). For small charges, there are systematic
Figure 4: Difference between the data and the best-fit curves with the averaged parameters in eq. (4) (red) and with \(Q_{\rm min}=8\) (blue). For large values of \(Q\) the uncertainties exceed the range displayed in the plot.
Figure 3: Best-fit values of \(c_{\nicefrac{{3}}{{2}}}\) (left) and \(c_{\nicefrac{{1}}{{2}}}\) (right) as a function of the minimum charge included in the fit. For larger values of \(Q_{\rm min}\) the error bars are larger than the plotted range. The coloured regions represent the \(1\sigma\) interval quoted on (4).
deviations, but they become smaller than the numerical uncertainties for \(Q\gtrsim 4\). This justifies a posteriori the choice of fitting in the range \(Q_{\rm min}\in[4,8]\). It is remarkable that also for small charges the relative deviations are rather small. For instance, the best-fit curve obtained for \(Q_{\rm min}=8\) extrapolated to \(Q=1\) agrees with the measurement within a 4% relative error.
Notice that we did not try to fit the value of \(c_{0}\), which we held fixed at its theoretical value \(c_{0}\simeq-0.0937\). Indeed, as we argue below, the data are compatible with this value, but the increasing numerical uncertainties with the charge make it impossible to obtain a reliable estimate for \(c_{0}\) or other subleading coefficients.
To justify the compatibility of \(c_{0}=-0.0937\) with the Monte Carlo data, it is convenient to define the following quantity
\[\mathcal{A}(Q)=\mathcal{N}^{-1}(Q)\left[2\frac{D(1+Q)}{\sqrt{1+Q}}-\frac{D(2+Q )}{\sqrt{2+Q}}-\frac{D(Q)}{\sqrt{Q}}\right]\,, \tag{5}\]
where
\[\mathcal{N}(Q)=\frac{2}{\sqrt{1+Q}}-\frac{1}{\sqrt{Q+2}}-\frac{1}{\sqrt{Q}}\,. \tag{6}\]
The quantity \(\mathcal{A}(Q)\) is so defined to be independent of \(c_{3/2}\) and \(c_{\frac{1}{2}}\) when evaluated using eq. (1). Its \(1/Q\) expansion reads11
Footnote 11: Here we restored two-subleading orders in the expansion of the scaling dimension \(D(Q)\):
\[D\left(Q\right)=c_{3/2}Q^{3/2}+c_{\frac{1}{2}}Q^{1/2}+c_{0}+c_{-\frac{1}{2}}Q ^{-1/2}+c_{-1}Q^{-1}+O\left(Q^{-3/2}\right)\,. \tag{7}\]
The \(Q^{-1}\) term in this expression represents the contribution to the Casimir energy from higher derivative corrections to the Goldstone dispersion relation, see [20] for details.
\[\mathcal{A}(Q)=c_{0}+\frac{8}{3}\frac{c_{-\frac{1}{2}}}{\sqrt{Q}}+5\frac{c_{- 1}}{Q}+O\left(Q^{-3/2}\right)\,. \tag{8}\]
A similar sum rule was introduced in [6].
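Given a table of measured scaling dimensions, eq. (5) is straightforward to evaluate; the sketch below also illustrates that \(\mathcal{A}(Q)\) returns \(c_{0}\) identically when \(D(Q)\) has exactly the form of eq. (1) (the input here is synthetic, built from the fitted central values).

```python
import numpy as np

def A_of_Q(Q, D_func):
    """Evaluate A(Q) of eq. (5) from a function Q -> D(Q)."""
    num = (2 * D_func(Q + 1) / np.sqrt(Q + 1)
           - D_func(Q + 2) / np.sqrt(Q + 2)
           - D_func(Q) / np.sqrt(Q))
    norm = 2 / np.sqrt(Q + 1) - 1 / np.sqrt(Q + 2) - 1 / np.sqrt(Q)
    return num / norm

# With D(Q) of the exact form of eq. (1), the Q^{3/2} and Q^{1/2} pieces cancel
# and A(Q) equals c0 for every Q:
D = lambda q: 0.339 * q ** 1.5 + 0.25 * q ** 0.5 - 0.0937
print(A_of_Q(np.arange(1.0, 18.0), D))   # all entries equal -0.0937
```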
The results for \(\mathcal{A}(Q)\) are presented in fig. 5. On the one hand, measurements for large charges do not achieve a sufficient level of precision to extract \(c_{0}\). On the other hand, the results for small charges, for which the precision is high, are subject to unknown theoretical errors.12
Footnote 12: Perhaps relatedly, the extrapolation of the large \(N\) analysis of [37] suggests that non-perturbative terms in the series (1) are of the same order as \(c_{0}\) for \(Q\sim O(1)\).
Our results are nonetheless compatible with the theoretical value.
## 3 OPE coefficients
Let us now discuss how to measure the OPE coefficients (2). We consider in particular the OPE coefficient in regime I (cfr. eq. (3)) for \(Q_{1}=Q_{2}=Q\), for which the EFT prediction reads
\[\lambda_{Q,Q,-2Q}=\exp\left\{\frac{f\left(1\right)}{2\sqrt{\pi}}\left[c_{3/2} Q^{3/2}+\alpha_{1/2}Q^{1/2}+\alpha_{0}+O\left(Q^{-1/2}\right)\right]\right\}\,. \tag{9}\]
here \(f(1)\simeq 1\) and we included the first two subleading terms, which are multiplied by two unknown coefficients \(\alpha_{\nicefrac{{1}}{{2}}}\) and \(\alpha_{0}\), for future reference.13 For the OPE in regime II, we take \(Q_{1}=Q\) and \(Q_{2}=1\). The theoretical prediction (4) takes the form
Footnote 13: The \(Q^{1/2}\) term depends upon a subleading Wilson coefficient of the EFT which does not contribute to the scaling dimension \(D(Q)\); therefore we cannot compute its value from the estimates obtained in sec. 2. The \(Q^{0}\) term, \(\alpha_{0}\), is instead independent of Wilson coefficients, analogously to the \(c_{0}\) term in eq. (1). In principle, its value could be computed from the one-loop fluctuation determinant around the saddle-point of [21]. In practice, this calculation is technically challenging and we treat \(\alpha_{0}\) as an unknown parameter.
\[\lambda_{1,Q,-Q-1}=C\left(1\right)Q^{D(1)/2}\left[1+\frac{0.46c_{\nicefrac{{3} {2}}{{2}}}}{\sqrt{Q}}+\frac{\beta_{-1}}{Q}+O\left(Q^{-3/2}\right)\right]\,, \tag{16}\]
where we also included an extra-subleading term, which depends upon a new Wilson coefficient \(\beta_{-1}\).14 For the sake of concreteness, in the following we discuss how to measure the OPE coefficient (14). A similar discussion applies to the OPE in regime II.
Footnote 14: This coefficient represents a subleading contribution in the operator matching [20].
In Monte Carlo simulations operators are normalized differently than in the CFT literature. In Monte Carlo, 2pt functions at coincident points are normalized to \(1\), while in the CFT literature 2pt functions are normalized to \(1/|x|^{2D(Q)}\) asymptotically. Therefore, to extract the OPE coefficient as defined in eq. (2), we measure a suitable ratio between the 3pt function and 2pt functions. The ratio of interest is
\[\frac{\left\langle\mathcal{O}_{-Q}\left(x\right)\mathcal{O}_{2Q}\left(0\right) \mathcal{O}_{-Q}\left(-x\right)\right\rangle}{\sqrt{\left\langle\mathcal{O}_ {2Q}\left(0\right)\mathcal{O}_{-2Q}\left(x\right)\right\rangle}\left\langle \mathcal{O}_{Q}\left(0\right)\mathcal{O}_{-Q}\left(x\right)\right\rangle}=2^{D (2Q)-2D(Q)}\lambda_{Q,Q,-2Q}\,, \tag{17}\]
where on the right-hand side we expressed it in terms of the OPE coefficient (2), using the continuum infinite size CFT prediction.
In order to measure 3pt functions with the Worm algorithm, we need to rewrite them as some combination of 2pt functions in the presence of background charges, as in eq. (2).15
Figure 5: Results for \(\mathcal{A}(Q)\), defined in eq. 5. Notice the large uncertainties despite the precision of the measurements of \(\Delta(Q)\).
We therefore rewrite the 3pt function as:
\[\left\langle\mathcal{O}_{-Q}\left(x\right)\mathcal{O}_{2Q}\left(0\right)\mathcal{O }_{-Q}\left(-x\right)\right\rangle=\left\langle\mathcal{O}_{-Q}\left(x\right) \mathcal{O}_{Q}\left(0\right)\right\rangle_{Q_{0}-Q_{-x}}\left\langle\mathcal{O }_{Q}\left(0\right)\mathcal{O}_{-Q}\left(-x\right)\right\rangle. \tag{3.4}\]
The task of measuring OPE coefficients reduces to the measurement of 2pt functions in the presence of background charges; these can be efficiently sampled in terms of ratios (as in eq. (2.1)) using the strategy outlined in the previous section. Further details are given in App. A.2.
We now discuss lattice and finite-size effects. First, it is useful to determine the region where lattice effects are negligible. To this aim, we consider the ratio of three-point functions at different lattice sizes but at the same relative position:
\[\frac{T_{Q,L}(x)}{T_{Q,\alpha L}(\alpha x)}=\alpha^{\gamma_{Q}}\,,\qquad \gamma_{Q}=D(2Q)+2D(Q)\,, \tag{3.5}\]
where we defined the 3pt function on the lattice as
\[T_{Q,L}(x)=\left\langle\mathcal{O}_{-Q}\left(x\right)\mathcal{O}_{2Q}\left(0 \right)\mathcal{O}_{-Q}\left(-x\right)\right\rangle_{T_{L}^{3}}\,; \tag{3.6}\]
the value for the exponent \(\gamma_{Q}\) in eq. (3.5) follows from scale invariance in the CFT. Analogously to eq. (2.3), the ratio (3.5) is independent of \(x\) when lattice effects are negligible.
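As a concrete illustration of this check, one can extract an effective exponent from 3pt functions measured at two lattice sizes and compare it with \(2D(Q)+D(2Q)\); the following Python sketch (our own illustration, with hypothetical array names) does exactly that.

```
import numpy as np

def effective_exponent(T_L, T_alphaL, alpha=2.0):
    """Effective exponent gamma_Q from eq. (3.5): the lattice 3pt function
    measured at separations x on size L (T_L) and at alpha*x on size alpha*L
    (T_alphaL), element by element in x."""
    return np.log(np.asarray(T_L) / np.asarray(T_alphaL)) / np.log(alpha)

# Hypothetical usage: the window where lattice effects are negligible is where
# the deviation from the CFT prediction is compatible with zero (cf. fig. 6).
# deviation = effective_exponent(T32, T64) - (2.0 * D_Q + D_2Q)
```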
In fig. 6 we plot the difference between the exponent \(\gamma_{Q}\) extracted from numerical measurements for different values of \(x\) and the CFT prediction in eq. (3.5). We show explicitly the results for \(Q=1,2,3\) (the plot for \(Q=4\) is analogous) and \(L=32\). The plot clearly shows that the region where lattice effects are negligible decreases with the charge. This makes it challenging to perform measurements for large values of the charge.
Figure 6: Difference between the value of \(\gamma_{Q}\) measured from Monte-Carlo and the theoretical prediction \(2D(Q)+D(2Q)\) (see eq. (3.5)). The result is obtained from the ratio of correlation functions sampled at \(L=32\) and \(L=64\), at the same relative position. We used the values \(D(Q)\) and \(D(2Q)\) measured in the previous section for the theoretical prediction.
Differently from the case of 2pt functions, it is not possible to completely eliminate systematic errors due to finite-size effects.16 Therefore, to extract the OPE coefficient we fix \(x\) and study the \(L\) dependence of the ratio of 3- and 2-pt functions in eq. (10). The idea is that, as long as \(x\) lies outside the region where lattice effects are relevant, the extrapolation to \(L\to\infty\) is unaffected by lattice corrections. Moreover, in the limit \(x/L\to 0\) at fixed \(x\), the lattice 3pt function should be well described by the continuum infinite-size prediction. Thus, this provides a direct measurement of the OPE coefficients. We show the results for \(Q=1\) in fig. 7. While for \(x\in\{1,2,4,6\}\) there are deviations, all results for \(x>6\) converge to the same value within uncertainties. The error bars on the final result are estimated from the dispersion of the intercepts.
Footnote 16: Notice that ratios of three-point functions at different lattice sizes are independent of the OPE coefficient, as eq. (11) shows.
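To illustrate the naive procedure, the sketch below (not the code used for the paper) performs the fixed-\(x\) extrapolation assuming that the leading finite-size correction is linear in \(1/L\); the array names are placeholders.

```
import numpy as np

def extrapolate_fixed_x(L_values, ratio_values):
    """Extrapolate the 3pt/2pt ratio measured at fixed x to L -> infinity by a
    straight-line fit in 1/L; the intercept estimates the infinite-volume value."""
    inv_L = 1.0 / np.asarray(L_values, dtype=float)
    slope, intercept = np.polyfit(inv_L, np.asarray(ratio_values, dtype=float), 1)
    return intercept

# Hypothetical usage: repeating this for several x outside the lattice-effect
# region, the spread of the intercepts provides the systematic error bar.
# intercepts = [extrapolate_fixed_x(Ls, ratios[x]) for x in good_x]
```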
For larger values of \(Q\) one obtains similar plots, but the data have larger uncertainties and the extrapolation for \(L\to\infty\) does not converge as nicely. This is due to the increased significance of lattice effects, as shown in fig. 6. To improve the precision and accuracy, we used conformal perturbation theory to parameterize both lattice corrections and finite-size effects in eq. (10). We obtain an expression as a series in powers of \(a/x\), \(a/L\) and \(x/L\), see App. C for details. We improve on the naive linear extrapolation by fitting the coefficients of these powers. Notice that in this approach we perform a single multidimensional fit to all the data (i.e. for all values of \(x\) and \(L\)), rather than performing separate linear extrapolations for each value of \(x\).
The numerical values for the OPE coefficients in regimes I and II are shown in fig. 8. We show both the results of the linear extrapolation and those obtained by accounting for lattice and finite-size corrections, as explained in the previous paragraph. Both methods yield the same results for the OPE coefficients in regime II. Also for the OPE coefficients in regime I the two measurements are compatible, but the results accounting
Figure 7: Linear extrapolation of the OPE coefficient \(\lambda_{Q,Q,-2Q}\) to \(L\to\infty\) at fixed \(x\) and \(Q=1\).
for lattice corrections, in dark blue, are consistently larger than those obtained from the linear extrapolation, in cyan. This suggests that our results for \(\lambda_{Q,Q,-2Q}\) may be underestimated. Unfortunately, to further investigate this issue we would need to perform more precise simulations at larger distances, which are currently beyond our reach.17 Our results are tabulated in the table in fig. 8.
Footnote 17: Our results for \(Q=4\) are obtained with \(L_{max}=256\) and \(x_{max}=12\).
The range of charges that can be sampled in regime I is limited, as the statistical errors grow quickly with the charge. In regime II, it would be possible to go further, but this would not improve the precision with which we can measure the coefficient \(c_{\nicefrac{{3}}{{2}}}\), as we will explain in the following paragraphs.
Let us now analyze our data. To this aim, for the OPE coefficient in **regime I**, we perform two separate fits: one for the first three coefficients in eq. (2) and all data points, and one for the first two Wilson coefficients only and the results for \(Q\geq 2\). The results are shown in Tab. 1.
Clearly, our analysis is limited by the small number of data points. Nonetheless, the estimates in table 1 suggest that the leading Wilson coefficient should lie in the range \(c_{\nicefrac{{3}}{{2}}}\approx 0.4\pm 0.1\), which is compatible with the value \(c_{\nicefrac{{3}}{{2}}}\approx 0.34\) obtained from the measurements of scaling dimensions in the previous section. To appreciate this point better, in fig. 9 we
Figure 8: Numerical results. In regime I, the OPE coefficients extracted from the linear extrapolation are in cyan, and OPE coefficients extracted using lattice corrections are in dark blue. In regime II, both methods yield the same results. The OPE coefficient for \(Q=1\) is the same in both regimes. Notice that the OPE coefficient in regime I grows much faster with \(Q\) than the one in regime II; this is in qualitative agreement with the EFT predictions. The data in this plot is in the table below.
show our results for
\[\frac{\log(\lambda_{Q,Q,-2Q})}{Q^{3/2}}\frac{2\sqrt{\pi}}{f(1)}\sim c_{3/2}+O(Q^{- 1})\,. \tag{3.7}\]
The value of \(c_{3/2}\) can be extracted from the asymptotic behaviour of this quantity. The value obtained with the linear extrapolation is smaller than the one obtained with lattice corrections. Since the range of charges is small, it is difficult to extrapolate to infinite charge. Nevertheless, the results are compatible with an asymptotic approach to \(c_{3/2}\approx 0.34\).
We now discuss the results for the OPE coefficient in **regime II**. We begin by testing the leading behaviour of the OPE coefficient. In fig. 10, we show \(\lambda_{1,Q,-Q-1}/Q^{D(1)/2}\). If the EFT prediction eq. (3.2) holds, this ratio should approach a constant for large \(Q\). The plot shows that this is indeed the case.
Having checked the leading behaviour, we focus on studying the sub-leading correction and measuring \(c_{3/2}\). Unfortunately, fits are not very precise due to large correlations between \(C(1)\) and \(c_{3/2}\) in the range of charges available. For reference, the fits including
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline data & charges & \(c_{3/2}\) & \(\alpha_{1/2}\) & \(\alpha_{0}\) \\ \hline lattice corrections & \(Q\geq 1\) & 0.316(15) & 1.57(8) & -1.09(7) \\ \hline linear extrapolation & \(Q\geq 1\) & 0.33(4) & 1.4(2) & -0.9(2) \\ \hline lattice corrections & \(Q\geq 2\) & 0.446(8) & 0.176(7) & — \\ \hline linear extrapolation & \(Q\geq 2\) & 0.43(4) & 0.17(3) & — \\ \hline \end{tabular}
\end{table}
Table 1: Results of the fits of the OPE coefficient in regime I, eq. (3.1), with different free parameters and different ranges of the charge. We show the fits both with the data obtained from the linear extrapolation and with the inclusion of lattice corrections; notice that the latter are more precise.
Figure 9: Comparison between the numerical results for eq. (3.7) and the predicted value \(c_{3/2}=0.34(1)\). The black line is the value of \(c_{3/2}\) obtained in sec. 2, and its width is the uncertainty. The coloured solid lines are the fits, and the coloured regions around them indicate their uncertainty. The fits converge to the black line for larger values of \(Q\).
and excluding the subleading coefficient \(\beta_{-1}\) (cfr. eq. (10)) are given in table 2. Clearly the fit including the coefficient \(\beta_{-1}\) makes the uncertainties too large for the results to be meaningful. The second fit is compatible within \(2\sigma\) with the estimate \(c_{\nicefrac{{3}}{{2}}}\approx 0.34\).
To study directly the first sub-leading contribution in eq. (10), we consider the following ratio
\[-\frac{Q^{3/2}}{0.23}\left(\frac{\lambda_{1,Q+1,-Q-2}}{\lambda_{1,Q,-Q-1}}\left( \frac{Q}{Q+1}\right)^{D(1)/2}-1\right)\sim c_{\nicefrac{{3}}{{2}}}+O(Q^{-1/2} )\,. \tag{12}\]
The EFT predicts that eq. (12) asymptotes to \(c_{\nicefrac{{3}}{{2}}}\) for large \(Q\). Our numerical results are shown in fig. 11. They are compatible with the value of \(c_{\nicefrac{{3}}{{2}}}\) obtained in sec. 2, represented by the black dashed line. However, similarly to the analysis of \(c_{0}\) in the previous section, the uncertainties increase rapidly with \(Q\), making it impossible to obtain a reliable estimate. These uncertainties also represent an obstacle towards improving our results with measurements of the OPE coefficients at larger values of \(Q\).
## 4 Conclusions and outlook
In this work, we used Monte-Carlo calculations to test the validity of the superfluid EFT for describing the large charge sector of the \(O(2)\) model. Our results were already summarized in the introduction. Here we instead discuss the implications of our findings and potential future directions.
The most surprising aspect of the result for \(D(Q)\) is the effectiveness of the large charge predictions also for \(Q=O(1)\). For instance, the extrapolation of the best-fit
curve obtained from the data with \(Q\geq 8\) reproduces the measured value of \(D(1)\) with 2% accuracy. Overall, our results confirm that this phenomenon persists also for OPE coefficients. A partial justification for the accuracy of the large charge expansion for the scaling dimension was given in [37] via resurgence analysis of the large \(N\) result of [28]. It might be interesting to perform similar analyses for OPE coefficients.
While our results are encouraging, the scarcity of data points does not allow us to draw unambiguous conclusions about the validity of the superfluid EFT. It is therefore important to obtain more data. Unfortunately, obtaining OPE coefficients for higher values of the charges is beyond the reach of our current Monte-Carlo algorithm. For instance, we estimate that obtaining the OPE coefficient in Regime I for \(Q_{1}=Q_{2}=5\), with an uncertainty of 10%, would require 10 CPU years.
A simpler target might be the Monte-Carlo calculation of the scaling dimension of the lightest charged operator with spin \(J=2\).18 Notice that the cubic symmetry group of the lattice naturally allows one to represent operators up to spin \(J<4\).19 As commented in the introduction, the superfluid EFT predicts the scaling dimension of the spin-2 operator to be \(D(Q)+\sqrt{3}\). However, it is unclear whether one should expect this result to converge for small values of \(Q\), as happens for scalar operators. Indeed, it is expected that the large charge sector of the \(O(2)\) model admits a rich phase diagram as a function of the ratio \(J/Q\) [39; 40]. We hope to report on progress in this direction in the future.
Footnote 18: We focus on spin 2 since the lightest charged operator with spin 1 is expected to be a descendant of the scalar operator \(\mathcal{O}_{Q}\).
Footnote 19: We thank Luca Delacretaz for useful discussions on this.
It would also be interesting to explore alternative methods to compute the spectrum of charged operators in the \(O(2)\) model. An intriguing possibility is provided by the fuzzy-sphere regularization of [17], which allows directly computing the spectrum of the theory on the cylinder.20 We were also informed of interesting numerical bootstrap results for charged operators in the \(O(3)\) model [41].
Footnote 20: We thank Andreas Lauchli for discussions regarding his ongoing work in this direction.
The accuracy of the large charge expansion is reminiscent of the success of the Regge relation for the QCD and Yang-Mills spectrum [42], and more recently of the remarkable results of the large spin bootstrap in the \(O(N)\) models [43; 44]. In both of these examples, the results are (partially) explained by the analyticity properties of the spectrum [45; 8; 46]. It remains an important open question whether the CFT data of the \(O(2)\) model enjoy similar analyticity properties as a function of the charge.
## Acknowledgments
GC is supported by the Simons Foundation (Simons Collaboration on the Non-perturbative Bootstrap) grants 488647 and 397411. JM is supported by FCT with the fellowship 2021.04743.BD, co-funded by the Programme Por_Norte, the European Social Fund (ESF), and the Portuguese state budget (MCTES). JM, JO and JV thank the cluster time provided by INCD funded by FCT and FEDER under project 01/SAICT/2016 n\({}^{\text{o}}\) 022153 and the grant 2021.09830.CPCA of the Advanced Computing Projects (2nd edition) as well as
GRID FEUP. They also thank Centro de Fisica do Porto funded by Portuguese Foundation for Science and Technology (FCT) within the Strategic Funding UIDB/04650/2020. JP is supported by the Simons Foundation grant 488649 (Simons Collaboration on the Non-perturbative Bootstrap) and the Swiss National Science Foundation through the project 200020_197160 and through the National Centre of Competence in Research SwissMAP.
## Appendix A Monte Carlo
This appendix describes the Monte Carlo method and the measurement strategies employed: the world-line formulation, the Worm algorithm, and the procedures required to express the correlation functions as averages that can be efficiently estimated with Monte Carlo. We also describe an improved Worm update, the continuous-time update, which guarantees that the Worm tail always moves.
The lattice Hamiltonian of the O(2) model is
\[H=-\beta\sum_{n,\rho}\cos\left(\theta_{n}-\theta_{n+a\hat{\rho}}\right), \tag{10}\]
where the field \(\theta_{n}\) is defined on the nodes of the cubic lattice. Simulations can be performed in this representation. However, it is more efficient to use a world-line representation [32], where the node variables are mapped into edge variables using
\[\exp\{\beta\cos\left(\theta_{n}-\theta_{n+a\hat{\rho}}\right)\}=\sum_{k=- \infty}^{\infty}I_{k}(\beta)e^{ik\left(\theta_{n}-\theta_{n+a\hat{\rho}} \right)}, \tag{11}\]
where \(I_{k}(\beta)\) is the modified Bessel function of the first kind and \(\beta\) is the inverse temperature. We work at the critical point of the three-dimensional \(O(2)\) model, \(\beta=0.4541652\) [47]. Since each \(k\) is associated with a pair \((\theta_{n},\theta_{n+a\hat{\rho}})\), it lives on the edge of the lattice connecting \(n\) to \(n+a\hat{\rho}\). After this rewriting, the path integral over \(\theta\) can be performed explicitly, and the partition function becomes a sum over all possible values of \(k\) on all the edges
\[Z=\sum_{\{k\}}\prod_{n,\hat{\rho}\in\{\hat{1},\hat{2},\hat{3}\}}\left\{I_{k_{n,n+a\hat{\rho}}}(\beta)\right\}\delta\left(\sum_{\hat{\rho}}\left(k_{n,n+a\hat {\rho}}-k_{n-a\hat{\rho},n}\right)\right), \tag{12}\]
where the sum is over all possible configurations of edge variables and the product is over nodes, \(n\), and the edges connected to it.
The world-line formulation brings two significant improvements. The first is the possibility of using the Worm algorithm [31], which has one of the smallest dynamical critical exponents [48]. The second is that correlation functions can be reinterpreted as the partition function in the presence of some background charge
\[\left\langle\prod_{i}\exp\left(iQ_{i}\theta(x_{i})\right)\right\rangle=\langle 1\rangle_{\sum_{i}\left(Q_{i}\right)_{x_{i}}}, \tag{13}\]
where the subscript \(\sum_{i}\left(Q_{i}\right)_{x_{i}}\) indicates that the correlation function is computed with the partition function \(Z_{\sum_{i}\left(Q_{i}\right)_{x_{i}}}\)
\[Z_{\sum_{i}\left(Q_{i}\right)_{x_{i}}}=\sum_{\{k\}}\prod_{n,\hat{\rho}}\left\{I_{ k_{n,n+a\hat{\rho}}}(\beta)\right\}\delta\left(\sum_{\hat{\rho}}\left(k_{n,n+a \hat{\rho}}-k_{n-a\hat{\rho},n}\right)+\sum_{i}Q_{i}\delta_{in}\right). \tag{10}\]
Since \(\sum_{\hat{\rho}}\left(k_{n,n+a\hat{\rho}}-k_{n-a\hat{\rho},n}\right)\) is interpreted as charge conservation at each node, the extra term \(\sum_{i}\left(Q_{i}\right)_{x_{i}}\) can be interpreted as a source/sink of charge, or in other words, a background charge distribution.
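To make the world-line bookkeeping concrete, the following Python sketch (our illustration; the storage of the link variables as three periodic integer arrays is an assumption) evaluates the divergence at every node and the Bessel-function weight of a configuration.

```
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

def divergence(k):
    """Net current out of each node, sum_rho (k_{n,n+rho} - k_{n-rho,n}),
    for integer link variables k[d] of shape (L, L, L), d = 0, 1, 2, with
    periodic boundary conditions."""
    return sum(k[d] - np.roll(k[d], shift=1, axis=d) for d in range(3))

def log_weight(k, beta=0.4541652):
    """Logarithm of the configuration weight, prod_links I_k(beta)."""
    return sum(np.log(iv(np.abs(k[d]), beta)).sum() for d in range(3))

# A configuration contributes to the partition function with background charges
# only if its divergence cancels the inserted charges at every node.
```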
### Worm algorithm
The Worm algorithm consists of two steps. An update step generates new configurations, Alg. 1. Then, a measurement step extracts the desired correlation function, see Alg. 2.
```
1. Pick a random site \(x_{h}\) (head site). Define \(x_{t}=x_{h}\) (tail site).
2. Randomly pick a direction \(\hat{\rho}\in\{\hat{1},\hat{2},\hat{3}\}\) and an orientation \(\sigma=\pm 1\).
3. Let \(k\) be the current flowing through the bond connecting \(x_{t}\) to \(x_{t}+\sigma\hat{\rho}\).
   * If \(\sigma=1\), update \(k\to k+1\) with probability \(I_{k+1}\left(\beta\right)/I_{k}\left(\beta\right)\);
   * if \(\sigma=-1\), update \(k\to k-1\) with probability \(I_{k-1}\left(\beta\right)/I_{k}\left(\beta\right)\).
4. If the update is accepted, set \(x_{t}=x_{t}+\sigma\hat{\rho}\); if it is not accepted, \(x_{t}\) is unchanged.
5. If \(x_{t}=x_{h}\) the update ends. Else go to step 2.
```
**Algorithm 1** Update step.
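A compact Python transcription of this update (a sketch under the same storage conventions as above; the acceptance probability is capped at one, and the example setup in the final comment is purely illustrative) reads:

```
import numpy as np
from scipy.special import iv

def worm_update(k, rng, beta=0.4541652):
    """One Worm update (Algorithm 1): k is a list of three integer arrays of
    shape (L, L, L); k[d][n] is the current on the link from n to n + e_d."""
    L = k[0].shape[0]
    unit = np.eye(3, dtype=int)
    x_h = tuple(rng.integers(0, L, size=3))          # head site
    x_t = x_h                                        # tail site starts at the head
    while True:
        d = rng.integers(0, 3)                       # direction rho
        sigma = 1 if rng.random() < 0.5 else -1      # orientation
        # site labelling the (positively oriented) bond that would be crossed
        bond = x_t if sigma == 1 else tuple((np.array(x_t) - unit[d]) % L)
        kb = int(k[d][bond])
        # accept with probability min(1, I_{k+sigma}(beta) / I_k(beta))
        if rng.random() < min(1.0, iv(abs(kb + sigma), beta) / iv(abs(kb), beta)):
            k[d][bond] = kb + sigma
            x_t = tuple((np.array(x_t) + sigma * unit[d]) % L)
        if x_t == x_h:                               # tail back at the head: done
            return

# Example setup: rng = np.random.default_rng(0)
#                k = [np.zeros((16, 16, 16), dtype=int) for _ in range(3)]
```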
In the measurement step, the Worm head transports a unit of charge, generating configurations with charge insertions at its head and tail. The correlation function is then estimated as the ratio between the number of times the head and tail sit at the positions at which the correlation function is being measured and the number of times the head and tail coincide.
The **continuous time update** improves steps 2 and 3 of the update step, and consequently also of the measurement step, by guaranteeing the tail always moves. This is achieved by choosing to move through a given edge with probability \(P(\sigma\hat{\rho})\), given by
\[P(\sigma\hat{\rho})=\frac{P(\sigma\hat{\rho}\mid n)}{\sum_{\hat{\rho},\sigma}P( \sigma\hat{\rho}\mid n)}, \tag{10}\]
where \(P(\sigma\hat{\rho}\mid n)\) is the probability of being at position \(n\), proposing to move in the direction \(\sigma\hat{\rho}\), and accepting the proposal (as described in steps 2 and 3 of the update step). Since \(\sum_{\sigma\hat{\rho}}P(\sigma\hat{\rho})=1\), the tail always moves. If there is a counter associated with the position \(n\), its value should be incremented by \(\nicefrac{{1}}{{\left(\sum_{\sigma\hat{\rho}}P(\sigma\hat{\rho}|n)\right)}}\), the expected time the tail spends at \(n\) before moving in the original algorithm. Performing this computation analytically reduces the statistical errors. This continuous-time update can also be understood as a heat-bath step, from which detailed balance follows.
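The continuous-time variant can be sketched as follows (again our own illustration): the six acceptance probabilities are computed explicitly, the move is drawn from their normalised distribution, and the counter at the current site is incremented by the inverse of their sum.

```
import numpy as np
from scipy.special import iv

def continuous_time_weights(k, x_t, beta=0.4541652):
    """Return the normalised move probabilities P(sigma rho) for the tail at x_t,
    and the analytic counter increment 1 / sum_{sigma rho} P(sigma rho | n)."""
    L = k[0].shape[0]
    unit = np.eye(3, dtype=int)
    acc = []
    for d in range(3):
        for sigma in (1, -1):
            bond = x_t if sigma == 1 else tuple((np.array(x_t) - unit[d]) % L)
            kb = int(k[d][bond])
            acc.append(min(1.0, iv(abs(kb + sigma), beta) / iv(abs(kb), beta)) / 6.0)
    total = sum(acc)
    return np.array(acc) / total, 1.0 / total

# The move is then drawn with rng.choice(6, p=probs), so the tail always moves.
```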
### Ratio between correlation functions
First, let us show how operator insertions can be interpreted as background charges. Consider the correlation function \(\left\langle e^{iQ\theta(x)}e^{-iQ\theta(y)}\right\rangle\). Then, by expanding the definition of the expectation value
\[\left\langle e^{iQ\theta(x)}e^{-iQ\theta(y)}\right\rangle=\frac{\int\left( \prod_{k}d\theta_{k}\right)\exp\left[\beta\sum_{\langle i,j\rangle}\cos\left[ \theta_{i}-\theta_{j}\right]+iQ\theta(x)-iQ\theta(y)\right]}{\int\left(\prod_{ k}d\theta_{k}\right)\exp\left[\beta\sum_{\langle i,j\rangle}\cos\left[\theta_{i}- \theta_{j}\right]\right]}, \tag{11}\]
and using the identity
\[\exp\left\{\beta\cos\left(\theta_{i}-\theta_{j}\right)\right\}=\sum_{k_{ij} \in\mathbb{Z}}I_{k_{ij}}\left(\beta\right)e^{ik_{ij}\left(\theta_{i}-\theta_{ j}\right)}, \tag{12}\]
the path integral over \(\theta\) can be performed analytically, yielding
\[\left\langle e^{iQ\theta(x)}e^{-iQ\theta(y)}\right\rangle=\frac{\sum_{\{k\}}\prod_{\langle i,j\rangle}I_{k_{ij}}\left(\beta\right)\,\delta\left(\sum_{i}\left(D_{i}+Q\delta_{ix}-Q\delta_{iy}\right)\right)}{\sum_{\{k\}}\prod_{\langle i,j\rangle}I_{k_{ij}}\left(\beta\right)\,\delta\left(\sum_{i}D_{i}\right)}, \tag{13}\]
where \(D_{i}\equiv\sum_{\hat{\rho}\in\{\hat{1},\hat{2},\hat{3}\}}\left(k_{i,i+a\hat{\rho}}-k_{i-a\hat{\rho},i}\right)\) and \(\langle i,j\rangle\) denotes the sum over nearest neighbors.
We are now ready to study ratios between correlation functions. Consider the ratio appearing in eq. (1)
\[\frac{\left\langle e^{iQ\theta(x)}e^{-iQ\theta(y)}\right\rangle}{\left\langle e^{i(Q-1)\theta(x)}e^{-i(Q-1)\theta(y)}\right\rangle}=\frac{\sum_{\{k\}}\prod_{\langle i,j\rangle}I_{k_{ij}}(\beta)\,\delta\left(\sum_{i}\left(D_{i}+\left(Q-1\right)\delta_{ix}-\left(Q-1\right)\delta_{iy}+\delta_{ix}-\delta_{iy}\right)\right)}{\sum_{\{k\}}\prod_{\langle i,j\rangle}I_{k_{ij}}(\beta)\,\delta\left(\sum_{i}\left(D_{i}+\left(Q-1\right)\delta_{ix}-\left(Q-1\right)\delta_{iy}\right)\right)} \tag{14}\]
To implement this on the lattice, it suffices to generate an initial configuration of link variables satisfying
\[D_{i}+(Q-1)\delta_{ix}-(Q-1)\delta_{iy}=0 \tag{15}\]
at every point and then perform the standard worm update.
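As one concrete (and by no means unique) way to build such an initial configuration, one can route the background current along lattice axes between the two insertion points; the sketch below uses our own sign conventions for the link variables, which should be matched to the delta-function constraint above.

```
import numpy as np

def background_links(L, x, y, charge):
    """Lay `charge` units of current along lattice axes from site y to site x,
    so that the only nodes with non-zero divergence are the two insertion points."""
    k = [np.zeros((L, L, L), dtype=int) for _ in range(3)]
    unit = np.eye(3, dtype=int)
    site = np.array(y, dtype=int) % L
    target = np.array(x, dtype=int) % L
    for d in range(3):
        while site[d] != target[d]:
            step = 1 if (target[d] - site[d]) % L <= L // 2 else -1
            bond = site if step == 1 else (site - unit[d]) % L
            k[d][tuple(bond)] += step * charge
            site[d] = (site[d] + step) % L
    return k
```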
This can be generalized to higher point functions. The general rule is that charge insertions in the denominator are removed from the numerator and added to the background. Thus, the 3pt function in eq. (3.4) can be rewritten as
\[\langle\mathcal{O}_{-Q}(x)\mathcal{O}_{2Q}(0)\mathcal{O}_{-Q}(-x)\rangle =\frac{\langle\mathcal{O}_{-Q}(x)\mathcal{O}_{2Q}(0)\mathcal{O}_ {-Q}(-x)\rangle}{\langle\mathcal{O}_{Q}(0)\mathcal{O}_{-Q}(-x)\rangle}\, \langle\mathcal{O}_{Q}(0)\mathcal{O}_{-Q}(-x)\rangle\] \[=\langle\mathcal{O}_{-Q}(x)\mathcal{O}_{Q}(0)\rangle_{Q_{0}-Q_{- x}}\,\langle\mathcal{O}_{Q}(0)\mathcal{O}_{-Q}(-x)\rangle\] (A.12)
## Appendix B Finite size scaling analysis
In this appendix, we study the dependence of the numerical measurements of the conformal dimension on the size of the system. The data presented in the table below fig. 2 are obtained using the procedure described here.
Figure 12: Extrapolations of the difference between consecutive conformal dimensions to \(L=\infty\). In blue we present our data for \(L=32,48,64\). In orange we present the linear extrapolation to \(L=\infty\) and in red the results in [30]. In black, we present the bootstrap results [49]. In green, we present the results from [50], for \(Q=1\), and [51] for \(Q=2,3,4\). Not all of the results exist for all the charges.
In fig. 12, we show the numerical results for the differences \(\Delta(Q)\) between conformal dimensions for \(L=32,48\) and \(64\) and their extrapolation to \(L=\infty\), and present the comparison with the available results in the literature. For \(Q=1,2,3,4\), finite-size effects are relevant and bigger than statistical uncertainties. In particular, the measurement for \(Q=1\) and \(L=32\) is incompatible with the bootstrap result [49] and with previous Monte Carlo results [50; 51], while our extrapolated value is compatible.21
Footnote 21: We thank Martin H. Hasenbusch for pointing out the mismatch with the bootstrap results in a previous version of this preprint.
As charges become larger, systematic errors become less relevant and, for \(Q=7\), the results for \(L=32\) are compatible with the extrapolated results, within the uncertainties of the latter.
In light of the above analysis, we made the following choice for the data presented in tab. 3: for charges \(Q\leq 7\), we use the extrapolated data and uncertainties; for \(Q>7\) we use the data for \(L=32\) with doubled uncertainties. We made this choice because, for \(L=64\) and charge \(Q\gtrsim 10\), there are systematic errors due to the small statistics. Such systematic errors prevent us from obtaining reliable extrapolations to \(L=\infty\). From fig. 12 we observe that the statistical errors are roughly comparable with the systematic ones for \(Q\geq 7\), therefore justifying the choice of doubling the statistical uncertainties to estimate the overall uncertainty.
## Appendix C Lattice corrections
The analysis we perform in sec. 3 relies on the hypothesis that both lattice and finite size effects are small. However, we observe significant lattice effects for large charges, see fig. 6, and thus linear extrapolation is no longer enough. Given that we are already using the largest lattice size and distances that we can reasonably simulate, we now study lattice and finite size effects.
We start by identifying the projection of a lattice operator onto continuum operators. Next, we identify neutral scalar irrelevant operators that can be added to the action. These need to be irrelevant, otherwise the system flows away from this fixed point. Their goal is to encode the information about the existence of a lattice. Finally, our space has the topology of a torus, such that the 2pt functions are not fully fixed by symmetry. Thus, we only have access to correlation functions when operators are close, \(|x-y|\ll L\), where the OPE quickly converges. Since this is related to the UV behaviour of the theory, it is not sensitive to the global topology of the system.
For the sake of simplicity, we will only present explicit computations for the 2pt functions with \(Q=1\). Generalizations to other charges or higher-order correlation functions are straightforward. We use the standard CFT notation. To make a connection with the rest of the paper, the reader should keep in mind that \(\mathcal{O}_{\text{lat}}(x)=e^{i\theta(x)}\), \(\Delta_{\phi}=D(1)\).
The lightest charged scalar lattice operator can overlap with all operators that have the same charge and are scalar under the cubic subgroup (i.e. operators whose spin \(s\equiv 0\mod 4\)). We will consider only the two lightest such operators, \(\phi\) and \(\Box\phi\), see Tab. 3. The lattice operator is then
\[\mathcal{O}_{\text{lat}}=c_{1}a^{\Delta_{\phi}}\phi+c_{2}a^{\Delta_{\phi}+2} \square\phi, \tag{108}\]
such that the 2pt function becomes
\[\left\langle\mathcal{O}_{\text{lat}}^{\dagger}(x)\mathcal{O}_{\text{lat}}(0) \right\rangle=a^{2\Delta_{\phi}}\left(|c_{1}|^{2}+2\text{Re}(c_{2}c_{1})a^{2} \square_{x}\right)\left\langle\phi^{\dagger}(x)\phi(0)\right\rangle+\mathcal{O }\left(a^{3+2\Delta_{\phi}}\right). \tag{109}\]
The next step is to deform the action. The lightest neutral scalar irrelevant operator is \(s^{\prime}\) such that the first term in the deformed action is
\[S=S_{\text{CFT}}+g_{s^{\prime}}a^{\Delta_{s^{\prime}}-3}\int_{T^{3}}d^{3}zs^{ \prime}(z)+\cdots, \tag{110}\]
where \(g_{s^{\prime}}\) is a dimensionless parameter. By the usual perturbative expansion, the perturbed correlation function at one-loop is given by
\[\left\langle\phi^{\dagger}(x)\phi(0)\right\rangle=\left\langle\phi^{\dagger} (x)\phi(0)\right\rangle_{\text{CFT}}-g_{s^{\prime}}a^{\Delta_{s^{\prime}}-3} \int_{T^{3}}d^{3}z\left\langle\phi^{\dagger}(x)\phi(0)s^{\prime}(z)\right\rangle _{\text{CFT}}. \tag{111}\]
The correlation functions on the right-hand side are computed on the unperturbed CFT.
In flat space, the 2pt and 3pt functions are known. On the torus, they are not, and the only tool available is the OPE. This means we can only study the short-range behaviour of these correlation functions. In the OPE of \(\phi^{\dagger}\times\phi\) we will only include the lightest neutral scalar operator 22
Footnote 22: Descendants can also be considered. We do not include them here to keep the expressions manageable.
\[\phi^{\dagger}(x)\times\phi(0)\sim|x|^{-2\Delta_{\phi}}\left(\mathbb{I}+|x|^{ \Delta_{s}}\lambda_{\phi^{\dagger}\phi_{s}}s(0)\right). \tag{112}\]
Plugging this into (111), we obtain
\[\left\langle\phi^{\dagger}\left(x\right)\phi\left(0\right)\right\rangle=|x| ^{-2\Delta_{\phi}}\left(\left\langle\mathbb{I}\right\rangle_{\text{CFT}}+|x| ^{\Delta_{s}}\,\lambda_{\phi^{\dagger}\phi_{s}}\left\langle s\left(0\right) \right\rangle_{\text{CFT}}\right.\] \[\left.-g_{s^{\prime}}a^{\Delta_{s^{\prime}}-3}\int_{T^{3}}d^{3}z \left[\left\langle s^{\prime}\left(z\right)\right\rangle_{\text{CFT}}+|x|^{ \Delta_{s}}\,\lambda_{\phi^{\dagger}\phi_{s}}\left\langle s\left(0\right)s^ {\prime}\left(z\right)\right\rangle_{\text{CFT}}\right]\right), \tag{113}\]
which depends on vacuum expectation value (VEV) of \(s\) and \(s^{\prime}\) and in the integrated 2pt-function on the torus of \(s\) and \(s^{\prime}\). These are unknown in general, hence, instead of focusing on computing them explicitly, we extract their dependence on the dimensionful parameter \(L\). Let us go case by case:
* \(\left\langle\mathbb{I}\right\rangle_{\text{CFT}}=1\), by definition of the identity operator.
* \(\left\langle s\left(0\right)\right\rangle_{\text{CFT}}=\frac{\alpha_{1}}{L^{ \Delta_{s}}}\), where \(\alpha_{1}\) is a dimensionless parameter. \(L\) appears raised to the power of the conformal dimension of \(s\) since it is the only dimensionful parameter available23. Footnote 23: In \(\mathbb{R}^{3}\) there is no such length scale, resulting in 1pt functions that are zero.
* \(\int_{T^{3}}d^{3}z\left\langle s^{\prime}\left(z\right)\right\rangle_{\text{ CFT}}=\left\langle s^{\prime}\left(0\right)\right\rangle_{\text{CFT}}\int_{T^{3}}d^{3}z= \frac{\alpha_{2}}{L^{\Delta_{s^{\prime}}-3}}\), where we used translation invariance to bring the expectation value of \(s^{\prime}\) out of the integral.
* \(\int_{T^{3}}d^{3}z\left\langle s\left(0\right)s^{\prime}\left(z\right)\right\rangle _{\text{CFT}}=\frac{\alpha_{3}}{L^{\Delta_{s}+\Delta_{s^{\prime}}-3}}\) for the same reasons as before.
Thus, we obtain the following perturbed 2pt-function
\[\left\langle\phi^{\dagger}\left(x\right)\phi\left(0\right)\right\rangle=|x|^{ -2\Delta_{\phi}}\left[1+\tilde{\alpha}_{1}\left(\frac{a}{L}\right)^{\Delta_{s ^{\prime}}-3}+\tilde{\alpha}_{2}\left(\frac{|x|}{L}\right)^{\Delta_{s}}\left( 1+\tilde{\alpha}_{3}\left(\frac{a}{L}\right)^{\Delta_{s^{\prime}}-3}\right) \right]. \tag{100}\]
By acting with \(\square\) on this, we obtain the second term in eq. (101).
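For illustration, the resulting ansatz can be fitted to the measured lattice 2pt function with a standard least-squares routine. The sketch below uses indicative values of \(\Delta_{s}\) and \(\Delta_{s^{\prime}}\) for the 3d \(O(2)\) model (assumed here, not determined in this work) and treats \(\Delta_{\phi}\) and the \(\tilde{\alpha}_{i}\) as free parameters.

```
import numpy as np
from scipy.optimize import curve_fit

DELTA_S, DELTA_SP = 1.51, 3.79   # indicative 3d O(2) values, assumed here

def two_pt_ansatz(xL, delta_phi, a1, a2, a3):
    """Perturbed 2pt function in lattice units (a = 1): the expression in
    square brackets above, multiplied by |x|^(-2 delta_phi)."""
    x, L = xL
    lat = (1.0 / L) ** (DELTA_SP - 3.0)          # (a/L)^(Delta_s' - 3)
    fin = (x / L) ** DELTA_S                     # (|x|/L)^(Delta_s)
    return x ** (-2.0 * delta_phi) * (1.0 + a1 * lat + a2 * fin * (1.0 + a3 * lat))

# Hypothetical usage with flattened data columns x, L, G and errors dG:
# popt, pcov = curve_fit(two_pt_ansatz, (x_data, L_data), G_data, sigma=dG_data)
```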
Corrections to the OPE coefficients are obtained using the same ideas, but the derivations are significantly more cumbersome. As such, we only show the end result. Thus, the lattice estimate of the OPE coefficient appearing on the right-hand side of eq. (11), here denoted as \(\lambda_{\text{OPE}}^{\text{(lat)}}\), is related to the "true" OPE coefficient, \(\lambda_{\text{OPE}}\), as
\[\lambda_{\text{OPE}}^{\text{(lat)}}=\lambda_{\text{OPE}}\left[1+ \beta_{1}\left(\frac{a}{L}\right)^{\Delta_{s^{\prime}}-3}+\beta_{2}\left(\frac {x}{L}\right)^{\Delta_{s}}\left(1+\beta_{3}\left(\frac{a}{L}\right)^{\Delta_{ s^{\prime}}-3}\right)\right.\] \[\left.+\beta_{4}\left(\frac{a}{x}\right)^{2}\left(1+\beta_{5} \left(\frac{x}{L}\right)^{\Delta_{s}}\right)+\ldots\right], \tag{101}\]
where we kept all terms up to order \(\left(\frac{a}{x}\right)^{2}\), \(\left(\frac{a}{L}\right)^{\Delta_{s^{\prime}}-3}\) and \(\left(\frac{x}{L}\right)^{\Delta_{s}}\), excluding mixed terms. This relation depends on the charges appearing on the left-hand side of eq. (11) through OPE coefficients of the type \(\lambda_{Q,-Q,s}\) and multiplicative factors of \(\Delta_{Q}\) (these will never appear on the exponents of \(x\) or \(a\)). The expansion (101) is independent of the charges of the operators, up to the undetermined coefficients.
|
2309.11641 | Attentive VQ-VAE | We present a novel approach to enhance the capabilities of VQ-VAE models
through the integration of a Residual Encoder and a Residual Pixel Attention
layer, named Attentive Residual Encoder (AREN). The objective of our research
is to improve the performance of VQ-VAE while maintaining practical parameter
levels. The AREN encoder is designed to operate effectively at multiple levels,
accommodating diverse architectural complexities. The key innovation is the
integration of an inter-pixel auto-attention mechanism into the AREN encoder.
This approach allows us to efficiently capture and utilize contextual
information across latent vectors. Additionally, our models uses additional
encoding levels to further enhance the model's representational power. Our
attention layer employs a minimal parameter approach, ensuring that latent
vectors are modified only when pertinent information from other pixels is
available. Experimental results demonstrate that our proposed modifications
lead to significant improvements in data representation and generation, making
VQ-VAEs even more suitable for a wide range of applications as the presented. | Angello Hoyos, Mariano Rivera | 2023-09-20T21:11:36Z | http://arxiv.org/abs/2309.11641v2 | # Attentive Vq-Vae
###### Abstract
We present a novel approach to enhance the capabilities of VQ-VAE models through the integration of an Attentive Residual Encoder (AREN) and a Residual Pixel Attention layer. The objective of our research is to improve the performance of VQ-VAE while maintaining practical parameter levels. The AREN encoder is designed to operate effectively at multiple levels, accommodating diverse architectural complexities. The key innovation is the integration of an inter-pixel auto-attention mechanism into the AREN encoder. This approach allows us to efficiently capture and utilize contextual information across latent vectors. Additionally, our models uses additional encoding levels to further enhance the model's representational power. Our attention layer employs a minimal parameter approach, ensuring that latent vectors are modified only when pertinent information from other pixels is available. Experimental results demonstrate that our proposed modifications lead to significant improvements in data representation and generation, making VQ-VAEs even more suitable for a wide range of applications.
Mariano Rivera and Angello Hoyos Centro de Investigacion en Matematicas A.C.
Guanajuato, Gto. 36120, Mexico +
Footnote †: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
VQ-VAE, Attention, Face Generation, GANs.
## 1 Introduction
The field of generative models has seen significant advances in recent years, enabling the creation of high-quality and diverse synthetic data [1, 2, 3, 4, 5, 6, 7, 8]. Among these, the Variational Autoencoder (VAE) has emerged as a robust framework for learning latent representations of data distributions [1]. However, traditional VAEs need help generating rich textures and tend to produce over-smoothed images. Thus, refinement models are used to improve the image's realness. More recently, the Variational Autoencoder with Vectored Quantization (VQ-VAE) [5] was introduced to address VAE's limitations, offering a novel approach that combines the strengths of autoencoding with discrete vector quantization [9]. The VQ-VAE architecture provides a unique solution to the problem of capturing fine-grained details in data while maintaining the interpretability of latent representations. However, traditional VQ-VAE faces challenges in modeling complex dependencies and preserving long-range consistency in generated samples. With an extra computational cost, a solution consists of implementing a hierarchical codification [5]. Hence, the latent vectors, \(\mathbf{z}_{i}\) in higher codification levels, are computed with an extended support region in the original image.
Among applications of VQ-VAEs are image denoising, data compression [9], data generation [10, 8], abnormality detection [11], and image/video super-resolution [12], to mention a few. Herein, we introduce a variant of VQ-VAE by incorporating an Attention and Hierarchical mechanisms [13, 14] that extends the codification capabilities. These advancements enable the encoder to more effectively preserve intricate features in the generated samples. For instance, in image generation, Attentive VQ-VAE demonstrates improved capabilities in capturing subtle facial features, like the symmetry of facial attributes, color distribution of eyes, and nuanced contours of facial components. This paper presents the architecture and training strategy of Attentive VQ-VAE, demonstrating its effectiveness through extensive experimental results. Incorporating a Generative Adversarial Network (GAN) training strategy further enriches the model's capabilities by reducing the required training iterations. Our numerical experiments highlight the distinct advantages of Attentive VQ-VAE over its predecessors.
The remainder of this paper is organized as follows: Section 2 provides a comprehensive overview of VQ-VAE models. Section 3 details the architecture of the Attentive Hierarchical VQ-VAE, including its novel components and improvements over the original VQ-VAE framework. That Section also delves into the training strategy based on GANs techniques [15]; indeed, a PatchGAN [16]. Section 4 presents the experimental setup and showcases the results obtained
Figure 1: General scheme of a VQ–VAE
by investigated versions of Attentive Hierarchical VQ-VAE, quantitatively and qualitatively comparing their performance. Finally, Section 5 concludes the paper by summarizing the contributions and discussing potential future research directions in the Attentive VQ-VAE and generative modeling.
## 2 Related Work
In Fig. 1, we depict the general scheme of the VQ-VAE. In this scheme, the encoder (ENC) transforms the input data from its original space into an array of latent vectors \(\mathbf{z}\) (of dimension \(N\)) with minimal information loss. Subsequently, a Vector Quantizer (VQ) replaces each vector \(\mathbf{z}_{i}\in\mathbf{z}\) with the vector
\[\mathbf{e}_{k}=\arg\min_{\mathbf{e}_{k}\in D}\|\mathbf{z}_{i}-\mathbf{e}_{k} \|, \tag{1}\]
where \(D\) is a dictionary of vectors learned from the data. Hence, each latent vector can only be one of those defined by the dictionary. In this architecture, such a dictionary is learned from the data at training time. Thus, if the dictionary \(D\) exhibits sufficient diversity and the encoder effectively maps \(\mathbf{z}\) to \(\mathbf{e}\) with minimal error, then the decoder (DEC) is capable of generating a reliable reconstruction of the original data. An important distinction between the VQ-VAE and the traditional VAE lies in their treatment of latent variables: the former quantizes them [17, 5], while the latter constrains the latent variables by imposing a prior distribution on them, often adopting a multivariate normal distribution with zero mean and identity covariance: \(p(\mathbf{z})\sim\mathcal{N}(0,I)\).
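As a minimal illustration of eq. (1), the quantizer can be written as a nearest-neighbour lookup in the dictionary. The sketch below uses PyTorch (the framework is our assumption, not specified in this paragraph) and includes the straight-through gradient trick that is standard practice in VQ-VAE training.

```
import torch

def quantize(z, codebook):
    """Replace each latent vector z_i by its nearest dictionary entry e_k.
    z: (B, H, W, N) latent vectors, codebook: (K, N) learned dictionary."""
    flat = z.reshape(-1, z.shape[-1])              # (B*H*W, N)
    dist = torch.cdist(flat, codebook)             # ||z_i - e_k|| for all pairs
    idx = dist.argmin(dim=1)                       # arg min_k of eq. (1)
    e = codebook[idx].reshape(z.shape)
    return z + (e - z).detach(), idx               # straight-through estimator
```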
VQ-VAE models have been noted for their efficiency in encoding data into low-dimensional latent spaces. Once one trains a VQ-VAE model, the prior distribution of the latent variables, \(p(\mathbf{z})\), can be learned using autoregressive models such as PixelCNN [3, 18]. Then, one can generate new data by sampling \(p(\mathbf{z})\) and decoding such samples. Our work focuses solely on improving the efficiency and capacity of VQ-VAE models. It is essential to mention that the prior distribution \(p(\mathbf{z})\) estimation is beyond our study's scope. We focus on the VQ-VAE encoder's inherent loss of information challenge. In the context of facial images, VAE-based models have exhibited deficiencies in global consistency [10, 8]. These issues encompass generated faces with asymmetrical features (e.g., eyes of different colors) and overall incoherence (e.g., disproportional features). One can attribute these challenges to the latent vectors derived from convolutional networks, which capture characteristics from localized regions. However, enlarging the support region implies expanding convolutional kernels, leading to an escalation in parameters and training time.
## 3 Method
Our focus in this paper centers on enhancing the encoder's capabilities by redesigning encoders, reducing encoding resolution levels, and integrating an attention mechanism. These modifications aim to augment the performance of the VQ-VAE while maintaining parameters at practical levels.
### Attentive Residual Encoder
Fig. 2 depicts the schematic of our proposed Attentive Residual Encoder (AREN). Our encoder design draws inspiration from the multilevel encoder proposal of VQ-VAEv2. Although Fig. 2 illustrates the two-level case, our implementation accommodates multiple levels (we have tested up to three). The dashed rectangle represents the encoder within the broader scheme, as shown in Fig. 1. Notably, the base encoder's output is shared among all encoder levels. We design the AREN-type encoders to operate in a complementary manner, effectively splitting the information corresponding to each level. The latent vector of the upper level undergoes quantization (VQ2) and scaling (RZ2) to align the height and width dimensions with those of the lower level. Then, we concatenate, by channels, the quantized response of the upper level with the AREN response of the lower level. We combine the concatenated channels with a 1x1 convolution. This processed tensor serves as the output of the proposed encoder and is passed to the vector quantizer.
In Panel (a) of Fig. 3, we illustrate the constituents of the AREN encoder. It adopts the convolutional residual network architecture ResNetv2 [19], adding a pixel attention layer and a 1x1 convolution to adjust channel numbers. Particularly noteworthy is the inter-pixel auto-attention layer inspired by PixelAttention; see Panel (b) in Fig. 3.
**Additional Encoding Levels.** To introduce an extra encoding level, a hypothetical level 0 in Figure 2, we take the output of the current lower level and adjust its dimensions to match the AREN output of the new lower level. We then concatenate those tensors by channels. The remaining aspects of the new lower level closely resemble those of the level immediately above it.
### Residual Pixel Attention
The attention layer objective is to incorporate information from similar pixels into each latent vector through a residual-based strategy. Our attention layer employs minimal parameters, and the residual approach ensures modifications to the
Figure 2: Attentive Residual Encoder (AREN).
latent vector only when pertinent information from other pixels is available.
Given the tensor \(x\), with dimensions \((h,w,f)\), the attention matrix \(W\) is computed as
\[W_{ij}=\sigma(g_{1}(x_{i})\,g_{2}(x_{j})^{\top}), \tag{2}\]
where \(g_{1}\) and \(g_{2}\) are convolutional layers with kernel size equal to \(1\times 1\), \(\sigma\) denotes the sigmoid activation function, and the number of filters equals the latent space dimension, \(c\). Then, the residual attention layer implements:
\[x\gets x+Wx \tag{3}\]
The pseudocode in Algorithm 1 presents the details of the attention layer. It computes self-attention if we pass \(y=x\) as parameters; otherwise, it computes cross-attention.
```
1:Input Tensor \(x\) with dimensions \((b,h,w,f)\)
2:Input Tensor \(y:(b,h,w,f)\)
3:Output Updated \(x:(b,h,w,c)\)
4:procedureAttention(\(x,y\))
5:\(y\leftarrow\textit{Conv2D}(c,1\times 1)(y)\)\(\triangleright\) Num filters \(c\)
6:\(x\leftarrow\textit{Conv2D}(c,1\times 1)(x)\)
7:\(y\leftarrow\textit{Reshape}(b,h\times w,c)(y)\)
8:\(x\leftarrow\textit{Reshape}(b,h\times w,c)(x)\)
9:\(W_{b,i,j}\gets x_{b,i,c}\,y_{b,j,c}\)\(\triangleright\) Einstein notation
10:\(W\leftarrow\sigma(W)\)
11:\(x_{b,i,c}\gets x_{b,i,c}+W_{b,i,j}\,y_{b,j,c}\)\(\triangleright\) Einstein notation (sum over \(j\))
12:\(x\leftarrow\textit{Reshape}(b,h,w,c)(x)\)
13:return\(x\)
```
**Algorithm 1** PixelAttentionV2
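A runnable version of this pseudocode could look as follows in PyTorch (our sketch: the original operates on channel-last \((b,h,w,f)\) tensors in an unspecified framework, whereas here we use channel-first tensors; \(g_{1}\) and \(g_{2}\) are the two \(1\times 1\) convolutions of eq. (2)).

```
import torch
import torch.nn as nn

class ResidualPixelAttention(nn.Module):
    """Residual pixel attention of eqs. (2)-(3): W = sigmoid(g1(x) g2(y)^T)
    over pixel pairs, followed by the residual update x <- x + W y."""
    def __init__(self, in_channels, c):
        super().__init__()
        self.g1 = nn.Conv2d(in_channels, c, kernel_size=1)
        self.g2 = nn.Conv2d(in_channels, c, kernel_size=1)

    def forward(self, x, y=None):
        y = x if y is None else y                                # y = x: self-attention
        b, _, h, w = x.shape
        xv = self.g1(x).reshape(b, -1, h * w).transpose(1, 2)    # (b, hw, c)
        yv = self.g2(y).reshape(b, -1, h * w).transpose(1, 2)    # (b, hw, c)
        W = torch.sigmoid(torch.einsum('bic,bjc->bij', xv, yv))  # (b, hw, hw)
        out = xv + torch.einsum('bij,bjc->bic', W, yv)           # residual update
        return out.transpose(1, 2).reshape(b, -1, h, w)          # back to (b, c, h, w)
```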
### Implementation Details
Assuming input images of \(256\times 256\) pixels with three color channels \((H,W,3)\), the Base-Residual-Encoder transforms them into a tensor \((H,W,C)\).
We draw inspiration from ResNetV2 to implement our blocks "Id-ResBlock" and "Conv-ResBlock". Let us break down the structure of these blocks:
1. Id-ResBlock (Identity Residual Block). Layers: Batch-Normalization, LeakyReLU Activation (with a slope of \(\alpha=0.1\)), and Convolution2D (with the number of filters equal to the number of input channels); the output of the last convolution is summed with the input to the block. Hence, this block's input and output have matching dimensions.
2. Conv-ResBlock (Convolutional Residual Block). This block consists of the same layers as Id-ResBlock, with the difference that the Convolution2D has a stride equal to (2,2). With this stride, the width and height dimensions are reduced by half. For this reason, a Convolution2D with the same parameters is applied to the input data to be added to the output of the main execution path.
In summary, both "Id-ResBlock" and "Conv-ResBlock" are designed to facilitate the flow of information through deep neural networks while addressing the vanishing gradient problem. The "Id-ResBlock" maintains input and output dimensions, while the "Conv-ResBlock" reduces dimensions through a stride in the convolutional layer and adjusts the input accordingly to allow for summation. These blocks are crucial for enabling the training of deep neural networks effectively. Table 1 summarizes the remaining hyper-parameters; all encoders are residual, with the distinction that the Base-Encoder ends with a \(1\times 1\) convolution whose number of filters equals the latent dimension, instead of the attention module used in the ARENs. The Discriminator is a convolutional network with the strides indicated in the next row of the table.
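A PyTorch sketch of the two blocks follows. The \(3\times 3\) kernel size and padding are our assumptions (the text specifies only the layer types, the LeakyReLU slope, the number of filters, and the stride); per the description, the skip path of Conv-ResBlock reuses a convolution with the same parameters as the main path.

```
import torch.nn as nn

class IdResBlock(nn.Module):
    """BatchNorm -> LeakyReLU(0.1) -> Conv2D, summed with the block input;
    input and output dimensions match."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(channels), nn.LeakyReLU(0.1),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1))

    def forward(self, x):
        return x + self.body(x)

class ConvResBlock(nn.Module):
    """Same layers with a (2, 2) stride, halving height and width; the input is
    passed through a convolution with the same parameters before the sum."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_channels), nn.LeakyReLU(0.1),
            nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=2, padding=1))
        self.skip = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=2, padding=1)

    def forward(self, x):
        return self.skip(x) + self.body(x)
```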
## 4 Experiments
In this work, for demonstration purposes, we focus on face generation using the CelebA-HQ dataset[20] with a resolution equal to \(256\times 256\) pixels. We used 80% of the images for training and the remaining 20% for testing.
Fig. 4 depicts generated faces; as we can see, these are very similar to the input faces. Panel (a) shows random examples from the test set (Ground Truth, GT). The following panels show the reconstructions computed with our proposed Attentive and Hierarchical VQ-VAE variants:
* Fig. 4(b) Two levels of encoding without attention. The number of active vectors per level was [54,75] for the
\begin{table}
\begin{tabular}{l l l l} \hline \hline & \multicolumn{3}{c}{Convolutional} \\ Module & R\({}_{x}\) & filters & Output \\ \hline \hline Base Res-Encoder & 3,2 & (128,128,128), +256 & (64,64,266) \\ AREN 1 & 2,2 & (128,128) & (32,32,256) \\ AREN 2 & 3,2 & (128,128,128) & (16,16,256) \\ AREN 3 & 3,2 & (128,128,128, 128) & (8,8,256) \\ Discriminator & — & (128,128,128,64,64,1) & (32,32,1) \\ \cline{1-1} (strides) & — & (2,2,2,1,1,1) & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of model hyper-parameters.
Figure 3: Modules of AREN: (a) Residual Encoder. and (b) Self–attention module.
low and high levels. The predictions have a \(\text{MAE}/\sigma=0.2194\), where \(\sigma=0.31\) is the variance of the data.
* Fig. 4(c) Attentive Hierarchical VQ-VAE, the active vectors were [81,1] and \(\text{MAE}/\sigma=0.2138\).
* Fig. 4(d) Attentive VQ-VAE using only one level, the active vectors were [55] and \(\text{MAE}/\sigma=0.2259\).
We trained the models for 400 epochs. Our Hierarchical VQ-VAE (two-level) preserves the textures slightly better than the version with attention. However, the model with attention achieves better symmetry in the generated faces; we can note it when comparing the colors of both eyes. We noted that the attention mechanism alone was sufficient to incorporate long-range relationships between regions of the images. Therefore, we simplified the model by leaving a single level and the attention module, Fig 4(d). The attentive model preserves better face symmetry.
Table 2 shows the computational resources demanded for each model. We train the models in NVidia 3090 RTX for 400 epochs. The patch-discriminator architecture (\(0.412\) Millions of parameters) was the same for all the models. Since all the models were trained for 400 epochs, it is reasonable to expect that the AH-VQVAE model has not reached the same grade of convergence as the simplest model. That could explain some observed asymmetries.
## 5 Conclusion
We have introduced the Attentive VQ-VAE to enhance the capabilities of VQ-VAE models. The motivation behind this work arises from the need to address limitations in existing generative models, particularly those related to fine-grained details and global consistency in generated images. Our proposed Attentive VQ-VAE incorporates attention and hierarchical mechanisms. These additions aim to improve the encoding and generation capabilities of VQ-VAEs significantly. Through experiments, we demonstrated the effectiveness of the Attentive VQ-VAE in generating high-quality and realistic face images while simultaneously reducing the computational cost. Our attentive model exhibited similar performance in symmetry, color distribution, and facial feature preservation compared to Hierarchical VQ-VAE models. Furthermore, we used a strategy based on Generative Adversarial Networks (GANs) that contributes to more efficient and effective training. We confirm adding attention does not significantly increase the number of parameters, although it does increase the computational training time. On the other hand, we can obtain, through attention, results similar in quality to those obtained using hierarchical models while keeping the complexity of the model and its training time under control. In summary, our research showcases the potential of the Attentive VQ-VAE as a valuable tool for various applications, especially in image generation. Our model opens up exciting possibilities in computer vision, image processing, and generative art by addressing the challenges of fine-grained details and global consistency. Looking ahead, our work sets the stage for future research directions in generative modeling. Exploring the Attentive VQ-VAE's potential in other domains and extending its capabilities for even higher-resolution images represent promising avenues for further investigation. We believe this model can significantly contribute to the advancement of generative models and their applications in various creative and practical domains.
**Acknowledges.** Work supported by CONAHCYT, Mexico (MR Grant CB-A1-43858, AH Scholarship).
\begin{table}
\begin{tabular}{l c c} \hline \hline & Num. parameters & Training time \\ Model & (millions) & (secs. per epoch ) \\ \hline \hline H-VQVAE & 12.446 & 923 \\ AH-VQVAE & 12.496 & 1195 \\ A-VQVAE & **7.549** & **815** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Computational resources.
Figure 4: (a) Random CelebA-HQ images \(256\times 256\) pixels, and their reconstructions computer with the proposed method: (b) Hierarchical without attention and two encoding levels; (c) Attentive–Hierarchical VQ-VAE with two encoding levels; and (d) Attentive VQ-VAE with one encoding level. |
2308.16561 | MoMA: Momentum Contrastive Learning with Multi-head Attention-based
Knowledge Distillation for Histopathology Image Analysis | There is no doubt that advanced artificial intelligence models and high
quality data are the keys to success in developing computational pathology
tools. Although the overall volume of pathology data keeps increasing, a lack
of quality data is a common issue when it comes to a specific task due to
several reasons including privacy and ethical issues with patient data. In this
work, we propose to exploit knowledge distillation, i.e., utilize the existing
model to learn a new, target model, to overcome such issues in computational
pathology. Specifically, we employ a student-teacher framework to learn a
target model from a pre-trained, teacher model without direct access to source
data and distill relevant knowledge via momentum contrastive learning with
multi-head attention mechanism, which provides consistent and context-aware
feature representations. This enables the target model to assimilate
informative representations of the teacher model while seamlessly adapting to
the unique nuances of the target data. The proposed method is rigorously
evaluated across different scenarios where the teacher model was trained on the
same, relevant, and irrelevant classification tasks with the target model.
Experimental results demonstrate the accuracy and robustness of our approach in
transferring knowledge to different domains and tasks, outperforming other
related methods. Moreover, the results provide a guideline on the learning
strategy for different types of tasks and scenarios in computational pathology.
Code is available at: \url{https://github.com/trinhvg/MoMA}. | Trinh Thi Le Vuong, Jin Tae Kwak | 2023-08-31T08:54:59Z | http://arxiv.org/abs/2308.16561v1 | MoMA: Momentum Contrastive Learning with Multi-head Attention-based Knowledge Distillation for Histopathology Image Analysis
###### Abstract
There is no doubt that advanced artificial intelligence models and high quality data are the keys to success in developing computational pathology tools. Although the overall volume of pathology data keeps increasing, a lack of quality data is a common issue when it comes to a specific task due to several reasons including privacy and ethical issues with patient data. In this work, we propose to exploit knowledge distillation, i.e., utilize the existing model to learn a new, target model, to overcome such issues in computational pathology. Specifically, we employ a student-teacher framework to learn a target model from a pre-trained, teacher model without direct access to source data and distill relevant knowledge via momentum contrastive learning with multi-head attention mechanism, which provides consistent and context-aware feature representations. This enables the target model to assimilate informative representations of the teacher model while seamlessly adapting to the unique nuances of the target data. The proposed method is rigorously evaluated across different scenarios where the teacher model was trained on the same, relevant, and irrelevant classification tasks with the target model. Experimental results demonstrate the accuracy and robustness of our approach in transferring knowledge to different domains and tasks, outperforming other related methods. Moreover, the results provide a guideline on the learning strategy for different types of tasks and scenarios in computational pathology. Code is available at: [https://github.com/trinhvg/MoMA](https://github.com/trinhvg/MoMA).
Knowledge distillation, momentum contrast, multi-head self-attention, computational pathology
## I Introduction
Computational pathology is an emerging discipline that has recently shown great promise to increase the accuracy and robustness of conventional pathology, leading to improved quality of patient care, treatment, and management [1]. Due to the developments of advanced artificial intelligence (AI) and machine learning (ML) techniques and the availability of high-quality and -resolution datasets, computational pathology approaches have been successfully applied to various aspects of the routine workflow in the conventional pathology from nuclei detection [2], tissue classification [3], and disease stratification [4] to survival analysis [5, 6]. However, recent studies have pointed out that the issues with the generalizability of computational pathology tools still remain unsolved yet [7, 8].
In order to build accurate and reliable computational pathology tools, not only an advanced AI model but also a large amount of quality data is needed. In computational pathology, both the learning capability of the recent AI and ML techniques and the amount of the available pathology datasets keep increasing. However, the quantity of publicly available datasets are far fewer than those in other disciplines such as natural language processing (NLP) [9] and computer vision [10, 11]. It is partially due to the nature of pathology datasets, which include multi-gigapixel whole slide images (WSIs) and thus it is hard to share them across the world transparently, but also due to the privacy and ethical issues with patient data. Moreover, the diversity of pathology datasets is limited. For example, Kather19 [12] contains 100,000 image patches of 9 different colorectal tissue types. These 100,000 image patches were initially generated from only 86 images. GLySAC [13] includes 30,875 nuclei of 3 different cell types from 59 gastric image patches. These were prepared from 8 WSIs that were digitized using a single digital slide scanner. The lack of diversity among scanners in the dataset also hampers the generalization of AI models in computational pathology [7]. In fact, there have been some efforts to provide a large amount of diverse pathology datasets. For instance, PANDA [14] is a dataset for prostate cancer Gleason grading, which includes 12,625 WSIs from 6 institutes using 3 different digital slide scanners. However, when it comes to a specific computational pathology task, it is still challenging to obtain or have access to a sufficient number of diverse datasets. The collection of such datasets is time- and labor-intensive by any means. Therefore, there is an unmet need for computational pathology in developing task-specific models and tools.
Transfer learning is one of the most widely used learning methods to overcome the shortage of datasets by re-using or transferring knowledge gained from one problem/task to other problems/tasks. Though it has been successful and widely adopted in pathology image analysis and other disciplines, most of the previous works used the pre-trained weights from natural images such as ImageNet or JFT [15, 16]. A recent study demonstrated that the off-the-shelf features learned from these natural images are useful for computational pathology tasks, but the amount of transferable knowledge, i.e., the effectiveness of transfer learning is heavily dependent on the complexity/type of pathology images likely due to differences in image contents and statistics [17]. As the number of publicly available pathology image datasets increases, the pre-trained weights from such pathology image datasets may be used for transfer learning; however, it is unclear whether the quantity is large enough or the variety is diverse enough. It is generally
accepted that there are large intra- and inter-variations among pathology images. Hence, the effectiveness of the transfer learning may still vary depending on the characteristics of the datasets. Moreover, the knowledge distillation (KD), proposed by [18], is another approach that can overcome the deficit of proper datasets. It not only utilizes the existing model as the pre-trained weight (similar to transfer learning) but also forces the target (student) model to directly learn from the existing (teacher) model, i.e., the student model seeks to mimic the output of the teacher model throughout the training process. Variants of KD have been successfully applied to various tasks, such as model compression [19], cross-modal knowledge transfer [20, 21, 22], or ensemble distillation [23, 24, 25]. However, it has not been fully explored for transferring knowledge between models, in particular for pathology image analysis.
Herein, we sought to address the question of how to overcome the challenge of limited data and annotations in the field of computational pathology, with the ultimate goal of building computational pathology tools that are applicable to unseen data in an accurate and robust manner. To achieve this goal, we develop an efficient and effective learning framework that can exploit existing models built upon quality datasets and train a target model on a relatively small dataset. The proposed method, so-called **M**omentum contrastive learning with **M**ulti-head **A**ttention-based knowledge distillation (**MoMA**), follows the framework of KD for transferring relevant knowledge from the existing models and adopts momentum contrastive learning and an attention mechanism for obtaining consistent, reliable, and context-aware feature representations. We evaluate MoMA on multi-tissue pathology datasets under different settings to mimic real-world scenarios in the research and development of computational pathology tools. Compared to other methodologies, MoMA demonstrates superior capabilities in learning a target model for a specific task. Moreover, the experimental results provide a guideline to better transfer knowledge from pre-trained models to student models trained on a limited target dataset.
Our main contributions are summarized as follows:
* We develop an efficient and effective learning framework, so-called MoMA, that can facilitate building an accurate and robust computational pathology tool on a limited dataset.
* We propose attention-based momentum contrastive learning for KD to transfer knowledge from the existing models to a target model in a consistent and reliable fashion.
* We evaluate MoMA on multi-tissue pathology datasets and outperform other related works in learning a target model for a specific task.
* We investigate and analyze MoMA and other related works under various settings and provide a guideline on the development of computational pathology tools when limited datasets are available.
## II Related work
### _Tissue phenotyping in computational pathology_
Machine learning has demonstrated its capability to analyze pathology images in various tasks. One of its major applications is tissue phenotyping. In the conventional computational pathology, hand-crafted features, such as color histograms [26], gray level co-occurrence matrix (GLCM) [27], local binary pattern [28], and Gabor filters [27, 29], have been used to extract and represent useful patterns in pathology images. These hand-crafted features, in combination with machine learning methods such as random forest [30] and support vector machine [31, 32], were used to classify types of tissues or grades of cancers. With the recent advance in graphics processing unit (GPU) parallel computing, there has been a growing number of deep learning-based methods that achieve competitive results in tissue phenotyping of pathology images. For example, a deep convolutional neural network (CNN) was built and used to detect pathology images with prostate cancer [33] and mitosis cells in the breast tissues [34]. It was also adopted to detect cancer sub-types; for instance, [35] employed a CNN model to identify Epstein-Barr Virus (EBV) positivity in gastric cancers, which is a sign of a better prognosis. To further improve the performance of the deep learning models, various approaches have been proposed, such as multi-task learning [36], multi-scale learning [37], semi-supervised learning [38], or ensemble based models [39]. Though these works showed promising results, most of them still suffer from the limited quality of the training datasets, such as a lack of diversity or class imbalance in the datasets [40]. Transfer learning, which sought to leverage the pre-trained models or weights on other datasets or domains, such as ImageNet dataset as a starting point, is a simple yet efficient method to alleviate such problems in computational pathology [41, 42]. Although transfer learning with fine-tuning has shown to be effective in many computational pathology applications, this approach does not fully utilize the pre-trained models or weights.
### _Knowledge distillation_
_Knowledge distillation (KD):_ KD in deep learning was pioneered by [18] that transfers knowledge from a powerful source (or teacher) model with large numbers of parameters to another less-parameterized target (or student) model by minimizing the KL divergence between the two models. Transfer learning uses the pre-trained weights from the teacher model as a starting point only. Meanwhile, KD tries to utilize the teacher model throughout the entire training procedure.
_Feature-Map/Embedding distillation:_ Inspired by the vanilla KD [18], many variants of KD methods have been proposed, in particular utilizing intermediate feature maps or embeddings. For instance, FitNet [43] used _hint_ regressions to guide the feature activation of the student model. Attention mechanisms were applied to the intermediate feature maps to improve regression transfer [44] and to alleviate semantic mismatch across intermediate layers in SemCKD [45]. Neuron selectivity transfer (NST) [46] proposed to align the distribution of neuron selectivity pattern between student and teacher models.
Probabilistic knowledge transfer (PKT) [47] transformed the representations of the student and teacher models into probability distributions and subsequently matched them. Moreover, some others sought to transfer knowledge among multiple samples. For example, correlation congruence for KD (CCKD) [48] utilized the correlation among multiple samples for improved knowledge distillation. Contrastive loss, exploiting positive and negative pairs, was also employed for KD [19, 49].
Such models have been mainly utilized for model compression [18], cross-modal transfer [50], or ensemble distillation [51, 24]. For instance, in DeiT [52], an attention distillation was adopted to distill knowledge from ConvNets teacher model [53] and to train vision transformer (ViT) [54] on a small dataset; in [55], a KD method was used to segment chest computed tomography and brain and cardiac magnetic resonance imaging (MRI). In [21], a cross-modal KD method was proposed for the knowledge transfer from RGB to depth modality. In [22], knowledge was distilled from a rendered hand pose dataset to a stereo hand pose dataset. These works demonstrate that KD is not only applicable to model compression where the teacher and student models are trained on the same dataset but it could also be used to aid the student model in learning and conducting a relevant task.
Furthermore, several KD methods have been proposed for computational pathology. In [56], a multi-layer feature KD was proposed for breast, colon, and gastrointestinal cancer classification using pathology images. A semi-supervised student-teacher chain [57, 3] was proposed to make use of a large unlabeled dataset and to conduct pathology image classification. [58] developed another KD method for instance-segmentation in prostate cancer grading. In [59], KD was applied for distilling knowledge across image resolutions where the knowledge from a teacher model, trained on high-resolution images, was distilled to a student model, operating at low-resolution images, to classify celiac disease and lung adenocarcinoma in pathology images.
KD has been adopted for different tasks, settings, and problems. In this work, we exploit KD to transfer knowledge between teacher and student models, of which each is built and trained for the same, similar, and different classification tasks. To the best of our knowledge, this is the first attempt to investigate the effectiveness of the KD framework on such stratification of classification tasks in pathology image analysis.
### _Self-supervised momentum contrastive learning_
To overcome the lack of (labeled) quality datasets and to improve the model efficiency, several learning approaches have been proposed and explored in the AI community. Self-supervised learning emerges as an approach to learning the feature representation of an input (i.e., a pathology image in this study) in the absence of class labels for the target task. It has been successfully adopted to learn the feature representation in both NLP and computer vision tasks. Utilizing pretext tasks such as rotation [60], colorization [61], or jigsaw solving [62] is a popular self-supervised learning approach for computer vision tasks.
Contrastive learning is another self-supervised learning paradigm that exploits similar and/or dissimilar (contrasting) samples to enhance the representation power of a model, in general, as a pre-training mechanism. Intuitively, in contrastive learning, the model learns to recognize the difference in the feature representations among different images. There are three main variations in contrastive learning, namely end-to-end SimCLR [63], contrastive learning with a memory bank [64], and momentum contrast MoCo [65]. End-to-end SimCLR [63] has been the most natural setting where the positive and negative representations are from the same batch and updated the model end-to-end by back-propagation, which requires a large batch size. The optimization with a large batch size is challenging [66]; it is even harder for pathology image analysis since the deep learning models perform better with a large input image, providing more contextual information in high resolution. Contrastive learning with a memory bank [64] was proposed to store the representations of the entire training samples. For each batch, the additional negative representations were randomly sampled from the memory bank without backpropagation. This approach permits the support of large negative samples without requiring a large volume of GPU memory. In CRD [19], the memory bank contrastive learning was incorporated into the KD framework to conduct the contrastive representation distillation. However, the feature representation of each sample in the memory bank was updated when it was last seen, i.e., the feature representations were obtained by the encoders at various steps throughout the training procedure, potentially resulting in a high degree of inconsistency. MoCo [65] offers smoothly evolving encoders over time; as a result, the negative samples in the memory bank become more consistent.
Due to its advanced ability to learn the feature representation of an input image without the burden of labeling, both self-supervised learning and contrastive learning have been applied to computational pathology in various tasks [67]. ImPash [68] adopted contrastive learning to obtain an encoder with improved consistency and invariance of feature representations, leading to robust colon tissue classification. In [69, 70], contrastive learning was employed to analyze WSIs without the need for pixel-wise annotations. In [69], an advanced scheme to update the memory bank was proposed to store the features from different types/classes of WSIs, using the WSI-level labels as the pseudo labels for the patches.
In our KD context, teacher and student models are trained on different datasets/tasks/domains. Employing a stationary teacher model would only help if the only purpose of the student model is to exactly mimic the teacher model. Inspired by MoCo, we let the teacher model slowly evolve along with the student model on the target dataset. To further improve MoCo, we propose to incorporate the attention mechanism into MoCo so as to pay more attention to important positive and negative samples.
### _Attention_
Recently, the attention mechanism, which sought to mimic the cognitive attention of human beings, has appeared as the
centerpiece of deep learning models both in computer vision and NLP. Attention in deep learning is, in general, utilized to focus on some parts of images, regions, or sequences that are most relevant to the downstream tasks. There are various kinds of attention mechanisms in deep learning, such as spatial attention [71], channel attention in squeeze-and-excitation (SE) [72], and positional-wise attention [73]. In [74], a comprehensive review of the existing literature on various types of aggregation methods using the attention module in histopathology image analysis is presented. In [75], a weighted-average aggregation technique was employed to aggregate patch-level representations, generating WSI-level representations for breast metastasis and celiac gastric cancer detection. Furthermore, [76] introduced a novel approach for blood cancer sub-type classification through domain adversarial attention multi-instance learning, utilizing the attention mechanism proposed in [77].
Attention has been effectively combined with KD, such as in AT [44]. This approach used an activation-based spatial attention map, which is generated by averaging the values across the channel dimensions between the teacher and student models. A similar attention approach was proposed for computational pathology in [56]. Such studies used attention to effectively transfer knowledge between the student and teacher models through multiple intermediate feature maps.
Self-attention, first introduced by [78], enables the estimation of the relevance of a particular image, region, or sequence to others within a given context. Self-attention is one of the main components in the recent transformer-based models [79]. Self-attention is not only utilized as a component within the transformer model but it is also employed as an attention module itself. An insightful study in [80] demonstrated that the integration of self-supervised feature extraction into attention-based multiple-instance learning leads to robust generalizability in predicting pan-cancer mutations. In the context of cancer cell detection, [81] introduced a modified self-attention mechanism based on concatenation. Another interesting variation of self-attention is co-attention [82], which was applied to a multimodal transformer for survival prediction using WSIs and genome data. In our work, we adopt the self-attention module into the momentum contrastive learning framework to learn and utilize the correlation/relevance among the positive pairs and negative pairs.
## III Methods
### _Problem formulation_
The overview of the proposed MoMA is shown in Fig. 1 and Alg. 1. Let \(D^{SC}=\{(x_{i},y_{i})\}_{i=1}^{N_{SC}}\) be a source dataset and \(D^{TG}=\{(x_{i},y_{i})\}_{i=1}^{N_{TG}}\) be a target dataset where \(x_{i}\) and \(y_{i}\) represent the \(i\)th pathology image and its ground truth label, respectively, and \(N_{SC}\) and \(N_{TG}\) represent the number of source and target samples (\(N_{SC}\gg N_{TG}\)), respectively. Let \(\mathcal{F}^{T}\) be a teacher model and \(\mathcal{F}^{S}\) be a student model. \(\mathcal{F}^{T}\) consists of a teacher encoder \(f^{T}\) and a teacher classifier \(g^{T}\). \(\mathcal{F}^{S}\) includes a student encoder \(f^{S}\) and a student classifier \(g^{S}\). In addition to \(\mathcal{F}^{T}\) and \(\mathcal{F}^{S}\), MoMA includes a teacher projection head (\(p^{T}\)), a teacher attention head (\(h^{T}\)), a student projection head (\(p^{S}\)), and a student attention head (\(h^{S}\)). Given an input image \(x_{i}\), \(f^{T}\) and \(f^{S}\) extract initial feature representations, each of which is subsequently processed by a series of a projection head and an attention head, i.e., \(p^{T}\) followed by \(h^{T}\) or \(p^{S}\) followed by \(h^{S}\), to improve its representation power. \(g^{T}\) and \(g^{S}\) receive the initial feature representations and conduct image classification. \(g^{T}\) is only utilized during the training of \(\mathcal{F}^{T}\). Due to the restrictions on sharing medical data, we assume a scenario where \(\mathcal{F}^{T}\) has already been trained on \(D^{SC}\) and the pre-trained weights of \(\mathcal{F}^{T}\) are available, but direct access to \(D^{SC}\) is limited. Provided with the pre-trained \(f^{T}\), the objective of MoMA is to learn \(\mathcal{F}^{S}\) on \(D^{TG}\) in an accurate and robust manner. For optimization, MoMA exploits two learning paradigms: 1) the KD framework and 2) momentum contrastive learning. Combining the two learning methodologies, MoMA permits a robust and dynamic transfer of knowledge from \(f^{T}\), which was pre-trained on a high-quality dataset, i.e., \(D^{SC}\), to a target \(f^{S}\), which is trained on a limited dataset, i.e., \(D^{TG}\).
### _Network architecture_
We construct \(f^{S}\) and \(f^{T}\) using an identical CNN architecture, i.e., EfficientNet-b0. Both \(p^{S}\) and \(p^{T}\) are multilayer perceptron (MLP) layers consisting of a sequence of a fully-connected layer (FC), a ReLU layer, and a FC layer; the resultant output of each projector is a 512-dimensional vector. \(h^{S}\) and \(h^{T}\) represent the student and teacher multi-head self-attention layers (MSA) that are described in detail in III-C2. Classifiers (\(g^{T}\) and \(g^{S}\)) simply contain a single FC layer.
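For concreteness, the encoder–projector–classifier composition described above could be sketched as follows in PyTorch. The use of torchvision's EfficientNet-b0, the 1280-dimensional pooled feature size, and the hidden width of the MLP projector are assumptions made for illustration; only the backbone choice, the FC-ReLU-FC projector with a 512-dimensional output, and the single-FC classifier are stated above.

```python
import torch.nn as nn
from torchvision.models import efficientnet_b0

class StudentModel(nn.Module):
    """EfficientNet-b0 encoder with a projection head (FC-ReLU-FC -> 512-d)
    and a single-FC classifier, mirroring f^S, p^S, and g^S."""
    def __init__(self, num_classes: int, feat_dim: int = 1280, proj_dim: int = 512):
        super().__init__()
        backbone = efficientnet_b0(weights=None)
        backbone.classifier = nn.Identity()          # keep the pooled 1280-d features
        self.encoder = backbone                      # f^S
        self.projector = nn.Sequential(              # p^S: FC -> ReLU -> FC
            nn.Linear(feat_dim, feat_dim), nn.ReLU(inplace=True),
            nn.Linear(feat_dim, proj_dim),
        )
        self.classifier = nn.Linear(feat_dim, num_classes)  # g^S

    def forward(self, x):
        feat = self.encoder(x)                       # initial feature representation
        return self.classifier(feat), self.projector(feat)
```

The teacher branch would follow the same structure, with its classifier used only during teacher pre-training.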
During training, only \(\{f^{S},p^{S},h^{S},h^{T}\}\) are learned via gradient backpropagation, while we adopt the momentum update policy to update \(\{f^{mT},p^{mT}\}\) using \(\{f^{S},p^{S}\}\) where \(f^{mT}\) and \(p^{mT}\) are the momentum teacher encoder and projection head, which are described in section III-C1.
During inference, we only keep the student encoder \(f^{S}\) and the classifier \(g^{S}\) and discard the momentum teacher encoder \(f^{mT}\), the student projection head \(p^{S}\), the teacher projection head \(p^{mT}\), the teacher classifier \(g^{T}\), and the multi-head attention layers \(\{h^{S},h^{T}\}\). This results in the inference model that is identical to EfficientNet-b0.
### _Momentum contrastive learning with multi-head attention_
#### Iii-C1 Momentum contrastive learning
Contrastive learning aims to exploit and learn similar/dissimilar representations in the latent feature space from the positive/negative pairs of the input image. The way that the positive and negative pairs are obtained and utilized differs from one to the other. Self-supervised contrastive learning (SSCL) in MoCo-v2 [83] and SimCLR [63] obtain a positive pair by conducting data augmentation twice for the input image. In the conventional KD, the input image is encoded twice by the student model and the teacher model independently; meanwhile, the representations of two distinct images form a negative pair. In the end-to-end contrastive learning mechanism [63], both positive and negative pairs are acquired from the same batch. The larger batch size the end-to-end mechanism adopts, the better performance it achieves due to the availability of a
large number of negative samples, at the cost of large GPU memory usage. In MoCo [83], the negative representations are maintained in a queue, and only the positive pairs are encoded in each training iteration.
Inspired by MoCo, our MoMA registers a queue of negative representations \(Z^{queue}\) to increase the number of negative samples without high GPU memory demand. In every training iteration, we update \(Z^{queue}\) by enqueuing a new batch of \(N_{B}\) feature representations obtained from the teacher model and dequeuing the oldest \(N_{B}\) feature representations. To guarantee consistency among the negative samples in \(Z^{queue}\), we introduce the momentum teacher encoder \(f^{mT}\), which is updated along with the student encoder \(f^{S}\) via the momentum update rule, following MoCo-v2 [83]. Formally, denoting the parameters of \(f^{mT}\) and \(p^{mT}\) as \(\theta_{mT}\) and those of \(f^{S}\) and \(p^{S}\) as \(\theta_{S}\), we update \(\theta_{mT}\) by:
\[\theta_{mT}\leftarrow\alpha\theta_{mT}+(1-\alpha)\theta_{S}. \tag{1}\]
where \(\alpha\) is a momentum coefficient to control the contribution of the new weights from the student model. We empirically set \(\alpha\) to 0.9999, which is used in MoCo-v2 [83].
Given a batch of input images \(\{x_{i}\}_{i=1}^{N_{B}}\), two batches of feature representations \(Z^{mT}=\{z_{i}^{mT}\}_{i=1}^{N_{B}}\) and \(Z^{S}=\{z_{i}^{S}\}_{i=1}^{N_{B}}\) are obtained as follows:
\[z_{i}^{mT} =h^{T}(p^{mT}(f^{mT}(x_{i}))),i=1,...,N_{B} \tag{2}\] \[z_{i}^{S} =h^{S}(p^{S}(f^{S}(x_{i}))),i=1,...,N_{B}. \tag{3}\]
\(Z^{mT}=\{z_{i}^{mT}\}_{i=1}^{N_{B}}\) are used to update \(Z^{queue}=\{z_{i}^{queue}\}_{i=1}^{N_{Q}}\) where \(N_{Q}\) is the size of \(Z^{queue}\) (\(N_{Q}=16,384\)). At each iteration, \(Z^{mT}\) is enqueued into \(Z^{queue}\), and the oldest batch of feature representations are dequeued from \(Z^{queue}\), maintaining a number of recent batches of feature representations. For each input image \(x_{i}\), a positive pair is defined as \((z_{i}^{S},z_{i}^{T})\) and a number of negative pairs are defined as \(\{(z_{i}^{S},z_{j}^{queue})|j=1,...,N_{Q}\}\). Then, the objective function forces the positive pair to be closer and the negative pairs to be far apart in an SSCL fashion, which is described in section III-C3.
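A minimal sketch of the momentum update of Eq. (1) and the queue maintenance described above, written in a MoCo-style fashion; function and variable names are ours, and the assumption that \(N_{Q}\) is a multiple of the batch size follows common MoCo implementations rather than the text.

```python
import torch

@torch.no_grad()
def momentum_update(student_params, teacher_params, alpha=0.9999):
    """Eq. (1): theta_mT <- alpha * theta_mT + (1 - alpha) * theta_S."""
    for p_t, p_s in zip(teacher_params, student_params):
        p_t.data.mul_(alpha).add_(p_s.data, alpha=1.0 - alpha)

@torch.no_grad()
def update_queue(queue, queue_ptr, z_teacher):
    """Enqueue the latest batch of teacher representations and dequeue the oldest.
    queue: (N_Q, dim) tensor, queue_ptr: 1-element long tensor."""
    n = z_teacher.shape[0]
    ptr = int(queue_ptr)
    queue[ptr:ptr + n] = z_teacher            # assumes N_Q is a multiple of N_B
    queue_ptr[0] = (ptr + n) % queue.shape[0]
```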
#### Iii-B2 Multi-head attention for augmented feature representation
We adopt the self-attention (SA) mechanism, which was first introduced by [78], to reweight the feature representation of an input image with respect to the context of other images in the same iteration/batch. Formally, given a batch of \(N_{B}\) input images \(X=\{x_{i}\}_{i=1}^{N_{B}}\in\mathbf{R}^{N_{B}\times C\times H\times W}\), we obtain \(N_{B}\) \(d\)-dimensional feature embeddings \(E=\{e_{i}\}_{i=1}^{N_{B}}\in\mathbf{R}^{N_{B}\times d}\). Using \(E\), we define a triplet of learnable weight matrices \(W^{Q}\in\mathbf{R}^{d\times d_{q}}\), \(W^{K}\in\mathbf{R}^{d\times d_{k}}\), and \(W^{V}\in\mathbf{R}^{d\times d_{v}}\) that are used to compute queries \(Q=EW^{Q}\in\mathbf{R}^{N_{B}\times d_{q}}\), keys \(K=EW^{K}\in\mathbf{R}^{N_{B}\times d_{k}}\), and values \(V=EW^{V}\in\mathbf{R}^{N_{B}\times d_{v}}\), where \(d_{q}=d_{k}=d_{v}\) denote the dimensions of queries, keys, and values, respectively. Then, the re-weighted feature representations \(Z\in\mathbf{R}^{N_{B}\times d_{v}}\) are given by
\[Z=\mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d_{q}}}\right)V \tag{4}\]
By applying SA \(h\) times and concatenating the output of \(h\) SA heads, we obtain the multi-head SA (MSA) feature representations. We set the number of SA heads to \(h=4\). MSA is separately applied to the feature representations obtained from
Fig. 1: Overview of the MoMA: Attention-Augmented Momentum Contrast Knowledge Distillation framework. A batch of input images is encoded by the student encoder (\(f^{S}\)) and the momentum teacher encoder (\(f^{mT}\)), and each feature representation is reweighted with regard to other images in the batch as the context. A classifier is added on top of the student encoder. The student model is jointly optimized by contrastive loss and cross-entropy loss.
the student and teacher models, producing \(Z^{S}\) and \(Z^{T}\), respectively. The feature representations in \(Z^{queue}\) were already re-weighted by MSA. Hence, MSA allows for attending to parts of the student positive samples, teacher positive samples, and the enqueued negative samples differently.
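As an illustration, the batch-level re-weighting of Eq. (4) with \(h=4\) heads could be realized with PyTorch's built-in multi-head attention by treating the \(N_{B}\) embeddings of a batch as a single sequence; this sketch uses the library's parameterization rather than the explicit \(W^{Q}\), \(W^{K}\), and \(W^{V}\) matrices above, and the 512-dimensional input size follows the projector output.

```python
import torch.nn as nn

class BatchSelfAttention(nn.Module):
    """Re-weights each embedding in a batch with respect to the other
    embeddings in the same batch (Eq. 4), using h self-attention heads."""
    def __init__(self, dim: int = 512, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=num_heads,
                                          batch_first=True)

    def forward(self, z):                  # z: (N_B, dim)
        seq = z.unsqueeze(0)               # treat the batch as one sequence: (1, N_B, dim)
        out, _ = self.attn(seq, seq, seq)  # queries, keys, and values all come from z
        return out.squeeze(0)              # (N_B, dim) context-aware representations
```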
#### Iii-B3 Objective function
The objective function for our MoMA framework is given by:
\[\mathcal{L}=\mathcal{L}_{CE}+\mathcal{L}_{NCE}+\gamma\mathcal{L}_{KL} \tag{5}\]
where \(\mathcal{L}_{CE}\), \(\mathcal{L}_{NCE}\), and \(\mathcal{L}_{KL}\) denote cross-entropy loss, InfoNCE loss [84], and Hinton KD loss, respectively, and \(\gamma\) is a binary hyper-parameter (\(\gamma=1\) or \(0\)) that determines whether to include \(\mathcal{L}_{KL}\) depending on the type of distillation task, given by:
\[\gamma=\begin{cases}1&\text{if student and teacher models conduct the same task}\\ 0&\text{otherwise}\end{cases} \tag{6}\]
\[\mathcal{L}_{CE}(O^{S},Y)=-\sum_{i=1}^{N_{B}}\sum_{j=1}^{c}\mathbb{1}[y_{i}=j]\log\sigma_{j}(o_{i}^{S}) \tag{7}\]
where \(\sigma_{j}\) is the predicted probability for the \(j^{th}\) class computed by the softmax function and \(o_{i}^{S}\) and \(y_{i}\) denote the logit and ground truth of the \(i\)th image, respectively. \(\mathcal{L}_{KL}\) denotes KL divergence loss to minimize the difference between the predicted probability distributions given by \(\mathcal{F}^{S}\) and \(\mathcal{F}^{T}\) as follows:
\[\mathcal{L}_{KL}(O^{mT},O^{S})\] \[=-\mathcal{T}^{2}\sum_{i=1}^{N_{B}}\sum_{j=1}^{c}\sigma_{j}\left( \frac{o_{i}^{mT}}{\mathcal{T}}\right)\left[\log\sigma_{j}\left(\frac{o_{i}^{S }}{\mathcal{T}}\right)-\log\sigma_{j}\left(\frac{o_{i}^{mT}}{\mathcal{T}} \right)\right] \tag{8}\]
where \(\mathcal{T}\) is a softening temperature (\(\mathcal{T}\) = 4). \(\mathcal{L}_{NCE}\) is to optimize momentum contrastive learning in a self-supervised manner. Using \(Z^{S}\), \(Z^{mT}\), and \(Z^{queue}\), \(\mathcal{L}_{NCE}\) is calculated as follows:
\[\mathcal{L}_{NCE}(Z^{S},Z^{mT},Z^{queue})=-\sum_{i=1}^{N_{B}}\log\frac{\exp(z_{i}^{S}\cdot z_{i}^{mT}/\tau)}{\exp(z_{i}^{S}\cdot z_{i}^{mT}/\tau)+\sum_{j=1}^{N_{Q}}\exp(z_{i}^{S}\cdot z_{j}^{queue}/\tau)} \tag{9}\]
where \(\tau\) is a temperature hyper-parameter (\(\tau=0.07\)). By minimizing \(\mathcal{L}_{NCE}\), we maximize the mutual information between the positive pairs, i.e., \(Z^{S}\) and \(Z^{mT}\), and minimize the similarity between \(Z^{S}\) and the negative samples from \(Z^{queue}\).
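A minimal sketch of how the overall objective could be assembled in PyTorch. The InfoNCE helper follows the common MoCo-style implementation of Eq. (9); the L2-normalization of the feature representations is an assumption following that practice and is not stated above. The `gamma` flag switches the vanilla KD term of Eq. (8) on only when the teacher and student conduct the same task, as in Eq. (6).

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z_s, z_t, queue, tau=0.07):
    """Eq. (9): pull each (z_i^S, z_i^mT) pair together and push z_i^S away
    from the queued negative representations."""
    z_s, z_t, queue = (F.normalize(t, dim=1) for t in (z_s, z_t, queue))
    l_pos = (z_s * z_t).sum(dim=1, keepdim=True)           # (N_B, 1)
    l_neg = z_s @ queue.t()                                 # (N_B, N_Q)
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)                  # the positive sits at index 0

def moma_loss(logits_s, logits_t, targets, z_s, z_t, queue, gamma=1, T=4.0):
    """Eq. (5): cross-entropy + InfoNCE (+ Hinton KD when the tasks match)."""
    loss = F.cross_entropy(logits_s, targets) + info_nce_loss(z_s, z_t, queue)
    if gamma:
        loss = loss + (T * T) * F.kl_div(
            F.log_softmax(logits_s / T, dim=1),
            F.softmax(logits_t / T, dim=1),
            reduction="batchmean",
        )
    return loss
```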
## IV Experiments
### _Datasets_
#### Iv-A1 Teacher datasets
In this study, we employ two (large-scale) teacher datasets: one is a computational pathology dataset, and the other is a natural image dataset. The first one is the Prostate cANcer graDe Assessment (PANDA) dataset [14]. From this, we obtained 5,158 whole slide images (WSIs) digitized at 20x magnification using a 3DHistech Pannoramic Flash II 250 scanner (\(0.24\mu m\times 0.24\mu m\)/pixel) from Radboud University Medical Center, Netherlands. Using the 5,158 WSIs and their pixel-level annotations of benign (BN), grade 3 (G3), grade 4 (G4), and grade 5 (G5), we generate \(\sim\)100k patches of size 512 \(\times\) 512 pixels and divide them into a training set (10,809 BN, 20,948 G3, 32,986 G4, and 5,759 G5 image patches) and a test set (2,613 BN, 5,036 G3, 8,809 G4, and 1,239 G5 image patches). The second teacher dataset is the well-known ImageNet dataset, which is irrelevant to pathology images and tasks. We note that once the teacher models are trained on each of the teacher datasets, the teacher models are not re-trained on the target dataset; the pre-trained weights from the PyTorch library are adopted for the ImageNet teacher models.
#### Iv-A2 Prostate cancer 4-class dataset
**Prostate USZ**[85] was obtained from the Harvard dataverse ([https://dataverse.harvard.edu/](https://dataverse.harvard.edu/)). It is composed of 886 tissue core images, digitized at 40x magnification, that were scanned by a NanoZoomer-XR Digital slide scanner (Hamamatsu) (\(0.23\mu m\times 0.23\mu m\)/pixel) from University Hospital Zurich (USZ). Image patches are extracted at a size of 750 \(\times\) 750 pixels. Prostate USZ is used as the training (2076 BN, 6303 G3, 4541 G4, and 2383 G5 patches), validation (666 BN, 923 G3, 573 G4, and 320 G5 patches), and test (127 BN, 1602 G3, 2121 G4, and 387 G5 patches) sets for prostate cancer 4-class classification.
**Prostate UBC**[86] was acquired from the training set of the Gleason2019 challenge ([https://gleason2019.grand-challenge.org/](https://gleason2019.grand-challenge.org/)). Prostate UBC is used as an independent test set for prostate cancer 4-class classification. It involves a set of 244 prostate tissue cores that were digitized at 40x magnification (\(0.25\mu m\times 0.25\mu m\)/pixel) using an Aperio digital slide scanner (Leica Biosystems) and annotated by 6 pathologists at the Vancouver Prostate Centre. There are 17,066 image patches (1284 BN, 5852 grade 3, 9682 grade 4, and 248 grade 5), each of which has a size of 690 \(\times\) 690 pixels.
#### Iv-A3 Prostate cancer 5-class dataset
**Prostate AGGC22** was obtained from the training set of the Automated Gleason Grading Challenge 2022 ([https://aggc22.grand-challenge.org/](https://aggc22.grand-challenge.org/)). The dataset consists of three distinct subsets, all available at 20\(\times\) (\(0.5\mu m\times 0.5\mu m\)/pixel). The first subset comprises 105 whole mount images that were scanned using an Akoya Biosciences scanner. From these 105 images, we obtained 133,246 patches including 17,269 stroma, 15,443 BN, 36,627 G3, 57,578 G4, and 6,329 G5 patches of size 512\(\times\)512 pixels. We utilize these image patches to conduct a 5-fold cross-validation experiment, which is designated as AGGC CV. The second subset consists of 37 biopsy images that were scanned using an Akoya Biosciences scanner. The third subset encompasses 144 whole mount images scanned using multiple scanners from different manufacturers, including Akoya Biosciences (26 images), Olympus (25 images), Zeiss (15 images), Leica (26 images), KFBio (26 images), and Philips (26 images). We reserve the second and third subsets for testing purposes (AGGC test). AGGC test contains 190,451 patches of size 512\(\times\)512 pixels same size as the AGGC CV,
including 29,225 stroma, 16,272 BN, 53,602 G3, 90,823 G4, and 529 G5 patches.
#### Iv-A4 Colon tissue type classification datasets
The **Colon K19**[12] dataset includes 100,000 patches of size 224 \(\times\) 224 pixels digitized at 20\(\times\) magnification (\(0.5\mu m\times 0.5\mu m\)/pixel). The patches are categorized into 9 tissue classes: adipose (Ad), background (Bk), debris (De), lymphocytes (Ly), mucus (Mc), smooth muscle (Ms), normal colon mucosa (No), cancer-associated stroma (St), and tumor epithelium (Tu). We utilize Colon K19 for the training and validation of colon tissue type classification.
**Colon K16**[28] contains 5,000 image patches of size 224 \(\times\) 224 pixels scanned at 20\(\times\) magnification (\(0.495\mu m\times 0.495\mu m\)/pixel). There are eight tissue phenotypes, namely tumor epithelium (Tu), simple stroma (St), complex stroma (Complex St), lymphocyte (Ly), debris (De), normal mucosal glands (Mu), adipose (Ad), and background (Bk). The dataset is balanced with 625 patches per class. We use Colon K16 to test the model that was trained and validated on Colon K19. Since there is no complex stroma in the training set, we exclude this tissue type, resulting in a test set of 4,375 images from 7 tissue classes.
To resolve the difference in class labels between Colon K19 and Colon K16, we re-group the 9 classes of Colon K19 into 5 classes and the 7 classes of Colon K16 (excluding complex stroma) into 5 classes, following [87]. Specifically, we exclude Complex St and group stroma/muscle and debris/mucus as stroma and debris, respectively. The model is trained on K19 using 9 classes; the grouping is only used for inference on K16. There exist two versions of Colon K19, with and without Macenko stain normalization (SN), while K16 is only available without stain normalization; we use Macenko SN to construct the SN version of K16. We use both versions separately for training, validation, and testing purposes.
### _Implementation Details_
#### Iv-B1 Data augmentation
We employ _RandAugment_[88] for training all the student models. Prostate USZ is resized to \(512\times 512\) pixels during training and testing. Prostate UBC is center cropped from \(512\times 512\) pixels to \(448\times 448\) pixels. Colon K19 is trained and validated at their original size of \(224\times 224\) pixels, and Colon K16 is resized to \(224\times 224\) pixels during inference.
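A sketch of the corresponding torchvision pipelines is given below; only the use of _RandAugment_, the 512\(\times\)512 input size for Prostate USZ, and the 448\(\times\)448 center crop for Prostate UBC come from the text, while the RandAugment magnitude settings are assumed defaults.

```python
from torchvision import transforms

# Training-time pipeline for Prostate USZ (RandAugment settings are assumptions).
train_tf = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.RandAugment(num_ops=2, magnitude=9),
    transforms.ToTensor(),
])

# Test-time pipeline for Prostate UBC: resize to 512 x 512, then a 448 x 448 center crop.
test_tf_ubc = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.CenterCrop(448),
    transforms.ToTensor(),
])
```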
#### Iv-B2 Training details
All the networks are trained using the Adam optimizer with default parameter values \((\beta_{1}=0.9,\beta_{2}=0.9999,\epsilon=1.0e^{-8})\) and a batch size of 64 for the prostate datasets and 256 for the colon datasets. Cross-entropy is adopted as the classifier loss function for all the models. Each of the student models is trained for 50 epochs. The PANDA teacher is trained for 100 epochs. All the models are implemented on the PyTorch platform and executed on a workstation with two RTX A6000 GPUs.
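For reference, the optimizer configuration stated above can be written as follows; the learning rate is not reported in the text, so the value below is a placeholder assumption.

```python
import torch
from torch import nn

def make_optimizer(model: nn.Module, lr: float = 1e-4) -> torch.optim.Adam:
    # Adam with the beta/epsilon values stated above; lr is an assumed placeholder.
    return torch.optim.Adam(model.parameters(), lr=lr, betas=(0.9, 0.9999), eps=1e-8)
```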
### _Experimental design_
In order to evaluate the effectiveness of MoMA, we conduct three types of distillation tasks: 1) same task distillation: distillation between prostate cancer classification models, 2) relevant task distillation: distillation from 4-class prostate cancer classification to 5-class prostate cancer classification, and 3) irrelevant task distillation: distillation from prostate cancer classification to colon tissue type classification. Fig. 2 illustrates the distillation flow and the associated datasets and models.
We also compare MoMA with three different types of competing methods:
* **Transfer Learning** (\(TL\)): 1) TC\({}_{PANDA}\): \(f^{T}\) trained on PANDA without fine-tuning on student datasets, 2) ST\({}_{no}\): \(f^{S}\) without pre-trained weights, 3) ST\({}_{ImageNet}\): \(f^{S}\) with pre-trained weights on ImageNet, 4) ST\({}_{PANDA}\): \(f^{S}\) with pre-trained weights on PANDA.
* **Logits distillation** (\(LD\)): 1) Vanilla KD [18]: \(f^{S}\) with vanilla KD method, 2) SimKD [89]: \(f^{S}\) with re-used \(g^{T}\) but no vanilla KD. Vanilla KD and SimKD are only applied to the same task distillation experiments due to the usage of \(g^{T}\).
* **Feature-Map/Embedding Distillation** (\(FD\)): 1) FitNet [90], 2) AT [44], 3) SemCKD [45], 4) CC [48], and 5) CRD [19].
Moreover, we carry out an ablation study without utilizing MSA (MoMA w/o MSA) to investigate the influence of MSA on strengthening feature representations.
### _Quantitative evaluation_
We evaluate MoMA and its competing models on the three distillation tasks using 1) Accuracy (ACC), 2) Macro-average F1 (F1), and 3) quadratic weighted kappa (\(\kappa_{w}\)). For the
relevant distillation task on Prostate AGGC, we use the weighted-average F1, F1\({}_{w}\) = 0.25 * F1\({}_{G3}\) + 0.25 * F1\({}_{G4}\) + 0.25 * F1\({}_{G5}\) + 0.125 * F1\({}_{Normal}\) + 0.125 * F1\({}_{Stroma}\), which is the evaluation metric in the AGGC22 challenge.
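The three metrics, together with the AGGC22 weighted-average F1, can be computed with scikit-learn as sketched below; the per-class F1 dictionary passed to the second helper is an assumed data structure.

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score

def evaluate(y_true, y_pred):
    """ACC, macro-average F1, and quadratic weighted kappa."""
    return {
        "acc": accuracy_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred, average="macro"),
        "kappa_w": cohen_kappa_score(y_true, y_pred, weights="quadratic"),
    }

def aggc_weighted_f1(f1_per_class):
    """F1_w for Prostate AGGC; expects per-class F1 values keyed by class name."""
    return (0.25 * (f1_per_class["G3"] + f1_per_class["G4"] + f1_per_class["G5"])
            + 0.125 * (f1_per_class["Normal"] + f1_per_class["Stroma"]))
```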
## V Experimental results
### _Same task distillation: prostate cancer classification_
Table I and Fig. 3 show the results of MoMA and its competitors on the two TMA prostate datasets (Prostate USZ and Prostate UBC). On Prostate USZ, the teacher model TC\({}_{PANDA}\), which was trained on PANDA only, achieved 63.4 \(\%\) ACC, 0.526 F1, and 0.531 \(\kappa_{w}\), which are substantially lower than those of the other student models with \(TL\), \(LD\), and \(FD\). Among the student models with \(TL\), the student model with no pre-trained weights (ST\({}_{None}\)) was inferior to the other two student models; the student model pre-trained on PANDA (ST\({}_{PANDA}\)) outperformed the student model pre-trained on ImageNet (ST\({}_{ImageNet}\)). These results indicate the importance of pre-trained weights and fine-tuning on the target dataset, i.e., Prostate USZ. As for the KD approaches, MoMA\({}_{PANDA}\), pre-trained on PANDA, outperformed all other KD methods, achieving ACC of 73.6 \(\%\), which is 0.9 \(\%\) higher than ST\({}_{PANDA}\), and F1 of 0.687 and \(\kappa_{w}\) of 0.670, which are comparable to those of ST\({}_{PANDA}\).
On the independent test set, Prostate UBC, it is remarkable that TC\({}_{PANDA}\) achieved 78.2 \(\%\) ACC and 0.680 \(\kappa_{w}\), which are superior to those of all the student models with \(TL\), likely suggesting that the characteristics of PANDA are more similar to those of Prostate UBC than to Prostate USZ. The performance of the student models with \(TL\) and \(FD\) was similar to each other between Prostate USZ and Prostate UBC; for instance, MoMA\({}_{PANDA}\) obtained higher ACC but lower F1 and \(\kappa_{w}\) on Prostate UBC than on Prostate USZ. As MoMA and other student models with \(FD\) adopt vanilla KD by setting \(\gamma\) to 1 in \(\mathcal{L}\), i.e., mimicking the output logits of the teacher model, there was, in general, a substantial increase in the performance on Prostate UBC. MoMA\({}_{PANDA}\), in particular, achieved the highest ACC of 83.3 \(\%\) and \(\kappa_{w}\) of 0.763 over all models under consideration, which are 11.1 \(\%\) and 0.145 higher than those on Prostate USZ in ACC and \(\kappa_{w}\), respectively.
### _Relevant task distillation: prostate cancer classification_
Table II and Fig. 4 show the results of MoMA and its competing methods on relevant task distillation, i.e., distillation from 4-class prostate cancer classification to 5-class prostate cancer classification (Prostate AGGC22). The two tasks share 4 classes in common, and thus the direct application of the teacher model and logits distillation is infeasible. In the cross-validation experiments (AGGC CV), MoMA\({}_{PANDA}\), on average, achieved the best F1\({}_{w}\) of 0.670 and \(\kappa_{w}\) of 0.798 and obtained the second best ACC of 77.1 \(\%\). The performance of ST\({}_{PANDA}\) was generally comparable to the student models with \(FD\). Other student models with \(TL\) were, by and large, inferior to the ones with \(FD\). In a head-to-head comparison between Prostate AGGC CV and Prostate AGGC test, there was, in general, a slight performance drop, likely due to the differences in the type of images and scanners. Though there was a performance drop, similar trends were found across different models between AGGC CV and AGGC test. We also note that MoMA\({}_{PANDA}\), on average, was superior to all the competitors on two evaluation metrics (F1\({}_{w}\) and \(\kappa_{w}\)) and attained the third best ACC, which is 0.2 \(\%\) lower than CRD.
### _Irrelevant task distillation: colon tissue type classification_
Table III and Fig. 5 show the results of distillation from 4-class prostate cancer classification to colon tissue type classification. Similar to the previous tasks, the student models either pre-trained on ImageNet ST\({}_{ImageNet}\) or PANDA ST\({}_{PANDA}\) were able to improve upon the model performance without pre-training. MoMA\({}_{ImageNet}\), utilizing the pre-trained weights of ImageNet, outperformed all the competing models except AT and CC in \(\kappa_{w}\) for Colon K16 SN and Colon K16, respectively. It is worth noting that, for both \(TL\) and \(FD\) approaches, the effect of the pre-trained weights of ImageNet was larger than that of PANDA. ST\({}_{ImageNet}\) was superior to ST\({}_{PANDA}\). Similarly, MoMA\({}_{ImageNet}\) obtained better performance than MoMA\({}_{PANDA}\).
Fig. 2: Overview of distillation flow across different tasks and datasets. 1) Supervised task is always conducted, 2) Feature distillation is applied if a well-trained teacher model is available, and 3) Vanilla \(L_{KD}\) is employed if teacher and student models conduct the same task.
### _Inter- and intra-class correlations for student and teacher models_
Fig. 6 shows the inter- and intra-class correlations between feature embeddings encoded by the MoMA student and teacher models. Three types of correlations were measured, including student-to-student, student-to-teacher, and teacher-to-teacher correlations. As for the teacher model, we chose the best teacher model per task, i.e., the teacher models pre-trained on PANDA for two prostate cancer classification tasks and the ImageNet teacher model for the colon tissue type classification task. For each task, 16 samples per class were randomly chosen from the validation set.
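A minimal sketch of how such a correlation matrix could be computed is shown below; Pearson correlation between embedding vectors is an assumption, as the exact correlation measure is not specified above.

```python
import numpy as np

def correlation_matrix(emb_a, emb_b, eps=1e-8):
    """Pairwise correlation coefficients between two sets of feature embeddings
    (rows = samples grouped by class), e.g., student-to-teacher correlations."""
    a = (emb_a - emb_a.mean(axis=1, keepdims=True)) / (emb_a.std(axis=1, keepdims=True) + eps)
    b = (emb_b - emb_b.mean(axis=1, keepdims=True)) / (emb_b.std(axis=1, keepdims=True) + eps)
    return a @ b.T / emb_a.shape[1]   # shape: (N_a, N_b)
```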
For the three distillation tasks, the student models, in general, showed higher intra-class correlations and lower inter-class correlations, which explains the superior performance of the student models in the classification tasks. For instance, in the same task distillation, i.e., from PANDA to Prostate USZ, the PANDA teacher model was partially successful in
| Method | Pretrained | ACC (\%), AGGC CV | F1\({}_{w}\), AGGC CV | \(\kappa_{w}\), AGGC CV | ACC (\%), AGGC test | F1\({}_{w}\), AGGC test | \(\kappa_{w}\), AGGC test |
|---|---|---|---|---|---|---|---|
| ST | None | 74.0 ± 3.700 | 0.644 ± 0.039 | 0.774 ± 0.043 | 62.8 ± 1.000 | 0.483 ± 0.021 | 0.483 ± 0.089 |
| ST | ImageNet | 73.8 ± 4.500 | 0.645 ± 0.057 | 0.764 ± 0.043 | 76.0 ± 2.100 | 0.593 ± 0.012 | 0.770 ± 0.042 |
| ST | PANDA | 76.6 ± 2.800 | 0.660 ± 0.025 | 0.792 ± 0.031 | 76.5 ± 1.900 | 0.603 ± 0.013 | 0.771 ± 0.036 |
| FitNet [43] | PANDA | 74.2 ± 4.500 | 0.638 ± 0.050 | 0.774 ± 0.037 | 69.6 ± 7.600 | 0.554 ± 0.044 | 0.653 ± 0.095 |
| AT [44] | PANDA | 75.3 ± 3.000 | 0.649 ± 0.053 | 0.782 ± 0.046 | 74.3 ± 3.400 | 0.583 ± 0.017 | 0.779 ± 0.038 |
| CC [48] | PANDA | **77.4 ± 3.000** | 0.668 ± 0.352 | 0.796 ± 0.350 | 77.2 ± 1.500 | 0.603 ± 0.016 | 0.760 ± 0.007 |
| CRD [19] | PANDA | 76.7 ± 2.700 | 0.663 ± 0.040 | 0.796 ± 0.036 | **77.3 ± 4.000** | 0.602 ± 0.033 | 0.789 ± 0.029 |
| SemCKD [45] | PANDA | 74.1 ± 5.800 | 0.639 ± 0.043 | 0.762 ± 0.058 | 65.7 ± 1.300 | 0.532 ± 0.087 | 0.642 ± 0.107 |
| MoMA (Ours) | ImageNet | 75.7 ± 4.800 | 0.663 ± 0.054 | 0.772 ± 0.035 | 75.5 ± 3.500 | 0.587 ± 0.019 | 0.788 ± 0.021 |
| MoMA (Ours) | PANDA | 77.1 ± 2.000 | **0.670 ± 0.042** | **0.798 ± 0.029** | 77.1 ± 2.300 | **0.699 ± 0.015** | **0.794 ± 0.018** |

TABLE II: Results of relevant task distillation.
| Method | Pretrained | ACC (\%), Prostate USZ | F1, Prostate USZ | \(\kappa_{w}\), Prostate USZ | ACC (\%), Prostate UBC | F1, Prostate UBC | \(\kappa_{w}\), Prostate UBC |
|---|---|---|---|---|---|---|---|
| TC\({}_{PANDA}\) | ImageNet | 63.4 | 0.526 | 0.531 | 78.2 | 0.580 | 0.680 |
| ST | None | 66.4 ± 1.600 | 0.566 ± 0.012 | 0.551 ± 0.020 | 31.7 ± 9.600 | 0.239 ± 0.073 | 0.143 ± 0.104 |
| ST | ImageNet | 67.0 ± 2.600 | 0.612 ± 0.027 | 0.604 ± 0.016 | 71.0 ± 2.800 | 0.592 ± 0.026 | 0.619 ± 0.036 |
| ST | PANDA | 72.7 ± 1.100 | **0.687 ± 0.009** | **0.671 ± 0.005** | 73.1 ± 1.900 | 0.599 ± 0.023 | 0.654 ± 0.031 |
| FitNet [43] | PANDA | 65.7 ± 3.600 | 0.574 ± 0.048 | 0.559 ± 0.056 | 34.5 ± 19.500 | 0.260 ± 0.150 | 0.139 ± 0.131 |
| AT [44] | PANDA | 71.2 ± 1.600 | 0.653 ± 0.021 | 0.652 ± 0.023 | 76.0 ± 3.500 | 0.628 ± 0.038 | 0.660 ± 0.053 |
| CC [48] | PANDA | 69.4 ± 1.400 | 0.624 ± 0.016 | 0.608 ± 0.026 | 51.9 ± 12.600 | 0.302 ± 0.100 | 0.268 ± 0.179 |
| CRD [19] | PANDA | 70.9 ± 0.700 | 0.642 ± 0.012 | 0.639 ± 0.015 | 70.7 ± 3.500 | 0.577 ± 0.032 | 0.610 ± 0.046 |
| SemCKD [45] | PANDA | 69.8 ± 1.000 | 0.638 ± 0.013 | 0.635 ± 0.007 | 75.4 ± 1.700 | 0.627 ± 0.009 | 0.685 ± 0.017 |
| MoMA (Ours) | ImageNet | 67.3 ± 3.400 | 0.617 ± 0.013 | 0.594 ± 0.028 | 65.4 ± 5.800 | 0.534 ± 0.059 | 0.533 ± 0.128 |
| MoMA (Ours) | PANDA | **73.6 ± 1.000** | 0.687 ± 0.011 | 0.670 ± 0.010 | 75.5 ± 1.800 | 0.622 ± 0.019 | 0.666 ± 0.015 |
| Vanilla KD [18] | PANDA | 69.2 ± 0.400 | 0.596 ± 0.008 | 0.593 ± 0.005 | 82.9 ± 0.500 | 0.649 ± 0.013 | 0.74 ± 0.013 |
| SimKD [89] | PANDA | 65.9 ± 3.00 | 0.420 ± 0.001 | 0.413 ± 0.004 | 79.8 ± 1.00 | | 0.673 ± 0.002 |
| KL+FitNet [43] | PANDA | 68.7 ± 1.400 | 0.583 ± 0.030 | 0.585 | | | |

TABLE I: Results of same task distillation.
demonstrating the connections between four different types of class labels; it had difficulties in distinguishing some samples in BN and G3, as shown by the lower intra-class correlations within BN and G3. This is likely due to variations between the source and target datasets. However, the student model, trained on a target/student dataset, was able to achieve stronger intra-class correlations for both BN and G3 while still maintaining high intra-class correlations for G4 and G5; inter-class correlations were lowered in general. Similar observations were made for the other two tasks. Such improvement, achieved through the MoMA framework, is not only due to the knowledge from the teacher model but also due to the utilization of the target dataset.
### _Ablation study_
Table IV compares the performance of MoMA with and without MSA across the three distillation tasks. The results demonstrate the crucial role of MSA in the proposed approach. MoMA without MSA, in general, experienced a performance drop on the three distillation tasks. Even without MSA, however, MoMA achieved better or comparable performance to the other competing models across different distillation tasks, suggesting the effectiveness and robustness of the proposed framework.
Fig. 3: Bar plots for same task distillation: All the KD models utilize the pre-trained weights from PANDA.
## VI Discussion
In this work, we introduce a KD approach, so-called MoMA, to build an optimal model for a target histopathology dataset using existing models trained on the same, relevant, and irrelevant tasks. The experimental results show that MoMA offers advancements in distilling knowledge for the three classification tasks. Regardless of the type of distillation task, MoMA enables the student model to inherit feature extraction capabilities from the teacher model and to conduct accurate classification for the target task. Exploiting the knowledge from both the source and target datasets, MoMA also provides superior generalizability on unseen, independent test sets across three different tasks.
The aim of this work is to propose a method to distill the knowledge from the source/teacher domain to the target domain without direct access to the source data. Transferring only the teacher model is more feasible in various contexts;
Fig. 4: Bar plots for relevant task distillation. All the KD models utilize the pre-trained weights from PANDA.
Fig. 5: Bar plots for irrelevant task distillation. All the KD models utilize the pre-trained weights from ImageNet.
for instance, when the source data is enormous in size, like ImageNet, it is time-consuming and inefficient to train a model using such source data; healthcare data, including pathology images, is restricted for security and privacy reasons, and thus transferring it to target data centers or hospitals is likely to be infeasible. In such circumstances, KD is a key to resolving
| Task | Method | ACC, Test I | F1, Test I | \(\kappa_{w}\), Test I | ACC, Test II | F1, Test II | \(\kappa_{w}\), Test II |
|---|---|---|---|---|---|---|---|
| Same task distillation | MoMA w/o MSA | 72.0 ± 0.400 | 0.663 ± 0.004 | 0.654 ± 0.008 | 73.8 ± 1.700 | 0.616 ± 0.014 | **0.667 ± 0.012** |
| Same task distillation | MoMA | **73.6 ± 1.000** | **0.687 ± 0.011** | **0.670 ± 0.010** | **75.5 ± 1.800** | **0.622 ± 0.019** | 0.666 ± 0.015 |
| Relevant task distillation | MoMA w/o MSA | 75.6 ± 1.200 | 0.649 ± 0.036 | 0.786 ± 0.045 | **77.2 ± 2.400** | 0.601 ± 0.02 | **0.804 ± 0.042** |
| Relevant task distillation | MoMA | **77.1 ± 2.000** | **0.670 ± 0.042** | **0.798 ± 0.029** | 77.1 ± 2.300 | **0.609 ± 0.015** | 0.794 ± 0.018 |
| Irrelevant task distillation | MoMA w/o MSA | 84.0 ± 0.400 | 0.838 ± 0.004 | 0.880 ± 0.005 | 85.5 ± 1.400 | 0.856 ± 0.015 | **0.900 ± 0.008** |
| Irrelevant task distillation | MoMA | **85.2 ± 0.600** | **0.850 ± 0.006** | **0.888 ± 0.006** | **87.2 ± 0.700** | **0.872 ± 0.008** | 0.898 ± 0.012 |

TABLE IV: Ablation results compare the performance of MoMA with and without Multi-head Attention on the three tasks.
Fig. 6: The correlation coefficient matrix between feature representations of a teacher network and student network
the related issues. In the distillation, we emphasize that the choice of the pre-trained teacher model is crucial as it directly impacts the performance of the target model. Based on the experimental results across different classification tasks, it is evident that the better the teacher model, the greater the benefit it provides to the student model.
The proposed MoMA framework is an end-to-end training approach, eliminating the need for extensive training on a self-supervised task followed by fine-tuning on labeled datasets. Moreover, self-supervised methods require a large amount of training data, which is not always available in medical/pathology image analysis. Leveraging the high-quality teacher model through MoMA facilitates robust training and convergence of the student model on smaller target datasets. Furthermore, the excellent performance in the relevant and irrelevant tasks suggests that MoMA could be utilized solely as a feature-embedding distillation mechanism without requiring a meticulous redesign of the model architecture and distillation framework in response to the specific requirements of downstream tasks.
In the ablation study, the role of MSA was apparent in MoMA. The previous SSCL assumed that all samples are equally important. However, depending on the appearance and characteristics of an image and the extent of augmentation applied, the classification task may become easier or more challenging. By incorporating MSA, MoMA gains the ability to selectively focus on important samples while allocating less attention to other samples. It is worth noting that the contrastive loss \(\mathcal{L}_{NCE}\) does not treat each input sample independently, unlike the supervised cross-entropy loss \(\mathcal{L}_{CE}\). With MSA, MoMA learns the relationships among samples within a batch before they are fed into the self-supervised contrastive loss, and, in turn, used to update the queue \(Z^{queue}\), prioritizing and enriching the information within these samples and allowing for more effective optimization of the model.
The results of the three distillation tasks, i.e., the same task, relevant task, and irrelevant task, provide insights into the distillation of knowledge from the source domain to the target domain and the model development for the target domain. First, supervised learning on the target domain provides comparable performance in all three distillation tasks, but its performance on unseen data, i.e., generalizability, is not guaranteed. Second, the usage of the pre-trained weights is crucial for both TL and KD, regardless of the type of distillation tasks. Third, the effect of the pre-trained weights depends on the type of distillation tasks. As for the same and relevant tasks, the pre-trained weights from the same or relevant tasks were more useful. For the irrelevant task, the pre-trained weights from ImageNet were more beneficial than those from PANDA. These indicate that not all pathology image datasets will be helpful to build a model for a specific computational pathology task and a dataset. Last, the KD strategy varies across different distillation tasks. The same task distillation takes advantage of the logits distillation, the relevant task distillation exploits the pre-trained weights, and the irrelevant task distillation does not make use of the (irrelevant) domain-specific knowledge much.
There are several limitations in our work. First, we utilize ImageNet and PANDA as the source datasets. ImageNet is one of the most widely studied and utilized large-scale datasets. PANDA is one of the large-scale pathology image datasets, but it is limited to four classes in one organ. The larger and more diverse the pathology image dataset is, the better the distillation quality we could obtain. Second, three pathology image classification tasks from two organs were considered in this study. The effect of KD may vary depending on the type of tasks and organs. Third, there exist other types of image classification tasks in computational pathology such as survival/outcome prediction. In general, the amount of survival/outcome dataset is smaller than that of cancer and tissue classifications, and thus KD may play a crucial role in survival/outcome prediction. Last, we only consider CNNs for the three distillation tasks. Several recent studies have shown that transformer-based models such as Vision Transformer (ViT) outperformed CNN-based models on image classification tasks. Evaluating the proposed MoMA on different teacher-student combinations like ViT teacher to ViT student and CNN teacher to ViT student could provide valuable insights into the effectiveness of the proposed method across different architectures. In order to focus on KD, we conduct our study on CNNs and leave the study involving transformer-based models for future research.
## VII Conclusions
Herein, we propose an efficient and effective learning framework called MoMA to build an accurate and robust classification model in pathology images. Exploiting the KD framework, momentum contrastive learning, and SA, MoMA was able to transfer knowledge from a source domain to a target domain and to learn a robust classification model for three different tasks. Moreover, the experimental results of MoMA suggest an adequate learning strategy for different distillation tasks and scenarios. We anticipate that this will be a great help in developing computational pathology tools for various tasks. Future studies will entail the further investigation of the efficient KD method and extended validation and application of MoMA to other types of datasets and tasks in computational pathology.
## Acknowledgments
This work was supported by the National Research Foundation of Korea (NRF) (No. 2021R1A2C2014557) and by the Ministry of Trade, Industry and Energy (MOTIE) and Korea Institute for Advancement of Technology (KIAT) through the International Cooperative R&D program (No. P0022543).
|
2309.11127 | Language-Oriented Communication with Semantic Coding and Knowledge
Distillation for Text-to-Image Generation | By integrating recent advances in large language models (LLMs) and generative
models into the emerging semantic communication (SC) paradigm, in this article
we put forward to a novel framework of language-oriented semantic communication
(LSC). In LSC, machines communicate using human language messages that can be
interpreted and manipulated via natural language processing (NLP) techniques
for SC efficiency. To demonstrate LSC's potential, we introduce three
innovative algorithms: 1) semantic source coding (SSC) which compresses a text
prompt into its key head words capturing the prompt's syntactic essence while
maintaining their appearance order to keep the prompt's context; 2) semantic
channel coding (SCC) that improves robustness against errors by substituting
head words with their lengthier synonyms; and 3) semantic knowledge
distillation (SKD) that produces listener-customized prompts via in-context
learning the listener's language style. In a communication task for progressive
text-to-image generation, the proposed methods achieve higher perceptual
similarities with fewer transmissions while enhancing robustness in noisy
communication channels. | Hyelin Nam, Jihong Park, Jinho Choi, Mehdi Bennis, Seong-Lyun Kim | 2023-09-20T08:19:05Z | http://arxiv.org/abs/2309.11127v1 | Language-Oriented Communication with Semantic Coding and Knowledge Distillation for Text-to-Image Generation
###### Abstract
By integrating recent advances in large language models (LLMs) and generative models into the emerging semantic communication (SC) paradigm, in this article we put forward a novel framework of language-oriented semantic communication (LSC). In LSC, machines communicate using human language messages that can be interpreted and manipulated via natural language processing (NLP) techniques for SC efficiency. To demonstrate LSC's potential, we introduce three innovative algorithms: 1) semantic source coding (SSC) which compresses a text prompt into its key head words capturing the prompt's syntactic essence while maintaining their appearance order to keep the prompt's context; 2) semantic channel coding (SCC) that improves robustness against errors by substituting head words with their lengthier synonyms; and 3) semantic knowledge distillation (SKD) that produces listener-customized prompts via in-context learning the listener's language style. In a communication task for progressive text-to-image generation, the proposed methods achieve higher perceptual similarities with fewer transmissions while enhancing robustness in noisy communication channels.
\({}^{\dagger}\)Hyelin Nam, \({}^{\ddagger}\)Jihong Park, \({}^{\ddagger}\)Jinho Choi, \({}^{*}\)Mehdi Bennis, and \({}^{\dagger}\)Seong-Lyun Kim \({}^{\dagger}\)Yonsei University, \({}^{\ddagger}\)Deakin University, and \({}^{*}\)University of Oulu
Semantic communication (SC), large language model (LLM), generative model.
## 1 Introduction
Semantic communication (SC) is an emerging research paradigm that focuses on the meanings (i.e., semantics) and effectiveness of communicating bits [1, 2, 3, 4, 5]. Deep joint source and channel coding (DeepJSCC) is a prime example wherein an encoder-decoder structured neural network (NN) acts as a transceiver, within which task-effective features are extracted from input data and made into communication messages. These _neural messages_ are the NN's hidden-layer activations trained and tailored for a specific task, which greatly improves communication efficiency [1, 2, 3, 4].
However, neural messages constrain the full potential of SC. First, NN activations are not always universal messages, as they are influenced by their training data and communication environment. Indeed, DeepJSCC transceivers that have been trained separately are hardly interoperable without fine-tuning [6]. Furthermore, the semantics of these NN activations are nothing but what remains after achieving effective communication. It is therefore difficult to interpret and manipulate these semantics as intended.
Human language, by contrast, is universal and versatile to describe a broad range of tasks, owing to its evolution throughout diverse human experiences in history. Moreover, with recent advances in natural language processing (NLP) and generative models, machines are now capable of interpreting and manipulating human language. Motivated by this, in this paper we propose a novel _language-oriented SC (LSC)_ framework, which facilitates SC through human _language messages_. The operation of LSC transceivers is threefold:
1. **Data-to-Language Translation**: Text-based cross-modal models transform input data into language messages to be transmitted (e.g., via CLIP for image-to-text (I2T) or Whisper for speech-to-text translation).
2. **Language Analysis & Manipulation**: Large language
Figure 1: An illustration of language-oriented semantic communication (LSC) for progressive text-to-image generation, empowered by semantic source coding (SSC), semantic channel coding (SCC), and semantic knowledge distillation (SKD).
models (LLMs) and other NLP algorithms (e.g., GPT-4 [7], Llama 2 [8], and CoreNLP [9]) are utilized for analyzing the syntax, semantics, and context in language messages and manipulating these messages for improving communication efficiency.
3. **Language-to-Data Generation**: Text-conditioned generative models produce intended data using the received message semantics (e.g., via Stable Diffusion for text-to-image (T2I) or Zeroscope for text-to-video generation).
To showcase the potential of LSC, in this paper we consider a point-to-point LSC scenario, where Alice sends a text prompt describing an intended image, word by word, while Bob progressively generates an image based on the accumulated received text prompt. The LSC accuracy between Alice and Bob is assessed using the learned perceptual image patch similarity (LPIPS), which measures the distance between intended and generated images. To improve the communication efficiency of LSC, as visualized in Fig. 1, we focus on Step 2), and develop the following novel algorithms:
* **Head-based Semantic Source Coding (SSC)** is a lossy compression of the original prompt by pruning non-head words, inspired from our empirical findings that sending all words in a prompt does not always achieve the lowest LPIPS. The heads of a prompt are the words determining the syntactic category of the prompt, which can be identified, for example, using CoreNLP [9, 10].
* **Synonym-based Semantic Channel Coding (SCC)** adds redundancy into the prompt by replacing original head words with their longer synonyms, increasing the robustness to channel noise perturbing each character of the words. Only the synonyms ensuring the same semantics of the prompt are of interest, which can be found by, for instance, using GPT-4 [7].
* **In-Context Learning-based Semantic Knowledge Distillation (SKD)** aims to address the out-of-distribution (OOD) prompts due to different language knowledge between Alice and Bob, and enables Alice to emulate Bob's prompts by assimilating Bob's language knowledge. This can be achieved without re-training NN model parameters, by harnessing LLM's unique capability of in-context learning, i.e., few-shot learning via demonstration [11].
Simulation results reveal SSC compresses transmitted messages by up to \(42.6\)%, while surprisingly reducing LPIPS by \(0.015\) compared to full prompts. Applying SCC and SKD further cuts LPIPS by up to \(0.007\) and \(0.009\) by addressing channel noise and heterogeneous language knowledge.
**Related Works**: Recent research [3] employs generator models in SC but with neural messages, distinct from LSC's language messages. The ts_zip [12] algorithm exploits LLM-based synonyms for compression, differing from our synonym-based SCC for robustness and from our head-based SSC. LSC stands apart from other language-based SC studies that mainly focus on I2T compression or on the entropy of text truthfulness [13], in contrast to LSC harnessing LLM and NLP techniques to analyze and manipulate language messages for SC.
## 2 Semantic Source Coding for Progressive Text-to-Image Generation
In this section, we propose SSC for a progressive text-to-image (T2I) generation task in a point-to-point communication scenario, as elaborated next.
1) **Image-to-Text Translation**: Alice has an intended image \(\mathbf{v}\) to send, and translates it into a text prompt \(\mathbf{x}\), a sequence containing a set \(\mathbf{X}\) of words presented in a specific order, given as:
\[\mathbf{x}=\texttt{I2T}(\mathbf{v})=(\mathbf{x}_{1},\mathbf{x}_{2},\cdots, \mathbf{x}_{|\mathbf{X}|}), \tag{1}\]
where \(\mathbf{x}_{i}\) is the \(i\)th word comprising \(|\mathbf{x}_{i}|\) characters. The function \(\texttt{I2T}(\cdot)\) represents an image-to-text (I2T) encoder such as BLIP [14] or CLIP [15].
2) **Head-based Semantic Source Coding (SSC)**: Alice aims to compress and transmit text characters of \(\mathbf{x}\) while maintaining the semantics of \(\mathbf{x}\). The semantics can be maintained when the key words of \(\mathbf{x}\) are presented without losing their syntax and context. To this end, SSC first identifies a set \(\mathbf{H}\) of \(\mathbf{x}\)'s head words that determine the prompt's syntactic category in linguistic analysis. While keeping the head words' order of appearance in \(\mathbf{x}\), SSC produces a compressed sequence \(\mathbf{h}=(\mathbf{h}_{1},\mathbf{h}_{2},\cdots,\mathbf{h}_{|\mathbf{X}|})\) in which:
\[\mathbf{h}_{t}=\begin{cases}\mathbf{x}_{i},&\text{if }\mathbf{x}_{i}\in \mathbf{H}\\ \emptyset,&\text{otherwise}.\end{cases} \tag{2}\]
The head words in \(\mathbf{H}\) can be identified using the CoreNLP algorithm [10], i.e., \(\mathbf{H}=\texttt{CoreNLP}(\mathbf{x})\). Consequently, SSC yields the compression ratio \(|\mathbf{H}|/|\mathbf{X}|\leq 1\) in terms of words, and \(\sum_{\mathbf{x}_{i}\in\mathbf{H}}|\mathbf{x}_{i}|/\sum_{\mathbf{x}_{i}\in \mathbf{X}}|\mathbf{x}_{i}|\) in terms of characters.
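The head extraction of Eq. (2) and the resulting compression ratios can be sketched in a few lines of Python. The hard-coded head-word set below is only a stand-in for the CoreNLP analysis used in the paper, so the example illustrates the bookkeeping, not the linguistic analysis itself.

```python
# Sketch of head-based SSC (Eq. (2)) and its compression ratios.
# The head-word set would come from a dependency parser such as CoreNLP;
# here it is hard-coded purely for illustration.

def ssc_compress(prompt_words, head_words):
    """Keep only head words, preserving their order of appearance."""
    return [w for w in prompt_words if w in head_words]

def compression_ratios(prompt_words, compressed):
    word_ratio = len(compressed) / len(prompt_words)
    char_ratio = sum(len(w) for w in compressed) / sum(len(w) for w in prompt_words)
    return word_ratio, char_ratio

x = "a small brown dog is running on the green grass".split()
H = {"dog", "running", "grass"}               # hypothetical head words
h = ssc_compress(x, H)                        # ['dog', 'running', 'grass']
print(h, compression_ratios(x, h))
```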
3) **Text-to-Image Generation**: Bob receives the head words \(\mathbf{h}_{t}\) in order, and progressively generates an image using a T2I generator such as Stable Diffusion [16] or DALL-E. At the \(t\)-th head word reception with \(t\in\{1,2,\cdots,|\mathbf{H}|\}\), the received prompt is \(\mathbf{h}(t)\), and the generated image is:
\[\mathbf{\hat{v}}(t)=\texttt{T2I}(\mathbf{h}(t))=\texttt{T2I}((\mathbf{h}_{1},\mathbf{h}_{2},\cdots,\mathbf{h}_{t})). \tag{3}\]
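A minimal sketch of this progressive generation step is given below, using the Hugging Face diffusers Stable Diffusion pipeline as a stand-in for Bob's T2I generator; the checkpoint name, half precision, and device placement are assumptions rather than the authors' exact setup.

```python
# Sketch of progressive text-to-image generation (Eq. (3)): after each received
# head word, Bob regenerates an image from the accumulated prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

head_words = ["dog", "running", "grass"]      # head words received in order
images = []
for t in range(1, len(head_words) + 1):
    prompt_t = " ".join(head_words[:t])       # h(t) = (h_1, ..., h_t)
    images.append(pipe(prompt_t, num_inference_steps=50).images[0])
```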
The perceptual similarity between Bob's generated \(\mathbf{\hat{v}}(t)\) and Alice's intended image \(\mathbf{v}\) is measured by the learned perceptual image patch similarity (LPIPS) score [17] that calculates the distance at hidden layers of a pre-trained AlexNet, given as:
\[\texttt{LPIPS}(\mathbf{v},\mathbf{\hat{v}}(t))=\sum_{l}\frac{1}{H_{l}W_{l}}\sum_{h,w}\|f(\mathbf{v})-f(\mathbf{\hat{v}}(t))\|_{2}^{2}. \tag{4}\]
The term \(l\) identifies the \(l\)-th layer having its width \(w\), height \(h\), and dimension \(H_{l}\times W_{l}\) with an activation function \(f(\cdot)\).
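For reference, the LPIPS score of Eq. (4) can be evaluated with the community `lpips` package using its AlexNet backbone; the random tensors below are placeholders for the intended and generated images, which the package expects as \((N,3,H,W)\) tensors scaled to \([-1,1]\).

```python
# Sketch of the LPIPS evaluation in Eq. (4) with the `lpips` package (AlexNet backbone).
import torch
import lpips

loss_fn = lpips.LPIPS(net="alex")

v = torch.rand(1, 3, 256, 256) * 2 - 1        # intended image (placeholder)
v_hat = torch.rand(1, 3, 256, 256) * 2 - 1    # generated image (placeholder)
distance = loss_fn(v, v_hat)                   # lower = perceptually more similar
print(distance.item())
```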
## 3 Semantic Channel Coding in Noisy Communication Channels
In the previous section, we presume that Alice's transmitted head words are perfectly received at Bob. In this section, we consider a noisy channel, and propose SCC to address noisy head word receptions at Bob.
1) **Noisy Channel Model**: Alice individually transmits a set \(C_{\mathbf{h}_{t}}\) of characters in the head word \(\mathbf{h}_{t}\). Following a discrete memoryless channel (DMC) model, Bob receives the head word \(\hat{\mathbf{h}}_{t}\) containing a set \(\hat{C}_{\mathbf{h}_{t}}\) of characters, each of which is perturbed into a different character with a cross-over probability \(\epsilon>0\) and is otherwise successfully received.
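A small simulation of this character-level channel is sketched below, purely to illustrate the error model; the lowercase alphabet and the cross-over probability are illustrative assumptions.

```python
# Sketch of the character-level noisy channel: each character of a transmitted
# word is flipped to a random different character with probability eps.
import random
import string

def noisy_channel(word, eps, alphabet=string.ascii_lowercase):
    return "".join(
        random.choice([a for a in alphabet if a != c]) if random.random() < eps else c
        for c in word
    )

random.seed(0)
samples = [noisy_channel("cat", eps=0.1) for _ in range(8)]
print(samples)  # single-character flips on a short word (e.g. "bat", "car")
                # silently change its semantics, motivating longer synonyms (SCC)
```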
2) **Synonym-based Semantic Channel Coding (SCC)**: In this noisy channel, SCC aims to enhance \(\mathbf{h}_{t}\)'s robustness by increasing \(|\mathbf{C}_{\mathbf{h}_{t}}|\) while maintaining the same semantics of the prompt \(\mathbf{h}(t)\). In the aforementioned channel, Bob encounters the same error probability in each character, following a geometric distribution. This makes it risky to communicate short words like "cat", whose semantics change even with a single-character variation (e.g., bat, cut, and car), motivating SCC. In SCC, we consider a set \(\mathbf{S}_{\mathbf{h}_{t}}\) of candidate synonyms of \(\mathbf{h}_{t}\), given as:
\[\mathbf{S}_{\mathbf{h}_{t}}=\{\mathbf{s}_{1},\mathbf{s}_{2},...\mathbf{s}_{| \mathbf{S}_{\mathbf{h}_{t}}|}\}, \tag{5}\]
where \(\mathbf{s}_{j}\) contains a set \(\mathbf{C}_{\mathbf{s}_{j}}\) of characters. Although \(\mathbf{S}_{\mathbf{h}_{t}}\) can be found using a dictionary, this ignores the context of \(\mathbf{h}(t)\), and does not guarantee the intended semantics. To solve this, SCC utilizes an LLM such as GPT-4 or Llama 2, a decoder-only autoregressive model that can predict the most in-context appropriate synonym \(\mathbf{s}_{t}^{*}\) of \(\mathbf{h}_{t}\) in \(\mathbf{h}(t)\) by masking \(\mathbf{h}_{t}\) and maximizing the following conditional unmasking probability:
\[\mathbf{s}_{t}^{*}=\arg\max_{\mathbf{s}_{j}\in\mathcal{W}}p_{\mathbf{s}_{j}},\quad p_{\mathbf{s}_{j}}=\Pr(\mathbf{s}_{j}|\mathbf{h}(t)\backslash\mathbf{h}_{t}), \tag{6}\]
where \(\mathcal{W}\) is a set of total characters, e.g., \(128\) characters in ASCII. By relaxing this LLM, we obtain a set \(\mathbf{\hat{S}}_{\mathbf{h}_{t}}\) of in-context synonyms associated with their unmasking probabilities exceeding a threshold \(p_{c}>0\):
\[\mathbf{\hat{S}}_{\mathbf{h}_{t}}=\{\mathbf{\hat{s}}_{1},\mathbf{\hat{s}}_{2},\cdots,\mathbf{\hat{s}}_{|\mathbf{\hat{S}}_{\mathbf{h}_{t}}|}\}=\{\mathbf{s}_{j}\,|\,p_{\mathbf{s}_{j}}\geq p_{c}\}. \tag{7}\]
Consequently, SCC can increase the noise robustness of \(\mathbf{h}_{t}\) within a set \(L_{\mathbf{h}_{t}}\) of levels in terms of characters, given as
\[L_{\mathbf{h}_{t}}=\{|\mathbf{\hat{s}}_{j}|\,:\,\mathbf{\hat{s}}_{j}\in\mathbf{\hat{S}}_{\mathbf{h}_{t}},\ |\mathbf{h}_{t}|\leq|\mathbf{\hat{s}}_{j}|\leq L_{c}\}, \tag{8}\]
where \(L_{c}\) is the number of characters of the lengthiest synonym of \(\mathbf{h}_{t}\), i.e., \(L_{c}=[\max_{\mathbf{\hat{s}}_{j}\in\mathbf{\hat{S}}_{\mathbf{h}_{t}}}|\mathbf{ C}_{\mathbf{\hat{s}}_{j}}|]\).
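Once the LLM has scored the candidate synonyms, the selection in Eqs. (7)-(8) reduces to a simple filtering step. In the sketch below, the unmasking probabilities are hard-coded placeholders for GPT-4 outputs, and \(p_{c}=0.72\) follows the value used later in the simulations.

```python
# Sketch of selecting usable synonym lengths (Eqs. (7)-(8)): keep candidates whose
# unmasking probability exceeds p_c and that are at least as long as the head word.

def scc_levels(head_word, candidates, p_c):
    in_context = {s: p for s, p in candidates.items() if p >= p_c}        # Eq. (7)
    if not in_context:
        return [len(head_word)], head_word
    L_c = max(len(s) for s in in_context)
    levels = sorted({len(s) for s in in_context
                     if len(head_word) <= len(s) <= L_c})                 # Eq. (8)
    longest = max((s for s in in_context if len(s) in levels), key=len)
    return levels, longest

candidates = {"cat": 0.95, "kitty": 0.81, "feline": 0.77, "tomcat": 0.40}  # placeholders
print(scc_levels("cat", candidates, p_c=0.72))   # ([3, 5, 6], 'feline')
```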
## 4 Semantic Knowledge Distillation in Heterogeneous Language Knowledge
In this section, we aim to address the problem when Alice and Bob have different knowledge on text-image relations by proposing SKD that enables Alice to produce Bob-customized text prompts via in-context learning.
1) **Heterogeneous Knowledge Model**: BLIP and CLIP are encoder-decoder NN models that store knowledge on image-text relations through cross-attention weights, enabling I2T and T2I conversions. Suppose that Alice uses BLIP for I2T while Bob relies on CLIP for T2I conditioning. Their mismatched knowledge incurs out-of-distribution (OOD) generation, increasing LPIPS.
2) **In-Context Learning-based Semantic Knowledge Distillation (SKD)**: A pre-trained LLM has excessive knowledge containing spurious correlations, and conditioning its knowledge within a specific context can improve task performance. In-context learning enables this by feeding few exemplary input-output pairs to teach the LLM a desired context, i.e., few-shot learning via demonstration [11]. Meanwhile, knowledge distillation (KD) is a method to transfer a target (teacher) model's knowledge into another (student) model by minimizing their output differences for common inputs [18]. Inspired from in-context learning and KD, SKD shows \(K\) input images to Alice and Bob that generate \(K\) output text prompts, using BLIP and CLIP, respectively, that are fed into an LLM for demonstration. This in-context learned LLM becomes a text-to-text (T2T) translator that can produce Bob-customized prompt \(\hat{\mathbf{x}}_{b}\) for a given Alice's prompt \(\mathbf{x}_{a}\), given as:
\[\hat{\mathbf{x}}_{b}=\texttt{T2T}\left(\mathbf{x}_{a};\{\mathbf{v}^{(i)}, \mathbf{x}_{a}^{(i)},\mathbf{x}_{b}^{(i)}\}_{i=1}^{K}\right), \tag{9}\]
where \(\mathbf{x}_{a}^{(i)}\) and \(\mathbf{x}_{b}^{(i)}\) are the \(i\)-th output prompts at Alice and Bob, respectively. After SKD, \(\hat{\mathbf{x}}_{b}\) is fed into SSC and/or SCC. Note that SKD is applicable before SSC and/or after SCC. We focus on the former for simplicity.
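One possible way to realize the in-context SKD step of Eq. (9) is to pack the \(K\) demonstration caption pairs into a few-shot prompt and hand it to an LLM. The `llm_complete` callable and the prompt wording below are assumptions, since the paper does not specify the exact prompt format.

```python
# Sketch of the in-context SKD step (Eq. (9)): K demonstration pairs of Alice's and
# Bob's captions for the same images form a few-shot prompt, and an LLM (abstracted
# as `llm_complete`, a placeholder for a GPT-4 style call) rewrites Alice's new
# prompt in Bob's style.

def build_skd_prompt(demos, x_a):
    lines = ["Rewrite the final caption in the style of the 'listener' captions."]
    for i, (cap_a, cap_b) in enumerate(demos, 1):
        lines.append(f"Example {i}: speaker: {cap_a} | listener: {cap_b}")
    lines.append(f"speaker: {x_a} | listener:")
    return "\n".join(lines)

def skd_translate(llm_complete, demos, x_a):
    return llm_complete(build_skd_prompt(demos, x_a))   # Bob-customized prompt x_b

demos = [("a dog runs on grass", "dog, running, grass, sunny"),
         ("a red car on a street", "car, red, street, daytime")]
print(build_skd_prompt(demos, "a child rides a bicycle"))
```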
## 5 Numerical Results
**Simulation Settings**: We consider that Alice's I2T encoder is BLIP [19], and Bob's T2I generator is Stable Diffusion v1.5 [16] that generates each image from a prompt with \(50\) denoising steps. This diffusion process is conditioned by the text prompt encoded using CLIP [15]. SKD and SCC that require LLMs are based on GPT-4 [7], while SSC is run by
\begin{table}
\begin{tabular}{l c c c|c c c|c c c} \hline \hline
 & \multicolumn{3}{c|}{Noiseless} & \multicolumn{3}{c|}{SNR = 7.5 dB} & \multicolumn{3}{c}{SNR = 8.75 dB} \\
Compression Ratio & w.o. SSC & SSC & SSC+SKD & SSC & SSC+SCC & SSC+SCC+SKD & SSC & SSC+SCC & SSC+SCC+SKD \\ \hline
w.r.t. Word & \(1\) & **0.641** & \(0.654\) & **0.641** & **0.641** & **0.641** & **0.654** & **0.654** & **0.654** \\
w.r.t. Character & \(1\) & **0.426** & \(0.468\) & **0.426** & \(0.531\) & \(0.525\) & **0.468** & \(0.579\) & \(0.565\) \\ \hline
LPIPS & \(0.718\) & \(0.703\) & **0.697** & \(0.736\) & \(0.730\) & **0.721** & \(0.726\) & \(0.719\) & **0.715** \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Message compression ratios of SSC with SCC and/or SKD, and the resulting LPIPS.
CoreNLP [9, 10]. For image data, the Flickr8k dataset is considered, containing 8,092 samples, each with \(256\times 256\) pixels [20]. For text prompts, each character is 8-bit ASCII coded, and modulated using 16QAM. During SCC, \(p_{c}\) is set as \(0.72\). All LPIPS values are averaged over 100 simulation runs, except for snapshot visualizations in Fig. 2.
**Impact of SSC**: Tab. 1 reveals SSC achieves 64.1% word compression and 42.6% in character. Surprisingly, mean LPIPS improves by \(0.015\), suggesting SSC's roles not only in compression but also in prompt engineering. Fig. 3 highlights SSC's dual benefits (solid red), head extraction and appearance-based ordering. To dissect their LPIPS reduction contributions, we introduce two baselines: _Random SC_ (dotted gray), which maintains appearance order with the same compression ratio as SSC but transmits random words from \(\mathbf{x}\) instead of head words; and _Random sequence SSC_ (dotted red), sending head words from \(\mathbf{h_{k}}\) like SSC but in a shuffled order. Results indicate head extraction reduces mean LPIPS by up to \(0.04\), and appearance-based ordering contributes up to a \(0.012\) reduction, as seen when comparing SSC with Random SC and Random sequence SSC, respectively.
**Impact of SCC**: In Fig. 4, we observe that mean LPIPS decreases with SNR. Comparing SSC+SCC (dotted red) to SSC (solid red), SCC contributes to a reduction in mean LPIPS by up to \(0.007\). This reduction diminishes with SNR. However, SCC compromises compression, increasing it by up to 10.5% as shown in Tab. 1. In these simulations the increase in characters is capped at \(4\). In certain instances, as illustrated in Fig. 2 at \(\text{SNR}=7.5\text{dB}\), the LPIPS reduction from SCC can be as much as 7.57x its average. This suggests potential benefits in optimizing SSC level based on given channel conditions for future research.
**Impact of SKD**: As illustrated in Figs. 3 and 4, SKD contributes to a reduction in mean LPIPS by up to \(0.006\) and \(0.009\), respectively. Notably, the latter reduction surpasses even the contribution of SCC to LPIPS reduction. However, SKD may extend the prompt length, e.g., by an average of \(5\) characters in Fig. 3, highlighting a trade-off between compression and LPIPS.
## 6 Conclusion
In this article we proposed LSC, and developed SSC, SCC, and SKD that leverage NLP and LLM techniques to improve LSC's SC efficiency under noisy channels and heterogeneous T2I/I2T knowledge. Future research might explore various tasks such as I2T-based control and compare LSC's performance with its DeepJSCC counterpart. It could also be interesting to exploit other LLM capabilities such as reasoning and interactions with humans.
Figure 4: SSC with or without SCC and SKD w.r.t. SNR.
Figure 3: SSC with or without SKD w.r.t. transmitted characters.
Figure 2: Alice’s image and prompts (left) and Bob’s generated images and LPIPS (right), with SSC, SCC, and SKD. |
2309.09768 | Freeform surface topology prediction for prescribed illumination via
semi-supervised learning | Despite significant advances in the field of freeform optical design, there
still remain various unsolved problems. One of these is the design of smooth,
shallow freeform topologies, consisting of multiple convex, concave and saddle
shaped regions, in order to generate a prescribed illumination pattern. Such
freeform topologies are relevant in the context of glare-free illumination and
thin, refractive beam shaping elements. Machine learning techniques already
proved to be extremely valuable in solving complex inverse problems in optics
and photonics, but their application to freeform optical design is mostly
limited to imaging optics. This paper presents a rapid, standalone framework
for the prediction of freeform surface topologies that generate a prescribed
irradiance distribution, from a predefined light source. The framework employs
a 2D convolutional neural network to model the relationship between the
prescribed target irradiance and required freeform topology. This network is
trained on the loss between the obtained irradiance and input irradiance, using
a second network that replaces Monte-Carlo raytracing from source to target.
This semi-supervised learning approach proves to be superior compared to a
supervised learning approach using ground truth freeform topology/irradiance
pairs; a fact that is connected to the observation that multiple freeform
topologies can yield similar irradiance patterns. The resulting network is able
to rapidly predict smooth freeform topologies that generate arbitrary
irradiance patterns, and could serve as an inspiration for applying machine
learning to other open problems in freeform illumination design. | Jeroen Cerpentier, Youri Meuret | 2023-09-18T13:49:32Z | http://arxiv.org/abs/2309.09768v3 | # Freeform surface topology prediction for prescribed illumination via semi-supervised learning
###### Abstract
Fast, effective design of freeform illumination components with extended light sources remains an open challenge, despite significant advances in the field. Machine learning techniques already proved to be extremely valuable in solving complex inverse problems in optics and photonics, but their application to freeform optical design is so far limited to imaging optics. This paper presents a rapid, standalone framework for the prediction of freeform surface topologies that generate a prescribed irradiance distribution, from an extended light source. The framework employs a 2D convolutional neural network to model the relationship between the prescribed target irradiance and required freeform topology. This network is trained on the loss between the obtained irradiance and input irradiance, using a second network that replaces Monte-Carlo ray-tracing from source to target. This semi-supervised learning approach proves to be superior compared to a supervised learning approach using ground truth freeform topology/irradiance pairs, a fact that is most likely connected to the observation that multiple freeform topologies can yield similar irradiance patterns. The resulting network is able to rapidly predict smooth freeform topologies that generate arbitrary, complex irradiance patterns.
## Introduction
Optical systems to control the propagation of light play a major role in science and technology, and their importance will not diminish in the near future [1, 2, 3]. For decades, optical engineers have relied on systems with
multiple (a)spherical surfaces, which have rotational symmetry [4; 5]. Recent advances in manufacturing technology however, allow the fabrication of freeform optical surfaces with completely arbitrary shape, thereby offering much greater flexibility for controlling the propagation of light [6; 7]. Freeform optics are widely used in imaging systems to guide the light of points in object space effectively to corresponding points in image space [8; 9; 10; 11]. Also in the field of illumination design, freeform components are extensively used to map the emitted light distribution from a source into a desired target pattern, while maintaining the luminous flux [12; 13]. The demand for such freeform illumination systems is rapidly growing, due to their application in fast-evolving fields such as optical lithography, automotive headlights and laser beam shaping [14; 15].
Illumination optics are determined by the light source under consideration and the targeted light pattern. Their design typically comes down to calculating one or more optical surfaces that manipulate the incoming rays, in order to produce a certain prescribed intensity or irradiance distribution. Freeform illumination design methods can be separated in two categories: zero-etendue algorithms and algorithms for extended light sources [14]. Zero-etendue methods are based on the assumption that the source is ideal, e.g. a point source or a collimated laser beam. Freeform design for zero-etendue sources has matured significantly, and various accurate calculation methods exist, such as the ray-mapping and Monge-Ampere methods [16; 17; 18; 19]. Unfortunately, the etendue of real light sources can seldom be neglected in practice. When applying zero-etendue algorithms for such extended light sources, the resulting pattern becomes blurry, and more dedicated design procedures are needed [20; 21].
As opposed to zero-etendue algorithms, the search for fast and effective freeform algorithms with extended light sources is still ongoing [14; 22]. Benitez et al. have expanded the simultaneous multiple surfaces (SMS) design method from 2D to 3D [23]. Alternatively, wavefront tailoring and deconvolution-based algorithms have been proposed [22; 24; 25]. Although these methods function well under certain specific conditions, e.g. single-chip LED emitters, they remain unsuitable for arbitrary extended light sources. Alternatively, a solution can be obtained through straightforward optimization of the parameterized freeform surface(s), based on a defined merit function, e.g. the difference between the obtained irradiance and the target distribution [26; 27]. Similarly, feedback-based methods have been introduced, iteratively updating the solution with a continuous feedback mechanism [14; 28; 29]. Finally, the usage of automatic differentiation has been proposed to obtain derivative information of optical surface parameters. This information allows for more rapid and effective optimization of freeform systems [30; 31], but research on applying such methods for illumination design is still in an early phase [32; 33; 34]. All these iterative methods have the potential to result in appropriate illumination optics for extended light sources. However, they are typically very computationally intensive and require a significant amount of convergence time.
To reduce the time and complexity of optical design, researchers have recently started to use machine learning techniques. Applying deep learning to solve inverse problems in photonics and optics has only recently arisen, but the potential is undisputable [35; 36; 37]. It may arguably become one of the main catalysts in designing complex optical configurations in the near future [38]. In the field of computational holography for example, mature machine learning methods are already state-of-the-art [39; 40].
Deep learning architectures for freeform design have also been presented in past research, but so far mainly within the domain of imaging optics, in order to find starting points close to a final solution [31; 41; 42; 43]. Within illumination design, only one fully trained network has been demonstrated and the approach was restricted to creating very basic shapes [44]. The authors of this work underlined the need for more advanced procedures. One of the main challenges to realize a fully trained network for freeform illumination design via supervised or unsupervised learning is the lack of a fast forward operator to link the input and output parameters. Monte-Carlo (MC) ray-tracing is typically used to evaluate the performance of freeform optics for a given (extended) light source. Given the complex shape that freeform surfaces can adopt, thousands to even millions of rays are needed to model the resulting intensity or irradiance pattern with limited statistical noise.
Aside from the considered light source and targeted light distribution, there is another aspect that is of practical importance in the design of freeform components for illumination purposes: the resulting freeform shape. Most freeform design methods result in convex or concave optical components (Fig. 1a). Such components may lead to visual discomfort glare when combined with high-brightness light sources [45]. A possible approach to address this important issue in lighting is working with freeform lens arrays [46]. Unfortunately, such freeform lens arrays have strong C2 discontinuities in between the individual sub-lenses. These discontinuities complicate manufacturing and lead to unwanted straylight [47]. Such problems could be avoided with smooth freeform surfaces that combine multiple convex, concave and saddle shaped regions, for which we introduce the term _freeform topologies_ (Fig. 1c). An additional advantage of such freeform lens topologies in comparison with a single convex or concave freeform lens, is the limited component thickness. Even though this feature can also be achieved with freeform Fresnel lenses, these also exhibit discontinuous jumps [48]. Unfortunately, direct design methods for smooth, continuous freeform topologies are nonexistent at this moment.
This paper presents a deep learning framework for solving the inverse problem of finding a freeform lens surface topology in order to obtain a prescribed irradiance distribution with an extended light source. The framework integrates supervised learning for modeling the ray-tracing on one hand, and semi-supervised learning for freeform surface prediction on the other hand. This double approach alleviates the computational burden to train the freeform surface prediction fully via Monte-Carlo ray-tracing. We demonstrate that rapid convergence can be obtained by considering the deviation of the irradiance resulting from a predicted freeform surface with
Figure 1: (**a**) Convex freeform illumination lens that generates a prescribed (far-field) target pattern from an extended light source. (**b**) The light patterns generated by each of the four individual sub-lenses of a discontinuous lens array result in the prescribed target pattern. (**c**) A smooth, continuous freeform surface _topology_ that consists of multiple convex, concave and saddle-shaped regions avoids strong C2 discontinuities and results in a much thinner component. (**d**) A freeform surface topology can be described as a NURBS surface that is controlled via a matrix of control points.
the target irradiance as a loss function during training. This approach is radically different from the more intuitive method of considering the mean average error of the predicted freeform surface shape compared to the ground truth, which proves to have inferior convergence. This behavior is directly linked to the observation that two or more completely distinct freeform topologies can result in visually identical illumination patterns. The deep learning framework rapidly provides a suitable freeform design as is illustrated for various target patterns. As such, we demonstrate that deep learning does not only serve as a tool to enhance typical optimization procedures, but that it can be used as a rapid, standalone method in freeform lens design.
## Results
### Problem statement
Consider a freeform lens consisting of a planar entrance surface, orthogonal to the optical axis (z-axis) and a freeform exit surface. The freeform surface can be represented as a non-uniform rational basis spline (NURBS) surface with \(a\times b\) equidistant control points (Fig. 1d). When the x- and y-coordinates of these control points are predefined, only the z-coordinate is a free parameter that samples the local height of the freeform lens. With this parameterization, the freeform lens is fully characterized by an \(a\times b\) matrix (\(\mathcal{F}\)). By restricting all z-coordinates to a certain interval [0,\(t\)], the thickness of the freeform lens can be limited.
Light rays can be propagated from the source through the optical surfaces towards a detector surface. By binning the radiant flux of multiple light rays on this detector, in an \(m\times n\) spatial receiver grid, the irradiance distribution on the detector (\(\mathcal{I}\)) can be sampled. A ray-tracing algorithm thus allows to construct the mapping
\[\{\mathcal{F}\in[0,t]^{a\times b}\}\rightarrow\{\mathcal{I}\in\mathbb{R}_{+}^ {m\times n}\}. \tag{1}\]
The problem that is tackled in this work is modeling the inverse relation
\[\{\mathcal{I}\in\mathbb{R}_{+}^{m\times n}\}\rightarrow\{\mathcal{F}\in[0,t]^ {a\times b}\} \tag{2}\]
i.e. for any random irradiance distribution \(\mathcal{I}\) on the target plane, predict the \(\mathcal{F}\) that results in \(\mathcal{I}\) via ray-tracing.
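As a rough sketch of this parameterization, the control-point grid can be turned into a smooth height field with a bicubic spline. Note that `RectBivariateSpline` interpolates the grid rather than acting as a true NURBS control net, so it is only a stand-in for the surface representation used in the paper; the aperture size and thickness bound follow the optical setting described in the Results.

```python
# Sketch of the surface parameterization: an 11 x 11 grid of height values is
# turned into a smooth height field (bicubic spline as a simple NURBS stand-in).
import numpy as np
from scipy.interpolate import RectBivariateSpline

a = b = 11
t = 0.8                                     # maximum lens thickness in mm
rng = np.random.default_rng(0)
F = rng.uniform(0.0, t, size=(a, b))        # random control-point heights

x = np.linspace(0.0, 10.0, a)               # 10 x 10 mm lens aperture
y = np.linspace(0.0, 10.0, b)
surface = RectBivariateSpline(x, y, F, kx=3, ky=3)

xx = np.linspace(0.0, 10.0, 50)
height_map = surface(xx, xx)                # 50 x 50 sampled freeform topology
print(height_map.shape, height_map.min(), height_map.max())
```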
#### Framework structure
Our deep learning framework consists of two different architectures (Fig. 2a, 2b). The first deep learning architecture is a U-net that models the irradiance distribution on the receiver as a result of propagating light rays through the optical system. This architecture implements the mapping relation in Eq. (1) which is typically obtained with MC ray-tracing. U-nets are commonly used segmentation structures and are ideal for modeling 2D-to-2D
Figure 2: **(a)** Illustration of the U-net-SRCNN model for predicting the obtained irradiance on the target plane for a certain freeform surface. The model is trained by considering pairs of freeform topologies and the corresponding simulated far-field irradiance distributions, via MC ray-tracing. Training loss is evaluated using the SSIM loss between the MC-simulated ground truth \(\mathcal{I}\) and the U-net-SRCNN predicted \(\hat{\mathcal{I}}\). **(b)** Illustration of the model to predict a freeform grid \(\hat{\mathcal{F}}\) from an input irradiance \(\mathcal{I}\). Training is achieved by considering the SSIM loss between the input irradiance \(\mathcal{I}\) and \(\hat{\mathcal{I}}\), which is the predicted irradiance of \(\hat{\mathcal{F}}\) by the first model.
mappings. They are widely used in medical imaging and for image deconvolution [49; 50]. However, when tracing rays through arbitrary freeform surfaces, the resulting irradiance distribution often contains highly detailed features. U-nets generally predict images with relatively low resolution, causing these high resolution features to vanish during prediction. To attain more detailed irradiance prediction, a super resolution CNN (SRCNN) is appended [51]. The full model, visualized in Fig. 2a, is trained on structural similarity index measure (SSIM) loss in a supervised manner [52]. The training data is generated using MC ray-tracing in a predefined setting. Moreover, significant data augmentation is performed by considering the inverse rotational symmetry of \(\mathcal{F}\) and \(\mathcal{I}\).
The second network architecture is designed to model the mapping relation of Equation 2 and is schematically shown in Fig. 2b, together with the considered learning strategy. This architecture consists of a typical CNN encoder network, containing a 2D max-pooling head and multi-layer perceptron regressor (MLP), thus predicting the surface parameters \(\hat{\mathcal{F}}\) for an input irradiance \(\mathcal{I}\)[53]. To ensure the prediction of a freeform surface topology that yields an irradiance distribution resembling the input irradiance, it is required to capture both the local and global characteristics of the input irradiance distributions \(\mathcal{I}\) in the training phase. This is achieved by feeding also the exponential and logarithmic transform of \(\mathcal{I}\) to the network, as a 3-channel input. The logarithmic transform allows the model to learn the global context of the input irradiance, whereas the exponential accentuates local irradiance peaks. The pre-trained U-net-SRCNN model is used in the training phase of this second network for two different tasks. First of all, it is used to produce a large set of input irradiance distributions for the learning phase of the second model, without needing to run a huge amount of MC ray-tracing simulations. The advantage of this approach compared to using completely arbitrary irradiance distributions, is that these distributions are a result of the considered freeform topology parameterization, and can thus in principle be obtained with the considered freeform lens. Secondly, the pre-trained ray-tracing model is used during the actual training phase to produce a pseudo-labeled irradiance \(\hat{\mathcal{I}}\) for any predicted \(\hat{\mathcal{F}}\)[54]. The assumption is that these generated pseudo-labels serve as legitimate irradiance distributions, accurately reflecting the MC ray-traced results. Their goal is to compare them with the input irradiance distributions during the training phase of model 2. So with \(m_{1}\) and \(m_{2}\) representing model 1 and model 2 respectively, the trainable loss \(\mathcal{L}\) is evaluated as:
\[\mathcal{L}\{\mathcal{I},m_{2}(\mathcal{I})\}=1-\text{SSIM}\{\mathcal{I},m_{1}[m_{ 2}(\mathcal{I})]\}=1-\text{SSIM}\{\mathcal{I},m_{1}(\hat{\mathcal{F}})\}, \tag{3}\]
with \(m_{1}(\hat{\mathcal{F}})\) the generated pseudo-labeled irradiance to be compared with the ground truth \(\mathcal{I}\). By including the (frozen) ray-tracing model in the training phase of the freeform prediction architecture, the emphasis of model 2 is on replicating the ground truth irradiance distribution, as opposed to replicating the ground truth freeform surface, which is not considered in the training of the second model.
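A single training step of this semi-supervised scheme might look as follows in PyTorch; `freeform_model`, `raytrace_model`, and the use of `pytorch_msssim` for the SSIM loss are placeholders for the paper's ConvNeXt predictor, frozen U-net-SRCNN surrogate, and SSIM implementation.

```python
# Sketch of one semi-supervised training step (loss of Eq. (3)): the predicted
# control points are pushed through the *frozen* ray-tracing surrogate to obtain a
# pseudo-labelled irradiance, and the SSIM loss against the input irradiance is
# back-propagated into the predictor only.
import torch
from pytorch_msssim import ssim

def train_step(freeform_model, raytrace_model, optimizer, irradiance):
    """irradiance: (N, 1, H, W) target irradiance batch, normalized to [0, 1].
    (In the paper the predictor input is actually the 3-channel {I, log I, exp I} stack.)"""
    raytrace_model.eval()
    for p in raytrace_model.parameters():        # keep the ray-tracing surrogate frozen
        p.requires_grad_(False)

    F_hat = freeform_model(irradiance)           # predicted 11 x 11 control-point grids
    I_hat = raytrace_model(F_hat)                # pseudo-labelled irradiance m1(F_hat)
    loss = 1.0 - ssim(irradiance, I_hat, data_range=1.0)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```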
This approach is somehow related to the unsupervised learning strategy that was adopted in computational holography, in favor of supervised learning using an extensive set of random phase masks paired with their simulated amplitude pattern [40]. A main difference is that computational holography can rely on an analytic forward/backward operator for the complex field at the image plane. For the considered case, the role of this analytic operator is taken over by the pre-trained U-net-SRCNN model.
### Prediction results
**Optical simulation setting** The framework is applied in a specific optical setting, to illustrate its usage and performance. An extended, planar light source of \(3\times 3\) mm square is illuminating a freeform lens of \(10\times 10\) mm square at a distance of 40 mm, with a planar entrance surface and freeform exit surface. The refractive index of the lens is 1.5, and a maximum lens thickness of \(t=0.8\) mm is considered. The freeform lens redirects the incident light towards a square receiver plane at a distance of \(d_{rec}=500\) mm and side length \(s_{rec}=1000\) mm, i.e. in the far-field of the lens (see Fig. 2a). This configuration corresponds to the case were a compact LED illuminates a shallow freeform lens, and is e.g. representative for the illumination of a phase-only spatial light modulator that is implemented as a programmable freeform optic, by an extended light source [55]. The freeform surface is characterized by a matrix of \(11\times 11\) equidistant control points of the corresponding NURBS surface [34]. In order to produce irradiance patterns that cover the entire target plane, freeform surface topologies with multiple hills and valleys are necessary. This is a direct consequence of the shallow lens thickness and the fact that large surface slopes are needed to realize the required deflection angles. This is only possible by combining multiple positive and negative surface slopes.
The dataset for supervised learning of the U-net is generated in the commercial software LightTools [56], which allows MC ray-tracing through
freeform (NURBS) lens surfaces. To start, 25000 random freeform topologies are generated by selecting an arbitrary z-coordinate within the chosen interval for each lens surface control point. The corresponding irradiance distribution \(\mathcal{I}\) for each freeform topology \(\mathcal{F}\) is then calculated by tracing 15000 rays from the light source towards the receiver plane. Training is executed on the resulting \(\{\mathcal{F},\mathcal{I}\}\) pairs, using a 90-10 train-validation split.
**Irradiance prediction** Fig. 3a shows quantitative and visual results for the U-net-SRCNN model. The predicted irradiance distribution by the model is compared with the corresponding Monte-Carlo ray-tracing result for three different cases, together with the SSIM loss. The bottom figure shows an outlier in terms of SSIM, compared to the average SSIM of the complete validation set, which is 98.3%. The average 1.7% error is mainly due to the simulation noise in the considered ray-tracing simulations. This noise is more significant for samples where the irradiance is spread out over the entire receiver plane. A more extensive simulated ray-set would be required to suppress these noise effects, but this turned out to be unnecessary. Indeed, in Figure 3b, the validation irradiance for the SSIM outlier of Figure 3a, simulated with 15000 rays, is compared with the MC simulated irradiance for the same freeform lens but with a rayset of 1 million rays. Taking the absolute pixel-wise deviation of the predicted irradiance by the U-Net with the ray-traced irradiance as a measure, the differences are clearly
Figure 3: **(a)** MC-ray-tracing simulated irradiance versus predicted irradiance by the U-net-SRCNN model. The bottom image represents a low SSIM outlier in the validation set. **(b)** Pixel-wise error for the predicted irradiance compared to the ray-traced irradiance for a different number of MC-sampled rays. This illustrates that the model removes noise from the trained data.
smaller when comparing to the simulated sample with 1 million rays, even though the model has been trained on the more noisy samples. This convincingly demonstrates that the model is not solely capable of reproducing ray-tracing, but it also denoises the samples that it was trained on.
Given that the model is capable of generating accurate irradiance distributions, it can be used as a rapid alternative for the computationally intensive MC ray-tracing simulations. Following this logic, a synthetic dataset of 3.4 million freeform \(\mathcal{F}\) - irradiance \(\mathcal{I}\) pairs was rapidly generated within minutes. This enables learning on a large synthetic dataset of input irradiance distributions that are a result of the considered freeform topology, enhancing generalization and reducing over-fitting for the more complex, reverse freeform prediction task. The 3.4 million irradiance samples were again separated in a 90-10 train-validation split.
**Freeform prediction on validation data.** Figure 4a shows quantitative and visual results for the freeform prediction model. Inference takes 11 ms for a single sample, and can be even further enhanced with a TensorRT-optimized module. The predicted results are studied for three validation cases; in each case, a ground truth freeform \(\mathcal{F}\) - irradiance \(\mathcal{I}\) pair is compared with the freeform control points \(\hat{\mathcal{F}}\) predicted by the CNN encoder network and the resulting irradiance distribution with this predicted
Figure 4: Freeform prediction results for certain validation cases. **(a)** Ground truth (\(\mathcal{F},\mathcal{I}\)) against the predicted freeform matrix (\(\hat{\mathcal{F}}\)) and corresponding MC simulated irradiance (\(\hat{\mathcal{I}}_{sim}\)) using 1 million rays. **(b)** (_i_) Pixel-wise error for the normalized predicted freeform surface control points \(|\mathcal{F}-\hat{\mathcal{F}}|\) as well as the interpolated NURBS surfaces. (_ii_) Pixel-wise error for the simulated irradiances.
Notice the significant difference in freeform surface topology (_i_) while producing an almost identical irradiance distribution (_ii_).
freeform (\(\hat{\mathcal{I}}_{sim}\)). The resulting irradiance distribution has been simulated with MC ray-tracing using an extensive ray-set of 1 million rays, rather than with the U-net model used in the training. This assures a fair assessment of the capabilities of the model to predict an \(\hat{\mathcal{F}}\) that results in a prescribed \(\mathcal{I}\) via ray-tracing. A quantitative assessment of the freeform prediction accuracy is provided by again considering the SSIM between the input \(\mathcal{I}\) and the ray-traced irradiance distribution \(\hat{\mathcal{I}}_{sim}\). The visual similarity between \(\mathcal{I}\) and \(\hat{\mathcal{I}}_{sim}\) illustrates the performance of the developed framework. However, one also notices the visual discrepancy between \(\mathcal{F}\) and \(\hat{\mathcal{F}}\). Figure 4b considers this discrepancy more in detail for one specific case, and shows that the pixel-wise deviation for the freeform control points and the interpolated NURBS topologies is much higher than for the irradiance distributions, a feature that is witnessed for most of the validation cases. This observation supports the hypothesis that radically different freeform surface topologies can produce visually identical irradiance distributions. Therefore, training for SSIM on the pseudo-labeled U-net irradiance versus the targeted irradiance is a more logical approach than training for mean average error of the predicted freeform surface compared to the ground truth.
**Freeform prediction on custom targets.** The model performance is also verified in terms of predicting freeform topologies that produce prescribed irradiance distributions, which are neither in the training nor in the validation set. This is the main target of the developed framework. Fig. 5a
Figure 5: Freeform prediction results for certain custom target irradiances \(\mathcal{I}_{t}\). **(a)**\(\mathcal{I}_{t}\) against the predicted freeform surface topologies \(\hat{\mathcal{F}}\) and the MC ray-traced irradiance \(\hat{\mathcal{I}}_{sim}\), resulting from \(\hat{\mathcal{F}}\). **(b)** Rotate test-time augmentation on the triangular target. The results illustrate the existence of various freeform surface topologies that produce almost visually identical irradiance distributions.
shows an overview of the results for three chosen irradiance patterns (\(\mathcal{I}_{t}\)). The predicted freeform surface topology (\(\hat{\mathcal{F}}\)) and corresponding ray-traced irradiance pattern (\(\hat{\mathcal{I}}_{sim}\)) are given, together with the SSIM of the simulated pattern with respect to the prescribed irradiance. The results clearly indicate that the model is capable of producing fairly complicated freeform surface topologies that closely match the target irradiance distributions. Still, one could wonder if the model produces the best freeform topology prediction in terms of SSIM, since there is no ray-traced ground truth in this case. As a test, a _rotate test-time augmentation_ is applied by considering all \(90^{\circ}\)-rotations of the target pattern. The obtained freeform topologies with the resulting MC-simulated irradiance and corresponding SSIM values are shown in Fig. 5b. These results re-affirm that multiple freeform topologies can result in nearly-identical irradiance patterns and the model finds one of these topologies. In other words, the inverse problem is ill-posed, at least within the limitations of MC ray-tracing. From a practical point-of-view however, such rotational test-time augmentations, or similar alternatives, could prove interesting to generate multiple solutions, out of which the best performing, most smooth or most varying option could be selected, depending on the specific application.
**Training performance.** Finally, the efficiency of the proposed training approach is assessed through a comparative analysis with two alternative methodologies.
1. Training in a supervised manner, on L1 loss between \(\mathcal{F}\) and \(\hat{\mathcal{F}}\), using the full synthetic dataset.
2. Training on the SSIM loss with pseudo-labeled irradiances, but using the 25000 MC ray-traced irradiance samples instead of the predicted irradiances by the U-net-SRCNN model.
Fig. 6 considers two target distributions to illustrate the main performance differences: an irradiance distribution from the validation set linked to an actual freeform topology, and a custom prescribed irradiance. For both cases, the regression activation maps (RAM) and resulting ray-traced irradiances \(\hat{\mathcal{I}}_{sim}\) from the predicted freeform topologies are shown for method (a) and (b), compared to the proposed approach. Regression activation maps are included since they provide a detailed representation of how a model localizes discriminative regions affecting the regression outcome [57]. Comparing RAM for the first target pattern, it is visible that method (a) fails to produce confident feature maps at relevant locations. In comparison, model
(b) manages to locate the core of the supplied irradiance as the most relevant area, but some noise remains. The proposed training method clearly delivers the most accurate localization, with activation maps that overlap with the target distribution. This results in the highest SSIM value for the corresponding ray-traced irradiance. Similar results can be seen for the custom target distribution. In this case, the benefit of relying on synthetic data over the base MC ray-traced data (method b) is visually clear when looking at the obtained irradiance distribution.
## Discussion
This paper presents a semi-supervised learning framework for predicting refractive freeform topologies that produce a certain target irradiance. In contrast to prior work on machine learning for freeform design that predominantly relies on 1D multi-layer perceptron-like networks with contextual information, this study employs 2D convolutional neural networks (CNN) to model the relationship between the obtained irradiance and freeform topology. To train the network, a U-net is used for modeling the Monte-Carlo ray-tracing of light from an extended light source, in order to generate pseudo-labeled irradiance distributions that are compared with the input target irradiances. We demonstrate that this semi-supervised learning approach for freeform topology prediction is superior compared to a supervised learning approach using ground truth freeform topology/irradiance pairs. The resulting framework offers an end-to-end solution for rapid, smooth freeform topology design with extended light sources to produce an arbitrary prescribed target irradiance. The framework is trained within a specific optical setting, using a restricted parameterization for the freeform lens topology. This implies that the model is only capable of predicting freeform components within this parameterization and for the considered illumination setting. Despite these limitations, the proposed framework offers significant opportunities regarding the design and implementation of novel freeform optics.
Figure 6: Performance comparison of three different learning approaches. Regression activation maps (RAM, normalized) and the produced irradiance (\(\hat{\mathcal{I}}_{sim}\)) of the predicted freeform topology \(\hat{\mathcal{F}}\) are shown, both for a target irradiance from the validation set and a custom target irradiance.
The fact that freeform topologies can be rapidly generated is especially interesting when considering their implementation on spatial light modulators (SLMs). Modern phase-only SLMs are capable of modulating the optical path length of visible light by multiple wavelengths. This allows to use these SLMs as a kind of programmable freeform optics, by translating a freeform refractive surface into a smooth phase retardation pattern [58, 59]. Despite phase shifts of multiples of \(2\pi\), wrapping this smooth retardation pattern remains necessary, which causes undesired diffraction effects (i.e. chromatic shift). This effect is more pronounced when targeting non-paraxial illumination [55]. Furthermore, also the fast calculation of the freeform surfaces, required to work in real-time, is very challenging when considering non-paraxial illumination with an extended light source. Both limitations can be addressed with the proposed framework. Indeed, the restricted depth of the predicted freeform topologies allows to eliminate or reduce the required phase jumps when targeting non-paraxial illumination. Also the real-time calculation of freeform topologies that generate fast-changing target irradiance distributions from an extended light source is certainly possible.
The freeform components that are introduced in this paper, with their smooth oscillating surface topology and resulting limited thickness, could also serve as a novel passive beam shaping technology. Current freeform micro-optical components always consist of a periodic array of multiple individual lenses or mirrors [60, 61], and while the fabrication of such components has evolved significantly over the past years, the inevitable grooves in between the different lens/mirror elements still pose significant manufacturing challenges in order to avoid stray light [62]. The smooth surface topologies that result from the proposed network avoid such strong surface C2 discontinuities, while still allowing non-paraxial illumination targets. A versatile test-time augmentation strategy can furthermore help to identify the smoothest surface topology to create a certain illumination target.
Finally, the proposed framework and training method could be extended to other, more general freeform illumination design problems. A straightforward extension would be a surface topology parametrization with more control points. This could allow the generation of even more complex irradiance targets, although it should be clear that the extended light source in combination with a single freeform surface also imposes a limitation in this respect. A more interesting expansion from a practical point of view, could be the extension of the framework to various illumination settings and arbitrary extended light sources. This could be realized by introducing additional contextual data to the training of the 2D-CNN network, e.g. the size of/distance to the receiver plane, and/or the position/various characteristics of the extended light source. In a further stage, multiple freeform surfaces could be considered.
While the presented semi-supervised learning strategy proves superior to a supervised learning strategy, for the prediction of shallow freeform surface topologies, it remains to be seen if this would also be the most effective strategy for the design of strictly convex/ concave freeform surfaces. The training of such a framework would certainly require a more specific surface parameterization that enforces concaveness or convexness. Alternatively, contextual data about the freeform shape could be added in the training phase of the 2D-CNN. Whatever the outcome, also in this case, the use of an additional network for modeling the ray-tracing will result in much faster and more effective training.
The discussion above makes clear that the investigation of advanced machine learning techniques for the design of freeform optics has only been started. In this respect, the proposed framework can serve as a convincing demonstration that deep learning allows rapid and standalone freeform optical design.
## Methods
### Data augmentation
Data augmentation is attained by considering the inverse 4-fold rotational symmetry of a freeform surface \(\mathcal{F}\) and the corresponding irradiance distribution \(\mathcal{I}\). In particular, since a freeform topology is characterized by a square \(11\times 11\) grid, its rotation results in a reverse rotation of the irradiance distribution via ray-tracing. In short:
\[\mathrm{rot}(90,\mathcal{F})\rightarrow\mathrm{rot}(-90,\mathcal{I}), \tag{4}\]
where \(\rightarrow\) represents ray-tracing. By considering this symmetry, each freeform surface may be rotated clockwise 3 times, resulting in 3 augmented freeform-irradiance pairs for training the U-Net.
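A sketch of this augmentation with NumPy is given below; the array shapes follow the paper, while the sign convention for "clockwise" is an assumption.

```python
# Sketch of the 4-fold rotation augmentation (Eq. (4)): rotating the control-point
# grid clockwise corresponds to a counter-clockwise rotation of the ray-traced
# irradiance, so each simulated pair yields three extra training pairs for free.
import numpy as np

def augment(F, I):
    pairs = [(F, I)]
    for k in range(1, 4):
        F_rot = np.rot90(F, k=-k)    # k x 90 degrees clockwise
        I_rot = np.rot90(I, k=k)     # k x 90 degrees counter-clockwise
        pairs.append((F_rot, I_rot))
    return pairs

F = np.random.rand(11, 11)           # control-point grid
I = np.random.rand(50, 50)           # corresponding irradiance
print(len(augment(F, I)))            # 4 freeform/irradiance pairs in total
```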
#### Ray-tracing model (U-Net)
The ray-tracing model receives an 11 \(\times\) 11 matrix of control points that unambiguously determine the considered freeform surface topology. This matrix is then interpolated to match the CNN kernels (bilinear, 50 \(\times\) 50).
Figure 7: (**a**) U-net-SRCNN model with a more in depth visualization of how the control points are interpolated and how the SRCNN constructs the final irradiance distribution. **(b)** The freeform prediction model, with highlights on the input transformations and the MLP structure that predicts the freeform surface control points.
The resulting matrix then passes through a U-net encoder-decoder structure, with the encoder being the state-of-the-art, fully convolutional ConvNeXt [63] (Fig. 7a). A drop-path rate is used to prevent over-fitting [64]. The U-net output is a 25 \(\times\) 25 pixel irradiance, which is up-scaled (bilinear) to 50 \(\times\) 50 pixels. Subsequently, the up-scaled images are passed through a trainable SRCNN architecture [51] with non-linear (ReLU) transformations and kernel sizes of 7, 5 and 3, respectively.
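A minimal PyTorch sketch of this up-scaling and refinement stage is shown below; the channel widths (64, 32) are assumptions made for illustration, as the paper does not specify them here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNNHead(nn.Module):
    """SRCNN-style refinement: three convolutions with kernel sizes 7, 5, 3
    and ReLU non-linearities, applied to the bilinearly up-scaled U-net output."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 64, kernel_size=7, padding=3)
        self.conv2 = nn.Conv2d(64, 32, kernel_size=5, padding=2)
        self.conv3 = nn.Conv2d(32, 1, kernel_size=3, padding=1)

    def forward(self, coarse_irradiance):
        # Up-scale the 25 x 25 U-net output to 50 x 50 before refinement.
        x = F.interpolate(coarse_irradiance, size=(50, 50),
                          mode="bilinear", align_corners=False)
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        return self.conv3(x)
```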
### Loss function
The U-net-SRCNN model is trained using a structural similarity index measure (SSIM) loss. The SSIM is defined as [52]:
\[\text{SSIM}(x,y)=\frac{(2\mu_{x}\mu_{y}+C_{1})(2\sigma_{xy}+C_{2})}{(\mu_{x}^{2 }+\mu_{y}^{2}+C_{1})(\sigma_{x}^{2}+\sigma_{y}^{2}+C_{2})}\]
where \(x\) and \(y\) denote the two images (or image windows) being compared, \(\mu_{x}\) and \(\mu_{y}\) their means, and \(\sigma_{x}^{2}\) and \(\sigma_{y}^{2}\) their variances. Furthermore, \(\sigma_{xy}\) is the covariance between \(x\) and \(y\), and \(C_{1}\) and \(C_{2}\) are constants that stabilize the numerator and denominator in the calculation. SSIM is used since it considers luminance and contrast perception, contrary to the commonly used mean absolute error (MAE). SSIM outputs a fraction between 0 and 1, implying that the loss to be minimized between an irradiance \(\mathcal{I}\) and its prediction \(\hat{\mathcal{I}}\) is:
\[1-\text{SSIM}(\mathcal{I},\hat{\mathcal{I}}). \tag{5}\]
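The snippet below is a simplified sketch of this loss using whole-image statistics; practical SSIM implementations use local windowed statistics, and the stabilizing constants shown are illustrative assumptions.

```python
import torch

def ssim_loss(pred, target, c1=1e-4, c2=9e-4):
    """Simplified 1 - SSIM loss computed from global image statistics."""
    mu_x, mu_y = pred.mean(), target.mean()
    var_x = pred.var(unbiased=False)
    var_y = target.var(unbiased=False)
    cov_xy = ((pred - mu_x) * (target - mu_y)).mean()
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return 1.0 - ssim
```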
### Freeform prediction model
The main component of the freeform prediction architecture is a CNN encoder, which is ConvNeXt [63]. The model takes a 3-channel image as input, consisting of:
\[\{\mathcal{I},\log(\mathcal{I}),\exp(\mathcal{I})\}. \tag{6}\]
Using the logarithmic and exponential transforms of \(\mathcal{I}\) allows the model to explore the global and local properties more easily (Fig. 7b). The model head contains a 2D max-pooling layer, a \(2\times 2\) filter that runs over all the extracted feature maps and retains the maximal value as the output. This significantly reduces over-fitting. The max-pooling output is passed through a non-linear mapping MLP with GELU activation that outputs 121 variables. The output is then reshaped into an \(11\times 11\) grid \(\hat{\mathcal{F}}\) of freeform topology control points.
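A small sketch of this input construction is given below; the epsilon guarding the logarithm against zero-valued pixels is an implementation assumption, not part of the paper.

```python
import torch

def make_input_channels(irradiance, eps=1e-6):
    """Stack {I, log(I), exp(I)} into a 3-channel input image (Eq. 6)."""
    return torch.stack([irradiance,
                        torch.log(irradiance + eps),
                        torch.exp(irradiance)], dim=0)
```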
The model is trained in a semi-supervised manner: for each predicted set of control points, the corresponding irradiance distribution is computed and compared with the input. The irradiance distribution is calculated as a pseudo-labeled image via the pre-trained U-net-SRCNN model. If this U-net-SRCNN model is represented by \(\mathbf{R}\), the loss function on which the model is trained is:
\[1-\text{SSIM}(\mathcal{I},\mathbf{R}(\hat{\mathcal{F}})), \tag{7}\]
with \(\mathcal{I}\) the input irradiance into the network.
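A hedged sketch of one such training step is shown below, reusing the simplified `ssim_loss` from above; function and argument names are illustrative, and the only essential point is that the pre-trained ray-tracing model \(\mathbf{R}\) stays frozen while gradients flow through it to the freeform predictor.

```python
def semi_supervised_step(predictor, ray_tracer, irradiance, optimizer):
    """One semi-supervised training step implementing Eq. (7)."""
    ray_tracer.eval()                           # R is frozen (not in the optimizer)
    optimizer.zero_grad()
    control_points = predictor(irradiance)      # predicted 11 x 11 grid F_hat
    reconstructed = ray_tracer(control_points)  # pseudo-label R(F_hat)
    loss = ssim_loss(reconstructed, irradiance) # 1 - SSIM(I, R(F_hat))
    loss.backward()                             # gradients reach the predictor through R
    optimizer.step()
    return loss.item()
```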
### Training setup
All models in the proposed method use ImageNet-21k pre-trained weights [65], although the contribution of these is likely limited. Training is done using exponential learning rate decay with linear warm-up. A full overview of the training, as well as the used encoder variant, is shown in Table 1. For the freeform prediction models, the training procedure is given for each of the methods discussed in the training performance section.
|
2309.08476 | A Spiking Binary Neuron -- Detector of Causal Links | Causal relationship recognition is a fundamental operation in neural networks
aimed at learning behavior, action planning, and inferring external world
dynamics. This operation is particularly crucial for reinforcement learning
(RL). In the context of spiking neural networks (SNNs), events are represented
as spikes emitted by network neurons or input nodes. Detecting causal
relationships within these events is essential for effective RL implementation.
This research paper presents a novel approach to realize causal relationship
recognition using a simple spiking binary neuron. The proposed method leverages
specially designed synaptic plasticity rules, which are both straightforward
and efficient. Notably, our approach accounts for the temporal aspects of
detected causal links and accommodates the representation of spiking signals as
single spikes or tight spike sequences (bursts), as observed in biological
brains. Furthermore, this study places a strong emphasis on the
hardware-friendliness of the proposed models, ensuring their efficient
implementation on modern and future neuroprocessors. Being compared with
precise machine learning techniques, such as decision tree algorithms and
convolutional neural networks, our neuron demonstrates satisfactory accuracy
despite its simplicity. In conclusion, we introduce a multi-neuron structure
capable of operating in more complex environments with enhanced accuracy,
making it a promising candidate for the advancement of RL applications in SNNs. | Mikhail Kiselev, Denis Larionov, Andrey Urusov | 2023-09-15T15:34:17Z | http://arxiv.org/abs/2309.08476v1 | # A Spiking Binary Neuron - Detector of Causal Links
###### Abstract
Causal relationship recognition is a fundamental operation in neural networks aimed at learning behavior, action planning, and inferring external world dynamics. This operation is particularly crucial for reinforcement learning (RL). In the context of spiking neural networks (SNNs), events are represented as spikes emitted by network neurons or input nodes. Detecting causal relationships within these events is essential for effective RL implementation. This research paper presents a novel approach to realize causal relationship recognition using a simple spiking binary neuron. The proposed method leverages specially designed synaptic plasticity rules, which are both straightforward and efficient. Notably, our approach accounts for the temporal aspects of detected causal links and accommodates the representation of spiking signals as single spikes or tight spike sequences (bursts), as observed in biological brains. Furthermore, this study places a strong emphasis on the hardware-friendliness of the proposed models, ensuring their efficient implementation on modern and future neuroprocessors. Being compared with precise machine learning techniques, such as decision tree algorithms and convolutional neural networks, our neuron demonstrates satisfactory accuracy despite its simplicity. In conclusion, we introduce a multi-neuron structure capable of operating in more complex environments with enhanced accuracy, making it a promising candidate for the advancement of RL applications in SNNs.
_Keywords_: Spiking neural network, binary neuron, spike timing dependent plasticity, dopamine-modulated plasticity, anti-Hebbian plasticity, reinforcement learning, neuromorphic hardware
## 1 Introduction
If we aim to develop an intelligent system based on neural networks capable of learning adaptive behavior towards specific objectives, it is imperative to equip it with the capacity to identify and incorporate causal relationships between events occurring both within the network and in the external environment. These relationships may encompass consistent sequences of patterns forming a unified spatiotemporal pattern, directives to actuators generated by the network, and subsequent responses from the external environment, as well as events preceding a reward and the reward itself. Therefore, the capability to discern causes and effects should constitute a foundational functionality within network structures or individual neurons. In most scenarios, the learning network lacks access to a priori knowledge outlining causal relationships in its environment. Instead, it must deduce them from observed temporal alignments of different events, under the assumption that if event B is frequently witnessed within a specific time frame following event A, then A serves as the cause and B as the consequence.
In this research, we investigate the integration of this particular functionality within spiking neural networks (SNNs), specifically within an individual neuron embedded in the SNN. In the context of SNN, information is encoded through sequences of spikes, making it necessary to frame this problem within the spike domain.
We shall formally define the task to be addressed by this neuron, referred to as the causal relationship detector. The neuron receives spikes from a group of other neurons. It is postulated that the concurrent activation of an unidentified subset of these presynaptic neurons signifies event A (the cause), while the activation of a distinct neuron indicates event B (the consequence). Given the assumption that event B consistently or frequently occurs within a time window of \(T_{P}\) following event A, the neuron's objective is to identify the specific presynaptic neurons belonging to this subset and fire whenever event A occurs. Similar to other learning challenges encountered in SNNs, this problem is resolved by introducing appropriate adjustments to the synaptic weights of the neuron-detector. Importantly, and notably complicating the matter, both the presynaptic neurons and the neuron-detector itself may produce not only individual isolated spikes but also tightly clustered and prolonged spike sequences. This aspect introduces a significant challenge, as it limits the applicability of the majority of existing synaptic plasticity models that rely on the relative timing of individual pre- and post-synaptic spikes.
Numerous studies have demonstrated how SNN can express causal relationships between observations. However, the present research offers a unique combination of three distinctive attributes:
1. Causal relationships are identified by a single neuron, rather than relying on a network-wide approach.
2. The temporal dimension of causal relationships is emphasized, with events-causes occurring certain time before their corresponding consequences.
3. Specially designed local synaptic plasticity rules are used for learning.
The majority of works consider this problem without a temporal aspect. In this context, the problem is often called Bayesian inference or causal inference. It has a close relation to supervised learning, where the network should determine which factors, either independently or in combination, most reliably indicate that the given object belongs to the target class. Recent examples of such research taking various approaches to tackle causal/Bayesian inference problems can be found in [1, 2, 3]. It is noteworthy that one of the primary objectives of these works is supervised learning, which does not explicitly incorporate time. Even when dealing with time series data, the mentioned works treat the entire time series as a single entity, assigning a label to the entire sequence. In contrast, our research aligns more closely with reinforcement learning where all the signals (input signals, network commands, and reward/punishment) exist within the continuous time where time delays carry significance.
Furthermore, our approach resonates with Friston's concepts of free energy [4], as understanding the "expected events" is essential for quantifying the "surprise" or unexpected events in terms of free energy. In essence, this research delves into the intricacies of recognizing causal relationships within the temporal context, which holds particular relevance for dynamic, time-dependent systems.
It's worth noting that our research problem bears resemblance to another significant area of interest among machine learning and neural network researchers, namely, time series forecasting. Time series forecasting techniques aim to predict the future values of specific variables (whether discrete or continuous) based on their current and recent values, and possibly those of other related variables. Naturally, if we can identify causal relationships between the values of certain variables or the states of an object and the values of specific parameters in the future, it equips us with a tool for predicting these future values. However, our primary objective differs from traditional time series forecasting, as we are not focused on predicting the exact value of a particular variable at an exact point in time. Instead, our aim is to infer causal rules that indicate that after event A, it is expected that event B will occur within a time interval of length \(T_{P}\). Consequently, our problem can be more aptly characterized as future event prediction.
Remarkably, there are relatively few applications of SNNs to this problem. One such approach is described in [7]. It utilizes the NeuCube system [5], which is centered around the Liquid State Machine (LSM) [6]. The LSM is a large, chaotic, non-plastic SNN designed to transform a combination of time series data and static (or slowly changing) parameters into a high-dimensional representation, represented by the current firing frequencies of neurons within the LSM. Due to the great number of neurons in the LSM, representations of various spatiotemporal patterns in the form of LSM neuronal activity tend to be
linearly separable. Consequently, classification problems related to such representations can be efficiently solved using simple linear classifiers. As described in [7], several instances of NeuCube's application to rare event forecasting have been demonstrated. One specific example, the prediction of strokes, is examined in greater depth in [8]. While the LSM-based approach has demonstrated success across a wide range of tasks, it has a notable drawback in that it requires a large, computationally intensive LSM to achieve efficiency. In contrast, our approach effectively addresses a similar problem using just a single neuron, offering a more resource-efficient solution.
Paper [9] shows how special SNN structures can be used to infer the graph of causal relationships, but, again, without temporal aspect as mentioned before.
At last, there exists another branch of SNN research closely aligned with our study. As described below, we employ a combination of two synaptic plasticity models--commonly referred to, albeit in a very approximate sense, as Hebbian plasticity and dopamine plasticity--to address the problem of identifying causal links. Hebbian plasticity, when applied to the plasticity of spiking neurons, is often denoted as the STDP plasticity model (Spike Timing Dependent Plasticity) [10], while dopamine plasticity is typically associated with reward-related effects. The collective terminology for these merged plasticity models is R-STDP (Reward Spike Timing Dependent Plasticity). Numerous works have explored and proposed various R-STDP models [11, 12, 13, 14], with some of them having already undergone testing in real-world applications [15]. Presently, there is no universally accepted consensus on how to best combine these two types of synaptic plasticity, resulting in a wide array of proposed models. Moreover, unlike our approach where reward takes the form of a spike signal, these models typically represent reward as a global real-valued variable. To the best of our knowledge, no prior work has employed similar synaptic plasticity rules for the specific purpose of detecting causal links.
Furthermore, our specific objective was to craft a simplified plasticity model to facilitate efficient implementation on contemporary and forthcoming neuroprocessors. According to work [16], there is a clear trend in the emergence and development of software and hardware systems that allow not only the inference of convolutional neural networks converted into SNN form, but also the use of arbitrary models of spiking neurons and plasticity rules, opening up the possibility of continuous learning.
In the subsequent section, we will provide a detailed description of our innovative synaptic plasticity model, which combines Hebbian (effectively, anti-Hebbian) and dopamine plasticity. Following this, we will consider the application of this model to reward prediction in reinforcement learning (RL), using the "ping-pong" RL task as an example. In conclusion, we will outline our vision of how neurons of this kind can form network structures capable of inferring complex graphs of causal relationships essential for constructing external world models in RL, assess the advantages and limitations of our approach, and outline future research plans in this direction.
## 2 Materials and Methods
In our research, we investigate the learning process of a single spiking neuron in the context of identifying causal relationships among events. This neuron is connected to a group of presynaptic neurons denoted as set \(C\), whose activity is presumed to represent various events in a general sense. In our study, we interpret these events as potential triggers for another event, which we will refer to as the "target event". This target event is signaled by spikes from a separate presynaptic neuron denoted as \(S\), which is not a part of set \(C\). The time moment of the \(j\)-th firing of the \(i\)-th presynaptic neuron from the set \(C\) is \(t_{ij}\). The moments when the neuron \(S\) fires will be denoted as \(T_{i}\). We say that some event is a cause of the target event if the target event is frequently observed not later than the time \(T_{P}\) after that event. As we have mentioned, these possible causes of the target event are indicated by the specific activity of the presynaptic neurons, which the learning neuron should try to recognize. \(T_{P}\) is a temporal constant fixing the time scale of the concrete domain or task. It is assumed that the target events are rare, meaning that \(T_{P}\) is much less than the minimum interval between successive target events, \(\min_{i}(T_{i+1}-T_{i})\). This is the only important assumption - without it, the problem of finding causal relationships becomes senseless.
We introduce the concept of a "target period", which encompasses the time interval of length \(T_{P}\) preceding each \(T_{i}\). The learning neuron should mark the target periods by its activity (spikes emitted by it at the times \(T_{i}^{*}\)). If it learns to do it with sufficient accuracy, it signifies that it has successfully recognized the causal link between a specific event (corresponding to the distinct activity of a presynaptic neuron that triggers the neuron to fire) and the target event. To assess the accuracy of this recognition, we introduce the concept of a "prediction period". Each prediction period begins at some moment \(T_{i}^{*}\) and ends either after time \(T_{P}\) from that moment or at some moment \(T_{i}\) - whichever occurs first. The total duration of time \(t_{err}\), during which the target period and prediction period do not overlap, serves as a natural metric for measuring the inaccuracy in predicting the target event. The learning neuron's objective is to maximize the metric represented by the equation:
\[R=1-\frac{t_{err}}{t_{tar}} \tag{1}\]
where \(t_{tar}\) signifies the total duration of target periods.
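A small sketch of this metric on a discretized timeline is given below; interpreting \(t_{err}\) as the symmetric difference of the target and prediction periods is one plausible reading of the definition, and the boolean-mask representation is an assumption made for illustration.

```python
import numpy as np

def prediction_accuracy(target_mask, prediction_mask):
    """R = 1 - t_err / t_tar (Eq. 1); both arguments are boolean arrays,
    one entry per discrete emulation step."""
    t_err = np.logical_xor(target_mask, prediction_mask).sum()
    t_tar = target_mask.sum()
    return 1.0 - t_err / max(t_tar, 1)
```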
In our study, we use the simplest neuron model, called the "binary neuron". This neuron operates within discrete time intervals. Every time interval, it receives spikes via its plastic synapses with the weights \(w_{i}\). If the sum of the weights of the synapses receiving spikes in this time interval is greater than the threshold value \(H\), the neuron fires. This choice makes our result general - in fact, it does not depend on the concrete neuron model. After a suitable time discretization, any neuron model can be approximated by the binary neuron, which retains the main spiking neuron property - the neuron fires when several strong excitatory synapses receive spikes during a short time period. In order to make the weights \(w_{i}\) dimensionless, we set \(H=1\).
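A minimal sketch of one time step of such a binary neuron is shown below (variable names are illustrative).

```python
import numpy as np

def binary_neuron_step(weights, input_spikes, threshold=1.0):
    """Fire if the summed weight of synapses that received a spike in this
    discrete step exceeds the threshold H (= 1 after normalization).
    `input_spikes` is a boolean vector with one entry per plastic synapse."""
    return float(weights[input_spikes].sum()) > threshold
```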
### General Idea of the Method and the Synaptic Plasticity Rules Used
We assume that information about potential causal events leading to the target event is encoded within spikes originating from the presynaptic neurons in set \(C\). The synapses responsible for transmitting these spikes are plastic, and their synaptic weights should be modified in such a way that would force the neuron to fire during the target period.
Activity of the learning neuron and modification of its synaptic weights should be interrelated in the following way:
A. An untrained neuron should be inactive - the plasticity mechanism should strengthen those synapses which would force the neuron to fire at the correct time. For this reason, we set the weights of all plastic synapses to zero at the start of learning.
B. If the neuron fires at the wrong time (outside of target periods), the synapses which helped it to fire should be depressed.
C. If the neuron fires at the correct time, nothing should be done with its synapses - otherwise, modifying its synaptic weights could move it away from the trained state.
These behaviors are achieved through the specific design of synaptic plasticity rules. It is important that the properties of the plastic synapses are totally different from those of the single synapse through which the presynaptic neuron \(S\) is connected. We will call it the "dopamine" synapse because spikes arriving at it control the plasticity of all other synapses.
The principles A, B, and C specified above are fulfilled due to the following combination of the two plasticity rules:
1. Dopamine plasticity. Every time the neuron receives a spike from the \(S\) neuron all the plastic synapses having obtained a spike during the time \(T_{P}\) before this "dopamine" spike are potentiated.
2. Anti-Hebbian plasticity. All synapses contributing to neuron firing are depressed.
It is evident that the equilibrium between dopamine plasticity and anti-Hebbian plasticity, when properly balanced, satisfies the conditions A to C, ensuring the successful training and functioning of the neuron.
### Synaptic Plasticity Model in Detail
Similar to our previous research works [21, 22], the synaptic plasticity rules are additive and are applied to a variable termed "synaptic resource," denoted as \(W\), rather than directly to the synaptic weight, denoted as \(w\). There is a functional dependence between \(W\) and \(w\) expressed by the formula:
\[w=w_{\min}+\frac{(w_{\max}-w_{\min})\max(W,0)}{w_{\max}-w_{\min}+\max(W,0)} \tag{2}\]
where \(w_{min}\) and \(w_{max}\) are constants. It is obvious that \(w\) values lie inside the range [\(w_{min}\), \(w_{max}\)), while \(W\) runs from \(-\infty\) to \(+\infty\). In this study, \(w_{min}<0\) and \(w_{max}>0\), so that synaptic plasticity can turn an excitatory synapse into an inhibitory one and vice versa.
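The mapping of Eq. (2) can be written directly as a small helper (a sketch; the function name is ours):

```python
def weight_from_resource(W, w_min, w_max):
    """Map the unbounded synaptic resource W to a weight w in [w_min, w_max)."""
    span = w_max - w_min
    return w_min + span * max(W, 0.0) / (span + max(W, 0.0))
```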
As previously mentioned, the synaptic plasticity model comprises two distinct and independent components, which are elaborated upon in subsections 2.2.1 and 2.2.2.
#### Anti-Hebbian Plasticity
The standard STDP (spike timing dependent plasticity) model [10] states that spikes arriving a short time before postsynaptic spike emission potentiate the synapses receiving them. This concept aligns with Donald Hebb's principle, which asserts that synaptic plasticity should encode causal relationships within neuron firings; in essence, synapses responsible for inducing neuron firing should be strengthened. This principle is supported by numerous neurophysiological observations. However, in-depth investigations into plasticity within biological neurons have revealed multiple instances of entirely distinct synaptic plasticity models existing in nature [17, 18]. Furthermore, examples of plasticity rules acting in the direction opposite to the Hebbian principle (anti-Hebbian plasticity) have been observed in different organisms [19]. This leads us to conclude that different kinds of synaptic plasticity are suited to the solution of different problems. Moreover, the standard STDP model becomes senseless or self-contradictory in the case (which is quite common in the biological brain) when presynaptic and postsynaptic spikes do not stand alone in time but are packed into tight sequences (spike trains). In this case, it is meaningless to say that a presynaptic spike comes before or after a postsynaptic spike, because there are many postsynaptic spikes in the close neighborhood both before and after the given presynaptic spike.
For these reasons, we have devised our own variant of the anti-Hebbian plasticity model and applied it to the problem at hand. Let us elaborate on this model.
As previously mentioned, weight modifications in the standard STDP are bound to single pre- and postsynaptic spikes. However, in the presence of spike trains, these rules lose their applicability. Consequently, in our model, the synaptic plasticity events are bound to postsynaptic spike trains instead of single spikes. We refer to these spike trains as "tight spike sequences" (TSS). Specifically, taking the constant \(\mathit{ISI}_{max}\) as a measure of the "tightness" of a TSS, we define a TSS as a sequence of spikes adhering to the following criteria:
1. There were no spikes during time \(\mathit{ISI}_{max}\) before the first spike in TSS;
2. Interspike intervals for all neighboring spikes in TSS are not greater than \(\mathit{ISI}_{max}\);
3. There are no spikes during time \(\mathit{ISI}_{max}\) after the last spike in TSS.
In this work, \(\mathit{ISI}_{max}\) is set to the value of \(T_{P}\).
Our anti-Hebbian plasticity model adheres to the following rules:
1. The resource of any synapse can change at most once during a single TSS. Here and below, TSS refers to a postsynaptic spike train.
2. Only the resources of those synapses which receive at least one spike during the TSS are changed.
3. All such synaptic resources are changed (decreased) by the same value \(d_{H}\), independently of the exact timing of presynaptic spikes.
#### Dopamine Plasticity
The described neuron has a plasticity-inducing synapse (from \(S\)). When it obtains a spike, the synaptic resources of all plastic synapses of the neuron that received at least one presynaptic spike during the time interval of length \(T_{P}\) in the past are modified (increased) by the same value \(d_{D}\).
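A compact sketch of the two rules is given below; the boolean masks (which synapses spiked during the current TSS, and which spiked within the preceding \(T_{P}\) window) are assumed to be maintained by the surrounding simulation, and the function names are illustrative.

```python
import numpy as np

def anti_hebbian_update(resources, spiked_during_tss, d_h):
    """At the end of a postsynaptic TSS, decrease by d_H the resource of every
    synapse that received at least one spike during that TSS (one change per TSS)."""
    resources[spiked_during_tss] -= d_h

def dopamine_update(resources, spiked_in_window, d_d):
    """On a dopamine spike, increase by d_D the resource of every plastic synapse
    that received a presynaptic spike within the preceding T_P interval."""
    resources[spiked_in_window] += d_d
```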
#### Neuron Stability
In our model, the synaptic plasticity values \(d_{H}\) and \(d_{D}\) are not constant. Instead, they are dynamic and exhibit characteristics that adapt over learning. Initially, during the early stages of learning, these values need to be sufficiently large. However, for a well-trained neuron that consistently makes accurate predictions, they should approach zero. This adaptation is crucial to prevent further modifications to the neuron's synaptic weights, which could potentially disrupt its established trained state. To account for this adaptive behavior, we introduce an additional component into the neuron's state, denoted as "stability."
The synaptic plasticity values decrease exponentially to zero as the stability value grows, governed by the following expressions:
\[d_{H}=\overline{d_{H}}\min(2^{-s},1),\ d_{D}=\overline{d_{D}}\min(2^{-s},1). \tag{3}\]
Here, \(\overline{d_{H}}\) and \(\overline{d_{D}}\) are constants of the neuron model. In order to balance anti-Hebbian and dopamine plasticity (which is necessary to satisfy condition C from subsection 2.1), we set \(\overline{d_{H}}=\overline{d_{D}}\).
The neuron stability value changes in the two situations:
1. It is decreased by the constant \(d_{s}\) every TSS.
2. It is adjusted by the value \(d_{s}\max\left(2-\frac{|t_{\text{TSS}}-ISI_{\text{max}}|}{ISI_{\text{max}}},- 1\right)\) whenever a presynaptic dopamine spike is received. Here \(t_{\text{TSS}}\) is the time interval between the most recent TSS onset and the dopamine spike.
We see that if the TSS began exactly the time \(ISI_{\text{max}}\) (\(=T_{P}\)) before the dopamine spike (i.e., the target event), the stability increase is maximal and equal to \(d_{s}\), taking into account its decrease by \(d_{s}\) in accordance with rule 1. This corresponds to the most accurate prediction of the target event and serves as evidence that the neuron is well trained. Conversely, if a dopamine spike occurs when the neuron has remained inactive for an extended period, it suggests inadequate training, and as a result, its stability is decreased by \(d_{s}\) to facilitate further training.
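The two stability-related formulas can be sketched as follows (illustrative helper names; rule 1, the decrease by \(d_{s}\) per TSS, is assumed to be applied separately by the surrounding simulation):

```python
def effective_plasticity(base_delta, stability):
    """Plasticity magnitude attenuated by stability, Eq. (3): d = d_bar * min(2^{-s}, 1)."""
    return base_delta * min(2.0 ** (-stability), 1.0)

def stability_change_on_dopamine(t_tss, isi_max, d_s):
    """Stability increment applied when a dopamine spike arrives (rule 2),
    where t_tss is the time since the most recent TSS onset."""
    return d_s * max(2.0 - abs(t_tss - isi_max) / isi_max, -1.0)
```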
### The Test Task - Find the Cause of Obtaining Reward in the Ping-Pong ATARI Game
The described technique holds great promise in the realm of reinforcement learning (RL). While supervised learning can be conceptualized as identifying causal relationships between predictors as causes and the target value as a consequence, RL tasks encompass a broader spectrum of causal link determination, explicitly incorporating the element of time. Reward signals arrive rarely and, possibly, with a significant delay after the world states or agent actions they evaluate. To overcome the problem of insufficient evaluation-signal frequency, a mechanism of intermediate goals should be utilized, but it, too, is based on inferring causal links.
Furthermore, the key point of the most comprehensive realization of RL, known as model-based RL, is the creation by the agent of an internal model of world dynamics and of the world's reactions to agent actions. The model-creation mechanism unavoidably includes inferring a network of causal relationships between world state changes and agent actions. Thus, it is entirely reasonable to assert that causal link inference is the most basic operation in RL.
Keeping this in mind, we selected the first task for testing the capability of the proposed neuron model from the RL domain. Namely, we chose one of the RL tasks drawn from the well-established ATARI games benchmark set [20]. This task centers around the computerized ping-pong game, where a ball traverses within a square area, rebounding off its walls. The area has only three walls. Instead of the left wall, the racket moves in the vertical direction along the left border of this square area. The racket is controlled by the agent, which can move it up and down. When the ball hits the racket, it bounces back and the agent obtains a reward signal. If the ball crosses the left border without hitting the racket, the agent gets a punishment and the ball is returned to a random point of the middle vertical line of the area, receives a random movement direction and speed, and the game continues. Using the reward/punishment signals received, the agent should understand that its aim is to reflect the ball and learn how to do it.
In our example, the network (in fact, a single neuron) should solve the very first problem - to "understand" what conditions cause obtaining reward in the near future.
Let us describe the input information coming to the neuron's plastic synapses. This information includes the current positions of the ball and the racket and the ball velocity. While the ultimate formulation of this problem would involve primary raster information (i.e., the screen image), our focus here extends beyond computer vision and delves into the realm of causal relationships. Consequently, we assume that preceding layers have already processed the primary raster data and converted it into the spike-based description of the world state, which forms the basis of our neuron's input.
The input nodes that are sources of spikes sent to the learning neuron are subdivided into the following sections:
1. The ball X coordinate. Consists of 30 nodes capturing the ball's horizontal position. The horizontal dimension is divided into 30 bins. When the ball is in bin \(i\), the \(i\)-th node emits spikes with a frequency of 300 Hz. To establish spatial and temporal scales, we assume that the size of the square area is 10\(\times\)10 cm (so that the boundary coordinates are \(\pm\)5 cm) and the discrete emulation time step is 1 msec.
2. The ball Y coordinate. Consists of 30 nodes capturing the ball's vertical position. Similar to X but for the vertical axis.
3. The ball velocity X component. Consists of 9 nodes capturing the ball's horizontal velocity. When the ball is reset in the middle of the square area, its velocity is set to a random value from the range [10, 33.3] cm/sec. Its original movement direction is also random but is selected so that its X component is not less than 10 cm/sec. The whole range of possible ball velocity X component values is divided into 9 bins such that the probabilities of finding the ball at a random time moment in each of these bins are approximately equal. While the ball X velocity is in some bin, the respective input node emits spikes with a frequency of 300 Hz.
4. The ball velocity Y component. Consists of 9 nodes capturing the ball's vertical velocity. The same logic as for the X velocity component.
5. The racket Y coordinate. Consists of 30 nodes capturing the racket's vertical position. Similar to the ball Y coordinate. The racket size is 1.8 cm, so that the racket occupies slightly more than 5 vertical bins.
6. The relative position of the ball and the racket in the close zone. Consists of 25 nodes capturing the ball's positions close to the racket. The square visual field of size 3\(\times\)3 cm moves together with the racket so that the racket center is always at the center of the left border of this visual field. The visual field is divided into 5\(\times\)5 square zones. When the ball is in some zone, the respective input node fires with a frequency of 300 Hz.
In total, there are 133 input nodes transmitting their spikes to the learning neuron. The neuron's objective is to discern the conditions that lead to obtaining a reward signal within a 50 msec timeframe, and accordingly, we set \(T_{P}=100\) msec.
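As a rough illustration of this place-coded input, the sketch below encodes one coordinate into a vector of binned nodes, with the active node firing stochastically at 300 Hz in 1 msec steps; the Poisson-like sampling is our assumption, since the paper only specifies the firing rate.

```python
import numpy as np

def encode_coordinate(value, lo, hi, n_bins, rate_hz=300.0, dt_ms=1.0):
    """Return a boolean spike vector for one emulation step: only the node of
    the bin containing `value` may fire, with probability rate_hz * dt."""
    spikes = np.zeros(n_bins, dtype=bool)
    bin_idx = int(np.clip((value - lo) / (hi - lo) * n_bins, 0, n_bins - 1))
    p_spike = rate_hz * dt_ms / 1000.0          # 0.3 for 300 Hz and 1 msec steps
    spikes[bin_idx] = np.random.rand() < p_spike
    return spikes
```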
### Selection of the Neuron Parameters Using the Genetic Algorithm
While our neuron model appears to be relatively simple, it incorporates several parameters that require appropriate calibration. There are four key parameters in our model:
* The maximum synaptic resource change \(\overline{d_{H}}\). This parameter controls learning speed. Its low values make learning slow, high values may make it unstable.
* The minimum synaptic weight value \(w_{min}\). It is negative.
* The maximum synaptic weight value \(w_{max}\).
* The stability change speed \(d_{s}\).
The optimum values of \(\overline{d_{H}}\) and \(d_{s}\) are determined by strength of causal links - in case of weak determinism or high noise, great values of these parameters will lead to learning instability. \(w_{min}\) and \(w_{max}\) should be selected on the basis of mean input spike flow - the input node count and the mean spike frequency per node.
Although the general principles for setting these parameters are sufficiently clear, we decided to find their optimum values using a genetic algorithm with very wide search ranges: [0.03, 1] for \(\overline{d_{H}}\); [0.003, 1] for \(w_{min}\); [0.003, 3] for \(d_{s}\). For setting their random values, the log-uniform distribution was used. The population size was 300; the elitism level was 0.1; the mutation probability per chromosome was 0.5. The optimization criterion was \(R\) (1). It was measured for the last 600 sec of a 2000 sec record of the ping-pong game where the racket moved chaotically. The total number of rewards was 951. The genetic algorithm terminated when 3 successive generations showed no increase in \(R\).
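For illustration, the log-uniform sampling used to draw the random parameter values can be sketched as follows (helper name and usage are ours):

```python
import numpy as np

def log_uniform(lo, hi, size=1):
    """Sample from a log-uniform distribution over [lo, hi]."""
    return np.exp(np.random.uniform(np.log(lo), np.log(hi), size))

# e.g. an initial population of 300 candidate values for d_H
d_h_candidates = log_uniform(0.03, 1.0, size=300)
```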
## 3 Results and Discussion
The best result achieved during our experiments occurred in the 17\({}^{\rm th}\) generation of the genetic algorithm. The \(R\) value reached was 0.553. The optimum parameter values found were: \(\overline{d_{H}}=0.056\); \(w_{min}=-0.017\) (setting it to zero only slightly decreases \(R\)); \(w_{max}=0.48\) (at least 3 input spikes are required for firing); \(d_{s}=0.23\).
Considering the inherent fuzziness in the relationship between the current world state and obtaining a reward soon afterwards, due to the discrete world description and the chaotic racket movement, this \(R\) value appears satisfactory. To objectively evaluate the performance, we decided to compare our learning neuron with traditional machine learning methods. To ensure a fair comparison, we selected two machine learning algorithms of completely different nature: a decision tree and a convolutional neural network. All the algorithms were trained on the same binary signal data from the input nodes, with each emulation step serving as a learning example. The target value was binary, labeling target periods. The machine learning algorithms were applied to the same data as our learning neuron (the first 1 400 000 steps) and created scoring models that return the probability that the current step belongs to the target period. The steps at which the returned score exceeded a certain threshold (determined through optimization) were treated analogously to neuron firings. The prediction periods in the last 600 000 simulation steps were determined using the rule described at the beginning of Section 2, and the \(R\) value was calculated. The optimum threshold score value was found from the requirement that the \(R\) value for the first 1 400 000 steps should be maximal. The decision tree algorithm used the information gain split criterion. The network included 2 convolutional ReLU layers.
The maximum \(R\) value obtained by the decision tree is 0.742; the convolutional network gives an \(R\) equal to 0.731. The proximity of the results shown by very different methods supports the validity of our approach to determining the theoretical limit of the \(R\) value in our problem. Thus, an estimate of this limit equal to 0.75 seems realistic.
Although the result achieved by our neuron (74% of the theoretical maximum) might be considered modest, we regard our model as successful. Indeed, our neuron is very simple. Considered as a predictive model, it contains only 133 degrees of freedom. In contrast, the decision tree model includes 51 levels and 403 non-terminal nodes. In essence, the function of our neuron is similar to the conjunction of logical values corresponding to a few of its strongest synapses (see below). The coincidence of these factors is treated as a cause of the target event. However, it is evident that obtaining a reward in our example may be preceded by several significantly different conditions. Therefore, the true condition is rather a disjunction of several conjunctions. We propose that a network of the described neurons could yield much more precise reward predictions, and we will explore the potential structure of such a network below.
Let's examine the learning process and its outcomes. The dynamics of the neuron activity, its stability and the total weight change (the sum of absolute values of the individual synaptic weight changes) are represented on Fig. 1. Since the original weights of all plastic synapses were 0, only the dopamine plasticity worked at first. During the first 100 seconds the neuron did not fire. After 250 seconds, its firing frequency stabilized. Due to the weight stabilization mechanism (notice that the neuron's stability grows almost linearly), the synaptic weights did not change after 600 seconds. In reality, the learning process took 600 seconds instead of the planned 1400 seconds.
The learning results are presented in Fig. 2, which depicts the values of the synaptic resources of the learning neuron at the 2000\({}^{\text{th}}\) second. The leftmost plot corresponds to the 30 input nodes coding the ball X coordinate. The vertical axis of all plots except the rightmost one displays the synaptic resource value. The second plot corresponds to the 30 input nodes coding the Y coordinate of the ball (the blue line) and the racket (the orange line). The next two plots represent the 9+9 input nodes coding the horizontal and vertical components of the ball velocity. The rightmost plot shows the color-coded values of the synaptic resources of the 25 input nodes indicating the location of the ball within a 5x5 grid that moves with the racket. The distribution of synaptic resource values in these plots appears reasonable and in line with expectations.
Figure 1: Dynamics of firing frequency, stability and weight changes of the learning neuron.
Figure 2: Synaptic resources of the trained neuron.
In conclusion, we can assert that our neuron is capable of detecting causal relationships between observed events. However, as previously discussed, a single neuron, even when adequately trained, is not an extremely precise predictor of the imminent occurrence of the target event. This is because the target event may be triggered by several significantly different precursors. To achieve more accurate predictions, a network of neurons of this kind is necessary.
We can propose a hypothesis regarding the potential architecture of such a network (see Fig. 3). In this network, the neurons recognizing different causes of the target event (the blue L rings) enter the columnar structure where each column corresponds to one separate cause. These columns engage in competition to recognize events-causes using lateral inhibitory connections between their "winner-takes-all" (WTA) neurons. The firing ("winning") WTA neuron blocks not only the other WTA neurons but also the GATE neurons in the other columns. If one L neuron fires then the other L neurons should not fire because L neurons should recognize different causal links. If some L neuron fires after the winner it will not receive a reward signal because this signal will not pass through the GATE blocked by the winner. The synapses which caused its firing will be suppressed by anti-Hebbian plasticity and this situation will not repeat next time. If the winning neuron fired correctly it will be rewarded since its GATE is not blocked. We believe that this architectural approach has the potential to recognize complex networks of causal relationships, and we plan to explore and test it in our future research.
## 4 Conclusion
In conclusion, the ability to discern causal relationships within data streams is a fundamental attribute for any learning system engaging with the real world. In this research, we have showcased the realization of this critical function at the level of an individual neuron, facilitated by a novel combination of anti-Hebbian and dopamine plasticity mechanisms. Given the pivotal role of such mechanisms in the context of implementing reinforcement learning within Spiking Neural Networks (SNNs), we tested our model on a simplified RL problem, namely, the ATARI ping-pong computer game. Our findings, including an
Figure 3: The possible structure including several neurons recognizing causal links for prediction of the target event which may be a consequence of several significantly different causes. The blue arrows are excitatory connections, the black arrows are blocking, the dashed arrows are reward.
estimation of the theoretical upper limit for prediction accuracy in this task, underscore the efficiency of the proposed neuron model in identifying causal links.
Furthermore, we introduced an SNN architecture featuring neurons of the aforementioned type, with the capacity to infer networks of causal relationships directly from raw data. In the forthcoming research, we aim to rigorously test and fine-tune this SNN framework. Additionally, our ongoing investigations will focus on augmenting the capabilities of this SNN to capture temporal aspects of causal relationships, shifting from the question "What are the potential consequences of given events?" to the more nuanced query of "When are these consequences likely to materialize?" This research represents a significant step toward enhancing the capacity of artificial neural networks to model and understand complex causal dynamics in real-world environments.
|
2309.10239 | Ditto: An Elastic and Adaptive Memory-Disaggregated Caching System | In-memory caching systems are fundamental building blocks in cloud services.
However, due to the coupled CPU and memory on monolithic servers, existing
caching systems cannot elastically adjust resources in a resource-efficient and
agile manner. To achieve better elasticity, we propose to port in-memory
caching systems to the disaggregated memory (DM) architecture, where compute
and memory resources are decoupled and can be allocated flexibly. However,
constructing an elastic caching system on DM is challenging since accessing
cached objects with CPU-bypass remote memory accesses hinders the execution of
caching algorithms. Moreover, the elastic changes of compute and memory
resources on DM affect the access patterns of cached data, compromising the hit
rates of caching algorithms. We design Ditto, the first caching system on DM,
to address these challenges. Ditto first proposes a client-centric caching
framework to efficiently execute various caching algorithms in the compute pool
of DM, relying only on remote memory accesses. Then, Ditto employs a
distributed adaptive caching scheme that adaptively switches to the best-fit
caching algorithm in real-time based on the performance of multiple caching
algorithms to improve cache hit rates. Our experiments show that Ditto
effectively adapts to the changing resources on DM and outperforms the
state-of-the-art caching systems by up to 3.6x in real-world workloads and 9x
in YCSB | Jiacheng Shen, Pengfei Zuo, Xuchuan Luo, Yuxin Su, Jiazhen Gu, Hao Feng, Yangfan Zhou, Michael R. Lyu | 2023-09-19T01:27:17Z | http://arxiv.org/abs/2309.10239v1 | # Ditto: An Elastic and Adaptive Memory-Disaggregated Caching System
###### Abstract.
In-memory caching systems are fundamental building blocks in cloud services. However, due to the coupled CPU and memory on monolithic servers, existing caching systems cannot elastically adjust resources in a resource-efficient and agile manner. To achieve better elasticity, we propose to port in-memory caching systems to the disaggregated memory (DM) architecture, where compute and memory resources are decoupled and can be allocated flexibly. However, constructing an elastic caching system on DM is challenging since accessing cached objects with CPU-bypass remote memory accesses hinders the execution of caching algorithms. Moreover, the elastic changes of compute and memory resources on DM affect the access patterns of cached data, compromising the hit rates of caching algorithms. We design Ditto, the first caching system on DM, to address these challenges. Ditto first proposes a _client-centric caching framework_ to efficiently execute various caching algorithms in the compute pool of DM, relying only on remote memory accesses. Then, Ditto employs a _distributed adaptive caching scheme_ that adaptively switches to the best-fit caching algorithm in real-time based on the performance of multiple caching algorithms to improve cache hit rates. Our experiments show that Ditto effectively adapts to the changing resources on DM and outperforms the state-of-the-art caching systems by up to 3.6\(\times\) in real-world workloads and 9\(\times\) in YCSB benchmarks.
Footnote †: Work mainly done during the internship at Huawei Cloud.
lack of a centralized hotness monitor on data paths. Selecting eviction victims becomes inefficient since caching data structures have to be maintained with multiple high-latency remote memory accesses by clients, where data accesses are executed. Moreover, supporting various caching algorithms for different workloads [11, 58] is even more difficult on DM since caching algorithms evict objects with specified rules and tailored data structures [33, 73].
#### 2.0.2 Adjusting resources affects hit rates of caching algorithms
Hit rates of caching algorithms relate to the data access patterns of workloads [74] and the cache size [59]. On DM, both attributes change on dynamical resource adjustments. The data access pattern changes with the number of concurrent clients (_i.e._, compute resources), and the cache size changes with the allocated memory spaces (_i.e._, memory resources). As a result, the best caching algorithm that maximizes hit rate changes dynamically with resource settings. Caching systems with fixed caching algorithms cannot adapt to these dynamic features of DM and can lead to inferior hit rates.
We address these challenges with Ditto1, an elastic and adaptive caching system on DM. First, we propose a client-centric caching framework with _distributed hotness monitoring_ and _sample-based eviction_ to address the challenges of executing caching algorithms on DM. The distributed hotness monitoring uses one-sided RDMA verbs to record the access information from distributed clients in the compute pool, uses eviction priority to formally describe object hotness, and assesses objects' eviction priorities by applying priority calculation rules on the recorded access information. The sample-based eviction scheme selects eviction victims by sampling multiple objects and selecting the one with the lowest priority on the client side without maintaining remote data structures [58]. Since the key difference among caching algorithms is their definitions of eviction priorities, various caching algorithms can be integrated by defining tailored priority calculation rules with little coding effort. Second, we propose a distributed adaptive caching scheme to address the challenge of dynamic resource change. Ditto simultaneously executes multiple caching algorithms with the client-centric caching framework and uses regret minimization [26, 27, 85], an online machine learning algorithm, to perceive their performance and select the best one in the current resource setting.
Footnote 1: Ditto is a Pokémon that can arbitrarily change its appearance.
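To illustrate the idea behind the client-centric framework described above, the sketch below expresses caching algorithms as pluggable eviction-priority rules and selects a victim by sampling a few objects; it is a single-process toy with a local dictionary standing in for the RDMA-maintained per-object metadata, and the field names are assumptions rather than Ditto's actual layout.

```python
import random
import time

# Each caching algorithm is a priority rule over per-object access metadata;
# the object with the lowest priority among the sampled candidates is evicted.
PRIORITY_RULES = {
    "LRU": lambda meta: meta["last_access"],   # least recently used first
    "LFU": lambda meta: meta["freq"],          # least frequently used first
}

def record_access(cache, key):
    """Update the hotness metadata of an object on each access."""
    meta = cache[key]
    meta["last_access"] = time.time()
    meta["freq"] = meta.get("freq", 0) + 1

def sample_based_evict(cache, rule="LRU", sample_size=5):
    """Pick an eviction victim by sampling instead of maintaining a global
    ordering structure such as an LRU list."""
    candidates = random.sample(list(cache.keys()), min(sample_size, len(cache)))
    priority = PRIORITY_RULES[rule]
    return min(candidates, key=lambda k: priority(cache[k]))
```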
We implement Ditto and evaluate its performance with both synthesized and real-world workloads [37, 67, 83]. Ditto is more elastic than Redis regarding resource efficiency and the speed of resource adjustments. On YCSB and real-world workloads, Ditto outperforms CliqueMap [66], the state-of-the-art key-value cache, by up to 9\(\times\) and 3.6\(\times\), respectively. Moreover, Ditto can flexibly extend 12 widely-used caching algorithms with 12.5 lines of code (LOC) on average. The implementation of Ditto is open-source2.
Footnote 2: [https://github.com/dmemsys/Ditto.git](https://github.com/dmemsys/Ditto.git).
The contributions of this paper include the following:
* We identify the elasticity benefits and challenges of constructing caching systems on DM and propose Ditto, the first caching system on DM.
* We propose a client-centric caching framework where various caching algorithms can be integrated flexibly and executed efficiently on DM. A sample-friendly hash table and a frequency counter cache are designed to improve the efficiency of the framework on DM.
* We propose distributed adaptive caching to provide high hit rates by selecting the best caching algorithm according to the dynamic resource change and various data access patterns on DM. A lightweight eviction history and a lazy weight update scheme are designed to efficiently achieve adaptivity on DM.
* We implement Ditto and evaluate it with various workloads. Ditto outperforms the state-of-the-art approaches by up to 9\(\times\) under YCSB synthetic workloads and up to 3.6\(\times\) under real-world workloads.
## 2 Background and Motivation
### Issues of Caching Systems on Monolithic Servers
There are two issues with existing caching systems on monolithic servers when they adjust resources.
_1) Resource inefficiency._ Resources of existing caching services on monolithic servers, _e.g._, ElastiCache [22], are allocated as fixed-size virtual machines (VMs) with both CPU and memory, _e.g._, 1 CPU with 2 GB DRAM, to facilitate resource management in monolithic-server-based datacenters. Resources are wasted when coupled CPU and memory are allocated, but only CPU or memory needs to be dynamically increased. Moreover, applications' demands on resources must be rounded up to fit in these fixed-size VMs, causing low resource utilization in the entire datacenter [8].
_2) Slow resource adjustments._ Existing in-memory caching systems shard data to multiple VMs to leverage more CPU and memory resources [22, 28, 43, 58]. Cached data have to be resharded and migrated when new VMs are added to the caching cluster. The migration cost [23] is unavoidable when
Figure 1: The performance of Redis when adjusting resources.
either CPU or memory needs to be adjusted due to the coupled allocation of CPU and memory on monolithic servers. The performance gain when increasing resources and the resource reclamation after shrinking resources are delayed for minutes due to the time-consuming data migration (Shi et al., 2017). Moreover, the throughput drops and latency increases due to the consumption of additional CPU cycles and network bandwidth spent on moving data (Shi et al., 2017; Wang et al., 2018).
Figure 1 shows the migration cost on Redis (Wang et al., 2018), the back-end of many cloud caching services (Krishnan et al., 2018; Wang et al., 2018), during resource adjustments under the read-only YCSB-C workload (Krishnan et al., 2018) with 10 million 256B key-value pairs. We first use 32 Redis nodes, each with 1 CPU core and 1 GB DRAM, then add 32 more nodes after 3 minutes of execution, and shrink back to 32 nodes after 3 minutes of stable execution with 64 nodes. We launch all 64 Redis nodes and idle 32 of them initially to rule out the cost of starting Redis nodes. We use 512 client threads to get the maximum throughput. When scaling to 64 nodes, Redis takes 5.3 minutes to migrate data. The throughput drops up to 7%, and the 99th percentile latency increases up to 21% in the process. When shrinking back to 32 nodes, the resource reclamation is delayed for 5.6 minutes due to data migration. Such migration cost is unavoidable even if using advanced migration techniques (Krishnan et al., 2018; Wang et al., 2018) since CPU and memory are allocated in a coupled manner in VMs and objects are sharded to individual VMs.
### Disaggregated Memory
Disaggregated memory (DM) is proposed to reduce the total cost of ownership (TCO) and improve the elasticity of applications on cloud datacenters (Shi et al., 2017; Wang et al., 2018; Wang et al., 2018). It decouples compute and memory resources of monolithic servers into autonomous compute and memory pools. The compute pool contains compute nodes (CNs) with abundant CPU cores and a small amount of DRAM serving as run-time caches. The memory pool holds memory nodes (MNs) with adequate memory and a controller with weak compute power (_e.g._, 1 - 2 CPU cores) to execute management tasks, _i.e._, network connection and memory management. CNs and MNs are connected with CPU-bypass interconnects with high bandwidth and microsecond-scale latency, _e.g._, RDMA and CXL (Wang et al., 2018), ensuring the performance requirements of memory accesses. CNs can allocate and free variable-sized memory blocks in the memory pool through the ALLOC and FREE interfaces provided by the controller. Without loss of generality, in this paper, we assume that CNs access MNs through one-sided RDMA verbs, _i.e._, READ, WRITE, ATOMIC_CAS (compare and swap), and ATOMIC_FAA (fetch and add).
The decoupled compute and memory resources of DM address the resource efficiency and elasticity issues of existing caching systems. First, with DM, compute and memory resources can be allocated separately in a fine-grained manner (Wang et al., 2018). Resources can be used more efficiently by assigning the exact amount of resources as per application demands. Second, the frequency of data migration can be greatly reduced. Specifically, caching systems on DM do not need to migrate data when expanding or reducing memory since the cached data in the memory pool can be accessed by all CNs in the compute pool. Only in some special cases, _e.g._, when the network bandwidth of an MN becomes the performance bottleneck due to skewed workloads, does data migration happen to achieve better load balancing. As a result, the migration cost can be eliminated in most cases, allowing resource adjustments to take effect agilely without performance losses.
## 3. Challenges
### Executing Caching Algorithms on DM
Existing caching algorithms are designed for _server-centric_ caching systems on monolithic servers where all data are accessed and evicted by the server-side CPUs in a centralized manner. Such a setting, however, no longer holds on DM because 1) caching systems on DM are _client-centric_, where clients directly access and evict the cached data in a CPU-bypass manner, and 2) the compute power in the memory pool of DM is too weak to execute caching algorithms on the data path. Two problems need to be addressed to execute caching algorithms on DM.
The first problem is how to evaluate the hotness of cached objects in the _client-centric_ setting. Existing caching algorithms assess objects' hotness by monitoring and counting all data accesses (Beng et al., 2016; Wang et al., 2018; Wang et al., 2018). The monitoring can be trivially achieved on server-centric caching systems since the CPUs of monolithic caching servers access all data. However, on DM, accesses to cached objects cannot be monitored either in the memory pool or on clients because 1) RDMA bypasses the CPUs in the memory pool, and 2) individual clients in the compute pool are not aware of global data accesses.
The second problem is how to efficiently select eviction victims on the client side. Caching algorithms maintain various caching data structures, _e.g._, lists (Wang et al., 2018), heaps (Beng et al., 2016; Wang et al., 2018), and stacks (Krishnan et al., 2018), to reflect the hotness of cached objects and select eviction victims based on these data structures. The data structures are maintained by the CPUs of caching servers on each data access since each access changes object hotness. However, on DM, the maintenance of caching data structures has to be executed by clients in the compute pool since clients directly access objects with one-sided RDMA verbs. Maintaining these data structures thus becomes inefficient due to the multiple RTTs required on the critical path. Besides, locks are required to ensure the correctness of caching data structures under concurrent accesses (Rendle et al., 2017). The throughput of caching systems will be severely bottlenecked by the microsecond-scale lock latency and the network contention caused by iteratively retrying on lock failures (Rendle et al., 2018).

Figure 2. The cost of maintaining caching data structures on DM.
To show the problem of maintaining caching data structures, we compare the performance of a linked-list-based LRU key-value cache (KVC), a key-value cache with sharded LRU lists (KVC-S), and a key-value store (KVS) on DM (Kev and Vapnik, 2017) under the read-only YCSB-C benchmark (Kev and Vapnik, 2018). All approaches use a lock-free hash table to index cached objects. KVC maintains a lock-protected linked list to execute LRU. KVC-S shards the LRU list into 32 sub-lists to avoid lock contention and sleeps 5 us on lock failures to reduce the wasted RDMA requests on lock failures. Figure 2(a) shows the throughput and latency of the three approaches with a single client, ruling out lock contention. The throughput of KVC and KVC-S is only 23% of that of KVS, and the tail latency is more than 4.5\(\times\) higher due to the additional RDMA operations on the critical path of data accesses. Figure 2(b) shows their throughput with growing numbers of client threads. The throughputs of KVC and KVC-S drop with more than 32 client threads because the RNIC of the MN is overwhelmed by the useless RDMA_CASes on lock-fail retries. The throughput of KVC-S drops more mildly due to the 5 us backoff on lock failures.
### Dynamic Resource Changes Affect Hit Rate
Hit rates of caching algorithms closely relate to the data access patterns and the cache size (Kumar et al., 2019). However, both aspects are affected when dynamically adjusting compute and memory resources, so the best caching algorithm that maximizes the hit rate changes accordingly. Since DM enables resources to be adjusted swiftly and frequently, the effect of changing resource settings is amplified. Caching systems with fixed caching algorithms cannot adapt to these dynamic features of DM and can lead to inferior hit rates.
_1) Changing compute resources affects hit rates_. On caching systems on DM, applications execute multiple client threads on CPU cores in the compute pool to access cached data in the memory pool. The access pattern on cached objects is the mixture of access patterns of all applications. The change in compute resources, _i.e._, the number of client threads of an application, alters the overall mixture of access patterns and affects the hit rate of individual caching algorithms in two ways.
First, the percentage of the data accesses of an application in the mixture changes with the number of client threads. The overall access pattern on the cached objects thus changes since applications have dissimilar access patterns (Kev and Vapnik, 2017). Figure 3 shows the simulation result on a single machine with two applications under varying numbers of client threads. One application executes an LRU-friendly workload and the other executes an LFU-friendly one from the FIU block trace (Rendle et al., 2018). The hit rates of LRU and LFU are affected by the change of the compute resources in applications, where LFU exhibits a better hit rate when the LFU-friendly application has more compute resources and vice versa.
Second, the number of concurrent clients in an application changes the original access pattern of a workload due to concurrent executions. We simulate on 74 real-world workloads from Twitter (Rendle et al., 2019) and FIU (Rendle et al., 2018) with numbers of clients ranging from 1 to 512. Figure 1(a) shows the cumulative distribution function (CDF) of the relative hit rate change in these workloads. The relative hit rate change is calculated as \(\frac{h_{max}-h_{min}}{h_{max}}\), where \(h_{max}\) and \(h_{min}\) are the highest and lowest hit rates of a workload under different numbers of clients. As we increase the number of client threads, 80% of workloads have 60% hit rate change in LRU and 21% in LFU. Meanwhile, the best caching algorithms on 36% of workloads change with the varying number of concurrent clients. Figure 1(b) shows an example FIU trace where the hit rate of LFU performs better with a small number of concurrent clients but becomes inferior to LRU when the number of clients increases.
_2) Changing memory resources affects hit rates_. Changing memory resources leads to changing cache sizes of caching systems on DM. For individual workloads, the best caching algorithm that maximizes the hit rate changes with cache sizes (Kumar et al., 2019), _e.g._, one workload can be LRU-friendly with a small cache size but becomes LFU-friendly under bigger cache sizes. Our simulation finds that the best algorithm changes in 22 of the 74 real-world workloads when the cache size changes. Figure 4 shows an example FIU trace where LRU performs better with small caches and LFU performs better with larger cache sizes.
Consequently, it is necessary for caching systems on DM to dynamically select the best caching algorithm according to the changing resource settings. However, achieving adaptivity is difficult on DM due to its decentralized and distributed nature, as we will introduce in § 4.3.
## 4. The Ditto Design
### Overview
Figure 6 shows the overall architecture of Ditto. Ditto adopts a hash table to organize objects stored in the memory pool. The hash table stores pointers to the addresses of the cached objects. Following existing architectures of storage systems on DM [65; 66], applications execute on CNs and each application owns a local Ditto client as a subprocess. Each Ditto client has multiple threads on dedicated cores, and applications communicate with Ditto clients through local shared memory to execute _Get_ and _Set_ operations. Under this architecture, applications can freely scale compute resources by changing the number of threads and CPU cores assigned to Ditto. The adjustment of compute resources is independent of the cached data because there is no need to increase or decrease the cache size in the memory pool when adding or reducing CPU cores.
Ditto clients execute _Get_ and _Set_ operations with one-sided RDMA verbs similar to RACE hashing [88], the state-of-the-art hashing index on DM. For _Get_s, a client searches the address of the cached object in the hash table and fetches the object from the address with two RDMA_READs. For _Set_s, a client searches the slot of the cached object in the hash table with an RDMA_READ, writes the new object to a free location with an RDMA_WRITE, and atomically modifies the pointer in the slot with an RDMA_CAS.
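The following is a minimal, single-process sketch of these two paths. The mock pool below stands in for the memory pool so the snippet is self-contained; each mock method represents one one-sided RDMA verb (and hence one round trip). All names here are illustrative and not Ditto's actual code.

```cpp
#include <cstdint>
#include <optional>
#include <string>
#include <unordered_map>

// Local mock of the memory pool; each method models one one-sided RDMA verb.
struct MockPool {
  std::unordered_map<uint64_t, std::string> objects;  // obj_addr -> value
  std::unordered_map<std::string, uint64_t> slots;    // key -> obj_addr (slot)
  uint64_t next_addr = 1;

  uint64_t read_slot(const std::string& key) {        // RDMA_READ on the slot
    auto it = slots.find(key);
    return it == slots.end() ? 0 : it->second;
  }
  std::string read_object(uint64_t addr) { return objects.at(addr); }  // RDMA_READ
  uint64_t write_object(const std::string& v) {       // RDMA_WRITE to a free block
    objects[next_addr] = v;
    return next_addr++;
  }
  bool cas_slot(const std::string& key, uint64_t expected, uint64_t desired) {
    uint64_t& cur = slots[key];                        // RDMA_CAS on the slot
    if (cur != expected) return false;
    cur = desired;
    return true;
  }
};

// Get: slot lookup + object fetch, i.e., two RDMA_READs.
std::optional<std::string> Get(MockPool& p, const std::string& key) {
  uint64_t addr = p.read_slot(key);
  if (addr == 0) return std::nullopt;
  return p.read_object(addr);
}

// Set: slot lookup (READ), write object (WRITE), publish the pointer (CAS).
bool Set(MockPool& p, const std::string& key, const std::string& value) {
  uint64_t old_addr = p.read_slot(key);
  uint64_t new_addr = p.write_object(value);
  return p.cas_slot(key, old_addr, new_addr);
}

int main() {
  MockPool pool;
  Set(pool, "key", "value");
  return Get(pool, "key").has_value() ? 0 : 1;
}
```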
Ditto proposes a client-centric caching framework (§ 4.2) and a distributed adaptive caching scheme (§ 4.3) to achieve cache eviction on DM. The client-centric caching framework efficiently executes multiple caching algorithms on DM by selecting multiple eviction candidates of various caching algorithms. The distributed adaptive caching scheme uses machine learning to learn the characteristics of the current data access pattern and evicts the candidate selected by the caching algorithm that performs the best.
### Client-Centric Caching Framework
The client-centric caching framework addresses the challenges of evaluating object hotness and selecting eviction candidates when executing caching algorithms on DM.
First, to assess the hotness of cached objects, Ditto records objects' access information and decides objects' hotness by defining and applying priority functions to the recorded access information. Specifically, Ditto associates each object with a small piece of metadata recording its global access information, _e.g._, access timestamps, frequency, etc. The metadata is updated collaboratively by clients with one-sided RDMA verbs after each _Get_ and _Set_. On the client side, Ditto offers two interfaces to integrate caching algorithms, _i.e._, priority functions (double priority(Metadata)) and metadata update rules (void update(Metadata)). A priority function maps the metadata of an object to a real value indicating its hotness. Since the key difference between caching algorithms is their definition of object hotness, various caching algorithms can be integrated by defining different priority functions with the priority interface. To allow as many algorithms as possible to be integrated simply through the priority interface, we summarize the access information commonly used by existing caching algorithms [54] in Table 1 and record them in Ditto by default.
For advanced caching algorithms that require more access information, we allow algorithms to extend and define their own rules to update the metadata with the update interface. Listing 1 shows an example implementation of LRU-K [52]. LRU-K evicts objects with the smallest timestamp at its last K\({}^{th}\) access. We split the _last_ts_ field into K timestamps with lower precision and construct a ring buffer with the \(freq\) counter. If the object is accessed more than K times, then its priority is its last K\({}^{th}\) access timestamp, which is indexed by \((freq-K+1)\) mod \(K\). Otherwise, we return the \(insert\_ts\) of the object to achieve FIFO eviction in the access buffer [33]. We resort to storing the modified timestamp of LRU-K with cached objects if we need to simultaneously execute LRU-K with caching algorithms that rely on _last_ts_, _e.g._, LRU.
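The original Listing 1 is not reproduced in this text; the sketch below is our reconstruction of how algorithms plug into the two interfaces, following the description above. The `Metadata` fields mirror Table 1, and the choice `K = 2` and the field layout are assumptions made only for illustration.

```cpp
#include <cstdint>

// Default per-object access information, mirroring Table 1.
struct Metadata {
  uint64_t insert_ts = 0;  // insert timestamp
  uint64_t last_ts = 0;    // last access timestamp
  uint64_t freq = 0;       // number of accesses
  uint32_t size = 0;       // object size
};

// LRU: the hottest object is the most recently accessed one.
double lru_priority(const Metadata& m) { return static_cast<double>(m.last_ts); }

// LFU: the hottest object is the most frequently accessed one.
double lfu_priority(const Metadata& m) { return static_cast<double>(m.freq); }

// LRU-K (here K = 2): last_ts is split into K lower-precision timestamps that
// form a ring buffer driven by the freq counter.
struct LrukMetadata {
  static constexpr uint64_t K = 2;
  uint64_t insert_ts = 0;
  uint64_t freq = 0;
  uint32_t last_ts[K] = {0, 0};  // ring buffer of recent access timestamps
};

// Metadata update rule: record this access in the ring-buffer slot freq mod K.
void lruk_update(LrukMetadata& m, uint32_t now) {
  m.freq += 1;
  m.last_ts[m.freq % LrukMetadata::K] = now;
}

// Priority: the timestamp of the K-th most recent access, indexed by
// (freq - K + 1) mod K; before K accesses, fall back to FIFO on insert_ts.
double lruk_priority(const LrukMetadata& m) {
  if (m.freq >= LrukMetadata::K) {
    return static_cast<double>(
        m.last_ts[(m.freq - LrukMetadata::K + 1) % LrukMetadata::K]);
  }
  return static_cast<double>(m.insert_ts);
}
```

Since eviction picks the object with the lowest priority, these functions return smaller values for colder objects, which matches the eviction rules of LRU, LFU, and LRU-K.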
Second, to efficiently select eviction candidates of various caching algorithms on DM, Ditto adopts sampling with client-side priority evaluation. The overhead of maintaining expensive caching data structures is then avoided. Specifically, on each eviction, Ditto randomly samples \(K\) objects in the cache and applies the defined priority functions to the access information of the sampled objects. The eviction victim is approximated as the object with the lowest priority among \(K\) sampled objects.
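A sketch of this sampling-based selection is shown below; the priority function is passed in so that any algorithm integrated through the interfaces can be used, and iterating over a plain vector stands in for fetching sampled slots from the remote hash table.

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <random>
#include <vector>

// Minimal stand-in for the access information attached to each cached object.
struct Metadata {
  uint64_t last_ts = 0;
  uint64_t freq = 0;
};

// Approximate the eviction victim: sample k cached objects uniformly at random
// and return the index of the sampled object with the lowest priority.
// Assumes the cache is non-empty (evictions only happen on a full cache).
size_t select_victim(const std::vector<Metadata>& cache,
                     const std::function<double(const Metadata&)>& priority,
                     size_t k, std::mt19937& rng) {
  std::uniform_int_distribution<size_t> pick(0, cache.size() - 1);
  size_t victim = pick(rng);
  double lowest = priority(cache[victim]);
  for (size_t i = 1; i < k; ++i) {
    size_t cand = pick(rng);
    double p = priority(cache[cand]);
    if (p < lowest) {
      lowest = p;
      victim = cand;
    }
  }
  return victim;
}
```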
To efficiently execute the framework on DM, Ditto proposes a _sample-friendly hash table_ and a _frequency-counter cache_ to reduce the overhead of sampling objects and recording access information on DM.

| **Name** | **Description** | Global? | Stateful? |
| --- | --- | --- | --- |
| _size_ | Object size | ✓ | ✗ |
| _insert\_ts_ | Insert timestamp | ✓ | ✗ |
| _last\_ts_ | Last access timestamp | ✓ | ✗ |
| _freq_ | The number of accesses | ✓ | ✓ |
| _latency_ | Access latency | ✗ | ✗ |
| _cost_ | Cost to fetch the object from the storage server | ✗ | ✗ |

Table 1. The recorded access information.

Figure 6. The overview of Ditto.
#### 4.2.1. Sample-friendly hash table
The sample-friendly hash table reduces the overhead of recording access information and sampling objects on DM. Specifically, sampling objects on DM suffers from high access latency because multiple RDMA_READs are required to fetch the metadata of objects scattered in the memory pool. Moreover, updating access information affects the overall throughput because these additional RDMA operations consume the bounded message rate of RNICs in the memory pool.
The sample-friendly hash table co-designs the sampling process with the hash index to address these two problems. First, instead of storing all metadata together with objects, Ditto stores the most widely used metadata (_i.e._, the default ones) together with the slots in the hash index but retains the metadata extensions required by advanced caching algorithms in objects. With the co-designed hash table, sampling can be conducted with only one RDMA_READ by directly fetching continuous slots with a random offset in the hash table. Second, Ditto reduces the number of RDMA operations on updating object metadata by organizing access information according to their update frequency. The well-organized access information enables multiple access information to be updated with a single RDMA_WRITE.
**Hash table structure.** Figure 7 shows the structure of the sample-friendly hash table. The hash table has multiple buckets with multiple slots. Each slot consists of two parts, _i.e._, an atomic field and a metadata field. The atomic field is similar to the slot of Race Hashing (Sandel, 2017), which is 8-byte in length and modified atomically with RDMA_CASes when objects are inserted, updated, or deleted. The atomic field contains a 6-byte _pointer_ referring to the address of the object, a 1-byte _fp_ (fingerprint) accelerating object searching, and a 1-byte _size_ recording the size of the stored object. Similar to RACE hashing (Sandel, 2017), we use a 1-byte size field and measure the sizes of objects at the granularity of 64B memory blocks. For large objects, Ditto stores the remaining data in a second memory block that links to the first one. The metadata field records the access information required by most caching algorithms, as summarized in Table 1. An additional _hash_ field is recorded for the distributed adaptive caching scheme, which will be discussed in § 4.3.
**Access information organization.** Ditto organizes the stored access information in two ways to reduce the number of RDMA operations on metadata updates. First, Ditto reduces the amount of access information that has to be included in the metadata by distinguishing local and global information. Global information has to be maintained collaboratively by all clients and thus must be included in the metadata. Local information can be decided locally by distributed clients and hence does not need to be included. The _latency_ and _cost_ are local information because we assume that the latency and cost are approximately the same among clients and can be estimated based on the size of objects and the latency and cost of accessing other objects. Second, global information is further classified into stateless and stateful information. Stateless information is updated by overwriting its old value, while stateful information is updated based on its old value. For instance, the _insert_ts_ and _last_ts_ are stateless because the old timestamps are no longer useful. The _freq_ is stateful because it is always updated to increase by 1. Ditto groups the stateless information together in the metadata so that they can be updated with a single RDMA_WRITE. The stateful information is updated with RDMA_FAAs.
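To visualize the slot layout and this grouping, here is a rough sketch of a slot; the bit layout is our own illustration of the sizes given above (6-byte pointer, 1-byte fp, 1-byte size, plus the metadata field) and is not a byte-accurate reproduction of Ditto's format.

```cpp
#include <cstdint>

// The 8-byte atomic field, modified only with RDMA_CAS.
struct AtomicField {
  uint64_t pointer : 48;  // address of the object in the memory pool (6 bytes)
  uint64_t fp : 8;        // fingerprint to accelerate object searching
  uint64_t size : 8;      // object size, in 64B memory blocks
};

// The metadata field stored next to the atomic field in the slot.
// Stateless fields are grouped so a single RDMA_WRITE can overwrite them
// together; the stateful freq counter is updated in place with RDMA_FAA.
struct SlotMetadata {
  // --- stateless (overwritten as a group by one RDMA_WRITE) ---
  uint64_t insert_ts;
  uint64_t last_ts;
  uint64_t hash;  // used by the distributed adaptive caching scheme
  // --- stateful (updated in place with RDMA_FAA) ---
  uint64_t freq;
};

struct Slot {
  AtomicField atomic;     // 8 bytes, CAS-ed on insert/update/delete
  SlotMetadata metadata;  // access information used by priority functions
};

static_assert(sizeof(AtomicField) == 8, "atomic field must stay 8 bytes");
```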
#### 4.2.2. Frequency-counter cache
A client-side frequency-counter (FC) cache is proposed to further reduce the overhead of updating metadata. With the sample-friendly hash table, updating metadata still requires two RDMA operations, _i.e._, an RDMA_WRITE to update the stateless information and an RDMA_FAA to update the stateful _freq_. These RDMA operations consume the message rate of the RNIC and thus limit the overall throughput of Ditto. Besides, executing RDMA_FAA on DM is expensive due to the contention in the internal locks of RNICs (Sandel, 2017). The FC cache aims to reduce the number of RDMA_FAAs on metadata updates.
The FC cache stems from the idea of write-combining on modern processors (Sandel, 2017). In modern processors, several write instructions in a short time window are likely to target the same memory region, _e.g._, a 64-byte cache line. The write combining scheme adopts a buffer to absorb writes to the same region in a short time window and convert them into a single memory write operation to save memory bandwidths.
Figure 7. The sample-friendly hash table structure.

Similar to write-combining, Ditto employs an FC cache as the write-combining buffer. The FC cache contains entries recording the object ID, the address of the slot in the hash table, and the delta value of the counter. We track the insert time of each cache entry to bound how long a frequency update can be buffered in the FC cache. The update to the remote frequency counter is deferred until a cache entry is evicted.
There are two situations when an entry will be evicted from the FC cache. First, if the space of the FC cache is full, the entry with the earliest insert timestamp will be evicted. Second, if the buffered delta value of an object is greater than a threshold \(t\), the entry will be evicted. On entry eviction, the buffered counter value is added to the slot metadata with a single RDMA_FAA according to the recorded slot address, reducing the number of RDMA_FAAs to as little as \(1/t\) of the original.
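Below is a small, self-contained sketch of this logic (buffer a delta per object, flush with one RDMA_FAA when the delta reaches the threshold \(t\) or when the oldest entry must make room); the flush callback stands in for the actual RDMA_FAA, and all names are ours.

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <map>
#include <unordered_map>

// Client-side frequency-counter cache: combines increments to the same
// object's freq counter and defers the remote RDMA_FAA until eviction.
class FcCache {
 public:
  using FlushFn = std::function<void(uint64_t slot_addr, uint64_t delta)>;

  FcCache(size_t capacity, uint64_t threshold, FlushFn flush)
      : capacity_(capacity), threshold_(threshold), flush_(std::move(flush)) {}

  // Record one access to the object indexed by the slot at slot_addr.
  void Add(uint64_t slot_addr) {
    auto it = entries_.find(slot_addr);
    if (it == entries_.end()) {
      if (entries_.size() == capacity_) EvictOldest();  // rule 1: cache full
      it = entries_.emplace(slot_addr, Entry{next_seq_++, 0}).first;
      order_.emplace(it->second.insert_seq, slot_addr);
    }
    if (++it->second.delta >= threshold_) Evict(it);    // rule 2: delta >= t
  }

 private:
  struct Entry { uint64_t insert_seq; uint64_t delta; };

  void Evict(std::unordered_map<uint64_t, Entry>::iterator it) {
    flush_(it->first, it->second.delta);                // one deferred RDMA_FAA
    order_.erase(it->second.insert_seq);
    entries_.erase(it);
  }
  void EvictOldest() { Evict(entries_.find(order_.begin()->second)); }

  size_t capacity_;
  uint64_t threshold_;
  FlushFn flush_;
  uint64_t next_seq_ = 0;
  std::unordered_map<uint64_t, Entry> entries_;  // slot_addr -> buffered delta
  std::map<uint64_t, uint64_t> order_;           // insert_seq -> slot_addr
};
```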
### Distributed Adaptive Caching
Adaptive caching on monolithic servers is proposed to adapt to changing data access patterns in real-world workloads. Ditto proposes a distributed adaptive caching scheme to adapt to both changing workloads and dynamic resource settings on DM. The key problem is how to achieve adaptive caching in a distributed and client-centric manner on DM.
Recent approaches on monolithic servers formulate adaptive caching as a multi-armed bandit (MAB) problem (Gupta et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). As shown in Figure 8, caching servers simultaneously execute multiple caching algorithms, named experts in MAB (Gupta et al., 2017). Each expert is associated with a weight, reflecting its performance in the current workload. The execution of the adaptive caching consists of an eviction and an adaptive process. During the eviction process, each expert proposes an eviction candidate according to its own caching data structures (1). Eviction victims are then decided opportunistically according to the weights of the experts (2), _i.e._, candidates of experts with higher weights are more likely to be evicted. The metadata of the evicted object, _i.e._, the object ID and the experts choosing it as a candidate, are inserted into a fixed-size FIFO queue named eviction history (3). During the adaptive process, existing approaches use _regret minimization_(Gupta et al., 2017; Wang et al., 2018; Wang et al., 2018) to adjust expert weights. Specifically, finding a missed object ID in the eviction history is a _regret_ because, intuitively, a more judicious eviction decision could have rectified the cache miss (Wang et al., 2018). Hence, when the missed object ID is found in the eviction history (4), the weights of experts deciding to evict the object are decreased (5).
Two challenges have to be addressed to achieve adaptive caching on DM. First, maintaining the global FIFO eviction history is expensive due to the high overhead of accessing remote data structures on DM, as mentioned in § 3. Second, managing expert weights on distributed clients is costly since clients need to be synchronized to get the updated weights.
The distributed adaptive caching scheme addresses these DM-specific challenges. First, Ditto evaluates multiple priority functions with the client-centric caching framework to simultaneously execute multiple caching algorithms on DM. Second, to avoid maintaining an additional FIFO queue on DM, Ditto embeds eviction history entries into the hash table with a lightweight eviction history (§ 4.3.1). Finally, to efficiently update and utilize expert weights on the client side, Ditto proposes a lazy weight update scheme to avoid the expensive synchronization among clients (§ 4.3.2).
#### 4.3.1. Lightweight eviction history
The eviction history on monolithic servers needs to maintain an additional FIFO queue and an additional hash index to organize and index history entries (Wang et al., 2018; Wang et al., 2018). The lightweight eviction history adopts two design choices to eliminate the overhead of maintaining these additional data structures on DM. First, it uses an _embedded history design_ that reuses the slots of the sample-friendly hash table to store and index history entries. No additional space needs to be allocated and no additional hash index needs to be constructed for history entries. Second, the lightweight eviction history proposes a _logical FIFO queue with a lazy eviction scheme_ to efficiently achieve FIFO replacement on history entries. No additional FIFO queue needs to be maintained to evict history entries.
**Embedded history entries.** Figure 9 shows the structure of an embedded history entry of the lightweight history. History entries are stored in the slots of the sample-friendly hash table with three differences. First, the _size_ stores a special value (_0xFF_) to tag the slot as a history entry. We use _0xFF_ instead of 0 since we use 0 to indicate empty slots. Second, the pointer field stores a 6-byte history ID instead of the address of the object. Finally, the history entry uses the _insert_ts_ of the slot to store a bitmap indicating which experts have decided to evict the object (_expert_bmap_). Besides, each entry stores the hash value of the evicted object ID in the _hash_ field to check if a missed object is contained in the eviction history. The hash value is written to the metadata when the object is inserted into the cache and will not be modified until its history entry is evicted from the FIFO eviction history.
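A rough sketch of how a slot can be re-tagged as a history entry is shown below, following the description above; the field widths reuse the illustrative slot shape from earlier and are not a byte-accurate reproduction of Ditto's format.

```cpp
#include <cstdint>

// Same illustrative slot shape as before: an 8-byte atomic field plus metadata.
struct AtomicField { uint64_t pointer : 48, fp : 8, size : 8; };
struct SlotMetadata { uint64_t insert_ts, last_ts, hash, freq; };
struct Slot { AtomicField atomic; SlotMetadata metadata; };

constexpr uint64_t kHistoryTag = 0xFF;  // size value reserved for history entries

bool is_history_entry(const Slot& s) { return s.atomic.size == kHistoryTag; }

// Re-tag the victim's slot as a history entry. In Ditto this happens remotely:
// an RDMA_CAS rewrites the atomic field and an asynchronous RDMA_WRITE stores
// the expert bitmap into the insert_ts field of the metadata.
void to_history_entry(Slot& s, uint64_t history_id, uint64_t expert_bitmap) {
  s.atomic.size = kHistoryTag;           // mark the slot as a history entry
  s.atomic.pointer = history_id;         // 6-byte history ID replaces the object address
  s.metadata.insert_ts = expert_bitmap;  // experts that chose this object as a victim
  // s.metadata.hash already holds the hash of the evicted object's ID
  // (written at insertion), so later misses can be matched against this entry.
}
```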
Figure 8. Adaptive caching on monolithic servers.

Figure 9. The structure of a lightweight history entry.

**The logical FIFO queue.** The logical FIFO queue simulates FIFO eviction without actually maintaining a FIFO queue on DM. It is constructed with a global history counter and the history IDs in history entries. The global history counter is a 6-byte circular counter that generates history IDs for new history entries. It is stored at an address in the memory pool known to all clients. The history IDs of history entries are acquired by atomically reading the global history counter and increasing it by one (_i.e._, atomic fetch-and-add). As shown in Figure 10, the global history counter and history IDs of history entries can be viewed as locations in a logical circular buffer with \(2^{48}\) entries. Combined with the size of the FIFO eviction history, the logical FIFO queue is then constructed, where the global history counter is the tail of the FIFO queue and the history IDs represent the locations of history entries in the queue.
Figure 11 shows the operations of the lightweight history:
_History insertion._ A client inserts a history entry when it decides to evict a victim object from the cache. The client first acquires a history ID by performing an RDMA_FAA on the global history counter, which atomically returns the current value of the counter and increases it by one. Then the client issues an RDMA_CAS to atomically modify the _size_ and the _pointer_ in the slot of the victim object to be _0xFF_ and the acquired history ID, respectively. The expert bitmap is then asynchronously written to the _insert_ts_ field of the slot metadata with an RDMA_WRITE.
_Lazy history eviction._ Ditto adopts a lazy eviction scheme to achieve FIFO eviction on history entries, _i.e._, expired history entries are kept in the history for a while before their evictions. To prevent clients from accessing expired history entries, Ditto proposes a client-side expiration checking mechanism. Suppose the global history counter is \(v_{1}\), the history ID is \(v_{2}\), and the size of the FIFO history is \(l\). If \(v_{1}>v_{2}\), the history entry is invalid when \(v_{1}-v_{2}>l\). Otherwise, the history entry is invalid if \(v_{1}+2^{48}-v_{2}>l\), considering the wrap-up of the 48-bit global history counter. The actual evictions happen when inserting new objects into the cache. As shown in Figure 11, when inserting new objects, the expired slots are considered empty slots and are overwritten to be ordinary slots, which transparently evicts the history entry.
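The client-side expiration check is simple enough to state directly in code; the snippet below assumes the 48-bit counter semantics described above, with names of our choosing.

```cpp
#include <cstdint>

// The global history counter and history IDs live in a 48-bit circular space.
constexpr uint64_t kCounterSpace = 1ULL << 48;

// Returns true if the history entry with ID `id` is expired, given the current
// global history counter `counter` and the FIFO history size `l`. An entry is
// valid only if it is among the `l` most recently issued history IDs.
bool history_entry_expired(uint64_t counter, uint64_t id, uint64_t l) {
  uint64_t age = (counter > id) ? counter - id
                                : counter + kCounterSpace - id;  // wrap-around
  return age > l;
}
```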
_Regret collection._ A regret is defined as a client finding an object to be missed in the cache but contained in the eviction history. The embedded history entry makes collecting regrets the same process as searching objects in the cache. When a client searches for an object, it calculates the hash value of the object ID, locates a bucket based on the hash value, and iteratively matches the slots in the bucket to see if the pointed object has the same object ID as the target. During the process, clients also match the hash value of the encountered history entries in the bucket. Regrets can then be collected if the object has not been found but a history entry has a matching hash value.
#### 4.3.2. Lazy expert weight update.
Ditto formulates the problem of cache replacement as MAB and uses regret minimization to dynamically adjust the weights of experts. When a regret is found, _i.e._, a missed object hits in the eviction history, the weights of the experts that evicted the object should be penalized. Suppose expert \(E_{i}\) made a bad eviction decision and the decision is the \(t\)-th entry in the eviction history. The weight of the expert is then updated to be \(w_{E_{i}}=w_{E_{i}}\cdot e^{-\lambda\cdot d^{t}}\), where \(\lambda\) is the learning rate and \(d^{t}\) is the penalty. The penalty \(d^{t}\) is related to the position of the entry in the FIFO history because an older regret should be penalized less, where \(d\) is a fixed discount rate3. The challenge of updating weights on DM is that regrets are no longer collected and expert weights are no longer used in a centralized manner by monolithic caching servers. Updating and using expert weights from distributed clients incurs non-negligible overhead due to the high synchronization overhead on DM [72].
Footnote 3: Similar to [74], the discount rate is \(0.005^{1/N}\), where \(N\) is the cache size.
The idea of the lazy weight update scheme is to let clients batch the regrets locally and offload the weight update lazily to the controllers of MNs. In this way, the frequency of updating weights is reduced and the overhead of synchronization is avoided. Meanwhile, the weak controller of memory nodes will not become a bottleneck due to the infrequent update.
Figure 10. The logical FIFO queue structure.

Figure 11. Inserting and evicting a history entry.

Figure 12. The lazy weight update scheme.

Figure 12 shows the process of the lazy expert weight update scheme. Each client maintains expert weights locally to make eviction decisions. When a client discovers a regret, it applies the penalty to the local expert weights according to the expert bitmap in the history entry. The penalties are recorded in a penalty buffer. When the number of buffered penalties exceeds a threshold, the client sends all the penalties to the controller of the memory node holding the expert weights with an RDMA-based RPC request. On receiving clients' penalties, the controller of the MN first applies the penalties to the global expert weights and then replies with the updated global weights to clients.
To reduce the bandwidth consumption of transferring the penalties over the network, Ditto compresses the penalties using a property of exponential functions: the product of the exponential penalty factors equals the exponential of the sum of the individual penalties. Hence, the sum of the penalties is stored in the penalty buffer and transferred to the MN instead of a list of individual penalties.
With the lazy weight update scheme, clients' eviction decisions are made on local weights, which are not always synchronized with global weights. However, such asynchrony does not affect the adaptivity of Ditto, as shown in our experiments.
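Below is a compact sketch of the client-side part of this scheme; the discount and learning-rate handling follows the formulas above (with the sign convention that a penalty decreases the responsible expert's weight), and the RPC to the MN-side controller is represented by a callback. The class and parameter names are ours.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

// Client-side sketch of the lazy weight update for n experts. Regret penalties
// are applied to local weights immediately and, compressed as per-expert sums
// of d^t, shipped to the MN-side controller only once per `batch` regrets.
class LazyWeightUpdater {
 public:
  // Models the RDMA-based RPC: sends the summed penalties per expert and
  // returns the updated global weights.
  using SyncFn = std::function<std::vector<double>(const std::vector<double>&)>;

  LazyWeightUpdater(size_t n_experts, double lambda, double discount,
                    size_t batch, SyncFn sync_with_mn)
      : weights_(n_experts, 1.0), pending_(n_experts, 0.0), lambda_(lambda),
        discount_(discount), batch_(batch), sync_with_mn_(std::move(sync_with_mn)) {}

  // Called on a regret: expert_bitmap marks the experts that evicted the
  // object, and t is the entry's position in the FIFO eviction history.
  void OnRegret(uint64_t expert_bitmap, uint64_t t) {
    double penalty = std::pow(discount_, static_cast<double>(t));  // d^t
    for (size_t i = 0; i < weights_.size(); ++i) {
      if (expert_bitmap & (1ULL << i)) {
        weights_[i] *= std::exp(-lambda_ * penalty);  // penalize locally
        pending_[i] += penalty;  // exponents add up, so a sum suffices
      }
    }
    if (++regrets_ >= batch_) {
      weights_ = sync_with_mn_(pending_);  // one RPC per batch of regrets
      std::fill(pending_.begin(), pending_.end(), 0.0);
      regrets_ = 0;
    }
  }

  const std::vector<double>& weights() const { return weights_; }

 private:
  std::vector<double> weights_, pending_;
  double lambda_, discount_;
  size_t batch_, regrets_ = 0;
  SyncFn sync_with_mn_;
};
```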
### Discussions
_Metadata extensions._ As mentioned in § 4.2.1, Ditto stores extended metadata together with cached objects for advanced caching algorithms. In this situation, the extended metadata is stored as a metadata header ahead of each object. The update and priority functions take all metadata, _i.e._, the default ones in the hash table and the extended ones in the metadata header, as input and call user-defined metadata update and priority calculation rules to deal with the extended metadata. After executing _Get_ and _Set_ operations, an additional RDMA_WRITE is required to update the metadata stored with objects asynchronously. Finally, on cache eviction, additional RDMA_READs are required to fetch the metadata header to calculate eviction priorities.
_Metadata overhead._ In Ditto, metadata consists of history entries, the index slots for cached objects and global expert weights. First, each history entry contains 40 bytes, as shown in Figure 9. The total number of history entries is set as the maximum number of cached objects according to existing approaches (Sutton et al., 2017; Sutton et al., 2017). Second, for each cached object, the index slot uses 40 bytes, _i.e._, 8 bytes for the atomic field and 32 bytes for access information, as shown in Figure 7. Finally, for each expert, a 4-byte float variable is required as its global expert weight. Summing up all of these, the metadata overhead of Ditto is \(80\cdot C+4\cdot N\) bytes, where \(C\) is the maximum number of cached objects and \(N\) is the number of experts in the distributed adaptive caching scheme.
_Security and fairness issues._ Since Ditto clients and applications cooperate closely on the same CNs, it is possible that some malicious users can manipulate Ditto clients to make them disproportionately advantaged against other users' applications. We can enforce security techniques, _e.g._, control flow integrity (CFI) (Bog
configurations [75]. We use the first 10 traces with more than 50 million requests to evaluate Ditto. For the Twitter traces, we randomly select three traces, _i.e._, _Twitter-Compute_, _Twitter-Storage_, and _Twitter-Transient_, from a compute cluster, a storage cluster, and a transient caching cluster, respectively. The _webmail_ trace is a 14-day storage I/O trace collected from web-based email servers. We use _webmail_ as a representative FIU trace similar to existing approaches [59]. In our experiments, we randomly select traces to accelerate our evaluation to show the performance of Ditto in different use cases, _i.e._, block IO, KV cache on different clusters, and object store. We truncate traces to allow concurrent trace loading from 32 independent clients on a single CN.
_Implementations_. We implement Ditto with 20k LOCs. We use LRU and LFU, the two most widely used caching algorithms, as two experts in the distributed adaptive caching scheme. These two caching algorithms are chosen as adaptive experts since existing adaptive caching schemes have found that using a recency-based and a frequency-based caching algorithm can adapt to most workloads [59; 74]. For memory management, we use a two-level memory management scheme [65] so that clients can dynamically allocate memory spaces in the MN. We pre-register all memory on the MN to its RNIC to eliminate the overhead of memory registration on the critical path of memory allocation.
_Parameters_. The parameters of Ditto include the number of samples, the size of lightweight eviction history, the threshold and size of the FC Cache, and the learning rate and the number of batched weight updates of distributed adaptive caching. Specifically, the number of samples affects the precision of approximating caching algorithms with sampling. We sample 5 objects on cache eviction according to the default value of Redis [58]. The size of the lightweight eviction history exhibits a tradeoff between the speed of adaptation and the metadata overhead. Setting the history size larger makes adaptation faster since more penalties can be collected during execution. In return, a larger history size requires more space to store history entries. We set the history size as the cache size (calculated in the number of objects) according to LeCaR [74]. The threshold of FC Cache can affect the precision of LFU. We set the FC cache threshold to 10 and set the FC cache size to 10MB according to our grid search. The superior hit rates in our experiments show that using 10 as the FC threshold does not affect hit rates much. Finally, we configure the learning rate of Ditto to be 0.1 and update global weights every 100 local weight updates according to our grid search.
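For reference, the defaults listed above can be collected into a single configuration block; the structure and field names below are ours, not Ditto's actual configuration format.

```cpp
#include <cstddef>

// Default parameters used in the evaluation, gathered in one place.
struct DittoConfig {
  size_t eviction_samples = 5;         // objects sampled per eviction
  size_t history_size_in_objects = 0;  // set equal to the cache size (in objects)
  unsigned fc_threshold = 10;          // flush an FC entry when its delta reaches this
  size_t fc_cache_bytes = 10 << 20;    // 10 MB client-side FC cache
  double learning_rate = 0.1;          // lambda in the weight update
  unsigned weight_update_batch = 100;  // local updates per global weight sync
};
```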
_Baselines_. We compare Ditto with Redis [58], CliqueMap [66], and Shard-LRU. First, we use Redis, one of the most widely adopted in-memory caching systems that support dynamic resource scaling [58; 22], to show the elasticity of Ditto. Second, we use CliqueMap, the state-of-the-art RDMA-based KV cache from Google, to show the efficiency and adaptivity of Ditto. CliqueMap initiates RDMA_READs on the client side to directly _Get_ cached objects, and relies on server-side CPUs to execute _Set_ operations. Since _Get_s involve only one-sided RDMA_READs, no access information can be recorded. Clients of CliqueMap record access information locally and send the information to servers periodically to enable servers to execute caching algorithms. We implemented an LRU (CM-LRU) and an LFU (CM-LFU) version of CliqueMap according to its paper since no open-source implementation is available. We disable the replication and fault-tolerance of CliqueMap to focus on comparing the execution of caching algorithms. Finally, we use Shard-LRU, a straightforward implementation of a caching system on DM, to show the effectiveness of the client-centric caching framework of Ditto. Clients of Shard-LRU maintain lock-protected LRU lists in the memory pool with one-sided RDMA verbs. We shard objects into 32 LRU lists according to their hash values and force clients to sleep 5 us on lock failures to mitigate lock and network contention. By default, we use one CPU core on MNs to simulate the poor compute power in the memory pool. Each CPU core on CNs exclusively runs a client thread.
### Q1: Elasticity
To show the elasticity of Ditto, we run the same experiment as in § 2 and force Ditto to use the same amount of CPU or memory resources as Redis on the YCSB-C workload.
Figure 13. The throughput of Ditto when dynamically adjusting compute and memory resources.

Compared with Redis, the elasticity of Ditto is improved in both resource utilization and speed of resource adjustments. First, due to the decoupled CPU and memory on DM, Ditto can adjust CPU cores and memory spaces separately in a fine-grained manner. Resources can be allocated precisely according to the dynamic demands of applications. Second, Ditto does not require data migration when adjusting resources, making the performance gain and resource reclamation more agile than in Redis. The throughput of Ditto improves immediately from 5 Mops to 8.5 Mops with 32 more CPU cores added and resumes immediately back to 5 Mops as we shrink the number of CPU cores back to 32. The throughput does not scale linearly as we add CPU cores due to the extra overhead of coroutine scheduling on CNs. The median latency stabilizes at 12 us and the 99th percentile latency fluctuates slightly around 14 to 21 us. As for adjusting memory spaces, the throughput stabilizes at 5 Mops and the tail latency stays at 14 us. Besides, the throughput of Ditto is more than 2 times higher than that of Redis during the entire experiment. This is because Ditto allows CPU cores to equally access all data, avoiding a single core becoming the performance bottleneck. However, Redis shards data to VMs, which makes the CPU core of some VMs bottleneck the throughput of the entire caching cluster on the skewed YCSB workloads.
Besides, Ditto does not require more client-side computation than Redis. In the experiment, clients of Ditto consume 32 CPU cores on the CN. In contrast, clients of Redis consume on average 36.3 CPU cores out of 128 assigned cores on two CNs. This is because the Redis client library spends CPU cycles to encapsulate and decapsulate data according to the Redis communication protocol and network protocols. Moreover, Ditto saves compute power regarding the overall CPU utilization since Redis servers consume an additional 32 cores on the MN.
### Q2: Efficiency
To show that Ditto can efficiently execute caching algorithms on DM, we evaluate the throughput and tail latency of Shard-LRU, CliqueMap, and Ditto in the case of no cache misses on YCSB benchmarks. We vary the number of clients from 1 to 256, with each CN holding up to 32 clients.
As shown in Figure 14, Shard-LRU is bottlenecked by its remote lock contention even if the sharded LRU list and the 5 us back-off scheme mitigate the lock and network contention. The throughput of CliqueMap is limited by the weak compute power on MNs. For write-intensive workloads (YCSB A), the CPU of the MN is overwhelmed by frequent _Set_s. For read-intensive workloads (YCSB B, C, and D), the CPU of the MN is busy with merging the object access information received from clients. The overall performance is affected by the periodic synchronization of access information and the amplified network bandwidth when sending the access information from clients to the MN.
For all workloads, Ditto is bottlenecked by the message rate of the RNIC on the MN. It achieves 10.5, 13.1, 13.2, and 13.0 Mops respectively on YCSB A, B, C, and D workloads, which is up to 9x higher than Shard-LRU and CliqueMap. Compared with Shard-LRU, Ditto records the access information and selects eviction victims in a lock-free manner, eliminating the expensive lock overhead on DM. Compared with CliqueMap, Ditto accesses data and maintains access information with one-sided RDMA verbs, preventing the weak compute power on the MN from becoming the throughput bottleneck on both write-intensive and read-intensive workloads. However, Ditto performs worse than CliqueMap under the write-intensive YCSB-A workload with a single client, _i.e._, the first point in Figure 14(a). This is because the _Set_s of CliqueMap use only a single RTT, while Ditto needs three RTTs to search the remote hash table, write the new object, and modify the pointer in the hash table.
Figure 15 shows the performance of CliqueMap, Redis and Ditto under YCSB-A and YCSB-C workloads with increasing numbers of MN-side CPU cores under 256 clients. We shard the LRU list (and the LFU heap) of CliqueMap into 128 shards to avoid server-side lock contention. The throughput of Ditto stays the same since Ditto does not rely on compute power on MNs. With the same compute resource in the compute pool, CliqueMap consumes more than 20 additional cores to get comparable performance with Ditto on YCSB-C. Ditto achieves 3.3x higher throughput than CliqueMap on the write-intensive YCSB-A workload since CliqueMap relies only on the server-side compute power to execute _Set_ operations and maintain caching data structures. The throughputs of Redis on both workloads are bottlenecked by the CPU core of the hottest data shard due to the skewed YCSB workloads. Redis performs slightly better than CliqueMap on YCSB-A workload with more CPU cores since its sample-based eviction eliminates the overhead of maintaining caching data structures locally.
Figure 14. The throughput and tail latency of caching systems on DM.

Figure 15. The throughput of CliqueMap, Redis, and Ditto with more CPU cores on MN.

### Q3: Adaptivity

#### 5.4.1. Adapt to real-world workloads

To show the adaptivity of Ditto on real-world workloads with different affinities of caching algorithms, we evaluate the throughput and the hit rate of real-world workloads with different cache sizes. For all traces, we use 256-byte object sizes and set cache sizes relative to the size of each workload's footprint, _i.e._, all unique data items accessed, similar to [59]. For each workload, we use 64 clients to first execute for 10 seconds to warm up the cache and then let all clients iteratively run the workload for 20 seconds to calculate the hit rate and the throughput. We use a penalized throughput to simulate real-world situations where caching systems cooperate with a distributed storage system. For each _Get_ miss, we force clients to sleep for 500 us before inserting the missed object into the cache with _Set_. The penalty simulates the overhead of fetching data from distributed storage services, and 500 us is selected according to the latency of state-of-the-art distributed storage systems [46, 53, 81].
We compare Ditto with four baseline approaches. We use CM-LRU and CM-LFU to show the performance of precise LRU and LFU implementation with CliqueMap on DM. We introduce Ditto-LRU and Ditto-LFU to show the performance of Ditto with only a single caching algorithm.
Since Ditto is an adaptive caching framework that can execute various caching algorithms and dynamically adapt to the best one based on workloads and resource settings, the performance of Ditto largely depends on the candidate caching algorithms configured by users. We configure Ditto to execute LRU and LFU as examples to show its adaptivity. Under workloads that are friendly to either LRU or LFU, the performance of Ditto should be bounded by Ditto-LRU and Ditto-LFU and approach to the better one since it adaptively selects the better one among the two algorithms.
Figures 16 and 17 show the penalized throughput and the hit rates under five real-world key-value traces. In all five workloads, the hit rate and penalized throughput of Ditto can effectively approach the better one of Ditto-LRU and Ditto-LFU. Meanwhile, Ditto outperforms CliqueMap in all workloads due to higher hit rates and the higher throughput upper-bound. Particularly, the throughput of CliqueMap is bounded by the compute power on the MN under the Twitter workloads, where the hit rates are high. One exception is the throughput of CM-LRU in Figure 16(a), which has comparable throughput with Ditto. This is because all approaches are bounded by the hit rate on the _webmail_ workload and CM-LRU has a slightly lower hit rate compared with Ditto. For most of the workloads, the throughput of Ditto is lower than that of Ditto-LRU when their hit rates are the same due to the additional overhead of adaptive caching, _i.e._, accessing and increasing the global history counter. However, the overhead is less than 5%, which is acceptable compared with the up to 63% performance gain over using an inferior caching algorithm, since users do not know in advance which caching algorithm performs better.
Figure 18 shows the box plot of relative hit rates of Ditto, max(Ditto-LRU, Ditto-LFU), and min(Ditto-LRU, Ditto-LFU) normalized over random eviction on 33 _IBM_ and _Cloud-Physics_ workloads. The hit rate of Ditto significantly exceeds min(Ditto-LRU, Ditto-LFU) and approaches the box of max(Ditto-LRU, Ditto-LFU), showing the adaptivity of Ditto.
Under changing workloads that iteratively switch between LRU- and LFU-friendly, Ditto should outperform both Ditto-LRU and Ditto-LFU. We show the performance of the four approaches on a synthetic changing workload used in [74].
The workload is synthesized to have four phases that periodically switch back and forth from being favorable to LRU to being favorable to LFU. As shown in Figure 19, Ditto outperforms all baselines on both penalized throughput and hit rate because only Ditto can adapt to workload changes.
#### 5.4.2. Adapt to dynamic resource adjustments
To show the adaptivity of Ditto on DM, we evaluate its hit rates with dynamically changing compute and memory resources on the same workloads as Figures 3, 4, and 5, _i.e._, _webmail_.
**Adapt to changing compute resources.** Figure 20 shows the relative hit rates normalized to Ditto-LRU under different proportions of clients allocated to two applications with LRU and LFU access patterns. The hit rate of Ditto-LFU is higher when the LRU portion is less than 0.4, while Ditto-LRU performs better when the LRU portion grows higher. The hit rate of Ditto is higher than that of Ditto-LRU with a low LRU portion and becomes close to Ditto-LRU with a high LRU portion, indicating the adaptivity of Ditto. Besides, Ditto can adapt to the change of access pattern when multiple clients concurrently execute the same workload. Figure 21 shows the relative hit rates of Ditto and CliqueMap normalized to Ditto-LRU under dynamically increasing numbers of concurrent clients4. The hit rate of Ditto stays above the hit rates of both Ditto-LRU and Ditto-LFU because there are access pattern changes in the real-world _webmail_ workload, and only Ditto can adapt to these changes.
Footnote 4: The absolute hit rates in Figures 18, 20, and 21 can be found in our open-source repository.
**Adapt to changing memory sizes.** Figure 22 shows the hit rate of Ditto when we dynamically increase the memory space. The hit rate of Ditto approaches Ditto-LRU in most cases, outperforming Ditto-LFU. When the cache size is 20% and 30% of the footprint size, the hit rate of Ditto-LFU exceeds Ditto-LRU. Ditto performs better than both approaches because it can adaptively adjust its algorithm according to the affinity of caching algorithms on different cache sizes.
### Q4: Flexibility
To show that Ditto can flexibly integrate various caching algorithms, we integrate 12 commonly used caching algorithms into Ditto and evaluate their throughput, hit rate, and coding effort. Since evaluating the feasibility of executing different caching algorithms is independent of workloads, we only show the throughput and hit rates on the _webmail_ workload in Figure 23. Among all the algorithms, SIZE exhibits the best throughput and hit rate, while MRU exhibits the worst. All these algorithms can be easily implemented in Ditto with less than 23 lines of code, as shown in Table 3.
### Q5: Contribution of Each Technique
We show the contribution of techniques proposed in the paper by gradually disabling each technique of Ditto. Due to the space limit, we show the performance of different techniques without miss penalties on the _webmail_ workload in Figure 24. Ditto performs similarly on other workloads and more results can be found in our open-source repository. The sample-friendly hash table (SFHT) improves the overall throughput by 42% since it reduces the number of RDMA operations on data paths when updating the access information and sampling objects. The lightweight history scheme (LWH) improves the throughput by 13% due to the reduced number of RTTs when collecting regrets and maintaining eviction history. Finally, the lazy weight update scheme (LWU) and the frequency-counter cache (FC) contribute 4% of the overall throughput because the reduced number of RDMA requests saves the message rate of the RNICs on MNs.
Figure 25 shows the performance of Ditto under the YCSB-C benchmark with 256 clients and different FC cache sizes. We limit FC cache size in MB since the size of each cache entry varies with the size of its recorded object ID. We only show the result under YCSB-C due to the space limit. Ditto performs similarly on other workloads and more results can be found in our open-source repository. The throughput increases from 10 Mops to 13.2 Mops with increased sizes of the FC cache since more RDMA_FAAs can be cached locally to save the message rate of RNICs. The tail latency drops from 28 us to 21 us due to the reduced number of RDMA operations and less contended network. Also, the performance gain of the FC cache becomes insignificant when the size of the FC cache exceeds 5 MB, indicating that the FC cache can improve overall performance with small additional memory consumption on clients.
## 6. Related Work
**Disaggregated Memory.** Existing work on disaggregated memory can be classified into approaches that realize efficient memory disaggregation and approaches that design better applications on DM. Ditto falls into the second category, and improvements from the first category are also compatible with Ditto.
**Caching Algorithms.** Caching algorithms distinguish the hotness of objects using recency [73, 86], frequency [21] and other access information [11], or combining various information together [10, 11, 7, 12] to get higher hit rates. Recently, many machine-learning-based adaptive caching algorithms have been proposed [6, 47, 59, 74]. Among them, CACHEUS [59] is the most related. It uses regret minimization to adaptively select a better caching algorithm. However, all these caching algorithms are designed for server-centric caching systems to optimize specific workloads. Ditto, on the one hand, is designed for caching systems on DM where clients directly access data without involving CPUs in the memory pool. On the other hand, Ditto is an adaptive caching framework where multiple caching algorithms can be integrated and adaptively selected according to workload and resource changes.
## 7. Conclusion
We propose Ditto, the first caching system on the disaggregated memory architecture, to achieve better elasticity. Ditto addresses the challenges of constructing a caching system on DM, _i.e._, executing server-centric caching algorithms and dealing with inferior hit rates caused by dynamically changing resources and data access patterns. A client-centric caching framework is proposed to efficiently execute caching algorithms on DM. Various caching algorithms can be integrated with small coding efforts. A distributed adaptive caching scheme is proposed to adapt to the resource and workload changes. Experimental results show that Ditto effectively adapts to the resource and workload change on DM and outperforms the state-of-the-art caching system on monolithic servers by up to 9\(\times\) on YCSB synthetic workloads and 3.6\(\times\) on real-world key-value traces.
## Acknowledgments
We sincerely thank our shepherd Marcos K. Aguilera and the anonymous reviewers for their constructive comments and suggestions. This work is supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (No. CUHK 14206921 of the General Research Fund), the National Natural Science Foundation of China (Project No. 62202511), the Natural Science Foundation of Shanghai (Project No. 22ZR1407900), and Huawei Cloud. Pengfei Zuo is the corresponding author ([email protected]).
| **Algs.** | LRU | LFU | MRU | GDS | LIRS | FIFO | SIZE | GDSF | LRFU | LRUK | LFUDA | HYPERBOLIC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **LOC** | 9 | 9 | 9 | 14 | 12 | 9 | 9 | 14 | 17 | 23 | 14 | 11 |
| **Info.** | \(t_{SL}\) | F | \(t_{SL}\) | S | F, \(t_{SL}\), M | \(t_{SI}\) | S | F, S | \(t_{SL}\), M | M | F, M | \(t_{SL}\), F, S |
Table 3. LOCs and used access information of different caching algorithms on Ditto. \(t_{SI}\) and \(t_{SL}\) refer to the insert timestamp and the last access timestamp, respectively. S refers to the size of the object, F refers to the access frequency of the object, and M refers to the use of additional metadata. Details on the additional metadata M can be found in our open-source repository. |
2309.13730 | Families of automorphisms of abelian varieties | We consider some algebraic aspects of the dynamics of an automorphism on a
family of polarized abelian varieties parameterized by the complex unit disk.
When the action on the cohomology of the generic fiber has no cyclotomic
factor, we prove that such a map can be made regular only if the family of
abelian varieties does not degenerate. As a contrast, we show that families of
translations are always regularizable. We further describe the closure of the
orbits of such maps, inspired by results of Cantat and Amerik-Verbitsky. | Charles Favre, Alexandra Kuznetsova | 2023-09-24T19:20:46Z | http://arxiv.org/abs/2309.13730v1 | # Families of automorphisms on abelian varieties
###### Abstract.
We consider some algebraic aspects of the dynamics of an automorphism on a family of polarized abelian varieties parameterized by the complex unit disk. When the action on the cohomology of the generic fiber has no cyclotomic factor, we prove that such a map can be made regular only if the family of abelian varieties does not degenerate. As a contrast, we show that families of translations are always regularizable. We further describe the closure of the orbits of such maps, inspired by results of Cantat and Amerik-Verbitsky.
The work of Alexandra Kuznetsova was performed at the Steklov International Mathematical Center and supported by the Ministry of Science and Higher Education of the Russian Federation (agreement no. 075-15-2022-265) as well as by the Russian Science Foundation, grant 21-11-00153. Both authors extend their thanks to Diego Izquierdo and Antoine Ducros for discussions on simple central algebras and analytic GAGA.
## 1. Introduction
**Regularizable mappings**. Let \(f\colon X\dasharrow X\) be any birational self-map of a smooth complex projective variety of dimension \(N\geq 1\). Denote by \(\operatorname{Ind}(f)\) its indeterminacy locus, and by \(\mathcal{E}(f)\) its exceptional locus consisting of the union of all the contracted hypersurfaces by \(f\).
When \(\operatorname{Ind}(f)=\operatorname{Ind}(f^{-1})=\emptyset\), then \(f\) is an (biregular) automorphism, and its action by pull-back on the cohomology of \(X\) commutes with iteration. Powerful complex analytic techniques based on pluripotential theory have been developed and under suitable assumptions on the spectral properties of \(f^{*}\), they allow us to understand the ergodic properties of the dynamical system induced by \(f\), see [1]. When \(\mathcal{E}(f)=\mathcal{E}(f^{-1})=\emptyset\), then \(f\) is a _pseudo-automorphism_. Since \(f\) induces an isomorphism in codimension \(1\), its action on \(H^{2}(X,\mathbb{Q})\) is still functorial, but its action on the full cohomology ring \(H^{*}(X,\mathbb{Q})\) remains a mystery. Furthermore pluripotential techniques can only work when \(\operatorname{Ind}(f)\) and \(\operatorname{Ind}(f^{-1})\) are dynamically unrelated, see [1, 1, 10, 11]. This motivates the question to measure how far a pseudo-automorphism is from being an automorphism.
We say that a birational self-map \(f\colon X\dasharrow X\) is _regularizable_ if there exists a birational map \(\phi\colon Y\dasharrow X\) from a projective variety \(Y\) such that \(f_{Y}:=\phi^{-1}\circ f\circ\phi\) is an automorphism1. The existence of a functorial resolution of singularities (see, e.g., [12]) implies that one can always assume \(Y\) to be smooth. We shall also see that \(f\) is regularizable if and only if \(f^{n}\) is regularizable for some \(n\in\mathbb{N}^{*}\) (Proposition 2.12). Similarly, we say that \(f\) is _pseudo-regularizable_ when \(f_{Y}\) is a pseudo-automorphism on some projective model \(Y\). Our principal aim is to explore when a pseudo-regularizable birational map may be regularizable.
Footnote 1: we insist on \(Y\) to be projective hence proper contrary to some other authors, see, e.g., [1].
Various geometric assumptions on \(X\) force the regularizability of its whole group of birational transformations. This happens for instance when \(X\) does not carry any rational curve (since the total image of any point in \(\operatorname{Ind}(f)\) is covered by rational curves); when \(X\) is of general type (by Kobayashi-Ochiai theorem); or if \(X\) is a degree \(n\geq 4\) smooth hypersurface of \(\mathbb{P}^{n}_{\mathbb{C}}\) (by birational super-rigidity, see, e.g., [1]). We shall focus our attention on dynamical properties of \(f\) that ensure or forbid regularizability. A first set of constraints arises by looking at the growth of degrees of the iterates of \(f\). These degrees are defined as follows. For any ample line bundle \(L\to X\), set \(\deg_{L}(f):=f^{*}c_{1}(L)\cdot c_{1}(L)^{N-1}\). Then the sequence \(\{\deg_{L}(f^{n})\}\) is sub-multiplicative up to a bounded constant by [1, 10, 11], and we can therefore define the first dynamical degree of \(f\) by setting \(\lambda_{1}(f):=\lim_{n}\deg_{L}(f^{n})^{1/n}\). By Weil's regularization lemma (see, e.g., [1, SS1.1.1] and the references therein) it follows that if \(\deg_{L}(f^{n})\) is bounded, then \(f\) is regularizable.
One can similarly define higher degrees and dynamical degrees \(\deg_{L,j}(f):=f^{*}c_{1}(L)^{j}\cdot c_{1}(L)^{N-j}\) and \(\lambda_{j}(f):=\lim_{n}\deg_{L,j}(f^{n})^{1/n}\) for any \(j\in\{0,\cdots,N\}\). It is a fact that for any birational map \(\phi\colon Y\dasharrow X\) as above, and for any ample line bundle \(L_{Y}\to Y\), we have \(\deg_{L,j}(f^{n})\asymp\deg_{L_{Y},j}(f^{n}_{Y})\). This observation leads to the following series of results.
1. If \(f\) is regularizable and \(\lambda_{1}(f)=1\), then \(\deg_{L}(f^{n})\asymp n^{2(k-1)}\) with \(k\in\{1,\cdots,N\}\), by [1] (the case \(N=3\) was previously treated in [1]).
2. When \(N=2\) and \(\lambda_{1}(f)=1\), then \(\deg_{L}(f^{n})\asymp n^{k}\) with \(k\in\{0,1,2\}\); and \(f\) is regularizable if and only if \(k\in\{0,2\}\), by [10, 1].
3. If \(f\) is regularizable, then \(\lambda_{1}(f)\) is a unit of the ring of integers of some number field (since it is the spectral radius of \(f^{*}\colon H^{2}(X,\mathbb{Q})\to H^{2}(X,\mathbb{Q})\), which fixes the lattice \(H^{2}(X,\mathbb{Z})\)); see the example after this list. In particular, for any birational map \(f\) of \(\mathbb{P}^{N}_{\mathbb{C}}\) of degree \(\geq 2\) with \(N\geq 2\) and for a generic \(A\in\operatorname{PGL}(N+1,\mathbb{C})\), the map \(A\circ f\) is not regularizable, by a theorem of Vigny [11] (see also [1]). Maps for which \(\lambda_{1}(f)\) is transcendental cannot be regularizable either (examples of such maps are given in [1]).
4. When \(N=2\) and \(\lambda_{1}(f)>1\), then all Galois conjugates of \(\lambda_{1}(f)\) have (complex) norm \(\leq 1\) by [10]. Moreover, if \(f\) is regularizable, then either \(\lambda_{1}(f)\) is a quadratic unit or a Salem number. Conversely, if \(\lambda_{1}(f)\) is a Salem number, then \(f\) is regularizable by [1].
5. When \(N\geq 3\) and \(\lambda_{1}(f)^{2}>\lambda_{2}(f)\), then again all Galois conjugates of \(\lambda_{1}(f)\) have (complex) norm \(\leq 1\) by [11]. LoBianco [1] proved that the modulus of the Galois conjugates of \(\lambda_{1}:=\lambda_{1}(f)\) of a regularizable map belongs to the set \(\{\lambda_{1},\lambda_{2}^{-1},\lambda_{1}^{-1}\lambda_{2},\lambda_{1}^{-1/2}, \lambda_{2}^{1/2},\lambda_{1}^{1/2}\lambda_{2}^{-1/2}\}\) when \(N=3\). No general results are known in higher dimension.
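As a classical illustration of (3), consider a quadratic Hénon map, viewed as a birational self-map of \(\mathbb{P}^{2}_{\mathbb{C}}\):

\[h(x,y)=(y,\,y^{2}+c-\delta x),\qquad\delta\in\mathbb{C}^{*},\ c\in\mathbb{C}.\]

One has \(\deg(h^{n})=2^{n}\), hence \(\lambda_{1}(h)=2\); since \(2\) is not a unit in the ring of algebraic integers, no such map is regularizable.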
Let us now discuss how to produce (non-regular) examples of pseudo-automorphisms. First, if \(X\) has torsion canonical bundle, then \(\mathcal{E}(f)\) and \(\mathcal{E}(f^{-1})\) are automatically empty, hence \(f\) is a pseudo-automorphism. More generally, if \(\pi\colon X\to B\) is a family of polarized manifolds with trivial canonical bundle over a \(1\)-dimensional base and \(\pi\circ f=\pi\), then one can find a birational model of \(X\) for which the relative canonical bundle \(K_{X/B}\) is trivial, see [10], hence again \(f\) is pseudo-regularizable.
In dimension \(2\) (where the notions of pseudo-automorphism and automorphism coincide), many constructions of regularizable birational maps of \(\mathbb{P}^{2}_{\mathbb{C}}\) with \(\lambda_{1}(f)>1\) have been described, see [11, 12, 13, 14, 15, 16] (the list is not exhaustive). Generalizations of these constructions to higher dimension \(N\geq 3\) lead to pseudo-regularizable birational maps, see [1, 1, 16]. To the knowledge of the authors, there is only one example, due to Oguiso and Truong [14], of a regularizable birational map \(f\) of \(\mathbb{P}^{N}_{\mathbb{C}}\) with \(\lambda_{1}(f)>1\) which admits a Zariski dense orbit (see also [14] for product examples having Siegel disks). Even though pseudo-automorphisms are expected to be non-regularizable in general, it remains a very challenging task to actually prove it. Examples of birational maps of \(\mathbb{P}^{3}_{\mathbb{C}}\) have been treated by Bedford, Cantat and Kim [1]. The second author [17] has also proved that a generic element of a family of pseudo-automorphisms introduced by Blanc is not regularizable.
In this paper, we shall discuss the regularizability of a given bimeromorphic map \(f\) on a \(1\)-parameter family of polarized abelian varieties \(\mathcal{X}=(X_{t})_{t\in\mathbb{D}}\). Note that by our previous discussion, \(f\) is automatically pseudo-regularizable. Although we do not solve our problem in full generality, we give a necessary and sufficient condition in the case where the action of \(f\) on \(H^{1}(X_{t},\mathbb{Z})\) has no cyclotomic factors (Theorem A). We also exhibit necessary conditions in the case \(\lambda_{1}(f)=1\) (Theorem C). We then turn to families of translations, and prove two theorems concerning this class of maps. First, we show that families of translations are always regularizable (Theorem E). Second, we describe the closure of the orbits of a generic point (Theorem F), inspired by a recent work of Amerik and Verbitsky [1].
**Regularizability of bimeromorphic maps on families of abelian varieties**. Before stating our main results, let us describe the set-up in more detail. A family of polarized abelian varieties of dimension \(g\) over the complex unit disk is a proper holomorphic map \(\pi\colon\mathcal{X}\to\mathbb{D}=\{|t|<1\}\) which is a submersion over \(\mathbb{D}^{*}=\{0<|t|<1\}\), such that \(X_{t}=\pi^{-1}(t)\) is a smooth abelian variety of dimension \(g\) for each \(t\neq 0\). We shall also fix a relatively ample line bundle \(\mathrm{L}\to\mathcal{X}\), and assume that \(\mathcal{X}\) is smooth.
The family is _smooth_ when \(\pi\) is a submersion over \(\mathbb{D}\). In that case, \(X_{0}\) is a smooth abelian variety. A _base change_ of order \(n\) for the family \(\pi\colon\mathcal{X}\to\mathbb{D}\) is a family of polarized abelian varieties \(\pi_{n}\colon\mathcal{X}_{n}\to\mathbb{D}\) together with a meromorphic map \(\varphi\colon\mathcal{X}_{n}\dashrightarrow\mathcal{X}\) commuting with the projections to the base, the induced map on \(\mathbb{D}\) being \(t\mapsto t^{n}\).
A family of polarized abelian varieties \(\pi^{\prime}\colon\mathcal{X}^{\prime}\to\mathbb{D}\) is _bimeromorphically equivalent_ to \(\pi\colon\mathcal{X}\to\mathbb{D}\) if there exists a bimeromorphic map \(\varphi\colon\mathcal{X}^{\prime}\dashrightarrow\mathcal{X}\) which is a biregular isomorphism over \(\mathcal{X}^{*}=\pi^{-1}(\mathbb{D}^{*})\) and satisfies \(\pi\circ\varphi=\pi^{\prime}\).
We shall say that a proper family of polarized abelian varieties \(\pi^{\prime}\colon\mathcal{X}^{\prime}\to\mathbb{D}\) is _not degenerating_ if it admits a base change which is a smooth family. In other words, the induced projective variety \(X\) defined over the field \(\mathbb{C}(t)\) admits a smooth model over \(\operatorname{Spec}\mathbb{C}[t]\) (possibly after a finite extension of the base field). It is also equivalent to say that the canonical holomorphic map sending \(t\in\mathbb{D}^{*}\) to the isomorphism class\({}^{2}\) of the polarized abelian variety \([X_{t}]\) extends holomorphically through \(0\) after possibly precomposing with \(t\mapsto t^{n}\).
Footnote 2: the space of such isomorphism classes is a quasi-projective variety, see [1, chapter 8].
Consider \(f\colon\mathcal{X}\dashrightarrow\mathcal{X}\) a bimeromorphic map satisfying \(\pi\circ f=\pi\). Note that the indeterminacy locus of \(f\) satisfies \(\operatorname{Ind}(f)\subset X_{0}\), since an abelian variety does not contain any rational curve. In this context, we say that \(f\) is regularizable (resp. pseudo-regularizable) iff there exists a proper family of polarized abelian varieties \(\pi^{\prime}\colon\mathcal{X}^{\prime}\to\mathbb{D}\) bimeromorphically equivalent to \(\pi\colon\mathcal{X}\to\mathbb{D}\) through a bimeromorphic map \(\varphi\colon\mathcal{X}^{\prime}\dashrightarrow\mathcal{X}\), such that \(f_{\mathcal{X}^{\prime}}:=\varphi^{-1}\circ f\circ\varphi\) is a regular automorphism (resp. a regular isomorphism from \(X_{1}\) onto \(X_{2}\), where \(\mathcal{X}^{\prime}\setminus X_{i}\) are analytic subsets of codimension \(\geq 2\)). We insist here on the condition that \(\mathcal{X}^{\prime}\) should be proper. As we shall see below, any map is regularizable on its Neron model, but the central fiber of the latter is in general non-compact. As above, \(f\) is always pseudo-regularizable.
For each \(t\neq 0\), the map \(f\) induces a biregular automorphism \(f_{t}\colon X_{t}\to X_{t}\), hence a group isomorphism \(f_{t}^{*}\colon H^{1}(X_{t},\mathbb{Z})\to H^{1}(X_{t},\mathbb{Z})\). Since all fibers \(X_{t}\) are smoothly diffeomorphic, any continuous path joining \(t\) to \(t^{\prime}\) induces an isomorphism \(H^{1}(X_{t},\mathbb{Z})\to H^{1}(X_{t^{\prime}},\mathbb{Z})\) conjugating \(f_{t}^{*}\) to \(f_{t^{\prime}}^{*}\).
**Theorem A**.: _Let \(\pi\colon\mathcal{X}\to\mathbb{D}\) be a family of polarized abelian varieties of dimension \(g\), and let \(f\colon\mathcal{X}\dashrightarrow\mathcal{X}\) be any bimeromorphic map._
1. _If the family is not degenerating, then_ \(f\) _is regularizable._
2. _Suppose_ \(f\) _is regularizable. If no root of unity is an eigenvalue of_ \(f_{t}^{*}\colon H^{1}(X_{t},\mathbb{Z})\to H^{1}(X_{t},\mathbb{Z})\)_, then the family is not degenerating._
The first statement is easy to prove. The main content lies in the second statement. Observe that if \(X_{t}\) is a simple abelian variety for some \(t\neq 0\), then the condition appearing in (2) is equivalent to \(\lambda_{1}(f_{t})>1\). Note also that our theorem is local over the base hence implies the next result.
**Corollary B**.: _Suppose \(\mathcal{X}\) is a projective variety endowed with a fibration over a projective curve \(\pi\colon\mathcal{X}\to B\) whose general fiber \(X_{b}=\pi^{-1}(b)\) is an abelian variety. Suppose \(f\colon\mathcal{X}\dashrightarrow\mathcal{X}\) is a birational map preserving \(\pi\). If \(f_{b}^{*}\colon H^{1}(X_{b},\mathbb{Z})\to H^{1}(X_{b},\mathbb{Z})\) has no eigenvalue equal to a root of unity for a general \(b\), then \(f\) is regularizable iff there exists a commutative diagram_
_where \(\varphi^{\prime}\colon B^{\prime}\to B\) is a finite ramified cover, and \(\pi^{\prime}\colon X^{\prime}\to B^{\prime}\) is a smooth family._
Smooth non-isotrivial families of abelian varieties with automorphisms satisfying \(\lambda_{1}(f)>1\) and parameterized by a projective curve exist for any \(g\geq 2\). This follows from the fact that Hilbert-Blumenthal varieties can be compactified by adding finitely many points, see §5.4 for details.
Let us explain our strategy for the proof of (2). We use the notion of Neron model, see, e.g., [1]. This is a (not necessarily proper) birational model \(\mathcal{N}\to\mathbb{D}\) that satisfies a suitable universal property so that \(f\) induces a regular automorphism \(f_{\mathcal{N}}\colon\mathcal{N}\to\mathcal{N}\), and the bimeromorphic map \(\phi_{\mathcal{N}}\colon\mathcal{X}\dashrightarrow\mathcal{N}\) is regular on the open subset \(X^{\mathrm{sm}}\subset\mathcal{X}\) over which \(\pi\) is a local submersion. By Grothendieck's semi-stable reduction theorem, after base change, the central fiber is a finite union of semi-abelian varieties, that is, extensions of an abelian variety by a multiplicative torus. Further, by the semi-stable reduction theorem ([13]), again after base change, we may suppose that \(\mathcal{X}\setminus X^{\mathrm{sm}}\) has codimension \(\geq 2\) in \(\mathcal{X}\). To simplify the argument, assume that \(g=2\) and \(\lambda_{1}(f_{t})>1\), and that \(X_{t}\) is simple for some \(t\). First we pick an irreducible component \(E\) of \(X_{0}\) that is \(f\)-invariant and satisfies \(\lambda_{1}(f|_{E})=\lambda_{1}(f_{t})\) (Proposition 2.3). Then we look at its image \(Z:=\phi_{\mathcal{N}}(E)\subset\mathcal{N}\) inside the central fiber of the Neron model. When \(\dim(Z)\leq 1\), we observe that \(E\) is the exceptional divisor of the blow-up of a fixed point or a fixed curve in a neighborhood of which \(f\) is a local isomorphism, which implies that the dynamical degree of \(f_{\mathcal{N}}|_{Z}\) equals \(1<\lambda_{1}(f_{t})\), a contradiction (see Proposition 2.6). It follows that \(Z\) has dimension \(2\), and is an extension of a multiplicative torus \(\mathbb{G}_{m}^{k}\) by an abelian variety \(A\) of dimension \(2-k\). When \(k=0\), the family is not degenerating. The proof is complete if we prove that \(\lambda_{1}(f_{\mathcal{N}}|_{Z})<\lambda_{1}(f_{t})\) when \(k\in\{1,2\}\). Roughly, the reason is that if \(M\in\operatorname{SL}(g,\mathbb{Z})\) has spectral radius \(\rho>1\), then the dynamical degree of the induced map on the product \(E^{g}\), for any elliptic curve \(E\), is equal to \(\rho^{2}\), whereas the dynamical degree of the map induced by \(M\) on \(\mathbb{G}_{m}^{g}\) is equal to \(\rho\) (Theorem 3.6).
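To make this dichotomy concrete, take \(g=2\) and \(M=\begin{pmatrix}2&1\\ 1&1\end{pmatrix}\in\operatorname{SL}(2,\mathbb{Z})\), whose spectral radius is \(\rho=\frac{3+\sqrt{5}}{2}\). On an abelian surface \(E^{2}\), the induced automorphism acts on \(H^{1,0}\oplus H^{0,1}\) with eigenvalues \(\rho,\rho^{-1}\) (each appearing twice), hence on \(H^{1,1}\) with spectral radius \(\rho^{2}\), so that

\[\lambda_{1}\bigl(M\text{ acting on }E^{2}\bigr)=\rho^{2},\qquad\lambda_{1}\bigl(M\text{ acting on }\mathbb{G}_{m}^{2}\bigr)=\rho,\]

the second equality being the standard computation for the monomial map \((x,y)\mapsto(x^{2}y,xy)\).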
The same line of argument can be used in the case \(\lambda_{1}(f_{t})=1\) if one replaces the computation of dynamical degrees by the finer invariant given by the growth rate of the sequence of degrees. Recall that an automorphism \(f_{t}\) of an abelian variety \(X_{t}\) with \(\lambda_{1}(f_{t})=1\) always satisfies \(\deg_{1}(f_{t}^{n})\asymp n^{2k}\) for some \(k\in\{0,\cdots,g-1\}\). When the degrees are bounded (i.e., \(k=0\)), an iterate of \(f_{t}\) is a translation. At the other end of the spectrum, when \(\deg_{1}(f_{t}^{n})\asymp n^{2(g-1)}\), the variety \(X_{t}\) is isogenous to \(E^{g}_{t}\) for some elliptic curve \(E_{t}\).
However, in the proof of Theorem A we used the product formula of Dinh-Nguyen [16, Theorem 1.1], which allows one to compute exactly the dynamical degrees of a map preserving a fibration in terms of the dynamical degrees on the fibers and on the base. We were only able to extend this formula to the degree growth in codimension \(1\) with linear losses, so that we could not fully characterize regularizable mappings.
To state our next result, we define the following invariant. Let \(r(\mathcal{X})\in\{0,\cdots,g\}\) be the dimension of the maximal multiplicative torus of \(\mathcal{N}_{0}\) when it is semi-abelian (this does not depend on the base change). Observe that \(r(\mathcal{X})=0\) if and only if \(\mathcal{X}\) is not degenerating; and \(r(\mathcal{X})=g\) if and only if \(\mathcal{X}\) has maximal degeneration (in the sense that all components of the central fiber are multiplicative tori).
**Theorem C**.: _Let \(\pi\colon\mathcal{X}\to\mathbb{D}\) be a family of polarized abelian varieties of dimension \(g\), and let \(f\colon\mathcal{X}\dashrightarrow\mathcal{X}\) be any bimeromorphic map such that \(\lambda_{1}(f_{t})=1\). Write \(\deg_{1}(f_{t}^{n})\asymp n^{2k}\) with \(k\in\{0,\cdots,g-1\}\)._
_If \(f\) is regularizable, then we have_
\[2k\leq\max\{r(\mathcal{X}),2g-2r(\mathcal{X})-1\}\.\]
_In particular, if \(f\) is regularizable and \(k=g-1\geq 2\), then \(r(\mathcal{X})=0\) and the family is not degenerating._
For abelian surfaces (\(g=2\)), the following table summarizes our results:
| \(k\) | \(r(\mathcal{X})\) | Is \(f\) regularizable? |
| --- | --- | --- |
| \(0\) | any | yes (Theorem E) |
| \(1\) | \(0\) | yes |
| \(1\) | \(1\) | no (Theorem C) |
| \(1\) | \(2\) | ??? |
Observe also that in any dimension, if the degeneration is maximal and \(f\) is regularizable, then our theorem says \(k\leq\lfloor g/2\rfloor\). We suspect that if \(\lambda_{1}(f_{t})=1\), \(f\) is regularizable and \(r(\mathcal{X})\leq 2k-1\), then there exists an \(f\)-invariant subfamily \(\mathcal{Y}\subset\mathcal{X}\) of polarized abelian varieties of dimension \(\leq g-r(\mathcal{X})\) which is not degenerating and such that \(\deg_{1}(f^{n}|_{\mathcal{Y}})\asymp\deg_{1}(f^{n})\). We propose the following more ambitious conjecture.
**Conjecture D**.: _Let \(f\colon\mathcal{X}\dashrightarrow\mathcal{X}\) be any bimeromorphic map of a family of polarized abelian varieties that fixes the \(0\)-section._
_Then \(f\) is regularizable if and only if the following holds. Possibly after base change, there is an \(f\)-invariant splitting \(\mathcal{X}=\mathcal{Y}\times\mathcal{Y}^{\prime}\), where \(\mathcal{Y}\) is a non-degenerating family of polarized abelian varieties on which \(f\) restricts to a family of automorphisms; \(\mathcal{Y}^{\prime}\) is a family of polarized abelian varieties, and \(f|_{\mathcal{Y}^{\prime}}\) is a family of automorphisms such that \(f_{t}^{*}|_{\mathcal{Y}^{\prime}}\colon H^{1}(Y_{t}^{\prime},\mathbb{Z})\to H^{1}(Y_{t}^{\prime},\mathbb{Z})\) has finite order._
**Dynamics of families of translations**. A translation on an abelian variety \(X\) is an automorphism \(f\colon X\to X\) acting as the identity on \(H^{1}(X,\mathbb{Z})\). Note that this is the same as asking \(f\) to belong to the connected component of the identity in \(\operatorname{Aut}(X)\). Let \(f\colon\mathcal{X}\dashrightarrow\mathcal{X}\) be a bimeromorphic map on a family of polarized abelian varieties. If \(f_{t}\) is a translation for some \(t\), then \(f_{t}\) is a translation for all \(t\neq 0\).
**Theorem E**.: _Let \(\pi\colon\mathcal{X}\to\mathbb{D}\) be a family of polarized abelian varieties, and let \(f\colon\mathcal{X}\dashrightarrow\mathcal{X}\) be a bimeromorphic map. If \(f_{t}^{*}\colon H^{1}(X_{t},\mathbb{Z})\to H^{1}(X_{t},\mathbb{Z})\) has finite order, then \(f\) is regularizable. In particular, any family of translations is regularizable._
The proof we give relies on the description by Nakamura [14] of a compactification of the Neron model in terms of toroidal data, following ideas developed by Mumford (see the appendix of [11]). In a nutshell, one represents \(X_{t}\) as a quotient of \(\mathbb{G}_{m}^{g}\) by a co-compact action of \(\mathbb{Z}^{g}\), and one obtains the compactification by building a suitable toric modification of \(\mathbb{G}_{m}^{g}\times\mathbb{D}\) along the central fiber so that the action of \(\mathbb{Z}^{g}\) extends and remains co-compact (on each fiber including the central one). It is then not difficult to check that any family of translations extends to this model.
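In the simplest case \(g=1\), one may keep in mind the Tate curve as a toy model: for \(0<|t|<1\), set

\[X_{t}=\mathbb{G}_{m}/t^{\mathbb{Z}},\]

a family of elliptic curves degenerating at \(t=0\). Performing a suitable toric modification of \(\mathbb{G}_{m}\times\mathbb{D}\) along the central fiber, on which the action \(z\mapsto tz\) extends and remains co-compact, one obtains a proper model whose central fiber is a cycle of rational curves; roughly speaking, translations of \(X_{t}\) act on this central fiber through its \(\mathbb{G}_{m}\)-part and a cyclic permutation of the components, and therefore extend to the model.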
**Remark 1.1**.: We use in an essential way the fact that the base is one-dimensional. We suspect that there exist families of elliptic curves defined over a surface that carry families of translations which are not regularizable.
Our next result is inspired by a recent theorem of Amerik and Verbitsky [1, Theorem 3.12], who generalized to hyperkahler manifolds a former result of Cantat [1, Proposition 2.2] on the density of generic orbits of translations on families of elliptic curves covering a K3 surface.
Let \(f\colon\mathcal{X}\dashrightarrow\mathcal{X}\) be a family of translations as in the previous theorem. Note that for any given point in \(X_{t}\), the closure (for the euclidean topology) of the orbit of \(f_{t}\) is a finite union of real tori of (real) dimension ranging from \(0\) to \(2g\), and that this dimension does not depend on the choice of point on \(X_{t}\). When \(t\) changes however, this dimension might jump.
**Theorem F**.: _Let \(f\colon\mathcal{X}\dashrightarrow\mathcal{X}\) be any family of translations on a polarized family of abelian varieties \(\pi\colon\mathcal{X}\to\mathbb{D}\). Then there exist families of polarized abelian varieties \(\mathcal{Y}\to\mathbb{D}\), \(\mathcal{Y}^{\prime}\to\mathbb{D}\); families of translations \(f_{\mathcal{Y}}\colon\mathcal{Y}\to\mathcal{Y}\), and \(f_{\mathcal{Y}^{\prime}}\colon\mathcal{Y}^{\prime}\to\mathcal{Y}^{\prime}\); and a meromorphic map \(\phi\colon\mathcal{Y}\times\mathcal{Y}^{\prime}\dashrightarrow\mathcal{X}\) such that_
1. \(\phi\circ(f_{\mathcal{Y},t}\times f_{\mathcal{Y}^{\prime},t})=f_{t}\circ\phi\) _for all_ \(t\in\mathbb{D}\)_;_
2. \(\phi_{t}\colon Y_{t}\times Y_{t}^{\prime}\to X_{t}\) _is an isogeny for all_ \(t\neq 0\)_;_
3. _for all_ \(t\) _in a dense set of full measure, the closure of the orbit of any point_ \(p\in Y_{t}\) _under_ \(f_{\mathcal{Y},t}\) _is dense in_ \(Y_{t}\) _for the euclidean topology;_
4. _there exists a sequence of families of translations of finite order_ \(g_{n}\) _on_ \(\mathcal{Y}^{\prime}\) _such that_ \(g_{n}\to f_{\mathcal{Y}^{\prime}}\) _locally uniformly on_ \(\mathcal{Y}^{\prime}\)_._
In rough terms, up to isogeny, we can split \(\mathcal{X}\) into a product family \(\mathcal{Y}\times\mathcal{Y}^{\prime}\) so that orbits are dense in \(Y_{t}\) for \(t\) generic, and the family of translations on \(\mathcal{Y}^{\prime}\) is uniformly approximated by finite order automorphisms \(g_{n}\). Note that for each \(n\) and for a generic \(t\in\mathbb{D}^{*}\), the closure of the orbit of a point in \(X_{t}\) under \((f_{\mathcal{Y}},g_{n})\) is a finite union of translates of \(Y_{t}\).
Let us state the following direct consequence of the previous result.
**Corollary G**.: _Suppose \(f\colon\mathcal{X}\dashrightarrow\mathcal{X}\) is a family of translations on a family of polarized abelian varieties \(\pi\colon\mathcal{X}\to B\) where \(B\) is a projective curve. If \(X_{t}\) is simple for some \(t\) and \(\deg(f^{n})\to\infty\), then the orbit of any point in a generic fiber \(X_{t}\) is dense for the euclidean topology._
Any automorphism \(f\colon\mathcal{X}\to\mathcal{X}\) on a projective surface with \(\deg(f^{n})\asymp n^{2}\) is a family of translations on a family of elliptic curves, by Gizatullin's theorem, see [1, 1, 1]. The previous result says that the orbit of \(f\) on a generic fiber is dense, extending [1, Proposition 2.2] to any projective surface.
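A concrete example to keep in mind is the automorphism

\[f(x,y)=(x+y,\,y)\qquad\text{of }X=E\times E,\]

for an elliptic curve \(E\): it preserves the second projection, satisfies \(\deg(f^{n})\asymp n^{2}\), and acts on the fiber over \(y\) as the translation by \(y\). The orbit closure of any point of that fiber is then dense as soon as \(y\) generates a dense subgroup of \(E\), which holds for all \(y\) outside countably many proper closed subgroups.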
Our strategy for the proof of Theorem F can be described as follows. The euclidean closure of the orbit of a point in the fiber \(X_{t}\) under a translation is a real torus that contains a translate of a subabelian variety of maximal dimension \(A_{t}\). For \(t\) generic, the dimension of \(A_{t}\) remains the same and \(A_{t}\) forms a family of subabelian varieties. By Poincare's irreducibility theorem, up to isogeny, we can split \(\mathcal{X}\) as a product of two families of abelian varieties. On one factor, the orbits are dense for the euclidean topology. On the other factor, the closure of the orbits are totally real. Exploiting the fact that holomorphic maps taking values in totally real submanifolds are necessarily constant, we show that in this situation translations can be approximated uniformly over the base by translations of finite order.
**Further remarks**. Various generalizations of Theorem A can be investigated. First we may consider bimeromorphic maps on \(\mathcal{X}\) such that the action on the base has infinite order. Note that in that case, the family of abelian varieties needs to be isotrivial. We may also replace \(\mathbb{D}\) by a higher dimensional base \(\mathbb{D}^{k}\), \(k\geq 2\) and ask whether regularizability along any germ of analytic disk implies regularization over \(\mathbb{D}^{k}\).
Finally we may consider algebraic versions of our results. Let \(R\) be a discrete valuation ring, and denote by \(K\) its fraction field. Let \(A\) be an abelian variety over \(K\), and \(f\colon A\to A\) be any automorphism. Observe that Neron models exist in this degree of generality by [1]. We may thus ask the following question. Suppose that the action of \(f\) on the first etale cohomology group of \(A\) has no eigenvalue equal to a root of unity. Is it true that the Neron model of \(A\) is proper if and only if there exists a proper model of \(A\) over \(\operatorname{Spec}R\) to which \(f\) extends as an automorphism?
**Plan of the paper**. In §2, we briefly review the basic properties of dynamical degrees, and prove Propositions 2.3, 2.6 and 2.10 which are the keys to the proof of Theorem A. We conclude this part by discussing general properties of regularizable maps.
We discuss in §3 the geometry of semi-abelian varieties and generalize the computation of dynamical degrees of morphisms on toric varieties done in [10, 11] to semi-abelian varieties.
We briefly recapitulate in §4 the notion of Neron model, and then give the proof of our characterizations of regularizable maps on families of abelian varieties (Theorems A and C).
Section 5 can be read independently from the rest of the paper. It aims at describing interesting examples of families of automorphisms on polarized abelian varieties, and at classifying (up to isogeny and base change) such families in dimension \(\leq 5\). The material here is standard, and goes back to the work of Shimura [14].
Finally, we devote Section 6 to families of translations of abelian varieties, proving Theorems E and F. The crucial ingredient is the construction (due to Mumford) of an adequate compactification of the Neron model, that we recall in §6.1 following closely [15].
## 2. Growth of degrees of algebraic maps on quasi-projective varieties
In this section, we extend the notion of dynamical degrees to any rational self-map of a quasi-projective variety, and collect a couple of results on the behaviour of dynamical degrees under restriction.
Given any two sequences of positive real numbers \(u_{n},v_{n}\), we write \(u_{n}\asymp v_{n}\) (resp. \(u_{n}\lesssim v_{n}\)) if and only if there exists a constant \(C>1\) such that \(C^{-1}v_{n}\leq u_{n}\leq Cv_{n}\) (resp. \(u_{n}\leq Cv_{n}\)).
### Dynamical degrees
Let \(X\) be a (possibly singular) quasi-projective complex variety of dimension \(N\), and \(f\colon X\dashrightarrow X\) be any dominant rational self-map.
Let \(\overline{X}\) be any projective variety containing \(X\) as a Zariski dense subset. Fix any ample line bundle \(L\to\overline{X}\). Denote by \(\bar{f}\) the unique dominant rational self-map of \(\bar{X}\) whose restriction to \(X\) equals \(f\). This map is determined by its graph \(\Gamma\subset\bar{X}\,\times\bar{X}\) which is an irreducible variety such that the first projection \(\pi_{1}\colon\Gamma\to\bar{X}\) is a birational morphism, and the second projection \(\pi_{2}\colon\Gamma\to\bar{X}\) is surjective, and \(f(x)=\pi_{2}(\pi_{1}^{-1}(x))\) for any \(x\) off the image under \(\pi_{1}\) of its exceptional locus. For any \(i\in\{0,\cdots,N\}\), we set
\[\deg_{L,i}(\bar{f})=\pi_{2}^{*}(c_{1}(L))^{i}\wedge\pi_{1}^{*}(c_{1}(L))^{N-i }\in\mathbb{N}^{*}.\]
It was proven by Dinh and Sibony [10] (see also [14] for an argument working in arbitrary characteristic, and [1] for an estimate on the optimal constants) that, given any birational map \(\varphi\colon\bar{X}_{1}\dashrightarrow\bar{X}\) between projective varieties and any ample line bundle \(L_{1}\to\bar{X}_{1}\), there exists a constant \(C>1\) (depending only on \(\varphi,L\) and \(L_{1}\)) such that
\[C^{-1}\deg_{L_{1},i}(f_{1})\leq\deg_{L,i}(\bar{f})\leq C\deg_{L_{1},i}(f_{1}) \tag{2.1}\]
where \(f_{1}:=\varphi^{-1}\circ\bar{f}\circ\varphi\). In particular, the growth rate of the sequence \(\{\deg_{L,i}(\bar{f}^{n})\}_{n\in\mathbb{N}}\) depends neither on the compactification of \(X\), nor on the choice of \(L\). To simplify notation, we shall thus write \(\deg_{i}(f^{n})\) instead of \(\deg_{L,i}(\bar{f}^{n})\). Beware that this sequence is only defined up to bounded uniform constants.
Dinh and Sibony further proved the existence of a positive constant \(C>0\) such that \(C\deg_{i}(f^{n})\) is submultiplicative. We may thus define the \(i\)-th dynamical degree by setting:
\[\lambda_{i}(f):=\lim_{n\to\infty}\deg_{i}(f^{n})^{1/n}\in[1,+\infty[\.\]
By (2.1), \(\lambda_{i}(f)\) is invariant under birational conjugacy. Note that when \(f\) is a birational automorphism, then \(\lambda_{0}(f)=\lambda_{N}(f)=1\). In general, Khovanskii-Teissier's inequalities imply \(\lambda_{i-1}(f)\lambda_{i+1}(f)\geq\lambda_{i}(f)^{2}\) so that the dynamical degrees are log-concave. It follows that \(\lambda_{i}(f)=1\) for all \(i\) if and only if \(\lambda_{1}(f)=1\) (in which case \(f\) is necessarily birational).
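For instance, the monomial map

\[f(x,y)=(xy,\,x)\qquad\text{on }(\mathbb{C}^{*})^{2},\]

viewed as a birational self-map of any toric compactification, is a standard example: it is birational, so \(\lambda_{0}(f)=\lambda_{2}(f)=1\), while its first dynamical degree is known to equal the spectral radius \(\lambda_{1}(f)=\frac{1+\sqrt{5}}{2}\) of its matrix of exponents \(\begin{pmatrix}1&1\\ 1&0\end{pmatrix}\), in accordance with log-concavity.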
Suppose now that \(X\) is smooth, and take \(\bar{X}\) a smooth projective variety as above, together with an ample line bundle \(L\to\bar{X}\). We may endow \(L\) with a positive hermitian form so that its curvature form \(\omega\) is a Kahler form on \(\bar{X}\). If \(f\) is moreover regular, then we may compute the degrees as an integral:
\[\deg_{i}(f)=\int_{X}f^{*}(\omega^{i})\wedge\omega^{N-i}\.\]
One can also compute the degrees directly by looking at the induced linear action on Dolbeault cohomology \(f_{i}^{*}\colon H^{i,i}(\bar{X})\to H^{i,i}(\bar{X})\). For any given norm on \(H^{i,i}(\bar{X})\), there exists a positive constant \(C>1\) such that
\[C^{-1}\left\|f_{i}^{*}\right\|\leq\deg_{i}(f)\leq C\left\|f_{i}^{*}\right\|. \tag{2.2}\]
One can alternatively work on the real subspace \(\operatorname{N}_{\mathbb{R}}^{i}(\bar{X})\) of \(H^{i,i}(\bar{X})\) spanned by fundamental classes of subvarieties of codimension \(i\). Again, we have \(C^{-1}\,\|f_{i}^{*}|_{\operatorname{N}_{\mathbb{R}}^{i}(\bar{X})}\|\leq\deg_{i}(f)\leq C\,\|f_{i}^{*}|_{\operatorname{N}_{\mathbb{R}}^{i}(\bar{X})}\|\) for some uniform constant \(C>1\). When \(f\) is a regular map, this implies \(\lambda_{i}(f)=\rho(f_{i}^{*})=\rho(f_{i}^{*}|_{\operatorname{N}_{\mathbb{R}}^{i}(\bar{X})})\), where \(\rho(u)\) denotes the spectral radius of a linear map \(u\).
We refer to [13] for an interpretation of dynamical degrees in terms of spectral radii of bounded operators on suitable Banach spaces.
### Degree growth and restrictions
In this section, we prove three results that allow one to compare the dynamical degrees of an automorphism with the dynamical degrees of its restriction to a subvariety. These results play a key role in the proof of Theorem A.
**Proposition 2.3**.: _Let \(\pi\colon\mathcal{X}\to\mathbb{D}\) be a flat projective morphism of relative dimension \(N\), such that \(X_{t}=\pi^{-1}(t)\) is connected for \(t\neq 0\). Suppose that \(f\colon\mathcal{X}\to\mathcal{X}\) is a biholomorphism such that \(\pi\circ f=\pi\) and \(f(E)=E\) for any irreducible component \(E\) of \(\mathcal{X}_{0}\)._
_Then for any \(0\leqslant i\leqslant N\),_
\[\deg_{i}(f^{n}|_{X_{t}})\asymp\max_{E}\deg_{i}(f^{n}|_{E})\,\]
_where \(E\) ranges over all irreducible components \(E\) of \(\mathcal{X}_{0}\)._
**Remark 2.4**.: It is crucial for \(f\) to be holomorphic on \(\mathcal{X}\). An example of a bimeromorphic map \(f\colon\mathcal{X}\dashrightarrow\mathcal{X}\), where \(\pi\colon\mathcal{X}\to\mathbb{D}\) is a smooth family of K3 surfaces, \(\operatorname{Ind}(f)\subset X_{0}\), and \(\lambda_{1}(f_{0})<\lambda_{1}(f_{t})\) for all \(t\neq 0\), is given in [17].
Proof.: Let \(\mathsf{L}\to\mathcal{X}\) be any relatively ample line bundle. We have \(\deg_{i}(f^{n}|_{X_{t}})=(f^{n*}c_{1}(\mathsf{L}))^{i}\cdot c_{1}(\mathsf{L}) ^{N-i}\cdot[X_{t}]\). As \([X_{t}]\) is cohomologous to the divisor \(\pi^{-1}(0)=\sum b_{E}[E]\), we get
\[\deg_{i}(f^{n}|_{X_{t}})=\sum_{E}b_{E}\deg_{i}(f^{n}|_{E}). \tag{2.5}\]
Since \(f\colon X_{t}\to X_{t}\) is regular, by (2.2) there exists an integer \(\delta\) such that \(\deg_{i}(f^{n}|_{X_{t}})\asymp n^{\delta}\lambda_{i}(f|_{X_{t}})^{n}\). Similarly, for each irreducible component \(E\) of \(\mathcal{X}_{0}\), we may find an integer \(\delta_{E}\) such that \(\deg_{i}(f^{n}|_{E})\asymp n^{\delta_{E}}\lambda_{i}(f|_{E})^{n}\). The result follows from (2.5).
Suppose \(f\colon X\dashrightarrow Y\) is a birational map between smooth projective varieties. Recall that \(\operatorname{Ind}(f)\) denotes the indeterminacy locus of \(f\), and is a subvariety of codimension at least 2. The set of points \(p\notin\operatorname{Ind}(f)\) at which \(f\) is not a local biholomorphism is a divisor by the jacobian criterion which coincides with the exceptional locus defined in the introduction and denoted by \(\operatorname{Exc}(f)\).
We define the proper transform of an irreducible subvariety \(Z\subset X\) such that \(Z\not\subset\operatorname{Ind}(f)\) by setting \(f^{\vee}(Z):=\overline{f(Z\setminus\operatorname{Ind}(f))}\). It is also an irreducible subvariety, of the same dimension as \(Z\) when \(Z\not\subset\operatorname{Exc}(f)\).
The total transform of a closed analytic subset \(Z\subset X\) is defined as \(f(Z):=\pi_{2}(\pi_{1}^{-1}(Z))\) where \(\pi_{1}\colon\Gamma\to X\) and \(\pi_{2}\colon\Gamma\to Y\) are the canonical projections of the graph of \(f\) onto \(X\) and \(Y\) respectively. In general \(f(Z)\) strictly contains the proper transform \(f^{\vee}(Z)\), and \(p\in\operatorname{Ind}(f)\) if and only if \(f(p)\) has positive dimension.
**Proposition 2.6**.: _Let \(X,Y\) be smooth projective varieties and \(\alpha\colon Y\dashrightarrow X\) be a birational map. Let \(f\colon X\dashrightarrow X\) be a birational self-map and \(E\subset Y\) be an irreducible hypersurface such that:_
1. \(f^{\vee}_{Y}(E)=E\)_, where_ \(f_{Y}=\alpha^{-1}\circ f\circ\alpha\in\operatorname{Bir}(Y)\)_;_
2. _and_ \(Z=\alpha^{\vee}(E)\) _is not included in_ \(\operatorname{Ind}(f^{-1})\)_._
_Then \(Z\) is not included in \(\operatorname{Ind}(f)\), \(f^{\vee}(Z)=Z\), and we have:_
\[\lambda_{i}(f_{Y}|_{E})=\max_{\max\{0,i-c+1\}\leqslant j\leqslant\min\{i,N-c \}}\lambda_{j}(f|_{Z})\, \tag{2.7}\]
_where \(N=\dim(X)\) and \(c=\operatorname{codim}(Z)\)._
_Moreover there exists a positive constant \(C>1\) such that_
\[C^{-1}\ \deg_{1}(f^{n}|_{Z})\leq\deg_{1}(f^{n}_{Y}|_{E})\leq Cn\times\ \deg_{1}(f^{n}|_{Z}) \tag{2.8}\]
Observe that (1) requires in particular that \(E\) is not a component of \(\operatorname{Exc}(f_{Y})\). Note also that the result follows from the invariance of dynamical degrees under birational conjugacy when \(E\) is not included in \(\operatorname{Exc}(\alpha)\).
**Example 2.9**.: Consider the map \(f(x,y,z)=(x,zy,z)\) in \(\mathbb{C}^{3}\). It defines a birational self-map of \(Y:=(\mathbb{P}^{1}_{\mathbb{C}})^{3}\) which fixes pointwise the rational curve \(Z=\{x=y=0\}\) so that \(\deg_{1}(f^{n}|_{Z})=1\) for all \(n\).
Let \(\mu\colon X\to Y\) be the blow-up along \(Z\). Then \(E=\mu^{-1}(Z)\) is isomorphic to \((\mathbb{P}^{1}_{\mathbb{C}})^{2}\), and in the coordinates \(x,z\) the lift of \(f\) to \(E\) can be written as follows \(f(x,z)=(x/z,z)\) so that \(\deg_{1}(f^{n}|E)\asymp n\).
Proof.: We prove first that \(Z\not\subset\operatorname{Ind}(f)\). Since \(f^{\vee}_{Y}(E)=E\) is a divisor, we may find a point \(p\in E\setminus\operatorname{Ind}(\alpha)\cup\operatorname{Ind}(f_{Y})\cup \operatorname{Exc}(f_{Y})\) such that \(f_{Y}(p)\notin\operatorname{Ind}(\alpha)\cup\operatorname{Ind}(f_{Y})\cup \operatorname{Exc}(f_{Y})\). The total image of \(\alpha(p)\) under \(f\) is included in \(\alpha(f_{Y}(p))\) which has dimension 0. Hence \(\alpha(p)\notin\operatorname{Ind}(f)\), and \(f(\alpha(p))=\alpha(f_{Y}(p))\). It follows that \(f^{\vee}(Z)\subset Z\). Since \(E\) is irreducible, \(Z\) is irreducible too, and \(f^{\vee}_{Y}(E)=E\) implies \(f^{\vee}(Z)=Z\).
Next we prove (2.7). Recall that dynamical degrees are invariant under birational conjugacy. By Hironaka's resolution of singularities, we may thus suppose that \(Z\) is smooth and that \(\alpha\) is a regular map for which the sheaf of ideals defining \(Z\) lifts to a locally principal ideal sheaf. By the universal property of blow-ups, we may thus write \(\alpha=\operatorname{bl}_{Z}\circ\beta\), where \(\operatorname{bl}_{Z}\colon Y^{\prime}\to X\) is the blow-up of \(Z\) with exceptional locus \(E_{Z}\), and \(\beta\colon Y\to Y^{\prime}\) is a birational map such that \(\beta(\operatorname{Exc}(\beta))\) does not contain \(E_{Z}\). Note that \(\lambda_{i}(f_{Y}|_{E})=\lambda_{i}(f_{Y^{\prime}}|_{E_{Z}})\) where \(f_{Y^{\prime}}\colon Y^{\prime}\dashrightarrow Y^{\prime}\) is the lift of \(f\) to \(Y^{\prime}\).
We abuse notation and write \(\operatorname{bl}_{Z}\colon E_{Z}\to Z\) for the restriction of \(\operatorname{bl}_{Z}\) to \(E_{Z}\). The morphism \(\operatorname{bl}_{Z}\colon E_{Z}\to Z\) is then the projectivization of the normal bundle \(N_{Z/X}\to Z\). The relative dimension of \(\operatorname{bl}_{Z}\) is equal to \(c-1\), and the dynamical degrees can be computed using the product formula of Dinh-Nguyen [11, Theorem 1.1]:
\[\lambda_{i}(f_{Y^{\prime}}|_{E_{Z}})=\max_{\max\{0,i-c+1\}\leqslant j\leqslant \min\{i,N-c\}}\lambda_{j}(f|_{Z})\lambda_{i-j}(f_{Y^{\prime}}|\operatorname{bl }_{Z})\,\]
where
\[\lambda_{j}(f_{Y^{\prime}}|\operatorname{bl}_{Z})=\lim_{n\to\infty}\left(f_{Y^{\prime}}^{n*}c_{1}(L)^{j}\cdot c_{1}(L)^{c-1-j}\cdot[\operatorname{bl}_{Z}^{-1}(t)]\right)^{1/n}\,\]
for a generic \(t\in Z\) (here we abuse notation and choose an arbitrary ample line bundle \(L\to Y\)). The existence of the limit is justified in op. cit. (see also [11]).
We claim that \(\lambda_{j}(f_{Y^{\prime}}|\operatorname{bl}_{Z})=1\) for all \(j\). Grant this claim. Then, we have
\[\lambda_{i}(f_{Y^{\prime}}|_{E_{Z}})=\max_{\max\{0,i-c+1\}\leqslant j\leqslant \min\{i,N-c\}}\lambda_{j}(f|_{Z})\,\]
and the formula (2.7) follows.
It remains to prove the claim. We argue using complex analytic arguments. Note that since \(f_{Y}^{\vee}(E)=E\), \(f_{Y}\) is a local biholomorphism at a generic point in \(E\). It follows that \(f\) is also a local biholomorphism at a generic point \(p\in Z\). The map \(f_{Y^{\prime}}\colon E_{Z}\dashrightarrow E_{Z}\) hence maps the fiber \(\operatorname{bl}_{Z}^{-1}(p)\simeq\mathbb{P}_{\mathbb{C}}^{c-1}\) to \(\operatorname{bl}_{Z}^{-1}(f(p))\) and has degree \(1\). This implies
\[f_{Y^{\prime}}^{n*}c_{1}(L)^{j}\cdot c_{1}(L)^{c-1-j}\cdot[\operatorname{bl}_{Z}^{-1}(t)]=c_{1}(L)^{c-1}\cdot[\operatorname{bl}_{Z}^{-1}(t)]\]
concluding the proof that \(\lambda_{j}(f_{Y^{\prime}}|\operatorname{bl}_{Z})=1\).
We finally prove (2.8). To that end, it suffices to show
\[C^{-1}\ \deg_{1}(f_{Y}^{n}|_{Z})\leq\deg_{1}(f_{Y^{\prime}}^{n}|_{E_{Z}})\leq Cn \times\deg_{1}(f_{Y}^{n}|_{Z})\]
for some constant \(C>1\). Observe that the Neron-Severi space \(\operatorname{N}_{\mathbb{R}}^{1}(E_{Z})\) equals the direct sum \(\operatorname{bl}_{Z}^{*}\operatorname{N}_{\mathbb{R}}^{1}(Z)\oplus\mathbb{R}\xi\) where \(\xi\) is the first Chern class of the tautological line bundle (whose restriction to each fiber of \(\operatorname{bl}_{Z}\) is \(O(1)\)). Note that we have the following commutative diagram:
No hypersurface in \(E_{Z}\) is contracted by \(\operatorname{bl}_{Z}\) into the indeterminacy locus of \(f_{Y}\) since \(\operatorname{bl}_{Z}\) is a fibration and \(\operatorname{codim}(\operatorname{Ind}(f_{Y}))\geq 2\). It follows that for any class \(\omega\in\operatorname{N}_{\mathbb{R}}^{1}(Z)\) we have \((f_{Y}\circ\operatorname{bl}_{Z})^{*}\omega=\operatorname{bl}_{Z}^{*}(f_{Y}^{*} (\omega))\), see, e.g., [10, Lemma 2.3]. In a similar way, \(\operatorname{bl}_{Z}\) is regular hence \((\operatorname{bl}_{Z}\circ f_{Y^{\prime}})^{*}\omega=f_{Y^{\prime}}^{*}( \operatorname{bl}_{Z}^{*}(\omega))\), and we conclude that \(f_{Y^{\prime}}^{*}(\operatorname{bl}_{Z}^{*}(\omega))=\operatorname{bl}_{Z}^{* }(f_{Y}^{*}(\omega))\) for any \(\omega\in\operatorname{N}_{\mathbb{R}}^{1}(Z)\). Note that this already implies:
\[\deg_{1}(f_{Y^{\prime}}^{n}|_{E_{Z}})\asymp\rho\left(f_{Y^{\prime}}^{n*}\colon\operatorname{N}_{\mathbb{R}}^{1}(E_{Z})\to\operatorname{N}_{\mathbb{R}}^{1}(E_{Z})\right)\geq\rho\left(f_{Y}^{n*}\colon\operatorname{N}_{\mathbb{R}}^{1}(Z)\to\operatorname{N}_{\mathbb{R}}^{1}(Z)\right)\asymp\deg_{1}(f_{Y}^{n}|_{Z}).\]
Suppose first \(f_{Y}|_{Z}\) is algebraically stable (in the sense that no hypersurface is contracted by \(f_{Y}|_{Z}\) to a subvariety that is eventually mapped to \(\operatorname{Ind}(f_{Y}|_{Z})\)). Write \(f_{Y^{\prime}}^{*}\xi=\xi+\operatorname{bl}_{Z}^{*}(\omega_{*})\) for some \(\omega_{*}\in\operatorname{NS}_{\mathbb{R}}(Z)\). A repeated use of [10, Lemma 2.3] implies that for each integer \(n\), we have \(f_{Y^{\prime}}^{n*}\omega=(f_{Y}^{*})^{n}\omega\). Note that \(f_{Y^{\prime}}|_{E_{Z}}\) is then also algebraically stable, hence for any class \(\omega\in\operatorname{N}_{\mathbb{R}}^{1}(Z)\), and for any \(t\in\mathbb{R}\), we have
\[f_{Y^{\prime}}^{n*}(\operatorname{bl}_{Z}^{*}\omega+t\xi)=(f_{Y^{\prime}}^{*})^{n}(\operatorname{bl}_{Z}^{*}\omega+t\xi)=\operatorname{bl}_{Z}^{*}\bigl((f_{Y}^{*})^{n}\omega\bigr)+t\xi+t\sum_{j=0}^{n-1}\operatorname{bl}_{Z}^{*}\bigl((f_{Y}^{*})^{j}\omega_{*}\bigr)\]
and we conclude that \(\|f_{Y^{\prime}}^{n*}(\operatorname{bl}_{Z}^{*}\omega+t\xi)\|\leq Cn\times\rho \left(f_{Y}^{n*}\colon\operatorname{N}_{\mathbb{R}}^{1}(Z)\to\operatorname{N}_{ \mathbb{R}}^{1}(Z)\right)\).
In the general case, we proceed as follows. We first choose a sequence of proper birational morphisms \(\mu_{n}\colon Z^{(n)}\to Z\) such that the birational maps
\[\mu_{n}^{-1}\circ\mu_{n+1}\colon Z^{(n+1)}\to Z^{(n)}\text{ and }\ F_{Y}^{(n+1)}:=\mu_{n}^{-1}\circ f_{Y}\circ\mu_{n+1}\colon Z^{(n+1)} \to Z^{(n)}\]
are both regular. To simplify notation, we write \(f^{n}_{Y}=F^{(1)}_{Y}\circ\cdots\circ F^{(n-1)}_{Y}\), and \(M_{n}:=\mu_{1}\circ\cdots\circ\mu_{n-1}\). Pick any ample class \(\omega\in\operatorname{NS}_{\mathbb{R}}(Z)\). Then \(\deg_{1}(f^{n}_{Y})=F^{n*}_{Y}\omega\cdot M^{*}_{n}(\omega^{N-c-1})\). Since \(\operatorname{bl}_{Z}\) is the projectivization of a vector bundle \(V\), we have a commutative square:
where \(\hat{\mu}_{n}\) is also birational and \(\operatorname{bl}^{(n)}\) is the projectivization of \(M^{*}_{n}V\). Write \(\hat{M}_{n}:=\hat{\mu}_{n-1}\circ\cdots\circ\hat{\mu}_{1}\). Observe that the following diagram is commutative:
Pick any small \(t\) such that the class \(\omega_{+}=\operatorname{bl}^{*}_{Z}(\omega)+t\xi\) is ample. We have
\[\deg_{1}(f^{n}_{Y^{\prime}})=F^{n*}_{Y^{\prime}}\omega_{+}\cdot\hat{M}^{*}_{n}(\omega_{+}^{N-1})\] \[=F^{n*}_{Y^{\prime}}\operatorname{bl}^{*}_{Z}(\omega)\cdot\hat{M}^{*}_{n}(\omega_{+}^{N-1})+tF^{n*}_{Y^{\prime}}\xi\cdot\hat{M}^{*}_{n}(\omega_{+}^{N-1})\] \[=(\operatorname{bl}^{(n)})^{*}F^{n*}_{Y}(\omega)\cdot\hat{M}^{*}_{n}(\omega_{+}^{N-1})+tF^{n*}_{Y^{\prime}}\xi\cdot\hat{M}^{*}_{n}(\omega_{+}^{N-1})\leq\deg_{1}(f^{n}_{Y})+tF^{n*}_{Y^{\prime}}\xi\cdot\hat{M}^{*}_{n}(\omega_{+}^{N-1})\]
The fact that the restriction of \(f\) to the generic fiber of \(\operatorname{bl}_{Z}\) has degree \(1\) implies that we may write \(F^{*}_{Y^{\prime}}\xi=\hat{\mu}_{1}^{*}\xi+\operatorname{bl}^{(1)*}(\beta_{ \star})\) for some \(\beta_{\star}\in\operatorname{N}_{\mathbb{R}}^{1}(Z^{(1)})\). And we obtain:
\[F^{n*}_{Y^{\prime}}\xi=\hat{M}^{*}_{n}\xi+\sum_{j=0}^{n-1}\operatorname{bl}^{( n-j)*}(\mu_{n}^{*}\cdots\mu_{j+1}^{*})F^{(j)*}_{Y}\beta_{\star}\]
Since \(|F^{(j)*}_{Y}\beta_{\star}\cdot M^{*}_{j}\omega^{N-1}|\) is bounded by \(\deg_{1}(f^{n})\) up to a uniform constant, we obtain \(F^{n*}_{Y^{\prime}}\xi\cdot\hat{M}^{*}_{n}(\omega_{+}^{N-1})=O(n\deg_{1}(f^{n}))\) as required.
**Proposition 2.10**.: _Let \(X\) be a smooth projective variety and let \(f\colon X\dashrightarrow X\) be a birational map. If \(Z\subset X\) is a codimension \(c\) subvariety such that \(Z\not\subset\operatorname{Ind}(f)\) and \(f^{\vee}(Z)=Z\) then for all \(0\leqslant k\leqslant\dim(Z)\), there exists a constant \(C>1\) such that \(\deg_{k}(f^{n}|_{Z})\leqslant C\,\min\{\deg_{k}(f^{n}),\deg_{k+c}(f^{n})\}\). In particular, we have_
\[\lambda_{k}(f|_{Z})\leqslant\min\{\lambda_{k}(f),\lambda_{k+c}(f)\}.\]
Proof.: By Hironaka's resolution of singularities and birational invariance of degrees up to uniform constants, we may suppose that \(Z\) is smooth. Fix an ample line bundle \(L\to X\).
Recall the definition of basepoint free numerical classes from Fulger and Lehmann [17, §5]. Suffice it to say that any basepoint free class in \(\operatorname{N}_{\mathbb{R}}^{i}(X)\) is both pseudo-effective and nef, and that the cone of basepoint free classes has non-empty interior. Moreover, for any basepoint free class \(\Omega\in\operatorname{N}_{\mathbb{R}}^{i}(X)\), there exists a constant \(C>0\) such that \(\Omega\leq Cc_{1}(L)^{i}\).
It follows that the fundamental class \([Z]\) can be written as a difference of two basepoint free classes \([Z]=\Omega_{1}-\Omega_{2}\) with \(\Omega_{1},\Omega_{2}\in\operatorname{N}_{\mathbb{R}}^{c}(X)\), and we have
\[\deg_{k}(f^{n}|_{Z})=f^{n*}c_{1}(L)^{k}\cdot c_{1}(L)^{N-c-k}\cdot[Z]\] \[=f^{n*}c_{1}(L)^{k}\cdot c_{1}(L)^{N-c-k}\cdot\Omega_{1}-f^{n*}c_{1}(L)^{k}\cdot c_{1}(L)^{N-c-k}\cdot\Omega_{2}\]
Beware that the class \(f^{n*}c_{1}(L)^{k}\) is only pseudo-effective and not nef in general. To get around that difficulty, we consider for each \(n\) a resolution of the graph \(\Gamma_{n}\) of \(f^{n}\) with projection maps \(\pi_{1,n},\pi_{2,n}\colon\Gamma_{n}\to X\) so that \(f^{n}=\pi_{2,n}\circ\pi_{1,n}^{-1}\). Then for \(\epsilon\in\{1,2\}\), we get
\[f^{n*}c_{1}(L)^{k}\cdot c_{1}(L)^{N-c-k}\cdot\Omega_{\epsilon}=\pi_{2,n}^{*}c_{ 1}(L)^{k}\cdot\pi_{1}^{*}c_{1}(L)^{N-c-k}\cdot\pi_{1}^{*}\Omega_{\epsilon}\leq C \,\left(\pi_{2,n}^{*}c_{1}(L)^{k}\cdot\pi_{1}^{*}c_{1}(L)^{N-k}\right)\]
and \(\deg_{k}(f^{n}|_{Z})\leq C\,\deg_{k}(f^{n})\).
Denote by \(\Gamma_{n,Z}\) the closure of the graph of \(f^{n}|_{Z\setminus\operatorname{Ind}(f)}\) in \(\Gamma_{n}\). Let \(W\) be any complete intersection subvariety of codimension \(k\) in \(Z\) whose fundamental class is equal to \(c_{1}(L|_{Z})^{k}\). Since \(f^{\vee}(Z)=Z\), and \(f\) is birational, we may choose \(W\) so that it intersects properly each component of the locus \(V_{l}:=\{p\in X,\dim\pi_{2,n}^{-1}(p)\geq l\}\) for all \(l\geq 1\) and all \(n\geq 0\). Note that \(W\) defines two fundamental classes: \([W]_{Z}=c_{1}(L|_{Z})^{k}\in\operatorname{N}^{k}(Z)\), and \([W]_{X}=c_{1}(L)^{k}\cdot[Z]\in\operatorname{N}^{k+c}(X)\).
By [10, Lemma 4.2], the proper transform \((\pi_{2,n}^{-1})^{\vee}(W)\) (resp. \((\pi_{2,n}^{-1})^{\vee}(W)\cap\Gamma_{n,Z}\)) represents \(\pi_{2,n}^{*}[W]_{X}\) (resp. \(\pi_{2,n}^{*}[W]_{Z}\)). In particular, the class \(\pi_{2,n}^{*}[W]_{X}-\pi_{2,n}^{*}[W]_{Z}\) is represented by an effective cycle. We now compute
\[\deg_{k}(f^{n}|Z) =[(\pi_{2,n}^{-1})^{\vee}(W)\cap\Gamma_{n,Z}]\cdot\pi_{1,n}^{*}c_ {1}(L)^{N-c-k}\] \[\leq[(\pi_{2,n}^{-1})^{\vee}(W)]\cdot\pi_{1,n}^{*}c_{1}(L)^{N-c-k}\] \[=f^{n*}[W]\cdot c_{1}(L)^{N-c-k}=\deg_{k+c}(f^{n})\]
This concludes the proof.
### Regularization of bimeromorphic mappings
A bimeromorphic self-map \(f\colon X\dashrightarrow X\) of a normal complex variety is said to be regularizable iff there exists a proper bimeromorphic map \(\varphi\colon X\dashrightarrow Y\) such that \(\varphi\circ f\circ\varphi^{-1}\colon Y\to Y\) is a biholomorphism.
We collect here two observations on this notion.
**Proposition 2.11**.: _Suppose the bimeromorphic self-map \(f\colon X\dashrightarrow X\) is regularizable. Then there exist a smooth manifold \(Y\) and a proper bimeromorphic map \(\varphi\colon X\dashrightarrow Y\) such that \(\varphi\circ f\circ\varphi^{-1}\colon Y\to Y\) is a biholomorphism._
Proof.: This is a consequence of the existence of a functorial resolution of singularities in the category of analytic spaces, see [20, Theorem 2.0.1].
**Proposition 2.12**.: _Let \(f\colon X\dashrightarrow X\) be any bimeromorphic self-map of a normal complex variety. If \(f^{n}\) is regularizable for some \(n\in\mathbb{N}^{*}\), then \(f\) is also regularizable._
Proof.: Since regularizability is invariant under bimeromorphic conjugation, we may replace \(X\) by a model on which \(f^{n}\) becomes a biholomorphism, and thus assume that \(f^{n}\) is regular. Let \(U\) be an open (for the analytic Zariski topology) dense subset in \(X\) such that the restrictions of \(f,\cdots,f^{n-1}\) are all regular on \(U\). Define \(\Gamma\) to be the closure (for the euclidean topology) of the set of points \(\{(x,f(x),\cdots,f^{n-1}(x)),\,x\in U\}\subset X^{n}\). The first projection \(\pi_{1}\colon\Gamma\to X\) is a proper bimeromorphic map. On \(X^{n}\), consider the biholomorphism \(F(x_{0},\cdots,x_{n-2},x_{n-1})=(x_{1},\cdots,x_{n-1},f^{n}(x_{0}))\). This map preserves \(\Gamma\), and \(\pi_{1}\circ F=f\circ\pi_{1}\). This completes the proof.
## 3. Semi-abelian varieties
In this section we recall some basic facts on the geometry of semi-abelian varieties and compute the degree growth of their automorphisms.
### Geometry of semi-abelian varieties
Our reference for this section is [20, Chapter 5].
A _semi-abelian variety_\(G\) is a connected commutative complex algebraic group fitting into an exact sequence of algebraic groups:
\[1\to T\to G\to A\to 1, \tag{3.1}\]
where \(T\cong\mathbb{G}_{m}^{r}\) is a split algebraic torus and \(A\) is an abelian variety. Observe that \(G\) is projective if and only if it is abelian, and \(G\) is affine if and only if it is an affine torus.
By Chevalley's structure theorem, the torus \(T\) is uniquely determined so that the sequence (3.1) is unique (there might be other exact sequences \(1\to T\to G\to A^{\prime}\to 1\) with \(A^{\prime}\) abelian in the category of complex Lie groups, see [20, §5.1.5]).
Write \(r=\dim(T)\) and \(g=\dim(A)\). Since \(T\), \(G\) and \(A\) are commutative, the exponential maps \(\exp\colon\operatorname{Lie}(T)\to T\), \(\exp\colon\operatorname{Lie}(G)\to G\), and \(\exp\colon\operatorname{Lie}(A)\to A\) have discrete kernels, and they fit into a commutative diagram with exact rows induced by (3.1).
We write \(V:=\operatorname{Lie}(G)\simeq\mathbb{C}^{r+g}\), and \(\Lambda:=\ker(\exp_{G})\). The latter is a discrete subgroup of rank \(r+2g\). Observe that \(W:=\operatorname{Lie}(T)\simeq\mathbb{C}^{r}\) is a subspace of \(V\) such that \(\Lambda_{W}:=\Lambda\cap W\) is a discrete subgroup of rank \(r\) that generates \(W\) as a complex vector space. Write \(\Lambda_{\mathbb{R}}:=\Lambda\otimes_{\mathbb{Z}}\mathbb{R}\), and consider the complex vector space
\[U:=\Lambda_{\mathbb{R}}\cap i\Lambda_{\mathbb{R}}\subset V.\]
Then \(U\) is a complex vector subspace of \(V\), and a dimension argument implies \(V=W\oplus U\).
We have thus obtained:
**Proposition 3.2**.: _To any semi-abelian variety \(G\) is associated a complex vector space \(V\) of dimension \(r+g\), a discrete subgroup \(\Lambda\) of rank \(r+2g\), and a splitting \(V=W\oplus U\) of complex vector spaces such that_
* \(\dim(W)=r\)_,_ \(\dim(U)=g\)_;_
* \(\Lambda_{W}:=\Lambda\cap W\) _has rank_ \(r\) _and generates_ \(W\) _as a complex vector space;_
* \(U=\Lambda_{\mathbb{R}}\cap i\Lambda_{\mathbb{R}}\)_;_
* _the image of_ \(\Lambda\) _in_ \(V/W\) _is a cocompact lattice_ \(\bar{\Lambda}\) _and_ \((V/W)/\bar{\Lambda}\) _is an abelian variety of dimension_ \(g\)_;_
* _the exact sequence_ \(1\to W/\Lambda_{W}\to G\to(V/W)/\bar{\Lambda}\to 1\) _is the defining sequence (_3.1_) canonically attached to_ \(G\)_._
_The quadruple \((V,W,U,\Lambda)\) is unique up to isomorphism._
A quadruple \((V,W,U,\Lambda)\) as above is called a presentation of the semi-abelian variety \(G\).
**Remark 3.3**.: The intersection \(\Lambda\cap U\) is a discrete subgroup of \(U\) whose rank can be any integer between \(0\) and \(2g\). Observe that \(G\) is a product (as an algebraic group) if and only if this rank is equal to \(2g\).
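To illustrate Proposition 3.2 and this remark, one may check the following construction with \(r=g=1\). Take \(V=\mathbb{C}^{2}\), \(W=\mathbb{C}\times\{0\}\), fix \(\tau\) with \(\operatorname{Im}\tau>0\) and \(a,b\in\mathbb{C}\), and set

\[\Lambda=\mathbb{Z}(1,0)\oplus\mathbb{Z}(a,1)\oplus\mathbb{Z}(b,\tau).\]

Then \(\Lambda_{W}=\mathbb{Z}(1,0)\), \(U=\Lambda_{\mathbb{R}}\cap i\Lambda_{\mathbb{R}}=\{0\}\times\mathbb{C}\), the image of \(\Lambda\) in \(V/W\) is the lattice \(\mathbb{Z}+\tau\mathbb{Z}\), and \(G=V/\Lambda\) is an extension of the elliptic curve \(\mathbb{C}/(\mathbb{Z}+\tau\mathbb{Z})\) by \(\mathbb{C}^{*}\). For \(a,b\in\mathbb{Z}\) one recovers the product \(\mathbb{C}^{*}\times E\), whereas \(\Lambda\cap U=0\) as soon as \(1,a,b\) are linearly independent over \(\mathbb{Q}\), in which case \(G\) is not a product.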
### Algebraic compactification of semi-abelian varieties
Let \(G\) be a semi-abelian variety. The exact sequence (3.1) exhibits \(G\) as the total space of a principal \(T\)-bundle over \(A\) (in algebraic terms, \(G\) is a \(T\)-torsor over \(A\)). In terms of a presentation \((V,W,U,\Lambda)\) of \(G\), this can be understood as follows.
The family of affine planes \(z+U\) projects to a smooth foliation on \(G\) that is transverse to the fibers of the canonical projection \(\pi\colon G\to A\), and its holonomy gives rise to a monodromy representation given by a morphism from \(\bar{\Lambda}\) to the group of biholomorphisms of \(T\).
In concrete terms, pick any \(\bar{\lambda}\in\bar{\Lambda}\) and lift it to \(\lambda\in\Lambda\). Write \(\lambda=\lambda_{W}+\lambda_{U}\) with \(\lambda_{W}\in W\) and \(\lambda_{U}\in U\). Observe that in general \(\lambda_{W}\notin\Lambda_{W}\). However, since \(\Lambda_{\mathbb{R}}=\Lambda_{W,\mathbb{R}}\oplus U\), it follows that \(\lambda_{W}\in\Lambda_{W,\mathbb{R}}\). The monodromy associated with \(\bar{\lambda}\) is thus the translation by \(-\lambda_{W}\in\Lambda_{W,\mathbb{R}}\). In particular, the monodromy is given by a morphism \(\rho\colon\bar{\Lambda}\to K\) where \(K=\Lambda_{W,\mathbb{R}}/\Lambda_{W}\) is the maximal compact (real) subgroup of \(T=W/\Lambda_{W}\).
Set \(X:=V/\Lambda_{W}\): this is a principal \(T\)-bundle over \(V/W\). Observe that the canonical splitting of \(V\) descends to a biholomorphism \(\varphi\colon W/\Lambda_{W}\times U\to X\). One can then recover \(G\) as the quotient of \(X\) by the action of \(\bar{\Lambda}\) given by \(\bar{\lambda}\cdot\varphi(w,u):=\varphi(\rho(\bar{\lambda})\cdot w,u+\lambda_{U})\).
Choose any smooth projective toric variety \(M\) of dimension \(r\). Then \(M\) is equipped with a \(T\)-action which has a dense orbit, and this orbit is canonically identified with \(T\). Then we define \(\bar{G}_{M}\) as the quotient of \(M\times U\) by the action of \(\bar{\Lambda}\) given by \(\bar{\lambda}\cdot(w,u)=(\rho(\bar{\lambda})\cdot w,u+\lambda_{U})\) for \(w\in M\) and \(u\in U\). In this way we obtain all smooth equivariant algebraic compactifications of \(G\).
We shall only use the case \(M=(\mathbb{P}^{1})^{r}\) in the sequel, and thus write \(\bar{G}:=\bar{G}_{(\mathbb{P}^{1})^{r}}\) in order to simplify notations.
The choice of a basis for \(\Lambda_{W}\) gives canonical coordinates \(w_{1},\dots,w_{r}\) on \(T=W/\Lambda_{W}\), hence on \((\mathbb{P}^{1})^{r}\). In the same way, we choose a basis of \(V/W\cong U\) which gives linear coordinates \(z_{1},\dots,z_{g}\) on \(V/W\). The positive smooth \((1,1)\) form
\[\omega=\frac{i}{2}\sum_{k=1}^{r}\frac{dw_{k}\wedge d\overline{w_{k}}}{(1+w_{k} \overline{w_{k}})^{2}}+\frac{i}{2}\sum_{j=1}^{g}dz_{j}\wedge d\overline{z_{j}}. \tag{3.4}\]
on \((\mathbb{P}^{1})^{r}\times U\) then descends to a Kahler form on \(\bar{G}\), since the monodromy representation takes its values in the compact group \(K\), which acts on each factor \(\mathbb{P}^{1}\) by rotations preserving the first sum, while the translations by \(\lambda_{U}\) preserve the second.
### Automorphisms of semi-abelian varieties
Let \(G\) be a semi-abelian variety. We let \(\operatorname{Aut}(G)\) be the group of algebraic biholomorphisms\({}^{3}\) of \(G\). This group contains all translations \(z\mapsto z+x\) for any \(x\in G\).
Footnote 3: Observe that the group of all biholomorphisms of \(G\) might be larger than \(\operatorname{Aut}(G)\).
Pick any element \(g\in\operatorname{Aut}(G)\). Since any algebraic map from \(T\) to \(A\) is constant, any algebraic self-map of \(G\) descends to \(A\). In particular, for any \(g\in\operatorname{Aut}(G)\) fixing the neutral point, the map \(\varphi_{y}(x):=g(x+y)-g(x)-g(y)\) descends to the zero map on \(A\), and we have \(\varphi_{y}(T)\subset T\). Since \(\varphi_{y}\) is algebraic, its restriction to \(T\) is a morphism, hence \(\varphi_{y}\equiv 0\). This proves that any \(g\in\operatorname{Aut}(G)\) is the composition of a group morphism and a translation, so that we have the following exact sequence
\[1\to G\to\operatorname{Aut}(G)\to\operatorname{Aut}_{\bullet}(G)\to 1\,\]
where \(\operatorname{Aut}_{\bullet}(G)\) is the subgroup of algebraic group isomorphisms.
Since any algebraic biholomorphism descends to \(A\), the canonical exact sequence (3.1) also yields two exact sequences
\[1\to H_{T}\to\operatorname{Aut}(G)\to H_{A}\to 1,\text{ and }\ 1\to H_{T,\bullet}\to\operatorname{Aut}_{\bullet}(G)\to H_{A,\bullet}\to 1\]
where \(H_{T}\) (resp. \(H_{A}\)) is a subgroup of \(\operatorname{Aut}(T)\) (resp. \(\operatorname{Aut}(A)\)); and \(H_{T,\bullet}\) (resp. \(H_{A,\bullet}\)) is a subgroup of \(\operatorname{Aut}_{\bullet}(T)\simeq\operatorname{GL}(r,\mathbb{Z})\) (resp. \(\operatorname{Aut}_{\bullet}(A)\) which is a discrete subgroup of \(\operatorname{GL}(g,\mathbb{C})\)).
**Lemma 3.5**.: _Let \((V,W,U,\Lambda)\) be a presentation of \(G\), and \(\operatorname{pr}\colon V\to G\) be the canonical morphism. Then for any \(f\in\operatorname{Aut}(G)\) there exist a point \(x\in G\) and linear endomorphisms \(u_{T}(f)\colon W\to W\), \(u_{A}(f)\colon U\to U\) such that \(f(\operatorname{pr}(v))=\operatorname{pr}(u(f)(v))+x\) for all \(v\in V\), where \(u(f)\) is the endomorphism of \(V=W\oplus U\) defined by \(u(f)=u_{T}(f)\oplus u_{A}(f)\)._
Proof.: Replacing \(f\) by \(f-x\) with \(x=f(0)\) we may suppose \(f(0)=0\). By our previous arguments, \(f\) is then a group morphism. It thus lifts to the universal cover \(V\) of \(G\) and defines an endomorphism \(u(f)\colon V\to V\) such that \(u(f)(\Lambda)=\Lambda\). Since \(u(f)\) is \(\mathbb{C}\)-linear, it preserves \(U=\Lambda_{\mathbb{R}}\cap i\Lambda_{\mathbb{R}}\). On the other hand, we proved that \(f\) must preserve \(T\), hence \(u(f)\) preserves \(W\) too. We have thus proved that \(u(f)\) preserves each factor of the splitting \(V=W\oplus U\) as required.
Recall that an _isogeny_ \(f\colon G\to G^{\prime}\) between two semi-abelian varieties is a surjective group morphism with finite kernel. This is equivalent to requiring \(f\) to be finite and surjective.
We shall below consider the ring of group endomorphisms \(\operatorname{End}(G)\) which is isomorphic to the set of complex linear maps \(u\colon V\to V\) such that \(u(\Lambda)\subset\Lambda\).
### Degree growth of automorphisms of semi-abelian varieties
**Theorem 3.6**.: _Let \(G\) be any semi-abelian variety, and \((V,W,U,\Lambda)\) be a presentation of \(G\). Let \(\pi\colon G\to A\) be the canonical projection to an abelian variety such that \(\ker(\pi)\) is a split torus \(T\)._
_Pick any automorphism \(f\in\operatorname{Aut}(G)\), and let \(x\in G\), \(u_{A}(f)\in\operatorname{End}(U)\) and \(u_{T}(f)\in\operatorname{End}(W)\) be such that \(f(\operatorname{pr}(v))=\operatorname{pr}(u(f)(v))+x\) for all \(v\in V\), where \(\operatorname{pr}\colon V\to G\) is the canonical morphism and \(u(f)=u_{T}(f)\oplus u_{A}(f)\) (Lemma 3.5). Then, we have_
\[\deg_{k}(f^{n})\asymp\max_{0\leq j\leq k}\ \left\{\|\Lambda^{j,j}u_{A}(f)^{n}\| \times\|\Lambda^{k-j}u_{T}(f)^{n}\|\right\}\]
Let \(u\colon V\to V\) be any endomorphism. Then for all \(j\), we denote by \(\Lambda^{j}u\) the induced endomorphism on the vector space \(\Lambda^{j}V^{*}\); and by \(\Lambda^{j,j}u\) the endomorphism on the space \(\Lambda^{j,j}V^{*}\) of \(2j\)-forms of type \((j,j)\).
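Concretely, if \(u\) is diagonalizable with eigenvalues \(\alpha_{1},\dots,\alpha_{g}\) ordered so that \(|\alpha_{1}|\geq\cdots\geq|\alpha_{g}|\), then in suitable coordinates

\[u^{*}(dz_{i}\wedge d\overline{z_{j}})=\alpha_{i}\overline{\alpha_{j}}\,dz_{i}\wedge d\overline{z_{j}},\]

so that the spectral radius of \(\Lambda^{1,1}u\) equals \(|\alpha_{1}|^{2}\); this is the source of the squares appearing in the two corollaries below.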
Proof.: We compute the degrees in the projective compactification \(\bar{G}\) as in §3.2 with the Kahler form (3.4).
Observe that any translation on \(G\) extends to an automorphism of \(\bar{G}\), hence acts as the identity on the cohomology of \(\bar{G}\). We may therefore suppose that \(x=0\). It follows that \(f\) restricts to a morphism \(f_{T}\) on \(T\), and also descends to a morphism \(f_{A}\colon A\to A\). Write
\[\omega_{T}:=\frac{i}{2}\sum_{k=1}^{r}\frac{dw_{k}\wedge d\overline{w_{k}}}{(1+ w_{k}\overline{w_{k}})^{2}}\text{ and }\ \omega_{A}:=\frac{i}{2}\sum_{j=1}^{g}dz_{j}\wedge d\overline{z_{j}}.\]
To simplify notation we identify \(\omega_{A}\) with its image in the quotient space \(A\). Since the pull-back action of \(f_{A}^{n}\) on \(H^{k,k}(A)\) is given by \(\Lambda^{k,k}u_{A}(f)^{n}\), we have
\[\int_{A}f_{A}^{n*}\omega_{A}^{k}\wedge\omega_{A}^{g-k}\asymp\|\Lambda^{k,k}u_{A} (f)^{n}\|\ ;\]
and [11] or [12] imply similarly:
\[\int_{\mathbb{P}_{\mathbb{C}}^{r}}f_{T}^{n*}\omega_{T}^{k}\wedge\omega_{T}^{r-k}\asymp\|\Lambda^{k}u_{T}(f)^{n}\|.\]
Choose a fundamental domain \(\Pi(G)\) for the action of \(\bar{\Lambda}\) in \(X=V/\Lambda_{W}\). Recall that there is a canonical projection \(X\to U\), and observe that we may choose \(\Pi(G)\) as the product of \(T\) with a fundamental domain \(\Pi(A)\) in \(U\) for the action of \(\bar{\Lambda}\). Denote by \(f_{X}\colon X\to X\) the lift of \(f\) (or equivalently the map induced by \(u(f)\) from \(V\) onto \(X\)). Note that by Lemma 3.5, \(f_{X}\) is equal to the product map \((f_{T},u_{A}(f))\) in the coordinates \((w_{j}),(z_{i})\) on \(T\times U\).
We then have the following series of equalities:
\[\deg_{k}(f^{n}) =\int_{\bar{G}}f^{n*}\omega^{k}\wedge\omega^{r+g-k}=\int_{G}f^{n*}\omega^{k}\wedge\omega^{r+g-k}\] \[=\int_{\Pi(G)}f_{X}^{n*}(\omega_{A}+\omega_{T})^{k}\wedge(\omega_{A}+\omega_{T})^{r+g-k}\] \[=\sum_{j,l}\binom{k}{j}\ \binom{r+g-k}{l}\ \int_{\Pi(G)}\ u_{A}(f)^{n*}\omega_{A}^{j}\wedge\ f_{T}^{n*}\omega_{T}^{k-j}\wedge\omega_{A}^{l}\wedge\omega_{T}^{r+g-k-l}\] \[=\sum_{j=0}^{k}\binom{k}{j}\ \binom{r+g-k}{g-j}\ \int_{\Pi(A)}\ u_{A}(f)^{n*}\omega_{A}^{j}\wedge\omega_{A}^{g-j}\ \int_{T}\ f_{T}^{n*}\omega_{T}^{k-j}\wedge\omega_{T}^{r-k+j}\] \[\asymp\max_{0\leq j\leq k}\deg_{j}(f_{A}^{n})\deg_{k-j}(f_{T}^{n})\]
which concludes the proof.
Denote by \(\tau_{1},\dots,\tau_{r}\) the eigenvalues counted with multiplicities of \(u_{T}(f)\) and by \(\alpha_{1},\dots,\alpha_{g}\) the eigenvalues of \(u_{A}(f)\). Reorder them in the following way:
\[|\tau_{1}|\geqslant|\tau_{2}|\geqslant\dots\geqslant|\tau_{r}|;\] \[|\alpha_{1}|\geqslant|\alpha_{2}|\geqslant\dots\geqslant|\alpha_ {g}|.\]
The previous theorem can be used to describe the degree growth in codimension \(1\).
**Corollary 3.7**.: _Let \(f\colon G\to G\) be an automorphism of a semi-abelian variety \(G\). Then in the above notation the first dynamical degree of \(f\) is equal to:_
\[\lambda_{1}(f)=\max\{|\tau_{1}|,|\alpha_{1}|^{2}\}.\]
_Moreover, if \(\lambda_{1}(f)=1\), then \(\deg_{1}(f^{n})\asymp n^{d}\) with \(d=\max\{j_{T}-1,2(j_{A}-1)\}\) where \(j_{T}\) and \(j_{A}\) denote the unipotent index of \(u_{T}(f)\) and \(u_{A}(f)\) respectively._
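For orientation, here is a minimal illustration of the polynomial regime described in the last statement (an illustrative computation, not needed in the sequel). If \(G=T=(\mathbb{C}^{*})^{2}\) and \(f\) is the automorphism induced by the unipotent matrix \(u_{T}(f)=\begin{pmatrix}1&1\\ 0&1\end{pmatrix}\), then \(j_{T}=2\) and \(\deg_{1}(f^{n})\asymp\|u_{T}(f)^{n}\|\asymp n\). If instead \(G=A=E\times E\) is an abelian surface with \(f(z_{1},z_{2})=(z_{1}+z_{2},z_{2})\), so that \(u_{A}(f)\) is a single unipotent Jordan block of size \(2\), then \(j_{A}=2\) and \(\deg_{1}(f^{n})\asymp\|\Lambda^{1,1}u_{A}(f)^{n}\|\asymp n^{2}\), in accordance with \(d=\max\{j_{T}-1,2(j_{A}-1)\}\).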
**Corollary 3.8**.: _Let \(f\colon G\to G\) be an automorphism of a semi-abelian variety \(G\). In the above notation \(f\) has the following dynamical degrees:_
\[\lambda_{j}(f)=\max_{\substack{k+l=j\\ 0\leq k\leq r,\;0\leq l\leq g}}\prod_{m=1}^{k}|\tau_{m}|\times\prod_{n=1}^{l}|\alpha_{n}|^{2}\]
In the previous formula, we adopt the convention that when \(k=0\) or \(l=0\) the corresponding product is over the empty set, hence equals \(1\).
**Remark 3.9**.: The computation of dynamical degrees of endomorphisms of semi-abelian surfaces was also explored in Abboud's thesis, see [1, §4.3.3].
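For a concrete hyperbolic illustration of Corollaries 3.7 and 3.8 (again not needed in the sequel), let \(M=\begin{pmatrix}2&1\\ 1&1\end{pmatrix}\in\operatorname{SL}(2,\mathbb{Z})\), with eigenvalues \((3\pm\sqrt{5})/2\). On the split torus \(G=(\mathbb{C}^{*})^{2}\) the induced automorphism satisfies \(\lambda_{1}(f)=|\tau_{1}|=(3+\sqrt{5})/2\) and \(\lambda_{2}(f)=|\tau_{1}\tau_{2}|=1\), whereas on the abelian surface \(A=E\times E\) (with \(E\) any elliptic curve) the automorphism induced by the same matrix satisfies \(\lambda_{1}(f)=|\alpha_{1}|^{2}=\big((3+\sqrt{5})/2\big)^{2}\) and \(\lambda_{2}(f)=|\alpha_{1}\alpha_{2}|^{2}=1\).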
### Decomposition of automorphisms of abelian varieties
In this section, we suppose that \(G=V/\Lambda\) is an abelian variety so that \(\Lambda\) generates the complex vector space \(V\) as a real vector space.
Recall that an abelian variety is simple when it does not contain any non-trivial abelian subvarieties. Let \(L\to G\) be any ample line bundle on \(G\). The map sending \(a\in G\) to \(t_{a}^{*}L\otimes L^{-1}\) (where \(t_{a}\) denotes the translation by \(a\)) defines an isogeny \(\phi_{L}\colon G\to G^{\vee}:=\operatorname{Pic}^{0}(G)\) (see [10, II.8 Theorem 1]). If \(H\) is an abelian subvariety of \(G\), and \(\imath\colon H\to G\) is the canonical embedding map, we may consider the fiber \(H^{\prime}\) containing \(0\) of the map \(G\to H^{\vee}\) given by \(\imath^{\vee}\circ\phi_{L}\). Then \(H\cap H^{\prime}\) is finite, and the addition map \(H\times H^{\prime}\to G\) is an isogeny, see [1, p.127].
**Theorem 3.10**.: _Pick any algebraic group isomorphism \(f\in\operatorname{Aut}_{\bullet}(G)\). Then there exist two canonically defined \(f\)-invariant abelian subvarieties \(G_{0},G_{1}\) satisfying the following conditions:_
1. _the addition map_ \(\mu\colon G^{\prime}:=G_{0}\times G_{1}\to G\) _is an isogeny;_
2. _all eigenvalues of_ \(u(f|_{G_{0}})\) _are roots of unity;_
3. _no eigenvalue of_ \(u(f|_{G_{1}})\) _is a root of unity._
**Remark 3.11**.: Note that \(\lambda_{1}(f|_{G_{0}})=1\), and for any \(f\)-invariant abelian subvariety \(H\) of \(G_{1}\) we have \(\lambda_{1}(f|_{H})>1\). Also when \(G\) is simple, then either \(G_{0}\) or \(G_{1}\) is reduced to a point.
Proof.: Consider the complex linear map \(u(f)\colon V\to V\) as above. Since \(u(f)\) preserves the lattice \(\Lambda\), it also induces a \(\mathbb{Q}\)-linear map \(u_{r}\colon\Lambda_{\mathbb{Q}}\to\Lambda_{\mathbb{Q}}\). Since \(\Lambda\) is co-compact in \(V\), the embedding \(\Lambda\subset V\) induces an isomorphism of real vector spaces \(\phi\colon\Lambda_{\mathbb{R}}\to V\).
Let \(J(\lambda):=\phi^{-1}(i\phi(\lambda))\). It defines an endomorphism \(J\colon\Lambda_{\mathbb{R}}\to\Lambda_{\mathbb{R}}\) such that \(J^{2}=-\operatorname{Id}\), and since \(u(f)\) is \(\mathbb{C}\)-linear, we have \(J\circ u_{r}=u_{r}\circ J\).
Let \(\chi_{r}\in\mathbb{Z}[T]\) be the characteristic polynomial of \(u_{r}\). Then \(\chi_{r}\) is a polynomial of degree \(2g\), which can be decomposed as a product \(\chi_{r}=PQ\) where \(P\in\mathbb{Z}[T]\) is a product of cyclotomic factors, and \(Q\in\mathbb{Z}[T]\) does not vanish at any root of unity. We obtain a splitting \(\Lambda_{\mathbb{Q}}=\ker P(u_{r})\oplus\ker Q(u_{r})\), and since \(u_{r}\) and \(J\) commute, the real spans of both \(\ker P(u_{r})\) and \(\ker Q(u_{r})\) in \(\Lambda_{\mathbb{R}}\) are \(J\)-invariant. Write \(V_{0}\) and \(V_{1}\) for the images under \(\phi\) of these two real spans. It follows that \(V_{0}\) and \(V_{1}\) are \(u(f)\)-invariant complex vector subspaces of \(V\) with \(V=V_{0}\oplus V_{1}\), and \(\Lambda\cap V_{i}\) is a lattice in \(V_{i}\) for each \(i=0,1\). The theorem follows with \(G_{i}=V_{i}/(\Lambda\cap V_{i})\).
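A minimal example of this decomposition (an illustration only): take \(G=E^{3}\) for an elliptic curve \(E\), and \(f\in\operatorname{Aut}_{\bullet}(G)\) given by the block-diagonal matrix \(\operatorname{diag}(1,M)\) with \(M=\begin{pmatrix}2&1\\ 1&1\end{pmatrix}\in\operatorname{SL}(2,\mathbb{Z})\). Then \(\chi_{r}=PQ\) with \(P=(T-1)^{2}\) and \(Q=(T^{2}-3T+1)^{2}\), and one finds \(G_{0}=E\times\{0\}\times\{0\}\) and \(G_{1}=\{0\}\times E\times E\); in this case the addition map \(G_{0}\times G_{1}\to G\) is even an isomorphism.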
## 4. Families of abelian varieties and Neron models
In this section, we discuss Neron models in the context of degenerating families of complex abelian varieties, and give a proof of Theorem A.
### Neron models
In order to talk about Neron models, we have to make a slight twist in the terminology used in the introduction.
A family (of polarized manifolds) over the complex unit disk is a smooth complex manifold \(\mathcal{X}\), with a surjective holomorphic map \(\pi\colon\mathcal{X}\to\mathbb{D}\) which is a proper submersion over \(\mathbb{D}^{*}\), and carrying a relatively ample line bundle \(\mathbb{L}\to\mathcal{X}\).
When \(X_{t}=\pi^{-1}(t)\) is an abelian variety for each (or some) \(t\neq 0\), then we say that it is a family of polarized abelian varieties. The family of neutral elements of \(X_{t}\) gives a canonical holomorphic section over \(\mathbb{D}^{*}\), that we refer to as the zero section.
We allow \(X_{0}\) to be non-compact. Observe that \(X_{0}\) is compact if and only if \(\pi\) is proper, in which case we say the family is proper. We say that a polarized family is smooth when the map \(\pi\) is a submersion over \(\mathbb{D}\). A proper smooth family of polarized abelian varieties is simply a deformation of abelian varieties defined over \(\mathbb{D}\). We write \(\mathcal{X}^{*}=\mathcal{X}\setminus X_{0}=\pi^{-1}(\mathbb{D}^{*})\).
A model \(\mathcal{Y}\) of \(\mathcal{X}\) is a family of polarized manifolds \(\varpi\colon\mathcal{Y}\to\mathbb{D}\), together with a meromorphic map \(\phi\colon\mathcal{Y}\dashrightarrow\mathcal{X}\) such that \(\varpi=\pi\circ\phi\) and \(\phi\) is a biholomorphism from \(\mathcal{Y}^{*}\) onto \(\mathcal{X}^{*}\).
Let \(R\) be the ring of germs of holomorphic functions at the origin \(0\in\mathbb{D}\). With the \(t\)-adic norm \(|\sum a_{n}t^{n}|_{t}=\exp(-\min\{n:a_{n}\neq 0\})\), it becomes a discretely valued ring, whose completion is the ring of formal power series \((\mathbb{C}[\![t]\!],|\cdot|_{t})\). The fraction field \(K\) of \(R\) is a valued field whose completion is \((\mathbb{C}(\!(t)\!),|\cdot|_{t})\).
If \(\pi\colon\mathcal{X}\to\mathbb{D}\) is a family of polarized abelian varieties, there exists \(r>0\) and an embedding \(\imath\colon\mathcal{X}\hookrightarrow\mathbb{P}^{N}_{\mathbb{C}}\times\mathbb{D}_{r}\) such that \(\operatorname{pr}_{2}\circ\imath=\pi\), where \(\mathbb{D}_{r}:=\{|t|<r\}\) and \(\operatorname{pr}_{2}\) is the projection onto the second factor. It follows that \(\mathcal{X}\) is defined by a finite family of homogeneous polynomials with coefficients in \(R\). This family defines a projective scheme flat over \(\operatorname{Spec}\!R\) that we denote by \(\mathcal{X}_{R}\). Its generic fiber \(X_{K}\) is a smooth abelian \(K\)-variety.
**Theorem 4.1**.: _Let \(X_{K}\) be any abelian variety over \(K\). There exists an \(R\)-scheme \(\mathcal{N}_{R}\), flat over \(\operatorname{Spec}\!R\), and an isomorphism \(\phi\colon N_{K}\to X_{K}\) such that \(\mathcal{N}_{R}\) satisfies the following property:_
* _for each smooth_ \(R\)_-scheme_ \(\mathcal{Y}\) _and for any_ \(K\)_-morphism_ \(f\colon Y_{K}\to X_{K}\)_, there exists a unique_ \(R\)_-morphism_ \(\varphi\colon\mathcal{Y}\to\mathcal{N}_{R}\) _which extends_ \(f\)_, i.e., such that_ \(f=\phi_{K}\circ\varphi_{K}\)_._
The property characterizing \(\mathcal{N}_{R}\) is called the Neron mapping property, and \(\mathcal{N}_{R}\) is then called a Neron model. It implies the uniqueness (up to isomorphism) of the Neron model. We refer to [1] for a proof of this result. It is crucial to keep in mind that in general the special fiber \(\mathcal{N}_{s}\) is not projective.
**Corollary 4.2**.: _Let \(\pi\colon\mathcal{X}\to\mathbb{D}\) be a family of polarized abelian varieties. Then there exists a family of polarized abelian varieties \(\pi_{\mathcal{N}}\colon\mathcal{N}\to\mathbb{D}\), and an isomorphism \(\phi_{N}\colon\mathcal{N}^{*}\to\mathcal{X}^{*}\) satisfying \(\pi=\pi_{\mathcal{N}}\circ\phi_{N}\), such that the following holds._
_For any smooth polarized family \(\varpi\colon\mathcal{Y}\to\mathbb{D}_{r}\) with \(0<r<1\), and for any holomorphic map \(f\colon\mathcal{Y}^{*}\to\mathcal{X}^{*}\) such that \(\varpi=\pi\circ f\), there exists a unique holomorphic map \(\varphi\colon\mathcal{Y}\to\mathcal{N}\) such that \(f=\phi_{N}\circ\varphi\)._
**Remark 4.3**.: We shall call \(\mathcal{N}\) as in the previous corollary the analytic Neron model (or Neron model for short). Observe that the extension property applied to the addition law on \(\mathcal{X}^{*}\times\mathcal{X}^{*}\) implies that \(\mathcal{N}\) is also a commutative complex algebraic group.
The extension property also implies that any holomorphic section \(\sigma\colon\mathbb{D}\to\mathcal{X}\) of \(\pi\) (i.e., satisfying \(\pi\circ\sigma(t)=t\)) induces a holomorphic map \(\sigma_{\mathcal{N}}\colon\mathbb{D}\to\mathcal{N}\) such that \(\phi_{\mathcal{N}}\circ\sigma_{\mathcal{N}}=\sigma\).
When the Neron model \(\mathcal{N}\) is relatively projective (i.e., \(\mathcal{N}_{0}\) is projective), then we shall say that \(\mathcal{X}\) is a _non-degenerating family_.
**Remark 4.4**.: When \(t\) varies in \(\mathbb{D}^{*}\), then \((X_{t},\mathbb{L}_{t})\) forms a holomorphic family of polarized abelian varieties of a fixed type \(D\) so that we have a holomorphic map from \(\mathbb{D}^{*}\) to the moduli space of polarized abelian varieties of type \(D\) (see, e.g., [1, p.219]). The family is non-degenerating if and only if this map extends holomorphically through \(0\).
**Remark 4.5**.: In the case \(g=1\) and \(\mathcal{X}\) is a family of elliptic curves, then the Neron model can be obtained as follows, see [11]. First, after applying the minimal model program, we may suppose that \(\mathcal{X}\) is relatively minimal, i.e., \(X_{0}\) does not contain any smooth rational curve of self-intersection \(-1\). The divisor \(\pi^{*}(0)\) is in general not reduced, and can be written as a sum \(\sum E_{i}+\sum b_{j}F_{j}\) where \(E_{i},F_{j}\) are the irreducible components of \(X_{0}\) and \(b_{j}\geq 2\) for all \(j\). Then \(\mathcal{N}\) can be taken as the smooth part of \(\mathcal{X}\setminus\bigcup F_{j}\). We refer to [1, §V.7] for the description by Kodaira of all possible divisors \(\pi^{*}(0)\) as above.
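For instance (a standard illustration of this case): if \(\mathcal{X}\) is a relatively minimal elliptic fibration whose special fiber is of Kodaira type \(\mathrm{I}_{b}\), \(b\geq 1\) (a nodal cubic for \(b=1\), a cycle of \(b\) rational curves for \(b\geq 2\), all components of multiplicity one), then \(\pi^{*}(0)\) is reduced, \(\mathcal{N}\) is the smooth part of \(\mathcal{X}\), its central fiber has identity component \(\mathbb{C}^{*}\) and component group \(\mathbb{Z}/b\mathbb{Z}\); in particular the family is degenerating.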
Proof of Corollary 4.2.: Theorem 4.1 yields a polarized family of abelian varieties \(\mathcal{N}\to\mathbb{D}_{r}\) for some \(0<r<1\). The isomorphism \(\phi\colon N_{K}\to X_{K}\) defines a biholomorphism \(\varphi\colon\mathcal{N}^{*}\to\pi^{-1}\{0<|t|<r\}\) (possibly after reducing \(r\)), such that \(\varphi^{*}\mathbb{L}\) is the polarization on \(\mathcal{N}^{*}\).
Glue \(\mathcal{N}\) and \(\mathcal{X}^{*}\) along \(\pi^{-1}\{0<|t|<r\}\), where we identify a point \(z\in\mathcal{N}^{*}\) with \(\varphi(z)\in\pi^{-1}\{0<|t|<r\}\). The resulting space is a family of polarized abelian varieties defined over \(\mathbb{D}\). To simplify notation we shall keep the same notation \(\mathcal{N}\) for this family.
Suppose now that we are given a smooth polarized family \(\varpi\colon\mathcal{Y}\to\mathbb{D}_{r}\) with \(0<r<1\), and a holomorphic map \(f\colon\mathcal{Y}^{*}\to\mathcal{X}^{*}\) such that \(\varpi=f\circ\pi\). Since \(\varpi\) is smooth in the analytic category, the associated \(R\)-scheme is smooth (in the algebraic sense, see [10, Satz 3.1]). By the Neron extension property, we can find \(\rho<r\) and an analytic map \(\varphi\colon\varpi^{-1}\{|t|<\rho\}\to\mathcal{N}\) such that \(f=\phi_{N}\circ\varphi\). By analytic continuation, \(\varphi\) extends to \(\mathcal{Y}\).
Finally we shall need the following result. Suppose \(\pi\colon\mathcal{X}\to\mathbb{D}\) is any proper family of polarized abelian varieties, and pick any integer \(n\in\mathbb{N}^{*}\). A base change of order \(n\) is a proper family of polarized abelian varieties \(\pi_{n}\colon\mathcal{X}_{n}\to\mathbb{D}\), with a meromorphic map \(\phi\colon\mathcal{X}_{n}\dashrightarrow\mathcal{X}\) such that \(\pi\circ\phi(x)=(\pi_{n}(x))^{n}\) for all \(x\), i.e., \(\phi\) fits into a commutative square over the map \(t\mapsto t^{n}\) on \(\mathbb{D}\).
**Theorem 4.6**.: _Let \(\pi\colon\mathcal{X}\to\mathbb{D}\) be any proper polarized family of abelian varieties of dimension \(g\). Then there exists a finite base change \(\pi^{\prime}\colon\mathcal{X}^{\prime}\to\mathbb{D}\) of order \(n\in\mathbb{N}^{*}\) such that:_
1. _The fiber_ \(\mathcal{X}^{\prime}_{0}\) _is reduced;_
2. _the irreducible component_ \(G\) _of the central fiber_ \(\mathcal{N}^{\prime}_{0}\) _of the Neron model of_ \(\mathcal{X}^{\prime}\) _containing the neutral element is a semi-abelian variety, and the quotient space_ \(\mathcal{N}^{\prime}_{0}/G\) _is a finite group._
_Moreover, the following property holds:_
3. _the Neron model_ \(\mathcal{N}^{\prime}\) _is proper if and only if_ \(G\) _is an abelian variety._
Proof.: The first two statements follow from the semi-stable reduction theorem of Grothendieck applied to the \(R\)-scheme \(\mathcal{X}_{R}\), see [1, §7.4, Theorem 1]. The third statement is a consequence of [1, §7.4, Theorem 5].
Recall that it may be necessary to do a non-trivial base change for the central fiber of the Neron model to be semi-abelian.
**Example 4.7**.: Let \(A\) be an abelian variety of dimension \(g\), \(f,\sigma\in\operatorname{Aut}(A)\) two automorphisms such that \(f(0)=\sigma(0)=0\), \(\sigma^{N}=\operatorname{Id}\) for some \(N\). Then \(f\) and \(\sigma\) lift to linear maps on the Lie algebra of \(A\) hence commute, and we can form the quotient space \(\mathcal{X}=(A\times\mathbb{D})/G\), where \(G\) is the group generated by the identification \((z,t)\sim(\sigma(z),\zeta t)\) with \(\zeta\) a primitive \(N\)-th root of unity. Since \(f\) and \(\sigma\) commute, \(f\) descends to \(\mathcal{X}\) and defines an automorphism \(\tilde{f}\colon\mathcal{X}\to\mathcal{X}\).
In general \(\mathcal{X}_{0}\) is singular and is not birational to an abelian variety. Take for instance \(A=E^{g}\) with \(E\) an elliptic curve, \(f\) any element in \(\operatorname{SL}(g,\mathbb{Z})\), and \(\sigma(z)=\zeta z\) where \(\zeta=-1\), or \(\zeta\) is a primitive 3rd (resp. 4th) root of unity when \(E=\mathbb{C}/\mathbb{Z}[j]\) (resp. \(E=\mathbb{C}/\mathbb{Z}[i]\)). When \(g=2\), the minimal resolution of singularities of the quotient \(A/G\) is a K3 surface when \(\zeta=-1\). One can even obtain examples for which \(A/G\) is a (singular) rational surface. The easiest example is the following: the field \(\mathbb{Q}(\zeta_{5})\), where \(\zeta_{5}\) is a primitive 5-th root of unity, admits two non-conjugate complex embeddings, and the variety \(A:=\mathbb{C}^{2}/\mathbb{Z}[\zeta_{5}]\) is a simple abelian variety. The multiplication by \(\zeta_{5}\) on \(\mathbb{Z}[\zeta_{5}]\) induces an automorphism \(g\) of order 5, and \(A/\langle g\rangle\) is rational. This variety also admits an automorphism with dynamical degree \(>1\) since the group of units of \(\mathbb{Q}[\zeta_{5}]\) has rank 1 by Dirichlet's unit theorem. See [21, 22] for details.
### Automorphisms of families of polarized abelian varieties
Let \(\pi\colon\mathcal{X}\to\mathbb{D}\) be a family of polarized abelian varieties, and \(f\colon\mathcal{X}\dashrightarrow\mathcal{X}\) be any bimeromorphic map. By the properness of \(\pi\), we necessarily have \(\pi=\pi\circ f\). We denote by \(\operatorname{Bim}(\mathcal{X})\) the group of bimeromorphic self-maps of \(\mathcal{X}\), and by \(\operatorname{Aut}(\mathcal{X})\) its group of biholomorphisms.
Since abelian varieties do not contain rational curves, it follows that \(\operatorname{Exc}(f)\) and \(\operatorname{Ind}(f)\) are both contained in \(\mathcal{X}_{0}\), and \(f_{t}\colon X_{t}\to X_{t}\) is an algebraic biholomorphism for all \(t\neq 0\).
Recall that for each \(t\neq 0\), \(f_{t}\) lifts to the universal cover \(\operatorname{pr}_{t}\colon V_{t}\to X_{t}\) as an affine map \(z\mapsto u(f_{t})(z)+x_{t}\) with \(x_{t}\in V_{t}\), and \(u(f_{t})\in\operatorname{End}(V_{t})\) so that \(u(f_{t})\) preserves the lattice \(\Lambda_{t}=\operatorname{pr}_{t}^{-1}(0)\).
Since \(\Lambda_{t}\) and \(u(f_{t})\) are varying continuously (in fact holomorphically) in \(t\), choosing a path from \(t_{0}\) to \(t_{1}\) determines a canonical isomorphism \(V_{t_{0}}\to V_{t_{1}}\) sending \(\Lambda_{t_{0}}\) to \(\Lambda_{t_{1}}\) and conjugating \(u(f_{t_{0}})\) to \(u(f_{t_{1}})\). We collect here the following two observations.
**Proposition 4.8**.: _Let \(f\in\operatorname{Bim}(\mathcal{X})\), and \(E\) be any irreducible component of the central fiber of the Neron model of \(\mathcal{X}\). Suppose that the component of \(\mathcal{N}_{0}\) containing \(0\) is semi-abelian, and that \(f_{\mathcal{N}}(E)=E\)._
_Then the characteristic polynomial of \(u(f_{t})\) for \(t\neq 0\) is identical to the one of \(u(f|E)\)._
Proof.: We may suppose \(\mathcal{X}=\mathcal{N}\) and write \(f=f_{\mathcal{N}}\). Recall that \(\mathcal{N}\) is a family of abelian complex algebraic groups. Denote by \(E_{0}\) the connected component of \(\mathcal{N}_{0}\) containing \(0\), and pick any point \(x\in E\). Then \(E\) is isomorphic to \(E-x\cong E_{0}\), hence is a translate of a semi-abelian variety. By Lemma 3.5 any algebraic automorphism \(h\colon E\to E\) thus lifts to an affine map on its universal cover. We shall denote by \(u(h)\) its linear part.
Choose any local holomorphic section \(x\colon\mathbb{D}_{r}\to\mathcal{X}\) such that \(x(0)=x\). Note that a priori \(x\) is only defined in a neighborhood of \(0\in\mathbb{D}\) (i.e., \(r<1\)) in which case we replace \(\mathcal{X}\) by \(\pi^{-1}(\mathbb{D}_{r})\). To simplify notation we shall assume \(r=1\) in the remainder of this proof.
We introduce the following automorphism \(\tilde{f}\in\operatorname{Aut}(\mathcal{N})\) by letting \(\tilde{f}_{t}:=f_{t}-f_{t}(x(t))+x(t)\) so that \(\tilde{f}_{t}(x(t))=x(t)\) for all \(t\). Since \(f\) and \(\tilde{f}\) are isotopic we have \(\tilde{f}(E)=E\), \(u(\tilde{f}|E)=u(f|E)\), and \(u(\tilde{f}_{t})=u(f_{t})\) for all \(t\neq 0\).
We now look at the differential \(d\tilde{f}_{t}(x(t))\in\operatorname{End}(T_{x(t)}\mathcal{N}_{t})\). We may trivialize \(T_{x(t)}\mathcal{N}_{t}\) in a continuous way and interpret \(d\tilde{f}_{t}(x(t))\) as a continuous family of endomorphisms of the same complex vector space. In particular, the characteristic polynomial of \(d\tilde{f}_{0}(x(0))\) is equal to the one of \(d\tilde{f}_{t}(x(t))\) for \(t\neq 0\) (as the latter is constant).
We conclude by observing that \(d\tilde{f}_{t}(x(t))\) (resp. \(d\tilde{f}_{0}(x(0))\)) is conjugate to \(u(f_{t})\) (resp. to \(u(f|E)\)) since \(f_{t}\) (resp. \(f|E\)) lifts to a constant linear map on its universal cover by Lemma 3.5.
**Proposition 4.9**.: _Let \(\pi\colon\mathcal{X}\to\mathbb{D}\) be a polarized family of abelian varieties, and let \(\phi\colon\mathcal{X}^{\prime}\dashrightarrow\mathcal{X}\) be any base change. Then for any \(f\in\operatorname{Bim}(\mathcal{X})\), there exists \(f^{\prime}\in\operatorname{Bim}(\mathcal{X}^{\prime})\) such that \(\phi\circ f^{\prime}=f\circ\phi\), and \(f\) is regularizable if and only if \(f^{\prime}\) is._
Proof.: We may suppose that \(\mathcal{X}^{\prime}\) is the fibered product over \(t\mapsto t^{n}\) for some \(n\in\mathbb{N}^{*}\), i.e., \(\mathcal{X}^{\prime}=\{(x,t)\in\mathcal{X}\times\mathbb{D},\,\pi(x)=t^{n}\}\) (note that \(\mathcal{X}^{\prime}\) may acquire singularities along the central fiber). The automorphism \(f^{\prime}(x,t)=(f(x),t)\) satisfies \(\pi^{\prime}\circ f^{\prime}(x,t)=\pi^{\prime}(x,t):=t\) hence \(f^{\prime}\in\operatorname{Bim}(\mathcal{X}^{\prime})\). And since \(\phi(x,t)=x\), we have \(\phi\circ f^{\prime}=f\circ\phi\) by construction.
Observe also that if \(f\) is regularizable, then we may suppose that \(f\colon\mathcal{X}\to\mathcal{X}\) is a regular map, and \(f^{\prime}\) is also regular on the fibered product.
Suppose conversely that \(f^{\prime}\) is regularizable. We may thus find a model \(\mathcal{Y}\dashrightarrow\mathcal{X}^{\prime}\) such that the induced map \(f_{\mathcal{Y}}\colon\mathcal{Y}\to\mathcal{Y}\) is regular. Observe that \(\phi\colon\mathcal{X}^{\prime}\to\mathcal{X}\) is a cyclic cover: the \(\mathbb{Z}_{n}\)-action given by \(\zeta\cdot(x,t)=(x,\zeta\cdot t)\) induces a biholomorphism \(\mathcal{X}^{\prime}/\mathbb{Z}_{n}\mathop{\to}\limits^{\sim}\mathcal{X}\). This action commutes with \(f^{\prime}\), and by equivariant resolution of singularities we may suppose that it lifts to a regular action on \(\mathcal{Y}\). Let \(\tilde{\mathcal{Y}}\,:=\mathcal{Y}/\mathbb{Z}_{n}\). Then \(\tilde{\mathcal{Y}}\) is a model of \(\mathcal{X}\), and \(f_{\mathcal{Y}}\) descends to a regular automorphism of \(\tilde{\mathcal{Y}}\). This implies \(f\) to be regularizable.
**Proposition 4.10**.: _Let \(\pi^{\prime}\colon\mathcal{X}^{\prime}\to\mathbb{D}\) and \(\pi\colon\mathcal{X}\to\mathbb{D}\) be two families of polarized abelian varieties, and let \(\phi\colon\mathcal{X}^{\prime}\dashrightarrow\mathcal{X}\) be a meromorphic map such that \(\pi^{\prime}=\pi\circ\phi\) and \(\phi_{t}\colon X^{\prime}_{t}\to X_{t}\) is an isogeny for all \(t\in\mathbb{D}^{*}\). Suppose \(f\in\operatorname{Bim}(\mathcal{X})\) and \(f^{\prime}\in\operatorname{Bim}(\mathcal{X}^{\prime})\) satisfy \(\phi\circ f^{\prime}=f\circ\phi\)._
_Then \(f\) is regularizable if and only if \(f^{\prime}\) is._
Proof.: By the resolution of singularities, we may suppose that \(\phi\colon\mathcal{X}^{\prime}\to\mathcal{X}\) is regular. Suppose that \(f\colon\mathcal{X}\to\mathcal{X}\) is an automorphism. Let \(R\) be the ring of germs of holomorphic functions at \(0\) in the unit disk, and \(K\) be its fraction field.
Consider the Stein factorization of \(\mathcal{X}^{\prime}\to\mathcal{X}\), so that \(\mathcal{X}^{\prime}\mathop{\to}\limits^{\varphi}\mathcal{Y}\mathop{\to}\limits^{\nu}\mathcal{X}\) with \(\nu\) finite, and \(\phi=\nu\circ\varphi\). Let \(\mathcal{Y}^{\prime}\) be the normalization of \(\mathcal{Y}\). Denote by \(f^{\prime}_{Y}\colon\mathcal{Y}^{\prime}\dashrightarrow\mathcal{Y}^{\prime}\) the map induced by \(f^{\prime}\). Since \(f\circ\nu=\nu\circ f^{\prime}_{Y}\), \(f\) is an automorphism, and \(\nu\) is finite, it follows that \(f^{\prime}_{Y}\) cannot have indeterminacy points, proving that \(f^{\prime}\) is regularizable since \(\mathcal{Y}^{\prime}\) is normal.
To prove the converse, recall that the dual variety \(X^{\vee}_{t}\) of \(X_{t}\) is the quotient of the space \(\bar{\Omega}\) of anti-linear \(1\)-forms by the dual lattice \(\Lambda^{\vee}_{t}:=\{\ell\in\bar{\Omega},\Im\ell(\Lambda_{t})\subset\mathbb{Z}\}\), see [1, Chapter 2, §4]. Since \(X_{t}\) is polarized, there exists a continuous family of positive definite hermitian forms \(H_{t}\colon V\times V\to\mathbb{C}\) such that \(\Im H_{t}(\Lambda_{t},\Lambda_{t})\subset\mathbb{Z}\). These polarizations induce a canonical isogeny \(\phi_{t}\colon X_{t}\to X^{\vee}_{t}\) sending \(z\) to \(H_{t}(z,\cdot)\). Its degree is determined by the type of the polarization by [1, Chapter 2, Proposition 4.9]. It follows that \(X^{\vee}_{t}\) is also polarized, and the family of abelian varieties \(X^{\vee}_{t}\) varies holomorphically4. We briefly sketch how to see that \(\mathcal{X}^{\vee}\) extends as a family over \(\mathbb{D}\).
Footnote 4: the polarization is principal iff \(X^{\vee}_{t}\) is isomorphic to \(X_{t}\).
After a base change, there exists a holomorphic map \(s\mapsto Z(s)\) from the upper half-plane to the Siegel domain (\(Z(s)\) is a symmetric \(g\times g\) complex matrix such that \(\Im(Z(s))>0\)), and such that \(Z(s+1)=Z(s)+DB\) for some matrix \(B\) with integral coefficients (see §6.2 below for more details). And \(X_{t}\), for \(t=e^{2i\pi s}\), is isomorphic to \(\mathbb{C}^{g}\) quotiented out by the lattice generated by the vectors \(d_{1}e_{1},\cdots,d_{g}e_{g}\) and the columns of \(Z(s)\). It follows from [1, Chapter 8.1, (1)] that \(X^{\vee}_{t}\) is then isomorphic to \(\mathbb{C}^{g}\) quotiented out by the lattice generated by the vectors \(e_{1},\cdots,e_{g}\) and the columns of \(Z^{\prime}(s):=D^{-1}Z(s)\). Observe that \(Z^{\prime}(s+1)=Z^{\prime}(s)+B\), and Mumford's construction (see again §6.2) based on toroidal geometry constructs a degeneration of the family \(X^{\vee}_{t}\) over \(\mathbb{D}\). We also have a canonical meromorphic map \(\phi\colon\mathcal{X}\dashrightarrow\mathcal{X}^{\vee}\) (over \(\mathbb{D}\)) restricting to \(\phi_{t}\) for all \(t\neq 0\).
We now observe that a bimeromorphism \(f\colon\mathcal{X}\dashrightarrow\mathcal{X}\) canonically induces a bimeromorphism on the dual family \(f^{\vee}\colon\mathcal{X}^{\vee}\dashrightarrow\mathcal{X}^{\vee}\) and that \(f^{\vee}\) is regularizable iff \(f\) is. This follows from the previous argument and the fact that \((\mathcal{X}^{\vee})^{\vee}\) is canonically isomorphic to \(\mathcal{X}\).
If \(f^{\prime}\) is regularizable, then \((f^{\prime})^{\vee}\) is also regularizable, hence so is \(f^{\vee}\) since we have a map \(\mathcal{X}^{\vee}\dashrightarrow(\mathcal{X}^{\prime})^{\vee}\), and we conclude that \(f\) is regularizable as required.
### Proof of Theorem A
Recall the set-up: \(\pi\colon\mathcal{X}\to\mathbb{D}\) is a proper family of polarized abelian varieties of dimension \(g\), and \(f\in\operatorname{Bim}(\mathcal{X})\).
(1) We assume that \(\mathcal{X}\) is a non-degenerating family, and want to show that \(f\) is regularizable. By definition, this means that there exists a base change \(\pi^{\prime}\colon\mathcal{X}^{\prime}\to\mathbb{D}\) which defines a smooth family of abelian varieties. The lift of \(f\) to \(\mathcal{X}^{\prime}\) is then automatically regular (since \(\mathcal{X}^{\prime}_{0}\) does not contain any rational curve), and we conclude that \(f\) is regularizable by Proposition 4.9.
(2) Suppose that \(f\) is regularizable, and no root of unity is an eigenvalue of \(u(f_{t})\) for some (hence all) \(t\neq 0\). Note that this implies \(\lambda_{1}(f_{t})>1\). We may assume that \(f\colon\mathcal{X}\to\mathcal{X}\) is a regular map. Replacing \(\mathcal{X}\) by
a suitable base change, we may also suppose that the central fiber of \(\mathcal{X}\) is reduced and the central fiber of its Neron model \(\mathcal{N}\) is an extension of a finite group by a semi-abelian variety (see Theorem 4.6).
Let \(\mathcal{X}^{\rm sm}\) be the set of points in \(\mathcal{X}\) at which \(\pi\) is a local submersion, i.e., \(\mathcal{X}^{\rm sm}=\mathcal{X}\setminus{\rm Sing}(\mathcal{X}_{0})\supset \mathcal{X}^{*}\). The Neron mapping property implies the existence of a canonical (holomorphic) map \(\psi\colon\mathcal{X}^{\rm sm}\to\mathcal{N}\) that extends the isomorphism \(\mathcal{X}^{*}\to\mathcal{N}^{*}\).
Replacing \(f\) by an iterate, we may suppose that each irreducible component of \(\mathcal{N}_{0}\) is fixed by \(f_{\mathcal{N}}\). Furthermore, each component \(E^{\prime}\) in \(\mathcal{N}_{0}\) is a translate of the component containing the neutral point of \(\mathcal{N}_{0}\) hence is a translate of a semi-abelian variety \(G\). The restriction map \(f_{\mathcal{N}}|E^{\prime}\) can be lifted to an affine map on the universal cover of \(E^{\prime}\), the linear part of which we denote by \(u(f|E^{\prime})\). Let \(r^{\prime}\) be the dimension of the maximal split subtorus \(T\subset G\), so that \(A:=G/T\) is an abelian variety of dimension \(g^{\prime}=g-r^{\prime}\). By Lemma 3.5, \(u(f|E^{\prime})\) splits as a sum of two endomorphisms \(u(f|E^{\prime})=u_{T}\oplus u_{A}\).
Let \(\tau_{i}\) and \(\alpha_{j}\) be the eigenvalues counted with multiplicity of \(u_{T}\) and \(u_{A}\) respectively so that
\[|\tau_{1}|\geqslant|\tau_{2}|\geqslant\cdots\geqslant|\tau_{r^{ \prime}}|;\] \[|\alpha_{1}|\geqslant|\alpha_{2}|\geqslant\cdots\geqslant|\alpha_ {g^{\prime}}|.\]
On the other hand, let \(\mu_{j}\) be the eigenvalues of \(u(f_{t})\), ordered as \(|\mu_{1}|\geqslant\cdots\geqslant|\mu_{g}|\). By assumption, we have \(|\mu_{1}|>1\), and no \(\mu_{j}\) is a root of unity. By Proposition 4.8, the collection \(\{\mu_{j}\}\) is the union of the \(\tau_{j}\)'s and the \(\alpha_{k}\)'s.
We claim that if \(r^{\prime}>0\) then \(|\tau_{1}|>1\). Indeed we have \(|\tau_{1}|=\lambda_{1}(f|T)\), so that by the log-concavity of dynamical degrees \(|\tau_{1}|=1\) implies \(\lambda_{j}(f|T)=1\) for all \(j\), hence \(|\tau_{j}|=1\) for all \(j\). Since \(\tau_{1}\) is a root of the characteristic polynomial of \(u_{T}\), it is an algebraic integer, and by Kronecker's theorem we conclude that \(\tau_{1}\) is a root of unity, which yields a contradiction.
Let us proceed now by contradiction and suppose that the family is degenerating, which is equivalent to assuming that \(r^{\prime}\geq 1\). Let \(k\) be the least integer so that \(|\tau_{1}|>|\alpha_{k}|\); if \(|\tau_{1}|\leq|\alpha_{g^{\prime}}|\), we set \(k=g^{\prime}+1\).
As before, let \(E\) be an irreducible component of \(\mathcal{X}_{0}\) such that \(\lambda_{k}(f|E)=\lambda_{k}(f_{t})\). By Propositions 2.6 and 2.10 we infer the existence of an irreducible component \(E^{\prime}\) of \(\mathcal{N}_{0}\) such that \(\lambda_{k}(f_{\mathcal{N}}|E^{\prime})=\lambda_{k}(f_{t})\). But Corollary 3.8 implies
\[\lambda_{k}(f_{t})=\prod_{j=1}^{k}|\mu_{j}|^{2}=\prod_{j=1}^{k-1}|\alpha_{j}|^{ 2}\times|\tau_{1}|^{2}\]
whereas
\[\lambda_{k}(f_{\mathcal{N}}|E^{\prime})=\max\left\{\prod_{j=1}^{k-1}|\alpha_{ j}|^{2}\times|\tau_{1}|,\prod_{j=1}^{k}|\alpha_{j}|^{2}\right\}\]
proving \(\lambda_{k}(f_{\mathcal{N}}|E^{\prime})<\lambda_{k}(f_{t})\) since \(|\tau_{1}|>1\) and \(|\alpha_{k}|<|\tau_{1}|\). This contradicts the equality \(\lambda_{k}(f_{\mathcal{N}}|E^{\prime})=\lambda_{k}(f_{t})\) obtained above, and concludes the proof.
### Proof of Theorem C
We suppose that \(f\) is an automorphism of \(\mathcal{X}\), and \(\deg(f_{t}^{n})\asymp n^{2k}\). Recall that replacing \(f\) by an iterate, and after a suitable base change, we may assume the following. First \(u(f_{t})\colon V\to V\) is unipotent. Second the central fiber \(\mathcal{X}_{0}\) is reduced. Let \(\mathcal{N}\) be the Neron model of \(\mathcal{X}\). Third, the component \(E_{0}\) of the central fiber of \(\mathcal{N}\) containing \(0\) is a semi-abelian variety.
We let \(T\) be the maximal multiplicative torus in \(E_{0}\), and denote by \(r\) its dimension, so that we have an exact sequence \(1\to T\to E_{0}\to A\to 1\) where \(A\) is an abelian variety of dimension \(g^{\prime}:=g-r\).
As in the previous section, we set \(\mathcal{X}^{\rm sm}=\mathcal{X}\setminus{\rm Sing}(\mathcal{X}_{0})\), and denote by \(\psi\colon\mathcal{X}^{\rm sm}\to\mathcal{N}\) the canonical bimeromorphic morphism.
By Proposition 2.3, there exists an irreducible component \(E\) of \(\mathcal{X}_{0}\) such that \(\deg(f^{n}|E)\asymp\deg(f_{t}^{n})\), and \(Z:=\psi(E\cap\mathcal{X}^{\rm sm})\) is an irreducible subvariety of \(\mathcal{N}_{0}\) which is \(f_{\mathcal{N}}\)-invariant. Since \(\mathcal{N}_{0}\) is quasi-projective, it admits a projective compactification by Nagata's theorem, so that Proposition 2.6 applies to the morphism \(\psi\colon\mathcal{X}^{\rm sm}\to\mathcal{N}\), and yields:
\[n^{2k-1}\lesssim\deg_{1}(f_{\mathcal{N}}^{n}|Z)\lesssim n^{2k}\.\]
We may assume that the component of \(\mathcal{N}_{0}\) containing \(Z\) is \(E_{0}\). Proposition 2.10 implies \(n^{2k-1}\lesssim\deg_{1}(f_{\mathcal{N}}^{n}|E_{0})\). By Corollary 3.7, we have \(\deg_{1}(f_{\mathcal{N}}^{n}|E_{0})\asymp\max\{\deg_{1}(f_{\mathcal{N}}^{n}|T),\deg_{1}(f_{\mathcal{N}}^{n}|A)\}\), therefore
\[n^{2k-1}\lesssim\max\{n^{r-1},n^{2(g^{\prime}-1)}\}\,\]
which implies the required inequality \(2k\leq\max\{r,2g-2r-1\}\).
## 5. Examples of automorphisms on families of abelian varieties
In this section, we produce examples of automorphisms on families of abelian varieties for which Theorem A applies. Using the classification of the endomorphism rings of simple abelian varieties, we give a classification up to isogenies of such families in dimension \(\leq 5\).
### Decomposition of families of abelian varieties
**Proposition 5.1**.: _Let \(\pi\colon\mathcal{X}\to\mathbb{D}\) be a polarized family of abelian varieties of dimension \(g\), and \(f\colon\mathcal{X}\dashrightarrow\mathcal{X}\) be a meromorphic map satisfying \(\pi\circ f=\pi\), and \(f_{t}\) is an algebraic group morphism for all \(t\in\mathbb{D}^{*}\)._
_After a suitable finite base change, there exist two families \(\pi_{j}\colon\mathcal{X}_{j}\to\mathbb{D}\), \(j=0,1\), of polarized abelian varieties, bimeromorphic maps \(f_{j}\in\operatorname{\mathrm{Bim}}(\mathcal{X}_{j})\), and a meromorphic map \(\psi\colon\mathcal{X}_{0}\times\mathcal{X}_{1}\dashrightarrow\mathcal{X}\) over \(\mathbb{D}\) such that \(\psi\circ(f_{0}\times f_{1})=f\circ\psi\) and the following hold:_
1. _the induced map_ \(\mathcal{X}_{0,t}\times\mathcal{X}_{1,t}\to\mathcal{X}_{t}\) _is an isogeny for all_ \(t\in\mathbb{D}^{*}\)_;_
2. \(\lambda_{1}(f_{0,t})=1\)_, and no eigenvalue of_ \(u(f_{1,t})\) _is a root of unity._
In particular, we can apply Theorem A to the family \(f_{1}\colon\mathcal{X}_{1}\dashrightarrow\mathcal{X}_{1}\).
Proof.: For each \(t\neq 0\), we apply Theorem 3.10. We get two abelian subvarieties \(X_{0,t}\) and \(X_{1,t}\) that are both \(f_{t}\)-invariant. Denote by \(f_{i,t}\) the restriction of \(f_{t}\) to \(X_{i,t}\), \(i=0,1\). Then \(\lambda_{1}(f_{0,t})=1\) and none of the eigenvalues of \(u(f_{1,t})\) is a root of unity.
Observe that \(X_{0,t}\) and \(X_{1,t}\) form holomorphic families of abelian varieties. Indeed, we may view \(X_{t}\) as a quotient of a fixed complex vector space \(V\) of dimension \(g\) by a holomorphically varying cocompact lattice \(\Lambda_{t}\). Also, \(f_{t}\) is induced by a holomorphic family of endomorphisms \(u_{t}\colon V\to V\) such that \(u_{t}(\Lambda_{t})=\Lambda_{t}\). We factor the characteristic polynomial of \(u_{t}\) into a product of two polynomials \(P\) and \(Q\) where \(P^{-1}(0)\) consists of roots of unity, and \(Q^{-1}(0)\) contains no root of unity. By the proof of Theorem 3.10, we have \(X_{0,t}=\ker P(u_{t})/(\ker P(u_{t})\cap\Lambda_{t})\) and \(X_{1,t}=\ker Q(u_{t})/(\ker Q(u_{t})\cap\Lambda_{t})\) so that both families are holomorphic over \(\mathbb{D}^{*}\).
Note that both abelian varieties \(X_{0,t}\) and \(X_{1,t}\) are endowed with a polarization induced from the one on \(X_{t}\). We now explain briefly why these families extend over \(\mathbb{D}\). Fix any \(t_{0}\neq 0\), write \(X_{t_{0}}=V/\Lambda_{t_{0}}\). The polarization induces a hermitian form \(H_{t_{0}}\) on \(V\) such that \(\Im(H_{t_{0}})\) is a symplectic form which takes integral values on \(\Lambda_{t_{0}}\). Choose symplectic bases of \(\ker P(u_{t_{0}})\cap\Lambda_{t_{0}}\) and \(\ker Q(u_{t_{0}})\cap\Lambda_{t_{0}}\) respectively, and extend them as a symplectic basis of a finite index subgroup of \(\Lambda_{t_{0}}\). Let \(g_{0}\) (resp. \(g_{1}\)) be the rank of \(\ker P(u_{t_{0}})\cap\Lambda_{t_{0}}\) (resp. of \(\ker Q(u_{t_{0}})\cap\Lambda_{t_{0}}\)).
The family \(X_{t}^{\prime}=X_{0,t}\times X_{1,t}\) over \(\mathbb{D}^{*}\) is a quotient of \(X_{t}\) by a finite group of translations which acts meromorphically on \(\mathcal{X}\) because any torsion section extends meromorphically over \(0\) (the group law inducing a meromorphic map \(\mathcal{X}\times\mathcal{X}\dashrightarrow\mathcal{X}\)). It follows that \(\mathcal{X}^{\prime}\) forms a family of polarized abelian varieties over \(\mathbb{D}\).
As we have a symplectic basis of the lattice defining \(X_{t}^{\prime}\), after a finite base change, there exists a holomorphic map \(s\mapsto Z^{\prime}(s)\), from the upper half-plane to the Siegel domain, an integral matrix \(B^{\prime}\) and a diagonal integral matrix \(D^{\prime}\) such that: \(Z^{\prime}(s+1)=Z^{\prime}(s)+D^{\prime}B^{\prime}\); and \(X_{e^{2i\pi s}}^{\prime}\) is isomorphic to \(\mathbb{C}^{g}\) quotiented out by the lattice \(D^{\prime}\mathbb{Z}^{g}+Z^{\prime}(s)\mathbb{Z}^{g}\). As our reference symplectic basis was a product of two symplectic bases of \(\ker P(u_{t_{0}})\cap\Lambda_{t_{0}}\) and \(\ker Q(u_{t_{0}})\cap\Lambda_{t_{0}}\), it follows that \(Z^{\prime}\) can be written in block-diagonal form \(Z^{\prime}(s)=Z_{0}(s)\oplus Z_{1}(s)\) where \(s\mapsto Z_{i}(s)\), \(i=0,1\), are holomorphic maps from the upper half-plane to the Siegel domains. The monodromy information on \(Z^{\prime}\) implies the existence of integral matrices \(B_{i}\) and diagonal integral matrices \(D_{i}\) such that \(Z_{i}(s+1)=Z_{i}(s)+D_{i}B_{i}\) for each \(i\), which implies that both families \(X_{i,t}\) extend over \(\mathbb{D}\) (see the proof of Proposition 4.10 and §6.2).
Let \(\psi\colon\mathcal{X}_{0}^{*}\times\mathcal{X}_{1}^{*}\to\mathcal{X}^{*}\) be the addition map, and observe that by construction \(\psi\) induces an isogeny \(\mathcal{X}_{0,t}\times\mathcal{X}_{1,t}\to\mathcal{X}_{t}\) for all \(t\neq 0\). Since \(\mathcal{X}_{0}\) is a subfamily of \(\mathcal{X}\), and \(f_{0}\) is the restriction of \(f\) to \(\mathcal{X}_{0}\), the map \(f_{0}\) extends meromorphically to \(X_{0}\). The same argument proves \(f_{1}\) is meromorphic.
### Division algebras
We recall some general facts on division algebras that will be used in the next section.
Let \(B\) be a division algebra which is finite dimensional over \(\mathbb{Q}\). Denote by \(K\) its center (this is a number field), and write \(e=[K:\mathbb{Q}]\). Then by Artin-Wedderburn theorem, \(B\otimes_{K}K^{\mathrm{alg}}\) is isomorphic to \(\operatorname{Mat}_{d}(K^{\mathrm{alg}})\) hence \(\dim_{K}(B)=d^{2}\) for some integer \(d\geq 1\) called the index of \(B\).
Pick any \(x\in B\). The sub-algebra \(\mathbb{Q}[x]\) is finite dimensional, hence \(x\) admits a minimal polynomial \(F_{x}\in\mathbb{Q}[T]\). The \(K\)-linear map \(b\mapsto bx\) is an endomorphism of \(B\) whose characteristic polynomial is the \(d\)-th power of a monic polynomial \(\operatorname{\mathrm{Prd}}_{x}\in K[T]\) of degree \(d\) called the reduced characteristic polynomial. If \(K^{\prime}|K\) is a finite extension such that \(B\otimes_{K}K^{\prime}\simeq\operatorname{\mathrm{Mat}}_{d}(K^{\prime})\) then we have \(\operatorname{\mathrm{Prd}}_{x}(T)=\det(T\operatorname{Id}-\phi(x\otimes 1))\), where \(\phi\) denotes this isomorphism. The reduced trace \(\operatorname{\mathrm{trd}}_{B/K}\colon B\to K\) is defined as the opposite of the coefficient of \(\operatorname{\mathrm{Prd}}_{x}\) in \(T^{d-1}\), and the reduced norm \(\operatorname{\mathrm{Nrd}}_{B/K}\colon B\to K\) is \(\operatorname{\mathrm{Nrd}}_{B/K}(x)=(-1)^{d}\operatorname{\mathrm{Prd}}_{x}(0)\). We define \(\operatorname{\mathrm{trd}}_{B/\mathbb{Q}}(x)=\operatorname{\mathrm{tr}}_{K/\mathbb{Q}}(\operatorname{\mathrm{trd}}_{B/K}(x))\in\mathbb{Q}\), and \(\operatorname{\mathrm{Nrd}}_{B/\mathbb{Q}}(x)=\operatorname{\mathrm{Nm}}_{K/\mathbb{Q}}(\operatorname{\mathrm{Nrd}}_{B/K}(x))\in\mathbb{Q}\) where \(\operatorname{\mathrm{tr}}\) and \(\operatorname{\mathrm{Nm}}\) are the standard trace and norm attached to the field extension \(K/\mathbb{Q}\). The reduced norm \(\operatorname{\mathrm{Nrd}}_{B/\mathbb{Q}}\colon B\to\mathbb{Q}\) is the unique (up to a scalar factor) polynomial function \(N\colon B\to\mathbb{Q}\) of degree \(ed\) on \(B\) such that \(N(ab)=N(a)\cdot N(b)\).
An order \(\mathcal{O}\) in \(B\) is a finitely generated \(\mathbb{Z}\)-submodule which is stable by multiplication (hence is a ring), and such that \(\mathcal{O}\otimes_{\mathbb{Z}}\mathbb{Q}=B\), see [19, Chapter 10]. Any element in an order is integral, i.e., is annihilated by a monic polynomial with integral coefficients. Conversely, any integral element is contained in some order, which one can choose to be maximal under inclusion (see [14, Exercise 10.5]).
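As an elementary illustration of these notions (the simplest case \(d=1\), used only as an example): when \(B=K\) is a number field, \(\operatorname{\mathrm{Prd}}_{x}(T)=T-x\), \(\operatorname{\mathrm{trd}}_{B/K}(x)=\operatorname{\mathrm{Nrd}}_{B/K}(x)=x\), and \(\operatorname{\mathrm{Nrd}}_{B/\mathbb{Q}}(x)=\operatorname{\mathrm{Nm}}_{K/\mathbb{Q}}(x)\). For \(K=\mathbb{Q}(\sqrt{2})\) and \(x=1+\sqrt{2}\), the element \(x\) is integral (it satisfies \(T^{2}-2T-1=0\)), lies in the order \(\mathcal{O}=\mathbb{Z}[\sqrt{2}]\), and \(\operatorname{\mathrm{Nrd}}_{B/\mathbb{Q}}(x)=(1+\sqrt{2})(1-\sqrt{2})=-1\), so \(x\) is a unit of \(\mathcal{O}\).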
### Automorphisms of simple abelian varieties
We refer to [10, IV], or [12, §5.5] for details.
Let \(X=V/\Lambda\) be any abelian variety, \(\operatorname{\mathrm{End}}(X)\) be the ring (under addition and composition) of algebraic group morphisms of \(X\), and define \(\operatorname{\mathrm{End}}_{\mathbb{Q}}(X):=\operatorname{\mathrm{End}}(X)\otimes_{\mathbb{Z}}\mathbb{Q}\). Since \(\operatorname{\mathrm{End}}(X)\) can be identified with the set of complex linear maps \(u\colon V\to V\) such that \(u(\Lambda)\subset\Lambda\), the algebra \(\operatorname{\mathrm{End}}_{\mathbb{Q}}(X)\) is a finite-dimensional \(\mathbb{Q}\)-algebra.
Suppose \(X\) is simple, i.e., contains no non-trivial abelian subvarieties. Then the kernel of any non-zero algebraic group morphism \(u\colon X\to X\) is finite, hence \(\operatorname{\mathrm{End}}_{\mathbb{Q}}(X)\) has no zero divisor and is a division algebra. Since \(X\) is polarized, \(\operatorname{\mathrm{End}}_{\mathbb{Q}}(X)\) admits an involution \(f\mapsto f^{\prime}\) (called the Rosati involution) which is an anti-automorphism such that the quadratic form \(q(f):=\operatorname{\mathrm{tr}}(ff^{\prime}\colon H^{1}(X,\mathbb{Q})\to H^{1}(X,\mathbb{Q}))\) is positive definite. It follows that \(B:=\operatorname{\mathrm{End}}_{\mathbb{Q}}(X)\) is a division ring which is finite dimensional over \(\mathbb{Q}\) and admits an involutive anti-automorphism \(b\mapsto b^{\prime}\) such that \(\operatorname{\mathrm{trd}}_{B/\mathbb{Q}}(bb^{\prime})>0\) for all \(b\neq 0\). The classification of such algebras was obtained by Albert.
Denote by \(K\) its center as above, so that \(B\) has dimension \(d^{2}\) over \(K\). Consider also the field \(K_{0}\subset K\) of elements fixed by the Rosati involution, and write \([K:\mathbb{Q}]=e\), and \([K_{0}:\mathbb{Q}]=e_{0}\). The strong approximation theorem implies that \(K_{0}\) is a totally real number field, and that \(K\) is either equal to \(K_{0}\) or a totally imaginary quadratic extension of \(K_{0}\), see [12, V.5 (5.2) and (5.4)]. In the latter case \(K\) is a CM field.
Further information on the structure of \(B\) is obtained by considering its class in the Brauer group of \(K\). We refer to [10, IV. Application I] for details.
The following table summarizes the information we know about \(B\), \(K\) and \(K_{0}\).
To avoid confusion with the upper half plane \(\mathfrak{H}=\{z\in\mathbb{C},\Im(z)>0\}\), we let \(\mathbb{H}=\{\alpha+\beta i+\gamma j+\delta(ij),\alpha,\beta,\gamma,\delta\in \mathbb{R}\}\) be the Hamilton division algebra over \(\mathbb{R}\) with \(i^{2}=j^{2}=-1\) and \(ij=-ji\). More generally over a field \(K\), we let \(D=D_{a,b}:=\{\alpha+\beta i+\gamma j+\delta(ij),\alpha,\beta,\gamma,\delta\in K\}\) where \(a,b\) are given elements in \(K\) and \(i^{2}=a\), \(j^{2}=b\) and \(ij=-ji\). Note that for \(x=\alpha+\beta i+\gamma j+\delta(ij)\) we have \(\operatorname{\mathrm{trd}}_{D/K}(x)=2\alpha\) and \(\operatorname{\mathrm{Nrd}}_{D/K}(x)=\alpha^{2}-a\beta^{2}-b\gamma^{2}+(ab)\delta^{2}\), and \(D_{-1,-1}=\mathbb{H}\).
Types I, II and III are referred to as being of the first kind: the restriction of the involution is then trivial on \(K\), i.e., \(K=K_{0}\). Division algebras arising in Type IV are called of the second kind.
In Type I, \(B=K\) is any totally real field.
In Type II, \(B\) is any quaternion algebra over a totally real field \(K\) such that for each embedding \(\sigma_{i}\colon K\to\mathbb{R}\), we have \(B\otimes_{\sigma_{i}(K)}\mathbb{R}\simeq\operatorname{\mathrm{Mat}}_{2}(\mathbb{R})\), so that \(B\otimes_{\mathbb{Q}}\mathbb{R}\simeq\operatorname{\mathrm{Mat}}_{2}(\mathbb{R })^{e}\), and the reduced trace is given by \(\sum_{i=1}^{e}\operatorname{\mathrm{tr}}(M_{i})\). One says that \(B\) is totally indefinite. One can show that there exists an isomorphism \(B\otimes_{\mathbb{Q}}\mathbb{R}\simeq\operatorname{\mathrm{Mat}}_{2}(\mathbb{ R})^{e}\) such that the anti-involution on \(B\) corresponds to \((M_{1},\cdots,M_{e})\to(^{t}\!M_{1},\cdots,^{t}\!M_{e})\).
In Type III, \(B\) is any quaternion algebra over a totally real field \(K\) such that for each embedding \(\sigma_{i}\colon K\to\mathbb{R}\), we have \(B\otimes_{\sigma_{i}(K)}\mathbb{R}\simeq\mathbb{H}\), so that \(B\otimes_{\mathbb{Q}}\mathbb{R}\simeq\mathbb{H}^{e}\). One says that \(B\) is totally definite. Through this
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Albert type & Center (number field \(K\)) & \(B\) & Index \(d\) & \(B\otimes_{\mathbb{Q}}\mathbb{R}\) & restriction \\ \hline I & totally real & \(B=K\) & \(1\) & \(\mathbb{R}^{e}\) & \(e|g\) \\ II & totally real & \(D_{a,b}\) & \(2\) & \(\operatorname{Mat}_{2}(\mathbb{R})^{e}\) & \(2e|g\) \\ III & totally real & \(D_{a,b}\), \(a\) and \(b\ll 0\) & \(2\) & \(\mathbb{H}^{e}\) & \(2e|g\) \\ IV & CM & second kind & \(d\geq 1\) & \(\operatorname{Mat}_{d}(\mathbb{C})^{e/2}\) & \(\frac{1}{2}ed^{2}|g\) \\ \hline \end{tabular}
\end{table}
Table 1. Endomorphism algebras of simple abelian varieties
isomorphism, the anti-involution on \(B\) corresponds to \((b_{1},\cdots,b_{e})\to(\sigma_{\mathsf{H}}(b_{1}),\cdots,\sigma_{\mathsf{H}}(b_{ e}))\), and the reduced trace is given by \(\sum_{i=1}^{e}\operatorname{trd}_{\mathsf{H}}(b_{i})\).
Type IV is harder to describe. Then \(B\) is any division algebra over a CM field \(K\), such that the restriction of the involution to \(K\) is the conjugation \(\sigma\colon K\to K\) whose fixed point set is \(K_{0}\), and satisfying the following conditions. For any finite place \(v\) fixed by \(\sigma\), we have \([B]_{v}=0\) in the Brauer group of \(K_{v}\); and for any finite place \(v\), we have \([B]_{v}+[B]_{\sigma(v)}=0\). In that case there exists an isomorphism \(B\otimes_{\mathbb{Q}}\mathbb{R}\simeq\operatorname{Mat}_{d}(\mathbb{C})^{e/2}\) such that the involution on \(B\) corresponds to \((M_{1},\cdots,M_{e/2})\to({}^{t}\!\bar{M}_{1},\cdots,{}^{t}\!\bar{M}_{e/2})\).
Suppose now that we are given \(f\in\operatorname{Aut}_{\bullet}(X)\), which we identify with the morphism \(f^{*}\colon H^{1}(X,\mathbb{Z})\to H^{1}(X,\mathbb{Z})\). Then there exists a monic polynomial with integer coefficients annihilating \(f\), and \(\det(f)=1\). In other words, \(f\) is integral viewed as an element in \(B\) and satisfies \(\operatorname{Nrd}_{B/\mathbb{Q}}(f)=\pm 1\). In fact, by the uniqueness of norms on division algebras, we have
\[\chi_{f}(n)=\operatorname{Nrd}_{B/\mathbb{Q}}(n-f)^{\frac{2g}{ed}}\]
for all \(n\in\mathbb{N}\), where \(\chi_{f}(T)=\det(T\operatorname{Id}-f)\) is the characteristic polynomial of \(f^{*}\colon H^{1}(X,\mathbb{Q})\to H^{1}(X,\mathbb{Q})\), see [1, §13.1]. In particular, \(\lambda_{1}(f)\) is a root of the reduced characteristic polynomial of \(f\) in \(B\). This relation is exploited in [1, 23] to construct many examples of automorphisms of simple abelian varieties whose first dynamical degrees are Salem numbers.
The condition appearing in Theorem A that \(f^{*}\colon H^{1}(X,\mathbb{Q})\to H^{1}(X,\mathbb{Q})\) has no eigenvalue equal to a root of unity is thus equivalent to impose that the polynomial \(\operatorname{Nrd}_{B/\mathbb{Q}}(T-f)\) does not vanish at any root of unity.
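To make this concrete in the simplest case (an illustrative computation only): let \(X\) be an abelian surface with \(B=\operatorname{End}_{\mathbb{Q}}(X)=K=\mathbb{Q}(\sqrt{2})\) (type I, \(d=1\), \(e=2\), \(g=2\)), and let \(f\) be multiplication by the unit \(\varepsilon=1+\sqrt{2}\), assuming \(\mathbb{Z}[\sqrt{2}]\subset\operatorname{End}(X)\) so that \(f\in\operatorname{Aut}_{\bullet}(X)\). Then \(\operatorname{Nrd}_{B/\mathbb{Q}}(f)=\operatorname{Nm}_{K/\mathbb{Q}}(\varepsilon)=-1\), and
\[\chi_{f}(T)=\operatorname{Nm}_{K/\mathbb{Q}}(T-\varepsilon)^{2g/(ed)}=(T^{2}-2T-1)^{2},\]
whose roots \(1\pm\sqrt{2}\) are not roots of unity; by Corollary 3.7, \(\lambda_{1}(f)=(1+\sqrt{2})^{2}=3+2\sqrt{2}\).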
### Automorphisms of families of abelian varieties having no cyclotomic factors
Let us explain how to construct non-isotrivial families of polarized abelian varieties \(\mathcal{X}\), and families of algebraic group isomorphisms \(f_{t}\in\operatorname{Aut}_{\bullet}(X_{t})\) such that the characteristic polynomial of \(u(f_{t})\) has no cyclotomic factor, and \(X_{t}\) is simple for some (hence for a generic) \(t\in\mathbb{D}^{*}\).
We consider the moduli spaces of abelian varieties whose ring of endomorphisms contains a given division algebra. The principle is to start with a division algebra \(B\) which is finite dimensional over \(\mathbb{Q}\), together with a positive involutive anti-automorphism, and a representation \(\rho\colon B\to\operatorname{Mat}_{g}(\mathbb{C})\) such that there exists a representation \(\theta\colon B\to\operatorname{Mat}_{2g}(\mathbb{Q})\) for which \(\theta\otimes 1_{\mathbb{C}}=\rho\oplus\bar{\rho}\) (see [1, Lemma 9.1.1] for an intrinsic characterization of such representations). Note that all examples are described by Albert as above. Then we choose an order \(\mathcal{O}\subset B\) and an element \(f\in\mathcal{O}\) such that \(\operatorname{Nrd}_{B/\mathbb{Q}}(f)=\pm 1\).
As explained in [1, Chapter 9], each datum \((B,\mathcal{O},\rho)\) gives rise to a complex variety \(\mathfrak{H}(B,\mathcal{O},\rho)\) (to several such varieties for type IV algebras) parameterizing triples \((X,H,\imath)\) where \(X=\mathbb{C}^{g}/\Lambda\) is an abelian variety, \(H\) is a positive definite hermitian form on \(\mathbb{C}^{g}\) corresponding to a polarization of \(X\), and \(\imath\colon\mathcal{O}\to\operatorname{End}(X)\subset\operatorname{Mat}_{g}(\mathbb{C})\) is an embedding sending the Rosati involution to the involution in \(B\) such that \(\imath\) and \(\rho\) are equivalent representations. It turns out that \(\mathfrak{H}(B,\mathcal{O},\rho)\) is always a hermitian symmetric domain, see Table 2 below. The moduli space \(\mathfrak{M}(B,\mathcal{O},\rho)\) of such triples up to natural isomorphism is a quotient of \(\mathfrak{H}(B,\mathcal{O},\rho)\) by a discrete (arithmetic) subgroup, which admits a canonical structure of quasi-projective variety. Its Satake-Baily-Borel compactification is a projective variety \(\overline{\mathfrak{M}(B,\mathcal{O},\rho)}\).
Any holomorphic map \(\mathbb{D}\to\overline{\mathfrak{M}(B,\mathcal{O},\rho)}\) and any choice of \(\mu\in\mathcal{O}\) with \(\operatorname{Nrd}_{B/\mathbb{Q}}(\mu)=\pm 1\) gives rise to a family of polarized abelian varieties \(X_{t}\) with automorphisms \(f_{t}\colon X_{t}\to X_{t}\) over \(\mathbb{D}\).
In the next table, we set
\[\mathfrak{H}_{g} :=\{Z\in\operatorname{Mat}_{g\times g}(\mathbb{C})|\,{}^{t}\!Z=Z \text{ and }\Im(Z)>0\}\] \[\mathcal{H}_{g} :=\{Z\in\operatorname{Mat}_{g\times g}(\mathbb{C})|\,{}^{t}\!Z=-Z \text{ and }\operatorname{Id}-{}^{t}\!\bar{Z}Z>0\}\] \[\mathcal{H}_{r,s} :=\{Z\in\operatorname{Mat}_{r\times s}(\mathbb{C})|\,\operatorname{ Id}_{s}-{}^{t}\!\bar{Z}Z>0\}\]
Explaining in details all possible outcomes for all types of endomorphism algebras of simple abelian varieties would be too lengthy. We discuss only those examples that arise in dimension \(\leq 5\) (up to isogenies).
_Type I._ [1, §9.2]. Let \(K\) be any totally real field of degree \(e\geq 2\) over \(\mathbb{Q}\) with ring of integers \(\mathcal{O}_{K}\). Write \(g=l\times e\) for some \(l\in\mathbb{N}^{*}\). Fix non-equivalent embeddings \(\{\imath_{j}\colon K\to\mathbb{R}\}_{1\leq j\leq e}\). Pick \(\mu\in\mathcal{O}_{K}^{*}\) any unit which is not a root of unity (it always exists by Dirichlet's unit theorem if \(e\geq 2\)).
For any \(Z\in(\mathfrak{H}_{l})^{e}\), introduce the lattice \(\Lambda_{z}\) given as the image of
\[\lambda_{z}\colon\mathcal{O}_{K}^{l}\oplus\mathcal{O}_{K}^{l}\to\mathbb{C}^{g}, \lambda_{z}(\alpha,\beta)=(\alpha_{1}Z_{1}+\beta_{1},\cdots,\alpha_{e}Z_{e}+ \beta_{e})\.\]
Here we have \(\alpha=(\alpha^{1},\cdots,\alpha^{l})\in\mathcal{O}_{K}^{l}\), \(\beta=(\beta^{1},\cdots,\beta^{l})\in\mathcal{O}_{K}^{l}\), and we write \(\alpha_{j}=(\imath_{j}(\alpha^{1}),\cdots,\imath_{j}(\alpha^{l}))\), and \(\beta_{j}=(\imath_{j}(\beta^{1}),\cdots,\imath_{j}(\beta^{l}))\). Then \(X_{z}:=\mathbb{C}^{g}/\Lambda_{z}\) is an abelian variety with polarization
\[E(v,w)=\sum_{j=1}^{e}\Im(v_{j}(\Im Z_{j})^{-1}\bar{w}_{j})\,\]
which is integral on \(\Lambda_{z}\). The diagonal map \((\mu_{1}\operatorname{Id}_{l},\cdots,\mu_{e}\operatorname{Id}_{l})\) on \(\mathbb{C}^{g}\) is an automorphism of \(\Lambda_{z}\) hence we get an automorphism \(f_{z}\colon X_{z}\to X_{z}\) whose characteristic polynomial is irreducible and has no cyclotomic factor. The first dynamical degree of \(f_{z}\) equals \(\max\{|\mu_{j}|^{2}\}\) by Corollary 3.7.
**Remark 5.2**.: Observe that the subspace \(\mathbb{R}^{g}\subset\mathbb{C}^{g}\) intersects \(\Lambda_{z}\) in a lattice, and is invariant under the diagonal map. It follows that \(f_{z}\) admits an invariant totally real compact torus of dimension \(g\).
The manifold \(\mathfrak{H}(K,\mathcal{O}_{K},\rho)\) is isomorphic to \((\mathfrak{H}_{l})^{e}\), has dimension \(\frac{e}{2}l(l+1)\), and \(\mathfrak{M}(K,\mathcal{O}_{K},\rho)\) is obtained by quotienting it by an arithmetic subgroup of the product of \(e\) copies of the symplectic group in dimension \(2l\). When \(l=1\), we have \(e=g\) and \(\mathfrak{M}(K,\mathcal{O}_{K},\rho)=\mathfrak{H}^{g}/\operatorname{PSL}(2,\mathcal{O}_{K})\) is a Hilbert-Blumenthal modular variety. We refer to [13, Chapter 1] for the proof that it is a quasi-projective variety of dimension \(g\) that can be projectively compactified by adding finitely many points.
**Remark 5.3**.: There exist smooth families of polarized abelian varieties defined over a _compact_ Riemann surface \(X\to B\) that admit a family of automorphisms with \(\lambda_{1}>1\). Indeed since the Hilbert-Blumenthal variety is the complement of a finite set in a projective variety, it contains many compact curves (of high genus, see [10]).
_Type II_ & _III_. The precise construction is given in [1, §§9.3-5] or [14].
_Type IV_. We refer to [1, §9.6] for details of the construction in this case. Observe that there is no non-trivial family of such type in dimension \(\leq 5\).
### Classification in low dimension
We list, up to isogeny and using Proposition 5.1, all positive-dimensional families of polarized abelian varieties which admit a family of automorphisms \(f_{t}\) such that the characteristic polynomial of \(f_{t}^{*}\colon H^{1}(X_{t},\mathbb{Q})\to H^{1}(X_{t},\mathbb{Q})\) has no cyclotomic factors (in particular \(\lambda_{1}(f_{t})>1\)).
Any abelian variety \(X\) is isogenous to a direct product \(X_{1}^{k_{1}}\times\cdots\times X_{m}^{k_{m}}\) where \(X_{1},\cdots,X_{m}\) are simple pairwise non-isogenous abelian varieties. The ring \(\operatorname{End}_{\mathbb{Q}}(X_{1}^{k_{1}}\times\cdots\times X_{m}^{k_{m}})\) is then isomorphic to \(\operatorname{Mat}_{k_{1}}(\operatorname{End}_{\mathbb{Q}}(X_{1}))\times\cdots\times\operatorname{Mat}_{k_{m}}(\operatorname{End}_{\mathbb{Q}}(X_{m}))\) where \(\operatorname{End}_{\mathbb{Q}}(X_{i})\) is a division algebra of type I, II, III or IV. Thus, the classification of families of automorphisms as in Theorem A is reduced to a combinatorial question.
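For example (an illustration of this combinatorics): if \(X_{t}\) is isogenous to \(E_{t}^{2}\) with \(E_{t}\) an elliptic curve without complex multiplication, then \(\operatorname{End}_{\mathbb{Q}}(X_{t})\simeq\operatorname{Mat}_{2}(\mathbb{Q})\), any \(f\in\operatorname{Aut}_{\bullet}(X_{t})\) is induced by a matrix \(M\in\operatorname{GL}(2,\mathbb{Z})\), and the eigenvalues of \(f^{*}\) avoid roots of unity precisely when the characteristic polynomial of \(M\) has no cyclotomic factor (e.g. \(M=\begin{pmatrix}2&1\\ 1&1\end{pmatrix}\), for which \(\lambda_{1}(f)=\big((3+\sqrt{5})/2\big)^{2}\)).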
To exclude certain families, we observe that an abelian variety \(X\) with \(\operatorname{End}_{\mathbb{Q}}(X)=\mathbb{Q}\) does not carry any automorphism with \(\lambda_{1}>1\). In particular, in the type I case, we can always suppose \(e>1\).
Also we shall use the work of Shimura, namely [14, Theorem 5], to rule out some cases:
* Type III and \(g/2e=1\): then \(X\) is not simple;
* Type IV and \(\sum r_{\nu}s_{\nu}=0\): then \(X\) is not simple;
* Type IV with \(l=2\), \(d=1\), \(r_{\nu}=s_{\nu}=1\) for all \(\nu\): then the endomorphism algebra is included in a type III example;
* Type IV with \(l=1\), \(d=2\), \(r_{\nu}=s_{\nu}=1\) for all \(\nu\): then \(X\) is not simple.
\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline Albert type & dimension of the abelian variety & symmetric domain & dimension of the moduli space \\
\hline I & \(le\) & \((\mathfrak{H}_{l})^{e}\) & \(\frac{e}{2}l(l+1)\) \\
II & \(2le\) & \((\mathfrak{H}_{l})^{e}\) & \(\frac{e}{2}l(l+1)\) \\
III & \(2le\) & \((\mathcal{H}_{l})^{e}\) & \(\frac{e}{2}l(l-1)\) \\
IV & \(d^{2}el=2d^{2}e_{0}l\) & \(\prod\mathcal{H}_{r_{j},s_{j}}\) & \(\max_{r_{j}+s_{j}=dl}\sum_{j=1}^{e_{0}}r_{j}s_{j}\) \\
\hline
\end{tabular}
\end{table}
Table 2. Moduli space of abelian varieties with endomorphism structure
In each case, we denote by \(m\) the maximal dimension of families of abelian varieties that one can obtain. In the lists below, by convention, a unit means a unit not lying in the group of roots of unity.
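As a quick sanity check on the dimension counts quoted in the lists below, the formulas of Table 2 for types I-III can be tabulated directly; the snippet below is only an illustrative helper (not part of the argument) and reproduces some of the values of \(m\) for the generically simple cases.

```python
# Moduli-space dimensions of Table 2 for Albert types I-III (illustrative helper).
def moduli_dim(albert_type, l, e):
    if albert_type in ("I", "II"):
        return e * l * (l + 1) // 2
    if albert_type == "III":
        return e * l * (l - 1) // 2
    raise ValueError("type IV requires the signature data (r_j, s_j)")

print(moduli_dim("I", 1, 2))   # 2: abelian surfaces with real multiplication (case 2.2)
print(moduli_dim("I", 1, 4))   # 4: case 4.5
print(moduli_dim("I", 2, 2))   # 6: case 4.6
print(moduli_dim("II", 1, 2))  # 2: case 4.7
print(moduli_dim("II", 2, 1))  # 3: case 4.8
```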
\(g=2\)
2.1 generically non-simple: \(X_{t}=E_{t}^{2}\), \(\dim(E_{t})=1\); \(f\) is determined by a unit in a totally real quadratic field (i.e., a matrix in \(\operatorname{SL}(2,\mathbb{Z})\)), \(m=1\);
2.2 generically simple with endomorphism algebra of type I, \(l=1\), \(e=2\); \(f\) is determined by a unit in a totally real quadratic field not in \(\mathbb{U}_{\infty}\), \(m=2\).
**Remark 5.4**.: Abelian surfaces whose endomorphism algebra is of type II, III or IV are rigid (see Table 2). A direct proof of these facts is given in [11].
\(g=3\)
3.1 generically non-simple: \(X_{t}=E_{t}^{3}\), \(\dim(E_{t})=1\); \(f\) is determined by a unit in a cubic field (i.e., a matrix in \(\operatorname{SL}(3,\mathbb{Z})\)), \(m=1\);
3.2 generically simple with endomorphism algebra of type I, \(l=1\), \(e=3\); \(f\) is determined by a unit in a totally real cubic field, \(m=3\).
**Remark 5.5**.: Simple abelian 3-folds whose endomorphism algebra is of type I and has invariants \(l=3\) and \(e=1\) do not carry automorphisms with \(\lambda_{1}>1\). Types II, III, IV are not possible since \(g\) is odd. Also any automorphism of the family \(X_{t}=E_{t}\times Y_{t}\) where \(Y_{t}\) is as in 2.2 preserves \(E_{t}\), thus, there is an eigenvalue of \(f_{*}\) which is a root of unity for any \(f\in\operatorname{End}(X_{t})\).
\(g=4\)
4.1 generically non-simple: \(X_{t}=E_{t}^{4}\), \(\dim(E_{t})=1\); \(f\) is determined by a unit in a quartic field (i.e., a matrix in \(\operatorname{SL}(4,\mathbb{Z})\)), \(m=1\);
4.2 generically non-simple: \(X_{t}=E_{t}^{2}\times A_{t}\), \(\dim(E_{t})=1\), \(\dim(A_{t})=2\) as in 2.2; \(f\) is determined by two units in two real quadratic fields, \(m=2\);
4.3 generically non-simple: \(X_{t}=A^{1}_{t}\times A^{2}_{t}\), \(\dim(A^{i}_{t})=2\) as in 2.2; \(f\) is determined by two units in two real quadratic fields, \(m=4\);
4.4 generically non-simple: \(X_{t}=(A_{t})^{2}\), \(\dim(A_{t})=2\) as in 2.2; \(f\) is determined by a matrix in \(\operatorname{GL}(2,\mathcal{O}_{K})\) for some real quadratic field \(K\), \(m=2\);
4.5 generically simple with endomorphism algebra of type I, \(l=1\), \(e=4\); \(f\) is determined by a unit in a totally real quartic field, \(m=4\);
4.6 generically simple with endomorphism algebra of type I, \(l=2\), \(e=2\); \(f\) is determined by a unit in a real quadratic field, \(m=6\);
4.7 generically simple with endomorphism algebra of type II, \(e=2\), \(l=1\); \(f\) is determined by \(\mu\in B\), a totally indefinite quaternion algebra over a real quadratic field, with \(\operatorname{Nrd}_{B/\mathbb{Q}}(\mu)=\pm 1\), \(m=2\);
4.8 generically simple with endomorphism algebra of type II, \(e=1\), \(l=2\); \(f\) is determined by \(\mu\in B\), a totally indefinite quaternion algebra over \(\mathbb{Q}\), with \(\operatorname{Nrd}_{B/\mathbb{Q}}(\mu)=\pm 1\), \(m=3\).
**Remark 5.6**.: Observe that in the last two cases 4.7 and 4.8, the existence of \(\mu\) with reduced characteristic polynomial having no cyclotomic factor puts some restriction on the algebra. In case 4.6, if \(B\) is given by \(i^{2}=a\) and \(j^{2}=b\), we need the existence of a solution of the quadratic equation \(x^{2}-ay^{2}-bz^{2}+(ab)t^{2}=\epsilon\), with \((x,y,z,t)\in\mathbb{Q}^{4}\), \(\epsilon=\pm 1\), and \(x\notin\{0,\pm 1\}\) if \(\epsilon=1\) and \(x\neq 0\) if \(\epsilon=-1\).
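The norm condition of the previous remark is amenable to a direct search. The sketch below is only an illustration: the pair \((a,b)=(2,3)\) is an arbitrary choice of ours, and only small integral solutions of \(x^{2}-ay^{2}-bz^{2}+ab\,t^{2}=\pm 1\) are enumerated.

```python
from itertools import product

a, b = 2, 3   # illustrative quaternion algebra B with i^2 = a, j^2 = b

def nrd(x, y, z, t):
    # Reduced norm of mu = x + y*i + z*j + t*ij in B.
    return x * x - a * y * y - b * z * z + a * b * t * t

solutions = []
for x, y, z, t in product(range(-6, 7), repeat=4):
    eps = nrd(x, y, z, t)
    if (eps == 1 and x not in (0, 1, -1)) or (eps == -1 and x != 0):
        solutions.append((x, y, z, t))

print(len(solutions), "small integral solutions, e.g.", solutions[:3])
```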
**Remark 5.7**.: Simple abelian 4-folds whose endomorphism algebra is of type I (resp. type IV) and has invariants \(l=4\) and \(e=1\) (resp. \(d=1\), \((e_{0},l)=(1,2)\)) do not carry automorphisms with \(\lambda_{1}>1\). Simple abelian 4-folds whose endomorphism algebra is of type III with \(e=2\), \(l=1\) (resp. type IV with \(d=1\), \(l=1\)) are rigid. Finally, for any totally definite quaternion algebra of type III with \(e=1\) defined over \(\mathbb{Q}\), the elements with \(\operatorname{Nrd}_{B/\mathbb{Q}}(\mu)=\pm 1\) form a finite group.
\(g=5\)
5.1 generically non-simple: \(X_{t}=E_{t}^{5}\), \(\dim(E_{t})=1\); \(f\) is determined by a unit in a quintic field (i.e., a matrix in \(\operatorname{SL}(5,\mathbb{Z})\)), \(m=1\);
5.2 generically non-simple: \(X_{t}=E_{t}^{2}\times Y_{t}\), \(\dim(E_{t})=1\) as in 2.1, \(\dim(Y_{t})=3\) as in 3.2; \(f\) is determined by two units in the rings of integers of a totally real quadratic field and a totally real cubic field respectively, \(m=4\);
5.3 generically non-simple: \(X_{t}=A_{t}\times E_{t}^{3}\), \(\dim(A_{t})=2\) as in 2.2, \(\dim(E_{t})=1\) as in 3.1; \(f\) is determined by two units in the rings of integers of a real quadratic field and of a cubic field, \(m=3\);
5.4 generically non-simple: \(X_{t}=A_{t}\times Y_{t}\), \(\dim(A_{t})=2\) as in 2.2, \(\dim(Y_{t})=3\) as in 3.2; \(f\) is determined by two units in the rings of integers of a totally real quadratic field and a totally real cubic field respectively, \(m=5\);
5.5 generically simple with endomorphism algebra of type I, \(l=1\), \(e=5\); \(f\) is determined by a unit in a totally real quintic field, \(m=5\).
**Remark 5.8**.: Simple abelian 5-folds whose endomorphism algebra is of type I and has invariants \(l=5\) and \(e=1\) do not carry automorphisms with \(\lambda_{1}>1\).
## 6. Translations on families of abelian varieties
In this section, we focus our attention on families of translations, and prove Theorems E and F from the introduction.
### Relative toric manifolds over \(\mathbb{D}\)
Following Mumford, we build a suitable compactification of degenerations of abelian varieties by using special families of toric varieties.
Let \(R\) be the discrete valuation ring of germs of holomorphic functions on \(\mathbb{D}\) at \(0\), and let \(K\) be the fraction field of \(R\). We shall consider toric \(R\)-schemes (as defined in [11, IV.3] or [1, Chapter 3.5]) which are smooth over \(R\) and whose generic fiber is equal to \(\mathbb{G}_{m}^{g}\) over \(K\). Any such scheme gives rise to a smooth complex manifold \(X\) containing \(\mathbb{D}^{*}\times\mathbb{G}_{m}^{g}\) as an open dense subset such that:
* the projection map onto the first factor extends to a surjective holomorphic map \(\pi\colon X\to\mathbb{D}\) and \(\pi^{-1}(\mathbb{D}^{*})=\mathbb{D}^{*}\times\mathbb{G}_{m}^{g}\) ;
* the action of \(\mathbb{G}_{m}^{g}\) by multiplication on the second factor of \(\mathbb{D}^{*}\times\mathbb{G}_{m}^{g}\) extends holomorphically to \(X\).
For convenience, we shall refer to any such manifold as a relative toric manifold of dimension \(g\) over \(\mathbb{D}\). Let us now describe the combinatorial data that encodes toric \(R\)-schemes satisfying the conditions above, and explain how to build the relative toric manifold from these data.
Let \(M\) be the lattice of characters of \(\mathbb{G}_{m}^{g}\), so that an element \(m\in M\) is a group morphism \(\mathrm{e}(m)\colon\mathbb{G}_{m}^{g}\to\mathbb{G}_{m}\). Define \(N=\mathrm{Hom}(M,\mathbb{Z})\) its lattice of co-characters (a point \(n\in N\) corresponds to a one-parameter subgroup of \(\mathbb{G}_{m}^{g}\)). Write \(\tilde{M}_{\mathbb{R}}:=M_{\mathbb{R}}\times\mathbb{R}_{+}\) and \(\tilde{N}_{\mathbb{R}}:=N_{\mathbb{R}}\times\mathbb{R}_{+}\).
An admissible5 fan \(\Delta\) is a (possibly infinite) collection of rational polyhedral cones in \(\tilde{N}_{\mathbb{R}}\) that is closed under taking intersections and faces. We impose moreover the following conditions:
Footnote 5: this is a purely local terminology
* no cone of \(\Delta\) contains a linear subspace of \(N_{\mathbb{R}}\) (i.e., \(\sigma\) is strongly convex);
* each cone \(\sigma\in\Delta\) is simplicial (but not necessarily regular6); Footnote 6: a simplicial cone \(\sigma\) of dimension \(k\) is regular if the semi-group \(\sigma\cap\tilde{N}\) is generated by \(k\) elements
* no cone is contained in \(N_{\mathbb{R}}\times\{0\}\).
The last two conditions respectively correspond to the fact that we want the resulting toric variety \(\mathcal{X}\) to be smooth, and the generic fiber over \(\mathbb{D}\) to be the split torus.
A ray of \(\Delta\) is a one-dimensional cone.
For each cone \(\sigma\), we define the semi-group \(S_{\sigma}=\{\tilde{m}\ \in M\times\mathbb{Z},\,\langle\tilde{m},\tilde{n} \rangle\geq 0\text{ for all }\ \tilde{n}\in\sigma\}\). Following, e.g., [1, §1.2], we associate to each cone \(\sigma\) the relative (affine) toric manifold of dimension \(g\) over \(\mathbb{D}\) by setting
\[X_{\sigma}=\{x\colon S_{\sigma}\to\mathbb{C},\,x(\tilde{m}_{1}+\tilde{m}_{2}) =x(\tilde{m}_{1})\cdot x(\tilde{m}_{2}),\,x(0)=1\ \text{ and }\ |x(0,1)|<1\}\]
which we endow with the topology of pointwise convergence, and with the unique structure of complex manifold for which any \(\tilde{m}\in S_{\sigma}\) induces a holomorphic function.
The manifold \(X_{\sigma}\) is equipped with a canonical holomorphic map \(\pi_{\sigma}(x):=x(0,1)\in\mathbb{D}\), and \(\mathbb{C}[\tilde{M}]\) forms a subalgebra of the field of meromorphic functions on \(X_{\sigma}\) that are regular on \(X_{\sigma}^{*}=\pi_{\sigma}^{-1}(\mathbb{D}^{*})\) (the element \((0,1)\) corresponds to \(\pi_{\sigma}\)).
Note that by construction an element in \(\mathbb{C}[\tilde{M}]\) is holomorphic in \(X_{\sigma}\) if and only if it belongs to \(\mathbb{C}[S_{\sigma}]\). And this algebra generates the structure sheaf \(\mathcal{O}_{X_{\sigma}}\) as a \(\mathcal{O}_{X_{\sigma}}\)-module. It is customary to write \(\mathrm{e}(\tilde{m})\) for the holomorphic function induced by \(\tilde{m}\in S_{\sigma}\) on \(X_{\sigma}\).
The torus \(\mathbb{G}^{g}_{m}\) acts on itself by multiplication, hence on the \(\mathbb{C}\)-algebra \(\mathbb{C}[M]\), and the trivial extension of this action to \(\mathbb{C}[\tilde{M}]\) (leaving the vector \((0,1)\) fixed) turns \(X_{\sigma}\) into a relative toric manifold over \(\mathbb{D}\).
**Remark 6.1**.: As \(\sigma\) is regular, we may find \(\tilde{e}_{0}=(e_{0},1),\cdots,\tilde{e}_{k}=(e_{k},1)\in N\times\mathbb{N}\) such that \(\sigma\cap\tilde{N}\) is generated by \((\tilde{e}_{0},\cdots,\tilde{e}_{k})\). We may suppose \(e_{0}=0\) and complete the family of vectors \(e_{i}\) such that \(e_{1},\cdots,e_{g}\) form a basis of \(N\). In the canonical basis \((0,1),(e_{i}^{*},0)\) of \(M\times\mathbb{Z}\), \(S_{\sigma}\) is equal to \(\{(a_{1},\cdots,a_{g},b)\in\mathbb{Z}^{g+1},\,b\geq 0,\,b+a_{j}\geq 0\text{ for all }j=1,\cdots,k\}\), which is generated by \((e_{j}^{*},0),(-e_{j}^{*},1)\) for \(j=1,\cdots,k\), and \(\pm(e_{j}^{*},0)\) for \(j=k+1,\cdots,g\). Hence \(X_{\sigma}\) is biholomorphic to
\[\{(z_{1},w_{1},\cdots,z_{k},w_{k},y_{k+1},\cdots,y_{g},t)\in\mathbb{C}^{2k}\times(\mathbb{C}^{*})^{g-k}\times\mathbb{D},\,z_{1}w_{1}=t,\cdots,z_{k}w_{k}=t \}\.\]
Any inclusion of cones \(\sigma\subset\sigma^{\prime}\) yields an inclusion \(S_{\sigma}\supset S_{\sigma^{\prime}}\) hence induces by restriction an embedding \(X_{\sigma}\subset X_{\sigma^{\prime}}\) preserving the action by \(\mathbb{G}^{g}_{m}\) and the projection to \(\mathbb{D}\). We define \(X(\Delta)\) as the disjoint union of all \(X_{\sigma}\) where \(\sigma\) ranges over all cones of the fan \(\Delta\), and \(X_{\sigma}\) and \(X_{\sigma^{\prime}}\) are patched along \(X_{\sigma\cap\sigma^{\prime}}\) (interpreted as an open subset of both).
Any relative toric manifold over \(\mathbb{D}\) is isomorphic to \(X(\Delta)\) for some fan satisfying the conditions above. We shall denote by \(\pi_{\Delta}\colon X(\Delta)\to\mathbb{D}\) the canonical projection map. As before, \(\mathbb{C}[\tilde{M}]\) forms a subalgebra of the field of meromorphic functions on \(X(\Delta)\) that are regular on \(X(\Delta)^{*}\), and the action of \(\mathbb{G}^{g}_{m}\) on \(X(\Delta)\) is the dual to its natural action on \(\mathbb{C}[\tilde{M}]\).
**Lemma 6.2**.: _Let \(\phi\colon\mathbb{D}^{*}\to\mathbb{G}^{g}_{m}\) be any holomorphic map that is meromorphic at \(0\), and let \(n_{\phi}\in N\) be the unique co-character satisfying_
\[\langle n_{\phi},m\rangle:=\operatorname{ord}_{t}\mathrm{e}(m)(\phi(t))\text{ for all }\ m\in M\.\]
_Then for any fan \(\Delta\), the map \(t\mapsto(\phi(t),t)\) extends as a holomorphic function \(\mathbb{D}\to X(\Delta)\) if and only if \(\mathbb{R}_{+}\cdot(n_{\phi},1)\) belongs to a cone in \(\Delta\)._
Proof.: Let \(\Delta\) be any admissible fan. Then \(\tilde{\phi}(t):=(\phi(t),t)\) extends as a holomorphic function \(\mathbb{D}\to X(\Delta)\) if and only if there exists a cone \(\sigma\in\Delta\) such that \(\tilde{\phi}\colon\mathbb{D}\to X_{\sigma}\) is holomorphic.
Since \(\mathbb{C}[S_{\sigma}]\) generates the structure sheaf \(\mathcal{O}_{X_{\sigma}}\) as a \(\mathcal{O}_{X_{\sigma}}\)-module, \(\tilde{\phi}\colon\mathbb{D}\to X_{\sigma}\) is holomorphic if and only if \(\mathrm{e}(\tilde{m})(\tilde{\phi})\) is holomorphic at \(0\) for any \(\tilde{m}\in S_{\sigma}\). The latter is equivalent to the condition \(\operatorname{ord}_{t}(\mathrm{e}(\tilde{m})(\tilde{\phi}(t)))\geq 0\) for all \(\tilde{m}\in S_{\sigma}\).
If \((n_{\phi},1)\) does not belong to \(\sigma\), then one can find \(\tilde{m}=(m,k)\in S_{\sigma}\subset M\times\mathbb{Z}\) such that \(\langle(n_{\phi},1),\tilde{m}\rangle<0\), and
\[\operatorname{ord}_{t}(\mathrm{e}(\tilde{m})(\tilde{\phi}(t))) =\operatorname{ord}_{t}(\mathrm{e}(m,k)(\tilde{\phi}(t)))\] \[=\operatorname{ord}_{t}(\mathrm{e}(m)(\phi(t)))+k=\langle(n_{\phi },1),\tilde{m}\rangle<0\]
so that \(\tilde{\phi}\) does not extend holomorphically.
If \((n_{\phi},1)\) belongs to \(\sigma\), then the same computation shows that \(\tilde{\phi}\) extends holomorphically.
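The criterion of Lemma 6.2 is purely combinatorial once the orders of vanishing of the coordinates of \(\phi\) are known. The following sketch is only an illustration: the map \(\phi(t)=(t^{2},3t^{-1})\) into \(\mathbb{G}_{m}^{2}\) and the cone \(\sigma\) are hypothetical choices of ours; the script computes \(n_{\phi}\) and tests whether \((n_{\phi},1)\) lies in a full-dimensional simplicial cone.

```python
import numpy as np

def order_of_vanishing(coeffs):
    """Order at t = 0 of a Laurent polynomial given as {exponent: coefficient}."""
    return min(e for e, c in coeffs.items() if c != 0)

# Hypothetical map phi: D* -> G_m^2, phi(t) = (t^2, 3 t^{-1}).
phi = [{2: 1.0}, {-1: 3.0}]
n_phi = np.array([order_of_vanishing(c) for c in phi])   # (2, -1)
v = np.append(n_phi, 1.0)                                # the vector (n_phi, 1)

def in_simplicial_cone(v, generators, tol=1e-9):
    """v lies in the cone spanned by a basis of generators iff its coordinates are >= 0."""
    G = np.array(generators, dtype=float).T              # generators as columns
    coeffs = np.linalg.solve(G, v)
    return bool(np.all(coeffs >= -tol))

sigma = [(0, 0, 1), (4, 0, 1), (0, -2, 1)]               # a simplicial cone in N_R x R_+
print("n_phi =", n_phi, "; phi extends over this chart:", in_simplicial_cone(v, sigma))
```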
### Analytic Mumford's construction following Nakamura
We describe in this section the construction of a proper model of a family of polarized abelian varieties over an open disc as done in [14]. Recall our notation for the upper half-plane \(\mathfrak{H}=\{l\in\mathbb{C}|\ \Im(l)>0\}\) and for the Siegel domain \(\mathfrak{H}_{g}=\{M\in\operatorname{Mat}_{g\times g}(\mathbb{C})|\ {}^{t}M=M\text{ and }\Im(M)>0\}\) for any \(g\geq 1\).
Let \(\pi\colon\mathcal{X}\to\mathbb{D}\), \(\mathrm{L}\to\mathcal{X}\) be any family of polarized abelian varieties. Fix any \(t_{\star}\in\mathbb{D}^{*}\), write \(X_{\star}=X_{t_{\star}}\) as the quotient of some complex vector space \(V\) of dimension \(g\) by a co-compact lattice \(\Lambda_{\star}\). The polarization induces a positive-definite hermitian form \(H\) on \(V\) whose imaginary part \(E=\Im H\) takes integral values on \(\Lambda_{\star}\) (note that \(H(v,w)=E(iv,w)+iE(v,w)\)). Since \(E\) is a skew-symmetric form on \(V\), we may choose a symplectic basis \(f_{1},\cdots,f_{g},e_{1},\cdots,e_{g}\) of the lattice \(\Lambda_{\star}\) so that
\[E=\begin{pmatrix}0&D\\ -D&0\end{pmatrix}\]
for some diagonal integral matrix \(D=\operatorname{diag}(d_{1},\cdots,d_{g})\) with \(d_{1}|\cdots|d_{g}\). In the basis \(\tilde{e}_{i}=\frac{1}{d_{i}}e_{i}\) of \(V\), the lattice is generated by the vectors given by the columns of the following period matrix
\[\Pi_{\star}=\begin{pmatrix}a_{11}&\ldots&a_{1g}&d_{1}&0&\ldots&0\\ a_{21}&\ldots&a_{2g}&0&d_{2}&\ldots&0\\ &\ddots&&&\ddots&\\ a_{g1}&\ldots&a_{gg}&0&0&\ldots&d_{g}\end{pmatrix}=(Z_{\star}\;D)\]
where \(Z_{\star}\) belongs to \(\mathfrak{H}_{g}\). Now consider the universal covering map \(\operatorname{\mathsf{e}}\colon\mathfrak{H}\;\to\mathbb{D}^{\ast}\) with \(\operatorname{\mathsf{e}}(s)=\exp(2\pi is)\). Identifying the universal covers of \(X_{t}\) to \(V\) so that the polarization \(\operatorname{\mathsf{L}}_{t}\to X_{t}\) defines the same hermitian form \(H\) for all \(t\), we may locally follow holomorphically the lattices \(\Lambda_{t}=\ker(V\to X_{t})\) such that \(e_{1},\cdots,e_{g}\) belong to \(\Lambda_{t}\), and since \(\mathfrak{H}\) is simply connected, we get a holomorphic function \(s\mapsto Z(s)=[a_{ij}(s)]\in\mathfrak{H}_{g}\) so that \(\Lambda_{\operatorname{\mathsf{e}}(s)}\) is generated by the columns of \(\Pi(s)=(Z(s)\;D)\).
In the rest of the discussion, we shall replace our original family by the family \(X^{\prime}_{t}:=V/\Lambda^{\prime}_{t}\) where \(\Lambda^{\prime}_{t}:=\Lambda_{t}+\mathbb{Z}\tilde{e}_{1}+\cdots+\mathbb{Z} \tilde{e}_{g}\) is generated by the columns of \(\Pi^{\prime}(s)=(Z(s)\operatorname{Id}_{g})\). Then \(\mathcal{X}^{\prime}\) forms a family of abelian varieties which is principally polarized (i.e., the canonical map \(X^{\prime}_{t}\to(X^{\prime})^{\vee}_{t}\) is an isomorphism), and we thus have a family of isogenies \(\phi_{t}\colon X_{t}\to X^{\prime}_{t}\).
Since \((Z(s+1)\;\operatorname{Id}_{g})\) and \((Z(s)\;\operatorname{Id}_{g})\) define the same polarized abelian variety, there exists an element \(M\in\operatorname{Sp}(2g,\mathbb{Z})\) such that \(M\cdot Z(s+1)=Z(s)\) for all \(s\), see [1, Proposition 8.1.3] (the action is by generalized Möbius transformations). The matrix \(M\) encodes the monodromy action of \(\pi_{1}(\mathbb{D}^{\ast})\) on \(H^{1}(X_{\star},\mathbb{Z})\) which fixes \(\tilde{e}_{1},\cdots,\tilde{e}_{g}\), hence is upper block-triangular. By the Griffiths-Landman-Grothendieck monodromy theorem (see, e.g., [1, Theorem 13.7.3]) \(M\) is moreover quasi-unipotent. Up to base change we may (and shall) suppose that \(M\) is unipotent. This means that we can find an integral-valued matrix \(B\in M(\mathbb{Z},g)\) such that \(Z(s+1)=Z(s)+B\).
**Lemma 6.3**.: _[_12_, Lemma 2.3]_ _The matrix \(B\) is symmetric positive semi-definite, and there exists a holomorphic map \(Z_{0}\colon\mathbb{D}\to\mathfrak{H}_{g}\) such that \(Z(s)=Z_{0}(\operatorname{\mathsf{e}}(s))+Bs\)._
Denote by \(r^{\prime}\) the rank of the matrix \(B\). After changing the basis \(e_{i}\), we may suppose that
\[B=\begin{pmatrix}0&0\\ 0&B^{\prime}\end{pmatrix};\qquad Z_{0}(t)=\begin{pmatrix}Z_{1}(t)&Z_{2}(t)\\ {}^{t}Z_{2}(t)&Z_{3}(t)\end{pmatrix}, \tag{6.4}\]
where \(B^{\prime}\in\operatorname{Mat}(r^{\prime},\mathbb{Z})\) is a positive definite integral matrix of size \(r^{\prime}\), \(g^{\prime}:=g-r^{\prime}\), and \(Z_{1}\colon\mathbb{D}\to\operatorname{Sym}(g^{\prime},\mathbb{C})\), \(Z_{2}\colon\mathbb{D}\to\operatorname{Mat}(g^{\prime}\times r^{\prime}, \mathbb{C})\), \(Z_{3}\colon\mathbb{D}\to\operatorname{Sym}(r^{\prime},\mathbb{C})\), are holomorphic. Here \(\operatorname{Sym}(h,\mathbb{C})\) denotes the space of complex symmetric square matrices of size \(h\).
**Remark 6.5**.: In order to be consistent with the notation of the rest of the paper, we insist on writing \(r^{\prime}\) for the rank of \(B\). This is in conflict with the notation used by Nakamura who used \(g^{\prime}\) for this rank.
**Remark 6.6**.: The number \(r^{\prime}\) equals the dimension of the maximal torus in a semi-abelian component of the Neron model of \(\mathcal{X}\) in the case when \(\mathcal{X}\) has semi-abelian reduction. In particular, \(r^{\prime}=0\) iff \(X_{t}\) is not degenerating.
Let us explain now how to recover the family \(\mathcal{X}^{\ast}\) from (6.4). Observe that for a fixed \(t\in\mathbb{D}^{\ast}\) with \(t=\operatorname{\mathsf{e}}(s)\), \(X_{t}\) is the quotient of \(V\simeq\mathbb{C}^{g}\) by the lattice generated by the columns of \(\Pi(s)\). Taking the quotient by \(\mathbb{Z}e_{1}\oplus\cdots\oplus\mathbb{Z}e_{g}\) first, we get \(X_{t}=\mathbb{G}^{g}_{m}/\Gamma\) where \(\Gamma=\mathbb{Z}^{g}\) acts on the split torus by multiplication
\[\gamma\cdot(z^{\prime}_{1},\cdots,z^{\prime}_{g})=\left(z^{\prime}_{1}\,\prod_{ j=1}^{g}\operatorname{\mathsf{e}}(a_{1j}(s)\gamma_{j}),\cdots,z^{\prime}_{g}\,\prod_{ j=1}^{g}\operatorname{\mathsf{e}}(a_{gj}(s)\gamma_{j})\right). \tag{6.7}\]
The family \(\mathcal{X}^{*}\) can be thus obtained as the quotient of \(\mathbb{G}_{m}^{g}\times\mathbb{D}^{*}\) by an action of \(\Gamma\) which we now describe in dual terms. Recall from the previous section that \(M\) is the set of characters of \(\mathbb{G}_{m}^{g}\). The universal covering map \(V\to\mathbb{G}_{m}^{g}\) is given in the coordinates \(e_{i}\) by the map \(v=\sum v_{i}e_{i}\mapsto(\operatorname{\mathsf{e}}(v_{1}),\cdots,\operatorname {\mathsf{e}}(v_{g}))\). To exploit the block-diagonal form of \(B\) as in (6.4), we write the coordinates on \(\mathbb{G}_{m}^{g}\) as \((z_{1},\cdots,z_{g^{\prime}},w_{1},\cdots,w_{r^{\prime}})=(\operatorname{ \mathsf{e}}(v_{1}),\cdots,\operatorname{\mathsf{e}}(v_{g}))\). A character \(m\in M\) can thus be written as \(m=z^{p}w^{q}\) with \(p\in\mathbb{Z}^{g^{\prime}}\) (resp. \(q\in\mathbb{Z}^{r^{\prime}}\)) a character of \(\mathbb{G}_{m}^{g^{\prime}}\) (resp. of \(\mathbb{G}_{m}^{r^{\prime}}\)).
The action of \(\Gamma\) can then be described on \(M\times\mathbb{N}\) as follows. For any \(\gamma=(\alpha,\beta)\in\Gamma\), where \(\alpha\in\mathbb{Z}^{g^{\prime}}\) and \(\beta\in\mathbb{Z}^{r^{\prime}}\) we have (with \(t=\operatorname{\mathsf{e}}(s)\)):
\[\gamma^{*}(z^{p}) =\operatorname{\mathsf{e}}(\alpha\cdot Z_{1}(t)\cdot\!\!^{t}p+ \beta\cdot\!\!^{t}\!Z_{2}(t)\cdot\!\!^{t}p)\,z^{p};\] \[\gamma^{*}(w^{q}t^{l}) =\operatorname{\mathsf{e}}(\alpha\cdot Z_{2}(t)\cdot\!\!^{t}q+ \beta\cdot Z_{3}(t)\cdot\!\!^{t}q)\,t^{l+\beta\cdot\!\!B\cdot\!\!^{t}q}w^{q};\]
Note that this action induces a dual action of \(\Gamma\) on the lattice \(N\times\mathbb{N}\) so that \((\gamma\cdot\tilde{n})(\tilde{m})=\tilde{n}(\gamma^{*}\tilde{m})\) for all \(\tilde{n}\ \in N\times\mathbb{N}\) and \(\tilde{m}\in M\times\mathbb{N}\). In the dual basis \(N=\{(a,b)\in\mathbb{Z}^{g^{\prime}}\times\mathbb{Z}^{r^{\prime}}\}\) so that \(\langle(a,b),(p,q)\rangle=(a\cdot\!\!^{t}p)+(b\cdot\!\!^{t}q)\), we have \((\alpha,\beta)\cdot(a,b,k)(p,q,l)=(a,b,k)(p,q,l+\beta\cdot B^{\prime}\cdot\! \!^{t}q)\), hence
\[(\alpha,\beta)\cdot(a,b,k)=(a,b+k\,\beta\cdot B^{\prime},k). \tag{6.8}\]
It follows that \(\Gamma\) preserves the affine hyperplanes \(N\times\{k\}\) for all \(k\), acts trivially on \(N\times\{0\}\), and the action of the subgroup \(\{0\}\times\mathbb{Z}^{r^{\prime}}\) is free on \(N\times\{k\}\) for all \(k\in\mathbb{N}^{*}\).
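These three statements follow directly from (6.8) and can also be checked mechanically; the sketch below uses an illustrative choice \(g'=1\), \(r'=2\) and a sample positive definite matrix \(B'\) of our own.

```python
import numpy as np

Bp = np.array([[2, 1],
               [1, 2]])                      # sample positive definite B' (illustrative)

def act(alpha, beta, point):
    """The action (6.8): (alpha, beta) . (a, b, k) = (a, b + k * beta B', k)."""
    a, b, k = point
    return (a, tuple(np.asarray(b) + k * (np.asarray(beta) @ Bp)), k)

p0 = ((3,), (0, 0), 0)                       # a point of N x {0}
p1 = ((3,), (0, 0), 1)                       # a point of N x {1}

print(act((5,), (1, -1), p0))                # unchanged: the action on N x {0} is trivial
print(act((5,), (1, -1), p1))                # b shifts by beta B' = (1, -1); k is preserved

# Freeness of {0} x Z^{r'} on N x {k} for k >= 1: beta B' = 0 forces beta = 0.
for beta in [(1, 0), (0, 1), (2, -1)]:
    assert np.any(np.asarray(beta) @ Bp != 0)
```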
Since \(\Gamma\) acts by multiplication on each element of \(M\times\mathbb{N}\), for any \(\gamma\in\Gamma\), the map \((z^{\prime},t)\mapsto\gamma\cdot(z^{\prime},t)\) on \(\mathbb{G}_{m}^{g}\times\mathbb{D}^{*}\) extends as a holomorphic map from \(X_{\sigma}\) to \(X_{\gamma\cdot\sigma}\) for any (rational polyhedral strongly convex) cone \(\sigma\) (in \(\tilde{N}_{\mathbb{R}}\)).
It follows that the action of \(\Gamma\) extends to the relative toric variety \(X(\Delta)\) over \(\mathbb{D}\) if and only if the fan \(\Delta\) is \(\Gamma\)-invariant.
**Theorem 6.9**.: _There exists a \(\Gamma\)-invariant fan \(\Delta\) such that the following holds:_
1. _the set of rays of_ \(\Delta\) _is exactly given by_ \(\mathbb{R}_{+}\cdot(n,1)\) _for all_ \(n\in N\) _such that_ \((n,1)\) _belongs to the span of the_ \(\Gamma\)_-orbit of_ \((0,1)\)_;_
2. _the action of_ \(\Gamma\) _on_ \(X(\Delta)\) _is free and_ \(\pi_{\Delta}\)_-equivariant;_
3. _the restriction of the action of_ \(\Gamma\) _to any fiber of_ \(\pi_{\Delta}\) _is co-compact._
**Remark 6.10**.: In order for the \(\Gamma\)-action to be co-compact it is necessary for the support of \(\Delta\) to intersect \(N\times\{1\}\) along an affine space. One can show that the Neron model is obtained by taking the subfan of \(\Delta\) having only one dimensional faces generated by the \(\Gamma\)-orbit of \((0,1)\). In general \(\Delta\) contains more rays. This happens exactly when the central fiber of the Neron model is reducible.
Sketch of proof.: The proof is given in [23, Theorem 2.6]. Let \(\Pi\) be the real affine subspace generated by the \(\Gamma\)-orbit of \((0,1)\). Observe that \(N_{\Pi}:=N\times\{1\}\cap\Pi\) is a co-compact lattice in \(\Pi\). Let \(\Delta_{0}\) be the fan whose cones are \((0)\) and the \(1\)-dimensional cones \(\mathbb{R}_{+}(n,1)\) with \(n\in\Pi\cap N\times\{1\}\).
Since \(\Gamma\) acts by translation on \(\tilde{N}_{\mathbb{R}}\) preserving \(N\times\mathbb{N}\), we may find a scalar product (hence a metric \(d\) on \(\tilde{N}_{\mathbb{R}}\)) which is \(\Gamma\)-invariant. Following [11, §1], we define a Delaunay cell \(\sigma\) to be the closed convex hull of all elements \(\tilde{n}\in N_{\Pi}\) for which there exists \(\alpha\in\Pi\) satisfying \(\inf_{N_{\Pi}}d(\alpha,\cdot)=d(\alpha,\tilde{n})\). If the metric is sufficiently general, then we obtain a \(\Gamma\)-invariant cell-decomposition of \(\Pi\) by simplices whose vertices are all contained in \(N_{\Pi}\). Observe that this fan is not necessarily regular if \(g\geq 5\) (see [11, §1.14]).
We define \(\Delta\) to be the fan whose cones are generated by cells of this triangulation. The fact that \(\Gamma\) acts co-compactly on \(X(\Delta)\) is proved in [23] for any dimension \(g\).
A crucial observation is the following.
**Proposition 6.11**.: _Let \(\pi\colon\mathcal{X}\to\mathbb{D}\) be any principally polarized family of abelian varieties. Then any holomorphic section over \(\mathbb{D}^{*}\) which is meromorphic at \(0\) extends holomorphically to \(\mathbb{D}\)._
_If moreover \(\mathcal{X}=X(\Delta)/\Gamma\) for some discrete group \(\Gamma\simeq\mathbb{Z}^{g}\) acting on \(\mathbb{G}_{m}^{g}\times\mathbb{D}\) as in (6.7), then any meromorphic section \(\psi\colon\mathbb{D}^{*}\to\mathcal{X}\) lifts to a holomorphic section \(\tilde{\psi}\colon\mathbb{D}\to X(\Delta)\) having the following property._
_Since \(X(\Delta)\supset\mathbb{G}_{m}^{g}\times\mathbb{D}^{*}\), post-composing \(\tilde{\psi}|_{\mathbb{D}^{*}}\) with the first projection yields a meromorphic map \(\phi\colon\mathbb{D}^{*}\to\mathbb{G}_{m}^{g}\). Then \((n_{\phi},1)\) as defined in Lemma 6.2 belongs to the linear span of \(\Gamma\cdot(0,1)\subset\tilde{N}_{\mathbb{R}}\)._
Note that \(\Gamma\cdot(0,1)\) is the translate by \((0,1)\) of a discrete group (i.e., \(\{(0,\beta\cdot B^{\prime}),\beta\in\mathbb{Z}^{r^{\prime}}\}\)) whose rank equals the one of \(B\) (i.e., \(r^{\prime}\)), see (6.8).
Proof.: We first claim that \(\psi\) extends holomorphically through \(0\). Since \(\psi\) is meromorphic, there exists a closed analytic subset \(C\) of dimension \(1\) in \(\mathbb{D}\times\mathcal{X}\) that contains \(\{(t,\psi(t)),t\in\mathbb{D}^{*}\}\). We observe that \(C\) is necessarily analytically irreducible at the point \(p=C\cap X_{0}\), hence \(\psi\) extends continuously (hence holomorphically) at \(0\).
Choose coordinates \(z=(z_{1},\cdots,z_{g})\) centered at \(p\), and write \(\pi(z)=\sum_{I}a_{I}z^{I}\). Since \(\pi(\psi(t))=t\), we infer that at least one \(a_{i}\neq 0\). It follows that \(\pi^{-1}(0)\) is smooth and reduced at \(p\).
Suppose now that \(\mathcal{X}=X(\Delta)/\Gamma\) as given by the previous theorem. Then pick a point \(q\in X(\Delta)\) which is mapped to \(p\) in \(\mathcal{X}\). Since \(\Gamma\) acts freely, the canonical map \(X(\Delta)\to\mathcal{X}\) is a covering map, hence we can lift \(\psi\) to a holomorphic map \(\tilde{\psi}\colon\mathbb{D}\to X(\Delta)\) such that \(\tilde{\psi}(0)=q\). The projection of \(\tilde{\psi}\) to \(\mathbb{G}_{m}^{g}\) induces a holomorphic map \(\phi\colon\mathbb{D}^{*}\to\mathbb{G}_{m}^{g}\), which is meromorphic at \(0\), hence by Lemma 6.2 we deduce that \(\mathbb{R}_{+}\cdot(n_{\phi},1)\) forms a one-dimensional cone of \(\Delta\). This condition is equivalent to saying that \((n_{\phi},1)\) belongs to the linear span of \(\Gamma\cdot(0,1)\subset\tilde{N}_{\mathbb{R}}\).
**Remark 6.12**.: The proof shows a weak version of the Neron property for \(X(\Delta)/\Gamma\). In fact the Neron model is given by the smooth locus of \(\pi\colon X(\Delta)/\Gamma\to\mathbb{D}\) whose complement has codimension \(2\) in \(X(\Delta)/\Gamma\).
### Proof of Theorem E
Let \(f\colon\mathcal{X}\dashrightarrow\mathcal{X}\) be a meromorphic map of a polarized family \(\pi\colon\mathcal{X}\to\mathbb{D}\) of \(g\)-dimensional abelian varieties satisfying \(\pi\circ f=\pi\) such that for any \(t\in\mathbb{D}^{*}\), the map \(f_{t}^{*}\colon H^{1}(X_{t},\mathbb{Z})\to H^{1}(X_{t},\mathbb{Z})\) has finite order.
Replacing \(f\) by a suitable iterate and applying Proposition 2.12, we may suppose that \(f_{t}^{*}=\operatorname{Id}\) for all \(t\) so that the induced map \(f_{t}\colon\mathcal{X}_{t}\to\mathcal{X}_{t}\) is a translation of the abelian variety \(\mathcal{X}_{t}\).
By Proposition 4.10 and the discussion before Lemma 6.3, we may assume that \(X_{t}\) is principally polarized. We may also replace the family \(\mathcal{X}\) by any base change by Proposition 4.9, and by the discussion of the previous section, we may suppose that \(\mathcal{X}=X(\Delta)/\Gamma\) where \(\Gamma\simeq\mathbb{Z}^{g}\) acts on \(X(\Delta)\subset\mathbb{G}_{m}^{g}\times\mathbb{D}\) as in (6.7), and \(\Delta\) is a \(\Gamma\)-invariant fan containing \((0,1)\).
Let \(\phi(t)=f_{t}(0)\). This is a holomorphic section over \(\mathbb{D}^{*}\) which is meromorphic at \(0\) since \(f\) is meromorphic. It follows from Theorem 6.9 (1) and Proposition 6.11 that \(\phi\) lifts to a section \(\tilde{\phi}\colon\mathbb{D}\to X(\Delta)\), hence \((n_{\phi},1)\) lies in the linear span of the \(\Gamma\)-orbit of \((0,1)\) and we can find an integer \(N\) such that \((Nn_{\phi},1)=\gamma\cdot(0,1)\) for some \(\gamma\in\Gamma\).
The map \(F(z,t)=(\phi(t)\cdot z,t)\) defines a meromorphic self-map of \(\mathbb{G}_{m}^{g}\times\mathbb{D}\) lifting \(f\). Let \(\psi(t)=\widetilde{\phi}(t)^{N}/\exp(\gamma)\). Then the vector \(n_{\psi}\) defined in Lemma 6.2 equals \(0\). By Lemma 6.2, \(\psi\) extends as a holomorphic map \(\psi\colon\mathbb{D}\to\mathbb{G}_{m}^{g}\), and the multiplication map \(g(z,t):=(\psi(t)\cdot z,t)\) defines an automorphism of \(X(\Delta)\) since any (relative) toric manifold carries an action by multiplication by \(\mathbb{G}_{m}^{g}\).
Since \(F^{N}\) is the composition of \(g\) and the action of \(\gamma\), it also induces a biholomorphism on \(X(\Delta)\) which commutes with the action of \(\Gamma\). Therefore \(f^{N}\) is a biholomorphism on \(X(\Delta)/\Gamma\), hence \(f\) is regularizable by Propositions 2.11 and 2.12.
### Orbits of translations on families of abelian varieties
We now explore the closure of the orbit of a general point under a family of translations on a family of abelian varieties. The setting is as follows: \(f\colon\mathcal{X}\to\mathcal{X}\) is an automorphism of a polarized family of abelian varieties \(\operatorname{L}\to\mathcal{X}\), \(\pi\colon\mathcal{X}\to\mathbb{D}\) given by the translation along a section \(\alpha\colon\mathbb{D}\ \to\mathcal{X}\) of \(\pi\) so that \(f_{t}(z)=z+\alpha(t)\) for all \(z\in X_{t}\) and all \(t\in\mathbb{D}^{*}\).
For each \(t\), we denote by \(\tilde{P}(t):=\overline{\{n\cdot\alpha(t)\}}_{n\in\mathbb{Z}}\subset X_{t}\) the closure (for the Euclidean topology) of the orbit of \(0\), and by \(P(t)\) its connected component containing \(0\). Then \(P(t)\) is a closed connected real abelian subgroup of \(X_{t}\) isomorphic to \((\mathbb{R}/\mathbb{Z})^{h(t)}\) for some \(h(t)\in\{0,\cdots,2g\}\), and \(\tilde{P}(t)\) is a finite union of translates of \(P(t)\). The group \(P(t)\) also contains a maximal complex Lie subgroup \(A(t)\) of (complex) dimension \(s(t)\). Write \(r(t):=h(t)-2s(t)\).
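These invariants are easy to visualise numerically for a single elliptic curve \(E=\mathbb{C}/(\mathbb{Z}+i\mathbb{Z})\): depending on the rational relations satisfied by the translation vector, the orbit closure is finite, a circle, or all of \(E\). The sketch below is only a heuristic box-counting illustration, with translation vectors of our own choosing.

```python
import numpy as np

def filled_fraction(alpha, n_iter=20000, grid=40):
    """Fraction of the grid x grid cells of R^2/Z^2 met by the first n_iter multiples of alpha."""
    k = np.arange(n_iter)[:, None]
    orbit = np.mod(k * np.array(alpha)[None, :], 1.0)
    cells = set(map(tuple, np.floor(orbit * grid).astype(int)))
    return len(cells) / grid**2

# Illustrative translations on E = C/(Z + iZ), written in real coordinates (x, y):
print(filled_fraction([1/3, 0]))                      # finite orbit,      h = 0: ~ 0.002
print(filled_fraction([np.sqrt(2)/2, 0]))             # dense in a circle, h = 1: ~ 0.025
print(filled_fraction([np.sqrt(2)/2, np.sqrt(3)/2]))  # dense in E,        h = 2: ~ 1.0
```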
Recall that an \(F_{\sigma}\)-set is a countable union of closed subsets, and that any countable union of proper analytic subsets is an \(F_{\sigma}\)-set having zero Lebesgue measure (hence has empty interior).
We start with the following observation.
**Proposition 6.13**.: _Let \(f\colon\mathcal{X}\dashrightarrow\mathcal{X}\) be a family of translations. Set \(h(f)=\max_{t\in\mathbb{D}^{*}}h(t)\)._
1. _The set_ \(\mathcal{E}_{0}:=\{t\in\mathbb{D}^{*},h(t)<h(f)\}\) _is included in a countable union of proper analytic subsets._
2. _There exists a real analytic family_ \(\{P^{\prime}_{t}\}_{t\in\mathbb{D}^{*}}\) _of real subtori of_ \(X_{t}\)_, such that_ \(P^{\prime}_{t}\supset P(t)\) _for all_ \(t\) _and_ \(P^{\prime}_{t}=P(t)\) _for all_ \(t\notin\mathcal{E}_{0}\)_._
3. _There exists a real analytic family_ \(\{\tilde{P}^{\prime}_{t}\}_{t\in\mathbb{D}^{*}}\) _of real closed subgroups of_ \(X_{t}\)_, such that_ \(\tilde{P}^{\prime}_{t}\supset\tilde{P}(t)\) _for all_ \(t\) _and_ \(\tilde{P}^{\prime}_{t}=\tilde{P}(t)\) _for all_ \(t\notin\mathcal{E}_{0}\)_._
4. _There exists a countable union of proper analytic subsets_ \(\mathcal{E}\supset\mathcal{E}_{0}\) _such that_ \(s(t)=s(f)\in\{0,\cdots,g\}\) _for all_ \(t\notin\mathcal{E}\)_._
5. _There exists a subfamily_ \(A^{\prime}_{t}\) _of abelian subvarieties of_ \(X_{t}\)_, such that_ \(A^{\prime}_{t}=A(t)\) _for all_ \(t\notin\mathcal{E}\)_._
Proof.: We may argue locally near \(t_{0}\in\mathbb{D}^{*}\), and suppose that \(X_{t}=V/\Lambda_{t}\) where \(V\simeq\mathbb{C}^{g}\), and \(\Lambda_{t}\) is a family of rank \(2g\) sublattices of \(V\) moving holomorphically with \(t\). Denote by \(\pi_{t}\colon V\to X_{t}\) the canonical projection.
Fix a lattice basis \(e_{1},\cdots,e_{2g}\) of \(\Lambda_{t_{0}}\) and denote by \(e_{i}\colon\mathbb{D}\to V\) the holomorphic map such that \(e_{i}(t)\in\Lambda_{t}\) for all \(t\), and \(e_{i}(t_{0})=e_{i}\). Let \(e^{\vee}_{i,t}\) be the dual \(\mathbb{R}\)-basis so that \(e^{\vee}_{i,t}(e_{j}(t))=\delta_{ij}\). Observe that \(e^{\vee}_{i,t}\) is a real linear form on \(V\) which varies real analytically in \(t\) (it does not vary holomorphically).
Consider the \(\mathbb{Q}\)-vector space \(L_{t}=\{q\in\mathbb{Q}^{2g},\,\sum_{j=1}^{2g}\ q_{j}e^{\vee}_{j,t}(\alpha(t)) \in\mathbb{Q}\}\) so that
\[P(t):=\pi_{t}\left(\bigcap_{q\in L_{t}}\ \left\{v\in V,\sum_{j=1}^{2g}\ q_{j}e^{ \vee}_{j,t}(v)=0\right\}\right)\]
and \(h(t)=2g-\dim_{\mathbb{Q}}(L_{t})\). For fixed \(q,q^{\prime}\in\mathbb{Q}^{2g}\times\mathbb{Q}\), the set \(\{t\in\mathbb{D}^{*},\sum_{j=1}^{2g}\ q_{j}e^{\vee}_{j,t}(\alpha(t))=q^{ \prime}\}\) is real-analytic. It follows that \(L_{t}\) is fixed, say equal to \(L_{*}\), for all \(t\) outside a countable union of proper real-analytic subsets \(\mathcal{E}_{0}\). And \(L_{*}\subset L_{t}\) for all \(t\in\mathbb{D}^{*}\) which proves \(h(t)\leq h(f)\) for all \(t\). This proves (1). Observe also that \(P^{\prime}_{t}=\pi_{t}(\bigcap_{q\in L_{*}}\ \{\sum_{j=1}^{2g}\ q_{j}e^{\vee}_{j,t}=0\})\) is a real-analytic family of real subtori for which (2) holds. In the same way, \(\tilde{L}_{t}=\{(q,q^{\prime})\in\mathbb{Q}^{2g+1},\,\sum_{j=1}^{2g}\ q_{j}e^ {\vee}_{j,t}(\alpha(t))=q^{\prime}\}\) is constant equal to \(\tilde{L}_{*}\) off \(\mathcal{E}_{0}\), hence \(\tilde{P}^{\prime}_{t}=\pi_{t}(\bigcap_{(q,q^{\prime})\in\tilde{L}_{*}}\ \{\sum_{j=1}^{2g}\ q_{j}e^{\vee}_{j,t}=q^{ \prime}\})\) is a real-analytic family of closed subgroups of \(X_{t}\) equal to \(\tilde{P}(t)\) for all \(t\notin\mathcal{E}_{0}\), proving (3).
To any \(\mathbb{R}\)-linear form \(\ell_{q,t}=\sum_{j=1}^{2g}\ q_{j}e^{\vee}_{j,t}\), we attach the \(\mathbb{C}\)-linear form \(u_{q,t}(v)=\ell_{q,t}(v)-i\ell_{q,t}(iv)\in V^{*}\) so that \(\ker(u_{q,t})\) is the complex linear hypersurface included in \(\ker(\ell_{q,t})\), and
\[A(t):=\pi_{t}\left(\bigcap_{q\in L_{t}}\ \ker(u_{q,t})\right)\.\]
As \(u_{q,t}\) varies analytically in \(t\), it follows that \(t\mapsto\dim_{\mathbb{C}}\left(\bigcap_{q\in L_{*}}\ \ker(u_{q,t})\right)\) is again constant off a countable union of proper analytic subsets \(\mathcal{E}_{1}\) (and is upper-semicontinuous). It follows that \(s(t):=\dim_{\mathbb{C}}A(t)\) is constant outside \(\mathcal{E}:=\mathcal{E}_{0}\cup\mathcal{E}_{1}\), equal to some constant \(s_{*}\), hence (4) holds.
We now prove that \(\{A(t)\}_{t\notin\mathcal{E}}\) glue to a holomorphic family over \(\mathbb{D}\). Suppose that \(t_{0}\notin\mathcal{E}\), and pick a basis of lattice points \(f_{1},\cdots,f_{2s(f)}\) in \(\bigcap_{q\in L_{*}}\ \ker(u_{q,t_{0}})\). Then \(A(t)\) is the image under \(\pi_{t}\) of the complex vector space generated by the holomorphically varying points \(f_{1,t},\cdots,f_{2s(f),t}\). It follows that \(A^{\prime}_{t}=\pi_{t}(\operatorname{Span}\{f_{1,t},\cdots,f_{2s(f),t}\})\) forms a family of subabelian varieties over \(\mathbb{D}^{*}\). Since the family \(X_{t}\) is polarized, \(A^{\prime}_{t}\) is also polarized and extends through \(0\). This proves (5).
In the remainder of this section, \(\mathcal{E}\) will always denote the \(F_{\sigma}\)-set defined by the previous proposition. Our next objective is the following decomposition result.
**Proposition 6.14**.: _Let \(f\in\operatorname{Aut}(\mathcal{X})\) be a family of translations on a family \(\pi\colon\mathcal{X}\to\mathbb{D}\) of polarized abelian varieties. Then there exist two families of sub-abelian varieties \(\pi_{\mathcal{A}}\colon\mathcal{A}\to\mathbb{D}\), \(\pi_{\mathcal{B}}\colon\mathcal{B}\to\mathbb{D}\) of \(\mathcal{X}\), two families of translations \(f_{\mathcal{A}}\in\operatorname{Aut}(\mathcal{A})\), and \(f_{\mathcal{B}}\in\operatorname{Aut}(\mathcal{B})\), and a meromorphic map \(\Phi\colon\mathcal{A}\times\mathcal{B}\dashrightarrow\mathcal{X}\) such that:_
1. \(\Phi\) _induces an isogeny_ \(A_{t}\times B_{t}\to X_{t}\) _for all_ \(t\in\mathbb{D}^{*}\)_;_
2. _the orbit of_ \(0\) _under_ \(f_{\mathcal{A},t}\) _is dense in_ \(A_{t}\) _for all_ \(t\notin\mathcal{E}\)_;_
3. _the closure of the orbit of_ \(0\) _under_ \(f_{\mathcal{B},t}\) _is totally real for all_ \(t\notin\mathcal{E}\)_._
Proof.: We let \(\mathcal{A}\) be the family of subabelian varieties given by Proposition 6.13. Recall the definition of the dual family \(\mathcal{X}^{\vee}\) and of the canonical meromorphic map \(\phi\colon\mathcal{X}\dashrightarrow\mathcal{X}^{\vee}\) from the proof of Proposition 4.10.
Define a family of subabelian varieties \(B_{t}\) as the connected component containing \(0\) of the kernel of the composition map \(X_{t}\longrightarrow X_{t}^{\vee}\to A_{t}^{\vee}\longrightarrow A_{t}\to X_{t}\). We obtain a family of polarized subvarieties over \(\mathbb{D}^{*}\) hence over \(\mathbb{D}\) and the first property is clear, see [1, Chapter 5, §3].
Let us fix any parameter \(t\in\mathbb{D}^{*}\). We have a canonical splitting \(V=E_{t}\oplus F_{t}\) so that \(\pi_{t}(E_{t})=A_{t}\) and \(\pi_{t}(F_{t})=B_{t}\) and since \(A_{t},B_{t}\) are varying holomorphically, the same holds for the vector spaces \(E_{t}\) and \(F_{t}\). We may thus write \(\alpha(t)=a(t)\oplus b(t)\) with \(a,b\) holomorphic. Denote by \(f_{\mathcal{A},t}\) the translation by \(a(t)\) on \(A_{t}\) and by \(f_{\mathcal{B},t}\) the translation by \(b(t)\) on \(B_{t}\).
Suppose that \(t\notin\mathcal{E}\). Replacing \(\alpha(t)\) by a suitable multiple, we may suppose that the closure of the orbit of \(0\) is a real torus so that \(\widetilde{P}(t)=P(t)\). Denote by \(\Pi_{t}\) its lift to \(V\), and pick \(\widetilde{\alpha}(t)\) (resp. \(\widetilde{a}(t)\)) a lift of \(\alpha(t)\) (resp. of \(a(t)\)) to \(V\). Observe that \(\Pi_{t}\) is the closure of \(\mathbb{Z}\widetilde{\alpha}(t)+\Lambda_{t}\), and that \(\Pi_{t}\cap\Lambda_{t}\) is a co-compact lattice in \(\Pi_{t}\). Since \(A_{t}\) is the largest subabelian variety in \(P(t)\), we have \(E_{t}=\Pi_{t}\cap i\Pi_{t}\), and \(F_{t}\cap\Pi_{t}\) cannot contain any complex linear subspace, hence is totally real.
The linear projection \(p_{E,t}:V\to E_{t}\) parallel to \(F_{t}\) semi-conjugates the translation by \(\alpha(t)\) and the translation by \(a(t)\). In particular, the image of \(\mathbb{Z}\widetilde{\alpha}(t)+\Lambda_{t}\) under \(p_{E,t}\) equals \(\mathbb{Z}\widetilde{a}(t)+p_{E,t}(\Lambda_{t})\), so that the latter is dense in \(E_{t}\). Since the addition map \(A_{t}\times B_{t}\to X_{t}\) is an isogeny, the co-compact lattice \(\Lambda_{t}\cap E_{t}\) has finite index in \(p_{E,t}(\Lambda_{t})\), and we conclude that \(\mathbb{Z}\widetilde{a}(t)+(\Lambda_{t}\cap E_{t})\) is dense in \(E_{t}\). This proves (2).
The proof of (3) is completely analogous, once one observes that the image of \(\Pi_{t}\) under the projection \(p_{F,t}\,:\,V\to F_{t}\) parallel to \(E_{t}\) is totally real.
The proof of Theorem F will be complete after we prove:
**Proposition 6.15**.: _Let \(\pi\colon\mathcal{X}\to\mathbb{D}\) be a family of polarized abelian varieties. Let \(f\in\operatorname{Aut}(\mathcal{X})\) be a family of translations such that the closure of the orbit of \(0\) under \(f_{X,t}\) is totally real for Lebesgue-almost every \(t\)._
_Then the closure of the orbit of \(0\) under \(f_{X,t}\) is totally real for all \(t\), and there exist a proper model \(\mathcal{X}^{\prime}\) and a sequence of automorphisms \(f_{n}\in\operatorname{Aut}(\mathcal{X}^{\prime})\) of finite order such that \(f_{n}\to f\) locally uniformly on \(\mathcal{X}^{\prime}\)._
Proof.: Replacing \(\alpha\) by a suitable multiple, we may (and shall) assume that \(\tilde{P}(t)=P(t)\) is connected.
We shall first argue locally near a fixed point \(t_{0}\notin\mathcal{E}\), where \(\mathcal{E}\) is the set defined in Proposition 6.13. We use the same notation as in the proof of the previous proposition. Fix a basis \(\tilde{e}_{1},\cdots,\tilde{e}_{k}\) for the lattice \(\Pi_{t_{0}}\cap\Lambda_{t_{0}}\), and complete it as a maximal set of \(\mathbb{Z}\)-independent elements \(\tilde{e}_{1},\cdots,\tilde{e}_{2g}\) in \(\Lambda_{t_{0}}\). Observe that it may happen that \(\tilde{e}_{1},\cdots,\tilde{e}_{2g}\) generate only a finite index subgroup of \(\Lambda_{t_{0}}\). Observe also, that the holomorphically varying vectors \(\tilde{e}_{1,t},\cdots,\tilde{e}_{k,t}\) are \(\mathbb{C}\)-linearly independent because \(\Pi_{t}\) is totally real. We may thus suppose that \(\tilde{e}_{1},\cdots,\tilde{e}_{g}\) are \(\mathbb{C}\)-linearly independent.
Since the closure of the orbit of \(0\) is connected, \(\widetilde{\alpha}(t)\in\Pi_{t}\) for all \(t\), and we may find \(\alpha_{j}(t)\in\mathbb{R}\) such that \(\widetilde{\alpha}(t)=\sum_{j=1}^{k}\alpha_{j}(t)\tilde{e}_{j,t}\). The map \(t\mapsto\widetilde{\alpha}(t)\) is a holomorphic map from \(\mathbb{D}^{*}\) to \(V\); completing \(\tilde{e}_{1,t},\cdots,\tilde{e}_{k,t}\) to a holomorphically varying basis of \(V\), whose dual basis in \(V^{*}\) also varies holomorphically, we see that the real-valued coordinates \(\alpha_{1},\cdots,\alpha_{k}\) are holomorphic in \(t\), hence all constant.
We now replace \(\mathcal{X}\) by an isogenous family which is principally polarized, and choose the model \(\mathcal{X}^{\prime}=X(\Delta)/\Gamma\) described in §6.2. Pick any \(\beta\in\mathbb{R}^{k}\), and set \(\beta(t)=\sum_{j=1}^{k}\beta_{j}\tilde{e}_{j,t}\) for \(t\) close to \(t_{0}\). We claim that \(\beta(t)\) extends to a holomorphic section of \(\mathcal{X}^{\prime}\to\mathbb{D}\) over \(\mathbb{D}\).
To see this, write \(\pi\colon X(\Delta)\to\mathcal{X}^{\prime}\) for the canonical projection. By analytic continuation, we may find a lift \(s\mapsto\hat{\beta}(s)\) defined over \(s\in\mathfrak{H}\) such that \(\pi(\hat{\beta}(s))=\beta(e^{2i\pi s})\). We now write \(\hat{\beta}\) in terms of a holomorphically varying symplectic basis \(e_{1},\cdots,e_{2g}\) as in §6.2: there exist real numbers \(\theta_{1},\cdots,\theta_{g}\) and \(b_{1},\cdots,b_{g}\) such that
\[\hat{\beta}(s)=\left(e^{i\pi\theta_{1}}\,\prod_{j=1}^{g}\operatorname{e}(a_{1j }(s)b_{j}),\cdots,e^{i\pi\theta_{g}}\,\prod_{j=1}^{g}\operatorname{e}(a_{gj}(s )b_{j})\right)\]
(see (6.7)). Doing the same for \(\alpha\), we get some \(\vartheta_{j},a_{j}\in\mathbb{R}\) such that
\[\hat{\alpha}(s)=\left(e^{i\pi\vartheta_{1}}\,\prod_{j=1}^{g}\operatorname{e}(a_ {1j}(s)a_{j}),\cdots,e^{i\pi\vartheta_{g}}\,\prod_{j=1}^{g}\operatorname{e}(a_ {gj}(s)a_{j})\right)\]
But \(\alpha\) is a section over \(\mathbb{D}\) so the vector \(Ba\) belongs to \(\mathbb{Z}^{g}\).
Observe that the function \(y\in\mathbb{R}^{g}/\mathbb{Z}^{g}\mapsto By\in\mathbb{R}^{g}/\mathbb{Z}^{g}\) is continuous, and vanishes at all vectors \(\{na\}_{n\in\mathbb{N}}\). Since \(P(t)=\pi_{t}(\Pi_{t})\) is the closure of the orbit of \(0\) under the translation by \(\alpha(t)\) we may find a sequence \(q^{(n)}\in\mathbb{N}\) such that \(q^{(n)}\alpha(t)\to\beta(t)\) in a neighborhood of \(t_{0}\). This implies \(q^{(n)}a\to b\) in \(\mathbb{R}^{g}/\mathbb{Z}^{g}\), hence \(Bb=0\) in \(\mathbb{R}^{g}/\mathbb{Z}^{g}\), and \(\beta\) defines a holomorphic section over \(\mathbb{D}^{*}\). By Lemma 6.3 this section is meromorphic at \(0\). By Proposition 6.11 it extends to a holomorphic section over \(\mathbb{D}\).
Now choose any sequence \(\beta^{(n)}\in\mathbb{Q}^{k}\) converging to \((\alpha_{1},\cdots,\alpha_{k})\). Then \(\beta^{(n)}(t)=\sum_{j=1}^{k}\beta_{j}^{(n)}\tilde{e}_{j,t}\) converges locally uniformly to \(\alpha(t)\) on \(\mathbb{D}\), and the translation \(f_{n}\) by \(\beta^{(n)}(t)\) has finite order, which concludes the proof.
**Remark 6.16**.: The previous proof implies that the family of real subtori \(P^{\prime}_{t}\) actually extends to a family over \(\mathbb{D}\). Recall that the smooth locus of the central fiber \(\mathcal{X}^{\prime}_{0}\) is a finite union of semi-abelian varieties which are principal \(\mathbb{G}^{r^{\prime}}_{m}\)-bundles over an abelian variety (where \(r^{\prime}\) is the rank of the monodromy, see the discussion after Lemma 6.3).
One can also prove that \(P^{\prime}_{0}\) is transversal to the fibers of the \(\mathbb{G}^{r^{\prime}}_{m}\)-fibration, hence \(s(f)\leq r^{\prime}\).
|
2301.07167 | Creation and annihilation operators in Schrödinger representation in
curved backgrounds | I propose a modified set of creation and annihilation operators for the
Schr\"odinger representation which is compatible with the Fock representation
and differs from previous works. I take into account the relation between
different non-unitary vacuums obtained from restricted frameworks like the
relation between the Minkowski and Unruh vacuums. | Daniel López | 2022-12-13T01:22:08Z | http://arxiv.org/abs/2301.07167v1 | # Creation and annihilation operators in Schrodinger representation in curved backgrounds
###### Abstract
I propose a modified set of creation and annihilation operators for the Schrodinger representation which is compatible with the Fock representation and differs from previous works. I take into account the relation between different non-unitary vacuums obtained from restricted frameworks, like the relation between the Minkowski and Unruh vacuums.
## I Introduction
Background independent Quantum Field Theories that are heavily based on the use of the Schrodinger representation have been developed in the last 30 years [1; 2; 3; 4; 5].
The understanding of this representation in curved space-time and its relation with other representations through Algebraic Quantum Field Theory (AQFT), using the Gel'fand, Naimark, Segal (GNS) construction theorem, has been studied in papers such as [6; 7; 8]. The AQFT framework has proven to be more fundamental and general, making it a powerful tool to study other representations and the relations among them. The Schrodinger representation seems to be a suitable tool for finding a background independent formulation of Quantum Field Theory, which can be a relevant tool in the pursuit of Quantum Gravity [3]. In an attempt to extend the formalism to curved backgrounds, [5] (and [4]) built the Schrodinger representation on a curved manifold in order to avoid the usual description of the Fock representation, which interprets the vacuum as a non-particle state. They argue that such an approach is inherently problematic in curved space-time, where there is no uniquely favored mode decomposition and no guarantee that the usual concept of a particle is a good description of the spectrum of the theory. We agree with this interpretation of the vacuum. Unlike Long and Long's (1996) interpretation, Schrodinger's representation provides an intrinsic description of the vacuum. For Long and Long, the vacuum depends explicitly on quantities that are linked to the choice of the foliation. A suitable choice of modes is made by choosing a time-like Killing vector, which induces a mode decomposition.
Canonical quantization is introduced in [9], where he prefers to construct the theory by using the Dirac quantization method. Here, the selection of modes is represented by the selection of the complex structure \(J\). The complex structure \(J\) is part of a projection operator \(K=(1-iJ)/2\) which maps the phase space to a complex one which is dense in the Hilbert space. Additionally, \(J\) induces a norm on a Hilbert space. Specifically, the elements \(K\phi\) are identified as the positive frequency modes. Finally, he constructs a Fock representation using such a structure.
The development of a more formal and mathematically precise treatment in AQFT results in a natural relationship between different representations. This is made through the use of representation independent algebraic states in AQFT, which leads to a connection between the Fock and Schrodinger representations. This relationship was discussed in the paper [8] using the GNS theorem. They found that the momentum density operator in its functional representation should be modified as \(\hat{\pi}[g]=\int_{\Sigma}d^{3}x\left(-ig\frac{\delta}{\delta\phi}+\phi(iB^{-1}-B^{-1}A)g\right)\) so both representations can match.
[6] reproduces such results by using a geometric quantization approach. The formalism used there allows for a certain degree of freedom in the choice of the final shape. This compatibility requires a Gaussian integration measure; therefore, the states \(\Psi[\phi]\) will no longer be integrated with a "Lebesgue like" measure \(\mathcal{D}\phi\). Instead, it will be a well defined Gaussian measure \(d\mu_{B^{-1}}\) which depends on the operator \(B\), one of the components of the complex structure. Therefore, the integration and normalization of the states depend strongly on the choice of \(J\). The creation and annihilation operators are also modified since they depend on \(\pi\). For instance, the annihilation operator is \(\hat{b}_{\text{Gauss}}[\overline{\mathbf{K}\lambda}]=\frac{i}{2}\int_{\Sigma}d^{3}x\left(Ag+Bh+ig\right)\frac{\delta}{\delta\phi}\), and the associated vacuum state is \(\Psi=e^{i\theta}\), with \(\theta\) being an arbitrary phase. Such a fact is not consistent with the relations among different vacuums found in Thermofield Dynamics for the Minkowski and Rindler vacuums, where such a relation exists even though it is non-unitary [10]. Furthermore, the explicit dependency of the vacuum on the selection of modes is lost. In this article, we modify the calculations made in [8] and [7] slightly in order to obtain a desirable Gaussian vacuum which depends explicitly on the choice of \(J\). Schrodinger's representation is compatible with Fock's representation in the Weyl sense. Parallel to this result, an alternative representation of the momentum density operator is obtained, with its corresponding creation and annihilation operators. Such a representation seems to fulfill the requirements of Thermofield Dynamics, especially when we attempt to relate the Minkowski vacuum to Rindler's vacuum.
## II Complex structure J.
We will work with an arbitrary foliation of a manifold with topology \(\mathcal{M}=\mathbb{R}\times\Sigma\). The phase space \(\mathbb{M}\) is composed of the cotangent bundle whose elements are pairs of the form \((\phi,\pi)\), where each element is compactly supported. An element in the solution space \(\mathbb{S}\) has a one-to-one relationship with \(\phi\) in the phase space. The operators \(K\) and \(\overline{K}\) act as projectors from the phase space \(\mathbb{M}\) to the complex spaces \(\mathbb{W}\) and \(\overline{\mathbb{W}}\), respectively. The phase space related to complex solutions of the Klein-Gordon equation is then the disjoint sum \(V_{\mathbb{C}}=\mathbb{W}\oplus\overline{\mathbb{W}}\). The projector operator can be written in terms of an operator \(J\) through the relation \(K=(1-iJ)/2\). The projector condition \(K^{2}=K\) implies \(J^{2}=-1\); thus, \(J\) is a complex structure.
\[-J=\begin{pmatrix}A&B\\ C&D\end{pmatrix} \tag{1}\]
By the restriction of \(J^{2}=-1\), a set of relations among the components can be established.
\[A^{2}+BC =-1 A+BD =0\] \[CA+DC =0 C+D^{2} =-1 \tag{2}\]
More specifically, the projection from the phase space to \(\mathbb{M}\) to \(\mathbb{W}\) and \(\overline{\mathbb{W}}\) is described as:
\[\mathbf{K}\!\begin{pmatrix}g\\ h\end{pmatrix} =\frac{1}{2}(1-iJ)\!\begin{pmatrix}g\\ h\end{pmatrix}=-\frac{i}{2}\!\begin{pmatrix}i-A&-B\\ -C&i-D\end{pmatrix}\!\begin{pmatrix}g\\ h\end{pmatrix}\] \[=-\frac{i}{2}\!\begin{pmatrix}ig-Ag-Bh\\ -Cg+ih-Dh\end{pmatrix}=\begin{pmatrix}g_{+}\\ h_{+}\end{pmatrix} \tag{3}\]
\[\overline{\mathbf{K}}\!\begin{pmatrix}g\\ h\end{pmatrix} =\frac{1}{2}(1+iJ)\!\begin{pmatrix}g\\ h\end{pmatrix}=-\frac{i}{2}\!\begin{pmatrix}i+A&B\\ C&i+D\end{pmatrix}\!\begin{pmatrix}g\\ h\end{pmatrix}\] \[=-\frac{i}{2}\!\begin{pmatrix}ig+Ag+Bh\\ Cg+ih+Dh\end{pmatrix}=\begin{pmatrix}g_{-}\\ h_{-}\end{pmatrix} \tag{4}\]
where \(\begin{pmatrix}g\\ h\end{pmatrix}\in\mathbb{M}\). The inner product is:
\[\langle\Psi_{\text{1-sys}}|\Psi_{\text{1-sys}^{\prime}}\rangle =-i\Omega\left(\overline{\mathbf{K}}\lambda,\mathbf{K}\nu\right)\] \[=-\frac{1}{2}\Omega(\lambda,J\nu)-i\frac{1}{2}\Omega(\lambda,\nu) \tag{5}\]
The term \(-\Omega(\lambda,J\nu)\) is usually denoted as \(\mu(\lambda,\nu)\). Note that it is symmetric, so the following relations must be satisfied by the components of the complex structure:
\[\mu(\lambda,\nu) =-\Omega(\lambda,J\nu)\] \[=\int_{\Sigma}d^{3}x\left(hAq+hBp-gCq-gDp\right)\] \[=-\Omega(\nu,J\lambda)\] \[=\int_{\Sigma}d^{3}x\left(pAg+pBh-qCg-qDh\right) \tag{6}\]
Therefore, the terms must satisfy the relations:
\[\int_{\Sigma}d^{3}xhAq=-\int_{\Sigma}d^{3}xqDh\] \[\int_{\Sigma}d^{3}xhBp=\int_{\Sigma}d^{3}xpBh\] \[\int_{\Sigma}d^{3}xgCq=\int_{\Sigma}d^{3}xqCg \tag{7}\]
The equations (2) and (7) are consistency conditions on the components of \(J\). Additionally, \(\mu(\lambda,\nu)\) gives the real part of the inner product, and it is here that the free choice of modes is encapsulated.
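For a single mode of frequency \(\omega\) (flat space), the components of \(J\) reduce to multiplication operators and the conditions above can be checked directly; relations (7) are automatic in that case. The sketch below uses the standard illustrative one-mode choice \(A=D=0\), \(B=1/\omega\), \(C=-\omega\) (our own example, not taken from the text), builds \(K=(1-iJ)/2\), and verifies \(K^{2}=K\).

```python
import numpy as np

omega = 1.7                                    # sample mode frequency
A, B, C, D = 0.0, 1.0 / omega, -omega, 0.0     # illustrative one-mode complex structure

minus_J = np.array([[A, B],
                    [C, D]])
J = -minus_J

# Relations (2), componentwise, and J^2 = -1 as a matrix identity.
assert np.isclose(A*A + B*C, -1) and np.isclose(A*B + B*D, 0)
assert np.isclose(C*A + D*C, 0) and np.isclose(C*B + D*D, -1)
assert np.allclose(J @ J, -np.eye(2))

# The projector K = (1 - iJ)/2 satisfies K^2 = K.
K = (np.eye(2) - 1j * J) / 2
assert np.allclose(K @ K, K)

# mu(lambda, lambda) from (6) for lambda = (g, h): here it equals omega*g^2 + h^2/omega.
g, h = 0.3, -1.2
mu = h*A*g + h*B*h - g*C*g - g*D*h
print(mu, omega * g**2 + h**2 / omega)
```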
## III Relating the Fock and the Schrodinger representation.
### The GNS construction
The operators \(\hat{\phi}\) and \(\hat{\pi}\) generate elementary linear observables; therefore, it is valid to ask about the general Lie group these operators might generate. An element of this Lie group could look like:
\[W[g,h]=e^{-i\int\{h(x)\hat{\phi}(x)-g(x)\hat{\pi}(x)\}dx} \tag{8}\]
Note that the operator in the exponent is \(\hat{\Omega}([g,h],\cdot)\); in order to simplify our notation we write \(\lambda:=[g,h]\), hence:
\[W[g,h]=W(\lambda)=e^{-i\hat{\Omega}(\lambda,\cdot)} \tag{9}\]
It is worthwhile asking about the product between two different elements of our Lie group:
\[W(\lambda_{1})W(\lambda_{2})=e^{-i\hat{\Omega}(\lambda_{1},\cdot )}e^{-i\hat{\Omega}(\lambda_{2},\cdot)}\] \[=e^{\frac{i}{2}\Omega(\lambda_{1},\lambda_{2})}e^{-i\hat{\Omega} (\lambda_{1}+\lambda_{2},\cdot)}=e^{\frac{i}{2}\Omega(\lambda_{1},\lambda_{2}) }W(\lambda_{1}+\lambda_{2}) \tag{10}\]
where we used the Baker-Campbell-Hausdorff formula. Additionally:
\[W(\lambda)^{*}=e^{i\hat{\Omega}(\lambda,\cdot)}=e^{-i\hat{\Omega}(-\lambda, \cdot)}=W(-\lambda) \tag{11}\]
And furthermore if \(\lambda=0\), then:
\[W(0)=1 \tag{12}\]
The relations (10), (11), and (12) are called the Weyl relations. All operators that satisfy the Weyl relations belong to a representation of the so-called _Weyl algebra_ \(\mathcal{A}\). Quantization means associating a representation of the Weyl relations on a Hilbert space. It is easy to see that the \(W(\lambda)\) satisfy all the requirements of a \(C^{*}\)-algebra. Additionally, the \(W(\lambda)\in R(\mathcal{A})\) are, on their own, representations of \(\mathcal{A}\).
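For a single degree of freedom the Weyl relations can be verified numerically by letting \(\hat{W}(\lambda)=e^{-i(h\hat{x}-g\hat{p})}\) act on wavefunctions sampled on a grid. The sketch below is a check of the composition law (10) under illustrative choices (grid, Gaussian test state and values of \(\lambda_{1},\lambda_{2}\) are ours).

```python
import numpy as np

x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2)                    # a test state, negligible at the box edges

def W(g, h, f):
    """Weyl operator W[g, h] = exp(-i(h x_hat - g p_hat)) on grid samples of f.
    Here g must be an integer multiple of dx so that the shift is exact."""
    shift = int(round(g / dx))
    return np.exp(-1j * h * g / 2) * np.exp(-1j * h * x) * np.roll(f, -shift)

g1, h1 = 30 * dx, 0.7
g2, h2 = -45 * dx, 1.3

lhs = W(g1, h1, W(g2, h2, psi))
Omega = h1 * g2 - g1 * h2                  # Omega(lambda_1, lambda_2) for a single mode
rhs = np.exp(0.5j * Omega) * W(g1 + g2, h1 + h2, psi)

print(np.max(np.abs(lhs - rhs)))           # numerically negligible: relation (10) holds
```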
The work of [8] uses the GNS construction, which can be stated in the following theorem:
**Theorem 1**.: _Let \(\mathcal{A}\) be a unital \(C^{*}\)-algebra and let \(\omega:\mathcal{A}\rightarrow\mathbb{C}\) be a state. Then there is a Hilbert space \(\mathcal{H}\), a representation \(R:\mathcal{A}\to L(\mathcal{H})\) and a vector \(|\Psi_{0}\rangle\in\mathcal{H}\) such that,_
\[\omega(A)=\left\langle\Psi_{0}|R(A)\Psi_{0}\right\rangle_{\mathcal{H}} \tag{13}\]
_Furthermore, the vector \(|\Psi_{0}\rangle\) is cyclic. The triplet \((\mathcal{H},R,\Psi_{0})\) with these properties is unique (up to unitary equivalence)._
The expected value of the Weyl operator can be obtained if we apply the functional \(\omega_{\text{Fock}}(\cdot)\) over \(R_{\text{Fock}}(\hat{W}(g))\), which is a Fock representation of the elements of the Weyl algebra. Thus we get:
\[\omega(R_{\text{Fock}}(\hat{W}(g)))_{\text{Fock}}=\text{{}_{Fock }}\left\langle 0\right|R_{\text{Fock}}(W(\lambda))\left|0\right\rangle_{\text{ Fock}}\\ =\exp\left(-\frac{1}{4}\mu(\lambda,\lambda)\right) \tag{14}\]
\(|0\rangle_{\text{Fock}}\) is the Fock vacuum selected by the corresponding creation/annihilation operators. It is useful to note that a Schrodinger representation of the Weyl operator is as follows:
\[R_{\text{Sch}}(\hat{W}(\lambda))=e^{i\hat{\phi}[h]-i\hat{\pi}[g]}=e^{i\hat{ \phi}[h]}e^{-i\hat{\pi}[g]}e^{-\frac{1}{2}[\hat{\phi}[h],\hat{\pi}[g]]} \tag{15}\]
We have two operators whose action on the Hilbert space is not yet determined; resolving this requires fixing one of them. Let us see which one. One of the reasons why we want to calculate the expectation value of \(R_{\text{Sch}}(\hat{W}(\lambda))\) is to correctly define normalizations of the kind "\(\int\mathcal{D}\phi\Psi^{*}[\phi]\Psi[\phi]=1\)". Of course, we might have chosen to integrate over \(\pi\) instead. The interpretation of \(\Psi[\phi]\) is that \(|\Psi[\phi]|^{2}\) is proportional to the probability density for the quantum field \(\hat{\phi}(x,\Sigma_{t})\) to assume the value \(\phi(x,\Sigma_{t})\) on the fixed surface \(\Sigma_{t}\), which is parameterized by \(t\). Such an interpretation automatically fixes the functional \(\Psi[\phi]\) as an eigenvector of \(\hat{\phi}(x,\Sigma_{t})\), namely \(\hat{\phi}\Psi[\phi]=\phi\Psi[\phi]\), where we omit the hyper-surface variables. Alternatively, it can be said that \(\hat{\phi}\) is diagonal in the Schrodinger representation. The canonical commutator is \([\hat{\phi}[h],\hat{\pi}[g]]=i\int_{\Sigma}d^{3}xhg\), where we omit \(t\) in \(\Sigma\) henceforth. Calculating the vacuum expectation of \(R_{\text{Sch}}(\hat{W}(\lambda))\) and using the fact that expectation values should be independent of the representation, we may equate equation (14) with \(\omega(R_{\text{Sch}}(\hat{W}(\lambda)))_{\text{Sch}}\), thus:
\[\omega(R_{\text{Sch}}(\hat{W}(\lambda)))_{\text{Sch}}\\ =\left\langle\Psi_{0}\right|e^{i\hat{\phi}[h]}e^{-i\hat{\pi}[g]}e^ {-\frac{i}{2}\int_{\Sigma}d^{3}xhg}\left|\Psi_{0}\right\rangle\\ =e^{-\frac{i}{2}\int_{\Sigma}d^{3}xhg}\left\langle\Psi_{0}\right| e^{i\int_{\Sigma}d^{3}x\phi h}e^{-i\hat{\pi}[g]}\left|\Psi_{0}\right\rangle\\ =\exp\left(-\frac{1}{4}\mu(\lambda,\lambda)\right) \tag{16}\]
Through this relation, we obtain an equation that can be used to find a representation for \(\hat{\pi}[g]\), which, furthermore, depends on the complex structure \(J\) via \(\mu\). Explicitly:
\[\left\langle\Psi_{0}\right| e^{i\int_{\Sigma}d^{3}x\phi h}e^{-i\hat{\pi}[g]}\Psi_{0}\Big{\rangle}\] \[=\exp\left(\frac{i}{2}\int_{\Sigma}d^{3}xhg+\frac{1}{4}\Omega( \lambda,J\lambda)\right) \tag{17}\]
There is already a sketch for \(\hat{\pi}[g]\), motivated by the commutation relation \([\hat{\phi}[h],\hat{\pi}[g]]=i\int_{\Sigma}d^{3}xhg\). In order to satisfy this relation, \(\hat{\pi}[g]\) must contain a functional derivative with respect to \(\phi\), and at most polynomial terms in \(\phi\) are allowed. To keep the calculations manageable, we work with linear terms only. The following ansatz for the momentum density operator is made:
\[\hat{\pi}[g]=\int_{\Sigma}d^{3}x\left(-ig\frac{\delta}{\delta\phi}+\phi(M+N)g \right) \tag{18}\]
\(M\) and \(N\) are operators that act on elements of the solution space \(\mathbb{S}\) and its cotangent space. In [8] and [7] a similar momentum density operator is constructed, but it differs from (18) in the term \(N\). This choice is made in order to keep the expression as general and, at the same time, as simple as possible. Keeping in mind that the momentum density appears in the exponent, it is important to use the BCH formula again [11]:
\[e^{-i\hat{\pi}[g]} =e^{-i\int_{\Sigma}d^{3}x\phi(M+N)g-\int_{\Sigma}d^{3}xg\frac{d}{ \delta\phi}}\] \[=e^{-i\int_{\Sigma}d^{3}x\phi Mg}\,e^{-\int_{\Sigma}d^{3}x\phi Ng \frac{d}{\delta\phi}+i\phi Ng)}\] \[\times e^{-\frac{i}{2}\int_{\Sigma}d^{3}x\int_{\Sigma}d^{3}ygMg[ \phi,\frac{\delta}{\delta\phi}]}\] \[=e^{\frac{i}{2}\int_{\Sigma}d^{3}xgMg}\,e^{-i\int_{\Sigma}d^{3} x\phi Mg}\] \[\times e^{-\int_{\Sigma}d^{3}x(g\frac{d}{\delta\phi}+i\phi Ng)} \tag{19}\]
Substituting (19) in (17):
\[e^{\frac{i}{2}\int_{\Sigma}d^{3}xgMg}\left\langle\Psi_{0}\right| e^{i\int_{\Sigma}d^{3}x\phi h}\,e^{-i\int_{\Sigma}d^{3}x\phi Mg}\\ \times e^{-\int_{\Sigma}d^{3}x(g\frac{d}{\delta\phi}+i\phi Ng)} \left|\Psi_{0}\right\rangle\\ =\exp\left(\frac{i}{2}\int_{\Sigma}d^{3}xhg+\frac{1}{4}\Omega( \lambda,J\lambda)\right) \tag{20}\]
Now, how can \(e^{-\int_{\Sigma}d^{3}x(g\frac{d}{\delta\phi}+i\phi Ng)}\left|\Psi_{0}\right\rangle\) be calculated? The functional \(\Psi_{0}\) is a sort of "free choice", and whenever it does not come into conflict with the state induced by the annihilation operator, all the construction might be regarded as consistent. The exponential term can be written as:
\[e^{-\int_{\Sigma}d^{3}x(g\frac{d}{\delta\phi}+i\phi Ng)}=\\ \lim_{n\rightarrow\infty}\left(1-\frac{1}{n}\int_{\Sigma}d^{3}x \left[g\frac{\delta}{\delta\phi}+i\phi Ng\right]\right)^{n} \tag{21}\]
Let us choose a functional that satisfies the following condition:
\[\int_{\Sigma}d^{3}x\left[g\frac{\delta}{\delta\phi}+i\phi Ng\right]\Psi_{0}[\phi]=0 \tag{22}\]
This implies \(e^{-\int_{\Sigma}d^{3}x\,(g\frac{\delta}{\delta\phi}+i\phi Ng)}\left|\Psi_{0}\right\rangle =\left|\Psi_{0}\right\rangle\). The states are functionals of the form
\[\Psi_{0}[\phi]=Ce^{-\frac{i}{2}\int_{\Sigma}d^{3}x\,\phi N\phi} \tag{23}\]
\(C\) is a normalization constant. Assume that the operator \(N\) can be split as \(N=N^{\prime}+iN^{\prime\prime}\). An appropriate measure is needed to guarantee that the integral is well defined; let us call it \(\hat{\mu}\). Moreover, the integral shall run over the configuration space \(\mathbb{S}\). Using the conditions (22) and (23) in (20) we obtain:
\[e^{\frac{i}{2}\int_{\Sigma}d^{3}xgMg}\int_{\mathbb{S}}d\hat{\mu} \,e^{i\int_{\Sigma}d^{3}x\,\phi N^{\prime\prime}\phi}\,e^{i\int_{\Sigma}d^{3}x \,\phi h}\times\\ e^{-i\int_{\Sigma}d^{3}x\,\phi Mg}=\exp\left(\frac{i}{2}\int_{ \Sigma}d^{3}x\,hg+\frac{1}{4}\Omega(\lambda,J\lambda)\right) \tag{24}\]
It is worthwhile to simplify the equation. Looking at (16), one may note that it holds for any vector in the phase space, so let us choose \(\lambda=[0,h]\). In that case, \(\mu(\lambda,\lambda)=-\Omega([0,h],J[0,h])=\Omega([0,h],[Bh,Dh])=\int_{\Sigma} d^{3}xhBh\); substituting \(\lambda\) and the previous result into (24) leads to:
\[\int_{\mathbb{S}}d\hat{\mu}\,e^{\int_{\Sigma_{t}}d^{3}x\,\phi N^{\prime\prime }\phi}e^{i\int_{\Sigma_{t}}d^{3}x\,\phi h}=e^{-\frac{1}{4}\int_{\Sigma_{t}}d ^{3}x\,hBh} \tag{25}\]
It is worthwhile to reabsorb the quadratic term in the exponential into the measure:
\[\int_{\mathbb{S}}d\tilde{\mu}e^{i\int_{\Sigma}d^{3}x\,\phi h}=e^{-\frac{1}{4} \int_{\Sigma}d^{3}x\,hBh} \tag{26}\]
where \(d\tilde{\mu}=d\hat{\mu}\,e^{\int_{\Sigma_{t}}d^{3}x\,\phi N^{\prime\prime}\phi}\). The relation (26) is the Fourier transform of the measure \(\tilde{\mu}\). There is a theorem that links the Fourier transform with the property of being a Gaussian measure [12]:
**Theorem 2**.: _A measure \(\tilde{\mu}\) on a locally convex space \(X\) is Gaussian and centered, if and only if its Fourier transform has the form:_
\[\chi(\tilde{\mu})=e^{-\frac{1}{2}(h,Oh)} \tag{27}\]
\(O\) _is a symmetric bilinear function on \(X^{*}\), and the bilinear form \((h,Oh)\) is positive definite._
Thus, it is concluded that \(\tilde{\mu}\) is Gaussian. The measure looks like:
\[d\tilde{\mu}_{B^{-1}}=\mathcal{D}\phi e^{-\int_{\Sigma}d^{3}x\,\phi B^{-1}\phi} \tag{28}\]
The subindex \(B^{-1}\) in \(d\tilde{\mu}_{B^{-1}}\) indicates the dependence of the measure on the operator \(B^{-1}\). This result, together with (28) and the definition of \(\tilde{\mu}\), implies that:
\[d\hat{\mu}=\mathcal{D}\phi e^{-\int_{\Sigma_{t}}d^{3}x\,\phi(B^{-1}+N^{\prime \prime})\phi} \tag{29}\]
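As a consistency check of (26) and (28) (a standard Gaussian functional integral, assuming \(B\) is symmetric and positive definite so that the measure is well defined), completing the square in the exponent gives

\[\int\mathcal{D}\phi\,e^{-\int_{\Sigma}d^{3}x\,\phi B^{-1}\phi}\,e^{i\int_{\Sigma}d^{3}x\,\phi h}\propto\exp\left(-\frac{1}{4}\int_{\Sigma}d^{3}x\,hBh\right),\]

since \(-\phi B^{-1}\phi+i\phi h=-\left(\phi-\tfrac{i}{2}Bh\right)B^{-1}\left(\phi-\tfrac{i}{2}Bh\right)-\tfrac{1}{4}hBh\), and the shifted integral only contributes an \(h\)-independent normalization.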
So knowing \(N^{\prime\prime}\) allows us to determine the integration measure. Returning to equation (24), using the fact that \(\Psi_{0}\) is a complex constant in (20) and that we are dealing with a Gaussian measure:
\[e^{\frac{i}{2}\int_{\Sigma}d^{3}xgMg}\int_{\mathbb{S}}d\tilde{ \mu}_{B^{-1}}\,e^{i\int_{\Sigma}d^{3}x\phi(h-Mg)}\\ =\exp\left(\frac{i}{2}\int_{\Sigma}d^{3}xhg+\frac{1}{4}\Omega( \lambda,J\lambda)\right) \tag{30}\]
Thus, the functional integral in the left hand side is:
\[e^{\frac{i}{2}\int_{\Sigma}d^{3}xgMg}\,e^{-\frac{1}{4}\int_{ \Sigma}d^{3}x(h-Mg)B(h-Mg)}\\ =\exp\left(\frac{i}{2}\int_{\Sigma}d^{3}xhg+\frac{1}{4}\Omega( \lambda,J\lambda)\right) \tag{31}\]
Let us examine the term on the right hand side. The term \(\Omega(\lambda,J\lambda)\) can be written using the matrix form of \(J\), \(\Omega([g,h],J[g,h])=-\Omega([g,h],[Ag+Bh,Cg+Dh])\):
\[\Omega([g,h],[Ag+Bh,Cg+Dh])\\ =\int_{\Sigma}d^{3}x(hAg+hBh-gCg-gDh) \tag{32}\]
Hence, equating the exponents and the integrands:
\[\frac{i}{2}gMg-\frac{1}{4}(hBh-2h(BMg)+(Mg)(BMg))\\ =\frac{i}{2}hg-\frac{1}{4}(hAg+hBh-gCg-gDh) \tag{33}\]
where the symmetry of \(B\) obtained in (7) was used. From here, we extract the relations:
\[gCg =i2gMg-(Mg)(BMg) \tag{34}\] \[h(BMg) =ihg-hAg \tag{35}\]
where the relations (7) were used, especially the fact that \(hAg=-gDh\) under the integral sign. Now, from the second relation, (35), we extract the equation:
\[\boxed{M=B^{-1}(i1-A)=iB^{-1}-B^{-1}A} \tag{36}\]
Such an operator was obtained in [7; 8]. Now we will look for the operator \(N^{\prime\prime}\).
### Using the Schrodinger representation to find the operator \(\mathbf{N}^{\prime\prime}\)
The representation of the momentum density operator in the Schrodinger representation with a Gaussian measure is:
\[\hat{\pi}[g]=\int_{\Sigma}d^{3}x\left(-ig\frac{\delta}{\delta\phi}+\phi(iB^{-1 }-B^{-1}A+N)g\right) \tag{37}\]
The additional term found in (37) is a product of having considered a Gaussian measure. Let us see explicitly what the creation/annihilation operators look like. The
creation operator in terms of \(\phi\) and \(\pi\) can be written as:
\[\hat{b}^{\dagger}[\mathbf{K}\lambda] =-i\hat{\Omega}(\mathbf{K}\lambda,\cdot)=-i\hat{\Omega}\left(\frac{ 1}{2}(1-iJ)\lambda,\cdot\right)\] \[=\frac{1}{2}(\hat{\phi}[Cg+Dh-ih]-\hat{\pi}[Ag+Bh-ig]) \tag{38}\]
Analogously, the annihilation operator is:
\[\hat{b}[\overline{\mathbf{K}\lambda}] =i\hat{\Omega}(\overline{\mathbf{K}\lambda},\cdot)=i\hat{\Omega} \left(\frac{1}{2}(1+iJ)\lambda,\cdot\right)\] \[=\frac{1}{2}(\hat{\phi}[Cg+Dh+ih]-\hat{\pi}[Ag+Bh+ig]) \tag{39}\]
It is straightforward to show that the annihilation operator derived from the representation of \(\pi\) in the Gaussian measure is explicitly:
\[\hat{b}_{\text{Gauss}}[\overline{\mathbf{K}\lambda}]=\frac{1}{2 }\int_{\Sigma}d^{3}x\,\left[(Ag+Bh+ig)i\frac{\delta}{\delta\phi}\right.\\ -\left.\phi N(Ag+Bh+ig)\right] \tag{40}\]
where the subindex "Gauss" indicates the corresponding measure. It is simple to see that the creation operator can be written as:
\[\hat{b}^{\dagger}_{\text{Gauss}}[\mathbf{K}\lambda]=-\frac{1}{2 }\int_{\Sigma}d^{3}x\left(2\hat{\phi}(B^{-1}(iA+1)g+ih)\right.\\ \left.+\phi N(-i(iA+1)g+Bh)-i(Ag+Bh-ig)\frac{\delta}{\delta\phi}\right) \tag{41}\]
We are free to choose the creation and annihilation operators; in order to simplify the calculations, let us choose \(\hat{b}^{\dagger}_{\text{Gauss}}[\mathbf{K}\lambda]\) to be proportional to the functional derivative. Therefore:
\[N=-2iB^{-1} \tag{42}\]
Let us recall that \(N=N^{\prime}+iN^{\prime\prime}\). \(N^{\prime}\) only contributes a phase to \(\Psi_{0}[\phi]\) and can be set to zero. Thus:
\[N^{\prime\prime}=-2B^{-1} \tag{43}\]
The measure ends up being:
\[d\hat{\mu}=\mathcal{D}\phi e^{\int_{\Sigma}d^{3}x\,\phi B^{-1}\phi} \tag{44}\]
which is clearly not Gaussian, although the vacuum is Gaussian:
\[\boxed{\Psi_{0}[\phi]=C[B]e^{-\int_{\Sigma}d^{3}x\,\phi B^{-1}\phi}} \tag{45}\]
where \(C[B]\) is a normalization constant. Here we make the functional dependence on \(B\) explicit because the vacuum depends on its choice (i.e., on the choice of the complex structure \(J\)). The vacuum compensates for this 'ill' definition of the measure. Therefore, substituting the operator (42) in (37), we obtain:
\[\hat{\pi}[g]=-i\int_{\Sigma}d^{3}x\left(g\frac{\delta}{\delta\phi}+\phi(B^{-1 }-iB^{-1}A)g\right) \tag{46}\]
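A quick check confirms that (46) reproduces the canonical commutator: since the term linear in \(\phi\) commutes with \(\hat{\phi}[h]\), only the functional derivative contributes, so

\[[\hat{\phi}[h],\hat{\pi}[g]]\Psi[\phi]=i\int_{\Sigma}d^{3}y\,g(y)\,\frac{\delta}{\delta\phi(y)}\left(\int_{\Sigma}d^{3}x\,h\phi\right)\Psi[\phi]=i\left(\int_{\Sigma}d^{3}x\,hg\right)\Psi[\phi],\]

in agreement with the canonical commutation relation used above.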
The creation and annihilation operators are correspondingly:
\[\hat{b}^{\dagger}_{\text{Gauss}}[\mathbf{K}\lambda]=\frac{i}{2}\int_{\Sigma} d^{3}x\,\left(Ag+Bh-ig\right)\frac{\delta}{\delta\phi} \tag{47}\]
\[\hat{b}_{\text{Gauss}}[\overline{\mathbf{K}\lambda}]=\frac{i}{2} \int_{\Sigma}d^{3}x\,\left(Ag+Bh+ig\right)\\ \times\left[\frac{\delta}{\delta\phi}+2B^{-1}\phi\right] \tag{48}\]
Alternatively, the positive and negative frequency modes can be explicitly noted:
\[\hat{b}^{\dagger}_{\text{Gauss}}[\mathbf{K}\lambda]=\int_{\Sigma }d^{3}x\,g^{+}\frac{\delta}{\delta\phi} \tag{49}\] \[\hat{b}_{\text{Gauss}}[\overline{\mathbf{K}\lambda}]=-\int_{ \Sigma}d^{3}x\,g^{-}\left[\frac{\delta}{\delta\phi}+2B^{-1}\phi\right] \tag{50}\]
## IV Discussion
A shape of the vacuum which depends on the complex structure seems more natural and compatible with the Fock description of states. Consider, for instance, the restriction of the Minkowski vacuum to the right (or left) wedge, which is highly mixed; this is a consequence of the Reeh-Schlieder theorem. It is expected that the restriction of the Minkowski vacuum to such a wedge should be written as a linear combination of Rindler pure states. The whole Minkowski vacuum restricted to both wedges is pure and therefore should be orthogonal to the total Rindler vacuum \(\omega^{L}_{R}\otimes\omega^{R}_{R}\)[13]. An appropriate Schrodinger representation should manifest this fact.
I suspect that the shape of the vacuum (45) respects these relations among different vacua, even in the non-unitary case, such as the relation between Minkowski and Rindler displayed in (2.76) of [14]. Such a constraint should be taken into account in order to reproduce theoretical results obtained in the Fock representation, like the Unruh or Hawking effects, although further analysis is needed in order to compute expectation values through this representation.
|
2309.15341 | Representing low mass black hole seeds in cosmological simulations: A
new sub-grid stochastic seed model | The nature of the first seeds of supermassive black holes (SMBHs) is
currently unknown, with postulated initial masses ranging from
$\sim10^5~M_{\odot}$ to as low as $\sim10^2~M_{\odot}$. However, most existing
cosmological simulations resolve BHs only down to $\sim10^5-10^6~M_{\odot}$. In
this work, we introduce a novel sub-grid BH seed model that is directly
calibrated from high resolution zoom simulations that can trace the formation
and growth of $\sim 10^3~M_{\odot}$ seeds forming in halos with pristine,
star-forming gas. We trace the BH growth along merger trees until their
descendants reach masses of $\sim10^4$ or $10^5~M_{\odot}$. The descendants
assemble in galaxies with a broad range of properties (e.g., halo masses
$\sim10^7-10^9~M_{\odot}$) that evolve with redshift and are sensitive to seed
parameters. The results are used to build a new stochastic seeding model that
directly seeds these descendants in lower resolution versions of our zoom
region. Remarkably, we find that by seeding the descendants simply based on
total galaxy mass, redshift and an environmental richness parameter, we can
reproduce the results of the detailed gas based seeding model. The baryonic
properties of the host galaxies are well reproduced by the mass-based seeding
criterion. The redshift-dependence of the mass-based criterion captures the
influence of halo growth, star formation and metal enrichment on seed
formation. The environment based seeding criterion seeds the descendants in
rich environments with higher numbers of neighboring galaxies. This accounts
for the impact of unresolved merger dominated growth of BHs, which produces
faster growth of descendants in richer environments with more extensive BH
merger history. Our new seed model will be useful for representing a variety of
low mass seeding channels within next generation larger volume uniform
cosmological simulations. | Aklant K Bhowmick, Laura Blecha, Paul Torrey, Rainer Weinberger, Luke Zoltan Kelley, Mark Vogelsberger, Lars Hernquist, Rachel S. Somerville | 2023-09-27T01:15:27Z | http://arxiv.org/abs/2309.15341v1 | Representing low mass black hole seeds in cosmological simulations: A new sub-grid stochastic seed model
###### Abstract
The nature of the first seeds of supermassive black holes (SMBHs) is currently unknown, with postulated initial masses ranging from \(\sim 10^{5}\)\(M_{\odot}\) to as low as \(\sim 10^{2}\)\(M_{\odot}\). However, most existing cosmological hydrodynamical simulations resolve BHs only down to \(\sim 10^{5}-10^{6}\)\(M_{\odot}\). In this work, we introduce a novel sub-grid BH seeding model for cosmological simulations that is directly calibrated from high resolution zoom simulations that can trace the formation and growth of \(\sim 10^{3}\)\(M_{\odot}\) seeds that form in halos with pristine, star-forming gas. We trace the BH growth along galaxy merger trees until their descendants reach masses of \(\sim 10^{4}\) or \(10^{5}\)\(M_{\odot}\). The descendants assemble in galaxies with a broad range of properties (e.g., halo masses ranging from \(\sim 10^{7}-10^{9}\)\(M_{\odot}\)) that evolve with redshift and are also sensitive to seed parameters. The results are used to build a new stochastic seeding model that directly seeds these descendants in lower resolution versions of our zoom region. Remarkably, we find that by seeding the descendants simply based on total galaxy mass, redshift and an environmental richness parameter, we can reproduce the results of the detailed gas based seeding model. The baryonic properties of the host galaxies are well reproduced by the mass-based seeding criterion. The redshift-dependence of the mass-based criterion captures the combined influence of halo growth, star formation and metal enrichment on the formation of \(\sim 10^{3}\)\(M_{\odot}\) seeds. The environment based seeding criterion seeds the descendants in rich environments with higher numbers of neighboring galaxies. This accounts for the impact of unresolved merger dominated growth of BHs, which produces faster growth of descendants in richer environments with more extensive BH merger history. Our new seed model will be useful for representing a variety of low mass seeding channels within next generation larger volume uniform cosmological simulations.
keywords: (galaxies:) quasars: supermassive black holes; (galaxies:) formation; (galaxies:) evolution; (methods:) numerical
## 1 Introduction
The origin of supermassive black holes (SMBHs) is a key missing piece in our current understanding of galaxy formation. Several theoretical channels have been proposed for the first "seeds" of SMBHs, predicting a wide range of postulated initial masses. At the lowest mass end of the initial seed mass function, we have the remnants of the first generation Population III stars, a.k.a. Pop III seeds (Fryer et al., 2001; Madau and Rees, 2001; Xu et al., 2013; Smith et al., 2018) ranging from \(\sim 10^{2}-10^{3}\)\(M_{\odot}\). Next, we have seeds postulated at the "intermediate mass" range of \(\sim 10^{3}-10^{4}\)\(M_{\odot}\) that can form via runaway stellar and black hole (BH) collisions within dense Nuclear Star Clusters, a.k.a NSC seeds (Davies et al., 2011; Lupi et al., 2014; Kroupa et al., 2020; Das et al., 2021, 2021). Finally, we can have "high mass seeds" formed via direct isothermal collapse of gas at sufficiently high temperatures (\(\gtrsim 10^{4}\) K), a.k.a direct collapse black hole or DCBH seeds (Bromm and Loeb, 2003; Begelman et al., 2006; Regan et al., 2014; Latif et al., 2016; Luo et al., 2018; Wise et al., 2019; Luo et al., 2020; Begelman and Silk, 2023). DCBHs masses are traditionally postulated to be ranging within \(\sim 10^{4}-10^{6}\)\(M_{\odot}\), but recent works have suggested that they can also be as massive as \(\sim 10^{8}\)\(M_{\odot}\)(Mayer et al., 2023).
The growing observed population of luminous quasars at \(z\sim 6-8\)(Fan et al., 2001; Willott et al., 2010; Mortlock et al., 2011; Venemans et al., 2015; Jiang et al., 2016; Banados et al., 2016; Reed et al., 2017; Matsuoka et al., 2018; Wang et al., 2018;
Baaiados et al., 2018; Matsuoka et al., 2019; Yang et al., 2019; Wang et al., 2021) tells us that \(\sim 10^{9}-10^{10}\)\(M_{\odot}\) BHs already assembled within the first few hundred million years after the Big Bang. These already pose a serious challenge to models of BH formation as well as BH growth. For example, light seeds may need to sustainably accrete gas at super-Eddington rates to grow by \(\sim 6-7\) orders of magnitude within such a short time. Alternatively, they can boost their seed mass via mergers, but it is unclear as to how efficiently these seeds sink and merge with each other within the shallow potential wells of high redshift proto-galaxies (Volonteri, 2007; Ma et al., 2021). Heavier seed masses such as DCBHs are substantially more conducive for assembling the high-z quasars, but it is unclear if they form frequently enough to account for the observed number densities (1 Gpc\({}^{-3}\)).
Due to possible degeneracies in the impact of different BH formation versus BH growth models, it is challenging to constrain seed models solely using observations of luminous high-z quasars. To that end, detections of lower mass BH populations at high-z are going to be crucial for constraining seed models as these BHs are more likely to retain the memory of their initial seeds. The James Webb Space Telescope (JWST; Gardner et al., 2006) is pushing the frontiers of SMBH studies by detecting lower luminosity active galactic nuclei (AGN) at high redshifts. In addition to the first statistical sample of \(\sim 10^{6}-10^{7}\)\(M_{\odot}\) AGN at \(z\sim 4-7\)(Harikane et al., 2023), JWST has also produced the first detections at \(z\gtrsim 8.3\)(Larson et al., 2023) and \(z\sim 10.6\)(Maiolino et al., 2023). Moreover, there is an exciting possibility of future detections of BHs as small as \(\sim 10^{5}\)\(M_{\odot}\) using JWST, which would potentially enable us to probe the massive end of the seed population for the very first time (Natarajan et al., 2017; Cann et al., 2018; Inayoshi et al., 2022).
Even with JWST and proposed X-ray facilities like ATHENA (Barcons et al., 2017) and Axis (Mushotzky et al., 2019), low mass seeds \(\sim 10^{2}-10^{4}\)\(M_{\odot}\) are likely to be inaccessible to electromagnetic (EM) observations at high-z. However, with the new observational window of gravitational waves (GW) opened for the first time by the Laser Interferometer Gravitational-Wave Observatory (LIGO; Abbott et al., 2009), we can close this gap. In addition to detecting numerous (\(\sim 80\)) stellar mass BH mergers, LIGO has also started probing the elusive population of intermediate mass black holes (IMBH: \(\sim 10^{2}-10^{5}\)\(M_{\odot}\)) with GW190521 (Abbott et al., 2020) producing a \(\sim 142\)\(M_{\odot}\) BH remnant. At the other end of BH mass spectrum, the North American Nanohertz Observatory for Gravitational Waves (NANOGRAV) have also detected the Hellings-Downs correlation expected from a stochastic GW background that most likely originates from populations of merging SMBHs (Agazie et al., 2023). But the strongest imprints of BH formation will likely be provided by the upcoming Laser Interferometer Space Antenna (LISA; Baker et al., 2019), which is expected to detect GWs from mergers of IMBHs as small as \(\sim 10^{3}\)\(M_{\odot}\) up to \(z\sim 15\)(Amaro-Seoane et al., 2017).
Cosmological hydrodynamic simulations (Di Matteo et al., 2012; Vogelsberger et al., 2014; Sijacki et al., 2015; Khandai et al., 2015; Schaye et al., 2015; Volonteri et al., 2016; Dubois et al., 2016; Kaviraj et al., 2017; Tremmel et al., 2017; Nelson et al., 2019; Volonteri et al., 2020) have emerged as powerful tools for testing galaxy formation theories (see, e.g., the review by Vogelsberger et al., 2020). However, most such simulations can resolve gas elements only down to \(\sim 10^{5}-10^{7}\)\(M_{\odot}\), depending on the simulation volume. This is particularly true for simulation volumes needed to produce statistical samples of galaxies and BHs that can be directly compared to observations. Therefore, most cosmological simulations only model BH seeds down to \(\sim 10^{5}\)\(M_{\odot}\)(for e.g. Vogelsberger et al., 2014; Khandai et al., 2015; Tremmel et al., 2017). Notably, there are simulations that do attempt to capture seed masses down to \(\sim 10^{4}\)\(M_{\odot}\)(Ni et al., 2022) and \(\sim 10^{3}\)\(M_{\odot}\)(Taylor and Kobayashi, 2014; Wang et al., 2019), but they do so without explicitly resolving the seed-forming gas to those masses. Overall, directly resolving the low mass seed population (\(\sim 10^{2}-10^{4}\)\(M_{\odot}\) encompassing Pop III and NSC seeding channels) is completely inaccessible within state of the art cosmological simulations, and pushing beyond current resolution limits will require a substantial advancement in available computing power.
Given that BH seed formation is primarily governed by properties of the seed-forming gas, the insufficient resolution within cosmological simulations carries the additional liability of having poorly converged gas properties. For instance, Pop III and NSC seeds are supposed to be born out of star-forming and metal poor gas. However, the rates of star formation and metal enrichment may not be well converged in these simulations at their typical gas mass resolutions of \(\sim 10^{5}-10^{7}\)\(M_{\odot}\)(for example, see Figure 19 of Bhowmick et al., 2021). As a result, many simulations (Di Matteo et al., 2012; Vogelsberger et al., 2014; Nelson et al., 2018; Ni et al., 2022) simply use a host halo mass threshold to seed BHs. Several cosmological simulations have also used local gas properties for seeding (Taylor and Kobayashi, 2014; Tremmel et al., 2017; Wang et al., 2019). These simulations produce seeds directly out of sufficiently dense and metal poor gas cells, which is much more consistent with proposed theoretical seeding channels. But these approaches can lead to stronger resolution dependence in the simulated BH populations (see Figure 10 of Taylor and Kobayashi, 2014). In any case, most of these seeding approaches have achieved significant success in generating satisfactory agreement with the observed SMBH populations at \(z\sim 0\)(Habouzit et al., 2020). However, it is important to note that they do not provide definitive discrimination among the potential seeding channels from which the simulated BHs may have originated.
A standard approach to achieve very high resolutions in cosmological simulations is to use the 'zoom-in' technique. In our previous work (Bhowmick et al., 2021, 2022), we used cosmological zoom-in simulations with gas mass resolutions up to \(\sim 10^{3}\)\(M_{\odot}\) to build a new set of gas based seed models that placed seeds down to the lowest masses (\(1.56\times 10^{3}\)\(M_{\odot}/h\)) within halos containing sufficient amounts of star forming & metal poor gas. We systematically explored these gas based seed models and found that the strongest constraints for seeding are expected within merger rates measurable with LISA. However, the predictions for these zoom simulations are subject to large cosmic variance, as they correspond to biased regions of the large-scale structure. In order to make observationally testable predictions with these gas based seed models, we must find a way to represent them in cosmological simulations despite the lack of sufficient resolution.
In this work, we build a new sub-grid stochastic seed model
that can represent low mass seeds born out of star forming and metal poor gas, within lower-resolution and larger-volume simulations that cannot directly resolve them. To do this, we first run a suite of highest resolution zoom simulations that places \(1.56\times 10^{3}\ M_{\odot}/h\) seeds within star forming and metal poor gas using the gas based seed models from Bhowmick et al. (2021). We then study the growth of \(1.56\times 10^{3}\ M_{\odot}/h\) seeds and the evolution of their formation environments. We particularly study the halo and galaxy properties wherein these seeds assemble higher mass (\(1.25\times 10^{4}\ \&\ 1\times 10^{5}\ M_{\odot}/h\)) descendants. We then use the results to build our stochastic seed model that directly seeds these descendants within lower resolution versions of the same zoom region. In the process, we determine the key ingredients required for these stochastic seed models to reproduce the results of the gas based seed models in the lower resolution zooms.
Section 2 presents the basic methodology, which includes the simulation suite, the underlying galaxy formation model, as well as the BH seed models. Our main results are described in sections 3 and 4. In section 3, we present the results for the formation and growth of \(1.56\times 10^{3}\ M_{\odot}/h\) seeds within our highest resolution zoom simulations. In section 4, we use the results from section 3 to build our stochastic seed model. Finally, Section 5 summarizes our main results.
## 2 Methods
### AREPO cosmological code and the Illustris-TNG model
We use the AREPO gravity + magneto-hydrodynamics (MHD) solver (Springel, 2010; Pakmor et al., 2011, 2016; Weinberger et al., 2020) to run our simulations. The simulations use a \(\Lambda\) cold dark matter cosmology with parameters adopted from Planck Collaboration et al. (2016): (\(\Omega_{\Lambda}=0.6911,\Omega_{m}=0.3089,\Omega_{b}=0.0486,H_{0}=67.74\ \rm km\ sec^{-1}Mpc^{-1}, \sigma_{8}=0.8159,n_{s}=0.9667\)). The gravity solver uses the PM Tree (Barnes & Hut, 1986) method and the MHD solver for gas dynamics uses a quasi-Lagrangian description of the fluid within an unstructured grid generated via a Voronoi tessellation of the domain. Halos are identified using the friends of friends (FOF) algorithm (Davis et al., 1985) with a linking length of 0.2 times the mean particle separation. Subhalos are computed using the SUBFIND (Springel et al., 2001) algorithm for each simulation snapshot. Aside from our BH seed models, our underlying galaxy formation model is the same as the IllustrisTNG (TNG) simulation suite (Springel et al., 2018; Pillepich et al., 2018; Nelson et al., 2018; Naiman et al., 2018; Marinacci et al., 2018; Nelson et al., 2019) (see also Weinberger et al., 2018; Genel et al., 2018; Donami et al., 2019; Torrey et al., 2019; Rodriguez-Gomez et al., 2019; Nelson et al., 2019; Pillepich et al., 2019; Ubler et al., 2021; Habouzit et al., 2021). The TNG model includes a wide range of subgrid physics for star formation and evolution, metal enrichment and feedback as detailed in Pillepich et al. (2018) and also summarized in our earlier papers (Bhowmick et al., 2021, 2022, 2022, 2022, 2022).
### BH accretion, feedback and dynamics
BH accretion rates are determined by the Eddington-limited Bondi-Hoyle formalism given by
\[\dot{M}_{\rm bh}=\min(\dot{M}_{\rm Bondi},\dot{M}_{\rm Edd}) \tag{1}\] \[\dot{M}_{\rm Bondi}=\frac{4\pi G^{2}M_{\rm bh}^{2}\rho}{c_{s}^{3}}\] (2) \[\dot{M}_{\rm Edd}=\frac{4\pi GM_{\rm bh}m_{p}}{\epsilon_{r} \sigma_{T}\ c} \tag{3}\]
where \(G\) is the gravitational constant, \(\rho\) is the local gas density, \(M_{\rm bh}\) is the BH mass, \(c_{s}\) is the local sound speed, \(m_{p}\) is the proton mass, and \(\sigma_{T}\) is the Thompson scattering cross section. Accreting black holes radiate at bolometric luminosities given by,
\[L_{\rm bol}=\epsilon_{r}\dot{M}_{\rm bh}c^{2}, \tag{4}\]
where \(\epsilon_{r}=0.2\) is the radiative efficiency.
IllustrisTNG implements dual-mode AGN feedback. 'Thermal feedback' is implemented for Eddington ratios (\(\eta\equiv\dot{M}_{\rm bh}/\dot{M}_{\rm Edd}\)) higher than a critical value of \(\eta_{\rm crit}=\min[0.002(M_{\rm BH}/10^{8}M_{\odot})^{2},0.1]\). Here, thermal energy is deposited onto the neighboring gas at a rate of \(\epsilon_{f,\rm high}\epsilon_{r}\dot{M}_{\rm BH}c^{2}\) with \(\epsilon_{f,\rm high}\epsilon_{r}=0.02\), where \(\epsilon_{f,\rm high}\) is the "high accretion state" coupling efficiency. 'Kinetic feedback' is implemented for Eddington ratios lower than the critical value. Here, kinetic energy is injected into the gas in a pulsed fashion whenever sufficient feedback energy is available, which manifests as a 'wind' oriented along a randomly chosen direction. The injection rate is \(\epsilon_{f,\rm low}\dot{M}_{\rm BH}c^{2}\), where \(\epsilon_{f,\rm low}\) is the 'low accretion state' coupling efficiency (\(\epsilon_{f,\rm low}\lesssim 0.2\)). For further details, we direct the interested reader to Weinberger et al. (2017).
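For concreteness, the accretion and feedback-mode logic of equations (1)-(4) can be sketched in a few lines. This is an illustrative stand-alone implementation written for this summary, not the AREPO/IllustrisTNG source; the example BH mass and gas properties at the bottom are chosen arbitrarily.

```python
import numpy as np

# Physical constants (cgs)
G       = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
c       = 2.998e10      # speed of light [cm s^-1]
m_p     = 1.673e-24     # proton mass [g]
sigma_T = 6.652e-25     # Thomson cross section [cm^2]
M_SUN   = 1.989e33      # solar mass [g]

EPS_R = 0.2             # radiative efficiency epsilon_r

def accretion_rate(M_bh, rho_gas, c_s):
    """Eddington-limited Bondi-Hoyle accretion rate (Eqs. 1-3), cgs units."""
    mdot_bondi = 4.0 * np.pi * G**2 * M_bh**2 * rho_gas / c_s**3
    mdot_edd   = 4.0 * np.pi * G * M_bh * m_p / (EPS_R * sigma_T * c)
    return min(mdot_bondi, mdot_edd), mdot_edd

def feedback_mode(M_bh, mdot, mdot_edd):
    """Thermal vs kinetic feedback switch based on the Eddington ratio."""
    eta      = mdot / mdot_edd
    eta_crit = min(0.002 * (M_bh / (1e8 * M_SUN))**2, 0.1)
    return "thermal" if eta >= eta_crit else "kinetic"

# Illustrative example: a 1e5 Msun BH in gas with n_H ~ 10 cm^-3, c_s ~ 10 km/s
M_bh = 1e5 * M_SUN
rho  = 10.0 * m_p          # rough mass density for n_H ~ 10 cm^-3
c_s  = 1e6                 # 10 km/s in cm/s
mdot, mdot_edd = accretion_rate(M_bh, rho, c_s)
print(feedback_mode(M_bh, mdot, mdot_edd), mdot / mdot_edd)
```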
The limited mass resolution hinders our simulations from fully capturing the crucial BH dynamical friction force, especially for low masses. To stabilize the dynamics, BHs are relocated to the nearest potential minimum within their proximity, determined by the closest \(10^{3}\) neighboring gas cells. When one BH enters the neighborhood of another, prompt merger occurs.
### Black hole seed models
#### 2.3.1 Gas based seed model
We explore the formation and growth of the lowest mass \(1.56\times 10^{3}\ M_{\odot}/h\) seeds using the gas based seeding prescriptions developed in Bhowmick et al. (2021). In order to contrast these seeds from those produced by the seed model discussed in the next subsection, we shall hereafter refer to them as _direct gas based_ seeds or DGBs with mass \(M_{\rm seed}^{\rm DGB}\). These seeding criteria are meant to broadly encompasses popular theoretical channels such as Pop III, NSC and DCBH seeds, that are postulated to form in regions comprised of dense and metal poor gas. We briefly summarize them as follows:
* _Star forming & metal poor gas mass criterion:_ We place DGBs in halos with a minimum threshold of dense (\(>0.1\ \rm cm^{-3}\)) & metal poor (\(Z<10^{-4}\ Z_{\odot}\)) gas mass, denoted by \(\bar{M}_{\rm sfmp}\) (in units of \(M_{\rm seed}^{\rm DGB}\)). The values of \(\bar{M}_{\rm sfmp}\) are not constrained, but we expect them to be different for the various seeding channels. In this work, we consider models with \(\bar{M}_{\rm sfmp}=5,50,150\) & \(1000\).
* _Halo mass criterion:_ We place DGBs in halos with a total mass exceeding a critical threshold, specified by \(\bar{M}_{h}\) in the units of \(M_{\rm seed}^{\rm DGB}\). In this work, we consider \(\bar{M}_{h}=3000\) & \(10000\). While our seeding prescriptions are meant to be based on the gas properties within halos, we still adopt this criterion to avoid seeding in halos significantly below the atomic cooling threshold. This is because our simulations do not include the necessary physics (for e.g. \(H_{2}\) cooling) to self-consistently capture the collapse of gas and the formation of stars within these (mini)halos. Additionally, these lowest mass halos are also impacted by the finite simulation resolution, many of which are spuriously identified gas clumps with very little DM mass. (Please see Figure 11 and Appendix B for further discussion about the foregoing points.) Another motivation for this criterion is that NSC seeds are anticipated to grow more efficiently within sufficiently deep gravitational potential wells where runaway BH merger remnants face difficulties escaping the cluster. Deeper gravitational potentials are expected in higher mass halos.
Our gas based seed models therefore contain three parameters, namely \(\bar{M}_{\rm sfmp}\), \(\bar{M}_{\rm h}\) and \(M_{\rm seed}^{\rm DGB}\). The simulation suite that uses these seed models will be referred to as GAS_BASED. The individual runs will be labelled as SM*_FOF*, where the "*"s correspond to the values of \(\bar{M}_{\rm sfmp}\) and \(\bar{M}_{\rm h}\) respectively. For example, \(\bar{M}_{\rm sfmp}=5\) and \(\bar{M}_{\rm h}=3000\) corresponds to SM5_FOF3000. As already mentioned, the seed masses in this suite will be \(M_{\rm seed}^{\rm DGB}=1.56\times 10^{3}\ M_{\odot}/h\).
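As an illustration, the two gas based seeding criteria amount to a simple per-halo check. The sketch below is a simplified stand-alone version, not the on-the-fly AREPO implementation; the example halo masses at the bottom are chosen arbitrarily.

```python
M_SEED_DGB = 1.56e3          # M_sun/h, direct gas based seed mass

def forms_dgb(halo, Mbar_sfmp, Mbar_h, n_bh_in_halo):
    """Gas based seeding check for one FOF halo.

    halo['m_sf_metal_poor'] : mass of dense (>0.1 cm^-3), metal poor
                              (Z < 1e-4 Zsun) gas in the halo [Msun/h]
    halo['m_total']         : total FOF halo mass [Msun/h]
    Mbar_sfmp, Mbar_h       : thresholds in units of the seed mass
    """
    if n_bh_in_halo > 0:                                   # one DGB per halo, as in the gas based model
        return False
    if halo['m_sf_metal_poor'] < Mbar_sfmp * M_SEED_DGB:   # star forming & metal poor gas mass criterion
        return False
    if halo['m_total'] < Mbar_h * M_SEED_DGB:              # halo mass criterion
        return False
    return True

# e.g. the most restrictive model considered here, SM1000_FOF3000
halo = {'m_sf_metal_poor': 2.1e6, 'm_total': 8.0e6}
print(forms_dgb(halo, Mbar_sfmp=1000, Mbar_h=3000, n_bh_in_halo=0))   # True
```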
#### 2.3.2 Stochastic seed model
As we mentioned, the key goal of this work is to build a new approach to represent low mass seeds in larger-volume lower-resolution cosmological simulations that cannot directly resolve them. As we shall see in Section 4, this is achieved via a new stochastic seeding model. The complete details of this seed model are described in Section 4, where we thoroughly discuss their motivation and calibration using the results obtained from the GAS_BASED suite. Here, we briefly summarize key features so that the reader can contrast it against the gas based seed models described in the previous subsection.
Since the simulations here will not fully resolve the \(1.56\times 10^{3}\ M_{\odot}/h\) DGBs, we will essentially seed their resolvable descendants. To distinguish them from the DGBs, we shall refer to these seeded descendants as _extrapolated seed descendants_ or ESDs with masses (denoted by \(M_{\rm seed}^{\rm ESD}\)) limited to the gas mass resolution of the simulations. In this work, we will largely explore ESD masses \(M_{\rm seed}^{\rm ESD}=1.25\times 10^{4}\) & \(1\times 10^{5}\ M_{\odot}/h\), to be used for simulations with gas mass resolutions of \(\sim 10^{4}\) & \(10^{5}\ M_{\odot}/h\) respectively.
To seed the ESDs, we identify sites using the FOF algorithm, but with a shorter linking length (by a factor of \(\sim 1/3\)) compared to that used for identifying halos. We shall refer to these short linking length FOFs as "best-Friends of Friends" or bFOFs. These bFOFs essentially correspond to galaxies or proto-galaxies residing inside the halos. We do this to accommodate the formation of multiple ESDs per halo; this is because even if we seed one DGB per halo in the gas based seed models, the subsequent evolution of hierarchical structure naturally leads to halos hosting multiple higher mass descendants. Notably, one could alternatively seed in subhalos computed by SUBFIND; however, SUBFIND is prohibitively expensive to call frequently enough for seeding BHs. Hereafter, in most instances, we shall simply refer to these bFOFs as "galaxies". Their properties are comprehensively studied in Section 4.1.
The ESDs will be stochastically placed in galaxies based on where the descendants of the \(1.56\times 10^{3}\ M_{\odot}/h\) DGBs end up within the GAS_BASED suite. Below we provide a brief summary of the seeding criteria:
* _Galaxy mass criterion_: We will apply a galaxy mass ('galaxy mass' hereafter refers to the total mass including dark matter, gas and stars) seeding threshold that will be stochastically drawn from galaxy mass distributions predicted for the assembly of (\(1.25\times 10^{4}\) and \(10^{5}\ M_{\odot}/h\)) BHs that are descendants of \(1.56\times 10^{3}\ M_{\odot}/h\) DGBs within the GAS_BASED suite. As we explore further, it becomes evident that these distributions vary with redshift and exhibit significant scatter. The redshift dependence will capture the influence of halo growth, star formation, and metal enrichment on seed formation in our gas based seed models.
* _Galaxy environment criterion_: In the context of a galaxy, we define its _environment_ as the count of neighboring halos (\(N_{\rm ngb}\)) that exceed the mass of its host halo and are located within a specified distance (denoted by \(D_{\rm ngb}\)) from the host halo. In this study, we determine \(N_{\rm ngb}\) within a range of 5 times the virial radius (\(R_{\rm vir}\)) of the host halo, i.e. \(D_{\rm ngb}=5R_{\rm vir}\). This choice is suitable for investigating the immediate small-scale external surroundings of the galaxy, extending beyond its host halo. We then apply a seeding probability (less than unity) to suppress ESD formation in galaxies with \(\leq 1\) neighboring halos, thereby favoring their formation in richer environments. By doing this, we account for the impact of unresolved hierarchical merger dominated growth from \(M_{\rm seed}^{\rm DGB}\) to \(M_{\rm seed}^{\rm ESD}\), as it favors more rapid BH growth within galaxies in richer environments.
The simulations that use only the _galaxy mass criterion_ will be referred to as the STOCHASTIC_MASS_ONLY suite. For simulations which use both _galaxy mass criterion_ and _galaxy environment criterion_, we will refer to them as the STOCHASTIC_MASS_ENV suite. During the course of this paper, we will illustrate that the outcomes of each simulation of a specific region within the GAS_BASED suite, employing a distinct set of gas based seeding parameters, can be reasonably well reproduced in a lower-resolution simulation of the same region within the STOCHASTIC_MASS_ENV suite.
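For concreteness, a minimal sketch of how these two criteria could be applied to a galaxy (bFOF) catalogue is given below. This is an illustrative outline rather than the actual implementation: the functions `sample_mass_threshold` and `seed_probability`, the threshold table, and the probability `p_poor` are hypothetical placeholders for the redshift-dependent galaxy mass distributions and environment-dependent probabilities calibrated in Section 4.

```python
import numpy as np

rng = np.random.default_rng(42)

def nearest_z_bin(z, table):
    """Pick the tabulated redshift bin closest to z."""
    return min(table.keys(), key=lambda zb: abs(zb - z))

def sample_mass_threshold(z, thresholds):
    """Draw a total galaxy mass seeding threshold [Msun/h] from the
    redshift-dependent distribution calibrated against the GAS_BASED suite."""
    return rng.choice(thresholds[nearest_z_bin(z, thresholds)])

def seed_probability(n_ngb, p_poor):
    """Galaxy environment criterion: suppress seeding in galaxies with <=1
    neighboring halo (within 5 R_vir) more massive than the host halo."""
    return p_poor if n_ngb <= 1 else 1.0

def place_esd(galaxy, z, thresholds, p_poor=0.5):
    """Stochastic seeding decision for one galaxy (bFOF)."""
    if galaxy['n_bh'] > 0:                      # illustrative: at most one BH per galaxy
        return False
    if galaxy['m_total'] < sample_mass_threshold(z, thresholds):   # galaxy mass criterion
        return False
    return rng.random() < seed_probability(galaxy['n_ngb'], p_poor)

# Toy calibration table: lists of galaxy mass thresholds [Msun/h] per redshift bin
thresholds = {12.0: [3e7, 5e7, 1e8], 9.0: [8e7, 2e8, 4e8]}
galaxy = {'m_total': 2.5e8, 'n_ngb': 3, 'n_bh': 0}
print(place_esd(galaxy, z=10.2, thresholds=thresholds))
```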
### Simulation suite
Our simulation suite consists of zoom runs for the same overdense region as that used in Bhowmick et al. (2021) (referred to as ZOOM_REGION_z5). The region was chosen from a parent uniform volume of (25 Mpc/\(h\))\({}^{3}\), and is targeted to produce a \(3.5\times 10^{11}\ M_{\odot}/h\) halo at \(z=5\). The simulations were run from \(z=127\) to \(z=7\), with initial conditions generated using the MUSIC (Hahn & Abel, 2011) initial condition generator. The resolution of the background grid and of the high-resolution zoom region are determined by two key parameters: \(L_{\rm min}\) (or levelmin) and \(L_{\rm max}\) (or levelmax) respectively. These parameters define the resolution level, denoted as \(L\), which is equivalent to the mass resolution produced by \(2^{L}\) dark matter (DM) particles per side in a uniform-resolution (25 Mpc/\(h\))\({}^{3}\) box. Specifically, we set \(L_{\rm min}=7\) for the background grid,
resulting in a DM mass resolution of \(5.3\times 10^{9}\ M_{\odot}/h\). For the high-resolution zoom region, we explore \(L_{\rm max}\) values of 10, 11 and 12. In addition, there is a buffer region that consists of DM particles with intermediate resolutions bridging the gap between the background grid and the zoom region. This buffer region serves a crucial purpose of facilitating a smooth transition between the zoom region and the background grid. Our simulation suite is comprised of the following set of resolutions for the zoom regions:
* In our highest resolution \(L_{\rm max}=12\) runs, we achieve a DM mass resolution of \(1.6\times 10^{4}\ M_{\odot}/h\) and a gas mass resolution of \(\sim 10^{3}M_{\odot}/h\) (the gas cell masses are contingent upon the degree of refinement or derefinement of the Voronoi cells, thereby introducing some variability). These runs are used for the GAS_BASED suite that seeds DGBs at \(1.56\times 10^{3}\ M_{\odot}/h\) using the gas based seed models described in Section 2.3.1.
* For our \(L_{\rm max}=11\ \&\ 10\) runs, we achieve DM mass resolutions of \(1.3\times 10^{5}\ \&\ 1\times 10^{6}\ M_{\odot}/h\) and gas mass resolutions of \(\sim 10^{4}\ \&\ 10^{5}\ M_{\odot}/h\) respectively. These runs will be used for the STOCHASTIC_MASS_ONLY and STOCHASTIC_MASS_ENV suite, that will seed ESDs at \(1.25\times 10^{4}\ \&\ 1\times 10^{5}\ M_{\odot}/h\) for \(L_{\rm max}=11\ \&\ 10\) respectively, using the stochastic seed models described in Section 2.3.2.
Further details of our full simulation suite are summarized in Table 1. It is important to note that our new stochastic seed models will be primarily designed for implementation within larger-volume uniform simulations. However, this paper specifically focuses on zoom simulations. In particular, we are using \(L_{\rm max}=11\ \&\ 10\) zoom simulations for testing the stochastic seed models against the highest resolution \(L_{\rm max}=12\) zooms that use the gas based seed models. In a subsequent paper (Bhowmick et al in prep), we will be applying the stochastic seed models on uniform volume simulations of the same resolutions as the \(L_{\rm max}=11\ \&\ 10\) zooms.
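The mass resolutions listed above follow directly from the definition of the resolution level \(L\); the short sketch below (illustrative only, using the Planck Collaboration et al. (2016) parameters from Section 2.1 and the standard value of the critical density) reproduces the quoted DM particle masses:

```python
# DM particle mass implied by resolution level L for the (25 Mpc/h)^3 parent volume.
RHO_CRIT = 2.775e11              # critical density [(Msun/h) / (Mpc/h)^3]
OMEGA_M, OMEGA_B = 0.3089, 0.0486
L_BOX = 25.0                     # parent box size [Mpc/h]

def dm_particle_mass(L):
    """Mass of one DM particle [Msun/h] if the box were uniformly sampled by (2**L)^3 particles."""
    n_per_side = 2**L
    m_total_dm = (OMEGA_M - OMEGA_B) * RHO_CRIT * L_BOX**3   # total DM mass in the box [Msun/h]
    return m_total_dm / n_per_side**3

for L in (10, 11, 12):
    print(L, f"{dm_particle_mass(L):.2e}")   # ~1.0e6, 1.3e5, 1.6e4 Msun/h
```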
### Tracing BH growth along merger trees: The SUBLINK algorithm
We use the GAS_BASED suite to trace the growth of the lowest mass \(1.56\times 10^{3}\ M_{\odot}/h\) DGBs and study the evolution of their environments (halo and galaxy properties) as they assemble higher mass BHs. We do this by first constructing subhalo merger trees using the SUBLINK algorithm (Rodriguez-Gomez et al., 2015), which was designed for SUBFIND based subhalos. Note that these SUBFIND based subhalos, like bFOFs, also trace the substructure within halos. Therefore, to avoid confusion, we shall refer to SUBFIND based subhalos as "subfind-subhalos". It is also very common to interpret the subfind-subhalos as "galaxies". As we shall see however, in this work, we only use these subfind-subhalos as an intermediate step to arrive at the FOF and bFOF merger trees. Therefore, there is no further mention of subfind-subhalos after this subsection. On that note, recall again that any mention of "galaxy" in our paper refers to the bFOFs.
SUBFIND was run on-the-fly to compute subfind-subhalos within both FOF and bFOF catalogues. Therefore, for obtaining both FOF and bFOF merger trees, we first compute the merger trees of their corresponding subfind-subhalos. Following are the key steps in the construction of the subfind-subhalo merger tree:
* For each progenitor subfind-subhalo at a given snapshot, SUBLINK determines a set of candidate descendant subfind-subhalos from the next snapshot. Candidate descendants are those subfind-subhalos which have common DM particles with the progenitor.
* Next, each candidate descendant is given a score based on the merit function \(\chi=\sum_{i}R_{i}^{-1}\), where \(R_{i}\) is the binding energy rank of particle \(i\) within the progenitor. DM particles with higher binding energy within the progenitor are given a lower rank. \(\sum_{i}\) denotes a sum over all the particles within the candidate descendant.
* Amongst all the candidate descendants, the final unique descendant is chosen to be the one with the highest score. This essentially ensures that the unique descendant has the highest likelihood of retaining the most bound DM particles that resided within the progenitor.
From the subfind-subhalo merger trees, we use the ones that only consist of central subfind-subhalos (most massive within a FOF or bFOF) and construct the corresponding FOF/ halo merger trees and bFOF/galaxy merger trees. We then trace the growth of BHs along these merger trees, and the outcomes of this analysis are elaborated upon in the subsequent sections.
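The descendant-selection step lends itself to a compact sketch; the snippet below is a simplified illustration of the scoring logic described above (not the actual SUBLINK code), with toy particle IDs and binding-energy ranks chosen for the example.

```python
def best_descendant(progenitor_particle_ranks, candidates):
    """Pick the unique descendant of a progenitor sub(find-sub)halo.

    progenitor_particle_ranks : dict mapping DM particle ID -> binding-energy
                                rank in the progenitor (rank 1 = most bound)
    candidates                : dict mapping candidate ID -> set of DM particle
                                IDs belonging to that candidate at the next snapshot
    """
    best_id, best_score = None, 0.0
    for cand_id, particle_ids in candidates.items():
        # merit function chi = sum_i 1/R_i over particles shared with the progenitor
        score = sum(1.0 / progenitor_particle_ranks[pid]
                    for pid in particle_ids if pid in progenitor_particle_ranks)
        if score > best_score:
            best_id, best_score = cand_id, score
    return best_id

# Toy example: candidate 'B' inherits the most bound particles and wins
prog_ranks = {101: 1, 102: 2, 103: 3, 104: 4}
cands = {'A': {103, 104, 200}, 'B': {101, 102, 201}}
print(best_descendant(prog_ranks, cands))   # -> 'B'
```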
## 3 Results I: Black hole mass assembly in high-resolution zooms
We start our analysis by looking at the growth history of \(1.5\times 10^{3}\ M_{\odot}/h\) DGBs within the GAS_BASED suite. We trace their growth along halo merger trees (see Section 2.5) from the time of their formation to when they assemble higher mass (\(1.25\times 10^{4},1\times 10^{5}\ \&\ 8\times 10^{5}\ M_{\odot}/h\)) descendant BHs. We choose these descendant BH masses as they encompass the target gas mass resolutions of our lower resolution (\(L_{\rm max}=11\ \&\ 10\)) zooms. These are also comparable to typical gas mass resolutions of cosmological simulations in the existing literature. For example, the TNG100 (Nelson et al., 2018), Illustris(Vogelsberger et al., 2014, 2014), EAGLE(Schaye et al., 2015), MassiveBlackII(Khandai et al., 2015), BlueTides(Feng et al., 2016) and HorizonAGN(Kaviraj et al., 2017) simulations have a gas mass resolution of \(\sim 10^{6}\ M_{\odot}\) and similar values for the seed masses. The relatively smaller volume cosmological simulations such as ROMULUS25(Tremmel et al., 2017) and TNG50(Pillepich et al., 2019) have a gas mass resolution of \(\sim 10^{5}\ M_{\odot}\) with a seed mass of \(10^{6}\ M_{\odot}\). Recall again that most of these simulations seed BHs simply based on either a constant halo mass threshold, or poorly resolved local gas properties. The results presented in this section will be used in Section 4 to calibrate the stochastic seed model that will represent the gas based \(1.56\times 10^{3}\ M_{\odot}/h\) seeds in the lower-resolution zooms without resolving them directly.
### Evolution of seed forming sites: Rapid metal enrichment after seed formation
Figure 1 depicts the evolution of gas density, star formation rate (SFR) density, and gas metallicity at DGB forming sites from two distinct epochs (\(z=20\ \&\ 12\)). As dictated by our gas based seed models, for each of the DGB forming sites
Figure 1: Evolution of gas density (red/orange), star formation rate density (grayscale) and gas metallicity (yellow/purple) of various seed forming sites in our zoom simulations that use the gas based seed models described in Section 2.3.1. Hereafter, we shall refer to the seeds formed by the gas based seed models as “Direct Gas Based seeds” or DGBs. The large panels correspond to DGB forming sites from two distinct epochs namely \(z=20\) (top) and \(z=12\) (bottom). Within each large panel, the leftmost sub-panel corresponds to the snapshot at the time of DGB formation, wherein the yellow circles mark the location of the formation site that contains the star forming & metal poor gas. The remaining subpanels from left to right show the evolution of that formation site along three subsequent snapshots. We can clearly see that at the time of DGB formation, the regions in the immediate vicinity of the formation site have already started the process of metal enrichment. As a result, these regions get completely polluted with metals within a very short time after DGB formation.
Figure 2: Assembly history of halos forming \(1.56\times 10^{3}\)\(M_{\odot}/h\) DGBs using gas based seed models. Top to bottom, the rows show the evolution of total halo mass (\(M_{\rm total}\)), star forming gas mass (\(M^{\rm SF}\)), star forming & metal poor gas mass (\(M^{\rm SF}_{\rm metal\ poor}\)), and gas metallicity (\(Z_{\rm gas}\)). Left, middle and right panels show halos seeded at \(z=20\), \(z=15\) and \(z=10\) (vertical dashed lines in each column) respectively, using the gas based seeding criterion, \(\bar{M}_{\rm sfmp}=1000\) (horizontal dashed line in 3rd row) and \(\bar{M}_{\rm h}=3000\) (horizontal dashed line in 1st row). The faded dotted lines show the evolution of all DGB-forming halos along their merger trees. The thick solid lines show the mean trend, i.e. logarithmic average of the values of all the faded dotted lines at each redshift. The star forming & metal poor gas masses tend to sharply drop soon after seeding, independent of the time of seeding. This is because the DGB forming halos have already started to undergo rapid metal enrichment, which is shown in the fourth row by the rapid increase in gas metallicity prior to the seeding event.
there exists gas that is simultaneously forming stars but is also metal poor (marked in yellow circles). However, we also find that metal enrichment has already commenced in the immediate vicinity of these DGB forming sites. In other words, DGB formation occurs in halos where metal enrichment has already begun due to prior star formation and evolution, but has not yet polluted the entire halo. Soon after DGB formation, i.e. within a few tens of millions of years, we find that the entirety of these regions becomes polluted with metals.
The rapid metal enrichment of DGB forming halos is shown much more comprehensively and quantitatively in Figure 2. Here we show the evolution of halo mass, star forming gas mass, star forming & metal poor gas mass and gas metallicity from \(z\sim 25-7\) for all DGB forming halos along their respective merger trees (faded dotted lines). To avoid overcrowding the plots, we select trees based on the most restrictive seeding criterion of \(\tilde{M}_{\rm{sfmp}}=1000\) & \(\tilde{M}_{\rm h}=3000\), but our general conclusions hold true for other seeding thresholds as well. Not surprisingly, the halo mass (1st row) and star forming gas mass (2nd row) tend to monotonically increase with decreasing redshift on average (thick solid black lines). Note that for individual trees, the halo mass can occasionally decrease with time due to tidal stripping. On rarer occasions, there may also be a sharp drop in the halo mass at a given snapshot followed by a sharp rise back to close to the original value. This is likely because the FOF finder "mistakenly" splits a larger halo in two at that snapshot. The star forming gas mass can additionally decrease with time due to the star forming gas being converted to star particles.
Very importantly, the star forming & metal poor gas mass (3rd row of Figure 2) increases initially and peaks at the time of DGB formation, following which it rapidly drops
\begin{table}
\begin{tabular}{c c c c c c c} \(L_{\rm{max}}\) & \(M_{dm}\) (\(M_{\odot}/h\)) & \(M_{gas}\) (\(M_{\odot}/h\)) & \(\epsilon\) (\(kpc/h\)) & Black hole neighbors & Seed mass (\(M_{\odot}/h\)) & Seed model \\ \hline
12 & \(1.6\times 10^{4}\) & \(\sim 10^{3}\) & 0.125 & 256 & \(M_{\rm{seed}}^{\rm{DGB}}=1.56\times 10^{3}\) & gas based seeding \\
11 & \(1.3\times 10^{5}\) & \(\sim 10^{4}\) & 0.25 & 128 & \(M_{\rm{seed}}^{\rm{ESD}}=1.25\times 10^{4}\) & stochastic seeding \\
10 & \(1\times 10^{6}\) & \(\sim 10^{5}\) & 0.5 & 64 & \(M_{\rm{seed}}^{\rm{ESD}}=1\times 10^{5}\) & stochastic seeding \\ \hline \end{tabular}
\end{table}
Table 1: Spatial and mass resolutions within the zoom region of our simulations for various values of \(L_{\rm{max}}\) (see Section 2.4 for the definition). \(M_{dm}\) is the mass of a dark matter particle, \(M_{gas}\) is the typical mass of a gas cell (note that gas cells can refine and de-refine depending on the local density), and \(\epsilon\) is the gravitational smoothing length. The 4th column represents the number of nearest gas cells that are assigned to be BH neighbors. The 5th and 6th columns correspond to the seed mass and seed model used at the different resolutions.
Figure 3: The evolution of host star formation rates or SFR (top panels) and \(Z_{\rm{gas}}\) (bottom panels) versus host mass is shown for \(1.56\times 10^{3}\)\(M_{\odot}/h\) DGBs formed at \(z=15\). In the leftmost panels, the filled orange circles indicate the halos that form DGBs at \(z=15\). The filled orange circles in the subsequent panels (from left to right) show the same host halos at \(z=14,13\) & 12. The full population of halos at each redshift is shown in blue. In other words, we select the orange circles at \(z=15\) using our gas based seeding criteria [\(\tilde{M}_{\rm{h}},\tilde{M}_{\rm{sfmp}}=3000,1000\)] (assuming \(M_{\rm{seed}}^{\rm{DGB}}=1.56\times 10^{3}\)\(M_{\odot}/h\)), and follow their evolution on the halo merger tree. Comparing them to the full population of halos at each redshift, we find that even though the DGB forming halos at \(z=15\) are biased towards lower gas metallicities at fixed halo mass (lower left panel), the subsequent evolution of these halos to lower redshifts causes them to become more unbiased at \(z=14,13\) & 12. This is due to the rapid metal enrichment of these DGB forming halos depicted in Figure 2.
down. This happens independent of the formation redshift, and is due to the rapid metal enrichment depicted in Figure 1. The rapid metal enrichment can be quantitatively seen in the average gas metallicity evolution (4th row of Figure 2). We can see that even prior to the DGB formation, the average gas metallicities already start to increase from the pre-enrichment values (\(\sim 10^{-8}\ Z_{\odot}\)), to \(\sim 10^{-3}\ Z_{\odot}\) at the time of formation. Therefore, even at the time of formation, the average metallicities of halos are already greater than the maximum seeding threshold of \(10^{-4}\ Z_{\odot}\); however, there are still pockets of star forming gas with metallicities \(\leq 10^{-4}\ Z_{\odot}\), wherein DGBs form.
In Figure 3, we select halos that form DGBs at \(z=15\) using gas based seeding parameters \(\bar{M}_{\rm{stmp}}=1000\) & \(\bar{M}_{\rm{h}}=3000\), and we show their evolution (orange circles) to \(z=14,13\) & \(12\) on the SFR versus halo mass plane (upper panels) and the gas metallicity versus halo mass plane (lower panels). We compare them to the full population of halos at their respective redshifts (blue points). We investigate how biased these DGB forming halos are compared to typical halos of similar masses. On the SFR versus halo mass plane, the DGB forming halos have similar SFRs compared to halos of similar masses; not surprisingly, this continues to be so as they evolve to lower redshifts. On the metallicity versus halo mass plane, we find that DGB forming halos have significantly lower metallicities compared to halos of similar masses. This is a natural consequence of the requirement that the DGB forming halos have sufficient amounts of metal poor gas. However, due to the rapid metal enrichment of these halos seen in Figures 1 and 2, their descendants at \(z=14,13\) & \(12\) end up having metallicities similar to halos of comparable mass.
The picture that emerges from Figures 1 - 3 is one in which DGB-forming halos are generally _not_ a special subset of halos (in terms of properties that persist to lower redshift), but rather they are fairly typical halos that have the right conditions for DGB formation at a special moment in _time_. In other words, despite our seeding criterion favoring low-metallicity, star-forming halos, their descendants still end up with similar SFRs and metallicities compared to the general population of similar-mass halos. While Figure 3 only shows the evolution of DGB-forming halos at \(z=15\), this general conclusion holds true for DGB-forming halos at all redshifts. A key consequence is that the descendants of seed forming halos can be well characterized by their halo mass distributions, largely because they are in this transient phase of rapid metal enrichment at the time of seed formation.
We utilize this characteristic of our gas based seeding models to develop the new sub-grid seeding model for lower-resolution simulations in Section 4. Rather than requiring information about detailed properties of the descendant galaxies of these gas based seeding sites, we show in Section 4.2 that most galaxy properties are well reproduced by simply matching the galaxy mass distribution. We then show in Section 4.3 that by additionally imposing a criterion on galaxy environment, we can robustly capture the evolved descendants of seeding sites from our high-resolution simulations.
### DGB formation and subsequent growth
We have thus far talked about the DGB forming halos and their evolution. In this subsection, we will focus on the formation of the DGBs themselves, and their subsequent growth to assemble higher mass BHs.
#### 3.2.1 Drivers of DGB formation: Halo growth, star formation and metal enrichment
Our gas based seeding criteria identify three main physical processes that govern DGB formation in our simulations, i.e. halo growth, star formation and metal enrichment. Halo growth and star formation tend to promote DGB formation with time, whereas metal enrichment suppresses DGB formation with time. The overall rate of DGB formation at various redshifts is determined by the complex interplay between these three processes. We study this interplay in Figure 4, wherein we show the number of halos satisfying three different criteria: \(M_{\rm total}>\bar{M}_{\rm h}\times M_{\rm seed}^{\rm DGB}\) (dotted line), \(M^{\rm SF}>\bar{M}_{\rm sfmp}\times M_{\rm seed}^{\rm DGB}\) (dashed line) and \(M_{\rm metal\ poor}^{\rm SF}>\bar{M}_{\rm sfmp}\times M_{\rm seed}^{\rm DGB}\) (solid line). \(M_{\rm total}\), \(M^{\rm SF}\) and \(M_{\rm metal\ poor}^{\rm SF}\) correspond to the total halo mass, star forming gas mass, and star forming & metal poor gas mass of halos respectively. Amongst the above three criteria, the most restrictive one essentially determines the driving physical process for DGB formation at a given redshift. For example, in the rightmost panel of Figure 4, the dotted lines have the lowest normalization from \(z\sim 25-10\); this implies that halo growth is the primary driver and leads to the production of more DGBs with time. In the 3rd panel from the left, the solid and dashed lines have similar normalization, and both of them are lower than the dotted lines at the highest redshifts; this indicates that star formation is the key driver, which also enhances DGB formation with time. Lastly, in all of the panels, the solid lines have substantially lower normalization than both dashed and dotted lines at the lowest redshifts. In this case, metal enrichment is the primary driver, which leads to a slowdown and eventual suppression of DGB formation with time.
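To make the bookkeeping behind Figure 4 concrete, the following minimal sketch (our own illustration, not the simulation code) counts how many halos in a snapshot pass each of the three cuts; the cut with the smallest count is the limiting one at that epoch. The input arrays and example numbers are hypothetical.

```python
import numpy as np

def count_halos_passing_cuts(M_total, M_sf, M_sf_metal_poor,
                             M_seed=1.56e3, Mbar_h=3000.0, Mbar_sfmp=150.0):
    """Count halos passing the total-mass cut (dotted lines in Figure 4),
    the star-forming gas mass cut (dashed lines), and the star-forming &
    metal-poor gas mass cut (solid lines). All masses are in Msun/h."""
    n_halo_cut = int(np.sum(M_total > Mbar_h * M_seed))
    n_sf_cut = int(np.sum(M_sf > Mbar_sfmp * M_seed))
    n_sfmp_cut = int(np.sum(M_sf_metal_poor > Mbar_sfmp * M_seed))
    return n_halo_cut, n_sf_cut, n_sfmp_cut

# Hypothetical per-halo arrays for one snapshot: the smallest of the three
# counts identifies whether halo growth, star formation, or metal enrichment
# is currently the bottleneck for DGB formation.
M_total = np.array([2e7, 8e6, 5e6, 1.2e7])
M_sf = np.array([6e5, 3e5, 1e5, 4e5])
M_sf_metal_poor = np.array([2e5, 5e4, 1e4, 9e4])
print(count_halos_passing_cuts(M_total, M_sf, M_sf_metal_poor))
```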
Comparing the different columns in Figure 4, we note that the gas based seeding parameters (\(\bar{M}_{\rm{h}}\) and \(\bar{M}_{\rm{stmp}}\)) have a strong influence in determining which process dominantly drives DGB formation at various redshifts. For \(\bar{M}_{\rm{h}}=3000\) and \(\bar{M}_{\rm{stmp}}=5\) (leftmost panel), halo growth is the key driver for DGB formation from \(z\sim 30-15\); at \(z\lesssim 15\), metal enrichment becomes the primary driver and slows down DGB formation. When \(\bar{M}_{\rm{h}}\) is fixed at \(3000\) and \(\bar{M}_{\rm{stmp}}\) is increased to \(50\) or \(150\) (2nd and 3rd panels respectively), star formation replaces halo growth to become the primary driver for DGB formation at \(z\sim 30-15\); however, metal enrichment continues to be the main driver in slowing down DGB formation at \(z\lesssim 15\). Finally, when \(\bar{M}_{\rm{stmp}}\) is fixed at \(5\) and \(\bar{M}_{\rm{h}}\) is increased to \(10000\) (rightmost panels), halo growth becomes the key driver for DGB formation from \(z\sim 30-10\). In this case, metal enrichment takes the driving seat at a lower redshift of \(z\sim 10\) compared to the cases when \(\bar{M}_{\rm{h}}=3000\).
To further summarize the above findings from Figure 4, we find that when \(\bar{M}_{\rm{h}}\) is \(3000\), DGB formation is ramped up by either star formation or halo growth until \(z\sim 15\). After \(z\sim 15\), it is slowed down by metal enrichment. But when \(\bar{M}_{\rm{h}}=10000\), the halo mass criterion becomes much more restrictive and halo growth continues to ramp up DGB formation until \(z\sim 10\) before it is slowed down by metal enrichment. In the next subsection, we shall see the implications of the foregoing on the rates of DGB formation at various redshifts.
#### 3.2.2 Formation rates of \(\sim 10^{3}\)\(M_{\odot}\) DGBs
The leftmost panel of Figure 5 shows the formation rates of \(1.56\times 10^{3}\)\(M_{\odot}/h\) DGBs for the different gas based seed models. The interplay between halo growth, star formation and metal enrichment discussed in the previous subsection is readily seen in the DGB formation rates. For \(\bar{M}_{\rm h}=3000\) and \(\bar{M}_{\rm sfmp}=5,50,150\) & 1000, we find that DGB formation ramps up as the redshift decreases from \(z\sim 30-15\), driven predominantly either by halo growth (for \(\bar{M}_{\rm sfmp}=5\)) or star formation (for \(\bar{M}_{\rm sfmp}=50,150\) & 1000). As the redshift decreases below \(z\sim 15\), metal enrichment significantly slows down DGB formation. However, when \(\bar{M}_{\rm h}\) is increased to 10000 (red line), halo growth continues to ramp up DGB formation till \(z\sim 10\), after which the suppression of DGB formation due to metal enrichment takes place. Note also that at \(z\lesssim 10\), DGB formation is finally strongly suppressed due to metal pollution for all the seed models. This is because most of the newly star forming regions are already metal enriched by then, likely due to stellar feedback dispersing the metals throughout the simulation volume.
#### 3.2.3 Assembly rates of \(\sim 10^{4}-10^{6}\)\(M_{\odot}\) BHs from \(\sim 10^{3}\)\(M_{\odot}\) seeds
The assembly rates of \(1.25\times 10^{4},1\times 10^{5}\) & \(8\times 10^{5}\)\(M_{\odot}/h\) BHs are shown in the 2nd, 3rd and 4th panels of Figure 5 respectively. As in Bhowmick et al. (2021), we find that nearly 100% of the growth of these DGBs is happening via mergers. This is partly due to the \(M_{\rm BH}^{2}\) scaling of Bondi-Hoyle accretion rates, which leads to much slower accretion onto low-mass DGBs, and it is consistent with the findings of Taylor & Kobayashi (2014) (see Figure 2 in their paper).
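For context, a standard form of the Bondi-Hoyle rate (written here for illustration; the exact expression and any boost factors used in the simulations may differ) is

\[\dot{M}_{\rm Bondi}=\frac{4\pi G^{2}M_{\rm BH}^{2}\rho}{\left(c_{s}^{2}+v^{2}\right)^{3/2}},\]

where \(\rho\), \(c_{s}\) and \(v\) are the local gas density, sound speed and BH-gas relative velocity. The \(M_{\rm BH}^{2}\) dependence implies that a \(1.56\times 10^{3}\ M_{\odot}/h\) DGB accretes several thousand times more slowly than a \(10^{5}\ M_{\odot}/h\) BH embedded in the same gas, which is why the early growth of DGBs is merger dominated.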
Let us first focus on the impact of this merger dominated growth on the assembly of \(1.25\times 10^{4}\)\(M_{\odot}/h\) BHs (2nd panel of Figure 5). They generally assemble at rates \(\sim 50-80\) times lower than the rates at which \(1.56\times 10^{3}\)\(M_{\odot}/h\) DGBs form. Notably, the trends seen in the DGB formation rates directly reflect upon the rates at which \(1.25\times 10^{4}\)\(M_{\odot}/h\) BHs assemble. In particular, for \(\bar{M}_{\rm h}=3000\) and \(\bar{M}_{\rm sfmp}=5,50\) & 150, we see an increase in the assembly rates as the redshift decreases from \(z\sim 25-15\) wherein DGB formation is driven by halo growth or star formation. The assembly rates slow down at \(z\lesssim 15\) as metal enrichment slows down DGB formation. For a higher value of \(\bar{M}_{\rm h}=10000\), halo growth continues to increase the assembly rates until \(z\sim 10\), before metal enrichment slows it down. Overall, these results suggest that the interplay of halo growth, star formation and metal enrichment processes that we witnessed in the formation rates of \(1.56\times 10^{3}\)\(M_{\odot}/h\) DGBs is also retained in the assembly rates of their higher mass \(1.25\times 10^{4}\)\(M_{\odot}/h\) descendants.
We also see the assembly of a handful of \(1\times 10^{5}\) and \(8\times 10^{5}\)\(M_{\odot}/h\) BHs (3rd and 4th panels of Figure 5). \(1\times 10^{5}\)\(M_{\odot}/h\) BHs generally start assembling at \(z\lesssim 15\) and \(8\times 10^{5}\)\(M_{\odot}/h\) BHs assemble at \(z\lesssim 12\). However, any potential trends similar to those identified in the previous paragraph for \(1.25\times 10^{4}\)\(M_{\odot}/h\) descendants are difficult to discern for the \(1\times 10^{5}\) and \(8\times 10^{5}\)\(M_{\odot}/h\) descendants due to very limited statistical power.
### In which host halos do the \(\sim 10^{4}-10^{6}\)\(M_{\odot}\) descendant BHs assemble?
Figure 6 shows the host halo masses (denoted by \(M_{\rm total}^{\rm halo}\)) and redshifts at which \(1.56\times 10^{3}\)\(M_{\odot}/h\) DGBs form (leftmost panel), followed by the assembly of \(1.25\times 10^{4}\)\(M_{\odot}/h\) and \(1\times 10^{5}\)\(M_{\odot}/h\) BHs (middle and right panels respectively). Broadly speaking, \(1.56\times 10^{3}\)\(M_{\odot}/h\) DGBs form in \(\sim 10^{6.5}-10^{7.5}\)\(M_{\odot}/h\) halos, \(1.25\times 10^{4}\)\(M_{\odot}/h\) BHs assemble in \(\sim 10^{7.5}-10^{8.5}\)\(M_{\odot}/h\) haloes, and \(1\times 10^{5}\)\(M_{\odot}/h\) BHs
Figure 4: The upper panels show the number of halos satisfying different cuts that were used in our gas based seed models: dotted lines correspond to a total mass cut of \(\bar{M}_{h}\times M_{\rm seed}^{\rm DGB}\), dashed lines correspond to a star forming gas mass cut of \(\bar{M}_{\rm sfmp}\times M_{\rm seed}^{\rm DGB}\), and solid lines show a star forming & metal poor gas mass cut of \(\bar{M}_{\rm sfmp}\times M_{\rm seed}^{\rm DGB}\). The lower panels show the ratio of the normalizations w.r.t. the dotted lines from the top panel. The line with the smallest normalization determines which process among halo growth, star formation and metal enrichment is the key driver for DGB formation at a given epoch. For \(\bar{M}_{h}=3000\), we find that metal enrichment becomes the key driver for (suppressing) DGB formation around \(z\sim 13\) for all \(\bar{M}_{\rm sfmp}\) values between \(5-150\). However, when \(\bar{M}_{h}=10000\), halo growth continues to be the primary regulator for DGB formation until \(z\sim 10\), after which metal enrichment takes over.
assemble in \(\sim 10^{8.5}-10^{9.5}\)\(M_{\odot}/h\) haloes. Therefore, the rates of BH growth and halo growth are broadly similar. This is a natural expectation from merger-dominated BH growth, since the BH mergers crucially depend on the merging of their host halos. Note however that in the absence of our currently imposed BH repositioning scheme that promptly merges close enough BH pairs, we could expect larger differences between the merger rates of BHs and their host halos.
The interplay between halo growth, star formation and metal enrichment at different redshifts (as noted in Section 3.2) profoundly influences the redshift evolution of the halo masses in which the seeding of \(1.56\times 10^{3}\)\(M_{\odot}/h\) DGBs and assembly of higher-mass BHs take place. Let us first focus on the seeding of \(1.56\times 10^{3}\)\(M_{\odot}/h\) DGBs (Figure 6: left panel).
We find for \(\bar{M}_{\rm h}=3000\) & \(\bar{M}_{\rm stmp}=50,150\) that the halo masses steadily increase with time as star formation drives the formation of DGBs. As described in more detail in Appendix B, this is a simple consequence of cosmological expansion, which makes it more difficult for the gas to cool and form stars at later times within halos of a fixed mass. Notably, as metal enrichment gradually takes over at \(z\lesssim 15\), the redshift evolution becomes substantially steeper, pushing DGB formation towards even more massive halos at later times. This may seem counterintuitive since we expect more massive halos to have stronger metal enrichment, which should suppress DGB formation within them. However, more massive halos also generally have higher overall star forming gas mass, a portion of which may remain metal poor since star-forming halos are not fully metal enriched instantaneously. As it turns out in our simulations, when metal enrichment increases, it favors DGB formation in more massive halos because they are more likely to have sufficient amount of star forming & metal poor gas mass. For further details on this, the reader can refer to Appendix B. When \(\bar{M}_{\rm h}\) is increased to 10000, the redshift evolution of DGB forming halo mass is flat until \(z\sim 10\) since the seed formation is primarily driven by the _halo mass criterion_. It is only after \(z\sim 10\) that the DGB forming halo mass starts to steeply increase due to the full influence of metal enrichment.
The above trends directly impact the redshift evolution of the host halo masses in which \(1.25\times 10^{4}\)\(M_{\odot}/h\) BHs assemble (middle panel of Figure 6). For the model with a stricter
Figure 5: We trace the growth of \(1.56\times 10^{3}\)\(M_{\odot}/h\) DGBs (leftmost panels) along merger trees and show the redshifts when they assemble BHs of masses \(1.25\times 10^{4}\)\(M_{\odot}/h\), \(1\times 10^{5}\)\(M_{\odot}/h\) and \(8\times 10^{5}\)\(M_{\odot}/h\) (2nd, 3rd and 4th panels from the left). Different colors correspond to the different gas based seed models with varying \(\bar{M}_{\rm sfmp}=5,50,150\ \&\ 1000\), \(\bar{M}_{\rm h}=3000\) and \(\bar{M}_{\rm sfmp}=5\), \(\bar{M}_{\rm h}=10000\). We find that the impacts of increasing \(\bar{M}_{\rm sfmp}\) and \(\bar{M}_{\rm h}\) are qualitatively distinguishable. For \(\bar{M}_{\rm h}=3000\) and \(\bar{M}_{\rm sfmp}=5-1000\), metal enrichment starts to slow down DGB formation around \(z\sim 15\). In contrast, when \(\bar{M}_{\rm h}\) is increased from 3000 to 10000, the slowdown of DGB formation due to metal enrichment starts much later (\(z\lesssim 10\)). Similar trends are seen in the assembly rates of higher mass descendants (particularly \(1.25\times 10^{4}\)\(M_{\odot}/h\) BHs).
Figure 6: The left panel shows the redshifts and the FOF total masses at which \(1.56\times 10^{3}\)\(M_{\odot}/h\) DGBs form. Middle and right panels show the redshifts and the FOF total masses at which \(1.25\times 10^{4}\)\(M_{\odot}/h\) and \(1\times 10^{5}\)\(M_{\odot}/h\) descendant BHs respectively assemble on the FOF merger tree. The different colors correspond to different gas based seed models. Each data point corresponds to a single instance of assembly or seeding. We only show data points for a limited set of models to avoid overcrowding. Solid lines show the mean trend and the shaded regions show \(\pm 1\sigma\) standard deviations. We find that as metal enrichment takes over as the driving force and suppresses DGB formation at lower redshifts, DGBs form in increasingly massive halos. This also drives a similar redshift dependence for the assembly of \(1.25\times 10^{4}\)\(M_{\odot}/h\) BHs.
halo mass criterion (i.e., \(\tilde{M}_{\rm h}=10000\) & \(\tilde{M}_{\rm sfmp}=5\)), the transition in the slope of the \(M_{\rm total}^{\rm halo}\) versus redshift relation occurs much later (between \(z\sim 12-10\)) compared to models with the more lenient halo mass criterion \(\tilde{M}_{\rm h}=3000\) & \(\tilde{M}_{\rm sfmp}=5-150\) (\(z\gtrsim 15\)). This, again, is because metal enrichment starts to suppress DGB formation much later in the model with the stricter halo mass criterion. Finally, for the assembly of \(1\times 10^{5}\)\(M_{\odot}/h\) BHs, the redshift evolution of the host halo masses cannot be robustly deciphered due to statistical uncertainties. But here too, we see hints of higher host halo masses at lower redshifts in regimes where metal enrichment is the primary driver for (the suppression of) DGB formation.
Overall, the impact of halo growth, star formation and metal enrichment on DGB formation is well imprinted in the redshift evolution of the host halo masses within which their descendant BHs assemble. We shall see in later sections how this fact is going to be crucial in building the new seed model to represent (descendants of) \(1.56\times 10^{3}\)\(M_{\odot}/h\) DGBs in lower-resolution simulations.
## 4 Results II: A new stochastic seed model for larger simulations
We have thus far traced the growth of low-mass (\(1.56\times 10^{3}\)\(M_{\odot}/h\)) DGBs born in regions with dense & metal poor gas, in order to determine the host properties of their higher-mass (\(1.25\times 10^{4}\) & \(1\times 10^{5}\)\(M_{\odot}/h\)) descendant BHs. We will now use these results to build a new stochastic seed model that can represent these \(1.56\times 10^{3}\)\(M_{\odot}/h\) DGBs within simulations that cannot directly resolve them. In Section 2.3.2, we gave a brief introduction of this seed model and mentioned that this model would rely on a _galaxy mass criterion_ and a _galaxy environment criterion_. Here we detail the motivation, construction, and calibration of both of these seeding criteria and demonstrate that the resulting model can reproduce reasonably well the high-resolution, gas based seed model predictions in lower-resolution simulations.
Note that some of our gas based seed parameter combinations do not produce enough descendant BHs in our zoom region to perform a robust calibration. These include \(\tilde{M}_{\rm h}=3000;\tilde{M}_{\rm sfmp}=1000\) for the \(1.25\times 10^{4}\)\(M_{\odot}/h\) descendants and \(\tilde{M}_{\rm h}=3000\) & \(10000;\tilde{M}_{\rm sfmp}=150\) & \(1000\) for the \(1\times 10^{5}\)\(M_{\odot}/h\) descendants. Therefore, we shall not consider these parameter values hereafter.
In the stochastic seed model, we will directly seed the descendants with initial masses set by the gas mass resolution (\(1.25\times 10^{4}\) & \(1\times 10^{5}\)\(M_{\odot}/h\) in \(L_{\rm max}=11\) & \(10\) respectively). As already mentioned in Section 2.3.2, because these massive seeds are meant to represent descendants of \(1.56\times 10^{3}\)\(M_{\odot}/h\) DGBs that cannot be resolved directly, we refer to the former as "extrapolated seed descendants" or ESDs with initial mass denoted by \(M_{\rm seed}^{\rm ESD}\). In other words, our new stochastic seeding prescription will place ESDs with \(M_{\rm seed}^{\rm ESD}\) set by the gas mass resolution of \(1.25\times 10^{4}\) or \(1\times 10^{5}\)\(M_{\odot}/h\), but they are intended to represent our gas based seed models with unresolvable \(1.56\times 10^{3}\)\(M_{\odot}/h\) DGBs. To that end, the next few subsections address the following question: _How do we build a new seed model that can capture the unresolved growth phase from \(M_{\rm seed}^{\rm DGB}=1.56\times 10^{3}\)\(M_{\odot}/h\) to \(M_{\rm seed}^{\rm ESD}=1.25\times 10^{4}\) or \(1\times 10^{5}\)\(M_{\odot}/h\)?_
### Seeding sites for ESDs: "Best Friends of Friends (bFOF)" galaxies
It is common practice in many (but not all) cosmological simulations to place one seed per halo at a given time step. The advantage to this is that the halo properties (particularly the total halo mass) show much better resolution convergence compared to the local gas properties. However, this is not quite realistic, as halos typically have a significant amount of substructure and can therefore have multiple seeding sites at a given time. Despite this, subhalos are not typically used to seed BHs, likely because on-the-fly subhalo finders like SUBFIND are much more computationally expensive compared to on-the-fly halo finders like the FOF finder.
Recall that in our gas based seed model, \(1.56\times 10^{3}\)\(M_{\odot}/h\) DGBs were also seeded as "one seed per halo". But even in this case, as these smaller seed-forming halos and their BHs undergo mergers, configurations with multiple \(1.25\times 10^{4}\) or \(1\times 10^{5}\)\(M_{\odot}/h\) BHs per halo tend to naturally emerge. We emulate this in our new seed model by seeding ESDs within the bFOFs introduced in Section 2.3.2. The linking length for the bFOFs was chosen to be 1/3rd of the value adopted for standard FOF halos (which is 0.2 times the mean particle separation). This value was chosen after exploring a number of possibilities. On one hand, a much larger linking length does not resolve the substructure adequately. On the other hand, if the linking length is much smaller, a significant number of FOFs end up not containing any bFOFs.
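The following is a minimal, illustrative friends-of-friends sketch (written with numpy/scipy; it is not the on-the-fly group finder used in the simulations) showing how a bFOF-style catalog could be built with a linking length of one third of the standard \(b=0.2\) value. The particle positions and box setup are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def fof_groups(positions, mean_separation, linking_factor=0.2 / 3.0, min_members=32):
    """Link particles closer than b = linking_factor * mean_separation and
    return groups (connected components) with at least `min_members` members.
    With linking_factor = 0.2 this mimics standard FOF halos; with 0.2/3 it
    mimics the 'best friends of friends' (bFOF) galaxies described above."""
    b = linking_factor * mean_separation
    tree = cKDTree(positions)
    pairs = np.array(list(tree.query_pairs(r=b)), dtype=int)
    if pairs.size == 0:
        return []
    n = len(positions)
    adjacency = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
    _, labels = connected_components(adjacency, directed=False)
    counts = np.bincount(labels)
    big_labels = np.flatnonzero(counts >= min_members)
    return [np.flatnonzero(labels == lab) for lab in big_labels]

# Toy example with random particle positions in a unit box (no periodicity).
rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 1.0, size=(20000, 3))
mean_sep = 1.0 / 20000 ** (1.0 / 3.0)
print(len(fof_groups(positions, mean_sep)), "bFOF-like groups with >= 32 members")
```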
Figure 7 summarizes the bFOF properties in relation to the familiar FOF halos at \(z=8\). The leftmost panel shows the relationship between the masses of FOFs and bFOFs. Within a FOF, the most massive bFOF is assigned as the "central bFOF" (blue circles) and the remaining bFOFs are assigned as the "satellite bFOFs" (orange circles). The central bFOFs are about \(\sim 7\) times less massive than the host FOF. Not surprisingly, the satellite bFOFs span a much wider range of masses all the way down to the lowest possible masses at the bFOF/FOF identification limit (\(\geq 32\) DM particles). The middle panel of Figure 7 shows the bFOF occupation statistics for FOFs of different masses. More massive FOFs tend to host a higher number of bFOFs; the most massive \(\sim 3\times 10^{10}\)\(M_{\odot}/h\) FOF has about \(\sim 4\times 10^{3}\) bFOFs. We can see that in addition to the central bFOF, the satellite bFOFs can also contain BHs (orange, green and maroon points in the middle panel). To that end, the right panel of Figure 7 shows the total BH occupations inside FOFs and bFOFs as a function of their respective masses. We can clearly see that while individual FOFs can contain multiple BHs (up to a few tens), the vast majority of individual bFOFs contain 0 or 1 BHs. In fact, amongst the \(\sim 30000\) bFOFs at \(z=8\), only 12 of them have more than 1 BH. These results generally hold true at all redshifts.
By building our seed model based on bFOFs instead of FOFs (i.e. one ESD per bFOF), we expect to naturally place multiple \(1.25\times 10^{4}\)\(M_{\odot}/h\) or \(1\times 10^{5}\)\(M_{\odot}/h\) ESDs in individual halos. As a result, we will successfully capture situations where multiple \(1.25\times 10^{4}\)\(M_{\odot}/h\) or \(1\times 10^{5}\)\(M_{\odot}/h\) descendant BHs assemble from \(1.56\times 10^{3}\)\(M_{\odot}/h\) DGBs in a single halo in close succession. As mentioned in Section 2.3.2, these bFOFs are essentially the sites where high-z (proto)galaxies reside; we therefore use the phrase "galaxies" to refer to these bFOFs.
Figure 8: Top and bottom rows show the redshifts and the galaxy total masses (\(M_{\rm total}^{\rm galaxy}\), which includes DM, gas and stars) at which \(1.25\times 10^{4}\ M_{\odot}/h\) and \(1\times 10^{5}\ M_{\odot}/h\) BHs respectively assemble from \(1.56\times 10^{3}\ M_{\odot}/h\) DGBs when the BH growth is traced along the galaxy merger tree. The 1st, 2nd and 3rd columns show different gas based seeding models with \(\tilde{M}_{h}=3000\) and \(\tilde{M}_{\rm sfmp}=5,50\ \&\ 150\). The 4th column shows \(\tilde{M}_{h}=10000\) and \(\tilde{M}_{\rm sfmp}=5\). Solid lines show the mean trend and the shaded regions show \(\pm 1\sigma\) standard deviations. We find that for all the models, there is a transition in the slope of the mean trend at redshift \(z\equiv z_{\rm trans}\sim 12-13\), which is driven by the suppression of seed formation by metal enrichment. The trends are reasonably well fit by a double power law (dashed lines). These fits are used in our stochastic seed models that directly seed the descendants (referred to as "extrapolated seed descendants" or ESDs) at \(1.25\times 10^{4}\ M_{\odot}/h\) or \(1\times 10^{5}\ M_{\odot}/h\) within the lower resolution \(L_{\rm max}=11\ \&\ 10\) zooms, respectively. To obtain fits in the top row, we first assumed \(z_{\rm trans}=13.1\) for \(\tilde{M}_{h}=3000,\tilde{M}_{\rm sfmp}=5,50\ \&\ 150\), and \(z_{\rm trans}=12.1\) for \(\tilde{M}_{h}=10000,\tilde{M}_{\rm sfmp}=5\) via a visual inspection. The fits were then performed to obtain the slopes at \(z<z_{\rm trans}\) and \(z>z_{\rm trans}\) using scipy.optimize.curve_fit. The final fitted parameters are shown in Table 2.
Figure 7: Introduction to best friends of friends (bFOF) galaxies, which are identified using the FOF algorithm but with one-third of the linking length used for identifying halos: Left panel shows the relation between halo mass and the mass (\(M_{\rm total}^{\rm galaxy}\)) of the central or most massive bFOF in blue, and satellite bFOFs in orange. On average, the central bFOFs are \(\sim 7\) times less massive than their host FOFs, but with substantial scatter (\(\gtrsim 1\) dex) for fixed FOF mass (\(M_{\rm total}^{\rm halo}\)). The middle panel shows the number of bFOFs for FOFs of different total masses. The plots are shown at \(z=8\) and for the gas based seed model [\(\tilde{M}_{h},\tilde{M}_{\rm sfmp}=3000,5\)]. Blue color shows all bFOFs (with or without BHs); orange, green and maroon lines show bFOFs with a total BH mass of \(1.5\times 10^{3}\ M_{\odot}/h\), \(1.25\times 10^{4}\ M_{\odot}/h\) and \(1\times 10^{5}\ M_{\odot}/h\) respectively. Right panel shows the number of BHs occupied by FOFs and bFOFs. While \(\gtrsim 12\%\) of FOFs contain multiple BHs (up to \(\sim 30\)), only \(\sim 1\%\) of bFOFs contain multiple BHs. All this motivates us to use bFOFs as seeding sites (instead of FOFs) in our new stochastic seed models that would be able to represent the lowest mass (\(\sim 10^{3}\ M_{\odot}/h\)) DGBs in lower resolution simulations that cannot directly resolve them. These bFOFs are essentially sites of (proto)galaxies residing within the high-z halos. We hereafter refer to these bFOFs as "galaxies".
### Building the _galaxy mass criterion_
Recall from Section 3.1 that because DGB formation in our gas based seeding model occurs during a transient phase of rapid metal enrichment in halos that are otherwise fairly typical, their descendants have metallicities (and SFRs) similar to those of typical halos with similar total masses. This motivates us to first explore low-resolution simulations with a seeding criterion that simply matches the galaxy mass distribution of seeding sites in our high-resolution, gas based models. We refer to this seeding criterion as the _galaxy mass criterion_; notably, this differs from typical halo-mass-based seeding models in the use of a distribution of host mass thresholds rather than a single value. The corresponding simulations are referred to as STOCHASTIC_MASS_ONLY.
#### 4.2.1 Galaxy masses at assembly of \(\sim 10^{4}\) & \(10^{5}\)\(M_{\odot}\) BHs from \(\sim 10^{3}\)\(M_{\odot}\) seeds
To calibrate our seed models, we first determine the galaxy masses (\(M_{\rm total}^{\rm galaxy}\)) in which \(1.25\times 10^{4}\)\(M_{\odot}/h\) and \(1\times 10^{5}\)\(M_{\odot}/h\) BHs assemble from \(1.56\times 10^{3}\)\(M_{\odot}/h\) DGBs within our GAS_BASED simulations; these are shown in Figure 8. Let us first focus on the assembly of \(1.25\times 10^{4}\)\(M_{\odot}/h\) descendants (Figure 8, top panels). Similar to the \(M_{\rm total}^{\rm halo}\) versus redshift relations (Figure 6, middle panel), the \(M_{\rm total}^{\rm galaxy}\) versus redshift relations show features that reflect the interplay between halo growth, star formation and metal enrichment in influencing DGB formation. For \(\tilde{M}_{\rm h}=3000,\tilde{M}_{\rm sfmp}=50\) & 150, we see that the slope of the redshift evolution of the mean (denoted by \(\left<M_{\rm total}^{\rm galaxy}\right>\) and shown as solid lines) undergoes a gradual transition between \(z\sim 13-15\). This corresponds to the slowdown of DGB formation due to metal enrichment. When \(\tilde{M}_{\rm h}=10000\) & \(\tilde{M}_{\rm sfmp}=5\), this transition occurs at comparatively lower redshifts (\(z\sim 12-10\)) as the influence of metal enrichment starts later due to the higher \(\tilde{M}_{\rm h}\).
\[\log_{10}\left<M_{\rm total}^{\rm galaxy}\right>=\left\{\begin{array}{ll}(z-z_{\rm trans})\times\alpha+\log_{10}M_{\rm trans},&\mbox{if $z<z_{\rm trans}$}\\ (z-z_{\rm trans})\times\beta+\log_{10}M_{\rm trans},&\mbox{if $z\geq z_{\rm trans}$}\end{array}\right\}.\]
\(z_{\rm trans}\) roughly marks the transition in the driving physical process for DGB formation. For \(z>z_{\rm trans}\), halo growth or star formation primarily drives DGB formation; for \(z<z_{\rm trans}\), metal enrichment takes over as the primary driver to suppress DGB formation. \(M_{\rm trans}\) is the value of \(\left<M_{\rm total}^{\rm galaxy}\right>\) at the transition redshift. Finally, \(\alpha\) and \(\beta\) are the slopes of the \(\left<M_{\rm total}^{\rm galaxy}\right>\) versus redshift relation at \(z<z_{\rm trans}\) and \(z>z_{\rm trans}\) respectively. To simplify our fitting procedure, we first select \(z_{\rm trans}\) for each of the cases via visual inspection and determine \(M_{\rm trans}\) by interpolating the \(\left<M_{\rm total}^{\rm galaxy}\right>\) versus redshift relation. We then fit for \(\alpha\) and \(\beta\) using the scipy.optimize.curve_fit Python package. Note that the double power-law function assumes a sharp transition in the \(\left<M_{\rm total}^{\rm galaxy}\right>\) versus redshift relation at \(z=z_{\rm trans}\). However, as we can see in Figure 8, this transition occurs much more gradually as metal enrichment starts to slow down and eventually suppresses DGB formation. Nevertheless, the double power-law model offers a simple (albeit approximate) framework to capture the combined impact of halo growth, star formation and metal enrichment that leads to the initial rise and eventual suppression of DGB formation.
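A minimal sketch of this fitting step is shown below (our own illustration, with made-up data points standing in for the mean trends of Figure 8; \(z_{\rm trans}\) and \(M_{\rm trans}\) are held fixed, as described above).

```python
import numpy as np
from scipy.optimize import curve_fit

def double_power_law(z, alpha, beta, z_trans=13.1, log10_M_trans=7.30):
    """log10<M_total^galaxy>(z): slope alpha for z < z_trans, beta for z >= z_trans."""
    z = np.asarray(z, dtype=float)
    return np.where(z < z_trans,
                    (z - z_trans) * alpha + log10_M_trans,
                    (z - z_trans) * beta + log10_M_trans)

# Hypothetical mean assembly masses (log10 Msun/h) at a few redshifts.
z_data = np.array([18.0, 16.0, 14.0, 13.0, 12.0, 10.0, 8.0])
logM_data = np.array([7.20, 7.24, 7.28, 7.30, 7.43, 7.66, 7.90])

# Fit only alpha and beta; z_trans and M_trans stay fixed at their chosen values.
(alpha_fit, beta_fit), _ = curve_fit(lambda z, a, b: double_power_law(z, a, b),
                                     z_data, logM_data, p0=[-0.1, 0.0])
print(f"alpha = {alpha_fit:.3f}, beta = {beta_fit:.3f}")
```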
The values of \(z_{\rm trans}\), \(M_{\rm trans}\), \(\alpha\) and \(\beta\) for the different gas based seed models are listed in the top four rows of Table 2. We choose \(z_{\rm trans}=13.1\) for \(\tilde{M}_{\rm h}=3000,\tilde{M}_{\rm fsmp}=5,50\) & 150. \(z_{\rm trans}\) is the same for all three \(\tilde{M}_{\rm fsmp}\) values to encode that the slow down of seed formation due to metal enrichment starts at similar redshifts for all these models. For \(\tilde{M}_{\rm h}=10000,\tilde{M}_{\rm fsmp}=5\), we choose a lower transition redshift of \(z_{\rm trans}=12.1\) as halo growth continues to drive up seed formation up to lower redshifts compared to the models with \(\tilde{M}_{\rm h}=3000\).
The impact of \(\tilde{M}_{\rm h}\) and \(\tilde{M}_{\rm fsmp}\) on \(M_{\rm trans}\), \(\alpha\) and \(\beta\) is noteworthy. As \(\tilde{M}_{\rm h}\) or \(\tilde{M}_{\rm fsmp}\) increases, the value of \(M_{\rm trans}\) also increases to generally reflect the fact that descendant BHs
| \(\tilde{M}_{\rm sfmp}\) | \(\tilde{M}_{\rm h}\) | \(z_{\rm trans}\) | \(\log_{10}M_{\rm trans}\ [M_{\odot}/h]\) | \(\alpha\) | \(\beta\) | \(\sigma\) | \(p_{0}\) | \(p_{1}\) | \(\gamma\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \(M_{\rm seed}^{\rm ESD}=1.25\times 10^{4}\ M_{\odot}/h\) | | | | | | | | | |
| 5 | 3000 | 13.1 | 6.86 | -0.105 | -0.041 | 0.330 | NA | NA | NA |
| 50 | 3000 | 13.1 | 7.09 | -0.128 | -0.017 | 0.319 | 0.1 | 0.3 | 1.6 |
| 150 | 3000 | 13.1 | 7.30 | -0.151 | 0.009 | 0.360 | 0.1 | 0.3 | 1.6 |
| 5 | 10000 | 12.1 | 7.39 | -0.091 | 0.067 | 0.278 | 0.2 | 0.4 | 1.2 |
| \(M_{\rm seed}^{\rm ESD}=1\times 10^{5}\ M_{\odot}/h\) | | | | | | | | | |
| 5 | 3000 | 13.1 | 7.72 | -0.120 | 0 | 0.246 | 0.2 | 0.4 | 1.2 |
| 50 | 3000 | 13.1 | 8.10 | -0.067 | 0 | 0.286 | 0.2 | 0.4 | 1.2 |
| 150 | 3000 | 13.1 | 8.41 | -0.060 | 0 | 0.298 | 0.2 | 0.4 | 1.2 |
Table 2: Fiducial model parameters for the stochastic seed model, calibrated for each of the gas based seeding parameters. Columns 1 and 2 show the gas based seeding parameters \(\tilde{M}_{\rm sfmp}\) and \(\tilde{M}_{\rm h}\). For each set of \(\tilde{M}_{\rm h}\) and \(\tilde{M}_{\rm sfmp}\) values, the remaining columns list the parameters of the stochastic seed model. Columns 3 to 7 show the parameter values used for the _galaxy mass criterion_, which are derived from gas based seed model predictions of the \(M_{\rm total}^{\rm galaxy}\) versus redshift relations (Figure 8). \(z_{\rm trans}\), \(M_{\rm trans}\), \(\alpha\) & \(\beta\) are obtained by fitting the mean trends using the double power-law function shown in Equation 5. \(\sigma\) is the standard deviation. Columns 8 to 10 show the parameter values for the _galaxy environment criterion_ (i.e., \(p_{0}\), \(p_{1}\) and \(\gamma\)). These are obtained by exploring a range of possible values to find the best match with the small-scale BH clustering and overall BH counts predicted by the gas based seed model.
of a fixed mass are assembling in more massive halos. \(\alpha\) is significantly more sensitive to \(\tilde{M}_{\rm sfmp}\) compared to \(\tilde{M}_{\rm h}\); this is not surprising as \(\alpha\) corresponds to the regime where metal enrichment primarily governs seed formation. A higher value of \(\tilde{M}_{\rm sfmp}\) produces a steeper \(\alpha\), as it leads to stronger suppression of DGB formation by metal enrichment. Lastly, \(\beta\) is impacted by both \(\tilde{M}_{\rm sfmp}\) and \(\tilde{M}_{\rm h}\). This also makes sense because \(\beta\) corresponds to the regime where either star formation or halo growth can drive seed formation. Increasing \(\tilde{M}_{\rm sfmp}\) enhances the role of star formation, and increasing \(\tilde{M}_{\rm h}\) enhances the role of halo growth. Generally, we see that as the number of DGBs forming at the highest redshifts is decreased due to an increase in \(\tilde{M}_{\rm h}\) or \(\tilde{M}_{\rm sfmp}\), \(\beta\) tends to go from negative to positive values thereby favoring higher \(M_{\rm total}^{\rm galaxy}\) at higher redshifts. This is likely because when BHs are very few, merger driven growth is slow and galaxies have more time to grow via DM accretion between successive mergers. As a result, galaxy growth is slightly faster than merger dominated BH growth at these highest redshifts where there are very few BHs.
We now turn our attention to the assembly of \(10^{5}\)\(M_{\odot}/h\) descendant BHs (bottom panels of Figure 8). In this case, we do not have adequate statistics to robustly determine the \(\left<M_{\rm{total}}^{\rm{galaxy}}\right>\) versus redshift relations. We can see that
Figure 9: Colored dashed lines show 1D distributions of galaxy properties in which \(1.25\times 10^{4}\)\(M_{\odot}/h\) BHs assemble from \(1.56\times 10^{3}\)\(M_{\odot}/h\) DGBs within GAS_BASED simulations. From left to right, the panels in each row show the total galaxy masses (\(M_{\rm total}^{\rm galaxy}\)), stellar masses (\(M_{*}^{\rm galaxy}\)), SFRs, gas metallicities (\(Z\)), and environments (\(N_{\rm ngb}\), i.e. the number of neighboring halos around the galaxy as defined in Section 2.3.2). Top, middle and bottom rows correspond to different sets of gas based seed parameters: \([\tilde{M}_{\rm h},\tilde{M}_{\rm sfmp}=3000,50]\), \([\tilde{M}_{\rm h},\tilde{M}_{\rm sfmp}=3000,150]\) and \([\tilde{M}_{\rm h},\tilde{M}_{\rm sfmp}=10000,5]\) respectively. In each panel, the light grey lines show host properties for the \(1.25\times 10^{4}\)\(M_{\odot}/h\) ESDs in the corresponding STOCHASTIC_MASS_ONLY simulation. Note that unlike the rest of the paper, here the STOCHASTIC_MASS_ONLY simulations are run at the highest resolution of \(L_{\rm max}=12\) for a fair comparison of their predicted galaxy baryonic properties with the GAS_BASED simulations run at the same resolution. The total galaxy masses of BH hosts in the STOCHASTIC_MASS_ONLY simulations are calibrated to match the GAS_BASED simulations, but no other calibration is performed. The agreement of the distributions of baryonic properties (\(M_{*}\), SFR & \(Z\)) between the two types of simulations results naturally from matching the \(M_{\rm total}^{\rm galaxy}\) distribution. However, the STOCHASTIC_MASS_ONLY simulations do end up placing the ESDs in significantly less rich environments (smaller \(N_{\rm ngb}\)) compared to what is required by the GAS_BASED simulations.
Figure 11: Impact of _galaxy environment criterion_ on the two-point clustering and the overall counts of \(>1.25\times 10^{4}\ M_{\odot}/h\) BHs. The dashed maroon lines show a simulation that uses the gas based seed model \([\bar{M}_{\rm h},\bar{M}_{\rm sfmp}=3000,150]\) with \(M_{\rm seed}^{\rm DGB}=1.56\times 10^{3}\ M_{\odot}/h\). The grey solid lines correspond to simulations that use the stochastic seed model, and directly place ESDs of mass \(1.25\times 10^{4}\ M_{\odot}/h\) based on both the _galaxy mass criterion_ and _galaxy environment criterion_. For the _galaxy environment criterion_, we systematically decrease \(p_{0}\) and \(p_{1}\) as the shade gets darker (see legend). _Upper panels_: The total galaxy mass (left panel) and galaxy environment (right panel) during the initial assembly of \(1.25\times 10^{4}\ M_{\odot}/h\) BHs. _Lower panels_: The left three panels show the two point clustering of \(>1.25\times 10^{4}\ M_{\odot}/h\) BHs at \(z=8,11\ \&\ 14\) respectively, and the rightmost panel shows the overall number of \(>1.25\times 10^{4}\ M_{\odot}/h\) BHs in each snapshot. We find that the STOCHASTIC_MASS_ONLY simulation (\(p_{0}=1\) and \(p_{1}=1\)) significantly underestimates the small-scale clustering and overestimates the BH counts compared to the GAS_BASED simulations. As we introduce the _galaxy environment criterion_ (STOCHASTIC_MASS_ENV) and decrease \(p_{0}\) and \(p_{1}\) to favor seeding in richer environments, we find that the small-scale clustering is enhanced and the BH counts decrease. The model with \(p_{0},p_{1}=0.1,0.3\) produces the best match for the small-scale clustering as well as the BH counts.
Figure 12: Here we demonstrate the ability of different \(L_{\rm max}=11\) stochastic seed models to represent the \(1.25\times 10^{4}\ M_{\odot}/h\) descendants of \(1.56\times 10^{3}\ M_{\odot}/h\) DGBs formed in \(L_{\rm max}=12\) gas based seed models. The leftmost two panels show the total galaxy mass and galaxy environment at the time of assembly of \(1.25\times 10^{4}\ M_{\odot}/h\) BHs. The remaining three panels on the right show the statistics of \(>1.25\times 10^{4}\ M_{\odot}/h\) BHs, namely the total BH counts versus redshift, the two-point clustering at \(z=8\), and the merger rates. The colored dashed lines show the GAS_BASED simulations wherein \(1.56\times 10^{3}\ M_{\odot}/h\) DGBs form and eventually grow to assemble \(1.25\times 10^{4}\ M_{\odot}/h\) BHs. The different rows correspond to different values of \(\tilde{M}_{\rm sfmp}\) and \(\tilde{M}_{\rm h}\) (see legend). The remaining lines correspond to simulations using stochastic seed models that place ESDs directly at \(1.25\times 10^{4}\ M_{\odot}/h\). The thick solid silver and black lines and histograms show the STOCHASTIC_MASS_ONLY and STOCHASTIC_MASS_ENV simulations respectively; they use the fiducial seeding parameters calibrated for each set of gas based seeding parameters listed in Table 2. The thin black dashed lines in the right three panels show STOCHASTIC_MASS_ONLY simulations that assume zero scatter in the _galaxy mass criterion_ i.e., \(\sigma=0\). The thinnest black solid line in the same panels shows simulations that assume a constant galaxy mass threshold fixed at the mean of the distributions from the leftmost panels (see vertical line). Amongst all the simulations that use stochastic seeding, only the STOCHASTIC_MASS_ENV simulations are able to successfully capture the GAS_BASED simulation predictions.
Figure 13: Same as Figure 12, but for the assembly of \(1\times 10^{5}\ M_{\odot}/h\) BHs. The statistics are more limited compared to the previous figure. The shaded grey regions correspond to \(z>13.1\), wherein we could not calibrate the _galaxy mass criterion_ due to a lack of data points in Figure 8. But at \(z<13.1\) where calibration was possible, we find that the STOCHASTIC_MASS_ENV simulations (at a resolution of \(L_{\rm max}=10\)) reasonably match the BH counts predicted by the \(L_{\rm max}=12\) GAS_BASED simulations.
data points only exist at \(z\lesssim 13\), wherein \(\left\langle M_{\rm total}^{\rm galaxy}\right\rangle\) tends to increase with decreasing redshift (except for \(\tilde{M}_{\rm h}=10000,\tilde{M}_{\rm sfmp}=5\), where statistics are too poor to reveal any useful trends). Here, we only fit for \(\alpha\) after assuming the same values of \(z_{\rm trans}\) that were used for the assembly of \(1.25\times 10^{4}\ M_{\odot}/h\) BHs (dashed lines in Figure 8, lower panels). The best fit values are shown in the bottom three rows of Table 2. Overall, we should still keep in mind that there are very few \(10^{5}\ M_{\odot}/h\) descendants. Therefore, these fits are not very statistically robust. Nevertheless, they will still be useful to test our stochastic seed models in the next subsection.
In addition to the mean trends, the \(M_{\rm total}^{\rm galaxy}\) versus redshift relations show a significant amount of scatter (\(\sigma\)). This is defined to be the 1 sigma standard deviation shown by the shaded regions in Figure 8. Generally we see that the scatter does not have a strong redshift evolution. The overall mean scatter (averaged over the entire redshift range) for the different gas based seed models is shown in the seventh column of Table 2. The scatter decreases slightly as we make the gas based seeding criterion more restrictive by increasing \(\tilde{M}_{\rm h}\) or \(\tilde{M}_{\rm sfmp}\). This is likely because for more restrictive seed models, assembly of higher-mass BHs occurs in more massive galaxies for which the underlying galaxy mass function is steeper. For the same reason, the scatter is also smaller for the assembly of \(1\times 10^{5}\ M_{\odot}/h\) BHs compared to that of \(1.25\times 10^{4}\ M_{\odot}/h\) BHs.
#### 4.2.2 Properties of galaxies that form ESDs: Comparison with gas based seed model predictions
We finally use the \(M_{\rm total}^{\rm galaxy}\) versus redshift relations to formulate our _galaxy mass criterion_. More specifically, we place ESDs of mass \(1.25\times 10^{4}\ M_{\odot}/h\) and \(1\times 10^{5}\ M_{\odot}/h\) based on minimum galaxy mass thresholds. The threshold value (\(M_{\rm th}\)) is stochastically drawn from redshift dependent distributions described by a log-normal function, i.e., \(\propto\exp{[-\frac{1}{2}(\log_{10}M_{\rm th}-\mu)^{2}/\sigma^{2}]}\), with mean \(\mu\equiv\log_{10}\left<M_{\rm total}^{\rm galaxy}\right>(z)\) described by the double power-law fits shown in Figure 8 and Table 2. The standard deviation \(\sigma\) is shown in Table 2 (column 7).
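As a concrete sketch (our own illustration, not the simulation code), the threshold draw for one candidate galaxy could look as follows; the parameter values are the Table 2 entries calibrated for \([\tilde{M}_{\rm h},\tilde{M}_{\rm sfmp}=3000,150]\) and \(M_{\rm seed}^{\rm ESD}=1.25\times 10^{4}\ M_{\odot}/h\).

```python
import numpy as np

def draw_mass_threshold(z, rng, z_trans=13.1, log10_M_trans=7.30,
                        alpha=-0.151, beta=0.009, sigma=0.360):
    """Draw a galaxy total-mass seeding threshold M_th (Msun/h) at redshift z.
    The mean of log10(M_th) follows the fitted double power law (slope alpha
    below z_trans, beta above it) and sigma is the scatter from Table 2."""
    slope = alpha if z < z_trans else beta
    mu = (z - z_trans) * slope + log10_M_trans
    return 10.0 ** rng.normal(mu, sigma)

rng = np.random.default_rng(42)
# A bFOF galaxy of total mass 10^7.5 Msun/h at z = 12 receives an ESD under the
# galaxy mass criterion only if it exceeds the freshly drawn threshold.
M_galaxy = 10.0 ** 7.5
print(M_galaxy > draw_mass_threshold(z=12.0, rng=rng))
```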
In Figure 9, we show the 1D distributions (marginalized over all redshifts until \(z=7\)) of the various galaxy properties wherein \(1.25\times 10^{4}\ M_{\odot}/h\) descendants assemble (i.e., total mass, stellar mass, SFRs, gas metallicities and environments). We compare the predictions for the GAS_BASED simulations that assemble the \(1.25\times 10^{4}\ M_{\odot}/h\) descendants from \(1.56\times 10^{3}\ M_{\odot}/h\) DGBs (colored lines), and the STOCHASTIC_MASS_ONLY simulations that directly seed the \(1.25\times 10^{4}\ M_{\odot}/h\) ESDs (grey lines). We can clearly see that after calibrating the STOCHASTIC_MASS_ONLY simulations to reproduce the total galaxy masses (1st panels from the left) predicted by the GAS_BASED simulation, they also broadly reproduce the baryonic properties of the galaxies such as stellar masses, SFRs and metallicities (2nd, 3rd and 4th panels). This further solidifies our findings from Figures 1 to 3, that the galaxies wherein the \(1.25\times 10^{4}\ M_{\odot}/h\) descendants assemble are reasonably well characterized by their total mass alone. Recall that this is attributed to the transience of the rapid metal enrichment phase in which halos form \(1.56\times 10^{3}\ M_{\odot}/h\) DGBs in the GAS_BASED suite.
However, we see that the _galaxy mass criterion_ places the ESDs in sparser environments (hosts with fewer neighboring halos) compared to the GAS_BASED simulation predictions (rightmost panels in Figure 9). This reflects the fact that when the low-mass DGBs assemble higher-mass BHs through merger-dominated BH growth, their descendants naturally grow faster in regions with more frequent major halo and galaxy mergers. Therefore, for a given distribution of total galaxy masses, those living in richer environments are more likely to contain higher-mass descendant BHs.
These results for the assembly of \(1.25\times 10^{4}\ M_{\odot}/h\) BHs also hold true for the assembly of \(1\times 10^{5}\ M_{\odot}/h\) BHs, as shown in Figure 10. In the next section, we develop an additional seeding criterion to account for this small-scale clustering of the assembly sites of higher mass descendants in our GAS_BASED models.
### Building the _galaxy environment criterion_
In this section, we describe an additional _galaxy environment criterion_ to favor the placement of ESDs in galaxies in richer environments (at fixed galaxy mass). We then explore its implications on their two-point clustering and the overall BH population.
First, we assume that any potential seeding site with two or more neighbors (\(N_{\rm ngb}\geq 2\)) will always seed an ESD. Potential seeding sites with zero or one neighbor will seed an ESD with a probability \(0\leq P_{\rm seed}^{\rm env}\leq 1\). For these cases, we assign a different linear dependence of \(P_{\rm seed}^{\rm env}\) on the galaxy mass \(M_{\rm total}^{\rm galaxy}\), such that the probability for any potential seeding site to actually form an ESD is given by
\[P_{\rm seed}^{\rm env}=\left\{\begin{array}{ll}\left(M_{\rm total}^{\rm galaxy}-\left\langle M_{\rm total}^{\rm galaxy}\right\rangle\right)\gamma+p_{0},&\mbox{if $N_{\rm ngb}=0$}\\ \left(M_{\rm total}^{\rm galaxy}-\left\langle M_{\rm total}^{\rm galaxy}\right\rangle\right)\gamma+p_{1},&\mbox{if $N_{\rm ngb}=1$}\\ 1,&\mbox{if $N_{\rm ngb}>1$}\end{array}\right\}. \tag{5}\]
Here, \(p_{0}\) and \(p_{1}\) denote the seeding probability in galaxies with 0 and 1 neighbors respectively, at the mean \(\left(\left\langle M_{\rm total}^{\rm galaxy}\right\rangle\right)\) of the total mass distributions of galaxies wherein the descendant BHs assemble.
The parameter \(\gamma\) defines the slope for the linear dependence of \(P_{\rm seed}^{\rm env}\) on the galaxy mass; it varies slightly between the underlying gas based seed models used for calibration, as listed in Table 2. The motivation for this linear dependence and the adopted \(\gamma\) values are described in Appendix A. But to briefly summarize the main physical motivation, we use a \(\gamma>0\) to encode the natural expectation that for fixed \(N_{\rm ngb}\), descendants will grow faster within galaxies with higher total mass. This is because \(N_{\rm ngb}\), by definition, counts the number of halos with masses _higher than_ the host halo mass of the galaxy that are within \(5R_{\rm vir}\). As a result, a higher-mass galaxy with \(N_{\rm ngb}\) neighbors is in a more overdense region than a lower-mass galaxy with the same \(N_{\rm ngb}\) neighbors.
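A minimal sketch of this criterion is given below (our own illustration). Note one assumption: we write the mass dependence in terms of \(\log_{10}\) masses (i.e., the offset from the mean in dex), which is one plausible reading of the "linear dependence on galaxy mass" given the order-unity \(\gamma\) values in Table 2; the clipping to \([0,1]\) is also our own choice. The numbers correspond to the calibration for \([\tilde{M}_{\rm h},\tilde{M}_{\rm sfmp}=3000,150]\).

```python
import numpy as np

def p_seed_env(log10_M_galaxy, log10_M_mean, n_ngb, p0=0.1, p1=0.3, gamma=1.6):
    """Seeding probability from the galaxy environment criterion (Equation 5).
    n_ngb is the number of more-massive halos within 5 R_vir of the galaxy's host."""
    if n_ngb > 1:
        return 1.0
    offset = p0 if n_ngb == 0 else p1
    p = (log10_M_galaxy - log10_M_mean) * gamma + offset
    return float(np.clip(p, 0.0, 1.0))  # keep the probability within [0, 1]

rng = np.random.default_rng(7)
# An isolated galaxy 0.2 dex above the mean assembly mass: P ~ 0.2*1.6 + 0.1 = 0.42.
p = p_seed_env(log10_M_galaxy=7.5, log10_M_mean=7.3, n_ngb=0)
print(p, rng.random() < p)
```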
We add the _galaxy environment criterion_ to the already applied _galaxy mass criterion_. We shall refer to the resulting suite of simulations as STOCHASTIC_MASS_ENV. In Figure 11, we systematically compare the GAS_BASED simulations (maroon lines) to the STOCHASTIC_MASS_ENV simulations that trace \(1.25\times 10^{4}\ M_{\odot}/h\) descendants (grey lines) for a range of parameter values for \(p_{0}\) and \(p_{1}\). We start with \(p_{0}=1,p_{1}=1\), which is basically the STOCHASTIC_MASS_ONLY
simulation (lightest grey lines), and find that it significantly underestimates the two point clustering (by factors up to \(\sim 5\)) of the \(\geq 1.25\times 10^{4}\ M_{\odot}/h\) BHs compared to the GAS_BASED simulations (lower left three panels). At the same time, the STOCHASTIC_MASS_ONLY simulation also over-estimates the overall counts of the \(\geq 1.25\times 10^{4}\ M_{\odot}/h\) BHs (lower right most panel). Upon decreasing the probabilities as \(p_{0}<p_{1}<1\), we can see that the two-point clustering starts to increase while the overall BH counts simultaneously decrease. For \(p_{0}=0.1\ \&\ p_{1}=0.3\), we produce the best agreement of the two-point clustering as well as the overall BH counts. Further decreasing \(p_{0}\) and \(p_{1}\) mildly enhances the two-point clustering, but leads to too much suppression of the BH counts compared to GAS_BASED simulations. Therefore, we identify \(p_{0}=0.1\ \&\ p_{1}=0.3\) as the best set of parameter values for the gas based seeding parameters \([\tilde{M}_{\rm h},\tilde{M}_{\rm fmp}=3000,150]\).
As a caveat, we must also note in Figure 11 that while \(p_{0}=0.1\ \&\ p_{1}=0.3\) produces the best agreement in the two-point correlation function between GAS_BASED and STOCHASTIC_MASS_ENV simulations, it does place the ESDs in galaxies with somewhat higher \(N_{\rm ngb}\) compared to the GAS_BASED simulations (upper right panels). To that end, recall that \(N_{\rm ngb}\) only measures the galaxy environment at a fixed separation scale of \(D_{\rm ngb}=5\ R_{\rm vir}\) (revisit Section 2.3.2). Therefore, we cannot expect \(N_{\rm ngb}\) to fully determine the two-point correlation profile, which measures the environment over a wide range of separation scales (\(\sim 0.01-1\) Mpc/\(h\) in our case). In other words, one could come up with an alternative set of _galaxy environment criteria_ (for example, using \(N_{\rm ngb}\) within a different \(D_{\rm ngb}\neq 5\ R_{\rm vir}\), or even multiple \(N_{\rm ngb}\) values within different \(D_{\rm ngb}\) values) and still be able to simultaneously reproduce the two-point correlation function as well as the BH counts. Finding all these different possibilities of _galaxy environment criteria_ is not the focus of this work. Instead, our objective here is simply to demonstrate that to reproduce the GAS_BASED simulation predictions, we need a _galaxy environment criterion_ to favor the placing of ESDs in galaxies with richer environments. Furthermore, we showed that by applying a _galaxy environment criterion_ that brings the two point correlation function into agreement with the GAS_BASED simulations, our STOCHASTIC_MASS_ENV simulations achieve the primary goal for our sub-grid seeding model: faithfully representing the descendants of \(1.56\times 10^{3}\ M_{\odot}/h\) seeds produced in the GAS_BASED simulations.
Thus far we have calibrated a STOCHASTIC_MASS_ENV simulation to reproduce the \(1.25\times 10^{4}\ M_{\odot}/h\) descendant BH population from a gas based seed model with \([\tilde{M}_{\rm h},\tilde{M}_{\rm fmp}=3000,150]\) and \(M_{\rm seed}=1.56\times 10^{3}\ M_{\odot}/h\). We can perform the same calibration for the remaining gas based seed models in our suite, and for the assembly of \(1\times 10^{5}\ M_{\odot}/h\) descendant BHs in addition to \(1.25\times 10^{4}\ M_{\odot}/h\) descendants. The resulting \(p_{0}\) and \(p_{1}\) values for all the gas based seeding parameters are listed in Table 2. Broadly speaking, we require \(p_{0}\sim 0.1-0.2\) and \(p_{1}\sim 0.3-0.4\) to simultaneously reproduce the gas based seed model predictions for the small-scale clustering and BH counts of the descendant BHs. Slightly higher \(p_{0}\) and \(p_{1}\) values are favored for more restrictive gas based criteria and for higher-mass descendant BHs, possibly because in both cases the descendant BHs assemble in higher-mass galaxies. Note that higher-mass galaxies tend to be more strongly clustered than lower mass galaxies. As a result, during the calibration of the STOCHASTIC_MASS_ENV simulations, the _galaxy mass criterion_ alone will already produce a slightly stronger clustering for the ESDs. This lessens the burden on the _galaxy environment criterion_ to achieve the desired clustering predicted by the gas based seed models.
In Figures 12 and 13, we show the STOCHASTIC_MASS_ENV (solid black lines) versus GAS_BASED (colored dashed lines) seed model predictions. For \(M_{\rm seed}^{\rm ESD}=1.25\times 10^{4}\ M_{\odot}/h\) (Figure 12), we calibrate models corresponding to \([\tilde{M}_{\rm h},\tilde{M}_{\rm sfmp}=3000,50\ \&\ 3000,150]\) and \([\tilde{M}_{\rm h},\tilde{M}_{\rm sfmp}=10000,5]\). We exclude the most lenient gas based seed parameters of \([\tilde{M}_{\rm h},\tilde{M}_{\rm sfmp}=3000,5]\), since they cause a significant portion of \(1.25\times 10^{4}\ M_{\odot}/h\) descendants to assemble in galaxies that cannot be resolved in the \(L_{\rm max}=11\) runs. For the remaining gas based seed parameters, the STOCHASTIC_MASS_ENV simulations well reproduce the GAS_BASED simulation predictions for the BH counts, two-point correlation functions and merger rates of \(>1.25\times 10^{4}\ M_{\odot}/h\) BHs.
For \(M_{\rm seed}^{\rm ESD}=1\times 10^{5}\ M_{\odot}/h\) (Figure 13), we only do this exercise for the most lenient gas based seed models, i.e., \([\tilde{M}_{\rm h},\tilde{M}_{\rm sfmp}=3000,5\ \&\ 3000,50]\). This is because for the stricter gas based seed models, there are too few BHs produced overall. Here, the STOCHASTIC_MASS_ENV simulations well reproduce the counts of \(>1\times 10^{5}\ M_{\odot}/h\) BHs at \(z<13.1\) (wherein there is enough data to calibrate the slope \(\alpha\); revisit Figure 8, bottom row). For \(z>13.1\), \(\beta=0\) is assumed due to the absence of enough data points to perform any fitting; here, the STOCHASTIC_MASS_ENV seed model overestimates the number of \(>1\times 10^{5}\ M_{\odot}/h\) BHs and their high-\(z\) merger rates. Regardless, where enough data exist for robust calibration, these results imply that with a calibrated combination of _galaxy mass criterion_ and _galaxy environment criterion_, the STOCHASTIC_MASS_ENV simulations can well reproduce the GAS_BASED simulation predictions for a wide range of gas based seeding parameters.
Figures 12 and 13 also disentangle the impact of the various components of our final stochastic seed model, and they highlight the importance of each component in the successful representation of the gas based seed models. As seen previously, the STOCHASTIC_MASS_ONLY seed model overestimates the BH counts and merger rates by factors between \(\sim 2-5\). Next, when we assume zero scatter in the _galaxy mass criterion_ (\(\sigma=0\), black dashed lines), it further overestimates the BH counts and merger rates up to factors of \(\sim 1.5\) (grey solid versus black dashed lines). Finally, if we remove the redshift dependence in the _galaxy mass criterion_ and instead assume a constant threshold value (thin dotted lines), the BH counts and merger rates monotonically increase with time. Not surprisingly, this is because such a model cannot capture the suppression of seed formation due to metal enrichment.
Overall, we can clearly see that in order to represent our \(L_{\rm max}=12\) gas based seed models forming \(1.56\times 10^{3}\ M_{\odot}/h\) BH seeds in lower-resolution, larger-volume simulations, we need a stochastic seed model that places their resolvable descendant BHs (ESDs) using the following two criteria:
* A _galaxy mass criterion_ with a galaxy mass seeding threshold that is drawn from a distribution that evolves with redshift. The redshift evolution encodes the impact of star formation, halo growth and metal enrichment on seed formation.
* A _galaxy environment criterion_ that favors seeding within galaxies living in rich environments. This encodes the impact of the unresolved, hierarchical-merger-dominated growth of these seeds from \(M_{\rm seed}^{\rm DGB}\) to \(M_{\rm seed}^{\rm ESD}\).
### Accounting for unresolved minor mergers
We have thus far successfully built a new stochastic BH seed model that places ESDs which represent the \(\sim 10^{4}-10^{5}\ M_{\odot}/h\) descendants of \(\sim 10^{3}\ M_{\odot}/h\) DGBs in simulations that cannot directly resolve these lowest-mass BHs. In this section, we model the subsequent growth of these ESDs. To do so, we must first account for one additional contribution to their growth: unresolved minor mergers.
Recall from Bhowmick et al. (2021) that the earliest growth of these \(\sim 10^{3}\ M_{\odot}/h\) DGBs is completely driven by BH mergers, with negligible contribution from gas accretion. For our present purposes, these BH mergers can be classified into three types:
* _Heavy mergers:_ In these mergers, both the primary and secondary black holes (with masses \(M_{1}\) and \(M_{2}\), respectively) are more massive than the ESDs (\(M_{1}>M_{2}>M_{\rm seed}^{\rm ESD}\)). Therefore, these mergers will be fully resolvable within STOCHASTIC_MASS_ENV simulations.
* _Light major mergers:_ In these mergers, both the primary and secondary black holes are less massive than the ESDs (\(M_{\rm seed}^{\rm DGB}<M_{2}<M_{1}<M_{\rm seed}^{\rm ESD}\)). These mergers cannot be resolved in STOCHASTIC_MASS_ENV simulations. However, these are the mergers that lead to the initial assembly of the descendants represented by the ESDs, such that their contribution to BH assembly is already implicitly captured within the stochastic seed model.
* _Light minor mergers:_ In these mergers, the primary black hole is more massive than the ESD mass, but the secondary black hole is not (\(M_{1}>M_{\rm seed}^{\rm ESD}\) & \(M_{\rm seed}^{\rm DGB}<M_{2}<M_{\rm seed}^{\rm ESD}\)). These mergers cannot be resolved in STOCHASTIC_MASS_ENV simulations, and their contributions to BH mass assembly cannot be captured by the _galaxy mass criterion_ or the _galaxy environment criterion_. Therefore, we must modify our prescription to explicitly add their contribution to the growth of the ESDs.
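The classification above reduces to two mass comparisons per merger event; a short sketch encoding it directly (masses in \(M_{\odot}/h\), thresholds as quoted in the text for the \(1.25\times 10^{4}\ M_{\odot}/h\) ESDs) might look as follows.

```python
M_SEED_DGB = 1.56e3   # directly seeded BH mass [Msun/h]
M_SEED_ESD = 1.25e4   # extrapolated seed descendant mass [Msun/h]

def classify_merger(m1, m2):
    """Classify a BH-BH merger with primary mass m1 >= secondary mass m2 [Msun/h]."""
    if m2 >= M_SEED_ESD:                        # both BHs resolvable in the larger volume
        return "heavy"
    if m1 >= M_SEED_ESD:                        # resolved primary, unresolved secondary
        return "light minor"
    if m1 >= M_SEED_DGB and m2 >= M_SEED_DGB:   # both below the ESD mass
        return "light major"
    return "below seed mass"

print(classify_merger(5.0e4, 2.0e4))   # heavy
print(classify_merger(5.0e4, 3.0e3))   # light minor
print(classify_merger(9.0e3, 4.0e3))   # light major
```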
We first determine the contribution of light minor mergers within the GAS_BASED simulations. Here we only show the results for \(M_{\rm seed}^{\rm ESD}=1.25\times 10^{4}\ M_{\odot}/h\), since there are too few \(1\times 10^{5}\ M_{\odot}/h\) BHs formed in the GAS_BASED simulations to robustly perform this analysis for the latter. The light minor mergers are thus defined to have \(M_{1}>1.25\times 10^{4}\ M_{\odot}/h\) and \(1.56\times 10^{3}<M_{2}<1.25\times 10^{4}\ M_{\odot}/h\), and heavy mergers are defined to be those with \(M_{1}>M_{2}>1.25\times 10^{4}\ M_{\odot}/h\). In Figure 14, we compare the contributions of the light minor mergers and heavy mergers to the growth of \(>1.25\times 10^{4}\ M_{\odot}/h\) BHs in the GAS_BASED simulations. The light minor mergers are \(\sim 30\) times more frequent than the heavy mergers (top row); this is simply due to the higher overall number of \(M_{\rm BH}<1.25\times 10^{4}\ M_{\odot}/h\) BHs compared to \(M_{\rm BH}>1.25\times 10^{4}\ M_{\odot}/h\) BHs. When we compare the mass growth contributed by light minor mergers versus heavy mergers (middle row), we find that the light minor mergers dominate at the highest redshifts (\(z\sim 15-19\)). As BH growth proceeds over time, the mass growth contributed by heavy mergers increases and eventually exceeds that of the light minor mergers at \(z\lesssim 12\), even though the overall merger rates are still dominated by light minor mergers. This is because the masses of the BHs involved in the heavy mergers continue to increase with time. Eventually, when new DGB formation is strongly suppressed by metal enrichment, the mass growth due to the light minor mergers becomes small. We clearly see these trends in the third row of Figure 14, which shows \(\Delta M_{\rm minor}^{\rm light}\), defined as the amount of mass growth due to light minor mergers between successive _heavy merger_ events. \(\Delta M_{\rm minor}^{\rm light}\) monotonically decreases towards lower redshift and its evolution is reasonably well fit by power laws.
We use the power law fits of \(\Delta M_{\rm minor}^{\rm light}\) (shown in the last row of Figure 14) to determine the missing BH growth contribution from light minor mergers. More specifically, for each heavy merger event in a STOCHASTIC_MASS_ENV simulation, we add extra mass growth of \(\Delta M_{\rm minor}^{\rm light}\) due to light minor mergers, calculated from these power law fits. Figure 15 shows that it is only after the inclusion of these unresolved light minor mergers that we achieve reasonable agreement between the BH mass functions predicted by the GAS_BASED and the STOCHASTIC_MASS_ENV simulations (colored dashed lines versus solid black lines). Note that at masses between \(M_{\rm seed}^{\rm ESD}\) and \(2M_{\rm seed}^{\rm ESD}\), the STOCHASTIC_MASS_ENV simulations will inevitably continue to slightly underpredict the mass functions. This is because within our prescription, the contribution from light minor mergers does not occur until the first heavy merger event between the ESDs.
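As a rough illustration of this correction, the sketch below fits a power law to hypothetical measurements of \(\Delta M_{\rm minor}^{\rm light}(z)\) and adds the corresponding mass to the remnant at every heavy merger; the catalogue values and fit coefficients are invented for the example and are not the ones derived from Figure 14.

```python
import numpy as np

# Hypothetical measurements of Delta M_minor^light between successive heavy
# mergers (redshift, accumulated mass growth in Msun/h).
z_meas  = np.array([8.0, 9.0, 10.0, 11.0, 12.0, 13.0])
dm_meas = np.array([2.0e3, 3.1e3, 4.5e3, 6.8e3, 9.9e3, 1.5e4])

# Power-law fit Delta M = A * (1 + z)^gamma, performed in log-log space.
gamma, log_amp = np.polyfit(np.log10(1.0 + z_meas), np.log10(dm_meas), 1)

def delta_m_light_minor(z):
    """Unresolved light-minor-merger growth accumulated since the last heavy merger."""
    return 10.0 ** log_amp * (1.0 + z) ** gamma

def heavy_merger_remnant(m1, m2, z):
    """Remnant mass at a heavy merger, with the unresolved correction added."""
    return m1 + m2 + delta_m_light_minor(z)

print(heavy_merger_remnant(5.0e4, 2.0e4, z=10.0))
```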
## 5 Summary and conclusions
In this work, we tackle one of the longstanding challenges in modeling BH seeds in cosmological hydrodynamic simulations: how do we simulate low mass (\(\lesssim 10^{3}\ M_{\odot}\)) seeds in simulations that cannot directly resolve them? We address this challenge by building a new sub-grid seed model that can stochastically seed the smallest resolvable descendants of low mass seeds in lower-resolution simulations (hereafter referred to as the "stochastic seed model"). Our new seed model is motivated and calibrated based on the highest resolution simulations that directly resolve the low mass seeds. With this new tool, we have bridged a critical gap between high-resolution simulations that directly resolve low mass seeds, and larger-volume simulations that can generate sufficient numbers of BHs to compare against observational measurements. This paves the way for making statistically robust predictions for signatures of low-mass seeds using cosmological hydrodynamic simulations, which is a crucial step in preparation for the wealth of observations with the ongoing JWST, as well as upcoming facilities such as LISA.
The core objective of this work has been to determine the key ingredients needed to construct such a seed model. To do this, we study the growth of the lowest mass \(1.56\times 10^{3}\ M_{\odot}/h\) seeds that were fully resolved using the highest resolution zoom simulations. These seeds are placed in halos containing gas that is simultaneously star forming as well as metal poor (\(<10^{-4}Z_{\odot}\)), consistent with proposed low mass seeding candidates such as Pop III stellar remnants. We trace the growth of these \(1.56\times 10^{3}\ M_{\odot}/h\) seeds until they assemble descendants with masses that are close to different possible gas mass resolutions (\(\sim 10^{4}-10^{6}\ M_{\odot}\)) expected in larger cosmological volumes. We characterize the environments in which these
descendants assemble; e.g., they assemble in halos with masses ranging from \(\sim 10^{7}-10^{9}\)\(M_{\odot}\). The results are used to build our stochastic seed model that directly seeds these descendants in lower resolution simulations. To distinguish them from the _actual_ \(1.56\times 10^{3}\)\(M_{\odot}/h\) seeds, we refer to the "seeds" formed by the stochastic seed model as "extrapolated seed descendants" or ESDs (with mass \(M_{\rm seed}^{\rm ESD}\)). We consider \(1.25\times 10^{4}\) & \(1\times 10^{5}\)\(M_{\odot}/h\) ESDs that are aimed at faithfully representing the descendants of \(1.56\times 10^{3}\)\(M_{\odot}/h\) seeds born out of star forming and metal poor gas. Specifically, we explore a wide range of stochastic seed models on lower resolution versions of our zoom region, and determine the crucial ingredients required to reproduce the results of the highest resolution zoom simulations that explicitly resolve the \(1.56\times 10^{3}\)\(M_{\odot}/h\) seeds. The following are the key features of our new seed model:
* We seed the ESDs in high-z (proto)galaxies which are bound substructures within high-z halos. Since halos can contain multiple galaxies, this naturally allows the placement of multiple ESDs per halo. This is important because even if \(1.56\times 10^{3}\)\(M_{\odot}/h\) seeds are placed as one seed per halo, their subsequent hierarchical growth inevitably assembles multiple higher mass descendants within individual halos.
* We introduce a _galaxy mass criterion_ which places the ESDs based on galaxy mass thresholds. These thresholds are stochastically drawn from galaxy mass (including DM, stars and gas) distributions wherein \(1.25\times 10^{4}\) & \(1\times 10^{5}\)\(M_{\odot}/h\) BHs assemble from \(1.56\times 10^{3}\)\(M_{\odot}/h\) seeds. We find that the _galaxy mass criterion_ also naturally replicates the baryonic properties of the galaxies at the time of assembly of the seed descendants, including stellar mass, SFRs, and gas metallicities. This is because, although \(1.56\times 10^{3}\)\(M_{\odot}/h\) seeds form within halos exhibiting a bias towards lower metallicities in comparison to typical halos of similar masses, they undergo a transient phase characterized by rapid metal enrichment. As a result, the higher mass \(1.25\times 10^{4}\) & \(1\times 10^{5}\)\(M_{\odot}/h\) descendants end up in unbiased halos with metallicities similar to halos of similar masses. The redshift dependence of the distributions underlying the galaxy mass thresholds captures the complex influence of processes such as halo growth, star formation and metal enrichment on the formation of \(1.56\times 10^{3}\)\(M_{\odot}/h\) seeds.
* However, if our stochastic seed model only contains the _galaxy mass criterion_, it underestimates the two-point clustering (at scales of \(0.01-0.1\) Mpc/\(h\)) of \(\geq 1.25\times 10^{4}\) & \(1\times 10^{5}\)\(M_{\odot}/h\) BHs by factors of \(\sim 5\). At the same time, it overestimates the BH abundances and merger rates of \(\geq 1.25\times 10^{4}\) & \(1\times 10^{5}\)\(M_{\odot}/h\) BHs by factors up to \(\sim 5\). This is a direct consequence of the fact that in our highest resolution zooms, the \(1.56\times 10^{3}\)\(M_{\odot}/h\) seeds grow primarily via BH-BH mergers. As a result, the assembly of the higher mass descendants is more efficient in galaxies with richer environments (a higher number of neighboring halos) and a more extensive merger history. This cannot be captured solely by the _galaxy mass criterion_.
Figure 14: Comparing the contributions of heavy mergers versus light minor mergers to the merger driven BH growth within the GAS_BASED suite. The green lines show heavy mergers where the masses of both primary and secondary BHs are \(\geq 1.25\times 10^{4}\)\(M_{\odot}/h\). The orange lines show the light minor mergers where the secondary BH mass is \(<1.25\times 10^{4}\)\(M_{\odot}/h\) but the primary BH mass is \(\geq 1.25\times 10^{4}\)\(M_{\odot}/h\). The olive lines show the total contribution from both types of mergers, i.e. all mergers with primary BHs \(\geq 1.25\times 10^{4}\)\(M_{\odot}/h\). The different columns show different gas based seed models. Middle panels show the mass growth rate due to mergers as a function of redshift, which is defined as the total mass of all merging secondary BHs per unit redshift. The light minor mergers show a dominant contribution at \(z\gtrsim 11\), whereas heavy mergers tend to be more prevalent at \(z\lesssim 11\). The bottom panels show the mass growth (\(\Delta M_{\rm minor}^{\rm light}\)) due to the light minor mergers between successive heavy mergers. This contribution needs to be explicitly included in simulations that use the stochastic seed models, to produce BH growth consistent with the GAS_BASED simulations.
* To successfully capture the two-point clustering of the \(\geq 1.25\times 10^{4}\) & \(1\times 10^{5}\)\(M_{\odot}/h\) descendant BHs, we introduce a _galaxy environment criterion_, where we assign seeding probabilities less than unity for galaxies with \(\leq 1\) neighbors. By doing this, we preferentially place ESDs in richer environments, which enhances the two-point clustering. We demonstrate that by adding a _galaxy-environment criterion_ that is calibrated to produce the correct two-point clustering, our stochastic seed models can simultaneously also reproduce the BH abundances and merger rates of the \(\geq 1.25\times 10^{4}\) & \(1\times 10^{5}\)\(M_{\odot}/h\) BHs.
* Lastly, the BH growth in our stochastic seed models is underestimated due to the absence of light minor mergers, defined as those involving a resolved primary (\(M_{1}>M_{\rm seed}^{\rm ESD}\)) but an unresolved secondary (\(M_{2}<M_{\rm seed}^{\rm ESD}\)). We compute the contribution of these mergers from the highest resolution zooms that resolve the \(1.56\times 10^{3}\)\(M_{\odot}/h\) seeds, and explicitly add them to the simulations that use the stochastic seed models. Only after adding the contribution from light minor mergers do our stochastic seed models accurately reproduce the BH mass functions predicted by the highest resolution zooms.
Overall, our stochastic seed model requires three main seeding components to successfully represent low mass seeds in lower-resolution, larger-volume simulations: 1) a _galaxy mass criterion_, 2) a _galaxy environment criterion_, and 3) the inclusion of unresolved light minor mergers. In our upcoming companion paper (Bhowmick et al. in prep), we apply these stochastic seed models to uniform volume cosmological simulations, and thereby make predictions that would be directly comparable to facilities such as JWST and LISA for different seeding scenarios.
Figure 15: Comparison of the cumulative mass functions (i.e. the number of BHs above a minimum BH mass threshold \(M_{\rm BH}^{\rm min}\)) between the GAS_BASED (colored lines) and STOCHASTIC_MASS_ENV (black lines) simulations. The top, middle and bottom rows show \(z=8\), 10 and 12, respectively. The black dashed and solid lines show the STOCHASTIC_MASS_ENV predictions with and without the explicit inclusion of the contribution from the unresolved light minor mergers. Without the light minor mergers, the STOCHASTIC_MASS_ENV BH mass functions are significantly steeper than in the GAS_BASED simulations. After including the contribution from the _unresolved light mergers_, the STOCHASTIC_MASS_ENV simulations are able to achieve reasonable agreement with the BH mass functions predicted by the GAS_BASED simulations.
The construction of our stochastic seed model essentially rests on two important aspects of the formation of low mass seeds. First, these seeds form in regions which are already in the process of rapid metal enrichment, which is a natural consequence of seeding within star forming & metal poor gas. Second, the BH growth is dominantly driven by BH-BH mergers. Therefore, our stochastic seed model could be tuned to represent _any_ low mass seeding scenario for which the foregoing assumptions hold true. These include scenarios beyond the ones we consider in this work. Furthermore, we can calibrate our stochastic seed model against any high resolution simulation run with different galaxy formation models or using different state-of-the-art numerical solvers such as GADGET-4(Springel et al., 2021), GIZMO(Hopkins, 2015) etc. Lastly, a key advantage of our seed model is that it depends solely on galaxy total mass (which is dark matter dominated) and galaxy environment. Therefore, it can also be readily applied to DM-only simulations as well as semi-analytic models that are typically much less expensive compared to full hydrodynamic simulations.
In the near future, we shall test our stochastic seed model for its ability to represent low mass seeds when coupled with alternative accretion and dynamics models. For example, having a smaller scaling exponent between BH accretion rate and BH mass (such as \(\alpha=1/6\) for the gravitational torque driven accretion model) may significantly enhance the role of gas accretion in the growth of low mass seeds at high redshifts. Similarly, having a more physically motivated BH dynamics prescription will likely impact the merger rates and change the relative importance of accretion versus mergers in driving BH growth. In such a case, we can envision requiring additional ingredient(s) in our stochastic seed model to capture the impact of unresolved accretion driven growth of low mass seeds, similar to how the galaxy environment criterion was needed to account for the impact of unresolved merger dominated BH growth.
Nevertheless, our new stochastic seed model offers a substantial improvement from existing cosmological simulations that have either relied on a threshold halo / stellar mass, or on poorly resolved gas properties for seeding. Unlike most of these currently used seed models, our models will allow us to represent low-mass seeds in cosmological simulations without the need to either explicitly resolve the seeds, or seed below the gas mass resolution of the simulation. Overall, this work is an important step towards the next generation of cosmological hydrodynamic simulations in terms of improved modeling of high redshift SMBHs, to finally understand their role in shaping high redshift galaxy evolution in the ongoing JWST and upcoming LISA era.
## Acknowledgements
LB acknowledges support from NSF award AST-1909933 and Cottrell Scholar Award #27553 from the Research Corporation for Science Advancement. PT acknowledges support from NSF-AST 2008490. RW acknowledges funding of a Leibniz Junior Research Group (project number J131/2022).
## Data Availability
The underlying data used in this work shall be made available upon reasonable request to the corresponding author.
|
2309.03537 | Data-Adaptive Graph Framelets with Generalized Vanishing Moments for
Graph Signal Processing | In this paper, we propose a novel and general framework to construct tight
framelet systems on graphs with localized supports based on hierarchical
partitions. Our construction provides parametrized graph framelet systems with
great generality based on partition trees, by which we are able to find the
size of a low-dimensional subspace that best fits the low-rank structure of a
family of signals. The orthogonal decomposition of subspaces provides a key
ingredient for the definition of "generalized vanishing moments" for graph
framelets. In a data-adaptive setting, the graph framelet systems can be
learned by solving an optimization problem on Stiefel manifolds with respect to
our parameterization. Moreover, such graph framelet systems can be further
improved by solving a subsequent optimization problem on Stiefel manifolds,
aiming at providing the utmost sparsity for a given family of graph signals.
Experimental results show that our learned graph framelet systems perform
superiorly in non-linear approximation and denoising tasks. | Ruigang Zheng, Xiaosheng Zhuang | 2023-09-07T07:49:43Z | http://arxiv.org/abs/2309.03537v2 | # Subgraph-based Tight Frames on Graphs with Compact Supports and Vanishing Moments
###### Abstract
In this work, we propose a novel and general method to construct tight frames on graphs with compact supports based on a series of hierarchical partitions. Starting from our abstract construction, which generalizes previous methods based on partition trees, we are able to flexibly incorporate subgraph Laplacians into our design of graph frames. Consequently, our general methods permit adjusting the (subgraph) vanishing moments of the framelets and extra properties, such as directionality, for efficiently representing graph signals with path-like supports. Several variants are explicitly defined and tested. Experimental results show that our proposed graph frames perform superiorly in non-linear approximation tasks.
Graph signal processing, tight frames, graph wavelet/framelet, compact support, vanishing moment.
## I Introduction
Graphs are a prevalent data format with a variety of realizations in real life, e.g. social networks, traffic networks, sensor networks, etc. Due to their practical importance, efficient signal processing tools on graphs are desired, and this has been an on-going research topic for over a decade [1, 2]. Compared with images and sequential data, graphs are more irregular, flexible representations. Therefore, classical signal processing methods cannot be directly applied to graphs. In graph signal processing, a central topic is to develop notions and tools that resemble those on continuous spaces or discrete 1-D signals, e.g. Fourier and wavelet analyses/transforms. Stemming from spectral graph theory [3], the graph Fourier transform (GFT) based on graph Laplacians is by far one of the most fundamental concepts upon which many other tools are developed.
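As a minimal illustration of the GFT, the following sketch diagonalizes the combinatorial Laplacian of a small graph and uses its eigenvectors as the Fourier basis; the graph and the signal are arbitrary toy examples.

```python
import numpy as np

# Adjacency matrix of a small undirected path graph on 4 vertices.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A       # combinatorial graph Laplacian

# Graph Fourier basis: eigenvectors of L, ordered by eigenvalue ("frequency").
eigvals, U = np.linalg.eigh(L)

x = np.array([1.0, 2.0, 0.5, -1.0])  # an arbitrary graph signal
x_hat = U.T @ x                      # graph Fourier transform
x_rec = U @ x_hat                    # inverse GFT

print(np.allclose(x, x_rec))         # True: the orthonormal basis gives perfect reconstruction
```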
Via the GFT, notions of frequencies and of low-pass and high-pass filters are subsequently defined. These concepts further suggest developing an analysis-synthesis framework using two-channel filterbanks and up/down samplings on graphs, analogous to classical 1-D cases. The representative work in [4] shows that this can be established on bipartite graphs, on which up/down sampling is simply restricted to either set of the bipartition. This is essentially due to the algebraic property of _spectral folding_ that is satisfied on bipartite graphs. For arbitrary graphs, this property is not guaranteed and therefore the overall analysis-synthesis framework resorts to a decomposition of the original graph into bipartite subgraphs. The signal processing is then conducted on each bipartite subgraph independently. Subsequent works improve [4] by: 1. using compact (polynomial) and biorthogonal filters by allowing different filterbanks in the analysis and synthesis phases [5]; 2. applying oversampled bipartite graphs, since the algorithm of decomposition into bipartite subgraphs in [4] results in a loss of edges of the original graphs [6]; 3. improving the computational complexity and the quality of downsampling by using maximum spanning trees [7, 8]; 4. generalizations to arbitrary graphs by applying generalized sampling operators [9], generalized graph Laplacians and inner-products [10]. All of these works have shown considerable capability in various graph signal processing tasks.
Apart from the aforementioned works, there are a variety of other realizations of graph signal analysis-synthesis based on different motivations and possessing other characteristics and merits [11, 12, 13, 14, 15, 16, 17, 18]. Nonetheless, from a most general perspective, we can still unify these different approaches by viewing graph signals as vectors in \(\mathbb{R}^{n}\) and the analysis and synthesis phases as two matrices \(\mathbf{T}_{a},\mathbf{T}_{s}\), since all operations including up/down samplings are linear. Most importantly, they all possess the properties of perfect reconstruction, i.e. \(\mathbf{T}_{s}\mathbf{T}_{a}=\mathbf{I}\), and also critical sampling, i.e. \(\mathbf{T}_{a},\mathbf{T}_{s}\in\mathbb{R}^{n\times n}\). These are natural requirements, as perfect reconstruction allows recovering the signal after decomposition, and critical sampling indicates that compared with the original signal, there is no extra storage overhead after analysis. However, by viewing \(\mathbf{T}_{a}\) as a dictionary to represent graph signals, critical sampling could be a drawback, since the dictionary is parsimonious and does not have enough atoms. Therefore, for specific and complicated graph signals, critical sampling implies possible inefficiency in providing sparse representations. Moreover, except for special cases in [4, 12, 13], in the remaining aforementioned works, \(\mathbf{T}_{s}\neq\mathbf{T}_{a}^{T}\) and thus we cannot represent the analysis-synthesis framework using only \(\mathbf{T}_{a}\). These discussions lead to finding \(\mathbf{T}_{a},\mathbf{T}_{s}\) such that \(\mathbf{T}_{s}=\mathbf{T}_{a}^{T},\mathbf{T}_{a}^{T}\mathbf{T}_{a}=\mathbf{I},\mathbf{T}_{a}\in\mathbb{R}^{m\times n},m\geq n\). This is equivalent to requiring that the rows of \(\mathbf{T}_{a}\) form a tight frame on \(\mathbb{R}^{n}\).
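A small numerical sketch of such a pair, under the convention that analysis computes \(c=\mathbf{T}_{a}x\) and synthesis applies \(\mathbf{T}_{s}=\mathbf{T}_{a}^{T}\), is given below using the classical Mercedes-Benz frame in \(\mathbb{R}^{2}\); it is only an illustration, not one of the graph frames constructed in this paper.

```python
import numpy as np

# Mercedes-Benz frame: three unit vectors in R^2 at 120-degree spacing, rescaled
# so that the frame is tight with frame bound 1.
angles = np.deg2rad([90.0, 210.0, 330.0])
T_a = np.sqrt(2.0 / 3.0) * np.stack([np.cos(angles), np.sin(angles)], axis=1)  # shape (3, 2)
T_s = T_a.T                                  # synthesis operator = transpose of analysis

x = np.array([0.7, -1.2])                    # a signal in R^2
c = T_a @ x                                  # analysis: redundant coefficients (m = 3 > n = 2)
x_rec = T_s @ c                              # synthesis

print(np.allclose(T_s @ T_a, np.eye(2)))     # tight-frame / perfect-reconstruction identity
print(np.allclose(x, x_rec))                 # the signal is recovered exactly
```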
Frame theory is well-established in classical wavelet/framelet analysis [19, 20, 21] and there are well-known examples of frames [22, 23] which possess properties that ordinary wavelet bases do not have, e.g. directionality. As for frames on graphs, there are a few existing works [24, 25, 26, 27, 28], most of which are derived from the GFT [24] and are thus called spectral wavelets/framelets. While spectral wavelets/framelets are well-interpreted in the frequency domain, unless the filters used in their construction are polynomials, the wavelets/framelets do not have compact support. In this case, \(\mathbf{T}_{a}\) is a dense matrix and therefore causes a computation and storage burden. To facilitate efficient computations, polynomial approximations to the target filters are applied. But then it fails to be a tight frame, i.e. \(\mathbf{T}_{a}^{T}\mathbf{T}_{a}\neq\mathbf{I}\), since it is shown that no polynomial filters satisfy
2309.07989 | Asteroseismology of double-mode radial $δ$ Scuti stars: AE Ursae
Majoris and RV Arietis | We construct complex seismic models of two high-amplitude delta Sct stars, AE
UMa and RV Ari, each pulsating in two radial modes: fundamental and first
overtone. The models reproduce, besides the frequencies of two radial modes,
also the amplitude of bolometric flux variations (the parameter f) for the
dominant mode. Applying the Monte Carlo-based Bayesian analysis, we derive
strong constraints, on the parameters of the model as well as on the free
parameters of the theory. A vast majority of seismic models of the two stars
are just at the beginning of hydrogen-shell burning and a small fraction is at
the very end of an overall contraction. The stars have a similar age of about
1.6 Gyr for the hydrogen-shell burning phase. Both stars have unusual low
overshooting from the convective core; about 0.02 and 0.004 of the pressure
scale height for AE UMa and RV Ari, respectively. This result presumably
indicates that overshooting should vary with time and scale with a decreasing
convective core. The efficiency of convection in the envelope of both stars is
rather low and described by the mixing length parameter alphaMLT of about
0.3-0.6. The third frequency of RV Ari, confirmed by us in the TESS photometry,
can only be associated with mixed nonradial modes l=1, g4-g8 or l=2, g10-g12.
We include the dipole mode into our Bayesian modelling and demonstrate its huge
asteroseismic potential. | J. Daszynska-Daszkiewicz, P. Walczak, W. Szewczuk, W. Niewiadomski | 2023-09-14T19:01:51Z | http://arxiv.org/abs/2309.07989v1 | # Asteroseismology of double-mode radial \(\delta\) Scuti stars:
###### Abstract
We construct complex seismic models of two high-amplitude \(\delta\) Sct stars, AE UMa and RV Ari, each pulsating in two radial modes: fundamental and first overtone. The models reproduce, besides the frequencies of two radial modes, also the amplitude of bolometric flux variations (the non-adiabatic parameter \(f\)) for the dominant mode. Applying the Monte Carlo-based Bayesian analysis, we derive strong constraints on the parameters of the model as well as on the free parameters of the theory. A vast majority of seismic models of the two stars are just at the beginning of hydrogen-shell burning and a small fraction is at the very end of an overall contraction. The stars have a similar age of about 1.6 Gyr for the hydrogen-shell burning phase. Both stars have unusually low overshooting from the convective core; about 0.02 and 0.004 of the pressure scale height for AE UMa and RV Ari, respectively. This result presumably indicates that overshooting should vary with time and scale with a decreasing convective core. The efficiency of convection in the envelope of both stars is rather low and described by the mixing length parameter \(\alpha_{\rm MLT}\) of about 0.3\(-\)0.6. The third frequency of RV Ari, confirmed by us in the TESS photometry, can only be associated with mixed nonradial modes \(\ell=1,\ g_{4}-g_{8}\) or \(\ell=2,\ g_{10}-g_{12}\). We include the dipole mode into our Bayesian modelling and demonstrate its huge asteroseismic potential.
keywords: stars: evolution - stars: oscillation - Physical Data and Processes: opacity, convection- stars: individual: AE UMa, RV Ari
## 1 Introduction
Asteroseismology of stars pulsating in more than one radial mode is of particular importance because the period ratio of such modes takes values in a very narrow range. The high-amplitude \(\delta\) Scuti stars (HADS), a special subclass of \(\delta\) Sct variables, often pulsate in two radial modes, usually in the fundamental and first overtone mode (e.g., Breger 2000; McNamara 2000; Furgoni 2016; Yang et al. 2021). \(\delta\) Scuti stars are classical pulsating variables of AF spectral type and their instability is driven by the opacity mechanism operating in the second helium ionization zone (Chevalier 1971), with a small contribution from the hydrogen ionization region (Pamyatnykh 1999). The masses of \(\delta\) Sct pulsators are in the range of about 1.6 - 2.6 M\({}_{\odot}\) and most of them are in the main-sequence phase of evolution (e.g., Breger & Pamyatnykh 1998; Bowman et al. 2016). Radial and non-radial pulsations in pressure (p) and gravity (g) modes can be excited.
HADS stars change their brightness in the \(V\) passband by more than 0.3 mag. They are in an advanced phase of main-sequence evolution or, usually, already in a post-main sequence phase (e.g., Breger 2000). HADS pulsators have typically low rotational velocities, below \(V_{\rm rot}\sin i=40\ {\rm km\ s^{-1}}\)(Breger 2000), although there is at least one exception, i.e., V2367 Cyg with the rotational velocity of about 100 km s\({}^{-1}\)(Balona et al. 2012).
From the fitting of frequencies of just two radial modes, one can already obtain valuable constraints on global stellar parameters such as mass, effective temperature and luminosity (e.g., Petersen & Christensen-Dalsgaard 1996; Daszynska-Daszkiewicz et al. 2022; Netzel & Smolec 2022). However, to get a more unambiguous solution and more information about a star, e.g., on chemical composition, mixing processes or efficiency of convection, including nonradial modes or other seismic tools is essential. In particular, the non-adiabatic parameter \(f\) is most suitable for obtaining reliable constraints on convection in the outer layers of \(\delta\) Sct stars (Daszynska-Daszkiewicz et al. 2003). The parameter \(f\) gives the relative amplitude of the radiative flux perturbation at the photospheric level. Its diagnostic potential for constraining the mixing length parameter \(\alpha_{\rm MLT}\) has been already demonstrated many times for the AF-type pulsators, e.g., \(\beta\) Cas, AB Cas, 20 CVn (Daszynska-Daszkiewicz et al. 2003; Daszynska-Daszkiewicz 2007), FG Vir (Daszynska-Daszkiewicz et al. 2005), SX Phe (Daszynska-Daszkiewicz et al. 2020, 2023), the prototype \(\delta\) Sct (Daszynska-Daszkiewicz et al. 2021) and BP Peg (Daszynska-Daszkiewicz et al. 2022, 2023). The main results for AE UMa and RV Ari were published by
Daszynska-Daszkiewicz et al. (2023), where we showed for the four HADS stars that only the seismic models computed with the OPAL opacities (Iglesias & Rogers, 1996) fall within the observed error box in the HR diagram. Seismic models computed with OP tables (Seaton, 2005) and OPLIB tables (Colgan et al., 2016) were much cooler and less luminous.
Here, we present the details of complex seismic modelling of AE UMa and RV Ari, which relies on the simultaneous fitting of the two radial modes and the non-adiabatic parameter \(f\) for the dominant mode. Besides, we present the Fourier frequency analysis of the TESS space data, mode identification from the photometric observables and the asteroseismic potential of the nonradial mode present in the star RV Ari.
Sect. 2 contains basic information about the stars and the determination of their main observational parameters. In Sect. 3, we present the frequency analysis of the TESS data of the two HADS pulsators. In the case of RV Ari, the ASAS photometry is also analysed. In Sect. 4, we identify the degree \(\ell\) of two pulsational modes using the method based on the photometric amplitudes and phases, to confirm, independently of the period ratio, their radial nature. Sect. 5 presents the details of our complex seismic modelling of both HADS stars based on the Bayesian analysis using Monte Carlo simulations. In Sect. 6 we include the nonradial mode into the seismic modelling of RV Ari. The summary is given in Sect. 7.
## 2 The two double-mode radial pulsators: AE UMa and RV Ari
AE Ursae Majoris is an A9-spectral type star with the mean brightness in the V passband of 11.35 mag (SIMBAD Astronomical Database). The variability of the star was discovered by Greyer et al. (1955) and, firstly, it was classified as a dwarf Cepheid by Tesevich (1973) who determined the period of light variations. The secondary period was found by Szeidl (1974) and Broglia & Conconi (1975). Garcia et al. (1995) listed it as an SX Phe variable and this classification is still in GCVS and on the SIMBAD website. However, already Cox et al. (1979) postulated, on the basis of the period ratio, that AE UMa is a Population I high-amplitude \(\delta\) Sct star. Moreover, Rodriguez et al. (1992) showed that the metallicity of AE UMa is [m/H]\(=-0.3\), using the photometric index \(\delta m_{1}\). Hintz et al. (1997) determined [m/H] from \(-0.1\) to \(-0.4\) using an approximate relationship between the metallicity and the period ratio. Thus, there is no doubt that AE UMa belongs to Population I. As a majority of HADS stars, AE UMa is a slow rotator with \(V\sin i\approx 28\) km s\({}^{-1}\)(Jonsson et al., 2020). Pocs & Szeidl (2001), analysing 25 years of photometric observations, concluded that the period of the fundamental radial mode is stable and the period of the first overtone is decreasing with a rate \(\dot{P}/P=-7.3\cdot 10^{-8}\) yr\({}^{-1}\). According to the authors, the amplitudes of both modes undergo only small changes.
Niu et al. (2017) constructed for the first time seismic models of AE UMa based on the two radial modes and the period changes. From about 440 times of maximum light, they determined the positive period change for the dominant mode with a rate of \(\dot{P}/P=+5.4(1.9)\cdot 10^{-9}\) yr\({}^{-1}\). In their seismic modelling of AE UMa, Niu et al. (2017) ignored all effects of rotation and fixed the values of overshooting parameter from a convective core \(f_{\rm ov}=0.015\) and the mixing length parameter \(\alpha_{\rm MLT}=1.89\). They concluded that AE UMa is in the post-MS stage of evolution. Recently, Xue et al. (2022) performed the frequency analysis of the TESS data of AE UMa made in sector 21. They found two independent frequencies, \(11.6257(2)\) d\({}^{-1}\) and \(15.0314(2)\) d\({}^{-1}\), as well as 63 harmonics and combinations of them. Using the times of maximum light from about 46 years, they obtained \(\dot{P}/P=+2.96(5)\cdot 10^{-9}\) yr\({}^{-1}\) for the dominant period. Xue et al. (2022) demonstrated also a prospect of using the period changes in asteroseismic modelling and constructed such seismic models for the fixed values of the mixing length parameter of \(\alpha_{\rm MLT}=1.89\). The authors ignored all effects of rotation and assumed zero-overshooting from the convective core.
RV Arietis is a Population I star with an A spectral type and the mean brightness in the V passband of 12.27 mag. The star was identified as variable by Hoffmeister (1934). Broglia & Pestarino (1955) and Detre (1956) derived the main period and detected the second mode from the modulation period. These two periodic variations are explained by the excitation of the fundamental and first overtone radial modes (Cox et al., 1979). RV Ari is a slow rotator with the projected rotational velocity of \(V\sin i\approx 18\) km s\({}^{-1}\)(Rodriguez et al., 2000). Using the photometric index \(\delta m_{1}\), Rodriguez et al. (1992) determined for RV Ari the above-solar metallicity of [m/H]\(=+0.1\). Pocs et al. (2002) gathered extensive BVRI photometry, covering about 20 years, and obtained a decreasing period of the fundamental mode with a rate \(\dot{P}/P=-0.6\cdot 10^{-8}\) yr\({}^{-1}\) and an increasing period for the first overtone \(\dot{P}/P=+0.9\cdot 10^{-8}\) yr\({}^{-1}\). The opposite sign of period changes for the two modes indicates some non-evolutionary effects. The authors detected also the third signal in their photometry with the frequency \(13.6116\) d\({}^{-1}\) that can correspond only to a nonradial mode.
Casas et al. (2006) presented the first seismic modelling of the star adopting four discrete values of the mixing length parameter \(\alpha_{\rm MLT}=0.5,1.0,1.5,2.0\). They considered only main-sequence models and obtained the constraints on effective temperature \([7065,7245]\) K and on age \([1.19,1.27]\) Gyr.
In our seismic analysis of both stars, we adopted the whole range of the effective temperature found in the literature. To derive the luminosity, we adopted distances determined on the basis of the StarHorse2 model (Anders et al., 2022), using the Gaia EDR3 observations (Gaia Collaboration et al., 2022). The bolometric corrections were taken from Kurucz models for the microturbulent velocity \(\xi_{t}=2\) and \(4\) km s\({}^{-1}\). We considered the metallicity [m/H]\(=-0.5,\ -0.3,\ -0.2,\ -0.1\) for AE UMa and [m/H]\(=0.0,\ +0.1,\ +0.3,\ +0.5\) for RV Ari. The adopted parameters were as follows:
\(\bullet\) AE UMa: log \(T_{\rm eff}=3.88353(2922)\), log \(L/L_{\odot}=1.0907(896)\),
\(\bullet\) RV Ari: log \(T_{\rm eff}=3.8787(367)\), log \(L/L_{\odot}=1.1029(263)\).
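A short sketch of how such a luminosity can be derived from an apparent magnitude, a distance and a bolometric correction is given below; the numerical inputs are round illustrative values only, and interstellar extinction is neglected.

```python
import numpy as np

M_BOL_SUN = 4.74   # solar absolute bolometric magnitude

def log_luminosity(v_mag, distance_pc, bc_v):
    """log10(L/Lsun) from the V magnitude, distance and bolometric correction
    (interstellar extinction neglected in this illustrative sketch)."""
    abs_v = v_mag - 5.0 * np.log10(distance_pc / 10.0)   # absolute V magnitude
    m_bol = abs_v + bc_v                                  # absolute bolometric magnitude
    return (M_BOL_SUN - m_bol) / 2.5

# Illustrative round values only: V = 11.35 mag, d = 800 pc, BC_V = 0.02 mag.
print(log_luminosity(11.35, 800.0, 0.02))
```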
In Fig. 1, we show the position of both stars on the Hertzsprung-Russell diagram. As one can see, the stars occupy a similar position but their metallicity is quite different; AE UMa has [m/H] below the solar value and RV Ari above the solar value. For guidance, we show also a few evolutionary tracks computed with the Warsaw-New Jersey code described in Sect. 5. The tracks were computed for the OPAL opacity table (Iglesias & Rogers, 1996) and the solar chemical mixture of Asplund et al. (2009), hereafter AGSS09.
## 3 Frequency analysis
Both stars, AE UMa and RV Ari, were observed in the framework of the TESS mission (Ricker et al., 2015). Here, we used corrected 120 s cadence observations delivered by TESS Science Processing Operations Center (SPOC, Jenkins et al., 2016).
AE UMa was observed in the two 27 d sectors, S21 and S48, which are more than 2 years apart. Therefore, we decided to analyse
each sector separately. The Rayleigh resolution for each sector is about \(\Delta v_{R}=1/T=0.037\,\mathrm{d}^{-1}\).
RV Ari was observed in two sectors, 42 and 43, which span 51 days. Since these sectors are consecutive, we analyze them together. The Rayleigh resolution for the combined sectors is \(0.020\,\mathrm{d}^{-1}\). In addition to the space TESS data, we analysed the ground-based ASAS-3 V-band photometry (Pojmanski, 2002) of RV Ari. ASAS data cover 2518 days, which translates into the Rayleigh resolution \(\Delta v_{R}=0.0004\,\mathrm{d}^{-1}\).
In the first step, we normalized the TESS light curves by dividing them by a linear fit. Only data points with quality flag 0 were used. The normalization was done for each sector separately. In order to extract frequencies of the light variability, we applied the standard pre-whitening procedure. Amplitude periodograms (Deeming, 1975; Kurtz, 1985) were calculated up to the Nyquist frequency for TESS 120 s cadence data, i.e., to \(360\,\mathrm{d}^{-1}\). The fixed frequency step in the periodogram equal to \(5\times 10^{-5}\,\mathrm{d}^{-1}\) was used for both analyzed stars. In the case of TESS data, as a significance criterion of a given frequency peak we chose the signal-to-noise ratio \(S/N=5\). This threshold is higher than the standard value of 4 (Breger, 1993; Kuschnig et al., 1997), but it corresponds to an estimate made by Baran & Koen (2021) for TESS data. The noise \(N\) was calculated as the mean value in a one day window centred at the frequency before its extraction.
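A schematic version of a single pre-whitening step (amplitude spectrum, highest peak, S/N estimate in a surrounding window, sinusoid subtraction) is sketched below; the synthetic light curve and the much coarser frequency grid are for illustration only.

```python
import numpy as np

def amplitude_spectrum(t, y, freqs):
    """Deeming-style discrete Fourier amplitude spectrum for unevenly sampled data."""
    arg = 2.0 * np.pi * np.outer(freqs, t)
    return 2.0 / len(t) * np.abs(np.cos(arg) @ y - 1j * (np.sin(arg) @ y))

def prewhiten_step(t, y, fmax=15.0, df=0.005, sn_limit=5.0, half_window=0.5):
    """Locate the strongest peak, check its S/N in a 1 d^-1 window, and subtract
    the corresponding sinusoid from the light curve."""
    freqs = np.arange(df, fmax, df)
    amp = amplitude_spectrum(t, y, freqs)
    k = int(np.argmax(amp))
    noise = amp[np.abs(freqs - freqs[k]) < half_window].mean()
    if amp[k] / noise < sn_limit:
        return None, y
    arg = 2.0 * np.pi * freqs[k] * t
    design = np.column_stack([np.sin(arg), np.cos(arg), np.ones_like(t)])
    coeff, *_ = np.linalg.lstsq(design, y, rcond=None)
    return freqs[k], y - design @ coeff

# Synthetic 27 d light curve with one 11.63 d^-1 signal; the time sampling and
# frequency grid are much coarser than in the actual analysis, to keep this fast.
rng = np.random.default_rng(1)
t = np.arange(0.0, 27.0, 0.02)
y = 0.23 * np.sin(2.0 * np.pi * 11.63 * t) + 0.002 * rng.normal(size=t.size)
nu, residual = prewhiten_step(t, y)
print(nu)
```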
In the case of data with a high point-to-point precision, there is a risk of artificially introducing spurious signals in the pre-whitening process. According to Loumos & Deeming (1978), in their most conservative case, frequencies that are separated by less than 2.5 times the Rayleigh resolution cannot be resolved properly and may be spurious. Therefore, we decided to skip the frequency with the smaller amplitude in such close pairs.
Adopting the above criteria, in the case of AE UMa we found 59 significant frequency peaks in the S21 data and 57 in the S48 data. In the case of each sector, two peaks were rejected because of the adopted frequency resolution \(2.5\Delta v_{R}\). For RV Ari, we found 137 frequency peaks. Two frequencies were rejected because of the adopted resolution.
In Fig. 2, we show the three amplitude periodograms calculated for the S48 data of AE UMa, i.e., 1) for the original data (top panel), 2) after subtracting the frequency \(\nu_{1}\) (middle panel) and 3) after subtracting all significant frequencies (bottom panel). Periodograms for the S21 data are visually indistinguishable. In Fig. 3, we show four amplitude periodograms for the S42+S43 data of RV Ari. From top to bottom these are: 1) the periodogram calculated for the original data, 2) after subtracting \(\nu_{1}\), 3) after subtracting \(\nu_{1}\), \(\nu_{2}\) and five combinations/harmonics, and 4) after subtracting all significant signals.
Figure 1: The HR diagrams with the position of AE UMa (left panel) and RV Ari (right panel). The evolutionary tracks were computed with Warsaw-New Jersey code, assuming the OPAL opacity tables, the AGSS09 solar mixture and the initial hydrogen abundance \(X_{0}=0.70\). The metallicity is indicated in each panel. The mixing length parameter was \(\alpha_{\mathrm{MLT}}=0.5\) and zero-overshooting from the convective core was adopted. The initial velocity of rotation was \(V_{\mathrm{rot,0}}=10\) km s\({}^{-1}\).
Figure 2: The Fourier amplitude periodograms for AE UMa obtained from the TESS light curve for the sector S48. The panels from top to bottom show: the periodogram for the original data, after subtracting one term and after subtracting 57 terms. A red line indicates the \(S/N=5\) level.
Our final frequencies, amplitudes and phases were determined from a nonlinear least-squares fit of the following formula:
\[S(t)=\sum_{i=1}^{N}A_{i}\sin\left(2\pi\left(\nu_{i}t+\phi_{i}\right)\right)+c, \tag{1}\]
where \(N\) is the number of sinusoidal components, \(A_{i}\), \(\nu_{i}\), \(\phi_{i}\) are the amplitude, frequency and phase of the \(i-\)th component, respectively, while \(c\) is an offset. Moreover, we applied the correction to the formal frequency errors as suggested by Schwarzenberg-Czerny (1991, the post-mortem analysis). These correction factors were about 1.5 for both analyzed stars.
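A simultaneous nonlinear fit of Eq. (1) can be sketched with scipy.optimize.curve_fit as below; the synthetic data and initial guesses are placeholders standing in for the pre-whitening output.

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_sine(t, *p):
    """Sum of sinusoids plus an offset; p = (A1, nu1, phi1, ..., AN, nuN, phiN, c)."""
    y = np.full_like(t, p[-1])
    for a, nu, phi in zip(p[0:-1:3], p[1:-1:3], p[2:-1:3]):
        y += a * np.sin(2.0 * np.pi * (nu * t + phi))
    return y

rng = np.random.default_rng(0)
t = np.arange(0.0, 27.0, 0.01)
y = (0.23 * np.sin(2.0 * np.pi * (11.6257 * t + 0.10))
     + 0.04 * np.sin(2.0 * np.pi * (15.0314 * t + 0.30))
     + 0.002 * rng.normal(size=t.size))

# Initial guesses, e.g. taken from the pre-whitening stage.
p0 = [0.2, 11.63, 0.0, 0.05, 15.03, 0.0, 0.0]
popt, pcov = curve_fit(multi_sine, t, y, p0=p0)

print(popt[1], popt[4])             # refined frequencies
print(np.sqrt(np.diag(pcov))[1])    # formal error of the first frequency (before correction)
```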
Finally, the entire set of frequencies was searched for harmonics and combinations. A given frequency was considered a combination if it satisfied the equation
\[\nu_{i}=m\nu_{j}+n\nu_{k}+o\nu_{l} \tag{2}\]
within the Rayleigh resolution. In the case of two-parent combinations one of the integers, \(m\), \(n\) or \(o\), was set to zero, while in the case of harmonics two integers were set to zero. Moreover, we assumed that \(\nu_{j}\), \(\nu_{k}\) and \(\nu_{l}\) have higher amplitudes than \(\nu_{i}\).
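A brute-force version of this combination search, restricted to two parents and small integer coefficients for brevity, might look like the following sketch; the tolerance and maximum order are illustrative choices.

```python
import numpy as np
from itertools import product

def find_combination(nu, parents, rayleigh, max_order=5):
    """Return (m, n) if nu matches m*parents[0] + n*parents[1] within the
    Rayleigh resolution; harmonics correspond to n = 0 (or m = 0)."""
    best = None
    for m, n in product(range(-max_order, max_order + 1), repeat=2):
        if m == 0 and n == 0:
            continue
        err = abs(m * parents[0] + n * parents[1] - nu)
        if err < rayleigh and (best is None or err < best[2]):
            best = (m, n, err)
    return best

nu1, nu2 = 11.6257, 15.0314       # d^-1, the two radial modes of AE UMa
rayleigh = 0.037                  # d^-1, one 27 d sector
print(find_combination(26.6571, (nu1, nu2), rayleigh))   # (1, 1, ...)
print(find_combination(23.2514, (nu1, nu2), rayleigh))   # (2, 0, ...): a harmonic
```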
Only three of the frequencies detected for RV Ari are independent. Their values, amplitudes and S/N ratios are given in Table 2. The remaining 132 peaks can be explained by various combinations of these three independent frequencies. The third frequency \(\nu_{3}\) has been already suggested from ground-based photometry by Pocs et al. (2002) and we confirm it in the space data. Thus, RV Ari is one of a few HADS stars with an unquestionably existing third independent frequency that can only be associated with a nonradial mode.
Next, we analyzed the ASAS-3 V-band photometry of RV Ari. These data consist of the photometry made in five different apertures. We used the one with the smallest mean error. Only points with quality flags A and B were retained. In the case of ground-based photometry, we adopted \(S/N=4\) as a threshold for significant frequency peaks. The noise was calculated in a wider window of \(10\,\mathrm{d}^{-1}\). The amplitude periodograms were calculated up to \(150\,\mathrm{d}^{-1}\), which covers the frequency range found in the TESS data. The step in the periodograms was set to \(10^{-3}\,\mathrm{d}^{-1}\). We found five significant signals, two of which are independent and the remaining three are combinations or harmonics. These signals with the amplitudes and \(S/N\) ratios are given in Table 3. In Fig. 4, we show the amplitude periodograms calculated for the original data (top panel), after subtracting one frequency (middle panel) and after subtracting all 5 significant frequencies (bottom panel).
## 4 Identification of the mode degree \(\ell\)
The period ratio corresponding to the frequencies \(\nu_{1}\) and \(\nu_{2}\) amounts to 0.77343 for AE UMa and 0.77256 for RV Ari. These values strongly suggest that in each star \(\nu_{1}\) corresponds to a radial fundamental mode and \(\nu_{2}\) to a radial first overtone. In this section, we independently verify this hypothesis using the method of mode identification based on the photometric amplitudes and phases. To this end, we use time-series photometry in the Stromgren \(uvby\) passbands made by Rodriguez et al. (1992). In Table 4, we give the amplitudes and phases derived from these data.
Here, we apply the method of Daszynska-Daszkiewicz et al. (2003) based on a simultaneous determination of the mode degree \(\ell\), the intrinsic mode amplitude \(\varepsilon\) multiplied by \(Y_{\ell}^{m}(i,0)\) and the non-adiabatic parameter \(f\) for a given observed frequency. The numbers \(\ell\) and \(m\) are the spherical harmonic degree and the azimuthal order, respectively, and \(i\) is the inclination angle.
The intrinsic amplitude \(\varepsilon\) of a mode is defined by the formula:
\[\delta r(R,\theta,\varphi)=R\mathrm{Re}\{\varepsilon Y_{\ell}^{m}(\theta, \varphi)\mathrm{e}^{-\mathrm{i}\omega t}\}, \tag{2}\]
which gives the relative local radial displacement of the surface element caused by a pulsational mode with the angular frequency \(\omega\). Other symbols have their usual meanings. The corresponding parameter \(f\) is defined by changes of the bolometric flux, \(\mathcal{F}_{\mathrm{bol}}\) as
\[\frac{\delta\mathcal{F}_{\mathrm{bol}}}{\mathcal{F}_{\mathrm{bol}}}=\mathrm{ Re}\{\varepsilon fY_{\ell}^{m}(\theta,\varphi)\mathrm{e}^{-\mathrm{i}\omega t }\}. \tag{3}\]
Both, \(\varepsilon\) and \(f\) have to be regarded as complex numbers because pulsations are non-adiabatic. The theoretical values of \(f=(f_{R},\ f_{I})\) are derived from linear non-adiabatic computations of stellar pulsations whereas \(\varepsilon\) is indeterminable under linear theory.
In the linear and zero-rotation approximation, the theoretical expression for the complex amplitude of the relative total flux variation in a passband \(\lambda\), for a given pulsational mode, can be written in the form (e.g. Daszynska-Daszkiewicz et al., 2003, 2005):
\[\mathcal{A}_{\lambda}=\bar{\varepsilon}\left(\mathcal{D}_{\ell}^{\lambda}f+ \mathcal{E}_{\ell}^{\lambda}\right), \tag{4}\]
where
\[\bar{\varepsilon}\equiv\varepsilon Y_{\ell}^{m}(i,0), \tag{5a}\]
\[\mathcal{D}_{\ell}^{\lambda}=b_{\ell}^{\lambda}\frac{1}{4}\frac{\partial\log(\mathcal{F}_{\lambda}|b_{\ell}^{\lambda}|)}{\partial\log T_{\rm eff}}, \tag{5b}\]
\[\mathcal{E}_{\ell}^{\lambda}=b_{\ell}^{\lambda}\left[(2+\ell)(1-\ell)-\left(\frac{\omega^{2}R^{3}}{GM}+2\right)\frac{\partial\log(\mathcal{F}_{\lambda}|b_{\ell}^{\lambda}|)}{\partial\log g}\right], \tag{5c}\]
and
\[b_{\ell}^{\lambda}=\int_{0}^{1}h_{\lambda}(\mu)\mu P_{\ell}(\mu)d\mu. \tag{5d}\]
The term \(\mathcal{D}_{\ell}^{\lambda}\) describes the temperature effects and \(\mathcal{E}_{\ell}^{\lambda}\) combines the geometrical and pressure effects. \(G,\ M,\ R\) have their usual meanings. In Eq. 5d, \(h_{\lambda}(\mu)\) stands for the limb darkening law and \(P_{\ell}(\mu)\) is the Legendre polynomial. The partial derivatives of \(\mathcal{F}_{\lambda}(T_{\mathrm{eff}},\log g)\) in \(\mathcal{D}_{\ell}^{\lambda}\) and \(\mathcal{E}_{\ell}^{\lambda}\) as well as \(b_{\ell}^{\lambda}(T_{\mathrm{eff}},\log g)\) and its derivatives have to be calculated from model atmospheres. Their values are sensitive to the metallicity [m/H] and microturbulent velocity \(\xi_{t}\). Here, we used Vienna model atmospheres (Heiter et al., 2002) that include the turbulent convection treatment from Canuto et al. (1996). For the limb darkening law, \(h_{\lambda}(\mu)\), we computed coefficients assuming the non-linear, four-parametric formula of Claret (2000). The values of the photometric amplitudes and phases themselves are given by \(A_{\lambda}=|\mathcal{A}_{\lambda}|\) and \(\varphi_{\lambda}=\arg(\mathcal{A}_{\lambda})\), respectively.
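The disk-averaging factors \(b_{\ell}^{\lambda}\) from Eq. (5d) can be evaluated numerically; the sketch below does this for the Claret (2000) four-parameter limb-darkening law with invented coefficients, just to illustrate how the visibility drops with increasing \(\ell\).

```python
import numpy as np
from numpy.polynomial.legendre import Legendre
from scipy.integrate import quad

def claret4(mu, a1, a2, a3, a4):
    """Claret (2000) non-linear four-parameter limb-darkening law I(mu)/I(1)."""
    return 1.0 - sum(a * (1.0 - mu ** (k / 2.0))
                     for k, a in enumerate((a1, a2, a3, a4), start=1))

def b_ell(ell, ld_coeffs):
    """Disk-averaging factor b_ell = int_0^1 h(mu) * mu * P_ell(mu) dmu (Eq. 5d)."""
    p_ell = Legendre.basis(ell)
    value, _ = quad(lambda mu: claret4(mu, *ld_coeffs) * mu * p_ell(mu), 0.0, 1.0)
    return value

# Invented limb-darkening coefficients for a single passband (illustration only).
coeffs = (0.6, -0.3, 0.5, -0.2)
for ell in range(4):
    print(ell, b_ell(ell, coeffs))   # the visibility factor generally drops with ell
```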
Next, the system of equations (4) for the four \(uvby\) passbands was solved for a given \(\ell\) and \((T_{\mathrm{eff}},\log g)\) to determine \(\bar{\varepsilon}\) and \(f\). We considered the degree \(\ell\) and the associated complex values of \(\bar{\varepsilon}\) and \(f\) as most probable if there is a clear minimum in the difference between the calculated and observed photometric amplitudes and phases. The goodness of the fit is measured by:
\[\chi^{2}=\frac{1}{2N-N_{P}}\sum_{i=1}^{N}\frac{\left|\mathcal{A}_{\lambda_{i}}^{obs}-\mathcal{A}_{\lambda_{i}}^{cal}\right|^{2}}{|\sigma_{\lambda_{i}}|^{2}}, \tag{6}\]
where the superscripts \(obs\) and \(cal\) denote the observed and calculated complex amplitude \(\mathcal{A}_{\lambda}=A_{\lambda}\mathrm{e}^{\mathrm{i}\varphi_{\lambda}}\), respectively. \(N\) is the number of passbands \(\lambda\) and \(N_{P}\) is the number of parameters to be determined. \(N_{P}=4\) because there are two complex parameters, \(\bar{\varepsilon}\) and \(f\). The observational errors \(\sigma_{\lambda}\) are computed as
\[|\sigma_{\lambda}|^{2}=\sigma^{2}(A_{\lambda})+A_{\lambda}^{2}\sigma^{2}( \varphi_{\lambda}), \tag{7}\]
where \(\sigma(A_{\lambda})\) and \(\sigma(\varphi_{\lambda})\) are the errors of the observed amplitude and phase in a passband \(\lambda\), respectively.
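In practice the system (4) is linear in the products \(\bar{\varepsilon}f\) and \(\bar{\varepsilon}\), so for each trial \(\ell\) it can be solved by complex least squares; a schematic version is shown below, where the observed amplitudes and phases are taken from Table 4 for the dominant mode of AE UMa but the \(\mathcal{D}\), \(\mathcal{E}\) coefficients and errors are invented, so the resulting \(\bar{\varepsilon}\), \(f\) and \(\chi^{2}\) are illustrative only.

```python
import numpy as np

def fit_epsilon_f(A_obs, D, E, sigma):
    """Solve A_lambda = eps*(D*f + E) in the least-squares sense by treating
    x1 = eps*f and x2 = eps as the complex unknowns; return eps, f and chi^2."""
    w = 1.0 / sigma
    design = np.column_stack([D, E]) * w[:, None]
    (x1, x2), *_ = np.linalg.lstsq(design, A_obs * w, rcond=None)
    eps, f = x2, x1 / x2
    n_pass, n_par = len(A_obs), 4       # two complex unknowns = 4 real parameters
    chi2 = np.sum(np.abs(A_obs - eps * (D * f + E)) ** 2 / sigma ** 2) / (2 * n_pass - n_par)
    return eps, f, chi2

# Complex amplitudes A_lambda = A*exp(i*phi) for the u, v, b, y passbands, combined
# with invented D, E coefficients and errors (placeholders, not model-atmosphere values).
A_obs = np.array([0.231, 0.294, 0.257, 0.211]) * np.exp(1j * np.array([4.301, 4.186, 4.191, 4.176]))
D = np.array([2.1, 1.8, 1.6, 1.4], dtype=complex)
E = np.array([-1.5, -1.3, -1.2, -1.1], dtype=complex)
sigma = np.array([0.002, 0.002, 0.002, 0.002])

eps, f, chi2 = fit_epsilon_f(A_obs, D, E, sigma)
print(eps, f, chi2)
```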
In Fig. 5, we show the values of the discriminant \(\chi^{2}\) as a function of \(\ell\) for the two frequencies of AE UMa; top panels are for \(\nu_{1}\) and bottom panels for \(\nu_{2}\). We considered several values of \(T_{\mathrm{eff}}\) and \(\log L/\mathrm{L}_{\sun}\) within the observed error box. We also checked the effect of the atmospheric metallicity [m/H] and microturbulent velocity \(\xi_{t}\). In the case of the dominant mode, for all values of \((\log T_{\mathrm{eff}},\ \log L/\mathrm{L}_{\sun})\) and all considered pairs of \(([\mathrm{m/H}],\ \xi_{t})\) the clear minimum of \(\chi^{2}\) is at \(\ell=0\). Thus, there is no doubt that \(\nu_{1}\) is a radial mode. For the second frequency, the minimum at \(\ell=0\) is not significantly smaller than at the other \(\ell\)'s. However, given that the visibility of pulsational modes decreases very rapidly with increasing \(\ell\), it is reasonable to assume the \(\ell=0\) identification for \(\nu_{2}\).
The identification of \(\ell\) of the two modes of RV Ari is presented in Fig. 6. Our photometric method clearly indicates that the dominant mode is radial. For the second frequency the identification
is not unambiguous. This is mostly because of the much larger errors in the photometric amplitudes and phases of RV Ari (about three times larger compared to AE UMa). This results from the lower number of observational data points and the fact that RV Ari is fainter than AE UMa. However, from the plot of \(\chi^{2}\) vs. \(\ell\) for \(\nu_{2}\), one can conclude that \(\ell=0,1,2,3\) are equally possible. Combining this fact with the period ratio and the largest visibility factor \(b_{\ell}\) (cf. Eq. 5d) for \(\ell=0\), it is safe to assume for further analysis that \(\nu_{2}\) is also a radial mode.
## 5 Complex seismic modelling
The complex seismic modelling consists in the simultaneous matching of pulsational frequencies and the corresponding values of the non-adiabatic parameter \(f\). The parameter \(f\) gives the relative amplitude of the radiative flux perturbation at the photospheric level. Its theoretical value for a given pulsational mode is obtained from non-adiabatic computations of stellar pulsations and it is complex because there is a phase shift between the radiative flux variation and the radius variation.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline star & frequency & \(A_{\rm u}\) & \(\phi_{\rm u}\) & \(A_{\rm v}\) & \(\phi_{\rm v}\) & \(A_{\rm b}\) & \(\phi_{\rm b}\) & \(A_{\rm y}\) & \(\phi_{\rm y}\) & \(N\) \\ & [d\({}^{-1}\)] & [mag] & [rad] & [mag] & [rad] & [mag] & [rad] & [mag] & [rad] & \\ \hline AE UMa & \(\nu_{1}\) & 0.2312(17) & 4.301(7) & 0.2941(16) & 4.186(5) & 0.2569(14) & 4.191(5) & 0.2112(15) & 4.176(7) & 229 \\ & \(\nu_{2}\) & 0.0411(16) & 4.892(40) & 0.0508(16) & 4.828(32) & 0.0429(14) & 4.841(33) & 0.0348(14) & 4.861(42) & \\ \hline RV Ari & \(\nu_{1}\) & 0.2402(42) & 2.181(19) & 0.3083(39) & 2.103(14) & 0.2657(34) & 2.093(14) & 0.2213(33) & 2.072(16) & 140 \\ & \(\nu_{2}\) & 0.0730(43) & 3.115(61) & 0.0909(40) & 3.018(45) & 0.0809(35) & 3.012(44) & 0.0613(34) & 3.048(56) & \\ \hline \end{tabular}
\end{table}
Table 4: The photometric amplitudes and phases in the \(uvby\) passbands for the two dominant frequencies of AE UMa and RV Ari determined from the data of Rodríguez et al. (1992). The last column gives the number of observational data points \(N\).
Figure 6: A similar figure to Fig. 5 but for the frequencies \(\nu_{1}=10.7377\) d\({}^{-1}\) and \(\nu_{2}=13.8985\) d\({}^{-1}\) of RV Ari.
In the case of \(\delta\) Sct stellar models, the theoretical values of \(f\) are very sensitive to the efficiency of convection in the outer layers and to opacity data (e.g., Daszynska-Daszkiewicz et al., 2003, 2023). By comparing the theoretical and empirical values of \(f\), one can get valuable constraints on the physical conditions inside the star. Thus, the parameter \(f\) is a seismic tool that carries information about the stellar interior and that is independent of and complementary to the pulsation frequencies.
The empirical values of \(f\) and \(\bar{\varepsilon}\) were determined from the amplitudes and phases in the \(uvby\) passbands using the method outlined in Sect. 4. In the case of radial modes \(Y_{\ell}^{m}(i,0)=1\) and we have the value of \(\varepsilon\) itself (cf. Eq. 5a), i.e., we can say what are the percentage changes in the radius caused by each pulsation mode (cf. Eq. 2). As before, we adopted Vienna model atmospheres (Heiter et al., 2002). Models with the microturbulent velocity \(\xi_{t}=4\) km s\({}^{-1}\) gave the smallest errors in the empirical values of \(\varepsilon\) and \(f\). Therefore, we adopted \(\xi_{t}=4\) km s\({}^{-1}\) whereas the atmospheric metallicity [m/H] was changed consistently with the metallicity \(Z\) in evolutionary computations.
We performed an extensive complex seismic modelling of AE UMa and RV Ari by fitting the two radial mode frequencies and the non-adiabatic parameter \(f\) for the dominant modes. In the case of both stars, the second modes had too low amplitudes to determine the empirical values of \(f\) with enough accuracy. To find seismic models, we used the Bayesian analysis based on Monte Carlo simulations. Our approach is shortly described in Appendix B and the details can be found in Daszynska-Daszkiewicz et al. (2022, 2023). Here, we just recall the adjustable parameters: mass \(M\), initial hydrogen abundance \(X_{0}\), metallicity \(Z\), initial rotational velocity \(V_{\rm rot,0}\), convective overshooting parameter \(\alpha_{\rm ov}\) and the mixing length parameter \(\alpha_{\rm MLT}\). Because only computations with the OPAL data give results consistent with the observational values of \((T_{\rm eff},\ L/L_{\sun})\)(Daszynska-Daszkiewicz et al., 2023), we adopted these opacities in all computations. In the lower temperature range, i.e., for \(\log T<3.95\), opacity data from Ferguson et al. (2005) were used.
Evolutionary computations were performed using the Warsaw-New Jersey code (e.g., Pamyatnykh, 1999). The code takes into account the mean effect of the centrifugal force, assuming solid-body rotation and constant global angular momentum during evolution. Because both stars are slow rotators, neglecting differential rotation is justified. Convection in the stellar envelope is treated in the framework of the standard mixing-length theory (MLT) and its efficiency is measured by the mixing length parameter \(\alpha_{\rm MLT}\). The solar chemical mixture was adopted from Asplund et al. (2009) and the OPAL2005 equation of state was used (Rogers et al., 1996; Rogers & Nayfonov, 2002). Overshooting from a convective core is implemented in the code according to Dziembowski & Pamyatnykh (2008). Their prescription takes into account both the distance of overshooting \(d_{\rm ov}=\alpha_{\rm ov}H_{p}\), where \(H_{p}\) is the pressure scale height and \(\alpha_{\rm ov}\) is a free parameter, and a hydrogen profile \(X(m)\) in the overshoot layer.
Non-adiabatic stellar pulsations were computed using the linear code of Dziembowski (1977). The code assumes that the convective flux does not change during pulsations, which is justified if convection is not very efficient in the envelope. The effects of rotation on pulsational frequencies are taken into account up to the second order in the framework of perturbation theory.
We calculated evolutionary and pulsational models for each set of randomly selected parameters \((M,\ X_{0},\ Z,\ V_{\rm rot,0},\ \alpha_{\rm MLT},\ \alpha_{\rm ov})\). The number of simulations was about 360 000 for each star. In the case of the initial hydrogen abundance \(X_{0}\), we assumed a beta function \(B(2,2)\) as a prior probability to limit its value to the reasonable range, i.e., from 0.65 to 0.75 with \(X_{0}=0.7\) as the most probable value. For the other parameters we used uninformative priors, i.e., uniform distributions. The vast majority of our seismic models of the two stars that have values of \((T_{\rm eff},\ L/L_{\sun})\) consistent with the observational determinations are already at the beginning of hydrogen-shell burning (HSB). Only a small fraction of seismic models with proper values of \((T_{\rm eff},\ L/L_{\sun})\) is in the overall contraction (OC) phase, at its very end. In all seismic models both radial modes, fundamental and first overtone, are unstable in both stars. In Table 5, we give the expected and median values of the determined parameters of the seismic models in the HSB phase for the two HADS stars. The errors in parentheses at the expected values are standard deviations. The median errors were estimated from the 0.16 and 0.84 quantiles, which correspond to one standard deviation from the mean value in the case of a normal distribution. Table 6 contains the same statistics for the seismic models in the OC phase. The corresponding corner plots and histograms for the HSB seismic models are presented in Appendix B, in Figs. B1-B4. The histograms for the OC seismic models look qualitatively similar. The two stars have a very similar position in the HR diagram, but RV Ari is more massive and has higher metallicity than AE UMa. The ages of the two HADS stars are quite similar and amount to about 1.6 Gyr, if the stars are in the HSB phase of evolution. Seismic models in the OC phase are about 100 Myr older in the case of AE UMa and about 30 Myr younger in the case of RV Ari.
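The prior set-up and the quantile-based uncertainties described above are easy to sketch. In the few lines of Python below, the Beta(2,2) prior on \(X_{0}\) over 0.65-0.75 and the 0.16/0.84-quantile errors follow the text; the uniform sampling ranges for the other parameters are illustrative placeholders, since the section does not quote them.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sim = 360_000  # roughly the number of simulations quoted per star

# Beta(2,2) prior for X0, rescaled to [0.65, 0.75] (most probable value 0.70)
x0 = 0.65 + 0.10 * rng.beta(2.0, 2.0, n_sim)

# Uninformative (uniform) priors; the ranges below are assumptions, not paper values.
mass      = rng.uniform(1.4, 1.8, n_sim)      # M [M_sun]
metal_z   = rng.uniform(0.008, 0.024, n_sim)  # Z
v_rot0    = rng.uniform(0.0, 60.0, n_sim)     # V_rot,0 [km/s]
alpha_ov  = rng.uniform(0.0, 0.4, n_sim)      # convective overshooting
alpha_mlt = rng.uniform(0.0, 2.0, n_sim)      # mixing length

# Each drawn set feeds the evolutionary + pulsation codes; accepted models are then
# summarized as in Tables 5 and 6: expected value +/- standard deviation, and the
# median with errors from the 0.16 and 0.84 quantiles (one sigma for a normal law).
def summarize(samples):
    lo, med, hi = np.quantile(samples, [0.16, 0.50, 0.84])
    return samples.mean(), samples.std(), med, med - lo, hi - med

print(summarize(mass))
```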
We found that convection in the outer layers of the two stars is not very efficient. For the HSB seismic models, the mixing length parameter amounts to about 0.4 for AE UMa and about 0.5 for RV Ari. The OC seismic models of AE UMa and RV Ari have \(\alpha_{\rm MLT}\) of about 0.3 and 0.6, respectively.
The most striking result is the very small overshooting from the convective core. This result may indicate that the overshooting parameter should depend on time (evolution) and, presumably, should scale with the mass and size of the convective core.
In Fig. 7, we show the structure of two representative complex seismic models of AE UMa. Seismic models of RV Ari have qualitatively the same structure. The left panels show the model in the HSB phase and the right panels the model in the OC phase. Both models have the following common parameters: \(X_{0}=0.70,\ \alpha_{\rm ov}=0.0\) and \(\alpha_{\rm MLT}=0.5\). The other parameters are: \(M=1.546M_{\sun},\ Z=0.0127,\ \log T_{\rm eff}=3.8566,\ \log L/L_{\sun}=1.067\) for the HSB model and \(M=1.576M_{\sun},\ Z=0.0135,\ \log T_{\rm eff}=3.8575,\ \log L/L_{\sun}=1.076\) for the OC model. In the top panels, we show the positions of these models on the HR diagram with the corresponding evolutionary tracks. The middle panels show the run of the main gradients (actual \(\nabla\), radiative \(\nabla_{\rm rad}\), adiabatic \(\nabla_{\rm ad}\)) and the mean Rosseland opacity \(\kappa\) inside each model. The local radiative and convective luminosities are presented in the bottom panels.
In the case of the HSB model, the small helium core has a size of 3% of the stellar radius and the interior up to \(\log(T/{\rm K})\approx 4.8\) is radiative. All of the energy comes from hydrogen burning in the shell, proceeding via the CNO cycle. The overproduction of energy in the shell, \(L_{r}/L>1\), is used for expansion of the envelope. The radiative gradient becomes very large around \(\log(T/{\rm K})=4.0\), where the actual gradient splits into two maxima corresponding to hydrogen ionization and the first ionization of helium. In this narrow layer the local convective luminosity becomes important. In the zone of the second helium ionization, \(\log(T/{\rm K})\approx 4.7\), where pulsational driving occurs, \(\nabla\) is only slightly larger than \(\nabla_{\rm ad}\) and the local convective luminosity is zero. In the case of the OC models, hydrogen is burned in a small convective core with a radius of about 3% of the stellar radius. The structure of the OC model above the core is very similar to that of the HSB model.
As we mentioned at the beginning of this section, from our analysis we also obtained the empirical values of the intrinsic mode amplitude \(\varepsilon\) for both radial modes. These values cannot be compared with theoretical predictions, because we use the linear theory, but they provide us with an estimate of the relative radius changes and the expected amplitude of radial velocity variations \(A(V_{\rm puls})\). In Table 7, we give the values of \(\varepsilon\) and the corresponding radial velocity amplitudes for the two radial modes of both stars. These are the modal values, i.e., the most frequently occurring values in the computed seismic models. As one can see, the radial fundamental modes of AE UMa and RV Ari cause radius changes of about 1.9% and 1.5%, respectively. In turn, these radius changes cause radial velocity variations with amplitudes of about 18 and 14 km s\({}^{-1}\), respectively. The first overtone modes have much smaller radius variations of about 0.2% and 0.3% for AE UMa and RV Ari, respectively. The amplitudes of the radial velocity variations caused by the first overtone modes are about 2.0 and 3.4 km s\({}^{-1}\) for AE UMa and RV Ari, respectively.
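As a rough consistency check of these numbers, a radial displacement \(\delta R/R=\varepsilon\cos(2\pi\nu t)\) implies a surface velocity amplitude of \(2\pi\nu\varepsilon R\); dividing by a disc-integration projection factor of roughly 1.4 brings this close to the tabulated \(A(V_{\rm puls})\). In the sketch below, the AE UMa radius is derived from the tabulated \(T_{\rm eff}\) and \(L\), while the dominant-mode frequency (about 11.63 d\({}^{-1}\), the well-known literature value) and the projection factor are assumptions for illustration rather than quantities taken from this paper.

```python
import numpy as np

R_SUN_KM, T_SUN = 6.957e5, 5772.0

# Tabulated HSB expected values for AE UMa (Table 5) and the dominant-mode amplitude
log_teff, log_l = 3.8613, 1.091
eps, nu_cd = 0.0189, 11.63                    # |epsilon| (Table 7) and nu_1 [d^-1] (assumed)

# Radius from L = 4*pi*R^2*sigma*Teff^4, in solar units
radius = np.sqrt(10.0**log_l) * (T_SUN / 10.0**log_teff) ** 2   # ~2.2 R_sun

omega = 2.0 * np.pi * nu_cd / 86400.0         # angular frequency [s^-1]
v_surface = eps * radius * R_SUN_KM * omega   # surface velocity amplitude [km/s]

p = 1.4                                        # assumed disc-integration projection factor
print(v_surface, v_surface / p)                # ~25 and ~18 km/s vs. 17.7 km/s in Table 7
```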
## 6 Including the Nonradial Mode of RV Ari into Seismic Modelling
Our Fourier analysis of the TESS space data confirmed that the third frequency of RV Ari, \(\nu_{3}=13.61183\) d\({}^{-1}\), proposed by Pocs et al. (2002), is a real and independent signal. This frequency can only be associated with a nonradial mode because of its proximity to the second frequency \(\nu_{2}=13.89137\) d\({}^{-1}\), which corresponds to the first overtone radial mode. The frequency \(\nu_{3}\) does not appear in the Rodriguez et al. (1992) data, so we cannot even try to identify its pulsational mode from the photometric amplitudes and phases. On the other hand, taking into account the very small amplitude of this frequency (more than 3 times smaller than the amplitude of \(\nu_{2}\)), we doubt that such an attempt would be successful. For this purpose, new time-series multi-colour photometry or spectroscopy is required. However, guided by the fact that the visibility of modes in photometry decreases rapidly with the degree \(\ell\), \(\nu_{3}\) is quite likely a dipole or quadrupole mode. Therefore, we make such a working hypothesis in this section.
In Table 8, we list the parameters of four HSB and one OC seismic models of RV Ari, together with the main characteristics of the dipole and quadrupole modes having frequencies closest to the observed value of \(\nu_{3}=13.61183\) d\({}^{-1}\). We provide the ratio of the kinetic energy in the gravity propagation zone to the total kinetic energy \(E_{\rm k,g}/E_{\rm k}\), the normalized instability parameter \(\eta\) and the Ledoux constant \(C_{\rm n\ell}\). As one can see, despite their quite high frequencies, in the case of the HSB seismic models the modes have a very strong
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline star & value & \(M\) & \(Z\) & \(X_{0}\) & \(\alpha_{\rm null}\) & age & \(V_{0,\rm total}\) & \(V_{\rm rot}\) & \(\log\)(\(T_{\rm eff}\)/K) & \(\log\)\(L\)/L\({}_{\sun}\) & \(\alpha_{\rm ov}\) \\ & [M\({}_{\sun}\)] & & & & [Gyr] & [km\(\cdot\)s\({}^{-1}\)] & [km\(\cdot\)s\({}^{-1}\)] & & & & \\ \hline AE UMa & EX & 1.532(52) & 0.0120(11) & 0.701(20) & 0.33(17) & 1.706(114) & 17.7(13.3) & 17.5(13.2) & 3.8578(69) & 1.070(31) & 0.060(36) \\ & Med & 1.535\({}^{+0.050}_{-0.08}\) & 0.0119\({}^{+0.0041}_{-0.0011}\) & 0.698\({}^{+0.026}_{-0.017}\) & 0.33\({}^{+0.16}_{-0.17}\) & 1.683\({}^{+0.158}_{-0.094}\) & 14.6\({}^{+18.0}_{-10.6}\) & 14.3\({}^{+18.0}_{-10.5}\) & 3.8587\({}^{+0.005}_{-0.0076}\) & 1.072\({}^{+0.027}_{-0.02}\) & 0.055\({}^{+0.049}_{-0.022}\) \\ \hline RV Ari & EX & 1.657(26) & 0.0180(11) & 0.692(12) & 0.59(7) & 1.528(49) & 18.6(15.4) & 18.1(15.0) & 3.8525(49) & 1.115(24) & 0.004(3) \\ & Med & 1.657\({}^{+0.027}_{-0.026}\) & 0.0180\({}^{+0.0012}_{-0.0011}\) & 0.693\({}^{+0.012}_{-0.015}\) & 0.59\({}^{+0.07}_{-0.07}\) & 1.520\({}^{+0.054}_{-0.041}\) & 16.5\({}^{+18.7}_{-12.4}\) & 15.9\({}^{+18.3}_{-11.9}\) & 3.8532\({}^{+0.0043}_{-0.0041}\) & 1.118\({}^{+0.021}_{-0.027}\) & 0.004\({}^{+0.003}_{-0.003}\) \\ \hline \end{tabular}
\end{table}
Table 6: The same as in Table 5 but for the **OC** seismic models of the stars AE UMa and RV Ari.
\begin{table}
\begin{tabular}{c c c c c} \hline & \multicolumn{2}{c}{\(\nu_{1}\)} & \multicolumn{2}{c}{\(\nu_{2}\)} \\ & \multicolumn{2}{c}{\(\ell=0,\ p_{1}\)} & \multicolumn{2}{c}{\(\ell=0,\ p_{2}\)} \\ \hline star & \(|\varepsilon|\) & \(A(V_{\rm puls})\) & \(|\varepsilon|\) & \(A(V_{\rm puls})\) \\ & & [km\(\cdot\)s\({}^{-1}\)] & & [km\(\cdot\)s\({}^{-1}\)] \\ \hline AE UMa & 0.0189(24) & 17.7(2.2) & 0.0017(6) & 2.0(7) \\ \hline RV Ari & 0.0153(4) & 13.9(4) & 0.0029(20) & 3.4(2.3) \\ \hline \end{tabular}
\end{table}
Table 7: The modal values of the empirical intrinsic mode amplitude \(|\varepsilon|\) (the fractional change in radius) and the resulting radial velocity amplitude due to pulsations for the two radial modes of AE UMa and RV Ari.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline star & value & \(M\) & \(Z\) & \(X_{0}\) & \(\alpha_{\rm null}\) & age & \(V_{0,\rm total}\) & \(V_{\rm rot}\) & \(\log\)(\(T_{\rm eff}\)/K) & \(\log\)\(L\)/L\({}_{\sun}\) & \(\alpha_{\rm ov}\) \\ & & [M\({}_{\sun}\)] & & & & [Gyr] & [km\(\cdot\)s\({}^{-1}\)] & [km\(\cdot\)s\({}^{-1}\)] & & & \\ \hline AE UMa & EX & 1.567(41) & 0.0130(10) & 0.698(17) & 0.43(17) & 1.587(73) & 18.4(11.5) & 18.4(11.5) & 3.8613(36) & 1.091(20) & 0.024(18) \\ & Med & \(1.564^{+0.049}_{-0.040}\) & 0.0131\({}^{+0.0010}_{-0.0009}\) & 0.697\({}^{+0.023}_{-0.018}\) & 0.44\({}^{+0.15}_{-0.18}\) & 1.574\({}^{+0.078}_{-0.057}\) & 17.9\({}^{+12.9}_{-12.5}\) & 17.8\({}^{+12.9}_{-12.5}\) & 3.8617\({}^{+0.0031}_{-0.0040}\) & 1.092\({}^{+0.020}_{-0.022}\) & 0.021\({}^{+0.020}_{-0.012}\) \\ \hline RV Ari & EX & 1.629(43) & 0.0178(18) & 0.689(20) & 0.53(7) & 1.565(72) & 23.9(15.5) & 23.4(15.2) & 3.8484(50) & 1.093(27) & 0.004(2) \\ & Med & \(1.631^{+0.037}_{-0.044}\) & 0.0179\({}^{+0.0020}_{-0.0021}\) & 0.685\({}^{+0.024}_{-0.015}\) & 0.53\({}^{+0.07}_{-0.08}\) & 1.552\({}^{+0.085}_{-0.099}\) & 22.1\({}^{+20.8}_{-15.1}\) & 21.5\({}^{+20.6}_{-14.7}\) & 3.8501\({}^{+0.0027}_{-0.0075}\) & 1.100\({}^{+0.017}_{-0.038}\) & 0.004\({}^{+0
gravity character, with \(E_{\rm k,g}\) greater than 70% of the total \(E_{\rm k}\) in all cases but one. Only in the case of the HSB model with \(M=1.652\,\rm M_{\sun}\) does the mode \(\ell=1,\ g_{5}\) have this ratio slightly below 0.5. In the case of the OC seismic model, \(\ell=1,\ g_{1}\) is almost a pure pressure mode but its frequency is quite far from \(\nu_{3}\). The mode \(\ell=2,\ g_{4}\) is mixed, with \(E_{\rm k,g}\) of about 60%. All modes are pulsationally unstable.
In Fig. 8, we show the propagation diagram (top panels) and the distribution of the kinetic energy density of the dipole and quadrupole modes (lower panels) for the seismic models indicated with an asterisk in Table 8 (the 2nd and 5th model). All quantities are plotted as a function of the fractional radius on a logarithmic scale. The 2nd model is in the phase of hydrogen-shell burning whereas the 5th model is in the overall contraction phase. The Lamb frequency \(L_{\ell}\) is depicted for \(\ell=1\) and 2. The horizontal line in the top panels corresponds to the observed frequency \(\nu_{3}=13.61183\,\rm d^{-1}\). In the case of the HSB model, the maximum of the Brunt–Väisälä frequency \(N^{2}\) occurs at the edge of the small helium core. The small convective core of the OC model is precisely defined by \(N^{2}<0\). The core edges are marked with vertical lines. The middle and bottom panels show the kinetic energy density for the modes \(\ell=1\) and \(\ell=2\), respectively. As one can see, the kinetic energy density of both modes of the HSB model is large and strongly concentrated within the helium core of size \(0.03R\). Modes with such a property have a very strong potential for probing near-core conditions and
Figure 7: The structure of two seismic models of AE UMa. Their positions on the HR diagram are indicated in the top panels. The models reproduce the two radial mode frequencies and the non-adiabatic parameter \(f\) for the dominant mode. The left panels present the model in the HSB phase and the right panels the model in the overall contraction phase. The middle panels show the run of the three main gradients and the mean Rosseland opacity inside the models, whereas the bottom panels show the local radiative and convective luminosities.
chemical composition. In the case of the OC model, the mode \(\ell=1,g_{1}\) is almost a pure pressure mode and its kinetic energy is concentrated in the outer layers. The mixed mode \(\ell=2,g_{4}\) has its kinetic energy concentrated within the chemical gradient zone.
In the next step, we constructed seismic models that simultaneously fit the two radial mode frequencies, the complex parameter \(f\) of the dominant mode and \(\nu_{3}\) as a dipole axisymmetric mode. Again, the Bayesian analysis based on Monte Carlo simulations was applied. We obtained very narrow ranges of the determined parameters \(M\), \(X_{0}\), \(Z\), with errors about 2 to 5 times smaller than in Sect. 5, whereas the values of the overshooting and mixing length parameters are again around 0.0 and 0.5, respectively, with a similar uncertainty. Interestingly, in the case of the HSB seismic models our simulations converge to two solutions for \((M,\ X_{0},\ Z)\) with the following expected values:
* \(M=1.644(7)\,\mathrm{M}_{\sun},\ X_{0}=0.689(10),\ Z=0.0186(4)\)
* \(M=1.626(8)\,\mathrm{M}_{\sun},\ X_{0}=0.696(7),\ Z=0.0164(3)\)
This dichotomy is most evident for the metallicity \(Z\). In Appendix B, in Fig. 11, we plot the metallicity \(Z\) as a function of the model number. As one can see, independently of the starting value, the simulations converge only to the two values of \(Z\) given above. In the first solution the dipole mode is always \(g_{7}\) and in the second solution it is always \(g_{8}\).
For OC seismic models we got one solution with a higher \(Z\):
* \(M=1.640(10)\,\mathrm{M}_{\sun},\ X_{0}=0.693(7),\ Z=0.0181(5),\)
and the dipole mode is \(g_{4}\). As in the case of fitting the two radial modes, the OC seismic models that also reproduce the third frequency \(\nu_{3}\) as a dipole mode are a definite minority.
An interesting result is that the obtained range of the rotational velocity differs significantly between the HSB and OC seismic models. For the HSB models, the expected value of the current rotational velocity is in the range \(V_{\mathrm{rot}}\in(5,\ 46)\,\mathrm{km}\,\mathrm{s}^{-1}\), whereas for the OC models we obtained the range \(V_{\mathrm{rot}}\in(32,\ 47)\,\mathrm{km}\,\mathrm{s}^{-1}\). Thus, if we had independent information on the rotation rate, e.g., from the rotational splitting of dipole modes, then perhaps a choice between the HSB and OC seismic models would be possible.
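For reference, to first order in rotation the components of a multiplet are split by \((1-C_{\rm n\ell})\nu_{\rm rot}\) about the \(m=0\) frequency (the standard Ledoux formula; this first-order expression is an assumption of the sketch below, not a computation from the paper). For the asterisked HSB model in Table 8, the expected dipole splitting would be:

```python
# First-order Ledoux splitting: nu_m ≈ nu_0 + m * (1 - C_nl) * nu_rot
nu_0, c_nl, nu_rot = 13.70427, 0.432, 0.203   # d^-1; 2nd (asterisked HSB) model of Table 8

split = (1.0 - c_nl) * nu_rot                 # ~0.115 d^-1
for m in (-1, 0, 1):
    print(m, nu_0 + m * split)                # the m = -1, 0, +1 component frequencies
```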
## 7 Summary
We presented the analysis of TESS space data and detailed complex seismic modelling of the two high-amplitude \(\delta\) Scuti stars AE UMa and RV Ari. The Fourier analysis of the TESS light curves revealed the two well-known frequencies of each HADS. An important result is the confirmation of the third frequency of RV Ari, previously detected in ground-based photometry.
The ratio of two dominant frequencies of AE UMa and RV Ari strongly indicates that two radial modes, fundamental and first overtone, are excited in both stars. We verified this hypothesis using the method of mode identification based on the multi-colour photometric amplitudes and phases.
Our seismic modelling of the two HADS stars consisted of simultaneously fitting the two radial mode frequencies as well as the complex amplitude of the relative bolometric flux variations of the dominant mode, the so-called parameter \(f\). To this end, a Bayesian analysis based on Monte Carlo simulations was used. Our extensive seismic modelling allowed us to constrain the global parameters as well as the free parameters. The mixing length parameter \(\alpha_{\mathrm{MLT}}\), which describes the efficiency of envelope convection, amounts to about 0.3\(-\)0.6. Determination of this narrow range of \(\alpha_{\mathrm{MLT}}\) was possible only due to the inclusion of the parameter \(f\) in the seismic modelling. All models are in the post-main-sequence phase of evolution; however, the question of whether it is the HSB or OC phase cannot be unequivocally resolved. On the other hand, the HSB seismic models account for the vast majority, so it can be assumed that this phase is much more likely.
An interesting result is the very small value of the overshooting parameter \(\alpha_{\mathrm{ov}}\), which describes the amount of mixing at the edge of the convective core during the main-sequence and overall contraction phases. This result may be due to the fact that \(\alpha_{\mathrm{ov}}\) is assumed to be constant during evolution. Presumably, it should depend on time and scale with the shrinking convective core.
The third frequency of RV Ari can only be associated with a nonradial mode because of its proximity to the second frequency, which is the first overtone radial mode. It is very likely a dipole or quadrupole mode because of the disk averaging effect in photometric amplitudes. We made the working hypothesis that \(\nu_{3}\) is a dipole axisymmetric mode and repeated the seismic modelling. Thus, we fitted the three frequencies and the parameter \(f\) for the dominant
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c} \hline \(M\) & \(Z\) & \(X_{0}\) & \(\alpha_{\rm MLT}\) & \(\log(T_{\rm eff}/\mathrm{K})\) & \(\log L/\mathrm{L}_{\sun}\) & age & phase & \(\nu_{\rm rot}\) & \(\nu_{\rm puls}\) & mode & \(E_{\rm k,g}/E_{\rm k}\) & \(\eta\) & \(C_{\rm n\ell}\) \\ \([\mathrm{M}_{\sun}]\) & & & & & & [Gyr] & & [\(\mathrm{d}^{-1}\)] & [\(\mathrm{d}^{-1}\)] & & & & \\ \hline
1.590 & 0.015 & 0.70 & 0.55 & 3.8469 & 1.081 & 1.6435 & HSB & 0.206 & 13.69773 & \(\ell=1,\ g_{7}\) & 0.82 & 0.090 & 0.426 \\ & & & & & & & & & 13.87880 & \(\ell=2,\ g_{12}\) & 0.77 & 0.089 & 0.130 \\ \hline
1.624* & 0.016 & 0.70 & 0.55 & 3.8485 & 1.093 & 1.5973 & HSB & 0.203 & 13.70427 & \(\ell=1,\ g_{6}\) & 0.83 & 0.089 & 0.432 \\ & & & & & & & & 13.44373 & \(\ell=2,\ g_{12}\) & 0.76 & 0.090 & 0.147 \\ \hline
1.652 & 0.017 & 0.70 & 0.55 & 3.8491 & 1.100 & 1.5684 & HSB & 0.202 & 14.16947 & \(\ell=1,\ g_{5}\) & 0.43 & 0.088 & 0.216 \\ & & & & & & & & 13.35265 & \(\ell=2,\ g_{11}\) & 0.76 & 0.090 & 0.151 \\ \hline
1.640 & 0.019 & 0.68 & 0.64 & 3.8493 & 1.098 & 1.5205 & HSB & 0.210 & 13.37437 & \(\ell=1,\ g_{5}\) & 0.90 & 0.089 & 0.468 \\ & & & & & & & & 13.34091 & \(\ell=2,\ g_{10}\) & 0.75 & 0.089 & 0.152 \\ \hline
1.557* & 0.018 & 0.68 & 0.50 & 3.8274 & 0.995 & 1.7090 & OC & 0.399 & 14.3346 & \(\ell=1,\ g_{1}\) & 0.07 & 0.066 & 0.030 \\ & & & & & & & & 13.2824 & \(\ell=2,\ g_{4}\) & 0.61 & 0.076 & 0.144 \\ \hline \end{tabular}
\end{table}
Table 8: Examples of seismic models of RV Ari with the characteristics of the nonradial modes \(\ell=1,2\) having frequencies closest to the observed value \(\nu_{3}=13.6118\,\mathrm{d}^{-1}\). All seismic models have \(\alpha_{\mathrm{ov}}=0.0\) and were computed with the OPAL opacities. The 8th column indicates the evolutionary phase. The last three columns contain the ratio of the kinetic energy in the gravity propagation zone to the total kinetic energy, the instability parameter \(\eta\) and the Ledoux constant. The models indicated with asterisks are discussed further in the text.
frequency. Including the nonradial mode enormously tightened the constraints, in particular on the global stellar parameters. As before, the number of seismic models in the HSB phase is much larger. Moreover, the HSB and OC seismic models have different ranges of the rotational velocity. Thus, perhaps independent information on the rotation would finally allow us to decide between these two phases of evolution.
## Acknowledgements
The work was financially supported by the Polish NCN grant 2018/29/B/ST9/02803. Calculations have been partly carried out using resources provided by Wroclaw Centre for Networking and Supercomputing ([http://www.wcss.pl](http://www.wcss.pl)), grant No. 265. This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA's Science Mission Directorate. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.
## Data availability
The TESS data are available from the NASA MAST portal [https://archive.stsci.edu/](https://archive.stsci.edu/). The ASAS observations are available at the website of [http://www.astrouw.edu.pl/asas](http://www.astrouw.edu.pl/asas). Theoretical computations will be shared on reasonable request to the corresponding author.
Figure 8: Top panels: the propagation diagram for the HSB (left) and OC (right) seismic models of RV Ari with the parameters given in the 2nd and 5th line of Table 8. The horizontal line indicates the value of the observed frequency \(\nu_{3}=13.61183\,\mathrm{d}^{-1}\). Lower panels: the distribution of the kinetic energy density for nonradial modes listed in Table 8. The vertical lines mark the edge of a core. |
2309.11133 | Shape Anchor Guided Holistic Indoor Scene Understanding | This paper proposes a shape anchor guided learning strategy (AncLearn) for
robust holistic indoor scene understanding. We observe that the search space
constructed by current methods for proposal feature grouping and instance point
sampling often introduces massive noise to instance detection and mesh
reconstruction. Accordingly, we develop AncLearn to generate anchors that
dynamically fit instance surfaces to (i) unmix noise and target-related
features for offering reliable proposals at the detection stage, and (ii)
reduce outliers in object point sampling for directly providing well-structured
geometry priors without segmentation during reconstruction. We embed AncLearn
into a reconstruction-from-detection learning system (AncRec) to generate
high-quality semantic scene models in a purely instance-oriented manner.
Experiments conducted on the challenging ScanNetv2 dataset demonstrate that our
shape anchor-based method consistently achieves state-of-the-art performance in
terms of 3D object detection, layout estimation, and shape reconstruction. The
code will be available at https://github.com/Geo-Tell/AncRec. | Mingyue Dong, Linxi Huan, Hanjiang Xiong, Shuhan Shen, Xianwei Zheng | 2023-09-20T08:30:20Z | http://arxiv.org/abs/2309.11133v1 | # Shape Anchor Guided Holistic Indoor Scene Understanding
###### Abstract
This paper proposes a shape anchor guided learning strategy (AncLearn) for robust holistic indoor scene understanding. We observe that the search space constructed by current methods for proposal feature grouping and instance point sampling often introduces massive noise to instance detection and mesh reconstruction. Accordingly, we develop AncLearn to generate anchors that dynamically fit instance surfaces to (i) unmix noise and target-related features for offering reliable proposals at the detection stage, and (ii) reduce outliers in object point sampling for directly providing well-structured geometry priors without segmentation during reconstruction. We embed AncLearn into a reconstruction-from-detection learning system (AncRec) to generate high-quality semantic scene models in a purely instance-oriented manner. Experiments conducted on the challenging ScanNetv2 dataset demonstrate that our shape anchor-based method consistently achieves state-of-the-art performance in terms of 3D object detection, layout estimation, and shape reconstruction. The code will be available at [https://github.com/Geo-Tell/AncRec](https://github.com/Geo-Tell/AncRec).
## 1 Introduction
Holistic indoor scene understanding from partial observations (e.g., single-view images or 3D scans) is a comprehensive task that provides 3D semantic scene models for indoor applications. Early works studied this task with a reconstruction-from-detection framework that recovers the geometries of room structures and objects from the corresponding detections in a separate way. Later, end-to-end learning methods were proposed to simultaneously perform layout estimation, object detection, and shape prediction in one forward pass for semantic scene reconstruction [27, 37, 23]. With the recent success of point-based 3D detection and instance reconstruction, surging interest has been witnessed in detecting and modeling objects directly from sparse point clouds [28, 33].
Benefiting from the use of rich geometry information, current scan-based deep methods have improved the performance of semantic scene reconstruction. However, two issues still bottleneck high-quality semantic reconstruction: (1) noisy instance feature learning at the detection phase, and (2) the difficulty of retrieving instances from sparse point clouds for reconstruction.
At the detection phase, features must be grouped for instance representation learning. The ball query [30, 31] and 3D convolution [39, 20] are two basic operations for point feature grouping, but they often mix massive noise with informative features due to their fixed grouping range,
Figure 1: Comparison between different feature grouping strategies. (a) The original scene model. (b)-(e) The different feature grouping operations all suffer from the issue of confusing non-target noise with useful features. (f) The proposed shape anchor guided grouper directly generates anchors at the instance surface to merge instance-related features, which largely alleviates noise interference.
as shown in Fig. 1 (b) and (c). To compensate for the deficiency caused by the fixed range, VoteNet [14] and its variants [34, 29, 38, 35] adopt a voting strategy to cluster object features by moving surface points towards object centers. Albeit more flexible than the basic operations, the voting-based strategy often generates an unconstrained grouping area that brings in numerous outliers, as illustrated in Fig. 1 (d). BRNet [10] hence restricts the grouping space by sampling around representative points given by coarse box proposals. Nevertheless, limited by the box-like grouping area, the sampled points can still fall far beyond the targets when the objects are irregularly shaped, as depicted in Fig. 1 (e).
At the reconstruction stage, the retrieval of outlier-free object points is a prerequisite for object recovery. Due to the noise introduced by feature grouping in the previous detection phase, the points grouped for localizing objects can hardly serve as ideal reconstruction priors as indicated by Fig. 1. Consequently, the current methods are forced to employ an additional foreground classifier [28] or replace the detector with a complex instance segmentation backbone to sample object points from the raw scans [33]. However, the existence of numerous non-target outliers in the search space challenges instance segmentation, resulting in increased risks of gluing different instances and misclassifying background points.
Based on the discussion above, the two noise interference issues during detection and reconstruction are actually highly coupled and can be resolved together as long as outliers are excluded during feature grouping. To this end, we propose a shape anchor-guided learning strategy (_AncLearn_) that generates surface anchors to fit the feature grouping areas to object shape distributions, as displayed in Fig. 1 (f). With the geometry constraint provided by the surface anchors, the strategy can merge local target-focused features for predicting reliable object proposals and construct a shape-aware search space for robustly sampling instance points without segmentation during reconstruction. The proposed anchor-guided learning strategy can be easily embedded into an end-to-end learning system to accomplish object detection, layout estimation, and instance reconstruction for holistic scene understanding. The main contributions are summarized as follows:
* We present a shape anchor guided learning strategy to simultaneously address the issues of noisy feature learning in detection and instance point retrieval during reconstruction.
* We embed the proposed anchor-guided learning strategy into an end-to-end learning system to accomplish object detection, layout estimation, and instance reconstruction for holistic scene understanding in a purely instance-oriented way.
* Extensive experiments demonstrate that our AncRec achieves high-quality semantic scene reconstruction with state-of-the-art performance in instance detection and mesh prediction on the challenging ScanNetv2 dataset [12] (some ground truths provided by Scan2CAD [1] and SceneCAD [3]).
## 2 Related Work
Semantic scene understanding has been extensively studied over the past years. Many methods focused on acquiring the semantics of 3D scenes [22, 15, 8], while others recovered scene geometries by scene completion [18, 13, 36]. Recently, increasing interest has emerged in semantic scene reconstruction, which recovers both the semantics and geometric shapes of objects. By treating semantic reconstruction as a problem of holistic scene understanding, promising progress has been made based on a reconstruction-from-detection principle [28]. In the following, we review the research from the two core aspects of the reconstruction-from-detection pipeline,, 3D object detection and scene-aligned instance reconstruction.
### 3D Object Detection
Over the past years, deep detectors have gained great success in 2D object detection [40, 25]. The progress of 2D deep detection has inspired the development of deep learning techniques for recognizing objects from scene point clouds. Compared to image data, point clouds provide rich surface geometry clues for locating objects in real scenes. Nevertheless, the sparse, irregular, and orderless characteristics of point clouds make them hard to be handled by the grid-based convolution model.
Early works leveraged 2D proposals as 3D detection constraints or projected point clouds into regular 2D/3D grids. Although these 3D detectors are applicable, their performance is still limited by the 2D detectors and by the geometric details lost in projection. To directly learn rich geometry features from the raw points, PointNet [30] and PointNet++ [31] used a ball query operation for grouping point features. Later, PointRCNN [32] adopted the point-wise features extracted by PointNet++ for producing instance proposals from point clouds. Hindered by the fixed feature grouping range of grid-based convolutions or the ball query operation, these methods often mix massive noise with informative features, which impairs the reliability of proposals. Considering that the observed object surface points usually lie far away from the object centers, VoteNet [14] introduced a voting mechanism for proposal generation. Based on the voting mechanism, numerous variants further refined proposal features with context relationships [34], hierarchical clues [35], and hybrid geometric primitives [38]. However, the issue of outliers in votes still blocks the learning of representative box features. BRNet [10] derived proposal features based on generated virtual points given by coarse box proposals,
whereas the virtual points can fall into non-target areas and bring noise. In this paper, we directly merge features anchored on target surfaces into robust proposal representations for indoor instance learning.
### Scene-aligned Instance Reconstruction
Scene-aligned instance reconstruction requires not only modeling 3D object shapes but also correctly arranging the shapes in the 3D scene space. Early on, the semantic model of an indoor scene was constructed with retrieval techniques, which search for CAD shapes or geometric primitives in an offline database and align the approximate object models to the input scene data [1, 2, 17, 24]. Albeit able to present delicate scene models, retrieval-based methods lack the ability to generalize to various scenes due to inefficient inference and the limited database scale [37].
The promising advances in deep learnable shape representations motivated scene-aligned instance reconstruction without a finite model pool [16, 11]. Many prior works generate 3D objects with explicit or implicit representations from learned features derived from 2D recognition results [19, 27, 37, 23]. Unlike previous single-view modeling methods, RevealNet [21] and the following works [5, 4] predicted the occupancy grids of semantic instances with 3D features extracted from voxelized RGB-D scans. To reach a scene resolution that is hard to achieve with volumetric representations, RfD-Net [28] and DIMR [33] extracted instance points from scans with segmentation techniques for subsequent single-object reconstruction with deep implicit functions [26, 9]. Because of the existence of non-target points, these two state-of-the-art methods have to use an extra foreground classifier or a complex instance segmentation backbone to sample object points for reconstruction. In this paper, we utilize the previously generated surface anchors to localize instance points in scans. Thereby, we introduce an easy but effective anchor-guided sampling strategy that offers geometry priors for the subsequent instance reconstruction without an extra segmentation stage.
## 3 Method
We illustrate the proposed AncRec in Fig. 2. AncRec achieves semantic scene reconstruction with the proposed anchor-guided learning strategy in an instance-oriented way. It first localizes objects and walls via a dual-branch detector that is equipped with shape anchors learned for grouping point features at the detection stage. The object shape anchors are subsequently leveraged to sample instance geometry priors for predicting object meshes. Finally, the complete semantic scene model is reconstructed by arranging object meshes in the post-processed room layout with alignment to the parsed bounding boxes and poses. In the following, we elaborate on how the shape anchors are learned and leveraged to address the noise interference during instance detection and mesh prediction for high-quality semantic scene reconstruction.
### Shape Anchor-guided Instance Detection
We modify the VoteNet into a dual-branch instance detector to simultaneously localize objects and walls in one
Figure 2: Overview of the AncRec framework. At the detection stage, seed point features for walls and objects are first learned with a modified dual-branch PointNet++. Following each branch, a voting module and an anchor-guided feature grouper respectively generate proposal features \(\mathbf{f}^{\text{vote}}_{\text{obj/wall}}\) and \(\mathbf{f}^{\text{anchor}}_{\text{obj/wall}}\), which are fed into the decoders to predict object bounding boxes and wall quads. The room layout is then constructed by processing the wall quads into connected corners. At the reconstruction stage, objects with high objectness scores are reconstructed under the guidance of \(\mathbf{f}^{\text{vote}}_{\text{obj}}\), \(\mathbf{f}^{\text{anchor}}_{\text{obj}}\), and the geometry priors sampled by shape anchors. Finally, object models are arranged in the scene with the predicted layout according to the spatial alignment provided by predicted bounding boxes.
forward pass (Fig. 2). Although the voting mechanism of VoteNet is workable for generating proposals, the reliability of voting-based predictions is often impaired by the outliers in votes. We hence design an anchor-guided feature grouper to learn target-focused features for robust instance detection.
#### 3.1.1 Anchor-guided Feature Grouper
We illustrate the mechanism of the anchor-guided feature grouper in Fig. 3. In the following, by taking the object detection branch as an example, we describe how it works with the voting module in three steps to predict the oriented 3D bounding boxes.
**Step 1: Anchor Generation.** Given the \(i^{th}\) voting-based proposal feature \(\mathbf{f}_{i}^{\text{vote}}\in\mathbb{R}^{128}\), the anchor-guided grouper first generates shape anchors that depict the geometry of the \(i^{th}\) object candidate via template deformation. With \(N\) initial anchors \(A=\big{\{}\mathbf{a}_{j}\in\mathbb{R}^{3}|j=1,...,N\big{\}}\) uniformly selected from a unit ball surface, a deformation layer is applied to translate the initial anchors to the \(i^{th}\) candidate object surface by
\[\hat{\mathbf{a}}_{j}=\mathbf{a}_{j}+\underbrace{\text{Tanh}(\text{MLP}([\mathbf{a}_{j},\mathbf{f}_{i}^{ \text{vote}}]))}_{\text{deformation offset }\Delta\mathbf{a}_{j}}+\mathbf{c}_{i}. \tag{1}\]
As indicated by Eq. (1), the deformation layer moves \(\mathbf{a}_{j}\) with offsets \(\Delta\mathbf{a}_{j}\) inferred by a multi-layer perceptron (MLP) and a Tanh activation function and then places the deformed anchors at the candidate center \(\mathbf{c}_{i}\) clustered by the voting module. The training of the deformation layer is supervised by the Chamfer distance loss defined as
\[\begin{split}\mathcal{L}_{\text{anchor}}=&\frac{1}{| P_{\text{gt}}|}\sum_{\mathbf{p}\in P_{\text{gt}}}\min_{\hat{\mathbf{a}}\in\hat{A}}\| \mathbf{p}-\mathbf{\hat{a}}\|_{2}^{2}\\ &+\frac{1}{|\hat{A}|}\sum_{\hat{\mathbf{a}}\in\hat{A}}\min_{\mathbf{p}\in P _{\text{gt}}}\|\hat{\mathbf{a}}-\mathbf{p}\|_{2}^{2},\end{split} \tag{2}\]
where \(\hat{A}\) and \(P_{\text{gt}}\) denote the sets of deformed anchors and of surface points sampled from the corresponding scene-aligned 3D meshes. Thereby, the anchors are learned to fit the \(i^{th}\) object surface, and the clustering center \(\mathbf{c}_{i}\) is also refined with the shape constraint provided by Eq. (2). Moreover, due to the supervision with complete surface points, the shape anchors can assist in exploiting context for recovering unobserved structures, e.g., the missing object bottom that is supported by the observed floor areas.
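A minimal PyTorch sketch of the deformation layer (Eq. 1) and its Chamfer supervision (Eq. 2) is given below. The MLP width, the Fibonacci-sphere template, and the exact tensor shapes are assumptions for illustration; the paper only specifies a 128-d proposal feature and, in Sec. 4.1.3, N = 18 anchors.

```python
import math
import torch
import torch.nn as nn

def fibonacci_sphere(n):
    """Roughly uniform template points on the unit sphere (one possible template)."""
    i = torch.arange(n, dtype=torch.float32) + 0.5
    phi = torch.acos(1.0 - 2.0 * i / n)                  # polar angle
    theta = math.pi * (1.0 + 5.0 ** 0.5) * i             # golden-angle azimuth
    return torch.stack([theta.cos() * phi.sin(),
                        theta.sin() * phi.sin(),
                        phi.cos()], dim=-1)              # (n, 3)

class AnchorDeformer(nn.Module):
    """Deforms N template anchors toward a candidate object surface (Eq. 1)."""
    def __init__(self, feat_dim=128, num_anchors=18, hidden=128):
        super().__init__()
        self.register_buffer("template", fibonacci_sphere(num_anchors))
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, 3), nn.Tanh())             # bounded offsets, as in Eq. 1

    def forward(self, f_vote, center):
        # f_vote: (B, feat_dim) voting-based proposal features; center: (B, 3)
        B, N = f_vote.shape[0], self.template.shape[0]
        a = self.template.unsqueeze(0).expand(B, N, 3)
        x = torch.cat([a, f_vote.unsqueeze(1).expand(B, N, -1)], dim=-1)
        return a + self.mlp(x) + center.unsqueeze(1)     # (B, N, 3) deformed anchors

def chamfer_loss(anchors, gt_points):
    """Symmetric Chamfer distance of Eq. 2 between anchors and sampled mesh points."""
    d = torch.cdist(gt_points, anchors) ** 2             # (B, M, N) squared distances
    gt_to_anchor = d.min(dim=2).values.mean(dim=1)       # average over P_gt
    anchor_to_gt = d.min(dim=1).values.mean(dim=1)       # average over the anchors
    return (gt_to_anchor + anchor_to_gt).mean()
```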
**Step 2: Feature Grouping.** The anchor-guided grouper propagates the seed point features (extracted by the dual-branch PointNet++ backbone) to each anchor via interpolation followed by an MLP. As the deformed anchors are mainly located on the object surface, the seed point features for propagation can be reliably selected from target-related areas. The \(i^{th}\) noise-reduced proposal representation \(\mathbf{f}_{i}^{\text{anchor}}\in\mathbb{R}^{128}\) is obtained by averaging the anchor features.
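The interpolation step itself is not spelled out in the text; a common instantiation (and therefore an assumption here) is PointNet++-style inverse-distance weighting over the k nearest seed points, after which a shared MLP and a per-proposal mean would yield the 128-d \(\mathbf{f}_{i}^{\text{anchor}}\):

```python
import torch

def propagate_to_anchors(anchor_xyz, seed_xyz, seed_feat, k=3, eps=1e-8):
    """Inverse-distance weighted k-NN interpolation of seed features onto anchors.
    anchor_xyz: (B, N, 3), seed_xyz: (B, S, 3), seed_feat: (B, S, C) -> (B, N, C)."""
    d = torch.cdist(anchor_xyz, seed_xyz)                      # (B, N, S)
    d_k, idx = d.topk(k, dim=-1, largest=False)                # k nearest seeds
    w = 1.0 / (d_k + eps)
    w = w / w.sum(dim=-1, keepdim=True)                        # normalized weights
    feat = seed_feat.unsqueeze(1).expand(-1, anchor_xyz.shape[1], -1, -1)
    idx = idx.unsqueeze(-1).expand(-1, -1, -1, seed_feat.shape[-1])
    neighbors = torch.gather(feat, 2, idx)                     # (B, N, k, C)
    anchor_feat = (w.unsqueeze(-1) * neighbors).sum(dim=2)     # per-anchor features
    # a shared MLP followed by a mean over the N anchors would give f_i^anchor
    return anchor_feat
```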
**Step 3: Prediction Fusion.** We employ two decoders to predict the object category and oriented box parameters from \(\mathbf{f}_{i}^{\text{vote}}\) and \(\mathbf{f}_{i}^{\text{anchor}}\) respectively for the \(i^{th}\) candidate. This is based on the consideration that the voting-based proposal feature \(\mathbf{f}_{i}^{\text{vote}}\) contains more contextual information while the anchor-guided proposal feature \(\mathbf{f}_{i}^{\text{anchor}}\) is more target-focused. The estimated parameters, denoted as \(\mathbf{\Theta}_{i}^{\text{vote}}\) and \(\mathbf{\Theta}_{i}^{\text{anchor}}\), are averaged with learnable weights to obtain the final object parameters:
\[\mathbf{\Theta_{i}}=\mathbf{w_{1}}\cdot\mathbf{\Theta}_{i}^{\text{vote}}+\mathbf{w_{2}}\cdot \mathbf{\Theta}_{i}^{\text{anchor}}. \tag{3}\]
From the three steps above, the attributes of objects are parsed with robustness to noise. The anchor-guided detection of wall instances works in a similar way. Considering that walls are generally connected to each other, we additionally deploy the attention operation used in [35] to enhance the anchor-guided wall proposal features with the strong relationship between walls for precise layout estimation.
#### 3.1.2 The Detection Training Loss
The total loss function of the anchor-guided instance detector is defined as
\[\mathcal{L}=\mathcal{L}_{\text{obj}}+\mathcal{L}_{\text{wall}}+\mathcal{L}_{ \text{obj}}^{\text{anchor}}+\mathcal{L}_{\text{wall}}^{\text{anchor}}. \tag{4}\]
In Eq. (4), \(\mathcal{L}_{\text{obj}}\) is the object detection loss given by [14] while \(\mathcal{L}_{\text{wall}}\) is the wall quad loss used in [7]. \(\mathcal{L}_{\text{obj}}^{\text{anchor}}\) and \(\mathcal{L}_{\text{wall}}^{\text{anchor}}\) are the chamfer distance losses that supervise the anchor deformation learning in terms of positive object and wall candidates.
### Anchor-guided Object Reconstruction
Instance points from the input scan are desirable geometry priors for reconstruction. Previous works utilized segmentation operations for instance point sampling. Despite
Figure 3: The anchor-guided feature grouper. The anchor-guided proposal feature \(\mathbf{f}_{i}^{\text{anchor}}\) is the average of features at shape anchors. The shape anchors are deformed from initial anchors \(\{\mathbf{a}_{j}\}\) with offsets derived from the concatenation of the voting-based proposal feature \(\mathbf{f}_{i}^{\text{vote}}\) and \(\{\mathbf{a}_{j}\}\). The anchors are then translated to the cluster center \(\mathbf{c}_{i}\).
being applicable, accurate instance segmentation is difficult due to the existence of massive non-target noise in the search space. In contrast, we take advantage of the shape anchors to construct a shape-aware search space, in which object points can be directly localized with little noise interference. Fig. 4 illustrates the workflow of our anchor-guided instance point sampling.
We first add the shape anchors of positive proposals to the original scene scan to enhance the structural information of indoor objects. From the enhanced scan, we select points that lie within a given radius of each anchor and update the anchor set with these selected points to expand the search space. With the search space compactly fitting the object shape, we can further sample more instance points with high coverage. In this way, our anchor-guided sampling efficiently generates object geometry priors without reliance on an extra segmentation module. In the experiments, the anchor-guided sampling process iterates twice with the sampling radius set to the minimum distance between the anchors.
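A minimal NumPy/SciPy sketch of this iterative sampling is shown below. The radius rule and the two iterations follow the description above; the k-d tree and the rest of the bookkeeping are implementation assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import pdist

def anchor_guided_sampling(scan_xyz, anchors, n_iter=2):
    """Iteratively collect scan points around the shape anchors of one positive proposal.
    scan_xyz: (P, 3) scene points (with anchors already appended); anchors: (N, 3)."""
    radius = pdist(anchors).min()            # minimum pairwise anchor distance
    tree = cKDTree(scan_xyz)
    seeds, picked = anchors, set()
    for _ in range(n_iter):
        neighborhoods = tree.query_ball_point(seeds, r=radius)
        picked.update(i for lst in neighborhoods for i in lst)
        seeds = scan_xyz[sorted(picked)]     # expand the search space with new points
    return scan_xyz[sorted(picked)]          # geometry prior for shape reconstruction
```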
The geometry priors are next concatenated with the corresponding proposal features \(\mathbf{f}_{\text{obj}}^{\text{vote}}\) and \(\mathbf{f}_{\text{obj}}^{\text{anchor}}\) and encoded into shape embeddings \(\mathbf{f}_{\text{obj}}^{\text{shape}}\) by a ResPointNet [28, 30]. Based on \(\mathbf{f}_{\text{obj}}^{\text{shape}}\), the decoder of BSP-Net [9] is adopted to predict the signed distances of query points in canonical coordinates. Following BSP-Net, we apply the Constructive Solid Geometry (CSG) technique to extract the shape surfaces.
### Semantic Scene Reconstruction
We now build the semantic scene model with the anchor-guided results of the instance detector and shape predictor. The room structure is first constructed by transforming the detected wall quads into orderly connected corners with the merging technique [7]. Next, the models of objects with high objectness are chosen to be arranged in the scene with alignment to the predicted 3D bounding boxes [28]. In the end, the semantic scene model is built as a combination of the reconstructed layout and the scene-aligned object shapes.
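The arrangement step amounts to mapping each predicted mesh from its canonical frame into the scene with the detected box parameters. The sketch below assumes the usual scan-based box parametrization (center, size, and a yaw heading about the vertical axis) and a canonical mesh normalized to a unit cube; neither convention is quoted from the paper.

```python
import numpy as np

def place_in_scene(verts, center, size, heading):
    """Map canonical mesh vertices into scene coordinates using a detected box."""
    c, s = np.cos(heading), np.sin(heading)
    rot_z = np.array([[c, -s, 0.0],
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])           # yaw rotation about the z-axis
    scaled = verts * np.asarray(size)             # anisotropic scale to box extents
    return scaled @ rot_z.T + np.asarray(center)  # rotate, then translate to the center
```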
## 4 Experiments
### Experiment Settings
#### 4.1.1 Dataset
We test our method on ScanNetv2 [12] with ground truths from Scan2CAD [1] and SceneCAD datasets [3] for holistic indoor scene understanding. (1) ScanNetv2 is a benchmark for indoor scene analysis with 1,513 scanned room point clouds; (2) Scan2CAD is an alignment dataset that matches ShapeNet models to their counterpart object instances in ScanNet with oriented 3D bounding boxes; and (3) SceneCAD provides 3D layout annotations for the scans in ScanNetv2. We follow [7] and [9] to pre-process the layout polygons and object meshes for network training supervision. The train/test split is kept in line with the previous works with eight challenging object categories considered in the experiments.
#### 4.1.2 Evaluation Metrics
The quality of semantic scene reconstruction is evaluated with the performance of object detection, layout estimation, and scene-aligned shape reconstruction. In line with previous works [28, 7], we use mean average precision across all classes (mAP) with 0.5 IoU threshold for object detection and F1-score for layout estimation. To evaluate the reconstruction quality, we use the chamfer distance (CD) based mAP with thresholds 0.1 and 0.047, light field distance (LFD) based mAP with thresholds 5000 and 2500, as well as the mAP at 3D IoU thresholds 0.25 and 0.5.
#### 4.1.3 Implementation Details
The training of our method is conducted on a Titan GPU in two stages, and all parameters are updated by the Adam optimizer. In the first stage, we train the dual-branch instance detector of AncRec for simultaneous optimization of the object detection and layout estimation modules. We set the batch size to 8, initialize the learning rate to 1e-3, and adopt the _ReduceLROnPlateau_ learning rate scheduler in PyTorch. In the second stage, we use the object proposals predicted by the instance detector for mesh reconstruction. We train the shape predictor with the BSP-Net decoder pretrained on the 8-category ShapeNet data [6] for training stability and efficiency. The training process lasts 100 epochs with the batch size set to 32, the learning rate initialized to 1e-4, and the same training schedule. We found that the number of shape anchors \(N\) is an insensitive hyper-parameter and thus set it to 18 for computational efficiency.
### Comparison and Analysis
**Scene Reconstruction.** We compare our method with the state-of-the-art works, _i.e._, RfD-Net [28] and DIMR [33]. RfD-Net predicts object shapes with the instance points segmented from proposals given by VoteNet, while
Figure 4: The anchor-guided instance point sampling. Object points are iteratively sampled from the input scan under the guidance of shape anchors. The sampled object points serve as the geometry prior for shape reconstruction.
DIMR infers object models from the instance points provided by a complex instance segmentation backbone. As shown in Fig. 5, our AncRec can robustly recognize and reconstruct objects from noisy point clouds with accurate object localization and high-fidelity shape details. With the target-focused proposal feature learning offered by AncLearn for object detection, our method generates fewer false positive models than the compared approaches. With more accurate detection results, our AncRec can leverage the outlier-reduced instance priors sampled by AncLearn to infer more detailed object structures; _e.g._, the folding chairs are recovered in the second column of Fig. 5. Tab. 1 presents the quantitative comparison of the different methods. Our AncRec outperforms the other methods under all metrics. As the comparison methods did not conduct layout estimation, we illustrate our holistic scene understanding results in Fig. 6. Considering that semantic scene reconstruction is a comprehensive problem, we further analyze the performance of our method on the key sub-tasks, _i.e._, scene parsing and object reconstruction, as follows.
Figure 5: Visualization comparison of scene semantic reconstruction on ScanNetv2.
Figure 6: Holistic scene understanding by our proposed AncRec. (The comparison methods did not conduct layout estimation.)
**Scene Parsing.** We compare AncRec with the state-of-the-art methods on 8-category object detection and layout estimation. The quantitative results in Tab. 2 show that our AncRec edges out the comparison methods by significant performance gains. Especially for the display, bathtub, and trash bin categories, AncRec exceeds the second best by 6.6%, 5.8%, and 10.7% in precision. The results indicate that, with adaptive feature grouping areas, AncLearn works effectively for detecting irregularly shaped objects, _e.g_., diversely scaled bathtubs, thin displays, and tiny trash bins. In Tab. 2, we also compare our method with PQ-Transformer [7], the current state-of-the-art scene parsing approach, in terms of layout estimation. The results demonstrate that our AncLearn is also applicable to localizing walls, which are particularly large and thin.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & [email protected] & [email protected] & [email protected] & [email protected] & LFD@5000 & LFD@2500 \\ \hline RfD-Net & 42.5 & 16.9 & 45.7 & 19.1 & 28.6 & 7.8 \\ DIMR & 46.3 & 12.5 & 51.9 & 25.7 & 29.5 & 8.6 \\ AnchorRec & **52.9** & **18.9** & **56.8** & **29.4** & **30.3** & **9.9** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative results on the ScanNetv2 dataset. We evaluate the reconstruction quality with mAP at different thresholds. (higher is better)
Figure 8: The influence of different instance point sampling strategies on object reconstruction. Points from the same instance are painted in the same color. Without relying on segmentation as RfD-Net and DIMR do, our method can correctly separate adjacent objects and predict their shape models.
Figure 7: Visualization comparison results of object detection on ScanNetv2
The visualized object detection results provided in Fig. 7 further show that our AncRec can reliably delineate objects with compact bounding box predictions.
**Object Mesh Reconstruction.** In Tab. 3, we also evaluate the performance of the different methods with respect to class-wise object reconstruction. The evaluation is based on how well the predicted object meshes match the scene-aligned ground truths. The numeric results in Tab. 3 show that AncRec achieves the best performance on 5 categories and the best overall chamfer distance-based mAP score.
### Ablation Studies
**AncLearn for Scene Parsing.** We study different settings of AncLearn for instance detection and report the results in Tab. 4. Compared to the baseline without AncLearn, the proposed AncLearn brings consistent improvements to object detection and layout estimation. In particular, when combined with the self-attention layer for layout estimation, AncLearn enables the dual-branch instance detector of AncRec to obtain the best performance in both parsing tasks.
**AncLearn for Object Reconstruction.** In the shape predictor of AncRec, AncLearn serves to sample instance points as geometry priors for reconstruction. Using the instance detector of AncRec, we compare AncLearn with the segmentation-based strategy used in RfD-Net and a box-cropping sampling method. The quantitative comparison in Tab. 5 shows that AncLearn outperforms the other two approaches on all metrics by providing instructive geometry priors through the shape anchors.
### Contributions of the Vote and Anchor Features
As mentioned in Sec. 3.1.1, we perform instance detection by taking both vote and anchor features into account. To investigate the difference in their contributions to the detection parameters, we visualize the two weight vectors in Eq. (3) for perceptual comparison. As shown in Fig. 9, vote features have a larger impact on scoring objectness, indicating that the context encoded in vote features is useful in differentiating between object and non-object regions. Compared to vote features, anchor features contribute significantly more to angle prediction, which demonstrates the superiority of anchor-guided strategy in providing shape clues for learning shape-sensitive parameters. Considering the gap between the bounding box and the shape of objects, the vote features contribute more to predicting the bounding
\begin{table}
\begin{tabular}{c c c c c} \hline \hline OD anchor & layout anchor & layout SA & [email protected] & F1 score \\ \hline & & & 41.12 & 60.62 \\ \hline ✓ & & & 43.33(+2.21) & 60.59(- 0.03) \\ & ✓ & & 40.21(-0.91) & 62.61(+ 1.99) \\ ✓ & ✓ & & 41.24(+0.12) & 64.00(+3.38) \\ & ✓ & ✓ & 41.16(+0.04) & 65.34(+4.72) \\ ✓ & ✓ & ✓ & **43.38(+2.26)** & **70.45(+9.83)** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation study of the proposed AncLearn. [email protected] and F1 score respectively reflect the performance of object detection and layout estimation. Higher values mean better results.
Table 5: Comparison between different instance point sampling methods. Higher values mean better results.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Method & table & chair & bookshelf & sofa & trash bin & cabinet & display & bathtub & mAP \\ \hline RfD-Net [28] & 25.51 & 82.11 & 32.53 & 44.21 & 44.74 & 28.37 & 65.51 & 42.57 & 45.70 \\ DIMR [33] & 39.44 & 81.04 & **38.24** & 44.09 & **62.60** & 23.57 & **75.12** & 50.93 & 51.88 \\ AnchorRec (Ours) & **53.60** & **86.23** & 36.82 & **50.92** & 59.59 & **39.56** & 73.68 & **54.32** & **56.84** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Numeric results on object reconstruction. The AP scores are measured with CD at a threshold of 0.1. (Higher is better.)
box center and size residuals.
### The Characteristics of the Learned Anchors
The supervision with surface points derived from complete 3D models enables the anchors to depict object shapes, even with unobserved structures, which assists in object detection and reconstruction when some parts of objects are missing. There also exists a difference between anchor distribution characteristics for objects and non-object areas. As shown in Fig. 10, the anchors and sampled points depict object shapes, while those for non-object areas scatter irregularly. This characteristic may play a role in differentiating objects from non-object regions.
## 5 Conclusions
In this paper, we introduce a shape anchor guided learning strategy (AncLearn) that is embedded into a reconstruction-from-detection learning system (AncRec) to handle the issue of noise interference in point-based holistic scene understanding. Extensive experiments demonstrate that AncRec achieves high-quality indoor semantic scene reconstruction. The quantitative and qualitative results show that AncRec outperforms current methods in terms of object detection, layout estimation, and shape modeling. The ablation studies convincingly verify that AncLearn can largely exclude noise from the search space for reliable feature grouping and robust instance point sampling. In the future, it is promising to study the application of the shape anchor guided learning strategy to other point-based 3D vision tasks.
## Acknowledgment
This research was supported by the Fundamental Research Funds for the Central Universities of China under Grant 2042022dx0001, NSFC-projects under Grant 42071370, Wuhan University-Huawei Geoinformatics Innovation Laboratory, and the Open fund of Key Laboratory of Urban Land Resources Monitoring and Simulation, Ministry of Natural Resources under Grant KF202106084.
|
2309.07751 | Host Galaxy Dispersion Measure of Fast Radio Burst | Fast radio bursts are a class of transient radio sources that are thought to
originate from extragalactic sources since their dispersion measure greatly
exceeds the highest dispersion measure that the Milky Way interstellar medium
can provide. Host Galaxies of twenty-two fast radio bursts have already been
identified. In this paper, the dispersion measurement of these fast radio
bursts produced by the Milky Way interstellar medium, and the intergalactic
medium is obtained through known physical models to yield the host galaxy
dispersion measure. It is observed that the host galaxy dispersion measure
increases with its redshift value. We also obtained that the host galaxy
dispersion measure has different distribution between repeaters and
non-repeaters. It is noted that the reason for the divergence of the host
galaxy dispersion measures should be accounted for by the difference in their
local environment. | Xinxin Wang, Ye-Zhao Yu | 2023-09-14T14:36:52Z | http://arxiv.org/abs/2309.07751v1 | # Host Galaxy Dispersion Measure of Fast Radio Burst
###### Abstract
Fast radio bursts are a class of transient radio sources that are generally thought to originate from extragalactic sources since their dispersion measure usually greatly exceeds the maximum dispersion measure that the Milky Way interstellar medium can provide. Host Galaxies of twenty-two fast radio bursts have already been identified. In this paper, the dispersion measure of these FRBs produced by the Milky Way interstellar medium and the intergalactic medium is obtained through known physical models to yield the host galaxy dispersion measure. It is found that the host galaxy dispersion measure increases with its redshift value, and that the host galaxy dispersion measure has different distribution between repeaters and non-repeaters. Further analysis suggests that there is no significant difference between the host galaxies of repeaters and non-repeaters, and the reason for the divergence of the host galaxy dispersion measures should be accounted for by the difference in their local environment.
Keywords: Fast radio burst, Host galaxy, Dispersion measure
Footnote †: journal: Physics Letters
## 1 Introduction
Fast Radio Burst (FRB) is a kind of radio transient with a typical duration of several milliseconds and usually brighter than 1 Jy ms. Since the discovery of the first FRB (Lorimer et al., 2007), hundreds of FRB events have been detected (Petroff et al., 2016; CHIME/FRB Collaboration et al., 2021), which are divided into repeating and non-repeating bursts. Repeaters are FRBs that have been detected to occur twice or more (Spitler et al., 2016; CHIME/FRB Collaboration et al., 2019; CHIME/FRB Collaboration et al., 2019; Fonseca et al., 2020). Non-repeaters, on the other hand, are FRBs that have only been detected once so far. The difference in their origin is currently undetermined since we cannot conclude whether non-repeaters are also potential candidates for repeaters. Thus, although numerous theoretical models have been proposed for FRBs (Platts et al., 2019), their emission mechanism and physical origin still remain enigmatic. It has been suggested that non-repeating bursts are not truly non-repeating, but rather that their repeat bursts have not yet been detected; however, some studies argue that the probability that all FRBs are repeating bursts is extremely small (Palaniswamy et al., 2018; Caleb et al., 2019). It is commonly believed that repeaters and non-repeaters originate from different physical processes. Repeaters originate from non-catastrophic physical processes, such as giant pulses of young
2309.08462 | Frequency-scanning considerations in axionlike dark matter
spin-precession experiments | Galactic dark matter may consist of axionlike particles (ALPs) that can be
described as an "ultralight bosonic field" oscillating at the ALP Compton
frequency. The ALP field can be searched for using nuclear magnetic resonance
(NMR), where resonant precession of spins of a polarized sample can be
sensitively detected. The ALP mass to which the experiment is sensitive is
scanned by sweeping the bias magnetic field. The scanning either results in
detection of ALP dark matter or rules out ALP dark matter with sufficiently
strong couplings to nuclear spins over the range of ALP masses corresponding to
the covered span of Larmor frequencies. In this work, scanning strategies are
analyzed with the goal of optimizing the parameter-space coverage via a proper
choice of experimental parameters (e.g., the effective transverse relaxation
time). | Yuzhe Zhang, Deniz Aybas Tumturk, Hendrik Bekker, Dmitry Budker, Derek F. Jackson Kimball, Alexander O. Sushkov, Arne Wickenbrock | 2023-09-15T15:09:32Z | http://arxiv.org/abs/2309.08462v1 | # Frequency-scanning considerations in axionlike dark matter spin-precession experiments
###### Abstract
Galactic dark matter may consist of axionlike particles (ALPs) that can be described as an "ultralight bosonic field" oscillating at the ALP Compton frequency. The ALP field can be searched for using nuclear magnetic resonance (NMR), where resonant precession of spins of a polarized sample can be sensitively detected. The ALP mass to which the experiment is sensitive is scanned by sweeping the bias magnetic field. The scanning either results in detection of ALP dark matter or rules out ALP dark matter with sufficiently strong couplings to nuclear spins over the range of ALP masses corresponding to the covered span of Larmor frequencies. In this work, scanning strategies are analyzed with the goal of optimizing the parameter-space coverage via a proper choice of experimental parameters (e.g., the effective transverse relaxation time).
Introduction
### Dark matter, axion and axionlike particles
As a long-standing mystery, the nature of dark matter (DM) has attracted scientists' attention for decades. Various theories have been put forward to explain the origin and composition of DM. The axion, a hypothetical elementary particle, was first invented in 1977 as a solution to the strong-\(CP\) problem in quantum chromodynamics (QCD) [1, 2, 3, 4, 5]. Here \(CP\) refers to the combined symmetry of charge conjugation (\(C\)) and parity transformation (\(P\)). The axion that solves the strong-\(CP\) problem is called the "QCD axion." It was later found to be a candidate for DM since the axion could acquire mass due to spontaneous breaking of the Peccei-Quinn symmetry at some scale, \(f_{a}\), and soft explicit symmetry breaking due to QCD effects, generating a nonzero axion mass \(m_{a}\,\sim\,(\Lambda_{QCD}^{2}/f_{a})\)[6]. Here, \(\Lambda_{QCD}\,\sim\,200\,\)MeV [7, 8] is the characteristic energy scale of strong interactions. Pseudoscalar bosons that acquire mass from mechanisms other than QCD are referred to as axionlike particles (ALPs) [1, 9, 10]. From now on we do not differentiate between the concepts of axions and ALPs, and use "ALP" to represent the entire class of such particles.
The ALP mass could be low (\(m_{a}\,\ll\,1\,\)eV) compared to other DM candidates such as weakly interacting massive particles (WIMPs) with mass \(\gtrsim\,50\,\)GeV [11]. Considering the local DM density \(\rho_{DM}\,\approx\,0.4\,\)GeV cm\({}^{-3}\)[12, 13, 14], the ALP number density is expected to be so high that we can use the language of a classical field to describe the influence of ALPs on laboratory detectors as opposed to a particle-like description of interactions used in the case of WIMPs. The ALP field is stochastic in nature [15] but on time scales shorter than its characteristic coherence time \(\tau_{a}\) it can be approximated as
\[a(r,t)=a_{0}\cos(\omega_{a}t-{\bf k}\cdot{\bf r}+\phi)\,. \tag{1}\]
Here \(a_{0}\) is the amplitude of the field, \(\omega_{a}\,=\,m_{a}c^{2}/\hbar\) is the ALP Compton frequency where \(c\) is the speed of light and \(\hbar\) is the reduced Planck constant, \({\bf k}\,=\,m_{a}{\bf v}_{a}/\hbar\) is the wave vector (\({\bf v}_{a}\) is the relative velocity of ALP and the detector), \({\bf r}\) is the displacement vector, and \(\phi\) is a random phase in the interval [0, 2\(\pi\)). The amplitude \(a_{0}\) follows a Rayleigh distribution and the average root-mean-square (r.m.s) value of \(a_{0}\) can be estimated from \(\rho_{DM}\)[16, 15, 17]:
\[\rho_{DM}\approx\frac{c\,m_{a}^{2}a_{0}^{2}}{2\hbar^{3}}\,. \tag{2}\]
In this work, we assume that ALP DM is virialized in the galaxy and its velocity follows the Maxwell-Boltzmann distribution with average speed \(v_{a}\,\approx\,\)220 km/s [18]. Due to the second-order Doppler effect, the spectral linewidth of the ALP field observed with a detector on Earth \(\Gamma_{a}\) is approximately \((v_{a}/c)^{2}\omega_{a}\sim\omega_{a}/Q_{a}\)[19] (\(v_{a}=|{\bf v}_{a}|\), \(Q_{a}\equiv(c/v_{a})^{2}\) is the ALP quality factor). The mode of the ALP frequency distribution (i.e., the value of the frequency appearing in the distribution with highest probability) is \(\omega_{a}^{{}^{\prime}}\,\approx\,\omega_{a}(1+\,v_{a}^{2}/2c^{2})\)[16]. Since the difference between \(\omega_{a}\) and \(\omega_{a}^{{}^{\prime}}\) is relatively small, we do not differentiate between them in the following.
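For orientation, a small script (ours, not part of the paper) evaluates these ALP-field quantities for a given mass; the physical constants are standard, and \(v_{a}=220\,\mathrm{km/s}\) is taken from the text. The example mass is an arbitrary choice giving a Compton frequency near 1 MHz.

```
import numpy as np

hbar = 1.054_571_817e-34      # J s
eV = 1.602_176_634e-19        # J
c = 2.997_924_58e8            # m/s
v_a = 220e3                   # m/s, galactic virial speed (from the text)

def alp_parameters(m_a_eV):
    omega_a = m_a_eV * eV / hbar         # Compton angular frequency, rad/s
    Q_a = (c / v_a) ** 2                 # ALP quality factor, ~2e6
    tau_a = 2 * np.pi * Q_a / omega_a    # coherence time, Eq. (6)
    Gamma_a = omega_a / Q_a              # ALP linewidth, rad/s
    return omega_a, Q_a, tau_a, Gamma_a

# example: an ALP mass giving a ~1 MHz Compton frequency (assumed value)
omega, Q, tau, Gamma = alp_parameters(4.1e-9)
print(f"f_a = {omega / (2 * np.pi):.2e} Hz, tau_a = {tau:.1f} s")
```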
While astrophysical evidence for the existence of DM comes from its gravitational effects, understanding its nature requires probing its possible non-gravitational couplings. In the case of ALPs, there are three kinds of non-gravitational couplings to standard-model particles that are predicted by theory [20]: the ALP-photon, ALP-gluon and ALP-fermion couplings. In this paper, we analyze spin precession that arises due to the latter two couplings. Moreover, for concreteness, we concentrate on the ALP-fermion coupling to nuclei, although the results should be general for all spin-precession searches.
The Hamiltonian describing such an interaction between the ALP field and nuclei, also referred to as the "nuclear-gradient coupling," can be written as:
\[H_{\rm aNN}\sim{\rm g_{aNN}}\mathbf{\nabla}a\cdot{\bf I}\,, \tag{3}\]
where \(\rm g_{\rm aNN}\) is the coupling strength for a neutron or a proton in units of \(\rm GeV^{-1}\), \(\bf\nabla a\) is the ALP-field gradient and \(\bf I\) is the nuclear spin operator. The exact form of this expression is model dependent (for example, the proton and neutron couplings vary between different ALP theories). The connection between the ALP couplings to protons and neutrons and the ALP coupling to the entire nucleus depends on nuclear physics [21].
### ALP search with spin-precession haloscope
Depending on the theoretical model, different schemes of spin-precession experiments are adopted to search for DM fields. For example, the Global Network of Optical Magnetometers for Exotic physics searches (GNOME) [22, 23, 24, 25, 26] investigates transient exotic spin couplings. Other examples are the Cosmic Axion Spin Precession Experiment (CASPEr) [27], aiming at probing a persistent (and fluctuating, see, for example, Ref. [15]) pseudomagnetic field with nuclear magnetic resonance (NMR) and the QUaerere AXions (QUAX) experiment probing possible interactions of the galactic ALP field with electrons [28].
In this work, we focus on CASPEr and similar experiments. CASPEr is carried out in parallel at Boston and Mainz. At Boston, the main focus is the search for the ALP coupling to gluons resulting in oscillating parity- and time-reversal-invariance-violating nuclear moments. At Mainz, the focus is on the ALP-field-gradient coupling to nuclei. There are a number of setups at Mainz using different magnets (or magnetic shielding) and addressing different ALP-mass ranges. The target frequency band to scan is determined by the maximum magnetic field of the NMR setup and the gyromagnetic ratio of the spins. In the "CASPEr-Gradient-Low-Field" experiment, the magnetic field is limited to about \(0.1\,\rm T\), while the "CASPEr-Gradient-High-Field" experiment will operate with a tunable \(14.1\,\rm T\) magnet. With proton spins, the corresponding maximum frequencies are about \(4.3\) and \(600\,\rm MHz\), respectively. The sensitivity of spin-precession experiments depends on the number of polarized spins in the sample, the relaxation times and the sensitivity of the detector. In the following sections, we discuss the mechanism of spin-precession experiments in DM search, and formulate an optimal strategy to scan through the mass-\(|\rm g_{\rm aNN}|\)-coupling parameter space describing the nuclear-gradient coupling.
We take the CASPEr-Gradient-Low-Field experiment as an example. A conceptual diagram of the apparatus is illustrated in Figure 1. The cryostat contains a liquid helium reservoir (providing a cryogenic environment for the magnet), an excitation coil, a pickup coil and a superconducting quantum interference device (SQUID). We can use multiple pickup coils and SQUIDs in the experiment. The magnet produces the bias field \(\bf B_{0}\). The magnetization of the sample is prepared so that it is oriented collinearly with \(\bf B_{0}\), and the magnetization can be tilted by applying a transverse oscillating magnetic field with the excitation coil. The role of this transverse oscillating field could also be taken, in principle, by the gradient of the ALP field. The transverse magnetization induces an oscillating magnetic flux in the pickup coil which couples to the SQUID loop.
### The goal of the present work
Before discussing the search optimization, we need to define what is being optimized. This may depend on the specific situation. One reasonable goal may be to obtain a "needle sensitivity" by achieving the best possible sensitivity at a fixed value (i.e., a narrow range) of the ALP mass. While chances of finding DM around one random frequency are insignificant, a needle-sensitivity experiment may be useful for exploring the practical limits of sensitivity and the study of systematic effects. Another circumstance when one may wish to look in a narrow mass range is if a plausible candidate is found and it is necessary to check if it is real at the observed mass of the candidate. However, in this paper we consider a complementary case where the ultimate goal is to scan a large range of the ALP mass (frequency) parameter space in search for DM; therefore, the question we address here with respect to our scanning strategy is: what are the optimal experimental settings for exploring the largest area in the parameter space?
In other words, which experimental settings correspond to the highest sensitivity to the ALP coupling strength over the entire frequency range? The specific questions that we address here include:
* What is the best frequency-step size for scanning?
* What are optimal values of the transverse relaxation time \(T_{2}\) and the effective transverse relaxation time \(T_{2}^{*}\)?
* How does sensitivity scale with the total measurement duration \(T_{\rm tot}\) (or the dwell time at one ALP frequency, \(T\))?
An interesting issue to consider in conjunction with choosing the optimal scanning strategy is whether it may be beneficial to apply a "seed" transverse oscillating magnetic field at the resonance frequency to heterodyne the ALP signal [29, 30]. We leave seeding outside the scope of the current work. Similarly, we do not consider here the case of sub-kHz frequencies addressed previously in CASPEr-ZULF [31, 32], where ZULF stands for zero-to-ultralow field or other related low-frequency work [33, 34, 35, 36].
We note here in passing that the issue of relative value (value scaling) of searching in different parts of the parameter space is nontrivial and highly consequential. The value scaling depends on theoretical assumptions and preferences. To give an example, it is commonly assumed that a ultralight-bosonic-dark-matter (UBDM) particle could have a mass in the \(10^{-22}\,-\,10\,\)eV range. If we assume that the a-priori probability of finding the particle is uniform over this interval, this strongly biases the search towards the highest masses. However, it is the area in the log-lin or log-log (mass-1/coupling) parameter space that is usually taken as the figure-of-merit (see, for example, Ref. [37] for a related discussion). This makes a search covering, say, a decade at low masses just as valuable as a decade searched at higher masses. The distinction between these value scalings is not important for searches that cover a fractionally small mass range, which is common.
We also note the work of Ref. [38], where the authors address the sensitivity of DM search experiments like CASPEr. We claim that there is no conflict in terms of conclusions between Ref. [38] and this paper. In Ref. [38], experimental sensitivities under any hierarchy between ALP coherence time \(\tau_{a}\), NMR transverse relaxation time \(T_{2}\), and measurement time \(T\) are derived. However, in this paper, we always assume that \(T\gg\tau_{a}\), \(T_{2}\) and consider the hierarchy between \(\tau_{a}\), \(T_{2}\) and \(T_{2}^{*}\). The effect of \(T_{2}^{*}\) is discussed here as well.

Figure 1: Conceptual diagram of the CASPEr-Gradient-Low-Field setup. The superconducting devices (indicated by dark green) are submerged in a liquid helium bath (light green). The magnet comes with shim coils so that the inhomogeneity of the magnetic field can be manipulated. The excitation and pickup coils can be used to characterize the properties of the magnetic field and sample. The sample is isolated from the liquid helium bath via a vacuum insulation (indicated by light blue), and the sample temperature can be adjusted with a temperature-control system, affecting properties such as the \(T_{2}\) relaxation. The setup is compatible with different samples that require significantly different temperatures (\(\geq 100\,\)K difference).
## 2 Spin-Precession Experiment
### Characteristic parameters
The effect of the ALP field is best described as continuous-wave (CW) NMR, for which a key parameter is the transverse spin-relaxation rate. Transverse relaxation occurs via homogeneous mechanisms (e.g., intermolecular interactions in the sample) as well as inhomogeneous mechanisms1, for example, due to magnetic-field gradients across the sample. The homogeneous-relaxation time is called \(T_{2}\). The total effective relaxation time is called \(T_{2}^{*}\ \leq\ T_{2}\). If we tilt the spins away from the bias field, turn the excitation field off, we observe a decaying sine wave NMR signal as a function of time. We can obtain the NMR "decay" spectrum by taking the Fourier transform of the time series. The full-width-at-half-maximum (FWHM) or the linewidth of the spectral line is \(\Gamma_{n}\,=\,2/T_{2}^{*}\)[39], which is the result of relaxation from both homogeneous and inhomogeneous mechanisms. Here we approximate the NMR spectral lineshape as a Lorentzian for simplicity (which is not true in general, see for example Ref. [40]). In NMR the linewidth is sometimes normalized with the center frequency \(\omega\):
Footnote 1: Though they may not be regarded as true relaxation mechanisms, their effect is also described as “dephasing”.
\[\delta_{\omega}\equiv\frac{\Gamma_{n}}{\omega}=\frac{2}{\omega T_{2}^{*}}\,, \tag{4}\]
often in the unit of parts per million (ppm). The value of \(\delta_{\omega}\) can be manipulated with magnetic shimming in the case where inhomogeneous broadening due to the field gradients is significant.
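As a quick worked example (ours, not the paper's): for a proton Larmor frequency of \(4.3\,\mathrm{MHz}\), the maximum of the low-field setup, and an assumed \(T_{2}^{*}=1\,\mathrm{s}\), Equation (4) gives a fractional linewidth of roughly 0.07 ppm.

```
import numpy as np

def delta_omega_ppm(f_hz, T2_star):
    # fractional NMR linewidth of Eq. (4), expressed in ppm
    omega = 2 * np.pi * f_hz
    return 2.0 / (omega * T2_star) * 1e6

# proton Larmor frequency at ~0.1 T and an assumed T2* of 1 s
print(f"{delta_omega_ppm(4.3e6, 1.0):.3f} ppm")
```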
The ALP-field gradient can be viewed as a pseudomagnetic field because it interacts with nuclear spins in a similar way as a magnetic field does. This implies the possibility of detecting an ALP-field gradient through magnetic resonance experiments where the oscillating gradient of the galactic ALP field drives spin precession. We can regard the ALP field as the oscillating drive field for nuclear spins in a bias magnetic field. The Rabi frequency of this pseudomagnetic field is determined by the coupling strength, the amplitude of the ALP-field gradient, ALP mass and the magnitude of the relative velocity of experiment and ALP field perpendicular to the direction of the bias field \(v_{\perp}\). It is important to note that the ALP field is a stochastically varying quantity, so that the Rabi frequency does not have a constant value over time. The r.m.s. Rabi frequency \(\Omega_{a}\) can be computed based on Equations (1), (2) and (3) [15]:
\[\Omega_{a}=\frac{1}{2\hbar}\mbox{g}_{\rm aNN}a_{0}m_{a}cv_{\perp}\approx\frac {1}{2}\mbox{g}_{\rm aNN}\sqrt{2\hbar c\rho_{DM}}v_{\perp}\,, \tag{5}\]
Here we assume that the ALP-field gradient comes exclusively from the relative motion of the field and the detector.2 If we take \(v_{\perp}\ =\ 220\,{\rm km\,s^{-1}}\), we have \(\Omega_{a}/2\pi\ =\ \mbox{g}_{\rm aNN}\ \times\,0.2\,{\rm GeV\,Hz}\). For an experiment searching for \(\mbox{g}_{\rm aNN}\ \leq\ 10^{-10}{\rm GeV}^{-1}\), the period of Rabi oscillation is \(2\pi/\Omega_{a}\ \geq\ 1000\,{\rm yr}\), for which reason we can regard ALP-field gradient as a weak drive for the spins. The stochastic nature of the ALP field [15, 19] leads to its finite coherence time \(\tau_{a}\), which can be estimated as
Footnote 2: This is not necessarily the case, for instance, for ALPs gravitationally bound in stationary states like axion stars or “axion atoms” [41, 42]. We do not consider such regimes here.
\[\tau_{a}=\frac{2\pi Q_{a}}{\omega_{a}}\,. \tag{6}\]
Here \(Q_{a}\) is the ALP quality factor defined after Equation (2). The parameter \(\tau_{a}\) describes the ALP decoherence and needs to be taken into consideration for measurements lasting for \(\tau_{a}\) or longer. We are considering frequencies above \(1\,{\rm kHz}\), and \(\tau_{a}\,\leq\,1000\,{\rm s}\). Dwell time \(T\) for the measurement at one frequency
is chosen to be much longer than \(\tau_{a}\), \(T_{2}^{*}\) and \(T_{2}\). The linewidth of the ALP field \(\Gamma_{a}\) = \(\omega_{a}/Q_{a}\) can be expressed with \(\tau_{a}\) as:
\[\Gamma_{a}=\frac{2\pi}{\tau_{a}}\,. \tag{7}\]
Additionally, the hyperpolarized samples used for the experiment lose polarization due to \(T_{1}\) relaxation, which sets a limit on the transverse relaxation time of \(T_{2}\leq 2T_{1}\).
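As a cross-check of the number quoted after Equation (5), a short calculation (ours, not from the paper; the unit conversions are standard natural-unit factors) reproduces the \(\mathrm{g_{aNN}}\times 0.2\,\mathrm{GeV\,Hz}\) Rabi-frequency estimate for \(\rho_{DM}=0.4\,\mathrm{GeV\,cm^{-3}}\) and \(v_{\perp}=220\,\mathrm{km/s}\).

```
import numpy as np

hbar_c = 1.9733e-14          # GeV cm, so 1 cm^-1 = 1.9733e-14 GeV
GeV_to_rad_s = 1.5193e24     # 1 GeV expressed as an angular frequency, rad/s

rho_dm = 0.4 * hbar_c**3     # 0.4 GeV/cm^3 written in natural units (GeV^4)
v_perp = 220e3 / 2.998e8     # transverse velocity in units of c

g_ann = 1.0                                            # GeV^-1, reference value
Omega_a = 0.5 * g_ann * np.sqrt(2 * rho_dm) * v_perp   # Eq. (5), in GeV
f_rabi = Omega_a * GeV_to_rad_s / (2 * np.pi)          # in Hz
print(f"Omega_a/(2 pi) = {f_rabi:.2f} Hz for g_aNN = 1 GeV^-1")   # ~0.2
```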
### Weak-drive NMR
We expect the ALP-field gradient to induce nuclear spin precession. For a sample with longitudinal magnetization \(M_{0}\) prepared along the bias field, the angle of magnetization is tilted by the ALP-field gradient, generating a transverse magnetization \(M_{1}\).
Since the coupling strength g\({}_{\rm ANN}\) and the corresponding r.m.s. Rabi frequency \(\Omega_{a}\) are assumed to be so small that \(\Omega_{a}T\ll 1\) [see the discussion following Equation (5)], we can consider the precession under weak drive: the tipping angle \(\xi\ll 1\) and \(M_{1}\) = \(M_{0}\sin\xi\approx M_{0}\xi\)[43]. At the beginning of the precession (evolution time \(t\,\ll\,T_{2}\), \(T_{2}^{*}\) or \(\tau_{a}\)), the tipping angle increases linearly with time (\(\xi\,\approx\,\Omega_{a}t\)). Meanwhile, the relaxation decreases the magnitude of \(M_{1}\). If \(T_{2}^{*}\,\ll\,\tau_{a}\), the drive and relaxation establish a steady state where the tipping angle stays mostly constant when \(t\,\geq\,T_{2}^{*}\). However, if \(\tau_{a}\,\ll\,T_{2}^{*}\), we should not expect to measure a constant tipping angle since the ALP field is not coherent at the time scale of \(T_{2}^{*}\). These cases are considered below.
For the case of a weak drive (which is the ALP field in our case), there is a general expression for the r.m.s. transverse magnetization [43]:
\[\sqrt{\langle M_{1}^{2}\rangle}\approx\frac{1}{\sqrt{2}}u_{n}M_{0}\sqrt{ \langle\xi^{2}\rangle}\,, \tag{8}\]
Here \(u_{n}\) is a factor indicating the fraction of on-resonance spins out of all the spins, \(M_{0}\) is the longitudinal magnetization and \(\xi\) is the tipping angle of the magnetization. The factor of \(1/\sqrt{2}\) appears due to \(M_{1}\) oscillating at Larmor frequency. Note that we make the approximation that \(\sin\xi\approx\xi\) here based on the weak-drive assumption.
Assume that the Larmor frequency is tuned to the value of the ALP Compton frequency. As an approximation, the NMR and ALP lineshapes can be taken to be rectangles, so the spectral \(u_{n}\) factor can be estimated from the ratio of the ALP linewidth to the NMR linewidth (the numerical factor depends on the line-shape model):
\[u_{n}\approx\left\{\begin{array}{cc}1\,,&\tau_{a}\ll T_{2}^{*}\\ \\ \frac{\Gamma_{a}}{\Gamma_{n}}=\frac{\pi T_{2}^{*}}{\tau_{a}}\,,&T_{2}^{*} \ll\tau_{a}\end{array}\right.\,. \tag{9}\]
Here, the first regime corresponds to all nuclear spins being on-resonance with the ALP field, while in the second regime only a fraction \(\Gamma_{a}/\Gamma_{n}\) of the spins is on-resonance.
The expectation value of \(\sqrt{\xi^{2}}\) is limited by \(\tau_{a}\), \(T_{2}\) and \(T_{2}^{*}\). When \(\tau_{a}\,\ll\,T_{2}\) or \(\tau_{a}\,\ll\,T_{2}^{*}\), we need to consider the decoherence of the ALP field during the measurement. To model the effect of decoherence, we assume that the phase \(\phi\) of the ALP field, corresponding to the orientation of the pseudomagnetic field in the rotating frame, changes to a random value in [0, 2\(\pi\)) in the period of \(\tau_{a}\). The induced transverse magnetization vector in the rotating frame \({\bf M_{1}}\,=\,M_{x}{\bf e_{x}}\,+\,M_{y}{\bf e_{y}}\) can be represented as a function of \(M_{0}\), tipping angle \(\xi_{x}\) and \(\xi_{y}\) under the weak-drive condition: \({\bf M_{1}}\,=\,u_{n}M_{0}(\xi_{x}{\bf e_{x}}\,+\,\xi_{y}{\bf e_{y}})\). It can be estimated via a two-dimensional random walk [44], with a characteristic step size of \(\Omega_{a}\tau_{a}\). Given the evolution time \(t\), the r.m.s value of \(M_{1}\) is \(u_{n}M_{0}\sqrt{\langle\xi_{x}^{2}+\xi_{y}^{2}\rangle}\,=\,u_{n}M_{0}\sqrt{ \langle\xi^{2}\rangle}\), and \(\sqrt{\langle\xi^{2}\rangle}\,=\,\Omega_{a}\sqrt{\tau_{a}t}\), growing linearly with \(\sqrt{t}\). If we take relaxation into account, \(\sqrt{\langle\xi^{2}\rangle}\) stops increasing after the relaxation time of the on-resonance spin ensemble. When \(\tau_{a}\ll T_{2}^{*}\), all the spins are on-resonance, and we can take
\(t=T_{2}^{*}\), so that \(\sqrt{\langle\xi^{2}\rangle}\,=\,\Omega_{a}\sqrt{\tau_{a}T_{2}^{*}}\). When \(T_{2}^{*}\,\ll\,\tau_{a}\,\ll\,T_{2}\), only a small fraction of the spins in the sample are on resonance with the ALP field, and their linewidth is equal to the ALP linewidth \(\Gamma_{a}\). The relaxation time of this small fraction, dominated by the residual inhomogeneous relaxation, is \(2/\Gamma_{a}\,=\,\tau_{a}/\pi\), hence we take \(t=\tau_{a}/\pi\) and have \(\sqrt{\langle\xi^{2}\rangle}\,=\,\Omega_{a}\sqrt{\tau_{a}t}\,=\,\Omega_{a}\tau_{a}/\sqrt{\pi}\) in this case.3
Footnote 3: To clarify the meaning of residual inhomogeneous relaxation, note that the ALP spectral line overlaps with a portion of width \(\sim\,1/\tau_{a}\) out of a broader inhomogeneously broadened spectrum with overall width \(\sim\,1/T_{2}^{*}\). The groups of spins that are not overlapping with the ALP spectral line (off-resonance spin groups) cannot be driven coherently by the ALP field during the long dwell time \(T\). Considering just the on-resonance group of spins, when \(\tau_{a}\,\ll\,T_{2}\), relaxation is determined by the differences in spin-precession frequencies across this group (residual inhomogeneous broadening) instead of \(T_{2}\), i.e., the relaxation time is \(\sim\tau_{a}\).
However, when \(T_{2}\,\ll\,\tau_{a}\), the phase of the ALP-field gradient can be regarded as coherent during \(T_{2}\), and spin transverse decoherence is dominated by \(T_{2}\) relaxation. Therefore we have \(\sqrt{\langle\xi^{2}\rangle}\,=\,\Omega_{a}T_{2}\). We can summarize the cases above as:
\[\sqrt{\langle\xi^{2}\rangle}\,=\,\left\{\begin{array}{ll}\Omega_{a}\sqrt{ \tau_{a}T_{2}^{*}},&\tau_{a}\ll T_{2}^{*}\\ \\ \Omega_{a}\frac{\tau_{a}}{\sqrt{\pi}},&T_{2}^{*}\ll\tau_{a}\ll T_{2}\\ \\ \Omega_{a}T_{2},&T_{2}\ll\tau_{a}\end{array}\right.. \tag{10}\]
For frequency ranges of interest to us, the \(T_{2}\) time is assumed here as a fixed value independent of the external magnetic field.
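A minimal Monte-Carlo sketch (ours, not from the paper) of the two-dimensional random-walk argument used above, assuming the ALP phase is redrawn once per coherence time and neglecting relaxation; the numerical values are arbitrary illustrative choices.

```
import numpy as np

rng = np.random.default_rng(1)

Omega_a = 1e-9      # r.m.s. Rabi frequency, rad/s (weak drive, arbitrary)
tau_a = 2.0         # ALP coherence time, s
n_steps = 400       # evolution time t = n_steps * tau_a
n_trials = 2000

phases = rng.uniform(0.0, 2 * np.pi, size=(n_trials, n_steps))
# each coherence interval adds a step of length Omega_a*tau_a in a random direction
xi_x = (Omega_a * tau_a * np.cos(phases)).sum(axis=1)
xi_y = (Omega_a * tau_a * np.sin(phases)).sum(axis=1)
xi_rms = np.sqrt(np.mean(xi_x**2 + xi_y**2))

t = n_steps * tau_a
print(f"simulated rms xi = {xi_rms:.3e}, "
      f"Omega_a*sqrt(tau_a*t) = {Omega_a * np.sqrt(tau_a * t):.3e}")
```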
Summarizing Equations (8), (9) and (10), we can obtain the expression for the r.m.s transverse magnetization:
\[T\gg\tau_{a}\,,T_{2}^{*}\mbox{ or }T_{2},\,\sqrt{\langle M_{1}^{2}\rangle} \approx\frac{1}{\sqrt{2}}M_{0}\Omega_{a}\left\{\begin{array}{ll}\sqrt{\tau_ {a}T_{2}^{*}}\,,&\tau_{a}\ll T_{2}^{*}\\ \\ \frac{T_{2}^{*}}{\sqrt{\pi}}\,,&T_{2}^{*}\ll\tau_{a}\ll T_{2}\\ \\ \frac{T_{2}^{*}T_{2}}{\sqrt{\pi}\tau_{a}}\,,&T_{2}\ll\tau_{a}\end{array}\right.. \tag{11}\]
In the third case of the Equation (11), as \(T_{2}\,\ll\,\tau_{a}\), \(T_{2}\) and \(T_{2}^{*}\) suppress the growth of \(\sqrt{\langle M_{1}^{2}\rangle}\) over the dwell time. In the limit of \(\tau_{a}\,\rightarrow\,\infty\), since \(T\) is assumed to be much larger than \(\tau_{a}\), we have \(T\,\rightarrow\,\infty\), whereby the ALP field cannot drive the nuclear spins coherently all the time and \(\sqrt{\langle M_{1}^{2}\rangle}\) decreases to zero.
### Summary of regimes
As can be seen in the discussion on the spectral factor and the tipping angle, there are different regimes in the experiment. To organize the discussion, we summarize the possible regimes depending on the relative magnitudes of \(T_{2}\), \(T_{2}^{*}\) and \(\tau_{a}\):
1. \(\tau_{a}\ll T_{2}^{*}\);
2. \(T_{2}^{*}\ll\tau_{a}\ll T_{2}\);
3. \(T_{2}^{*}\ll T_{2}\ll\tau_{a}\);
4. \(T_{2}^{*}=T_{2}\ll\tau_{a}\);
These four regimes are also illustrated in Figure 2. Since the NMR and ALP linewidth can also be expressed with \(T_{2}^{*}\) and \(\tau_{a}\), we do not mention linewidth in these regimes. We do not separately consider intermediate regimes such as \(T_{2}\approx\tau_{a}\) and \(T_{2}^{*}\approx\tau_{a}\). The estimates for these regimes can be done by interpolation or extrapolation.
## 3 Sensitivity and scanning optimization
### Single-frequency measurement
The relationship between the flux in the SQUID \(\Phi\) and the transverse magnetization \(M_{1}\) is described by pickup transfer coefficient \(\alpha\):
\[\alpha=\frac{\Phi}{\mu_{0}M_{1}} \tag{12}\]
where \(\mu_{0}\) is the vacuum permeability. The average flux power of an ALP-induced spin-precession signal is determined by the ALP-induced transverse magnetization described in Equations (8), (9) and (10):
\[\langle\Phi_{a}^{2}\rangle=\frac{1}{2}(\alpha u_{n}\mu_{0}M_{0})^{2}\langle \xi^{2}\rangle=\frac{1}{2}(\alpha u_{n}\mu_{0}M_{0}\Omega_{a})^{2}\left\{ \begin{array}{ll}\tau_{a}T_{2}^{*},&\tau_{a}\ll T_{2}^{*}\\ \\ \frac{\tau_{a}^{2}}{\pi},&T_{2}^{*}\ll\tau_{a}\ll T_{2}\\ \\ T_{2}^{2},&T_{2}\ll\tau_{a}\end{array}\right.\,. \tag{13}\]
This power is concentrated in a spectral interval corresponding to the width of the NMR line. The corresponding signal amplitude in power spectrum or power spectral density (PSD) can be computed from the flux power in Equation (13):
\[S=\frac{\langle\Phi_{a}^{2}\rangle}{\Gamma/2\pi}\,, \tag{14}\]
where \(\Gamma\) is the spectral linewidth of the expected signal. Considering the NMR and ALP linewidth, we have
\[\Gamma=\left\{\begin{array}{ll}\Gamma_{a},&\Gamma_{a}\ll\Gamma_{n}\\ \\ \Gamma_{n},&\Gamma_{a}\gg\Gamma_{n}\end{array}\right.\,. \tag{15}\]
The first case in Equation (15) refers to the situation where only a fraction of nuclear spins are on-resonance with the ALP field, and the second case refers to the situation where all the spins are on-resonance. The number of data points in the power spectrum within a frequency bin of \(\Gamma\) is
\[N_{\Gamma}=\frac{\Gamma T}{2\pi}\gg 1\,, \tag{16}\]
where \(T\) is the dwell time of the measurement in one step. These \(N_{\Gamma}\) PSD values are averaged to improve the signal-to-noise ratio. Having \(N_{\Gamma}\) large is an experimental choice; we normally want at least several points per ALP linewidth, for example, to be able to discriminate ALP signals using their expected lineshape [19]. At present, dominant sources of noise in the CASPEr-Gradient-Low-Field experiment are from the SQUIDs and their electronics. The expected noise level after averaging is

\[\sigma=\frac{\sigma_{0}}{\sqrt{N_{\Gamma}}}=\sqrt{\frac{2\pi}{\Gamma T}}\sigma_{0}\,, \tag{17}\]

where \(\sigma_{0}\) is the expected noise of the power spectrum without any binning, characterizing the noise of the system.

Figure 2: Different regimes that are considered in the experimental schemes.
We choose the detection threshold as \(3.355\sigma\), corresponding to a detection of a signal whose true power is \(5\sigma\) above the background (accounting for noise) with \(95\%\) confidence level [17, 45]:
\[S\geqslant 3.355\sigma\,. \tag{18}\]
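As an illustration of the averaging and thresholding in Equations (16)-(18), here is a sketch of ours (not the collaboration's analysis code), assuming white noise in the power spectrum and arbitrary parameter values.

```
import numpy as np

rng = np.random.default_rng(0)

T = 1000.0                # dwell time, s (T >> tau_a assumed)
Gamma = 2 * np.pi * 1.0   # expected signal linewidth, rad/s (1 Hz here)
N_Gamma = int(Gamma * T / (2 * np.pi))   # points per bin, Eq. (16)

sigma0 = 1.0              # single-point PSD noise level (arbitrary units)
n_bins = 2000
# the PSD of Gaussian noise is exponentially distributed with mean sigma0
psd = rng.exponential(sigma0, size=(n_bins, N_Gamma))

binned = psd.mean(axis=1)                  # average within each Gamma-wide bin
sigma = sigma0 / np.sqrt(N_Gamma)          # Eq. (17)
threshold = binned.mean() + 3.355 * sigma  # Eq. (18), above the mean background
candidates = np.flatnonzero(binned > threshold)
print(f"N_Gamma = {N_Gamma}, threshold = {threshold:.3f}, "
      f"{len(candidates)} candidate bins out of {n_bins}")
```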
Notice that \(S\) is a function of \(\mathrm{g_{aNN}}\) since the \(\Omega_{a}\) appearing in Equation (13) is proportional to \(\mathrm{g_{aNN}}\). When no significant signal is discovered, we can use Equations (5), (13), (14), (17) and (18) to exclude the coupling strengths whose values satisfy:
\[T\gg\tau_{a}\,,T_{2}^{*}\text{ or }T_{2},\,|\mathrm{g_{aNN}}|\geqslant\eta\left(\frac{\Gamma}{2T}\right)^{\frac{1}{4}}\times\left\{\begin{array}{ll}\left(\tau_{a}T_{2}^{*}\right)^{-\frac{1}{2}}\,,&\tau_{a}\ll T_{2}^{*}\\ \\ \left(\sqrt{\pi}\,T_{2}^{*}\right)^{-1}\,,&T_{2}^{*}\ll\tau_{a}\ll T_{2}\\ \\ \frac{\tau_{a}}{\pi\,T_{2}^{*}T_{2}}\,,&T_{2}\ll\tau_{a}\end{array}\right.\,, \tag{19}\]

where \(\Gamma\) is the expected signal linewidth of Equation (15) and \(\eta\) collects the constant factors that follow from Equations (5), (13), (14), (17) and (18).
D. The measured data are processed to generate a power spectrum. E. The power spectrum is binned, and the data are averaged in each bin. The frequency-bin size is chosen to distinguish an ALP signal with the expected width \(\Gamma\) (dwell time \(T\gg 1/\Gamma\)). F. The PSD values in the frequency bins are analyzed. An example is the histogram in Figure 3a, which is used to determine whether there is an ALP candidate within the bandwidth of the step. G. If a DM candidate is found, further analysis on the candidate is performed and the measurement at this frequency is repeated. H. If no DM candidate is found, certain ALP coupling strengths can be excluded for the given mass.
Note that after taking data, the value of the field can be incremented for the next step of the scan;
1. Repeat (2) until the target frequency band has been scanned.
The second item listed here is essentially "one step" of the experiment. The procedure is also illustrated in Figure 3. In the case where a statistically significant DM candidate signal is observed, more measurements (like measuring the daily and annual modulations [19]) can be done to verify that the signal characteristics correspond to the ALP DM hypothesis.
The frequency-step size is the interval between the center frequencies of adjacent steps. In a DM spin-precession experiment, one is sensitive to signals in a bandwidth of the maximum of \(\Gamma_{n}\) and \(\Gamma_{a}\) around a given value of the bias magnetic field. An appropriate step size in the scanning should be the sensitive bandwidth. Choosing the step size to be larger than the sensitive bandwidth results in low sensitivity between center frequencies of adjacent steps. On the other hand, choosing smaller steps leads to large number of steps to scan the whole frequency range and potentially, more time consumption in ramping the bias field between steps. We do not expect having smaller steps to improve the sensitivity.
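A small sketch of ours of such a scan plan, stepping the center frequency by the sensitive bandwidth \(\max(\Gamma_{n},\Gamma_{a})\); the example frequency band and the value of \(T_{2}^{*}\) are illustrative assumptions.

```
import numpy as np

def scan_plan(omega_start, omega_end, T2_star, Q_a=1.86e6):
    # Q_a defaults to (c/v_a)^2 for v_a = 220 km/s
    Gamma_n = 2.0 / T2_star            # NMR linewidth, rad/s
    centers = [omega_start]
    while centers[-1] < omega_end:
        omega = centers[-1]
        Gamma_a = omega / Q_a          # ALP linewidth at this frequency
        centers.append(omega + max(Gamma_n, Gamma_a))
    return np.array(centers)

# example: protons from 4.0 to 4.3 MHz with an assumed T2* = 1 s
centers = scan_plan(2 * np.pi * 4.0e6, 2 * np.pi * 4.3e6, T2_star=1.0)
print(f"{len(centers)} steps; "
      f"Q_a*ln(end/start) = {1.86e6 * np.log(4.3 / 4.0):.3e}")
```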
We note that off-bandwidth detection can also be considered (see, for example, Ref. [46]). The analysis of whether this is beneficial or not depends on the dominant sources of noise and the frequency dependence of the noise. We leave this outside the scope of the current manuscript. If we deliberately introduce magnetic field inhomogeneity (increasing the inhomogeneous broadening) so that the NMR linewidth is larger than the ALP linewidth, the possible ALP-driven spin-precession signal would be weaker as we decrease the number of spins resonant with the ALP field. On the other hand, by spreading the spins over a broader frequency band, we extend the sensitive frequency range of an experimental step. Which of these two strategies is optimal is discussed at the end of this section.
One can choose different schemes in distributing the total measurement time to each step. For example, we can allocate measurement time proportional to the \(\tau_{a}\) for a measurement at one frequency (dwell time \(T\propto\tau_{a}\propto\omega^{-1}\)), or uniformly distributing measurement time to each step (\(T\propto\omega^{0}\)). We can even generalize the discussion to \(T\propto\omega^{n}\) [\(n\in(-\infty,\,\infty)\)] and figure out the optimal \(n\) value so as to maximize the area in the parameter space. A detailed discussion is included in the Appendix. The optimal dwell times for each regime are mentioned in Table 1.
We assume that the target frequency band involves only one regime, otherwise we split the band and discuss the optimal scanning strategies for them separately. Indeed, the CASPEr program as a whole covers multiple regimes. The parameters for all the regimes above are summarized in Table 1.
With the information from Table 1 and the sensitivity of single steps described by Equation (19), the
Figure 3: Schematic of the experimental procedure. a) One step of the scan. The spectrum of the decay signal is obtained by pulsed-NMR, and the \(T_{2}\) measurement can be done with a spin-echo experiment. The diagnostics can be repeated after the data acquisition. The data acquired with the magnetometer can be used to calculate the power spectrum, which is later binned and analyzed. One of the analysis methods is the histogram of PSD values. The threshold for DM candidates is shown in the histogram with the dashed line, where \(\mu\) stands for the average PSD, and \(\sigma\) is the standard deviation. More analyses can be performed to search for and evaluate ALP-signal candidates (see Ref. [40]). If no ALP signal is found, the sensitivity plot can be made for this step, and the magnetic field would be ramped to a new value for the next step. b) One example of decay spectra (blue curves) and sensitivity in mass-coupling space (reddish area) after scanning. Here we assume the NMR linewidth \(\Gamma_{d}\) to be the frequency increment between steps.
sensitivities of the scanning are given as:
\[|\mathrm{g_{anN}}|\geqslant\left\{\begin{array}{ll}\eta\left(\frac{T_{\mathrm{ tot}}}{Q_{a}\ln\left(\omega_{\mathrm{end}}/\omega_{\mathrm{start}}\right)}\right)^{- \frac{1}{4}}\times(T_{2}^{*})^{-\frac{3}{4}}\tau_{a}^{-\frac{1}{2}},&\tau_{a} \ll T_{2}^{*}\ \mathrm{or}\ T_{2}^{*}\ll\tau_{a}\ll T_{2}\\ \\ \eta\left(\frac{\pi^{2}T_{\mathrm{tot}}}{Q_{a}\ln\left(\omega_{\mathrm{end}}/ \omega_{\mathrm{start}}\right)}\right)^{-\frac{1}{4}}\times(T_{2}^{*})^{-\frac{3 }{4}}T_{2}^{-1}\tau_{a}^{\frac{1}{2}},&T_{2}^{*}\ll T_{2}\ll\tau_{a}\ \mathrm{or}\ T_{2}^{*}=T_{2}\ll\tau_{a}\end{array}\right.. \tag{21}\]
The essence of these results can be summarized as follows. First of all, in each of these regimes, the sensitivity goes with \(T_{\mathrm{tot}}^{-1/4}\). This is a consequence of the fact that we consider measurements on time scales much longer than the relaxation times and the ALP coherence time [27, 29]. The second point concerns the scanning strategy. With smaller NMR linewidth, under conditions when off-bandwidth detection is detrimental in terms of the SNR, we have to make more steps in scanning the target frequency band, resulting in less dwell time for each step. However, despite this disadvantage, there is still benefit from longer \(T_{2}\) and \(T_{2}^{*}\) that can be traced back to larger ALP-signal amplitude. We can consider the benefit in three cases: \(\tau_{a}\ll T_{2}^{*}\), \(T_{2}^{*}\ll\tau_{a}\) and \(T_{2}\ll\tau_{a}\). In the first case, longer \(T_{2}^{*}\) increases the r.m.s. tipping angle and decreases the ALP-signal linewidth, hence increasing the signal amplitude. Though the ALP-signal linewidth does not decrease with longer \(T_{2}^{*}\) in the second case, longer \(T_{2}^{*}\) leads to more spins being addressed by the ALP field as the magnetic-field inhomogeneity is reduced. In the third case where \(T_{2}\ll\tau_{a}\), longer \(T_{2}\) increases the r.m.s. tipping angle. Overall, longer \(T_{2}\) and \(T_{2}^{*}\) are beneficial for improving the sensitivity.
In summary, we find that the optimal strategy to achieve the goals of the experiment (mentioned in Section 1) is to work with maximally possible relaxation times (\(T_{2}\) and \(T_{2}^{*}\)) and split the available experimental time among frequency steps with the step size given by the maximum of the inhomogeneous broadening and the ALP width. When the dwell time at a certain frequency becomes much longer than the ALP coherence time, the sensitivity of the search scales as \(T^{-1/4}\).
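To make the scaling concrete, a short snippet of ours evaluates the relative reach of Equation (21) in the regime \(T_{2}^{*}\ll\tau_{a}\ll T_{2}\), up to the common prefactor \(\eta\); all parameter values are illustrative assumptions.

```
import numpy as np

def g_min_rel(T_tot, T2_star, tau_a, Q_a=1.86e6, band=(4.0e6, 4.3e6)):
    # relative |g_aNN| reach of Eq. (21), first case, omitting eta
    log_span = np.log(band[1] / band[0])
    return (T_tot / (Q_a * log_span)) ** -0.25 * T2_star ** -0.75 * tau_a ** -0.5

# halving T2* worsens (doubling T2* improves) the reach by 2**0.75 ~ 1.68
print(g_min_rel(30 * 86400, 0.5, 2.0) / g_min_rel(30 * 86400, 1.0, 2.0))
```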
## 4 Conclusion
We discussed a possible way of detecting axionlike DM through the ALP-fermion coupling using spin-precession experiments. The ALP-field gradient exerts a torque on nuclear spins, generating spin-precession signals that can be detected with magnetometers. A wide range of ALP masses can be scanned by sweeping the magnitude of the bias magnetic field. The scanning scheme is determined by the NMR and ALP
linewidth, while the sensitivity of the experiment (leading to the discovery or exclusion of coupling strength \(|\mathrm{g_{aNN}}|\)) is dependent on sample relaxation times and the ALP coherence time. We divided the discussion into four regimes for spin-precession experiments, based on the relative magnitudes of \(T_{2}\), \(T_{2}^{*}\) and \(\tau_{a}\). For each of the regimes, we introduced the scheme of scanning and calculated the corresponding sensitivity. Analyzing the parameters in determining the sensitivity, we conclude that to search for ALPs with a \(|\mathrm{g_{aNN}}|\) coupling over a range of ALP masses, there are advantages to increasing \(T_{2}^{*}\) and \(T_{2}\), especially when they are much shorter than \(\tau_{a}\). This maximizes the sensitive area of the experiment in the ALP mass-\(|\mathrm{g_{aNN}}|\)-coupling parameter space.

\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & Regime & Optimal dwell time & expected signal linewidth & spectral factor & r.m.s. tipping angle \\ & & \(T\) & \(\Gamma\) & \(u_{n}\) & \(\sqrt{\langle\xi^{2}\rangle}\) \\ \hline (1) & \(\tau_{a}\ll T_{2}^{*}\) & \(\frac{T_{\mathrm{tot}}}{Q_{a}\ln\left(\omega_{\mathrm{end}}/\omega_{\mathrm{start}}\right)}\) & \(\frac{2}{T_{2}^{*}}\) & \(1\) & \(\Omega_{a}\sqrt{\tau_{a}T_{2}^{*}}\) \\ (2) & \(T_{2}^{*}\ll\tau_{a}\ll T_{2}\) & \(\frac{T_{\mathrm{tot}}\tau_{a}}{\pi Q_{a}T_{2}^{*}\ln\left(\omega_{\mathrm{end}}/\omega_{\mathrm{start}}\right)}\) & \(\frac{2\pi}{\tau_{a}}\) & \(\frac{\pi T_{2}^{*}}{\tau_{a}}\) & \(\Omega_{a}\frac{\tau_{a}}{\sqrt{\pi}}\) \\ (3) & \(T_{2}^{*}\ll T_{2}\ll\tau_{a}\) & \(\frac{T_{\mathrm{tot}}\tau_{a}}{\pi Q_{a}T_{2}^{*}\ln\left(\omega_{\mathrm{end}}/\omega_{\mathrm{start}}\right)}\) & \(\frac{2\pi}{\tau_{a}}\) & \(\frac{\pi T_{2}^{*}}{\tau_{a}}\) & \(\Omega_{a}T_{2}\) \\ (4) & \(T_{2}^{*}=T_{2}\ll\tau_{a}\) & \(\frac{T_{\mathrm{tot}}\tau_{a}}{\pi Q_{a}T_{2}\ln\left(\omega_{\mathrm{end}}/\omega_{\mathrm{start}}\right)}\) & \(\frac{2\pi}{\tau_{a}}\) & \(\frac{\pi T_{2}}{\tau_{a}}\) & \(\Omega_{a}T_{2}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of parameters for different regimes. Note that \(\delta_{\omega}=\tau_{a}/(\pi Q_{a}T_{2}^{*})\) is assumed to be a constant in each regime.
Here we explicitly answer the questions posed in Sec. 1.
* The best frequency-step size for scanning should be comparable to the sensitive bandwidth, which is the NMR or ALP linewidth;
* The transverse relaxation time \(T_{2}\) and the effective transverse relaxation time \(T_{2}^{*}\) should always be as long as possible;
* The scaling of the sensitivity with the dwell time \(T\) or the total measurement duration \(T_{\mathrm{tot}}\) is \(\propto T^{-1/4}\) or \(\propto T_{\mathrm{tot}}^{-1/4}\) for all the cases considered here.
The present results will provide guidance for the CASPEr experiments and other spin-precession-based searches.
## Acknowledgements
The authors acknowledge helpful discussions with Younggeun Kim and Gilad Perez. This work was supported in part by the Cluster of Excellence "Precision Physics, Fundamental Interactions, and Structure of Matter" (PRISMA+ EXC 2118/1) funded by the German Research Foundation (DFG) within the German Excellence Strategy (Project ID 39083149) and COST Action COSMIC WISPers CA21106, supported by COST (European Cooperation in Science and Technology), and also by the U.S. National Science Foundation under grant PHYS-2110388. The authors would like to express special thanks to the Mainz Institute for Theoretical Physics (MITP) of the Cluster of Excellence PRISMA+ (Project ID 39083149), for its hospitality and support. The work of AOS was supported by the NSF CAREER grant PHY-2145162 and the U.S. Department of Energy, Office of High Energy Physics program under the QuantISED program, FWP 100495.
|
2309.17224 | Training and inference of large language models using 8-bit floating
point | FP8 formats are gaining popularity to boost the computational efficiency for
training and inference of large deep learning models. Their main challenge is
that a careful choice of scaling is needed to prevent degradation due to the
reduced dynamic range compared to higher-precision formats. Although there
exists ample literature about selecting such scalings for INT formats, this
critical aspect has yet to be addressed for FP8. This paper presents a
methodology to select the scalings for FP8 linear layers, based on dynamically
updating per-tensor scales for the weights, gradients and activations. We apply
this methodology to train and validate large language models of the type of GPT
and Llama 2 using FP8, for model sizes ranging from 111M to 70B. To facilitate
the understanding of the FP8 dynamics, our results are accompanied by plots of
the per-tensor scale distribution for weights, activations and gradients during
both training and inference. | Sergio P. Perez, Yan Zhang, James Briggs, Charlie Blake, Josh Levy-Kramer, Paul Balanca, Carlo Luschi, Stephen Barlow, Andrew William Fitzgibbon | 2023-09-29T13:24:33Z | http://arxiv.org/abs/2309.17224v1 | # Training and inference of large language models using 8-bit floating point
###### Abstract
FP8 formats are gaining popularity to boost the computational efficiency for training and inference of large deep learning models. Their main challenge is that a careful choice of scaling is needed to prevent degradation due to the reduced dynamic range compared to higher-precision formats. Although there exists ample literature about selecting such scalings for INT formats, this critical aspect has yet to be addressed for FP8. This paper presents a methodology to select the scalings for FP8 linear layers, based on dynamically updating per-tensor scales for the weights, gradients and activations. We apply this methodology to train and validate large language models of the type of GPT and Llama 2 using FP8, for model sizes ranging from 111M to 70B. To facilitate the understanding of the FP8 dynamics, our results are accompanied by plots of the per-tensor scale distribution for weights, activations and gradients during both training and inference.
## 1 Introduction
Reducing the number of bits used by numerical formats offers significant efficiency gains for the training and inference of deep learning models. Inference latency is typically bottlenecked by the memory and communication bandwidth of a system (Pope et al., 2023), model-size by the total available memory, and throughput and training-time are often limited by the rate at which operations can be executed. All of these factors are improved substantially if we are able to represent values using fewer bits, with costs typically scaling linearly in the number of bits per value.
These benefits motivated the adoption of 16-bit floating-point formats -- FP16 (Micikevicius et al., 2017) and BF16 (Kalamkar et al., 2019) -- over the FP32 format used to represent continuous values for early deep learning models. More recently, 8-bit floating-point (FP8) formats have been proposed alongside hardware with dedicated support for FP8 arithmetic (Noune et al., 2022, Micikevicius et al., 2022), offering further efficiency gains. The standardisation of the FP8 format is under active development by the IEEE working group P3109 (2023). The reader can find an introduction of floating-point formats for deep learning in Appendix A, and a description of the different FP8 formats in Appendix B. In this work, we assume the formats of Noune et al. (2022) when referring to FP8, denoting as FP8 E4 the weight and activation format and as FP8 E5 the gradient format.
These initial studies indicate that FP8 inference and training (that is, mixed-precision with matrix multiplications in FP8) are indeed possible, but come with a range of associated difficulties. Removing mantissa bits from a format limits numerical accuracy, while removing exponent bits limits the range of values that can be represented. The latter problem poses a particular challenge to practitioners: how to ensure that the set of values generated when performing model training and inference is within the set of representable values. Overflowing or underflowing this range can rapidly degrade model accuracy.
```
class LinearFP8Training:
    # FP formats: fp8e4, fp8e5, fp16, fp32, bf16
    # fp16 tensors can be replaced by fp32 or bf16
    fp8e5_max: fp8e5 = 57344
    fp8e4_max: fp8e4 = 240

    def forward(w8: fp8e4, w_scale: int, x: fp16) -> fp16:
        x_scale: int = compute_bias(x, fp8e4)
        x: fp16 = scale(x, x_scale)
        x8: fp8e4 = cast(x, fp8e4)
        y: fp16 = matmul(x8, w8.T)
        y: fp16 = unscale(y, x_scale + w_scale)
        return y

    def backward(dy: fp16, w8: fp8e4, w_scale: int,
                 x8: fp8e4, x_scale: int) -> (fp16, fp16):
        dy_scale: int = compute_bias(dy, fp8e5)
        dy: fp16 = scale(dy, dy_scale)
        dy8: fp8e5 = cast(dy, fp8e5)
        dx: fp16 = matmul(dy8, w8.T)
        dx: fp16 = unscale(dx, dy_scale + w_scale)
        dw: fp16 = matmul(dy8, x8.T)
        dw: fp16 = unscale(dw, dy_scale + x_scale)
        return dx, dw

    def unscale(v: fp16, v_scale: int) -> fp16:
        return v * 2 ** (-v_scale)
```
To combat this for FP16 training, the standard approach is to globally shift gradients by a single _loss scale_[11, 12, 13], though this is not always sufficient [11, 12]. For inference, a popular technique is quantisation to the 8-bit _integer_ format (INT8). Previous generations of AI hardware have offered accelerated arithmetic for INT8 but not FP8, limiting FP8 uptake despite its potential as a more broadly-applicable 8-bit format in the context of machine learning (see Appendix C for further discussion). More complex group-quantisation schemes have also been proposed for inference which enable some values to be stored in fewer than 8 bits [10]. However, this introduces additional complexity and compute must still be done in higher-precision.
To address the issue of substantially reduced range for FP8 formats, it has been proposed to rely on the exponent bias associated with FP8 tensors. The exponent bias is part of the definition of every
floating-point format. By adding or subtracting an integer to the exponent bias, one can effectively shift the representable range on a per-tensor basis, giving more granular scaling than standard _loss scaling_ and applying to both forward and backward passes. This integer, denoted as _scaling bias_, is supplied by the user and can be supported either in software or directly in hardware.

Figure 1: Training phase of a linear layer quantised to FP8. The forward and backward pass illustrate how the scaling biases are computed and applied to the weights, activations and gradients.

Figure 2: Inference phase of a linear layer quantised to FP8. Post-training quantisation is applied to a checkpoint. Scaling biases are computed and applied to the weights and activations.
The process by which these scales are determined and how they are practically applied is essential to leveraging the benefits of FP8 for training and inference. Existing FP8 literature has not covered this topic extensively, leaving users reliant on scaling decisions taken in software implementations that may not be clearly justified (Nvidia, 2022b). We seek to support this important design aspect through the following contributions:
1. We present a methodology to select the per-tensor scaling biases in the linear layers present in large language models of the type of GPT (Brown et al., 2020) and Llama (Touvron et al., 2023). Such methodology is illustrated in Figure 1 for the training phase and in Figure 2 for the inference phase. These specific details are useful for practitioners aiming to leverage FP8 and have been missing from the FP8 literature, which has either employed sweeps of values (Noune et al., 2022) or not specified how the scaling biases are computed (Micikevicius et al., 2022).
2. We showcase how our FP8 methodology leads to convergence of GPT and Llama models from 111M to 70B parameters, for both inference and training.
3. For inference, we detail how our methodology can be employed as post-training quantisation to cast a high-precision checkpoint to FP8 and perform inference without degradation.
4. For training, we prove that our methodology is able to dynamically update the per-tensor scaling biases and prevent degradation using FP8 in large language models. We provide plots of how the scaling biases evolve and extract insights from them.
## 2 The linear layer adapted to FP8
Performing the matrix multiplication operation in FP8 requires the use of _scalings_ to prevent underflow and overflow. By _scalings_ we mean factors that, when multiplied with a tensor, yield a scaled tensor representable in the FP8 dynamic range; without such a scale, the tensor underflows or overflows. Such scalings are needed for the matrix multiplications found in both the forward pass (to compute the activations) and the backward pass (to compute weight and activation gradients). Using scalings for lower precision is not new and has been a popular strategy for FP16 training, with the loss scaling method (Noune et al., 2022; Perez, 2022; Micikevicius et al., 2017) consisting of multiplying the loss function by a constant to prevent underflow of the gradients. Although loss scaling works well for reasonably sized FP16 models, as the number of parameters increases the limited range of FP16 becomes an issue. Models of more than 100 billion parameters like Bloom (Scao et al., 2022) or OPT (Zhang et al., 2022) struggled to find a stable loss scaling for FP16 and ended up employing BF16. It is therefore uncertain whether, even for FP16, a single common scaling for all the gradients is enough. The same question has been explored for FP8: it is not clear whether one scaling suffices (Noune et al., 2022) or a per-tensor scaling is needed (Micikevicius et al., 2022). In addition, for FP8 E4, weights and activations also need scalings due to the reduced dynamic range compared to FP16.
Figure 3 illustrates how the scalings are implemented for the forward pass of an FP8 linear layer. Firstly, focusing on full FP16 precision, Figure 3(a) displays both weights and activations in FP16: no scaling is needed before the matrix multiplication, whose accumulation can also be performed in FP16. This scenario is identical for other formats like FP32 or BF16. In comparison, Figure 3(b) shows how different scaling and casting blocks are needed to leverage FP8 matrix multiplication in mixed precision. The inputs are FP8 but the output is FP16: this dichotomy comes from the need to accumulate the partial results of the FP8 operations in FP16 to prevent overflows. Since the accumulation is in FP16, hardware providers (Graphcore, 2022b; Nvidia, 2022a) output the internal FP16 result and let the user decide whether to cast back down to FP8.
**Weights.** For training and inference, the linear layer needs to be modified to include a cast to FP8 E4 from a higher-precision format like FP16. In training, this cast is necessary after every weight update, which takes place in a higher-precision format like FP16 or FP32. In inference, if the weights are stored in FP8 then no cast is needed. Conversely, if the weights are in a higher-precision format like FP16, BF16 or FP32, a cast to FP8 E4 is needed just once before using those weights in the matrix multiplication. In both cases, before the cast to FP8 E4, the weights must be scaled to prevent underflow or overflow when performing the cast. The scaling shifts the weight distribution so that it overlaps as much as possible with the dynamic range of FP8 E4. The optimal scalings may change during training, so the scaling needs to be recomputed after a certain number of steps. During inference, the scalings do not change since the weights are not updated.
**Activations.** Because the matrix multiplication accumulation is done in higher precision, it is necessary to cast back to FP8 E4 before the next matrix multiplication. When casting to FP8 E4, we need a scaling factor to minimise underflow and overflow, since the dynamic range of FP8 E4 is narrower than that of higher-precision formats like FP16. After the matrix multiplication is performed, the output activations are unscaled, taking into account the scaling factors computed for the weights and activations before the matrix multiplication.
### Applying a scaling bias before casting to FP8
Casting the weights and activations from FP16 to FP8 E4 results in a narrower dynamic range that may lead to underflow or overflow. To prevent it, we introduce per-tensor scalings that shift the FP16 distributions before casting to FP8 E4. The type of scaling employed in this work is a _scaling bias_. Starting from the floating point representation defined in Equation 4, we add an integer scaling bias \(b_{\mathrm{scale}}\) to the \(\mathrm{exponent}\) such that
\[\mathrm{scaled\ exponent}=b_{\mathrm{exp}}-\mathrm{bias}+b_{\mathrm{scale}}, \tag{1}\]
which is equivalent to multiplying the FP16 number by \(2^{b_{\mathrm{scale}}}\). Both the weights and activations in Figure 3(b) require a scaling bias before being cast from FP16 to FP8 E4. Let us denote as \(b_{\mathrm{w},\mathrm{scale}}\)
Figure 3: Comparison of the forward pass for a FP16 vs FP8 linear layer.
and \(b_{\text{x,scale}}\) the scaling biases for the weights and activations, respectively. Then, once the matrix multiplication is performed in a higher-precision accumulation like FP16, the resulting activations need to be unscaled by applying a scaling bias equal to \(-(b_{\text{w,scale}}+b_{\text{x,scale}})\):
\[\text{unscaled exponent}=b_{\text{exp}}-\text{bias}-(b_{\text{w,scale}}+b_{ \text{x,scale}}). \tag{2}\]
We refer the reader to the scale and unscale functions in Figure 1, which are employed in the code for the training and inference phases in Figures 1 and 2.
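For concreteness, the following is a minimal NumPy sketch of what such scale and unscale helpers can look like; the names mirror the Figure 1 pseudocode, but this is an illustration rather than the exact implementation used here.

```
import numpy as np

def scale(v: np.ndarray, scaling_bias: int) -> np.ndarray:
    # Shift the exponent by +scaling_bias, i.e. multiply by 2**scaling_bias (Equation 1).
    return v * 2.0**scaling_bias

def unscale(v: np.ndarray, scaling_bias: int) -> np.ndarray:
    # Undo the shift by multiplying by 2**(-scaling_bias) (Equation 2).
    return v * 2.0**(-scaling_bias)

# After an FP8 matmul with FP16 accumulation, the output is unscaled by the sum of
# the input scaling biases, e.g. y = unscale(matmul(x8, w8.T), x_scale + w_scale).
```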
### FP8 for gradients during training
The backward pass for the linear layer contains two matrix multiplications: one to compute the weight gradients and another for the input activation gradients. Both matrix multiplications can be accelerated with FP8. The process is similar to the matrix multiplication in the forward pass: the inputs of the matrix multiplication need to be scaled and then cast to FP8 before being passed to the matrix multiplication. Subsequently, the matrix multiplication output (i.e the weight gradients or activation gradients) are unscaled taking into account the scales of the FP8 matrix multiplication inputs. It's important to recall that the FP8 type is different for weights and activations versus gradients: whereas the weights and activations are cast to FP8 E4, the gradients need to be cast to FP8 E5 to preserve a wider dynamic range (see Appendix B for the differences between the two formats). We refer the reader to the pseudocode in Figure 1 for details about the backward pass in FP8.
### Choosing the appropriate scaling bias
There are various methods to quantise from a higher-precision format into a lower one. Some popular approaches to cast from a floating point format like FP32 into a fixed-point format like INT8 consist of mapping the largest absolute value to \(\pm 127\), which is the maximum representable integer in INT8. This ensures that the outliers fit within the dynamic range of INT8, but may underutilise the dynamic range if the outliers are much larger than the other values. Other approaches consider a percentile or the full distribution of values and compute the mean square error or KL divergence to minimise the information loss between the higher-precision distribution and the quantised one.
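As a point of comparison, a minimal sketch of the absolute-maximum INT8 mapping mentioned above (illustrative only; the percentile- and divergence-based variants are not shown):

```
import numpy as np

def quantise_int8_absmax(t: np.ndarray):
    # Map the largest absolute value onto 127; all other values scale linearly.
    s = 127.0 / np.max(np.abs(t))
    q = np.clip(np.round(t * s), -127, 127).astype(np.int8)
    return q, s  # the scale s is kept so that t can be recovered approximately as q / s
```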
In this work we propose a methodology based on setting dynamic per-tensor scalings, computed via the absolute maximum approach. Our strategy has similarities to the Nvidia (2022b) library; however some of the fine-grained details and justifications of this implementation are not made explicit. We hope that by opening up our methodology and testing it in the experiments in Section 4, other FP8 researchers can build on top of it.
Our methodology depends on the maximum representable number of the FP8 format, which is different for the FP8 E4 and FP8 E5 formats (see Appendix B). Denoting that maximum as \(\max_{\text{num}}\), the calculation of the scaling bias per tensor follows
\[\begin{split}\text{amax}=\max\left(\left|\text{tensor}\right| \right),\\ \text{scaling\_bias}=\text{floor}\left(\log_{2}\left(\max_{ \text{num}}/\text{amax}\right)\right),\end{split} \tag{3}\]
where \(\text{floor}(\text{a})\) returns the largest integer not greater than a. The function compute_bias in Figure 1 translates this algorithm into code. For training (see Figure 1), three scaling biases are computed in each linear layer, corresponding to the weights, input activations and output activation gradients. For inference (see Figure 2), only the weights and input activations need scaling biases.
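A minimal NumPy sketch of Equation 3, standing in for the compute_bias function of Figure 1 (here the FP8 format is identified by its maximum representable value rather than a type tag, and the weight matrix is an illustrative random array):

```
import numpy as np

FP8E4_MAX = 240.0     # max_num for FP8 E4 (weights and activations)
FP8E5_MAX = 57344.0   # max_num for FP8 E5 (gradients)

def compute_bias(tensor: np.ndarray, max_num: float) -> int:
    amax = np.max(np.abs(tensor))
    return int(np.floor(np.log2(max_num / amax)))  # Equation 3

# Example: per-tensor scaling bias for a weight matrix destined for FP8 E4.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=(4096, 4096)).astype(np.float16)
w_scale = compute_bias(w, FP8E4_MAX)
```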
### Loss scaling in addition to scaling bias when accumulating in FP16
Loss scaling is a popular technique to enable FP16 training (Noune et al., 2022; Perez, 2022; Micikevicius et al., 2017). Loss scaling is necessary in FP16 because the gradients underflow due to the narrower dynamic range of FP16 compared to other formats like FP32 or BF16. The reason loss scaling is also relevant for FP8 quantisation is the higher-precision accumulation of the FP8 matrix multiplication. Such accumulation is usually performed in FP16, BF16 or FP32 (Graphcore, 2022b; Nvidia, 2022a). If it were done in FP8, it would not work, due to the limited dynamic range of FP8 E4 or the lack of precision of FP8 E5. As a result, the linear layer quantisation to FP8 described in this section is actually mixed-precision quantisation.
When the accumulation is performed in BF16 or FP32, loss scaling is not necessary and just the scaling biases explained in Subsection 2.3 are enough to prevent underflow or overflow after casting
to FP8. However, when the accumulation is performed in FP16, loss scaling is needed to better represent the gradients after they are output by the FP8 matrix multiplication and unscaled. The method to tune the loss scaling for mixed FP8-FP16 training is identical to that for full FP16 training. There are several approaches in the literature: run a sweep of loss scalings [11], inspect the gradient histogram to adapt the loss scaling during training [22], back off and skip weight updates when an overflow occurs, or scale the loss such that its mean plus a constant times its standard deviation equals \(\log_{2}\) of the maximum representable value in FP16 [10]. We refer the reader to section 4 of [20] for an analysis of how these loss scaling methods affect mixed FP8-FP16 training. In our experiments in Section 4, we use a constant loss scaling, with the same values for full FP16 training and mixed FP8-FP16 training.
## 3 Details to perform training and inference in FP8
We follow two different strategies to compute the scaling bias for training and inference:
* FP8-AMAX: the absolute maximum method detailed in Section 2.3 and in the compute_bias function of Figure 1. The calculation takes place per linear layer, for every micro-batch and every data or tensor replica, following the diagram in Figure 3(b).
* FP8-CSCALE: a simpler strategy based on having the same scaling bias for all weights, activations and gradients. The scaling bias remains constant throughout the training and inference. We run sweeps of scaling bias values to find the ones that don't degrade accuracy.
Even though in this paper we focus on the numerical differences, it is worth pointing out that the relative throughput and memory cost of FP8-AMAX versus FP8-CSCALE depends on the hardware employed. When using FP8-AMAX on hardware with limited SRAM, FP16 tensors in the L2 cache incur the overhead of a second round trip to memory: the first to calculate the tensor's absolute max, and the second to apply the scaling. This cost could cancel out the speedup from the FP8 matmuls. A remedy could be to rely on the past history of the absolute max instead of the just-in-time absolute max (Nvidia, 2022b). In contrast, hardware with enough SRAM can calculate the scaling biases just-in-time and perform FP8 as detailed in this work.
### Inference with FP8
When performing inference, the weights come from a checkpoint that is either in a higher-precision format like FP16, BF16 or FP32, or directly in FP8 E4. In the former case, quantising the weights to FP8 is simpler than for fixed-point representations like INT8, which may need quantisation-aware training (QAT) in addition to post-training quantisation (PTQ) [23]. For FP8, it is enough to employ PTQ consisting of applying a scaling bias to each tensor and subsequently casting to FP8, as described in Section 2.3. The scaling bias calculation for the weights is performed only once when loading the checkpoint (see Figure 2). In the latter case, when the checkpoint comes from training in FP8, the weights can be used directly without any quantisation.
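A minimal sketch of this PTQ step, assuming the checkpoint is a plain dictionary of higher-precision weight arrays; the scaled values would then be cast to FP8 E4 by the hardware or framework, which is not shown here.

```
import numpy as np

FP8E4_MAX = 240.0

def ptq_checkpoint_to_fp8(checkpoint: dict) -> dict:
    """One scaling bias per weight tensor, computed once when loading the checkpoint."""
    quantised = {}
    for name, w in checkpoint.items():
        bias = int(np.floor(np.log2(FP8E4_MAX / np.max(np.abs(w)))))  # Equation 3
        quantised[name] = (w * 2.0**bias, bias)  # scaled weights plus the bias needed to unscale
    return quantised

# Toy "checkpoint" with two FP16 weight matrices.
rng = np.random.default_rng(1)
ckpt = {"layer0.weight": rng.normal(scale=0.02, size=(8, 8)).astype(np.float16),
        "layer1.weight": rng.normal(scale=0.02, size=(8, 8)).astype(np.float16)}
fp8_ready = ptq_checkpoint_to_fp8(ckpt)
```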
### Training with FP8
For pre-training or fine-tuning, we need different FP8 formats for the weights/activations and gradients (see Appendix B and Noune et al. [2022]). For both formats, we compute the scaling bias following either FP8-AMAX or FP8-CSCALE, as stated in each of the experiments in Section 4. We perform the weight update in FP16 and keep master weights in FP16. The calculation of the scaling bias for the weights and the weight cast to FP8 E4 takes place just after the weight update. When accumulating in FP16, there is a risk of overflow when performing the two matrix multiplications of the backward pass, whose inputs are FP8 E4 and FP8 E5: FP8 E5 and FP16 have a similar dynamic range (see Table 7), and when employing FP8-AMAX the resulting FP8 E5 input to the matmul takes values close to the maximum representable number in FP16. Consequently, we set a _margin_ that reduces the scaling bias resulting from the FP8-AMAX method. Empirically we observe that a value of \(3\) is enough to prevent overflow. The optimal value for this margin is related to the square root of the batch size [11, 23], which in our fine-tuning experiments is 512 (see Appendix H). This results in an optimal margin of \(\log_{2}(\sqrt{512})=4.5\), which is close to our empirical value of \(3\).
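The margin arithmetic quoted above, spelled out:

```
import math

batch_size = 512
optimal_margin = math.log2(math.sqrt(batch_size))  # 0.5 * log2(512) = 4.5
# An empirical margin of 3 is already sufficient to prevent overflow in our runs.
```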
## 4 Experiments
### Model architecture used for the experiments
We employ two varieties of language transformer decoder models in our experiments. The first one is a GPT-3-like architecture (Brown et al., 2020) with the sole difference of using dense attention in all decoder blocks, instead of dense and sparse-banded attention. For this model we test five different model sizes (see Table 1). In our fine-tuning experiments, we employ the pre-trained checkpoints provided by Dey et al. (2023). In our inference experiments, we start from an already fine-tuned checkpoint in FP16 for each specific task. We focus on three GLUE tasks (Wang et al., 2018): the natural language inference task MNLI, the single-sentence task SST-2 and the similarity and paraphrase task QQP.
The second variety of decoder language model is the Llama 2 model detailed in Touvron et al. (2023). The main changes with respect to the GPT-3-like architecture are the pre-normalization using RMSNorm, SwiGLU as activation function and rotary positional embeddings. In addition, the 70-billion-parameter version employs grouped-query attention. We employ the open-source checkpoints from the pre-trained models that are not fine-tuned for dialogue use cases. The details of the 2 sizes tested in our experiments are shown in Table 1. We focus on six benchmarks included in Touvron et al. (2023): MMLU, HellaSwag, ARC-e, ARC-c, PIQA and WinoGrande.
For both architectures, we quantise to FP8 the linear layers in all the decoder layers. Details about such linear layers are shown in Appendix E. Figure 4 displays the main components of the GPT and Llama decoder layers and indicates the ones quantised to FP8. Further details about hyperparameters and hardware to run the experiments are contained in Appendix H.
### FP8 inference for the GPT model
We compare the validation results using the FP8-AMAX and FP8-CSCALE methods versus the FP16 benchmark, for a GPT model with sizes from 111M to 13B. The results are displayed in Table 2. With both approaches we manage to match the FP16 validation accuracy for all sizes.
For the FP8-CSCALE method, we run sweeps of scaling biases. Not all the scaling biases reach the FP16 accuracy, and in Table 2 we report the average accuracy obtained with only the values that reach a final accuracy greater than 99.5% of the FP16 value. The interval containing the convergent values is displayed in Table 3. For the scaling bias values outside the intervals in Table 3, the validation accuracy degrades significantly. In Figure 5 in Appendix F we show a comparison of the accuracy obtained with each scaling bias in the sweep, for the MNLI task. As soon as the chosen scaling bias falls outside the interval, the accuracy quickly degrades. On average we observe that the interval of convergent scaling bias values contains five integers centred around zero.
For the FP8-AMAX method, there's a different scaling bias for each weight and activation tensor. To understand how the different scaling biases vary depending on the decoder layer and type of linear layer, we plot their distributions in Figure 6 for the 111M, 1.3B and 6.7B parameter models. The reader can find details about how Figure 6 is produced in Appendix G, together with some insights about the scaling bias distribution.
\begin{table}
\begin{tabular}{l|c c c c c} Parameters & \(d_{\mathrm{model}}\) & \(n_{\mathrm{layers}}\) & \(n_{\mathrm{heads}}\) & \(d_{\mathrm{head}}\) & \(d_{\mathrm{ffn}}\) \\ \hline GPT 111M & 768 & 10 & 12 & 64 & 3072 \\ GPT 590M & 1536 & 18 & 12 & 128 & 6144 \\ GPT 1.3B & 2048 & 24 & 16 & 128 & 8192 \\ GPT 6.7B & 4096 & 32 & 32 & 128 & 16384 \\ GPT 13B & 5120 & 40 & 40 & 128 & 20480 \\ Llama 2 7B & 4096 & 32 & 32 & 128 & 11008 \\ Llama 2 70B & 8192 & 80 & 64 & 128 & 28672 \\ \end{tabular}
\end{table}
Table 1: Hierarchy of GPT and Llama 2 model sizes used in the training and validation experiments.
### FP8 few-shot inference for the Llama 2 model
We run six of the evaluation benchmarks in Touvron et al. (2023) with both FP16 and FP8-AMAX, for the model sizes of 7B and 70B parameters. For the benchmarks we employ Eleuther's Evaluation Harness Library (Gao et al., 2021). The results are displayed in Table 4. We find that the FP16 and FP8-AMAX quantisations give comparable results. For some benchmarks like HellaSwag there is some difference with respect to the result published in Touvron et al. (2023), which we attribute to the fact that the authors employ an internal evaluation library different from Gao et al. (2021). We checked this by comparing the harness' benchmark results in FP32 running on CPU to those obtained with FP16 and confirmed that the metrics obtained are identical.
### Is FP8-CSCALE enough to train in FP8?
Running sweeps of loss scaling values is a common practice when training models in FP16. As the size of the model increases, one typically needs to increase the loss scaling value. Even though there exist more sophisticated approaches to update the loss scaling during training (Perez, 2022; Kuchaiev et al., 2018), practitioners still run sweeps of loss scaling values until they find one that converges.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**Model** & **Quantisation** & **MMLU** & **HellaSwag** & **ARC-e** & **ARC-c** & **PIQA** & **WinoGrande** \\ \hline \multirow{3}{*}{7B} & Llama 2 paper & 45.3 & 77.2 & 75.2 & 45.9 & 78.8 & 69.2 \\ & FP16 & 46.6 & 76.0 & 74.6 & 46.3 & 79.1 & 69.1 \\ & FP8-AMAX & 46.3 & 75.8 & 74.5 & 45.7 & 78.7 & 69.1 \\ \hline \multirow{3}{*}{70B} & Llama 2 paper & 68.9 & 85.3 & 80.2 & 57.4 & 82.8 & 80.2 \\ & FP16 & 69.6 & 83.8 & 81.1 & 57.3 & 82.8 & 78.0 \\ \cline{1-1} & FP8-AMAX & 69.3 & 83.8 & 80.9 & 57.7 & 82.6 & 78.5 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Inference results of Llama 2. For the evaluation we follow Touvron et al. (2023), performing 5-shot evaluation for MMLU and 0-shot evaluation for HellaSwag, ARC-e, ARC-c, PIQA and WinoGrande. For WinoGrande we report the accuracy and for MMLU, HellaSwag, ARC-e, ARC-c and PIQA the normalized accuracy, which takes into account the length of each possible answer.
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Model** & **Quantisation** & **MNLI** & **QQP** & **SST-2** \\ \hline \multirow{3}{*}{111M} & FP16 & 72.61 & 85.76 & 84.26 \\ & FP8-AMAX & 72.39 & 85.78 & 84.38 \\ & FP8-CSCALE & 72.49 & 85.73 & 84.59 \\ \hline \multirow{3}{*}{590M} & FP16 & 78.59 & 88.40 & 90.63 \\ & FP8-AMAX & 78.44 & 88.37 & 90.63 \\ & FP8-CSCALE & 78.56 & 88.40 & 90.54 \\ \hline \multirow{3}{*}{1.3B} & FP16 & 82.82 & 89.43 & 91.55 \\ & FP8-AMAX & 82.68 & 89.42 & 91.44 \\ & FP8-CSCALE & 82.72 & 89.36 & 91.42 \\ \hline \multirow{3}{*}{6.7B} & FP16 & 87.17 & 91.19 & 94.50 \\ & FP8-AMAX & 87.15 & 91.22 & 94.38 \\ & FP8-CSCALE & 87.18 & 91.18 & 94.48 \\ \hline \multirow{3}{*}{13B} & FP16 & 88.26 & 91.22 & 94.61 \\ & FP8-AMAX & 88.27 & 91.21 & 94.61 \\ \cline{1-1} & FP8-CSCALE & 88.26 & 91.20 & 94.50 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Inference results: validation accuracy comparing FP16 with FP8-AMAX and FP8-CSCALE, for the different GPT model sizes.
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Model** & **MNLI** & **QQP** & **SST-2** \\ \hline 111M & [-3, 2] & [-4, 2] & [-4, 2] \\ 590M & [-3, 2] & [-4, 2] & [-1, 2] \\ 1.3B & [-3, 3] & [-4, 2] & [-3, 2] \\ 6.7B & [-3, 2] & [-3, 2] & [-3, 2] \\ 13B & [-3, 2] & [-4, 2] & [-4, 2] \\ \hline \hline \end{tabular}
\end{table}
Table 3: Inference results with FP8-CSCALE: range of the scaling bias that reaches a validation accuracy greater than 99.5% of the FP16 value, when performing FP8 validation with FP8-CSCALE. Both weights and activations in all decoder layers share the same scaling bias.
Inspired by this practice, we aim to understand whether the FP8-CSCALE approach is able to converge to the required accuracy. To that end, we run sweeps of values and let the fine-tuning for the MNLI task run for three epochs for the smaller models up to 1.3B and one epoch for the 6.7B and 13B models. We then check whether the validation accuracy matches the reference FP16 fine-tuning.
Our results are summarised in Table 6. We are able to converge to a validation accuracy of at least 99.5% of the FP16 reference for all the model sizes, but as the size increases the range of converging scaling biases gets reduced. For the larger model sizes of 6.7B and 13B, we observe that convergence is not always guaranteed even within the intervals in Table 6: for example, a different seed can lead to divergence. These results suggest that FP8-AMAX is a more robust strategy when fine-tuning in FP8 compared to FP8-CSCALE, even though convergence with FP8-CSCALE may be possible.
### FP8 fine-tuning results for the GPT model
After testing FP8-CSCALE, we employ the FP8-AMAX method to fine-tune the GPT models for sizes from 111M to 13B. With FP8-AMAX we converge for all the sizes tested and for the three GLUE tasks MNLI, QQP and SST-2, matching the validation accuracy reached in FP16. The results are displayed in Table 5. The loss function evolution also converges similarly when comparing FP8-AMAX and FP16. The loss function plots for the MNLI task are shown in Figure 10 in Appendix J.
In Appendix I we provide plots and analysis about how the scaling biases evolve as the fine-tuning progresses, for the model sizes of 111M, 1.3B and 6.7B. Inspecting the per-tensor scalings resulting from FP8-AMAX is helpful to elucidate why the FP8-CSCALE strategy in Subsection 4.4 is not robust for large models. It also gives insights about the update frequency needed if one is interested in saving some of the extra computations needed to update the scaling bias with FP8-AMAX.
## 5 Conclusion
We provide the technical details for practitioners interested in leveraging FP8 quantisation to effectively employ it for inference and training. We show that our methodology is able to adapt the scaling biases to prevent underflow or overflow from the FP8 format and match the reference results obtained in higher precision, for large language models like GPT and Llama up to 70B parameters.
In this work we have focused on quantising the linear layers to FP8, but other layers ubiquitous in most transformer architectures may also benefit from FP8 quantisation, such as the dot-product attention. We will explore these in future work, as well as the application of FP8 to models outside the transformer family, such as graph neural networks or computer vision models based on convolutional layers.
\begin{table}
\begin{tabular}{c c} \hline
**Model** & **MNLI** \\ \hline
111M & [-3, 2] \\
590M & [-2, 2] \\
1.3B & [-2, 1] \\
6.7B & [-1, 1] \\
13B & [-1, 0] \\ \hline \end{tabular}
\end{table}
Table 6: Fine-tuning results with FP8-CSCALE: range of the scaling bias that reaches a validation accuracy greater than 99.5% of the FP16 value, when performing FP8 fine-tuning with FP8-CSCALE. Weights, activations and gradients in all decoder layers share the same scaling bias.
\begin{table}
\begin{tabular}{c c c c c} \hline
**Model** & **Quantisation** & **MNLI** & **QQP** & **SST-2** \\ \hline
111M & FP16 & 72.61 & 85.32 & 85.07 \\ & FP8-AMAX & 72.50 & 85.84 & 85.57 \\ \hline
590M & FP16 & 78.59 & 88.25 & 89.27 \\ & FP8-AMAX & 79.12 & 88.31 & 89.00 \\ \hline
1.3B & FP16 & 82.82 & 89.32 & 91.36 \\ & FP8-AMAX & 82.58 & 89.32 & 91.28 \\ \hline
6.7B & FP16 & 87.17 & 91.19 & 94.53 \\ & FP8-AMAX & 87.26 & 91.06 & 94.84 \\ \hline
13B & FP16 & 88.26 & 91.22 & 94.61 \\ & FP8-AMAX & 88.28 & 91.53 & 94.50 \\ \hline \end{tabular}
\end{table}
Table 5: Fine-tuning results: validation accuracy after fine-tuning in FP16 and FP8-AMAX for 3 epochs.
## Acknowledgements
We would like to thank the following people for their contributions to the paper at the various stages of its development: Matthew Haddock, Shiraz Butt, Artemiy Bulavin, Mark Kattenbelt, Godfrey Da Costa, Jake Hall, Tim Poole, Douglas Orr, Graham Horn, Ian Hales, Sylvain Viguier, Anjlee Gopiani, Arsalan Uddin and Manuele Sigona.
|
2309.07037 | When and how does ram pressure stripping in low-mass satellite galaxies
enhance star formation | We investigate how a satellite's star formation rate (SFR) and surviving gas
respond to ram pressure stripping in various environments. Using a suite of
high-resolution "wind-tunnel" simulations with radiative cooling, star
formation, and supernovae feedback, we model the first infall orbit of a
low-mass disk galaxy ($M_{*} = 10^{9.7} M_{\odot}$) in different host halos,
ranging from Milky Way-like to cluster hosts. When the ram pressure is
moderate, we find that the stripping satellite shows an enhanced SFR relative
to the isolated control case, despite gas loss due to stripping. The SFR
enhancement is caused, not directly by compression, but by ram pressure-driven
mass flows, which can increase the dense gas fraction in the central disk
regions. The spatially-resolved star formation main sequence and
Kennicutt-Schmidt relations in our simulations are consistent with recent
findings of the VERTICO and GASP surveys. Our results predict the environmental
signals of RPS in future multiwavelength, high-angular resolution observations:
the star formation and gas surface densities will be centralized, and
symmetrically enhanced within the stripping radius. | Jingyao Zhu, Stephanie Tonnesen, Greg L Bryan | 2023-09-13T15:45:08Z | http://arxiv.org/abs/2309.07037v1 | # When and how does ram pressure stripping in low-mass satellite galaxies enhance star formation
###### Abstract
We investigate how a satellite's star formation rate (SFR) and surviving gas respond to ram pressure stripping in various environments. Using a suite of high-resolution "wind-tunnel" simulations with radiative cooling, star formation, and supernovae feedback, we model the first infall orbit of a low-mass disk galaxy (\(M_{*}=10^{9.7}~{}M_{\odot}\)) in different host halos, ranging from Milky Way-like to cluster hosts. When the ram pressure is moderate, we find that the stripping satellite shows an enhanced SFR relative to the isolated control case, despite gas loss due to stripping. The SFR enhancement is caused, not directly by compression, but by ram pressure-driven mass flows, which can increase the dense gas fraction in the central disk regions. The spatially-resolved star formation main sequence and Kennicutt-Schmidt relations in our simulations are consistent with recent findings of the VERTICO and GASP surveys. Our results predict the environmental signals of RPS in future multiwavelength, high-angular resolution observations: the star formation and gas surface densities will be centralized, and symmetrically enhanced within the stripping radius.
hydrodynamical simulations -- star formation -- galaxy interactions -- interstellar medium
## 1 Introduction
A galaxy can either be star-forming or quenched, depending on internal and environmental processes (Kauffmann et al., 2004; Baldry et al., 2006; Peng et al., 2010). For the central galaxies within halos, internal processes such as supernova and AGN feedback are the main star formation regulators (Croton et al., 2006; Dalla Vecchia and Schaye, 2008). This explains the "main sequence" of star formation over cosmic time: a tight correlation between the star formation rate (SFR) and stellar mass (\(M_{*}\)) of galaxies (Speagle et al., 2014). For satellite galaxies, environmental factors from the interactions with a central halo become significant: satellites show a strong observational bias to be 'red', or star formation quenched, compared with their central counterparts at the same stellar masses (Peng et al., 2012; Wetzel et al., 2012; Phillips et al., 2014). The environmental quenching of satellite galaxies is also ubiquitous in cosmological simulations (Tremmel et al., 2019; Wright et al., 2019; Appleby et al., 2020; Donnari et al., 2021a,b).
Despite the consensus of environmental quenching in observations and simulations, uncertainties remain in the mass dependence and the scatter of the quenching effectiveness (Donnari et al., 2021). The uncertainties likely arise from the complex physical processes during the satellite-environment interactions (see recent review by Cortese et al., 2021), of which the dominant is ram pressure stripping (RPS; Gunn and Gott, 1972), the direct removal of the satellite's interstellar medium (ISM) by a host halo medium. RPS galaxies, identified by unidirectional gas tails and little stellar disk deformation, have been observed in several clusters (van Gorkom, 2004; Boselli et al., 2006; Sun et al., 2007; Poggianti et al., 2016; Deb et al., 2022) as well as in both idealized (e.g., Abadi et al., 1999; Quilis et al., 2000; Schulz and Struck, 2001; Roediger and Bruggen, 2006; Jachym et al., 2007; McCarthy et al., 2008) and cosmological simulations (Bahe et al., 2012; Yun et al., 2019; Rohr et al., 2023). Although the hallmark of RPS is gas removal and, therefore, the eventual quenching of star formation (Boselli et al., 2006; Crowl and Kenney, 2008), recent detailed observations have shown that the early stages of RPS may have complex effects on both the ISM phase distribution and SFRs. Under RPS, the satellite's star formation can be triggered (Ebeling et al., 2014; Jachym
et al., 2019; Poggianti et al., 2019), the SFR globally (Vulcani et al., 2018; Roberts et al., 2021; Kolcu et al., 2022; Molnar et al., 2022) or locally (Vulcani et al., 2020) enhanced, and the molecular-to-atomic gas ratio boosted (Moretti et al., 2020).
The interplay between RPS and star formation is key to understanding environmental quenching, and has been explored in various controlled hydrodynamical simulations. Multiple simulations analyzed the star formation triggering- or enhancing-potential of RPS (Schulz and Struck, 2001; Kapferer et al., 2009; Tonnesen and Bryan, 2012; Roediger et al., 2014; Bekki, 2014; Steinhauser et al., 2016; Ruggiero and Lima Neto, 2017; Lee et al., 2020), but the physical reasons behind the enhancement are unclear. Ram pressure-driven shock passages can trigger local boosts to the SFR, but have little global effects (Roediger et al., 2014); pressure enhancement in the wind-leading halves of the galaxies undergoing stripping suggests that compression likely enhances the star formation efficiency (Troncoso-Iribarren et al., 2020); ram pressure-induced radial gas inflows can modify the star formation morphology, shifting it to the central regions with higher SFR (Schulz and Struck, 2001; Tonnesen and Bryan, 2012; Lee et al., 2020). There is a need to examine the physical causes of RPS-enhanced star formation in simulations, and to directly compare with recent observations.
In this work, we study the complicated effects of RPS on the ISM distribution and SFRs. (i) We simulate a low-mass spiral galaxy (the lowest \(M_{*}\) resolved in Donnari et al., 2021, 2022, where there is tension in the quenched fractions) undergoing RPS in different environments, from a Milky Way-like to a cluster halo, and examine the galaxy's gas and SFR response; (ii) For each host halo, we model a realistic infall orbit with time-varying ram pressure profiles; (iii) We analyze the local SFR-mass relations at spatial resolutions comparable to recent high angular resolution observations (Vulcani et al., 2020; Jimenez-Donaire et al., 2023); and finally (iv) We compare and identify key physical causes of RPS-enhanced star formation.
The structure of this paper is as follows. In §2 we introduce the methodology, with §2.1 on the satellite galaxy model, §2.2 on the infall orbits, and §2.3 the simulation initial conditions. We present the simulation global results in §3: the time evolution of star formation and the surviving gas (§3.1), and the gas morphology and kinematics (§3.2). Then, §4 compares the spatially resolved SFR-mass relations (\(\Sigma_{\rm SFR}-\Sigma_{\rm gas}\) and \(\Sigma_{\rm SFR}-\Sigma_{*}\)) between the stripping and isolated galaxy sets. We discuss our results in §5: the impact of RPS on star formation (§5.1), predictions for observations (§5.2), and limitations of our methodology (§5.3). §6 summarizes the key findings.
## 2 Methodology
We run a suite of three-dimensional "wind-tunnel" simulations using the adaptive mesh refinement (AMR) code Enzo (Bryan et al., 2014). The simulation volume is a \(162^{3}\) kpc cube with a \(128^{3}\) root grid resolution. We allow up to five levels of refinement, giving a highest spatial resolution of 39 pc (marginally resolving giant molecular clouds). To model the radiative cooling of the multiphase gas, we use the Grackle chemistry and cooling library1(Smith et al., 2017), which calculates photoheating and photoionization from the UV background of Haardt and Madau (2012). We use the star formation recipe of Goldbaum et al. (2015) with the following parameters: Once a gas cell reaches the Jeans criterion with a number density threshold of \(n_{\rm min}=10\) cm\({}^{-3}\), it forms star particles (including regular stars and Type II supernovae) with a 5% efficiency. The star particles, now followed in our simulations as active particles, subsequently deposit energy into the gas in the forms of stellar and supernovae feedback, under the Goldbaum et al. (2016) feedback model, which includes the terminal momentum input from the number of supernovae expected to go off during a given timestep, adding any additional energy in the form of thermal energy.
Footnote 1: [https://grackle.readthedocs.io/](https://grackle.readthedocs.io/)
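As a rough schematic of the threshold-plus-efficiency idea behind this star formation recipe (not Enzo's actual implementation; constants are in cgs and the Jeans check is a simplified stand-in):

```
import numpy as np

G = 6.674e-8        # gravitational constant [cgs]
K_B = 1.381e-16     # Boltzmann constant [erg/K]
M_H = 1.673e-24     # hydrogen mass [g]
N_MIN = 10.0        # number density threshold [cm^-3]
EFFICIENCY = 0.05   # fraction of the cell gas converted to a star particle

def maybe_form_star(n_cm3, temperature_K, cell_mass_g, cell_dx_cm, mu=1.22):
    """Return the star particle mass formed in a cell (0 if the criteria fail)."""
    if n_cm3 < N_MIN:
        return 0.0
    rho = mu * M_H * n_cm3
    c_s = np.sqrt(K_B * temperature_K / (mu * M_H))   # isothermal sound speed
    jeans_length = c_s * np.sqrt(np.pi / (G * rho))
    if jeans_length > cell_dx_cm:                     # cell is Jeans-stable: no star
        return 0.0
    return EFFICIENCY * cell_mass_g
```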
We use yt, a multi-code toolkit for analyzing and visualizing astrophysical simulation data (Turk et al., 2011), to create slices and projections, and to select the disk gas and the active star particles for subsequent analyses.
### The galaxy
Our galaxy is placed at the center of the 162-kpc cubical simulation volume at (81, 81, 81) kpc. We choose a galaxy of low stellar mass \(M_{*}=10^{9.7}\)\(M_{\odot}\), motivated by the lowest satellite \(M_{*}\) examined in Donnari et al. (2021) using the IllustrisTNG cosmological simulations (Weinberger et al., 2017; Pillepich et al., 2018). Table 1 summarizes the global parameters of the satellite galaxy.
Among the three components in Table 1, our simulations model the stellar disk and dark matter as static
\begin{table}
\begin{tabular}{c c c c|c c|c c} \hline \multicolumn{3}{c|}{Stellar Disk} & \multicolumn{2}{c|}{Dark Matter} & \multicolumn{3}{c}{Gas Disk} \\ \hline \(M_{*}\) & \(a_{*}\) & \(b_{*}\) & \(\rho_{d0}\) & \(r_{0}\) & \(M_{\rm gas}\) & \(a_{\rm gas}\) & \(b_{\rm gas}\) \\ (\(M_{\odot}\)) & (kpc) & (kpc) & (g cm\({}^{-3}\)) & (kpc) & (\(M_{\odot}\)) & (kpc) & (kpc) \\ \hline \(10^{9.7}\) & 2.5 & 0.5 & 5.93e-25 & 11.87 & \(10^{9.7}\) & 3.75 & 0.75 \\ \hline \end{tabular}
\end{table}
Table 1: Global Parameters of the Low-mass Galaxy
gravitational potential fields. The static stellar disk potential is under the Plummer-Kuzmin model (Miyamoto and Nagai, 1975) with the scale length (\(a_{*}\)) and height (\(b_{*}\)) of 2.5 and 0.5 kpc, respectively (from the baryonic mass-stellar disk size scaling relation; Wu, 2018). We model the cold dark matter potential under the spherical Burkert model (Burkert, 1995; Mori and Burkert, 2000), which is selected to better match the observational rotation curves of low-mass galaxies (Salucci and Burkert, 2000; Blok et al., 2008). Given the stellar mass (Table 1), we obtain the circular velocity \(V_{\rm circ}\approx 120\) km s\({}^{-1}\) from the observational baryonic Tully-Fisher relation (Lelli et al., 2019; McGaugh et al., 2021), which gives the dark matter central density \(\rho_{d0}\) and scale radius \(r_{0}\) (Table 1).
The gas disk in Table 1 is followed in our simulations with AMR. We adopt the gas mass from observed gas-(H i and H\({}_{2}\) combined) to-stellar mass ratio \(M_{\rm gas}/M_{*}\approx 1\)(Calette et al., 2018), and the disk size from the size ratio \(R_{\rm gas}/R_{\rm optical}\approx 1.25\)(Swaters et al., 2002). This ensures that the resulting galaxy model is consistent with the \(z\approx 0\) observed scaling relations. The gas density is distributed under a softened exponential disk model (see Tonnesen and Bryan, 2009, 2010, eqn. 1), and the temperature and pressure are calculated to maintain hydrostatic equilibrium in the disk with the surrounding ICM. The rotational velocity is then calculated to balance the gravitational force and the combination of the centrifugal force and the pressure gradient.
### The orbits
We model the time-varying infalling orbits -- satellites travelling from the host's virial radius \(R_{200}\) to the pericenter location \(R_{\rm peri}\) -- of three host halos: a "Milky Way-like" host halo of \(M_{200}=10^{12}\)\(M_{\odot}\), a "group" halo of \(M_{200}=10^{13}\)\(M_{\odot}\), and a "cluster" halo of \(M_{200}=10^{14}\)\(M_{\odot}\). The host mass selection is motivated by the mass-dependent quenched fraction disagreements in Donnari et al. (2021): a satellite of \(M_{*}=10^{9.7}\)\(M_{\odot}\) tends to be under-quenched in the TNG300 simulations compared with SDSS observations for the low-mass hosts \(M_{200,{\rm host}}<10^{13.5}\)\(M_{\odot}\), but over-quenched in simulations for the higher-mass hosts (see Donnari et al., 2021, Fig 9). Our host mass sampling (\(M_{200,{\rm host}}\in[10^{12},10^{14}]\)\(M_{\odot}\)) is to span the mass range over which the Donnari et al. (2021) turnover in quenching effectiveness happens for the satellite of \(M_{*}=10^{9.7}\)\(M_{\odot}\).
In this subsection, we describe our two-step orbit modeling process: (1) Satellite orbit kinematics (Table 2), which gives the position and velocity of the satellite galaxy as a function of infalling time; and (2) Host halo radial profiles (Table 3), which gives the density and temperature of the host's gaseous halo medium as a function of radius.
We use the Galactic Dynamics package Gala (Price-Whelan, 2017; Price-Whelan et al., 2020) to perform time integration of the satellite orbits. First, we use Gala to construct the three host halos' gravitational potential profiles, adopting an NFW halo structure (Navarro et al., 1996), and redshift-zero concentration values (_c_ in Table 2; Ludlow et al., 2014). For simplicity, we assume that the satellite travels as a point mass when orbiting the host halos. The orbital integration begins at the host's virial radius \(R_{200}\) (from the Gala-generated NFW profiles), and takes the best-fit values of Wetzel (2011) as velocity initial conditions, see (\(|V_{\phi,0}|,|V_{r,0}|)/V_{200}\) in Table 2. With the position and velocity initial inputs, we then use the Gala orbital integrator to integrate for a sufficient time (e.g., 100 Gyr) to ensure we capture many stable orbits, and focus on the branch from \(R_{200}\) to pericenter \(R_{\rm peri}\). The resulting orbits contain the satellite's position and velocity as a function of infalling time, as summarized in Table 2.
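A minimal Gala sketch of this kind of orbit integration, using the cluster-case numbers of Table 2 for the release point and velocity fractions; the NFW scale mass below is an illustrative placeholder (Gala's `m` is the profile scale mass, not \(M_{200}\)), and the exact calls are a plausible usage rather than the authors' script.

```
import astropy.units as u
import numpy as np
import gala.potential as gp
import gala.dynamics as gd
from gala.units import galactic

# Host potential: an NFW profile; r_s = R200 / c for the cluster case.
host = gp.NFWPotential(m=8e13 * u.Msun, r_s=949 / 5.62 * u.kpc, units=galactic)

# Point-mass satellite released at R200 = 949 kpc with the Wetzel (2011) fractions:
# |V_r,0| = 0.782 V200 (inward) and |V_phi,0| = 0.53 V200, with V200 = 663 km/s.
v200 = 663.0
w0 = gd.PhaseSpacePosition(pos=[949.0, 0.0, 0.0] * u.kpc,
                           vel=[-0.782 * v200, 0.53 * v200, 0.0] * u.km / u.s)

orbit = gp.Hamiltonian(host).integrate_orbit(w0, dt=1.0 * u.Myr, n_steps=5000)
r_sat = np.sqrt(orbit.x**2 + orbit.y**2 + orbit.z**2)          # radius vs. time
v_sat = np.sqrt(orbit.v_x**2 + orbit.v_y**2 + orbit.v_z**2)    # speed vs. time
r_peri = orbit.pericenter()                                    # pericentric radius estimate
```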
We model the extended, diffuse gaseous halos of the hosts as an isothermal sphere with a \(\beta\)-profile in density (Cavaliere and Fusco-Femiano, 1976; Arnaud, 2009). The spherical \(\beta\) model is a relatively simple three-parameter model capable of reproducing the X-ray surface brightness observations for a range of galaxies (Makino et al., 1998; O'Sullivan et al., 2003; Anderson and Bregman, 2011; Dai et al., 2012). It gives the gaseous halo density at a distance \(r\) from the center as,
\[n(r)=n_{0}\cdot[1+(\frac{r}{r_{c}})^{2}]^{-3\beta/2}, \tag{1}\]
where \(n_{0}\) is the core density, \(r_{c}\) is the core radius, and \(\beta\) is the density slope at large radii. Among our three host cases, we apply the generic \(\beta\)-modeling for the group and cluster cases, but it breaks down at the low mass of \(10^{12}\)\(M_{\odot}\), where we instead use observational data of the Milky Way (Miller and Bregman, 2013, 2015; Salem et al., 2015; Voit, 2019). The parameters of the halo models are summarized in Table 3 below.
For the "Milky Way" host case, we refer to the Miller and Bregman (2015) parameterization of a modified \(\beta\) profile,
\[n(r)\approx\frac{n_{0}r_{c}^{3\beta}}{r^{3\beta}}, \tag{2}\]
where \(n_{0}\), \(r_{c}\), and \(\beta\) are defined as in equation 1 above (see Miller and Bregman, 2015, eqn. 2). However, this profile becomes less constrained at the large radii of our satellite's orbit (Table 2), and tends to underestimate densities compared with other studies (see Voit, 2019, Fig. 3). To address this density underestimation at large radii, we boost the best-fit Miller and Bregman
(2015) model by a constant factor \(C\approx 2.73\), obtained by matching the LMC-constrained pericentric (at 48.2 kpc) density from Salem et al. (2015).
For the galaxy group and cluster cases, we obtain the three parameters in the \(\beta\) model (equation 1) as follows: We adopt \(r_{c}\) and \(\beta\) from the gas halo profiles of Komatsu & Seljak (2001), and solve for \(n_{0}\) as an integration constant by assuming the gas-to-total mass fraction at \(R_{500}\) is \(\approx 10\%\)(Lovisari et al., 2015; \(f_{\rm gas,500}=M_{\rm gas,500}/M_{\rm tot,500}\approx 10\%\)). The gas mass within \(R_{500}\) under a \(\beta\)-profile in density (equation 1) can be written as,
\[\begin{split} M_{\rm gas,500}&=4\pi\rho_{\rm gas, 0}\int_{0}^{R_{500}}[1+(\frac{r}{r_{c}})^{2}]^{-3\beta/2}r^{2}dr\\ &\approx 10\%\cdot M_{\rm tot,500}\end{split} \tag{3}\]
where we supply \(r_{c}\), \(\beta\) from Komatsu & Seljak (2001), \(M_{\rm tot,500}\) and \(R_{500}\) from the Gala-generated host NFW halos, and the \(f_{\rm gas,500}\approx 10\%\) relation from Lovisari et al. (2015). Solving equation 3 for the integration constant \(\rho_{\rm gas,0}\) gives the central mass density and hence the number density \(n_{0}\).
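A minimal sketch of solving equation 3 for the central density with SciPy; \(r_{c}\) and \(\beta\) are the group values from Table 3, while the \(M_{\rm tot,500}\) and \(R_{500}\) inputs below are illustrative stand-ins for the Gala-derived halo values.

```
import numpy as np
from scipy.integrate import quad

def beta_profile(r, rho0, r_c, beta):
    """Equation 1 in mass-density form."""
    return rho0 * (1.0 + (r / r_c) ** 2) ** (-1.5 * beta)

def central_density(M_tot_500, R_500, r_c, beta, f_gas=0.10):
    # Equation 3: integrate the profile shape, then normalise so the enclosed
    # gas mass within R_500 equals f_gas * M_tot,500.
    shape, _ = quad(lambda r: (1.0 + (r / r_c) ** 2) ** (-1.5 * beta) * r**2, 0.0, R_500)
    return f_gas * M_tot_500 / (4.0 * np.pi * shape)

# Illustrative group-like inputs (kpc, Msun); returns rho_gas,0 in Msun / kpc^3.
rho_gas_0 = central_density(M_tot_500=7e12, R_500=300.0, r_c=25.0, beta=0.655)
```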
Figure 1 shows the density profiles of the three gaseous halo cases shown in Table 3, where we annotate the infall orbits' initial (virial radius) and final (pericenter) locations from Table 2. At a given time \(t\) of an infall orbit, the orbital density is given by the density profiles \(\rho(r)\) in Figure 1, taking the radius \(r(t)\) from the Gala-generated orbits. The resulting orbital density ranges (densities between the solid and empty circles in Figure 1) for the group and the cluster cases are relatively similar, but the cluster case has a higher pericentric velocity (Table 2), which leads to about five times the ram pressure of the group case at the pericenter, see Section 2.3 below for details.
### The simulations
Our suite of four simulations includes three "wind-tunnel" runs and one "isolated galaxy" run. In each of the "wind-tunnel" runs, we introduce a 45-degree-inclined boundary inflow (velocity normal vector \(\hat{v}_{\rm wind(x,y,z)}=(0,\frac{\sqrt{2}}{2},\frac{\sqrt{2}}{2})\)) from the \(y=0\), \(z=0\) corner of the simulation box. Instead of a purely face-on or edge-on wind, we choose the \(45^{\circ}\) inclination angle to investigate the ram pressure effects both perpendicular and parallel to the disk. The inflow is modeling
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline \(M_{200}\) & Case\({}^{(1)}\) & \(c^{(2)}\) & \(R_{200}^{(3)}\) & \(R_{\rm peri}^{(3)}\) & \(V_{200}^{(4)}\) & \((|V_{\phi,0}|,|V_{r,0}|)^{(5)}/V_{200}\) & \(V_{\rm peri}^{(3)}\) & \(e^{(6)}\) & \(\tau_{\rm infall}^{(7)}\) \\ (\(M_{\odot}\)) & & & (kpc) & (kpc) & (km s\({}^{-1}\)) & & (km s\({}^{-1}\)) & & (Myr) \\ \hline \(10^{12}\) & Milky-Way & 8.81 & 211 & 75 & 143 & (0.655, 0.832) & 265 & 0.674 & 1127 \\ \(10^{13}\) & Group & 7.08 & 455 & 149 & 308 & (0.603, 0.786) & 565 & 0.666 & 1165 \\ \(10^{14}\) & Cluster & 5.62 & 949 & 278 & 663 & (0.53, 0.782) & 1236 & 0.692 & 1164 \\ \hline \end{tabular} Note. –\({}^{(1)}\) The case names represent the physical context of the central halos. \({}^{(2)}\) The present-day (redshift-zero) concentration values \(c\) from Ludlow et al. (2014). \({}^{(3)}\) The virial radii (\(R_{200}\)), and the pericentric radii (\(R_{\rm peri}\)) and velocities (\(V_{\rm peri}\)) from the Gala-generated orbits, see §2.2. \({}^{(4)}\) The virial velocities defined as \(V_{200}\equiv\sqrt{G\cdot M_{200}/R_{200}}\), following Wetzel (2011). \({}^{(5)}\) The tangential (\(|V_{\phi,0}|\)) and radial (\(|V_{r,0}|\)) velocity magnitudes at \(R_{200}\) in units of the virial velocity (\(V_{200}\); see note 4) from Wetzel (2011), used as velocity initial conditions for the orbit integration. \({}^{(6)}\) The resulting orbital eccentricities. \({}^{(7)}\) The infalling time from \(R_{200}\) to \(R_{\rm peri}\).
\end{table}
Table 2: Parameters of the Satellite Galaxy Infalling Orbits
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Case\({}^{(1)}\) & \(n_{0}r_{c}^{3\beta}\)\({}^{(2)}\) & \(C^{(3)}\) & \(\beta\) & \(T\) & Refs\({}^{(4)}\) \\ & (\(10^{-2}\)cm\({}^{-3}\)kpc\({}^{3\beta}\)) & (kpc) & & (K) & \\ \hline Milky-Way & 1.35 & 2.73 & 0.5 & 2.51E6 & MB15 \\ \hline \hline Case\({}^{(1)}\) & \(n_{0}\) & \(r_{c}\) & \(\beta\) & \(T\) & Refs\({}^{(4)}\) \\ & (cm\({}^{-3}\)) & (kpc) & & (K) & \\ \hline Group & 0.0121 & 25 & 0.655 & 5.53e6 & KS01 \\ Cluster & 0.0071 & 76 & 0.675 & 2.02e7 & KS01 \\ \hline \end{tabular} Note. –\({}^{(1)}\) Case names as in Table 2. We list the Milky Way case separately from the group and cluster cases because of the different methods; see §2.2. (2). Best fit parameter \(n_{0}r_{c}^{3\beta}\) for the modified \(\beta\) profile of Miller & Bregman (2015), which fits \(n_{0}\) and \(r_{c}\) of a \(\beta\) profile together (equations 1 and 2). (3). The constant factor \(C\) in equation 2 to match with the LMC-constrained pericentric conditions of the Milky Way halo (Salem et al., 2015). (4). References for our adopted \(\beta\)-profile (or modified in the Milky Way case) parameters, \(\beta\), \(r_{c}\), and the isothermal gas halo temperatures \(T\). MB15: Miller & Bregman (2015); KS01: Komatsu & Seljak (2001).
\end{table}
Table 3: Parameters of the Hosts’ Gaseous Halo Profiles
the ram pressure 'wind', which carries the time-varying orbital conditions (gas density, temperature, and velocities) set in §2.2. The metallicities of the inflow gas (the 'wind') and the initial galaxy gas disk (see §2.1) are set as \(Z_{\rm wind}=0.1Z_{\odot}\), \(Z_{\rm galaxy}=0.3Z_{\odot}\), respectively, which are subsequently used as tracers for galactic versus wind material. The isolated galaxy run is a control case without inflow, but otherwise has the same setup of galaxy structure, radiative cooling, star formation, and feedback as in the wind runs.
We summarize key aspects of the simulations in Table 4 and Figure 2. The time-dependent ram pressure is defined as \(P_{\rm ram}(t)=\rho_{\rm host}(t)\cdot v_{\rm sat}(t)^{2}\), where the host halo medium density (Figure 1) \(\rho_{\rm host}(t)\) is evaluated at the satellite location \(r_{\rm sat}(t)\); and the satellite location and velocity, \(r_{\rm sat}(t)\) and \(v_{\rm sat}(t)\), are from the Gala-generated orbits (§2.2 and Table 2). For the three wind runs (hereafter 12W, 13W, and 14W), we list the initial and final ram pressure values from the orbits described in §2.2, and show their time evolution in Figure 2. We initialize the 13W run from a snapshot in the 12W run where the ram pressure matches the 13W initial conditions, and similarly start the 14W run from a 13W snapshot; see the relevant time frames in Figure 2. This results in the initial galaxy disk in the 13W and 14W runs being "pre-processed": it has been orbiting in smaller host halos prior to their accretion onto the more massive group- or cluster-host, a highly probable process for low-mass galaxies that has abundant observational and theoretical evidence (Zabludoff et al., 1996; Wetzel et al., 2013; Haines et al., 2015; Jung et al., 2018; Bahe et al., 2019; Donnari et al., 2021). For the masses modeled in this paper, TNG simulations find \(\approx 50\%\) of satellites below \(M_{*}=10^{10}\ M_{\odot}\) have been pre-processed in hosts of \(M_{200}=10^{12}\ M_{\odot}\) or above, if they reside in a cluster of \(M_{200}=10^{14}\ M_{\odot}\) at \(z=0\) (Donnari et al., 2021).
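The ram pressure definition above can be assembled from the orbit and density-profile sketches given earlier; the following is a minimal illustration in cgs units, with the profile parameters passed in explicitly.

```
import numpy as np

MSUN_G = 1.989e33    # g
KPC_CM = 3.086e21    # cm

def ram_pressure_cgs(r_kpc, v_kms, rho0_msun_kpc3, r_c_kpc, beta):
    """P_ram = rho_host(r_sat) * v_sat^2, returned in g cm^-1 s^-2."""
    rho = rho0_msun_kpc3 * (1.0 + (r_kpc / r_c_kpc) ** 2) ** (-1.5 * beta)  # Msun / kpc^3
    rho_cgs = rho * MSUN_G / KPC_CM**3
    return rho_cgs * (v_kms * 1.0e5) ** 2

# e.g. p_ram_t = ram_pressure_cgs(r_sat, v_sat, rho_gas_0, 25.0, 0.655), with the
# r_sat(t), v_sat(t) arrays from the orbit sketch and rho_gas_0 from the profile sketch.
```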
Initializing the simulations from a previous snapshot effectively avoids the numerical artifacts in the initial few hundred Myr, like an unstable outburst of star formation (Tasker and Bryan, 2006) and significant transient ringing (Goldbaum et al., 2015) in the gas disk. Previous works such as Tonnesen and Bryan (2012) addressed these artifacts by delaying the wind and allowing for a thermal relaxation phase of at least 200 Myr to stabilize the disk. For our 12W run (the only wind simulation that begins at \(t=0\)), however, the wind delay is unnecessary. Because of the Milky Way wind's initial slow speed (\(|v_{\rm wind}|\approx 151\) km s\({}^{-1}\); Table 2), it takes the first shock wave (of Mach number 2) generated by the initial inflow more than 300 Myr to reach the galaxy disk, and longer for the stable inflow, see Figure 2. The location of the sampling box for ram pressure values (right panel of Figure 2) is chosen to be relatively close to the galactic disk, while avoiding the bow shock in front of the galaxy after the thermal relaxation phase.
The three wind runs cover over three orders of magnitude in ram pressure (solid lines in Figure 2), which generally follow the input orbit conditions (dashed lines). We attach a constant ram pressure value at the end of each wind run to ensure the pericentric inflow from the corner of the simulation box reaches the galactic disk, and the attached time periods are relatively short (\(<\)300 Myr). We annotate the input ICM thermal pressure of the isolated case as the dash-dotted line in Figure 2, which is lower than the weakest ram pressure input of the wind runs (\(t=0\) of 12W). The sampled ram pressure of 12W (blue solid line) is low during \(0-300\) Myr because it shows the initial collapse of the gas before the wind reaches the sampling box; then during \(\sim\)\(300-700\) Myr, its stochasticity reflects an interplay between the feedback outflows and ram pressure of a comparable strength. The two short peaks in ram pressure at \(\sim\)1250 (13W) and 1700 Myr (14W) are due to shock waves generated when we stack the ram pressure profiles; they have no global effects on the simulations.
## 3 Global Results: Gas Stripping and Star Formation Rate Response
We present our simulation results as follows: §3.1 summarizes the global evolution of baryonic mass, star formation rate (SFR), and star forming location; §3.2 describes the wind-driven gas morphology and kinematics, which explains the global evolution in §3.1.
Figure 1: The radial density profiles of the three gaseous host halos in this study (Table 3). Solid and empty circles show the virial radii (\(R_{200}\)) and pericenter radii (\(R_{\rm peri}\)), respectively, which mark the initial and final locations of the satellite infall orbits (see Table 2). The y-axis on each side shows the same information in mass density (left: \(\rho\)) and number density (right: \(n\)).
### The Fate of the Ram Pressure Stripped Galaxy
The global effects of RPS on the disk mass and SFR are demonstrated in Figure 3, comparing the three wind cases and the iso case. The upper panel shows the galactic disk masses: gas as solid lines and gas plus formed stars as dashed lines, versus simulation time. We apply a spatial disk cut (\(R_{\rm disk}=18\) kpc, \(z_{\rm disk}=\pm 2\) kpc) and a metallicity cut (\(Z_{\rm gas}\geq 0.25Z_{\odot}\)) to select the gas in the galactic disk, and place the same spatial cut when calculating masses of the formed stars. We verified that varying these selection criteria to include more gas (e.g., using \(z_{\rm disk}=\pm 5\) kpc and \(Z_{\rm gas}\geq 0.2Z_{\odot}\)) does not change the trends. The gas fuels star formation and decreases in mass in all four cases. Without RPS, star formation accounts for most of the gas mass loss, as shown by the iso case's gas plus formed stellar mass (dashed gray line), which remains at \(\sim 98.5\%\) its initial value -- almost conserved after \(\sim\)3 Gyr of evolution over which time nearly 45% of the initial gas mass is converted to stars.
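A sketch of this kind of selection with yt is given below; the snapshot name is hypothetical, the disk `height` argument is taken here as the half-thickness, and the metallicity field name and units may vary by frontend.

```
import yt

ds = yt.load("DD0150/DD0150")  # hypothetical Enzo output name

# Cylinder matching the disk cut: R_disk = 18 kpc, |z_disk| <= 2 kpc, normal along z.
disk = ds.disk(ds.domain_center, [0, 0, 1], (18, "kpc"), (2, "kpc"))

# Metallicity cut separating galactic gas (0.3 Zsun initially) from the wind (0.1 Zsun).
galaxy_gas = disk.cut_region(["obj['gas', 'metallicity'].in_units('Zsun') >= 0.25"])

gas_mass = galaxy_gas.quantities.total_quantity(("gas", "cell_mass"))
```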
In the wind runs, in addition to forming stars, the gas can gain or lose mass due to interactions with the wind, depending on the ram pressure strength. In 12W, there is no stripping-induced mass loss compared to the iso case due to the weak ram pressure (Figure 2). Instead, there is a mild mass excess at \(t\sim 750\) Myr, because ram pressure pushes part of the feedback outflows back to the disk (this will be discussed in §3.2). Both 13W
\begin{table}
\begin{tabular}{c c c c} \hline \hline Simulation & \(P_{\rm ram,i}\) & \(P_{\rm ram,f}\) & \(t_{i}\) \\ & (g/(cm s\({}^{2}\))) & (g/(cm s\({}^{2}\))) & (Myr) \\ (1) & (2) & (3) & (4) \\ \hline
12W & 4.6e-15 & 6.7e-14 & 0 \\
13W & 6.4e-14 & 1.9e-12 & 1060 \\
14W & 2.6e-13 & 1.2e-11 & 1600 \\ iso & & & 0 \\ \hline \end{tabular} Note. –(1) Simulation short names, used throughout this paper. 12W: “Milky Way” halo (\(10^{12}\)\(M_{\odot}\)) wind; 13W: “group” halo (\(10^{13}\)\(M_{\odot}\)) wind; and 14W: “cluster” halo (\(10^{14}\)\(M_{\odot}\)) wind; iso: isolated galaxy (no wind). (2) and (3) The initial and final ram pressure values of the wind orbits (§2.2), also see Figure 2. (4) The initial time of each simulation. 12W and iso both begin at \(t=0\), but the 13W and 14W runs are each continued from a lower halo mass case’s evolved snapshot with matching ram pressures. For example, the 13W run is a continuation from the \(t=1060\) Myr snapshot of 12W, where the initial ram pressure \(P_{\rm ram,i(13W)}\) matches the 12W run’s \(P_{\rm ram,t=1060(12W)}\), see Figure 2.
\end{table}
Table 4: Overview of the Simulation Suite
Figure 2: **Left**: Ram pressure time evolution for the three wind simulations. Dashed lines show the input from the orbits (§2.2 and §2.3), solid lines are the simulation values (\(\rho\cdot v_{\rm gas}^{2}\) in the wind direction \(\phi_{\rm wind}\)) read in at a sampling box (see right panel). The dash-dotted gray line shows the ambient gas thermal pressure in the isolated run for reference. The three shaded vertical bars denote important time frames we will later refer to in Figure 3 and §3. **Right**: A density slice of the 12W “Milky Way” case, annotating in-plane gas velocities. The inflow wind from the lower left corner travels at low velocities (Table 2), taking 200-300 Myr to reach the sampling box (white circle; at (0, -25, -25) kpc), and another 100-200 Myr to reach the disk (domain center). The inflow is about to reach the sampling box at this snapshot.
and 14W experience the onset of the stripping phase at \(t\sim 2150-2250\) Myr, seen from the steepened gas mass slopes in Figure 3 (leftmost vertical bar; see Figure 2 for the corresponding \(P_{\rm ram}\)). The stripping only lasts for \(\sim\)200 Myr in 13W, after which the gas mass slope flattens as the ram pressure is kept constant (Figure 2). But in 14W, as the ram pressure continues to increase, the stripping continues until nearly the entire gas disk is removed.
The lower panel of Figure 3 shows the SFR evolution, manifesting both the SFR enhancing and quenching potential of the wind. The first \(\sim\)500 Myr is the thermal relaxation phase where star formation is still stabilizing, and the 12W wind has yet to reach the galaxy (Figure 2 and §2.3). This period will be omitted in subsequent plots and analyses. After the relaxation phase, the SFR steadily decreases in the iso case throughout the \(\sim\)3 Gyr of evolution, as its gas density steadily decreases due to starvation without cosmological inflow replenishing the disk. The SFR of 12W remains similar to the iso case throughout its orbit. In 13W and 14W, the SFR remains approximately constant at \(\sim 0.6\)\(M_{\odot}\)/yr until \(t\sim 2450\) Myr, which is a relative enhancement compared with iso. The SFR then mildly increases in 13W as the gas mass remains almost constant, resulting in a 2.5 times SFR enhancement relative to iso at the first-infall pericenter (middle vertical bar). In 14W during the final 480 Myr, the SFR decreases by \(\sim 65\%\), dropping to below iso at the cluster pericenter (rightmost vertical bar), as the gas is rapidly removed from the disk (from \(\sim 2.2\times 10^{9}\)\(M_{\odot}\) to \(\sim 3.8\times 10^{8}\)\(M_{\odot}\)). The 14W galaxy will ultimately be quenched judging from the rapid, almost complete gas removal.
Combining the ram pressure, the disk mass, and SFR (Figures 2 and 3) gives the direct effect of ram pressure on these star-forming disks' global evolution. The turning point at \(t\sim 2200\) Myr, where the effective gas stripping takes place in 13W and 14W, corresponds to a total \(45^{\circ}\)-angled ram pressure of \(P_{\rm ram}\approx 1.5\times 10^{-12}\) g/(cm \(\cdot\) s\({}^{2}\)). Before this critical ram pressure is reached, the gas plus formed stellar masses in the disks (dashed lines in Figure 3) remain conserved in all wind runs, i.e., ram pressure has not yet driven any net mass loss. Unlike gas stripping, which is a direct consequence of strong ram pressure, the SFR shows no immediate correlation with ram pressure. The SFR turning points in 13W and 14W appear \(\sim\)300 Myr delayed compared to the gas mass change (Figure 3), likely because it takes time for the wind-driven mass flows around the disk to affect the global SFR.
As the wind interacts with the galactic disk, the location of star formation changes, as shown in Figure 4. We select stars newly formed within 100 Myr of each given time and obtain their distribution in cylindrical/disk radius (\(R_{\rm disk}\)) and height (\(z_{\rm disk}\)). These stars' radial distribution has a power-law tail, and Figure 4 shows their 95th percentile values as solid lines. The height distribution characterizes a thin disk of cold gas -- symmetrically peaked around \(z=0\) kpc without wind impact, or skewed towards \(+z\) under a wind that has a \(+z\) component, and Figure 4 shows their \(5-95\)th percentiles as colored bands.
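A minimal sketch of this bookkeeping, assuming arrays of stellar ages and cylindrical coordinates for the formed star particles (names are illustrative, not from the analysis pipeline):

```python
import numpy as np

def recent_sf_location(age_myr, R_disk, z_disk, window=100.0):
    """95th-percentile radius and 5th-95th percentile height range of stars
    formed within the last `window` Myr, as plotted in Figure 4."""
    recent = age_myr <= window
    r95 = np.percentile(R_disk[recent], 95)
    z5, z95 = np.percentile(z_disk[recent], [5, 95])
    return r95, (z5, z95)
```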
The radial ringing in Figure 4 results from epicyclic oscillations triggered by rapid radiative cooling disturbing the initial equilibrium disk (Goldbaum et al., 2015), and has no global effect on our results. In the iso case, the radius of star formation in an undisturbed disk shows a slow and steady increase and asymptotes to \(\sim\)11 kpc at the final stage of evolution, and the height remains symmetric within \(\pm 0.5\) kpc, which 12W closely follows
Figure 3: **Upper panel**: The time evolution of the galaxy disk’s ISM mass (solid lines), where the ISM is specified by a combined spatial and metallicity selection (§3.1). The dashed lines are the ISM mass in solid lines added with the total mass of formed stars under the same spatial selection. **Lower panel**: The SFR time evolution. The first \(\sim\)500 Myr is the thermal relaxation phase (§2.3), where the disk cools and collapses, leading to an initial burst in star formation, and gradually stabilizes. This unstable phase (\(0-500\) Myr) is shaded and will be omitted in plots hereafter. To aid visual comparison, we denote three 100 Myr time periods with vertical bars showing the onset of effective stripping, 13W pericenter, and 14W pericenter from left to right; see §3.1.
throughout its orbit. In 13W and 14W, when the gas stripping begins at \(t\sim 2200\) Myr (Figure 3), their radii of star formation begin to deviate from the iso case: the 13W radius decreases slightly faster and its \(z_{\rm disk}\) symmetrically thickens, while the 14W radius decreases relatively slowly to \(\sim\)6 kpc and its \(z_{\rm disk}\) extends to +4 kpc, highly skewed towards the wind direction. The star formation radius in 13W decreases faster than in 14W, where ram pressure is higher, which seems to contradict the Gunn & Gott (1972) face-on stripping picture. But this is because of the wind inclination: more 14W gas is stripped into an extensive tail inclined to the disk (see Figure 5 below), forming stars in the tail, which skews the 14W star formation to higher cylindrical radii.
We selected the 100 Myr timescale in Figure 4 in order to match the typical timescales in UV observations of star formation (e.g., Leroy et al., 2012; Kennicutt & Evans, 2012). We also experimented with 10 Myr (typical H\(\alpha\) timescale), 30 Myr, and all formed stars within the simulations (1-3 Gyr, roughly matching the timescales in optical observations, see Tasker & Bryan, 2006), and found the temporal trends agree for the 10, 30, 100 Myr selections. If using all formed stars, however, the radial distributions in all cases asymptote to \(R_{\rm disk}\approx 10-11\) kpc, characterizing a steady stellar disk (rather than recent star formation); the similar radius across all cases is expected as RPS has no direct effect on the stellar disk.
### Wind-driven Gas Morphology and Kinematics
The global results in §3.1 show that RPS directly affects the gas mass and, although the global SFR is eventually affected, that impact occurs a few hundred Myr after the onset of gas stripping. We find the impact on star formation can evolve in opposite directions: enhance or quench (Figure 3). In this section, we examine the wind-driven gas flows to determine the physical reasons behind the bimodal effects on star formation.
We first show the different morphology of the gas via density slices along the \(x=0\) plane in Figure 5, comparing the iso case to each wind run when the ram pressure has reached its peak value at the galaxy position. The three lower panels show that, without RPS, the isolated galactic disk remains cylindrically symmetric and drives an outflow above and below the disk via star formation feedback, which decays in strength as the SFR decreases with time (Figure 3). The three wind runs demonstrate different interactions between the wind and the galactic gas, as summarized below.
* 12W: There is no clear signal of RPS within the gas disk. Ram pressure appears to interact with the feedback-driven outflows, likely suppressing those outflows below the disk (against the wind direction).
* 13W: Gas is being stripped and forms an outer ring that, in this slice, looks like two tails. The feedback-launched gas below the disk in the iso and 12W runs is missing because of the higher ram pressure.
* 14W: Gas is being stripped relatively uniformly from all radii of a shrunken and highly fragmented disk, forming a single extended tail tracing the wind direction. The 14W wind has a similar density but \(>\)2 times higher velocity compared with 13W, leading to its \(>\)4 times higher pericentric ram pressure (Tables 2 and 4).
Moving from 12W to 13W and finally to 14W, we see a clear progression from negligible stripping to outer gas removal to nearly complete stripping. However, the 13W gas morphology in Figure 5 demonstrates a unique complexity: within the stripping radius (characterized by the outer ring of stripped gas), there is high-density gas above the disk. We zoom in and examine the gas kinematics of 13W in Figure 6. The \(v_{z}\) slice (left; not mass-weighted) shows that much of the gas above the disk has a \(\sim\)0 or negative z-velocity -- falling back to the disk. The stripped material from the leading side (\(y<0\)), initially traveling at 45\({}^{\circ}\) (\(\hat{v}_{\rm wind}\)), experiences gravitational forces toward the disk center where the potential well is deepest (perpendicular to the contour lines, Figure 6). This fallback phenomenon is confirmed
Figure 4: The evolution of star forming locations in terms of the cylindrical/disk radius \(R_{\rm disk}\) and height \(z_{\rm disk}\) versus simulation time. The radii shown here as solid lines are the 95th percentile values, and the heights as colored bands are the \(5-95\)th percentile values. We select new stars formed within 100 Myr to match the typical timescales of UV observations (§3.1). Higher ram pressures lead to more central star formation.
by the mass-weighted gas velocity streamlines, indicating the paths of motion, in the edge-on projection map of gas that originated in the galaxy (middle panel). Fallback happens for the relatively dense stripped gas at the wind leading edge (\(y\approx-6\) kpc) onto the leading half of the disk, and also for the more diffuse stripped gas in the tail (\(z\gtrsim 5\) kpc), which occurs on a larger spatial scale.
The face-on projection map (Figure 6 right panel), on the other hand, shows the interplay between ram pressure and disk rotation within the disk plane. On the \(x>0\) side of the disk where the ram pressure in-plane component (\(+y\)) counters disk rotation, the disk gas can lose its angular momentum, manifested in radial inflows towards the disk center. This is clearly distinct from the \(x<0\) side, where ram pressure aligns with disk rotation, and the in-plane gas kinematics transforms into radial outflows in the wind-trailing end of the disk (\(y>0\)). Gas that is pushed above the disk while losing angular momentum, as illustrated in the third panel, will be more able to fall back along the streamlines shown in the middle panel.
We now focus on RPS-driven mass losses using the gas motions perpendicular to the disk (Roediger & Bruggen, 2006; Bekki, 2014). In our simulations, this corresponds to flows in the z-direction, \(\dot{M}_{z}=\int_{\rm surface}\rho\;\vec{v}\cdot d\vec{A}=\int_{\rm surface} \rho\;v_{z}\cdot d{A}_{(x,y)}\), where the surfaces are selected to be \(z_{\rm disk}=\pm 2\) kpc for the gas with metallicity \(Z\geq 0.25\,Z_{\odot}\), consistent with our disk ISM selection (Figure 3). In addition, the mass flows are expected to have radial dependence, because gas removal typically begins at larger disk radii where the local gravity is weakest, and migrates radially inward as the ram pressure increases (Gunn & Gott, 1972, also see Figure 4). To characterize the radial dependence, we further distinguish the mass flows across the full planes (\(z_{\rm disk}=\pm 2\) kpc) versus only the central 5 kpc regions of the planes (\(R_{\rm disk}\leq 5\) kpc, \(z_{\rm disk}=\pm 2\) kpc). The resulting \(\dot{M}_{z}\) of the full and central planes are shown in Figure 7, together with the gas mass and SFR time evolution (similar to Figure 3) of the central plane.
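For concreteness, the sketch below evaluates this surface integral on a uniform grid by summing \(\rho\,v_{z}\) over one-cell-thick layers at \(z=\pm 2\) kpc; the one-cell-layer approximation and all variable names are our own simplifications, not the exact measurement routine.

```python
import numpy as np

def vertical_mass_loss_rate(z, rho, v_z, Z, dx,
                            z_surf=2.0, z_floor=0.25):
    """Net mass flux through the z = +/- z_surf planes (positive = outflow),
    approximated by one-cell-thick layers on a uniform grid of spacing dx.
    Only gas with Z >= z_floor (solar units) is counted, as in the text."""
    area = dx * dx                      # face area of one cell
    ism = Z >= z_floor
    top = ism & (np.abs(z - z_surf) < 0.5 * dx)
    bot = ism & (np.abs(z + z_surf) < 0.5 * dx)
    outflow_top = np.sum(rho[top] * v_z[top]) * area    # v_z > 0 leaves upward
    outflow_bot = -np.sum(rho[bot] * v_z[bot]) * area   # v_z < 0 leaves downward
    return outflow_top + outflow_bot
```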
Figure 5: Gas density “edge-on” slices, zoomed in to 40 kpc. The \(45^{\circ}\) winds introduced as boundary inflows are in the \(\hat{v}_{\rm wind(y,z)}=(\frac{\sqrt{2}}{2},\frac{\sqrt{2}}{2})\) direction (§2.3). The selected time frames, as annotated on the upper left corners, correspond to the pericentric conditions of the 12W, 13W, and 14W orbits (Figures 2 and 3). The upper panels show the wind cases, and the lower panels show the isolated case at the same time for comparison.
The mass loss rates in Figure 7's left two panels are obtained by summing \(\dot{M}_{\rm gas,z}\) above and below the disk (at \(z=+2\) and \(-2\) kpc), where a positive value is outflow/mass loss by definition. For the iso case, the mass loss rate demonstrates galactic fountain flows driven by the stochastic star formation feedback alone: in the central plane where the star formation and feedback is strongest, \(\dot{M}_{\rm gas,z}\) remains an outflow (lower left panel); but the full plane \(\dot{M}_{\rm gas,z}\) oscillates around 0 (upper left panel), and results in \(\sim\)0 net baryonic mass loss throughout the 3 Gyr iso simulation (gray dashed line, Figure 3). The fountain flows' (or central plane feedback outflows') amplitudes decay as the SFR decreases with time (Figure 3). The mass loss rate of 12W overall follows iso, except for a mildly enhanced inflow within its full plane (upper left panel) at \(t\sim 600-800\) Myr, which explains the mild 12W gas mass excess around this time (Figure 3). We verified that the 12W enhanced inflow relative to iso is via the \(z=-2\) kpc surface, indicating that the ram pressure, although not yet sufficient to strip the gas disk, transfers momentum with the diffuse fountain flows below the disk.
For 13W and 14W, the mass loss rates are dominated by RPS. Across the full plane, the first peak of mass loss occurs at \(t\sim 2150-2250\) Myr (leftmost vertical bar, Figure 7), corresponding to the onset of effective stripping in both 13W and 14W (Figure 3). After that, the 13W mass loss rate steadily decreases to \(\sim\)0 at its pericenter, as the ram pressure becomes constant (Figure 2), and the 14W mass loss rate keeps increasing for another \(\sim\)400 Myr with its still increasing ram pressure. For the central plane, however, there is a clear dichotomy: 13W shows a central inflow (\(\dot{M}_{\rm gas,z}<0\)) with increasing amplitude, while 14W shows a (slightly delayed) central outflow. During the onset of effective stripping, the stripping radius is greater than the selected central region (5 kpc), so \(\dot{M}_{\rm gas,z}\sim 0\) for both cases. At \(t\sim 2500\) Myr, the 14W stripping radius reaches the inner 5 kpc, and hence the central outflows begin. But for 13W, ram pressure was never sufficient to strip the inner disk; instead, the gravitational fallback of the stripped material (as described in Figure 6) replenishes the central disk.
The right-hand two panels of Figure 7 show the central disk gas mass and SFR time evolution and the central-to-full disk ratios ("c/f ratio"; see Figure 3 for the full disk). For all four simulations, the central gas mass evolution tightly correlates with the central SFR evolution. The temporal oscillations arise from the radial ringing discussed previously in Figure 4. In the absence of effective RPS, the iso and 12W cases maintain an almost constant c/f ratio throughout the simulations (gray and blue dashed lines). Effective RPS (13W and 14W) leads to a radial redistribution of the gas and SFR: (i) the profiles are more radially centralized (enhanced c/f ratios), and (ii) the inner disk \(\rm M_{gas}\) and SFR values (solid lines) are both enhanced relative to iso. Although (i) partially results from the removal of outer disk gas, (ii) directly reflects the star formation enhancement potential of RPS.
The gas motions perpendicular to the disk (Figure 7) demonstrated an indirect mode of radial mass transfer in 13W: gas is lifted by RPS from the edge of the disk, and falls back to the central disk a few 100 Myr later, replenishing star formation there (also see Figure 6, middle panel). Another mode of radial mass transfer is directly via the (cylindrical) radial direction, \(\dot{M}_{\varpi}=\int_{\rm surface}\rho\ \vec{v}\cdot d\vec{A}=\int_{\rm surface }\rho\ v_{\varpi}\ dA\), where \(\vec{\varpi}\) is the cylindrical radial vector, and the surface can be ap
Figure 6: Gas z-velocity slice and projected density maps of 13W at the pericenter (as in Figure 5), zoomed in to 30 kpc. **Left panel**: Gas \(v_{z}\) slice map, annotating gravitational potential contours in white lines. The potential includes the static stellar and dark matter components (§2.1), self-gravity of the gas, and of the newly formed stars. The in-plane velocity vectors (\(v_{y,z}\)) are marked by black arrows. **Middle and right panels**: Gas density projection maps (edge-on and face-on) with a metallicity selection for the galactic ISM (\(Z_{\rm gas}\geq 0.25\ Z_{\odot}\); largely unmixed with the ICM). The streamlines show the mass-weighted gas velocities.
proximated by a thin cylindrical shell of average width \(\bar{h}\), such that \(\dot{M}_{\varpi}\approx(1/\bar{h})\ \sum_{i}^{\rm shell}(m_{i}\ v_{\varpi,i})\). We evaluated the radial mass flow rate for the central disk in Figure 8 (\(R_{\rm disk}\leq 5\) kpc, \(|z_{\rm disk}|\leq 2\) kpc)\({}^{3}\). Because \(\dot{M}_{\varpi}\) characterizes mass transfer within the galaxy, its amplitude is particularly susceptible to radial oscillations (Figure 4). Therefore, we show the time cumulative (\(\Delta M=\sum\dot{M}(t)\Delta t\); frequent temporal oscillations cancel out) mass flows in Figure 8: the left panel compares the radial and vertical (perpendicular to the disk) components, the right shows the total kinetic flows (the sum of the two).
Footnote 3: The shell width \(\bar{h}\) is obtained by \(\bar{h}=(1/A_{\rm shell})\sum_{i}^{\rm shell}V_{i}\), where \(V_{i}\) is the individual cell volume, and \(A_{\rm shell}=2\pi R_{\rm disk}\cdot(2z_{\rm disk})\) is the shell area. We tested a range of shell widths, \(\bar{h}\in[39,156]\) pc, which corresponds to 1 to 4 times the highest-refined cell length (§2), and \(\dot{M}_{\varpi}\) is approximately constant over these \(\bar{h}\). After obtaining \(\bar{h}\), we evaluate \(\dot{M}_{\varpi}(\bar{h})\) for the cells in the thin shell that satisfy the ISM metallicity selection \(Z\geq 0.25Z_{\odot}\). The final \(\dot{M}_{\varpi}\) is an average of \(\dot{M}_{\varpi}(\bar{h})\) over \(\bar{h}\in[39,156]\) pc.
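A minimal sketch of this shell estimate follows, assuming per-cell position, velocity, mass, and volume arrays in kpc-based units; for brevity it uses a single trial shell width and omits the ISM metallicity selection and the averaging over several \(\bar{h}\) described in the footnote, so it is an illustration rather than the exact routine.

```python
import numpy as np

def radial_mass_flow(x, y, z, vx, vy, mass, volume,
                     R_shell=5.0, dR=0.078, z_disk=2.0):
    """Sketch of Mdot_varpi through a thin cylindrical shell at R_shell (kpc):
    the effective shell width h_bar follows from the member-cell volumes and
    the shell area A = 2*pi*R_shell*(2*z_disk), as in footnote 3."""
    R = np.hypot(x, y)
    v_rad = (x * vx + y * vy) / np.maximum(R, 1e-10)   # cylindrical v_varpi
    shell = (np.abs(R - R_shell) <= 0.5 * dR) & (np.abs(z) <= z_disk)
    h_bar = volume[shell].sum() / (2.0 * np.pi * R_shell * 2.0 * z_disk)
    return np.sum(mass[shell] * v_rad[shell]) / h_bar
```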
The dashed lines in Figure 8's left panel are the time integration of the central plane \(\dot{M}_{\rm gas,z}\) (Figure 7 lower left panel). As described above, in the central plane there is a consistent feedback outflow in iso, fallback replenishment in 13W, and stripping in 14W when \(P_{\rm ram}\) becomes sufficient to affect the inner disk (\(R_{\rm disk}\leq 5\) kpc). In the radial direction (solid lines, Figure 8 left panel), however, both 13W and 14W show an excess of inflow relative to iso, with peak amplitudes (\(\Delta M_{\rm inflow}\approx 2\times 10^{8}\)\(M_{\odot}\)) comparable to the net fallback inflow in 13W. These radial inflows can be explained by an interplay between the edge-on ram pressure component and disk rotation (right panel of Figure 6). Rotating gas in the disk, when countered by ram pressure (where \(x>0\); Figure 6), can lose angular momentum and migrate radially
Figure 7: **Left**: Gas mass loss rate perpendicular to the disk (\(\dot{M}_{\rm gas,z}\), where positive values indicate outflows), evaluated at disk heights \(z=\pm 2\) kpc, for the full plane (upper left) and the central 5 kpc regions (lower left). **Right**: The central 5 kpc gas mass (upper right) and SFR (lower right) time evolution as solid lines (similar to Figure 3, here for the central disk), and the central-to-full disk ratio (“c/f ratio”) as dashed lines on the right-hand y axes. In all panels, we show the three 100 Myr reference time frames as in Figure 3.
inward. For 14W, this radial inflow eventually decreases to 0 as stripping proceeds into the central region of the disk with increasing \(P_{\rm ram}\).
The right panel of Figure 8 shows the sum of the radial and vertical components: the total kinetic mass transport across all surfaces of the central disk. We summarize the key physical processes at play in the figure. For iso, the gradual mass loss is dominated by the central plane's star formation feedback. For 13W, there is a combination of replenishment from fallback and from direct radial inflow, resulting in its highest central plane gas mass and SFR (Figure 7 right panels). For 14W, because the ram pressure keeps increasing, there is net replenishment from radial inflow but little to no fallback around the group pericentric time (middle vertical bar), which eventually becomes a net outflow as the stripping radius reaches within the selected central disk (\(R_{\rm disk}=5\) kpc) at the cluster pericenter (rightmost vertical bar). Collectively, the kinetic mass transport (Figure 8) explains the central plane \(M_{\rm gas}\) and SFR evolution (Figure 7). We found RPS-driven direct radial inflows, in agreement with literature results (Schulz and Struck, 2001; Tonnesen and Bryan, 2009; Akerman et al., 2023), can replenish the central star-forming disk; and we identified fallback as an indirect mode of radial mass transport (Figures 6, 7, and 8) that can add to the enhancement for certain orbits.
We emphasize that as long as the \(P_{\rm ram}(t)\) profiles are consistent, even with different \(\rho_{\rm ICM}\), \(v_{\rm sat}\) components (§2.3), the galaxy undergoes a similar global evolution. For example, at \(t\sim 2200\) Myr (leftmost vertical bar), the global properties and the mass loss rates of 13W and 14W closely match (Figures 3, 7, and 8) as their \(P_{\rm ram}\) values are comparable (Figure 2), despite the 14W orbit consisting of a higher \(v_{\rm sat}\) and a lower \(\rho_{\rm ICM}\). Importantly, the galaxy's global evolution is also sensitive to the time derivative of ram pressure, \(dP_{\rm ram}/dt\). When the 13W ram pressure stops increasing as the galaxy reaches the group pericenter (\(dP_{\rm ram}/dt=0\), Figure 2 leftmost to middle vertical bar), the stripping radius is kept constant and the remaining gas disk acts as a "shield" for the stripped gas above it, such that gravity outweighs ram pressure in shielded regions and causes the central plane fallback (\(\dot{M}_{z}\), Figure 7). Conversely, in 14W, where \(P_{\rm ram}\) keeps increasing after \(t\sim 2200\) Myr for \(\sim\)500 Myr, the stripping radius decreases, so the shielded region shrinks (Figure 5); without shielding, the stripped gas above the disk is unable to fall back.
The global evolution described in §3.1 and 3.2 can be summarized as follows. In the iso run, the gas disk primarily loses mass to steady star formation, which drives feedback fountain flows that decrease in magnitude with decreasing SFR. In 12W with weak \(P_{\rm ram}\), the global properties are overall consistent with iso, other than the additional interactions between the wind and the low-density fountain flows. In 13W with moderate \(P_{\rm ram}\), RPS in the disk outskirts dominates the mass loss, and actually enhances the SFR in the remaining disk.
Figure 8: The cumulative mass loss (\(\Delta\)M\({}_{\rm gas}\)) over time for the central 5 kpc disk as driven by mass transport, comparing 13W, 14W, and iso. Left: the cylindrical radial (\(\Delta M_{\varpi}\); solid lines) versus vertical (\(\Delta M_{z}\); see Figure 7 for \(\dot{M}_{z}\), here dashed lines) mass loss components. Right: the total kinetic mass loss by summing the two components. As in Figure 7, positive \(\Delta\)M\({}_{\rm gas}\) values denote outflows, and the vertical bars show the three reference time frames. Key physical processes that explain the trends are annotated on the right panel.
Because the pericentric \(P_{\rm ram}\) is insufficient to remove the entire gas disk, gas can migrate radially inward via fallback and direct radial inflows, both replenishing the central disk's star formation. In 14W, \(P_{\rm ram}\) is sufficient to strip the gas first in the outskirts and then in the center, ultimately resulting in a rapid decline in the galaxy's SFR.
## 4 Spatially resolved star formation rate-mass relation
In the previous section, we found that RPS can enhance the satellite galaxy's global SFR while removing its gas (Figure 3), and the wind-enhanced star formation favors central disk regions (Figures 4 and 7). In this section, we evaluate the spatially-resolved SFR-mass relations -- a direct clue to the star formation microphysics (e.g., Kennicutt & Evans, 2012). We compare the relations between the RPS and isolated cases to characterize the physical conditions of ram pressure-enhanced star formation.
### Spatial Division Methodology and Radial Profiles
When spatially resolving galactic regions, the sampling scales need to exceed certain minima for galactic star formation-mass scaling relations to hold (Kruijssen & Longmore, 2014; Kruijssen et al., 2018); the selected sampling scales need to account for the incomplete statistical sampling of independent star-forming regions and the spatial drift between gas and stars. Empirically, this minimum spatial scale \(\Delta x\) works out to be \(\sim 1\) kpc for typical star-forming galaxy disks (see Kruijssen & Longmore, 2014, Fig. 2). Here, we select the sampling scale to be 1 kpc\({}^{2}\) to satisfy the validity of the scaling relations; and to match with typical scales (\(0.75-1.2\) kpc\({}^{2}\)) in high-angular-resolution observations in the local universe (Bigiel et al., 2008; Vulcani et al., 2019, 2020; Jimenez-Donaire et al., 2023).
For a given simulation snapshot, we divide the satellite disk into 1 kpc\({}^{2}\)-resolved patches, integrate the patches along the disk-height direction (for disk height \(|z|\leq 2\) kpc), and calculate the projected SFR, gas, and stellar surface densities (\(\Sigma_{\rm SFR}\), \(\Sigma_{\rm gas}\), and \(\Sigma_{*}\)) of each patch. We focus on the galaxy group and cluster pericenter time frames, when RPS most effectively enhances/quenches the satellite star formation (Figure 3), and compare the wind cases with the isolated galaxy case (iso) at the corresponding times. Using 100 Myr windows (10 simulation outputs) produces larger samples of 1 kpc\({}^{2}\) regions with recent star formation (10 Myr) across the disk. Changing the number of outputs does not qualitatively affect our results. Global properties of the selected patches are summarized in Table 5. Throughout the Section, we will focus on the four pericentric cases in Table 5: 13W, 14W, iso group, and iso cluster control, while iso pre-starvation is a special reference case for the star formation law comparison in §4.2 below.
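A simple way to realize this 1 kpc\({}^{2}\) division is a weighted 2D histogram over the disk plane; the sketch below assumes generic position and weight arrays (cell masses for \(\Sigma_{\rm gas}\), recently formed stellar mass divided by the selection window for \(\Sigma_{\rm SFR}\)), and the function and parameter names are ours.

```python
import numpy as np

def patch_surface_densities(x, y, z, weights, patch=1.0, half_size=20.0, z_max=2.0):
    """Project a per-cell (or per-particle) quantity onto patch x patch kpc^2
    columns through |z| <= z_max kpc; returns the map in (weight units) per kpc^2."""
    edges = np.arange(-half_size, half_size + patch, patch)
    sel = np.abs(z) <= z_max
    grid, _, _ = np.histogram2d(x[sel], y[sel], bins=[edges, edges],
                                weights=weights[sel])
    return grid / (patch * patch)

# e.g., sigma_gas = patch_surface_densities(xc, yc, zc, cell_mass)
#       sigma_sfr = patch_surface_densities(xs, ys, zs, young_star_mass / 1.0e7)  # 10 Myr window
```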
The radial profiles of the resulting patches are shown in Figure 9. The SFR profiles (top panel) closely resemble the respective gas profiles (bottom panel) in all cases. In the central few kpc, the wind cases (blue) are enhanced in both the SFR and gas densities relative to iso (red); at larger radii, the wind radial profiles show a steeper decrease with radius than iso. This is expected given our previous finding that ram pressure removes gas in the outer disk while driving gas into the central disk (Figure 7). The 95% enclosing radii of the SFR, denoted by the open circles, show that star formation is more centrally-concentrated with increasing ram pressure, with the iso cases forming stars within \(\sim\)9.0 kpc (no ram pressure), 13W within \(\sim\)5.4 kpc (moderate ram pressure, enhanced SFR), and 14W within \(\sim\)2.3 kpc (strong ram pressure, approaching complete stripping). The time evolution in iso from the group to cluster pericenter times (\(\sim\)300 Myr duration; red solid and dashed curves) is due to star formation "starvation", which has a relatively small impact on the radial profiles (and reduces the global SFR by \(\sim 16\)%; see Table 5).
The bottom panel of Figure 9 directly compares the gas and stellar surface density (\(\Sigma_{\rm gas}\) and \(\Sigma_{*}\)) profiles. The y-axes are under the same physical scale follow
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Case & \(t_{\rm peri}\) & N\({}_{\rm patch,SF}\) & SFR\({}_{\rm global}\) & \(\tau_{\rm dep}\) \\ & (Myr) & & (\(M_{\odot}\)/yr) & (Gyr) \\ (1) & (2) & (3) & (4) & (5) \\ \hline
13W pericenter & 2530-2630 & 1427 & 0.85 & 2.28 \\ iso group control & 2530-2630 & 3016 & 0.38 & 7.12 \\
14W pericenter & 2830-2930 & 447 & 0.21 & 1.70 \\ iso cluster control & 2830-2930 & 2933 & 0.32 & 8.16 \\ iso pre-starvation & 580-680\({}^{(a)}\) & 2292 & 1.08 & 3.11 \\ \hline \end{tabular} Note. –(1) Simulation cases. (2) Simulation time periods that correspond to 13W and 14W pericenter passages. \({}^{(a)}\) The iso “pre-starvation” time frame is selected to match the central plane M\({}_{\rm gas}\) of 13W pericenter (Figure 7), see §4.1. (3) The number of star-forming patches, where star formation is defined to have \(\Sigma_{\rm SFR}>10^{-6}\ M_{\odot}/({\rm yr\cdot kpc^{2}})\) per patch. (4) The total SFR of the star-forming patches averaged over the selected 100 Myr time period. (5) The gas depletion time defined as \(\tau_{\rm dep}=M_{\rm gas}/{\rm SFR}\), where \(M_{\rm gas}\) is the total gas mass in the patches, and SFR as in column 4.
\end{table}
Table 5: Summary of the 1 kpc\({}^{2}\) patch selection
ing the conventional units of each quantity, as will be used in Sections 4.2 and 4.3 and figures therein. In all cases, \(\Sigma_{*}\) profiles (lighter lines) are greater than \(\Sigma_{\rm gas}\) (deeper lines) within the inner \(\sim\)8 kpc region that encloses the majority of star formation. Unlike \(\Sigma_{\rm gas}\) (or \(\Sigma_{\rm SFR}\)) that clearly distinguishes wind and iso, the \(\Sigma_{*}\) profiles are consistent among all cases. This is because (i) ram pressure only directly impacts the gas disk and not the stellar disk; (ii) the formed stellar mass is low compared with the static stellar potential (\(\Delta M_{\rm formed\ star}/M_{\rm static\ disk}<3\%\); Figure 3 and §2), therefore the total \(\Sigma_{*}\) (and the gravitational potential, including dark matter) is dominated by the static component at all radii.
### SFR\(-M_{\rm gas}\): The Kennicutt-Schmidt Relation
We investigate the SFR\(-M_{\rm gas}\) relation, also known as the Kennicutt-Schmidt (KS) relation (Schmidt, 1959; Kennicutt, 1989), for the resolved 1 kpc\({}^{2}\) patches. The KS relation is an empirical power-law between the observed SFR and gas surface densities, \(\Sigma_{\rm SFR}=A\cdot\Sigma_{\rm gas}^{N}\). Physically, it is a proxy for how efficiently gas forms stars at given surface densities. In our suite of simulations, both the stripping and the isolated galaxy cases follow the same numerical star formation and feedback recipe (Goldbaum et al., 2015, 2016; see §2), and differences on the \(\Sigma_{\rm SFR}-\Sigma_{\rm gas}\) (KS) phase plane will directly reflect the impact of RPS.
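As a concrete example of how such a power law can be fit to the resolved patches, the sketch below performs an ordinary least-squares fit in log-log space; the star-forming floor follows the Table 5 definition, but the routine itself is ours, and the slopes quoted in the text come from the simulation data rather than from this sketch.

```python
import numpy as np

def fit_ks_powerlaw(sigma_gas, sigma_sfr, sfr_floor=1e-6):
    """Fit Sigma_SFR = A * Sigma_gas**N to star-forming patches (log-log OLS)."""
    ok = (sigma_gas > 0) & (sigma_sfr > sfr_floor)
    N, logA = np.polyfit(np.log10(sigma_gas[ok]), np.log10(sigma_sfr[ok]), 1)
    return N, 10.0 ** logA
```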
Figure 10 shows the KS relation in the RPS and isolated galaxy disks at the group and cluster pericenters (Table 5). For all cases, the gas densities are tightly correlated with SFR on the resolved scale, but the RPS cases populate a distinct phase space of high-density, high-SFR gas that is absent in iso. To quantify this excess, we first identify the 99.85th percentile surface density thresholds (empirically 3\(\sigma\) upper limits) in iso using the 1D histograms of \(\Sigma_{\rm SFR}\) and \(\Sigma_{\rm gas}\). For each of these distributions, these upper limits are nearly identical at both the group and cluster pericentric times: \(\log\Sigma_{\rm gas}/(M_{\odot}\ {\rm pc}^{-2})\gtrsim 1.2\) and \(\log\Sigma_{\rm SFR}/(M_{\odot}\ {\rm yr}^{-1}\ {\rm kpc}^{-2})\gtrsim-1.8\). Many patches in 13W and 14W occupy the KS phase space beyond these upper limits in iso (blue scatter points; upper right corner of Figure 10); these are the dense gas excess in the RPS cases and have a significant contribution to the total SFR (\(\sim\)58% in both 13W and 14W).
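The following sketch illustrates this bookkeeping with NumPy percentiles, reading "beyond these upper limits" as exceeding both thresholds simultaneously; that interpretation and all names are our own, not the exact selection used for the quoted 58%.

```python
import numpy as np

def dense_excess_fraction(sig_gas_wind, sig_sfr_wind,
                          sig_gas_iso, sig_sfr_iso, q=99.85):
    """Fraction of the wind-case SFR carried by patches lying beyond the iso
    case's q-th percentile (empirical 3-sigma) surface-density limits."""
    gas_lim = np.percentile(sig_gas_iso, q)
    sfr_lim = np.percentile(sig_sfr_iso, q)
    excess = (sig_gas_wind > gas_lim) & (sig_sfr_wind > sfr_lim)
    return sig_sfr_wind[excess].sum() / sig_sfr_wind.sum()
```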
For 13W, star formation from this dense gas (58% or 0.5 \(M_{\odot}\)/yr) is comparable with its SFR enhancement relative to iso (\(\Delta{\rm SFR}_{\rm group}\approx 0.45\ M_{\odot}\)/yr; Table 5). For 14W, ram pressure is strong enough to remove most of the surviving ISM and leads, ultimately, to a quenching of star formation. Despite 14W's dense gas excess, its number of star-forming patches (\(N_{\rm patch,SF}\); Table 5) has decreased to \(\sim\)30% of 13W and \(\sim\)15% of the iso control, resulting in its lowest total SFR of all cases. We will further discuss why gas and SFR surface densities are enhanced in the RPS cases in §5.1.
We showed in Figure 10 that the RPS and iso cases populate different \(\Sigma_{\rm gas}\) ranges. At the selected pericentric time frames, \(\Sigma_{\rm gas}\) in iso primarily belongs in the H i-dominated regime (the left of the dashed vertical line in both panels), while in 13W and 14W, it populates both H i and H\({}_{2}\) regimes (Krumholz et al., 2009). The relatively low surface densities in iso are a direct result of the gradual gas consumption due to star formation and the feedback-driven outflows, also known as "starvation" (Larson et al., 1980; van den Bosch et al., 2008; Trussler et al., 2020). To evaluate the effect of RPS on the KS relation over a similar \(\Sigma_{\rm gas}\) range, we identified an earlier time frame in iso (iso pre-starvation; see Table 5), where the central disk gas mass -- and hence the highest \(\Sigma_{\rm gas}\) -- is comparable to 13W at the pericenter (Figure 7). This comparison will determine if the star formation efficiency at given gas densities is modified in the RPS cases.
Figure 9: The SFR, gas, and stellar surface densities (\(\Sigma_{\rm SFR}\), \(\Sigma_{\rm gas}\), and \(\Sigma_{*}\)) radial profiles of the resolved 1 kpc\({}^{2}\) patches. The four cases (Table 5) are 13W and 14W at pericenters (solid and dashed in blue) and iso group and cluster control cases (same line styles in red). On the top panel, the open circles show the radii enclosing 95% of the \(\Sigma_{\rm SFR}\). On the bottom panel, \(\Sigma_{\rm gas}\) (deeper colors; left-hand y-axis) and \(\Sigma_{*}\) (lighter colors; right-hand y-axis) are shown under the same scale, following the conventional units of each quantity.
Figure 11 shows the effect of RPS versus starvation on the KS plane by comparing three cases: 13W and iso at group pericenter time and iso pre-starvation. Starvation shifts the isolated galaxy to lower surface densities along the KS relation (red dashed versus red filled) via gas consumption and feedback-driven outflows. However, the iso pre-starvation case shares a very similar KS relation with the RPS case (red dashed versus blue filled), despite their distinct evolutionary history and gas morphology (Figures 3 and 5). Judging from the similar KS relation, the star formation efficiency in RPS and iso cases remains the same at comparable \(\Sigma_{\rm gas}\).
Independent of RPS, the resolved patches in our simulations show a KS power-law slope turnover from the H i to H\({}_{2}\) regimes when sufficient dense gas exists as H\({}_{2}\) (Figures 10 and 11). This is a direct reflection of our numerical star formation recipe (see Goldbaum et al., 2016, Fig 5), which agrees with observational findings that the KS power-law slope transitions from superlinear in the atomic regime (\(N_{\rm KS,H\,{\sc i}}>1\) with poor correlation; Bigiel et al., 2008; Leroy et al., 2008; Kennicutt and Evans, 2012) to approximately linear in the molecular regime (\(N_{\rm KS,H_{2}}\approx 1\); Krumholz et al., 2009; Heiderman et al., 2010; Krumholz et al., 2012; Jimenez-Donaire et al., 2023; see the solid line in Figure 10). We also annotated the H i+H\({}_{2}\) combined fitting result from Bigiel et al. (2008) in Figure 10 (dashed line, N\({}_{\rm KS}\approx 1.8\)); our simulations follow a mildly steeper slope in the atomic regime (N\({}_{\rm KS,H\,{\sc i}}\approx 2.0\)), still well within the observational scatter (Bigiel et al., 2008). An exception to the overall consistent KS slope in our simulations is 14W at the lowest gas surface densities (\(\log\Sigma_{\rm gas}/(M_{\odot}\ {\rm pc}^{-2})\leq 0\); see Figure 10 right panel), which shows distinctively higher \(\Sigma_{\rm SFR}\) than iso and hence a lower KS slope in the low-density H i regime. The high \(\Sigma_{\rm gas}\) (H\({}_{2}\)-dominated) regime in 14W is similar to the other cases. We suspect that the low-density star-forming gas in 14W is driven by fast gas removal from RPS in recently star-forming regions.
### SFR\(-M_{*}\): Strong Stripping and Disk Truncation
In this section, we investigate the SFR\(-M_{*}\) relation, also known as the star formation main sequence relation (e.g., Schiminovich et al., 2007; Sargent et al., 2014; Speagle et al., 2014), on the spatially-resolved \(\Sigma_{\rm SFR}-\Sigma_{*}\) plane. We examine the impact of RPS using our simulations and make comparisons with Vulcani et al. (2020) (from the GAs Stripping Phenomena in galaxies "GASP" survey; Poggianti et al., 2017). The SFR surface densities of the resolved simulation patches
Figure 10: Spatially resolved SFR\(-M_{\rm gas}\) (\(\Sigma_{\rm SFR}-\Sigma_{\rm gas}\); “Kennicutt-Schmidt”) relation within disk height \(|z|\leq 2\) kpc. The left and right panels show the 100 Myr duration of the galaxy group and cluster pericentric passages (or the corresponding times in iso; Table 5), respectively. Each panel shows the \(\Sigma_{\rm SFR}-\Sigma_{\rm gas}\) bivariate and one-dimensional (1D) distributions of 1 kpc\({}^{2}\) patches in the iso (red) and wind (blue) simulations. Solid and dashed black lines show the spatially resolved KS power-law from Bigiel et al. (2008) over the observed \(\Sigma_{\rm gas}\) ranges for H\({}_{2}\) and H i+H\({}_{2}\) combined, see §4.2 for details. Constant depletion time contours \(t_{\rm dep}=0.1\), 1, 10 Gyr are annotated in gray dotted lines. Gray vertical line indicates the atomic-to-molecular gas density transition at \(\Sigma_{\rm gas}\approx 10\)\(M_{\odot}\)/pc\({}^{2}\)(Krumholz et al., 2009).
are identical to those in the KS relation (§4.2), while the stellar surface densities are a combination of the static stellar disk and the formed star particles as outlined in §4.1, which turns out to be highly consistent among all simulations (Figure 9). The Vulcani et al. (2020) sample contains \(\sim\)1 kpc\({}^{2}\) resolved patches within 30 RPS galaxies under various stripping stages in nearby clusters, along with 10 isolated control case galaxies of similar masses (see Vulcani et al., 2019). Our simulated galaxy with \(M_{*}\approx 10^{9.8}\)\(M_{\odot}\) lies well within the GASP sample range and is directly comparable.
Figure 12 shows the \(\Sigma_{\rm SFR}-\Sigma_{*}\) relation for the resolved patches, comparing 13W, 14W at their pericenters with the respective iso control cases in the same style as Figure 10. Since \(\Sigma_{*}\) is a tight, monotonic function of disk radius (Figure 9), it is an indicator for star formation location on the \(\Sigma_{\rm SFR}-\Sigma_{*}\) phase plane. Two major effects of RPS can be identified from the differences between the wind and iso cases, (i) the truncation of the star-forming disk, shown by the high \(\Sigma_{*}\) cutoffs for star formation, \(\Sigma_{*}\approx 10^{6.5}(M_{\odot}\) kpc\({}^{-2})\) in 13W and \(10^{7}(M_{\odot}\) kpc\({}^{-2})\) in 14W; (ii) the enhancement of star formation in the central disk regions, shown by the \(\Sigma_{\rm SFR}\) excess at \(\Sigma_{*}\gtrsim 10^{7.1}(M_{\odot}\) kpc\({}^{-2})\) in 13W and \(\Sigma_{*}\gtrsim 10^{7.6}(M_{\odot}\) kpc\({}^{-2})\) in 14W. The disk truncation is more evident in 14W, where the ram pressure is higher, which is consistent with the radial profiles (Figure 9).
In Figure 12, we annotated the GASP best-fit power-law lines for the stripping (solid) and isolated control (dashed) samples (Vulcani et al., 2020), where the slopes are almost identical, but the stripping sample is \(\sim\)0.35 dex higher in \(\Sigma_{\rm SFR}\) at all \(\Sigma_{*}\). Our result is consistent with Vulcani et al. (2020) at high \(\Sigma_{*}\) (central disk regions) that the spatially-resolved SFR is enhanced in the stripping cases. But we do not see a similar SFR enhancement at low \(\Sigma_{*}\); instead, the low \(\Sigma_{*}\) phase space is poorly populated in our wind cases due to disk truncation. We note, however, that the Vulcani et al. (2020) sample contains an ensemble of galaxies with a range of \(M_{*}\), inclinations, and environments, while our simulations focus on one galaxy under different ram pressure strengths versus in isolation. The disk truncation and SFR enhancement in our wind cases are consistent with the "Jstage=3" (strongest stripping) galaxies in Vulcani et al. (2020), which do have a steeper fitted \(\Sigma_{\rm SFR}\)-\(\Sigma_{*}\) slope.
We also examined earlier pre-pericenter time frames in our simulations: qualitatively, the wind cases at earlier times (weaker stripping) show a similar truncation at the lowest \(\Sigma_{*}\) (disk edge) and a mild SFR enhancement at relatively high \(\Sigma_{*}\) as those in Figure 12. The transition \(\Sigma_{*}\), where \(\Sigma_{\rm SFR}\) in the wind cases becomes higher than in iso, increases with ram pressure strength, as expected for outside-in stripping. We will present the "time-stacked" results from various RPS stages in §5.2.
## 5 Discussion
### Impacts of RPS on star formation
Our key results in Sections 3 and 4 are,
1. In certain orbits, RPS can lead to an enhanced global star formation rate (SFR) in relatively gas-deficient galaxies.
2. The SFR enhancement is driven by an excess of dense gas in the disk central regions, while the star formation efficiency at given gas surface densities (the Kennicutt-Schmidt relation) remains the same.
There are two possible channels through which the SFR is enhanced in the galaxies undergoing ram pressure stripping: compression and mass transport, which are usually not separable (Tonnesen and Bryan, 2012; Roediger et al., 2014; Troncoso-Iribarren et al., 2020; Vulcani et al., 2020). The ISM at the ram pressure interface can be locally compressed, leading to a higher star formation rate and efficiency; global mass flows driven by ram pressure can redistribute gas in the disk and cause SFR enhancement.
Figure 11: The effects of RPS versus starvation on the KS plane. The red and blue filled contours and solid 1D distribution lines are iso group control and 13W pericenter, respectively (as in Figure 10), and the red dashed lines are for iso at an earlier time (prior to \(\sim\)2 Gyr of starvation; see §4.2). The constant depletion time contours are shown as in Figure 10.
Our results show that RPS-driven mass transport, including fallback and radial inflows (§3.2; Figures 7 and 8), is directly responsible for the central disk gas mass enhancement relative to iso during the \(\sim\)Gyr early stripping stage (Figure 7 upper right panel). The centralized ISM mass distribution in the RPS galaxies results in enhanced central surface densities (\(\Sigma_{\rm gas}\) and \(\Sigma_{\rm SFR}\); Figures 9 and 10), which account for the global enhancement of SFR in the stripping cases relative to iso (with only starvation). The signal of relative SFR enhancement persists on longer-than-Gyr timescales unless ram pressure becomes sufficient to remove the entire gas disk and quench the star formation (Figure 3).
The role of compression is more challenging to quantify and is often inferred indirectly. Roediger et al. (2014) found that compression, indicated by shock passages, can drive a local, short-lived SFR burst (\(\sim\)15 Myr), but it only impacts the low-density outer disk and has only a mild effect on the global SFR. Choi et al. (2022) modeled a local patch of star-forming galactic disks under RPS and found a similar short-lived enhancement (\(\sim\)20 Myr; see their fig 13(c)) in the dense gas surface density (\(\Sigma_{\rm gas,\,n_{\rm H}>10\ {\rm cm^{-3}}}\)), demonstrating vertical gas compression by the initial ram pressure passage. Compression would increase the gas volume density (\(\rho_{\rm gas}\)) without increasing the surface density integrated throughout the disk (\(\Sigma_{\rm gas}\)). If such compression happens, at comparable \(\Sigma_{\rm gas}\), \(\Sigma_{\rm SFR}\) will be systematically higher on the KS plane. However, our Figures 10 and 11 show that at the same gas surface densities, the stripping set follows the same KS relation as the iso set. This suggests that local compression is insufficient to account for the galaxy-scale SFR enhancement in our simulations.
Another search for compression in RPS galaxies in simulations was performed by Troncoso-Iribarren et al. (2020), which spatially divided satellite galaxy disks from the EAGLE simulations into leading and trailing halves (LH and TH) separated by the infall velocity vector. Under this LH-TH division, which maximizes the SFR asymmetry between the two halves of the satellite galaxies, Troncoso-Iribarren et al. (2020) found that gas in the LH that tends to be more compressed, as inferred from higher average pressure, also has a higher star formation efficiency (defined as the total SFR/M\({}_{\rm gas}\)) compared with the TH. Here, we follow the methodology of Troncoso-Iribarren et al. (2020) to further test for galaxy-scale effects of compression. We divide the galaxy disks based on a simple LH: \(y<0\) and TH: \(y\geq 0\) spatial criterion, given the infall velocity vector (Figure 5) and that the star-forming disk is thin throughout the simulations (Figure 4). In the discussion hereafter, we
Figure 12: Spatially resolved SFR\(-M_{*}\) (\(\Sigma_{\rm SFR}-\Sigma_{*}\)) relation within disk height \(|z|\leq 2\) kpc, in the same style as Figure 10. The stellar surface densities are a combination of the static Plummer-Kuzmin potential (§2.1 and Table 1) and the formed stars (as active particles) in the simulations. The black lines show the best-fit relations for the GASP stripping (solid; Vulcani et al., 2020) and isolated control (dashed; Vulcani et al., 2019) samples; both relations are shown over the observed \(\Sigma_{\rm SFR}\), \(\Sigma_{*}\) ranges, see Vulcani et al. (2020).
will assume that the LH is under higher compression than the TH because of ram pressure. The disparity (or lack thereof) in mass and star formation between the LH and TH will help disentangle the effects of mass transport and compression.
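A minimal sketch of this division, assuming per-cell coordinates and SFR/mass arrays in the simulation frame (the infall velocity lies in the \(y\)-\(z\) plane, so \(y<0\) is the leading side); the function and key names are illustrative.

```python
import numpy as np

def leading_trailing_totals(y, z, sfr, gas_mass, z_max=2.0):
    """Total SFR and ISM mass in the leading (y < 0) and trailing (y >= 0)
    halves of the disk, within |z| <= z_max kpc."""
    disk = np.abs(z) <= z_max
    lh, th = disk & (y < 0), disk & (y >= 0)
    return {"SFR_LH": sfr[lh].sum(), "SFR_TH": sfr[th].sum(),
            "Mgas_LH": gas_mass[lh].sum(), "Mgas_TH": gas_mass[th].sum()}
```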
Figure 13 shows the SFR and gas radial profiles of the wind and iso runs (similar to Figure 9), distinguishing the LH (solid lines) and TH (dashed lines) of the disks. As expected, the surface density profiles for the isolated galaxy control case are an equal division between the two halves at all radii. In the wind runs, the surface densities in the LH consistently show a steeper decrease with disk radius than the TH; the disk radius at which the two halves diverge decreases with ram pressure. The more extended low \(\Sigma_{\rm SFR}\), \(\Sigma_{\rm gas}\) material in the TH is caused by the asymmetric disk morphology\({}^{4}\) under an inclined wind (Figure 5). Within the central few kpcs of the disk, the surface density profiles show a mild LH excess in 13W and a stronger excess in 14W. However, we found that the 14W signal oscillates over time\({}^{5}\), which is likely due to the orbit of the few dense clouds remaining before complete stripping instead of compression.
Footnote 4: Our \(|z|\leq 2\) kpc disk height selection excludes tail contamination.
Figure 14 shows the time evolution of the LH-TH disparity. The time-dependent differences in the SFR and gas mass are subject to oscillations due to disk rotation and epicyclic motions (as previously shown in Figure 4; also see Tonnesen & Bryan 2009), which always average
Figure 14: The SFR and gas mass differences between the LH and TH versus simulation time. The three panels from top to bottom show the differences (LH minus TH) in SFR, the disk ISM mass, and the dense ISM mass (where number density exceeds the star formation threshold; see §2), respectively. In each panel, the solid curves show the running means over 100 Myr, and the horizontal dash-dotted lines show the time averages of individual simulations.
Figure 13: The SFR and gas surface density radial profiles (described in §4.1) for the wind-LH and TH of the disks. The simulations are color-coded as in §3, and we omitted 12W where the ram pressure has negligible impacts on the gas or SFR (Figure 3). For each wind simulation, we averaged over the 100 Myr closest to the pericenter (10 outputs), as in Table 5; for iso, we selected the group pericenter time, which yields a largely consistent profile with the cluster pericenter time (Figure 9).
to \(\sim\)0 in the absence of ram pressure; see the time averages of iso (horizontal gray dash-dotted line). Under ram pressure, the time-averaged SFR remains close to equal between the two halves (top panel); the disk ISM mass shows a strong excess in the TH under intermediate and strong ram pressure (middle panel); but the dense, star-forming ISM (bottom panel) mass is again almost equal between the two halves. The temporal trends of the SFR generally follow those of the dense ISM; they are much less sensitive to the total ISM, which acquired the strongest LH-TH disparity from RPS.
The primary effect of RPS (with an edge-on component) is generating an excess of low-density gas in the TH that has a low contribution to the global SFR (Figure 13). The SFR and dense ISM of the two halves, although subject to temporal oscillations, show close to equal time-averaged values and no trend with respect to ram pressure (Figure 14). Our finding of the RPS-driven gas excess in the TH agrees with Troncoso-Iribarren et al. (2020), but our interpretation of this asymmetry differs. Since Troncoso-Iribarren et al. (2020) defined the star formation efficiency of each half as the mass-weighted SFR/M\({}_{\rm gas}\), the TH efficiency may be biased by the excess of non-star forming gas, appearing as an efficiency enhancement in the LH. We showed that the dense ISM responsible for star formation shows no such disparity (Figure 14), indicating that the likely more compressed LH has the same efficiency as the TH.
To conclude, compression is not the direct cause of the ram pressure-induced SFR enhancement in our simulations, judging from two independent tests, (i) the spatially resolved SFR surface densities (\(\Sigma_{\rm SFR}\)) in the stripping set show no systematic enhancement at comparable \(\Sigma_{\rm gas}\) (inferred from KS relation; Figures 10 and 11), (ii) galaxy-scale global properties, SFR and dense ISM mass, show no enhancement in the LH where compression is stronger. Instead, the RPS-induced mass flows (Figures 7 and 8) account for the centralized mass and SFR profiles (enhanced central surface densities; Figures 9 and 13), which supports mass transport as the direct mechanism for the SFR enhancement.
### Predictions for observations
Here we predict RPS observables based on the simulation results, where Section 5.2.1 focuses on the surviving gas in the disk and Section 5.2.2 on the local SFR-mass relations. We will discuss our predictions in the context of recent environmental surveys, GASP (Poggianti et al., 2017; Moretti et al., 2020; Vulcani et al., 2020) and VERTICO (Brown et al., 2021; Jimenez-Donaire et al., 2023).
#### 5.2.1 Surviving gas in the disk: the dense gas ratio and the gas mass fraction
We describe the surviving gas using two global quantities: the dense gas ratio (\(R_{\Sigma_{10}}\)) and the gas mass fraction (\(f_{\rm gas}\)) within the simulated disk. The dense gas ratio is an estimate for the H\({}_{2}\) to H i mass ratio, defined as \(R_{\Sigma_{10}}\equiv M_{\rm gas(\Sigma_{\rm gas}>10)}/M_{\rm gas(\Sigma_{ \rm gas}\leq 10)}\), where \(\Sigma_{\rm gas}=10\)\(M_{\odot}/\)pc\({}^{2}\) is adopted as an empirical atomic-to-molecular transition density (Krumholz et al., 2009; Kennicutt and Evans, 2012; also see §4.2). The ratio \(R_{\Sigma_{10}}\) is not direct modeling of \(M_{\rm H_{2}}/M_{\rm H{\,\textsc{i}}}\), but it self-consistently compares the molecular- and atomic-dominated gas masses in our simulations. The gas mass fraction is defined as \(f_{\rm gas}\equiv M_{\rm gas}/M_{*}\), where \(M_{\rm gas}\) is the total gas mass within the disk and \(M_{*}\) includes the static stellar potential (§2) and the formed star particles. Figure 15 shows the time evolution of both quantities, for which we consistently selected disk height \(|z|\leq 2\) kpc, hence excluding most of the unbound gas in the tail.
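Given the 1 kpc\({}^{2}\) patch surface densities from §4.1, both quantities reduce to a few lines of bookkeeping; the sketch below assumes \(\Sigma_{\rm gas}\) in \(M_{\odot}\) pc\({}^{-2}\) (hence a patch area of \(10^{6}\) pc\({}^{2}\)), and the names are ours.

```python
import numpy as np

def dense_ratio_and_gas_fraction(sigma_gas, m_star, patch_area_pc2=1.0e6,
                                 sigma_thresh=10.0):
    """R_Sigma10 = M(Sigma_gas > 10) / M(Sigma_gas <= 10) and f_gas = M_gas/M_*,
    from 1 kpc^2 patch surface densities given in Msun/pc^2 (disk only)."""
    m_patch = sigma_gas * patch_area_pc2      # mass per patch in Msun
    dense = sigma_gas > sigma_thresh
    r_sigma10 = m_patch[dense].sum() / m_patch[~dense].sum()
    f_gas = m_patch.sum() / m_star
    return r_sigma10, f_gas
```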
In Figure 15, we annotated the observational results from the extended GALEX Arecibo SDSS Survey (xGASS; Catinella et al., 2018) as a reference. The comparison sample we adopted (cyan on both panels, dashed line for median, shading for first to third quartiles) is a subset of 21 xGASS galaxies with comparable stellar masses (\(M_{*}\in 10^{9.6-9.9}M_{\odot}\)) to our simulated satellite galaxy, and with detections in both CO and H i (Saintonge et al., 2017; Catinella et al., 2018). We consistently adopted \(M_{\rm H_{2}}/M_{\rm H{\,\textsc{i}}}\) as the dense gas ratio (left panel) for the observational samples. Additionally, we showed the average dense gas ratio of xGASS disk regions (dashed-dotted line; following Moretti et al., 2020), which has an additional disk radius selection (Wang et al., 2020) that results in a higher dense gas ratio.
In our simulations, the dense gas ratio \(R_{\Sigma_{10}}\) in the RPS cases is consistently higher than that in the isolated galaxy case (Figure 15 left panel). The ratio \(R_{\Sigma_{10}}\) increases with time in 13W and 14W, as opposed to the clearly decreasing trend in iso. The decreasing trend in iso is a result of starvation (Figure 11): gas depletion due to star formation and feedback favors the high \(\Sigma_{\rm gas}\) regions with high local \(\Sigma_{\rm SFR}\), reducing the ratio of the denser (e.g., \(\Sigma_{\rm gas}>10\)\(M_{\odot}/\)pc\({}^{2}\); H\({}_{2}\)-dominated) gas. In the RPS cases, by contrast, gas removal favors the low \(\Sigma_{\rm gas}\) regions where the gravitational restoring force is weakest ("outside-in stripping"), and the disk central regions can be replenished by the ram pressure-driven mass flows like fallback and radial inflows (§5.1) -- both mechanisms increasing the dense gas ratio within the disk. Compared at the pericenter times (Table 5), \(R_{\Sigma_{10}}\) in 13W and 14W are 1.04 and 0.87 dex higher than iso (a factor of 11.0 and 7.5), respectively.
Our result that RPS can increase the dense gas ratio agrees with Moretti et al. (2020), which found a factor of 4 to \(\sim\)100 higher \(M_{\rm H_{2}}/M_{\rm H\,{\sc i}}\) ratios for three GASP jellyfish galaxies (undergoing active RPS) than the xGASS disk control sample (dash-dotted line in Figure 15). Moretti et al. (2020) suggested that a more efficient conversion of neutral into molecular gas in these jellyfish galaxies can explain their significantly higher molecular mass ratios. However, the \(R_{\Sigma_{10}}\) trends in our simulations can be explained by the different gas depletion models under starvation versus RPS described above, independent of the H i-H\({}_{2}\) conversion. Despite the different definitions, our \(R_{\Sigma_{10}}\) values are comparable with the observational \(M_{\rm H_{2}}/M_{\rm H\,{\sc i}}\) values of the xGASS sample (Figure 15); we do not see the high \(M_{\rm H_{2}}/M_{\rm H\,{\sc i}}\gtrsim 10\) (\(\log R_{\Sigma_{10}}>1\)) of the three jellyfish galaxies in Moretti et al. (2020).
The evolution of the gas mass fraction \(f_{\rm gas}\) (Figure 15 right panel) closely follows that of \(M_{\rm gas}\) (Figure 3), because the stellar mass evolution is relatively minimal throughout the simulations (\(M_{*}=10^{9.7-9.8}\)\(M_{\odot}\)). The initial condition of \(\log f_{\rm gas}\approx 0\) we adopted (§2.1) is \(\sim\)0.5 dex higher than the xGASS average value (\(0.01<z<0.05\) galaxies, Catinella et al. 2018, but within 1\(\sigma\) of Calette et al. 2018), otherwise the \(f_{\rm gas}\) evolution throughout the iso simulation is within the observational scatter of xGASS (cyan shading in Figure 15). We take \(f_{\rm gas}(t)\) in iso as the reference gas fraction in our simulations: 12W is in overall agreement with iso, 13W at the group pericenter is mildly lower (\(\Delta\log f_{\rm gas}\approx-0.16\) dex), and 14W at the cluster pericenter significantly lower (\(-0.84\) dex). As expected, direct removal by RPS decreases the total \(M_{\rm gas}\) and hence \(f_{\rm gas}\) in the group and cluster cases. But in the group case, a \(-0.16\) dex deviation from prediction is within the typical observational scatter (\(0.2-0.3\) dex) of such relations (see Figure 15 and Cortese et al. 2021, fig. 2). The satellite at 13W pericenter has a reduced gas fraction but still belongs to the gas normal regime, while at 14W, it is gas deficient during the final \(\sim\)400 Myr approaching the pericenter.
For the simulation cases with observable gas stripping morphology (13W and 14W), RPS always reduces \(f_{\rm gas}\) in the galaxy disks. This means that under RPS, despite the mass transfer channels that can potentially replenish the dense/central-disk gas, stripping of the low-density/outer-disk gas dominates the global mass evolution (see, e.g., Figure 7). But when we account for the total gas in the disk _and tail_ (tail gas potentially unbound), we find, similarly to Moretti et al. (2020), that \(f_{\rm gas,disk+tail}\) is similar between the RPS and iso cases.
To summarize §5.2.1, first, RPS with an edge-on component tends to increase the dense gas ratio in the disk, while starvation decreases it. This could explain the observed higher molecular-to-atomic gas ratio (\(M_{\rm H_{2}}/M_{\rm H\,{\sc i}}\)) in jellyfish galaxies (Moretti et al. 2020) without requiring a substantially higher H i-H\({}_{2}\) conversion efficiency. Second, RPS (unsurprisingly) reduces the gas mass fractions in the disk, even where the global SFR is enhanced.
Figure 15: Time evolution of the dense gas ratio \(R_{\Sigma_{10}}\) (left) and the gas mass fraction \(f_{\rm gas}\) (right). For the dense gas ratio, we employed a simplified \(\Sigma_{\rm gas}=10\)\(M_{\odot}\)/pc\({}^{2}\) cut to distinguish the H i- and H\({}_{2}\)-dominated gas; see §5.2. The dash-dotted line on the left panel shows the average \(M_{\rm H_{2}}/M_{\rm H\,{\sc i}}\) ratio for the disk regions of the xGASS sample (see Fig 3 of Moretti et al.2020). The cyan dashed lines and shadings on both panels show the median and the first to third quartile ranges of the xGASS sample (Saintonge et al.2017; Catinella et al.2018) at a comparable stellar mass range (\(9.6<\log M_{*}/M_{\odot}<9.9\)). All simulation quantities are for the disk only (disk height \(|z|\leq 2\) kpc; excluding the stripping tails).
Where ram pressure at the orbital pericenter is insufficient to remove the densest ISM (13W), the \(f_{\rm gas}\) reduction can be mild, maintaining the stripped galaxy in the gas-normal regime (\(<\pm 0.3\) dex). The remaining gas in the disk at 13W pericenter will likely be perturbed by galaxy-galaxy gravitational interactions, which are expected to be effective in group environments; see §5.3.
#### 5.2.2 RPS signatures on the local SFR-mass relations
High angular resolution observations have enabled the direct mapping of galactic star formation laws on small scales (Bigiel et al., 2008; Leroy et al., 2008; Kennicutt and Evans, 2012). Some recent programs include the PHANGS (Physics at High Angular resolution in Nearby GalaxieS) survey for nearby galaxies (Leroy et al., 2021; Lee et al., 2022), the VERTICO survey for Virgo cluster galaxies (Brown et al., 2021; Jimenez-Donaire et al., 2023), and the GASP survey for environmentally selected jellyfish galaxies (Poggianti et al., 2017; Jaffe et al., 2018; Vulcani et al., 2020; Moretti et al., 2020). RPS is one of the main environmental processes in the environmentally-selected samples (e.g., GASP and VERTICO), but the assessment of the RPS impact often faces several challenges: (i) the inevitable mixture of sample stellar masses and inclination angles, (ii) the difficulties of constraining the environment (e.g., ICM densities) and the satellite orbits, and (iii) the complex gravitational effects that could coexist with RPS. Here, we use our simulation suite, which focuses on a single galaxy across various environments undergoing RPS and no tidal effects, to make predictions for the observational "RPS signatures" on the local SFR-mass relations.
To create mock observational datasets, we selected 900 Myr of simulation data (90 outputs) that cover more than 3 dex of \(P_{\rm ram}\) in the wind runs (Figure 2), ranging over 1.5 Gyr in simulation time. Stacking the selected data creates two datasets, the stripping set and the isolated control set. This is equivalent to observing an ensemble of galaxies at \(M_{*}\approx 10^{9.7-9.8}\ M_{\odot}\) undergoing various stages of RPS (stripping set: Milky Way-like to cluster pericenter environments) or starvation (isolated control set: \(\sim\)1.5 Gyr duration). Figure 16 shows the local SFR-mass relations for the two sets on the 1 kpc\({}^{2}\) scale, following the methodology described in §4. In Figure 16, we shaded the low SFR regions where \(\log\Sigma_{\rm SFR}\) is below observational limits (\(\sim\)-4 dex, e.g., Leroy et al., 2012; Kennicutt and Evans, 2012; Vulcani et al., 2020). The over-densities in the simulation data at certain low \(\Sigma_{\rm SFR}\) are a numerical effect due to our star particle mass resolution (e.g., the lowest horizontal over-density corresponds to the \(\Sigma_{\rm SFR}\) from a single star particle of \(\approx 800\)\(M_{\odot}\)). To guide observational comparison, we show the global KS relation for the 61 non-starburst spiral galaxies from Kennicutt (1998) on the left panel in addition to the resolved KS relations from Bigiel et al. (2008); also see Figure 10. The discontinuous sampling at high \(\Sigma_{*}\) (right panel) is caused by the limited number of 1 kpc\({}^{2}\) spatial patches in the disk center.
The resolved KS relation for the time-stacked stripping and iso sets is remarkably consistent (Figure 16 left panel). When observed at different snapshots in time (e.g., in Figures 10 and 11), the galaxy populates different \(\Sigma_{\rm gas}\) ranges, which is closely correlated with the local \(\Sigma_{\rm SFR}\) (SS4.2). In the time-stacked view, where the \(\Sigma_{\rm gas}\) ranges become similar between the two sets, the underlying KS relation is in overall agreement. Our finding is consistent with Jimenez-Donaire et al. (2023) (VERTICO) results that the local KS relation agrees between an ensemble of Virgo RPS satellites and their isolated field counterparts, which suggests that RPS does not directly affect the local star formation efficiency within the gas.
The main difference between the stripping and isolated control sets is best seen in the 1D distributions of \(\Sigma_{\rm gas}\) and \(\Sigma_{\rm SFR}\). The stripping set reaches higher maximum surface densities and is truncated at low surface densities; the peaks in both the \(\Sigma_{\rm gas}\) and \(\Sigma_{\rm SFR}\) distributions are at higher values compared with the isolated set. The difference can be explained by a combination of low-density gas removal and high-density gas replenishment in the stripping set; see the evolution of the dense gas ratio (§5.2.1). Another relatively minor difference is that the stripping set reaches higher \(\Sigma_{\rm SFR}\) at low \(\Sigma_{\rm gas}\) (\(<0\) dex), which, as also discussed above, is likely due to fast gas removal by RPS in still star-forming regions. The signals of high \(\Sigma_{\rm SFR}\) at low \(\Sigma_{\rm gas}\) only occur shortly before the complete removal of gas (cluster pericenter in Figure 10).
The time-stacked star formation main sequence relation (Figure 16 right panel) shows a mild \(\Sigma_{\rm SFR}\) enhancement over a range of \(\Sigma_{*}\) in the stripping set. Independent of RPS or starvation, the \(\Sigma_{*}\) radial profiles remain monotonic (Figure 9), so the local \(\Sigma_{*}\) can be used as an indicator of disk radii. Under increasing ram pressure, the \(\Sigma_{*}\) threshold where the wind SFR exceeds iso SFR increases (corresponding disk radius decreases), so the time-stacked result here shows a smoother \(\Sigma_{\rm SFR}\) enhancement that extends to lower \(\Sigma_{*}\) (larger radii) compared with the group and cluster pericenter cases (Figure 12). This agrees with the Vulcani et al. (2020) (GASP) finding that \(\Sigma_{\rm SFR}\) can be enhanced over a range of \(\Sigma_{*}\) when accounting for various stripping stages. However, at the lowest \(\Sigma_{*}\) end, where Vulcani et al.
(2020) found SFR enhancement in the stripping sample, our simulations always show disk truncation (lowest \(\Sigma_{*}\) not populated in the stripping set; SS4.3), instead of SFR enhancement. We note that since we focused on the disk region (\(|z|\leq 2\) kpc), our sampled patches are free of star-forming clumps in the tail, which are shown to have a higher \(\Sigma_{\rm SFR}\) than the disk at low \(\Sigma_{*}\)(Vulcani et al., 2020).
Our predictions in §5.2.2 can be summarized as follows. When observing a large ensemble of RPS and isolated galaxies at the same stellar mass, the set of galaxies undergoing RPS will have the same KS relation as the isolated control set at comparable gas surface densities. Individual galaxies may populate different \(\Sigma_{\rm gas}\) ranges and hence occupy different subsets of the ensemble KS relation, which can be caused by both active (RPS-driven gas flows) and passive (gas consumption due to starvation) mechanisms, as shown in Figure 11. But there is no evidence of a star formation efficiency change at a given \(\Sigma_{\rm gas}\) in the RPS cases. On the star formation main sequence plane (\(\Sigma_{\rm SFR}-\Sigma_{*}\)), the RPS galaxy disks (clear of tail contamination and inclination/projection effects) will show enhanced \(\Sigma_{\rm SFR}\) above a certain \(\Sigma_{*}\) threshold, and sparse sampling indicating disk truncation below the \(\Sigma_{*}\) threshold. This is because galaxies undergoing RPS tend to have more centrally concentrated gas (and SFR) radial profiles than their isolated counterparts under starvation (e.g., Figure 9). All predictions here assume that RPS is the only active effect and starvation is the only passive effect. We discuss the limitations of these assumptions and their implications in §5.3 below.
### Limitations
We made idealized simplifications in our modeling choices in order to focus on the science goals. We adopted a single star formation and feedback recipe (Goldbaum et al., 2015, 2016) and a static dark matter potential, omitted the direct modeling of magnetic fields, turbulence, and cosmic rays, and only sampled a single (most probable; Wetzel, 2011) satellite orbit and a 45\({}^{\circ}\) wind inclination in each halo, instead of conducting a population study. In particular, we discuss the following two simplifications and their implications.
**(i) Gas removal by gravitational mechanisms.** Our controlled suite of hydrodynamical simulations only includes active gas removal by RPS; we are missing the gravitational mechanisms, including satellite-host and satellite-satellite interactions (Boselli & Gavazzi, 2006). In clusters, RPS by the ICM is the dominant mechanism for cold gas stripping (Boselli & Gavazzi, 2006; Cortese et al., 2021). In galaxy groups (lower relative velocities), satellite-satellite gravitational interactions are traditionally considered the primary stripping mechanism based on the observational evidence in various systems (e.g., Yun et al., 1994; Serra et al., 2013; Lee-Waddell et al., 2019; Wang et al., 2022).
Figure 16: Spatially resolved SFR-mass relations (KS and star formation main sequence; see §4), similar to Figures 10 and 12. Here for the observational predictions, we stacked 900 Myr of simulation data (90 outputs), covering more than 3 dex of ram pressure strengths in the wind runs (§5.2). The shaded regions in both panels are where \(\log\Sigma_{\rm SFR}\) is below current observational limits (\(\sim\)-4 dex). On the left panel, we additionally show the global KS relation of the non-starburst spiral galaxies from Kennicutt (1998), see §5.2.2.
However, the observational selection bias towards gas-rich galaxies in groups may have favored the gravitational mechanisms (Cortese et al., 2021); in fact, both simulations (Bekki, 2014; Bahe and McCarthy, 2015; Marasco et al., 2016) and recent observational work (Roberts et al., 2021; Putman et al., 2021; Kolcu et al., 2022) have found that RPS can be efficient in galaxy groups. RPS and gravitational (satellite-satellite) interactions are likely both effective in groups, and the relative importance depends on individual environments and satellites.
Because our simulations omit gravitational interactions, we likely overestimated the final \(M_{\rm gas}\) (and \(f_{\rm gas}\)) in our galaxy group case (13W; Figures 3 and 15), as gravitational encounters can contribute to active gas removal. As the satellite still retains some ISM at the 13W pericenter, gravitational interactions will additionally perturb the remaining gas, affecting its morphology and kinematics (Figures 5 and 6), and cause disturbances in the stellar disk. Such effects may also be present in the Milky Way and cluster halo cases but will have a weaker impact on the global properties (low likelihood of massive galaxy-galaxy close encounters in a Milky Way-like halo; high relative velocities in clusters).
Gravitational interactions will not change our key result of the RPS-induced star formation enhancement. Gas stripping by gravitational mechanisms is "outside-in" like RPS and will have a relatively minimal impact on the dense gas in the disk center, where the SFR enhancement occurs in our simulations (e.g., Figures 9 and 12). Our result is overall consistent with current observational evidence of triggered star formation that is of RPS origin, including global and local SFR enhancement (Vulcani et al., 2018, 2020; Roberts and Parker, 2020), and star formation in the tail (Hester et al., 2010; Ebeling et al., 2014; Poggianti et al., 2019).
**(ii) Passive gas depletion and accretion.** We modeled passive gas depletion (i.e., free of direct removal) in our isolated galaxy simulation: steady consumption by star formation and stellar and supernova feedback-driven outflows. This scenario, however, only applies to a special case of galaxies where gas accretion has been halted, which we referred to as starvation following literature conventions (Larson et al., 1980). Accretion flows can naturally be halted by RPS, but they also likely occur in many of the star-forming field galaxies today, whose SFR relies on the rejuvenating cold gas accretion, e.g., via cooling inflows from the circumgalactic medium (CGM; Tumlinson et al., 2017).
Gas accretion is a fundamental aspect of galaxy evolution that remains poorly constrained (Fox and Dave, 2017). With the typical inflow rates and redshift dependence being highly uncertain, direct modeling of accretion is challenging. But we can infer the impact of accretion by comparing the SFR-time evolution in our simulated iso case (starvation without accretion), with the observational SFR-redshift relation (Speagle et al., 2014) at our modeled stellar mass. The difference in the SFR time evolution will characterize the star formation fueled by accretion missing in our iso simulation.
Taking the initial and final conditions in iso (Figure 3; \(t_{\rm init}=500\) Myr and \(t_{\rm final}=2980\) Myr), which convert to \(t_{\rm lookback}\approx 2.5\) Gyr and the present day, the best-fit relation from Speagle et al. (2014) gives \(\log{\rm SFR_{best-fit}}\) of 0.07 and -0.24 dex. In the iso simulation (starvation without accretion), by comparison, the initial and final \(\log{\rm SFR_{iso}}\) are 0.18 and -0.46 dex, respectively. The simulation values are still within the expected scatter of \(\log{\rm SFR_{best-fit}}\) (\(\pm 0.3\) dex; Speagle et al., 2014), but unsurprisingly decrease faster with time due to the lack of replenishment. If we use the observational best-fit value as our field galaxy control case instead, which represents the average of observed star-forming galaxies at this stellar mass, the galaxy group case (13W) still shows globally enhanced SFR, although the enhancement becomes mild (a factor of 1.5 instead of \(\sim\)2.5).
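This comparison can be reproduced with the commonly quoted best-fit form of the Speagle et al. (2014) main sequence, \(\log{\rm SFR}=(0.84-0.026\,t)\log M_{*}-(6.51-0.11\,t)\), with \(t\) the age of the Universe in Gyr. The sketch below evaluates it at our stellar mass; the cosmic ages used are approximate conversions of the quoted lookback times, not the exact cosmology of the paper.

```python
# Sketch: evaluate the Speagle et al. (2014) star-forming main sequence
# (their best-fit form) at the simulated stellar mass.  The ages below are
# approximate conversions of t_lookback ~ 2.5 Gyr and the present day.
import numpy as np

def log_sfr_speagle14(log_mstar, t_gyr):
    """log10 SFR [Msun/yr] of the main sequence at cosmic age t_gyr."""
    return (0.84 - 0.026 * t_gyr) * log_mstar - (6.51 - 0.11 * t_gyr)

log_mstar = 9.75                                   # M* ~ 10^{9.7-9.8} Msun
for label, t in [("t_init (lookback ~2.5 Gyr)", 11.3), ("t_final (today)", 13.8)]:
    print(f"{label}: log SFR_best-fit ~ {log_sfr_speagle14(log_mstar, t):+.2f} dex")
# These land near the 0.07 and -0.24 dex quoted in the text (modulo the exact
# stellar mass and age conversion), to be compared with the simulated iso
# values of 0.18 and -0.46 dex.
```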
## 6 Summary and Conclusions
In this paper, we present a suite of galaxy-scale "wind-tunnel" simulations with radiative cooling, star formation, and supernovae feedback, modeling a low-mass satellite galaxy undergoing RPS in various halo environments. The input time-varying ram pressure covers over three orders of magnitude (Figure 2), representing realistic satellite infall orbits from a Milky Way-like halo's \(R_{200}\) to a cluster's pericenter. We simulate the same satellite galaxy in isolation for \(\sim\)3 Gyr as a control case, and compare the simulations in terms of their global evolution, gas morphology and kinematics, and the spatially resolved SFR-mass relations. Our key findings can be summarized as follows.
1. RPS has the potential to quench or enhance (up to a factor of \(\sim\)2.5) the global SFR of a satellite galaxy while gas is being removed (Figure 3). The impact on SFR depends on both the strength and the time derivative of the ram pressure.
2. Star formation is radially centralized under a moderate (13W; group halo) or strong (14W; cluster halo) ram pressure profile, and it occurs in the stripped tail when the satellite reaches the cluster pericenter (Figure 4). This is also reflected in a central enhancement of gas density (Figure 9).
3. Under an inclined wind (with face-on _and_ edge-on components), stripping in the disk outskirts dominates the gas mass loss, but when the pericentric ram pressure is insufficient for complete gas removal (13W), some stripped gas falls back and replenishes the central disk (Figures 6 and 7).
The edge-on component of ram pressure also drives a direct radial gas inflow (13W and 14W; Figure 8) where it counters the disk rotation (right panel of Figure 6).
This radial gas transport has the following consequences:
1. The stripping set (13W and 14W) shows an excess of high \(\Sigma_{\rm SFR}\)-high \(\Sigma_{\rm gas}\) material relative to the iso control set on the spatially resolved KS plane (Figure 10). However, the underlying KS relation is the same between the two sets when compared at similar \(\Sigma_{\rm gas}\) (Figure 11), indicating that RPS has no direct effect on the star formation efficiency.
2. On the spatially resolved SFR-stellar mass plane, the stripping set shows enhanced \(\Sigma_{\rm SFR}\) at high \(\Sigma_{\ast}\) (corresponding to central disk regions) relative to iso, and is truncated at the lowest \(\Sigma_{\ast}\) (Figure 12).
3. The dense gas ratio (\(R_{\Sigma_{10}}\); an approximation for \(M_{\rm H_{2}}/M_{\rm H\,{\sc i}}\)) increases with time in the stripping set because of a combination of low-density gas removal and dense gas replenishment, as opposed to the decreasing trend in iso due to starvation (Figure 15).
Several of our findings agree with observational results. First, the RPS-induced global SFR enhancement is mild, up to a factor of \(\sim\)2.5 relative to iso, or \(\sim\)1.5 relative to the observational SFR-redshift relation (Speagle et al., 2014; see §5.3). This agrees with the typical enhancement factor of \(<\)2 in observational samples of RPS-triggered star formation (Iglesias-Paramo et al., 2004; Vulcani et al., 2018; Roberts and Parker, 2020). Second, despite occupying different \(\Sigma_{\rm gas}\) ranges, the stripping and isolated sets follow the same local KS relation, consistent with Jimenez-Donaire et al. (2023) (the VERTICO survey); the stripping set's \(\Sigma_{\rm SFR}\) is smoothly enhanced at high \(\Sigma_{\ast}\) (the inner disk), consistent with Vulcani et al. (2020) (the GASP survey). Third, the stripping set acquires an enhanced dense gas ratio, which agrees with the high \(M_{\rm H_{2}}/M_{\rm H\,{\sc i}}\) ratios found for three GASP jellyfish galaxies (Moretti et al., 2020). Finally, RPS by a galaxy group medium can be effective for low-mass spiral galaxies and potentially lead to enhanced global SFR (Roberts et al., 2021; Kolcu et al., 2022).
The radial redistribution of gas in the galaxy is a key result of this work: gas is stripped from the outskirts and enhanced in the center. It is the direct cause of the SFR enhancement when \(P_{\rm ram}\) is insufficient to remove the entire gas disk; when \(P_{\rm ram}\) is sufficient for central gas removal, the galaxy is ultimately quenched of star formation. This mass transport scenario is consistent with the increased dense gas ratio in jellyfish galaxies (Moretti et al., 2020), the local KS relation agreement between environmentally selected samples (Jimenez-Donaire et al., 2023), and the comparable star formation efficiency between the leading and trailing halves of the disk (Figure 14). As the main driver of the potential SFR enhancement, we find this explanation to be a better match to the simulations than compression by ram pressure (§5.1) or a more efficient H i-H\({}_{2}\) conversion (§5.2.1).
We thank Dan Foreman-Mackey, Mary Putman, David Schiminovich, Jacqueline van Gorkom, and Ann Zabludoff for helpful discussions. We thank the anonymous referee for useful suggestions that improved the paper. JZ thanks Matthew Abruzzo, Nina Akerman, and Hui Li for the conversations on the simulations. ST thanks the GASP collaboration for useful conversations. GLB acknowledges support from the NSF (AST-2108470, XSEDE), a NASA TCAN award, and the Simons Foundation through the Learning the Universe Simons Collaboration. The simulations used in this work were run and analysed on facilities supported by the Scientific Computing Core at the Flatiron Institute, a division of the Simons Foundation. We also acknowledge computing resources from Columbia University's Shared Research Computing Facility project, which is supported by NIH Research Facility Improvement Grant 1G20RR030893-01, and associated funds from the New York State Empire State Development, Division of Science Technology and Innovation (NYSTAR) Contract C090171, both awarded April 15, 2010. Analyses in this work have made use of NumPy (Harris et al., 2020), Astropy (Astropy Collaboration et al., 2013, 2018, 2022), yt (Turk et al., 2011), and IPython (Perez and Granger, 2007).
|
2309.08304 | Lattice attack on group ring NTRU: The case of the dihedral group | Group ring NTRU (GR-NTRU) provides a general structure to design different
variants of NTRU-like schemes by employing different groups. Although, most of
the schemes in literature are built over cyclic groups, nonabelian groups can
also be used. Coppersmith and Shamir in 1997 have suggested that
noncommutativity may result in better security against some lattice attacks for
some groups. Lattice attacks on the public key of NTRU-like cryptosystems try
to retrieve the private key by solving the shortest vector problem (SVP) or its
approximation in a lattice of a certain dimension, assuming the knowledge of
the public key only. This paper shows that dihedral groups do not guarantee
better security against this class of attacks. We prove that retrieving the
private key is possible by solving the SVP in two lattices with half the
dimension of the original lattice generated for GR-NTRU based on dihedral
groups. The possibility of such an attack was mentioned by Yasuda et
al.(IACR/2015/1170). In contrast to their proposed approach, we explicitly
provide the lattice reduction without any structure theorem from the
representation theory for finite groups. Furthermore, we demonstrate the
effectiveness of our technique with experimental results. | Vikas Kumar, Ali Raya, Sugata Gangopadhyay, Aditi Kar Gangopadhyay | 2023-09-15T10:50:46Z | http://arxiv.org/abs/2309.08304v1 | # Lattice attack on group ring NTRU: The case of the dihedral group
###### Abstract
Group ring NTRU (GR-NTRU) provides a general structure to design different variants of NTRU-like schemes by employing different groups. Although, most of the schemes in literature are built over cyclic groups, nonabelian groups can also be used. Coppersmith and Shamir in 1997 have suggested that noncommutativity may result in better security against some lattice attacks for some groups. Lattice attacks on the public key of NTRU-like cryptosystems try to retrieve the private key by solving the shortest vector problem (SVP) or its approximation in a lattice of a certain dimension, assuming the knowledge of the public key only. This paper shows that dihedral groups do not guarantee better security against this class of attacks. We prove that retrieving the private key is possible by solving the SVP in two lattices with half the dimension of the original lattice generated for GR-NTRU based on dihedral groups. The possibility of such an attack was mentioned by Yasuda et al. (IACR/2015/1170). In contrast to their proposed approach, we explicitly provide the lattice reduction without any structure theorem from the representation theory for finite groups. Furthermore, we demonstrate the effectiveness of our technique with experimental results.
**Keywords:** Post-Quantum Cryptography, Lattice, NTRU, Group ring NTRU, Dihedral group
## 1 Introduction
The first NTRU cryptosystem [1] was proposed early in 1996 as a public key scheme built over a quotient ring of polynomials. Being an efficient scheme with reasonable memory requirements, NTRU has attracted cryptanalysts and undergone extensive analysis for its security and performance. IEEE considered NTRU for standardization as an efficient scheme based on post-quantum mathematical problems (IEEE-1363.1) [2]. Moreover, a few NTRU-like schemes have been submitted to the National Institute of Standards and Technology (NIST) competition and proceeded through the different rounds of evaluation [3, 4, 5, 6, 7, 8, 9, 10].
For most submissions of NTRU, either in the literature or NIST's competition, the underlying ring \(\mathcal{R}\) is selected to be a commutative ring, for example, \(\mathcal{R}=\mathbb{Z}_{q}[x]/(x^{N}-1)\) for prime \(N\) or \(\mathcal{R}=\mathbb{Z}_{q}[x]/(x^{2^{n}}+1)\) for a positive integer \(n\). However, when Coppersmith and Shamir established their lattice attack against NTRU [11], they stated that considering a noncommutative group algebra could be another direction to provide better security against their attack.
Few variants of NTRU based on noncommutative rings have been introduced and studied. In 1997, Hoffstein and Silverman [12] proposed a noncommutative variant of NTRU based on the dihedral group, which was broken soon by Coppersmith [13]. The same design of the noncommutative NTRU based on the dihedral group has been analyzed by Truman [14]. In the same work Truman extends the idea of the noncommutative NTRU to other group rings showing that Coppersmith's attacks can only work for the group ring based on the dihedral or closely related group rings.
Another attempt to build a noncommutative scheme analogous to NTRU but based on Quaternion algebra was proposed by Malekian et al. [15]. According to the authors' claim, the proposed cryptosystem is multidimensional and more resistant to some lattice attacks due to the noncommutativity of the underlying algebraic structure. In [16], Yasuda et al. describe group ring NTRU (GR-NTRU), which serves as a general structure to build different variants of NTRU-like schemes. The group ring \(\mathbb{Z}G\) corresponding to a finite group \(G\) is used to create an NTRU-like variant, where the group \(G\) can be abelian or nonabelian. They generalize the attack by Gentry [17] against composite-degree NTRU to GR-NTRU using the concepts of group representation theory. Furthermore, they discuss their attack against some groups in the context of GR-NTRU, including the dihedral group. It is worth mentioning here that the schemes discussed in [12, 14] are variants of NTRU using group ring based on the dihedral group. However, the designs of these schemes differ from the
dihedral group based GR-NTRU, and the attack proposed by Coppersmith in [13] can not be applied against GR-NTRU.
**Our contribution:** For GR-NTRU on a dihedral group of order \(2N\), Yasuda et al. [16] provided an overview of a lattice reduction, and an estimate of the lattice reduction complexity. However, they do not provide any explicit mapping or concrete algorithm for their reduction. Our work in this paper:
* explicitly shows the lattice reduction using simple matrix algebra;
* explains how to map the problem of retrieving the private key from solving the SVP (or an approximation of it) in a \(4N\)-dimensional lattice into two smaller \(2N\)-dimensional lattices. Furthermore, the structures of the smaller lattices are provided;
* provides a pull-back approach to retrieve two decryption keys; one is a short-enough (non ternary) key while the other is a ternary key;
* supports the reduction's correctness through theoretical analysis and experimental results;
* proves that the dihedral group does not provide additional security to GR-NTRU compared to the standard NTRU based on a cyclic group of order \(N\).
The remaining paper is structured as follows: Section 2 provides preliminaries related to group rings and the matrix representation of group ring elements. Section 3 describes the general design of GR-NTRU and the lattice attack on the public key. Section 4 discusses GR-NTRU based on the dihedral group and provides our method for lattice reduction. Section 5 presents experimental verification of our lattice reduction attack on this scheme. Finally, we conclude our work in Section 6.
## 2 Group rings
Group rings can be defined for arbitrary groups. However, as far as the scope of this work is concerned, we will consider group rings of finite groups only. For a ring \(R\) and a finite group \(G=\{g_{i}:i=1,2,\ldots,n\}\) of order \(n\), with identity element \(g_{1}=e\), define the set of formal sums
\[RG=\Bigg{\{}a=\sum_{i=1}^{n}\alpha_{i}g_{i}:\alpha_{i}\in R\text{ for }i=1,2,\ldots,n \Bigg{\}}. \tag{1}\]
Suppose \(a=\sum_{i=1}^{n}\alpha_{i}g_{i}\) and \(b=\sum_{i=1}^{n}\beta_{i}g_{i}\) in \(RG\). By definition, \(a=b\) if and only if \(\alpha_{i}=\beta_{i}\) for all \(i=1,2,\ldots,n\). Sum of \(a\) and \(b\) is defined as:
\[a+b=\sum_{i=1}^{n}\alpha_{i}g_{i}+\sum_{i=1}^{n}\beta_{i}g_{i}=\sum_{i=1}^{n} (\alpha_{i}+\beta_{i})g_{i} \tag{2}\]
Define the product of \(a\) and \(b\) as:
\[ab=\left(\sum_{i=1}^{n}\alpha_{i}g_{i}\right)\left(\sum_{i=1}^{n}\beta_{i}g_{i} \right)=\sum_{i=1}^{n}\gamma_{i}g_{i} \tag{3}\]
where
\[\gamma_{i}=\sum_{g_{h}g_{k}=g_{i}}\alpha_{h}\beta_{k}. \tag{4}\]
For each element \(a=\sum_{i=1}^{n}\alpha_{i}g_{i}\in RG\), we associate a unique vector \(\mathbf{a}=(\alpha_{1},\alpha_{2},\ldots,\alpha_{n})\). We use \(a\) and \(\mathbf{a}\) interchangeably to refer to an element of group ring \(RG\). In vector notation
\[\mathbf{a}+\mathbf{b}=(\alpha_{1}+\beta_{1},\alpha_{2}+\beta_{2},\ldots, \alpha_{n}+\beta_{n}),\ \mathbf{a}\star\mathbf{b}=(\gamma_{1},\gamma_{2},\ldots, \gamma_{n})\]
where \(\gamma_{i}\), for \(i=1,2,\ldots,n\), are given by (4), denote coordinatewise addition and the convolutional product of two vectors \(\mathbf{a},\mathbf{b}\in RG\), respectively.
**Definition 1**: ([18, Chapter 3]) The set \(RG\) together with the operations defined in (2) and (3) forms a ring. We say that \(RG\) is the _group ring_ of \(G\) over \(R\).
Suppose \(R\) is a ring with unity \(1_{R}\); then \(\mathbf{1}_{RG}=(1_{R},0,0,\ldots,0)\) is the unity of the group ring \(RG\). We define the scalar product of elements of \(RG\) with elements \(\delta\in R\) as follows:
\[\delta\mathbf{a}=\delta(\alpha_{1},\alpha_{2},\ldots,\alpha_{n})=(\delta\alpha _{1},\delta\alpha_{2},\ldots,\delta\alpha_{n}). \tag{5}\]
This makes \(RG\) an \(R\)-module. Further, if \(R\) is commutative then \(RG\) is an \(R\)-algebra.
**Definition 2**: Let the vector \(\mathbf{a}=(\alpha_{1},\alpha_{2},\alpha_{3},\ldots,\alpha_{n})\in RG\), and \(1\leq r\leq n-1\). Then, \(\mathbf{a}^{(r)}=(\alpha_{r+1},\alpha_{r+2},\ldots,\alpha_{r})\) denotes the rotation of \(\mathbf{a}\) to the left by \(r\) positions and \(\mathbf{a}^{(-r)}=(\alpha_{n-r+1},\alpha_{n-r+2},\ldots,\alpha_{n-r})\) denotes the rotation of \(\mathbf{a}\) to the right by \(r\) positions, and \(\mathbf{a}^{(0)}=\mathbf{a}\). We may also write \(\mathbf{a}^{(-r)}=\mathbf{a}^{(n-r)}\).
**Matrix representation of group ring elements:** In [19], Hurley establishes an isomorphism between a group ring \(RG\) and a certain subring of \(n\times n\) matrices over \(R\). For a group \(G=\{g_{1},g_{2},\ldots,g_{n}\}\), define the matrix of \(G\) as
\[M_{G}=\begin{pmatrix}g_{1}^{-1}g_{1}&g_{1}^{-1}g_{2}&\ldots\ldots&g_{1}^{-1}g_ {n}\\ g_{2}^{-1}g_{1}&g_{2}^{-1}g_{2}&\ldots\ldots&g_{2}^{-1}g_{n}\\ \vdots&\vdots&\ddots&\vdots\\ g_{n}^{-1}g_{1}&g_{n}^{-1}g_{2}&\ldots\ldots&g_{n}^{-1}g_{n}\\ \end{pmatrix}. \tag{6}\]
We now construct the \(RG\)-matrix of an element \(\mathbf{a}=(\alpha_{g_{1}},\alpha_{g_{2}},\ldots,\alpha_{g_{n}})\in RG\) as follows
\[M_{RG}(\mathbf{a})=\begin{pmatrix}\alpha_{g_{1}^{-1}g_{1}}&\alpha_{g_{1}^{-1}g_ {2}}&\cdots\cdots&\alpha_{g_{1}^{-1}g_{n}}\\ \alpha_{g_{2}^{-1}g_{1}}&\alpha_{g_{2}^{-1}g_{2}}&\cdots\cdots&\alpha_{g_{2}^{- 1}g_{n}}\\ \vdots&\vdots&\ddots&\vdots\\ \alpha_{g_{n}^{-1}g_{1}}&\alpha_{g_{n}^{-1}g_{2}}&\cdots\cdots&\alpha_{g_{n}^{- 1}g_{n}}\end{pmatrix}. \tag{7}\]
The set \(M_{RG}=\{M_{RG}(\mathbf{a}):\mathbf{a}\in RG\}\) is a subring of the ring of \(n\times n\) matrices over \(R\), denoted by \(M_{n}(R)\). We say a matrix \(A\in M_{n}(R)\) is an \(RG\)-matrix if there is an \(\mathbf{a}\in RG\) such that \(A=M_{RG}(\mathbf{a})\).
**Theorem 1**: _([19, Thereom 1]) The mapping \(\tau:RG\to M_{RG}\subset M_{n}(R)\) defined as \(\tau(\mathbf{a})=M_{RG}(\mathbf{a})\) is a bijective ring homomorphism, i.e., \(\tau(\mathbf{a}+\mathbf{b})=\tau(\mathbf{a})+\tau(\mathbf{b})=M_{RG}(\mathbf{a })+M_{RG}(\mathbf{b})\), and \(\tau(\mathbf{a}\star\mathbf{b})=\tau(\mathbf{a})\cdot\tau(\mathbf{b})=M_{RG}( \mathbf{a})\cdot M_{RG}(\mathbf{b})\), where \(+,\cdot\) denote the usual matrix addition and multiplication, respectively. Furthermore, \(\tau\) is a module \(R\)-homomorphism, i.e., \(\tau(\delta\mathbf{a})=\delta\tau(\mathbf{a})=\delta M_{RG}(\mathbf{a})\), for \(\delta\in R\)._
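As a quick illustration of Theorem 1 (an illustrative aid, not taken from [19]), the following Python sketch encodes the dihedral group \(D_{N}\) with the ordering \(\{1,x,\ldots,x^{N-1},y,yx,\ldots,yx^{N-1}\}\) used later, builds the \(RG\)-matrix of (7) and the convolution product of (4), and checks numerically that \(\tau(\mathbf{a}\star\mathbf{b})=\tau(\mathbf{a})\cdot\tau(\mathbf{b})\); all helper names are ours.

```python
# Sketch: the group ring Z D_N, its RG-matrix (Eq. 7), and a numerical check
# of Theorem 1.  Elements of D_N are encoded as (j, i) ~ y^j x^i with the
# ordering {1, x, ..., x^{N-1}, y, yx, ..., yx^{N-1}}.
import numpy as np

N = 5
elements = [(j, i) for j in (0, 1) for i in range(N)]
index = {g: k for k, g in enumerate(elements)}

def mul(g, h):                  # (y^a x^i)(y^b x^k) = y^(a+b) x^((-1)^b i + k)
    a, i = g
    b, k = h
    return ((a + b) % 2, ((-i if b else i) + k) % N)

def inv(g):                     # x^i -> x^{-i};  y x^i is an involution
    a, i = g
    return (a, i) if a else (0, (-i) % N)

def rg_matrix(vec):             # Eq. (7): entry (r, c) = coefficient at g_r^{-1} g_c
    M = np.zeros((2 * N, 2 * N), dtype=int)
    for r, gr in enumerate(elements):
        for c, gc in enumerate(elements):
            M[r, c] = vec[index[mul(inv(gr), gc)]]
    return M

def conv(a, b):                 # Eq. (4): convolution product in Z D_N
    out = np.zeros(2 * N, dtype=int)
    for h_idx, gh in enumerate(elements):
        for k_idx, gk in enumerate(elements):
            out[index[mul(gh, gk)]] += a[h_idx] * b[k_idx]
    return out

rng = np.random.default_rng(1)
a, b = rng.integers(-2, 3, 2 * N), rng.integers(-2, 3, 2 * N)
assert np.array_equal(rg_matrix(conv(a, b)), rg_matrix(a) @ rg_matrix(b))
print("Theorem 1 verified: tau(a*b) == tau(a) tau(b)")
```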
**Theorem 2**: _([19, Thereom 2]) Let \(R\) be a ring with unity and \(G\) be a finite group. Then, \(\mathbf{a}\in RG\) is a unit if and only if \(M_{RG}(\mathbf{a})\) is invertible in \(M_{n}(R)\). In that case, inverse of \(M_{RG}(\mathbf{a})\) is also an \(RG\)-matrix._
**Corollary 3**: ([19, corollary 2]) When \(R\) is a commutative ring with unity, an element \(\mathbf{a}\) is a unit in \(RG\) if and only if \(det(\tau(\mathbf{a}))\) is a unit in \(R\). In particular, when \(R\) is a field, \(\mathbf{a}\) is a unit if and only if \(det(\tau(\mathbf{a}))\neq 0\).
## 3 Group ring NTRU (GR-NTRU)
Yasuda et al. [16] proposed a general framework to develop NTRU-like cryptosystems based on group rings. They call this scheme: Group ring NTRU or GR-NTRU. The idea for such a construction is as follows:
**Parameters selection:** Let \(n,p,q,d\) be positive integers with \(p\) prime, \(p\ll q\), \(\gcd(p,q)=1\), \(2d+1\leq n\), and \(q>(6d+1)p\). Throughout this paper, we take \(p=3\), \(d\) at most \(\lfloor\frac{n}{3}\rfloor\), and \(q\) usually a power of \(2\). Let \(\mathbb{Z}G,\mathbb{Z}_{q}G\), and \(\mathbb{Z}_{p}G\) be group rings where \(\mathbb{Z},\mathbb{Z}_{q}\), and \(\mathbb{Z}_{p}\) denote the ring of integers, the ring of integers modulo \(q\), and the ring of integers modulo \(p\), respectively, and \(G\) is a finite group of order \(n\).
Let \(t_{1},t_{2}\) be positive integers such that \(t_{1}+t_{2}\leq n\). Define
\[\mathcal{P}(t_{1},t_{2})=\left\{\mathbf{a}\in\mathbb{Z}G\;\middle|\;\begin{array}{l}\mathbf{a}\text{ has }t_{1}\text{ coefficients equal to }1\\ \mathbf{a}\text{ has }t_{2}\text{ coefficients equal to }-1\\ \text{other coefficients are }0\end{array}\right\}.\]
Elements in \(\mathcal{P}(t_{1},t_{2})\) are referred to as ternary vectors or ternary elements. For \(\mathbf{a}\in\mathbb{Z}_{q}G\), the _centered lift_ of \(\mathbf{a}\) is the unique element \(\mathbf{a}_{lifted}\in\mathbb{Z}G\) whose
coefficients are in the interval \(\left(-\frac{q}{2},\frac{q}{2}\right]\) and \(\mathbf{a}_{lifted}\pmod{q}=\mathbf{a}\), where \(\mathbf{a}_{lifted}\pmod{q}\) is obtained by reducing each coefficient of the vector \(\mathbf{a}_{lifted}\) modulo \(q\). A message is a vector in \(\mathbb{Z}G\) that is the centered lift of some element in \(\mathbb{Z}_{p}G\). In other words, message space consists of elements from \(\mathbb{Z}G\) whose coefficients are between \(-\frac{p}{2}\) and \(\frac{p}{2}\).
**Key generation:**
1. choose \(\mathbf{f}\in\mathcal{P}(d+1,d)\) such that there exist \(\mathbf{f}_{q}\in\mathbb{Z}_{q}G\), \(\mathbf{f}_{p}\in\mathbb{Z}_{p}G\) satisfying \(\mathbf{f}\star\mathbf{f}_{q}\equiv 1_{\mathbb{Z}_{q}G}\pmod{q}\) and \(\mathbf{f}\star\mathbf{f}_{p}\equiv 1_{\mathbb{Z}_{p}G}\pmod{p}\).
2. choose another element \(\mathbf{g}\in\mathcal{P}(d,d)\).
3. construct \(\mathbf{h}\in\mathbb{Z}_{q}G\) such that \(\mathbf{f}\star\mathbf{h}=\mathbf{g}\pmod{q}\), equivalently \(\mathbf{h}=\mathbf{f}_{q}\star\mathbf{g}\pmod{q}\).
4. declare \(\mathbf{h},p,q\) to be public key.
5. \((\mathbf{f},\mathbf{g})\) and \(\mathbf{f}_{p}\) are kept private.
**Encryption:** To encrypt a message \(\mathbf{m}\), we first randomly choose \(\mathbf{r}\in\mathcal{P}(d,d)\). Then, the ciphertext is computed as follows: \(\mathbf{c}=p\mathbf{h}\star\mathbf{r}+\mathbf{m}\pmod{q}\).
**Decryption:** First, compute \(\mathbf{a}\equiv\mathbf{f}\star\mathbf{c}\pmod{q}\). Then, centerlift it to \(\mathbf{a}_{lifted}\) modulo \(q\). Now, \(\mathbf{m}\) can be recovered by computing \(\mathbf{f}_{p}\star\mathbf{a}_{lifted}\pmod{p}\) and centerlifting it modulo \(p\).
**Correctness:** We have \(\mathbf{a}\equiv p\mathbf{g}\star\mathbf{r}+\mathbf{f}\star\mathbf{m}\pmod{q}\). Since \(\mathbf{f}\in\mathcal{P}(d+1,d)\), \(\mathbf{g},\mathbf{r}\in\mathcal{P}(d,d)\), and the coefficients of \(\mathbf{m}\) lie between \(-\frac{p}{2}\) and \(\frac{p}{2}\), the largest coefficient of \(\mathbf{g}\star\mathbf{r}\) can be \(2d\) and the largest coefficient of \(\mathbf{f}\star\mathbf{m}\) can be \((2d+1)\frac{p}{2}\). Consequently, the largest coefficient of \(p\mathbf{g}\star\mathbf{r}+\mathbf{f}\star\mathbf{m}\) is at most \((6d+1)\frac{p}{2}\). Thus, if \(q>(6d+1)p\), computing \(\mathbf{a}\equiv\mathbf{f}\star\mathbf{c}\pmod{q}\) and then centerlifting it gives exactly the element \(p\mathbf{g}\star\mathbf{r}+\mathbf{f}\star\mathbf{m}\) without reduction modulo \(q\). Now, we multiply this element by \(\mathbf{f}_{p}\) and reduce the coefficients modulo \(p\) to recover an element in \(\mathbb{Z}_{p}G\) whose centered lift gives the message \(\mathbf{m}\).
**Lattice attack on GR-NTRU cryptosystem:** Let \(\mathbf{h}=(h_{1},h_{2},\ldots,h_{n})\) be a GR-NTRU public key generated by the private key \(\mathbf{f}=(f_{1},f_{2},\ldots,f_{n})\) and \(\mathbf{g}=(g_{1},g_{2},\ldots,g_{n})\), i.e., \(\mathbf{f}\star\mathbf{h}=\mathbf{g}\pmod{q}\).
The NTRU lattice \(L_{\mathbf{h}}\) associated to \(\mathbf{h}\) is a \(2n\)-dimensional lattice generated by the rows of the matrix
\[M_{\mathbf{h}}=\begin{pmatrix}I_{n}&H\\ \mathbf{0}_{n}&qI_{n}\end{pmatrix} \tag{8}\]
where \(H=\tau(\mathbf{h})\) is a matrix of order \(n\) with the vector \(\mathbf{h}\) as its first row, and \(I_{n}\) is the identity matrix of order \(n\).
**Theorem 4**: _The vector \((\mathbf{f},\mathbf{g})\) lies in the lattice \(L_{\mathbf{h}}\), and if \(n\) is large, then there is a high probability that the shortest nonzero vectors in \(L_{\mathbf{h}}\) are \((\mathbf{f},\mathbf{g})\) and other vectors obtained by "rotations" of \(\mathbf{f}\) and \(\mathbf{g}\)._
Therefore, the private key can be recovered by solving the SVP in a lattice of dimension \(2n\). The proof of the Theorem 4 can be given precisely the same way as in [20, Proposition 6.59, 6.61]. By "rotation" in the above theorem, we refer to a transformation related to the underlying group and not necessarily a cyclic rotation in the usual sense. We discuss the particular transformation and provide the proof for the dihedral group based GR-NTRU in Section 4.1.
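A direct way to set up this attack is to assemble the basis matrix of Eq. (8) from \(H=\tau(\mathbf{h})\). The short sketch below does this with generic block assembly (any \(n\times n\) integer matrix \(H\) can be plugged in, e.g., the output of an \(RG\)-matrix routine) and gives the membership test for \(L_{\mathbf{h}}\); the actual basis reduction is left to an external LLL/BKZ implementation, and the tiny example data are ours for illustration only.

```python
# Sketch: NTRU lattice basis of Eq. (8) for a given H = tau(h) and modulus q.
# H below is an arbitrary placeholder; in GR-NTRU it is the RG-matrix of the
# public key h.  Rows of the returned matrix generate the lattice L_h.
import numpy as np

def ntru_lattice_basis(H, q):
    n = H.shape[0]
    top = np.hstack([np.eye(n, dtype=int), np.asarray(H, dtype=int)])
    bottom = np.hstack([np.zeros((n, n), dtype=int), q * np.eye(n, dtype=int)])
    return np.vstack([top, bottom])

def in_lattice(f, g, H, q):
    """(f, g) lies in L_h  iff  f H = g (mod q)."""
    return np.all((np.asarray(f) @ H - np.asarray(g)) % q == 0)

# tiny example: any vector of the form (f, f H mod q) is in the lattice
q, H = 64, np.arange(9).reshape(3, 3)
f = np.array([1, -1, 0])
print(ntru_lattice_basis(H, q).shape, in_lattice(f, (f @ H) % q, H, q))
```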
Remark 1: Suppose one can find a pair of vectors \((\mathbf{f}^{\prime},\mathbf{g}^{\prime})\in L_{\mathbf{h}}\), not necessarily the private key, with small enough coefficients such that, for an arbitrary message \(\mathbf{m}\) encrypted using the public key \(\mathbf{h}\) and a random ternary vector \(\mathbf{r}\), the largest coefficient of \(p\mathbf{g}^{\prime}\star\mathbf{r}+\mathbf{f}^{\prime}\star\mathbf{m}\) is at most \(\frac{q}{2}\). Then \((\mathbf{f}^{\prime},\mathbf{g}^{\prime})\) serves the purpose of decryption for \(\mathbf{m}\). However, this requires \(\mathbf{f}^{\prime}\) to be invertible over \(\mathbb{Z}_{p}G\).
**NTRU as a special case of GR-NTRU:** It is straightforward to observe that the NTRU scheme [20, Chapter 6.10] can be reformulated over the group ring:
\[\mathbb{Z}_{q}C_{N}\cong\mathbb{Z}_{q}[x]/(x^{N}-1) \tag{9}\]
where \(C_{N}\) is a cyclic group of order \(N\). In this case, \(H\) is a circulant matrix of order \(N\). Further, the lattice attack for GR-NTRU is a generalization of the lattice attack on NTRU given in [1]. Also, Gentry [17] proposed an attack on NTRU with a composite value of \(N\); therefore, the value of \(N\) is always taken to be prime.
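To make the template of Section 3 concrete in this simplest (cyclic) case, here is a toy end-to-end sketch with tiny illustrative parameters; it uses circulant matrices for the convolution and SymPy's modular matrix inverse for \(\mathbf{f}_{q}\) and \(\mathbf{f}_{p}\). The parameters and code are ours for illustration and are not the parameter sets or implementation used in our experiments.

```python
# Toy GR-NTRU over Z_q C_N (i.e., classical NTRU).  Illustrative parameters:
# N = 7, p = 3, d = 2, q = 64 > (6d+1)p = 39, gcd(p, q) = 1.
import numpy as np
from sympy import Matrix

N, p, q, d = 7, 3, 64, 2
rng = np.random.default_rng(2)

def circulant(v):                          # tau(v) for the cyclic group C_N
    return np.array([[v[(j - i) % N] for j in range(N)] for i in range(N)])

def ternary(ones, neg):
    v = np.zeros(N, dtype=int)
    idx = rng.choice(N, ones + neg, replace=False)
    v[idx[:ones]], v[idx[ones:]] = 1, -1
    return v

def centerlift(v, m):
    v = np.asarray(v) % m
    return np.where(v > m // 2, v - m, v)

# --- key generation: retry until f is invertible mod q and mod p ---
while True:
    f = ternary(d + 1, d)
    try:
        Fq = Matrix(circulant(f).tolist()).inv_mod(q)
        Fp = Matrix(circulant(f).tolist()).inv_mod(p)
        break
    except ValueError:                     # not invertible, draw a new f
        continue
f_q = np.array(Fq.row(0).tolist()[0], dtype=int)   # first row of an RG-matrix
f_p = np.array(Fp.row(0).tolist()[0], dtype=int)   # is the group ring element
g = ternary(d, d)
h = (f_q @ circulant(g)) % q               # public key: h = f_q * g (mod q)

# --- encryption ---
m = rng.integers(-1, 2, N)                 # centered-lifted message
r = ternary(d, d)
c = (p * (h @ circulant(r)) + m) % q

# --- decryption ---
a = centerlift(f @ circulant(c), q)        # equals p g*r + f*m exactly
m_rec = centerlift(f_p @ circulant(a), p)
print("decryption succeeded:", np.array_equal(m_rec, m))
```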
## 4 GR-NTRU based on the dihedral group
The dihedral group \(D_{N}\) of order \(2N\) is given by \(D_{N}=\left\langle x,y:x^{N}=y^{2}=1,xy=yx^{-1}\right\rangle\), i.e., \(D_{N}=\{1,x,\ldots,x^{N-1},y,yx,\ldots,yx^{N-1}\}\).
This article will focus on the GR-NTRU built over group ring \(\mathbb{Z}_{q}D_{N}\):
\[\mathbb{Z}_{q}D_{N}\cong\frac{\mathbb{Z}_{q}[x,y]}{\left\langle x^{N}-1,y^{2}- 1,yx-x^{N-1}y\right\rangle} \tag{10}\]
According to Theorem 4, finding the private key in the GR-NTRU cryptosystem over \(\mathbb{Z}_{q}D_{N}\) is equivalent to solving the SVP in a \(4N\)-dimensional lattice. This section presents our reduction from solving the SVP in a \(4N\)-dimensional lattice to solving it in two \(2N\)-dimensional lattices.
**Theorem 5**: _Consider an element \(h\in\mathbb{Z}D_{N}\) where_
\(h=h_{00}1+h_{01}x+\cdots+h_{0N-1}x^{N-1}+h_{10}y+h_{11}yx+\cdots+h_{1N-1}yx^{ N-1}\)_._
_Let \(\mathbf{h}_{0}=(h_{00},h_{01},\ldots,h_{0N-1})\) and \(\mathbf{h}_{1}=(h_{10},h_{11},\ldots,h_{1N-1})\), so that \(\mathbf{h}=(\mathbf{h}_{0},\mathbf{h}_{1})\) is a \(2N\)-length vector. Then,_
1. _The matrix of the group_ \(D_{N}\)_, and consequently the_ \(\mathbb{Z}D_{N}\)_-matrix of_ \(\mathbf{h}\) _is a_ \(2N\times 2N\) _matrix of the form_ \(H=\tau(\mathbf{h})=\begin{pmatrix}H_{0}&H_{1}\\ H_{1}&H_{0}\end{pmatrix}\)_, where_ \(H_{0}\) _is a circulant matrix with vector_ \(\mathbf{h}_{0}\)_, and_ \(H_{1}\) _is a reverse circulant matrix with vector_ \(\mathbf{h}_{1}\)_, as their first rows, respectively._
2. _Let_ \(L_{\mathbf{h}}\) _denote the lattice associated to_ \(\mathbf{h}\)_, spanned by the rows of the matrix_ \[M_{\mathbf{h}}=\left[\begin{array}{c|c}I_{2N}&H\\ \hline\mathbf{0}_{2N}&qI_{2N}\end{array}\right]\] _and suppose_ \(\mathbf{f}=(\mathbf{f}_{0},\mathbf{f}_{1}),\mathbf{g}=(\mathbf{g}_{0},\mathbf{g}_{1})\) _are vectors such that_ \((\mathbf{f},\mathbf{g})\in L_{\mathbf{h}}\)_. Then, for every_ \(-N+1\leq r\leq N-1\)_,_ \((\mathbf{f}_{0}^{(r)},\mathbf{f}_{1}^{(-r)},\mathbf{g}_{0}^{(r)},\mathbf{g}_{1}^{(-r)}),(\mathbf{f}_{1}^{(r)},\mathbf{f}_{0}^{(-r)},\mathbf{g}_{1}^{(r)},\mathbf{g}_{0}^{(-r)})\in L_{\mathbf{h}}\)_._
Proof.: The proof of the first part can be derived directly from (6). Now suppose that \((\mathbf{f}_{0},\mathbf{f}_{1},\mathbf{g}_{0},\mathbf{g}_{1})\in L_{\mathbf{h}}\); then there exists a vector \((\mathbf{u}_{0},\mathbf{u}_{1})\) with integer entries such that \((\mathbf{f}_{0},\mathbf{f}_{1},\mathbf{u}_{0},\mathbf{u}_{1})M_{\mathbf{h}}=(\mathbf{f}_{0},\mathbf{f}_{1},\mathbf{g}_{0},\mathbf{g}_{1})\). It is easy to check that \((\mathbf{f}_{1},\mathbf{f}_{0},\mathbf{u}_{1},\mathbf{u}_{0})M_{\mathbf{h}}=(\mathbf{f}_{1},\mathbf{f}_{0},\mathbf{g}_{1},\mathbf{g}_{0})\). The second part then follows from the observation that, since \(H_{0}\) is a circulant matrix and \(H_{1}\) is a reverse circulant matrix, for every \(0\leq r\leq N-1\) and a vector \(\mathbf{a}\) of length \(N\) with integer entries, \(\mathbf{a}^{(r)}H_{0}=(\mathbf{a}H_{0})^{(r)}\) and \(\mathbf{a}^{(r)}H_{1}=(\mathbf{a}H_{1})^{(-r)}\). Also, \(\mathbf{a}^{(-r)}H_{0}=(\mathbf{a}H_{0})^{(-r)}\) and \(\mathbf{a}^{(-r)}H_{1}=(\mathbf{a}H_{1})^{(r)}\).
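Both parts of Theorem 5 are easy to check numerically. The sketch below assembles \(H\) from a circulant block \(H_{0}\) and a reverse-circulant block \(H_{1}\) and verifies the rotation identities used in the proof, together with the closure of \(L_{\mathbf{h}}\) under the stated rotations; the test vectors are arbitrary and chosen only for illustration.

```python
# Sketch: numerical check of Theorem 5 for the dihedral block structure
# H = [[H0, H1], [H1, H0]] with H0 circulant and H1 reverse circulant.
import numpy as np

N, q = 6, 64
rng = np.random.default_rng(3)

def circulant(v):          # H0[i, j] = h0[(j - i) mod N]
    return np.array([[v[(j - i) % N] for j in range(N)] for i in range(N)])

def rev_circulant(v):      # H1[i, j] = h1[(i + j) mod N]
    return np.array([[v[(i + j) % N] for j in range(N)] for i in range(N)])

h0, h1 = rng.integers(0, q, N), rng.integers(0, q, N)
H0, H1 = circulant(h0), rev_circulant(h1)
H = np.block([[H0, H1], [H1, H0]])

left = lambda a, r: np.roll(a, -r)          # a^(r),  rotation to the left
right = lambda a, r: np.roll(a, r)          # a^(-r), rotation to the right

a, r = rng.integers(-2, 3, N), 2
# rotation identities from the proof of Theorem 5(2)
assert np.array_equal(left(a, r) @ H0, left(a @ H0, r))
assert np.array_equal(left(a, r) @ H1, right(a @ H1, r))

# closure of L_h under rotations: take any lattice vector (f, g), g = f H mod q
f = rng.integers(-1, 2, 2 * N)
g = (f @ H) % q
f_rot = np.concatenate([left(f[:N], r), right(f[N:], r)])
g_rot = np.concatenate([left(g[:N], r), right(g[N:], r)])
assert np.all((f_rot @ H - g_rot) % q == 0)   # (f0^(r), f1^(-r), g0^(r), g1^(-r)) in L_h
print("Theorem 5 rotation identities verified")
```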
### Estimation of lengths of short vectors in the lattice \(L_{\mathbf{h}}\)
Let \(\mathbf{f}=(\mathbf{f}_{0},\mathbf{f}_{1})\in\mathcal{P}(d+1,d)\), \(\mathbf{g}=(\mathbf{g}_{0},\mathbf{g}_{1})\in\mathcal{P}(d,d)\) be two randomly and uniformly generated ternary vectors with \(d\) at most \(\lfloor\frac{2N}{3}\rfloor\). Therefore, for \(-N+1\leq r\leq N-1\), the length of the vectors \((\mathbf{f}_{0}^{(r)},\mathbf{f}_{1}^{(-r)},\mathbf{g}_{0}^{(r)},\mathbf{g}_{ 1}^{(-r)})\) and \((\mathbf{f}_{1}^{(r)},\mathbf{f}_{0}^{(-r)},\mathbf{g}_{1}^{(r)},\mathbf{g}_{ 0}^{(-r)})\) is at most \(\sqrt{\frac{8N}{3}+1}\approx 1.63\sqrt{N}\). Let \(\mathbf{h}\) be a public key constructed from private vectors \(\mathbf{f}\) and \(\mathbf{g}\), i.e., \(\mathbf{h}=\mathbf{f}_{q}\star\mathbf{g}\pmod{q}\). According to Gaussian heuristic estimation, the length of the shortest vector in the lattice \(L_{\mathbf{h}}\) is
\[\sigma(L_{\mathbf{h}})=\sqrt{\frac{4N}{2\pi e}}(\det L_{\mathbf{h}})^{\frac{1 }{4N}}=\sqrt{\frac{2qN}{\pi e}}\approx\sqrt{\frac{8}{\pi e}}N\approx 0.97N,\]
since \(q\approx 4N\). Furthermore,
\[\frac{\left\|(\mathbf{f}_{0}^{(r)},\mathbf{f}_{1}^{(-r)},\mathbf{g}_{0}^{(r) },\mathbf{g}_{1}^{(-r)})\right\|}{\sigma(L_{\mathbf{h}})}=\frac{\left\|( \mathbf{f}_{1}^{(r)},\mathbf{f}_{0}^{(-r)},\mathbf{g}_{1}^{(r)},\mathbf{g}_{ 0}^{(-r)})\right\|}{\sigma(L_{\mathbf{h}})}\approx\frac{1.68}{\sqrt{N}}.\]
So, these vectors are a factor of \(O\left(\frac{1}{\sqrt{N}}\right)\) shorter than predicted by the Gaussian heuristic. Hence, for larger values of \(N\), there is a high probability that the vectors \((\mathbf{f}_{0}^{(r)},\mathbf{f}_{1}^{(-r)},\mathbf{g}_{0}^{(r)},\mathbf{g}_{ 1}^{(-r)}),(\mathbf{f}_{1}^{(r)},\mathbf{f}_{0}^{(-r)},\mathbf{g}_{1}^{(r)}, \mathbf{g}_{0}^{(-r)})\) are the shortest vectors in the lattice \(L_{\mathbf{h}}\).
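For a concrete feel for these estimates, the few lines below evaluate the key-vector length bound and the Gaussian heuristic of \(L_{\mathbf{h}}\); the value \(N=101\) is an arbitrary example, not a parameter set from Section 5.

```python
# Sketch: numerical values of the estimates above for an illustrative size.
import numpy as np

N = 101
q = 4 * N                                    # q ~ 4N as assumed in the text
key_len = np.sqrt(8 * N / 3 + 1)             # ~ 1.63 sqrt(N)
gauss = np.sqrt(2 * q * N / (np.pi * np.e))  # Gaussian heuristic for L_h (dim 4N)
print(f"||(f, g)||  <= {key_len:6.2f}")
print(f"sigma(L_h)   = {gauss:6.2f}")
print(f"ratio        = {key_len / gauss:.3f}  (~1.68/sqrt(N) = {1.68 / np.sqrt(N):.3f})")
```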
### Our lattice reduction for GR-NTRU over \(\mathbb{Z}_{q}D_{N}\)
Let \(\mathcal{I}=\begin{pmatrix}I_{N}&I_{N}\\ I_{N}&-I_{N}\end{pmatrix}\) where \(I_{N}\) is an \(N\times N\) identity matrix. For a ring \(R\) (in our case \(R=\mathbb{Z}\)) with characteristic not \(2\), \(\mathcal{I}\) is invertible over \(R\) or over the
field of quotients of \(R\). Conjugating \(\tau(\mathbf{h})\) by \(\mathcal{I}\), we get
\[\mathcal{I}\begin{pmatrix}H_{0}&H_{1}\\ H_{1}&H_{0}\end{pmatrix}\mathcal{I}^{-1}=\begin{pmatrix}H_{0}+H_{1}&\mathbf{0}_{ N}\\ \mathbf{0}_{N}&H_{0}-H_{1}\end{pmatrix}. \tag{11}\]
Here, the matrix \(\mathcal{I}\) is independent of \(H_{0}\) and \(H_{1}\).
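Since Eq. (11) is a purely block-algebraic identity (it holds for any \(N\times N\) blocks \(H_{0},H_{1}\), with \(\mathcal{I}^{-1}=\frac{1}{2}\mathcal{I}\)), it can be verified in a few lines; the sketch below uses random integer blocks purely for illustration.

```python
# Sketch: check the block diagonalization of Eq. (11) with random blocks.
import numpy as np

N = 5
rng = np.random.default_rng(4)
H0, H1 = rng.integers(-5, 6, (N, N)), rng.integers(-5, 6, (N, N))

I_N = np.eye(N, dtype=int)
cI = np.block([[I_N, I_N], [I_N, -I_N]])        # the matrix \mathcal{I}
cI_inv = cI / 2                                 # since \mathcal{I}^2 = 2 I_{2N}

H = np.block([[H0, H1], [H1, H0]])
block_diag = np.block([[H0 + H1, np.zeros((N, N), dtype=int)],
                       [np.zeros((N, N), dtype=int), H0 - H1]])
assert np.allclose(cI @ H @ cI_inv, block_diag)
print("Eq. (11) verified")
```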
**Theorem 6**: _Let the vector \((\mathbf{f},\mathbf{g})\) where \(\mathbf{f}=(\mathbf{f}_{0},\mathbf{f}_{1})\) and \(\mathbf{g}=(\mathbf{g}_{0},\mathbf{g}_{1})\) lies in the lattice \(L_{\mathbf{h}}\), i.e., \(\mathbf{f}\star\mathbf{h}+q\mathbf{u}=\mathbf{g}\) for some vector \(\mathbf{u}\). Let \(-N+1\leq r\leq N-1\), then the vector \((\mathbf{f}_{0}^{(r)}+\mathbf{f}_{1}^{(-r)},\mathbf{g}_{0}^{(r)}+\mathbf{g}_{1 }^{(-r)})\in L_{\mathbf{h}_{0}+\mathbf{h}_{1}}\), and the vectors \((\mathbf{f}_{0}^{(r)}-\mathbf{f}_{1}^{(-r)},\mathbf{g}_{0}^{(r)}-\mathbf{g}_{ 1}^{(-r)}),(\mathbf{f}_{1}^{(r)}-\mathbf{f}_{0}^{(-r)},\mathbf{g}_{1}^{(r)}- \mathbf{g}_{0}^{(-r)})\in L_{\mathbf{h}_{0}-\mathbf{h}_{1}}\)._
Proof: We have \(\mathbf{f}\star\mathbf{h}+q\mathbf{u}=\mathbf{g}\) for some integer vector \(\mathbf{u}\). Applying \(\tau\) to both sides, using the fact that \(\tau\) is a ring homomorphism as well as a \(\mathbb{Z}\)-module homomorphism, and using Equation (11), we get
\[\begin{aligned} \tau(\mathbf{f})\cdot\tau(\mathbf{h})+q\,\tau(\mathbf{u})&=\tau(\mathbf{g})\\ \begin{pmatrix}F_{0}&F_{1}\\ F_{1}&F_{0}\end{pmatrix}\begin{pmatrix}H_{0}&H_{1}\\ H_{1}&H_{0}\end{pmatrix}+q\begin{pmatrix}U_{0}&U_{1}\\ U_{1}&U_{0}\end{pmatrix}&=\begin{pmatrix}G_{0}&G_{1}\\ G_{1}&G_{0}\end{pmatrix}\\ \mathcal{I}\begin{pmatrix}F_{0}&F_{1}\\ F_{1}&F_{0}\end{pmatrix}\mathcal{I}^{-1}\,\mathcal{I}\begin{pmatrix}H_{0}&H_{1}\\ H_{1}&H_{0}\end{pmatrix}\mathcal{I}^{-1}+q\,\mathcal{I}\begin{pmatrix}U_{0}&U_{1}\\ U_{1}&U_{0}\end{pmatrix}\mathcal{I}^{-1}&=\mathcal{I}\begin{pmatrix}G_{0}&G_{1}\\ G_{1}&G_{0}\end{pmatrix}\mathcal{I}^{-1}\\ \begin{pmatrix}F_{0}+F_{1}&\mathbf{0}_{N}\\ \mathbf{0}_{N}&F_{0}-F_{1}\end{pmatrix}\begin{pmatrix}H_{0}+H_{1}&\mathbf{0}_{N}\\ \mathbf{0}_{N}&H_{0}-H_{1}\end{pmatrix}+q\begin{pmatrix}U_{0}+U_{1}&\mathbf{0}_{N}\\ \mathbf{0}_{N}&U_{0}-U_{1}\end{pmatrix}&=\begin{pmatrix}G_{0}+G_{1}&\mathbf{0}_{N}\\ \mathbf{0}_{N}&G_{0}-G_{1}\end{pmatrix}.\end{aligned}\]
Equivalently,
\[(F_{0}+F_{1})(H_{0}+H_{1})+q(U_{0}+U_{1}) =(G_{0}+G_{1})\] \[(F_{0}-F_{1})(H_{0}-H_{1})+q(U_{0}-U_{1}) =(G_{0}-G_{1}).\]
Considering the first rows, we have
\[(\mathbf{f}_{0}+\mathbf{f}_{1})(H_{0}+H_{1})+q(\mathbf{u}_{0}+ \mathbf{u}_{1}) =(\mathbf{g}_{0}+\mathbf{g}_{1})\] \[(\mathbf{f}_{0}-\mathbf{f}_{1})(H_{0}-H_{1})+q(\mathbf{u}_{0}- \mathbf{u}_{1}) =(\mathbf{g}_{0}-\mathbf{g}_{1}).\]
Therefore,
\[(\mathbf{f}_{0}+\mathbf{f}_{1},\mathbf{u}_{0}+\mathbf{u}_{1})\begin{pmatrix}I_{N}&H_{0}+H_{1}\\ \mathbf{0}_{N}&qI_{N}\end{pmatrix}=(\mathbf{f}_{0}+\mathbf{f}_{1},\mathbf{g}_{0}+\mathbf{g}_{1})\] \[(\mathbf{f}_{0}-\mathbf{f}_{1},\mathbf{u}_{0}-\mathbf{u}_{1})\begin{pmatrix}I_{N}&H_{0}-H_{1}\\ \mathbf{0}_{N}&qI_{N}\end{pmatrix}=(\mathbf{f}_{0}-\mathbf{f}_{1},\mathbf{g}_{0}-\mathbf{g}_{1}).\]
This implies that the vector \((\mathbf{f}_{0}+\mathbf{f}_{1},\mathbf{g}_{0}+\mathbf{g}_{1})\) lies in the lattice \(L_{\mathbf{h}_{0}+\mathbf{h}_{1}}\), and the vector \((\mathbf{f}_{0}-\mathbf{f}_{1},\mathbf{g}_{0}-\mathbf{g}_{1})\) lies in the lattice \(L_{\mathbf{h}_{0}-\mathbf{h}_{1}}\). The rest follows from Theorem 5 that for \(-N+1\leq r\leq N-1\), the vectors \((\mathbf{f}_{0}^{(r)},\mathbf{f}_{1}^{(-r)},\mathbf{g}_{0}^{(r)},\mathbf{g}_{ 1}^{(-r)})\), \((\mathbf{f}_{1}^{(r)},\mathbf{f}_{0}^{(-r)},\mathbf{g}_{1}^{(r)},\mathbf{g}_{ 0}^{(-r)})\) lie in the lattice \(L_{\mathbf{h}}\).
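The mapping in Theorem 6 can also be checked numerically without any basis reduction: take any lattice vector of \(L_{\mathbf{h}}\) and test membership of the sum and difference parts in the two half-dimension lattices via the criterion \(\mathbf{f}H\equiv\mathbf{g}\pmod{q}\). The sketch below does this with random blocks (in GR-NTRU, \(H_{0}\) would be circulant and \(H_{1}\) reverse circulant); the identity only uses the block structure.

```python
# Sketch: numerical check of Theorem 6 (membership in the half-size lattices).
import numpy as np

N, q = 6, 64
rng = np.random.default_rng(5)
H0, H1 = rng.integers(0, q, (N, N)), rng.integers(0, q, (N, N))
H = np.block([[H0, H1], [H1, H0]])

def in_lattice(f, g, M, modulus):            # (f, g) in L_M  iff  f M = g (mod modulus)
    return np.all((f @ M - g) % modulus == 0)

# any (f, g) with g = f H mod q lies in L_h
f = rng.integers(-1, 2, 2 * N)
g = (f @ H) % q
f0, f1, g0, g1 = f[:N], f[N:], g[:N], g[N:]

assert in_lattice(f0 + f1, g0 + g1, H0 + H1, q)   # in L_{h0+h1}
assert in_lattice(f0 - f1, g0 - g1, H0 - H1, q)   # in L_{h0-h1}
print("Theorem 6 verified on a random lattice vector")
```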
**Theorem 7** (Pull-back): _Let vectors \((\mathbf{f}^{\prime}_{0},\mathbf{g}^{\prime}_{0})\) and \((\mathbf{f}^{\prime}_{1},\mathbf{g}^{\prime}_{1})\) lie in the lattices \(L_{\mathbf{h}_{0}+\mathbf{h}_{1}}\) and \(L_{\mathbf{h}_{0}-\mathbf{h}_{1}}\), respectively. Then, the vector \((\mathbf{f}^{\prime}_{0}+\mathbf{f}^{\prime}_{1},\mathbf{f}^{\prime}_{0}- \mathbf{f}^{\prime}_{1},\mathbf{g}^{\prime}_{0}+\mathbf{g}^{\prime}_{1}, \mathbf{g}^{\prime}_{0}-\mathbf{g}^{\prime}_{1})\) lies in the lattice \(L_{\mathbf{h}}\)._
Proof: Let the vectors \(\mathbf{u}^{\prime}_{0}\) and \(\mathbf{u}^{\prime}_{1}\) be such that
\[(\mathbf{f}^{\prime}_{0},\mathbf{u}^{\prime}_{0})\begin{pmatrix}I_{N}&H_{0}+H_{1}\\ \mathbf{0}_{N}&qI_{N}\end{pmatrix}=(\mathbf{f}^{\prime}_{0},\mathbf{g}^{\prime}_{0}),\qquad(\mathbf{f}^{\prime}_{1},\mathbf{u}^{\prime}_{1})\begin{pmatrix}I_{N}&H_{0}-H_{1}\\ \mathbf{0}_{N}&qI_{N}\end{pmatrix}=(\mathbf{f}^{\prime}_{1},\mathbf{g}^{\prime}_{1}).\]
Therefore we get
\[\mathbf{f}^{\prime}_{0}(H_{0}+H_{1})+q\mathbf{u}^{\prime}_{0}=\mathbf{g}^{ \prime}_{0}\]
\[\mathbf{f}^{\prime}_{1}(H_{0}-H_{1})+q\mathbf{u}^{\prime}_{1}=\mathbf{g}^{ \prime}_{1}.\]
Adding and subtracting these equations gives
\[(\mathbf{f}^{\prime}_{0}+\mathbf{f}^{\prime}_{1})H_{0}+(\mathbf{f }^{\prime}_{0}-\mathbf{f}^{\prime}_{1})H_{1}+q(\mathbf{u}^{\prime}_{0}+ \mathbf{u}^{\prime}_{1})=\mathbf{g}^{\prime}_{0}+\mathbf{g}^{\prime}_{1}\] \[(\mathbf{f}^{\prime}_{0}+\mathbf{f}^{\prime}_{1})H_{1}+(\mathbf{f }^{\prime}_{0}-\mathbf{f}^{\prime}_{1})H_{0}+q(\mathbf{u}^{\prime}_{0}- \mathbf{u}^{\prime}_{1})=\mathbf{g}^{\prime}_{0}-\mathbf{g}^{\prime}_{1}.\]
Finally
\[(\mathbf{f}^{\prime}_{0}+\mathbf{f}^{\prime}_{1},\mathbf{f}^{\prime}_{0}- \mathbf{f}^{\prime}_{1},\mathbf{u}^{\prime}_{0}+\mathbf{u}^{\prime}_{1}, \mathbf{u}^{\prime}_{0}-\mathbf{u}^{\prime}_{1})M_{\mathbf{h}}=(\mathbf{f}^{ \prime}_{0}+\mathbf{f}^{\prime}_{1},\mathbf{f}^{\prime}_{0}-\mathbf{f}^{ \prime}_{1},\mathbf{g}^{\prime}_{0}+\mathbf{g}^{\prime}_{1},\mathbf{g}^{ \prime}_{0}-\mathbf{g}^{\prime}_{1})\]
where
\[M_{\mathbf{h}}=\begin{pmatrix}I_{N}&\mathbf{0}_{N}&H_{0}&H_{1}\\ \mathbf{0}_{N}&I_{N}&H_{1}&H_{0}\\ \mathbf{0}_{N}&\mathbf{0}_{N}&qI_{N}&\mathbf{0}_{N}\\ \mathbf{0}_{N}&\mathbf{0}_{N}&\mathbf{0}_{N}&qI_{N}\end{pmatrix}.\]
This gives that the vector \((\mathbf{f}^{\prime}_{0}+\mathbf{f}^{\prime}_{1},\mathbf{f}^{\prime}_{0}- \mathbf{f}^{\prime}_{1},\mathbf{g}^{\prime}_{0}+\mathbf{g}^{\prime}_{1}, \mathbf{g}^{\prime}_{0}-\mathbf{g}^{\prime}_{1})\) lies in the lattice \(L_{\mathbf{h}}\). We say that the vector \((\mathbf{f}^{\prime}_{0}+\mathbf{f}^{\prime}_{1},\mathbf{f}^{\prime}_{0}- \mathbf{f}^{\prime}_{1},\mathbf{g}^{\prime}_{0}+\mathbf{g}^{\prime}_{1}, \mathbf{g}^{\prime}_{0}-\mathbf{g}^{\prime}_{1})\) is the pull-back of the vectors \((\mathbf{f}^{\prime}_{0},\mathbf{g}^{\prime}_{0})\) and \((\mathbf{f}^{\prime}_{1},\mathbf{g}^{\prime}_{1})\) lying in the lattices \(L_{\mathbf{h}_{0}+\mathbf{h}_{1}}\) and \(L_{\mathbf{h}_{0}-\mathbf{h}_{1}}\), respectively.
### Recovering a decryption key
Let \(\mathbf{f}=(\mathbf{f}_{0},\mathbf{f}_{1})\), \(\mathbf{g}=(\mathbf{g}_{0},\mathbf{g}_{1})\) be ternary vectors where \(\mathbf{f}_{i},\mathbf{g}_{i}\) roughly have \(\lfloor\frac{N}{3}\rfloor\) number of \(1\)s, \(\lfloor\frac{N}{3}\rfloor\) number of \(-1\)s, and rest are \(0\)s, the same holds true for \(\mathbf{f}^{(r)}_{i},\mathbf{g}^{(r)}_{i}\), where \(-N+1\leq r\leq N-1\). Let \((\mathbf{f},\mathbf{g})\) be the private key and \(\mathbf{h}=(\mathbf{h}_{0},\mathbf{h}_{1})\) be the public key satisfying \(\mathbf{f}\star\mathbf{h}=\mathbf{g}\pmod{q}\). From Theorem 7, we know that the vector \((\mathbf{f}^{(r)}_{0}+\mathbf{f}^{(-r)}_{1},\mathbf{g}^{(r)}_{0}+\mathbf{g}^{ (-r)}_{1})\in L_{\mathbf{h}_{0}+\mathbf{h}_{1}}\), and the vectors \((\mathbf{f}^{(r)}_{0}-\mathbf{f}^{(-r)}_{1},\mathbf{g}^{(r)}_{0}-\mathbf{g}^{ (-r)}_{1}),(\mathbf{f}^{(r)}_{1}-\mathbf{f}^{(-r)}_{0},\mathbf{g}^{(r)}_{1}- \mathbf{g}^{(-r)}_{0})\in L_{\mathbf{h}_{0}-\mathbf{h}_{1}}\). In the extreme case when all the \(1\)s and \(-1\)s of \(\mathbf{f}^{(r)}_{0}\) match with \(1\)s and \(-1\)s of \(\mathbf{f}^{(-r)}_{1}\), respectively, and the same is true for \(\mathbf{g}^{(r)}_{0},\mathbf{g}^{(-r)}_{1}\), then \(\left\|(\mathbf{f}^{(r)}_{0}+\mathbf{f}^{(-r)}_{1},\mathbf{g}^{(r)}_{0}+ \mathbf{g}^{(-r)}_{1})\right\|\approx\sqrt{2}\sqrt{\frac{8N}{3}}\). Similarly, we have
\(\left\|(\mathbf{f}_{0}^{(r)}-\mathbf{f}_{1}^{(-r)},\mathbf{g}_{0}^{(r)}-\mathbf{g}_{1}^{(-r)})\right\|\) and \(\left\|(\mathbf{f}_{1}^{(r)}-\mathbf{f}_{0}^{(-r)},\mathbf{g}_{1}^{(r)}-\mathbf{g}_{0}^{(-r)})\right\|\) are at most \(\sqrt{2}\sqrt{\frac{8N}{3}}\). The Gaussian heuristic predicts that the length of the shortest vector in the lattices \(L_{\mathbf{h}_{0}+\mathbf{h}_{1}}\) and \(L_{\mathbf{h}_{0}-\mathbf{h}_{1}}\) is
\[\sigma(L_{\mathbf{h}_{0}+\mathbf{h}_{1}})=\sigma(L_{\mathbf{h}_{0}-\mathbf{h}_{ 1}})=\sqrt{\frac{qN}{\pi e}}\approx\sqrt{\frac{4}{\pi e}}N\approx 0.68N,\ \text{since}\ q \approx 4N.\]
Also the ratios,
\[\frac{\left\|(\mathbf{f}_{0}^{(r)}+\mathbf{f}_{1}^{(-r)},\mathbf{g}_{0}^{(r)}+ \mathbf{g}_{1}^{(-r)})\right\|}{\sigma(L_{\mathbf{h}_{0}+\mathbf{h}_{1}})}\ =\ \frac{\left\|(\mathbf{f}_{0}^{(r)}-\mathbf{f}_{1}^{(-r)},\mathbf{g}_{0}^{(r)}- \mathbf{g}_{1}^{(-r)})\right\|}{\sigma(L_{\mathbf{h}_{0}-\mathbf{h}_{1}})}\ =\ \frac{\left\|(\mathbf{f}_{1}^{(r)}-\mathbf{f}_{0}^{(-r)},\mathbf{g}_{1}^{(r)} -\mathbf{g}_{0}^{(-r)})\right\|}{\sigma(L_{\mathbf{h}_{0}-\mathbf{h}_{1}})}\ \approx\ \frac{3.37}{\sqrt{N}}.\]
Therefore, for large values of \(N\), we expect that for \(-N+1\leq r\leq N-1\), the vectors \((\mathbf{f}_{0}^{(r)}+\mathbf{f}_{1}^{(-r)},\mathbf{g}_{0}^{(r)}+\mathbf{g}_{ 1}^{(-r)})\) are the shortest in the lattice \(L_{\mathbf{h}_{0}+\mathbf{h}_{1}}\), and the vectors \((\mathbf{f}_{0}^{(r)}-\mathbf{f}_{1}^{(-r)},\mathbf{g}_{0}^{(r)}-\mathbf{g}_ {1}^{(-r)}),(\mathbf{f}_{1}^{(r)}-\mathbf{f}_{0}^{(-r)},\mathbf{g}_{1}^{(r)}- \mathbf{g}_{0}^{(-r)})\) are the shortest in the lattice \(L_{\mathbf{h}_{0}-\mathbf{h}_{1}}\).
Suppose that applying the basis reduction algorithms* returns \((\mathbf{f}_{0}^{(r)}+\mathbf{f}_{1}^{(-r)},\mathbf{g}_{0}^{(r)}+\mathbf{g}_{1}^{(-r)})\in L_{\mathbf{h}_{0}+\mathbf{h}_{1}}\) and \((\mathbf{f}_{0}^{(s)}-\mathbf{f}_{1}^{(-s)},\mathbf{g}_{0}^{(s)}-\mathbf{g}_{1}^{(-s)})\) or \((\mathbf{f}_{1}^{(t)}-\mathbf{f}_{0}^{(-t)},\mathbf{g}_{1}^{(t)}-\mathbf{g}_{0}^{(-t)})\in L_{\mathbf{h}_{0}-\mathbf{h}_{1}}\), as a solution to the SVP. In case \(s=r\) or \(t=-r\), we can obtain the vector \((\mathbf{f}_{0}^{(s)},\mathbf{f}_{1}^{(-s)},\mathbf{g}_{0}^{(s)},\mathbf{g}_{1}^{(-s)})\) or \((\mathbf{f}_{0}^{(-t)},\mathbf{f}_{1}^{(t)},\mathbf{g}_{0}^{(-t)},\mathbf{g}_{1}^{(t)})\) that lies in the lattice \(L_{\mathbf{h}}\). For small values of \(N\), the chances of getting a match \(s=r\) or \(t=-r\) are low. However, we observed experimentally that the chance of getting the desired match increases with the value of \(N\). Hence, we are able to recover the ternary vectors in the lattice \(L_{\mathbf{h}}\) with high probability for large values of \(N\), and by Remark 1 they will also work as decryption keys.
Footnote *: To avoid any possible confusion, in this paper the term _lattice reduction_ refers to our reduction method given in subsection 4.2, while the _basis reduction/lattice basis reduction_ refers to applying a reduction algorithm like LLL or BKZ to reduce the basis of a lattice.
As discussed, there is a chance that the basis reduction algorithms do not return the desired vectors. Even in those cases there is a way to get a pair of vectors \(\mathbf{f}^{\prime},\mathbf{g}^{\prime}\) with small coefficients such that \((\mathbf{f}^{\prime},\mathbf{g}^{\prime})\in L_{\mathbf{h}}\). Again by Remark 1, such a pair can serve as a decryption key. Suppose \((\mathbf{f}_{0}^{\prime},\mathbf{g}_{0}^{\prime})\in L_{\mathbf{h}_{0}+\mathbf{h}_{1}}\) and \((\mathbf{f}_{1}^{\prime},\mathbf{g}_{1}^{\prime})\in L_{\mathbf{h}_{0}-\mathbf{h}_{1}}\) are the short vectors returned by the basis reduction algorithms. Assume that these vectors take values from the set \(\{0,\pm 1,\pm 2,\ldots,\pm\ell\}\). Then, from Theorem 7 we know that the vector \((\mathbf{f}_{0}^{\prime}+\mathbf{f}_{1}^{\prime},\mathbf{f}_{0}^{\prime}-\mathbf{f}_{1}^{\prime},\mathbf{g}_{0}^{\prime}+\mathbf{g}_{1}^{\prime},\mathbf{g}_{0}^{\prime}-\mathbf{g}_{1}^{\prime})\) lies in the lattice \(L_{\mathbf{h}}\) and takes values from the set \(\{0,\pm 1,\pm 2,\ldots,\pm 2\ell\}\). Let \(\mathbf{f}^{\prime}=(\mathbf{f}_{0}^{\prime}+\mathbf{f}_{1}^{\prime},\mathbf{f}_{0}^{\prime}-\mathbf{f}_{1}^{\prime})\) and \(\mathbf{g}^{\prime}=(\mathbf{g}_{0}^{\prime}+\mathbf{g}_{1}^{\prime},\mathbf{g}_{0}^{\prime}-\mathbf{g}_{1}^{\prime})\); then \((\mathbf{f}^{\prime},\mathbf{g}^{\prime})\) serves as a potential decryption key for small values of \(\ell\), see Table 2.
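To make the pull-back step concrete, the following Python sketch (an illustration only: the function names are ours and the invertibility test in \(\mathbb{Z}_{p}D_{N}\) is omitted) combines two short vectors found in the \(2N\)-dimensional lattices into a candidate key in \(L_{\mathbf{h}}\), together with the halving test that may yield a ternary candidate.

```python
def pull_back(v_plus, w_minus, N):
    """v_plus = (f0', g0') in L_{h0+h1};  w_minus = (f1', g1') in L_{h0-h1}.
    Returns (f', g') = ((f0'+f1', f0'-f1'), (g0'+g1', g0'-g1')), which lies in L_h."""
    f0, g0 = list(v_plus[:N]), list(v_plus[N:])
    f1, g1 = list(w_minus[:N]), list(w_minus[N:])
    f = [a + b for a, b in zip(f0, f1)] + [a - b for a, b in zip(f0, f1)]
    g = [a + b for a, b in zip(g0, g1)] + [a - b for a, b in zip(g0, g1)]
    return f, g

def halve_if_possible(f, g):
    """If every coefficient of (f, g) lies in {-2, 0, 2}, halving gives a ternary candidate."""
    if all(c in (-2, 0, 2) for c in f + g):
        return [c // 2 for c in f], [c // 2 for c in g]
    return None  # no ternary candidate from this pair
```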
_Remark 2_: The same approach can be applied to recover a decryption key in GR-NTRU built over other groups whose matrices show a pattern similar to that of the dihedral group. For example, a cyclic group \(G=C_{2N}\) of order \(2N\) has a matrix of the form \(\begin{pmatrix}A&B\\ B&A\end{pmatrix}\).
## 5 Experimental Results
We ran our experiment using different parameter sets \((N,p,q,d)\) where \(N\) is a prime number equal to half the order of the dihedral group, \(p=3\), \(d=\lfloor\frac{2N}{3}\rfloor\), and \(q\) is the least power of 2 satisfying \(q>(6d+1)p\). For each parameter set, we generated 100 random private keys and messages, then we generated the corresponding public keys and the ciphertexts. We ran algorithms 1, 2 to retrieve a decryption key and decrypt the ciphertext. Algorithm 1 describes the steps of retrieving the private key by solving the SVP in a \(4N\)-dimensional lattice, which is the naive way to do a lattice attack on the public key for an NTRU-like scheme, while Algorithm 2 shows the steps of retrieving the key by solving the SVP in two lattices of dimension \(2N\).
We measured the average time to run the algorithms, the percentage of the returned vectors that worked successfully as decryption keys, and their average norms. For the naive approach, if Algorithm 1 returns a vector \(\mathbf{k}\) such that \(\|\mathbf{k}\|\leq 4\times\!\|\mathbf{private\ key}\|\), we count the trial as a success. For the pull-back approach, the algorithm returns two vectors \(\mathbf{k}_{1}\) (non-ternary) and \(\mathbf{k}_{2}\) (ternary). In case \(\|\mathbf{k}_{1}\|\leq 4\!\times\!\|\mathbf{private\ key}\|\), \(\mathbf{k}_{1}\) is counted as a decryption key, while \(\mathbf{k}_{2}\), being ternary, is always counted if the algorithm returns it. For verification purposes, we decrypted the ciphertext and checked that the decrypted messages equal the original messages for all the returned keys.
The success of retrieving the key depends heavily on the basis reduction algorithm. The goal of basis reduction techniques is to find shorter and nearly orthogonal bases. LLL [21] and BKZ [22] are famous examples of these algorithms. While LLL runs in polynomial time and produces a good reduced basis in smaller dimensions, it fails in higher dimensions. BKZ has an additional input, the block size \(\beta\), that affects both the running time and the quality of the reduced basis: the larger the value of \(\beta\), the better the quality of the reduced basis and the higher the running time. Many enhancements have been introduced to BKZ, resulting in BKZ2.0 [23] and other variants of BKZ [24, 25]. For our experiment, the FPLLL implementations of LLL and BKZ2.0 [26] have been used as options for lattice basis reduction. We executed the experiment in SageMath, relying on FPyLLL [27] as a Python wrapper of FPLLL. Tables 1 and 2 show the results for Algorithms 1 and 2, respectively, and Fig. 1 breaks down these results into comparisons of the success rates and average times for the two algorithms. Timed results were obtained on a system running Windows 10 Pro with an Intel(R) Core(TM) i9-10980HK CPU @ 2.40GHz and 32 GB of installed RAM.
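The reduction step itself can be reproduced along the following lines with FPyLLL. This is a minimal sketch rather than the exact experimental driver: the helper name `reduce_basis` and the input format (a list of integer rows) are our own choices, while the settings mirror those reported in the captions of Tables 1 and 2 (LLL with the library defaults \(\delta=0.99\), \(\eta=0.501\); BKZ2.0 with block size 40 and auto-abort).

```python
from fpylll import IntegerMatrix, LLL, BKZ

def reduce_basis(rows, option="BKZ2.0", block_size=40):
    """rows: an integer basis of L_h (4N x 4N) or of L_{h0+/-h1} (2N x 2N)."""
    M = IntegerMatrix.from_matrix(rows)
    if option == "LLL":
        LLL.reduction(M)                                   # defaults: delta=0.99, eta=0.501
    else:
        BKZ.reduction(M, BKZ.Param(block_size=block_size,
                                   flags=BKZ.AUTO_ABORT))  # BKZ2.0 with auto-abort enabled
    # hand the reduced rows back; the caller scans them in order of increasing norm
    return [[M[i, j] for j in range(M.ncols)] for i in range(M.nrows)]
```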
We notice that the naive approach is successful up to certain values of \(N\); for \(N>67\) we are not able to obtain results in reasonable time, since we are solving the SVP in a \(4N\)-dimensional lattice. However, the pull-back approach still works and retrieves the decryption key for larger values of \(N\). In the pull-back approach, the success rate of getting the non-ternary key \(\mathbf{k}_{1}\) is higher than that for the ternary key \(\mathbf{k}_{2}\) since, in the former case, we are looking only for two short-enough vectors in the two lattices \(L_{\mathbf{h}_{0}+\mathbf{h}_{1}}\), \(L_{\mathbf{h}_{0}-\mathbf{h}_{1}}\) such that their pull-back is also a short-enough vector in the larger lattice
\(L_{\mathbf{h}}\). However, to retrieve \(\mathbf{k}_{2}\), we need to find a match between two rotated vectors \(\in\{0,\pm 1,\pm 2\}^{2N}\) in the lattices \(L_{\mathbf{h}_{0}+\mathbf{h}_{1}}\), \(L_{\mathbf{h}_{0}-\mathbf{h}_{1}}\) that enables retrieving a ternary key in \(L_{\mathbf{h}}\). The chance of finding such a match increases with the dimension of the lattice and with the quality of the vectors returned by the basis reduction algorithm. We can see from Table 2 and Fig. 1(e) that LLL is successful up to \(N=73\) in finding the non-ternary key; beyond that the norm of the returned vector starts to increase significantly, while BKZ2.0 can retrieve vectors with smaller norms in higher dimensions. On the other hand, the success rate for finding a ternary vector increases as \(N\) increases. Further, for \(N=61,67,71,\) and \(73\) the success rate is \(98\%,73\%,48\%,\) and \(33\%\), respectively, with LLL as the basis reduction option, while the success percentage is \(100\%\) for the same values of \(N\) with BKZ2.0.
We would like to point out that the sudden drop in the success rate visible in Fig. 1(b) does not mean that BKZ2.0 fails to find a decryption key for \(N>67\) in the naive approach and for \(N>131\) in the pull-back approach; rather, it fails to find the key in a reasonable time because the auto-abort flag is enabled in our experiment.
```
Input:   N, p, q : parameters of the dihedral group based GR-NTRU
         h : the public key
         threshold : the maximum norm of a vector to check as a key
         option : basis reduction algorithm
Output:  \(\mathbf{k}\in L_{\mathbf{h}}\) that serves as a decryption key, or a failure

 1  \(M_{\mathbf{h}}\leftarrow\texttt{get\_lattice\_basis}(\mathbf{h},N,q)\)                /* \(4N\times 4N\) matrix */
 2  \(M_{\mathbf{h}}^{Reduced}\leftarrow\texttt{reduce\_basis}(M_{\mathbf{h}},\text{option})\)
 3  \(i\leftarrow 1\)
 4  while \(i\leq 4N\) do
 5      let \(\mathbf{v}=(v_{1},v_{2},\ldots,v_{4N})\) be the \(i^{th}\) shortest vector of \(M_{\mathbf{h}}^{Reduced}\)
 6      if \(\|\mathbf{v}\|>threshold\) then
 7          return failure
 8      let \(\mathbf{f}^{\prime}=(v_{1},v_{2},\ldots,v_{2N})\), \(\mathbf{g}^{\prime}=(v_{2N+1},v_{2N+2},\ldots,v_{4N})\)
 9      if \(\mathbf{f}^{\prime}\) is invertible in \(\mathbb{Z}_{p}D_{N}\) then
10          \(\mathbf{k}\leftarrow(\mathbf{f}^{\prime},\mathbf{g}^{\prime})\)
11          return \(\mathbf{k}\)
12      \(i\leftarrow i+1\)
```
**Algorithm 1** Naive approach to retrieve a decryption key
```
Input:   N, p, q : parameters of the dihedral group based GR-NTRU
         \(\mathbf{h}=(\mathbf{h}_{0},\mathbf{h}_{1})\) : the public key
         threshold : the maximum norm of a vector to check as a key
         option : basis reduction algorithm
Output:  \(\mathbf{k}_{1}\in L_{\mathbf{h}}\) that serves as a decryption key, \(\mathbf{k}_{2}\in L_{\mathbf{h}}\) a ternary decryption key, or a failure

 1  \(\mathbf{k}_{1}\leftarrow[\;]\), \(\mathbf{k}_{2}\leftarrow[\;]\), \(key_{1}\_found\leftarrow\) false, \(key_{2}\_found\leftarrow\) false      /* initialization */
 2  \(M_{\mathbf{h}_{0}+\mathbf{h}_{1}}\leftarrow\texttt{get\_first\_basis}(\mathbf{h},N,q)\)          /* \(2N\times 2N\) matrix */
 3  \(M_{\mathbf{h}_{0}-\mathbf{h}_{1}}\leftarrow\texttt{get\_second\_basis}(\mathbf{h},N,q)\)         /* \(2N\times 2N\) matrix */
 4  \(M_{\mathbf{h}_{0}+\mathbf{h}_{1}}^{Reduced},M_{\mathbf{h}_{0}-\mathbf{h}_{1}}^{Reduced}\leftarrow\texttt{reduce\_basis}(M_{\mathbf{h}_{0}+\mathbf{h}_{1}},M_{\mathbf{h}_{0}-\mathbf{h}_{1}},\text{option})\)
 5  \(i\leftarrow 1\)
 6  while \(i\leq 2N\) do
 7      let \(\mathbf{v}=(v_{1},v_{2},\ldots,v_{2N})\) be the \(i^{th}\) shortest vector of \(M_{\mathbf{h}_{0}+\mathbf{h}_{1}}^{Reduced}\)
 8      if \(\|\mathbf{v}\|>threshold\) then
 9          if \(key_{1}\_found\) or \(key_{2}\_found\) then return \(\mathbf{k}_{1},\mathbf{k}_{2}\)
10          return failure
11      let \(\mathbf{f}^{\prime}_{0}=(v_{1},\ldots,v_{N})\), \(\mathbf{g}^{\prime}_{0}=(v_{N+1},\ldots,v_{2N})\)
12      \(j\leftarrow 1\)
13      while \(j\leq 2N\) do
14          let \(\mathbf{w}=(w_{1},w_{2},\ldots,w_{2N})\) be the \(j^{th}\) shortest vector of \(M_{\mathbf{h}_{0}-\mathbf{h}_{1}}^{Reduced}\)
15          if \(\|\mathbf{w}\|>threshold\) then break
16          let \(\mathbf{f}^{\prime}_{1}=(w_{1},\ldots,w_{N})\), \(\mathbf{g}^{\prime}_{1}=(w_{N+1},\ldots,w_{2N})\)
17          \((\mathbf{f}^{\prime},\mathbf{g}^{\prime})\leftarrow\left((\mathbf{f}^{\prime}_{0}+\mathbf{f}^{\prime}_{1},\mathbf{f}^{\prime}_{0}-\mathbf{f}^{\prime}_{1}),(\mathbf{g}^{\prime}_{0}+\mathbf{g}^{\prime}_{1},\mathbf{g}^{\prime}_{0}-\mathbf{g}^{\prime}_{1})\right)\)
18          if not \(key_{1}\_found\) and \(\mathbf{f}^{\prime}\) is invertible in \(\mathbb{Z}_{p}D_{N}\) then
19              \(\mathbf{k}_{1}\leftarrow(\mathbf{f}^{\prime},\mathbf{g}^{\prime})\), \(key_{1}\_found\leftarrow\) true
20          if not \(key_{2}\_found\) and \((\mathbf{f}^{\prime},\mathbf{g}^{\prime})\in\{-2,0,2\}^{4N}\) then
21              \((\mathbf{f}^{\prime\prime},\mathbf{g}^{\prime\prime})\leftarrow(\frac{\mathbf{f}^{\prime}}{2},\frac{\mathbf{g}^{\prime}}{2})\)      /* coefficient-wise division */
22              if \(\mathbf{f}^{\prime\prime}\) is invertible in \(\mathbb{Z}_{p}D_{N}\) then
23                  \(\mathbf{k}_{2}\leftarrow(\mathbf{f}^{\prime\prime},\mathbf{g}^{\prime\prime})\), \(key_{2}\_found\leftarrow\) true
24          if \(key_{1}\_found\) and \(key_{2}\_found\) then return \(\mathbf{k}_{1},\mathbf{k}_{2}\)
25          \(j\leftarrow j+1\)
26      \(i\leftarrow i+1\)
```
**Algorithm 2** Pull-back approach to retrieve a decryption key
The results are obtained by running Algorithm 1 (the naive approach) with threshold value \(=4\times\|key\|\), where \(\|key\|\) refers to the norm of the private key for the corresponding value of \(N\), \(\mathbf{k}\%\) refers to the success rate of retrieving a decryption key, which is equivalent to solving the SVP in a \(4N\)-dimensional lattice, and the average time indicates the average running time of one trial of Algorithm 1 over 100 randomly generated examples. LLL has been called using the default parameters \(\delta=0.99,\eta=0.501\), and BKZ2.0 with block size \(\beta=40\) and the auto-abort flag enabled.
\({}^{*}\) We have used multiple-precision binary floating-point computations with correct rounding (MPFR) [28] for arbitrary precision at 200-bits due to floating points errors.
\({}^{**}\) auto-abort is triggered when the execution takes longer times and the quality of basis doesn't improve quickly over tours.
\begin{table}
\begin{tabular}{l c c c c c} \hline & & \multicolumn{2}{c}{**LLL**} & \multicolumn{2}{c}{**BKZ2.0**} \\ \hline N & \(\|key\|\) & \(\mathbf{k}\%\) & Time avg (s) & \(\mathbf{k}\%\) & Time avg (s) \\ \hline
13 & 5.7446 & 100 & 0.3219 & 100 & 0.872 \\
17 & 6.7082 & 100 & 0.5431 & 100 & 1.118 \\
19 & 7 & 100 & 0.7139 & 100 & 1.337 \\
23 & 7.8103 & 100 & 1.2061 & 100 & 2.182 \\
29 & 8.7750 & 100 & 2.5117 & 100 & 5.057 \\
31 & 9 & 100 & 3.1766 & 100 & 6.189 \\
37 & 9.8489 & 100 & 6.4258 & 100 & 12.038 \\
41 & 10.4403 & 7 & 9.1183 & 100 & 21.484 \\
43 & 10.6301 & 1 & 16.640 & 100 & 33.765 \\
47 & 11.1803 & 1 & 37.187 & 100 & 42.984 \\
53 & 11.8743 & 0 & \_ & 100 & 1823.4\({}^{*}\) \\
61 & 12.6886 & 0 & \_ & 100 & 4203.9\({}^{*}\) \\
67 & 13.3041 & 0 & \_ & 100 & 9885.4\({}^{*}\) \\
71 & 13.7477 & 0 & \_ & \_ & \_ \\ \hline \end{tabular}
\end{table}
Table 1: Results for the naive approach to retrieve a decryption key
The results are obtained by running Algorithm 2 (the pull-back approach) with threshold value \(=2\times\|key\|\), where \(\|key\|\) refers to the norm of the private key for the corresponding value of \(N\), \(\mathbf{k}_{1}\%\) refers to the success rate of retrieving a decryption key short enough with norm \(\|\mathbf{k}_{1}\|\), while \(\mathbf{k}_{2}\%\) refers to the success rate of retrieving a ternary key, which is equivalent to solving two instances of the SVP in a \(2N\)-dimensional lattice, and the average time indicates the average running time of one trial of Algorithm 2 over 100 randomly generated examples. LLL has been called using the default parameters \(\delta=0.99,\eta=0.501\), and BKZ2.0 with block size \(\beta=40\) and the auto-abort flag enabled.
\({}^{*}\)We have used multiple-precision binary floating-point computations with correct rounding (MPFR) [28] for arbitrary precision at 200-bits due to floating points errors.
\({}^{**}\)auto-abort is triggered when the execution takes longer times and the quality of basis doesn't improve quickly over tours.
\({}^{\dagger}\) LLL algorithm produce worse quality of reduced basis, therefore Algorithm 2 meets the threshold condition and doesn't do any further processing to find a ternary key \(\mathbf{k}_{2}\), hence the time is slightly lower for this value of \(N\).
\begin{table}
\begin{tabular}{l l l l l l l l l l} \hline & & \multicolumn{4}{c}{**LLL**} & \multicolumn{4}{c}{**BKZ2.0**} \\ \hline N & \(\|key\|\) & \(\mathbf{k}_{1}\%\) & \(\|\mathbf{k}_{1}\|\) avg & \(\mathbf{k}_{2}\%\) & Time avg (s) & \(\mathbf{k}_{1}\%\) & \(\|\mathbf{k}_{1}\|\) avg & \(\mathbf{k}_{2}\%\) & Time avg (s) \\ \hline
13 & 5.745 & 100 & 9.180 & 42 & 0.427 & 100 & 9.194 & 57 & 0.998 \\
17 & 6.708 & 100 & 10.947 & 84 & 0.748 & 100 & 10.993 & 78 & 1.439 \\
19 & 7 & 100 & 11.561 & 86 & 1.024 & 100 & 11.648 & 88 & 1.713 \\
23 & 7.810 & 100 & 13.117 & 95 & 1.176 & 100 & 13.212 & 96 & 1.903 \\
29 & 8.775 & 100 & 15.003 & 99 & 1.578 & 100 & 15.008 & 100 & 2.380 \\
31 & 9 & 100 & 15.387 & 100 & 1.844 & 100 & 15.458 & 100 & 2.663 \\
37 & 9.849 & 100 & 17.057 & 100 & 2.296 & 100 & 17.084 & 100 & 3.384 \\
41 & 10.440 & 100 & 18.208 & 100 & 3.087 & 100 & 18.224 & 100 & 4.442 \\
43 & 10.630 & 100 & 18.545 & 100 & 3.963 & 100 & 18.569 & 100 & 5.326 \\
47 & 11.180 & 100 & 19.616 & 100 & 5.174 & 100 & 19.603 & 100 & 7.815 \\
53 & 11.874 & 100 & 20.936 & 100 & 7.419 & 100 & 20.912 & 100 & 11.04 \\
61 & 12.689 & 100 & 22.715 & 98 & 12.749 & 100 & 22.442 & 100 & 21.902 \\
67 & 13.304 & 100 & 26.881 & 73 & 26.998 & 100 & 23.750 & 100 & 28.692 \\
71 & 13.748 & 96 & 33.665 & 48 & 56.955 & 100 & 24.561 & 100 & 60.972 \\
73 & 13.892 & 78 & 39.519 & 33 & 58.989 & 100 & 24.830 & 100 & 77.803 \\
79 & 14.457 & 7 & 51.279 & 0 & 37.093\({}^{\dagger}\) & 100 & 25.967 & 100 & 136.291 \\
83 & 14.866 & 0 & \_ & 0 & \_ & 100 & 26.721 & 100 & 160.720 \\
89 & 15.395 & 0 & \_ & 0 & \_ & 100 & 27.809 & 100 & 1928.9\({}^{*}\) \\
97 & 16.031 & 0 & \_ & 0 & \_ & 100 & 29.189 & 100 & 2710.7\({}^{*}\) \\
131 & 18.682 & 0 & \_ & 0 & \_ & 100 & 34.117 & 100 & 16043.8\({}^{*}\) \\
149 & 19.925 & 0 & \_ & 0 & \_ & \_ & \_ & \_ & \_ \\ \hline \end{tabular}
\end{table}
Table 2: Results for the pull-back approach to retrieve a decryption key
Figure 1: Naive approach vs. pull-back approach over different values of \(N\). Panels (a) and (b) show the success percentage of retrieving a decryption key, panels (c) and (d) compare the average time (in seconds) for LLL and BKZ2.0, respectively, and panel (e) compares the norm of the private key with the norm of \(\mathbf{k}_{1}\) returned by the pull-back approach for LLL and BKZ2.0 over different values of \(N\).
## 6 Conclusion
This paper provides a lattice reduction for GR-NTRU based on the dihedral group using elementary matrix algebra. We show that for a dihedral group of order \(2N\), one can perform the lattice attack on the public key in two \(2N\)-dimensional lattices instead of a \(4N\)-dimensional lattice. We provide an approach to retrieve two vectors in the two smaller lattices and pull them back to the larger one. Our pull-back approach gives two potential decryption keys: a short-enough key (not necessarily ternary) and a ternary key. For a good reduced basis, theoretical analysis and experimental results show that retrieving the first key is deterministic, while the ternary key is returned with high probability. Thus, the scheme under investigation provides an equivalent level of security to GR-NTRU based on a cyclic group of order \(N\). This study is part of an effort to understand the effect of using nonabelian groups in the context of GR-NTRU. As future work, the resistance of other nonabelian groups to lattice attacks should be explored.
## 7 Declaration
**Competing interest:** The authors have no competing interests to declare that are relevant to the content of this article.
## Appendix A Lattice reduction: toy example
Consider the parameter set \(N=7\), \(p=3\), \(q=128\), \(d=\lfloor\frac{2N}{3}\rfloor=\lfloor\frac{14}{3}\rfloor=4\).
Suppose that the private key \((f,g)\) is sampled as:
\[f =x-x^{2}-x^{4}+x^{5}+x^{6}+y-yx^{2}+yx^{4}-yx^{6}\] \[g =x-x^{2}+x^{3}-x^{5}-x^{6}+yx+yx^{4}-yx^{5}\]
then the public key is \(h=f_{q}\star g\pmod{q}\).
Therefore,
\[h =115+42x+117x^{2}+108x^{3}+73x^{4}+3x^{5}+53x^{6}+29y\] \[\qquad+108yx+34yx^{2}+72yx^{3}+5yx^{4}+36yx^{5}+101yx^{6}\]
In other words, the vectors corresponding to \(f,g\) and \(h\) are \(\mathbf{f}=(\mathbf{f}_{0},\mathbf{f}_{1})\), \(\mathbf{g}=(\mathbf{g}_{0},\mathbf{g}_{1})\), and \(\mathbf{h}=(\mathbf{h}_{0},\mathbf{h}_{1})\), respectively, where
\[\mathbf{f} =(\mathbf{f}_{0},\mathbf{f}_{1})=\big{(}(0,1,-1,0,-1,1,1),(1,0,-1,0,1,0,-1)\big{)}\] \[\mathbf{g} =(\mathbf{g}_{0},\mathbf{g}_{1})=\big{(}(0,1,-1,1,0,-1,-1),(0,1, 0,0,1,-1,0)\big{)}\] \[\mathbf{h} =(\mathbf{h}_{0},\mathbf{h}_{1})=\big{(}(115,42,117,108,73,3,53),(29,108,34,72,5,36,101)\big{)}\]
We can notice that the norm of the private key is \(:\left\|(\mathbf{f},\mathbf{g})\right\|=\sqrt{17}\). Suppose we want to encrypt the message represented by the vector:
\[\mathbf{m}=(0,0,-1,-1,-1,0,1,-1,-1,0,1,-1,0,0)\]
then the ciphertext vector \(\mathbf{c}=p\mathbf{h}\star\mathbf{r}+\mathbf{m}\pmod{q}\) for \(\mathbf{r}=(0,1,-1,1,0,-1,-1,0,1,0,0,1,-1,0)\) will be:
\[\mathbf{c}=(123,64,97,31,92,46,63,119,23,111,39,80,33,99).\]
An attacker who wants to decrypt the message knowing only the public key \(\mathbf{h}\) launches a lattice attack on the public key, and for that has the following options:
### The naive approach
The straightforward way to apply the naive attack is given by Algorithm 1.
\(\bullet\) Build the matrix \(M_{\mathbf{h}}\), i.e., the basis of the lattice \(L_{\mathbf{h}}\), and reduce it with the LLL algorithm to obtain \(M_{\mathbf{h}}^{LLL}\).
\(\bullet\) Algorithm 1 returns \(\mathbf{k}=(\mathbf{f^{\prime}},\mathbf{g^{\prime}})\) (the first row of the matrix \(M_{\mathbf{h}}^{LLL}\)) as a solution to the SVP, where
\[\mathbf{f^{\prime}} =(-1,1,0,1,-1,-1,0,1,-1,0,1,0,-1,0)\] \[\mathbf{g^{\prime}} =(-1,1,-1,0,1,1,0,0,0,-1,0,0,-1,1).\]
We can notice that the returned key \((\mathbf{f^{\prime}},\mathbf{g^{\prime}})\) is different from the actual key. However, it has the same norm and since \(\mathbf{f^{\prime}}\) is invertible in \(\mathbb{Z}_{p}D_{N}\), it can be used to decrypt the ciphertext and retrieve the message.
As we have noticed, the naive approach retrieved the private key by solving the SVP for a matrix of dimension \(28\times 28\). However, our contribution shows how to retrieve a decryption key by solving two instances of the SVP in matrices of dimensions \(14\times 14\).
### The pull-back approach
The pull-back approach tries to retrieve two decryption keys: one of them is short enough to serve as a decryption key, and the other is a ternary decryption key (returned with high probability for large \(N\)). The steps of this approach are given in Algorithm 2.
\(\bullet\) Build two matrices \(M_{\mathbf{h}_{0}+\mathbf{h}_{1}}\), \(M_{\mathbf{h}_{0}-\mathbf{h}_{1}}\) for the lattices \(L_{\mathbf{h}_{0}+\mathbf{h}_{1}}\), \(L_{\mathbf{h}_{0}-\mathbf{h}_{1}}\), respectively.
\[M_{\mathbf{h}_{\mathbf{b}}+\mathbf{h}_{\mathbf{i}}}=\left[\begin{array}{cccccccc|cccc}1&0&0&0&0&0&0&144&150&151&180&78&39&154\\ 0&1&0&0&0&0&0&161&149&114&122&144&174&32\\ 0&0&1&0&0&0&0&37&125&120&78&218&137&181\\ 0&0&0&1&0&0&0&145&8&89&216&71&225&142\\ 0&0&0&0&1&0&0&113&109&104&82&223&76&189\\ 0&0&0&0&0&1&0&153&209&102&111&87&187&47\\ 0&0&0&0&0&0&0&1&143&146&126&107&75&58&151\\ \hline 0&0&0&0&0&0&0&128&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&128&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&128&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&128&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&128&0\\ 0&0&0&0&0&0&0&0&0&0&0&128\\ \end{array}\right]\]
\[M_{\mathbf{h}_{\mathbf{b}}-\mathbf{h}_{\mathbf{i}}}=\left[\begin{array}{cccccccc|cccc}1&0&0&0&0&0&86&- 66&83&36&68&-33&-48\\ 0&1&0&0&0&0&0&-55&81&-30&112&72&-28&-26\\ 0&0&1&0&0&0&-31&-19&110&6&16&79&-35\\ 0&0&0&1&0&0&0&1&-2&17&14&13&9&74\\ 0&0&0&0&1&0&103&37&-98&24&7&8&45\\ 0&0&0&0&0&1&0&81&7&44&-105&19&43&37\\ 0&0&0&0&0&0&1&-59&88&0&39&-69&48&79\\ \hline 0&0&0&0&0&0&0&128&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&128&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&128&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&128&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&128&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&128&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&128\\ \end{array}\right]\]
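As a quick sanity check (not part of the attack itself), the following plain-Python snippet verifies that the pulled-down private-key vector \((\mathbf{f}_{0}+\mathbf{f}_{1},\mathbf{g}_{0}+\mathbf{g}_{1})\) indeed lies in \(L_{\mathbf{h}_{0}+\mathbf{h}_{1}}\): with the row-style basis \(\begin{pmatrix}I&M\\ 0&qI\end{pmatrix}\) displayed above, membership amounts to \((\mathbf{f}_{0}+\mathbf{f}_{1})M\equiv\mathbf{g}_{0}+\mathbf{g}_{1}\pmod{q}\).

```python
q = 128
# top-right 7x7 block of the basis of L_{h0+h1}, copied from the matrix above
M = [[144, 150, 151, 180,  78,  39, 154],
     [161, 149, 114, 122, 144, 174,  32],
     [ 37, 125, 120,  78, 218, 137, 181],
     [145,   8,  89, 216,  71, 225, 142],
     [113, 109, 104,  82, 223,  76, 189],
     [153, 209, 102, 111,  87, 187,  47],
     [143, 146, 126, 107,  75,  58, 151]]

f_sum = [1, 1, -2, 0, 0, 1, 0]    # f0 + f1 for the toy private key
g_sum = [0, 2, -1, 1, 1, -2, -1]  # g0 + g1

lhs = [sum(f_sum[i] * M[i][j] for i in range(7)) % q for j in range(7)]
print(lhs == [x % q for x in g_sum])   # True: (f0+f1, g0+g1) is in L_{h0+h1}
```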
\(\bullet\) Apply LLL algorithm to the lattices \(L_{\mathbf{h}_{0}+\mathbf{h}_{1}}\), \(L_{\mathbf{h}_{0}-\mathbf{h}_{1}}\) to get
\[M_{\mathbf{h}_{\mathbf{b}}+\mathbf{h}_{\mathbf{i}}}^{LLLL}=\left[\begin{array}{ ccccccccc}1&1&1&1&1&1&1&0&0&0&0&0&0&0&0\\ 1&-1&1&0&0&-2&2&0&0&1&0&0&0&-1\\ 0&1&1&1&-2&1&-1&-1&-1&1&0&-1&1&1\\ -2&-1&0&-1&2&0&-1&1&-1&-1&-1&0&1&1\\ 1&-2&-1&1&-2&1&1&0&0&2&0&-1&0&-1\\ 0&-2&0&3&0&0&1&0&0&-1&-1&0&1&1\\ -1&1&0&-2&0&0&1&-2&1&1&1&-1&-1&1\\ -2&-7&10&6&-1&0&-6&6&-6&2&10&1&-24&11\\ -14&9&-4&10&6&-4&-2&3&11&2&4&-14&10&-16\\ 8&-8&-8&-5&6&4&-15&-12&12&11&-21&14&-7&3\\ -1&-1&10&-8&-11&5&3&8&19&-19&4&-19&14&-7\\ -18&-5&9&3&-110&3&-15&-5&7&1&27&0&-15\\ 3&11&-7&-15&0&7&19&9&4&-33&-1&-13&15\\ 6&-7&7&-10&-5&8&0&-27&-13&-30&-14&-15&-7&-22\\ \end{array}\right]\]
\[M_{\mathbf{h}_{\mathbf{b}}-\mathbf{h}_{\mathbf{i}}}^{LLLL}=\left[\begin{array}{ ccccccccccccc}1&0&-1&-1&0&1&-1&0&2&0&0&-1&2&-1\\ -1&-1&2&0&0&2&-1&0&1&-1&-1&-1&1&-1\\ -1&1&0&0&-2&1&2&0&0&-1&1&-1&0&-1\\ 2&-2&0&0&1&0&0&1&-1&0&0&-1&-2&1\\ -2&-1&1&-1&0&1&1&1&1&-2&1&-1&1\\ 1&-1&2&-1&-2&0&0&-1&0&2&1&0&0\\ 1&1&1&-2&0&0&-2&0&1&-2&2&0&-1\\ -4&-2&-4&-7&-9&-19&-7&6&6&-17&-8&0&-8&-3\\ 7&15&11&7&8&8&-4&-1&18&4&12&3&-15&3\\ 2&-14&-3&-3&10&-9&16&5&0&-19&-3&12&11&-4\\ 8&-1&2&11&-18&-4&1&-9&5&1&-15&-8&9&19\\ -2&0&15&-6&12&-25&6&6&16&2&9&-11&-6&-16\\ 0&10&10&-16&7&-17&4&24&-8&-10&-7&-6&11&0\\ 3&-11&-2&19&-9&-6&5&4&0&20&-5&1&4&-22\\ \end{array}\right]\)
\(\bullet\) Algorithm 2 finds \(\mathbf{k}_{1}=(\mathbf{f}^{\prime},\mathbf{g}^{\prime})\) where,
\[\mathbf{f}^{\prime} =(1,0,1,-1,0,-1,1,0,-1,0,1,0,-1,1)\] \[\mathbf{g}^{\prime} =(-1,0,1,-1,1,0,-1,1,0,0,1,-1,0,0).\]
The returned key is not exactly the private key, but it is ternary with the same norm, and since \(\mathbf{f}^{\prime}\) is invertible in \(\mathbb{Z}_{p}D_{N}\), it can be used to decrypt the message.
|
2306.17489 | On the possibility of classical vacuum polarization and magnetization | It is common practice to take for granted the equality (up to the constant
$\varepsilon_0$) of the electric displacement ($\bf{D}$) and electric
($\bf{E}$) field vectors in vacuum. The same happens with the magnetic field
($\bf{H}$) and the magnetic flux density ($\bf{B}$) vectors (up to the constant
$\mu_0^{-1}$). The fact that gravity may change this by effectively inducing
dielectric or magnetic responses to the primary fields is commonly overlooked.
It is the purpose of this communication to call attention to classical
polarization or magnetization of the vacuum due to the concomitant presence of
gravitational and electromagnetic sources. The formalism of differential forms
(exterior calculus) is used since it provides a clear-cut way to achieve this.
This work offers new routes for possible detection of various spacetime
geometries via their electromagnetic manifestations and the way they influence
light propagation. | Sébastien Fumeron, Fernando Moraes, Bertrand Berche | 2023-06-30T09:05:13Z | http://arxiv.org/abs/2306.17489v1 | # On the possibility of classical vacuum polarization and magnetization
###### Abstract
It is common practice to take for granted the equality (up to the constant \(\varepsilon_{0}\)) of the electric displacement (\(\mathbf{D}\)) and electric (\(\mathbf{E}\)) field vectors in vacuum. The same happens with the magnetic field (\(\mathbf{H}\)) and the magnetic flux density (\(\mathbf{B}\)) vectors (up to the constant \(\mu_{0}^{-1}\)). The fact that gravity may change this by effectively inducing dielectric or magnetic responses to the primary fields is commonly overlooked. It is the purpose of this communication to call attention to classical polarization or magnetization of the vacuum due to the concomitant presence of gravitational and electromagnetic sources. The formalism of differential forms (exterior calculus) is used since it provides a clear-cut way to achieve this. This work offers new routes for possible detection of various spacetime geometries via their electromagnetic manifestations and the way they influence light propagation.
## 1 Introduction
Vacuum polarization is a well-identified phenomenon in quantum electrodynamics. Since the pioneering works of Dirac [1], Furry and Oppenheimer [2], Heisenberg [3], Uehling [4] and Weisskopf [5], the vacuum is understood as a dynamical object filled with quantum fluctuations. As prescribed by the Heisenberg indeterminacy relations, virtual electron-positron pairs can briefly pop in and out of existence to interact with the external electromagnetic field (EM field), in an analogous fashion to what happens inside any polarizable medium: the vacuum permittivity value \(\varepsilon_{0}=8.854\,187\,82\times 10^{-12}\) F.m\({}^{-1}\) corresponds to the particular case for which the vacuum is maximally polarized [6].
Quantum vacuum polarization manifests itself in a large variety of situations, including the Casimir effect, the Hawking radiation, and the Lamb shift. In contrast, the possibility of a classical vacuum polarization is less often (if almost ever) considered in the literature [7, 8, 9]. It can be defined as _any deviation of the electric constitutive relation from the form it takes in the flat Minkowski spacetime_. In this paper, we will use exterior calculus to investigate the possibility of vacuum polarization and magnetization in curved spacetimes containing electromagnetic sources.
In this formalism, and considering units such that \(\varepsilon_{0}=\mu_{0}=1\), Maxwell's equations in vacuum reduce to the Bianchi equation \(d\mathsf{F}=0\) for the Faraday \(2-\)form defined in terms of the potential \(1-\)form, \(\mathsf{F}=d\mathsf{A}\), and, outside the location of point charges, \(d\mathsf{G}=0\). Here \(\mathsf{F}=\mathsf{E}\wedge dt+\mathsf{B}\) and \(\mathsf{G}=\mathsf{D}-\mathsf{H}\wedge dt\) is the Maxwell 2-form (differential forms are denoted in sans-serif to distinguish them from their components in italics) [10].
\[S[\mathsf{A}]=\int\frac{1}{2}\mathsf{F}\wedge\star_{4}\mathsf{F}-\mathsf{A} \wedge\star_{4}\mathsf{J}, \tag{1}\]
does depend on the geometry, which is embedded in the Hodge star operator. This is the cause of the classical polarization and magnetization of the vacuum in local coordinates. For a recent introduction to exterior calculus applied to electrodynamics, including classical and quantum vacuum polarization, see Ref. [11].
In order to have a visual impression of polarization or magnetization effects on the electromagnetic fields, for the various geometries considered, we display plots of the field lines as if the expressions found in terms of local coordinates were those in flat spacetime. Of course, this distorts the field lines, since they are in reality in curved spacetime, but preserves their topology.
The paper is structured as follows. First, the electrostatics of Reissner-Nordstrom (RN) and related spacetimes will be studied in Section 2. In Subsection 2.1 the RN spacetime is used as groundwork to determine a general condition for vacuum polarization to arise. Then, in Subsection 2.2 we will establish how the additional
presence of a cosmic string may indeed couple to the EM field such as to produce a non-trivial polarization in RN spacetime. Classical vacuum polarization will also be found in the case of a charged wormhole (Subsection 2.3). In a similar way, classical magnetization is studied in Section 3 with its subsections focusing on Melvin (Subsection 3.1) and Ernst (Subsection 3.2) spacetimes. The case of a rotating charged gravitational source, the Kerr-Newman (KN) spacetime, where both polarization and magnetization appear, is studied in Section 4. Finally, in Section 5 we will present our conclusions.
## 2 Electrostatics in Reissner-Nordstrom spacetimes
### RN spacetime
The very first exact solution of Einstein's field equations was found in 1916 by Karl Schwarzschild [12]. This is the so-called Schwarzschild metric which describes the geometry of spacetime in the vicinity of a static and spherically-symmetric compact source of gravitation such as a star or a black hole. Soon after, Weyl, Reissner and Nordstrom independently considered a generalization of the Schwarzschild solution when the compact object (black hole) has a net charge \(Q\) in addition to the mass parameter \(M\). As found by Bekenstein [13] in 1971, the gravitational field near a charged star is the standard Reissner-Weyl-Nordstrom metric as well.
In standard units where \(c=1\) and \(G=1\), the Reissner-Nordstrom metric line element in local coordinates \((t,r,\theta,\varphi)\) writes as
\[g = -\left(1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}\right)dt^{2}+\frac{dr^ {2}}{1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}} \tag{2}\] \[\mbox{}+r^{2}\left(d\theta^{2}+\sin^{2}\theta\;d\varphi^{2} \right),\]
for \(r>R\). Here, \(R\), \(M\) and \(Q\) represent the radius, mass and charge of the star, respectively. The metric line element, written above in the coordinate basis, takes the standard Minkowskian form \(g=-(\mathsf{e}^{0})^{2}+(\mathsf{e}^{1})^{2}+(\mathsf{e}^{2})^{2}+(\mathsf{e }^{3})^{2}\) in the local coframe \(\mathsf{e}^{0}=\sqrt{A(r)}\:dt\), \(\mathsf{e}^{1}=dr/\sqrt{A(r)}\), \(\mathsf{e}^{2}=rd\theta\) and \(\mathsf{e}^{3}=r\sin\theta d\varphi\) with \(A(r)=1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}\). There, spherical symmetry demands that the 2-form \(\mathsf{D}\) has the simple expression \(\mathsf{D}=D_{23}(r)\mathsf{e}^{2}\wedge\mathsf{e}^{3}\). Since the total charge is \(Q\), and \(\int_{\partial\mathsf{V}}\mathsf{D}=Q\) from Gauss theorem \(d\mathsf{D}=\rho\) with \(\rho\) the charge density \(3-\)form, it turns out that the electric flux density 2-form is given by a single component in the coordinate basis
\[\mathsf{D}=\frac{Q}{4\pi r^{2}}\mathsf{e}^{2}\wedge\mathsf{e}^{3}=\frac{Q}{4\pi}\sin \theta\;d\theta\wedge d\varphi, \tag{3}\]
where we read that \(D_{\theta\varphi}=\frac{Q}{4\pi}\sin\theta\). The electric field 1-form \(\mathsf{E}\) is obtained from the Hodge star operator as
\[\mathsf{D}=\star_{4}(\mathsf{E}\wedge dt) \tag{4}\]
where \(\star_{4}\) is the Hodge dual operator (see Appendix), which, applied to any \(p-\)form \(\mathsf{u}\), completes \(\mathsf{u}\) to the \(4-\)volume form, \(\mathsf{u}\wedge\star_{4}\mathsf{u}=\frac{1}{p!}u_{\mu_{1}\ldots\mu_{p}}u^{ \mu_{1}\ldots\mu_{p}}\sqrt{-\det g}\,dx^{\mu_{1}}\wedge\ldots dx^{\mu_{4}}\). This is the key property which enables one to construct actions like in equation (1).
Straightforward algebra shows that
\[\star_{4}(d\theta\wedge d\varphi)=\frac{1}{r^{2}\sin\theta}dt\wedge dr \tag{5}\]
so that
\[\star_{4}\mathsf{D}=-\mathsf{E}\wedge dt=\frac{Q}{4\pi r^{2}}dt\wedge dr \tag{6}\]
Hence, the unique component of the 1-form \(\mathsf{E}\) in the coordinate basis is equal to
\[E_{r}=\frac{Q}{4\pi r^{2}}=\frac{1}{r^{2}\sin\theta}D_{\theta\varphi} \tag{7}\]
This means that the vacuum polarization due to the Reissner-Nordstrom spacetime, with permittivity \(\varepsilon_{r}(r,\theta)=r^{2}\sin\theta\), is exactly the same as in ordinary empty space. The result \(\varepsilon_{r}(r,\theta)=r^{2}\sin\theta\) is a manifestation of the local spherical coordinates rather than a true vacuum polarization.
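Such Hodge-dual manipulations are easy to check by machine. The following SymPy sketch (our own illustration, not part of the original derivation) verifies Eq. (5) from the standard component formula \((\star_{4}\mathsf{F})_{\gamma\delta}=\tfrac{1}{2}\sqrt{-g}\,\epsilon_{\gamma\delta\alpha\beta}F^{\alpha\beta}\), with coordinates ordered \((t,r,\theta,\varphi)\).

```python
import sympy as sp
from sympy.combinatorics import Permutation

r, th, M, Q = sp.symbols('r theta M Q', positive=True)
A = 1 - 2*M/r + Q**2/r**2
g = sp.diag(-A, 1/A, r**2, r**2*sp.sin(th)**2)   # RN metric, coordinates (t, r, theta, phi)
ginv = g.inv()
sqrt_minus_g = r**2*sp.sin(th)                   # sqrt(-det g) for this metric

def eps(i, j, k, l):                             # Levi-Civita symbol
    return 0 if len({i, j, k, l}) < 4 else Permutation([i, j, k, l]).signature()

F = sp.zeros(4, 4)
F[2, 3], F[3, 2] = 1, -1                         # F = dtheta ^ dphi, i.e. F_{theta phi} = 1
Fup = ginv * F * ginv.T                          # raise both indices
star_tr = sp.Rational(1, 2) * sqrt_minus_g * sum(
    eps(0, 1, a, b) * Fup[a, b] for a in range(4) for b in range(4))
print(sp.simplify(star_tr - 1/(r**2*sp.sin(th))))  # 0: *(dtheta^dphi) = dt^dr/(r^2 sin(theta))
```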
### RN spacetime pierced by a cosmic string
In order to investigate a situation that does not reduce to empty flat space at infinity, we can consider the case of a Nambu-Goto cosmic string, with infinite length and zero thickness. Cosmic strings are topological defects associated with a conical geometry obtained by cutting a wedge in the background spacetime[14, 15]
\[g=-dt^{2}+d\rho^{2}+\alpha^{2}\rho^{2}d\varphi^{2}+dz^{2}, \tag{8}\]
(in local coordinates \((t,\rho,\varphi,z)\) with the usual meaning \(\rho=r\sin\theta\) and \(z=r\cos\theta\)) where \(0\leq\varphi<2\pi\) and \(\alpha=1-4\mu\). The string tension \(\mu\) is related to the mass per unit length of the defect and, to first approximation, it is estimated from \(G\mu\simeq(\eta/M_{P})^{2}\), where \(\eta\) is the energy scale of the string-forming phase transition and \(M_{P}\) is the Planck mass. Comparison between simulations of the cosmic microwave background (CMB) in the presence of Nambu-Goto strings (unconnected segment model) and observational data of the CMB power spectrum from Planck set a 95% confidence upper limit of \(G\mu<1.5\times 10^{-7}\)[16]. Methods based on gravitational wave interferometry lowered the upper limit by several orders of magnitude and, currently, the estimate of the string tension is narrowed down to \(G\mu<4\times 10^{-15}\)[17].
In spite of the absence of experimental evidence of such a spacetime, we now consider the case of a Reissner-Nordstrom black hole crossed by a cosmic string, in order to combine both the effect of the black hole and of the conical geometry which makes the metrics deviate from flat spacetime even at infinity. The metric line element in \((t,r,\theta,\varphi)\) coordinates and in standard units writes as [18]
\[g = -\left(1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}\right)dt^{2}+\frac{dr^{2 }}{1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}} \tag{9}\] \[+\quad r^{2}\left(d\theta^{2}+\alpha^{2}\sin^{2}\theta\,d\varphi ^{2}\right),\]
i.e. a slight modification \(\varphi\rightarrow\alpha\varphi\) compared to the previous case. The result (3) is slightly modified, hence \(D_{\theta\varphi}=[Q/(4\pi)]\alpha\sin\theta\), and the same line of reasoning as before
leads to
\[\mathsf{E}=\frac{Q}{4\pi r^{2}}dr, \tag{10}\]
or equivalently
\[D_{\theta\varphi}=\alpha r^{2}\sin\theta E_{r}. \tag{11}\]
The cosmic string tension couples to the electromagnetic field and produces a vacuum polarization which does not reduce to the use of local spherical coordinates. This is clearly a manifestation of the cosmic string since the result still holds for \(M=Q=0\), which turns the metric (9) into (8). Also, in the absence of the string (\(\alpha=1\)) the RN result (7) is recovered.
### Charged wormhole spacetime
A combination of Morris-Thorne wormhole and Reissner-Nordstrom spacetimes, the charged wormhole solution, is described by the metric [19]
\[g= - \left(1+\frac{Q^{2}}{r^{2}}\right)dt^{2}+\left(1-\frac{b(r)}{r}+ \frac{Q^{2}}{r^{2}}\right)^{-1}dr^{2} \tag{12}\] \[+r^{2}\left(d\theta^{2}+\sin^{2}\theta d\varphi^{2}\right),\]
where we choose the Morris and Thorne [20] shape function \(b(r)=b_{0}^{2}/r\). The spatial shape of the MT wormhole is specified by the function \(b(r)\), thus its denomination. The parameter \(b_{0}\) defines the smaller possible value of \(r\), i.e. the wormhole throat. In the MT case (\(Q=0\)) the coordinate \(r\) is problematic at \(r=b_{0}\) for the factor multiplying \(dr^{2}\) in equation (12) vanishes. Furthermore, the MT wormhole is unstable but this may be resolved by adding exotic matter [20] or electric charge [19]. In this case, the condition \(Q^{2}<b_{0}^{2}\) is required [19] to maintain the wormhole throat open. This also solves the problem at \(r=b_{0}\). The reader should be aware that this is a rather extreme case since, in order to have an open mouth of radius around 1 m, the wormhole would need to have a charge of the order of \(3\times 10^{16}\,\mathrm{C}\). More realistic charge values may be found for different shape functions but this would unnecessarily complicate this example.
From Gauss' law, we get the same result \(D_{23}(r)=\frac{Q}{4\pi r^{2}}\), hence the same (3), namely,
\[D_{\theta\varphi}=\frac{Q}{4\pi}\sin\theta. \tag{13}\]
Using (13), (4) and (52), we get the electric field 1-form single component
\[\mathsf{E}=\sqrt{\frac{1+\frac{Q^{2}}{r^{2}}}{1-\frac{b_{0}^{2}}{r^{2}}+ \frac{Q^{2}}{r^{2}}}}D_{23}\,dr=E_{r}dr, \tag{14}\]
in agreement with [19].
Note that, for \(b_{0}=0\) the RN result (7) is recovered. It follows that
\[D_{\theta\varphi}=\sqrt{\frac{1-\frac{b_{0}^{2}}{r^{2}}+\frac{Q^{2}}{r^{2}}}{1+ \frac{Q^{2}}{r^{2}}}}r^{2}\sin\theta E_{r}=\epsilon(r,\theta)E_{r}. \tag{15}\]
In this example, the vacuum polarization is due to the coupling of \(b_{0}\) to \(Q\) in the geometry. Figure 1 is a plot of the effective permittivity of the charged wormhole spacetime as given by Eq. (15). We call the reader's attention to the fact that as \(r\rightarrow\sqrt{b_{0}^{2}-Q^{2}}\) (or \(r\to 1\) in the plot of Fig.1) the permittivity goes quickly to zero. This remarkable quality implies the total reflection of electromagnetic waves incident on the wormhole [21, 22]. On the other hand, recent research on near-zero permittivity metamaterials has revealed a number of exotic electromagnetic properties like, for instance, "squeeze the electromagnetic wave and make it tunnel through a deep subwavelength channel with arbitrary shape" [23]. This offers a perspective of building metamaterial-based analog models for further study of the charged wormhole spacetime.
Recalling that the wormhole connects two asymptotically flat spacetimes through a spherically symmetric bridge of radius \(b_{0}\), the plot in Fig. 1 represents the permittivity in either universe. Since the range of the radial coordinate is \(r\geq b_{0}\), the permittivity does not really become zero but can reach arbitrarily small values depending on the relative values of \(Q\) and \(b_{0}\). Again, quite suitable for a near-zero permittivity metamaterial analog model.
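For illustration, Eq. (15) can be evaluated directly. The small Python sketch below (ours, with the Fig. 1 parameters \(b_{0}=\sqrt{2}\), \(Q=1\) and \(\theta=\pi/2\)) reproduces the rapid drop of the effective permittivity as \(r\) approaches \(\sqrt{b_{0}^{2}-Q^{2}}=1\), and its growth as \(r^{2}\sin\theta\) far from the throat.

```python
import math

def epsilon_eff(r, b0=math.sqrt(2), Q=1.0, theta=math.pi/2):
    """Effective permittivity of the charged wormhole, Eq. (15)."""
    return math.sqrt((1 - b0**2/r**2 + Q**2/r**2) / (1 + Q**2/r**2)) * r**2 * math.sin(theta)

for r in (1.001, 1.01, 1.1, 1.5, 2.0, 10.0):
    print(r, round(epsilon_eff(r), 4))
```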
We close this Section by noting that the robustness of the result (13) is a manifestation of the topological character of the relation \(Q=\int_{\partial V}\mathsf{D}\), while the sensitivity
Figure 1: Effective permittivity of the charged wormhole spacetime, in the equatorial plane (\(\theta=\pi/2\)). The parameters used for this plot were \(b_{0}=\sqrt{2}\) and \(Q=1\).
of the expression of \(E_{r}\) with the form of the metric is a consequence of the use of Hodge duality.
## 3 Magnetostatics in Melvin and Ernst spacetimes
### Melvin spacetime
The Melvin magnetic universe is a solution of the Einstein-Maxwell equations associated with a bundle of magnetic flux lines held together by its own gravitational field [24, 25]. We note that there is also an electric solution analogous to this one, which can be obtained by taking its electro-magnetic dual [26]. The line element of the magnetic spacetime is
\[g=-\Lambda(\rho)^{2}dt^{2}+\Lambda(\rho)^{2}d\rho^{2}+\Lambda(\rho)^{-2}\rho^{ 2}d\varphi^{2}+\Lambda(\rho)^{2}dz^{2}, \tag{16}\]
where
\[\Lambda(\rho)=1+\frac{1}{4}\kappa_{0}^{2}\rho^{2}. \tag{17}\]
Here, \(\kappa_{0}^{-1}\) is the Melvin length scale, a measure of the magnetic field strength \(B_{0}\) on the axis, normalized to the dimensions of an inverse length \(\kappa_{0}=B_{0}\). The metric being diagonal, a simple tetrad choice reads as \(\mathsf{e}^{0}=\Lambda(\rho)dt\), \(\mathsf{e}^{1}=\Lambda(\rho)dr\), \(\mathsf{e}^{2}=[\rho/\Lambda(\rho)]d\varphi\) and \(\mathsf{e}^{3}=\Lambda(\rho)dz\).
An ansatz for the 1-form potential in cylindrical coordinates is \(\mathsf{A}=A_{2}(\rho)\mathsf{e}^{2}=A_{2}(\rho)[\rho/\Lambda(\rho)]d\varphi\), therefore \(A_{2}(\rho)\) follows from the definition of the dimensionless magnetic flux \(\Phi_{0}\) enclosed by a circle \(\partial\Sigma\) of radius \(\rho\) perpendicular to the \(z\) axis,
\[\int_{\partial\Sigma}\mathsf{A}=\int_{\Sigma}\mathsf{B}=\Phi_{0}=2\pi\rho A_{ 2}(\rho). \tag{18}\]
It follows that
\[A_{2}(\rho)=\frac{\Phi_{0}}{2\pi\rho},\quad\mbox{and}\quad A_{\varphi}(\rho)= \frac{\Phi_{0}}{2\pi}\frac{1}{\Lambda(\rho)}. \tag{19}\]
This leads to
\[\mathsf{F}=\mathsf{B}=d\mathsf{A}=\partial_{\rho}A_{\varphi}d\rho\wedge d \varphi+\partial_{z}A_{\varphi}dz\wedge d\varphi \tag{20}\]
hence,
\[B_{\varphi z}=0,\quad B_{\rho\varphi}=-\frac{\Phi_{0}}{4\pi}\frac{\kappa_{0}^{ 2}\rho}{\Lambda(\rho)^{2}}. \tag{21}\]
The 2-form \(\mathsf{G}\) is now given by the Hodge product \(\mathsf{G}=\star_{4}\mathsf{F}=-\mathsf{H}\wedge dt\). The calculation leads to
\[\mathsf{G}=-\frac{\Lambda(\rho)^{2}}{\rho}B_{\rho\varphi}dz\wedge dt \tag{22}\]
and implies
\[H_{z}=\frac{\Lambda(\rho)^{2}}{\rho}B_{\rho\varphi}=\frac{1}{\mu(\rho)}B_{ \rho\varphi} \tag{23}\]
with the relative permeability given by
\[\mu(\rho)=\frac{\rho}{\Lambda(\rho)^{2}}. \tag{24}\]
Therefore there is magnetization in Melvin spacetime since \(H_{z}\neq B_{\rho\varphi}\). In Fig. 2 the magnetic fields are plotted in the \(z-x\) plane assuming that the expressions for the fields are in a Minkowski background.
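The profiles behind such plots are elementary to evaluate. The following sketch (ours, with \(\Phi_{0}=\kappa_{0}=1\), matching the scale of Fig. 2) computes \(B_{\rho\varphi}\) from Eq. (21), the relative permeability \(\mu(\rho)\) of Eq. (24), and the resulting \(H_{z}\) of Eq. (23).

```python
import math

def melvin_fields(rho, kappa0=1.0, phi0=1.0):
    """Return (B_rho_phi, H_z, mu) for the Melvin magnetic universe at radius rho."""
    Lam = 1 + 0.25 * kappa0**2 * rho**2                        # Lambda(rho), Eq. (17)
    B_rho_phi = -phi0/(4*math.pi) * kappa0**2 * rho / Lam**2   # Eq. (21)
    mu = rho / Lam**2                                          # relative permeability, Eq. (24)
    H_z = B_rho_phi / mu                                       # Eq. (23)
    return B_rho_phi, H_z, mu

for rho in (0.5, 1.0, 2.0, 4.0):
    print(rho, melvin_fields(rho))
```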
### Ernst spacetime
We now consider the case of an Ernst spacetime [27], consisting of a black hole immersed in a magnetic field. The metric line element writes in standard units as
\[g = \Lambda(r,\theta)^{2}\left[-\left(1-\frac{2M}{r}\right)dt^{2}+ \frac{dr^{2}}{1-\frac{2M}{r}}+r^{2}d\theta^{2}\right] \tag{25}\] \[+\frac{r^{2}\sin^{2}\theta}{\Lambda(r,\theta)^{2}}d\varphi^{2}\]
where \(\Lambda(r,\theta)=1+\frac{\kappa_{0}^{2}}{4}r^{2}\sin^{2}\theta\) and \(\kappa_{0}\) is, again, a constant normalized magnetic field.
Then, one solves Maxwell's equations in the background metric (25) with the 1-form potential \(\mathsf{A}\) obtained by the flux condition analogous to (18) [28]. The coframe
Figure 2: Melvin magnetic flux density \(\mathbf{B}\) (left) and magnetic field \(\mathbf{H}\) (right). The scale was set by choosing \(B_{0}=1\).
basis vectors follow from (25) and read as
\[\mathsf{e}^{0} =\Lambda(r,\theta)\left(1-\frac{2M}{r}\right)^{1/2}dt, \tag{26}\] \[\mathsf{e}^{1} =\Lambda(r,\theta)\left(1-\frac{2M}{r}\right)^{-1/2}dr,\] (27) \[\mathsf{e}^{2} =\Lambda(r,\theta)rd\theta,\] (28) \[\mathsf{e}^{3} =\frac{r\sin\theta}{\Lambda(r,\theta)}d\varphi \tag{29}\]
and assuming cylindrical symmetry in the Minkowski cotangent spacetime, \(\mathsf{A}=A_{3}(r\sin\theta)\mathsf{e}^{3}\) we extract \(A_{3}\) from
\[\int_{\partial\Sigma}\mathsf{A}=\int_{\Sigma}\mathsf{B}=\Phi_{0}=2\pi r\sin\theta A_{3}(r,\theta). \tag{30}\]
It follows that[29]
\[A_{3}(r,\theta)=\frac{\Phi_{0}}{2\pi r\sin\theta},\quad A_{\varphi}(r,\theta )=\frac{\Phi_{0}}{2\pi}\frac{1}{\Lambda(r,\theta)} \tag{31}\]
Equation \(\mathsf{F}=\mathsf{B}=d\mathsf{A}\) then yields the magnetic \(2-\)form
\[\mathsf{B}=-\frac{\Phi_{0}}{4\pi}\frac{\kappa_{0}^{2}r\sin\theta}{\Lambda(r, \theta)^{2}}\left(\sin\theta\;dr\wedge d\varphi+r\cos\theta\;d\theta\wedge d\varphi\right) \tag{32}\]
and from \(\mathsf{G}=\star_{4}\mathsf{F}=-\mathsf{H}\wedge dt\) one finds that \(\mathsf{H}\) expresses as
\[\mathsf{H}=\frac{\Lambda(r,\theta)^{2}}{\sin\theta}\Big{[}\Big{(}1-\frac{2M} {r}\Big{)}B_{r\varphi}\,d\theta+\frac{1}{r^{2}}B_{\theta\varphi}\,dr\Big{]}. \tag{33}\]
This expression implies the relation between components
\[H_{r} =\frac{\Lambda(r,\theta)^{2}}{r^{2}\sin\theta}B_{\theta\varphi}, \tag{34}\] \[H_{\theta} =\frac{\Lambda(r,\theta)^{2}}{\sin\theta}\left(1-\frac{2M}{r} \right)B_{r\varphi}. \tag{35}\]
This time, the background metric couples to the magnetic field and, since the metric is not asymptotically flat, the result is an anisotropic magnetization.
In Fig. 3 the magnetic fields are plotted in the \(z-x\) plane assuming, as in previous graphs, that the expressions for the fields are in a Minkowski background.
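The corresponding Ernst profiles follow the same pattern. The sketch below (ours, with \(2M=\kappa_{0}=\Phi_{0}=1\), as in Fig. 3) evaluates the \(\mathsf{B}\) components read off Eq. (32) and the \(\mathsf{H}\) components of Eqs. (34)-(35).

```python
import math

def ernst_fields(r, theta, M=0.5, kappa0=1.0, phi0=1.0):
    """Return ((B_r_phi, B_theta_phi), (H_r, H_theta)) in the Ernst spacetime."""
    Lam = 1 + 0.25 * kappa0**2 * r**2 * math.sin(theta)**2
    B_r_phi = -phi0/(4*math.pi) * kappa0**2 * r * math.sin(theta)**2 / Lam**2        # Eq. (32)
    B_th_phi = -phi0/(4*math.pi) * kappa0**2 * r**2 * math.sin(theta) * math.cos(theta) / Lam**2
    H_r = Lam**2 / (r**2 * math.sin(theta)) * B_th_phi                               # Eq. (34)
    H_th = Lam**2 / math.sin(theta) * (1 - 2*M/r) * B_r_phi                          # Eq. (35)
    return (B_r_phi, B_th_phi), (H_r, H_th)

print(ernst_fields(3.0, math.pi/3))
```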
## 4 Electric and magnetic fields in the Kerr-Newman spacetime
As seen in Section 2, the Reissner-Nordstrom solution of Einstein-Maxwell equations generalizes the Schwarzschild spacetime to include charge of the gravitational source. Analogously, the Kerr solution [30] complements Schwarzschild's by including rotation of the source. The generalization of both the Reissner-Nordstrom and Kerr solutions,
known as Kerr-Newman (KN) metric [31, 32] describes the spacetime of a rotating charged source. Unlike the RN case, the electromagnetic source is not pointlike but a distribution of mass, charge, and current on a disk [33, 34]. The Kerr-Newman metric in Boyer-Lindquist coordinates [35] is given by [32]
\[\begin{split} g=&-\frac{\Delta}{\rho^{2}}\left[a\sin^ {2}(\theta)d\varphi-du\right]^{2}\\ &+\frac{\rho^{2}}{\Delta}dr^{2}+\rho^{2}d\theta^{2}+\frac{\sin^{ 2}\theta}{\rho^{2}}\left[\left(r^{2}+a^{2}\right)d\varphi-adu\right]^{2}\end{split} \tag{36}\]
where
\[\Delta(r)=r^{2}-2Mr+a^{2}+Q^{2} \tag{37}\]
and
\[\rho^{2}(r,\theta)=r^{2}+a^{2}\cos^{2}\theta, \tag{38}\]
for a source of angular momentum per unit mass \(a\), mass \(M\) and charge \(Q\). The coordinates \((u,r,\theta,\varphi)\) are Schwarzschild-like coordinates. The quantities \(r\), \(\rho\), \(\sqrt{\Delta}\), \(u\) and \(a\) all have the dimensions of lengths. The metric (36) reduces to the Kerr metric for \(Q=0\) and to the Reissner-Nordstrom metric (2) for \(a=0\). If \(a=Q=0\) one gets the Schwarzschild metric.
In the Minkowski coframe, given by
\[\mathbf{e}^{0} =-\frac{\sqrt{\Delta}}{\rho}(du-a\sin^{2}\theta d\varphi) \tag{39}\] \[\mathbf{e}^{1} =\frac{\rho}{\sqrt{\Delta}}dr\] (40) \[\mathbf{e}^{2} =\rho d\theta\] (41) \[\mathbf{e}^{3} =\frac{\sin\theta}{\rho}(adu-(r^{2}+a^{2})d\varphi) \tag{42}\]
Figure 3: Ernst magnetic flux density \(\mathbf{B}\) (left) and magnetic field \(\mathbf{H}\) (right) lines. The scale was set by choosing \(B_{0}=2M=1\).
the 1-form electromagnetic potential in the Kerr-Newman spacetime is given by \(\mathsf{A}=A_{0}\mathsf{e}^{0}\) which, in local coordinates reads as [33]
\[\mathsf{A}=\frac{Qr}{\rho\sqrt{\Delta}}\mathsf{e}^{0}= -\frac{Qr}{\rho^{2}}\left(du-a\sin^{2}\theta d\varphi\right). \tag{43}\]
From the Faraday 2-form \(\mathsf{F}=d\mathsf{A}=\mathsf{E}\wedge du+\mathsf{B}\), it follows that the electric field 1-form is given by
\[\mathsf{E}=\frac{Q}{\rho^{4}}\left[r^{2}-a^{2}\cos^{2}\theta\right]dr-\frac{Q} {\rho^{4}}a^{2}\sin 2\theta\left(rd\theta\right) \tag{44}\]
and the magnetic flux density 2-form by
\[\mathsf{B}= \frac{Q}{r\rho^{4}}\left[r^{2}-a^{2}\cos^{2}\theta\right]a\sin \theta\left(r\sin\theta d\varphi\right)\wedge dr \tag{45}\] \[+ \frac{2Q}{r\rho^{4}}a(r^{2}+a^{2})\cos\theta\left(rd\theta\right) \wedge(r\sin\theta d\varphi).\]
In the Minkowski coframe, the expression of \(\mathsf{F}\) takes a simpler form
\[\mathsf{F}=-\frac{Q(\rho^{2}-2r^{2})}{\rho^{4}}\mathsf{e}^{0}\wedge\mathsf{ e}^{1}-\frac{2Qar\cos\theta}{\rho^{4}}\mathsf{e}^{2}\wedge e^{3} \tag{46}\]
and the evaluation of the Hodge star is made easier because of fortunate simplifications:
\[\star_{4}\mathsf{F} = \sqrt{-\eta}F^{ab}\epsilon_{abcd}\mathsf{e}^{c}\wedge\mathsf{e}^{d} \tag{47}\] \[= \frac{Q(\rho^{2}-2r^{2})}{\rho^{4}}\mathsf{e}^{2}\wedge\mathsf{e} ^{3}-\frac{2Qar\cos\theta}{\rho^{4}}\mathsf{e}^{0}\wedge\mathsf{e}^{1}.\]
This result is finally reverted to the local coframe:
\[\star_{4}\mathsf{F} = \frac{Qa}{\rho^{4}}\Big{[}\Big{(}-(\rho^{2}-2r^{2})\sin\theta d \theta+r\cos\theta dr\Big{)}\wedge du \tag{48}\] \[+(\rho^{2}-2r^{2}))a^{-1}d\theta\wedge d\varphi+2ar\sin\theta \cos\theta d\varphi\wedge dr\Big{]}\]
This leads to \(\mathsf{D}\) and \(\mathsf{H}\) fields from the Maxwell 2-form \(\mathsf{G}=\star_{4}\mathsf{F}=\mathsf{D}-\mathsf{H}\wedge du\). We get for the electric flux density
\[\mathsf{D}= \frac{Q}{r^{2}\rho^{4}}\left[r^{2}-a^{2}\cos^{2}\theta\right](r ^{2}+a^{2})\left(rd\theta\right)\wedge(r\sin\theta d\varphi) \tag{49}\] \[- \frac{Q}{\rho^{4}}a^{2}\sin 2\theta\left(r\sin\theta\,d\varphi \right)\wedge dr.\]
Comparing (49) to (44) we see that \(E_{\theta}=D_{r\varphi}\) but \(E_{r}\neq D_{\theta\varphi}\).
The magnetic field is given by
\[\mathsf{H}=\frac{Q}{r\rho^{4}}\big{[}\rho^{2}-2r^{2}\big{]}a\sin\theta\left( rd\theta\right)+\frac{2Q}{\rho^{4}}ar\cos\theta dr \tag{50}\]
and we have that \(H_{\theta}=B_{r\varphi}\) but \(H_{r}\neq B_{\theta\varphi}\), from Eqs. (45) and (50). To summarize, the vacuum response in the KN spacetime is anisotropic: polarization and magnetization occur only for the radial components of the respective fields.
In order to visualize the classical vacuum polarization effects of the KN spacetime we map the electromagnetic fields into Minkowski spacetime by assuming that Eqs. (44)-(50) are in a flat background in spherical coordinates. Using this approach, we show in Figs. 4 and 5 the electromagnetic field lines in the \(z-x\) plane. They may be compared to the plots of \(\mathbf{E}\) and \(\mathbf{H}\) presented in [34], keeping in mind that they are plotted in a different background. While we forced the field lines to be in Minkowski spacetime, Ref. [34] plots the fields in the KN curved background described in a Cartesian system (that is asymptotically flat) introduced originally by Kerr [30]. It is clear from the plots that the electromagnetic sources are located on a disk on the equatorial plane. Both charge and current densities are singular on the disk rim, in agreement with Refs. [33, 34]. Furthermore, the polarization is relevant only in the region near the sources. Away from the sources, \(\mathbf{E}\sim\mathbf{D}\) and \(\mathbf{B}\sim\mathbf{H}\).
## 5 Conclusions
The examples studied in this article show that both classical vacuum polarization and magnetization do occur in some curved spacetimes with electromagnetic sources. These properties may be useful for a possible detection of non trivial spacetime geometries from distant observations.
Mathematically, the non vanishing polarisation/magnetization is a consequence of the Hodge star operation that incorporates the spacetime geometry while providing the transformation between the "fundamental" (\(E,\,B\)) and "excitation" (\(D,\,H\)) fields. To understand this property, we can compare the standard approach of electrodynamics in terms of tensors with the one privileged in this paper in terms of differential forms. In terms of tensors, one can sum up the basic postulates of electrodynamics to Bianchi
Figure 4: Kerr-Newman electric field \(\mathbf{E}\) (left) and electric flux density \(\mathbf{D}\) (right) lines. The scale was set by choosing \(a=Q=1\).
equation for the Faraday tensor \(\partial_{\lambda}F_{\mu\nu}+\partial_{\mu}F_{\nu\lambda}+\partial_{\nu}F_{\lambda\mu}=0\), and to the equation of motion for the Maxwell tensor \(\partial_{\mu}G^{\mu\nu}=-\sqrt{-g}J^{\nu}\). At this point, an action is still missing to provide a relation between the two second-rank tensors \(F_{\mu\nu}\) and \(G^{\mu\nu}\). In Maxwell electrodynamics, this action is \(S[A_{\sigma}]=\int\sqrt{-g}\;d^{4}x\left(\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-A_{\mu}J^{\mu}\right)\), the minimisation of which leads to the equations of motion \(\frac{1}{\sqrt{-g}}\partial_{\mu}\big{(}\sqrt{-g}F^{\mu\nu}\big{)}=-J^{\nu}\). In terms of differential forms, the Bianchi identity reads \(d\mathsf{F}=0\) and the equation of motion reduces to \(d\mathsf{G}=-\star_{4}\mathsf{J}\) and, again, the system of equations is not complete in the absence of a relation between \(\mathsf{G}\) and \(\mathsf{F}\). This relation is provided either by an action as in equation (1), or directly by the duality relation \(\mathsf{G}=\star_{4}\mathsf{F}\), called the constitutive relation. If \(\mathsf{G}\) is obtained independently of the coordinate system via generalized Gauss or Ampere theorems, the metric dependence of the Hodge star operation makes \(\mathsf{F}\) depend on the local coordinates and possibly leads to a non trivial polarizability/permeability in these coordinates.
We remark that our study encompasses static fields and therefore is not equivalent to the well-known interpretation of curved spacetime as an effective anisotropic medium for light propagation [36]. Moreover, in all cases studied here, both gravitational and electromagnetic fields are solutions of the coupled Einstein-Maxwell equations, since the electromagnetic field's energy-momentum tensor generates a very weak gravitational field, thus its contribution to gravity in the presence of matter sources is usually neglected: this is in particular the case for a distant observer who might detect the imprint of the polarization/magnetization in the geometry. A possible extension of the results presented here may consist in describing the full backreaction of the electromagnetic field via the energy-momentum tensor [37].
Analogies between gravity and elasticity of continuum media have been explored by many authors (see for instance [38, 39, 40]). Let us quote for example Landau and
Figure 5: Kerr-Newman magnetic flux density \(\mathbf{B}\) (left) and magnetic field \(\mathbf{H}\) (right) lines. The scale was set by choosing \(a=Q=1\).
Lifshitz[41]:
"We may say that with respect to its effect on the electromagnetic field a static gravitational field plays the role of a medium with electric and magnetic permeabilites."
On the other hand, the coupling of electromagnetic fields to elastic deformations gives rise to well-known phenomena like piezoelectricity (piezomagnetism) and the not-so-well-known flexoelectricity (flexomagnetism) [42]. Further, in higher dimensional gravity, electroelastic effects have been found in strained charged branes [43, 44]. This leads us to conclude that the results obtained here for the vacuum dielectric and magnetic response functions, with the associated gravitational fields, may perhaps be realized in electro-magneto-elastic media as analog models for Einstein-Maxwell solutions. Conversely, one may propose an Einstein-Maxwell approach to electro-magneto-elastic materials. Indeed, elasticity in continuum mechanics has been long related to gravity [45], their similarity is made explicit when the equation obeyed by the deformation field of an elastic medium is written as an "Einstein equation", as shown in [46]. This naturally allows for a generalization of the elastic Einstein equation to the elastic Einstein-Maxwell equations by considering electromagnetic fields in material media and their coupling to elasticity. In other words: the rewriting of the electro-magneto-elastic equations as effective Einstein-Maxwell equations.
The classical response of the vacuum may provide new tools for the astronomical search for cosmic strings and wormholes. The former are expected to produce observable signatures such as gravitational lensing [47], anisotropic patterns in the Cosmic Microwave Background, the Kaiser-Stebbins effect [16, 48] or powerful bursts of gravitational waves due to string cusps [49]. Wormholes could be observed from lensing effects [50]-[51] or from the iron line spectrum of their accretion disks [52]. In this work, we showed that the vacuum constitutive relations depend on the position and on the string/wormhole parameters. This leaves a usable imprint on propagating waves: as is known from the Rytov law, the polarization plane of a wave is likely to rotate when propagating inside inhomogeneous media [53].
## Acknowledgements
FM thanks Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq) for partially supporting this work.
## Appendix
The Hodge dual operator \(\star_{n}\) is an invertible linear map between any \(p\)-form \(\mathsf{v}\in\Lambda^{p}\left(\mathcal{M}\right)\) and its dual \((n-p)\)-form \(\star_{n}\mathsf{v}\in\Lambda^{n-p}\left(\mathcal{M}\right)\) such that [54]
\[\mathsf{u}\wedge\left(\star_{n}\mathsf{v}\right)=\left\langle\mathsf{u},\mathsf{v}\right\rangle\sqrt{\left|\det g_{ab}\right|}\,dx^{1}\wedge\ldots\wedge dx^{n} \tag{51}\]
(here \(\mathsf{u}\) is of the same degree as \(\mathsf{v}\)) with \(n\) the dimension of the manifold \(\mathcal{M}\), here \(n=4\). In the case considered here,
\[\mathsf{u}\wedge(\star_{4}\mathsf{v})=\left\langle\mathsf{u},\mathsf{v}\right\rangle r ^{2}\sin\theta\;dt\wedge dr\wedge d\theta\wedge d\varphi. \tag{52}\]
Again, \(\mathsf{u}\) is of the same degree as \(\mathsf{v}\) and the inner product \(\left\langle\;,\;\right\rangle\) between two \(p\)-forms obeys:
\[p=1: \left\langle dx^{\mu},dx^{\nu}\right\rangle=g^{\mu\nu} \tag{53}\] \[p>1: \left\langle dx^{\mu_{1}}\wedge..\wedge dx^{\mu_{p}},dx^{\nu_{1} }\wedge..\wedge dx^{\nu_{p}}\right\rangle\] (54) \[=\left|\begin{pmatrix}g^{\mu_{1}\nu_{1}}&\cdots&g^{\mu_{1}\nu_{p} }\\ \vdots&\ddots&\vdots\\ g^{\mu_{p}\nu_{1}}&\cdots&g^{\mu_{p}\nu_{p}}\end{pmatrix}\right|.\]
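As a simple worked illustration of the definition (51) together with (53)-(54) (an explicit example we add here for concreteness), consider flat Minkowski spacetime in Cartesian coordinates \((t,x,y,z)\), for which \(\sqrt{|\det g_{ab}|}=1\) and the volume form is \(dt\wedge dx\wedge dy\wedge dz\). For the 2-form \(\mathsf{v}=dt\wedge dx\) one finds \[\left\langle dt\wedge dx,dt\wedge dx\right\rangle=g^{tt}g^{xx}-\left(g^{tx}\right)^{2}=-1,\] so that \(dt\wedge dx\wedge\star_{4}(dt\wedge dx)=-\,dt\wedge dx\wedge dy\wedge dz\), which is satisfied by \(\star_{4}(dt\wedge dx)=-\,dy\wedge dz\).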
A useful property is that in a Minkowski frame, the Hodge dual of the Faraday form is simply
\[\star_{4}\mathsf{F}=\sqrt{-\eta}F^{ab}\epsilon_{abcd}\mathsf{e}^{c}\wedge \mathsf{e}^{d} \tag{55}\]
with \(-\eta=1\) and \(\mathsf{e}^{a}\) the Minkowski cotetrads.
|
2309.14705 | Dynamic fluctuations of current and mass in nonequilibrium mass
transport processes | We study steady-state dynamic fluctuations of current and mass, as well as
the corresponding power spectra, in conserved-mass transport processes on a
ring of $L$ sites; these processes violate detailed balance, have nontrivial
spatial structures, and their steady states are not described by the
Boltzmann-Gibbs distribution. We exactly calculate, for all times $T$, the
fluctuations $\langle \mathcal{Q}_i^2(T) \rangle$ and $\langle
\mathcal{Q}_{sub}^2(l, T) \rangle$ of the cumulative currents up to time $T$
across $i$th bond and across a subsystem of size $l$ (summed over bonds in the
subsystem), respectively; we also calculate the (two-point) dynamic correlation
function for subsystem mass. In particular, we show that, for large $L \gg 1$,
the bond-current fluctuation grows linearly for $T \sim {\cal O}(1)$,
subdiffusively for $T \ll L^2$ and then again linearly for $T \gg L^2$. The
scaled subsystem current fluctuation $\lim_{l \rightarrow \infty, T \rightarrow
\infty} \langle \mathcal{Q}^2_{sub}(l, T) \rangle/2lT$ converges to the
density-dependent particle mobility $\chi$ when the large subsystem size limit
is taken first, followed by the large time limit. Remarkably, the scaled
current fluctuation $D \langle \mathcal{Q}_i^2(T)\rangle/2 \chi L \equiv {\cal
W}(y)$ as a function of scaled time $y=DT/L^2$ is expressed in terms of a
universal scaling function ${\cal W}(y)$, where $D$ is the bulk-diffusion
coefficient. Similarly, the power spectra for current and mass time series are
characterized by the respective universal scaling functions, which are
calculated exactly. We provide a microscopic derivation of equilibrium-like
Green-Kubo and Einstein relations, that connect the steady-state current
fluctuations to the response to an external force and to mass fluctuation,
respectively. | Animesh Hazra, Anirban Mukherjee, Punyabrata Pradhan | 2023-09-26T06:58:02Z | http://arxiv.org/abs/2309.14705v2 | # Dynamic fluctuations of current and mass in nonequilibrium mass transport processes
###### Abstract
We study steady-state dynamic fluctuations of current and mass, as well as the corresponding power spectra, in conserved-mass transport processes on a ring of \(L\) sites; these processes violate detailed balance, have nontrivial spatial structures, and their steady states are not described by the Boltzmann-Gibbs distribution. We exactly calculate, for all times \(T\), the fluctuations \(\langle\mathcal{Q}_{i}^{2}(T)\rangle\) and \(\langle\mathcal{Q}_{sub}^{2}(l,T)\rangle\) of the cumulative currents up to time \(T\) across the \(i\)th bond and across a subsystem of size \(l\) (summed over bonds in the subsystem), respectively; we also calculate the (two-point) dynamic correlation function for subsystem mass. In particular, we show that, for large \(L\gg 1\), the bond-current fluctuation grows linearly for \(T\sim\mathcal{O}(1)\), subdiffusively for \(T\ll L^{2}\), and then again linearly for \(T\gg L^{2}\). The scaled subsystem current fluctuation \(\lim_{l\to\infty,T\to\infty}\langle\mathcal{Q}_{sub}^{2}(l,T)\rangle/2lT\) converges to the density-dependent particle mobility \(\chi\) when the large subsystem size limit is taken first, followed by the large time limit. Remarkably, the scaled current fluctuation \(D\langle\mathcal{Q}_{i}^{2}(T)\rangle/2\chi L\equiv\mathcal{W}(y)\) as a function of scaled time \(y=DT/L^{2}\) is expressed in terms of a universal scaling function \(\mathcal{W}(y)\), where \(D\) is the bulk-diffusion coefficient. Similarly, the power spectra for current and mass time series are characterized by the respective universal scaling functions, which are calculated exactly. We provide a microscopic derivation of equilibrium-like Green-Kubo and Einstein relations, which connect the steady-state current fluctuations to the response to an external force and to mass fluctuation, respectively.
## I Introduction
Characterizing the static and dynamic properties of mass transport processes is a fundamental problem in nonequilibrium statistical physics; it helps develop a simple theoretical understanding of a variety of natural phenomena involving rather complex many-body interactions among constituents that facilitate transport of mass and energy in a far-from-equilibrium setting. Such processes are abundant in nature, manifesting themselves in cloud formation [1], heat conduction [2], propagation of forces in granular media [3; 4], river network formation [5], self-assembly of lipid droplets on cell surfaces [6], traffic flow [7], and wealth distribution in a population [8], among others. A widely studied class of minimal lattice models for understanding transport in interacting-particle systems is that of simple exclusion processes (SEPs) and zero-range processes (ZRPs). Another class of models, which has drawn significant attention in the past, is that of the conserved-mass transport processes, also called _mass chipping models_ (MCMs) [9; 10; 11; 12; 13; 4]. Interestingly, their steady-state measures on a closed geometry, unlike those for SEPs and ZRPs, are not usually described by the equilibrium Boltzmann-Gibbs distribution and, in most cases, are _a priori_ not known. Indeed, these systems are inherently driven far from equilibrium and generate nontrivial spatial structures, making exact dynamic characterization of steady-state fluctuations a challenging problem.
Recently, a theoretical framework for driven diffusive systems, known as macroscopic fluctuation theory (MFT) [14; 15], has been developed to study fluctuations of coarse-grained (hydrodynamic) variables such as density \(\rho(x,\tau)\) and current \(j(x,\tau)\), where \(x\) and \(\tau\) are suitably rescaled position and time, respectively. The MFT is a generalization of the Onsager-Machlup theory of _near-equilibrium systems_ to the theory of far-from-equilibrium ones [16; 17]. Its main ingredients are the density-dependent transport coefficients, namely the bulk-diffusion coefficient \(D(\rho)\) and the mobility \(\chi(\rho)\) (equivalently, the conductivity), which govern density relaxation and current fluctuation on macroscopic scales [18; 19; 20; 21]. Despite a simple prescription of the MFT, calculating the transport coefficients as a function of density, and other parameters, is difficult, especially for many-body systems when spatial correlations are nonzero and the steady-state measure is unknown. The difficulty stems primarily from the fact that the averages of various observables, which are necessary to calculate the transport coefficients, must be computed in the nonequilibrium steady state, which is however not described by the Boltzmann-Gibbs distribution and is, furthermore, not explicitly known in most cases. Perhaps not surprisingly, apart from SEPs [22; 23; 24; 25; 26; 27; 28] and ZRPs [29; 30], which have a product-measure steady state [31], there are very few examples of exact microscopic characterization of dynamic fluctuations in interacting-particle systems.
Of course, MCMs, which constitute a paradigm for out-of-equilibrium many-body systems, are an exception. Indeed, because they are analytically tractable, MCMs provide a level playing field for exact microscopic calculations of various time-dependent quantities, such as static density correlations and dynamic tagged-particle correlations, which have been extensively explored in the past [9; 32; 33]. However, except for the Kipnis-Marchioro-Presutti (KMP)-like models [34; 33] and the SEP [35], which satisfy detailed balance, exact calculations of current fluctuations, and characterization of the precise quantitative connection between fluctuation and transport, have not been done for mass transport models with a nontrivial nonequilibrium steady state. Indeed it would be quite interesting to employ the microscopic techniques to relate the dynamic properties of mass and current to the macroscopic transport coefficients, and to derive the MFT for such models from "first-principles" calculations.
In this paper we exactly calculate dynamic correlations for subsystem current and mass in a broad class of one-dimensional mass chipping models (MCMs) on a ring of \(L\) sites. In these models, a site \(i\) contains a continuous mass \(m_{i}\geq 0\) and the total mass in the system remains conserved. With some specified rates, a certain fraction of the mass at a site gets fragmented or chipped off from the parent mass, diffuses _symmetrically_, and coalesces with the mass at one of the nearest-neighbor sites. The MCMs have been intensively studied in various contexts in the past decades [9; 10; 12; 13; 36], and can be mapped to a class of transport processes, called the _random averaging processes_ (RAPs) [37], which is again a variant of the so-called Hammersley process [38]. Note that, for symmetric transfer (i.e., diffusion) of masses, although there is no net mass flow in the steady state on a ring geometry, the probability currents in the configuration space can still be nonzero and the Kolmogorov criterion for equilibrium can be shown to be violated [39]. As mentioned before, although the steady-state measures for generic parameter values are not known [9; 10; 12; 32], the MCMs are amenable to exact theoretical studies. For example, the spatial correlation function of mass has been exactly calculated before in some of the variants of MCMs [9; 10; 11; 40; 41]. Furthermore, the mean-squared fluctuation of the position of a single tagged particle as well as the dynamic correlations of two tagged particles in related models - the RAPs - have been calculated exactly using microscopic and hydrodynamic calculations [42; 43; 41].
The primary focus of our study is the cumulative time-integrated currents \(\mathcal{Q}_{i}(T)\) and \(\mathcal{Q}_{sub}(l,T)\) in a time interval \([0,T]\), across a bond \((i,i+1)\) and across a subsystem of size \(l\), respectively. The bond-current fluctuation \(\langle\mathcal{Q}_{i}^{2}(T)\rangle\) as a function of time \(T\) exhibits three distinct temporal behaviors. Initially, for small times \(T\ll 1/D\), the temporal growth is linear in time \(T\), where \(D\) is the bulk-diffusion coefficient (a constant). At moderately large times, the fluctuation grows subdiffusively, having a \(T^{1/2}\) growth for \(1/D\ll T\ll L^{2}/D\), with \(L\) being the system size. Finally, at very large times \(T\gg L^{2}/D\), the growth again becomes linear in time. We find that, even in the presence of nonzero spatial correlations, the qualitative behavior of the current fluctuations, except for the prefactors, is similar to that in the SEP. Remarkably, independent of the details of the mass-transfer rules of the models, the suitably scaled bond-current fluctuation \(\langle\mathcal{Q}_{i}^{2}(T)\rangle D/2\chi L\), with \(\chi(\rho)\) being the density-dependent mobility, as a function of the scaled time \(y=DT/L^{2}\) can be expressed in terms of a universal scaling function \(\mathcal{W}(y)\), which is exactly calculated and is shown to have the following asymptotic behavior,
\[\mathcal{W}(y)=\begin{cases}\left(\frac{y}{\pi}\right)^{1/2}&\text{for}\ y\ll 1,\\ y&\text{for}\ y\gg 1.\end{cases} \tag{1}\]
Furthermore, we show that the two-point correlation for the instantaneous current as a function of time \(t\) has a delta correlated part at \(t=0\) and a long-ranged (power law) negative part, which decays as \(t^{-3/2}\). The corresponding power spectrum of current \(S_{\mathcal{J}}(f)\) is calculated analytically and it exhibits a low-frequency power-law behavior \(f^{1/2}\) in the frequency regime \(D/L^{2}\ll f\ll 1\). Similarly, the power spectrum \(S_{M_{l}}(f)\) for the subsystem mass time series is calculated exactly and is shown to have a low-frequency power-law divergence \(f^{-3/2}\). We have also calculated the scaling functions when the rescaled power spectra for current and mass are expressed in terms of the scaled frequency \(fL^{2}/D\).
We derive a nonequilibrium fluctuation relation between scaled subsystem mass and space-time integrated current fluctuations. We calculate the scaled fluctuation of the cumulative current \(\mathcal{Q}_{sub}(l,T)\), summed over a subsystem of size \(l\) and integrated up to time \(T\), and we show that the scaled subsystem current fluctuation converges to the density-dependent particle mobility \(\chi(\rho)\), i.e., a nonequilibrium Green-Kubo-like formula,
\[\sigma_{\mathcal{Q}}^{2}\equiv\lim_{l\to\infty,T\to\infty}\frac{\langle \mathcal{Q}_{sub}^{2}(l,T)\rangle}{lT}=2\chi(\rho), \tag{2}\]
where the infinite subsystem-size limit \(l\to\infty\) is taken first, followed by the infinite time limit \(T\to\infty\); notably, in the opposite order of limits, the lhs simply vanishes. By explicitly calculating the scaled subsystem mass fluctuation \(\sigma_{M}^{2}=\lim_{l\to\infty}\langle\Delta M_{l}^{2}\rangle/l\), where \(\langle\Delta M_{l}^{2}\rangle=\langle M_{l}^{2}\rangle-\langle M_{l}\rangle^{2}\) is the fluctuation of mass in a subsystem of size \(l\), we then derive a nonequilibrium fluctuation relation between mass and current fluctuations,
\[\sigma_{M}^{2}=\frac{\sigma_{\mathcal{Q}}^{2}}{2D}, \tag{3}\]
which is a modified version of the celebrated Einstein relation for equilibrium systems. Furthermore, provided there is a small biasing force \(\tilde{F}\) (suitably scaled), which generates a drift current \(J_{drift}=\chi_{op}(\rho)\tilde{F}\) along the direction of the force, we derive a Green-Kubo-like fluctuation-response relation,
\[\chi_{op}(\rho)\equiv\left[\frac{\partial J_{drift}}{\partial\tilde{F}}\right]_{\tilde{F}=0}=\frac{\sigma_{\mathcal{Q}}^{2}}{2}; \tag{4}\]
the above relation directly connects the "operational mobility" or, equivalently, the response, due to the applied force, to the current fluctuations in the nonequilibrium steady state.
We organize the rest of the paper as follows. In Section II we introduce three models at the core of our study: MCM I, MCM II, and MCM III. In Section III, a comprehensive theoretical framework is established to calculate the dynamic correlations, time-integrated bond current fluctuations, and power spectra of instantaneous bond current and subsystem mass fluctuations within the context of MCM I. In Sections IV and V we present similar calculations of dynamic correlations for the other two models, MCM II and MCM III, both of which lack nearest-neighbor correlation in masses. We perform a thorough comparative analysis of the dynamic properties exhibited by these models in Section VI. Finally, in Section VII, we summarize our results.
## II Models
In this section, we define three well-studied variants of conserved-mass chipping models: MCM I, MCM II, and MCM III, which differ in the details of their microscopic dynamics. In our consideration, MCMs are defined on a one-dimensional periodic lattice with sites labeled by \(i=0,1,\cdots,L-1\). A mass \(m_{i}\geq 0\) is associated with each site \(i\). The total mass \(M=\sum_{i=0}^{L-1}m_{i}\) is conserved. In these models, we introduce a variable \(\lambda\) that determines the fraction of mass retained by a site, while the remaining fraction \(\tilde{\lambda}=1-\lambda\), called the chipping constant, gets chipped off from the parent mass. In Fig. 1 we present schematic diagrams that represent the underlying microscopic dynamics of these three models. In these models, a site \(i\) is updated with unit rate, with which the following events occur.
**MCM I:** A fraction \(\lambda\) of the mass \(m_{i}\) is retained at the site, while the fraction \(\tilde{\lambda}=1-\lambda\) is chipped off. Subsequently, a random fraction \(\xi_{i}\) of the chipped-off mass, i.e., \(\tilde{\lambda}m_{i}\xi_{i}\), is transferred to the right nearest neighbor, while the remaining fraction of the chipped-off mass, \(\tilde{\lambda}m_{i}(1-\xi_{i})\), is transferred to the left nearest neighbor. Here the \(\xi_{i}\in[0,1]\) are independent and identically distributed (i.i.d.) random variables with a uniform distribution. For convenience, we also define \(\tilde{\xi}_{i}=1-\xi_{i}\) for later use.
**MCM II:** A fraction \(\lambda\) of the mass is retained, while the fraction \(\tilde{\lambda}=1-\lambda\) is chipped off. A random fraction \(\xi_{i}\) of the chipped-off mass, i.e., \(\tilde{\lambda}m_{i}\xi_{i}\), is then transferred either to the left or to the right nearest neighbor, each with probability \(1/2\). The remaining fraction of the chipped-off mass, \(\tilde{\lambda}m_{i}(1-\xi_{i})\), is subsequently deposited back to site \(i\).
**MCM III:** In this model, a bond \((i,i+1)\) is updated with unit rate. A fraction \(\tilde{\lambda}=1-\lambda\) of the mass at each of the two sites is chipped off, i.e., \(\tilde{\lambda}m_{i}\) is removed from site \(i\) and \(\tilde{\lambda}m_{i+1}\) from site \(i+1\). These chipped-off masses are then combined and, subsequently, a random fraction \(\xi_{i}\) of the combined mass is transferred to site \(i+1\), while the fraction \(\tilde{\xi}_{i}=1-\xi_{i}\) is transferred to site \(i\).
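For concreteness, the three sets of update rules above can be encoded in a minimal random-sequential Monte Carlo sketch. The Python function below is purely illustrative: the function name, the uniform choice of the updated site (or bond), and the convention that one sweep of \(L\) elementary moves corresponds to unit time are our own bookkeeping and not part of the model definitions.

```python
import numpy as np

def mcm_move(m, lam_tilde, model, rng):
    """One random-sequential update of a mass-chipping model on a ring.

    m         : 1d numpy array of site masses (modified in place)
    lam_tilde : chipping constant, i.e. the fraction 1 - lambda that is chipped off
    model     : "I", "II" or "III"
    rng       : numpy random Generator
    """
    L = m.size
    i = rng.integers(L)              # site (MCM I, II) or bond (i, i+1) (MCM III)
    xi = rng.random()                # uniformly distributed random fraction
    if model == "I":
        chipped = lam_tilde * m[i]
        m[i] -= chipped
        m[(i + 1) % L] += xi * chipped          # fraction xi goes to the right
        m[(i - 1) % L] += (1.0 - xi) * chipped  # fraction 1 - xi goes to the left
    elif model == "II":
        chipped = lam_tilde * m[i]
        m[i] -= xi * chipped                    # only the fraction xi actually leaves
        j = (i + 1) % L if rng.random() < 0.5 else (i - 1) % L
        m[j] += xi * chipped                    # to the left or right with prob. 1/2
    else:                                       # model III: update bond (i, i+1)
        j = (i + 1) % L
        pool = lam_tilde * (m[i] + m[j])        # combined chipped-off mass
        m[i] -= lam_tilde * m[i]
        m[j] -= lam_tilde * m[j]
        m[i] += (1.0 - xi) * pool               # site i receives the fraction 1 - xi
        m[j] += xi * pool                       # site i + 1 receives the fraction xi

# example: relax MCM I at global density rho = 1 (total mass is conserved)
rng = np.random.default_rng(0)
m = np.ones(64)
for _ in range(200 * m.size):
    mcm_move(m, lam_tilde=0.75, model="I", rng=rng)
print(m.sum())  # remains equal to 64
```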
## III Theory: MCM I
In this section, we study in detail the first variant of mass-chipping models, i.e., MCM I, on a periodic one-dimensional lattice of size \(L\); for the other models, MCMs II and III, we later state the results, which can be derived following the techniques developed in this section.

Figure 1: _Schematic representation of the mass-chipping models MCM I, MCM II, and MCM III:_ (a) In MCM I, a site \(i\) on a periodic lattice (shaped as a dark oval), having mass \(m_{i}\) (dark violet), retains a fraction \(\lambda\) of its mass (dark violet), while a random fraction \(\xi_{i}\) of the chipped-off mass (green) migrates to the right neighbor, and the remaining fraction of the chipped mass (red) moves to the left neighbor. (b) In MCM II, a random fraction \(\xi_{i}\) of the chipped-off mass (red) moves either to the left or to the right nearest neighbor with equal probability, while the rest of the chipped mass (green) is deposited back to the same site \(i\). (c) In MCM III, a fraction \(1-\lambda\) of the mass (blue) is chipped off from sites \(i\) and \(i+1\). This extracted mass is then recombined and subsequently redistributed in such a way that site \(i\) receives a random fraction \(1-\xi_{i}\) (red), while site \(i+1\) acquires the fraction \(\xi_{i}\) (green).
A site \(i\), with \(i=0\), \(1\), \(\ldots\), \(L-1\), possesses a mass \(m_{i}\), which can take continuous values in the interval \(0\leq m_{i}\leq M\); the total mass \(M=\sum_{i=0}^{L-1}m_{i}\) remains conserved throughout and is the only conserved quantity. The global density is defined as \(\rho=M/L\), while we denote by \(\rho_{i}(t)=\langle m_{i}(t)\rangle\) the local density at site \(i\) and at time \(t\). Notably, unlike in MCMs II and III, a site in MCM I is stochastically updated in a way that simultaneously impacts its immediate neighbors, as stated in the previous section. This results in nonzero spatial correlations, making the calculations nontrivial.
We can now explicitly write down the stochastic update rules for mass \(m_{i}(t)\) at site \(i\) and at time \(t\) during an infinitesimal time interval \((t,t+dt)\),
\[m_{i}(t+dt)=\begin{cases}\textbf{event}&\textbf{prob.}\\ m_{i}(t)-\tilde{\lambda}m_{i}(t)&dt\\ m_{i}(t)+\tilde{\lambda}\xi_{i-1}m_{i-1}(t)&dt\\ m_{i}(t)+\tilde{\lambda}\tilde{\xi}_{i+1}m_{i+1}(t)&dt\\ m_{i}(t)&(1-3dt),\end{cases} \tag{5}\]
where \(\xi_{j}\in(0,1)\) is a random variable, which, for simplicity, is taken to be uniformly distributed; generalization of the results to other distributions is straightforward. Using the above dynamical update rules, the time-evolution of local mass can be written as
\[\frac{d}{dt}\left\langle m_{i}(t)\right\rangle=D(\lambda)\left( \left\langle m_{i-1}(t)\right\rangle-2\left\langle m_{i}(t)\right\rangle+ \left\langle m_{i+1}(t)\right\rangle\right), \tag{6}\]
where \(D(\lambda)=\tilde{\lambda}/2\) is the bulk-diffusion coefficient for MCM I. Note that \(D\) is independent of density, leading to some important simplifications in the hierarchy of mass and current correlation functions, which, as we show later, actually closes.
### Definitions and notations
At this point, we introduce the time-integrated bond current \(\mathcal{Q}_{i}(t)\), which is the cumulative current across bond \((i,i+1)\) in the time interval \((0,t)\). The time-integrated current across the \(i^{th}\) bond during an infinitesimal time interval \([t,t+dt]\) is simply \(\mathcal{J}_{i}(t)dt\), where the instantaneous bond current
\[\mathcal{J}_{i}(t)\equiv\frac{d\mathcal{Q}_{i}(t)}{dt}, \tag{7}\]
and therefore we have the time-integrated current across bond \((i,i+1)\)
\[\mathcal{Q}_{i}(t)=\int\limits_{0}^{t}dt^{\prime}\mathcal{J}_{i}(t^{\prime}). \tag{8}\]
We can then express Eq.(6), the time evolution of the local density \(\rho_{i}(t)=\langle m_{i}(t)\rangle\), in terms of a continuity equation involving the average local bond currents \(\langle\mathcal{J}_{i}(t)\rangle\) simply as

\[\frac{d}{dt}\rho_{i}(t)=\langle\mathcal{J}_{i-1}(t)-\mathcal{J}_{i}(t)\rangle. \tag{9}\]
It is useful to decompose the instantaneous bond current as the sum of a diffusive component \(\mathcal{J}_{i}^{(d)}(t)\) and a fluctuating component \(\mathcal{J}_{i}^{(fl)}\) as
\[\mathcal{J}_{i}(t)=\mathcal{J}_{i}^{(d)}(t)+\mathcal{J}_{i}^{(fl)}(t), \tag{10}\]
where we can identify the diffusive current \(\mathcal{J}_{i}^{(d)}(t)\) as
\[\mathcal{J}_{i}^{(d)}(t)\equiv D(\lambda)\big{[}m_{i}(t)-m_{i+1}(t)\big{]}. \tag{11}\]
The diffusion constant \(D(\lambda)\) depends only on the chipping constant \(\tilde{\lambda}\), not density \(\rho\). It should be noted that the average fluctuating current \(\langle\mathcal{J}_{i}^{(fl)}\rangle=0\), implying \(\langle\mathcal{J}_{i}(t)\rangle=\langle\mathcal{J}_{i}^{(d)}(t)\rangle\). Indeed, one could interpret \(\mathcal{J}_{i}^{(fl)}(t)\) as a fast varying "noise" current around the slowly varying diffusive ("hydrodynamic") current component \(\mathcal{J}_{i}^{(d)}(t)\). This decomposition of current is important because, as we show later explicitly, the fluctuation statistics of \(\mathcal{J}_{i}^{(fl)}(t)\) is in fact strictly delta-correlated in time and short-ranged in space, whereas the diffusive current \(\mathcal{J}_{i}^{(d)}(t)\) is long-ranged in time (in fact, a power law) and short-ranged in space.
For convenience, we introduce the following notation for the correlation function \(C_{r}^{AB}(t,t^{\prime})\) involving any two local observables \(A_{i}(t)\) and \(B_{j}(t^{\prime})\), with \(t\geq t^{\prime}\),
\[\begin{split} C_{r=|j-i|}^{AB}(t,t^{\prime})&= \langle A_{i}(t)B_{j}(t^{\prime})\rangle-\langle A_{i}(t)\rangle\langle B_{j}(t ^{\prime})\rangle\\ &\equiv\langle A_{i}(t)B_{j}(t^{\prime})\rangle_{c},\end{split} \tag{12}\]
where \(r=|j-i|\) is the relative distance. We denote the spatial Fourier transform of the correlation function \(C_{r}^{AB}(t,t^{\prime})\) as given below
\[\tilde{C}_{q}^{AB}(t,t^{\prime})=\sum_{r=0}^{L-1}C_{r}^{AB}(t,t^{\prime})e^{ iqr}, \tag{13}\]
where \(q=2\pi s/L\) and \(s=0,1,\ldots,L-1\); the inverse Fourier transform is given by
\[C_{r}^{AB}(t,t^{\prime})=\frac{1}{L}\sum_{q}\tilde{C}_{q}^{AB}(t,t^{\prime})e ^{-iqr}. \tag{14}\]
### Calculation scheme
In this section we describe our calculation scheme in detail for MCM I. The stochastic dynamical rules for the time-integrated current \(\mathcal{Q}_{i}(t)\) in an infinitesimal time interval \((t,t+dt)\) can be written as
\[\mathcal{Q}_{i}(t+dt)=\begin{cases}\textbf{event}&\textbf{prob.}\\ \mathcal{Q}_{i}(t)+\tilde{\lambda}\xi_{i}m_{i}(t)&dt\\ \mathcal{Q}_{i}(t)-\tilde{\lambda}\xi_{i+1}m_{i+1}(t)&dt\\ \mathcal{Q}_{i}(t)&(1-2dt).\end{cases} \tag{15}\]
The above update rules allow us to derive the time-evolution equation for the first moment of the time-integrated bond-current \(\mathcal{Q}_{i}(t)\) as follows:
\[\frac{d\langle\mathcal{Q}_{i}(t)\rangle}{dt}=D\langle m_{i}(t)-m_{i+1}(t) \rangle=\langle\mathcal{J}_{i}^{(d)}(t)\rangle. \tag{16}\]
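In a direct simulation, Eqs. (5) and (15) prescribe how \(\mathcal{Q}_{i}(t)\) is accumulated alongside the masses: every chipped-off packet crossing bond \((i,i+1)\) to the right adds to \(\mathcal{Q}_{i}\), while every packet crossing it to the left subtracts from it. A minimal, self-contained Python sketch (parameter values and names are illustrative only) reads:

```python
import numpy as np

def mcm1_with_current(L=256, lam_tilde=0.75, rho=1.0, sweeps=50, seed=1):
    """Simulate MCM I on a ring while accumulating the time-integrated bond
    currents Q[i], following the update rules of Eqs. (5) and (15).
    One sweep of L random-sequential moves corresponds to unit time."""
    rng = np.random.default_rng(seed)
    m = np.full(L, rho)
    Q = np.zeros(L)                  # Q[i] = net rightward flow across bond (i, i+1)
    for _ in range(sweeps * L):
        i = rng.integers(L)
        xi = rng.random()
        chipped = lam_tilde * m[i]
        right, left = xi * chipped, (1.0 - xi) * chipped
        m[i] -= chipped
        m[(i + 1) % L] += right
        m[(i - 1) % L] += left
        Q[i] += right                # packet crossing bond (i, i+1) to the right
        Q[(i - 1) % L] -= left       # packet crossing bond (i-1, i) to the left
    return m, Q

m, Q = mcm1_with_current()
print(Q.mean(), Q.var())             # mean current vanishes; the variance grows with time
```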
Using the update rule as in Eq.(15), the infinitesimal time-evolution equation for the following product of the time-integrated currents at two different times \(t\) and \(t^{\prime}\) (\(t>t^{\prime}\)) can be written as
\[\mathcal{Q}_{i}(t+dt)\mathcal{Q}_{i+r}(t^{\prime})=\]
\[\begin{cases}\textbf{event}&\textbf{prob.}\\ \left[\mathcal{Q}_{i}(t)+\tilde{\lambda}\xi_{i}m_{i}(t)\right]\mathcal{Q}_{i+r}(t^{\prime})&dt\\ \left[\mathcal{Q}_{i}(t)-\tilde{\lambda}\tilde{\xi}_{i+1}m_{i+1}(t)\right]\mathcal{Q}_{i+r}(t^{\prime})&dt\\ \mathcal{Q}_{i}(t)\mathcal{Q}_{i+r}(t^{\prime})&(1-2dt).\end{cases} \tag{17}\]
Now, expressing Eq.(17) in terms of masses and after some algebraic manipulation, we immediately get the following equality,
\[\frac{d}{dt}C_{r}^{\mathcal{Q}\mathcal{Q}}(t,t^{\prime})=C_{r}^{\mathcal{J}^ {(d)}\mathcal{Q}}(t,t^{\prime}). \tag{18}\]
Interestingly, while calculating time derivative of current (or related observables), Eq. (18) can be simply obtained by using a convenient thumb rule where one takes the time derivative inside angular brackets as
\[\begin{split}\frac{d}{dt}\left\langle\mathcal{Q}_{i}(t)\mathcal{Q }_{i+r}(t^{\prime})\right\rangle_{c}&\equiv\left\langle\frac{d \mathcal{Q}_{i}(t)}{dt}\mathcal{Q}_{i+r}(t^{\prime})\right\rangle\\ &-\left\langle\frac{d\mathcal{Q}_{i}(t)}{dt}\right\rangle\left\langle \mathcal{Q}_{i+r}(t^{\prime})\right\rangle.\end{split} \tag{19}\]
Then by replacing the instantaneous current through the equivalence relation \(d\mathcal{Q}_{i}(t)/dt\equiv D(m_{i}-m_{i+1})+\mathcal{J}_{i}^{(fl)}\) and subsequently dropping the noise correlation as \(\langle\mathcal{J}_{i}^{(fl)}(t)\mathcal{Q}_{i+r}(t^{\prime})\rangle=0\) for \(t>t^{\prime}\), we get Eq. (18).
Now, using Eq. (11) into rhs of Eq.(18), we can immediately express the time evolution of unequal-space-time current-current correlation function in terms of the unequal-space-time mass-current correlation function,
\[\frac{d}{dt}C_{r}^{\mathcal{Q}\mathcal{Q}}(t,t^{\prime})=D\big{(}C_{r}^{m \mathcal{Q}}(t,t^{\prime})-C_{r-1}^{m\mathcal{Q}}(t,t^{\prime})\big{)}. \tag{20}\]
From the above equation, we see that we now require the unequal-time mass-current correlation \(C_{r}^{m\mathcal{Q}}(t,t^{\prime})\) in order to determine the unequal-time current-current correlation \(C_{r}^{\mathcal{Q}\mathcal{Q}}(t,t^{\prime})\). The time evolution of the correlation function \(C_{r}^{m\mathcal{Q}}(t,t^{\prime})\) can be obtained by using infinitesimal-time update rules for the following mass-current product at a later time \(t+dt\) as
\[m_{i}(t+dt)Q_{i+r}(t^{\prime})=\]
\[\begin{cases}\textbf{event}&\textbf{prob.}\\ [m_{i}(t)-\tilde{\lambda}m_{i}(t)]Q_{i+r}(t^{\prime})&dt\\ [m_{i}(t)+\tilde{\lambda}\xi_{i-1}m_{i-1}(t)]Q_{i+r}(t^{\prime})&dt\\ [m_{i}(t)+\tilde{\lambda}\xi_{i+1}m_{i+1}(t)]Q_{i+r}(t^{\prime})&dt\\ m_{i}(t)Q_{i+r}(t^{\prime})&(1-3dt).\end{cases} \tag{21}\]
Using the above update rule, the time evolution of the unequal-time mass-current correlation can be expressed in the following form,
\[\begin{split}&\frac{d}{dt}C_{r}^{m\mathcal{Q}}(t,t^{\prime})\\ &=D\langle(m_{i+1}(t)-2m_{i}(t)+m_{i-1}(t))\mathcal{Q}_{i+r}(t^{ \prime})\rangle_{c}\\ &=D\sum_{k}\Delta_{r,k}C_{k}^{m\mathcal{Q}}(t,t^{\prime}),\end{split} \tag{22}\]
where \(\Delta_{r,k}=\delta_{r-1,k}-2\delta_{r,k}+\delta_{r+1,k}\) is the discrete Laplacian operator. Now, equations (20) and (22) can be expressed in terms of the Fourier modes as defined in Eq.(13),
\[\frac{d}{dt}\tilde{C}_{q}^{\mathcal{Q}\mathcal{Q}}(t,t^{\prime})=D\tilde{C}_{q }^{m\mathcal{Q}}(t,t^{\prime})\left(1-e^{iq}\right), \tag{23}\]
and
\[\frac{d}{dt}\tilde{C}_{q}^{m\mathcal{Q}}(t,t^{\prime})=-D\omega_{q}\tilde{C}_{ q}^{m\mathcal{Q}}(t,t^{\prime}), \tag{24}\]
where the eigenvalue of discrete Laplacian is written as follows
\[\omega_{q}=2\left(1-\cos q\right). \tag{25}\]
Also, \(\tilde{C}_{q}^{\mathcal{Q}\mathcal{Q}}(t,t^{\prime})\) and \(\tilde{C}_{q}^{m\mathcal{Q}}(t,t^{\prime})\) are the Fourier transforms of the quantities \(C_{r}^{\mathcal{Q}\mathcal{Q}}(t,t^{\prime})\) and \(C_{r}^{m\mathcal{Q}}(t,t^{\prime})\), respectively. Now, Eq.(23) and Eq.(24) can be integrated to have
\[\begin{split}\tilde{C}_{q}^{\mathcal{Q}\mathcal{Q}}(t,t^{\prime})& =D\int\limits_{t^{\prime}}^{t}dt^{\prime\prime}\tilde{C}_{q}^{m \mathcal{Q}}(t^{\prime\prime},t^{\prime})\left(1-e^{iq}\right)\\ &+\tilde{C}_{q}^{\mathcal{Q}\mathcal{Q}}(t^{\prime},t^{\prime}), \end{split} \tag{26}\]
and
\[\tilde{C}_{q}^{m\mathcal{Q}}(t,t^{\prime})=e^{-D\omega_{q}(t-t^{\prime})}\tilde{ C}_{q}^{m\mathcal{Q}}(t^{\prime},t^{\prime}) \tag{27}\]
respectively. The equations Eq.(26) and Eq.(27) suggest that the equal-time dynamic correlations of current-current, \(C_{r}^{\mathcal{Q}\mathcal{Q}}(t^{\prime},t^{\prime})\), and mass-current, \(C_{r}^{m\mathcal{Q}}(t^{\prime},t^{\prime})\), are required to obtain the respective dynamic correlations at unequal times, from their corresponding update rules.
The time-evolution equation for the equal-time current-current spatial correlation \(C_{r}^{\mathcal{QQ}}(t,t)\) can be written from the infinitesimal update rules for the product of the following random variables,
\[\mathcal{Q}_{i}(t+dt)\mathcal{Q}_{i+r}(t+dt)=\]
\[\begin{cases}\textbf{event}&\textbf{prob.}\\ \mathcal{Q}_{i}\mathcal{Q}_{i+r}+\tilde{\lambda}\left(\xi_{i}m_{i}-\tilde{\xi}_{i+1}m_{i+1}\right)\mathcal{Q}_{i+r}\\ +\tilde{\lambda}\left(\xi_{i+r}m_{i+r}-\tilde{\xi}_{i+r+1}m_{i+r+1}\right)\mathcal{Q}_{i}&dt\\ \mathcal{Q}_{i}\mathcal{Q}_{i+r}+\tilde{\lambda}^{2}(\xi_{i}^{2}m_{i}^{2}+\tilde{\xi}_{i+1}^{2}m_{i+1}^{2})&\delta_{r,0}dt\\ \mathcal{Q}_{i}\mathcal{Q}_{i+r}-\tilde{\lambda}^{2}\xi_{i}\tilde{\xi}_{i}m_{i}^{2}&\delta_{r,1}dt\\ \mathcal{Q}_{i}\mathcal{Q}_{i+r}-\tilde{\lambda}^{2}\xi_{i+1}\tilde{\xi}_{i+1}m_{i+1}^{2}&\delta_{r,-1}dt\\ \mathcal{Q}_{i}\mathcal{Q}_{i+r}&1-\sum dt,\end{cases} \tag{28}\]
where \(\sum=1+\delta_{r,0}+\delta_{r,1}+\delta_{r,-1}\) represents the total exit rate. Hence, from the above update rules, we can deduce the following time-evolution equation,
\[\frac{d}{dt}\left\langle\mathcal{Q}_{i}(t)\mathcal{Q}_{i+r}(t)\right\rangle_{c} =D\left\langle(m_{i}-m_{i+1})\mathcal{Q}_{i+r}\right\rangle_{c} \tag{29}\] \[+D\left\langle\mathcal{Q}_{i}(m_{i+r}-m_{i+r+1})\right\rangle_{c}+\Gamma_{r},\]
where \(\Gamma_{r}\) can be written in terms of steady-state single-site mass fluctuation (function of \(\rho\)),
\[\Gamma_{r}(\rho)=\frac{\tilde{\lambda}^{2}}{6}\langle m_{i}^{2}\rangle(4 \delta_{r,0}-\delta_{r,1}-\delta_{r,-1}). \tag{30}\]
For convenience, we introduce the following quantity,
\[\chi(\rho)\equiv\frac{\tilde{\lambda}^{2}}{6}\langle m_{i}^{2}\rangle \tag{31}\]
which, as we show later, is nothing but the density-dependent transport coefficient, called the mobility, \(\chi(\rho)=\lim\limits_{T\rightarrow\infty,L\rightarrow\infty}L\left\langle\mathcal{Q}_{i}^{2}(T)\right\rangle/2T\) - the scaled bond-current fluctuation, with the infinite time limit \(T\rightarrow\infty\) taken first [23]. As we shall demonstrate later, the mobility \(\chi(\rho)\) can be exactly equated to another related transport coefficient, which we call the "operational" mobility \(\chi_{op}(\rho)\) and which is the ratio of the current (response) to a small externally applied biasing force (perturbation) [39]. The expression for the second moment of mass \(\langle m_{i}^{2}\rangle\) in the steady state can be written in terms of the chipping constant \(\tilde{\lambda}\) and the density \(\rho\) as given below [13],
\[\langle m_{i}^{2}\rangle=\frac{3\rho^{2}}{3-2\tilde{\lambda}}. \tag{32}\]
We can now substitute Eq.(31) into Eq.(30) to express \(\Gamma_{r}\) in terms of the system's mobility \(\chi\) as
\[\Gamma_{r}(\rho)=4\chi(\rho)\delta_{r,0}-\chi(\delta_{r,1}+\delta_{r,-1}). \tag{33}\]
It is interesting to note that \(\Gamma_{r}\) has a direct connection to the steady-state mass-mass correlation \(C_{r}^{mm}\), through the relation
\[\Gamma_{r}=2DC_{r}^{mm}, \tag{34}\]
which should be generic for diffusive systems [44; 45]. Later we show that the quantity \(\Gamma_{r}\) is also related to the spatial correlation function for the fluctuating ("noise") current, thus establishing a direct (presumably generic) connection between (noise) current fluctuation and density fluctuation in a diffusive system and, thus, characterizing the role of steady-state spatial structure in determining the large-scale dynamic properties. This is precisely how density and current fluctuations, as well as relaxation properties (through bulk-diffusivity), are intricately coupled to one another, resulting in an equilibrium-like Einstein relation, as demonstrated subsequently.
Now, by using the following formula
\[D\left\langle(m_{i}-m_{i+1})\mathcal{Q}_{i+r}\right\rangle_{c}=\frac{D}{L}\sum_{q}(1-e^{iq})\tilde{C}_{q}^{m\mathcal{Q}}(t,t)e^{-iqr} \tag{35}\]
in Eq.(29) and performing some algebraic manipulations, we obtain the following expression,
\[\frac{d}{dt}C_{r}^{\mathcal{QQ}}(t,t)=\frac{D}{L}\sum_{q}(1-e^{iq})\tilde{C}_ {q}^{m\mathcal{Q}}(t,t)(2-\omega_{qr})+\Gamma_{r}, \tag{36}\]
where \(\omega_{qr}=2\left(1-\cos(qr)\right)\), as indicated in Eq.(25). After integrating both sides of the above equation, we obtain the equal-time current-current dynamic correlation as follows:
\[\begin{split}& C_{r}^{\mathcal{QQ}}(t,t)=\int\limits_{0}^{t}dt^{ \prime}\Gamma_{r}(t^{\prime})+\\ &\frac{D}{L}\int\limits_{0}^{t}dt^{\prime}\sum_{q}\tilde{C}_{q}^{m \mathcal{Q}}(t^{\prime},t^{\prime})(1-e^{iq})(2-\omega_{qr}).\end{split} \tag{37}\]
Now, to obtain the desired form of the above equal-time correlation for current, we calculate the equal-time correlation function \(C_{r}^{m\mathcal{Q}}(t,t)\) by using the following infinitesimal-time update rule,
\[m_{i}(t+dt)\mathcal{Q}_{i+r}(t+dt)=\]
where \(\sum=1+\delta_{r,0}+\delta_{r,1}+\delta_{r,-1}+\delta_{r,-2}\) is the total exit rate. Using the above dynamical update rules, we obtain the following equation:
\[\frac{d}{dt}C_{r}^{m\mathcal{Q}}(t,t)=D\left(C_{r-1}^{m\mathcal{Q}}-2C_{r}^{m \mathcal{Q}}+C_{r+1}^{m\mathcal{Q}}\right)+A_{r}, \tag{39}\]
where \(A_{r}\) is given by
\[\begin{split} A_{r}&=\frac{\tilde{\lambda}}{2} \langle m_{i}m_{i+r}-m_{i}m_{i+r+1}\rangle_{c}\\ &-\frac{5\tilde{\lambda}^{2}}{6}\langle m_{i}^{2}\rangle\delta_{r,0}+\frac{\tilde{\lambda}^{2}}{6}\langle m_{i}^{2}\rangle\delta_{r,1}-\frac{ \tilde{\lambda}^{2}}{6}\langle m_{i}^{2}\rangle\delta_{r,-2}.\end{split} \tag{40}\]
From the above equation, it is evident that \(A_{r}\) can be represented in terms of the equal-time spatial correlations of masses. These correlations can be obtained by writing the infinitesimal-time update rules for the product \(m_{i}(t+dt)m_{i+r}(t+dt)\), in complete analogy with Eq. (5), and solving the resulting steady-state equations for \(C_{r}^{mm}\) through the generating function \(G(z)=\sum_{r\geq 0}C_{r}^{mm}z^{r}\), which takes the form of a ratio of two polynomials in \(z\) whose denominator vanishes at \(z=1\).
Considering that \(G(z=1)\) represents the sum of density correlations and thus should be finite, \(\lim_{z\to 1}G(z)\) must remain finite; therefore the numerator of \(G(z)\), and its derivative, must also vanish as \(z\to 1\). From these root-cancellation conditions, we get the following two equations,
\[C_{0}^{mm}(6-5\tilde{\lambda})-6C_{1}^{mm}-5\tilde{\lambda}\rho^{2}=0, \tag{46}\]
and
\[\frac{\tilde{\lambda}}{2}C_{0}^{mm}+3C_{1}^{mm}+\frac{\tilde{\lambda}}{2}\rho^{2}=0. \tag{47}\]
By solving the above equations, we finally obtain the desired solution for the generating function,
\[G(z)=\frac{2\tilde{\lambda}\rho^{2}}{3-2\tilde{\lambda}}-\frac{\tilde{\lambda} \rho^{2}}{2(3-2\tilde{\lambda})}z, \tag{48}\]
which immediately leads to the explicit analytical expression for the steady-state spatial correlation function \(C_{r}^{mm}\) for mass,
\[C_{r}^{mm}=\langle m_{i}m_{i+r}\rangle-\rho^{2}=\begin{cases}\frac{2\tilde{ \lambda}\rho^{2}}{3-2\tilde{\lambda}}&\text{for }r=0\\ -\frac{\lambda\rho^{2}}{2(3-2\tilde{\lambda})}&\text{for }|r|=1\\ 0&\text{otherwise}.\end{cases} \tag{49}\]
Note that the above correlation function was previously calculated through a different method in Ref. [13]. Now the steady-state spatial correlation function for mass can be readily expressed in terms of the particle mobility \(\chi\),
\[C_{r}^{mm}=\frac{\chi}{\tilde{\lambda}}(4\delta_{r,0}-\delta_{r,1}-\delta_{r, -1}). \tag{50}\]
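The short-ranged structure of Eqs. (49)-(50) is straightforward to verify in simulations; the following minimal Python sketch (with arbitrarily chosen \(L\), \(\tilde{\lambda}\), and run lengths) estimates \(C_{0}^{mm}\) and \(C_{1}^{mm}\) from a long MCM I run and compares them with Eq. (49).

```python
import numpy as np

def mass_correlations(L=256, lam_tilde=0.5, rho=1.0, sweeps=4000, burn=500, seed=2):
    """Estimate the steady-state mass correlations C_0^mm and C_1^mm of MCM I
    and compare them with the exact expressions of Eq. (49)."""
    rng = np.random.default_rng(seed)
    m = np.full(L, rho)
    c0 = c1 = 0.0
    nsamp = 0
    for sweep in range(sweeps):
        for _ in range(L):           # one sweep = L random-sequential moves
            i = rng.integers(L)
            xi = rng.random()
            chipped = lam_tilde * m[i]
            m[i] -= chipped
            m[(i + 1) % L] += xi * chipped
            m[(i - 1) % L] += (1.0 - xi) * chipped
        if sweep >= burn:
            c0 += np.mean(m * m)
            c1 += np.mean(m * np.roll(m, -1))
            nsamp += 1
    c0 = c0 / nsamp - rho**2
    c1 = c1 / nsamp - rho**2
    c0_exact = 2 * lam_tilde * rho**2 / (3 - 2 * lam_tilde)
    c1_exact = -lam_tilde * rho**2 / (2 * (3 - 2 * lam_tilde))
    return (c0, c0_exact), (c1, c1_exact)

print(mass_correlations())
```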
Now, by summing both sides of Eq.(50) over \(r\), we obtain a fluctuation relation between mass fluctuation and the mobility (equivalently, the current fluctuation),
\[\sum_{r=-\infty}^{\infty}C_{r}^{mm}=\frac{2\chi}{\tilde{\lambda}}. \tag{51}\]
Now, in the steady state, we write \(A_{r}\) by simply using Eq.(50) in Eq.(40),
\[A_{r}=-\frac{5}{2}\chi(\delta_{r,0}-\delta_{r,-1})+\frac{1}{2}\chi(\delta_{r, 1}-\delta_{r,-2}). \tag{52}\]
We now express Eq. (39) in the Fourier space as
\[\frac{d}{dt}\tilde{C}_{q}^{m\mathcal{Q}}(t,t)=-D\omega_{q}\tilde{C}_{q}^{m \mathcal{Q}}(t,t)+\tilde{f}_{q}(t), \tag{53}\]
where the Fourier transform of the source term \(A_{r}\) is expressed as \(\tilde{f}_{q}(t^{\prime})\) in the following equation:
\[\tilde{f}_{q}=-\chi(1-e^{-iq})\left(1+\frac{1}{2}\omega_{q}\right). \tag{54}\]
Equation (53) can now be integrated directly to obtain the following equation,
\[\tilde{C}_{q}^{m\mathcal{Q}}(t^{\prime},t^{\prime})=\int\limits_{0}^{t^{ \prime}}dt^{\prime\prime}e^{-D\omega_{q}(t^{\prime}-t^{\prime\prime})}\tilde{ f}_{q}(t^{\prime\prime}). \tag{55}\]
The above equation describes the equal-time mass-current dynamic correlation, which appears in Eqs.(37) and (27) and is therefore necessary for calculating the equal-time current-current correlation and the unequal-time mass-current correlation, respectively. Substituting Eq.(55) into Eq.(27) leads to the following expression,
\[\tilde{C}_{q}^{m\mathcal{Q}}(t,t^{\prime})=\int\limits_{0}^{t^{\prime}}dt^{ \prime\prime}e^{-D\omega_{q}(t-t^{\prime\prime})}\tilde{f}_{q}(t^{\prime \prime}). \tag{56}\]
### Time-integrated current fluctuation
In this section, we apply the theoretical framework established in the previous section to finally compute the time-integrated bond-current fluctuation for MCM I. To achieve this, we insert Eq. (55) into Eq. (37), yielding an explicit expression for the time-integrated bond-current fluctuation at equal times, as follows:
\[C_{r}^{\mathcal{Q}\mathcal{Q}}(t,t)=\int\limits_{0}^{t}dt^{ \prime}\Gamma_{r}(t^{\prime})+ \tag{57}\] \[\frac{D}{L}\sum_{q}\int\limits_{0}^{t}dt^{\prime}\int\limits_{0}^ {t^{\prime}}dt^{\prime\prime}e^{-D\omega_{q}(t^{\prime}-t^{\prime\prime})} \tilde{f}_{q}(t^{\prime\prime})(1-e^{iq})[2-\omega_{qr}].\]
Furthermore, by plugging in the equal-time current-current dynamic correlation \(C_{r}^{\mathcal{Q}\mathcal{Q}}(t,t)\) from Eq.(57) and the unequal-time mass-current \(\tilde{C}_{q}^{m\mathcal{Q}}(t,t^{\prime})\) from Eq.(56) into Eq.(26), we can obtain the final expression for the current-current dynamic correlation at unequal time:
\[C_{r}^{\mathcal{QQ}}(t,t^{\prime})= t^{\prime}\Gamma_{r}-\frac{\chi D}{L}\sum_{q}\int_{0}^{t^{\prime}} \mathrm{d}t^{\prime\prime}\int_{0}^{t^{\prime\prime}}\mathrm{d}t^{\prime\prime \prime}\,e^{-D\omega_{q}(t^{\prime\prime}-t^{\prime\prime\prime})}\omega_{q}(1+ \frac{1}{2}\omega_{q})(2-\omega_{qr}) \tag{58}\] \[-\frac{\chi D}{L}\sum_{q}\int_{t^{\prime}}^{t}\mathrm{d}t^{\prime \prime}\int_{0}^{t^{\prime}}\mathrm{d}t^{\prime\prime\prime}\,e^{-D\omega_{q} (t^{\prime\prime}-t^{\prime\prime\prime})}\omega_{q}(1+\frac{1}{2}\omega_{q})e ^{-iqr}.\]
We obtain the time-integrated bond-current fluctuation \(\langle\mathcal{Q}^{2}(T)\rangle\equiv\langle\mathcal{Q}_{i}^{2}(T)\rangle= C_{0}^{\mathcal{QQ}}(T,T)\) from Eq.(58), by putting \(t^{\prime}=t=T\) and \(r=0\),
\[\langle\mathcal{Q}^{2}(T)\rangle=\frac{2\chi T}{L}+\frac{2\chi}{L}\sum_{n=1}^ {L-1}\left(1+\frac{\omega_{n}}{2}\right)\frac{(1-e^{-D\omega_{n}T})}{D\omega_ {n}}, \tag{59}\]
where \(\omega_{n}=2(1-\cos(2\pi n/L))\), with \(n=0,1,\cdots,L-1\). If we take the \(T\rightarrow\infty\) limit first (i.e., \(T\gg L^{2}\)), we immediately obtain
\[\langle\mathcal{Q}^{2}(T)\rangle\simeq\frac{2\chi T}{L}+\frac{\chi}{D}\left( \frac{L}{6}-\frac{1}{L}\right)=\frac{2\chi T}{L}\left[1+\mathcal{O}\left( \frac{L^{2}}{DT}\right)\right]. \tag{60}\]
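Since Eq. (59) is a finite sum over Fourier modes, it can be evaluated numerically at negligible cost; a short Python sketch (parameter values are illustrative) is given below, which also displays the late-time asymptote \(2\chi T/L\) of Eq. (60).

```python
import numpy as np

def bond_current_fluctuation(T, L, lam_tilde, chi):
    """Exact mode sum of Eq. (59) for <Q^2(T)> in MCM I, with D = lam_tilde/2."""
    D = lam_tilde / 2.0
    w = 2.0 * (1.0 - np.cos(2.0 * np.pi * np.arange(1, L) / L))
    return 2 * chi * T / L + (2 * chi / L) * np.sum(
        (1.0 + w / 2.0) * (1.0 - np.exp(-D * w * T)) / (D * w))

lam_tilde, rho, L = 0.75, 1.0, 1000
chi = lam_tilde**2 / 6.0 * (3 * rho**2 / (3 - 2 * lam_tilde))   # Eqs. (31)-(32)
for T in [1e-1, 1e2, 1e6, 1e8]:
    print(T, bond_current_fluctuation(T, L, lam_tilde, chi), 2 * chi * T / L)
```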
In Eq. (59), we have identified two distinct time regimes that correspond to the following two cases.
Case 1: \(DT\ll 1\)
In the limit \(DT\ll 1\), the system does not have sufficient time to build up spatial correlations between neighboring sites, and consequently the bond-current fluctuation carries no information about the spatial structure. In equation (59), we expand the exponential up to linear order for \(DT\ll 1\) and obtain
\[\langle\mathcal{Q}^{2}(T)\rangle=\frac{2\chi T}{L}+\frac{2\chi}{L}\sum_{n=1}^ {L-1}\left(1+\frac{\omega_{n}}{2}\right)T. \tag{61}\]
To further simplify the above equation, we utilize the identity \(\sum_{n=1}^{L-1}\omega_{n}=2L\), resulting in
\[\langle\mathcal{Q}^{2}(T)\rangle=\Gamma_{0}T=4\chi T. \tag{62}\]
where \(\Gamma_{0}\) is the strength of fluctuating current as mentioned in Eq.(33) and shown exactly later.
In Fig. 2, we present the simulation data for the time-integrated bond-current fluctuation, \(\langle\mathcal{Q}^{2}(T)\rangle\), as a function of time \(T\). The plot reveals three distinct growth behaviors of \(\langle\mathcal{Q}^{2}(T)\rangle\) over time: \(T\), \(T^{1/2}\), and again \(T\). We have examined the effects of varying the chipping parameter for two different system sizes, namely \(L=500\) and \(L=1000\), with values of \(\lambda\) set at \(0.25\) and \(0.90\). Additionally, we have included two plots that overlap in the \(DT\ll 1\) region for the two system sizes with the same \(\lambda\) values.
Case 2: \(DT\gg 1\)
In the time limit \(DT\gg 1\), spatial correlations build up in the system. Interestingly, Eq.(60) suggests that, in the large-time limit, \(\langle\mathcal{Q}_{i}^{2}\rangle\) asymptotically approaches \(2\chi T/L\), indicating that, in the long-time regime, only three relevant parameters are required to characterize the diffusive system: \(D\), \(\chi\), and \(L\). Moreover, it becomes evident that \(\langle\mathcal{Q}_{i}^{2}\rangle\) and \(T\) are related through a particular scaling combination, as given in Eq.(63). To further analyze the behavior, we introduce a scaled time-integrated bond-current fluctuation as follows:
\[\mathcal{W}\left(\frac{DT}{L^{2}}\right)\equiv\frac{D\langle \mathcal{Q}^{2}(T)\rangle}{2\chi L} \tag{63}\] \[=\lim_{L\rightarrow\infty}\frac{DT}{L^{2}}+\lim_{L\rightarrow\infty }\frac{1}{L^{2}}\sum_{q}\left(1+\frac{\omega_{q}}{2}\right)\frac{1-e^{-\omega _{q}DT}}{\omega_{q}},\]
Figure 2: The time-integrated bond-current fluctuation, \(\langle\mathcal{Q}_{i}^{2}(T)\rangle\), plotted as a function of time \(T\) for various chipping parameters and system sizes. The cyan (\(\lambda=0.25\), \(L=1000\)), magenta (\(\lambda=0.90\), \(L=1000\)), red (\(\lambda=0.25\), \(L=500\)), and green (\(\lambda=0.90\), \(L=500\)) lines represent simulation data for global density \(\rho=1\). The red dashed line represents the behavior \(\Gamma_{0}T\) for \(\lambda=0.90\) as mentioned in Eq.(62), while the blue and green dashed lines represent sub-diffusive \(\sim T^{1/2}\) and diffusive \(\sim T\) growth, respectively, as mentioned in Eq.(71).
or, equivalently, we can write the scaling function as
\[\mathcal{W}(y) =\lim_{L\rightarrow\infty}\left[y+\frac{1}{L^{2}}\sum_{q}\left(1+ \frac{\omega_{q}}{2}\right)\frac{1-e^{-\omega_{q}L^{2}y}}{\omega_{q}}\right], \tag{64}\] \[=\lim_{L\rightarrow\infty}\left[y+\frac{1}{L^{2}}\sum_{q}\frac{1- e^{-\omega_{q}L^{2}y}}{\omega_{q}}\right]+o\left(\frac{1}{L}\right), \tag{65}\]
where \(\omega_{q}=2(1-\cos q)\), with \(q=2\pi n/L\) and \(n=1,2,\cdots,L-1\). From Eq. (64), the scaling function \(\mathcal{W}(y)\) can be approximated by the following integral representation,
\[\mathcal{W}(y)\simeq y+\lim_{L\rightarrow\infty}\frac{1}{\pi L} \int\limits_{\frac{2\pi}{L}}^{\pi}dq\frac{\left(1-e^{-L^{2}y\omega(q)}\right) }{\omega(q)} \tag{66}\] \[\qquad+\lim_{L\rightarrow\infty}\frac{1}{2\pi L}\int\limits_{ \frac{2\pi}{L}}^{\pi}dq\left(1-e^{-L^{2}y\omega(q)}\right).\]
By using variable transformation \(z=\omega(q)L^{2}\) to Eq.(66) and taking the infinite system-size limit \(L\rightarrow\infty\), we obtain the following expression,
\[\mathcal{W}(y)\simeq y+\frac{1}{2\pi}\int\limits_{4\pi^{2}}^{\infty}dz\frac{( 1-e^{-zy})}{z^{\frac{3}{2}}}, \tag{67}\]
where we have used that the third term in the rhs of eq. (66) gives a subleading \(o(1/L)\) contribution, which vanishes in the scaling limit. After performing the integration, we finally write
\[\mathcal{W}(y)\simeq y+\sqrt{\frac{y}{\pi}}\text{erfc}(2\pi\sqrt{y})+\frac{1-e ^{-4\pi^{2}y}}{4\pi^{2}}, \tag{68}\]
where \(\text{erfc}(z)=1-\text{erf}(z)\), with error function,
\[\text{erf}(z)=\frac{2}{\sqrt{\pi}}\int\limits_{0}^{\tilde{z}}e^{-t^{2}}dt. \tag{69}\]
From Eq.(68), we can calculate the asymptotic forms of \(\mathcal{W}(y)\) in two different limits,
\[\mathcal{W}(y)\simeq\begin{cases}\sqrt{\frac{y}{\pi}}&\text{for }y\ll 1\\ y&\text{for }y\gg 1.\end{cases} \tag{70}\]
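Both the closed form (68) and the finite-\(L\) mode sum of Eq. (64) are easily evaluated numerically; the brief Python sketch below (with illustrative values of \(y\) and \(L\)) compares them with the two asymptotes of Eq. (70).

```python
import numpy as np
from math import erfc, exp, pi, sqrt

def W_closed(y):
    """Scaling function of Eq. (68)."""
    return y + sqrt(y / pi) * erfc(2 * pi * sqrt(y)) \
             + (1.0 - exp(-4 * pi**2 * y)) / (4 * pi**2)

def W_modesum(y, L=4000):
    """Finite-L mode sum of Eq. (64) for the scaled bond-current fluctuation."""
    w = 2.0 * (1.0 - np.cos(2.0 * np.pi * np.arange(1, L) / L))
    return y + np.sum((1.0 + w / 2.0) * (1.0 - np.exp(-w * L**2 * y)) / w) / L**2

for y in [1e-4, 1e-2, 1.0, 10.0]:
    print(y, W_closed(y), W_modesum(y), sqrt(y / pi), y)   # last two: asymptotes of Eq. (70)
```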
In Fig. 3, we illustrate the scaled time-integrated bond-current fluctuation \(\langle\mathcal{Q}_{i}^{2}(T)\rangle D/(2\chi L)\) as a function of scaled time \(DT/L^{2}\) for different chipping parameters and system sizes at a global density of \(\rho=1\). The colored lines are obtained from simulations, and the black solid line corresponds to the theoretical result in Eq.(65). Two guiding dashed lines represent sub-diffusive behavior \(\sim y^{1/2}\) (blue) at early times, followed by diffusive growth \(\sim y\) (green) at longer times, as mentioned in Eq.(70).
Now combining all the temporal regimes discussed above, we can summarize the asymptotic behaviors of the time-integrated bond-current fluctuation \(\langle\mathcal{Q}^{2}(T)\rangle\) in the following three regimes:
\[\langle\mathcal{Q}^{2}(T)\rangle=\begin{cases}4\chi T&\text{for }DT\ll 1\\ \frac{2\chi}{\sqrt{D\pi}}T^{\frac{1}{2}}&\text{for }1\ll DT\ll L^{2}\\ \frac{2\chi T}{L}&\text{for }DT\gg L^{2}.\end{cases} \tag{71}\]
#### Space-time integrated current
In this section, we will focus on the calculation of the steady-state variance \(\langle\mathcal{Q}_{sub}^{2}(l,T)\rangle-\langle\mathcal{Q}_{sub}(l,T)\rangle ^{2}=\langle\mathcal{Q}_{sub}^{2}(l,T)\rangle\) of the cumulative (space-time integrated) actual particle current \(\mathcal{Q}_{sub}(l,T)=\sum_{i=0}^{l-1}\mathcal{Q}_{i}(T)\) across a subsystem of size \(l\) and up to time \(T\). The variance \(\langle\mathcal{Q}_{sub}^{2}(l,T)\rangle\) can be expressed as follows:
\[\langle\mathcal{Q}_{sub}^{2}(l,T)\rangle=\left\langle\sum_{i=0}^{l-1}\mathcal{Q}_{i}(T)\sum_{j=0}^{l-1}\mathcal{Q}_{j}(T)\right\rangle. \tag{72}\]
The sum on the right-hand side of the equation can be simplified and expressed in terms of the current-current dynamic correlation at equal times:
\[\langle\mathcal{Q}_{sub}^{2}(l,T)\rangle=lC_{0}^{\mathcal{QQ}}(T,T)+\sum_{r=1} ^{l-1}2(l-r)C_{r}^{\mathcal{QQ}}(T,T). \tag{73}\]
Figure 3: Scaled time-integrated bond-current fluctuation, \(\langle\mathcal{Q}_{i}^{2}(T)\rangle D/(2\chi L)\), plotted against scaled time \(DT/L^{2}\) for different chipping parameters and system sizes at a global density of \(\rho=1\). The cyan (\(\lambda=0.25\), \(L=1000\)), magenta (\(\lambda=0.90\), \(L=1000\)), red (\(\lambda=0.25\), \(L=500\)), and green (\(\lambda=0.90\), \(L=500\)) lines represent simulation data. The two guiding dashed lines depict sub-diffusive behavior as \(\sim y^{1/2}\) (blue) at early times, followed by a diffusive growth as \(\sim y\) (green) at longer times. These trends are based on the scaling function \(\mathcal{W}(y)\) [as in Eq.(70)]. The black solid line corresponds to theoretical results [as in Eq.(65)] and demonstrates excellent agreement with the simulations.
Now, we can utilize Eq.(57) in the above equation and employ the following identity:
\[\sum_{r=1}^{l-1}2(l-r)(2-\omega_{rq})=2\left(\frac{\omega_{lq}-l\omega_{q}}{\omega _{q}}\right). \tag{74}\]
Afterwards, we perform some algebraic manipulations, yielding the following expression:
\[\langle\mathcal{Q}_{sub}^{2}(l,T)\rangle =2\chi lT+2\chi T(1-\delta_{l,L})-\frac{2\chi D}{L} \tag{75}\] \[\times\sum_{q}\frac{(D\omega_{q}T-1+e^{-\omega_{q}DT})}{(\omega_{ q}D)^{2}}\big{(}1+\frac{1}{2}\omega_{q}\big{)}\omega_{ql}.\]
Here, \(\omega_{q}=2(1-\cos q)\), where \(q=2\pi n/L\) and \(n=1,2,\cdots,L-1\). The influence of the subsystem size \(l\) comes into play through the Fourier mode \(\omega_{ql}\) alone. We will now derive the asymptotic dependence of Eq.(75) on the subsystem size \(l\) and time \(T\), first by considering the limit \(T\gg 1\) followed by \(l\gg 1\), and then by reversing the order of the limits, i.e., \(l\gg 1\) followed by \(T\gg 1\).
Case 1: \(T\gg 1\) and \(l\gg 1\)
When we first consider the limit \(T\gg 1\) followed by \(l\gg 1\), the above Eq.(75) simplifies to:
\[\langle\mathcal{Q}_{sub}^{2}(l,T)\rangle \tag{76}\] \[\simeq\frac{2\chi l^{2}}{L}+\frac{2\chi D}{L}\sum_{q}\frac{(1-e^{ -D\omega_{q}T})}{D^{2}\omega_{q}^{2}}\left(1+\frac{\omega_{q}}{2}\right) \omega_{ql}.\]
In the limit as \(L\rightarrow\infty\), the sum in the above equation can be approximated to an integral form as follows:
\[\langle\mathcal{Q}_{sub}^{2}(l,T)\rangle \tag{77}\] \[\simeq\frac{2\chi D}{\pi}\int\limits_{0}^{\pi}dq\frac{(1-e^{-D \omega(q)T})}{D^{2}\omega(q)^{2}}\left(1+\frac{\omega(q)}{2}\right)\omega(ql). \tag{78}\]
By using the approximation \(\omega(lq)\simeq l^{2}q^{2}\) for a finite subsystem size \(l\), and introducing a variable transformation \(z=DTq^{2}\), we obtain
\[\langle\mathcal{Q}_{sub}^{2}(l,T)\rangle\simeq\frac{\chi l^{2}\sqrt{T}}{\pi \sqrt{D}}\int\limits_{0}^{\infty}dz(1-e^{-z})z^{-3/2}. \tag{79}\]
After evaluating the integral \(\int_{0}^{\infty}dz(1-e^{-z})z^{-3/2}=2\sqrt{\pi}\), we obtain:
\[\frac{\langle\mathcal{Q}_{sub}^{2}(l,T)\rangle}{lT}\simeq\frac{2\chi}{\sqrt{ \pi D}}\frac{l}{\sqrt{T}}. \tag{80}\]
Case 2: \(l\gg 1\) and \(T\gg 1\)

In this specific order of taking the limits (\(l\gg 1\) first and then \(T\gg 1\)), we make the approximation \(\omega(ql)\simeq 2\), and the equation (75) can be expressed in integral form as:
\[\frac{\langle\mathcal{Q}_{sub}^{2}(l,T)\rangle}{lT} \tag{81}\] \[\simeq 2\chi+\frac{2\chi}{l}+\frac{4\chi D}{lT\pi}\int\limits_{0}^{ \pi}dq\frac{[D\omega(q)T-1+e^{-\omega(q)DT}]}{D^{2}\omega(q)^{2}}.\]
Again using the variable transformation \(z=DTq^{2}\), we obtain,
\[\frac{\langle\mathcal{Q}_{sub}^{2}(l,T)\rangle}{lT} \tag{82}\] \[\simeq 2\chi+\frac{2\chi}{l}-\frac{2\chi\sqrt{DT}}{\pi l}\int \limits_{0}^{\infty}dz(z-1+e^{-z})z^{-\frac{5}{2}}.\]
By employing the integral \(\int_{0}^{\infty}dz(z-1+e^{-z})z^{-5/2}=4\sqrt{\pi}/3\) in the above equation, we can express the leading-order contribution as:
\[\frac{\langle\mathcal{Q}_{sub}^{2}(l,T)\rangle}{lT}\simeq 2\chi-\frac{8\chi}{3}\sqrt{\frac{D}{\pi}}\frac{\sqrt{T}}{l} \tag{83}\]
Hence, the asymptotic expression for the variance of the cumulative subsystem current, as presented in equation (75), is conditional on the order of limits for the two variables \(T\gg 1\) and \(l\gg 1\), i.e.,
\[\frac{\langle\mathcal{Q}_{sub}^{2}(l,T)\rangle}{lT}\simeq\begin{cases}\frac{2\chi}{\sqrt{\pi D}}\frac{l}{\sqrt{T}}&\text{ for }T\gg 1,l\gg 1,\\ 2\chi-\frac{8\chi}{3}\sqrt{\frac{D}{\pi}}\frac{\sqrt{T}}{l}&\text{ for }l\gg 1,T\gg 1.\end{cases} \tag{84}\]
The first expression in the equation above results from taking the limits in the following sequence: first \(T\gg 1\) and then \(l\gg 1\). In this specific order of limits, the scaled function \(\langle\mathcal{Q}_{sub}^{2}(l,T)\rangle/lT\) decreases as \(1/\sqrt{T}\) and eventually diminishes as \(T\) approaches infinity. Conversely, if we reverse the order of limits, starting with \(l\gg 1\) and then considering \(T\gg 1\), we derive the second asymptotic expression found in Eq.(84). In essence, when \(l\rightarrow\infty\), the scaled fluctuation of the subsystem current \(\langle\mathcal{Q}_{sub}^{2}(l,T)\rangle/lT\) approaches \(2\chi\) as \(T\) increases,
\[\sigma_{\mathcal{Q}}^{2}\equiv\lim_{l\rightarrow\infty,T\rightarrow\infty}\frac{\langle\mathcal{Q}_{sub}^{2}(l,T)\rangle}{lT}=\sum_{r}\Gamma_{r}. \tag{85}\]
In the infinite subsystem limit, taken first, followed by the infinite-time limit, Eq. (85) can be viewed as a non-equilibrium version of the Green-Kubo relation, well-known in equilibrium systems. Interestingly, when we consider \(l=L\gg 1\), which corresponds to the bond current summed over the entire system, we obtain the following identity:
\[\lim_{L\rightarrow\infty}\frac{\langle\mathcal{Q}_{sub}^{2}(L,T)\rangle}{LT}=2 \chi=\sum_{r}\Gamma_{r}. \tag{86}\]
It is important to note that the above expression is valid for any finite time \(T\). This is due to the fact that the diffusive part of the total current vanishes over the full system size, i.e., \(\sum_{i=1}^{L}\mathcal{J}_{i}^{(d)}=0\), by definition. As a result, we are left with only the strength of the fluctuating part of the current, which leads to \(\sum_{r}\Gamma_{r}=2\chi\).
In Fig. 4, we plot the scaled subsystem current fluctuation \(\langle\mathcal{Q}_{sub}^{2}(l,T)\rangle/lT\) against time \(T\) for various subsystem sizes: \(l=100\) (blue line), \(l=500\) (orange line), and \(l=L=1000\) (green line). The black dashed lines represent theoretical predictions that closely match the simulation data. A magenta dotted line at \(2\chi\) overlays the data when \(l=L\), indicating the limit where \(l\rightarrow\infty\) is taken first. Notably, for smaller subsystem sizes, \(\langle\mathcal{Q}_{sub}^{2}(l,T)\rangle/lT\) tends to zero when the \(T\rightarrow\infty\) limit is taken first.
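The order-of-limits effect summarized in Eqs. (84)-(86) can also be seen directly by evaluating the exact mode sum of Eq. (75); the short Python sketch below (illustrative parameters) shows that \(\langle\mathcal{Q}_{sub}^{2}(l,T)\rangle/lT\) decays at large \(T\) for \(l\ll L\), while it stays at \(2\chi\) for \(l=L\).

```python
import numpy as np

def subsystem_current_fluct(l, T, L, lam_tilde, chi):
    """Exact expression of Eq. (75) for <Q_sub^2(l, T)> in MCM I."""
    D = lam_tilde / 2.0
    q = 2.0 * np.pi * np.arange(1, L) / L
    w = 2.0 * (1.0 - np.cos(q))
    wl = 2.0 * (1.0 - np.cos(q * l))
    s = np.sum((D * w * T - 1.0 + np.exp(-D * w * T)) / (D * w) ** 2
               * (1.0 + w / 2.0) * wl)
    boundary = 1.0 if l != L else 0.0            # the (1 - delta_{l,L}) term
    return 2 * chi * l * T + 2 * chi * T * boundary - 2 * chi * D * s / L

lam_tilde, rho, L = 0.75, 1.0, 1000
chi = lam_tilde**2 / 6.0 * (3 * rho**2 / (3 - 2 * lam_tilde))
for T in [1e2, 1e4, 1e6]:
    print(T,
          subsystem_current_fluct(100, T, L, lam_tilde, chi) / (100 * T),  # l << L: decays
          subsystem_current_fluct(L, T, L, lam_tilde, chi) / (L * T),      # l = L: stays at
          2 * chi)                                                         # 2 * chi
```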
### Instantaneous bond-current fluctuations
In this section, we calculate the spatio-temporal correlation of the instantaneous current, \(C_{r}^{\mathcal{J}\mathcal{J}}(t)=\langle\mathcal{J}_{i}(t)\mathcal{J}_{i+r}(0)\rangle_{c}\), from the correlation of the time-integrated bond current and show that the instantaneous bond current is negatively correlated in time. This is accomplished by taking a double derivative of the time-integrated bond-current correlation, which is expressed as follows:
\[C_{r}^{\mathcal{J}\mathcal{J}}(t,t^{\prime}) =\left[\frac{d}{dt}\frac{d}{dt^{\prime}}C_{r}^{\mathcal{QQ}}(t,t ^{\prime})\right], \tag{87}\]
Here, with the condition that \(t\geq t^{\prime}\), we proceed to differentiate Eq.(58) twice, resulting in its rewritten form:
\[C_{r}^{\mathcal{J}\mathcal{J}}(t,t^{\prime}) =\Gamma_{r}\delta(t-t^{\prime})\] \[-\frac{\chi D}{L}\sum_{q}e^{-D\omega_{q}(t-t^{\prime})}\omega_{q }\left(1+\frac{\omega_{q}}{2}\right)e^{-iqr}. \tag{88}\]
To investigate the temporal behavior of instantaneous bond current, we set \(r=0\) and \(t>t^{\prime}=0\) in Eq.(88), and simplify the resulting equation into the following integral form by taking limit \(L\rightarrow\infty\) as
\[C_{0}^{\mathcal{J}\mathcal{J}}(t,0)\simeq-\frac{\chi D}{\pi}\int\limits_{0}^{\pi}dqe^{-D\omega(q)t}\omega(q)\left[1+\frac{\omega(q)}{2}\right], \tag{89}\]
where \(\omega(q)=2(1-\cos q)\). Now, we approximate \(\omega(q)\simeq q^{2}\) and make a variable transformation \(z=Dq^{2}t\) to rewrite the above equation as follows:
\[C_{0}^{\mathcal{J}\mathcal{J}}(t,0)\simeq-\frac{\chi t^{-\frac{3}{2}}}{2\pi\sqrt{D}}\int\limits_{0}^{\infty}z^{\frac{1}{2}}e^{-z}dz, \tag{90}\]
where we have ignored the subleading term \(O(t^{-5/2})\). We note that the sign in Eq.(90) is negative. Finally, using the integral \(\int_{0}^{\infty}z^{1/2}e^{-z}dz=\sqrt{\pi}/2\), the asymptotic form of the instantaneous-current correlation for any time \(t\) in the thermodynamic limit can be written as:
\[C_{0}^{\mathcal{J}\mathcal{J}}(t,0)\simeq\Gamma_{0}\delta(t)-\frac{\chi}{4 \sqrt{\pi D}}t^{-\frac{3}{2}}. \tag{91}\]
Notably, the negative part of the above equation exhibits long-range behavior in the temporal domain, primarily due to the contribution of dynamic correlation from the diffusive current, i.e., \(C_{0}^{\mathcal{J}^{(d)}\mathcal{J}^{(d)}}(t,0)\sim t^{-3/2}\). In contrast, the fluctuating current is short-ranged, given by \(C_{r}^{\mathcal{J}^{(l)}\mathcal{J}^{(l)}}(t,0)=\delta(t)\Gamma_{r}\), where \(\Gamma_{r}\) represents the strength of the fluctuating current.
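The crossover from the exact mode sum of Eq.(88) to the asymptote of Eq.(91) can be checked directly; a minimal numerical sketch (with illustrative values of \(D\), \(\chi\) and \(L\), not tied to a specific chipping parameter) is:

```python
# Minimal sketch comparing the finite-L sum of Eq.(88) (r = 0, t > 0) with the
# asymptote -chi/(4*sqrt(pi*D)) * t^(-3/2) of Eq.(91).  D, chi, L are illustrative.
import numpy as np

D, chi, L = 0.5, 1.0, 1000
q = 2.0 * np.pi * np.arange(1, L) / L
omega = 2.0 * (1.0 - np.cos(q))

def C_JJ(t):
    # negative (diffusive) part of the instantaneous-current correlation at r = 0
    return -(chi * D / L) * np.sum(np.exp(-D * omega * t) * omega * (1.0 + omega / 2.0))

for t in [10.0, 100.0, 1000.0]:
    asym = -chi / (4.0 * np.sqrt(np.pi * D)) * t ** (-1.5)
    print(t, C_JJ(t), asym)
```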
Now, to check the spatial behavior of the instantaneous current, we calculate this correlation at equal times, \(t=t^{\prime}\). In that case Eq.(88) can be expressed as:
\[C_{r}^{\mathcal{J}\mathcal{J}}(t,t)=\Gamma_{r}\delta(0)-\frac{\chi D}{L}\sum _{q}\omega_{q}\left(1+\frac{\omega_{q}}{2}\right)e^{-iqr}. \tag{92}\]
After some algebraic manipulation of the above equation, it can be expressed in terms of the steady-state (\(t\rightarrow\infty\)) density correlation \(C_{r}^{mm}\) as follows:
\[C_{r}^{\mathcal{J}\mathcal{J}}=\Gamma_{r}\delta(0)-D^{2}[3C_{r}^{mm}-(C_{r-1} ^{mm}+C_{r+1}^{mm})]. \tag{93}\]
Figure 4: The scaled space-time-integrated bond-current fluctuation, \(\langle\mathcal{Q}_{sub}^{2}(l,T)\rangle/lT\), is displayed as a function of time \(T\) for various sub-system sizes: \(l=100\) (lower blue solid line), \(500\) (middle orange solid line), and \(1000\) (top solid green line). The chosen chipping parameter for the model is \(\lambda=0.25\) (fixed), with a system size of \(L=1000\) and a global density of \(\rho=1\). The black dashed line in the plot corresponds to the theoretical prediction derived from Eq.(75), and it precisely aligns with the respective simulation data. Moreover, when the subsystem size equals the full system size, i.e., \(l=L\), \(\langle\mathcal{Q}_{sub}^{2}(L,T)\rangle/LT\) follows the behavior of \(2\chi\) (magenta dashed lines), as indicated by Eq.(86).
In Eq.(93), we can see that the spatial length scale of the instantaneous current is intimately tied to the strength of the fluctuating current, denoted as \(\Gamma_{r}\), particularly in the leading-order analysis. An intriguing implication emerges from this observation: as \(\Gamma_{r}\) directly influences the density correlation \(C_{r}^{mm}\), it points to the conclusion that the spatial extent of the instantaneous current in this model is inherently short-ranged. This is in contrast with the temporal behavior of the current correlation, which comprises two distinct components: first, the fluctuating current, which is short-ranged in both space and time; and second, the diffusive current, characterized by its short-ranged spatial behavior but intriguingly long-ranged (power-law) temporal behavior, scaling as \(\sim t^{-3/2}\).
Now, using the _Wiener-Khinchin theorem_[46], we can express the power spectrum for the instantaneous current \(\mathcal{J}_{i}\) as
\[S_{\mathcal{J}}(f)=\int\limits_{-\infty}^{\infty}dtC_{0}^{\mathcal{J}\mathcal{ J}}(t,0)e^{2\pi ift}. \tag{94}\]
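In practice, the spectrum on the left-hand side is estimated from a sampled bond-current series; a minimal sketch of such a periodogram estimator is given below (the input series here is illustrative white noise, standing in for a measured \(\mathcal{J}_{i}(t)\)):

```python
# Minimal sketch of estimating S_J(f) from a sampled bond-current series via the
# Wiener-Khinchin relation (periodogram).  The series below is illustrative noise;
# in practice it would be the measured J_i(t) from the Monte Carlo simulation.
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0                           # sampling interval (illustrative)
J = rng.normal(size=2**16)         # stand-in for a measured current series

Jhat = np.fft.rfft(J - J.mean())
S = dt * np.abs(Jhat) ** 2 / len(J)    # periodogram estimate of the power spectrum
f = np.fft.rfftfreq(len(J), d=dt)
# in practice, average S over many independent series (or windows) to reduce noise
```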
Using Eq.(88) with \(r=0\) and \(t^{\prime}=0\), we integrate the rhs of the above equation and obtain the following expression:
\[S_{\mathcal{J}}(f)=\frac{2\chi(\rho)}{L}+\frac{2\chi(\rho)}{L}\sum_{q}\big{(}1 +\frac{1}{2}\omega_{q}\big{)}\frac{4\pi^{2}f^{2}}{D^{2}\omega_{q}^{2}+4\pi^{2 }f^{2}}. \tag{95}\]
Here, \(\omega_{q}=2(1-\cos q)\), where \(q=2\pi n/L\) and \(n=1,2,\cdots,L-1\). Subtracting the zero-frequency value, we define \(\tilde{S}_{\mathcal{J}}(f)=S_{\mathcal{J}}(f)-S_{\mathcal{J}}(0)\) and rewrite the above expression as follows:
\[\tilde{S}_{\mathcal{J}}(f)=\frac{2\chi(\rho)}{L}\sum_{q}\big{(}1+\frac{1}{2} \omega_{q}\big{)}\frac{4\pi^{2}f^{2}}{D^{2}\omega_{q}^{2}+4\pi^{2}f^{2}}. \tag{96}\]
We now rescale the frequency as \(\tilde{y}=fL^{2}/D\) and introduce a scaling function \(\mathcal{H}\) that is related to \(\tilde{S}_{\mathcal{J}}(f)\) as,
\[\mathcal{H}\left(\frac{L^{2}f}{D}\right) =\frac{L\tilde{S}_{\mathcal{J}}(f)}{2\chi(\rho)}\] \[=\lim_{L\rightarrow\infty}\sum_{q}\big{(}1+\frac{1}{2}\omega_{q} \big{)}\frac{4\pi^{2}\left(\frac{L^{2}f}{D}\right)^{2}}{L^{4}\omega_{q}^{2}+4 \pi^{2}\left(\frac{L^{2}f}{D}\right)^{2}}, \tag{97}\]
where \(\omega_{q}=2(1-\cos q)\), with \(q=2\pi n/L\) and \(n=1,2,\cdots,L-1\). The above expression can be represented in integral form, and in the limit of small frequency and \(L\rightarrow\infty\), we obtain the scaling regime. Furthermore, Eq(97) shows that as \(\tilde{y}\rightarrow\infty\), the scaled power spectrum of instantaneous currents diverges with the system size as \(2L-1\), which has been illustrated in Fig. 5.
The proposed scaling function for the power spectrum of instantaneous bond current as a function of scaled frequency \(\tilde{y}\) has an integral representation in the lower frequency regime \(D/L^{2}\ll f\ll 1\). The integral representation is as follows:
\[\mathcal{H}(\tilde{y})\simeq\lim_{L\rightarrow\infty}\frac{L}{\pi}\int\limits_{2\pi/L}^{\pi}dq\left[1+\frac{\omega(q)}{2}\right]\frac{1}{1+\frac{L^{4}\omega(q)^{2}}{4\pi^{2}\tilde{y}^{2}}}, \tag{98}\]
where \(\omega(q)=2(1-\cos q)\). Now, after the variable transformation \(z=\omega(q)L^{2}\) and taking \(L\rightarrow\infty\), we obtain the following expression,
\[\mathcal{H}(\tilde{y})\simeq\frac{1}{2\pi}\int\limits_{0}^{\infty}\frac{dz}{z ^{1/2}[1+\frac{z^{2}}{4\pi^{2}\tilde{y}^{2}}]}\simeq\sqrt{\frac{\tilde{y}\pi}{ 4}}. \tag{99}\]
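Both limits of \(\mathcal{H}(\tilde{y})\) can be verified against the finite-\(L\) sum of Eq.(97); a minimal numerical sketch (with an illustrative system size) is:

```python
# Minimal sketch evaluating the finite-L scaling function H(y~) of Eq.(97) and
# comparing it with the small-y~ asymptote sqrt(pi*y~/4) of Eq.(99) and the
# large-y~ saturation value 2L - 1.  L is an illustrative system size.
import numpy as np

L = 1000
q = 2.0 * np.pi * np.arange(1, L) / L
omega = 2.0 * (1.0 - np.cos(q))

def H(ytil):
    num = 4.0 * np.pi**2 * ytil**2
    return np.sum((1.0 + omega / 2.0) * num / (L**4 * omega**2 + num))

for ytil in [10.0, 100.0, 1000.0, 1e8]:
    print(ytil, H(ytil), np.sqrt(np.pi * ytil / 4.0), 2 * L - 1)
```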
Figure 5 shows the scaled power spectrum of the instantaneous bond current, denoted as \(L\tilde{S}_{\mathcal{J}}(f)/(2\chi)\), plotted against the scaled frequency \(L^{2}f/D\) for various chipping parameters and system sizes, all at a global density of \(\rho=1\). The simulation results are depicted using solid color lines, while the black solid line represents the theoretical predictions from Eq.(97), which align perfectly with the simulation data. Additionally, a guiding line \(\sim\tilde{y}^{1/2}\), as specified in Eq.(99), is included in the lower scaled-frequency range.
According to Eq.(99), the power spectrum of the instantaneous currents displays a power-law behavior of \(f^{\psi_{\mathcal{J}}}\) with an exponent of \(\psi_{\mathcal{J}}=1/2\) in the low-frequency
Figure 5: The scaled power spectrum of instantaneous currents, \(L\tilde{S}_{\mathcal{J}}(f)/(2\chi)\), is plotted as a function of scaled frequency \(L^{2}f/D\) for various chipping parameters and system sizes. The cyan solid line corresponds to \(\lambda=0.25\) and \(L=1000\), the magenta solid line corresponds to \(\lambda=0.90\) and \(L=1000\), the red solid line corresponds to \(\lambda=0.25\) and \(L=500\), and the green solid line corresponds to \(\lambda=0.90\) and \(L=500\), all at a global density of \(\rho=1\). The blue dashed line shows \(\tilde{y}^{1/2}\) scaling behavior in the low-frequency regime as in Eq.(99) and red dashed lines represent \(L\tilde{S}_{\mathcal{J}}(f)/(2\chi)\) diverges as system size \(2L-1\) at the high-frequency limit. The solid color lines represent the simulation results, while the black solid line represents the theoretical predictions of Eq.(97), which perfectly matches the simulation data.
regime. Additionally, it can be inferred that in the temporal domain, the correlation of instantaneous current has a scaling behavior of \(\left\langle\mathcal{J}_{0}(t)\mathcal{J}_{0}(0)\right\rangle\sim t^{-\psi_{J}-1}\), which is qualitatively described by a \(t^{-3/2}\) behavior.
### Subsystem mass fluctuations
In previous sections, we presented a detailed study of the current-current dynamical correlations and the associated power spectrum of the instantaneous current. In this section, we study the power spectrum of the subsystem mass fluctuation. For that, we derive the two-point dynamic correlation of mass \(C_{r}^{mm}(t,0)=\left\langle m_{i}(t)m_{i+r}(0)\right\rangle-\left\langle m_{i}(t)\right\rangle\left\langle m_{i+r}(0)\right\rangle\). By employing the microscopic update rule, we can derive the time evolution of \(C_{r}^{mm}(t,0)\equiv C_{r}^{mm}(t)\) as follows:
\[\begin{split}\frac{d}{dt}C_{r}^{mm}(t)&=D\sum_{k} \Delta_{0,k}\left\langle m_{k}(t)m_{r}(0)\right\rangle_{c}\\ &=D\sum_{k}\Delta_{0,k}C_{k}^{mm}(t).\end{split} \tag{100}\]
The solution of Eq.(100) can be written in Fourier representation as
\[\tilde{C}_{q}^{mm}(t)=e^{-D\omega_{q}t}\tilde{C}_{q}^{mm}(0), \tag{101}\]
where \(\tilde{C}_{q}^{mm}\) is the Fourier transform of \(C_{r}^{mm}\). The equal-time mass correlation \(C_{r}^{mm}(0)\) corresponds to the steady-state mass-mass correlation \(C_{r}^{mm}\) mentioned in Eq.(50). Notably, this correlation \(C_{r}^{mm}\) has a direct connection with scaled subsystem-mass fluctuation,
\[\sigma_{M}^{2}\equiv\lim_{l\rightarrow\infty}\frac{\left\langle M_{l}^{2} \right\rangle-\left\langle M_{l}\right\rangle^{2}}{l}=\sum_{r=-\infty}^{r= \infty}C_{r}^{mm}=\frac{\chi}{D}, \tag{102}\]
where the boundary contribution of \(C_{r}^{mm}\) has been neglected.
Note that, in the above equation, the mobility \(\chi\) is defined purely from current fluctuations in the system, when the particle hopping rates are strictly symmetric in either direction. Indeed, the essence of the MFT is that, for "gradient-type models", the current fluctuations can be alternatively calculated using a slightly different approach where the hopping rates are biased in a certain direction; this amounts to applying a small biasing force in that direction so that the hopping rates become slightly asymmetric and a small current is generated in the system. Interestingly, this particular scheme leads to the definition of another transport coefficient in the system, which we call an "operational" mobility; it characterizes the response of the system (i.e., the small current generated) to a small force field \(F\), which, for simplicity, is assumed to be constant. Indeed, in Ref. [39], one of us previously introduced such a biasing force, which modifies the original unbiased (symmetric) hopping rates of MCMs; of course, in the absence of the force \(F\), we recover the original time-evolution equation, Eq.(6). In that case, the time-evolution equation of the local density, as opposed to the unbiased scenario of Eq.(6), is given by
\[\frac{d\rho_{i}}{dt}=\frac{\tilde{\lambda}}{2}(\rho_{i+1}-2\rho_{ i}+\rho_{i-1})+\frac{\tilde{\lambda}^{2}}{12}F\left(\left\langle m_{i-1}^{2} \right\rangle-\left\langle m_{i+1}^{2}\right\rangle\right), \tag{103}\]
where \(\rho_{i}=\left\langle m_{i}\right\rangle\). By scaling space and time as \(x=i/L\) and \(\tau=t/L^{2}\) and the (vanishingly) small biasing force as \(F=\tilde{F}/L\), the time evolution equation for density field \(\rho(x,\tau)\) as a function of the rescaled space and time variables can be expressed in terms of a continuity equation,
\[\begin{split}\partial_{\tau}\rho(x,\tau)&=-\partial _{x}\left[-D\partial_{x}\rho(x,\tau)+\chi_{op}\tilde{F}\rho(x,\tau)\right]\\ &=-\partial_{x}J(x,\tau),\end{split} \tag{104}\]
where the total local current \(J=J_{diff}+J_{drift}\) is written as the sum of the diffusive current \(J_{diff}=-D\partial_{x}\rho(x,\tau)\) and the drift current \(J_{drift}=\chi_{op}\tilde{F}\); here the two transport coefficients, the bulk-diffusion coefficient and the "operational" mobility, are given by \(D=\tilde{\lambda}/2\) and \(\chi_{op}=\left\langle m_{i}^{2}\right\rangle/6\), respectively. The latter identity immediately implies, directly through Eq.(31), a simple fluctuation-response relation
\[\chi_{op}(\rho)\equiv\left[\frac{\partial J_{drift}}{\partial\tilde{F}} \right]_{\tilde{F}=0}=\sigma_{Q}^{2}\equiv\chi(\rho). \tag{105}\]
In other words, we have derived here a nonequilibrium version of the celebrated Green-Kubo relation for (near) equilibrium systems. We can immediately derive a version of another celebrated relation in equilibrium, called the Einstein relation, which connects the scaled mass fluctuation, the bulk-diffusion coefficient and the "operational" mobility, i.e.,
\[\chi_{op}\equiv\left[\frac{\partial J_{drift}}{\partial\tilde{F}}\right]_{ \tilde{F}=0}=D\sigma_{M}^{2}, \tag{106}\]
where we have used the already derived fluctuation relation [47] as given in Eq.(102); notably, the above equation is exact for MCMs and the above analysis constitutes a microscopic derivation of the relation. Furthermore, by using Eq.(85) and Eq. (102), we can immediately derive another nonequilibrium fluctuation relation, between fluctuation of mass and that of current, as expressed in the following equation,
\[\sigma_{M}^{2}=\frac{\sigma_{\mathcal{Q}}^{2}}{2D}. \tag{107}\]
It is not difficult to see that the above relation is nothing but a slightly modified version of the equilibrium-like Einstein relation as given in Eq. (106). While the above set of fluctuation relations is well established in the context of equilibrium systems, their existence in systems having a nonequilibrium steady state is not well understood. Indeed, a general theoretical understanding has emerged for nonequilibrium diffusive systems, which
MCMs belong to, and it is desirable to prove such relations using exact microscopic calculations, which are still a formidable task even for the simplest class of models, i.e., the many-particle diffusive systems.
In order to evaluate Eq.(101), we must determine the steady-state mass-mass correlation in Fourier mode, denoted as \(C_{q}^{mm}\), which is given by:
\[C_{q}^{mm}=\frac{\chi}{D}\left(1+\frac{\omega_{q}}{2}\right). \tag{108}\]
Now, we substitute the above equation into Eq.(101) to obtain the solution of Eq.(101) as follows:
\[\tilde{C}_{q}^{mm}(t)=\frac{\chi}{D}e^{-D\omega_{q}t}\left(1+\frac{\omega_{q}}{2}\right). \tag{109}\]
Finally, using inverse Fourier transformation, we get
\[C_{r}^{mm}(t)=\frac{\chi}{D}\frac{1}{L}\sum_{q}e^{-iqr}e^{-D\omega_{q}t}\left( 1+\frac{\omega_{q}}{2}\right). \tag{110}\]
To calculate the asymptotic behavior at the single-site level, we set \(r=0\) in the above expression and write it in integral form as follows:
\[C_{0}^{mm}(t)\simeq\frac{\chi}{\pi D}\int\limits_{0}^{\pi}dqe^{-D\omega(q)t} \left[1+\frac{\omega(q)}{2}\right]. \tag{111}\]
Now, we approximate \(\omega(q)\approx q^{2}\) and perform a variable transformation \(z=Dtq^{2}\) to simplify the above equation as follows:
\[C_{0}^{mm}(t)\simeq\frac{\chi}{\pi D\sqrt{4Dt}}\int\limits_{0}^{\infty}z^{-\frac{1}{2}}e^{-z}dz, \tag{112}\]
where the subleading term \(O(t^{-3/2})\) is neglected. After putting the value of the integral \(\int_{0}^{\infty}z^{-\frac{1}{2}}e^{-z}dz=\sqrt{\pi}\) in the above equation, we obtain the asymptotic expression of the dynamic correlation of mass at a single site as
\[C_{0}^{mm}(t)\simeq\frac{\chi}{D\sqrt{4\pi D}}t^{-\frac{1}{2}}. \tag{113}\]
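As for the current correlation, the approach of the finite-\(L\) mode sum in Eq.(110) to the \(t^{-1/2}\) asymptote of Eq.(113) can be checked numerically; a minimal sketch (illustrative \(D\), \(\chi\), \(L\), with the \(q=0\) mode excluded as in the other mode sums) is:

```python
# Minimal sketch comparing the mode sum of Eq.(110) at r = 0 with the asymptote
# chi/(D*sqrt(4*pi*D)) * t^(-1/2) of Eq.(113).  D, chi, L are illustrative values.
import numpy as np

D, chi, L = 0.5, 1.0, 1000
omega = 2.0 * (1.0 - np.cos(2.0 * np.pi * np.arange(1, L) / L))

def C_mm(t):
    # q = 0 mode excluded (total mass is conserved on the ring)
    return (chi / D) / L * np.sum(np.exp(-D * omega * t) * (1.0 + omega / 2.0))

for t in [10.0, 100.0, 1000.0]:
    print(t, C_mm(t), chi / (D * np.sqrt(4.0 * np.pi * D)) * t ** (-0.5))
```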
We now consider a subsystem of size \(l<L\) with a total mass \(M_{l}(t)=\sum_{i=0}^{l-1}m_{i}(t)\) and calculate the unequal-time correlation of mass \(C^{M_{l}M_{l}}(t,0)\equiv C^{M_{l}M_{l}}(t)\) as follows:
\[C^{M_{l}M_{l}}(t)=\left\langle\sum_{i=0}^{l-1}m_{i}(t)\sum_{j=0}^{l-1}m_{j}(0 )\right\rangle_{c}. \tag{114}\]
Upon simplification of the above equation, we obtain,
\[C^{M_{l}M_{l}}(t)=lC_{0}^{mm}(t)+\sum_{r=1}^{l-1}(l-r)\left[C_{r}^{mm}(t)+C_{ -r}^{mm}(t)\right]. \tag{115}\]
After substituting Eq.(110) into the previous equation and performing algebraic operations, we arrive at the subsequent expression,
\[C^{M_{l}M_{l}}(t)=\frac{\chi}{D}\frac{1}{L}\sum_{q}e^{-D\omega_{q}t}\left(1+ \frac{\omega_{q}}{2}\right)\frac{\omega_{lq}}{\omega_{q}}. \tag{116}\]
Now, we derive the temporal asymptotic behavior of the dynamic correlation of the subsystem mass \(C^{M_{l}M_{l}}(t)\), as given in Eq.(116). At time \(t=0\), it takes its maximum value and thereafter decays as a function of time \(t\). To extract the time dependence, we subtract \(C^{M_{l}M_{l}}(t)\) from its maximum value and then express the difference in an approximate integral form as follows:
\[\begin{split}& C^{M_{l}M_{l}}(0)-C^{M_{l}M_{l}}(t)\\ &\simeq\frac{2\chi}{\pi D}\int\limits_{0}^{\pi}dq\left[1-e^{-D \omega(q)t}\right]\left[1+\frac{\omega(q)}{2}\right]\frac{1}{\omega(q)}. \end{split} \tag{117}\]
Again, we approximate \(\omega(q)\approx q^{2}\) and perform a variable transformation \(z=Dtq^{2}\) to simplify the above equation in the leading order as follows:
\[C^{M_{l}M_{l}}(0)-C^{M_{l}M_{l}}(t)\simeq\frac{\chi\sqrt{t}}{\pi\sqrt{D}}\int \limits_{0}^{\infty}z^{-\frac{3}{2}}(1-e^{-z})dz. \tag{118}\]
After putting the value of the integral \(\int_{0}^{\infty}z^{-3/2}(1-e^{-z})dz=2\sqrt{\pi}\) in the above equation, we obtain the asymptotic expression of dynamic correlation of the subsystem mass as
\[C^{M_{l}M_{l}}(t)-C^{M_{l}M_{l}}(0)\simeq-\frac{2\chi}{\sqrt{\pi D}}t^{\frac{1 }{2}}. \tag{119}\]
Now, after taking the Fourier transform of the Eq.(116), we can express the power spectrum of the subsystem mass fluctuation \(S_{M_{l}}(f)\) as follows:
\[S_{M_{l}}(f)=\lim\limits_{T\rightarrow\infty}\int\limits_{-T}^{T}dt\;C^{M_{l} M_{l}}(t,0)e^{2\pi ift}. \tag{120}\]
Upon completing the aforementioned integration, we have derived the subsequent expression,
\[S_{M_{l}}(f)=\frac{2\chi}{L}\sum_{q}\left(1+\frac{\omega_{q}}{2}\right)\frac{ \omega_{lq}}{\omega_{q}^{2}D^{2}+4\pi^{2}f^{2}}, \tag{121}\]
where \(\omega_{q}=2(1-\cos q)\), with \(q=2\pi n/L\) and \(n=1,2,\cdots,L-1\). We introduce an additional scaling function, denoted as \(\mathcal{F}\), which is related to \(S_{M_{l}}(f)\) in the following manner:
\[\begin{split}\mathcal{F}\left(\frac{L^{2}f}{D}\right)& =\frac{D^{2}}{2\chi L^{3}}S_{M_{l}}(f)\\ &=\lim\limits_{L\rightarrow\infty}\sum_{q}\left(1+\frac{\omega_{q }}{2}\right)\frac{\omega_{lq}}{L^{4}\omega_{q}^{2}+4\pi^{2}(L^{2}f/D)^{2}}. \end{split} \tag{122}\]
Additionally, it is worth noting that the sub-system mass power spectrum \(S_{M_{l}}(f)\) can be represented by an integral, as stated below,
\[\mathcal{F}(\tilde{y})\simeq\lim_{L\to\infty}\frac{L}{\pi}\int\limits _{2\pi/L}^{\pi}dq\left[1+\frac{\omega(q)}{2}\right]\frac{\omega(lq)}{4\pi^{2} \tilde{y}^{2}+L^{4}\omega(q)^{2}}. \tag{123}\]
For a large subsystem size \(1\ll l<L\), the function \(\omega(lq)\) exhibits high-frequency oscillations with values in the range \([0,4]\), leading to the approximation \(\omega(lq)\approx 2\). Additionally, we make use of the transformation \(z=\omega(q)L^{2}\) and then take \(L\to\infty\) to obtain the following expression,
\[\mathcal{F}(\tilde{y})\simeq\frac{1}{4\pi^{3}\tilde{y}^{2}}\int\limits_{0}^{ \infty}\frac{dz}{z^{1/2}\left(1+\frac{z^{2}}{4\pi^{2}\tilde{y}^{2}}\right)}. \tag{124}\]
After performing the above integral, we obtain the asymptotic expression of the scaled power spectrum of the subsystem mass fluctuation as follows:
\[\mathcal{F}(\tilde{y})\simeq\frac{(\tilde{y}\pi)^{-\frac{3}{2}}}{4}. \tag{125}\]
This implies that in the low-frequency regime, the power spectrum of the subsystem mass follows a power-law behavior \(f^{-\psi_{M}}\) with an exponent of \(\psi_{M}=3/2\). It is also evident that in the temporal domain, the correlation of the subsystem mass \(\langle M_{l}(t)M_{l}(0)\rangle_{c}\) scales as \(t^{\psi_{M}-1}\), which is qualitatively a \(t^{1/2}\) behavior.
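The approach of the finite-\(L\) sum in Eq.(122) to this low-frequency power law can again be checked numerically; a minimal sketch (with illustrative \(L\) and subsystem size \(l\)) is:

```python
# Minimal sketch evaluating the scaling function F(y~) of Eq.(122) for an
# illustrative subsystem (l = L/2) and comparing it with the low-frequency
# asymptote (pi*y~)^(-3/2)/4 of Eq.(125).
import numpy as np

L, l = 1000, 500
q = 2.0 * np.pi * np.arange(1, L) / L
omega = 2.0 * (1.0 - np.cos(q))
omega_l = 2.0 * (1.0 - np.cos(l * q))

def F(ytil):
    return np.sum((1.0 + omega / 2.0) * omega_l /
                  (L**4 * omega**2 + 4.0 * np.pi**2 * ytil**2))

for ytil in [100.0, 1000.0, 10000.0]:
    print(ytil, F(ytil), (np.pi * ytil) ** (-1.5) / 4.0)
```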
Fig. 6 displays the scaled sub-system mass power spectrum \(D^{2}\tilde{S}_{M_{l}}(f)/(2\chi L^{3})\) as a function of the scaled frequency \(L^{2}f/D\) for various chipping parameters and system sizes, along with theoretical predictions. The power spectrum exhibits notable scaling behaviors in the low-frequency range, with the red dashed line indicating a \(\tilde{y}^{-3/2}\) scaling behavior. The simulation results (solid color lines) are in excellent agreement with the theoretical predictions (black solid line).
## IV Model: MCM II
In this section, we apply a theoretical framework similar to the one developed in the previous section. In the case of MCM II, a site is selected randomly, and a portion of the chipped-off mass is transferred either to the right nearest neighbor or the left nearest neighbor, with the remaining chipped-off mass being deposited back onto the same site. This dynamic behavior results in the absence of nearest-neighbor correlations, simplifying the calculation of dynamic correlations. Since we have already provided an analytical theory for the MCM I model, we now present the significant findings for the MCM II model.
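Before turning to the results, we note that such dynamics are straightforward to simulate; the following minimal Monte Carlo sketch illustrates the idea. It assumes, purely for illustration, that a fraction \(\lambda\) of the selected site's mass is chipped off and that a uniformly random share of the chip goes to a randomly chosen nearest neighbor (the precise microscopic rule is the one defined for MCM II above); it also shows how a time-integrated bond current can be accumulated.

```python
# Minimal Monte Carlo sketch of MCM II-type dynamics (random sequential update).
# Assumed rule (for illustration only): a fraction lam of the selected site's
# mass is chipped off; a uniform random share of the chip goes to a randomly
# chosen nearest neighbour; the un-transferred part simply stays at the site.
import numpy as np

rng = np.random.default_rng(1)
L, lam, rho = 200, 0.25, 1.0
m = np.full(L, rho)          # initial masses at global density rho
Q = 0.0                      # time-integrated current across the bond (0, 1)

steps = 200 * L              # L random site updates correspond to one "sweep"
for _ in range(steps):
    i = rng.integers(L)
    chip = lam * m[i]
    share = rng.random() * chip              # portion sent to a neighbour
    j = (i + (1 if rng.random() < 0.5 else -1)) % L
    m[i] -= share
    m[j] += share
    # bond (0, 1): count transfers crossing it, signed positive from site 0 to 1
    if i == 0 and j == 1:
        Q += share
    elif i == 1 and j == 0:
        Q -= share

print("total mass (conserved):", m.sum(), "  Q_0(T) =", Q)
```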
### Time-integrated current fluctuation
Importantly, for MCM II, the time-integrated bond-current fluctuation follows the functional form as mentioned in Eq.(126). In this case, the strength of the fluctuating current is characterized by \(\Gamma_{r}=2\chi\delta_{0,r}\), where the bulk diffusion coefficient is given by \(D=\frac{\tilde{\lambda}}{4}\), and the mobility \(\chi\) remains identical to that of MCM I. Indeed, it's worth highlighting that for this model, MCM II, both the steady-state density correlation \(C_{r}^{mm}\) and the strength of the fluctuating current \(\Gamma_{r}\) are short-ranged, indicating a lack of nearest-neighbor correlation (as indicated in Table 1). Additionally, it's noteworthy that
Figure 6: The scaled subsystem mass power spectrum, \(D^{2}\tilde{S}_{M_{l}}(f)/(2\chi L^{3})\), is plotted as a function of the scaled frequency \(L^{2}f/D\) for various chipping parameters and system sizes. The lines, colored cyan (\(\lambda=0.25\), \(L=1000\)), magenta (\(\lambda=0.90\), \(L=1000\)), red (\(\lambda=0.25\), \(L=500\)), and green (\(\lambda=0.90\), \(L=500\)), represent simulation data, all obtained at a global density of \(\rho=1\). In the low-frequency range, the red dashed line demonstrates a scaling behavior of \(\mathcal{F}(\tilde{y})\) as \(\tilde{y}^{-3/2}\) [as in Eq.(125)]. The black solid line corresponds to theoretical predictions [as mentioned in Eq.(122)], providing an excellent match with the simulation data for that system size.
\(\Gamma_{r}\) and \(C_{r}^{mm}\) are related by a scaling factor given by the bulk diffusivity \(D\), as demonstrated in Eq.(34), and this relationship holds true for this model as well.
For MCM II, we proceed to calculate the time-integrated bond-current fluctuation \(C_{0}^{QQ}(t,t)\) at equal times \(t^{\prime}=t=T\) and at the same bond, i.e., \(r=0\). The resulting expression is as follows:
\[\langle\mathcal{Q}^{2}(T)\rangle=\frac{2\chi T}{L}+\frac{2\chi}{L}\sum_{n=1}^{ L-1}\frac{(1-e^{-D\omega_{n}T})}{D\omega_{n}}, \tag{126}\]
where \(\omega_{n}=2(1-\cos(2\pi n/L))\), with \(n=0,1,\cdots,L-1\). If we take the \(T\rightarrow\infty\) limit first in the above equation, we get the following expression,
\[\langle\mathcal{Q}^{2}(T)\rangle\simeq\frac{2\chi T}{L}+\frac{\chi L}{6D}= \frac{2\chi T}{L}\left[1+\mathcal{O}\left(\frac{L^{2}}{DT}\right)\right]. \tag{127}\]
It is worth noting that the term \(\chi L/6D\) in the above equation can be safely neglected since the leading contribution arises from the \(2\chi T/L\) term when \(DT\gg L^{2}\). In the smaller time regime where \(DT\ll 1\), Eq.(126) can be simplified as follows:
\[\langle\mathcal{Q}^{2}(T)\rangle=\Gamma_{0}T=2\chi T, \tag{128}\]
where the strength of the fluctuating current \(\Gamma_{0}\) is equal to \(2\chi\), which differs from the value found in the first model, MCM I (as demonstrated in Eq.(33)). Furthermore, it's worth highlighting that in the scaling regime where \(DT\gg 1\), we've observed that the scaling function \(\mathcal{W}(y)\), with the scaling variable \(y=DT/L^{2}\), maintains the same form as presented in Eq.(68), as observed in MCM I. In Figure 7, we present the time-integrated bond-current fluctuation, denoted as \(\langle\mathcal{Q}_{i}^{2}(T)\rangle\), as a function of time \(T\) in the top panel. We highlight the early-time behavior of \(\langle\mathcal{Q}^{2}(T)\rangle\), which scales as \(\Gamma_{0}T\) and is described in Eq.(128). Additionally, in the bottom panel of this figure, we showcase the scaled fluctuation of the time-integrated current, represented as \(\langle\mathcal{Q}_{i}^{2}(T)\rangle D/(2\chi L)\), as a function of the scaled time \(DT/L^{2}\). We include guiding lines that illustrate the asymptotic behavior of the scaling function \(\mathcal{W}(y)\).
The overall growth of the time-integrated bond-current fluctuation exhibits three distinct asymptotic regimes, as described below:
\[\langle\mathcal{Q}^{2}(T)\rangle=\begin{cases}2\chi T&\text{for }DT\ll 1\\ \frac{2\chi}{\sqrt{D\pi}}T^{\frac{1}{2}}&\text{for }1\ll DT\ll L^{2}\\ \frac{2\chi T}{L}&\text{for }DT\gg L^{2}\end{cases} \tag{129}\]
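These three regimes can be read off from Eq.(126) directly; a minimal numerical sketch (with illustrative \(D\), \(\chi\) and \(L\)) is:

```python
# Minimal sketch evaluating Eq.(126) for <Q^2(T)> and comparing it with the
# three asymptotic regimes of Eq.(129).  D, chi, L are illustrative values.
import numpy as np

D, chi, L = 0.25, 1.0, 1000
omega = 2.0 * (1.0 - np.cos(2.0 * np.pi * np.arange(1, L) / L))

def Q2(T):
    return 2.0 * chi * T / L + (2.0 * chi / L) * np.sum((1.0 - np.exp(-D * omega * T)) / (D * omega))

for T in [0.1, 1e3, 1e8]:                      # DT << 1, 1 << DT << L^2, DT >> L^2
    print(T, Q2(T),
          2.0 * chi * T,                        # early-time regime
          2.0 * chi * np.sqrt(T / (np.pi * D)), # intermediate, sub-diffusive regime
          2.0 * chi * T / L)                    # late-time, diffusive regime
```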
Figure 7: _In top panel:_ We present the time-integrated bond-current fluctuation, denoted as \(\langle\mathcal{Q}_{i}^{2}(T)\rangle\), plotted as a function of time \(T\) for various chipping parameters and system sizes. For both panels, the cyan (\(\lambda=0.25\), \(L=1000\)), magenta (\(\lambda=0.90\), \(L=1000\)), red (\(\lambda=0.25\), \(L=500\)), and green (\(\lambda=0.90\), \(L=500\)) lines represent simulation data at a global density of \(\rho=1\). The red dashed line corresponds to the behavior \(\Gamma_{0}T\) for \(\lambda=0.90\) (refer to Eq.(128)), while the blue and green dashed lines represent sub-diffusive growth, approximately scaling as \(\sim T^{1/2}\), and diffusive growth, approximately scaling as \(\sim T\), respectively, as discussed in Eq.(129). _In bottom panel:_ The plot displays the scaled time-integrated bond-current fluctuation, \(\langle\mathcal{Q}_{i}^{2}(T)\rangle D/(2\chi L)\), as a function of scaled time \(DT/L^{2}\) for various chipping parameters and system sizes. The two dashed lines serve as guides, indicating that \(\mathcal{W}(y)\) exhibits sub-diffusive behavior as \(\sim y^{1/2}\) (in blue) at early times, followed by diffusive growth as \(\sim y\) (in green) at later times [as in Eq.(70)]. The solid color lines illustrate the simulation results, while the black solid line represents the theoretical results obtained from Eq.(126) upon suitable scaling, and this theoretical curve perfectly matches the simulation data.
### Instantaneous bond-current fluctuations
Using the theory presented in Section III, we have computed the power spectrum of the instantaneous bond current \(\mathcal{J}_{i}(t)\) in MCM II. The expression for the power spectrum is provided below:
\[\tilde{S}_{\mathcal{J}}(f)=\frac{2\chi(\rho)}{L}\sum_{q}\frac{4\pi^{2}f^{2}}{D^ {2}\omega_{q}^{2}+4\pi^{2}f^{2}}, \tag{130}\]
where, \(\omega_{q}=2(1-\cos q)\), with \(q=2\pi n/L\) and \(n=1,2,\cdots,L-1\). The above expression exhibits an asymptotic behavior \(\mathcal{H}(\tilde{y})\sim\tilde{y}^{1/2}\) in the lower frequency regime, similar to MCM I as mentioned in Eq.(99). Furthermore, \(\mathcal{H}(\tilde{y})\) diverges as \(L-1\) in the limit \(\tilde{y}\to\infty\) (much larger than \(L^{2}\)).
In Figure 8, the scaled power spectrum of the instantaneous bond current, represented as \(L\tilde{S}_{\mathcal{J}}(f)/(2\chi)\), is plotted against the scaled frequency \(L^{2}f/D\). Both simulation and theoretical results are shown, and they exhibit a perfect match. Additionally, in the lower frequency range, we have included a guiding line \(\sim\tilde{y}^{1/2}\), which is obtained from the integral representation of Eq.(130). This behavior is consistent with what was mentioned earlier in the context of MCM I.
### Subsystem mass fluctuations
For MCM II, we have also computed another temporal quantity, the sub-system mass power spectrum \(S_{M_{l}}(f)\). This quantity is expressed as the following summation:
\[S_{M_{l}}(f)=\frac{2\chi}{L}\sum_{q}\frac{\omega_{lq}}{\omega_{q}^{2}D^{2}+4\pi^{2}f^{2}}. \tag{131}\]
Additionally, we have calculated the scaling function \(\mathcal{F}(\tilde{y})\) associated with the power spectrum of the sub-system mass. Notably, it exhibits a \(\tilde{y}^{-3/2}\) behavior in the low-frequency limit, mirroring the behavior observed in model MCM I as previously mentioned in Eq.(125). In Figure 9, the scaled subsystem mass power spectrum, denoted as \(D^{2}\tilde{S}_{M_{l}}(f)/(2\chi L^{3})\), is plotted against the scaled frequency \(L^{2}f/D\) for various chipping parameters and system sizes. The red dashed line exhibits a \(\tilde{y}^{-3/2}\) scaling behavior in the low-frequency regime. The solid color lines represent the simulation results, and the black solid line corresponds to the theoretical predictions from Eq.(131), which matches the simulation data when suitably scaled.
## V Mcm Iii
In this section, we have computed the dynamic correlation for MCM III, which has been previously investigated to comprehend the distribution of wealth in a
Figure 8: The scaled power spectrum of instantaneous currents, \(L\tilde{S}_{\mathcal{J}}(f)/(2\chi)\), is plotted as a function of scaled frequency \(L^{2}f/D\) for various chipping parameters and system sizes. The cyan solid line corresponds to \(\lambda=0.25\) and \(L=1000\), the magenta solid line corresponds to \(\lambda=0.90\) and \(L=1000\), the red solid line corresponds to \(\lambda=0.25\) and \(L=500\), and the green solid line corresponds to \(\lambda=0.90\) and \(L=500\), all at a global density of \(\rho=1\). The blue dashed line shows \(\tilde{y}^{1/2}\) scaling behavior in the low-frequency regime as in Eq.(99) and red dashed lines represent \(L\tilde{S}_{\mathcal{J}}(f)/(2\chi)\) diverges as system size \(L-1\) at the high-frequency limit. The solid color lines represent the simulation results, while the black solid line represents the theoretical predictions of Eq.(130) upon suitable scaling, which fully matches the simulation data.
Figure 9: The scaled subsystem mass power spectrum, \(D^{2}\tilde{S}_{M_{l}}(f)/(2\chi L^{3})\), is plotted as a function of scaled frequency \(L^{2}f/D\) for various chipping parameter and system sizes. The cyan solid line corresponds to \(\lambda=0.25\) and \(L=1000\), the magenta solid line corresponds to \(\lambda=0.90\) and \(L=1000\), the red solid line corresponds to \(\lambda=0.25\) and \(L=500\), and the green solid line corresponds to \(\lambda=0.90\) and \(L=500\), all at a global density of \(\rho=1\). The red dashed line shows a \(\tilde{y}^{-3/2}\) scaling behavior in the low-frequency regime. The solid color lines represent the simulation results, while the black solid line represents the theoretical predictions of Eq.(131) upon suitable scaling, which matches the simulation data.
population[48; 49]. In this model, each site retains a fraction of its own mass, and the remaining mass is distributed among its nearest neighbor sites. The distribution of the mixed masses among these sites is randomized. It is important to note that the dynamic correlations for this model, MCM III, turn out to be similar to those of model MCM II upon appropriate scaling. This similarity arises from the fact that both of these models exhibit similar density correlation behaviors due to their respective microscopic dynamics.
### Time-integrated current fluctuation
The time-integrated bond current fluctuation in MCM III takes on an identical form to that of MCM II, as shown in Eq.(126). What stands out is that the strength of the fluctuating current \(\Gamma_{0}\) and the mobility \(\chi\) are the same as in MCM II. The key distinction lies in the bulk diffusivity, which is given by \(D=\frac{\tilde{\lambda}}{2}\) for MCM III.
In Figure 10, we present the time-integrated bond-current fluctuation, denoted as \(\langle\mathcal{Q}_{i}^{2}(T)\rangle\), as a function of time \(T\) in the top panel. The early-time behavior of \(\langle\mathcal{Q}^{2}(T)\rangle\), which scales as \(\Gamma_{0}T\) according to Eq.(128), is highlighted. In the bottom panel, we display the scaled fluctuation of the time-integrated current, \(\langle\mathcal{Q}_{i}^{2}(T)\rangle D/(2\chi L)\), as a function of scaled time \(DT/L^{2}\). Solid color lines represent simulation results, while the black solid line represents the theoretical prediction obtained from Eq.(126), which closely matches the simulation data when appropriately scaled. Guiding lines illustrate the asymptotic behavior of the scaling function \(\mathcal{W}(y)\).
### Instantaneous bond-current fluctuations
For MCM III, we have calculated the power spectrum of the instantaneous current, and interestingly, it exhibits the same form as mentioned in Eq.(130), with \(D=\tilde{\lambda}/2\) and the mobility being the same as in the above-mentioned MCMs (MCM I and MCM II).
In Fig. 11, the scaled power spectrum of the instantaneous bond current, represented as \(L\tilde{S}_{\mathcal{J}}(f)/(2\chi)\), is plotted
Figure 10: _In top panel:_ The time-integrated bond-current fluctuation, \(\langle\mathcal{Q}_{i}^{2}(T)\rangle\), plotted as a function of time \(T\) for various chipping parameters and system sizes. For both panels, the cyan (\(\lambda=0.25\), \(L=1000\)), magenta (\(\lambda=0.90\), \(L=1000\)), red (\(\lambda=0.25\), \(L=500\)), and green (\(\lambda=0.90\), \(L=500\)) lines represent simulation data for global density \(\rho=1\). The red dashed line represents the behavior \(\Gamma_{0}T\) for \(\lambda=0.90\) [see Eq.(128)], while the blue and green dashed lines represent sub-diffusive \(\sim T^{1/2}\) and diffusive \(\sim T\) growth, respectively, as mentioned in Eq.(129). _In bottom panel:_ The plot shows the scaled time-integrated bond-current fluctuation, \(\langle\mathcal{Q}_{i}^{2}(T)\rangle D/(2\chi L)\), as a function of scaled time \(DT/L^{2}\) for different chipping parameters and system sizes. The two dashed lines serve as guides, indicating that \(\mathcal{W}(y)\) exhibits sub-diffusive behavior as \(\sim y^{1/2}\) (in blue) at early times, followed by diffusive growth as \(\sim y\) (in green) at later times [as in Eq.(70)]. Solid color lines represent simulation results, while the black solid line represents the theoretical prediction obtained from Eq.(126), which perfectly matches the simulation data when suitably scaled.
against the scaled frequency \(L^{2}f/D\). Both simulation and theoretical results are shown, and they exhibit a perfect match. Additionally, in the lower frequency range, we have included a guiding line \(\sim\tilde{y}^{1/2}\). This behavior is consistent with what was mentioned earlier in the context of MCM I [see Eq.(99)]. Notably, both models MCM II and MCM III show identical behavior in the scaled plot, both from simulation and theory, indicating a strong agreement between the two.
### Subsystem Mass fluctuations
Additionally, we have calculated the power spectrum of the subsystem mass, and remarkably, it follows the same form as mentioned in Eq.(131). In Figure 12, we plot the scaled subsystem mass power spectrum, denoted as \(D^{2}\tilde{S}_{M_{l}}(f)/(2\chi L^{3})\), against the scaled frequency \(L^{2}f/D\) for various chipping parameters and system sizes. Solid color lines represent simulation results, and the black solid line corresponds to theoretical predictions from Eq.(131), matching the simulation data when suitably scaled. The red dashed line shows a \(\tilde{y}^{-3/2}\) scaling behavior at low frequencies, agreeing with MCM I [Eq.(125)].
## VI Comparison of models
In this section, we present a comprehensive comparative study of various dynamical quantities for the three models: MCM I, MCM II, and MCM III. Specifically, we investigate the scaled time-integrated bond-current fluctuation \(\mathcal{W}(y)\) as a function of scaled time \(y=DT/L^{2}\), the scaled power spectrum of instantaneous currents \(\mathcal{H}(\tilde{y})\), and the scaled subsystem mass power spectrum \(\mathcal{F}(\tilde{y})\) as a function of scaled frequency \(\tilde{y}=fL^{2}/D\) for these three models. In Fig.(13), we present comparative plots of these dynamical quantities for the three models. Remarkably, despite the different microscopic dynamics of the models, we find that there exists an appropriate scaling regime, where all three models in fact exhibit qualitatively quite similar behavior. This observation suggests that certain universal features are shared among these models, and presumably diffusive systems in general, even when the underlying dynamical mechanisms are completely different.
Furthermore, we note that outside the scaling regime, deviations are observed between the model with nearest neighbor correlation (MCM I) and the models without nearest neighbor correlation (MCM II and MCM III). These deviations are accurately captured in both the simulation data and theoretical predictions, providing insights into the distinct dynamical properties of each model.
Table 1 provides a concise overview of the similarities
Figure 11: The scaled power spectrum of instantaneous currents, \(L\tilde{S}_{\tilde{\jmath}}(f)/(2\chi)\), is plotted as a function of scaled frequency \(L^{2}f/D\) for various chipping parameters and system sizes. The cyan solid line corresponds to \(\lambda=0.25\) and \(L=1000\), the magenta solid line corresponds to \(\lambda=0.90\) and \(L=1000\), the red solid line corresponds to \(\lambda=0.25\) and \(L=500\), and the green solid line corresponds to \(\lambda=0.90\) and \(L=500\), all at a global density of \(\rho=1\). The blue dashed line shows \(\tilde{y}^{1/2}\) scaling behavior in the low-frequency regime and the red dashed lines represent the power spectrum diverges as \(L-1\) at the high-frequency limit. The solid color lines represent the simulation results, while the black solid line represents the theoretical predictions of Eq.(130) upon suitable scaling, which fully matches the simulation data.
Figure 12: The scaled subsystem mass power spectrum, \(D^{2}\tilde{S}_{M_{l}}(f)/(2\chi L^{3})\), is plotted as a function of scaled frequency \(L^{2}f/D\) for various chipping parameter and system sizes. The cyan solid line corresponds to \(\lambda=0.25\) and \(L=1000\), the magenta solid line corresponds to \(\lambda=0.90\) and \(L=1000\), the red solid line corresponds to \(\lambda=0.25\) and \(L=500\), and the green solid line corresponds to \(\lambda=0.90\) and \(L=500\), all at a global density of \(\rho=1\). The red dashed line shows a \(\tilde{y}^{-3/2}\) scaling behavior in the low-frequency regime. The solid color lines represent the simulation results, while the black solid line represents the theoretical predictions of Eq.(131) upon suitable scaling, which matches the simulation data.
and differences among these three models in terms of their key parameters and quantities related to dynamical correlations. To begin with, it highlights the transport coefficients, the bulk diffusion coefficient \(D\) and the mobility \(\chi\), for each model. Notably, \(D\) is constant (independent of density) for all three models; however, the mobility \(\chi\) is density dependent (proportional to the square of the density). The table displays the density correlation function \(C_{r}^{mm}\) for these models. MCM I possesses nearest-neighbor correlations, whereas MCM II and MCM III lack such correlations. The table also presents the strength \(\Gamma_{r}\) of the fluctuating ("noise") current for the models, with each model having a distinct value. However, it is important to note that the relationship \(C_{r}^{mm}=\Gamma_{r}/2D\) holds true for all models; presumably this relation is valid for diffusive systems in general. Lastly, the table compares the source term \(A_{r}\) in the time evolution equation of the mass-current correlation. For completeness, we also provide the Fourier modes of this source term, denoted as \(\tilde{f}_{q}\), which prove to be useful in the explicit calculations of various dynamic quantities.
Figure 13: (a) Scaled time-integrated bond-current fluctuation, \(\langle\mathcal{Q}_{i}^{2}(T)\rangle D/(2\chi L)\), as a function of scaled time \(DT/L^{2}\) for three models: MCM I (blue solid line), MCM II (magenta solid line), and MCM III (orange solid line). The two dashed lines serve as guides, indicating that \(\mathcal{W}(y)\) exhibits sub-diffusive behavior as \(\sim y^{1/2}\) (in blue) at early times, followed by diffusive growth as \(\sim y\) (in green) at later times. (b) Scaled power spectrum of instantaneous currents, \(L\tilde{S}_{\mathcal{J}}(f)/(2\chi)\), plotted as a function of scaled frequency \(L^{2}f/D\) for various chipping parameter and system sizes. The blue dashed line represents the \(\tilde{y}^{1/2}\) scaling behavior of \(\mathcal{H}(\tilde{y})\) in the low-frequency regime. In contrast, the power spectrum exhibits divergence, reaching \(2L-1\) (red dashed line) for MCM I and \(L-1\) (line dashed line) for MCM II and MCM III in the high-frequency limit. (c) Scaled subsystem mass power spectrum, \(D^{2}\tilde{S}_{M_{l}}(f)/(2\chi L^{3})\), plotted as a function of scaled frequency \(L^{2}f/D\) for various chipping parameters and system sizes. The red dashed line shows a \(\tilde{y}^{-3/2}\) scaling behavior of \(\mathcal{F}(\tilde{y})\) in the low-frequency regime. In all of these panels, we have used a fixed chipping parameter \(\lambda=0.5\), global density \(\rho=1.0\), and a system size of \(L=1000\). Simulation data for MCM I, MCM II, and MCM III are represented by blue, magenta, and orange solid lines, respectively. Black dashed lines correspond to MCM I theory, while black solid lines represent MCM II and MCM III theory, which are identical.
\begin{table}
\begin{tabular}{|p{85.4pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline
**Quantity** & **MCM I** & **MCM II** & **MCM III** \\ \hline Bulk-diffusivity: **D** & \(\frac{\tilde{\lambda}}{2}\) & \(\frac{\tilde{\lambda}}{4}\) & \(\frac{\tilde{\lambda}}{2}\) \\ \hline Mobility: \(\chi\) & \(\frac{\tilde{\lambda}^{2}\rho^{2}}{2(3-2\lambda)}\) & \(\frac{\tilde{\lambda}^{2}\rho^{2}}{2(3-2\lambda)}\) & \(\frac{\tilde{\lambda}^{2}\rho^{2}}{2(3-2\lambda)}\) \\ \hline Density correlation: \(\mathbf{C_{r}^{mm}}\) & \(\frac{\tilde{\lambda}\rho^{2}}{2(3-2\lambda)}[4\delta_{0,r}-\delta_{r,1}-\delta_{r,-1}]\) & \(\frac{2\tilde{\lambda}}{(3-2\lambda)}\rho^{2}\delta_{0,r}\) & \(\frac{\tilde{\lambda}}{(3-2\lambda)}\rho^{2}\delta_{0,r}\) \\ \hline \(\mathbf{\Gamma_{r}}\) & \(\frac{\tilde{\lambda}^{2}\rho^{2}}{2(3-2\lambda)}[4\delta_{0,r}-\delta_{r,1}-\delta_{r,-1}]\) & \(\frac{\tilde{\lambda}^{2}\rho^{2}}{(3-2\lambda)}\delta_{r,0}\) & \(\frac{\tilde{\lambda}^{2}\rho^{2}}{(3-2\lambda)}\delta_{r,0}\) \\ \hline \(\mathbf{A_{r}}\) & \(\frac{\tilde{\lambda}^{2}\rho^{2}}{4(3-2\lambda)}[-5(\delta_{r,0}-\delta_{r,-1})+(\delta_{r,1}-\delta_{r,-2})]\) & \(-\frac{1}{2}\frac{\tilde{\lambda}^{2}\rho^{2}}{(3-2\lambda)}(\delta_{r,0}-\delta_{r,-1})\) & \(-\frac{1}{2}\frac{\tilde{\lambda}^{2}\rho^{2}}{(3-2\lambda)}(\delta_{r,0}-\delta_{r,-1})\) \\ \hline \(\mathbf{\tilde{f}_{q}}\) & \(-\frac{\tilde{\lambda}^{2}\rho^{2}}{2(3-2\lambda)}(1-e^{-iq})\Big{(}1+\frac{\omega_{q}}{2}\Big{)}\) & \(-\frac{\tilde{\lambda}^{2}\rho^{2}}{2(3-2\lambda)}(1-e^{-iq})\) & \(-\frac{\tilde{\lambda}^{2}\rho^{2}}{2(3-2\lambda)}(1-e^{-iq})\) \\ \hline \end{tabular}
\end{table}
Table 1: We have highlighted key quantities related to dynamical correlations in these three models: MCM I, MCM II, and MCM III. These quantities include transport coefficients such as bulk diffusivity \(D\) and mobility \(\chi\). Additionally, we have mentioned steady-state density correlation \(C_{r}^{mm}\), the strength of fluctuating current \(\Gamma_{r}\), the source term in the time evolution equation of mass-current correlation at equal times \(A_{r}\), and its Fourier mode \(\tilde{f}_{q}\). These quantities are essential for deriving dynamical correlations in these models.
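The internal consistency of the entries in Table 1, for instance the relations \(C_{r}^{mm}=\Gamma_{r}/2D\) and \(\sum_{r}\Gamma_{r}=2\chi\), can be checked symbolically; a minimal sympy sketch (encoding the table entries directly, with \(\tilde{\lambda}\) and \(\lambda\) treated as independent symbols) is:

```python
# Minimal sketch checking two relations satisfied by the Table 1 entries:
# C_r^mm = Gamma_r/(2D) and sum_r Gamma_r = 2*chi, for MCM I, II and III.
import sympy as sp

lam_t, lam, rho = sp.symbols('lambda_tilde lambda rho', positive=True)
chi = lam_t**2 * rho**2 / (2 * (3 - 2 * lam))

# For each model: (D, coefficients of Gamma_r, coefficients of C_r^mm), where the
# integer coefficients multiply lam_t^2*rho^2/(2(3-2*lam)) and lam_t*rho^2/(2(3-2*lam)).
models = {
    'MCM I':   (lam_t / 2, {0: 4, 1: -1, -1: -1}, {0: 4, 1: -1, -1: -1}),
    'MCM II':  (lam_t / 4, {0: 2},                {0: 4}),
    'MCM III': (lam_t / 2, {0: 2},                {0: 2}),
}
for name, (D, g, c) in models.items():
    Gamma = {r: v * lam_t**2 * rho**2 / (2 * (3 - 2 * lam)) for r, v in g.items()}
    Cmm   = {r: v * lam_t    * rho**2 / (2 * (3 - 2 * lam)) for r, v in c.items()}
    ok1 = all(sp.simplify(Cmm[r] - Gamma[r] / (2 * D)) == 0 for r in Gamma)
    ok2 = sp.simplify(sum(Gamma.values()) - 2 * chi) == 0
    print(name, ok1, ok2)
```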
## VII Summary and conclusion
In this paper, we exactly calculate dynamic correlation functions for mass and current in a broad class of one dimensional conserved-mass transport processes, called mass chipping models (MCMs). These systems violate detailed balance and have nontrivial spatial structures; indeed, their steady state measures are not described by the Boltzmann-Gibbs distribution, and are a priori not known. Overall, we find three temporal growth regimes for the fluctuation of the time-integrated bond current. Initially, for all three models, the time-integrated current fluctuation grows linearly with time \(T\), with a proportionality factor of \(\Gamma_{0}\), which, however, is a model-dependent quantity. In the intermediate, but large, time regime \(1/D\ll T\ll L^{2}/D\), with \(D\) being the bulk diffusion coefficient, we found subdiffusive (\(T^{1/2}\)) growth of the current fluctuation, where the density-dependent prefactor of \(T^{1/2}\) is exactly determined. This was again followed by a linear or diffusive growth of current fluctuation in the large diffusive time region, \(T\gg L^{2}/D\). Furthermore, we exactly calculate a model _independent_ scaling function \(\mathcal{W}(y)\equiv D\langle\mathcal{Q}_{i}^{2}(T)\rangle/2\chi L\) as a function of a scaling variable \(y=DT/L^{2}\), where \(D\), \(\chi\) and \(L\) are the bulk-diffusion coefficient, mobility, and system size \(L\), respectively.
We calculate the dynamic correlation function for the instantaneous current and show that the correlation function decays as \(t^{-3/2}\); notably, the correlation function has a delta-correlated part at \(t=0\) and its value for \(t>0\) is negative. Indeed the negative part of the current correlation function is directly responsible for the subdiffusive growth of the bond current fluctuation in the thermodynamic limit. The power-law behavior of dynamic current correlations is consistent with the exact calculation of the current power spectrum, \(\tilde{S}_{\mathcal{J}}(f)\), which has a low-frequency asymptotic behavior \(\tilde{S}_{\mathcal{J}}(f)\sim f^{\psi_{\mathcal{J}}}\) with \(\psi_{\mathcal{J}}=1/2\). Furthermore, we exactly obtain the scaling function \(\mathcal{H}(\tilde{y})\), which represents the rescaled power spectrum of the current \(L\tilde{S}_{\mathcal{J}}(f)/2\chi\) as a function of the scaled variable \(\tilde{y}=fL^{2}/D\).
We also calculate the scaled subsystem mass fluctuation, which is shown to be identically equal to the suitably scaled dynamic fluctuation of the space-time integrated current \(\mathcal{Q}_{sub}(l,T)\), divided by a factor of \(2D\), with \(D\) being the bulk-diffusion coefficient [see Eq.(107)]. This particular fluctuation relation in the context of MCMs is nothing but a slightly modified, nonequilibrium version of the celebrated equilibrium Einstein relation. It should be noted that the fluctuation relation requires the intensive fluctuation of the space-time integrated current to be calculated in the thermodynamic limit, i.e., by first taking the infinite subsystem size limit \(l\rightarrow\infty\), followed by the limit of infinite time \(T\rightarrow\infty\). In this specific order of limits, we also show that \(\lim_{l\rightarrow\infty}\lim_{T\rightarrow\infty}\langle\mathcal{Q}_{sub}^{2}(l,T)\rangle/lT\) is identically equal to the spatial sum of the strength \(\Gamma_{r}\) of the fluctuating bond current. As a simple consequence of the Einstein relation, the correlation functions of mass and fluctuating current are related by \(C_{r}^{mm}=\Gamma_{r}/2D\). We also calculate the unequal-time correlation function of the total mass of a subsystem and its power spectrum \(S_{M_{l}}\), which decays as \(S_{M_{l}}\sim f^{-\psi_{M}}\), with \(\psi_{M}=3/2\), consistent with the scaling relation \(\psi_{\mathcal{J}}=2-\psi_{M}\)[44]. The scaled power spectrum of the subsystem mass, \(D^{2}S_{M_{l}}(f)/2\chi L^{3}\), can also be expressed in terms of a scaling function \(\mathcal{F}(\tilde{y})\) with scaling variable \(\tilde{y}=fL^{2}/D\).
Notably, the qualitative behavior of the time-integrated bond-current fluctuations is similar for all three models, though the prefactors of the temporal growth laws depend on the details of the dynamical rules. In the small-frequency (large-time) domain, the prefactors of the intermediate-time subdiffusive and long-time diffusive growth of the bond current fluctuations can be expressed in terms of the bulk-diffusion coefficient and the particle mobility. However, the large-frequency behavior is not universal in the sense that it depends on the local microscopic properties. More specifically, the different spatial structures of these models manifest in the small-time growth of the time-integrated bond-current fluctuation \(\langle\mathcal{Q}_{i}^{2}(T)\rangle\simeq\Gamma_{0}T\), where \(\Gamma_{0}\) is proportional to the single-site mass fluctuation \(\langle m_{i}^{2}\rangle\), which is considerably different in these models.
A few remarks are in order. Characterizing the dynamic properties of interacting-particle systems through microscopic calculations is an important, though difficult, problem in statistical physics. The variants of mass chipping models discussed here have nontrivial steady states that, unlike the SEP on a periodic domain, are not described by the Boltzmann-Gibbs distribution and, moreover, are a priori unknown. As mentioned above, depending on their dynamical rules, these models differ from each other in the details of their spatial structures. Model MCM I possesses nonzero spatial correlations, whereas MCM II and III have vanishing neighboring correlations. However, all these models share one noteworthy aspect in common: the bulk-diffusion coefficient, like in the SEP [35], is _independent_ of density. In fact, this is precisely why the hierarchy of current and mass correlations closes and thus the models are exactly solvable. Despite the fact that the mass chipping models have a nontrivial steady state, these models, due to their simple dynamical rules, do not exhibit a phase transition, or any singularities in the transport coefficients. However, through simple variations in the dynamical rules, it is possible to have nontrivial (singular) macroscopic behavior in some of the variants of these models. Indeed, it would be quite interesting to characterize the dynamic properties of current and mass through microscopic calculations in higher dimensions, for systems with more than one conserved density [34] and when the transport properties are singular or anomalous [44].
## Acknowledgement
We thank Tanmoy Chakraborty, Arghya Das and Anupam Kundu for helpful discussions at various stages of the project. P.P. gratefully acknowledges the Science and Engineering Research Board (SERB), India, under Grant No. MTR/2019/000386, for financial support. A.M. acknowledges financial support from the Department of Science and Technology, India [Fellowship No. DST/INSPIRE Fellowship/2017/IF170275] for part of the work carried out under his senior research fellowship.
|
2309.09754 | Computational Exploration of Magnetic Saturation and Anisotropy Energy
for Nonstoichiometric Ferrite Compositions | A grand challenge in materials research is identifying the relationship
between composition and performance. Herein, we explore this relationship for
magnetic properties, specifically magnetic saturation (M$_s$) and magnetic
anisotropy energy (MAE) of ferrites. Ferrites are materials derived from
magnetite (which has the chemical formulae Fe$_3$O$_4$) that comprise metallic
elements in some combination such as Fe, Mn, Ni, Co, Cu and Zn. They are used
in a variety of applications such as electromagnetism, magnetic hyperthermia,
and magnetic imaging. Experimentally, synthesis and characterization of
magnetic materials is time consuming. In order to create insight to help guide
synthesis, we compute the relationship between ferrite composition and magnetic
properties using density functional theory (DFT). Specifically, we compute
M$_s$ and MAE for 571 ferrite structures with the formulae
M1$_x$M2$_y$Fe$_{3-x-y}$O$_4$, where M1 and M2 can be Mn, Ni, Co, Cu and/or Zn
and 0 $\le$ x $\le$ 1 and y = 1 - x. By varying composition, we were able to
vary calculated values of M$_s$ and MAE by up to 9.6$\times$10$^5$ A m$^{-1}$
and 14.1$\times$10$^5$ J m$^{-3}$, respectively. Our results suggest that
composition can be used to optimize magnetic properties for applications in
heating, imaging, and recording. This is mainly achieved by varying M$_s$, as
these applications are more sensitive to variation in M$_s$ than MAE. | Venkata Rohit Punyapu, Jiazhou Zhu, Paul Meza-Morales, Anish Chaluvadi, O. Thompson Mefford, Rachel B. Getman | 2023-09-18T13:28:50Z | http://arxiv.org/abs/2309.09754v1 | Computational Exploration of Magnetic Saturation and Anisotropy Energy for Nonstoichiometric Ferrite Compositions
###### Abstract
A grand challenge in materials research is identifying the relationship between composition and performance. Herein, we explore this relationship for magnetic properties, specifically magnetic saturation (M\({}_{\mathrm{s}}\)) and magnetic anisotropy energy (MAE) of ferrites. Ferrites are materials derived from magnetite (which has the chemical formulae Fe\({}_{3}\)O\({}_{4}\)) that comprise metallic elements in some combination such as Fe, Mn, Ni, Co, Cu and Zn. They are used in a variety of applications such as electromagnetism, magnetic hyperthermia, and magnetic imaging. Experimentally, synthesis and characterization of magnetic materials is time consuming. In order to create insight to help guide synthesis, we compute the relationship between ferrite composition and magnetic properties using density functional theory (DFT). Specifically, we compute M\({}_{\mathrm{s}}\) and MAE for 571 ferrite structures with the formulae M1\({}_{\mathrm{x}}\)M2\({}_{\mathrm{y}}\)Fe\({}_{3\mathrm{-x-y}}\)O\({}_{4}\), where M1 and M2 can be Mn, Ni, Co, Cu and/or Zn and 0 \(\leq\) x \(\leq\) 1 and y = 1 - x. By varying composition, we were able to vary calculated values of M\({}_{\mathrm{s}}\) and MAE by up to 9.6\(\times\)10\({}^{5}\) A m\({}^{-1}\) and 14.08\(\times\)10\({}^{5}\) J m\({}^{-3}\), respectively. Our results suggest that composition can be used to optimize magnetic properties for applications in heating, imaging, and recording. This is mainly achieved by varying M\({}_{\mathrm{s}}\), as these applications are more sensitive to variation in M\({}_{\mathrm{s}}\) than MAE.
## 1 Introduction
The magnetite-derived ferrites composed of metallic elements in some combination such as Fe, Mn, Ni, Co, Cu and Zn have been widely studied for their structure and magnetic properties [1, 2, 3, 4, 5, 6]. Typical ferrites have spinel-type (normal, inverse) crystal structures with O\({}^{2-}\) anions packed in a face-centered cubic (fcc) arrangement, such that there are two types of sites between them, i.e., tetrahedrally and octahedrally coordinated sites (see Figure 1). The general empirical formula for the stoichiometric class of ferrites is M\({}_{\mathrm{x}}\)Fe\({}_{3\mathrm{-x}}\)O\({}_{4}\), where M can be different substituent metals (e.g., Mn, Ni, Co, Cu, Zn and other divalent metal cations) and 0 \(\leq\) x \(\leq\) 3. Nonstoichiometric ferrites, i.e., materials with the general formula M1\({}_{\mathrm{x}}\)M2\({}_{\mathrm{y}}\)Fe\({}_{3\mathrm{-x-y}}\)O\({}_{4}\), where M1 and M2 can be Mn, Ni, Co, Cu and/or Zn and 0 \(\leq\) x \(\leq\) 1 and y = 1 - x, offer even greater compositional diversity.
Ferrite nanoparticles are widely used in the cores of transformers, antenna rods, electromagnets, and magnets used in imaging applications [7, 8, 9, 10, 11]. Another well-studied application is magnetically mediated energy delivery, which has most often been applied to biomedical devices (e.g., heating a cell via magnetic hyperthermia, i.e., MagMED) and more recently been applied to catalysts (e.g., supplying the heat needed to break and form chemical bonds via magnetic induction heating; i.e., MIH) [12, 13, 14, 15, 16]. Specifically, an oscillating magnetic field is applied, and hysteresis in the magnetic properties of
the nanoparticle during oscillation results in conversion of magnetic energy into thermal energy [8, 11, 12, 13]. The heat generated in hysteretic losses is due to Neel and Brown relaxations, which are determined by the magnetic saturation (M\({}_{\mathrm{s}}\)), magnetic anisotropy energy (MAE), and the size of the nanoparticle [17, 18]. M\({}_{\mathrm{s}}\) and MAE, in turn, are determined by the particle composition [19, 20, 21, 22].
A benefit to ferrites is that the compositions can be tuned. Indeed, multiple groups have investigated the influence of ferrite composition on performance for magnetic hyperthermia [17, 23]. Specifically, these groups showed how varying composition results in values of M\({}_{\mathrm{s}}\) and MAE that vary by up to a full order of magnitude. They further estimated the potential for heat generation as a function of size and composition, showing that this value could be varied by two orders of magnitude. Composition is important in other applications as well. For example, Co, Ni, Fe and Cu ferrites perform well for catalysis due to large hysteresis losses [12, 13, 14, 16, 21, 24], while Zn, Co and Mn ferrites are often used in magnetic resonance imaging (MRI) because of the resulting high M\({}_{\mathrm{s}}\) from their substitution [4, 11, 25]. An understanding of how composition influences magnetic properties is hence imperative to maximizing performance for the variety of applications that utilize ferrites and other magnetic materials.
While M\({}_{\mathrm{s}}\) and MAE as well as other magnetic properties can be measured experimentally, the process is laborious; often requiring multiple attempts at synthesis to achieve the expected structure in addition to state-of-the-art measurement techniques to learn the magnetic properties [26, 27, 28]. Further, general rules linking magnetic properties to composition do not yet exist [1, 17, 20, 25, 29, 30]. Filling this knowledge gap would greatly reduce the time and money required to design magnetic materials and devices for a wide range of applications; however, it would be impossible to accomplish this with experiments alone. On the other hand, computational approaches can provide estimates of magnetic properties relatively quickly. In such approaches, magnetic properties of model structures are computed with quantum mechanics. While these model structures are simplifications of the structures used in real-life applications - which is required for computational feasibility - they provide useful estimates of the magnetic properties of a given composition and are vastly more efficient at doing so than experiments. A database of magnetic material compositions and their associated magnetic properties would greatly facilitate design of magnetic materials for a variety of applications.
To this end, in this work we generate a database of ferrite compositions and their associated magnetic properties. Specifically, we compute values of M\({}_{\mathrm{s}}\) and MAE for singly and doubly substituted non-stoichiometric ferrites using density functional theory (DFT). We investigate 571 total compositions and create an open access database that includes each composition's specific crystal structure (either normal or inverse spinel), calculated M\({}_{\mathrm{s}}\), and calculated MAE. We further provide insight about the influence of composition on M\({}_{\mathrm{s}}\) and MAE, showing that these values can vary by up to 10\({}^{3}\) A m\({}^{\mathrm{-1}}\) and 10\({}^{6}\) J m\({}^{\mathrm{-3}}\), respectively.
## 2 Methodology
### 2.1 Ferrite Model Setup
The ferrite model employed herein is based on the calculated bulk unit cell of magnetite (Fe\({}_{3}\)O\({}_{4}\)). We specifically employ a unit cell with space group Fd\(\overline{3}\)m. To create models with diverse compositions, we use eight repeats of the formula unit, giving a base stoichiometry of Fe\({}_{24}\)O\({}_{32}\) (Figure 1). This model comprises eight Fe ions in tetrahedral sites (purple tetrahedra in Figure 1) and
Figure 1: Left: Polyhedral representation of a bulk ferrite structure with stoichiometry Fe\({}_{24}\)O\({}_{32}\). Purple = tetrahedral sites, orange and gray = octahedral sites. Right: Sites where substitutions were considered in this work.
sixteen Fe ions in octahedral sites (orange and gray octahedra in Figure 1). Up to eight Fe ions are substituted with Mn, Ni, Co, Cu, and/or Zn in the purple and orange sites labeled 1 through 8 in Figure 1. We consider both singly (e.g., Cu\({}_{1}\)Fe\({}_{23}\)O\({}_{32}\), Mn\({}_{7}\)Fe\({}_{17}\)O\({}_{32}\)) and doubly substituted (e.g., Mn\({}_{2}\)Co\({}_{2}\)Fe\({}_{20}\)O\({}_{32}\), Ni\({}_{1}\)Cu\({}_{2}\)Fe\({}_{21}\)O\({}_{32}\)) ferrites in this work. Singly substituted ferrites include substitutions involving Mn and Cu and are denoted FeMn and FeCu for simplicity. Similarly, doubly substituted ferrites include combinations of Co and Cu, Ni and Zn, Co and Ni, Mn and Ni, Cu and Ni, Mn and Co, and Co and Zn, and are denoted FeCoCu, FeNiZn, FeCoNi, FeMnNi, FeCuNi, FeMnCo, and FeCoZn, respectively. The number of each general composition considered in this work is provided in Table 1. In total, 571 structures are considered, of which 99 are singly substituted and 472 are doubly substituted. Prior literature suggests that substitution into the same type of site (i.e., either tetrahedral or octahedral) but a different location within the crystal lattice (e.g., a different numbered tetrahedron or octahedron in Figure 1) has an influence on magnetic properties [22]. Hence, we also consider structures that have the same composition and substitution into the same _type_ of site, but with the metal ions substituted into different tetrahedra or octahedra (e.g., Cu\({}_{1,\mathrm{tet,site1}}\)Fe\({}_{23}\)O\({}_{32}\) and Cu\({}_{1,\mathrm{tet,site2}}\)Fe\({}_{23}\)O\({}_{32}\)). In this way, out of 571 structures, there are 204 unique compositions.
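As an illustration of how hundreds of site-specific structures can arise from a much smaller set of compositions, the following Python sketch enumerates the distinct ways substituent ions can be placed on the eight labeled cation sites of Figure 1. This is our own construction for illustration, not the authors' workflow, and it deliberately ignores symmetry reduction between equivalent placements.

```python
# Illustrative enumeration of substituent placements on the 8 labeled cation
# sites of Figure 1 (our construction, not the authors' workflow; symmetry
# equivalence between placements is deliberately ignored here).
from itertools import combinations

SITES = tuple(range(1, 9))  # the eight substitution sites labeled in Figure 1

def enumerate_placements(m1: str, m2: str, n_sub: int):
    """All ways to place n_sub substituent ions (split between m1 and m2) on the sites."""
    placements = []
    for n1 in range(n_sub + 1):                       # number of sites given to M1
        for m1_sites in combinations(SITES, n1):
            remaining = [s for s in SITES if s not in m1_sites]
            for m2_sites in combinations(remaining, n_sub - n1):
                placements.append({m1: m1_sites, m2: m2_sites})
    return placements

# Two substituted sites shared between Mn and Co:
# 28 (Co2) + 56 (Mn1Co1) + 28 (Mn2) = 112 distinct site placements.
print(len(enumerate_placements("Mn", "Co", 2)))
```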
In general, ferrites can crystallize in either the normal (e.g., [M1\({}_{x}^{2+}\)M2\({}_{y}^{2+}\)]\({}^{\rm tet}\)[Fe\({}_{2}^{3+}\)]\({}^{\rm oct}\)O\({}_{4}\)) or inverse spinel structure (e.g., [Fe\({}^{3+}\)]\({}^{\rm tet}\)[M1\({}_{x}^{2+}\)M2\({}_{y}^{2+}\)Fe\({}^{3+}\)]\({}^{\rm oct}\)O\({}_{4}\)) [5, 6]. We consider both structures for each composition. Magnetic properties are reported for the structure that gives the lowest electronic energy in DFT. In rare cases, the electronic structure of one structure (i.e., either normal or inverse spinel) did not converge. In these cases, magnetic properties are reported for the structure that converged. The specific models employed in this work are available in the ioChemBD database [31] along with their calculated M\({}_{\text{s}}\) and MAE.
### 2.2 Magnetic Property Calculations.
M\({}_{\text{s}}\) is computed as
\[\text{M}_{\text{s}}=\frac{\text{Total magnetic moment}\times\mu_{B}}{\text{unit cell volume}}\qquad\text{Eq. (1)}\]
where the total magnetic moment is the total number of unpaired electrons in the unit cell calculated in DFT, and \(\mu_{B}\) is the Bohr magneton equal to \(9.27\times 10^{-24}\) A m\({}^{2}\). MAE is calculated as the difference in energy between the hard axis and the easy axis [32], i.e.,
\[\text{MAE}=\textit{E}_{\text{hard}}-\textit{E}_{\text{easy}}\qquad\qquad\qquad \text{Eq. (2)}\]
where \(E\) is the electronic energy calculated in DFT. The [0,0,1], [1,0,0] and [0,1,0] crystallographic directions are evaluated as the easy and hard axes for each composition. These directions were chosen since test calculations on the [0,1,1], [1,0,1], [1,1,0] and [1,1,1] directions often resulted in electronic energies significantly more positive than the
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**General** & **Total** & **M\({}_{\text{s}}\)\(\times\)10\({}^{\text{5}}\)[A m\({}^{\text{-1}}\)]** & **MAE \(\times\)10\({}^{\text{5}}\)[J m\({}^{\text{-3}}\)]** & **Crystal Structure** \\ \cline{1-5}
**Composition** & **Structures** & & & \\ \hline
**FeCu** & 24 & 1.2 – 9.4 & 1.3 – 5.5 & 62.5\% Normal spinel \\ \hline
**FeMn** & 75 & 2.0 – 5.3 & 0.1 – 6.4 & 62.5\% Inverse spinel \\ \hline
**FeCoCu** & 69 & 1.2 – 9.6 & 0.6 – 11.4 & 61.2\% Inverse spinel \\ \hline
**FeMnCo** & 187 & 3.3 – 7.4 & 0.07 – 8.5 & 66.6\% Normal spinel \\ \hline
**FeNiZn** & 34 & 0.3 – 8.9 * & 0.05 – 3.7 & 93.5\% Inverse spinel \\ \hline
**FeMnNi** & 28 & 2.5 – 7.5 & 0.1 – 6.8 & 76.1\% Inverse spinel \\ \hline
**FeNiCo** & 47 & 0.6 – 4.8 & 0.02 – 14.1 & 93.5\% Inverse spinel \\ \hline
**FeNiCu** & 78 & 1.2 – 7.5 & 0.1 – 4.3 & 92\% Inverse spinel \\ \hline
**FeCoZn** & 29 & 0.4 – 8.8 * & 0.06 – 11.1 & 64.5\% Inverse spinel \\ \hline \end{tabular}
*Not included here is the M\({}_{\text{s}}\) of Zn\({}_{8}\)Fe\({}_{16}\)O\({}_{32}\) which has a M\({}_{\text{s}}\) of 0.04 A m\({}^{\text{-1}}\).
\end{table}
Table 1: General compositions considered in this work, along with their range of M\({}_{\text{s}}\), range of MAE, and most prominent crystal structure.
[0,0,1], [1,0,0] and [0,1,0] directions, suggesting that the [0,0,1], [1,0,0] and [0,1,0] directions are more reliable for large-scale DFT calculations. Among these three directions, the direction that gave the lowest electronic energy was taken as the easy axis, and the direction that resulted in the highest electronic energy was taken as the hard axis. Calculated easy and hard axis directions for singly substituted ferrites partially agree with prior results. For instance, in Co\({}_{8}\)Fe\({}_{16}\)O\({}_{32}\), the easy axis was found to be [100] in agreement with our results [33], while in Ni\({}_{8}\)Fe\({}_{16}\)O\({}_{32}\), the easy axis was found to be [111][34], which does not match with our results (which determined the easy axis to be [010]). However, we find that MAEs calculated in this work generally follow experimental trends in cases where such data is available experimentally. Further details are provided in SI Section S7.
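A minimal post-processing sketch of Eqs. (1) and (2) is shown below. It assumes the total cell moment (in Bohr magnetons), the relaxed cell volume (in cubic angstroms), and the hard- and easy-axis electronic energies (in eV) have already been extracted from the DFT outputs; the eV-to-J conversion and the per-volume normalization used to report MAE in J m\({}^{-3}\) reflect our reading of the units, not an explicit prescription from the text.

```python
# Sketch of Eqs. (1) and (2); input values are hypothetical examples.
MU_B = 9.274e-24       # Bohr magneton, A m^2
EV_TO_J = 1.602e-19    # electron volt to joule
ANG3_TO_M3 = 1e-30     # cubic angstrom to cubic meter

def saturation_magnetization(total_moment_mu_b: float, volume_ang3: float) -> float:
    """Eq. (1): Ms in A m^-1 from the total cell moment (mu_B) and cell volume (Angstrom^3)."""
    return total_moment_mu_b * MU_B / (volume_ang3 * ANG3_TO_M3)

def mae_energy_density(e_hard_ev: float, e_easy_ev: float, volume_ang3: float) -> float:
    """Eq. (2), normalized per unit cell volume to give MAE in J m^-3."""
    return (e_hard_ev - e_easy_ev) * EV_TO_J / (volume_ang3 * ANG3_TO_M3)

# Hypothetical Fe24O32-like cell: 32 mu_B net moment, ~590 Angstrom^3 volume,
# and a 1 meV hard/easy splitting.
print(saturation_magnetization(32.0, 590.0))            # ~5.0e5 A m^-1
print(mae_energy_density(-1234.567, -1234.568, 590.0))  # ~2.7e5 J m^-3
```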
### 2.3 Data Visualization.
The values of M\({}_{\rm s}\) and MAE as functions of composition for doubly substituted ferrites are presented as contour plots. These plots interpolate between explicitly calculated points in order to create a continuous colormap. Each colormap is based on 18 - 31 explicitly calculated data points (see SI Section S6 for a sample plot with only data points). This is done in the OriginPro software [35] using the data boundary algorithm without smoothing. A total of 27 plots are generated for FeCoCu, FeNiZn, FeCoNi, FeMnNi, FeCuNi, FeMnCo, FeCoZn, FeCu and FeMn, i.e., 9 for crystal structure, 9 for M\({}_{\rm s}\) and 9 for MAE.
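For readers without OriginPro, a roughly equivalent interpolated map can be produced from the scattered calculated points with matplotlib; the sketch below uses made-up composition and M\({}_{\rm s}\) values purely to show the plotting pattern, not results from this work.

```python
# Hedged sketch of a composition-Ms contour map built from scattered points
# (the values below are invented placeholders, not results from the paper).
import numpy as np
import matplotlib.pyplot as plt

# (M1 count, M2 count, Ms in 1e5 A/m) per Fe(24-x-y) cell; M1 + M2 <= 8
m1 = np.array([0, 8, 0, 4, 2, 6, 2, 4, 0, 6, 0, 4])
m2 = np.array([0, 0, 8, 4, 6, 2, 2, 0, 4, 0, 6, 2])
ms = np.array([4.8, 3.7, 0.4, 2.1, 1.5, 3.0, 4.0, 4.2, 3.5, 3.9, 1.0, 3.3])

fig, ax = plt.subplots()
tcf = ax.tricontourf(m1, m2, ms, levels=12, cmap="viridis")  # interpolates between points
fig.colorbar(tcf, label=r"M$_s$ ($\times 10^5$ A m$^{-1}$)")
ax.set_xlabel("substituent M1 ions per cell")
ax.set_ylabel("substituent M2 ions per cell")
fig.savefig("ms_contour.png", dpi=200)
```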
### 2.4 Density Functional Theory Calculations.
DFT calculations are performed using the Vienna Ab initio Simulation Package (VASP) [36, 37]. Both collinear (i.e., all spins are aligned along the [0,0,1] direction) and non-collinear (i.e., spins are aligned along the [1,0,0], [0,1,0] and [0,0,1] directions) calculations are performed. Non-collinear calculations are always started from the wavefunction and charge density generated from a collinear calculation on the same system. Initial guesses for magnetic moments of the Fe, Mn, Co, Ni, Cu, and Zn cations are based on experimental findings by de Berg et al. [38] and reported in SI Section S9. The DFT+U formalism [30, 39, 40, 41] is employed to capture the strong Coulombic repulsion on 3d electrons and to prevent the delocalization of electrons in these semiconducting materials. We specifically employ an effective U parameter, U\({}_{\rm eff}\), equal to U - J, where U and J are the spherically averaged screened Coulomb and exchange energies, respectively [42]. Values of U and J used to compute crystal structure and M\({}_{\rm s}\) are taken from the Materials Project Database [43]. These values are provided in SI Section S9. Calculation of MAE requires a more stringent value of J in order to capture the spin-orbit interaction [30] and achieve the magnetic ground state energies. We hence varied this value while holding values of U constant at the values taken from the Materials Project Database [43] and compared the resulting MAE with values from experiment [7, 44] (see SI Section S10). These calculations were specifically done for the stoichiometric ferrites, i.e., Fe\({}_{24}\)O\({}_{32}\), Mn\({}_{8}\)Fe\({}_{16}\)O\({}_{32}\), Co\({}_{8}\)Fe\({}_{16}\)O\({}_{32}\), Ni\({}_{8}\)Fe\({}_{16}\)O\({}_{32}\), Cu\({}_{8}\)Fe\({}_{16}\)O\({}_{32}\), and Zn\({}_{8}\)Fe\({}_{16}\)O\({}_{32}\). Resulting values of J were then used for the corresponding metal cations when calculating MAE values for the non-stoichiometric ferrites. We found that a value of U\({}_{\rm eff}\) of 1.5 eV resulted in values of MAE that were in good agreement with experiment for the stoichiometric ferrites. Hence, J values for MAE calculations were taken as U - 1.5 eV.
In all calculations, electron exchange and correlation are treated using the Perdew-Burke-Ernzerhof (PBE) form of the generalized gradient approximation [45], the energies of core electrons are simulated with projector augmented wave (PAW) pseudopotentials [37, 46] up to a cut-off energy of 550 eV, and spin polarization is turned on. Gamma-centered k-point meshes of 4\(\times\)4\(\times\)4 are used to sample the first Brillouin zones. Validation of the cut-off energy and k-point mesh are provided in SI Section S10. Electronic structures are calculated self-consistently and considered to be converged when the difference in energy between subsequent iterations falls below 10\({}^{-6}\) eV for MAE calculations and 10\({}^{-5}\) eV for everything else. During geometry relaxations, all atom positions as well as the unit cell shape and volume are allowed to relax. This strategy is validated in SI Section S10. Unit cell relaxations are considered to be converged when the absolute values of the forces on all atoms are smaller than 0.02 eV/Å. Most unit cells become slightly non-cubic during relaxation; however, the deviation from cubic is typically less than 0.3°. Examples of VASP input files are available in SI Section S9.
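To make the settings above concrete, the snippet below writes them out as INCAR-style "TAG = value" lines. The tag names mirror standard VASP keywords for the features described in the text (plane-wave cutoff, spin polarization, DFT+U, spin-orbit coupling, convergence thresholds), but the authors' actual input files are only given in SI Section S9, so this should be read as an assumed reconstruction rather than their inputs.

```python
# Assumed reconstruction of the DFT settings described above, written as
# simple INCAR-style lines (not the authors' actual input files).
collinear_tags = {
    "ENCUT": 550,        # plane-wave cutoff, eV
    "ISPIN": 2,          # spin-polarized
    "EDIFF": 1e-5,       # electronic convergence for structure / Ms runs, eV
    "EDIFFG": -0.02,     # force convergence for cell relaxations, eV/Angstrom
    "LDAU": True,        # DFT+U on the 3d electrons (U and J per species from the SI)
}
noncollinear_extra = {
    "LSORBIT": True,     # spin-orbit coupling for MAE runs
    "EDIFF": 1e-6,       # tighter electronic convergence for MAE, eV
    "SAXIS": "0 0 1",    # quantization axis; also run with "1 0 0" and "0 1 0"
}

def write_incar(tags: dict, path: str) -> None:
    """Write a dictionary of tags as INCAR-style 'TAG = value' lines."""
    with open(path, "w") as f:
        for key, value in tags.items():
            if isinstance(value, bool):
                value = ".TRUE." if value else ".FALSE."
            f.write(f"{key} = {value}\n")

write_incar(collinear_tags, "INCAR.collinear")
write_incar({**collinear_tags, **noncollinear_extra}, "INCAR.mae")
```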
## 3 Results
### 3.1 Crystal Structure.

Calculated crystal structure preferences are shown in Figures 2 and S1 and tabulated in Table 1. We find that the stoichiometric ferrites Fe\({}_{24}\)O\({}_{32}\), Co\({}_{8}\)Fe\({}_{16}\)O\({}_{32}\), Ni\({}_{8}\)Fe\({}_{16}\)O\({}_{32}\), Cu\({}_{8}\)Fe\({}_{16}\)O\({}_{32}\), and Zn\({}_{8}\)Fe\({}_{16}\)O\({}_{32}\) crystallize in the inverse spinel structure, whereas Mn\({}_{8}\)Fe\({}_{16}\)O\({}_{32}\) crystallizes in the normal spinel structure, in agreement with experiments [44, 47, 48, 49, 50, 51, 52, 53]. In general, we find that non-stoichiometric ferrites largely crystallize in the inverse spinel structure (see Table 1). The exceptions are FeCu (Figure S1i) and FeMnCo (Figure 2b), which form mostly normal spinel structures (FeCu at low Cu content and FeMnCo at high Mn content). Of the remaining compositions that we investigated, FeNiCo forms inverse spinel structures over the majority of compositional space (Figure 2a) [52]. FeNiCu and FeNiZn also form mostly inverse spinel structures (Figures S1a and S1d). The remaining compositions form a mixture of normal and inverse spinel structures, depending on the composition (see Table 1 and Figures S1b, c and e). For example, FeMnNi forms a normal spinel structure at high Mn content and inverse spinel otherwise (Figure S1c), while FeCoZn and FeCoCu form inverse spinel structures at high Zn and Cu content and a mixture of inverse and normal spinel otherwise (Figures S1b and S1e).
### 3.2 Saturation Magnetization.
Calculated M\({}_{\mathrm{s}}\) are shown in Figures 3 and S2 and tabulated in Table 1. M\({}_{\mathrm{s}}\) for the stoichiometric ferrites Fe\({}_{24}\)O\({}_{32}\), Mn\({}_{8}\)Fe\({}_{16}\)O\({}_{32}\), Co\({}_{8}\)Fe\({}_{16}\)O\({}_{32}\), Ni\({}_{8}\)Fe\({}_{16}\)O\({}_{32}\), Cu\({}_{8}\)Fe\({}_{16}\)O\({}_{32}\), and Zn\({}_{8}\)Fe\({}_{16}\)O\({}_{32}\) are 4.8, 4.9, 3.6, 2.5, 1.2, and 0.04\(\times\)10\({}^{5}\) A m\({}^{-1}\), respectively, approximately following the trends of the substituent cations themselves determined experimentally [17, 54]. M\({}_{\mathrm{s}}\) for most of the non-stoichiometric ferrites shows large variations with composition, spanning more than 5\(\times\)10\({}^{5}\) A m\({}^{-1}\). Exceptions to this are compositions involving Mn, which have narrower ranges of M\({}_{\mathrm{s}}\) and hence cannot be as finely tuned as the other compositions investigated in this work. Comparing Figures S1 and S2, we observe that compositions that crystallize in the normal spinel structure exhibit higher M\({}_{\mathrm{s}}\) than compositions that crystallize in the inverse spinel structure. For example, values of M\({}_{\mathrm{s}}\) in the largely inverse spinel regions of FeNiZn, FeCoCu, FeCoZn and FeMnNi are relatively low, whereas values of M\({}_{\mathrm{s}}\) in the normal spinel regions of these compositions are higher. In fact, the normal spinel regions of FeNiZn, FeCoCu and FeCoZn exhibit some of the highest M\({}_{\mathrm{s}}\) calculated in this work, with compositions such as Co\({}_{4}\)Cu\({}_{1}\)Fe\({}_{19}\)O\({}_{32}\) and Co\({}_{5}\)Zn\({}_{1}\)Fe\({}_{18}\)O\({}_{32}\) exhibiting M\({}_{\mathrm{s}}\) of 9.6 and 8.8\(\times\)10\({}^{5}\) A m\({}^{-1}\), respectively. Conversely, the inverse spinel regions of these compositions exhibit some of the lowest values of M\({}_{\mathrm{s}}\) calculated in this work, with compositions such as Co\({}_{1}\)Zn\({}_{1}\)Fe\({}_{16}\)O\({}_{32}\) and Co\({}_{6}\)Zn\({}_{1}\)Fe\({}_{16}\)O\({}_{32}\) (Figure 3b) exhibiting M\({}_{\mathrm{s}}\) of 0.45 and 0.74\(\times\)10\({}^{5}\) A m\({}^{-1}\), respectively. These
Figure 2: Calculated crystal structures of FeNiCo (a) and FeMnCo (b). Fe and substituent compositions span from 0 to 16 and 0 to 8, respectively, with the stoichiometric ferrites represented at the vertices.
compositions hence show good promise for tuning M\({}_{\rm s}\) through composition.
### 3.3 Magnetic Anisotropy Energy.
Calculated MAE are shown in Figures 4 and S3 and tabulated in Table 1. MAE for the stoichiometric ferrites Fe\({}_{24}\)O\({}_{32}\), Mn\({}_{8}\)Fe\({}_{16}\)O\({}_{32}\), Co\({}_{8}\)Fe\({}_{16}\)O\({}_{32}\), Ni\({}_{8}\)Fe\({}_{16}\)O\({}_{32}\), Cu\({}_{8}\)Fe\({}_{16}\)O\({}_{32}\), and Zn\({}_{8}\)Fe\({}_{16}\)O\({}_{32}\) are 2.2, 0.1, 1.7, 1.3, 1.3, and 0.6\(\times\)10\({}^{5}\) J m\({}^{-3}\), respectively. We observe that, similar to M\({}_{\rm s}\), MAE is also dependent on composition; however, most compositions span within the same order of magnitude (Figures 4 and S3). For example, FeCu and FeMn only show variation in MAE when there is an almost equal number of substituent ions and Fe ions, possibly due to the resulting transformation of spinel structure (Table 1 and Figures S1 and S3 parts h and i). Exceptions are FeNiCo, FeCoZn and FeCoCu, presented in Figures 4 and S3b, which span three orders of magnitude. Comparing Figures S1 and S3, contrary to M\({}_{\rm s}\), we observe that compositions that crystallize in inverse spinel result in higher MAE. The largest MAE value of 13.6\(\times\)10\({}^{5}\) J m\({}^{-3}\) is found in FeNiCo (Figure 4a), in agreement with prior literature [55], and the lowest value, 0.06\(\times\)10\({}^{5}\) J m\({}^{-3}\), is from Zn\({}_{8}\)Fe\({}_{16}\)O\({}_{32}\) (Figure S3b).
systems [56]. Specifically, our model systems are nearly pristine bulk structures, whereas experimental systems are particles with finite sizes, surfaces, and defects, as well as different ligands, crystallographic domains, etc. [1, 10, 56, 57]. Further, experimental observation is an average value from a distribution of these properties, whereas the calculations presented herein are for individual structures. Unfortunately, it is not presently possible to model even one single nanoparticle with DFT, let alone a distribution for any one composition, and certainly not for a distribution of compositions. Hence, at present, a better use for these results is to learn how trends influence magnetic properties. To this end, Figure 5 shows calculated M\({}_{\text{s}}\) and MAE for the various doubly substituted non-stoichiometric ferrites.
Figure 5a shows that FeCoZn, FeCoCu, and FeNiCu can achieve relatively large ranges in M\({}_{\text{s}}\) from varying composition, whereas FeNiZn, FeNiCo, FeMnNi, and FeMnCo tend to have more uniform M\({}_{\text{s}}\). MAE for all compositions modeled in this work varies by \(\sim\) 2 orders of magnitude. To understand how these compositions could be used in practice, Figure 5b illustrates optimal combinations of M\({}_{\text{s}}\) and MAE for various applications. For example, our results suggest that FeMnCo and FeMnNi as well as some compositions of FeCoZn will be optimal for magnetic induction heating (evaluating these materials for toxicity for biomedical applications is a concern that is beyond the scope of this paper). MRI requires materials with high M\({}_{\text{s}}\)[9, 11, 63] and therefore FeCoZn, FeCoCu, and FeNiCu are promising. Permanent magnets require modest M\({}_{\text{s}}\) and high MAE [20, 64] and hence FeMnNi and FeNiCo are promising. Ferrites with high M\({}_{\text{s}}\) and low MAE could potentially replace rare earth materials in antennas [10], and compositions such
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
**Specific Composition** & **Crystal structure** & **M\({}_{\text{s}}\)\(\times\)10\({}^{\text{5}}\)A m\({}^{\text{-1}}\)** & **MAE \(\times\)10\({}^{\text{5}}\)J m\({}^{\text{-3}}\)** \\ & **This work / expt** & **This work / expt** & **This work / expt** \\ \hline Fe\({}_{24}\)O\({}_{32}\) & Inverse / Inverse \({}^{\text{ \emph{f}}}\) & 4.8 / 4.6 \({}^{\text{ \emph{f}}}\) & 2.2 / 0.1 \({}^{\text{ \emph{f}}}\) \\ \hline Fe\({}_{16}\)Mn\({}_{\text{s}}\)O\({}_{32}\) & Normal / Normal \({}^{\text{ \emph{q}}}\) & 4.9 / 5.9 \({}^{\text{ \emph{q}}}\) & 0.1 / 0.2 \({}^{\text{ \emph{q}}}\) \\ \hline Fe\({}_{16}\)Cu\({}_{\text{s}}\)O\({}_{32}\) & Inverse / Inverse \({}^{\text{ \emph{B}}}\) & 1.2 / 1.3 \({}^{\text{ \emph{B}}}\) & 1.3 / 1.4 \({}^{\text{ \emph{B}}}\) \\ \hline Fe\({}_{16}\)Co\({}_{\text{s}}\)O\({}_{32}\) & Inverse / Inverse \({}^{\text{ \emph{g}}}\) & 3.7 / 3.5 \({}^{\text{ \emph{g}}}\) & 1.7 / 2.2 \({}^{\text{ \emph{g}}}\) \\ \hline Fe\({}_{16}\)Ni\({}_{\text{s}}\)O\({}_{32}\) & Inverse / Inverse \({}^{\text{ \emph{e}}}\) & 3.6 / 2.0 \({}^{\text{ \emph{g}}}\) & 0.2 / 0.1 \({}^{\text{ \emph{f}}}\) \\ \hline Fe\({}_{16}\)Zn\({}_{\text{s}}\)O\({}_{32}\) & Inverse / Normal \({}^{\text{ \emph{g}}}\) & 0.04 / 0.09 \({}^{\text{ \emph{g}}}\) & 0.6 / N/A \\ \hline Co\({}_{1.6}\)Cu\({}_{\text{s}}\)Fe\({}_{2}\)O\({}_{32}\) & Normal / N/A & 1.8 / 2.3 \({}^{\text{ \emph{\Omega}}}\) & 7.4 / 3.6 \({}^{\text{ \Omega}}\) \\ \hline Co\({}_{4.8}\)Ni\({}_{3.2}\)Fe\({}_{16}\)O\({}_{32}\) & Inverse / N/A & 1.7 / 3.2 \({}^{\text{ \emph{\Sigma}}}\) & 9.0 / N/A \\ \hline \end{tabular} \({}^{\text{ \emph{\emph{f}}}}\)Ref. [17] ; \({}^{\text{ \emph{q}}}\)Ref. [58] ; \({}^{\text{ \emph{p}}}\)Ref. [59]; \({}^{\text{ \emph{q}}}\)Ref. [60]; \({}^{\text{ \emph{q}}}\)Ref. [61]; \({}^{\text{ \emph{q}}}\)Ref. [4]; \({}^{\text{ \emph{g}}}\)Ref. [54]; \({}^{\text{ \emph{q}}}\)Ref. [62]; \({}^{\text{ \emph{\Sigma}}}\)Ref. [52]; N/A:
not available
\end{table}
Table 2: Comparison of DFT calculated crystal structure, M\({}_{\text{s}}\), and MAE with experiment.
Figure 5: _Top:_ Calculated M\({}_{\text{s}}\) and MAE (log scale). Dotted lines indicate the regions in the bottom graph. _Bottom:_ Ranges of M\({}_{\text{s}}\) and MAE for various applications of magnetic materials.
as FeNiZn could achieve this. Transformers utilize magnetic induction and require minimal heat losses [65], i.e., high M\({}_{\mathrm{s}}\) and low MAE. Based on Figure 5, FeCoZn and FeMnNi compositions could be promising.
## Conclusions
In summary, structural and magnetic properties of non-stoichiometric bulk ferrites with the formula M1\({}_{x}\)M2\({}_{y}\)Fe\({}_{3-x-y}\)O\({}_{4}\), where M1 and M2 = Mn, Ni, Co, Cu, and/or Zn and 0 \(\leq\) x \(\leq\) 1 and y = 1 - x, have been investigated using DFT. Through varying the composition, we found changes in crystal structure (from normal to inverse spinel), which resulted in variations in the magnetic saturation and magnetic anisotropy energy of up to 9.6\(\times\)10\({}^{5}\) A m\({}^{-1}\) and 14.1\(\times\)10\({}^{5}\) J m\({}^{-3}\), respectively. We found that magnetic properties are influenced by composition through their crystal structures, with normal spinel compositions resulting in higher M\({}_{\mathrm{s}}\) and inverse spinel compositions resulting in higher MAE. Our results suggest that composition can be used to optimize magnetic properties for applications in heating, imaging, and recording. This is mainly achieved by varying M\({}_{\mathrm{s}}\), as these applications are more sensitive to variation in M\({}_{\mathrm{s}}\) than MAE (Figure 5). Moving forward, developing a strategy to achieve greater variation of MAE would lead to greater technological applicability. Our calculations suggest that doubly substituted non-stoichiometric ferrites based on Mn, Ni, Co, and Zn could achieve this. Comparison with available experimental data suggests DFT underpredicts M\({}_{\mathrm{s}}\) and overpredicts MAE. Since a major difference between experiments and our DFT simulations is that our simulations were performed on pristine bulk structures, whereas experiments were performed on nanoparticles comprising different crystallographic domains and surfaces with ligands and defects, these results suggest that a way to maximize control over magnetic properties in practice is to minimize these effects in order to have the greatest control over MAE, while using composition to control M\({}_{\mathrm{s}}\). This is a topic of ongoing work.
We thank Dr. Megan Hoover, Prof. Lindsay Shuller-Nickles, Prof. Steven Pellizzeri, Dr. Benjamin Fellows, and Dr. Zichun "Tony" Yan for helpful discussions. This work was partly supported as part of the Center for Programmable Energy Catalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences at the University of Minnesota under award #DE-SC0023464 (VRP & RBG: MAE calculations, magnetic property analysis, comparisons to experimental data, linking results to potential applications). We would also like to acknowledge support by Materials Assembly and Design Excellence in South Carolina (MADE in SC; VRP, JZ, PMM, AC: Model development, M\({}_{\mathrm{s}}\) and crystal structure calculations), National Science Foundation award no. OIA-1655740 and Grants for Exploratory Academic Research (GEAR; OTM). We would also like to thank the support of National Science Foundation award no. CBET-2146591 (OTM). This work was supported in part by the Center for Integrated Nanotechnologies, an Office of Science User Facility operated for the U.S. Department of Energy (DOE) Office of Science (OTM).
Keywords: Structure-function relationships, magnetic inductive heating, magnetic imaging, magnetic recording, spinel structures.
## Supporting Information.
Additional figures illustrating preferred crystal structure, calculated M\({}_{\mathrm{s}}\), and calculated MAE; additional figures illustrating ranges of M\({}_{\mathrm{s}}\) and MAE used in different applications; simulation input files; validations of the computational model and methods. This data can be obtained by emailing the corresponding author until the data is officially published in the peer reviewed literature (after which it will be freely available on the publisher's website).
|
2305.20013 | Software Architecture for Operation and Use of Quantum Communications
Networks | Quantum Communications Networks using the properties of qubits, namely state
superposition, no-cloning and entanglement, can enable the exchange of
information in a very secure manner across optical links or free space. New
innovations enable the use of optical repeaters as well as multi-cast
communication in the networks. Some types of quantum communications mechanisms
can be implemented at room-temperature instead of requiring super-cooled
systems. This makes it likely that business impact from quantum communications
will be realized sooner than that from quantum computers.
Quantum networks need to be integrated into the ecosystem of currently
deployed classical networks and augment them with new capabilities. Classical
computers and networks need to be able to use the new secure communication
capabilities offered by quantum networks. To provide this interoperability,
appropriate software abstractions on the usage of quantum networks need to be
developed. In this paper, we examine what the type of software abstractions
quantum networks can provide, and the type of applications that the new
abstractions can support. | Dinesh Verma, Eden Figueroa, Gabriella Carini, Mark Ritter | 2023-05-31T16:40:45Z | http://arxiv.org/abs/2305.20013v1 | # Software Architecture for Operation and Use of Quantum Communications Networks
###### Abstract
Quantum Communications Networks using the properties of qubits, namely state superposition, no-cloning and entanglement, can enable the exchange of information in a very secure manner across optical links or free space. New innovations enable the use of optical repeaters as well as multi-cast communication in the networks. Some types of quantum communications mechanisms can be implemented at room-temperature instead of requiring super-cooled systems. This makes it likely that business impact from quantum communications will be realized sooner than that from quantum computers.
Quantum networks need to be integrated into the ecosystem of currently deployed classical networks and augment them with new capabilities. Classical computers and networks need to be able to use the new secure communication capabilities offered by quantum networks. To provide this interoperability, appropriate software abstractions on the usage of quantum networks need to be developed. In this paper, we examine what the type of software abstractions quantum networks can provide, and the type of applications that the new abstractions can support.
_Keywords-- Quantum networks, communications software, network software, communication abstractions_
## I Introduction
Quantum Communications Networks [1, 2] enable secure information exchange and have been demonstrated over optical links as well as free space communication. Quantum communications can be supported on a complex network topology using optical repeaters [3, 4] and they can support multi-cast communication [5, 6]. Quantum communications can be implemented at room-temperature [7, 8] instead of requiring super-cooled systems, which makes it likely that the business impact from quantum communications will be realized on a time-scale faster than that of quantum computers themselves.
To deal with real-world issues such as information loss and transmission errors, quantum networks are usually accompanied by a classical network, which is used to support and help in the operations of the quantum network. This provides for a new type of communication network which combines the best attributes from both quantum and classical networks to improve the nature of what is possible between one or more pairs of communicating computers.
Quantum networks provide a greenfield space, giving us an opportunity to rethink how networks ought to be controlled, operated and used. The last few decades have seen many different types of networks [9, 10], including but not limited to circuit-switched telephony networks [11], packet-switched Internet [12], Broadband Integrated Services Digital Networks (B-ISDN) or Asynchronous Transmission Mode (ATM) networks [13], Multiprotocol Label Switching (MPLS) networks [14], device-oriented third generation (3G) and fourth generation (4G) cellular networks [15], software oriented fifth generation (5G) cellular networks [16, 17], centrally controlled Software Defined Networks [18], Supervisory Control And Data Acquisition (SCADA) networks [19] etc. The success and failures of these networks have also taught the technical community important technical lessons and best practices in the control, operation and management of computer networks. It is our goal to combine the best lessons learnt from the design of control, management and operations of the past type of networks to design a good control software architecture for quantum networks.
In this paper, we propose such a software architecture that covers the topics of using, controlling and managing quantum communications networks. This architecture will cover the data, control and management planes required for quantum communication networks.
We believe this is the first paper to explore how software ought to be developed for Quantum-enabled Internet. While there are papers that have explored subjects such as using software defined networking to control quantum nodes [20, 21], developing simulators for quantum networks [22, 23, 24], and modeling of quantum protocol performance [25, 26], we have not come across papers that have explored subjects involved with the development of software on classical computers to exploit quantum networks. To enable this goal, we propose new communication abstractions that ought to be supported by a software development kit (SDK) using the capabilities of a quantum network and move towards practical application software on a quantum-enabled Internet. As a first attempt towards this topic, we do not presume to assert that the
abstractions we introduce are the final ones, but rather strive to initiate a discussion in the technical community on the right software abstractions for using and exploiting quantum communication networks.
After making a few general observations on the lessons learnt from software architectures from existing types of classical networks, we discuss the various configurations that a quantum network can be used for. We argue that we ought to focus initial discussion of software development on classical computers communicating over quantum networks. We subsequently discuss the abstractions that will be useful to develop new software applications using quantum communications. This abstraction provides the data plane architecture for quantum networks, and we briefly cover the software layers needed for the control and management plane of quantum networks. We conclude by discussing some software applications that can benefit from the abstractions we propose.
## II General Observations
Years of designing computer networks of many flavors have led to some lessons and best practices in the design of software for computer networks. As we attempt the design for the software stack to operate a new class of networks, it is instructive to recap some of the best practices learnt from existing networks.
The first of the extremely successful lessons is the importance of _layering_. The concept of designing computer networks as consisting of many layers, with each layer solving one problem, relying on layers above or below it to address other problems, and exporting a canonical interface with a well-defined set of abstractions has proven its value in the implementation of all types of networks. It has also allowed different types of networks to be layered in new and unique ways. ATM networks [13], which were designed to be complex global standardized networks became a data-link layer to carry IP packets of the Internet due to economic and business reasons. SCADA networks [19], designed initially with a hardware set of components in mind started to get layered on top of IP networks. User-level implementations of network protocols such as IP were done to bypass the difficulties of implementing software in operating system kernels instead of in the user space. It follows implicitly that we should design the software of the quantum network as a layer which exports a well-defined set of abstractions to its user.
The second lesson from the development of various protocols has been in the separation of the functions in _three different planes of data, control and management_. Circuit switched networks and current cellular networks have well-defined protocols delineated in these three different planes. The Internet did not have a control plane for a long period, but the emergence of protocols like SIP [27] and the advantages of managing Ethernet switches using Software Defined Networking [18] reinforced the value of having a control plane within the network. On a broad basis, the following is the separation between these three planes of a network:
* **Control Plane**: The control plane suite of functions is responsible for setting up the initial mechanism for establishing communication. In a circuit switched network, this establishes the end-to-end circuit. In a packet switched network, this capability is not needed. Nevertheless, having a control plane to orchestrate any steps required to ensure smooth communication would be useful for any type of network.
* **Management Plane**: The management plane suite of functions is responsible for managing any errors that happen during the operation of the network. This requires observing events in the network, detecting problems that may be present, and fixing those problems. The problems can either be reported to a human administrator for appropriate action, or automated actions for fixing the problems can be taken, e.g. using an automation engine or an Artificial Intelligence based engine.
* **Data Plane**: The data plane carries the information flowing on the network, and is responsible for the aspects such as reliable transmission of information, security of the information, and supporting the communication abstraction offered by the network.
Quantum communications networks have a strong flavor of being circuit-switched networks, and hence it makes sense to define a software architecture that includes all three planes for their operation.
The design of each of these three planes could consist of one or more layers, i.e. layering may help us design a better control and management plane for the operation of quantum networks.
## III Quantum Network Configurations
Before describing the design of the software in each of the three planes of control, management and data, it would be instructive to look at the usage configurations of a quantum network. A quantum network can be used in one of the following three configurations:
1) Quantum-enabled Configuration: The quantum network is used to interconnect one or more classical computers together. To the classical computers, the quantum network is another channel to exchange information among themselves.
2) Full Quantum Configuration: The quantum network is used to interconnect one or more quantum computers together. To the quantum computers, the quantum network is a channel to exchange qubits among themselves. Quantum computers may choose to exchange the entanglement state of the qubits, as opposed to an actual exchange of qubits.
Figure 1: Configurations of Quantum Networks
3) Mixed Configuration: The quantum network is used to interconnect computers, some of which may be classical computers while others may be quantum computers. To the classical computers, the quantum network provides a way to get the qubits from the quantum computers on the network, which they can read and collapse the information to a binary number. To the quantum computers, the quantum network provides a way to transform a classical number into a set of qubits.
The mixed configuration would match the vision of Quantum Internet [28], but the quantum-enabled configuration and full quantum configuration are likely intermediary steps towards the attainment of the vision. A mixed configuration can be supported by means of a quantum and a classical computer both being present in the network, with the conversion between qubits and classical bits being done outside the quantum communication network. In effect, the mixed configuration quantum network is separable into two segments, one where a group of classical computers are using it, and the other where a group of quantum computers are using it. The exchange between the two segments can happen as shown in Fig. 1. As a result, for the purpose of developing software abstractions, we can safely assume that for all practical purposes, there are only two configurations of the network - the quantum-enabled configuration and the quantum configuration.
When we consider a full quantum configuration, there is a physical constraint that needs to be taken into account. Quantum computers, in most of the common designs, operate at temperatures close to absolute zero. There are some explorations to create room-temperature quantum computing [29], but they are at a relatively early stage of development. On the other hand, quantum networks operating over wide areas need to operate at room temperature. Since the energy, entropy and entangled state of qubits can experience a significant change when moving from a super-cooled environment to a room-temperature environment, interconnecting a supercooled quantum computing system with a room-temperature quantum network remains a difficult problem at the current time. While we as a research community look forward to a solution to that problem, and the theory of such distributed quantum computing environments can be studied in the meantime, from the perspective of software development it seems more prudent to focus on configurations that are viable currently.
Therefore, we will assume in the rest of the paper that we are operating in the quantum-enabled configuration.
We now discuss the abstractions that the quantum-enabled configuration ought to support. Given the current state of quantum networks, we can model the overall operation of the quantum network as an abstraction shown in Fig. 2. The bigger dashed pipe shows the representation of the quantum network, which we will refer to as the quantum overlay in order to avoid confusion with the network pipe that actually performs the transfer of qubits, which is the quantum underlay. The quantum underlay is supported by a classical underlay network. In order to communicate, the services of both the quantum underlay and the classical underlay need to be used. The combined pair of the quantum underlay and the classical underlay provides the abstracted network representation, which is the quantum overlay.
The quantum overlay provides the perspective of one link, and this link can be converted into the concept of a path or a network in one of two ways. The first approach is shown in Fig. 3. In this case, the abstract quantum overlay is defined on a link-by-link basis. The control and management planes are defined to compose multiple quantum overlays together. The end-to-end path consists of a concatenation of multiple quantum overlays. The computer shown in the middle of Fig. 3 is a trusted relay, which is a classical computer and reads/writes information across the two quantum overlays that it connects.
An alternate architecture is that of creating a more complex underlay. This approach assumes that the underlays create their own complete networks. This definitely holds true for the underlay of a classical computer network. In these cases, the network can be configured by means of a variety of control and management protocols. The same scheme for quantum networks, however, is not as well developed. This approach is shown in Fig. 4. The quantum underlay is connected by quantum repeaters, while the classical underlay is connected by traditional routers or switches.
A hybrid approach can also be used, in which the classical network is assumed to be present and to interconnect all sites that use the quantum network, with the quantum overlay being used at each location. This approach provides a mixed abstraction, which is the one we prefer to follow. It allows the quantum overlay to be configured without configuring the classical network, so that only the quantum network needs to be configured and managed. This approach is shown in Fig. 5.
Figure 4: Overlay Abstraction using Complex Underlays
Figure 3: Overlay repeated at each node
Figure 2: The Abstracted Representation of a Quantum Network
While the hybrid representation does not appear very clean from a conceptual standpoint, it does provide a pragmatic approach where we can focus on the control and management of the quantum aspects of the network, instead of duplicating an already existing rich set of activities in the classical network space. With a hybrid representation, the control and management plane for quantum overlays can focus completely on the operation of the overlay and quantum underlay, assuming that the classical network is available and connects all the communicating nodes. For the discussion of the control and management architecture of the quantum overlays in the next section, we assume that the hybrid configuration is being used.
## IV Abstractions
Borrowing from the different personas that are used in classical communication networks, we can identify the following personas that are involved with quantum communication networks:
* **The User**: The user persona is the entity using the network for communication. In most cases, the user persona would be the application software that calls upon the quantum network to perform its task. If we consider the quantum network as one aggregated layer in the networking stack, the user is the software that is running on the layer above the quantum network layer.
* **The Provider**: The provider persona is the layer that sits below the quantum network layer, supplying the underlying quantum and classical transmission services on which the overlay abstractions are built.
* **The Configurator**: The configurator persona is the entity that is responsible for getting an instance of the quantum network installed and ready for operation. The task of the configurator is to provide the control mechanisms that will ensure that the quantum network instance is up and operational.
* **The Administrator**: The administrator persona is the entity that is responsible for ensuring that the network is operational, that any errors happening in the network are handled correctly.
The user, configurator and administrator personas are effectively the users of the data plane, the control plane and the management plane, respectively, for the quantum overlay that we have discussed in Section II. Let us now look at the abstractions that the quantum overlay can offer to each of these personas.
### _Data Plane Abstraction_
The quantum overlay in the model we are describing leverages the two underlays - namely the quantum underlay and the classical underlay. As a default, the abstraction offered by the quantum overlay can be a superset of the abstractions offered by either of the two underlays.
The traditional networking layer of the 7-layer OSI architecture [30] offers the abstractions of a circuit or a datagram. A circuit provides the abstraction of an end-to-end connection between the sender and the receiver. Information sent at one end of the circuit is delivered to the other end of the circuit. The circuit could be physical or virtual - the latter being more common given the ubiquity of the Internet Protocol. The circuit may have some error rate, but maintains the sequence of packets. A datagram provides the abstraction of a packet which is transferred between the end-points provided a destination of the packet. Delivery may be out of sequence, and may be lossy.
While we could export the same abstractions at the quantum overlay level, doing so would not leverage the capabilities of the quantum underlay. The quantum network, due to the no-cloning property of qubits, is more suitable for a circuit abstraction than for a datagram abstraction. We propose the following abstractions to be offered by the quantum overlay, combining the strengths of the quantum underlay and the classical underlay:
* **Secure Lossy Datagram Circuit**: The secure unreliable circuit takes a datagram from the sender and delivers it securely to the receiver. Security is obtained by establishing a secure key using a Quantum Key Distribution (QKD) protocol [31], which is then used to encrypt the datagram. The secure circuit would have a key refresh interval parameter which would result in the renegotiation of the secure key after a given number of datagrams are transmitted, or after the lapse of a time-period. The secure circuit does not guarantee reliable delivery.
* **Secure Reliable Datagram Circuit**: The secure reliable circuit offers security as well as the reliability of datagrams that are delivered. The abstraction offered is still that of a datagram bundle, but information is guaranteed to be delivered. The delivery need not preserve sequence.
* **Secure Reliable ByteStream**: The secure reliable ByteStream offers security, reliability and in-sequence delivery. This allows the creation of a bytestream abstraction.
* **Synchronized Random Number Generator**: The synchronized random number generator provides an abstraction where a call on either side of the quantum overlay results in the generation of the same random number at both sides. The synchronized random number generated could be the key generated by QKD protocols which leverage both the classical and the quantum underlay. We would emphasize that the term synchronized here refers to the fact that the same number is seen by all parties, and does not imply any type of time-synchronization.
The combination of a datagram abstraction with a circuit abstraction may appear contradictory at first glance, but the usage model with quantum networks enables a logical way to understand this combination. The quantum underlay provides
Fig. 5: Overlay Abstraction using Complex Classical Underlay
the circuit analog, while the ability to chunk information into logical units provides the concept of a datagram.
The first three abstractions provide the analogue of the abstractions offered by transport layer security (TLS) [32] over the User Datagram Protocol (UDP) [33], TLS with the Real-time Transport Protocol (RTP) [34], and TLS with the Transmission Control Protocol (TCP) [35] in classical computer networks. The quantum overlay thus provides abstractions which are at the transport and session levels of the OSI architecture [30].
The fourth abstraction exploits a QKD protocol such as BB84 [36], or any other QKD protocol [31], to create a random number generator. This random number generator can be used by other programs on a classical computer, for example in applications such as simulated annealing and genetic algorithms, or to provide seeds for a variety of Monte Carlo simulations. Having a synchronized random number at different end-points can enable the exploration of different spaces in Monte Carlo simulations more effectively. Unlike the first three abstractions, which rely on having a circuit traditionally associated with two end-points, the synchronized random number generator can be easily extended to the case of multicast quantum networks.
Each of these abstractions can be implemented and provided by a software development kit (SDK) running on a classical computer. The hardware and software assumed for supporting this SDK are shown in Fig. 6. The host that runs the software has two network interface cards (NICs), one supporting a classical network and the other interfacing with a quantum network. The design of the quantum network NIC can be done in a variety of ways, just like many different kinds of classical NICs are available in the market today. The Quantum Comm. SDK exports the four abstractions described above to the application software that invokes it. It implements its abstractions by invoking the commands supported by the software drivers of the two NICs, using the capabilities of both the quantum network and the classical network.
The SDK allows computer software to be developed for quantum networks without waiting for a complete development of quantum computers. This SDK can be implemented and offered as a user-level library, just like the support for TLS protocol [37] is implemented in current classical computers.
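To make the shape of such an SDK concrete, the sketch below outlines one possible interface for the four abstractions. The class and method names are hypothetical illustrations of the data plane abstractions described above, not an existing library.

```python
# Hypothetical SDK interface for the four data plane abstractions; names and
# signatures are illustrative only, not an existing library.
from abc import ABC, abstractmethod

class QuantumOverlaySDK(ABC):
    """Facade over the classical and quantum NIC drivers on a classical host."""

    @abstractmethod
    def open_secure_lossy_circuit(self, peer: str, key_refresh_interval: int):
        """Secure lossy datagram circuit: QKD-keyed encryption, no delivery guarantee."""

    @abstractmethod
    def open_secure_reliable_circuit(self, peer: str, key_refresh_interval: int):
        """Secure reliable datagram circuit: guaranteed delivery, order not preserved."""

    @abstractmethod
    def open_secure_bytestream(self, peer: str, key_refresh_interval: int):
        """Secure reliable byte stream: guaranteed, in-sequence delivery."""

    @abstractmethod
    def synchronized_random(self, peer: str, num_bits: int) -> int:
        """Return a random number that the peer obtains identically via the QKD exchange."""

# Usage sketch: an application pushing a message over the reliable circuit.
def send_report(sdk: QuantumOverlaySDK, peer: str, payload: bytes) -> None:
    circuit = sdk.open_secure_reliable_circuit(peer, key_refresh_interval=1000)
    circuit.send(payload)   # payload is encrypted with the current QKD-derived key
    circuit.close()
```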
### _Control Plane Abstraction_
The control plane is responsible for configuring the operation of the quantum overlay. The control plane for the quantum overlay needs to configure both the quantum underlay and the classical underlay in order for the quantum overlay to support the data plane abstractions. The control plane capabilities for a classical network are well developed. The additional work is to augment the control plane capabilities for the quantum network as well.
In order to perform its task, the control plane needs to implement the abstractions of a link, a path and a network. The link provides for a single instance of the quantum overlay (or the quantum underlay), since we are opting for the hybrid approach for implementing the quantum overlay. The path is a collection of links, while the network consists of a conglomerate of paths and networks.
In order to control the network, a set of layered abstractions as shown in Fig. 7 ought to be implemented in the control layer. The link and path configurations are used to establish the specific communication needs to set up the quantum overlay abstraction. These would configure the different attributes so that the quantum underlay network is able to exchange information, and appropriate configuration information is also provided and exchanged for a quantum network.
The topmost layer of the control plane architecture is the set of utilities and tools that will be needed. These utilities and tools include the software that can perform tasks such as autoconfiguration, and the generation of policies so that policy based configuration [38, 39] can be supported. Policy based management has been used to configure many different types of networks, and would be a natural approach to configure and control quantum overlays/underlays.
A policy based approach can be used with both decentralized control mechanisms, e.g. how distributed routing protocols control the operation of the network, as well as with centralized control mechanisms such as SDN [18]. Some papers have explored using the centralized control scheme for quantum networks [20]. Both centralized and decentralized control mechanisms have their own benefits and drawbacks depending on the scale, administrative control, and performance requirements of different types of networks. We would argue that the policy based approach, where different policies dictate different actions to be taken by the network in response to some conditions arising in the network, is an approach that can be used across both paradigms for controlling a network.
With modern AI based capabilities, configuration of different types of software and systems can be done using natural language such as English. Any such tools that provide a natural-language approach to configuring quantum overlays also fall within this category of tools.
Fig. 6: Hardware and Software for supporting the Abstractions
Fig. 7: The Control Plane for a Quantum Network
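As a small illustration of the condition-action style of policy described above, the sketch below evaluates a few configuration policies against the observed state of a quantum overlay. The state keys, thresholds and action names are invented for illustration and are not taken from any standard or from this architecture.

```python
# Illustrative policy-based control sketch for a quantum overlay; state keys,
# thresholds and action names are invented placeholders.
POLICIES = [
    # (condition over the observed overlay state, action to request)
    (lambda s: s["qber"] > 0.11,            "suspend_key_generation"),
    (lambda s: s["key_pool_bits"] < 4096,   "renegotiate_qkd_session"),
    (lambda s: not s["classical_link_up"],  "fail_over_to_backup_link"),
]

def evaluate_policies(state: dict) -> list:
    """Return the actions whose conditions hold for the current overlay state."""
    return [action for condition, action in POLICIES if condition(state)]

# Example: a high quantum bit error rate triggers a single corrective action.
state = {"qber": 0.15, "key_pool_bits": 10000, "classical_link_up": True}
print(evaluate_policies(state))   # -> ['suspend_key_generation']
```

The same policy table could be evaluated by a centralized SDN-style controller or locally at each node, which is one reason the policy-based formulation spans both control paradigms discussed above.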
### _Management Plane Abstraction_
Like the control plane, the management plane would need to follow a layered architecture. At each layer in this stack, we assume that the system conforms to the logical abstractions of a link, a path and a network. This layering of the stack is shown in Fig. 8.
The difference between the control stack and the management stack is the concept of the event. Events happen within the network and need to be handled. These events can be processed, suppressed, stored, automatically handled by a management software component, or displayed to a human administrator for action.
The concept of an event is integral to traditional approach for network management, and one can use the existing protocols [40, 41] for network management aspects of a quantum overlay. The format to represent an event, as well as any management information for the link and network are well-defined in those standards. We can extend the specifications to handle the concepts associated with a quantum overlay and a quantum underlay in these specifications.
A big aspect of network management is the use of Artificial Intelligence and Machine Learning to automatically handle the different events that are generated within the network. The concepts of policy based management, described in Section IV-B, are useful for handling these events as well. Machine Learning can be used to generate the policies that are required for handling the different events in the system [42].
## V Applications
Any discussion of an abstraction is incomplete without a discussion of the applications, i.e. the users of the abstractions that are proposed. In this particular case, we need to discuss the potential applications that can use the abstractions of (i) a secure lossy datagram circuit, (ii) a secure reliable datagram circuit, (iii) a secure reliable byte-stream and (iv) a synchronized random number generator.
### _Applications using Circuit/ByteStream Abstraction_
The first three of these offer abstractions that are similar to the secure communication pipes that exist in the current Internet, with enhanced security provided by means of the quantum key provided on the quantum network. As a result, the applications of these abstractions will be the same as the general class of applications today.
For a secure lossy datagram circuit, real-time applications that can tolerate loss are suitable. These include the streaming of voice and video, conversations over the telephone, and interactive video-conferencing. The primary advantage of the abstraction is that it provides a secure connection by default. In general, any application that is designed around the SIP and RTP protocols on the Internet can benefit from this abstraction, gaining improved security by means of quantum networks.
For a secure reliable datagram circuit, the most common applications would be messaging applications such as Slack, Telegram, WhatsApp, Facebook Messenger, Apple iMessage, etc. Since security comes by default for these applications, the secure messaging service that some of them currently provide through end-to-end encryption becomes available for free. Note, however, that these consumer applications are used today on mobile phones, which may not have cost-effective, ready access to a free-space quantum communications link, at least at present. However, the exploitation of this abstraction need not await the development of quantum networking hardware for the hand-held.
There are several enterprise applications which use a messaging protocol between computers in the back-end. Common examples of such messaging protocols include AMQP [43], the original IBM MQSeries [44], and various implementations of the concept of the enterprise service bus [45]. These messaging systems interconnect large corporate systems and data centers. An implementation of quantum network software using the secure reliable datagram circuit would provide commercial use-cases for long-distance quantum networks that are likely to become viable in the near future.
The secure reliable byte-stream is very similar to the well-known Transmission Control Protocol (TCP), which forms the basis for the bulk of communication happening on the Internet. Quantum security provides a default secure mode of operation for the communicating computers.
### _Applications using Synchronized Random Number Generator_
The synchronized random number generator is a new abstraction that is not found in current software systems or communication protocols. We would like to emphasize that the synchronization is reflected in the same random number being visible to all communicating parties, not in the numbers being generated at exactly the same instant. As a result, this is the abstraction that can lead to the development of new software applications that are enabled purely by the advent of quantum networks. The first and most common use of these synchronized random numbers is as security keys, i.e. common shared secrets. However, the ability to generate the same random number at two or more different sites has the potential to improve many other software applications that depend on random numbers.
A good use-case for the synchronized random number can be found in the field of randomized algorithms [46, 47]. There are two broad classes of randomized algorithms, categorized as Las Vegas algorithms and Monte Carlo algorithms. A Las Vegas algorithm is a randomized algorithm that always gives the correct result but gambles with resources. A Monte Carlo algorithm is a randomized algorithm whose output may be incorrect with some probability. The error probability of a Monte Carlo algorithm would typically be small. Monte Carlo
Fig. 8: The Management Plane for a Quantum Network
algorithms are used for simulations in a large variety of fields. When using either a Las Vegas or a Monte Carlo algorithm, having two nodes with a synchronized random number available would be an advantage, because it allows the algorithm to be parallelized. For Las Vegas search algorithms, as an example, two or more nodes can use a shared random number to search for items in parallel, increasing the speed of the search by the number of nodes available.
Monte Carlo simulations [48] can benefit tremendously from a synchronized random number. A Monte Carlo simulation of any system depends on drawing random numbers that drive the simulated behavior in a variety of application domains. When the same random number is available at two different sites, Monte Carlo simulations can coordinate their behavior, e.g. use the random number to split the simulation space into two distinct parts, with each of the two computers exploring a different part.
An approach for doing such a split is described below. Let us assume that a Monte Carlo simulation consists of multiple variables. The values these variables can take define the simulation space that needs to be explored. For the sake of visualization, we can map these variables into a space of two dimensions, e.g. by using a Principal Component Analysis (PCA) transform [49]. Without loss of generality, we assume that the axes are bounded by 0 and 1. The random number shared at each of the nodes can be mapped to a number between 0 and 1, e.g. a K-bit random number can be taken as the fraction represented by that number divided by \(2^{K}\). This random number can be used to split the space so that the PCA space is divided into two equal halves. One of the computers can explore the Monte Carlo simulation in the simulation space that corresponds to one half of the space split by the random point, while the other can explore the simulation space corresponding to the other half. If the simulation space is mapped to a PCA space of a single dimension, it can be represented as a circular space that loops around after the maximum value, and the random number used to split the space into two equal halves. Many other approaches to split the space in half are also possible, including using the wraparound from the maximum value to separate the 2-dimensional space into two equal halves. These approaches are illustrated in Fig. 9. The left and right pictures in Fig. 9 show two different ways in which a 2-dimensional PCA space can be divided into equal spaces. The middle picture shows how a single-dimensional space can be divided into two equal halves by means of a random number. The right picture can be viewed as the unfolded approach extending the approach shown in the middle from one dimension to two dimensions. Of course, these are only a few approaches to divide the available simulation space, and many other approaches can be designed depending on the exact nature of the variables defining the Monte Carlo simulation.
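A minimal sketch of the one-dimensional (circular) version of this split is given below. The bit-width of the shared number, the node count and the sample points are illustrative choices of ours, and the shared value is simply assumed to have been delivered by the quantum network.

```python
# Illustrative sketch: split a circular 1-D (or PCA-reduced) simulation space
# into two equal halves using a shared K-bit random number.
import numpy as np

def shared_fraction(shared_bits: int, k: int) -> float:
    """Map a K-bit random number, identical at every node, to [0, 1)."""
    return shared_bits / float(2 ** k)

def my_half(point_1d: float, split: float, node_id: int) -> bool:
    """Treat the space as a circle of circumference 1, cut it at `split` and
    `split + 0.5`, and assign one half to node 0 and the other to node 1."""
    offset = (point_1d - split) % 1.0
    return (offset < 0.5) == (node_id == 0)

# Both nodes are assumed to hold the same 16-bit number from the quantum network.
split = shared_fraction(shared_bits=41389, k=16)

rng = np.random.default_rng(0)
samples = rng.random(10)           # candidate points in the unit interval
for node_id in (0, 1):
    mine = [x for x in samples if my_half(x, split, node_id)]
    print(node_id, len(mine))      # the two nodes cover disjoint halves
```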
It is worth noting that the scheme can be easily generalized to split the PCA space and corresponding simulation space into any number of equal partitions, and the simulations run in parallel over several machines. At the end of the simulations, results from all the runs can be aggregated. This random partitioning provides an easier way to coordinate the parallel execution of tasks.
The same splitting of tasks can also be achieved when running other probabilistic algorithms, e.g. training a neural network model in a distributed manner [50]. Neural network models train themselves by examining small batches of training data, starting with a random guess of their internal weights, and iteratively reducing the prediction error by adjusting the weights. If this task has to be conducted in parallel by two or more machines, they need to cover different batches of the training data in a coordinated manner, so that they do not process the same batch but operate over disjoint batches. A synchronized random number can provide the seed for them to effectively divide the training data, reducing the time involved in training the models. This can speed up the exploration of parallel methods for running machine learning models.
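The sketch below illustrates this idea under the assumption that every worker holds the same seed obtained from the synchronized random number; the batch size, sample count and worker count are arbitrary placeholders.

```python
# Sketch of coordinating data-parallel training with a shared random seed.
import numpy as np

def batches_for_worker(n_samples: int, batch_size: int, shared_seed: int,
                       worker_id: int, n_workers: int):
    """Every worker computes the same permutation from the shared seed, then
    keeps every n_workers-th batch, so coverage is disjoint with no messages."""
    rng = np.random.default_rng(shared_seed)
    order = rng.permutation(n_samples)
    batches = [order[i:i + batch_size] for i in range(0, n_samples, batch_size)]
    return batches[worker_id::n_workers]

# Two workers, same shared seed -> disjoint coverage of all 1000 samples.
a = batches_for_worker(1000, 32, shared_seed=123456, worker_id=0, n_workers=2)
b = batches_for_worker(1000, 32, shared_seed=123456, worker_id=1, n_workers=2)
assert not set(np.concatenate(a)) & set(np.concatenate(b))
```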
Another application area that may benefit is one where the data space is not shared among learning models, e.g. when different data fragments reside on multiple wide-area nodes.
The current approach to training models across such distributed data sets without moving data to a central location is federated learning [51, 52], which requires a central coordinator in the most common implementations. Federated learning can be improved significantly by having an exchange of some portion of data among all the participating sites [53]. A random number exchanged among participants can be used to determine which data set to exchange. One nice property of using quantum networks for this over a traditional network is that this synchronization and coordination can be done without the need for a centralized federated learning coordinator.
Other types of applications that can be implemented and accelerated using parallel algorithms include simulated annealing [54] and genetic algorithms [55]. Since these simulations are done on large compute clusters where the addition of quantum networks would be economically feasible, they provide a good venue for implementing applications of quantum communications using the abstractions described in this paper.
As a final note, quantum communications will result in the synchronized number appearing at two or more nodes almost simultaneously, with a difference determined by the error in time-synchronization between them. On the other hand, creating a pseudo-random number and sharing it over a classical network incurs a delay of the round-trip latency between the two nodes. Such an exchange on a classical network also becomes complex and cumbersome as the number of nodes involved in creating a shared random number increases. In the field of improving randomized algorithms, quantum networks may therefore hold a performance advantage over doing similar exchanges on a classical network.
Fig. 9: Different Approaches to Split a Simulation Space
## VI Conclusion
In this paper, we have done an initial exploration of the types of communication abstractions that a quantum communications network can provide to its users, and the potential set of applications that can be implemented using those abstractions. This points towards the development of a suite of software applications which can enable practical exploitation of quantum networks. These applications can be developed today with the experimental quantum networks that are under development and deployment.
As a next step in this direction of exploiting quantum networks, we want to explore the abstractions that would make sense when quantum computers are communicating over a quantum network, and when a mixed configuration of classical and quantum computers is used. We would also like to expand the abstractions to a multicast domain, and explore applications where many participants are able to get a synchronized random number at the same time.
|
2305.19692 | Inferring redshift and energy distributions of fast radio bursts from
the first CHIME/FRB catalog | We reconstruct the extragalactic dispersion measure -- redshift relation
(${\rm DM_E}-z$ relation) from well-localized fast radio bursts (FRBs) using
Bayesian inference method. Then the ${\rm DM_E}-z$ relation is used to infer
the redshift and energy of the first CHIME/FRB catalog. We find that the
distributions of extragalactic dispersion measure and inferred redshift of the
non-repeating CHIME/FRBs follow cut-off power law, but with a significant
excess at the low-redshift range. We apply a set of criteria to exclude events
which are susceptible to selection effect, but find that the excess at low
redshift still exists in the remaining FRBs (which we call Gold sample). The
cumulative distributions of fluence and energy for both the full sample and the
Gold sample do not follow the simple power law, but they can be well fitted by
the bent power law. The underlying physical implications remain to be further
investigated. | Li Tang, Hai-Nan Lin, Xin Li | 2023-05-31T09:35:49Z | http://arxiv.org/abs/2305.19692v1 | # Inferring redshift and energy distributions of fast radio bursts from the first CHIME/FRB catalog+
###### Abstract
We reconstruct the extragalactic dispersion measure - redshift relation (\(\mathrm{DM_{E}-z}\) relation) from well-localized fast radio bursts (FRBs) using Bayesian inference method. Then the \(\mathrm{DM_{E}-z}\) relation is used to infer the redshift and energy of the first CHIME/FRB catalog. We find that the distributions of extragalactic dispersion measure and inferred redshift of the non-repeating CHIME/FRBs follow cut-off power law, but with a significant excess at the low-redshift range. We apply a set of criteria to exclude events which are susceptible to selection effect, but find that the excess at low redshift still exists in the remaining FRBs (which we call Gold sample). The cumulative distributions of fluence and energy for both the full sample and the Gold sample do not follow the simple power law, but they can be well fitted by the bent power law. The underlying physical implications remain to be further investigated.
fast radio bursts - intergalactic medium - cosmological parameters +
Footnote †: Supported by the National Natural Science Fund of China under grant Nos. 11873001, 12147102 and 12275034.
## 1 Introduction
Fast radio bursts (FRBs) are energetic radio pulses of millisecond duration occurring in the Universe; see e.g. [1, 2, 3, 4] for recent reviews. The discovery of the first FRB dates back to 2007, when Lorimer et al. [5] reanalyzed the 2001 archive data of the Parkes 64-m telescope and found an anomalous radio pulse, which is now named FRB010724. Later on, Thornton et al. [6] discovered several other similar radio pulses, which made FRBs receive great attention within the astronomy community. The origin of FRBs was still a mystery at that time, but the large dispersion measure (DM) implied that they were unlikely to originate from the Milky Way. The identification of host galaxies and the direct measurement of redshifts confirmed that they have an extragalactic origin [7, 8, 9]. Up to now, several hundreds of FRBs have been discovered [10, 11], among which only one is confirmed to originate from our Galaxy [12]. Phenomenologically, FRBs can be divided into two types: repeaters and non-repeaters, according to whether they are one-off events or not. The majority of FRBs are apparently non-repeating, but it is still unclear if they will be repeating in the future. Most repeating FRBs are not very active, repeating only two to three times [13]. However, more than one thousand bursts have been observed from two extremely active sources, i.e. FRB20121102A [14] and FRB20201124A [15].
The physical origin of FRBs is still under extensive debate. Several theoretical models have been proposed to explain repeating and non-repeating FRBs, respectively. For example, giant pulses from young rapidly rotating pulsars [16], the black hole battery model [17], the "Cosmic Comb" model [18], the inspiral and merger of binary neutron stars [19, 20], neutron star-white dwarf binary model [21], collision between neutron stars and asteroids [22], highly magnetized pulsars travelling through asteroid belts [23, 24], young magnetars with fracturing crusts [25], axion stars moving through pulsar magnetospheres [26], and so on. Although there is no standard model yet, it is widely accepted that the progenitor of FRB should at least involve one neutron star or magnetar. The recently discovered magnetar-associated burst in our Milky Way strongly supports the magnetar origin of some, if not all FRBs [12, 27]. The statistical similarity between repeating FRBs and soft gamma repeaters further implies that they may have similar origin [28, 29].
FRBs are energetic enough to be detectable up to high redshift, therefore they can be used as probes to investigate the cosmology [30, 31, 32, 33, 34, 35, 36, 37], and to test the fundamental physics [38, 39, 40, 41, 42]. Unfortunately, up to now most FRBs have no direct measurement of redshift. Although hundreds of FRBs have been measured, only a dozen of them are well localized. With such a small sample,
we do not even clearly know the redshift distribution of FRBs. One way to solve this problem is to use the observed DM, which is an indicator of distance, to infer the redshift [43, 44, 45, 46]. To this end, one should reasonably model the DM contribution from the host galaxy and subtract it from the total observed DM. This is not an easy task, because too many factors may affect the host DM, such as the galaxy type, the inclination angle, the mass of the host galaxy, the offset of the FRB site from the galaxy center, etc. A simple but rough assumption is that the host DM is a universal constant for all FRBs [31, 35, 46]. Alternatively, Luo et al. [47] assumed that the host DM follows the star-formation rate (SFR) of the host galaxy. However, Lin et al. [48] found no strong correlation between host DM and SFR from the limited sample of localized FRBs. A more reasonable way to deal with the host DM is to model it using a proper probability distribution and marginalize over the free parameters [36, 49, 50]. For example, Macquart et al. [49] assumed that the host DM follows a log-normal distribution, and reconstructed the DM-redshift relation from five well-localized FRBs. However, due to the small data sample, the DM-redshift relation has large uncertainty. With the discovery of more and more FRBs in recent years, it is interesting to recheck the DM-redshift relation and use it to infer the redshift of FRBs which have no direct measurement of spectroscopic or photometric redshift.
In this paper, we assume that the host DM of FRBs follows a log-normal distribution, and reconstruct the DM-redshift relation from well-localized FRBs using the Bayesian inference method. Then the DM-redshift relation is used to infer the redshift of the first CHIME/FRB catalog [11]. We further use the inferred redshift to calculate the isotropic energy of the CHIME/FRBs. The rest of this paper is arranged as follows: In Section 2, we reconstruct the DM-redshift relation from well-localized FRBs. In Section 3, we investigate the redshift and energy distributions of CHIME/FRBs. Finally, discussion and conclusions are given in Section 4.
## 2 The DM-redshift relation from localized FRBs
The interaction of electromagnetic waves with plasma leads to the frequency-dependent light speed. This plasma effect, although small, may cause detectable time delay between electromagnetic waves of different frequencies, if it is accumulated at cosmological distance. This phenomenon is more obvious for low-frequency electromagnetic waves, such as the radio wave as is observed in e.g. FRBs. The time delay between low- and high-frequency electromagnetic waves propagating from a distant source to earth is proportional to the integral of electron number density along the line-of-sight, i.e. the so called dispersion measure (DM). The observed DM of an extragalactic FRB can generally be decomposed into four main parts: the Milky Way interstellar medium (DM\({}_{\rm MW}\)), the Galactic halo (DM\({}_{\rm halo}\)), the intergalactic medium (DM\({}_{\rm IGM}\)), and the host galaxy (DM\({}_{\rm host}\)) [49, 51, 52],
\[{\rm DM}_{\rm obs}\!=\!{\rm DM}_{\rm MW}\!+\!{\rm DM}_{\rm halo}\!+\!{\rm DM} _{\rm IGM}\!+\!\frac{{\rm DM}_{\rm host}}{1\!+\!z}, \tag{1}\]
where DM\({}_{\rm host}\) is the DM of host galaxy in the FRB source frame, and the factor 1\(+\!z\) arises from the cosmic expansion. Occasionally, the DM\({}_{\rm halo}\) term is ignored, but this term is comparable to, or even larger than the DM\({}_{\rm MW}\) term for FRBs at high Galactic latitude.
The Milky Way ISM term (DM\({}_{\rm MW}\)) can be well modeled from pulsar observations, such as the NE2001 model [53] and the YMW16 model [54]. For FRBs at high Galactic latitude, both models give consistent results. However, it is pointed out that the YMW16 model may overestimate DM\({}_{\rm MW}\) at low Galactic latitude [55]. Therefore, we use the NE2001 model to estimate the DM\({}_{\rm MW}\) term. The Galactic halo term (DM\({}_{\rm halo}\)) is not well constrained yet, and Prochaska & Zheng [56] estimated that it is in the range \(50\!\sim\!80\) pc cm\({}^{-3}\). Here we follow Macquart et al. [49] and assume a conservative estimate for it, i.e. DM\({}_{\rm halo}\!=\!50\) pc cm\({}^{-3}\). The concrete value of DM\({}_{\rm halo}\) should not strongly affect our results, as its uncertainty is much smaller than the uncertainties of the DM\({}_{\rm IGM}\) and DM\({}_{\rm host}\) terms described below. Therefore, the first two terms on the right-hand-side of equation (1) can be subtracted from the observed DM\({}_{\rm obs}\). For convenience, we define the extragalactic DM as
\[{\rm DM}_{\rm E}\!\equiv\!{\rm DM}_{\rm obs}\!-\!{\rm DM}_{\rm MW}\!-\!{\rm DM }_{\rm halo}\!=\!{\rm DM}_{\rm IGM}\!+\!\frac{{\rm DM}_{\rm host}}{1\!+\!z}. \tag{2}\]
Given a specific cosmological model, the DM\({}_{\rm IGM}\) term can be calculated theoretically. Assuming that both hydrogen and helium are fully ionized [57, 58], the DM\({}_{\rm IGM}\) term can be written in the standard \(\Lambda\)CDM model as [43, 51]
\[\langle{\rm DM}_{\rm IGM}(z)\rangle\!=\!\frac{21cH_{0}\Omega_{b}f_{\rm IGM}}{ 64\pi Gm_{p}}\int_{0}^{z}\frac{1\!+\!z}{\sqrt{\Omega_{m}(1\!+\!z)^{3}\!+\! \Omega_{\Lambda}}}dz, \tag{3}\]
where \(f_{\rm IGM}\) is the fraction of baryon mass in IGM, \(m_{p}\) is the proton mass, \(H_{0}\) is the Hubble constant, \(G\) is the Newtonian gravitational constant, \(\Omega_{b}\) is the normalized baryon matter density, \(\Omega_{m}\) and \(\Omega_{\Lambda}\) are the normalized densities of matter (including baryon matter and dark matter) and dark energy, respectively. In this paper, we work in the standard \(\Lambda\)CDM model with the Planck 2018 parameters, i.e. \(H_{0}=67.4\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{m}=0.315\), \(\Omega_{\Lambda}\!=\!0.685\) and \(\Omega_{b}\!=\!0.0493\)[59]. The fraction of baryon mass in IGM can be tightly constrained by directly observing the budget of baryons in different states [60], or
observing the radio dispersion on gamma-ray bursts [61]. All the observations show that \(f_{\rm IGM}\) is about 0.84. Using five well-localized FRBs, Li et al. [33] also obtained a similar result. Therefore, we fix \(f_{\rm IGM}=0.84\) to reduce the freedom. The uncertainties of these parameters should not significantly affect our results, since they are much smaller than the variation of \({\rm DM}_{\rm IGM}\) described below.
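For illustration, the short sketch below evaluates equation (3) numerically with the parameter values quoted above. It is only a numerical check of the mean relation, not part of the original analysis.

```python
# A minimal numerical sketch of equation (3), assuming the Planck 2018
# parameters and f_IGM = 0.84 quoted in the text.
import numpy as np
from scipy.integrate import quad

c   = 2.998e8          # m/s
G   = 6.674e-11        # m^3 kg^-1 s^-2
m_p = 1.673e-27        # kg
Mpc = 3.086e22         # m
pc_cm3 = 3.086e22      # 1 pc cm^-3 expressed in m^-2

H0, Om, OL, Ob, f_igm = 67.4e3 / Mpc, 0.315, 0.685, 0.0493, 0.84

def mean_dm_igm(z):
    """<DM_IGM>(z) in pc cm^-3 for fully ionized H and He (the 21/64 factor)."""
    pref = 21.0 * c * H0 * Ob * f_igm / (64.0 * np.pi * G * m_p)
    integral, _ = quad(lambda x: (1 + x) / np.sqrt(Om * (1 + x)**3 + OL), 0.0, z)
    return pref * integral / pc_cm3

print(round(mean_dm_igm(1.0)))   # roughly 9e2 pc cm^-3 at z = 1
```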
Note that equation (3) should be interpreted as the mean contribution from IGM. Due to the large-scale matter density fluctuation, the actual value would vary around the mean. Theoretical analysis and hydrodynamic simulations show that the probability distribution for \({\rm DM}_{\rm IGM}\) has a flat tail at large values, which can be fitted with the following function [49, 62]
\[p_{\rm IGM}(\Delta)\!=\!A\Delta^{-\beta}\exp\left[-\frac{(\Delta^{-\alpha}-C_{ 0})^{2}}{2\alpha^{2}\sigma_{\rm IGM}^{2}}\right],\quad\Delta\!>\!0, \tag{4}\]
where \(\Delta\!\equiv\!{\rm DM}_{\rm IGM}/\langle{\rm DM}_{\rm IGM}\rangle\), \(\sigma_{\rm IGM}\) is the effective standard deviation, \(\alpha\) and \(\beta\) are related to the inner density profile of gas in haloes, \(A\) is a normalization constant, and \(C_{0}\) is chosen such that the mean of this distribution is unity. Hydrodynamic simulations show that \(\alpha\!=\!\beta\!=\!3\) provides the best match to the model [49, 62], thus we fix these two parameters. Simulations also show that the standard deviation \(\sigma_{\rm IGM}\) approximately scales with redshift as \(z^{-1/2}\) in the redshift range \(z\!\lesssim\!1\)[63, 64]. The redshift-dependence of \(\sigma_{\rm IGM}\) is still unclear at \(z\!>\!1\), so we just simply extrapolate this relation to high-redshift region. Therefore, we follow Macquart et al. [49] and parameterize it as \(\sigma_{\rm IGM}\!=\!Fz^{-1/2}\), where \(F\) is a free parameter.
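As an illustration of this density, the sketch below evaluates equation (4) numerically with \(\alpha=\beta=3\) and \(\sigma_{\rm IGM}=Fz^{-1/2}\). The constants \(A\) and \(C_{0}\) are not taken from the literature here; they are fixed on a grid so that the density integrates to one and has unit mean, and \(F=0.32\) anticipates the value constrained later in the text.

```python
# Grid-based sketch of the DM_IGM density of equation (4); A and C0 are
# fixed numerically (unit normalization and unit mean) rather than quoted.
import numpy as np

trapz = lambda y, x: np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def p_igm(delta, F, z, alpha=3.0, beta=3.0):
    sigma = F / np.sqrt(z)
    grid = np.linspace(1e-3, 30.0, 6000)          # support of Delta for normalization
    shape = lambda d, c0: d ** (-beta) * np.exp(
        -(d ** (-alpha) - c0) ** 2 / (2 * alpha ** 2 * sigma ** 2))
    # pick C0 so that <Delta> = 1, then normalize to get A
    c0_grid = np.linspace(-3.0, 3.0, 601)
    means = np.array([trapz(grid * shape(grid, c), grid) /
                      trapz(shape(grid, c), grid) for c in c0_grid])
    c0 = c0_grid[np.argmin(np.abs(means - 1.0))]
    A = 1.0 / trapz(shape(grid, c0), grid)
    return A * shape(np.asarray(delta, dtype=float), c0)

dens = p_igm(np.array([0.3, 0.7, 1.0, 1.5, 3.0]), F=0.32, z=1.0)
print(np.round(dens, 3))   # skewed density: peaked below unity with a long tail
```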
Due to the lack of detailed observations of the local environment of the FRB source, the host term \({\rm DM}_{\rm host}\) is poorly known. It may range from several tens to several hundred pc cm\({}^{-3}\). For example, Xu et al. [15] estimated that the \({\rm DM}_{\rm host}\) of the repeating burst FRB20201124A is in the range \(10\!<\!{\rm DM}_{\rm host}\!<\!310\) pc cm\({}^{-3}\); Niu et al. [65] inferred \({\rm DM}_{\rm host}\!\approx\!900\) pc cm\({}^{-3}\) for FRB20190520B. Numerical simulations show that the probability of \({\rm DM}_{\rm host}\) follows the log-normal distribution [49, 50],
\[p_{\rm host}({\rm DM}_{\rm host}|\mu,\sigma_{\rm host}) = \frac{1}{\sqrt{2\pi}{\rm DM}_{\rm host}\sigma_{\rm host}} \tag{5}\] \[\times \exp\left[-\frac{({\rm ln}{\rm DM}_{\rm host}\!-\!\mu)^{2}}{2 \sigma_{\rm host}^{2}}\right],\]
where \(\mu\) and \(\sigma_{\rm host}\) are the mean and standard deviation of \({\rm ln}{\rm DM}_{\rm host}\), respectively. This distribution has a median value \(e^{\mu}\) and standard deviation \(e^{\mu+\sigma_{\rm host}^{2}/2}(e^{\sigma_{\rm host}^{2}}-1)^{1/2}\).
We first use the full 17 FRBs to constrain the free parameters (\(F\), \(e^{\mu}\), \(\sigma_{\rm host}\)). We use \(e^{\mu}\) rather than \(\mu\) as a free parameter, as was done in Macquart et al. [49], because the former directly represents the median value of \({\rm DM}_{\rm host}\). Theoretically, the log-normal distribution allows for the appearance of large values of \({\rm DM}_{\rm host}\); as is shown by simulations, \({\rm DM}_{\rm host}\) may be as large as 1000 pc cm\({}^{-3}\)[44]. Generally, the two parameters (\(\mu,\sigma_{\rm host}\)) may be redshift-dependent, but for non-repeating bursts they do not vary significantly with redshift [50]. For simplicity, we first follow Macquart et al. [49] and treat them as two constant parameters. The possible redshift-dependence will be investigated later.
Given the distributions \(p_{\rm IGM}\) and \(p_{\rm host}\), the probability distribution of \({\rm DM}_{\rm E}\) at redshift \(z\) can be calculated as [49]
\[p_{E}({\rm DM}_{\rm E}|z) = \int_{0}^{(1+z){\rm DM}_{\rm E}}p_{\rm host}({\rm DM}_{\rm host}| \mu,\sigma_{\rm host}) \tag{6}\] \[\times p_{\rm IGM}({\rm DM}_{\rm E}\!-\!\frac{{\rm DM}_{\rm host}}{1\!+ \!z}|F,z)d{\rm DM}_{\rm host}.\]
The likelihood that we observe a sample of FRBs with \({\rm DM}_{{\rm E},i}\) at redshift \(z_{i}\) (\(i\!=\!1,2,3,...,N\)) is given by
\[{\cal L}({\rm FRBs}|F,\mu,\sigma_{\rm host})\!=\!\prod_{i=1}^{N}\!p_{E}({\rm DM }_{{\rm E},i}|z_{i}), \tag{7}\]
where \(N\) is the total number of FRBs. Given the FRB data (\(z_{i},{\rm DM}_{{\rm E},i}\)), the posterior probability distribution of the parameters (\(F\),\(\mu,\sigma_{\rm host}\)) is given according to Bayes theorem by
\[P(F,\mu,\sigma_{\rm host}|{\rm FRBs})\!\propto\!{\cal L}({\rm FRBs}|F,\mu, \sigma_{\rm host})P_{0}(F,\mu,\sigma_{\rm host}), \tag{8}\]
where \(P_{0}\) is the prior of the parameters.
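To make the structure of equations (6) and (7) concrete, the sketch below evaluates \(p_{E}({\rm DM_{E}}|z)\) and the log-likelihood on a small toy sample. It is only an illustration of the marginalization integral: a Gaussian with width \(\sigma_{\rm IGM}=Fz^{-1/2}\langle{\rm DM_{IGM}}\rangle\) stands in for the full IGM density of equation (4), \(\langle{\rm DM_{IGM}}\rangle\) is passed in by hand, and the numerical values in the toy sample are placeholders.

```python
# Sketch of equations (6)-(7): marginalize DM_host out of DM_E and sum the
# log-likelihood.  A Gaussian stands in for p_IGM here; the density of
# equation (4) would replace `p_igm_gauss` in a full implementation.
import numpy as np

trapz = lambda y, x: np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def p_host(dm_host, exp_mu, sigma_host):
    """Log-normal host-galaxy DM density, equation (5)."""
    mu = np.log(exp_mu)
    return np.exp(-(np.log(dm_host) - mu) ** 2 / (2 * sigma_host ** 2)) / (
        np.sqrt(2 * np.pi) * dm_host * sigma_host)

def p_igm_gauss(dm_igm, mean_igm, F, z):
    """Placeholder Gaussian IGM term with sigma_IGM = F z^(-1/2) <DM_IGM>."""
    sigma = F * mean_igm / np.sqrt(z)
    return np.exp(-(dm_igm - mean_igm) ** 2 / (2 * sigma ** 2)) / (
        np.sqrt(2 * np.pi) * sigma)

def p_E(dm_e, z, mean_igm, F, exp_mu, sigma_host, n_grid=2000):
    """Equation (6): integrate over DM_host from 0 to (1+z) DM_E."""
    dm_h = np.linspace(1e-3, (1 + z) * dm_e, n_grid)
    integrand = p_host(dm_h, exp_mu, sigma_host) * \
        p_igm_gauss(dm_e - dm_h / (1 + z), mean_igm, F, z)
    return trapz(integrand, dm_h)

def log_likelihood(sample, F, exp_mu, sigma_host):
    """Equation (7): sample is a list of (z_i, DM_E_i, <DM_IGM>(z_i)) tuples."""
    return sum(np.log(p_E(dm_e, z, mean_igm, F, exp_mu, sigma_host))
               for z, dm_e, mean_igm in sample)

# toy sample: (z, DM_E, <DM_IGM>) with illustrative numbers only
sample = [(0.19, 349.4, 160.0), (0.32, 270.7, 280.0)]
print(log_likelihood(sample, F=0.32, exp_mu=102.0, sigma_host=1.10))
```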
Up to now, there are 19 well-localized extragalactic FRBs that have direct identification of the host galaxy and well measured redshift*. Among them, we ignore FRB20200120E and FRB20190614D. The former is very close to our Galaxy (3.6 Mpc), and the peculiar velocity dominates over the Hubble flow, hence it has a negative redshift \(z\!=\!-0.0001\)[66, 67]. The latter has no direct measurement of spectroscopic redshift, but has a photometric redshift \(z\!\approx\!0.6\)[68]. All the remaining 17 FRBs have well measured spectroscopic redshifts. The main properties of the 17 FRBs are listed in Table 1, which will be used in the following to reconstruct the \({\rm DM}_{\rm E}\)-redshift relation.
Footnote *: The FRB Host Database, [http://frbhosts.org/](http://frbhosts.org/)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline FRBs & RA & Dec & DM\({}_{\rm obs}\) & DM\({}_{\rm MW}\) & DM\({}_{\rm E}\) & \(z_{\rm sp}\) & repeat? & reference \\ & [\({}^{\circ}\) ] & [\({}^{\circ}\) ] & [pc cm\({}^{-3}\)] & [pc cm\({}^{-3}\)] & [pc cm\({}^{-3}\)] & & & \\ \hline
20121102A & 82.99 & 33.15 & 557.00 & 157.60 & 349.40 & 0.1927 & Yes & Chatterjee et al. [8] \\
20180301A & 93.23 & 4.67 & 536.00 & 136.53 & 349.47 & 0.3305 & Yes & Bhandari et al. [69] \\
20180916B & 29.50 & 65.72 & 348.80 & 168.73 & 130.07 & 0.0337 & Yes & Marcote et al. [70] \\
20180924B & 326.11 & \(-\)40.90 & 362.16 & 41.45 & 270.71 & 0.3214 & No & Bannister et al. [71] \\
20181030A & 158.60 & 73.76 & 103.50 & 40.16 & 13.34 & 0.0039 & Yes & Bhardwaj et al. [72] \\
20181112A & 327.35 & \(-\)52.97 & 589.00 & 41.98 & 497.02 & 0.4755 & No & Prochaska et al. [73] \\
20190102C & 322.42 & \(-\)79.48 & 364.55 & 56.22 & 258.33 & 0.2913 & No & Macquart et al. [49] \\
20190523A & 207.06 & 72.47 & 760.80 & 36.74 & 674.06 & 0.6600 & No & Ravi et al. [74] \\
20190608B & 334.02 & \(-\)7.90 & 340.05 & 37.81 & 252.24 & 0.1178 & No & Macquart et al. [49] \\
20190611B & 320.74 & \(-\)79.40 & 332.63 & 56.60 & 226.03 & 0.3778 & No & Macquart et al. [49] \\
20190711A & 329.42 & \(-\)80.36 & 592.60 & 55.37 & 487.23 & 0.5217 & Yes & Macquart et al. [49] \\
20190714A & 183.98 & \(-\)13.02 & 504.13 & 38.00 & 416.13 & 0.2365 & No & Heintz et al. [75] \\
20191001A & 323.35 & \(-\)54.75 & 507.90 & 44.22 & 413.68 & 0.2340 & No & Heintz et al. [75] \\
20191228A & 344.43 & \(-\)29.59 & 297.50 & 33.75 & 213.75 & 0.2432 & No & Bhandari et al. [69] \\
20200430A & 229.71 & 12.38 & 380.25 & 27.35 & 302.90 & 0.1608 & No & Bhandari et al. [69] \\
20200906A & 53.50 & \(-\)14.08 & 577.80 & 36.19 & 491.61 & 0.3688 & No & Bhandari et al. [69] \\
20201124A & 77.01 & 26.06 & 413.52 & 126.49 & 237.03 & 0.0979 & Yes & Fong et al. [76] \\ \hline \end{tabular}
\end{table}
Table 1: The properties of the Host/FRB catalog. Column 1: FRB name; Columns 2 and 3: the right ascension and declination of FRB source on the sky; Column 4: the observed DM; Column 5: the DM of the Milky Way ISM calculated using the NE2001 model; Column 6: the extragalactic DM calculated by subtracting DM\({}_{\rm MW}\) and DM\({}_{\rm halo}\) from the observed DM\({}_{\rm obs}\), assuming DM\({}_{\rm halo}\)\(=\) 50 pc cm\({}^{-3}\) for the Milky Way halo; Column 7: the spectroscopic redshift; Column 8: repeating or non-repeating; Column 9: the references.
The cosmological parameters are fixed to the Planck 2018 values [59]. The same flat priors as those in Macquart et al. [49] are used for the free parameters: \(F\in\mathcal{U}(0.01,0.5)\), \(e^{\mu}\in\mathcal{U}(20,200)\)\(\mathrm{pc\ cm^{-3}}\) and \(\sigma_{\mathrm{host}}\in\mathcal{U}(0.2,2.0)\). The posterior probability density functions and the confidence contours of the free parameters are plotted in the left panel of Figure 1. The median values and \(1\sigma\) uncertainties of the free parameters are \(F=0.32^{+0.11}_{-0.10}\), \(e^{\mu}=102.02^{+37.65}_{-31.06}\)\(\mathrm{pc\ cm^{-3}}\) and \(\sigma_{\mathrm{host}}=1.10^{+0.31}_{-0.23}\).
With the parameters (\(F\), \(e^{\mu},\sigma_{\mathrm{host}}\)) constrained, we calculate the probability distribution of \(\mathrm{DM_{E}}\) at any redshift in the range \(0<z<4\) according to equation (6). The reconstructed \(\mathrm{DM_{E}}-z\) relation is plotted in the left panel of Figure 2. The dark blue line is the median value and the light blue region is the \(1\sigma\) uncertainty. For comparison, we also plot the best-fitting curve obtained by directly fitting equation (2) to the FRB data using the least-\(\chi^{2}\) method (the red-dashed line), where \(\mathrm{DM_{IGM}}\) is replaced by its mean given in equation (3). The least-\(\chi^{2}\) method is equivalent to assuming that both \(\mathrm{DM_{IGM}}\) and \(\mathrm{DM_{host}}\) follow Gaussian distributions around the mean. The least-\(\chi^{2}\) curve gradually deviates from the median value of the reconstructed \(\mathrm{DM_{E}}-z\) relation at high redshift, but due to the large uncertainty they are still consistent within \(1\sigma\). We find that 15 out of the 17 FRBs fall well into the \(1\sigma\) range of the reconstructed \(\mathrm{DM_{E}}-z\) relation. Two outliers, FRB20181030A and FRB20190611B (the red dots in Figure 2), fall below the \(1\sigma\) range of the \(\mathrm{DM_{E}}-z\) relation, implying that the \(\mathrm{DM_{E}}\) values of these two FRBs are smaller than expected. We note that the outlier FRB20181030A has a very small redshift (\(z=0.0039\)) and a very low extragalactic DM (\(\mathrm{DM_{E}=13.34\ pc\ cm^{-3}}\)), so the peculiar velocity of its host galaxy couldn't be ignored. The redshift of the other outlier FRB20190611B is \(z=0.3778\), and the observed DM of this burst is \(\mathrm{DM_{obs}=332.63\ pc\ cm^{-3}}\). The normal burst FRB20200906A has a redshift (\(z=0.3688\)) similar to FRB20190611B, but with a much larger DM (\(\mathrm{DM_{obs}=577.8\ pc\ cm^{-3}}\)). Note that both FRB20200906A and FRB20190611B are non-repeating, and the sky positions of these two bursts differ significantly. The large difference of \(\mathrm{DM_{obs}}\) between these two bursts may be caused by e.g. the fluctuation of matter density in the IGM, the variation of host DM, or different local environments of the FRB sources [65, 78].
The full FRB sample includes 11 non-repeating FRBs and 6 repeating FRBs. The repeaters and non-repeaters may have different \(\mathrm{DM_{host}}\). To check this, we re-constrain the parameters (\(F,e^{\mu},\sigma_{\mathrm{host}}\)) using the 11 non-repeating FRBs. The confidence contours and the posterior probability distributions of the parameter space are plotted in the right panel of Figure 1.
Figure 1: Constraints on the free parameters (\(F\), \(e^{\mu},\sigma_{\mathrm{host}}\)) using the full sample (left panel) and the non-repeaters (right panel). The contours from the inner to outer represent \(1\sigma\), \(2\sigma\) and \(3\sigma\) confidence regions, respectively.
The median values and \(1\sigma\) uncertainties of the free parameters are \(F=0.38^{+0.09}_{-0.11}\), \(e^{\mu}=126.86^{+39.77}_{-41.07}\) pc cm\({}^{-3}\) and \(\sigma_{\rm host}=0.88^{+0.42}_{-0.28}\). We obtain a slightly larger \(e^{\mu}\) value but a smaller \(\sigma_{\rm host}\) value than those constrained from the full FRB sample, but they are still consistent within the \(1\sigma\) uncertainty. The reconstructed \({\rm DM_{E}}-z\) relation using the non-repeating sample is shown in the right panel of Figure 2. FRB20190611B is still an outlier (the other outlier FRB20181030A is a repeater). The \({\rm DM_{E}}-z\) relations of the full sample and the non-repeaters are well consistent with each other, but the latter has a slightly larger uncertainty, especially at the low-redshift range.
In general, \(e^{\mu}\) and \(\sigma_{\rm host}\) may evolve with redshift. Numerical simulations show that the median value of \({\rm DM_{host}}\) has a power-law dependence on redshift, but \(\sigma_{\rm host}\) does not change significantly [50]. To check this, we parameterize \(e^{\mu}\) in the power-law form,
\[e^{\mu}=e^{\mu_{0}}(1+z)^{\alpha}, \tag{9}\]
and use the full FRB sample to constrain the parameters (\(F\), \(e^{\mu_{0}}\), \(\sigma_{\rm host}\), \(\alpha\)). The resulting constraints are shown in Figure 3.
## 3 The redshift and energy distribution of CHIME/FRBs
The first CHIME/FRB catalog consists of 536 bursts, including 474 apparently non-repeating bursts and 62 repeating bursts from 18 FRB sources [11]. In this paper, we focus on the 474 apparently non-repeating bursts, whose properties are listed in a long table in the _online material_. All the bursts have well measured \({\rm DM_{obs}}\), but most of them have no direct measurement of redshift. We calculate the extragalactic \({\rm DM_{E}}\) by subtracting \({\rm DM_{MW}}\) and \({\rm DM_{halo}}\) from the observed \({\rm DM_{obs}}\), where \({\rm DM_{MW}}\) is calculated using the NE2001 model [53], and \({\rm DM_{halo}}\) is assumed to be 50 pc cm\({}^{-3}\)[49]. The \({\rm DM_{E}}\) values of the 474 apparently non-repeating bursts fall into the range \(20-3000\) pc cm\({}^{-3}\). Among them 444 bursts have \({\rm DM_{E}}>100\) pc cm\({}^{-3}\), while the rest 30 bursts have \({\rm DM_{E}}<100\) pc cm\({}^{-3}\). The mean and median values of \({\rm DM_{E}}\) are 557 and 456 pc cm\({}^{-3}\), respectively. We divide the \({\rm DM_{E}}\) of the full non-repeating bursts into 30 uniform bins, with bin width \(\Delta{\rm DM_{E}}=100\) pc cm\({}^{-3}\), and plot the histogram in the left panel of Figure 4. The distribution of \({\rm DM_{E}}\) can be well fitted by cut-off power law (CPL),
\[{\rm CPL}\dvtx\quad N(x)\propto x^{\alpha}\exp\left(-\frac{x}{x_{c}}\right), \quad x>0, \tag{10}\]
with the best-fitting parameters \(\alpha=0.86\pm 0.07\) and \(x_{c}=289.49\pm 17.90\) pc cm\({}^{-3}\). This distribution has a peak at \(x_{p}=\alpha x_{c}\approx 250\) pc cm\({}^{-3}\), which is much smaller than the median value and mean value of \({\rm DM_{E}}\).
Figure 2: The \({\rm DM_{E}}-z\) relation obtained from full sample (left panel) and non-repeaters (right panel). The dark blue line is the median value and the light blue region is \(1\sigma\) uncertainty. The dots are the FRB data points and the outliers are highlighted in red. The red-dashed line is the best-fitting result obtained using the least-\(\chi^{2}\) method. The inset is the zoom-in of the low-redshift range.
Now we use the \(\rm DM_{E}-z\) relation reconstructed using the full sample (using the non-repeating sample does not significantly affect our results) to infer the redshift of the non-repeating CHIME/FRBs. For FRBs with \(\rm DM_{E}<100\) pc cm\({}^{-3}\), the \(\rm DM_{host}\) term may dominate over the \(\rm DM_{IGM}\) term, hence a small uncertainty on \(\rm DM_{host}\) may cause a large bias in the estimation of redshift. Therefore, when inferring the redshift using the \(\rm DM_{E}-z\) relation, we only consider the FRBs with \(\rm DM_{E}>100\) pc cm\({}^{-3}\). From the \(\rm DM_{E}-z\) relation we know that \(\rm DM_{E}(z=0.1)=169.9^{+196.9}_{-73.4}\) pc cm\({}^{-3}\) (\(1\sigma\) uncertainty). Therefore, FRBs with \(\rm DM_{E}<100\) pc cm\({}^{-3}\) are expected to have redshift \(z<0.1\), while a lower limit cannot be determined. The inferred redshifts for FRBs with \(\rm DM_{E}>100\) pc cm\({}^{-3}\) are given in the _online material_. The inferred redshifts span the range \(z_{\rm inf}\in(0.023,3.935)\). Three bursts have inferred redshift larger than 3, i.e., FRB20180906B with \(z_{\rm inf}=3.935^{+0.463}_{-0.705}\), FRB20181203C with \(z_{\rm inf}=3.003^{+0.443}_{-0.657}\), and FRB20190430B with \(z_{\rm inf}=3.278^{+0.449}_{-0.650}\).
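One crude way to assign a redshift from \(\rm DM_{E}\) alone is to invert the median relation \(\langle{\rm DM_{IGM}}\rangle(z)+e^{\mu}/(1+z)={\rm DM_{E}}\), as sketched below. The full analysis in the text instead propagates the probability densities of equations (4)-(6), so this is only a rough approximation, and the constants are the Planck 2018 and \(e^{\mu}\approx 102\) pc cm\({}^{-3}\) values quoted earlier.

```python
# Rough redshift assignment by inverting the median DM_E(z) relation.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

H0, Om, OL, Ob, f_igm, exp_mu = 67.4e3 / 3.086e22, 0.315, 0.685, 0.0493, 0.84, 102.0
PREF = 21.0 * 2.998e8 * H0 * Ob * f_igm / (64.0 * np.pi * 6.674e-11 * 1.673e-27)

def mean_dm_igm(z):                      # equation (3), in pc cm^-3
    I, _ = quad(lambda x: (1 + x) / np.sqrt(Om * (1 + x)**3 + OL), 0.0, z)
    return PREF * I / 3.086e22

def infer_z(dm_e):
    """Solve <DM_IGM>(z) + e^mu/(1+z) = DM_E for z."""
    f = lambda z: mean_dm_igm(z) + exp_mu / (1 + z) - dm_e
    return brentq(f, 1e-4, 10.0)

print(round(infer_z(500.0), 2))          # a DM_E of 500 pc cm^-3 maps to z ~ 0.5
```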
Figure 4: The histogram of \(\rm DM_{E}\) (left panel) and inferred redshift (right panel) of the first non-repeating CHIME/FRB catalog. The left-most gray bar represents the 30 FRBs with \(\rm DM_{E}<100\) pc cm\({}^{-3}\), which are expected to have \(z<0.1\). The blue and red lines are the best-fitting CPL models for the Full sample and the Gold sample, respectively.
Figure 3: Constraints on the free parameters \((F,e^{\mu_{0}},\sigma_{\rm host},\alpha)\) using the full sample (left panel) and the non-repeaters (right panel). The contours from the inner to outer represent \(1\sigma\), \(2\sigma\) and \(3\sigma\) confidence regions, respectively.
We divide the redshift range \(0<z<3\) into 30 uniform bins, with bin width \(\Delta z=0.1\), and plot the histogram of the inferred redshift in the right panel of Figure 4. The distribution of the inferred redshift can be fitted by the CPL model given in equation (10). The best-fitting parameters are \(\alpha=0.39\pm 0.09\) and \(x_{c}=0.48\pm 0.06\). The distribution has a peak at \(z_{p}=\alpha x_{c}\approx 0.19\). The mean and median value of this distribution are 0.67 and 0.52, respectively. Considering the FRBs with \(\rm DM_{E}<100\) pc cm\({}^{-3}\) (30 FRBs in total), which are expected to have \(z<0.1\), there is a large excess compared to the CPL model in the redshift range \(z<0.1\) (see the left-most gray bar in Figure 4). This may be caused by the selection effect, since the detector is more sensitive to nearby FRBs.
Amiri et al. [11] provided a set of criteria to exclude events which are unsuitable to be used in population analyses. (1) Events with \(S/N<12\) are excluded; (2) Events having \(\rm DM_{obs}<1.5\rm max(DM_{NE2001},DM_{YMW16})\) are excluded; (3) Events detected in far sidelobes are excluded; (4) Events detected during non-nominal telescope operations are excluded; (5) Highly scattered events (\(\rm\tau_{scat}>10\) ms) are excluded. We call the remaining FRBs the Gold sample, which consists of 253 non-repeating FRBs. We plot the distributions of \(\rm DM_{E}\) and redshifts of the Gold sample (together with the Full sample) in Figure 4. Similar to the Full sample, the distributions of \(\rm DM_{E}\) and redshifts of the Gold sample can also be fitted by CPL model. The best-fitting CPL model parameters are summarized in Table 2. We see that parameters are not significantly changed compared to the Full sample. Note that the redshift distribution of the Gold sample shown in the right panel of Figure 4 only contains the FRBs with \(\rm DM_{E}>100\) pc cm\({}^{-3}\) (236 FRBs). The Gold sample still contains 17 FRBs with \(\rm DM_{E}<100\) pc cm\({}^{-3}\), whose redshifts are expected to be \(z<0.1\). So the low-redshift excess still exists in the Gold sample.
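Applied to a catalog table, the five criteria amount to a simple row filter of the following form. The column names used here are placeholders of ours, not the actual field names of the CHIME/FRB catalog.

```python
# Sketch of applying the Gold-sample exclusion criteria to a catalog table;
# column names are placeholders, not the real CHIME/FRB catalog fields.
import pandas as pd

def gold_sample(cat: pd.DataFrame) -> pd.DataFrame:
    keep = (
        (cat["snr"] >= 12.0)                                                    # criterion (1)
        & (cat["dm_obs"] >= 1.5 * cat[["dm_ne2001", "dm_ymw16"]].max(axis=1))   # criterion (2)
        & (~cat["far_sidelobe"])                                                # criterion (3)
        & (~cat["non_nominal_operation"])                                       # criterion (4)
        & (cat["tau_scat_ms"] <= 10.0)                                          # criterion (5)
    )
    return cat[keep]

# toy two-row catalog: only the first burst survives all five cuts
cat = pd.DataFrame({
    "snr": [25.0, 9.0], "dm_obs": [600.0, 300.0],
    "dm_ne2001": [60.0, 150.0], "dm_ymw16": [70.0, 180.0],
    "far_sidelobe": [False, False], "non_nominal_operation": [False, False],
    "tau_scat_ms": [2.0, 1.0],
})
print(len(gold_sample(cat)))   # 1
```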
Given the redshift, the isotropic energy of a burst can be calculated as [79]
\[E=\frac{4\pi d_{L}^{2}F\Delta\nu}{(1+z)^{2+\alpha}}, \tag{11}\]
where \(d_{L}\) is the luminosity distance, \(F\) is the average fluence, \(\alpha\) is the spectral index (\(F_{\nu}\propto\nu^{\alpha}\)), and \(\Delta\nu\) is the waveband in which the fluence is observed. The fluence listed in the first CHIME/FRB catalog is averaged over the 400\(-\)800 MHz waveband, hence \(\Delta\nu=400\) MHz. The spectral indices of some bursts are not clear. Macquart et al. [80] showed that for a sample of ASKAP/FRBs, \(\alpha=-1.5\) provides a reasonable fit. Hence, we fix \(\alpha=-1.5\) for all the bursts.
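The conversion from band-averaged fluence to isotropic energy in equation (11) can be sketched as follows, assuming the Planck 2018 cosmology and the \(\alpha=-1.5\) adopted in the text; the astropy cosmology object is used only to obtain the luminosity distance.

```python
# Sketch of equation (11): band-averaged fluence to isotropic energy.
import numpy as np
from astropy.cosmology import FlatLambdaCDM
from astropy import units as u

cosmo = FlatLambdaCDM(H0=67.4, Om0=0.315, Ob0=0.0493)

def isotropic_energy_erg(fluence_jy_ms, z, bandwidth_hz=400e6, alpha=-1.5):
    d_l = cosmo.luminosity_distance(z).to(u.m).value
    fluence_si = fluence_jy_ms * 1e-26 * 1e-3          # Jy ms -> J m^-2 Hz^-1
    e_joule = 4 * np.pi * d_l**2 * fluence_si * bandwidth_hz / (1 + z)**(2 + alpha)
    return e_joule * 1e7                               # J -> erg

# FRB20181219B from Table 3: 27 Jy ms at the inferred z = 2.30
print("%.2f" % np.log10(isotropic_energy_erg(27.0, 2.30)))   # ~42.4
```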
We first fit the cumulative distributions with a simple power law (SPL) model, but the SPL model fails to fit the distributions of fluence and energy, especially at the left end, where the model prediction much exceeds the data points.
Lin & Sang [83] showed that the bent power law (BPL) model fits the distributions of fluence and energy of the repeating burst FRB121102 much better than the SPL model. The BPL model takes the form
\[\mathrm{BPL}\,\colon\quad N(>x)\propto\left[1+\left(\frac{x}{x_{b}}\right)^{ \gamma}\right]^{-1},\quad x>0, \tag{13}\]
where \(x_{b}\) is the median value of \(x\), i.e. \(N(x>x_{b})=N(x<x_{b})\). The BPL model has a flat tail at \(x\ll x_{b}\), and behaves like the SPL model at \(x\gg x_{b}\). The BPL model was initially used to fit the power density spectra of gamma-ray bursts [84]. Then it was shown that the BPL model can well fit the distributions of fluence and energy of soft-gamma repeaters [29, 85]. The choice of the BPL model is inspired by the fact that the cumulative distributions of fluence and energy have a flat tail at the left end, as can be seen from Figure 5. We therefore try to fit the cumulative distributions of fluence and energy of CHIME/FRBs using the BPL model. The best-fitting parameters are summarized in Table 4, and the best-fitting lines are shown in Figure 5 (the solid lines). We see that the BPL model fits the data (both the Full sample and the Gold sample) much better than the SPL model. The BPL model fits the distribution of fluence very well in the full range. For the distribution of energy, the BPL model also fits the data well, except at the very high energy end.
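A sketch of fitting the BPL form to a cumulative distribution is given below. Synthetic energies drawn from a BPL-like law stand in for the CHIME sample, so the fit should recover the generating parameters rather than exactly the values in Table 4.

```python
# Sketch of fitting the bent power law of equation (13) to a cumulative
# energy distribution; synthetic draws stand in for the CHIME sample.
import numpy as np
from scipy.optimize import curve_fit

def bpl_cum(x, norm, xb, gamma):
    """N(>x) ~ [1 + (x/xb)^gamma]^(-1), scaled by the sample size."""
    return norm / (1.0 + (x / xb)**gamma)

rng = np.random.default_rng(2)
u = rng.uniform(1e-4, 1.0, size=400)
energy_1e40 = 1.55 * (1.0 / u - 1.0)**(1.0 / 0.9)      # inverse-transform draws, in 1e40 erg

x = np.sort(energy_1e40)
n_greater = np.arange(len(x), 0, -1)                   # empirical N(>x)

popt, _ = curve_fit(bpl_cum, x, n_greater, p0=[400.0, 1.0, 1.0], maxfev=20000)
print("gamma = %.2f, x_b = %.2f (x 1e40 erg)" % (popt[2], popt[1]))
```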
Figure 5: The cumulative distribution of fluence (left panel) and isotropic energy (right panel) of the non-repeating CHIME/FRBs with \(\mathrm{DM_{E}>100~{}pc~{}cm^{-3}}\). The solid and dashed lines are the best-fitting BPL model and SPL model, respectively.
\begin{table}
\begin{tabular}{c c c c c c c c c c} FRBs & RA & Dec & \(\mathrm{DM_{obs}}\) & \(\mathrm{DM_{MW}}\) & \(\mathrm{DM_{E}}\) & Fluence & \(z_{\mathrm{inf}}\) & \(\log(E/\mathrm{erg})\) & flag \\ & [\({}^{\circ}\) ] & [\({}^{\circ}\) ] & [pc/cm\({}^{3}\)] & [pc/cm\({}^{3}\)] & [pc/cm\({}^{3}\)] & [Jy ms] & & & \\ \hline
20181219B & 180.79 & 71.55 & 1950.7 & 35.8 & 1864.9 & \(27.00\pm 22.00\) & \(2.300^{+0.357}_{-0.511}\) & \(42.405^{+0.388}_{-0.962}\) & 1 \\
20190228B & 50.01 & 81.94 & 1125.8 & 81.9 & 993.9 & \(66.00\pm 32.00\) & \(1.175^{+0.205}_{-0.355}\) & \(42.170^{+0.324}_{-0.633}\) & 0 \\
20190319A & 113.43 & 5.72 & 2041.3 & 109.0 & 1882.3 & \(19.40\pm 4.20\) & \(2.325^{+0.359}_{-0.516}\) & \(42.271^{+0.214}_{-0.335}\) & 1 \\ \end{tabular}
\end{table}
Table 3: The most energetic bursts with \(E>10^{42}\) erg. Column 1: FRB name; Columns 2 and 3: the right ascension and declination of FRB source on the sky; Column 4: the observed DM; Column 5: the DM of the Milky Way ISM calculated using the NE2001 model; Column 6: the extragalactic DM calculated by subtracting \(\mathrm{DM_{MW}}\) and \(\mathrm{DM_{halo}}\) from the observed \(\mathrm{DM_{obs}}\), assuming \(\mathrm{DM_{halo}=50~{}pc~{}cm^{-3}}\) for the Milky Way halo; Column 7: the observed fluence; Column 8: the inferred redshift; Column 9: the isotropic energy; Column 10: the flag for Gold sample (flag=1) or not (flag=0). Note that the uncertainty of energy may be underestimated due to the lack of well-localized FRBs at \(z>1\).
## 4 Discussion and conclusions
In this paper, we reconstructed the DM\({}_{\rm E}-z\) relation from 17 well-localized FRBs at \(z<1\) using the Bayesian inference method. The host DM was assumed to follow a log-normal distribution with median \(\exp(\mu)\) and standard deviation \(\sigma_{\rm host}\) of \(\ln{\rm DM_{host}}\), and the standard deviation of the IGM DM was assumed to be redshift-dependent (\(\sigma_{\rm IGM}=Fz^{-1/2}\)). The free parameters were tightly constrained by 17 well-localized FRBs: \(F=0.32^{+0.11}_{-0.10}\), \(\exp(\mu)=102.02^{+37.65}_{-31.06}\) pc cm\({}^{-3}\) and \(\sigma_{\rm host}=1.10^{+0.31}_{-0.23}\). These parameters are well consistent with those of Macquart et al. [49], who obtained \(F=0.31^{+0.13}_{-0.16}\), \(\exp(\mu)=68.2^{+59.6}_{-35.0}\) pc cm\({}^{-3}\) and \(\sigma_{\rm host}=0.88^{+0.65}_{-0.45}\) from five well-localized FRBs. Thanks to the enlarged FRB sample and one less free parameter (\(\Omega_{b}\)), our constraints are more stringent than those of Macquart et al. [49]. We directly extrapolated these parameters to high redshift and reconstructed the DM\({}_{\rm E}-z\) relation up to \(z=4\).
We further used the DM\({}_{\rm E}-z\) relation to infer the redshift of the first CHIME/FRB catalog. We found that the extragalactic DM of the non-repeating CHIME/FRBs follows a CPL distribution, with a peak at 250 pc cm\({}^{-3}\). The inferred redshift of the non-repeating CHIME/FRBs can also be fitted by a CPL distribution, but with a significant excess at the low redshift range \(0<z<0.1\), which may be caused by selection effect. We applied a set of criteria to exclude events which are susceptible to selection effect, as was described in Amiri et al. [11]. We found that the extragalactic DM, as well as the redshift of the remaining FRBs (which we call the Gold sample) still follow the CPL distribution, and the excess at low redshift still exists. We further used the inferred redshift to calculate the isotropic energy of the non-repeating CHIME/FRBs. It was found that the distributions of energy and fluence can be well fitted by the BPL model, with power index \(\gamma=0.90\pm 0.01\) and \(\gamma=1.55\pm 0.01\) for energy and fluence, respectively. However, the SPL model fails to fit both the distributions of fluence and energy, even for the Gold sample. The statistical properties of the non-repeating CHIME/FRBs are similar to those of the bursts from the repeating FRB source, FRB121102 [83]. The BPL model has a flat tail at the low-energy (low-fluence) end, thus it predicts much fewer dim bursts than the SPL model. The flatness at the low-energy (low-fluence) end can be explained by observational incompleteness, since some dim bursts may be missing from detection. Note that the BPL model reduces to the SPL model at the high energy end, \(N(>E)\propto E^{-\gamma}\). The power-law index of the energy cumulative distribution is \(\gamma\approx 0.9\), which corresponds to \(\hat{\gamma}\approx 1.90\) for the differential distribution. Interestingly, the power-law index of the non-repeating CHIME/FRBs is similar to that of repeating bursts from the single source FRB 121102, with \(\hat{\gamma}\approx 1.6\sim 1.8\)[82].
We emphasize that the CPL distribution of redshift is not intrinsic. The intrinsic redshift distribution should take into account the selection effect of the detector. Due to the lack of well-localized FRBs, the intrinsic redshift distribution is still poorly known. Several possibilities have been discussed in the literature, such as a distribution similar to that of gamma-ray bursts [31], a constant comoving number density with a Gaussian cutoff [86], the SFR history model [43], the modified SFR history model [87], and the compact star merger model with various time delays [43]. In a recent work, Qiang et al. [46] considered several modified SFR history models and found that many of them are well consistent with the observed data of the first CHIME/FRB catalog, as long as the model parameters are chosen properly, but the simple SFR history model was fully rejected by the data. Hackstein et al. [44] have investigated three different intrinsic redshift distribution models (constant comoving density model, SFR history model, and stellar mass density model). After considering the selection effects of the CHIME telescope, they showed that the distribution of the observed redshift should have a CPL shape. It remains a future work to study which model fits the CHIME/FRBs best. In addition, Shin et al. [88] have studied the FRB population by assuming a Schechter luminosity function, and after calibrating the selection effects, they also found that the distribution of redshift has a CPL shape.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline \multicolumn{1}{c}{ Fluence (Full)} & SPL & \(\beta=0.54\pm 0.02\) & \(x_{c}=66.30\pm 3.52\) Jy ms & \(\chi^{2}/{\rm dof}=7.48\) \\ & BPL & \(\gamma=1.55\pm 0.01\) & \(x_{b}=3.36\pm 0.04\) Jy ms & \(\chi^{2}/{\rm dof}=0.23\) \\ \hline \multirow{2}{*}{ Fluence (Gold)} & SPL & \(\beta=0.48\pm 0.03\) & \(x_{c}=58.59\pm 4.02\) Jy ms & \(\chi^{2}/{\rm dof}=5.79\) \\ & BPL & \(\gamma=1.65\pm 0.02\) & \(x_{b}=3.96\pm 0.07\) Jy ms & \(\chi^{2}/{\rm dof}=0.29\) \\ \hline \multirow{2}{*}{ Energy (Full)} & SPL & \(\beta=0.09\pm 0.01\) & \(x_{c}=(1.17\pm 0.06)\times 10^{42}\) erg & \(\chi^{2}/{\rm dof}=11.10\) \\ & BPL & \(\gamma=0.90\pm 0.01\) & \(x_{b}=(1.55\pm 0.02)\times 10^{40}\) erg & \(\chi^{2}/{\rm dof}=0.50\) \\ \hline \multirow{2}{*}{ Energy (Gold)} & SPL & \(\beta=0.08\pm 0.01\) & \(x_{c}=(1.13\pm 0.09)\times 10^{42}\) erg & \(\chi^{2}/{\rm dof}=7.12\) \\ & BPL & \(\gamma=0.95\pm 0.01\) & \(x_{b}=(1.82\pm 0.04)\times 10^{40}\) erg & \(\chi^{2}/{\rm dof}=0.29\) \\ \hline \end{tabular}
\end{table}
Table 4: The best-fitting parameters of the cumulative distributions of fluence and energy for the Full sample and the Gold sample.
When reconstructing the DM\({}_{\rm E}-z\) relation, it is important to deal with the DM\({}_{\rm host}\) term in a reasonable way. The simplest approach is to assume that DM\({}_{\rm host}\) is a constant [31; 35; 46]. Of course, this is inappropriate because the actual value can vary significantly from burst to burst. Luo et al. [47] parameterized DM\({}_{\rm host}\) as a function of SFR. However, statistical analysis of the well-localized FRBs showed that there is no strong correlation between DM\({}_{\rm host}\) and the host galaxy properties, including SFR [48]. Because there is a lack of direct observations of DM\({}_{\rm host}\), at present the most reasonable way is to model it using a probability distribution. Theoretical analysis and numerical simulations show that the probability of DM\({}_{\rm host}\) can be modeled by a log-normal distribution with mean value \(\mu\) and deviation \(\sigma_{\rm host}\)[49; 50]. Based on the IllustrisTNG simulation, Zhang et al. [50] showed that \(\exp(\mu)\) has a power-law dependence on redshift, and that the power-law index for repeating and non-repeating FRBs is slightly different. However, we found no evidence for a redshift evolution of \(\exp(\mu)\) here. The median value of DM\({}_{\rm host}\) for the well-localized FRBs obtained here is about \(\exp(\mu)\sim 100\) pc cm\({}^{-3}\). It is consistent with the DM\({}_{\rm host}\) of FRB20190608B (\(\sim 137\pm 43\) pc cm\({}^{-3}\)) obtained from optical/UV observations [89].
Due to the lack of high-redshift FRBs, the uncertainty on the \({\rm DM_{E}}-z\) relation is large at high redshift. The uncertainty mainly comes from the uncertainties on DM\({}_{\rm IGM}\) and DM\({}_{\rm host}\). The uncertainty on DM\({}_{\rm IGM}\) at redshift \(z=1\) is about \(\delta{\rm DM_{IGM}}\approx 0.3{\rm DM_{IGM}}\approx 270\) pc cm\({}^{-3}\). From the log-normal distribution, the uncertainty of DM\({}_{\rm host}\) is estimated to be \(\delta{\rm DM_{host}}=\exp(\mu+\sigma_{\rm host}^{2}/2)(\exp(\sigma_{\rm host}^{2})-1)^{1/2}\approx 200\) pc cm\({}^{-3}\), where \(\exp(\mu)\approx 100\) pc cm\({}^{-3}\) and \(\sigma_{\rm host}\approx 1\). The uncertainties of DM\({}_{\rm MW}\) and DM\({}_{\rm halo}\) are expected to be much smaller than those of DM\({}_{\rm IGM}\) and DM\({}_{\rm host}\), so they have been ignored. We also ignored the DM of the FRB source, which is hard to model due to the lack of knowledge about the local environment of FRBs. With the present knowledge we do not clearly know the probability distribution of DM\({}_{\rm source}\). In some models involving the merger of compact binaries, this term is expected to be small [90; 91]. Therefore, in most works this term is directly neglected. If DM\({}_{\rm source}\) does not strongly vary from burst to burst (such that it can be treated approximately as a constant), it can be absorbed into the DM\({}_{\rm host}\) term, while the probability distribution \(p_{\rm host}\) does not change except for an overall shift. In this case, the parameter \(\exp(\mu)\) should be interpreted as the median value of the sum of DM\({}_{\rm host}\) and DM\({}_{\rm source}\). Therefore, if DM\({}_{\rm source}\) does not vary significantly, including it or not should not affect our results. Another uncertainty comes from the parameter \(f_{\rm IGM}\). In the general case, \(f_{\rm IGM}\) should be treated as a free parameter, together with \(F\), \(\exp(\mu)\) and \(\sigma_{\rm host}\). But due to the small FRB sample, a free \(f_{\rm IGM}\) would lead to unreasonable results. So we fixed \(f_{\rm IGM}=0.84\) based on other independent observations. This leads to an underestimation of the uncertainty of the \({\rm DM_{E}}-z\) relation.
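As a worked check of the number quoted above, evaluating the log-normal scatter with \(\exp(\mu)\approx 100\) pc cm\({}^{-3}\) and \(\sigma_{\rm host}\approx 1\) gives

\[\delta{\rm DM_{host}}=\exp(\mu+\sigma_{\rm host}^{2}/2)\left(\exp(\sigma_{\rm host}^{2})-1\right)^{1/2}\approx 100\times 1.65\times 1.31\approx 216\ {\rm pc\ cm^{-3}},\]

consistent with the \(\approx 200\) pc cm\({}^{-3}\) estimate used here.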
The conclusions of our paper are based on the assumption that the DM\({}_{\rm E}-z\) relation obtained from low-redshift data can be extrapolated to the high-redshift region. As demonstrated in Section 2, there is no strong evidence for a redshift dependence of the host DM, at least in the low-redshift region \(z\lesssim 1\). But we cannot verify this assumption in the high-redshift region, since there is a lack of data points at \(z>1\). So we simply extrapolate the DM\({}_{\rm E}-z\) relation to high redshift without proof. Recent works [79; 87] show that the DM\({}_{\rm E}-z\) relation may be non-monotonic, with a turning point at a certain redshift. This is because an FRB at low redshift is easier to detect than one at high redshift, for a given intrinsic luminosity. Therefore, a highly dispersed FRB is mainly caused by a large DM of the host galaxy, rather than by high redshift. For example, the large DM of FRB20190520B (\({\rm DM_{obs}}\approx 1200\) pc cm\({}^{-3}\), \(z\approx 0.241\)) is mainly attributed to the large value of DM\({}_{\rm host}\) (\(\approx 900\) pc cm\({}^{-3}\)) [65]. Therefore, the uncertainty of the DM\({}_{\rm E}-z\) relation obtained in our paper may be significantly underestimated. We hope that the uncertainty can be reduced if more high-redshift FRBs are detected in the future.
## Online material
The parameters of the first (non-repeating) CHIME/FRB catalog are listed in a long table in the online material.
|
2309.07779 | Convergence analysis of online algorithms for vector-valued kernel
regression | We consider the problem of approximating the regression function from noisy
vector-valued data by an online learning algorithm using an appropriate
reproducing kernel Hilbert space (RKHS) as prior. In an online algorithm,
i.i.d. samples become available one by one by a random process and are
successively processed to build approximations to the regression function. We
are interested in the asymptotic performance of such online approximation
algorithms and show that the expected squared error in the RKHS norm can be
bounded by $C^2 (m+1)^{-s/(2+s)}$, where $m$ is the current number of processed
data, the parameter $0<s\leq 1$ expresses an additional smoothness assumption
on the regression function and the constant $C$ depends on the variance of the
input noise, the smoothness of the regression function and further parameters
of the algorithm. | Michael Griebel, Peter Oswald | 2023-09-14T15:10:47Z | http://arxiv.org/abs/2309.07779v2 | # Convergence analysis of online algorithms for vector-valued kernel regression
###### Abstract
We consider the problem of approximating the regression function from noisy vector-valued data by an online learning algorithm using an appropriate reproducing kernel Hilbert space (RKHS) as prior. In an online algorithm, i.i.d. samples become available one by one by a random process and are successively processed to build approximations to the regression function. We are interested in the asymptotic performance of such online approximation algorithms and show that the expected squared error in the RKHS norm can be bounded by \(C^{2}(m+1)^{-s/(2+s)}\), where \(m\) is the current number of processed data, the parameter \(0<s\leq 1\) expresses an additional smoothness assumption on the regression function and the constant \(C\) depends on the variance of the input noise, the smoothness of the regression function and further parameters of the algorithm.
Keywords: vector-valued kernel regression, online algorithms, convergence rates, reproducing kernel Hilbert spaces. MSC: 65D15, 65F08, 65F10, 68W27
## 1 Introduction
In this paper, we deal with the problem of learning the regression function from noisy vector-valued data using an appropriate RKHS as prior. For the relevant background on the theory of kernel methods, see [4; 5; 12; 13; 15] and specifically [2; 3; 11] for the vector-valued case. Our emphasis is on obtaining estimates for the expectation of the squared error norm in the RKHS \(H\) of approximations to the regression function which are built in an incremental way by so-called online algorithms. The setting we use is as follows: Let \(N\leq\infty\) samples \((\omega_{m},y_{m})\in\Omega\times Y\), \(m=0,\ldots,N-1\), of an input-output process \(\omega\to y\) be given, which are i.i.d. with respect to a (generally unknown) probability measure \(\mu\) defined on \(\Omega\times Y\). For simplicity, let \(\Omega\) be a compact metric space, \(Y\) a separable Hilbert space, and \(\mu\) a Borel measure. What we are looking for is a regression function \(f_{\mu}:\Omega\to Y\) which, in some sense, optimally represents the underlying input-output process. We deal with algorithms for least-squares regression which aim at finding approximations to the solution
\[f_{\mu}(\omega)=\mathbb{E}(y|\omega)\in L_{\rho}^{2}(\Omega,Y)\]
of the minimization problem
\[\mathbb{E}(\|f(\omega)-y\|_{Y}^{2})=\int_{\Omega\times Y}\|f(\omega)-y\|_{Y}^{2} \,d\mu(\omega,y)\longmapsto\min \tag{1}\]
for \(f\in L_{\rho}^{2}(\Omega,Y)\) from the samples \((\omega_{m},y_{m}),m=0,\ldots,N-1\), where \(\rho(\omega)\) is the marginal probability measure generated by \(\mu(\omega,y)\) on \(\Omega\). On a theoretical level, for the minimization problem (1) to be meaningful, one needs1
Footnote 1: To obtain quantitative convergence results, stronger conditions on the random variable \(y\) such as uniform boundedness \(\mu\)-a.e. are often imposed in the literature, we will not do this here.
\[\mathbb{E}(\|y\|_{Y}^{2})=\int_{\Omega\times Y}\|y\|_{Y}^{2}\,d\mu(\omega,y)= \int_{\Omega}\mathbb{E}(\|y\|_{Y}^{2}|\omega)\,d\rho(\omega)<\infty.\]
Since solving the discretized least-squares problem
\[\frac{1}{N}\sum_{m=0}^{N-1}\|f(\omega_{m})-y_{m}\|_{Y}^{2}\longmapsto\min \tag{2}\]
for \(f\in L_{\rho}^{2}(\Omega,Y)\) is an ill-posed problem which does not make sense without further regularization, it is customary to add a prior assumption \(f\in H\), where \(H\subset L_{\rho}^{2}(\Omega,Y)\) is a set of functions \(f:\,\Omega\to Y\) such that point evaluations \(\omega\to f(\omega)\) are well-defined. Staying within the Hilbert space setting, candidates for \(H\) are vector-valued RKHS which we will introduce in the next section by means of a feature map, i.e. a family of bounded linear operators which map into a separable Hilbert space \(V\). Under standard assumptions, the RKHS \(H\) and \(V\) are isometric.
Starting from an inital guess \(f^{(0)}\in H\), the online algorithms we consider build a sequence of successive approximations \(f^{(m)}\in H\), where \(f^{(m+1)}\) is a linear combination of the previous approximation \(f^{(m)}\) and a term involving the residual \(y_{m}-f^{(m)}(\omega_{m})\) with respect to the currently processed sample \((\omega_{m},y_{m})\). More precisely, the update formula can be written in the form
\[f^{(m+1)}(\omega)=\alpha_{m}(f^{(m)}(\omega)+\mu_{m}K(\omega,\omega_{m})(y_{m} -f^{(m)}(\omega_{m}))),\qquad m=0,1,\ldots,N-1, \tag{3}\]
where \(K(\omega,\theta):\,Y\to Y\), \(\omega,\theta\in\Omega\), is the operator kernel of the RKHS determined by the feature map. The isometry of \(H\) and \(V\) allows us to rewrite (3) as iteration in \(V\) which is convenient for our subsequent analysis, see Section 2 for the details. Our main result, namely Theorem 1, concerns a sharp estimate for the expected squared error \(\mathbb{E}(\|f_{\mu}-f^{(m)}\|_{H}^{2})\) in the RKHS norm which in this generality seems to be new. It holds under standard assumptions on the feature map, the parameters \(\alpha_{m},\mu_{m}\) in (3) and the smoothness \(s\) of \(f_{\mu}\in H\) measured in a scale of smoothness spaces associated with the underlying covariance operator \(P_{\rho}\). Moreover, it exhibits the optimal error decay rate \(s/(s+2)\). Our approach is an extension of earlier work [8] on Schwarz iterative methods in the noiseless case, where \(y_{m}=f_{\mu}(\omega_{m})\).
The remainder of this paper is organized as follows: In Section 2 we introduce vector-valued RKHS, define \(P_{\rho}\) and the associated scale of smoothness spaces \(V_{P_{\rho}}^{s}\). This sets the stage for the specification of our online learning algorithms in \(V\), allows for their subsequent analysis, and enables us to formulate our main convergence result, namely Theorem 1. In Section 3 we review related results from the literature and compare them to our new result. In Section 4 we then provide the detailed proof of Theorem 1. In Section 5 we give further remarks on Theorem 1, discuss the advantages and limitations of our approach and consider a simple special case of learning an element \(u\) of a Hilbert space \(V\) from noisy measurements of its coefficients with respect to a complete orthonormal system (CONS) in \(V\).
## 2 Setting and main result
Let us first introduce our approach to vector-valued RKHS \(H\) of functions \(f:\Omega\to Y\), where \(Y\) is a separable Hilbert space. To this end, note that such an RKHS \(H\) can be implicitly introduced by a family \(\mathbf{R}=\{R_{\omega}\}_{\omega\in\Omega}\) of bounded linear operators \(R_{\omega}:Y\to V\), where \(V\) is another separable Hilbert space (we will silently assume that \(V\) is infinite-dimensional). More precisely, under the condition
\[\bigcap_{\omega\in\Omega}\,\ker(R_{\omega}^{*})=\{0\}, \tag{4}\]
the space \(H\) consists of maps of the form
\[f_{v}(\omega):=R_{\omega}^{*}v,\quad\omega\in\Omega,\]
and
\[\|f_{v}\|_{H}:=\|v\|_{V},\qquad v\in V.\]
Thus, \(H\) and \(V\) are isometric, which allows us to easily switch between \(H\) and \(V\) in the sequel. In the literature, the map \(\omega\to R_{\omega}\) (or sometimes \(\omega\to R_{\omega}^{*}\)) is called the feature map defining the RKHS \(H\).
To simplify our further considerations, we will assume that
\[R_{\omega}v\in C(\Omega,Y)\qquad\forall\,v\in V. \tag{5}\]
This condition is the continuity of the operator family \(\mathbf{R}\) in the strong operator topology and ensures Bochner integrability of functions from \(\Omega\) into \(Y\) and \(V\), respectively, appearing in the formulas below. Due to the assumed compactness of \(\Omega\), it also implies
\[\|R_{\omega}\|_{Y\to V}^{2}\leq\Lambda<\infty, \tag{6}\]
with some \(\Lambda<\infty\). Another consequence is that the operator kernel
\[K(\omega,\theta):=R_{\omega}^{*}R_{\theta}:\;Y\to Y,\qquad\omega,\theta\in \Omega,\]
associated with the vector-valued RKHS \(H\) is a Mercer kernel. The condition (6) is equivalent to the uniform boundedness
\[\|K(\omega,\theta)\|_{Y}\leq\Lambda,\qquad\omega,\theta\in\Omega, \tag{7}\]
of the operator kernel \(K\). Moreover, (6) is equivalent to the uniform boundedness of the operator family
\[P_{\omega}:=R_{\omega}R_{\omega}^{*}:V\to V,\qquad\omega\in\Omega,\]
in \(V\), i.e.,
\[\|P_{\omega}\|_{V}\leq\Lambda,\qquad\omega\in\Omega. \tag{8}\]
For fixed \(V\) and \(\mathbf{R}\) satisfying the above properties, instead of (1) one now seeks \(u\in V\) such that \(f_{u}(\omega)=R_{\omega}^{*}u\in H\) is the minimizer of the problem2
Footnote 2: The symbol \(\mathbb{E}\) denotes expectations of random variables with respect to the underlying probability space which may vary from formula to formula but should be clear from the context.
\[J(v):=\mathbb{E}(\|f_{v}-y\|_{Y}^{2})=\int_{\Omega\times Y}\|R_{\omega}^{*}v- y\|_{Y}^{2}\,d\mu(\omega,y)\longmapsto\min. \tag{9}\]
The solution \(u\) of this quadratic minimization problem on \(V\), if it exists, must satisfy the necessary condition
\[\mathbb{E}((R_{\omega}^{*}u-y,R_{\omega}^{*}w)_{Y})=\mathbb{E}((P_{\omega}u- R_{\omega}y,w)_{V})=0\qquad\forall\,w\in V.\]
This condition is equivalent to the linear operator equation
\[P_{\rho}u=\mathbb{E}(R_{\omega}y),\qquad P_{\rho}:=\mathbb{E}(P_{\omega})= \mathbb{E}(R_{\omega}R_{\omega}^{*}), \tag{10}\]
in \(V\). The operator \(P_{\rho}:V\to V\) defined in (10), which plays the role of a covariance operator, is bounded and symmetric positive definite. The boundedness of \(P_{\rho}:V\to V\), together with the estimate
\[\|P_{\rho}\|_{V}\leq\Lambda,\]
follows from (6). The spectrum of \(P_{\rho}\) is contained in \([0,\Lambda]\) and we have \(\ker(P_{\rho})=\{0\}\) due to (4). Moreover, we will assume that \(P_{\rho}\) is compact. A sufficient condition, which is often satisfied in applications, is the trace class property for \(P_{\rho}\) which in particular holds if the operators \(R_{\omega}\), \(\omega\in\Omega\), have uniformly bounded finite rank.
Note here that the assumptions on \(\Omega\) and \(\mathbf{R}\) can be weakened, see for instance [2], and that the compactness of \(P_{\rho}\) is only used as a technical simplification. In particular, the latter allows us to define the scale of smoothness spaces \(V_{P_{\rho}}^{s}\), \(s\in\mathbb{R}\), generated by \(P_{\rho}\) using the complete orthonormal system (CONS) \(\Psi:=\{\psi_{k}\}\) of eigenvectors of \(P_{\rho}\) and associated eigenvalues \(\lambda_{1}\geq\lambda_{2}\geq\ldots>0\) with limit \(0\) in \(V\) as follows: \(V_{P_{\rho}}^{s}\) is the completion of \(\operatorname{span}(\Psi)\) with respect to the norm
\[\|\sum_{k}c_{k}\psi_{k}\|_{V_{P_{\rho}}^{s}}=\left(\sum_{k}\lambda_{k}^{-s}c_{k}^{2}\right)^{1/2},\]
which is well defined on \(\operatorname{span}(\Psi)\) for any \(s\). These spaces will appear in the investigation below.
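As a simple illustration (the eigenvalue decay is chosen purely for the sake of the example and is not tied to any specific \(\rho\)), if \(\lambda_{k}=k^{-2}\), then

\[\|u\|_{V_{P_{\rho}}^{s}}^{2}=\sum_{k}\lambda_{k}^{-s}c_{k}^{2}=\sum_{k}k^{2s}c_{k}^{2},\]

so membership in \(V_{P_{\rho}}^{s}\) requires the faster decay of the coefficients \(c_{k}\) the larger \(s\) is, while \(s=0\) recovers \(V\) itself and negative \(s\) yields larger spaces of formal series.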
For our convergence analysis in \(V\), we make another simplifying assumption, namely that
\[f_{\mu}=f_{u},\text{ where }u\in V\text{ is the unique minimizer of (9).} \tag{11}\]
In particular, this means that \(\mathbb{E}(R_{\omega}y)\in\operatorname{ran}(P_{\rho})\) and that (10) holds.3
Footnote 3: If \(\mathbb{E}(R_{\omega}y)\not\in\operatorname{ran}(P_{\rho})\) then the usual alternative is to study estimates for the squared error \(\|f_{v}-f_{\mu}\|_{L^{2}_{\rho}(\Omega,Y)}^{2}\) between \(f_{v}\in H\) (obtained by an approximation process to \(f_{\mu}\) from within \(H\)) and the \(L^{2}_{\rho}(\Omega,Y)\) orthoprojection \(f_{\mu}\) of \(f_{\mu}\) onto \(W\), where \(W\) is the closure of \(H\) in \(L^{2}_{\rho}(\Omega,Y)\). Since our main result in this section is about norm convergence to \(u\) in \(V\) or, equivalently, to \(f_{\mu}\) in \(H\), we will not pursue this option.
For a given prior RKHS \(H\) induced by the operator family \(\mathbf{R}\) with associated space \(V\) and for given samples \((\omega_{m},y_{m})\), \(m=0,\ldots,N-1\), with finite \(N\), the standard regularization of the ill-posed problem (2) is to find the minimizer \(u_{N}\in V\) of the minimization problem
\[J_{N}(v):=\frac{1}{N}\sum_{m=0}^{N-1}\|R_{\omega_{m}}^{*}v-y_{m}\|_{Y}^{2}+ \kappa_{N}\|v\|_{V}^{2}\longmapsto\text{min} \tag{12}\]
on \(V\), where \(\kappa_{N}>0\) is a suitable regularization parameter. To compare with (2) or (9), recall that \(f_{v}(\omega_{m})=R_{\omega_{m}}^{*}v\) are the function values of a function in \(H\). Using the representer theorem for Mercer kernels [11], this problem leads to a linear system with a typically dense and ill-conditioned \(N\times N\) matrix. There is a huge body of literature, especially in the scalar-valued case \(Y=\mathbb{R}\), devoted to setting up, analyzing and solving this problem for fixed \(N\).
We focus here on online learning algorithms for finding approximations to the regression function \(f_{\mu}=f_{u}\) and are interested in their asymptotic performance, i.e., we assume that \(N\) is not fixed (set formally \(N=\infty\)) and that the i.i.d. samples \((\omega_{m},y_{m})\) become available one by one by a random process, \(m=0,1,\ldots\). The task of an online algorithm, now viewed as approximation process in \(V\), is then to recover \(u\in V\) satisfying (11) from this stream of samples.
To this end, we define the noise term
\[\varepsilon_{\omega}:=y-f_{u}(\omega)=y-R_{\omega}^{*}u,\qquad\omega\in\Omega,\]
which is a \(Y\)-valued random variable on \(\Omega\times Y\) (to keep the notation short, the dependence on \(y\) is not explicitly shown). Since \(f_{u}(\omega)=f_{\mu}(\omega)\) by (11), we have \(\mathbb{E}(\varepsilon_{\omega}|\omega)=0\) for any \(\omega\in\Omega\). Moreover, the noise variance
\[\sigma^{2}:=\mathbb{E}(\|\varepsilon_{\omega}\|_{Y}^{2})=\mathbb{E}(\|y-f_{ \mu}\|_{Y}^{2}) \tag{13}\]
with respect to \(f_{u}\in H\) is finite since \(\mathbb{E}(\|y\|_{Y}^{2})<\infty\) was assumed in the first place. The value of \(\sigma\) characterizes the average size of the noise \(y-f_{\mu}(\omega)\) on \(\Omega\) measured in the \(Y\) norm.
We consider online algorithms of the standard form
\[u^{(m+1)}=\alpha_{m}(u^{(m)}+\mu_{m}R_{\omega_{m}}(y_{m}-R_{\omega_{m}}^{*}u^{(m)})),\qquad m=0,1,\ldots, \tag{14}\]
where, at each step, the used sample \((\omega_{m},y_{m})\) is i.i.d. drawn according to the probability measure \(\mu\) and, consequently, \(\omega_{m}\) is i.i.d. drawn according to the marginal probability measure \(\rho\) on \(\Omega\). Traditionally, the parameters \(\alpha_{m}\) and \(\mu_{m}\) are called regularization parameter and step-size parameter (or learning rate), respectively. As to be expected, after applying \(R_{\omega}^{*}\) to both sides in (14) and setting
\[f^{(m)}(\omega):=f_{u^{(m)}}(\omega)=R_{\omega}^{*}u^{(m)},\qquad\omega\in \Omega,\]
we arrive at the online algorithm (3) in \(H\).
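To make the structure of the update concrete, the following minimal sketch runs the regularized online iteration in the scalar-valued case \(Y=\mathbb{R}\), where \(K(\omega,\omega_{m})\) in (3) is just a real number and the iterate can be stored as a kernel expansion. The Gaussian kernel, the synthetic target and the constants \(A=0.5\), \(t=2/3\) are illustrative assumptions only (for this kernel \(\Lambda=\max_{\omega}K(\omega,\omega)=1\), so \(A\leq(2\Lambda)^{-1}\) holds); they are not prescribed by the analysis below.

```python
import numpy as np

def gaussian_kernel(w, v, ell=0.25):
    # Illustrative scalar Mercer kernel K(w, v); here max_w K(w, w) = 1, i.e. Lambda = 1.
    return np.exp(-(w - v) ** 2 / (2.0 * ell ** 2))

def online_kernel_update(samples, A=0.5, t=2.0 / 3.0, kernel=gaussian_kernel):
    """Regularized online update f <- alpha_m * (f + mu_m * K(., w_m) * (y_m - f(w_m))).

    The iterate is stored as a kernel expansion f(.) = sum_j coef_j * K(., center_j):
    each step shrinks all old coefficients by alpha_m and appends one new center.
    """
    centers, coefs = [], []

    def f(w):
        # evaluate the current iterate f^(m) at w
        return sum(c * kernel(w, wc) for c, wc in zip(coefs, centers))

    for m, (w_m, y_m) in enumerate(samples):
        alpha_m = (m + 1) / (m + 2)   # regularization parameter
        mu_m = A / (m + 1) ** t       # step size (learning rate)
        residual = y_m - f(w_m)
        coefs = [alpha_m * c for c in coefs]
        centers.append(w_m)
        coefs.append(alpha_m * mu_m * residual)
    return centers, coefs

# usage sketch: noisy samples of a smooth target on [0, 1]
rng = np.random.default_rng(0)
omegas = rng.uniform(0.0, 1.0, size=300)
samples = [(w, np.sin(2 * np.pi * w) + 0.1 * rng.standard_normal()) for w in omegas]
centers, coefs = online_kernel_update(samples)
```

Note that an exact evaluation of \(f^{(m)}\) in this form becomes more expensive as \(m\) grows; the sketch is meant to illustrate the structure of the iteration, not an efficient implementation.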
The online algorithm (14) is a particular instance of a randomized Schwarz approximation method associated with \(\mathbf{R}\). Its noiseless version, where \(y_{m}=R_{\omega_{m}}^{*}u\), was studied in [8]. Our goal is to derive convergence results for the expected squared error \(\mathbb{E}(\|u-u^{(m)}\|_{V}^{2})\), \(m=1,2,\ldots\), which corresponds to convergence estimates in the RKHS \(H\). As to be expected, such estimates will once more require additional smoothness assumptions on \(u\) in the form \(u\in V_{P_{\rho}}^{s}\) with \(0<s\leq 1\). However, in contrast to the noiseless case [8], they also include a dependence on the noise variance \(\sigma^{2}\) in addition to the dependence on the initial error \(\|e^{(0)}\|^{2}\). The price to pay for convergence is a certain decay of the step-sizes \(\mu_{m}\to 0\), which is typical for stochastic approximation algorithms. More precisely, throughout the remainder of this paper, we set
\[\alpha_{m}=\frac{m+1}{m+2},\qquad\mu_{m}=\frac{A}{(m+1)^{t}},\qquad m=0,1,\ldots, \tag{15}\]
where the parameters \(1/2<t<1\) and \(0<A\leq(2\Lambda)^{-1}\) will be properly fixed later on. In the language of learning algorithms, this is a so-called regularized online algorithm, compared to unregularized online algorithms with \(\alpha_{m}=1\). Our main result is as follows:
Theorem 1: _Let \(Y,V\) be separable Hilbert spaces, \(\Omega\) be a compact metric space, \(\mu\) be a Borel probability measure on \(\Omega\times Y\), and \(\rho\) the marginal Borel probability measure on \(\Omega\) induced by \(\mu\). Assume that_
\[\mathbb{E}(\|y\|_{Y}^{2})=\int_{\Omega\times Y}\|y\|_{Y}^{2}\,d\mu<\infty.\]
_For the operator family \(\mathbf{R}=\{R_{\omega}\}_{\omega\in\Omega}\), we require the conditions (4-6). We further assume that the operator \(P_{\rho}=\mathbb{E}(R_{\omega}R_{\omega}^{*})\) is compact. Finally, we assume (11) and that \(u\in V_{P_{\rho}}^{s}\) for some \(0<s\leq 1\)._
_Consider the online learning algorithm (14), where \(u^{(0)}\in V\) is arbitrary, the parameters \(\alpha_{m},\mu_{m}\) are given by (15) with \(t=t_{s}:=(1+s)/(2+s)\) and \(A=1/(2\Lambda)\), and the random samples \((\omega_{m},y_{m})\), \(m=0,1,\ldots,N\leq\infty\), are i.i.d. with respect to \(\mu\). Then the expected squared error \(\mathbb{E}(\|u-u^{(m)}\|_{V}^{2})\) in \(V\) satisfies_
\[\mathbb{E}(\|f_{\mu}-f^{(m)}\|_{H}^{2})=\mathbb{E}(\|u-u^{(m)}\|_{V}^{2})\leq C ^{2}(m+1)^{-s/(2+s)},\qquad m=1,2,\ldots,N, \tag{16}\]
_where \(f^{(m)}=f_{u^{(m)}}\), \(C^{2}=2\|e^{(0)}\|_{V}^{2}+2\|u\|_{V}^{2}+8\Lambda^{s}\|u\|_{V_{P_{\rho}}^{s}}^{2}+\sigma^{2}/\Lambda\) and the noise variance \(\sigma^{2}\) is defined in (13)._
In this generality, Theorem 1 has not yet appeared in the literature, at least to our knowledge. Its proof is carried out in Section 4. For the parameter range \(0<s\leq 1\), the exponent \(-s/(2+s)\) in the right-hand side of (16) is best possible under the general conditions stated in Theorem 1. Estimates of the form (16) also hold for arbitrary values \(1/2<t<1\) and \(0<A\leq 1/(2\Lambda)\) admissible in (15), albeit with non-optimal exponents depending on \(t\) and different constants \(C\) varying with \(t\) and \(A\). Note that estimates for
\[\mathbb{E}(\|f_{u}-f_{u^{(m)}}\|_{L^{2}_{\rho}(\Omega,Y)}^{2})=\mathbb{E}(\|P_{\rho}^{1/2}e^{(m)}\|_{V}^{2})=\mathbb{E}((P_{\rho}e^{(m)},e^{(m)})_{V})\]
with respect to the weaker \(L^{2}_{\rho}(\Omega,Y)\) norm are of great interest but cannot be obtained within our framework. We will comment on these issues in the concluding Section 5.
There is a huge amount of literature devoted to the convergence theory of various versions of the algorithm (14), especially for the scalar-valued case \(Y=\mathbb{R}\). In particular, (14) is often considered in the so-called finite horizon case, where \(N<\infty\) is fixed and the step-sizes \(\mu_{m}\) are chosen in dependence on \(N\) such that expectations such as \(\mathbb{E}(\|u-u^{(N)}\|_{V}^{2})\) or \(\mathbb{E}(\|f_{u}-f_{u^{(N)}}\|_{L^{2}_{\rho}(\Omega,Y)}^{2})\), respectively, are optimized for the final approximation \(u^{(N)}\). We provide a brief discussion of known results in the next section.
## 3 Results related to Theorem 1
Given the vast number of publications on convergence rates for learning algorithms, we will only present a selection of results concentrating on the RKHS setting and online algorithms similar to (14). The results we cite are often stated and proved for the scalar-valued case \(Y=\mathbb{R}\), even though some authors claim that their methods extend to the case of an arbitrary separable Hilbert space \(Y\) with minor changes. One of the first papers on the vector-valued case is [1], where the authors provide upper bounds in probability for the \(L^{2}_{\rho}(\Omega,Y)\) error of \(f_{u_{N}}\) if \(N\to\infty\) and \(\kappa_{N}\to 0\), where \(u_{N}\) is the solution of (12). These bounds depend in a specific way on the smoothness of \(u\in V^{s}_{P_{\rho}}\), \(0\leq s\leq 1\), and on the spectral properties of \(P_{\rho}\). Note that in [1] and in many other papers stronger compactness assumptions on \(P_{\rho}\) than ours are made and that bounds in probability do not automatically imply bounds in expectation. Moreover, the error measured in the \(L^{2}_{\rho}(\Omega,Y)\) norm is with respect to \(f_{u_{N}}\) and not with respect to approximations such as \(f_{u^{(m)}}\), \(m\leq N\), which are produced by a particular algorithm comparable with (14).
In [14], the authors provide estimates in probability for an algorithm similar to (14) for the scalar-valued case \(Y=\mathbb{R}\). They treat both, convergence in \(L^{2}_{\rho}(\Omega,\mathbb{R})\) and \(H\) norms. There, the main additional assumption needed for the application of certain results from martingale theory is that, for some constant \(M_{\rho}<\infty\), the random variable \(y\) satisfies
\[|y|\leq M_{\rho}\]
a.e. on the support of \(\rho\). If \(u^{(0)}=0\) (as assumed in [14]) then this assumption implies bounds for \(\|e^{(0)}\|_{V}=\|u\|_{V}\) and \(\sigma\) with constants depending on \(M_{\rho}\). Up to the specification of constants and using the notation of the present paper, the convergence result for the \(H\) norm stated in [14, Theorem B] reads as follows: Consider the online algorithm (14) with starting value \(u^{(0)}=0\) and parameters
\[\alpha_{m}=\frac{m+m_{0}-1}{m+m_{0}},\qquad\alpha_{m}\mu_{m}=\frac{A}{(m+m_{0 })^{(s+1)/(s+2)}},\qquad m=0,1,\ldots,\]
for some (large enough) \(m_{0}\) and suitable \(A\). Then, if \(u\in V^{s}_{P_{\rho}}\) for some \(0<s\leq 2\), we have
\[\mathbb{P}\left(\|u-u^{(m)}\|_{V}^{2}\leq\frac{C}{(m+m_{0})^{s/(s+2)}}\right) \geq 1-\delta,\qquad 0<\delta<1,\quad m=0,1,\ldots,\]
for some constant \(C=C(M_{\rho},\|u\|_{V^{s}_{\rho}},m_{0},s,\Lambda,\log(2/\delta))<\infty\). Here, \(V=H\) is an RKHS of functions \(u:\Omega\to\mathbb{R}\) generated by some scalar-valued Mercer kernel \(K:\,\Omega\times\Omega\to\mathbb{R}\) and \(\Lambda=\max_{\omega\in\Omega}K(\omega,\omega)\). The associated maps \(R_{\omega}\) are given by \(R_{\omega}y=yK(\omega,\cdot)\), \(y\in\mathbb{R}\). Consequently, \(R^{s}_{\omega}u=u(\omega)\), \(\omega\in\Omega\), corresponds to function evaluation. Thus, for \(0<s\leq 1\), we get the same rate as in our Theorem 1 which, however, deals with the expectation of the squared error in \(V=H\) in the more general vector-valued case. What our rather elementary method does not deliver is a result for the case \(1<s\leq 2\) and for \(L^{2}_{\rho}(\Omega,Y)\) convergence. For the latter situation, [14, Theorem C] gives the better estimate
\[\mathbb{P}\left(\|u-u^{(m)}\|^{2}_{L^{2}_{\rho}(\Omega,\mathbb{R})}\leq\frac{ \bar{C}}{(m+m_{0})^{(s+1)/(s+2)}}\right)\geq 1-\delta,\quad 0<\delta<1,\quad m =0,1,\ldots,\]
under the same assumptions but with a different constant
\[\bar{C}=\bar{C}(M_{\rho},\|u\|_{V^{s}_{\rho}},m_{0},s,\Lambda,\log(2/\delta)) <\infty.\]
This is almost matching the lower estimates for kernel learning derived in [1] for classes of instances, where the spectrum of \(P_{\rho}\) exhibits a prescribed decay of the form \(\lambda_{k}\asymp k^{-b}\) for some \(b>1\). Note that, for Mercer kernels, the operator
\[P_{\rho}:\,u\in H\quad\longmapsto\quad(P_{\rho}u)(\cdot)=\int_{\Omega}K(\cdot, \theta)u(\theta)\,d\rho(\theta)\]
is trace class whereas in our Theorem 1 no stronger decay of eigenvalues is assumed.
Estimates in expectation that are close to our result have also been obtained for slightly different settings. For example, in [16] both, the so-called _regularized_ (\(\alpha_{m}<1\)) and the _unregularized_ online algorithm (\(\alpha_{m}=1\)) were analyzed in the scalar-valued case \(Y=\mathbb{R}\) under assumptions similar to ours for \(L^{2}_{\rho}(\Omega,\mathbb{R})\) and \(V=H\) convergence. We only quote the result for convergence in the RKHS \(V=H\). It concerns the so-called _finite horizon_ case of the unregularized online algorithm (14) with \(\alpha_{m}=1\), where one fixes \(N<\infty\), chooses a constant step-size \(\mu_{m}=\mu\), \(m=0,\ldots,N-1\), which depends on \(N\), stops the iteration at \(m=N\) and asks for a good estimate of the expectation of \(\mathbb{E}(\|u-u^{(N)}\|^{2}_{V})\) for the final iterate only. Up to the specification of constants, Theorem 6 in [16] states that, under the condition \(u\in V^{s}_{\rho}\), \(s>0\), one can achieve the bound
\[\mathbb{E}(\|u-u^{(N)}\|^{2}_{V})=\mathrm{O}(N^{-s/(s+2)}),\qquad N\to\infty,\]
if one sets \(\mu_{N}=cN^{-(s+1)/(s+2)}\) with a properly adjusted value of \(c\). Note that \(s>0\) is arbitrary with the exponent approaching \(-1\), if the smoothness parameter \(s\) tends to \(\infty\), while our result does not provide improvements for \(s>1\). The drawback of the finite horizon case is that the estimate concerns only a fixed iterate \(u^{(N)}\) with an \(N\) which needs to be decided on beforehand. In some sense, this can be viewed as building an approximation to the solution \(u_{N}\) of (12) with \(\kappa_{N}=\mu_{N}\) from a single pass over the \(N\) i.i.d. samples \((\omega_{m},y_{m})\), \(m=0,\ldots,N-1\).
In recent years, attention has shifted to obtaining refined rates when \(P_{\rho}\) possesses faster eigenvalue decay, usually expressed by the property that \(P_{\rho}^{\beta}\) is trace class for some \(\beta<1\) or by the slightly weaker assumption
\[\lambda_{k}=\mathrm{O}(k^{-1/\beta}),\qquad k\to\infty, \tag{17}\]
on the eigenvalues of the covariance operator \(P_{\rho}\). Bounds involving knowledge about \(\beta<1\) are sometimes called capacity dependent, our bounds in Theorem 1 as well as the cited results from [14; 16] are thus capacity independent. Capacity dependent convergence rates for the expected squared error for the online algorithm (14) have been obtained, among others, in [6; 7; 9; 10], again in the scalar-valued case \(Y=\mathbb{R}\) and with various parameter settings in (14), including unregularized and finite horizon versions. In [7], rates for \(\mathbb{E}(\|u-\bar{u}^{(m)}\|^{2}_{L^{2}_{\rho}(\Omega,\mathbb{R})})\) have been established, where
\[\bar{u}^{(m)}=\frac{1}{m+1}\sum_{\xi=0}^{m}u^{(k)},\qquad m=0,1,\ldots, \tag{18}\]
is the sequence of averages associated with the sequence \(u^{(m)}\), \(m=0,1,\ldots\), obtained by the unregularized iteration (14) with \(\alpha_{m}=1\) and \(u^{(0)}=0\). That averaging has a similar effect as regularization with \(\alpha_{m}=(m+1)/(m+2)\) in (14) considered in Theorem 1 can be guessed if one observes that
\[\bar{u}^{(m+1)}=\frac{m+1}{m+2}\bar{u}^{(m)}+\frac{1}{m+2}u^{(m+1)},\]
where \(u^{(m+1)}=u^{(m)}+\mu_{m}R_{\omega_{m}}(y_{m}-R_{\omega_{m}}^{*}u^{(m)})\), and compares with (14).
\[\mathbb{E}(\|u-\bar{u}^{(m)}\|^{2}_{L^{2}_{\rho}(\Omega,\mathbb{R})})=\left\{\begin{array}{ll}\mathrm{O}((m+1)^{-(s+1)}),&-1<s<-\beta,\\ \mathrm{O}((m+1)^{-(s+1)/(s+1+\beta)}),&-\beta<s<1-\beta,\\ \mathrm{O}((m+1)^{-(1-\beta/2)}),&1-\beta<s.\end{array}\right.\]
Thus, stronger eigenvalue decay generally implies stronger asymptotic error decay in the \(L^{2}_{\rho}(\Omega,\mathbb{R})\) norm. In [6, Section 6], similar rates are obtained in the finite horizon setting for both the above averaged iterates \(\bar{u}^{(N)}\) and for \(u^{(N)}\) produced by a two-step extension of the one-step iteration (14).
In addition to \(L^{2}_{\rho}(\Omega,\mathbb{R})\) convergence results, the paper [10] also provides a capacity dependent convergence estimate in the RKHS norm for the unregularized algorithm (14) with parameters \(\alpha_{m}=1\) and \(\mu_{m}=c(m+1)^{-1/2}\). Under the boundedness assumption \(|y|\leq M_{\rho}\), Theorem 2 in [10] implies that
\[\mathbb{E}(\|u-u^{(m)}\|^{2}_{V})=\mathrm{O}((m+1)^{-\min(s,1-\beta)/2}\log^{ 2}(m+1)),\qquad m=1,2,\ldots,\]
if \(u\in V^{s}_{P_{\rho}}\) for some \(s>0\), \(P^{\beta}_{\rho}\) is trace class for some \(0<\beta<1\), and \(c\) is properly adjusted.
Finally, the scalar-valued least-squares regression problem with \(Y=\mathbb{R}\) and RKHS prior space \(H\) can also be cast as linear regression problem in \(V=H\). This has been done in [6, 9]. More abstractly, given a \(\mu\)-distributed random variable \((\xi_{\omega},y)\in V\times\mathbb{R}\) on \(\Omega\times\mathbb{R}\), the task is to find approximations to the minimizer \(u\in V\) of the problem
\[\mathbb{E}(|(\xi_{\omega},v)-y|^{2})\longmapsto\min,\qquad v\in V, \tag{19}\]
from i.i.d. samples \((\xi_{\omega},y_{i})\). If \(V=H\) is the RKHS, which regularizes the scalar-valued least-squares regression problem on \(\Omega\times\mathbb{R}\), then the canonical choice is \(\xi_{\omega}=K(\omega,\cdot)\). In [9], for the iteration
\[u^{(m+1)}=u^{(m)}+\mu_{m}(y_{m}-(\xi_{\omega_{m}},u^{(m)}))\xi_{\omega_{m}}, \qquad m=0,1,\ldots,\]
weak convergence in \(V\) is studied by deriving estimates for quantities such as \(\mathbb{E}((v,e^{(m)})^{2})\) and \(\mathbb{E}((\xi_{\omega^{\prime}},e^{(m)})^{2})\) under some simplifying assumptions on the noise and the normalization \(\|\xi_{\omega}\|=1\). Note that this iteration is nothing but the unregularized iteration (14) with \(\alpha_{m}=1\) since \((\xi_{\omega_{m}},u^{(m)})=u^{(m)}(\omega_{m})\) in this case. In the learning application, the assumption \(\|\xi_{\omega}\|=1\) means \(K(\omega,\omega)=1\). Moreover, in this case
\[\mathbb{E}((\xi_{\omega^{\prime}},e^{(m)})^{2})=\mathbb{E}(\|u-u^{(m)}\|^{2}_{L^{2}_{\rho}(\Omega,\mathbb{R})}),\]
since the expectation on the left is, in addition to the i.i.d. samples \((\xi_{\omega_{k}},y_{k})\), \(k=0,\ldots,m-1\), also taken with respect to the independently \(\rho\)-distributed random variable \(\xi_{\omega^{\prime}}\). This links to learning rates in the \(L^{2}_{\rho}(\Omega,\mathbb{R})\) norm. The estimates for \(\mathbb{E}((\xi_{\omega^{\prime}},e^{(m)})^{2})\) given in [9] concern both, the finite horizon and the online setting and again depend on the parameters \(s\geq 0\) (smoothness of \(u\)) and \(0<\beta\leq 1\) (capacity assumption on \(P_{\rho}\)). For the estimates of \(\mathbb{E}((v,e^{(m)})^{2})\), the smoothness \(s^{\prime}\geq 0\) of the fixed element \(v\in V^{s^{\prime}}_{P_{\rho}}\) is traded against the smoothness \(s\geq 0\) of \(u\in V^{s}_{P_{\rho}}\). We refer to [9] for the details.
## 4 Proof of Theorem 1
In this subsection, we will use the notation and assumptions outlined above, with the only change that the scalar product in \(V\) is simply denoted by \((\cdot,\cdot)\) and the associated norm \(\|\cdot\|_{V}\) is accordingly denoted by \(\|\cdot\|\). Moreover, recall that we have set \(e^{(m)}=u-u^{(m)}\). We will prove an estimate of the form
\[\mathbb{E}(\|e^{(m)}\|^{2})=\mathrm{O}((m+1)^{-s/(2+s)}),\qquad m \to\infty, \tag{20}\]
under the assumption \(u\in V^{s}_{P_{\rho}}\), \(0<s\leq 1\), if the parameters \(A\) and \(t\) in (15) are chosen accordingly. The precise statement and the dependence of the constant in (20) on initial error, noise variance and smoothness assumption are stated in the formulation of Theorem 1.
From (14) and \(y_{m}=R^{*}_{\omega_{m}}u+\varepsilon_{\omega_{m}}\) we deduce the error representation

\[e^{(m+1)}=\underbrace{\alpha_{m}(e^{(m)}-\mu_{m}P_{\omega_{m}}e^{(m)})+\bar{\alpha}_{m}u}_{=:\bar{e}^{(m+1)}}-\alpha_{m}\mu_{m}R_{\omega_{m}}\varepsilon_{\omega_{m}},\]

where \(\bar{\alpha}_{m}:=1-\alpha_{m}=(m+2)^{-1}\), compare also (15). The first term \(\bar{e}^{(m+1)}\) corresponds to the noiseless case considered in [8] while the remainder term is the noise contribution. Thus,

\[\|e^{(m+1)}\|^{2}=\|\bar{e}^{(m+1)}\|^{2}-2\alpha_{m}\mu_{m}(R_{\omega_{m}}\varepsilon_{\omega_{m}},\bar{e}^{(m+1)})+\alpha_{m}^{2}\mu_{m}^{2}\|R_{\omega_{m}}\varepsilon_{\omega_{m}}\|^{2}. \tag{21}\]
We now estimate the conditional expectation with respect to given \(u^{(m)}\), separately for the three terms in (21). Here and in the following we denote this conditional expectation by \(\mathbb{E}^{\prime}\). For the third term, by (6) and the definition of the variance \(\sigma^{2}\), we have
\[\mathbb{E}^{\prime}(\|R_{\omega_{m}}\varepsilon_{\omega_{m}}\|^{2})\leq\Lambda\,\mathbb{E}(\|\varepsilon_{\omega}\|_{Y}^{2})=\Lambda\,\sigma^{2}. \tag{22}\]
For the second term, we need
\[\mathbb{E}((R_{\omega}\varepsilon_{\omega},w))=\mathbb{E}((y-R^{*}_{\omega}u,R^{*}_{\omega}w)_{Y})=0\qquad\forall\,w\in V.\]
This straightforwardly follows from the fact that \(u\in V\) is the minimizer of the problem (9). Thus, by setting \(w=\alpha_{m}e^{(m)}+\bar{\alpha}_{m}u\), we obtain
\[\mathbb{E}^{\prime}(-2\alpha_{m}\mu_{m}(R_{\omega_{m}}\varepsilon_{\omega_{m}},\bar{e}^{(m+1)}))\] \[\qquad=2\alpha_{m}\mu_{m}(\alpha_{m}\mu_{m}\mathbb{E}^{\prime}((R_{\omega_{m}}\varepsilon_{\omega_{m}},P_{\omega_{m}}e^{(m)}))-\mathbb{E}^{\prime}((R_{\omega_{m}}\varepsilon_{\omega_{m}},w)))\] \[\qquad=2\alpha_{m}^{2}\mu_{m}^{2}\mathbb{E}^{\prime}((R_{\omega_{m}}\varepsilon_{\omega_{m}},P_{\omega_{m}}e^{(m)}))\] \[\qquad\leq\alpha_{m}^{2}\mu_{m}^{2}(\mathbb{E}^{\prime}(\|R_{\omega_{m}}\varepsilon_{\omega_{m}}\|^{2})+\mathbb{E}^{\prime}(\|P_{\omega_{m}}e^{(m)}\|^{2})).\]
Here, the first term is estimated by (22). For the second term, we substitute the upper bound
\[\mathbb{E}^{\prime}(\|P_{\omega_{m}}e^{(m)}\|^{2})\leq\Lambda\,\mathbb{E}^{\prime}((P_{\omega_{m}}e^{(m)},e^{(m)}))=\Lambda(P_{\rho}e^{(m)},e^{(m)}), \tag{23}\]
which follows from (6) and the definition of \(P_{\rho}\). Together this gives
\[\mathbb{E}^{\prime}(-2\alpha_{m}\mu_{m}(R_{\omega_{m}}\varepsilon_{\omega_{m}},\bar{e}^{(m+1)}))\leq\Lambda\,\alpha_{m}^{2}\mu_{m}^{2}(\sigma^{2}+(P_{\rho}e^{(m)},e^{(m)})) \tag{24}\]
for the second term in (21).
For the estimation of the first term \(\mathbb{E}^{\prime}(\|\bar{e}^{(m+1)}\|^{2})\), we modify the arguments from [8], where the case \(\varepsilon_{m}=0\) was treated. We use the error decomposition
\[\|\bar{e}^{(m+1)}\|^{2}=\bar{\alpha}_{m}^{2}\|u\|^{2}+2\alpha_{m}\bar{\alpha}_{m}(u,e^{(m)}-\mu_{m}P_{\omega_{m}}e^{(m)})\] \[\qquad\qquad\qquad\qquad\qquad+\alpha_{m}^{2}(\|e^{(m)}\|^{2}-2\mu_{m}(e^{(m)},P_{\omega_{m}}e^{(m)})+\mu_{m}^{2}\|P_{\omega_{m}}e^{(m)}\|^{2}).\]
After taking conditional expectations, we arrive with the definition of \(P_{\rho}\) and (23) at
\[\mathbb{E}^{\prime}(\|\bar{e}^{(m+1)}\|^{2}) = \bar{\alpha}_{m}^{2}\|u\|^{2}+2\alpha_{m}\bar{\alpha}_{m}(u,e^{(m)}-\mu_{m}P_{\rho}e^{(m)})\] \[\quad+\alpha_{m}^{2}(\|e^{(m)}\|^{2}-2\mu_{m}(e^{(m)},P_{\rho}e^{(m)})+\mu_{m}^{2}\mathbb{E}^{\prime}(\|P_{\omega_{m}}e^{(m)}\|^{2}))\] \[\leq \bar{\alpha}_{m}^{2}\|u\|^{2}+2\alpha_{m}\bar{\alpha}_{m}(u,e^{(m)}-\mu_{m}P_{\rho}e^{(m)})\] \[\quad+\alpha_{m}^{2}(\|e^{(m)}\|^{2}-\mu_{m}(2-\Lambda\,\mu_{m})(e^{(m)},P_{\rho}e^{(m)})).\]
Next, in order to estimate the term \((u,e^{(m)}-\mu_{m}P_{\rho}e^{(m)})\), we take an arbitrary \(h=P_{\rho}^{1/2}v\in V_{P_{\rho}}^{1}\), where \(v\in V=V_{P_{\rho}}^{0}\) and \(\|h\|_{V_{P_{\rho}}^{1}}=\|v\|\). With this, we have
\[2\alpha_{m}\bar{\alpha}_{m}(u,e^{(m)}-\mu_{m}P_{\rho}e^{(m)})\] \[\quad=2\alpha_{m}\bar{\alpha}_{m}((u-h,(I-\mu_{m}P_{\rho})e^{(m)})+(h,(I-\mu_{m}P_{\rho})e^{(m)}))\] \[\quad\leq 2\alpha_{m}\bar{\alpha}_{m}\|u-h\|\|(I-\mu_{m}P_{\rho})e^{(m)}\|+2(\bar{\alpha}_{m}\mu_{m}^{-1/2}(I-\mu_{m}P_{\rho})v,\alpha_{m}\mu_{m}^{1/2}P_{\rho}^{1/2}e^{(m)})\] \[\quad\leq 2\alpha_{m}\bar{\alpha}_{m}\|u-h\|\|e^{(m)}\|+\bar{\alpha}_{m}^{2}\mu_{m}^{-1}\|(I-\mu_{m}P_{\rho})v\|^{2}+\alpha_{m}^{2}\mu_{m}\|P_{\rho}^{1/2}e^{(m)}\|^{2}\] \[\quad\leq 2\alpha_{m}\bar{\alpha}_{m}\|u-h\|\|e^{(m)}\|+\bar{\alpha}_{m}^{2}\mu_{m}^{-1}\|h\|_{V_{P_{\rho}}^{1}}^{2}+\alpha_{m}^{2}\mu_{m}(P_{\rho}e^{(m)},e^{(m)}).\]
Here, we have silently used that \(\|(I-\mu_{m}P_{\rho})e^{(m)}\|\leq\|e^{(m)}\|\) and similarly
\[\|(I-\mu_{m}P_{\rho})v\|\leq\|v\|=\|h\|_{V_{P_{\rho}}^{1}},\]
which holds since \(0<\mu_{m}\leq A\leq(2\Lambda)^{-1}\) according to (15) and the restriction on \(A\). Substitution into the previous inequality results in
\[\mathbb{E}^{\prime}(\|\bar{e}^{(m+1)}\|^{2}) \leq \bar{\alpha}_{m}^{2}(\|u\|^{2}+\mu_{m}^{-1}\|h\|_{V_{P_{\rho}}^{ 1}}^{2})+2\alpha_{m}\bar{\alpha}_{m}\|u-h\|\|e^{(m)}\|\] \[\quad+\alpha_{m}^{2}(\|e^{(m)}\|^{2}-\mu_{m}(1-\Lambda\mu_{m})(e ^{(m)},P_{\rho}e^{(m)})).\]
Now, combining this estimate for the conditional expectation of the first term in (21) with the bounds (22) and (24) for the third and second term, respectively, we arrive at
\[\mathbb{E}^{\prime}(\|e^{(m+1)}\|^{2}) \leq \alpha_{m}^{2}(\|e^{(m)}\|^{2}+2\Lambda\,\sigma^{2}\mu_{m}^{2})\] \[\quad+2\alpha_{m}\bar{\alpha}_{m}\|u-h\|\|e^{(m)}\|+\bar{\alpha}_{m}^{2}(\|u\|^{2}+\mu_{m}^{-1}\|h\|_{V_{P_{\rho}}^{1}}^{2}). \tag{25}\]
Here, the term involving \((e^{(m)},P_{\rho}e^{(m)})\geq 0\) has been dropped since its forefactor \(-\mu_{m}(1-2\Lambda\,\mu_{m})\) is non-positive due to the restriction on \(A\) in (15).
For given
\[u=\sum_{k}c_{k}\psi_{k}\in V_{P_{\rho}}^{s},\qquad 0<s\leq 1,\]
in (25) we choose
\[h=\sum_{k:\,\lambda_{k}(m+1)^{b}\geq B}c_{k}\psi_{k}\]
with some fixed constants \(b,B>0\) specified below. This gives
\[\|h\|_{V_{P_{\rho}}^{1}}^{2}=\sum_{k:\,\lambda_{k}(m+1)^{b}\geq B}\lambda_{k}^ {-(1-s)}(\lambda_{k}^{-s}c_{k}^{2})\leq B^{-(1-s)}(m+1)^{(1-s)b}\|u\|_{V_{P_{ \rho}}^{s}}^{2}\]
and
\[\|u-h\|^{2}=\sum_{k:\,\lambda_{k}(m+1)^{b}<B}\lambda_{k}^{s}(\lambda_{k}^{-s}c_{k}^{2})\leq B^{s}(m+1)^{-bs}\|u\|_{V_{P_{\rho}}^{s}}^{2}.\]
After substitution into (25), we obtain
\[\mathbb{E}^{\prime}(\|e^{(m+1)}\|^{2})\leq\alpha_{m}^{2}(\|e^{(m)}\|^{2}+2\Lambda\sigma^{2}\mu_{m}^{2})+2\alpha_{m}\bar{\alpha}_{m}B^{s/2}(m+1)^{-bs/2}\|u\|_{V_{P_{\rho}}^{s}}\|e^{(m)}\|\] \[+\bar{\alpha}_{m}^{2}(\|u\|^{2}+\mu_{m}^{-1}B^{-(1-s)}(m+1)^{(1-s)b}\|u\|_{V_{P_{\rho}}^{s}}^{2}). \tag{26}\]
Clearly, if \(s=1\), we can set \(h=u\) which would greatly simplify the considerations below and leads to a more precise final estimate, see Section 5.1.
Next, we switch to full expectations in (26) by using the independence assumption for the sampling process and take into account that
\[\varepsilon_{m}:=\mathbb{E}(\|e^{(m)}\|^{2})^{1/2}\geq\mathbb{E}(\|e^{(m)}\|).\]
Together with (15) and \(\alpha_{m}=(m+1)\bar{\alpha}_{m}\), this gives

\[\varepsilon_{m+1}^{2} \leq \alpha_{m}^{2}(\varepsilon_{m}^{2}+2A^{2}\Lambda\sigma^{2}(m+1)^{-2t})+2\alpha_{m}\bar{\alpha}_{m}B^{s/2}(m+1)^{-bs/2}\|u\|_{V_{P_{\rho}}^{s}}\varepsilon_{m}\] \[+\bar{\alpha}_{m}^{2}(\|u\|^{2}+A^{-1}B^{-(1-s)}(m+1)^{(1-s)b+t}\|u\|_{V_{P_{\rho}}^{s}}^{2})\] \[\leq \alpha_{m}^{2}\varepsilon_{m}^{2}+\bar{\alpha}_{m}^{2}(2A^{2}\Lambda\sigma^{2}(m+1)^{2-2t}+2B^{s/2}(m+1)^{-bs/2+1}\|u\|_{V_{P_{\rho}}^{s}}\varepsilon_{m}\] \[+\|u\|^{2}+A^{-1}B^{-(1-s)}(m+1)^{(1-s)b+t}\|u\|_{V_{P_{\rho}}^{s}}^{2}).\]
In a final step, we assume for a moment that
\[\varepsilon_{k}\leq C(k+1)^{-r},\qquad k=0,\ldots,m, \tag{27}\]
holds for some constants \(C,r>0\). Next, we set
\[a:=\max(2-2t,-bs/2+1-r,(1-s)b+t)\]
and
\[D:=2A^{2}\Lambda\sigma^{2}+2CB^{s/2}\|u\|_{V_{P_{\rho}}^{s}}+\|u\|^{2}+A^{-1}B^{-(1-s)}\|u\|_{V_{P_{\rho}}^{s}}^{2}.\]
Since \(1/2<t<1\) is assumed in (15), we have \(a>0\). Then, for \(k=0,1,\ldots,m\), the estimate for \(\varepsilon_{k+1}\) simplifies to
\[\varepsilon_{k+1}^{2}\leq\alpha_{k}^{2}\varepsilon_{k}^{2}+D\bar{\alpha}_{k}^{ 2}(k+1)^{a}\]
or, since \(\alpha_{k}^{2}\bar{\alpha}_{k-1}^{2}=\bar{\alpha}_{k}^{2}\), to
\[d_{k+1}:=\bar{\alpha}_{k}^{-2}\varepsilon_{k+1}^{2}\leq\alpha_{k}^{2}\bar{\alpha}_{k}^{-2}\varepsilon_{k}^{2}+D(k+1)^{a}=d_{k}+D(k+1)^{a}.\]
By recursion we obtain
\[d_{m+1}\leq d_{0}+D\sum_{k=0}^{m}(k+1)^{a}=\varepsilon_{0}^{2}+D\sum_{k=0}^{m }(k+1)^{a}\]
and eventually
\[\varepsilon_{m+1}^{2}\leq(m+2)^{-2}(\|e^{(0)}\|^{2}+D(m+2)^{a+1})<(\|e^{(0)}\| ^{2}+D)(m+2)^{a-1},\]
since we have \(a>0\) and
\[\sum_{k=0}^{m}(k+1)^{a}\leq\int_{1}^{m+2}x^{a}\,dx<(m+2)^{a+1}.\]
Thus, (27) holds by induction for all \(m\) if we ensure that
\[1-a\geq 2r,\qquad\|e^{(0)}\|^{2}+D\leq C^{2}. \tag{28}\]
To finish the proof of Theorem 1, it remains to maximize \(r\) for given \(0<s\leq 1\). To this end, it is intuitively clear to require
\[a=1-2r=2-2t=-bs/2+1-r=(1-s)b+t.\]
This system of equations has the unique solution
\[t=\frac{1+s}{2+s},\quad b=\frac{1}{2+s},\quad 2r=\frac{s}{2+s},\quad a=\frac{2}{ 2+s}.\]
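For completeness, a direct check confirms that all four expressions take the common value \(a=2/(2+s)\):

\[2-2t=\frac{2}{2+s},\qquad 1-2r=1-\frac{s}{2+s}=\frac{2}{2+s},\qquad(1-s)b+t=\frac{1-s}{2+s}+\frac{1+s}{2+s}=\frac{2}{2+s},\]

\[-\frac{bs}{2}+1-r=-\frac{s}{2(2+s)}+1-\frac{s}{2(2+s)}=1-\frac{s}{2+s}=\frac{2}{2+s}.\]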
Furthermore, the appropriate value for \(C\) in (27) must satisfy
\[C^{2}\geq\|e^{(0)}\|^{2}+\|u\|^{2}+2A^{2}\Lambda\,\sigma^{2}+2CB^{s/2}\|u\|_{V^{s}_{P_{\rho}}}+A^{-1}B^{-(1-s)}\|u\|_{V^{s}_{P_{\rho}}}^{2}.\]
With such choices for \(t\) and \(C\), the condition (28) is guaranteed and (27) yields the desired bound
\[\varepsilon_{m}^{2}\leq C^{2}(m+1)^{-s/(s+2)},\qquad m=1,2,\ldots,N-1.\]
By choosing concrete values for \(0<A\leq(2\Lambda)^{-1}\) and \(B>0\), the constant \(C^{2}\) can be made more explicit. E.g., substituting the upper bound
\[2CB^{s/2}\|u\|_{V^{s}_{P_{\rho}}}\leq\frac{C^{2}}{2}+2B^{s}\|u\|_{V^{s}_{P_{\rho}}}^{2}\]
and rearranging terms shows that
\[C^{2}=2\left(\|e^{(0)}\|^{2}+\|u\|^{2}+B^{s}(2+(AB)^{-1})\|u\|_{V^{s}_{P_{\rho}}}^{2}+2A^{2}\Lambda\,\sigma^{2}\right)\]
is suitable. In particular, setting for simplicity \(A\) to its maximal value \(A=(2\Lambda)^{-1}\) and taking \(B=\Lambda\) gives a more explicit dependence of \(C^{2}\) on the assumptions on \(\|e^{(0)}\|^{2}\), the variance \(\sigma^{2}\) and the smoothness of \(u\), namely
\[C^{2}=2\|e^{(0)}\|^{2}+2\|u\|^{2}+8\Lambda^{s}\|u\|_{V^{s}_{P_{\rho}}}^{2}+\sigma^{2}/\Lambda\,. \tag{29}\]
This is the constant shown in the formulation of Theorem 1. Clearly, varying \(A\) and \(B\) will change the trade-off between initial error, noise variance and smoothness assumptions in the convergence estimate (27). Note also that \(B\) is not part of the algorithm and can be adjusted to any value. This finishes the proof of Theorem 1.
## 5 Further remarks
### Comments on Theorem 1
In the special case \(s=1\), the proof of Theorem 1 simplifies as follows: In (25) we can set \(h=u\) and (26) consequently simplifies to
\[\mathbb{E}^{\prime}(\|e^{(m+1)}\|^{2})\leq\alpha_{m}^{2}(\|e^{(m)}\|^{2}+2\Lambda\,\sigma^{2}\mu_{m}^{2})+\bar{\alpha}_{m}^{2}(\|u\|^{2}+\mu_{m}^{-1}\|u\|_{V^{1}_{P_{\rho}}}^{2}). \tag{30}\]
Thus, with \(\mu_{m}=A(m+1)^{-t}\) we directly obtain a recursion for
\[d_{m}:=\bar{\alpha}_{m-1}^{-2}\varepsilon_{m}^{2}=(m+1)^{2}\mathbb{E}(\|e^{(m )}\|^{2})\]
in the form
\[d_{m+1}\leq d_{m}+(2A^{2}\Lambda\,\sigma^{2}(m+1)^{2-2t}+\|u\|^{2}+A^{-1}(m+1)^{t}\|u\|^{2}_{V^{1}_{P_{\rho}}}).\]
For \(1/2<t<1\) we finally arrive at
\[\mathbb{E}(\|e^{(m)}\|^{2})\leq\frac{\|e^{(0)}\|^{2}}{(m+1)^{2}}+\frac{2A^{2}\Lambda\sigma^{2}}{(m+1)^{2t-1}}+\frac{\|u\|^{2}}{m+1}+\frac{A^{-1}\|u\|^{2}_{V_{P_{\rho}}^{1}}}{(m+1)^{1-t}}, \tag{31}\]
\(m=1,2,\ldots\). This estimate shows more clearly the guaranteed error decay with respect to the initial error \(\|e^{(0)}\|^{2}\), the noise variance \(\sigma^{2}\) and the norms \(\|u\|^{2}\) and \(\|u\|^{2}_{V_{P_{p}}^{1}}\) of the solution \(u\), respectively, in dependence on \(t\). The asymptotically dominant term is here of the form
\[\mathrm{O}((m+1)^{-\min(2t-1,1-t)})\]
and is minimized if \(t=2/3\). For this value of \(t\) and with \(A=(2\Lambda)^{-1}\) one obtains
\[\mathbb{E}(\|e^{(m)}\|^{2})\leq\frac{\|e^{(0)}\|^{2}}{(m+1)^{2}}+\frac{\|u\|^{2}}{m+1}+\frac{\|u\|^{2}_{V_{P_{\rho}}^{1}}+\sigma^{2}}{2\Lambda(m+1)^{1/3}}. \tag{32}\]
Without further assumptions, one cannot expect a better error decay rate, see Section 3 and Subsection 5.3.
Another comment concerns the finite horizon setting which is often treated instead of a true online method. Here one fixes a finite \(N\), chooses a constant learning rate \(\mu_{m}=\mu\) for \(m=0,\ldots,N-1\) in dependence on \(N\), and asks for a best possible bound for \(\mathbb{E}(\|e^{(N)}\|^{2})\) only. Our approach easily delivers results for this case as well. We demonstrate this only for \(s=1\). For fixed \(\mu_{m}=\mu\), the error recursion for the quantities \(d_{m}\) takes now the form
\[d_{m+1}\leq d_{m}+(2\Lambda\,\sigma^{2}(m+1)^{2}\mu^{2}+\|u\|^{2}+\mu^{-1}\|u\|^{2}_{V_{P_{\rho}}^{1}}),\quad m=0,\ldots,N-1,\]
and gives
\[\mathbb{E}(\|e^{(N)}\|^{2})\leq\frac{\|e^{(0)}\|^{2}}{(N+1)^{2}}+2\Lambda\sigma^{2}\mu^{2}(N+1)+\frac{\|u\|^{2}+\mu^{-1}\|u\|^{2}_{V_{P_{\rho}}^{1}}}{N+1}.\]
Setting \(\mu=A(N+1)^{-2/3}\) results in a final estimate for the finite horizon case similar to (32) but only for \(m=N\).
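The choice of \(\mu\) simply balances the two \(\mu\)-dependent contributions; up to constants,

\[\sigma^{2}\mu^{2}(N+1)\asymp(N+1)^{-1/3}\qquad\text{and}\qquad\frac{\mu^{-1}\|u\|^{2}_{V^{1}_{P_{\rho}}}}{N+1}\asymp(N+1)^{-1/3}\qquad\text{for}\quad\mu\asymp(N+1)^{-2/3},\]

so that both terms decay at the same rate \((N+1)^{-1/3}\) as in (32).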
There are obvious drawbacks of the whole setting in which Theorem 1 is formulated. First of all, the assumptions are qualitative at most: Since \(\mu\), and thus \(\rho\), is usually not at our disposal, we cannot verify the assumption \(u\in V_{P_{\rho}}^{s}\), nor assess the value of \(\sigma^{2}\). Moreover, even though in view of the obtained results the restriction to learning rates \(\mu_{m}\) of the form (15) may not cause issues, the choice of optimal values for \(t\) and \(A\) is by no means obvious. A rule for the adaptive choice of \(\mu_{m}\), which does not require knowledge about values for \(s\) and the size of norms of \(u\) but leads to the same quantitative error decay as guaranteed by Theorem 1, would be desirable.
### Difficulties with convergence in \(L^{2}_{\rho}(\Omega,Y)\)
Our result for the vector-valued case concerned convergence in \(V\) which is isometric to the RKHS \(H\) generated by \(\mathbf{R}\). What we did not succeed in is to extend our methods to establish better asymptotic convergence rates of \(f_{u^{(m)}}\to f_{u}\) in the \(L^{2}_{\rho}(\Omega,Y)\) norm. It is not hard to see that, under the assumption (11) about the existence of the minimizer \(u\) in (9), error estimates in the \(L^{2}_{\rho}(\Omega,Y)\) norm require the investigation of \(\mathbb{E}(\|P_{\rho}^{1/2}e^{(m)}\|^{2})=\mathbb{E}((P_{\rho}e^{(m)},e^{(m) }))\) instead of \(\mathbb{E}(\|e^{(m)}\|^{2})\). If, in analogy with (21), one examines the error decomposition
\[\|P_{\rho}^{1/2}e^{(m+1)}\|^{2}\leq\|P_{\rho}^{1/2}\bar{e}^{(m+1)}\|^{2}-2\alpha_{m}\mu_{m}(P_{\rho}R_{\omega_{m}}\varepsilon_{\omega_{m}},\bar{e}^{(m+1)})+\alpha_{m}^{2}\mu_{m}^{2}\|P_{\rho}^{1/2}R_{\omega_{m}}\varepsilon_{\omega_{m}}\|^{2},\]
then difficulties mostly arise from the first term in the right-hand side. Indeed, we have
\[\|P_{\rho}^{1/2}\bar{e}^{(m+1)}\|^{2} = \bar{\alpha}_{m}^{2}\|P_{\rho}^{1/2}u\|^{2}+2\alpha_{m}\bar{\alpha}_{m}(P_{\rho}u,e^{(m)}-\mu_{m}P_{\omega_{m}}e^{(m)})\] \[+ \alpha_{m}^{2}(\|P_{\rho}^{1/2}e^{(m)}\|^{2}-2\mu_{m}(P_{\rho}e^{(m)},P_{\omega_{m}}e^{(m)})+\mu_{m}^{2}\|P_{\rho}^{1/2}P_{\omega_{m}}e^{(m)}\|^{2}).\]
After taking conditional expectations \(\mathbb{E}^{\prime}(\|P_{\rho}^{1/2}\bar{e}^{(m+1)}\|^{2})\), we obtain a negative term
\[-2\alpha_{m}^{2}\mu_{m}\|P_{\rho}e^{(m)}\|^{2}\]
on the right-hand side which needs to compensate for positive contributions from terms such as
\[\mathbb{E}^{\prime}(\|P_{\rho}^{1/2}P_{\omega_{m}}e^{(m)}\|^{2}).\]
Since, in general, \(P_{\rho}\) does not commute with the operators \(P_{\omega}\), this strategy does not work without additional assumptions.
### A special case
Let us now consider the particular "learning" problem of recovering an unknown element \(u\in V\) from noisy measurements of its coefficients with respect to a CONS \(\Psi=\{\psi_{i}\}_{i\in\mathbb{N}}\) in \(V\) by the online method considered in this paper. To this end, we assume that we are given \(\mu\)-distributed random samples \((i_{m},y_{m})\), where \(i_{m}\in\mathbb{N}\) and
\[y_{m}=(u,\psi_{i_{m}})+\varepsilon_{m},\qquad m=0,1,\ldots \tag{33}\]
are the noisy samples of the coefficients \((u,\psi_{i})\). Starting from \(u^{(0)}=0\), we now want to approximate \(u\) by the iterates \(u^{(m)}\) obtained from the online algorithm
\[u^{(m+1)}=\alpha_{m}u^{(m)}+\alpha_{m}\mu_{m}(y_{m}-(u^{(m)},\psi_{i_{m}}))\psi _{i_{m}},\qquad m=0,1,\ldots, \tag{34}\]
where the coefficients \(\alpha_{m}\) and \(\mu_{m}\) are given by (15) with \(\Lambda=1\). This is a special instance of (14) if we set \(\Omega=\mathbb{N}\), \(Y=\mathbb{R}\) and define \(R_{i}:\mathbb{R}\to V\) and \(R_{i}^{*}:V\to\mathbb{R}\) by \(R_{i}y=y\psi_{i}\) and \(R_{i}^{*}v=(v,\psi_{i})\), respectively. To simplify things further, let \(i_{m}\) be i.i.d. samples from \(\mathbb{N}\) with respect to a discrete probability measure \(\rho\) on \(\mathbb{N}\) and let \(\varepsilon_{m}\) be i.i.d. random noise with zero mean and finite variance \(\sigma^{2}<\infty\) which is independent of \(i_{m}\). This means that the underlying measure \(\mu\) on \(\mathbb{N}\times\mathbb{R}\) is a product measure. The associated operator \(P_{\rho}\) is given by
\[P_{\rho}v=\sum_{i\in\mathbb{N}}\rho_{i}(v,\psi_{i})\psi_{i},\]
its eigenvalues \(\lambda_{i}=\rho_{i}\) are given by \(\rho\), and it is trace class (w.l.o.g., we assume \(\rho_{1}\geq\rho_{2}\geq\ldots\)). The spaces \(V_{P_{\rho}}^{s},-\infty<s<\infty\), can now be identified as sets of formal orthogonal series
\[V_{P_{\rho}}^{s}:=\left\{u\sim\sum_{i\in\mathbb{N}}c_{i}\psi_{i}\ :\quad\|u\|_{V_{P_{\rho}}^{s}}^{2}=\sum_{i\in\mathbb{N}}\rho_{i}^{-s}c_{i}^{2}<\infty\right\}.\]
Obviously, \(V_{P_{\rho}}^{s}\subset V=V_{P_{\rho}}^{0}\) for \(s>0\). Since functions \(f:\mathbb{N}\to\mathbb{R}\) can be identified with formal series with respect to \(\Psi\) by
\[u\sim\sum_{i\in\mathbb{N}}c_{i}\psi_{i}\quad\leftrightarrow\quad f_{u}:\ f_{u}(i)=c_{i},\]
we have \(\|f_{u}\|_{L^{2}_{\rho}(\mathbb{N},\mathbb{R})}=\|u\|_{V_{P_{\rho}}^{-1}}\) and we can silently identify \(L^{2}_{\rho}(\mathbb{N},\mathbb{R})\) with \(V_{P_{\rho}}^{-1}\). Under the assumptions made, the underlying minimization problem (9) on \(V\) reads
\[\mathbb{E}(|f_{v}-y|^{2})=\|f_{v}-f_{u}\|_{L^{2}_{\rho}(\mathbb{N},\mathbb{R}) }^{2}+\sigma^{2}\quad\longmapsto\quad\text{min},\]
and, as expected, has \(u\) as its unique solution. This example also shows that it may sometimes be more appropriate to consider convergence in \(V\) than convergence in the sense of \(L^{2}_{\rho}(\Omega,Y)\).
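To make the setting concrete, the following minimal numerical sketch simulates (33)-(34) for a truncated index set. The concrete choices \(\alpha_{m}=(m+1)/(m+2)\) and \(\mu_{m}=A/(m+1)^{t}\) are only stand-ins for the parameters of (15), which is not restated here, and the decay profiles of \(\rho\) and of the coefficients \(c_{i}\) are illustrative assumptions.

```python
# Minimal sketch of the online scheme (34) for noisy coefficient recovery (33).
# The forms of alpha_m and mu_m below are assumptions standing in for (15).
import numpy as np

rng = np.random.default_rng(0)
n, A, t, sigma = 200, 0.5, 0.75, 0.1
rho = np.arange(1, n + 1, dtype=float) ** -1.5
rho /= rho.sum()                                   # sampling distribution over indices i
c = np.arange(1, n + 1, dtype=float) ** -1.0       # assumed decay of the true coefficients c_i = (u, psi_i)

u = np.zeros(n)                                    # iterate u^(0) = 0, stored by its coefficients
for m in range(20000):
    i = rng.choice(n, p=rho)                       # sampled index i_m
    y = c[i] + sigma * rng.standard_normal()       # noisy sample y_m of (u, psi_{i_m}), cf. (33)
    alpha = (m + 1) / (m + 2)                      # assumed alpha_m
    mu = A / (m + 1) ** t                          # assumed mu_m
    correction = mu * (y - u[i])                   # mu_m (y_m - (u^(m), psi_{i_m}))
    u *= alpha                                     # alpha_m u^(m)
    u[i] += alpha * correction                     # + alpha_m mu_m (...) psi_{i_m}, cf. (34)

print("V-norm error      :", np.linalg.norm(u - c))
print("L^2_rho-norm error:", np.sqrt(np.sum(rho * (u - c) ** 2)))
```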
The simplicity of this example enables a rather comprehensive convergence theory with respect to the scale of \(V^{s}_{P_{\rho}}\) spaces. We state the following results without detailed proof.
Theorem 3.1: _Let \(-1\leq\bar{s}\leq 0\), \(s\geq 0\), and \(\bar{s}<s\leq\bar{s}+2\). Then, for the sampling process described above, the online algorithm (34) converges for \(u\in V^{s}_{P_{\rho}}\) in the \(V^{\bar{s}}_{P_{\rho}}\) norm with the bound_
\[\mathbb{E}(\|e^{(m)}\|^{2}_{V^{\bar{s}}_{P_{\rho}}})\leq C(m+1)^{-\min\left(\frac{s-\bar{s}}{s+2},\frac{2}{\bar{s}+4}\right)}(A^{\bar{s}-s}\|u\|^{2}_{V^{s}_{P_{\rho}}}+A^{2+\bar{s}}\sigma^{2}),\quad m=1,2,\ldots, \tag{35}\]
_if the parameters \(t\) and \(A\) in (15) satisfy_
\[t=t_{s,\bar{s}}:=\max((s+1)/(s+2),(\bar{s}+3)/(\bar{s}+4)),\qquad 0<A\leq 1/2.\]
Setting \(\bar{s}=0\), one concludes from (35) that the convergence estimate for the online algorithm (14), which holds by Theorem 3.1 for \(0<s\leq 1\) in the general case, is indeed matched. For \(\bar{s}=-1\), which corresponds to \(L^{2}_{\rho}(\mathbb{N},\mathbb{R})\) convergence, the rate is better and in line with known lower bounds.
The estimate (35) for the online algorithm (34) is best possible, in the sense that, under the conditions of Theorem 3.1, the exponent in (35) cannot be increased without additional assumptions on \(\rho\). In particular, for \(s>\bar{s}+2\) no further improvement is obtained, i.e., the estimate indeed saturates at \(s=\bar{s}+2\). This can be seen from the following result.
Theorem 3.2: _Let \(-1\leq\bar{s}\leq 0\leq s<\infty\), \(\bar{s}<s\) and \(\sigma>0\). For the online algorithm (34) we have_
\[\sup_{\rho}\sup_{u:\|u\|^{2}_{V^{s}_{P_{\rho}}}=1}(m+1)^{\min\left((s-\bar{s})/(2+s),2/(\bar{s}+4)\right)}\mathbb{E}(\|e^{(m)}\|^{2}_{V^{\bar{s}}_{P_{\rho}}})\geq c>0, \tag{36}\]
\(m=1,2,\ldots\)_, where \(c\) depends on \(\bar{s}\), \(s\), \(\sigma\) and the parameters \(t\) and \(A\) in (15), but is independent of \(m\)._
The proofs of these statements are elementary but rather tedious and will be given elsewhere. Let us just note that the simplicity of this example allows us to reduce the considerations to explicit linear recursions for expectations associated with the decomposition coefficients \(c^{(m)}_{i}:=(e^{(m)},\psi_{i})\) of the errors \(e^{(m)}=u-u^{(m)}\) with respect to \(\Psi\) for each \(i\in\mathbb{N}\) separately. This is because
\[\mathbb{E}(\|e^{(m)}\|^{2}_{V^{\bar{s}}_{P_{\rho}}})=\sum_{i}\rho_{i}^{-\bar{s}}\mathbb{E}((c^{(m)}_{i})^{2}),\qquad\|u\|^{2}_{V^{s}_{P_{\rho}}}=\|e^{(0)}\|^{2}_{V^{s}_{P_{\rho}}}=\sum_{i}\rho_{i}^{-s}c^{2}_{i} \tag{37}\]
and
\[c^{(m+1)}_{i} = \bar{\alpha}_{m}c_{i}+\alpha_{m}c^{(m)}_{i}-\alpha_{m}\left\{ \begin{array}{ll}\mu_{m}(y_{m}-(u^{(m)},\psi_{i_{m}})),&i_{m}=i\\ 0,&i_{m}\neq i\end{array}\right.\] \[= \bar{\alpha}_{m}c_{i}+\alpha_{m}(c^{(m)}_{i}-\delta_{i,i_{m}}\mu_{m}(c^{(m)}_{i}+\varepsilon_{m}))\]
for \(m=0,1,\ldots\), where \(\delta_{i,i_{m}}=1\) with probability \(\rho_{i}\), and \(\delta_{i,i_{m}}=0\) with probability \(1-\rho_{i}\). Thus, if we denote \(\varepsilon_{m,i}:=\mathbb{E}((c^{(m)}_{i})^{2})\) and \(\bar{\varepsilon}_{m,i}:=\mathbb{E}(c^{(m)}_{i})\) and use the independence assumption, we get a system of linear recursions
\[\varepsilon_{m+1,i} = \alpha_{m}^{2}(1-\rho_{i}\mu_{m}(2-\mu_{m}))\varepsilon_{m,i}+2 \alpha_{m}\bar{\alpha}_{m}(1-\rho_{i}\mu_{m})c_{i}\bar{\varepsilon}_{m,i}+\bar{ \alpha}_{m}^{2}c^{2}_{i}+\rho_{i}\alpha_{m}^{2}\mu_{m}^{2}\sigma^{2},\] \[\bar{\varepsilon}_{m+1,i} = \alpha_{m}(1-\rho_{i}\mu_{m})\bar{\varepsilon}_{m,i}+\bar{\alpha}_ {m}c_{i},\]
\(m=0,1,\ldots\), with starting values \(\varepsilon_{0,i}=c^{2}_{i}\) and \(\bar{\varepsilon}_{0,i}=c_{i}\). This system can, in principle, be solved explicitly. For instance, we straightforwardly have
\[\bar{\varepsilon}_{m,i}=\frac{1}{m+1}c_{i}S_{m},\qquad S_{m}:=\sum_{k=0}^{m} \Pi_{k}^{m-1},\]
where the notation
\[\Pi_{k}^{m-1}=(1-\frac{a}{m^{t}})\cdot\ldots(1-\frac{a}{(k+1)^{t}}),\quad 0\leq k \leq m-1,\quad\Pi_{m}^{m-1}=1, \tag{38}\]
is used with \(a=A\rho_{i}\). Similarly, we get
\[\varepsilon_{m,i}\leq\frac{1}{(m+1)^{2}}\left(2c_{i}^{2}\sum_{k=0}^{m}\Pi_{k}^ {m-1}S_{k}+\rho_{i}\sigma^{2}\sum_{k=0}^{m-1}(k+1)^{2}\mu_{k}^{2}\Pi_{k+1}^{m-1 }\right).\]
A matching lower bound for \(\varepsilon_{m,i}\) can be obtained by using a slightly different value of \(a\) in the definition of the products \(\Pi_{k}^{m-1}\). The remainder of the argument for Theorem 3.1 requires first the substitution of tight upper bounds for \(\Pi_{k}^{m-1}\) and \(S_{k}\) in dependence on \(t\) and \(a\) into the bounds for \(\varepsilon_{m,i}\). Next, after substitution of the estimates for \(\varepsilon_{m,i}\) into (37), the resulting series has to be further estimated separately for the index sets \(I_{1}:=\{i:A\rho_{i}\leq(m+1)^{t-1}\}\) and \(I_{2}:=\mathbb{N}\backslash I_{1}\) followed by choosing the indicated optimal value of \(t=t_{s,\bar{s}}\). This leads to the bound (35) in Theorem 3.1. For the proof of Theorem 3.2, lower bounds for \(\Pi_{k}^{m-1}\), \(S_{k}\) and, consequently, for \(\varepsilon_{m,i}\) are needed, combined with choosing appropriate discrete probability distributions \(\rho\).
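As a sanity check on the recursions above, one can iterate them numerically for a single index \(i\) and compare against Monte Carlo runs of (34). The sketch below again assumes \(\alpha_{m}=(m+1)/(m+2)\), \(\mu_{m}=A/(m+1)^{t}\) and \(\bar{\alpha}_{m}=1-\alpha_{m}\); these are illustrative choices, not a restatement of (15).

```python
# Iterate the linear recursions for eps_{m,i} = E[(c_i^{(m)})^2] and
# bar_eps_{m,i} = E[c_i^{(m)}] for one index i (assumed parameter forms).
def moments(c_i, rho_i, sigma, A=0.5, t=0.75, M=20000):
    eps, bar_eps = c_i ** 2, c_i                   # starting values eps_{0,i}, bar_eps_{0,i}
    for m in range(M):
        alpha = (m + 1) / (m + 2)                  # assumed alpha_m
        bar_alpha = 1.0 - alpha                    # assumed relation bar_alpha_m = 1 - alpha_m
        mu = A / (m + 1) ** t                      # assumed mu_m
        eps_new = (alpha**2 * (1 - rho_i * mu * (2 - mu)) * eps
                   + 2 * alpha * bar_alpha * (1 - rho_i * mu) * c_i * bar_eps
                   + bar_alpha**2 * c_i**2
                   + rho_i * alpha**2 * mu**2 * sigma**2)
        bar_eps = alpha * (1 - rho_i * mu) * bar_eps + bar_alpha * c_i
        eps = eps_new
    return eps, bar_eps

print(moments(c_i=1.0, rho_i=0.05, sigma=0.1))
```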
## Acknowledgement
Michael Griebel and Peter Oswald were supported by the _Hausdorff Center for Mathematics_ in Bonn, funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2047/1 - 390685813 and the CRC 1060 _The Mathematics of Emergent Effects_ of the Deutsche Forschungsgemeinschaft.
|
2308.16549 | Thesis Distillation: Investigating The Impact of Bias in NLP Models on
Hate Speech Detection | This paper is a summary of the work done in my PhD thesis. Where I
investigate the impact of bias in NLP models on the task of hate speech
detection from three perspectives: explainability, offensive stereotyping bias,
and fairness. Then, I discuss the main takeaways from my thesis and how they
can benefit the broader NLP community. Finally, I discuss important future
research directions. The findings of my thesis suggest that the bias in NLP
models impacts the task of hate speech detection from all three perspectives.
And that unless we start incorporating social sciences in studying bias in NLP
models, we will not effectively overcome the current limitations of measuring
and mitigating bias in NLP models. | Fatma Elsafoury | 2023-08-31T08:40:41Z | http://arxiv.org/abs/2308.16549v2 | # Thesis Distillation:
###### Abstract
This paper is a summary of the work in my PhD thesis, in which I investigate the impact of bias in NLP models on the task of hate speech detection from three perspectives: explainability, offensive stereotyping bias, and fairness. I discuss the main takeaways from my thesis and how they can benefit the broader NLP community. Finally, I discuss important future research directions. The findings of my thesis suggest that bias in NLP models impacts the task of hate speech detection from all three perspectives, and that unless we start incorporating social sciences in studying bias in NLP models, we will not effectively overcome the current limitations of measuring and mitigating bias in NLP models.
## 1 Introduction
Hate speech on social media has severe negative impacts, not only on its victims (Sticca et al., 2013) but also on the moderators of social media platforms (Roberts, 2019). This is why it is crucial to develop tools for automated hate speech detection. These tools should provide a safer environment for individuals, especially for members of marginalized groups, to express themselves online. However, recent research shows that current hate speech detection models falsely flag content written by members of marginalized communities, as hateful (Sap et al., 2019; Dixon et al., 2018; Mohangama et al., 2021). Similarly, recent research indicates that there are social biases in natural language processing (NLP) models (Garg et al., 2018; Nangia et al., 2020; Kurita et al., 2019; Ousidhoum et al., 2021; Nozza et al., 2021, 2022).
Yet, the impact of these biases on the task of hate speech detection has been understudied. In my thesis, I identify and study three research problems: 1) the impact of bias in NLP models on the performance and explainability of hate speech detection models; 2) the impact of the imbalanced representation of hateful content on the bias in NLP models; and 3) the impact of bias in NLP models on the fairness of hate speech detection models.
Investigating and understanding the impact of bias in NLP on hate speech detection models will help the NLP community to develop more reliable, effective, and fair hate speech detection models. My research findings can be extended to the general task of text classification. Similarly, understanding the origins of bias in NLP models and the limitations of the current research on bias and fairness in NLP models will help the NLP community develop more effective tools and methods to expose and mitigate the bias in NLP models.
In my thesis and this paper, I first critically review the literature on hate speech detection (§2) and bias and fairness in NLP models (§3). Then, I address the identified research problems in hate speech detection by investigating the impact of bias in NLP models on hate speech detection models from three perspectives: 1) the explainability perspective (§4), where I address the first research problem and investigate the impact of bias in NLP models on their performance on hate speech detection and whether the bias in NLP models explains their performance on hate speech detection; 2) the offensive stereotyping bias perspective (§5), where I address the second research problem and investigate the impact of imbalanced representations and co-occurrences of hateful content with marginalized identity groups on the bias of NLP models; and 3) the fairness perspective (§6), where I address the third research problem and investigate the impact of bias in NLP models on the fairness of the task of hate speech detection. For each research problem, I summarize the work done to highlight its main findings, contributions, and limitations. Thereafter, I discuss the general takeaways from my thesis and how they can benefit the NLP community at large (§7).
Finally, I present directions for future research (§8).
The findings of my thesis suggest that the bias in NLP models has an impact on hate speech detection models from all three perspectives. This means that we need to mitigate the bias in NLP models so that we can ensure the reliability of hate speech detection models. Additionally, I argue that the limitations and criticisms of the currently used methods to measure and mitigate bias in NLP models are direct results of failing to incorporate relevant literature from social sciences. I build on my findings on hate speech detection and provide a list of actionable recommendations to improve the fairness of the task of text classification as a short time solution. For a long-term solution to mitigate the bias in NLP models, I propose a list of recommendations to address bias in NLP models by addressing the underlying causes of bias from a social science perspective.
## 2 Survey: Hate speech
In Elsafoury et al. (2021), I provide a comprehensive literature review on hate speech and its different forms. Furthermore, I review the hate speech detection literature, covering the methods proposed for every step of the text classification pipeline. Then, I point out the limitations and challenges of the current research on hate speech detection.
The main findings of this survey are: 1) There are different definitions and forms of hate speech. One of the main limitations of current studies on hate speech detection is the lack of distinction between hate speech and other concepts like cyberbullying. 2) There are many resources of hate speech related datasets in the literature that allow the development of new hate speech detection models. However, these datasets have many limitations, including limited languages, biased annotations, class imbalances, and user distribution imbalances. 3) One of the main limitations of the current research on hate speech detection is the lack of understanding of how it is impacted by the bias in NLP models. This limitation is what I aim to address in my thesis.
**Limitations:** One of the main limitations of this survey is that it focuses on hate speech detection only as a supervised text classification task. However, recent studies propose a framework to automate and enforce moderation policies, instead of training machine learning models to understand hate speech Calabrese et al. (2022). Similarly, this review focuses on hate speech datasets that are collected only from social media platforms. However, more recently, generative models have become more popular and started to be used in generating hate speech related datasets Hartvigsen et al. (2022).
## 3 Survey: Bias and Fairness in NLP
In Elsafoury and Abercrombie (2023), I review the literature on the definitions of bias and fairness in NLP models. Additionally, I review the literature on the origins of bias in NLP models from two perspectives: 1) the NLP pipeline, as discussed in Shah et al. (2020); Hovy and Prabhumoye (2021), and 2) social sciences and critical race theory, as discussed in Benjamin (2019); Broussard (2023); Nobel (2018). In this work, I argue that the sources of bias in the NLP pipeline originate in the social sciences and that they are direct results of the sources of bias from the social science perspective, as shown in Figure 1.
The main contribution of this literature review is reviewing the sources of bias in NLP models from the social science perspective as well as the NLP perspective. This survey points out the limitations of the currently used methods to measure and mitigate bias in NLP models. It also suggests that these limitations are direct results of the lack of inclusion of social science literature in the development of methods that quantify and mitigate bias in NLP. Finally, I share a list of actionable suggestions and recommendations with the NLP community on how to mitigate the limitations discussed in studying bias in NLP.
Figure 1: The sources of bias in supervised NLP models

**Limitations:** One main limitation of this survey is that it reviews the literature on the sources of bias in the NLP pipeline only for supervised models. Unsupervised NLP models might have different sources of bias. The second limitation is
regarding the reviewed literature on the sources of bias in social sciences, where I rely mainly on three books: _Algorithms of Oppression: How Search Engines Reinforce Racism_ by Safiya Nobel Nobel (2018), _Race after Technology: Abolitionist Tools for the New Jim Code_ by Ruha Benjamin Benjamin (2019), and _More than a glitch: Confronting race, gender, and ability bias in tech_ by Meredith Broussard Broussard (2023). A more comprehensive review of studies that investigate the direct impact of social causes on bias in NLP would be important future work. However, to the best of my knowledge, this area is currently understudied.
In the next sections, I address the understudied impact of bias in NLP models on hate speech detection models. I investigate that impact from the following perspectives.
## 4 The explainability perspective
For this perspective, I investigate the performance of different hate speech detection models and whether the bias in NLP models explains their performance on the task of hate speech detection. To achieve that, I investigate two sources of bias:
1. **Bias introduced by pre-training:** where I investigate the role that pre-training a language model has on the model's performance, especially when we don't know the bias in the pre-training dataset. I investigate the explainability of the performance of contextual word embeddings, also known as language models, on the task of hate speech detection. I compare the performance of BERT Devlin et al. (2019) to other deep learning models, LSTM and BiLSTM, on five hate speech datasets, and BERT outperformed the rest of the models. I analyze BERT's attention weights and BERT's feature importance scores. I also investigate the most important part of speech (POS) tags that BERT relies on for its performance. The results of this work suggest that pre-training BERT results in a syntactical bias that impacts its performance on the task of hate speech detection Elsafoury et al. (2021).
Based on these findings, I investigate whether the social bias resulting from pre-training contextual word embeddings explains their performance on hate speech detection in the same way syntactical bias does. I inspect the social bias in three contextual word embedding models (BERT (base and large) Devlin et al. (2019), ALBERT (base and xx-large) Lan et al. (2020), and ROBERTA (base and large) Liu et al. (2019)) using three different bias metrics, CrowS-Pairs Nangia et al. (2020), StereoSet Nadeem et al. (2021), and SEAT May et al. (2019), to measure gender, racial and religious biases. First, I investigate whether large models are more socially biased than base models. The Wilcoxon statistical significance test Zimmerman and Zumbo (1993) indicates that there is no statistically significant difference between the bias in base and large models in BERT and RoBERTa, unlike the findings of Nadeem et al. (2021). However, there is a significant difference between the base and xx-large ALBERT. These results suggest that large models are not necessarily more biased than base models, but if the model size gets even bigger, like ALBERT-xx-large, then the models might get significantly more biased. Since there is no significant difference between the base and large models, I only use base language models in the rest of the thesis.
Then, I follow the work of Steed et al. (2022); Goldfarb-Tarrant et al. (2021) and use correlation as a measure of the impact of bias on the performance of the task of hate speech detection. Pearson's correlation coefficients between the bias scores of the different models and the F1-scores of the different models on the five hate-speech-related datasets used are inconsistently positive. However, due to the limitations of the metrics used to measure social bias, as explained in Blodgett et al. (2021), the impact of the social bias in contextual word embeddings on their performance on the task of hate speech detection remains inconclusive.
2. **Bias in pre-training datasets:** where I investigate the impact of using NLP models pre-trained on data collected from social media platforms like Urban Dictionary and 4 & 8 Chan, which are known for hosting sexist and racist posts Nguyen et al. (2017); Papasavva et al. (2020). I investigate the performance of two groups of static word embeddings on hate speech detection. The first group, social-media-based, is pre-trained on biased datasets that contain hateful content. This group consists of Glove-Twitter Mozafari et al. (2020), Urban Dictionary (UD) Wilson et al. (2020), and 4 & 8 Chan (chan) Voue et al. (2020) word embeddings. The second group of word embeddings, informational-based, is pre-trained
on informational data collected from the Wikipedia and Google News platforms. This group contains the word2vec Mikolov et al. (2021) and Glove-WK Pennington et al. (2014) word embeddings. I use static word embeddings in this part of the work because there are static word embeddings that are pre-trained on datasets collected from social media platforms like Urban Dictionary and 4 & 8 Chan. First, I investigate the ability of the five different word embeddings to categorize offensive terms in the Hurtlex lexicon. Then, I investigate the performance of a Bi-LSTM model with an untrainable embedding layer of the five word embeddings on the five hate-speech-related datasets used. The results indicate that the word embeddings pre-trained on biased datasets (social-media-based) outperform the word embeddings trained on informational data (informational-based) on the tasks of offense categorization and hate speech detection Elsafoury et al. (2022).
Based on these findings, I inspect the impact of social bias (gender and racial) in the static word embeddings on their performance on the task of hate speech detection. To measure the social bias in the static word embeddings, I use the following metrics from the literature: WEAT Caliskan et al. (2017), RNSB Sweeney and Najafian (2019), RND Garg et al. (2018), and ECT Dev and Phillips (2019). Then, I use Pearson's correlation to investigate whether the social bias in the word embeddings explains their performance on the task of hate speech detection (a minimal sketch of this correlation analysis is given after this list). Similar to contextual word embeddings, the results indicate an inconsistent positive correlation between the bias scores and the F1-scores of the Bi-LSTM model using the different word embeddings. This lack of a consistent positive correlation could be due to limitations in the used metrics to measure social bias in static word embeddings Antoniak and Mimno (2021). These results suggest that the impact of the social bias in the static word embeddings on the performance of the task of hate speech detection is inconclusive.
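A minimal sketch of the correlation analysis used in this section is given below; the bias and F1 numbers are made-up placeholders, not values from the thesis.

```python
# Pearson's r between per-embedding bias scores and F1 scores on one dataset.
from scipy.stats import pearsonr

bias_scores = [0.42, 0.35, 0.51, 0.47, 0.30]   # illustrative bias score per word embedding
f1_scores = [0.78, 0.74, 0.80, 0.79, 0.71]     # illustrative F1 of the corresponding models

r, p = pearsonr(bias_scores, f1_scores)
print(f"Pearson r = {r:.2f}, p-value = {p:.3f}")
```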
**Contributions:** The main findings and contributions of the explainability perspective can be summarized as: **1)** The results provide evidence that the syntactical bias in contextual word embeddings, resulting from pre-training, explains their performance on the task of hate speech detection. **2)** The results suggest that pre-training static word embeddings on biased datasets from social-media-based sources improves and might explain the performance of the word embeddings on the task of hate speech detection. **3)** For both static and contextual word embeddings, there is no strong evidence that social bias explains the performance of hate speech detection models. However, due to the limitations of the methods used to measure social bias in both static and contextual word embeddings, this finding remains inconclusive.
**Limitations:** One of the main limitations of this work is using social bias metrics from the literature, which have their own limitations, as argued in Blodgett et al. (2021); Antoniak and Mimno (2021). Additionally, the work done here is limited to hate speech datasets that are in English. Similarly, the social bias inspected in the different word embeddings is based on Western societies, where the marginalized groups might be different in different societies. It is also important to mention that the findings of this work are limited to the used datasets and models and might not generalize to other models or datasets.
## 5 The offensive stereotyping bias perspective
In Elsafoury et al. (2022); Elsafoury (2023), I investigate how the hateful content on social media and other platforms that are used to collect data and pre-train NLP models is encoded by those NLP models to form systematic offensive stereotyping (SOS) bias against marginalized groups of people, especially given the imbalanced representation and co-occurrence of hateful content with marginalized identity groups. I introduce the systematic offensive stereotyping (SOS) bias. I formally define it and propose a method to measure it and validate it in static Elsafoury et al. (2022) and contextual word embeddings Elsafoury et al. (2022). Finally, I study how it impacts the performance of these word embeddings on hate speech detection models. I propose the NCSP, which is a metric to measure the SOS bias in static word embeddings using the cosine similarity between a list of swear words and non-offensive words that describe marginalized groups. As for measuring the SOS bias in contextual word embeddings, I propose the \(SOS_{LM}\) metric. The \(SOS_{LM}\) metric uses the masked language model (MLM) task to measure the SOS
bias, similar to the work proposed in the StereoSet (Nadeem et al., 2021) and CrowS-Pairs (Nangia et al., 2020) metrics. Instead of using crowdsourced sentence pairs that contrast socially biased and socially unbiased sentences, I use synthesized sentence pairs that contrast profane and non-profane sentences.
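A minimal sketch of an MLM-based probe in the spirit of the \(SOS_{LM}\) metric (not the thesis implementation) is shown below: it compares pseudo-log-likelihoods of two template sentences that differ only in one word. The model name and the mild template pair are illustrative assumptions.

```python
# Compare masked-LM pseudo-log-likelihoods of a minimally different sentence pair.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum of log-probabilities of each token when it is masked in turn."""
    input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for pos in range(1, input_ids.size(0) - 1):        # skip [CLS] and [SEP]
        masked = input_ids.clone()
        target = masked[pos].item()
        masked[pos] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(input_ids=masked.unsqueeze(0)).logits
        total += torch.log_softmax(logits[0, pos], dim=-1)[target].item()
    return total

# Mild illustrative templates standing in for a (profane, non-profane) pair.
pair = ("women are terrible drivers.", "women are careful drivers.")
print({s: round(pseudo_log_likelihood(s), 2) for s in pair})
```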
I measure the SOS bias scores in 15 static word embeddings (Elsafoury et al., 2022) and 3 contextual word embeddings (Elsafoury, 2023). The results show that for static word embeddings, there is SOS bias in all the inspected word embeddings, and it is significantly higher towards marginalized groups. Similarly, all the inspected contextual word embeddings are SOS biased, but the SOS bias scores are not always higher towards marginalized groups. Then, I validate the SOS bias itself by investigating how reflective it is of the hate that the same marginalized groups experience online. The correlation results, using the Pearson correlation coefficient, indicate that there is a positive correlation between the published statistics of the percentages of the marginalized groups (Women, LGBTQ, and non-white ethnicities) that experience online hate (Hawdon et al., 2015) and the SOS bias scores measured in static and contextual word embeddings using the NCSP and \(\mathit{SOS_{LM}}\) metrics.
I also validate the proposed metrics to measure the SOS bias in comparison to the social bias metrics proposed in the literature. I use the Pearson correlation coefficient between the social bias scores and the SOS bias scores in the static and the contextual word embeddings. For the inspected static word embeddings, the results show a negative correlation between the SOS bias scores measured using the NCSP metric and the social bias scores (gender and race) measured using the WEAT, RND, RNSB, and ECT metrics. As for the contextual word embeddings, the results show a positive correlation between the SOS bias scores measured using the \(\mathit{SOS_{LM}}\) metric and the social bias scores (gender, race, and religion) measured using the CrowS-Pairs metric, which could be the case because the \(\mathit{SOS_{LM}}\) metric is built on the CrowS-Pairs metric.
Finally, I investigate whether the inspected SOS bias explains the performance of the inspected word embeddings on the task of hate speech detection. I train MLP and Bi-LSTM models with an untrainable layer of the different static word embeddings on four hate-speech-related datasets. As for contextual word embeddings, I fine-tune BERT-base-uncased, ALBERT-base, and ROBERTA-base on six hate-speech-related datasets. Then, I use Pearson's correlation between the SOS bias scores in the different word embeddings and the F1 scores of the corresponding models on the task of hate speech detection. The correlation results, similar to the results in §4, show an inconsistent positive correlation. This could be because the limitations of other social bias metrics in the literature are extended to the proposed metrics, especially since I build on proposed bias metrics. In this case, the impact of the SOS bias in static and contextual word embeddings on their performance on the task of hate speech detection remains inconclusive.
**Contributions:** The main findings and contributions of the offensive stereotyping perspective can be summarized as follows: **1)** I define the SOS bias, propose two metrics to measure it in static and contextual word embeddings, and demonstrate that SOS bias correlates positively with the hate that marginalized people experience online. **2)** The results of this chapter provide evidence that all the examined static and contextual word embeddings are SOS biased. In static word embeddings, this SOS bias is significantly higher for marginalized groups than for non-marginalized groups. However, this is not the case with the contextual word embeddings. **3)** Similar to social bias, there is no strong evidence that the SOS bias explains the performance of the different word embeddings on the task of hate speech detection, which could be due to limitations in the proposed metrics to measure the SOS bias.
**Limitations:** The findings of this work are limited to the examined word embeddings, models, and datasets, and might not generalize to others. Similarly, the SOS bias scores measured using the NCSP metric in the inspected static word embeddings, are limited to the used word lists, and even if I use two different swear word lists and identity terms that are coherent according to (Antoniak and Mimno, 2021), using other word lists may give different results. Another limitation is regarding my definition of the SOS bias, as I
define bias from a statistical perspective, which lacks the social science perspective as discussed in (Blodgett et al., 2021; Delobelle et al., 2022). Moreover, I only study bias in western societies where Women, LGBTQ and Non-White ethnicities are among the marginalized groups. However, marginalized groups could include different groups of people in other societies. I also only use datasets and word lists in English, which limits our study to the English-speaking world. Similar to other works on quantifying bias, our proposed metric measures the existence of bias and not its absence (May et al., 2019), and thus low bias scores do not necessarily mean the absence of bias or discrimination in the word embeddings. Another limitation of this work is the use of template sentence-pairs to measure the SOS bias in contextual word embeddings, which do not provide a real context that might have impacted the measured SOS bias. Since the proposed method used to measure the SOS bias in contextual word embeddings (\(\mathit{SOS}_{\mathit{LM}}\)) builds on social bias metrics like CrowS-Pairs and StereoSet, it is highly likely that \(\mathit{SOS}_{\mathit{LM}}\) have the same limitations as CrowS-Pairs and StereoSet that are pointed out in (Blodgett et al., 2021).
## 6 The fairness perspective
In (Elsafoury et al., 2023), I investigate how different sources of bias in NLP models and their removal impact the fairness of the task of hate speech detection. Improving the fairness of the text classification task is critical to ensuring that the decisions made by the models are not based on sensitive attributes like race or gender.
I first measure three sources of bias according to (Shah et al., 2020; Hovy and Prabhumoye, 2021): representation bias, selection bias, and overamplification bias. Then, I fine-tune three language models (BERT, ALBERT, and ROBERTA) on the Jigsaw dataset (Jigsaw, 2018), and measure the fairness of these models using two sets of fairness metrics: threshold-based and threshold-agnostic. The threshold-based metrics are the TPR_gap and the FPR_gap metrics used in (Steed et al., 2022; De-Arteaga et al., 2019). As for the threshold-agnostic metric, I use the AUC_gap metric, which is an adaptation of the metrics proposed in (Borkan et al., 2019). I investigate the impact of the different sources of bias on the models' fairness by measuring the Pearson correlation coefficient between the bias scores and the fairness scores. Then, I investigate the impact of removing the three sources of bias, using different debiasing methods, on the fairness of hate speech detection models. I remove the representation bias using the SentDebias method proposed in (Liang et al., 2020) to remove gender, racial, religious and SOS bias from the inspected language models. To remove the selection bias, I aim to balance the ratio of positive examples between the identity groups in the Jigsaw dataset. To achieve that, I generate synthetic positive examples from existing positive examples in the Jigsaw training dataset with word substitutions generated by the NLPAUG tool, which uses contextual word embeddings (Ma, 2019). To remove the overamplification bias, I aim to ensure that the different identity groups in the Jigsaw dataset appear in similar semantic contexts in the training dataset, as proposed in (Webster et al., 2020). To achieve that, I use two methods: 1) creating data perturbations, and 2) using the SentDebias method to remove the biased representations from the fine-tuned models.
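The synthetic-example generation step can be sketched with the nlpaug library as follows; the exact augmenter settings used in the thesis are not specified here, so the configuration and the example sentence below are assumptions.

```python
# Generate a synthetic variant of a positive (toxic-labelled) example by contextual
# word substitution, as one way to balance positive-example ratios across identity groups.
import nlpaug.augmenter.word as naw

aug = naw.ContextualWordEmbsAug(model_path="bert-base-uncased", action="substitute")
original = "some people from this group are always causing trouble"  # illustrative example
augmented = aug.augment(original)   # returns a string or a list of strings, depending on the nlpaug version
print(augmented)
```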
Thereafter, I compare the fairness of the inspected language models on the task of hate speech detection before and after removing each of the inspected sources of bias. I aim to find the most impactful source of bias on the fairness of the task of hate speech detection and the most effective debiasing method. I also use the counterfactual fairness method Perturbation Score Sensitivity (_SenseScore_), proposed in (Prabhakaran et al., 2019), to further inspect the impact of removing different sources of bias and the most effective bias removal method.
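For concreteness, a minimal sketch of threshold-based gap metrics (in the spirit of TPR_gap and FPR_gap) on toy data is given below; the arrays and group labels are illustrative, and taking the max-min spread across groups is one simple choice of gap, an assumption rather than the exact definition used in the thesis.

```python
# Per-group TPR/FPR and the resulting threshold-based fairness gaps.
import numpy as np

def rates(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return tpr, fpr

def gaps(y_true, y_pred, groups):
    per_group = {g: rates(y_true[groups == g], y_pred[groups == g]) for g in np.unique(groups)}
    tprs = [v[0] for v in per_group.values()]
    fprs = [v[1] for v in per_group.values()]
    return max(tprs) - min(tprs), max(fprs) - min(fprs)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                  # toy gold labels
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])                  # toy model predictions
groups = np.array(["a", "a", "a", "b", "b", "b", "a", "b"])  # toy identity groups
print("TPR_gap, FPR_gap:", gaps(y_true, y_pred, groups))
```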
Finally, I build on the findings of this work and propose practical guidelines to ensure the fairness of the task of text classification, and I showcase these recommendations on the task of sentiment analysis.
**Contributions:** The main findings and contributions of the fairness perspective can be summarized as follows: **1)** The results demonstrate that the dataset used to measure the models' fairness on the downstream task of hate speech detection plays an important role in the measured fairness scores. **2)** The results indicate that it is important to have a fairness dataset with similar semantic contexts and ratios of positive examples between the identity groups within the same sensitive attribute, to make sure that the fairness scores are reliable. **3)** Unlike the findings of
previous research (Cao et al., 2022; Kaneko et al., 2022), the results demonstrate that there is a positive correlation between representation bias, measured by the CrowS-Pairs and the \(SOS_{LM}\) metrics, and the fairness scores of the different models on the downstream task of hate speech detection. **4)** Similar to findings from previous research (Steed et al., 2022), the results of this work demonstrate that downstream sources of bias (overamplification and selection) are more impactful than upstream sources of bias (representation bias). **5)** The results also demonstrate that removing overamplification bias by training language models on a dataset with a balanced contextual representation and similar ratios of positive examples between different identity groups improved the models' fairness consistently across the sensitive attributes and the different fairness metrics, without sacrificing the performance. **6)** I provide empirical guidelines to ensure the fairness of the text classification task.
**Limitations:** It is important to point out that the work done in this section is limited to the examined models and datasets. This work studies bias and fairness from a Western perspective regarding language (English) and culture. There are also issues regarding the datasets that those metrics use to measure the bias (Blodgett et al., 2021). Besides, those metrics measure the existence of bias, not its absence, so a lower score does not necessarily mean the model is unbiased (May et al., 2019). The used fairness metrics, i.e., extrinsic bias metrics, have also received criticism (Hedden, 2021). This means that even though I used more than one metric and different methods to ensure that our findings are reliable, the results could be different when applied to a different dataset. It is also important to mention that there is a possibility that the findings regarding the most effective debiasing method, which is fine-tuning the models on a perturbed dataset, hold because I use a perturbed fairness dataset as well. I recognize that the provided recommendations to have a fairer text classification task rely on creating perturbations for the training and the fairness dataset.
I acknowledge that this task might be challenging for some datasets, especially if the mention of the different identities is not explicit, e.g., when a text does not use the word "Asian" to refer to an Asian person but uses Asian names instead. Additionally, for the sentiment analysis task, the keywords used to filter the IMDB dataset and get only gendered sentences might introduce additional limitations that might have influenced the results. Moreover, in this section, I aim to achieve equity in the fairness of the task of text classification between the different identity groups. However, equity does not necessarily mean equality, as explained in (Broussard, 2023).
## 7 What have we learned?
In this section, I combine all the findings of my thesis and point out how this work can benefit the NLP community and the ongoing research on hate speech detection, bias, and fairness in NLP. The survey of the literature on hate speech detection in §2 shows a lack of research on the impact of bias in NLP models on hate speech detection models, especially the impact on the performance of hate speech detection and on how hateful content leads NLP models to form an offensive stereotyping bias, in addition to limitations of the current research that investigates the impact of bias in NLP models on the fairness of hate speech detection models. The aim of my thesis is to fill these research gaps.
The research goal of my thesis is to investigate the bias in NLP models and its impact on the performance and fairness of the task of hate speech detection, and more generally, the task of text classification. The findings of my thesis show that the bias in NLP models is preventing us from having reliable and effective hate speech detection and text classification models. This is evident from the following findings.
From the **Explainability** perspective, it is inconclusive whether the social bias in NLP models explains the performance of hate speech detection models, due to limitations in the proposed metrics to measure social bias. However, the results in §4 also indicate that the bias resulting from pre-training language models, e.g., syntactic bias and biased pre-training datasets, impacts and explains their performance on hate speech detection models. This good performance suggests that the hate speech detection model associates hateful content with marginalized groups. This might result in falsely flagging content written by marginalized groups on social media platforms.
From the **Offensive stereotyping bias** perspective, the findings in §5 demonstrate that word embeddings, static and contextual, are systematic offensive stereotyping (SOS) biased.
The results show no strong evidence that the SOS bias explains the performance of the word embeddings on the task of hate speech detection, due to limitations in the proposed metrics to measure the SOS bias. However, the existence of SOS bias might have an impact on the hate speech detection models in ways that we have not explored or understood yet, especially against the marginalized groups.
From the **Fairness** perspective, the findings of §6 show that the inspected types of bias (representation, selection, and overamplification) have an impact on the fairness of the models on the task of hate speech detection, especially the downstream sources of bias, which are selection and overamplification bias. This means that the bias in the current hate speech datasets and the bias in the most commonly used language models have a negative impact on the fairness of hate speech detection models. Hence, researchers should pay attention to these biases and aim to mitigate them before implementing hate speech detection models.
These findings assert the notion that bias in NLP models negatively impacts hate speech detection models and that, as a community, we need to mitigate those biases so that we can ensure the reliability of hate speech detection models. However, in §3, I discuss the limitations and criticisms of the currently used methods to measure and mitigate bias in NLP models that fail to incorporate findings from the social sciences.
As a short-term solution to improve the fairness of hate speech detection and text classification tasks, I provide a list of guidelines in Elsafoury et al. (2023). These guidelines can be summarized as follows:
1. Measure the bias in the downstream task.
2. Remove overamplification bias.
3. To reliably measure fairness, use a balanced fairness dataset and counterfactual fairness metrics.
4. Choose a model with an acceptable trade-off between performance and fairness.
For a long-term solution and to overcome the current limitations of studying bias and fairness in NLP models, I provide a detailed actionable plan in Elsafoury and Abercrombie (2023) and I summarize the main items in this plan here:
1. Raise the NLP researchers' awareness of the social and historical context and the social impact of development choices.
2. Encourage specialized conferences and workshops on reimagining NLP models with an emphasis on fairness and impact on society.
3. Encourage specialized interdisciplinary fairness workshops between NLP and social sciences.
4. Encourage diversity in NLP research teams.
5. Incorporating more diversity workshops in NLP conferences.
6. Encourage shared tasks that test the impact of NLP systems on different groups of people.
## 8 Future work
In this section, I discuss important future research directions to mitigate the limitations of this work and the literature on NLP.
One of the main limitations of the work presented in my thesis and most of the work on bias and fairness in NLP models is that it focuses on the English language and on bias from a Western perspective. A critical direction for future work is to create bias datasets in different languages to investigate social bias in models that are pre-trained on data in different languages. It is also important to investigate bias in multilingual NLP models and bias against marginalized groups in societies other than Western societies.
As argued in Elsafoury and Abercrombie (2023), the sources of bias in the NLP pipeline originate in social sources. I also argue that the methods proposed in the NLP literature to measure and mitigate social bias in NLP models are ineffective, as argued in Blodgett et al. (2021); Gonen and Goldberg (2019), as a result of failing to incorporate social science literature and methods. One of the main limitations of this work is the lack of studies that empirically support this argument. This research direction is an important step towards understanding the bias and fairness in NLP and machine learning models in general.
## 9 Conclusion
In this paper, I provide a summary of my PhD thesis. I describe the work done to reach my research findings and contributions. I also discuss the limitations of my work and how they can be mitigated in future research. Moreover, I discuss the main lessons learned from my research as well as recommendations that can benefit the NLP research community, especially for studying and mitigating bias in NLP models and improving the fairness of text classification tasks.
|
2309.16660 | Folding QQ-relations and transfer matrix eigenvalues: towards a unified
approach to Bethe ansatz for super spin chains | Extending the method proposed in [arXiv:1109.5524], we derive QQ-relations
(functional relations among Baxter Q-functions) and T-functions (eigenvalues of
transfer matrices) for fusion vertex models associated with the twisted quantum
affine superalgebras $U_{q}(gl(2r+1|2s)^{(2)})$, $U_{q}(gl(2r|2s+1)^{(2)})$,
$U_{q}(gl(2r|2s)^{(2)})$, $U_{q}(osp(2r|2s)^{(2)})$ and the untwisted quantum
affine orthosymplectic superalgebras $U_{q}(osp(2r+1|2s)^{(1)})$ and
$U_{q}(osp(2r|2s)^{(1)})$ (and their Yangian counterparts, $Y(osp(2r+1|2s))$
and $Y(osp(2r|2s))$) as reductions (a kind of folding) of those associated with
$U_{q}(gl(M|N)^{(1)})$. In particular, we reproduce previously proposed
generating functions (difference operators) of the T-functions for the
symmetric or anti-symmetric representations, and tableau sum expressions for
more general representations for orthosymplectic superalgebras
[arXiv:0911.5393,arXiv:0911.5390], and obtain Wronskian-type expressions
(analogues of Weyl-type character formulas) for them. T-functions for spinorial
representations are related to reductions of those for asymptotic limits of
typical representations of $U_{q}(gl(M|N)^{(1)})$. | Zengo Tsuboi | 2023-09-28T17:58:02Z | http://arxiv.org/abs/2309.16660v4 | Folding QQ-relations and transfer matrix eigenvalues: towards a unified approach to Bethe ansatz for super spin chains
###### Abstract
Extending the method proposed in [1], we derive QQ-relations (functional relations among Baxter Q-functions) and T-functions (eigenvalues of transfer matrices) for fusion vertex models associated with the twisted quantum affine superalgebras \(U_{q}(gl(2r+1|2s)^{(2)})\), \(U_{q}(gl(2r|2s+1)^{(2)})\), \(U_{q}(gl(2r|2s)^{(2)})\), \(U_{q}(osp(2r|2s)^{(2)})\) and the non-twisted quantum affine orthosymplectic superalgebras \(U_{q}(osp(2r+1|2s)^{(1)})\) and \(U_{q}(osp(2r|2s)^{(1)})\) (and their Yangian counterparts, \(Y(osp(2r+1|2s))\) and \(Y(osp(2r|2s))\)) as reductions (a kind of folding) of those associated with \(U_{q}(gl(M|N)^{(1)})\). In particular, we reproduce previously proposed generating functions (difference operators) of the T-functions for the symmetric or anti-symmetric representations, and tableau sum expressions for more general representations for orthosymplectic superalgebras [2, 3], and obtain Wronskian-type expressions (analogues of Weyl-type character formulas) for them. T-functions for spinorial representations are related to reductions of those for asymptotic limits of typical representations of \(U_{q}(gl(M|N)^{(1)})\).
Keywords: Baxter Q-function, QQ-relation, Bethe ansatz, orthosymplectic superalgebras, Wronskian formula
###### Contents
* 1 Introduction
* 2 Preliminaries
* 2.1 Notation
* 2.2 Lie superalgebras
* 2.2.1 Simple root systems and highest weights
* 2.2.2 Weyl group
* 2.3 Quantum affine superalgebras
* 3 T- and Q-functions for \(U_{q}(gl(M|N)^{(1)})\)
* 3.1 Tableaux sum and CBR-type expressions of T-functions
* 3.2 Wronskian-type expressions of T-functions
* 4 Reductions of T- and Q-functions
* 4.1 Reductions of Q-functions by automorphisms
* 4.2 Symmetric nesting path
* 4.3 Regular reductions
* 4.3.1 \(U_{q}(osp(2r+1|2s)^{(1)})\) case
* 4.3.2 \(U_{q}(gl(2r|2s+1)^{(2)})\) case
* 4.3.3 \(U_{q}(gl(2r+1|2s)^{(2)})\) case
* 4.3.4 \(U_{q}(gl(2r|2s)^{(2)})\) case
* 4.4 Singular reductions
* 4.4.1 \(U_{q}(sp(2r)^{(1)})\) case
* 4.4.2 \(U_{q}(osp(2r|2s)^{(1)})\) case
* 4.5 Bethe ansatz equations associated with root systems
* 4.5.1 QQ-relations
* 4.5.2 Bethe ansatz equations
* 4.5.3 Extended Weyl group symmetry
* 4.6 Bethe strap
* 4.7 T-functions for spinorial representations: \(U_{q}(osp(2r+1|2s)^{(1)})\) case
* 5 Concluding remarks
* A
## 1 Introduction
This paper, together with our recent paper [4], is an expansion of section 3.7 in our previous paper [1]. Namely, we will explain details of it and generalize it further.
Quantum integrable systems have a commuting family of transfer matrices (T-operators). Finding eigenvalues of transfer matrices (T-functions) is an important problem in the study of quantum integrable systems. For this, the Bethe ansatz is often used. The T-functions are expressed in terms of Baxter Q-functions (for short, Q-functions). The Q-functions are eigenvalues of Baxter Q-operators. The zeros of the Q-functions give the roots of Bethe ansatz equations. In general, the Q-functions are not functionally independent and satisfy functional relations, called QQ-relations. There are two kinds of QQ-relations for quantum integrable systems associated with \(U_{q}(gl(M|N)^{(1)})\) (or \(U_{q}(sl(M|N)^{(1)})\)). The bosonic QQ-relations are generalizations of the quantum Wronskian condition (cf. [5, 6]). The fermionic QQ-relations came from particle-hole transformations in statistical physics [7], and are related [8] to odd Weyl reflections [9, 10] of the superalgebra \(gl(M|N)\).
T-functions are generalizations of characters of representations of underlying quantum algebras, with a spectral parameter. Corresponding to the fact that there are several different expressions of characters, there are several different expressions of T-functions: the Cherednik-Bazhanov-Reshetikhin (CBR) determinant formula (an analogue of the Jacobi-Trudi formula) [12, 11], Wronskian-like determinant (Casoratian) formulas (analogues of the Weyl character formula) [13, 5], and tableau sum expressions, etc. (see [14] for a review). Among them, Wronskian expressions have the merit that the action of the Weyl group on them is manifest. Of particular interest are quantum integrable systems associated with superalgebras since representations of underlying superalgebras are less well understood and more diverse than those of ordinary (non-super) algebras. In view of this, we derived tableau sum and CBR-determinant expressions of T-functions by analytic Bethe ansatz for fusion vertex models associated with \(U_{q}(gl(M|N)^{(1)})\) (or \(U_{q}(sl(M|N)^{(1)})\)) [15, 8, 16], \(Y(osp(r|2s))\) for \(r\geq 3\), \(s\geq 1\)[2], and \(U_{q}(osp(2|2s)^{(1)})\) for \(s\geq 1\)[3]. Establishing Wronskian expressions of T-functions for these is a longstanding problem, and in [17] (together with [18]), we proposed Wronskian expressions of T-functions for the case \(U_{q}(gl(M|N)^{(1)})\) (or \(U_{q}(sl(M|N)^{(1)})\)). In this paper we will explain our trial toward the rest, namely the \(U_{q}(osp(r|2s)^{(1)})\) case, and also the related twisted quantum affine superalgebra cases.
It is known [19] that there is a correspondence between representations of superalgebras and ordinary (non-graded) algebras. Thus there should be a correspondence between different quantum integrable models in accordance with the correspondence between representations of different underlying algebras. A relatively well-known example of this would be the Izergin-Korepin model [20] associated with the twisted quantum affine algebra \(U_{q}(sl(3)^{(2)})\) and a vertex model associated with the quantum affine superalgebra \(U_{q}(osp(1|2)^{(1)})\)[21, 22]. In the context of the thermodynamic Bethe ansatz, the coincidence of the Q-system (a system of functional relations among characters of Kirillov-Reshetikhin modules) for \(U_{q}(sl(2r+1)^{(2)})\) and \(U_{q}(osp(1|2r)^{(1)})\) was pointed out in [23]. Having in mind a correspondence between the twisted quantum affine superalgebra \(U_{q}(gl(2r|1)^{(2)})\) and the quantum affine algebra \(U_{q}(so(2r+1)^{(1)})\), we proposed [1] a Wronskian solution of the T-system for \(U_{q}(so(2r+1)^{(1)})\) as a reduction (some kind of folding) of the Wronskian solution for \(U_{q}(gl(2r|1)^{(1)})\)[17]. In our recent paper [4], we not only explained details of [section 3.7, [1]], but also gave the QQ-relations for \(U_{q}(so(2r+1)^{(1)})\) as a reduction of the QQ-relations for \(U_{q}(gl(2r|1)^{(1)})\). In this paper, we extend our discussion to a wider class of algebras, in particular, the twisted quantum affine superalgebras \(U_{q}(gl(2r+1|2s)^{(2)})\), \(U_{q}(gl(2r|2s+1)^{(2)})\), \(U_{q}(gl(2r|2s)^{(2)})\) (or \(U_{q}(sl(2r+1|2s)^{(2)})\), \(U_{q}(sl(2r|2s+1)^{(2)})\), \(U_{q}(sl(2r|2s)^{(2)})\)) and the quantum affine orthosymplectic superalgebras \(U_{q}(osp(2r+1|2s)^{(1)})\) and \(U_{q}(osp(2r|2s)^{(1)})\) (and their Yangian counterparts, \(Y(osp(2r+1|2s))\) and \(Y(osp(2r|2s))\)). We will derive T-functions, QQ-relations, and Bethe equations for these algebras as reductions of those for \(U_{q}(gl(M|N)^{(1)})\). We have reproduced some of our previous results by analytic Bethe ansatz [2, 3], in particular generating functions of T-functions of fusion vertex models for the symmetric or anti-symmetric representations in the auxiliary space.
The basic idea on the reduction procedure proposed in [1] is as follows. As remarked in [17], there are \(2^{M+N}\) kinds of Q-functions \(\mathbb{Q}_{I}(u)\) labeled by \(I\subset\{1,2,\ldots,M+N\}\) with the spectral parameter \(u\in\mathbb{C}\) for quantum integrable systems associated with \(U_{q}(gl(M|N)^{(1)})\). First we consider a map \(\sigma\) which keeps the form of the QQ-relations invariant. Then we apply this to the Q-functions \(\mathbb{Q}_{I}(u)\), \(I\subset\{1,2,\ldots,M+N\}\) and boundary twist parameters
\(\{z_{a}\}_{a=1}^{M+N}\) and identify the image of them with the original ones: \(\sigma(\mathbb{Q}_{I}(u))=\mathbb{Q}_{I}(u)\), \(\sigma(z_{a})=z_{a}\). In case we consider reductions to twisted quantum affine superalgebras, we have to make a shift of the spectral parameter: \(\sigma(\mathbb{Q}_{I}(u))=\mathbb{Q}_{I}(u+\eta)\), where \(\eta\) is half of the period of the Q-functions. The reduction procedure for (non-super) twisted quantum affine algebras for a special class of the index set \(I\) was proposed in [24] (see also [69]). The reduction procedure can also be applied to the T-functions since the T-functions are expressed in terms of Q-functions. In case we consider a reduction to \(U_{q}(osp(2r|2s)^{(1)})\), we have to modify the relation \(\sigma(z_{a})=z_{a}\) in part and impose additional conditions on Q-functions. This situation is similar to the one in [25], where a reduction of q-characters for \(U_{q}(sl(2r+2)^{(1)})\) to q-characters for \(U_{q}(sp(2r)^{(1)})\) was discussed.
In general, representations of finite Lie algebras other than type A 1 can not be lifted to representations of Yangians or quantum affine algebras. In contrast, evaluation representations based on representations of \(U_{q}(gl(M|N))\) are available for the \(U_{q}(gl(M|N)^{(1)})\) case. This is a merit to work on the problems as reductions of the \(U_{q}(gl(M|N)^{(1)})\) case. On the level of supercharacters, one can use various expressions of supercharacter formulas of \(gl(M|N)\) (see for example, [26]), and consider reductions of them, to get supercharacters of twisted quantum affine superalgebras and quantum affine orthosymplectic superalgebras (or their Yangian counterparts).
Footnote 1: \(gl(M|N)\), \(sl(M|N)\) or \(A(M-1|N-1)\)
It should be remarked that the Bethe ansatz equations of the Gaudin models associated with \(osp(2r+1|2s)\) and \(osp(2r|2s)\) were studied in [27] in connection to those associated with \(gl(r|s)\) (see also the recent paper [28]). Although it is not a topic of this paper, our results on XXZ-type models will have some connection to theirs in the Gaudin limit.
The outline of this paper is as follows. In section 2, we fix notation and summarize preliminaries on Lie superalgebras in our convention. In section 3, we summarize necessary formulas on T- and Q-functions for \(U_{q}(gl(M|N)^{(1)})\), which are taken mainly from [17, 1, 15, 8]. In section 4.1, we explain the general procedure of reductions of the formulas introduced in section 3. In section 4.2, we restrict our consideration to reductions along symmetric nesting paths, which correspond to folding with respect to symmetric Dynkin diagrams of \(gl(M|N)\). In subsections 4.3 and 4.4, the results of the reductions are presented for each value of \((M,N)\): QQ-relations, generating functions of the T-functions for the symmetric or anti-symmetric representations, Wronskian-type expressions of T-functions, and Bethe ansatz equations. In subsection 4.5, the QQ-relations and the Bethe ansatz equations derived in subsections 4.3 and 4.4 are compactly expressed in terms of simple root systems of underlying algebras. We remark that QQ-relations are expressed in terms of root systems of underlying (non-super) Lie algebras in connection with discrete Miura opers [29] and the ODE/IM correspondence [30, 31]. Our formulation is different from theirs in that we use a simple root system of the Lie superalgebra \(gl(2r|1)\) for the non-super algebra \(U_{q}(so(2r+1)^{(1)})\) case. In subsection 4.6, we derive various T-functions by Bethe strap procedures in the analytic Bethe ansatz. One will find similar objects in the context of q-characters in representation theory [32, 33]. We remark that the notion of the Bethe strap appeared [34, 35] before the q-characters were introduced. In subsection 4.7, T-functions for spinorial representations are presented. We remark that T-functions for spinorial representations are obtained as reductions of T-functions of
asymptotic typical representations of \(U_{q}(gl(M|N)^{(1)})\), as already demonstrated in [1, 4] for the \(U_{q}(so(2r+1)^{(1)})\) case (a reduction of the \((M,N)=(2r,1)\) case). Section 5 is devoted to concluding remarks. We can consider further reductions of the T-functions obtained by the reductions. In Appendix A, we consider reductions of the \(U_{q}(osp(2r|2s)^{(1)})\) case to the \(U_{q}(osp(2r|2s)^{(2)})\) case. In Appendix B, we consider the (super)character limit of T-functions, and their decomposition with respect to (super)characters of finite Lie superalgebras. We show that special cases of them coincide with the characters of the Kirillov-Reshetikhin modules of quantum affine algebras (or their Yangian counterparts).
## 2 Preliminaries
### Notation
For \(M,N\in\mathbb{Z}_{\geq 0}\), we define sets
\[\mathfrak{B}=\{1,2,\ldots,M\},\] \[\mathfrak{F}=\{M+1,M+2,\ldots,M+N\}, \tag{2.1}\] \[\mathfrak{I}=\mathfrak{B}\sqcup\mathfrak{F},\]
and an operation
\[b^{*}=M+1-b\quad\text{for}\quad b\in\mathfrak{B},\qquad f^{*}=2M+ N+1-f\quad\text{for}\quad f\in\mathfrak{F}, \tag{2.2}\] \[I^{*}=\{a^{*}|a\in I\}\quad\text{for}\quad I\subset\mathfrak{I}.\]
We define the set of acceptable sets 2
Footnote 2: This type of sets appeared in the context of Q-operators associated with \(Y(so(2r))\)[75].
\[\mathfrak{A}=\{I\subset\mathfrak{I}|a^{*}\notin I\text{ for any }a\in I\}. \tag{2.3}\]
By definition, \(|I|\leq|\mathfrak{I}|/2=(M+N)/2\) if \(I\in\mathfrak{A}\). We will use a grading parameter defined on the set \(\mathfrak{B}\sqcup\mathfrak{F}\):
\[p_{a}=1\quad\text{for}\quad a\in\mathfrak{B},\qquad p_{a}=-1\quad\text{for} \quad a\in\mathfrak{F}. \tag{2.4}\]
We remark that \(p_{a}=p_{a^{*}}\) holds for any \(a\in\mathfrak{I}\). We use the following notation for a matrix:
\[(a_{ij})_{{i\in B}\atop{j\in F}}=\begin{pmatrix}a_{b_{1},f_{1}}&a_{b_{1},f_{2 }}&\cdots&a_{b_{1},f_{n}}\\ a_{b_{2},f_{1}}&a_{b_{2},f_{2}}&\cdots&a_{b_{2},f_{n}}\\ \cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\\ a_{b_{m},f_{1}}&a_{b_{m},f_{2}}&\cdots&a_{b_{m},f_{n}}\end{pmatrix}, \tag{2.5}\]
where \(B=(b_{1},\ldots,b_{m})\), \(F=(f_{1},\ldots,f_{n})\). In case the tuples \(B\) and \(F\) are regarded as sets \(B=\{b_{1},\ldots,b_{m}\}\), \(F=\{f_{1},\ldots,f_{n}\}\), we assume \(b_{1}<\cdots<b_{m}\), \(f_{1}<\cdots<f_{n}\).
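The combinatorics of (2.1)-(2.4) are elementary; as an illustration (a minimal sketch, not taken from the paper, with the sets hard-coded for \((M,N)=(2,2)\)), one may build the index sets, the involution \(a\mapsto a^{*}\), and the acceptable sets \(\mathfrak{A}\) as follows.

```python
from itertools import combinations

M, N = 2, 2
B = list(range(1, M + 1))                       # bosonic indices (2.1)
F = list(range(M + 1, M + N + 1))               # fermionic indices
I_full = B + F

def star(a):                                    # the involution (2.2)
    return M + 1 - a if a <= M else 2*M + N + 1 - a

def p(a):                                       # grading parameter (2.4)
    return 1 if a <= M else -1

assert all(star(star(a)) == a for a in I_full)
assert all(p(a) == p(star(a)) for a in I_full)  # remark after (2.4)

# acceptable sets (2.3): no element occurs together with its image under *
A = [set(I) for r in range(len(I_full) + 1)
     for I in combinations(I_full, r)
     if all(star(a) not in I for a in I)]
assert max(len(I) for I in A) <= (M + N) // 2   # bound stated below (2.3)
print(len(A), "acceptable sets, e.g.", sorted(A, key=len)[-1])
```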
Consider an arbitrary function \(f(u)\) of \(u\in\mathbb{C}\) (the spectral parameter). In this paper we use the following notation for a shift of the spectral parameter: \(f^{[a]}=f(u+a\hbar)\) for
an additive shift, and \(f^{[a]}=f(uq^{a\hbar})\) for a multiplicative shift (\(q\)-difference), where \(a\in\mathbb{C}\). Here the unit of the shift \(\hbar\) is a non-zero fixed complex number. If there is no shift (\(a=0\)), \([0]\) is often omitted: \(f^{[0]}=f=f(u)\). In the following, we mainly use an additive shift with \(\hbar=1\). Throughout the paper we assume that the deformation parameter \(q\) of the quantum affine superalgebras is not a root of unity.
We denote by \(S_{r}\) the symmetric group of order \(r\), and by \(S(I)\) the symmetric group over the elements of a set (or tuple) \(I\). Let \(\tau_{ab}\in S(I)\) be the transposition such that \(\tau_{ab}(a)=b,\tau_{ab}(b)=a\) and \(\tau_{ab}(c)=c\) for \(c\neq a\), \(c\neq b\) (\(a,b,c\in I\)).
A partition is a non-increasing sequence of non-negative integers \(\mu=(\mu_{1},\mu_{2},\dots)\): \(\mu_{1}\geq\mu_{2}\geq\dots\geq 0\). We often write this in the form \(\mu=(l^{m_{l}},(l-1)^{m_{l-1}},\dots,2^{m_{2}},1^{m_{1}})\), where \(l=\mu_{1}\), and \(m_{k}=\mathrm{Card}\{j|\mu_{j}=k\}\). We use the same symbol \(\mu\) for the Young diagram corresponding to a partition \(\mu\). The conjugate (transposition) of \(\mu\) is defined by \(\mu^{\prime}=(\mu^{\prime}_{1},\mu^{\prime}_{2},\dots)\), where \(\mu^{\prime}_{j}=\mathrm{Card}\{k|\mu_{k}\geq j\}\).
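For concreteness, the conjugate partition and the multiplicity notation can be computed as in the following small sketch (illustrative only; the example partition is arbitrary).

```python
def conjugate(mu):
    # mu'_j = Card{ k | mu_k >= j }
    if not mu:
        return []
    return [sum(1 for m in mu if m >= j) for j in range(1, mu[0] + 1)]

def multiplicities(mu):
    # mu = (l^{m_l}, ..., 2^{m_2}, 1^{m_1}) with m_k = Card{ j | mu_j = k }
    l = mu[0] if mu else 0
    return {k: mu.count(k) for k in range(l, 0, -1)}

mu = [4, 2, 1]
print(conjugate(mu))                       # -> [3, 2, 1, 1]
print(conjugate(conjugate(mu)) == mu)      # conjugation is an involution -> True
print(multiplicities(mu))                  # -> {4: 1, 3: 0, 2: 1, 1: 1}
```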
### Lie superalgebras
Although the underlying algebras of the quantum integrable systems in question are quantum affine superalgebras (or superYangians), we need simple roots and highest weights of (finite) Lie superalgebras for labeling of quantities which we discuss in what follows. For details of Lie superalgebras, see for example, [36, 37, 38, 39].
Each simple root \(\alpha\) of basic Lie superalgebras carries the grading (parity) \(p_{\alpha}\in\{1,-1\}\). The root \(\alpha\) is called an _even root_ (bosonic root) if \(p_{\alpha}\)=1, and an _odd root_ (fermionic root) if \(p_{\alpha}=-1\). Lie superalgebras have several inequivalent simple root systems. The simplest one is the _distinguished simple root system_, which contains only one odd root. Let \(\{\epsilon_{i}\}_{i=1}^{M+N}\) be a basis of the dual space of a Cartan subalgebra of \(gl(M|N)\) with the bilinear form such that \((\epsilon_{i}|\epsilon_{j})=(\epsilon_{j}|\epsilon_{i})=p_{i}\delta_{ij}\) and \(p_{\epsilon_{i}}=p_{i}\). It is convenient to set \(\varepsilon_{i}=\epsilon_{i}\) for \(1\leq i\leq M\), \(\delta_{i}=\epsilon_{i+M}\) for \(1\leq i\leq N\), thus \((\varepsilon_{i}|\varepsilon_{j})=\delta_{ij}\), \((\varepsilon_{i}|\delta_{j})=(\delta_{j}|\varepsilon_{i})=0\), \((\delta_{i}|\delta_{j})=-\delta_{ij}\). We will describe simple root systems of type B, C and D Lie superalgebras (orthosymplectic Lie superalgebras) in terms of subsets 3 of the bases \(\{\epsilon_{i}\}_{i=1}^{M+N}\).
Footnote 3: We abuse notation and use the same symbol for different objects.
One can draw a Dynkin diagram for any simple root system. To each simple root \(\alpha\), one assigns one of the following three types of dots:
white dot \(\bigcirc\) if \((\alpha|\alpha)\neq 0\) and \(p_{\alpha}=1\),
black dot \(\bullet\) if \((\alpha|\alpha)\neq 0\) and \(p_{\alpha}=-1\), and
gray dot \(\bigotimes\) if \((\alpha|\alpha)=0\).
We also use a symbol \(\begin{array}{c}\bullet\\ \alpha\end{array}\) to denote one of the above three dots for the root \(\alpha\).
The black dot appears in Dynkin diagrams of \(osp(2r+1|2s)\). In [27], the Dynkin diagrams for \(osp(2r+1|2s)\) and \(osp(2r|2s)\) are defined by attaching one more node to the Dynkin diagram for \(gl(r|s)\) associated with a "parity sequence". The parity sequence in [27] corresponds to a subset \((p_{i_{1}},p_{i_{2}},\dots,p_{i_{r+s}})\)4 of the grading parameters of \(gl(M|N)\) in
this paper. Note however that we will mainly use tuples (made from \(\mathfrak{I}\)), rather than the parity sequences, to describe the simple root systems of \(osp(2r+1|2s)\) and \(osp(2r|2s)\).
#### 2.2.1 Simple root systems and highest weights
**Type A.** Let \(I_{M+N}=(i_{1},i_{2},\ldots,i_{M+N})\) be any one of the permutations of the tuple \((1,2,\ldots,M+N)\). The simple root system of \(gl(M|N)\) associated with the tuple \(I_{M+N}\) is defined by
\[\alpha_{a}=\epsilon_{i_{M+N+1-a}}-\epsilon_{i_{M+N-a}}. \tag{2.6}\]
The corresponding Dynkin diagram is given by
(type A Dynkin diagram: a chain of nodes \(\alpha_{1},\alpha_{2},\ldots,\alpha_{M+N-2},\alpha_{M+N-1}\))
In particular for the case \(I_{M+N}=(M+N,\ldots,2,1)\), (2.6) reduces to the distinguished simple root system:
\[\alpha_{i} =\varepsilon_{i}-\varepsilon_{i+1}\quad\text{for}\quad i\in\{1,2,\ldots,M-1\}, \tag{2.7}\] \[\alpha_{M} =\varepsilon_{M}-\delta_{1},\] \[\alpha_{i+M} =\delta_{i}-\delta_{i+1}\quad\text{for}\quad i\in\{1,2,\ldots,N-1\},\]
and the corresponding Dynkin diagram is given by
(Dynkin diagram of the distinguished simple root system: a chain of nodes \(\alpha_{1},\alpha_{2},\ldots,\alpha_{M+N-1}\) with the gray node at \(\alpha_{M}\))
Let \(V(\Lambda)\) be the irreducible representation of \(gl(M|N)\) with the highest weight
\[\Lambda=\sum_{j=1}^{M}\Lambda_{j}\varepsilon_{j}+\sum_{j=1}^{N}\Lambda_{M+j} \delta_{j}, \tag{2.8}\]
where \(\Lambda_{j}\in\mathbb{C}\). The Kac-Dynkin labels \([b_{1},b_{2},\ldots,b_{M+N-1}]\) of \(V(\Lambda)\) are defined by \(b_{j}=2(\Lambda|\alpha_{j})/(\alpha_{j}|\alpha_{j})\) if \((\alpha_{j}|\alpha_{j})\neq 0\), \(b_{j}=(\Lambda|\alpha_{j})/(\alpha_{j}|\alpha_{j^{\prime}})\) for some \(j^{\prime}\) such that \((\alpha_{j}|\alpha_{j^{\prime}})\neq 0\) if \((\alpha_{j}|\alpha_{j})=0\). For the distinguished simple roots (2.7), we have 5
Footnote 5: Here we set \(M^{\prime}=M+1\).
\[b_{j}=\Lambda_{j}-\Lambda_{j+1}\quad\text{for}\quad j\neq M,\qquad b_{M}= \Lambda_{M}+\Lambda_{M+1}. \tag{2.9}\]
\(V(\Lambda)\) is finite dimensional if \(b_{j}\in\mathbb{Z}_{\geq 0}\) for \(j\neq M\). In case \(\Lambda_{j}\in\mathbb{Z}_{\geq 0}\), these parameters are related to an \([M,N]\)-hook partition \(\mu=(\mu_{1},\mu_{2},\ldots)\), \(\mu_{1}\geq\mu_{2}\geq\cdots\geq 0\), \(\mu_{M+1}\leq N\):
\[\Lambda_{j}=\mu_{j}\quad\text{for}\quad j\in\{1,2,\ldots,M\},\quad\Lambda_{M+j }=\max\{\mu^{\prime}_{j}-M,0\}\quad\text{for}\quad j\in\{1,2,\ldots,N\}, \tag{2.10}\]
where \(\mu^{\prime}_{k}=\text{Card}\{j|\mu_{j}\geq k\}\). The \([M,N]\)-hook partition describes a Young diagram in the \([M,N]\)-hook (see Figure 1).
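As a sanity check of (2.9)-(2.10), the following sketch (illustrative only, not part of the paper; the example diagram is arbitrary) computes \(\Lambda\) and the distinguished Kac-Dynkin labels from an \([M,N]\)-hook partition.

```python
M, N = 2, 1

def conj(mu, j):                            # mu'_j = Card{ k | mu_k >= j }
    return sum(1 for m in mu if m >= j)

def hook_weights(mu):
    # (2.10): Lambda_j = mu_j for j <= M, Lambda_{M+j} = max(mu'_j - M, 0)
    assert (mu + [0]*(M + 1))[M] <= N       # [M,N]-hook condition mu_{M+1} <= N
    mu_pad = mu + [0]*(M + N)
    return [mu_pad[j] for j in range(M)] + \
           [max(conj(mu, j + 1) - M, 0) for j in range(N)]

def kac_dynkin(Lam):
    # (2.9): b_j = Lambda_j - Lambda_{j+1} for j != M, b_M = Lambda_M + Lambda_{M+1}
    return [Lam[j - 1] + Lam[j] if j == M else Lam[j - 1] - Lam[j]
            for j in range(1, M + N)]

mu = [3, 2, 1]                              # fits the [2,1]-hook: mu_3 = 1 <= N
Lam = hook_weights(mu)
print(Lam, kac_dynkin(Lam))                 # -> [3, 2, 1] [1, 3]
```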
From now on, we consider the case that the elements of the tuple \(I_{M+N}\) satisfy \(i^{*}_{k}=i_{M+N+1-k}\) for any \(k\in\mathfrak{I}\) (for type B, C, D superalgebras). We formally set \(\epsilon_{k^{*}}=-\epsilon_{k}\).
**Type B**
Set \((M,N)=(2r,2s+1)\), where \(r,s\in\mathbb{Z}_{\geq 0}\), \(r+s\geq 1\). In this case the tuple has the form \(I_{2r+2s+1}=(i_{1},i_{2},\ldots,i_{r+s},2r+s+1,i_{r+s}^{*},\ldots,i_{2}^{*},i_{ 1}^{*})\). Note that any elements of this tuple are mutually distinct. The simple root system of \(B(r|s)=osp(2r+1|2s)\) associated with the tuple \(I_{2r+2s+1}\) is defined by
\[\beta_{a}=\epsilon_{i_{a}^{*}}-\epsilon_{i_{a+1}^{*}}\quad\text{for}\quad a\in \{1,2,\ldots,r+s-1\},\quad\beta_{r+s}=\epsilon_{i_{r+s}^{*}}, \tag{2.11}\]
and the corresponding Dynkin diagrams are given by 6
Footnote 6: We use the tuple \(I_{2r+2s+1}\) to emphasize the connection with \(gl(2r|2s+1)\), and the last \(r+s\) elements for labeling of the simple roots. It is possible to use the first \(r+s\) elements instead (this looks more standard). In this case, it would be better to reverse the order of the labeling of (2.6) (as \(\alpha_{a}=\epsilon_{i_{a}}-\epsilon_{i_{a+1}}\)). Similar remarks can be applied to the type C and D superalgebras.
(type B Dynkin diagrams: a chain of nodes \(\beta_{1},\beta_{2},\ldots,\beta_{r+s-2},\beta_{r+s-1}\) terminated by the node \(\beta_{r+s}\))
**The case \(r>0\):** We have
\[\begin{split}\beta_{i}&=\delta_{i}-\delta_{i+1}\quad \text{for}\quad i\in\{1,2,\dots,s-1\},\\ \beta_{s}&=\delta_{s}-\varepsilon_{1},\\ \beta_{i+s}&=\varepsilon_{i}-\varepsilon_{i+1}\quad \text{for}\quad i\in\{1,2,\dots,r-1\},\\ \beta_{r+s}&=\varepsilon_{r},\end{split} \tag{2.12}\]
and the corresponding Dynkin diagram is given by
Let \(V(\Lambda)\) be the irreducible representation of \(osp(2r+1|2s)\) with the highest weight
\[\Lambda=\sum_{j=1}^{s}\Lambda_{j}\delta_{j}+\sum_{j=1}^{r}\Lambda_{s+j} \varepsilon_{j}, \tag{2.13}\]
where \(\Lambda_{j}\in\mathbb{C}\). The Kac-Dynkin labels \([b_{1},b_{2},\dots,b_{r+s}]\) of \(V(\Lambda)\) are defined by \(b_{j}=2(\Lambda|\beta_{j})/(\beta_{j}|\beta_{j})\) if \((\beta_{j}|\beta_{j})\neq 0\), \(b_{j}=(\Lambda|\beta_{j})/(\beta_{j}|\beta_{j^{\prime}})\) for some \(j^{\prime}\) such that \((\beta_{j}|\beta_{j^{\prime}})\neq 0\) if \((\beta_{j}|\beta_{j})=0\). For the distinguished simple roots (2.12), we have 7
Footnote 7: Here we set \(s^{\prime}=s+1\).
\[b_{j}=\Lambda_{j}-\Lambda_{j+1}\quad\text{for}\quad j\neq s,r+s,\qquad b_{s}= \Lambda_{s}+\Lambda_{s+1},\qquad b_{r+s}=2\Lambda_{r+s}. \tag{2.14}\]
\(V(\Lambda)\) is finite dimensional if \(b_{j}\in\mathbb{Z}_{\geq 0}\) for \(j\neq s\), \(c=b_{s}-b_{s+1}-b_{s+2}-\dots-b_{r+s-1}-b_{r+s}/2\in\mathbb{Z}_{\geq 0}\), and \(b_{s+c+1}=b_{s+c+2}=\dots=b_{r+s}=0\) if \(c<r\). In case \(\Lambda_{j}\in\mathbb{Z}_{\geq 0}\), these parameters are related to an \([r,s]\)-hook partition \(\mu=(\mu_{1},\mu_{2},\dots)\), \(\mu_{1}\geq\mu_{2}\geq\dots\geq 0\), \(\mu_{r+1}\leq s\):
\[\Lambda_{j}=\mu_{j}^{\prime}\quad\text{for}\quad j\in\{1,2,\dots,s\},\quad \Lambda_{s+j}=\max\{\mu_{j}-s,0\}\quad\text{for}\quad j\in\{1,2,\dots,r\}, \tag{2.15}\]
where \(\mu_{k}^{\prime}=\text{Card}\{j|\mu_{j}\geq k\}\). The \([r,s]\)-hook partition describes a Young diagram in the \([r,s]\)-hook. This is embedded into the \([2r,2s+1]\)-hook of \(gl(2r|2s+1)\) (see Figure 2).
**The case \(r=0\):** The distinguished simple root system of \(B(0|s)=osp(1|2s)\) is given by
\[\begin{split}\beta_{i}&=\delta_{i}-\delta_{i+1} \quad\text{for}\quad i\in\{1,2,\dots,s-1\},\\ \beta_{s}&=\delta_{s},\end{split} \tag{2.16}\]
and the corresponding Dynkin diagram has the form
Let \(V(\Lambda)\) be the irreducible representation of \(osp(1|2s)\) with the highest weight
\[\Lambda=\sum_{j=1}^{s}\Lambda_{j}\delta_{j}, \tag{2.17}\]
Figure 2: \([r,s]\)-hook in the \([2r,2s+1]\)-hook: the Young diagram \(\mu\) is related to the highest weight (2.13) by (2.15).
where \(\Lambda_{j}\in\mathbb{C}\). The Kac-Dynkin labels \([b_{1},b_{2},\dots,b_{s}]\) of \(V(\Lambda)\) are given by
\[b_{j}=\Lambda_{j}-\Lambda_{j+1}\quad\text{for}\quad j\neq s,\qquad b _{s}=2\Lambda_{s}. \tag{2.18}\]
\(V(\Lambda)\) is finite dimensional if \(b_{j}\in\mathbb{Z}_{\geq 0}\) for \(j\neq s\), \(c=b_{s}/2\in\mathbb{Z}_{\geq 0}\). In case \(\Lambda_{j}\in\mathbb{Z}_{\geq 0}\), these parameters are related to a \([0,s]\)-hook partition \(\mu=(\mu_{1},\mu_{2},\dots)\), \(\mu_{1}\geq\mu_{2}\geq\cdots\geq 0\), \(\mu_{1}\leq s\):
\[\Lambda_{j}=\mu^{\prime}_{j}\quad\text{for}\quad j\in\{1,2,\dots,s\}. \tag{2.19}\]
The \([0,s]\)-hook partition describes a Young diagram in the \([0,s]\)-hook. This is embedded into the \([0,2s+1]\)-hook of \(gl(0|2s+1)\) (see Figure 3).
**Type C and D.** Set \((M,N)=(2r,2s+2)\), where \(r,s\in\mathbb{Z}_{\geq 0}\), \(r+s\geq 1\). In this case, the tuple has the form \(I_{2r+2s+2}=(i_{1},i_{2},\dots,i_{r+s+1},i_{r+s+1}^{*},\dots,i_{2}^{*},i_{1}^{*})\). Here we assume \(i_{r+s+1}=2r+s+1\) or \(2r+s+2\). Note that any elements of this tuple are mutually distinct. For this tuple \(I_{2r+2s+2}\), we define
\[\Upsilon=(-1)^{\text{Card}\{i_{a}\in\{1,2,\dots,r\}|1\leq a\leq r+ s\}}=(-1)^{\text{Card}\{i_{a}^{*}\in\{r+1,r+2,\dots,2r\}|1\leq a\leq r+s\}}. \tag{2.20}\]
The simple root systems of \(osp(2r|2s)\) (\(D(r|s)=osp(2r|2s)\) if \(r\geq 2\), \(C(s+1)=osp(2|2s)\)) associated with the tuple \(I_{2r+2s+2}\) are defined as follows.
Figure 3: \([0,s]\)-hook in the \([0,2s+1]\)-hook: the Young diagram \(\mu\) is related to the highest weight (2.17) by (2.19).
**The case \(i_{r+s}\in\mathfrak{B}\), \(r\geq 1\), \(s\geq 0\), \(r+s\geq 2\) (type D):** We have
\[\begin{split}\beta_{a}&=\epsilon_{i_{a}^{*}}- \epsilon_{i_{a+1}^{*}}\quad\text{for}\quad a\in\{1,2,\ldots,r+s-2\},\\ \beta_{r+s-1}&=\epsilon_{i_{r+s-1}^{*}}-\epsilon_{i_ {r+s}^{*}},\\ \beta_{r+s}&=\epsilon_{i_{r+s-1}^{*}}+\epsilon_{i_ {r+s}^{*}},\end{split} \tag{2.21}\]
and the corresponding Dynkin diagrams are given by
(type D Dynkin diagrams: a chain of nodes \(\beta_{1},\beta_{2},\ldots,\beta_{r+s-2}\) with a fork at the end nodes \(\beta_{r+s-1}\) and \(\beta_{r+s}\))
simple root systems with \(\Upsilon=0\): one can start from the distinguished simple root system (2.22) and apply Weyl reflections and odd reflections to each simple root repeatedly.
Let \(V(\Lambda)\) be the irreducible representation of \(osp(2r|2s)\) with the highest weight
\[\Lambda=\sum_{j=1}^{s}\Lambda_{j}\delta_{j}+\sum_{j=1}^{r}\Lambda_{s+j}\varepsilon _{j}, \tag{2.23}\]
where \(\Lambda_{j}\in\mathbb{C}\). For the distinguished simple roots (2.22), the Kac-Dynkin labels \([b_{1},b_{2},\dots,b_{r+s}]\) of \(V(\Lambda)\) are given by 8
Footnote 8: Here we set \(s^{\prime}=s+1\).
\[b_{j}=\Lambda_{j}-\Lambda_{j+1}\quad\text{for}\quad j\neq s,r+s,\qquad b_{s}= \Lambda_{s}+\Lambda_{s+1},\quad b_{r+s}=\Lambda_{r+s-1}+\Lambda_{r+s}. \tag{2.24}\]
\(V(\Lambda)\) is finite dimensional if \(b_{j}\in\mathbb{Z}_{\geq 0}\) for \(j\neq s\), \(c=b_{s}-b_{s+1}-b_{s+2}-\dots-b_{r+s-2}-(b_{r+s-1}+b_{r+s})/2\in\mathbb{Z}_{\geq 0}\), \(b_{s+c+1}=b_{s+c+2}=\dots=b_{r+s}=0\) if \(c<r-1\), and \(b_{r+s-1}=b_{r+s}=0\) if \(c=r-1\). In case \(\Lambda_{j}\in\mathbb{Z}_{\geq 0}\), these parameters are related to an \([r,s]\)-hook partition \(\mu=(\mu_{1},\mu_{2},\dots)\), \(\mu_{1}\geq\mu_{2}\geq\dots\geq 0\), \(\mu_{r+1}\leq s\):
\[\Lambda_{j}=\mu_{j}^{\prime}\quad\text{for}\quad j\in\{1,2,\dots,s\},\quad \Lambda_{s+j}=\max\{\mu_{j}-s,0\}\quad\text{for}\quad j\in\{1,2,\dots,r\}, \tag{2.25}\]
or
\[\Lambda_{j}=\mu_{j}^{\prime}\quad\text{for}\quad j\in\{1,2,\dots, s\},\quad\Lambda_{s+j}=\max\{\mu_{j}-s,0\}\quad\text{for}\quad j\in\{1,2,\dots,r-1\},\] \[\Lambda_{r+s}=-\max\{\mu_{r}-s,0\}. \tag{2.26}\]
The \([r,s]\)-hook partition describes a Young diagram 9 in the \([r,s]\)-hook. This is embedded into the \([2r,2s+2]\)-hook of \(gl(2r|2s+2)\) (see Figure 4).
Footnote 9: One may consider a “branch cut” along the lines \(a=r-1\), \(m\geq s\) and \(r-1\leq a\leq r\), \(m=s\), to describe (2.25) and (2.26) at the same time. The main target domain of the T- and Q-systems for tensor representations of \(U_{q}(osp(2s|2r)^{(1)})\) would be such an extended \([r,s]\)-hook.
**The case \(i_{r+s}\in\mathfrak{F}\), \(r\geq 0\), \(s\geq 1\), \(r+s\geq 1\) (type C):** The simple root system is defined by
\[\begin{split}\beta_{a}&=\epsilon_{i_{a}^{*}}- \epsilon_{i_{a+1}^{*}},\quad\text{for}\quad a\in\{1,2,\dots,r+s-1\},\\ \beta_{r+s}&=2\epsilon_{i_{r+s}^{*}},\end{split} \tag{2.27}\]
and the corresponding Dynkin diagram is given by
(type C Dynkin diagram: a chain of nodes \(\beta_{1},\beta_{2},\ldots,\beta_{r+s-2},\beta_{r+s-1},\beta_{r+s}\), where the last node \(\beta_{r+s}\) is a black dot if \(p_{i_{r+s}}=-1\))
In particular for the case \(r=1\), \(s\geq 1\), \(I_{2s+4}=(2,2s+4,2s+3,\dots,4,3,1)\), (2.27) reduces to the distinguished simple root system of \(C(s+1)=osp(2|2s)\):
\[\begin{split}\beta_{1}&=\varepsilon_{1}-\delta_{1},\\ \beta_{i}&=\delta_{i-1}-\delta_{i}\quad\text{for}\quad i\in\{2,3,\dots,s\},\\ \beta_{s+1}&=2\delta_{s},\end{split} \tag{2.28}\]
Figure 4: \([r,s]\)-hook in the \([2r,2s+2]\)-hook: the Young diagram \(\mu\) is related to the highest weight (2.23) by (2.26).
and the corresponding Dynkin diagram is given by
(2.31)
Let \(V(\Lambda)\) be the irreducible representation of \(osp(2|2s)\) with the highest weight
\[\Lambda=\Lambda_{1}\varepsilon_{1}+\sum_{j=1}^{s}\Lambda_{1+j}\delta_{j}, \tag{2.32}\]
where \(\Lambda_{j}\in\mathbb{C}\). For the distinguished simple roots (2.28), the Kac-Dynkin labels \([b_{1},b_{2},\dots,b_{s+1}]\) of \(V(\Lambda)\) are given by 10
Footnote 10: Here we set \(1^{\prime}=2\).
\[b_{1}=\Lambda_{1}+\Lambda_{2},\qquad b_{j}=\Lambda_{j}-\Lambda_{j+1}\quad\text {for}\quad j\neq 1,s+1,\qquad b_{s+1}=\Lambda_{s+1}. \tag{2.33}\]
\(V(\Lambda)\) is finite dimensional if \(b_{j}\in\mathbb{Z}_{\geq 0}\) for \(j\neq 1\). In case \(\Lambda_{j}\in\mathbb{Z}_{\geq 0}\), these parameters are related to a \([1,s]\)-hook partition \(\mu=(\mu_{1},\mu_{2},\dots)\), \(\mu_{1}\geq\mu_{2}\geq\dots\geq 0\), \(\mu_{2}\leq s\):
\[\Lambda_{1}=\mu_{1},\qquad\Lambda_{j+1}=\max\{\mu_{j}^{\prime}-s,0\},\quad \text{for}\quad j\in\{1,2,\dots,s\}. \tag{2.34}\]
The \([1,s]\)-hook partition describes a Young diagram in the \([1,s]\)-hook. This is embedded into the \([2,2s+2]\)-hook of \(gl(2|2s+2)\) (see Figure 5).
#### 2.2.2 Weyl group
Let \(\alpha\) and \(\beta\) be roots of a Lie superalgebra. The Weyl group of the Lie superalgebra is generated by _Weyl reflections_:
\[w_{\alpha}(\beta)=\beta-\frac{2(\alpha|\beta)}{(\alpha|\alpha)}\alpha, \tag{2.35}\]
where \(\alpha\) is an even root. The Weyl group is extended by _odd reflections_[9, 10]:
\[w_{\alpha}(\beta)=\begin{cases}\beta-\frac{2(\alpha|\beta)}{(\alpha|\alpha)} \alpha\quad\text{if}\quad(\alpha|\alpha)\neq 0,\\ \beta+\alpha\quad\text{if}\quad(\alpha|\alpha)=0\quad\text{and}\quad(\alpha| \beta)\neq 0,\\ \beta\quad\text{if}\quad(\alpha|\alpha)=0\quad\text{and}\quad(\alpha|\beta)=0, \\ -\alpha\quad\text{if}\quad\alpha=\beta,\end{cases} \tag{2.36}\]
where \(\alpha\) is an odd root. The Weyl reflections (2.35) do not change the shape of the Dynkin diagrams, while the odd reflections (2.36) do. Take the type D simple root system (2.21) for the case \(-p_{i_{r+s-1}}=p_{i_{r+s}}=1\), and apply the odd reflection with respect to the \((r+s)\)-th odd simple root \(\beta_{r+s}\) to this:
\[w_{\beta_{r+s}}(\beta_{a}) =\beta_{a}=\epsilon_{i_{a}^{*}}-\epsilon_{i_{a+1}^{*}}=:\beta_{a} ^{\prime}\quad\text{for}\quad a\in\{1,2,\dots,r+s-3\}, \tag{2.37}\] \[w_{\beta_{r+s}}(\beta_{r+s-2}) =\beta_{r+s-2}+\beta_{r+s}=\epsilon_{i_{r+s-2}^{*}}+\epsilon_{i_{ r+s}^{*}}=\epsilon_{i_{r+s-2}^{*}}-\epsilon_{i_{r+s}}=:\beta_{r+s-2}^{\prime},\] \[w_{\beta_{r+s}}(\beta_{r+s-1}) =\beta_{r+s-1}+\beta_{r+s}=2\epsilon_{i_{r+s-1}^{*}}=:\beta_{r+s} ^{\prime},\] \[w_{\beta_{r+s}}(\beta_{r+s}) =-\beta_{r+s}=-\epsilon_{i_{r+s-1}^{*}}-\epsilon_{i_{r+s}^{*}}= \epsilon_{i_{r+s}}-\epsilon_{i_{r+s-1}^{*}}=:\beta_{r+s-1}^{\prime}.\]
Figure 5: \([1,s]\)-hook in the \([2,2s+2]\)-hook: the Young diagram \(\mu\) is related to the highest weight (2.32) by (2.34).
The resultant simple root system \(\{\beta^{\prime}_{a}\}_{a=1}^{r+s}\) is the type C simple root system (2.27) defined by the tuple \(I^{\prime}_{2r+2s+2}=\tau_{i^{*}_{r+s-1},i_{r+s}}\circ\tau_{i_{r+s-1},i^{*}_{r+s }}\circ\tau_{i_{r+s},i^{*}_{r+s}}(I_{2r+2s+2})=(i_{1},i_{2},\ldots,i_{r+s-2},i^{* }_{r+s},i_{r+s-1},i_{r+s+1},i^{*}_{r+s-1},i_{r+s},i^{*}_{r+s-2},\ldots,i^{*}_{2 },i^{*}_{1})\) with \(p_{i_{r+s-1}}=-1\), where \(\tau_{a,b}\) permutes \(a\) and \(b\). Note that the labeling of the \((r+s-1)\)-th and \((r+s)\)-th nodes of the Dynkin diagram is interchanged, and the sign of \(\Upsilon\) is changed by the above \(w_{\beta_{r+s}}\).
Suppose we have a type C simple root system (2.27) with \(p_{i_{r+s}}=-p_{i_{r+s-1}}=-1\). In this case, \(\beta_{r+s-1}\) is an odd root, and the odd reflection by \(\beta_{r+s-1}\) produces a type D simple root system \(\{\beta^{\prime}_{a}\}_{a=1}^{r+s}\) with \(\Upsilon=1\):
\[\beta^{\prime}_{a}=w_{\beta_{r+s-1}}(\beta_{a})\quad\mbox{for}\quad 1\leq a \leq r+s\quad\mbox{if}\quad\Upsilon=1; \tag{2.37}\]
\[\beta^{\prime}_{a}=w_{\beta_{r+s-1}}(\beta_{a})\quad\mbox{for}\quad 1\leq a \leq r+s-2,\]
\[\beta^{\prime}_{r+s}=w_{\beta_{r+s-1}}(\beta_{r+s-1}),\quad\beta^{\prime}_{r+s -1}=w_{\beta_{r+s-1}}(\beta_{r+s})\quad\mbox{if}\quad\Upsilon=-1. \tag{2.38}\]
If we define the opposite way (that is, (2.37) for \(\Upsilon=-1\), (2.38) for \(\Upsilon=1\)), the type D simple root system \(\{\beta^{\prime}_{a}\}_{a=1}^{r+s}\) should be interpreted as the one with \(\Upsilon=-1\). Starting from a given simple root system, one can obtain any other simple root system by applying the Weyl reflections (2.35) and odd reflections (2.36) repeatedly.
Let us summarize the relation between the action of the Weyl reflections and odd reflections by simple roots and the action of symmetric groups on tuples.
**Type A, (2.6):**
\[w_{\alpha_{a}}(I_{M+N}):=\tau_{i_{M+N-a},i_{M+N-a+1}}(I_{M+N})\quad\mbox{for} \quad 1\leq a\leq M+N-1. \tag{2.39}\]
**Type B, (2.11):**
\[\begin{split} w_{\beta_{a}}(I_{2r+2s+1})&:=\tau_{i_{a},i_{a+1}}\circ\tau_{i^{*}_{a},i^{*}_{a+1}}(I_{2r+2s+1})\quad\mbox{for}\quad 1\leq a\leq r+s-1,\\ w_{\beta_{r+s}}(I_{2r+2s+1})&:=\tau_{i_{r+s},i^{*}_{r+s}}(I_{2r+2s+1}).\end{split} \tag{2.40}\]
**Type D, (2.21), \(p_{i_{r+s}}=1\):**
\[\begin{split} w_{\beta_{a}}(I_{2r+2s+2})&:=\tau_{i_ {a},i_{a+1}}\circ\tau_{i^{*}_{a},i^{*}_{a+1}}(I_{2r+2s+2})\quad\mbox{for}\quad 1 \leq a\leq r+s-1,\\ w_{\beta_{r+s}}(I_{2r+2s+2})&:=\tau_{i_{r+s-1},i^{*} _{r+s}}\circ\tau_{i^{*}_{r+s-1},i_{r+s}}(I_{2r+2s+2})\quad\mbox{if}\quad p_{ \beta_{r+s}}=1\quad(p_{i_{r+s-1}}=1),\\ w_{\beta_{r+s}}(I_{2r+2s+2})&:=\tau_{i_{r+s-1},i^{*} _{r+s-1}}\circ\tau_{i_{r+s-1},i^{*}_{r+s}}\circ\tau_{i^{*}_{r+s-1},i_{r+s}}(I_ {2r+2s+2})\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\mbox{if}\quad p_{\beta_{r+s}}=-1,\quad(p_{i_{r+s- 1}}=-1).\end{split} \tag{2.41}\]
**Type C, (2.27), \(p_{i_{r+s}}=-1\):**
\[\begin{split} w_{\beta_{a}}(I_{2r+2s+2})&:=\tau_{i_{ a},i_{a+1}}\circ\tau_{i_{a}^{*},i_{a+1}^{*}}(I_{2r+2s+2})\quad\text{for}\quad 1 \leq a\leq r+s-2,\\ w_{\beta_{r+s-1}}(I_{2r+2s+2})&:=\tau_{i_{r+s-1},i_{ r+s}}\circ\tau_{i_{r+s-1}^{*},i_{r+s}^{*}}(I_{2r+2s+2})\\ \text{if}\quad p_{\beta_{r+s-1}}=1\quad(p_{i_{r+s-1}}=-1),\quad \text{or}\quad p_{\beta_{r+s-1}}=-1\quad(p_{i_{r+s-1}}=1),\quad\Upsilon=1,\\ w_{\beta_{r+s-1}}(I_{2r+2s+2})&:=\tau_{i_{r+s-1},i _{r+s-1}^{*}}\circ\tau_{i_{r+s-1},i_{r+s}}\circ\tau_{i_{r+s-1}^{*},i_{r+s}^{*}} (I_{2r+2s+2})\\ \text{if}\quad p_{\beta_{r+s-1}}=-1\quad(p_{i_{r+s-1}}=1),\quad \Upsilon=-1,\\ w_{\beta_{r+s}}(I_{2r+2s+2})&:=\tau_{i_{r+s},i_{r+ s}^{*}}(I_{2r+2s+2}).\end{split} \tag{2.42}\]
For any roots \(\alpha,\beta,\gamma\) with \((\alpha|\alpha)\neq 0\), one can show
\[(w_{\alpha}(\beta)|w_{\alpha}(\gamma))=(\beta|\gamma), \tag{2.43}\]
and for any roots \(\alpha,\beta,\gamma\) with \((\alpha|\alpha)=0\),
\[(w_{\alpha}(\beta)|w_{\alpha}(\gamma))=\begin{cases}(\beta|\gamma)&\text{if} \quad(\alpha|\beta)(\alpha|\gamma)=0,\ \alpha\neq\beta,\ \alpha\neq\gamma\\ -(\beta|\gamma)&\text{if}\quad\alpha=\beta,\ \alpha\neq\gamma\quad\text{or}\quad \alpha\neq\beta,\ \alpha=\gamma\\ (\beta|\gamma)+(\alpha|\gamma)+(\beta|\alpha)&\text{if}\quad(\alpha|\beta)( \alpha|\gamma)\neq 0,\ \alpha\neq\beta,\ \alpha\neq\gamma\\ 0&\text{if}\quad\alpha=\beta=\gamma.\end{cases} \tag{2.44}\]
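The identities (2.43)-(2.44) can be checked directly on a small example. The following sketch (an illustration under the conventions above, not part of the paper) verifies them for all triples of roots \(\epsilon_{i}-\epsilon_{j}\) of \(gl(2|2)\), using the reflections (2.35)-(2.36).

```python
from fractions import Fraction
from itertools import product

M, N = 2, 2
n = M + N
p = [1]*M + [-1]*N                        # grading parameters (2.4), 0-indexed

def e(i):                                  # basis vector epsilon_{i+1}
    return tuple(1 if j == i else 0 for j in range(n))

def form(x, y):                            # bilinear form (x|y) = sum_a p_a x_a y_a
    return sum(pa*xa*ya for pa, xa, ya in zip(p, x, y))

def reflect(alpha, beta):                  # Weyl reflection (2.35) / odd reflection (2.36)
    aa = form(alpha, alpha)
    if aa != 0:
        c = Fraction(2*form(alpha, beta), aa)
        return tuple(b - c*a for a, b in zip(alpha, beta))
    if beta == alpha:
        return tuple(-a for a in alpha)
    if form(alpha, beta) != 0:
        return tuple(a + b for a, b in zip(alpha, beta))
    return beta

roots = [tuple(x - y for x, y in zip(e(i), e(j)))
         for i in range(n) for j in range(n) if i != j]

def rhs(alpha, beta, gamma):               # right-hand sides of (2.43) and (2.44)
    if form(alpha, alpha) != 0:
        return form(beta, gamma)
    if alpha == beta and alpha == gamma:
        return 0
    if alpha == beta or alpha == gamma:
        return -form(beta, gamma)
    if form(alpha, beta)*form(alpha, gamma) != 0:
        return form(beta, gamma) + form(alpha, gamma) + form(beta, alpha)
    return form(beta, gamma)

for alpha, beta, gamma in product(roots, repeat=3):
    assert form(reflect(alpha, beta), reflect(alpha, gamma)) == rhs(alpha, beta, gamma)
print("(2.43)/(2.44) hold for all root triples of gl(2|2)")
```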
### Quantum affine superalgebras
The Dynkin diagrams of affine Lie superalgebras (see [37]) are obtained by extending those of the Lie superalgebras mentioned in the previous subsection. Quantum affine superalgebras are quantizations of their universal enveloping algebras [40, 41]. Rational counterparts of non-twisted quantum affine superalgebras are superYangians. In [42], an RTT presentation of the superYangian \(Y(osp(r|2s))\) is given. A complete classification of representations of quantum affine superalgebras or superYangians, in particular of orthosymplectic type, does not seem to be established yet, although there are some partial results for the case of superYangians [43, 44].
## 3 T- and Q-functions for \(U_{q}(gl(M|N)^{(1)})\)
In this section, we briefly summarize mainly a part of [17] (and [15, 8]), in which miscellaneous formulas on T- and Q-functions for quantum integrable models associated with \(U_{q}(gl(M|N)^{(1)})\) (or \(Y(gl(M|N))\)) are presented. The review parts of this paper have some overlap with the text of [17, 1, 4]. Other references relevant to this section are [13, 46, 47], in which Bäcklund flows in the context of the Bethe ansatz are discussed in detail.
### Tableaux sum and CBR-type expressions of T-functions
Let \(I_{M+N}=(i_{1},i_{2},\ldots,i_{M+N})\) be any one of the permutations of the tuple \((1,2,\ldots,M+N)\), and \(I_{a}=(i_{1},i_{2},\ldots,i_{a})\) be the first \(a\) elements of it, where \(0\leq a\leq M+N\). In particular, \(I_{0}\) and \(I_{M+N}\) coincide with \(\emptyset\) and \(\mathfrak{I}\) as sets, respectively. We use the symbol \(\mathbb{Q}_{I_{a}}(u)\) to denote the Baxter Q-function (for short, Q-function) labeled by \(I_{a}\), which is a function of the spectral parameter \(u\in\mathbb{C}\). We suppose that \(\mathbb{Q}_{I_{a}}(u)\) does not depend on the order of the elements of \(I_{a}\) and thus \(I_{a}\) may be regarded as a subset (rather than a tuple) \(\{i_{1},i_{2},\ldots,i_{a}\}\) of the full set \(\mathfrak{I}\). Therefore, as remarked in [17], there are \(2^{M+N}\) kinds of Q-functions corresponding to the number of the subsets of \(\mathfrak{I}\). We use the following abbreviation for the labeling of Q-functions: \(\mathbb{Q}_{I_{a},i,j}(u)=\mathbb{Q}_{I_{a},i,j}=\mathbb{Q}_{\{i_{1},i_{2}, \ldots,i_{a},i,j\}}=\mathbb{Q}_{i_{1},i_{2},\ldots,i_{a},i,j}\) for \(i,j\notin I_{a}\), \(i\neq j\).
**Fundamental T-function and Bethe ansatz equations.** The eigenvalue formula of the transfer matrix for the fundamental representation of \(U_{q}(gl(M|N)^{(1)})\) in the auxiliary space (Perk-Schultz-type model [48]) by Bethe ansatz has the following form [49, 50]:
\[\mathsf{F}^{I_{M+N}}_{(1)}(u)=\mathbb{Q}^{[M-N]}_{\emptyset}\mathbb{Q}_{I_{M+ N}}\sum_{a=1}^{M+N}p_{i_{a}}\mathcal{X}_{I_{a}}, \tag{3.1}\]
Footnote 11: The summation \(\sum_{j\in I_{a}}\) is meant by \(\sum_{j\in\{i_{1},i_{2},\ldots,i_{a}\}}\). We also remark that \(\mathcal{X}^{[-M+N]}_{I_{a}}\) corresponds to eq.(2.9) in [4], and \(\mathcal{X}^{[-\frac{2IM-N}{2}]}_{I_{a}}\) corresponds to eq.(2.7) in [17]. Based on the relation \(\sum_{j\in\mathfrak{I}}p_{j}=M-N\), one can show
\[\mathcal{X}_{I_{a}}=z_{i_{a}}\frac{\mathbb{Q}^{[\sum_{j\in\overline{I}_{a-1}}p_{j}-2p_{i_{a}}]}_{I_{a-1}}\mathbb{Q}^{[\sum_{j\in\overline{I}_{a}}p_{j}+2p_{i_{a}}]}_{I_{a}}}{\mathbb{Q}^{[\sum_{j\in\overline{I}_{a-1}}p_{j}]}_{I_{a-1}}\mathbb{Q}^{[\sum_{j\in\overline{I}_{a}}p_{j}]}_{I_{a}}}, \tag{3.2}\]
where \(\overline{I}_{a}=(i_{a+1},i_{a+2},\ldots,i_{M+N})\). We will use this to derive (4.12).
The Q-functions are polynomials in \(q^{-2u}\) (a function of the spectral parameter) and have zeros at \(u=u_{k}^{I_{a}}\):
\[\mathbb{Q}_{I_{a}}=\mathbb{Q}_{I_{a}}(u)=\prod_{j=1}^{n_{I_{a}}}(1-q^{-2u+2u_{j}^ {I_{a}}}), \tag{3.4}\]
where \(k\in\{1,2,\ldots,n_{I_{a}}\},a\in\{0,1,2,\ldots,M+N\}\). In particular, \(u_{k}^{I_{0}}\) and \(u_{k}^{I_{M+N}}\) are inhomogeneity of the spectral parameter (known parameters), and \(n_{I_{0}}=n_{I_{M+N}}=L\) is (half) of the number of the lattice sites 13. The requirement that the T-function (3.1) is free of poles, namely,
Footnote 13: Q-functions of an alternating spin chain with \(2L\) lattice sites (half of them in the fundamental representation and the other half in the anti-fundamental representation) will have this normalization condition.
\[\operatorname{Res}_{u=u_{k}^{I_{a}}+\sum_{j\in I_{a}}p_{j}-M+N}(p _{i_{a}}\mathcal{X}_{I_{a}}(u)+p_{i_{a+1}}\mathcal{X}_{I_{a+1}}(u))=0\\ \text{for}\quad k\in\{1,2,\ldots,n_{I_{a}}\}\quad\text{and}\quad a \in\{1,2,\ldots,M+N-1\} \tag{3.5}\]
produces the following Bethe ansatz equation:
\[-1=\frac{p_{i_{a}}z_{i_{a}}}{p_{i_{a+1}}z_{i_{a+1}}}\frac{ \mathbb{Q}_{I_{a-1}}(u_{k}^{I_{a}}-p_{i_{a}})\mathbb{Q}_{I_{a}}(u_{k}^{I_{a}}+ 2p_{i_{a}})\mathbb{Q}_{I_{a+1}}(u_{k}^{I_{a}}-p_{i_{a+1}})}{\mathbb{Q}_{I_{a-1 }}(u_{k}^{I_{a}}+p_{i_{a}})\mathbb{Q}_{I_{a}}(u_{k}^{I_{a}}-2p_{i_{a+1}}) \mathbb{Q}_{I_{a+1}}(u_{k}^{I_{a}}+p_{i_{a+1}})}\\ \text{for}\quad k\in\{1,2,\ldots,n_{I_{a}}\}\quad\text{and}\quad a \in\{1,2,\ldots,M+N-1\}. \tag{3.6}\]
Here the poles coming from the known functions \(\mathbb{Q}_{I_{0}}\) and \(\mathbb{Q}_{I_{M+N}}\) are not at issue. We assume that the roots of the Q-functions are sufficiently generically distributed. We do not go into mathematical rigour on this.
**QQ-relations.** Let \(S(I_{M+N})\) be the symmetric group over the components of the tuple \(I_{M+N}\). We assume that \(\tau\in S(I_{M+N})\) acts on \(I_{a}\) as \(\tau(I_{a})=(\tau(i_{1}),\tau(i_{2}),\ldots,\tau(i_{a}))\), \(0\leq a\leq M+N\). The action of \(\tau\in S(I_{M+N})\) on \(\mathsf{F}_{(1)}^{I_{M+N}}\) is defined as \(\tau(\mathsf{F}_{(1)}^{I_{M+N}}):=\mathsf{F}_{(1)}^{\tau(I_{M+N})}=\mathsf{F}_{(1)}^{(\tau(i_{1}),\tau(i_{2}),\ldots,\tau(i_{M+N}))}\). We also set \(\tau(\mathcal{X}_{I_{a}})=\mathcal{X}_{\tau(I_{a})}\), \(\tau(\mathbb{Q}_{I_{a}})=\mathbb{Q}_{\tau(I_{a})}\), \(\tau(z_{a})=z_{\tau(a)}\), \(\tau(p_{a})=p_{\tau(a)}\). The direct product of the symmetric groups \(S(\mathfrak{B})\times S(\mathfrak{F})\) corresponds to the Weyl group of \(gl(M|N)\) (see (2.35); we also denote the symmetric group over a subset \(I\) of \(\mathfrak{I}\) as \(S(I)\)). On the other hand, the elements of \(S(I_{M+N})/(S(\mathfrak{B})\times S(\mathfrak{F}))\) correspond to the odd reflections of \(gl(M|N)\) (see (2.36)). Consider a permutation \(\tau\in S(I_{M+N})=S(\mathfrak{I})\) such that \(\tau(i_{a})=i_{a+1}\), \(\tau(i_{a+1})=i_{a}\) and \(\tau(i_{b})=i_{b}\) for \(b\neq a,a+1\), for a fixed \(a\in\{1,2,\ldots,M+N-1\}\). The condition \(\tau(\mathsf{F}_{(1)}^{I_{M+N}})=\mathsf{F}_{(1)}^{I_{M+N}}\) is equivalent to
\[p_{i}\mathcal{X}_{I_{a}}+p_{j}\mathcal{X}_{I_{a+1}}=p_{j}\mathcal{X}_{\tau(I_{a })}+p_{i}\mathcal{X}_{\tau(I_{a+1})}, \tag{3.7}\]
where \(i=i_{a},j=i_{a+1}\). This implies the following functional relations, called QQ-relations
\[(z_{i}-z_{j})\mathbb{Q}_{I}\mathbb{Q}_{I,i,j}=z_{i}\mathbb{Q}_{I,i}^{[p_{i}]} \mathbb{Q}_{I,j}^{[-p_{i}]}-z_{j}\mathbb{Q}_{I,i}^{[-p_{i}]}\mathbb{Q}_{I,j}^{[ p_{i}]}\qquad\text{for}\qquad p_{i}=p_{j}, \tag{3.8}\]
\[(z_{i}-z_{j})\mathbb{Q}_{I,i}\mathbb{Q}_{I,j}=z_{i}\mathbb{Q}_{I}^{[-p_{i}]} \mathbb{Q}_{I,i,j}^{[p_{i}]}-z_{j}\mathbb{Q}_{I}^{[p_{i}]}\mathbb{Q}_{I,i,j}^{[ -p_{i}]}\qquad\text{for}\qquad p_{i}=-p_{j}, \tag{3.9}\]
where \(I=I_{a-1}\). The 3-term QQ-relations (3.8) and (3.9) are simplifications 14 of the 4-term QQ-relation (3.7). The QQ-relations (3.8) and (3.9) have a determinant solution [17]. In our convention, it is given by (3.51) ((3.53), (3.53)). That the determinant satisfies the QQ-relations was proved in [Appendix C, [17]] using the Jacobi or Plücker identities. From now on, we assume (3.8) and (3.9). Various types of QQ-relations appeared in the literature (see for example, [5, 6, 51, 52, 29, 46] and references in [17]). In particular, the form relevant to our discussion appeared in [5] ((3.8) for \((M,N)=(2,0)\)), [52] ((3.8) for \((M,N)=(3,0)\)), and [18] ((3.8) and (3.9) for \((M,N)=(2,1)\)). In this paper, we use the presentation [17] of QQ-relations for the whole set of \(2^{M+N}\) Q-functions on the Hasse diagram. This is necessary for the formulation of Wronskian-type expressions of T-functions. The first equations (3.8) (Bosonic QQ-relations) are generalizations of the quantum Wronskian condition. The second equations (3.9) (Fermionic QQ-relations) came from the particle-hole transformations [7] in statistical mechanics, and are related [8] to odd Weyl reflections [9, 10] of the superalgebra \(gl(M|N)\). Fermionic QQ-relations are studied [46] in relation to Bäcklund transformations in soliton theory. In [17, 1], we normalized the Q-function for the empty set as \(\mathbb{Q}_{\emptyset}=1\), but we do not impose this in this paper.
Footnote 14: Clear the denominators of (3.7). Eqs. (3.8) and (3.9) are “bi-linearizations” of this “multi-linear form”.
Let \(\{\epsilon_{a}\}_{a=1}^{M+N}\) be a basis of the dual space of the Cartan subalgebra of \(gl(M|N)\) with the bilinear form \((\epsilon_{a}|\epsilon_{b})=p_{a}\delta_{ab}\). We introduce a map \(\sigma\), which is related to an automorphism of \(gl(M|N)\) (or \(sl(M|N)\), cf. [37]):
\[\sigma(\epsilon_{i})=-\epsilon_{i^{*}}=\begin{cases}-\epsilon_{M+1-i}\quad \text{for}\quad i\in\mathfrak{B},\\ -\epsilon_{2M+N+1-i}\quad\text{for}\quad i\in\mathfrak{F}.\end{cases} \tag{3.10}\]
Take a Cartan element \(h\) of \(gl(M|N)\) so that \(e^{\epsilon_{a}(h)}=z_{a}\) holds, and define
\[\sigma(z_{i})=z_{i^{*}}^{-1}=\begin{cases}z_{M+1-i}^{-1}\quad \text{for}\quad i\in\mathfrak{B},\\ z_{2M+N+1-i}^{-1}\quad\text{for}\quad i\in\mathfrak{F}.\end{cases} \tag{3.11}\]
Then we define the action of this map on the index sets of Q-functions:
\[\sigma(I)=\mathfrak{I}\setminus I^{*}\quad\text{for}\quad I \subset\mathfrak{I}, \tag{3.12}\]
and on the Q-functions:
\[\sigma(\mathbb{Q}_{I})=\mathbb{Q}_{\sigma(I)}. \tag{3.13}\]
We remark that the form of the QQ-relations (3.8) and (3.9) are invariant under the map \(\sigma\).
**Root systems and Bethe ansatz.** The Bethe ansatz equation (3.6) fits into a universal form (cf. [53, 54, 55]) associated with the root system of the superalgebra, supplemented
by a sign factor and boundary twist parameters:
\[-\frac{\Lambda_{a}(u_{k}^{(a)}-(\omega_{a}|\omega_{a}))}{\Lambda_{a+1}(u_{k}^{(a)}-(\omega_{a}|\omega_{a}))}=p_{\alpha_{a}}e^{-\alpha_{a}(h)}\prod_{\begin{subarray}{c}b=1\\ b\neq a\ \text{if}\ (\alpha_{a}|\alpha_{a})=0\end{subarray}}^{\mathfrak{r}}\frac{\mathcal{Q}_{b}(u_{k}^{(a)}+(\alpha_{a}|\alpha_{b}))}{\mathcal{Q}_{b}(u_{k}^{(a)}-(\alpha_{a}|\alpha_{b}))}\\ \text{for}\quad k\in\{1,2,\ldots,n_{a}\}\quad\text{and}\quad a\in\{1,2,\ldots,\mathfrak{r}\}, \tag{3.14}\]
where \(\mathfrak{r}=M+N-1\) is the rank of \(sl(M|N)\); \(((\alpha_{a}|\alpha_{b}))_{1\leq a,b\leq\mathfrak{r}}\) is the symmetrized Cartan matrix with a set of simple roots \(\alpha_{a}=\epsilon_{i_{M+N+1-a}}-\epsilon_{i_{M+N-a}}\), \(\omega_{a}=\sum_{k=1}^{a}\epsilon_{i_{M+N-a+k}}\), \((\omega_{a}|\omega_{a})=\sum_{k=1}^{a}p_{i_{M+N-a+k}}\), \(p_{\alpha_{a}}=p_{\epsilon_{i_{M+N+1-a}}}p_{\epsilon_{i_{M+N-a}}}\), \(\nu_{\epsilon_{a}}=p_{a}\), \(u_{k}^{(a)}=u_{k}^{I_{M+N-a}}\). We identify \(\mathcal{Q}_{a}(u)=\mathbb{Q}_{I_{M+N-a}}(u)\), \(n_{a}=n_{I_{M+N-a}}\). Left hand side of (3.14) depends on the quantum space of the model in question. \(\{\Lambda_{a}(u)\}_{a=1}^{M+N}\) are the eigenvalues of the diagonal elements of a monodromy matrix on the vacuum vector ("vacuum parts" in the analytic Bethe ansatz). Here we consider the case \(\Lambda_{1}(u)=\mathbb{Q}_{\emptyset}(u+M-N)\mathbb{Q}_{I_{M+N}}(u+2p_{i_{M+N }})\), \(\Lambda_{a}(u)=\mathbb{Q}_{\emptyset}(u+M-N)\mathbb{Q}_{I_{M+N}}(u)\) for \(2\leq a\leq M+N-1\), \(\Lambda_{M+N}(u)=\mathbb{Q}_{\emptyset}(u+M-N-2p_{i_{1}})\mathbb{Q}_{I_{M+N}}(u)\). One can rewrite the left hand side of (3.14) further:
\[-\frac{P_{a}(u_{k}^{(a)}+d_{a})}{P_{a}(u_{k}^{(a)}-d_{a})}=p_{\alpha_{a}}e^{-\alpha_{a}(h)}\prod_{\begin{subarray}{c}b=1\\ b\neq a\ \text{if}\ (\alpha_{a}|\alpha_{a})=0\end{subarray}}^{\mathfrak{r}}\frac{\mathcal{Q}_{b}(u_{k}^{(a)}+(\alpha_{a}|\alpha_{b}))}{\mathcal{Q}_{b}(u_{k}^{(a)}-(\alpha_{a}|\alpha_{b}))}\\ \text{for}\quad k\in\{1,2,\ldots,n_{a}\}\quad\text{and}\quad a\in\{1,2,\ldots,\mathfrak{r}\}, \tag{3.15}\]
where
\[P_{a}=\begin{cases}\mathcal{Q}_{0}=\mathbb{Q}_{I_{M+N}}&\text{if}\quad a=1,\\ \mathcal{Q}_{M+N}=\mathbb{Q}_{I_{0}}&\text{if}\quad a=M+N-1,\\ 1&\text{otherwise},\end{cases} \tag{3.16}\]
\(d_{a}=(\alpha_{a}|\alpha_{a})/2\) if \((\alpha_{a}|\alpha_{a})\neq 0\), \(d_{a}=(\alpha_{a}|\alpha_{a^{\prime}})\neq 0\) for some simple root \(\alpha_{a^{\prime}}\) if \((\alpha_{a}|\alpha_{a})=0\), in particular \(d_{1}=p_{i_{M+N}}\), \(d_{M+N-1}=p_{i_{1}}\). 15 The functions \(P_{a}(u)\) are related to Drinfeld polynomials, which characterize the quantum space of the model (or representation of the underlying algebra) in question (cf. [56]).
Footnote 15: \(d_{1}=(\alpha_{1}|\alpha_{2})\) if \((\alpha_{1}|\alpha_{1})=0\) since \(p_{i_{M+N}}=-p_{i_{M+N-1}}\), \(d_{1}=(\alpha_{M+N-1}|\alpha_{M+N-2})\) if \((\alpha_{M+N-1}|\alpha_{M+N-1})=0\) since \(p_{i_{1}}=-p_{i_{2}}\). The other option is \(d_{a}=-(\alpha_{a}|\alpha_{0})\) for \(a=1,M+N-1\), where \(\alpha_{0}=\epsilon_{i_{1}}-\epsilon_{i_{M+N}}\) is a simple root of the affine Lie superalgebra \(gl(M|N)^{(1)}\) (the null vector is omitted).
The QQ-relations (3.8) and (3.9) can also be written in terms of a root system of \(gl(M|N)\). For \(a\in\{1,2,\ldots,M+N-1\}\), they read
\[(e^{-\alpha_{a}(h)}\!-\!1)P_{a}\prod_{\begin{subarray}{c}b=1\\ (\alpha_{a}|\alpha_{b})\neq 0,b\neq a\end{subarray}}^{\mathfrak{r}}\mathcal{Q}_{b}=e^{- \alpha_{a}(h)}\mathcal{Q}_{a}^{[d_{a}]}\widetilde{\mathcal{Q}}_{a}^{[-d_{a}] }-\mathcal{Q}_{a}^{[-d_{a}]}\widetilde{\mathcal{Q}}_{a}^{[d_{a}]}\quad \text{if}\quad(\alpha_{a}|\alpha_{a})\neq 0, \tag{3.17}\]
\[(e^{-\alpha_{a}(h)}-1)\mathcal{Q}_{a}\widetilde{\mathcal{Q}}_{a}=e^{-\alpha_{a}(h)}P_{a}^{[-d_{a}]}\prod_{\begin{subarray}{c}b=1\\ (\alpha_{a}|\alpha_{b})\neq 0,b\neq a\end{subarray}}^{\mathfrak{r}}\mathcal{Q}_{b}^{[(\alpha_{a}|\alpha_{b})]}-P_{a}^{[d_{a}]}\prod_{\begin{subarray}{c}b=1\\ (\alpha_{a}|\alpha_{b})\neq 0,b\neq a\end{subarray}}^{\mathfrak{r}}\mathcal{Q}_{b}^{[-(\alpha_{a}|\alpha_{b})]}\\ \text{if}\quad(\alpha_{a}|\alpha_{a})=0, \tag{3.18}\]
where 16 \(\widetilde{\cal Q}_{a}=\mathbb{Q}_{\widetilde{I}_{M+N-a}}=\mathbb{Q}_{(i_{1},i_{2},\dots,i_{M+N-a-1},i_{M+N-a+1})}\).
Footnote 16: One may also write this as \(\widetilde{\cal Q}_{a}=w_{\alpha_{a}}({\cal Q}_{a})\) (see subsection 4.5).
We expect that QQ-relations for other quantum affine superalgebras or super-Yangians associated with simply laced Dynkin diagrams can also be expressed in this form (3.17)-(3.18). We remark that QQ-relations for non-super affine Lie algebras are expressed in terms of root systems in connection with discrete Miura opers [29], and with the ODE/IM correspondence [30, 31].
**T-functions for fusion vertex models.** The Young diagram \(\mu\), corresponding to a partition \(\mu\), has \(\mu_{k}\) boxes in the \(k\)-th row of the plane. Each box in the Young diagram has the coordinate \((i,j)\in{\mathbb{Z}}_{\geq 1}\times{\mathbb{Z}}_{\geq 1}\), where the row index \(i\) increases as one goes down, and the column index \(j\) increases as one goes from left to right. The upper left corner of \(\mu\) has the coordinates \((1,1)\). Let \(\lambda=(\lambda_{1},\lambda_{2},\dots)\) and \(\mu=(\mu_{1},\mu_{2},\dots)\) be two partitions such that \(\mu_{i}\geq\lambda_{i}\), \(i=1,2,\dots\), and \(\lambda_{\mu_{1}^{\prime}}=\lambda_{\mu_{1}}^{\prime}=0\). We express the skew-Young diagram defined by these two partitions as \(\lambda\subset\mu\). Each box on the skew-Young diagram \(\lambda\subset\mu\) is specified by its coordinate on \(\mu\).
We define the space of admissible tableaux \(\mathsf{Tab}_{I_{K}}(\lambda\subset\mu)\) for a tuple \(I_{K}=(i_{1},i_{2},\dots,i_{K})\) on a (skew) Young diagram \(\lambda\subset\mu\). We assign an integer \(t_{ij}\) in each box \((i,j)\) of the diagram. An admissible tableau \(t\in\mathsf{Tab}_{I_{K}}(\lambda\subset\mu)\) is a set of integers \(t=\{t_{jk}\}_{(j,k)\in\lambda\subset\mu}\), where all \(t_{jk}\in\{1,2,\dots,K\}\) satisfy the following conditions
* \(t_{jk}\geq t_{j+1,k},t_{j,k+1}\)
* \(t_{jk}>t_{j,k+1}\) if \(i_{t_{jk}}\in\mathfrak{F}\) or \(i_{t_{j,k+1}}\in\mathfrak{F}\)
* \(t_{jk}>t_{j+1,k}\) if \(i_{t_{jk}}\in\mathfrak{B}\) or \(i_{t_{j+1,k}}\in\mathfrak{B}\).
We introduce a \(\mathsf{T}\)-function 17 with auxiliary space labeled by a skew Young diagram \(\lambda\subset\mu\)[15, 8] (see [16] for an extension of this T-function, [11] for \(N=0\) case, [57] for representation or combinatorial theoretical background and [56] for \(U_{q}(B_{r}^{(1)})\) case):
Footnote 17: Here we change the convention of the function \({\cal F}_{\lambda\subset\mu}^{I_{K}}\) in [eq. (2.12), [4]]. \({\cal F}_{\lambda\subset\mu}^{I_{K}}\) in [4] corresponds to \({\cal F}_{\widetilde{\lambda\subset\mu}}^{I_{K}}\) in this paper, where \(\widetilde{\lambda\subset\mu}\) is the \(180^{\circ}\) rotation of \(\lambda\subset\mu\). In particular, both of them coincide if the Young diagram is of rectangular shape.
\[{\cal F}_{\lambda\subset\mu}^{I_{K}}(u)=\sum_{t\in\mathsf{Tab}_{I_{K}}(\lambda\subset\mu)}\prod_{(j,k)\in\lambda\subset\mu}p_{i_{t_{j,k}}}{\cal X}_{I_{t_{j,k}}}^{[-\mu_{1}+\mu_{1}^{\prime}-2j+2k+\mathtt{m}-\mathtt{n}-M+N]}, \tag{3.19}\]
where the summation is taken over all the admissible tableaux, and the products are taken over all the boxes of the Young diagram \(\lambda\subset\mu\); \(\mathtt{m}:=\mathrm{card}(I_{K}\cap\mathfrak{B})\), \(\mathtt{n}:=\mathrm{card}(I_{K}\cap\mathfrak{F})\).
We also set \(\mathcal{F}_{\emptyset}^{I_{K}}=1\), and \(\mathcal{F}_{\mu}^{\emptyset}=0\) for a non-empty Young diagram \(\mu\). Note that the set \(\mathtt{Tab}_{I_{K}}(\lambda\subset\mu)\) of admissible tableaux becomes empty if the Young diagram \(\lambda\subset\mu\) contains a rectangular sub-diagram of a height of \(\mathtt{m}+1\) and a width of \(\mathtt{n}+1\), and consequently (3.19) vanishes for such a Young diagram. Thus the T-functions for \(\lambda=\emptyset\) are defined on the \([\mathtt{m},\mathtt{n}]\)-hook (L-hook; cf. Figure 1). Let us give examples of (3.19) for the cases \((M,N)=(2,1)\), \(K=3\), \(I_{3}=(i_{1},i_{2},i_{3})=(2,3,1)\), \(\lambda=\emptyset\), \(\mu=(1)\) and \(\mu=(1^{2})\). In these cases, we have \(\mathfrak{B}=\{1,2\}\), \(\mathfrak{F}=\{3\}\), \(\mathtt{m}=2\), \(\mathtt{n}=1\), \(I_{2}=(i_{1},i_{2})=(2,3)\), \(I_{1}=(i_{1})=(2)\), \(I_{0}=\emptyset\), \(p_{i_{1}}=p_{2}=1\), \(p_{i_{2}}=p_{3}=-1\), \(p_{i_{3}}=p_{1}=1\). Thus (3.19) reduces to
\[\mathcal{F}_{(1)}^{I_{3}} =\mathcal{X}_{I_{3}}-\mathcal{X}_{I_{2}}+\mathcal{X}_{I_{1}}=z_{1} \frac{\mathbb{Q}_{231}^{[2]}\mathbb{Q}_{23}^{[-1]}}{\mathbb{Q}_{231}\mathbb{Q }_{23}^{[1]}}-z_{3}\frac{\mathbb{Q}_{23}^{[-1]}\mathbb{Q}_{2}^{[2]}}{\mathbb{Q }_{23}^{[1]}\mathbb{Q}_{2}}+z_{2}\frac{\mathbb{Q}_{2}^{[2]}\mathbb{Q}_{ \emptyset}^{[-1]}}{\mathbb{Q}_{2}\mathbb{Q}_{2}^{[1]}}, \tag{3.20}\] \[\mathcal{F}_{(1^{2})}^{I_{3}} =-\mathcal{X}_{I_{3}}^{[1]}\mathcal{X}_{I_{2}}^{[-1]}+\mathcal{X }_{I_{3}}^{[1]}\mathcal{X}_{I_{1}}^{[-1]}+\mathcal{X}_{I_{2}}^{[1]}\mathcal{ X}_{I_{2}}^{[-1]}-\mathcal{X}_{I_{2}}^{[1]}\mathcal{X}_{I_{1}}^{[-1]}\] \[=-z_{1}z_{3}\frac{\mathbb{Q}_{231}^{[3]}\mathbb{Q}_{23}^{[-2]} \mathbb{Q}_{2}^{[1]}}{\mathbb{Q}_{231}^{[1]}\mathbb{Q}_{23}^{[2]}\mathbb{Q}_ {2}^{[-1]}}+z_{1}z_{2}\frac{\mathbb{Q}_{231}^{[3]}\mathbb{Q}_{23}\mathbb{Q}_{ 2}^{[1]}\mathbb{Q}_{\emptyset}^{[-2]}}{\mathbb{Q}_{231}^{[1]}\mathbb{Q}_{23}^{ [2]}\mathbb{Q}_{2}^{[-1]}\mathbb{Q}_{\emptyset}}+(z_{3})^{2}\frac{\mathbb{Q}_ {23}^{[-2]}\mathbb{Q}_{2}^{[3]}\mathbb{Q}_{2}^{[-1]}}{\mathbb{Q}_{23}^{[2]} \mathbb{Q}_{2}^{[-1]}}-z_{3}z_{2}\frac{\mathbb{Q}_{23}\mathbb{Q}_{2}^{[3]} \mathbb{Q}_{2}^{[-2]}}{\mathbb{Q}_{23}^{[2]}\mathbb{Q}_{2}^{[-1]}\mathbb{Q}_{ \emptyset}}. \tag{3.21}\]
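As a small consistency check, the admissible tableaux that produce the four terms of (3.21) can be enumerated directly. The following sketch (illustrative only; the grading data are hard-coded for the example above) applies the admissibility conditions to a single column of two boxes.

```python
from itertools import product

# the gl(2|1) example above: (M, N) = (2, 1), I_3 = (2, 3, 1)
M, N = 2, 1
I3 = (2, 3, 1)
B = set(range(1, M + 1))                  # bosonic indices

def admissible_column(t_top, t_bot):
    # two boxes (1,1) over (2,1): t_{1,1} >= t_{2,1},
    # strict if i_{t_{1,1}} or i_{t_{2,1}} is bosonic
    if t_top < t_bot:
        return False
    if (I3[t_top - 1] in B or I3[t_bot - 1] in B) and t_top == t_bot:
        return False
    return True

cols = [t for t in product(range(1, 4), repeat=2) if admissible_column(*t)]
print(cols)   # [(2, 1), (2, 2), (3, 1), (3, 2)] -- the four terms of (3.21)
```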
**(Super)character limit.** We define the _(super)character limit_ by the operation:
\[\zeta:\mathbb{Q}_{I}\to 1\quad\text{for all}\quad I\subset\mathfrak{I}. \tag{3.22}\]
In this limit, we have \(\zeta(\mathcal{X}_{I_{a}})=z_{i_{a}}\) for (3.3). In general, T-functions reduce to the (super)characters of representations of underlying algebras. In particular, \(\zeta(\mathcal{F}_{\mu}^{I_{M+N}})\) coincides with the supercharacter of the highest weight representation of \(gl(M|N)\) with the highest weight (2.8) for (2.10) (in case \(I_{M+N}=(M+N,\ldots,2,1)\)).
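For instance, applying \(\zeta\) to (3.20) replaces every \(\mathcal{X}_{I_{a}}\) by \(z_{i_{a}}\) and gives
\[\zeta(\mathcal{F}^{I_{3}}_{(1)})=z_{1}+z_{2}-z_{3},\]
which is indeed the supercharacter (supertrace) of the fundamental representation of \(gl(2|1)\).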
**Generating functions of T-functions.** For Young diagrams with one row or one column, there are generating functions for (3.19):
\[\mathbf{W}_{I_{K}}(\mathbf{X}) =\prod_{k=1}^{\overset{\leftarrow}{K}}(1-\mathcal{X}_{I_{k}}\mathbf{X})^{-p_{i_{k}}}=\sum_{a=0}^{\infty}\mathcal{F}_{(a)}^{I_{K}[a-1-\mathtt{m}+\mathtt{n}+M-N]}\mathbf{X}^{a}, \tag{3.23}\] \[\mathbf{W}_{I_{K}}(\mathbf{X})^{-1} =\prod_{k=1}^{\overset{\rightarrow}{K}}(1-\mathcal{X}_{I_{k}}\mathbf{X})^{p_{i_{k}}}=\sum_{a=0}^{\infty}(-1)^{a}\mathcal{F}_{(1^{a})}^{I_{K}[a-1-\mathtt{m}+\mathtt{n}+M-N]}\mathbf{X}^{a}, \tag{3.24}\]
where \(\mathbf{X}\) is a shift operator \(\mathbf{X}f=f^{[2]}\mathbf{X}\) for any function \(f\) of the spectral parameter. This type of generating functions for \(K=M+N\) appeared in [15, 8] ([46, 47, 17] for \(0<K<M+N\) case; [13] for \(N=0\) case). The supercharacter limit of these at \(K=M+N\), namely \(\zeta(\mathbf{W}_{I_{K}}(\mathbf{X}))\) and \(\zeta(\mathbf{W}_{I_{K}}(\mathbf{X})^{-1})\) are the generating functions of the symmetric and anti-symmetric representations of \(gl(M|N)\), respectively (see (B1)). The condition that the generating function \(\mathbf{W}_{I_{K}}(\mathbf{X})\) for \(K\geq a+1\) is invariant under the transposition \(\tau\in S(I_{M+N})\) of \(i_{a}\) and \(i_{a+1}\), namely \(\tau(\mathbf{W}_{I_{K}}(\mathbf{X}))=\mathbf{W}_{\tau(I_{K})}(\mathbf{X})= \mathbf{W}_{I_{K}}(\mathbf{X})\) is equivalent to the _discrete zero curvature condition_ (cf. [46, 47]):
\[(1-\mathcal{X}_{I_{a}}\mathbf{X})^{p_{i_{a}}}(1-\mathcal{X}_{I_{a+1}}\mathbf{X} )^{p_{i_{a+1}}}=(1-\mathcal{X}_{\tau(I_{a})}\mathbf{X})^{p_{\tau(i_{a})}}(1- \mathcal{X}_{\tau(I_{a+1})}\mathbf{X})^{p_{\tau(i_{a+1})}}, \tag{3.25}\]
where \(\tau(i_{a})=i_{a+1},\tau(i_{a+1})=i_{a}\). This relation (3.25) boils down to (3.7) and an identity.
\[\left(\mathcal{X}_{I_{a}}^{[-p_{i_{a+1}}]}\right)^{p_{i_{a}}}\left( \mathcal{X}_{I_{a+1}}^{[p_{i_{a}}]}\right)^{p_{i_{a+1}}}=\left(\mathcal{X}_{ \tau(I_{a})}^{[-p_{\tau(i_{a+1})}]}\right)^{p_{\tau(i_{a})}}\left(\mathcal{X}_ {\tau(I_{a+1})}^{[p_{\tau(i_{a})}]}\right)^{p_{\tau(i_{a+1})}}\\ =(z_{i_{a}})^{p_{i_{a}}}(z_{i_{a+1}})^{p_{i_{a+1}}}\frac{\mathbb{ Q}_{I_{a-1}}^{[M-N-\sum_{j\in I_{a+1}}p_{j}-1]}\mathbb{Q}_{I_{a+1}}^{[M-N-\sum_{j\in I _{a-1}}p_{j}+1]}}{\mathbb{Q}_{I_{a-1}}^{[M-N-\sum_{j\in I_{a+1}}p_{j}+1]} \mathbb{Q}_{I_{a+1}}^{[M-N-\sum_{j\in I_{a-1}}p_{j}-1]}}. \tag{3.26}\]
The invariance for the case \(K\leq a-1\) is trivial. Thus the T-functions \(\mathcal{F}_{(b)}^{I_{K}}\) and \(\mathcal{F}_{(1^{b})}^{I_{K}}\) for \(b\geq 0\), \(K\neq a\) are invariant under the transposition \(\tau\). Therefore, these functions are invariant under \(S(I_{K})\times S(\overline{I}_{K})\), where \(\overline{I}_{K}=(i_{K+1},i_{K+2},\ldots,i_{M+N})\), since the symmetric group is generated by transpositions. In particular, the T-functions \(\mathcal{F}_{(b)}^{I_{M+N}}\) and \(\mathcal{F}_{(1^{b})}^{I_{M+N}}\) are invariant under \(S(I_{M+N})\). This means that these are independent of the order of the elements of tuple \(I_{M+N}\) under the QQ-relations (3.8) and (3.9).
One can derive Baxter type equations from the kernels of (3.23) and (3.24).
\[0=\mathbf{W}_{I_{K}}(\mathbf{X})\cdot\mathfrak{Q}_{I_{1}}=\sum_{a=0}^{ \infty}\mathcal{F}_{(a)}^{I_{K}[a-1-\mathtt{m}+\mathtt{n}+M-N]}\mathfrak{Q}_{ I_{1}}^{[2a]}\quad\text{if}\quad p_{i_{1}}=-1, \tag{3.27}\]
\[0=\mathbf{W}_{I_{K}}(\mathbf{X})^{-1}\cdot\mathfrak{Q}_{I_{K}}=\sum_{a=0}^{ \infty}(-1)^{a}\mathcal{F}_{(1^{a})}^{I_{K}[a-1-\mathtt{m}+\mathtt{n}+M-N]} \mathfrak{Q}_{I_{K}}^{[2a]}\quad\text{if}\quad p_{i_{K}}=1, \tag{3.28}\]
where
\[\mathfrak{Q}_{I_{a}}=e^{-\frac{u}{2}\log z_{i_{a}}}\left(\frac{\mathbb{Q}_{I_{ a-1}}^{[M-N-\sum_{j\in I_{a-1}}p_{j}-p_{i_{a}}-1]}}{\mathbb{Q}_{I_{a}}^{[M-N- \sum_{j\in I_{a}}p_{j}+p_{i_{a}}-1]}}\right)^{p_{i_{a}}}, \tag{3.29}\]
and \(\mathbf{X}\cdot\mathfrak{Q}_{I_{a}}=\mathfrak{Q}_{I_{a}}^{[2]}\). It is possible to consider reverse order version of (3.23) and (3.24):
\[\mathbf{W}_{I_{K}}^{\prime}(\mathbf{X}^{-1})= \prod_{k=1}^{\overset{\rightarrow}{K}}(1-\mathcal{X}_{I_{k}}\mathbf{X}^{-1})^{-p_{i_{k}}}=\sum_{a=0}^{\infty}\mathcal{F}_{(a)}^{I_{K}[-a+1-\mathtt{m}+\mathtt{n}+M-N]}\mathbf{X}^{-a}, \tag{3.30}\] \[\mathbf{W}_{I_{K}}^{\prime}(\mathbf{X}^{-1})^{-1}= \prod_{k=1}^{\overset{\leftarrow}{K}}(1-\mathcal{X}_{I_{k}}\mathbf{X}^{-1})^{p_{i_{k}}}=\sum_{a=0}^{\infty}(-1)^{a}\mathcal{F}_{(1^{a})}^{I_{K}[-a+1-\mathtt{m}+\mathtt{n}+M-N]}\mathbf{X}^{-a}. \tag{3.31}\]
These are invariant under the action of \(S(I_{K})\times S(\overline{I}_{K})\). One can derive Baxter type equations from the kernels of (3.30) and (3.31).
\[0=\mathbf{W}_{I_{K}}^{\prime}(\mathbf{X}^{-1})\cdot\mathfrak{Q}_{I_{K}}^{\prime} =\sum_{a=0}^{\infty}\mathcal{F}_{(a)}^{I_{K}[-a+1-\mathtt{m}+\mathtt{n}+M-N]} \mathfrak{Q}_{I_{K}}^{\prime[-2a]}\quad\text{if}\quad p_{i_{K}}=-1, \tag{3.32}\]
\[0=\mathbf{W}_{I_{K}}^{\prime}(\mathbf{X}^{-1})^{-1}\cdot\mathfrak{Q}_{I_{1}}^{\prime}=\sum_{a=0}^{\infty}(-1)^{a}\mathcal{F}_{(1^{a})}^{I_{K}[-a+1-\mathtt{m}+\mathtt{n}+M-N]}\mathfrak{Q}_{I_{1}}^{\prime[-2a]}\quad\text{if}\quad p_{i_{1}}=1, \tag{3.33}\]
where
\[\mathfrak{Q}^{\prime}_{I_{a}}=e^{\frac{u}{2}\log z_{i_{a}}}\left( \frac{\mathbb{Q}^{[M-N-\sum_{j\in I_{a}}p_{j}+p_{i_{a}}+1]}_{I_{a-1}}}{\mathbb{Q }^{[M-N-\sum_{j\in I_{a-1}}p_{j}-p_{i_{a}}+1]}_{I_{a-1}}}\right)^{p_{i_{a}}}, \tag{3.34}\]
and \(\mathbf{X}^{-1}\cdot\mathfrak{Q}^{\prime}_{I_{a}}=\mathfrak{Q}^{\prime[-2]}_{I_{ a}}\).
**Supersymmetric Cherednik-Bazhanov-Reshetikhin formula.** The tableau sum formula (3.19) has determinant expressions 18
Footnote 18: For the 180 degree rotated Young diagram \(\widetilde{\lambda\subset\mu}\), one can show
\[\mathcal{F}^{I_{K}}_{\widetilde{\lambda\subset\mu}}=\left|\left( \mathcal{F}^{I_{K}\left[\mu_{1}-\mu_{1}^{\prime}+\mu_{1}^{\prime}+\lambda_{j }^{\prime}-i-j+1\right]}_{\begin{subarray}{c}1\leq i\leq\mu_{1}\\ 1\leq j\leq\mu_{1}\end{subarray}\end{subarray}}\right|\right. \tag{3.35}\]
based on the identity \(|A_{ij}|_{1\leq i,j\leq K}=|A_{K+1-j,K+1-i}|_{1\leq i,j\leq K}\) for a matrix \((A_{ij})_{1\leq i,j\leq K}\). This corresponds to [(2.13), [4]].
\[\mathcal{F}^{I_{K}}_{\widetilde{\lambda\subset\mu}}=\left|\left( \mathcal{F}^{I_{K}\left[-\mu_{1}+\mu_{1}^{\prime}+\mu_{1}^{\prime}+\lambda_{j }^{\prime}-i-j+1\right]}_{\begin{subarray}{c}1\leq i\leq\mu_{1}^{\prime}\\ 1\leq j\leq\mu_{1}^{\prime}\end{subarray}}\right|, \tag{3.36}\] \[=\left|\left(\mathcal{F}^{I_{K}\left[-\mu_{1}+\mu_{1}^{\prime}+ \mu_{1}^{\prime}+\lambda_{j}^{\prime}-i-j+1\right]}_{\begin{subarray}{c}1 \leq i\leq\mu_{1}^{\prime}\\ 1\leq j\leq\mu_{1}^{\prime}\end{subarray}}\right|\right. \tag{3.37}\]
where \(\mathcal{F}^{I_{K}}_{\left(10\right)}=\mathcal{F}^{I_{K}}_{\left(0\right)}=1\) and \(\mathcal{F}^{I_{K}}_{\left(1^{a}\right)}=\mathcal{F}^{I_{K}}_{\left(a\right)}=0\) for \(a<0\). These determinant expressions for \(K=M+N\) correspond to the supersymmetric Cherednik-Bazhanov-Reshetikhin formulas (supersymmetric CBR formulas or quantum supersymmetric Jacobi-Trudi formulas) [15, 8] (see also [58, 57, 56]), which are supersymmetric extensions of the CBR formula (quantum Jacobi-Trudi formula) [12, 11]. In order to cancel the poles by the functions \(\mathbb{Q}_{\emptyset}\) and \(\mathbb{Q}_{I_{K}}\), we introduce the following transformation 19
Footnote 19: In the right hand side, we do not need the \(180^{\circ}\) rotated Young diagram (see [(2.14), [4]]) since we have changed the convention of the function (3.19).

for any skew Young diagram \(\lambda\subset\mu\):
\[\mathsf{F}^{I_{K}}_{\lambda\subset\mu}=\Phi^{I_{K}}_{\lambda \subset\mu}\mathcal{F}^{I_{K}}_{\lambda\subset\mu} \tag{3.38}\]
with the overall factor defined by
\[\Phi^{I_{K}}_{\lambda\subset\mu}=\mathbb{Q}^{[-\mu_{1}-\mu_{1}^{ \prime}+2\mu_{\mu_{1}^{\prime}}+\mathfrak{m}-\mathfrak{n}]}_{\mathbb{Q}^{[- \mu_{1}+\mu_{1}^{\prime}+2\lambda_{1}]}_{I_{K}}}\times\\ \times\prod_{j=1}^{\mu_{1}^{\prime}-1}\left(\mathbb{Q}^{[-\mu_{1 }+\mu_{1}^{\prime}-2j+2\mu_{j}+\mathfrak{m}-\mathfrak{n}]}_{\Omega_{I_{K}}^{ \left[-\mu_{1}+\mu_{1}^{\prime}-2j+2\lambda_{j}+2\right]}}\right)^{\theta((\mu _{j}-\mu_{j+1})(\mu_{j}-\lambda_{j})>0)}\times\\ \times\prod_{j=2}^{\min(\lambda_{1}^{\prime}+1,\mu_{1}^{\prime}) }\left(\mathbb{Q}^{[-\mu_{1}+\mu_{1}^{\prime}-2j+2\lambda_{j}+2]}_{I_{K}} \right)^{\theta((\lambda_{j-1}-\lambda_{j})(\mu_{j}-\lambda_{j})>0)}, \tag{3.39}\]
where \(\lambda_{\lambda^{\prime}_{1}+1}=0\), \(\theta(\mbox{True})=1\), \(\theta(\mbox{False})=0\), \(\prod_{j=c_{1}}^{c_{2}}(\ldots)=1\) if \(c_{1}>c_{2}\). In case the Young diagram \(\lambda\subset\mu\) is of rectangular shape, (3.39) reduces to
\[\Phi^{I_{K}}_{(\mu^{\prime}_{1})}=\mathbb{Q}^{[\![\![\![\![\![\![\![\![\![\![\![ \![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![ \![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![ \![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![ \![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\!\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\!\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\!\![\![\![\![\![\![\![\![\!\[\![\![\![\!\![\![\![\![\!\![\![\!\[\![\![\![\!\![\!\[\![\![\!\![\![\![\![\!\![\![\!\![\![\!\![\![\!\![\![\![\!\![\![\![\!\![\![\!\![\![\!\![\![\!\![\![\!\![\![\!\![\![\![\!\![\![\!\![\!\![\![\!\![\![\!\![\!\![\!\![\!\[\!\![\!\![\![\!\![\!\![\!\![\!\![\![\!\![\!\![\![\!\![\!\![\!\![\![\![\!\![\!\![\!\![\!\![\!\![\!\![\!
since it carries the \(gl(M|N)\) highest weight (2.8) for (2.10). In fact, we have \(\zeta(\mathsf{hw}_{\mu}(u))=(-1)^{\sum_{k=1}^{N}\max\{\mu_{k}^{\prime}-M,0\}}e^{ \Delta(h)}\), where \(e^{\epsilon_{a}(h)}=z_{a}\), \(a\in\mathfrak{I}\). Here we set \(\mu_{j}=0\) if \(j>\mu_{1}^{\prime}\), \(\mu_{k}^{\prime}=0\) if \(k>\mu_{1}\), \(\prod_{j=a}^{b}(\cdots)=1\) if \(a>b\).
Let us give examples for the case \(U_{q}(gl(2|1)^{(1)})\). In case \(I_{3}=(2,3,1)\), (3.41) reduces to
\[F_{1}=-\frac{z_{3}}{z_{1}}\frac{\mathbb{Q}_{231}^{[-1]}\mathbb{Q}_{2}^{[1]}}{ \mathbb{Q}_{231}^{[1]}\mathbb{Q}_{2}^{[-1]}},\qquad F_{2}=-\frac{z_{2}}{z_{3} }\frac{\mathbb{Q}_{23}^{[1]}\mathbb{Q}_{\emptyset}^{[-1]}}{\mathbb{Q}_{23}^{ [-1]}\mathbb{Q}_{\emptyset}^{[1]}}. \tag{3.44}\]
The Bethe straps for the T-functions (3.21) form connected graphs described in Figure 6. See [8] for more examples of Bethe straps for other representations. The notion of the Bethe strap in the analytic Bethe ansatz appeared [34, 35] before q-characters in representation theory [32, 33] were introduced. In the theory of q-characters, the parameters \(z_{j}\) are usually included in Q-functions and ratios of Q-functions are used as variables. In addition, the Q-functions \(\mathbb{Q}_{\emptyset}\) and \(\mathbb{Q}_{\mathfrak{I}}\), which are related to "vacuum parts" in the analytic Bethe ansatz, are normalized to \(1\), and the top term (3.43) is called the "highest weight monomial". The Bethe strap procedures may be mathematically justified by the theory of q-characters, but this is beyond the scope of this paper.
Figure 6: Bethe strap structures of T-functions for \(U_{q}(gl(2|1)^{(1)})\).
### Wronskian-type expressions of T-functions
Let us introduce a determinant 20 over a block matrix labeled by sets \(B,R,F,S\) (\(B\subset\mathfrak{B}\), \(|B|=\mathfrak{m}\); \(F\subset\mathfrak{F}\), \(|F|=\mathfrak{n}\); \(R,S\subset\mathbb{Z}\)):
Footnote 20: This is related to the sparse determinant in [eq.(3.24) in [1]] as \(\Delta_{F,S}^{B,R,[\xi]}=\Delta_{F,S,\emptyset,\emptyset}^{B,\emptyset,R,[0; \xi]}\), where we set \(B_{1}=B,B_{2}=T_{1}=T_{2}=\emptyset\), \(\eta=0\). Moreover, this is related to the determinant in [eq.(3.4) in [17]] as \(\Delta_{F,S}^{B,R,[\xi]}=\Delta_{F,S}^{B,R}(xq^{\xi})\) in case q-difference is adopted.
\[\Delta_{F,S}^{B,R,[\xi]}=\left|\begin{array}{cc}\left(\dfrac{\mathbb{Q}_{b,f}^{[\xi]}}{z_{b}-z_{f}}\right)_{\begin{subarray}{c}b\in B,\\ f\in F\end{subarray}}&\left(z_{b}^{j-1}\mathbb{Q}_{b}^{[\xi+2j-1]}\right)_{\begin{subarray}{c}b\in B,\\ j\in S\end{subarray}}\\[10pt]\left((-z_{f})^{i-1}\mathbb{Q}_{f}^{[\xi-2i+1]}\right)_{\begin{subarray}{c}i\in R,\\ f\in F\end{subarray}}&(0)_{|R|\times|S|}\end{array}\right|, \tag{3.45}\]
where \(\xi\in\mathbb{C}\), and \((0)_{|R|\times|S|}\) is \(|R|\) by \(|S|\) zero matrix. The number of the elements of the sets must satisfy \(|B|+|R|=|F|+|S|\). For any Young diagram \(\mu\), we introduce a number, called \((\mathfrak{m},\mathfrak{n})\)-index [26]:
\[\xi_{\mathfrak{m},\mathfrak{n}}(\mu):=\min\{j\in\mathbb{Z}_{\geq 1}|\mu_{j}+ \mathfrak{m}-j\leq\mathfrak{n}-1\}. \tag{3.46}\]
In particular, we have \(1\leq\xi_{\mathfrak{m},\mathfrak{n}}(\mu)\leq\mathfrak{m}+1\), \(\xi_{\mathfrak{m},0}(\mu)=\mathfrak{m}+1\) and \(\xi_{0,\mathfrak{n}}(\mu)=1\) for \(\mu_{\mathfrak{m}+1}\leq\mathfrak{n}\), and \(\xi_{\mathfrak{m},\mathfrak{n}}(\mu)=\mathfrak{m}+1\) for \(\mu_{\mathfrak{m}+1}\leq\mathfrak{n}\leq\mu_{\mathfrak{m}}\); \(\xi_{\mathfrak{m},\mathfrak{n}}(\emptyset)=\max\{\mathfrak{m}-\mathfrak{n}+1,1\}\). We often abbreviate \(\xi_{\mathfrak{m},\mathfrak{n}}(\mu)\) as \(\xi_{\mathfrak{m},\mathfrak{n}}\). The denominator formula of the supercharacter of \(gl(\mathfrak{m}|\mathfrak{n})\) can be written as [26]:
\[\mathsf{D}(B|F)=\frac{\prod_{\begin{subarray}{c}b,b^{\prime}\in B,\\ b<b^{\prime}\end{subarray}}(z_{b}-z_{b^{\prime}})\prod_{\begin{subarray}{c}f,f^{\prime}\in F,\\ f<f^{\prime}\end{subarray}}(z_{f^{\prime}}-z_{f})}{\prod_{(b,f)\in B\times F}(z_{b}-z_{f})}. \tag{3.47}\]
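As a small worked illustration of (3.46) and (3.47) (added here for orientation, not taken from [26]): let \((\mathfrak{m},\mathfrak{n})=(2,1)\), \(B=\{1,2\}\), \(F=\{3\}\), and \(\mu=(3,1)\). Then \(\mu_{1}+\mathfrak{m}-1=4\) and \(\mu_{2}+\mathfrak{m}-2=1\) both exceed \(\mathfrak{n}-1=0\), while \(\mu_{3}+\mathfrak{m}-3=-1\leq 0\), so \(\xi_{2,1}(\mu)=3=\mathfrak{m}+1\), in accordance with \(\mu_{\mathfrak{m}+1}=0\leq\mathfrak{n}=1\leq\mu_{\mathfrak{m}}=1\); for the empty diagram, \(\xi_{2,1}(\emptyset)=\max\{2-1+1,1\}=2\). In the same setting, (3.47) reads \(\mathsf{D}(B|F)=(z_{1}-z_{2})/((z_{1}-z_{3})(z_{2}-z_{3}))\).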
For any Young diagram \(\mu\), we introduce the following function 21
Footnote 21: \(\Gamma_{\mu}^{B,F[-\frac{\mathfrak{n}-\mathfrak{n}}{2}]}\) corresponds to eq.(3.15) in [17] (in a different normalization).
\[\mathsf{T}_{\mu}^{B,F}:=(-1)^{(\mathfrak{n}+\mathfrak{n}+1)(\xi _{\mathfrak{m},\mathfrak{n}}(\mu)+1)+\frac{(\mathfrak{n}-\mathfrak{n})( \mathfrak{n}+\mathfrak{n}-1)}{2}}\times\\ \times\frac{\Phi_{\mu}^{B,F}}{\Phi_{\mu}^{B,F}}\,\Psi_{\mu}^{( \mathfrak{m},\mathfrak{n})}\Delta_{F,(s_{1},s_{2},\ldots,s_{\mathfrak{G}, \mathfrak{n}-1})}^{B,(r_{1},r_{2},\ldots,r_{\mathfrak{n}-\mathfrak{n}+ \mathfrak{G},\mathfrak{n}-1}),[-\mathfrak{n}+\mathfrak{n}+\mu_{1}^{\prime}- \mu_{1}]}\mathsf{D}(B|F)^{-1}, \tag{3.48}\]
where \(s_{l}=\mu_{\xi_{\mathfrak{n},\mathfrak{n}}-l}+\mathfrak{m}-\mathfrak{n}-\xi_ {\mathfrak{m},\mathfrak{n}}(\mu)+l+1\), \(r_{k}=\mu_{\mathfrak{n}-\mathfrak{n}+\xi_{\mathfrak{n},\mathfrak{n}}-k}^{ \prime}+k-\xi_{\mathfrak{m},\mathfrak{n}}(\mu)+1\) and
\[\Psi_{\mu}^{(\mathfrak{m},\mathfrak{n})}=\frac{\mathbb{Q}_{ \emptyset}^{[\mathfrak{n}-\mathfrak{n}+\mu_{1}-\mu_{1}^{\prime}]}\mathbb{Q}_{ \emptyset}^{[-\mathfrak{n}+\mathfrak{n}-\mu_{1}+\mu_{1}^{\prime}]}\left( \mathbb{Q}_{\emptyset}^{[-\mathfrak{n}+\mathfrak{n}-\mu_{1}+\mu_{1}^{\prime}] }\right)^{-\mathfrak{n}-1+\xi_{\mathfrak{n},\mathfrak{n}}}}{\prod_{i=1}^{ \mathfrak{n}-\mathfrak{n}+\xi_{\mathfrak{n},\mathfrak{n}}-1}\mathbb{Q}_{ \emptyset}^{[-\mathfrak{n}+\mathfrak{n}-\mu_{1}+\mu_{1}^{\prime}-2z_{f}+2]} \prod_{j=1}^{\xi_{\mathfrak{n},\mathfrak{n}}-1}\mathbb{Q}_{\emptyset}^{[- \mathfrak{n}+\mathfrak{n}-\mu_{1}+\mu_{1}^{\prime}+2s_{j}-2]}, \tag{3.49}\] \[\frac{\Phi_{\mu}^{B,F}}{\Phi_{\mu}^{B,F}}=\mathbb{Q}_{ \emptyset}^{[-\mu_{1}-\mu_{1}^{\prime}+2\mu_{\mu_{1}^{\prime}}+\mathfrak{n}- \mathfrak{n}]}\left(\mathbb{Q}_{\emptyset}^{[\mu_{1}-\mu_{1}^{\prime}+\mathfrak{ n}-\mathfrak{n}]}\right)^{-1}\prod_{j=1}^{\mu_{1}^{\prime}-1}\left(\mathbb{Q}_{ \emptyset}^{[-\mu_{1}+\mu_{1}^{\prime}-2j+2\mu_{j}+\mathfrak{n}-\mathfrak{n}]} \right)^{\theta(\mu_{j}-\mu_{j+1}>0)}, \tag{3.50}\]
If the Young diagram \(\mu\) is of rectangular shape, the factor (3.50) becomes \(1\). In this case, the normalization of (3.48) reduces to the one in the previous paper [4]. We remark that the (super)character limit of (3.48) at \((\mathfrak{m},\mathfrak{n})=(M,N)\), namely \(\zeta(\mathsf{T}_{\mu}^{\mathfrak{B},\mathfrak{F}})\) coincides with the determinant formula [26] of the supercharacter of the highest weight representation \(V(\Lambda)\) of \(gl(M|N)\) with the highest weight (2.8) (with (2.10)). It is a natural generalization of the Schur function (Weyl-type formula). Specializing (3.48) for the empty diagram, we obtain a Wronskian-like determinant solution [Theorem 3.2 in [17]] of the QQ-relations (3.8) and (3.9):
\[\mathsf{T}_{\emptyset}^{B,F} =(-1)^{\frac{(\mathfrak{n}-\mathfrak{n})(\mathfrak{n}+\mathfrak{ n}-1)}{2}}\Psi_{\emptyset}^{(\mathfrak{n},\mathfrak{n})}\Delta_{F,(1, \mathfrak{n}-\mathfrak{n})}^{B,\langle 1,\mathfrak{n}-\mathfrak{n}\rangle,[- \mathfrak{n}+\mathfrak{n}]}\mathsf{D}(B|F)^{-1}\] \[=\mathbb{Q}_{B,F}\mathbb{Q}_{\emptyset}^{[\mathfrak{n}- \mathfrak{n}]}, \tag{3.51}\]
where we introduce notation
\[\langle a,b\rangle=\begin{cases}\{a,a+1,a+2,\ldots,b\}&\text{for}\quad b-a\in \mathbb{Z}_{\geq 0},\\ \emptyset&\text{for}\quad b-a\notin\mathbb{Z}_{\geq 0}.\end{cases} \tag{3.52}\]
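In particular, \(\langle 1,\mathfrak{m}-\mathfrak{n}\rangle=\{1,2,\ldots,\mathfrak{m}-\mathfrak{n}\}\) for \(\mathfrak{m}>\mathfrak{n}\) and \(\langle 1,\mathfrak{m}-\mathfrak{n}\rangle=\emptyset\) for \(\mathfrak{m}\leq\mathfrak{n}\) (and similarly with \(\mathfrak{m}\) and \(\mathfrak{n}\) interchanged), which is what separates the two explicit forms (3.53) and (3.54) below.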
Explicitly, (3.51) reads
\[\mathbb{Q}_{B,F}=\frac{(-1)^{\frac{(\mathfrak{n}-\mathfrak{n})( \mathfrak{n}+\mathfrak{n}-1)}{2}}}{(\mathbb{Q}_{\emptyset}^{[\mathfrak{n}- \mathfrak{n}]})^{\mathfrak{n}-1}\prod_{j=1}^{\mathfrak{n}-\mathfrak{n}} \mathbb{Q}_{\emptyset}^{[\mathfrak{n}-\mathfrak{n}+2j-2]}\mathsf{D}(B|F)} \left|\left(\frac{\mathbb{Q}_{b,f}^{[\mathfrak{n}-\mathfrak{n}]}}{z_{b}-z_{f }}\right)_{\begin{subarray}{c}b\in B,\\ f\in F\end{subarray}}\right.\hskip 10.0pt\left(z_{b}^{j-1}\mathbb{Q}_{b}^{[ \mathfrak{n}-\mathfrak{n}+2j-1]}\right)_{\begin{subarray}{c}b\in B,\\ j\in(\mathfrak{n}-\mathfrak{n})\end{subarray}}\right|\\ \text{for}\quad\mathfrak{m}\geq\mathfrak{n}, \tag{3.53}\]
\[\mathbb{Q}_{B,F}=\frac{(-1)^{\frac{(\mathfrak{n}-\mathfrak{n})( \mathfrak{n}+\mathfrak{n}-1)}{2}}}{(\mathbb{Q}_{\emptyset}^{[\mathfrak{n}- \mathfrak{n}]})^{\mathfrak{n}-1}\prod_{j=1}^{\mathfrak{n}-\mathfrak{n}} \mathbb{Q}_{\emptyset}^{[\mathfrak{n}-\mathfrak{n}-2j+2]}\mathsf{D}(B|F)} \left|\begin{pmatrix}\frac{\mathbb{Q}_{b,f}^{[\mathfrak{n}-\mathfrak{n}]}}{z_ {b}-z_{f}}\cr\frac{f\in F}{f\in F}\cr\left((-z_{f})^{i-1}\mathbb{Q}_{f}^{[ \mathfrak{n}-\mathfrak{n}-2i+1]}\right)_{\begin{subarray}{c}i\in(\mathfrak{n }-\mathfrak{n}),\\ f\in F\end{subarray}}\end{pmatrix}\right|\\ \text{for}\quad\mathfrak{m}\leq\mathfrak{n}. \tag{3.54}\]
For any rectangular Young diagram, we set
\[\mathsf{T}_{a,m}^{B,F}=\begin{cases}\mathsf{T}_{(m^{a})}^{B,F}&\text{for}\quad a,m\in\mathbb{Z}_{\geq 1}\\ \mathbb{Q}_{\emptyset}^{[\mathfrak{n}-\mathfrak{n}-\mathfrak{n}]}(\mathbb{Q}_ {\emptyset}^{[\mathfrak{n}-\mathfrak{n}+\mathfrak{n}]})^{-1}\mathsf{T}_{ \emptyset}^{B,F[a]}&\text{for}\quad a\in\mathbb{Z}_{\geq 0},\quad m=0\\ \mathbb{Q}_{\emptyset}^{[\mathfrak{n}-\mathfrak{n}+m]}(\mathbb{Q}_{ \emptyset}^{[\mathfrak{n}-\mathfrak{n}-m]})^{-1}\mathsf{T}_{\emptyset}^{B,F[-m ]}&\text{for}\quad a=0,\quad m\in\mathbb{Z}\\ 0&\text{otherwise}.\end{cases} \tag{3.55}\]
More explicitly, we have:
for \(a\leq\mathfrak{m}-\mathfrak{n}\), we have \(\xi_{\mathfrak{m},\mathfrak{n}}((m^{a}))=\mathfrak{m}-\mathfrak{n}+1\) and
\[\mathsf{T}_{a,m}^{B,F}=(-1)^{\frac{(\mathfrak{n}-\mathfrak{n})(\mathfrak{n} +\mathfrak{n}-1)}{2}}\Psi_{a,m}^{(\mathfrak{n},\mathfrak{n})}\Delta_{F,(1, \mathfrak{m}-\mathfrak{n}-a)\sqcup(\mathfrak{m}-\mathfrak{n}-a+m+1,\mathfrak{ m}-\mathfrak{n}+m)}^{B,\emptyset,[-\mathfrak{n}+\mathfrak{n}+a-m]}\mathsf{D}(B|F)^{-1}, \tag{3.56}\]
for \(a-m\leq\mathfrak{m}-\mathfrak{n}\leq a\), we have \(\xi_{\mathfrak{n},\mathfrak{n}}((m^{a}))=a+1\) and
\[\mathsf{T}^{B,F}_{a,m}=(-1)^{(\mathfrak{n}+\mathfrak{n}+1)a+\frac{(\mathfrak{n} -\mathfrak{n})(\mathfrak{n}+\mathfrak{n}-1)}{2}}\Psi^{(\mathfrak{m},\mathfrak{ n})}_{a,m}\Delta^{B,(\mathfrak{n},\mathfrak{n}-\mathfrak{n}+a),[-\mathfrak{n}+ \mathfrak{n}+\mathfrak{n}-m]}_{F,(\mathfrak{m}-\mathfrak{n}-a+m+1,\mathfrak{ n}-\mathfrak{n}+m)}\mathsf{D}(B|F)^{-1}, \tag{3.57}\]
for \(-m\leq\mathfrak{m}-\mathfrak{n}\leq a-m\), we have \(\xi_{\mathfrak{m},\mathfrak{n}}((m^{a}))=\mathfrak{m}-\mathfrak{n}+m+1\) and
\[\mathsf{T}^{B,F}_{a,m}=(-1)^{(\mathfrak{n}+\mathfrak{n}+1)m+\frac{(\mathfrak{ n}-\mathfrak{n})(\mathfrak{n}+\mathfrak{n}-1)}{2}}\Psi^{(\mathfrak{m}, \mathfrak{n})}_{a,m}\Delta^{B,(\mathfrak{n}-\mathfrak{n}-m+a+1,\mathfrak{n}- \mathfrak{n}+a),[-\mathfrak{m}+\mathfrak{n}+\mathfrak{n}-m]}_{F,(\mathfrak{1},\mathfrak{n}-\mathfrak{n}+m)}\mathsf{D}(B|F)^{-1}, \tag{3.58}\]
for \(\mathfrak{m}-\mathfrak{n}\leq-m\), we have \(\xi_{\mathfrak{m},\mathfrak{n}}((m^{a}))=1\) and
\[\mathsf{T}^{B,F}_{a,m}=(-1)^{\frac{(\mathfrak{n}-\mathfrak{n})(\mathfrak{n} +\mathfrak{n}-1)}{2}}\Psi^{(\mathfrak{m},\mathfrak{n})}_{a,m}\Delta^{B,( 1,\mathfrak{n}-\mathfrak{n}-m)\sqcup(\mathfrak{n}-\mathfrak{m}-m+a+1, \mathfrak{n}-\mathfrak{m}+a),[-\mathfrak{m}+\mathfrak{n}+\mathfrak{n}-m]}_{F,( \mathfrak{1},\mathfrak{n}-\mathfrak{n}+m)}\mathsf{D}(B|F)^{-1}, \tag{3.59}\]
where we set
\[\Psi^{(\mathfrak{m},\mathfrak{n})}_{a,m}=\begin{cases}\Psi^{(\mathfrak{m}, \mathfrak{n})}_{(m^{a})}&\text{for}\quad a,m\in\mathbb{Z}_{\geq 1}\\ \mathbb{Q}^{[\mathfrak{n}-\mathfrak{n}-\mathfrak{n}-\mathfrak{n}]}_{0}( \mathbb{Q}^{[\mathfrak{n}-\mathfrak{n}+\mathfrak{n}]}_{0})^{-1}\Psi^{( \mathfrak{n},\mathfrak{n})[a]}_{\emptyset}&\text{for}\quad a\in\mathbb{Z}_{ \geq 0},\quad m=0\\ \mathbb{Q}^{[\mathfrak{n}-\mathfrak{n}+m]}_{\emptyset}(\mathbb{Q}^{[ \mathfrak{n}-\mathfrak{n}-m]}_{0})^{-1}\Psi^{(\mathfrak{n},\mathfrak{n})[-m]}_ {\emptyset}&\text{for}\quad a=0,\quad m\in\mathbb{Z}\\ 1&\text{otherwise}.\end{cases} \tag{3.60}\]
Applying the Laplace expansion formula to (3.55)-(3.59), we obtain useful expressions [17]
\[\mathsf{T}^{B,F}_{a,m}=\sum_{J\subset F,\atop|J|=m}\frac{\prod_{j \in J}(-z_{j})^{a-m-\mathfrak{m}+\mathfrak{n}}\prod_{(b,j)\in B\times J}(z_{b }-z_{j})}{\prod_{(i,j)\in(F\setminus J)\times J}(z_{i}-z_{j})}\mathbb{Q}^{[ a]}_{B,F\setminus J}\mathbb{Q}^{[-a+\mathfrak{m}-\mathfrak{n}]}_{J}\\ \text{for}\quad a\geq m+\mathfrak{m}-\mathfrak{n}, \tag{3.61}\]
\[\mathsf{T}^{B,F}_{a,m} =\sum_{I\subset B,\atop|I|=a}\frac{\prod_{i\in I}z_{i}^{m-a+ \mathfrak{m}-\mathfrak{n}}\prod_{(i,f)\in I\times F}(z_{i}-z_{f})}{\prod_{(i,j )\in I\times(B\setminus I)}(z_{i}-z_{j})}\mathbb{Q}^{[m+\mathfrak{n}- \mathfrak{n}]}_{I}\mathbb{Q}^{[-m]}_{B\setminus I,F}\quad\text{for}\quad a \leq m+\mathfrak{m}-\mathfrak{n}, \tag{3.62}\]
where the summation is taken over all possible subsets. The T-functions \(\mathsf{T}^{B,F}_{a,m}\) solve [17] the T-system for \(U_{q}(gl(M|N)^{(1)})\) (or \(U_{q}(sl(M|N)^{(1)})\), or its Yangian counterpart) [15, 8]. The T-functions (3.61) and (3.62) are defined for the integer parameters \(a\) and \(m\). One can consider analytic continuation of (3.61) with respect to \(a\) and of (3.62) with respect to \(m\) to the whole complex plane. However, in most cases, the analytically continued T-functions for generic complex parameters (\(a\) or \(m\)) do not give T-functions for irreducible representations of the underlying algebra. Exceptions are T-functions for one-parameter families of finite dimensional typical irreducible representations of superalgebras, which correspond to (3.61) for \(m=\mathfrak{n}=N\) and (3.62) for \(a=\mathfrak{m}=M\) (analytic continuation of T-functions was considered in [16, 3]).
We also have [eq.(3.67) in [17]]:
\[\mathsf{T}^{B,F}_{\mu}=\mathsf{F}^{B,F}_{\mu} \tag{3.63}\]
This is proven for general (non-rectangular) Young diagrams in the case \(|F|=0\) and for rectangular Young diagrams in the case \(|B||F|\neq 0\); it remains a conjecture for general non-rectangular Young diagrams with \(|B||F|\neq 0\) (see page 426 in [17]).
T-functions for asymptotic representations. Let \(\mu=(\mu_{1},\mu_{2},\ldots,\mu_{\mu^{\prime}_{1}})\) be a partition with \(\mu^{\prime}_{1}\leq\mathfrak{m}\). For an integer \(c\geq\mathfrak{m}\), we define a partition \(\mu+(\mathfrak{n}^{c})=(\mu_{1}+\mathfrak{n},\mu_{2}+\mathfrak{n},\ldots,\mu_{\mu^{\prime}_{1}}+\mathfrak{n},\underbrace{\mathfrak{n},\ldots,\mathfrak{n}}_{c-\mu^{\prime}_{1}})\). We can show [cf. eq. (4.6) in [17]; eq. (3.52) in [4]]
\[\mathsf{T}^{B,F[\mu^{\prime}_{1}-c]}_{\mu+(\mathfrak{n}^{c})}=\prod_{f\in F}(-z _{f})^{c-\mathfrak{n}}\prod_{(b,f)\in B\times F}(z_{b}-z_{f})\left(\mathbb{Q}^ {[\mathfrak{n}-\mu_{1}-\mu^{\prime}_{1}+2\mu_{\mu^{\prime}_{1}}]}_{\emptyset} \right)^{-1}\mathbb{Q}^{[\mathfrak{n}-\mathfrak{n}-\mu_{1}+\mu^{\prime}_{1}-2 c]}_{F}\mathsf{T}^{B,\emptyset}_{\mu}, \tag{3.64}\]
where we use the property of the determinant
\[\begin{vmatrix}A&B\\ C&\mathbf{0}\end{vmatrix}=(-1)^{\mathfrak{m}\mathfrak{m}}|C||B| \tag{3.65}\]
for \(\mathfrak{m}\times\mathfrak{n}\) matrix \(A\), \(\mathfrak{m}\times\mathfrak{m}\) matrix \(B\), \(\mathfrak{n}\times\mathfrak{n}\) matrix \(C\) and \(\mathfrak{n}\times\mathfrak{m}\) zero matrix \(\mathbf{0}\), and the relation
\[\frac{\mathsf{D}(B|\emptyset)\mathsf{D}(\emptyset|F)}{\mathsf{D}(B|F)}=\prod_{ (b,f)\in B\times F}(z_{b}-z_{f}). \tag{3.66}\]
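As an elementary check of (3.65) and (3.66) in the smallest case (added here for illustration): for \(\mathfrak{m}=\mathfrak{n}=1\) with \(A=(a_{11})\), \(B=(b_{11})\), \(C=(c_{11})\), one has
\[\begin{vmatrix}a_{11}&b_{11}\\ c_{11}&0\end{vmatrix}=-b_{11}c_{11}=(-1)^{1\cdot 1}|C|\,|B|,\]
and for the index sets \(B=\{1\}\), \(F=\{2\}\), the relation (3.66) reads \(\mathsf{D}(B|\emptyset)\mathsf{D}(\emptyset|F)/\mathsf{D}(B|F)=1\cdot 1\cdot(z_{1}-z_{2})=\prod_{(b,f)\in B\times F}(z_{b}-z_{f})\).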
We consider the following limit of (3.64) with respect to the parameter \(c\) [cf. eq. (3.53) in [4]]:
\[\mathbb{Q}^{[\mathfrak{n}-\mu_{1}-\mu^{\prime}_{1}+2\mu_{\mu^{\prime}_{1}}]}_{\emptyset}\lim_{c}\prod_{f\in F}(-z_{f})^{-c}\,\mathsf{T}^{B,F[\mu^{\prime}_{1}-c]}_{\mu+(\mathfrak{n}^{c})}=\prod_{f\in F}(-z_{f})^{-\mathfrak{n}}\prod_{(b,f)\in B\times F}(z_{b}-z_{f})\mathsf{T}^{B,\emptyset}_{\mu} \tag{3.67}\]
where we assume \(\lim_{c}\mathbb{Q}^{[-2c]}_{\emptyset}=\lim_{c}\mathbb{Q}^{[-2c]}_{F}=1\). We will use reductions of (3.67) to describe T-functions for spinorial representations of orthosymplectic superalgebras. One can also use \(\mathsf{F}^{B,\emptyset}_{\mu}\) instead of \(\mathsf{T}^{B,\emptyset}_{\mu}\) in (3.67).
## 4 Reductions of T- and Q-functions
Now we would like to explain details of our proposal [section 3.7 in [1]] and its extension. In this section we consider reductions of the T- and Q-functions introduced in the previous section. The reduction procedures in this section are an extension of the methods used to derive the Bethe ansatz equations and T-functions for twisted quantum affine algebras from the ones for non-twisted quantum affine algebras [24] (see also [69]). We will also consider an extension of [25], in which a reduction from the \(U_{q}(sl(2r+2)^{(1)})\) case to the \(U_{q}(sp(2r)^{(1)})\) case was discussed. Besides these, one finds new features that are not present in [24, 69, 25].
### Reductions of Q-functions by automorphisms
We find that reductions on the QQ-relations by the map \(\sigma\) (and some dualities among different superalgebras [19]) produce QQ-relations (and from zeros of Q-functions, Bethe
equations) and T-functions (and in particular, solutions of the T-systems) associated with algebras different from the original ones. The reductions here are basically accomplished by identifying the image of the Q-functions and the parameters \(\{z_{a}\}\) by the map \(\sigma\) with the original ones (up to overall factors and manipulations on the spectral parameter in some cases). Let \(\mathfrak{D}\) be a subset of the sets \(\mathfrak{B}\) or \(\mathfrak{F}\) such that \(\mathfrak{D}^{*}=\mathfrak{D}\) and \(|\mathfrak{D}|=2\), or \(\mathfrak{D}=\emptyset\). Let us consider "\(gl(M|N)^{(2)}\) type reduction" by \(\sigma\)22 :
Footnote 22: It may be possible to generalize this by considering compositions of \(\sigma\) and the \(GL(M)\times GL(N)\)-symmetry [1, 59] of the QQ-relations. Here we consider only the simplest case.
\[\sigma(\mathbb{Q}_{I})=\mathbb{Q}_{(\mathfrak{I}\setminus I)^{*}}=\mathbb{Q}_{I}^{[\eta]}\quad\text{for}\quad I\subset\mathfrak{I},\] \[\sigma(z_{a})=z_{a^{*}}^{-1}=z_{a}\quad\text{for}\quad a\in\mathfrak{I}\setminus\mathfrak{D}, \tag{4.1}\] \[\sigma(z_{a})=z_{a^{*}}^{-1}=-z_{a}=1\quad\text{or}\ -1\quad\text{for}\quad a\in\mathfrak{D},\]
where \(\eta=0\), or \(2\eta\in\mathbb{C}^{\times}\) is the common period 23 of the Q-functions: \(|\eta|\) is the minimal non-zero number such that \(\mathbb{Q}_{J}^{[2\eta]}=\mathbb{Q}_{J}\) for all \(J\subset\mathfrak{I}\). In particular, we have
Footnote 23: Depending on the normalization of the Q-functions, a sign factor may appear \(\mathbb{Q}_{J}^{[2\eta]}=\pm\mathbb{Q}_{J}\). We normalize the Q-functions so that the sign factor is always \(1\). Let \(\sigma\) be an automorphism of order \(\kappa\) (\(\sigma^{\kappa}=1\)). In general, \(\kappa\eta\) corresponds to the common period of the Q-functions in case \(\eta\neq 0\). Here we consider only the case \(\kappa=2\).
\[\mathbb{Q}_{\mathfrak{B}}=\mathbb{Q}_{\mathfrak{F}}^{[\eta]}, \tag{4.2}\] \[\mathbb{Q}_{\mathfrak{I}}=\mathbb{Q}_{\mathfrak{B},\mathfrak{F}} =\mathbb{Q}_{\emptyset}^{[\eta]}. \tag{4.3}\]
We remark that \(\sigma(I)=I\) holds if and only if \(|I|=(M+N)/2\) and \(I\cap I^{*}=\emptyset\). This is possible only if both \(M\) and \(N\) are even numbers. For this index set \(I\), the Q-function satisfies \(\mathbb{Q}_{I}^{[\eta]}=\mathbb{Q}_{I}\). In case \(\eta\neq 0\), this suggests a factorization \(\mathbb{Q}_{I}=\mathbf{Q}_{I}\mathbf{Q}_{I}^{[\eta]}=:\mathbf{Q}_{I}^{2}\), where \(\mathbf{Q}_{I}^{[2\eta]}=\mathbf{Q}_{I}\). In case the Q-function \(\mathbb{Q}_{I}\) has the form (3.4), this means that
\[\mathbb{Q}_{I}=\mathbb{Q}_{I}(u) =\prod_{j=1}^{n_{I}}(1-q^{-2u+2u_{j}^{I}})\] \[=\prod_{j=1}^{m_{I}}(1-q^{-2u+2v_{j}^{I}})(1+q^{-2u+2v_{j}^{I}})= \prod_{j=1}^{m_{I}}(1-q^{-4u+4v_{j}^{I}}), \tag{4.4}\]
where
\[\mathbf{Q}_{I}=\mathbf{Q}_{I}(u)=\prod_{j=1}^{m_{I}}(1-q^{-2u+2v_{j}^{I}}),\]
\[n_{I}=2m_{I},\qquad\{u_{j}\}_{j=1}^{n_{I}}=\{v_{j}\}_{j=1}^{m_{I}}\sqcup\{v_{ j}+\eta\}_{j=1}^{m_{I}},\qquad\eta=\frac{\pi i}{2\log q}. \tag{4.5}\]
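To see the factorization in (4.4)-(4.5) explicitly, note that \(q^{2\eta}=e^{2\eta\log q}=e^{\pi i}=-1\) for \(\eta=\frac{\pi i}{2\log q}\), so that the root \(v_{j}^{I}+\eta\) contributes \(1-q^{-2u+2(v_{j}^{I}+\eta)}=1+q^{-2u+2v_{j}^{I}}\); each pair of roots \(\{v_{j}^{I},v_{j}^{I}+\eta\}\) thus contributes
\[(1-q^{-2u+2v_{j}^{I}})(1+q^{-2u+2v_{j}^{I}})=1-q^{-4u+4v_{j}^{I}},\]
in agreement with (4.4).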
If \(M\) or \(N\) are odd, fixed points \(\mathfrak{f}\in\mathfrak{I}\) by \({}^{*}\) appear: \(\mathfrak{f}^{*}=\mathfrak{f}\), \(\sigma(z_{\mathfrak{f}})=z_{\mathfrak{f}}^{-1}=z_{\mathfrak{f}}\). Thus we have \(z_{\mathfrak{f}}=\pm 1\). The minus sign \(z_{\mathfrak{f}}=-1\) effectively changes the sign of \(p_{\mathfrak{f}}\) from the grading of the
superalgebra (see (3.3) and (3.1)), which induces a duality among superalgebras. We will set \(z_{\mathfrak{f}}=-1\) (resp. \(z_{\mathfrak{f}}=1\)) when we consider reductions to non-twisted quantum affine superalgebras (resp. twisted quantum affine superalgebras). In case we consider reductions to twisted quantum affine superalgebras, we assume \(\eta\neq 0\). In case we consider reductions to quantum affine superalgebras of type A and B, we assume \(\mathfrak{D}=\emptyset\) (regular reduction). We need more reductions on Q-functions in addition to (4.1) for reduction to quantum affine superalgebras of type C and D, where we assume \(\mathfrak{D}\neq\emptyset\) (singular reduction).
In case \(B=\mathfrak{B}\), \(F=\mathfrak{F}\), \(\mathbb{Q}_{\mathfrak{B},\mathfrak{F}\setminus J}=\mathbb{Q}_{J^{\star}}^{[ \eta]}\) and \(\mathbb{Q}_{\mathfrak{B}\setminus I,\mathfrak{F}}=\mathbb{Q}_{I^{\star}}^{[ \eta]}\) hold in (3.61) and (3.62). Thus they reduce to
\[\mathsf{T}_{a,m}^{\mathfrak{B},\mathfrak{F}}=\sum_{\stackrel{{ J\subset\mathfrak{F},}}{{|J|=m}}}\frac{\prod_{j\in J}(-z_{j})^{a-m-M+N} \prod_{(i,j)\in\mathfrak{B}\times J}(z_{b}-z_{j})}{\prod_{(i,j)\in(\mathfrak{ F}\setminus J)\times J}(z_{i}-z_{j})}\mathbb{Q}_{J^{\star}}^{[a+\eta]} \mathbb{Q}_{J}^{[-a+M-N]}\\ \text{for}\quad a\geq m+M-N, \tag{4.6}\]
\[\mathsf{T}_{a,m}^{\mathfrak{B},\mathfrak{F}}=\sum_{\stackrel{{ I\subset\mathfrak{B},}}{{|I|=a}}}\frac{\prod_{i\in I}z_{i}^{m-a+M-N}\prod_{(i,f)\in I \times\mathfrak{F}}(z_{i}-z_{f})}{\prod_{(i,j)\in I\times(\mathfrak{B} \setminus I)}(z_{i}-z_{j})}\mathbb{Q}_{I}^{[m+M-N]}\mathbb{Q}_{I^{\star}}^{[- m+\eta]}\quad\text{for}\quad a\leq m+M-N. \tag{4.7}\]
In case \(\mathfrak{D}=\emptyset\), one can show
\[\mathsf{T}_{a,N-m}^{\emptyset,\mathfrak{F}[\eta]}=\left(\prod_{ f\in\mathfrak{F}}(-z_{f})\right)^{a}\mathsf{T}_{a,m}^{\emptyset,\mathfrak{F}} \quad\text{for}\quad a\geq 0,\quad 0\leq m\leq N, \tag{4.8}\] \[\mathsf{T}_{M-a,m}^{\mathfrak{B},\emptyset[\eta]}=\left(\prod_{ b\in\mathfrak{B}}z_{b}\right)^{m}\mathsf{T}_{a,m}^{\mathfrak{B},\emptyset} \quad\text{for}\quad 1\leq a\leq M,\quad m\geq 0, \tag{4.9}\]
where the prefactors of (4.8) and (4.9) take \(1\) or \(-1\). Let \(\hat{\mathsf{T}}_{a,m}^{\mathfrak{B},\mathfrak{F}}\) be an analytic continuation of the right hand side of (4.6) with respect to \(a\) and that of (4.7) with respect to \(m\) to the whole complex plane. One can show
\[\hat{\mathsf{T}}_{-a+M-N,m}^{\mathfrak{B},\mathfrak{F}[\eta]}= \left(-\frac{\prod_{i\in\mathfrak{F}}z_{i}}{\prod_{b\in\mathfrak{B}}z_{b}} \right)^{m}\hat{\mathsf{T}}_{a,m}^{\mathfrak{B},\mathfrak{F}}\quad\text{for $a \in\mathbb{C}$ and $\mathfrak{D}\cap\mathfrak{F}=\emptyset$ for \eqref{eq:def_def_def_def}}, \tag{4.10}\] \[\hat{\mathsf{T}}_{a,-m-M+N}^{\mathfrak{B},\mathfrak{F}[\eta]}= \left((-1)^{N-M+a}\frac{\prod_{j\in\mathfrak{B}}z_{j}}{\prod_{f \in\mathfrak{F}}z_{f}}\right)^{a}\hat{\mathsf{T}}_{a,m}^{\mathfrak{B}, \mathfrak{F}}\quad\text{for $m\in\mathbb{C}$ and $\mathfrak{D}\cap\mathfrak{B}=\emptyset$ for \eqref{eq:def_def_def_def}}, \tag{4.11}\]
where the prefactors of (4.10) and (4.11) take \(1\) or \(-1\).
### Symmetric nesting path
Let \(I_{M+N}=(i_{1},i_{2},\ldots,i_{M+N})\) be any one of the permutations of \((1,2,\ldots,M+N)\), and \(I_{a}=(i_{1},i_{2},\ldots,i_{a})\) be the first \(a\) elements of it, where \(0\leq a\leq M+N\). In particular,
\(I_{0}\) and \(I_{M+N}\) coincide with \(\emptyset\) and \(\mathfrak{I}\) as sets, respectively. The set of the tuples \(\{I_{a}\}_{a=0}^{M+N}\) is called the _nesting path_ defined by the tuple \(I_{M+N}\). For \(1\leq a\leq M+N-1\), we define \(\widetilde{I}_{a}=(i_{1},i_{2},\ldots,i_{a-1},i_{a+1})\). In this subsection, we consider a special class of nesting paths.
Suppose \(i_{k}^{*}=i_{M+N+1-k}\) holds for any \(k\in\mathfrak{I}\). In this case, \(I_{M+N}=(i_{1},i_{2},\ldots,i_{\frac{M+N}{2}},i_{\frac{M+N}{2}}^{*},\ldots,i_{ 2}^{*},i_{1}^{*})\) if both \(M+N\) and \(MN\) are even, and \(I_{M+N}=(i_{1},i_{2},\ldots,i_{\frac{M+N-1}{2}},\mathfrak{f},i_{\frac{M+N-1}{ 2}}^{*},\ldots,i_{2}^{*},i_{1}^{*})\) if \(M+N\) is odd and \(MN\) is even (there is a fixed point \(\mathfrak{f}:=i_{\frac{M+N+1}{2}}^{*}=i_{\frac{M+N+1}{2}};\mathfrak{f}=M/2\) if \(M\) is even, and \(\mathfrak{f}=M+N/2\) if \(N\) is even). We may say that the nesting path is _symmetric_ in the sense that the Q-functions along this nesting path are symmetric up to the half period: \(\mathbb{Q}_{I_{a}}^{[\eta]}=\mathbb{Q}_{I_{M+N-a}}\) for any \(a\in\{0,1,\ldots M+N\}\). Note that this is impossible if \(MN\) is odd since two fixed points \(((M+1)/2)^{*}=(M+1)/2\), \((M+(N+1)/2)^{*}=M+(N+1)/2\) appear. In this case, we propose to consider_almost symmetric nesting path_ defined by \(i_{k}^{*}=i_{M+N+1-k}\) for any \(k\in\mathfrak{I}\setminus\{(M+N)/2,(M+N)/2+1\}\) and \((\mathfrak{f}_{1},\mathfrak{f}_{2}):=(i_{\frac{M+N}{2}},i_{\frac{M+N}{2}+1})=( (M+1)/2,M+(N+1)/2)\) or \((M+(N+1)/2,(M+1)/2)\), namely \(I_{M+N}=(i_{1},i_{2},\ldots,i_{\frac{M+N-2}{2}},\mathfrak{f}_{1},\mathfrak{f} _{2},i_{\frac{M+N-2}{2}}^{*},\ldots,i_{2}^{*},i_{1}^{*})\). Along this almost symmetric nesting path, we have \(\mathbb{Q}_{I_{a}}^{[\eta]}=\mathbb{Q}_{I_{M+N-a}}\) for any \(a\in\{0,1,\ldots M+N\}\setminus\{(M+N)/2\}\), and \(\mathbb{Q}_{I_{\frac{M+N}{2}}}^{[\eta]}=\mathbb{Q}_{I_{\frac{M+N-2}{2}}, \mathfrak{f}_{1}}=\mathbb{Q}_{I_{\frac{M+N-2}{2}},\mathfrak{f}_{2}}\). See Figures 7,8,9 for examples of symmetric nesting paths. We remark that \(I_{a}\in\mathfrak{A}\) holds as set if \(a\leq(M+N)/2\) and \(MN\) is even, or if \(a\leq(M+N-2)/2\) and \(MN\) is odd.
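For orientation, consider the smallest case with \(M+N\) odd and \(MN\) even, namely \((M,N)=(2,1)\) (the \(U_{q}(gl(2|1)^{(1)})\) setting of the examples above), where the involution \({}^{*}\) acts as \(1^{*}=2\), \(2^{*}=1\), \(3^{*}=3\). The condition \(i_{k}^{*}=i_{M+N+1-k}\) then forces \(i_{2}=\mathfrak{f}=3\) and \(i_{3}=i_{1}^{*}\), so the symmetric nesting paths are exactly the ones defined by the tuples \(I_{3}=(1,3,2)\) and \(I_{3}=(2,3,1)\); the latter is the tuple used for (3.44) above.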
The functions \(\mathcal{X}_{I_{a}}\) on any symmetric (or almost symmetric) nesting path can be expressed in terms of the Q-functions \(\mathbb{Q}_{I_{a}}\) for \(a\leq\frac{M+N}{2}\). In fact, (3.3) for \(a>\frac{M+N}{2}\) can be rewritten as
\[\mathcal{X}_{I_{M+N+1-a}}=z_{i_{a}^{*}}\frac{\mathbb{Q}_{I_{a-1}}^{[\sum_{j\in I _{a}}p_{j}+p_{i_{a}}+\eta]}\mathbb{Q}_{I_{a}}^{[\sum_{j\in I_{a}}p_{j}-2p_{i_ {a}}+\eta]}}{\mathbb{Q}_{I_{a-1}}^{[\sum_{j\in I_{a}}p_{j}-p_{i_{a}}+\eta]} \mathbb{Q}_{I_{a}}^{[\sum_{j\in I_{a}}p_{j}+\eta]}}\quad\text{for}\quad a\leq \frac{M+N}{2}. \tag{4.12}\]
Although the T-function (3.19) can be non-zero on the \([M,N]\)-hook (see Figures 2, 3, 4, 5), the main target domain after the reduction is the \([r,s]\)-hook, in which we have observations on the meaning (relationship with the labels of representations, irreducibility) of T-functions through the Bethe strap (see subsection 4.6). The T-system (for tensor representations; cf. [2, 3]) will be defined on the \([r,s]\)-hook (or its extension). We expect that the T-functions outside of the \([r,s]\)-hook are described in terms of the ones in the \([r,s]\)-hook (see [56, 4] for the case \(U_{q}(so(2r+1)^{(1)})\)). The main target domain of the Wronskian-type expression of the T-function (3.48) (for \((M,N)=(\mathfrak{m},\mathfrak{n})\)) is also the \([r,s]\)-hook after the reduction. The identity (3.63) (with (3.38)) assumes the QQ-relations (3.8) and (3.9), so further investigation is needed to see how far it holds when the consideration is restricted to the symmetric nesting paths. Note, however, that the identity (3.63) always holds under the (super)character limit (3.22). After the reduction, \(\zeta(\mathsf{T}_{\mu}^{\mathfrak{B},\mathfrak{F}})\) gives a Weyl-type (super)character formula for finite dimensional representations of quantum affine superalgebras (or super-Yangians). This does not always give irreducible (super)characters, especially for type C or D superalgebras. We expect that criteria for irreducibility and cues for extracting irreducible components are suggested by the Bethe strap (see subsection 4.6).
The simple root system \(\{\alpha_{a}=\epsilon_{i_{M+N+1-a}}-\epsilon_{i_{M+N-a}}=\epsilon_{i_{a}^{*}}- \epsilon_{i_{a+1}^{*}}\}_{a=1}^{M+N-1}\) on the symmetric nesting path is symmetric in the sense that
\(\sigma(\alpha_{a})=-\epsilon_{i_{a}}+\epsilon_{i_{a+1}}=\alpha_{M+N-a}\), and thus the associated Dynkin diagram is symmetric with respect to the map \(\sigma\).
Let \(\mathfrak{W}\) be a subgroup of the permutation group \(S(I_{M+N})=S(\mathfrak{I})\), which preserves the entire set of symmetric nesting paths. \(\mathfrak{W}\) preserves the whole set of symmetric Dynkin diagrams of \(gl(M|N)\). We will discuss the invariance of T-functions under this symmetry (the \(\mathfrak{W}\)-symmetry).
### Regular reductions
In this subsection, we consider reductions for the case \(\mathfrak{D}=\emptyset\). In this case the resultant Bethe ansatz equations are always reductions of the ones for \(U_{q}(gl(M|N)^{(1)})\).
#### 4.3.1 \(U_{q}(osp(2r+1|2s)^{(1)})\) case
We assume \(r,s\in\mathbb{Z}_{\geq 0}\), \(r+s\geq 1\), and set
\[(M,N)=(2r,2s+1),\quad\mathfrak{B}=\{1,2,\ldots,2r\},\quad\mathfrak{ F}=\{2r+1,2r+2,\ldots,2r+2s+1\},\\ \mathfrak{D}=\emptyset,\quad\eta=0,\quad z_{2r+s+1}=-1. \tag{4.13}\]
In particular for \(s=0\), this reduces to the case \(U_{q}(so(2r+1)^{(1)})\), which is already explained in [4].
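For example, for \(r=s=1\) the data (4.13) become \((M,N)=(2,3)\), \(\mathfrak{B}=\{1,2\}\), \(\mathfrak{F}=\{3,4,5\}\), \(\mathfrak{D}=\emptyset\), \(\eta=0\), and \(z_{4}=-1\) (the distinguished index is \(2r+s+1=4\in\mathfrak{F}\)); this is the \(U_{q}(osp(3|2)^{(1)})\) case.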
QQ-relations. For a symmetric nesting path defined by \(I_{2r+2s+1}=(i_{1},i_{2},\ldots,i_{r+s},2r+s+1,i_{r+s}^{*},\ldots,i_{2}^{*},i_{1}^{*})\), the QQ-relations (3.8) and (3.9) reduce to
\[(z_{i_{a}}-z_{i_{a+1}})\mathbb{Q}_{I_{a-1}}\mathbb{Q}_{I_{a+1}}=z_ {i_{a}}\mathbb{Q}_{I_{a}}^{[p_{i_{a}}]}\mathbb{Q}_{\widetilde{I}_{a}}^{[-p_{i_ {a}}]}-z_{i_{a+1}}\mathbb{Q}_{I_{a}}^{[-p_{i_{a}}]}\mathbb{Q}_{\widetilde{I}_{a }}^{[p_{i_{a}}]}\] \[\qquad\text{for}\quad a\in\{1,2,\ldots,r+s-1\},\quad p_{i_{a}}=p_ {i_{a+1}}, \tag{4.14}\]
Figure 7: Hasse diagrams for Q-functions: The thick lines denote symmetric nesting paths. There are two symmetric nesting paths for each algebra.
Figure 8: Hasse diagrams for Q-functions: The thick or dotted lines denote symmetric nesting paths. There are eight symmetric nesting paths for each algebra.
Figure 9: Hasse diagrams for Q-functions: The thick or dotted lines denote symmetric nesting paths. There are eight symmetric nesting paths.
\[(z_{i_{a}}-z_{i_{a+1}})\mathbb{Q}_{I_{a}}\mathbb{Q}_{\widetilde{I}_{a}}=z_{i_{a}} \mathbb{Q}_{I_{a-1}}^{[-p_{i_{a}}]}\mathbb{Q}_{I_{a+1}}^{[p_{i_{a}}]}-z_{i_{a+1}} \mathbb{Q}_{I_{a-1}}^{[p_{i_{a}}]}\mathbb{Q}_{I_{a+1}}^{[-p_{i_{a}}]}\]
\[\text{for}\quad a\in\{1,2,\ldots,r+s-1\},\quad p_{i_{a}}=-p_{i_{a+1}}, \tag{4.15}\]
\[(z_{i_{r+s}}+1)\mathbb{Q}_{I_{r+s-1}}\mathbb{Q}_{I_{r+s}}=z_{i_{r+s}}\mathbb{Q }_{I_{r+s}}^{[-1]}\mathbb{Q}_{\widetilde{I}_{r+s}}^{[1]}+\mathbb{Q}_{I_{r+s}}^ {[1]}\mathbb{Q}_{\widetilde{I}_{r+s}}^{[-1]}\quad\text{for}\quad p_{i_{r+s}}=-1, \tag{4.16}\]
\[(z_{i_{r+s}}+1)\mathbb{Q}_{I_{r+s}}\mathbb{Q}_{\widetilde{I}_{r+s}}=z_{i_{r+s}} \mathbb{Q}_{I_{r+s-1}}^{[-1]}\mathbb{Q}_{I_{r+s}}^{[1]}+\mathbb{Q}_{I_{r+s-1}} ^{[1]}\mathbb{Q}_{\widetilde{I}_{r+s}}^{[-1]}\quad\text{for}\quad p_{i_{r+s}}=1, \tag{4.17}\]
where \(\widetilde{I}_{a}=(i_{1},i_{2},\ldots,i_{a-1},i_{a+1})\), \(i_{r+s+1}=2r+s+1\). Eqs. (4.14) and (4.16) are reductions of (3.8) for \(I=I_{a-1}\), \((i,j)=(i_{a},i_{a+1})\) for \(1\leq a\leq r+s-1\) and \(a=r+s\), respectively 24. Eqs. (4.15) and (4.17) are reductions of (3.9) for \(I=I_{a-1}\), \((i,j)=(i_{a},i_{a+1})\) for \(1\leq a\leq r+s-1\) and \(a=r+s\), respectively. Instead of (4.17), one may use
Footnote 24: The QQ-relations for \(a>r+s\) reduce to the ones for \(a\leq r+s\). For example, (4.16) is also a reduction of (3.8) for \(I=I_{r+s}\), \((i,j)=(2r+s+1,i_{r+s}^{*})\).
\[(z_{i_{r+s}}-1)\mathbb{Q}_{I_{r+s-1}}^{[1]}\mathbb{Q}_{I_{r+s-1}}=z_{i_{r+s}}\mathbb{Q}_{I_{r+s}}^{[1]}\mathbb{Q}_{\breve{I}_{r+s}}-\mathbb{Q}_{I_{r+s}}\mathbb{Q}_{\breve{I}_{r+s}}^{[1]}\quad\text{for}\quad p_{i_{r+s}}=1, \tag{4.18}\]
where \(\breve{I}_{r+s}=(i_{1},i_{2},\ldots,i_{r+s-1},i_{r+s}^{*})\). One can derive (4.18) in the same way as explained in [section 3.2, [4]] for \(s=0\) case. We will use the following QQ-relations:
\[(z_{i_{r+s}}-z_{i_{r+s}}^{-1})\mathbb{Q}_{I_{r+s-1}}\mathbb{Q}_{ \widetilde{I}_{r+s}}=z_{i_{r+s}}\mathbb{Q}_{I_{r+s}}^{[1]}\mathbb{Q}_{I_{r+s}} ^{[-1]}-z_{i_{r+s}}^{-1}\mathbb{Q}_{I_{r+s}}^{[-1]}\mathbb{Q}_{\widetilde{I}_{ r+s}}^{[1]}\quad\text{for}\quad p_{i_{r+s}}=1, \tag{4.19}\] \[(z_{i_{r+s}}^{-1}+1)\mathbb{Q}_{\breve{I}_{r+s}}\mathbb{Q}_{ \widetilde{I}_{r+s}}=z_{i_{r+s}}^{-1}\mathbb{Q}_{I_{r+s-1}}^{[-1]}\mathbb{Q}_ {\breve{I}_{r+s}}^{[1]}+\mathbb{Q}_{I_{r+s-1}}^{[1]}\mathbb{Q}_{\breve{I}_{r+s }}^{[-1]}\quad\text{for}\quad p_{i_{r+s}}=1. \tag{4.20}\]
Eqs. (4.19) and (4.20) are reductions of (3.8) for \(I=I_{r+s-1}\), \((i,j)=(i_{r+s},i_{r+s}^{*})\) and (3.9) for \(I=I_{r+s-1}\), \((i,j)=(i_{r+s}^{*},2r+s+1)\), respectively. Now we prove (4.18) step by step as follows.
\[[\text{left hand side of (4.18)}]\times(z_{i_{r+s}}-z_{i_{r+s}}^{-1})\mathbb{Q}_{\widetilde{I}_{r+s}}=\\ =(z_{i_{r+s}}-1)\mathbb{Q}_{I_{r+s-1}}^{[1]}\underbrace{(z_{i_{r+s}}-z_{i_{r+s}}^{-1})\mathbb{Q}_{I_{r+s-1}}\mathbb{Q}_{\widetilde{I}_{r+s}}}_{\text{apply (4.19)}}\\ =(z_{i_{r+s}}-1)\mathbb{Q}_{I_{r+s-1}}^{[1]}(z_{i_{r+s}}\mathbb{Q}_{I_{r+s}}^{[1]}\mathbb{Q}_{\widetilde{I}_{r+s}}^{[-1]}-z_{i_{r+s}}^{-1}\mathbb{Q}_{I_{r+s}}^{[-1]}\mathbb{Q}_{\widetilde{I}_{r+s}}^{[1]}), \tag{4.21}\]
\[[\text{right hand side of (4.18)}]\times(z_{i_{r+s}}-z_{i_{r+s}}^{-1})\mathbb{Q}_{\widetilde{I}_{r+s}}=\\ =(z_{i_{r+s}}-z_{i_{r+s}}^{-1})(z_{i_{r+s}}\mathbb{Q}_{I_{r+s}}^{[1]}\underbrace{\mathbb{Q}_{\breve{I}_{r+s}}\mathbb{Q}_{\widetilde{I}_{r+s}}}_{\text{apply (4.20)}}-\underbrace{\mathbb{Q}_{I_{r+s}}\mathbb{Q}_{\widetilde{I}_{r+s}}}_{\text{apply (4.17)}}\mathbb{Q}_{\widetilde{I}_{r+s}}^{[1]}\\ =(z_{i_{r+s}}-z_{i_{r+s}}^{-1})\big{(}z_{i_{r+s}}\mathbb{Q}_{I_{r+s}}^{[1]}(z_{i_{r+s}}^{-1}+1)^{-1}(z_{i_{r+s}}^{-1}\mathbb{Q}_{I_{r+s}}^{[-1]}\mathbb{Q}_{\widetilde{I}_{r+s}}^{[1]}+\mathbb{Q}_{\widetilde{I}_{r+s-1}}^{[1]}\mathbb{Q}_{\widetilde{I}_{r+s}}^{[-1]})\\ -(z_{i_{r+s}}+1)^{-1}(z_{i_{r+s}}\mathbb{Q}_{I_{r+s-1}}^{[-1]}\mathbb{Q}_{I_{r+s}}^{[1]}+\mathbb{Q}_{I_{r+s-1}}^{[1]}\mathbb{Q}_{\widetilde{I}_{r+s}}^{[-1]})\mathbb{Q}_{\widetilde{I}_{r+s}}^{[1]}\big{)}\\ =[\text{right hand side of (4.21)}]. \tag{4.22}\]
Hence (4.18) holds since \((z_{i_{r+s}}-z_{i_{r+s}}^{-1})\mathbb{Q}_{\widetilde{I}_{r+s}}\) is not identically zero. One can also show that (4.17) and (4.20) follow from (4.18) and (4.19).
T-functions and Bethe ansatz equations. Eqs. (4.14) and (4.18) for the case \(s=0\), \((i_{1},i_{2},\ldots,i_{r})=(1,2,\ldots,r)\) correspond to [eqs. (6.11) and (6.14) in [60]]. Eq. (3.3) reduces to
\[\begin{array}{c}{\cal X}_{I_{a}}=z_{i_{a}}\frac{\mathbb{Q}_{I_{a-1}}^{[2r-2s-1- \sum_{j\in I_{a}}p_{j}-p_{i_{a}}]}\mathbb{Q}_{I_{a}}^{[2r-2s-1-\sum_{j\in I_{a}} p_{j}+2p_{i_{a}}]}}{\mathbb{Q}_{I_{a-1}}^{[2r-2s-1-\sum_{j\in I_{a}}p_{j}+p_{i_{a}}]} \mathbb{Q}_{I_{a}}^{[2r-2s-1-\sum_{j\in I_{a}}p_{j}]}}\quad\text{for}\quad 1 \leq a\leq r+s,\\ \\ {\cal X}_{I_{r+s+1}}=-\frac{\mathbb{Q}_{I_{r+s}}^{[r-s+1]}\mathbb{Q}_{I_{r+s}}^ {[r-s-2]}}{\mathbb{Q}_{I_{r+s}}^{[r-s-1]}\mathbb{Q}_{I_{r+s}}^{[r-s]}},\\ \\ {\cal X}_{I_{2r+2s+2-a}}=z_{i_{a}}^{-1}\frac{\mathbb{Q}_{I_{a-1}}^{[\sum_{j\in I _{a}}p_{j}+p_{i_{a}}]}\mathbb{Q}_{I_{a}}^{[\sum_{j\in I_{a}}p_{j}-2p_{i_{a}}]}} {\mathbb{Q}_{I_{a-1}}^{[\sum_{j\in I_{a}}p_{j}-p_{i_{a}}]}\mathbb{Q}_{I_{a}}^ {[\sum_{j\in I_{a}}p_{j}]}}\quad\text{for}\quad 1\leq a\leq r+s.\end{array} \tag{4.23}\]
The T-function (3.1) reduces to
\[\mathsf{F}_{(1)}^{I_{2r+2s+1}}=\mathbb{Q}_{\emptyset}^{[2r-2s-1]}\mathbb{Q}_{ \emptyset}\left(\sum_{a=1}^{r+s}p_{i_{a}}({\cal X}_{I_{a}}+{\cal X}_{I_{2r+2s+ 2-a}})-{\cal X}_{I_{r+s+1}}\right). \tag{4.24}\]
The pole-free condition of the T-function (4.24) produces the following Bethe ansatz equations:
\[\begin{array}{c}-1=\frac{p_{i_{a}}z_{i_{a}}}{p_{i_{a+1}}z_{i_{a+1}}}\frac{ \mathbb{Q}_{I_{a-1}}(u_{k}^{I_{a}}-p_{i_{a}})\mathbb{Q}_{I_{a}}(u_{k}^{I_{a}} +2p_{i_{a}})\mathbb{Q}_{I_{a+1}}(u_{k}^{I_{a}}-p_{i_{a+1}})}{\mathbb{Q}_{I_{a- 1}}(u_{k}^{I_{a}}+p_{i_{a}})\mathbb{Q}_{I_{a}}(u_{k}^{I_{a}}-2p_{i_{a+1}}) \mathbb{Q}_{I_{a+1}}(u_{k}^{I_{a}}+p_{i_{a+1}})}\\ \qquad\qquad\qquad\text{for}\quad k\in\{1,2,\ldots,n_{I_{a}}\}\quad\text{and} \quad a\in\{1,2,\ldots,r+s-1\},\\ \\ -1=p_{i_{r+s}}z_{i_{r+s}}\frac{\mathbb{Q}_{I_{r+s-1}}(u_{k}^{I_{r+s}}-p_{i_{r+s }})\mathbb{Q}_{I_{r+s}}(u_{k}^{I_{r+s}}+2p_{i_{r+s}})\mathbb{Q}_{I_{r+s}}(u_{k }^{I_{r+s}}+1)}{\mathbb{Q}_{I_{r+s-1}}(u_{k}^{I_{r+s}}+p_{i_{r+s}})\mathbb{Q}_ {I_{r+s}}(u_{k}^{I_{r+s}}+2)\mathbb{Q}_{I_{r+s}}(u_{k}^{I_{r+s}}-1)}\\ \qquad\qquad\qquad\text{for}\quad k\in\{1,2,\ldots,n_{I_{r+s}}\}.\end{array} \tag{4.25}\]
This is a reduction of (3.6) on the symmetric nesting path. Eqs. (4.24) and (4.25) agree with the known results by algebraic Bethe ansatz [73] in case \(i_{k}\in\mathfrak{F}\) for \(1\leq k\leq s\) and \(i_{k}\in\mathfrak{B}\) for \(s+1\leq k\leq r+s\). One can also derive the Bethe ansatz equations (4.25) from the QQ-relations (4.14)-(4.17) by considering the zeros of the Q-functions. One can also use (4.18) instead of (4.17). The tableaux sum expression of the T-function (3.19) reproduces [eq. (3.38), [2]]25 under the reduction. Moreover, \(\mathsf{T}_{\mu}^{\mathfrak{B},\mathfrak{F}}\) (from (3.48)) and its (super)character limit \(\zeta(\mathsf{T}_{\mu}^{\mathfrak{B},\mathfrak{F}})\) give a Wronskian expression of the T-function and a Weyl-type supercharacter formula respectively after the reduction. The Young diagram \(\mu\) is related to the labels of the representation through (2.13)-(2.15).
Footnote 25: The functions \(\mathbb{Q}_{\emptyset}\), \(\mathbb{Q}_{I_{a}}\), \(\mathbb{Q}_{\emptyset}^{[2r-2s-1]}\mathbb{Q}_{\emptyset}{\cal X}_{I_{a}}\), \(-\mathbb{Q}_{\emptyset}^{[2r-2s-1]}\mathbb{Q}_{\emptyset}{\cal X}_{I_{r+s+1}}\), and \(\mathbb{Q}_{\emptyset}^{[2r-2s-1]}\mathbb{Q}_{\emptyset}{\cal X}_{I_{2r+2s+2-a}}\) correspond to \(\phi(u)\), \(Q_{a}(u)\), and the one-box tableaux functions of [2], respectively.
The generating functions (3.23) and (3.24) reduce to
\[{\bf W}_{I_{2r+2s+1}}({\bf X}) =\prod_{a=1}^{\overrightarrow{r+s}}(1-{\cal X}_{I_{2r+2s+2-a}}{\bf X})^{-p_{i_{a}}}(1-{\cal X}_{I_{r+s+1}}{\bf X})\prod_{a=1}^{\overleftarrow{r+s}}(1-{\cal X}_{I_{a}}{\bf X})^{-p_{i_{a}}}\] \[=\sum_{a=0}^{\infty}{\cal F}_{(a)}^{I_{2r+2s+1}[a-1]}{\bf X}^{a}, \tag{4.26}\] \[{\bf W}_{I_{2r+2s+1}}({\bf X})^{-1} =\prod_{a=1}^{\overrightarrow{r+s}}(1-{\cal X}_{I_{a}}{\bf X})^{p_{i_{a}}}(1-{\cal X}_{I_{r+s+1}}{\bf X})^{-1}\prod_{a=1}^{\overleftarrow{r+s}}(1-{\cal X}_{I_{2r+2s+2-a}}{\bf X})^{p_{i_{a}}}\] \[=\sum_{a=0}^{\infty}(-1)^{a}{\cal F}_{(1^{a})}^{I_{2r+2s+1}[a-1]}{\bf X}^{a}. \tag{4.27}\]
In this way, we recover [eqs. (B.2) and (B.1) in [2]] (see also [eqs. (2.7a) and (2.7b) in [56]] for the case \(s=0\)) 26. Baxter type equations follow from the kernels of (4.26) and (4.27), which are reductions of (3.32) and (3.33).
Footnote 26: In [2], we considered only the formulas for the distinguished Dynkin diagram, while the formulas here are the ones for general Dynkin diagrams.
The relations (4.10) and (4.11) reduce to
\[\hat{\sf T}_{-a+2r-2s-1,m}^{\mathfrak{B},\mathfrak{F}} =\hat{\sf T}_{a,m}^{\mathfrak{B},\mathfrak{F}}\quad\text{for any $a\in\mathbb{C}$}\quad\text{for}\quad(\ref{eq:4.6}), \tag{4.28}\] \[\hat{\sf T}_{a,-m-2r+2s+1}^{\mathfrak{F},\mathfrak{F}} =(-1)^{a}\hat{\sf T}_{a,m}^{\mathfrak{B},\mathfrak{F}}\quad\text{ for any $m\in\mathbb{C}$}\quad\text{for}\quad(\ref{eq:4.7}). \tag{4.29}\]
We remark that (4.29) for \(a=1\) and \(s=0\) corresponds to the Yangian \(Y(so(2r+1))\) case of [Proposition 8.58 in [61]]. Let us write down the \(a=1\) case of (4.7).
\[{\sf T}_{1,m}^{\mathfrak{B},\mathfrak{F}} =\sum_{i=1}^{2r}\frac{z_{i}^{m-2+2r-2s}\prod_{f=2r+1}^{2r+2s+1}(z_{i}-z_{f})}{\prod_{j\neq i}^{2r}(z_{i}-z_{j})}\mathbb{Q}_{i}^{[m+2r-2s-1]}\mathbb{Q}_{i^{*}}^{[-m]}\] \[=\sum_{i=1}^{r}\left(\chi_{i}^{+}\mathbb{Q}_{i}^{[m+2r-2s-1]}\mathbb{Q}_{i^{*}}^{[-m]}+\chi_{i^{*}}^{+}\mathbb{Q}_{i^{*}}^{[m+2r-2s-1]}\mathbb{Q}_{i}^{[-m]}\right)\quad\text{for}\quad 2s-2r+2\leq m. \tag{4.30}\]
where the character parts are given by
\[\chi_{i}^{+} =\frac{z_{i}^{m-2+2r-2s}(z_{i}+1)\prod_{f=2r+1}^{2r+s}(z_{i}-z_{f })(z_{i}-z_{f}^{-1})}{\prod_{j=1}^{i-1}(z_{i}-z_{j})\prod_{j=i+1}^{r}(z_{i}-z_ {j})\prod_{j=1}^{r}(z_{i}-z_{j}^{-1})}\] \[=\frac{(-1)^{i-1}z_{i}^{m+i-1}\prod_{j=1}^{i-1}z_{j}^{-1}\prod_{f= 2r+1}^{2r+s}(1-\frac{z_{f}}{z_{i}})(1-\frac{1}{z_{i}z_{f}})}{(1-\frac{1}{z_{i} })\prod_{j=1}^{i-1}(1-\frac{z_{i}}{z_{j}})\prod_{j=i+1}^{r}(1-\frac{z_{j}}{z_{ i}})\prod_{j\neq i}^{r}(1-\frac{1}{z_{i}z_{j}})}\quad\text{for}\quad 1\leq i\leq r, \tag{4.31}\] \[\chi_{i^{*}}^{+} =\frac{(-1)^{i}z_{i}^{-m+i-2r+2s}\prod_{j=1}^{i-1}z_{j}^{-1}\prod _{f=2r+1}^{2r+s}(1-\frac{z_{f}}{z_{i}})(1-\frac{1}{z_{i}z_{f}})}{(1-\frac{1}{z _{i}})\prod_{j=1}^{i-1}(1-\frac{z_{i}}{z_{j}})\prod_{j=i+1}^{r}(1-\frac{z_{j}}{ z_{i}})\prod_{j\neq i}^{r}(1-\frac{1}{z_{i}z_{j}})}\quad\text{for}\quad 1\leq i\leq r. \tag{4.32}\]
We remark that (4.31) and (4.32) for \(s=0\) coincide with the Yangian \(Y(so(2r+1))\) case of [eq. (9.25) in [61]] and that (4.30) for \(s=0\), which is [eq. (3.26) for \(a=1\) in [4]] (up to an overall factor), corresponds 27 to [eq. (9.28) in [61]].
Footnote 27: Compare \(\mathsf{T}_{1,m}^{\mathfrak{B},\mathfrak{I}[-r-\frac{1}{2}]}\) for \(s=0\) with [eq. (9.28) in [61]]. In our convention, the unit of shift of the spectral parameter is twice as large as theirs. Their parameters \(\tau_{j}\) correspond to our parameters \(z_{j}\). The sign factors \((-1)^{i-1}\) and \((-1)^{i}\) are included in the character parts (4.31) and (4.32). If we understand correctly, [61] mainly focuses on specific representations of the Yangians \(Y(\mathfrak{g})\) that are lift of those of \(\mathfrak{g}=B_{r},C_{r},D_{r}\). Note that the evaluation map is not available for general representations for these cases. Thus, it will be an interesting problem to apply their method to evaluation representations of \(Y(gl(M|N))\) (or \(U_{q}(gl(M|N)^{(1)})\)) and investigate their reductions.
Footnote 28: Taking ratio of both sides of (4.18) and the ones shifted by \(u\to u-1\), we obtain
\[\frac{\mathbb{Q}_{I_{r+s-1}}^{[-1]}}{\mathbb{Q}_{I_{r+s-1}}^{[1]}}=\frac{z_{ i_{r+s}}\mathbb{Q}_{I_{r+s}}\mathbb{Q}_{I_{r+s}}^{[-1]}-\mathbb{Q}_{I_{r+s}}^{[-1]} \mathbb{Q}_{I_{r+s}}}{z_{i_{r+s}}\mathbb{Q}_{I_{r+s}}^{[1]}\mathbb{Q}_{I_{r+s }}-\mathbb{Q}_{I_{r+s}}\mathbb{Q}_{I_{r+s}}^{[1]}}. \tag{4.36}\]
Then we eliminate the Q-function \(\mathbb{Q}_{I_{r+s-1}}\) from (4.35) by (4.36). We would like to consider a subgroup \(\mathfrak{W}=\mathbb{Z}_{2}^{r+s}\rtimes S_{r+s}\) of the permutation group \(S(I_{M+N})=S(I_{2r+2s+1})=S(\mathfrak{I})\), which preserves the entire set of symmetric nesting paths, and discuss the invariance of the T-function \(\mathsf{F}_{(1)}^{I_{2r+2s+1}}\) under it. \(\mathfrak{W}\) is generated by two kinds of operations of the form: \(\mathfrak{s}=\tau_{i_{a}i_{a+1}}\circ\tau_{i^{*}_{a}i^{*}_{a+1}}\), \(\mathfrak{s}(I_{2r+2s+1})=(i_{1},i_{2},\ldots,i_{a-1},i_{a+1},i_{a},i_{a+2},\ldots,i_{r+s},2r+s+1,i^{*}_{r+s},\ldots,i^{*}_{a+2},i^{*}_{a},i^{*}_{a+1},i^{*}_{a-1},\ldots,i^{*}_{2},i^{*}_{1})\) for \(a\in\{1,2,\ldots,r+s-1\}\), and \(\mathfrak{k}=\tau_{i_{r+s}i^{*}_{r+s}}\), \(\mathfrak{k}(I_{2r+2s+1})=(i_{1},i_{2},\ldots,i_{r+s-1},i^{*}_{r+s},2r+s+1,i_{r+s},i^{*}_{r+s-1},\ldots,i^{*}_{2},i^{*}_{1})\). Let \(\mathfrak{s}(I_{a})\) denote the sub-tuple consisting of the first \(a\) components of \(\mathfrak{s}(I_{2r+2s+1})\). Similarly, let \(\mathfrak{k}(I_{a})\) denote the sub-tuple consisting of the first \(a\) components of \(\mathfrak{k}(I_{2r+2s+1})\). The condition \(\mathfrak{s}(\mathsf{F}_{(1)}^{I_{2r+2s+1}})=\mathsf{F}_{(1)}^{\mathfrak{s}(I_{2r+2s+1})}=\mathsf{F}_{(1)}^{I_{2r+2s+1}}\) is equivalent to the 4-term QQ-relations
\[p_{i_{a}}\mathcal{X}_{I_{a}}+p_{i_{a+1}}\mathcal{X}_{I_{a+1}} =p_{i_{a+1}}\mathcal{X}_{\mathfrak{s}(I_{a})}+p_{i_{a}}\mathcal{ X}_{\mathfrak{s}(I_{a+1})}, \tag{4.33}\] \[p_{i_{a}}\mathcal{X}_{I_{2r+2s+2-a}}+p_{i_{a+1}}\mathcal{X}_{I_{ 2r+2s+1-a}} =p_{i_{a+1}}\mathcal{X}_{\mathfrak{s}(I_{2r+2s+2-a})}+p_{i_{a}} \mathcal{X}_{\mathfrak{s}(I_{2r+2s+1-a})}, \tag{4.34}\]
which follow from the 3-term simplified QQ-relations (4.14) and (4.15). The condition \(\mathfrak{k}(\mathsf{F}_{(1)}^{I_{2r+2s+1}})=\mathsf{F}_{(1)}^{\mathfrak{k}( I_{2r+2s+1})}=\mathsf{F}_{(1)}^{I_{2r+2s+1}}\) is equivalent to the following 6-term QQ-relation
\[p_{i_{r+s}}\mathcal{X}_{I_{r+s}}-\mathcal{X}_{I_{r+s+1}}+p_{i_{r+s}}\mathcal{X} _{I_{r+s+2}}=p_{i_{r+s}}\mathcal{X}_{\mathfrak{k}(I_{r+s})}-\mathcal{X}_{ \mathfrak{k}(I_{r+s+1})}+p_{i_{r+s}}\mathcal{X}_{\mathfrak{k}(I_{r+s+2})}. \tag{4.35}\]
One can show 28 that (4.35) holds under the 3-term QQ-relation (4.18) in case \(p_{i_{r+s}}=1\). Thus one can forget about (4.17), which deviates from the symmetric nesting paths. At the moment, we do not have a simple analogue of (4.18) for the case \(p_{i_{r+s}}=-1\), and thus have to use 3-term QQ-relations, 29
to show (4.35). Split the operation \(\mathfrak{k}\) into three steps 30 : \(\mathfrak{k}(I_{2r+2s+1})=\tau_{i_{r+s},2r+s+1}\circ\tau_{i_{r+s},i^{*}_{r+s}}\circ\tau_{2r+s+1,i^{*}_{r+s}}(I_{2r+2s+1})=\tau_{i_{r+s},2r+s+1}\circ\tau_{i_{r+s},i^{*}_{r+s}}(i_{1},i_{2},\ldots,i_{r+s-1},i_{r+s},i^{*}_{r+s},2r+s+1,i^{*}_{r+s-1},\ldots,i^{*}_{2},i^{*}_{1})=\tau_{i_{r+s},2r+s+1}(i_{1},i_{2},\ldots,i_{r+s-1},i^{*}_{r+s},i_{r+s},2r+s+1,i^{*}_{r+s-1},\ldots,i^{*}_{2},i^{*}_{1})=(i_{1},i_{2},\ldots,i_{r+s-1},i^{*}_{r+s},2r+s+1,i_{r+s},i^{*}_{r+s-1},\ldots,i^{*}_{2},i^{*}_{1})\). One sees non-symmetric nesting paths in the intermediate steps. We have to use (4.16) and
Footnote 29: In a sense, this is an analogue of (4.18) for the case \(p_{i_{r+s}}=-1\). However, what we need is not this but a QQ-relation among \(\mathbb{Q}_{I_{r+s-1}}\), \(\mathbb{Q}_{I_{r+s}}\), \(\mathbb{Q}_{\breve{I}_{r+s}}\). It is possible to eliminate \(\mathbb{Q}_{\bar{I}_{r+s}}\) from (4.16), (3.8) and (4.38), and derive a relation among \(\mathbb{Q}_{I_{r+s-1}}\), \(\mathbb{Q}_{I_{r+s}}\), \(\mathbb{Q}_{\breve{I}_{r+s}}\). However, both the final expression and the 6-term QQ-relation (4.35) are too cumbersome to use.
Footnote 30: Instead of this, one may take \(\mathfrak{k}(I_{2r+2s+1})=\tau_{2r+s+1,i^{*}_{r+s}}\circ\tau_{i_{r+s},i^{*}_{r +s}}\circ\tau_{i_{r+s},2r+s+1}(I_{2r+2s+1})\).
\[(z_{i_{r+s}}-z_{i_{r+s}}^{-1})\mathbb{Q}_{I_{r+s-1}}\mathbb{Q}_{\bar{I}_{r+s} }=z_{i_{r+s}}^{-1}\mathbb{Q}_{I_{r+s}}^{[-1]}\mathbb{Q}_{\bar{I}_{r+s}}^{[1]}- z_{i_{r+s}}^{-1}\mathbb{Q}_{I_{r+s}}^{[1]}\quad\text{for}\quad p_{i_{r+s}}=-1. \tag{4.39}\]
for each step. Eq. (4.38) is a reduction of (3.8) for \(I=I_{r+s-1}\), \((i,j)=(i_{r+s},i^{*}_{r+s})\). Eq. (4.39) is a reduction of (3.8) for \(I=I_{r+s-1}\sqcup(i^{*}_{r+s})\), \((i,j)=(i_{r+s},2r+s+1)\) [or \(I=I_{r+s-1}\), \((i,j)=(i^{*}_{r+s},2r+s+1)\)]. In short, we have to use reductions of (3.7) three times to show (4.35). T-functions and Bethe ansatz equations on non-symmetric nesting paths do not necessarily have the standard form given in (4.23)-(4.25). At the moment, the T-functions for the symmetric nesting paths form a closed system under \(\mathfrak{W}\) only for the case \(s=0\) (the \(U_{q}(so(2r+1)^{(1)})\) case) if we stick to the three-term QQ-relations (instead of 4- or 6-term QQ-relations). It remains to be seen whether we can exclude T- and Q-functions on non-symmetric nesting paths.
The condition that the generating function \(\mathbf{W}_{I_{2r+2s+1}}(\mathbf{X})\) is invariant under \(\mathfrak{s}\), namely \(\mathfrak{s}(\mathbf{W}_{I_{2r+2s+1}}(\mathbf{X}))=\mathbf{W}_{\mathfrak{s}(I _{2r+2s+1})}(\mathbf{X})=\mathbf{W}_{I_{2r+2s+1}}(\mathbf{X})\) is equivalent to the discrete zero curvature condition (a reduction of (3.25)):
\[(1-\mathcal{X}_{I_{a}}\mathbf{X})^{p_{i_{a}}}(1-\mathcal{X}_{I_{ a+1}}\mathbf{X})^{p_{i_{a+1}}}=(1-\mathcal{X}_{\mathfrak{s}(I_{a})}\mathbf{X})^{p_{i_{ a+1}}}(1-\mathcal{X}_{\mathfrak{s}(I_{a+1})}\mathbf{X})^{p_{i_{a}}}, \tag{4.40}\] \[(1-\mathcal{X}_{I_{2r+2s+1-a}}\mathbf{X})^{p_{i_{a+1}}}(1- \mathcal{X}_{I_{2r+2s+2-a}}\mathbf{X})^{p_{i_{a}}}=\] \[=(1-\mathcal{X}_{\mathfrak{s}(I_{2r+2s+1-a})}\mathbf{X})^{p_{i_{a} }}(1-\mathcal{X}_{\mathfrak{s}(I_{2r+2s+2-a})}\mathbf{X})^{p_{i_{a+1}}}, \tag{4.41}\]
where \(a\in\{1,2,\ldots,r+s-1\}\). These relations (4.40) and (4.41) boil down to (4.33), (4.34) and a reduction of the identity (3.26).
The condition that the generating function \(\mathbf{W}_{I_{2r+2s+1}}(\mathbf{X})\) is invariant under \(\mathfrak{k}\), namely \(\mathfrak{k}(\mathbf{W}_{I_{2r+2s+1}}(\mathbf{X}))=\mathbf{W}_{\mathfrak{k}(I _{2r+2s+1})}(\mathbf{X})=\mathbf{W}_{I_{2r+2s+1}}(\mathbf{X})\) is equivalent to the following form of discrete zero curvature condition:
\[(1-\mathcal{X}_{I_{r+s}}\mathbf{X})^{p_{i_{r+s}}}(1-\mathcal{X} _{I_{r+s+1}}\mathbf{X})^{-1}(1-\mathcal{X}_{I_{r+s+2}}\mathbf{X})^{p_{i_{r+s}}}=\\ =(1-\mathcal{X}_{\mathfrak{k}(I_{r+s})}\mathbf{X})^{p_{i_{r+s}}}(1 -\mathcal{X}_{\mathfrak{k}(I_{r+s+1})}\mathbf{X})^{-1}(1-\mathcal{X}_{ \mathfrak{k}(I_{r+s+2})}\mathbf{X})^{p_{i_{r+s}}} \tag{4.42}\]
Consider the expansion of (4.42) with respect to the non-negative powers of \(\mathbf{X}\). The coefficient of \(\mathbf{X}\) in (4.42) is equivalent to (4.35). At the moment we have a proof of (4.42) which uses reductions of (3.25) three times, namely a proof by way of non-symmetric
nesting paths. The T-functions \({\cal F}^{I_{2r+2s+1}}_{(b)}\) and \({\cal F}^{I_{2r+2s+1}}_{(1^{b})}\) are invariant under \(S(I_{2r+2s+1})\) if reductions of the QQ-relations (3.8) and (3.9) on non-symmetric nesting paths as well as on symmetric nesting paths are imposed. Whether it is possible to restrict the reduction procedures to the symmetric nesting paths and construct T-functions which are \(\mathfrak{W}\)-invariant under the 3-term QQ-relations is an open question.
#### 4.3.2 \(U_{q}(gl(2r|2s+1)^{(2)})\) case
This case 31 is parallel to the case \(U_{q}(osp(2r+1|2s)^{(1)})\). We set
Footnote 31: We remark that R-matrices for a class of representations of \(U_{q}(gl(m|n)^{(2)})\) are available in [45].
\[(M,N)=(2r,2s+1),\quad\mathfrak{B}=\{1,2,\ldots,2r\},\quad\mathfrak{F }=\{2r+1,2r+2,\ldots,2r+2s+1\},\\ \mathfrak{D}=\emptyset,\quad\eta\neq 0,\quad z_{2r+s+1}=1. \tag{4.43}\]
QQ-relations. For a symmetric nesting path defined by \(I_{2r+2s+1}=(i_{1},i_{2},\ldots,i_{r+s},2r+s+1,i_{r+s}^{*},\ldots,i_{2}^{*},i_{1}^{*})\), the QQ-relations (3.8) and (3.9) reduce to
\[(z_{i_{a}}-z_{i_{a+1}})\mathbb{Q}_{I_{a-1}}\mathbb{Q}_{I_{a+1}}=z _{i_{a}}\mathbb{Q}^{[p_{i_{a}}]}_{I_{a}}\mathbb{Q}^{[-p_{i_{a}}]}_{\tilde{I}_ {a}}-z_{i_{a+1}}\mathbb{Q}^{[-p_{i_{a}}]}_{I_{a}}\mathbb{Q}^{[p_{i_{a}}]}_{ \tilde{I}_{a}}\] \[\qquad\qquad\text{for}\quad a\in\{1,2,\ldots,r+s-1\},\quad p_{i_{ a}}=p_{i_{a+1}}, \tag{4.44}\] \[(z_{i_{a}}-z_{i_{a+1}})\mathbb{Q}_{I_{a}}\mathbb{Q}_{\tilde{I}_{a }}=z_{i_{a}}\mathbb{Q}^{[-p_{i_{a}}]}_{I_{a-1}}\mathbb{Q}^{[p_{i_{a}}]}_{I_{a+ 1}}-z_{i_{a+1}}\mathbb{Q}^{[p_{i_{a}}]}_{I_{a-1}}\mathbb{Q}^{[-p_{i_{a}}]}_{I_{ a+1}}\] \[\qquad\qquad\text{for}\quad a\in\{1,2,\ldots,r+s-1\},\quad p_{i_{ a}}=-p_{i_{a+1}},\] (4.45) \[(z_{i_{r+s}}-1)\mathbb{Q}_{I_{r+s-1}}\mathbb{Q}^{[\eta]}_{I_{r+s} }=z_{i_{r+s}}\mathbb{Q}^{[-1]}_{I_{r+s}}\mathbb{Q}^{[1]}_{\tilde{I}_{r+s}}- \mathbb{Q}^{[1]}_{\tilde{I}_{r+s}}\mathbb{Q}^{[-1]}_{\tilde{I}_{r+s}}\quad \text{for}\quad p_{i_{r+s}}=-1,\] (4.46) \[(z_{i_{r+s}}-1)\mathbb{Q}_{I_{r+s}}\mathbb{Q}_{\tilde{I}_{r+s}}=z _{i_{r+s}}\mathbb{Q}^{[-1]}_{I_{r+s-1}}\mathbb{Q}^{[\eta+1]}_{I_{r+s}}- \mathbb{Q}^{[1]}_{\tilde{I}_{r+s-1}}\mathbb{Q}^{[-1]}_{I_{r+s}}\quad\text{for }\quad p_{i_{r+s}}=1. \tag{4.47}\]
Eqs. (4.44) and (4.46) are reductions of (3.8) for \(I=I_{a-1}\), \((i,j)=(i_{a},i_{a+1})\) for \(1\leq a\leq r+s-1\) and \(a=r+s\), respectively 32. Eqs. (4.45) and (4.47) are reductions of (3.9) for \(I=I_{a-1}\), \((i,j)=(i_{a},i_{a+1})\) for \(1\leq a\leq r+s-1\) and \(a=r+s\), respectively. Instead of (4.47), one may use
Footnote 32: The QQ-relations for \(a>r+s\) reduce to the ones for \(a\leq r+s\).
\[(z_{i_{r+s}}+1)\mathbb{Q}^{[1]}_{I_{r+s-1}}\mathbb{Q}^{[\eta]}_{I_{r+s-1}}=z_{i_{r+s}}\mathbb{Q}^{[\eta+1]}_{I_{r+s}}\mathbb{Q}_{\breve{I}_{r+s}}+\mathbb{Q}_{I_{r+s}}\mathbb{Q}^{[\eta+1]}_{\breve{I}_{r+s}}\quad\text{for}\quad p_{i_{r+s}}=1, \tag{4.48}\]
where \(\breve{I}_{r+s}=(i_{1},i_{2},\ldots,i_{r+s-1},i_{r+s}^{*})\). One can derive (4.48) in the same way as (4.18). We will use the following QQ-relations:
\[(z_{i_{r+s}}-z_{i_{r+s}}^{-1})\mathbb{Q}_{I_{r+s-1}}\mathbb{Q}^{[ \eta]}_{\tilde{I}_{r+s}}=z_{i_{r+s}}\mathbb{Q}^{[1]}_{I_{r+s}}\mathbb{Q}^{[-1]} _{\tilde{I}_{r+s}}-z_{i_{r+s}}^{-1}\mathbb{Q}^{[-1]}_{I_{r+s}}\mathbb{Q}^{[1]} _{\tilde{I}_{r+s}}\quad\text{for}\quad p_{i_{r+s}}=1, \tag{4.49}\] \[(z_{i_{r+s}}^{-1}-1)\mathbb{Q}_{\tilde{I}_{r+s}}\mathbb{Q}_{ \tilde{I}_{r+s}}=z_{i_{r+s}}^{-1}\mathbb{Q}^{[-1]}_{I_{r+s-1}}\mathbb{Q}^{[ \eta+1]}_{\tilde{I}_{r+s}}-\mathbb{Q}^{[1]}_{\tilde{I}_{r+s}}\quad\text{for} \quad p_{i_{r+s}}=1. \tag{4.50}\]
Eqs. (4.49) and (4.50) are reductions of (3.8) for \(I=I_{r+s-1}\), \((i,j)=(i_{r+s},i^{*}_{r+s})\) and (3.9) for \(I=I_{r+s-1}\), \((i,j)=(i^{*}_{r+s},2r+s+1)\), respectively. Now we prove (4.48) step by step as follows.
\[[\text{left hand side of \eqref{eq:4.48}}]\times(z_{i_{r+s}}-z_{i_{r+s}}^{-1})\mathbb{Q}_{\widetilde{I}_{r+s}}=\\ =(z_{i_{r+s}}+1)\mathbb{Q}_{I_{r+s-1}}^{[1]}\underbrace{(z_{i_{r+s}}-z_{i_{r+s}}^{-1})\mathbb{Q}_{I_{r+s-1}}^{[\eta]}\mathbb{Q}_{\widetilde{I}_{r+s}}}_{\text{apply (4.49)}}\\ =(z_{i_{r+s}}+1)\mathbb{Q}_{I_{r+s-1}}^{[1]}(z_{i_{r+s}}\mathbb{Q}_{I_{r+s}}^{[\eta+1]}\mathbb{Q}_{I_{r+s}}^{[\eta-1]}-z_{i_{r+s}}^{-1}\mathbb{Q}_{I_{r+s}}^{[\eta-1]}\mathbb{Q}_{\widetilde{I}_{r+s}}^{[\eta+1]}), \tag{4.51}\]
\[[\text{right hand side of \eqref{eq:4.48}}]\times(z_{i_{r+s}}-z_{i_{r+s}}^{ -1})\mathbb{Q}_{\widetilde{I}_{r+s}}=\\ =(z_{i_{r+s}}-z_{i_{r+s}}^{-1})(z_{i_{r+s}}\mathbb{Q}_{I_{r+s}}^{ [\eta+1]}\underbrace{\mathbb{Q}_{\widetilde{I}_{r+s}}\mathbb{Q}_{\widetilde{I }_{r+s}}}_{\text{apply \eqref{eq:4.50}}}+\underbrace{\mathbb{Q}_{I_{r+s}}\mathbb{Q}_{ \widetilde{I}_{r+s}}}_{\text{apply \eqref{eq:4.47}}}\mathbb{Q}_{\widetilde{I}_{r+s}}^{[\eta+1]})\\ =(z_{i_{r+s}}-z_{i_{r+s}}^{-1})\big{(}z_{i_{r+s}}\mathbb{Q}_{I_{r +s}}^{[\eta+1]}(z_{i_{r+s}}^{-1}-1)^{-1}(z_{i_{r+s}}^{-1}\mathbb{Q}_{I_{r+s-1}} ^{[-1]}\mathbb{Q}_{I_{r+s}}^{[\eta+1]}-\mathbb{Q}_{I_{r+s-1}}^{[1]}\mathbb{Q}_ {I_{r+s}}^{[\eta-1]})\\ +(z_{i_{r+s}}-1)^{-1}(z_{i_{r+s}}\mathbb{Q}_{I_{r+s-1}}^{[-1]} \mathbb{Q}_{I_{r+s}}^{[\eta+1]}-\mathbb{Q}_{I_{r+s-1}}^{[1]}\mathbb{Q}_{I_{r+s }}^{[\eta-1]})\mathbb{Q}_{I_{r+s}}^{[\eta+1]}\big{)}\\ =[\text{right hand side of \eqref{eq:4.51}}]. \tag{4.52}\]
Hence (4.48) holds since \((z_{i_{r+s}}-z_{i_{r+s}}^{-1})\mathbb{Q}_{\widetilde{I}_{r+s}}\) is not identically zero. One can also show that (4.47) and (4.50) follow from (4.48) and (4.49).
T-functions and Bethe ansatz equations. Under the reduction, (3.3) reduces to
\[\mathcal{X}_{I_{a}} =z_{i_{a}}\frac{\mathbb{Q}_{I_{a-1}}^{[2r-2s-1-\sum_{j\in I_{a}} p_{j}-p_{ia}]}\mathbb{Q}_{I_{a}}^{[2r-2s-1-\sum_{j\in I_{a}}p_{j}+2p_{ia}]}}{ \mathbb{Q}_{I_{a-1}}^{[2r-2s-1-\sum_{j\in I_{a}}p_{j}+p_{ia}]}\mathbb{Q}_{I_{ a}}^{[2r-2s-1-\sum_{j\in I_{a}}p_{j}]}}\quad\text{for}\quad 1\leq a\leq r+s,\] \[\mathcal{X}_{I_{r+s+1}} =\frac{\mathbb{Q}_{I_{r+s}}^{[r-s+1]}\mathbb{Q}_{I_{r+s}}^{[r-s-2 +\eta]}}{\mathbb{Q}_{I_{r+s}}^{[r-s-1]}\mathbb{Q}_{I_{r+s}}^{[r-s+\eta]}},\] \[\mathcal{X}_{I_{2r+2s+2-a}} =z_{i_{a}}^{-1}\frac{\mathbb{Q}_{I_{a-1}}^{\sum_{j\in I_{a}}p_{j} +p_{ia}+\eta]}\mathbb{Q}_{I_{a}}^{[\sum_{j\in I_{a}}p_{j}-2p_{ia}+\eta]}}{ \mathbb{Q}_{I_{a-1}}^{[\sum_{j\in I_{a}}p_{j}-p_{ia}+\eta]}\mathbb{Q}_{I_{a}} ^{[\sum_{j\in I_{a}}p_{j}+\eta]}}\quad\text{for}\quad 1\leq a\leq r+s.\]
The T-function (3.1) reduces to
\[\mathsf{F}_{(1)}^{I_{2r+2s+1}}=\mathbb{Q}_{\emptyset}^{[2r-2s-1]}\mathbb{Q}_{ \emptyset}^{[\eta]}\left(\sum_{a=1}^{r+s}p_{i_{a}}(\mathcal{X}_{I_{a}}+ \mathcal{X}_{I_{2r+2s+2-a}})-\mathcal{X}_{I_{r+s+1}}\right), \tag{4.54}\]
The pole-free condition of the T-function (4.54) produces the following Bethe ansatz equations:
\[-1=\frac{p_{i_{a}}z_{i_{a}}}{p_{i_{a+1}}z_{i_{a+1}}}\frac{\mathbb{Q}_ {I_{a-1}}(u^{I_{a}}_{k}-p_{i_{a}})\mathbb{Q}_{I_{a}}(u^{I_{a}}_{k}+2p_{i_{a}}) \mathbb{Q}_{I_{a+1}}(u^{I_{a}}_{k}-p_{i_{a+1}})}{\mathbb{Q}_{I_{a-1}}(u^{I_{a} }_{k}+p_{i_{a}})\mathbb{Q}_{I_{a}}(u^{I_{a}}_{k}-2p_{i_{a+1}})\mathbb{Q}_{I_{a+ 1}}(u^{I_{a}}_{k}+p_{i_{a+1}})}\] \[\qquad\qquad\text{for}\quad k\in\{1,2,\dots,n_{I_{a}}\}\quad \text{and}\quad a\in\{1,2,\dots,r+s-1\}, \tag{4.55}\] \[1=p_{i_{r+s}}z_{i_{r+s}}\frac{\mathbb{Q}_{I_{r+s-1}}(u^{I_{r+s}}_ {k}-p_{i_{r+s}})\mathbb{Q}_{I_{r+s}}(u^{I_{r+s}}_{k}+2p_{i_{r+s}})\mathbb{Q}_ {I_{r+s}}(u^{I_{r+s}}_{k}+1+\eta)}{\mathbb{Q}_{I_{r+s-1}}(u^{I_{r+s}}_{k}+p_{i _{r+s}})\mathbb{Q}_{I_{r+s}}(u^{I_{r+s}}_{k}+2)\mathbb{Q}_{I_{r+s}}(u^{I_{r+s} }_{k}-1+\eta)}\] \[\qquad\qquad\text{for}\quad k\in\{1,2,\dots,n_{I_{r+s}}\}.\]
This is a reduction of (3.6) on the symmetric nesting path. One can also derive the Bethe ansatz equations (4.55) from the QQ-relations (4.44)-(4.47) by considering the zeros of the Q-functions. One may use (4.48) instead of (4.47). A tableaux sum expression of the T-function is provided by (3.19) after reduction. Moreover, \(\mathsf{T}^{\mathfrak{B},\mathfrak{F}}_{\mu}\) (from (3.48)) and its (super)character limit \(\zeta(\mathsf{T}^{\mathfrak{B},\mathfrak{F}}_{\mu})\) give a Wronskian expression of the T-function and a Weyl-type supercharacter formula respectively after reduction.
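The pole-free mechanism that produces such Bethe ansatz equations can be checked explicitly in a rank-one toy model. The following sympy sketch is only an illustration under simplifying assumptions (a single Q-function with two roots, a vacuum part \(\phi(u)=u^{L}\), and generic twist parameters \(z_{1},z_{2}\)); it is not the T-function (4.54) itself. It verifies that the residue of a two-term T-function at a zero of the Q-function is proportional to the corresponding Bethe ansatz equation, so imposing the latter removes the pole.

```python
# A rank-one toy model (illustration only, not the superalgebra case itself):
# T(u) = [ z1*phi(u-1)*Q(u+2) + z2*phi(u+1)*Q(u-2) ] / Q(u),  Q(u) = (u-u1)(u-u2).
# The residue at a zero u1 of Q vanishes iff the Bethe ansatz equation
#   z1*phi(u1-1)*Q(u1+2) = -z2*phi(u1+1)*Q(u1-2)
# holds, which is the mechanism behind the pole-free condition.
import sympy as sp

u, z1, z2, u1, u2 = sp.symbols('u z1 z2 u1 u2')
L = 3                                   # toy "quantum space" size, phi(u) = u**L

phi = lambda x: x**L
Q = lambda x: (x - u1)*(x - u2)

T = (z1*phi(u - 1)*Q(u + 2) + z2*phi(u + 1)*Q(u - 2)) / Q(u)

# residue of T at the simple pole u = u1, computed as lim_{u->u1} (u-u1)*T
res = sp.cancel((u - u1)*T).subs(u, u1)

# numerator of the Bethe ansatz equation at the root u1
bae_lhs = z1*phi(u1 - 1)*Q(u1 + 2) + z2*phi(u1 + 1)*Q(u1 - 2)

# the residue equals bae_lhs / Q'(u1): it vanishes exactly when the BAE holds
print(sp.simplify(res - bae_lhs/sp.diff(Q(u), u).subs(u, u1)))   # 0
```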
The generating functions (3.23) and (3.24) in this case have the same structure as (4.26) and (4.27)
\[\mathbf{W}_{I_{2r+2s+1}}(\mathbf{X}) =\prod_{a=1}^{\overrightarrow{r+s}}(1-\mathcal{X}_{I_{2r+2s+2-a} }\mathbf{X})^{-p_{i_{a}}}(1-\mathcal{X}_{I_{r+s+1}}\mathbf{X})\overset{ \longleftarrow}{\underset{a=1}{\prod}}(1-\mathcal{X}_{I_{a}}\mathbf{X})^{-p_{ i_{a}}}\] \[=\sum_{a=0}^{\infty}\mathcal{F}^{I_{2r+2s+1}[a-1]}_{(a)}\mathbf{ X}^{a}, \tag{4.56}\] \[\mathbf{W}_{I_{2r+2s+1}}(\mathbf{X})^{-1} =\prod_{a=1}^{\overrightarrow{r+s}}(1-\mathcal{X}_{I_{a}}\mathbf{ X})^{p_{i_{a}}}(1-\mathcal{X}_{I_{r+s+1}}\mathbf{X})^{-1}\overset{ \longleftarrow}{\underset{a=1}{\prod}}(1-\mathcal{X}_{I_{2r+2s+2-a}} \mathbf{X})^{p_{i_{a}}}\] \[=\sum_{a=0}^{\infty}(-1)^{a}\mathcal{F}^{I_{2r+2s+1}[a-1]}_{(1^{a })}\mathbf{X}^{a}. \tag{4.57}\]
Baxter type equations follow from the kernels of (4.56) and (4.57), which are reductions of (3.32) and (3.33). One can also discuss the \(\mathfrak{W}\)-symmetry in the same way as the \(U_{q}(osp(2r+1|2s)^{(1)})\) case.
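In the (super)character limit, where every Q-function is replaced by \(1\) and the shift hidden in \(\mathbf{X}\) is dropped so that all factors commute, (4.56) and (4.57) become mutually inverse ordinary power series. The sketch below is only an illustration of this character shadow (a made-up example with one bosonic and one fermionic variable; the variable names are not from the text); it checks the resulting convolution identity \(\sum_{k}(-1)^{k}\zeta(\mathcal{F}_{(1^{k})})\,\zeta(\mathcal{F}_{(a-k)})=\delta_{a,0}\), which underlies the Baxter-type kernel relations.

```python
# Character-limit illustration (Q-functions -> 1, shifts dropped, all factors commute):
# for a small made-up example with one bosonic (z_b) and one fermionic (z_f) variable,
# the series W(X) and W(X)^{-1} of (4.56)-(4.57) have coefficients h_a and (-1)^a e_a,
# and sum_k (-1)^k e_k h_{a-k} vanishes for a >= 1.
import sympy as sp

X = sp.symbols('X')
zb, zf = sp.symbols('z_b z_f', positive=True)
p = {zb: 1, zf: -1}                              # parities p_{i_a}

# character limit of (4.56): X_{I_a} -> z_a, X_{I_{2r+2s+2-a}} -> 1/z_a, X_{I_{r+s+1}} -> 1
W = (1 - X/zb)**(-p[zb]) * (1 - X/zf)**(-p[zf]) * (1 - X) \
    * (1 - zf*X)**(-p[zf]) * (1 - zb*X)**(-p[zb])

N = 6
h = [sp.simplify(sp.series(W, X, 0, N).removeO().coeff(X, a)) for a in range(N)]
e = [sp.simplify((-1)**a * sp.series(1/W, X, 0, N).removeO().coeff(X, a)) for a in range(N)]

for a in range(1, N):
    conv = sum((-1)**k * e[k] * h[a - k] for k in range(a + 1))
    print(a, sp.simplify(conv))                  # all 0
```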
#### 4.3.3 \(U_{q}(gl(2r+1|2s)^{(2)})\) case
This case is similar to the cases \(U_{q}(osp(2r+1|2s)^{(1)})\) and \(U_{q}(gl(2r|2s+1)^{(2)})\). We set
\[(M,N)=(2r+1,2s),\quad\mathfrak{B}=\{1,2,\dots,2r+1\},\quad\mathfrak{F}=\{2r+ 2,2r+3,\dots,2r+2s+1\},\\ \mathfrak{D}=\emptyset,\quad\eta\neq 0,\quad z_{r+1}=1. \tag{4.58}\]
**QQ-relations.** For a symmetric nesting path defined by \(I_{2r+2s+1}=(i_{1},i_{2},\ldots,i_{r+s},r+1,i_{r+s}^{*},\ldots,i_{2}^{*},i_{1}^{*})\), the QQ-relations (3.8) and (3.9) reduce to
\[(z_{i_{a}}-z_{i_{a+1}})\mathbb{Q}_{I_{a-1}}\mathbb{Q}_{I_{a+1}}=z_{i_{a}}\mathbb{Q}_{I_{a}}^{[p_{i_{a}}]}\mathbb{Q}_{\widetilde{I}_{a}}^{[-p_{i_{a}}]}-z_{i_{a+1}}\mathbb{Q}_{I_{a}}^{[-p_{i_{a}}]}\mathbb{Q}_{\widetilde{I}_{a}}^{[p_{i_{a}}]}\\ \qquad\text{for}\quad a\in\{1,2,\ldots,r+s-1\},\quad p_{i_{a}}=p_{i_{a+1}}, \tag{4.59}\] \[(z_{i_{a}}-z_{i_{a+1}})\mathbb{Q}_{I_{a}}\mathbb{Q}_{\widetilde{I}_{a}}=z_{i_{a}}\mathbb{Q}_{I_{a-1}}^{[-p_{i_{a}}]}\mathbb{Q}_{I_{a+1}}^{[p_{i_{a}}]}-z_{i_{a+1}}\mathbb{Q}_{I_{a-1}}^{[p_{i_{a}}]}\mathbb{Q}_{I_{a+1}}^{[-p_{i_{a}}]}\\ \qquad\text{for}\quad a\in\{1,2,\ldots,r+s-1\},\quad p_{i_{a}}=-p_{i_{a+1}}, \tag{4.60}\] \[(z_{i_{r+s}}-1)\mathbb{Q}_{I_{r+s}}\mathbb{Q}_{I_{r+s}}^{[\eta]}=z_{i_{r+s}}\mathbb{Q}_{I_{r+s}}^{[1]}\mathbb{Q}_{\widetilde{I}_{r+s}}^{[-1]}-\mathbb{Q}_{I_{r+s}}^{[-1]}\mathbb{Q}_{\widetilde{I}_{r+s}}^{[1]}\quad\text{if}\quad p_{i_{r+s}}=1, \tag{4.61}\] \[(z_{i_{r+s}}-1)\mathbb{Q}_{I_{r+s}}\mathbb{Q}_{\widetilde{I}_{r+s}}=z_{i_{r+s}}\mathbb{Q}_{I_{r+s-1}}^{[1]}\mathbb{Q}_{I_{r+s}}^{[\eta-1]}-\mathbb{Q}_{I_{r+s}}^{[-1]}\mathbb{Q}_{I_{r+s}}^{[\eta+1]}\quad\text{if}\quad p_{i_{r+s}}=-1. \tag{4.62}\]
Eqs. (4.59) and (4.61) are reductions of (3.8) for \(I=I_{a-1}\), \((i,j)=(i_{a},i_{a+1})\) for \(1\leq a\leq r+s-1\) and \(a=r+s\), respectively 33. Eqs. (4.60) and (4.62) are reductions of (3.9) for \(I=I_{a-1}\), \((i,j)=(i_{a},i_{a+1})\) for \(1\leq a\leq r+s-1\) and \(a=r+s\), respectively. Instead of (4.62), one may use
Footnote 33: The QQ-relations for \(a>r+s\) reduce to the ones for \(a\leq r+s\).
\[(z_{i_{r+s}}+1)\mathbb{Q}_{I_{r+s-1}}^{[-1]}\mathbb{Q}_{I_{r+s-1}}^{[\eta]}=z_{i_{r+s}}\mathbb{Q}_{I_{r+s}}^{[\eta-1]}\mathbb{Q}_{\breve{I}_{r+s}}+\mathbb{Q}_{I_{r+s}}\mathbb{Q}_{\breve{I}_{r+s}}^{[\eta-1]}\quad\text{if}\quad p_{i_{r+s}}=-1, \tag{4.63}\]
where \(\breve{I}_{r+s}=(i_{1},i_{2},\ldots,i_{r+s-1},i_{r+s}^{*})\). One can derive (4.63) in the same way as (4.18).
In the context of representation theory, QQ-relations for twisted quantum affine (non-super) algebras (\(s=0\) case) appeared in [66] and were proved in [67]. These papers confirm 34 our proposal [section 3.7, [1]] on reductions of the QQ-relations for the case \((M,N)=(2r+1,0)\), which was inspired by [24].
Footnote 34: As a matter of fact, we have a difficulty in comparing (4.59) (for \(s=0\), \(a=r-1\)) and (4.61) (for \(s=0\)) with [eq. (3.9), [66]] for the last two elements of \(I_{\sigma}\). On the other hand, (4.59) and (4.61) for \(s=0\) appear to agree with [eqs. (5.4), (5.5), [67]]. In any case, what is important for us is that (4.59)-(4.63) produce the real Bethe ansatz equations (4.66) derived by algebraic Bethe ansatz.
**T-functions and Bethe ansatz equations.** Under the reduction, (3.3) reduces to
\[\mathcal{X}_{I_{a}} =z_{i_{a}}\frac{\mathbb{Q}_{I_{a-1}}^{[2r-2s+1-\sum_{j\in I_{a}}p_ {j}-p_{i_{a}}]}\mathbb{Q}_{I_{a}}^{[2r-2s+1-\sum_{j\in I_{a}}p_{j}+2p_{i_{a}}] }}{\mathbb{Q}_{I_{a-1}}^{[2r-2s+1-\sum_{j\in I_{a}}p_{j}+p_{i_{a}}]}\mathbb{Q }_{I_{a}}^{[2r-2s+1-\sum_{j\in I_{a}}p_{j}]}}\quad\text{for}\quad 1\leq a\leq r+s,\] \[\mathcal{X}_{I_{r+s+1}} =\frac{\mathbb{Q}_{I_{r+s}}^{[r-s-1]}\mathbb{Q}_{I_{r+s}}^{[r-s+2 +\eta]}}{\mathbb{Q}_{I_{r+s}}^{[r-s+1]}\mathbb{Q}_{I_{r+s}}^{[r-s+\eta]}},\] \[\mathcal{X}_{I_{2r+2s+2-a}} =z_{i_{a}}^{-1}\frac{\mathbb{Q}_{I_{a-1}}^{[\sum_{j\in I_{a}}p_{j} +p_{i_{a}}+\eta]}\mathbb{Q}_{I_{a}}^{[\sum_{j\in I_{a}}p_{j}-2p_{i_{a}}+\eta]} }{\mathbb{Q}_{I_{a-1}}^{[\sum_{j\in I_{a}}p_{j}-p_{i_{a}}+\eta]}\mathbb{Q}_{ I_{a}}^{[\sum_{j\in I_{a}}p_{j}+\eta]}}\quad\text{for}\quad 1\leq a\leq r+s. \tag{4.64}\]
The T-function (3.1) reduces to
\[{\sf F}^{I_{2r+2s+1}}_{(1)}={\mathbb{Q}}^{[2r-2s+1]}_{\emptyset}{\mathbb{Q}}^{[ \eta]}_{\emptyset}\left(\sum_{a=1}^{r+s}p_{i_{a}}({\cal X}_{I_{a}}+{\cal X}_{I_{2 r+2s+2-a}})+{\cal X}_{I_{r+s+1}}\right). \tag{4.65}\]
The pole-free condition of the T-function (4.65) produces the following Bethe ansatz equations:
\[\begin{split}-1&=\frac{p_{i_{a}}z_{i_{a}}}{p_{i_{a+ 1}}z_{i_{a+1}}}\frac{{\mathbb{Q}}_{I_{a-1}}(u^{I_{a}}_{k}-p_{i_{a}}){\mathbb{Q} }_{I_{a}}(u^{I_{a}}_{k}+2p_{i_{a}}){\mathbb{Q}}_{I_{a+1}}(u^{I_{a}}_{k}-p_{i_{ a+1}})}{{\mathbb{Q}}_{I_{a-1}}(u^{I_{a}}_{k}+p_{i_{a}}){\mathbb{Q}}_{I_{a}}(u^{I_{a} }_{k}-2p_{i_{a+1}}){\mathbb{Q}}_{I_{a+1}}(u^{I_{a}}_{k}+p_{i_{a+1}})}\\ &\qquad\qquad\text{for}\quad k\in\{1,2,\ldots,n_{I_{a}}\}\quad \text{and}\quad a\in\{1,2,\ldots,r+s-1\},\\ -1&=p_{i_{r+s}}z_{i_{r+s}}\frac{{\mathbb{Q}}_{I_{r+s -1}}(u^{I_{r+s}}_{k}-p_{i_{r+s}}){\mathbb{Q}}_{I_{r+s}}(u^{I_{r+s}}_{k}+2p_{i_ {r+s}}){\mathbb{Q}}_{I_{r+s}}(u^{I_{r+s}}_{k}-1+\eta)}{{\mathbb{Q}}_{I_{r+s-1} }(u^{I_{r+s}}_{k}+p_{i_{r+s}}){\mathbb{Q}}_{I_{r+s}}(u^{I_{r+s}}_{k}-2){ \mathbb{Q}}_{I_{r+s}}(u^{I_{r+s}}_{k}+1+\eta)}\\ &\qquad\qquad\text{for}\quad k\in\{1,2,\ldots,n_{I_{r+s}}\}. \end{split} \tag{4.66}\]
This is a reduction of (3.6) on the symmetric nesting path. One can also derive the Bethe ansatz equations (4.66) from the QQ-relations (4.59)-(4.62) by considering the zeros of the Q-functions. One can also use (4.63) instead of (4.62). Eqs. (4.65) and (4.66) agree with the known results by algebraic Bethe ansatz [73] in case \(i_{k}\in\mathfrak{F}\) for \(1\leq k\leq s\) and \(i_{k}\in\mathfrak{B}\) for \(s+1\leq k\leq r+s\). We remark that this reduces to the case \(U_{q}(gl(2r+1)^{(2)})\)[69, 24] for \(s=0\). A tableaux sum expression of the T-function is provided by (3.19) after reduction. Moreover, \({\sf T}^{\mathfrak{B},\mathfrak{F}}_{\mu}\) (from (3.48)) and its (super)character limit \(\zeta({\sf T}^{\mathfrak{B},\mathfrak{F}}_{\mu})\) give a Wronskian expression of the T-function and a Weyl-type supercharacter formula respectively after reduction.
The generating functions (3.23) and (3.24) reduce to
\[{\bf W}_{I_{2r+2s+1}}({\bf X}) =\prod_{a=1}^{\overrightarrow{r+s}}(1-{\cal X}_{I_{2r+2s+2-a}}{ \bf X})^{-p_{i_{a}}}(1-{\cal X}_{I_{r+s+1}}{\bf X})^{-1}\overleftarrow{\prod _{a=1}^{r+s}}(1-{\cal X}_{I_{a}}{\bf X})^{-p_{i_{a}}}\] \[=\sum_{a=0}^{\infty}{\cal F}^{I_{2r+2s+1}[a-1]}_{(a)}{\bf X}^{a}, \tag{4.67}\] \[{\bf W}_{I_{2r+2s+1}}({\bf X})^{-1} =\prod_{a=1}^{\overrightarrow{r+s}}(1-{\cal X}_{I_{a}}{\bf X})^{ p_{i_{a}}}(1-{\cal X}_{I_{r+s+1}}{\bf X})\overleftarrow{\prod_{a=1}^{r+s}}(1-{ \cal X}_{I_{2r+2s+2-a}}{\bf X})^{p_{i_{a}}}\] \[=\sum_{a=0}^{\infty}(-1)^{a}{\cal F}^{I_{2r+2s+1}[a-1]}_{(1^{a})} {\bf X}^{a}. \tag{4.68}\]
We remark that (4.68) for \(s=0\), corresponds to [eq. (2.12) in [74]]. Baxter type equations follow from the kernels of (4.67) and (4.68), which are reductions of (3.32) and (3.33). One can also discuss the \({\mathfrak{W}}\)-symmetry in the same way as the \(U_{q}(osp(2r+1|2s)^{(1)})\) case.
#### 4.3.4 \(U_{q}(gl(2r|2s)^{(2)})\) case
We set
\[(M,N)=(2r,2s),\quad\mathfrak{B}=\{1,2,\ldots,2r\},\quad\mathfrak{F} =\{2r+1,2r+2,\ldots,2r+2s\},\\ \mathfrak{D}=\emptyset,\quad\eta\neq 0 \tag{4.69}\]
**QQ-relations.** For a symmetric nesting path defined by \(I_{2r+2s}=(i_{1},i_{2},\ldots,i_{r+s},i_{r+s}^{*},\ldots,i_{2}^{*},i_{1}^{*})\), the QQ-relations (3.8) and (3.9) reduce to
\[(z_{i_{a}}-z_{i_{a+1}})\mathbb{Q}_{I_{a-1}}\mathbb{Q}_{I_{a+1}}=z _{i_{a}}\mathbb{Q}_{I_{a}}^{[p_{a_{1}}]}\mathbb{Q}_{\widetilde{I}_{a}}^{[-p_{ i_{a}}]}-z_{i_{a+1}}\mathbb{Q}_{I_{a}}^{[-p_{i_{a}}]}\mathbb{Q}_{\widetilde{I}_{a}}^{[ p_{i_{a}}]}\\ \text{for}\quad a\in\{1,2,\ldots,r+s-2\},\quad p_{i_{a}}=p_{i_{a +1}}, \tag{4.70}\] \[(z_{i_{a}}-z_{i_{a+1}})\mathbb{Q}_{I_{a}}\mathbb{Q}_{\widetilde{I }_{a}}=z_{i_{a}}\mathbb{Q}_{I_{a-1}}^{[-p_{i_{a}}]}\mathbb{Q}_{I_{a+1}}^{[p_{ i_{a}}]}-z_{i_{a+1}}\mathbb{Q}_{I_{a-1}}^{[p_{i_{a}}]}\mathbb{Q}_{I_{a+1}}^{[-p_{ i_{a}}]}\] \[\text{for}\quad a\in\{1,2,\ldots,r+s-2\},\quad p_{i_{a}}=-p_{i_{a +1}},\] (4.71) \[(z_{i_{r+s-1}}-z_{i_{r+s}})\mathbb{Q}_{I_{r+s-2}}\mathbb{Q}_{I_{ r+s}}^{2}=z_{i_{r+s-1}}\mathbb{Q}_{I_{r+s-1}}^{[p_{i_{r+s-1}}]}\mathbb{Q}_{ \widetilde{I}_{r+s-1}}^{[-p_{i_{r+s-1}}]}-z_{i_{r+s}}\mathbb{Q}_{I_{r+s-1}}^{[ -p_{i_{r+s-1}}]}\mathbb{Q}_{\widetilde{I}_{r+s-1}}^{[p_{i_{r+s-1}}]}\] \[\text{if}\quad p_{i_{r+s-1}}=p_{i_{r+s}},\] (4.72) \[(z_{i_{r+s-1}}-z_{i_{r+s}})\mathbb{Q}_{I_{r+s-1}}\mathbb{Q}_{ \widetilde{I}_{r+s-1}}=z_{i_{r+s-1}}\mathbb{Q}_{I_{r+s-2}}^{[-p_{i_{r+s-1}}]} \mathbb{Q}_{I_{r+s}}^{2[p_{i_{r+s-1}}]}-z_{i_{r+s}}\mathbb{Q}_{I_{r+s-2}}^{[ p_{i_{r+s-1}}]}\mathbb{Q}_{I_{r+s}}^{2[-p_{i_{r+s-1}}]}\] \[\text{if}\quad p_{i_{r+s-1}}=-p_{i_{r+s}},\] (4.73) \[(z_{i_{r+s}}-z_{i_{r+s}}^{-1})\mathbb{Q}_{I_{r+s-1}}^{2}=z_{i_{r+ s}}\mathbb{Q}_{I_{r+s}}^{2[p_{i_{r+s}}]}\mathbb{Q}_{\widetilde{I}_{r+s}}^{2[-p_{i_{r+ s}}]}-z_{i_{r+s}}^{-1}\mathbb{Q}_{I_{r+s}}^{2[-p_{i_{r+s}}]}\mathbb{Q}_{ \widetilde{I}_{r+s}}^{2[p_{i_{r+s}}]}, \tag{4.74}\]
where \(\mathbb{Q}_{I_{r+s-1}}^{2}=\mathbb{Q}_{I_{r+s-1}}\mathbb{Q}_{I_{r+s-1}}^{[\eta]}\), \(\mathbb{Q}_{I_{r+s}}=\mathbf{Q}_{I_{r+s}}^{2}=\mathbf{Q}_{I_{r+s}}\mathbf{Q}_{I_{r+s}}^{[\eta]}\). Eqs. (4.70), (4.72) and (4.74) are reductions of (3.8) for \(I=I_{a-1}\), \((i,j)=(i_{a},i_{a+1})\) for \(1\leq a\leq r+s-2\), \(a=r+s-1\) and \(a=r+s\), respectively 35. Eqs. (4.71) and (4.73) are reductions of (3.9) for \(I=I_{a-1}\), \((i,j)=(i_{a},i_{a+1})\) for \(1\leq a\leq r+s-2\) and \(a=r+s-1\), respectively. Let \(\{v_{k}^{I_{r+s}}\}_{k=1}^{m_{I_{k}}}\) be the zeros of the Q-function \(\mathbf{Q}_{I_{r+s}}\). \(\mathbb{Q}_{I_{r+s}}=\mathbf{Q}_{I_{r+s}}\mathbf{Q}_{I_{r+s}}^{[\eta]}\) means that \(\{u_{k}^{I_{r+s}}\}_{k=1}^{n_{I_{k}}}=\{v_{k}^{I_{r+s}}\}_{k=1}^{m_{I_{k}}}\sqcup\{v_{k}^{I_{r+s}}+\eta\}_{k=1}^{m_{I_{k}}}\), \(n_{I_{k}}=2m_{I_{k}}\) holds.
Footnote 35: The QQ-relations for \(a>r+s\) reduce to the ones for \(a\leq r+s\).
In the context of representation theory, QQ-relations for twisted quantum affine (non-super) algebras (\(s=0\) case) appeared in [66] and were proved in [67]. These papers, at least partially 36, confirm our proposal [section 3.7, [1]] on reductions of the QQ-relations for the case \((M,N)=(2r,0)\), which was inspired by [24].
**T-functions and Bethe ansatz equations.** Under the reduction, (3.3) reduces to
\[\mathcal{X}_{I_{a}} =z_{i_{a}}\frac{\mathbb{Q}_{I_{a-1}}^{[2r-2s-\sum_{j\in I_{a}}p_{j}- p_{i_{a}}]}\mathbb{Q}_{I_{a}}^{[2r-2s-\sum_{j\in I_{a}}p_{j}+2p_{i_{a}}]}}{ \mathbb{Q}_{I_{a-1}}^{[2r-2s-\sum_{j\in I_{a}}p_{j}+p_{i_{a}}]}\mathbb{Q}_{I_{a }}^{[2r-2s-\sum_{j\in I_{a}}p_{j}]}}\quad\text{for}\quad 1\leq a\leq r+s-1,\] \[\mathcal{X}_{I_{r+s}} =z_{i_{r+s}}\frac{\mathbb{Q}_{I_{r+s-1}}^{[r-s-p_{i_{r+s}}]} \mathbf{Q}_{I_{r+s}}^{2[r-s+2p_{i_{r+s}}]}}{\mathbb{Q}_{I_{r+s-1}}^{[r-s+p_{i_{ r+s}}]}\mathbf{Q}_{I_{r+s}}^{2[r-s]}},\] \[\mathcal{X}_{I_{r+s+1}} =z_{i_{r+s}}^{-1}\frac{\mathbb{Q}_{I_{r+s-1}}^{[r-s+p_{i_{r+s}}+ \eta]}\mathbf{Q}_{I_{r+s}}^{2[r-s-2p_{i_{r+s}}]}}{\mathbb{Q}_{I_{r+s-1}}^{[r-s -p_{i_{r+s}}+\eta]}\mathbb{Q}_{I_{r+s}}^{2[r-s]}},\] \[\mathcal{X}_{I_{2r+2s+1-a}} =z_{i_{a}}^{-1}\frac{\mathbb{Q}_{I_{a-1}}^{[\sum_{j\in I_{a}}p_{j }+p_{i_{a}}+\eta]}\mathbb{Q}_{I_{a}}^{[\sum_{j\in I_{a}}p_{j}-2p_{i_{a}}+\eta] }}{\mathbb{Q}_{I_{a-1}}^{[\sum_{j\in I_{a}}p_{j}-p_{i_{a}}+\eta]}\mathbb{Q}_{I_ {a}}^{[\sum_{j\in I_{a}}p_{j}+\eta]}}\quad\text{for}\quad 1\leq a\leq r+s-1, \tag{4.75}\]
where \(\mathbf{Q}_{I_{r+s}}^{2}=\mathbf{Q}_{I_{r+s}}\mathbf{Q}_{I_{r+s}}^{[\eta]}\). The T-function (3.1) reduces to
\[\mathsf{F}_{(1)}^{I_{2r+2s}}=\mathbb{Q}_{\emptyset}^{[2r-2s]} \mathbb{Q}_{\emptyset}^{[\eta]}\sum_{a=1}^{r+s}p_{i_{a}}(\mathcal{X}_{I_{a}}+ \mathcal{X}_{I_{2r+2s+1-a}}), \tag{4.76}\]
The pole-free condition of the T-function (4.76) produces the following Bethe ansatz equations:
\[-1 =\frac{p_{i_{a}}z_{i_{a}}}{p_{i_{a+1}}z_{i_{a+1}}}\frac{\mathbb{Q }_{I_{a-1}}(u_{k}^{I_{a}}-p_{i_{a}})\mathbb{Q}_{I_{a}}(u_{k}^{I_{a}}+2p_{i_{a}} )\mathbb{Q}_{I_{a+1}}(u_{k}^{I_{a}}-p_{i_{a+1}})}{\mathbb{Q}_{I_{a-1}}(u_{k}^{ I_{a}}+p_{i_{a}})\mathbb{Q}_{I_{a}}(u_{k}^{I_{a}}-2p_{i_{a+1}})\mathbb{Q}_{I_{a+1}}(u_{k}^{ I_{a}}+p_{i_{a+1}})}\] \[\qquad\qquad\text{for}\quad k\in\{1,2,\ldots,n_{I_{a}}\}\quad \text{and}\quad a\in\{1,2,\ldots,r+s-2\},\] \[-1 =\frac{p_{i_{r+s-1}}z_{i_{r+s-1}}}{p_{i_{r+s}}z_{i_{r+s}}}\frac{ \mathbb{Q}_{I_{r+s-2}}(u_{k}^{I_{r+s-1}}-p_{i_{r+s-1}})\mathbb{Q}_{I_{r+s-1}} (u_{k}^{I_{r+s-1}}+2p_{i_{r+s-1}})\mathbf{Q}_{I_{r+s}}^{2}(u_{k}^{I_{r+s-1}}-p_ {i_{r+s}})}{\mathbb{Q}_{I_{r+s-2}}(u_{k}^{I_{r+s-1}}+p_{i_{r+s-1}})\mathbb{Q}_{ I_{r+s-1}}(u_{k}^{I_{r+s-1}}-2p_{i_{r+s}})\mathbf{Q}_{I_{r+s}}^{2}(u_{k}^{I_{r+s-1}}+p_ {i_{r+s}})}\] \[\qquad\qquad\text{for}\quad k\in\{1,2,\ldots,n_{I_{r+s-1}}\},\] \[-1 =z_{i_{r+s}}^{2}\frac{\mathbb{Q}_{I_{r+s-1}}^{2}(v_{k}^{I_{r+s} }-p_{i_{r+s}})\mathbf{Q}_{I_{r+s}}^{2}(v_{k}^{I_{r+s}}+2p_{i_{r+s}})}{ \mathbb{Q}_{I_{r+s-1}}^{2}(v_{k}^{I_{r+s}}+p_{i_{r+s}})\mathbf{Q}_{I_{r+s}}^{2 }(v_{k}^{I_{r+s}}-2p_{i_{r+s}})}\quad\text{for}\quad k\in\{1,2,\ldots,m_{I_{r+ s}}\}, \tag{4.77}\]
where \(\mathbb{Q}_{I_{r+s-1}}^{2}(u)=\mathbb{Q}_{I_{r+s-1}}(u)\mathbb{Q}_{I_{r+s-1}} (u+\eta)\), \(\mathbf{Q}_{I_{r+s}}^{2}(u)=\mathbf{Q}_{I_{r+s}}(u)\mathbf{Q}_{I_{r+s}}(u+\eta)\). Note that \(\{v_{k}^{I_{r+s}}+\eta\}_{k=1}^{m_{I_{r+s}}}\) also satisfies the last equation of (4.77). (4.77) is a reduction of (3.6) on the symmetric nesting path. One can also derive the Bethe ansatz equations (4.77) from the QQ-relations (4.70)-(4.74) by considering the zeros of the Q-functions. Eqs. (4.76) and (4.77) agree with the known results by algebraic Bethe ansatz [73] in case \(i_{k}\in\mathfrak{F}\) for \(1\leq k\leq s\) and \(i_{k}\in\mathfrak{B}\) for \(s+1\leq k\leq r+s\). We remark that this reduces to the case
\(U_{q}(gl(2r)^{(2)})\)[69, 24] for \(s=0\). A tableaux sum expression of the T-function is provided by (3.19) after reduction. Moreover, \({\sf T}_{\mu}^{\mathfrak{B},\mathfrak{F}}\) (from (3.48)) and its (super)character limit \(\zeta({\sf T}_{\mu}^{\mathfrak{B},\mathfrak{F}})\) give a Wronskian expression of the T-function and a Weyl-type supercharacter formula respectively after reduction.
The generating functions (3.23) and (3.24) reduce to
\[{\bf W}_{I_{2r+2s}}({\bf X}) =\prod_{a=1}^{\stackrel{{ r\to s}}{{r+s}}}(1-{\cal X }_{I_{2r+2s+1-a}}{\bf X})^{-p_{i_{a}}}\prod_{a=1}^{\stackrel{{ \longleftarrow}}{{r+s}}}(1-{\cal X}_{I_{a}}{\bf X})^{-p_{i_{a}}}\] \[=\sum_{a=0}^{\infty}{\cal F}_{(a)}^{I_{2r+2s}[a-1]}{\bf X}^{a}, \tag{4.78}\] \[{\bf W}_{I_{2r+2s}}({\bf X})^{-1} =\prod_{a=1}^{\stackrel{{ r\to s}}{{r+s}}}(1-{\cal X }_{I_{a}}{\bf X})^{p_{i_{a}}}\prod_{a=1}^{\stackrel{{ r\to s}}{{r+s}}}(1-{\cal X}_{I_{2r+2s+1-a}}{\bf X })^{p_{i_{a}}}\] \[=\sum_{a=0}^{\infty}(-1)^{a}{\cal F}_{(1^{a})}^{I_{2r+2s}[a-1]}{ \bf X}^{a}. \tag{4.79}\]
We remark that (4.79) for \(s=0\) corresponds to [eq. (2.13) in [74]]. Baxter type equations follow from the kernels of (4.78) and (4.79), which are reductions of (3.32) and (3.33).
**\(\mathfrak{W}\)-symmetry.** We would like to consider a subgroup \(\mathfrak{W}={\mathbb{Z}}_{2}^{r+s}\rtimes S_{r+s}\) of the permutation group \(S(I_{M+N})=S(I_{2r+2s})=S(\mathfrak{I})\), which preserves the entire set of symmetric nesting paths, and discuss the invariance of the T-function \({\sf F}_{(1)}^{I_{2r+2s}}\) under it. \(\mathfrak{W}\) is generated by two kinds of operations of the form: \(\mathfrak{s}=\tau_{i_{a}i_{a+1}}\circ\tau_{i_{a}^{*}i_{a+1}^{*}}\), \(\mathfrak{s}(I_{2r+2s})=(i_{1},i_{2},\ldots,i_{a-1},i_{a+1},i_{a},i_{a+2},\ldots,i_{r+s},i_{r+s}^{*},\ldots,i_{a+2}^{*},i_{a}^{*},i_{a+1}^{*},i_{a-1}^{*},\ldots,i_{2}^{*},i_{1}^{*})\) for \(a\in\{1,2,\ldots,r+s-1\}\), and \(\mathfrak{k}=\tau_{i_{r+s}i_{r+s}^{*}}\), \(\mathfrak{k}(I_{2r+2s})=(i_{1},i_{2},\ldots,i_{r+s-1},i_{r+s}^{*},i_{r+s},i_{r+s-1}^{*},\ldots,i_{2}^{*},i_{1}^{*})\). The condition \(\mathfrak{s}({\sf F}_{(1)}^{I_{2r+2s}})={\sf F}_{(1)}^{\mathfrak{s}(I_{2r+2s})}={\sf F}_{(1)}^{I_{2r+2s}}\) is equivalent to the following 4-term QQ-relations
\[p_{i_{a}}{\cal X}_{I_{a}}+p_{i_{a+1}}{\cal X}_{I_{a+1}} =p_{i_{a+1}}{\cal X}_{\mathfrak{s}(I_{a})}+p_{i_{a}}{\cal X}_{ \mathfrak{s}(I_{a+1})}, \tag{4.80}\] \[p_{i_{a}}{\cal X}_{I_{2r+2s+1-a}}+p_{i_{a+1}}{\cal X}_{I_{2r+2s-a }} =p_{i_{a+1}}{\cal X}_{\mathfrak{s}(I_{2r+2s+1-a})}+p_{i_{a}}{\cal X }_{\mathfrak{s}(I_{2r+2s-a})}, \tag{4.81}\]
which follow from the 3-term QQ-relations (4.70)-(4.73). The condition \(\mathfrak{k}({\sf F}_{(1)}^{I_{2r+2s}})={\sf F}_{(1)}^{\mathfrak{k}(I_{2r+2s})}={\sf F}_{(1)}^{I_{2r+2s}}\) is equivalent to the following 4-term QQ-relations
\[{\cal X}_{I_{r+s}}+{\cal X}_{I_{r+s+1}}={\cal X}_{\mathfrak{k}(I_{r+s})}+{ \cal X}_{\mathfrak{k}(I_{r+s+1})}, \tag{4.82}\]
which follows from the 3-term QQ-relation (4.74). All these relations (4.80)-(4.82) are reductions of (3.7). Thus the T-function \({\sf F}_{(1)}^{I_{2r+2s}}\) on the symmetric nesting path is \(\mathfrak{W}\)-invariant under the QQ-relations (4.70)-(4.74).
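At the level of (super)characters the \(\mathfrak{W}\)-invariance is immediate: with all Q-functions set to \(1\), (4.76) reduces (up to the overall \(\mathbb{Q}_{\emptyset}\) factors) to \(\sum_{a}p_{i_{a}}(z_{i_{a}}+z_{i_{a}}^{-1})\), which is manifestly invariant under adjacent swaps and under \(z_{i_{r+s}}\to z_{i_{r+s}}^{-1}\). The full statement above of course requires the QQ-relations; the following sympy sketch (toy values only, not from the text) merely records this character-limit check.

```python
# Character-limit illustration of the W = Z_2^{r+s} x| S_{r+s} invariance:
# with Q -> 1 the T-function (4.76) reduces, up to the Q_emptyset prefactors, to
#   chi = sum_a p_a (z_a + 1/z_a),
# which is invariant under the generators s (adjacent swaps) and k (z -> 1/z).
import sympy as sp

n = 3                                   # toy value of r+s
z = sp.symbols('z1:%d' % (n + 1))
p = (1, 1, -1)                          # a made-up parity assignment p_{i_a}

chi = sum(pa*(za + 1/za) for pa, za in zip(p, z))

# generator s: swap the last two letters (z and p together)
chi_s = sum(pa*(za + 1/za)
            for pa, za in zip(p[:1] + (p[2], p[1]), z[:1] + (z[2], z[1])))
# generator k: invert the last variable z_{r+s}
chi_k = chi.subs(z[-1], 1/z[-1])

print(sp.simplify(chi - chi_s), sp.simplify(chi - chi_k))   # 0 0
```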
The condition that the generating function \({\bf W}_{I_{2r+2s}}({\bf X})\) is invariant under \(\mathfrak{s}\) and \(\mathfrak{k}\), namely \(\mathfrak{s}({\bf W}_{I_{2r+2s}}({\bf X}))={\bf W}_{\mathfrak{s}(I_{2r+2s})}({ \bf X})={\bf W}_{I_{2r+2s}}({\bf X})\) and
\(\mathfrak{k}(\mathbf{W}_{I_{2r+2s}}(\mathbf{X}))=\mathbf{W}_{\mathfrak{k}(I_{2r+2s})}(\mathbf{X})=\mathbf{W}_{I_{2r+2s}}(\mathbf{X})\), is equivalent to the discrete zero curvature condition (a reduction of (3.25)):
\[(1-\mathcal{X}_{I_{a}}\mathbf{X})^{p_{i_{a}}}(1-\mathcal{X}_{I_{a +1}}\mathbf{X})^{p_{i_{a+1}}}=(1-\mathcal{X}_{\mathfrak{s}(I_{a})}\mathbf{X})^{ p_{i_{a+1}}}(1-\mathcal{X}_{\mathfrak{s}(I_{a+1})}\mathbf{X})^{p_{i_{a}}}, \tag{4.83}\] \[(1-\mathcal{X}_{I_{2r+2s-a}}\mathbf{X})^{p_{i_{a+1}}}(1-\mathcal{ X}_{I_{2r+2s+1-a}}\mathbf{X})^{p_{i_{a}}}=\] \[=(1-\mathcal{X}_{\mathfrak{s}(I_{2r+2s-a})}\mathbf{X})^{p_{i_{a}} }(1-\mathcal{X}_{\mathfrak{s}(I_{2r+2s+1-a})}\mathbf{X})^{p_{i_{a+1}}},\] \[(1-\mathcal{X}_{I_{r+s}}\mathbf{X})^{p_{i_{r+s}}}(1-\mathcal{X} _{I_{r+s+1}}\mathbf{X})^{p_{i_{r+s}}}=(1-\mathcal{X}_{\mathfrak{k}(I_{r+s})} \mathbf{X})^{p_{i_{r+s}}}(1-\mathcal{X}_{\mathfrak{k}(I_{r+s+1})}\mathbf{X})^{ p_{i_{r+s}}},\]
where \(a\in\{1,2,\ldots,r+s-1\}\). These relations (4.83) boil down to (4.80)-(4.82) and reductions of the identity (3.26). Therefore the T-functions \(\mathcal{F}^{I_{2r+2s}}_{(b)}\) and \(\mathcal{F}^{I_{2r+2s}}_{(1^{b})}\) on the symmetric nesting paths are invariant under \(\mathfrak{W}\) if the QQ-relations (4.70)-(4.74) are imposed. One may be able to exclude the T- and Q-functions on the non-symmetric nesting paths from consideration.
### Singular reductions
In this subsection, we consider reductions for the case \(\mathfrak{D}\neq\emptyset\). This case is more hypothetical than the regular reduction because it requires additional non-trivial ansatzes. In particular, the resultant Bethe ansatz equations are not always reductions of the ones for \(U_{q}(gl(2r|2s+2)^{(1)})\). A part of (3.6) becomes singular under reductions and has to be excluded from our consideration. This suggests that not all the representations of \(U_{q}(osp(2r|2s)^{(1)})\) are naive reductions of those of \(U_{q}(gl(M|N)^{(1)})\). 37 To what extent the reductions work is still not fully understood. This subsection is our attempt to understand this in the context of Bethe ansatz. We will present candidate QQ-relations, which produce Bethe ansatz equations known in the literature.
Footnote 37: The relations (4.104)-(4.107) (and also (4.85), (4.87)) suggest that the reductions of some (asymptotic) representations \(W\) of \(U_{q}(gl(2r|2s+2)^{(1)})\) decompose into those \(W_{1},W_{2}\) of \(U_{q}(osp(2r|2s)^{(1)})\): \(W\simeq W_{1}\otimes W_{2}\) (on the level of the trace). In addition, there is a freedom to permute the elements of the set \(\mathfrak{D}\). We also remark that two copies of T-functions appear after reductions (see [Theorem 2.5, [25]]).
#### 4.4.1 \(U_{q}(sp(2r)^{(1)})\) case
Before we consider the \(U_{q}(osp(2r|2s)^{(1)})\) case, we summarize the \(U_{q}(sp(2r)^{(1)})\) case [25] in our terminology, as a warm-up. After reading the next subsection, one will realize that [25] is the tip of the iceberg.
We assume \(r\in\mathbb{Z}_{\geq 1}\), and set
\[(M,N)=(2r+2,0),\quad\mathfrak{B}=\{1,2,\ldots,2r+2\},\quad \mathfrak{F}=\emptyset,\\ \mathfrak{D}=\{r+1,r+2\},\quad\eta=0,\quad z_{r+1}=-z_{r+2}=1. \tag{4.84}\]
**QQ-relations.** For a symmetric nesting path defined by \(I_{2r+2}=(i_{1},i_{2},\ldots,i_{r+1},i_{r+1}^{*},\ldots,i_{2}^{*},i_{1}^{*})\), \(i_{r+1}\in\mathfrak{D}\), we consider additional reductions
\[\mathbb{Q}_{I_{r}} =\mathbf{Q}_{I_{r}}^{[-1]}\mathbf{Q}_{I_{r}}^{[1]}, \tag{4.85}\] \[(z_{i_{r}}+z_{i_{r+1}})\mathbb{Q}_{\bar{I}_{r}} =z_{i_{r}}\mathbf{Q}_{I_{r}}^{[1]}\mathbf{Q}_{\bar{I}_{r}}^{[-1]}+ z_{i_{r+1}}\mathbf{Q}_{I_{r}}^{[-1]}\mathbf{Q}_{\bar{I}_{r}}^{[1]}\] (4.86) \[\mathbb{Q}_{I_{r+1}} =(\mathbf{Q}_{I_{r}})^{2}, \tag{4.87}\]
where \(\breve{I}_{r}=(i_{1},i_{2},\ldots,i_{r-1},i_{r}^{*})\). Eqs. (4.85) and (4.87) for the case \((i_{1},i_{2},\ldots,i_{r+1})=(1,2,\ldots,r+1)\) correspond to [eq. (B.4) in [25]]. Then the QQ-relations (3.8) along this symmetric nesting path reduce to
\[(z_{i_{a}}-z_{i_{a+1}})\mathbb{Q}_{I_{a-1}}\mathbb{Q}_{I_{a+1}}= z_{i_{a}}\mathbb{Q}_{I_{a}}^{[1]}\mathbb{Q}_{\bar{I}_{a}}^{[-1]}-z_{i_{a+1}} \mathbb{Q}_{I_{a}}^{[-1]}\mathbb{Q}_{\bar{I}_{a}}^{[1]}\quad\text{for}\quad a \in\{1,2,\ldots,r-2\}, \tag{4.88}\] \[(z_{i_{r-1}}-z_{i_{r}})\mathbb{Q}_{I_{r-2}}\mathbf{Q}_{I_{r}}^{[- 1]}\mathbf{Q}_{I_{r}}^{[1]}=z_{i_{r-1}}\mathbb{Q}_{I_{r-1}}^{[1]}\mathbb{Q}_{ \bar{I}_{r-1}}^{[-1]}-z_{i_{r}}\mathbb{Q}_{I_{r-1}}^{[-1]}\mathbb{Q}_{\bar{I}_ {r-1}}^{[1]}.\] (4.89) \[(z_{i_{r}}^{2}-1)\mathbb{Q}_{I_{r-1}}=z_{i_{r}}^{2}\mathbf{Q}_{I_ {r}}^{[2]}\mathbf{Q}_{I_{r}}^{[-2]}-\mathbf{Q}_{I_{r}}^{[-2]}\mathbf{Q}_{\bar {I}_{r}}^{[2]}. \tag{4.90}\]
Eqs. (4.88), (4.89) and (4.90) are reductions of (3.8) for \(I=I_{a-1}\), \((i,j)=(i_{a},i_{a+1})\) for \(1\leq a\leq r-2\), \(a=r-1\) and \(a=r\), respectively. In (4.90), \(z_{i_{r+1}}^{2}=1\) is used. Eqs. (4.86) and (4.88), (4.89) and (4.90) for the case \((i_{1},i_{2},\ldots,i_{r+1})=(1,2,\ldots,r+1)\) correspond to [eqs. (7.26), (7.27) and (7.29) in [25]]. Let \(\{v_{k}^{I_{r}}\}_{k=1}^{m_{I_{k}}}\) be the zeros of the Q-function \(\mathbf{Q}_{I_{r}}\). Eq. (4.85) means that \(\{u_{k}^{I_{r}}\}_{k=1}^{n_{I_{k}}}=\{v_{k}^{I_{r}}-1\}_{k=1}^{m_{I_{k}}}\sqcup \{v_{k}^{I_{r}}+1\}_{k=1}^{m_{I_{k}}}\), \(n_{I_{k}}=2m_{I_{k}}\) holds.
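The factorization (4.85) can be made concrete with polynomial Q-functions: if \(\mathbb{Q}_{I_{r}}(u)=\mathbf{Q}_{I_{r}}(u-1)\mathbf{Q}_{I_{r}}(u+1)\), the zero set of \(\mathbb{Q}_{I_{r}}\) is the union of the two shifted copies of the zero set of \(\mathbf{Q}_{I_{r}}\), as stated above. A minimal sympy check with made-up roots:

```python
# Minimal check of the zero splitting implied by (4.85):
# Q(u) = Qb(u-1)*Qb(u+1)  =>  zeros of Q are {v_k - 1} u {v_k + 1}.
import sympy as sp

u = sp.symbols('u')
v = [sp.Rational(1, 2), sp.Integer(3), sp.Integer(-2)]   # made-up zeros v_k of Qb

Qb = sp.Mul(*[u - vk for vk in v])
Q = Qb.subs(u, u - 1) * Qb.subs(u, u + 1)

zeros_of_Q = set(sp.solve(sp.expand(Q), u))
expected = {vk - 1 for vk in v} | {vk + 1 for vk in v}
print(zeros_of_Q == expected)   # True
```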
**T-functions and Bethe ansatz equations.** Under the reduction, (3.3) reduces to
\[\mathcal{X}_{I_{a}} =z_{i_{a}}\frac{\mathbb{Q}_{I_{a-1}}^{[2r+1-a]}\mathbb{Q}_{I_{a} }^{[2r+4-a]}}{\mathbb{Q}_{I_{a-1}}^{[2r+3-a]}\mathbb{Q}_{I_{a}}^{[2r+2-a]}} \quad\text{for}\quad 1\leq a\leq r-1,\] \[\mathcal{X}_{I_{r}} =z_{i_{r}}\frac{\mathbb{Q}_{I_{r-1}}^{[r+1]}\mathbf{Q}_{I_{r}}^{ [r+5]}}{\mathbb{Q}_{I_{r-1}}^{[r+3]}\mathbf{Q}_{I_{r}}^{[r+1]}},\] \[\mathcal{X}_{I_{r+1}} =-\mathcal{X}_{I_{r+2}}=z_{i_{r+1}}\frac{\mathbf{Q}_{I_{r}}^{[r- 1]}\mathbf{Q}_{I_{r}}^{[r+3]}}{(\mathbf{Q}_{I_{r}}^{[r+1]})^{2}}, \tag{4.91}\] \[\mathcal{X}_{I_{r+3}} =z_{i_{r}}^{-1}\frac{\mathbb{Q}_{I_{r-1}}^{[r+1]}\mathbf{Q}_{I_{ r}}^{[r-3]}}{\mathbb{Q}_{I_{r-1}}^{[r-1]}\mathbf{Q}_{I_{r}}^{[r+1]}},\] \[\mathcal{X}_{I_{2r+3-a}} =z_{i_{a}}^{-1}\frac{\mathbb{Q}_{I_{a-1}}^{[a+1]}\mathbb{Q}_{I_ {a}}^{[a-2]}}{\mathbb{Q}_{I_{a-1}}^{[a-1]}\mathbb{Q}_{I_{a}}^{[a]}}\quad\text{ for}\quad 1\leq a\leq r-1.\]
The T-function (3.1) reduces to
\[\mathsf{F}_{(1)}^{I_{2r+2}}=\mathbb{Q}_{\emptyset}^{[2r+2]}\mathbb{Q}_{ \emptyset}\sum_{a=1}^{r}(\mathcal{X}_{I_{a}}+\mathcal{X}_{I_{2r+3-a}}) \tag{4.92}\]
Note that the terms \({\cal X}_{I_{r+1}}\) and \({\cal X}_{I_{r+2}}\) are missing in (4.92) because of cancellation. The pole-free condition of the T-function (4.92) produces the following Bethe ansatz equations:
\[-1=\frac{z_{i_{a}}}{z_{i_{a+1}}}\frac{{\mathbb{Q}}_{I_{a-1}}(u_{k}^{I_{a}}-1){ \mathbb{Q}}_{I_{a}}(u_{k}^{I_{a}}+2){\mathbb{Q}}_{I_{a+1}}(u_{k}^{I_{a}}-1)}{{ \mathbb{Q}}_{I_{a-1}}(u_{k}^{I_{a}}+1){\mathbb{Q}}_{I_{a}}(u_{k}^{I_{a}}-2){ \mathbb{Q}}_{I_{a+1}}(u_{k}^{I_{a}}+1)}\]
\[\mbox{for}\quad k\in\{1,2,\ldots,n_{I_{a}}\}\quad\mbox{and}\quad a\in\{1,2, \ldots,r-2\}, \tag{4.93}\]
\[-1=\frac{z_{i_{r-1}}}{z_{i_{r}}}\frac{{\mathbb{Q}}_{I_{r-2}}(u_{k}^{I_{r-1}}-1){\mathbb{Q}}_{I_{r-1}}(u_{k}^{I_{r-1}}+2){\mathbb{Q}}_{I_{r}}(u_{k}^{I_{r-1}}-2)}{{\mathbb{Q}}_{I_{r-2}}(u_{k}^{I_{r-1}}+1){\mathbb{Q}}_{I_{r-1}}(u_{k}^{I_{r-1}}-2){\mathbb{Q}}_{I_{r}}(u_{k}^{I_{r-1}}+2)}\]
\[\mbox{for}\quad k\in\{1,2,\ldots,n_{I_{r-1}}\}, \tag{4.94}\]
\[-1=z_{i_{r}}^{2}\frac{{\mathbb{Q}}_{I_{r-1}}(v_{k}^{I_{r}}-2){\mathbb{Q}}_{I_ {r}}(v_{k}^{I_{r}}+4)}{{\mathbb{Q}}_{I_{r-1}}(v_{k}^{I_{r}}+2){\mathbb{Q}}_{I _{r}}(v_{k}^{I_{r}}-4)}\quad\mbox{for}\quad k\in\{1,2,\ldots,m_{I_{r}}\}. \tag{4.95}\]
Eqs. (4.93) and (4.94) are reductions of (3.6) on the symmetric nesting path, while (4.95) is not. Eqs. (4.92)-(4.95) agree with the known results by analytic Bethe ansatz [69]. One can also derive the Bethe ansatz equations (4.93)-(4.95) from the QQ-relations (4.88)-(4.90) by considering the zeros of the Q-functions. The tableaux sum expression of the T-function (3.19) (for one row Young diagrams and one column Young diagrams) reproduces [eqs. (3.9), (3.17), [34]] 38 under the reduction.
Footnote 38: The functions \({\mathbb{Q}}_{0}\), \({\mathbb{Q}}_{I_{b}}\) (\(1\leq b\leq r-1\)), \({\mathbb{Q}}_{I_{r}}\), \({\mathbb{Q}}_{0}^{[2r+2]}{\mathbb{Q}}_{0}{\cal X}_{I_{a}}\) and \({\mathbb{Q}}_{0}^{[2r+2]}{\mathbb{Q}}_{0}{\cal X}_{I_{2r+3-a}}\) correspond to \(\phi(u)\), \(Q_{b}(u)\) (\(1\leq b\leq r-1\)), \(Q_{r}(u)\), \(\boxed{a}\) and \(\boxed{a}\) in [eqs. (3.4a), (3.4b) for \(p=1\), [34]], where \(1\leq a\leq r\) (the unit of the shift of the spectral parameter in [34] is half of the one in this paper).
The generating functions (3.23) and (3.24) reduce to the ones in [25]:
\[{\bf W}_{I_{2r+2}}({\bf X}) =\prod_{a=1}^{\overrightarrow{r}}(1-{\cal X}_{I_{2r+3-a}}{\bf X })^{-1}(1-{\cal X}_{I_{r}}{\cal X}_{I_{r+3}}^{[2]}{\bf X}^{2})^{-1}\overleftarrow {\prod_{a=1}^{r}}(1-{\cal X}_{I_{a}}{\bf X})^{-1}\] \[=\sum_{a=0}^{\infty}{\cal F}_{(a)}^{I_{2r+2}[a-1]}{\bf X}^{a}, \tag{4.96}\] \[{\bf W}_{I_{2r+2}}({\bf X})^{-1} =\prod_{a=1}^{r}(1-{\cal X}_{I_{a}}{\bf X})(1-{\cal X}_{I_{r}}{ \cal X}_{I_{r+3}}^{[2]}{\bf X}^{2})\overleftarrow{\prod_{a=1}^{r}}(1-{\cal X} _{I_{2r+3-a}}{\bf X})\] \[=\sum_{a=0}^{2r+2}(-1)^{a}{\cal F}_{(1^{a})}^{I_{2r+2}[a-1]}{\bf X }^{a}. \tag{4.97}\]
Note that the terms \({\cal X}_{I_{r+2}}\) and \({\cal X}_{I_{r+1}}\) disappear from the formula because of cancellation. By (3.19), \({\cal F}_{(1^{a})}^{I_{2r+2}}=0\) if \(a>2r+2\). Baxter type equations follow from the kernels of (4.96) and (4.97), which are reductions of (3.32) and (3.33).
**\(\mathfrak{W}\)-symmetry.** We would like to consider a subgroup \(\mathfrak{W}={\mathbb{Z}}_{2}^{r}\rtimes S_{r}\) of the permutation group \(S(I_{M+N})=S(I_{2r+2})=S({\mathfrak{I}})\), which preserves the entire set39 of symmetric nesting paths, and discuss the invariance of the T-function \({\sf F}_{(1)}^{I_{2r+2}}\) under it. \(\mathfrak{W}\)
is generated by two kinds of operations of the form: \(\mathfrak{s}=\tau_{i_{a}i_{a+1}}\circ\tau_{i_{a}^{*}i_{a+1}^{*}}\), \(\mathfrak{s}(I_{2r+2})=(i_{1},i_{2},\ldots,i_{a-1},i_{a+1},i_{a},i_{a+2},\ldots, i_{r},i_{r+1},i_{r+1}^{*},i_{r}^{*},\ldots,i_{a+2}^{*},i_{a}^{*},i_{a+1}^{*},i_{a -1}^{*},\ldots,i_{2}^{*},i_{1}^{*})\) for \(a\in\{1,2,\ldots,r-1\}\), and \(\mathfrak{k}=\tau_{i_{r}i_{r}^{*}}\), \(\mathfrak{k}(I_{2r+2})=(i_{1},i_{2},\ldots,i_{r-1},i_{r}^{*},i_{r+1},i_{r+1}^{ *},i_{r},i_{r-1}^{*},\ldots,i_{2}^{*},i_{1}^{*})\). The condition \(\mathfrak{s}(\mathsf{F}_{(1)}^{I_{2r+2}})=\mathsf{F}_{(1)}^{\mathfrak{s}(I_{2 r+2})}=\mathsf{F}_{(1)}^{I_{2r+2}}\) is equivalent to the following 4-term QQ-relations
\[\mathcal{X}_{I_{a}}+\mathcal{X}_{I_{a+1}}=\mathcal{X}_{\mathfrak{s}(I_{a})}+ \mathcal{X}_{\mathfrak{s}(I_{a+1})}, \tag{4.98}\]
\[\mathcal{X}_{I_{2r+3-a}}+\mathcal{X}_{I_{2r+2-a}}=\mathcal{X}_{\mathfrak{s}(I_ {2r+3-a})}+\mathcal{X}_{\mathfrak{s}(I_{2r+2-a})}, \tag{4.99}\]
which follow from the 3-term QQ-relations (4.88) and (4.89). The condition \(\mathfrak{k}(\mathsf{F}_{(1)}^{I_{2r+2}})=\mathsf{F}_{(1)}^{I_{2r+2}}\) is equivalent to the following 4-term QQ-relations
\[\mathcal{X}_{I_{r}}+\mathcal{X}_{I_{r+3}}=\mathcal{X}_{\mathfrak{t}(I_{r})}+ \mathcal{X}_{\mathfrak{t}(I_{r+3})}, \tag{4.100}\]
which follows from the 3-term QQ-relation (4.90). The relations (4.98) and (4.99) are reductions of (3.7), while the relation (4.100) is not. Thus the T-function \(\mathsf{F}_{(1)}^{I_{2r+2}}\) on the symmetric nesting path is \(\mathfrak{W}\)-invariant under the 3-term QQ-relations (4.88)-(4.90).
The condition that the generating function \(\mathbf{W}_{I_{2r+2}}(\mathbf{X})\) is invariant under \(\mathfrak{s}\), namely \(\mathfrak{s}(\mathbf{W}_{I_{2r+2}}(\mathbf{X}))=\mathbf{W}_{\mathfrak{s}(I_{ 2r+2})}(\mathbf{X})=\mathbf{W}_{I_{2r+2}}(\mathbf{X})\), is equivalent to the discrete zero curvature condition (a reduction of (3.25)):
\[\begin{split}&(1-\mathcal{X}_{I_{a}}\mathbf{X})(1-\mathcal{X}_{I _{a+1}}\mathbf{X})=(1-\mathcal{X}_{\mathfrak{s}(I_{a})}\mathbf{X})(1- \mathcal{X}_{\mathfrak{s}(I_{a+1})}\mathbf{X}),\\ &(1-\mathcal{X}_{I_{2r+2-a}}\mathbf{X})(1-\mathcal{X}_{I_{2r+3-a }}\mathbf{X})=(1-\mathcal{X}_{\mathfrak{s}(I_{2r+2-a})}\mathbf{X})(1- \mathcal{X}_{\mathfrak{s}(I_{2r+3-a})}\mathbf{X}),\end{split} \tag{4.101}\]
where \(a\in\{1,2,\ldots,r-1\}\). These relations (4.101) boil down to (4.98) and (4.99) and a reduction of the identity (3.26). The condition that the generating function \(\mathbf{W}_{I_{2r+2}}(\mathbf{X})\) is invariant under \(\mathfrak{k}\), namely \(\mathfrak{k}(\mathbf{W}_{I_{2r+2}}(\mathbf{X}))=\mathbf{W}_{\mathfrak{t}(I_{2 r+2})}(\mathbf{X})=\mathbf{W}_{I_{2r+2}}(\mathbf{X})\), is equivalent to the following discrete zero curvature condition:
\[(1-\mathcal{X}_{I_{r}}\mathbf{X})(1-\mathcal{X}_{I_{r}}\, \mathcal{X}_{I_{r+3}}^{[2]}\mathbf{X}^{2})(1-\mathcal{X}_{I_{r+3}}\mathbf{X})= \\ =(1-\mathcal{X}_{\mathfrak{t}(I_{r})}\mathbf{X})(1-\mathcal{X}_{ \mathfrak{t}(I_{r})}\,\mathcal{X}_{\mathfrak{t}(I_{r+3})}^{[2]}\mathbf{X}^{2} )(1-\mathcal{X}_{\mathfrak{t}(I_{r+3})}\mathbf{X}). \tag{4.102}\]
Consider the expansion of (4.102) with respect to the non-negative powers of \(\mathbf{X}\). The coefficients of \(\mathbf{X}\) on both sides of (4.102) give the relation (4.100), which follows from the QQ-relation (4.90). The relation derived from the coefficients of \(\mathbf{X}^{3}\) also follows from the QQ-relation (4.90). The relation derived from \(\mathbf{X}^{4}\) is trivially valid. The other coefficients are 0 or 1. Therefore the T-functions \(\mathcal{F}_{(b)}^{I_{2r+2}}\) and \(\mathcal{F}_{(1^{b})}^{I_{2r+2}}\) on the symmetric nesting paths are invariant under \(\mathfrak{W}\) if the QQ-relations (4.88)-(4.90) are imposed. One may be able to exclude the T- and Q-functions on the non-symmetric nesting paths from consideration.
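The coefficient-by-coefficient structure of (4.102) can be previewed in the commuting (character-limit) approximation, where the internal shift in \(\mathcal{X}_{I_{r+3}}^{[2]}\) is ignored and \(A=\mathcal{X}_{I_{r}}\), \(B=\mathcal{X}_{I_{r+3}}\) are treated as ordinary commuting symbols. Then the \(\mathbf{X}^{2}\) coefficient cancels and every surviving coefficient is symmetric under \(A\leftrightarrow B\), in line with the discussion above; the genuine statement, with shifts kept, requires the QQ-relation (4.90). A sympy sketch of this simplified expansion:

```python
# Commuting-symbol (character-limit) preview of the expansion of (4.102):
# with A = X_{I_r}, B = X_{I_{r+3}} treated as ordinary commuting symbols
# (all internal shifts dropped), the product has no X^2 term and every
# surviving coefficient is symmetric under A <-> B, i.e. under k.
import sympy as sp

A, B, X = sp.symbols('A B X')
prod_ = sp.expand((1 - A*X) * (1 - A*B*X**2) * (1 - B*X))

coeffs = [sp.factor(prod_.coeff(X, k)) for k in range(5)]
print(coeffs)            # [1, -(A + B), 0, A*B*(A + B), -A**2*B**2]

# each coefficient is invariant under the exchange A <-> B (the action of k)
swapped = [c.subs({A: B, B: A}, simultaneous=True) for c in coeffs]
print([sp.simplify(c - s) for c, s in zip(coeffs, swapped)])   # all 0
```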
#### 4.4.2 \(U_{q}(osp(2r|2s)^{(1)})\) case
We assume \(r,s\in\mathbb{Z}_{\geq 0}\), \(r+s\geq 2\) or \((r,s)=(0,1)\), and set
\[(M,N)=(2r,2s+2),\quad\mathfrak{B}=\{1,2,\ldots,2r\},\quad\mathfrak{F}=\{2r+1,2r +2,\ldots,2r+2s+2\},\\ \mathfrak{D}=\{2r+s+1,2r+s+2\},\quad\eta=0,\quad z_{2r+s+1}=-z_{2 r+s+2}=1. \tag{4.103}\]
In particular for \(s=0\), this reduces to the case \(U_{q}(so(2r)^{(1)})\). The case \(r=0\) is parallel to the \(U_{q}(sp(2s)^{(1)})\) case, but the roles of rows and columns of Young diagrams have to be interchanged.
**QQ-relations.** For a symmetric nesting path defined by \(I_{2r+2s+2}=(i_{1},i_{2},\ldots,i_{r+s+1},i_{r+s+1}^{*},\ldots,i_{2}^{*},i_{1}^{*})\), \(i_{r+s+1}\in\mathfrak{D}\), we consider additional reductions
\[\mathbb{Q}_{I_{r+s}} =\mathbf{Q}_{I_{r+s}}^{[-1]}\mathbf{Q}_{I_{r+s}}^{[1]}, \tag{4.104}\] \[\mathbb{Q}_{I_{r+s+1}} =(\mathbf{Q}_{I_{r+s}})^{2} \tag{4.105}\]
and
\[\mathbb{Q}_{I_{r+s-1}} =\mathbf{Q}_{I_{r+s}}\mathbf{Q}_{\breve{I}_{r+s}}\quad\text{if}\quad i_{r+s}\in\mathfrak{B}, \tag{4.106}\] \[\mathbb{Q}_{\widetilde{I}_{r+s-1}} =\mathbf{Q}_{\hat{I}_{r+s}}\mathbf{Q}_{I_{r+s}}\quad\text{if}\quad i_{r+s-1}\in\mathfrak{B}, \tag{4.107}\] \[(z_{i_{r+s}}+z_{i_{r+s+1}})\mathbb{Q}_{\widetilde{I}_{r+s}} =z_{i_{r+s}}\mathbf{Q}_{I_{r+s}}^{[-1]}\mathbf{Q}_{I_{r+s}}^{[1]}+z_{i_{r+s+1}}\mathbf{Q}_{I_{r+s}}^{[1]}\mathbf{Q}_{I_{r+s}}^{[-1]}\quad\text{if}\quad i_{r+s}\in\mathfrak{F}, \tag{4.108}\]
where \(\widetilde{I}_{r+s}=(i_{1},i_{2},\ldots,i_{r+s-1},i_{r+s+1})\), \(\breve{I}_{r+s}=(i_{1},i_{2},\ldots,i_{r+s-1},i_{r+s}^{*})\), \(\hat{I}_{r+s}=(i_{1},i_{2},\ldots,i_{r+s-2},i_{r+s-1}^{*},i_{r+s})\). The relation (4.107) is the same type of relation as (4.106) on another symmetric nesting path defined by \(\tau_{i_{r+s-1},i_{r+s}}\circ\tau_{i_{r+s-1}^{*},i_{r+s}^{*}}(I_{2r+2s+2})=(i_{1},i_{2},\ldots,i_{r+s-2},i_{r+s},i_{r+s-1},i_{r+s+1},i_{r+s+1}^{*},i_{r+s-1}^{*},i_{r+s}^{*},i_{r+s-2}^{*},\ldots,i_{2}^{*},i_{1}^{*})\), \(i_{r+s+1}\in\mathfrak{D}\). QQ-relations can be interpreted as functional relations associated with Dynkin diagrams (see subsection 2 and section 4.3). The QQ-relations (3.8) and (3.9) reduce to the following functional relations:
**for \(a\)-th node (\(1\leq a\leq r+s-2\) for type C, \(1\leq a\leq r+s-3\) for type D):**
\[(z_{i_{a}}-z_{i_{a+1}})\mathbb{Q}_{I_{a-1}}\mathbb{Q}_{I_{a+1}}= z_{i_{a}}\mathbb{Q}_{I_{a}}^{[p_{i_{a}}]}\mathbb{Q}_{\widetilde{I}_{a}}^{[-p_{i_{ a}}]}-z_{i_{a+1}}\mathbb{Q}_{I_{a}}^{[-p_{i_{a}}]}\mathbb{Q}_{\widetilde{I}_{a}}^{[p_{ i_{a}}]}\quad\text{if}\quad p_{i_{a}}=p_{i_{a+1}},\quad\text{and}\] \[\text{for}\;\;a\in\{1,2,\ldots,r+s-3\}\;\;\text{if}\;\;i_{r+s} \in\mathfrak{B},\quad\text{for}\;\;a\in\{1,2,\ldots,r+s-2\}\;\;\text{if}\;\;i_ {r+s}\in\mathfrak{F}, \tag{4.109}\] \[(z_{i_{a}}-z_{i_{a+1}})\mathbb{Q}_{I_{a}}\mathbb{Q}_{\widetilde{I }_{a}}=z_{i_{a}}\mathbb{Q}_{I_{a-1}}^{[-p_{i_{a}}]}\mathbb{Q}_{I_{a+1}}^{[p_{ i_{a}}]}-z_{i_{a+1}}\mathbb{Q}_{I_{a-1}}^{[p_{i_{a}}]}\mathbb{Q}_{I_{a+1}}^{[-p_{ i_{a}}]}\quad\text{if}\quad p_{i_{a}}=-p_{i_{a+1}},\quad\text{and}\] \[\text{for}\;\;a\in\{1,2,\ldots,r+s-3\}\;\;\text{if}\;\;i_{r+s} \in\mathfrak{B},\quad\text{for}\;\;a\in\{1,2,\ldots,r+s-2\}\;\;\text{if}\;\;i_ {r+s}\in\mathfrak{F}, \tag{4.110}\]
**for \((r+s-1)\)-th node of type C:**
\[(z_{i_{r+s-1}}-z_{i_{r+s}})\mathbb{Q}_{I_{r+s-2}}\mathbf{Q}_{I_{r+ s}}^{[-1]}\mathbf{Q}_{I_{r+s}}^{[1]}=z_{i_{r+s-1}}\mathbb{Q}_{I_{r+s-1}}^{[-1]} \mathbb{Q}_{\widetilde{I}_{r+s-1}}^{[1]}-z_{i_{r+s}}\mathbb{Q}_{I_{r+s-1}}^{[1] }\mathbb{Q}_{\widetilde{I}_{r+s-1}}^{[-1]}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad
**for \((r+s)\)-th node of type C:**
\[(z_{i_{r+s}}^{2}-1)\mathbb{Q}_{I_{r+s-1}}=z_{i_{r+s}}^{2}\mathbf{Q}_{I_{r+s}}^{[- 2]}\mathbf{Q}_{I_{r+s}}^{[2]}-\mathbf{Q}_{I_{r+s}}^{[2]}\mathbf{Q}_{I_{r+s}}^{[- 2]}\quad\text{if}\quad i_{r+s}\in\mathfrak{F}, \tag{4.113}\]
**for \((r+s-2)\)-th node of type D:**
\[(z_{i_{r+s-2}}-z_{i_{r+s-1}})\mathbb{Q}_{I_{r+s-3}}\mathbf{Q}_{I_{r+s}}=z_{i_{ r+s-2}}\mathbb{Q}_{I_{r+s-2}}^{[p_{i_{r+s-2}}]}\mathbb{Q}_{I_{r+s-2}}^{[-p_{i_{r+s-2}}]}\] \[\qquad\qquad\qquad-z_{i_{r+s-1}}\mathbb{Q}_{I_{r+s-2}}^{[-p_{i_{ r+s-2}}]}\mathbb{Q}_{I_{r+s-2}}^{[p_{i_{r+s-2}}]}\quad\text{if}\quad p_{i_{r+s-2}}=p _{i_{r+s-1}}\quad\text{and}\quad i_{r+s}\in\mathfrak{B}, \tag{4.114}\]
\[(z_{i_{r+s-2}}-z_{i_{r+s-1}})\mathbb{Q}_{I_{r+s-2}}\mathbb{Q}_{I_{r+s-2}}^{ [-p_{i_{r+s-2}}]}=z_{i_{r+s-2}}\mathbb{Q}_{I_{r+s-3}}^{[-p_{i_{r+s-2}}]}\mathbf{ Q}_{I_{r+s}}^{[p_{i_{r+s-2}}]}\mathbf{Q}_{I_{r+s}}^{[p_{i_{r+s-2}}]}\] \[\quad\quad-z_{i_{r+s-1}}\mathbb{Q}_{I_{r+s-3}}^{[p_{i_{r+s-2}}]} \mathbf{Q}_{I_{r+s}}^{[-p_{i_{r+s-2}}]}\quad\text{if}\quad p_{i_{r+s-2}}=-p_{i _{r+s-1}}\quad\text{and}\quad i_{r+s}\in\mathfrak{B}, \tag{4.115}\]
**for \((r+s-1)\)-th and \((r+s)\)-th nodes of type D (simply laced):**
\[(z_{i_{r+s-1}}-z_{i_{r+s}})\mathbb{Q}_{I_{r+s-2}}=z_{i_{r+s-1}} \mathbf{Q}_{I_{r+s}}^{[1]}\mathbf{Q}_{I_{r+s}}^{[-1]}-z_{i_{r+s}}\mathbf{Q}_{I _{r+s}}^{[-1]}\mathbf{Q}_{I_{r+s}}^{[1]}\quad\text{if}\quad i_{r+s-1},i_{r+s} \in\mathfrak{B}, \tag{4.116}\] \[(z_{i_{r+s-1}}-z_{i_{r+s}}^{-1})\mathbb{Q}_{I_{r+s-2}}=z_{i_{r+s-1 }}\mathbf{Q}_{I_{r+s}}^{[1]}\mathbf{Q}_{I_{r+s}}^{[-1]}-z_{i_{r+s}}^{-1} \mathbf{Q}_{I_{r+s}}^{[-1]}\mathbf{Q}_{I_{r+s}}^{[1]}\quad\text{if}\quad i_{r+ s-1},i_{r+s}\in\mathfrak{B}, \tag{4.117}\]
**for \((r+s-1)\)-th and \((r+s)\)-th nodes of type D (non-simply laced):**
\[(z_{i_{r+s-1}}-z_{i_{r+s}})\mathbb{Q}_{\tilde{I}_{r+s-1}}\mathbf{ Q}_{\tilde{I}_{r+s}}=z_{i_{r+s-1}}\mathbb{Q}_{I_{r+s-2}}^{[1]}\mathbf{Q}_{I_{r+s}}^{[- 2]}-z_{i_{r+s}}\mathbb{Q}_{I_{r+s-2}}^{[-1]}\mathbf{Q}_{I_{r+s}}^{[2]}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad
\((i,j)=(i_{r+s-1},i^{*}_{r+s})\). Eqs. (4.110), (4.112), (4.115) and (4.118) are reductions of (3.9) for \(I=I_{a-1}\), \((i,j)=(i_{a},i_{a+1})\) for \(1\leq a\leq r+s-3\) (or \(r+s-2\)), \(a=r+s-1\), \(a=r+s-2\), and \(a=r+s-1\), respectively. Eq. (4.119) is a reduction of (3.9) for \(I=I_{r+s-2}\), \((i,j)=(i_{r+s-1},i^{*}_{r+s})\). Eqs. (4.116) and (4.117) are of the same type. Eqs. (4.118) and (4.119) are also of the same type. They are related to each other by the permutation of \(i_{r+s}\) and \(i^{*}_{r+s}\), which corresponds to the symmetry of the Dynkin diagrams of type D with respect to their final pair of nodes.
For \(I\in\mathfrak{A}\) and mutually distinct \(i,i^{*}\in\mathfrak{I}\) such that \(I\sqcup I^{*}\sqcup\{i,i^{*}\}\sqcup\mathfrak{D}=\mathfrak{I}\), (4.106) and (4.107) are summarized as
\[\mathbb{Q}_{I}=\mathbf{Q}_{I,i}\mathbf{Q}_{I,i^{*}}\quad\text{if}\quad i\in \mathfrak{B}. \tag{4.120}\]
Note that \(|I|=r+s-1\) holds. For \(I\in\mathfrak{A}\) and mutually distinct \(i,j,i^{*},j^{*}\in\mathfrak{I}\) such that \(I\sqcup I^{*}\sqcup\{i,i^{*},j,j^{*}\}\sqcup\mathfrak{D}=\mathfrak{I}\), (4.116) and (4.117) are summarized as
\[(z_{i}-z_{j})\mathbb{Q}_{I}=z_{i}\mathbf{Q}_{I,i^{*}}^{[1]}\mathbf{Q}_{I,i^{*},j}^{[-1]}-z_{j}\mathbf{Q}_{I,i,j^{*}}^{[-1]}\mathbf{Q}_{I,i^{*},j}^{[1]}\quad \text{if}\quad p_{i}=p_{j}=1, \tag{4.121}\]
and (4.112), (4.118) and (4.119) are summarized as
\[(z_{i}-z_{j})\mathbb{Q}_{I,i}\mathbf{Q}_{I,i^{*},j}=z_{i}\mathbb{Q}_{I}^{[-1] }\mathbf{Q}_{I,i,j}^{[2]}-z_{j}\mathbb{Q}_{I}^{[1]}\mathbf{Q}_{I,i,j}^{[-2]} \quad\text{if}\quad p_{i}=-p_{j}. \tag{4.122}\]
Note that \(|I|=r+s-2\) holds. Eqs. (4.120) and (4.121) for \(s=0\) corresponds to eqs. (5.4) and (5.8) in [75], respectively. Let \(\{v_{k}^{I_{r+s}}\}_{k=1}^{m_{I_{k}}}\) and \(\{v_{k}^{\bar{I}_{r+s}}\}_{k=1}^{m_{I_{k}}}\) be the zeros of the Q-functions \(\mathbf{Q}_{I_{r+s}}\) and \(\mathbf{Q}_{\bar{I}_{r+s}}\), respectively. Eq. (4.106) means that \(\{u_{k}^{I_{r+s-1}}\}_{k=1}^{n_{I_{k}}}=\{v_{k}^{I_{r+s}}\}_{k=1}^{m_{I_{k}}} \sqcup\{v_{k}^{\bar{I}_{r+s}}\}_{k=1}^{m_{I_{k}}}\) holds if \(i_{r+s}\in\mathfrak{B}\). We remark that QQ-relations related to \(osp(4|6)\) symmetry were discussed 41 in [77] in the context of the quantum spectral curve for \(AdS_{4}/CFT_{3}\). In particular, there is a room to reconsider the reduction procedures and find QQ-relations corresponding to [eqs. (7.51), (7.52), [77]] as substitutes of the QQ-relations (4.112), (4.118), (4.119) and (4.122).
Footnote 41: Eq. (7.9) in [77] looks like a reduction of (3.9) for \(I=\emptyset\), \(i\in\mathfrak{B}\), \(j\in\mathfrak{D}\).
**T-functions and Bethe ansatz equations.** Under the reduction, (3.3) reduces to
\[\mathcal{X}_{I_{a}}=z_{i_{a}}\frac{\mathbb{Q}_{I_{a-1}}^{[2r-2s-2-\sum_{j\in I _{a}}p_{j}-p_{i_{a}}]}\mathbb{Q}_{I_{a}}^{[2r-2s-2-\sum_{j\in I_{a}}p_{j}+2p_{ i_{a}}]}}{\mathbb{Q}_{I_{a-1}}^{[2r-2s-2-\sum_{j\in I_{a}}p_{j}+p_{i_{a}}]} \mathbb{Q}_{I_{a}}^{[2r-2s-2-\sum_{j\in I_{a}}p_{j}]}}\]
\[\text{for}\quad\begin{cases}1\leq a\leq r+s-2\quad\text{if}\quad i_{r+s}\in \mathfrak{B}\\ 1\leq a\leq r+s-1\quad\text{if}\quad i_{r+s}\in\mathfrak{F},\end{cases}\]
\[\mathcal{X}_{I_{r+s-1}}=z_{i_{r+s-1}}\frac{\mathbb{Q}_{I_{r+s-2}}^{[r-s-1-p_{r+s -1}]}\mathbf{Q}_{I_{r+s}}^{[r-s-1+2p_{i_{r+s-1}}]}\mathbf{Q}_{I_{r+s}}^{[r-s-1+2 p_{i_{r+s-1}}]}}{\mathbb{Q}_{I_{r+s-2}}^{[r-s-1+p_{i_{r+s-1}}]}\mathbf{Q}_{I_{r+s}}^{[r-s-1 ]}}\quad\text{if}\quad i_{r+s}\in\mathfrak{B},\]
\[\mathcal{X}_{I_{r+s}}=\begin{cases}z_{i_{r+s}}\frac{\mathbf{Q}_{I_{r+s}}^{[r-s- 3]}\mathbf{Q}_{I_{r+s}}^{[r-s+1]}}{\mathbf{Q}_{I_{r+s}}^{[r-s-1]}\mathbf{Q}_{I _{r+s}}^{[r-s-1]}}\quad\text{if}\quad i_{r+s}\in\mathfrak{B}\\ z_{i_{r+s}}\frac{\mathbb{Q}_{I_{r+s-1}}^{[r-s-1]}\mathbf{Q}_{I_{r+s}}^{[r-s-1]}}{ \mathbb{Q}_{I_{r+s-1}}^{[r-s-1]}\mathbf{Q}_{I_{r+s}}^{[r-s-1]}}\quad\text{if} \quad i_{r+s}\in\mathfrak{F},\end{cases}\]
\[\mathcal{X}_{I_{r+s+1}} =-\mathcal{X}_{I_{r+s+2}}=z_{i_{r+s+1}}\frac{\mathbb{Q}_{I_{r+s}}^{[r -s+1]}\mathbb{Q}_{I_{r+s}}^{[r-s-3]}}{(\mathbb{Q}_{I_{r+s}}^{[r-s-1]})^{2}},\] \[\mathcal{X}_{I_{r+s+3}} =\begin{cases}z_{i_{r+s}}^{-1}\frac{\mathbb{Q}_{I_{r+s+1}}^{[r-s+1 ]}\mathbb{Q}_{I_{r+s}}^{[r-s-3]}}{\mathbb{Q}_{I_{r+s}}^{[r-s-1]}\mathbb{Q}_{I_{ r+s}}^{[r-s-1]}}\quad\text{if}\quad i_{r+s}\in\mathfrak{B}\\ z_{i_{r+s}}^{-1}\frac{\mathbb{Q}_{I_{r+s}}^{[r-s+1]}\mathbb{Q}_{I_{r+s}}^{[r-s +3]}}{\mathbb{Q}_{I_{r+s+1}}^{[r-s+1]}\mathbb{Q}_{I_{r+s}}^{[r-s+3]}}\quad \text{if}\quad i_{r+s}\in\mathfrak{F},\\ \end{cases}\] \[\mathcal{X}_{I_{r+s+4}} =z_{i_{r+s-1}}^{-1}\frac{\mathbb{Q}_{I_{r+s-2}}^{[r-s-1+p_{i_{r+s -1}}]}\mathbb{Q}_{I_{r+s}}^{[r-s-1-2p_{i_{r+s-1}}]}\mathbb{Q}_{I_{r+s}}^{[r-s- 1-2p_{i_{r+s-1}}]}}{\mathbb{Q}_{I_{r+s-2}}^{[r-s-1-p_{i_{r+s-1}}]}\mathbb{Q}_{ I_{r+s}}^{[r-s-1]}}\quad\text{if}\quad i_{r+s}\in\mathfrak{B},\] \[\mathcal{X}_{I_{2r+2s+3-a}} =z_{i_{a}}^{-1}\frac{\mathbb{Q}_{I_{a-1}}^{[r_{s}\in I_{a}\,p_{j} +p_{a}]}\mathbb{Q}_{I_{a}}^{[\sum_{j\in I_{a}\,p_{j}-2p_{ia}}]}}{\mathbb{Q}_{ I_{a-1}}^{[\sum_{j\in I_{a}\,p_{j}-p_{a}}]}}\] \[\text{for}\quad\begin{cases}1\leq a\leq r+s-2&\text{if}\quad i_{r +s}\in\mathfrak{B}\\ 1\leq a\leq r+s-1&\text{if}\quad i_{r+s}\in\mathfrak{F}.\end{cases} \tag{4.123}\]
The T-function (3.1) reduces to
\[\mathsf{F}_{(1)}^{I_{2r+2s+2}}=\mathbb{Q}_{\emptyset}^{[2r-2s-2]}\mathbb{Q}_{ \emptyset}\sum_{a=1}^{r+s}p_{i_{a}}(\mathcal{X}_{I_{a}}+\mathcal{X}_{I_{2r+2s+ 3-a}}). \tag{4.124}\]
The pole-free condition of the T-function (4.124) produces the following Bethe ansatz equations:
**for \(a\)-th node (\(1\leq a\leq r+s-2\) for type C, \(1\leq a\leq r+s-3\) for type D):**
\[-1 =\frac{p_{i_{a}}z_{i_{a}}}{p_{i_{a+1}}z_{i_{a+1}}}\frac{\mathbb{Q} _{I_{a-1}}(u_{k}^{I_{a}}-p_{i_{a}})\mathbb{Q}_{I_{a}}(u_{k}^{I_{a}}+2p_{i_{a}} )\mathbb{Q}_{I_{a+1}}(u_{k}^{I_{a}}-p_{i_{a+1}})}{\mathbb{Q}_{I_{a-1}}(u_{k}^{ I_{a}}+p_{i_{a}})\mathbb{Q}_{I_{a}}(u_{k}^{I_{a}}-2p_{i_{a+1}})\mathbb{Q}_{I_{a+1}}(u_{k}^{ I_{a}}+p_{i_{a+1}})}\quad\text{for}\ k\in\{1,2,\ldots,n_{I_{a}}\}\] \[\text{and}\quad a\in\{1,2,\ldots,r+s-3\}\ \text{ if}\ i_{r+s}\in\mathfrak{B},\quad a\in\{1,2,\ldots,r+s-2\}\ \text{ if}\ i_{r+s}\in\mathfrak{F}; \tag{4.125}\]
**for \((r+s-1)\)-th node of type C:**
\[1 =\frac{p_{i_{r+s-1}}z_{i_{r+s-1}}}{z_{i_{r+s}}}\frac{\mathbb{Q} _{I_{r+s-2}}(u_{k}^{I_{r+s-1}}-p_{i_{r+s-1}})\mathbb{Q}_{I_{r+s-1}}(u_{k}^{I_{ r+s-1}}+2p_{i_{r+s-1}})\mathbb{Q}_{I_{r+s}}(u_{k}^{I_{r+s-1}}+2)}{\mathbb{Q}_{I_{r+s-2}}(u_{k}^{ I_{r+s-1}}+p_{i_{r+s-1}})\mathbb{Q}_{I_{r+s-1}}(u_{k}^{I_{r+s-1}}+2)\mathbb{Q}_{I_{r+s}}(u_{k}^{ I_{r+s-1}}-2)}\] \[\text{for}\quad k\in\{1,2,\ldots,n_{I_{r+s-1}}\}\quad\text{if} \quad i_{r+s}\in\mathfrak{F}, \tag{4.126}\]
**for \((r+s)\)-th node of type C:**
\[-1 =z_{i_{r+s}}^{2}\frac{\mathbb{Q}_{I_{r+s-1}}(v_{k}^{I_{r+s}}+2) \mathbb{Q}_{I_{r+s}}(v_{k}^{I_{r+s}}-4)}{\mathbb{Q}_{I_{r+s-1}}(v_{k}^{I_{r+s}} -2)\mathbb{Q}_{I_{r+s}}(v_{k}^{I_{r+s}}+4)}\quad\text{for}\quad k\in\{1,2, \ldots,m_{I_{r+s}}\}\quad\text{if}\quad i_{r+s}\in\mathfrak{F}, \tag{4.127}\]
**for \((r+s-2)\)-th node of type D:**
\[-1=\frac{p_{i_{r+s-2}}z_{i_{r+s-2}}}{p_{i_{r+s-1}}z_{i_{r+s-1}}}\frac{\mathbb{Q}_{I _{r+s-3}}(u_{k}^{I_{r+s-2}}-p_{i_{r+s-2}})\mathbb{Q}_{I_{r+s-2}}(u_{k}^{I_{r+s-2} }+2p_{i_{r+s-2}})}{\mathbb{Q}_{I_{r+s-3}}(u_{k}^{I_{r+s-2}}+p_{i_{r+s-2}}) \mathbb{Q}_{I_{r+s-2}}(u_{k}^{I_{r+s-2}}-2p_{i_{r+s-1}})}\times\]
\[\times\frac{\mathbf{Q}_{\tilde{I}_{r+s}}(u_{k}^{I_{r+s-2}}-p_{i_{r+s-1}}) \mathbf{Q}_{I_{r+s}}(u_{k}^{I_{r+s-2}}-p_{i_{r+s-1}})}{\mathbf{Q}_{\tilde{I}_{ r+s}}(u_{k}^{I_{r+s-2}}+p_{i_{r+s-1}})\mathbf{Q}_{I_{r+s}}(u_{k}^{I_{r+s-2}}+p_{i_{ r+s-1}})}\]
\[\text{for}\quad k\in\{1,2,\ldots,n_{I_{r+s-2}}\}\quad\text{if}\quad i_{r+s}\in\mathfrak{B}, \tag{4.128}\]
**for \((r+s-1)\)-th node of type D:**
\[-1=\frac{p_{i_{r+s-1}}z_{i_{r+s-1}}}{z_{i_{r+s}}}\frac{\mathbb{Q}_{I_{r+s-2}} (v_{k}^{\tilde{I}_{r+s}}-p_{i_{r+s-1}})\mathbf{Q}_{\tilde{I}_{r+s}}(v_{k}^{ \tilde{I}_{r+s}}+2p_{i_{r+s-1}})\mathbf{Q}_{I_{r+s}}(v_{k}^{\tilde{I}_{r+s}}+ 2p_{i_{r+s-1}})}{\mathbb{Q}_{I_{r+s-2}}(v_{k}^{\tilde{I}_{r+s}}+p_{i_{r+s-1}}) \mathbf{Q}_{\tilde{I}_{r+s}}(v_{k}^{\tilde{I}_{r+s}}-2)\mathbf{Q}_{I_{r+s}}(v _{k}^{\tilde{I}_{r+s}}+2)}\]
\[\text{for}\quad k\in\{1,2,\ldots,m_{\tilde{I}_{r+s}}\}\quad\text{if}\quad i_{r+ s}\in\mathfrak{B}, \tag{4.129}\]
**for \((r+s)\)-th node of type D:**
\[-1=\frac{z_{i_{r+s-1}}z_{i_{r+s}}}{p_{i_{r+s-1}}}\frac{\mathbb{Q}_{I_{r+s-2}} (v_{k}^{I_{r+s}}-p_{i_{r+s-1}})\mathbf{Q}_{\tilde{I}_{r+s}}(v_{k}^{I_{r+s}}-2 )\mathbf{Q}_{I_{r+s}}(v_{k}^{I_{r+s}}+2)}{\mathbb{Q}_{I_{r+s-2}}(v_{k}^{I_{r+s }}+p_{i_{r+s-1}})\mathbf{Q}_{\tilde{I}_{r+s}}(v_{k}^{I_{r+s}}-2p_{i_{r+s-1}}) \mathbf{Q}_{I_{r+s}}(v_{k}^{I_{r+s}}-2p_{i_{r+s-1}})}\]
\[\text{for}\quad k\in\{1,2,\ldots,m_{I_{r+s}}\}\quad\text{if}\quad i_{r+s}\in \mathfrak{B}. \tag{4.130}\]
The Bethe ansatz equations (4.125)-(4.130) for the Bethe roots \(\{u_{k}^{I_{a}}\}\) for \(1\leq a\leq r+s-1\) are reductions of (3.6) on the symmetric nesting path, while the ones for the Bethe roots \(\{v_{k}^{I_{r+s}}\}\) and \(\{v_{k}^{\tilde{I}_{r+s}}\}\) are not. Eqs. (4.124) and (4.125)-(4.130) agree with the known results by algebraic Bethe ansatz in case \(i_{k}\in\mathfrak{B}\) for \(s+1\leq k\leq r+s\) and \(i_{k}\in\mathfrak{F}\) for \(1\leq k\leq s\) [73]; and in case \(r=1\) [76]. Note that the terms \(\mathcal{X}_{I_{r+s+1}}\) and \(\mathcal{X}_{I_{r+s+2}}\) are missing in (4.124) because of cancellation. The tableaux sum expression of the T-function (3.19) (for one row Young diagrams and one column Young diagrams) reproduces [eq. (3.38), [2]]42 under the reduction. As for the case \(r=1\), see [eqs. (3.18), (3.27) [3]]43. Moreover, \(\mathsf{T}_{\mu}^{\mathfrak{B},\mathfrak{F}}\) (from (3.48)) and its (super)character limit \(\zeta(\mathsf{T}_{\mu}^{\mathfrak{B},\mathfrak{F}})\) give a Wronskian expression of the T-function and a Weyl-type supercharacter formula respectively after reduction. The Young diagram \(\mu\) is related to the labels of the representation through (2.23)-(2.26) or (2.31)-(2.33). These formulas do not seem to provide T-functions for irreducible representations in the auxiliary space in the general situation. Nevertheless, by investigating the Bethe strap, one can gain some insights on irreducibility (see section 4.6).
Footnote 42: The functions \(\mathbb{Q}_{\emptyset}\), \(\mathbb{Q}_{I_{b}}\) (\(1\leq b\leq r+s-2\)), \(\mathbf{Q}_{\tilde{I}_{r+s}}\), \(\mathbf{Q}_{I_{r+s}}\), \(\mathbb{Q}_{\emptyset}^{[2r-2s-2]}\mathbb{Q}_{\emptyset}\mathcal{X}_{I_{a}}\) and \(\mathbb{Q}_{\emptyset}^{[2r-2s-2]}\mathbb{Q}_{\emptyset}\mathcal{X}_{I_{2r+2s+3 -a}}\) correspond to \(\phi(u)\), \(Q_{b}(u)\) (\(1\leq b\leq r+s-2\)), \(Q_{r+s-1}(u)\), \(Q_{r+s}(u)\), \(\overline{
The generating functions (3.23) and (3.24) reduce to
\[\mathbf{W}_{I_{2r+2s+2}}(\mathbf{X}) =\prod_{a=1}^{\overrightarrow{r+s}}(1-\mathcal{X}_{I_{2r+2s+3-a}}\mathbf{X})^{-p_{i_{a}}}(1-\mathcal{X}_{I_{r+s+3}}\mathbf{X}\,\mathcal{X}_{I_{r+s}}\mathbf{X})\overleftarrow{\prod_{a=1}^{r+s}}(1-\mathcal{X}_{I_{a}}\mathbf{X})^{-p_{i_{a}}}\\ =\sum_{a=0}^{\infty}\mathcal{F}^{I_{2r+2s+2}[a-1]}_{(a)}\mathbf{X}^{a}, \tag{4.131}\] \[\mathbf{W}_{I_{2r+2s+2}}(\mathbf{X})^{-1} =\prod_{a=1}^{\overrightarrow{r+s}}(1-\mathcal{X}_{I_{a}}\mathbf{X})^{p_{i_{a}}}(1-\mathcal{X}_{I_{r+s+3}}\mathbf{X}\,\mathcal{X}_{I_{r+s}}\mathbf{X})^{-1}\overleftarrow{\prod_{a=1}^{r+s}}(1-\mathcal{X}_{I_{2r+2s+3-a}}\mathbf{X})^{p_{i_{a}}}\\ =\sum_{a=0}^{\infty}(-1)^{a}\mathcal{F}^{I_{2r+2s+2}[a-1]}_{(1^{a})}\mathbf{X}^{a}, \tag{4.132}\]
where the following relations, which follow from (4.123), are used
\[(1-\mathcal{X}_{I_{r+s+2}}\mathbf{X})(1-\mathcal{X}_{I_{r+s+1}} \mathbf{X})=1-(\mathcal{X}_{I_{r+s+2}}+\mathcal{X}_{I_{r+s+1}})\mathbf{X}+ \mathcal{X}_{I_{r+s+2}}\mathbf{X}\,\mathcal{X}_{I_{r+s+1}}\mathbf{X}=\\ =1+\mathcal{X}_{I_{r+s+2}}\,\mathcal{X}^{[2]}_{I_{r+s+1}}\mathbf{ X}^{2}=1-\frac{\mathbf{Q}^{[r-s-3]}_{I_{r+s}}\mathbf{Q}^{[r-s+3]}_{I_{r+s}}}{ \mathbf{Q}^{[r-s-1]}_{I_{r+s}}\mathbf{Q}^{[r-s+1]}_{I_{r+s}}}\mathbf{X}^{2}=1- \mathcal{X}_{I_{r+s+3}}\mathbf{X}\,\mathcal{X}_{I_{r+s}}\mathbf{X}. \tag{4.133}\]
Note that the terms \(\mathcal{X}_{I_{r+s+2}}\) and \(\mathcal{X}_{I_{r+s+1}}\) disappear from the formula irrespective of \(i_{r+s}\) because of cancellation. In this way, we recover [eqs. (B.4) and (B.3) in [2]] (see also [eq. (2.9) in [71]] for the \(s=0\) case) 44. Baxter type equations follow from the kernels of (4.131) and (4.132), which are reductions of (3.32) and (3.33).
Footnote 44: In [2], we considered only the formulas for the distinguished Dynkin diagram, while the formulas here are the ones for general Dynkin diagrams. For comparison, use an identity (4.145).
The relation (4.11) reduces to
\[\hat{\mathsf{T}}^{\mathfrak{B},\mathfrak{F}}_{a,-m-2r+2s+2}=\hat{\mathsf{T}}^{\mathfrak{B},\mathfrak{F}}_{a,m}\quad\text{for any }m\in\mathbb{C}\quad\text{for (4.103)}. \tag{4.134}\]

Under the reduction, the Wronskian-type T-function \(\mathsf{T}^{\mathfrak{B},\mathfrak{F}}_{1,m}\) (from (3.48)) takes the form (4.135) [the explicit expression is not recoverable from the source],
where the character parts are given by
\[\chi_{i}^{+} =\frac{z_{i}^{m-3+2r-2s}(z_{i}-1)(z_{i}+1)\prod_{f=2r+1}^{2r+s}(z_{i }-z_{f})(z_{i}-z_{f}^{-1})}{\prod_{j=1}^{i-1}(z_{i}-z_{j})\prod_{j=i+1}^{r}(z_{i }-z_{j})\prod_{j=1}^{r}(z_{i}-z_{j}^{-1})}\] \[=\frac{(-1)^{i-1}z_{i}^{m+i-1}\prod_{j=1}^{i-1}z_{j}^{-1}\prod_{f= 2r+1}^{2r+s}(1-\frac{z_{f}}{z_{i}})(1-\frac{1}{z_{i}z_{f}})}{\prod_{j=1}^{i-1}( 1-\frac{z_{i}}{z_{j}})\prod_{j=i+1}^{r}(1-\frac{z_{j}}{z_{i}})\prod_{j\neq i}^{ r}(1-\frac{1}{z_{i}z_{j}})}\quad\text{for}\quad 1\leq i\leq r, \tag{4.136}\] \[\chi_{i^{*}}^{+} =\frac{(-1)^{i-1}z_{i}^{-m+i-2r+2s+1}\prod_{j=1}^{i-1}z_{j}^{-1} \prod_{f=2r+1}^{2r+s}(1-\frac{z_{f}}{z_{i}})(1-\frac{1}{z_{i}z_{f}})}{\prod_{ j=1}^{i-1}(1-\frac{z_{i}}{z_{j}})\prod_{j=i+1}^{r}(1-\frac{z_{j}}{z_{i}}) \prod_{j\neq i}^{r}(1-\frac{1}{z_{i}z_{j}})}\quad\text{for}\quad 1\leq i\leq r. \tag{4.137}\]
We remark that (4.136) and (4.137) for \(s=0\) (and \(q=1\)) coincide with the Yangian \(Y(so(2r))\) case of [eq. (9.25) in [61]] and that (4.135) for \(s=0\) corresponds 45 to [eq. (9.27) in [61]].
Footnote 45: Compare \(\mathsf{T}_{1,m}^{\mathfrak{B},\mathfrak{I}[-r]}\) for \(s=0\) with [eq. (9.27) in [61]]. In our convention, the unit of shift of the spectral parameter is twice as large as theirs. Their parameters \(\tau_{j}\) correspond to our parameters \(z_{j}\). The sign factor \((-1)^{i-1}\) is included in the character parts (4.136) and (4.137).
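As a minimal consistency check of (4.136) and (4.137) (a specialization we spell out here; it is not part of the statement in [61]), set \(r=1\), \(s=0\): every product in (4.136) and (4.137) is then empty and
\[\chi_{1}^{+}=z_{1}^{m},\qquad\chi_{1^{*}}^{+}=z_{1}^{-m},\]
which are exchanged under \(m\to-m-2r+2s+2=-m\), the substitution entering the symmetry relation displayed above.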
**\(\mathfrak{W}\)-symmetry** We would like to consider a subgroup \(\mathfrak{W}=\mathbb{Z}_{2}^{r+s}\rtimes S_{r+s}\) of the permutation group \(S(I_{M+N})=S(I_{2r+2s+2})=S(\mathfrak{I})\), which preserves the set of the entire 46 symmetric nesting paths, and discuss the invariance of the T-function \(\mathsf{F}_{(1)}^{I_{2r+2s+2}}\) under it. \(\mathfrak{W}\) is generated by two kinds of operations of the form: \(\mathfrak{s}=\tau_{i_{a}i_{a+1}}\circ\tau_{i_{a}^{*}i_{a+1}^{*}}\), \(\mathfrak{s}(I_{2r+2s+2})=(i_{1},i_{2},\ldots,i_{a-1},i_{a+1},i_{a},i_{a+2},\ldots,i_{r+s},i_{r+s+1},i_{r+s+1}^{*},i_{r+s}^{*},\ldots,i_{a+2}^{*},i_{a}^{*},i_{a+1}^{*},i_{a-1}^{*},\ldots,i_{2}^{*},i_{1}^{*})\) for \(a\in\{1,2,\ldots,r+s-1\}\), and \(\mathfrak{k}=\tau_{i_{r+s},i_{r+s}^{*}}\), \(\mathfrak{k}(I_{2r+2s+2})=(i_{1},i_{2},\ldots,i_{r+s-1},i_{r+s}^{*},i_{r+s+1},i_{r+s+1}^{*},i_{r+s},i_{r+s-1}^{*},\ldots,i_{2}^{*},i_{1}^{*})\). The condition \(\mathfrak{s}(\mathsf{F}_{(1)}^{I_{2r+2s+2}})=\mathsf{F}_{(1)}^{\mathfrak{s}(I_{2r+2s+2})}=\mathsf{F}_{(1)}^{I_{2r+2s+2}}\) is equivalent to the following 4-term QQ-relations
Footnote 46: We fix the elements of \(\mathfrak{D}\).
\[p_{i_{a}}\mathcal{X}_{I_{a}}+p_{i_{a+1}}\mathcal{X}_{I_{a+1}} =p_{i_{a+1}}\mathcal{X}_{\mathfrak{s}(I_{a})}+p_{i_{a}}\mathcal{X }_{\mathfrak{s}(I_{a+1})}, \tag{4.138}\] \[p_{i_{a}}\mathcal{X}_{I_{2r+2s+3-a}}+p_{i_{a+1}}\mathcal{X}_{I_{ 2r+2s+2-a}} =p_{i_{a+1}}\mathcal{X}_{\mathfrak{s}(I_{2r+2s+3-a})}+p_{i_{a}} \mathcal{X}_{\mathfrak{s}(I_{2r+2s+2-a})}, \tag{4.139}\]
which follow 47 from the 3-term QQ-relations.
Footnote 47: As an example, let us consider the 4-term QQ-relation (4.138) for \(a=r+s-1\), \(i_{r+s-1}\in\mathfrak{F}\), \(i_{r+s}\in\mathfrak{B}\), which is equivalent to
\[\left(\frac{z_{i_{r+s-1}}\mathbb{Q}_{I_{r+s-2}}^{[1]}\mathbb{Q}_{I_{r+s}}^{[-2] }-z_{i_{r+s}}\mathbb{Q}_{I_{r+s-2}}^{[-1]}\mathbb{Q}_{I_{r+s}}^{[2]}}{\mathbb{ Q}_{\bar{I}_{r+s-1}}\mathbb{Q}_{I_{r+s}}}\right)^{[2]}=\frac{z_{i_{r+s-1}} \mathbb{Q}_{I_{r+s-2}}^{[1]}\mathbb{Q}_{I_{r+s}}^{[-2]}-z_{i_{r+s}}\mathbb{Q}_ {I_{r+s-2}}^{[-1]}\mathbb{Q}_{I_{r+s}}^{[2]}}{\mathbb{Q}_{\bar{I}_{r+s-1}} \mathbb{Q}_{I_{r+s}}} \tag{4.140}\]
This means that the right hand side of (4.140) is a periodic function \(\phi\) of the spectral parameter: \(\phi^{[2]}=\phi\). The 3-term QQ-relation (4.118) corresponds to the case that this periodic function is a constant \(\phi=z_{i_{r+s-1}}-z_{i_{r+s}}\). This comes from the assumption that the Q-functions have the form (3.4), and the deformation parameter is generic. Thus (4.118) can be a sufficient condition for (4.140) in the general situation.
This relation (4.144) for the case \(p_{i_{r+s}}=1\) reduces to (4.142) and an identity
\[(1-A{\bf X})^{-1}(1-A{\bf X}B{\bf X})(1-B{\bf X})^{-1}=(1-A{\bf X})^ {-1}+(1-B{\bf X})^{-1}-1=\\ =(1-B{\bf X})^{-1}(1-B{\bf X}A{\bf X})(1-A{\bf X})^{-1} \tag{4.145}\]
for any functions \(A\) and \(B\). Consider the expansion of (4.144) for the case \(p_{i_{r+s}}=-1\) with respect to the non-negative powers of \({\bf X}\). The coefficients of \({\bf X}\) on both sides of (4.144) give the relation (4.141), which follows from the QQ-relation (4.113). The relation derived from the coefficients of \({\bf X}^{3}\) also follows from the QQ-relation (4.113). The relation derived from the coefficients of \({\bf X}^{4}\) is trivially valid. The other coefficients are 0 or 1. Therefore the T-functions \({\cal F}^{I_{2r+2s+2}}_{(b)}\) and \({\cal F}^{I_{2r+2s+2}}_{(1^{b})}\) on the symmetric nesting paths are invariant under \(\mathfrak{W}\) if the QQ-relations (4.109)-(4.119) are imposed. It may be possible to exclude the T- and Q-functions on the non-symmetric nesting paths from our consideration, and reformulate the Wronskian-type formulas. Namely, forget about the connection with \(U_{q}(gl(2r|2s+2)^{(1)})\) (reduction procedures, especially the non-trivial ansatzes (4.104)-(4.108)) first. Assume the Bethe ansatz equations associated with all the simple root systems of \(osp(2r|2s)\) (4.125)-(4.130) (see also section 4.5). Construct the T-function (4.124) by analytic Bethe ansatz, which is free of poles under the Bethe ansatz equations associated with each simple root system. Assume that all these T-functions are equivalent. The QQ-relations (4.109)-(4.119) follow from this equivalence. 50 We will be able to reformulate the Wronskian-type formulas on T-and Q-functions starting from these. The same remark will apply not only to other algebras treated in this paper, but also to other algebras such as \(U_{q}(D(2,1;\alpha)^{(1)})\), \(U_{q}(G(3)^{(1)})\) and \(U_{q}(F(4)^{(1)})\), etc. As for \(s=0\) case, several Wronskian-type formulas of T-functions (or T-operators) are already proposed in [75, 62, 63, 61], which provide alternative expressions of tableau sum [34, 71], CBR-type determinant or Pfaffian formulas [64, 71] of T-functions.
Footnote 50: In other words, if there is a part in which the equivalence breaks down, then the corresponding QQ-relation must be excluded. It would be possible to restrict our consideration to the orbit of the Weyl reflections and odd reflections (starting from the distinguished simple root system) via (2.41)-(2.42) and exclude QQ-relations outside of the orbit.
### Bethe ansatz equations associated with root systems
The QQ-relations and the Bethe ansatz equations discussed in the previous subsections can be expressed in terms of root systems of underlying algebras.
#### 4.5.1 QQ-relations
Let \(\kappa\) be the order of the map \(\sigma\) (see (3.10)). Here we consider the case \(\kappa=2\), \(\sigma^{\kappa}=1\). Let \(\{\alpha_{a}\}_{a=1}^{M+N-1}\) be the simple roots of \(gl(M|N)\) defined in (2.6) for a symmetric nesting path. The parameters \((M,N)\) are specified as in the previous subsections. Set \(\mathfrak{r}=r+s\). For \(a\in\{1,2,\ldots,\mathfrak{r}\}\), the QQ-relations (4.14)-(4.17), (4.44)-(4.47), (4.59)-(4.62) and (4.70)-(4.74) are summarized as
\[(e^{-\alpha_{a}(h)}-1)P_{a}\prod_{k=0}^{\kappa-1}\prod_{\stackrel{{ b=1}}{{(\alpha_{a}|\sigma^{k}(\alpha_{b}))\neq 0,\,\alpha_{a}\neq\sigma^{k}(\alpha_{b})}}} ^{\mathfrak{r}}\mathcal{Q}_{b}^{[k\eta]}=e^{-\alpha_{a}(h)}\prod_{\stackrel{{ k=0}}{{\alpha_{a}=\sigma^{k}(\alpha_{a})}}}^{\kappa-1} \mathcal{Q}_{a}^{[d_{a}+k\eta]}\widetilde{\mathcal{Q}}_{a}^{[-d_{a}+k\eta]}\\ -\prod_{\stackrel{{ k=0}}{{\alpha_{a}=\sigma^{k}( \alpha_{a})}}}^{\kappa-1}\mathcal{Q}_{a}^{[-d_{a}+k\eta]}\widetilde{\mathcal{Q }}_{a}^{[d_{a}+k\eta]}\quad\text{if}\quad(\alpha_{a}|\alpha_{a})\neq 0, \tag{4.146}\]
\[(e^{-\alpha_{a}(h)}-1)\mathcal{Q}_{a}\widetilde{\mathcal{Q}}_{a}=e ^{-\alpha_{a}(h)}P_{a}^{[-d_{a}]}\prod_{k=0}^{\kappa-1}\prod_{\stackrel{{ b=1}}{{(\alpha_{a}|\sigma^{k}(\alpha_{b}))\neq 0,\, \alpha_{a}\neq\sigma^{k}(\alpha_{b})}}}^{\mathfrak{r}}\mathcal{Q}_{b}^{[( \alpha_{a}|\sigma^{k}(\alpha_{b}))+k\eta]}\\ -P_{a}^{[d_{a}]}\prod_{k=0}^{\kappa-1}\prod_{\stackrel{{ b=1}}{{(\alpha_{a}|\sigma^{k}(\alpha_{b}))\neq 0,\, \alpha_{a}\neq\sigma^{k}(\alpha_{b})}}}^{\mathfrak{r}}\mathcal{Q}_{b}^{[-( \alpha_{a}|\sigma^{k}(\alpha_{b}))+k\eta]}\quad\text{if}\quad(\alpha_{a}| \alpha_{a})=0, \tag{4.147}\]
where each element is identified as:
for \(U_{q}(gl(2r|2s+1)^{(2)})\), \(U_{q}(gl(2r+1|2s)^{(2)})\), \(U_{q}(osp(2r+1|2s)^{(1)})\),
\[\mathcal{Q}_{a}=\mathbb{Q}_{I_{a}},\quad\widetilde{\mathcal{Q}}_{a}=\mathbb{Q }_{\widetilde{I}_{a}},\quad u_{k}^{(a)}=u_{k}^{I_{a}}\quad n_{a}=n_{I_{a}} \quad\text{for}\quad a\in\{1,2,\ldots,r+s\}, \tag{4.148}\]
for \(U_{q}(gl(2r|2s)^{(2)})\):
\[\mathcal{Q}_{a}=\mathbb{Q}_{I_{a}},\quad\widetilde{\mathcal{Q}}_ {a}=\mathbb{Q}_{\widetilde{I}_{a}},\quad u_{k}^{(a)}=u_{k}^{I_{a}}\quad n_{a}= n_{I_{a}}\quad\text{for}\quad a\in\{1,2,\ldots,r+s-1\},\\ \mathcal{Q}_{r+s}=\mathbf{Q}_{I_{r+s}},\quad\widetilde{\mathcal{Q }}_{r+s}=\mathbf{Q}_{\widetilde{I}_{r+s}},\quad u_{k}^{(r+s)}=v_{k}^{I_{r+s} },\quad n_{r+s}=m_{I_{r+s}}, \tag{4.149}\]
\(d_{a}=(\alpha_{a}|\alpha_{a})/2\) if \((\alpha_{a}|\alpha_{a})\neq 0\), and \(d_{a}=(\alpha_{a}|\alpha_{a^{\prime}})\neq 0\) for some simple root \(\alpha_{a^{\prime}}\) if \((\alpha_{a}|\alpha_{a})=0\), in particular \(d_{1}=p_{i_{1}}\); \(e^{\epsilon_{a}(h)}=z_{a}\) and
\[P_{a}=\begin{cases}\mathcal{Q}_{0}=\mathbb{Q}_{\emptyset}&\text{if}\quad a=1, \\ 1&\text{otherwise}.\end{cases} \tag{4.150}\]
In addition to the above, we formally set \(\eta=0\) for \(U_{q}(osp(2r+1|2s)^{(1)})\). We remark that (4.146) and (4.147) reduce to (3.17) and (3.18) if we set \(\kappa=1\), \(\mathfrak{r}=M+N-1\) and replace (4.150) with (3.16). In this case, the corresponding simple root system is not necessarily on a symmetric nesting path. In order to compare (4.146) for \(s=0\) with the QQ-relations for the twisted quantum affine (non-super) algebras discussed in [66, 67], one should set: \(P_{a}\to 1\), \(e^{\pm\alpha_{a}(h)/2}\to[\pm\alpha_{a}/2]\), \(\widetilde{\mathcal{Q}}_{a}/(e^{-\alpha_{a}(h)/2}-e^{\alpha_{a}(h)/2})\to\widetilde{\mathcal{Q}}_{a}\).
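To unpack the conditions under the products in (4.146) (a remark we add for readability; it is simply a restriction of the displayed formula): for \(\kappa=2\), if \(\sigma(\alpha_{a})\neq\alpha_{a}\), only the \(k=0\) factors survive on the right hand side, so that (4.146) takes the familiar 3-term form
\[(e^{-\alpha_{a}(h)}-1)P_{a}\prod_{k=0}^{1}\prod_{\stackrel{b=1}{(\alpha_{a}|\sigma^{k}(\alpha_{b}))\neq 0,\,\alpha_{a}\neq\sigma^{k}(\alpha_{b})}}^{\mathfrak{r}}\mathcal{Q}_{b}^{[k\eta]}=e^{-\alpha_{a}(h)}\mathcal{Q}_{a}^{[d_{a}]}\widetilde{\mathcal{Q}}_{a}^{[-d_{a}]}-\mathcal{Q}_{a}^{[-d_{a}]}\widetilde{\mathcal{Q}}_{a}^{[d_{a}]},\]
whereas if \(\sigma(\alpha_{a})=\alpha_{a}\), both \(k=0\) and \(k=1\) contribute, and each of the two terms on the right hand side acquires the additional factor \(\mathcal{Q}_{a}^{[\pm d_{a}+\eta]}\widetilde{\mathcal{Q}}_{a}^{[\mp d_{a}+\eta]}\).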
Let \(\{\beta_{a}\}_{a=1}^{\mathfrak{r}}\) be the simple roots of the orthosymplectic Lie superalgebras defined in (2.11), (2.21) and (2.27). For \(a\in\{1,2,\ldots,\mathfrak{r}\}\), the QQ-relations (4.88)-(4.90), (4.109)-(4.113), (4.14)-(4.16) and (4.18) are summarized as
\[(e^{-\beta_{a}(h)}-1)\varphi_{a}\prod_{b=1\atop(\beta_{a}|\beta_{b}) \neq 0,\,b\neq a}^{\mathfrak{r}}\prod_{k=0}^{-C_{ab}-1}\mathcal{Q}_{b}^{[-( \beta_{a}|\beta_{b})-d_{a}(1+2k)]}=e^{-\beta_{a}(h)}\mathcal{Q}_{a}^{[d_{a}]} \widetilde{\mathcal{Q}}_{a}^{[-d_{a}]}-\mathcal{Q}_{a}^{[-d_{a}]}\widetilde{ \mathcal{Q}}_{a}^{[d_{a}]}\\ \text{if}\quad(\beta_{a}|\beta_{a})\neq 0,\quad p_{\beta_{a}}=1, \tag{4.151}\]
\[(e^{-\beta_{a}(h)}+1)\varphi_{a}\prod_{b=1\atop(\beta_{a}|\beta_ {b})\neq 0}^{\mathfrak{r}}\mathcal{Q}_{b}=e^{-\beta_{a}(h)}\mathcal{Q}_{a}^{[2 d_{a}]}\widetilde{\mathcal{Q}}_{a}^{[-2d_{a}]}+\mathcal{Q}_{a}^{[-2d_{a}]} \widetilde{\mathcal{Q}}_{a}^{[2d_{a}]}\\ \text{if}\quad(\beta_{a}|\beta_{a})\neq 0,\quad p_{\beta_{a}}=-1, \tag{4.152}\]
\[(e^{-\beta_{a}(h)}-1)\mathcal{Q}_{a}\widetilde{\mathcal{Q}}_{a} =e^{-\beta_{a}(h)}\varphi_{a}^{-}\prod_{b=1\atop(\beta_{a}|\beta_ {b})\neq 0,\alpha\neq b}^{\mathfrak{r}}\mathcal{Q}_{b}^{[(\beta_{a}|\beta_{ b})]}-\varphi_{a}^{+}\prod_{b=1\atop(\beta_{a}|\beta_{b})\neq 0,\alpha\neq b}^{ \mathfrak{r}}\mathcal{Q}_{b}^{[-(\beta_{a}|\beta_{b})]}\\ \text{if}\quad(\beta_{a}|\beta_{a})=0, \tag{4.153}\]
where each element is identified as:
for \(U_{q}(sp(2r)^{(1)})\):
\[\beta_{a}=\epsilon_{i_{a}^{*}}-\epsilon_{i_{a+1}^{*}},\quad \mathcal{Q}_{a}=\mathbb{Q}_{I_{a}},\quad\widetilde{\mathcal{Q}}_{a}=\mathbb{Q }_{\widetilde{I}_{a}},\quad u_{k}^{(a)}=u_{k}^{I_{a}}\quad n_{a}=n_{I_{a}}\\ \text{for}\quad a\in\{1,2,\ldots,r-1\},\\ \beta_{r}=2\epsilon_{i_{r}^{*}},\quad\mathcal{Q}_{r}=\mathbf{Q}_{ I_{r}},\quad\widetilde{\mathcal{Q}}_{r}=\mathbb{Q}_{I_{r}},\quad u_{k}^{(r)}=v_{k}^{I_ {r}},\quad n_{r}=m_{I_{r}},\quad s=0, \tag{4.154}\]
for \(U_{q}(osp(2r|2s)^{(1)})\), \(i_{r+s}\in\mathfrak{B}\):
\[\beta_{a}=\epsilon_{i_{a}^{*}}-\epsilon_{i_{a+1}^{*}},\quad \mathcal{Q}_{a}=\mathbb{Q}_{I_{a}},\quad\widetilde{\mathcal{Q}}_{a}=\mathbb{Q }_{\widetilde{I}_{a}},\quad u_{k}^{(a)}=u_{k}^{I_{a}}\quad n_{a}=n_{I_{a}}\\ \text{for}\quad a\in\{1,2,\ldots,r+s-2\},\\ \beta_{r+s-1}=\epsilon_{i_{r+s-1}^{*}}-\epsilon_{i_{r+s}^{*}},\quad \mathcal{Q}_{r+s-1}=\mathbf{Q}_{\widetilde{I}_{r+s}},\\ \widetilde{\mathcal{Q}}_{r+s-1}=\mathbb{Q}_{\widetilde{I}_{r+s-1}} \quad\text{if}\quad i_{r+s-1}\in\mathfrak{F},\quad\widetilde{\mathcal{Q}}_{r+s -1}=\mathbf{Q}_{\widetilde{I}_{r+s}}\quad\text{if}\quad i_{r+s-1}\in\mathfrak{B}, \\ u_{k}^{(r+s-1)}=v_{k}^{\widetilde{I}_{r+s}},\quad n_{r+s-1}=m_{ \widetilde{I}_{r+s}},\\ \beta_{r+s}=\epsilon_{i_{r+s-1}^{*}}+\epsilon_{i_{r+s}^{*}},\quad \mathcal{Q}_{r+s}=\mathbf{Q}_{I_{r+s}},\\ \widetilde{\mathcal{Q}}_{r+s}=\mathbb{Q}_{I_{r+s-1}}\quad\text{ if}\quad i_{r+s-1}\in\mathfrak{F},\quad\widetilde{\mathcal{Q}}_{r+s}=\mathbf{Q}_{ \widetilde{I}_{r+s}}\quad\text{if}\quad i_{r+s-1}\in\mathfrak{B},\\ u_{k}^{(r+s)}=v_{k}^{I_{r+s}},\quad n_{r+s}=m_{I_{r+s}}. \tag{4.155}\]
for \(U_{q}(osp(2r|2s)^{(1)})\), \(i_{r+s}\in\mathfrak{F}\):
\[\beta_{a}=\epsilon_{i_{a}^{*}}-\epsilon_{i_{a+1}^{*}},\quad\mathcal{Q}_{a}= \mathbb{Q}_{I_{a}},\quad u_{k}^{(a)}=u_{k}^{I_{a}},\quad n_{a}=n_{I_{a}}\quad \text{for}\quad a\in\{1,2,\ldots,r+s-1\},\\ \beta_{r+s}=2\epsilon_{i_{r+s}^{*}},\quad\mathcal{Q}_{r+s}= \mathbf{Q}_{I_{r+s}},\quad u_{k}^{(r+s)}=v_{k}^{I_{r+s}},\quad n_{r+s}=m_{I_{r+ s}},\\ \widetilde{\mathcal{Q}}_{r+s-1}=\mathbb{Q}_{\widetilde{I}_{r+s-1 }}\quad\text{if}\quad i_{r+s-1}\in\mathfrak{F},\quad\widetilde{\mathcal{Q}}_{r +s-1}=\mathbf{Q}_{\widetilde{I}_{r+s}}\quad\text{if}\quad i_{r+s-1}\in \mathfrak{B},\\ \widetilde{\mathcal{Q}}_{r+s}=\mathbf{Q}_{\widetilde{I}_{r+s}}, \tag{4.156}\]
for \(U_{q}(osp(2r+1|2s)^{(1)})\):
\[\beta_{a}=\epsilon_{i_{a}^{*}}-\epsilon_{i_{a+1}^{*}}\quad\text{for}\quad a\in \{1,2,\ldots,r+s-1\},\quad\beta_{r+s}=\epsilon_{i_{r+s}^{*}},\\ \mathcal{Q}_{a}=\mathbb{Q}_{I_{a}},\quad u_{k}^{(a)}=u_{k}^{I_{a }},\quad n_{a}=n_{I_{a}}\quad\text{for}\quad a\in\{1,2,\ldots,r+s\},\\ \widetilde{\mathcal{Q}}_{a}=\mathbb{Q}_{\widetilde{I}_{a}}\quad \text{for}\quad a\in\{1,2,\ldots,r+s-1\},\\ \widetilde{\mathcal{Q}}_{r+s}=\mathbb{Q}_{\widetilde{I}_{r+s}} \quad\text{if}\quad i_{r+s}\in\mathfrak{B},\quad\widetilde{\mathcal{Q}}_{r+s} =\mathbb{Q}_{\widetilde{I}_{r+s}}\quad\text{if}\quad i_{r+s}\in\mathfrak{F}, \tag{4.157}\]
and \(C_{ab}=2(\beta_{a}|\beta_{b})/(\beta_{a}|\beta_{a})\), \(d_{a}=(\beta_{a}|\beta_{a})/2\) for \((\beta_{a}|\beta_{a})\neq 0\). In our examples, the vacuum parts are defined as follows: for \(U_{q}(osp(3|0)^{(1)})\),
\[\varphi_{1}=\varphi_{1}(u)=\mathbb{Q}_{\emptyset}^{[-\frac{1}{2}]}\mathbb{Q} _{\emptyset}^{[\frac{1}{2}]}, \tag{4.158}\]
and for the other case
\[\varphi_{a}=\varphi_{a}(u)=\begin{cases}\mathbb{Q}_{\emptyset}&\text{if}\quad( \epsilon_{i_{1}^{*}}|\beta_{a})\neq 0\\ 1&\text{if}\quad(\epsilon_{i_{1}^{*}}|\beta_{a})=0,\end{cases} \tag{4.159}\]
\[\varphi_{a}^{\pm}=\varphi_{a}(u\pm(\epsilon_{i_{1}^{*}}|\beta_{a})). \tag{4.160}\]
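Spelling out (4.159) and (4.160) for the identifications (4.154)-(4.157) (a reading aid we add; the special case (4.158) aside): \(\beta_{1}\) is the only simple root with \((\epsilon_{i_{1}^{*}}|\beta_{a})\neq 0\), except in the type-D case \(r+s=2\) of (4.155), where \(\beta_{2}=\epsilon_{i_{1}^{*}}+\epsilon_{i_{2}^{*}}\) also pairs non-trivially with \(\epsilon_{i_{1}^{*}}\) (cf. the discussion around (4.173) below). Hence
\[\varphi_{1}=\mathbb{Q}_{\emptyset},\qquad\varphi_{1}^{\pm}=\mathbb{Q}_{\emptyset}^{[\pm(\epsilon_{i_{1}^{*}}|\beta_{1})]},\qquad\varphi_{a}=\varphi_{a}^{\pm}=1\quad\text{for}\quad a\geq 2,\]
with, in addition, \(\varphi_{2}=\mathbb{Q}_{\emptyset}\), \(\varphi_{2}^{\pm}=\mathbb{Q}_{\emptyset}^{[\pm(\epsilon_{i_{1}^{*}}|\beta_{2})]}\) in the exceptional type-D case.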
The vacuum parts depend on the Hilbert space on which transfer matrices act. In the theory of q-characters, the vacuum parts should be formally set to 1. We remark that (4.151) and (4.153) reduce to (3.17) and (3.18) if we set \(\mathfrak{r}=M+N-1\) and replace \(\{\beta_{b}\}\) with \(\{\alpha_{b}\}\) (not necessarily on a symmetric nesting path), \(\varphi_{a}\) with \(P_{a}\) in (3.16), and \(\varphi_{a}^{\pm}\) with \(P_{a}^{[\pm d_{a}]}\) (\(d_{a}\) is the one in (3.15)).
As already remarked, QQ-relations for non-super algebras are expressed in terms of root systems of underlying (non-super) Lie algebras [29, 30, 31]. Our formulation is different from theirs in that we start from the QQ-relations associated with root systems of the superalgebra \(gl(M|N)\) even for the non-superalgebra \(U_{q}(so(2r+1)^{(1)})\simeq U_{q}(osp(2r+1|0)^{(1)})\) case ((4.146) for \(s=0\)).
The ODE/IM correspondence is an efficient tool to derive QQ-relations. In particular, the ODE/IM correspondence for supersymmetric integrable models was discussed in [68, 18] for \(U_{q}(gl(2|1)^{(1)})\) (or \(U_{q}(sl(2|1)^{(1)})\)), and in [70] for supersymmetric affine Toda field equations associated to affine Lie superalgebras with purely fermionic simple root systems, including \(osp(2|2)^{(2)}\). It is desirable to reconsider the QQ-relations discussed here in connection with the ODE/IM correspondence, and generalize them further. In this context, it is important to clarify the object corresponding to the relations (4.1) in the ODE/IM correspondence for \(U_{q}(gl(M|N)^{(1)})\).
#### 4.5.2 Bethe ansatz equations
The Bethe ansatz equations for \(U_{q}(gl(2r|2s+1)^{(2)})\) (4.55), \(U_{q}(gl(2r+1|2s)^{(2)})\) (4.66), \(U_{q}(gl(2r|2s)^{(2)})\) (4.77), \(U_{q}(osp(2r+1|2s)^{(1)})\) (4.25) are expressed 51 in terms of a part of a symmetric simple root system of \(gl(M|N)\):
Footnote 51: Note that \(\mathbb{Q}_{I_{b}}(u_{k}^{I_{a}}+\xi)=\mathbb{Q}_{I_{M+N-b}}(u_{k}^{I_{M+N-a}}+\xi)\), \(\xi\in\mathbb{C}\), \(0\leq a,b\leq M+N\) holds for any symmetric nesting path.
\[-\frac{P_{a}(u_{k}^{(a)}+d_{a})}{P_{a}(u_{k}^{(a)}-d_{a})}=p_{ \alpha_{a}}e^{-\alpha_{a}(h)}\prod_{t=0}^{\kappa-1}\prod_{b=1}^{\mathfrak{r}} \frac{\mathcal{Q}_{b}(u_{k}^{(a)}+(\alpha_{a}|\sigma^{t}(\alpha_{b}))+\eta t )}{\mathcal{Q}_{b}(u_{k}^{(a)}-(\alpha_{a}|\sigma^{t}(\alpha_{b}))+\eta t)}\\ \text{for}\quad k\in\{1,2,\ldots,n_{a}\}\quad\text{and}\quad a \in\{1,2,\ldots,\mathfrak{r}\}. \tag{4.161}\]
For \(N=0\), this fits into the form of the Bethe ansatz equations for the twisted quantum affine (non-super) algebras in [54]. However, the formulation of the \(U_{q}(so(2r)^{(1)})\) case in [54] is different from ours in that we use a simple root system of \(gl(2r|1)\). Substituting \(u=u_{k}^{(a)}\pm d_{a}\) into (4.146) and eliminating \(\widetilde{\mathcal{Q}}_{a}(u_{k}^{(a)})\), we obtain (4.161) for \((\alpha_{a}|\alpha_{a})\neq 0\). Substituting \(u=u_{k}^{(a)}\) into (4.147), we obtain (4.161) for \((\alpha_{a}|\alpha_{a})=0\). Here we assume that the roots of the Q-functions are sufficiently generically distributed (thus \(\widetilde{\mathcal{Q}}_{a}(u_{k}^{(a)})\neq 0\)).
The Bethe ansatz equations for \(U_{q}(sp(2r)^{(1)})\) (4.93)-(4.95), \(U_{q}(osp(2r|2s)^{(1)})\) (4.125)-(4.130), and \(U_{q}(osp(2r+1|2s)^{(1)})\) (4.25) are expressed in terms of a simple root system of each finite algebra (\(\mathfrak{g}\) for \(U_{q}(\mathfrak{g}^{(1)})\)):
\[-\frac{\psi_{a}^{+}(u_{k}^{(a)})}{\psi_{a}^{-}(u_{k}^{(a)})}=p_{ \beta_{a}}e^{-\beta_{a}(h)}\prod_{b=1\atop b\neq a}^{\mathfrak{r}}\frac{ \mathcal{Q}_{b}(u_{k}^{(a)}+(\beta_{a}|\beta_{b}))}{\mathcal{Q}_{b}(u_{k}^{(a )}-(\beta_{a}|\beta_{b}))}\prod_{l=1}^{\kappa_{a}}\frac{\mathcal{Q}_{a}(u_{k}^ {(a)}+p_{l\beta_{a}}l(\beta_{a}|\beta_{a}))}{\mathcal{Q}_{a}(u_{k}^{(a)}-p_{l \beta_{a}}l(\beta_{a}|\beta_{a}))}\\ \text{for}\quad k\in\{1,2,\ldots,n_{a}\}\quad\text{and}\quad a\in \{1,2,\ldots,\mathfrak{r}\}, \tag{4.162}\]
where \(\kappa_{a}=2\) if \(p_{\beta_{a}}=-1\) and \((\beta_{a}|\beta_{a})\neq 0\) (black dot), \(\kappa_{a}=1\) if \(p_{\beta_{a}}=1\) and \((\beta_{a}|\beta_{a})\neq 0\) (white dot), \(\kappa_{a}=0\) if \(p_{\beta_{a}}=-1\) and \((\beta_{a}|\beta_{a})=0\) (gray dot); \(p_{l\beta_{a}}=(-1)^{\kappa_{a}-l}\); \(\prod_{l=1}^{0}(\cdots)=1\). \(\psi_{a}^{\pm}\) are vacuum eigenvalues of diagonal elements of a monodromy matrix 52. In general, cancellation of a common factor occurs between the numerator and denominator of the left-hand side of (4.162). Thus in our example, we may set
Footnote 52: Depending on the normalization, a shift in the spectral parameter is needed (cf. (3.14)).
\[\psi_{a}^{\pm}(u)=\mathbb{Q}_{\emptyset}(u\pm(\epsilon_{i_{1}^{*}}|\beta_{a})). \tag{4.163}\]
Substituting \(u=u_{k}^{(a)}\pm d_{a}\) into (4.151) and eliminating \(\widetilde{\mathcal{Q}}_{a}(u_{k}^{(a)})\), we obtain (4.162) for \((\beta_{a}|\beta_{a})\neq 0\), \(p_{\beta_{a}}=1\). Substituting \(u=u_{k}^{(a)}\pm 2d_{a}\) into (4.152) and eliminating \(\widetilde{\mathcal{Q}}_{a}(u_{k}^{(a)})\), we obtain (4.162) for \((\beta_{a}|\beta_{a})\neq 0\), \(p_{\beta_{a}}=-1\). Substituting \(u=u_{k}^{(a)}\) into (4.153), we obtain (4.162) for \((\beta_{a}|\beta_{a})=0\). In general, the vacuum parts of the Bethe ansatz equation (4.162)
and those of the QQ-relations (4.151)-(4.153) are related as
\[\frac{\psi_{a}^{+}(u_{k}^{(a)})}{\psi_{a}^{-}(u_{k}^{(a)})}=\begin{cases}\frac{ \varphi_{a}(u_{k}^{(a)}+d_{a})}{\varphi_{a}(u_{k}^{(a)}-d_{a})}&\text{for}\quad( \beta_{a}|\beta_{a})\neq 0,\quad p_{\beta_{a}}=1\\ \frac{\varphi_{a}(u_{k}^{(a)}+2d_{a})}{\varphi_{a}(u_{k}^{(a)}-2d_{a})}&\text{ for}\quad(\beta_{a}|\beta_{a})\neq 0,\quad p_{\beta_{a}}=-1\\ \frac{\varphi_{a}^{+}(u_{k}^{(a)})}{\varphi_{a}^{-}(u_{k}^{(a)})}&\text{for} \quad(\beta_{a}|\beta_{a})=0.\end{cases} \tag{4.164}\]
In our examples, (4.164) reduces to \(\mathbb{Q}_{\emptyset}(u+(\epsilon_{i_{1}^{\star}}|\beta_{a}))/\mathbb{Q}_{\emptyset}(u-(\epsilon_{i_{1}^{\star}}|\beta_{a}))\). In order to describe a two-body self-interaction 53 at \(b=a\), we need the even root \(2\beta_{a}\) in addition to the odd simple root \(\beta_{a}\) for the black dot.
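To make the elimination of \(\widetilde{\mathcal{Q}}_{a}(u_{k}^{(a)})\) completely explicit, we spell out the white-dot case (a derivation we add for convenience; it uses only (4.151), (4.164) and the telescoping argument employed again in (4.167)-(4.168) below). Evaluating (4.151) at \(u=u_{k}^{(a)}+d_{a}\) and at \(u=u_{k}^{(a)}-d_{a}\), and using \(\mathcal{Q}_{a}(u_{k}^{(a)})=0\), exactly one term survives on the right hand side in each case:
\[(e^{-\beta_{a}(h)}-1)\varphi_{a}(u_{k}^{(a)}+d_{a})\prod_{b=1\atop(\beta_{a}|\beta_{b})\neq 0,\,b\neq a}^{\mathfrak{r}}\prod_{j=0}^{-C_{ab}-1}\mathcal{Q}_{b}(u_{k}^{(a)}-(\beta_{a}|\beta_{b})-2d_{a}j)=e^{-\beta_{a}(h)}\mathcal{Q}_{a}(u_{k}^{(a)}+2d_{a})\widetilde{\mathcal{Q}}_{a}(u_{k}^{(a)}),\]
\[(e^{-\beta_{a}(h)}-1)\varphi_{a}(u_{k}^{(a)}-d_{a})\prod_{b=1\atop(\beta_{a}|\beta_{b})\neq 0,\,b\neq a}^{\mathfrak{r}}\prod_{j=0}^{-C_{ab}-1}\mathcal{Q}_{b}(u_{k}^{(a)}-(\beta_{a}|\beta_{b})-2d_{a}(j+1))=-\mathcal{Q}_{a}(u_{k}^{(a)}-2d_{a})\widetilde{\mathcal{Q}}_{a}(u_{k}^{(a)}).\]
Taking the ratio removes \(\widetilde{\mathcal{Q}}_{a}(u_{k}^{(a)})\) (assumed non-zero), and each product over \(j\) telescopes to \(\mathcal{Q}_{b}(u_{k}^{(a)}-(\beta_{a}|\beta_{b}))/\mathcal{Q}_{b}(u_{k}^{(a)}+(\beta_{a}|\beta_{b}))\), so that
\[-\frac{\varphi_{a}(u_{k}^{(a)}+d_{a})}{\varphi_{a}(u_{k}^{(a)}-d_{a})}=e^{-\beta_{a}(h)}\prod_{b=1\atop b\neq a}^{\mathfrak{r}}\frac{\mathcal{Q}_{b}(u_{k}^{(a)}+(\beta_{a}|\beta_{b}))}{\mathcal{Q}_{b}(u_{k}^{(a)}-(\beta_{a}|\beta_{b}))}\,\frac{\mathcal{Q}_{a}(u_{k}^{(a)}+2d_{a})}{\mathcal{Q}_{a}(u_{k}^{(a)}-2d_{a})},\]
which is (4.162) for \(\kappa_{a}=1\), \(p_{\beta_{a}}=1\), \((\beta_{a}|\beta_{a})=2d_{a}\), with \(\psi_{a}^{\pm}\) given by (4.164).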
Footnote 53: Ref. [55] pointed out that the formulation of the Bethe ansatz equations in terms of their corresponding Lie algebras [53] can be extended to the case of superalgebras (for rational vertex models), but the black dot of \(osp(1|2s)\) is an exception for this. Then in [23], we tried to make use of the correspondence between \(osp(1|2s)^{(1)}\) and \(sl(2s+1)^{(2)}\) for the description of the black dot in the Bethe ansatz equation. This is incorporated into (4.161). Eq. (4.162) is an alternative expression for this.
#### 4.5.3 Extended Weyl group symmetry
Eq. (4.151) is related to the Weyl reflection (2.34) by the simple even root \(\beta_{a}\), and (4.153) is related to the odd reflection (2.35) by the simple odd root \(\beta_{a}\) (for the case \((\beta_{a}|\beta_{a})=0\)): \(w_{\beta_{a}}(\mathcal{Q}_{a})=\widetilde{\mathcal{Q}}_{a}\), \(w_{\beta_{a}}(\mathcal{Q}_{b})=\mathcal{Q}_{b}\) for \(b\neq a\). In addition, (4.152) (for \(U_{q}(osp(2r+1|2s)^{(1)})\)) is related to the Weyl reflection (2.34) by the simple even root \(\alpha_{a}\) of \(gl(2r|2s+1)\) under reduction: \(w_{\alpha_{a}}(\mathcal{Q}_{a})=\widetilde{\mathcal{Q}}_{a}\), \(w_{\alpha_{a}}(\mathcal{Q}_{b})=\mathcal{Q}_{b}\) for \(b\neq a\). As far as we could see, (4.152) is not enough to realize the odd reflection by the odd root \(\beta_{r+s}\) of \(osp(2r+1|2s)\) with \((\beta_{r+s}|\beta_{r+s})\neq 0\), \(p_{\beta_{r+s}}=-1\). In order to realize this, we have to consider a composition of at least three 3-term QQ-relations, which corresponds to the 6-term QQ-relation (4.35) for the case \(p_{i_{r+s}}=-1\). Then the action of this odd reflection becomes \(w_{\beta_{r+s}}(\mathcal{Q}_{r+s})=\mathbb{Q}_{\tilde{I}_{r+s}}\), \(w_{\beta_{r+s}}(\mathcal{Q}_{b})=\mathcal{Q}_{b}\) for \(b\neq r+s\). We show these by a method similar to the one discussed in [8], noting that the Bethe ansatz equations (4.162) are expressed in terms of root systems of underlying superalgebras. In [8], we adopted an argument in [7] (amplified in [78]) based on the residue theorem associated with the particle-hole transformation (a preliminary form of the QQ-relation). Instead we extend a more simplified version [79] (cf. [6]) of it.
Let \(\{\widetilde{u}_{k}^{(a)}\}_{k=1}^{\widetilde{n}_{a}}\) be the zeros of \(\widetilde{\mathcal{Q}}_{a}\). One can show the following for a fixed \(a\in\{1,2,\ldots,\mathfrak{r}\}\) (\(a\) corresponds to a vertex of the Dynkin diagram associated with a simple root system \(\{\beta_{c}\}_{c=1}^{\mathfrak{r}}\)).
**White or gray dots [other than the case \((\beta_{a}|\beta_{a})\neq 0\), \(p_{\beta_{a}}=-1\)]** Under the QQ-relations (4.151) and (4.153), the Bethe ansatz equation (4.162), namely
\[-\frac{\psi_{c}^{+}(u_{k}^{(c)})}{\psi_{c}^{-}(u_{k}^{(c)})}=p_{\beta_{c}}e^{- \beta_{c}(h)}\prod_{b=1\atop b\neq c}^{\mathfrak{r}}\frac{\mathcal{Q}_{b}(u_{k }^{(c)}+(\beta_{c}|\beta_{b}))}{\mathcal{Q}_{b}(u_{k}^{(c)}-(\beta_{c}|\beta_{b }))}\times\]
\[\times\begin{cases}\frac{\mathcal{Q}_{c}(u_{k}^{(c)}+2(\beta_{c}|\beta_{c}))\mathcal{Q}_{c}(u_{k}^{(c)}-(\beta_{c}|\beta_{c}))}{\mathcal{Q}_{c}(u_{k}^{(c)}-2(\beta_{c}|\beta_{c}))\mathcal{Q}_{c}(u_{k}^{(c)}+(\beta_{c}|\beta_{c}))}&\text{if}\quad p_{\beta_{c}}=-1,\quad(\beta_{c}|\beta_{c})\neq 0\\ 1&\text{if}\quad(\beta_{c}|\beta_{c})=0\\ \frac{\mathcal{Q}_{c}(u_{k}^{(c)}+(\beta_{c}|\beta_{c}))}{\mathcal{Q}_{c}(u_{k}^{(c)}-(\beta_{c}|\beta_{c}))}&\text{otherwise}\end{cases}\]
\[\text{for}\quad k\in\{1,2,\ldots,n_{c}\}\quad\text{and}\quad c\in\{1,2,\ldots, \mathbf{r}\}. \tag{4.165}\]
is equivalent to
\[-\frac{w_{\beta_{a}}(\psi_{c}^{+})(\hat{u}_{k}^{(c)})}{w_{\beta_{a}}(\psi_{c}^{-})(\hat{u}_{k}^{(c)})}=p_{w_{\beta_{a}}(\beta_{c})}e^{-w_{\beta_{a}}(\beta_{c})(h)}\prod_{\begin{subarray}{c}b=1\\ b\neq c\end{subarray}}^{\mathbf{r}}\frac{w_{\beta_{a}}(\mathcal{Q}_{b})(\hat{u}_{k}^{(c)}+(w_{\beta_{a}}(\beta_{c})|w_{\beta_{a}}(\beta_{b})))}{w_{\beta_{a}}(\mathcal{Q}_{b})(\hat{u}_{k}^{(c)}-(w_{\beta_{a}}(\beta_{c})|w_{\beta_{a}}(\beta_{b})))}\times\\ \times\begin{cases}\frac{w_{\beta_{a}}(\mathcal{Q}_{c})(\hat{u}_{k}^{(c)}+2(w_{\beta_{a}}(\beta_{c})|w_{\beta_{a}}(\beta_{c})))w_{\beta_{a}}(\mathcal{Q}_{c})(\hat{u}_{k}^{(c)}-(w_{\beta_{a}}(\beta_{c})|w_{\beta_{a}}(\beta_{c})))}{w_{\beta_{a}}(\mathcal{Q}_{c})(\hat{u}_{k}^{(c)}-2(w_{\beta_{a}}(\beta_{c})|w_{\beta_{a}}(\beta_{c})))w_{\beta_{a}}(\mathcal{Q}_{c})(\hat{u}_{k}^{(c)}+(w_{\beta_{a}}(\beta_{c})|w_{\beta_{a}}(\beta_{c})))}&\text{if}\quad p_{w_{\beta_{a}}(\beta_{c})}=-1,\quad(w_{\beta_{a}}(\beta_{c})|w_{\beta_{a}}(\beta_{c}))\neq 0\\ 1&\text{if}\quad(w_{\beta_{a}}(\beta_{c})|w_{\beta_{a}}(\beta_{c}))=0\\ \frac{w_{\beta_{a}}(\mathcal{Q}_{c})(\hat{u}_{k}^{(c)}+(w_{\beta_{a}}(\beta_{c})|w_{\beta_{a}}(\beta_{c})))}{w_{\beta_{a}}(\mathcal{Q}_{c})(\hat{u}_{k}^{(c)}-(w_{\beta_{a}}(\beta_{c})|w_{\beta_{a}}(\beta_{c})))}&\text{otherwise}\end{cases}\\ \text{for}\quad k\in\{1,2,\ldots,\hat{n}_{c}\}\quad\text{and}\quad c\in\{1,2,\ldots,\mathbf{r}\}, \tag{4.166}\]
**The case \((\beta_{a}|\beta_{a})\neq 0\), \(p_{\beta_{a}}=1\), \((\beta_{a}|\beta_{c})\neq 0\), \(c\neq a\)** Substituting \(u=u_{k}^{(c)}+(\beta_{a}|\beta_{c})+d_{a}(1+2j)\) for \(j\in\{0,1,\ldots,-C_{ac}-1\}\) into (4.151), we obtain
\[\frac{\mathcal{Q}_{a}(u_{k}^{(c)}+(\beta_{a}|\beta_{c})+2d_{a}j)} {\mathcal{Q}_{a}(u_{k}^{(c)}+(\beta_{a}|\beta_{c})+2d_{a}(1+j))}=e^{-\beta_{a} (h)}\frac{\widetilde{\mathcal{Q}}_{a}(u_{k}^{(c)}+(\beta_{a}|\beta_{c})+2d_{a} j)}{\widetilde{\mathcal{Q}}_{a}(u_{k}^{(c)}+(\beta_{a}|\beta_{c})+2d_{a}(1+j))}\\ \text{for}\quad k\in\{1,2,\ldots,n_{c}\}. \tag{4.167}\]
Taking the product over \(j\in\{0,1,\ldots,-C_{ac}-1\}\) on both sides of (4.167), we obtain
\[\frac{\mathcal{Q}_{a}(u_{k}^{(c)}+(\beta_{a}|\beta_{c}))}{\mathcal{Q}_{a}(u_ {k}^{(c)}-(\beta_{a}|\beta_{c}))}=e^{C_{ac}\beta_{a}(h)}\frac{\widetilde{ \mathcal{Q}}_{a}(u_{k}^{(c)}+(\beta_{a}|\beta_{c}))}{\widetilde{\mathcal{Q}}_ {a}(u_{k}^{(c)}-(\beta_{a}|\beta_{c}))}\quad\text{for}\quad k\in\{1,2,\ldots,n_ {c}\}. \tag{4.168}\]
Substituting (4.168) into the right hand side of (4.165) (the part \(b=a\)), we arrive at (4.166). Here we use the relations \(w_{\beta_{a}}(\beta_{c})=\beta_{c}-C_{ac}\beta_{a}\), (2.43).
**The case \((\beta_{a}|\beta_{a})\neq 0\), \(p_{\beta_{a}}=1\), \((\beta_{a}|\beta_{c})=0\), \(c\neq a\)** This case is trivial.
**Fermionic QQ-relation (4.153): odd reflection (gray dot)**
**The case \((\beta_{a}|\beta_{a})=0\), \(c=a\)** Substituting \(u=\widetilde{u}_{k}^{(a)}\) into (4.153), we obtain
\[\frac{\varphi_{a}^{-}(\widetilde{u}_{k}^{(a)})}{\varphi_{a}^{+}(\widetilde{u} _{k}^{(a)})}=e^{\beta_{a}(h)}\prod_{b=1\atop b\neq a}^{\text{r}}\frac{ \mathcal{Q}_{b}(\widetilde{u}_{k}^{(a)}-(\beta_{a}|\beta_{b}))}{\mathcal{Q}_{ b}(\widetilde{u}_{k}^{(a)}+(\beta_{a}|\beta_{b}))}\quad\text{for}\quad k\in\{1,2, \ldots,\widetilde{n}_{a}\}. \tag{4.169}\]
Because of the condition \((w_{\beta_{a}}(\beta_{a})|w_{\beta_{a}}(\beta_{a}))=(-\beta_{a}|-\beta_{a})=0\), and the relations (2.35) and (2.44), (4.166) for \(c=a\) has the form:
\[\frac{w_{\beta_{a}}(\psi_{a}^{+})(\widetilde{u}_{k}^{(a)})}{w_{\beta_{a}}( \psi_{a}^{-})(\widetilde{u}_{k}^{(a)})}=e^{\beta_{a}(h)}\prod_{b=1\atop b\neq a }^{\text{r}}\frac{\mathcal{Q}_{b}(\widetilde{u}_{k}^{(a)}-(\beta_{a}|\beta_{ b}))}{\mathcal{Q}_{b}(\widetilde{u}_{k}^{(a)}+(\beta_{a}|\beta_{b}))}\quad\text{for} \quad k\in\{1,2,\ldots,\widetilde{n}_{a}\}. \tag{4.170}\]
In order to show that (4.169) and (4.170) coincide, we have to check
\[\frac{\varphi_{a}^{-}(\widetilde{u}_{k}^{(a)})}{\varphi_{a}^{+}(\widetilde{u} _{k}^{(a)})}=\frac{w_{\beta_{a}}(\psi_{a}^{+})(\widetilde{u}_{k}^{(a)})}{w_{ \beta_{a}}(\psi_{a}^{-})(\widetilde{u}_{k}^{(a)})} \tag{4.171}\]
In our examples, (4.171) reduces to
\[\frac{\mathbb{Q}_{\emptyset}(\widetilde{u}_{k}^{(a)}-(\epsilon_{i_{1}^{*}}| \beta_{a}))}{\mathbb{Q}_{\emptyset}(\widetilde{u}_{k}^{(a)}+(\epsilon_{i_{1}^{*} }|\beta_{a}))}=\frac{\mathbb{Q}_{\emptyset}(\widetilde{u}_{k}^{(a)}-(w_{\beta_{a }}(\epsilon_{i_{1}^{*}})|\beta_{a}))}{\mathbb{Q}_{\emptyset}(\widetilde{u}_{k}^{ (a)}+(w_{\beta_{a}}(\epsilon_{i_{1}^{*}})|\beta_{a}))}. \tag{4.172}\]
This relation follows from
\[(\epsilon_{i_{1}^{*}}|\beta_{a})=(w_{\beta_{a}}(\epsilon_{i_{1}^{*}})|\beta_{a}). \tag{4.173}\]
Let us check (4.173) for (i) \(osp(2r|2s)\) for \(r+s=2\) of type D (\(p_{i_{2}}=1\)), and (ii) the other case for \(osp(2r|2s)\) and \(osp(2r+1|2s)\) for \(r+s\geq 2\). (i) (4.173) reduces to the condition \((\beta_{a}|\beta_{a})=0\) for \(a=1\) or \(2\) because of \(\beta_{1}=\epsilon_{i_{1}^{*}}-\epsilon_{i_{2}^{*}}\), \(\beta_{2}=\epsilon_{i_{1}^{*}}+\epsilon_{i_{2}^{*}}\), \(w_{\beta_{1}}(\epsilon_{i_{1}^{*}})=\epsilon_{i_{2}^{*}}\), \(w_{\beta_{2}}(\epsilon_{i_{1}^{*}})=-\epsilon_{i_{2}^{*}}\). (ii) (4.173) follows from \(\beta_{1}=\epsilon_{i_{1}^{*}}-\epsilon_{i_{2}^{*}}\), \(w_{\beta_{1}}(\epsilon_{i_{1}^{*}})=\epsilon_{i_{2}^{*}}\), \((\beta_{a}|\beta_{a})=0\) for \(a=1\); and \((\epsilon_{i_{1}^{*}}|\beta_{a})=0\), \(w_{\beta_{a}}(\epsilon_{i_{1}^{*}})=\epsilon_{i_{1}^{*}}\) for \(a\geq 2\).
**The case \((\beta_{a}|\beta_{a})=0\), \((\beta_{a}|\beta_{c})\neq 0\), \(c\neq a\)** Substituting \(u=u_{k}^{(c)}\pm(\beta_{a}|\beta_{c})\) into (4.153) and taking the ratio of the resultant equations on both sides, we obtain
\[\frac{\mathcal{Q}_{a}(u_{k}^{(c)}+(\beta_{a}|\beta_{c}))}{\mathcal{ Q}_{a}(u_{k}^{(c)}-(\beta_{a}|\beta_{c}))}=-e^{-\beta_{a}(h)}\frac{\varphi_{a}^{-}(u_ {k}^{(c)}+(\beta_{a}|\beta_{c}))\widetilde{\mathcal{Q}}_{a}(u_{k}^{(c)}-( \beta_{a}|\beta_{c}))}{\varphi_{a}^{+}(u_{k}^{(c)}-(\beta_{a}|\beta_{c})) \widetilde{\mathcal{Q}}_{a}(u_{k}^{(c)}+(\beta_{a}|\beta_{c}))}\times\\ \times\prod_{b=1\atop(\beta_{a}|\beta_{b})\neq 0,b\neq a}^{ \mathfrak{r}}\frac{\mathcal{Q}_{b}(u_{k}^{(c)}+(\beta_{a}|(\beta_{b}+\beta_{c }))}{\mathcal{Q}_{b}(u_{k}^{(c)}-(\beta_{a}|(\beta_{b}+\beta_{c}))}\quad\text {for}\quad k\in\{1,2,\ldots,n_{c}\}, \tag{4.174}\]
Substituting (4.174) into the right hand side of (4.165) (the part \(b=a\)), we obtain
\[-\frac{\psi_{c}^{+}(u_{k}^{(c)})\varphi_{a}^{+}(u_{k}^{(c)}-( \beta_{a}|\beta_{c}))}{\psi_{c}^{-}(u_{k}^{(c)})\varphi_{a}^{-}(u_{k}^{(c)}+( \beta_{a}|\beta_{c}))}=p_{\beta_{a}}p_{\beta_{c}}e^{-(\beta_{a}+\beta_{c})(h)} \frac{\widetilde{\mathcal{Q}}_{a}(u_{k}^{(c)}-(\beta_{a}|\beta_{c}))}{ \widetilde{\mathcal{Q}}_{a}(u_{k}^{(c)}+(\beta_{a}|\beta_{c}))}\times\\ \times\prod_{b=1\atop(\beta_{a}|\beta_{b})\neq 0,b\neq a}^{ \mathfrak{r}}\frac{\mathcal{Q}_{b}(u_{k}^{(c)}+(\beta_{a}|(\beta_{b}+\beta_{c })))}{\mathcal{Q}_{b}(u_{k}^{(c)}-(\beta_{a}|(\beta_{b}+\beta_{c}))}\prod_{b =1\atop b\neq a,c}^{\mathfrak{r}}\frac{\mathcal{Q}_{b}(u_{k}^{(c)}+(\beta_{c }|\beta_{b}))}{\mathcal{Q}_{b}(u_{k}^{(c)}-(\beta_{c}|\beta_{b}))}\times\\ \times\begin{cases}\frac{\mathcal{Q}_{c}(u_{k}^{(c)}+2(\beta_{c}| \beta_{c}))\mathcal{Q}_{c}(u_{k}^{(c)}-(\beta_{c}|\beta_{c}))}{\mathcal{Q}_{c }(u_{k}^{(c)}-2(\beta_{c}|\beta_{c}))\mathcal{Q}_{c}(u_{k}^{(c)}+(\beta_{c}| \beta_{c}))}&\text{if}\quad p_{\beta_{c}}=-1,\quad(\beta_{c}|\beta_{c})\neq 0\\ 1&\text{if}\quad(\beta_{c}|\beta_{c})=0\\ \frac{\mathcal{Q}_{c}(u_{k}^{(c)}+(\beta_{c}|\beta_{c}))}{\mathcal{Q}_{c}(u_ {k}^{(c)}-(\beta_{c}|\beta_{c}))}&\text{otherwise}\\ &\text{for}\quad k\in\{1,2,\ldots,n_{c}\}.\end{cases} \tag{4.175}\]
Let us write down (4.166) in this case.
\[-\frac{w_{\beta_{a}}(\psi_{c}^{+})(u_{k}^{(c)})}{w_{\beta_{a}}( \psi_{c}^{-})(u_{k}^{(c)})}=p_{\beta_{a}+\beta_{c}}e^{-(\beta_{a}+\beta_{c})(h )}\frac{\widetilde{\mathcal{Q}}_{a}(u_{k}^{(c)}-(\beta_{c}|\beta_{a}))}{ \widetilde{\mathcal{Q}}_{a}(u_{k}^{(c)}+(\beta_{c}|\beta_{a}))}\times\\ \times\prod_{b=1\atop(\beta_{a}|\beta_{b})\neq 0,b\neq a,c}^{ \mathfrak{r}}\frac{\mathcal{Q}_{b}(u_{k}^{(c)}+(\beta_{c}|\beta_{b})+(\beta_{a }|(\beta_{b}+\beta_{c})))}{\mathcal{Q}_{b}(u_{k}^{(c)}-(\beta_{c}|\beta_{b})-( \beta_{a}|(\beta_{b}+\beta_{c})))}\prod_{b=1\atop(\beta_{a}|\beta_{b})=0,b\neq a }^{\mathfrak{r}}\frac{\mathcal{Q}_{b}(u_{k}^{(c)}+(\beta_{c}|\beta_{b}))}{ \mathcal{Q}_{b}(u_{k}^{(c)}-(\beta_{c}|\beta_{b}))}\times\\ \times\begin{cases}\frac{\mathcal{Q}_{c}(u_{k}^{(c)}+2(\beta_{c}| \beta_{c})+4(\beta_{a}|\beta_{c}))\mathcal{Q}_{c}(u_{k}^{(c)}-(\beta_{c}|\beta_{ c})-2(\beta_{a}|\beta_{c}))}{\mathcal{Q}_{c}(u_{k}^{(c)}-2(\beta_{c}|\beta_{c})-4( \beta_{a}|\beta_{c}))\mathcal{Q}_{c}(u_{k}^{(c)}+(\beta_{c}|\beta_{c})+2(\beta_ {a}|\beta_{c}))}&\text{if}\quad p_{\beta_{c}}=1,\quad(\beta_{c}|\beta_{c})+2( \beta_{a}|\beta_{c})\neq 0\\ 1&\text{if}\quad(\beta_{c}|\beta_{c})+2(\beta_{a}|\beta_{c})=0\\ \frac{\mathcal{Q}_{c}(u_{k}^{(c)}+(\beta_{c}|\beta_{c})+2(\beta_{a}|\beta_{c}))} {\mathcal{Q}_{c}(u_{k}^{(c)}-(\beta_{c}|\beta_{c})-2(\beta_{a}|\beta_{c}))}& \text{otherwise}\\ &\text{for}\quad k\in\{1,2,\ldots,n_{c}\}.\end{cases} \tag{4.176}\]
In order to show that (4.175) and (4.176) coincide, we have to check
\[\frac{\psi_{c}^{+}(u_{k}^{(c)})\varphi_{a}^{+}(u_{k}^{(c)}-(\beta_{a}|\beta_{c }))}{\psi_{c}^{-}(u_{k}^{(c)})\varphi_{a}^{-}(u_{k}^{(c)}+(\beta_{a}|\beta_{c }))}=\frac{w_{\beta_{a}}(\psi_{c}^{+})(u_{k}^{(c)})}{w_{\beta_{a}}(\psi_{c}^{ -})(u_{k}^{(c)})} \tag{4.177}\]
and
\[\frac{\mathcal{Q}_{c}(u_{k}^{(c)}+2(\beta_{a}|\beta_{c}))}{\mathcal{Q}_{c}(u_{k} ^{(c)}-2(\beta_{a}|\beta_{c}))}\prod_{\stackrel{{ b=1}}{{(\beta_{a}|\beta_{b})\neq \emptyset,(\beta_{c}|\beta_{b})\neq 0,b\neq a,c}}}^{\mathfrak{r}}\frac{\mathcal{Q}_{b}(u_{k}^{(c) }+(\beta_{a}|(\beta_{b}+\beta_{c})))\mathcal{Q}_{b}(u_{k}^{(c)}+(\beta_{c}| \beta_{b}))}{\mathcal{Q}_{b}(u_{k}^{(c)}-(\beta_{a}|(\beta_{b}+\beta_{c}))) \mathcal{Q}_{b}(u_{k}^{(c)}-(\beta_{c}|\beta_{b}))}\times\\ \times\begin{cases}\frac{\mathcal{Q}_{c}(u_{k}^{(c)}+2(\beta_{c}| \beta_{c}))\mathcal{Q}_{c}(u_{k}^{(c)}-(\beta_{c}|\beta_{c}))}{\mathcal{Q}_{c}( u_{k}^{(c)}-2(\beta_{c}|\beta_{c}))\mathcal{Q}_{c}(u_{k}^{(c)}+(\beta_{c}| \beta_{c}))}&\text{if}\quad p_{\beta_{c}}=-1,\quad(\beta_{c}|\beta_{c})\neq 0\\ 1&\text{if}\quad(\beta_{c}|\beta_{c})=0\\ \frac{\mathcal{Q}_{c}(u_{k}^{(c)}+(\beta_{c}|\beta_{c}))}{\mathcal{Q}_{c}(u_{ k}^{(c)}-(\beta_{c}|\beta_{c}))}&\text{otherwise}\end{cases}\]
\[=\prod_{\stackrel{{ b=1}}{{(\beta_{a}|\beta_{b})\neq \emptyset,(\beta_{c}|\beta_{b})\neq 0,b\neq a}}}^{\mathfrak{r}}\frac{\mathcal{Q}_{b}(u_{k}^{(c) }+(\beta_{c}|\beta_{b})+(\beta_{a}|(\beta_{b}+\beta_{c})))}{\mathcal{Q}_{b}(u_ {k}^{(c)}-(\beta_{c}|\beta_{b})-(\beta_{a}|(\beta_{b}+\beta_{c})))}\times\\ \times\begin{cases}\frac{\mathcal{Q}_{c}(u_{k}^{(c)}+2(\beta_{c}| \beta_{c})+4(\beta_{a}|\beta_{c}))\mathcal{Q}_{c}(u_{k}^{(c)}-(\beta_{c}| \beta_{c})-2(\beta_{a}|\beta_{c}))}{\mathcal{Q}_{c}(u_{k}^{(c)}-2(\beta_{c}| \beta_{c})-4(\beta_{a}|\beta_{c}))\mathcal{Q}_{c}(u_{k}^{(c)}+(\beta_{c}| \beta_{c})+2(\beta_{a}|\beta_{c}))}&\text{if}\quad p_{\beta_{c}}=1,\quad( \beta_{c}|\beta_{c})+2(\beta_{a}|\beta_{c})\neq 0\\ 1&\text{if}\quad(\beta_{c}|\beta_{c})+2(\beta_{a}|\beta_{c})=0\\ \frac{\mathcal{Q}_{c}(u_{k}^{(c)}+(\beta_{c}|\beta_{c})+2(\beta_{a}|\beta_{c} ))}{\mathcal{Q}_{c}(u_{k}^{(c)}-(\beta_{c}|\beta_{c})-2(\beta_{a}|\beta_{c}))}& \text{otherwise}\\ &\text{for}\quad k\in\{1,2,\ldots,n_{c}\}.\end{cases} \tag{4.178}\]
In our examples, (4.177) reduces to
\[\frac{\mathbb{Q}_{\emptyset}(u_{k}^{(c)}+(\epsilon_{i_{1}^{*}}| \beta_{c}))\mathbb{Q}_{\emptyset}(u_{k}^{(c)}-(\beta_{a}|\beta_{c})+(\epsilon_ {i_{1}^{*}}|\beta_{a}))}{\mathbb{Q}_{\emptyset}(u_{k}^{(c)}-(\epsilon_{i_{1}^{* }}|\beta_{c}))\mathbb{Q}_{\emptyset}(u_{k}^{(c)}+(\beta_{a}|\beta_{c})-( \epsilon_{i_{1}^{*}}|\beta_{a}))}=\frac{\mathbb{Q}_{\emptyset}(u_{k}^{(c)}+(w _{\beta_{a}}(\epsilon_{i_{1}^{*}})|(\beta_{a}+\beta_{c})))}{\mathbb{Q}_{ \emptyset}(u_{k}^{(c)}-(w_{\beta_{a}}(\epsilon_{i_{1}^{*}})|(\beta_{a}+\beta_{ c})))}\\ \text{if}\quad(\epsilon_{i_{1}^{*}}|\beta_{a})\neq 0, \tag{4.179}\]
and a trivial identity if \((\epsilon_{i_{1}^{*}}|\beta_{a})=0\). Let us prove (4.179) for (i) \(osp(2r|2s)\) for \(r+s=2\) of type D (\(p_{i_{2}}=1\)), and (ii) the other case for \(osp(2r|2s)\) and \(osp(2r+1|2s)\) for \(r+s\geq 2\). (i) In this case, \((a,c)=(1,2)\) or \((2,1)\), and the simple roots have the form \(\beta_{1}=\epsilon_{i_{1}^{*}}-\epsilon_{i_{2}^{*}}\), \(\beta_{2}=\epsilon_{i_{1}^{*}}+\epsilon_{i_{2}^{*}}\). Thus the condition \((\beta_{a}|\beta_{a})=0\) leads to \(p_{i_{1}}=-1\). Taking note on the fact \(w_{\beta_{1}}(\epsilon_{i_{1}^{*}})=\epsilon_{i_{2}^{*}}\), \(w_{\beta_{2}}(\epsilon_{i_{1}^{*}})=-\epsilon_{i_{2}^{*}}\), one can show the relations \((\epsilon_{i_{1}^{*}}|\beta_{c})=(\beta_{a}|\beta_{c})-(\epsilon_{i_{1}^{*}}| \beta_{a})\) and \((w_{\beta_{a}}(\epsilon_{i_{1}^{*}})|(\beta_{a}+\beta_{c}))=0\), from which (4.179) follows. (ii) The condition \((\epsilon_{i_{1}^{*}}|\beta_{a})\neq 0\) leads to \(a=1\). Thus \(\beta_{a}=\beta_{1}=\epsilon_{i_{1}^{*}}-\epsilon_{i_{2}^{*}}\). We also have \((\epsilon_{i_{1}^{*}}|\beta_{c})=0\) since \(c\neq a=1\). It suffices to show the relation \((\beta_{a}|\beta_{c})-(\epsilon_{i_{1}^{*}}|\beta_{a})=-(w_{\beta_{a}}(\epsilon_{i_ {1}^{*}})|(\beta_{a}+\beta_{c}))\). This reduces to \((\beta_{a}|\beta_{c})=(\beta_{a}|\beta_{a})-(\epsilon_{i_{2}^{*}}|\beta_{c})\) since \(w_{\beta_{1}}(\epsilon_{i_{1}^{*}})=\epsilon_{i_{2}^{*}}\). One can show this based on \((\beta_{a}|\beta_{a})=0\).
The relation (4.178) holds true if the following two relations are valid in the product: for the part \(b\neq c,a\), \((\beta_{a}|\beta_{b})\neq 0\), \((\beta_{c}|\beta_{b})\neq 0\),
\[\frac{\mathcal{Q}_{b}(u_{k}^{(c)}+(\beta_{a}|(\beta_{b}+\beta_{c})) )\mathcal{Q}_{b}(u_{k}^{(c)}+(\beta_{c}|\beta_{b}))}{\mathcal{Q}_{b}(u_{k}^{(c)}-( \beta_{a}|(\beta_{b}+\beta_{c})))\mathcal{Q}_{b}(u_{k}^{(c)}-(\beta_{c}|\beta_{b}))}= \frac{\mathcal{Q}_{b}(u_{k}^{(c)}+(\beta_{c}|\beta_{b})+(\beta_{a}|(\beta_{b}+ \beta_{c})))}{\mathcal{Q}_{b}(u_{k}^{(c)}-(\beta_{c}|\beta_{b})-(\beta_{a}|( \beta_{b}+\beta_{c})))}\\ \text{for}\quad\ k\in\{1,2,\ldots,n_{c}\}; \tag{4.180}\]
and for the rest,
\[\frac{{\cal Q}_{c}(u_{k}^{(c)}+2(\beta_{a}|\beta_{c}))}{{\cal Q}_{c}(u_{k}^{(c)}-2(\beta_{a}|\beta_{c}))}\times\begin{cases}\frac{{\cal Q}_{c}(u_{k}^{(c)}+2(\beta_{c}|\beta_{c})){\cal Q}_{c}(u_{k}^{(c)}-(\beta_{c}|\beta_{c}))}{{\cal Q}_{c}(u_{k}^{(c)}-2(\beta_{c}|\beta_{c})){\cal Q}_{c}(u_{k}^{(c)}+(\beta_{c}|\beta_{c}))}&\mbox{if}\quad p_{\beta_{c}}=-1,\quad(\beta_{c}|\beta_{c})\neq 0\\ 1&\mbox{if}\quad(\beta_{c}|\beta_{c})=0\\ \frac{{\cal Q}_{c}(u_{k}^{(c)}+(\beta_{c}|\beta_{c}))}{{\cal Q}_{c}(u_{k}^{(c)}-(\beta_{c}|\beta_{c}))}&\mbox{otherwise}\end{cases}\\ =\begin{cases}\frac{{\cal Q}_{c}(u_{k}^{(c)}+2(\beta_{c}|\beta_{c})+4(\beta_{a}|\beta_{c})){\cal Q}_{c}(u_{k}^{(c)}-(\beta_{c}|\beta_{c})-2(\beta_{a}|\beta_{c}))}{{\cal Q}_{c}(u_{k}^{(c)}-2(\beta_{c}|\beta_{c})-4(\beta_{a}|\beta_{c})){\cal Q}_{c}(u_{k}^{(c)}+(\beta_{c}|\beta_{c})+2(\beta_{a}|\beta_{c}))}&\mbox{if}\quad p_{\beta_{c}}=1,\quad(\beta_{c}|\beta_{c})+2(\beta_{a}|\beta_{c})\neq 0\\ 1&\mbox{if}\quad(\beta_{c}|\beta_{c})+2(\beta_{a}|\beta_{c})=0\\ \frac{{\cal Q}_{c}(u_{k}^{(c)}+(\beta_{c}|\beta_{c})+2(\beta_{a}|\beta_{c}))}{{\cal Q}_{c}(u_{k}^{(c)}-(\beta_{c}|\beta_{c})-2(\beta_{a}|\beta_{c}))}&\mbox{otherwise}\end{cases}\\ \quad\mbox{for}\quad k\in\{1,2,\ldots,n_{c}\}. \tag{4.181}\]
The conditions for (4.180) mean that the three different vertices \(a,b,c\) of the Dynkin diagram form a closed loop. This is possible only 54 when \((a,b,c)\) is a permutation of the last three vertices \((r+s-2,r+s-1,r+s)\) of the Dynkin diagram of \(osp(2r|2s)\) for the simple root (2.21) with \(p_{i_{r+s-1}}=-p_{i_{r+s}}\). Thus (4.180) holds true since \((\beta_{r+s-2}|\beta_{r+s-1})=(\beta_{r+s-2}|\beta_{r+s})=-p_{i_{r+s-1}}\), \((\beta_{r+s-1}|\beta_{r+s})=p_{i_{r+s-1}}-p_{i_{r+s}}=2p_{i_{r+s-1}}\). As for (4.181), we consider the case \((\beta_{c}|\beta_{c})=0\) first. In this case, (4.181) becomes trivial since \(p_{\beta_{c}}=-1\). Next we consider the following three cases for \((\beta_{c}|\beta_{c})\neq 0\). (1) The case \((\beta_{c}|\beta_{c})\neq 0\), \(p_{\beta_{c}}=1\), \((\beta_{c}|\beta_{c})+2(\beta_{a}|\beta_{c})=0\): (4.181) reduces to a trivial identity. (2) The case \((\beta_{c}|\beta_{c})\neq 0\), \(p_{\beta_{c}}=1\), \((\beta_{c}|\beta_{c})+2(\beta_{a}|\beta_{c})\neq 0\): the only possibility is \(C_{ca}=2(\beta_{c}|\beta_{a})/(\beta_{c}|\beta_{c})=-2\), from which (4.181) holds since the cases \(C_{ca}=0\) and \(C_{ca}=-1\) contradict the conditions \((\beta_{a}|\beta_{c})\neq 0\) and \((\beta_{c}|\beta_{c})+2(\beta_{a}|\beta_{c})\neq 0\), respectively. (3) The case \((\beta_{c}|\beta_{c})\neq 0\), \(p_{\beta_{c}}=-1\): this means that the vertex \(c\) of the Dynkin diagram is a black dot. This is possible only when \(c\) is the \((r+s)\)-th vertex of the Dynkin diagram of \(osp(2r+1|2s)\) for the simple root (2.11) with \(p_{i_{r+s}}=-1\) (\(=p_{\beta_{r+s}}\)). Thus (4.181) holds since \((a,c)=(r+s-1,r+s)\), \(C_{ca}=-2\).
Footnote 54: We expect that a similar idea can be applicable for the exceptional superalgebras \(G(3)\), \(F(4)\), \(D(2,1;\alpha)\), which we do not discuss here.
**The case \((\beta_{a}|\beta_{a})=0\), \((\beta_{a}|\beta_{c})=0\), \(c\neq a\)** This case is trivial.
**QQ-relation (4.152) (and (4.39) and (4.38)): odd reflection (black dot: the case \((\beta_{a}|\beta_{a})\neq 0\), \(p_{\beta_{a}}=-1\))** Repeating an argument similar to the one above for the \(U_{q}(gl(M|N)^{(1)})\) case (for (3.15)-(3.18)), we identify \(w_{\alpha_{a}}({\cal Q}_{a})=\widetilde{\cal Q}_{a}\), \(w_{\alpha_{a}}({\cal Q}_{b})={\cal Q}_{b}\) for \(a\neq b\) in (3.17) and (3.18), where \(\alpha_{a}\) is a simple root of \(gl(M|N)\). The only case in which (4.152) is realized (for the algebras in question in this paper) is when \(a\) corresponds to the black dot of a Dynkin diagram of \(osp(2r+1|2s)\) (\(a=r+s\)). Taking note of the fact that (4.152), namely (4.16), is a reduction of (3.8) for \(U_{q}(gl(2r|2s+1)^{(1)})\), we identify \(w_{\alpha_{r+s}}({\cal Q}_{r+s})=\widetilde{\cal Q}_{r+s}\), \(w_{\alpha_{r+s}}({\cal Q}_{b})={\cal Q}_{b}\) for \(b\neq r+s\) (under the reduction) in (4.152). Note however that this does not keep the standard form of the Bethe ansatz equation (4.165) since \(\widetilde{\cal Q}_{r+s}={\mathbb{Q}}_{\widetilde{I}_{r+s}}\) is not on the symmetric nesting path. In order to keep the form, we have to consider the
odd reflection \(w_{\beta_{r+s}}\) of \(osp(2r+1|2s)\) [with \((\beta_{r+s}|\beta_{r+s})\neq 0\), \(p_{\beta_{r+s}}=-1\)], which acts on the Q-functions as \(w_{\beta_{r+s}}(\mathcal{Q}_{r+s})=w_{\alpha^{\prime\prime}_{r+s+1}}w_{\alpha^{ \prime}_{r+s+1}}w_{\alpha_{r+s}}(\mathcal{Q}_{r+s})=\mathbb{Q}_{I_{r+s}}\), \(w_{\beta_{r+s}}(\mathcal{Q}_{b})=\mathcal{Q}_{b}\) for \(b\neq r+s\) (under the reduction). Here the action of the odd reflection by \(\beta_{r+s}\) is realized 55 by the Weyl reflections of \(gl(2r|2s+1)\) under the reduction, where \(\alpha_{r+s}=\epsilon_{i_{r+s+2}}-\epsilon_{i_{r+s+1}}=\epsilon_{i_{r+s}}- \epsilon_{2r+s+1}\), \(\alpha^{\prime}_{r+s+1}=\epsilon_{i_{r+s+2}}-\epsilon_{i_{r+s}}=\epsilon_{i_{ r+s}}-\epsilon_{i_{r+s}}\), \(\alpha^{\prime\prime}_{r+s}=\epsilon_{i_{r+s+1}}-\epsilon_{i_{r+s}}=\epsilon_ {2r+s+1}-\epsilon_{i_{r+s}}\). The Weyl reflections by the even roots \(\alpha_{r+s}\), \(\alpha^{\prime}_{r+s+1}\) and \(\alpha^{\prime\prime}_{r+s}\) correspond to the bosonic QQ-relations (4.16) (namely, (4.152)), (4.39) and (4.38), respectively.
Footnote 55: Another option is \(w_{\beta_{r+s}}(\mathcal{Q}_{r+s})=w_{\alpha^{\prime\prime}_{r+s+1}}w_{\alpha ^{\prime}_{r+s}}w_{\alpha_{r+s+1}}(\mathcal{Q}_{r+s})=\mathbb{Q}_{I_{r+s}}\), \(w_{\beta_{r+s}}(\mathcal{Q}_{b})=\mathcal{Q}_{b}\) for \(b\neq r+s\) (under the reduction). Here the action of the odd reflection by \(\beta_{r+s}\) is realized by the Weyl reflections of \(gl(2r|2s+1)\) under the reduction, where \(\alpha_{r+s+1}=\epsilon_{i_{r+s+1}}-\epsilon_{i_{r+s}}=\epsilon_{2r+s+1}- \epsilon_{i_{r+s}}\), \(\alpha^{\prime}_{r+s}=\epsilon_{i_{r+s+2}}-\epsilon_{i_{r+s}}=\epsilon_{i_{r+s }}-\epsilon_{i_{r+s}}\), \(\alpha^{\prime\prime}_{r+s+1}=\epsilon_{i_{r+s+2}}-\epsilon_{i_{r+s+1}}= \epsilon_{i_{r+s}^{\prime}}-\epsilon_{2r+s+1}\). These roots and the roots in the main text reduce to \(\beta_{r+s},2\beta_{r+s}\) by the formal replacement \((\epsilon_{i_{r+s}},\epsilon_{2r+s+1})\rightarrow(-\epsilon_{i_{r+s}^{\prime}},0)\). In order to describe the whole symmetry of the system, we may need BC-like root system. This is also the case with \(U_{q}(gl(2r|2s+1)^{(2)})\). This point needs further research.
Starting from the Bethe ansatz equation associated with one of the Dynkin diagrams, one can obtain any other Bethe ansatz equation by using the QQ-relations repeatedly. Now that the QQ-relations (4.146), (4.147), (4.151) and (4.153), and the Bethe ansatz equations (4.161) and (4.162) are expressed in terms of the root systems of the underlying algebras, they are expected to remain valid for other quantum affine superalgebras, either as they stand or with slight modifications.
### Bethe strap
In this subsection, we explain our observations on Bethe straps for orthosymplectic superalgebras, based on computer experiments with Mathematica (ver. 7).
In relation to the Bethe ansatz equation (4.162), we introduce the following function
\[F_{a}(u)=p_{\beta_{a}}e^{-\beta_{a}(h)}\frac{\psi^{-}_{a}(u)}{ \psi^{+}_{a}(u)}\prod_{b=1\atop b\neq a}^{\mathfrak{r}}\frac{\mathcal{Q}_{b}( u+(\beta_{a}|\beta_{b}))}{\mathcal{Q}_{b}(u-(\beta_{a}|\beta_{b}))}\prod_{l=1}^{ \kappa_{a}}\frac{\mathcal{Q}_{a}(u+p_{l\beta_{a}}l(\beta_{a}|\beta_{a}))}{ \mathcal{Q}_{a}(u-p_{l\beta_{a}}l(\beta_{a}|\beta_{a}))}\\ \text{for}\quad a\in\{1,2,\ldots,\mathfrak{r}\}. \tag{4.182}\]
In our examples, the vacuum parts \(\psi^{\pm}_{a}(u)\) are given by (4.163). The Bethe ansatz equation (4.162) is equivalent to \(F_{a}(u^{(a)}_{k})=-1\), \(k\in\{1,2,\ldots,n_{a}\}\). The adjacent terms in (4.23) are related to each other as
\[\mathcal{X}_{I_{2r+2s+2-a}}F_{a}^{[\sum_{j\in I_{a}}p_{j}]}=p_{ \beta_{a}}\mathcal{X}_{I_{2r+2s+1-a}},\qquad\mathcal{X}_{I_{a+1}}F_{a}^{[2r-2s- 1-\sum_{j\in I_{a}}p_{j}]}=p_{\beta_{a}}\mathcal{X}_{I_{a}}\\ \text{for}\quad 1\leq a\leq r+s. \tag{4.183}\]
The adjacent terms in (4.123) are related to each other as
\[\begin{split}&\mathcal{X}_{I_{2r+2s+3-a}}F_{a}^{[\sum_{j\in I_{a}}p_{ j}]}=p_{\beta_{a}}\mathcal{X}_{I_{2r+2s+2-a}},\quad\mathcal{X}_{I_{a+1}}F_{a}^{[2r-2s-2- \sum_{j\in I_{a}}p_{j}]}=p_{\beta_{a}}\mathcal{X}_{I_{a}}\\ &\quad\text{for}\quad 1\leq a\leq r+s-1;\\ &\mathcal{X}_{I_{r+s+4}}F_{r+s}^{[r-s-1]}=p_{\beta_{r+s}} \mathcal{X}_{I_{r+s}},\quad\mathcal{X}_{I_{r+s+3}}F_{r+s}^{[r-s-1]}=p_{\beta_{ r+s}}\mathcal{X}_{I_{r+s-1}}\quad\text{if}\quad i_{r+s}\in\mathfrak{B};\\ &\mathcal{X}_{I_{r+s+3}}F_{r+s}^{[r-s-1]}=p_{\beta_{r+s}} \mathcal{X}_{I_{r+s}}\quad\text{if}\quad i_{r+s}\in\mathfrak{F}.\end{split} \tag{4.184}\]
T-functions form Bethe strap structures by the relations (4.183) and (4.184) (see Figures 10, 11, 12, 13).
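As the simplest illustration of (4.182) (our own specialization, assuming the normalization \((\epsilon_{i_{1}^{*}}|\epsilon_{i_{1}^{*}})=1\) implicit in (4.158)): for \(U_{q}(osp(3|0)^{(1)})\) one has \(\mathfrak{r}=1\), \(\beta_{1}=\epsilon_{i_{1}^{*}}\) is a white dot with \(\kappa_{1}=1\) and \(d_{1}=1/2\), and (4.163) gives \(\psi_{1}^{\pm}=\mathbb{Q}_{\emptyset}^{[\pm 1]}\) (consistently with (4.158) via (4.164)), so that
\[F_{1}(u)=e^{-\beta_{1}(h)}\,\frac{\mathbb{Q}_{\emptyset}^{[-1]}\,\mathcal{Q}_{1}^{[+1]}}{\mathbb{Q}_{\emptyset}^{[+1]}\,\mathcal{Q}_{1}^{[-1]}},\]
and the Bethe ansatz equation is \(F_{1}(u_{k}^{(1)})=-1\), \(k\in\{1,2,\ldots,n_{1}\}\).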
**\(U_{q}(osp(2r+1|2s)^{(1)})\) case** We consider the tuple \(I_{2r+2s+1}=(2r+2s+1,2r+2s,\ldots,2r+s+3,2r+s+2,2r,2r-1,\ldots,r+2,r+1,2r+s+1,r,r-1,\ldots,2,1,2r+s,2r+s-1,\ldots,2r+2,2r+1)\) and a partition \(\mu\) with the condition \(\mu_{r+1}\leq s\) (the Young diagram \(\mu\) is on the \([r,s]\)-hook in Figure 2). Let \(\mathfrak{t}_{\mu}(u)\) be the T-function derived by the Bethe strap procedure with the top term (cf. eq. (3.48) in [2])
\[\mathsf{hw}_{\mu}(u)=\prod_{k=1}^{s}\prod_{j=1}^{\mu_{k}^{\prime}}(-1) \mathcal{X}_{I_{2r+2s+2-k}}^{[-\mu_{1}+\mu_{1}^{\prime}-2j+2k]}\prod_{j=1}^{r} \prod_{k=s+1}^{\mu_{j}}\mathcal{X}_{I_{2r+s+2-j}}^{[-\mu_{1}+\mu_{1}^{\prime} -2j+2k]}, \tag{4.185}\]
which carries the \(osp(2r+1|2s)\) highest weight (2.13) for (2.15). In fact, we have
\[\zeta(\mathsf{hw}_{\mu}(u))=(-1)^{\sum_{k=1}^{s}\mu_{k}^{\prime}}e^{\Lambda(h )}, \tag{4.186}\]
where \(h\) is a Cartan element such that \(e^{\epsilon_{a}(h)}=z_{a}\) (\(1\leq a\leq r\) or \(2r+1\leq a\leq 2r+s\)). Here we set \(\mu_{j}=0\) if \(j>\mu_{1}^{\prime}\), \(\mu_{k}^{\prime}=0\) if \(k>\mu_{1}\), \(\prod_{j=a}^{b}(\cdots)=1\) if \(a>b\). We conjecture that \(\mathfrak{t}_{\mu}(u)=\mathcal{F}_{\mu}^{I_{2r+2s+1}}\) holds on the \([r,s]\)-hook.
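For orientation (a specialization we add; it follows immediately from (4.185)): for a single box \(\mu=(1)\), the double products collapse to a single factor, since the shift \(-\mu_{1}+\mu_{1}^{\prime}-2j+2k\) vanishes at \(j=k=1\):
\[\mathsf{hw}_{(1)}(u)=\begin{cases}-\mathcal{X}_{I_{2r+2s+1}}(u)&\text{if}\quad s\geq 1,\\ \mathcal{X}_{I_{2r+1}}(u)&\text{if}\quad s=0,\end{cases}\]
in agreement with the overall sign \((-1)^{\sum_{k=1}^{s}\mu_{k}^{\prime}}\) in (4.186).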
**\(U_{q}(osp(2r|2s)^{(1)})\) case** We consider the tuple \(I_{2r+2s+2}=(2r+2s+2,2r+2s+1,\ldots,2r+s+4,2r+s+3,2r,2r-1,\ldots,r+2,r+1,2r+s+2,2r+s+1,r,r-1,\ldots,2,1,2r+s,2r+s-1,\ldots,2r+2,2r+1)\) and a partition \(\mu\) with the condition \(\mu_{r+1}\leq s\) (the Young diagram \(\mu\) is on the \([r,s]\)-hook in Figure 4). Let \(\mathfrak{t}_{\mu,+}(u)\) be the T-function derived by the Bethe strap procedure with the top term (cf. eqs. (3.49), (3.50) in [2])
\[\mathsf{hw}_{\mu,+}(u)=\prod_{k=1}^{s}\prod_{j=1}^{\mu_{k}^{\prime}}(-1) \mathcal{X}_{I_{2r+2s+3-k}}^{[-\mu_{1}+\mu_{1}^{\prime}-2j+2k]}\prod_{j=1}^{r }\prod_{k=s+1}^{\mu_{j}}\mathcal{X}_{I_{2r+s+3-j}}^{[-\mu_{1}+\mu_{1}^{\prime} -2j+2k]}, \tag{4.187}\]
which carries the \(osp(2r|2s)\) highest weight (2.23) for (2.25) (in the same manner as (4.186)). Let \(\mathfrak{t}_{\mu,-}(u)\) be the T-function derived by the Bethe strap procedure with the top term
\[\mathsf{hw}_{\mu,-}(u)=\prod_{k=1}^{s}\prod_{j=1}^{\mu_{k}^{\prime}}(-1) \mathcal{X}_{I_{2r+2s+3-k}}^{[-\mu_{1}+\mu_{1}^{\prime}-2j+2k]}\prod_{j=1}^{r -1}\prod_{k=s+1}^{\mu_{j}}\mathcal{X}_{I_{2r+s+3-j}}^{[-\mu_{1}+\mu_{1}^{ \prime}-2j+2k]}\prod_{k=s+1}^{\mu_{r}}\mathcal{X}_{I_{r+s}^{[-\mu_{1}+\mu_{1}^ {\prime}-2r+2k]}}^{[-\mu_{1}+\mu_{1}^{\prime}-2r+2k]}, \tag{4.188}\]
which carries the \(osp(2r|2s)\) highest weight (2.23) for (2.26) (in the same manner as (4.186)). As already remarked in the previous paper [2], \(\mathfrak{t}_{\mu,+}(u)={\cal F}_{\mu}^{I_{2r+2s+2}}\) does not always hold, but rather, all the terms of \(\mathfrak{t}_{\mu,+}(u)\) are expected to be in a subset of those of \({\cal F}_{\mu}^{I_{2r+2s+2}}\) since both of them contain the top term (4.187). In fact, for Young diagrams with one row or column, we observe 56: for \(a,m\in\mathbb{Z}_{\geq 1}\),
Footnote 56: We have confirmed (4.189) and (4.190) for \(r=0\), \(2\leq s\leq 4\), \(1\leq a\leq 6\); \(r=0\), \(s=5,6\), \(1\leq a\leq 5\); \(r=1\), \(s=2,3\), \(1\leq a\leq 6\); \(r=1\), \(s=4,5\), \(1\leq a\leq 5\); \(r=2\), \(1\leq s\leq 4\), \(1\leq a\leq 5\); \(r\geq 3\), \(1\leq s\leq 6-r\), \(1\leq a\leq r-2\); \(3\leq r\leq 6\), \(s=0\), \(1\leq a\leq\min(r,5)\); and (4.191) and (4.192) for \(r\geq 1\), \(2\leq r+s\leq 6\), \(1\leq m\leq 6\); \(r=0\), \(2\leq s\leq 6\), \(1\leq m\leq s\). The Bethe straps seem to have pseudo-top terms at least for the cases \(\mu=(1^{a})\), \(r\geq 3\), \(1\leq s\leq 6-r\), \(r-1\leq a\leq 5\). We have to add pseudo-top terms (see [2]) by hand to make the Bethe straps finite connected graphs. We do not have a systematic way to do this at the moment. This is a serious drawback of the Bethe strap procedure, which has to be overcome.
\[\widehat{{\cal F}}_{(1^{a})}^{I_{2r+2s+2}}=\mathfrak{t}_{(1^{a}),+}(u) \text{for}\quad s=0,\quad a<r,\quad\text{or}\quad s\geq 1,\quad r\geq 2,\] \[\text{or}\quad s\geq 2,\quad r=0,1, \tag{4.189}\] \[\widehat{{\cal F}}_{(1^{a})}^{I_{2r+2s+2}}=\mathfrak{t}_{(1^{r}), +}(u)+\mathfrak{t}_{(1^{r}),-}(u)\text{for}\quad a=r\geq 3,\quad s=0,\] (4.190) \[\widehat{{\cal F}}_{(m)}^{I_{2r+2s+2}}=\mathfrak{t}_{(m),+}(u) \text{for}\quad r\geq 2,\ r+s\geq 3\quad\text{or}\quad r=0,1,\ s\geq 2,\ m\leq s,\] (4.191) \[\widehat{{\cal F}}_{(m)}^{I_{2r+2s+2}}=\mathfrak{t}_{(m),+}(u) \text{for}\quad r=1,\quad s\geq 2,\quad m\geq s+1, \tag{4.192}\]
where
\[\widehat{{\cal F}}_{(1^{a})}^{I_{2r+2s+2}}=\begin{cases}{\cal F}_{(1^{a})}^{I _{2r+2s+2}}-g_{(1^{a})}(u){\cal F}_{(1^{2r(r-s-1)-a})}^{I_{2r+2s+2}}&\text{if} \quad 2\leq r-s\leq a\leq 2(r-s-1),\\ {\cal F}_{(1^{a})}^{I_{2r+2s+2}}&\text{otherwise},\end{cases} \tag{4.193}\]
\[\widehat{{\cal F}}_{(m)}^{I_{2r+2s+2}}=\begin{cases}{\cal F}_{(m)}^{I_{2r+2s+ 2}}-g_{(m)}(u){\cal F}_{(2(s-r+1)-m)}^{I_{2r+2s+2}}&\text{if}\quad 2\leq s-r+2\leq m \leq 2(s-r+1),\\ {\cal F}_{(m)}^{I_{2r+2s+2}}&\text{otherwise},\end{cases} \tag{4.194}\]
and 57
Footnote 57: In (4.196), we use the following relation repeatedly:
\[\chi_{I_{2r+2s+2}}^{[2r-2s-2]}\chi_{I_{1}}=\frac{\mathbb{Q}_{\emptyset}^{[2r-2s -4]}\mathbb{Q}_{\emptyset}^{[2r-2s]}}{(\mathbb{Q}_{\emptyset}^{[2r-2s-2]})^{2}}. \tag{4.195}\]
There are misprints in [2]: the condition three lines above eq. (3.51), "\(0\leq r-s-1\leq a\leq 2(r-s-1)\)" is a misprint of "\(0<r-s-1<a\leq 2(r-s-1)\)"; the condition three lines above eq. (3.52), "\(0\leq s-r+1\leq m\leq 2(s-r+1)\)" is a misprint of "\(0<s-r+1<a\leq 2(s-r+1)\)".
\[g_{(m)}(u)=\prod_{j=1}^{m+r-s-1}\chi_{l_{2r+2s+3-j}}^{[-m+2j-1]}\chi_{l_{j}}^{[m-2j +1]}=\frac{\mathbb{Q}_{\emptyset}^{[m+2r-2s-1]}\mathbb{Q}_{\emptyset}^{[-m-1]}}{ \mathbb{Q}_{\emptyset}^{[-m+1]}\mathbb{Q}_{\emptyset}^{[m+2r-2s-3]}}. \tag{4.197}\]
Similarly, in order to find the relation between \(\mathfrak{t}_{\mu,\pm}(u)\) and \(\mathcal{F}_{\mu}^{I_{2r+2s+2}}\), we will have to remove unnecessary terms from \(\mathcal{F}_{\mu}^{I_{2r+2s+2}}\). For example, for \(\mu=(2,1)\) case, we have checked that \(\mathcal{F}_{(2,1)}^{I_{2r+2s+2}}=\mathfrak{t}_{(2,1),\pm}(u)\) holds at least for \((r,s)=(1,2),(2,1),(3,0),(4,0)\), while this is modified as \(\mathcal{F}_{(2,1)}^{I_{2r+2s+2}}-\frac{\mathbb{Q}_{\emptyset}^{[-2]}\mathbb{ Q}_{\emptyset}^{[2]}}{(\mathbb{Q}_{\emptyset})^{2}}\mathcal{F}_{(1)}^{I_{2r+2s+2 }[2(r-s-1)]}=\mathfrak{t}_{(2,1),\pm}(u)\) for \((r,s)=(2,2),(3,1)\).
**\(U_{q}(osp(2|2s)^{(1)})\) case** We consider the tuple \(I_{2s+4}=(2,2s+4,2s+3,\ldots,4,3,1)\) and a partition \(\mu\) with the condition \(\mu_{2}\leq s\) (the Young diagram \(\mu\) is on the \([r,s]\)-hook in Figure 5). Note that this is different from the previous case for \(r=1\) in that the definition of the tuple is different. Let \(\mathfrak{t}_{\mu}(u)\) be the T-function derived by the Bethe strap procedure with the top term (cf. eqs. (3.22), (3.31) in [3])
\[\mathsf{hw}_{\mu}(u)=\prod_{k=1}^{\mu_{1}}\mathcal{X}_{I_{2s+4}}^{[-\mu_{1}+ \mu_{1}^{\prime}-2+2k]}\prod_{k=1}^{s}\prod_{j=2}^{\mu_{k}^{\prime}}(-1) \mathcal{X}_{I_{2s+4-k}}^{[-\mu_{1}+\mu_{1}^{\prime}-2j+2k]}, \tag{4.198}\]
which carries the \(osp(2|2s)\) highest weight (2.31) for (2.33). In fact, we have
\[\zeta(\mathsf{hw}_{\mu}(u))=(-1)^{\sum_{k=1}^{s}\max\{\mu_{k}^{\prime}-1,0\}} e^{\Lambda(h)}, \tag{4.199}\]
where \(h\) is a Cartan element such that \(e^{\epsilon_{a}(h)}=z_{a}\) (\(a=1\) or \(3\leq a\leq s+2\)). In addition to this, we consider the T-function \(\widetilde{\mathfrak{t}}_{(m)}(u)\) for \(m\geq s+1\) derived by the Bethe strap procedure with the top term
\[\widetilde{\mathsf{hw}}_{(m)}(u)=\prod_{k=1}^{\min(m-s-1,s)}\mathcal{X}_{I_{2s +4}}^{[-m+2k-1]}\prod_{k=m-s}^{s}(-1)\mathcal{X}_{I_{m+s+3-k}}^{[-m+2k-1]}\prod _{k=s+1}^{m}\mathcal{X}_{I_{1}}^{[-m+2k-1]}, \tag{4.200}\]
which carries the \(osp(2|2s)\) highest weight \(\widetilde{\Lambda}=-\epsilon_{1}+\sum_{j=3}^{2s+3-m}\epsilon_{j}=-\varepsilon _{1}+\sum_{j=1}^{2s+1-m}\delta_{j}\) for \(s+1\leq m\leq 2s\), and \(-(m-2s)\epsilon_{1}=-(m-2s)\varepsilon_{1}\) for \(m\geq 2s+1\). We have
\[\zeta(\widetilde{\mathsf{hw}}_{\mu}(u))=(-1)^{\max\{2s-m+1,0\}}e^{\widetilde{ \Lambda}(h)}. \tag{4.201}\]
For Young diagrams with one row or column, we observe 58 : for \(a,m\in\mathbb{Z}_{\geq 1}\),
Footnote 58: We have confirmed (4.202) for \(s=1\), \(1\leq a\leq 7\); \(2\leq s\leq 3\), \(1\leq a\leq 6\); \(4\leq s\leq 6\), \(1\leq a\leq 5\); and (4.203)-(4.208) for \(1\leq s\leq 6\), \(1\leq m\leq 9\).
\[\mathcal{F}_{(1^{a})}^{I_{2s+4}} =\mathfrak{t}_{(1^{a})}(u) \text{for}\quad r\geq 2, \tag{4.202}\] \[\mathcal{F}_{(m)}^{I_{2s+4}} =\mathfrak{t}_{(m)}(u) \text{for}\quad m\leq s,\] (4.203) \[\widehat{\mathcal{F}}_{(m)}^{I_{2s+4}} =\mathfrak{t}_{(m)}(u)+\widetilde{\mathfrak{t}}_{(m)}(u) \text{for}\quad m\geq s+1, \tag{4.204}\]
where
\[\widehat{\mathcal{F}}_{(m)}^{I_{2s+4}}=\begin{cases}\mathcal{F}_{(m)}^{I_{2s+4}}-g _{(m)}(u)\mathcal{F}_{(2s-m)}^{I_{2s+4}}&\text{if}\quad 2\leq s+1\leq m\leq 2s,\\ \mathcal{F}_{(m)}^{I_{2s+4}}&\text{otherwise},\end{cases} \tag{4.205}\]
and
\[g_{(m)}(u)=\prod_{j=1}^{m-s}\chi_{I_{2s+5-j}}^{[-m+2j-1]}\chi_{I_{j}}^{[m-2j+ 1]}=\frac{\mathbb{Q}_{\emptyset}^{[m-2s+1]}\mathbb{Q}_{\emptyset}^{[-m-1]}}{ \mathbb{Q}_{\emptyset}^{[-m+1]}\mathbb{Q}_{\emptyset}^{[m-2s-1]}}. \tag{4.206}\]
In addition to the above, we observe: for \(m\in\mathbb{Z}_{\geq 1}\),
\[\mathsf{t}_{(m)}(u)=\begin{cases}\mathcal{T}_{m}(u)-g_{(m)}(u) \mathcal{F}_{(2s-m)}^{I_{2s+4}}&\text{if}\quad s+1\leq m\leq 2s,\\ \mathcal{T}_{m}(u)&\text{if}\quad 2s+1\leq m,\end{cases} \tag{4.207}\]
where
\[\mathcal{T}_{m}(u)=z_{1}^{m-s}\frac{\mathbb{Q}_{\emptyset}^{[m-2s+1]} \mathbb{Q}_{I_{1}}^{[-m]}}{\mathbb{Q}_{\emptyset}^{[-m+1]}\mathbb{Q}_{I_{1}}^{ [m-2s]}}\mathcal{F}_{(s)}^{I_{2s+4}[m-s]}. \tag{4.208}\]
Eqs. (4.207) and (4.208) correspond to [eqs. (4.54)-(4.56), [3]]. Here we rewrite them in our convention. Note that the T-function (4.208) is well defined for any \(m\in\mathbb{C}\), and is free of poles 59 under the Bethe ansatz equation.
Footnote 59: The trivial poles from \(\mathbb{Q}_{\emptyset}\) are out of the question.
As for general partitions (other than Young diagrams with one row or column), we could not find \(\mu\) such that \(\mathsf{t}_{\mu}(u)=\mathcal{F}_{\mu}^{I_{2s+4}}\) holds. We expect that the set of all the terms of \(\mathsf{t}_{\mu}(u)\) is a subset of those of \(\mathcal{F}_{\mu}^{I_{2s+4}}\). For example, in the \(s=1\), \(\mu=(2,1)\) case, \(\mathcal{F}_{(2,1)}^{I_{6}}\) has 20 terms, and 8 of them constitute \(\mathsf{t}_{(2,1)}(u)\). It is desirable to establish the general relation between \(\mathcal{F}_{\mu}^{I_{2s+4}}\) and \(\mathsf{t}_{\mu}(u)\).
### T-functions for spinorial representations: \(U_{q}(osp(2r+1|2s)^{(1)})\) case
We introduce a function labeled by a partition \(\mu=(\mu_{1},\mu_{2},\ldots,\mu_{\mu_{1}^{\prime}})\) with \(\mu_{1}^{\prime}\leq 2r\), \(\mu_{1}\geq\mu_{2}\geq\cdots\geq\mu_{\mu_{1}^{\prime}}>0\),
Figure 11: Bethe strap structures of the T-function \(\mathcal{F}_{(2)}^{I_{5}}\) for \(U_{q}(osp(3|2)^{(1)})\), where \(\mathfrak{B}=\{1,2\}\), \(\mathfrak{F}=\{3,4,5\}\), \(I_{5}=(5,2,4,1,3)\), \(I_{4}=(5,2,4,1)\), \(I_{3}=(5,2,4)\), \(I_{2}=(5,2)\), \(I_{1}=(5)\), \(I_{0}=\emptyset\). The top term \(-\mathcal{X}_{I_{5}}^{[-1]}\mathcal{X}_{I_{4}}^{[1]}\) carries the \(osp(3|2)\) highest weight \(\epsilon_{3}+\epsilon_{1}\).
\[\mathsf{S}_{\mu} =\left(\prod_{b=1}^{r}(z_{b}^{\frac{1}{2}}+z_{b}^{-\frac{1}{2}})\prod _{b=1}^{r}\prod_{f=2r+1}^{2r+s}(z_{b}-z_{f})(1-(z_{b}z_{f})^{-1})\right)^{-1} \mathbb{Q}_{\emptyset}^{[2r-\mu_{1}-\mu_{1}^{\prime}+2\mu_{\mu_{1}^{\prime}}]}\times\] \[\qquad\times\lim_{c}\mathsf{T}_{\mu+((2s+1)^{c})}^{\mathfrak{B}, \mathfrak{H}[\mu_{1}^{\prime}-c]}=\prod_{b=1}^{r}(z_{b}^{\frac{1}{2}}+z_{b}^{- \frac{1}{2}})\prod_{b=1}^{r}\prod_{f=2r+1}^{2r+s}(z_{b}-z_{f})(1-(z_{b}z_{f})^ {-1})\mathsf{T}_{\mu}^{\mathfrak{B},\emptyset}. \tag{4.209}\]
This is a reduction of (3.67). The case \(s=0\) corresponds to [eq. (3.53) in [4]], which gives T-functions for spinorial representations of \(U_{q}(so(2r+1)^{(1)})\).
Let us consider \(U_{q}(osp(3|2)^{(1)})\) case with \(\mathfrak{B}=\{1,2\}\), \(\mathfrak{F}=\{3,4,5\}\), \(I_{5}=(5,2,4,1,3)\), \(I_{4}=(5,2,4,1)\), \(I_{3}=(5,2,4)\), \(I_{2}=(5,2)\), \(I_{1}=(5)\), \(I_{0}=\emptyset\). We introduce functions of the spectral parameter:
\[\Omega_{1,\frac{1}{2}} =z_{3}z_{1}^{\frac{1}{2}}\mathbb{Q}_{\emptyset}^{[-\frac{5}{2}]} \frac{\mathbb{Q}_{I_{1}}^{[\frac{3}{2}]}\mathbb{Q}_{I_{2}}^{[-\frac{1}{2}]}}{ \mathbb{Q}_{I_{1}}^{[-\frac{3}{2}]}\mathbb{Q}_{I_{2}}^{[\frac{1}{2}]}}, \Omega_{1,-\frac{1}{2}} =z_{3}z_{1}^{-\frac{1}{2}}\mathbb{Q}_{\emptyset}^{[-\frac{5}{2}]} \frac{\mathbb{Q}_{I_{1}}^{[-\frac{1}{2}]}\mathbb{Q}_{I_{2}}^{[\frac{3}{2}]}}{ \mathbb{Q}_{I_{1}}^{[-\frac{3}{2}]}\mathbb{Q}_{I_{2}}^{[\frac{1}{2}]}},\] \[\Omega_{0,\frac{3}{2}} =z_{1}^{\frac{3}{2}}\mathbb{Q}_{\emptyset}^{[-\frac{1}{2}]}\frac{ \mathbb{Q}_{I_{1}}^{[\frac{3}{2}]}\mathbb{Q}_{I_{2}}^{[-\frac{5}{2}]}}{ \mathbb{Q}_{I_{1}}^{[-\frac{3}{2}]}\mathbb{Q}_{I_{2}}^{[\frac{1}{2}]}}, \Omega_{0,\frac{1}{2}} =z_{1}^{\frac{1}{2}}\mathbb{Q}_{\emptyset}^{[-\frac{1}{2}]} \frac{\mathbb{Q}_{I_{1}}^{[-\frac{1}{2}]}\mathbb{Q}_{I_{2}}^{[-\frac{5}{2}]} \mathbb{Q}_{I_{2}}^{[\frac{3}{2}]}}{\mathbb{Q}_{I_{1}}^{[-\frac{3}{2}]} \mathbb{Q}_{I_{2}}^{[-\frac{1}{2}]}\mathbb{Q}_{I_{2}}^{[\frac{3}{2}]}},\] \[\Omega_{0,-\frac{1}{2}} =z_{1}^{-\frac{1}{2}}\mathbb{Q}_{\emptyset}^{[-\frac{1}{2}]} \frac{\mathbb{Q}_{I_{1}}^{[-\frac{1}{2}]}\mathbb{Q}_{I_{2}}^{[-\frac{5}{2}]} \mathbb{Q}_{I_{2}}^{[\frac{3}{2}]}}{\mathbb{Q}_{I_{1}}^{[\frac{1}{2}]} \mathbb{Q}_{I_{2}}^{[-\frac{3}{2}]}\mathbb{Q}_{I_{2}}^{[-\frac{3}{2}]}}, \Omega_{0,-\frac{3}{2}} =z_{1}^{-\frac{3}{2}}\mathbb{Q}_{\emptyset}^{[-\frac{1}{2}]} \mathbb{Q}_{I_{1}}^{[-\frac{5}{2}]}\mathbb{Q}_{I_{1}}^{[\frac{5}{2}]} \mathbb{Q}_{I_{2}}^{[\frac{3}{2}]}\] \[\Omega_{-1,\frac{1}{2}} =z_{3}^{-1}z_{1}^{\frac{1}{2}}\mathbb{Q}_{\emptyset}^{[\frac{3}{2} ]}\frac{\mathbb{Q}_{I_{1}}^{[-\frac{1}{2}]}\mathbb{Q}_{I_{2}}^{[-\frac{5}{2}]} }{\mathbb{Q}_{I_{1}}^{[\frac{1}{2}]}\mathbb{Q}_{I_{2}}^{[-\frac{5}{2}]}}, \Omega_{-1,-\frac{1}{2}} =z_{3}^{-1}z_{1}^{-\frac{1}{2}}\mathbb{Q}_{\emptyset}^{[\frac{3}{2 }]}\frac{\mathbb{Q}_{I_{1}}^{[-\frac{5}{2}]}\mathbb{Q}_{I_{2}}^{[-\frac{1}{2}] }}{\mathbb{Q}_{I_{1}}^{[\frac{1}{2}]}\mathbb{Q}_{I_{2}}^{[-\frac{3}{2}]}}. \tag{4.210}\]
The function \(\Omega_{j,k}\) carries the \(osp(3|2)\) weight \(j\epsilon_{3}+k\epsilon_{1}\). Then the T-function derived by
the Bethe strap procedure with the top term \(\Omega_{1,\frac{1}{2}}\) is given by the summation over (4.210):
\[\mathfrak{t}_{1,\frac{3}{2}}(u)=\sum_{j=-1}^{1}\sum_{k=1}^{4-2|j|}(-1)^{j}\Omega_ {j,k+|j|-\frac{5}{2}}. \tag{4.211}\]
The Bethe strap structures of (4.211) are described in Figure 14. More generally, we conjecture that the T-function derived by the Bethe strap procedure with the top term
\[-\mathbb{Q}_{\emptyset}^{[\ell-\frac{3}{2}]}\Omega_{1,\frac{1}{2}}^{[1-\ell]} \prod_{k=1}^{\ell-1}\chi_{I_{4}}^{[2k-\ell+\frac{3}{2}]}=-z_{3}z_{1}^{\ell- \frac{1}{2}}\mathbb{Q}_{\emptyset}^{[\ell-\frac{3}{2}]}\mathbb{Q}_{\emptyset}^ {[\ell-\frac{3}{2}]}\frac{\mathbb{Q}_{I_{1}}^{[\ell+\frac{1}{2}]}\mathbb{Q}_{ I_{2}}^{[-\ell+\frac{1}{2}]}}{\mathbb{Q}_{I_{1}}^{[-\ell-\frac{1}{2}]}\mathbb{Q}_{ I_{2}}^{[\ell-\frac{1}{2}]}}, \tag{4.212}\]
which carries the \(osp(3|2)\) highest weight \(\epsilon_{3}+(\ell-\frac{1}{2})\epsilon_{1}=\delta_{1}+(\ell-\frac{1}{2}) \varepsilon_{1}\), is given by
\[\mathfrak{t}_{1,\ell+\frac{1}{2}}(u)=(-\Omega_{1,\frac{1}{2}}^{[1- \ell]}+\Omega_{0,\frac{3}{2}}^{[1-\ell]})\mathbb{Q}_{\emptyset}^{[\ell-\frac{ 3}{2}]}\mathcal{F}_{\ell-1}^{I_{4}[\frac{3}{2}]}+(-\Omega_{1,-\frac{1}{2}}^{[1 -\ell]}+\Omega_{0,\frac{1}{2}}^{[1-\ell]}+\Omega_{0,-\frac{1}{2}}^{[1-\ell]}+ \Omega_{0,-\frac{1}{2}}^{[1-\ell]}+\Omega_{0,-\frac{1}{2}}^{[1-\ell]})\mathbb{ Q}_{\emptyset}^{[\ell-\frac{3}{2}]}\mathcal{F}_{\ell-1}^{I_{2}[\frac{3}{2}]}\\ \text{for}\quad\ell\in\mathbb{Z}_{\geq 2}. \tag{4.213}\]
By using QQ-relations, one can transform (4.211) into a Wronskian form:
\[\mathfrak{t}_{1,\frac{3}{2}}(u)=(z_{1}^{\frac{1}{2}}+z_{1}^{-\frac{1}{2}})(z_ {1}-z_{3})\left(1-\frac{1}{z_{1}z_{3}}\right)\mathbb{Q}_{12}^{[-\frac{1}{2}]}. \tag{4.214}\]
More generally, we conjecture
\[\mathfrak{t}_{1,\ell+\frac{1}{2}}(u)=(z_{1}^{\frac{1}{2}}+z_{1}^{-\frac{1}{2}})(z_ {1}-z_{3})\left(1-\frac{1}{z_{1}z_{3}}\right)(\mathbb{Q}_{\emptyset}^{[-\frac{1 }{2}]})^{-\delta_{\ell,1}}\mathsf{T}_{1,\ell-1}^{\mathfrak{B},\emptyset[-\frac{ 3}{2}]}\quad\text{for}\quad\ell\in\mathbb{Z}_{\geq 1}. \tag{4.215}\]
Note that the factor \((z_{1}-z_{3})\left(1-\frac{1}{z_{1}z_{3}}\right)\) coincides with the character limit of \(\mathsf{T}_{1,1}^{\mathfrak{B},\emptyset\setminus\{4\}}\).
Let us consider \(U_{q}(osp(2r+1|2s)^{(1)})\) case with \(I_{2r+2s+1}=(2r+2s+1,2r+2s,\ldots,2r+s+2,2r,2r-1,\ldots,r+1,2r+s+1,r,r-1,\ldots,2,1,2r+s,2r+s-1,\ldots,2r+2,2r+1)\), \(\ldots\), \(I_{r+s}=(2r+2s+1,2r+2s,\ldots,2r+s+2,2r,2r-1,\ldots,r+1)\), \(\ldots\), \(I_{s}=(2r+2s+1,2r+2s,\ldots,2r+s+2)\), \(\ldots I_{1}=(2r+2s+1)\), \(I_{0}=\emptyset\). Based on a computer experiment by Mathematica (ver. 7), we conjecture that the T-function derived by the Bethe strap procedure with the top term
\[(-1)^{rs}(z_{1}z_{2}\cdots z_{r})^{\ell-s+\frac{1}{2}}(z_{2r+1}z_ {2r+2}\cdots z_{2r+s})^{r}\mathbb{Q}_{\emptyset}^{[-\ell-r-\frac{1}{2}]}( \mathbb{Q}_{\emptyset}^{[\ell-2s+r-\frac{1}{2}]})^{1-\delta_{\ell,s}}\times\\ \times\frac{\mathbb{Q}_{I_{s}}^{[\ell-s+r+\frac{1}{2}]}\mathbb{Q} _{I_{r+s}}^{[-\ell+s-\frac{1}{2}]}}{\mathbb{Q}_{I_{s}}^{[-\ell+s-r-\frac{1}{2 }]}\mathbb{Q}_{I_{r+s}}^{[\ell-s+\frac{1}{2}]}}\qquad\text{for}\quad\ell\in \mathbb{Z}_{\geq s}, \tag{4.216}\]
which carries the \(osp(2r+1|2s)\) highest weight \((\ell-s+\frac{1}{2})(\epsilon_{1}+\epsilon_{2}+\cdots+\epsilon_{r})+r( \epsilon_{2r+1}+\epsilon_{2r+2}+\cdots+\epsilon_{2r+s})=r\sum_{j=1}^{s}\delta_ {j}+(\ell-s+\frac{1}{2})\sum_{j=1}^{r}\varepsilon_{j}\), is given by the following Wronskian-type formula 60
Footnote 60: This may be interpreted as a T-function labelled by the Young diagram \(((\ell+\frac{1}{2})^{r})\) with the height \(r\) and the half integer width \(\ell+\frac{1}{2}\). More generally, we expect that the T-function derived by the Bethe strap procedure with a top term which carries the \(osp(2r+1|2s)\) highest weight \(r\sum_{j=1}^{s}\delta_{j}+\sum_{j=1}^{r}(\mu_{j}+\frac{1}{2})\varepsilon_{j}\) is described by the function \(\mathsf{S}_{\mu}\) labelled by a partition \(\mu=(\mu_{1},\mu_{2},\ldots,\mu_{\mu_{1}^{\prime}})\), where \(\mu_{1}^{\prime}\leq r\), \(\mu_{1}\geq\mu_{2}\geq\cdots\geq\mu_{\mu_{1}^{\prime}}>0\), \(\mu_{j}=0\) if \(j>\mu_{1}^{\prime}\) (see [eq. (3.50), [4]] for \(s=0\) case). In order to give a precise description of this, we need further case by case studies.
\[\mathfrak{t}_{r,\ell+\frac{1}{2}}(u)=\left(\mathbb{Q}_{\emptyset}^{[3r-s- \frac{1}{2}]}\right)^{-\delta_{\ell,s}}\mathsf{S}_{(\ell-s)^{r}}^{[r\delta_{ \ell,s}-s-\frac{1}{2}]}\qquad\text{for}\quad\ell\in\mathbb{Z}_{\geq s}. \tag{4.217}\]
The T-function derived from the top term (4.216) for \(r=2\), \(s=1\), \(\ell=1\) corresponds to the 64 term expression mentioned in the previous paper [section 5, [2]]. The expression was too bulky to write down, but now the Wronskian formula (4.217) provides an alternative concise expression for it. We also expect that reductions of (3.67) for \((M,N)=(2r,2s+2)\) are related to certain combinations of T-functions for spinorial representations of \(U_{q}(osp(2r|2s)^{(1)})\). However, this requires further investigation.
## 5 Concluding remarks
In this paper, we continued our trials [74, 18, 17, 1, 4] to construct various expressions of T-functions, in particular Wronskian-type formulas (analogues of the Weyl character formula) associated with any quantum affine (super)algebras or Yangians. The key is an extension of the reduction procedures proposed in [1]. This also connects our earlier works on the analytic Bethe ansatz for type A superalgebras [15, 8, 16] and the ones for type B, C, D superalgebras [2, 3], which is one of the motivations for this paper. There remain problems which have to be clarified step by step.
* **Refinement of the reduction procedures** We considered reductions so that the resultant Bethe ansatz equations and T-functions (for the fundamental representation) reproduce those from the algebraic Bethe ansatz on the symmetric nesting paths. There is still a room for generalization or improvement of the reduction procedures. In [77], QQ-relations with \(osp(4|6)\)-symmetries were introduced in relation to the quantum spectral curve for \(AdS_{4}/CFT_{3}\). In this context, an interesting problem is to modify the reduction procedures and find QQ-relations corresponding to [eqs. (7.51), (7.52), [77]] as substitutes of the QQ-relations (4.112), (4.118), (4.119) and (4.122). This may fix some unclear points mentioned in subsection 4.4.2.
* **Refinement of T-functions** Our discussions on Bethe straps suggest that not all the T-functions obtained by the reduction procedures give T-functions for irreducible representations of underlying algebras in the auxiliary spaces. Thus it is important to clarify the condition for irreducibility and find the modification to get T-functions for irreducible representations.
* **Symmetries of T-functions** In this paper, we considered reductions of QQ-relations and T-functions mainly along symmetric nesting paths, which are related to symmetric Dynkin diagrams of \(gl(M|N)\). If we had considered non-symmetric nesting paths, we would have come across non-standard forms of Bethe ansatz equations. It remains to be seen whether we should exclude 61 them from our consideration, or rather clarify what they mean. Footnote 61: in case non-standard Bethe ansatz equations or extra QQ-relations over-constrain the system
Let \(\mathfrak{g}^{(1)}\) be an affine Lie superalgebra, and \(\mathfrak{g}_{k}\) be the Lie superalgebra corresponding to the Dynkin diagram derived by removing \(k\)-th node of a Dynkin diagram of \(\mathfrak{g}^{(1)}\). In the standard notation, \(\mathfrak{g}_{0}=\mathfrak{g}\). The supercharacters of finite dimensional representations of \(U_{q}(\mathfrak{g}^{(1)})\) are invariant under the Weyl group of \(\mathfrak{g}_{k}\) and are linear combinations of the supercharacters of finite dimensional representations of \(\mathfrak{g}_{k}\)62. A standard way to consider the problem is to set \(k=0\). We discussed \(\mathfrak{W}\)-symmetry of T-functions for \(U_{q}(\mathfrak{g}^{(1)})\) (\(\mathfrak{g}=osp(2r+1|2s),osp(2r|2s)\)), which is a part of the original \(S_{M+N}\)-symmetry of T-functions for \(U_{q}(gl(M|N)^{(1)})\) and is related to the Weyl group (and its extension by odd reflections, and a symmetry that flips the \((r+s-1)\)-th and \((r+s)\)-th nodes of a Dynkin diagram of type D) of \(\mathfrak{g}_{0}\). It is desirable to consider also the \(k\neq 0\) case and to clarify the whole symmetries of the T-functions for \(U_{q}(\mathfrak{g}^{(1)})\) and their connection with the original \(S_{M+N}\)-symmetry of the T-functions for \(U_{q}(gl(M|N)^{(1)})\). After this, it is desirable to reformulate the Wronskian-type expressions of T-functions, which are invariant under the whole symmetries. Footnote 62: In [87], the characters of the Kirillov-Reshetikhin modules of \(U_{q}(\mathfrak{g}^{(1)})=U_{q}(B_{r}^{(1)})\) were expressed as linear combinations of characters of \(\mathfrak{g}_{r}=D_{r}\) (cf. \(\mathfrak{g}_{0}=B_{r}\)). Analogous character formulas for twisted quantum affine algebras were also presented. We found a generalization of these results to the case of superalgebras (see Appendix B).
* **Generalization to other algebras** One of the interesting superalgebras is \(U_{q}(D(2,1;\alpha)^{(1)})\). This algebra is unique in that it depends on an extra parameter \(\alpha\). There should be some connections to our results at \(\alpha=1\) because of the
relation \(D(2,1;1)\simeq osp(4|2)\). In addition, it is possible to execute the analytic Bethe ansatz based on Bethe ansatz equations with Cartan matrices of \(D(2,1;\alpha)\), as in the case of other superalgebras [15, 8, 16, 2, 3]. It will be possible to consider further reductions of some of the QQ-relations in this paper with respect to Dynkin diagram symmetries and derive QQ-relations for twisted quantum affine superalgebras including \(U_{q}(osp(2r|2s)^{(2)})\) (see Appendix A).
* **Operator realization** It is important to realize Wronskian-type formulas of T-functions as operators (through Q-operators) and give representation theoretical background for them. In [18, 80, 81], we constructed q-oscillator representations of \(U_{q}(gl(M|N)^{(1)})\) (or \(U_{q}(sl(M|N)^{(1)})\)) for Q-operators (see also [82, 83, 84] for the rational case from various points of view, and [85] for representation theoretical background). A tentative goal on this topic is to reformulate and generalize the contents of [18, 80, 81] further. In particular, it is worthwhile to apply the folding technique described in [37] (or a modified version of it) to the results in [81].
* **Connection to the soliton theory** As explained in [86] (see also [83]), a generating function of T-operators (master T-operator) for quantum integrable spin chains associated with \(Y(gl(M))\) is the \(\tau\)-function of the modified KP hierarchy. It should be possible to consider reductions of the master T-operator for \(U_{q}(gl(M|N)^{(1)})\) and discuss connection to the soliton theory. A related issue is the T-system for quantum integrable systems associated with superalgebras. Some partial results have already been obtained in the previous papers [15, 8, 2, 3], but the whole picture is still unclear, which contrasts with the well-understood non-superalgebra cases [88, 24, 14].
* **Grassmannian formalism** In [89], determinant formulas of T-and Q-functions in [1, 17] were reformulated in terms of exterior forms of Q-functions. In light of this, it will be possible to reformulate the reduction procedures in terms of the Grassmannian formalism. The recent papers [62, 63] on QQ-relations for \(so(2r)\), which use pure spinors, would be clues for this.
In addition to these, it would be possible to extend the reduction procedures and the above topics to the case of open super spin chains (at least for diagonal K-matrices). The Bethe ansatz equations for the open super spin chain based on \(Y(osp(M|2s))\) are formulated in [92].
The T-functions are not the only generalization of (super)characters. Although it is not a subject of our series of papers, it might be mathematically meaningful to consider reductions similar to (4.1) for any series in \(\{z_{j}\}_{j=1}^{M+N}\) (supersymmetric functions or polynomials) as well as q-(super)characters (and their extensions) that generalize supercharacters of \(gl(M|N)\).
## Acknowledgments
The work is supported by Grant No. 0657-2020-0015 of the Ministry of Science and Higher Education of Russia.
Note addedWhen we had almost finished writing this paper, two papers [93, 94] appeared on arXiv. They studied rational L-operators related to \(osp(M|2s)\), which might be useful for advancing the research on the operator realization of the functional relations discussed in this paper.
## Appendix A: Regular reductions in a singular reduction: \(U_{q}(osp(2r|2s)^{(2)})\) case
One can consider more reductions to some of the formulas on T-and Q-functions derived by reductions in the main text. Let us consider reductions of the \(U_{q}(osp(2r|2s)^{(1)})\) case (subsection 4.4.2) with respect to the symmetry of exchanging the \((r+s-1)\)-th and \((r+s)\)-th nodes of a Dynkin diagram of type D. In the tuple \(I_{2r+2s+2}=(i_{1},i_{2},\ldots,i_{r+s+1},i^{*}_{r+s+1},\ldots,i^{*}_{2},i^{*}_ {1})\), \(i_{r+s+1}\in\mathfrak{D}\), we fix \((i_{r+s},i^{*}_{r+s})=(r,r+1)\), or \((r+1,r)\), thus \(i_{r+s},i^{*}_{r+s}\in\mathfrak{B}\). We are interested in the action of \(\sigma^{\prime}=\tau_{i_{r+s},i^{*}_{r+s}}\in\mathfrak{W}\):
\[\sigma^{\prime}(\mathbb{Q}_{I})=\mathbb{Q}_{\breve{I}}\quad\text{for}\quad I \subset\mathfrak{I},\] \[\sigma^{\prime}(\mathbb{Q}_{I_{r+s}})=\mathbb{Q}_{\breve{I}_{r+s}},\quad \sigma^{\prime}(\mathbb{Q}_{\breve{I}_{r+s}})=\mathbb{Q}_{I_{r+s}},\] (A1) \[\sigma^{\prime}(z_{a})=z_{a}\quad\text{for}\quad a\in\mathfrak{I }\setminus\{i_{r+s},i^{*}_{r+s}\},\quad\sigma^{\prime}(z_{i_{r+s}})=z_{i^{*}_ {r+s}},\quad\sigma^{\prime}(z_{i^{*}_{r+s}})=z_{i_{r+s}},\]
where \(\breve{I}=\sigma^{\prime}(I)\), namely \(\breve{I}=I\) if \(i_{r+s},i^{*}_{r+s}\in I\) or \(i_{r+s},i^{*}_{r+s}\notin I\); \(\breve{I}=(I\setminus\{i_{r+s}\})\sqcup\{i^{*}_{r+s}\}\) if \(i_{r+s}\in I\) and \(i^{*}_{r+s}\notin I\); \(\breve{I}=(I\setminus\{i^{*}_{r+s}\})\sqcup\{i_{r+s}\}\) if \(i^{*}_{r+s}\in I\) and \(i_{r+s}\notin I\). Then we consider the following reduction:
\[\mathbb{Q}_{\breve{I}}=\mathbb{Q}_{I}^{[\eta]}\quad\text{for} \quad I\subset\mathfrak{I},\] (A2) \[\mathbb{Q}_{\breve{I}_{r+s}}=\mathbb{Q}_{I_{r+s}}^{[\eta]},\] (A3) \[z_{i^{*}_{r+s}}=z_{i_{r+s}}.\] (A4)
The condition (A4) means \(z_{i_{r+s}}=\pm 1\). In case \(z_{i_{r+s}}=1\), we assume that \(\eta\) is half the period of the Q-functions 63. (A2) suggests a factorization of the form \(\mathbb{Q}_{I}=\mathbb{Q}_{I}\mathbb{Q}_{I}^{[\eta]}\) if \(i_{r+s},i^{*}_{r+s}\in I\) or \(i_{r+s},i^{*}_{r+s}\notin I\), where \(\mathbb{Q}_{I}^{[2\eta]}=\mathbb{Q}_{I}\). Thus on the symmetric nesting path defined by the aforementioned tuple \(I_{2r+2s+2}\), we have \(\mathbb{Q}_{I_{a}}=\mathbb{Q}_{I_{2r+2s+2-a}}=\mathbb{Q}_{I_{a}}\mathbb{Q}_{I_{a}}^{[\eta]}\) for \(0\leq a\leq r+s-1\). Combining this for \(a=r+s-1\) and the relation \(\mathbb{Q}_{I_{r+s-1}}=\mathbb{Q}_{I_{r+s}}\mathbb{Q}_{I_{r+s}}^{[\eta]}\) derived from (4.106) and (A3), we find \(\mathbb{Q}_{I_{r+s-1}}\mathbb{Q}_{I_{r+s-1}}^{[\eta]}=\mathbb{Q}_{I_{r+s}}\mathbb{Q}_{I_{r+s}}^{[\eta]}\). In the following we consider the case
Footnote 63: In case \(z_{i_{r+s}}=-1\), we assume \(\eta=0\)
\[\mathbb{Q}_{I_{r+s}}=\mathbb{Q}_{I_{r+s-1}},\qquad\mathbb{Q}_{\breve{I}_{r+s} }=\mathbb{Q}_{I_{r+s-1}}^{[\eta]}.\] (A5)
The QQ-relations (4.109) and (4.119) reduce to the following functional relations:
**for \(a\)-th node (\(1\leq a\leq r+s-2\)):**
\[(z_{i_{a}}-z_{i_{a+1}}){\bf Q}_{I_{a-1}}^{2}{\bf Q}_{I_{a+1}}^{2}=z_ {i_{a}}{\bf Q}_{I_{a}}^{2[p_{i_{a}}]}{\bf Q}_{\widetilde{I}_{a}}^{2[-p_{i_{a}}]} -z_{i_{a+1}}{\bf Q}_{I_{a}}^{2[-p_{i_{a}}]}{\bf Q}_{\widetilde{I}_{a}}^{2[p_{ i_{a}}]}\quad\mbox{if}\quad p_{i_{a}}=p_{i_{a+1}},\] (A6) \[(z_{i_{a}}-z_{i_{a+1}}){\bf Q}_{I_{a}}^{2}{\bf Q}_{\widetilde{I}_ {a}}^{2}=z_{i_{a}}{\bf Q}_{I_{a-1}}^{2[-p_{i_{a}}]}{\bf Q}_{I_{a+1}}^{2[p_{i_{ a}}]}-z_{i_{a+1}}{\bf Q}_{I_{a-1}}^{2[p_{i_{a}}]}{\bf Q}_{I_{a+1}}^{2[-p_{i_{a}}]} \quad\mbox{if}\quad p_{i_{a}}=-p_{i_{a+1}},\] (A7)
**for \((r+s-1)\)-th node (from simply laced):**
\[(z_{i_{r+s-1}}-1){\bf Q}_{I_{r+s-2}}^{2}=z_{i_{r+s-1}}{\bf Q}_{I_{r+s-1}}^{[ 1]}{\bf Q}_{\widetilde{I}_{r+s-1}}^{[-1]}-{\bf Q}_{I_{r+s-1}}^{[\eta-1]}{\bf Q }_{\widetilde{I}_{r+s-1}}^{[1]}\quad\mbox{if}\quad i_{r+s-1}\in{\mathfrak{B}},\] (A8)
**for \((r+s-1)\)-th node (from non-simply laced):**
\[(z_{i_{r+s-1}}-1){\mathbb{Q}}_{\widetilde{I}_{r+s-1}}^{[\eta]}{\bf Q}_{I_{r+ s-1}}^{[\eta]}=z_{i_{r+s-1}}{\bf Q}_{I_{r+s-2}}^{2[1]}{\bf Q}_{I_{r+s-1}}^{[-2]}-{ \bf Q}_{I_{r+s-2}}^{2[-1]}{\bf Q}_{I_{r+s-1}}^{[2]}\quad\mbox{if}\quad i_{r+s- 1}\in{\mathfrak{F}},\] (A9)
where \({\bf Q}_{I_{a}}^{2}={\bf Q}_{I_{a}}{\bf Q}_{I_{a}}^{[\eta]}\), \({\bf Q}_{\widetilde{I}_{a}}^{2}={\bf Q}_{\widetilde{I}_{a}}{\bf Q}_{ \widetilde{I}_{a}}^{[\eta]}\) for \(0\leq a\leq r+s-1\), \(\dot{I}_{r+s-1}=(i_{1},i_{2},\ldots,i_{r+s-2},i_{r+s-1}^{*})\).
T-functions and Bethe ansatz equations. Under the reduction, (4.123) reduces to
\[{\cal X}_{I_{a}} = z_{i_{a}}\frac{{\bf Q}_{I_{a-1}}^{2[2r-2s-2-\sum_{j\in I_{a}}p_ {j}-p_{i_{a}}]}{\bf Q}_{I_{a}}^{2[2r-2s-2-\sum_{j\in I_{a}}p_{j}+2p_{i_{a}}]}}{ {\bf Q}_{I_{a-1}}^{2[2r-2s-2-\sum_{j\in I_{a}}p_{j}+p_{i_{a}}]}{\bf Q}_{I_{a}} ^{2[2r-2s-2-\sum_{j\in I_{a}}p_{j}]}}\quad\mbox{for}\quad 1\leq a\leq r+s-1\] \[{\cal X}_{I_{r+s}} = \frac{{\bf Q}_{I_{r+s-1}}^{[r-s-3+\eta]}{\bf Q}_{I_{r+s-1}}^{[r-s +1]}}{{\bf Q}_{I_{r+s-1}}^{2[r-s-1]}},\] \[{\cal X}_{I_{r+s+1}} = -{\cal X}_{I_{r+s+2}}=z_{i_{r+s+1}}\frac{{\bf Q}_{I_{r+s-1}}^{[r-s +1]}{\bf Q}_{I_{r+s-1}}^{[r-s-3]}}{({\bf Q}_{I_{r+s-1}}^{[r-s-1]})^{2}},\] \[{\cal X}_{I_{r+s+3}} = \frac{{\bf Q}_{I_{r+s-1}}^{[r-s+1+\eta]}{\bf Q}_{I_{r+s-1}}^{[r-s -3]}}{{\bf Q}_{I_{r+s-1}}^{2[r-s-1]}},\] \[{\cal X}_{I_{2r+2s+3-a}} = z_{i_{a}}^{-1}\frac{{\bf Q}_{I_{a-1}}^{2[\sum_{j\in I_{a}}p_{j} +p_{i_{a}}]}{\bf Q}_{I_{a}}^{2[\sum_{j\in I_{a}}p_{j}-2p_{i_{a}}]}}{{\bf Q}_{I_ {a-1}}^{2[\sum_{j\in I_{a}}p_{j}-p_{i_{a}}]}{\bf Q}_{I_{a}}^{2[\sum_{j\in I_{a}} p_{j}]}}\quad\mbox{for}\quad 1\leq a\leq r+s-1.\] (A10)
The T-function (4.124) reduces to
\[{\sf F}_{(1)}^{I_{2r+2s+2}}={\bf Q}_{\emptyset}^{2[2r-2s-2]}{\bf Q}_{\emptyset }^{2}\sum_{a=1}^{r+s}p_{i_{a}}({\cal X}_{I_{a}}+{\cal X}_{I_{2r+2s+3-a}}).\] (A11)
Note that the terms \({\cal X}_{I_{r+s+1}}\) and \({\cal X}_{I_{r+s+2}}\) are missing in (A11) because of cancellation. The pole-free condition of the T-function (A11) produces the following Bethe ansatz equations:
**for \(a\)-th node \((1\leq a\leq r+s-2)\):**
\[-1=\frac{p_{i_{a}}z_{i_{a}}}{p_{i_{a+1}}z_{i_{a+1}}}\frac{{\bf Q}_{I _{a-1}}^{2}(u_{k}^{I_{a}}-p_{i_{a}}){\bf Q}_{I_{a}}^{2}(u_{k}^{I_{a}}+2p_{i_{a}}) {\bf Q}_{I_{a+1}}^{2}(u_{k}^{I_{a}}-p_{i_{a+1}})}{{\bf Q}_{I_{a-1}}^{2}(u_{k}^{I _{a}}+p_{i_{a}}){\bf Q}_{I_{a}}^{2}(u_{k}^{I_{a}}-2p_{i_{a+1}}){\bf Q}_{I_{a+1} }^{2}(u_{k}^{I_{a}}+p_{i_{a+1}})}\\ \text{for}\quad k\in\{1,2,\dots,n_{I_{a}}\},\] (A12)
**for \((r+s-1)\)-th node:**
\[-1=\frac{z_{i_{r+s-1}}}{p_{i_{r+s-1}}}\frac{{\bf Q}_{I_{r+s-2}}^{2 }(u_{k}^{I_{r+s-1}}-p_{i_{r+s-1}}){\bf Q}_{I_{r+s-1}}(u_{k}^{I_{r+s-1}}-2+ \eta)}{{\bf Q}_{I_{r+s-2}}^{2}(u_{k}^{I_{r+s-1}}+p_{i_{r+s-1}}){\bf Q}_{I_{r+s- 1}}(u_{k}^{I_{r+s-1}}-2p_{i_{r+s-1}}+\eta)}\times\\ \times\frac{{\bf Q}_{I_{r+s-1}}(u_{k}^{I_{r+s-1}}+2)}{{\bf Q}_{I _{r+s-1}}(u_{k}^{I_{r+s-1}}-2p_{i_{r+s-1}})}\qquad\text{for}\quad k\in\{1,2, \dots,n_{I_{r+s-1}}\},\] (A13)
where \(\mathbb{Q}_{I_{a}}(u_{k}^{I_{a}})=0\), \({\bf Q}_{I_{a}}(v_{k}^{I_{a}})=0\), \(\{u_{k}^{I_{a}}\}_{k=1}^{n_{I_{a}}}=\{v_{k}^{I_{a}}\}_{k=1}^{m_{I_{a}}}\sqcup \{v_{k}^{I_{a}}+\eta\}_{k=1}^{m_{I_{a}}}\), \(n_{I_{a}}=2m_{I_{a}}\), \(0\leq a\leq r+s-1\). The Bethe ansatz equations (A12)-(A13) are reductions of (4.125)-(4.130). Eqs. (A10)-(A13) for \(s=0\) agree with the known results [69] from the analytic Bethe ansatz. As for the \(s>0\) case, we could not find appropriate references 64 to compare with. The generating functions of the T-functions have the same form as (4.131) and (4.132), but the functions (4.123) have to be replaced with (A10). The tableaux sum and Wronskian expressions of the T-functions are given by (3.19) and (3.48) under the reductions. However, we have not identified the conditions under which the auxiliary spaces of the corresponding transfer matrices become irreducible representations of \(U_{q}(osp(2r|2s)^{(2)})\).
Footnote 64: As remarked in [76], the result in [73] denoted as \(U_{q}(osp(2m|2n)^{(2)})\) is something different. We also remark that Q-operators for \(r=s=1\) case were studied in [65].
## Appendix B: Decomposition of supercharacters
In this appendix, we will consider the (super)character limit of T-functions in sections 4.3 and 4.4, and compare special cases of them with character formulas for Kirillov-Reshetikhin modules of quantum affine algebras (or their Yangian counterparts). We assume that the characters of \(U_{q}(\mathfrak{g})\) for generic \(q\) are the same as those of the corresponding Lie superalgebra \(\mathfrak{g}\), and use the same symbols for representations of \(U_{q}(\mathfrak{g})\) and those of \(\mathfrak{g}\).
The (super)character limit of the generating function (3.23) at \(K=M+N\), namely \(\zeta({\bf W}_{I_{M+N}}({\bf X}))=w(t)\), is given by
\[w(t)=\prod_{j=1}^{M}(1-z_{j}t)^{-1}\prod_{j=1}^{N}(1-z_{j+M}t)=\sum_{m=0}^{ \infty}\chi_{m}(\{z_{b}\}_{b=1}^{M}|\{z_{b+M}\}_{b=1}^{N})t^{m},\] (B1)
where \(\zeta({\cal F}_{(m)}^{I_{M+N}})=\chi_{m}\), \(\zeta({\bf X})=t\). Then the (super)character limit of (3.37) at \(K=M+N\), \(\lambda=\emptyset\) is the supersymmetric Jacobi-Trudi determinant:
\[\zeta({\cal F}_{\mu}^{I_{M+N}})=\chi_{\mu}(\{z_{b}\}_{b=1}^{M}|\{z_{b+M}\}_{b= 1}^{N})=|(\chi_{\mu_{i}-i+j})_{1\leq i,j\leq\mu_{1}^{\prime}}|,\] (B2)
where the arguments are omitted on the right hand side: \(\chi_{m}=\chi_{m}(\{z_{b}\}_{b=1}^{M}|\{z_{b+M}\}_{b=1}^{N})\). Note that \(\zeta({\cal F}_{\mu}^{I_{M+N}})=\zeta(\mathsf{T}_{\mu}^{\mathfrak{B},\mathfrak{F }})\) always holds, from which a Weyl-type formula for supercharacters is given. The determinant (B2) in the \([M,N]\)-hook gives the supercharacter of the irreducible representation \(V(\Lambda)\) with the highest weight (2.8), (2.10).
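As a minimal illustration (a Python/SymPy sketch of our own, with helper names and the small \(gl(2|1)\) example chosen by us), the generating function (B1) can be expanded to read off the functions \(\chi_{m}\), and the Jacobi–Trudi determinant (B2) then gives \(\chi_{\mu}\):

```python
import sympy as sp

def chi_complete(zb, zf, max_deg):
    # chi_0, ..., chi_{max_deg} from w(t) = prod_b (1 - z_b t)^{-1} prod_f (1 - z_f t), cf. (B1)
    t = sp.symbols('t')
    w = sp.prod([1/(1 - z*t) for z in zb]) * sp.prod([1 - z*t for z in zf])
    ser = sp.series(w, t, 0, max_deg + 1).removeO()
    return [sp.expand(ser.coeff(t, m)) for m in range(max_deg + 1)]

def chi_partition(mu, zb, zf):
    # supersymmetric Jacobi-Trudi determinant (B2): chi_mu = det(chi_{mu_i - i + j})
    n = len(mu)                              # mu_1' = number of rows of mu
    chis = chi_complete(zb, zf, mu[0] + n)
    def entry(i, j):                         # 0-based indices; chi_m = 0 for m < 0
        k = mu[i] - i + j
        return chis[k] if k >= 0 else 0
    return sp.expand(sp.Matrix(n, n, entry).det())

# small example: gl(2|1) supercharacter for mu = (2, 1)
z1, z2, z3 = sp.symbols('z1 z2 z3')
print(chi_partition([2, 1], [z1, z2], [z3]))
```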
In addition to (B2), we need the following determinants 65:
Footnote 65: The right hand side of (B4) is the determinant of a block matrix consisting of a \(\mu_{1}^{\prime}\times 1\) matrix and a \(\mu_{1}^{\prime}\times(\mu_{1}^{\prime}-1)\) matrix. One may rewrite (B4) as \(\chi_{\langle\mu\rangle}(\{z_{b}\}_{b=1}^{M}|\{z_{b+M}\}_{b=1}^{N})=\frac{1}{2 }|(\chi_{\mu_{i}-i+j}+\chi_{\mu_{i}-i-j+2})_{1\leq i,j\leq\mu_{1}^{\prime}}|\) if \(\mu\neq\emptyset\), and \(\chi_{\langle\emptyset\rangle}(\{z_{b}\}_{b=1}^{M}|\{z_{b+M}\}_{b=1}^{N})=1\).
\[\chi_{[\mu]}(\{z_{b}\}_{b=1}^{M}|\{z_{b+M}\}_{b=1}^{N}) =|(\chi_{\mu_{i}-i+j}-\chi_{\mu_{i}-i-j})_{1\leq i,j\leq\mu_{1}^{\prime}}|,\] (B3) \[\chi_{\langle\mu\rangle}(\{z_{b}\}_{b=1}^{M}|\{z_{b+M}\}_{b=1}^{N}) =|(\chi_{\mu_{i}-i+1})_{1\leq i\leq\mu_{1}^{\prime}}\;(\chi_{\mu_{i}-i+j}+\chi_{\mu_{i}-i-j+2})_{1\leq i\leq\mu_{1}^{\prime},\,2\leq j\leq\mu_{1}^{\prime}}|,\] (B4)
where the arguments are omitted on the right hand sides: \(\chi_{m}=\chi_{m}(\{z_{b}\}_{b=1}^{M}|\{z_{b+M}\}_{b=1}^{N})\). From now on, we will use the following shorthand notations for the arguments of the above determinants: \(\mathbf{x}=\{x_{b}\}_{b=1}^{r}\), \(\mathbf{x}^{-1}=\{x_{b}^{-1}\}_{b=1}^{r}\), \(\mathbf{y}=\{y_{b}\}_{b=1}^{s}\), \(\mathbf{y}^{-1}=\{y_{b}^{-1}\}_{b=1}^{s}\), \((\mathbf{x},\mathbf{x}^{-1}|\mathbf{y},\mathbf{y}^{-1})=(\mathbf{x}\sqcup \mathbf{x}^{-1}|\mathbf{y}\sqcup\mathbf{y}^{-1})\), \((\mathbf{x},\mathbf{x}^{-1}|\mathbf{y},\pm 1,\mathbf{y}^{-1})=(\mathbf{x}\sqcup \mathbf{x}^{-1}|\mathbf{y}\sqcup\{\pm 1\}\sqcup\mathbf{y}^{-1})\), \((\mathbf{x},\pm 1,\mathbf{x}^{-1}|\mathbf{y},\mathbf{y}^{-1})=(\mathbf{x}\sqcup \{\pm 1\}\sqcup\mathbf{x}^{-1}|\mathbf{y}\sqcup\mathbf{y}^{-1})\). We will consider the following specializations of the determinants (B3) and (B4) (cf. [95]).
\[\chi_{[\mu]}(\mathbf{x},1,\mathbf{x}^{-1}|\mathbf{y},\mathbf{y}^{ -1}) [\text{type B}],\] (B5) \[\chi_{[\mu]}(\mathbf{x},\mathbf{x}^{-1}|\mathbf{y},\mathbf{y}^{ -1}) [\text{type D}],\] (B6) \[\chi_{\langle\mu\rangle}(\mathbf{x},\mathbf{x}^{-1}|\mathbf{y}, \mathbf{y}^{-1}) [\text{type C}].\] (B7)
In case the Young diagram \(\mu\) is defined on the \([r,s]\)-hook, (B5) and (B6) are expected to give supercharacters of representations of \(osp(2r+1|2s)\) and \(osp(2r|2s)\) with the highest weights (2.13), (2.15) and (2.23), (2.25), (2.26), respectively, but not necessarily irreducible ones in the general situation. One may have to subtract unnecessary supercharacters of invariant subspaces from them to get irreducible ones. In particular at \(s=0\), \(\chi_{[\mu]}(\mathbf{x},1,\mathbf{x}^{-1}|\emptyset)\), \(\chi_{[\mu]}(\mathbf{x},\mathbf{x}^{-1}|\emptyset)\) and \(\chi_{\langle\mu\rangle}(\mathbf{x},\mathbf{x}^{-1}|\emptyset)\) give irreducible characters of \(so(2r+1)\), \(O(2r)\) and \(sp(2r)\), respectively. In case \(\mu_{r}=0\), \(\chi_{[\mu]}(\mathbf{x},\mathbf{x}^{-1}|\emptyset)\) gives an irreducible character of \(so(2r)\), while in case \(\mu_{r}\neq 0\), it becomes a summation of the characters of irreducible representations of \(so(2r)\) with the highest weights (2.23), (2.25) and (2.23), (2.26) (at \(s=0\)). We observe similar phenomena for the case \(s>0\) on the level of T-functions (thus in relation to representations of \(U_{q}(osp(2r|2s)^{(1)})\)): see (4.190) and (4.192).
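The specializations (B5)–(B7) can be evaluated in the same way. The following self-contained sketch (again with helper names and an arbitrarily chosen small example of our own) implements the determinants (B3) and (B4):

```python
import sympy as sp

def chi_complete(zb, zf, max_deg):
    # chi_m read off from the generating function (B1)
    t = sp.symbols('t')
    w = sp.prod([1/(1 - z*t) for z in zb]) * sp.prod([1 - z*t for z in zf])
    ser = sp.series(w, t, 0, max_deg + 1).removeO()
    return [sp.expand(ser.coeff(t, m)) for m in range(max_deg + 1)]

def chi_bracket(mu, zb, zf):
    # chi_[mu] of (B3): det( chi_{mu_i - i + j} - chi_{mu_i - i - j} )
    n = len(mu)
    chis = chi_complete(zb, zf, mu[0] + n)
    get = lambda k: chis[k] if 0 <= k < len(chis) else 0
    M = sp.Matrix(n, n, lambda i, j: get(mu[i] - i + j) - get(mu[i] - i - j - 2))
    return sp.expand(M.det())

def chi_angle(mu, zb, zf):
    # chi_<mu> of (B4): first column chi_{mu_i-i+1}, then chi_{mu_i-i+j} + chi_{mu_i-i-j+2}
    n = len(mu)
    chis = chi_complete(zb, zf, mu[0] + n)
    get = lambda k: chis[k] if 0 <= k < len(chis) else 0
    M = sp.Matrix(n, n, lambda i, j: get(mu[i] - i) if j == 0
                  else get(mu[i] - i + j) + get(mu[i] - i - j))
    return sp.expand(M.det())

x, y = sp.symbols('x y')
print(chi_bracket([2], [x, 1, 1/x], [y, 1/y]))   # type B specialization, cf. (B5)
print(chi_angle([2, 1], [x, 1/x], [y, 1/y]))     # type C specialization, cf. (B7)
```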
Let us introduce the set of all the partitions \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{a})\) in a rectangular Young diagram \((m^{a})\) in the \([M,N]\)-hook \((a,m\in\mathbb{Z}_{\geq 1})\):
\[\mathcal{S}_{\raisebox{-0.5pt}{\hbox{\rule{0.4pt}{6.5pt}\rule{6.5pt}{6.5pt} \rule{6.5pt}{6.5pt}\rule{6.5pt}{6.5pt}\rule{6.5pt}{6.5pt}\rule{6.5pt}{6.5pt} \rule{6.5pt}{6.5pt}\rule{6.5pt}{6.
\[\mathcal{S}_{\mbox{\tiny\raisebox{-0.86pt}{\includegraphics[width=14.2pt]{figs.eps}}}}= \Bigg{\{}\lambda\;\Bigg{|}\begin{array}{l}m\geq\lambda_{1}\geq\lambda_{2}\geq \cdots\geq\lambda_{a}\geq 1,\quad\lambda_{j}\in 2\mathbb{Z}+1\quad\mbox{if}\quad m\in 2 \mathbb{Z}+1,\\ m\geq\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{a}\geq 0,\quad\lambda_{j} \in 2\mathbb{Z}\quad\mbox{if}\quad m\in 2\mathbb{Z}\end{array}\Bigg{\}}.\] (B10)
\(U_{q}(osp(2r+1|2s)^{(1)})\) case: The (super)character limit of the generating function (4.26) is given by
\[\prod_{j=1}^{r}(1-x_{j}t)^{-1}(1-x_{j}^{-1}t)^{-1}\prod_{j=1}^{s}(1-y_{j}t)(1- y_{j}^{-1}t)(1+t)=\sum_{m=0}^{\infty}\chi_{m}({\bf x},{\bf x}^{-1}|{\bf y},-1,{ \bf y}^{-1})t^{m},\] (B11)
where \(x_{j}=z_{j}\) for \(1\leq j\leq r\), \(y_{j}=z_{2r+j}\) for \(1\leq j\leq s\).
We conjecture that the following decompositions hold for any rectangular Young diagram \((m^{a})\) in the \([2r,2s+1]\)-hook \((a,m\in\mathbb{Z}_{\geq 1})\):
\[\chi_{(m^{a})}({\bf x},{\bf x}^{-1}|{\bf y},-1,{\bf y}^{-1}) =\sum_{\lambda\in\mbox{\tiny\raisebox{-0.86pt}{\includegraphics[ width=14.2pt]{figs.eps}}}}\chi_{[\lambda]}({\bf x},1,{\bf x}^{-1}|{\bf y},{\bf y}^{-1}) \qquad\mbox{[on type B]}\] (B12) \[=\sum_{\lambda\in\mbox{\tiny\raisebox{-0.86pt}{\includegraphics[ width=14.2pt]{figs.eps}}}}\chi_{[\lambda]}({\bf x},{\bf x}^{-1}|{\bf y},{\bf y}^{-1}) \qquad\mbox{[on type D]}.\] (B13)
To relate this formula to the labels of representations, we restrict it to the \([r,s]\)-hook. Eq. (B12) suggests a decomposition of a representation \(W_{a,m}\) of \(U_{q}(osp(2r+1|2s)^{(1)})\) (or \(Y(osp(2r+1|2s))\)) into representations \(\{V^{\prime}(\lambda)\}\) of \(U_{q}(osp(2r+1|2s))\) (or \(osp(2r+1|2s)\)): \(W_{a,m}\simeq\oplus_{\lambda}V^{\prime}(\lambda)\), with \(\lambda\) running over the same set as in (B12). Here we use the same symbol \(\lambda\) for a Young diagram and the highest weight specified by it (via (2.13), (2.15)). We do not know whether \(V^{\prime}(\lambda)\) coincides with the irreducible representation \(V(\lambda)\) in the general situation, but at least for \(s=0\) it does. In this case (\(s=0\)), (B12) coincides with the character formula of the Kirillov-Reshetikhin modules [90] for \(U_{q}(so(2r+1)^{(1)})\) or \(Y(so(2r+1))\) (see also [91]; use (2.14) for comparison). Note that the case \(a=r\) (with \(s=0\)) corresponds to a spin-even (tensor-like) representation, and the spin-odd (spinor-like) representations have to be treated separately. The second equality (B13) for \(s=0\) corresponds to [eq. (C.1), [87]], which suggests another decomposition of \(W_{a,m}\).
For more general Young diagram \(\mu\) in the \([2r,2s+1]\)-hook, we expect a decomposition of the form:
\[\chi_{\mu}({\bf x},{\bf x}^{-1}|{\bf y},-1,{\bf y}^{-1})=\sum_{\kappa,\lambda} LR^{\mu}_{(2\kappa)^{\prime},\lambda}\chi_{[\lambda]}({\bf x},1,{\bf x}^{-1}|{ \bf y},{\bf y}^{-1})\qquad\mbox{[on type B]}\;,\] (B14)
where \(LR^{\mu}_{(2\kappa)^{\prime},\lambda}\) is a Littlewood-Richardson coefficient for \(gl(M|N)\), and the sum is taken over all the partitions \(\kappa,\lambda\) such that 66\((2\kappa)^{\prime},\lambda\subset\mu\). A proof of (B14) restricted to the \([r,0]\)-hook for the case \(s=0\) is available in [Lemma 7.3, [91]] (see also [eq. (3.14), [56]]).
\(U_{q}(gl(2r|2s+1)^{(2)})\) case: The (super)character limit of the generating function (4.56) is given by
\[\prod_{j=1}^{r}(1-x_{j}t)^{-1}(1-x_{j}^{-1}t)^{-1}\prod_{j=1}^{s}(1-y_{j}t)(1-y_ {j}^{-1}t)(1-t)=\sum_{m=0}^{\infty}\chi_{m}(\mathbf{x},\mathbf{x}^{-1}|\mathbf{ y},1,\mathbf{y}^{-1})t^{m},\] (B15)
where \(x_{j}=z_{j}\) for \(1\leq j\leq r\), \(y_{j}=z_{2r+j}\) for \(1\leq j\leq s\).
We conjecture that the following decompositions hold for any rectangular Young diagram \((m^{a})\) in the \([2r,2s+1]\)-hook \((a,m\in\mathbb{Z}_{\geq 1})\):
\[\chi_{(m^{a})}(\mathbf{x},\mathbf{x}^{-1}|\mathbf{y},1,\mathbf{y }^{-1}) =\sum_{\lambda\in\mathfrak{S_{D}}}\chi_{[\lambda]}(\mathbf{x},-1, \mathbf{x}^{-1}|\mathbf{y},\mathbf{y}^{-1}) \text{[on type B^{\prime}]}\] (B16) \[=\sum_{\lambda\in\mathfrak{S_{D}}}(-1)^{ma+|\lambda|}\chi_{[ \lambda]}(\mathbf{x},\mathbf{x}^{-1}|\mathbf{y},\mathbf{y}^{-1}) \text{[on type D]},\] (B17)
where \(|\lambda|=\sum_{j=1}^{r}\lambda_{j}\) is the size of the Young diagram \(\lambda\). This case is parallel to the case \(U_{q}(osp(2r+1|2s)^{(1)})\). However, because of the difference between the factors \(1+t\) in (B11) and \(1-t\) in (B15), we have to modify (B5) as in (B16), or add a sign factor as in (B17).
\(U_{q}(gl(2r+1|2s)^{(2)})\) case: The (super)character limit of the generating function (4.67) is given by
\[\prod_{j=1}^{r}(1-x_{j}t)^{-1}(1-x_{j}^{-1}t)^{-1}\prod_{j=1}^{s}(1-y_{j}t)(1- y_{j}^{-1}t)(1-t)^{-1}=\sum_{m=0}^{\infty}\chi_{m}(\mathbf{x},1,\mathbf{x}^{-1}| \mathbf{y},\mathbf{y}^{-1})t^{m},\] (B18)
where \(x_{j}=z_{j}\) for \(1\leq j\leq r\), \(y_{j}=z_{2r+1+j}\) for \(1\leq j\leq s\).
We conjecture that the following decompositions hold for any rectangular Young diagram \((m^{a})\) in the \([2r+1,2s]\)-hook \((a,m\in\mathbb{Z}_{\geq 1})\):
\[\chi_{(m^{a})}(\mathbf{x},1,\mathbf{x}^{-1}|\mathbf{y},\mathbf{y }^{-1}) =\sum_{\lambda\in\mathfrak{S_{D}}}\chi_{[\lambda]}(\mathbf{x},1, \mathbf{x}^{-1}|\mathbf{y},\mathbf{y}^{-1}) \text{[on type B]}\] (B19) \[=\sum_{\lambda\in\mathfrak{S_{D}}}\chi_{\langle\lambda\rangle}( \mathbf{x},\mathbf{x}^{-1}|\mathbf{y},\mathbf{y}^{-1}) \text{[on type C]}.\] (B20)
To relate this formula to the labels of representations, we restrict it to the \([r,s]\)-hook. Eq. (B19) suggests a decomposition of a representation \(W_{a,m}\) of \(U_{q}(gl(2r+1|2s)^{(2)})\) into representations \(\{V^{\prime}(\lambda)\}\) of \(U_{q}(osp(2r+1|2s))\): \(W_{a,m}\simeq\oplus_{\lambda\in\mathfrak{S_{D}}}V^{\prime}(\lambda)\). In particular, \(V^{\prime}(\lambda)\) coincides with the irreducible representation \(V(\lambda)\) at least for \(s=0\), and then (B19) coincides with the character formula of the Kirillov-Reshetikhin modules for \(U_{q}(gl(2r+1)^{(2)})\) [eq. (6.7), [87]]. The second equality (B20) for \(s=0\) corresponds to [eq. (6.6), [87]], which suggests another decomposition of \(W_{a,m}\).
\(U_{q}(gl(2r|2s)^{(2)})\) case: The (super)character limit of the generating function (4.78) is given by
\[\prod_{j=1}^{r}(1-x_{j}t)^{-1}(1-x_{j}^{-1}t)^{-1}\prod_{j=1}^{s}(1-y_{j}t)(1-y_{j }^{-1}t)=\sum_{m=0}^{\infty}\chi_{m}({\bf x},{\bf x}^{-1}|{\bf y},{\bf y}^{-1})t ^{m},\] (B21)
where \(x_{j}=z_{j}\) for \(1\leq j\leq r\), \(y_{j}=z_{2r+j}\) for \(1\leq j\leq s\).
We conjecture that the following decompositions hold for any rectangular Young diagram \((m^{a})\) in the \([2r,2s]\)-hook \((a,m\in{\mathbb{Z}}_{\geq 1})\):
\[\chi_{(m^{a})}({\bf x},{\bf x}^{-1}|{\bf y},{\bf y}^{-1}) =\sum_{\lambda\in\mathfrak{S}_{\mbox{\small\framebox{$\mathfrak{ B}$}}}}\chi_{[\lambda]}({\bf x},{\bf x}^{-1}|{\bf y},{\bf y}^{-1}) \mbox{[on type D]}\] (B22) \[=\sum_{\lambda\in\mathfrak{S}_{\mbox{\small\framebox{$\mathfrak{ B}$}}}}\chi_{\langle\lambda\rangle}({\bf x},{\bf x}^{-1}|{\bf y},{\bf y}^{-1}) \mbox{[on type C]}.\] (B23)
To relate this formula to the labels of representations, we restrict it to the \([r,s]\)-hook. Eq. (B22) suggests a decomposition of a representation \(W_{a,m}\) of \(U_{q}(gl(2r|2s)^{(2)})\) into representations \(\{V^{\prime}(\lambda)\}\) of \(U_{q}(osp(2r|2s))\): \(W_{a,m}\simeq\oplus_{\lambda\in\mathfrak{S}_{\mbox{\small\framebox{$\mathfrak{ B}$}}}}V^{\prime}(\lambda)\). In particular, \(V^{\prime}(\lambda)\) coincides with the irreducible representation \(V(\lambda)\) at least for \(s=0\), \(\lambda_{r}=0\), and \(V^{\prime}(\lambda)\simeq V(\lambda)\oplus V(\lambda)|_{\lambda_{r}\to- \lambda_{r}}\) at least for \(s=0\), \(\lambda_{r}\neq 0\), and then (B22) coincides with the character formula of the Kirillov-Reshetikhin modules for \(U_{q}(gl(2r)^{(2)})\) [eq. (6.9), [87]]. The second equality (B23) for \(s=0\) corresponds to [eq. (6.8), [87]], which suggests another decomposition of \(W_{a,m}\).
\(U_{q}(osp(2r|2s)^{(1)})\) case: The (super)character limit of the generating function (4.131) is given by
\[\prod_{j=1}^{r}(1-x_{j}t)^{-1}(1-x_{j}^{-1}t)^{-1}\prod_{j=1}^{s}( 1-y_{j}t)(1-y_{j}^{-1}t)(1-t^{2})=\\ =\sum_{m=0}^{\infty}\chi_{m}({\bf x},{\bf x}^{-1}|{\bf y},1,-1,{ \bf y}^{-1})t^{m},\] (B24)
where \(x_{j}=z_{j}\) for \(1\leq j\leq r\), \(y_{j}=z_{2r+j}\) for \(1\leq j\leq s\).
We conjecture that the following decomposition holds for any rectangular Young diagram \((m^{a})\) in the \([2r,2s+2]\)-hook \((a,m\in{\mathbb{Z}}_{\geq 1})\):
\[\chi_{(m^{a})}({\bf x},{\bf x}^{-1}|{\bf y},1,-1,{\bf y}^{-1})=\sum_{\lambda \in\mathfrak{S}_{\mbox{\small\framebox{$\mathfrak{B}$}}}}\chi_{[\lambda]}({ \bf x},{\bf x}^{-1}|{\bf y},{\bf y}^{-1})\mbox{[on type D]}.\] (B25)
To relate this formula to the labels of representations, we restrict it to the \([r,s]\)-hook. In this case, our observation on the Bethe strap suggests that it does not always give an irreducible character of \(U_{q}(osp(2r|2s)^{(1)})\) in the general situation. In addition, (B25) suggests a decomposition of a representation \(W_{a,m}\) of \(U_{q}(osp(2r|2s)^{(1)})\) (or \(Y(osp(2r|2s))\)
into representations \(\{V^{\prime}(\lambda)\}\) of \(U_{q}(osp(2r|2s))\) (or \(osp(2r|2s)\)): \(W_{a,m}\simeq\oplus_{\lambda\in\tilde{\mathfrak{S}}}V^{\prime}(\lambda)\). In particular, \(V^{\prime}(\lambda)\) coincides with the irreducible representation \(V(\lambda)\) at least for \(s=0\), \(\lambda_{r}=0\), and \(V^{\prime}(\lambda)\simeq V(\lambda)\oplus V(\lambda)|_{\lambda_{r}\to- \lambda_{r}}\) at least for \(s=0\), \(\lambda_{r}\neq 0\), and then (B25) for \(1\leq a\leq r-2\) coincides with the character formula of the Kirillov-Reshetikhin modules for \(U_{q}(so(2r)^{(1)})\) or \(Y(so(2r))\)[90] (see also [eq. (7.4), [91]]). The cases \(a=r-1,r\) for \(s=0\) have to be treated separately.
\(U_{q}(osp(2r|2s)^{(2)})\), \(r\geq 1\), \(s\geq 0\) case: The (super)character limit of the reduction of the generating function (4.131) is given by
\[\prod_{j=1}^{r-1}(1-x_{j}t)^{-1}(1-x_{j}^{-1}t)^{-1}\prod_{j=1}^{ s}(1-y_{j}t)(1-y_{j}^{-1}t)(1-t)^{-1}(1+t)=\\ =\sum_{m=0}^{\infty}\chi_{m}(\tilde{\mathbf{x}},1,1,\tilde{ \mathbf{x}}^{-1}|\mathbf{y},1,-1,\mathbf{y}^{-1})t^{m},\] (B26)
where \(\tilde{\mathbf{x}}=\{x_{j}\}_{j=1}^{r-1}\), \(\tilde{\mathbf{x}}^{-1}=\{x_{j}^{-1}\}_{j=1}^{r-1}\), \(x_{j}=z_{j}\) for \(1\leq j\leq r-1\), \(y_{j}=z_{2r+j}\) for \(1\leq j\leq s\).
We conjecture that the following decomposition holds for any rectangular Young diagram \((m^{a})\) in the \([2r,2s+2]\)-hook \((a,m\in\mathbb{Z}_{\geq 1})\):
\[\chi_{(m^{a})}(\tilde{\mathbf{x}},1,1,\tilde{\mathbf{x}}^{-1}|\mathbf{y},1,-1,\mathbf{y}^{-1})=\sum_{\lambda\in\tilde{\mathfrak{D}}}\chi_{[\lambda]}( \tilde{\mathbf{x}},1,\tilde{\mathbf{x}}^{-1}|\mathbf{y},\mathbf{y}^{-1})\qquad \text{[on type B]}.\] (B27)
To relate this formula to the labels of representations, we restrict it to the \([r-1,s]\)-hook. (B27) suggests a decomposition of a representation \(W_{a,m}\) of \(U_{q}(osp(2r|2s)^{(2)})\) into representations \(\{V^{\prime}(\lambda)\}\) of \(U_{q}(osp(2r-1|2s))\): \(W_{a,m}\simeq\oplus_{\lambda\in\tilde{\mathfrak{S}}\mathbf{D}}V^{\prime}(\lambda)\). In particular, \(V^{\prime}(\lambda)\) coincides with the irreducible representation \(V(\lambda)\) at least for \(s=0\), \(\lambda_{r-1}=0\), and then (B27) for \(1\leq a\leq r-2\) coincides with the character formula of the Kirillov-Reshetikhin modules for \(U_{q}(so(2r)^{(2)})\) [eq. (6.10), [87]]. The case \(a=r-1\) for \(s=0\) has to be treated separately.
We have not tried to prove (B12), (B13), (B16), (B17), (B19), (B20), (B22), (B23), (B25) and (B27) yet, but checked them by Mathematica (ver. 7) for the case \(0\leq r+s\leq 6\).
|
2302.14305 | Ultra-high vacuum pressure measurement using cold atoms | In this work, we have measured the background pressure in an ultra-high
vacuum (UHV) chamber by measuring the collisional loss rates in a Rb atom
magneto-optical trap (MOT) on an atom chip. The loss rate due to non-Rb gases
in the background has been estimated by measuring the MOT loss rate in low Rb
pressure regime. These results can be useful for development of cold-atoms
based UHV pressure standards. | S. Supakar, Vivek Singh, V. B. Tiwari, S. R. Mishra | 2023-02-28T04:35:52Z | http://arxiv.org/abs/2302.14305v1 | # Ultra-high vacuum pressure measurement using cold atoms
###### Abstract
In this work, we have measured the background pressure in an ultra-high vacuum (UHV) chamber by measuring the collisional loss rates in a Rb atom magneto-optical trap (MOT) on an atom chip. The loss rate due to non-Rb gases in the background has been estimated by measuring the MOT loss rate in low Rb pressure regime. These results can be useful for development of cold-atoms based UHV pressure standards.
The magneto-optical trap (MOT) is a robust device to generate samples of ultracold atoms for various research and device applications. Nowadays, cold atoms are considered important quantum systems for their applications in several upcoming quantum technologies such as high precision atomic clocks [1; 2], inertial sensors [3; 4; 5; 6; 7; 8; 9], electromagnetic field sensors [10; 11; 12], quantum computers [13; 14], etc. Recently, the use of cold atoms for developing a quantum vacuum pressure standard has been proposed and demonstrated [15; 16]. Cold-atom based pressure sensors are absolute and universal, as they are based on atomic collision processes and no repeated calibration is required over time. This is an advantage of a cold-atom based pressure standard over conventional pressure sensing instruments such as ionization gauges, which require repeated calibrations due to aging of filaments and electrodes. In addition, cold-atom based pressure standards can work over a large dynamic range of vacuum, from the UHV to the extreme-high vacuum (XHV) regime.
The loss rate of atoms in atom traps is sensitive to the background pressure in the trap chamber [17; 18; 19; 20; 21; 22; 23; 24]. Therefore atom traps can be utilized to sense or measure the UHV pressure in the chamber. Both MOTs and magnetic traps are being used for pressure sensing applications, with their relative advantages over each other. A MOT is easier to form but can sense pressure in the UHV regime only, whereas magnetic traps can be used to sense the pressure down to the XHV regime. Earlier, Yuan et al [21] estimated the Rb-pressure in the chamber from the MOT loading time in the low cooling beam intensity regime by ignoring the intra-trap collisional loss rate. Willems et al [25] estimated the background pressure (non-Cs gases) in the chamber by measuring the lifetime of a MOT and a magneto-static trap. Arporthip et al [17] measured the MOT loading time as a function of the background pressure (non-Rb contents) as well as of the MOT loading rate (dependent on Rb-pressure), to estimate the background pressure and the partial pressure of Rb in the chamber. Moore et al [20] measured the non-Rb background pressure in the chamber from the Rb-MOT loading data by increasing the non-Rb gas pressure in the chamber, applying the chamber pressure rise approach demonstrated earlier by Arporthip et al. In this method, the sputter ion pump (SIP) was turned off to change the non-Rb gas pressure and the MOT loading was studied at different non-Rb background pressures. Though this method is more time consuming, it is suitable for detecting vacuum leaks in the chamber. In another approach [24], the partial pressures due to Rb and non-Rb gases have been estimated by measuring the saturated number and loading time in a Rb-MOT.
In the work reported here, we have estimated the background pressure due to non-Rb gases in the chamber by measuring the Rb-MOT loading time in the low Rb pressure and low cooling beam intensity regimes. We first measured the MOT loss rate (\(\Gamma\)) as a function of the cooling beam intensity. The MOT loading time at low cooling beam intensity was used to estimate the total (Rb and non-Rb) background collisional loss rate by neglecting the intra-trap collisional loss rate. Then, we measured the loading time of this low cooling beam intensity MOT as a function of the Rb-dispenser current. In these measurements, the MOT loading time at low dispenser current was used to estimate the MOT loss rate due to non-Rb background gas contents, which provided an estimate of the non-Rb background pressure in the chamber. Therefore, as compared to earlier methods [17; 20; 21], we show that the non-Rb partial pressure in the UHV chamber can be estimated by measuring the MOT loading time in the low cooling beam intensity and low Rb pressure regimes. Our method is comparatively less time consuming and does not require switching off the pumping of the vacuum chamber, which prevents exposure of the chamber to undesirable gas contamination. The straightforward method presented here has the potential for developing a UHV pressure sensor device.
The MOT loading process can be described by a rate equation as [17],
\[\frac{dN(t)}{dt}=R-\gamma_{b}N(t)-\beta\bar{n}(t)N(t) \tag{1}\]
where \(N(t)\) is the number of atoms in the MOT cloud at time \(t\), \(R\) is the loading rate of the MOT due to Rb vapour in the background, \(\gamma_{b}\) is the loss rate in the MOT due to collisions of trapped atoms with the atoms/molecules present in the background, \(\beta\) is the loss rate due to inelastic two-body intra-trap collisions, \(\bar{n}(t)=\int n(\textbf{r},t)^{2}\,dV/N(t)\) is the average number density, and \(n(\textbf{r},t)\) is the number density of the trapped atoms in the MOT cloud.
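As a minimal numerical sketch (with parameter values chosen purely for illustration, not the measured ones), Eq. (1) can be integrated directly in the constant density regime, where \(\beta\bar{n}\) is treated as a constant:

```python
import numpy as np
from scipy.integrate import solve_ivp

R       = 2.0e5    # loading rate (atoms/s), assumed value for illustration
gamma_b = 0.01     # background collisional loss rate (1/s), assumed value
beta_n  = 1.0e-4   # intra-trap loss term beta*nbar (1/s), assumed value

def dNdt(t, N):
    # Eq. (1) with beta*nbar held constant (constant-density regime)
    return R - (gamma_b + beta_n) * N

sol = solve_ivp(dNdt, (0.0, 600.0), [0.0], max_step=1.0)
print("N at t = 600 s     :", sol.y[0][-1])
print("steady-state N_s   :", R / (gamma_b + beta_n))      # N_s = R * tau_L
print("loading time tau_L :", 1.0 / (gamma_b + beta_n), "s")
```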
The solution of equation (1) depends on the parameter regime in which the MOT is operated. For a small number of atoms in the MOT (\(N<10^{5}\)), known as the constant volume regime, \(\bar{n}(t)\approx N(t)/V\). For large \(N\) (\(N>10^{5}\)), known as the constant density regime, \(\bar{n}\) is constant. In our experiments, the MOT was operated in the constant density regime (i.e. \(N>10^{6}\)); therefore the solution of equation (1) can be written as
\[N(t)=N_{s}\left[1-exp(-t/\tau_{L})\right], \tag{2}\]
where \(\tau_{L}=1/\Gamma\) with \(\Gamma=\gamma_{b}+\beta\bar{n}\). Here \(N_{s}=R\tau_{L}\) is the final number in the MOT (i.e. the number of atoms in the MOT in equilibrium). The parameter \(\tau_{L}\) is known as the MOT loading time. Equation (2) describes the variation of the number of atoms in a MOT with time, and its plot is referred to as the MOT loading curve. From the experimentally measured MOT loading curve, both parameters, \(\tau_{L}\) and \(N_{s}\), can be determined. These parameters depend on the background pressure in the chamber due to Rb and non-Rb gas contents, and on the MOT parameters such as the cooling beam intensity.
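In practice, \(\tau_{L}\) and \(N_{s}\) are obtained by fitting Eq. (2) to the measured loading curve. A minimal sketch with SciPy is given below; the synthetic arrays only stand in for the fluorescence-derived data:

```python
import numpy as np
from scipy.optimize import curve_fit

def loading_curve(t, N_s, tau_L):
    # Eq. (2): N(t) = N_s * (1 - exp(-t/tau_L))
    return N_s * (1.0 - np.exp(-t / tau_L))

# stand-in data; replace with the measured loading curve
rng = np.random.default_rng(0)
t_data = np.linspace(0.0, 300.0, 60)
N_data = loading_curve(t_data, 1.0e7, 65.0) * (1.0 + 0.02 * rng.standard_normal(t_data.size))

(N_s, tau_L), _ = curve_fit(loading_curve, t_data, N_data, p0=(N_data.max(), 30.0))
Gamma = 1.0 / tau_L          # total loss rate, Gamma = gamma_b + beta*nbar
print(f"N_s = {N_s:.3g}, tau_L = {tau_L:.1f} s, Gamma = {Gamma:.2e} 1/s")
```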
It is known that the loss rate of atoms from the MOT cloud due to collisions with atoms/molecules of any gas species in the background is related to its partial pressure in the chamber. The loss rate \(\gamma_{i}\) due to collisions with atoms/molecules of the \(i^{th}\) gas species in the background is related to its partial pressure \(P_{i}\) as [17],
\[\gamma_{i}=6.8\frac{P_{i}}{(k_{B}T)^{2/3}}\left(\frac{C_{i}}{m_{i}}\right)^{1/3}(Dm_{0})^{-1/6}=\frac{P_{i}}{k_{i}}, \tag{3}\]
where \(m_{0}\) is the mass of the trapped atom, \(m_{i}\) is the mass of the incident atom/molecule of the \(i^{th}\) gas species, \(k_{B}\) is the Boltzmann constant, \(T\) is the temperature, \(D\) is the trap depth of the MOT, and \(C_{i}\) is the Van der Waals coefficient for the \(i^{th}\) gas species in the background.
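A sketch of Eq. (3) in SI units is given below. The Van der Waals coefficient and trap depth entered here are placeholder values for illustration only; only the conversion constant \(k_{H_{2}}\) quoted at the end of this work should be taken as the value actually used:

```python
import numpy as np

kB   = 1.380649e-23      # J/K
amu  = 1.66053907e-27    # kg
TORR = 133.322           # Pa per Torr

def k_coefficient(T, C6, m_i, D, m0):
    # k_i of Eq. (3), so that gamma_i = P_i / k_i (SI units: returns Pa*s)
    return (kB * T) ** (2.0 / 3.0) * (m_i / C6) ** (1.0 / 3.0) * (D * m0) ** (1.0 / 6.0) / 6.8

# illustrative (assumed) inputs: room temperature, H2 projectile, 87Rb trapped atom
T, m_i, m0 = 300.0, 2.0 * amu, 87.0 * amu
C6 = 1.0e-76             # J m^6, placeholder Van der Waals coefficient
D  = kB * 1.0            # trap depth ~ kB x 1 K, placeholder

gamma = 0.0056           # 1/s, non-Rb loss rate quoted later in the text
print("P =", k_coefficient(T, C6, m_i, D, m0) * gamma / TORR, "Torr (placeholder C6, D)")
```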
In the typical vapour loaded MOT chamber, the background collisional loss rate has two components and can be written as,
\[\gamma_{b}=\gamma_{Rb}+\gamma_{non-Rb}, \tag{4}\]
where \(\gamma_{Rb}(=(k_{Rb})^{-1}P_{Rb})\) represents the loss rate due to collisions with untrapped Rb vapour atoms and \(\gamma_{non-Rb}\) represents the loss rate due to other atoms/molecules in the background.
We have experimentally measured \(\Gamma\) for different values of the laser beam intensity in the MOT. In the low intensity regime of the cooling laser beam, the value of \(\Gamma\) can be approximated as \(\Gamma\approx\gamma_{b}\), since the intra-trap collisional loss rate (\(\beta\bar{n}\)) is negligible compared to the background collisional loss rate. For a value of \(\bar{n}\sim 10^{8}\) cm\({}^{-3}\) (for our MOT at 7.7 mW/cm\({}^{2}\)) and \(\beta\sim 2\times 10^{-12}\) cm\({}^{3}\) s\({}^{-1}\) (as reported earlier [25] for the detuning and intensity used in our MOT), the value of \(\beta\bar{n}\) is \(\sim 10^{-4}\) s\({}^{-1}\). This is much smaller than the lowest value of \(\gamma_{b}\) (\(\sim 0.0071\) s\({}^{-1}\)) observed at that intensity (Figure 3(a)).
The experiments have been performed by loading a mirror-MOT (U-MOT) on an atom-chip, with the schematic as shown in Figure 1. The details of the experimental setup of the atom-chip mirror-MOT have been described earlier [24; 26]. The vacuum pumps used in the setup include a 77 l/s turbo molecular pump (TMP), a 300 l/s sputter ion pump (SIP) and a titanium sublimation pump (TSP). The ultimate base pressure achieved in the chamber without Rb vapour was \(1.5\times 10^{-10}\) Torr as read by the SIP controller. The pressure values read by the SIP controller were nearly equal to those read by an extractor gauge attached to the chamber. A Rb dispenser assembly having three Rb dispensers (Rb/NF/3.4/12FT) connected in a parallel configuration was prepared by welding the dispensers on a two-pin feedthrough. This assembly was placed in the vacuum chamber through a viewport hole such that the Rb dispensers are at a distance of \(\sim\) 17 cm from the centre of the octagonal chamber. Rubidium vapour is produced inside the chamber by flowing a current through this dispenser assembly. The current in each dispenser (\(I_{D}\)) is nearly one-third of the current supplied to the dispenser assembly.
A quadrupole-like magnetic field required for the MOT was generated from a current-carrying (60 A) copper U-wire (Figure 1) placed behind the atom-chip in the presence of homogeneous bias fields (\(B_{y}\sim 11\) G and \(B_{z}\sim 3\) G). The output from frequency-stabilized diode lasers served as the cooling and re-pumping laser beams. Each MOT beam was a combination of a cooling and a re-pumping beam with a suitable ratio of power between them. Two MOT beams were reflected at \(45^{\circ}\) from the chip surface, which formed four MOT beams in the overlapping region. Two counter-propagating MOT beams in the orthogonal direction completed the set of six MOT beams required for operation of the MOT. This configuration is called the mirror-MOT configuration.

Figure 1: A schematic diagram of the experimental setup. Two MOT-beams in the reflection geometry in the y-z plane are shown, whereas the other two MOT-beams along the \(\pm\) x-direction are not shown in the diagram. PD represents the photodiode used for the detection of fluorescence.

Figure 2: The loading curves for the U-MOT on the atom-chip for different values of cooling laser beam intensity at a fixed background pressure (at a dispenser current of \(I_{D}\) = 3.57 A). The experimentally observed MOT loading data along with the best-fit (continuous curve) are shown for different values of intensity.
Figure 2 shows the loading curves of the U-MOT for different values of cooling laser beam intensity at a fixed Rb dispenser current of \(I_{D}\) = 3.57 A. The continuous curves show the best-fit of the experimental loading curves to equation (2). From the fit, we obtain the values of the loading time \(\tau_{L}\) and \(\Gamma\). A reduction in loading time from 65.25 s to 33.41 s was observed with an increase in the intensity of the cooling beam from 7.7 mW/cm\({}^{2}\) to 20.2 mW/cm\({}^{2}\), as shown in Figure 2. These intensity-dependent measurements of \(\Gamma\) were carried out for different values of dispenser current (\(I_{D}\)), and the results are shown in Figure 3(a). As discussed before, the value of \(\Gamma(=\gamma_{b}+\beta\tilde{n})\) in the low cooling beam intensity regime can be approximated as \(\Gamma\sim\gamma_{b}\). Alternatively, as followed by Yuan et al. [21], the intercept on the y-axis of the \(\Gamma\) vs cooling beam intensity plot (Figure 3(a)) can be used to estimate the value of \(\gamma_{b}\). Figure 3(b) shows the \(\gamma_{b}\) values estimated this way for different values of dispenser current.
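A minimal sketch of this intercept-based estimate is shown below; the intensity and loss-rate arrays are placeholders standing in for the measured values plotted in Figure 3(a), not the actual data.

```python
import numpy as np

# Placeholder measurements: cooling beam intensity (mW/cm^2) and fitted loss rate Gamma (1/s)
intensity = np.array([7.7, 10.5, 13.8, 17.0, 20.2])
Gamma = np.array([0.0153, 0.0188, 0.0224, 0.0261, 0.0299])

# Linear fit Gamma = slope * I + intercept; the y-intercept approximates gamma_b
slope, gamma_b = np.polyfit(intensity, Gamma, 1)
print(f"gamma_b (y-intercept) = {gamma_b:.4f} 1/s")
```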
As shown in Figure 3(b), the value of \(\gamma_{b}\) increases rapidly with dispenser current for currents beyond 4.0 A. However, at lower dispenser currents (below 4.0 A), the variation in \(\gamma_{b}\) with current is negligibly small. This shows that the contribution to the loss rate from Rb atoms in the background is negligible in this regime. Therefore, the value of \(\gamma_{b}\) can be approximated by \(\gamma_{non-Rb}\) in this regime. With \(\gamma_{non-Rb}\) estimated this way, \(\gamma_{Rb}\) can be estimated at any current value by measuring \(\gamma_{b}\), since \(\gamma_{non-Rb}\) is independent of dispenser current. Thus, our method allows us to estimate both \(\gamma_{non-Rb}\) and \(\gamma_{Rb}\) without switching off the vacuum pumps, in contrast to earlier works [17; 20].
In the very low pressure (UHV) regime, only a few gas species (H\({}_{2}\), He, Ar, etc.) contribute to the total base pressure in the chamber (\(P=\sum_{i}P_{i}=\sum_{i}k_{i}\gamma_{i}\)). If we consider hydrogen (H\({}_{2}\)) to be the dominant species in the UHV pressure range, as in our chamber [27], the measured \(\gamma_{non-Rb}\) can be approximated by \(\gamma_{H_{2}}\). This gives the non-rubidium partial pressure in the chamber as \(P_{non-Rb}=k_{H_{2}}\gamma_{H_{2}}=k_{H_{2}}\gamma_{non-Rb}=1.1\times 10^{-10}\) Torr, with \(k_{H_{2}}=2.04\times 10^{-8}\) Torr s and \(\gamma_{non-Rb}\) = 0.0056 s\({}^{-1}\). This estimated pressure of non-Rb background gases agrees well with that measured by the SIP controller (\(1.5\times 10^{-10}\) Torr).
Figure 4: Variation in partial pressures due to Rb vapour (\(P_{Rb}\)) and non-rubidium gases (\(P_{non-Rb}\)) with dispenser current. The total background pressure (\(P_{b}=P_{Rb}\) + \(P_{non-Rb}\)) is compared with the pressure measured by the SIP controller (\(P_{SIP}\)).
Figure 3: (a) The variation in the loss rate (\(\Gamma\)) with cooling laser beam intensity for different Rb dispenser current (\(I_{D}\)). (b) Variation in background collisional loss rate (\(\gamma_{b}\)) with Rb dispenser current.
After determining the value of \(\gamma_{Rb}\) at a given dispenser current, the Rb partial pressure can be estimated from the relation \(P_{Rb}=k_{Rb}\gamma_{Rb}\), with \(k_{Rb}=2.27\times 10^{-8}\) Torr s [17; 21]. The variation of the estimated Rb pressure in the chamber with dispenser current is shown in Figure 4. In this figure, the total background pressure (\(P_{b}=P_{Rb}+P_{non-Rb}\)) estimated by our method is also shown and compared with the pressure read by the SIP controller. We note that at low dispenser currents (less than 4.0 A), there is good agreement between the estimated total background pressure (\(P_{b}\)) and the pressure measured by the SIP controller (\(P_{SIP}\)). However, at higher values of dispenser current, the pressure estimated by the present method is higher than that read by the SIP controller. This difference can be attributed to the adsorption of Rb atoms on the chamber walls and on the pipe connecting the SIP to the chamber. Similar observations have been reported earlier [17].
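The conversion from measured loss rates to partial pressures is a one-line application of \(\gamma_{i}=P_{i}/k_{i}\); the sketch below uses the \(k_{H_{2}}\), \(k_{Rb}\), and \(\gamma_{non-Rb}\) values quoted above, while the example \(\gamma_{b}\) value is a placeholder standing in for a measurement at some dispenser current.

```python
# Sensitivity constants (Torr s) and the estimated non-Rb loss rate (1/s) quoted in the text
k_H2, k_Rb = 2.04e-8, 2.27e-8
gamma_non_rb = 0.0056

# Placeholder: total background loss rate measured at some dispenser current (1/s)
gamma_b = 0.012
gamma_rb = gamma_b - gamma_non_rb            # Eq. (4) rearranged

P_non_rb = k_H2 * gamma_non_rb               # non-Rb partial pressure
P_rb = k_Rb * gamma_rb                       # Rb partial pressure
P_b = P_rb + P_non_rb                        # total background pressure
print(f"P_non-Rb = {P_non_rb:.2e} Torr, P_Rb = {P_rb:.2e} Torr, P_b = {P_b:.2e} Torr")
```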
In conclusion, we have estimated the Rb and non-Rb partial pressure values in a UHV chamber from the loading data of a Rb-MOT on an atom-chip. The estimated pressure values agree with the pressure measured by the SIP controller.
We are thankful for the help extended by Amit Chaudhary and Dayanand Mewara for this work.
The authors declare no conflict of interest.
|
2309.10211 | Loop Polarity Analysis to Avoid Underspecification in Deep Learning | Deep learning is a powerful set of techniques for detecting complex patterns
in data. However, when the causal structure of that process is underspecified,
deep learning models can be brittle, lacking robustness to shifts in the
distribution of the data-generating process. In this paper, we turn to loop
polarity analysis as a tool for specifying the causal structure of a
data-generating process, in order to encode a more robust understanding of the
relationship between system structure and system behavior within the deep
learning pipeline. We use simulated epidemic data based on an SIR model to
demonstrate how measuring the polarity of the different feedback loops that
compose a system can lead to more robust inferences on the part of neural
networks, improving the out-of-distribution performance of a deep learning
model and infusing a system-dynamics-inspired approach into the machine
learning development pipeline. | Donald Martin, Jr., David Kinney | 2023-09-18T23:49:42Z | http://arxiv.org/abs/2309.10211v2 | Causal theories and structural data representations for improving out-of-distribution classification
###### Abstract.
We consider how human-centered causal theories and tools from the dynamical systems literature can be deployed to guide the representation of data when training neural networks for complex classification tasks. Specifically, we use simulated data to show that training a neural network with a data representation that makes explicit the invariant structural causal features of the data generating process of an epidemic system improves out-of-distribution (OOD) generalization performance on a classification task as compared to a more naive approach to data representation. We take these results to demonstrate that using human-generated causal knowledge to reduce the epistemic uncertainty of ML developers can lead to more well-specified ML pipelines. This, in turn, points to the utility of a dynamical systems approach to the broader effort aimed at improving the robustness and safety of machine learning systems via improved ML system development practices.
## 1. Introduction
Rapid advances in machine learning (ML) technology are driving a major shift in how industries and disciplines of all kinds leverage data and computation to automate human tasks such as resource allocation and decision making. However, it is also broadly recognized that in many instances ML/AI based systems are not yet sufficiently safe or trustworthy to be productionized at scale, particularly for high-stakes domains such as healthcare and criminal justice (Bommasani et al., 2021). One of the primary technical problems that spans the general and trustworthy AI research areas is brittleness or lack of robustness. ML models are considered brittle when their predictive inference performance deteriorates upon receiving inputs - from real-world deployment domains - that fall outside of the distribution represented in the model training and evaluation data. This problem is also referred to as the "out-of-distribution (OOD) generalization problem" (Shen et al., 2021). Prominent AI researchers have asserted that a root cause of model brittleness is underspecification in ML development pipelines which results in models that fail to encode the essential structural and causal aspects of the relevant problem or task domain and its associated data generating process (DGP) (D'Amour et al., 2020; Scholkopf et al., 2021). In addition to poor robustness performance, such failures to encode important structural causal knowledge have caused real-world societal harms in high-stakes domains (Obermeyer et al., 2019; Ensign et al., 2018).
Models fail to encode key structural causal aspects of the DGP when they lack knowledge about these structural features. In machine learning this lack of knowledge is a specific sub-class of epistemic uncertainty called "model" uncertainty. Epistemic uncertainty in machine learning is a prominent topic as practitioners seek ways to measure and reduce this uncertainty to create more reliable ML-based solutions that generalize well in real-world deployment settings (Hulermeier and Waegeman, 2021; Huang et al., 2021; Varshney and Alemzadeh, 2017). In the current ML literature, epistemic uncertainty is conceptualized with respect to the learning agent (i.e.,
the prediction or inference model) and its training data. These model and data-centric conceptions ignore the epistemic uncertainty of the human learning agents tasked with specifying critical structural causal knowledge via ML pipeline mechanisms and practices such as data representation. We assert that factoring in the epistemic uncertainty of the human learning agents that drive ML development processes is critical for improving ML pipeline specification.
This paper is concerned with the norms and processes for specifying, constructing and evaluating inference models that directly influence the ultimate robustness and safety of ML systems. We call attention to the often-obscured fact that what are often characterized as ML model failures are actually failures of the ML development process, which is driven end-to-end by human decision making. Figure 1 depicts the typical ML development process and its relationship to the problem domain which includes a DGP and associated so-called background knowledge (Azimi and Zaydman, 2023). The major steps of the process - which are often _ad hoc_ and informal - are problem understanding, problem formulation, data selection and preparation, and model training and evaluation. The process is intended to transform human knowledge and inductive biases about the problem domain and the DGP into "substantive inductive biases" that are instantiated in the model via human decisions about important factors including but not limited to prediction task, data representation, model architecture, initialization, parameterization, optimization algorithm, batch size, and learning rate (D'Amour et al., 2020). In this paper we will focus on the human decisions that impact data representation, as they have enormous downstream implications on the remainder of the process and how well ML systems generalize for a given domain (Ding et al., 2021).
Human decision making relies on the ability of humans to infer the essential causal structure of a problem domain or situation. Humans are able to perform causal inference based on small amounts of observed data via strong prior knowledge in the form of causal theories (Tenenbaum and Griffiths, 2002). Causal theories are at play during the human decision making that is central to the problem understanding, problem formulation and data preparation phases of ML pipeline specification that result in a particular learning task and data representation. Unfortunately, the causal theories that inform these critical decisions are usually implicit.
Figure 1. Typical machine learning development process

As they are typically less proximate to the real-world aspects of problem domains, ML development decision makers often lack the background knowledge of domain experts. This lack of knowledge leads to a shallow or naive understanding of the problem domain (Correia et al., 2020)
and an _epistemic gap_ between domain experts and the ML developers. As such, ML developers have a high degree of epistemic uncertainty about the structural causal aspects of the specific application domain and its DGPs. This high degree of epistemic uncertainty results in weak causal theories that are not explicitly specified or reviewed. When ML decision makers have weak causal theories, their data selection and data representation decisions are less likely to convey the essential structural causal knowledge required for a model to perform well in OOD scenarios.1 In a sense, the models inherit the epistemic uncertainty of their creators. As such, it is critical to develop methods to measure and reduce the epistemic uncertainty of the human agents that specify ML pipelines. Additionally, methods for domain experts and stakeholders to communicate a priori structural knowledge across the epistemic gap and for ML developers to convey that knowledge via ML pipeline mechanisms like problem formulation and data representation are a critical ingredient for reducing underspecification.2
Footnote 1: OOD scenarios can arise in a host of ways, from moving to an entirely new environment from the one in which the training data were generated (e.g., training data for a classifier used in a medical setting may come from one hospital, only for the classifier to be deployed in another hospital), to encountering a different set of data points within the same environment (e.g., a new group of patients within the same hospital in which the training data were generated).
Footnote 2: D’Amour et al. (2020) explicitly call for “better interfaces for conveying domain knowledge.”
In this paper, we use a simple case study to show how data representations that take into account an explicit human causal theory of the causal structure of a DGP can lead to more robust inference. Specifically, we use the SIR model, a well-known causal theory about the dynamic behavior of epidemics, to generate simulated epidemics and to serve as the basis for problem understanding, problem formulation, and data preparation decisions that affect and drive the neural network training pipeline. We then seek to train a neural network to classify, over any given three-time-step interval, whether the _polarity_ of the level of infections is positive or negative. Polarity in the level of infections over a time interval is a frequently-studied quantity in applications of dynamical systems theory to the SIR model (Hayward and Boswell, 2014), and represents whether the rate of change in the number of infected people is positively or negatively impacted by an actual change in the number of infected people over that interval. To illustrate, if the polarity of an infected population is positive, then increases in the number of infected people will result in increases in the rate of infections, creating runaway growth. Similarly, in a positive-polarity system, decreases in the number of infected people will lead to runaway decay in the number of infected people. By contrast, when the polarity in the number of infected people is negative, we can expect plateauing in the number of infected, as increases or decreases in the total number lead to counter-balancing decreases or increases in the rate of change. Thus, measuring polarity provides an important insight into the dynamical behavior of the system over a given time interval (Richardson, 1986; Hayward and Boswell, 2014; Centers for Disease Control and Prevention, 2021).
Our goal is to determine the conditions under which a neural network can be trained to accurately classify the polarity of a system over a fixed interval, given the data generated by the system over the same interval. To our knowledge, this problem has not yet been considered in the machine learning literature (though note work in this vicinity by Schoenberg and Swartz (2020), which is discussed in more detail below). In what follows, we use two different representations of the data generated by a simulated epidemic to train a neural network to classify the polarity of the level of infections
over a given time interval. The first data set, which we call the "raw data" representation, contains all of the information needed to calculate polarity, but does not incorporate any knowledge of the causal structure of the data-generating process. The second data set, which we call the "polarity" representation, uses human knowledge - provided by the SIR causal theory - of the causal structure of the data-generating process to decompose the epidemic system into a set of quantities, and then measure the polarity of these quantities. This data representation is more sparse and coarse-grained than the raw data representation. Nevertheless, we show that a neural network trained on the polarity data representation is able to perform significantly better on OOD data, because it is trained on a data representation that encodes structural knowledge of the causal feedback loops that compose the data-generating process.
A key takeaway from these computational experiments is that ML developers can benefit from a priori theories about the causal structure of DGPs to inform how they formulate problems for machine learning and choose the data representations they use for training neural networks. To make this point explicitly, we model how ML developers can leverage causal theories, which express domain knowledge as causal structure, to reduce their epistemic uncertainty about the problem domain and choose optimal data representation schemes. Specifically, we represent the epistemic states of ML developers with respect to the causal structure of the data-generating process using probability distributions, and show how domain knowledge can be exploited to reduce the entropy of these distributions, which in turn informs choices about the learning task and data representation. This amounts to an expansion of the concept of epistemic uncertainty as it currently exists in ML. Typically (e.g., in Hullermeier and Waegeman, 2021) epistemic uncertainty is understood as the uncertainty of the neural network learning agent about the nature of the DGP that can be reduced by observing more data from said DGP. Here we consider the epistemic uncertainty of the human ML development decision making agents about the nature of the DGP that can only be reduced through additional expert domain knowledge. We then show that the reduction of the epistemic uncertainty of ML decision makers can directly lead to a method of data representation that improves neural network OOD generalization performance.
To summarize, causal modeling is a way to make human causal theories explicit and has been recognized as a key ingredient for improving robustness and ML pipeline under-specification (Scholkopf et al., 2021; D'Amour et al., 2020). Although ordinary differential equations (ODEs) are recognized as the gold standard for representing structural causal knowledge (Scholkopf et al., 2021), the ML community has embraced structural causal models (SCMs) and Bayesian Causal Networks (BCNs) for making causal theories explicit and leveraging them for improving model robustness and evaluating model fairness (Chiappa and Isaac, 2019; Kyono and Van der Schaar, 2021). However, these methods do not straightforwardly incorporate either the inherent temporal element or the cyclic feedback loop mechanisms that can characterize natural and socially constructed phenomena such as epidemics (Brewer et al. 2017, p. 6-7). We leverage an ODE representation of human causal theory to develop a data representation method that bridges the gap between the more qualitative causal modeling domain and the quantitative statistical inference domain. We then show how the resulting data representation strongly biases the ML pipeline for better OOD performance.
The remainder of this paper proceeds as follows. In Section 2, we discuss previous work in the literature that is relevant to our discussion here, and situate our study within this context. In Section
3, we provide a basic overview of the elements of neural network architecture for classification that are relevant to our study. In Section 4, we introduce the polarity framework for understanding system behavior. In Section 5, we use the case study of a simple epidemic model to illustrate the distinction between a raw data approach to data representation as compared to a polarity approach (the latter of which is informed by causal domain knowledge), and show how the polarity approach enables more accurate classification on OOD data. In Section 6, we use a formal measure of epistemic uncertainty to discuss how a structural feature understanding of a data-generating process (e.g., the polarity framework) by ML development decision makers enables a reduction in epistemic uncertainty about the nature of said process. We discuss the broader implications of this case study for the ML development pipeline in Section 7, and conclude in Section 8.
## 2. Related Work
The work that most directly addresses the intersection of polarity frameworks and neural network inference is found in Schoenberg (2020) and Schoenberg and Swartz (2020). That work advocates the use of neural networks to build causal loop diagrams based on data with less structure. By contrast, in what follows we will assume that a particular causal loop structure, represented as a set of ordinary differential equations, generates a particular data set, and then exploit that assumption to represent the data in a way that allows for more effective neural network classification on OOD data. In this sense, our project is closer to work by Kyono and Van der Schaar (2021), who show how training a neural network using a loss function that rewards a prediction's coherence with a pre-assumed causal structure leads to improved inference. However, whereas that work assumes that data is generated by an acyclic causal structure, we assume an inherently cyclic causal structure for the data-generating process, represented mathematically by a system of ordinary differential equations. This allows us to exploit dynamical aspects of the data-generating process, such as polarity, that are not represented in an acyclic framework.
Perhaps the work that is closest in spirit to our own is due to Pham et al. (2023) and Magliacane et al. (2018). Pham et al. (2023) introduce a hidden neural network layer that learns a weighted graph representing relationships between features of the input data. This relational information is then fed into the next layer of the network, influencing prediction. They find that this architecture improves the performance of a neural network on OOD data. In the same way, our approach to data representation exploits relationships between data to improve the OOD performance of neural networks. However, our approach leverages human domain expertise to target specific dynamical relationships between the data, whereas Pham et al.'s approach begins from a more agnostic starting point. Moreover, our polarity-based schema for data representation greatly simplifies the representation of the data, reducing a real-valued input vector to a binary-valued one. By contrast, Pham et al. introduce an additional, real-valued weighted graph to the initial data representation, such that their approach achieves improved OOD performance through a refinement, rather than a coarsening, of the data representation.
Magliacane et al. (2018) show how knowledge of the causal structure of a DGP can enable better OOD inference about the data that would be produced under hypothetical interventions. Specifically, they provide an algorithm that uses causal knowledge to learn the features of data that are most predictive of system behavior under hypothetical interventions on variables. Here, we take
a similar approach, learning coarse-grained features of data that are highly predictive of system behavior. However, by measuring the polarity of the quantities represented in our data, we are able to make inferences about the dynamics of the system (i.e., how the system is evolving over time). A fruitful approach in future research may involve combining Magliacane et al.'s approach to causal feature learning with an approach that implicitly encodes causal dynamics.
Recent work by Bongers et al. (2021) and Weinberger (2020) has extended the structural causal modeling framework for generative inference to include cyclic relationships between random variables and differentiable stochastic processes. In this context, one can see our contribution here as partly a fusion of Kyono and van der Shaar's approach to neural network inference and the cyclic nature of Bongers et al.'s generative models. However, Bongers et al. do not use their framework to measure structural parameters like polarity, nor do they explicitly apply their approach to the problem of data representation in the ML development pipeline. In both of these respects, our work expands on theirs by explicitly connecting a cyclic causal framework to both the dynamical systems literature and data representation.
In addition, recent work by Geiger, Potts, and Icard (2023) generalizes the idea of "causal abstraction," which considers how the variables of a causal model can be fused and coarse-grained to create simpler causal models (see Chalupka, Eberhardt, and Perona, 2017; Beckers and Halpern, 2019; Rubenstein et al., 2017), to the cyclic (as opposed to acyclic) setting. They then use this more generalized notion of abstraction to coarse-grain the architectures of neural networks, to yield a more tractable, explainable architecture. This is related to our project here in that it exploits knowledge of cyclic causal structures to simplify and improve neural network inference. We also use the polarity framework to coarse-grain and simplify our representation of a data-generating process. However, our work is distinguished by the fact that we enable simpler, more tractable representations of the data that are used to train a neural network, rather than coarse-graining and abstracting the structure and parameter space of the neural network itself.
A crucial upshot of our project is to show that a polarity framework allows us to represent the structure of a data generating process with much lower worst-case epistemic uncertainty than a more naive approach to representation that does not take causal theories into account. We follow Hullermeier and Waegeman (2021) in using information-theoretic entropy to quantify the worst-case epistemic uncertainty of an algorithm's representation of a data-generating process, under different conceptions of the parameter space. Thus, we are directly quantifying what Huang, Lam, and Zhang (2021) refer to as "model uncertainty," or "the discrepancy between the best model in the hypothesis class and the real data distribution" (p. 1). However, we depart from this work in that we directly consider how different methods for representing data itself (i.e., measuring loop polarities instead of naive parameter values) can directly reduce model uncertainty by simplifying our basic model of the data-generating process. Thus, we offer a new recipe for reducing epistemic uncertainty through the consideration of structural parameters like polarity.
In what follows, we provide the necessary background on neural network architecture, before turning to a discussion of polarity and our main experimental results.
## 3. Neural Network Architecture for Classification
In contemporary machine learning, the most common technique for estimating the value of a binary variable representing the behavior of a complex system is to use a neural network. In demonstrating the value of a polarity approach to data representation, we use neural network classifiers trained on both raw data and polarity data, comparing their performance. However, in principle one could explore the potential of a polarity approach to data representation using any ML model trained by empirical risk minimization (e.g., random forests).
Though the formal details can get extremely complicated, at its core a neural network aims to approximate a function. Let \(\mathcal{X}\) be some vector space representing inputs to that function, and \(Y\) be some space of possible outcome of interest. We assume that there is a "ground truth" function \(f:\mathcal{X}\to Y\) that generates data, in the form of output-input pairs in \(\mathcal{X}\times Y\). To approximate this function, we begin by letting \(l:Y\times Y\rightarrow\mathbb{R}\) be a **loss function** such that \(l(y,y_{obs})\) measures the distance between the guessed output \(y\) and an observed output \(y_{obs}\). The neural network aims to find a function \(f^{*}\) that minimizes **expected risk**\(R(f^{*})\), which is defined as follows:
\[R(f^{*})=\int_{\mathcal{X}\times Y}l(f^{*}(\mathbf{x}),y)\,\mathrm{d}P(\mathbf{x},y), \tag{3.1}\]
where \(P\) is a data-generating probability distribution over \(\mathcal{X}\times Y\). To find this function, the neural network is typically given **training data** consisting of pairs \(\{(\mathbf{x}_{1},y_{1}),\ldots,(\mathbf{x}_{n},y_{n})\}\sim P\). The neural network then aims to use this training data, through an algorithmic training procedure, to find a function that minimizes **expected empirical risk**, which is defined as follows:
\[R_{emp}(f^{*})=\frac{1}{n}\sum_{i=1}^{n}l(f^{*}(\mathbf{x}_{i}),y_{i}). \tag{3.2}\]
By aiming to find the function that minimizes this quantity, the neural network aims to find a function that captures as accurately as possible the true data-generating function \(f\), provided that the training data are representative of all possible data sets that could be generated by \(P\).
Figure 2. Simple neural network with a four-dimensional input layer, a single four-dimensional hidden layer, and one-dimensional output layer.
In practice, neural networks achieve this approximation of often-complex target functions by composing many simpler functions. To illustrate, consider a simple neural network called a **multilayer perceptron**. Such a network consists of layers of neurons. An initial input layer \(l^{0}\) has the same dimensionality as each input vector \(\mathbf{x}\). Subsequent intermediate or "hidden" layers \((l^{1},l^{2},\dots)\) may have more or fewer neurons than the input layer, and the final output layer consists of a single neuron. Each neuron \(n_{i}^{l^{k}}\) in each hidden layer \(l^{k}\) is a function whose value depends on the value of each neuron in the previous layer \((n_{1}^{l^{k-1}},\dots,n_{|l^{k-1}|}^{l^{k-1}})=l^{k-1}\), and a set of parameters \(\theta_{n_{i}^{l^{k}}}^{n_{1}^{l^{k-1}}},\dots,\theta_{n_{i}^{l^{k}}}^{n_{|l^{k-1}|}^{l^{k-1}}}\). The final output layer \(l^{K}\) contains a single neuron, which takes a value in the space \(\{0,1\}\). We interpret the value of the neuron in the output layer as specifying whether or not the input vector belongs in a given class. See Fig. 2 for a representation of a simple neural network. The total number of parameters \(m\) in a multilayer perceptron with \(K\) layers is given by the following equation, as per Roberts, Yaida, and Hanin (2022, p. 40):
\[m=\sum_{k=1}^{K}(|l^{k}|+|l^{k}||l^{k-1}|). \tag{3.3}\]
This illustrates how, in general, the number of parameters in a neural network increases with the number of neurons and layers in that network.
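As a quick illustration of Eq. 3.3, the following sketch computes the parameter count for the two architectures used in Section 5 (an input layer, two hidden layers of the same width, and a single output neuron); the layer widths come from that section, and the function is a direct transcription of the formula.

```python
def mlp_parameter_count(layer_widths):
    # Eq. 3.3: m = sum_k (|l^k| + |l^k| * |l^{k-1}|), summed over all non-input layers
    return sum(w + w * w_prev for w_prev, w in zip(layer_widths[:-1], layer_widths[1:]))

# Raw-data model: 9 input neurons, two 9-neuron hidden layers, 1 output neuron
print(mlp_parameter_count([9, 9, 9, 1]))   # 190
# Polarity model: 6 input neurons, two 6-neuron hidden layers, 1 output neuron
print(mlp_parameter_count([6, 6, 6, 1]))   # 91
```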
Since the value of each neuron is determined by both the neurons in the previous layer and the parameter vector \(\theta\), for a neural network to accurately approximate some function, the parameters of that function must be tuned so that the output of the neural network, given some input vector, tends to match the output of the target function, given the same vector. To tune parameters in this way, neural networks are fed training data consisting of input-output pairs \((\mathbf{x},y)\). Neural network training algorithms then aim to find parameters that would enable accurate prediction of the training data, using any number of optimization procedures (e.g., gradient descent algorithms). The details of these training algorithms are ultimately tangential to our arguments here; the important point is that in order to learn parameters that enable accurate classification, a neural network must be provided training data. Thus, very early in the ML development pipeline, practitioners have to make choices about the data that they use to train a neural network. The outputs of the same DGP can be represented in many different ways, and choices with regard to these data representations can make a difference to the success of training different neural networks. In what follows, we consider a method of data representation from the complex systems literature (i.e., the "polarity" framework for representing the output of a data-generating process), and show how training a neural network on this data representation improves performance on data generated from the same causal structure, but with parameters sampled from very different distributions than those used to generate the parameter values for the SIR model used to generate the training data. To our knowledge, ours is the first attempt to use loop-polarity data to train a neural network to classify the behavior of systems. By demonstrating the improved performance on OOD classification for neural networks trained on this data in a computational experiment, we invite further research on the part of the ML community aimed at using similar data to train predictive algorithms in a variety of applications.
## 4. The Polarity Framework
We turn now to the polarity framework for understanding system behavior. Our approach here takes its inspiration from contributions to the system dynamics literature due to Richardson (1986) and Hayward and Boswell (2014).3 Within this framework, different possible behaviors of a system over time are represented by stochastic processes of the form \(X:T\times\Omega\rightarrow\mathbb{R}\), which are called **levels**. We assume here that levels are at least twice differentiable. Each level can be represented as part of a **feedback loop**\((X,\dot{X}=\frac{\partial X}{\partial t})\) consisting of the level and its **inflow rate**, or rate of change over time. The **polarity** of a feedback loop is then given by the equation \(\texttt{sign}(\frac{\partial\dot{X}}{\partial X})\), i.e., the sign of the rate of change in the rate of change in \(X\) as \(X\) changes. Thus, if the rate of change of a level increases as the overall measure of that behavior increases, then the corresponding feedback loop is said to have positive polarity. If the rate decreases as the overall measure increases, then the loop is said to have negative polarity. The inflow rate of a given level can be represented as determined by the inflow rates of other levels within the system, so that for a given level \(X_{0}\), \(\frac{\partial X_{0}}{\partial t}=\varphi(X_{1},\ldots,X_{n})\), where \(\varphi\) is a function and \(\{X_{1},\ldots,X_{n}\}\) is a set of other levels within the system. This is another way of saying that the dynamics of the system can be represented using a set of differential equations.
Footnote 3: To be clear, we do not intend here to make an intervention in the system dynamics literature. We are only borrowing the notion of polarity, as it is developed in that literature, to advance a new proposal for data representation when using AI/ML approaches to classify the behavior of complex systems.
In many contexts, determining the polarity of the loop \(\left(I(t),\frac{dI(t)}{dt}\right)\) over a given time frame is crucial to understanding the overall behavior of an epidemic (Centers for Disease Control and Prevention, 2021). This quantity tells us whether, as infections increase, the _rate of change in infections is also increasing_ (indicating exponential growth and an urgent need to martial resources), or whether, as infections increase, the _rate of change in infections is decreasing_, indicating that the epidemic is nearing its peak. Thus, we can gain a crucial understanding of system behavior by learning which summands of this system are dominant at which times when it comes to determining the overall behavior of the loop \(\left(I(t),\frac{dI(t)}{dt}\right)\).
Following Hayward and Boswell (2014, p. 41-3), we illustrate the polarity framework using the case study of a simple SIR (Susceptible, Infected, Recovered) model of an epidemic with vital dynamics and constant population (Kermack and McKendrick, 1927). Note that our choice of model here is for illustrative purposes, and that one could explore applying our approach on other epidemic models, such as the Susceptible-Infected-Susceptible (SIS) model. The model consists of
five parameters and three first-order differential equations, which are as follows:
\[\Lambda(t) :=\text{birth rate} \tag{4.1}\]
\[\mu(t) :=\text{death rate} \tag{4.2}\]
\[\gamma(t) :=\text{recovery rate} \tag{4.3}\]
\[\beta(t) :=\text{average number of interactions with other people per time-step} \tag{4.4}\]
\[N :=\text{population} \tag{4.5}\]
\[\dot{S}(t) =\Lambda(t)N-\mu(t)S(t)-\frac{\beta(t)I(t)S(t)}{N} \tag{4.6}\]
\[\dot{I}(t) =\frac{\beta(t)I(t)S(t)}{N}-\gamma(t)I(t)-\mu(t)I(t) \tag{4.7}\]
\[\dot{R}(t) =\gamma(t)I(t)-\mu(t)R(t) \tag{4.8}\]
Apart from the population \(N\), which does not vary with time, all other parameters and functions herein are levels, as they are stochastic processes representing quantities of interest in the model. At any given time \(t\) in the course of an epidemic, the rates of change in the number of susceptible, infected and recovered members of the population is determined by the solution to these differential equations. At a qualitative level, we can see that the rate of change in the number of susceptible people is a function of the birth rate, the number of susceptible people who die during a given time step, and the number of interactions between infected and susceptible people as a proportion of the total population. The rate of change in the number of infected people is a function of the number of interactions between infected and susceptible people as a proportion of the total population, the number of infected people who recover, and the number of infected people who die. The rate of change in the number of recovered people is a function of the number of infected people who recover and the number of recovered people who die.
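To make the data-generating process concrete, here is a minimal forward-Euler sketch of Eqs. 4.6-4.8 with a unit time step. The default parameter sampler follows the training-data distributions given later in Section 5 (Eqs. 5.4-5.7), while the initial conditions and the unit time step are illustrative assumptions rather than the exact settings of our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_sir(N=1_000_000, I0=10, steps=100, sample_params=None):
    """Forward-Euler integration of Eqs. 4.6-4.8 with per-time-step stochastic parameters."""
    if sample_params is None:
        # Default sampler: (birth rate, death rate, recovery rate, contact rate), as in Eqs. 5.4-5.7
        sample_params = lambda: (rng.beta(2, (2 - .0002) / .0001),
                                 rng.beta(2, (2 - .0002) / .0001),
                                 rng.beta(2, (2 - .2) / .1),
                                 rng.poisson(5))
    S, I, R = N - I0, I0, 0
    trajectory = [(S, I, R)]
    for _ in range(steps - 1):
        Lam, mu, gam, beta = sample_params()
        dS = Lam * N - mu * S - beta * I * S / N
        dI = beta * I * S / N - gam * I - mu * I
        dR = gam * I - mu * R
        S, I, R = S + dS, I + dI, R + dR
        trajectory.append((S, I, R))
    return np.array(trajectory)

traj = simulate_sir()
print(traj[:3])  # first three time steps of (S, I, R)
```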
In keeping with the polarity approach to understanding system behavior, the data-generating process of the epidemic can be decomposed into each summand of the governing ODEs shown in Eqs. 4.6-4.8 above. The integral of each summand up to a given time step can then be represented as part of its own loop. These loops are represented as follows:
\[\mathcal{L}_{1} :=\left(\int_{0}^{t}\Lambda(t^{\prime})N\mathrm{d}t^{\prime},\Lambda(t)N\right)\text{ (the birth rate)} \tag{4.9}\]
\[\mathcal{L}_{2} :=\left(\int_{0}^{t}\mu(t^{\prime})S(t^{\prime})\mathrm{d}t^{\prime},\mu(t)S(t)\right)\text{ (the number of susceptible people who die)} \tag{4.10}\]
\[\mathcal{L}_{3} :=\left(\int_{0}^{t}\frac{\beta(t^{\prime})I(t^{\prime})S(t^{\prime})}{N}\mathrm{d}t^{\prime},\frac{\beta(t)I(t)S(t)}{N}\right)\text{ (infected-susceptible interactions over $N$)} \tag{4.11}\]
\[\mathcal{L}_{4} :=\left(\int_{0}^{t}\gamma(t^{\prime})I(t^{\prime})\mathrm{d}t^{\prime},\gamma(t)I(t)\right)\text{ (the number of infected people who recover)} \tag{4.12}\]
\[\mathcal{L}_{5} :=\left(\int_{0}^{t}\mu(t^{\prime})I(t^{\prime})\mathrm{d}t^{\prime},\mu(t)I(t)\right)\text{ (the number of infected people who die)} \tag{4.13}\]
\[\mathcal{L}_{6} :=\left(\int_{0}^{t}\mu(t^{\prime})R(t^{\prime})\mathrm{d}t^{\prime},\mu(t)R(t)\right)\text{ (the number of recovered people who die)} \tag{4.14}\]
One key insight of the dynamical systems literature is that in many instances, the polarities of the various component functions of a data-generating process tend to determine the overall behavior of the system composed of those functions. So, in the SIR model presented here, the polarity of the level of infections may be driven by the polarities of the loops listed above. For instance, if the total number of infected people who have recovered from the epidemic is plateauing (a fact that might be established by the reliable expert testimony of, e.g., a hospital administrator), then this binary judgment may be a powerful tool for estimating the overall polarity of the number of infections. The question we consider here is whether this insight about complex system behavior can be exploited in order to represent data in a way that enables more efficient neural network training.
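As a minimal symbolic check of the \(\mathtt{sign}(\frac{\partial\dot{X}}{\partial X})\) definition from Section 4 applied to the infection level, the sketch below differentiates Eq. 4.7 with respect to \(I\) while holding the other levels fixed. This is an illustrative reading of the definition, not the finite-difference estimator used in Section 5 (Eq. 5.1).

```python
import sympy as sp

S, I, N, beta, gamma, mu = sp.symbols("S I N beta gamma mu", positive=True)

# Eq. 4.7: the inflow rate of the infected level
I_dot = beta * I * S / N - gamma * I - mu * I

# Loop polarity in the sense of Section 4: sign of d(I_dot)/dI, holding the other levels fixed
dIdot_dI = sp.simplify(sp.diff(I_dot, I))
print(dIdot_dI)   # beta*S/N - gamma - mu
# Positive polarity (runaway growth) whenever beta*S/N > gamma + mu, negative otherwise.
```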
In the following case study, we consider two methods for training a neural network to classify the polarity of the loop \(\Big{(}I(t),\frac{dI(t)}{dt}\Big{)}\) over a three-time-step interval from simulated data generated by an epidemic system. In particular, we will consider two methods of training a neural network to make this prediction. The first method uses "raw data" produced by the epidemic (i.e., the number of susceptible, infected, and recovered people over a certain time interval). The second method uses measurements of the polarity of different summands of the system. As we will show, the second method can have significant advantages over the first method when it comes to making accurate predictions from OOD data.
## 5. Case Study
Suppose that we are collecting time-series data produced by an epidemic. How should we collect and represent that data in order to input it into a neural network so that the neural network might accurately predict the polarity of the crucial loop \(\Big{(}I(t),\frac{dI(t)}{dt}\Big{)}\)? As mentioned above, there are two methods we might use. First, one might use what we call the "raw data" method: for any three-step time interval \([t:t+2]\subset\mathbb{N}\), record the numbers of susceptible, infected, and recovered individuals at each time step. This results in a nine-entry input vector \(\mathbf{x}^{RD}\). The predicted output \(y\) is the polarity of the loop \(\Big{(}I(t),\frac{dI(t)}{dt}\Big{)}\), which is given by the equation:
\[\mathtt{polarity}\Big{(}I,\frac{dI}{dt}\Big{)}=\mathtt{sign}\Big{(}\frac{[I(t +2)-I(t+1)]-[I(t+1)-I(t)]}{[I(t+2)-I(t)]}\Big{)}. \tag{5.1}\]
Thus, we can train a neural network by feeding it input-output pairs \(\{(\mathbf{x}_{1}^{RD},y_{1}),\dots,(\mathbf{x}_{n}^{RD},y_{n})\}\). Alternatively, we can collect and represent the data generated by the epidemic by obtaining, for any three-step time interval \([t:t+2]\subset\mathbb{N}\), a vector consisting of the polarities of the loops corresponding to the six summands that compose the overall data generating process. We will call this the "polarity" method for collecting and representing data generated by an epidemic. Let each loop \(\mathcal{L}_{i}\) be represented as a pair \(\mathcal{L}_{i}=\Big{(}l_{i},\frac{\partial l_{i}}{\partial t}\Big{)}\). For each three-step time interval, the data collected is represented as the following six-entry vector:
\[\mathbf{x}^{P}=\left[\mathtt{polarity}\left(\mathcal{L}_{1}\right),\dots, \mathtt{polarity}\left(\mathcal{L}_{6}\right)\right], \tag{5.2}\]
where for each loop \(\mathcal{L}_{i}\),
\[\mathtt{polarity}\Big{(}l_{i},\frac{dl_{i}}{dt}\Big{)}=\mathtt{sign}\Big{(} \frac{[l_{i}(t+2)-l_{i}(t+1)]-[l_{i}(t+1)-l_{i}(t)]}{[l_{i}(t+2)-l_{i}(t)]} \Big{)}. \tag{5.3}\]
Thus, if we choose the polarity method for representing data generated by an epidemic, we can train a neural network to predict the polarity of the loop representing the total number of infected people in the population by feeding it input-output pairs \(\{(\mathbf{x}_{1}^{P},y_{1}),\ldots,(\mathbf{x}_{n}^{P},y_{n})\}\).
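The discrete polarity measurements in Eqs. 5.1-5.3 amount to the sign of a second difference divided by a first difference over each three-step window; a minimal sketch is below, where `loop_series` stands for the time series of any level (the number of infected people for the target \(y\), or a loop summand \(l_{i}\) for the features \(\mathbf{x}^{P}\)).

```python
import numpy as np

def window_polarity(loop_series, t):
    """Eq. 5.1 / 5.3: polarity of a level over the three-step window [t, t+2].
    Assumes the level is not exactly flat over the window (nonzero first difference)."""
    l = np.asarray(loop_series, dtype=float)
    second_diff = (l[t + 2] - l[t + 1]) - (l[t + 1] - l[t])
    first_diff = l[t + 2] - l[t]
    return np.sign(second_diff / first_diff)

def polarity_features(loop_series_list, t):
    """x^P for window [t, t+2]: the polarities of the six loop summands L_1, ..., L_6."""
    return np.array([window_polarity(l, t) for l in loop_series_list])
```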
For our purposes, a crucial difference between these two methods for collecting and representing data is that the polarity method incorporates knowledge of the structure of the data-generating process in a way that the raw data method does not. In order to measure the polarity of each individual loop, an agent must first know the summands that together compose the data-generating process, and must also know that the polarity of these summands can be a crucial predictor of the overall behavior of the system, since in many complex systems, the behavior of the overall system is determined by the small subset of loops that are dominant at a given time. Moreover, because each vector \(\mathbf{x}^{P}\) is of a lower dimensionality than each vector \(\mathbf{x}^{RD}\), we are able to use a neural network with fewer neurons to generate predictions on the basis of \(\mathbf{x}^{P}\) as compared to \(\mathbf{x}^{RD}\).
Do these contrasts between the polarity framework for representing data and the raw data framework for representing data coincide with a difference in performance on OOD data between neural networks trained using raw data and neural networks trained using polarity data? We present results of a computational experiment showing that they do.4 We begin by using the ODE model defined in Eqs. 4.1-4.8 to generate synthetic data from 100 epidemics, each with 100 time steps. The population size is fixed at \(N=\)1,000,000, and at each time step, the relevant parameters are generated by sampling from the following distributions:
Footnote 4: For code used to run this computational experiment, see [https://osf.io/x2hku/?view_only=d417a697df6e4390802c371ac373b954](https://osf.io/x2hku/?view_only=d417a697df6e4390802c371ac373b954).
\[\Lambda(t) \sim\text{Beta}(2,\frac{2-.0002}{.0001})\text{ (i.e., a Beta distribution with a mean of .0001)} \tag{5.4}\]
\[\mu(t) \sim\text{Beta}(2,\frac{2-.0002}{.0001})\text{ (i.e., a Beta distribution with a mean of .0001)} \tag{5.5}\]
\[\gamma(t) \sim\text{Beta}(2,\frac{2-.2}{.1})\text{ (i.e., a Beta distribution with a mean of .1)} \tag{5.6}\]
\[\beta(t) \sim\text{Poisson}(5)\text{ (i.e., a Poisson distribution with a mean of 5)} \tag{5.7}\]
Thus, the parameters of the SIR model are sampled from the same distribution, but take a different value, at each time step of the epidemic. This simulation process allows us to collect 9,800 synthetic data samples of both the form \(\mathbf{x}^{RD}\) and the form \(\mathbf{x}^{P}\) (i.e., both the raw data and the polarity data), while also injecting some realistic aleatory uncertainty into the data-generating process. We also generate 9,800 outcome variables representing the polarity of the number of infected people over each three-time-step interval.
In the next step of the simulation process, we train neural network models on the respective input-output pairs \(\{(\mathbf{x}_{1}^{RD},y_{1}),\ldots,(\mathbf{x}_{9800}^{RD},y_{9800})\}\) and \(\{(\mathbf{x}_{1}^{P},y_{1}),\ldots,(\mathbf{x}_{9800}^{P},y_{9800})\}\), generating the predictive models \(\mathcal{M}^{RD}\) (the raw data model) and \(\mathcal{M}^{P}\) (the polarity data model). Importantly, because \(\mathbf{x}^{RD}\) is a nine-dimensional vector and \(\mathbf{x}^{P}\) is a six-dimensional vector, \(\mathcal{M}^{RD}\) has nine neurons in its input layer while \(\mathcal{M}^{P}\) has only six input layer neurons. Both neural networks share an architecture consisting of an input layer, two hidden layers with the same number of neurons as the input layer, and a single-neuron output layer. This means that \(\mathcal{M}^{RD}\) has a larger parameter space than \(\mathcal{M}^{P}\).
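One possible realization of this training step is sketched below using scikit-learn's `MLPClassifier`; this is an illustrative stand-in for the architecture described above (the actual implementation is in the repository cited in footnote 4). The arrays `X_rd`, `X_p`, and `y` are assumed to hold the 9,800 training samples in the raw-data and polarity representations, and the file names are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Assumed to be built from the simulated epidemics described above (hypothetical file names):
# X_rd: (9800, 9) raw-data windows, X_p: (9800, 6) loop polarities, y: (9800,) target polarities
X_rd = np.load("x_raw_data.npy")
X_p = np.load("x_polarity.npy")
y = np.load("y_polarity.npy")

# Two hidden layers, each as wide as the input layer, and a single output neuron
model_rd = MLPClassifier(hidden_layer_sizes=(9, 9), max_iter=2000, random_state=0).fit(X_rd, y)
model_p = MLPClassifier(hidden_layer_sizes=(6, 6), max_iter=2000, random_state=0).fit(X_p, y)

print("training accuracy (raw data):", model_rd.score(X_rd, y))
print("training accuracy (polarity):", model_p.score(X_p, y))
```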
Next, we test the performance of the neural networks \(\mathcal{M}^{RD}\) and \(\mathcal{M}^{P}\) on simulated OOD data. To generate the OOD data, we first sample the following meta-parameters:
\[\bar{\Lambda} \sim\text{Beta}(2,\frac{2-.02}{.01}) \tag{5.8}\]
\[\bar{\mu} \sim\text{Beta}(2,\frac{2-.02}{.01}) \tag{5.9}\]
\[\bar{\gamma} \sim\text{Beta}(2,\frac{2-.02}{.01}) \tag{5.10}\]
\[\bar{\beta} \sim\text{Poisson}(15) \tag{5.11}\]
We then use each of these meta-parameters to generate 100 epidemics, each with 100 time steps, where at each time step parameters are generated by sampling from the following distributions:
\[\hat{\Lambda}(t) \sim\text{Beta}(2,\frac{2-2\bar{\Lambda}}{\bar{\Lambda}})\text{ (i.e., a Beta distribution with a mean of }\bar{\Lambda}) \tag{5.12}\]
\[\hat{\mu}(t) \sim\text{Beta}(2,\frac{2-2\bar{\mu}}{\bar{\mu}})\text{ (i.e., a Beta distribution with a mean of }\bar{\mu}) \tag{5.13}\]
\[\hat{\gamma}(t) \sim\text{Beta}(2,\frac{2-2\bar{\gamma}}{\bar{\gamma}})\text{ (i.e., a Beta distribution with a mean of }\bar{\gamma}) \tag{5.14}\]
\[\hat{\beta}(t) \sim\text{Poisson}(\bar{\beta})\text{ (i.e., a Poisson distribution with a mean of }\bar{\beta}) \tag{5.15}\]
This sampling procedure is very likely to result in epidemics generated by sampling from distributions with very different means from those used to generate the training data. We repeat this process, beginning with the sampling of meta-parameters, twenty times, to produce 196,000 data points from 1,960 simulated epidemics. Since we store this OOD data in both the raw data and polarity formats, this amounts to two sets of input-output pairs of the form \(\{(\hat{\mathbf{x}}_{1}^{RD},\hat{y}_{1}),\ldots,(\hat{\mathbf{x}}_{196000}^{RD},\hat{y}_{196000})\}\) and \(\{(\hat{\mathbf{x}}_{1}^{P},\hat{y}_{1}),\ldots,(\hat{\mathbf{x}}_{196000}^{P},\hat{y}_{196000})\}\).
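The two-level sampling scheme in Eqs. 5.8-5.15 can be written compactly as below; the sketch reuses the hypothetical `simulate_sir` helper from the earlier SIR sketch and is illustrative rather than the exact experimental script.

```python
import numpy as np

rng = np.random.default_rng(1)

def beta_with_mean(m):
    # Beta(2, (2 - 2m) / m) has mean m, matching the parameterization used in the text
    return rng.beta(2, (2 - 2 * m) / m)

def sample_meta_parameters():
    # Eqs. 5.8-5.11: one draw of the OOD meta-parameters
    return beta_with_mean(.01), beta_with_mean(.01), beta_with_mean(.01), rng.poisson(15)

def ood_parameter_sampler(Lam_bar, mu_bar, gam_bar, beta_bar):
    # Eqs. 5.12-5.15: per-time-step parameters for one OOD epidemic
    return lambda: (beta_with_mean(Lam_bar), beta_with_mean(mu_bar),
                    beta_with_mean(gam_bar), rng.poisson(beta_bar))

# Twenty meta-parameter draws, 100 epidemics of 100 time steps each (as described above)
ood_trajectories = []
for _ in range(20):
    meta = sample_meta_parameters()
    for _ in range(100):
        ood_trajectories.append(simulate_sir(sample_params=ood_parameter_sampler(*meta)))
```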
It remains to compare the accuracy of the models \(\mathcal{M}^{RD}\) and \(\mathcal{M}^{P}\) on the OOD data. We calculate the accuracy of a pandemic that begins at time interval \(\alpha\) and ends at time interval \(\alpha+97\) by obtaining the proportion of time steps at which the predicted and true loop polarities match. Fig. 3 shows the mean accuracy across all 1,960 simulated OOD pandemics of both \(\mathcal{M}^{RD}\) and \(\mathcal{M}^{P}\). See Fig. 4 for a graphical depiction of the steps involved in the simulation-based experiment testing the performance of a loop-polarity based approach to predictive inference on out-of-sample data.
Figure 3. Mean accuracy across 1,960 simulated OOD pandemics of both a neural network trained on raw data and a neural network trained on polarity data.
The model \(\mathcal{M}^{RD}\) achieves an average accuracy of \(.497\) across all pandemics, performing slightly better than chance given that approximately \(48.4\%\) of time-step intervals have a positive-polarity loop for the level of infections. By contrast, \(\mathcal{M}^{P}\) achieves an average accuracy of \(.745\), performing much better than chance. This difference in means is highly statistically significant (\(t=200.69\), \(p<.001\)). Thus, a neural network that takes as input data representing the polarity of loops containing various summands in the data-generating process is able to significantly outperform a neural network trained on the raw data produced by the same data-generating process, once each neural network is asked to make predictions on data that is not in its training set, and which is generated via a process with the same structure, but different statistical distributions over key parameters. We note that the performance of both neural networks on the OOD data is closely tied to their training accuracy; training accuracy is much greater for \(\mathcal{M}^{P}\) as compared to \(\mathcal{M}^{RD}\). This suggests that the improved performance is due largely to the patterns in the loop polarity data being more easily learned than the patterns in the raw data. This is striking, given that the loop polarity of the rate of infections can be calculated analytically from raw data, but not from the loop polarity data. Moreover, the polarity data allows for better neural network performance even though the neural network trained on this data set has a smaller parameter space than the neural network trained on the raw data. As will be discussed more in the following sections, we take these results to show the value of human-based understanding of data-generating structure in generating accurate predictive inferences on OOD data using neural networks.
## 6. Reducing Epistemic Uncertainty About a Data-Generating Process
Figure 4. Flow chart depicting the simulation experiment described in this section.

In this section, we discuss and formalize the manner in which a polarity approach to modelling system behavior leads to a reduction in epistemic uncertainty with respect to the nature of a data-generating system. By 'epistemic uncertainty' here, we mean the degree to which the distribution, which represents the dynamics of a data-generating process and is used to make predictions about
the outputs and behaviors of that process, reflects _unknown facts_ about the data-generating process. This usage of the term is first found in Fox and Ulkumen (2011), who distinguish epistemic uncertainty from 'aleatory uncertainty,' which they define as uncertainty about the outputs and behaviors of a data-generating process that is attributable to inherent stochasticity in the dynamics of the data-generating process itself. In the philosophy of probability, one finds a similar distinction between _credences_, which are probability distributions representing an agent's subjective degree of belief in the truth of propositions or events, and _objective chance distributions_, which are probability distributions that represent stochasticity that is inherently present in the world (for example, the probability distributions used in quantum mechanics are objective chances; see Hajek, 2019 for a broad overview of the credence-chance distinction). In other words, credences represent epistemic uncertainty, while objective chances represent aleatory uncertainty.
Recent ML work has used information theoretic quantities, primarily Shannon entropy (Shannon, 1948), as a measure of epistemic uncertainty (Depeweg et al., 2018; Hullermeier and Waegeman, 2021; Lahoti, Gummadi, and Weikum, 2022; Abdar et al., 2021). Our approach continues in this tradition. However, whereas previous work on this issue has quantified uncertainty, including epistemic uncertainty, by measuring the entropy of a probability distribution over a random variable according to a distribution derived from a particular predictive model architecture (e.g., a neural network), our approach is highly general. We begin from the supposition that any epistemic agent, be it a human agent or a predictive model, is such that their degrees of belief in a logically exhaustive set of propositions can be represented as a credal distribution over a random variable. This allows us to measure the uncertainty, including the epistemic uncertainty, inherent in a wide variety of methods for making predictions, at a range of different points in the ML-development pipeline.
To unpack this, we begin by letting \((\Omega,\mathcal{A},cr)\) be a probability space in which \(\Omega\) represents a set of possible worlds, \(\mathcal{A}\) is an algebra on \(\Omega\), and \(cr\) is a probability distribution (i.e., the credal distribution) on \(\mathcal{A}\) such that for any \(A\in\mathcal{A}\), \(cr(A)\) represents some agent or group of agents' degree of belief that the actual world is in \(A\). Let \(T\) be an ordered set of times. Let \(X:T\times\Omega\to\{0,1\}\) be a binary stochastic process that is measurable with respect to the probability space \((\Omega,\mathcal{A},cr)\). Following Hullermeier and Waegeman (2021), we use the entropy of the credal distribution, as given by the following equation, to measure the _overall_ uncertainty expressed in the credal distribution with respect to the value of \(X\) at time \(t\):
\[H[cr(X(t))]=-\sum_{x=0}^{1}cr(X(t)=x)\log_{2}cr(X(t)=x). \tag{6.1}\]
One can think of entropy as measuring the amount of informational "work" that is needed to move the agent's credal distribution to the point where it assigns all probability to the true value of \(X\) at time \(t\). The greater the entropy, the more work is required to move the agent towards this optimal epistemic state, and so the greater the agent's epistemic uncertainty.
The explicitly aleatory portion of this uncertainty can be represented as follows. At any time \(t\), let \(\iota(t)\) be a variable whose values denote all possible complete informational states that the agent could be in at time \(t\). That is, fixing the value of \(\iota(t)\) tells us everything that the agent could possibly learn about the system of interest. This variable is measurable with respect to \((\Omega,\mathcal{A},cr)\) for all times \(t\). The amount of aleatory uncertainty represented with respect to \(X\) at time \(t\) in the
credence function \(cr\) is the expected entropy of the credal distribution over \(X\) across all possible values of \(\iota(t)\). This quantity is given by the following equation:
\[\mathbb{E}_{cr(\iota(t))}[H[cr(X(t))]]\\ =\ \int_{\operatorname{range}(\iota(t))}cr(\iota(t)=i)\Big{[}- \sum_{x=0}^{1}cr(X(t)=x|\iota(t)=i)\log_{2}cr(X(t)=x|\iota(t)=i)\Big{]}\,di. \tag{6.2}\]
The amount of epistemic uncertainty reflected in the credal distribution is then measured by taking the difference of the overall uncertainty and the aleatory uncertainty, i.e., \(H[cr(X(t))]-\mathbb{E}_{cr(\iota(t))}[H[cr(X(t))]]\).
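As a worked illustration of this decomposition, the sketch below uses hypothetical toy numbers (the informational states `i0`, `i1` and their credences are ours, not the paper's) to compute overall, aleatory, and epistemic uncertainty for a binary \(X(t)\) with a discrete \(\iota(t)\).

```python
import math

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Hypothetical credences over two informational states and conditional credences.
cr_iota = {"i0": 0.6, "i1": 0.4}          # cr(iota(t) = i)
cr_x1_given = {"i0": 0.9, "i1": 0.2}      # cr(X(t) = 1 | iota(t) = i)

# Overall uncertainty uses the marginal cr(X=1) = sum_i cr(i) * cr(X=1 | i).
p_x1 = sum(cr_iota[i] * cr_x1_given[i] for i in cr_iota)
overall = entropy([p_x1, 1 - p_x1])

# Aleatory uncertainty: expected conditional entropy across informational states.
aleatory = sum(cr_iota[i] * entropy([cr_x1_given[i], 1 - cr_x1_given[i]])
               for i in cr_iota)

epistemic = overall - aleatory            # the difference defined in the text
print(overall, aleatory, epistemic)
```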
In practice, when multiple algorithms make predictions about the output of a single data-generating process, the amount of aleatory uncertainty is determined not by the particular algorithm being used but by the nature of the data-generating process itself. This is because aleatory uncertainty is a measure of the uncertainty that an algorithm would have about the output of the process if that algorithm had _perfect information about that process_. In addition, it is rare that we are able to get a fully accurate measure of aleatory uncertainty, since doing so would require the algorithm or agent to specify every possible informational state that one could be in about the system. Thus, in comparing multiple agents/algorithms making predictions about the same data-generating process, we will use overall uncertainty \(H[cr(X(t))]\) as a proxy for epistemic uncertainty.
Encoded in the formal definitions above is the idea that entropy (and, by extension, epistemic uncertainty) is defined in relation to a specific random variable, which in our case is a partition defined on the set of possible states of the data-generating system at a given time. In the case study above, the "raw data" approach to representing the state of the system over a three-time-step interval and the "polarity" approach to representing the epidemic system over the same interval represent two different partitions on the set of possible states of the system.
Beginning with the raw data approach, for any given time interval \([t:t+2]\), we can define a random variable \(RD_{[t:t+2]}:\Omega\to\mathbb{R}_{+}^{9}\), such that for each state of the epidemic \(\omega\), \(RD_{[t:t+2]}(\omega)\) is a nine-entry vector representing the number of susceptible, infected, and recovered people at each of the three relevant time steps. An agent's credences over the range of a variable \(RD_{[t:t+2]}\) represent their uncertainty about these three real-valued quantities. In virtue of how it partitions the space of possible states of the epidemic, the variable \(RD_{[t:t+2]}\) represents an agent's _understanding_ of the system, as it picks out the salient possible properties of the system over a given time interval, _from the perspective of that agent_. If we let the agent's credence function be a nine-dimensional multivariate normal distribution with covariance matrix \(\Sigma=\operatorname{diag}(\sigma)\), then the agent's entropy (and therefore, their epistemic uncertainty) with respect to the state of the system over the three-time-step interval is given by the following equation:
\[H(cr(RD_{[t:t+2]}))=\frac{1}{2}\log_{2}(\sigma^{9})+\frac{9}{2}\log_{2}(1+2 \pi). \tag{6.3}\]
To be clear, this entropy represents the agent's uncertainty over possible states of the _system, when it is given a raw data representation_. This is distinct from the agent's uncertainty over the _outcome_ of the polarity associated with the level of infections, which we do not represent here. Note that the case in which the covariance matrix of the agent's credal distribution is a diagonal matrix is a
kind of worst-case scenario from an epistemic uncertainty perspective. It represents an agent whose epistemic state is such that they know nothing about any potential correlations between different data points, such that they are starting from a place of total ignorance.
By contrast, consider an agent who takes a "polarity" perspective on the data-generating epidemic described above. For such an agent, the objects of their credences about the state of the epidemic over a three-time-step interval are the values of a random variable \(LP_{[t:t+2]}:\Omega\rightarrow\{0,1\}^{6}\). The values of this variable represent possible polarities for each of the six loops that characterize the overall data-generating process. This amounts to a total of 64 possible states that such an agent might consider the system to be in, as compared to the infinite number of states that the system might be in from the raw data perspective. If an agent who takes a polarity perspective on the data-generating process knows _nothing else_ about the underlying state of the system, such that they think the system is just as likely to be in one state as any other, then their credence that the system will be in each possible state is \(\frac{1}{64}\). Thus, the entropy of their credal distribution is given by the equation:
\[H(cr(LP_{[t:t+2]}))=-\log_{2}\left(\frac{1}{64}\right)=6. \tag{6.4}\]
It follows that \(H(cr(RD_{[t:t+2]}))>H(cr(LP_{[t:t+2]}))\) just in case the intra-variable variance \(\sigma\) is greater than approximately 0.346. Since \(\sigma\) is non-negative and unbounded, this means that for most values of \(\sigma\), the worst-case epistemic uncertainty of an agent with a raw data perspective on the epidemic system is greater than the worst-case epistemic uncertainty of an agent who takes a polarity perspective on the same system.
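Taking equations (6.3) and (6.4) at face value, the crossover point can be checked numerically; the short sketch below is our own illustration of that arithmetic.

```python
import math

def h_raw_data(sigma: float) -> float:
    # Equation (6.3) as written in the text, with Sigma = diag(sigma).
    return 0.5 * math.log2(sigma ** 9) + 4.5 * math.log2(1 + 2 * math.pi)

h_polarity = math.log2(64)   # equation (6.4): uniform credence over 2**6 states

# The crossover variance where the two worst-case entropies coincide.
sigma_star = 2 ** ((h_polarity - 4.5 * math.log2(1 + 2 * math.pi)) / 4.5)
print(round(sigma_star, 3))          # ~0.346
print(h_raw_data(0.5) > h_polarity)  # True: raw-data uncertainty dominates
```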
To understand this result on an intuitive level, consider that the agent with a raw data perspective on the data-generating process must consider vastly more possible states of the system (namely, the set of all nine-tuples of positive reals). By contrast, the agent who takes a polarity perspective on the system considers a much smaller space of possibilities (namely, the set of all six-tuples of binary loop polarities). As such, it is not surprising that in most cases, the agent with a polarity perspective has a lower degree of epistemic uncertainty about the possible states of the system _before any data is even collected_. The lower _ex ante_ epistemic uncertainty of an agent with a polarity perspective on the data reflects the fact that they have a level of domain knowledge that allows them to identify the polarity of the different summands of the system as the likely drivers of the overall polarity of the number of infected people, even before any data is collected. For instance, an epidemiologist might be aware that the polarity of the total number of people _recovering_ from infection is typically a driver of the polarity of the total number of infections. This binary quantity may be easier to estimate from existing data, or from the expert testimony of (e.g.) healthcare providers, than it is to accurately calculate the polarity of the infection level from raw data. This lower degree of _ex ante_ epistemic uncertainty associated with this level of domain knowledge, we argue, is at least partially responsible for the superior performance of a neural network trained on a data representation that reflects this greater degree of domain knowledge and lower epistemic uncertainty.
## 7. Discussion
This paper has considered whether theory-informed methods of data-representation can improve neural network performance on OOD data. We considered a specific method of theory-informed data representation (i.e., the polarity method) for complex systems with feedback loops, like the SIR causal theory for an epidemic. We find that a data representation which reflects a lower degree of epistemic uncertainty about the data-generating process, and which isolates a small number of key behavior parameters, enables significantly greater performance on an OOD classification task than a more naive approach to representing the same data. We take this enhanced performance on OOD data to show that the polarity method enables us to represent data in a way that captures crucial structural features of the data-generating process, at a relatively coarse-grained level of description.
We take this finding to have broader implications for the optimal functioning of the machine learning development pipeline. At a high level of abstraction, one workflow in developing ML applications involves: 1) observing data from a system, 2) devising a mathematical representation of that data, 3) training a neural network model on that data, 4) using that data to make OOD classifications. Our results above highlight the importance of data representation in the success of this pipeline. Choices we make about how data is represented as it is measured and collected influence the training of neural network classifiers, which in turn influences the success of these models in OOD testing scenarios. Importantly, implementing the polarity scheme for representing data _requires human domain knowledge_. Namely, one must know the summands of the ODEs that represent the dynamics of the system, and know that measuring their polarity is key to inferring the overall polarity of the number of infected people. This is a perspective on the data that requires some background in epidemiology and the modeling of dynamical systems; unlike the raw data representation, it cannot be measured from an entirely naive perspective on the nature of the data-generating process.
The perspective on the nature of dynamical systems that recommends decomposing those systems into summands and measuring the polarity of those summands is not one that is well-represented in existing machine learning practice. The results here indicate that in at least some contexts, this perspective _should_ be represented, as it can enable data representations that allow for more accurate classification of OOD data. To this end, we recommend that ML and AI developers seek to draw on the literature in dynamical systems theory, and receive input - in the form of causal theories - from practitioners of dynamical systems science, and problem domain experts and
Figure 5. A high-level overview of the ML development pipeline and our proposed intervention.
stakeholders throughout the development of the ML/AI pipeline. This is especially true in the data representation stage of that pipeline, where the emphasis is not on designing optimal architectures for function approximation, but instead on how to interpret and optimally represent the outputs of a data-generating process. It is at this earlier (but still deeply important) stage that domain-specific causal theories with reduced epistemic uncertainty about the nature of the data-generating process can be leveraged to enable better OOD performance. Fig. 5 provides a schematic representation of our normative recommendation.
## 8. Conclusion
There are several avenues for future work that build on our findings here. First, we note that the polarity framework is just one example of the many ways in which domain knowledge and expertise can be encoded in high-level, coarse-grained structural representations of data. Future work could seek to reproduce our findings in other frameworks for measuring structural parameters. Second, our findings here concern only classification tasks. In future work, we hope to extend our results to prediction and forecasting tasks, in which a neural network is not asked to learn a mapping between present features of a system over a given fixed interval, but between present and future features of a system. Here too, we would expect to see an advantage of training the neural network using human-centered, causal-theory-informed structural data representations that reflect a lower degree of epistemic uncertainty about the data-generating process in question.
|
2309.09998 | A renewal approach to prove the Four Color Theorem unplugged, Part III:
Diamond routes, canal lines and $Σ$-adjustments | This is the last part of three episodes to demonstrate a renewal approach for
proving the Four Color Theorem without checking by a computer. The first and
the second episodes have subtitles: ``RGB-tilings on maximal planar graphs''
and ``R/G/B Kempe chains in an extremum non-4-colorable MPG,'' where R/G/B
stand for red, green and blue colors to paint on edges and an MPG stands for a
maximal planar graph. We focus on an extremum non-4-colorable MPG $EP$ in the
whole paper. In this part we introduce three tools based on RGB-tilings. They
are diamond routes, normal and generalized canal lines or rings and
$\Sigma$-adjustments. Using these tools, we show a major result of this paper:
no four vertices of degree 5 form a diamond in any extremum $EP$. | Shu-Chung Liu | 2023-09-17T05:19:12Z | http://arxiv.org/abs/2309.09998v1 | # A renewal approach to prove
###### Abstract.
This is the last part of three episodes to demonstrate a renewal approach for proving the Four Color Theorem without checking by a computer. The first and the second episodes have subtitles: "RGB-tilings on maximal planar graphs" and "R/G/B Kempe chains in an extremum non-4-colorable MPG," where R/G/B stand for red, green and blue colors to paint on edges and an MPG stands for a maximal planar graph. We focus on an extremum non-4-colorable MPG \(EP\) in the whole paper. In this part we introduce three tools based on RGB-tilings. They are diamond routes, normal and generalized canal lines or rings and \(\Sigma\)-adjustments. Using these tools, we show a major result of this paper: no four vertices of degree 5 form a diamond in any extremum \(EP\).
Key words and phrases: Four Color Theorem; Kempe chain; edge-coloring; RGB-tiling; diamond route; canal line; \(\Sigma\)-adjustment. 2020 Mathematics Subject Classification: Primary 05C10; 05C15.
## 14. Diamond routes
This section and the next one are independent. We introduce a method to build a green (or red/blue) tiling on an MPG step by step. At the beginning of this introduction, we shall first assume the existence of a green tiling. Given an MPG or a semi-MPG, say \(M\), with a green tiling \(T_{g}:E(M)\rightarrow\{\text{green, black}\}\), we associate every green edge, say \(e_{i}\), with a green \(e_{i}\)-diamond most of the time, but with a green \(e_{i}\)-triangle if \(e_{i}\) is along an outer facet of \(M\).
**Definition 14.1**.: Given \(M\) with a green tiling \(T_{g}\), a _green diamond route_ \(\mathbf{dr}_{g}\) in \(M\) is a sequence \(\mathbf{dr}_{g}:=(e_{1},e_{2},\ldots,e_{k})\) of distinct green edges such that, for every consecutive pair, the \(e_{i}\)- and \(e_{i+1}\)-diamonds (or triangles) share a common black edge.
**Definition 14.2**.: Continue from the previous definition. We may consider \(\vec{\mathbf{dr}}_{g}:=(e_{1}\to e_{2}\rightarrow\ldots\to e_{k})\) as a _directed_ green diamond route in \(M\). In the directed mode, the triangles involved in this particular route are separated into two classes: _out-triangles_ and _in-triangles_.
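For readers who prefer a computational restatement, the following minimal Python sketch (our own illustration, not the author's code) checks Definition 14.1, assuming the MPG or semi-MPG is given as a list of triangle facets and an edge-color map with values 'green' and 'black'.

```python
from itertools import combinations

def diamond_edges(e, triangles):
    """All edges of the (at most two) triangle facets containing edge e."""
    edges = set()
    for tri in triangles:
        if e <= set(tri):
            edges |= {frozenset(pair) for pair in combinations(tri, 2)}
    return edges

def is_green_diamond_route(route, triangles, color):
    """Definition 14.1 check: 'route' lists distinct green edges and the
    diamonds (or boundary triangles) of consecutive edges share a black edge."""
    if len(set(route)) != len(route):
        return False
    if any(color[e] != 'green' for e in route):
        return False
    for e1, e2 in zip(route, route[1:]):
        shared = diamond_edges(e1, triangles) & diamond_edges(e2, triangles)
        if not any(color[s] == 'black' for s in shared):
            return False
    return True

# Tiny usage example: two triangles sharing the black edge {2, 3}.
triangles = [(1, 2, 3), (2, 3, 4)]
color = {frozenset(p): c for p, c in [((1, 2), 'green'), ((1, 3), 'black'),
                                      ((2, 3), 'black'), ((2, 4), 'black'),
                                      ((3, 4), 'green')]}
print(is_green_diamond_route([frozenset((1, 2)), frozenset((3, 4))],
                             triangles, color))   # True
```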
For instance, in Figure 33 we have a 7-semi-MPG with a green tiling which unfortunately has a green 5-cycle, namely \(C_{5}:=v_{5}\)-\(v_{6}\)-\(v_{7}\)-\(v_{c}\)-\(v_{a}\)-\(v_{5}\). The two graphs demonstrate \((e_{1},e_{2},\ldots,e_{6})\) in two different ways: by marking the edges as a sequence, and by drawing \(\vec{\mathbf{dr}}_{g}\) as a green-gray dashed line with direction.
On the second graph of Figure 33, we also indicate the two triangles of the \(e_{1}\)-diamond as the _initial-in_ (marked by "In") and the _initial-out_ (marked by "Out"). And then there are some in-triangles (marked by "i") and out-triangles (marked by "o") along the green-gray dashed line \(\vec{\mathbf{dr}}_{g}\). Actually we omit several "i" marks, because if any green triangle is marked by "o" then the triangle on the other side of its \(e_{i}\)-diamond must be the corresponding in-triangle.
It is natural to define a _green diamond ring_ in \(M\). For instance, let us use the first graph in Figure 33 again. We find that \(\mathbf{dr}_{g}:=(e_{2},e_{3},e_{4},e_{5},e_{6},e_{2})\) forms a green diamond ring. Along a ring, the roles of in-triangles and out-triangles can be switched.
### Green diamond routes vs. green canal lines
In Section 7 we introduced canal lines, each of them running along two parallel canal banks. At that time we mentioned that we shall treat all triangles as nodes, which brings up the traditional idea of the dual graph of \(M\). Now let us formally define our _dual graph_ of \((M;T_{g})\) or \((M;T_{rgb})\), denoted by \(DG(M;T_{g})\) or \(DG(M;T_{rgb})\), which is a little bit different from the traditional one.
Let us use Figure 34 to explain. The set of nodes \(V(DG(M;T_{g}))\) consists of all triangle facets (circles) and some _pseudo nodes_ (rectangles) nearby and along every outer facet. Notice that the traditional dual graph sets only one node for every outer facet, whereas we set \(k\) pseudo nodes for a \(k\)-gon outer facet. Every link of \(DG(M;T_{g})\) crosses exactly one edge \(e\) in \(E(M)\), and we shall color every link according to the color of \(e\) in \(T_{g}\). In the same way, \(DG(M;T_{rgb})\) can be defined.
Given \(DG(M;T_{g})\), a green canal line \(gCL_{i}\) is a path going through black links in \(DG(M;T_{g})\). If \(DG(M;T_{rgb})\) is provided, a green canal line \(gCL_{i}\) is a path going through red and blue links alternately. All these \(gCL_{i}\) are destin
Figure 34. vertices and edges in \((M;T_{g})\); nodes and links in \(DG(M;T_{g})\)
Given \(DG(M;T_{g})\), a green diamond route \(\mathbf{dr}_{g}\) or a directed one \(\vec{\mathbf{dr}}_{g}\) is a path going through green and black links alternately. If \(DG(M;T_{rgb})\) is provided, all red and blue edges are treated as black. There are many such \(\mathbf{dr}_{g}\) or \(\vec{\mathbf{dr}}_{g}\), since at each step one of two black edges may be picked.
Although \(DG(M;T_{g})\) makes diamond routes and canal lines much clearer to show, switching to this new graph is cumbersome; so we keep working with \((M;T_{g})\) and \((M;T_{rgb})\).
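Even though we keep working with the original graph, the dual graph above is easy to build mechanically. The sketch below is our own illustration of the definition (not the author's construction): one node per triangle facet, one pseudo node per boundary edge, and each link colored by the edge of \(M\) that it crosses.

```python
from itertools import combinations

def dual_graph(triangles, color, boundary_edges):
    """Links of DG(M; T): interior edges give triangle-triangle links,
    boundary edges give links to pseudo nodes, and every link inherits
    the color of the crossed edge of M."""
    edge_to_faces = {}
    for idx, tri in enumerate(triangles):
        for pair in combinations(tri, 2):
            edge_to_faces.setdefault(frozenset(pair), []).append(idx)

    links = []
    for e, faces in edge_to_faces.items():
        if len(faces) == 2:                                   # interior edge
            links.append((faces[0], faces[1], color[e]))
        elif e in boundary_edges:                             # outer-facet edge
            links.append((faces[0], ('pseudo', tuple(sorted(e))), color[e]))
    return links
```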
### Amending \(T_{g}\) by a green diamond route or ring
In our study, there are two major ways of amending \(T_{g}\) or \(T_{rgb}\), and if possible we wish to break up unwelcome green odd-cycles. The first way is what we are going to introduce, and the second way is discussed in Section 16. Let us see the following two examples.
**Example 14.3**.: Let us adopt the 7-semi-MPG \(M\) and \(T_{g}(M)\) in Figure 33 as the original setting, and we are going to break the green odd-cycle \(C_{5}:=v_{5}\)-\(v_{6}\)-\(v_{7}\)-\(v_{c}\)-\(v_{a}\)-\(v_{5}\) in this original \(T_{g}(M)\). Applying edge-color-switch along \(\mathbf{dr}_{g}:=(e_{2},e_{3}\ldots,e_{6},e_{2})\), we get the first graph in Figure 35. In addition, denote \(e_{0}:=v_{3}v_{a}\), and \(e_{-1}:=v_{5}v_{a}\). Applying edge-color-switch along \(\mathbf{dr}^{\prime}_{g}:=(e_{1},e_{0},e_{-1},e_{1})\), we get the second graph in Figure 35.
Let \(e_{7}:=v_{5}v_{6}\). Applying edge-color-switch along \(\mathbf{dr}^{\prime\prime}_{g}:=(e_{0},e_{1},e_{2},\ldots,e_{7})\), we get the first graph in Figure 36. Applying edge-color-switch along \(\mathbf{dr}^{\prime\prime\prime}_{g}:=(e_{4},e_{5},e_{6},e_{7})\), we get the second graph in Figure 36. All these
Figure 35. Eliminate a green odd-cycle
four processes tear the green \(5\)-cycle \(C_{5}\) and create new odd-cycle-free tilings in \(M\). Since \(M\) is One Piece, each of these four new green tilings is grand and offers its own \(4\)-coloring function.
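The following sketch gives one plausible computational reading of the edge-color-switch along a diamond route; it is our illustration only, and the invariant it re-checks (exactly one green edge per triangle facet) is an assumption we make about green tilings for the purpose of the sketch, not a statement quoted from the paper.

```python
from itertools import combinations

def edge_color_switch(route, shared_black, color):
    """One reading (ours) of ECS along a green diamond route: the green edges
    on the route turn black, and the black edges shared by consecutive
    diamonds turn green."""
    new_color = dict(color)
    for e in route:
        new_color[e] = 'black'
    for b in shared_black:
        new_color[b] = 'green'
    return new_color

def one_green_per_triangle(triangles, color):
    """Re-check the assumed tiling invariant after a switch: every triangle
    facet still carries exactly one green edge."""
    return all(sum(color[frozenset(pair)] == 'green'
                   for pair in combinations(tri, 2)) == 1
               for tri in triangles)
```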
**Example 14.4**.: We have a \((7,5)\)-semi-MPG \(M\) in Figure 37 and an original G-tiling \(T_{g}(M)\) as the first graph. We also see a green odd-cycle \(C_{7}\) in \(T_{g}(M)\). This green odd-cycle runs along the annular shape of \(M\). After the first edge-color-switch process along the green-gray dashed line \(\mathbf{dr}_{g}\), we obtain the middle graph with a new green tiling \(T_{g}^{\prime}\).
Unfortunately \(T_{g}^{\prime}\) is not grand. Keep going! Let us apply the second edge-color-switch process along the green-gray dashed line \(\mathbf{dr}_{g}^{\prime}\), and then obtain \(T_{g}^{\prime\prime}\) as the third graph. Finally we achieve a grand G-tiling\({}^{*}\) (the abbreviation of "G-tiling without any green odd-cycle").
Figure 37. Eliminate a green odd-cycle; the middle is not grand
Figure 36. Eliminate a green odd-cycle, more examples
_Remark 14.5_.: Through the first example, we see a constructing method to transform a normal green tiling with odd-cycles into a G-tiling\({}^{*}\). However, this constructing method does not guarantee an odd-cycle-free result, because some other new green odd-cycles might be created by the edge-color-switch process. Through the second example, we see that the grand property might be destroyed. To avoid this awkward situation, we shall choose a green diamond route that crosses the green odd-cycle in and out. Fortunately, most of the time we deal with One Piece; thus any green diamond route must be in-and-out w.r.t. every green odd-cycle.
## 15. Orientation by an initiator
In the previous section, particularly in the second graph of Figure 33, we introduced the _initial-in-triangle_ (marked by "In") and the _initial-out-triangle_ (marked by "Out"); and then we indicate _in-triangles_ (marked by "i") and _out-triangles_ (marked by "o") along a given directed green diamond route \(\vec{\mathbf{dr}}_{g}:=(e_{I}=e_{0}\to e_{1}\to\ldots\to e_{k})\). Usually, we need only mark all out-triangles by "o" and ignore "i".
The initial-in-triangle and initial-out-triangle form the _initial-\(eI\)-diamond_. However, only an initial-\(eI\)-diamond can have two possible directions by switching \(\triangle\)In and \(\triangle\)Out. Thus, as for Figure 33 we had \(eI=e_{1}\) and we shall denote \(\triangle\)Out \(:=(eI,v_{b})_{\mathrm{O}}\) as the initial-out-triangle (sometimes we just use Out without a \(\triangle\) in front of it), and \(\triangle\)In \(:=(eI,v_{2})_{\mathrm{I}}\) as the initial-in-triangle. Once we assign an initial-in-triangle or an initial-out-triangle, we can generate many different green diamond routes, and each directed green diamond route \(\vec{\mathbf{dr}}_{g}\) offers its class of out-triangles and in-triangles.
**Example 15.1**.: Let us adopt the right graph in Figure 33 first, but remove the original directed green diamond route on that graph. In Figure 38, we build another two routes starting at \((eI,v_{b})_{\mathrm{O}}\). By these two directed green diamond routes together with the route given in the right graph in Figure 33, we obtain 10 out-triangles in total, which are shown in the right graph of Figure 38. Most
in-triangles lie on the opposite side of out-triangles, with their corresponding green edges in the middle. Some exceptional in-triangles occur; for instance, \(\triangle v_{5}v_{6}v_{b}\) is an in-triangle without its corresponding out-triangle. Obviously, these exceptional in-triangles only happen when a green edge lies on the outer facet of \(M\). One more interesting observation: there are two out-triangles \(\triangle v_{a}v_{2}v_{3}\) and \(\triangle v_{c}v_{2}v_{8}\) "_adjacent to_" the initial-in-triangle \((e_{I},v_{2})_{\rm I}\). This means that in \(T_{g}\) there are at least two green diamond rings passing through \(e_{I}\). Since \(e_{I}\) lies on the green 5-cycle \(C_{5}=v_{a}\)-\(v_{c}\)-\(v_{7}\)-\(v_{6}\)-\(v_{5}\)-\(v_{a}\), we now have at least two edge-color-switch processes to break \(C_{5}\).
With the help of Example 15.1, we experience the _orientation_ of triangles generated by a fixed initial-out-triangle \(\triangle\)Out. Given an MPG or semi-MPG \(M\) with a green tiling \(T_{g}(M)\), let us choose a particular green edge \(eI\) and one of its associated triangles \((eI,u)\) to be the _initial-out-triangle_ \(\triangle\text{Out}:=(eI,u)_{\rm O}\); at the same time the _initial-in-triangle_ \(\triangle\text{In}:=(eI,v)_{\rm I}\) is chosen, unless \(eI\) is along an outer facet of \(M\). An \(e\)-diamond or \(e\)-triangle is _reachable_ if there is a directed green diamond route \(\vec{\text{\bf dr}}_{g}=(e_{I}=e_{1}\to e_{2}\to\ldots\to e_{k}=e)\) for some \(k\geq 1\) with all \(e_{i}\) distinct. If \(e\) is not along an outer facet of \(M\), at the final two steps of \(\vec{\text{\bf dr}}_{g}\), as we reach \(e\), we see an in-triangle (i) first and then an out-triangle (o). In particular, when \(k=1\) the length of \(\vec{\text{\bf dr}}\) is zero, which means that \((eI,v)\) and \((eI,u)\) are the only reachable in- and out-triangles, respectively, by this \(\vec{\mathbf{dr}}\). Let us denote the sets
\[IT(eI,u)_{\mathrm{O}} := \{\text{in-triangles generated by }(eI,u)_{\mathrm{O}},\text{ including }\triangle\text{In}\};\] \[OT(eI,u)_{\mathrm{O}} := \{\text{out-triangles generated by }(eI,u)_{\mathrm{O}},\text{ including }\triangle\text{Out}\}.\]
Let \(Tri(M)\) consist of all triangles of \(M\). We denote the following three disjoint subsets of \(Tri(M)\):
\[BiT(eI,u)_{\mathrm{O}}=\{\text{bi-oriented}\} := OT(eI,u)_{\mathrm{O}}\cap IT(eI,u)_{\mathrm{O}};\] \[NonT(eI,u)_{\mathrm{O}}=\{\text{non-oriented}\} := Tri(M)-(OT(eI,u)_{\mathrm{O}}\cup IT(eI,u)_{\mathrm{O}});\] \[UniT(eI,u)_{\mathrm{O}}=\{\text{uni-oriented}\} := Tri(M)-BiT(eI,u)_{\mathrm{O}}-NonT(eI,u)_{\mathrm{O}}.\]
In other words, a _non-oriented_ triangle or diamond is never reachable by any \(\vec{\mathbf{dr}}_{g}\); a _bi-oriented_ triangle or diamond is reachable from two different directions.
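The reachability notion above can be over-approximated by a simple graph search; the sketch below is our own rough illustration. It ignores the requirement that all green edges of a single route be distinct, and it assumes each triangle facet carries exactly one green edge.

```python
from itertools import combinations

def out_triangles(initial_out, triangles, color):
    """Over-approximate OT(eI,u)_O by a simple search: from an out-triangle,
    cross either of its black edges into the neighboring triangle (an
    in-triangle), then cross that triangle's unique green edge to reach the
    next out-triangle. Triangles are vertex tuples; 'color' maps frozenset
    edges to 'green' or 'black'."""
    def edges(tri):
        return [frozenset(pair) for pair in combinations(tri, 2)]

    def neighbor_across(tri, e):
        for other in triangles:
            if other != tri and e <= set(other):
                return other
        return None

    reached, queue = {initial_out}, [initial_out]
    while queue:
        tri = queue.pop()
        for e in edges(tri):
            if color[e] != 'black':
                continue
            in_tri = neighbor_across(tri, e)        # the next in-triangle
            if in_tri is None:
                continue
            green = next(x for x in edges(in_tri) if color[x] == 'green')
            out_tri = neighbor_across(in_tri, green)  # its opposite out-triangle
            if out_tri is not None and out_tri not in reached:
                reached.add(out_tri)
                queue.append(out_tri)
    return reached
```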
**Example 15.2**.: Let us still use the right graph in Figure 38 with \(\triangle\text{Out}=(eI,v_{b})_{\mathrm{O}}\) in Example 15.1. We have \(BiT(eI,u)_{\mathrm{O}}=\{\text{triangles inside }v_{0}\text{-}v_{1}\text{-}v_{2}\text{-}v_{c}\text{-}v_{8}\text{-}v_{7}\text{-}v_{0}\}\) and \(NonT(eI,u)_{\mathrm{O}}=\emptyset\). We shade the region of this \(BiT(eI,u)_{\mathrm{O}}\) in gray. Now we show another two examples in Figure 39. On the first graph, we assign \(\triangle\text{Out}:=(eI^{\prime},v_{4})_{\mathrm{O}}\). We demonstrate three directed green diamond routes to determine that \(BiT(eI^{\prime},v_{4})_{\mathrm{O}}=\{\text{triangles inside }v_{0}\text{-}v_{7}\text{-}v_{8}\text{-}v_{c}\text{-}v_{b}\text{-}v_{6}\text{-}v_{0}\}\) and \(NonT(eI^{\prime},v_{4})_{\mathrm{O}}=\emptyset\). We can see the gray area for \(BiT(eI^{\prime},v_{4})_{\mathrm{O}}\) in the second graph. The third graph is interesting: there we assign \(\triangle\text{Out}=(eI^{\prime\prime},v_{b})_{\mathrm{O}}\) and find that \(OT((eI^{\prime\prime},v_{b})_{\mathrm{O}})=Tri(M)\). That means that starting at \((eI^{\prime\prime},v_{b})_{\mathrm{O}}\), there is a \(\vec{\mathbf{dr}}_{g}\) reaching any triangle as an out-triangle. In this way, \(BiT(eI^{\prime\prime},v_{b})_{\mathrm{O}}=Tri(M)-\{(eI^{\prime\prime},v_{b})\}\), i.e., almost all triangles are bi-oriented. There is a hidden meaning: Suppose \(M\) is a subgraph of \(\hat{M}\) and suppose a green diamond route from outside of \(M\) enters through the gate \(eI^{\prime\prime}\). All triangles in \(Tri(M)\) are bi-oriented except \((eI^{\prime\prime},v_{b})\). Say \(\triangle v_{i}v_{j}v_{k}\) is bi-oriented and \(v_{i}v_{j}\) is along the outer facet of \(M\). We can build an extended \(\vec{\mathbf{dr}}_{g}\) that goes out through the gate \(v_{i}v_{j}\) and then back into \(\hat{M}-M\). Back to the first graph: if the gate \(v_{2}v_{3}\) is the entrance, then a possible exit can be any of \(v_{3}v_{4}\), \(v_{5}v_{6}\), \(v_{6}v_{0}\) and \(v_{0}v_{1}\), while \(v_{1}v_{2}\) and \(v_{4}v_{5}\) are impossible.
### Constructing RGB-tilings or G-tilings by diamond routes
By our method, to prove the Four Color Theorem we start with the false assumption \(EP\in e\mathcal{MPGN}4\neq\emptyset\), and then use the follow-up property that \(EP-\{e\}\) is 4-colorable, rather than the property that \(EP-\{v\}\) is 4-colorable (particularly with \(\deg(v)=5\)). A 4-colorable \(EP-\{e\}\) means the existence of RGB-tilings on \(EP-\{e\}\), and the existence of G-tilings on \(EP\), each of which is almost a G-tiling\({}^{*}\) except for a green odd-cycle passing through \(e\).
The truth of the Four Color Theorem for all planar graphs guarantees that all MPG's have their own RGB-tilings. However, without using the Four Color Theorem, it is an independent problem whether all MPG's have their own RGB-tilings.
Figure 39. \(BiT\) in gray
Besides the existence property of RGB-tilings or G-tilings, constructing methods are also interesting. We try to build a green tiling on an MPG or an \((n_{1},n_{2},\ldots,n_{k})\)-semi-MPG starting from no green edges at all. The author has achieved several results on this constructing method to build a green tiling on \(M\) by using \(OT(e,u)_{\mathrm{O}}\). The results will be collected in another paper in the near future.
## 16. Generalized canal lines and Kempe chains
We are now introducing the second topic of this article: _generalized canal lines_. The study is pretty long and we just use a pentagon in \(EP\) to demonstrate the idea in this section.
The reader should refer to the previous Sections 10 and 11 to review normal _canal lines_. Basically, given an MPG or \((n_{1},n_{2},\ldots,n_{k})\)-semi-MPG \(M\), a _grand_ R-tiling and a grand red canal system are two representations of the same idea (see Lemma 7.7).
First of all, we need an R-tiling \(T_{r}:E(M)\rightarrow\{\)red, black\(\}\). The following is a brief review of a _grand_ R-tiling and a _grand_ R-canal system.
* A grand R-tiling: It is _grand_ if the vertex set \(V(M)\) can be partitioned into two disjoint parts \(V_{1}\) and \(V_{2}\) such that the subgraph \(G_{bl}\) of \(M\) induced by all black edges is a bipartite graph on bipartite-sets \(V_{1}\) and \(V_{2}\), and also there is no red edge between \(V_{1}\) and \(V_{2}\). (Most of the time, we will draw the red edges in \(V_{1}\) thicker than those in \(V_{2}\).) A minimal computational check of this condition is sketched right after this list.
* A grand R-canal system: It is _grand_ if we can arrange orientations for all R-canal lines such that the flow directions are parallel but opposite on the two sides of each red edge.
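As promised above, here is a minimal sketch (ours, not the author's) of how the bipartiteness condition for a grand R-tiling could be tested. Note that when the black subgraph is disconnected, the side assignment of each component is not forced; this sketch fixes one choice and does not search over the alternatives.

```python
from collections import deque

def is_grand_r_tiling(vertices, edges, color):
    """Check one reading of the 'grand' condition: the black subgraph is
    bipartite on parts (V1, V2) and no red edge joins V1 to V2.
    'color' maps frozenset edges to 'red' or 'black'."""
    black_adj = {v: [] for v in vertices}
    for e in edges:
        if color[e] == 'black':
            u, v = tuple(e)
            black_adj[u].append(v)
            black_adj[v].append(u)

    side = {}                               # BFS 2-coloring of the black subgraph
    for s in vertices:
        if s in side:
            continue
        side[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for w in black_adj[u]:
                if w not in side:
                    side[w] = 1 - side[u]
                    q.append(w)
                elif side[w] == side[u]:
                    return False            # odd black cycle: not bipartite

    return all(side[u] == side[v]           # red edges must stay inside one part
               for e in edges if color[e] == 'red'
               for u, v in (tuple(e),))
```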
When an RGB-tiling is provided, a _normal_ red canal line is also simultaneously a G-diamond route and a B-diamond route that follows the orientation of the R-canal system, i.e., follows the two sides of the red river banks. R/G/B are actually symmetric and switchable under some circumstances. In the previous sections we used G-diamond routes and here we introduce R-canal lines, because a green light in traffic means Go and free to cross, while a red light in traffic means STOP and no crossing. However, a green diamond route in a provided RGB-tiling is not necessarily a red canal line. Between any two consecutive green edges along a green diamond route, the shared edge could be a red one or a blue one.
A _generalized canal line or ring_ mainly follows the orientation of the R-/G-canal system but crosses some particular red/green edges occasionally. In the following, we will use several examples to explain how to operate a generalized canal line or ring.
Please refer to some figures in Section 6 for examples and counterexamples. Briefly, we use "R-tiling\({}^{*}\)" as the abbreviation of "R-tiling without any red odd-cycle." Notice that a grand R-tiling is not necessarily an R-tiling\({}^{*}\). The study in the rest of the paper highly depends on Sections 6, 7, 10 and 12.
**Lemma 16.1**.: _Let \(M\) be an MPG or an \(n\)-/\((n_{1},n_{2},\ldots,n_{k})\)-semi-MPG with an R-tiling \(T_{r}\)._
1. _A red tiling_ \(T_{r}:=\bigcup_{i}rC_{i}\) _on_ \(M\) _and a red canal system_ \(rCLS:=\bigcup_{j}rCL_{j}\) _are two different perspectives on the same thing._
2. _By linking nodes of triangles, a red canal line_ \(rCL_{j}\) _of_ \(T_{r}\) _is either (_b1_) a closed cycle, called a_ canal ring_, or (_b2_) a path starting from one outer facet and ending at another outer facet (maybe the same outer facet), while the pair of entrance and exit at the two ends of this path are both black edges along the outer facets._
3. _If_ \(M\) _is an MPG, then every red canal line_ \(rCL_{j}\) _is a ring. If_ \(M\) _is an_ \(n\)_-semi-MPG, then the connection of entrances and exits of this red canal system_ \(rCLS\) _creates a non-crossing match among all black edges along the unique outer facet._
### The rotation of the dual Kempe chains w.r.t. \((EP,v)\) with \(\deg(v)=5\)
To prove the Four Color Theorem, approaching by the contrapositive method is nearly inevitable. We shall assume \(e\mathcal{MPGN}4\) nonempty and deal with a pseudo extremum graph, say \(EP\), which is minimum in cardinality among all non-4-colorable MPG's. Kempe's classical proof considers a 5-semi-MPG defined by \(P:=EP-\{v\}\), where \(v\) is any vertex in \(EP\) with \(\deg(v)=5\). Our approach simulates Kempe's classical proof: Given the same situation as the setting of Kempe's proof, we consider a 4-semi-MPG \(Q_{i}:=EP-\{vv_{i}\}\) for the five neighbors \(v_{1},v_{2},\ldots,v_{5}\) of \(v\). By Theorem 4.3, any non-trivial subgraph of \(EP\) is 4-colorable; so \(P\) and \(Q_{i}\) are all 4-colorable.
Kempe's classical proof applies vertex-color-switching, and our renewal method uses edge-color-switching. Please refer to Sections 9, 10, 11 and 12 for details.
Let us introduce the first main idea involving generalized canal rings: _the rotation of the dual Kempe chains_ w.r.t. \((EP,v)\). The idea is demonstrated briefly by Figure 40. Given any extremum planar graph \(EP\in e\mathcal{MPGN}4\), there are at least 12 vertices of degree 5 (see Theorem 8.1, and also [1]). Let \(v\in V(EP)\) with \(\deg(v)=5\) and \(v_{1},\ldots,v_{5}\) be its five neighbors. Let us denote \(\Omega:=v_{1}\)-\(v_{2}\)-\(\ldots\)-\(v_{5}\)-\(v_{1}\), and \(\Sigma\) (\(\Sigma^{\prime}\)) to be the sub-area or sub-graph of \(EP\) inside (outside) of \(\Omega\).
We will prove that the six graphs in Figure 40, each with its own RGB-tiling on \(EP-\{vv_{i}\}\) for \(i=1,\ldots,5\), are different statuses of the same congruence class. Please refer to Subsection 10.2 for the definitions of the three different equivalence relations: synonym (\(\stackrel{{\rm syn}}{{=}}\), \(\langle\cdot\rangle\)), equivalence (\(\equiv\), \([\cdot]\) under \(\Omega\)) and congruence (\(\cong\) under \(\Omega\)).
Here we focus on this _pentagon sub-area_ of \(EP\) (every \(EP\) without exception); it is natural to name this picture of \((EP;v)\) with \(\deg(v)=5\) by \(T^{5}\) particularly. Let \(\Omega:=v_{1}\)-\(v_{2}\)-\(\ldots\)-\(v_{5}\)-\(v_{1}\) and let \(\Sigma\) (\(\Sigma^{\prime}\)) be the subgraph of \(EP\) inside (outside) \(\Omega\). Due to the shape of the pentagon sub-area, we also denote \(Ptg:=\Sigma\) or simply use "5" as a superscript for short, where \(\Sigma\) is a general notation for all kinds of sub-areas. We start with an RGB-tiling (equivalently, a 4-coloring function) on \(Q_{1}:=EP-\{vv_{1}\}\); this RGB-tiling, denoted by \(T^{5}(Q_{1})\), is guaranteed by Theorem 4.3: any non-trivial subgraph of \(EP\) is 4-colorable. Moreover, we use \(\langle T^{5}(Q_{1})\rangle\) to denote the class of synonyms of \(T^{5}(Q_{1})\), which consists of the six RGB-tilings on \(Q_{1}:=EP-\{vv_{1}\}\) obtained by switching (permuting) the edge-colors red, green and blue over the whole of \(Q_{1}\). We will use the equivalence class \([T^{5}(Q_{1})]\) for this kind of process later as a comparison.
Figure 41. Type A and Type B RGB-tilings for \(EP-\{e\}\)
Figure 40. The rotation of the dual Kempe chains w.r.t. \((EP;v)\)
Two more things need to be mentioned: (1) Most of the time we use the Type A \(e\)-diamond for the rest of this study; (2) \(T^{5}(Q_{1})\) is just one of many different RGB-tilings on \(Q_{1}\), and \(\langle T^{5}(Q_{1})\rangle\) is just one of many different classes of synonyms. For the reader's convenience, we re-draw the Type A and Type B \(e\)-diamonds in Figure 41.
Before we start our process, let us look at (a) to (f) in Figure 40 individually. So far these six Type A RGB-tilings \(T(Q_{i})\) are independent, and their existence is due to Theorem 11.2 with a fixed \(e\)-diamond.
_Remark 16.2_.: Because the pentagon \(\Sigma\) is very simple, the equivalence class \([T(Q_{i})]\) is unique and it has the dual Kempe chains \((K_{r},K_{g})\) w.r.t. \((EP;vv_{i})\) as the skeleton in \(\Sigma^{\prime}\); however \(\langle T(Q_{i})\rangle\) might have many different classes of six synonyms for a fixed \(i\). According to the pictures only, we see that (a) and (f) are equivalent, even though we just see the red and green edge-colors switched. But we cannot guarantee that the two underlying RGB-tilings of (a) and (f) are synonyms, because the same _skeleton_ in \(\Sigma^{\prime}\) shared by both (a) and (f) might have different pairs of paths. Two synonyms must share the same paths of the skeleton in \(\Sigma^{\prime}\). In this pentagon sub-area or \(\Omega\), the skeleton in \(\Sigma^{\prime}\) can only be the dual Kempe chains \((K_{r},K_{g})\).
We start with (a): \(\langle T^{5}(Q_{1})\rangle\) in Figure 40, where we draw a _red generalized canal ring_, denoted by \(rGCL_{1}\), shown as a red dashed line. (Both red canal rings and canal lines are always denoted by \(rCL_{i}\) and never by \(rCR_{i}\).) It is generalized because it crosses the red edge \(vv_{3}\) and also the yellow double-line1 \(vv_{1}\). Now let us perform edge-color-switching (ECS for short2) on \(rGCL(v_{1}v_{2})\) and then we obtain (b) in Figure 40. Because the new RGB-tiling \(\langle T^{5}(Q_{2})\rangle\) is obtained by performing ECS on a red canal ring, we also write it as \(\langle T^{5}(Q_{1})\rangle\)+1.
Footnote 1: This double-line is actually orange because yellow is not easy to see in publications.
Footnote 2: We use acronyms VCS and ECS to stand for “vertex-color-switching” and “edge-color-switching” respectively.
So, how do we perform edge-color-switching on (or along) a red generalized canal ring? It is very simple (a schematic sketch follows this list):
* Switch edge colors of green and blue, just like what we do for a normal red canal line;
* Switch edge colors of red and yellow double-line, and this switching rule is what we perform for the "generalized" segment.
* To perform edge-color-switching on (or along) a green/blue generalized canal ring, we just apply the two items above, under symmetry of three colors.
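The sketch below records these switching rules schematically; it is only our reading of the operation, and the edge sets passed in (which edges the ring crosses normally and which it crosses in the generalized way) are assumed to be read off from the drawing rather than computed here.

```python
def ecs_on_red_gcl(edge_colors, normal_crossings, generalized_crossings):
    """One reading (ours) of ECS along a red generalized canal ring:
    green/blue are swapped on the normally crossed edges, while red and the
    yellow double-line (recorded here as the color 'yellow') are swapped on
    the generalized crossings."""
    swap_normal = {'green': 'blue', 'blue': 'green'}
    swap_general = {'red': 'yellow', 'yellow': 'red'}
    new_colors = dict(edge_colors)
    for e in normal_crossings:
        new_colors[e] = swap_normal.get(edge_colors[e], edge_colors[e])
    for e in generalized_crossings:
        new_colors[e] = swap_general.get(edge_colors[e], edge_colors[e])
    return new_colors
```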
In total, we perform ECS five times in Figure 40, alternately using red/green generalized canal rings. We use different ways to label these six graphs. For instance, (e) is actually \(\langle T^{5}(Q_{5})\rangle\); however, when we follow the previous four processes, \(\langle T^{5}(Q_{1})\rangle+4_{rgrg}\) is a good way to represent this equivalence class.
_Remark 16.3_.: After exploring these five processes and the remarks labeled for (a) to (f), we find that the class \([T^{5}(Q_{i})]\) is much better than the class \(\langle T^{5}(Q_{i})\rangle\). We will use the former in the rest of the paper.
_Remark 16.4_.: If we start with (f) and perform ECS five more times, then we can get a new \([T^{5}(Q_{1})]\), even though this new \([T^{5}(Q_{1})]\) might have a different underlying RGB-tiling compared with the original (a): \(T^{5}(Q_{1})\), i.e., we have two RGB-tilings in the same equivalence class but not necessarily the same.
By the definition of the congruence relation in Subsection 10.2, we have
\[[T^{5}(Q_{1})]_{\Omega}\cong[T^{5}(Q_{2})]_{\Omega}\cong\ldots\cong[T^{5}(Q_ {5})]_{\Omega}.\]
where the subscript \(\Omega\) means the sub-area that \([\cdot]\) builds up. Therefore, we have the following theorem.
**Theorem 16.5**.: _Let \(EP\in e\mathcal{MPGN}4\), \(v\in V(EP)\) with \(\deg(v)=5\) where the five neighbors of \(v\) form \(\Omega:=v_{1}\)-\(v_{2}\)-\(\ldots\)-\(v_{5}\)-\(v_{1}\). Under the equivalence relation \([\cdot]_{\Omega}\), all RGB-tilings on \(EP\) are in a same congruence class._
Be careful! Once we focus on another topic for discussion, as well as a different \(\Omega^{\prime}\), another equivalence relation \([\cdot]_{\Omega^{\prime}}\) arises; so this theorem does not necessarily hold.
_Remark 16.6_.: There is another reason that we had better use \([\cdot]\), rather than \(\langle\cdot\rangle\). Again, let us focus on (a), which is in both Figures 40 and 42. This time we apply the other red generalized canal ring, drawn as the red dashed line in Figure 42. Yes, there are only two _major_3 red generalized canal rings to reach a Type A \(vv_{2}\)-diamond, shown as (b) and (b') in the two figures. Visually it is clear that (b) and (b') are just the same graph with green/blue switched. However, (b) and (b') are not necessarily the same class of synonyms, but we are sure that (b) and (b') have the same skeleton in \(\Sigma^{\prime}\); therefore the graphs of (b) and (b') are equivalent. One more thing shall be kept in mind: \(K_{r}|_{v_{1}}^{v_{3}}\) represents a red-connected component connecting \(v_{1}\) and \(v_{3}\), which means \(K_{r}|_{v_{1}}^{v_{3}}\) might contain a bunch of red paths from \(v_{1}\) to \(v_{3}\). Any red canal ring inside \(K_{r}|_{v_{1}}^{v_{3}}\) is not _major_. Also notice that
Footnote 3: Only _major_ red generalized canal rings (by our method) or _major_ red-connected components (by Kempe's method) can affect the skeleton in \(\Sigma^{\prime}\). Please see Section 10 for details.
\[[T^{5}(Q_{1}){+}1]:=[\langle T^{5}(Q_{1})\rangle{+}1]\]
and the left-hand-side is our standard notation.
**Definition 16.7**.: The two major red generalized canal rings shown in the first graphs of Figures 40 and 42 are usually denoted by \(rGCL(v_{1}v_{2})\) and \(rGCL(v_{1}v_{5})\), because they come out of \(\Sigma\) through the edges \(v_{1}v_{2}\) and \(v_{1}v_{5}\) respectively. We say \(rGCL(v_{1}v_{2})\) and \(rGCL(v_{1}v_{5})\) are _conjugate_ because the results, (b) and (b'), of ECS on each of them
Figure 42. The other major red generalized canal ring for (a)
are equivalent. If there are three or more _major_ red generalized canal rings, the idea of _conjugation_ is more complicated. We will talk about it then.
### More concepts about \(Ptg\)
There are still two concepts to explore about \(Ptg\). The first one is a 4-coloring function on \(Ptg\) locally, i.e., this \(Ptg\) is a sub-area of a 4-colorable MPG. The only possible representative is shown as the first graph in Figure 43. One of its features is the setting of the four red edges.
The second concept comes from a half \(e\)-diamond of Type A in \(Ptg\), shown as the remaining three graphs in Figure 43.
**Lemma 16.8**.: _If \(EP\) has \(Co_{\alpha}(Ptg)\), then \(\deg(v_{2},v_{3})\geq 6\)._
Proof.: If \(\deg(v_{2})=5\) (or \(\deg(v_{3})=5\)) then this vertex has to follow the tangling property w.r.t. a degree 5 vertex in \(EP\). However, \(K_{r}|_{v_{2}}^{v_{3}}\) is too simple to tangle with \(K_{g}\). Thus, \(\deg(v_{2})\neq 5\), and similarly \(\deg(v_{3})\neq 5\). Please see Lemma 10.4.
Figure 43. 4-colorable locally and a yellow double-line on \(\Omega\)
**Corollary 16.9**.: _If \(EP\) has \(Co_{\beta}(Ptg)\) with \(\deg(v_{2})=5\) or \(\deg(v_{3})=5\), then \(K_{b}|_{v_{2}}^{v_{5}}\) is impossible and \(K_{b}|_{v_{1}}^{v_{4}}\) must exist. Please, refer to Figure 44._
**Corollary 16.10**.: _If \(EP\) has \(Co_{\gamma}(Ptg)\) with \(\deg(v_{2})=5\) or \(\deg(v_{3})=5\), then \(K_{g}|_{v_{1}}^{v_{4}}\) is impossible and \(K_{g}|_{v_{2}}^{v_{5}}\) must exist. Please, refer to Figure 44._
## 17. Kempe chains around two adjacent vertices of degree 5 in \(EP\)
Given a particular \(EP\in e\mathcal{MPGN}4\) that has two adjacent vertices, say \(a\) and \(b\), of degree 5, we will perform ECS on generalized canal rings around \(a\) and \(b\) to obtain many Kempe chains in different statuses.
Let \(T\!D:=(\{a,b\};\deg(a,b)=5)\). It is the _topic for discussion_. Around \(T\!D\) is the _border_ \(\Phi\) as a cycle. Here we have \(\Phi:=v_{1}\)-\(v_{2}\)-\(c\)-\(v_{4}\)-\(v_{5}\)-\(d\)-\(v_{1}\).4 The formal definition of \(T\!D\) will be given in Subsection 18.1. Because \(\deg(a,b)=5\), we also use "55" to stand for this particular \(EP\). The two graphs in Figure 45 are the initial RGB-tilings of \(EP\) with a Type A \(ab\)-diamond under equivalence. The subscripts \(\alpha\) and \(\beta\) are just labels to distinguish them. Clearly, \([T_{\alpha}^{55}]\not\equiv[T_{\beta}^{55}]\). We will show that \([T_{\alpha}^{55}]\cong[T_{\beta}^{55}]\). Another obvious observation is that the equivalence class \([T_{\alpha}^{55}]\) of RGB-tilings is symmetric w.r.t. the vertical line and the horizontal one; so is the class \([T_{\beta}^{55}]\), with more imagination.
Footnote 4: Here we use \(\Phi\) to distinguish from \(\Omega\) in Subsection 16.1.
For \([T_{\alpha}^{55}]\) there are two major red (green) generalized canal rings, namely \(rGCL(dv_{1})\) and \(rGCL(v_{1}v_{2})\) (\(gGCL(cv_{2})\) and \(gGCL(v_{1}v_{2})\)). For \([T_{\beta}^{55}]\) there are two major red
Figure 45. The two initial RGB-tilings of \(EP\) with \(T\!D=55\)
(green) generalized canal rings, namely \(rGCL(v_{1}v_{2})\) and \(rGCL(cv_{2})\) (\(gGCL(v_{1}v_{2})\) and \(gGCL(dv_{1})\)). Of course, all of them are conjugate in pairs.
### Let us rock-n-roll
Starting with the initial status \(S_{0}:=[T_{\alpha}^{55}]\), let us perform \(10\) consecutive processes of ECS according to those red/green dashed lines drawn in Figure 46. The whole figure shows the rotation of many Kempe chains around vertices \(a\) and \(b\), or around \(\Phi\).
Figure 46. Rock-n-roll around \((\{a,b\};\deg(a,b)=5)\)
_Remark 17.1_.: Some details in Figure 46 need to be mentioned. We have two versions of \(rGCL(dv_{1})\): we show the one that turns around vertex \(a\), while the other one turns around vertex \(b\). That is why we have both \(K_{g}|_{c}^{v_{1}}\) and \(K_{g}|_{c}^{v_{5}}\) in \(S_{1}\). Also notice that even though the graph we draw for \(S_{1}\) seems to have \(\operatorname{Gdeg}(c)=3\), it is more possible that \(\operatorname{Gdeg}(c)=2\), with \(K_{g}|_{c}^{v_{1}}\) and \(K_{g}|_{c}^{v_{5}}\) sharing one green edge in \(\Sigma^{\prime}\). So far we still have \(\deg(c)\geq 5\).
_Remark 17.2_.: The Kempe chain \(K_{r}|_{d}^{v_{4}}\) in \(S_{2}\) and \(S_{3}\) can be replaced by \(K_{r}|_{d}^{c}\), because we can only guarantee that the red edge \(cv_{4}\) is red-connected with \(d\). If it is really \(K_{r}|_{d}^{c}\), then \(\deg(c)\geq 6\). The same thing happens in \(S_{3}\) and \(S_{4}\) for \(K_{g}|_{v_{2}}^{v_{4}}\); it can be replaced by either \(K_{g}|_{v_{2}}^{v_{5}}\) or \(K_{g}|_{v_{2}}^{d}\). If it is really \(K_{g}|_{v_{2}}^{v_{5}}\), then \(\deg(v_{5})\geq 7\). If it is really \(K_{g}|_{v_{2}}^{d}\), then \(\deg(d)\geq 6\). There is more discussion along this similar idea, and we will talk about it then.
_Remark 17.3_.: The notation \(|S_{4}\) in the third line of this figure means reflection of \(S_{4}\) w.r.t. the vertical line. There is also notation \(\underline{S_{*}}\) that means reflection of \(S_{*}\) w.r.t. the horizontal line.
_Remark 17.4_.: For \(S_{1}\) there are three major green generalized canal rings, namely \(gGCL(v_{1}v_{2})\) (the green dashed line given in this figure), \(gGCL(v_{4}v_{5})\) and \(gGCL(dv_{1})\). So, what is the idea of _conjugation_ now? The reader can check that ECS on \(gGCL(v_{1}v_{2})\) is conjugate with \(gGCL(v_{4}v_{5})\oplus gGCL(dv_{1})\), where \(\oplus\) means combining or connecting these two generalized canal lines in a proper way; also \(gGCL(v_{4}v_{5})\) is conjugate with \(gGCL(v_{1}v_{2})\oplus gGCL(dv_{1})\). We could choose ECS on \(gGCL(v_{4}v_{5})\) as our second process to change \(S_{1}\) and then obtain \(S_{2}^{\prime}\). Clearly, \(S_{2}^{\prime}\equiv|S_{2}\); so the rest after \(S_{2}^{\prime}\) are all reflections w.r.t. the horizontal line. Even though \(gGCL(dv_{1})\) is conjugate with \(gGCL(v_{1}v_{2})\oplus gGCL(v_{4}v_{5})\), it seems useless, because applying ECS on \(gGCL(dv_{1})\) does not make a single \(e\)-diamond of Type A but two of Type B. Please see Figure 47 and refer to the next remark.
According to Figure 46, we have
\[[S_{0}=T_{\alpha}^{55}]_{\Phi}\cong[S_{1}]_{\Phi}\cong\cdots\cong[S_{5}=T_{ \beta}^{55}]_{\Phi}\cong\cdots\cong[S_{10}]_{\Phi}\equiv[T_{\alpha}^{55}]_{\Phi}. \tag{17.1}\]
_Remark 17.5_.: Since \(S_{8}\equiv|S_{2}\) and \(S_{9}\equiv S_{1}\), something happens in between \(S_{1}\) and \(S_{2}\), as well as in between \(S_{8}\) and \(S_{9}\), are similar. In the last remark, we said that performing ECS on \(gGCL(dv_{1})\) in \(S_{1}\) seems useless. Let us try it, as well as performing ECS on \(rGCL(v_{1}v_{2})\) in \(S_{8}\). We obtain \(S_{x}\cong S_{y}\) and they have two Kempe chains of only one color. Notice that the pair of yellow double-lines in \(S_{x}\) are different from the ones in \(S_{y}\). However, the congruence of them is decided by the edge-coloring along \(\Phi\), denoted by \(Co(\Phi)\), and the skeleton in \(\Sigma^{\prime}\). The difference inside \(\Sigma\) between \(S_{x}\) and \(S_{y}\) associates with \(\Sigma\)-adjustment that will be introduced later. Now we can explore this interesting \(S_{x}\).
According to Figure 47, we have
\[[S_{0}=T_{\alpha}^{55}]_{\Phi}\cong[S_{x}]_{\Phi}\equiv[S_{y}]_{\Phi}. \tag{17.2}\]
We use the notations \(|S_{*}\) and \(\underline{S_{*}}\) to denote the reflection images of \(S_{*}\) w.r.t. the vertical line and the horizontal one respectively. Also \(|\underline{S_{*}}\) is reflected twice. Let \([S_{*}]_{\text{sym}}\) consist of the equivalence classes of these four reflection images of \(S_{*}\). Most of the time \([S_{*}]_{\text{sym}}\) has four different elements, but both \([T_{\alpha}^{55}]_{\text{sym}}\) and \([T_{\beta}^{55}]_{\text{sym}}\) have only one element. Due to this fact of only one element and Equations 17.1 and 17.2, we derive the next lemma.
**Lemma 17.6**.: _All elements in \(\{[S_{0}],[S_{1}],\ldots,[S_{9}],[S_{x}]\}_{sym}\), where the subscript sym means this set consists of all symmetric images w.r.t. the vertical line and the horizontal one, are congruent to each other._
Here is one more amazing and important property.
**Theorem 17.7**.: _Let \(EP\in e\mathcal{MPG}\mathcal{N}4\) with \(T\!D:=(\{a,b\};\deg(a,b)=5)\), drawn as the underlying graph shown in Figure 45. Also we adopt the notation \(\Phi\), \(\Sigma\) and \(\Sigma^{\prime}\). Let us fix the subgraph \(\Sigma^{\prime}\) and consider all kinds of MPG's, denoted by \(M_{*}\), such that \(M_{*}\) has exactly two vertices inside \(\Phi\). Among all these \(M_{*}\), only \(EP\) is non-4-colorable._
Proof.: We still name the two vertices inside \(\Phi\) by \(a\) and \(b\). The first MPG that we consider is \(M_{1}\), shown as the first (underlying) graph in Figure 48. By the existence of the RGB-tiling \(S_{x}\) in \(\Sigma^{\prime}\) and the setting of the edge-coloring inside \(\Phi\), we prove that \(M_{1}\) is 4-colorable.
Notice that the green cycle in \(M_{1}\) is even, and even if the possible
Figure 48. \(M_{1}\), \(M_{2}\) and the rest of 4-colorable \(M_{*}\)
blue dashed \(K_{b}|_{c}^{d}\) exists we see a blue even-cycle. The second graph in Figure 48, the MPG \(M_{2}\), is the reflection of \(M_{1}\) w.r.t. the vertical line. Clearly \(M_{2}\) is 4-colorable.
Among all MPG's \(M_{*}\), including graphs that might have edges linking vertices in \(\{c,d,v_{1},v_{2},v_{4},v_{5}\}\) (for example, the edge \(v_{1}v_{4}\)), only \(EP\), \(M_{1}\) and \(M_{2}\) can keep \(\deg(a,b)=5\); the remaining MPG's among \(M_{*}\) must have \(\deg(a)\leq 4\) or \(\deg(b)\leq 4\). Since \(|M_{*}|=\omega\), which is the same as the order of all extremum planar graphs in \(e\mathcal{MPGN}4\), the remaining MPG's among \(M_{*}\) must be 4-colorable by Corollary 4.4. The proof is complete.
**Lemma 17.8**.: _Let \(EP\in e\mathcal{MPGN}4\) with \(\text{TD}:=(\{a,b\};\deg(a,b)=5)\), drawn as the underlying graph shown in Figure 45. The ten graphs, from \(S_{0}\) to \(S_{9}\) in Figure 46, as well as \(S_{x}\) and \(S_{y}\) in Figure 47, are congruent. So we need only deal with one of them in the discussion of 4-colorability, because each of them is a necessary condition for \(EP\) being non-4-colorable._
Let us still fix the subgraph \(\Sigma^{\prime}\) and consider three vertices \(\alpha\), \(\beta\) and \(\gamma\) inside \(\Phi\). We focus on two particular MPG's: \(M_{a}^{+}\) and \(M_{b}^{+}\) in Figure 49, where the superscript "\(+\)" means \(|M_{a}^{+}|=|M_{b}^{+}|=\omega+1\). We dare to ask a question: Are \(M_{a}^{+}\) and \(M_{b}^{+}\) 4-colorable? The answer is yes, if a non-4-colorable (\(EP;55\)) does exist. We will prove this interesting claim later.
### 4-colorable MPG's with \((\{a,b\};\deg(a,b)=5)\)
Let us think reversely. We focus on a 4-colorable MPG, say \(M\), of any order with \(\text{TD}:=(\{a,b\};\deg(a,b)=5)\), i.e., the underlying graph of \(M\) has the same \(\Sigma\) as the ones in Figure 45. Of course,
\(M\) has a different \(\Sigma^{\prime}\) compared with \(EP\). For \(M\), we name \(\Omega:=v_{1}\)-\(v_{2}\)-\(c\)-\(v_{4}\)-\(v_{5}\)-\(d\)-\(v_{1}\) as the border between \(\Sigma\) and \(\Sigma^{\prime}\).
Because \(\Sigma^{\prime}\) is symmetric w.r.t. the vertical line and the horizontal one, we will only explore those representatives of RGB-tilings on \(M\). We can first consider all
Figure 50. Prototypes of red edge-coloring around vertex \(a\)
Figure 51. All possible types of RGB-tilings on \(M\)
possible R-tilings on \(\Sigma\) to force \(M\) 4-colorable. We get three prototypes of red edge-coloring around vertex \(a\) in Figure 50. The details are given in Figure 51.
Depending on what \(M\) is, at least one of these six patterns of RGB-tiling (including their symmetric patterns) on \(\Sigma\) can extend to \(\Sigma^{\prime}\), and then we can fulfill the assumption that \(M\) is 4-colorable. Any kind of possible Kempe chains in \(\Sigma^{\prime}\), which are prepared for these six patterns of RGB-tiling on \(\Sigma\), cannot create any R/G/B odd-cycle.
_Remark 17.9_.: We can drop \([B2]\) from the list in Figure 51, because \([B2]\equiv|[A2]\).
**Lemma 17.10**.: _(a): There is no intersection between \(\{[S_{0}],[S_{1}],\ldots,[S_{9}],[S_{x}]\}_{\text{sym}}\) and \(\{[A1],[A2],[B1],[B3],[C1],[C2]\}_{\text{sym}}\), where the subscript means these two sets consist of all symmetric images w.r.t. the vertical line and the horizontal one. (b): In particular, there is no intersection between their own \(Co(\Phi)\). Otherwise, \(EP\) with \(\text{TD}:=(\{a,b\};\deg(a,b)=5)\) is 4-colorable._
Proof.: Let us name \(ATLAS_{N}:=\{[S_{0}],[S_{1}],\ldots,[S_{9}],[S_{x}]\}_{\text{sym}}\) temporarily, but later we will modify this set without hurting this lemma. Also \(ATLAS_{4}:=\{[A1],[A2],[B1],[B3],[C1],[C2]\}_{\text{sym}}\). Clearly, the subscripts \(N\) and 4 stand for non-4-colorable and 4-colorable.
For (a), they should have no intersection. It needs patience to check these two sets in the coming subsection. Also, no intersection is a necessary condition for non-4-colorability, but not a sufficient condition.
As for (b), it raises another important question: Does the edge-coloring along \(\Phi\), namely \(Co(\Phi)\), uniquely determine the 4-colorable property of \(M:=\Phi\cup\Sigma\cup\Sigma^{\prime}\)? However, this new question is far more than what we claim only for \((EP;55)\).
Sorry! This brief remark is not a real proof. The real proof of (a) is in the next subsection. Part (b) is just a consequence of (a), because the comprehensive study on \(ATLAS_{*}\) does show that the types of \(Co(\Phi)\) uniquely determine each element in \(ATLAS_{N}\cap ATLAS_{4}\) and more than that. The property of unique determination so far only works for \((EP;55)\) and \((EP;Ptg)\).
### \(Atlas\) of Figures 46, 47, 51 and more
For convenience, **all \([*]\) in this subsection is actually \([*]_{\mathbf{sym}}\)**. The main purpose of this subsection is to list all kinds of \(Co(\Phi)\) under synonyms and symmetries; then we can offer a proof of Lemma 17.10(a) about \(ATLAS_{N}\) and \(ATLAS_{4}\).
According to Subsections 17.1 and 17.2, it is natural to ask: Does the union of those \(Co(\Phi)\) in Figures 46, 47 and 51 consist of all possible \(Co(\Phi)\), provided that \(Co(\Sigma^{\prime})\) is 4-colorable? The answer is no, but this union consists of nearly all of them.
By Lemma 6.2, the array \((\#r,\#g,\#b)\) of numbers of red, green and blue edges along \(Co(\Phi)\) can only be \((0,0,6)\), \((0,2,4)\) and \((2,2,2)\) under synonyms. The cases of \((0,0,6)\) and \((0,2,4)\) are easy and demonstrated by Table 1 systematically.
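If, as the appeal to Lemma 6.2 suggests, each color must occur an even number of times along \(\Phi\), the three arrays can be recovered by a tiny enumeration (our illustration only):

```python
# Enumerate all (r, g, b) with r + g + b = 6, each count even, up to
# permuting the colors (i.e., up to synonyms).
triples = {tuple(sorted((r, g, 6 - r - g)))
           for r in range(0, 7, 2) for g in range(0, 7 - r, 2)}
print(sorted(triples))   # [(0, 0, 6), (0, 2, 4), (2, 2, 2)]
```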
Notice that in this table skeletons are unnecessary for cases [A\(*\)], [B\(*\)] and [C\(*\)], because they are colorable for any kind of \(K_{*}\) in \(\Sigma^{\prime}\). However, we do draw a \(K_{r}\) for case \([X_{1}]\), which never showed up before5. We also claim that \([X_{1}]\) is 4-colorable, because of Lemma 16.8.
Footnote 5: That is why we mark it by “X”.
To list all cases of \((2,2,2)\) under synonyms, we could refer to the cases of \((0,2,4)\) and then choose two blue edges along \(\Phi\). But this way is tedious: twice the work with half the results. Instead, let us consider whether the two edges of the same color are adjacent or not (Y/N). So we shall follow four extra requirements: \((Y,Y,Y)\), \((N,Y,Y)\), \((N,N,Y)\) and \((N,N,N)\). Now we demonstrate all cases in Table 2.
There is only one case for \((Y,Y,Y)\) under synonyms. We obtain \([X_{2}]\), which never showed up before. Of course this \([X_{2}]\) is special; later we will find it ubiquitous in further study. The two yellow double-lines are two \(e\)-diamonds of Type B involving three edge-colors; therefore we cannot get any information on \(\Sigma^{\prime}\) from these two \(e\)-diamonds. Having two or more \(e\)-diamonds is not a problem in itself. For instance, \([S_{x}]\) in Figure 47 has two \(e\)-diamonds. The good thing is that these two \(e\)-diamonds of Type B involve only two edge-colors. Even if \([S_{x}]\) came out of nowhere, rather than from \([S_{1}]\) as we just showed, we could still build up \(K_{g}|_{c}^{v_{1}}\) and \(K_{g}|_{c}^{v_{5}}\).
_Remark 17.11_.: In the graph \([X_{2}]\) given in Table 2, we draw a dashed \(K_{r}|_{c}^{v_{5}}\) on purpose. Actually, once \([X_{2}]\) appears it must have (a): either \(K_{r}|_{c}^{v_{5}}\) or \(K_{r}|_{d}^{v_{4}}\); and (b): either \(K_{g}|_{c}^{v_{1}}\) or \(K_{g}|_{d}^{v_{2}}\). It is possible that all four combinations of (a) and (b) suit this \(EP\) with \(T\!D:=(\{a,b\};\deg(a,b)=5)\), but at least one combination exists. Without loss of generality, we assume \(K_{r}|_{c}^{v_{5}}\) appears in \([X_{2}]\). Then we perform ECS on \(rGCL(cv_{4})\) and obtain the second graph in Figure 52. The second graph (\([S_{3}]\), Type B) and the third graph (\([S_{3}]\), Type A) have the same \(Co(\Phi)\), but only Type A can guarantee two Kempe chains: \(K_{g}|_{d}^{v_{2}}\) and \(K_{b}|_{d}^{v_{4}}\). From the second graph to the third, the process can be done by ECS or directly by a \(\Sigma\)_-adjustment_, which is a modification inside \(\Sigma\) and will be introduced formally later. Let us state the conclusion of this remark: if \([X_{2}]\) exists, then \([X_{2}]\cong[S_{3}]\). But so far the existence is not guaranteed.
_Remark 17.12_.: Our curiosity about \([X_{2}]\) has not ended yet. There are two possible blue Kempe chains, namely \(K_{b}|_{v_{1}}^{v_{5}}\) and \(K_{b}|_{c}^{d}\), and at least one exists6. Therefore, we can perform two possible ECS and then obtain \([S_{5}]\) and \([S_{x}]\), shown as the second and third graphs in Figure 53. Now we realize that \([X_{2}]\) connects to many different members of \(ATLAS_{N}\).
Footnote 6: Be careful! They never co-exist; they might exist for different RGB-tilings.
_Remark 17.13_.: The second graph in Figure 53 has \(K_{b}|_{v_{1}}^{v_{5}}\). This does not mean that every \([S_{5}]\) is equipped with \(K_{b}|_{v_{1}}^{v_{5}}\); it is possible that another \(S_{5}\) RGB-tiling has \(K_{b}|_{c}^{d}\). The above argument also applies to the third graph in Figure 53.
_Remark 17.14_.: Comparing the two \([S_{3}]\)'s, in Figure 46 and in Figure 52, under synonyms, we find that the new one (the latter) has three Kempe chains of three different colors. Remark 17.11 offers one reason for the existence of this new \(K_{r}|_{c}^{v_{5}}\). We provide another reason. If there is no \(K_{r}|_{c}^{v_{5}}\), then there should be a
Figure 52. \([X_{2}]\) and \([S_{3}]\)
and then we can turn this \([S_{3}]\) into \([X_{1}]\) in Table 2; thus this \(EP\not\in e\mathcal{MPGN}4\), a contradiction. Now we realize how interesting and important Lemmas 10.4 and 16.8 are. One more important thing: thanks to this "another reason", we can say that \([X_{2}]\) is obtained from \([S_{3}]\) by performing ECS on \(rGCL(cv_{4})\) in Figure 52.
**So, the existence of \([X_{2}]\) is guaranteed.** Otherwise, the process in Remark 17.11 would rest only on the assumption that \([X_{2}]\) exists, and it would remain possible that \([X_{2}]\) does not exist. Now we have completed the entire \(ATLAS_{N}\) theoretically.
_Remark 17.15_.: Let us re-check \([S_{4}]\) in Figure 46. Using \(\Sigma\)-adjustment, we find a new \(K_{b}|_{v_{1}}^{v_{5}}\). This refinement has another reason. If there is no \(K_{b}|_{v_{1}}^{v_{5}}\), then there should be a \(K_{b}|_{d}^{v_{4}}\), and then we can turn the original \([S_{4}]\) (the left graph) into [A2]; thus this \(EP\not\in e\mathcal{MPGN}4\), a contradiction.
Now we finally finish a proof of Lemma 17.10(a). Not only do we have a proof, but we also list \(ATLAS_{N}\) more precisely, with one more element \([X_{2}]\). Let us conclude this whole section with the following theorem:
**Theorem 17.16**.: _Let \(EP\in e\mathcal{MPGN}4\) with \(T\!D:=(\{a,b\};\deg(a,b)=5)\). Every element in \(ATLAS_{N}:=\{[S_{0}],[S_{1}],\ldots,[S_{5}],[S_{x}],[X_{2}]\}_{sym}\) will appear by doing a series of ECS starting from \([S_{0}]\) or from any one in \(ATLAS_{N}\)._
Proof.: Remark 17.14 has already shown the existence of \([X_{2}]\), because it can be obtained from \([S_{3}]\).
_Remark 17.17_.: Recall the interesting problem that we left in Figure 49. By symmetry, we only show that \(M_{a}^{+}\) is 4-colorable. If a non-4-colorable \((EP;55)\) does exist, then the RGB-tiling \([S_{x}]\) exists. We only need the R-tiling of \([S_{x}]\) on \(\Sigma^{\prime}\). Put this R-tiling on the same \(\Sigma^{\prime}\) of \(M_{a}^{+}\) and also color the 5 edges inside \(\Sigma\) shown in Figure 55. Now the whole R-tiling on \(M_{a}^{+}\) has no odd-cycle. By Theorem 6.7, \(M_{a}^{+}\) is 4-colorable.
## 18. Theory behind ECS on generalized canal rings
We have already performed ECS on generalized canal rings many times, but we have not yet clearly explained the theory. In this section we explain our standard operating procedure (SOP) for performing ECS on generalized canal lines or rings, and then investigate new Kempe chains in \(\Sigma^{\prime}\).
### Topic for discussion \(T\!D\) and border \(\Omega\) in \(EP\)
Given a fixed \(EP\in e\mathcal{MPGN}4\), let \(T\!D:=(\{u_{1},u_{2},\ldots,u_{k}\};\)requirements) consist of some chosen vertices from \(EP\) as the _topic for discussion_, where the _requirements_ set up the particular type of \(u_{1},u_{2},\ldots,u_{k}\). We also use \(T\!D\) to represent the induced subgraph made by the vertices of \(T\!D\). Usually we require this subgraph \(T\!D\) to be connected and solid, where "solid" means having no hole; and it had better be 2-connected if \(k\geq 4\). The _border_ \(\Omega\) (sometimes \(\Phi\)) is the subgraph induced by the vertex set consisting of all surrounding neighbors of \(T\!D\). Clearly, \(\Omega\) is a cycle surrounding \(T\!D\), unless \(T\!D\) is very weird. Let \(\Sigma:=\Omega\cup T\!D\) and \(\Sigma^{\prime}:=EP-T\!D\). Clearly \(\Sigma\cap\Sigma^{\prime}=\Omega\). For example, \(T\!D:=(\{v\};\deg(v)=5)\) and \(\Omega:=v_{1}\)-\(v_{2}\)-\(\ldots\)-\(v_{5}\)-\(v_{1}\) in Figure 40.
Due to the definition of \(e\mathcal{MPGN}4\) and Theorem 4.3(b), a 4-semi-MPG \(EP-\{e\}\) for any \(e\in E(EP)\) is 4-colorable and there is at least one RGB-tiling on \(EP-\{e\}\). By Theorem 11.4, RGB-tilings with Type A and Type B \(e\)-diamonds both exist and are congruent to each other in pairs. Please recall our premature concept of \(ATLAS_{N}\) and \(ATLAS_{4}\) in Section 17. To study \((EP;\textit{TD})\), we keep improving or refining these two sets. The _primary_ RGB-tilings with a particular \(e\)-diamond, like \(T_{\alpha}^{55}\) and \(T_{\beta}^{55}\) in Figure 45, are the original members of \(ATLAS_{N}\); these two are different RGB-tilings on \(EP-\{ab\}\). Actually, the _primary_8 RGB-tilings for the discussion on \((EP;\textit{TD})\) can be taken over all inner edges \(e\) in \(\Sigma\), and then we define the _primary atlas_ of RGB-tilings w.r.t. \((EP;\textit{TD})\):
Footnote 8: The _primary_ RGB-tilings and the _initial_ ones are different. Please, wait for the definition of the latter ones.
\[ATLAS_{N}(EP;\textit{TD})\stackrel{\text{tmp}}{\coloneqq}\{T^{\textit{TD}}\mid\text{RGB-tilings, either Type A or Type B, on }EP-\{e\}\text{ for }e\in E(\Sigma)-E(\Omega)\}.\]
For example, see \(ATLAS_{N}\) in the proof of Lemma 17.10.
_Remark 18.1_.: Why did we only consider Type A for our initial \(ATLAS_{N}\) in the last section? Because we benefit from \(\textit{TD}\) containing some vertices of degree 5: any Type B \(e\)-diamond with \(e\) incident to these degree 5 vertices can be transformed into a Type A \(e^{\prime}\)-diamond without changing \(Co(\Phi)\) or the associated skeleton.
On the other hand, to attack the non-4-colorability of \((EP;\textit{TD})\), we consider all possible local 4-colorable functions on \(\Sigma\), each of which is also an RGB-tiling on \(\Sigma\). So we construct the following set:
\[ATLAS_{4}(\textit{TD}) \stackrel{{\text{tmp}}}{{\coloneqq}} \{T^{\textit{TD}}\mid\text{ RGB-tilings on }\Sigma\}.\]
_Remark 18.2_.: The main idea of the whole project is to prove by contrapositive, with the assumption that \(EP\) exists. The existence of \((EP;\textit{TD})\) implies that every element in \(ATLAS_{N}(EP;\textit{TD})\) must exist and that \(ATLAS_{4}(\textit{TD})\) must be empty. Once we find \(ATLAS_{N}(EP;T\!D)\cap ATLAS_{4}(T\!D)\) non-empty, then \((EP;T\!D)\) is impossible, although this does not rule out every \(EP\). The edge-coloring \(Co(\Omega)\) plays an important role in checking whether \(ATLAS_{N}(EP;T\!D)\cap ATLAS_{4}(T\!D)=\emptyset\) or not.
Not just for one particular _abandoned_ edge \(e\), we might consider a particular set of _abandoned_ edges, denoted by \(\{*\}\) when the elements are not chosen yet, and consider any RGB-tiling on \(EP-\{*\}\), i.e., a combination of \(e\)-diamonds of Types A and B. There are two different reasons to care about a set \(\{*\}\) containing multiple edges:
1. This new RGB-tiling on \(EP-\{*\}\) is obtained from one of the primary RGB-tilings. For example, \([S_{x}]\) in Remark 17.5.
2. Sometimes we are forced to check all possible cases of \(Co(\Omega)\). We might find some \(Co(\Omega)\) that have not been investigated yet. So, after being studied, they can be sorted into \(ATLAS_{N}(EP;T\!D)\) or \(ATLAS_{4}(T\!D)\). For example, \([X_{1}]\) and \([X_{2}]\) in Subsection 17.3. However, \([X_{2}]\), obtained from one of the primary RGB-tilings, is still very important.
The _secondary_\(ATLAS_{N}(EP;T\!D)\) consists of these new RGB-tilings on \(EP-\{*\}\) (non-4-colorable on \(EP\)) together with all primary ones; similarly for the _secondary_\(ATLAS_{4}(T\!D)\).
Yes, we do have the _tertiary_\(ATLAS_{N}(EP;T\!D)\). In this extended collection, we consider \(\{*\}\) to be a subset of \(E(\Sigma)\); however the edges in \(\{*\}\) still form multiple \(e\)-diamonds of Type A and Type B. For instance, \(Co_{\alpha}(Ptg)\), \(Co_{\beta}(Ptg)\) and \(Co_{\gamma}(Ptg)\) in Figures 43 and 44.
_Remark 18.3_.: Why do we always emphasize that new (secondary and tertiary) RGB-tilings on \(EP-\{*\}\) are obtained from a primary one? Because we assume that \((EP;T\!D)\) exists, so every primary RGB-tiling on \(EP-\{e\}\) exists by Theorem 11.4. We have to guarantee that all elements in the secondary and tertiary sets exist under this assumption. Therefore, whether \(ATLAS_{N}(EP;T\!D)\cap ATLAS_{4}(T\!D)=\emptyset\) or not is really a crucial point in judging \((EP;T\!D)\).
### The magic of the yellow double-lines and congruence relation
The magic of the yellow double-lines (_abandoned_ edges) is that replacing one by either red, green or blue, we will get an odd-cycle of that color, which is called a _Kempe chain_. Given an \(e\)-diamond, for Type A there are two non-trivial Kempe chains of different colors; for Type B there is only one. Kempe chains are just representatives, because it is possible that a cluster of red/green/blue paths links the two end-vertices of \(e\). In other words, a red Kempe chain represents the red-connected property in \(\Sigma^{\prime}\). Please see Section 10 for details.
When we have multiple \(e\)-diamonds, the Kempe chains in the resulting RGB-tiling on \(EP-\{*\}\) need to be judged case by case. For instance, \([S_{x}]\) in Figure 47 and \([X_{2}]\) in Remark 17.11.
Following Remark 18.3, we understand that the co-existence of certain RGB-tilings on different \(EP-\{*\}\), built by the congruence relation, is very important. The congruence relation is based on performing ECS on a canal ring or a generalized canal ring. Performing ECS on a canal ring will create a new RGB-tiling on \(EP-\{*\}\) without changing \(\{*\}\); however, performing ECS on a generalized canal ring will change \(\{*\}\) to a new set. The former is easy to justify in theory, but the latter needs to be explained more precisely.
Let us consider an RGB-tiling \(T^{\textit{TD}}_{\alpha}(EP-\{e_{\alpha 1},e_{\alpha 2},\ldots\})\), where \(\{e_{\alpha 1},e_{\alpha 2},\ldots\}\) is a set of inner edges of \(\Sigma\) such that no two of them lie in a single triangle. Suppose a generalized canal ring \(rGCL\) is a part of \(T^{\textit{TD}}_{\alpha}(EP-\{e_{\alpha 1},e_{\alpha 2},\ldots\})\) and it is the one on which we would like to perform ECS to get a new RGB-tiling \(T^{\textit{TD}}_{\beta}(EP-\{e_{\beta 1},e_{\beta 2},\ldots\})\). This \(rGCL\) must have segments of two kinds. The first kind runs along a red Kempe chain \(K_{r}|^{v}_{u}\) (or maybe more than one) in \(\Sigma^{\prime}\). Thus \(rGCL\cap\Sigma^{\prime}\) must consist of normal red canal line(s), and performing ECS on \(T^{\textit{TD}}_{\alpha}\) will keep the new \(T^{\textit{TD}}_{\beta}\) an RGB-tiling on \(\Sigma^{\prime}\). Notice that the tangling property in Section 10 might happen for this reason. Here we bring back an important question: besides degree 5, does any other situation have the tangling property?
The segments of the second kind form the part \(rGCL\cap\Sigma\), and this is the real "generalized" part. Let us review the rules of ECS on a red generalized canal line or ring (a short illustrative sketch follows the list):
* Switch edge colors of green and blue, just like what we do for a normal red canal line;
* Switch edge colors of red and yellow double-line, and this switching rule is what we perform for the "generalized" segment.
* To perform edge-color-switching on (or along) a green/blue generalized canal ring, we just apply the two items above, under symmetry of three colors.
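The following is a minimal, informal Python sketch of these switching rules; the dictionary-based edge representation and function name are our own illustration, not part of the paper.

```python
def ecs_on_red_generalized_canal(edge_colors, crossed_edges):
    """Apply ECS along a red generalized canal line/ring.

    edge_colors maps each edge to 'r', 'g', 'b', or 'y' (a yellow double-line,
    i.e., an abandoned edge); crossed_edges are the edges crossed by the canal.
    """
    swap = {"g": "b", "b": "g",   # first rule: the normal canal-line switch
            "r": "y", "y": "r"}   # second rule: the "generalized" switch
    new_colors = dict(edge_colors)
    for e in crossed_edges:
        new_colors[e] = swap[edge_colors[e]]
    return new_colors
```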
After ECS, by the second rule above, some yellow edges in \(\{e_{\alpha 1},e_{\alpha 2},\ldots\}\) turn red; however, in view of the R-tiling on \(EP\), they are assumed to be red already. The remaining unchanged yellow edges in \(\{e_{\alpha 1},e_{\alpha 2},\ldots\}\), together with some of the original red edges crossed by \(rGCL\), form the new yellow-edge set \(\{e_{\beta 1},e_{\beta 2},\ldots\}\). With this new yellow-edge set, we shall investigate new \(K_{g}\) and \(K_{b}\). This is the key point of our ECS process. **We preserve all \(K_{r}\cap\Sigma^{\prime}\), and try to build new \(K_{g}\cap\Sigma^{\prime}\) and \(K_{b}\cap\Sigma^{\prime}\) for \(T_{\beta}^{\textit{TD}}\).** Now we can consider \(T_{\beta}^{\textit{TD}}(EP-\{e_{\beta 1},e_{\beta 2},\ldots\})\) and use a generalized canal ring \(gGCL\) or \(bGCL\) to perform the next ECS. The sequence of the process is just like what we did in Figure 46.
_Remark 18.4_.: Recall the red Kempe chain \(K_{r}|_{u}^{v}\) described in the last two paragraphs. Suppose that edges \(uu^{\prime}\) and \(vv^{\prime}\) are along \(\Omega\) and crossed by \(rGCL\). According to the first rule above, the colors on \(uu^{\prime}\) and \(vv^{\prime}\) are just switched between green and blue, and then \(Co(\Omega)\) is changed. Thus, a sequence of ECS processes can explore many different kinds of \(Co(\Omega)\) for \((EP;\textit{TD})\).
Besides Figure 46, we also explored \(ATLAS_{*}\) in Subsection 17.3. Without this hard work, we probably could not have found the interesting \([X_{1}]\) and \([X_{2}]\). Do we really need to explore \(ATLAS_{*}\)? It depends on what kind of \(\textit{TD}\) we discuss. The more fundamental the structure of \(\textit{TD}\) is, the more details we need to know.
### \(\Sigma\)-adjustments and \(\{*\}\)
In the last subsection we wrote: **We preserve all \(K_{r}\cap\Sigma^{\prime}\), and try to build new \(K_{g}\cap\Sigma^{\prime}\) and \(K_{b}\cap\Sigma^{\prime}\) for \(T_{\beta}^{TD}\).** The general way to do this is to observe the new set \(\{e_{\beta 1},e_{\beta 2},\ldots\}\) in \(\Sigma\); however, we have many different ways to build new \(K_{g}\cap\Sigma^{\prime}\) and \(K_{b}\cap\Sigma^{\prime}\) as a skeleton, and we had better do our best to find them. For example, the red Kempe chain \(K_{r}|_{c}^{v_{5}}\) of \([S_{3}]\), in Table 2 or in Remark 17.14, is obtained by the exclusive law.
The method of \(\Sigma\)-adjustment has already been used in Remarks 17.5, 17.11 and 17.15. There are two major methods to perform a \(\Sigma\)-adjustment:
1. Find any generalized canal ring inside \(\Sigma\) and perform ECS. Then we have a new set of abandoned edges \(\{e^{\prime}_{\beta 1},e^{\prime}_{\beta 2},\ldots\}\) in \(\Sigma\) with which to build new \(K_{g}\cap\Sigma^{\prime}\) and \(K_{b}\cap\Sigma^{\prime}\).
2. Keep the current \(Co(\Omega)\) unchanged. Try to re-arrange a new single-color tiling inside \(\Sigma\). Then, according to \(Co(\Omega)\), complete the other two edge-colorings. Most of the time we cannot obtain an RGB-tiling on \(\Sigma\), but only one on \(\Sigma-\{*\}\). Now try to build a new \(K_{g}\cap\Sigma^{\prime}\) or \(K_{b}\cap\Sigma^{\prime}\).
The reader can practice method (1) using \([S_{1}]\) or \([S_{4}]\), and a red generalized canal ring inside \(\Sigma\). All [A1], [A2], [B1], [B2], [B3], [C1], [C2] were made by method (2) without any abandoned edge. Also \([X_{1}]\) and \([X_{2}]\) were studied by method (2) at the very beginning.
Working on \(ATLAS_{*}\) in Subsection 17.3 is so tedious, but the job in Subsection 17.1 is standard.
Because \(EP\) is the extremum, i.e., the smallest non-4-colorable MPG, we can only 4-color \(EP-\{e\}\) and sometimes \(EP-\{*\}\). The theory of the \(e\)-diamond is much easier, for it can only be of Type A or Type B, which co-exist. A large set \(\{*\}\) of abandoned edges makes a coloring or an RGB-tiling on both \(\Sigma\) and \(\Sigma^{\prime}\), as well as \(Co(\Omega)\), more complicated. Also we need to be very careful: without linking to a certain congruent RGB-tiling \(T_{\alpha}^{TD}(EP-\{e\})\), we have no right to guarantee the existence of \(T_{\beta}^{TD}(EP-\{*\})\).
## 19. Three degree 5 vertices in a triangle
Let us consider the situation where three vertices of degree 5 in \(EP\) form a triangle. We have a new \(\textit{T}\!D:=(\{a,b,c\};\deg(a,b,c)=5)\), or "\(5^{3}\)" in short, that consists of three vertices of degree 5 shown in Figure 56, and \(\Omega:=d\)-\(v_{1}\)-\(v_{2}\)-\(\ldots\)-\(v_{5}\)-\(d\).
By Lemma 17.8 we choose the first graph \([T_{\alpha}^{55}]\) in Figure 45 to discuss. The following two graphs, \([T_{1}^{5^{3}}]\) and \([T_{2}^{5^{3}}]\), are determined by \(cv_{3}\) being blue or green. However, given \(cv_{3}\) blue, we see a 5-cycle \(K_{g}\cup\{ab\}\). By Lemma 16.8 or the tangling property for \(EP\) with \(\deg(a,b)=5\), we have a contradiction.
Thus, only \([T_{2}^{5^{3}}]\) can be the proper RGB-tiling for this \((EP;\textit{T}\!D)\). In addition, we claim that the red dashed path exists. We simply re-arrange a new red tiling inside \(\Sigma\) and treat green/blue as black. The new red tiling on \(EP\) is shown as the third graph \([T_{2}^{5^{3}}]\), and there must be at least one red odd-cycle crossing \(\Sigma\). The only possibility is the red path \(K_{r}|_{v_{1}}^{v_{3}}\).
Just for fun, the reader can re-arrange another new red tiling inside \(\Sigma\) by setting \(d\)-\(a\)-\(v_{2}\) and \(v_{3}\)-\(c\)-\(b\)-\(v_{5}\) red, and then examine the new red odd-cycle crossing \(\Sigma\). We leave it to the reader to draw this result.
**Lemma 19.1**.: _Let \(a,b,c\) be three vertices in a triangle of \(EP\) with \(\deg(a,b,c)=5\). There is only one congruence class of RGB-tilings on \(EP\). This congruence class has a representative shown as \([T_{2}^{5^{3}}]\) in Figure 56. Please turn the red dashed line in \([T_{2}^{5^{3}}]\) solid._
A different way to prove the existence of the red dashed path in Figure 56 is given by the following process. In Figure 57, starting with the original \([T_{2}^{5^{3}}]\), we perform two ECS, first on an \(rGCL\) and then on a \(bCL\). The result \([T_{2}^{5^{3}}+2_{rb}]\) is given as the third graph. The red generalized canal ring crosses \(\Sigma^{\prime}\), while the second, blue canal ring lies entirely inside \(\Sigma\); so the new red Kempe chain \(K_{r}|_{v_{1}}^{v_{3}}\) is supposed to exist in \([T_{2}^{5^{3}}]\) before we change it.
The third graph above is very interesting and important, so we give it a special name: \([T_{\alpha}^{5^{3}}]\). In this RGB-tiling, all edges along \(\Phi\) are blue with three degree 5 vertices inside. What a symmetric structure and edge-coloring! Wait! The graph \([T_{\alpha}^{5^{3}}]\) in Figure 57 is not really symmetric: we are missing a green Kempe chain \(K_{g}|_{v_{3}}^{v_{5}}\). Symmetry is not the reason that \(K_{g}|_{v_{3}}^{v_{5}}\) exists; the reason can be found in Figure 58. Also notice that drawing \(K_{r}|_{v_{1}}^{v_{3}}\) together with \(K_{r}|_{v_{1}}^{v_{5}}\) is no different from drawing \(K_{r}|_{v_{1}}^{v_{3}}\) together with \(K_{r}|_{v_{3}}^{v_{5}}\), or even \(K_{r}|_{v_{1}}^{v_{5}}\) together with \(K_{r}|_{v_{3}}^{v_{5}}\), **because what we need is that \(v_{1}\), \(v_{3}\) and \(v_{5}\) are red-connected and also green-connected.**
Again, just for fun, we develop three congruent RGB-tilings on \(EP-\{e\}\) for \(e=av_{1},bv_{5},cv_{3}\) in Figure 58.
**Lemma 19.2**.: _Let \(EP\in e\mathcal{MPGN}4\) with \(a,b,c\in V(EP)\) in a triangle and \(\deg(a,b,c)=5\). Three congruent RGB-tilings \([T_{\alpha}^{5^{3}}]\) shown in Figure 58 must exist._
### Four degree 5 vertices in a diamond
Finally, we want to finish our interesting question: Can a diamond in \(EP\) have all its four vertices degree 5?
**Theorem 19.3**.: _Let \(a,b,c,d\) be four vertices in a diamond in \(EP\). It is impossible that all of them are degree 5._
Proof.: Now \(\textit{TD}:=(\{a,b,c,d\};\deg(a,b,c,d)=5)\) or \(5^{4}\) in short, and \(\Omega:=v_{1}\)-\(v_{2}\)-\(\ldots\)-\(v_{6}\)-\(v_{1}\) be the 6-cycle of the neighbors of \(\textit{TD}\).
Let us adopt the second graph \([T_{2}^{5^{3}}]\) in Figure 56 to fit \(\{a,b,c\}\) and \(\{a,b,d\}\), and then we obtain the only initial status \([T^{5^{4}}]\) as in Figure 59. Before we proceed with the major proof, a very minor check needs to be taken care of: \(v_{3}\neq v_{6}\). Theorem 14.1 offers a proof. Additionally, by the red-connectivity of \(v_{1}\), \(v_{3}\) and \(v_{5}\), these three vertices are red-disconnected from \(v_{6}\); this fact offers another proof that \(v_{3}\neq v_{6}\). Wait! We have never checked \(v_{1}\neq v_{4}\), \(v_{1}\neq v_{5}\), etc. Actually we should have proved these facts earlier. Lemma 4.5 offers a proof for these simple cases.
Now we simply re-arrange a new red tiling inside \(\Sigma\), shown as the second graph in Figure 59. The second graph offers an R-tiling without an odd-cycle. If there is a new cycle, then it must cross \(\Sigma\). First, \(K_{r}|_{v_{1}}^{v_{5}}\cup\{dv_{1},dv_{5}\}\) is an even-cycle. The path \(v_{2}\)-\(a\)-\(b\)-\(v_{4}\) cannot complete a bigger cycle because it is blocked by \(K_{r}|_{v_{1}}^{v_{3}}\). The last thing to consider is: what if there exists \(K_{r}|_{v_{5}}^{v_{3}}\) (red dashed line)? Well, if it exists, then its length is even by the first graph, where \(v_{5}\)-\(v_{4}\)-\(v_{3}\) has length 2 and is all blue. Thus the big cycle \(K_{r}|_{v_{1}}^{v_{3}}\cup K_{r}|_{v_{5}}^{v_{3}}\cup\{dv_{1},dv_{5}\}\) has even length. By
Figure 58. Three congruent RGB-tilings; All of them are \([T_{\alpha}^{5^{3}}]\)
Theorem 7.12(d), an R-tiling\({}^{*}\) on an MPG must induce a 4-coloring function. Now we have a contradiction and the proof is done.
### Three degree 5 vertices in a triangle, continued
Let us go back to \(T\!D:=(\{a,b,c\};\deg(a,b,c)=5)\) with \(a,b,c\) adjacent. First we need to refer to Lemma 19.2. Figure 58 demonstrates three equivalent RGB-tilings in \(\Sigma^{\prime}\) of \(T\!D\), i.e., a necessary skeleton in \(\Sigma^{\prime}\) provided all edges along \(\Omega\) are blue. Let us redraw that skeleton in \(\Sigma^{\prime}\) but leave every edge inside \(\Sigma\) black, as in the first graph in Figure 60. There must be six vertices, say \(u_{1},u_{2},\ldots,u_{6}\), surrounding \(T\!D\). By Theorem 19.3, \(\deg(d,v_{2},v_{4})\geq 6\). **So, here we assume the minimum situation \((*)\):**\(\deg(d,v_{2},v_{4})=6\)**and**\(\deg(v_{1},v_{3},v_{5})=5\)**.** We shall consider a new topic for discussion \(T\!\hat{D}\), which has the vertex set \(\{a,b,c,d,v_{1},\ldots,v_{5}\}\) and the requirement given by the situation \((*)\); and then a new \(\hat{\Omega}:=u_{1}\)-\(u_{2}\)-\(\ldots\)-\(u_{6}\); also new \(\hat{\Sigma}\) and \(\hat{\Sigma}^{\prime}\) inside and outside of \(\hat{\Omega}\), respectively. Please see the second graph in Figure 60.
The second graph is the only feasible RGB-tiling on \(\Sigma^{\prime}\) (not only on \(\hat{\Sigma}^{\prime}\)) under synonyms. Particularly all edges in \(\hat{\Omega}\) must be blue. Now we find a blue canal ring \(bCL\) in between \(\Omega\) and \(\hat{\Omega}\). After performing ECS on this \(bCL\), we obtain a new RGB-tiling on \(\Sigma^{\prime}\) shown as the third graph in Figure 60. Not only that, we can do \(\Sigma\)-adjustment by coloring paths \(v_{1}\)-\(a\)-\(b\)-\(v_{5}\) and \(v_{2}\)-\(c\)-\(v_{4}\) red. Finally we obtain a
Figure 59. A diamond with four degree 5’s in \(EP\)
brand new R-tiling without a red odd-cycle. Please check the only red cycle crossing \(\hat{\Sigma}\); it must have even length.
**Lemma 19.4**.: _Let \(EP\in e\mathcal{MPGN}4\) with \(a,b,c\in V(EP)\) in a triangle and \(\deg(a,b,c)=5\). See Figure 60. It is impossible that the surrounding vertices along \(\Omega:=d\)-\(v_{1}\)-\(v_{2}\)-\(v_{3}\)-\(v_{4}\)-\(v_{5}\)-\(d\) have the degree property \(\deg(d,v_{2},v_{4})=6\) and \(\deg(v_{1},v_{3},v_{5})=5\)._
Our further study shows a stronger property, as follows:
**Lemma 19.5**.: _Let \(EP\in e\mathcal{MPGN}4\) with \(a,b,c\in V(EP)\) in a triangle and \(\deg(a,b,c)=5\). Please refer to the second graph in Figure 60 and most of the hypothesis in Lemma 19.4. This time we only assume, in addition, that \(\deg(v_{1})=5\) and \(\deg(d)=6\), while \(\deg(v_{2},v_{4})\geq 6\) (by Theorem 19.3) and \(\deg(v_{3},v_{5})\geq 5\) are given automatically. It is impossible that \(EP\in e\mathcal{MPGN}4\)._
This new result will be proved in the near future.
## 20. No two degree 5 vertices adjacent; We wish.
We have a dream of proving the following conjecture, which covers all previous results in this paper. At one point we thought we had done it, but a bug came out. Nevertheless, we would like to demonstrate our false proof.
**Conjecture 20.1**.: _Are there any two degree 5 vertices adjacent in \(EP\)? No way!_
Let \(EP\in e\mathcal{MPGN}4\). The given situation is that \(\textit{TD}:=(\{a,b\};\deg(a,b)=5)\) or 55 in short. Then we have \(\Omega:=d\)-\(v_{1}\)-\(v_{2}\)-\(c\)-\(v_{4}\)-\(v_{5}\)-\(d\).
Now we create a new MPG \(\hat{EP}\) from \(EP\). We remove vertices \(a\) and \(b\), and then merge \(v_{2}=v_{4}\). Notice that \(v_{2}\) and \(v_{4}\) are not adjacent in \(\Sigma^{\prime}\); otherwise the 4-cycle \(a\)-\(b\)-\(v_{4}\)-\(v_{2}\)-\(a\) must form a diamond, but vertex \(c\) says no. Please see Theorem 14.1. This merging also makes \(v_{2}c=v_{4}c\), and this fact forces \(v_{2}c\) and \(v_{4}c\) to have the same edge-color in the original \(EP\). This merging also creates a new 4-outer facet \(\Phi:=d\)-\(v_{1}\)-\(v_{2}\)-\(v_{5}\)-\(d\). In addition, we set a new edge \(v_{1}v_{5}\) for \(\hat{EP}\).
Thanks to the existence of \([\underline{S_{1}}]\) on \(\Sigma^{\prime}\) of \(EP\), in Figure 61 we have two synonymous RGB-tilings, (A) and (B), on \(\hat{EP}\), which are of course of Type A with an \(e\)-diamond and \(e=v_{1}v_{5}\).
Case (C) is the last one we need to consider for Type A with an \(e\)-diamond on \(\hat{EP}\). However, it does not exist, because the last graph in Figure 61 is a 4-colorable B-tiling on \(EP\). Clearly this B-tiling is restored from (C).
If the Type A condition were a sufficient condition for this \(\hat{EP}\) being non-4-colorable, then we would obtain a contradiction, since \(|\hat{EP}|<|EP|\). What a nice proof of Conjecture 20.1! Unfortunately, the Type A condition is not a sufficient condition. Please see False Conjecture 13.1.
Thanks to \(ATLAS_{N}(EP;\mathit{TD})\) in Subsection 17.3, we find that \([S_{2}]\) can offer a Type C \(e\)-diamond for \(\hat{EP}\). Please see Figure 62. By Theorem 12.1, or directly by the second graph, \(\hat{EP}\) is 4-colorable.
## 21. What are next steps by this renewal approach
Study \(\mathit{TD}\), many different \(\mathit{TD}\): for instance, \(\mathit{TD}\) consisting of two or three adjacent vertices of degree 5 or 6. Actually, studying the distribution of degrees along \(\Omega\), or even the secondary layer \(\Omega^{2}\), especially those vertices of degree at least 7, is our goal.
Figure 61. Merging \(v_{2}=v_{4}\)
Figure 62. Merging \(v_{2}=v_{4}\)
The setting of \(\hat{T}\!\!D\) in Subsection 19.2 is the minimum situation for \(\Omega\). There are many different settings for \(\Omega\) to discuss. That will be a new chapter of our study in the near future.
The more vertices \(T\!\!D\) has, or more precisely the larger \(\sum_{v\in TD}\deg(v)\) is, the more complex \(ATLAS_{N}(EP-\{*\})\) becomes. To reduce complexity, Lemma 17.8 uses the congruence relation between elements in \(ATLAS_{N}(EP-\{*\})\).
|
2309.13718 | Multiple Relations Classification using Imbalanced Predictions
Adaptation | The relation classification task assigns the proper semantic relation to a
pair of subject and object entities; the task plays a crucial role in various
text mining applications, such as knowledge graph construction and entities
interaction discovery in biomedical text. Current relation classification
models employ additional procedures to identify multiple relations in a single
sentence. Furthermore, they overlook the imbalanced predictions pattern. The
pattern arises from the presence of a few valid relations that need positive
labeling in a relatively large predefined relations set. We propose a multiple
relations classification model that tackles these issues through a customized
output architecture and by exploiting additional input features. Our findings
suggest that handling the imbalanced predictions leads to significant
improvements, even on a modest training design. The results demonstrate
superiority performance on benchmark datasets commonly used in relation
classification. To the best of our knowledge, this work is the first that
recognizes the imbalanced predictions within the relation classification task. | Sakher Khalil Alqaaidi, Elika Bozorgi, Krzysztof J. Kochut | 2023-09-24T18:36:22Z | http://arxiv.org/abs/2309.13718v1 | # Multiple Relations Classification using Imbalanced Predictions Adaptation
###### Abstract
The relation classification task assigns the proper semantic relation to a pair of subject and object entities; the task plays a crucial role in various text mining applications, such as knowledge graph construction and entities interaction discovery in biomedical text. Current relation classification models employ additional procedures to identify multiple relations in a single sentence. Furthermore, they overlook the imbalanced predictions pattern. The pattern arises from the presence of a few valid relations that need positive labeling in a relatively large predefined relations set. We propose a multiple relations classification model that tackles these issues through a customized output architecture and by exploiting additional input features. Our findings suggest that handling the imbalanced predictions leads to significant improvements, even on a modest training design. The results demonstrate superiority performance on benchmark datasets commonly used in relation classification. To the best of our knowledge, this work is the first that recognizes the imbalanced predictions within the relation classification task.
## 1 Introduction
The relation classification (RC) task aims to identify relations that capture the dependency in every pair of entities within unstructured text. The task is employed in several applications, such as knowledge graph construction and completion [1] and entities interaction detection in biomedical text [2]. In knowledge graphs, it is common to employ relational triples as the base structure. A triple consists of a subject entity, an object entity, and a semantic relation connecting them. For instance, Wikipedia articles rely on Wikidata knowledge base to provide its content [3]; users can query Wikidata in a structured format using SPARQL and retrieve the information as RDF triples. In biomedical text, the RC task helps in discovering the interactions between entities such as proteins, drugs, chemicals and diseases in medical corpora.
In the supervised RC task, the objective is to learn a function that takes a sentence and its tagged entities as input, then assigns a binary class to each relation from a predefined set. A positive label indicates that the relation is valid for an entity pair. Thus, the output consists of the positive relations. We use this formal notation for the task:
\[f(W,E,P)=\left\{\begin{array}{ll}R,&\text{Multiple relations}\\ r,&\text{Single relation}\\ \emptyset,&\text{otherwise}\end{array}\right. \tag{1.1}\]
where \(W\) is a sequence of words [\(w_{1},\ w_{2}\...\ w_{n}\)], \(E\) is the set of one or more entity pairs. Each entity pair consists of a subject entity and an object entity, where an entity is a sub-sequence of \(W\). \(P\) is the predefined relations set. \(R\) is a set of multiple relations found for \(E\). \(r\) is a single relation. \(\emptyset\) indicates that no relation exists connecting any of the entities. In an example from the Nyt dataset [4] with the sentence _"Johnnie Bryan Hunt was born on Feb. 28, 1927, in rural Heber Springs, in north-central Arkansas."_, the valid relations are _"contains"_ and _"place lived"_ for the entity pairs _("Arkansas", "Heber Springs")_ and _("Johnnie Bryan Hunt", "Heber Springs")_, respectively.
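To make the imbalance concrete, the Nyt example above can be encoded as a binary target vector over the predefined set \(P\); in this sketch the two relation names come from the example, while the remaining names and the ordering of \(P\) are placeholders rather than the dataset's actual labels.

```python
# Hypothetical ordering of the 24 Nyt relations; only the first two names are real.
P = ["contains", "place lived"] + [f"rel_{i}" for i in range(22)]
positive = {"contains", "place lived"}

# Binary target vector: 2 positive labels out of 24, illustrating the imbalance.
y = [1 if rel in positive else 0 for rel in P]
print(sum(y), len(y))   # 2 24
```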
Usually, a sentence incorporates multiple relations. Table 1 shows the average number of relations in two widely used benchmarks [4, 5]. Therefore, a single RC approach is only valid for limited cases. However, the majority of the literature follows the single relation approach. Single RC models require an additional preprocessing procedure to be able to identify multiple relations [6]; that is, they replicate the sentence \(W\) in equation 1.1 and assign an entity pair and a single relation \(r\) to each copy. Such an approach not only incurs additional steps but also an added training load. An additional downside is losing contextual information due to splitting the entity data in the input [7, 8], which results in missed accuracy enhancements. Besides that, several single RC models evaluate their work on highly class-imbalanced benchmarks, such as Tacred [9], or on datasets with a few predefined relations; for instance, SemEval [10] has only six relations. Such performance measurements make it hard to generalize to real-world scenarios. Additionally, these models employ complicated approaches, such as attention mechanisms and additional training and tuning efforts [11, 12]. Furthermore, most approaches neglect the imbalanced prediction pattern in the predefined relations set, whereby the model learns to predict only one relation out of the many in the predefined set.
The multiple RC approach tackles the previously mentioned problems. However, regular methods are still unable to achieve competitive results, mainly because of the need to adapt to the imbalanced predictions. Despite the ability to predict several relations, their number is much smaller than the size of the predefined relations set. This gap is shown in Table 1 when comparing the average number of relations with the predefined set size, which indicates a highly imbalanced distribution of positive and negative labels in each sentence. Furthermore, the table shows the percentage of sentences with three or more positive relations, reflecting the importance of the multiple RC task.
In this paper, we propose a **M**ultiple **R**elations **C**lassification model using Imbalanced Predictions **A**daptation (MRCA). Our approach adapts to the imbalanced predictions issue through adjusting both the output activation function and the loss function. The utilized loss function has proved its efficiency in several imbalanced tasks. However, our customization shows additional enhancements within the RC task. Furthermore, we utilize the entity features through concatenating an additional vector to the word embeddings in the text encoder level.
The evaluation shows that our approach outperforms other models that reported their multiple RC performances in the relation extraction task on two popular benchmarks. To the best of our knowledge, this is the first work that addresses the imbalanced predictions within the RC task. The ablation study demonstrates the efficacy of our approach components in adapting to the imbalanced predictions, and in utilizing the text and the entity features. Furthermore, the architecture of our model has a light design that yields astonishing performance. We make our code available online1.
Footnote 1: [https://github.com/sa5r/MRCA](https://github.com/sa5r/MRCA)
## 2 Related Work
### Single Relation Classification
Generally, RC models pursued efficient text representation to identify relations. Early supervised approaches [13, 14] employed natural language processing (NLP) tools to extract text features, such as word lexical features, using dependency tree parsers [15], part-of-speech (POS) taggers and named entity recognition. Relax [14] generated dependency parse trees and transformed them into features for a rule-based method.
With the achievements of neural network methods, deep learning models utilized a combination of text lexical features and word embeddings for the input [16, 17] while other approaches [12, 18, 19, 20] depended on those embeddings solely to avoid NLP tools error propagation to later stages [18]. Neural network-based models employed word embeddings in different ways. First, embeddings generated from algorithms such as Word2Vec [21] using custom training data such as in [16, 18]. Second, embeddings from pre-trained language models (PLMs), such as Glove [22]. These PLMs were utilized in the works including [12, 17, 19, 20]. In [12], authors presented a neural attention mechanism with bidirectional LSTM layers without any external NLP tools. In C-GCN [17], the dependency parser features were embedded into a graph convolution neural network for RC. TANL [23] is a framework to solve several structure prediction tasks in a unified way, including RC. The authors showed that classifiers cannot benefit from extra latent knowledge in PLMs, and run their experiments on the T5 language model.
Bert [24] is a contextualized PLM that has presented significant results in various NLP tasks and several RC models employed it [25, 26, 27, 28]. The earliest was R-Bert [25], where authors customized Bert for the RC task by adding special tokens for the entity pairs. Later, Bert's output was used as an input for a multi-layer neural network. In [27], the traditional classification was replaced with a span prediction approach, adopted from the question-answering task. In [28], the model combined short dependency path representation generated from dependency parsers with R-Bert generated embeddings.
### Multiple Relations Classification
Methods that classify multiple relations in a single input pass vary based on the usage of NLP tools, neural networks and PLM models. Senti-LSSVM [7] is an SVM-based model that explained the consequences on the performance when handling multi-relational sentences using a single relation approach.
CopyRE [5] is an end2end entity tagging and RC model that leveraged the copy mechanism [29] and did
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Dataset & Relations & Avg. & Stdev. & 3+ Rels. \\ \hline Nyt & 24 & 2.00 & 2.88 & 18.48\% \\ Webnlg & 216 & 2.74 & 2.23 & 41.72\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: The number of predefined relations in the Nyt and Webnlg datasets, the average number of positive relations in each sentence, the standard deviation, and the percentage of sentences with 3 or more positive relations.
not use a PLM. Instead the model used the training platform's layer to generate word embeddings. In the RC part of the model, the authors used a single layer to make predictions over the softmax function. Inspired by CopyRE, CopyMTL [30] is a joint entity and relation extraction model with a seq2seq architecture. The model followed CopyRE's approach in representing text.
Several models employed Bert in the RC task [6, 31]. The work in [6] elaborated on the flaws of the single relation prediction in multi-relational sentences and presented a model that is based on customizing Bert. Specifically, the model employed an additional prediction layer and considered the positions of the entities in the input. In [31], authors showed that RC is not one of the training objectives in the popular PLMs. Therefore, they leveraged Bert and used a product matrix to relate the identified relations to the sentence entities.
GAME model [32] used the NLP tool Spacy [33] to generate word embeddings. The model is based on graph convolution networks for global sentence dependency and entities interaction features. ZSLRC [34] is a zero-shot learning model that used Glove PLM. We mention this work because it reports the supervised learning performance in RC task.
## 3 Methodology
Our model incorporates two components: an output adaptation module and input utilization techniques. Between the two, we employ a light design to achieve a low number of training parameters and better performance. We use an average pooling layer to reduce the dimensionality of the network before the output layer. A dropout layer is used to tackle training overfitting. Finally, in the output layer, each unit represents a relation. Figure 1 shows the main architecture of our model.
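A minimal TensorFlow/Keras sketch of this architecture is given below, using the hyper-parameters reported later in Table 4 (Bi-LSTM with \(2\times 500\) units, average pooling with pool size 80 and stride 2, dropout 0.15, padded sequence length 100); the embedding dimension and the Flatten placement before the output layer are our assumptions rather than details stated in the paper.

```python
import tensorflow as tf

def build_model(seq_len=100, emb_dim=302, num_relations=24):
    """Sketch of the MRCA architecture; emb_dim assumes 300-d Glove plus two extra features."""
    inputs = tf.keras.Input(shape=(seq_len, emb_dim))
    # Bi-LSTM (2 x 500 units); LSTM's default tanh keeps outputs in [-1, 1]
    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(500, return_sequences=True))(inputs)
    # Average pooling with pool size 80 and stride 2 (Table 4)
    x = tf.keras.layers.AveragePooling1D(pool_size=80, strides=2)(x)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dropout(0.15)(x)
    # One output unit per predefined relation, with a linear activation
    outputs = tf.keras.layers.Dense(num_relations, activation="linear")(x)
    return tf.keras.Model(inputs, outputs)
```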
### Text Encoder
We utilize Glove [22] pre-computed word embeddings to encode the input sentences. Glove embeddings are retrieved from a key-value store where words in lowercase are the keys for a float vectors matrix \(R^{s\times d}\), where \(s\) is the vocabulary size and \(d\) is the embedding dimensions. We find Glove more convenient for the task to tackle the out-of-vocabulary (OOV) [35] problem. Specifically, Glove's most used variant2 has relatively large dictionary of 400,000 words. However, the embeddings are context-free and the keys are case insensitive. Other popular PLMs have much smaller vocabularies but support Glove's missed fea
Figure 1: The main architecture of our model. The adaptation approach uses a linear activation function in the output and the Dice loss extension. Furthermore, we enhance the embeddings by adding two vectors, a character case vector and an entity type vector denoted by the orange and blue squares.
tures. For instance, Bert [24] generates contextual embeddings and has character case support. Nevertheless, the commonly used Bert variant3 has only 28,997 vocabulary entries. Thus, OOV words get their representations based on the latent training parameters [36]. At the same time, several studies showed that RC is not one of the training objectives in Bert [31, 37]. Thus, we adjust Glove to provide the missing features as follows.
Footnote 3: [https://tfhub.dev/tensorflow/bert_en_uncased_L_12_H-768_A-12/4](https://tfhub.dev/tensorflow/bert_en_uncased_L_12_H-768_A-12/4)
First, having case-sensitive embeddings is essential to denote entity words in the sentence. Recognizing entities in the RC task is crucial for detecting the proper relation. Generally, a word with an uppercase first character is an entity word. Thus, we add an additional vector to the word embeddings to denote the case of the first character. For words with an uppercase first character, we use the ceiling of the largest vector value in Glove. Formally, the vector value is computed as follows:
\[v=\lceil\max_{1\leq i\leq s}\left(\max_{1\leq j\leq d}\left(R[i][j]\right) \right)\rceil \tag{2}\]
where \(R\) is the vectors matrix in Glove, \(s\) is the vocabulary size, and \(d\) is the embedding dimensions. For lowercase first character words, we use the negative value of \(v\). We employ the maximum and minimum values in the PLM to boost distinguishing entity words from non-entity words. The orange square in Figure 1 denotes the first character case vector.
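A small sketch of how this case feature could be computed, assuming the Glove matrix is available as a NumPy array; the function names are illustrative, not from the released code.

```python
import math
import numpy as np

def case_vector_value(glove_matrix):
    # Eq. 2: v is the ceiling of the largest entry in the Glove matrix R
    return math.ceil(float(np.max(glove_matrix)))

def first_char_case_feature(word, v):
    # +v for an uppercase first character, -v otherwise
    return float(v) if word[:1].isupper() else -float(v)
```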
Second, to provide a contextual sentence representation, we make use of a bidirectional long short-term memory (LSTM) layer as the first layer in the model architecture.
Although we employ a large vocabulary in encoding the sentence, a few words are still not matched. Thus, we generate their embeddings by combining character-level embeddings.
**Entity Features** We show in equation 1.1 that the task input consists of subject and object entities in addition to the sentence. We attempt to enrich the input with these details by following an approach similar to the additional vector of Section 3.1. Specifically, we append a vector with the value \(v\) from equation 2 to the word representation when the input indicates that the word is a subject entity or part of one, the negative value of \(v\) for object entity words, and 0 for non-entity words. The dense blue square in Figure 1 denotes this additional vector. Formally, the vector is given by the function \(f_{entVec}\) as follows:
\[f_{entVec}(w)=\left\{\begin{array}{lcl}v&,&w\in E_{sub}\\ -1\times v&,&w\in E_{obj}\\ 0&,&w\notin\left\{E_{sub}\cup E_{obj}\right\}\end{array}\right. \tag{3}\]
where \(w\) is a word in the sentence, \(E_{sub}\) is the subject entities set and \(E_{obj}\) is the object entities set. We use the negative value for object entities to emphasize the difference between entity types and to make the relation direction between entity pairs recognizable during training.
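A minimal sketch of \(f_{entVec}\), assuming the subject and object entity words are given as sets; the function name is illustrative.

```python
def entity_type_feature(word, subject_words, object_words, v):
    # Eq. 3: +v for subject-entity words, -v for object-entity words, 0 otherwise
    if word in subject_words:
        return float(v)
    if word in object_words:
        return -float(v)
    return 0.0
```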
### Imbalanced Predictions Adaptation
In real-world scenarios, the number of predefined relations is usually greater than the number of positive relations in a single sentence by a large ratio. Consider the gap in Table 1 between the Webnlg relations and the average number of valid relations in each sentence. We see that it is impractical to employ traditional probability activation functions in neural networks (NN) for this case. For instance, _sigmoid_ and _softmax_ are commonly used functions in NNs [38]. Our claim is supported by the fact that these functions treat positive and negative predictions equally; in other words, all probability predictions of 0.5 or greater are considered positive label predictions under these functions. Thus, we improve the model's ability to predict negative labels by devoting 75% of the prediction range to negative labels. We implement this step by restricting the model's layer outputs to values within the range of -1 to 1. We do this by applying the _tanh_ activation function to the first layer and using a linear activation function in the output layer. As a result, three quarters of the range are used for negative labels, i.e., all predictions between -1 and 0.5 indicate a negative label. Figure 2 compares the prediction ranges of a probability activation function (_sigmoid_) and the output of the _tanh_ activation function.
Figure 2: Comparison between prediction ranges in the sigmoid function and our implementation
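A small sketch of the resulting decision rule: model outputs lie in \([-1, 1]\) and only the top quarter of that range is mapped to a positive label. The helper name is illustrative.

```python
import numpy as np

def to_labels(scores, threshold=0.5):
    # Scores lie in [-1, 1]; only [0.5, 1] maps to a positive label,
    # so three quarters of the range is reserved for negative labels.
    return (np.asarray(scores) >= threshold).astype(int)
```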
**Dice Loss Extension** Traditionally, straightforward classification models employ cross-entropy loss functions [38], which are used to improve accuracy, whereas the RC task objective is to reduce false positive and false negative predictions. Thus, we seek to improve the precision and recall performances, i.e., to enhance the model's F1 score. Dice Loss has shown significant results in several domains, such as computer vision [39] and other NLP tasks with imbalanced data [40]. The function was designed with inspiration from the F1 metric, as follows:
\[DiceLoss(y_{i},p_{i})=1-\frac{2p_{i}y_{i}+\gamma}{p_{i}^{2}+y_{i}^{2}+\gamma} \tag{4}\]
where \(y_{i}\) is the ground-truth label for relation \(i\), \(p_{i}\) is the prediction value, and \(\gamma\) is added to the numerator and the denominator for smoothing; it has a small value of 1e-6 in our implementation.
Utilizing Dice Loss with our adapted predictions may incur unconventional behaviour, specifically when a negative ground-truth label and a negative-valued prediction occur at the same time. Such a case would result in a high loss when using Dice Loss, whereas a low loss is the natural result. Our analysis in Table 2 shows the invalid loss values and the expected ones. Therefore, we expand our adaptation by implementing an extension of Dice Loss. Specifically, we address the negative prediction case by computing the loss from a division operation: the numerator is the squared smoothing value and the denominator is the regular Dice loss denominator. Raising the smoothing value to the second power is necessary to produce a small loss value. Our corrected loss value examples can be observed in Table 2. We call this extension _RC_DiceLoss_ and formally define it as follows:
\[RC\_DiceLoss(y_{i},p_{i})=\left\{\begin{array}{ll}\dfrac{\gamma^{2}}{p_{i}^{2}+y_{i}^{2}+\gamma}&,\ \ y_{i}=0\ \text{and}\ p_{i}<0.5\\[10pt] 1-\dfrac{2p_{i}y_{i}+\gamma}{p_{i}^{2}+y_{i}^{2}+\gamma}&,\ \ \text{otherwise}\end{array}\right. \tag{5}\]
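A minimal NumPy sketch of equations 4 and 5, with \(\gamma=\)1e-6 as in the text; a TensorFlow implementation would replace the NumPy operations with tensor ones.

```python
import numpy as np

def rc_dice_loss(y_true, y_pred, gamma=1e-6):
    """Element-wise RC_DiceLoss (Eq. 5); inputs are arrays of the same shape."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    denom = y_pred ** 2 + y_true ** 2 + gamma
    dice = 1.0 - (2.0 * y_pred * y_true + gamma) / denom        # Eq. 4
    negative_case = (y_true == 0) & (y_pred < 0.5)              # corrected branch of Eq. 5
    return np.where(negative_case, gamma ** 2 / denom, dice)
```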
## 4 Experiments
### Datasets and Experimental Setup
To demonstrate the generalization and applicability of our model, we evaluated it on diverse and widely used datasets. The Nyt dataset [4] was generated from a large corpus of New York Times articles, where each input item consists of a sentence and a set of triples, each triple composed of subject and object entities and a relation. The Webnlg dataset was originally generated for the Natural Language Generation (NLG) task; CopyRE [5] customized the dataset for the triple and relation extraction tasks. Table 3 shows the statistics and the splits of the datasets.
Our model achieved the best results using the Glove PLM. The language model was trained on 6 billion tokens with a 400,000-word vocabulary and 300-dimensional word embeddings. Nevertheless, the experiments demonstrated that our model can adopt other PLMs and still provide competitive results. We performed the experiments using TensorFlow. Our model's hyper-parameters and training settings are unified for both experimental datasets, which confirms the applicability of our approach to real-world data. Table 4 shows the training settings and the model hyper-parameters. We used the Adam optimizer for stochastic gradient descent, performed the training five times on every dataset with different random seeds, and reported the mean performance and the standard deviation. Although we run the training for 50 epochs, the mean convergence epoch for the Nyt dataset was 21.4. The hyper-parameters were chosen by tuning the model for best performance. We ran the experiments on a server with an NVIDIA A100-SXM-80GB GPU device and an AMD EPYC MILAN (3rd gen) processor, but using only 8 cores. We used only 20GB of the available main memory for the Webnlg dataset experiments and 100GB for the Nyt dataset due to its size. We conducted an ablation study to test our model's components using different variants, as shown in Section 4.4.
### Comparison Baselines
We compare our results with the following supervised models. We refer to the main characteristics of each one in section 2. CopyRE [5] and CopyMTL [30] are based on the copy mecha
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(y\) & \(p\) & Expected loss & Dice loss & RC Dice loss \\ \hline
0 & 1 & \(\geq\) 1 & 0.9 & 0.9 \\
0 & 0.1 & \(\approx\) 0 & 0.9 & 9e-13 \\
0 & -0.1 & \(\approx\) 0 & 0.9 & 9e-13 \\
0 & -1 & 0 & 0.9 & 9e-13 \\
1 & 1 & 0 & 0 & 0 \\
1 & 0 & \(\geq\) 1 & 0.9 & 0.9 \\
1 & -1 & \(>\)1 & 1.9 & 1.9 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Loss calculations for ground truth \(y\) and the prediction value \(p\) in Dice loss and in our implementation. The underlined numbers are the unconventional values in Dice loss
nism and used the same approach to generate word embeddings. Both evaluated their work on the Nyt and Webnlg datasets. The GAME model [32] used Spacy to generate word embeddings and reported its results on the Nyt dataset.
Other multiple relations classification models were not considered in the comparison due to their utilization of a different release of the Nyt dataset, such as [31] and ZSLRC [34]. We found that the used release is not commonly used in the literature.
### Main Results and Analysis
We report our average F1 scores in Table 5 and Table 6 for the Nyt and Webnlg datasets, respectively. Additionally, we visualize the training performance in Figure 3. The results show superiority over the baseline models. We report the precision and recall scores in Table 7. We highlight our results on the Webnlg dataset, as we find that relation prediction in that dataset is highly imbalanced due to the large number of predefined relations. Furthermore, the dataset has less training data. Nevertheless, the Webnlg F1 score is close to the Nyt score, even though the Nyt dataset has a much smaller predefined relations set and more training data; this indicates that our adaptation method supported achieving better predictions despite the imbalanced distribution of the binary labels.
### Ablation Study
To examine the effectiveness of our model's components, we evaluate the imbalanced predictions adaptation approach, and the text encoder adjustments. We design different variants of our model and perform training using the same main evaluation settings in Table 4. Moreover, We report the average score of five runs and the standard deviation. We use the Webnlg dataset for the ablation study experiments. We report the performances in Table 7, then we present the following analysis.
**Imbalanced Predictions Adaptation Effectiveness** To evaluate the contribution of our imbalanced predictions adaptation approach, we assess our model using different activation and loss functions. Specifically, we use the traditional _sigmoid_ activation function and the binary cross entropy loss function. We report this variant's performance in Table 7 with the name _MRCA-Sigmoid-BCE_. The variant's F1 score is approximately 3% less than our model's score, which is an average value between the precision scores difference and the recall scores difference. Noting that the recall gap is larger, which presents the first indication that the adaptation approach improved predicting negative labels.
**Encoder effectiveness** To evaluate our text encoder adjustments, we need to consider two sub-components in the assessment: the usage of the Glove language model and the addition of the entity type vector to the embeddings. Thus, we test the following variants of our model. _MRCA-Bert_ is a variant that uses the Bert PLM instead of Glove, and _MRCA-Bert-noLSTM_ is a variant that uses Bert but has no LSTM layers. We use Bert's release4 with character case support, since we added the same case feature in our implementation. In the former variant, there is a
\begin{table}
\begin{tabular}{l c c} \hline \hline Model & CopyRE & CopyMTL & MRCA \\ \hline F1 & 75.1 & 79.7 & \(\mathbf{93.35}_{0.29}\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Our models F1 score on the Webnlg dataset compared with the baseline models.
\begin{table}
\begin{tabular}{l c c} \hline \hline Parameter & \multicolumn{2}{c}{Value} \\ \hline Average Pooling & Pool Size & 80 \\ & Strides & 2 \\ \hline Learning & Rate & 0.0015 \\ & Decay & 3e-5 \\ \hline Bi-LSTM units & & \(2\times 500\) \\ Dropout rate & & 0.15 \\ Sequence padding & & 100 \\ Epochs & & 50 \\ Early stopping patience & & 5 \\ Batch size & & 32 \\ Generated parameters & & 13M \\ Average epoch time & & 2355ms \\ \hline \hline \end{tabular}
\end{table}
Table 4: Model hyperparameters and training settings.
slight difference between the reported F1 score and our model's score, which indicates a smaller contribution of the Glove employment to our overall performance. However, using Glove, our model still outperforms the Bert variant due to better OOV term support. Note that Bert is known as a language model with contextual text representation support; thus, the assumption is that the LSTM layers would not affect Bert's performance. Nonetheless, in the second variant, _MRCA-Bert-noLSTM_, the performance is far worse. This result supports our claim from Section 3.1 that RC is not one of Bert's training objectives, given the abstract usage of Bert here. Furthermore, with a weak contextual representation in Bert, OOV words are split into non-meaningful tokens, as described in the tokenization algorithm used in Bert [41]. This confirms the importance of using a language model with a larger vocabulary.
## 5 Conclusion
We propose MRCA, a multiple relations classification model that aims at improving imbalanced predictions. Our lightweight design leverages a wider prediction range for negative labels and customizes a loss function for the same purpose. Furthermore, text and entity features are utilized efficiently to improve relation prediction. The experiments showed superior performance over state-of-the-art models that reported relation classification results. Assessing our model's components showed that addressing the imbalanced predictions yields a significant improvement in the relation classification task. Furthermore, representing sentences using language models with rich vocabularies provides additional performance gains in the relation classification task.
## 6 Future Work and Limitations
Although the relation classification task has limited applications as a standalone module, it is widely used within the relation extraction task. Therefore, our approach can be adopted to achieve new scores in several applications that rely on relation classification. Further improvements can be achieved by using NLP tools for lexical and syntactic text features. Additionally, our model could be extended to assign the predicted relation to the corresponding entity pair in the input. However, this approach is not an ideal way to perform relation or triple extraction, because errors in the entity tagging step would propagate to the relation classification task. Finally, our imbalanced predictions adaptation promises enhancements if used in similar tasks with imbalanced classes.
Our evaluation was limited by the small number of
\begin{table}
\begin{tabular}{l l l l} \hline \hline Model & Precision & Recall & F1 \\ \hline MRCA & \(95.4_{0.25}\) & \(91.3_{0.48}\) & \(93.35_{0.29}\) \\ MRCA-Sigmoid-BCE & \(93.35_{0.31}\) & \(88.73_{0.55}\) & \(90.88_{0.3}\) \\ MRCA-Bert & \(94.5_{0.2}\) & \(89.9_{0.49}\) & \(92.15_{0.26}\) \\ MRCA-Bert-noLSTM & \(55.18_{2.21}\) & \(53.7_{1.1}\) & \(54.4_{1.16}\) \\ \hline \hline \end{tabular}
\end{table}
Table 7: The performance of our model’s variants on the Webnlg dataset.
Figure 3: The validation F1 score during training for the evaluation datasets. (a) shows the Nyt training performance. (b) shows the Webnlg training performance.
models that reported relation classification performance. However, the results demonstrate our model's superiority, as indicated by the gap between our F1 score and that of the closest model.
|
2310.00161 | Region-centric Image-Language Pretraining for Open-Vocabulary Detection | We present a new open-vocabulary detection approach based on region-centric
image-language pretraining to bridge the gap between image-level pretraining
and open-vocabulary object detection. At the pretraining phase, we incorporate
the detector architecture on top of the classification backbone, which better
serves the region-level recognition needs of detection by enabling the detector
heads to learn from large-scale image-text pairs. Using only standard
contrastive loss and no pseudo-labeling, our approach is a simple yet effective
extension of the contrastive learning method to learn emergent object-semantic
cues. In addition, we propose a shifted-window learning approach upon window
attention to make the backbone representation more robust,
translation-invariant, and less biased by the window pattern. On the popular
LVIS open-vocabulary detection benchmark, our approach sets a new state of the
art of 37.6 mask APr using the common ViT-L backbone and public LAION dataset,
and 40.5 mask APr using the DataComp-1B dataset, significantly outperforming
the best existing approach by +3.7 mask APr at system level. On the COCO
benchmark, we achieve very competitive 39.6 novel AP without pseudo labeling or
weak supervision. In addition, we evaluate our approach on the transfer
detection setup, where it demonstrates notable improvement over the baseline.
Visualization reveals emerging object locality from the pretraining recipes
compared to the baseline. | Dahun Kim, Anelia Angelova, Weicheng Kuo | 2023-09-29T21:56:37Z | http://arxiv.org/abs/2310.00161v2 | # Detection-Oriented Image-Text Pretraining for Open-Vocabulary Detection
###### Abstract
We present a new open-vocabulary detection approach based on detection-oriented image-text pretraining to bridge the gap between image-level pretraining and open-vocabulary object detection. At the pretraining phase, we replace the commonly used classification architecture with the detector architecture, which better serves the region-level recognition needs of detection by enabling the detector heads to learn from noisy image-text pairs. Using only standard contrastive loss and no pseudo-labeling, our approach is a simple yet effective extension of the contrastive learning method to learn emergent object-semantic cues. In addition, we propose a shifted-window learning approach upon window attention to make the backbone representation more robust, translation-invariant, and less biased by the window pattern. On the popular LVIS open-vocabulary detection benchmark, our approach sets a new state of the art of 40.4 mask AP\({}_{r}\) using the common ViT-L backbone, significantly outperforming the best existing approach by +6.5 mask AP\({}_{r}\) at system level. On the COCO benchmark, we achieve very competitive 40.8 novel AP without pseudo labeling or weak supervision. In addition, we evaluate our approach on the transfer detection setup, where ours outperforms the baseline significantly. Visualization reveals emerging object locality from the pretraining recipes compared to the baseline. Code and models will be publicly released.
## 1 Introduction
To understand and localize all objects and entities in the visual world has been a foundational problem in computer vision and machine learning. This capability unlocks a broad array of compelling applications from self-driving cars to search and recommendation. However, existing object detectors typically rely on human-annotated regions and class labels. These annotations are costly and unscalable in terms of the number of categories _e.g_. O(1K) and the number of images _e.g_. O(100K).
The open-vocabulary detection (OVD) task (Zareian et al., 2021) has been introduced to overcome both limitations by pretraining on larger-scale image-text data before finetuning for detection tasks. By representing each category as a text embedding rather than a discrete label, open-vocabulary detectors can localize objects based on any user-provided text queries unavailable during training. Many techniques such as knowledge distillation (Gu et al., 2022; Du et al., 2022), weak supervision (Zhou et al., 2022b), self-training (Zhong et al., 2022; Rasheed et al., 2022; Zhao et al., 2022; Huynh et al., 2022), frozen backbone (Kuo et al., 2023), detection feature alignment (Arandjelovic et al., 2023), and better positional embeddings (Kim et al., 2023b) have been proposed. Most existing methods assume the pretrained Vision Language Models (VLMs) _e.g_. (Radford et al., 2021) are given, and focus on recipes to train the entire model or at least the detector heads from scratch using the pretrained backbone (Gu et al., 2022; Du et al., 2022; Zhong et al., 2022; Kuo et al., 2023; Kim et al., 2023b; Rasheed et al., 2022). This tends to result in suboptimal generalization because the detector heads are trained on the limited vocabulary of detection datasets, and only the backbone contains the knowledge of open-vocabulary concepts.
We present DITO (Detection-Oriented Image-Text pretraining for Open-vocabulary detection), an intuitive method to perform image-language pretraining in a detection-friendly manner for open-vocabulary object detection. Standard pretraining typically uses image classification architecture, and thus needs to train at least the detector heads from scratch during the detection fine-tuning stage. In contrast, our approach learns the detector heads directly from the large image-text corpus. The
detector heads receive text supervision at image level at multiple scales by pooling over randomly generated regions, which learns image-text representation across multiple levels of semantic granularity. In addition, we proposed a simple yet effective Shifted-Window Learning (SWL) approach in detection to enhance the features of vision transformer using window attention. Specifically, we roll the patch tokens with strides less than the window size to mitigate the bias caused by window attention pattern and obtain a more shift-invariant representation. Through Detection-Oriented Pretraining (DOP) and Shifted-Window Learning (SWL), we close the gap between image-text pretraining and open-vocabulary detection, and obtain robust backbone features for better generalization.
The best DITO model obtains 40.4 mask \(\text{AP}_{r}\) on the widely used LVIS open-vocabulary detection benchmark, surpassing the previous best approach by +6.5 \(\text{AP}_{r}\) at system level. In the setting with external box annotations, it achieves 45.8 box \(\text{AP}_{r}\), a significant gain of +12.5 points over the previous best approach. On the COCO benchmark, DITO achieves a very competitive 40.8 novel AP without using pseudo-labels or joint training. In summary, our contributions are:
* We present a novel contrastive pretraining methodology using detector architecture to learn detection-sensitive representation from noisy, large-scale image-text pairs.
* We propose a simple Shifted-Window Learning technique for detection to produce more robust and translation-invariant representation from pretrained vision transformers.
* Our approach significantly outperforms the state-of-the-art methods on LVIS open-vocabulary detection benchmark, including larger models and pseudo-labeling approaches, and achieves very competitive performance on COCO benchmark and transfer detection to Objects365 simultaneously.
We hope these findings will encourage the community to further explore detection-oriented pretraining on noisy, large-scale image-text data to advance open-vocabulary tasks.
## 2 Related Work
Language-supervised open-vocabulary recognition.Learning representations for open-vocabulary recognition is a key to advancing general intelligence. To harness the rich co-occurrence pattern of images and text found in web data, researchers have explored a diverse range of data sources, including image tags, captions, alt-texts, image search queries, or combination thereof (Desai and Johnson, 2021; Sariyildiz et al., 2020; Radford et al., 2021; Jia et al., 2021; Schuhmann et al., 2021; Gadre et al., 2023; Chen et al., 2023). From a modeling standpoint, contrastive learning operates at the image level and has proven successful due to its simplicity, scalability, and versatility across zero-shot (Radford et al., 2021), linear probing (Radford et al., 2021), few-shot (Zhou et al., 2022), and full finetuning (Dong et al., 2022) scenarios. Building upon this body of work, our approach learns region-aligned representations for open-vocabulary detection by leveraging detector architecture during the contrastive pretraining.
Self-supervised pretraining for visual tasks.Self-supervised learning has emerged as a promising paradigm to learn object features for complex visual tasks such as detection, given the challenge of scaling up human annotations. Early efforts designed pretext tasks that require semantic understanding to solve (Doersch et al., 2015; Pathak et al., 2016; Zhang et al., 2016). Subsequently, the idea of contrastive learning became popular where the pretext tasks are specified in the data itself, where the contrastive samples can take the forms of augmented images (Chen et al., 2020), sliding windows (Xiao et al., 2021), object proposals (Wei et al., 2021), or point samples (Bai et al., 2022). In addition to contrastive learning, alternative strategies like pseudo-labeling (Zhong et al., 2021), raw pixel (He et al., 2022) and image feature reconstruction (Bao et al., 2021; Zhou et al., 2021) have also proven effective. While the majority of these methods have focused on learning from images without textual context, and applying to closed-vocabulary detection, our work leverages large image-text data to tackle the more demanding open-vocabulary detection task.
Open-vocabulary object detection and segmentation.Open-Vocabulary detection has made very rapid progress in recent years (Zareian et al., 2021; Gu et al., 2022; Zhong et al., 2022). A common approach is to repurpose the capabilities of pretrained vision-language models (VLM) for detection. Various techniques including knowledge distillation (Gu et al., 2022) and prompt optimization (Du et al., 2022) have been used to train an open-vocabulary detector with the pretrained VLM. Pseudo-boxes and weak-labeling methods (Zhong et al., 2022; Li et al., 2022; Zhou et al.,
2022b; Gao et al., 2022; Feng et al., 2022) have also been used to recover the region-level information missing from the image-level pretraining. In addition, pretrained VLM backbone can be directly employed by adding new detection heads either by setting the backbone frozen (Kuo et al., 2023) or finetunable (Minderer et al., 2022; Kim et al., 2023b).
Pretraining the vision-language models for open-vocabulary object detection is a recent direction. Yao et al. (2023); Zhang et al. (2022) train on a combination of a detection, grounding, and caption data to learn the word-region alignment. RO-ViT (Kim et al., 2023b) proposes region-aware positional embeddings for contrastive model training and uses the focal loss (Lin et al., 2017b) for contrastive learning. Our work is more closely related to the latter approaches, where object detection-oriented pretraining is built in the pretraining phase of our model. Regarding backbone architecture, ConvNet, ViT (Dosovitskiy et al., 2020) or hybrid models (Liu et al., 2021) have been used in existing works. We adopt ViT with window attention in this work, and propose a shifted window learning approach to mitigate the window-induced bias.
## 3 Method
We address the problem of open-vocabulary object detection. At training time, the model can access the class and box labels of base categories (\(C_{B}\)). At test time, the model is tasked with detecting objects from a set of novel categories (\(C_{N}\)) not present in the training set. To achieve this, we leverage pretrained vision and language models (VLMs) building upon prior studies (Gu et al., 2022; Zhong et al., 2022; Kuo et al., 2023). However, instead of taking classification-based pretrained VLMs, we demonstrate how to enhance the VLMs with detector head in image-text pretraining, and propose shifted-window learning (SWL) strategy to improve open-vocabulary detection.
### Preliminaries
Image-text pretraining.Inspired by prior works in open-vocabulary detection, we adopt dual-encoder contrastive image-language pretraining (Radford et al., 2021; Jia et al., 2021) before applying the detection finetuning. The image embeddings \(\{v\}\) and text embeddings \(\{l\}\) are the average-pooled outputs from the image and text encoders, respectively. As in previous works, we compute the dot product of the embeddings in batch \(B\), and scale it by a learnable temperature \(\tau\) before applying the InfoNCE loss (Oord et al., 2018; Radford et al., 2021). Mathematically, the image-to-text (I2T) loss can be expressed as:
\[L_{\text{I2T}}=-\frac{1}{B}\sum_{i=1}^{B}\log(\frac{\text{exp}(v_{i}l_{i}/\tau )}{\sum_{j=1}^{B}\text{exp}(v_{i}l_{j}/\tau)}). \tag{1}\]
The text-to-image (T2I) loss is symmetrical by exchanging the inner/outer summation loops. The total contrastive loss \(L_{con}\) is obtained by \(L_{con}=(L_{\text{I2T}}+L_{\text{T2I}})/2\).
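A minimal PyTorch sketch of this symmetric loss is given below. The function and variable names are ours, the embeddings are assumed to be L2-normalized, and the temperature is passed as an argument although \(\tau\) is learnable in our training.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb, text_emb, temperature):
    """Symmetric InfoNCE loss over a batch of paired (B, D) embeddings, as in Eq. (1)."""
    logits = image_emb @ text_emb.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)              # image -> matching text
    loss_t2i = F.cross_entropy(logits.t(), targets)          # text  -> matching image
    return 0.5 * (loss_i2t + loss_t2i)                       # L_con = (L_I2T + L_T2I) / 2
```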
Figure 1: **DITO method. Detection-Oriented Pretraining (left): DITO trains the detector heads (e.g. FPN (Li et al., 2022b; Lin et al., 2017a), Faster RCNN head (Ren et al., 2015)) upon a ViT encoder backbone with multi-level image-text contrastive loss to bridge the gap between image-text pretraining and open-vocabulary detection. Shifted-Window Learning for detection (right): DITO rolls the image and combine the shifted features with the original features to mitigate the bias of window attention grid (Li et al., 2022b), and produce more robust semantic representation.**
Open-vocabulary detection finetuning.At the fine-tuning stage, our detection finetuning recipe follows previous studies (Zareian et al., 2021; Gu et al., 2022; Kuo et al., 2023; Kim et al., 2023b). During the training phase, we use the RoI-Align (He et al., 2017) feature as the detection embedding for each detected region. We replace the fixed-size classifier layer with the text embeddings of base categories (\(C_{B}\)). The detection score \(p_{i}\) is determined by calculating the cosine similarity between the region embedding \(r_{i}\) and text embeddings of base categories (\(C_{B}\)) followed by a softmax operation. We prepend an additional background class embedding to \(C_{B}\) and use the term "background" to represent the background category. Any proposals that do not match to any base category annotations are treated as background during training. It is important that the text embeddings are computed from the same text encoder as from the image-text pretraining. During testing, we expand the text embeddings to include the novel categories (\(C_{B}\cup C_{N}\)), resulting in (\(C_{B}\cup C_{N}+1\)) categories including the background. We calculate the detection scores (\(p_{i}\)) as the cosine similarity between the region embeddings (\(r_{i}\)) and the expanded text embeddings.
Apart from the detection embedding (\(r_{i}\)), we extract the VLM embedding (Kuo et al., 2023) of region \(i\) by RoI-Align at the last ViT backbone feature map. The VLM score (\(z_{i}\)) is calculated as the cosine similarity with the text embeddings of the combined categories (\(C_{B}\cup C_{N}\)). As we train the ViT backbone for detection tasks, there is a tendency to lose the pretrained image-text knowledge. Inspired by previous work (Kim et al., 2023a), we use a separate frozen ViT backbone as an open-vocabulary region classifier at inference time to compute the VLM score \(z_{i}\).
To compute the open-vocabulary detection score (\(s_{i}^{\text{ens}}\)), we ensemble the detection and VLM scores by geometric means (Gu et al., 2022; Kuo et al., 2023). The formula is as follows:
\[s_{i}^{\text{ens}}=\begin{cases}z_{i}^{(1-\alpha)}\cdot p_{i}^{\alpha}&\text{if }i\in C_{B}\\ z_{i}^{(1-\beta)}\cdot p_{i}^{\beta}&\text{if }i\in C_{N}\end{cases} \tag{2}\]
Here, \(\alpha,\beta\) are floats \(\in[0,1]\) that control the weighting of base versus novel categories. The background score comes from the detection score (\(p_{i}\)) alone, because we observe the VLM score of "background" class is often less reliable.
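The following sketch illustrates Eq. 2 for a single region (NumPy). The \(\alpha,\beta\) values shown are placeholders rather than the values used in our experiments, and the special handling of the background class is omitted.

```python
import numpy as np

def ensemble_scores(p, z, is_base, alpha=0.35, beta=0.65):
    """Geometric-mean ensembling of detection scores p and VLM scores z (Eq. 2).

    p, z: per-category scores for one region; is_base: boolean mask of base categories.
    """
    return np.where(is_base,
                    z ** (1.0 - alpha) * p ** alpha,   # base categories
                    z ** (1.0 - beta) * p ** beta)     # novel categories
```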
### Detection-Oriented Pretraining
Standard image-text pretraining uses classification architectures because the language supervision occurs at the image level rather than object level. Despite showing strong zero-shot classification performance, this approach often leads to sub-optimal performance for detection tasks. To bridge this gap, we propose Detection-Oriented Pretraining (DOP) to use detection architecture instead to capture the rich region-level learning signal in the pretraining phase through the object-oriented design of detection models such as FPN (Lin et al., 2017a; Li et al., 2022b), RoI-Align (He et al., 2017). In addition, pretraining with detection architecture allows us to transfer not only the weights of the image backbone, but also the weights of the detector heads to downstream finetuning tasks. Thus, the detector heads are not trained from scratch on a limited set of categories, but warm-started from the knowledge of large image-text data, thereby improving the generalization capability.
Detector head learning from random regions.Figure 1 (left) illustrates our detection-oriented pretraining system. To achieve the goal of training detector heads on image-text data, we first attach the simple feature pyramid network (Li et al., 2022b) to the output features of the vision transformer backbone. This is because the vision transformer backbone does not provide multi-level feature maps as needed by the standard feature pyramid (Lin et al., 2017a). On top of the feature pyramid, we apply the RoI-Align (He et al., 2017) and Faster R-CNN (Ren et al., 2015) classifier head to match the classification pathway of pretraining with the region-recognition pathway in detection finetuning (see Table 5a for ablations).
For each level \(i\) of the feature pyramid, we randomly generate \(n_{i}\) box regions uniformly over the image by sampling the box size \(h,w\thicksim Uniform(0.2,1.0)\) and aspect ratio \(h/w\thicksim Uniform(0.5,2.0)\). The \(n_{i}\) value is set proportional to the size of the \(i\)-th feature map so that larger feature map would be covered by more regions. We extract the RoI-features of each region by RoI-Align, and feed them through the region classifier head (Ren et al., 2015) to obtain the RoI embedding.
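A possible implementation of this sampling step is sketched below (NumPy). The exact parameterization of the box centers is our assumption, since only the size and aspect-ratio distributions are specified above.

```python
import numpy as np

def sample_random_boxes(n, img_h, img_w, rng=None):
    """Sample n random boxes (x1, y1, x2, y2) for RoI-Align, as a rough sketch."""
    if rng is None:
        rng = np.random.default_rng()
    boxes = []
    for _ in range(n):
        h = rng.uniform(0.2, 1.0) * img_h          # box size as a fraction of the image
        ar = rng.uniform(0.5, 2.0)                 # aspect ratio h / w
        w = np.clip(h / ar, 1.0, img_w)
        cy = rng.uniform(h / 2, img_h - h / 2)     # center sampled uniformly (our choice)
        cx = rng.uniform(w / 2, img_w - w / 2)
        boxes.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return np.asarray(boxes, dtype=np.float32)
```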
Multi-level image-text supervision.After computing the RoI embeddings across pyramid levels, we perform a max pooling over the RoI embeddings per-level to obtain an image embedding for each pyramid level. Intuitively, max pooling allows the representation to focus on salient regions and
discriminative features, thereby learning region-level information without explicit supervision. Then we apply the standard image-text contrastive loss (see Eqn. 1) on each feature level separately, which aids the learning of rich semantic and object knowledge within every feature map (see Table 5b for ablations). The losses from all levels are weighted equally and summed together. As a result of the multi-level learning, we observe the feature maps possess more localized semantic information compared to the baseline (see Figure 2).
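The per-level pooling and loss accumulation can be summarized by the following sketch (PyTorch). The function name and tensor layout are ours, and `contrastive_loss_fn` stands for the symmetric contrastive loss of Eq. 1.

```python
import torch

def detection_oriented_pretraining_loss(roi_embs_per_level, text_emb, contrastive_loss_fn):
    """Multi-level image-text supervision (sketch).

    roi_embs_per_level: list over FPN levels; each entry has shape (B, n_i, D),
    the RoI embeddings of the n_i random boxes for every image in the batch.
    text_emb: (B, D) caption embeddings from the text encoder.
    """
    total = 0.0
    for roi_embs in roi_embs_per_level:
        image_emb = roi_embs.max(dim=1).values       # (B, D): max-pool over RoIs per level
        total = total + contrastive_loss_fn(image_emb, text_emb)
    return total                                      # levels weighted equally
```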
Different from pseudo-labeling techniques (Zhong et al., 2022; Feng et al., 2022; Huynh et al., 2022) that require additional steps to prepare and store annotations, our approach learns the detector heads on the fly via contrastive learning without a need to cache annotations or detection-specific losses.
### Shifted-Window Learning for Detection
We perform shifted-window learning on top of the vanilla vision transformer (Dosovitskiy et al., 2020) without adding or removing any weights. In order to run vision transformer on large detection images (e.g. \(1024\times 1024\)), we apply window-attention layers on a fixed-size grid \(K\times K\), and \(L\) global attention layers evenly spaced throughout the vision transformer (where \(L=4\)) following Li et al. (2022). Although information mixing occurs \(L\) times through global attention, we observed that fixed-size grid is still biased by the grid pattern, thereby compromising the representation power of the backbone. To address the issue, we propose the Shifted-Window Learning (SWL) approach to mitigate the bias of grid pattern in window-attention vision transformer.
Network Architecture.Figure 1 (right) illustrates our shifted-window learning system. The standard vision transformer consists of a patchifying layer and a set of transformer layers. After feeding the image through the patchifying layer, we obtain a feature map of shape \((h,w,c)\). We keep a copy of this feature map to feed through the rest of the ViT, and create another copy which we roll along both \(h\) and \(w\) axes by \(s\) pixels. The elements that roll beyond the last position are reintroduced from the first. The shift size \(s\) is set as a fraction \(q\) of the window attention cell size \(M\), _i.e_. \(s=qM\). Empirically, we set \(q=0.5\) and the cell size \(M=16\) equals the image size (_e.g_. 1024) divided by the product of patch size (_e.g_. 16) and the grid size (_e.g_. 4). We feed the shifted feature map through the rest of the ViT in the same way as the original feature map. The outputs are two sequence features of the same shape \((h,w,c)\). We then unshift the shifted features before combining the two sequences by averaging. We apply the above shifted window operations in both detection training and test times (see Table 6 for ablations on various configurations), and observe clear improvements in representation quality.
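A minimal PyTorch sketch of this forward pass is given below; `vit_blocks` stands for the remaining transformer layers, and the names are ours rather than those of a released implementation.

```python
import torch

def shifted_window_features(patch_tokens, vit_blocks, shift):
    """Shifted-Window Learning forward pass (sketch).

    patch_tokens: (B, H, W, C) output of the ViT patchifying layer.
    vit_blocks:   callable applying the remaining transformer layers.
    shift:        number of token positions to roll, e.g. half the window cell size.
    """
    feat = vit_blocks(patch_tokens)                                          # original branch
    rolled = torch.roll(patch_tokens, shifts=(shift, shift), dims=(1, 2))    # shifted branch
    feat_shifted = vit_blocks(rolled)
    feat_shifted = torch.roll(feat_shifted, shifts=(-shift, -shift), dims=(1, 2))  # unshift
    return 0.5 * (feat + feat_shifted)                                       # average the two views
```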
Comparison with other architectures.Compared to the Swin Transformer (Liu et al., 2021), we apply the shifted-window ideas as separate forward passes, while Swin Transformer applies similar ideas in an alternating manner layer by layer. Our approach requires no change to the vanilla transformer architecture and is compatible with any ViT backbones pretrained without shifted windows (_e.g_. (Radford et al., 2021)), whereas Swin Transformer requires specialized pretraining on the same architecture. Compared to the full-attention vision transformer (Dosovitskiy et al., 2020), we observe that window attention taps more effectively into the region-level semantic knowledge of pretrained backbone than full attention, perhaps because the window structure helps the model focus on relevant local cues and be more robust to noises farther away.
## 4 Experimental Results
Pretraining setup.Our image-text pretraining consists of two phases. We first train a contrastive VLM from scratch following the standard CLIP (Radford et al., 2021) recipe for 500K iterations, with 16k batch size, and 224\(\times\)224 image size. We use the vision transformers as image encoders and use global average pooling at the last layer instead of CLS-token pooling.
Next, we apply the detection-oriented pretraining (DOP) where we freeze the image and text encoders trained in the first phase and introduce the detector heads. We use the Simple FPN (Li et al., 2022) and Faster R-CNN classifier head, where we replace the batch normalization with layer normalization. At the \(i\)-th pyramid level \(i\in\{2,3,4,5\}\), we randomly sample \(n_{i}\in\{400,200,100,50\}\) box regions and compute their RoI-Align features. We use a short training cycle of 30K iterations, 4k batch size, 256\(\times\)256 image size, AdamW optimizer with an initial learning rate (LR) of 1e-4, linear LR decay and warmup of 5k iterations. The ALIGN (Jia et al., 2021) image-text dataset is used by default, although we also explore the more recent DataComp-1B (Gadre et al., 2023). To
improve the region-awareness of the backbone, we adopt the cropped positional embedding (Kim et al., 2023) for both phases of pretraining.
Detection finetuning setup.We train the detector with image size \(1024\times 1024\) and use window attention in the backbone with grid size 4\(\times\)4 as in Li et al. (2022). The learning rate for the backbone is set lower as 0.6\(\times\) of the detector head layers. The text embedding of each category is calculated as the average over the CLIP prompt templates. We use the batch size 128, the SGD optimizer with momentum 0.9. The initial learning rate and iterations are set to 0.18 and 36.8k for LVIS, and 0.02 and 11.3k for COCO datasets.
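As an illustration, the per-category text embedding can be computed as sketched below. The listed templates are examples rather than the full CLIP template set, and renormalizing the averaged embedding is our assumption.

```python
def category_text_embedding(category, text_encoder,
                            templates=("a photo of a {}.", "a close-up photo of a {}.")):
    """Average a category's embedding over prompt templates (sketch).

    text_encoder maps a list of strings to L2-normalized embeddings of shape (num_prompts, D).
    """
    prompts = [t.format(category) for t in templates]
    emb = text_encoder(prompts).mean(dim=0)   # average over templates
    return emb / emb.norm()                   # renormalize the averaged embedding
```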
### Comparison to the State-Of-The-Art
Comparisons on LVIS.In Table 1, we report the comparison with existing methods on the challenging LVIS benchmark. The 'frequent' and 'common' classes of the dataset belong to the base categories C\({}_{B}\), and the 'rare' classes are the novel categories C\({}_{N}\). The primary benchmark metric is the mask AP on rare classes (mask AP\({}_{r}\)). The best DITO model achieves the performance of 38.4 mask AP\({}_{r}\), which significantly outperforms the current state-of-the-art approaches RO-ViT and CFM-ViT with the same ViT-L backbone by +6.3 and +4.5 points using the same pretraining data (Jia et al., 2021). We achieve 40.4 mask AP\({}_{r}\) when we replace the ALIGN (Jia et al., 2021) with DataComp-1B (Gadre et al., 2023). With the smaller ViT-B backbone, DITO maintains a healthy margin of around +4 AP\({}_{r}\) above existing approaches based on ViT-B.
In addition, we present system-level comparisons in the unconstrained setting of Minderer et al. (2022) where additional non-LVIS\({}_{rare}\) object annotations _e.g_. Objects365 (Shao et al., 2019) can be used in detection training (denoted by SS in Table 1), and report the box AP\({}_{r}\) metric. In this setting, we
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline method & pretrained & detector & **mask** & mask & **box** & box \\ & model & backbone & **AP\({}_{r}\)** & AP & **AP\({}_{r}\)** & AP \\ \hline _ConvNet based:_ & & & & & & \\ VL-PLM (Zhao et al., 2022) & ViT-B/32 & R-50 & 17.2 & 27.0 & - & - \\ OV-DETR (Zang et al., 2022) & ViT-B/32 & R-50 & 17.4 & 26.6 & - & - \\ DetPro-Cascade (Du et al., 2022) & ViT-B/32 & R-50 & 20.0 & 27.0 & 21.7 & 30.5 \\ Rasheed (Rasheed et al., 2022) & ViT-B/32 & R-50 & 21.1 & 25.9 & - & - \\ PromptDet (Feng et al., 2022) & ViT-B/32 & R-50 & 21.4 & 25.3 & - & - \\ OADB (Wang et al., 2023) & ViT-B/32 & R-50 & 21.7 & 26.6 & 21.9 & 28.7 \\ RegionCLIP (Zhong et al., 2022) & R-50x4 & -50x4 & 22.0 & 32.3 & - & - \\ CORA (Wu et al., 2023b) & R-50x4 & R-50x4 & - & - & 22.2 & - \\ BARON (Wu et al., 2023a) & ViT-B/32 & R-50 & 22.6 & 27.6 & 23.2 & 29.5 \\ CondHead (Wang, 2023) & R-50x4 & R-50x4 & 24.4 & 32.0 & 25.1 & 33.7 \\ Detic-CN2 (Zhou et al., 2022b) & ViT-B/32 & R-50 & 24.6 & 32.4 & - & - \\ ViL-Dens (Gu et al., 2022) & EffNet-B7 & EffNet-B7 & 26.3 & 29.3 & 27.0 & 31.8 \\
3WayAs(Arandjelovic et al., 2023) & NFNet-F6 & NFNet-F6 & - & - & 30.1 & 44.6 \\ F-VLM (Kuo et al., 2023) & R-50x64 & R-50x64 & 32.8 & 34.9 & - & - \\ \hline _ViT based:_ & & & & & & \\ OWL-ViT (Minderer et al., 2022) & ViT-H/14 & ViT-H/14 & - & - & 23.3 & 35.3 \\ OWL-ViT (Minderer et al., 2022) & ViT-L/14 & ViT-L/14 & - & - & 25.6 & 34.7 \\ RO-ViT (Kim et al., 2023b) & ViT-B/16 & ViT-B/16 & 28.0 & 30.2 & 28.4 & 31.9 \\ RO-ViT (Kim et al., 2023b) & ViT-L/16 & ViT-L/16 & 32.1 & 34.0 & 33.6 & 36.2 \\ CFM-ViT (Kim et al., 2023a) & ViT-B/16 & ViT-B/16 & 28.8 & 32.0 & 29.6 & 33.8 \\ CFM-ViT (Kim et al., 2023a) & ViT-L/16 & ViT-L/16 & 33.9 & 36.6 & 35.6 & 38.5 \\
**DITO (ours)** & ViT-B/16 & ViT-B/16 & **32.5** & 34.0 & **34.9** & 36.9 \\
**DITO (ours)** & ViT-L/16 & ViT-L/16 & **38.4** & 36.3 & **41.1** & 40.4 \\
**DITO (ours) (ours)** & ViT-L/16 & ViT-L/16 & **40.4** & 37.7 & **42.8** & 40.7 \\ \hline _with external box annotations:_ & & & & & \\ OWL-ViT (Minderer et al., 2022) § & ViT-B/16 & ViT-B/16 & - & - & 20.6 & 27.2 \\ OWL-ViT (Minderer et al., 2022) § & ViT-L/14 & ViT-L/14 & - & - & 31.2 & 34.6 \\ DetCLIPv2 (Yao et al., 2023) § & Swin-L & Swin-L & - & - & 33.3 & 36.6 \\
**DITO (ours)** § & ViT-L/16 & ViT-L/16 & - & - & **45.8** & 44.2 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **LVIS open-vocabulary detection (mask/box AP). DITO outperforms the best existing approach by +6.5 mask AP\({}_{r}\) in the standard benchmark, and by +12.5 box AP\({}_{r}\) in the unconstrained setting (Minderer et al., 2022). \(\kappa\): uses the public DataComp-1B (Gadre et al., 2023) data in pretraining. §: uses filtered Objects365 box annotations in detection training.**
additionally pretrain the detector on Objects365 annotations with LVIS\({}_{rare}\) categories filtered out. Then, we conduct the same LVIS\({}_{base}\) training in the standard setting. The additional Objects365 annotations provide a significant boost to DITO and yields the state-of-the-art 45.8 box AP\({}_{r}\), +12.5 AP\({}_{r}\) over the state of the art in this setting.
Comparisons on COCO.We present the comparisons on COCO benchmark in Table 2. The main metric is AP50 of novel categories (novel AP). Our model demonstrates a very competitive result 40.8 novel AP without using pseudo labeling (Feng et al., 2022; Huynh et al., 2022; Zhao et al., 2022; Zhong et al., 2022), weak supervision (Zhou et al., 2022), or externally trained detector modules (Rasheed et al., 2022; Wu et al., 2023). Among the ViT-based methods, DITO outperforms RO-ViT (Kim et al., 2023) and CFM-ViT (Kim et al., 2023) by a clear margin of +7.2 and +6.1 points, respectively. In the unconstrained setting, we observe that training on additional box labels of Shao et al. (2019) before the COCO\({}_{base}\) training further improves DITO to 46.1 novel AP, surpassing all existing works that use external box annotations. The Objects365 annotations are deduplicated against all 80 COCO categories.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline method & pretrained & detector & \multirow{2}{*}{**novel AP**} & \multirow{2}{*}{\(\mathrm{AP}\)} \\ & model & backbone & & \\ \hline _ConvNet based:_ & & & & \\ OV-KCNN (Zareian et al., 2021) & R-50 & R-50 & 22.8 & 39.9 \\ VILD (Gu et al., 2022) & ViT-B/32 & R-50 & 27.6 & 51.3 \\ F-VLM (Kuo et al., 2023) & R-50 & R-50 & 28.0 & 39.6 \\ OV-DETR (Zang et al., 2022) & ViT-B/32 & R-50 & 29.4 & 52.7 \\
**with pseudo box labels:** & & & & \\ PromptDet (Feng et al., 2022) & ViT-B/32 & R-50 & 26.6 & 50.6 \\ XPM (Huynh et al., 2022) & R-50 & R-50 & 27.0 & 41.2 \\ OADB (Wang et al., 2023) & ViT-B/32 & R-50 & 30.0 & 47.2 \\ CondHead (Wang, 2023) & R-50 & R-50 & 33.7 & 51.7 \\ VL-PLM (Zhao et al., 2022) & ViT-B/32 & R-50 & 34.4 & 53.5 \\ RegionCLIP (Zhong et al., 2022) & R-50x4 & R-50x4 & 39.3 & 55.7 \\ CORA (Wu et al., 2023) & R-50x4 & R-50x4 & **43.1** & 56.2 \\
**with weak supervision:** & & & & \\ Detic-CN2 (Zhou et al., 2022) & ViT-B/32 & R-50 & 24.6 & 32.4 \\ \hline _VT based:_ & & & & \\ RO-ViT (Kim et al., 2023) & ViT-B/16 & ViT-B/16 & 30.2 & 41.5 \\ RO-ViT (Kim et al., 2023) & ViT-L/16 & ViT-L/16 & 33.0 & 47.7 \\ CFM-ViT (Kim et al., 2023) & ViT-B/16 & ViT-B/16 & 30.8 & 42.4 \\ CFM-ViT (Kim et al., 2023) & ViT-L/16 & ViT-L/16 & 34.1 & 46.0 \\
**DITO (ours)** & ViT-B/16 & ViT-B/16 & **38.6** & 48.5 \\
**DITO (ours)** & ViT-L/16 & ViT-L/16 & **40.8** & 50.3 \\ \hline _methods with external box annotations:_ & & & & \\ Rasheed (Rasheed et al., 2022) \(\ddagger\) & ViT-B/32 & R-50 & 36.9 & 51.5 \\ BARON (Wu et al., 2023a) \(\ddagger\) & R-50 & R-50 & 42.7 & 51.7 \\
**DITO (ours) §** & ViT-L/16 & ViT-L/16 & **46.1** & 54.2 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **COCO open-vocabulary detection (box AP50). DITO demonstrates a very competitive novel category AP without using pseudo labeling or weak supervision. \(\ddagger\): uses an external MViT detector (Maaz et al., 2022). §: uses filtered Objects365 annotations for training.**
\begin{table}
\begin{tabular}{l c c c c} \hline \hline method & backbone & AP & AP\({}_{50}\) & AP\({}_{75}\) \\ \hline Supervised (Gu et al., 2022) & R-50 & 25.6 & 38.6 & 28.0 \\ ViLD (Gu et al., 2022) & R-50 & 11.8 & 18.2 & 12.6 \\ DetProg (Du et al., 2022) & R-50 & 12.1 & 18.8 & 12.9 \\ BARON (Wu et al., 2023) & R-50 & 13.6 & 21.0 & 14.5 \\ F-VLM (Kuo et al., 2023) & R-50x16 & 16.2 & 25.3 & 17.5 \\ F-VLM (Kuo et al., 2023) & R-50x64 & 17.7 & 27.4 & 19.1 \\ \hline RO-ViT (Kim et al., 2023) & ViT-L/16 & 17.1 & 26.9 & 18.5 \\ CFM-ViT (Kim et al., 2023) & ViT-L/16 & 18.7 & 28.9 & 20.3 \\
**DITO (ours)** & ViT-L/16 & **19.8** & **30.4** & **21.2** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Transfer detection from LVIS to Objects365 (box AP). All models are tested on Objects365 dataset following the setup of Gu et al. (2022). DITO outperforms existing methods with comparable backbone size.**
Transfer detection.We further evaluate DITO in the transfer detection setting, where the open-vocabulary detector trained on one dataset is tested on another dataset without any finetuning. Following the setup of ViLD (Gu et al., 2022), the detectors trained on LVIS\({}_{base}\) are evaluated on Objects365 dataset by simply replacing the text embeddings. Table 3 shows that DITO achieves 19.8 AP, outperforming previous methods using ConvNet or ViT backbones of similar size.
### Ablation Studies
To study the advantages of DITO methods, we provide ablation studies on the LVIS benchmark. We use ViT-L/16 backbone and report box AP\({}_{r}\) for ablations.
DITO framework.In Table 3(a), we verify that the gains demonstrated by DITO come from our proposed detection-oriented pretraining (DOP) as well as the shifted-window learning (SWL). Our baseline uses the CLIP pretraining with cropped positional embedding (Kim et al., 2023). For detection, the baseline adopts the frozen backbone inference method as in CFM-ViT (Kim et al., 2023). It already achieves a very strong performance of 36.2 AP\({}_{r}\), higher than the state-of-the-art CFM-ViT by +0.6 point. On top of this, our proposed detection-oriented pretraining achieves a gain of 42.3 AP\({}_{r}\) and SWL improves the baseline by +4.1 AP\({}_{r}\). By combining both strategies, DITO shows a significant gain of +5.0 AP\({}_{r}\) over the baseline. Table 3(b) reports that excluding the frozen backbone inference leads to a drop of -3.4 AP\({}_{r}\), which is consistent with the findings in CFM-ViT.
Detection-Oriented Pretraining.To further investigate our detection-oriented image-text pretraining, we ablate in Table 4(a) by progressively adding the FPN and Faster R-CNN head into the pretraining. The 'w/ FPN' only introduces the FPN, where each pyramid level map is mean-pooled into an image embedding, followed by the image-text contrastive loss per level. It improves the baseline by +1.0 AP\({}_{r}\). Incorporating both FPN and Faster R-CNN head (see Section 3.2) further improves the alignment between the pretraining and the detection finetuning, achieving 38.5 AP\({}_{r}\), a gain of +2.3 points over the baseline. Table 4(b) ablates different RoI sampling strategies and the pooling of the RoI embeddings. We first observe that sampling multiple RoIs instead of using the whole-image RoI achieves the best performance. After computing the RoI embeddings, we find max pooling over the RoI embeddings performs better than mean pooling, and pooling per pyramid level (_i.e._, multi-level image-text supervision) is better than pooling over all levels.
Table 6: **Ablation on Shifted-Window Learning (SWL).**
Table 4: **Ablation studies** on LVIS benchmark. We train on base (‘frequent’ + ‘common’) categories, and report AP\({}_{r}\) on novel (‘rare’) categories.
Table 5: **Ablation on Detection-Oriented Pretraining (DOP). Best setting is marked by gray.**
Shifted-Window Learning for detection.As discussed in Section 3.1, our baseline detector adopts a separate frozen ViT backbone at inference (Kim et al., 2023a) to compute the VLM scores \(z_{i}\) in Equation 2. In Table 6a, we ablate applying the shifted window in the frozen and finetuned (detector) backbones. We find that it improves the performance in all cases, and using the shifted window in both backbones brings the largest gain of +4.1 AP\({}_{r}\). Table 6b compares different window attention divisions. Note that the grid size '1\(\times\)1' uses global attention throughout all layers without any window attention and thus without the shifting operation. OWL-ViT (Minderer et al., 2022) uses this full global attention with its pretrained CLIP ViT backbone. Interestingly, we observe that our SWL outperforms the full global attention by +4.6 AP\({}_{r}\), with the grid size 4\(\times\)4 being the optimal value. Table 6c ablates the window shift size (\(s\) in Section 3.3). Setting the shift size equal to the window attention cell size (_i.e._, 16) is equivalent to the non-shifted window baseline. We choose stride \(s=8\) in our method, as a denser stride brings only marginal improvements.
### Visualization
In Figure 2, we visualize the similarity map between the image features and a query text embedding using the Flickr30K (Plummer et al., 2015) and COCO Captions (Chen et al., 2015) datasets. For each sample, we compare the baseline backbone features (middle) and our detection-oriented pre-training features (right). We select pyramid level 4 which has the same resolution as the backbone features and apply the Faster R-CNN head in a sliding window manner to obtain the dense feature map. We observe that our detection-oriented pretraining results in more localized semantic information given the queried image-text pairs. In Figure 3, we visualize the DITO outputs on LVIS novel categories and Ego4D (Grauman et al., 2022) which is real-world and out-of-distribution data. We use the same DITO detector trained on LVIS\({}_{base}\), and observe that it is able to capture many novel and unseen objects even under the significant domain shift.
## 5 Conclusion
We introduce DITO - a detection-oriented approach to learn from noisy image-text pairs for open-vocabulary detection. By replacing the classification architecture with detection architecture, DITO learns the locality-sensitive information from large-scale image-text data without a need for pseudo-labeling or additional losses. Furthermore, we present a shifted-window learning method to mitigate the bias of window attention pattern in vision transformer detectors. Experiments show that DITO outperforms the state-of-the-art by large margins on the LVIS benchmark, and is very competitive on the COCO benchmark and transfer detection. We hope this work would inspire the community to explore detection-oriented image-text pretraining for open-vocabulary localization tasks.
Figure 3: **DITO prediction.** LVIS results (left three) only show the novel categories (_e.g._, _bulldozer_, _fishbowl_, _subwoofer_, _heron_). Although Ego4D (right two) is real-world, out-of-distribution data, many unseen objects are detected (_e.g._, _steel lid_, _sticker on the wall_, _recycle bin_). Best viewed zoomed in.
Figure 2: **Visual-text similarity map**. For each example, we show the paired image (left) and text (bottom) input, and the visual-text similarity map using the backbone features (middle) or our detection-oriented pre-training features (right). We use Flickr30K (top row) and COCO Captions (bottom row) datasets.
## 6 Ethics statement
Our models utilize the rich image-text information acquired through pretraining, which may reinforce deficiencies and biases in the raw web data and expose potentially harmful biases or stereotypes. The models we trained are designed for academic research purposes and need more rigorous fairness studies before serving other applications.
## 7 Reproducibility statement
We plan to open source the code for reproducibility. The model, experimental and implementation details are provided in the paper and supplementary materials. The detector model (He et al., 2017; Li et al., 2022), pretraining image-text data (Gadre et al., 2023), and detection datasets (Gupta et al., 2019; Shao et al., 2019) in this work are publicly available.
|
2309.15369 | Joint Computing, Pushing, and Caching Optimization for Mobile Edge
Computing Networks via Soft Actor-Critic Learning | Mobile edge computing (MEC) networks bring computing and storage capabilities
closer to edge devices, which reduces latency and improves network performance.
However, to further reduce transmission and computation costs while satisfying
user-perceived quality of experience, a joint optimization in computing,
pushing, and caching is needed. In this paper, we formulate the joint-design
problem in MEC networks as an infinite-horizon discounted-cost Markov decision
process and solve it using a deep reinforcement learning (DRL)-based framework
that enables the dynamic orchestration of computing, pushing, and caching.
Through the deep networks embedded in the DRL structure, our framework can
implicitly predict user future requests and push or cache the appropriate
content to effectively enhance system performance. One issue we encountered
when considering three functions collectively is the curse of dimensionality
for the action space. To address it, we relaxed the discrete action space into
a continuous space and then adopted soft actor-critic learning to solve the
optimization problem, followed by utilizing a vector quantization method to
obtain the desired discrete action. Additionally, an action correction method
was proposed to compress the action space further and accelerate the
convergence. Our simulations under the setting of a general single-user,
single-server MEC network with dynamic transmission link quality demonstrate
that the proposed framework effectively decreases transmission bandwidth and
computing cost by proactively pushing data on future demand to users and
jointly optimizing the three functions. We also conduct extensive parameter
tuning analysis, which shows that our approach outperforms the baselines under
various parameter settings. | Xiangyu Gao, Yaping Sun, Hao Chen, Xiaodong Xu, Shuguang Cui | 2023-09-27T02:44:38Z | http://arxiv.org/abs/2309.15369v1 | Joint Computing, Pushing, and Caching Optimization for Mobile Edge Computing Networks via Soft Actor-Critic Learning
###### Abstract
Mobile edge computing (MEC) networks bring computing and storage capabilities closer to edge devices, which reduces latency and improves network performance. However, to further reduce transmission and computation costs while satisfying user-perceived quality of experience, a joint optimization in computing, pushing, and caching is needed. In this paper, we formulate the joint-design problem in MEC networks as an infinite-horizon discounted-cost Markov decision process and solve it using a deep reinforcement learning (DRL)-based framework that enables the dynamic orchestration of computing, pushing, and caching. Through the deep networks embedded in the DRL structure, our framework can implicitly predict user future requests and push or cache the appropriate content to effectively enhance system performance. One issue we encountered when considering three functions collectively is the curse of dimensionality for the action space. To address it, we relaxed the discrete action space into a continuous space and then adopted soft actor-critic learning to solve the optimization problem, followed by utilizing a vector quantization method to obtain the desired discrete action. Additionally, an action correction method was proposed to compress the action space further and accelerate the convergence. Our simulations under the setting of a general single-user, single-server MEC network with dynamic transmission link quality demonstrate that the proposed framework effectively decreases transmission bandwidth and computing cost by proactively pushing data on future demand to users and jointly optimizing the three functions. We also conduct extensive parameter tuning analysis, which shows that our approach outperforms the baselines under various parameter settings.
computing, pushing, caching, mobile edge computing network, deep reinforcement learning, soft actor-critic.
## I Introduction
Recent advancements in smart mobile devices have enabled various emerging applications such as virtual / augmented reality (VR/AR) [1], online gaming, and autonomous driving [2, 3, 4]. These applications impose ultra-high communication and computation requirements, making it challenging for mobile operators to minimize communication and computation costs while ensuring the user-perceived quality of experience [5]. In response to these challenges, the mobile edge computing network has emerged as a promising solution by pushing caching and computing resources to access points, base stations (BSs), and mobile devices at the wireless network edge [1].
### _Prior Art: Caching, Pushing, and Computing Design_
Caching can significantly improve bandwidth utilization by placing popular content closer to users for future reuse, leveraging the high degree of asynchronous content reuse in mobile traffic [6]. Caching policies can be classified into two types, _static caching_ and _dynamic caching_, depending on whether the cached contents remain unchanged or are dynamically updated. Static caching policies are generally determined based on the content popularity distribution, and the cache state remains unchanged for a relatively long time [6, 7]. In [7], a collaborative content caching scheme among base stations (BSs) in cache-enabled multi-cell cooperative networks is considered to minimize the average request delay, formulated by the stochastic request traffic modeling. Dynamic caching policies, on the other hand, involve updating the content placement from time to time by using instantaneous user request information, such as the least recently used (LRU) and least frequently used (LFU) policy [8]. However, since most caching policies are reactive operations and do not consider proactive pushing, the system performance can be further improved.
Joint pushing and caching can indeed improve system performance by proactively transmitting contents during low traffic periods to satisfy future user demands. Several studies have explored this approach, such as [9] which considers a multi-user wireless network with multicast opportunities to minimize current and future transmission costs, and [10] which uses content request delay information to predict a user's request time for certain content items and maximize effective throughput. Additionally, [11] builds on RDI to characterize asynchronous user requests and proposes a coded joint pushing and caching method to minimize network traffic load with low complexity. In [12], a continuous-time optimization problem is formulated to determine optimal transmission and caching policies for small cell and Device-to-Device networks with
known user demands in advance. However, existing joint pushing and caching policies only consider content delivery and have not taken into account the computation part, which limits their applicability to modern mobile traffic services such as mobile VR delivery.
To effectively serve modern mobile traffic, the joint design of caching, computing, and communication (3C) has been receiving increasing attention. One direction of 3C research focuses on the joint utilization of cache and computing resources at MEC servers to minimize transmission latency [13, 14] and energy consumption [15]. Another direction of 3C research considers the joint utilization of 3C resources at mobile devices to minimize communication costs in both single-user scenarios [1, 16] and multiple-user scenarios [17]. However, these joint 3C designs consider only static caching, where the cache states remain unchanged and do not take into account the benefits of pushing. Therefore, the system performance can be further improved through dynamic caching policies.
### _DRL-based Systems_
Recent advances in deep learning (DL) have enabled the development of novel approaches for complicated classification and detection tasks [18, 19], as well as the solving of complex optimization problems that traditional methods may not be effective or efficient at handling [20, 21, 22, 23, 24, 25, 26, 27]. Among all popular DL models, reinforcement learning (RL) [28, 29, 30] has been widely used in scheduling and optimization problems, such as transportation and resource allocation [31, 32, 33], by learning an optimal policy for the agent to take actions that maximize a reward signal. By using RL, an agent can learn from experience and adapt its behavior over time to achieve the best possible outcomes. For example, in [20], a hierarchical RL algorithm is proposed to solve the joint optimization of pushing and caching in a multi-access edge computing network with multiuser and multicast data. The objective is to maximize bandwidth utilization and decrease the total quantity of data transmitted. In [21], the actor-critic RL framework is utilized to solve the joint optimization of caching, computation offloading, and radio resource allocation in the fog-enabled Internet of Things (IoTs), with the aim of minimizing the average end-to-end delay. In [22], Ning _et al._ develop an intent-based traffic control system that utilizes DRL for the 5G-envisioned Internet of Connected Vehicles, which can dynamically orchestrate edge computing and content caching to improve the profits of mobile network operators. Furthermore, in [23], a distributed DL-based offloading algorithm is proposed, which uses multiple parallel deep neural networks to generate offloading decisions for MEC networks, where multiple wireless devices choose to offload their computation tasks to an edge server. In [24, 25], Zhao _et al._ and Huang _et al._ devise MEC networks for IoTs by using DRL frameworks to make the offloading strategy for offloading some computational tasks from IoT users to the computational access points or MEC server to reduce system latency and energy consumption. However, a common issue encountered when applying DRL-based systems to real-world optimization problems is the curse of dimensionality, which cannot be effectively and efficiently solved by general frameworks and optimization tools, especially for large-scale networks and tasks.
### _Contributions_
To address the issues mentioned earlier, we propose a joint computing, pushing, and caching policy optimization framework in MEC networks1. Our contributions are as follows:
Footnote 1: The code and sample data of this framework will be made open-source and available at _[https://github.com/Xiangyu-Gao/sac_joint_compute_push_cache_](https://github.com/Xiangyu-Gao/sac_joint_compute_push_cache_)
* We propose a model for the MEC network that computes its transmission and computation costs while taking into account computing, pushing, and caching actions. By representing system requests and their transition probabilities through a first-order F-state Markov chain, we formulate the joint optimization problem as an infinite-horizon discounted-cost Markov decision process with the dual objectives of reducing transmission and computation costs. Solving this problem requires dynamically optimizing the computing, pushing, and caching decisions over time to achieve the best overall performance.
* To address the curse of dimensionality in the joint optimization problem, we implemented a continuous-space DRL approach known as soft actor-critic (SAC) learning [29]. Unlike classic discrete-space DRL algorithms, such as deep Q-learning [30], which rely on the Q-networks with a size linearly increased with the action space, SAC only requires learning the Gaussian-format Q-functions [29]. As a result, SAC significantly reduces the number of parameters that need to be learned in a neural network. However, this does introduce the challenge of having an output action in continuous space that cannot be directly utilized. Therefore, we have designed an action quantization and correction algorithm that allows us to tailor SAC to our discrete optimization problem. Furthermore, the SAC algorithm is known for its stability and ease of convergence [29].
* We present simulation results with various system parameters under the setting of a general single-user, single-server MEC network to demonstrate the effectiveness of the proposed SAC algorithm. Our results show that by considering the joint optimization of computing, pushing, and caching, the performance of the MEC network can be significantly improved in terms of lower computation cost and reduced transmission cost. Moreover, our approach outperforms baseline methods that consider only a subset of these functions, demonstrating the benefits of the joint optimization.
### _Outline_
The paper is organized as follows: Section II outlines the system model for the MEC network. Section III formulates the joint policy optimization problem. Section IV presents the utilization of SAC in optimization. Section V covers implementation details and evaluation results. Section VI concludes the paper.
## II System Model
Without loss of generality, we begin by considering a simple mobile edge network consisting of one MEC server and one mobile device, as shown in Fig. 1. The system model can be extended to the multi-user scenario by summing the objective functions of multiple users and considering the restrictions of the total communication and computing resources. The MEC server has a large cache size, sufficient to proactively store the input and output data of all tasks requested by the mobile device. In contrast, the cache size of the mobile device is limited to a capacity denoted as \(C\) (in bits). The mobile device is equipped with multi-core computing capabilities, each with a computation frequency \(f_{D}\) (in cycles/s), and the number of computing cores is assumed to be \(M\). The system operates over an infinite time horizon, with time slotted and indexed by \(t=0,1,2,\cdots\), each with a fixed length of \(\tau\) seconds. At the start of each time slot, the mobile device submits one task request, which is delay-intolerant and must be served before the end of the slot. The tasks are categorized as delay-intolerant due to the critical importance of upholding optimal user experience and ensuring high-quality service for applications such as AR/VR, real-time communication, and streaming applications which are notably sensitive to delays. Due to the mobility of the device, the data transmission rate for the link between the mobile device and the server may vary over time. To model this dynamic effect, we adopt the signal-to-noise ratio (SNR) following the Shannon theory, which measures the quality of the link. The system is designed to optimize the joint pushing, caching, and computing functions to minimize the computation and transmission costs of the network while ensuring timely and efficient task execution.
### _Task Model_
Assuming that the mobile device will request a total of \(F\) tasks, we define the task set \(\mathcal{F}\) as \(\mathcal{F}\triangleq\{1,2,\ldots,f,\ldots,F\}\). Each task \(f\in\mathcal{F}\) is characterized by a \(4\)-item tuple \(\left\{I_{f}\ (\text{in bits}),\ O_{f}\ (\text{in bits}),\ w_{f}\ (\text{in cycles/bit}),\ \tau\ (\text{in seconds})\right\}\). Specifically, \(I_{f}\) represents the size of the input data generated from the Internet which can be cached. \(O_{f}\) represents the size of the output data after the computation is completed2. \(w_{f}\) and \(\tau\) denote the required computation cycles per bit and the maximum service latency, respectively.
Footnote 2: In many systems, the actual output size might not be known beforehand, especially for computational tasks that involve dynamic data processing. It is possible to use historical data or estimations based on the characteristics of the input data and the computation process.
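For concreteness, the following minimal Python sketch represents one task by this tuple; the field names and the example values are illustrative and not taken from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    """One task f, described by the 4-item tuple (I_f, O_f, w_f, tau)."""
    input_bits: float       # I_f: size of the cacheable input data (bits)
    output_bits: float      # O_f: size of the output data after computation (bits)
    cycles_per_bit: float   # w_f: required computation cycles per input bit
    deadline_s: float       # tau: maximum service latency (seconds)

# Hypothetical task set F = {1, ..., 4}; sizes loosely follow the defaults of Sec. V-B.
tasks = {f: Task(16000.0, 30000.0, 800.0, 0.02) for f in range(1, 5)}
```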
### _System State_
#### II-B1 Request State
At each time slot \(t\), the mobile device submits a single task request. The request state at time \(t\) is denoted by \(A(t)\in\mathcal{F}\) representing the requested task, where \(A(t)=f\) signifies that task \(f\) in set \(\mathcal{F}\) is being requested by the mobile device. The size of \(\mathcal{F}\) is \(F\). To model the evolution of requested tasks and their transition probabilities, we employed a first-order F-state Markov chain [20, 34], referred to as \(A(t):t=0,1,2,\cdots\). In this context, each state within the Markov chain corresponds to a distinct task, and the total number of tasks is assumed to be \(F\). The choice to use a first-order Markov chain is rooted in its assumption that the probability of transitioning to a particular state is solely dependent on the current state. The probability of transitioning to state \(j\in\mathcal{F}\) at time slot \(t+1\), given that the request state at time slot \(t\) is \(i\in\mathcal{F}\), is represented by \(\Pr[A(t+1)=j|A(t)=i]\). It is assumed that \(A(t)\) is time-homogeneous. We denote the transition probability matrix of \(A(t)\) with \(\textbf{Q}\triangleq\left(q_{i,j}\right)_{i\in\mathcal{F},j\in\mathcal{F}}\), where \(q_{i,j}\triangleq\Pr\left[A(t+1)=j|A(t)=i\right]\). Moreover, we focus our attention on an irreducible Markov chain to reflect the idea that any state in the system can be reached from any other state with a non-zero probability. We denote the limiting distribution of \(A(t)\) with \(\textbf{p}\triangleq(p_{f})_{f\in\mathcal{F}}\). Here, \(p_{f}\triangleq\lim_{t\rightarrow\infty}\Pr[A(t)=f]\), and it should be noted that \(p_{f}=\sum_{i\in\mathcal{F}}p_{i}q_{i,f}\) for all \(f\in\mathcal{F}\).
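As a sketch of this request model, the snippet below samples \(A(t)\) from a hypothetical transition matrix **Q** and recovers the limiting distribution **p** as the normalized left eigenvector of **Q** for eigenvalue 1; the matrix entries and seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical row-stochastic transition matrix Q = (q_ij) over F = 4 tasks.
Q = np.array([[0.7, 0.1, 0.1, 0.1],
              [0.1, 0.7, 0.1, 0.1],
              [0.1, 0.1, 0.7, 0.1],
              [0.1, 0.1, 0.1, 0.7]])

def sample_requests(Q, T, a0=0):
    """Sample the request process A(t), t = 0, ..., T-1, from the Markov chain."""
    A = np.empty(T, dtype=int)
    A[0] = a0
    for t in range(1, T):
        A[t] = rng.choice(len(Q), p=Q[A[t - 1]])
    return A

# Limiting distribution p solves p = pQ (left eigenvector of Q for eigenvalue 1).
eigvals, eigvecs = np.linalg.eig(Q.T)
p = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
p /= p.sum()
print(sample_requests(Q, 10), p)   # p is uniform for this symmetric example
```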
#### II-B2 Cache State
Let \(S_{f}^{I}(t)\in\{0,1\}\) denote the indicator of the cache state of the input data for task \(f\) stored in the mobile device. Here, \(S_{f}^{I}(t)=1\) means that the input data for task \(f\) is cached in the mobile device, while \(S_{f}^{I}(t)=0\) implies that the input data is not cached. Similarly, let \(S_{f}^{O}(t)\in\{0,1\}\) denote the indicator of the cache state of the output data for task \(f\) stored in the mobile device, where \(S_{f}^{O}(t)=1\) represents that the output data for task \(f\) is cached in the mobile device, and \(S_{f}^{O}(t)=0\) implies that the output data is not cached. The cache size of the mobile device is denoted by \(C\) (in bits). The cache size constraint is given by
\[\sum_{f=1}^{F}I_{f}S_{f}^{I}(t)+O_{f}S_{f}^{O}(t)\leq C \tag{1}\]
which enforces that the sum of the sizes of input and output data cached for all tasks in the mobile device cannot exceed the cache size.
Fig. 1: Illustration of MEC network with single MEC server and single mobile device. The mobile device is assumed to be moving at a small speed, such as an iPhone being carried by an individual. The channel quality for communication between the mobile device and the MEC server is modeled as an SNR, which may change over time due to the movement of the mobile device.
We define the cache state of the mobile device at time slot \(t\), denoted by \(\textbf{S}(t)\triangleq(S_{f}^{I}(t),S_{f}^{O}(t))_{f\in\mathcal{F}}\in\mathcal{S}\), where \(\mathcal{S}\triangleq\{(S_{f}^{I},S_{f}^{O})_{f\in\mathcal{F}}\in\{0,1\}^{2F}: \sum_{f\in\mathcal{F}}I_{f}S_{f}^{I}+O_{f}S_{f}^{O}\leq C\}\) represents the cache state space of the mobile device. Here, \(N_{\min}\triangleq\frac{C}{\max_{f\in\mathcal{F}}\{I_{f},O_{f}\}}\) and \(N_{\max}\triangleq\frac{C}{\min_{f\in\mathcal{F}}\{I_{f},O_{f}\}}\) denote the lower and upper bounds, respectively, on the number of data items that can be cached simultaneously. The cardinality of \(\mathcal{S}\) is accordingly bounded by \(\binom{F}{N_{\min}}\) from below and \(\binom{F}{N_{\max}}\) from above.
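A minimal helper for the capacity constraint of Eq. (1) might look as follows; the lists and the capacity value are illustrative.

```python
def cache_is_feasible(s_in, s_out, I, O, C):
    """Check the cache size constraint sum_f (I_f * S^I_f + O_f * S^O_f) <= C of Eq. (1)."""
    used = sum(I[f] * s_in[f] + O[f] * s_out[f] for f in range(len(I)))
    return used <= C

# Hypothetical example: caching the input of task 0 and the output of task 2
# needs 16000 + 30500 = 46500 bits, which exceeds C = 40000 bits.
I = [16000, 16500, 15800, 16200]
O = [30000, 29500, 30500, 31000]
print(cache_is_feasible([1, 0, 0, 0], [0, 0, 1, 0], I, O, C=40000))  # False
```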
#### II-B3 System State
At time slot \(t\), the system state consists of both system request state and system cache state, represented by \(\textbf{X}(t)\triangleq(A(t),\textbf{S}(t))\in\mathcal{F}\times\mathcal{S}\), where \(\mathcal{F}\times\mathcal{S}\) represents the system state space.
### _System Action_
#### II-C1 Reactive Computation Action
At each time slot \(t\), the reactive transmission bandwidth cost and the reactive computation energy cost are denoted as \(B^{R}(t)\) and \(E^{R}(t)\), respectively. The task request \(A(t)\) is served based on the current system state \(\textbf{X}(t)=(A(t),\textbf{S}(t))\) as follows:
* If the cache state \(S_{A(t)}^{O}(t)\) is equal to 1, it indicates that the output of task \(A(t)\) is already cached locally, hence it can be retrieved without the need for any transmission or computation. As a result, the delay is negligible, and both the reactive computation energy and transmission cost become zero.
* Assuming that \(S_{A(t)}^{I}(t)=1\) and \(S_{A(t)}^{O}(t)=0\), it is possible to compute the requested task \(A(t)\) directly using the locally cached input data. Let us define \(c_{R,f}(t)\in\{1,\cdots,M\}\) as the number of computation cores allocated for reactively processing task \(f\) at time slot \(t\) on the mobile device. Consequently, we can set \(c_{R,f}(t)=0\) for all \(f\in\mathcal{F}\backslash A(t)\). To ensure that the requested task \(A(t)\) is completed within \(\tau\), we must have \(\frac{I_{A(t)}w_{A(t)}}{\tau}\leq c_{R,A(t)}(t)f_{D}\).3 Here \(I_{A(t)}\) and \(w_{A(t)}\) denote the input size and the computational workload of task \(A(t)\), respectively. We can calculate the energy consumed for computing one cycle with frequency \(c_{R,f}(t)f_{D}\) on the mobile device as \(\mu c_{R,f}^{2}(t)f_{D}^{2}\), where \(\mu\) is the effective switched capacitance related to the chip architecture indicating the power efficiency of the CPU. Therefore, the reactive computation energy cost \(E^{R}(t)\) is given by \(\mu c_{R,A(t)}^{2}(t)f_{D}^{2}I_{A(t)}w_{A(t)}\), and the reactive transmission cost \(B^{R}(t)\) is zero. Footnote 3: We assume that \(\frac{I_{A(t)}}{\tau}\textbf{I}(A(t)=f)\leq Mf_{D}\), for feasibility, where \(\textbf{I}(A(t)=f)\) is the indicator function that is equal to 1 if \(A(t)=f\), and 0 otherwise, and \(M\) is the maximum number of computation cores. This assumption holds for all \(f\in\mathcal{F}\).
* If \(S_{A(t)}^{I}(t)=0\) and \(S_{A(t)}^{O}(t)=0\), the mobile device must download the input data of task \(A(t)\) from the MEC server before computing it locally. Let \(\text{SNR}(t)\) be the SNR value of the data transmission link at time slot \(t\). The required latency can be expressed as \(\frac{I_{A(t)}}{B^{R}(t)\log_{2}\left(1+\text{SNR}(t)\right)}+\frac{I_{A(t)}w _{A(t)}}{c_{R,A(t)}(t)f_{D}}\), where \(B^{R}(t)\log_{2}\left(1+\text{SNR}(t)\right)\) is the channel capacity given by Shannon theory. To satisfy the latency constraint, i.e., \(\frac{I_{A(t)}}{B^{R}(t)\log_{2}\left(1+\text{SNR}(t)\right)}+\frac{I_{A(t)}w _{A(t)}}{c_{R,A(t)}(t)f_{D}}\leq\tau\), the minimum reactive transmission cost \(B^{R}(t)\) is given by \(\frac{I_{A(t)}}{\left(\tau-\frac{I_{A(t)}w_{A(t)}}{c_{R,A(t)}(t)f_{D}}\right) \log_{2}\left(1+\text{SNR}(t)\right)}\).4 The reactive computation energy cost \(E^{R}(t)\) is given by \(\mu c_{R,A(t)}^{2}(t)f_{D}^{2}I_{A(t)}w_{A(t)}\). Footnote 4: The steps of deriving \(B^{R}(t)\) from the preceding latency constraint are as follows: First, we have \(I_{A(t)}/\left(B^{R}(t)\log_{2}\left(1+\text{SNR}(t)\right)\right)\leq\tau-I_{A( t)}w_{A(t)}/\left(c_{R,A(t)}(f_{D})\right)\). Then, we can get \(I_{A(t)}/B^{R}(t)\leq\left(\tau-I_{A(t)}w_{A(t)}/\left(c_{R,A(t)}f_{D}\right) \right)\log_{2}\left(1+\text{SNR}(t)\right)\). Finally, we can get \(B^{R}\left(t\right)\geq I_{A(t)}/\left(\left(\tau-I_{A(t)}w_{A(t)}/\left(c_{ R,A(t)}f_{D}\right)\right)\log_{2}\left(1+\text{SNR}(t)\right)\right)\).
In summary, at time slot \(t\), the reactive computation action \(c_{R,f}(t)\) should satisfy
\[c_{R,f}(t)\leq\textbf{1}(A(t)=f)\left(1-S_{f}^{O}(t)\right)M,\ \forall f\in\mathcal{F}, \tag{2}\]
and the reactive transmission cost \(B^{R}(t)\) is given by
\[B^{R}(t)=\left(1-S_{A(t)}^{I}(t)\right)\left(1-S_{A(t)}^{O}(t)\right)\frac{I_{A(t)}}{\left(\tau-\frac{I_{A(t)}w_{A(t)}}{c_{R,A(t)}(t)f_{D}}\right)\log_{2}\left(1+\text{SNR}(t)\right)}, \tag{3}\]
and the reactive computation cost \(E^{R}(t)\) is given by
\[E^{R}(t)=\left(1-S_{A(t)}^{O}(t)\right)\mu c_{R,A(t)}^{2}(t)f_{D}^{2}I_{A(t)}w_{A( t)}. \tag{5}\]
Let \(\textbf{c}_{R}\triangleq(c_{R,f})_{f\in\mathcal{F}}\in\Pi_{C}^{R}(\textbf{X})\) denote the reactive computation action of the system, where \(\Pi_{C}^{R}(\textbf{X})\triangleq\left\{(c_{R,f})_{f\in\mathcal{F}}\in\{0,1, \cdots,M\}^{F}:(2)\right\}\) represents the decision space for reactive computation of the system under state **X**. It can be observed from Eq. (2) that the size of the reactive computation action space is \(M+1\).
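The case analysis above can be condensed into a small routine returning the reactive bandwidth cost \(B^{R}(t)\) of Eq. (3) and the energy cost \(E^{R}(t)\) of Eq. (5). This is only a sketch: it assumes a feasible core count is supplied (infeasible choices are handled later by the action-correction rules of Sec. IV), and the argument layout is ours.

```python
import math

def reactive_costs(task, s_in, s_out, cores, f_D, mu, snr):
    """B^R(t) (Eq. (3)) and E^R(t) (Eq. (5)) for the requested task.

    task = (I, O, w, tau); s_in / s_out: cache indicators of the requested task;
    cores: c_{R,A(t)}(t); f_D: per-core frequency; mu: switched capacitance; snr: linear SNR.
    """
    I, O, w, tau = task
    if s_out:                                   # output cached: no transmission, no computing
        return 0.0, 0.0
    compute_time = I * w / (cores * f_D)        # local computing latency
    energy = mu * (cores * f_D) ** 2 * I * w    # E^R(t)
    if s_in:                                    # input cached: compute only
        return 0.0, energy
    # Input not cached: download it within the remaining time budget (footnote 4).
    bandwidth = I / ((tau - compute_time) * math.log2(1.0 + snr))
    return bandwidth, energy

# Hypothetical call with the Sec. V-B defaults and an SNR of about 1 dB (linear 1.26).
print(reactive_costs((16000, 30000, 800, 0.02), 0, 0, cores=6, f_D=1.7e8, mu=1e-19, snr=1.26))
```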
#### II-C2 Proactive Transmission or Pushing Action
Let \(b_{f}(t)\in\{0,1\}\) denote the binary decision variable for task \(f\in\mathcal{F}\), where \(b_{f}(t)=1\) indicates that the remote input data of task \(f\) is pushed to the mobile device, and \(b_{f}(t)=0\) otherwise. We assume that the pushed data is transmitted to the mobile device by the end of the time slot. To ensure compliance with the latency constraint, we enforce \(\frac{\sum_{f=1}^{F}I_{f}b_{f}(t)}{\tau}\leq B^{P}(t)\log_{2}\left(1+\text{SNR}( t)\right)\), where \(B^{P}(t)\) denotes the proactive transmission bandwidth cost. Thus, the minimum proactive transmission cost can be expressed as:
\[B^{P}(t)=\frac{\sum_{f=1}^{F}I_{f}b_{f}(t)}{\tau\log_{2}\left(1+\text{SNR}(t) \right)}. \tag{6}\]
In summary, the system pushing action is denoted by \(\textbf{b}\triangleq\left(b_{f}\right)_{f\in\mathcal{F}}\in\{0,1\}^{F}\). The size of the system pushing action space under system state **X** is \(2^{F}\).
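Correspondingly, the proactive transmission cost of Eq. (6) reduces to a one-line helper; the input sizes and SNR below are illustrative.

```python
import math

def pushing_cost(push_flags, I, tau, snr):
    """Proactive transmission bandwidth cost B^P(t) of Eq. (6)."""
    pushed_bits = sum(I[f] for f, b in enumerate(push_flags) if b)
    return pushed_bits / (tau * math.log2(1.0 + snr))

# Hypothetical example: push only the input data of task 3 within one slot.
print(pushing_cost([0, 0, 0, 1], I=[16000, 16500, 15800, 16200], tau=0.02, snr=1.26))
```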
#### II-C3 Cache Update Action
The cache state of each task \(f\in\mathcal{F}\) is updated according to
\[S_{f}^{I}(t+1) =S_{f}^{I}(t)+\Delta s_{f}^{I}(t), \tag{7}\] \[S_{f}^{O}(t+1) =S_{f}^{O}(t)+\Delta s_{f}^{O}(t), \tag{8}\]
where \(\Delta s_{f}^{I}(t)\in\{-1,0,1\}\) and \(\Delta s_{f}^{O}(t)\in\{-1,0,1\}\) denote the update action for the cache state of the input and output data of task \(f\), respectively. Then, we have \(\forall f\in\mathcal{F}\)
\[-S_{f}^{I}(t)\leq\Delta s_{f}^{I}(t)\leq\min\left\{b_{f}(t)+c_{R,f} (t),1-S_{f}^{I}(t)\right\} \tag{9}\] \[-S_{f}^{O}(t)\leq\Delta s_{f}^{O}(t)\leq\min\left\{c_{R,f}(t),1-S _{f}^{O}(t)\right\},\] (10) \[\sum_{f=1}^{F}I_{f}\left(S_{f}^{I}(t)+\Delta s_{f}^{I}(t)\right)+ O_{f}\left(S_{f}^{O}(t)+\Delta s_{f}^{O}(t)\right)\leq C, \tag{11}\]
where the left-hand side of Eq. (9) specifies that the removal of the input of task \(f\) from the mobile device is only possible if it has been previously cached. On the other hand, the right-hand side of Eq. (9) indicates that the caching of the input of task \(f\) into the mobile device is only allowed if it has not been cached before and if it is either proactively transmitted from the MEC server or reactively transmitted, i.e., if \(b_{f}(t)=1\) or \(c_{R,f}(t)>0\). Similarly, the left-hand side of Eq. (10) states that the output of task \(f\) can only be removed from the mobile device if it has been previously cached. On the other hand, the right-hand side of Eq. (10) specifies that the caching of the output of task \(f\) into the mobile device is only allowed if it has not been cached before and if it is reactively computed at the mobile device, i.e., if \(c_{R,f}(t)>0\). Finally, Eq. (11) requires that the updated cache state complies with the cache size constraint.
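A sketch that validates a candidate cache update against Eqs. (9)-(11) and then applies Eqs. (7)-(8) could look as follows; the function name and argument layout are ours.

```python
def apply_cache_update(s_in, s_out, d_in, d_out, push, cores, requested, I, O, C):
    """Return the updated cache state (Eqs. (7)-(8)), or None if Eqs. (9)-(11) are violated."""
    F = len(s_in)
    for f in range(F):
        reactive = cores if f == requested else 0
        # Eq. (9): input may be added only if pushed or reactively downloaded, removed only if cached.
        if not (-s_in[f] <= d_in[f] <= min(push[f] + reactive, 1 - s_in[f])):
            return None
        # Eq. (10): output may be added only if it was reactively computed in this slot.
        if not (-s_out[f] <= d_out[f] <= min(int(reactive > 0), 1 - s_out[f])):
            return None
    new_in = [s + d for s, d in zip(s_in, d_in)]
    new_out = [s + d for s, d in zip(s_out, d_out)]
    # Eq. (11): the updated cache must still fit into the capacity C.
    if sum(I[f] * new_in[f] + O[f] * new_out[f] for f in range(F)) > C:
        return None
    return new_in, new_out

# Hypothetical example: task 0 is requested and computed from its cached input,
# while the input of task 1 is pushed and then cached.
print(apply_cache_update([1, 0, 0, 0], [0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0],
                         push=[0, 1, 0, 0], cores=2, requested=0,
                         I=[16000] * 4, O=[30000] * 4, C=40000))
```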
In summary, let \(\Delta\mathbf{s}\triangleq\left(\Delta s_{f}^{I},\Delta s_{f}^{O}\right)_{f\in\mathcal{F}}\in\Pi_{\Delta\mathbf{s}}(\mathbf{X})\) denote the system cache update action, where \(\Pi_{\Delta\mathbf{s}}(\mathbf{X})\triangleq\left\{\left(\Delta s_{f}^{I},\Delta s_{f}^{O}\right)_{f\in\mathcal{F}}\in\{-1,0,1\}^{F}\times\{-1,0,1\}^{F}:\text{(9), (10), (11)}\right\}\) represents the cache update decision space of the system under state \(\mathbf{X}\).
### _SAC Learning_
SAC is an off-policy deep reinforcement learning method that maintains the advantages of entropy maximization and stability while offering sample-efficient learning [29]. It operates on an actor-critic framework where the actor is responsible for maximizing expected reward while simultaneously maximizing entropy. The critic evaluates the effectiveness of the policy being followed.
A general form of maximum-entropy RL is given by:
\[J(\pi)=\sum_{t=0}^{T}\mathbb{E}_{(\mathbf{x}_{t},\mathbf{a}_{t})\sim\rho_{\pi}} \left[r\left(\mathbf{x}_{t},\mathbf{a}_{t}\right)+\alpha\mathcal{H}\left(\pi \left(\cdot\mid\mathbf{x}_{t}\right)\right)\right] \tag{15}\]
where the temperature parameter \(\alpha\) determines the relative importance of the entropy term against the reward \(r\), and the entropy term is given by \(\mathcal{H}\left(\pi\left(\cdot\mid\mathbf{x}_{t}\right)\right)=\mathbb{E}_{ \mathbf{a}_{t}}\left[-\log\pi\left(\mathbf{a}_{t}\mid\mathbf{x}_{t}\right)\right]\).
The SAC algorithm is a policy iteration approach designed to solve the optimization problem in Eq. (15) [29]. It comprises two essential components: soft Q-function \(Q_{\theta}\left(\mathbf{x}_{t},\mathbf{a}_{t}\right)\), and policy \(\pi_{\phi}\left(\mathbf{a}_{t}\mid\mathbf{x}_{t}\right)\). To deal with the large continuous domains, neural networks are utilized to approximate these components, with the network parameters denoted by \(\theta\) and \(\phi\). For example, the policy is modeled as a Gaussian distribution with a fully connected network providing the mean and covariance value, and the Q-function is also approximated using a fully connected neural network. Following [29], the update rules for \(\theta\) and \(\phi\) are provided below.
The soft Q-function parameters can be trained to minimize the soft Bellman residual
\[\begin{split} J_{Q}(\theta)=\mathbb{E}_{(\mathbf{x}_{t}, \mathbf{a}_{t})\sim\mathcal{D}}\Big{[}&\frac{1}{2}\big{(}Q_{ \theta}\left(\mathbf{x}_{t},\mathbf{a}_{t}\right)-\left(r\left(\mathbf{x}_{t},\mathbf{a}_{t}\right)+\right.\\ &\left.\gamma\mathbb{E}_{\mathbf{x}_{t+1}\sim p}\left[V_{\bar{ \theta}}\left(\mathbf{x}_{t+1}\right)\right]\right)\big{)}^{2}\Big{]},\end{split} \tag{16}\]
where \(\mathcal{D}\) is the distribution of previously sampled states and actions, \(p\) is the transition probability between states, and the value function \(V_{\bar{\theta}}(\mathbf{x}_{t})\) is implicitly parameterized through the soft Q-function parameters as follows
\[V_{\bar{\theta}}\left(\mathbf{x}_{t}\right)=\mathbb{E}_{\mathbf{a}_{t}\sim \pi}\left[Q_{\bar{\theta}}\left(\mathbf{x}_{t},\mathbf{a}_{t}\right)-\alpha \log\pi\left(\mathbf{a}_{t}\mid\mathbf{x}_{t}\right)\right] \tag{17}\]
The update makes use of a target soft Q-function \(Q_{\bar{\theta}}\) with parameters \(\bar{\theta}\) obtained as an exponentially moving average of the soft Q-function weights \(\theta\), which helps stabilize training. The soft Bellman residual \(J_{Q}(\theta)\) in Eq. (16) can be optimized with stochastic gradients
\[\begin{split}\hat{\nabla}_{\theta}J_{Q}(\theta)=& \nabla_{\theta}Q_{\theta}\left(\mathbf{a}_{t},\mathbf{x}_{t}\right) \Big{(}Q_{\theta}\left(\mathbf{x}_{t},\mathbf{a}_{t}\right)-\left(r\left( \mathbf{x}_{t},\mathbf{a}_{t}\right)+\right.\\ &\left.\gamma\left(Q_{\bar{\theta}}\left(\mathbf{x}_{t+1}, \mathbf{a}_{t+1}\right)-\alpha\log\left(\pi_{\phi}\left(\mathbf{a}_{t+1}\mid \mathbf{x}_{t+1}\right)\right)\right)\Big{)}\Big{)}.\end{split} \tag{18}\]
The policy parameters \(\phi\) can be learned by directly minimizing the expected KL divergence in
\[\begin{split} J_{\pi}(\phi)=\mathbb{E}_{\mathbf{x}_{t}\sim \mathcal{D}}\Big{[}&\mathbb{E}_{\mathbf{a}_{t}\sim\pi_{\phi}} \big{[}\alpha\log\left(\pi_{\phi}\left(\mathbf{a}_{t}\mid\mathbf{x}_{t}\right) \right)-\\ &\left.Q_{\theta}\left(\mathbf{x}_{t},\mathbf{a}_{t}\right)\right] \Big{]}\end{split} \tag{19}\]
A neural network transformation is used to parameterize the policy as \(\mathbf{a}_{t}=f_{\phi}\left(\epsilon_{t};\mathbf{x}_{t}\right)\), where \(\epsilon_{t}\) is an input noise vector sampled from a Gaussian distribution. The objective stated by Eq. (19) can be rewritten as:
\[\begin{split} J_{\pi}(\phi)=\mathbb{E}_{\mathbf{x}_{t}\sim \mathcal{D},\epsilon_{t}\sim\mathcal{N}}\big{[}&\alpha\log\pi_{ \phi}\left(f_{\phi}\left(\epsilon_{t};\mathbf{x}_{t}\right)\mid\mathbf{x}_{t }\right)\\ &-Q_{\theta}\left(\mathbf{x}_{t},f_{\phi}\left(\epsilon_{t}; \mathbf{x}_{t}\right)\right)\big{]},\end{split} \tag{20}\]
where \(\pi_{\phi}\) is defined implicitly in terms of \(f_{\phi}\). The gradient of Eq. (20) is approximated with
\[\begin{split}\hat{\nabla}_{\phi}J_{\pi}(\phi)=& \nabla_{\phi}\alpha\log\left(\pi_{\phi}\left(\mathbf{a}_{t}\mid \mathbf{x}_{t}\right)\right)+\big{(}\nabla_{\mathbf{a}_{t}}\alpha\log\left( \pi_{\phi}\left(\mathbf{a}_{t}\mid\mathbf{x}_{t}\right)\right)\\ &-\nabla_{\mathbf{a}_{t}}Q\left(\mathbf{x}_{t},\mathbf{a}_{t} \right)\big{)}\nabla_{\phi}f_{\phi}\left(\epsilon_{t};\mathbf{x}_{t}\right), \end{split} \tag{21}\]
where \(\mathbf{a}_{t}\) is evaluated using \(f_{\phi}\left(\epsilon_{t};\mathbf{x}_{t}\right)\).
**Remark**: In the maximum entropy framework, the soft policy iteration that alternates between the policy evaluation Eq. (16) and the policy improvement Eq. (19) converges to the optimal policy. _Proof in_ [29].
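To make the update rules concrete, the following is a minimal PyTorch-style sketch of the soft Q-loss (Eqs. (16)-(18)) and the policy loss (Eq. (20)). It is only a sketch under our own simplifications: a single critic instead of the two used in Algorithm 1, a fixed temperature \(\alpha\) instead of the automatic tuning mentioned in Sec. V-C, and illustrative state/action dimensions; the hidden width of 256 follows the stated implementation.

```python
import torch
import torch.nn as nn

state_dim, action_dim, alpha, gamma = 22, 9, 0.2, 0.99

q_net = nn.Sequential(nn.Linear(state_dim + action_dim, 256), nn.ReLU(), nn.Linear(256, 1))
q_target = nn.Sequential(nn.Linear(state_dim + action_dim, 256), nn.ReLU(), nn.Linear(256, 1))
q_target.load_state_dict(q_net.state_dict())   # target critic Q_thetabar starts as a copy

class GaussianPolicy(nn.Module):
    """Fully connected network giving the mean and (log) std of a Gaussian policy."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU())
        self.mean = nn.Linear(256, action_dim)
        self.log_std = nn.Linear(256, action_dim)

    def sample(self, x):
        h = self.body(x)
        dist = torch.distributions.Normal(self.mean(h), self.log_std(h).clamp(-5, 2).exp())
        a = dist.rsample()                                  # reparameterized draw f_phi(eps; x)
        return a, dist.log_prob(a).sum(-1, keepdim=True)    # action and log pi_phi(a | x)

policy = GaussianPolicy()

def sac_losses(x, a, r, x_next):
    with torch.no_grad():                                   # soft Bellman target of Eq. (16)
        a_next, logp_next = policy.sample(x_next)
        q_next = q_target(torch.cat([x_next, a_next], -1))
        target = r + gamma * (q_next - alpha * logp_next)
    q_loss = 0.5 * (q_net(torch.cat([x, a], -1)) - target).pow(2).mean()
    a_new, logp_new = policy.sample(x)                      # policy objective of Eq. (20)
    pi_loss = (alpha * logp_new - q_net(torch.cat([x, a_new], -1))).mean()
    return q_loss, pi_loss

# Hypothetical random batch, just to show the expected shapes.
losses = sac_losses(torch.randn(8, state_dim), torch.randn(8, action_dim),
                    torch.randn(8, 1), torch.randn(8, state_dim))
print([float(l) for l in losses])
```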
### _Action Quantization and Correction_
In the context of SAC learning, the output at any given time \(t\) corresponds to the SAC action \(\mathbf{a}_{t}\), which seeks to maximize the policy value \(\pi_{\phi}\left(\mathbf{a}_{t}\mid\mathbf{x}_{t}\right)\) with respect to the current SAC state \(\mathbf{x}_{t}\). In order to assess the reward and update the cache, it is necessary to _discretize the continuous SAC action_\(\mathbf{a}_{t}\) and obtain a discrete action \(\big{(}c_{A(t)},\mathbf{b},\Delta\mathbf{s}\big{)}\). To achieve this goal, a simple action quantization approach was implemented that relies on thresholding and integer projection.
**Action quantization**: Let us consider an element \(\bar{\eta}\) in the SAC action \(\mathbf{a}\) and its corresponding quantized version \(\eta\) with the selection set \(S_{\eta}\). To obtain \(\eta\) from \(\bar{\eta}\), we adopt a uniform thresholding method for integer projection. Specifically, we use the following equation:
\[\eta=\min S_{\eta}+\left(\bar{\eta}-\min S_{\eta}\right)\text{mod}\ \frac{\max S_{\eta}-\min S_{\eta}}{\max S_{\eta}-\min S_{\eta}+1} \tag{22}\]
As an example, consider the push action \(b_{f}(t)\in S_{b_{f}}=\{0,1\}\); its quantized value is determined as \(b_{f}(t)=\bar{b}_{f}(t)\bmod 0.5\) using Eq. (22).
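The sketch below implements one natural reading of the uniform-thresholding rule of Eq. (22): the interval \([\min S_{\eta},\max S_{\eta}]\) is split into \(|S_{\eta}|\) equal bins and the continuous output is projected onto the integer indexing its bin. This bin-projection form is our interpretation and may differ in detail from the exact rule used in the paper.

```python
def quantize(value, s_min, s_max):
    """Project a continuous SAC output onto the integer selection set {s_min, ..., s_max}."""
    n_levels = s_max - s_min + 1
    step = (s_max - s_min) / n_levels                 # width of each uniform bin
    idx = int((value - s_min) // step)                # bin index of the continuous value
    return s_min + min(max(idx, 0), n_levels - 1)     # clamp to the valid range

print(quantize(0.62, 0, 1))   # push flag b_f: 0.62 in [0, 1] maps to 1
print(quantize(3.7, 0, 8))    # core count c in {0, ..., M} with M = 8 maps to 4
```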
**Action correction**: The valid action space of the system is highly constrained due to the limitations imposed by Eq. (2), (3), (9), (10), and (11), resulting in a sparsely-spanning space with a cardinality of \((M+1)\times 2^{F}\times 3^{2F}\). Consequently, even with techniques such as penalty reward, it becomes challenging for the SAC algorithm to identify which actions are valid in this vast space. Therefore, the post-quantization action \(\big{(}c_{A(t)},\mathbf{b},\Delta\mathbf{s}\big{)}\) obtained from SAC is often invalid. In order to address this issue, we propose _Rules 1, 5, and 7_ to ensure that the output action of SAC is valid, while satisfying the constraints outlined in Section II-C. Additionally, we introduce _Rules 2, 3, 4, and 6_ to improve the training process and enhance the system's overall performance by further compressing the action space, reducing unnecessary costs, and minimizing waste. A partial sketch of these rules in code is given after the list below.
* _Rule 1_: When \(S_{A(t)}^{O}\) equals 0, the system checks if the suggested number of computation cores, denoted as \(c_{A(t)}\), is less than the minimum workable value given by \(\lceil I_{A(t)}w_{A(t)}/(\tau f_{D})\rceil\) where \(\lceil\cdot\rceil\) represents rounding up to the nearest integer. If this is the case, \(c_{A(t)}\) is updated to \(\lceil I_{A(t)}w_{A(t)}/(\tau f_{D})\rceil\). On the other hand, if \(S_{A(t)}^{O}\) equals
1, \(c_{A(t)}\) is set to 0. These rules are designed to fulfill the service latency constraint and reduce unnecessary computation.
* _Rule_ 2: When \(S_{f}^{I}+S_{f}^{O}\geq 1\), we set \(b_{f}=0\). This rule indicates that there is no need for proactive pushing if any data of a task is already cached.
* _Rule_ 3: To minimize the cost of pushing data to the mobile device, we ensure that at most one task is proactively transmitted, and this task must have the largest \(\bar{b}_{f}\) value among all un-pushed tasks. The selected task will have a \(b_{f}\) value of 1, while all other tasks will have a \(b_{f}\) value of 0. This approach is adopted to avoid unnecessary pushing costs, as the mobile device is only capable of processing one task request per time slot.
* _Rule 4_: If \(b_{f}=1\), we set \(\Delta s_{f}^{I}=1\), indicating that the data being proactively pushed needs to be cached.
* _Rule 5_: If the sum of cache sizes given by Eq. (11) exceeds the cache capacity, we drop the input or output cache depending on the ascending order of their corresponding \(\bar{\mathbf{s}}\) values until the cache capacity is satisfied.
* _Rule 6_: If the sum of the caches given by Eq. (11) is less than the capacity, we attempt to add reactive input or output cache based on the decreasing order of the continuous variables \(\Delta\bar{s}_{A(t)}^{I}\) and \(\Delta\bar{s}_{A(t)}^{O}\).
* _Rule 7_: The cache action \(\Delta\mathbf{s}\) should be clipped according to the minimum and maximum limits specified in Eq. (9) and Eq. (10).
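The following partial sketch covers Rules 1-3; the remaining rules follow the same pattern, and the helper name and argument layout are ours.

```python
import math

def correct_action(cores, push, push_scores, requested, s_in, s_out, I, w, tau, f_D, M):
    """Apply Rules 1-3 to a post-quantization action (cores, push)."""
    # Rule 1: enforce the minimum workable core count, or zero cores if the output is cached.
    if s_out[requested] == 0:
        cores = min(max(cores, math.ceil(I[requested] * w[requested] / (tau * f_D))), M)
    else:
        cores = 0
    # Rule 2: never push a task whose input or output data is already cached.
    push = [0 if s_in[f] + s_out[f] >= 1 else push[f] for f in range(len(push))]
    # Rule 3: keep at most one push, the candidate with the largest continuous score b_bar_f.
    if sum(push) > 1:
        best = max((f for f in range(len(push)) if push[f]), key=lambda f: push_scores[f])
        push = [1 if f == best else 0 for f in range(len(push))]
    return cores, push

# Hypothetical example with F = 4 tasks: the suggested single core is raised to the
# minimum workable value, and only the highest-scoring push candidate is kept.
print(correct_action(1, [1, 1, 0, 0], [0.9, 0.4, 0.1, 0.2], requested=2,
                     s_in=[0, 0, 0, 0], s_out=[0, 0, 0, 1],
                     I=[16000] * 4, w=[800] * 4, tau=0.02, f_D=1.7e8, M=8))
```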
### _Reward Design_
The reward function \(r(\mathbf{x},\mathbf{a})\) for the SAC state \(\mathbf{x}\) and action \(\mathbf{a}\) is defined as a function of the resulting bandwidth and computation cost. Specifically, it is given by
\[r(\mathbf{x},\mathbf{a})=-\kappa(B(t)+\lambda E(t)) \tag{23}\]
where \(\kappa\) is a normalization coefficient that is set to \(10^{-6}\) in this paper, and \(\lambda\) is the cost weight that trades off the computation cost against the transmission cost.
The complete SAC learning algorithm is presented in Algorithm 1. The step sizes for stochastic gradient descent, \(\lambda_{Q}\), and \(\lambda_{\pi}\) are set to \(1\times 10^{-4}\). The target smoothing coefficient, \(\xi\), is chosen to be 0.005.
```
Initialize parameters \(\theta,\bar{\theta},\phi\) for networks \(Q_{\theta}\), \(Q_{\bar{\theta}}\), \(\pi_{\phi}\).
Initialize learning rate \(\lambda_{Q}\), \(\lambda_{\pi}\), and weight \(\xi\).
for each iteration do
    for each environment step do
        \(\mathbf{a}_{t}\sim\pi_{\phi}\left(\mathbf{a}_{t}\mid\mathbf{x}_{t}\right)\)
        \(\mathbf{x}_{t+1}\sim p\left(\mathbf{x}_{t+1}\mid\mathbf{x}_{t},\mathbf{a}_{t}\right)\)
        \(\mathbf{a}_{t}\) quantization & correction, \(r\left(\mathbf{x}_{t},\mathbf{a}_{t}\right)\) calculation
        \(\mathcal{D}\leftarrow\mathcal{D}\cup\{(\mathbf{x}_{t},\mathbf{a}_{t},r\left(\mathbf{x}_{t},\mathbf{a}_{t}\right),\mathbf{x}_{t+1})\}\)
    endfor
    for each gradient step do
        \(\theta_{i}\leftarrow\theta_{i}-\lambda_{Q}\hat{\nabla}_{\theta_{i}}J_{Q}\left(\theta_{i}\right)\) for \(i\in\{1,2\}\)
        \(\phi\leftarrow\phi-\lambda_{\pi}\nabla_{\phi}J_{\pi}(\phi)\)
        \(\bar{\theta}_{i}\leftarrow\xi\theta_{i}+(1-\xi)\bar{\theta}_{i}\) for \(i\in\{1,2\}\)
    endfor
endfor
```
**Algorithm 1** SAC Learning for Our Problem
## V Implementation and Evaluation
### _Baselines_
The proposed system is built on the _proactive transmission and dynamic-computing-frequency reactive service with cache_, referred to as **PTDFC**. For comparison, we have selected the following baselines:
* _Most-recently-used proactive transmission and least-recently-used cache replacement (MRU-LRU)_: This is a heuristic algorithm [8, 9], where at each time slot, the requested task is reactively served, and the input data of the most-recently-used task is proactively cached. When the cache is full, the input data cache of the least-recently-used task is replaced. We choose to cache only the input data, excluding the output data (post-calculation), due to the common scenario where output data tends to be larger in size than input data. This size difference makes caching output data less efficient in the heuristic design for the purpose of reducing overall costs. The number of computing cores being used is fixed at \(0.75M\).
* _Most-frequently-used proactive transmission and least-frequently-used cache replacement (MFU-LFU)_: This algorithm is similar to the MRU-LRU algorithm, except that the most/least recently used task is replaced with the most/least frequently used task [8, 9].
* _Dynamic-computing-frequency reactive service with no cache (DFNC)_: This algorithm provides reactive service to the requested task, where the mobile device first downloads the input data from the MEC server and then computes it to obtain the output data.
* _Dynamic-computing-frequency reactive service with cache (DFC)_: This algorithm provides reactive service to the requested task, with the option of caching the input and output data into the limited capacity.
It is important to note that the DFC, DFNC, and PTDFC algorithms are all implemented with the SAC algorithm, and as a result, we refer to them as '_SAC-enabled algorithms_' in the following analysis.
### _Data Simulation_
In this study, the training and testing data were generated through a simulation process involving the creation of a Markov chain from a set of tasks \(\mathcal{F}\). The transition probability from a task \(i\) to another randomly selected task \(j\in\mathcal{F}\setminus i\) was established as the maximum transition probability, \(p_{i,j}=p_{\max}\). For other tasks \(k\in\mathcal{F}\setminus j\), the probability \(p_{i,k}\) was calculated as \((1-p_{i,j})\frac{|p_{i,k}^{\prime}|}{\sum_{f\in\mathcal{F}\setminus j}|p_{i,f}^{\prime}|}\), where \(p_{i,k}^{\prime}\) or \(p_{i,f}^{\prime}\) were randomly sampled from a uniform distribution. The resulting Markov chain represented the request popularity and transition preferences of the \(F\) tasks. Subsequently, \(10^{6}\) requested tasks were sampled using a frame-by-frame method. To account for the slow movement of mobile devices, the SNR of the communication channel was dynamically changed every 300 epochs, with four possible values: \(0.5\,\mathrm{dB}\), \(1\,\mathrm{dB}\), \(2\,\mathrm{dB}\), and \(3\,\mathrm{dB}\). The transition between
different SNRs was randomized with equal probabilities. The simulation was conducted using default configurations, which included \(M=8\), \(F=4\), a maximum transition probability of \(0.7\), \(\lambda=1\), \(I_{f}\) around \(16000\) bits with random offset, \(O_{f}\) around \(30000\) bits with random offset, \(w=800\) cycles/bit, \(\tau=0.02\) seconds, \(f_{D}=1.7\times 10^{8}\) cycles/s, \(\mu=10^{-19}\), and \(C=40000\) bits.
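A sketch of this Markov chain construction (our variable names, seeded for reproducibility) is:

```python
import numpy as np

def build_transition_matrix(F, p_max, rng):
    """Row-stochastic Q: one random successor j != i gets p_max; the remaining tasks share
    1 - p_max in proportion to uniform random draws, as described above."""
    Q = np.zeros((F, F))
    for i in range(F):
        j = rng.choice([k for k in range(F) if k != i])
        Q[i, j] = p_max
        others = [k for k in range(F) if k != j]
        weights = rng.uniform(size=len(others))
        Q[i, others] = (1.0 - p_max) * weights / weights.sum()
    return Q

rng = np.random.default_rng(0)
Q = build_transition_matrix(F=4, p_max=0.7, rng=rng)
print(Q.sum(axis=1))   # every row sums to 1
```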
### _Implementation_
For the purposes of training and stabilization, the SAC action \(\mathbf{a}_{t}\) and system state \(\mathbf{x}_{t}\) are normalized to fall within the range of \([-1,1]\). Implementation of the system is accomplished through the use of Python and PyTorch. Training and testing processes are executed on a computer with a TITAN RTX GPU, utilizing a batch size of 256, a discount factor of \(\gamma=0.99\), automatic entropy temperature \(\alpha\) tuning [29], a hidden-layer size of 256, one model update per step, one target update per 1000 steps, and a replay buffer size of \(1\times 10^{7}\). The testing process is executed 10 epochs after every 10 training epochs, and the training and testing processes are halted when the reward and loss have converged.
### _Convergence Analysis_
We present the training convergence results of three SAC-based algorithms, PTDFC, DFC, DFNC, and two heuristic algorithms, MFU-LFU and MRU-LRU in Fig. 2. The curves plot the reward versus epochs under different SNR conditions for these algorithms. It is important to note that the MFU-LFU and MRU-LRU algorithms are heuristic in nature, lacking parameters for training. Despite this, we have included their reward outcomes in Fig. 2, aiming to provide a more comprehensive perspective on their relative performance compared to others. During the first 300 epochs, with \(\text{SNR}=1\,\mathrm{dB}\), the PTDFC, DFC, and DFNC algorithms commence with neural network parameters initialized randomly and achieve convergence in a substantial number of epochs (250, 135, and 20 epochs, respectively). PTDFC requires more training epochs to converge than DFC and DFNC simply because of its larger action space. Starting from epoch 300, the SNR value is increased to \(2\,\mathrm{dB}\), and the three SAC-based algorithms converge again in less than 13 epochs using the pre-trained model from the previous epochs. This finding demonstrates the remarkable generalization ability of SAC-based algorithms to handle SNR change cases. These SAC-based algorithms can get fine-tuned and converged again within a few epochs (around 10). The quick convergence ability is also validated at epochs 600 and 900 when the SNR changes to \(0.5\,\mathrm{dB}\) and \(3\,\mathrm{dB}\), respectively. We also noticed that there are significant discrepancies in the convergence time between the two \(\text{SNR}=1\,\mathrm{dB}\) stages. In the later \(\text{SNR}=1\,\mathrm{dB}\) stage (epoch 1200-1500), the PTDFC achieves convergence in approximately 10 epochs after the SNR transition. This can be attributed to the solid foundation of well-trained parameters established during the preceding \(\text{SNR}=3\,\mathrm{dB}\) stage. This trend reaffirms the system's adeptness in rapidly adapting to environmental SNR changes.
The optimization problem at hand involves both linear and nonlinear objective functions and constraints, as well as binary variables. Consequently, it is classified as an Integer Nonlinear Programming (INLP) problem and is notoriously difficult to solve. Traditional optimization algorithms for INLP (such as Branch and Bound) are not suitable for this problem due to their exponential convergence time and the assumption of global knowledge of the environment and its dynamics. Moreover, in the event of a change in the environment, such as a variation in channel SNR, it takes a considerable amount of time to solve the problem and achieve convergence again. Classical machine learning algorithms, especially standard reinforcement learning, also face challenges in scaling with the large dimension of the variable set, requiring an excessively large network size and convergence time to solve the problem.
### _Numerical Results_
The system performance of the proposed PTDFC algorithm and the baselines (DFC, DFNC, MRU-LRU, MFU-LFU) in terms of transmission bandwidth cost and computation cost for different channel SNRs is presented in Fig. 3. The results show that the PTDFC algorithm achieves the lowest cost for transmission bandwidth for various channel SNRs, followed by the DFC, MFU-LFU, DFNC, and MRU-LRU algorithms. In terms of computation cost, the top-performing algorithms are PTDFC, MFU-LFU/MRU-LRU, DFC, and DFNC, respectively. Overall, the PTDFC algorithm achieves a reduction of around \(5\times 10^{5}\) bits/s in transmission cost and \(3\times 10^{5}\) J in
Fig. 3: The system performance for the proposed PTDFC algorithm and the baselines with the configuration stated in Section. V-B and V-C. (left) Transmission cost vs. SNR. (right) Computation cost vs. SNR.
Fig. 2: Training reward of PTDFC, DFC, DFNC, MFU-LFU, MRU-LRU algorithms when the SNR values are dynamically changed every 300 epochs.
computation cost for every SNR condition, compared to the second-best algorithm. It is also observed that all algorithms take less transmission bandwidth for the requested task as the SNR value increases, indicating that a higher SNR results in better channel quality.
### _Qualitative Results Analysis_
Fig. 4 provides an example of the status and action of four requests when deploying the PTDFC algorithm. The figure visualizes the requested task, cache state, action, and reward of each time slot to show the joint computing, pushing, and caching optimization of the four tasks. In this example, at \(t_{0}\), the mobile device requests task 1 from the MEC server, which has empty cache content. The system then makes reactive transmission and computing for task 1 with five cores and pushes the input data of unrequested task 3, followed by caching the input data of tasks 1 and 3. At \(t_{1}\), the requested task is task 3, and the system makes the reactive computing of the cached task 3 with four cores and pushes the input data of task 4. Then, the system replaces the cache of task 1 with the input data for task 4. Similarly, at \(t_{2}\), the requested task is task 4, and the system makes the reactive computing of the cached task 4 with five cores and pushes the input data of task 2. Finally, the system removes the cache for task 3 and caches the input data for task 2. The example illustrates that the SAC-based PTDFC system is capable of predicting the user's future requests using deep networks and pushing or caching the appropriate content to enhance system performance.
### _Tuning Analysis_
In this section, we investigate the impact of several crucial parameters on the performance of the proposed PTDFC algorithm. These parameters include the cache size (\(C\)), number of computing cores (\(M\)), number of tasks (\(F\)), maximum transition probabilities, base computing frequency (\(f_{D}\)), task input size (\(I_{f}\)), task output size (\(O_{f}\)), tolerable service delays (\(\tau\)), and cost weights (\(\lambda\)). To analyze the effects of each parameter, we hold the other parameters constant and observe the resulting changes in performance. The default values for these parameters are specified in Section V-B, and we maintain a fixed channel SNR value of \(1\,\mathrm{dB}\) to isolate the effects of parameter tuning.
#### V-F1 Different Cache Size \(C\)
In Fig. 5, we have presented the averaged transmission and computation costs of three SAC-enabled algorithms, DFNC, DFC, and PTDFC, as well as two heuristic algorithms, MRU-LRU and MFU-LFU, under different cache sizes \(C\). It is worth noting that the DFNC algorithm is not affected by changes in cache size, as it only provides reactive service without caching. The other algorithms show a decrease in transmission costs as the cache size is increased, due to the availability of more locally cached input data. Moreover, our proposed PTDFC algorithm consistently achieves lower transmission and computation costs than the other algorithms, thanks to its ability to dynamically adjust the cache via proactive transmission. With a very large cache size, (e.g., \(C=50000\) bits), the performance of PTDFC and DFC is similar because the cache is large enough to store all input data and there is no need for a proactive transmission. Furthermore, we have observed a consistent overlapping trend in the computation costs of MRU-LRU and MFU-LFU across various configurations. This overlapping behavior can be attributed to our design choice in both algorithms, wherein solely the input data is cached. Consequently, the performance of MRU-LRU and MFU-LFU in terms of computation cost tends to align, as governed by Eq. (5), when \(S_{A(t)}^{O}(t)=0\).
#### V-F2 Different Number of Computation Cores \(M\)
Fig. 5: Impact of varying the cache size \(C\) when using the default configuration and fixed SNR. (left) Transmission cost vs. \(C\). (right) Computation cost vs. \(C\).
Fig. 4: A qualitative example for the joint optimization of 4 tasks using SAC-based PTDFC algorithm. (left) Visualization of the Markov transition probability among 4 tasks. (right) The requested task, cache state, action, and reward for the first four time slots of the proposed SAC system. If no action is mentioned, it defaults to no change with a value of 0.
In Fig. 6, we present the performance of the five algorithms under different numbers of computation cores \(M\). As the number of computing cores increases, the transmission cost of all five algorithms decreases, while their computation cost increases correspondingly. This is because all the reactive tasks are time-sensitive, and the system tends to utilize more computing cores for computing to reduce the computing time and leave more time for reactive transmission, which would effectively reduce the transmission cost and total cost. The proposed PTDFC algorithm consistently achieves a low transmission cost and computation cost by selecting an appropriate computing core number for the required task to achieve a better reward or a smaller cost. In contrast, the heuristic algorithms MRU-LRU and MFU-LFU have high transmission costs for small \(M\) and significant computing costs for large \(M\), as their computing frequency is linearly adjusted with the increase of computing cores.
#### V-F3 Different Number of Tasks \(F\)
In Fig. 7, we present the performance of five algorithms under various numbers of tasks in the request set. As the number of tasks increases, we observe a slight increase in the transmission and computation costs for the DFC and PTDFC algorithms, possibly because only a small portion of all tasks can be cached or proactively transmitted, while the rest has to be reactively served, leading to higher costs. However, it is worth noting that the proposed PTDFC algorithm consistently outperforms all other algorithms across different numbers of tasks. This is due to its ability to dynamically select the best task in the task set for caching and proactive transmission, thereby minimizing costs associated with reactive service.
#### V-F4 Different Maximum Transition Probabilities
We investigated the impact of the maximum transition probabilities \(p_{\max}\) on the Markov chain simulation, which is a crucial parameter. Accordingly, we conducted experiments by varying the \(p_{\max}\) value and evaluated the performance of five algorithms under different \(p_{\max}\) values. The results, shown in Fig. 8, indicate that a higher \(p_{\max}\) value leads to an easier prediction of future service requests based on the current request, resulting in the corresponding proactive push operation. Consequently, the PTDFC algorithm, equipped with proactive transmission, demonstrated significant reductions in transmission and computation costs compared to the other algorithms, which do not consider proactive transmission. It is worth noting that the other algorithms exhibited little to no variation in their performance across different \(p_{\max}\) values, as they do not rely on the push operation.
#### V-F5 Different Base Computing Frequency \(f_{D}\)
To study the impact of computing frequency on system performance, we conducted an evaluation of five algorithms with varying \(f_{D}\) and presented our results in Fig. 9. As a general rule, increasing the computing frequency is equivalent to having more computing cores with a fixed value of \(f_{D}\). Therefore, the trends observed in the results of Fig. 9 are similar to those previously reported in Fig. 6. Notably, the PTDFC algorithm consistently exhibits the lowest overall cost (i.e., the sum of transmission and computation costs) across all \(f_{D}\) configurations.
#### V-F6 Different Task Input Size \(I_{f}\)
The default input data size for all tasks is \(16000\) bits. To investigate the impact of input data size on the system performance, we conducted experiments by varying the input data size for four tasks and evaluated the performance of five algorithms under different \(I_{f}\). The results are presented in Fig. 10, where it can be observed that an increase in the input data size leads to an increase in both transmission and computation costs for all algorithms. This can be attributed to the fact that larger input
Fig. 8: Impact of varying the maximum transition probability in Markov chain when using the default configuration and fixed SNR. (left) Transmission cost vs. maximum transition probability. (right) Computation cost vs. maximum transition probability.
Fig. 6: Impact of varying the number of computing cores \(M\) when using the default configuration and fixed SNR. (left) Transmission cost vs. \(M\). (right) Computation cost vs. \(M\).
Fig. 7: Impact of varying the task number \(F\) when using the default configuration and fixed SNR. (left) Transmission cost vs. \(F\). (right) Computation cost vs. \(F\).
data requires more bandwidth and computation, whether in reactive or proactive cases, as indicated by Eq. (3), (5), (6). On the other hand, when the input data size is relatively small (e.g., 11000 bits), the PTDFC and DFC algorithms exhibit similar performance, as the optimal policy for both algorithms is to cache as much input data as possible and provide full reactive service only for non-cached tasks. However, in general, the PTDFC algorithm consistently outperforms the other algorithms, achieving the smallest total cost across all input data size configurations.
#### V-F7 Different Task Output Size \(O_{f}\)
After analyzing the impact of input data size, we further examined the influence of output data size on the system performance. For this purpose, we altered the output data size of four tasks and evaluated the performance of five algorithms under different \(O_{f}\). The default output data size for all tasks was set to \(30000\) bits. As depicted in Fig. 11, the transmission cost and computation cost for the DFNC, MRU-LRU, and MFU-LFU algorithms remained unchanged, as the output data size did not affect the calculation of the two costs and the cache update mechanism. However, for the DFC algorithm, the transmission cost fluctuated around a constant level, and the computation cost increased as the output data size increased from \(15000\) bits to \(30000\) bits. This behavior can be attributed to the algorithm's prioritization of caching input data, which ensures a low transmission cost. On the other hand, due to limited cache size, lower priority, and increased output data size, only a small fraction of tasks had the opportunity to cache their output data, resulting in additional computation for reactive service. In contrast, the PTDFC algorithm's joint computing, pushing, and caching design demonstrated robustness to variations in \(O_{f}\), as evidenced by the flat transmission and computation cost curves in Fig. 11.
#### V-F8 Different Tolerable Service Delays \(\tau\)
The tolerable service delay is a critical parameter that significantly influences the transmission and computation costs of the system. To evaluate their impact, we tested the performance of five algorithms under varying values of \(\tau\) and present our findings in Fig. 12. As we increase \(\tau\) from \(0.012\) s to \(0.024\) s, we observe a corresponding decline in the transmission and computation costs for most algorithms. This is because the larger \(\tau\) provides more time for transmitting the input data of the requested task, reducing the bandwidth cost as per Eq. (3). Similarly, additional processing time is given to the computation step for acquiring the output data, relaxing the requirement of the computing frequency, and leading to a lower computation cost as per Eq. (5). Notably, the PTDFC algorithm outperforms the other algorithms by achieving the lowest transmission and computation cost under all \(\tau\) values. This is attributed to its ability to design an optimal policy for joint computing, pushing, and caching through deep reinforcement learning. However, the transmission cost of all five algorithms converges at larger \(\tau\), and the benefits of the PTDFC algorithm are mitigated as a lower computing frequency (one core) is employed for all algorithms, resulting in comparable transmission costs.
#### V-F9 Different Cost Weights \(\lambda\)
To guide policy learning for the trade-off between transmission cost and computation cost, the default cost weight of 1 is used for designing the reward function in Eq. (23). To investigate the impact of this parameter on the performance of the algorithms, we evaluated the performance of five algorithms under different \(\lambda\) values and present the results in Fig. 13. Notably, the SAC-enabled algorithms show a clear trade-off between the two costs, as increasing \(\lambda\) leads to a decrease in computation cost and a corresponding increase in transmission cost, while the heuristic algorithms MRU-LRU and MFU-LFU have completely flat cost curves. The PTDFC algorithm consistently achieves the
Fig. 11: Impact of varying the output data size \(O_{f}\) for 4 tasks when using the default configuration and fixed SNR. (left) Transmission cost vs. \(O_{f}\). (right) Computation cost vs. \(O_{f}\).
Fig. 10: Impact of varying the input data size \(I_{f}\) for 4 tasks when using the default configuration and fixed SNR. (left) Transmission cost vs. \(I_{f}\). (right) Computation cost vs. \(I_{f}\).
Fig. 9: Impact of varying the base computing frequency \(f_{D}\) when using the default configuration and fixed SNR. (left) Transmission cost vs. \(f_{D}\). (right) Computation cost vs. \(f_{D}\).
best performance under different \(\lambda\) values, which can be attributed to its optimal computing, pushing, and caching policy design through deep reinforcement learning.
### _Complexity Analysis_
The computational complexity of the proposed PTDFC algorithm largely relies on the number and structure of the neural networks in the SAC system [35]. At the training stage, Algorithm 1 incorporates the parameter updating of three neural networks: \(Q_{\theta}\), \(Q_{\bar{\theta}}\) (the critics), and \(\pi_{\phi}\) (the actor). Therefore, the computational complexity of Algorithm 1 is:
\[\begin{split}& 2\times\sum_{j=0}^{J-1}n_{j}^{Q}n_{j+1}^{Q}+\sum_{k=0}^{K-1}n_{k}^{\pi}n_{k+1}^{\pi}\\ &=O\left(\sum_{j=0}^{J-1}n_{j}^{Q}n_{j+1}^{Q}+\sum_{k=0}^{K-1}n_{k}^{\pi}n_{k+1}^{\pi}\right)\end{split} \tag{24}\]
where \(J\) denotes the number of fully connected layers for the \(Q_{\theta}\) and \(Q_{\bar{\theta}}\) networks (having identical structure), and \(K\) denotes that for \(\pi_{\phi}\) network. \(n_{j}^{Q}\) and \(n_{k}^{\pi}\) represent the number of neurons at the \(j\)-th layer of \(Q_{\theta}\) or \(Q_{\bar{\theta}}\) networks and the \(k\)-th layer of \(\pi_{\phi}\) network. \(j=0\) and \(k=0\) represent the input layers.
At the testing stage, Algorithm 1 only needs to execute the trained \(Q_{\theta}\), \(Q_{\bar{\theta}}\) networks, so the computational complexity is reduced to \(O\left(\sum_{j=0}^{J-1}n_{j}^{Q}n_{j+1}^{Q}\right)\). In our system, \(J=3\), \(K=4\), \(n_{j}^{Q}=22,256,256,1\) for \(j=0,1,2,3\), given the number of tasks \(F=4\).
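As a quick arithmetic check of the Q-network term in Eq. (24) with the stated widths (the \(\pi\)-network widths are not listed, so only the critic term is evaluated; the helper is ours):

```python
def mac_count(widths):
    """Multiply-accumulates in one pass through a fully connected network with these layer widths."""
    return sum(a * b for a, b in zip(widths, widths[1:]))

n_Q = [22, 256, 256, 1]            # J = 3 layers of Q_theta / Q_thetabar for F = 4 tasks
print(mac_count(n_Q), 2 * mac_count(n_Q))   # 71,424 per critic; 142,848 for the factor-2 term
```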
## VI Conclusion
In this paper, we explore joint optimization of computing, pushing, and caching in MEC networks to further improve user-perceived quality of experience. We formulate the joint-design problem as an infinite-horizon discounted-cost Markov decision process, which allows us to optimize the total quantity of transmitted data and the total computation cost for the mobile user. To solve this problem, we propose a framework based on SAC learning that dynamically orchestrates the three functions. The framework is featured with embedded deep networks that implicitly predict user future requests and a design for action quantization and correction that enables SAC to work for this problem. In simulations using a single-user single-server MEC network, our proposed framework effectively reduces both transmission load and computing cost and outperforms baseline algorithms across various parameters.
|
2309.14711 | Probing the existence of η^3He mesic nucleus with a few-body approach | Motivated by the two recent observations in the WASA-at-COSY detector, we
investigate the $\eta^3$He nucleus with the $\eta NNN$ few body method. We
construct the effective $s$-wave energy dependent $\eta N$ potential which
reproduce the $\eta N$ subthreshold scattering amplitude in the 2005
Green-Wycech model. It gives the $\eta$ separation energy and decay width of
0.19 MeV and 1.71 MeV, respectively. We also construct various sets of
effective $s$-wave energy independent $\eta N$ potentials where the
corresponding complex scattering lengths (a) are within the range given in most
theoretical models. We obtain the bound $\eta^3$He nucleus with decay width of
about 5 MeV when a is (1.0 fm, 0.3 fm), and of about 10 MeV when a is (1.0 fm,
0.5 fm). | Qian Wu, Gang Xie, Xurong Chen | 2023-09-26T07:08:15Z | http://arxiv.org/abs/2309.14711v3 | # Probing the existence of \(\eta\)d and \(\eta^{3}\)He with a few-body approach
###### Abstract
We investigate the possible \(\eta\)-deuteron and \(\eta-^{3}\)He bound states with the \(\eta\)\(\eta\)\(N\) and \(\eta\)\(NNN\) few body method. In order to solve the three body and four body Schrodinger equations, we apply the Gaussian expansion method. The realistic AV8' \(NN\) potential together with an extra 3N three body force which reproduce the binding energies of deuteron, \({}^{3}\)H and \({}^{3}\)He, are used. We construct the relations between the complex Gaussian-type energy-independent \(\eta\)\(N\) interactions and the \(\eta\)\(N\) scattering lengths \(a\). The relations between the binding energies in \(\eta\)d and \(\eta\)\({}^{3}\)He, and the scattering lengths are then obtained. We find that to have a bound \(\eta\)d nucleus, the real \(\eta\)\(N\) scattering length should exceed at least 1.35 fm. As for the \(\eta^{3}\)He system, the bound state exists when the real \(a>0.75\sim 0.8\) fm if we neglect the imaginary part of the \(\eta\)\(N\) interaction. After solving the full four body complex Schrodinger equation, we find the imaginary \(\eta\)\(N\) potential causes the system more unbound. Thus, we give the relations between the bound or unbound scenarios of the \(\eta^{3}\)He system and the complex \(\eta\)\(N\) scattering lengths.
eta meson, mesic nucleus, light nuclei pacs:
## I Introduction
The \(N^{*}(1535)\) resonance, which is close to the \(\eta\)-nucleon (\(N\)) threshold (\(E_{th}=1487\) MeV), results in a strong attractive force between the \(\eta\) meson and the nucleon. It is first examined by Bhalerao and Liu in the pioneering work [1], using the \(\eta\)\(N\)-\(\pi\)\(N\) coupled channels method, and soon it's verified in the dynamical calculation of the N\({}^{*}(1535)\)\(S_{11}\) resonance [2]. Since then, the \(\eta\)\(N\) interaction has been studied with several coupled-channel models which generate out a wide range of the real part of the \(\eta\)\(N\) scattering length from 0.2 fm [2] to 1.0 fm [3; 4]. At these works, the imaginary part of the \(\eta\)\(N\) scattering length are found to have a narrower range from 0.2 to 0.3 fm.
Due to the attractive \(\eta\)\(N\) interaction, it is possible to form \(\eta\) mesic bound states in nuclei. The evidences that the \(\eta\) mesic quasibound states may exist are given in Refs. [5; 6; 7; 8]. Since then, various optical model calculations have been used for searching the \(\eta\) nuclear bound state. Normally, the binding energy of the \(\eta\) mesic nuclei are related to the \(\eta\)\(N\) interaction and the value of the \(\eta\)\(N\) scattering length. It is also emphasized in several works [9; 10; 11; 12; 13] that the binding energy also depends on how to treat the subthreshold energy dependence. These works suggest that the energy dependent \(\eta\)\(N\) interaction is necessary in order to reproduce the subthreshold \(\eta\)\(N\) scattering amplitude obtained with several coupled channel models [2; 3; 4]. In Ref. [14], Xie et al. obtained a 0.3 MeV's binding energy of the \(\eta\)-\({}^{3}\)He and a decay width around 3 MeV by evaluating the pd\(\rightarrow\eta^{3}\)He near-threshold reaction. A recent interpretation of \(\eta\) quasibound states constrained data in the photon and hadron induced reactions implies that \(\eta\)d is unbound, \(\eta^{3}\)He might be bound while \(\eta^{4}\)He is bound [15].
Besides the optical potential model calculations, there are also several few-body calculations concerning the \(\eta\)\(NN\), \(\eta\)\(NNN\) or \(\eta\)\(NNN\) systems [16; 17]. With precise few-body stochastic variational method (SVM) and several energy dependent \(s\)-wave \(\eta\)\(N\) interactions derived from several coupled channel models of the N\({}^{*}(1535)\) resonance, Barnea et al. [17] reported that a bound \(\eta-^{3}\)He nuclei requires that the real part of the \(\eta\)N scattering length should exceed 1.0 fm approximately which yields a few MeVs binding energy between \(\eta\) and \({}^{3}\)He. For the \(\eta^{4}\)He, they found that the real \(\eta\)N scattering length should be at least 0.7 fm to form a bound nuclei. Similar conclusions were given with applying SVM and a pionless effective field theory [18]. However, in Ref. [19], with solving the four and five body Alt-Grassberger-Sandhas equations, it said neither \(\eta^{3}\)He and \(\eta^{4}\)He are bound when the \(\eta\)N scattering length is 0.91 fm\(+\)0.27\(\bar{\rm{i}}\) fm.
On the experimental side, many experiments using photon, pion, proton or deuteron beams have given signals of bound \(\eta\) mesic nuclei [20; 21; 22; 23; 24; 25], but none of them provides conclusive evidence of their existence [26]. Recently, a search for the \(\eta\) mesic \({}^{4}\)He was carried out with the WASA-at-COSY detector setup in the \(dd\rightarrow(^{4}{\rm He},\eta)\rightarrow(^{3}{\rm He},n\pi^{0})\) and \(dd\rightarrow(^{4}{\rm He},\eta)\rightarrow(^{3}{\rm He},p\pi^{-})\) reactions [27]. It gives upper limits of the cross section for the bound \(\eta^{4}\)He mesic state of 3 and 6 mb, respectively, for the \(n\pi^{0}\) and \(p\pi^{-}\) channels, with the decay width found to lie between 5 and 50 MeV. As for a possible \(\eta^{3}\)He bound state, at the WASA-at-COSY facility, in the \(pd\rightarrow(^{3}{\rm He},\eta)\rightarrow(^{3}{\rm He},2\gamma)\) and \(pd\rightarrow(^{3}{\rm He},\eta)\rightarrow(^{3}{\rm He},6\gamma)\) reactions, an \(\eta^{3}\)He bound nucleus was constrained to have a decay width \(\Gamma\) above 20 MeV and a binding energy B\({}_{\eta}\) between 0 and 15 MeV [28]. In addition, the search in the \(pd\rightarrow(^{3}{\rm He},\eta)\rightarrow({\rm dp},\pi^{0})\) reaction gives 13 to 24 nb for the bound \(\eta^{3}\)He state [29].
In this work, we focus on finding possible bound states of the \(\eta\)d and \(\eta^{3}\)He nuclei with energy-independent \(\eta N\) interactions that reproduce \(\eta N\) scattering lengths in the ranges (0.2, 1.0) fm for the real part and (0.2, 0.5) fm for the imaginary part. In particular, we fully solve the \(\eta NN\) three-body and \(\eta NNN\) four-body complex Hamiltonians with a precise few-body method, the Gaussian expansion method (GEM) [30], which has been applied to various hadronic and nuclear systems [31; 32; 33]. By fully diagonalizing the complex Hamiltonian, the influence of the imaginary \(\eta N\) potential on the nuclear binding energy (the real part of the energy) can be studied.
The paper is organized as follows. In Sec. II, we introduce the realistic \(NN\) interactions and the calculated energy levels of the deuteron, \({}^{3}\)He and \({}^{3}\)H. We also construct the relation between the \(\eta N\) interactions and the \(\eta N\) scattering lengths in this section. We explain the Gaussian expansion method and the three-body and four-body wave functions in Sec. III. The results for the \(\eta\)d and \(\eta^{3}\)He systems are given in Sec. IV A and Sec. IV B, respectively, together with the necessary discussions. Sec. V is devoted to the summary.
## II Construction of the Hamiltonian
In this work, we investigate the \(\eta\)d system by solving the \(\eta NN\) three-body Schrödinger equation and the \(\eta^{3}\)He system by solving the \(\eta NNN\) four-body Schrödinger equation. The Hamiltonian is written as follows:
\[H=T_{\eta}+\sum_{i=1}^{A}T_{N_{i}}+\sum_{i<j=1}^{A}V_{N_{i}N_{j}}+\sum_{i=1}^ {A}V_{\eta N_{i}}, \tag{1}\]
where \(A=2\) or \(3\), corresponding to the \(\eta NN\) and \(\eta NNN\) systems, respectively. \(T_{\eta}\) and \(T_{N_{i}}\) are the kinetic-energy operators of the \(\eta\) meson and the nucleons. \(V_{N_{i}N_{j}}\) represents the interaction between nucleons and \(V_{\eta N_{i}}\) denotes the \(\eta N\) potential.
We use the AV8' \(NN\) interaction, which is a modified version of the AV18 interaction [34]. The corresponding binding energies of the deuteron, \({}^{3}\)He and \({}^{3}\)H are 2.24, 7.11 and 7.82 MeV, respectively. It should be noted that the binding energies of \({}^{3}\)He and \({}^{3}\)H under the present \(NN\) interaction are not equal to the experimental values, which are 7.72 and 8.48 MeV, respectively, for \({}^{3}\)He and \({}^{3}\)H [35]. In order to reproduce the experimental binding energies of \({}^{3}\)He and \({}^{3}\)H, we add an extra attractive three-body force as [36]:
\[V_{th}=\sum_{n=1}^{2}V_{n}^{(3)}e^{-\mu_{n}\left(r_{12}^{2}+r_{23}^{2}+r_{13}^ {2}\right)}, \tag{2}\]
where \(r_{12}\) is the relative coordinate between nucleons 1 and 2. The set of parameters is (\(V_{1}^{(3)}=-2.04\) MeV, \(\mu_{1}=4.0\) fm\({}^{-2}\), \(V_{2}^{(3)}=35.0\) MeV, \(\mu_{2}=0.75\) fm\({}^{-2}\)). It then gives binding energies of \({}^{3}\)He and \({}^{3}\)H of 7.74 and 8.42 MeV, respectively, which reproduce the experimental values well.
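As a concrete illustration, the following minimal Python sketch evaluates the Gaussian three-nucleon force of Eq. (2) for a given nucleon configuration; the function name and the example geometry are ours, while the parameters are those quoted above.

```python
import numpy as np

# Parameters of the phenomenological 3N force of Eq. (2):
# V_th = sum_n V_n^(3) * exp[-mu_n * (r12^2 + r23^2 + r13^2)]
V3 = np.array([-2.04, 35.0])    # MeV
MU3 = np.array([4.0, 0.75])     # fm^-2

def three_body_force(r1, r2, r3):
    """Evaluate Eq. (2) for three nucleon positions given in fm."""
    r1, r2, r3 = (np.asarray(r, dtype=float) for r in (r1, r2, r3))
    s = (np.sum((r1 - r2)**2) + np.sum((r2 - r3)**2) + np.sum((r1 - r3)**2))
    return float(np.sum(V3 * np.exp(-MU3 * s)))   # MeV

# Example: an equilateral triangle of side 1 fm
print(three_body_force([0, 0, 0], [1, 0, 0], [0.5, np.sqrt(3) / 2, 0]))
```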
As mentioned in the first section, in order to reproduce the \(\eta N\) scattering amplitude given in Refs. [2; 3; 4], an energy-dependent \(\eta N\) interaction is required. The phase shift \(\delta\), the T-matrix and the scattering amplitude \(F\) satisfy the following equations:
\[\begin{split}& k\cot\delta=ik\left(1+\frac{2}{T}\right)=ik+F^{-1},\\ & k\cot\delta=\frac{1}{a}+\frac{1}{2}r_{0}k^{2}+...,\end{split} \tag{3}\]
where \(k\) is the wave number, with \(k=\sqrt{2\mu_{\eta N}E}\) and \(E=\sqrt{s}-\sqrt{s_{th}}\). \(\mu_{\eta N}\) is the reduced mass of the \(\eta\) meson and the nucleon. Here, \(a\) is the scattering length and we have \(a=F\) when \(k\to 0\). Our notation in Eq. 3 differs by a minus sign from the usual definition of the scattering length, and for later convenience we only focus on its absolute value.
In this work, we only focus on the scattering amplitude at \(\sqrt{s}=\sqrt{s_{th}}\) (the scattering length) and construct an energy-independent \(\eta N\) potential of the form:
\[V_{\eta N}=V_{0}(\mu/\pi)^{3/2}e^{-\mu r_{\eta N}^{2}}. \tag{4}\]
We then solve the two-body \(S\)-wave \(\eta N\) Schrödinger equation:
\[u^{\prime\prime}(r)+k^{2}u(r)=2\mu_{\eta N}V_{\eta N}(r)u(r) \tag{5}\]
under the boundary condition:
\[u(0)=0,\qquad u(r)\xrightarrow{r\to\infty}\sin(kr)+\frac{T}{2i}e^{ikr}. \tag{6}\]
First, we calculate the \(\eta N\) \(s\)-wave scattering lengths for different \(\eta N\) interactions while neglecting the imaginary part of the \(\eta N\) potential. In Fig. 1, we show the relation between the real strength \(V_{c}\) (\(=V_{0}(\mu/\pi)^{3/2}\)) and the real scattering length for different values of \(\mu\). It agrees with the usual expectation that a larger scattering length corresponds to a stronger attractive force. Moreover, the relation between the Gaussian range parameter \(\mu\), which is connected to the potential range through \(1/\sqrt{\mu}\), and the potential strength \(V_{c}\) is also reasonable.
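The mapping from a real Gaussian strength \(V_{c}\) to the real scattering length can be reproduced with a short numerical integration of the zero-energy radial equation, i.e., Eq. (5) with \(k=0\). The sketch below, with masses and helper names chosen by us, extracts \(|a|\) from the asymptotic form \(u(r)\propto(r-a)\); signs follow the standard convention rather than the one used in the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

HBARC = 197.327                              # MeV fm
M_ETA, M_N = 547.86, 938.92                  # MeV
MU_RED = M_ETA * M_N / (M_ETA + M_N)         # reduced eta-N mass, MeV

def scattering_length(Vc, mu_range, r_max=20.0):
    """|a| (fm) at zero energy for V(r) = Vc * exp(-mu_range * r^2)."""
    fac = 2.0 * MU_RED / HBARC**2            # converts MeV to fm^-2
    def rhs(r, y):                           # y = [u, u'];  u'' = fac * V(r) * u
        return [y[1], fac * Vc * np.exp(-mu_range * r**2) * y[0]]
    sol = solve_ivp(rhs, (0.0, r_max), [0.0, 1.0], rtol=1e-10, atol=1e-12)
    u, up = sol.y[0, -1], sol.y[1, -1]
    return abs(r_max - u / up)               # asymptotically u(r) ~ C (r - a)

# Example: an attractive Gaussian with range parameter mu = 1.0 fm^-2
print(scattering_length(Vc=-80.0, mu_range=1.0))
```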
Then, in order to obtain \(\eta N\) scattering lengths close to the ones mentioned in the first section,
\[a_{\rm real}\to 0.2\sim 1.0\ {\rm fm},\qquad a_{\rm imag}\to 0.2\sim 0.3\ {\rm fm}, \tag{7}\]
a complex \(\eta N\) potential is required. We calculate the complex \(V_{c}\) for different complex scattering lengths. In Figs. 2 and 3, we depict the relation between the real and imaginary parts of \(V_{c}\) for different scattering lengths. On each line, the real part of the scattering length is fixed and the six points represent six different imaginary scattering lengths, from 0 to 0.5 fm in steps of 0.1 fm, from bottom to top.
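For complex scattering lengths, the same zero-energy integration can be carried out in the complex plane and the complex strength \(V_{c}\) obtained by root finding. The sketch below is ours (a fixed-step RK4 integrator plus SciPy root finding, with illustrative masses, step sizes and starting guesses); the sign convention for \(a\) may again differ from that of the text.

```python
import numpy as np
from scipy.optimize import root

HBARC = 197.327                              # MeV fm
M_ETA, M_N = 547.86, 938.92                  # MeV
MU_RED = M_ETA * M_N / (M_ETA + M_N)         # reduced eta-N mass, MeV

def complex_scatt_length(Vc, mu_range, r_max=10.0, h=0.01):
    """Zero-energy scattering length for a complex Gaussian strength Vc (MeV)."""
    fac = 2.0 * MU_RED / HBARC**2
    def deriv(r, u, up):                     # u'' = fac * V(r) * u
        return up, fac * Vc * np.exp(-mu_range * r * r) * u
    u, up, r = 0.0 + 0j, 1.0 + 0j, 0.0
    while r < r_max:                         # fixed-step RK4 in the complex plane
        k1u, k1p = deriv(r, u, up)
        k2u, k2p = deriv(r + h/2, u + h/2*k1u, up + h/2*k1p)
        k3u, k3p = deriv(r + h/2, u + h/2*k2u, up + h/2*k2p)
        k4u, k4p = deriv(r + h, u + h*k3u, up + h*k3p)
        u += h/6 * (k1u + 2*k2u + 2*k3u + k4u)
        up += h/6 * (k1p + 2*k2p + 2*k3p + k4p)
        r += h
    return r - u / up                        # zero-energy intercept, u(r) ~ C (r - a)

def fit_strength(a_target, mu_range, guess=(-70.0, -5.0)):
    """Find the complex Gaussian strength Vc that reproduces a given complex a."""
    def residual(x):
        a = complex_scatt_length(complex(x[0], x[1]), mu_range)
        return [a.real - a_target.real, a.imag - a_target.imag]
    sol = root(residual, guess)
    return complex(sol.x[0], sol.x[1])

# Round trip: compute a for a chosen complex strength, then recover that strength.
a_test = complex_scatt_length(-80.0 - 10.0j, 1.0)
print("a  :", a_test)
print("Vc :", fit_strength(a_test, 1.0))
```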
Figure 1: The relations between the strengths of the Gaussian-type \(\eta N\) potential and the scattering lengths. Two Gaussian-type potentials, with range parameters \(\mu=1.0\) and 4.0 fm\({}^{-2}\), are shown.
An interesting feature of these figures is that each time the imaginary scattering length grows by 0.1 fm, the imaginary \(V_{c}\) increases by almost the same amount. On the other hand, the real \(V_{c}\) hardly changes when the imaginary scattering length grows from 0 to 0.1 fm, but it changes significantly when the imaginary scattering length is around 0.5 fm. Therefore, we expect that a large imaginary scattering length will significantly reduce the possibility of the existence of a bound \(\eta\) mesic nucleus.
Clearly, our potentials depend strongly on the range parameter \(\mu\). It is often identified with the momentum cutoff \(\Lambda\) (\(\Lambda=2\sqrt{\mu}\)), which is used to regulate the divergent loop integrals in on-shell EFT \(N^{*}(1535)\) models [39; 40]. In Table 1, we give the \(\Lambda\) values used in several EFT \(N^{*}(1535)\) models; they correspond to \(\mu\) values of roughly \(2\sim 4\) fm\({}^{-2}\). It should be noted that in Ref. [37], \(\Lambda=6.6\) fm\({}^{-1}\) is used, which gives a potential range \(1/\sqrt{\mu}=0.3\) fm. According to Ref. [16], choosing a potential range smaller than 0.47 fm (\(\mu\sim 4\) fm\({}^{-2}\)) would be inconsistent with staying within a purely hadronic basis. Therefore, we do not consider any \(\mu\) values larger than 4 fm\({}^{-2}\) and use the \(\mu=1.0\) and 4.0 fm\({}^{-2}\) cases in this work.
## III Gaussian expansion method
In order to solve the \(\eta NN\) three-body and \(\eta NNN\) four-body Schrödinger equations, we write the three-body \(\eta NN\) wave
Table 1: The momentum scale parameter \(\Lambda\) used in several EFT models.

| Refs | [2; 38] | [4] |
| --- | --- | --- |
| \(\Lambda\) (fm\({}^{-1}\)) | 3.9 | 3.2 |
Figure 2: The complex strength of the \(\eta N\) potential under different complex scattering lengths. On each line, the real scattering length (\(a^{R}\)) is fixed and the six points correspond to imaginary scattering lengths of 0.0, 0.1, 0.2, 0.3, 0.4 and 0.5 fm, respectively. The range parameter \(\mu\) of the \(\eta N\) potential is 1.0 fm\({}^{-2}\) in this figure.
Figure 3: The complex strengths of the \(\eta N\) potential under different complex scattering lengths. On each line, the real scattering length (\(a^{R}\)) is fixed and the six points correspond to imaginary scattering lengths of 0.0, 0.1, 0.2, 0.3, 0.4 and 0.5 fm, respectively. The range parameter \(\mu\) of the \(\eta N\) potential is 4.0 fm\({}^{-2}\) in this figure.
Figure 4: Jacobi coordinates of the \(\eta NN\) three-body system.

Figure 5: Jacobi coordinates of the \(\eta NNN\) four-body system.
function with \(J^{P}=1^{+}\) and total isospin (\(T,\ T_{Z}=1,\ 0\)) as:
\[\begin{split}\Psi_{JMTT_{Z}}(\eta NN)&=\sum_{c=1}^{2}\sum_{L,S}\sum_{n_{1},l_{1}}\sum_{n_{2},l_{2}}C_{\gamma}^{(c)}\mathcal{A}\\ &\times\left\{\left\{\left[\phi_{n_{1}l_{1}}^{(c)}\left(\mathbf{r}_{c}\right)\psi_{n_{2}l_{2}}^{(c)}\left(\mathbf{R}_{c}\right)\right]_{L}\right.\right.\\ &\left.\left.\times\left[\chi_{1/2}^{1}\chi_{1/2}^{2}\right]_{S}\right\}_{JM}\left[\tau_{1/2}^{1}\tau_{1/2}^{2}\right]_{TT_{Z}}\right\},\end{split} \tag{8}\]
and the four-body \(\eta NNN\) wave function with \(J^{P}=1/2^{+}\) and (\(T,\ T_{Z}=1/2,\ 1/2\)) as:
\[\begin{split}\Psi_{JMTT_{Z}}(\eta NNN)&=\sum_{c=1}^{4}\sum_{l,L}\sum_{s,S}\sum_{n_{1},l_{1}}\sum_{n_{2},l_{2}}\sum_{n_{3},l_{3}}C_{\beta}^{(c)}\mathcal{A}\\ &\times\left\{\left\{\left[\left(\phi_{n_{1}l_{1}}^{(c)}\left(\mathbf{r}_{c}\right)\psi_{n_{2}l_{2}}^{(c)}\left(\mathbf{R}_{c}\right)\right)_{l}\varphi_{n_{3}l_{3}}^{(c)}\left(\boldsymbol{\rho}_{c}\right)\right]_{L}\right.\right.\\ &\left.\left.\times\left[\left(\chi_{1/2}^{1}\chi_{1/2}^{2}\right)_{s}\chi_{1/2}^{3}\right]_{S}\right\}_{JM}\right.\\ &\left.\times\left[\left(\tau_{1/2}^{1}\tau_{1/2}^{2}\right)_{t}\tau_{1/2}^{3}\right]_{TT_{Z}}\right\},\end{split} \tag{9}\]
where the sets of Jacobi coordinates of the \(\eta NN\) (\(c=1,2\)) and \(\eta NNN\) (\(c=1-4\)) systems are shown in Figs. 4 and 5, respectively. Here \(\gamma\) and \(\beta\) denote \(\{L,S,n_{1},l_{1},n_{2},l_{2}\}\) and \(\{l,L,s,S,t,n_{1},l_{1},n_{2},l_{2},n_{3},l_{3}\}\), respectively. \(\chi\) and \(\tau\) represent the spin and isospin wave functions of the nucleons, respectively. It should be noted that both the intrinsic spin and the isospin of the \(\eta\) meson are 0, so we omit its spin and isospin wave functions. \(\mathcal{A}\) is the antisymmetrization operator acting on the nucleons. The relative spatial wave functions in the \(\eta NNN\) system, corresponding to the three Jacobi coordinates, \(\phi_{n_{1}l_{1}}(\mathbf{r})\), \(\psi_{n_{2}l_{2}}(\mathbf{R})\) and \(\varphi_{n_{3}l_{3}}(\boldsymbol{\rho})\), are expanded using the following Gaussian basis functions, following the GEM [30]:
\[\begin{split}\phi_{n_{1}l_{1}}(\mathbf{r})&=r^{l_{1}}e^{-(r/r_{n_{1}})^{2}}Y_{l_{1}m_{1}}(\hat{r}),\\ \psi_{n_{2}l_{2}}(\mathbf{R})&=R^{l_{2}}e^{-(R/R_{n_{2}})^{2}}Y_{l_{2}m_{2}}(\hat{R}),\\ \varphi_{n_{3}l_{3}}(\boldsymbol{\rho})&=\rho^{l_{3}}e^{-(\rho/\rho_{n_{3}})^{2}}Y_{l_{3}m_{3}}(\hat{\rho}).\end{split} \tag{10}\]
The Gaussian variational parameters are chosen to form the geometric progressions below,
\[r_{n_{1}} =r_{\text{min}}A_{1}^{n_{1}-1},\quad(n_{1}=1\sim n_{1}^{\text{max }}),\] \[R_{n_{2}} =R_{\text{min}}A_{2}^{n_{2}-1},\quad(n_{2}=1\sim n_{2}^{\text{max }}),\] \[\rho_{n_{3}} =\rho_{\text{min}}A_{3}^{n_{3}-1},\quad(n_{3}=1\sim n_{3}^{\text{ max}}). \tag{11}\]
Similar Gaussian-type spatial wave functions are used in the \(\eta NN\) system. The eigenenergies and the coefficients \(C_{\gamma}\) and \(C_{\beta}\) are then obtained by applying the Rayleigh-Ritz variational method.
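To make the Rayleigh-Ritz step concrete, the following sketch (ours, not the paper's code) solves a two-body \(S\)-wave toy problem with the geometric Gaussian basis of Eq. (11): the overlap, kinetic and Gaussian-potential matrices are built analytically and the generalized eigenvalue problem \(HC=ENC\) is diagonalized. The full \(\eta NN\)/\(\eta NNN\) calculations additionally involve several Jacobi-coordinate rearrangements, spin-isospin couplings and antisymmetrization.

```python
import numpy as np
from scipy.linalg import eigh

HBARC = 197.327                               # MeV fm
M_ETA, M_N = 547.86, 938.92                   # MeV
MU_RED = M_ETA * M_N / (M_ETA + M_N)          # reduced eta-N mass, MeV

# Geometric progression of Gaussian ranges, as in Eq. (11)
nmax, r_first, r_last = 15, 0.1, 20.0         # fm
ranges = r_first * (r_last / r_first) ** (np.arange(nmax) / (nmax - 1))
nu = 1.0 / ranges**2                          # Gaussian width parameters, fm^-2

def gem_lowest_eigenvalue(Vc, mu_g):
    """Lowest eigenvalue (MeV) of a two-body S-wave Gaussian potential via Rayleigh-Ritz."""
    ni, nj = np.meshgrid(nu, nu)
    s = ni + nj
    N = (np.pi / s) ** 1.5                               # overlap matrix
    T = (HBARC**2 / (2 * MU_RED)) * 6 * ni * nj / s * N  # kinetic-energy matrix
    V = Vc * (np.pi / (s + mu_g)) ** 1.5                 # Gaussian potential matrix
    d = np.outer(np.sqrt(np.diag(N)), np.sqrt(np.diag(N)))
    E, _ = eigh((T + V) / d, N / d)                      # generalized eigenvalue problem
    return E[0]

# A deep well (illustrative) chosen so that a two-body bound state exists:
print(gem_lowest_eigenvalue(Vc=-200.0, mu_g=1.0))        # negative => bound state
```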
## IV Results and discussions
### \(\eta\)d system
In this subsection, we focus on the \(\eta\)-deuteron system. We calculate the binding energy of the \(\eta NN\) system with the real \(\eta N\) potential only. Then, combining the relation between the potential strength \(V_{c}\) and the \(\eta N\) scattering length (shown in Fig. 1) with the relation between \(V_{c}\) and the binding energy, we depict the relation between the \(\eta\) separation energy (\(B_{\eta}\)) and the \(\eta N\) scattering length in Fig. 6. Here \(B_{\eta}\) denotes the binding energy with respect to the \(\eta\)-d two-body threshold. In Fig. 6, each line corresponds to \(\eta N\) potentials with the same Gaussian range parameter \(\mu\).
As mentioned above, a larger scattering length corresponds to a stronger potential strength for the same potential range. Consequently, as shown in Fig. 6, the \(B_{\eta}\) values become larger as the scattering length increases. One also sees that a larger range parameter \(\mu\) (i.e., a shorter potential range \(1/\sqrt{\mu}\)) requires a larger scattering length to form a bound \(\eta\)d nucleus. From the x-axis of Fig. 6, we can read off the threshold values of the scattering length for the existence of a bound \(\eta\)d system, which are \(1.35\sim 1.65\) fm. Considering the ranges of the \(\eta N\) scattering length given in Eq. 7, we conclude that an \(\eta\)d bound state is unlikely.
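The composition of the two numerical relations described above can be done by simple interpolation. The sketch below uses placeholder tables (purely illustrative numbers, not the paper's results) to show the bookkeeping: interpolate \(V_{c}\) as a function of \(a\), then \(B_{\eta}\) as a function of \(V_{c}\).

```python
import numpy as np

# Placeholder tables for a single Gaussian range mu (illustrative values only,
# NOT the paper's numbers): strengths Vc with the corresponding a and B_eta.
Vc    = np.array([-60.0, -70.0, -80.0, -90.0, -100.0])   # MeV
a_re  = np.array([  0.9,   1.1,   1.4,   1.8,    2.4])   # fm
B_eta = np.array([  0.0,   0.0,  0.05,   0.4,    1.2])   # MeV

def B_of_a(a_query):
    """Compose the tabulated relations a -> Vc and Vc -> B_eta by interpolation."""
    Vc_q = np.interp(a_query, a_re, Vc)            # a_re must be increasing
    return np.interp(Vc_q, Vc[::-1], B_eta[::-1])  # np.interp needs increasing x

print(B_of_a(1.5))
```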
Therefore, given the usual behavior of the imaginary potential, namely that it makes the system less bound for the same real potential, it is not strictly necessary to calculate the \(\eta\)d system with the complex \(\eta N\) potential. Nevertheless, in order to verify this behavior of the imaginary potential, we calculate \(B_{\eta}\) with the complex \(\eta N\) potentials. In Figs. 7 and 8, we show the \(B_{\eta}\) values (from the real part of the eigenenergy) and the decay widths \(\Gamma\) (from the imaginary part of the eigenenergy) of the \(\eta\)d system, respectively. Here we only show the \(\mu=1.0\) fm\({}^{-2}\) case. Fig. 7 confirms that, when the real potential is fixed, increasing the strength of the imaginary potential (\(V_{c}^{I}\)) reduces the binding energy. On the other hand, Fig. 8 shows that the decay width grows as \(V_{c}^{I}\) grows.
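For reference, the quantities plotted in Figs. 7 and 8 follow directly from the complex eigenvalue of the diagonalized Hamiltonian. A minimal sketch, with an illustrative eigenvalue that is not a result of the paper:

```python
# A quasibound state corresponds to a complex eigenvalue E = -B_eta - i*Gamma/2,
# measured from the eta + d threshold.
E = complex(-0.42, -1.15)        # MeV, illustrative value only
B_eta = -E.real                  # eta separation energy
Gamma = -2.0 * E.imag            # decay width
print(f"B_eta = {B_eta:.2f} MeV, Gamma = {Gamma:.2f} MeV")
```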
### \(\eta^{3}\)He system
We calculate the \(\eta^{3}\)He system within the \(\eta NNN\) (\(J^{P}=1/2^{+},\ T=1/2,\ T_{z}=1/2\)) four-body calculation. Similarly to
Figure 6: Relations between the \(B_{\eta}\) values in the \(\eta\)d system and the \(\eta N\) scattering lengths (real part only). Four different \(\mu\)s are considered: \(\mu=1.0,\ 1.5,\ 2.5\) and \(4.0\) fm\({}^{-2}\), shown from left to right.
what we have done in the \(\eta\)d system, we first neglect the imaginary part of the \(\eta N\) potential. In Fig. 9, we show the \(B_{\eta}\) values of the \(\eta NNN\) system for different real scattering lengths. We find that, in order to form a bound \(\eta^{3}\)He system, the smallest scattering length for the range parameter \(\mu=4.0\) fm\({}^{-2}\) is 0.7 fm, while the smallest scattering length for \(\mu=1.0\) fm\({}^{-2}\) is 0.83 fm. Therefore, it is quite possible that a bound \(\eta^{3}\)He nucleus exists when the scattering length lies in the range given in Eq. 7. The corresponding binding energy with respect to the \(\eta\)-\({}^{3}\)He threshold is between 0 and 2 MeV.
Then, we include the imaginary \(\eta N\) potential and solve the four-body complex Schrödinger equation. We search for the \(\eta^{3}\)He bound state for each of the \(\eta N\) potential sets shown in Figs. 2 and 3. As mentioned before, the smallest scattering lengths required to form a bound \(\eta^{3}\)He state are 0.7 and 0.83 fm for the \(\mu=4.0\) and \(\mu=1.0\) fm\({}^{-2}\) cases, respectively. Since the imaginary \(\eta N\) potential can only act against forming a bound nucleus, it is not necessary to solve the complex four-body \(\eta NNN\) Schrödinger equation for \(a_{\rm real}<0.83\) fm in the \(\mu=1.0\) fm\({}^{-2}\) case or for \(a_{\rm real}<0.70\) fm in the \(\mu=4.0\) fm\({}^{-2}\) case.
In Fig. 10, we show whether a bound \(\eta^{3}\)He nucleus exists for different scattering lengths in the \(\mu=1.0\) fm\({}^{-2}\) case. As explained above, only the cases with \(a_{\rm real}>0.83\) fm need to be shown. The black circles indicate that a bound \(\eta^{3}\)He nucleus exists, while the crosses indicate that none does. Fig. 10 shows how the imaginary scattering length works against forming a bound \(\eta\)-\({}^{3}\)He nucleus: as \(a_{\rm real}\) decreases from 1.0 to 0.83 fm, the number of cases with a bound state also decreases. Note that we cannot scan all possible scattering lengths; we consider a step of 0.1 fm sufficient to check the behavior of the imaginary scattering length. Similar behavior is found for \(\mu=4.0\) fm\({}^{-2}\), as shown in Fig. 11.
In addition, in Tables 2 and 3 we show the \(B_{\eta}\) and \(\Gamma\) values for the black circles in Fig. 10 and Fig. 11, i.e., the cases in which \(\eta\)-\({}^{3}\)He is bound. It should be noted that the decay widths grow significantly as the imaginary scattering length approaches
Figure 10: Existence of a bound \(\eta\)-\({}^{3}\)He system for different complex scattering lengths when the range parameter \(\mu=1.0\) fm\({}^{-2}\). The circles indicate that a bound state exists, while the crosses indicate that none does.
Figure 7: The relations between the \(B_{\eta}\) values in the \(\eta\)d system and the strength of the imaginary part of the \(\eta N\) potential (\(V_{c}^{I}\)) when the real part of the \(\eta N\) potential (\(V_{c}^{R}\)) is fixed. Four different \(V_{c}^{R}\) values are chosen.
Figure 9: The relations between the \(B_{\eta}\) values in the \(\eta^{3}\)He system and the \(\eta N\) scattering lengths (real part only). The black solid line represents the \(\mu=4.0\) fm\({}^{-2}\) case and the red solid line represents the \(\mu=1.0\) fm\({}^{-2}\) case.
0.5 fm, where they reach around 10 MeV, much larger than \(B_{\eta}\).
## V Summary
We have examined the possible existence of \(\eta\)d and \(\eta^{3}\)He bound states. We introduced effective energy-independent Gaussian-type potentials for the \(\eta N\) interaction and obtained the relation between the \(\eta N\) scattering length \(a\) and the strength of the potential for various Gaussian ranges \(\mu\). The relations between the binding energies of \(\eta\)d and \(\eta^{3}\)He and the scattering length were obtained by solving the three-body and four-body Schrödinger equations with the GEM.
In the \(\eta\)d system, we find that \(a_{\rm real}>1.35\) fm at \(\mu=1.0\) fm\({}^{-2}\) and \(a_{\rm real}>1.65\) fm at \(\mu=4.0\) fm\({}^{-2}\) is necessary in order to have a bound \(\eta\)d state. Considering the range of the \(\eta N\) scattering length mentioned above, an \(\eta\)d bound state is therefore unlikely. In the \(\eta^{3}\)He system, in contrast, it is quite possible to have a bound \(\eta^{3}\)He nucleus within the range of the \(a_{\eta N}\) mentioned above.
Finally, we give the relations between the bound or unbound scenarios of the \(\eta^{3}\)He system and the complex scattering length \(a\). We find that the imaginary \(\eta N\) interaction, originating from the imaginary part of \(a\), reduces the binding energy. We also calculate the exact \(B_{\eta}\) values and decay widths of the \(\eta^{3}\)He nucleus for different scattering lengths. The \(B_{\eta}\) values are found to be around \(0\sim 2\) MeV and the decay widths lie between \(0\sim 10\) MeV when the scattering length satisfies \(a_{\rm real}<1.0\) fm and \(a_{\rm imag}<0.5\) fm. Unfortunately, as mentioned in the first section, so far experiments can only give wide ranges for the binding energy and decay width of the possible bound \(\eta^{3}\)He nucleus. More definite experimental values are therefore needed and will be very helpful for studying the \(\eta^{3}\)He nucleus theoretically.
###### Acknowledgements.
The authors thank Dr. Q. Meng for very helpful discussions. We also thank Prof. E. Hiyama and Prof. M. Kamimura for their support with the calculation code. This work is supported by the Strategic Priority Research Program of Chinese Academy of Sciences (Grant no. XDB34030301), Guangdong Major Project of Basic and Applied Basic Research (Grant no. 2020B0301030008), and National Natural Science Foundation of China (Grant no. 12233002).
|