---
license: mit
datasets:
- hrishivish23/MPM-Verse-MaterialSim-Small
- hrishivish23/MPM-Verse-MaterialSim-Large
language:
- en
metrics:
- accuracy
pipeline_tag: graph-ml
tags:
- physics
- scientific-ml
- lagrangian-dynamics
- neural-operator
- neural-operator-transformer
- graph-neural-networks
- graph-transformer
- sequence-to-sequence
- autoregressive
- temporal-dynamics
---

# PhysicsEngine: Reduced-Order Neural Operators for Lagrangian Dynamics

**By [Hrishikesh Viswanath](https://huggingface.co/hrishivish23), Yue Chang, Julius Berner, Peter Yichen Chen, Aniket Bera**



---

## Model Overview

**GIOROM** is a **Reduced-Order Neural Operator Transformer** designed for **Lagrangian dynamics simulations on highly sparse graphs**. The model enables hybrid **Eulerian-Lagrangian learning** by:

- **Projecting Lagrangian inputs onto uniform grids** with a **Graph-Interaction-Operator**.
- **Predicting acceleration from sparse velocity inputs** over past time windows with a **Neural Operator Transformer**.
- **Learning physics from sparse inputs (n ≪ N)** while allowing reconstruction at arbitrarily dense resolutions via an **Integral Transform Model**.
- **Dataset compatibility:** this model is compatible with [`MPM-Verse-MaterialSim-Small/Elasticity3DSmall`](https://huggingface.co/datasets/hrishivish23/MPM-Verse-MaterialSim-Small/tree/main/Elasticity3DSmall).

**Note:** while the model can infer at dense resolutions via the integral transform, **this repository only provides weights for the time-stepper model that predicts acceleration.**

---

## Available Model Variants

Each variant corresponds to a specific dataset; the table shows the reduction in particle count (n: reduced-order, N: full-order).

| Model Name | n (Reduced) | N (Full) |
|---------------------------------|-------------|----------|
| `giorom-3d-t-sand3d-long` | 3.0K | 32K |
| `giorom-3d-t-water3d` | 1.7K | 55K |
| `giorom-3d-t-elasticity` | 2.6K | 78K |
| `giorom-3d-t-plasticine` | 1.1K | 5K |
| `giorom-2d-t-water` | 0.12K | 1K |
| `giorom-2d-t-sand` | 0.3K | 2K |
| `giorom-2d-t-jelly` | 0.2K | 1.9K |
| `giorom-2d-t-multimaterial` | 0.25K | 2K |

---

## How It Works

### Input Representation

The model predicts **acceleration** from past velocity inputs:

- **Input shape:** `[n, D, W]`
  - `n`: number of particles (reduced-order, n ≪ N)
  - `D`: spatial dimension (2D or 3D)
  - `W`: time window (number of past velocity states)
- **Projected to a uniform latent space** of size `[c^D, D]`, where:
  - `c ∈ {8, 16, 32}`
  - `n - δn ≤ c^D ≤ n + δn`

This allows the model to generalize physics across different resolutions and discretizations.

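As a concrete illustration of the shapes above, here is a minimal numpy sketch; the sizes and variable names are ours for illustration, not part of the repository's API:

```python
import numpy as np

# Hypothetical sizes for illustration (not from the repo):
n, D, W = 1700, 3, 5                # reduced-order particles, 3D, 5 past states
velocities = np.zeros((n, D, W))    # model input: [n, D, W]

# Pick the latent grid resolution c from {8, 16, 32} whose node count c**D
# is closest to the particle count n, so the latent grid stays comparably sparse.
c = min((8, 16, 32), key=lambda c: abs(c**D - n))
latent = np.zeros((c**D, D))        # uniform latent representation: [c^D, D]

print(velocities.shape)  # (1700, 3, 5)
print(c, latent.shape)   # 8 (512, 3)
```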
### Prediction & Reconstruction

- The model **learns physical dynamics** on the sparse input representation.
- The **integral transform model** reconstructs dense outputs at arbitrary resolutions (not included in this repo).
- This enables **highly efficient, scalable simulations** without requiring full-resolution training.

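The reconstruction model itself is not shipped here, but the idea can be sketched with a fixed-kernel integral transform: values at dense query points are kernel-weighted averages over the sparse particles. This Gaussian-kernel toy is a stand-in of ours; the actual model learns the transform rather than fixing the kernel:

```python
import numpy as np

def reconstruct(sparse_pos, sparse_val, query_pos, bandwidth=0.1):
    """Toy integral transform: u(y) = sum_i k(y, x_i) v_i / sum_i k(y, x_i)."""
    # Pairwise squared distances between queries [m, D] and particles [n, D]
    d2 = ((query_pos[:, None, :] - sparse_pos[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2 * bandwidth**2))          # kernel weights [m, n]
    return (k @ sparse_val) / k.sum(1, keepdims=True)

rng = np.random.default_rng(0)
sparse_pos = rng.random((200, 2))                      # n = 200 sparse particles in 2D
sparse_val = np.sin(sparse_pos.sum(1, keepdims=True))  # smooth field sampled sparsely
query_pos = rng.random((5000, 2))                      # N = 5000 dense query points
dense_val = reconstruct(sparse_pos, sparse_val, query_pos)
print(dense_val.shape)  # (5000, 1)
```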
---

## Usage Guide

### 1️⃣ Install Dependencies

```bash
pip install transformers huggingface_hub torch
```

```bash
git clone https://github.com/HrishikeshVish/GIOROM/
cd GIOROM
```

### 2️⃣ Load a Model

```python
from models.giorom3d_T import PhysicsEngine
from models.config import TimeStepperConfig

# Load the pretrained time-stepper config and weights from the Hub
repo_id = "hrishivish23/giorom-3d-t-sand3d"
time_stepper_config = TimeStepperConfig.from_pretrained(repo_id)
simulator = PhysicsEngine.from_pretrained(repo_id, config=time_stepper_config)
```

### 3️⃣ Run Inference

```python
import torch

# The end-to-end rollout (data loading, graph construction, autoregressive
# stepping) is implemented in the GitHub repository; see its inference
# scripts for a runnable example.
```

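Conceptually, the rollout boils down to Euler-integrating predicted accelerations and sliding the velocity window forward. A minimal numpy sketch with a stub predictor (the function names and constant-gravity stub are ours, not the repo's API):

```python
import numpy as np

def predict_acceleration(vel_window):
    """Stub for the time-stepper: returns acceleration [n, D].

    Stands in for the trained simulator; here it is simply constant gravity.
    """
    n, D, W = vel_window.shape
    acc = np.zeros((n, D))
    acc[:, -1] = -9.81  # gravity along the last spatial axis
    return acc

def rollout(pos, vel_window, steps, dt=5e-3):
    """Autoregressive rollout: Euler-integrate predicted accelerations."""
    for _ in range(steps):
        acc = predict_acceleration(vel_window)
        new_vel = vel_window[:, :, -1] + dt * acc  # v_{t+1} = v_t + dt * a
        pos = pos + dt * new_vel                   # x_{t+1} = x_t + dt * v_{t+1}
        # Slide the window: drop the oldest state, append the newest
        vel_window = np.concatenate(
            [vel_window[:, :, 1:], new_vel[:, :, None]], axis=2)
    return pos, vel_window

n, D, W = 1700, 3, 5
pos = np.zeros((n, D))
vel_window = np.zeros((n, D, W))
pos, vel_window = rollout(pos, vel_window, steps=10)
print(pos.shape, vel_window.shape)  # (1700, 3) (1700, 3, 5)
```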
---

## Model Weights and Checkpoints

| Model Name | Model ID |
|---------------------------------|----------|
| `giorom-3d-t-sand3d-long` | [`hrishivish23/giorom-3d-t-sand3d-long`](https://huggingface.co/hrishivish23/giorom-3d-t-sand3d-long) |
| `giorom-3d-t-water3d` | [`hrishivish23/giorom-3d-t-water3d`](https://huggingface.co/hrishivish23/giorom-3d-t-water3d) |

---

## Training Details

### Hyperparameters

- **Graph Interaction Operator layers:** 4
- **Transformer heads:** 4
- **Embedding dimension:** 128
- **Latent grid sizes:** `{8×8, 16×16, 32×32}`
- **Learning rate:** `1e-4`
- **Optimizer:** Adamax
- **Loss function:** MSE + physics regularization (loss computed on Euler-integrated outputs)
- **Training steps:** 1M+

### Hardware

- **Trained on:** NVIDIA RTX 3050
- **Batch size:** 2

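The "loss computed on Euler-integrated outputs" can be read as: integrate the predicted acceleration to positions before taking the MSE, so errors are penalized in position space. A schematic of ours (not the actual training code):

```python
import numpy as np

def euler_integrated_mse(pred_acc, vel_t, pos_t, pos_target, dt=5e-3):
    """MSE in position space after Euler-integrating predicted acceleration."""
    vel_next = vel_t + dt * pred_acc   # v_{t+1} = v_t + dt * a_pred
    pos_next = pos_t + dt * vel_next   # x_{t+1} = x_t + dt * v_{t+1}
    return np.mean((pos_next - pos_target) ** 2)

n, D = 100, 3
pos_t = np.zeros((n, D))
vel_t = np.zeros((n, D))
target = np.zeros((n, D))
# A zero-acceleration prediction gives zero loss on a static target:
print(euler_integrated_mse(np.zeros((n, D)), vel_t, pos_t, target))  # 0.0
```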
---

## Citation

If you use this model, please cite:

```bibtex
@article{viswanath2024reduced,
  title={Reduced-Order Neural Operators: Learning Lagrangian Dynamics on Highly Sparse Graphs},
  author={Viswanath, Hrishikesh and Chang, Yue and Berner, Julius and Chen, Peter Yichen and Bera, Aniket},
  journal={arXiv preprint arXiv:2407.03925},
  year={2024}
}
```

---

## Contact

For questions or collaborations:

- Author: [Hrishikesh Viswanath](https://hrishikeshvish.github.io)
- Email: [email protected]
- Hugging Face discussions: [Model Page](https://huggingface.co/hrishivish23/giorom-3d-t-sand3d-long/discussions)

---

## Related Work

- **Neural operators for PDEs:** Fourier Neural Operators, Graph Neural Operators
- **Lagrangian methods:** Material Point Method, SPH, NCLAW, CROM, LiCROM
- **Physics-based ML:** PINNs, GNS, MeshGraphNets

---

### Summary

This model is ideal for **fast and scalable physics simulations** where full-resolution computation is infeasible. The reduced-order approach enables **efficient learning on sparse inputs**, with the ability to **reconstruct dense outputs using an integral transform model (not included in this repo)**.