tianweiy committed
Commit 32b620e · verified · 1 Parent(s): 3353d13

Update README.md

Files changed (1): README.md (+86, -85)
README.md CHANGED
 
---
license: cc-by-nc-4.0
library_name: diffusers
tags:
- text-to-video
- diffusion distillation
---

# CausVid Model Card

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63363b864067f020756275b7/S1o5lYfdueP7J02rIuZF3.png)

> [**From Slow Bidirectional to Fast Autoregressive Video Diffusion Models**](https://arxiv.org/abs/2412.07772),
> Tianwei Yin*, Qiang Zhang*, Richard Zhang, William T. Freeman, Frédo Durand, Eli Shechtman, Xun Huang (* equal contribution)

## Environment Setup

```bash
# Clone the repository and create an isolated environment
git clone https://github.com/tianweiy/CausVid && cd CausVid
conda create -n causvid python=3.10 -y
conda activate causvid
# Install PyTorch and the project dependencies, then install the package in development mode
pip install torch torchvision
pip install -r requirements.txt
python setup.py develop
```

Also download the Wan base model from [here](https://github.com/Wan-Video/Wan2.1) and save it to `wan_models/Wan2.1-T2V-1.3B/`.
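
If you prefer to fetch the weights from the Hugging Face Hub, a minimal download sketch with the Hub CLI might look like the following (this assumes the `Wan-AI/Wan2.1-T2V-1.3B` repo id and that the CLI is installed; adjust to your setup):

```bash
# Hypothetical sketch: pull the Wan2.1-T2V-1.3B weights into the folder the configs expect
pip install "huggingface_hub[cli]"
huggingface-cli download Wan-AI/Wan2.1-T2V-1.3B --local-dir wan_models/Wan2.1-T2V-1.3B
```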

## Inference Example

First, download the checkpoints: [Autoregressive Model](https://huggingface.co/tianweiy/CausVid/tree/main/autoregressive_checkpoint), [Bidirectional Model 1](https://huggingface.co/tianweiy/CausVid/tree/main/bidirectional_checkpoint1), or [Bidirectional Model 2](https://huggingface.co/tianweiy/CausVid/tree/main/bidirectional_checkpoint2) (the last performs slightly better).
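
These can be downloaded through the links above or, as a sketch, with the Hub CLI (the `--include` pattern below grabs only the autoregressive checkpoint; substitute the folder name for one of the bidirectional checkpoints if preferred):

```bash
# Download just the autoregressive checkpoint folder from this model repo
huggingface-cli download tianweiy/CausVid --include "autoregressive_checkpoint/*" --local-dir .
```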

### Autoregressive 3-step 5-second Video Generation

```bash
python minimal_inference/autoregressive_inference.py --config_path configs/wan_causal_dmd.yaml --checkpoint_folder XXX --output_folder XXX --prompt_file_path XXX
```
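
As an illustration only, with hypothetical local paths (replace `XXX` above with your own checkpoint folder, output folder, and prompt file), an invocation could look like:

```bash
# Hypothetical paths for illustration; adjust to your local layout
python minimal_inference/autoregressive_inference.py \
    --config_path configs/wan_causal_dmd.yaml \
    --checkpoint_folder autoregressive_checkpoint \
    --output_folder outputs \
    --prompt_file_path prompts.txt
```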

### Autoregressive 3-step Long Video Generation

```bash
python minimal_inference/longvideo_autoregressive_inference.py --config_path configs/wan_causal_dmd.yaml --checkpoint_folder XXX --output_folder XXX --prompt_file_path XXX --num_rollout XXX
```

### Bidirectional 3-step 5-second Video Generation

```bash
python minimal_inference/bidirectional_inference.py --config_path configs/wan_bidirectional_dmd_from_scratch.yaml --checkpoint_folder XXX --output_folder XXX --prompt_file_path XXX
```

For more information, please refer to the [code repository](https://github.com/tianweiy/CausVid).

## License

CausVid is released under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en).

## Citation

If you find CausVid useful or relevant to your research, please cite our papers:

```bibtex
@inproceedings{yin2025causvid,
  title={From Slow Bidirectional to Fast Autoregressive Video Diffusion Models},
  author={Yin, Tianwei and Zhang, Qiang and Zhang, Richard and Freeman, William T and Durand, Fredo and Shechtman, Eli and Huang, Xun},
  booktitle={CVPR},
  year={2025}
}

@inproceedings{yin2024improved,
  title={Improved Distribution Matching Distillation for Fast Image Synthesis},
  author={Yin, Tianwei and Gharbi, Micha{\"e}l and Park, Taesung and Zhang, Richard and Shechtman, Eli and Durand, Fredo and Freeman, William T},
  booktitle={NeurIPS},
  year={2024}
}

@inproceedings{yin2024onestep,
  title={One-step Diffusion with Distribution Matching Distillation},
  author={Yin, Tianwei and Gharbi, Micha{\"e}l and Zhang, Richard and Shechtman, Eli and Durand, Fr{\'e}do and Freeman, William T and Park, Taesung},
  booktitle={CVPR},
  year={2024}
}
```