---
base_model:
- tencent/HunyuanVideo-I2V
library_name: diffusers
pipeline_tag: image-to-video
---

Unofficial community fork for Diffusers-format weights on [`tencent/HunyuanVideo-I2V`](https://huggingface.co/tencent/HunyuanVideo-I2V).

### Using Diffusers

HunyuanVideo-I2V can be used directly from Diffusers. Install the latest version of Diffusers.

```python
import torch
from diffusers import HunyuanVideoImageToVideoPipeline, HunyuanVideoTransformer3DModel
from diffusers.utils import load_image, export_to_video

# Available checkpoints: "hunyuanvideo-community/HunyuanVideo-I2V" and "hunyuanvideo-community/HunyuanVideo-I2V-33ch"
model_id = "hunyuanvideo-community/HunyuanVideo-I2V"
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id, subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = HunyuanVideoImageToVideoPipeline.from_pretrained(
    model_id, transformer=transformer, torch_dtype=torch.float16
)
pipe.vae.enable_tiling()
pipe.to("cuda")

prompt = "A man with short gray hair plays a red electric guitar."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)

output = pipe(image=image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4", fps=15)
```

Refer to the [documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/hunyuan_video) for more information.