Update README.md

README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-title: FramePack
+title: FramePack image to video
 emoji: 🎬
 colorFrom: indigo
 colorTo: purple
@@ -9,65 +9,3 @@ app_file: app.py
 pinned: false
 license: mit
 ---
-
-# FramePack - Image to Video Generation
-
-This is a modified version of the FramePack model with a 5-second maximum video length limit.
-
-## Features
-
-- Generate realistic videos from still images
-- Simple and intuitive interface
-- Bilingual support (English/Chinese)
-- Maximum video length of 5 seconds to ensure quick generation times
-
-## Usage
-
-1. Upload an image
-2. Enter a prompt describing the desired motion
-3. Adjust parameters if needed (seed, video length, etc.)
-4. Click "Generate" and wait for the result
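
For reference, a Gradio Space like this can usually also be called from Python with `gradio_client`. The sketch below is illustrative only: the Space id, endpoint name, and argument order are placeholders, not values taken from this repository's `app.py`; the Space's "Use via API" panel shows the real signature.

```python
# Hypothetical sketch of calling the Space programmatically (gradio_client >= 1.0).
# "user/FramePack" and api_name="/generate" are placeholders; the real endpoint
# and argument order are defined by app.py.
from gradio_client import Client, handle_file

client = Client("user/FramePack")  # placeholder Space id
result = client.predict(
    handle_file("portrait.png"),   # input image
    "The girl dances gracefully, with clear movements, full of charm.",  # prompt
    31337,                         # seed
    5,                             # video length in seconds (capped at 5 in this Space)
    api_name="/generate",          # placeholder endpoint name
)
print(result)                      # typically a filepath or URL to the generated video
```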
-
-## Technical Details
-
-This application uses the HunyuanVideo transformer model for image-to-video generation. The model has been optimized to work efficiently with videos up to 5 seconds in length.
-
-## Credits
-
-Based on the original FramePack model by lllyasviel.
-
-## Features
-
-- Generate smooth motion videos from a single image
-- Built on the HunyuanVideo and FramePack architectures
-- Runs on low-VRAM GPUs (6 GB minimum)
-- Generates videos up to 5 seconds long
-- Uses TeaCache to speed up generation
-
-## Usage
-
-1. Upload an image of a person
-2. Enter a prompt describing the desired motion
-3. Set the desired video length (5 seconds maximum)
-4. Click the "Start Generation" button
-5. Wait for the video to generate (generation is progressive and keeps extending the video length)
-
-## Example Prompts
-
-- "The girl dances gracefully, with clear movements, full of charm."
-- "A character doing some simple body movements."
-- "The man dances energetically, leaping mid-air with fluid arm swings and quick footwork."
-
-## Notes
-
-- Video generation runs in reverse order: the ending motion is generated before the starting motion
-- For higher-quality results, it is recommended to disable the TeaCache option
-- If you run into out-of-memory errors, increase the "GPU Inference Preserved Memory" value
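
The reverse-order, progressive behaviour described above can be pictured with a small sketch. This is a toy illustration of the scheduling idea only, not code from this Space or from the FramePack repository; all names are made up.

```python
# Toy sketch of progressive, reverse-order generation (illustration only).
# The clip is produced in short sections starting from the END of the video,
# with earlier sections prepended, so the preview keeps growing toward its
# final length. `generate_clip` and its arguments are hypothetical.
from typing import List

def generate_clip(num_sections: int, frames_per_section: int) -> List[int]:
    clip: List[int] = []  # frame indices of the assembled clip, in playback order
    for section in reversed(range(num_sections)):       # last section first
        start = section * frames_per_section
        new_frames = list(range(start, start + frames_per_section))
        clip = new_frames + clip                         # prepend: clip grows toward the start
        print(f"after section {section}: {len(clip)} frames ready")
    return clip

generate_clip(num_sections=5, frames_per_section=30)     # e.g. roughly 5 s at 30 fps
```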
-
-## Technical Details
-
-This application is based on the [FramePack](https://github.com/lllyasviel/FramePack) project and uses the Hunyuan Video model together with the FramePack technique for video generation. The technique compresses the input context to a fixed length, so the generation workload is independent of the video length, which makes it possible to process a large number of frames even on a laptop GPU.
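
The fixed-length-context claim can be illustrated with a toy sketch: however many frames already exist, only a constant-size summary of them is fed back as conditioning, so per-section cost stays flat. This is a simplified illustration of the idea under made-up names, not FramePack's actual compression scheme.

```python
# Toy illustration of fixed-length context (not FramePack's real algorithm).
# No matter how long the history of already-generated frames grows, it is reduced
# to a constant number of context entries before conditioning the next section.
import numpy as np

def compress_history(history: np.ndarray, context_len: int = 16) -> np.ndarray:
    """history: (num_frames, feat_dim) -> (context_len, feat_dim)."""
    if len(history) <= context_len:
        # Pad by repeating the last frame so the context size is always fixed.
        pad = np.repeat(history[-1:], context_len - len(history), axis=0)
        return np.concatenate([history, pad], axis=0)
    # Keep recent frames densely and older frames sparsely (coarser further back).
    recent = history[-context_len // 2:]
    older_idx = np.linspace(0, len(history) - len(recent) - 1,
                            context_len - len(recent)).astype(int)
    return np.concatenate([history[older_idx], recent], axis=0)

history = np.random.rand(300, 64)          # 300 frames of 64-dim features
print(compress_history(history).shape)     # (16, 64) regardless of history length
```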
-
----
-
-Original project: [FramePack GitHub](https://github.com/lllyasviel/FramePack)