Files changed (1)
  1. README.md +54 -40
README.md CHANGED

---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: text-to-3d
datasets:
- FreedomIntelligence/BlendNet
metrics:
- code_eval
tags:
- code
- render
- CAD
- 3D
- Modeling
- LLM
- bpy
- Blender
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---

# 🤖 BlenderLLM: Training Large Language Models for Computer-Aided Design with Self-improvement

**BlenderLLM** is built on **Qwen2.5-Coder-7B-Instruct** as its base model. It has been fine-tuned on the **BlendNet** training dataset and further optimized through **Self-improvement** techniques to achieve its best performance.

For more details, please visit our [GitHub repository](https://github.com/FreedomIntelligence/BlenderLLM) or refer to our [arXiv paper](https://www.arxiv.org/abs/2412.14203).

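## 🚀 Quick Start

Since BlenderLLM turns natural-language CAD instructions into Blender Python (`bpy`) scripts, a minimal inference sketch with 🤗 Transformers might look like the following. The repository id `FreedomIntelligence/BlenderLLM`, the example prompt, and the generation settings below are illustrative assumptions; please refer to our [GitHub repository](https://github.com/FreedomIntelligence/BlenderLLM) for the official inference scripts.

```python
# Minimal inference sketch (assumed Hub id and prompt; see the GitHub repository for the official scripts).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FreedomIntelligence/BlenderLLM"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# A natural-language CAD instruction; the model is expected to reply with a bpy script.
messages = [{"role": "user", "content": "Create a round wooden table with four cylindrical legs."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=1024)
script = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(script)  # Blender Python (bpy) code
```

The generated script can then be executed inside Blender (for example, `blender --background --python generated_script.py`) to build and render the 3D model.
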
## 📖 Citation
```bibtex
@misc{du2024blenderllmtraininglargelanguage,
      title={BlenderLLM: Training Large Language Models for Computer-Aided Design with Self-improvement},
      author={Yuhao Du and Shunian Chen and Wenbo Zan and Peizhao Li and Mingxuan Wang and Dingjie Song and Bo Li and Yan Hu and Benyou Wang},
      year={2024},
      eprint={2412.14203},
      archivePrefix={arXiv},
      primaryClass={cs.HC},
      url={https://arxiv.org/abs/2412.14203},
}
```

We are from the School of Data Science (SDS) at The Chinese University of Hong Kong, Shenzhen (CUHKSZ).