---
license: bigscience-openrail-m
metrics:
- code_eval
library_name: transformers
tags:
- code
---

<p style="font-size:28px;" align="center">
🏠 MoTCoder
</p>

<p align="center">
• 🤗 <a href="https://huggingface.co/datasets/JingyaoLi/MoTCode-Data" target="_blank">Data</a> • 🤗 <a href="https://huggingface.co/JingyaoLi/MoTCoder-15B-v1.0" target="_blank">Model</a> • 🐱 <a href="https://github.com/dvlab-research/MoTCoder" target="_blank">Code</a> • 📃 <a href="https://arxiv.org/abs/2312.15960" target="_blank">Paper</a> <br>
</p>

[![PWC](https://img.shields.io/endpoint?url=https%3A%2F%2Fpaperswithcode.com%2Fbadge%2Fmotcoder-elevating-large-language-models-with%2Fcode-generation-on-apps%3Fmetric%3DIntroductory%2520Pass%25401)](https://paperswithcode.com/sota/code-generation-on-apps?metric=Introductory%20Pass%401)
[![PWC](https://img.shields.io/endpoint?url=https%3A%2F%2Fpaperswithcode.com%2Fbadge%2Fmotcoder-elevating-large-language-models-with%2Fcode-generation-on-codecontests%3Fmetric%3DTest%2520Set%2520pass%25401)](https://paperswithcode.com/sota/code-generation-on-codecontests?metric=Test%20Set%20pass%401)

Large Language Models (LLMs) have showcased impressive capabilities in handling straightforward programming tasks. However, their performance tends to falter when confronted with more challenging programming problems. We observe that conventional models often generate solutions as monolithic code blocks, restricting their effectiveness in tackling intricate questions. To overcome this limitation, we present Module-of-Thought Coder (MoTCoder), a framework for MoT instruction tuning designed to promote the decomposition of tasks into logical sub-tasks and sub-modules. Our investigations reveal that, through the cultivation and utilization of sub-modules, MoTCoder significantly improves both the modularity and correctness of the generated solutions, leading to substantial pass@1 improvements of 2.4% on APPS and 4.5% on CodeContests. MoTCoder also achieves significant improvements in self-correction capabilities, surpassing the current SOTA by 3.3%. Additionally, we analyze the relationship between problem complexity and optimal module decomposition, and we evaluate the maintainability index, confirming that code generated by MoTCoder is easier to understand and modify, which benefits long-term code maintenance and evolution. Our code is available at https://github.com/dvlab-research/MoTCoder.
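
As a toy illustration of the module-of-thought style this framework encourages (a hand-written example, not actual MoTCoder output), a solution is decomposed into small named sub-modules that a short main routine composes, rather than emitted as one monolithic block:

```python
# Toy illustration of a modular, MoT-style solution (hand-written, not
# model output): each sub-task becomes a small named function, and a
# short main routine composes them.
def parse_input(raw: str) -> list[int]:
    """Sub-module 1: turn the raw problem input into a list of integers."""
    return [int(tok) for tok in raw.split()]


def max_pairwise_product(nums: list[int]) -> int:
    """Sub-module 2: solve the core sub-task."""
    a, b = sorted(nums)[-2:]
    return a * b


def solve(raw: str) -> int:
    """Main routine: compose the sub-modules into a full solution."""
    return max_pairwise_product(parse_input(raw))


assert solve("1 5 3 7") == 35
```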

<div style="text-align: center;">
<img src="impression.png" alt="impression" />
</div>

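The checkpoint loads with the standard `transformers` causal-LM API. Below is a minimal inference sketch; the plain-text prompt is an assumption on our part, as the exact instruction template used for MoT instruction tuning is documented in the GitHub repository.

```python
# Minimal inference sketch. The plain instruction prompt below is an
# assumption; consult the GitHub repository for the exact prompt template
# used during MoT instruction tuning.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JingyaoLi/MoTCoder-15B-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
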
## Performance

### APPS
<div style="text-align: center;">
<img src="apps.png" alt="Performance on APPS" />
</div>

### CodeContests
<div style="text-align: center;">
<img src="codecontests.png" alt="Performance on CodeContests" width="500px" />
</div>

### Reflection
<div style="text-align: center;">
<img src="reflection.png" alt="Performance on Reflection" />
</div>

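The pass@1 numbers above can be computed with the `code_eval` metric listed in this card's metadata. Here is a minimal sketch on a toy problem; the test string and candidate solution are illustrative, not drawn from APPS or CodeContests.

```python
# Sketch of a pass@1 computation with the `evaluate` library's code_eval
# metric, on a toy problem (not APPS/CodeContests data). code_eval runs
# untrusted candidate code, so it must be explicitly enabled.
import os
os.environ["HF_ALLOW_CODE_EVAL"] = "1"

import evaluate

code_eval = evaluate.load("code_eval")
references = ["assert add(2, 3) == 5"]               # one test per problem
candidates = [["def add(a, b):\n    return a + b"]]  # n candidates per problem
pass_at_k, results = code_eval.compute(
    references=references, predictions=candidates, k=[1]
)
print(pass_at_k)  # {'pass@1': 1.0}
```
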
## Citation
If you find our work useful, please consider citing it:
```bibtex
@misc{li2025motcoderelevatinglargelanguage,
      title={MoTCoder: Elevating Large Language Models with Modular of Thought for Challenging Programming Tasks},
      author={Jingyao Li and Pengguang Chen and Bin Xia and Hong Xu and Jiaya Jia},
      year={2025},
      eprint={2312.15960},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2312.15960},
}
```