Improve language tag

#1
by lbourdois - opened
Files changed (1)
  1. README.md +46 -34
README.md CHANGED
@@ -1,35 +1,47 @@
- ---
- license: apache-2.0
- datasets:
- - AI-MO/NuminaMath-TIR
- language:
- - en
- metrics:
- - accuracy
- base_model:
- - Qwen/Qwen2.5-0.5B-Instruct
- ---
- # NeuroCoder Qwen2.5-0.5B-Instruct-MemoryR
-
- ## Overview
-
- This is the Hugging Face checkpoint of **Qwen2.5-0.5B-Instruct-MemoryR**, a memory-augmented RL-tuned model based on Qwen2.5.
-
- The model is introduced and analyzed in our paper: https://arxiv.org/abs/2504.02273
-
- ## Usage
- ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- # Load tokenizer and model
- tokenizer = AutoTokenizer.from_pretrained("neurocoder/Qwen2.5-0.5B-Instruct-MemoryR")
- model = AutoModelForCausalLM.from_pretrained("neurocoder/Qwen2.5-0.5B-Instruct-MemoryR")
-
- # Example input
- prompt = "What is the capital of France?"
- inputs = tokenizer(prompt, return_tensors="pt")
-
- # Generate output
- outputs = model.generate(**inputs, max_new_tokens=50)
- print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ---
+ license: apache-2.0
+ datasets:
+ - AI-MO/NuminaMath-TIR
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ metrics:
+ - accuracy
+ base_model:
+ - Qwen/Qwen2.5-0.5B-Instruct
+ ---
+ # NeuroCoder Qwen2.5-0.5B-Instruct-MemoryR
+
+ ## Overview
+
+ This is the Hugging Face checkpoint of **Qwen2.5-0.5B-Instruct-MemoryR**, a memory-augmented RL-tuned model based on Qwen2.5.
+
+ The model is introduced and analyzed in our paper: https://arxiv.org/abs/2504.02273
+
+ ## Usage
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Load tokenizer and model
+ tokenizer = AutoTokenizer.from_pretrained("neurocoder/Qwen2.5-0.5B-Instruct-MemoryR")
+ model = AutoModelForCausalLM.from_pretrained("neurocoder/Qwen2.5-0.5B-Instruct-MemoryR")
+
+ # Example input
+ prompt = "What is the capital of France?"
+ inputs = tokenizer(prompt, return_tensors="pt")
+
+ # Generate output
+ outputs = model.generate(**inputs, max_new_tokens=50)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
  ```
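
The change above replaces the single two-letter `en` tag in the model card's YAML front matter with ISO 639-3 codes for the languages the base model supports. As a minimal sketch of how such a `language:` list can be read back out of a model card (standard library only; the naive line-based parser and the `extract_languages` helper are illustrative, not part of this PR or of any Hugging Face API):

```python
import re

def extract_languages(readme_text: str) -> list[str]:
    """Return the entries of the `language:` list from a model card's
    YAML front matter (naive line-based parse, no YAML dependency)."""
    match = re.match(r"---\n(.*?)\n---", readme_text, re.DOTALL)
    if not match:
        return []  # no front matter block found
    langs, in_lang = [], False
    for line in match.group(1).splitlines():
        if line.startswith("language:"):
            in_lang = True          # entered the language list
        elif in_lang and line.startswith("- "):
            langs.append(line[2:].strip())
        elif in_lang:
            in_lang = False         # another key ends the list
    return langs

card = """---
license: apache-2.0
language:
- zho
- eng
- fra
metrics:
- accuracy
---
"""
print(extract_languages(card))  # ['zho', 'eng', 'fra']
```

A proper tool would parse the front matter with a YAML library (e.g. `huggingface_hub.ModelCard` does this); the sketch only shows where the tags this PR edits live in the file.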