k1h0 committed
Commit 9605235 · verified · Parent: b3ef2c6

Upload folder using huggingface_hub
README.md ADDED
@@ -0,0 +1,202 @@
1
+ ---
2
+ base_model: Salesforce/codegen2-16B_P
3
+ library_name: peft
4
+ ---
5
+
6
+ # Model Card for Model ID
7
+
8
+ <!-- Provide a quick summary of what the model is/does. -->
9
+
10
+
11
+
12
+ ## Model Details
13
+
14
+ ### Model Description
15
+
16
+ <!-- Provide a longer summary of what this model is. -->
17
+
18
+
19
+
20
+ - **Developed by:** [More Information Needed]
21
+ - **Funded by [optional]:** [More Information Needed]
22
+ - **Shared by [optional]:** [More Information Needed]
23
+ - **Model type:** [More Information Needed]
24
+ - **Language(s) (NLP):** [More Information Needed]
25
+ - **License:** [More Information Needed]
26
+ - **Finetuned from model [optional]:** [More Information Needed]
27
+
28
+ ### Model Sources [optional]
29
+
30
+ <!-- Provide the basic links for the model. -->
31
+
32
+ - **Repository:** [More Information Needed]
33
+ - **Paper [optional]:** [More Information Needed]
34
+ - **Demo [optional]:** [More Information Needed]
35
+
36
+ ## Uses
37
+
38
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
39
+
40
+ ### Direct Use
41
+
42
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
43
+
44
+ [More Information Needed]
45
+
46
+ ### Downstream Use [optional]
47
+
48
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
49
+
50
+ [More Information Needed]
51
+
52
+ ### Out-of-Scope Use
53
+
54
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
55
+
56
+ [More Information Needed]
57
+
58
+ ## Bias, Risks, and Limitations
59
+
60
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
61
+
62
+ [More Information Needed]
63
+
64
+ ### Recommendations
65
+
66
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
67
+
68
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
69
+
70
+ ## How to Get Started with the Model
71
+
72
+ Use the code below to get started with the model.
73
+
74
+ [More Information Needed]
75
+
76
+ ## Training Details
77
+
78
+ ### Training Data
79
+
80
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
81
+
82
+ [More Information Needed]
83
+
84
+ ### Training Procedure
85
+
86
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
87
+
88
+ #### Preprocessing [optional]
89
+
90
+ [More Information Needed]
91
+
92
+
93
+ #### Training Hyperparameters
94
+
95
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
96
+
97
+ #### Speeds, Sizes, Times [optional]
98
+
99
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
100
+
101
+ [More Information Needed]
102
+
103
+ ## Evaluation
104
+
105
+ <!-- This section describes the evaluation protocols and provides the results. -->
106
+
107
+ ### Testing Data, Factors & Metrics
108
+
109
+ #### Testing Data
110
+
111
+ <!-- This should link to a Dataset Card if possible. -->
112
+
113
+ [More Information Needed]
114
+
115
+ #### Factors
116
+
117
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
118
+
119
+ [More Information Needed]
120
+
121
+ #### Metrics
122
+
123
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
124
+
125
+ [More Information Needed]
126
+
127
+ ### Results
128
+
129
+ [More Information Needed]
130
+
131
+ #### Summary
132
+
133
+
134
+
135
+ ## Model Examination [optional]
136
+
137
+ <!-- Relevant interpretability work for the model goes here -->
138
+
139
+ [More Information Needed]
140
+
141
+ ## Environmental Impact
142
+
143
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
144
+
145
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
146
+
147
+ - **Hardware Type:** [More Information Needed]
148
+ - **Hours used:** [More Information Needed]
149
+ - **Cloud Provider:** [More Information Needed]
150
+ - **Compute Region:** [More Information Needed]
151
+ - **Carbon Emitted:** [More Information Needed]
152
+
153
+ ## Technical Specifications [optional]
154
+
155
+ ### Model Architecture and Objective
156
+
157
+ [More Information Needed]
158
+
159
+ ### Compute Infrastructure
160
+
161
+ [More Information Needed]
162
+
163
+ #### Hardware
164
+
165
+ [More Information Needed]
166
+
167
+ #### Software
168
+
169
+ [More Information Needed]
170
+
171
+ ## Citation [optional]
172
+
173
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
174
+
175
+ **BibTeX:**
176
+
177
+ [More Information Needed]
178
+
179
+ **APA:**
180
+
181
+ [More Information Needed]
182
+
183
+ ## Glossary [optional]
184
+
185
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
186
+
187
+ [More Information Needed]
188
+
189
+ ## More Information [optional]
190
+
191
+ [More Information Needed]
192
+
193
+ ## Model Card Authors [optional]
194
+
195
+ [More Information Needed]
196
+
197
+ ## Model Card Contact
198
+
199
+ [More Information Needed]
200
+ ### Framework versions
201
+
202
+ - PEFT 0.14.0
adapter_config.json ADDED
@@ -0,0 +1,35 @@
+ {
+ "alpha_pattern": {},
+ "auto_mapping": null,
+ "base_model_name_or_path": "Salesforce/codegen2-16B_P",
+ "bias": "none",
+ "eva_config": null,
+ "exclude_modules": null,
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 32,
+ "lora_bias": false,
+ "lora_dropout": 0.1,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": [
+ "wte",
+ "lm_head"
+ ],
+ "peft_type": "LORA",
+ "r": 16,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "qkv_proj",
+ "out_proj"
+ ],
+ "task_type": "CAUSAL_LM",
+ "use_dora": false,
+ "use_rslora": false
+ }
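Read as a training-time recipe, the JSON above describes a LoRA adapter of rank 16 with alpha 32 and dropout 0.1, applied to the `qkv_proj` and `out_proj` attention projections, while the embedding table `wte` and the `lm_head` are trained and saved in full. A hedged reconstruction of the equivalent `peft.LoraConfig` (not taken from the original training script):

```python
from peft import LoraConfig, TaskType

# Hedged reconstruction of the configuration stored in adapter_config.json.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    bias="none",
    target_modules=["qkv_proj", "out_proj"],  # CodeGen attention/output projections
    modules_to_save=["wte", "lm_head"],       # fully trained alongside the LoRA layers
)
```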
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dad68fd47d0ca79638a2185e50edcf12c6f04458c1dd7d82cd89b8ba210d950c
+ size 1298520480
added_tokens.json ADDED
@@ -0,0 +1,945 @@
1
+ {
2
+ "\t\t": 50294,
3
+ "\t\t\t": 50293,
4
+ "\t\t\t\t": 50292,
5
+ "\t\t\t\t\t": 50291,
6
+ "\t\t\t\t\t\t": 50290,
7
+ "\t\t\t\t\t\t\t": 50289,
8
+ "\t\t\t\t\t\t\t\t": 50288,
9
+ "\t\t\t\t\t\t\t\t\t": 50287,
10
+ " ": 50286,
11
+ " ": 50285,
12
+ " ": 50284,
13
+ " ": 50283,
14
+ " ": 50282,
15
+ " ": 50281,
16
+ " ": 50280,
17
+ " ": 50279,
18
+ " ": 50278,
19
+ " ": 50277,
20
+ " ": 50276,
21
+ " ": 50275,
22
+ " ": 50274,
23
+ " ": 50273,
24
+ " ": 50272,
25
+ " ": 50271,
26
+ " ": 50270,
27
+ " ": 50269,
28
+ " ": 50268,
29
+ " ": 50267,
30
+ " ": 50266,
31
+ " ": 50265,
32
+ " ": 50264,
33
+ " ": 50263,
34
+ " ": 50262,
35
+ " ": 50261,
36
+ " ": 50260,
37
+ " ": 50259,
38
+ " ": 50258,
39
+ " ": 50257,
40
+ "<dummy_0>": 50295,
41
+ "<dummy_1>": 50296,
42
+ "<dummy_2>": 50297,
43
+ "<dummy_3>": 50298,
44
+ "<eom>": 50300,
45
+ "<mask_100>": 51100,
46
+ "<mask_101>": 51099,
47
+ "<mask_102>": 51098,
48
+ "<mask_103>": 51097,
49
+ "<mask_104>": 51096,
50
+ "<mask_105>": 51095,
51
+ "<mask_106>": 51094,
52
+ "<mask_107>": 51093,
53
+ "<mask_108>": 51092,
54
+ "<mask_109>": 51091,
55
+ "<mask_10>": 51190,
56
+ "<mask_110>": 51090,
57
+ "<mask_111>": 51089,
58
+ "<mask_112>": 51088,
59
+ "<mask_113>": 51087,
60
+ "<mask_114>": 51086,
61
+ "<mask_115>": 51085,
62
+ "<mask_116>": 51084,
63
+ "<mask_117>": 51083,
64
+ "<mask_118>": 51082,
65
+ "<mask_119>": 51081,
66
+ "<mask_11>": 51189,
67
+ "<mask_120>": 51080,
68
+ "<mask_121>": 51079,
69
+ "<mask_122>": 51078,
70
+ "<mask_123>": 51077,
71
+ "<mask_124>": 51076,
72
+ "<mask_125>": 51075,
73
+ "<mask_126>": 51074,
74
+ "<mask_127>": 51073,
75
+ "<mask_128>": 51072,
76
+ "<mask_129>": 51071,
77
+ "<mask_12>": 51188,
78
+ "<mask_130>": 51070,
79
+ "<mask_131>": 51069,
80
+ "<mask_132>": 51068,
81
+ "<mask_133>": 51067,
82
+ "<mask_134>": 51066,
83
+ "<mask_135>": 51065,
84
+ "<mask_136>": 51064,
85
+ "<mask_137>": 51063,
86
+ "<mask_138>": 51062,
87
+ "<mask_139>": 51061,
88
+ "<mask_13>": 51187,
89
+ "<mask_140>": 51060,
90
+ "<mask_141>": 51059,
91
+ "<mask_142>": 51058,
92
+ "<mask_143>": 51057,
93
+ "<mask_144>": 51056,
94
+ "<mask_145>": 51055,
95
+ "<mask_146>": 51054,
96
+ "<mask_147>": 51053,
97
+ "<mask_148>": 51052,
98
+ "<mask_149>": 51051,
99
+ "<mask_14>": 51186,
100
+ "<mask_150>": 51050,
101
+ "<mask_151>": 51049,
102
+ "<mask_152>": 51048,
103
+ "<mask_153>": 51047,
104
+ "<mask_154>": 51046,
105
+ "<mask_155>": 51045,
106
+ "<mask_156>": 51044,
107
+ "<mask_157>": 51043,
108
+ "<mask_158>": 51042,
109
+ "<mask_159>": 51041,
110
+ "<mask_15>": 51185,
111
+ "<mask_160>": 51040,
112
+ "<mask_161>": 51039,
113
+ "<mask_162>": 51038,
114
+ "<mask_163>": 51037,
115
+ "<mask_164>": 51036,
116
+ "<mask_165>": 51035,
117
+ "<mask_166>": 51034,
118
+ "<mask_167>": 51033,
119
+ "<mask_168>": 51032,
120
+ "<mask_169>": 51031,
121
+ "<mask_16>": 51184,
122
+ "<mask_170>": 51030,
123
+ "<mask_171>": 51029,
124
+ "<mask_172>": 51028,
125
+ "<mask_173>": 51027,
126
+ "<mask_174>": 51026,
127
+ "<mask_175>": 51025,
128
+ "<mask_176>": 51024,
129
+ "<mask_177>": 51023,
130
+ "<mask_178>": 51022,
131
+ "<mask_179>": 51021,
132
+ "<mask_17>": 51183,
133
+ "<mask_180>": 51020,
134
+ "<mask_181>": 51019,
135
+ "<mask_182>": 51018,
136
+ "<mask_183>": 51017,
137
+ "<mask_184>": 51016,
138
+ "<mask_185>": 51015,
139
+ "<mask_186>": 51014,
140
+ "<mask_187>": 51013,
141
+ "<mask_188>": 51012,
142
+ "<mask_189>": 51011,
143
+ "<mask_18>": 51182,
144
+ "<mask_190>": 51010,
145
+ "<mask_191>": 51009,
146
+ "<mask_192>": 51008,
147
+ "<mask_193>": 51007,
148
+ "<mask_194>": 51006,
149
+ "<mask_195>": 51005,
150
+ "<mask_196>": 51004,
151
+ "<mask_197>": 51003,
152
+ "<mask_198>": 51002,
153
+ "<mask_199>": 51001,
154
+ "<mask_19>": 51181,
155
+ "<mask_1>": 51199,
156
+ "<mask_200>": 51000,
157
+ "<mask_201>": 50999,
158
+ "<mask_202>": 50998,
159
+ "<mask_203>": 50997,
160
+ "<mask_204>": 50996,
161
+ "<mask_205>": 50995,
162
+ "<mask_206>": 50994,
163
+ "<mask_207>": 50993,
164
+ "<mask_208>": 50992,
165
+ "<mask_209>": 50991,
166
+ "<mask_20>": 51180,
167
+ "<mask_210>": 50990,
168
+ "<mask_211>": 50989,
169
+ "<mask_212>": 50988,
170
+ "<mask_213>": 50987,
171
+ "<mask_214>": 50986,
172
+ "<mask_215>": 50985,
173
+ "<mask_216>": 50984,
174
+ "<mask_217>": 50983,
175
+ "<mask_218>": 50982,
176
+ "<mask_219>": 50981,
177
+ "<mask_21>": 51179,
178
+ "<mask_220>": 50980,
179
+ "<mask_221>": 50979,
180
+ "<mask_222>": 50978,
181
+ "<mask_223>": 50977,
182
+ "<mask_224>": 50976,
183
+ "<mask_225>": 50975,
184
+ "<mask_226>": 50974,
185
+ "<mask_227>": 50973,
186
+ "<mask_228>": 50972,
187
+ "<mask_229>": 50971,
188
+ "<mask_22>": 51178,
189
+ "<mask_230>": 50970,
190
+ "<mask_231>": 50969,
191
+ "<mask_232>": 50968,
192
+ "<mask_233>": 50967,
193
+ "<mask_234>": 50966,
194
+ "<mask_235>": 50965,
195
+ "<mask_236>": 50964,
196
+ "<mask_237>": 50963,
197
+ "<mask_238>": 50962,
198
+ "<mask_239>": 50961,
199
+ "<mask_23>": 51177,
200
+ "<mask_240>": 50960,
201
+ "<mask_241>": 50959,
202
+ "<mask_242>": 50958,
203
+ "<mask_243>": 50957,
204
+ "<mask_244>": 50956,
205
+ "<mask_245>": 50955,
206
+ "<mask_246>": 50954,
207
+ "<mask_247>": 50953,
208
+ "<mask_248>": 50952,
209
+ "<mask_249>": 50951,
210
+ "<mask_24>": 51176,
211
+ "<mask_250>": 50950,
212
+ "<mask_251>": 50949,
213
+ "<mask_252>": 50948,
214
+ "<mask_253>": 50947,
215
+ "<mask_254>": 50946,
216
+ "<mask_255>": 50945,
217
+ "<mask_256>": 50944,
218
+ "<mask_257>": 50943,
219
+ "<mask_258>": 50942,
220
+ "<mask_259>": 50941,
221
+ "<mask_25>": 51175,
222
+ "<mask_260>": 50940,
223
+ "<mask_261>": 50939,
224
+ "<mask_262>": 50938,
225
+ "<mask_263>": 50937,
226
+ "<mask_264>": 50936,
227
+ "<mask_265>": 50935,
228
+ "<mask_266>": 50934,
229
+ "<mask_267>": 50933,
230
+ "<mask_268>": 50932,
231
+ "<mask_269>": 50931,
232
+ "<mask_26>": 51174,
233
+ "<mask_270>": 50930,
234
+ "<mask_271>": 50929,
235
+ "<mask_272>": 50928,
236
+ "<mask_273>": 50927,
237
+ "<mask_274>": 50926,
238
+ "<mask_275>": 50925,
239
+ "<mask_276>": 50924,
240
+ "<mask_277>": 50923,
241
+ "<mask_278>": 50922,
242
+ "<mask_279>": 50921,
243
+ "<mask_27>": 51173,
244
+ "<mask_280>": 50920,
245
+ "<mask_281>": 50919,
246
+ "<mask_282>": 50918,
247
+ "<mask_283>": 50917,
248
+ "<mask_284>": 50916,
249
+ "<mask_285>": 50915,
250
+ "<mask_286>": 50914,
251
+ "<mask_287>": 50913,
252
+ "<mask_288>": 50912,
253
+ "<mask_289>": 50911,
254
+ "<mask_28>": 51172,
255
+ "<mask_290>": 50910,
256
+ "<mask_291>": 50909,
257
+ "<mask_292>": 50908,
258
+ "<mask_293>": 50907,
259
+ "<mask_294>": 50906,
260
+ "<mask_295>": 50905,
261
+ "<mask_296>": 50904,
262
+ "<mask_297>": 50903,
263
+ "<mask_298>": 50902,
264
+ "<mask_299>": 50901,
265
+ "<mask_29>": 51171,
266
+ "<mask_2>": 51198,
267
+ "<mask_300>": 50900,
268
+ "<mask_301>": 50899,
269
+ "<mask_302>": 50898,
270
+ "<mask_303>": 50897,
271
+ "<mask_304>": 50896,
272
+ "<mask_305>": 50895,
273
+ "<mask_306>": 50894,
274
+ "<mask_307>": 50893,
275
+ "<mask_308>": 50892,
276
+ "<mask_309>": 50891,
277
+ "<mask_30>": 51170,
278
+ "<mask_310>": 50890,
279
+ "<mask_311>": 50889,
280
+ "<mask_312>": 50888,
281
+ "<mask_313>": 50887,
282
+ "<mask_314>": 50886,
283
+ "<mask_315>": 50885,
284
+ "<mask_316>": 50884,
285
+ "<mask_317>": 50883,
286
+ "<mask_318>": 50882,
287
+ "<mask_319>": 50881,
288
+ "<mask_31>": 51169,
289
+ "<mask_320>": 50880,
290
+ "<mask_321>": 50879,
291
+ "<mask_322>": 50878,
292
+ "<mask_323>": 50877,
293
+ "<mask_324>": 50876,
294
+ "<mask_325>": 50875,
295
+ "<mask_326>": 50874,
296
+ "<mask_327>": 50873,
297
+ "<mask_328>": 50872,
298
+ "<mask_329>": 50871,
299
+ "<mask_32>": 51168,
300
+ "<mask_330>": 50870,
301
+ "<mask_331>": 50869,
302
+ "<mask_332>": 50868,
303
+ "<mask_333>": 50867,
304
+ "<mask_334>": 50866,
305
+ "<mask_335>": 50865,
306
+ "<mask_336>": 50864,
307
+ "<mask_337>": 50863,
308
+ "<mask_338>": 50862,
309
+ "<mask_339>": 50861,
310
+ "<mask_33>": 51167,
311
+ "<mask_340>": 50860,
312
+ "<mask_341>": 50859,
313
+ "<mask_342>": 50858,
314
+ "<mask_343>": 50857,
315
+ "<mask_344>": 50856,
316
+ "<mask_345>": 50855,
317
+ "<mask_346>": 50854,
318
+ "<mask_347>": 50853,
319
+ "<mask_348>": 50852,
320
+ "<mask_349>": 50851,
321
+ "<mask_34>": 51166,
322
+ "<mask_350>": 50850,
323
+ "<mask_351>": 50849,
324
+ "<mask_352>": 50848,
325
+ "<mask_353>": 50847,
326
+ "<mask_354>": 50846,
327
+ "<mask_355>": 50845,
328
+ "<mask_356>": 50844,
329
+ "<mask_357>": 50843,
330
+ "<mask_358>": 50842,
331
+ "<mask_359>": 50841,
332
+ "<mask_35>": 51165,
333
+ "<mask_360>": 50840,
334
+ "<mask_361>": 50839,
335
+ "<mask_362>": 50838,
336
+ "<mask_363>": 50837,
337
+ "<mask_364>": 50836,
338
+ "<mask_365>": 50835,
339
+ "<mask_366>": 50834,
340
+ "<mask_367>": 50833,
341
+ "<mask_368>": 50832,
342
+ "<mask_369>": 50831,
343
+ "<mask_36>": 51164,
344
+ "<mask_370>": 50830,
345
+ "<mask_371>": 50829,
346
+ "<mask_372>": 50828,
347
+ "<mask_373>": 50827,
348
+ "<mask_374>": 50826,
349
+ "<mask_375>": 50825,
350
+ "<mask_376>": 50824,
351
+ "<mask_377>": 50823,
352
+ "<mask_378>": 50822,
353
+ "<mask_379>": 50821,
354
+ "<mask_37>": 51163,
355
+ "<mask_380>": 50820,
356
+ "<mask_381>": 50819,
357
+ "<mask_382>": 50818,
358
+ "<mask_383>": 50817,
359
+ "<mask_384>": 50816,
360
+ "<mask_385>": 50815,
361
+ "<mask_386>": 50814,
362
+ "<mask_387>": 50813,
363
+ "<mask_388>": 50812,
364
+ "<mask_389>": 50811,
365
+ "<mask_38>": 51162,
366
+ "<mask_390>": 50810,
367
+ "<mask_391>": 50809,
368
+ "<mask_392>": 50808,
369
+ "<mask_393>": 50807,
370
+ "<mask_394>": 50806,
371
+ "<mask_395>": 50805,
372
+ "<mask_396>": 50804,
373
+ "<mask_397>": 50803,
374
+ "<mask_398>": 50802,
375
+ "<mask_399>": 50801,
376
+ "<mask_39>": 51161,
377
+ "<mask_3>": 51197,
378
+ "<mask_400>": 50800,
379
+ "<mask_401>": 50799,
380
+ "<mask_402>": 50798,
381
+ "<mask_403>": 50797,
382
+ "<mask_404>": 50796,
383
+ "<mask_405>": 50795,
384
+ "<mask_406>": 50794,
385
+ "<mask_407>": 50793,
386
+ "<mask_408>": 50792,
387
+ "<mask_409>": 50791,
388
+ "<mask_40>": 51160,
389
+ "<mask_410>": 50790,
390
+ "<mask_411>": 50789,
391
+ "<mask_412>": 50788,
392
+ "<mask_413>": 50787,
393
+ "<mask_414>": 50786,
394
+ "<mask_415>": 50785,
395
+ "<mask_416>": 50784,
396
+ "<mask_417>": 50783,
397
+ "<mask_418>": 50782,
398
+ "<mask_419>": 50781,
399
+ "<mask_41>": 51159,
400
+ "<mask_420>": 50780,
401
+ "<mask_421>": 50779,
402
+ "<mask_422>": 50778,
403
+ "<mask_423>": 50777,
404
+ "<mask_424>": 50776,
405
+ "<mask_425>": 50775,
406
+ "<mask_426>": 50774,
407
+ "<mask_427>": 50773,
408
+ "<mask_428>": 50772,
409
+ "<mask_429>": 50771,
410
+ "<mask_42>": 51158,
411
+ "<mask_430>": 50770,
412
+ "<mask_431>": 50769,
413
+ "<mask_432>": 50768,
414
+ "<mask_433>": 50767,
415
+ "<mask_434>": 50766,
416
+ "<mask_435>": 50765,
417
+ "<mask_436>": 50764,
418
+ "<mask_437>": 50763,
419
+ "<mask_438>": 50762,
420
+ "<mask_439>": 50761,
421
+ "<mask_43>": 51157,
422
+ "<mask_440>": 50760,
423
+ "<mask_441>": 50759,
424
+ "<mask_442>": 50758,
425
+ "<mask_443>": 50757,
426
+ "<mask_444>": 50756,
427
+ "<mask_445>": 50755,
428
+ "<mask_446>": 50754,
429
+ "<mask_447>": 50753,
430
+ "<mask_448>": 50752,
431
+ "<mask_449>": 50751,
432
+ "<mask_44>": 51156,
433
+ "<mask_450>": 50750,
434
+ "<mask_451>": 50749,
435
+ "<mask_452>": 50748,
436
+ "<mask_453>": 50747,
437
+ "<mask_454>": 50746,
438
+ "<mask_455>": 50745,
439
+ "<mask_456>": 50744,
440
+ "<mask_457>": 50743,
441
+ "<mask_458>": 50742,
442
+ "<mask_459>": 50741,
443
+ "<mask_45>": 51155,
444
+ "<mask_460>": 50740,
445
+ "<mask_461>": 50739,
446
+ "<mask_462>": 50738,
447
+ "<mask_463>": 50737,
448
+ "<mask_464>": 50736,
449
+ "<mask_465>": 50735,
450
+ "<mask_466>": 50734,
451
+ "<mask_467>": 50733,
452
+ "<mask_468>": 50732,
453
+ "<mask_469>": 50731,
454
+ "<mask_46>": 51154,
455
+ "<mask_470>": 50730,
456
+ "<mask_471>": 50729,
457
+ "<mask_472>": 50728,
458
+ "<mask_473>": 50727,
459
+ "<mask_474>": 50726,
460
+ "<mask_475>": 50725,
461
+ "<mask_476>": 50724,
462
+ "<mask_477>": 50723,
463
+ "<mask_478>": 50722,
464
+ "<mask_479>": 50721,
465
+ "<mask_47>": 51153,
466
+ "<mask_480>": 50720,
467
+ "<mask_481>": 50719,
468
+ "<mask_482>": 50718,
469
+ "<mask_483>": 50717,
470
+ "<mask_484>": 50716,
471
+ "<mask_485>": 50715,
472
+ "<mask_486>": 50714,
473
+ "<mask_487>": 50713,
474
+ "<mask_488>": 50712,
475
+ "<mask_489>": 50711,
476
+ "<mask_48>": 51152,
477
+ "<mask_490>": 50710,
478
+ "<mask_491>": 50709,
479
+ "<mask_492>": 50708,
480
+ "<mask_493>": 50707,
481
+ "<mask_494>": 50706,
482
+ "<mask_495>": 50705,
483
+ "<mask_496>": 50704,
484
+ "<mask_497>": 50703,
485
+ "<mask_498>": 50702,
486
+ "<mask_499>": 50701,
487
+ "<mask_49>": 51151,
488
+ "<mask_4>": 51196,
489
+ "<mask_500>": 50700,
490
+ "<mask_501>": 50699,
491
+ "<mask_502>": 50698,
492
+ "<mask_503>": 50697,
493
+ "<mask_504>": 50696,
494
+ "<mask_505>": 50695,
495
+ "<mask_506>": 50694,
496
+ "<mask_507>": 50693,
497
+ "<mask_508>": 50692,
498
+ "<mask_509>": 50691,
499
+ "<mask_50>": 51150,
500
+ "<mask_510>": 50690,
501
+ "<mask_511>": 50689,
502
+ "<mask_512>": 50688,
503
+ "<mask_513>": 50687,
504
+ "<mask_514>": 50686,
505
+ "<mask_515>": 50685,
506
+ "<mask_516>": 50684,
507
+ "<mask_517>": 50683,
508
+ "<mask_518>": 50682,
509
+ "<mask_519>": 50681,
510
+ "<mask_51>": 51149,
511
+ "<mask_520>": 50680,
512
+ "<mask_521>": 50679,
513
+ "<mask_522>": 50678,
514
+ "<mask_523>": 50677,
515
+ "<mask_524>": 50676,
516
+ "<mask_525>": 50675,
517
+ "<mask_526>": 50674,
518
+ "<mask_527>": 50673,
519
+ "<mask_528>": 50672,
520
+ "<mask_529>": 50671,
521
+ "<mask_52>": 51148,
522
+ "<mask_530>": 50670,
523
+ "<mask_531>": 50669,
524
+ "<mask_532>": 50668,
525
+ "<mask_533>": 50667,
526
+ "<mask_534>": 50666,
527
+ "<mask_535>": 50665,
528
+ "<mask_536>": 50664,
529
+ "<mask_537>": 50663,
530
+ "<mask_538>": 50662,
531
+ "<mask_539>": 50661,
532
+ "<mask_53>": 51147,
533
+ "<mask_540>": 50660,
534
+ "<mask_541>": 50659,
535
+ "<mask_542>": 50658,
536
+ "<mask_543>": 50657,
537
+ "<mask_544>": 50656,
538
+ "<mask_545>": 50655,
539
+ "<mask_546>": 50654,
540
+ "<mask_547>": 50653,
541
+ "<mask_548>": 50652,
542
+ "<mask_549>": 50651,
543
+ "<mask_54>": 51146,
544
+ "<mask_550>": 50650,
545
+ "<mask_551>": 50649,
546
+ "<mask_552>": 50648,
547
+ "<mask_553>": 50647,
548
+ "<mask_554>": 50646,
549
+ "<mask_555>": 50645,
550
+ "<mask_556>": 50644,
551
+ "<mask_557>": 50643,
552
+ "<mask_558>": 50642,
553
+ "<mask_559>": 50641,
554
+ "<mask_55>": 51145,
555
+ "<mask_560>": 50640,
556
+ "<mask_561>": 50639,
557
+ "<mask_562>": 50638,
558
+ "<mask_563>": 50637,
559
+ "<mask_564>": 50636,
560
+ "<mask_565>": 50635,
561
+ "<mask_566>": 50634,
562
+ "<mask_567>": 50633,
563
+ "<mask_568>": 50632,
564
+ "<mask_569>": 50631,
565
+ "<mask_56>": 51144,
566
+ "<mask_570>": 50630,
567
+ "<mask_571>": 50629,
568
+ "<mask_572>": 50628,
569
+ "<mask_573>": 50627,
570
+ "<mask_574>": 50626,
571
+ "<mask_575>": 50625,
572
+ "<mask_576>": 50624,
573
+ "<mask_577>": 50623,
574
+ "<mask_578>": 50622,
575
+ "<mask_579>": 50621,
576
+ "<mask_57>": 51143,
577
+ "<mask_580>": 50620,
578
+ "<mask_581>": 50619,
579
+ "<mask_582>": 50618,
580
+ "<mask_583>": 50617,
581
+ "<mask_584>": 50616,
582
+ "<mask_585>": 50615,
583
+ "<mask_586>": 50614,
584
+ "<mask_587>": 50613,
585
+ "<mask_588>": 50612,
586
+ "<mask_589>": 50611,
587
+ "<mask_58>": 51142,
588
+ "<mask_590>": 50610,
589
+ "<mask_591>": 50609,
590
+ "<mask_592>": 50608,
591
+ "<mask_593>": 50607,
592
+ "<mask_594>": 50606,
593
+ "<mask_595>": 50605,
594
+ "<mask_596>": 50604,
595
+ "<mask_597>": 50603,
596
+ "<mask_598>": 50602,
597
+ "<mask_599>": 50601,
598
+ "<mask_59>": 51141,
599
+ "<mask_5>": 51195,
600
+ "<mask_600>": 50600,
601
+ "<mask_601>": 50599,
602
+ "<mask_602>": 50598,
603
+ "<mask_603>": 50597,
604
+ "<mask_604>": 50596,
605
+ "<mask_605>": 50595,
606
+ "<mask_606>": 50594,
607
+ "<mask_607>": 50593,
608
+ "<mask_608>": 50592,
609
+ "<mask_609>": 50591,
610
+ "<mask_60>": 51140,
611
+ "<mask_610>": 50590,
612
+ "<mask_611>": 50589,
613
+ "<mask_612>": 50588,
614
+ "<mask_613>": 50587,
615
+ "<mask_614>": 50586,
616
+ "<mask_615>": 50585,
617
+ "<mask_616>": 50584,
618
+ "<mask_617>": 50583,
619
+ "<mask_618>": 50582,
620
+ "<mask_619>": 50581,
621
+ "<mask_61>": 51139,
622
+ "<mask_620>": 50580,
623
+ "<mask_621>": 50579,
624
+ "<mask_622>": 50578,
625
+ "<mask_623>": 50577,
626
+ "<mask_624>": 50576,
627
+ "<mask_625>": 50575,
628
+ "<mask_626>": 50574,
629
+ "<mask_627>": 50573,
630
+ "<mask_628>": 50572,
631
+ "<mask_629>": 50571,
632
+ "<mask_62>": 51138,
633
+ "<mask_630>": 50570,
634
+ "<mask_631>": 50569,
635
+ "<mask_632>": 50568,
636
+ "<mask_633>": 50567,
637
+ "<mask_634>": 50566,
638
+ "<mask_635>": 50565,
639
+ "<mask_636>": 50564,
640
+ "<mask_637>": 50563,
641
+ "<mask_638>": 50562,
642
+ "<mask_639>": 50561,
643
+ "<mask_63>": 51137,
644
+ "<mask_640>": 50560,
645
+ "<mask_641>": 50559,
646
+ "<mask_642>": 50558,
647
+ "<mask_643>": 50557,
648
+ "<mask_644>": 50556,
649
+ "<mask_645>": 50555,
650
+ "<mask_646>": 50554,
651
+ "<mask_647>": 50553,
652
+ "<mask_648>": 50552,
653
+ "<mask_649>": 50551,
654
+ "<mask_64>": 51136,
655
+ "<mask_650>": 50550,
656
+ "<mask_651>": 50549,
657
+ "<mask_652>": 50548,
658
+ "<mask_653>": 50547,
659
+ "<mask_654>": 50546,
660
+ "<mask_655>": 50545,
661
+ "<mask_656>": 50544,
662
+ "<mask_657>": 50543,
663
+ "<mask_658>": 50542,
664
+ "<mask_659>": 50541,
665
+ "<mask_65>": 51135,
666
+ "<mask_660>": 50540,
667
+ "<mask_661>": 50539,
668
+ "<mask_662>": 50538,
669
+ "<mask_663>": 50537,
670
+ "<mask_664>": 50536,
671
+ "<mask_665>": 50535,
672
+ "<mask_666>": 50534,
673
+ "<mask_667>": 50533,
674
+ "<mask_668>": 50532,
675
+ "<mask_669>": 50531,
676
+ "<mask_66>": 51134,
677
+ "<mask_670>": 50530,
678
+ "<mask_671>": 50529,
679
+ "<mask_672>": 50528,
680
+ "<mask_673>": 50527,
681
+ "<mask_674>": 50526,
682
+ "<mask_675>": 50525,
683
+ "<mask_676>": 50524,
684
+ "<mask_677>": 50523,
685
+ "<mask_678>": 50522,
686
+ "<mask_679>": 50521,
687
+ "<mask_67>": 51133,
688
+ "<mask_680>": 50520,
689
+ "<mask_681>": 50519,
690
+ "<mask_682>": 50518,
691
+ "<mask_683>": 50517,
692
+ "<mask_684>": 50516,
693
+ "<mask_685>": 50515,
694
+ "<mask_686>": 50514,
695
+ "<mask_687>": 50513,
696
+ "<mask_688>": 50512,
697
+ "<mask_689>": 50511,
698
+ "<mask_68>": 51132,
699
+ "<mask_690>": 50510,
700
+ "<mask_691>": 50509,
701
+ "<mask_692>": 50508,
702
+ "<mask_693>": 50507,
703
+ "<mask_694>": 50506,
704
+ "<mask_695>": 50505,
705
+ "<mask_696>": 50504,
706
+ "<mask_697>": 50503,
707
+ "<mask_698>": 50502,
708
+ "<mask_699>": 50501,
709
+ "<mask_69>": 51131,
710
+ "<mask_6>": 51194,
711
+ "<mask_700>": 50500,
712
+ "<mask_701>": 50499,
713
+ "<mask_702>": 50498,
714
+ "<mask_703>": 50497,
715
+ "<mask_704>": 50496,
716
+ "<mask_705>": 50495,
717
+ "<mask_706>": 50494,
718
+ "<mask_707>": 50493,
719
+ "<mask_708>": 50492,
720
+ "<mask_709>": 50491,
721
+ "<mask_70>": 51130,
722
+ "<mask_710>": 50490,
723
+ "<mask_711>": 50489,
724
+ "<mask_712>": 50488,
725
+ "<mask_713>": 50487,
726
+ "<mask_714>": 50486,
727
+ "<mask_715>": 50485,
728
+ "<mask_716>": 50484,
729
+ "<mask_717>": 50483,
730
+ "<mask_718>": 50482,
731
+ "<mask_719>": 50481,
732
+ "<mask_71>": 51129,
733
+ "<mask_720>": 50480,
734
+ "<mask_721>": 50479,
735
+ "<mask_722>": 50478,
736
+ "<mask_723>": 50477,
737
+ "<mask_724>": 50476,
738
+ "<mask_725>": 50475,
739
+ "<mask_726>": 50474,
740
+ "<mask_727>": 50473,
741
+ "<mask_728>": 50472,
742
+ "<mask_729>": 50471,
743
+ "<mask_72>": 51128,
744
+ "<mask_730>": 50470,
745
+ "<mask_731>": 50469,
746
+ "<mask_732>": 50468,
747
+ "<mask_733>": 50467,
748
+ "<mask_734>": 50466,
749
+ "<mask_735>": 50465,
750
+ "<mask_736>": 50464,
751
+ "<mask_737>": 50463,
752
+ "<mask_738>": 50462,
753
+ "<mask_739>": 50461,
754
+ "<mask_73>": 51127,
755
+ "<mask_740>": 50460,
756
+ "<mask_741>": 50459,
757
+ "<mask_742>": 50458,
758
+ "<mask_743>": 50457,
759
+ "<mask_744>": 50456,
760
+ "<mask_745>": 50455,
761
+ "<mask_746>": 50454,
762
+ "<mask_747>": 50453,
763
+ "<mask_748>": 50452,
764
+ "<mask_749>": 50451,
765
+ "<mask_74>": 51126,
766
+ "<mask_750>": 50450,
767
+ "<mask_751>": 50449,
768
+ "<mask_752>": 50448,
769
+ "<mask_753>": 50447,
770
+ "<mask_754>": 50446,
771
+ "<mask_755>": 50445,
772
+ "<mask_756>": 50444,
773
+ "<mask_757>": 50443,
774
+ "<mask_758>": 50442,
775
+ "<mask_759>": 50441,
776
+ "<mask_75>": 51125,
777
+ "<mask_760>": 50440,
778
+ "<mask_761>": 50439,
779
+ "<mask_762>": 50438,
780
+ "<mask_763>": 50437,
781
+ "<mask_764>": 50436,
782
+ "<mask_765>": 50435,
783
+ "<mask_766>": 50434,
784
+ "<mask_767>": 50433,
785
+ "<mask_768>": 50432,
786
+ "<mask_769>": 50431,
787
+ "<mask_76>": 51124,
788
+ "<mask_770>": 50430,
789
+ "<mask_771>": 50429,
790
+ "<mask_772>": 50428,
791
+ "<mask_773>": 50427,
792
+ "<mask_774>": 50426,
793
+ "<mask_775>": 50425,
794
+ "<mask_776>": 50424,
795
+ "<mask_777>": 50423,
796
+ "<mask_778>": 50422,
797
+ "<mask_779>": 50421,
798
+ "<mask_77>": 51123,
799
+ "<mask_780>": 50420,
800
+ "<mask_781>": 50419,
801
+ "<mask_782>": 50418,
802
+ "<mask_783>": 50417,
803
+ "<mask_784>": 50416,
804
+ "<mask_785>": 50415,
805
+ "<mask_786>": 50414,
806
+ "<mask_787>": 50413,
807
+ "<mask_788>": 50412,
808
+ "<mask_789>": 50411,
809
+ "<mask_78>": 51122,
810
+ "<mask_790>": 50410,
811
+ "<mask_791>": 50409,
812
+ "<mask_792>": 50408,
813
+ "<mask_793>": 50407,
814
+ "<mask_794>": 50406,
815
+ "<mask_795>": 50405,
816
+ "<mask_796>": 50404,
817
+ "<mask_797>": 50403,
818
+ "<mask_798>": 50402,
819
+ "<mask_799>": 50401,
820
+ "<mask_79>": 51121,
821
+ "<mask_7>": 51193,
822
+ "<mask_800>": 50400,
823
+ "<mask_801>": 50399,
824
+ "<mask_802>": 50398,
825
+ "<mask_803>": 50397,
826
+ "<mask_804>": 50396,
827
+ "<mask_805>": 50395,
828
+ "<mask_806>": 50394,
829
+ "<mask_807>": 50393,
830
+ "<mask_808>": 50392,
831
+ "<mask_809>": 50391,
832
+ "<mask_80>": 51120,
833
+ "<mask_810>": 50390,
834
+ "<mask_811>": 50389,
835
+ "<mask_812>": 50388,
836
+ "<mask_813>": 50387,
837
+ "<mask_814>": 50386,
838
+ "<mask_815>": 50385,
839
+ "<mask_816>": 50384,
840
+ "<mask_817>": 50383,
841
+ "<mask_818>": 50382,
842
+ "<mask_819>": 50381,
843
+ "<mask_81>": 51119,
844
+ "<mask_820>": 50380,
845
+ "<mask_821>": 50379,
846
+ "<mask_822>": 50378,
847
+ "<mask_823>": 50377,
848
+ "<mask_824>": 50376,
849
+ "<mask_825>": 50375,
850
+ "<mask_826>": 50374,
851
+ "<mask_827>": 50373,
852
+ "<mask_828>": 50372,
853
+ "<mask_829>": 50371,
854
+ "<mask_82>": 51118,
855
+ "<mask_830>": 50370,
856
+ "<mask_831>": 50369,
857
+ "<mask_832>": 50368,
858
+ "<mask_833>": 50367,
859
+ "<mask_834>": 50366,
860
+ "<mask_835>": 50365,
861
+ "<mask_836>": 50364,
862
+ "<mask_837>": 50363,
863
+ "<mask_838>": 50362,
864
+ "<mask_839>": 50361,
865
+ "<mask_83>": 51117,
866
+ "<mask_840>": 50360,
867
+ "<mask_841>": 50359,
868
+ "<mask_842>": 50358,
869
+ "<mask_843>": 50357,
870
+ "<mask_844>": 50356,
871
+ "<mask_845>": 50355,
872
+ "<mask_846>": 50354,
873
+ "<mask_847>": 50353,
874
+ "<mask_848>": 50352,
875
+ "<mask_849>": 50351,
876
+ "<mask_84>": 51116,
877
+ "<mask_850>": 50350,
878
+ "<mask_851>": 50349,
879
+ "<mask_852>": 50348,
880
+ "<mask_853>": 50347,
881
+ "<mask_854>": 50346,
882
+ "<mask_855>": 50345,
883
+ "<mask_856>": 50344,
884
+ "<mask_857>": 50343,
885
+ "<mask_858>": 50342,
886
+ "<mask_859>": 50341,
887
+ "<mask_85>": 51115,
888
+ "<mask_860>": 50340,
889
+ "<mask_861>": 50339,
890
+ "<mask_862>": 50338,
891
+ "<mask_863>": 50337,
892
+ "<mask_864>": 50336,
893
+ "<mask_865>": 50335,
894
+ "<mask_866>": 50334,
895
+ "<mask_867>": 50333,
896
+ "<mask_868>": 50332,
897
+ "<mask_869>": 50331,
898
+ "<mask_86>": 51114,
899
+ "<mask_870>": 50330,
900
+ "<mask_871>": 50329,
901
+ "<mask_872>": 50328,
902
+ "<mask_873>": 50327,
903
+ "<mask_874>": 50326,
904
+ "<mask_875>": 50325,
905
+ "<mask_876>": 50324,
906
+ "<mask_877>": 50323,
907
+ "<mask_878>": 50322,
908
+ "<mask_879>": 50321,
909
+ "<mask_87>": 51113,
910
+ "<mask_880>": 50320,
911
+ "<mask_881>": 50319,
912
+ "<mask_882>": 50318,
913
+ "<mask_883>": 50317,
914
+ "<mask_884>": 50316,
915
+ "<mask_885>": 50315,
916
+ "<mask_886>": 50314,
917
+ "<mask_887>": 50313,
918
+ "<mask_888>": 50312,
919
+ "<mask_889>": 50311,
920
+ "<mask_88>": 51112,
921
+ "<mask_890>": 50310,
922
+ "<mask_891>": 50309,
923
+ "<mask_892>": 50308,
924
+ "<mask_893>": 50307,
925
+ "<mask_894>": 50306,
926
+ "<mask_895>": 50305,
927
+ "<mask_896>": 50304,
928
+ "<mask_897>": 50303,
929
+ "<mask_898>": 50302,
930
+ "<mask_899>": 50301,
931
+ "<mask_89>": 51111,
932
+ "<mask_8>": 51192,
933
+ "<mask_90>": 51110,
934
+ "<mask_91>": 51109,
935
+ "<mask_92>": 51108,
936
+ "<mask_93>": 51107,
937
+ "<mask_94>": 51106,
938
+ "<mask_95>": 51105,
939
+ "<mask_96>": 51104,
940
+ "<mask_97>": 51103,
941
+ "<mask_98>": 51102,
942
+ "<mask_99>": 51101,
943
+ "<mask_9>": 51191,
944
+ "<sep>": 50299
945
+ }
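The file registers CodeGen2's indentation tokens (runs of tabs and spaces) together with the `<mask_N>`, `<sep>`, `<eom>`, and `<dummy_*>` special tokens used for infilling. A small sanity check, assuming the tokenizer is loaded from this repository (placeholder path):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path/to/this-adapter")  # placeholder path
for token in ["<mask_1>", "<sep>", "<eom>"]:
    print(token, tokenizer.convert_tokens_to_ids(token))
# Expected ids from the JSON above: <mask_1> -> 51199, <sep> -> 50299, <eom> -> 50300
```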
checkpoint-384/README.md ADDED
@@ -0,0 +1,202 @@
1
+ ---
2
+ base_model: Salesforce/codegen2-16B_P
3
+ library_name: peft
4
+ ---
5
+
6
+ # Model Card for Model ID
7
+
8
+ <!-- Provide a quick summary of what the model is/does. -->
9
+
10
+
11
+
12
+ ## Model Details
13
+
14
+ ### Model Description
15
+
16
+ <!-- Provide a longer summary of what this model is. -->
17
+
18
+
19
+
20
+ - **Developed by:** [More Information Needed]
21
+ - **Funded by [optional]:** [More Information Needed]
22
+ - **Shared by [optional]:** [More Information Needed]
23
+ - **Model type:** [More Information Needed]
24
+ - **Language(s) (NLP):** [More Information Needed]
25
+ - **License:** [More Information Needed]
26
+ - **Finetuned from model [optional]:** [More Information Needed]
27
+
28
+ ### Model Sources [optional]
29
+
30
+ <!-- Provide the basic links for the model. -->
31
+
32
+ - **Repository:** [More Information Needed]
33
+ - **Paper [optional]:** [More Information Needed]
34
+ - **Demo [optional]:** [More Information Needed]
35
+
36
+ ## Uses
37
+
38
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
39
+
40
+ ### Direct Use
41
+
42
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
43
+
44
+ [More Information Needed]
45
+
46
+ ### Downstream Use [optional]
47
+
48
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
49
+
50
+ [More Information Needed]
51
+
52
+ ### Out-of-Scope Use
53
+
54
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
55
+
56
+ [More Information Needed]
57
+
58
+ ## Bias, Risks, and Limitations
59
+
60
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
61
+
62
+ [More Information Needed]
63
+
64
+ ### Recommendations
65
+
66
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
67
+
68
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
69
+
70
+ ## How to Get Started with the Model
71
+
72
+ Use the code below to get started with the model.
73
+
74
+ [More Information Needed]
75
+
76
+ ## Training Details
77
+
78
+ ### Training Data
79
+
80
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
81
+
82
+ [More Information Needed]
83
+
84
+ ### Training Procedure
85
+
86
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
87
+
88
+ #### Preprocessing [optional]
89
+
90
+ [More Information Needed]
91
+
92
+
93
+ #### Training Hyperparameters
94
+
95
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
96
+
97
+ #### Speeds, Sizes, Times [optional]
98
+
99
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
100
+
101
+ [More Information Needed]
102
+
103
+ ## Evaluation
104
+
105
+ <!-- This section describes the evaluation protocols and provides the results. -->
106
+
107
+ ### Testing Data, Factors & Metrics
108
+
109
+ #### Testing Data
110
+
111
+ <!-- This should link to a Dataset Card if possible. -->
112
+
113
+ [More Information Needed]
114
+
115
+ #### Factors
116
+
117
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
118
+
119
+ [More Information Needed]
120
+
121
+ #### Metrics
122
+
123
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
124
+
125
+ [More Information Needed]
126
+
127
+ ### Results
128
+
129
+ [More Information Needed]
130
+
131
+ #### Summary
132
+
133
+
134
+
135
+ ## Model Examination [optional]
136
+
137
+ <!-- Relevant interpretability work for the model goes here -->
138
+
139
+ [More Information Needed]
140
+
141
+ ## Environmental Impact
142
+
143
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
144
+
145
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
146
+
147
+ - **Hardware Type:** [More Information Needed]
148
+ - **Hours used:** [More Information Needed]
149
+ - **Cloud Provider:** [More Information Needed]
150
+ - **Compute Region:** [More Information Needed]
151
+ - **Carbon Emitted:** [More Information Needed]
152
+
153
+ ## Technical Specifications [optional]
154
+
155
+ ### Model Architecture and Objective
156
+
157
+ [More Information Needed]
158
+
159
+ ### Compute Infrastructure
160
+
161
+ [More Information Needed]
162
+
163
+ #### Hardware
164
+
165
+ [More Information Needed]
166
+
167
+ #### Software
168
+
169
+ [More Information Needed]
170
+
171
+ ## Citation [optional]
172
+
173
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
174
+
175
+ **BibTeX:**
176
+
177
+ [More Information Needed]
178
+
179
+ **APA:**
180
+
181
+ [More Information Needed]
182
+
183
+ ## Glossary [optional]
184
+
185
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
186
+
187
+ [More Information Needed]
188
+
189
+ ## More Information [optional]
190
+
191
+ [More Information Needed]
192
+
193
+ ## Model Card Authors [optional]
194
+
195
+ [More Information Needed]
196
+
197
+ ## Model Card Contact
198
+
199
+ [More Information Needed]
200
+ ### Framework versions
201
+
202
+ - PEFT 0.14.0
checkpoint-384/adapter_config.json ADDED
@@ -0,0 +1,35 @@
+ {
+ "alpha_pattern": {},
+ "auto_mapping": null,
+ "base_model_name_or_path": "Salesforce/codegen2-16B_P",
+ "bias": "none",
+ "eva_config": null,
+ "exclude_modules": null,
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 32,
+ "lora_bias": false,
+ "lora_dropout": 0.1,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": [
+ "wte",
+ "lm_head"
+ ],
+ "peft_type": "LORA",
+ "r": 16,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "qkv_proj",
+ "out_proj"
+ ],
+ "task_type": "CAUSAL_LM",
+ "use_dora": false,
+ "use_rslora": false
+ }
checkpoint-384/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dad68fd47d0ca79638a2185e50edcf12c6f04458c1dd7d82cd89b8ba210d950c
+ size 1298520480
checkpoint-384/global_step384/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:74222527a8d877aad3828526b44fcd608a869281719e287460ece659cf7e6341
+ size 3835348208
checkpoint-384/global_step384/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ece7bcfaf4f3ca150548a156f7231b1b36cd78a5505a41560070d06fed760e10
+ size 3835355312
checkpoint-384/global_step384/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4bb290b78c0cd5662548db0bc02fd952d40ab7510f8ca6469dcaa1cbb3b4dbd4
+ size 3835355312
checkpoint-384/global_step384/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b823f70f5fbf5779276919bb20fd9129c28027d89bf82341c154368a2aff88aa
+ size 3835348144
checkpoint-384/global_step384/mp_rank_00_model_states.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e15cb581a7335807bfb546dc9e2ed89361ad018c4e7be245d7c6562c8bc53be8
+ size 2699642858
checkpoint-384/latest ADDED
@@ -0,0 +1 @@
+ global_step384
checkpoint-384/rng_state_0.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aa85d56c5321c002e1647994d628dab5e3cd4796535b50075ffd820498b048ef
+ size 15024
checkpoint-384/rng_state_1.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c48defcc06f8ea9896fee0c140c14a21cc584eb37340579e4c5bc5664d987e95
+ size 15024
checkpoint-384/rng_state_2.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:378cbab57014e78b0d18d5892f0da74720b514dc32520dc03258162eda1f4f2f
+ size 15024
checkpoint-384/rng_state_3.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb0271db62a3a09bbba0639e805ac86f2f917ee9fc8dc044148b606f8c44ebec
+ size 15024
checkpoint-384/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:89a5746b43df091022c92ec7086b154664f55227131ae07280a849ce36e06dd6
+ size 1064
checkpoint-384/trainer_state.json ADDED
@@ -0,0 +1,306 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 0.9986996098829649,
5
+ "eval_steps": 500,
6
+ "global_step": 384,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.002600780234070221,
13
+ "grad_norm": 0.02962934412062168,
14
+ "learning_rate": 1.282051282051282e-06,
15
+ "loss": 0.619,
16
+ "step": 1
17
+ },
18
+ {
19
+ "epoch": 0.02600780234070221,
20
+ "grad_norm": 0.06379027664661407,
21
+ "learning_rate": 1.282051282051282e-05,
22
+ "loss": 0.6962,
23
+ "step": 10
24
+ },
25
+ {
26
+ "epoch": 0.05201560468140442,
27
+ "grad_norm": 0.0363883376121521,
28
+ "learning_rate": 2.564102564102564e-05,
29
+ "loss": 0.7759,
30
+ "step": 20
31
+ },
32
+ {
33
+ "epoch": 0.07802340702210664,
34
+ "grad_norm": 0.03419478237628937,
35
+ "learning_rate": 3.846153846153846e-05,
36
+ "loss": 0.8087,
37
+ "step": 30
38
+ },
39
+ {
40
+ "epoch": 0.10403120936280884,
41
+ "grad_norm": 0.04424262419342995,
42
+ "learning_rate": 4.985507246376812e-05,
43
+ "loss": 0.7775,
44
+ "step": 40
45
+ },
46
+ {
47
+ "epoch": 0.13003901170351106,
48
+ "grad_norm": 0.22272075712680817,
49
+ "learning_rate": 4.840579710144928e-05,
50
+ "loss": 0.7476,
51
+ "step": 50
52
+ },
53
+ {
54
+ "epoch": 0.15604681404421328,
55
+ "grad_norm": 0.049193304032087326,
56
+ "learning_rate": 4.695652173913044e-05,
57
+ "loss": 0.6617,
58
+ "step": 60
59
+ },
60
+ {
61
+ "epoch": 0.18205461638491546,
62
+ "grad_norm": 0.04189423844218254,
63
+ "learning_rate": 4.5507246376811595e-05,
64
+ "loss": 0.7254,
65
+ "step": 70
66
+ },
67
+ {
68
+ "epoch": 0.20806241872561768,
69
+ "grad_norm": 0.033223457634449005,
70
+ "learning_rate": 4.405797101449275e-05,
71
+ "loss": 0.7454,
72
+ "step": 80
73
+ },
74
+ {
75
+ "epoch": 0.2340702210663199,
76
+ "grad_norm": 0.023022688925266266,
77
+ "learning_rate": 4.2608695652173916e-05,
78
+ "loss": 0.7263,
79
+ "step": 90
80
+ },
81
+ {
82
+ "epoch": 0.26007802340702213,
83
+ "grad_norm": 0.1517011970281601,
84
+ "learning_rate": 4.115942028985507e-05,
85
+ "loss": 0.7241,
86
+ "step": 100
87
+ },
88
+ {
89
+ "epoch": 0.28608582574772434,
90
+ "grad_norm": 0.041623640805482864,
91
+ "learning_rate": 3.971014492753624e-05,
92
+ "loss": 0.647,
93
+ "step": 110
94
+ },
95
+ {
96
+ "epoch": 0.31209362808842656,
97
+ "grad_norm": 0.03412195295095444,
98
+ "learning_rate": 3.8260869565217395e-05,
99
+ "loss": 0.6991,
100
+ "step": 120
101
+ },
102
+ {
103
+ "epoch": 0.3381014304291287,
104
+ "grad_norm": 0.02426602691411972,
105
+ "learning_rate": 3.681159420289855e-05,
106
+ "loss": 0.7115,
107
+ "step": 130
108
+ },
109
+ {
110
+ "epoch": 0.3641092327698309,
111
+ "grad_norm": 0.023634808138012886,
112
+ "learning_rate": 3.536231884057971e-05,
113
+ "loss": 0.6992,
114
+ "step": 140
115
+ },
116
+ {
117
+ "epoch": 0.39011703511053314,
118
+ "grad_norm": 0.1857312172651291,
119
+ "learning_rate": 3.3913043478260867e-05,
120
+ "loss": 0.7133,
121
+ "step": 150
122
+ },
123
+ {
124
+ "epoch": 0.41612483745123535,
125
+ "grad_norm": 0.057914506644010544,
126
+ "learning_rate": 3.246376811594203e-05,
127
+ "loss": 0.637,
128
+ "step": 160
129
+ },
130
+ {
131
+ "epoch": 0.44213263979193757,
132
+ "grad_norm": 0.0314478725194931,
133
+ "learning_rate": 3.1014492753623195e-05,
134
+ "loss": 0.69,
135
+ "step": 170
136
+ },
137
+ {
138
+ "epoch": 0.4681404421326398,
139
+ "grad_norm": 0.02375701256096363,
140
+ "learning_rate": 2.9565217391304352e-05,
141
+ "loss": 0.7052,
142
+ "step": 180
143
+ },
144
+ {
145
+ "epoch": 0.494148244473342,
146
+ "grad_norm": 0.017046812921762466,
147
+ "learning_rate": 2.811594202898551e-05,
148
+ "loss": 0.6963,
149
+ "step": 190
150
+ },
151
+ {
152
+ "epoch": 0.5201560468140443,
153
+ "grad_norm": 0.14757999777793884,
154
+ "learning_rate": 2.6666666666666667e-05,
155
+ "loss": 0.699,
156
+ "step": 200
157
+ },
158
+ {
159
+ "epoch": 0.5461638491547465,
160
+ "grad_norm": 0.03953570872545242,
161
+ "learning_rate": 2.5217391304347827e-05,
162
+ "loss": 0.6362,
163
+ "step": 210
164
+ },
165
+ {
166
+ "epoch": 0.5721716514954487,
167
+ "grad_norm": 0.031761154532432556,
168
+ "learning_rate": 2.3768115942028988e-05,
169
+ "loss": 0.6929,
170
+ "step": 220
171
+ },
172
+ {
173
+ "epoch": 0.5981794538361509,
174
+ "grad_norm": 0.019830092787742615,
175
+ "learning_rate": 2.2318840579710145e-05,
176
+ "loss": 0.6936,
177
+ "step": 230
178
+ },
179
+ {
180
+ "epoch": 0.6241872561768531,
181
+ "grad_norm": 0.017688650637865067,
182
+ "learning_rate": 2.0869565217391303e-05,
183
+ "loss": 0.692,
184
+ "step": 240
185
+ },
186
+ {
187
+ "epoch": 0.6501950585175552,
188
+ "grad_norm": 0.18702688813209534,
189
+ "learning_rate": 1.9420289855072467e-05,
190
+ "loss": 0.7103,
191
+ "step": 250
192
+ },
193
+ {
194
+ "epoch": 0.6762028608582574,
195
+ "grad_norm": 0.03623680770397186,
196
+ "learning_rate": 1.7971014492753624e-05,
197
+ "loss": 0.6185,
198
+ "step": 260
199
+ },
200
+ {
201
+ "epoch": 0.7022106631989596,
202
+ "grad_norm": 0.026319777593016624,
203
+ "learning_rate": 1.652173913043478e-05,
204
+ "loss": 0.7065,
205
+ "step": 270
206
+ },
207
+ {
208
+ "epoch": 0.7282184655396619,
209
+ "grad_norm": 0.018396981060504913,
210
+ "learning_rate": 1.5072463768115944e-05,
211
+ "loss": 0.6869,
212
+ "step": 280
213
+ },
214
+ {
215
+ "epoch": 0.7542262678803641,
216
+ "grad_norm": 0.016413649544119835,
217
+ "learning_rate": 1.3623188405797103e-05,
218
+ "loss": 0.6865,
219
+ "step": 290
220
+ },
221
+ {
222
+ "epoch": 0.7802340702210663,
223
+ "grad_norm": 0.1341114193201065,
224
+ "learning_rate": 1.2173913043478261e-05,
225
+ "loss": 0.7022,
226
+ "step": 300
227
+ },
228
+ {
229
+ "epoch": 0.8062418725617685,
230
+ "grad_norm": 0.03741007670760155,
231
+ "learning_rate": 1.072463768115942e-05,
232
+ "loss": 0.6272,
233
+ "step": 310
234
+ },
235
+ {
236
+ "epoch": 0.8322496749024707,
237
+ "grad_norm": 0.024399157613515854,
238
+ "learning_rate": 9.27536231884058e-06,
239
+ "loss": 0.6793,
240
+ "step": 320
241
+ },
242
+ {
243
+ "epoch": 0.8582574772431729,
244
+ "grad_norm": 0.016972342506051064,
245
+ "learning_rate": 7.82608695652174e-06,
246
+ "loss": 0.7078,
247
+ "step": 330
248
+ },
249
+ {
250
+ "epoch": 0.8842652795838751,
251
+ "grad_norm": 0.014587855897843838,
252
+ "learning_rate": 6.376811594202898e-06,
253
+ "loss": 0.7041,
254
+ "step": 340
255
+ },
256
+ {
257
+ "epoch": 0.9102730819245773,
258
+ "grad_norm": 0.13855686783790588,
259
+ "learning_rate": 4.927536231884058e-06,
260
+ "loss": 0.6831,
261
+ "step": 350
262
+ },
263
+ {
264
+ "epoch": 0.9362808842652796,
265
+ "grad_norm": 0.03484239801764488,
266
+ "learning_rate": 3.4782608695652175e-06,
267
+ "loss": 0.6321,
268
+ "step": 360
269
+ },
270
+ {
271
+ "epoch": 0.9622886866059818,
272
+ "grad_norm": 0.022825093939900398,
273
+ "learning_rate": 2.028985507246377e-06,
274
+ "loss": 0.6889,
275
+ "step": 370
276
+ },
277
+ {
278
+ "epoch": 0.988296488946684,
279
+ "grad_norm": 0.019488025456666946,
280
+ "learning_rate": 5.797101449275362e-07,
281
+ "loss": 0.6797,
282
+ "step": 380
283
+ }
284
+ ],
285
+ "logging_steps": 10,
286
+ "max_steps": 384,
287
+ "num_input_tokens_seen": 0,
288
+ "num_train_epochs": 1,
289
+ "save_steps": 500,
290
+ "stateful_callbacks": {
291
+ "TrainerControl": {
292
+ "args": {
293
+ "should_epoch_stop": false,
294
+ "should_evaluate": false,
295
+ "should_log": false,
296
+ "should_save": true,
297
+ "should_training_stop": true
298
+ },
299
+ "attributes": {}
300
+ }
301
+ },
302
+ "total_flos": 8.415557240450187e+18,
303
+ "train_batch_size": 8,
304
+ "trial_name": null,
305
+ "trial_params": null
306
+ }
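The trainer state records a single epoch of 384 optimizer steps, with loss, gradient norm, and learning rate logged every 10 steps under `log_history`. A short sketch for pulling the loss curve out of this file:

```python
import json

# Read the logged training curve from the checkpoint's trainer state.
with open("checkpoint-384/trainer_state.json") as f:
    state = json.load(f)

points = [(e["step"], e["loss"]) for e in state["log_history"] if "loss" in e]
for step, loss in points:
    print(f"step {step:4d}  loss {loss:.4f}")
```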
checkpoint-384/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:946ab9e47c5e0b67ce7697f3723adb4ce2e58ae28b9f35acb22e930e952f463a
+ size 6904
checkpoint-384/zero_to_fp32.py ADDED
@@ -0,0 +1,760 @@
1
+ #!/usr/bin/env python
2
+
3
+ # Copyright (c) Microsoft Corporation.
4
+ # SPDX-License-Identifier: Apache-2.0
5
+
6
+ # DeepSpeed Team
7
+
8
+ # This script extracts fp32 consolidated weights from a zero 1, 2 and 3 DeepSpeed checkpoints. It gets
9
+ # copied into the top level checkpoint dir, so the user can easily do the conversion at any point in
10
+ # the future. Once extracted, the weights don't require DeepSpeed and can be used in any
11
+ # application.
12
+ #
13
+ # example:
14
+ # python zero_to_fp32.py . output_dir/
15
+ # or
16
+ # python zero_to_fp32.py . output_dir/ --safe_serialization
17
+
18
+ import argparse
19
+ import torch
20
+ import glob
21
+ import math
22
+ import os
23
+ import re
24
+ import gc
25
+ import json
26
+ import numpy as np
27
+ from tqdm import tqdm
28
+ from collections import OrderedDict
29
+ from dataclasses import dataclass
30
+
31
+ # while this script doesn't use deepspeed to recover data, since the checkpoints are pickled with
32
+ # DeepSpeed data structures it has to be available in the current python environment.
33
+ from deepspeed.utils import logger
34
+ from deepspeed.checkpoint.constants import (DS_VERSION, OPTIMIZER_STATE_DICT, SINGLE_PARTITION_OF_FP32_GROUPS,
35
+ FP32_FLAT_GROUPS, ZERO_STAGE, PARTITION_COUNT, PARAM_SHAPES, BUFFER_NAMES,
36
+ FROZEN_PARAM_SHAPES, FROZEN_PARAM_FRAGMENTS)
37
+
38
+
39
+ @dataclass
40
+ class zero_model_state:
41
+ buffers: dict()
42
+ param_shapes: dict()
43
+ shared_params: list
44
+ ds_version: int
45
+ frozen_param_shapes: dict()
46
+ frozen_param_fragments: dict()
47
+
48
+
49
+ debug = 0
50
+
51
+ # load to cpu
52
+ device = torch.device('cpu')
53
+
54
+
55
+ def atoi(text):
56
+ return int(text) if text.isdigit() else text
57
+
58
+
59
+ def natural_keys(text):
60
+ '''
61
+ alist.sort(key=natural_keys) sorts in human order
62
+ http://nedbatchelder.com/blog/200712/human_sorting.html
63
+ (See Toothy's implementation in the comments)
64
+ '''
65
+ return [atoi(c) for c in re.split(r'(\d+)', text)]
66
+
67
+
68
+ def get_model_state_file(checkpoint_dir, zero_stage):
69
+ if not os.path.isdir(checkpoint_dir):
70
+ raise FileNotFoundError(f"Directory '{checkpoint_dir}' doesn't exist")
71
+
72
+ # there should be only one file
73
+ if zero_stage <= 2:
74
+ file = os.path.join(checkpoint_dir, "mp_rank_00_model_states.pt")
75
+ elif zero_stage == 3:
76
+ file = os.path.join(checkpoint_dir, "zero_pp_rank_0_mp_rank_00_model_states.pt")
77
+
78
+ if not os.path.exists(file):
79
+ raise FileNotFoundError(f"can't find model states file at '{file}'")
80
+
81
+ return file
82
+
83
+
84
+ def get_checkpoint_files(checkpoint_dir, glob_pattern):
85
+ # XXX: need to test that this simple glob rule works for multi-node setup too
86
+ ckpt_files = sorted(glob.glob(os.path.join(checkpoint_dir, glob_pattern)), key=natural_keys)
87
+
88
+ if len(ckpt_files) == 0:
89
+ raise FileNotFoundError(f"can't find {glob_pattern} files in directory '{checkpoint_dir}'")
90
+
91
+ return ckpt_files
92
+
93
+
94
+ def get_optim_files(checkpoint_dir):
95
+ return get_checkpoint_files(checkpoint_dir, "*_optim_states.pt")
96
+
97
+
98
+ def get_model_state_files(checkpoint_dir):
99
+ return get_checkpoint_files(checkpoint_dir, "*_model_states.pt")
100
+
101
+
102
+ def parse_model_states(files):
103
+ zero_model_states = []
104
+ for file in files:
105
+ state_dict = torch.load(file, map_location=device, weights_only=False)
106
+
107
+ if BUFFER_NAMES not in state_dict:
108
+ raise ValueError(f"{file} is not a model state checkpoint")
109
+ buffer_names = state_dict[BUFFER_NAMES]
110
+ if debug:
111
+ print("Found buffers:", buffer_names)
112
+
113
+ # recover just the buffers while restoring them to fp32 if they were saved in fp16
114
+ buffers = {k: v.float() for k, v in state_dict["module"].items() if k in buffer_names}
115
+ param_shapes = state_dict[PARAM_SHAPES]
116
+
117
+ # collect parameters that are included in param_shapes
118
+ param_names = []
119
+ for s in param_shapes:
120
+ for name in s.keys():
121
+ param_names.append(name)
122
+
123
+ # update with frozen parameters
124
+ frozen_param_shapes = state_dict.get(FROZEN_PARAM_SHAPES, None)
125
+ if frozen_param_shapes is not None:
126
+ if debug:
127
+ print(f"Found frozen_param_shapes: {frozen_param_shapes}")
128
+ param_names += list(frozen_param_shapes.keys())
129
+
130
+ # handle shared params
131
+ shared_params = [[k, v] for k, v in state_dict["shared_params"].items()]
132
+
133
+ ds_version = state_dict.get(DS_VERSION, None)
134
+
135
+ frozen_param_fragments = state_dict.get(FROZEN_PARAM_FRAGMENTS, None)
136
+
137
+ z_model_state = zero_model_state(buffers=buffers,
138
+ param_shapes=param_shapes,
139
+ shared_params=shared_params,
140
+ ds_version=ds_version,
141
+ frozen_param_shapes=frozen_param_shapes,
142
+ frozen_param_fragments=frozen_param_fragments)
143
+ zero_model_states.append(z_model_state)
144
+
145
+ return zero_model_states
146
+
147
+
148
+ def parse_optim_states(files, ds_checkpoint_dir):
149
+ total_files = len(files)
150
+ state_dicts = []
151
+ for f in tqdm(files, desc='Loading checkpoint shards'):
152
+ state_dict = torch.load(f, map_location=device, mmap=True, weights_only=False)
153
+ # immediately discard the 2 potentially huge optimizer states, as we only care about the fp32 master weights,
154
+ # and also handle the case where it was already removed by another helper script
155
+ state_dict["optimizer_state_dict"].pop("optimizer_state_dict", None)
156
+ state_dicts.append(state_dict)
157
+
158
+ if ZERO_STAGE not in state_dicts[0][OPTIMIZER_STATE_DICT]:
159
+ raise ValueError(f"{files[0]} is not a zero checkpoint")
160
+ zero_stage = state_dicts[0][OPTIMIZER_STATE_DICT][ZERO_STAGE]
161
+ world_size = state_dicts[0][OPTIMIZER_STATE_DICT][PARTITION_COUNT]
162
+
163
+ # For ZeRO-2 each param group can have different partition_count as data parallelism for expert
164
+ # parameters can be different from data parallelism for non-expert parameters. So we can just
165
+ # use the max of the partition_count to get the dp world_size.
166
+
167
+ if type(world_size) is list:
168
+ world_size = max(world_size)
169
+
170
+ if world_size != total_files:
171
+ raise ValueError(
172
+ f"Expected {world_size} of '*_optim_states.pt' under '{ds_checkpoint_dir}' but found {total_files} files. "
173
+ "Possibly due to an overwrite of an old checkpoint, or a checkpoint didn't get saved by one or more processes."
174
+ )
175
+
176
+ # the groups are named differently in each stage
177
+ if zero_stage <= 2:
178
+ fp32_groups_key = SINGLE_PARTITION_OF_FP32_GROUPS
179
+ elif zero_stage == 3:
180
+ fp32_groups_key = FP32_FLAT_GROUPS
181
+ else:
182
+ raise ValueError(f"unknown zero stage {zero_stage}")
183
+
184
+ fp32_flat_groups = [state_dicts[i][OPTIMIZER_STATE_DICT][fp32_groups_key] for i in range(len(state_dicts))]
185
+ return zero_stage, world_size, fp32_flat_groups
186
+
187
+
188
+ def _get_fp32_state_dict_from_zero_checkpoint(ds_checkpoint_dir, exclude_frozen_parameters):
189
+ """
190
+ Returns fp32 state_dict reconstructed from ds checkpoint
191
+
192
+ Args:
193
+ - ``ds_checkpoint_dir``: path to the deepspeed checkpoint folder (where the optimizer files are)
194
+
195
+ """
196
+ print(f"Processing zero checkpoint '{ds_checkpoint_dir}'")
197
+
198
+ optim_files = get_optim_files(ds_checkpoint_dir)
199
+ zero_stage, world_size, fp32_flat_groups = parse_optim_states(optim_files, ds_checkpoint_dir)
200
+ print(f"Detected checkpoint of type zero stage {zero_stage}, world_size: {world_size}")
201
+
202
+ model_files = get_model_state_files(ds_checkpoint_dir)
203
+
204
+ zero_model_states = parse_model_states(model_files)
205
+ print(f'Parsing checkpoint created by deepspeed=={zero_model_states[0].ds_version}')
206
+
207
+ if zero_stage <= 2:
208
+ return _get_fp32_state_dict_from_zero2_checkpoint(world_size, fp32_flat_groups, zero_model_states,
209
+ exclude_frozen_parameters)
210
+ elif zero_stage == 3:
211
+ return _get_fp32_state_dict_from_zero3_checkpoint(world_size, fp32_flat_groups, zero_model_states,
212
+ exclude_frozen_parameters)
213
+
214
+
215
+ def _zero2_merge_frozen_params(state_dict, zero_model_states):
216
+ if zero_model_states[0].frozen_param_shapes is None or len(zero_model_states[0].frozen_param_shapes) == 0:
217
+ return
218
+
219
+ frozen_param_shapes = zero_model_states[0].frozen_param_shapes
220
+ frozen_param_fragments = zero_model_states[0].frozen_param_fragments
221
+
222
+ if debug:
223
+ num_elem = sum(s.numel() for s in frozen_param_shapes.values())
224
+ print(f'rank 0: {FROZEN_PARAM_SHAPES}.numel = {num_elem}')
225
+
226
+ wanted_params = len(frozen_param_shapes)
227
+ wanted_numel = sum(s.numel() for s in frozen_param_shapes.values())
228
+ avail_numel = sum([p.numel() for p in frozen_param_fragments.values()])
229
+ print(f'Frozen params: Have {avail_numel} numels to process.')
230
+ print(f'Frozen params: Need {wanted_numel} numels in {wanted_params} params')
231
+
232
+ total_params = 0
233
+ total_numel = 0
234
+ for name, shape in frozen_param_shapes.items():
235
+ total_params += 1
236
+ unpartitioned_numel = shape.numel()
237
+ total_numel += unpartitioned_numel
238
+
239
+ state_dict[name] = frozen_param_fragments[name]
240
+
241
+ if debug:
242
+ print(f"{name} full shape: {shape} unpartitioned numel {unpartitioned_numel} ")
243
+
244
+ print(f"Reconstructed Frozen fp32 state dict with {total_params} params {total_numel} elements")
245
+
246
+
247
+ def _has_callable(obj, fn):
248
+ attr = getattr(obj, fn, None)
249
+ return callable(attr)
250
+
251
+
252
+ def _zero2_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states):
253
+ param_shapes = zero_model_states[0].param_shapes
254
+
255
+ # Reconstruction protocol:
256
+ #
257
+ # XXX: document this
258
+
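+ # In ZeRO-2 each rank stores one flat fp32 partition per param group. Below, the per-rank
+ # partitions of a group are concatenated in rank order to rebuild the group's full flat
+ # vector, which is then sliced back into individual parameters using the shapes in param_shapes.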
259
+ if debug:
260
+ for i in range(world_size):
261
+ for j in range(len(fp32_flat_groups[0])):
262
+ print(f"{FP32_FLAT_GROUPS}[{i}][{j}].shape={fp32_flat_groups[i][j].shape}")
263
+
264
+ # XXX: memory usage doubles here (zero2)
265
+ num_param_groups = len(fp32_flat_groups[0])
266
+ merged_single_partition_of_fp32_groups = []
267
+ for i in range(num_param_groups):
268
+ merged_partitions = [sd[i] for sd in fp32_flat_groups]
269
+ full_single_fp32_vector = torch.cat(merged_partitions, 0)
270
+ merged_single_partition_of_fp32_groups.append(full_single_fp32_vector)
271
+ avail_numel = sum(
272
+ [full_single_fp32_vector.numel() for full_single_fp32_vector in merged_single_partition_of_fp32_groups])
273
+
274
+ if debug:
275
+ wanted_params = sum([len(shapes) for shapes in param_shapes])
276
+ wanted_numel = sum([sum(shape.numel() for shape in shapes.values()) for shapes in param_shapes])
277
+ # not asserting if there is a mismatch due to possible padding
278
+ print(f"Have {avail_numel} numels to process.")
279
+ print(f"Need {wanted_numel} numels in {wanted_params} params.")
280
+
281
+ # params
282
+ # XXX: for huge models that can't fit into the host's RAM we will have to recode this to support
283
+ # out-of-core computing solution
284
+ total_numel = 0
285
+ total_params = 0
286
+ for shapes, full_single_fp32_vector in zip(param_shapes, merged_single_partition_of_fp32_groups):
287
+ offset = 0
288
+ avail_numel = full_single_fp32_vector.numel()
289
+ for name, shape in shapes.items():
290
+
291
+ unpartitioned_numel = shape.numel() if _has_callable(shape, 'numel') else math.prod(shape)
292
+ total_numel += unpartitioned_numel
293
+ total_params += 1
294
+
295
+ if debug:
296
+ print(f"{name} full shape: {shape} unpartitioned numel {unpartitioned_numel} ")
297
+ state_dict[name] = full_single_fp32_vector.narrow(0, offset, unpartitioned_numel).view(shape)
298
+ offset += unpartitioned_numel
299
+
300
+ # Z2 started to align to 2*world_size to improve nccl performance. Therefore both offset and
301
+ # avail_numel can differ by anywhere between 0..2*world_size. Due to two unrelated complex
302
+ # paddings performed in the code it's almost impossible to predict the exact numbers w/o the
303
+ # live optimizer object, so we are checking that the numbers are within the right range
304
+ align_to = 2 * world_size
305
+
306
+ def zero2_align(x):
307
+ return align_to * math.ceil(x / align_to)
308
+
309
+ if debug:
310
+ print(f"original offset={offset}, avail_numel={avail_numel}")
311
+
312
+ offset = zero2_align(offset)
313
+ avail_numel = zero2_align(avail_numel)
314
+
315
+ if debug:
316
+ print(f"aligned offset={offset}, avail_numel={avail_numel}")
317
+
318
+ # Sanity check
319
+ if offset != avail_numel:
320
+ raise ValueError(f"consumed {offset} numels out of {avail_numel} - something is wrong")
321
+
322
+ print(f"Reconstructed fp32 state dict with {total_params} params {total_numel} elements")
323
+
324
+
325
+ def _get_fp32_state_dict_from_zero2_checkpoint(world_size, fp32_flat_groups, zero_model_states,
326
+ exclude_frozen_parameters):
327
+ state_dict = OrderedDict()
328
+
329
+ # buffers
330
+ buffers = zero_model_states[0].buffers
331
+ state_dict.update(buffers)
332
+ if debug:
333
+ print(f"added {len(buffers)} buffers")
334
+
335
+ if not exclude_frozen_parameters:
336
+ _zero2_merge_frozen_params(state_dict, zero_model_states)
337
+
338
+ _zero2_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states)
339
+
340
+ # recover shared parameters
341
+ for pair in zero_model_states[0].shared_params:
342
+ if pair[1] in state_dict:
343
+ state_dict[pair[0]] = state_dict[pair[1]]
344
+
345
+ return state_dict
346
+
347
+
348
+ def zero3_partitioned_param_info(unpartitioned_numel, world_size):
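+ # e.g. unpartitioned_numel=10 and world_size=4 gives remainder=2, padding_numel=2 and
+ # partitioned_numel=ceil(10/4)=3, so 3 * 4 == 10 + 2 covers the padded partitions exactly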
349
+ remainder = unpartitioned_numel % world_size
350
+ padding_numel = (world_size - remainder) if remainder else 0
351
+ partitioned_numel = math.ceil(unpartitioned_numel / world_size)
352
+ return partitioned_numel, padding_numel
353
+
354
+
355
+ def _zero3_merge_frozen_params(state_dict, world_size, zero_model_states):
356
+ if zero_model_states[0].frozen_param_shapes is None or len(zero_model_states[0].frozen_param_shapes) == 0:
357
+ return
358
+
359
+ if debug:
360
+ for i in range(world_size):
361
+ num_elem = sum(s.numel() for s in zero_model_states[i].frozen_param_fragments.values())
362
+ print(f'rank {i}: {FROZEN_PARAM_SHAPES}.numel = {num_elem}')
363
+
364
+ frozen_param_shapes = zero_model_states[0].frozen_param_shapes
365
+ wanted_params = len(frozen_param_shapes)
366
+ wanted_numel = sum(s.numel() for s in frozen_param_shapes.values())
367
+ avail_numel = sum([p.numel() for p in zero_model_states[0].frozen_param_fragments.values()]) * world_size
368
+ print(f'Frozen params: Have {avail_numel} numels to process.')
369
+ print(f'Frozen params: Need {wanted_numel} numels in {wanted_params} params')
370
+
371
+ total_params = 0
372
+ total_numel = 0
373
+ for name, shape in zero_model_states[0].frozen_param_shapes.items():
374
+ total_params += 1
375
+ unpartitioned_numel = shape.numel()
376
+ total_numel += unpartitioned_numel
377
+
378
+ param_frags = tuple(model_state.frozen_param_fragments[name] for model_state in zero_model_states)
379
+ state_dict[name] = torch.cat(param_frags, 0).narrow(0, 0, unpartitioned_numel).view(shape)
380
+
381
+ partitioned_numel, partitioned_padding_numel = zero3_partitioned_param_info(unpartitioned_numel, world_size)
382
+
383
+ if debug:
384
+ print(
385
+ f"Frozen params: {total_params} {name} full shape: {shape} partition0 numel={partitioned_numel} partitioned_padding_numel={partitioned_padding_numel}"
386
+ )
387
+
388
+ print(f"Reconstructed Frozen fp32 state dict with {total_params} params {total_numel} elements")
389
+
390
+
391
+ class GatheredTensor:
392
+ """
393
+ A pseudo tensor that collects partitioned weights.
394
+ It is more memory efficient when there are multiple groups.
395
+ """
396
+
397
+ def __init__(self, flat_groups, flat_groups_offset, offset, partitioned_numel, shape):
398
+ self.flat_groups = flat_groups
399
+ self.flat_groups_offset = flat_groups_offset
400
+ self.offset = offset
401
+ self.partitioned_numel = partitioned_numel
402
+ self.shape = shape
403
+ self.dtype = self.flat_groups[0][0].dtype
404
+
405
+ def contiguous(self):
406
+ """
407
+ Merge partitioned weights from flat_groups into a single tensor.
408
+ """
409
+ end_idx = self.offset + self.partitioned_numel
410
+ world_size = len(self.flat_groups)
411
+ pad_flat_param_chunks = []
412
+
413
+ for rank_i in range(world_size):
414
+ # for each rank, we need to collect weights from related group/groups
415
+ flat_groups_at_rank_i = self.flat_groups[rank_i]
416
+ start_group_id = None
417
+ end_group_id = None
418
+ for group_id in range(len(self.flat_groups_offset)):
419
+ if self.flat_groups_offset[group_id] <= self.offset < self.flat_groups_offset[group_id + 1]:
420
+ start_group_id = group_id
421
+ if self.flat_groups_offset[group_id] < end_idx <= self.flat_groups_offset[group_id + 1]:
422
+ end_group_id = group_id
423
+ break
424
+ # collect weights from related group/groups
425
+ for group_id in range(start_group_id, end_group_id + 1):
426
+ flat_tensor = flat_groups_at_rank_i[group_id]
427
+ start_offset = self.offset - self.flat_groups_offset[group_id]
428
+ end_offset = min(end_idx, self.flat_groups_offset[group_id + 1]) - self.flat_groups_offset[group_id]
429
+ pad_flat_param_chunks.append(flat_tensor[start_offset:end_offset])
430
+
431
+ # collect weights from all ranks
432
+ pad_flat_param = torch.cat(pad_flat_param_chunks, dim=0)
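+ # the concatenated flat chunks may carry ZeRO-3 alignment padding at the end, so only the
+ # first shape.numel() elements are kept before reshaping to the parameter's original shape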
433
+ param = pad_flat_param[:self.shape.numel()].view(self.shape).contiguous()
434
+ return param
435
+
436
+
437
+ def _zero3_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states):
438
+ param_shapes = zero_model_states[0].param_shapes
439
+ avail_numel = sum([flat_group.numel() for flat_group in fp32_flat_groups[0]]) * world_size
440
+
441
+ # Reconstruction protocol: For zero3 we need to zip the partitions together at boundary of each
442
+ # param, re-consolidating each param, while dealing with padding if any
443
+
444
+ # merge list of dicts, preserving order
445
+ param_shapes = {k: v for d in param_shapes for k, v in d.items()}
446
+
447
+ if debug:
448
+ for i in range(world_size):
449
+ print(f"{FP32_FLAT_GROUPS}[{i}].shape={fp32_flat_groups[i].shape}")
450
+
451
+ wanted_params = len(param_shapes)
452
+ wanted_numel = sum(shape.numel() for shape in param_shapes.values())
453
+ # not asserting if there is a mismatch due to possible padding
454
+ avail_numel = fp32_flat_groups[0].numel() * world_size
455
+ print(f"Trainable params: Have {avail_numel} numels to process.")
456
+ print(f"Trainable params: Need {wanted_numel} numels in {wanted_params} params.")
457
+
458
+ # params
459
+ # XXX: for huge models that can't fit into the host's RAM we will have to recode this to support
460
+ # out-of-core computing solution
461
+ offset = 0
462
+ total_numel = 0
463
+ total_params = 0
464
+ flat_groups_offset = [0] + list(np.cumsum([flat_tensor.numel() for flat_tensor in fp32_flat_groups[0]]))
465
+ for name, shape in tqdm(param_shapes.items(), desc='Gathering sharded weights'):
466
+ unpartitioned_numel = shape.numel()
467
+ total_numel += unpartitioned_numel
468
+ total_params += 1
469
+ partitioned_numel, partitioned_padding_numel = zero3_partitioned_param_info(unpartitioned_numel, world_size)
470
+
471
+ if debug:
472
+ print(
473
+ f"Trainable params: {total_params} {name} full shape: {shape} partition0 numel={partitioned_numel} partitioned_padding_numel={partitioned_padding_numel}"
474
+ )
475
+
476
+ # memory efficient tensor
477
+ tensor = GatheredTensor(fp32_flat_groups, flat_groups_offset, offset, partitioned_numel, shape)
478
+ state_dict[name] = tensor
479
+ offset += partitioned_numel
480
+
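+ # `offset` counted numels of a single rank's flat partition; multiplying by world_size gives
+ # the total consumed across all ranks, which is checked against avail_numel below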
481
+ offset *= world_size
482
+
483
+ # Sanity check
484
+ if offset != avail_numel:
485
+ raise ValueError(f"consumed {offset} numels out of {avail_numel} - something is wrong")
486
+
487
+ print(f"Reconstructed Trainable fp32 state dict with {total_params} params {total_numel} elements")
488
+
489
+
490
+ def _get_fp32_state_dict_from_zero3_checkpoint(world_size, fp32_flat_groups, zero_model_states,
491
+ exclude_frozen_parameters):
492
+ state_dict = OrderedDict()
493
+
494
+ # buffers
495
+ buffers = zero_model_states[0].buffers
496
+ state_dict.update(buffers)
497
+ if debug:
498
+ print(f"added {len(buffers)} buffers")
499
+
500
+ if not exclude_frozen_parameters:
501
+ _zero3_merge_frozen_params(state_dict, world_size, zero_model_states)
502
+
503
+ _zero3_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states)
504
+
505
+ # recover shared parameters
506
+ for pair in zero_model_states[0].shared_params:
507
+ if pair[1] in state_dict:
508
+ state_dict[pair[0]] = state_dict[pair[1]]
509
+
510
+ return state_dict
511
+
512
+
513
+ def to_torch_tensor(state_dict, return_empty_tensor=False):
514
+ """
515
+ Convert state_dict of GatheredTensor to torch tensor
516
+ """
517
+ torch_state_dict = {}
518
+ converted_tensors = {}
519
+ for name, tensor in state_dict.items():
520
+ tensor_id = id(tensor)
521
+ if tensor_id in converted_tensors: # shared tensors
522
+ shared_tensor = torch_state_dict[converted_tensors[tensor_id]]
523
+ torch_state_dict[name] = shared_tensor
524
+ else:
525
+ converted_tensors[tensor_id] = name
526
+ if return_empty_tensor:
527
+ torch_state_dict[name] = torch.empty(tensor.shape, dtype=tensor.dtype)
528
+ else:
529
+ torch_state_dict[name] = tensor.contiguous()
530
+ return torch_state_dict
531
+
532
+
533
+ def get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir,
534
+ tag=None,
535
+ exclude_frozen_parameters=False,
536
+ lazy_mode=False):
537
+ """
538
+ Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated state_dict that can be loaded with
539
+ ``load_state_dict()`` and used for training without DeepSpeed or shared with others, for example
540
+ via a model hub.
541
+
542
+ Args:
543
+ - ``checkpoint_dir``: path to the desired checkpoint folder
544
+ - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in 'latest' file. e.g., ``global_step14``
545
+ - ``exclude_frozen_parameters``: exclude frozen parameters
546
+ - ``lazy_mode``: get state_dict in lazy mode. It returns a dict of pseudo tensors instead of torch tensors, which is more memory efficient.
547
+ Convert the pseudo tensor to a torch tensor by calling ``.contiguous()``
548
+
549
+ Returns:
550
+ - pytorch ``state_dict``
551
+
552
+ A typical usage might be ::
553
+
554
+ from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint
555
+ # do the training and checkpoint saving
556
+ state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir) # already on cpu
557
+ model = model.cpu() # move to cpu
558
+ model.load_state_dict(state_dict)
559
+ # submit to model hub or save the model to share with others
560
+
561
+ In this example the ``model`` will no longer be usable in the deepspeed context of the same
562
+ application. i.e. you will need to re-initialize the deepspeed engine, since
563
+ ``model.load_state_dict(state_dict)`` will remove all the deepspeed magic from it.
564
+
565
+ If you want it all done for you, use ``load_state_dict_from_zero_checkpoint`` instead.
566
+
567
+ Note: the above usage may not work if your application doesn't have sufficient free CPU memory.
568
+ You may need to use the offline approach using the ``zero_to_fp32.py`` script that is saved with
569
+ the checkpoint. Or you can load state_dict in lazy mode ::
570
+
571
+ from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint
572
+ state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, lazy_mode=True) # not on cpu
573
+ for name, lazy_tensor in state_dict.items():
574
+ tensor = lazy_tensor.contiguous() # to cpu
575
+ print(name, tensor)
576
+ # del tensor to release memory if it is no longer in use
577
+ """
578
+ if tag is None:
579
+ latest_path = os.path.join(checkpoint_dir, 'latest')
580
+ if os.path.isfile(latest_path):
581
+ with open(latest_path, 'r') as fd:
582
+ tag = fd.read().strip()
583
+ else:
584
+ raise ValueError(f"Unable to find 'latest' file at {latest_path}")
585
+
586
+ ds_checkpoint_dir = os.path.join(checkpoint_dir, tag)
587
+
588
+ if not os.path.isdir(ds_checkpoint_dir):
589
+ raise FileNotFoundError(f"Directory '{ds_checkpoint_dir}' doesn't exist")
590
+
591
+ state_dict = _get_fp32_state_dict_from_zero_checkpoint(ds_checkpoint_dir, exclude_frozen_parameters)
592
+ if lazy_mode:
593
+ return state_dict
594
+ else:
595
+ return to_torch_tensor(state_dict)
596
+
597
+
598
+ def convert_zero_checkpoint_to_fp32_state_dict(checkpoint_dir,
599
+ output_dir,
600
+ max_shard_size="5GB",
601
+ safe_serialization=False,
602
+ tag=None,
603
+ exclude_frozen_parameters=False):
604
+ """
605
+ Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated ``state_dict`` file that can be
606
+ loaded with ``torch.load(file)`` + ``load_state_dict()`` and used for training without DeepSpeed.
607
+
608
+ Args:
609
+ - ``checkpoint_dir``: path to the desired checkpoint folder. (one that contains the tag-folder, like ``global_step14``)
610
+ - ``output_dir``: directory to the pytorch fp32 state_dict output files
611
+ - ``max_shard_size``: the maximum size for a checkpoint before being sharded, default value is 5GB
612
+ - ``safe_serialization``: whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`).
613
+ - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in the file named ``latest`` in the checkpoint folder, e.g., ``global_step14``
614
+ - ``exclude_frozen_parameters``: exclude frozen parameters
615
+ """
616
+
617
+ # Dependency pre-check
618
+ if safe_serialization:
619
+ try:
620
+ from safetensors.torch import save_file
621
+ except ImportError:
622
+ print('If you want to use `safe_serialization`, please `pip install safetensors`')
623
+ raise
624
+ if max_shard_size is not None:
625
+ try:
626
+ from huggingface_hub import split_torch_state_dict_into_shards
627
+ except ImportError:
628
+ print('If you want to use `max_shard_size`, please `pip install huggingface_hub`')
629
+ raise
630
+
631
+ # Convert zero checkpoint to state_dict
632
+ state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir,
633
+ tag,
634
+ exclude_frozen_parameters,
635
+ lazy_mode=True)
636
+
637
+ # Shard the model if it is too big.
638
+ weights_name = "model.safetensors" if safe_serialization else "pytorch_model.bin"
639
+ if max_shard_size is not None:
640
+ filename_pattern = weights_name.replace(".bin", "{suffix}.bin").replace(".safetensors", "{suffix}.safetensors")
641
+ # a memory-efficient approach for sharding
642
+ empty_state_dict = to_torch_tensor(state_dict, return_empty_tensor=True)
643
+ state_dict_split = split_torch_state_dict_into_shards(empty_state_dict,
644
+ filename_pattern=filename_pattern,
645
+ max_shard_size=max_shard_size)
646
+ else:
647
+ from collections import namedtuple
648
+ StateDictSplit = namedtuple("StateDictSplit", ["is_sharded", "filename_to_tensors"])
649
+ state_dict_split = StateDictSplit(is_sharded=False,
650
+ filename_to_tensors={weights_name: list(state_dict.keys())})
651
+
652
+ # Save the model by shard
653
+ os.makedirs(output_dir, exist_ok=True)
654
+ filename_to_tensors = state_dict_split.filename_to_tensors.items()
655
+ for shard_file, tensors in tqdm(filename_to_tensors, desc="Saving checkpoint shards"):
656
+ shard_state_dict = {tensor_name: state_dict[tensor_name] for tensor_name in tensors}
657
+ shard_state_dict = to_torch_tensor(shard_state_dict)
658
+ output_path = os.path.join(output_dir, shard_file)
659
+ if safe_serialization:
660
+ save_file(shard_state_dict, output_path, metadata={"format": "pt"})
661
+ else:
662
+ torch.save(shard_state_dict, output_path)
663
+ # release the memory of current shard
664
+ for tensor_name in list(shard_state_dict.keys()):
665
+ del state_dict[tensor_name]
666
+ del shard_state_dict[tensor_name]
667
+ del shard_state_dict
668
+ gc.collect()
669
+
670
+ # Save index if sharded
671
+ if state_dict_split.is_sharded:
672
+ index = {
673
+ "metadata": state_dict_split.metadata,
674
+ "weight_map": state_dict_split.tensor_to_filename,
675
+ }
676
+ save_index_file = "model.safetensors.index.json" if safe_serialization else "pytorch_model.bin.index.json"
677
+ save_index_file = os.path.join(output_dir, save_index_file)
678
+ with open(save_index_file, "w", encoding="utf-8") as f:
679
+ content = json.dumps(index, indent=2, sort_keys=True) + "\n"
680
+ f.write(content)
681
+
682
+
683
+ def load_state_dict_from_zero_checkpoint(model, checkpoint_dir, tag=None):
684
+ """
685
+ 1. Put the provided model to cpu
686
+ 2. Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated ``state_dict``
687
+ 3. Load it into the provided model
688
+
689
+ Args:
690
+ - ``model``: the model object to update
691
+ - ``checkpoint_dir``: path to the desired checkpoint folder. (one that contains the tag-folder, like ``global_step14``)
692
+ - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in the file named ``latest`` in the checkpoint folder, e.g., ``global_step14``
693
+
694
+ Returns:
695
+ - ``model``: modified model
696
+
697
+ Make sure you have plenty of CPU memory available before you call this function. If you don't
698
+ have enough use the ``zero_to_fp32.py`` utility to do the conversion. You will find it
699
+ conveniently placed for you in the checkpoint folder.
700
+
701
+ A typical usage might be ::
702
+
703
+ from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint
704
+ model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir)
705
+ # submit to model hub or save the model to share with others
706
+
707
+ Note that once this has been run, the ``model`` will no longer be usable in the deepspeed context
708
+ of the same application. i.e. you will need to re-initialize the deepspeed engine, since
709
+ ``model.load_state_dict(state_dict)`` will remove all the deepspeed magic from it.
710
+
711
+ """
712
+ logger.info(f"Extracting fp32 weights")
713
+ state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag)
714
+
715
+ logger.info(f"Overwriting model with fp32 weights")
716
+ model = model.cpu()
717
+ model.load_state_dict(state_dict, strict=False)
718
+
719
+ return model
720
+
721
+
722
+ if __name__ == "__main__":
723
+ parser = argparse.ArgumentParser()
724
+ parser.add_argument("checkpoint_dir",
725
+ type=str,
726
+ help="path to the desired checkpoint folder, e.g., path/checkpoint-12")
727
+ parser.add_argument("output_dir",
728
+ type=str,
729
+ help="directory to the pytorch fp32 state_dict output files"
730
+ "(e.g. path/checkpoint-12-output/)")
731
+ parser.add_argument(
732
+ "--max_shard_size",
733
+ type=str,
734
+ default="5GB",
735
+ help="The maximum size for a checkpoint before being sharded. Checkpoints shard will then be each of size"
736
+ "lower than this size. If expressed as a string, needs to be digits followed by a unit (like `5MB`"
737
+ "We default it to 5GB in order for models to be able to run easily on free-tier google colab instances"
738
+ "without CPU OOM issues.")
739
+ parser.add_argument(
740
+ "--safe_serialization",
741
+ default=False,
742
+ action='store_true',
743
+ help="Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`).")
744
+ parser.add_argument("-t",
745
+ "--tag",
746
+ type=str,
747
+ default=None,
748
+ help="checkpoint tag used as a unique identifier for checkpoint. e.g., global_step1")
749
+ parser.add_argument("--exclude_frozen_parameters", action='store_true', help="exclude frozen parameters")
750
+ parser.add_argument("-d", "--debug", action='store_true', help="enable debug")
751
+ args = parser.parse_args()
752
+
753
+ debug = args.debug
754
+
755
+ convert_zero_checkpoint_to_fp32_state_dict(args.checkpoint_dir,
756
+ args.output_dir,
757
+ max_shard_size=args.max_shard_size,
758
+ safe_serialization=args.safe_serialization,
759
+ tag=args.tag,
760
+ exclude_frozen_parameters=args.exclude_frozen_parameters)
logs/events.out.tfevents.1738893626.apolo.2381507.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:88633ed8a22a5f6c0c0a70e3b0d09eddfb5a64f018c65a729e77c75d6e84fc5d
+ size 6829
logs/events.out.tfevents.1738894521.apolo.2389024.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:039d9de97b83309f2ab5b48653fd2d7e0e7a0dce791111b881f8f206fe9b81d4
+ size 6415
logs/events.out.tfevents.1738894780.apolo.2393100.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1d0d28805640478d1ad9727f750eb7a9dbfe5270e50dccc85a97f3d23c3492ba
+ size 6621
logs/events.out.tfevents.1738895567.apolo.2399779.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c7b03d4a1b9a5da4e9715345ef670697f4cbae426e0cc33524eafb725250c4e1
+ size 6835
logs/events.out.tfevents.1738895868.apolo.2404670.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d47513bb9522eaf28ec3c8e32c8d9660eb16c2a78a3857e7f1df65b8d464b353
+ size 6834
logs/events.out.tfevents.1738896201.apolo.2410067.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:61476e8e694469e468dfad1328a2f69d2aa90a57199e595f2fe1e1bfa7a27007
+ size 14958
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+ "bos_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": "<|endoftext|>",
+ "unk_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff
 
vocab.json ADDED
The diff for this file is too large to render. See raw diff