pretrain core
- README.md +3 -2
- scripts/pretrain_core_model.yaml +2 -2
README.md CHANGED
@@ -53,8 +53,9 @@ time python -B prepare_core_datasets.py
 ```
 
 ```
-i=0, min_len=0, max_len=
-Total number of tokens in the optimized dataset '../core-data-0-0-
+i=0, min_len=0, max_len=1073741824, block_size=4097, chunk_size=16388000, len(dataset)=1287403, len(dataset) * block_size=5274490091
+Total number of tokens in the optimized dataset '../core-data-0-0-1073741824-4097-4000' is 5274490091
+
 ```
 
 ```bash
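The new README lines record the output of `prepare_core_datasets.py`, and the figures are internally consistent: 1,287,403 blocks of 4,097 tokens each give 5,274,490,091 tokens total, and each 16,388,000-token chunk holds exactly 4,000 blocks. A minimal sanity check of that arithmetic (the directory-naming scheme reconstructed at the end is an assumption inferred from the log, not confirmed against the script):

```python
# Sanity-check the figures reported by prepare_core_datasets.py.
i, min_len, max_len = 0, 0, 1073741824
block_size = 4097        # tokens per block (likely 4096 + 1 for the shifted target)
chunk_size = 16388000    # tokens per chunk
dataset_len = 1287403    # len(dataset): number of blocks

blocks_per_chunk = chunk_size // block_size
assert blocks_per_chunk == 4000

total_tokens = dataset_len * block_size
assert total_tokens == 5274490091  # matches the reported token count

# Hypothetical reconstruction of the output directory name from the parameters:
name = f"../core-data-{i}-{min_len}-{max_len}-{block_size}-{blocks_per_chunk}"
assert name == "../core-data-0-0-1073741824-4097-4000"
```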
scripts/pretrain_core_model.yaml CHANGED
@@ -46,7 +46,7 @@ data:
   class_path: LitData
 
   init_args:
-    data_path: "../core-data-0-0-
+    data_path: "../core-data-0-0-1073741824-4097-4000/"
     num_workers: 32
 
 # Training-related arguments. See ``litgpt.args.TrainArgs`` for details
@@ -70,7 +70,7 @@ train:
   epochs:
 
   # Total number of tokens to train on (type: Optional[int], default: 3000000000000)
-  max_tokens:
+  max_tokens: 5274490091
 
   # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
   max_steps: