fix paper link
README.md (CHANGED)

@@ -7,7 +7,7 @@ language:
 
-More than one training run goes into making a large language model, but developers rarely release the small models and datasets they experiment with during the development process. How do they decide what dataset to use for pretraining or which benchmarks to hill climb on? To empower open exploration of these questions, we release [DataDecide](allenai.org/
+More than one training run goes into making a large language model, but developers rarely release the small models and datasets they experiment with during the development process. How do they decide what dataset to use for pretraining or which benchmarks to hill climb on? To empower open exploration of these questions, we release [DataDecide](https://allenai.org/papers/datadecide)—a suite of models we pretrain on 25 corpora with differing sources, deduplication, and filtering up to 100B tokens, over 14 different model sizes ranging from 4M parameters up to 1B parameters (more than 30k model checkpoints in total).
 
 ## 25 Data Recipes

@@ -85,7 +85,7 @@ For each of our 25 datasets and 14 model sizes, we train a model linked below. E
 ### Links
 
 - **Repository:** [https://github.com/allenai/DataDecide](https://github.com/allenai/DataDecide)
-- **Paper:** [https:/allenai.org/
+- **Paper:** [https:/allenai.org/papers/datadecide](https:/allenai.org/papers/datadecide)
 
 ## Citation