fix paper link
README.md CHANGED
@@ -36,7 +36,7 @@ license: odc-by
 
 
 
-More than one training run goes into making a large language model, but developers rarely release the small models and datasets they experiment with during the development process. How do they decide what dataset to use for pretraining or which benchmarks to hill climb on? To empower open exploration of these questions, we release [DataDecide](allenai.org/
+More than one training run goes into making a large language model, but developers rarely release the small models and datasets they experiment with during the development process. How do they decide what dataset to use for pretraining or which benchmarks to hill climb on? To empower open exploration of these questions, we release [DataDecide](allenai.org/papers/datadecide)—a suite of models we pretrain on 25 corpora with differing sources, deduplication, and filtering up to 100B tokens, over 14 different model sizes ranging from 4M parameters up to 1B parameters (more than 30k model checkpoints in total).
 
 
 ## Evaluation
@@ -113,7 +113,7 @@ These evaluations are done over all DataDecide models. For each of our 25 datase
 ### Links
 
 - **Repository:** [https://github.com/allenai/DataDecide](https://github.com/allenai/DataDecide)
-- **Paper:** [https:/allenai.org/
+- **Paper:** [https:/allenai.org/papers/datadecide](https:/allenai.org/papers/datadecide)
 
 ## Citation
 