Commit 5b9e7f5 (verified) by IanMagnusson · 1 parent: 4ab6073

fix paper link

Files changed (1): README.md (+2 −2)
```diff
@@ -36,7 +36,7 @@ license: odc-by
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62bddd0b1e22ec8427a0f27e/MwddQs_8OaU4128VYrwoU.png)
 
-More than one training run goes into making a large language model, but developers rarely release the small models and datasets they experiment with during the development process. How do they decide what dataset to use for pretraining or which benchmarks to hill climb on? To empower open exploration of these questions, we release [DataDecide](allenai.org/paper/datadecide)—a suite of models we pretrain on 25 corpora with differing sources, deduplication, and filtering up to 100B tokens, over 14 different model sizes ranging from 4M parameters up to 1B parameters (more than 30k model checkpoints in total).
+More than one training run goes into making a large language model, but developers rarely release the small models and datasets they experiment with during the development process. How do they decide what dataset to use for pretraining or which benchmarks to hill climb on? To empower open exploration of these questions, we release [DataDecide](allenai.org/papers/datadecide)—a suite of models we pretrain on 25 corpora with differing sources, deduplication, and filtering up to 100B tokens, over 14 different model sizes ranging from 4M parameters up to 1B parameters (more than 30k model checkpoints in total).
 
 
 ## Evaluation
@@ -113,7 +113,7 @@ These evaluations are done over all DataDecide models. For each of our 25 datase
 ### Links
 
 - **Repository:** [https://github.com/allenai/DataDecide](https://github.com/allenai/DataDecide)
-- **Paper:** [https:/allenai.org/paper/datadecide](https:/allenai.org/paper/datadecide)
+- **Paper:** [https:/allenai.org/papers/datadecide](https:/allenai.org/papers/datadecide)
 
 ## Citation
```