| Column | Type | Min length | Max length |
|---|---|---|---|
| sha | stringlengths | 40 | 40 |
| text | stringlengths | 1 | 13.4M |
| id | stringlengths | 2 | 117 |
| tags | sequencelengths | 1 | 7.91k |
| created_at | stringlengths | 25 | 25 |
| metadata | stringlengths | 2 | 875k |
| last_modified | stringlengths | 25 | 25 |
| arxiv | sequencelengths | 0 | 25 |
| languages | sequencelengths | 0 | 7.91k |
| tags_str | stringlengths | 17 | 159k |
| text_str | stringlengths | 1 | 447k |
| text_lists | sequencelengths | 0 | 352 |
| processed_texts | sequencelengths | 1 | 353 |
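Each record below follows this schema, one field per line in column order. A minimal sketch of loading such an export with the 🤗 `datasets` library and spot-checking the fixed-width columns from the table above; the Parquet file name is a placeholder and not part of the original dump:

```python
from datasets import load_dataset

# Placeholder path: point this at wherever the export is actually stored.
ds = load_dataset("parquet", data_files="dataset_cards_dump.parquet", split="train")

# The columns listed in the schema table above.
print(ds.column_names)

# Fixed-width fields: `sha` is a 40-character git hash, and the timestamps
# are 25-character ISO-8601 strings (e.g. 2023-01-05T02:44:56+00:00).
row = ds[0]
assert len(row["sha"]) == 40
assert len(row["created_at"]) == 25
assert len(row["last_modified"]) == 25
```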
85f3a003f2483bb16a920a1e797c285ef3c2dde3
# Dataset Card for `beir/fever/dev` The `beir/fever/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/fever/dev). # Data This dataset provides: - `queries` (i.e., topics); count=6,666 - `qrels`: (relevance assessments); count=8,079 - For `docs`, use [`irds/beir_fever`](https://huggingface.co/datasets/irds/beir_fever) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_fever_dev', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_fever_dev', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Thorne2018Fever, title = "{FEVER}: a Large-scale Dataset for Fact Extraction and {VER}ification", author = "Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/N18-1074", doi = "10.18653/v1/N18-1074", pages = "809--819" } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_fever_dev
[ "task_categories:text-retrieval", "source_datasets:irds/beir_fever", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:44:56+00:00
{"source_datasets": ["irds/beir_fever"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/fever/dev`", "viewer": false}
2023-01-05T02:45:02+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/beir_fever #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/fever/dev' The 'beir/fever/dev' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=6,666 - 'qrels': (relevance assessments); count=8,079 - For 'docs', use 'irds/beir_fever' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/fever/dev'\n\nThe 'beir/fever/dev' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=6,666\n - 'qrels': (relevance assessments); count=8,079\n\n - For 'docs', use 'irds/beir_fever'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/beir_fever #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/fever/dev'\n\nThe 'beir/fever/dev' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=6,666\n - 'qrels': (relevance assessments); count=8,079\n\n - For 'docs', use 'irds/beir_fever'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
615f7819eb0498ad701ec109b14d001a2e3c2830
# Dataset Card for `beir/fever/test` The `beir/fever/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/fever/test). # Data This dataset provides: - `queries` (i.e., topics); count=6,666 - `qrels`: (relevance assessments); count=7,937 - For `docs`, use [`irds/beir_fever`](https://huggingface.co/datasets/irds/beir_fever) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_fever_test', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_fever_test', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Thorne2018Fever, title = "{FEVER}: a Large-scale Dataset for Fact Extraction and {VER}ification", author = "Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/N18-1074", doi = "10.18653/v1/N18-1074", pages = "809--819" } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_fever_test
[ "task_categories:text-retrieval", "source_datasets:irds/beir_fever", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:45:07+00:00
{"source_datasets": ["irds/beir_fever"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/fever/test`", "viewer": false}
2023-01-05T02:45:13+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/beir_fever #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/fever/test' The 'beir/fever/test' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=6,666 - 'qrels': (relevance assessments); count=7,937 - For 'docs', use 'irds/beir_fever' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/fever/test'\n\nThe 'beir/fever/test' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=6,666\n - 'qrels': (relevance assessments); count=7,937\n\n - For 'docs', use 'irds/beir_fever'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/beir_fever #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/fever/test'\n\nThe 'beir/fever/test' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=6,666\n - 'qrels': (relevance assessments); count=7,937\n\n - For 'docs', use 'irds/beir_fever'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
a09ed8ea7edeb86eb6c29f4d10858e6a515b8b9d
# Dataset Card for `beir/fever/train` The `beir/fever/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/fever/train). # Data This dataset provides: - `queries` (i.e., topics); count=109,810 - `qrels`: (relevance assessments); count=140,085 - For `docs`, use [`irds/beir_fever`](https://huggingface.co/datasets/irds/beir_fever) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_fever_train', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_fever_train', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Thorne2018Fever, title = "{FEVER}: a Large-scale Dataset for Fact Extraction and {VER}ification", author = "Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/N18-1074", doi = "10.18653/v1/N18-1074", pages = "809--819" } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_fever_train
[ "task_categories:text-retrieval", "source_datasets:irds/beir_fever", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:45:19+00:00
{"source_datasets": ["irds/beir_fever"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/fever/train`", "viewer": false}
2023-01-05T02:45:24+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/beir_fever #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/fever/train' The 'beir/fever/train' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=109,810 - 'qrels': (relevance assessments); count=140,085 - For 'docs', use 'irds/beir_fever' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/fever/train'\n\nThe 'beir/fever/train' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=109,810\n - 'qrels': (relevance assessments); count=140,085\n\n - For 'docs', use 'irds/beir_fever'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/beir_fever #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/fever/train'\n\nThe 'beir/fever/train' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=109,810\n - 'qrels': (relevance assessments); count=140,085\n\n - For 'docs', use 'irds/beir_fever'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
5b07156fcd5f44189a2a7b7f638bac632893d073
# Dataset Card for `beir/fiqa` The `beir/fiqa` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/fiqa). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=57,638 - `queries` (i.e., topics); count=6,648 This dataset is used by: [`beir_fiqa_dev`](https://huggingface.co/datasets/irds/beir_fiqa_dev), [`beir_fiqa_test`](https://huggingface.co/datasets/irds/beir_fiqa_test), [`beir_fiqa_train`](https://huggingface.co/datasets/irds/beir_fiqa_train) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/beir_fiqa', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} queries = load_dataset('irds/beir_fiqa', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Maia2018Fiqa, title={WWW'18 Open Challenge: Financial Opinion Mining and Question Answering}, author={Macedo Maia and S. Handschuh and A. Freitas and Brian Davis and R. McDermott and M. Zarrouk and A. Balahur}, journal={Companion Proceedings of the The Web Conference 2018}, year={2018} } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_fiqa
[ "task_categories:text-retrieval", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:45:30+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/fiqa`", "viewer": false}
2023-01-05T02:45:35+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/fiqa' The 'beir/fiqa' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=57,638 - 'queries' (i.e., topics); count=6,648 This dataset is used by: 'beir_fiqa_dev', 'beir_fiqa_test', 'beir_fiqa_train' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/fiqa'\n\nThe 'beir/fiqa' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=57,638\n - 'queries' (i.e., topics); count=6,648\n\n\nThis dataset is used by: 'beir_fiqa_dev', 'beir_fiqa_test', 'beir_fiqa_train'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/fiqa'\n\nThe 'beir/fiqa' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=57,638\n - 'queries' (i.e., topics); count=6,648\n\n\nThis dataset is used by: 'beir_fiqa_dev', 'beir_fiqa_test', 'beir_fiqa_train'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
999e0e31ad15ebb0511133c8b9eb1e11f7193984
# Dataset Card for `beir/fiqa/dev` The `beir/fiqa/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/fiqa/dev). # Data This dataset provides: - `queries` (i.e., topics); count=500 - `qrels`: (relevance assessments); count=1,238 - For `docs`, use [`irds/beir_fiqa`](https://huggingface.co/datasets/irds/beir_fiqa) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_fiqa_dev', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_fiqa_dev', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Maia2018Fiqa, title={WWW'18 Open Challenge: Financial Opinion Mining and Question Answering}, author={Macedo Maia and S. Handschuh and A. Freitas and Brian Davis and R. McDermott and M. Zarrouk and A. Balahur}, journal={Companion Proceedings of the The Web Conference 2018}, year={2018} } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_fiqa_dev
[ "task_categories:text-retrieval", "source_datasets:irds/beir_fiqa", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:45:41+00:00
{"source_datasets": ["irds/beir_fiqa"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/fiqa/dev`", "viewer": false}
2023-01-05T02:45:47+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/beir_fiqa #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/fiqa/dev' The 'beir/fiqa/dev' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=500 - 'qrels': (relevance assessments); count=1,238 - For 'docs', use 'irds/beir_fiqa' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/fiqa/dev'\n\nThe 'beir/fiqa/dev' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=500\n - 'qrels': (relevance assessments); count=1,238\n\n - For 'docs', use 'irds/beir_fiqa'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/beir_fiqa #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/fiqa/dev'\n\nThe 'beir/fiqa/dev' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=500\n - 'qrels': (relevance assessments); count=1,238\n\n - For 'docs', use 'irds/beir_fiqa'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
a9ea84138dde55ae1149087a2b70d9d1e3ea06ae
# Dataset Card for `beir/fiqa/test` The `beir/fiqa/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/fiqa/test). # Data This dataset provides: - `queries` (i.e., topics); count=648 - `qrels`: (relevance assessments); count=1,706 - For `docs`, use [`irds/beir_fiqa`](https://huggingface.co/datasets/irds/beir_fiqa) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_fiqa_test', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_fiqa_test', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Maia2018Fiqa, title={WWW'18 Open Challenge: Financial Opinion Mining and Question Answering}, author={Macedo Maia and S. Handschuh and A. Freitas and Brian Davis and R. McDermott and M. Zarrouk and A. Balahur}, journal={Companion Proceedings of the The Web Conference 2018}, year={2018} } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_fiqa_test
[ "task_categories:text-retrieval", "source_datasets:irds/beir_fiqa", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:45:52+00:00
{"source_datasets": ["irds/beir_fiqa"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/fiqa/test`", "viewer": false}
2023-01-05T02:45:58+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/beir_fiqa #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/fiqa/test' The 'beir/fiqa/test' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=648 - 'qrels': (relevance assessments); count=1,706 - For 'docs', use 'irds/beir_fiqa' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/fiqa/test'\n\nThe 'beir/fiqa/test' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=648\n - 'qrels': (relevance assessments); count=1,706\n\n - For 'docs', use 'irds/beir_fiqa'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/beir_fiqa #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/fiqa/test'\n\nThe 'beir/fiqa/test' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=648\n - 'qrels': (relevance assessments); count=1,706\n\n - For 'docs', use 'irds/beir_fiqa'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
a7fe82fcfc5e2397b7468ade5a236400d1e29f5d
# Dataset Card for `beir/fiqa/train` The `beir/fiqa/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/fiqa/train). # Data This dataset provides: - `queries` (i.e., topics); count=5,500 - `qrels`: (relevance assessments); count=14,166 - For `docs`, use [`irds/beir_fiqa`](https://huggingface.co/datasets/irds/beir_fiqa) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_fiqa_train', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_fiqa_train', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Maia2018Fiqa, title={WWW'18 Open Challenge: Financial Opinion Mining and Question Answering}, author={Macedo Maia and S. Handschuh and A. Freitas and Brian Davis and R. McDermott and M. Zarrouk and A. Balahur}, journal={Companion Proceedings of the The Web Conference 2018}, year={2018} } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_fiqa_train
[ "task_categories:text-retrieval", "source_datasets:irds/beir_fiqa", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:46:03+00:00
{"source_datasets": ["irds/beir_fiqa"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/fiqa/train`", "viewer": false}
2023-01-05T02:46:09+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/beir_fiqa #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/fiqa/train' The 'beir/fiqa/train' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=5,500 - 'qrels': (relevance assessments); count=14,166 - For 'docs', use 'irds/beir_fiqa' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/fiqa/train'\n\nThe 'beir/fiqa/train' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=5,500\n - 'qrels': (relevance assessments); count=14,166\n\n - For 'docs', use 'irds/beir_fiqa'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/beir_fiqa #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/fiqa/train'\n\nThe 'beir/fiqa/train' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=5,500\n - 'qrels': (relevance assessments); count=14,166\n\n - For 'docs', use 'irds/beir_fiqa'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
a9d4569fcf0f719c13a00e7d829da989c180c858
# Dataset Card for `beir/hotpotqa` The `beir/hotpotqa` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/hotpotqa). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=5,233,329 - `queries` (i.e., topics); count=97,852 This dataset is used by: [`beir_hotpotqa_dev`](https://huggingface.co/datasets/irds/beir_hotpotqa_dev), [`beir_hotpotqa_test`](https://huggingface.co/datasets/irds/beir_hotpotqa_test), [`beir_hotpotqa_train`](https://huggingface.co/datasets/irds/beir_hotpotqa_train) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/beir_hotpotqa', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'title': ..., 'url': ...} queries = load_dataset('irds/beir_hotpotqa', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Yang2018Hotpotqa, title = "{H}otpot{QA}: A Dataset for Diverse, Explainable Multi-hop Question Answering", author = "Yang, Zhilin and Qi, Peng and Zhang, Saizheng and Bengio, Yoshua and Cohen, William and Salakhutdinov, Ruslan and Manning, Christopher D.", booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", month = oct # "-" # nov, year = "2018", address = "Brussels, Belgium", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D18-1259", doi = "10.18653/v1/D18-1259", pages = "2369--2380" } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_hotpotqa
[ "task_categories:text-retrieval", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:46:14+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/hotpotqa`", "viewer": false}
2023-01-05T02:46:20+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/hotpotqa' The 'beir/hotpotqa' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=5,233,329 - 'queries' (i.e., topics); count=97,852 This dataset is used by: 'beir_hotpotqa_dev', 'beir_hotpotqa_test', 'beir_hotpotqa_train' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/hotpotqa'\n\nThe 'beir/hotpotqa' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=5,233,329\n - 'queries' (i.e., topics); count=97,852\n\n\nThis dataset is used by: 'beir_hotpotqa_dev', 'beir_hotpotqa_test', 'beir_hotpotqa_train'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/hotpotqa'\n\nThe 'beir/hotpotqa' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=5,233,329\n - 'queries' (i.e., topics); count=97,852\n\n\nThis dataset is used by: 'beir_hotpotqa_dev', 'beir_hotpotqa_test', 'beir_hotpotqa_train'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
843a773f0c01515354a6e4ed92b808fbf0ce5816
# Dataset Card for `beir/hotpotqa/dev` The `beir/hotpotqa/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/hotpotqa/dev). # Data This dataset provides: - `queries` (i.e., topics); count=5,447 - `qrels`: (relevance assessments); count=10,894 - For `docs`, use [`irds/beir_hotpotqa`](https://huggingface.co/datasets/irds/beir_hotpotqa) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_hotpotqa_dev', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_hotpotqa_dev', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Yang2018Hotpotqa, title = "{H}otpot{QA}: A Dataset for Diverse, Explainable Multi-hop Question Answering", author = "Yang, Zhilin and Qi, Peng and Zhang, Saizheng and Bengio, Yoshua and Cohen, William and Salakhutdinov, Ruslan and Manning, Christopher D.", booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", month = oct # "-" # nov, year = "2018", address = "Brussels, Belgium", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D18-1259", doi = "10.18653/v1/D18-1259", pages = "2369--2380" } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_hotpotqa_dev
[ "task_categories:text-retrieval", "source_datasets:irds/beir_hotpotqa", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:46:25+00:00
{"source_datasets": ["irds/beir_hotpotqa"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/hotpotqa/dev`", "viewer": false}
2023-01-05T02:46:31+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/beir_hotpotqa #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/hotpotqa/dev' The 'beir/hotpotqa/dev' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=5,447 - 'qrels': (relevance assessments); count=10,894 - For 'docs', use 'irds/beir_hotpotqa' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/hotpotqa/dev'\n\nThe 'beir/hotpotqa/dev' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=5,447\n - 'qrels': (relevance assessments); count=10,894\n\n - For 'docs', use 'irds/beir_hotpotqa'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/beir_hotpotqa #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/hotpotqa/dev'\n\nThe 'beir/hotpotqa/dev' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=5,447\n - 'qrels': (relevance assessments); count=10,894\n\n - For 'docs', use 'irds/beir_hotpotqa'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
57bdab89c981562ccfff221830c2a788af40222e
# Dataset Card for `beir/hotpotqa/test` The `beir/hotpotqa/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/hotpotqa/test). # Data This dataset provides: - `queries` (i.e., topics); count=7,405 - `qrels`: (relevance assessments); count=14,810 - For `docs`, use [`irds/beir_hotpotqa`](https://huggingface.co/datasets/irds/beir_hotpotqa) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_hotpotqa_test', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_hotpotqa_test', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Yang2018Hotpotqa, title = "{H}otpot{QA}: A Dataset for Diverse, Explainable Multi-hop Question Answering", author = "Yang, Zhilin and Qi, Peng and Zhang, Saizheng and Bengio, Yoshua and Cohen, William and Salakhutdinov, Ruslan and Manning, Christopher D.", booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", month = oct # "-" # nov, year = "2018", address = "Brussels, Belgium", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D18-1259", doi = "10.18653/v1/D18-1259", pages = "2369--2380" } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_hotpotqa_test
[ "task_categories:text-retrieval", "source_datasets:irds/beir_hotpotqa", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:46:37+00:00
{"source_datasets": ["irds/beir_hotpotqa"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/hotpotqa/test`", "viewer": false}
2023-01-05T02:46:42+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/beir_hotpotqa #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/hotpotqa/test' The 'beir/hotpotqa/test' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=7,405 - 'qrels': (relevance assessments); count=14,810 - For 'docs', use 'irds/beir_hotpotqa' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/hotpotqa/test'\n\nThe 'beir/hotpotqa/test' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=7,405\n - 'qrels': (relevance assessments); count=14,810\n\n - For 'docs', use 'irds/beir_hotpotqa'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/beir_hotpotqa #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/hotpotqa/test'\n\nThe 'beir/hotpotqa/test' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=7,405\n - 'qrels': (relevance assessments); count=14,810\n\n - For 'docs', use 'irds/beir_hotpotqa'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
cb373247cc67a93292fd84e958f74263c982c7ce
# Dataset Card for `beir/hotpotqa/train` The `beir/hotpotqa/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/hotpotqa/train). # Data This dataset provides: - `queries` (i.e., topics); count=85,000 - `qrels`: (relevance assessments); count=170,000 - For `docs`, use [`irds/beir_hotpotqa`](https://huggingface.co/datasets/irds/beir_hotpotqa) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_hotpotqa_train', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_hotpotqa_train', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Yang2018Hotpotqa, title = "{H}otpot{QA}: A Dataset for Diverse, Explainable Multi-hop Question Answering", author = "Yang, Zhilin and Qi, Peng and Zhang, Saizheng and Bengio, Yoshua and Cohen, William and Salakhutdinov, Ruslan and Manning, Christopher D.", booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", month = oct # "-" # nov, year = "2018", address = "Brussels, Belgium", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D18-1259", doi = "10.18653/v1/D18-1259", pages = "2369--2380" } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_hotpotqa_train
[ "task_categories:text-retrieval", "source_datasets:irds/beir_hotpotqa", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:46:48+00:00
{"source_datasets": ["irds/beir_hotpotqa"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/hotpotqa/train`", "viewer": false}
2023-01-05T02:46:53+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/beir_hotpotqa #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/hotpotqa/train' The 'beir/hotpotqa/train' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=85,000 - 'qrels': (relevance assessments); count=170,000 - For 'docs', use 'irds/beir_hotpotqa' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/hotpotqa/train'\n\nThe 'beir/hotpotqa/train' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=85,000\n - 'qrels': (relevance assessments); count=170,000\n\n - For 'docs', use 'irds/beir_hotpotqa'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/beir_hotpotqa #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/hotpotqa/train'\n\nThe 'beir/hotpotqa/train' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=85,000\n - 'qrels': (relevance assessments); count=170,000\n\n - For 'docs', use 'irds/beir_hotpotqa'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
7ac33c87e5a52e34ac4de6b2fbc9a6be0a332d70
# Dataset Card for `beir/msmarco` The `beir/msmarco` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/msmarco). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=8,841,823 - `queries` (i.e., topics); count=509,962 This dataset is used by: [`beir_msmarco_dev`](https://huggingface.co/datasets/irds/beir_msmarco_dev), [`beir_msmarco_test`](https://huggingface.co/datasets/irds/beir_msmarco_test), [`beir_msmarco_train`](https://huggingface.co/datasets/irds/beir_msmarco_train) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/beir_msmarco', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} queries = load_dataset('irds/beir_msmarco', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Bajaj2016Msmarco, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang}, booktitle={InCoCo@NIPS}, year={2016} } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_msmarco
[ "task_categories:text-retrieval", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:46:59+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/msmarco`", "viewer": false}
2023-01-05T02:47:04+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/msmarco' The 'beir/msmarco' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=8,841,823 - 'queries' (i.e., topics); count=509,962 This dataset is used by: 'beir_msmarco_dev', 'beir_msmarco_test', 'beir_msmarco_train' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/msmarco'\n\nThe 'beir/msmarco' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=8,841,823\n - 'queries' (i.e., topics); count=509,962\n\n\nThis dataset is used by: 'beir_msmarco_dev', 'beir_msmarco_test', 'beir_msmarco_train'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/msmarco'\n\nThe 'beir/msmarco' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=8,841,823\n - 'queries' (i.e., topics); count=509,962\n\n\nThis dataset is used by: 'beir_msmarco_dev', 'beir_msmarco_test', 'beir_msmarco_train'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
a2542a6abd16cb12e686638f8378210b862e9e02
# Dataset Card for `beir/msmarco/dev` The `beir/msmarco/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/msmarco/dev). # Data This dataset provides: - `queries` (i.e., topics); count=6,980 - `qrels`: (relevance assessments); count=7,437 - For `docs`, use [`irds/beir_msmarco`](https://huggingface.co/datasets/irds/beir_msmarco) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_msmarco_dev', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_msmarco_dev', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Bajaj2016Msmarco, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang}, booktitle={InCoCo@NIPS}, year={2016} } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_msmarco_dev
[ "task_categories:text-retrieval", "source_datasets:irds/beir_msmarco", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:47:10+00:00
{"source_datasets": ["irds/beir_msmarco"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/msmarco/dev`", "viewer": false}
2023-01-05T02:47:16+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/beir_msmarco #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/msmarco/dev' The 'beir/msmarco/dev' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=6,980 - 'qrels': (relevance assessments); count=7,437 - For 'docs', use 'irds/beir_msmarco' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/msmarco/dev'\n\nThe 'beir/msmarco/dev' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=6,980\n - 'qrels': (relevance assessments); count=7,437\n\n - For 'docs', use 'irds/beir_msmarco'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/beir_msmarco #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/msmarco/dev'\n\nThe 'beir/msmarco/dev' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=6,980\n - 'qrels': (relevance assessments); count=7,437\n\n - For 'docs', use 'irds/beir_msmarco'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
db8b6c11d729fe1de71e4ed49b8c6eb7b95202e6
# Dataset Card for `beir/msmarco/test` The `beir/msmarco/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/msmarco/test). # Data This dataset provides: - `queries` (i.e., topics); count=43 - `qrels`: (relevance assessments); count=9,260 - For `docs`, use [`irds/beir_msmarco`](https://huggingface.co/datasets/irds/beir_msmarco) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_msmarco_test', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_msmarco_test', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Craswell2019TrecDl, title={Overview of the TREC 2019 deep learning track}, author={Nick Craswell and Bhaskar Mitra and Emine Yilmaz and Daniel Campos and Ellen Voorhees}, booktitle={TREC 2019}, year={2019} } @inproceedings{Bajaj2016Msmarco, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang}, booktitle={InCoCo@NIPS}, year={2016} } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_msmarco_test
[ "task_categories:text-retrieval", "source_datasets:irds/beir_msmarco", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:47:21+00:00
{"source_datasets": ["irds/beir_msmarco"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/msmarco/test`", "viewer": false}
2023-01-05T02:47:27+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/beir_msmarco #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/msmarco/test' The 'beir/msmarco/test' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=43 - 'qrels': (relevance assessments); count=9,260 - For 'docs', use 'irds/beir_msmarco' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/msmarco/test'\n\nThe 'beir/msmarco/test' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=43\n - 'qrels': (relevance assessments); count=9,260\n\n - For 'docs', use 'irds/beir_msmarco'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/beir_msmarco #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/msmarco/test'\n\nThe 'beir/msmarco/test' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=43\n - 'qrels': (relevance assessments); count=9,260\n\n - For 'docs', use 'irds/beir_msmarco'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
55ed347565ae2664c3163709e13a1b310bd6c437
# Dataset Card for `beir/msmarco/train` The `beir/msmarco/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/msmarco/train). # Data This dataset provides: - `queries` (i.e., topics); count=502,939 - `qrels`: (relevance assessments); count=532,751 - For `docs`, use [`irds/beir_msmarco`](https://huggingface.co/datasets/irds/beir_msmarco) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_msmarco_train', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_msmarco_train', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Bajaj2016Msmarco, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang}, booktitle={InCoCo@NIPS}, year={2016} } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_msmarco_train
[ "task_categories:text-retrieval", "source_datasets:irds/beir_msmarco", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:47:32+00:00
{"source_datasets": ["irds/beir_msmarco"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/msmarco/train`", "viewer": false}
2023-01-05T02:47:38+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/beir_msmarco #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/msmarco/train' The 'beir/msmarco/train' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=502,939 - 'qrels': (relevance assessments); count=532,751 - For 'docs', use 'irds/beir_msmarco' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/msmarco/train'\n\nThe 'beir/msmarco/train' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=502,939\n - 'qrels': (relevance assessments); count=532,751\n\n - For 'docs', use 'irds/beir_msmarco'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/beir_msmarco #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/msmarco/train'\n\nThe 'beir/msmarco/train' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=502,939\n - 'qrels': (relevance assessments); count=532,751\n\n - For 'docs', use 'irds/beir_msmarco'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
c28514f2fa050fd2960a90289ed03de27cc58774
# Dataset Card for `beir/nfcorpus` The `beir/nfcorpus` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/nfcorpus). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=3,633 - `queries` (i.e., topics); count=3,237 This dataset is used by: [`beir_nfcorpus_dev`](https://huggingface.co/datasets/irds/beir_nfcorpus_dev), [`beir_nfcorpus_test`](https://huggingface.co/datasets/irds/beir_nfcorpus_test), [`beir_nfcorpus_train`](https://huggingface.co/datasets/irds/beir_nfcorpus_train) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/beir_nfcorpus', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'title': ..., 'url': ...} queries = load_dataset('irds/beir_nfcorpus', 'queries') for record in queries: record # {'query_id': ..., 'text': ..., 'url': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_nfcorpus
[ "task_categories:text-retrieval", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:47:43+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/nfcorpus`", "viewer": false}
2023-01-05T02:47:49+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/nfcorpus' The 'beir/nfcorpus' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=3,633 - 'queries' (i.e., topics); count=3,237 This dataset is used by: 'beir_nfcorpus_dev', 'beir_nfcorpus_test', 'beir_nfcorpus_train' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/nfcorpus'\n\nThe 'beir/nfcorpus' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=3,633\n - 'queries' (i.e., topics); count=3,237\n\n\nThis dataset is used by: 'beir_nfcorpus_dev', 'beir_nfcorpus_test', 'beir_nfcorpus_train'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/nfcorpus'\n\nThe 'beir/nfcorpus' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=3,633\n - 'queries' (i.e., topics); count=3,237\n\n\nThis dataset is used by: 'beir_nfcorpus_dev', 'beir_nfcorpus_test', 'beir_nfcorpus_train'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
0480b338f240acd48ca982972179d794e4f013ba
# Dataset Card for `beir/nfcorpus/dev` The `beir/nfcorpus/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/nfcorpus/dev). # Data This dataset provides: - `queries` (i.e., topics); count=324 - `qrels`: (relevance assessments); count=11,385 - For `docs`, use [`irds/beir_nfcorpus`](https://huggingface.co/datasets/irds/beir_nfcorpus) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_nfcorpus_dev', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_nfcorpus_dev', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_nfcorpus_dev
[ "task_categories:text-retrieval", "source_datasets:irds/beir_nfcorpus", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:47:54+00:00
{"source_datasets": ["irds/beir_nfcorpus"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/nfcorpus/dev`", "viewer": false}
2023-01-05T02:48:00+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/beir_nfcorpus #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/nfcorpus/dev' The 'beir/nfcorpus/dev' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=324 - 'qrels': (relevance assessments); count=11,385 - For 'docs', use 'irds/beir_nfcorpus' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/nfcorpus/dev'\n\nThe 'beir/nfcorpus/dev' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=324\n - 'qrels': (relevance assessments); count=11,385\n\n - For 'docs', use 'irds/beir_nfcorpus'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/beir_nfcorpus #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/nfcorpus/dev'\n\nThe 'beir/nfcorpus/dev' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=324\n - 'qrels': (relevance assessments); count=11,385\n\n - For 'docs', use 'irds/beir_nfcorpus'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
a81c6f673c961398e0885421d30a490005764c3c
# Dataset Card for `beir/nfcorpus/test` The `beir/nfcorpus/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/nfcorpus/test). # Data This dataset provides: - `queries` (i.e., topics); count=323 - `qrels`: (relevance assessments); count=12,334 - For `docs`, use [`irds/beir_nfcorpus`](https://huggingface.co/datasets/irds/beir_nfcorpus) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_nfcorpus_test', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_nfcorpus_test', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_nfcorpus_test
[ "task_categories:text-retrieval", "source_datasets:irds/beir_nfcorpus", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:48:05+00:00
{"source_datasets": ["irds/beir_nfcorpus"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/nfcorpus/test`", "viewer": false}
2023-01-05T02:48:11+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/beir_nfcorpus #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/nfcorpus/test' The 'beir/nfcorpus/test' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=323 - 'qrels': (relevance assessments); count=12,334 - For 'docs', use 'irds/beir_nfcorpus' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/nfcorpus/test'\n\nThe 'beir/nfcorpus/test' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=323\n - 'qrels': (relevance assessments); count=12,334\n\n - For 'docs', use 'irds/beir_nfcorpus'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/beir_nfcorpus #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/nfcorpus/test'\n\nThe 'beir/nfcorpus/test' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=323\n - 'qrels': (relevance assessments); count=12,334\n\n - For 'docs', use 'irds/beir_nfcorpus'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
a68af490b17ac326021022490730a55d72f4d7bc
# Dataset Card for `beir/nfcorpus/train` The `beir/nfcorpus/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/nfcorpus/train). # Data This dataset provides: - `queries` (i.e., topics); count=2,590 - `qrels`: (relevance assessments); count=110,575 - For `docs`, use [`irds/beir_nfcorpus`](https://huggingface.co/datasets/irds/beir_nfcorpus) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_nfcorpus_train', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_nfcorpus_train', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_nfcorpus_train
[ "task_categories:text-retrieval", "source_datasets:irds/beir_nfcorpus", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:48:17+00:00
{"source_datasets": ["irds/beir_nfcorpus"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/nfcorpus/train`", "viewer": false}
2023-01-05T02:48:22+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/beir_nfcorpus #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/nfcorpus/train' The 'beir/nfcorpus/train' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=2,590 - 'qrels': (relevance assessments); count=110,575 - For 'docs', use 'irds/beir_nfcorpus' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/nfcorpus/train'\n\nThe 'beir/nfcorpus/train' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=2,590\n - 'qrels': (relevance assessments); count=110,575\n\n - For 'docs', use 'irds/beir_nfcorpus'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/beir_nfcorpus #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/nfcorpus/train'\n\nThe 'beir/nfcorpus/train' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=2,590\n - 'qrels': (relevance assessments); count=110,575\n\n - For 'docs', use 'irds/beir_nfcorpus'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
01d9f2ee90a78404bd6886a8554f4a3d3348cb1a
# Dataset Card for `beir/nq` The `beir/nq` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/nq). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=2,681,468 - `queries` (i.e., topics); count=3,452 - `qrels`: (relevance assessments); count=4,201 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/beir_nq', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'title': ...} queries = load_dataset('irds/beir_nq', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_nq', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Kwiatkowski2019Nq, title = {Natural Questions: a Benchmark for Question Answering Research}, author = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov}, year = {2019}, journal = {TACL} } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_nq
[ "task_categories:text-retrieval", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:48:28+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/nq`", "viewer": false}
2023-01-05T02:48:33+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/nq' The 'beir/nq' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=2,681,468 - 'queries' (i.e., topics); count=3,452 - 'qrels': (relevance assessments); count=4,201 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/nq'\n\nThe 'beir/nq' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=2,681,468\n - 'queries' (i.e., topics); count=3,452\n - 'qrels': (relevance assessments); count=4,201", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/nq'\n\nThe 'beir/nq' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=2,681,468\n - 'queries' (i.e., topics); count=3,452\n - 'qrels': (relevance assessments); count=4,201", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
f5f4a676494bac329c88d060c18c492e2b68808b
# Dataset Card for `beir/quora` The `beir/quora` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/quora). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=522,931 - `queries` (i.e., topics); count=15,000 This dataset is used by: [`beir_quora_dev`](https://huggingface.co/datasets/irds/beir_quora_dev), [`beir_quora_test`](https://huggingface.co/datasets/irds/beir_quora_test) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/beir_quora', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} queries = load_dataset('irds/beir_quora', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_quora
[ "task_categories:text-retrieval", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:48:39+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/quora`", "viewer": false}
2023-01-05T02:48:44+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/quora' The 'beir/quora' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=522,931 - 'queries' (i.e., topics); count=15,000 This dataset is used by: 'beir_quora_dev', 'beir_quora_test' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/quora'\n\nThe 'beir/quora' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=522,931\n - 'queries' (i.e., topics); count=15,000\n\n\nThis dataset is used by: 'beir_quora_dev', 'beir_quora_test'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/quora'\n\nThe 'beir/quora' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=522,931\n - 'queries' (i.e., topics); count=15,000\n\n\nThis dataset is used by: 'beir_quora_dev', 'beir_quora_test'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
669cd6363d77000b48fe48cee3f6379b0688509c
# Dataset Card for `beir/quora/dev` The `beir/quora/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/quora/dev). # Data This dataset provides: - `queries` (i.e., topics); count=5,000 - `qrels`: (relevance assessments); count=7,626 - For `docs`, use [`irds/beir_quora`](https://huggingface.co/datasets/irds/beir_quora) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_quora_dev', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_quora_dev', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_quora_dev
[ "task_categories:text-retrieval", "source_datasets:irds/beir_quora", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:48:50+00:00
{"source_datasets": ["irds/beir_quora"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/quora/dev`", "viewer": false}
2023-01-05T02:48:56+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/beir_quora #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/quora/dev' The 'beir/quora/dev' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=5,000 - 'qrels': (relevance assessments); count=7,626 - For 'docs', use 'irds/beir_quora' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/quora/dev'\n\nThe 'beir/quora/dev' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=5,000\n - 'qrels': (relevance assessments); count=7,626\n\n - For 'docs', use 'irds/beir_quora'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/beir_quora #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/quora/dev'\n\nThe 'beir/quora/dev' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=5,000\n - 'qrels': (relevance assessments); count=7,626\n\n - For 'docs', use 'irds/beir_quora'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
15990d64a421a576b8a8feb20d1032667f4e69e3
# Dataset Card for `beir/quora/test` The `beir/quora/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/quora/test). # Data This dataset provides: - `queries` (i.e., topics); count=10,000 - `qrels`: (relevance assessments); count=15,675 - For `docs`, use [`irds/beir_quora`](https://huggingface.co/datasets/irds/beir_quora) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_quora_test', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_quora_test', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_quora_test
[ "task_categories:text-retrieval", "source_datasets:irds/beir_quora", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:49:01+00:00
{"source_datasets": ["irds/beir_quora"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/quora/test`", "viewer": false}
2023-01-05T02:49:07+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/beir_quora #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/quora/test' The 'beir/quora/test' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=10,000 - 'qrels': (relevance assessments); count=15,675 - For 'docs', use 'irds/beir_quora' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/quora/test'\n\nThe 'beir/quora/test' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=10,000\n - 'qrels': (relevance assessments); count=15,675\n\n - For 'docs', use 'irds/beir_quora'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/beir_quora #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/quora/test'\n\nThe 'beir/quora/test' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=10,000\n - 'qrels': (relevance assessments); count=15,675\n\n - For 'docs', use 'irds/beir_quora'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
ea1662eca271f598ab4682fb594cca266c5e3fec
# Dataset Card for `beir/scifact` The `beir/scifact` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/scifact). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=5,183 - `queries` (i.e., topics); count=1,109 This dataset is used by: [`beir_scifact_test`](https://huggingface.co/datasets/irds/beir_scifact_test), [`beir_scifact_train`](https://huggingface.co/datasets/irds/beir_scifact_train) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/beir_scifact', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'title': ...} queries = load_dataset('irds/beir_scifact', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Wadden2020Scifact, title = "Fact or Fiction: Verifying Scientific Claims", author = "Wadden, David and Lin, Shanchuan and Lo, Kyle and Wang, Lucy Lu and van Zuylen, Madeleine and Cohan, Arman and Hajishirzi, Hannaneh", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-main.609", doi = "10.18653/v1/2020.emnlp-main.609", pages = "7534--7550" } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_scifact
[ "task_categories:text-retrieval", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:49:12+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/scifact`", "viewer": false}
2023-01-05T02:49:18+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/scifact' The 'beir/scifact' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=5,183 - 'queries' (i.e., topics); count=1,109 This dataset is used by: 'beir_scifact_test', 'beir_scifact_train' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/scifact'\n\nThe 'beir/scifact' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=5,183\n - 'queries' (i.e., topics); count=1,109\n\n\nThis dataset is used by: 'beir_scifact_test', 'beir_scifact_train'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/scifact'\n\nThe 'beir/scifact' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=5,183\n - 'queries' (i.e., topics); count=1,109\n\n\nThis dataset is used by: 'beir_scifact_test', 'beir_scifact_train'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
d1088c90e1b9c36d4d39175293c66afe76687927
# Dataset Card for `beir/scifact/test` The `beir/scifact/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/scifact/test). # Data This dataset provides: - `queries` (i.e., topics); count=300 - `qrels`: (relevance assessments); count=339 - For `docs`, use [`irds/beir_scifact`](https://huggingface.co/datasets/irds/beir_scifact) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_scifact_test', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_scifact_test', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Wadden2020Scifact, title = "Fact or Fiction: Verifying Scientific Claims", author = "Wadden, David and Lin, Shanchuan and Lo, Kyle and Wang, Lucy Lu and van Zuylen, Madeleine and Cohan, Arman and Hajishirzi, Hannaneh", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-main.609", doi = "10.18653/v1/2020.emnlp-main.609", pages = "7534--7550" } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_scifact_test
[ "task_categories:text-retrieval", "source_datasets:irds/beir_scifact", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:49:23+00:00
{"source_datasets": ["irds/beir_scifact"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/scifact/test`", "viewer": false}
2023-01-05T02:49:29+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/beir_scifact #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/scifact/test' The 'beir/scifact/test' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=300 - 'qrels': (relevance assessments); count=339 - For 'docs', use 'irds/beir_scifact' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/scifact/test'\n\nThe 'beir/scifact/test' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=300\n - 'qrels': (relevance assessments); count=339\n\n - For 'docs', use 'irds/beir_scifact'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/beir_scifact #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/scifact/test'\n\nThe 'beir/scifact/test' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=300\n - 'qrels': (relevance assessments); count=339\n\n - For 'docs', use 'irds/beir_scifact'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
992069af405943938783272ba56318c11ecf547f
# Dataset Card for `beir/scifact/train` The `beir/scifact/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/scifact/train). # Data This dataset provides: - `queries` (i.e., topics); count=809 - `qrels`: (relevance assessments); count=919 - For `docs`, use [`irds/beir_scifact`](https://huggingface.co/datasets/irds/beir_scifact) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_scifact_train', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_scifact_train', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Wadden2020Scifact, title = "Fact or Fiction: Verifying Scientific Claims", author = "Wadden, David and Lin, Shanchuan and Lo, Kyle and Wang, Lucy Lu and van Zuylen, Madeleine and Cohan, Arman and Hajishirzi, Hannaneh", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-main.609", doi = "10.18653/v1/2020.emnlp-main.609", pages = "7534--7550" } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_scifact_train
[ "task_categories:text-retrieval", "source_datasets:irds/beir_scifact", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:49:35+00:00
{"source_datasets": ["irds/beir_scifact"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/scifact/train`", "viewer": false}
2023-01-05T02:49:40+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/beir_scifact #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/scifact/train' The 'beir/scifact/train' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=809 - 'qrels': (relevance assessments); count=919 - For 'docs', use 'irds/beir_scifact' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/scifact/train'\n\nThe 'beir/scifact/train' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=809\n - 'qrels': (relevance assessments); count=919\n\n - For 'docs', use 'irds/beir_scifact'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/beir_scifact #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/scifact/train'\n\nThe 'beir/scifact/train' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=809\n - 'qrels': (relevance assessments); count=919\n\n - For 'docs', use 'irds/beir_scifact'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
38c95aff68694ebdaaca5268c37fe8c0c4af76ff
# Dataset Card for `beir/trec-covid` The `beir/trec-covid` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/trec-covid). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=171,332 - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=66,336 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/beir_trec-covid', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'title': ..., 'url': ..., 'pubmed_id': ...} queries = load_dataset('irds/beir_trec-covid', 'queries') for record in queries: record # {'query_id': ..., 'text': ..., 'query': ..., 'narrative': ...} qrels = load_dataset('irds/beir_trec-covid', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Wang2020Cord19, title={CORD-19: The Covid-19 Open Research Dataset}, author={Lucy Lu Wang and Kyle Lo and Yoganand Chandrasekhar and Russell Reas and Jiangjiang Yang and Darrin Eide and K. Funk and Rodney Michael Kinney and Ziyang Liu and W. Merrill and P. Mooney and D. Murdick and Devvret Rishi and Jerry Sheehan and Zhihong Shen and B. Stilson and A. Wade and K. Wang and Christopher Wilhelm and Boya Xie and D. Raymond and Daniel S. Weld and Oren Etzioni and Sebastian Kohlmeier}, journal={ArXiv}, year={2020} } @article{Voorhees2020TrecCovid, title={TREC-COVID: Constructing a Pandemic Information Retrieval Test Collection}, author={E. Voorhees and Tasmeer Alam and Steven Bedrick and Dina Demner-Fushman and W. Hersh and Kyle Lo and Kirk Roberts and I. Soboroff and Lucy Lu Wang}, journal={ArXiv}, year={2020}, volume={abs/2005.04474} } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_trec-covid
[ "task_categories:text-retrieval", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:49:46+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/trec-covid`", "viewer": false}
2023-01-05T02:49:51+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/trec-covid' The 'beir/trec-covid' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=171,332 - 'queries' (i.e., topics); count=50 - 'qrels': (relevance assessments); count=66,336 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/trec-covid'\n\nThe 'beir/trec-covid' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=171,332\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=66,336", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/trec-covid'\n\nThe 'beir/trec-covid' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=171,332\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=66,336", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
965174667841438e58ccfed2e8760ee4ec0aabd3
# Dataset Card for `beir/webis-touche2020` The `beir/webis-touche2020` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/webis-touche2020). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=382,545 - `queries` (i.e., topics); count=49 - `qrels`: (relevance assessments); count=2,962 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/beir_webis-touche2020', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'title': ..., 'stance': ..., 'url': ...} queries = load_dataset('irds/beir_webis-touche2020', 'queries') for record in queries: record # {'query_id': ..., 'text': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/beir_webis-touche2020', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Bondarenko2020Tuche, title={Overview of Touch{\'e} 2020: Argument Retrieval}, author={Alexander Bondarenko and Maik Fr{\"o}be and Meriem Beloucif and Lukas Gienapp and Yamen Ajjour and Alexander Panchenko and Christian Biemann and Benno Stein and Henning Wachsmuth and Martin Potthast and Matthias Hagen}, booktitle={CLEF}, year={2020} } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_webis-touche2020
[ "task_categories:text-retrieval", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:49:57+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/webis-touche2020`", "viewer": false}
2023-01-05T02:50:02+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/webis-touche2020' The 'beir/webis-touche2020' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=382,545 - 'queries' (i.e., topics); count=49 - 'qrels': (relevance assessments); count=2,962 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/webis-touche2020'\n\nThe 'beir/webis-touche2020' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=382,545\n - 'queries' (i.e., topics); count=49\n - 'qrels': (relevance assessments); count=2,962", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/webis-touche2020'\n\nThe 'beir/webis-touche2020' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=382,545\n - 'queries' (i.e., topics); count=49\n - 'qrels': (relevance assessments); count=2,962", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
285d3592952490f2051d0c9f56b9eca74746ec7d
# Dataset Card for `beir/webis-touche2020/v2` The `beir/webis-touche2020/v2` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/webis-touche2020/v2). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=382,545 - `queries` (i.e., topics); count=49 - `qrels`: (relevance assessments); count=2,214 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/beir_webis-touche2020_v2', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'title': ..., 'stance': ..., 'url': ...} queries = load_dataset('irds/beir_webis-touche2020_v2', 'queries') for record in queries: record # {'query_id': ..., 'text': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/beir_webis-touche2020_v2', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Bondarenko2020Tuche, title={Overview of Touch{\'e} 2020: Argument Retrieval}, author={Alexander Bondarenko and Maik Fr{\"o}be and Meriem Beloucif and Lukas Gienapp and Yamen Ajjour and Alexander Panchenko and Christian Biemann and Benno Stein and Henning Wachsmuth and Martin Potthast and Matthias Hagen}, booktitle={CLEF}, year={2020} } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_webis-touche2020_v2
[ "task_categories:text-retrieval", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:50:08+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/webis-touche2020/v2`", "viewer": false}
2023-01-05T02:50:14+00:00
[ "2104.08663" ]
[]
TAGS #task_categories-text-retrieval #arxiv-2104.08663 #region-us
# Dataset Card for 'beir/webis-touche2020/v2' The 'beir/webis-touche2020/v2' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=382,545 - 'queries' (i.e., topics); count=49 - 'qrels': (relevance assessments); count=2,214 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'beir/webis-touche2020/v2'\n\nThe 'beir/webis-touche2020/v2' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=382,545\n - 'queries' (i.e., topics); count=49\n - 'qrels': (relevance assessments); count=2,214", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #arxiv-2104.08663 #region-us \n", "# Dataset Card for 'beir/webis-touche2020/v2'\n\nThe 'beir/webis-touche2020/v2' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=382,545\n - 'queries' (i.e., topics); count=49\n - 'qrels': (relevance assessments); count=2,214", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
b9e4834c6e4a249fa558055e85ccecd578f92681
# Dataset Card for `c4/en-noclean-tr` The `c4/en-noclean-tr` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/c4#c4/en-noclean-tr). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,063,805,381 This dataset is used by: [`c4_en-noclean-tr_trec-misinfo-2021`](https://huggingface.co/datasets/irds/c4_en-noclean-tr_trec-misinfo-2021) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/c4_en-noclean-tr', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'url': ..., 'timestamp': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/c4_en-noclean-tr
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:50:19+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`c4/en-noclean-tr`", "viewer": false}
2023-01-05T02:50:25+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'c4/en-noclean-tr' The 'c4/en-noclean-tr' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=1,063,805,381 This dataset is used by: 'c4_en-noclean-tr_trec-misinfo-2021' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'c4/en-noclean-tr'\n\nThe 'c4/en-noclean-tr' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,063,805,381\n\n\nThis dataset is used by: 'c4_en-noclean-tr_trec-misinfo-2021'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'c4/en-noclean-tr'\n\nThe 'c4/en-noclean-tr' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,063,805,381\n\n\nThis dataset is used by: 'c4_en-noclean-tr_trec-misinfo-2021'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
7953f361b9c65497915f8d74b8dc41aa70c628f9
# Dataset Card for `c4/en-noclean-tr/trec-misinfo-2021` The `c4/en-noclean-tr/trec-misinfo-2021` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/c4#c4/en-noclean-tr/trec-misinfo-2021). # Data This dataset provides: - `queries` (i.e., topics); count=50 - For `docs`, use [`irds/c4_en-noclean-tr`](https://huggingface.co/datasets/irds/c4_en-noclean-tr) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/c4_en-noclean-tr_trec-misinfo-2021', 'queries') for record in queries: record # {'query_id': ..., 'text': ..., 'description': ..., 'narrative': ..., 'disclaimer': ..., 'stance': ..., 'evidence': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
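Since the documents live in the companion corpus, a typical workflow loads only the topics here and keeps them in a small lookup for later joining against a run produced over `irds/c4_en-noclean-tr`. The sketch below is illustrative; the `topics` dictionary is not part of the dataset.

```python
from datasets import load_dataset

# Collect the TREC 2021 Health Misinformation topics keyed by id.
queries = load_dataset('irds/c4_en-noclean-tr_trec-misinfo-2021', 'queries')

topics = {record['query_id']: record for record in queries}

for query_id, topic in list(topics.items())[:3]:
    # Each topic also carries the assessed stance and supporting evidence.
    print(query_id, topic['text'], '| stance:', topic['stance'])
```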
irds/c4_en-noclean-tr_trec-misinfo-2021
[ "task_categories:text-retrieval", "source_datasets:irds/c4_en-noclean-tr", "region:us" ]
2023-01-05T02:50:30+00:00
{"source_datasets": ["irds/c4_en-noclean-tr"], "task_categories": ["text-retrieval"], "pretty_name": "`c4/en-noclean-tr/trec-misinfo-2021`", "viewer": false}
2023-01-05T02:50:36+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/c4_en-noclean-tr #region-us
# Dataset Card for 'c4/en-noclean-tr/trec-misinfo-2021' The 'c4/en-noclean-tr/trec-misinfo-2021' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - For 'docs', use 'irds/c4_en-noclean-tr' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'c4/en-noclean-tr/trec-misinfo-2021'\n\nThe 'c4/en-noclean-tr/trec-misinfo-2021' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n\n - For 'docs', use 'irds/c4_en-noclean-tr'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/c4_en-noclean-tr #region-us \n", "# Dataset Card for 'c4/en-noclean-tr/trec-misinfo-2021'\n\nThe 'c4/en-noclean-tr/trec-misinfo-2021' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n\n - For 'docs', use 'irds/c4_en-noclean-tr'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
7bab217764d968c16a26944abb7cc44985413e83
# Dataset Card for `car/v1.5` The `car/v1.5` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/car#car/v1.5). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=29,678,367 This dataset is used by: [`car_v1.5_trec-y1_auto`](https://huggingface.co/datasets/irds/car_v1.5_trec-y1_auto), [`car_v1.5_trec-y1_manual`](https://huggingface.co/datasets/irds/car_v1.5_trec-y1_manual) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/car_v1.5', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Dietz2017Car, title={{TREC CAR}: A Data Set for Complex Answer Retrieval}, author={Laura Dietz and Ben Gamari}, year={2017}, note={Version 1.5}, url={http://trec-car.cs.unh.edu} } ```
irds/car_v1.5
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:50:41+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`car/v1.5`", "viewer": false}
2023-01-05T02:50:47+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'car/v1.5' The 'car/v1.5' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=29,678,367 This dataset is used by: 'car_v1.5_trec-y1_auto', 'car_v1.5_trec-y1_manual' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'car/v1.5'\n\nThe 'car/v1.5' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=29,678,367\n\n\nThis dataset is used by: 'car_v1.5_trec-y1_auto', 'car_v1.5_trec-y1_manual'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'car/v1.5'\n\nThe 'car/v1.5' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=29,678,367\n\n\nThis dataset is used by: 'car_v1.5_trec-y1_auto', 'car_v1.5_trec-y1_manual'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
7cd52fe0fd77dee77f590679270efc5e7f318772
# Dataset Card for `car/v1.5/trec-y1/auto` The `car/v1.5/trec-y1/auto` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/car#car/v1.5/trec-y1/auto). # Data This dataset provides: - `qrels`: (relevance assessments); count=5,820 - For `docs`, use [`irds/car_v1.5`](https://huggingface.co/datasets/irds/car_v1.5) ## Usage ```python from datasets import load_dataset qrels = load_dataset('irds/car_v1.5_trec-y1_auto', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Dietz2017TrecCar, title={TREC Complex Answer Retrieval Overview.}, author={Dietz, Laura and Verma, Manisha and Radlinski, Filip and Craswell, Nick}, booktitle={TREC}, year={2017} } @article{Dietz2017Car, title={{TREC CAR}: A Data Set for Complex Answer Retrieval}, author={Laura Dietz and Ben Gamari}, year={2017}, note={Version 1.5}, url={http://trec-car.cs.unh.edu} } ```
irds/car_v1.5_trec-y1_auto
[ "task_categories:text-retrieval", "source_datasets:irds/car_v1.5", "region:us" ]
2023-01-05T02:50:52+00:00
{"source_datasets": ["irds/car_v1.5"], "task_categories": ["text-retrieval"], "pretty_name": "`car/v1.5/trec-y1/auto`", "viewer": false}
2023-01-05T02:50:58+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/car_v1.5 #region-us
# Dataset Card for 'car/v1.5/trec-y1/auto' The 'car/v1.5/trec-y1/auto' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'qrels': (relevance assessments); count=5,820 - For 'docs', use 'irds/car_v1.5' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'car/v1.5/trec-y1/auto'\n\nThe 'car/v1.5/trec-y1/auto' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'qrels': (relevance assessments); count=5,820\n\n - For 'docs', use 'irds/car_v1.5'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/car_v1.5 #region-us \n", "# Dataset Card for 'car/v1.5/trec-y1/auto'\n\nThe 'car/v1.5/trec-y1/auto' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'qrels': (relevance assessments); count=5,820\n\n - For 'docs', use 'irds/car_v1.5'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
2daa98372a4412614902f0b178cf68cbe62b016b
# Dataset Card for `car/v1.5/trec-y1/manual` The `car/v1.5/trec-y1/manual` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/car#car/v1.5/trec-y1/manual). # Data This dataset provides: - `qrels`: (relevance assessments); count=29,571 - For `docs`, use [`irds/car_v1.5`](https://huggingface.co/datasets/irds/car_v1.5) ## Usage ```python from datasets import load_dataset qrels = load_dataset('irds/car_v1.5_trec-y1_manual', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Dietz2017TrecCar, title={TREC Complex Answer Retrieval Overview.}, author={Dietz, Laura and Verma, Manisha and Radlinski, Filip and Craswell, Nick}, booktitle={TREC}, year={2017} } @article{Dietz2017Car, title={{TREC CAR}: A Data Set for Complex Answer Retrieval}, author={Laura Dietz and Ben Gamari}, year={2017}, note={Version 1.5}, url={http://trec-car.cs.unh.edu} } ```
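Because the same TREC CAR year-1 topics come with both automatic and manual judgments, it can be instructive to compare their coverage before choosing one for evaluation. The comparison below is only a sketch; the helper name and the overlap statistic are this card's illustration, not part of either dataset.

```python
from collections import defaultdict

from datasets import load_dataset

def judged_docs_by_query(name):
    """Collect the judged doc_ids per query_id for one qrels variant."""
    judged = defaultdict(set)
    for record in load_dataset(name, 'qrels'):
        judged[record['query_id']].add(record['doc_id'])
    return judged

auto = judged_docs_by_query('irds/car_v1.5_trec-y1_auto')
manual = judged_docs_by_query('irds/car_v1.5_trec-y1_manual')

shared = set(auto) & set(manual)
print(len(auto), 'auto-judged queries;', len(manual), 'manually judged;',
      len(shared), 'queries in common')
```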
irds/car_v1.5_trec-y1_manual
[ "task_categories:text-retrieval", "source_datasets:irds/car_v1.5", "region:us" ]
2023-01-05T02:51:03+00:00
{"source_datasets": ["irds/car_v1.5"], "task_categories": ["text-retrieval"], "pretty_name": "`car/v1.5/trec-y1/manual`", "viewer": false}
2023-01-05T02:51:09+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/car_v1.5 #region-us
# Dataset Card for 'car/v1.5/trec-y1/manual' The 'car/v1.5/trec-y1/manual' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'qrels': (relevance assessments); count=29,571 - For 'docs', use 'irds/car_v1.5' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'car/v1.5/trec-y1/manual'\n\nThe 'car/v1.5/trec-y1/manual' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'qrels': (relevance assessments); count=29,571\n\n - For 'docs', use 'irds/car_v1.5'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/car_v1.5 #region-us \n", "# Dataset Card for 'car/v1.5/trec-y1/manual'\n\nThe 'car/v1.5/trec-y1/manual' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'qrels': (relevance assessments); count=29,571\n\n - For 'docs', use 'irds/car_v1.5'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
7c27b92a62438e66c3f826b0f4c47feda87273e1
# Dataset Card for `car/v2.0` The `car/v2.0` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/car#car/v2.0). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=29,794,697 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/car_v2.0', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Dietz2017Car, title={{TREC CAR}: A Data Set for Complex Answer Retrieval}, author={Laura Dietz and Ben Gamari}, year={2017}, note={Version 1.5}, url={http://trec-car.cs.unh.edu} } ```
irds/car_v2.0
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:51:15+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`car/v2.0`", "viewer": false}
2023-01-05T02:51:21+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'car/v2.0' The 'car/v2.0' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=29,794,697 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'car/v2.0'\n\nThe 'car/v2.0' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=29,794,697", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'car/v2.0'\n\nThe 'car/v2.0' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=29,794,697", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
f45cb0e6a107cf67791746772c5dfee581277137
# Dataset Card for `highwire/trec-genomics-2006` The `highwire/trec-genomics-2006` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/highwire#highwire/trec-genomics-2006). # Data This dataset provides: - `queries` (i.e., topics); count=28 - `qrels`: (relevance assessments); count=27,999 ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/highwire_trec-genomics-2006', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/highwire_trec-genomics-2006', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'start': ..., 'length': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Hersh2006TrecGenomics, title={TREC 2006 Genomics Track Overview}, author={William Hersh and Aaron M. Cohen and Phoebe Roberts and Hari Krishna Rekapalli}, booktitle={TREC}, year={2006} } ```
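Unlike most qrels in this collection, these judgments refer to passages, identified by a character `start` offset and `length` inside a HIGHWIRE document. One illustrative way to group the judged spans per (topic, document) pair is sketched below; the variable names are arbitrary.

```python
from collections import defaultdict

from datasets import load_dataset

qrels = load_dataset('irds/highwire_trec-genomics-2006', 'qrels')

# Map (query_id, doc_id) -> judged character spans within that document.
spans = defaultdict(list)
for record in qrels:
    key = (record['query_id'], record['doc_id'])
    spans[key].append((int(record['start']), int(record['length']), int(record['relevance'])))

first_key = next(iter(spans))
print(first_key, spans[first_key][:3])
```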
irds/highwire_trec-genomics-2006
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:51:26+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`highwire/trec-genomics-2006`", "viewer": false}
2023-01-05T02:51:32+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'highwire/trec-genomics-2006' The 'highwire/trec-genomics-2006' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=28 - 'qrels': (relevance assessments); count=27,999 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'highwire/trec-genomics-2006'\n\nThe 'highwire/trec-genomics-2006' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=28\n - 'qrels': (relevance assessments); count=27,999", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'highwire/trec-genomics-2006'\n\nThe 'highwire/trec-genomics-2006' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=28\n - 'qrels': (relevance assessments); count=27,999", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
b7fa4a63416f551e005def97161bbeb1650a0d7a
# Dataset Card for `highwire/trec-genomics-2007` The `highwire/trec-genomics-2007` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/highwire#highwire/trec-genomics-2007). # Data This dataset provides: - `queries` (i.e., topics); count=36 - `qrels`: (relevance assessments); count=35,996 ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/highwire_trec-genomics-2007', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/highwire_trec-genomics-2007', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'start': ..., 'length': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Hersh2007TrecGenomics, title={TREC 2007 Genomics Track Overview}, author={William Hersh and Aaron Cohen and Lynn Ruslen and Phoebe Roberts}, booktitle={TREC}, year={2007} } ```
irds/highwire_trec-genomics-2007
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:51:37+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`highwire/trec-genomics-2007`", "viewer": false}
2023-01-05T02:51:43+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'highwire/trec-genomics-2007' The 'highwire/trec-genomics-2007' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=36 - 'qrels': (relevance assessments); count=35,996 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'highwire/trec-genomics-2007'\n\nThe 'highwire/trec-genomics-2007' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=36\n - 'qrels': (relevance assessments); count=35,996", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'highwire/trec-genomics-2007'\n\nThe 'highwire/trec-genomics-2007' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=36\n - 'qrels': (relevance assessments); count=35,996", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
2e188da49f6876634790bef7ec0bf9bae554e704
# Dataset Card for `medline/2004` The `medline/2004` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/medline#medline/2004). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=3,672,808 This dataset is used by: [`medline_2004_trec-genomics-2004`](https://huggingface.co/datasets/irds/medline_2004_trec-genomics-2004), [`medline_2004_trec-genomics-2005`](https://huggingface.co/datasets/irds/medline_2004_trec-genomics-2005) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/medline_2004', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'abstract': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
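MEDLINE records split their content across `title` and `abstract`, so bag-of-words pipelines commonly concatenate the two before indexing. The snippet below is a minimal sketch of that step; the helper name, the separator, and the three-record preview are arbitrary choices.

```python
from itertools import islice

from datasets import load_dataset

docs = load_dataset('irds/medline_2004', 'docs')

def fulltext(record):
    """Join title and abstract into one indexable string, skipping empty fields."""
    return ' '.join(part for part in (record['title'], record['abstract']) if part)

# Preview a few concatenated records rather than materializing all ~3.7M.
for record in islice(iter(docs), 3):
    print(record['doc_id'], fulltext(record)[:120])
```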
irds/medline_2004
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:51:48+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`medline/2004`", "viewer": false}
2023-01-05T02:51:54+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'medline/2004' The 'medline/2004' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=3,672,808 This dataset is used by: 'medline_2004_trec-genomics-2004', 'medline_2004_trec-genomics-2005' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'medline/2004'\n\nThe 'medline/2004' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=3,672,808\n\n\nThis dataset is used by: 'medline_2004_trec-genomics-2004', 'medline_2004_trec-genomics-2005'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'medline/2004'\n\nThe 'medline/2004' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=3,672,808\n\n\nThis dataset is used by: 'medline_2004_trec-genomics-2004', 'medline_2004_trec-genomics-2005'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
65b12b3859a0e689d5e74e989f11b8d3abc43b8b
# Dataset Card for `medline/2004/trec-genomics-2004` The `medline/2004/trec-genomics-2004` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/medline#medline/2004/trec-genomics-2004). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=8,268 - For `docs`, use [`irds/medline_2004`](https://huggingface.co/datasets/irds/medline_2004) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/medline_2004_trec-genomics-2004', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'need': ..., 'context': ...} qrels = load_dataset('irds/medline_2004_trec-genomics-2004', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Hersh2004TrecGenomics, title={TREC 2004 Genomics Track Overview}, author={William R. Hersh and Ravi Teja Bhuptiraju and Laura Ross and Phoebe Johnson and Aaron M. Cohen and Dale F. Kraemer}, booktitle={TREC}, year={2004} } ```
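The 2004 Genomics topics are structured (`title`, `need`, `context`) rather than free text, so a retrieval pipeline has to decide how to turn them into a query string. A simple concatenation is sketched below as one hedged option; field weighting is out of scope and the function name is arbitrary.

```python
from datasets import load_dataset

queries = load_dataset('irds/medline_2004_trec-genomics-2004', 'queries')

def flatten_topic(record):
    """Concatenate the structured topic fields into a single query string."""
    return ' '.join(part for part in (record['title'], record['need'], record['context']) if part)

topics = {record['query_id']: flatten_topic(record) for record in queries}
print(len(topics), 'topics; example:', next(iter(topics.items())))
```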
irds/medline_2004_trec-genomics-2004
[ "task_categories:text-retrieval", "source_datasets:irds/medline_2004", "region:us" ]
2023-01-05T02:52:00+00:00
{"source_datasets": ["irds/medline_2004"], "task_categories": ["text-retrieval"], "pretty_name": "`medline/2004/trec-genomics-2004`", "viewer": false}
2023-01-05T02:52:05+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/medline_2004 #region-us
# Dataset Card for 'medline/2004/trec-genomics-2004' The 'medline/2004/trec-genomics-2004' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - 'qrels': (relevance assessments); count=8,268 - For 'docs', use 'irds/medline_2004' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'medline/2004/trec-genomics-2004'\n\nThe 'medline/2004/trec-genomics-2004' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=8,268\n\n - For 'docs', use 'irds/medline_2004'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/medline_2004 #region-us \n", "# Dataset Card for 'medline/2004/trec-genomics-2004'\n\nThe 'medline/2004/trec-genomics-2004' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=8,268\n\n - For 'docs', use 'irds/medline_2004'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
199f8f87b8d80b9728773f1fef2020d77b7a8dfb
# Dataset Card for `medline/2004/trec-genomics-2005` The `medline/2004/trec-genomics-2005` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/medline#medline/2004/trec-genomics-2005). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=39,958 - For `docs`, use [`irds/medline_2004`](https://huggingface.co/datasets/irds/medline_2004) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/medline_2004_trec-genomics-2005', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/medline_2004_trec-genomics-2005', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Hersh2005TrecGenomics, title={TREC 2005 Genomics Track Overview}, author={William Hersh and Aaron Cohen and Jianji Yang and Ravi Teja Bhupatiraju and Phoebe Roberts and Marti Hearst}, booktitle={TREC}, year={2007} } ```
irds/medline_2004_trec-genomics-2005
[ "task_categories:text-retrieval", "source_datasets:irds/medline_2004", "region:us" ]
2023-01-05T02:52:11+00:00
{"source_datasets": ["irds/medline_2004"], "task_categories": ["text-retrieval"], "pretty_name": "`medline/2004/trec-genomics-2005`", "viewer": false}
2023-01-05T02:52:16+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/medline_2004 #region-us
# Dataset Card for 'medline/2004/trec-genomics-2005' The 'medline/2004/trec-genomics-2005' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - 'qrels': (relevance assessments); count=39,958 - For 'docs', use 'irds/medline_2004' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'medline/2004/trec-genomics-2005'\n\nThe 'medline/2004/trec-genomics-2005' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=39,958\n\n - For 'docs', use 'irds/medline_2004'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/medline_2004 #region-us \n", "# Dataset Card for 'medline/2004/trec-genomics-2005'\n\nThe 'medline/2004/trec-genomics-2005' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=39,958\n\n - For 'docs', use 'irds/medline_2004'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
e078b41418b36287b42d57a58a768294482516ea
# Dataset Card for `medline/2017` The `medline/2017` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/medline#medline/2017). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=26,740,025 This dataset is used by: [`medline_2017_trec-pm-2017`](https://huggingface.co/datasets/irds/medline_2017_trec-pm-2017), [`medline_2017_trec-pm-2018`](https://huggingface.co/datasets/irds/medline_2017_trec-pm-2018) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/medline_2017', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'abstract': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/medline_2017
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:52:22+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`medline/2017`", "viewer": false}
2023-01-05T02:52:28+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'medline/2017' The 'medline/2017' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=26,740,025 This dataset is used by: 'medline_2017_trec-pm-2017', 'medline_2017_trec-pm-2018' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'medline/2017'\n\nThe 'medline/2017' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=26,740,025\n\n\nThis dataset is used by: 'medline_2017_trec-pm-2017', 'medline_2017_trec-pm-2018'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'medline/2017'\n\nThe 'medline/2017' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=26,740,025\n\n\nThis dataset is used by: 'medline_2017_trec-pm-2017', 'medline_2017_trec-pm-2018'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
d39c00d1f99bfe13b9469976c0413e980f2461a7
# Dataset Card for `medline/2017/trec-pm-2017` The `medline/2017/trec-pm-2017` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/medline#medline/2017/trec-pm-2017). # Data This dataset provides: - `queries` (i.e., topics); count=30 - `qrels`: (relevance assessments); count=22,642 - For `docs`, use [`irds/medline_2017`](https://huggingface.co/datasets/irds/medline_2017) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/medline_2017_trec-pm-2017', 'queries') for record in queries: record # {'query_id': ..., 'disease': ..., 'gene': ..., 'demographic': ..., 'other': ...} qrels = load_dataset('irds/medline_2017_trec-pm-2017', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Roberts2017TrecPm, title={Overview of the TREC 2017 Precision Medicine Track}, author={Kirk Roberts and Dina Demner-Fushman and Ellen M. Voorhees and William R. Hersh and Steven Bedrick and Alexander J. Lazar and Shubham Pant}, booktitle={TREC}, year={2017} } ```
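Precision-medicine topics are fully structured, and a common baseline is to build the keyword query from the disease and gene fields while keeping the demographic for later filtering. The snippet below only sketches that idea; the dictionary name and the field combination are illustrative, not prescribed by the track.

```python
from datasets import load_dataset

queries = load_dataset('irds/medline_2017_trec-pm-2017', 'queries')

keyword_queries = {}
for record in queries:
    # Disease and gene carry most of the retrieval signal; demographics are
    # often applied afterwards as a post-retrieval filter.
    keyword_queries[record['query_id']] = f"{record['disease']} {record['gene']}"

print(next(iter(keyword_queries.items())))
```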
irds/medline_2017_trec-pm-2017
[ "task_categories:text-retrieval", "source_datasets:irds/medline_2017", "region:us" ]
2023-01-05T02:52:33+00:00
{"source_datasets": ["irds/medline_2017"], "task_categories": ["text-retrieval"], "pretty_name": "`medline/2017/trec-pm-2017`", "viewer": false}
2023-01-05T02:52:39+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/medline_2017 #region-us
# Dataset Card for 'medline/2017/trec-pm-2017' The 'medline/2017/trec-pm-2017' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=30 - 'qrels': (relevance assessments); count=22,642 - For 'docs', use 'irds/medline_2017' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'medline/2017/trec-pm-2017'\n\nThe 'medline/2017/trec-pm-2017' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=30\n - 'qrels': (relevance assessments); count=22,642\n\n - For 'docs', use 'irds/medline_2017'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/medline_2017 #region-us \n", "# Dataset Card for 'medline/2017/trec-pm-2017'\n\nThe 'medline/2017/trec-pm-2017' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=30\n - 'qrels': (relevance assessments); count=22,642\n\n - For 'docs', use 'irds/medline_2017'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
45ad7d885b6393bc9706bd1e4407859f2bada08f
# Dataset Card for `medline/2017/trec-pm-2018` The `medline/2017/trec-pm-2018` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/medline#medline/2017/trec-pm-2018). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=22,429 - For `docs`, use [`irds/medline_2017`](https://huggingface.co/datasets/irds/medline_2017) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/medline_2017_trec-pm-2018', 'queries') for record in queries: record # {'query_id': ..., 'disease': ..., 'gene': ..., 'demographic': ...} qrels = load_dataset('irds/medline_2017_trec-pm-2018', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Roberts2018TrecPm, title={Overview of the TREC 2018 Precision Medicine Track}, author={Kirk Roberts and Dina Demner-Fushman and Ellen M. Voorhees and William R. Hersh and Steven Bedrick and Alexander J. Lazar}, booktitle={TREC}, year={2018} } ```
irds/medline_2017_trec-pm-2018
[ "task_categories:text-retrieval", "source_datasets:irds/medline_2017", "region:us" ]
2023-01-05T02:52:44+00:00
{"source_datasets": ["irds/medline_2017"], "task_categories": ["text-retrieval"], "pretty_name": "`medline/2017/trec-pm-2018`", "viewer": false}
2023-01-05T02:52:50+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/medline_2017 #region-us
# Dataset Card for 'medline/2017/trec-pm-2018' The 'medline/2017/trec-pm-2018' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - 'qrels': (relevance assessments); count=22,429 - For 'docs', use 'irds/medline_2017' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'medline/2017/trec-pm-2018'\n\nThe 'medline/2017/trec-pm-2018' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=22,429\n\n - For 'docs', use 'irds/medline_2017'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/medline_2017 #region-us \n", "# Dataset Card for 'medline/2017/trec-pm-2018'\n\nThe 'medline/2017/trec-pm-2018' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=22,429\n\n - For 'docs', use 'irds/medline_2017'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
e534f88bc693fe3fab9457519d1e7790614bc781
# Dataset Card for `clinicaltrials/2017` The `clinicaltrials/2017` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clinicaltrials#clinicaltrials/2017). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=241,006 This dataset is used by: [`clinicaltrials_2017_trec-pm-2017`](https://huggingface.co/datasets/irds/clinicaltrials_2017_trec-pm-2017), [`clinicaltrials_2017_trec-pm-2018`](https://huggingface.co/datasets/irds/clinicaltrials_2017_trec-pm-2018) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/clinicaltrials_2017', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'condition': ..., 'summary': ..., 'detailed_description': ..., 'eligibility': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
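Clinical-trial records carry several long text fields, and which ones to index is a modeling choice. The sketch below merely previews the corpus by scanning a slice of it for trials whose `condition` mentions a keyword; both the keyword and the slice size are arbitrary examples.

```python
from itertools import islice

from datasets import load_dataset

docs = load_dataset('irds/clinicaltrials_2017', 'docs')

keyword = 'melanoma'  # arbitrary example condition
matches = []
for record in islice(iter(docs), 10000):  # scan only a slice of the corpus
    if keyword in (record['condition'] or '').lower():
        matches.append((record['doc_id'], record['title']))

print(len(matches), 'of the first 10,000 trials mention', keyword)
```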
irds/clinicaltrials_2017
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:52:55+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clinicaltrials/2017`", "viewer": false}
2023-01-05T02:53:01+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'clinicaltrials/2017' The 'clinicaltrials/2017' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=241,006 This dataset is used by: 'clinicaltrials_2017_trec-pm-2017', 'clinicaltrials_2017_trec-pm-2018' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clinicaltrials/2017'\n\nThe 'clinicaltrials/2017' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=241,006\n\n\nThis dataset is used by: 'clinicaltrials_2017_trec-pm-2017', 'clinicaltrials_2017_trec-pm-2018'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'clinicaltrials/2017'\n\nThe 'clinicaltrials/2017' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=241,006\n\n\nThis dataset is used by: 'clinicaltrials_2017_trec-pm-2017', 'clinicaltrials_2017_trec-pm-2018'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
859236df2ccb1dddf5b47e4db41c0613d95d52c3
# Dataset Card for `clinicaltrials/2017/trec-pm-2017` The `clinicaltrials/2017/trec-pm-2017` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clinicaltrials#clinicaltrials/2017/trec-pm-2017). # Data This dataset provides: - `queries` (i.e., topics); count=30 - `qrels`: (relevance assessments); count=13,019 - For `docs`, use [`irds/clinicaltrials_2017`](https://huggingface.co/datasets/irds/clinicaltrials_2017) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/clinicaltrials_2017_trec-pm-2017', 'queries') for record in queries: record # {'query_id': ..., 'disease': ..., 'gene': ..., 'demographic': ..., 'other': ...} qrels = load_dataset('irds/clinicaltrials_2017_trec-pm-2017', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Roberts2017TrecPm, title={Overview of the TREC 2017 Precision Medicine Track}, author={Kirk Roberts and Dina Demner-Fushman and Ellen M. Voorhees and William R. Hersh and Steven Bedrick and Alexander J. Lazar and Shubham Pant}, booktitle={TREC}, year={2017} } ```
irds/clinicaltrials_2017_trec-pm-2017
[ "task_categories:text-retrieval", "source_datasets:irds/clinicaltrials_2017", "region:us" ]
2023-01-05T02:53:06+00:00
{"source_datasets": ["irds/clinicaltrials_2017"], "task_categories": ["text-retrieval"], "pretty_name": "`clinicaltrials/2017/trec-pm-2017`", "viewer": false}
2023-01-05T02:53:13+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/clinicaltrials_2017 #region-us
# Dataset Card for 'clinicaltrials/2017/trec-pm-2017' The 'clinicaltrials/2017/trec-pm-2017' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=30 - 'qrels': (relevance assessments); count=13,019 - For 'docs', use 'irds/clinicaltrials_2017' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clinicaltrials/2017/trec-pm-2017'\n\nThe 'clinicaltrials/2017/trec-pm-2017' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=30\n - 'qrels': (relevance assessments); count=13,019\n\n - For 'docs', use 'irds/clinicaltrials_2017'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/clinicaltrials_2017 #region-us \n", "# Dataset Card for 'clinicaltrials/2017/trec-pm-2017'\n\nThe 'clinicaltrials/2017/trec-pm-2017' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=30\n - 'qrels': (relevance assessments); count=13,019\n\n - For 'docs', use 'irds/clinicaltrials_2017'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
f33c2c94bae6745e8db596e5a7f5151a24b341bf
# Dataset Card for `clinicaltrials/2017/trec-pm-2018` The `clinicaltrials/2017/trec-pm-2018` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clinicaltrials#clinicaltrials/2017/trec-pm-2018). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=14,188 - For `docs`, use [`irds/clinicaltrials_2017`](https://huggingface.co/datasets/irds/clinicaltrials_2017) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/clinicaltrials_2017_trec-pm-2018', 'queries') for record in queries: record # {'query_id': ..., 'disease': ..., 'gene': ..., 'demographic': ...} qrels = load_dataset('irds/clinicaltrials_2017_trec-pm-2018', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Roberts2018TrecPm, title={Overview of the TREC 2018 Precision Medicine Track}, author={Kirk Roberts and Dina Demner-Fushman and Ellen M. Voorhees and William R. Hersh and Steven Bedrick and Alexander J. Lazar}, booktitle={TREC}, year={2018} } ```
irds/clinicaltrials_2017_trec-pm-2018
[ "task_categories:text-retrieval", "source_datasets:irds/clinicaltrials_2017", "region:us" ]
2023-01-05T02:53:19+00:00
{"source_datasets": ["irds/clinicaltrials_2017"], "task_categories": ["text-retrieval"], "pretty_name": "`clinicaltrials/2017/trec-pm-2018`", "viewer": false}
2023-01-05T02:53:24+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/clinicaltrials_2017 #region-us
# Dataset Card for 'clinicaltrials/2017/trec-pm-2018' The 'clinicaltrials/2017/trec-pm-2018' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - 'qrels': (relevance assessments); count=14,188 - For 'docs', use 'irds/clinicaltrials_2017' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clinicaltrials/2017/trec-pm-2018'\n\nThe 'clinicaltrials/2017/trec-pm-2018' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=14,188\n\n - For 'docs', use 'irds/clinicaltrials_2017'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/clinicaltrials_2017 #region-us \n", "# Dataset Card for 'clinicaltrials/2017/trec-pm-2018'\n\nThe 'clinicaltrials/2017/trec-pm-2018' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=14,188\n\n - For 'docs', use 'irds/clinicaltrials_2017'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
b6a34cc1fc325fbc3ea0b16c0b34dabd64cb8a41
# Dataset Card for `clinicaltrials/2019` The `clinicaltrials/2019` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clinicaltrials#clinicaltrials/2019). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=306,238 This dataset is used by: [`clinicaltrials_2019_trec-pm-2019`](https://huggingface.co/datasets/irds/clinicaltrials_2019_trec-pm-2019) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/clinicaltrials_2019', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'condition': ..., 'summary': ..., 'detailed_description': ..., 'eligibility': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/clinicaltrials_2019
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:53:30+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clinicaltrials/2019`", "viewer": false}
2023-01-05T02:53:35+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'clinicaltrials/2019' The 'clinicaltrials/2019' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=306,238 This dataset is used by: 'clinicaltrials_2019_trec-pm-2019' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clinicaltrials/2019'\n\nThe 'clinicaltrials/2019' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=306,238\n\n\nThis dataset is used by: 'clinicaltrials_2019_trec-pm-2019'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'clinicaltrials/2019'\n\nThe 'clinicaltrials/2019' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=306,238\n\n\nThis dataset is used by: 'clinicaltrials_2019_trec-pm-2019'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
bed5f92f0ff72e0c08ff8f6f42c2dc3947f7c208
# Dataset Card for `clinicaltrials/2019/trec-pm-2019` The `clinicaltrials/2019/trec-pm-2019` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clinicaltrials#clinicaltrials/2019/trec-pm-2019). # Data This dataset provides: - `queries` (i.e., topics); count=40 - `qrels`: (relevance assessments); count=12,996 - For `docs`, use [`irds/clinicaltrials_2019`](https://huggingface.co/datasets/irds/clinicaltrials_2019) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/clinicaltrials_2019_trec-pm-2019', 'queries') for record in queries: record # {'query_id': ..., 'disease': ..., 'gene': ..., 'demographic': ...} qrels = load_dataset('irds/clinicaltrials_2019_trec-pm-2019', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Roberts2019TrecPm, title={Overview of the TREC 2019 Precision Medicine Track}, author={Kirk Roberts and Dina Demner-Fushman and Ellen Voorhees and William R. Hersh and Steven Bedrick and Alexander J. Lazar and Shubham Pant and Funda Meric-Bernstam}, booktitle={TREC}, year={2019} } ```
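With both topics and judgments available, a common preparation step is to join them into labelled (topic text, trial id, relevance) triples, for example to validate a reranker. The sketch below is a rough illustration; the triple layout and the way the topic fields are concatenated are not prescribed by the dataset.

```python
from datasets import load_dataset

queries = load_dataset('irds/clinicaltrials_2019_trec-pm-2019', 'queries')
qrels = load_dataset('irds/clinicaltrials_2019_trec-pm-2019', 'qrels')

# Index topics by id so each judgment can be joined to its topic fields.
topics = {record['query_id']: record for record in queries}

triples = []
for record in qrels:
    topic = topics[record['query_id']]
    topic_text = f"{topic['disease']} {topic['gene']} {topic['demographic']}"
    triples.append((topic_text, record['doc_id'], int(record['relevance'])))

print(len(triples), 'labelled pairs; example:', triples[0])
```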
irds/clinicaltrials_2019_trec-pm-2019
[ "task_categories:text-retrieval", "source_datasets:irds/clinicaltrials_2019", "region:us" ]
2023-01-05T02:53:41+00:00
{"source_datasets": ["irds/clinicaltrials_2019"], "task_categories": ["text-retrieval"], "pretty_name": "`clinicaltrials/2019/trec-pm-2019`", "viewer": false}
2023-01-05T02:53:47+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/clinicaltrials_2019 #region-us
# Dataset Card for 'clinicaltrials/2019/trec-pm-2019' The 'clinicaltrials/2019/trec-pm-2019' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=40 - 'qrels': (relevance assessments); count=12,996 - For 'docs', use 'irds/clinicaltrials_2019' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clinicaltrials/2019/trec-pm-2019'\n\nThe 'clinicaltrials/2019/trec-pm-2019' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=40\n - 'qrels': (relevance assessments); count=12,996\n\n - For 'docs', use 'irds/clinicaltrials_2019'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/clinicaltrials_2019 #region-us \n", "# Dataset Card for 'clinicaltrials/2019/trec-pm-2019'\n\nThe 'clinicaltrials/2019/trec-pm-2019' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=40\n - 'qrels': (relevance assessments); count=12,996\n\n - For 'docs', use 'irds/clinicaltrials_2019'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
35f26ab1d47913a5224c280cb37a9920ae195ab5
# Dataset Card for `clinicaltrials/2021`

The `clinicaltrials/2021` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clinicaltrials#clinicaltrials/2021).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=375,580

This dataset is used by: [`clinicaltrials_2021_trec-ct-2021`](https://huggingface.co/datasets/irds/clinicaltrials_2021_trec-ct-2021), [`clinicaltrials_2021_trec-ct-2022`](https://huggingface.co/datasets/irds/clinicaltrials_2021_trec-ct-2022)

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/clinicaltrials_2021', 'docs')
for record in docs:
    record # {'doc_id': ..., 'title': ..., 'condition': ..., 'summary': ..., 'detailed_description': ..., 'eligibility': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
irds/clinicaltrials_2021
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:53:52+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clinicaltrials/2021`", "viewer": false}
2023-01-05T02:53:58+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'clinicaltrials/2021' The 'clinicaltrials/2021' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=375,580 This dataset is used by: 'clinicaltrials_2021_trec-ct-2021', 'clinicaltrials_2021_trec-ct-2022' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clinicaltrials/2021'\n\nThe 'clinicaltrials/2021' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=375,580\n\n\nThis dataset is used by: 'clinicaltrials_2021_trec-ct-2021', 'clinicaltrials_2021_trec-ct-2022'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'clinicaltrials/2021'\n\nThe 'clinicaltrials/2021' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=375,580\n\n\nThis dataset is used by: 'clinicaltrials_2021_trec-ct-2021', 'clinicaltrials_2021_trec-ct-2022'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
f742aaa8027bcb04ad78516dabb2ca9a9d827c8d
# Dataset Card for `clinicaltrials/2021/trec-ct-2021`

The `clinicaltrials/2021/trec-ct-2021` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clinicaltrials#clinicaltrials/2021/trec-ct-2021).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=75
 - `qrels`: (relevance assessments); count=35,832

 - For `docs`, use [`irds/clinicaltrials_2021`](https://huggingface.co/datasets/irds/clinicaltrials_2021)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/clinicaltrials_2021_trec-ct-2021', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/clinicaltrials_2021_trec-ct-2021', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
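Since the judged documents live in the separate [`irds/clinicaltrials_2021`](https://huggingface.co/datasets/irds/clinicaltrials_2021) corpus, inspecting the qrels usually requires a join on `doc_id`. The sketch below builds an in-memory lookup of trial titles and attaches them to the judgments; holding the roughly 375k titles in a dict is an assumption that fits this corpus size, and the variable names are illustrative.

```python
from datasets import load_dataset

docs = load_dataset('irds/clinicaltrials_2021', 'docs')                 # the shared corpus
qrels = load_dataset('irds/clinicaltrials_2021_trec-ct-2021', 'qrels')  # as in the usage example

# doc_id -> title lookup over the corpus.
title_by_id = {record['doc_id']: record['title'] for record in docs}

# Attach titles to the relevance judgments; unknown doc_ids fall back to None.
judged = [
    {'query_id': r['query_id'], 'doc_id': r['doc_id'],
     'relevance': r['relevance'], 'title': title_by_id.get(r['doc_id'])}
    for r in qrels
]
```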
irds/clinicaltrials_2021_trec-ct-2021
[ "task_categories:text-retrieval", "source_datasets:irds/clinicaltrials_2021", "region:us" ]
2023-01-05T02:54:03+00:00
{"source_datasets": ["irds/clinicaltrials_2021"], "task_categories": ["text-retrieval"], "pretty_name": "`clinicaltrials/2021/trec-ct-2021`", "viewer": false}
2023-01-05T02:54:09+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/clinicaltrials_2021 #region-us
# Dataset Card for 'clinicaltrials/2021/trec-ct-2021' The 'clinicaltrials/2021/trec-ct-2021' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=75 - 'qrels': (relevance assessments); count=35,832 - For 'docs', use 'irds/clinicaltrials_2021' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clinicaltrials/2021/trec-ct-2021'\n\nThe 'clinicaltrials/2021/trec-ct-2021' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=75\n - 'qrels': (relevance assessments); count=35,832\n\n - For 'docs', use 'irds/clinicaltrials_2021'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/clinicaltrials_2021 #region-us \n", "# Dataset Card for 'clinicaltrials/2021/trec-ct-2021'\n\nThe 'clinicaltrials/2021/trec-ct-2021' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=75\n - 'qrels': (relevance assessments); count=35,832\n\n - For 'docs', use 'irds/clinicaltrials_2021'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
281328b8e7dd6c07a712f26cb7733a616a0e13b2
# Dataset Card for `clinicaltrials/2021/trec-ct-2022`

The `clinicaltrials/2021/trec-ct-2022` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clinicaltrials#clinicaltrials/2021/trec-ct-2022).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=50

 - For `docs`, use [`irds/clinicaltrials_2021`](https://huggingface.co/datasets/irds/clinicaltrials_2021)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/clinicaltrials_2021_trec-ct-2022', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
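If the topics need to be used outside 🤗 Datasets (for example with an external retrieval toolkit), a plain TSV export is often enough. A minimal sketch, reusing the loading call above; the output filename is an arbitrary placeholder:

```python
import csv
from datasets import load_dataset

queries = load_dataset('irds/clinicaltrials_2021_trec-ct-2022', 'queries')  # as in the usage example

# Write the topics to a two-column TSV: query_id <tab> text.
with open('trec-ct-2022-queries.tsv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f, delimiter='\t')
    for record in queries:
        writer.writerow([record['query_id'], record['text']])
```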
irds/clinicaltrials_2021_trec-ct-2022
[ "task_categories:text-retrieval", "source_datasets:irds/clinicaltrials_2021", "region:us" ]
2023-01-05T02:54:14+00:00
{"source_datasets": ["irds/clinicaltrials_2021"], "task_categories": ["text-retrieval"], "pretty_name": "`clinicaltrials/2021/trec-ct-2022`", "viewer": false}
2023-01-05T02:54:20+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/clinicaltrials_2021 #region-us
# Dataset Card for 'clinicaltrials/2021/trec-ct-2022' The 'clinicaltrials/2021/trec-ct-2022' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - For 'docs', use 'irds/clinicaltrials_2021' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clinicaltrials/2021/trec-ct-2022'\n\nThe 'clinicaltrials/2021/trec-ct-2022' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n\n - For 'docs', use 'irds/clinicaltrials_2021'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/clinicaltrials_2021 #region-us \n", "# Dataset Card for 'clinicaltrials/2021/trec-ct-2022'\n\nThe 'clinicaltrials/2021/trec-ct-2022' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n\n - For 'docs', use 'irds/clinicaltrials_2021'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
9ef4fbe912ffaa61a5935000fb3814acc41ec3b9
# Dataset Card for `clueweb09`

The `clueweb09` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=1,040,859,705

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/clueweb09', 'docs')
for record in docs:
    record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
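Given the size of this corpus (about one billion documents), materializing a local copy is rarely practical. If the loader supports streaming, records can be inspected without that copy; the snippet below is a sketch under that assumption and follows the iteration style of the usage example.

```python
from datasets import load_dataset

# Assumes the loader supports streaming; this avoids materializing the full corpus locally.
docs = load_dataset('irds/clueweb09', 'docs', streaming=True)

for i, record in enumerate(docs):
    print(record['doc_id'], record['url'])
    if i >= 2:  # peek at just a few records
        break
```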
irds/clueweb09
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:54:25+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb09`", "viewer": false}
2023-01-05T02:54:31+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'clueweb09' The 'clueweb09' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=1,040,859,705 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb09'\n\nThe 'clueweb09' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,040,859,705", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'clueweb09'\n\nThe 'clueweb09' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,040,859,705", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
86191503ff0c7ec98d9208a5e5c55414fa23698f
# Dataset Card for `clueweb09/ar`

The `clueweb09/ar` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/ar).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=29,192,662

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/clueweb09_ar', 'docs')
for record in docs:
    record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
irds/clueweb09_ar
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:54:37+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb09/ar`", "viewer": false}
2023-01-05T02:54:42+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'clueweb09/ar' The 'clueweb09/ar' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=29,192,662 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb09/ar'\n\nThe 'clueweb09/ar' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=29,192,662", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'clueweb09/ar'\n\nThe 'clueweb09/ar' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=29,192,662", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
8ca4546e4d972f051bad67e7db0ff248fa9b7b88
# Dataset Card for `clueweb09/catb`

The `clueweb09/catb` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/catb).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=50,220,423

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/clueweb09_catb', 'docs')
for record in docs:
    record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
irds/clueweb09_catb
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:54:48+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb09/catb`", "viewer": false}
2023-01-05T02:54:53+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'clueweb09/catb' The 'clueweb09/catb' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=50,220,423 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb09/catb'\n\nThe 'clueweb09/catb' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=50,220,423", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'clueweb09/catb'\n\nThe 'clueweb09/catb' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=50,220,423", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
0b8dc1e0b991a14b27da0abd1c3bd23ffffa1422
# Dataset Card for `clueweb09/de`

The `clueweb09/de` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/de).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=49,814,309

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/clueweb09_de', 'docs')
for record in docs:
    record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
irds/clueweb09_de
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:54:59+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb09/de`", "viewer": false}
2023-01-05T02:55:04+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'clueweb09/de' The 'clueweb09/de' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=49,814,309 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb09/de'\n\nThe 'clueweb09/de' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=49,814,309", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'clueweb09/de'\n\nThe 'clueweb09/de' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=49,814,309", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
2245bbcf914a04c301f534b5e0bd472dce834876
# Dataset Card for `clueweb09/en`

The `clueweb09/en` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/en).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=503,903,810

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/clueweb09_en', 'docs')
for record in docs:
    record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
irds/clueweb09_en
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:55:10+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb09/en`", "viewer": false}
2023-01-05T02:55:16+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'clueweb09/en' The 'clueweb09/en' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=503,903,810 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb09/en'\n\nThe 'clueweb09/en' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=503,903,810", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'clueweb09/en'\n\nThe 'clueweb09/en' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=503,903,810", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
3cf3e9a459381b013d5179f4405a5b95b5f94f86
# Dataset Card for `clueweb09/es`

The `clueweb09/es` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/es).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=79,333,950

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/clueweb09_es', 'docs')
for record in docs:
    record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
irds/clueweb09_es
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:55:21+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb09/es`", "viewer": false}
2023-01-05T02:55:27+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'clueweb09/es' The 'clueweb09/es' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=79,333,950 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb09/es'\n\nThe 'clueweb09/es' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=79,333,950", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'clueweb09/es'\n\nThe 'clueweb09/es' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=79,333,950", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
0787bf65542fbd38df9bebf2a67835e2e6732fa2
# Dataset Card for `clueweb09/fr`

The `clueweb09/fr` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/fr).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=50,883,172

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/clueweb09_fr', 'docs')
for record in docs:
    record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
irds/clueweb09_fr
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:55:32+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb09/fr`", "viewer": false}
2023-01-05T02:55:38+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'clueweb09/fr' The 'clueweb09/fr' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=50,883,172 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb09/fr'\n\nThe 'clueweb09/fr' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=50,883,172", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'clueweb09/fr'\n\nThe 'clueweb09/fr' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=50,883,172", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
50713902b4395333f7958e3065a30f1523887a96
# Dataset Card for `clueweb09/it`

The `clueweb09/it` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/it).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=27,250,729

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/clueweb09_it', 'docs')
for record in docs:
    record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
irds/clueweb09_it
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:55:43+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb09/it`", "viewer": false}
2023-01-05T02:55:49+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'clueweb09/it' The 'clueweb09/it' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=27,250,729 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb09/it'\n\nThe 'clueweb09/it' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=27,250,729", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'clueweb09/it'\n\nThe 'clueweb09/it' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=27,250,729", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
5012e975cecb674f0a3ac6d86b102f837217b243
# Dataset Card for `clueweb09/ja`

The `clueweb09/ja` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/ja).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=67,337,717

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/clueweb09_ja', 'docs')
for record in docs:
    record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
irds/clueweb09_ja
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:55:54+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb09/ja`", "viewer": false}
2023-01-05T02:56:00+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'clueweb09/ja' The 'clueweb09/ja' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=67,337,717 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb09/ja'\n\nThe 'clueweb09/ja' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=67,337,717", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'clueweb09/ja'\n\nThe 'clueweb09/ja' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=67,337,717", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
32bc7d27a661d23ee5fc7d6a2e0b4b7325f58a96
# Dataset Card for `clueweb09/ko`

The `clueweb09/ko` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/ko).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=18,075,141

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/clueweb09_ko', 'docs')
for record in docs:
    record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
irds/clueweb09_ko
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:56:06+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb09/ko`", "viewer": false}
2023-01-05T02:56:11+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'clueweb09/ko' The 'clueweb09/ko' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=18,075,141 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb09/ko'\n\nThe 'clueweb09/ko' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=18,075,141", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'clueweb09/ko'\n\nThe 'clueweb09/ko' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=18,075,141", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
2e14354e1e75d6d135a7bb87cf1e687405c70f25
# Dataset Card for `clueweb09/pt`

The `clueweb09/pt` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/pt).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=37,578,858

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/clueweb09_pt', 'docs')
for record in docs:
    record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
irds/clueweb09_pt
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:56:17+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb09/pt`", "viewer": false}
2023-01-05T02:56:22+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'clueweb09/pt' The 'clueweb09/pt' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=37,578,858 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb09/pt'\n\nThe 'clueweb09/pt' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=37,578,858", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'clueweb09/pt'\n\nThe 'clueweb09/pt' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=37,578,858", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
486ef41199f3503cc6fac9d6ac839d6aa37561ba
# Dataset Card for `clueweb09/zh`

The `clueweb09/zh` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/zh).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=177,489,357

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/clueweb09_zh', 'docs')
for record in docs:
    record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
irds/clueweb09_zh
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:56:28+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb09/zh`", "viewer": false}
2023-01-05T02:56:34+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'clueweb09/zh' The 'clueweb09/zh' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=177,489,357 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb09/zh'\n\nThe 'clueweb09/zh' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=177,489,357", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'clueweb09/zh'\n\nThe 'clueweb09/zh' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=177,489,357", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
d6ab58367a0f5348cf9220927df6c318a7e8dbef
# Dataset Card for `clueweb12`

The `clueweb12` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=733,019,372

This dataset is used by: [`clueweb12_touche-2020-task-2`](https://huggingface.co/datasets/irds/clueweb12_touche-2020-task-2), [`clueweb12_touche-2021-task-2`](https://huggingface.co/datasets/irds/clueweb12_touche-2021-task-2)

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/clueweb12', 'docs')
for record in docs:
    record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
irds/clueweb12
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:56:39+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12`", "viewer": false}
2023-01-05T02:56:45+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'clueweb12' The 'clueweb12' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=733,019,372 This dataset is used by: 'clueweb12_touche-2020-task-2', 'clueweb12_touche-2021-task-2' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb12'\n\nThe 'clueweb12' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=733,019,372\n\n\nThis dataset is used by: 'clueweb12_touche-2020-task-2', 'clueweb12_touche-2021-task-2'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'clueweb12'\n\nThe 'clueweb12' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=733,019,372\n\n\nThis dataset is used by: 'clueweb12_touche-2020-task-2', 'clueweb12_touche-2021-task-2'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
02c36d5a67b664d56ed8c1ce5a7e30167c2ad2c2
# Dataset Card for `clueweb12/b13`

The `clueweb12/b13` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/b13).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=52,343,021

This dataset is used by: [`clueweb12_b13_clef-ehealth`](https://huggingface.co/datasets/irds/clueweb12_b13_clef-ehealth), [`clueweb12_b13_clef-ehealth_cs`](https://huggingface.co/datasets/irds/clueweb12_b13_clef-ehealth_cs), [`clueweb12_b13_clef-ehealth_de`](https://huggingface.co/datasets/irds/clueweb12_b13_clef-ehealth_de), [`clueweb12_b13_clef-ehealth_fr`](https://huggingface.co/datasets/irds/clueweb12_b13_clef-ehealth_fr), [`clueweb12_b13_clef-ehealth_hu`](https://huggingface.co/datasets/irds/clueweb12_b13_clef-ehealth_hu), [`clueweb12_b13_clef-ehealth_pl`](https://huggingface.co/datasets/irds/clueweb12_b13_clef-ehealth_pl), [`clueweb12_b13_clef-ehealth_sv`](https://huggingface.co/datasets/irds/clueweb12_b13_clef-ehealth_sv), [`clueweb12_b13_ntcir-www-1`](https://huggingface.co/datasets/irds/clueweb12_b13_ntcir-www-1), [`clueweb12_b13_ntcir-www-2`](https://huggingface.co/datasets/irds/clueweb12_b13_ntcir-www-2), [`clueweb12_b13_ntcir-www-3`](https://huggingface.co/datasets/irds/clueweb12_b13_ntcir-www-3), [`clueweb12_b13_trec-misinfo-2019`](https://huggingface.co/datasets/irds/clueweb12_b13_trec-misinfo-2019)

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/clueweb12_b13', 'docs')
for record in docs:
    record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
irds/clueweb12_b13
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:56:50+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/b13`", "viewer": false}
2023-01-05T02:56:56+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'clueweb12/b13' The 'clueweb12/b13' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=52,343,021 This dataset is used by: 'clueweb12_b13_clef-ehealth', 'clueweb12_b13_clef-ehealth_cs', 'clueweb12_b13_clef-ehealth_de', 'clueweb12_b13_clef-ehealth_fr', 'clueweb12_b13_clef-ehealth_hu', 'clueweb12_b13_clef-ehealth_pl', 'clueweb12_b13_clef-ehealth_sv', 'clueweb12_b13_ntcir-www-1', 'clueweb12_b13_ntcir-www-2', 'clueweb12_b13_ntcir-www-3', 'clueweb12_b13_trec-misinfo-2019' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb12/b13'\n\nThe 'clueweb12/b13' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=52,343,021\n\n\nThis dataset is used by: 'clueweb12_b13_clef-ehealth', 'clueweb12_b13_clef-ehealth_cs', 'clueweb12_b13_clef-ehealth_de', 'clueweb12_b13_clef-ehealth_fr', 'clueweb12_b13_clef-ehealth_hu', 'clueweb12_b13_clef-ehealth_pl', 'clueweb12_b13_clef-ehealth_sv', 'clueweb12_b13_ntcir-www-1', 'clueweb12_b13_ntcir-www-2', 'clueweb12_b13_ntcir-www-3', 'clueweb12_b13_trec-misinfo-2019'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'clueweb12/b13'\n\nThe 'clueweb12/b13' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=52,343,021\n\n\nThis dataset is used by: 'clueweb12_b13_clef-ehealth', 'clueweb12_b13_clef-ehealth_cs', 'clueweb12_b13_clef-ehealth_de', 'clueweb12_b13_clef-ehealth_fr', 'clueweb12_b13_clef-ehealth_hu', 'clueweb12_b13_clef-ehealth_pl', 'clueweb12_b13_clef-ehealth_sv', 'clueweb12_b13_ntcir-www-1', 'clueweb12_b13_ntcir-www-2', 'clueweb12_b13_ntcir-www-3', 'clueweb12_b13_trec-misinfo-2019'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
b084fbb52a06c66dde5d7443c7809adb87fb3be0
# Dataset Card for `clueweb12/b13/clef-ehealth`

The `clueweb12/b13/clef-ehealth` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/b13/clef-ehealth).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=300
 - `qrels`: (relevance assessments); count=269,232

 - For `docs`, use [`irds/clueweb12_b13`](https://huggingface.co/datasets/irds/clueweb12_b13)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/clueweb12_b13_clef-ehealth', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/clueweb12_b13_clef-ehealth', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'trustworthiness': ..., 'understandability': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.

## Citation Information

```
@inproceedings{Zuccon2016ClefEhealth,
  title={The IR Task at the CLEF eHealth Evaluation Lab 2016: User-centred Health Information Retrieval},
  author={Guido Zuccon and Joao Palotti and Lorraine Goeuriot and Liadh Kelly and Mihai Lupu and Pavel Pecina and Henning M{\"u}ller and Julie Budaher and Anthony Deacon},
  booktitle={CLEF},
  year={2016}
}
@inproceedings{Palotti2017ClefEhealth,
  title={CLEF 2017 Task Overview: The IR Task at the eHealth Evaluation Lab - Evaluating Retrieval Methods for Consumer Health Search},
  author={Joao Palotti and Guido Zuccon and Jimmy and Pavel Pecina and Mihai Lupu and Lorraine Goeuriot and Liadh Kelly and Allan Hanbury},
  booktitle={CLEF},
  year={2017}
}
```
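The qrels here carry three graded judgments per document (relevance, trustworthiness, understandability). A small sketch that keeps only positively judged documents and groups them per topic, reusing the loading call from the usage example; the `> 0` threshold is an illustrative choice:

```python
from collections import defaultdict
from datasets import load_dataset

qrels = load_dataset('irds/clueweb12_b13_clef-ehealth', 'qrels')  # as in the usage example

relevant_by_query = defaultdict(list)
for r in qrels:
    if r['relevance'] > 0:  # illustrative threshold for "positively judged"
        relevant_by_query[r['query_id']].append(r['doc_id'])

# Number of positively judged documents for each of the 300 topics.
counts = {qid: len(doc_ids) for qid, doc_ids in relevant_by_query.items()}
```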
irds/clueweb12_b13_clef-ehealth
[ "task_categories:text-retrieval", "source_datasets:irds/clueweb12_b13", "region:us" ]
2023-01-05T02:57:01+00:00
{"source_datasets": ["irds/clueweb12_b13"], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/b13/clef-ehealth`", "viewer": false}
2023-01-05T02:57:07+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/clueweb12_b13 #region-us
# Dataset Card for 'clueweb12/b13/clef-ehealth' The 'clueweb12/b13/clef-ehealth' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=300 - 'qrels': (relevance assessments); count=269,232 - For 'docs', use 'irds/clueweb12_b13' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb12/b13/clef-ehealth'\n\nThe 'clueweb12/b13/clef-ehealth' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=300\n - 'qrels': (relevance assessments); count=269,232\n\n - For 'docs', use 'irds/clueweb12_b13'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/clueweb12_b13 #region-us \n", "# Dataset Card for 'clueweb12/b13/clef-ehealth'\n\nThe 'clueweb12/b13/clef-ehealth' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=300\n - 'qrels': (relevance assessments); count=269,232\n\n - For 'docs', use 'irds/clueweb12_b13'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
5984a4a5a7754911e0a8e133634142117dd084b9
# Dataset Card for `clueweb12/b13/clef-ehealth/cs`

The `clueweb12/b13/clef-ehealth/cs` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/b13/clef-ehealth/cs).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=300
 - `qrels`: (relevance assessments); count=269,232

 - For `docs`, use [`irds/clueweb12_b13`](https://huggingface.co/datasets/irds/clueweb12_b13)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/clueweb12_b13_clef-ehealth_cs', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/clueweb12_b13_clef-ehealth_cs', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'trustworthiness': ..., 'understandability': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.

## Citation Information

```
@inproceedings{Zuccon2016ClefEhealth,
  title={The IR Task at the CLEF eHealth Evaluation Lab 2016: User-centred Health Information Retrieval},
  author={Guido Zuccon and Joao Palotti and Lorraine Goeuriot and Liadh Kelly and Mihai Lupu and Pavel Pecina and Henning M{\"u}ller and Julie Budaher and Anthony Deacon},
  booktitle={CLEF},
  year={2016}
}
@inproceedings{Palotti2017ClefEhealth,
  title={CLEF 2017 Task Overview: The IR Task at the eHealth Evaluation Lab - Evaluating Retrieval Methods for Consumer Health Search},
  author={Joao Palotti and Guido Zuccon and Jimmy and Pavel Pecina and Mihai Lupu and Lorraine Goeuriot and Liadh Kelly and Allan Hanbury},
  booktitle={CLEF},
  year={2017}
}
```
irds/clueweb12_b13_clef-ehealth_cs
[ "task_categories:text-retrieval", "source_datasets:irds/clueweb12_b13", "region:us" ]
2023-01-05T02:57:12+00:00
{"source_datasets": ["irds/clueweb12_b13"], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/b13/clef-ehealth/cs`", "viewer": false}
2023-01-05T02:57:18+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/clueweb12_b13 #region-us
# Dataset Card for 'clueweb12/b13/clef-ehealth/cs' The 'clueweb12/b13/clef-ehealth/cs' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=300 - 'qrels': (relevance assessments); count=269,232 - For 'docs', use 'irds/clueweb12_b13' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb12/b13/clef-ehealth/cs'\n\nThe 'clueweb12/b13/clef-ehealth/cs' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=300\n - 'qrels': (relevance assessments); count=269,232\n\n - For 'docs', use 'irds/clueweb12_b13'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/clueweb12_b13 #region-us \n", "# Dataset Card for 'clueweb12/b13/clef-ehealth/cs'\n\nThe 'clueweb12/b13/clef-ehealth/cs' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=300\n - 'qrels': (relevance assessments); count=269,232\n\n - For 'docs', use 'irds/clueweb12_b13'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
23968856269a1c839348b013251f42bf21ba5366
# Dataset Card for `clueweb12/b13/clef-ehealth/de`

The `clueweb12/b13/clef-ehealth/de` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/b13/clef-ehealth/de).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=300
 - `qrels`: (relevance assessments); count=269,232

 - For `docs`, use [`irds/clueweb12_b13`](https://huggingface.co/datasets/irds/clueweb12_b13)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/clueweb12_b13_clef-ehealth_de', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/clueweb12_b13_clef-ehealth_de', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'trustworthiness': ..., 'understandability': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.

## Citation Information

```
@inproceedings{Zuccon2016ClefEhealth,
  title={The IR Task at the CLEF eHealth Evaluation Lab 2016: User-centred Health Information Retrieval},
  author={Guido Zuccon and Joao Palotti and Lorraine Goeuriot and Liadh Kelly and Mihai Lupu and Pavel Pecina and Henning M{\"u}ller and Julie Budaher and Anthony Deacon},
  booktitle={CLEF},
  year={2016}
}
@inproceedings{Palotti2017ClefEhealth,
  title={CLEF 2017 Task Overview: The IR Task at the eHealth Evaluation Lab - Evaluating Retrieval Methods for Consumer Health Search},
  author={Joao Palotti and Guido Zuccon and Jimmy and Pavel Pecina and Mihai Lupu and Lorraine Goeuriot and Liadh Kelly and Allan Hanbury},
  booktitle={CLEF},
  year={2017}
}
```
irds/clueweb12_b13_clef-ehealth_de
[ "task_categories:text-retrieval", "source_datasets:irds/clueweb12_b13", "region:us" ]
2023-01-05T02:57:23+00:00
{"source_datasets": ["irds/clueweb12_b13"], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/b13/clef-ehealth/de`", "viewer": false}
2023-01-05T02:57:29+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/clueweb12_b13 #region-us
# Dataset Card for 'clueweb12/b13/clef-ehealth/de' The 'clueweb12/b13/clef-ehealth/de' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=300 - 'qrels': (relevance assessments); count=269,232 - For 'docs', use 'irds/clueweb12_b13' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb12/b13/clef-ehealth/de'\n\nThe 'clueweb12/b13/clef-ehealth/de' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=300\n - 'qrels': (relevance assessments); count=269,232\n\n - For 'docs', use 'irds/clueweb12_b13'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/clueweb12_b13 #region-us \n", "# Dataset Card for 'clueweb12/b13/clef-ehealth/de'\n\nThe 'clueweb12/b13/clef-ehealth/de' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=300\n - 'qrels': (relevance assessments); count=269,232\n\n - For 'docs', use 'irds/clueweb12_b13'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
73be0c1c975b1671acb7dfd7f350953170460fcd
# Dataset Card for `clueweb12/b13/clef-ehealth/fr` The `clueweb12/b13/clef-ehealth/fr` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/b13/clef-ehealth/fr). # Data This dataset provides: - `queries` (i.e., topics); count=300 - `qrels`: (relevance assessments); count=269,232 - For `docs`, use [`irds/clueweb12_b13`](https://huggingface.co/datasets/irds/clueweb12_b13) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/clueweb12_b13_clef-ehealth_fr', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/clueweb12_b13_clef-ehealth_fr', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'trustworthiness': ..., 'understandability': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Zuccon2016ClefEhealth, title={The IR Task at the CLEF eHealth Evaluation Lab 2016: User-centred Health Information Retrieval}, author={Guido Zuccon and Joao Palotti and Lorraine Goeuriot and Liadh Kelly and Mihai Lupu and Pavel Pecina and Henning M{\"u}ller and Julie Budaher and Anthony Deacon}, booktitle={CLEF}, year={2016} } @inproceedings{Palotti2017ClefEhealth, title={CLEF 2017 Task Overview: The IR Task at the eHealth Evaluation Lab - Evaluating Retrieval Methods for Consumer Health Search}, author={Joao Palotti and Guido Zuccon and Jimmy and Pavel Pecina and Mihai Lupu and Lorraine Goeuriot and Liadh Kelly and Allan Hanbury}, booktitle={CLEF}, year={2017} } ```
irds/clueweb12_b13_clef-ehealth_fr
[ "task_categories:text-retrieval", "source_datasets:irds/clueweb12_b13", "region:us" ]
2023-01-05T02:57:34+00:00
{"source_datasets": ["irds/clueweb12_b13"], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/b13/clef-ehealth/fr`", "viewer": false}
2023-01-05T02:57:40+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/clueweb12_b13 #region-us
# Dataset Card for 'clueweb12/b13/clef-ehealth/fr' The 'clueweb12/b13/clef-ehealth/fr' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=300 - 'qrels': (relevance assessments); count=269,232 - For 'docs', use 'irds/clueweb12_b13' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb12/b13/clef-ehealth/fr'\n\nThe 'clueweb12/b13/clef-ehealth/fr' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=300\n - 'qrels': (relevance assessments); count=269,232\n\n - For 'docs', use 'irds/clueweb12_b13'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/clueweb12_b13 #region-us \n", "# Dataset Card for 'clueweb12/b13/clef-ehealth/fr'\n\nThe 'clueweb12/b13/clef-ehealth/fr' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=300\n - 'qrels': (relevance assessments); count=269,232\n\n - For 'docs', use 'irds/clueweb12_b13'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
c976f7948099db1858550574cf508185c6d5702c
# Dataset Card for `clueweb12/b13/clef-ehealth/hu` The `clueweb12/b13/clef-ehealth/hu` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/b13/clef-ehealth/hu). # Data This dataset provides: - `queries` (i.e., topics); count=300 - `qrels`: (relevance assessments); count=269,232 - For `docs`, use [`irds/clueweb12_b13`](https://huggingface.co/datasets/irds/clueweb12_b13) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/clueweb12_b13_clef-ehealth_hu', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/clueweb12_b13_clef-ehealth_hu', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'trustworthiness': ..., 'understandability': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Zuccon2016ClefEhealth, title={The IR Task at the CLEF eHealth Evaluation Lab 2016: User-centred Health Information Retrieval}, author={Guido Zuccon and Joao Palotti and Lorraine Goeuriot and Liadh Kelly and Mihai Lupu and Pavel Pecina and Henning M{\"u}ller and Julie Budaher and Anthony Deacon}, booktitle={CLEF}, year={2016} } @inproceedings{Palotti2017ClefEhealth, title={CLEF 2017 Task Overview: The IR Task at the eHealth Evaluation Lab - Evaluating Retrieval Methods for Consumer Health Search}, author={Joao Palotti and Guido Zuccon and Jimmy and Pavel Pecina and Mihai Lupu and Lorraine Goeuriot and Liadh Kelly and Allan Hanbury}, booktitle={CLEF}, year={2017} } ```
irds/clueweb12_b13_clef-ehealth_hu
[ "task_categories:text-retrieval", "source_datasets:irds/clueweb12_b13", "region:us" ]
2023-01-05T02:57:46+00:00
{"source_datasets": ["irds/clueweb12_b13"], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/b13/clef-ehealth/hu`", "viewer": false}
2023-01-05T02:57:51+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/clueweb12_b13 #region-us
# Dataset Card for 'clueweb12/b13/clef-ehealth/hu' The 'clueweb12/b13/clef-ehealth/hu' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=300 - 'qrels': (relevance assessments); count=269,232 - For 'docs', use 'irds/clueweb12_b13' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb12/b13/clef-ehealth/hu'\n\nThe 'clueweb12/b13/clef-ehealth/hu' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=300\n - 'qrels': (relevance assessments); count=269,232\n\n - For 'docs', use 'irds/clueweb12_b13'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/clueweb12_b13 #region-us \n", "# Dataset Card for 'clueweb12/b13/clef-ehealth/hu'\n\nThe 'clueweb12/b13/clef-ehealth/hu' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=300\n - 'qrels': (relevance assessments); count=269,232\n\n - For 'docs', use 'irds/clueweb12_b13'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
c62848301211e275f01c87396dc445114ec9610d
# Dataset Card for `clueweb12/b13/clef-ehealth/pl` The `clueweb12/b13/clef-ehealth/pl` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/b13/clef-ehealth/pl). # Data This dataset provides: - `queries` (i.e., topics); count=300 - `qrels`: (relevance assessments); count=269,232 - For `docs`, use [`irds/clueweb12_b13`](https://huggingface.co/datasets/irds/clueweb12_b13) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/clueweb12_b13_clef-ehealth_pl', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/clueweb12_b13_clef-ehealth_pl', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'trustworthiness': ..., 'understandability': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Zuccon2016ClefEhealth, title={The IR Task at the CLEF eHealth Evaluation Lab 2016: User-centred Health Information Retrieval}, author={Guido Zuccon and Joao Palotti and Lorraine Goeuriot and Liadh Kelly and Mihai Lupu and Pavel Pecina and Henning M{\"u}ller and Julie Budaher and Anthony Deacon}, booktitle={CLEF}, year={2016} } @inproceedings{Palotti2017ClefEhealth, title={CLEF 2017 Task Overview: The IR Task at the eHealth Evaluation Lab - Evaluating Retrieval Methods for Consumer Health Search}, author={Joao Palotti and Guido Zuccon and Jimmy and Pavel Pecina and Mihai Lupu and Lorraine Goeuriot and Liadh Kelly and Allan Hanbury}, booktitle={CLEF}, year={2017} } ```
irds/clueweb12_b13_clef-ehealth_pl
[ "task_categories:text-retrieval", "source_datasets:irds/clueweb12_b13", "region:us" ]
2023-01-05T02:57:57+00:00
{"source_datasets": ["irds/clueweb12_b13"], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/b13/clef-ehealth/pl`", "viewer": false}
2023-01-05T02:58:02+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/clueweb12_b13 #region-us
# Dataset Card for 'clueweb12/b13/clef-ehealth/pl' The 'clueweb12/b13/clef-ehealth/pl' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=300 - 'qrels': (relevance assessments); count=269,232 - For 'docs', use 'irds/clueweb12_b13' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb12/b13/clef-ehealth/pl'\n\nThe 'clueweb12/b13/clef-ehealth/pl' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=300\n - 'qrels': (relevance assessments); count=269,232\n\n - For 'docs', use 'irds/clueweb12_b13'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/clueweb12_b13 #region-us \n", "# Dataset Card for 'clueweb12/b13/clef-ehealth/pl'\n\nThe 'clueweb12/b13/clef-ehealth/pl' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=300\n - 'qrels': (relevance assessments); count=269,232\n\n - For 'docs', use 'irds/clueweb12_b13'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
1348d2eee351a2d03b2da067b7ce83e46cbeb4a9
# Dataset Card for `clueweb12/b13/clef-ehealth/sv` The `clueweb12/b13/clef-ehealth/sv` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/b13/clef-ehealth/sv). # Data This dataset provides: - `queries` (i.e., topics); count=300 - `qrels`: (relevance assessments); count=269,232 - For `docs`, use [`irds/clueweb12_b13`](https://huggingface.co/datasets/irds/clueweb12_b13) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/clueweb12_b13_clef-ehealth_sv', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/clueweb12_b13_clef-ehealth_sv', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'trustworthiness': ..., 'understandability': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Zuccon2016ClefEhealth, title={The IR Task at the CLEF eHealth Evaluation Lab 2016: User-centred Health Information Retrieval}, author={Guido Zuccon and Joao Palotti and Lorraine Goeuriot and Liadh Kelly and Mihai Lupu and Pavel Pecina and Henning M{\"u}ller and Julie Budaher and Anthony Deacon}, booktitle={CLEF}, year={2016} } @inproceedings{Palotti2017ClefEhealth, title={CLEF 2017 Task Overview: The IR Task at the eHealth Evaluation Lab - Evaluating Retrieval Methods for Consumer Health Search}, author={Joao Palotti and Guido Zuccon and Jimmy and Pavel Pecina and Mihai Lupu and Lorraine Goeuriot and Liadh Kelly and Allan Hanbury}, booktitle={CLEF}, year={2017} } ```
irds/clueweb12_b13_clef-ehealth_sv
[ "task_categories:text-retrieval", "source_datasets:irds/clueweb12_b13", "region:us" ]
2023-01-05T02:58:08+00:00
{"source_datasets": ["irds/clueweb12_b13"], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/b13/clef-ehealth/sv`", "viewer": false}
2023-01-05T02:58:14+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/clueweb12_b13 #region-us
# Dataset Card for 'clueweb12/b13/clef-ehealth/sv' The 'clueweb12/b13/clef-ehealth/sv' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=300 - 'qrels': (relevance assessments); count=269,232 - For 'docs', use 'irds/clueweb12_b13' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb12/b13/clef-ehealth/sv'\n\nThe 'clueweb12/b13/clef-ehealth/sv' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=300\n - 'qrels': (relevance assessments); count=269,232\n\n - For 'docs', use 'irds/clueweb12_b13'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/clueweb12_b13 #region-us \n", "# Dataset Card for 'clueweb12/b13/clef-ehealth/sv'\n\nThe 'clueweb12/b13/clef-ehealth/sv' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=300\n - 'qrels': (relevance assessments); count=269,232\n\n - For 'docs', use 'irds/clueweb12_b13'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
42bbc0b98d162eaac2a7a4250b3691923660e5e8
# Dataset Card for `clueweb12/b13/ntcir-www-1` The `clueweb12/b13/ntcir-www-1` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/b13/ntcir-www-1). # Data This dataset provides: - `queries` (i.e., topics); count=100 - `qrels`: (relevance assessments); count=25,465 - For `docs`, use [`irds/clueweb12_b13`](https://huggingface.co/datasets/irds/clueweb12_b13) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/clueweb12_b13_ntcir-www-1', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/clueweb12_b13_ntcir-www-1', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Luo2017Www1, title={Overview of the NTCIR-13 We Want Web Task}, author={Cheng Luo and Tetsuya Sakai and Yiqun Liu and Zhicheng Dou and Chenyan Xiong and Jingfang Xu}, booktitle={NTCIR}, year={2017} } ```
irds/clueweb12_b13_ntcir-www-1
[ "task_categories:text-retrieval", "source_datasets:irds/clueweb12_b13", "region:us" ]
2023-01-05T02:58:19+00:00
{"source_datasets": ["irds/clueweb12_b13"], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/b13/ntcir-www-1`", "viewer": false}
2023-01-05T02:58:25+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/clueweb12_b13 #region-us
# Dataset Card for 'clueweb12/b13/ntcir-www-1' The 'clueweb12/b13/ntcir-www-1' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=100 - 'qrels': (relevance assessments); count=25,465 - For 'docs', use 'irds/clueweb12_b13' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb12/b13/ntcir-www-1'\n\nThe 'clueweb12/b13/ntcir-www-1' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=100\n - 'qrels': (relevance assessments); count=25,465\n\n - For 'docs', use 'irds/clueweb12_b13'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/clueweb12_b13 #region-us \n", "# Dataset Card for 'clueweb12/b13/ntcir-www-1'\n\nThe 'clueweb12/b13/ntcir-www-1' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=100\n - 'qrels': (relevance assessments); count=25,465\n\n - For 'docs', use 'irds/clueweb12_b13'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
a570c1a88cfafb5ee9dbe9fb0e06171bafc771fe
# Dataset Card for `clueweb12/b13/ntcir-www-2` The `clueweb12/b13/ntcir-www-2` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/b13/ntcir-www-2). # Data This dataset provides: - `queries` (i.e., topics); count=80 - `qrels`: (relevance assessments); count=27,627 - For `docs`, use [`irds/clueweb12_b13`](https://huggingface.co/datasets/irds/clueweb12_b13) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/clueweb12_b13_ntcir-www-2', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ...} qrels = load_dataset('irds/clueweb12_b13_ntcir-www-2', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Mao2018OWww2, title={Overview of the NTCIR-14 We Want Web Task}, author={Jiaxin Mao and Tetsuya Sakai and Cheng Luo and Peng Xiao and Yiqun Liu and Zhicheng Dou}, booktitle={NTCIR}, year={2018} } ```
irds/clueweb12_b13_ntcir-www-2
[ "task_categories:text-retrieval", "source_datasets:irds/clueweb12_b13", "region:us" ]
2023-01-05T02:58:30+00:00
{"source_datasets": ["irds/clueweb12_b13"], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/b13/ntcir-www-2`", "viewer": false}
2023-01-05T02:58:36+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/clueweb12_b13 #region-us
# Dataset Card for 'clueweb12/b13/ntcir-www-2' The 'clueweb12/b13/ntcir-www-2' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=80 - 'qrels': (relevance assessments); count=27,627 - For 'docs', use 'irds/clueweb12_b13' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb12/b13/ntcir-www-2'\n\nThe 'clueweb12/b13/ntcir-www-2' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=80\n - 'qrels': (relevance assessments); count=27,627\n\n - For 'docs', use 'irds/clueweb12_b13'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/clueweb12_b13 #region-us \n", "# Dataset Card for 'clueweb12/b13/ntcir-www-2'\n\nThe 'clueweb12/b13/ntcir-www-2' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=80\n - 'qrels': (relevance assessments); count=27,627\n\n - For 'docs', use 'irds/clueweb12_b13'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
655000b7048ffb4cf5ab507b64880b680a153c38
# Dataset Card for `clueweb12/b13/ntcir-www-3` The `clueweb12/b13/ntcir-www-3` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/b13/ntcir-www-3). # Data This dataset provides: - `queries` (i.e., topics); count=160 - For `docs`, use [`irds/clueweb12_b13`](https://huggingface.co/datasets/irds/clueweb12_b13) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/clueweb12_b13_ntcir-www-3', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/clueweb12_b13_ntcir-www-3
[ "task_categories:text-retrieval", "source_datasets:irds/clueweb12_b13", "region:us" ]
2023-01-05T02:58:41+00:00
{"source_datasets": ["irds/clueweb12_b13"], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/b13/ntcir-www-3`", "viewer": false}
2023-01-05T02:58:47+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/clueweb12_b13 #region-us
# Dataset Card for 'clueweb12/b13/ntcir-www-3' The 'clueweb12/b13/ntcir-www-3' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=160 - For 'docs', use 'irds/clueweb12_b13' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb12/b13/ntcir-www-3'\n\nThe 'clueweb12/b13/ntcir-www-3' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=160\n\n - For 'docs', use 'irds/clueweb12_b13'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/clueweb12_b13 #region-us \n", "# Dataset Card for 'clueweb12/b13/ntcir-www-3'\n\nThe 'clueweb12/b13/ntcir-www-3' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=160\n\n - For 'docs', use 'irds/clueweb12_b13'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
332f7264493aebd076d614cefb8de1abe392dad6
# Dataset Card for `clueweb12/b13/trec-misinfo-2019` The `clueweb12/b13/trec-misinfo-2019` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/b13/trec-misinfo-2019). # Data This dataset provides: - `queries` (i.e., topics); count=51 - `qrels`: (relevance assessments); count=22,859 - For `docs`, use [`irds/clueweb12_b13`](https://huggingface.co/datasets/irds/clueweb12_b13) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/clueweb12_b13_trec-misinfo-2019', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'cochranedoi': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/clueweb12_b13_trec-misinfo-2019', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'effectiveness': ..., 'credibility': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Abualsaud2019TrecDecision, title={Overview of the TREC 2019 Decision Track}, author={Mustafa Abualsaud and Christina Lioma and Maria Maistro and Mark D. Smucker and Guido Zuccon}, booktitle={TREC}, year={2019} } ```
irds/clueweb12_b13_trec-misinfo-2019
[ "task_categories:text-retrieval", "source_datasets:irds/clueweb12_b13", "region:us" ]
2023-01-05T02:58:52+00:00
{"source_datasets": ["irds/clueweb12_b13"], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/b13/trec-misinfo-2019`", "viewer": false}
2023-01-05T02:58:58+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/clueweb12_b13 #region-us
# Dataset Card for 'clueweb12/b13/trec-misinfo-2019' The 'clueweb12/b13/trec-misinfo-2019' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=51 - 'qrels': (relevance assessments); count=22,859 - For 'docs', use 'irds/clueweb12_b13' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb12/b13/trec-misinfo-2019'\n\nThe 'clueweb12/b13/trec-misinfo-2019' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=51\n - 'qrels': (relevance assessments); count=22,859\n\n - For 'docs', use 'irds/clueweb12_b13'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/clueweb12_b13 #region-us \n", "# Dataset Card for 'clueweb12/b13/trec-misinfo-2019'\n\nThe 'clueweb12/b13/trec-misinfo-2019' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=51\n - 'qrels': (relevance assessments); count=22,859\n\n - For 'docs', use 'irds/clueweb12_b13'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
ec0c54e364d446fb49fc4c1655639138f883aee2
# Dataset Card for `codec` The `codec` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/codec#codec). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=729,824 - `queries` (i.e., topics); count=42 - `qrels`: (relevance assessments); count=6,186 This dataset is used by: [`codec_economics`](https://huggingface.co/datasets/irds/codec_economics), [`codec_history`](https://huggingface.co/datasets/irds/codec_history), [`codec_politics`](https://huggingface.co/datasets/irds/codec_politics) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/codec', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ..., 'url': ...} queries = load_dataset('irds/codec', 'queries') for record in queries: record # {'query_id': ..., 'query': ..., 'domain': ..., 'guidelines': ...} qrels = load_dataset('irds/codec', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{mackie2022codec, title={CODEC: Complex Document and Entity Collection}, author={Mackie, Iain and Owoicho, Paul and Gemmell, Carlos and Fischer, Sophie and MacAvaney, Sean and Dalton, Jeffery}, booktitle={Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval}, year={2022} } ```
irds/codec
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:59:04+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`codec`", "viewer": false}
2023-01-05T02:59:09+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'codec' The 'codec' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=729,824 - 'queries' (i.e., topics); count=42 - 'qrels': (relevance assessments); count=6,186 This dataset is used by: 'codec_economics', 'codec_history', 'codec_politics' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'codec'\n\nThe 'codec' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=729,824\n - 'queries' (i.e., topics); count=42\n - 'qrels': (relevance assessments); count=6,186\n\n\nThis dataset is used by: 'codec_economics', 'codec_history', 'codec_politics'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'codec'\n\nThe 'codec' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=729,824\n - 'queries' (i.e., topics); count=42\n - 'qrels': (relevance assessments); count=6,186\n\n\nThis dataset is used by: 'codec_economics', 'codec_history', 'codec_politics'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
a03e874ae88baddeb147cbebe9e7bdf368a68dba
# Dataset Card for `codec/economics` The `codec/economics` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/codec#codec/economics). # Data This dataset provides: - `queries` (i.e., topics); count=14 - `qrels`: (relevance assessments); count=1,970 - For `docs`, use [`irds/codec`](https://huggingface.co/datasets/irds/codec) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/codec_economics', 'queries') for record in queries: record # {'query_id': ..., 'query': ..., 'domain': ..., 'guidelines': ...} qrels = load_dataset('irds/codec_economics', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{mackie2022codec, title={CODEC: Complex Document and Entity Collection}, author={Mackie, Iain and Owoicho, Paul and Gemmell, Carlos and Fischer, Sophie and MacAvaney, Sean and Dalton, Jeffery}, booktitle={Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval}, year={2022} } ```
irds/codec_economics
[ "task_categories:text-retrieval", "source_datasets:irds/codec", "region:us" ]
2023-01-05T02:59:15+00:00
{"source_datasets": ["irds/codec"], "task_categories": ["text-retrieval"], "pretty_name": "`codec/economics`", "viewer": false}
2023-01-05T02:59:20+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/codec #region-us
# Dataset Card for 'codec/economics' The 'codec/economics' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=14 - 'qrels': (relevance assessments); count=1,970 - For 'docs', use 'irds/codec' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'codec/economics'\n\nThe 'codec/economics' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=14\n - 'qrels': (relevance assessments); count=1,970\n\n - For 'docs', use 'irds/codec'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/codec #region-us \n", "# Dataset Card for 'codec/economics'\n\nThe 'codec/economics' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=14\n - 'qrels': (relevance assessments); count=1,970\n\n - For 'docs', use 'irds/codec'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
e6fa05fded809259ab7ecba70180075ef753ff69
# Dataset Card for `codec/history` The `codec/history` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/codec#codec/history). # Data This dataset provides: - `queries` (i.e., topics); count=14 - `qrels`: (relevance assessments); count=2,024 - For `docs`, use [`irds/codec`](https://huggingface.co/datasets/irds/codec) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/codec_history', 'queries') for record in queries: record # {'query_id': ..., 'query': ..., 'domain': ..., 'guidelines': ...} qrels = load_dataset('irds/codec_history', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{mackie2022codec, title={CODEC: Complex Document and Entity Collection}, author={Mackie, Iain and Owoicho, Paul and Gemmell, Carlos and Fischer, Sophie and MacAvaney, Sean and Dalton, Jeffery}, booktitle={Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval}, year={2022} } ```
irds/codec_history
[ "task_categories:text-retrieval", "source_datasets:irds/codec", "region:us" ]
2023-01-05T02:59:26+00:00
{"source_datasets": ["irds/codec"], "task_categories": ["text-retrieval"], "pretty_name": "`codec/history`", "viewer": false}
2023-01-05T02:59:31+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/codec #region-us
# Dataset Card for 'codec/history' The 'codec/history' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=14 - 'qrels': (relevance assessments); count=2,024 - For 'docs', use 'irds/codec' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'codec/history'\n\nThe 'codec/history' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=14\n - 'qrels': (relevance assessments); count=2,024\n\n - For 'docs', use 'irds/codec'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/codec #region-us \n", "# Dataset Card for 'codec/history'\n\nThe 'codec/history' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=14\n - 'qrels': (relevance assessments); count=2,024\n\n - For 'docs', use 'irds/codec'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
9b586aef35f669e67c9a6a125bf79cf4fda5a6a0
# Dataset Card for `codec/politics` The `codec/politics` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/codec#codec/politics). # Data This dataset provides: - `queries` (i.e., topics); count=14 - `qrels`: (relevance assessments); count=2,192 - For `docs`, use [`irds/codec`](https://huggingface.co/datasets/irds/codec) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/codec_politics', 'queries') for record in queries: record # {'query_id': ..., 'query': ..., 'domain': ..., 'guidelines': ...} qrels = load_dataset('irds/codec_politics', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{mackie2022codec, title={CODEC: Complex Document and Entity Collection}, author={Mackie, Iain and Owoicho, Paul and Gemmell, Carlos and Fischer, Sophie and MacAvaney, Sean and Dalton, Jeffery}, booktitle={Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval}, year={2022} } ```
irds/codec_politics
[ "task_categories:text-retrieval", "source_datasets:irds/codec", "region:us" ]
2023-01-05T02:59:37+00:00
{"source_datasets": ["irds/codec"], "task_categories": ["text-retrieval"], "pretty_name": "`codec/politics`", "viewer": false}
2023-01-05T02:59:43+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/codec #region-us
# Dataset Card for 'codec/politics' The 'codec/politics' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=14 - 'qrels': (relevance assessments); count=2,192 - For 'docs', use 'irds/codec' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'codec/politics'\n\nThe 'codec/politics' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=14\n - 'qrels': (relevance assessments); count=2,192\n\n - For 'docs', use 'irds/codec'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/codec #region-us \n", "# Dataset Card for 'codec/politics'\n\nThe 'codec/politics' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=14\n - 'qrels': (relevance assessments); count=2,192\n\n - For 'docs', use 'irds/codec'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
fa7626cf818659caf8e9d0aed32b919c5a012b5a
# Dataset Card for `cord19` The `cord19` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/cord19#cord19). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=192,509 This dataset is used by: [`cord19_trec-covid`](https://huggingface.co/datasets/irds/cord19_trec-covid), [`cord19_trec-covid_round5`](https://huggingface.co/datasets/irds/cord19_trec-covid_round5) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/cord19', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'doi': ..., 'date': ..., 'abstract': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Wang2020Cord19, title={CORD-19: The Covid-19 Open Research Dataset}, author={Lucy Lu Wang and Kyle Lo and Yoganand Chandrasekhar and Russell Reas and Jiangjiang Yang and Darrin Eide and K. Funk and Rodney Michael Kinney and Ziyang Liu and W. Merrill and P. Mooney and D. Murdick and Devvret Rishi and Jerry Sheehan and Zhihong Shen and B. Stilson and A. Wade and K. Wang and Christopher Wilhelm and Boya Xie and D. Raymond and Daniel S. Weld and Oren Etzioni and Sebastian Kohlmeier}, journal={ArXiv}, year={2020} } ```
irds/cord19
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:59:48+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`cord19`", "viewer": false}
2023-01-05T02:59:54+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'cord19' The 'cord19' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=192,509 This dataset is used by: 'cord19_trec-covid', 'cord19_trec-covid_round5' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'cord19'\n\nThe 'cord19' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=192,509\n\n\nThis dataset is used by: 'cord19_trec-covid', 'cord19_trec-covid_round5'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'cord19'\n\nThe 'cord19' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=192,509\n\n\nThis dataset is used by: 'cord19_trec-covid', 'cord19_trec-covid_round5'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
2f3c1421303b0dc01c2f2b449b306939919179d4
# Dataset Card for `cord19/fulltext/trec-covid` The `cord19/fulltext/trec-covid` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/cord19#cord19/fulltext/trec-covid). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=69,318 ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/cord19_fulltext_trec-covid', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/cord19_fulltext_trec-covid', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Voorhees2020TrecCovid, title={TREC-COVID: Constructing a Pandemic Information Retrieval Test Collection}, author={E. Voorhees and Tasmeer Alam and Steven Bedrick and Dina Demner-Fushman and W. Hersh and Kyle Lo and Kirk Roberts and I. Soboroff and Lucy Lu Wang}, journal={ArXiv}, year={2020}, volume={abs/2005.04474} } @article{Wang2020Cord19, title={CORD-19: The Covid-19 Open Research Dataset}, author={Lucy Lu Wang and Kyle Lo and Yoganand Chandrasekhar and Russell Reas and Jiangjiang Yang and Darrin Eide and K. Funk and Rodney Michael Kinney and Ziyang Liu and W. Merrill and P. Mooney and D. Murdick and Devvret Rishi and Jerry Sheehan and Zhihong Shen and B. Stilson and A. Wade and K. Wang and Christopher Wilhelm and Boya Xie and D. Raymond and Daniel S. Weld and Oren Etzioni and Sebastian Kohlmeier}, journal={ArXiv}, year={2020} } ```
irds/cord19_fulltext_trec-covid
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:59:59+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`cord19/fulltext/trec-covid`", "viewer": false}
2023-01-05T03:00:05+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'cord19/fulltext/trec-covid' The 'cord19/fulltext/trec-covid' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - 'qrels': (relevance assessments); count=69,318 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'cord19/fulltext/trec-covid'\n\nThe 'cord19/fulltext/trec-covid' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=69,318", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'cord19/fulltext/trec-covid'\n\nThe 'cord19/fulltext/trec-covid' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=69,318", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
82b3b2bd9156a8cd8ff4470ee3c1ac4071f1207b
# Dataset Card for `cord19/trec-covid` The `cord19/trec-covid` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/cord19#cord19/trec-covid). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=69,318 - For `docs`, use [`irds/cord19`](https://huggingface.co/datasets/irds/cord19) This dataset is used by: [`cord19_trec-covid_round5`](https://huggingface.co/datasets/irds/cord19_trec-covid_round5) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/cord19_trec-covid', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/cord19_trec-covid', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Voorhees2020TrecCovid, title={TREC-COVID: Constructing a Pandemic Information Retrieval Test Collection}, author={E. Voorhees and Tasmeer Alam and Steven Bedrick and Dina Demner-Fushman and W. Hersh and Kyle Lo and Kirk Roberts and I. Soboroff and Lucy Lu Wang}, journal={ArXiv}, year={2020}, volume={abs/2005.04474} } @article{Wang2020Cord19, title={CORD-19: The Covid-19 Open Research Dataset}, author={Lucy Lu Wang and Kyle Lo and Yoganand Chandrasekhar and Russell Reas and Jiangjiang Yang and Darrin Eide and K. Funk and Rodney Michael Kinney and Ziyang Liu and W. Merrill and P. Mooney and D. Murdick and Devvret Rishi and Jerry Sheehan and Zhihong Shen and B. Stilson and A. Wade and K. Wang and Christopher Wilhelm and Boya Xie and D. Raymond and Daniel S. Weld and Oren Etzioni and Sebastian Kohlmeier}, journal={ArXiv}, year={2020} } ```
irds/cord19_trec-covid
[ "task_categories:text-retrieval", "source_datasets:irds/cord19", "region:us" ]
2023-01-05T03:00:10+00:00
{"source_datasets": ["irds/cord19"], "task_categories": ["text-retrieval"], "pretty_name": "`cord19/trec-covid`", "viewer": false}
2023-01-05T03:00:16+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/cord19 #region-us
# Dataset Card for 'cord19/trec-covid' The 'cord19/trec-covid' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - 'qrels': (relevance assessments); count=69,318 - For 'docs', use 'irds/cord19' This dataset is used by: 'cord19_trec-covid_round5' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'cord19/trec-covid'\n\nThe 'cord19/trec-covid' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=69,318\n\n - For 'docs', use 'irds/cord19'\n\nThis dataset is used by: 'cord19_trec-covid_round5'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/cord19 #region-us \n", "# Dataset Card for 'cord19/trec-covid'\n\nThe 'cord19/trec-covid' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=69,318\n\n - For 'docs', use 'irds/cord19'\n\nThis dataset is used by: 'cord19_trec-covid_round5'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
98237ea0a13cd99c51f1be77986f5a9948bd1a07
# Dataset Card for `cord19/trec-covid/round1` The `cord19/trec-covid/round1` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/cord19#cord19/trec-covid/round1). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=51,078 - `queries` (i.e., topics); count=30 - `qrels`: (relevance assessments); count=8,691 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/cord19_trec-covid_round1', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'doi': ..., 'date': ..., 'abstract': ...} queries = load_dataset('irds/cord19_trec-covid_round1', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/cord19_trec-covid_round1', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Voorhees2020TrecCovid, title={TREC-COVID: Constructing a Pandemic Information Retrieval Test Collection}, author={E. Voorhees and Tasmeer Alam and Steven Bedrick and Dina Demner-Fushman and W. Hersh and Kyle Lo and Kirk Roberts and I. Soboroff and Lucy Lu Wang}, journal={ArXiv}, year={2020}, volume={abs/2005.04474} } @article{Wang2020Cord19, title={CORD-19: The Covid-19 Open Research Dataset}, author={Lucy Lu Wang and Kyle Lo and Yoganand Chandrasekhar and Russell Reas and Jiangjiang Yang and Darrin Eide and K. Funk and Rodney Michael Kinney and Ziyang Liu and W. Merrill and P. Mooney and D. Murdick and Devvret Rishi and Jerry Sheehan and Zhihong Shen and B. Stilson and A. Wade and K. Wang and Christopher Wilhelm and Boya Xie and D. Raymond and Daniel S. Weld and Oren Etzioni and Sebastian Kohlmeier}, journal={ArXiv}, year={2020} } ```
irds/cord19_trec-covid_round1
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:00:22+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`cord19/trec-covid/round1`", "viewer": false}
2023-01-05T03:00:27+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'cord19/trec-covid/round1' The 'cord19/trec-covid/round1' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=51,078 - 'queries' (i.e., topics); count=30 - 'qrels': (relevance assessments); count=8,691 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'cord19/trec-covid/round1'\n\nThe 'cord19/trec-covid/round1' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=51,078\n - 'queries' (i.e., topics); count=30\n - 'qrels': (relevance assessments); count=8,691", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'cord19/trec-covid/round1'\n\nThe 'cord19/trec-covid/round1' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=51,078\n - 'queries' (i.e., topics); count=30\n - 'qrels': (relevance assessments); count=8,691", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
bf5a71315325ecbaac8c532e8aa4c5e6f086ebb9
# Dataset Card for `cord19/trec-covid/round2` The `cord19/trec-covid/round2` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/cord19#cord19/trec-covid/round2). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=59,887 - `queries` (i.e., topics); count=35 - `qrels`: (relevance assessments); count=12,037 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/cord19_trec-covid_round2', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'doi': ..., 'date': ..., 'abstract': ...} queries = load_dataset('irds/cord19_trec-covid_round2', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/cord19_trec-covid_round2', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Voorhees2020TrecCovid, title={TREC-COVID: Constructing a Pandemic Information Retrieval Test Collection}, author={E. Voorhees and Tasmeer Alam and Steven Bedrick and Dina Demner-Fushman and W. Hersh and Kyle Lo and Kirk Roberts and I. Soboroff and Lucy Lu Wang}, journal={ArXiv}, year={2020}, volume={abs/2005.04474} } @article{Wang2020Cord19, title={CORD-19: The Covid-19 Open Research Dataset}, author={Lucy Lu Wang and Kyle Lo and Yoganand Chandrasekhar and Russell Reas and Jiangjiang Yang and Darrin Eide and K. Funk and Rodney Michael Kinney and Ziyang Liu and W. Merrill and P. Mooney and D. Murdick and Devvret Rishi and Jerry Sheehan and Zhihong Shen and B. Stilson and A. Wade and K. Wang and Christopher Wilhelm and Boya Xie and D. Raymond and Daniel S. Weld and Oren Etzioni and Sebastian Kohlmeier}, journal={ArXiv}, year={2020} } ```
irds/cord19_trec-covid_round2
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:00:33+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`cord19/trec-covid/round2`", "viewer": false}
2023-01-05T03:00:39+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'cord19/trec-covid/round2' The 'cord19/trec-covid/round2' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=59,887 - 'queries' (i.e., topics); count=35 - 'qrels': (relevance assessments); count=12,037 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'cord19/trec-covid/round2'\n\nThe 'cord19/trec-covid/round2' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=59,887\n - 'queries' (i.e., topics); count=35\n - 'qrels': (relevance assessments); count=12,037", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'cord19/trec-covid/round2'\n\nThe 'cord19/trec-covid/round2' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=59,887\n - 'queries' (i.e., topics); count=35\n - 'qrels': (relevance assessments); count=12,037", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
605c214bddf10bdd557ace09150fa1b1bedd0416
# Dataset Card for `cord19/trec-covid/round3` The `cord19/trec-covid/round3` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/cord19#cord19/trec-covid/round3). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=128,492 - `queries` (i.e., topics); count=40 - `qrels`: (relevance assessments); count=12,713 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/cord19_trec-covid_round3', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'doi': ..., 'date': ..., 'abstract': ...} queries = load_dataset('irds/cord19_trec-covid_round3', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/cord19_trec-covid_round3', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Voorhees2020TrecCovid, title={TREC-COVID: Constructing a Pandemic Information Retrieval Test Collection}, author={E. Voorhees and Tasmeer Alam and Steven Bedrick and Dina Demner-Fushman and W. Hersh and Kyle Lo and Kirk Roberts and I. Soboroff and Lucy Lu Wang}, journal={ArXiv}, year={2020}, volume={abs/2005.04474} } @article{Wang2020Cord19, title={CORD-19: The Covid-19 Open Research Dataset}, author={Lucy Lu Wang and Kyle Lo and Yoganand Chandrasekhar and Russell Reas and Jiangjiang Yang and Darrin Eide and K. Funk and Rodney Michael Kinney and Ziyang Liu and W. Merrill and P. Mooney and D. Murdick and Devvret Rishi and Jerry Sheehan and Zhihong Shen and B. Stilson and A. Wade and K. Wang and Christopher Wilhelm and Boya Xie and D. Raymond and Daniel S. Weld and Oren Etzioni and Sebastian Kohlmeier}, journal={ArXiv}, year={2020} } ```
irds/cord19_trec-covid_round3
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:00:44+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`cord19/trec-covid/round3`", "viewer": false}
2023-01-05T03:00:50+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'cord19/trec-covid/round3' The 'cord19/trec-covid/round3' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=128,492 - 'queries' (i.e., topics); count=40 - 'qrels': (relevance assessments); count=12,713 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'cord19/trec-covid/round3'\n\nThe 'cord19/trec-covid/round3' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=128,492\n - 'queries' (i.e., topics); count=40\n - 'qrels': (relevance assessments); count=12,713", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'cord19/trec-covid/round3'\n\nThe 'cord19/trec-covid/round3' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=128,492\n - 'queries' (i.e., topics); count=40\n - 'qrels': (relevance assessments); count=12,713", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
0cc515cf99545227460067a0cde9920c66ff47e7
# Dataset Card for `cord19/trec-covid/round4` The `cord19/trec-covid/round4` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/cord19#cord19/trec-covid/round4). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=158,274 - `queries` (i.e., topics); count=45 - `qrels`: (relevance assessments); count=13,262 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/cord19_trec-covid_round4', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'doi': ..., 'date': ..., 'abstract': ...} queries = load_dataset('irds/cord19_trec-covid_round4', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/cord19_trec-covid_round4', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Voorhees2020TrecCovid, title={TREC-COVID: Constructing a Pandemic Information Retrieval Test Collection}, author={E. Voorhees and Tasmeer Alam and Steven Bedrick and Dina Demner-Fushman and W. Hersh and Kyle Lo and Kirk Roberts and I. Soboroff and Lucy Lu Wang}, journal={ArXiv}, year={2020}, volume={abs/2005.04474} } @article{Wang2020Cord19, title={CORD-19: The Covid-19 Open Research Dataset}, author={Lucy Lu Wang and Kyle Lo and Yoganand Chandrasekhar and Russell Reas and Jiangjiang Yang and Darrin Eide and K. Funk and Rodney Michael Kinney and Ziyang Liu and W. Merrill and P. Mooney and D. Murdick and Devvret Rishi and Jerry Sheehan and Zhihong Shen and B. Stilson and A. Wade and K. Wang and Christopher Wilhelm and Boya Xie and D. Raymond and Daniel S. Weld and Oren Etzioni and Sebastian Kohlmeier}, journal={ArXiv}, year={2020} } ```
irds/cord19_trec-covid_round4
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:00:55+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`cord19/trec-covid/round4`", "viewer": false}
2023-01-05T03:01:01+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'cord19/trec-covid/round4' The 'cord19/trec-covid/round4' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=158,274 - 'queries' (i.e., topics); count=45 - 'qrels': (relevance assessments); count=13,262 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'cord19/trec-covid/round4'\n\nThe 'cord19/trec-covid/round4' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=158,274\n - 'queries' (i.e., topics); count=45\n - 'qrels': (relevance assessments); count=13,262", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'cord19/trec-covid/round4'\n\nThe 'cord19/trec-covid/round4' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=158,274\n - 'queries' (i.e., topics); count=45\n - 'qrels': (relevance assessments); count=13,262", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
99e35e9abc3b2b3d7a9bb88f3ecb2156add4efdd
# Dataset Card for `cord19/trec-covid/round5` The `cord19/trec-covid/round5` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/cord19#cord19/trec-covid/round5). # Data This dataset provides: - `qrels`: (relevance assessments); count=23,151 - For `docs`, use [`irds/cord19`](https://huggingface.co/datasets/irds/cord19) - For `queries`, use [`irds/cord19_trec-covid`](https://huggingface.co/datasets/irds/cord19_trec-covid) ## Usage ```python from datasets import load_dataset qrels = load_dataset('irds/cord19_trec-covid_round5', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Voorhees2020TrecCovid, title={TREC-COVID: Constructing a Pandemic Information Retrieval Test Collection}, author={E. Voorhees and Tasmeer Alam and Steven Bedrick and Dina Demner-Fushman and W. Hersh and Kyle Lo and Kirk Roberts and I. Soboroff and Lucy Lu Wang}, journal={ArXiv}, year={2020}, volume={abs/2005.04474} } @article{Wang2020Cord19, title={CORD-19: The Covid-19 Open Research Dataset}, author={Lucy Lu Wang and Kyle Lo and Yoganand Chandrasekhar and Russell Reas and Jiangjiang Yang and Darrin Eide and K. Funk and Rodney Michael Kinney and Ziyang Liu and W. Merrill and P. Mooney and D. Murdick and Devvret Rishi and Jerry Sheehan and Zhihong Shen and B. Stilson and A. Wade and K. Wang and Christopher Wilhelm and Boya Xie and D. Raymond and Daniel S. Weld and Oren Etzioni and Sebastian Kohlmeier}, journal={ArXiv}, year={2020} } ```
irds/cord19_trec-covid_round5
[ "task_categories:text-retrieval", "source_datasets:irds/cord19", "source_datasets:irds/cord19_trec-covid", "region:us" ]
2023-01-05T03:01:06+00:00
{"source_datasets": ["irds/cord19", "irds/cord19_trec-covid"], "task_categories": ["text-retrieval"], "pretty_name": "`cord19/trec-covid/round5`", "viewer": false}
2023-01-05T03:01:12+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/cord19 #source_datasets-irds/cord19_trec-covid #region-us
# Dataset Card for 'cord19/trec-covid/round5' The 'cord19/trec-covid/round5' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'qrels': (relevance assessments); count=23,151 - For 'docs', use 'irds/cord19' - For 'queries', use 'irds/cord19_trec-covid' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'cord19/trec-covid/round5'\n\nThe 'cord19/trec-covid/round5' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'qrels': (relevance assessments); count=23,151\n\n - For 'docs', use 'irds/cord19'\n - For 'queries', use 'irds/cord19_trec-covid'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/cord19 #source_datasets-irds/cord19_trec-covid #region-us \n", "# Dataset Card for 'cord19/trec-covid/round5'\n\nThe 'cord19/trec-covid/round5' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'qrels': (relevance assessments); count=23,151\n\n - For 'docs', use 'irds/cord19'\n - For 'queries', use 'irds/cord19_trec-covid'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
e631ffec40e83200210806b7e9c565b26a02f97b
# Dataset Card for `cranfield` The `cranfield` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/cranfield#cranfield). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,400 - `queries` (i.e., topics); count=225 - `qrels`: (relevance assessments); count=1,837 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/cranfield', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ..., 'author': ..., 'bib': ...} queries = load_dataset('irds/cranfield', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/cranfield', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/cranfield
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:01:17+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`cranfield`", "viewer": false}
2023-01-05T03:01:23+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'cranfield' The 'cranfield' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=1,400 - 'queries' (i.e., topics); count=225 - 'qrels': (relevance assessments); count=1,837 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'cranfield'\n\nThe 'cranfield' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,400\n - 'queries' (i.e., topics); count=225\n - 'qrels': (relevance assessments); count=1,837", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'cranfield'\n\nThe 'cranfield' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,400\n - 'queries' (i.e., topics); count=225\n - 'qrels': (relevance assessments); count=1,837", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
52c67d945ef9511d18477d413607ae97b9c88d8e
# Dataset Card for `disks45/nocr` The `disks45/nocr` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/disks45#disks45/nocr). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=528,155 This dataset is used by: [`disks45_nocr_trec-robust-2004`](https://huggingface.co/datasets/irds/disks45_nocr_trec-robust-2004), [`disks45_nocr_trec-robust-2004_fold1`](https://huggingface.co/datasets/irds/disks45_nocr_trec-robust-2004_fold1), [`disks45_nocr_trec-robust-2004_fold2`](https://huggingface.co/datasets/irds/disks45_nocr_trec-robust-2004_fold2), [`disks45_nocr_trec-robust-2004_fold3`](https://huggingface.co/datasets/irds/disks45_nocr_trec-robust-2004_fold3), [`disks45_nocr_trec-robust-2004_fold4`](https://huggingface.co/datasets/irds/disks45_nocr_trec-robust-2004_fold4), [`disks45_nocr_trec-robust-2004_fold5`](https://huggingface.co/datasets/irds/disks45_nocr_trec-robust-2004_fold5), [`disks45_nocr_trec7`](https://huggingface.co/datasets/irds/disks45_nocr_trec7), [`disks45_nocr_trec8`](https://huggingface.co/datasets/irds/disks45_nocr_trec8) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/disks45_nocr', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'body': ..., 'marked_up_doc': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @misc{Voorhees1996Disks45, title = {NIST TREC Disks 4 and 5: Retrieval Test Collections Document Set}, author = {Ellen M. Voorhees}, doi = {10.18434/t47g6m}, year = {1996}, publisher = {National Institute of Standards and Technology} } ```
irds/disks45_nocr
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:01:29+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`disks45/nocr`", "viewer": false}
2023-01-05T03:01:34+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'disks45/nocr' The 'disks45/nocr' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=528,155 This dataset is used by: 'disks45_nocr_trec-robust-2004', 'disks45_nocr_trec-robust-2004_fold1', 'disks45_nocr_trec-robust-2004_fold2', 'disks45_nocr_trec-robust-2004_fold3', 'disks45_nocr_trec-robust-2004_fold4', 'disks45_nocr_trec-robust-2004_fold5', 'disks45_nocr_trec7', 'disks45_nocr_trec8' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'disks45/nocr'\n\nThe 'disks45/nocr' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=528,155\n\n\nThis dataset is used by: 'disks45_nocr_trec-robust-2004', 'disks45_nocr_trec-robust-2004_fold1', 'disks45_nocr_trec-robust-2004_fold2', 'disks45_nocr_trec-robust-2004_fold3', 'disks45_nocr_trec-robust-2004_fold4', 'disks45_nocr_trec-robust-2004_fold5', 'disks45_nocr_trec7', 'disks45_nocr_trec8'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'disks45/nocr'\n\nThe 'disks45/nocr' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=528,155\n\n\nThis dataset is used by: 'disks45_nocr_trec-robust-2004', 'disks45_nocr_trec-robust-2004_fold1', 'disks45_nocr_trec-robust-2004_fold2', 'disks45_nocr_trec-robust-2004_fold3', 'disks45_nocr_trec-robust-2004_fold4', 'disks45_nocr_trec-robust-2004_fold5', 'disks45_nocr_trec7', 'disks45_nocr_trec8'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
82515c250db47eba1d7cfa0fb32e1f81c1c6336d
# Dataset Card for `disks45/nocr/trec-robust-2004` The `disks45/nocr/trec-robust-2004` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/disks45#disks45/nocr/trec-robust-2004). # Data This dataset provides: - `queries` (i.e., topics); count=250 - `qrels`: (relevance assessments); count=311,410 - For `docs`, use [`irds/disks45_nocr`](https://huggingface.co/datasets/irds/disks45_nocr) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/disks45_nocr_trec-robust-2004', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/disks45_nocr_trec-robust-2004', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @misc{Voorhees1996Disks45, title = {NIST TREC Disks 4 and 5: Retrieval Test Collections Document Set}, author = {Ellen M. Voorhees}, doi = {10.18434/t47g6m}, year = {1996}, publisher = {National Institute of Standards and Technology} } @inproceedings{Voorhees2004Robust, title={Overview of the TREC 2004 Robust Retrieval Track}, author={Ellen Voorhees}, booktitle={TREC}, year={2004} } @inproceedings{Huston2014ACO, title={A Comparison of Retrieval Models using Term Dependencies}, author={Samuel Huston and W. Bruce Croft}, booktitle={CIKM}, year={2014} } ```
irds/disks45_nocr_trec-robust-2004
[ "task_categories:text-retrieval", "source_datasets:irds/disks45_nocr", "region:us" ]
2023-01-05T03:01:40+00:00
{"source_datasets": ["irds/disks45_nocr"], "task_categories": ["text-retrieval"], "pretty_name": "`disks45/nocr/trec-robust-2004`", "viewer": false}
2023-01-05T03:01:45+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/disks45_nocr #region-us
# Dataset Card for 'disks45/nocr/trec-robust-2004' The 'disks45/nocr/trec-robust-2004' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=250 - 'qrels': (relevance assessments); count=311,410 - For 'docs', use 'irds/disks45_nocr' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'disks45/nocr/trec-robust-2004'\n\nThe 'disks45/nocr/trec-robust-2004' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=250\n - 'qrels': (relevance assessments); count=311,410\n\n - For 'docs', use 'irds/disks45_nocr'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/disks45_nocr #region-us \n", "# Dataset Card for 'disks45/nocr/trec-robust-2004'\n\nThe 'disks45/nocr/trec-robust-2004' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=250\n - 'qrels': (relevance assessments); count=311,410\n\n - For 'docs', use 'irds/disks45_nocr'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
7a3dffc831270a0ac3f9004f5116e8624cbc339f
# Dataset Card for `disks45/nocr/trec-robust-2004/fold1` The `disks45/nocr/trec-robust-2004/fold1` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/disks45#disks45/nocr/trec-robust-2004/fold1). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=62,789 - For `docs`, use [`irds/disks45_nocr`](https://huggingface.co/datasets/irds/disks45_nocr) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/disks45_nocr_trec-robust-2004_fold1', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/disks45_nocr_trec-robust-2004_fold1', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @misc{Voorhees1996Disks45, title = {NIST TREC Disks 4 and 5: Retrieval Test Collections Document Set}, author = {Ellen M. Voorhees}, doi = {10.18434/t47g6m}, year = {1996}, publisher = {National Institute of Standards and Technology} } @inproceedings{Voorhees2004Robust, title={Overview of the TREC 2004 Robust Retrieval Track}, author={Ellen Voorhees}, booktitle={TREC}, year={2004} } @inproceedings{Huston2014ACO, title={A Comparison of Retrieval Models using Term Dependencies}, author={Samuel Huston and W. Bruce Croft}, booktitle={CIKM}, year={2014} } ```
irds/disks45_nocr_trec-robust-2004_fold1
[ "task_categories:text-retrieval", "source_datasets:irds/disks45_nocr", "region:us" ]
2023-01-05T03:01:51+00:00
{"source_datasets": ["irds/disks45_nocr"], "task_categories": ["text-retrieval"], "pretty_name": "`disks45/nocr/trec-robust-2004/fold1`", "viewer": false}
2023-01-05T03:01:57+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/disks45_nocr #region-us
# Dataset Card for 'disks45/nocr/trec-robust-2004/fold1' The 'disks45/nocr/trec-robust-2004/fold1' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - 'qrels': (relevance assessments); count=62,789 - For 'docs', use 'irds/disks45_nocr' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'disks45/nocr/trec-robust-2004/fold1'\n\nThe 'disks45/nocr/trec-robust-2004/fold1' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=62,789\n\n - For 'docs', use 'irds/disks45_nocr'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/disks45_nocr #region-us \n", "# Dataset Card for 'disks45/nocr/trec-robust-2004/fold1'\n\nThe 'disks45/nocr/trec-robust-2004/fold1' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=62,789\n\n - For 'docs', use 'irds/disks45_nocr'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
2e65f37fb30c2fb303fcdb94de14dee46f659ab1
# Dataset Card for `disks45/nocr/trec-robust-2004/fold2` The `disks45/nocr/trec-robust-2004/fold2` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/disks45#disks45/nocr/trec-robust-2004/fold2). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=63,917 - For `docs`, use [`irds/disks45_nocr`](https://huggingface.co/datasets/irds/disks45_nocr) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/disks45_nocr_trec-robust-2004_fold2', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/disks45_nocr_trec-robust-2004_fold2', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @misc{Voorhees1996Disks45, title = {NIST TREC Disks 4 and 5: Retrieval Test Collections Document Set}, author = {Ellen M. Voorhees}, doi = {10.18434/t47g6m}, year = {1996}, publisher = {National Institute of Standards and Technology} } @inproceedings{Voorhees2004Robust, title={Overview of the TREC 2004 Robust Retrieval Track}, author={Ellen Voorhees}, booktitle={TREC}, year={2004} } @inproceedings{Huston2014ACO, title={A Comparison of Retrieval Models using Term Dependencies}, author={Samuel Huston and W. Bruce Croft}, booktitle={CIKM}, year={2014} } ```
irds/disks45_nocr_trec-robust-2004_fold2
[ "task_categories:text-retrieval", "source_datasets:irds/disks45_nocr", "region:us" ]
2023-01-05T03:02:02+00:00
{"source_datasets": ["irds/disks45_nocr"], "task_categories": ["text-retrieval"], "pretty_name": "`disks45/nocr/trec-robust-2004/fold2`", "viewer": false}
2023-01-05T03:02:08+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/disks45_nocr #region-us
# Dataset Card for 'disks45/nocr/trec-robust-2004/fold2' The 'disks45/nocr/trec-robust-2004/fold2' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - 'qrels': (relevance assessments); count=63,917 - For 'docs', use 'irds/disks45_nocr' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'disks45/nocr/trec-robust-2004/fold2'\n\nThe 'disks45/nocr/trec-robust-2004/fold2' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=63,917\n\n - For 'docs', use 'irds/disks45_nocr'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/disks45_nocr #region-us \n", "# Dataset Card for 'disks45/nocr/trec-robust-2004/fold2'\n\nThe 'disks45/nocr/trec-robust-2004/fold2' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=63,917\n\n - For 'docs', use 'irds/disks45_nocr'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
95802b20acedd71aa9e5d18f027d436e300e9768
# Dataset Card for `disks45/nocr/trec-robust-2004/fold3` The `disks45/nocr/trec-robust-2004/fold3` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/disks45#disks45/nocr/trec-robust-2004/fold3). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=62,901 - For `docs`, use [`irds/disks45_nocr`](https://huggingface.co/datasets/irds/disks45_nocr) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/disks45_nocr_trec-robust-2004_fold3', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/disks45_nocr_trec-robust-2004_fold3', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @misc{Voorhees1996Disks45, title = {NIST TREC Disks 4 and 5: Retrieval Test Collections Document Set}, author = {Ellen M. Voorhees}, doi = {10.18434/t47g6m}, year = {1996}, publisher = {National Institute of Standards and Technology} } @inproceedings{Voorhees2004Robust, title={Overview of the TREC 2004 Robust Retrieval Track}, author={Ellen Voorhees}, booktitle={TREC}, year={2004} } @inproceedings{Huston2014ACO, title={A Comparison of Retrieval Models using Term Dependencies}, author={Samuel Huston and W. Bruce Croft}, booktitle={CIKM}, year={2014} } ```
irds/disks45_nocr_trec-robust-2004_fold3
[ "task_categories:text-retrieval", "source_datasets:irds/disks45_nocr", "region:us" ]
2023-01-05T03:02:13+00:00
{"source_datasets": ["irds/disks45_nocr"], "task_categories": ["text-retrieval"], "pretty_name": "`disks45/nocr/trec-robust-2004/fold3`", "viewer": false}
2023-01-05T03:02:19+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/disks45_nocr #region-us
# Dataset Card for 'disks45/nocr/trec-robust-2004/fold3' The 'disks45/nocr/trec-robust-2004/fold3' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - 'qrels': (relevance assessments); count=62,901 - For 'docs', use 'irds/disks45_nocr' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'disks45/nocr/trec-robust-2004/fold3'\n\nThe 'disks45/nocr/trec-robust-2004/fold3' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=62,901\n\n - For 'docs', use 'irds/disks45_nocr'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/disks45_nocr #region-us \n", "# Dataset Card for 'disks45/nocr/trec-robust-2004/fold3'\n\nThe 'disks45/nocr/trec-robust-2004/fold3' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=62,901\n\n - For 'docs', use 'irds/disks45_nocr'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
17af47a69cde35eeccbf0a620b9bc463852db686
# Dataset Card for `disks45/nocr/trec-robust-2004/fold4` The `disks45/nocr/trec-robust-2004/fold4` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/disks45#disks45/nocr/trec-robust-2004/fold4). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=57,962 - For `docs`, use [`irds/disks45_nocr`](https://huggingface.co/datasets/irds/disks45_nocr) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/disks45_nocr_trec-robust-2004_fold4', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/disks45_nocr_trec-robust-2004_fold4', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @misc{Voorhees1996Disks45, title = {NIST TREC Disks 4 and 5: Retrieval Test Collections Document Set}, author = {Ellen M. Voorhees}, doi = {10.18434/t47g6m}, year = {1996}, publisher = {National Institute of Standards and Technology} } @inproceedings{Voorhees2004Robust, title={Overview of the TREC 2004 Robust Retrieval Track}, author={Ellen Voorhees}, booktitle={TREC}, year={2004} } @inproceedings{Huston2014ACO, title={A Comparison of Retrieval Models using Term Dependencies}, author={Samuel Huston and W. Bruce Croft}, booktitle={CIKM}, year={2014} } ```
irds/disks45_nocr_trec-robust-2004_fold4
[ "task_categories:text-retrieval", "source_datasets:irds/disks45_nocr", "region:us" ]
2023-01-05T03:02:24+00:00
{"source_datasets": ["irds/disks45_nocr"], "task_categories": ["text-retrieval"], "pretty_name": "`disks45/nocr/trec-robust-2004/fold4`", "viewer": false}
2023-01-05T03:02:30+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/disks45_nocr #region-us
# Dataset Card for 'disks45/nocr/trec-robust-2004/fold4' The 'disks45/nocr/trec-robust-2004/fold4' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - 'qrels': (relevance assessments); count=57,962 - For 'docs', use 'irds/disks45_nocr' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'disks45/nocr/trec-robust-2004/fold4'\n\nThe 'disks45/nocr/trec-robust-2004/fold4' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=57,962\n\n - For 'docs', use 'irds/disks45_nocr'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/disks45_nocr #region-us \n", "# Dataset Card for 'disks45/nocr/trec-robust-2004/fold4'\n\nThe 'disks45/nocr/trec-robust-2004/fold4' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=57,962\n\n - For 'docs', use 'irds/disks45_nocr'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
8e5830b11ffd3fbadba98f540791125d6f15d0e0
# Dataset Card for `disks45/nocr/trec-robust-2004/fold5` The `disks45/nocr/trec-robust-2004/fold5` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/disks45#disks45/nocr/trec-robust-2004/fold5). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=63,841 - For `docs`, use [`irds/disks45_nocr`](https://huggingface.co/datasets/irds/disks45_nocr) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/disks45_nocr_trec-robust-2004_fold5', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/disks45_nocr_trec-robust-2004_fold5', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @misc{Voorhees1996Disks45, title = {NIST TREC Disks 4 and 5: Retrieval Test Collections Document Set}, author = {Ellen M. Voorhees}, doi = {10.18434/t47g6m}, year = {1996}, publisher = {National Institute of Standards and Technology} } @inproceedings{Voorhees2004Robust, title={Overview of the TREC 2004 Robust Retrieval Track}, author={Ellen Voorhees}, booktitle={TREC}, year={2004} } @inproceedings{Huston2014ACO, title={A Comparison of Retrieval Models using Term Dependencies}, author={Samuel Huston and W. Bruce Croft}, booktitle={CIKM}, year={2014} } ```
irds/disks45_nocr_trec-robust-2004_fold5
[ "task_categories:text-retrieval", "source_datasets:irds/disks45_nocr", "region:us" ]
2023-01-05T03:02:35+00:00
{"source_datasets": ["irds/disks45_nocr"], "task_categories": ["text-retrieval"], "pretty_name": "`disks45/nocr/trec-robust-2004/fold5`", "viewer": false}
2023-01-05T03:02:41+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/disks45_nocr #region-us
# Dataset Card for 'disks45/nocr/trec-robust-2004/fold5' The 'disks45/nocr/trec-robust-2004/fold5' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - 'qrels': (relevance assessments); count=63,841 - For 'docs', use 'irds/disks45_nocr' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'disks45/nocr/trec-robust-2004/fold5'\n\nThe 'disks45/nocr/trec-robust-2004/fold5' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=63,841\n\n - For 'docs', use 'irds/disks45_nocr'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/disks45_nocr #region-us \n", "# Dataset Card for 'disks45/nocr/trec-robust-2004/fold5'\n\nThe 'disks45/nocr/trec-robust-2004/fold5' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=63,841\n\n - For 'docs', use 'irds/disks45_nocr'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
47d1eb11b8cd84a3e894376f2ac32778ba3db2ee
# Dataset Card for `disks45/nocr/trec7` The `disks45/nocr/trec7` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/disks45#disks45/nocr/trec7). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=80,345 - For `docs`, use [`irds/disks45_nocr`](https://huggingface.co/datasets/irds/disks45_nocr) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/disks45_nocr_trec7', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/disks45_nocr_trec7', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @misc{Voorhees1996Disks45, title = {NIST TREC Disks 4 and 5: Retrieval Test Collections Document Set}, author = {Ellen M. Voorhees}, doi = {10.18434/t47g6m}, year = {1996}, publisher = {National Institute of Standards and Technology} } @inproceedings{Voorhees1998Trec7, title = {Overview of the Seventh Text Retrieval Conference (TREC-7)}, author = {Ellen M. Voorhees and Donna Harman}, year = {1998}, booktitle = {TREC} } ```
irds/disks45_nocr_trec7
[ "task_categories:text-retrieval", "source_datasets:irds/disks45_nocr", "region:us" ]
2023-01-05T03:02:46+00:00
{"source_datasets": ["irds/disks45_nocr"], "task_categories": ["text-retrieval"], "pretty_name": "`disks45/nocr/trec7`", "viewer": false}
2023-01-05T03:02:52+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/disks45_nocr #region-us
# Dataset Card for 'disks45/nocr/trec7' The 'disks45/nocr/trec7' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - 'qrels': (relevance assessments); count=80,345 - For 'docs', use 'irds/disks45_nocr' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'disks45/nocr/trec7'\n\nThe 'disks45/nocr/trec7' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=80,345\n\n - For 'docs', use 'irds/disks45_nocr'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/disks45_nocr #region-us \n", "# Dataset Card for 'disks45/nocr/trec7'\n\nThe 'disks45/nocr/trec7' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=80,345\n\n - For 'docs', use 'irds/disks45_nocr'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
8dd6749f122fa62cac2d269410a61a62a2e52fc7
# Dataset Card for `disks45/nocr/trec8`

The `disks45/nocr/trec8` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/disks45#disks45/nocr/trec8).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=50
 - `qrels`: (relevance assessments); count=86,830

 - For `docs`, use [`irds/disks45_nocr`](https://huggingface.co/datasets/irds/disks45_nocr)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/disks45_nocr_trec8', 'queries')
for record in queries:
    record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...}

qrels = load_dataset('irds/disks45_nocr_trec8', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}

```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.

## Citation Information

```
@misc{Voorhees1996Disks45,
  title = {NIST TREC Disks 4 and 5: Retrieval Test Collections Document Set},
  author = {Ellen M. Voorhees},
  doi = {10.18434/t47g6m},
  year = {1996},
  publisher = {National Institute of Standards and Technology}
}
@inproceedings{Voorhees1999Trec8,
  title = {Overview of the Eighth Text Retrieval Conference (TREC-8)},
  author = {Ellen M. Voorhees and Donna Harman},
  year = {1999},
  booktitle = {TREC}
}
```
irds/disks45_nocr_trec8
[ "task_categories:text-retrieval", "source_datasets:irds/disks45_nocr", "region:us" ]
2023-01-05T03:02:58+00:00
{"source_datasets": ["irds/disks45_nocr"], "task_categories": ["text-retrieval"], "pretty_name": "`disks45/nocr/trec8`", "viewer": false}
2023-01-05T03:03:03+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/disks45_nocr #region-us
# Dataset Card for 'disks45/nocr/trec8' The 'disks45/nocr/trec8' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - 'qrels': (relevance assessments); count=86,830 - For 'docs', use 'irds/disks45_nocr' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'disks45/nocr/trec8'\n\nThe 'disks45/nocr/trec8' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=86,830\n\n - For 'docs', use 'irds/disks45_nocr'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/disks45_nocr #region-us \n", "# Dataset Card for 'disks45/nocr/trec8'\n\nThe 'disks45/nocr/trec8' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=86,830\n\n - For 'docs', use 'irds/disks45_nocr'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
d14722a3b1c5c2d1a2c81f7675c27563055daead
# Dataset Card for `dpr-w100` The `dpr-w100` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/dpr-w100#dpr-w100). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=21,015,324 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/dpr-w100', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'title': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @misc{Karpukhin2020Dpr, title={Dense Passage Retrieval for Open-Domain Question Answering}, author={Vladimir Karpukhin and Barlas Oğuz and Sewon Min and Patrick Lewis and Ledell Wu and Sergey Edunov and Danqi Chen and Wen-tau Yih}, year={2020}, eprint={2004.04906}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
irds/dpr-w100
[ "task_categories:text-retrieval", "language:en", "arxiv:2004.04906", "region:us" ]
2023-01-05T03:03:09+00:00
{"language": ["en"], "source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`dpr-w100`", "viewer": false}
2023-10-23T10:25:33+00:00
[ "2004.04906" ]
[ "en" ]
TAGS #task_categories-text-retrieval #language-English #arxiv-2004.04906 #region-us
# Dataset Card for 'dpr-w100' The 'dpr-w100' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=21,015,324 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'dpr-w100'\n\nThe 'dpr-w100' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=21,015,324", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #language-English #arxiv-2004.04906 #region-us \n", "# Dataset Card for 'dpr-w100'\n\nThe 'dpr-w100' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=21,015,324", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
6695b56cd62bf515d7dfb782c4d484f5b5c9ecf2
# Dataset Card for `codesearchnet` The `codesearchnet` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/codesearchnet#codesearchnet). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=2,070,536 This dataset is used by: [`codesearchnet_challenge`](https://huggingface.co/datasets/irds/codesearchnet_challenge), [`codesearchnet_test`](https://huggingface.co/datasets/irds/codesearchnet_test), [`codesearchnet_train`](https://huggingface.co/datasets/irds/codesearchnet_train), [`codesearchnet_valid`](https://huggingface.co/datasets/irds/codesearchnet_valid) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/codesearchnet', 'docs') for record in docs: record # {'doc_id': ..., 'repo': ..., 'path': ..., 'func_name': ..., 'code': ..., 'language': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Husain2019CodeSearchNet, title={CodeSearchNet Challenge: Evaluating the State of Semantic Code Search}, author={Hamel Husain and Ho-Hsiang Wu and Tiferet Gazit and Miltiadis Allamanis and Marc Brockschmidt}, journal={ArXiv}, year={2019} } ```
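Since each document carries a `language` field, a common first step is to pull out functions for a single programming language. The sketch below collects Python entries from an initial slice of the corpus; the exact spelling of the `language` values (e.g. `python` vs. `Python`) is an assumption, so the comparison is lower-cased and worth verifying against a real record.

```python
from itertools import islice
from datasets import load_dataset

docs = load_dataset('irds/codesearchnet', 'docs')

# Collect Python functions from an initial slice of the corpus; adjust or drop
# the islice cap to scan more of the 2M+ documents.
python_funcs = []
for record in islice(docs, 50_000):
    if record['language'].lower() == 'python':
        python_funcs.append((record['repo'], record['path'], record['func_name']))

print(len(python_funcs), 'Python functions found in the first 50,000 records')
```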
irds/codesearchnet
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:03:20+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`codesearchnet`", "viewer": false}
2023-01-05T03:03:26+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'codesearchnet' The 'codesearchnet' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=2,070,536 This dataset is used by: 'codesearchnet_challenge', 'codesearchnet_test', 'codesearchnet_train', 'codesearchnet_valid' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'codesearchnet'\n\nThe 'codesearchnet' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=2,070,536\n\n\nThis dataset is used by: 'codesearchnet_challenge', 'codesearchnet_test', 'codesearchnet_train', 'codesearchnet_valid'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'codesearchnet'\n\nThe 'codesearchnet' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=2,070,536\n\n\nThis dataset is used by: 'codesearchnet_challenge', 'codesearchnet_test', 'codesearchnet_train', 'codesearchnet_valid'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]