# Dataset Card for `msmarco-document/trec-dl-hard/fold5`

The `msmarco-document/trec-dl-hard/fold5` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/msmarco-document#msmarco-document/trec-dl-hard/fold5).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=10
 - `qrels` (relevance assessments); count=4,114
 - For `docs`, use [`irds/msmarco-document`](https://huggingface.co/datasets/irds/msmarco-document)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/msmarco-document_trec-dl-hard_fold5', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/msmarco-document_trec-dl-hard_fold5', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.

## Citation Information

```
@article{Mackie2021DlHard,
  title={How Deep is your Learning: the DL-HARD Annotated Deep Learning Dataset},
  author={Iain Mackie and Jeffrey Dalton and Andrew Yates},
  journal={ArXiv},
  year={2021},
  volume={abs/2105.07975}
}
@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}
```
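The `qrels` split carries graded relevance labels keyed by (`query_id`, `doc_id`). A minimal sketch of collecting them into a nested lookup for evaluation — using a few hypothetical in-memory records in the documented schema rather than the downloaded split:

```python
from collections import defaultdict

# Hypothetical qrels records, following the schema shown above.
qrels_records = [
    {'query_id': 'q1', 'doc_id': 'd1', 'relevance': 2},
    {'query_id': 'q1', 'doc_id': 'd2', 'relevance': 0},
    {'query_id': 'q2', 'doc_id': 'd3', 'relevance': 1},
]

# Nested lookup query_id -> {doc_id: relevance}, the shape most
# trec_eval-style evaluation tools expect.
qrels = defaultdict(dict)
for record in qrels_records:
    qrels[record['query_id']][record['doc_id']] = record['relevance']

print(qrels['q1'])  # {'d1': 2, 'd2': 0}
```

The same loop works unchanged over the records yielded by `load_dataset` above.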
# Dataset Card for `msmarco-document-v2`

The `msmarco-document-v2` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/msmarco-document-v2#msmarco-document-v2).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=11,959,635

This dataset is used by: [`msmarco-document-v2_trec-dl-2019`](https://huggingface.co/datasets/irds/msmarco-document-v2_trec-dl-2019), [`msmarco-document-v2_trec-dl-2019_judged`](https://huggingface.co/datasets/irds/msmarco-document-v2_trec-dl-2019_judged), [`msmarco-document-v2_trec-dl-2020`](https://huggingface.co/datasets/irds/msmarco-document-v2_trec-dl-2020), [`msmarco-document-v2_trec-dl-2020_judged`](https://huggingface.co/datasets/irds/msmarco-document-v2_trec-dl-2020_judged)

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/msmarco-document-v2', 'docs')
for record in docs:
    record  # {'doc_id': ..., 'url': ..., 'title': ..., 'headings': ..., 'body': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.

## Citation Information

```
@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}
```
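Each `docs` record keeps `title`, `headings`, and `body` as separate fields. A minimal sketch of flattening one record into a single string for indexing — the record here is hypothetical, but follows the field names shown above:

```python
def doc_text(record):
    # Join the non-empty textual fields of a 'docs' record into one
    # string suitable for feeding to an indexer.
    parts = (record.get('title', ''), record.get('headings', ''), record.get('body', ''))
    return '\n'.join(p for p in parts if p)

# Hypothetical record with illustrative values.
record = {'doc_id': 'msmarco_doc_00_0', 'url': 'https://example.com',
          'title': 'Example Title', 'headings': 'Intro', 'body': 'Body text.'}
print(doc_text(record))
```

Keeping the fields separate in the corpus lets you choose a different weighting (e.g. boosting `title`) instead of a flat concatenation.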
# Dataset Card for `msmarco-document-v2/trec-dl-2019`

The `msmarco-document-v2/trec-dl-2019` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/msmarco-document-v2#msmarco-document-v2/trec-dl-2019).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=200
 - `qrels` (relevance assessments); count=13,940
 - For `docs`, use [`irds/msmarco-document-v2`](https://huggingface.co/datasets/irds/msmarco-document-v2)

This dataset is used by: [`msmarco-document-v2_trec-dl-2019_judged`](https://huggingface.co/datasets/irds/msmarco-document-v2_trec-dl-2019_judged)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/msmarco-document-v2_trec-dl-2019', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/msmarco-document-v2_trec-dl-2019', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.

## Citation Information

```
@inproceedings{Craswell2019TrecDl,
  title={Overview of the TREC 2019 deep learning track},
  author={Nick Craswell and Bhaskar Mitra and Emine Yilmaz and Daniel Campos and Ellen Voorhees},
  booktitle={TREC 2019},
  year={2019}
}
@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}
```
# Dataset Card for `msmarco-document-v2/trec-dl-2019/judged`

The `msmarco-document-v2/trec-dl-2019/judged` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/msmarco-document-v2#msmarco-document-v2/trec-dl-2019/judged).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=43
 - For `docs`, use [`irds/msmarco-document-v2`](https://huggingface.co/datasets/irds/msmarco-document-v2)
 - For `qrels`, use [`irds/msmarco-document-v2_trec-dl-2019`](https://huggingface.co/datasets/irds/msmarco-document-v2_trec-dl-2019)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/msmarco-document-v2_trec-dl-2019_judged', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.

## Citation Information

```
@inproceedings{Craswell2019TrecDl,
  title={Overview of the TREC 2019 deep learning track},
  author={Nick Craswell and Bhaskar Mitra and Emine Yilmaz and Daniel Campos and Ellen Voorhees},
  booktitle={TREC 2019},
  year={2019}
}
@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}
```
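A "judged" subset keeps only the topics that received at least one relevance assessment. A minimal sketch of deriving such a subset yourself from the parent split's records — the records here are hypothetical, in the documented schemas:

```python
# Hypothetical queries/qrels records, following the schemas above.
queries = [
    {'query_id': 'q1', 'text': 'first topic'},
    {'query_id': 'q2', 'text': 'unjudged topic'},
]
qrels = [
    {'query_id': 'q1', 'doc_id': 'd1', 'relevance': 1, 'iteration': '0'},
]

# A topic is "judged" if it appears in at least one qrels record.
judged_ids = {r['query_id'] for r in qrels}
judged_queries = [q for q in queries if q['query_id'] in judged_ids]
print(judged_queries)  # [{'query_id': 'q1', 'text': 'first topic'}]
```

This is how 200 submitted TREC DL 2019 topics reduce to the 43 counted here.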
# Dataset Card for `msmarco-document-v2/trec-dl-2020`

The `msmarco-document-v2/trec-dl-2020` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/msmarco-document-v2#msmarco-document-v2/trec-dl-2020).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=200
 - `qrels` (relevance assessments); count=7,942
 - For `docs`, use [`irds/msmarco-document-v2`](https://huggingface.co/datasets/irds/msmarco-document-v2)

This dataset is used by: [`msmarco-document-v2_trec-dl-2020_judged`](https://huggingface.co/datasets/irds/msmarco-document-v2_trec-dl-2020_judged)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/msmarco-document-v2_trec-dl-2020', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/msmarco-document-v2_trec-dl-2020', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.

## Citation Information

```
@inproceedings{Craswell2020TrecDl,
  title={Overview of the TREC 2020 deep learning track},
  author={Nick Craswell and Bhaskar Mitra and Emine Yilmaz and Daniel Campos},
  booktitle={TREC},
  year={2020}
}
@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}
```
# Dataset Card for `msmarco-document-v2/trec-dl-2020/judged`

The `msmarco-document-v2/trec-dl-2020/judged` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/msmarco-document-v2#msmarco-document-v2/trec-dl-2020/judged).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=45
 - For `docs`, use [`irds/msmarco-document-v2`](https://huggingface.co/datasets/irds/msmarco-document-v2)
 - For `qrels`, use [`irds/msmarco-document-v2_trec-dl-2020`](https://huggingface.co/datasets/irds/msmarco-document-v2_trec-dl-2020)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/msmarco-document-v2_trec-dl-2020_judged', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.

## Citation Information

```
@inproceedings{Craswell2020TrecDl,
  title={Overview of the TREC 2020 deep learning track},
  author={Nick Craswell and Bhaskar Mitra and Emine Yilmaz and Daniel Campos},
  booktitle={TREC},
  year={2020}
}
@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}
```
# Dataset Card for `msmarco-qna`

The `msmarco-qna` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/msmarco-qna#msmarco-qna).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=9,048,606

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/msmarco-qna', 'docs')
for record in docs:
    record  # {'doc_id': ..., 'text': ..., 'url': ..., 'msmarco_passage_id': ..., 'msmarco_document_id': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.

## Citation Information

```
@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}
```
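Each record carries `msmarco_passage_id` and `msmarco_document_id` fields that link it back to the MS MARCO passage and document corpora. A minimal sketch of building a passage-id lookup for such a join — the record and its IDs are hypothetical, following the schema shown above:

```python
# Hypothetical msmarco-qna 'docs' records; the cross-reference IDs are
# illustrative, not real corpus IDs.
docs = [
    {'doc_id': '0-0', 'text': 'An answer-candidate passage.',
     'url': 'https://example.com',
     'msmarco_passage_id': '7', 'msmarco_document_id': 'D100'},
]

# Map passage IDs back to qna doc IDs, to join qna records against
# the msmarco-passage corpus.
by_passage = {d['msmarco_passage_id']: d['doc_id'] for d in docs}
print(by_passage['7'])  # 0-0
```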
# Dataset Card for `neumarco/fa`

The `neumarco/fa` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/fa).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=8,841,823

This dataset is used by: [`neumarco_fa_dev`](https://huggingface.co/datasets/irds/neumarco_fa_dev), [`neumarco_fa_dev_judged`](https://huggingface.co/datasets/irds/neumarco_fa_dev_judged), [`neumarco_fa_dev_small`](https://huggingface.co/datasets/irds/neumarco_fa_dev_small), [`neumarco_fa_train`](https://huggingface.co/datasets/irds/neumarco_fa_train), [`neumarco_fa_train_judged`](https://huggingface.co/datasets/irds/neumarco_fa_train_judged)

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/neumarco_fa', 'docs')
for record in docs:
    record  # {'doc_id': ..., 'text': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
# Dataset Card for `neumarco/fa/dev`

The `neumarco/fa/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/fa/dev).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=101,093
 - `qrels` (relevance assessments); count=59,273
 - For `docs`, use [`irds/neumarco_fa`](https://huggingface.co/datasets/irds/neumarco_fa)

This dataset is used by: [`neumarco_fa_dev_judged`](https://huggingface.co/datasets/irds/neumarco_fa_dev_judged)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/neumarco_fa_dev', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/neumarco_fa_dev', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/neumarco_fa_dev
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_fa", "region:us" ]
2023-01-05T03:42:25+00:00
{"source_datasets": ["irds/neumarco_fa"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/fa/dev`", "viewer": false}
2023-01-05T03:42:30+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/neumarco_fa #region-us
# Dataset Card for 'neumarco/fa/dev' The 'neumarco/fa/dev' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=101,093 - 'qrels': (relevance assessments); count=59,273 - For 'docs', use 'irds/neumarco_fa' This dataset is used by: 'neumarco_fa_dev_judged' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'neumarco/fa/dev'\n\nThe 'neumarco/fa/dev' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=101,093\n - 'qrels': (relevance assessments); count=59,273\n\n - For 'docs', use 'irds/neumarco_fa'\n\nThis dataset is used by: 'neumarco_fa_dev_judged'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/neumarco_fa #region-us \n", "# Dataset Card for 'neumarco/fa/dev'\n\nThe 'neumarco/fa/dev' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=101,093\n - 'qrels': (relevance assessments); count=59,273\n\n - For 'docs', use 'irds/neumarco_fa'\n\nThis dataset is used by: 'neumarco_fa_dev_judged'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
197de71e9eeb3a74cefa1b0e1ac8813d6139652f
# Dataset Card for `neumarco/fa/dev/judged`

The `neumarco/fa/dev/judged` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/fa/dev/judged).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=55,578

 - For `docs`, use [`irds/neumarco_fa`](https://huggingface.co/datasets/irds/neumarco_fa)
 - For `qrels`, use [`irds/neumarco_fa_dev`](https://huggingface.co/datasets/irds/neumarco_fa_dev)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/neumarco_fa_dev_judged', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
irds/neumarco_fa_dev_judged
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_fa", "source_datasets:irds/neumarco_fa_dev", "region:us" ]
2023-01-05T03:42:36+00:00
{"source_datasets": ["irds/neumarco_fa", "irds/neumarco_fa_dev"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/fa/dev/judged`", "viewer": false}
2023-01-05T03:42:41+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/neumarco_fa #source_datasets-irds/neumarco_fa_dev #region-us
# Dataset Card for 'neumarco/fa/dev/judged' The 'neumarco/fa/dev/judged' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=55,578 - For 'docs', use 'irds/neumarco_fa' - For 'qrels', use 'irds/neumarco_fa_dev' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'neumarco/fa/dev/judged'\n\nThe 'neumarco/fa/dev/judged' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=55,578\n\n - For 'docs', use 'irds/neumarco_fa'\n - For 'qrels', use 'irds/neumarco_fa_dev'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/neumarco_fa #source_datasets-irds/neumarco_fa_dev #region-us \n", "# Dataset Card for 'neumarco/fa/dev/judged'\n\nThe 'neumarco/fa/dev/judged' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=55,578\n\n - For 'docs', use 'irds/neumarco_fa'\n - For 'qrels', use 'irds/neumarco_fa_dev'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
4f7e9b08f302936c8f42dff46f4bb3df4c728247
# Dataset Card for `neumarco/fa/dev/small`

The `neumarco/fa/dev/small` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/fa/dev/small).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=6,980
 - `qrels`: (relevance assessments); count=7,437

 - For `docs`, use [`irds/neumarco_fa`](https://huggingface.co/datasets/irds/neumarco_fa)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/neumarco_fa_dev_small', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/neumarco_fa_dev_small', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
irds/neumarco_fa_dev_small
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_fa", "region:us" ]
2023-01-05T03:42:47+00:00
{"source_datasets": ["irds/neumarco_fa"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/fa/dev/small`", "viewer": false}
2023-01-05T03:42:53+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/neumarco_fa #region-us
# Dataset Card for 'neumarco/fa/dev/small' The 'neumarco/fa/dev/small' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=6,980 - 'qrels': (relevance assessments); count=7,437 - For 'docs', use 'irds/neumarco_fa' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'neumarco/fa/dev/small'\n\nThe 'neumarco/fa/dev/small' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=6,980\n - 'qrels': (relevance assessments); count=7,437\n\n - For 'docs', use 'irds/neumarco_fa'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/neumarco_fa #region-us \n", "# Dataset Card for 'neumarco/fa/dev/small'\n\nThe 'neumarco/fa/dev/small' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=6,980\n - 'qrels': (relevance assessments); count=7,437\n\n - For 'docs', use 'irds/neumarco_fa'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
f991550fab0acc9540353de90ae6d46d0950821b
# Dataset Card for `neumarco/fa/train`

The `neumarco/fa/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/fa/train).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=808,731
 - `qrels`: (relevance assessments); count=532,761
 - `docpairs`; count=269,919,004

 - For `docs`, use [`irds/neumarco_fa`](https://huggingface.co/datasets/irds/neumarco_fa)

This dataset is used by: [`neumarco_fa_train_judged`](https://huggingface.co/datasets/irds/neumarco_fa_train_judged)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/neumarco_fa_train', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/neumarco_fa_train', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}

docpairs = load_dataset('irds/neumarco_fa_train', 'docpairs')
for record in docpairs:
    record # {'query_id': ..., 'doc_id_a': ..., 'doc_id_b': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
irds/neumarco_fa_train
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_fa", "region:us" ]
2023-01-05T03:42:58+00:00
{"source_datasets": ["irds/neumarco_fa"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/fa/train`", "viewer": false}
2023-01-05T03:43:04+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/neumarco_fa #region-us
# Dataset Card for 'neumarco/fa/train' The 'neumarco/fa/train' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=808,731 - 'qrels': (relevance assessments); count=532,761 - 'docpairs'; count=269,919,004 - For 'docs', use 'irds/neumarco_fa' This dataset is used by: 'neumarco_fa_train_judged' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'neumarco/fa/train'\n\nThe 'neumarco/fa/train' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=808,731\n - 'qrels': (relevance assessments); count=532,761\n - 'docpairs'; count=269,919,004\n\n - For 'docs', use 'irds/neumarco_fa'\n\nThis dataset is used by: 'neumarco_fa_train_judged'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/neumarco_fa #region-us \n", "# Dataset Card for 'neumarco/fa/train'\n\nThe 'neumarco/fa/train' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=808,731\n - 'qrels': (relevance assessments); count=532,761\n - 'docpairs'; count=269,919,004\n\n - For 'docs', use 'irds/neumarco_fa'\n\nThis dataset is used by: 'neumarco_fa_train_judged'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
772eca36006c63ecc66bf08ef2a88b57fe1df400
# Dataset Card for `neumarco/fa/train/judged`

The `neumarco/fa/train/judged` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/fa/train/judged).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=502,939

 - For `docs`, use [`irds/neumarco_fa`](https://huggingface.co/datasets/irds/neumarco_fa)
 - For `qrels`, use [`irds/neumarco_fa_train`](https://huggingface.co/datasets/irds/neumarco_fa_train)
 - For `docpairs`, use [`irds/neumarco_fa_train`](https://huggingface.co/datasets/irds/neumarco_fa_train)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/neumarco_fa_train_judged', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
irds/neumarco_fa_train_judged
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_fa", "source_datasets:irds/neumarco_fa_train", "region:us" ]
2023-01-05T03:43:09+00:00
{"source_datasets": ["irds/neumarco_fa", "irds/neumarco_fa_train"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/fa/train/judged`", "viewer": false}
2023-01-05T03:43:15+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/neumarco_fa #source_datasets-irds/neumarco_fa_train #region-us
# Dataset Card for 'neumarco/fa/train/judged' The 'neumarco/fa/train/judged' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=502,939 - For 'docs', use 'irds/neumarco_fa' - For 'qrels', use 'irds/neumarco_fa_train' - For 'docpairs', use 'irds/neumarco_fa_train' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'neumarco/fa/train/judged'\n\nThe 'neumarco/fa/train/judged' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=502,939\n\n - For 'docs', use 'irds/neumarco_fa'\n - For 'qrels', use 'irds/neumarco_fa_train'\n - For 'docpairs', use 'irds/neumarco_fa_train'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/neumarco_fa #source_datasets-irds/neumarco_fa_train #region-us \n", "# Dataset Card for 'neumarco/fa/train/judged'\n\nThe 'neumarco/fa/train/judged' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=502,939\n\n - For 'docs', use 'irds/neumarco_fa'\n - For 'qrels', use 'irds/neumarco_fa_train'\n - For 'docpairs', use 'irds/neumarco_fa_train'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
dc02a3d5a47b72f1df1dde2ee6ecd3a70b2fe4c1
# Dataset Card for `neumarco/ru`

The `neumarco/ru` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/ru).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=8,841,823

This dataset is used by: [`neumarco_ru_dev`](https://huggingface.co/datasets/irds/neumarco_ru_dev), [`neumarco_ru_dev_judged`](https://huggingface.co/datasets/irds/neumarco_ru_dev_judged), [`neumarco_ru_dev_small`](https://huggingface.co/datasets/irds/neumarco_ru_dev_small), [`neumarco_ru_train`](https://huggingface.co/datasets/irds/neumarco_ru_train), [`neumarco_ru_train_judged`](https://huggingface.co/datasets/irds/neumarco_ru_train_judged)

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/neumarco_ru', 'docs')
for record in docs:
    record # {'doc_id': ..., 'text': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
irds/neumarco_ru
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:43:20+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/ru`", "viewer": false}
2023-01-05T03:43:26+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'neumarco/ru' The 'neumarco/ru' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=8,841,823 This dataset is used by: 'neumarco_ru_dev', 'neumarco_ru_dev_judged', 'neumarco_ru_dev_small', 'neumarco_ru_train', 'neumarco_ru_train_judged' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'neumarco/ru'\n\nThe 'neumarco/ru' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=8,841,823\n\n\nThis dataset is used by: 'neumarco_ru_dev', 'neumarco_ru_dev_judged', 'neumarco_ru_dev_small', 'neumarco_ru_train', 'neumarco_ru_train_judged'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'neumarco/ru'\n\nThe 'neumarco/ru' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=8,841,823\n\n\nThis dataset is used by: 'neumarco_ru_dev', 'neumarco_ru_dev_judged', 'neumarco_ru_dev_small', 'neumarco_ru_train', 'neumarco_ru_train_judged'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
247672bb5e728f4abbb4dc77ab13081629186363
# Dataset Card for `neumarco/ru/dev`

The `neumarco/ru/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/ru/dev).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=101,093
 - `qrels`: (relevance assessments); count=59,273

 - For `docs`, use [`irds/neumarco_ru`](https://huggingface.co/datasets/irds/neumarco_ru)

This dataset is used by: [`neumarco_ru_dev_judged`](https://huggingface.co/datasets/irds/neumarco_ru_dev_judged)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/neumarco_ru_dev', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/neumarco_ru_dev', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
irds/neumarco_ru_dev
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_ru", "region:us" ]
2023-01-05T03:43:31+00:00
{"source_datasets": ["irds/neumarco_ru"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/ru/dev`", "viewer": false}
2023-01-05T03:43:37+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/neumarco_ru #region-us
# Dataset Card for 'neumarco/ru/dev' The 'neumarco/ru/dev' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=101,093 - 'qrels': (relevance assessments); count=59,273 - For 'docs', use 'irds/neumarco_ru' This dataset is used by: 'neumarco_ru_dev_judged' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'neumarco/ru/dev'\n\nThe 'neumarco/ru/dev' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=101,093\n - 'qrels': (relevance assessments); count=59,273\n\n - For 'docs', use 'irds/neumarco_ru'\n\nThis dataset is used by: 'neumarco_ru_dev_judged'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/neumarco_ru #region-us \n", "# Dataset Card for 'neumarco/ru/dev'\n\nThe 'neumarco/ru/dev' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=101,093\n - 'qrels': (relevance assessments); count=59,273\n\n - For 'docs', use 'irds/neumarco_ru'\n\nThis dataset is used by: 'neumarco_ru_dev_judged'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
98bd8ef8f7158f20241c0ad08c018cc608e78048
# Dataset Card for `neumarco/ru/dev/judged`

The `neumarco/ru/dev/judged` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/ru/dev/judged).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=55,578

 - For `docs`, use [`irds/neumarco_ru`](https://huggingface.co/datasets/irds/neumarco_ru)
 - For `qrels`, use [`irds/neumarco_ru_dev`](https://huggingface.co/datasets/irds/neumarco_ru_dev)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/neumarco_ru_dev_judged', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
irds/neumarco_ru_dev_judged
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_ru", "source_datasets:irds/neumarco_ru_dev", "region:us" ]
2023-01-05T03:43:43+00:00
{"source_datasets": ["irds/neumarco_ru", "irds/neumarco_ru_dev"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/ru/dev/judged`", "viewer": false}
2023-01-05T03:43:49+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/neumarco_ru #source_datasets-irds/neumarco_ru_dev #region-us
# Dataset Card for 'neumarco/ru/dev/judged' The 'neumarco/ru/dev/judged' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=55,578 - For 'docs', use 'irds/neumarco_ru' - For 'qrels', use 'irds/neumarco_ru_dev' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'neumarco/ru/dev/judged'\n\nThe 'neumarco/ru/dev/judged' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=55,578\n\n - For 'docs', use 'irds/neumarco_ru'\n - For 'qrels', use 'irds/neumarco_ru_dev'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/neumarco_ru #source_datasets-irds/neumarco_ru_dev #region-us \n", "# Dataset Card for 'neumarco/ru/dev/judged'\n\nThe 'neumarco/ru/dev/judged' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=55,578\n\n - For 'docs', use 'irds/neumarco_ru'\n - For 'qrels', use 'irds/neumarco_ru_dev'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
7119b3bce58621e4de0b6d49af778d93ba59cf37
# Dataset Card for `neumarco/ru/dev/small`

The `neumarco/ru/dev/small` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/ru/dev/small).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=6,980
 - `qrels`: (relevance assessments); count=7,437

 - For `docs`, use [`irds/neumarco_ru`](https://huggingface.co/datasets/irds/neumarco_ru)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/neumarco_ru_dev_small', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/neumarco_ru_dev_small', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
irds/neumarco_ru_dev_small
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_ru", "region:us" ]
2023-01-05T03:43:54+00:00
{"source_datasets": ["irds/neumarco_ru"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/ru/dev/small`", "viewer": false}
2023-01-05T03:44:00+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/neumarco_ru #region-us
# Dataset Card for 'neumarco/ru/dev/small' The 'neumarco/ru/dev/small' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=6,980 - 'qrels': (relevance assessments); count=7,437 - For 'docs', use 'irds/neumarco_ru' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'neumarco/ru/dev/small'\n\nThe 'neumarco/ru/dev/small' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=6,980\n - 'qrels': (relevance assessments); count=7,437\n\n - For 'docs', use 'irds/neumarco_ru'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/neumarco_ru #region-us \n", "# Dataset Card for 'neumarco/ru/dev/small'\n\nThe 'neumarco/ru/dev/small' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=6,980\n - 'qrels': (relevance assessments); count=7,437\n\n - For 'docs', use 'irds/neumarco_ru'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
8fa65507b46ded9f16109fd04a88540bd89b5da7
# Dataset Card for `neumarco/ru/train`

The `neumarco/ru/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/ru/train).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=808,731
 - `qrels`: (relevance assessments); count=532,761
 - `docpairs`; count=269,919,004

 - For `docs`, use [`irds/neumarco_ru`](https://huggingface.co/datasets/irds/neumarco_ru)

This dataset is used by: [`neumarco_ru_train_judged`](https://huggingface.co/datasets/irds/neumarco_ru_train_judged)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/neumarco_ru_train', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/neumarco_ru_train', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}

docpairs = load_dataset('irds/neumarco_ru_train', 'docpairs')
for record in docpairs:
    record # {'query_id': ..., 'doc_id_a': ..., 'doc_id_b': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
irds/neumarco_ru_train
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_ru", "region:us" ]
2023-01-05T03:44:05+00:00
{"source_datasets": ["irds/neumarco_ru"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/ru/train`", "viewer": false}
2023-01-05T03:44:11+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/neumarco_ru #region-us
# Dataset Card for 'neumarco/ru/train' The 'neumarco/ru/train' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=808,731 - 'qrels': (relevance assessments); count=532,761 - 'docpairs'; count=269,919,004 - For 'docs', use 'irds/neumarco_ru' This dataset is used by: 'neumarco_ru_train_judged' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'neumarco/ru/train'\n\nThe 'neumarco/ru/train' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=808,731\n - 'qrels': (relevance assessments); count=532,761\n - 'docpairs'; count=269,919,004\n\n - For 'docs', use 'irds/neumarco_ru'\n\nThis dataset is used by: 'neumarco_ru_train_judged'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/neumarco_ru #region-us \n", "# Dataset Card for 'neumarco/ru/train'\n\nThe 'neumarco/ru/train' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=808,731\n - 'qrels': (relevance assessments); count=532,761\n - 'docpairs'; count=269,919,004\n\n - For 'docs', use 'irds/neumarco_ru'\n\nThis dataset is used by: 'neumarco_ru_train_judged'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
b10efa0c750292faae7121eb7f1185cbdd1b1ef5
# Dataset Card for `neumarco/ru/train/judged`

The `neumarco/ru/train/judged` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/ru/train/judged).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=502,939

 - For `docs`, use [`irds/neumarco_ru`](https://huggingface.co/datasets/irds/neumarco_ru)
 - For `qrels`, use [`irds/neumarco_ru_train`](https://huggingface.co/datasets/irds/neumarco_ru_train)
 - For `docpairs`, use [`irds/neumarco_ru_train`](https://huggingface.co/datasets/irds/neumarco_ru_train)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/neumarco_ru_train_judged', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
irds/neumarco_ru_train_judged
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_ru", "source_datasets:irds/neumarco_ru_train", "region:us" ]
2023-01-05T03:44:16+00:00
{"source_datasets": ["irds/neumarco_ru", "irds/neumarco_ru_train"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/ru/train/judged`", "viewer": false}
2023-01-05T03:44:22+00:00
38fa331af462b210a32e3213a34e232cc27e510b
# Dataset Card for `neumarco/zh` The `neumarco/zh` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/zh). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=8,841,823 This dataset is used by: [`neumarco_zh_dev`](https://huggingface.co/datasets/irds/neumarco_zh_dev), [`neumarco_zh_dev_judged`](https://huggingface.co/datasets/irds/neumarco_zh_dev_judged), [`neumarco_zh_dev_small`](https://huggingface.co/datasets/irds/neumarco_zh_dev_small), [`neumarco_zh_train`](https://huggingface.co/datasets/irds/neumarco_zh_train), [`neumarco_zh_train_judged`](https://huggingface.co/datasets/irds/neumarco_zh_train_judged) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/neumarco_zh', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/neumarco_zh
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:44:27+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/zh`", "viewer": false}
2023-01-05T03:44:33+00:00
142218f829a44fa1a5e0f4d3deef06edfa48b961
# Dataset Card for `neumarco/zh/dev` The `neumarco/zh/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/zh/dev). # Data This dataset provides: - `queries` (i.e., topics); count=101,093 - `qrels`: (relevance assessments); count=59,273 - For `docs`, use [`irds/neumarco_zh`](https://huggingface.co/datasets/irds/neumarco_zh) This dataset is used by: [`neumarco_zh_dev_judged`](https://huggingface.co/datasets/irds/neumarco_zh_dev_judged) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/neumarco_zh_dev', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/neumarco_zh_dev', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/neumarco_zh_dev
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_zh", "region:us" ]
2023-01-05T03:44:38+00:00
{"source_datasets": ["irds/neumarco_zh"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/zh/dev`", "viewer": false}
2023-01-05T03:44:44+00:00
bb88f4f0815d99e2fe6909c1a86b7f1792a1bf82
# Dataset Card for `neumarco/zh/dev/judged` The `neumarco/zh/dev/judged` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/zh/dev/judged). # Data This dataset provides: - `queries` (i.e., topics); count=55,578 - For `docs`, use [`irds/neumarco_zh`](https://huggingface.co/datasets/irds/neumarco_zh) - For `qrels`, use [`irds/neumarco_zh_dev`](https://huggingface.co/datasets/irds/neumarco_zh_dev) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/neumarco_zh_dev_judged', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/neumarco_zh_dev_judged
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_zh", "source_datasets:irds/neumarco_zh_dev", "region:us" ]
2023-01-05T03:44:50+00:00
{"source_datasets": ["irds/neumarco_zh", "irds/neumarco_zh_dev"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/zh/dev/judged`", "viewer": false}
2023-01-05T03:44:55+00:00
a9dfebce7a38ebcbf75dcdd9c5754dbf4ef97433
# Dataset Card for `neumarco/zh/dev/small` The `neumarco/zh/dev/small` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/zh/dev/small). # Data This dataset provides: - `queries` (i.e., topics); count=6,980 - `qrels`: (relevance assessments); count=7,437 - For `docs`, use [`irds/neumarco_zh`](https://huggingface.co/datasets/irds/neumarco_zh) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/neumarco_zh_dev_small', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/neumarco_zh_dev_small', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/neumarco_zh_dev_small
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_zh", "region:us" ]
2023-01-05T03:45:01+00:00
{"source_datasets": ["irds/neumarco_zh"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/zh/dev/small`", "viewer": false}
2023-01-05T03:45:06+00:00
4e1a0fc268895279a59021cf645af5dce40a9fa5
# Dataset Card for `neumarco/zh/train` The `neumarco/zh/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/zh/train). # Data This dataset provides: - `queries` (i.e., topics); count=808,731 - `qrels`: (relevance assessments); count=532,761 - `docpairs`; count=269,919,004 - For `docs`, use [`irds/neumarco_zh`](https://huggingface.co/datasets/irds/neumarco_zh) This dataset is used by: [`neumarco_zh_train_judged`](https://huggingface.co/datasets/irds/neumarco_zh_train_judged) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/neumarco_zh_train', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/neumarco_zh_train', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} docpairs = load_dataset('irds/neumarco_zh_train', 'docpairs') for record in docpairs: record # {'query_id': ..., 'doc_id_a': ..., 'doc_id_b': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/neumarco_zh_train
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_zh", "region:us" ]
2023-01-05T03:45:12+00:00
{"source_datasets": ["irds/neumarco_zh"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/zh/train`", "viewer": false}
2023-01-05T03:45:18+00:00
2b81e8013a61713e4698b6b189e4c418a66448a8
# Dataset Card for `neumarco/zh/train/judged` The `neumarco/zh/train/judged` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/zh/train/judged). # Data This dataset provides: - `queries` (i.e., topics); count=502,939 - For `docs`, use [`irds/neumarco_zh`](https://huggingface.co/datasets/irds/neumarco_zh) - For `qrels`, use [`irds/neumarco_zh_train`](https://huggingface.co/datasets/irds/neumarco_zh_train) - For `docpairs`, use [`irds/neumarco_zh_train`](https://huggingface.co/datasets/irds/neumarco_zh_train) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/neumarco_zh_train_judged', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/neumarco_zh_train_judged
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_zh", "source_datasets:irds/neumarco_zh_train", "region:us" ]
2023-01-05T03:45:23+00:00
{"source_datasets": ["irds/neumarco_zh", "irds/neumarco_zh_train"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/zh/train/judged`", "viewer": false}
2023-01-05T03:45:29+00:00
a93f3f651bffbfc7dccee464c5221d69bb533b9c
# Dataset Card for `nfcorpus` The `nfcorpus` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nfcorpus#nfcorpus). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=5,371 This dataset is used by: [`nfcorpus_dev`](https://huggingface.co/datasets/irds/nfcorpus_dev), [`nfcorpus_dev_nontopic`](https://huggingface.co/datasets/irds/nfcorpus_dev_nontopic), [`nfcorpus_dev_video`](https://huggingface.co/datasets/irds/nfcorpus_dev_video), [`nfcorpus_test`](https://huggingface.co/datasets/irds/nfcorpus_test), [`nfcorpus_test_nontopic`](https://huggingface.co/datasets/irds/nfcorpus_test_nontopic), [`nfcorpus_test_video`](https://huggingface.co/datasets/irds/nfcorpus_test_video), [`nfcorpus_train`](https://huggingface.co/datasets/irds/nfcorpus_train), [`nfcorpus_train_nontopic`](https://huggingface.co/datasets/irds/nfcorpus_train_nontopic), [`nfcorpus_train_video`](https://huggingface.co/datasets/irds/nfcorpus_train_video) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/nfcorpus', 'docs') for record in docs: record # {'doc_id': ..., 'url': ..., 'title': ..., 'abstract': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } ```
irds/nfcorpus
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:45:34+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`nfcorpus`", "viewer": false}
2023-01-05T03:45:40+00:00
e0998dd546ea60e52b86ccb3618ebfd665117ea4
# Dataset Card for `nfcorpus/dev` The `nfcorpus/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nfcorpus#nfcorpus/dev). # Data This dataset provides: - `queries` (i.e., topics); count=325 - `qrels`: (relevance assessments); count=14,589 - For `docs`, use [`irds/nfcorpus`](https://huggingface.co/datasets/irds/nfcorpus) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nfcorpus_dev', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'all': ...} qrels = load_dataset('irds/nfcorpus_dev', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } ```
irds/nfcorpus_dev
[ "task_categories:text-retrieval", "source_datasets:irds/nfcorpus", "region:us" ]
2023-01-05T03:45:45+00:00
{"source_datasets": ["irds/nfcorpus"], "task_categories": ["text-retrieval"], "pretty_name": "`nfcorpus/dev`", "viewer": false}
2023-01-05T03:45:51+00:00
611bd08ccae3987930e7f129b3956da865e56a26
# Dataset Card for `nfcorpus/dev/nontopic` The `nfcorpus/dev/nontopic` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nfcorpus#nfcorpus/dev/nontopic). # Data This dataset provides: - `queries` (i.e., topics); count=144 - `qrels`: (relevance assessments); count=4,353 - For `docs`, use [`irds/nfcorpus`](https://huggingface.co/datasets/irds/nfcorpus) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nfcorpus_dev_nontopic', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/nfcorpus_dev_nontopic', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } ```
irds/nfcorpus_dev_nontopic
[ "task_categories:text-retrieval", "source_datasets:irds/nfcorpus", "region:us" ]
2023-01-05T03:45:57+00:00
{"source_datasets": ["irds/nfcorpus"], "task_categories": ["text-retrieval"], "pretty_name": "`nfcorpus/dev/nontopic`", "viewer": false}
2023-01-05T03:46:02+00:00
3636266168c7341424759c3239724d46bc5b9981
# Dataset Card for `nfcorpus/dev/video` The `nfcorpus/dev/video` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nfcorpus#nfcorpus/dev/video). # Data This dataset provides: - `queries` (i.e., topics); count=102 - `qrels`: (relevance assessments); count=3,068 - For `docs`, use [`irds/nfcorpus`](https://huggingface.co/datasets/irds/nfcorpus) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nfcorpus_dev_video', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'desc': ...} qrels = load_dataset('irds/nfcorpus_dev_video', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } ```
irds/nfcorpus_dev_video
[ "task_categories:text-retrieval", "source_datasets:irds/nfcorpus", "region:us" ]
2023-01-05T03:46:08+00:00
{"source_datasets": ["irds/nfcorpus"], "task_categories": ["text-retrieval"], "pretty_name": "`nfcorpus/dev/video`", "viewer": false}
2023-01-05T03:46:13+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/nfcorpus #region-us
# Dataset Card for 'nfcorpus/dev/video' The 'nfcorpus/dev/video' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=102 - 'qrels': (relevance assessments); count=3,068 - For 'docs', use 'irds/nfcorpus' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'nfcorpus/dev/video'\n\nThe 'nfcorpus/dev/video' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=102\n - 'qrels': (relevance assessments); count=3,068\n\n - For 'docs', use 'irds/nfcorpus'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/nfcorpus #region-us \n", "# Dataset Card for 'nfcorpus/dev/video'\n\nThe 'nfcorpus/dev/video' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=102\n - 'qrels': (relevance assessments); count=3,068\n\n - For 'docs', use 'irds/nfcorpus'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
b2f4a28248d54a5d9d71413246e0be85ef02ebd1
# Dataset Card for `nfcorpus/test` The `nfcorpus/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nfcorpus#nfcorpus/test). # Data This dataset provides: - `queries` (i.e., topics); count=325 - `qrels`: (relevance assessments); count=15,820 - For `docs`, use [`irds/nfcorpus`](https://huggingface.co/datasets/irds/nfcorpus) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nfcorpus_test', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'all': ...} qrels = load_dataset('irds/nfcorpus_test', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } ```
irds/nfcorpus_test
[ "task_categories:text-retrieval", "source_datasets:irds/nfcorpus", "region:us" ]
2023-01-05T03:46:19+00:00
{"source_datasets": ["irds/nfcorpus"], "task_categories": ["text-retrieval"], "pretty_name": "`nfcorpus/test`", "viewer": false}
2023-01-05T03:46:24+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/nfcorpus #region-us
# Dataset Card for 'nfcorpus/test' The 'nfcorpus/test' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=325 - 'qrels': (relevance assessments); count=15,820 - For 'docs', use 'irds/nfcorpus' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'nfcorpus/test'\n\nThe 'nfcorpus/test' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=325\n - 'qrels': (relevance assessments); count=15,820\n\n - For 'docs', use 'irds/nfcorpus'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/nfcorpus #region-us \n", "# Dataset Card for 'nfcorpus/test'\n\nThe 'nfcorpus/test' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=325\n - 'qrels': (relevance assessments); count=15,820\n\n - For 'docs', use 'irds/nfcorpus'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
2ddd47a2a3b68cee5d44330302fa7ef51aae9b73
# Dataset Card for `nfcorpus/test/nontopic` The `nfcorpus/test/nontopic` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nfcorpus#nfcorpus/test/nontopic). # Data This dataset provides: - `queries` (i.e., topics); count=144 - `qrels`: (relevance assessments); count=4,540 - For `docs`, use [`irds/nfcorpus`](https://huggingface.co/datasets/irds/nfcorpus) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nfcorpus_test_nontopic', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/nfcorpus_test_nontopic', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } ```
irds/nfcorpus_test_nontopic
[ "task_categories:text-retrieval", "source_datasets:irds/nfcorpus", "region:us" ]
2023-01-05T03:46:30+00:00
{"source_datasets": ["irds/nfcorpus"], "task_categories": ["text-retrieval"], "pretty_name": "`nfcorpus/test/nontopic`", "viewer": false}
2023-01-05T03:46:36+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/nfcorpus #region-us
# Dataset Card for 'nfcorpus/test/nontopic' The 'nfcorpus/test/nontopic' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=144 - 'qrels': (relevance assessments); count=4,540 - For 'docs', use 'irds/nfcorpus' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'nfcorpus/test/nontopic'\n\nThe 'nfcorpus/test/nontopic' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=144\n - 'qrels': (relevance assessments); count=4,540\n\n - For 'docs', use 'irds/nfcorpus'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/nfcorpus #region-us \n", "# Dataset Card for 'nfcorpus/test/nontopic'\n\nThe 'nfcorpus/test/nontopic' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=144\n - 'qrels': (relevance assessments); count=4,540\n\n - For 'docs', use 'irds/nfcorpus'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
6800cec2cc8cf14f941b3a02db775ddc3c93fddd
# Dataset Card for `nfcorpus/test/video` The `nfcorpus/test/video` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nfcorpus#nfcorpus/test/video). # Data This dataset provides: - `queries` (i.e., topics); count=102 - `qrels`: (relevance assessments); count=3,108 - For `docs`, use [`irds/nfcorpus`](https://huggingface.co/datasets/irds/nfcorpus) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nfcorpus_test_video', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'desc': ...} qrels = load_dataset('irds/nfcorpus_test_video', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } ```
irds/nfcorpus_test_video
[ "task_categories:text-retrieval", "source_datasets:irds/nfcorpus", "region:us" ]
2023-01-05T03:46:41+00:00
{"source_datasets": ["irds/nfcorpus"], "task_categories": ["text-retrieval"], "pretty_name": "`nfcorpus/test/video`", "viewer": false}
2023-01-05T03:46:47+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/nfcorpus #region-us
# Dataset Card for 'nfcorpus/test/video' The 'nfcorpus/test/video' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=102 - 'qrels': (relevance assessments); count=3,108 - For 'docs', use 'irds/nfcorpus' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'nfcorpus/test/video'\n\nThe 'nfcorpus/test/video' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=102\n - 'qrels': (relevance assessments); count=3,108\n\n - For 'docs', use 'irds/nfcorpus'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/nfcorpus #region-us \n", "# Dataset Card for 'nfcorpus/test/video'\n\nThe 'nfcorpus/test/video' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=102\n - 'qrels': (relevance assessments); count=3,108\n\n - For 'docs', use 'irds/nfcorpus'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
0fb021728466330179089ff30a28cb8c8a94495f
# Dataset Card for `nfcorpus/train` The `nfcorpus/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nfcorpus#nfcorpus/train). # Data This dataset provides: - `queries` (i.e., topics); count=2,594 - `qrels`: (relevance assessments); count=139,350 - For `docs`, use [`irds/nfcorpus`](https://huggingface.co/datasets/irds/nfcorpus) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nfcorpus_train', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'all': ...} qrels = load_dataset('irds/nfcorpus_train', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } ```
irds/nfcorpus_train
[ "task_categories:text-retrieval", "source_datasets:irds/nfcorpus", "region:us" ]
2023-01-05T03:46:52+00:00
{"source_datasets": ["irds/nfcorpus"], "task_categories": ["text-retrieval"], "pretty_name": "`nfcorpus/train`", "viewer": false}
2023-01-05T03:46:58+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/nfcorpus #region-us
# Dataset Card for 'nfcorpus/train' The 'nfcorpus/train' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=2,594 - 'qrels': (relevance assessments); count=139,350 - For 'docs', use 'irds/nfcorpus' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'nfcorpus/train'\n\nThe 'nfcorpus/train' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=2,594\n - 'qrels': (relevance assessments); count=139,350\n\n - For 'docs', use 'irds/nfcorpus'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/nfcorpus #region-us \n", "# Dataset Card for 'nfcorpus/train'\n\nThe 'nfcorpus/train' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=2,594\n - 'qrels': (relevance assessments); count=139,350\n\n - For 'docs', use 'irds/nfcorpus'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
83ff1cb44780c1a3b5cbad8963fea0865b49aa83
# Dataset Card for `nfcorpus/train/nontopic` The `nfcorpus/train/nontopic` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nfcorpus#nfcorpus/train/nontopic). # Data This dataset provides: - `queries` (i.e., topics); count=1,141 - `qrels`: (relevance assessments); count=37,383 - For `docs`, use [`irds/nfcorpus`](https://huggingface.co/datasets/irds/nfcorpus) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nfcorpus_train_nontopic', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/nfcorpus_train_nontopic', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } ```
irds/nfcorpus_train_nontopic
[ "task_categories:text-retrieval", "source_datasets:irds/nfcorpus", "region:us" ]
2023-01-05T03:47:03+00:00
{"source_datasets": ["irds/nfcorpus"], "task_categories": ["text-retrieval"], "pretty_name": "`nfcorpus/train/nontopic`", "viewer": false}
2023-01-05T03:47:09+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/nfcorpus #region-us
# Dataset Card for 'nfcorpus/train/nontopic' The 'nfcorpus/train/nontopic' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=1,141 - 'qrels': (relevance assessments); count=37,383 - For 'docs', use 'irds/nfcorpus' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'nfcorpus/train/nontopic'\n\nThe 'nfcorpus/train/nontopic' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=1,141\n - 'qrels': (relevance assessments); count=37,383\n\n - For 'docs', use 'irds/nfcorpus'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/nfcorpus #region-us \n", "# Dataset Card for 'nfcorpus/train/nontopic'\n\nThe 'nfcorpus/train/nontopic' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=1,141\n - 'qrels': (relevance assessments); count=37,383\n\n - For 'docs', use 'irds/nfcorpus'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
26d34488578a5f7c180cd6948b0ae34aefdc4f2e
# Dataset Card for `nfcorpus/train/video` The `nfcorpus/train/video` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nfcorpus#nfcorpus/train/video). # Data This dataset provides: - `queries` (i.e., topics); count=812 - `qrels`: (relevance assessments); count=27,465 - For `docs`, use [`irds/nfcorpus`](https://huggingface.co/datasets/irds/nfcorpus) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nfcorpus_train_video', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'desc': ...} qrels = load_dataset('irds/nfcorpus_train_video', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } ```
irds/nfcorpus_train_video
[ "task_categories:text-retrieval", "source_datasets:irds/nfcorpus", "region:us" ]
2023-01-05T03:47:15+00:00
{"source_datasets": ["irds/nfcorpus"], "task_categories": ["text-retrieval"], "pretty_name": "`nfcorpus/train/video`", "viewer": false}
2023-01-05T03:47:20+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/nfcorpus #region-us
# Dataset Card for 'nfcorpus/train/video' The 'nfcorpus/train/video' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=812 - 'qrels': (relevance assessments); count=27,465 - For 'docs', use 'irds/nfcorpus' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'nfcorpus/train/video'\n\nThe 'nfcorpus/train/video' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=812\n - 'qrels': (relevance assessments); count=27,465\n\n - For 'docs', use 'irds/nfcorpus'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/nfcorpus #region-us \n", "# Dataset Card for 'nfcorpus/train/video'\n\nThe 'nfcorpus/train/video' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=812\n - 'qrels': (relevance assessments); count=27,465\n\n - For 'docs', use 'irds/nfcorpus'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
5afa6a30bf39abc3b8f31f073c7b65df78e0dd24
# Dataset Card for `natural-questions` The `natural-questions` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/natural-questions#natural-questions). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=28,390,850 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/natural-questions', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'html': ..., 'start_byte': ..., 'end_byte': ..., 'start_token': ..., 'end_token': ..., 'document_title': ..., 'document_url': ..., 'parent_doc_id': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Kwiatkowski2019Nq, title = {Natural Questions: a Benchmark for Question Answering Research}, author = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov}, year = {2019}, journal = {TACL} } ```
irds/natural-questions
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:47:26+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`natural-questions`", "viewer": false}
2023-01-05T03:47:31+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'natural-questions' The 'natural-questions' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=28,390,850 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'natural-questions'\n\nThe 'natural-questions' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=28,390,850", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'natural-questions'\n\nThe 'natural-questions' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=28,390,850", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
1edb0703e2cd3fd18745d67a7bd9fd6e3cb8d859
# Dataset Card for `nyt` The `nyt` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nyt#nyt). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,864,661 This dataset is used by: [`nyt_trec-core-2017`](https://huggingface.co/datasets/irds/nyt_trec-core-2017), [`nyt_wksup`](https://huggingface.co/datasets/irds/nyt_wksup), [`nyt_wksup_train`](https://huggingface.co/datasets/irds/nyt_wksup_train), [`nyt_wksup_valid`](https://huggingface.co/datasets/irds/nyt_wksup_valid) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/nyt', 'docs') for record in docs: record # {'doc_id': ..., 'headline': ..., 'body': ..., 'source_xml': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Sandhaus2008Nyt, title={The New York Times Annotated Corpus}, author={Sandhaus, Evan}, journal={Linguistic Data Consortium, Philadelphia}, volume={6}, number={12}, pages={e26752}, year={2008} } ```
irds/nyt
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:47:37+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`nyt`", "viewer": false}
2023-01-05T03:47:43+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'nyt' The 'nyt' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=1,864,661 This dataset is used by: 'nyt_trec-core-2017', 'nyt_wksup', 'nyt_wksup_train', 'nyt_wksup_valid' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'nyt'\n\nThe 'nyt' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,864,661\n\n\nThis dataset is used by: 'nyt_trec-core-2017', 'nyt_wksup', 'nyt_wksup_train', 'nyt_wksup_valid'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'nyt'\n\nThe 'nyt' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,864,661\n\n\nThis dataset is used by: 'nyt_trec-core-2017', 'nyt_wksup', 'nyt_wksup_train', 'nyt_wksup_valid'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
83569ce3f7de11a59f3f8de32f6bb4512628bce1
# Dataset Card for `nyt/trec-core-2017` The `nyt/trec-core-2017` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nyt#nyt/trec-core-2017). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=30,030 - For `docs`, use [`irds/nyt`](https://huggingface.co/datasets/irds/nyt) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nyt_trec-core-2017', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/nyt_trec-core-2017', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Allan2017TrecCore, author = {James Allan and Donna Harman and Evangelos Kanoulas and Dan Li and Christophe Van Gysel and Ellen Voorhees}, title = {TREC 2017 Common Core Track Overview}, booktitle = {TREC}, year = {2017} } @article{Sandhaus2008Nyt, title={The New York Times Annotated Corpus}, author={Sandhaus, Evan}, journal={Linguistic Data Consortium, Philadelphia}, volume={6}, number={12}, pages={e26752}, year={2008} } ```
irds/nyt_trec-core-2017
[ "task_categories:text-retrieval", "source_datasets:irds/nyt", "region:us" ]
2023-01-05T03:47:48+00:00
{"source_datasets": ["irds/nyt"], "task_categories": ["text-retrieval"], "pretty_name": "`nyt/trec-core-2017`", "viewer": false}
2023-01-05T03:47:54+00:00
[]
[]
ff3519ef39dd30d0750f2adff16282c3728259f9
# Dataset Card for `nyt/wksup` The `nyt/wksup` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nyt#nyt/wksup). # Data This dataset provides: - `queries` (i.e., topics); count=1,864,661 - `qrels`: (relevance assessments); count=1,864,661 - For `docs`, use [`irds/nyt`](https://huggingface.co/datasets/irds/nyt) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nyt_wksup', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/nyt_wksup', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{MacAvaney2019Wksup, author = {MacAvaney, Sean and Yates, Andrew and Hui, Kai and Frieder, Ophir}, title = {Content-Based Weak Supervision for Ad-Hoc Re-Ranking}, booktitle = {SIGIR}, year = {2019} } @article{Sandhaus2008Nyt, title={The new york times annotated corpus}, author={Sandhaus, Evan}, journal={Linguistic Data Consortium, Philadelphia}, volume={6}, number={12}, pages={e26752}, year={2008} } ```
irds/nyt_wksup
[ "task_categories:text-retrieval", "source_datasets:irds/nyt", "region:us" ]
2023-01-05T03:47:59+00:00
{"source_datasets": ["irds/nyt"], "task_categories": ["text-retrieval"], "pretty_name": "`nyt/wksup`", "viewer": false}
2023-01-05T03:48:05+00:00
[]
[]
452d6a5f6e151dc2c22b9ad4eaaae5fafe216ebc
# Dataset Card for `nyt/wksup/train` The `nyt/wksup/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nyt#nyt/wksup/train). # Data This dataset provides: - `queries` (i.e., topics); count=1,863,657 - `qrels`: (relevance assessments); count=1,863,657 - For `docs`, use [`irds/nyt`](https://huggingface.co/datasets/irds/nyt) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nyt_wksup_train', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/nyt_wksup_train', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{MacAvaney2019Wksup, author = {MacAvaney, Sean and Yates, Andrew and Hui, Kai and Frieder, Ophir}, title = {Content-Based Weak Supervision for Ad-Hoc Re-Ranking}, booktitle = {SIGIR}, year = {2019} } @article{Sandhaus2008Nyt, title={The new york times annotated corpus}, author={Sandhaus, Evan}, journal={Linguistic Data Consortium, Philadelphia}, volume={6}, number={12}, pages={e26752}, year={2008} } ```
irds/nyt_wksup_train
[ "task_categories:text-retrieval", "source_datasets:irds/nyt", "region:us" ]
2023-01-05T03:48:10+00:00
{"source_datasets": ["irds/nyt"], "task_categories": ["text-retrieval"], "pretty_name": "`nyt/wksup/train`", "viewer": false}
2023-01-05T03:48:16+00:00
[]
[]
d45ff1ab82b3028d7637b37d6ccea966be04866d
# Dataset Card for `nyt/wksup/valid` The `nyt/wksup/valid` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nyt#nyt/wksup/valid). # Data This dataset provides: - `queries` (i.e., topics); count=1,004 - `qrels`: (relevance assessments); count=1,004 - For `docs`, use [`irds/nyt`](https://huggingface.co/datasets/irds/nyt) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nyt_wksup_valid', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/nyt_wksup_valid', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{MacAvaney2019Wksup, author = {MacAvaney, Sean and Yates, Andrew and Hui, Kai and Frieder, Ophir}, title = {Content-Based Weak Supervision for Ad-Hoc Re-Ranking}, booktitle = {SIGIR}, year = {2019} } @article{Sandhaus2008Nyt, title={The new york times annotated corpus}, author={Sandhaus, Evan}, journal={Linguistic Data Consortium, Philadelphia}, volume={6}, number={12}, pages={e26752}, year={2008} } ```
irds/nyt_wksup_valid
[ "task_categories:text-retrieval", "source_datasets:irds/nyt", "region:us" ]
2023-01-05T03:48:21+00:00
{"source_datasets": ["irds/nyt"], "task_categories": ["text-retrieval"], "pretty_name": "`nyt/wksup/valid`", "viewer": false}
2023-01-05T03:48:27+00:00
[]
[]
9cd1241b86e7da023c9b7c34a5910dc5354ef85d
# Dataset Card for `pmc/v1` The `pmc/v1` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/pmc#pmc/v1). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=733,111 This dataset is used by: [`pmc_v1_trec-cds-2014`](https://huggingface.co/datasets/irds/pmc_v1_trec-cds-2014), [`pmc_v1_trec-cds-2015`](https://huggingface.co/datasets/irds/pmc_v1_trec-cds-2015) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/pmc_v1', 'docs') for record in docs: record # {'doc_id': ..., 'journal': ..., 'title': ..., 'abstract': ..., 'body': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
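Since each `docs` record carries a `journal` field, corpus-level filtering is a simple comprehension. A sketch over hypothetical sample records shaped like the docs schema above (`{'doc_id': ..., 'journal': ..., 'title': ..., 'abstract': ..., 'body': ...}`):

```python
# Hypothetical sample records, shaped like the docs schema above.
sample_docs = [
    {"doc_id": "2951242", "journal": "PLoS ONE", "title": "t1",
     "abstract": "a1", "body": "b1"},
    {"doc_id": "2951243", "journal": "BMC Cardiovasc Disord", "title": "t2",
     "abstract": "a2", "body": "b2"},
]

def docs_from_journal(records, journal):
    """Keep only the corpus documents published in the given journal."""
    return [d for d in records if d["journal"] == journal]

plos_ids = [d["doc_id"] for d in docs_from_journal(sample_docs, "PLoS ONE")]
print(plos_ids)
```

With the real corpus (733,111 documents), the same filter can be applied while streaming rather than after materializing a list.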
irds/pmc_v1
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:48:32+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`pmc/v1`", "viewer": false}
2023-01-05T03:48:38+00:00
[]
[]
fe480013625a42a1742f3c821ea8541eb350d01e
# Dataset Card for `pmc/v1/trec-cds-2014` The `pmc/v1/trec-cds-2014` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/pmc#pmc/v1/trec-cds-2014). # Data This dataset provides: - `queries` (i.e., topics); count=30 - `qrels`: (relevance assessments); count=37,949 - For `docs`, use [`irds/pmc_v1`](https://huggingface.co/datasets/irds/pmc_v1) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/pmc_v1_trec-cds-2014', 'queries') for record in queries: record # {'query_id': ..., 'type': ..., 'description': ..., 'summary': ...} qrels = load_dataset('irds/pmc_v1_trec-cds-2014', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Simpson2014TrecCds, title={Overview of the TREC 2014 Clinical Decision Support Track}, author={Matthew S. Simpson and Ellen M. Voorhees and William Hersh}, booktitle={TREC}, year={2014} } ```
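The qrels schema above (including the `iteration` field) maps directly onto the whitespace-separated TREC qrels file format consumed by tools such as trec_eval. A sketch using hypothetical sample records:

```python
# Hypothetical sample records, shaped like the qrels schema above.
sample_qrels = [
    {"query_id": "1", "doc_id": "3382189", "relevance": 2, "iteration": "0"},
    {"query_id": "1", "doc_id": "3458290", "relevance": 0, "iteration": "0"},
]

def to_trec_qrels_lines(records):
    """Render records as '<query_id> <iteration> <doc_id> <relevance>' lines."""
    return [
        f"{r['query_id']} {r['iteration']} {r['doc_id']} {r['relevance']}"
        for r in records
    ]

lines = to_trec_qrels_lines(sample_qrels)
print("\n".join(lines))
```

Writing these lines to a file produces a standard qrels file for offline evaluation.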
irds/pmc_v1_trec-cds-2014
[ "task_categories:text-retrieval", "source_datasets:irds/pmc_v1", "region:us" ]
2023-01-05T03:48:43+00:00
{"source_datasets": ["irds/pmc_v1"], "task_categories": ["text-retrieval"], "pretty_name": "`pmc/v1/trec-cds-2014`", "viewer": false}
2023-01-05T03:48:49+00:00
[]
[]
4997f50b0c869500fc1e8201a908286c199992c1
# Dataset Card for `pmc/v1/trec-cds-2015` The `pmc/v1/trec-cds-2015` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/pmc#pmc/v1/trec-cds-2015). # Data This dataset provides: - `queries` (i.e., topics); count=30 - `qrels`: (relevance assessments); count=37,807 - For `docs`, use [`irds/pmc_v1`](https://huggingface.co/datasets/irds/pmc_v1) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/pmc_v1_trec-cds-2015', 'queries') for record in queries: record # {'query_id': ..., 'type': ..., 'description': ..., 'summary': ...} qrels = load_dataset('irds/pmc_v1_trec-cds-2015', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Roberts2015TrecCds, title={Overview of the TREC 2015 Clinical Decision Support Track}, author={Kirk Roberts and Matthew S. Simpson and Ellen Voorhees and William R. Hersh}, booktitle={TREC}, year={2015} } ```
irds/pmc_v1_trec-cds-2015
[ "task_categories:text-retrieval", "source_datasets:irds/pmc_v1", "region:us" ]
2023-01-05T03:48:55+00:00
{"source_datasets": ["irds/pmc_v1"], "task_categories": ["text-retrieval"], "pretty_name": "`pmc/v1/trec-cds-2015`", "viewer": false}
2023-01-05T03:49:00+00:00
[]
[]
c34de21e098ff1b5732cf35bca07864655db6159
# Dataset Card for `pmc/v2` The `pmc/v2` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/pmc#pmc/v2). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,255,260 This dataset is used by: [`pmc_v2_trec-cds-2016`](https://huggingface.co/datasets/irds/pmc_v2_trec-cds-2016) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/pmc_v2', 'docs') for record in docs: record # {'doc_id': ..., 'journal': ..., 'title': ..., 'abstract': ..., 'body': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/pmc_v2
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:49:06+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`pmc/v2`", "viewer": false}
2023-01-05T03:49:11+00:00
[]
[]
9ca8ce2ee813d3d2f3eefa4c390077092e9373e7
# Dataset Card for `pmc/v2/trec-cds-2016` The `pmc/v2/trec-cds-2016` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/pmc#pmc/v2/trec-cds-2016). # Data This dataset provides: - `queries` (i.e., topics); count=30 - `qrels`: (relevance assessments); count=37,707 - For `docs`, use [`irds/pmc_v2`](https://huggingface.co/datasets/irds/pmc_v2) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/pmc_v2_trec-cds-2016', 'queries') for record in queries: record # {'query_id': ..., 'type': ..., 'note': ..., 'description': ..., 'summary': ...} qrels = load_dataset('irds/pmc_v2_trec-cds-2016', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Roberts2016TrecCds, title={Overview of the TREC 2016 Clinical Decision Support Track}, author={Kirk Roberts and Dina Demner-Fushman and Ellen M. Voorhees and William R. Hersh}, booktitle={TREC}, year={2016} } ```
irds/pmc_v2_trec-cds-2016
[ "task_categories:text-retrieval", "source_datasets:irds/pmc_v2", "region:us" ]
2023-01-05T03:49:17+00:00
{"source_datasets": ["irds/pmc_v2"], "task_categories": ["text-retrieval"], "pretty_name": "`pmc/v2/trec-cds-2016`", "viewer": false}
2023-01-05T03:49:23+00:00
[]
[]
2412561bf98177e9f67d82e07e337723c2a5b67e
# Dataset Card for `argsme/2020-04-01/touche-2020-task-1` The `argsme/2020-04-01/touche-2020-task-1` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/argsme#argsme/2020-04-01/touche-2020-task-1). # Data This dataset provides: - `queries` (i.e., topics); count=49 - `qrels`: (relevance assessments); count=2,298 This dataset is used by: [`argsme_2020-04-01_touche-2020-task-1_uncorrected`](https://huggingface.co/datasets/irds/argsme_2020-04-01_touche-2020-task-1_uncorrected) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/argsme_2020-04-01_touche-2020-task-1', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/argsme_2020-04-01_touche-2020-task-1', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Bondarenko2020Touche, address = {Berlin Heidelberg New York}, author = {Alexander Bondarenko and Maik Fr{\"o}be and Meriem Beloucif and Lukas Gienapp and Yamen Ajjour and Alexander Panchenko and Chris Biemann and Benno Stein and Henning Wachsmuth and Martin Potthast and Matthias Hagen}, booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction. 
11th International Conference of the CLEF Association (CLEF 2020)}, doi = {10.1007/978-3-030-58219-7\_26}, editor = {Avi Arampatzis and Evangelos Kanoulas and Theodora Tsikrika and Stefanos Vrochidis and Hideo Joho and Christina Lioma and Carsten Eickhoff and Aur{\'e}lie N{\'e}v{\'e}ol and Linda Cappellato and Nicola Ferro}, month = sep, pages = {384-395}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, site = {Thessaloniki, Greece}, title = {{Overview of Touch{\'e} 2020: Argument Retrieval}}, url = {https://link.springer.com/chapter/10.1007/978-3-030-58219-7_26}, volume = 12260, year = 2020, } @inproceedings{Wachsmuth2017Quality, author = {Henning Wachsmuth and Nona Naderi and Yufang Hou and Yonatan Bilu and Vinodkumar Prabhakaran and Tim Alberdingk Thijm and Graeme Hirst and Benno Stein}, booktitle = {15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017)}, editor = {Phil Blunsom and Alexander Koller and Mirella Lapata}, month = apr, pages = {176-187}, site = {Valencia, Spain}, title = {{Computational Argumentation Quality Assessment in Natural Language}}, url = {http://aclweb.org/anthology/E17-1017}, year = 2017 } ```
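Topic records with `title`/`description`/`narrative` fields, as in the card above, are convenient to index by `query_id` when rendering or debugging a run. A sketch with hypothetical sample topics (the titles here are illustrative, not taken from the actual topic file):

```python
# Hypothetical sample records, shaped like the queries schema above:
# {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...}
sample_queries = [
    {"query_id": "1", "title": "Should teachers get tenure?",
     "description": "d1", "narrative": "n1"},
    {"query_id": "2", "title": "Is vaping safe?",
     "description": "d2", "narrative": "n2"},
]

# Index topics by query_id for O(1) title lookups.
titles = {q["query_id"]: q["title"] for q in sample_queries}
print(titles["1"])
```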
irds/argsme_2020-04-01_touche-2020-task-1
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:49:28+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`argsme/2020-04-01/touche-2020-task-1`", "viewer": false}
2023-01-05T03:49:34+00:00
[]
[]
1aac6bf8d4a822d1252d3a72243c8ea1cdd740b3
# Dataset Card for `clueweb12/touche-2020-task-2`

The `clueweb12/touche-2020-task-2` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/touche-2020-task-2).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=50
 - `qrels`: (relevance assessments); count=1,783

 - For `docs`, use [`irds/clueweb12`](https://huggingface.co/datasets/irds/clueweb12)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/clueweb12_touche-2020-task-2', 'queries')
for record in queries:
    record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...}

qrels = load_dataset('irds/clueweb12_touche-2020-task-2', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🀗 Dataset format.

## Citation Information

```
@inproceedings{Bondarenko2020Touche,
  address = {Berlin Heidelberg New York},
  author = {Alexander Bondarenko and Maik Fr{\"o}be and Meriem Beloucif and Lukas Gienapp and Yamen Ajjour and Alexander Panchenko and Chris Biemann and Benno Stein and Henning Wachsmuth and Martin Potthast and Matthias Hagen},
  booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction. 11th International Conference of the CLEF Association (CLEF 2020)},
  doi = {10.1007/978-3-030-58219-7\_26},
  editor = {Avi Arampatzis and Evangelos Kanoulas and Theodora Tsikrika and Stefanos Vrochidis and Hideo Joho and Christina Lioma and Carsten Eickhoff and Aur{\'e}lie N{\'e}v{\'e}ol and Linda Cappellato and Nicola Ferro},
  month = sep,
  pages = {384-395},
  publisher = {Springer},
  series = {Lecture Notes in Computer Science},
  site = {Thessaloniki, Greece},
  title = {{Overview of Touch{\'e} 2020: Argument Retrieval}},
  url = {https://link.springer.com/chapter/10.1007/978-3-030-58219-7_26},
  volume = 12260,
  year = 2020,
}
@inproceedings{Braunstain2016Support,
  author = {Liora Braunstain and Oren Kurland and David Carmel and Idan Szpektor and Anna Shtok},
  editor = {Nicola Ferro and Fabio Crestani and Marie{-}Francine Moens and Josiane Mothe and Fabrizio Silvestri and Giorgio Maria Di Nunzio and Claudia Hauff and Gianmaria Silvello},
  title = {Supporting Human Answers for Advice-Seeking Questions in {CQA} Sites},
  booktitle = {Advances in Information Retrieval - 38th European Conference on {IR} Research, {ECIR} 2016, Padua, Italy, March 20-23, 2016. Proceedings},
  series = {Lecture Notes in Computer Science},
  volume = {9626},
  pages = {129--141},
  publisher = {Springer},
  year = {2016},
  doi = {10.1007/978-3-319-30671-1\_10},
}
@inproceedings{Rafalak2014Credibility,
  author = {Maria Rafalak and Katarzyna Abramczuk and Adam Wierzbicki},
  editor = {Chin{-}Wan Chung and Andrei Z. Broder and Kyuseok Shim and Torsten Suel},
  title = {Incredible: is (almost) all web content trustworthy? analysis of psychological factors related to website credibility evaluation},
  booktitle = {23rd International World Wide Web Conference, {WWW} '14, Seoul, Republic of Korea, April 7-11, 2014, Companion Volume},
  pages = {1117--1122},
  publisher = {{ACM}},
  year = {2014},
  doi = {10.1145/2567948.2578997},
}
```
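The `qrels` records returned by `load_dataset` are plain dicts, so quick sanity checks need only the standard library. A minimal sketch, using made-up toy records in place of the downloaded split (the field names follow the record layout shown in the Usage section; the values are illustrative only):

```python
from collections import Counter

# Toy records standing in for the downloaded 'qrels' split; the real split
# carries the same fields: query_id, doc_id, relevance, iteration.
qrels = [
    {'query_id': '1', 'doc_id': 'clueweb12-0001', 'relevance': 2, 'iteration': '0'},
    {'query_id': '1', 'doc_id': 'clueweb12-0002', 'relevance': 0, 'iteration': '0'},
    {'query_id': '2', 'doc_id': 'clueweb12-0003', 'relevance': 1, 'iteration': '0'},
]

# Number of judged documents per topic.
judged_per_query = Counter(rec['query_id'] for rec in qrels)

# Documents judged relevant (relevance > 0) per topic.
relevant_per_query = Counter(
    rec['query_id'] for rec in qrels if rec['relevance'] > 0
)
```

The same two-line aggregation works unchanged on the real split once it has been downloaded.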
irds/clueweb12_touche-2020-task-2
[ "task_categories:text-retrieval", "source_datasets:irds/clueweb12", "region:us" ]
2023-01-05T03:49:39+00:00
{"source_datasets": ["irds/clueweb12"], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/touche-2020-task-2`", "viewer": false}
2023-01-05T03:49:45+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/clueweb12 #region-us
# Dataset Card for 'clueweb12/touche-2020-task-2' The 'clueweb12/touche-2020-task-2' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - 'qrels': (relevance assessments); count=1,783 - For 'docs', use 'irds/clueweb12' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb12/touche-2020-task-2'\n\nThe 'clueweb12/touche-2020-task-2' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=1,783\n\n - For 'docs', use 'irds/clueweb12'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/clueweb12 #region-us \n", "# Dataset Card for 'clueweb12/touche-2020-task-2'\n\nThe 'clueweb12/touche-2020-task-2' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=1,783\n\n - For 'docs', use 'irds/clueweb12'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
6939f0f803e794d67cc83a7f2bc4fac238a7eeb8
# Dataset Card for `argsme/2020-04-01/touche-2021-task-1`

The `argsme/2020-04-01/touche-2021-task-1` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/argsme#argsme/2020-04-01/touche-2021-task-1).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=50
 - `qrels`: (relevance assessments); count=3,711

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/argsme_2020-04-01_touche-2021-task-1', 'queries')
for record in queries:
    record # {'query_id': ..., 'title': ...}

qrels = load_dataset('irds/argsme_2020-04-01_touche-2021-task-1', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'quality': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🀗 Dataset format.

## Citation Information

```
@inproceedings{Bondarenko2021Touche,
  address = {Berlin Heidelberg New York},
  author = {Alexander Bondarenko and Lukas Gienapp and Maik Fr{\"o}be and Meriem Beloucif and Yamen Ajjour and Alexander Panchenko and Chris Biemann and Benno Stein and Henning Wachsmuth and Martin Potthast and Matthias Hagen},
  booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction. 12th International Conference of the CLEF Association (CLEF 2021)},
  doi = {10.1007/978-3-030-85251-1\_28},
  editor = {{K. Sel{\c{c}}uk} Candan and Bogdan Ionescu and Lorraine Goeuriot and Henning M{\"u}ller and Alexis Joly and Maria Maistro and Florina Piroi and Guglielmo Faggioli and Nicola Ferro},
  month = sep,
  pages = {450-467},
  publisher = {Springer},
  series = {Lecture Notes in Computer Science},
  site = {Bucharest, Romania},
  title = {{Overview of Touch{\'e} 2021: Argument Retrieval}},
  url = {https://link.springer.com/chapter/10.1007/978-3-030-85251-1_28},
  volume = 12880,
  year = 2021,
}
```
irds/argsme_2020-04-01_touche-2021-task-1
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:49:51+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`argsme/2020-04-01/touche-2021-task-1`", "viewer": false}
2023-01-05T03:49:56+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'argsme/2020-04-01/touche-2021-task-1' The 'argsme/2020-04-01/touche-2021-task-1' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - 'qrels': (relevance assessments); count=3,711 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'argsme/2020-04-01/touche-2021-task-1'\n\nThe 'argsme/2020-04-01/touche-2021-task-1' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=3,711", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'argsme/2020-04-01/touche-2021-task-1'\n\nThe 'argsme/2020-04-01/touche-2021-task-1' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=3,711", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
a4a3969a7fd79461be22346d0501c0d757f5afcc
# Dataset Card for `clueweb12/touche-2021-task-2`

The `clueweb12/touche-2021-task-2` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/touche-2021-task-2).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=50
 - `qrels`: (relevance assessments); count=2,076

 - For `docs`, use [`irds/clueweb12`](https://huggingface.co/datasets/irds/clueweb12)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/clueweb12_touche-2021-task-2', 'queries')
for record in queries:
    record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...}

qrels = load_dataset('irds/clueweb12_touche-2021-task-2', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'quality': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🀗 Dataset format.

## Citation Information

```
@inproceedings{Bondarenko2021Touche,
  address = {Berlin Heidelberg New York},
  author = {Alexander Bondarenko and Lukas Gienapp and Maik Fr{\"o}be and Meriem Beloucif and Yamen Ajjour and Alexander Panchenko and Chris Biemann and Benno Stein and Henning Wachsmuth and Martin Potthast and Matthias Hagen},
  booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction. 12th International Conference of the CLEF Association (CLEF 2021)},
  doi = {10.1007/978-3-030-85251-1\_28},
  editor = {{K. Sel{\c{c}}uk} Candan and Bogdan Ionescu and Lorraine Goeuriot and Henning M{\"u}ller and Alexis Joly and Maria Maistro and Florina Piroi and Guglielmo Faggioli and Nicola Ferro},
  month = sep,
  pages = {450-467},
  publisher = {Springer},
  series = {Lecture Notes in Computer Science},
  site = {Bucharest, Romania},
  title = {{Overview of Touch{\'e} 2021: Argument Retrieval}},
  url = {https://link.springer.com/chapter/10.1007/978-3-030-85251-1_28},
  volume = 12880,
  year = 2021,
}
```
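Because each qrels record here carries a graded `relevance` alongside a separate `quality` grade, a common first step is collapsing the judgments to one value per topic. A minimal sketch with toy records in place of the downloaded split (same field names as the record layout above; values invented for illustration):

```python
# Toy qrels mimicking the graded judgments described above; the real split
# ships thousands of records with these same fields.
qrels = [
    {'query_id': '51', 'doc_id': 'a', 'relevance': 2, 'quality': 1, 'iteration': '0'},
    {'query_id': '51', 'doc_id': 'b', 'relevance': 1, 'quality': 2, 'iteration': '0'},
    {'query_id': '52', 'doc_id': 'c', 'relevance': 0, 'quality': 0, 'iteration': '0'},
]

best = {}  # query_id -> highest relevance grade seen for that topic
for rec in qrels:
    q = rec['query_id']
    best[q] = max(best.get(q, 0), rec['relevance'])
```

Swapping `rec['relevance']` for `rec['quality']` gives the same per-topic summary over the quality grades instead.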
irds/clueweb12_touche-2021-task-2
[ "task_categories:text-retrieval", "source_datasets:irds/clueweb12", "region:us" ]
2023-01-05T03:50:02+00:00
{"source_datasets": ["irds/clueweb12"], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/touche-2021-task-2`", "viewer": false}
2023-01-05T03:50:08+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/clueweb12 #region-us
# Dataset Card for 'clueweb12/touche-2021-task-2' The 'clueweb12/touche-2021-task-2' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - 'qrels': (relevance assessments); count=2,076 - For 'docs', use 'irds/clueweb12' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb12/touche-2021-task-2'\n\nThe 'clueweb12/touche-2021-task-2' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=2,076\n\n - For 'docs', use 'irds/clueweb12'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/clueweb12 #region-us \n", "# Dataset Card for 'clueweb12/touche-2021-task-2'\n\nThe 'clueweb12/touche-2021-task-2' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=2,076\n\n - For 'docs', use 'irds/clueweb12'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
1a626c69e2f77d913fe16acc8d2d46c1297ca9a2
# Dataset Card for `argsme/2020-04-01/processed/touche-2022-task-1`

The `argsme/2020-04-01/processed/touche-2022-task-1` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/argsme#argsme/2020-04-01/processed/touche-2022-task-1).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=50
 - `qrels`: (relevance assessments); count=6,841

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/argsme_2020-04-01_processed_touche-2022-task-1', 'queries')
for record in queries:
    record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...}

qrels = load_dataset('irds/argsme_2020-04-01_processed_touche-2022-task-1', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'quality': ..., 'coherence': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🀗 Dataset format.

## Citation Information

```
@inproceedings{Bondarenko2022Touche,
  address = {Berlin Heidelberg New York},
  author = {Alexander Bondarenko and Maik Fr{\"o}be and Johannes Kiesel and Shahbaz Syed and Timon Gurcke and Meriem Beloucif and Alexander Panchenko and Chris Biemann and Benno Stein and Henning Wachsmuth and Martin Potthast and Matthias Hagen},
  booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction. 13th International Conference of the CLEF Association (CLEF 2022)},
  editor = {Alberto Barr{\'o}n-Cede{\~n}o and Giovanni Da San Martino and Mirko Degli Esposti and Fabrizio Sebastiani and Craig Macdonald and Gabriella Pasi and Allan Hanbury and Martin Potthast and Guglielmo Faggioli and Nicola Ferro},
  month = sep,
  numpages = 29,
  publisher = {Springer},
  series = {Lecture Notes in Computer Science},
  site = {Bologna, Italy},
  title = {{Overview of Touch{\'e} 2022: Argument Retrieval}},
  year = 2022
}
```
irds/argsme_2020-04-01_processed_touche-2022-task-1
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:50:13+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`argsme/2020-04-01/processed/touche-2022-task-1`", "viewer": false}
2023-01-05T03:50:19+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'argsme/2020-04-01/processed/touche-2022-task-1' The 'argsme/2020-04-01/processed/touche-2022-task-1' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - 'qrels': (relevance assessments); count=6,841 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'argsme/2020-04-01/processed/touche-2022-task-1'\n\nThe 'argsme/2020-04-01/processed/touche-2022-task-1' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=6,841", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'argsme/2020-04-01/processed/touche-2022-task-1'\n\nThe 'argsme/2020-04-01/processed/touche-2022-task-1' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=6,841", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
358763bc770a797441a4b9ecc28294944a3e21a8
# Dataset Card for `touche-image/2022-06-13/touche-2022-task-3`

The `touche-image/2022-06-13/touche-2022-task-3` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/touche-image#touche-image/2022-06-13/touche-2022-task-3).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=50
 - `qrels`: (relevance assessments); count=19,821

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/touche-image_2022-06-13_touche-2022-task-3', 'queries')
for record in queries:
    record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...}

qrels = load_dataset('irds/touche-image_2022-06-13_touche-2022-task-3', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🀗 Dataset format.

## Citation Information

```
@inproceedings{Bondarenko2022Touche,
  address = {Berlin Heidelberg New York},
  author = {Alexander Bondarenko and Maik Fr{\"o}be and Johannes Kiesel and Shahbaz Syed and Timon Gurcke and Meriem Beloucif and Alexander Panchenko and Chris Biemann and Benno Stein and Henning Wachsmuth and Martin Potthast and Matthias Hagen},
  booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction. 13th International Conference of the CLEF Association (CLEF 2022)},
  editor = {Alberto Barr{\'o}n-Cede{\~n}o and Giovanni Da San Martino and Mirko Degli Esposti and Fabrizio Sebastiani and Craig Macdonald and Gabriella Pasi and Allan Hanbury and Martin Potthast and Guglielmo Faggioli and Nicola Ferro},
  month = sep,
  numpages = 29,
  publisher = {Springer},
  series = {Lecture Notes in Computer Science},
  site = {Bologna, Italy},
  title = {{Overview of Touch{\'e} 2022: Argument Retrieval}},
  year = 2022
}
@inproceedings{Kiesel2021Image,
  author = {Johannes Kiesel and Nico Reichenbach and Benno Stein and Martin Potthast},
  booktitle = {8th Workshop on Argument Mining (ArgMining 2021) at EMNLP},
  doi = {10.18653/v1/2021.argmining-1.4},
  editor = {Khalid Al-Khatib and Yufang Hou and Manfred Stede},
  month = nov,
  pages = {36-45},
  publisher = {Association for Computational Linguistics},
  site = {Punta Cana, Dominican Republic},
  title = {{Image Retrieval for Arguments Using Stance-Aware Query Expansion}},
  url = {https://aclanthology.org/2021.argmining-1.4/},
  year = 2021
}
@inproceedings{Dimitrov2021SemEval,
  author = {Dimitar Dimitrov and Bishr Bin Ali and Shaden Shaar and Firoj Alam and Fabrizio Silvestri and Hamed Firooz and Preslav Nakov and Giovanni Da San Martino},
  editor = {Alexis Palmer and Nathan Schneider and Natalie Schluter and Guy Emerson and Aur{\'{e}}lie Herbelot and Xiaodan Zhu},
  title = {SemEval-2021 Task 6: Detection of Persuasion Techniques in Texts and Images},
  booktitle = {Proceedings of the 15th International Workshop on Semantic Evaluation, SemEval@ACL/IJCNLP 2021, Virtual Event / Bangkok, Thailand, August 5-6, 2021},
  pages = {70--98},
  publisher = {Association for Computational Linguistics},
  year = {2021},
  doi = {10.18653/v1/2021.semeval-1.7},
}
@inproceedings{Yanai2007Image,
  author = {Keiji Yanai},
  editor = {Carey L. Williamson and Mary Ellen Zurko and Peter F. Patel{-}Schneider and Prashant J. Shenoy},
  title = {Image collector {III:} a web image-gathering system with bag-of-keypoints},
  booktitle = {Proceedings of the 16th International Conference on World Wide Web, {WWW} 2007, Banff, Alberta, Canada, May 8-12, 2007},
  pages = {1295--1296},
  publisher = {{ACM}},
  year = {2007},
  doi = {10.1145/1242572.1242816},
}
```
irds/touche-image_2022-06-13_touche-2022-task-3
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:50:24+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`touche-image/2022-06-13/touche-2022-task-3`", "viewer": false}
2023-01-05T03:50:30+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'touche-image/2022-06-13/touche-2022-task-3' The 'touche-image/2022-06-13/touche-2022-task-3' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - 'qrels': (relevance assessments); count=19,821 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'touche-image/2022-06-13/touche-2022-task-3'\n\nThe 'touche-image/2022-06-13/touche-2022-task-3' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=19,821", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'touche-image/2022-06-13/touche-2022-task-3'\n\nThe 'touche-image/2022-06-13/touche-2022-task-3' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=19,821", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
6df8436fb6c1dbc6ab79f263347a321e043f6244
# Dataset Card for `argsme/1.0/touche-2020-task-1/uncorrected`

The `argsme/1.0/touche-2020-task-1/uncorrected` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/argsme#argsme/1.0/touche-2020-task-1/uncorrected).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=49
 - `qrels`: (relevance assessments); count=2,964

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/argsme_1.0_touche-2020-task-1_uncorrected', 'queries')
for record in queries:
    record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...}

qrels = load_dataset('irds/argsme_1.0_touche-2020-task-1_uncorrected', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🀗 Dataset format.

## Citation Information

```
@inproceedings{Bondarenko2020Touche,
  address = {Berlin Heidelberg New York},
  author = {Alexander Bondarenko and Maik Fr{\"o}be and Meriem Beloucif and Lukas Gienapp and Yamen Ajjour and Alexander Panchenko and Chris Biemann and Benno Stein and Henning Wachsmuth and Martin Potthast and Matthias Hagen},
  booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction. 11th International Conference of the CLEF Association (CLEF 2020)},
  doi = {10.1007/978-3-030-58219-7\_26},
  editor = {Avi Arampatzis and Evangelos Kanoulas and Theodora Tsikrika and Stefanos Vrochidis and Hideo Joho and Christina Lioma and Carsten Eickhoff and Aur{\'e}lie N{\'e}v{\'e}ol and Linda Cappellato and Nicola Ferro},
  month = sep,
  pages = {384-395},
  publisher = {Springer},
  series = {Lecture Notes in Computer Science},
  site = {Thessaloniki, Greece},
  title = {{Overview of Touch{\'e} 2020: Argument Retrieval}},
  url = {https://link.springer.com/chapter/10.1007/978-3-030-58219-7_26},
  volume = 12260,
  year = 2020,
}
@inproceedings{Wachsmuth2017Quality,
  author = {Henning Wachsmuth and Nona Naderi and Yufang Hou and Yonatan Bilu and Vinodkumar Prabhakaran and Tim Alberdingk Thijm and Graeme Hirst and Benno Stein},
  booktitle = {15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017)},
  editor = {Phil Blunsom and Alexander Koller and Mirella Lapata},
  month = apr,
  pages = {176-187},
  site = {Valencia, Spain},
  title = {{Computational Argumentation Quality Assessment in Natural Language}},
  url = {http://aclweb.org/anthology/E17-1017},
  year = 2017
}
```
irds/argsme_1.0_touche-2020-task-1_uncorrected
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:50:35+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`argsme/1.0/touche-2020-task-1/uncorrected`", "viewer": false}
2023-01-05T03:50:41+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'argsme/1.0/touche-2020-task-1/uncorrected' The 'argsme/1.0/touche-2020-task-1/uncorrected' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=49 - 'qrels': (relevance assessments); count=2,964 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'argsme/1.0/touche-2020-task-1/uncorrected'\n\nThe 'argsme/1.0/touche-2020-task-1/uncorrected' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=49\n - 'qrels': (relevance assessments); count=2,964", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'argsme/1.0/touche-2020-task-1/uncorrected'\n\nThe 'argsme/1.0/touche-2020-task-1/uncorrected' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=49\n - 'qrels': (relevance assessments); count=2,964", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
5cf1db3fbb635c2b2d177fdfc5ce6b390f8a1222
# Dataset Card for `argsme/2020-04-01/touche-2020-task-1/uncorrected`

The `argsme/2020-04-01/touche-2020-task-1/uncorrected` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/argsme#argsme/2020-04-01/touche-2020-task-1/uncorrected).

# Data

This dataset provides:
 - `qrels`: (relevance assessments); count=2,298

 - For `queries`, use [`irds/argsme_2020-04-01_touche-2020-task-1`](https://huggingface.co/datasets/irds/argsme_2020-04-01_touche-2020-task-1)

## Usage

```python
from datasets import load_dataset

qrels = load_dataset('irds/argsme_2020-04-01_touche-2020-task-1_uncorrected', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🀗 Dataset format.

## Citation Information

```
@inproceedings{Bondarenko2020Touche,
  address = {Berlin Heidelberg New York},
  author = {Alexander Bondarenko and Maik Fr{\"o}be and Meriem Beloucif and Lukas Gienapp and Yamen Ajjour and Alexander Panchenko and Chris Biemann and Benno Stein and Henning Wachsmuth and Martin Potthast and Matthias Hagen},
  booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction. 11th International Conference of the CLEF Association (CLEF 2020)},
  doi = {10.1007/978-3-030-58219-7\_26},
  editor = {Avi Arampatzis and Evangelos Kanoulas and Theodora Tsikrika and Stefanos Vrochidis and Hideo Joho and Christina Lioma and Carsten Eickhoff and Aur{\'e}lie N{\'e}v{\'e}ol and Linda Cappellato and Nicola Ferro},
  month = sep,
  pages = {384-395},
  publisher = {Springer},
  series = {Lecture Notes in Computer Science},
  site = {Thessaloniki, Greece},
  title = {{Overview of Touch{\'e} 2020: Argument Retrieval}},
  url = {https://link.springer.com/chapter/10.1007/978-3-030-58219-7_26},
  volume = 12260,
  year = 2020,
}
@inproceedings{Wachsmuth2017Quality,
  author = {Henning Wachsmuth and Nona Naderi and Yufang Hou and Yonatan Bilu and Vinodkumar Prabhakaran and Tim Alberdingk Thijm and Graeme Hirst and Benno Stein},
  booktitle = {15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017)},
  editor = {Phil Blunsom and Alexander Koller and Mirella Lapata},
  month = apr,
  pages = {176-187},
  site = {Valencia, Spain},
  title = {{Computational Argumentation Quality Assessment in Natural Language}},
  url = {http://aclweb.org/anthology/E17-1017},
  year = 2017
}
```
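Since this dataset ships only the qrels and the topics live in the sibling dataset named above, the two splits are typically joined on `query_id`. A minimal sketch with invented stand-in records (the field names match the record layouts on the two cards; the topic title and doc ids are illustrative only):

```python
# Toy stand-ins: 'queries' would come from the sibling dataset named above,
# 'qrels' from this one; both share the query_id key.
queries = [{'query_id': '1', 'title': 'Should teachers get tenure?'}]
qrels = [
    {'query_id': '1', 'doc_id': 'arg-1', 'relevance': 2, 'iteration': '0'},
    {'query_id': '1', 'doc_id': 'arg-2', 'relevance': 0, 'iteration': '0'},
]

# Index topics by id, then attach the title to each judgment.
titles = {q['query_id']: q['title'] for q in queries}
joined = [
    (titles[rec['query_id']], rec['doc_id'], rec['relevance'])
    for rec in qrels
    if rec['query_id'] in titles
]
```

The guard `if rec['query_id'] in titles` keeps the join robust if a judgment references a topic that is missing from the loaded queries split.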
irds/argsme_2020-04-01_touche-2020-task-1_uncorrected
[ "task_categories:text-retrieval", "source_datasets:irds/argsme_2020-04-01_touche-2020-task-1", "region:us" ]
2023-01-05T03:50:47+00:00
{"source_datasets": ["irds/argsme_2020-04-01_touche-2020-task-1"], "task_categories": ["text-retrieval"], "pretty_name": "`argsme/2020-04-01/touche-2020-task-1/uncorrected`", "viewer": false}
2023-01-05T03:50:52+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/argsme_2020-04-01_touche-2020-task-1 #region-us
# Dataset Card for 'argsme/2020-04-01/touche-2020-task-1/uncorrected' The 'argsme/2020-04-01/touche-2020-task-1/uncorrected' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'qrels': (relevance assessments); count=2,298 - For 'queries', use 'irds/argsme_2020-04-01_touche-2020-task-1' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'argsme/2020-04-01/touche-2020-task-1/uncorrected'\n\nThe 'argsme/2020-04-01/touche-2020-task-1/uncorrected' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'qrels': (relevance assessments); count=2,298\n\n - For 'queries', use 'irds/argsme_2020-04-01_touche-2020-task-1'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/argsme_2020-04-01_touche-2020-task-1 #region-us \n", "# Dataset Card for 'argsme/2020-04-01/touche-2020-task-1/uncorrected'\n\nThe 'argsme/2020-04-01/touche-2020-task-1/uncorrected' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'qrels': (relevance assessments); count=2,298\n\n - For 'queries', use 'irds/argsme_2020-04-01_touche-2020-task-1'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
e5feb990f58cca9f31dd2ecff574884a3badc30b
# Dataset Card for `clueweb12/touche-2022-task-2/expanded-doc-t5-query` The `clueweb12/touche-2022-task-2/expanded-doc-t5-query` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/touche-2022-task-2/expanded-doc-t5-query). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=868,655 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/clueweb12_touche-2022-task-2_expanded-doc-t5-query', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'chatnoir_url': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Bondarenko2022Touche, address = {Berlin Heidelberg New York}, author = {Alexander Bondarenko and Maik Fr{\"o}be and Johannes Kiesel and Shahbaz Syed and Timon Gurcke and Meriem Beloucif and Alexander Panchenko and Chris Biemann and Benno Stein and Henning Wachsmuth and Martin Potthast and Matthias Hagen}, booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction. 13th International Conference of the CLEF Association (CLEF 2022)}, editor = {Alberto Barr{\'o}n-Cede{\~n}o and Giovanni Da San Martino and Mirko Degli Esposti and Fabrizio Sebastiani and Craig Macdonald and Gabriella Pasi and Allan Hanbury and Martin Potthast and Guglielmo Faggioli and Nicola Ferro}, month = sep, numpages = 29, publisher = {Springer}, series = {Lecture Notes in Computer Science}, site = {Bologna, Italy}, title = {{Overview of Touch{\'e} 2022: Argument Retrieval}}, year = 2022 } ```
irds/clueweb12_touche-2022-task-2_expanded-doc-t5-query
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:50:58+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/touche-2022-task-2/expanded-doc-t5-query`", "viewer": false}
2023-01-05T03:51:03+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'clueweb12/touche-2022-task-2/expanded-doc-t5-query' The 'clueweb12/touche-2022-task-2/expanded-doc-t5-query' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=868,655 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'clueweb12/touche-2022-task-2/expanded-doc-t5-query'\n\nThe 'clueweb12/touche-2022-task-2/expanded-doc-t5-query' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=868,655", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'clueweb12/touche-2022-task-2/expanded-doc-t5-query'\n\nThe 'clueweb12/touche-2022-task-2/expanded-doc-t5-query' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=868,655", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
d9f57041624d76b852c44aa90b6df8ab9f54b18e
# Dataset Card for `trec-arabic` The `trec-arabic` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-arabic#trec-arabic). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=383,872 This dataset is used by: [`trec-arabic_ar2001`](https://huggingface.co/datasets/irds/trec-arabic_ar2001), [`trec-arabic_ar2002`](https://huggingface.co/datasets/irds/trec-arabic_ar2002) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/trec-arabic', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'marked_up_doc': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @misc{Graff2001Arabic, title={Arabic Newswire Part 1 LDC2001T55}, author={Graff, David and Walker, Kevin}, year={2001}, url={https://catalog.ldc.upenn.edu/LDC2001T55}, publisher={Linguistic Data Consortium} } ```
irds/trec-arabic
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:51:09+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`trec-arabic`", "viewer": false}
2023-01-05T03:51:15+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'trec-arabic' The 'trec-arabic' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=383,872 This dataset is used by: 'trec-arabic_ar2001', 'trec-arabic_ar2002' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'trec-arabic'\n\nThe 'trec-arabic' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=383,872\n\n\nThis dataset is used by: 'trec-arabic_ar2001', 'trec-arabic_ar2002'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'trec-arabic'\n\nThe 'trec-arabic' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=383,872\n\n\nThis dataset is used by: 'trec-arabic_ar2001', 'trec-arabic_ar2002'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
00012ae7b44b7db5b8843da36594af763491e000
# Dataset Card for `trec-arabic/ar2001` The `trec-arabic/ar2001` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-arabic#trec-arabic/ar2001). # Data This dataset provides: - `queries` (i.e., topics); count=25 - `qrels`: (relevance assessments); count=22,744 - For `docs`, use [`irds/trec-arabic`](https://huggingface.co/datasets/irds/trec-arabic) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/trec-arabic_ar2001', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/trec-arabic_ar2001', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Gey2001Arabic, title={The TREC-2001 Cross-Language Information Retrieval Track: Searching Arabic using English, French or Arabic Queries}, author={Fredric Gey and Douglas Oard}, booktitle={TREC}, year={2001} } @misc{Graff2001Arabic, title={Arabic Newswire Part 1 LDC2001T55}, author={Graff, David and Walker, Kevin}, year={2001}, url={https://catalog.ldc.upenn.edu/LDC2001T55}, publisher={Linguistic Data Consortium} } ```
irds/trec-arabic_ar2001
[ "task_categories:text-retrieval", "source_datasets:irds/trec-arabic", "region:us" ]
2023-01-05T03:51:20+00:00
{"source_datasets": ["irds/trec-arabic"], "task_categories": ["text-retrieval"], "pretty_name": "`trec-arabic/ar2001`", "viewer": false}
2023-01-05T03:51:26+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/trec-arabic #region-us
# Dataset Card for 'trec-arabic/ar2001' The 'trec-arabic/ar2001' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=25 - 'qrels': (relevance assessments); count=22,744 - For 'docs', use 'irds/trec-arabic' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'trec-arabic/ar2001'\n\nThe 'trec-arabic/ar2001' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=25\n - 'qrels': (relevance assessments); count=22,744\n\n - For 'docs', use 'irds/trec-arabic'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/trec-arabic #region-us \n", "# Dataset Card for 'trec-arabic/ar2001'\n\nThe 'trec-arabic/ar2001' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=25\n - 'qrels': (relevance assessments); count=22,744\n\n - For 'docs', use 'irds/trec-arabic'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
edc95002c110b3d7d12da315316d9776a7560045
# Dataset Card for `trec-arabic/ar2002` The `trec-arabic/ar2002` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-arabic#trec-arabic/ar2002). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=38,432 - For `docs`, use [`irds/trec-arabic`](https://huggingface.co/datasets/irds/trec-arabic) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/trec-arabic_ar2002', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/trec-arabic_ar2002', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Gey2002Arabic, title={The TREC-2002 Arabic/English CLIR Track}, author={Fredric Gey and Douglas Oard}, booktitle={TREC}, year={2002} } @misc{Graff2001Arabic, title={Arabic Newswire Part 1 LDC2001T55}, author={Graff, David and Walker, Kevin}, year={2001}, url={https://catalog.ldc.upenn.edu/LDC2001T55}, publisher={Linguistic Data Consortium} } ```
irds/trec-arabic_ar2002
[ "task_categories:text-retrieval", "source_datasets:irds/trec-arabic", "region:us" ]
2023-01-05T03:51:31+00:00
{"source_datasets": ["irds/trec-arabic"], "task_categories": ["text-retrieval"], "pretty_name": "`trec-arabic/ar2002`", "viewer": false}
2023-01-05T03:51:37+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/trec-arabic #region-us
# Dataset Card for 'trec-arabic/ar2002' The 'trec-arabic/ar2002' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - 'qrels': (relevance assessments); count=38,432 - For 'docs', use 'irds/trec-arabic' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'trec-arabic/ar2002'\n\nThe 'trec-arabic/ar2002' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=38,432\n\n - For 'docs', use 'irds/trec-arabic'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/trec-arabic #region-us \n", "# Dataset Card for 'trec-arabic/ar2002'\n\nThe 'trec-arabic/ar2002' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=38,432\n\n - For 'docs', use 'irds/trec-arabic'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
ec1c7240bfdcb708b21cbf57ac7ab43aedf85b0e
# Dataset Card for `trec-mandarin` The `trec-mandarin` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-mandarin#trec-mandarin). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=164,789 This dataset is used by: [`trec-mandarin_trec5`](https://huggingface.co/datasets/irds/trec-mandarin_trec5), [`trec-mandarin_trec6`](https://huggingface.co/datasets/irds/trec-mandarin_trec6) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/trec-mandarin', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'marked_up_doc': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @misc{Rogers2000Mandarin, title={TREC Mandarin LDC2000T52}, author={Rogers, Willie}, year={2000}, url={https://catalog.ldc.upenn.edu/LDC2000T52}, publisher={Linguistic Data Consortium} } ```
irds/trec-mandarin
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:51:42+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`trec-mandarin`", "viewer": false}
2023-01-05T03:51:48+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'trec-mandarin' The 'trec-mandarin' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=164,789 This dataset is used by: 'trec-mandarin_trec5', 'trec-mandarin_trec6' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'trec-mandarin'\n\nThe 'trec-mandarin' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=164,789\n\n\nThis dataset is used by: 'trec-mandarin_trec5', 'trec-mandarin_trec6'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'trec-mandarin'\n\nThe 'trec-mandarin' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=164,789\n\n\nThis dataset is used by: 'trec-mandarin_trec5', 'trec-mandarin_trec6'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
0e61092a7f358452ef2e692b5f5fdd6676c1cd44
# Dataset Card for `trec-mandarin/trec5` The `trec-mandarin/trec5` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-mandarin#trec-mandarin/trec5). # Data This dataset provides: - `queries` (i.e., topics); count=28 - `qrels`: (relevance assessments); count=15,588 - For `docs`, use [`irds/trec-mandarin`](https://huggingface.co/datasets/irds/trec-mandarin) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/trec-mandarin_trec5', 'queries') for record in queries: record # {'query_id': ..., 'title_en': ..., 'title_zh': ..., 'description_en': ..., 'description_zh': ..., 'narrative_en': ..., 'narrative_zh': ...} qrels = load_dataset('irds/trec-mandarin_trec5', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Harman1997Chinese, title={Spanish and Chinese Document Retrieval in TREC-5}, author={Alan Smeaton and Ross Wilkinson}, booktitle={TREC}, year={1996} } @misc{Rogers2000Mandarin, title={TREC Mandarin LDC2000T52}, author={Rogers, Willie}, year={2000}, url={https://catalog.ldc.upenn.edu/LDC2000T52}, publisher={Linguistic Data Consortium} } ```
irds/trec-mandarin_trec5
[ "task_categories:text-retrieval", "source_datasets:irds/trec-mandarin", "region:us" ]
2023-01-05T03:51:53+00:00
{"source_datasets": ["irds/trec-mandarin"], "task_categories": ["text-retrieval"], "pretty_name": "`trec-mandarin/trec5`", "viewer": false}
2023-01-05T03:51:59+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/trec-mandarin #region-us
# Dataset Card for 'trec-mandarin/trec5' The 'trec-mandarin/trec5' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=28 - 'qrels': (relevance assessments); count=15,588 - For 'docs', use 'irds/trec-mandarin' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'trec-mandarin/trec5'\n\nThe 'trec-mandarin/trec5' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=28\n - 'qrels': (relevance assessments); count=15,588\n\n - For 'docs', use 'irds/trec-mandarin'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/trec-mandarin #region-us \n", "# Dataset Card for 'trec-mandarin/trec5'\n\nThe 'trec-mandarin/trec5' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=28\n - 'qrels': (relevance assessments); count=15,588\n\n - For 'docs', use 'irds/trec-mandarin'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
7de9a307829401a6ea93a330cde12ad78cabe19c
# Dataset Card for `trec-mandarin/trec6` The `trec-mandarin/trec6` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-mandarin#trec-mandarin/trec6). # Data This dataset provides: - `queries` (i.e., topics); count=26 - `qrels`: (relevance assessments); count=9,236 - For `docs`, use [`irds/trec-mandarin`](https://huggingface.co/datasets/irds/trec-mandarin) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/trec-mandarin_trec6', 'queries') for record in queries: record # {'query_id': ..., 'title_en': ..., 'title_zh': ..., 'description_en': ..., 'description_zh': ..., 'narrative_en': ..., 'narrative_zh': ...} qrels = load_dataset('irds/trec-mandarin_trec6', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Wilkinson1998Chinese, title={Chinese Document Retrieval at TREC-6}, author={Ross Wilkinson}, booktitle={TREC}, year={1997} } @misc{Rogers2000Mandarin, title={TREC Mandarin LDC2000T52}, author={Rogers, Willie}, year={2000}, url={https://catalog.ldc.upenn.edu/LDC2000T52}, publisher={Linguistic Data Consortium} } ```
irds/trec-mandarin_trec6
[ "task_categories:text-retrieval", "source_datasets:irds/trec-mandarin", "region:us" ]
2023-01-05T03:52:05+00:00
{"source_datasets": ["irds/trec-mandarin"], "task_categories": ["text-retrieval"], "pretty_name": "`trec-mandarin/trec6`", "viewer": false}
2023-01-05T03:52:10+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/trec-mandarin #region-us
# Dataset Card for 'trec-mandarin/trec6' The 'trec-mandarin/trec6' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=26 - 'qrels': (relevance assessments); count=9,236 - For 'docs', use 'irds/trec-mandarin' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'trec-mandarin/trec6'\n\nThe 'trec-mandarin/trec6' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=26\n - 'qrels': (relevance assessments); count=9,236\n\n - For 'docs', use 'irds/trec-mandarin'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/trec-mandarin #region-us \n", "# Dataset Card for 'trec-mandarin/trec6'\n\nThe 'trec-mandarin/trec6' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=26\n - 'qrels': (relevance assessments); count=9,236\n\n - For 'docs', use 'irds/trec-mandarin'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
ff0ab628c396fb46c0d52b11abd1d69b9649306a
# Dataset Card for `trec-spanish` The `trec-spanish` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-spanish#trec-spanish). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=120,605 This dataset is used by: [`trec-spanish_trec3`](https://huggingface.co/datasets/irds/trec-spanish_trec3), [`trec-spanish_trec4`](https://huggingface.co/datasets/irds/trec-spanish_trec4) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/trec-spanish', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'marked_up_doc': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @misc{Rogers2000Spanish, title={TREC Spanish LDC2000T51}, author={Rogers, Willie}, year={2000}, url={https://catalog.ldc.upenn.edu/LDC2000T51}, publisher={Linguistic Data Consortium} } ```
irds/trec-spanish
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:52:16+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`trec-spanish`", "viewer": false}
2023-01-05T03:52:21+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'trec-spanish' The 'trec-spanish' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=120,605 This dataset is used by: 'trec-spanish_trec3', 'trec-spanish_trec4' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'trec-spanish'\n\nThe 'trec-spanish' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=120,605\n\n\nThis dataset is used by: 'trec-spanish_trec3', 'trec-spanish_trec4'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'trec-spanish'\n\nThe 'trec-spanish' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=120,605\n\n\nThis dataset is used by: 'trec-spanish_trec3', 'trec-spanish_trec4'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
e8784514251b0fa3d42c0ed6bfaf65b44df7e57e
# Dataset Card for `trec-spanish/trec3` The `trec-spanish/trec3` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-spanish#trec-spanish/trec3). # Data This dataset provides: - `queries` (i.e., topics); count=25 - `qrels`: (relevance assessments); count=19,005 - For `docs`, use [`irds/trec-spanish`](https://huggingface.co/datasets/irds/trec-spanish) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/trec-spanish_trec3', 'queries') for record in queries: record # {'query_id': ..., 'title_es': ..., 'title_en': ..., 'description_es': ..., 'description_en': ..., 'narrative_es': ..., 'narrative_en': ...} qrels = load_dataset('irds/trec-spanish_trec3', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Harman1994Trec3, title={Overview of the Third Text REtrieval Conference (TREC-3)}, author={Donna Harman}, booktitle={TREC}, year={1994} } @misc{Rogers2000Spanish, title={TREC Spanish LDC2000T51}, author={Rogers, Willie}, year={2000}, url={https://catalog.ldc.upenn.edu/LDC2000T51}, publisher={Linguistic Data Consortium} } ```
irds/trec-spanish_trec3
[ "task_categories:text-retrieval", "source_datasets:irds/trec-spanish", "region:us" ]
2023-01-05T03:52:27+00:00
{"source_datasets": ["irds/trec-spanish"], "task_categories": ["text-retrieval"], "pretty_name": "`trec-spanish/trec3`", "viewer": false}
2023-01-05T03:52:32+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/trec-spanish #region-us
# Dataset Card for 'trec-spanish/trec3' The 'trec-spanish/trec3' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=25 - 'qrels': (relevance assessments); count=19,005 - For 'docs', use 'irds/trec-spanish' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'trec-spanish/trec3'\n\nThe 'trec-spanish/trec3' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=25\n - 'qrels': (relevance assessments); count=19,005\n\n - For 'docs', use 'irds/trec-spanish'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/trec-spanish #region-us \n", "# Dataset Card for 'trec-spanish/trec3'\n\nThe 'trec-spanish/trec3' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=25\n - 'qrels': (relevance assessments); count=19,005\n\n - For 'docs', use 'irds/trec-spanish'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
814668fca7e7893cf79944759b8d8b8fd49e7901
# Dataset Card for `trec-spanish/trec4` The `trec-spanish/trec4` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-spanish#trec-spanish/trec4). # Data This dataset provides: - `queries` (i.e., topics); count=25 - `qrels`: (relevance assessments); count=13,109 - For `docs`, use [`irds/trec-spanish`](https://huggingface.co/datasets/irds/trec-spanish) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/trec-spanish_trec4', 'queries') for record in queries: record # {'query_id': ..., 'description_es1': ..., 'description_en1': ..., 'description_es2': ..., 'description_en2': ...} qrels = load_dataset('irds/trec-spanish_trec4', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Harman1995Trec4, title={Overview of the Fourth Text REtrieval Conference (TREC-4)}, author={Donna Harman}, booktitle={TREC}, year={1995} } @misc{Rogers2000Spanish, title={TREC Spanish LDC2000T51}, author={Rogers, Willie}, year={2000}, url={https://catalog.ldc.upenn.edu/LDC2000T51}, publisher={Linguistic Data Consortium} } ```
irds/trec-spanish_trec4
[ "task_categories:text-retrieval", "source_datasets:irds/trec-spanish", "region:us" ]
2023-01-05T03:52:38+00:00
{"source_datasets": ["irds/trec-spanish"], "task_categories": ["text-retrieval"], "pretty_name": "`trec-spanish/trec4`", "viewer": false}
2023-01-05T03:52:44+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/trec-spanish #region-us
# Dataset Card for 'trec-spanish/trec4' The 'trec-spanish/trec4' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=25 - 'qrels': (relevance assessments); count=13,109 - For 'docs', use 'irds/trec-spanish' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'trec-spanish/trec4'\n\nThe 'trec-spanish/trec4' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=25\n - 'qrels': (relevance assessments); count=13,109\n\n - For 'docs', use 'irds/trec-spanish'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/trec-spanish #region-us \n", "# Dataset Card for 'trec-spanish/trec4'\n\nThe 'trec-spanish/trec4' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=25\n - 'qrels': (relevance assessments); count=13,109\n\n - For 'docs', use 'irds/trec-spanish'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
d3da193e0136acae9970ae7ced82ee189061baa2
# Dataset Card for `trec-robust04` The `trec-robust04` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-robust04#trec-robust04). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=528,155 - `queries` (i.e., topics); count=250 - `qrels`: (relevance assessments); count=311,410 This dataset is used by: [`trec-robust04_fold1`](https://huggingface.co/datasets/irds/trec-robust04_fold1), [`trec-robust04_fold2`](https://huggingface.co/datasets/irds/trec-robust04_fold2), [`trec-robust04_fold3`](https://huggingface.co/datasets/irds/trec-robust04_fold3), [`trec-robust04_fold4`](https://huggingface.co/datasets/irds/trec-robust04_fold4), [`trec-robust04_fold5`](https://huggingface.co/datasets/irds/trec-robust04_fold5) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/trec-robust04', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'marked_up_doc': ...} queries = load_dataset('irds/trec-robust04', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/trec-robust04', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Voorhees2004Robust, title={Overview of the TREC 2004 Robust Retrieval Track}, author={Ellen Voorhees}, booktitle={TREC}, year={2004} } ```
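For evaluation it is often convenient to index the qrels by query. A small sketch using the field names from the qrels records above — the sample entries are invented stand-ins for records streamed from `load_dataset`:

```python
from collections import defaultdict

# Hypothetical sample records mimicking the qrels schema shown in the card;
# the actual records come from load_dataset('irds/trec-robust04', 'qrels').
sample_qrels = [
    {'query_id': '301', 'doc_id': 'FBIS3-1', 'relevance': 1, 'iteration': '0'},
    {'query_id': '301', 'doc_id': 'FBIS3-2', 'relevance': 0, 'iteration': '0'},
    {'query_id': '302', 'doc_id': 'FT911-3', 'relevance': 2, 'iteration': '0'},
]

# Map query_id -> {doc_id: relevance} for fast lookup during scoring.
qrels_by_query = defaultdict(dict)
for record in sample_qrels:
    qrels_by_query[record['query_id']][record['doc_id']] = record['relevance']

# Relevant documents for one query (relevance > 0).
relevant = [d for d, rel in qrels_by_query['301'].items() if rel > 0]
```

This nested-dict layout matches what most IR evaluation tools expect as qrels input.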
irds/trec-robust04
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:52:49+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`trec-robust04`", "viewer": false}
2023-01-05T03:52:55+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'trec-robust04' The 'trec-robust04' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=528,155 - 'queries' (i.e., topics); count=250 - 'qrels': (relevance assessments); count=311,410 This dataset is used by: 'trec-robust04_fold1', 'trec-robust04_fold2', 'trec-robust04_fold3', 'trec-robust04_fold4', 'trec-robust04_fold5' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'trec-robust04'\n\nThe 'trec-robust04' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=528,155\n - 'queries' (i.e., topics); count=250\n - 'qrels': (relevance assessments); count=311,410\n\n\nThis dataset is used by: 'trec-robust04_fold1', 'trec-robust04_fold2', 'trec-robust04_fold3', 'trec-robust04_fold4', 'trec-robust04_fold5'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'trec-robust04'\n\nThe 'trec-robust04' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=528,155\n - 'queries' (i.e., topics); count=250\n - 'qrels': (relevance assessments); count=311,410\n\n\nThis dataset is used by: 'trec-robust04_fold1', 'trec-robust04_fold2', 'trec-robust04_fold3', 'trec-robust04_fold4', 'trec-robust04_fold5'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
a2bfaab7b44fec23deddd87435ef1be5f2ea614a
# Dataset Card for `trec-robust04/fold1` The `trec-robust04/fold1` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-robust04#trec-robust04/fold1). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=62,789 - For `docs`, use [`irds/trec-robust04`](https://huggingface.co/datasets/irds/trec-robust04) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/trec-robust04_fold1', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/trec-robust04_fold1', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Voorhees2004Robust, title={Overview of the TREC 2004 Robust Retrieval Track}, author={Ellen Voorhees}, booktitle={TREC}, year={2004} } @inproceedings{Huston2014ACO, title={A Comparison of Retrieval Models using Term Dependencies}, author={Samuel Huston and W. Bruce Croft}, booktitle={CIKM}, year={2014} } ```
irds/trec-robust04_fold1
[ "task_categories:text-retrieval", "source_datasets:irds/trec-robust04", "region:us" ]
2023-01-05T03:53:00+00:00
{"source_datasets": ["irds/trec-robust04"], "task_categories": ["text-retrieval"], "pretty_name": "`trec-robust04/fold1`", "viewer": false}
2023-01-05T03:53:06+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/trec-robust04 #region-us
# Dataset Card for 'trec-robust04/fold1' The 'trec-robust04/fold1' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - 'qrels': (relevance assessments); count=62,789 - For 'docs', use 'irds/trec-robust04' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'trec-robust04/fold1'\n\nThe 'trec-robust04/fold1' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=62,789\n\n - For 'docs', use 'irds/trec-robust04'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/trec-robust04 #region-us \n", "# Dataset Card for 'trec-robust04/fold1'\n\nThe 'trec-robust04/fold1' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=62,789\n\n - For 'docs', use 'irds/trec-robust04'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
bf9cea1bfe668ad3fe0e69199f8b4a840feeec93
# Dataset Card for `trec-robust04/fold2` The `trec-robust04/fold2` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-robust04#trec-robust04/fold2). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=63,917 - For `docs`, use [`irds/trec-robust04`](https://huggingface.co/datasets/irds/trec-robust04) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/trec-robust04_fold2', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/trec-robust04_fold2', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Voorhees2004Robust, title={Overview of the TREC 2004 Robust Retrieval Track}, author={Ellen Voorhees}, booktitle={TREC}, year={2004} } @inproceedings{Huston2014ACO, title={A Comparison of Retrieval Models using Term Dependencies}, author={Samuel Huston and W. Bruce Croft}, booktitle={CIKM}, year={2014} } ```
irds/trec-robust04_fold2
[ "task_categories:text-retrieval", "source_datasets:irds/trec-robust04", "region:us" ]
2023-01-05T03:53:11+00:00
{"source_datasets": ["irds/trec-robust04"], "task_categories": ["text-retrieval"], "pretty_name": "`trec-robust04/fold2`", "viewer": false}
2023-01-05T03:53:17+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/trec-robust04 #region-us
# Dataset Card for 'trec-robust04/fold2' The 'trec-robust04/fold2' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - 'qrels': (relevance assessments); count=63,917 - For 'docs', use 'irds/trec-robust04' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'trec-robust04/fold2'\n\nThe 'trec-robust04/fold2' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=63,917\n\n - For 'docs', use 'irds/trec-robust04'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/trec-robust04 #region-us \n", "# Dataset Card for 'trec-robust04/fold2'\n\nThe 'trec-robust04/fold2' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=63,917\n\n - For 'docs', use 'irds/trec-robust04'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
aeeafc3ed22f7f83cc780c71e6b99998148887d1
# Dataset Card for `trec-robust04/fold3` The `trec-robust04/fold3` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-robust04#trec-robust04/fold3). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=62,901 - For `docs`, use [`irds/trec-robust04`](https://huggingface.co/datasets/irds/trec-robust04) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/trec-robust04_fold3', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/trec-robust04_fold3', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Voorhees2004Robust, title={Overview of the TREC 2004 Robust Retrieval Track}, author={Ellen Voorhees}, booktitle={TREC}, year={2004} } @inproceedings{Huston2014ACO, title={A Comparison of Retrieval Models using Term Dependencies}, author={Samuel Huston and W. Bruce Croft}, booktitle={CIKM}, year={2014} } ```
irds/trec-robust04_fold3
[ "task_categories:text-retrieval", "source_datasets:irds/trec-robust04", "region:us" ]
2023-01-05T03:53:22+00:00
{"source_datasets": ["irds/trec-robust04"], "task_categories": ["text-retrieval"], "pretty_name": "`trec-robust04/fold3`", "viewer": false}
2023-01-05T03:53:28+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/trec-robust04 #region-us
# Dataset Card for 'trec-robust04/fold3' The 'trec-robust04/fold3' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - 'qrels': (relevance assessments); count=62,901 - For 'docs', use 'irds/trec-robust04' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'trec-robust04/fold3'\n\nThe 'trec-robust04/fold3' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=62,901\n\n - For 'docs', use 'irds/trec-robust04'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/trec-robust04 #region-us \n", "# Dataset Card for 'trec-robust04/fold3'\n\nThe 'trec-robust04/fold3' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=62,901\n\n - For 'docs', use 'irds/trec-robust04'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
15da11a10c4e23bf503627958668159d68e4c636
# Dataset Card for `trec-robust04/fold4` The `trec-robust04/fold4` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-robust04#trec-robust04/fold4). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=57,962 - For `docs`, use [`irds/trec-robust04`](https://huggingface.co/datasets/irds/trec-robust04) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/trec-robust04_fold4', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/trec-robust04_fold4', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Voorhees2004Robust, title={Overview of the TREC 2004 Robust Retrieval Track}, author={Ellen Voorhees}, booktitle={TREC}, year={2004} } @inproceedings{Huston2014ACO, title={A Comparison of Retrieval Models using Term Dependencies}, author={Samuel Huston and W. Bruce Croft}, booktitle={CIKM}, year={2014} } ```
irds/trec-robust04_fold4
[ "task_categories:text-retrieval", "source_datasets:irds/trec-robust04", "region:us" ]
2023-01-05T03:53:34+00:00
{"source_datasets": ["irds/trec-robust04"], "task_categories": ["text-retrieval"], "pretty_name": "`trec-robust04/fold4`", "viewer": false}
2023-01-05T03:53:39+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/trec-robust04 #region-us
# Dataset Card for 'trec-robust04/fold4' The 'trec-robust04/fold4' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - 'qrels': (relevance assessments); count=57,962 - For 'docs', use 'irds/trec-robust04' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'trec-robust04/fold4'\n\nThe 'trec-robust04/fold4' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=57,962\n\n - For 'docs', use 'irds/trec-robust04'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/trec-robust04 #region-us \n", "# Dataset Card for 'trec-robust04/fold4'\n\nThe 'trec-robust04/fold4' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=57,962\n\n - For 'docs', use 'irds/trec-robust04'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
c07c2da7eb63f7ddd3e06a086ce00d2082783f21
# Dataset Card for `trec-robust04/fold5` The `trec-robust04/fold5` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-robust04#trec-robust04/fold5). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=63,841 - For `docs`, use [`irds/trec-robust04`](https://huggingface.co/datasets/irds/trec-robust04) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/trec-robust04_fold5', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/trec-robust04_fold5', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Voorhees2004Robust, title={Overview of the TREC 2004 Robust Retrieval Track}, author={Ellen Voorhees}, booktitle={TREC}, year={2004} } @inproceedings{Huston2014ACO, title={A Comparison of Retrieval Models using Term Dependencies}, author={Samuel Huston and W. Bruce Croft}, booktitle={CIKM}, year={2014} } ```
irds/trec-robust04_fold5
[ "task_categories:text-retrieval", "source_datasets:irds/trec-robust04", "region:us" ]
2023-01-05T03:53:45+00:00
{"source_datasets": ["irds/trec-robust04"], "task_categories": ["text-retrieval"], "pretty_name": "`trec-robust04/fold5`", "viewer": false}
2023-01-05T03:53:50+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/trec-robust04 #region-us
# Dataset Card for 'trec-robust04/fold5' The 'trec-robust04/fold5' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - 'qrels': (relevance assessments); count=63,841 - For 'docs', use 'irds/trec-robust04' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'trec-robust04/fold5'\n\nThe 'trec-robust04/fold5' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=63,841\n\n - For 'docs', use 'irds/trec-robust04'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/trec-robust04 #region-us \n", "# Dataset Card for 'trec-robust04/fold5'\n\nThe 'trec-robust04/fold5' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=63,841\n\n - For 'docs', use 'irds/trec-robust04'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
f339472c5955a8b8a883754290772621a391fdc5
# Dataset Card for `tripclick` The `tripclick` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/tripclick#tripclick). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,523,878 This dataset is used by: [`tripclick_train`](https://huggingface.co/datasets/irds/tripclick_train), [`tripclick_train_head`](https://huggingface.co/datasets/irds/tripclick_train_head), [`tripclick_train_head_dctr`](https://huggingface.co/datasets/irds/tripclick_train_head_dctr), [`tripclick_train_hofstaetter-triples`](https://huggingface.co/datasets/irds/tripclick_train_hofstaetter-triples), [`tripclick_train_tail`](https://huggingface.co/datasets/irds/tripclick_train_tail), [`tripclick_train_torso`](https://huggingface.co/datasets/irds/tripclick_train_torso), [`tripclick_val_head_dctr`](https://huggingface.co/datasets/irds/tripclick_val_head_dctr) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/tripclick', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'url': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Rekabsaz2021TripClick, title={TripClick: The Log Files of a Large Health Web Search Engine}, author={Navid Rekabsaz and Oleg Lesota and Markus Schedl and Jon Brassey and Carsten Eickhoff}, year={2021}, booktitle={SIGIR} } ```
irds/tripclick
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:53:56+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`tripclick`", "viewer": false}
2023-01-05T03:54:01+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'tripclick' The 'tripclick' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=1,523,878 This dataset is used by: 'tripclick_train', 'tripclick_train_head', 'tripclick_train_head_dctr', 'tripclick_train_hofstaetter-triples', 'tripclick_train_tail', 'tripclick_train_torso', 'tripclick_val_head_dctr' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'tripclick'\n\nThe 'tripclick' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,523,878\n\n\nThis dataset is used by: 'tripclick_train', 'tripclick_train_head', 'tripclick_train_head_dctr', 'tripclick_train_hofstaetter-triples', 'tripclick_train_tail', 'tripclick_train_torso', 'tripclick_val_head_dctr'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'tripclick'\n\nThe 'tripclick' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,523,878\n\n\nThis dataset is used by: 'tripclick_train', 'tripclick_train_head', 'tripclick_train_head_dctr', 'tripclick_train_hofstaetter-triples', 'tripclick_train_tail', 'tripclick_train_torso', 'tripclick_val_head_dctr'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
b0b576f085a3ccce49d64f84c9d551f5e069dab4
# Dataset Card for `tripclick/train` The `tripclick/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/tripclick#tripclick/train). # Data This dataset provides: - `queries` (i.e., topics); count=685,649 - `qrels`: (relevance assessments); count=2,705,212 - `docpairs`; count=23,221,224 - For `docs`, use [`irds/tripclick`](https://huggingface.co/datasets/irds/tripclick) This dataset is used by: [`tripclick_train_hofstaetter-triples`](https://huggingface.co/datasets/irds/tripclick_train_hofstaetter-triples) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/tripclick_train', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/tripclick_train', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} docpairs = load_dataset('irds/tripclick_train', 'docpairs') for record in docpairs: record # {'query_id': ..., 'doc_id_a': ..., 'doc_id_b': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Rekabsaz2021TripClick, title={TripClick: The Log Files of a Large Health Web Search Engine}, author={Navid Rekabsaz and Oleg Lesota and Markus Schedl and Jon Brassey and Carsten Eickhoff}, year={2021}, booktitle={SIGIR} } ```
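The `docpairs` records above pair each query with two documents. A sketch of turning them into training triples, assuming the usual ir-datasets convention that `doc_id_a` is the preferred (clicked) document and `doc_id_b` the non-preferred one — verify this against the documentation before training; the sample records are invented:

```python
# Hypothetical records mimicking the docpairs schema from the card above;
# the real records come from load_dataset('irds/tripclick_train', 'docpairs').
sample_docpairs = [
    {'query_id': 'q1', 'doc_id_a': 'd10', 'doc_id_b': 'd42'},
    {'query_id': 'q1', 'doc_id_a': 'd10', 'doc_id_b': 'd77'},
    {'query_id': 'q2', 'doc_id_a': 'd03', 'doc_id_b': 'd55'},
]

# (query, positive_doc, negative_doc) triples for pairwise ranking losses.
triples = [(r['query_id'], r['doc_id_a'], r['doc_id_b'])
           for r in sample_docpairs]

# Distinct positive document per query (doc_id_a repeats across pairs).
positives = {r['query_id']: r['doc_id_a'] for r in sample_docpairs}
```

Each triple can then be resolved to text via the `irds/tripclick` docs split.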
irds/tripclick_train
[ "task_categories:text-retrieval", "source_datasets:irds/tripclick", "region:us" ]
2023-01-05T03:54:07+00:00
{"source_datasets": ["irds/tripclick"], "task_categories": ["text-retrieval"], "pretty_name": "`tripclick/train`", "viewer": false}
2023-01-05T03:54:13+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/tripclick #region-us
# Dataset Card for 'tripclick/train' The 'tripclick/train' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=685,649 - 'qrels': (relevance assessments); count=2,705,212 - 'docpairs'; count=23,221,224 - For 'docs', use 'irds/tripclick' This dataset is used by: 'tripclick_train_hofstaetter-triples' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'tripclick/train'\n\nThe 'tripclick/train' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=685,649\n - 'qrels': (relevance assessments); count=2,705,212\n - 'docpairs'; count=23,221,224\n\n - For 'docs', use 'irds/tripclick'\n\nThis dataset is used by: 'tripclick_train_hofstaetter-triples'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/tripclick #region-us \n", "# Dataset Card for 'tripclick/train'\n\nThe 'tripclick/train' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=685,649\n - 'qrels': (relevance assessments); count=2,705,212\n - 'docpairs'; count=23,221,224\n\n - For 'docs', use 'irds/tripclick'\n\nThis dataset is used by: 'tripclick_train_hofstaetter-triples'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
90fd10a63414a89bbeda54d06c12f81662ee21b7
# Dataset Card for `tripclick/train/head` The `tripclick/train/head` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/tripclick#tripclick/train/head). # Data This dataset provides: - `queries` (i.e., topics); count=3,529 - `qrels`: (relevance assessments); count=116,821 - For `docs`, use [`irds/tripclick`](https://huggingface.co/datasets/irds/tripclick) This dataset is used by: [`tripclick_train_head_dctr`](https://huggingface.co/datasets/irds/tripclick_train_head_dctr) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/tripclick_train_head', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/tripclick_train_head', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Rekabsaz2021TripClick, title={TripClick: The Log Files of a Large Health Web Search Engine}, author={Navid Rekabsaz and Oleg Lesota and Markus Schedl and Jon Brassey and Carsten Eickhoff}, year={2021}, booktitle={SIGIR} } ```
irds/tripclick_train_head
[ "task_categories:text-retrieval", "source_datasets:irds/tripclick", "region:us" ]
2023-01-05T03:54:18+00:00
{"source_datasets": ["irds/tripclick"], "task_categories": ["text-retrieval"], "pretty_name": "`tripclick/train/head`", "viewer": false}
2023-01-05T03:54:24+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/tripclick #region-us
# Dataset Card for 'tripclick/train/head' The 'tripclick/train/head' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=3,529 - 'qrels': (relevance assessments); count=116,821 - For 'docs', use 'irds/tripclick' This dataset is used by: 'tripclick_train_head_dctr' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'tripclick/train/head'\n\nThe 'tripclick/train/head' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=3,529\n - 'qrels': (relevance assessments); count=116,821\n\n - For 'docs', use 'irds/tripclick'\n\nThis dataset is used by: 'tripclick_train_head_dctr'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/tripclick #region-us \n", "# Dataset Card for 'tripclick/train/head'\n\nThe 'tripclick/train/head' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=3,529\n - 'qrels': (relevance assessments); count=116,821\n\n - For 'docs', use 'irds/tripclick'\n\nThis dataset is used by: 'tripclick_train_head_dctr'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
617dd3e00e3d470a1cf5b039115007ff6fa32efe
# Dataset Card for `tripclick/train/head/dctr` The `tripclick/train/head/dctr` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/tripclick#tripclick/train/head/dctr). # Data This dataset provides: - `qrels`: (relevance assessments); count=128,420 - For `docs`, use [`irds/tripclick`](https://huggingface.co/datasets/irds/tripclick) - For `queries`, use [`irds/tripclick_train_head`](https://huggingface.co/datasets/irds/tripclick_train_head) ## Usage ```python from datasets import load_dataset qrels = load_dataset('irds/tripclick_train_head_dctr', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Rekabsaz2021TripClick, title={TripClick: The Log Files of a Large Health Web Search Engine}, author={Navid Rekabsaz and Oleg Lesota and Markus Schedl and Jon Brassey and Carsten Eickhoff}, year={2021}, booktitle={SIGIR} } ```
irds/tripclick_train_head_dctr
[ "task_categories:text-retrieval", "source_datasets:irds/tripclick", "source_datasets:irds/tripclick_train_head", "region:us" ]
2023-01-05T03:54:29+00:00
{"source_datasets": ["irds/tripclick", "irds/tripclick_train_head"], "task_categories": ["text-retrieval"], "pretty_name": "`tripclick/train/head/dctr`", "viewer": false}
2023-01-05T03:54:35+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/tripclick #source_datasets-irds/tripclick_train_head #region-us
# Dataset Card for 'tripclick/train/head/dctr' The 'tripclick/train/head/dctr' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'qrels': (relevance assessments); count=128,420 - For 'docs', use 'irds/tripclick' - For 'queries', use 'irds/tripclick_train_head' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'tripclick/train/head/dctr'\n\nThe 'tripclick/train/head/dctr' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'qrels': (relevance assessments); count=128,420\n\n - For 'docs', use 'irds/tripclick'\n - For 'queries', use 'irds/tripclick_train_head'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/tripclick #source_datasets-irds/tripclick_train_head #region-us \n", "# Dataset Card for 'tripclick/train/head/dctr'\n\nThe 'tripclick/train/head/dctr' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'qrels': (relevance assessments); count=128,420\n\n - For 'docs', use 'irds/tripclick'\n - For 'queries', use 'irds/tripclick_train_head'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
4a060987d00ec62f161e3dd58642b8284568816b
# Dataset Card for `tripclick/train/hofstaetter-triples` The `tripclick/train/hofstaetter-triples` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/tripclick#tripclick/train/hofstaetter-triples). # Data This dataset provides: - `docpairs`; count=10,000,000 - For `docs`, use [`irds/tripclick`](https://huggingface.co/datasets/irds/tripclick) - For `queries`, use [`irds/tripclick_train`](https://huggingface.co/datasets/irds/tripclick_train) - For `qrels`, use [`irds/tripclick_train`](https://huggingface.co/datasets/irds/tripclick_train) ## Usage ```python from datasets import load_dataset docpairs = load_dataset('irds/tripclick_train_hofstaetter-triples', 'docpairs') for record in docpairs: record # {'query_id': ..., 'doc_id_a': ..., 'doc_id_b': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Rekabsaz2021TripClick, title={TripClick: The Log Files of a Large Health Web Search Engine}, author={Navid Rekabsaz and Oleg Lesota and Markus Schedl and Jon Brassey and Carsten Eickhoff}, year={2021}, booktitle={SIGIR} } @inproceedings{Hofstaetter2022TripClick, title={Establishing Strong Baselines for TripClick Health Retrieval}, author={Sebastian Hofst\"atter and Sophia Althammer and Mete Sertkan and Allan Hanbury}, year={2022}, booktitle={ECIR} } ```
irds/tripclick_train_hofstaetter-triples
[ "task_categories:text-retrieval", "source_datasets:irds/tripclick", "source_datasets:irds/tripclick_train", "region:us" ]
2023-01-05T03:54:40+00:00
{"source_datasets": ["irds/tripclick", "irds/tripclick_train"], "task_categories": ["text-retrieval"], "pretty_name": "`tripclick/train/hofstaetter-triples`", "viewer": false}
2023-01-05T03:54:46+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/tripclick #source_datasets-irds/tripclick_train #region-us
# Dataset Card for 'tripclick/train/hofstaetter-triples' The 'tripclick/train/hofstaetter-triples' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docpairs'; count=10,000,000 - For 'docs', use 'irds/tripclick' - For 'queries', use 'irds/tripclick_train' - For 'qrels', use 'irds/tripclick_train' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'tripclick/train/hofstaetter-triples'\n\nThe 'tripclick/train/hofstaetter-triples' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docpairs'; count=10,000,000\n\n - For 'docs', use 'irds/tripclick'\n - For 'queries', use 'irds/tripclick_train'\n - For 'qrels', use 'irds/tripclick_train'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/tripclick #source_datasets-irds/tripclick_train #region-us \n", "# Dataset Card for 'tripclick/train/hofstaetter-triples'\n\nThe 'tripclick/train/hofstaetter-triples' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docpairs'; count=10,000,000\n\n - For 'docs', use 'irds/tripclick'\n - For 'queries', use 'irds/tripclick_train'\n - For 'qrels', use 'irds/tripclick_train'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
8222d24a4b4a6a5d74ae00d4c9f4d8a58b4f5c91
# Dataset Card for `tripclick/train/tail` The `tripclick/train/tail` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/tripclick#tripclick/train/tail). # Data This dataset provides: - `queries` (i.e., topics); count=576,156 - `qrels`: (relevance assessments); count=1,621,493 - For `docs`, use [`irds/tripclick`](https://huggingface.co/datasets/irds/tripclick) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/tripclick_train_tail', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/tripclick_train_tail', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Rekabsaz2021TripClick, title={TripClick: The Log Files of a Large Health Web Search Engine}, author={Navid Rekabsaz and Oleg Lesota and Markus Schedl and Jon Brassey and Carsten Eickhoff}, year={2021}, booktitle={SIGIR} } ```
irds/tripclick_train_tail
[ "task_categories:text-retrieval", "source_datasets:irds/tripclick", "region:us" ]
2023-01-05T03:54:52+00:00
{"source_datasets": ["irds/tripclick"], "task_categories": ["text-retrieval"], "pretty_name": "`tripclick/train/tail`", "viewer": false}
2023-01-05T03:54:57+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/tripclick #region-us
# Dataset Card for 'tripclick/train/tail' The 'tripclick/train/tail' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=576,156 - 'qrels': (relevance assessments); count=1,621,493 - For 'docs', use 'irds/tripclick' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'tripclick/train/tail'\n\nThe 'tripclick/train/tail' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=576,156\n - 'qrels': (relevance assessments); count=1,621,493\n\n - For 'docs', use 'irds/tripclick'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/tripclick #region-us \n", "# Dataset Card for 'tripclick/train/tail'\n\nThe 'tripclick/train/tail' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=576,156\n - 'qrels': (relevance assessments); count=1,621,493\n\n - For 'docs', use 'irds/tripclick'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
a931b674056470cf3a953e42f384964c22463485
# Dataset Card for `tripclick/train/torso` The `tripclick/train/torso` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/tripclick#tripclick/train/torso). # Data This dataset provides: - `queries` (i.e., topics); count=105,964 - `qrels`: (relevance assessments); count=966,898 - For `docs`, use [`irds/tripclick`](https://huggingface.co/datasets/irds/tripclick) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/tripclick_train_torso', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/tripclick_train_torso', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Rekabsaz2021TripClick, title={TripClick: The Log Files of a Large Health Web Search Engine}, author={Navid Rekabsaz and Oleg Lesota and Markus Schedl and Jon Brassey and Carsten Eickhoff}, year={2021}, booktitle={SIGIR} } ```
irds/tripclick_train_torso
[ "task_categories:text-retrieval", "source_datasets:irds/tripclick", "region:us" ]
2023-01-05T03:55:03+00:00
{"source_datasets": ["irds/tripclick"], "task_categories": ["text-retrieval"], "pretty_name": "`tripclick/train/torso`", "viewer": false}
2023-01-05T03:55:09+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/tripclick #region-us
# Dataset Card for 'tripclick/train/torso' The 'tripclick/train/torso' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=105,964 - 'qrels': (relevance assessments); count=966,898 - For 'docs', use 'irds/tripclick' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'tripclick/train/torso'\n\nThe 'tripclick/train/torso' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=105,964\n - 'qrels': (relevance assessments); count=966,898\n\n - For 'docs', use 'irds/tripclick'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/tripclick #region-us \n", "# Dataset Card for 'tripclick/train/torso'\n\nThe 'tripclick/train/torso' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=105,964\n - 'qrels': (relevance assessments); count=966,898\n\n - For 'docs', use 'irds/tripclick'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
961409b313ecb3ddcb1ea66c346b856a311f69f0
# Dataset Card for `tripclick/val/head/dctr` The `tripclick/val/head/dctr` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/tripclick#tripclick/val/head/dctr). # Data This dataset provides: - `qrels`: (relevance assessments); count=66,812 - For `docs`, use [`irds/tripclick`](https://huggingface.co/datasets/irds/tripclick) ## Usage ```python from datasets import load_dataset qrels = load_dataset('irds/tripclick_val_head_dctr', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Rekabsaz2021TripClick, title={TripClick: The Log Files of a Large Health Web Search Engine}, author={Navid Rekabsaz and Oleg Lesota and Markus Schedl and Jon Brassey and Carsten Eickhoff}, year={2021}, booktitle={SIGIR} } ```
irds/tripclick_val_head_dctr
[ "task_categories:text-retrieval", "source_datasets:irds/tripclick", "region:us" ]
2023-01-05T03:55:14+00:00
{"source_datasets": ["irds/tripclick"], "task_categories": ["text-retrieval"], "pretty_name": "`tripclick/val/head/dctr`", "viewer": false}
2023-01-05T03:55:20+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/tripclick #region-us
# Dataset Card for 'tripclick/val/head/dctr' The 'tripclick/val/head/dctr' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'qrels': (relevance assessments); count=66,812 - For 'docs', use 'irds/tripclick' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'tripclick/val/head/dctr'\n\nThe 'tripclick/val/head/dctr' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'qrels': (relevance assessments); count=66,812\n\n - For 'docs', use 'irds/tripclick'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/tripclick #region-us \n", "# Dataset Card for 'tripclick/val/head/dctr'\n\nThe 'tripclick/val/head/dctr' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'qrels': (relevance assessments); count=66,812\n\n - For 'docs', use 'irds/tripclick'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
d6143877d65dbca6cc33910e64833b75d3595239
# Dataset Card for `tweets2013-ia` The `tweets2013-ia` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/tweets2013-ia#tweets2013-ia). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=252,713,133 This dataset is used by: [`tweets2013-ia_trec-mb-2013`](https://huggingface.co/datasets/irds/tweets2013-ia_trec-mb-2013), [`tweets2013-ia_trec-mb-2014`](https://huggingface.co/datasets/irds/tweets2013-ia_trec-mb-2014) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/tweets2013-ia', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'user_id': ..., 'created_at': ..., 'lang': ..., 'reply_doc_id': ..., 'retweet_doc_id': ..., 'source': ..., 'source_content_type': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Sequiera2017TweetsIA, title={Finally, a Downloadable Test Collection of Tweets}, author={Royal Sequiera and Jimmy Lin}, booktitle={SIGIR}, year={2017} } ```
irds/tweets2013-ia
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:55:25+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`tweets2013-ia`", "viewer": false}
2023-01-05T03:55:31+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'tweets2013-ia' The 'tweets2013-ia' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=252,713,133 This dataset is used by: 'tweets2013-ia_trec-mb-2013', 'tweets2013-ia_trec-mb-2014' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'tweets2013-ia'\n\nThe 'tweets2013-ia' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=252,713,133\n\n\nThis dataset is used by: 'tweets2013-ia_trec-mb-2013', 'tweets2013-ia_trec-mb-2014'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'tweets2013-ia'\n\nThe 'tweets2013-ia' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=252,713,133\n\n\nThis dataset is used by: 'tweets2013-ia_trec-mb-2013', 'tweets2013-ia_trec-mb-2014'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
52fae0a6d6a507bbd3bf208c6383664a03e94f11
# Dataset Card for `tweets2013-ia/trec-mb-2013` The `tweets2013-ia/trec-mb-2013` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/tweets2013-ia#tweets2013-ia/trec-mb-2013). # Data This dataset provides: - `queries` (i.e., topics); count=60 - `qrels`: (relevance assessments); count=71,279 - For `docs`, use [`irds/tweets2013-ia`](https://huggingface.co/datasets/irds/tweets2013-ia) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/tweets2013-ia_trec-mb-2013', 'queries') for record in queries: record # {'query_id': ..., 'query': ..., 'time': ..., 'tweet_time': ...} qrels = load_dataset('irds/tweets2013-ia_trec-mb-2013', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Lin2013Microblog, title={Overview of the TREC-2013 Microblog Track}, author={Jimmy Lin and Miles Efron}, booktitle={TREC}, year={2013} } @inproceedings{Sequiera2017TweetsIA, title={Finally, a Downloadable Test Collection of Tweets}, author={Royal Sequiera and Jimmy Lin}, booktitle={SIGIR}, year={2017} } ```
irds/tweets2013-ia_trec-mb-2013
[ "task_categories:text-retrieval", "source_datasets:irds/tweets2013-ia", "region:us" ]
2023-01-05T03:55:36+00:00
{"source_datasets": ["irds/tweets2013-ia"], "task_categories": ["text-retrieval"], "pretty_name": "`tweets2013-ia/trec-mb-2013`", "viewer": false}
2023-01-05T03:55:42+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/tweets2013-ia #region-us
# Dataset Card for 'tweets2013-ia/trec-mb-2013' The 'tweets2013-ia/trec-mb-2013' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=60 - 'qrels': (relevance assessments); count=71,279 - For 'docs', use 'irds/tweets2013-ia' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'tweets2013-ia/trec-mb-2013'\n\nThe 'tweets2013-ia/trec-mb-2013' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=60\n - 'qrels': (relevance assessments); count=71,279\n\n - For 'docs', use 'irds/tweets2013-ia'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/tweets2013-ia #region-us \n", "# Dataset Card for 'tweets2013-ia/trec-mb-2013'\n\nThe 'tweets2013-ia/trec-mb-2013' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=60\n - 'qrels': (relevance assessments); count=71,279\n\n - For 'docs', use 'irds/tweets2013-ia'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
a719630ac5bc91884254e45ef2be22a470c926ca
# Dataset Card for `tweets2013-ia/trec-mb-2014` The `tweets2013-ia/trec-mb-2014` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/tweets2013-ia#tweets2013-ia/trec-mb-2014). # Data This dataset provides: - `queries` (i.e., topics); count=55 - `qrels`: (relevance assessments); count=57,985 - For `docs`, use [`irds/tweets2013-ia`](https://huggingface.co/datasets/irds/tweets2013-ia) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/tweets2013-ia_trec-mb-2014', 'queries') for record in queries: record # {'query_id': ..., 'query': ..., 'time': ..., 'tweet_time': ..., 'description': ...} qrels = load_dataset('irds/tweets2013-ia_trec-mb-2014', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Lin2014Microblog, title={Overview of the TREC-2014 Microblog Track}, author={Jimmy Lin and Miles Efron and Yulu Wang and Garrick Sherman}, booktitle={TREC}, year={2014} } @inproceedings{Sequiera2017TweetsIA, title={Finally, a Downloadable Test Collection of Tweets}, author={Royal Sequiera and Jimmy Lin}, booktitle={SIGIR}, year={2017} } ```
irds/tweets2013-ia_trec-mb-2014
[ "task_categories:text-retrieval", "source_datasets:irds/tweets2013-ia", "region:us" ]
2023-01-05T03:55:47+00:00
{"source_datasets": ["irds/tweets2013-ia"], "task_categories": ["text-retrieval"], "pretty_name": "`tweets2013-ia/trec-mb-2014`", "viewer": false}
2023-01-05T03:55:53+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/tweets2013-ia #region-us
# Dataset Card for 'tweets2013-ia/trec-mb-2014' The 'tweets2013-ia/trec-mb-2014' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=55 - 'qrels': (relevance assessments); count=57,985 - For 'docs', use 'irds/tweets2013-ia' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'tweets2013-ia/trec-mb-2014'\n\nThe 'tweets2013-ia/trec-mb-2014' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=55\n - 'qrels': (relevance assessments); count=57,985\n\n - For 'docs', use 'irds/tweets2013-ia'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/tweets2013-ia #region-us \n", "# Dataset Card for 'tweets2013-ia/trec-mb-2014'\n\nThe 'tweets2013-ia/trec-mb-2014' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=55\n - 'qrels': (relevance assessments); count=57,985\n\n - For 'docs', use 'irds/tweets2013-ia'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
a5017ebcec57535ec8b4750eb0360183e3f7edc4
# Dataset Card for `vaswani` The `vaswani` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/vaswani#vaswani). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=11,429 - `queries` (i.e., topics); count=93 - `qrels`: (relevance assessments); count=2,083 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/vaswani', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} queries = load_dataset('irds/vaswani', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/vaswani', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/vaswani
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:55:59+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`vaswani`", "viewer": false}
2023-01-05T03:56:04+00:00
[]
[]
c432fd8721459a09be9a0f8c30a275801dbd8ce6
# Dataset Card for `wapo/v2/trec-core-2018` The `wapo/v2/trec-core-2018` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wapo#wapo/v2/trec-core-2018). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=26,233 ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/wapo_v2_trec-core-2018', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/wapo_v2_trec-core-2018', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/wapo_v2_trec-core-2018
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:56:10+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wapo/v2/trec-core-2018`", "viewer": false}
2023-01-05T03:56:15+00:00
[]
[]
9ab6811f8d738ac30f4befaf1249297be2cbf4a6
# Dataset Card for `wapo/v2/trec-news-2018` The `wapo/v2/trec-news-2018` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wapo#wapo/v2/trec-news-2018). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=8,508 ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/wapo_v2_trec-news-2018', 'queries') for record in queries: record # {'query_id': ..., 'doc_id': ..., 'url': ...} qrels = load_dataset('irds/wapo_v2_trec-news-2018', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Soboroff2018News, title={TREC 2018 News Track Overview}, author={Ian Soboroff and Shudong Huang and Donna Harman}, booktitle={TREC}, year={2018} } ```
irds/wapo_v2_trec-news-2018
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:56:21+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wapo/v2/trec-news-2018`", "viewer": false}
2023-01-05T03:56:26+00:00
[]
[]
247513df75b08e9d1918dc59823f26b8d3365e6e
# Dataset Card for `wapo/v2/trec-news-2019` The `wapo/v2/trec-news-2019` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wapo#wapo/v2/trec-news-2019). # Data This dataset provides: - `queries` (i.e., topics); count=60 - `qrels`: (relevance assessments); count=15,655 ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/wapo_v2_trec-news-2019', 'queries') for record in queries: record # {'query_id': ..., 'doc_id': ..., 'url': ...} qrels = load_dataset('irds/wapo_v2_trec-news-2019', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Soboroff2019News, title={TREC 2019 News Track Overview}, author={Ian Soboroff and Shudong Huang and Donna Harman}, booktitle={TREC}, year={2019} } ```
irds/wapo_v2_trec-news-2019
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:56:32+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wapo/v2/trec-news-2019`", "viewer": false}
2023-01-05T03:56:38+00:00
[]
[]
4a56eae66bba29137d43811583a2a9fea9be4b80
# Dataset Card for `wapo/v3/trec-news-2020` The `wapo/v3/trec-news-2020` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wapo#wapo/v3/trec-news-2020). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=17,764 ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/wapo_v3_trec-news-2020', 'queries') for record in queries: record # {'query_id': ..., 'doc_id': ..., 'url': ...} qrels = load_dataset('irds/wapo_v3_trec-news-2020', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/wapo_v3_trec-news-2020
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:56:43+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wapo/v3/trec-news-2020`", "viewer": false}
2023-01-05T03:56:49+00:00
[]
[]
2e5d9727052ef3595077faa290e3134bd63d105f
# Dataset Card for `wikiclir/ar` The `wikiclir/ar` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/ar). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=535,118 - `queries` (i.e., topics); count=324,489 - `qrels`: (relevance assessments); count=519,269 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_ar', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_ar', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_ar', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_ar
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:56:54+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/ar`", "viewer": false}
2023-01-05T03:57:00+00:00
[]
[]
a4092e5d19aaef68b4980fa9f2b97b9878c2960c
# Dataset Card for `wikiclir/ca` The `wikiclir/ca` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/ca). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=548,722 - `queries` (i.e., topics); count=339,586 - `qrels`: (relevance assessments); count=965,233 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_ca', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_ca', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_ca', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_ca
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:57:05+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/ca`", "viewer": false}
2023-01-05T03:57:11+00:00
[]
[]
ac55b3ea8a426c0adfb9bae3681814b434a38cb0
# Dataset Card for `wikiclir/cs` The `wikiclir/cs` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/cs). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=386,906 - `queries` (i.e., topics); count=233,553 - `qrels`: (relevance assessments); count=954,370 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_cs', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_cs', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_cs', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_cs
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:57:16+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/cs`", "viewer": false}
2023-01-05T03:57:22+00:00
[]
[]
524d177f6651189d1ece8f4afcd1d00726b583cb
# Dataset Card for `wikiclir/de` The `wikiclir/de` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/de). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=2,091,278 - `queries` (i.e., topics); count=938,217 - `qrels`: (relevance assessments); count=5,550,454 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_de', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_de', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_de', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_de
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:57:28+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/de`", "viewer": false}
2023-01-05T03:57:33+00:00
[]
[]
14d4021f04cb9800c434724403fb1a33b4f14f15
# Dataset Card for `wikiclir/en-simple` The `wikiclir/en-simple` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/en-simple). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=127,089 - `queries` (i.e., topics); count=114,572 - `qrels`: (relevance assessments); count=250,380 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_en-simple', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_en-simple', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_en-simple', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_en-simple
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:57:39+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/en-simple`", "viewer": false}
2023-01-05T03:57:44+00:00
[]
[]
79e3fb27a5b5996e616373917494eacfd1dd0ddf
# Dataset Card for `wikiclir/es` The `wikiclir/es` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/es). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,302,958 - `queries` (i.e., topics); count=781,642 - `qrels`: (relevance assessments); count=2,894,807 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_es', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_es', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_es', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_es
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:57:50+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/es`", "viewer": false}
2023-01-05T03:57:55+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikiclir/es' The 'wikiclir/es' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=1,302,958 - 'queries' (i.e., topics); count=781,642 - 'qrels': (relevance assessments); count=2,894,807 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikiclir/es'\n\nThe 'wikiclir/es' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,302,958\n - 'queries' (i.e., topics); count=781,642\n - 'qrels': (relevance assessments); count=2,894,807", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikiclir/es'\n\nThe 'wikiclir/es' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,302,958\n - 'queries' (i.e., topics); count=781,642\n - 'qrels': (relevance assessments); count=2,894,807", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
2fbaf54cd79254c4642cdba2a323ea8387e37114
# Dataset Card for `wikiclir/fi` The `wikiclir/fi` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/fi). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=418,677 - `queries` (i.e., topics); count=273,819 - `qrels`: (relevance assessments); count=939,613 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_fi', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_fi', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_fi', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_fi
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:58:01+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/fi`", "viewer": false}
2023-01-05T03:58:07+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikiclir/fi' The 'wikiclir/fi' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=418,677 - 'queries' (i.e., topics); count=273,819 - 'qrels': (relevance assessments); count=939,613 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikiclir/fi'\n\nThe 'wikiclir/fi' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=418,677\n - 'queries' (i.e., topics); count=273,819\n - 'qrels': (relevance assessments); count=939,613", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikiclir/fi'\n\nThe 'wikiclir/fi' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=418,677\n - 'queries' (i.e., topics); count=273,819\n - 'qrels': (relevance assessments); count=939,613", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
77d187f804572db733df8e9e9618f378e6a25391
# Dataset Card for `wikiclir/fr` The `wikiclir/fr` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/fr). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,894,397 - `queries` (i.e., topics); count=1,089,179 - `qrels`: (relevance assessments); count=5,137,366 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_fr', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_fr', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_fr', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_fr
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:58:12+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/fr`", "viewer": false}
2023-01-05T03:58:18+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikiclir/fr' The 'wikiclir/fr' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=1,894,397 - 'queries' (i.e., topics); count=1,089,179 - 'qrels': (relevance assessments); count=5,137,366 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikiclir/fr'\n\nThe 'wikiclir/fr' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,894,397\n - 'queries' (i.e., topics); count=1,089,179\n - 'qrels': (relevance assessments); count=5,137,366", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikiclir/fr'\n\nThe 'wikiclir/fr' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,894,397\n - 'queries' (i.e., topics); count=1,089,179\n - 'qrels': (relevance assessments); count=5,137,366", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
9f5309c8bc74eee2860535108f3d5da8a7d5ba56
# Dataset Card for `wikiclir/it` The `wikiclir/it` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/it). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,347,011 - `queries` (i.e., topics); count=808,605 - `qrels`: (relevance assessments); count=3,443,633 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_it', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_it', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_it', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_it
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:58:23+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/it`", "viewer": false}
2023-01-05T03:58:29+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikiclir/it' The 'wikiclir/it' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=1,347,011 - 'queries' (i.e., topics); count=808,605 - 'qrels': (relevance assessments); count=3,443,633 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikiclir/it'\n\nThe 'wikiclir/it' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,347,011\n - 'queries' (i.e., topics); count=808,605\n - 'qrels': (relevance assessments); count=3,443,633", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikiclir/it'\n\nThe 'wikiclir/it' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,347,011\n - 'queries' (i.e., topics); count=808,605\n - 'qrels': (relevance assessments); count=3,443,633", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
72cc55ced1a6c79bd848e6d99959983d47b61f4e
# Dataset Card for `wikiclir/ja` The `wikiclir/ja` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/ja). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,071,292 - `queries` (i.e., topics); count=426,431 - `qrels`: (relevance assessments); count=3,338,667 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_ja', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_ja', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_ja', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_ja
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:58:34+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/ja`", "viewer": false}
2023-01-05T03:58:40+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikiclir/ja' The 'wikiclir/ja' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=1,071,292 - 'queries' (i.e., topics); count=426,431 - 'qrels': (relevance assessments); count=3,338,667 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikiclir/ja'\n\nThe 'wikiclir/ja' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,071,292\n - 'queries' (i.e., topics); count=426,431\n - 'qrels': (relevance assessments); count=3,338,667", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikiclir/ja'\n\nThe 'wikiclir/ja' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,071,292\n - 'queries' (i.e., topics); count=426,431\n - 'qrels': (relevance assessments); count=3,338,667", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
9304730a276f792ee322e52c4b9c9e8bb92ec4f7
# Dataset Card for `wikiclir/ko` The `wikiclir/ko` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/ko). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=394,177 - `queries` (i.e., topics); count=224,855 - `qrels`: (relevance assessments); count=568,205 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_ko', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_ko', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_ko', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_ko
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:58:45+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/ko`", "viewer": false}
2023-01-05T03:58:51+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikiclir/ko' The 'wikiclir/ko' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=394,177 - 'queries' (i.e., topics); count=224,855 - 'qrels': (relevance assessments); count=568,205 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikiclir/ko'\n\nThe 'wikiclir/ko' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=394,177\n - 'queries' (i.e., topics); count=224,855\n - 'qrels': (relevance assessments); count=568,205", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikiclir/ko'\n\nThe 'wikiclir/ko' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=394,177\n - 'queries' (i.e., topics); count=224,855\n - 'qrels': (relevance assessments); count=568,205", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
98a3e584c1719f9af231bfdeb824f939af233ca0
# Dataset Card for `wikiclir/nl` The `wikiclir/nl` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/nl). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,908,260 - `queries` (i.e., topics); count=687,718 - `qrels`: (relevance assessments); count=2,334,644 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_nl', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_nl', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_nl', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_nl
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:58:57+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/nl`", "viewer": false}
2023-01-05T03:59:02+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikiclir/nl' The 'wikiclir/nl' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=1,908,260 - 'queries' (i.e., topics); count=687,718 - 'qrels': (relevance assessments); count=2,334,644 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikiclir/nl'\n\nThe 'wikiclir/nl' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,908,260\n - 'queries' (i.e., topics); count=687,718\n - 'qrels': (relevance assessments); count=2,334,644", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikiclir/nl'\n\nThe 'wikiclir/nl' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,908,260\n - 'queries' (i.e., topics); count=687,718\n - 'qrels': (relevance assessments); count=2,334,644", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
07e5a78fa08080ff5f25f9da12ee3de92016732d
# Dataset Card for `wikiclir/nn` The `wikiclir/nn` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/nn). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=133,290 - `queries` (i.e., topics); count=99,493 - `qrels`: (relevance assessments); count=250,141 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_nn', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_nn', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_nn', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_nn
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:59:08+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/nn`", "viewer": false}
2023-01-05T03:59:13+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikiclir/nn' The 'wikiclir/nn' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=133,290 - 'queries' (i.e., topics); count=99,493 - 'qrels': (relevance assessments); count=250,141 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikiclir/nn'\n\nThe 'wikiclir/nn' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=133,290\n - 'queries' (i.e., topics); count=99,493\n - 'qrels': (relevance assessments); count=250,141", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikiclir/nn'\n\nThe 'wikiclir/nn' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=133,290\n - 'queries' (i.e., topics); count=99,493\n - 'qrels': (relevance assessments); count=250,141", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
2a4a0225105760f3c2a5fb1bf0f74d5304144439
# Dataset Card for `wikiclir/no` The `wikiclir/no` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/no). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=471,420 - `queries` (i.e., topics); count=299,897 - `qrels`: (relevance assessments); count=963,514 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_no', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_no', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_no', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_no
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:59:19+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/no`", "viewer": false}
2023-01-05T03:59:24+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikiclir/no' The 'wikiclir/no' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=471,420 - 'queries' (i.e., topics); count=299,897 - 'qrels': (relevance assessments); count=963,514 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikiclir/no'\n\nThe 'wikiclir/no' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=471,420\n - 'queries' (i.e., topics); count=299,897\n - 'qrels': (relevance assessments); count=963,514", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikiclir/no'\n\nThe 'wikiclir/no' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=471,420\n - 'queries' (i.e., topics); count=299,897\n - 'qrels': (relevance assessments); count=963,514", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]