| Column | Type | Min length | Max length |
|---|---|---|---|
| sha | string | 40 | 40 |
| text | string | 1 | 13.4M |
| id | string | 2 | 117 |
| tags | sequence | 1 | 7.91k |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 875k |
| last_modified | string | 25 | 25 |
| arxiv | sequence | 0 | 25 |
| languages | sequence | 0 | 7.91k |
| tags_str | string | 17 | 159k |
| text_str | string | 1 | 447k |
| text_lists | sequence | 0 | 352 |
| processed_texts | sequence | 1 | 353 |
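Each record below is one row of this dump: the markdown dataset card sits in `text`, the repository id in `id`, and the card's YAML front matter is stored as a JSON string in `metadata`. As a minimal sketch of how a single row might be unpacked (the abridged record literal is copied from the first row below; this is illustrative, not an official loader):

```python
import json

# One row of the dump, abridged from the first record below (irds/wikiclir_pl).
row = {
    "sha": "a60c4b3201953e6f73404da4f8e257a35dbb3e51",
    "id": "irds/wikiclir_pl",
    "tags": ["task_categories:text-retrieval", "region:us"],
    "created_at": "2023-01-05T03:59:30+00:00",
    # The `metadata` column is a JSON-encoded string, not a nested object.
    "metadata": '{"source_datasets": [], "task_categories": ["text-retrieval"], '
                '"pretty_name": "`wikiclir/pl`", "viewer": false}',
    "last_modified": "2023-01-05T03:59:35+00:00",
}

# Parse the JSON-encoded metadata column into a dict.
meta = json.loads(row["metadata"])
print(row["id"], meta["pretty_name"], meta["task_categories"])

# Split structured "key:value" tags into a lookup table.
structured = [t.split(":", 1) for t in row["tags"] if ":" in t]
print(dict(structured))  # {'task_categories': 'text-retrieval', 'region': 'us'}
```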
a60c4b3201953e6f73404da4f8e257a35dbb3e51
# Dataset Card for `wikiclir/pl` The `wikiclir/pl` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/pl). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,234,316 - `queries` (i.e., topics); count=693,656 - `qrels`: (relevance assessments); count=2,471,360 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_pl', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_pl', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_pl', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_pl
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:59:30+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/pl`", "viewer": false}
2023-01-05T03:59:35+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikiclir/pl' The 'wikiclir/pl' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=1,234,316 - 'queries' (i.e., topics); count=693,656 - 'qrels': (relevance assessments); count=2,471,360 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikiclir/pl'\n\nThe 'wikiclir/pl' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,234,316\n - 'queries' (i.e., topics); count=693,656\n - 'qrels': (relevance assessments); count=2,471,360", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikiclir/pl'\n\nThe 'wikiclir/pl' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,234,316\n - 'queries' (i.e., topics); count=693,656\n - 'qrels': (relevance assessments); count=2,471,360", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
b7a9c26c16d319a170ffaf4278831c91d95a3f80
# Dataset Card for `wikiclir/pt` The `wikiclir/pt` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/pt). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=973,057 - `queries` (i.e., topics); count=611,732 - `qrels`: (relevance assessments); count=1,741,889 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_pt', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_pt', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_pt', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_pt
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:59:41+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/pt`", "viewer": false}
2023-01-05T03:59:48+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikiclir/pt' The 'wikiclir/pt' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=973,057 - 'queries' (i.e., topics); count=611,732 - 'qrels': (relevance assessments); count=1,741,889 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikiclir/pt'\n\nThe 'wikiclir/pt' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=973,057\n - 'queries' (i.e., topics); count=611,732\n - 'qrels': (relevance assessments); count=1,741,889", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikiclir/pt'\n\nThe 'wikiclir/pt' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=973,057\n - 'queries' (i.e., topics); count=611,732\n - 'qrels': (relevance assessments); count=1,741,889", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
0cd574b3cd70d2c3de60d7cd6b57318cd8d12894
# Dataset Card for `wikiclir/ro` The `wikiclir/ro` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/ro). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=376,655 - `queries` (i.e., topics); count=199,264 - `qrels`: (relevance assessments); count=451,180 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_ro', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_ro', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_ro', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_ro
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:59:53+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/ro`", "viewer": false}
2023-01-05T03:59:59+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikiclir/ro' The 'wikiclir/ro' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=376,655 - 'queries' (i.e., topics); count=199,264 - 'qrels': (relevance assessments); count=451,180 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikiclir/ro'\n\nThe 'wikiclir/ro' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=376,655\n - 'queries' (i.e., topics); count=199,264\n - 'qrels': (relevance assessments); count=451,180", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikiclir/ro'\n\nThe 'wikiclir/ro' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=376,655\n - 'queries' (i.e., topics); count=199,264\n - 'qrels': (relevance assessments); count=451,180", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
794af9f06463ebe99d484d1fddc4851cff4eb143
# Dataset Card for `wikiclir/ru` The `wikiclir/ru` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/ru). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,413,945 - `queries` (i.e., topics); count=664,924 - `qrels`: (relevance assessments); count=2,321,384 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_ru', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_ru', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_ru', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_ru
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:00:04+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/ru`", "viewer": false}
2023-01-05T04:00:10+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikiclir/ru' The 'wikiclir/ru' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=1,413,945 - 'queries' (i.e., topics); count=664,924 - 'qrels': (relevance assessments); count=2,321,384 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikiclir/ru'\n\nThe 'wikiclir/ru' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,413,945\n - 'queries' (i.e., topics); count=664,924\n - 'qrels': (relevance assessments); count=2,321,384", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikiclir/ru'\n\nThe 'wikiclir/ru' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,413,945\n - 'queries' (i.e., topics); count=664,924\n - 'qrels': (relevance assessments); count=2,321,384", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
eb942ebdcde3f00e8ab2625f12fea6e763d89c5e
# Dataset Card for `wikiclir/sv` The `wikiclir/sv` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/sv). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=3,785,412 - `queries` (i.e., topics); count=639,073 - `qrels`: (relevance assessments); count=2,069,453 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_sv', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_sv', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_sv', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_sv
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:00:16+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/sv`", "viewer": false}
2023-01-05T04:00:21+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikiclir/sv' The 'wikiclir/sv' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=3,785,412 - 'queries' (i.e., topics); count=639,073 - 'qrels': (relevance assessments); count=2,069,453 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikiclir/sv'\n\nThe 'wikiclir/sv' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=3,785,412\n - 'queries' (i.e., topics); count=639,073\n - 'qrels': (relevance assessments); count=2,069,453", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikiclir/sv'\n\nThe 'wikiclir/sv' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=3,785,412\n - 'queries' (i.e., topics); count=639,073\n - 'qrels': (relevance assessments); count=2,069,453", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
8720841ac50187a4f2cb975a561071aa394adae5
# Dataset Card for `wikiclir/sw` The `wikiclir/sw` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/sw). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=37,079 - `queries` (i.e., topics); count=22,860 - `qrels`: (relevance assessments); count=57,924 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_sw', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_sw', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_sw', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_sw
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:00:27+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/sw`", "viewer": false}
2023-01-05T04:00:32+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikiclir/sw' The 'wikiclir/sw' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=37,079 - 'queries' (i.e., topics); count=22,860 - 'qrels': (relevance assessments); count=57,924 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikiclir/sw'\n\nThe 'wikiclir/sw' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=37,079\n - 'queries' (i.e., topics); count=22,860\n - 'qrels': (relevance assessments); count=57,924", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikiclir/sw'\n\nThe 'wikiclir/sw' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=37,079\n - 'queries' (i.e., topics); count=22,860\n - 'qrels': (relevance assessments); count=57,924", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
4deb906ed0fcae7cbd355e37821a2b8c1aeeed76
# Dataset Card for `wikiclir/tl` The `wikiclir/tl` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/tl). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=79,008 - `queries` (i.e., topics); count=48,930 - `qrels`: (relevance assessments); count=72,359 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_tl', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_tl', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_tl', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_tl
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:00:38+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/tl`", "viewer": false}
2023-01-05T04:00:44+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikiclir/tl' The 'wikiclir/tl' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=79,008 - 'queries' (i.e., topics); count=48,930 - 'qrels': (relevance assessments); count=72,359 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikiclir/tl'\n\nThe 'wikiclir/tl' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=79,008\n - 'queries' (i.e., topics); count=48,930\n - 'qrels': (relevance assessments); count=72,359", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikiclir/tl'\n\nThe 'wikiclir/tl' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=79,008\n - 'queries' (i.e., topics); count=48,930\n - 'qrels': (relevance assessments); count=72,359", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
7aaa433bf0a1b2a46fbf49d62bcac06db34acd3b
# Dataset Card for `wikiclir/tr` The `wikiclir/tr` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/tr). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=295,593 - `queries` (i.e., topics); count=185,388 - `qrels`: (relevance assessments); count=380,651 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_tr', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_tr', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_tr', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_tr
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:00:49+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/tr`", "viewer": false}
2023-01-05T04:00:55+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikiclir/tr' The 'wikiclir/tr' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=295,593 - 'queries' (i.e., topics); count=185,388 - 'qrels': (relevance assessments); count=380,651 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikiclir/tr'\n\nThe 'wikiclir/tr' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=295,593\n - 'queries' (i.e., topics); count=185,388\n - 'qrels': (relevance assessments); count=380,651", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikiclir/tr'\n\nThe 'wikiclir/tr' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=295,593\n - 'queries' (i.e., topics); count=185,388\n - 'qrels': (relevance assessments); count=380,651", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
34241822fbdee548da53215dc7e9aa079fedfda6
# Dataset Card for `wikiclir/uk` The `wikiclir/uk` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/uk). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=704,903 - `queries` (i.e., topics); count=348,222 - `qrels`: (relevance assessments); count=913,358 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_uk', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_uk', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_uk', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_uk
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:01:00+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/uk`", "viewer": false}
2023-01-05T04:01:06+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikiclir/uk' The 'wikiclir/uk' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=704,903 - 'queries' (i.e., topics); count=348,222 - 'qrels': (relevance assessments); count=913,358 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikiclir/uk'\n\nThe 'wikiclir/uk' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=704,903\n - 'queries' (i.e., topics); count=348,222\n - 'qrels': (relevance assessments); count=913,358", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikiclir/uk'\n\nThe 'wikiclir/uk' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=704,903\n - 'queries' (i.e., topics); count=348,222\n - 'qrels': (relevance assessments); count=913,358", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
11c4efc0c905cf0f7e1ec205a0d106063117d401
# Dataset Card for `wikiclir/vi` The `wikiclir/vi` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/vi). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,392,152 - `queries` (i.e., topics); count=354,312 - `qrels`: (relevance assessments); count=611,355 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_vi', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_vi', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_vi', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_vi
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:01:11+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/vi`", "viewer": false}
2023-01-05T04:01:17+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikiclir/vi' The 'wikiclir/vi' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=1,392,152 - 'queries' (i.e., topics); count=354,312 - 'qrels': (relevance assessments); count=611,355 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikiclir/vi'\n\nThe 'wikiclir/vi' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,392,152\n - 'queries' (i.e., topics); count=354,312\n - 'qrels': (relevance assessments); count=611,355", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikiclir/vi'\n\nThe 'wikiclir/vi' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,392,152\n - 'queries' (i.e., topics); count=354,312\n - 'qrels': (relevance assessments); count=611,355", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
da21f40f6e4b7c793a5a432c31c2cf4d7f1352ff
# Dataset Card for `wikiclir/zh` The `wikiclir/zh` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/zh). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=951,480 - `queries` (i.e., topics); count=463,273 - `qrels`: (relevance assessments); count=926,130 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_zh', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_zh', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_zh', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_zh
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:01:22+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/zh`", "viewer": false}
2023-01-05T04:01:28+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikiclir/zh' The 'wikiclir/zh' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=951,480 - 'queries' (i.e., topics); count=463,273 - 'qrels': (relevance assessments); count=926,130 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikiclir/zh'\n\nThe 'wikiclir/zh' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=951,480\n - 'queries' (i.e., topics); count=463,273\n - 'qrels': (relevance assessments); count=926,130", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikiclir/zh'\n\nThe 'wikiclir/zh' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=951,480\n - 'queries' (i.e., topics); count=463,273\n - 'qrels': (relevance assessments); count=926,130", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
34faf269615dfec84c7406e1aad4028184021690
# Dataset Card for `wikir/en1k` The `wikir/en1k` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikir#wikir/en1k). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=369,721 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikir_en1k', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Frej2020Wikir, title={WIKIR: A Python toolkit for building a large-scale Wikipedia-based English Information Retrieval Dataset}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={LREC}, year={2020} } @inproceedings{Frej2020MlWikir, title={MLWIKIR: A Python Toolkit for Building Large-scale Wikipedia-based Information Retrieval Datasets in Chinese, English, French, Italian, Japanese, Spanish and More}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={CIRCLE}, year={2020} } ```
irds/wikir_en1k
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:01:33+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikir/en1k`", "viewer": false}
2023-01-05T04:01:39+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikir/en1k' The 'wikir/en1k' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=369,721 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikir/en1k'\n\nThe 'wikir/en1k' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=369,721", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikir/en1k'\n\nThe 'wikir/en1k' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=369,721", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
2d9fe6d78bd5fb74187947c42fb47aecb45e616d
# Dataset Card for `wikir/en59k` The `wikir/en59k` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikir#wikir/en59k). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=2,454,785 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikir_en59k', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Frej2020Wikir, title={WIKIR: A Python toolkit for building a large-scale Wikipedia-based English Information Retrieval Dataset}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={LREC}, year={2020} } @inproceedings{Frej2020MlWikir, title={MLWIKIR: A Python Toolkit for Building Large-scale Wikipedia-based Information Retrieval Datasets in Chinese, English, French, Italian, Japanese, Spanish and More}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={CIRCLE}, year={2020} } ```
irds/wikir_en59k
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:01:44+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikir/en59k`", "viewer": false}
2023-01-05T04:01:50+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikir/en59k' The 'wikir/en59k' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=2,454,785 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikir/en59k'\n\nThe 'wikir/en59k' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=2,454,785", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikir/en59k'\n\nThe 'wikir/en59k' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=2,454,785", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
381392f54da22a76b14335afddb7a77ac3e9acf7
# Dataset Card for `wikir/en78k` The `wikir/en78k` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikir#wikir/en78k). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=2,456,637 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikir_en78k', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Frej2020Wikir, title={WIKIR: A Python toolkit for building a large-scale Wikipedia-based English Information Retrieval Dataset}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={LREC}, year={2020} } @inproceedings{Frej2020MlWikir, title={MLWIKIR: A Python Toolkit for Building Large-scale Wikipedia-based Information Retrieval Datasets in Chinese, English, French, Italian, Japanese, Spanish and More}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={CIRCLE}, year={2020} } ```
irds/wikir_en78k
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:01:56+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikir/en78k`", "viewer": false}
2023-01-05T04:02:01+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikir/en78k' The 'wikir/en78k' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=2,456,637 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikir/en78k'\n\nThe 'wikir/en78k' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=2,456,637", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikir/en78k'\n\nThe 'wikir/en78k' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=2,456,637", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
4974b363522566225161cd1d4909c90f93f10324
# Dataset Card for `wikir/ens78k` The `wikir/ens78k` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikir#wikir/ens78k). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=2,456,637 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikir_ens78k', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Frej2020Wikir, title={WIKIR: A Python toolkit for building a large-scale Wikipedia-based English Information Retrieval Dataset}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={LREC}, year={2020} } @inproceedings{Frej2020MlWikir, title={MLWIKIR: A Python Toolkit for Building Large-scale Wikipedia-based Information Retrieval Datasets in Chinese, English, French, Italian, Japanese, Spanish and More}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={CIRCLE}, year={2020} } ```
irds/wikir_ens78k
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:02:07+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikir/ens78k`", "viewer": false}
2023-01-05T04:02:13+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikir/ens78k' The 'wikir/ens78k' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=2,456,637 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikir/ens78k'\n\nThe 'wikir/ens78k' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=2,456,637", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikir/ens78k'\n\nThe 'wikir/ens78k' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=2,456,637", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
db04698ad140684d348a6c4c200a105488cb2568
# Dataset Card for `wikir/es13k` The `wikir/es13k` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikir#wikir/es13k). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=645,901 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikir_es13k', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Frej2020Wikir, title={WIKIR: A Python toolkit for building a large-scale Wikipedia-based English Information Retrieval Dataset}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={LREC}, year={2020} } @inproceedings{Frej2020MlWikir, title={MLWIKIR: A Python Toolkit for Building Large-scale Wikipedia-based Information Retrieval Datasets in Chinese, English, French, Italian, Japanese, Spanish and More}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={CIRCLE}, year={2020} } ```
irds/wikir_es13k
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:02:18+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikir/es13k`", "viewer": false}
2023-01-05T04:02:24+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikir/es13k' The 'wikir/es13k' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=645,901 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikir/es13k'\n\nThe 'wikir/es13k' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=645,901", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikir/es13k'\n\nThe 'wikir/es13k' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=645,901", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
4b30e81715f939304b635ab6d0d310c4206388f1
# Dataset Card for `wikir/fr14k` The `wikir/fr14k` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikir#wikir/fr14k). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=736,616 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikir_fr14k', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Frej2020Wikir, title={WIKIR: A Python toolkit for building a large-scale Wikipedia-based English Information Retrieval Dataset}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={LREC}, year={2020} } @inproceedings{Frej2020MlWikir, title={MLWIKIR: A Python Toolkit for Building Large-scale Wikipedia-based Information Retrieval Datasets in Chinese, English, French, Italian, Japanese, Spanish and More}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={CIRCLE}, year={2020} } ```
irds/wikir_fr14k
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:02:29+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikir/fr14k`", "viewer": false}
2023-01-05T04:02:35+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikir/fr14k' The 'wikir/fr14k' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=736,616 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikir/fr14k'\n\nThe 'wikir/fr14k' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=736,616", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikir/fr14k'\n\nThe 'wikir/fr14k' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=736,616", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
b880a1e3756c68f85108b73e93cf1f121e858e5a
# Dataset Card for `wikir/it16k` The `wikir/it16k` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikir#wikir/it16k). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=503,012 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikir_it16k', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Frej2020Wikir, title={WIKIR: A Python toolkit for building a large-scale Wikipedia-based English Information Retrieval Dataset}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={LREC}, year={2020} } @inproceedings{Frej2020MlWikir, title={MLWIKIR: A Python Toolkit for Building Large-scale Wikipedia-based Information Retrieval Datasets in Chinese, English, French, Italian, Japanese, Spanish and More}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={CIRCLE}, year={2020} } ```
irds/wikir_it16k
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:02:40+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikir/it16k`", "viewer": false}
2023-01-05T04:02:46+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'wikir/it16k' The 'wikir/it16k' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=503,012 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'wikir/it16k'\n\nThe 'wikir/it16k' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=503,012", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'wikir/it16k'\n\nThe 'wikir/it16k' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=503,012", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
53e8015a522198555acd5c856d28dbe9bff6da9c
# Dataset Card for `trec-fair/2022/train` The `trec-fair/2022/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-fair#trec-fair/2022/train). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=2,088,306 ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/trec-fair_2022_train', 'queries') for record in queries: record # {'query_id': ..., 'text': ..., 'url': ...} qrels = load_dataset('irds/trec-fair_2022_train', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/trec-fair_2022_train
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:02:51+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`trec-fair/2022/train`", "viewer": false}
2023-01-05T04:02:57+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'trec-fair/2022/train' The 'trec-fair/2022/train' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=50 - 'qrels': (relevance assessments); count=2,088,306 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'trec-fair/2022/train'\n\nThe 'trec-fair/2022/train' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=2,088,306", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'trec-fair/2022/train'\n\nThe 'trec-fair/2022/train' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=2,088,306", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
25bbce2867adeb1406f7b6037dbe97caf78ec7cf
# Dataset Card for `trec-cast/v0` The `trec-cast/v0` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-cast#trec-cast/v0). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=47,696,605 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/trec-cast_v0', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Dalton2019Cast, title={CAsT 2019: The Conversational Assistance Track Overview}, author={Jeffrey Dalton and Chenyan Xiong and Jamie Callan}, booktitle={TREC}, year={2019} } ```
irds/trec-cast_v0
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:03:03+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`trec-cast/v0`", "viewer": false}
2023-01-05T04:03:08+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'trec-cast/v0' The 'trec-cast/v0' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=47,696,605 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'trec-cast/v0'\n\nThe 'trec-cast/v0' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=47,696,605", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'trec-cast/v0'\n\nThe 'trec-cast/v0' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=47,696,605", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
64ac84bd0275f929e5007e75972bae185bbb7bfe
# Dataset Card for `trec-cast/v1` The `trec-cast/v1` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-cast#trec-cast/v1). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=38,622,444 This dataset is used by: [`trec-cast_v1_2020`](https://huggingface.co/datasets/irds/trec-cast_v1_2020), [`trec-cast_v1_2020_judged`](https://huggingface.co/datasets/irds/trec-cast_v1_2020_judged) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/trec-cast_v1', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Dalton2019Cast, title={CAsT 2019: The Conversational Assistance Track Overview}, author={Jeffrey Dalton and Chenyan Xiong and Jamie Callan}, booktitle={TREC}, year={2019} } ```
irds/trec-cast_v1
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:03:14+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`trec-cast/v1`", "viewer": false}
2023-01-05T04:03:19+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'trec-cast/v1' The 'trec-cast/v1' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=38,622,444 This dataset is used by: 'trec-cast_v1_2020', 'trec-cast_v1_2020_judged' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'trec-cast/v1'\n\nThe 'trec-cast/v1' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=38,622,444\n\n\nThis dataset is used by: 'trec-cast_v1_2020', 'trec-cast_v1_2020_judged'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'trec-cast/v1'\n\nThe 'trec-cast/v1' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=38,622,444\n\n\nThis dataset is used by: 'trec-cast_v1_2020', 'trec-cast_v1_2020_judged'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
c730a1fc68ec286a83e785b796c2ebb11b6e34d2
# Dataset Card for `trec-cast/v1/2020` The `trec-cast/v1/2020` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-cast#trec-cast/v1/2020). # Data This dataset provides: - `queries` (i.e., topics); count=216 - `qrels`: (relevance assessments); count=40,451 - For `docs`, use [`irds/trec-cast_v1`](https://huggingface.co/datasets/irds/trec-cast_v1) This dataset is used by: [`trec-cast_v1_2020_judged`](https://huggingface.co/datasets/irds/trec-cast_v1_2020_judged) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/trec-cast_v1_2020', 'queries') for record in queries: record # {'query_id': ..., 'raw_utterance': ..., 'automatic_rewritten_utterance': ..., 'manual_rewritten_utterance': ..., 'manual_canonical_result_id': ..., 'topic_number': ..., 'turn_number': ...} qrels = load_dataset('irds/trec-cast_v1_2020', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Dalton2020Cast, title={CAsT 2020: The Conversational Assistance Track Overview}, author={Jeffrey Dalton and Chenyan Xiong and Jamie Callan}, booktitle={TREC}, year={2020} } ```
irds/trec-cast_v1_2020
[ "task_categories:text-retrieval", "source_datasets:irds/trec-cast_v1", "region:us" ]
2023-01-05T04:03:25+00:00
{"source_datasets": ["irds/trec-cast_v1"], "task_categories": ["text-retrieval"], "pretty_name": "`trec-cast/v1/2020`", "viewer": false}
2023-01-05T04:03:31+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/trec-cast_v1 #region-us
# Dataset Card for 'trec-cast/v1/2020' The 'trec-cast/v1/2020' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=216 - 'qrels': (relevance assessments); count=40,451 - For 'docs', use 'irds/trec-cast_v1' This dataset is used by: 'trec-cast_v1_2020_judged' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'trec-cast/v1/2020'\n\nThe 'trec-cast/v1/2020' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=216\n - 'qrels': (relevance assessments); count=40,451\n\n - For 'docs', use 'irds/trec-cast_v1'\n\nThis dataset is used by: 'trec-cast_v1_2020_judged'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/trec-cast_v1 #region-us \n", "# Dataset Card for 'trec-cast/v1/2020'\n\nThe 'trec-cast/v1/2020' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=216\n - 'qrels': (relevance assessments); count=40,451\n\n - For 'docs', use 'irds/trec-cast_v1'\n\nThis dataset is used by: 'trec-cast_v1_2020_judged'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
a6b28f20219044c62f728352f3692897550a30cf
# Dataset Card for `trec-cast/v1/2020/judged` The `trec-cast/v1/2020/judged` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-cast#trec-cast/v1/2020/judged). # Data This dataset provides: - `queries` (i.e., topics); count=208 - For `docs`, use [`irds/trec-cast_v1`](https://huggingface.co/datasets/irds/trec-cast_v1) - For `qrels`, use [`irds/trec-cast_v1_2020`](https://huggingface.co/datasets/irds/trec-cast_v1_2020) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/trec-cast_v1_2020_judged', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Dalton2020Cast, title={CAsT 2020: The Conversational Assistance Track Overview}, author={Jeffrey Dalton and Chenyan Xiong and Jamie Callan}, booktitle={TREC}, year={2020} } ```
irds/trec-cast_v1_2020_judged
[ "task_categories:text-retrieval", "source_datasets:irds/trec-cast_v1", "source_datasets:irds/trec-cast_v1_2020", "region:us" ]
2023-01-05T04:03:36+00:00
{"source_datasets": ["irds/trec-cast_v1", "irds/trec-cast_v1_2020"], "task_categories": ["text-retrieval"], "pretty_name": "`trec-cast/v1/2020/judged`", "viewer": false}
2023-01-05T04:03:42+00:00
[]
[]
TAGS #task_categories-text-retrieval #source_datasets-irds/trec-cast_v1 #source_datasets-irds/trec-cast_v1_2020 #region-us
# Dataset Card for 'trec-cast/v1/2020/judged' The 'trec-cast/v1/2020/judged' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'queries' (i.e., topics); count=208 - For 'docs', use 'irds/trec-cast_v1' - For 'qrels', use 'irds/trec-cast_v1_2020' ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'trec-cast/v1/2020/judged'\n\nThe 'trec-cast/v1/2020/judged' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=208\n\n - For 'docs', use 'irds/trec-cast_v1'\n - For 'qrels', use 'irds/trec-cast_v1_2020'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #source_datasets-irds/trec-cast_v1 #source_datasets-irds/trec-cast_v1_2020 #region-us \n", "# Dataset Card for 'trec-cast/v1/2020/judged'\n\nThe 'trec-cast/v1/2020/judged' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=208\n\n - For 'docs', use 'irds/trec-cast_v1'\n - For 'qrels', use 'irds/trec-cast_v1_2020'", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
762f2fe26d008e9908e0d451cbb1c3818c95a89f
# Dataset Card for `hc4/fa`

The `hc4/fa` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/hc4#hc4/fa).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=486,486


## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/hc4_fa', 'docs')
for record in docs:
    record # {'doc_id': ..., 'title': ..., 'text': ..., 'url': ..., 'time': ..., 'cc_file': ...}

```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.

## Citation Information

```
@article{Lawrie2022HC4,
  author = {Dawn Lawrie and James Mayfield and Douglas W. Oard and Eugene Yang},
  title = {HC4: A New Suite of Test Collections for Ad Hoc CLIR},
  booktitle = {{Advances in Information Retrieval. 44th European Conference on IR Research (ECIR 2022)}},
  year = {2022},
  month = apr,
  publisher = {Springer},
  series = {Lecture Notes in Computer Science},
  site = {Stavanger, Norway},
  url = {https://arxiv.org/abs/2201.09992}
}
```
irds/hc4_fa
[ "task_categories:text-retrieval", "arxiv:2201.09992", "region:us" ]
2023-01-05T04:03:47+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`hc4/fa`", "viewer": false}
2023-01-05T04:03:53+00:00
[ "2201.09992" ]
[]
TAGS #task_categories-text-retrieval #arxiv-2201.09992 #region-us
# Dataset Card for 'hc4/fa' The 'hc4/fa' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=486,486 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'hc4/fa'\n\nThe 'hc4/fa' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=486,486", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #arxiv-2201.09992 #region-us \n", "# Dataset Card for 'hc4/fa'\n\nThe 'hc4/fa' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=486,486", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
65fa23f6fb44e5965c09369b58c07ceeb3f169c5
# Dataset Card for `hc4/ru`

The `hc4/ru` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/hc4#hc4/ru).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=4,721,064


## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/hc4_ru', 'docs')
for record in docs:
    record # {'doc_id': ..., 'title': ..., 'text': ..., 'url': ..., 'time': ..., 'cc_file': ...}

```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.

## Citation Information

```
@article{Lawrie2022HC4,
  author = {Dawn Lawrie and James Mayfield and Douglas W. Oard and Eugene Yang},
  title = {HC4: A New Suite of Test Collections for Ad Hoc CLIR},
  booktitle = {{Advances in Information Retrieval. 44th European Conference on IR Research (ECIR 2022)}},
  year = {2022},
  month = apr,
  publisher = {Springer},
  series = {Lecture Notes in Computer Science},
  site = {Stavanger, Norway},
  url = {https://arxiv.org/abs/2201.09992}
}
```
irds/hc4_ru
[ "task_categories:text-retrieval", "arxiv:2201.09992", "region:us" ]
2023-01-05T04:03:58+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`hc4/ru`", "viewer": false}
2023-01-05T04:04:04+00:00
[ "2201.09992" ]
[]
TAGS #task_categories-text-retrieval #arxiv-2201.09992 #region-us
# Dataset Card for 'hc4/ru' The 'hc4/ru' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=4,721,064 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'hc4/ru'\n\nThe 'hc4/ru' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=4,721,064", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #arxiv-2201.09992 #region-us \n", "# Dataset Card for 'hc4/ru'\n\nThe 'hc4/ru' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=4,721,064", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
cc98dd7e796bd3a70cea16b97ae0df671acbf444
# Dataset Card for `hc4/zh`

The `hc4/zh` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/hc4#hc4/zh).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=646,305


## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/hc4_zh', 'docs')
for record in docs:
    record # {'doc_id': ..., 'title': ..., 'text': ..., 'url': ..., 'time': ..., 'cc_file': ...}

```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.

## Citation Information

```
@article{Lawrie2022HC4,
  author = {Dawn Lawrie and James Mayfield and Douglas W. Oard and Eugene Yang},
  title = {HC4: A New Suite of Test Collections for Ad Hoc CLIR},
  booktitle = {{Advances in Information Retrieval. 44th European Conference on IR Research (ECIR 2022)}},
  year = {2022},
  month = apr,
  publisher = {Springer},
  series = {Lecture Notes in Computer Science},
  site = {Stavanger, Norway},
  url = {https://arxiv.org/abs/2201.09992}
}
```
irds/hc4_zh
[ "task_categories:text-retrieval", "arxiv:2201.09992", "region:us" ]
2023-01-05T04:04:10+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`hc4/zh`", "viewer": false}
2023-01-05T04:04:15+00:00
[ "2201.09992" ]
[]
TAGS #task_categories-text-retrieval #arxiv-2201.09992 #region-us
# Dataset Card for 'hc4/zh' The 'hc4/zh' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=646,305 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'hc4/zh'\n\nThe 'hc4/zh' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=646,305", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #arxiv-2201.09992 #region-us \n", "# Dataset Card for 'hc4/zh'\n\nThe 'hc4/zh' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=646,305", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
49a02b36880687682db45f8478f2ddcf7983da27
# Dataset Card for `neuclir/1/fa` The `neuclir/1/fa` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neuclir#neuclir/1/fa). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=2,232,016 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/neuclir_1_fa', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ..., 'url': ..., 'time': ..., 'cc_file': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/neuclir_1_fa
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:04:21+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`neuclir/1/fa`", "viewer": false}
2023-01-05T04:04:26+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'neuclir/1/fa' The 'neuclir/1/fa' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=2,232,016 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'neuclir/1/fa'\n\nThe 'neuclir/1/fa' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=2,232,016", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'neuclir/1/fa'\n\nThe 'neuclir/1/fa' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=2,232,016", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
d39ee5367d56bae93cf56997780ccdf93db01c09
# Dataset Card for `neuclir/1/ru` The `neuclir/1/ru` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neuclir#neuclir/1/ru). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=4,627,543 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/neuclir_1_ru', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ..., 'url': ..., 'time': ..., 'cc_file': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/neuclir_1_ru
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:04:32+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`neuclir/1/ru`", "viewer": false}
2023-01-05T04:04:38+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'neuclir/1/ru' The 'neuclir/1/ru' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=4,627,543 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'neuclir/1/ru'\n\nThe 'neuclir/1/ru' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=4,627,543", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'neuclir/1/ru'\n\nThe 'neuclir/1/ru' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=4,627,543", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
3cecf7c13de043e5ff05f4380a555fd8ce46bc4d
# Dataset Card for `neuclir/1/zh` The `neuclir/1/zh` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neuclir#neuclir/1/zh). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=3,179,209 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/neuclir_1_zh', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ..., 'url': ..., 'time': ..., 'cc_file': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/neuclir_1_zh
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:04:43+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`neuclir/1/zh`", "viewer": false}
2023-01-05T04:04:49+00:00
[]
[]
TAGS #task_categories-text-retrieval #region-us
# Dataset Card for 'neuclir/1/zh' The 'neuclir/1/zh' dataset, provided by the ir-datasets package. For more information about the dataset, see the documentation. # Data This dataset provides: - 'docs' (documents, i.e., the corpus); count=3,179,209 ## Usage Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the data in Dataset format.
[ "# Dataset Card for 'neuclir/1/zh'\n\nThe 'neuclir/1/zh' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=3,179,209", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
[ "TAGS\n#task_categories-text-retrieval #region-us \n", "# Dataset Card for 'neuclir/1/zh'\n\nThe 'neuclir/1/zh' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.", "# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=3,179,209", "## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format." ]
c8b71939a9e755c0182e676bea668ac90574e84f
# Dataset Card for "results_valid_100rows_2023-01-05" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
joddy/results_valid_100rows_2023-01-05
[ "region:us" ]
2023-01-05T06:45:12+00:00
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "resolution", "dtype": "int64"}, {"name": "attributes_loc", "dtype": {"class_label": {"names": {"0": "upper left", "1": "upper right", "2": "lower left", "3": "lower right"}}}}, {"name": "NL_text", "dtype": "string"}, {"name": "bbox_text", "dtype": "string"}, {"name": "center_text", "dtype": "string"}, {"name": "normed_object_bbox", "sequence": "int64"}, {"name": "without_pos_stable-diffusion-v1-5", "dtype": "image"}, {"name": "NL_stable-diffusion-v1-5", "dtype": "image"}, {"name": "bbox_stable-diffusion-v1-5", "dtype": "image"}, {"name": "center_stable-diffusion-v1-5", "dtype": "image"}, {"name": "without_pos_NL_text_TextENC_off", "dtype": "image"}, {"name": "NL_text_TextENC_off", "dtype": "image"}, {"name": "without_pos_bbox_text_TextENC_off", "dtype": "image"}, {"name": "bbox_only_tag_TextENC_off", "dtype": "image"}, {"name": "bbox_text_TextENC_off", "dtype": "image"}, {"name": "without_pos_center_text_TextENC_off", "dtype": "image"}, {"name": "center_only_tag_TextENC_off", "dtype": "image"}, {"name": "center_text_TextENC_off", "dtype": "image"}, {"name": "without_pos_NL_text_TextENC_on", "dtype": "image"}, {"name": "NL_text_TextENC_on", "dtype": "image"}, {"name": "without_pos_bbox_text_TextENC_on", "dtype": "image"}, {"name": "bbox_only_tag_TextENC_on", "dtype": "image"}, {"name": "bbox_text_TextENC_on", "dtype": "image"}, {"name": "without_pos_center_text_TextENC_on", "dtype": "image"}, {"name": "center_only_tag_TextENC_on", "dtype": "image"}, {"name": "center_text_TextENC_on", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1033337709.0, "num_examples": 100}], "download_size": 1023757758, "dataset_size": 1033337709.0}}
2023-01-05T07:47:16+00:00
[]
[]
TAGS #region-us
# Dataset Card for "results_valid_100rows_2023-01-05" More Information needed
[ "# Dataset Card for \"results_valid_100rows_2023-01-05\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"results_valid_100rows_2023-01-05\"\n\nMore Information needed" ]
5c86cb5810cf41501b5e009e68967f021ac5e91d
# Dataset Card for "xquad_ar" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Zaid/xquad_ar
[ "region:us" ]
2023-01-05T07:17:31+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 1394144.8109243698, "num_examples": 963}, {"name": "validation", "num_bytes": 172277.5, "num_examples": 119}, {"name": "test", "num_bytes": 156352.68907563025, "num_examples": 108}], "download_size": 406718, "dataset_size": 1722775.0}}
2023-01-05T07:17:58+00:00
[]
[]
TAGS #region-us
# Dataset Card for "xquad_ar" More Information needed
[ "# Dataset Card for \"xquad_ar\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"xquad_ar\"\n\nMore Information needed" ]
ad10ba17ccd8c81439459c827f2899b77e45bb69
# Dataset Card for "xquad_tr" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Zaid/xquad_tr
[ "region:us" ]
2023-01-05T07:18:00+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 979782.9050420168, "num_examples": 963}, {"name": "validation", "num_bytes": 121073.9, "num_examples": 119}, {"name": "test", "num_bytes": 109882.1949579832, "num_examples": 108}], "download_size": 353715, "dataset_size": 1210739.0}}
2023-01-05T07:18:26+00:00
[]
[]
TAGS #region-us
# Dataset Card for "xquad_tr" More Information needed
[ "# Dataset Card for \"xquad_tr\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"xquad_tr\"\n\nMore Information needed" ]
884b0caaf95649264d7b4ff3052c9566866afcc1
# Dataset Card for "results_test_50rows_2023-01-05" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
joddy/results_test_50rows_2023-01-05
[ "region:us" ]
2023-01-05T08:27:23+00:00
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "resolution", "dtype": "int64"}, {"name": "attributes_loc", "dtype": {"class_label": {"names": {"0": "upper left", "1": "upper right", "2": "lower left", "3": "lower right"}}}}, {"name": "NL_text", "dtype": "string"}, {"name": "bbox_text", "dtype": "string"}, {"name": "center_text", "dtype": "string"}, {"name": "normed_object_bbox", "sequence": "int64"}, {"name": "without_pos_stable-diffusion-v1-5", "dtype": "image"}, {"name": "NL_stable-diffusion-v1-5", "dtype": "image"}, {"name": "bbox_stable-diffusion-v1-5", "dtype": "image"}, {"name": "center_stable-diffusion-v1-5", "dtype": "image"}, {"name": "without_pos_NL_text_TextENC_off", "dtype": "image"}, {"name": "NL_text_TextENC_off", "dtype": "image"}, {"name": "without_pos_bbox_text_TextENC_off", "dtype": "image"}, {"name": "bbox_text_TextENC_off", "dtype": "image"}, {"name": "without_pos_center_text_TextENC_off", "dtype": "image"}, {"name": "center_text_TextENC_off", "dtype": "image"}, {"name": "without_pos_NL_text_TextENC_on", "dtype": "image"}, {"name": "NL_text_TextENC_on", "dtype": "image"}, {"name": "without_pos_bbox_text_TextENC_on", "dtype": "image"}, {"name": "bbox_text_TextENC_on", "dtype": "image"}, {"name": "without_pos_center_text_TextENC_on", "dtype": "image"}, {"name": "center_text_TextENC_on", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 388907687.0, "num_examples": 50}], "download_size": 388936971, "dataset_size": 388907687.0}}
2023-01-05T08:59:14+00:00
[]
[]
TAGS #region-us
# Dataset Card for "results_test_50rows_2023-01-05" More Information needed
[ "# Dataset Card for \"results_test_50rows_2023-01-05\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"results_test_50rows_2023-01-05\"\n\nMore Information needed" ]
b73bb1961701f6d766312a29c292b1a2513b8735
# Numerical Reasoning
lintang/numerical_reasoning_arithmetic
[ "region:us" ]
2023-01-05T08:48:37+00:00
{}
2023-01-09T06:33:43+00:00
[]
[]
TAGS #region-us
# Numerical Reasoning
[ "# Numerical Reasoning" ]
[ "TAGS\n#region-us \n", "# Numerical Reasoning" ]
07ac45f1d93a401a67420f30959baaceb36c2f26
# Paraphrase dataset of short phrases (chitchat + poetry)

The dataset contains correct and incorrect paraphrases of short conversational utterances ([dialogue system project](https://github.com/Koziev/chatbot)) and of poem fragments ([generative poetry project](https://github.com/Koziev/verslibre)).

The dataset is a list of sample tuples. Each sample consists of two lists:

```paraphrases``` - examples of correct paraphrases

```distractors``` - examples of incorrect paraphrases

The dataset is used to build the [sbert_synonymy paraphrase detector](https://huggingface.co/inkoziev/sbert_synonymy) and the [generative poetic paraphraser](https://huggingface.co/inkoziev/paraphraser) models.

## Disclaimer

The semantics of a paraphrase are deliberately allowed to be non-conservative within certain limits. For example, the pair "_Помолчи_" ("Be quiet") and "_Дружище, не говори ни слова!_" ("Buddy, don't say a word!") is counted as a correct paraphrase. Because the paraphraser is used in the generative poetry project to build datasets, the data contains a number of metaphorical and rather loose paraphrases. These properties may make the dataset, and models built on it, unusable for your projects.

## Other paraphrase datasets

When training models, you can combine this dataset with data from other paraphrase datasets, for example [tapaco](https://huggingface.co/datasets/tapaco).
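As a rough illustration of how the sample tuples described above can be turned into training data for a paraphrase detector, the sketch below pairs up sentences from the `paraphrases` and `distractors` lists. The file name `paraphrases.json` and the exact record layout are assumptions made for this example only; they are not part of the dataset documentation.

```python
import json
from itertools import combinations

# Assumption: the samples have been exported to a JSON file in which every
# sample is an object holding the two lists described in this card.
with open('paraphrases.json', 'r', encoding='utf-8') as f:
    samples = json.load(f)

pairs = []  # (sentence_1, sentence_2, label); 1 = paraphrase, 0 = not a paraphrase
for sample in samples:
    good = sample['paraphrases']
    bad = sample['distractors']
    # every pair of correct paraphrases becomes a positive example
    for s1, s2 in combinations(good, 2):
        pairs.append((s1, s2, 1))
    # a correct phrase combined with a distractor becomes a negative example
    for s1 in good:
        for s2 in bad:
            pairs.append((s1, s2, 0))

print(f'{len(pairs)} labeled sentence pairs prepared')
```

Pairs built this way can be fed to a bi-encoder or cross-encoder similarity model such as the detector mentioned above.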
inkoziev/paraphrases
[ "task_categories:sentence-similarity", "task_categories:text2text-generation", "task_ids:semantic-similarity-classification", "language_creators:expert-generated", "language:ru", "license:cc-by-nc-4.0", "region:us" ]
2023-01-05T09:08:02+00:00
{"language_creators": ["expert-generated"], "language": ["ru"], "license": "cc-by-nc-4.0", "task_categories": ["sentence-similarity", "text2text-generation"], "task_ids": ["semantic-similarity-classification"]}
2023-01-14T13:37:24+00:00
[]
[ "ru" ]
TAGS #task_categories-sentence-similarity #task_categories-text2text-generation #task_ids-semantic-similarity-classification #language_creators-expert-generated #language-Russian #license-cc-by-nc-4.0 #region-us
# Paraphrase dataset of short phrases (chitchat + poetry)

The dataset contains correct and incorrect paraphrases of short conversational utterances (dialogue system project)
and of poem fragments (generative poetry project).

The dataset is a list of sample tuples. Each sample consists of two lists:

 - examples of correct paraphrases 
 - examples of incorrect paraphrases


The dataset is used to build the sbert_synonymy paraphrase detector
and the generative poetic paraphraser models.

## Disclaimer 

The semantics of a paraphrase are deliberately allowed to be non-conservative within certain limits.
For example, the pair "_Помолчи_" ("Be quiet") and "_Дружище, не говори ни слова!_" ("Buddy, don't say a word!") is counted as a correct paraphrase. Because the paraphraser
is used in the generative poetry project to build datasets, the data contains a number of metaphorical
and rather loose paraphrases. These properties may make the dataset, and models built on it,
unusable for your projects.

## Other paraphrase datasets

When training models, you can combine this dataset with data from other paraphrase datasets, for example tapaco.
[ "# Датасет перефразировок коротких фраз (читчат+поэзия)\n\nВ датасете содержатся правильные и некорректные перефразировки коротких диалоговых реплик (проект диалоговой системы)\nи фрагментов стихов (проект генеративной поэзии).\n\nДатасет представляет из себя список сэмплов-кортежей. Каждый сэмпл состоит из двух списков:\n\n - примеры правильных перефразировок \n - примеры неправильных перефразировок\n\n\nДатасет используется для создания моделей детектора перефразировок sbert_synonymy\nи генеративного поэтического перефразировщика.", "## Disclaimer \n\nВ датасете целенаправленно допускалась неконсервативность семантики перефразировок в определенных пределах.\nК примеру, правильными перефразировками считаются пары \"_Помолчи_\" и \"_Дружище, не говори ни слова!_\". Так как перефразировщик\nиспользуется в проекте генеративной поэзии для создания датасетов, в нем есть некоторое количество метафоричных\nи достаточно вольных перефразировок. Эти особенности датасета могут сделать невозможным использование датасета и моделей\nна его основе в Ваших проектах.", "## Другие датасеты перефразировок\n\nПри обучении моделей вы можете совмещать этот датасет с данными из других датасетов перефразировок, например tapaco." ]
[ "TAGS\n#task_categories-sentence-similarity #task_categories-text2text-generation #task_ids-semantic-similarity-classification #language_creators-expert-generated #language-Russian #license-cc-by-nc-4.0 #region-us \n", "# Датасет перефразировок коротких фраз (читчат+поэзия)\n\nВ датасете содержатся правильные и некорректные перефразировки коротких диалоговых реплик (проект диалоговой системы)\nи фрагментов стихов (проект генеративной поэзии).\n\nДатасет представляет из себя список сэмплов-кортежей. Каждый сэмпл состоит из двух списков:\n\n - примеры правильных перефразировок \n - примеры неправильных перефразировок\n\n\nДатасет используется для создания моделей детектора перефразировок sbert_synonymy\nи генеративного поэтического перефразировщика.", "## Disclaimer \n\nВ датасете целенаправленно допускалась неконсервативность семантики перефразировок в определенных пределах.\nК примеру, правильными перефразировками считаются пары \"_Помолчи_\" и \"_Дружище, не говори ни слова!_\". Так как перефразировщик\nиспользуется в проекте генеративной поэзии для создания датасетов, в нем есть некоторое количество метафоричных\nи достаточно вольных перефразировок. Эти особенности датасета могут сделать невозможным использование датасета и моделей\nна его основе в Ваших проектах.", "## Другие датасеты перефразировок\n\nПри обучении моделей вы можете совмещать этот датасет с данными из других датасетов перефразировок, например tapaco." ]
ea4ed1143e4baabdaf3a91db95a019e8a9c8a5b0
Hand-collected set of 57817 pics, mostly from the Russian internet. Pics without captions.
A dataset of those classic "funny pictures" from CDs and the like. All pictures in the root directory were collected entirely by hand. Not annotated.
4eJIoBek/gazik-pics-57k
[ "task_categories:unconditional-image-generation", "size_categories:10K<n<100K", "license:wtfpl", "region:us" ]
2023-01-05T09:08:04+00:00
{"license": "wtfpl", "size_categories": ["10K<n<100K"], "task_categories": ["unconditional-image-generation"]}
2023-02-27T15:50:32+00:00
[]
[]
TAGS #task_categories-unconditional-image-generation #size_categories-10K<n<100K #license-wtfpl #region-us
Hand-collected set of 57817 pics, mostly from the Russian internet. Pics without captions.
A dataset of those classic "funny pictures" from CDs and the like. All pictures in the root directory were collected entirely by hand. Not annotated.
[]
[ "TAGS\n#task_categories-unconditional-image-generation #size_categories-10K<n<100K #license-wtfpl #region-us \n" ]
3bf0d4f677da2344fd879104e25b92f3eb2eb9ed
Embrapa Wine Grape Instance Segmentation Dataset – Embrapa WGISD
================================================================

[![DOI](https://zenodo.org/badge/199083745.svg)](https://zenodo.org/badge/latestdoi/199083745)

This is a detailed description of the dataset, a *datasheet for the dataset* as proposed by [Gebru *et al.*](https://arxiv.org/abs/1803.09010)

Motivation for Dataset Creation
-------------------------------

### Why was the dataset created?

Embrapa WGISD (*Wine Grape Instance Segmentation Dataset*) was created to provide images and annotation to study *object detection and instance segmentation* for image-based monitoring and field robotics in viticulture. It provides instances from five different grape varieties taken in the field. These instances show variance in grape pose, illumination and focus, including genetic and phenological variations such as shape, color and compactness.

### What (other) tasks could the dataset be used for?

Possible uses include relaxations of the instance segmentation problem: classification (Is a grape in the image?), semantic segmentation (What are the "grape pixels" in the image?), object detection (Where are the grapes in the image?), and counting (How many berries are there per cluster?).

The WGISD can also be used in grape variety identification.

### Who funded the creation of the dataset?

The building of the WGISD dataset was supported by the Embrapa SEG Project 01.14.09.001.05.04, *Image-based metrology for Precision Agriculture and Phenotyping*, and the CNPq PIBIC Program (grants 161165/2017-6 and 125044/2018-6).

Dataset Composition
-------------------

### What are the instances?

Each instance consists of an RGB image and an annotation describing grape cluster locations as bounding boxes. A subset of the instances also contains binary masks identifying the pixels belonging to each grape cluster. Each image presents at least one grape cluster. Some grape clusters can appear far in the background and should be ignored.

### Are relationships between instances made explicit in the data?

File name prefixes identify the variety observed in the instance.

| Prefix | Variety |
| --- | --- |
| CDY | *Chardonnay* |
| CFR | *Cabernet Franc* |
| CSV | *Cabernet Sauvignon*|
| SVB | *Sauvignon Blanc* |
| SYH | *Syrah* |

### How many instances of each type are there?

The dataset consists of 300 images containing 4,432 grape clusters identified by bounding boxes. A subset of 137 images also contains binary masks identifying the pixels of each cluster. This means that, of the 4,432 clusters, 2,020 present binary masks for instance segmentation, as summarized in the following table.

|Prefix | Variety | Date | Images | Boxed clusters | Masked clusters|
| --- | --- | --- | --- | --- | --- |
|CDY | *Chardonnay* | 2018-04-27 | 65 | 840 | 308|
|CFR | *Cabernet Franc* | 2018-04-27 | 65 | 1,069 | 513|
|CSV | *Cabernet Sauvignon* | 2018-04-27 | 57 | 643 | 306|
|SVB | *Sauvignon Blanc* | 2018-04-27 | 65 | 1,316 | 608|
|SYH | *Syrah* | 2017-04-27 | 48 | 563 | 285|
|Total | | | 300 | 4,431 | 2,020|

*General information about the dataset: the grape varieties and the associated identifying prefix, the date of image capture in the field, the number of images (instances) and the identified grape clusters.*

#### Contributions

Another subset of 111 images with separated and non-occluded grape clusters was annotated with point annotations for every berry by F. Khoroshevsky and S.
Khoroshevsky ([Khoroshevsky *et al.*, 2021](https://doi.org/10.1007/978-3-030-65414-6_19)). These annotations are available in `test_berries.txt`, `train_berries.txt` and `val_berries.txt`.

|Prefix | Variety | Berries |
| --- | --- | --- |
|CDY | *Chardonnay* | 1,102 |
|CFR | *Cabernet Franc* | 1,592 |
|CSV | *Cabernet Sauvignon* | 1,712 |
|SVB | *Sauvignon Blanc* | 1,974 |
|SYH | *Syrah* | 969 |
|Total | | 7,349 |

*Berry annotations by F. Khoroshevsky and S. Khoroshevsky.*

Geng Deng ([Deng *et al.*, 2020](https://doi.org/10.1007/978-3-030-63820-7_66)) provided point-based annotations for berries in all 300 images, summing 187,374 berries. These annotations are available in `contrib/berries`.

Daniel Angelov (@23pointsNorth) provided a version of the annotations in [COCO format](https://cocodataset.org/#format-data). See the `coco_annotations` directory.

### What data does each instance consist of?

Each instance contains an 8-bit RGB image and a text file containing one bounding box description per line. These text files follow the "YOLO format":

CLASS CX CY W H

*class* is an integer defining the object class – the dataset presents only the grape class, which is numbered 0, so every line starts with this "class zero" indicator. The center of the bounding box is the point *(c_x, c_y)*, represented as float values because this format normalizes the coordinates by the image dimensions. To get the absolute position, use *(2048 c_x, 1365 c_y)*. The bounding box dimensions are given by *W* and *H*, also normalized by the image size.

The instances presenting mask data for instance segmentation contain files with the `.npz` extension. These files are compressed archives for NumPy $n$-dimensional arrays. Each array is a *H X W X n_clusters* three-dimensional array, where *n_clusters* is the number of grape clusters observed in the image. After assigning the NumPy array to a variable `M`, the mask for the *i*-th grape cluster can be found in `M[:,:,i]`. The *i*-th mask corresponds to the *i*-th line in the bounding boxes file.

The dataset also includes the original image files at the full original resolution. The normalized annotation for bounding boxes allows easy identification of clusters in the original images, but the mask data will need to be properly rescaled if users wish to work at the original full resolution.

#### Contributions

*For `test_berries.txt`, `train_berries.txt` and `val_berries.txt`*:

The berry annotations follow a similar notation, with the only exception that each text file (train/val/test) also includes the instance file name.

FILENAME CLASS CX CY

where *filename* stands for the instance file name, *class* is an integer defining the object class (0 for all instances) and the point *(c_x, c_y)* indicates the absolute position of each "dot" marking a single berry in a well-defined cluster.

*For `contrib/berries`*:

The annotations provide the *(x, y)* point position for each berry center, in a tabular form:

X Y

These point-based annotations can be easily loaded using, for example, `numpy.loadtxt`. See `WGISD.ipynb` for examples.

[Daniel Angelov (@23pointsNorth)](https://github.com/23pointsNorth) provided a version of the annotations in [COCO format](https://cocodataset.org/#format-data). See the `coco_annotations` directory.

Also see [COCO format](https://cocodataset.org/#format-data) for the JSON-based format.

### Is everything included or does the data rely on external resources?

Everything is included in the dataset.
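To make the annotation formats described above concrete, here is a minimal loading sketch in Python. The `data/` and `contrib/berries` paths, the file-stem argument and the helper name are assumptions made for this example; adjust them to the actual repository layout.

```python
import numpy as np
from PIL import Image

def load_instance(stem, data_dir='data'):
    """Load one WGISD instance: the image, absolute-pixel boxes and (if any) masks.

    `stem` is an image file name without extension (the names start with the
    variety prefixes listed above); the directory layout is an assumption for
    this sketch, not part of the official documentation.
    """
    image = Image.open(f'{data_dir}/{stem}.jpg')
    width, height = image.size  # e.g. 2048 x 1365 for the scaled REBEL images

    boxes = []
    # Each line follows the YOLO convention described above: CLASS CX CY W H,
    # with center and size normalized by the image dimensions.
    with open(f'{data_dir}/{stem}.txt') as f:
        for line in f:
            _cls, cx, cy, w, h = map(float, line.split())
            boxes.append(((cx - w / 2) * width, (cy - h / 2) * height,
                          (cx + w / 2) * width, (cy + h / 2) * height))

    masks = None
    try:
        # The .npz archive holds a single H x W x n_clusters array;
        # masks[:, :, i] matches the i-th line of the bounding-box file.
        with np.load(f'{data_dir}/{stem}.npz') as npz:
            masks = npz[npz.files[0]]
    except FileNotFoundError:
        pass  # only the masked subset (137 of the 300 images) has .npz files

    return image, boxes, masks

# Berry centers from contrib/berries are plain "X Y" rows, directly loadable:
# berries = np.loadtxt(f'contrib/berries/{stem}.txt')  # shape: (n_berries, 2)
```

As noted above, the masks are stored at the 2048-pixel-wide working resolution, so they must be rescaled before being used with the original full-resolution images.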
### Are there recommended data splits or evaluation measures?

The dataset comes with specified train/test splits. The splits are found in lists stored as text files. There are also lists referring only to instances presenting binary masks.

|                      | Images   | Boxed clusters   | Masked clusters   |
| ---------------------| -------- | ---------------- | ----------------- |
| Training/Validation  | 242      | 3,581            | 1,612             |
| Test                 | 58       | 850              | 408               |
| Total                | 300      | 4,431            | 2,020             |

*Dataset recommended split.*

Standard measures from the information retrieval and computer vision literature should be employed: precision and recall, *F1-score* and average precision as seen in [COCO](http://cocodataset.org) and [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC).

### What experiments were initially run on this dataset?

The first experiments run on this dataset are described in [*Grape detection, segmentation and tracking using deep neural networks and three-dimensional association*](https://arxiv.org/abs/1907.11819) by Santos *et al.*. See also the following video demo:

[![Grape detection, segmentation and tracking](http://img.youtube.com/vi/1Hji3GS4mm4/0.jpg)](http://www.youtube.com/watch?v=1Hji3GS4mm4 "Grape detection, segmentation and tracking")

**UPDATE**: The JPG files corresponding to the video frames in the [video demo](http://www.youtube.com/watch?v=1Hji3GS4mm4) are now available in the `extras` directory.

Data Collection Process
-----------------------

### How was the data collected?

Images were captured at the vineyards of Guaspari Winery, located at Espírito Santo do Pinhal, São Paulo, Brazil (Lat -22.181018, Lon -46.741618). The winery staff performs dual pruning: one for shaping (after the previous year's harvest) and one for production, resulting in canopies of lower density. Image capture was carried out in April 2017 for *Syrah* and in April 2018 for the other varieties. A Canon EOS REBEL T3i DSLR camera and a Motorola Z2 Play smartphone were used to capture the images. The cameras were located between the vine rows, facing the vines at distances of around 1-2 meters. The EOS REBEL T3i camera captured 240 images, including all *Syrah* pictures. The Z2 smartphone grabbed 60 images covering all varieties except *Syrah*. The REBEL images were scaled to *2048 X 1365* pixels and the Z2 images to *2048 X 1536* pixels. More information about the capture process can be found in the Exif data of the original image files, included in the dataset.

### Who was involved in the data collection process?

T. T. Santos, A. A. Santos and S. Avila captured the images in the field. T. T. Santos, L. L. de Souza and S. Avila performed the annotation for bounding boxes and masks.

### How was the data associated with each instance acquired?

The rectangular bounding boxes identifying the grape clusters were annotated using the [`labelImg` tool](https://github.com/tzutalin/labelImg). The clusters can be under severe occlusion by leaves, trunks or other clusters. Considering the absence of 3-D data and of on-site annotation, the cluster locations had to be defined using only a single-view image, so some clusters could be incorrectly delimited.

A subset of the bounding boxes was selected for mask annotation, using a novel tool developed by the authors and presented in this work.
#### Contributions

A subset of the bounding boxes of well-defined (separated and non-occluded) clusters was used for "dot" (berry) annotations of each grape, to support counting applications as described in [Khoroshevsky *et al.*](https://doi.org/10.1007/978-3-030-65414-6_19). The berries annotation was performed by F. Khoroshevsky and S. Khoroshevsky.

Geng Deng ([Deng *et al.*, 2020](https://doi.org/10.1007/978-3-030-63820-7_66)) provided point-based annotations for berries in all 300 images, summing 187,374 berries. These annotations are available in `contrib/berries`. Deng *et al.* employed [Huawei ModelArt](https://www.huaweicloud.com/en-us/product/modelarts.html) for their annotation effort.

Data Preprocessing
------------------

### What preprocessing/cleaning was done?

The following steps were taken to process the data:

1. Bounding boxes were annotated for each image using the `labelImg` tool.
2. Images were resized to *W = 2048* pixels. This resolution proved to be practical for mask annotation, a convenient balance between grape detail and time spent by the graph-based segmentation algorithm.
3. A randomly selected subset of images was employed for mask annotation using the interactive tool based on graph matching.
4. All binary masks were inspected in search of pixels attributed to more than one grape cluster. The annotator assigned the disputed pixels to the most likely cluster.
5. The bounding boxes were fitted to the masks, which provided a fine-tuning of the grape clusters' locations.

### Was the “raw” data saved in addition to the preprocessed data?

The original resolution images, containing the Exif data provided by the cameras, are available in the dataset.

Dataset Distribution
--------------------

### How is the dataset distributed?

The dataset is [available on GitHub](https://github.com/thsant/wgisd).

### When will the dataset be released/first distributed?

The dataset was released in July, 2019.

### What license (if any) is it distributed under?

The data is released under [**Creative Commons BY-NC 4.0 (Attribution-NonCommercial 4.0 International license)**](https://creativecommons.org/licenses/by-nc/4.0/). There is a request to cite the corresponding paper if the dataset is used. For commercial use, contact the Embrapa Agricultural Informatics business office.

### Are there any fees or access/export restrictions?

There are no fees or restrictions. For commercial use, contact the Embrapa Agricultural Informatics business office.

Dataset Maintenance
-------------------

### Who is supporting/hosting/maintaining the dataset?

The dataset is hosted at Embrapa Agricultural Informatics and all comments or requests can be sent to [Thiago T. Santos](https://github.com/thsant) (maintainer).

### Will the dataset be updated?

There are no scheduled updates.

* In May, 2022, [Daniel Angelov (@23pointsNorth)](https://github.com/23pointsNorth) provided a version for the annotations in [COCO format](https://cocodataset.org/#format-data). See the `coco_annotations` directory.
* In February, 2021, F. Khoroshevsky and S. Khoroshevsky provided the first extension: the berries ("dot") annotations.
* In April, 2021, Geng Deng provided point annotations for berries. T. Santos converted Deng's XML files to easier-to-load text files, now available in the `contrib/berries` directory.
In case of further updates, releases will be properly tagged on GitHub.

### If others want to extend/augment/build on this dataset, is there a mechanism for them to do so?

Contributors should contact the maintainer by e-mail.

### No warranty

The maintainers and their institutions are *exempt from any liability, judicial or extrajudicial, for any losses or damages arising from the use of the data contained in the image database*.
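As a concrete starting point for the evaluation measures recommended earlier (precision, recall and F1-score over detected clusters), here is a small, illustrative sketch of a greedy matching at an IoU threshold of 0.5. It is not the official COCO or Pascal VOC tooling, just a minimal baseline for quick checks.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x0, y0, x1, y1)."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union > 0 else 0.0

def precision_recall_f1(predictions, ground_truth, thr=0.5):
    """Greedy one-to-one matching of predicted boxes against annotated boxes."""
    unmatched = list(ground_truth)
    tp = 0
    for pred in predictions:
        best = max(unmatched, key=lambda gt: iou(pred, gt), default=None)
        if best is not None and iou(pred, best) >= thr:
            tp += 1
            unmatched.remove(best)
    fp = len(predictions) - tp
    fn = len(unmatched)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```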
thsant/wgisd
[ "task_categories:object-detection", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "license:cc-by-nc-4.0", "agriculture", "viticulture", "fruit detection", "arxiv:1803.09010", "arxiv:1907.11819", "region:us" ]
2023-01-05T12:01:39+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": [], "license": ["cc-by-nc-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["object-detection"], "task_ids": [], "pretty_name": "Embrapa Wine Grape Instance Segmentation Dataset \u2013 Embrapa WGISD ", "viewer": false, "tags": ["agriculture", "viticulture", "fruit detection"]}
2023-01-05T17:24:09+00:00
[ "1803.09010", "1907.11819" ]
[]
TAGS #task_categories-object-detection #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #license-cc-by-nc-4.0 #agriculture #viticulture #fruit detection #arxiv-1803.09010 #arxiv-1907.11819 #region-us
Embrapa Wine Grape Instance Segmentation Dataset – Embrapa WGISD ================================================================ ![DOI](URL This is a detailed description of the dataset, a *datasheet for the dataset* as proposed by Gebru *et al.* Motivation for Dataset Creation ------------------------------- ### Why was the dataset created? Embrapa WGISD (*Wine Grape Instance Segmentation Dataset*) was created to provide images and annotation to study *object detection and instance segmentation* for image-based monitoring and field robotics in viticulture. It provides instances from five different grape varieties taken on field. These instances shows variance in grape pose, illumination and focus, including genetic and phenological variations such as shape, color and compactness. ### What (other) tasks could the dataset be used for? Possible uses include relaxations of the instance segmentation problem: classification (Is a grape in the image?), semantic segmentation (What are the "grape pixels" in the image?), object detection (Where are the grapes in the image?), and counting (How many berries are there per cluster?). The WGISD can also be used in grape variety identification. ### Who funded the creation of the dataset? The building of the WGISD dataset was supported by the Embrapa SEG Project 01.14.09.001.05.04, *Image-based metrology for Precision Agriculture and Phenotyping*, and the CNPq PIBIC Program (grants 161165/2017-6 and 125044/2018-6). Dataset Composition ------------------- ### What are the instances? Each instance consists in a RGB image and an annotation describing grape clusters locations as bounding boxes. A subset of the instances also contains binary masks identifying the pixels belonging to each grape cluster. Each image presents at least one grape cluster. Some grape clusters can appear far at the background and should be ignored. ### Are relationships between instances made explicit in the data? File names prefixes identify the variety observed in the instance. ### How many instances of each type are there? The dataset consists of 300 images containing 4,432 grape clusters identified by bounding boxes. A subset of 137 images also contains binary masks identifying the pixels of each cluster. It means that from the 4,432 clusters, 2,020 of them presents binary masks for instance segmentation, as summarized in the following table. *General information about the dataset: the grape varieties and the associated identifying prefix, the date of image capture on field, number of images (instances) and the identified grapes clusters.* #### Contributions Another subset of 111 images with separated and non-occluded grape clusters was annotated with point annotations for every berry by F. Khoroshevsky and S. Khoroshevsky (Khoroshevsky *et al.*, 2021). Theses annotations are available in 'test\_berries.txt' , 'train\_berries.txt' and 'val\_berries.txt' Prefix: CDY, Variety: *Chardonnay*, Berries: 1,102 Prefix: CFR, Variety: *Cabernet Franc*, Berries: 1,592 Prefix: CSV, Variety: *Cabernet Sauvignon*, Berries: 1,712 Prefix: SVB, Variety: *Sauvignon Blanc*, Berries: 1,974 Prefix: SYH, Variety: *Syrah*, Berries: 969 Prefix: Total, Variety: , Berries: 7,349 *Berries annotations by F. Khoroshevsky and S. Khoroshevsky.* Geng Deng (Deng *et al.*, 2020) provided point-based annotations for berries in all 300 images, summing 187,374 berries. These annotations are available in 'contrib/berries'. Daniel Angelov (@23pointsNorth) provided a version for the annotations in COCO format. 
See 'coco\_annotations' directory. ### What data does each instance consist of? Each instance contains a 8-bits RGB image and a text file containing one bounding box description per line. These text files follows the "YOLO format" ``` CLASS CX CY W H ``` *class* is an integer defining the object class – the dataset presents only the grape class that is numbered 0, so every line starts with this “class zero” indicator. The center of the bounding box is the point *(c\_x, c\_y)*, represented as float values because this format normalizes the coordinates by the image dimensions. To get the absolute position, use *(2048 c\_x, 1365 c\_y)*. The bounding box dimensions are given by *W* and *H*, also normalized by the image size. The instances presenting mask data for instance segmentation contain files presenting the '.npz' extension. These files are compressed archives for NumPy $n$-dimensional arrays. Each array is a *H X W X n\_clusters* three-dimensional array where *n\_clusters* is the number of grape clusters observed in the image. After assigning the NumPy array to a variable 'M', the mask for the *i*-th grape cluster can be found in 'M[:,:,i]'. The *i*-th mask corresponds to the *i*-th line in the bounding boxes file. The dataset also includes the original image files, presenting the full original resolution. The normalized annotation for bounding boxes allows easy identification of clusters in the original images, but the mask data will need to be properly rescaled if users wish to work on the original full resolution. #### Contributions *For 'test\_berries.txt' , 'train\_berries.txt' and 'val\_berries.txt'*: The berries annotations are following a similar notation with the only exception being that each text file (train/val/test) includes also the instance file name. ``` FILENAME CLASS CX CY ``` where *filename* stands for instance file name, *class* is an integer defining the object class (0 for all instances) and the point *(c\_x, c\_y)* indicates the absolute position of each "dot" indicating a single berry in a well defined cluster. *For 'contrib/berries'*: The annotations provide the *(x, y)* point position for each berry center, in a tabular form: ``` X Y ``` These point-based annotations can be easily loaded using, for example, 'numpy.loadtxt'. See 'URL'for examples. Daniel Angelov (@23pointsNorth) provided a version for the annotations in COCO format. See 'coco\_annotations' directory. Also see COCO format for the JSON-based format. ### Is everything included or does the data rely on external resources? Everything is included in the dataset. ### Are there recommended data splits or evaluation measures? The dataset comes with specified train/test splits. The splits are found in lists stored as text files. There are also lists referring only to instances presenting binary masks. *Dataset recommended split.* Standard measures from the information retrieval and computer vision literature should be employed: precision and recall, *F1-score* and average precision as seen in COCO and Pascal VOC. ### What experiments were initially run on this dataset? The first experiments run on this dataset are described in *Grape detection, segmentation and tracking using deep neural networks and three-dimensional association* by Santos *et al.*. See also the following video demo: ![Grape detection, segmentation and tracking](URL "Grape detection, segmentation and tracking") UPDATE: The JPG files corresponding to the video frames in the video demo are now available in the 'extras' directory. 
Data Collection Process ----------------------- ### How was the data collected? Images were captured at the vineyards of Guaspari Winery, located at Espírito Santo do Pinhal, São Paulo, Brazil (Lat -22.181018, Lon -46.741618). The winery staff performs dual pruning: one for shaping (after previous year harvest) and one for production, resulting in canopies of lower density. The image capturing was realized in April 2017 for *Syrah* and in April 2018 for the other varieties. A Canon EOS REBEL T3i DSLR camera and a Motorola Z2 Play smartphone were used to capture the images. The cameras were located between the vines lines, facing the vines at distances around 1-2 meters. The EOS REBEL T3i camera captured 240 images, including all *Syrah* pictures. The Z2 smartphone grabbed 60 images covering all varieties except *Syrah* . The REBEL images were scaled to *2048 X 1365* pixels and the Z2 images to *2048 X 1536* pixels. More data about the capture process can be found in the Exif data found in the original image files, included in the dataset. ### Who was involved in the data collection process? T. T. Santos, A. A. Santos and S. Avila captured the images in field. T. T. Santos, L. L. de Souza and S. Avila performed the annotation for bounding boxes and masks. ### How was the data associated with each instance acquired? The rectangular bounding boxes identifying the grape clusters were annotated using the 'labelImg' tool. The clusters can be under severe occlusion by leaves, trunks or other clusters. Considering the absence of 3-D data and on-site annotation, the clusters locations had to be defined using only a single-view image, so some clusters could be incorrectly delimited. A subset of the bounding boxes was selected for mask annotation, using a novel tool developed by the authors and presented in this work. This interactive tool lets the annotator mark grape and background pixels using scribbles, and a graph matching algorithm developed by Noma *et al.* is employed to perform image segmentation to every pixel in the bounding box, producing a binary mask representing grape/background classification. #### Contributions A subset of the bounding boxes of well-defined (separated and non-occluded clusters) was used for "dot" (berry) annotations of each grape to serve for counting applications as described in Khoroshevsky *et al.*. The berries annotation was performed by F. Khoroshevsky and S. Khoroshevsky. Geng Deng (Deng *et al.*, 2020) provided point-based annotations for berries in all 300 images, summing 187,374 berries. These annotations are available in 'contrib/berries'. Deng *et al.* employed Huawei ModelArt, for their annotation effort. Data Preprocessing ------------------ ### What preprocessing/cleaning was done? The following steps were taken to process the data: 1. Bounding boxes were annotated for each image using the 'labelImg' tool. 2. Images were resized to *W = 2048* pixels. This resolution proved to be practical to mask annotation, a convenient balance between grape detail and time spent by the graph-based segmentation algorithm. 3. A randomly selected subset of images were employed on mask annotation using the interactive tool based on graph matching. 4. All binaries masks were inspected, in search of pixels attributed to more than one grape cluster. The annotator assigned the disputed pixels to the most likely cluster. 5. The bounding boxes were fitted to the masks, which provided a fine tuning of grape clusters locations. 
### Was the “raw” data saved in addition to the preprocessed data? The original resolution images, containing the Exif data provided by the cameras, is available in the dataset. Dataset Distribution -------------------- ### How is the dataset distributed? The dataset is available at GitHub. ### When will the dataset be released/first distributed? The dataset was released in July, 2019. ### What license (if any) is it distributed under? The data is released under Creative Commons BY-NC 4.0 (Attribution-NonCommercial 4.0 International license). There is a request to cite the corresponding paper if the dataset is used. For commercial use, contact Embrapa Agricultural Informatics business office. ### Are there any fees or access/export restrictions? There are no fees or restrictions. For commercial use, contact Embrapa Agricultural Informatics business office. Dataset Maintenance ------------------- ### Who is supporting/hosting/maintaining the dataset? The dataset is hosted at Embrapa Agricultural Informatics and all comments or requests can be sent to Thiago T. Santos (maintainer). ### Will the dataset be updated? There is no scheduled updates. * In May, 2022, Daniel Angelov (@23pointsNorth) provided a version for the annotations in COCO format. See 'coco\_annotations' directory. * In February, 2021, F. Khoroshevsky and S. Khoroshevsky provided the first extension: the berries ("dot") annotations. * In April, 2021, Geng Deng provided point annotations for berries. T. Santos converted Deng's XML files to easier-to-load text files now available in 'contrib/berries' directory. In case of further updates, releases will be properly tagged at GitHub. ### If others want to extend/augment/build on this dataset, is there a mechanism for them to do so? Contributors should contact the maintainer by e-mail. ### No warranty The maintainers and their institutions are *exempt from any liability, judicial or extrajudicial, for any losses or damages arising from the use of the data contained in the image database*.
[ "### Why was the dataset created?\n\n\nEmbrapa WGISD (*Wine Grape Instance Segmentation Dataset*) was created\nto provide images and annotation to study *object detection and instance\nsegmentation* for image-based monitoring and field robotics in\nviticulture. It provides instances from five different grape varieties\ntaken on field. These instances shows variance in grape pose,\nillumination and focus, including genetic and phenological variations\nsuch as shape, color and compactness.", "### What (other) tasks could the dataset be used for?\n\n\nPossible uses include relaxations of the instance segmentation problem:\nclassification (Is a grape in the image?), semantic segmentation (What\nare the \"grape pixels\" in the image?), object detection (Where are\nthe grapes in the image?), and counting (How many berries are there\nper cluster?). The WGISD can also be used in grape variety\nidentification.", "### Who funded the creation of the dataset?\n\n\nThe building of the WGISD dataset was supported by the Embrapa SEG\nProject 01.14.09.001.05.04, *Image-based metrology for Precision\nAgriculture and Phenotyping*, and the CNPq PIBIC Program (grants\n161165/2017-6 and 125044/2018-6).\n\n\nDataset Composition\n-------------------", "### What are the instances?\n\n\nEach instance consists in a RGB image and an annotation describing grape\nclusters locations as bounding boxes. A subset of the instances also\ncontains binary masks identifying the pixels belonging to each grape\ncluster. Each image presents at least one grape cluster. Some grape\nclusters can appear far at the background and should be ignored.", "### Are relationships between instances made explicit in the data?\n\n\nFile names prefixes identify the variety observed in the instance.", "### How many instances of each type are there?\n\n\nThe dataset consists of 300 images containing 4,432 grape clusters\nidentified by bounding boxes. A subset of 137 images also contains\nbinary masks identifying the pixels of each cluster. It means that from\nthe 4,432 clusters, 2,020 of them presents binary masks for instance\nsegmentation, as summarized in the following table.\n\n\n\n*General information about the dataset: the grape varieties and the associated identifying prefix, the date of image capture on field, number of images (instances) and the identified grapes clusters.*", "#### Contributions\n\n\nAnother subset of 111 images with separated and non-occluded grape\nclusters was annotated with point annotations for every berry by F. Khoroshevsky and S. Khoroshevsky (Khoroshevsky *et al.*, 2021). Theses annotations are available in 'test\\_berries.txt' , 'train\\_berries.txt' and 'val\\_berries.txt'\n\n\nPrefix: CDY, Variety: *Chardonnay*, Berries: 1,102\nPrefix: CFR, Variety: *Cabernet Franc*, Berries: 1,592\nPrefix: CSV, Variety: *Cabernet Sauvignon*, Berries: 1,712\nPrefix: SVB, Variety: *Sauvignon Blanc*, Berries: 1,974\nPrefix: SYH, Variety: *Syrah*, Berries: 969\nPrefix: Total, Variety: , Berries: 7,349\n\n\n*Berries annotations by F. Khoroshevsky and S. Khoroshevsky.*\nGeng Deng (Deng *et al.*, 2020)\nprovided point-based annotations for berries in all 300 images, summing 187,374 berries.\nThese annotations are available in 'contrib/berries'.\n\n\nDaniel Angelov (@23pointsNorth) provided a version for the annotations in COCO format. See 'coco\\_annotations' directory.", "### What data does each instance consist of?\n\n\nEach instance contains a 8-bits RGB image and a text file containing one\nbounding box description per line. 
These text files follows the \"YOLO\nformat\"\n\n\n\n```\nCLASS CX CY W H\n\n```\n\n*class* is an integer defining the object class – the dataset presents\nonly the grape class that is numbered 0, so every line starts with this\n“class zero” indicator. The center of the bounding box is the point\n*(c\\_x, c\\_y)*, represented as float values because this format normalizes\nthe coordinates by the image dimensions. To get the absolute position,\nuse *(2048 c\\_x, 1365 c\\_y)*. The bounding box dimensions are\ngiven by *W* and *H*, also normalized by the image size.\n\n\nThe instances presenting mask data for instance segmentation contain\nfiles presenting the '.npz' extension. These files are compressed\narchives for NumPy $n$-dimensional arrays. Each array is a\n*H X W X n\\_clusters* three-dimensional array where\n*n\\_clusters* is the number of grape clusters observed in the\nimage. After assigning the NumPy array to a variable 'M', the mask for\nthe *i*-th grape cluster can be found in 'M[:,:,i]'. The *i*-th mask\ncorresponds to the *i*-th line in the bounding boxes file.\n\n\nThe dataset also includes the original image files, presenting the full\noriginal resolution. The normalized annotation for bounding boxes allows\neasy identification of clusters in the original images, but the mask\ndata will need to be properly rescaled if users wish to work on the\noriginal full resolution.", "#### Contributions\n\n\n*For 'test\\_berries.txt' , 'train\\_berries.txt' and 'val\\_berries.txt'*:\n\n\nThe berries annotations are following a similar notation with the only\nexception being that each text file (train/val/test) includes also the\ninstance file name.\n\n\n\n```\n FILENAME CLASS CX CY\n\n```\n\nwhere *filename* stands for instance file name, *class* is an integer\ndefining the object class (0 for all instances) and the point *(c\\_x, c\\_y)*\nindicates the absolute position of each \"dot\" indicating a single berry in\na well defined cluster.\n\n\n*For 'contrib/berries'*:\n\n\nThe annotations provide the *(x, y)* point position for each berry center, in a tabular form:\n\n\n\n```\n X Y\n\n```\n\nThese point-based annotations can be easily loaded using, for example, 'numpy.loadtxt'. See 'URL'for examples.\n\n\nDaniel Angelov (@23pointsNorth) provided a version for the annotations in COCO format. See 'coco\\_annotations' directory. Also see COCO format for the JSON-based format.", "### Is everything included or does the data rely on external resources?\n\n\nEverything is included in the dataset.", "### Are there recommended data splits or evaluation measures?\n\n\nThe dataset comes with specified train/test splits. The splits are found\nin lists stored as text files. There are also lists referring only to\ninstances presenting binary masks.\n\n\n\n*Dataset recommended split.*\n\n\nStandard measures from the information retrieval and computer vision\nliterature should be employed: precision and recall, *F1-score* and\naverage precision as seen in COCO\nand Pascal VOC.", "### What experiments were initially run on this dataset?\n\n\nThe first experiments run on this dataset are described in *Grape detection, segmentation and tracking using deep neural networks and three-dimensional association* by Santos *et al.*. 
See also the following video demo:\n\n\n![Grape detection, segmentation and tracking](URL \"Grape detection, segmentation and tracking\")\n\n\nUPDATE: The JPG files corresponding to the video frames in the video demo are now available in the 'extras' directory.\n\n\nData Collection Process\n-----------------------", "### How was the data collected?\n\n\nImages were captured at the vineyards of Guaspari Winery, located at\nEspírito Santo do Pinhal, São Paulo, Brazil (Lat -22.181018, Lon\n-46.741618). The winery staff performs dual pruning: one for shaping\n(after previous year harvest) and one for production, resulting in\ncanopies of lower density. The image capturing was realized in April\n2017 for *Syrah* and in April 2018 for the other varieties.\n\n\nA Canon EOS REBEL T3i DSLR camera and a Motorola Z2 Play smartphone were\nused to capture the images. The cameras were located between the vines\nlines, facing the vines at distances around 1-2 meters. The EOS REBEL\nT3i camera captured 240 images, including all *Syrah* pictures. The Z2\nsmartphone grabbed 60 images covering all varieties except *Syrah* . The\nREBEL images were scaled to *2048 X 1365* pixels and the Z2 images\nto *2048 X 1536* pixels. More data about the capture process can be found\nin the Exif data found in the original image files, included in the dataset.", "### Who was involved in the data collection process?\n\n\nT. T. Santos, A. A. Santos and S. Avila captured the images in\nfield. T. T. Santos, L. L. de Souza and S. Avila performed the\nannotation for bounding boxes and masks.", "### How was the data associated with each instance acquired?\n\n\nThe rectangular bounding boxes identifying the grape clusters were\nannotated using the 'labelImg' tool.\nThe clusters can be under\nsevere occlusion by leaves, trunks or other clusters. Considering the\nabsence of 3-D data and on-site annotation, the clusters locations had\nto be defined using only a single-view image, so some clusters could be\nincorrectly delimited.\n\n\nA subset of the bounding boxes was selected for mask annotation, using a\nnovel tool developed by the authors and presented in this work. This\ninteractive tool lets the annotator mark grape and background pixels\nusing scribbles, and a graph matching algorithm developed by Noma *et al.*\nis employed to perform image segmentation to every pixel in the bounding\nbox, producing a binary mask representing grape/background\nclassification.", "#### Contributions\n\n\nA subset of the bounding boxes of well-defined (separated and non-occluded\nclusters) was used for \"dot\" (berry) annotations of each grape to\nserve for counting applications as described in Khoroshevsky *et\nal.*. The berries\nannotation was performed by F. Khoroshevsky and S. Khoroshevsky.\n\n\nGeng Deng (Deng *et al.*, 2020)\nprovided point-based annotations for berries in all 300 images, summing\n187,374 berries. These annotations are available in 'contrib/berries'.\nDeng *et al.* employed Huawei ModelArt,\nfor their annotation effort.\n\n\nData Preprocessing\n------------------", "### What preprocessing/cleaning was done?\n\n\nThe following steps were taken to process the data:\n\n\n1. Bounding boxes were annotated for each image using the 'labelImg'\ntool.\n2. Images were resized to *W = 2048* pixels. This resolution proved to\nbe practical to mask annotation, a convenient balance between grape\ndetail and time spent by the graph-based segmentation algorithm.\n3. 
A randomly selected subset of images were employed on mask\nannotation using the interactive tool based on graph matching.\n4. All binaries masks were inspected, in search of pixels attributed to\nmore than one grape cluster. The annotator assigned the disputed\npixels to the most likely cluster.\n5. The bounding boxes were fitted to the masks, which provided a fine\ntuning of grape clusters locations.", "### Was the “raw” data saved in addition to the preprocessed data?\n\n\nThe original resolution images, containing the Exif data provided by the\ncameras, is available in the dataset.\n\n\nDataset Distribution\n--------------------", "### How is the dataset distributed?\n\n\nThe dataset is available at GitHub.", "### When will the dataset be released/first distributed?\n\n\nThe dataset was released in July, 2019.", "### What license (if any) is it distributed under?\n\n\nThe data is released under Creative Commons BY-NC 4.0 (Attribution-NonCommercial 4.0 International license).\nThere is a request to cite the corresponding paper if the dataset is used. For\ncommercial use, contact Embrapa Agricultural Informatics business office.", "### Are there any fees or access/export restrictions?\n\n\nThere are no fees or restrictions. For commercial use, contact Embrapa\nAgricultural Informatics business office.\n\n\nDataset Maintenance\n-------------------", "### Who is supporting/hosting/maintaining the dataset?\n\n\nThe dataset is hosted at Embrapa Agricultural Informatics and all\ncomments or requests can be sent to Thiago T. Santos\n(maintainer).", "### Will the dataset be updated?\n\n\nThere is no scheduled updates.\n\n\n* In May, 2022, Daniel Angelov (@23pointsNorth) provided a version for the annotations in COCO format. See 'coco\\_annotations' directory.\n* In February, 2021, F. Khoroshevsky and S. Khoroshevsky provided the first extension: the berries (\"dot\")\nannotations.\n* In April, 2021, Geng Deng provided point annotations for berries. T. Santos converted Deng's XML files to\neasier-to-load text files now available in 'contrib/berries' directory.\n\n\nIn case of further updates, releases will be properly tagged at GitHub.", "### If others want to extend/augment/build on this dataset, is there a mechanism for them to do so?\n\n\nContributors should contact the maintainer by e-mail.", "### No warranty\n\n\nThe maintainers and their institutions are *exempt from any liability,\njudicial or extrajudicial, for any losses or damages arising from the\nuse of the data contained in the image database*." ]
[ "TAGS\n#task_categories-object-detection #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #license-cc-by-nc-4.0 #agriculture #viticulture #fruit detection #arxiv-1803.09010 #arxiv-1907.11819 #region-us \n", "### Why was the dataset created?\n\n\nEmbrapa WGISD (*Wine Grape Instance Segmentation Dataset*) was created\nto provide images and annotation to study *object detection and instance\nsegmentation* for image-based monitoring and field robotics in\nviticulture. It provides instances from five different grape varieties\ntaken on field. These instances shows variance in grape pose,\nillumination and focus, including genetic and phenological variations\nsuch as shape, color and compactness.", "### What (other) tasks could the dataset be used for?\n\n\nPossible uses include relaxations of the instance segmentation problem:\nclassification (Is a grape in the image?), semantic segmentation (What\nare the \"grape pixels\" in the image?), object detection (Where are\nthe grapes in the image?), and counting (How many berries are there\nper cluster?). The WGISD can also be used in grape variety\nidentification.", "### Who funded the creation of the dataset?\n\n\nThe building of the WGISD dataset was supported by the Embrapa SEG\nProject 01.14.09.001.05.04, *Image-based metrology for Precision\nAgriculture and Phenotyping*, and the CNPq PIBIC Program (grants\n161165/2017-6 and 125044/2018-6).\n\n\nDataset Composition\n-------------------", "### What are the instances?\n\n\nEach instance consists in a RGB image and an annotation describing grape\nclusters locations as bounding boxes. A subset of the instances also\ncontains binary masks identifying the pixels belonging to each grape\ncluster. Each image presents at least one grape cluster. Some grape\nclusters can appear far at the background and should be ignored.", "### Are relationships between instances made explicit in the data?\n\n\nFile names prefixes identify the variety observed in the instance.", "### How many instances of each type are there?\n\n\nThe dataset consists of 300 images containing 4,432 grape clusters\nidentified by bounding boxes. A subset of 137 images also contains\nbinary masks identifying the pixels of each cluster. It means that from\nthe 4,432 clusters, 2,020 of them presents binary masks for instance\nsegmentation, as summarized in the following table.\n\n\n\n*General information about the dataset: the grape varieties and the associated identifying prefix, the date of image capture on field, number of images (instances) and the identified grapes clusters.*", "#### Contributions\n\n\nAnother subset of 111 images with separated and non-occluded grape\nclusters was annotated with point annotations for every berry by F. Khoroshevsky and S. Khoroshevsky (Khoroshevsky *et al.*, 2021). Theses annotations are available in 'test\\_berries.txt' , 'train\\_berries.txt' and 'val\\_berries.txt'\n\n\nPrefix: CDY, Variety: *Chardonnay*, Berries: 1,102\nPrefix: CFR, Variety: *Cabernet Franc*, Berries: 1,592\nPrefix: CSV, Variety: *Cabernet Sauvignon*, Berries: 1,712\nPrefix: SVB, Variety: *Sauvignon Blanc*, Berries: 1,974\nPrefix: SYH, Variety: *Syrah*, Berries: 969\nPrefix: Total, Variety: , Berries: 7,349\n\n\n*Berries annotations by F. Khoroshevsky and S. 
Khoroshevsky.*\nGeng Deng (Deng *et al.*, 2020)\nprovided point-based annotations for berries in all 300 images, summing 187,374 berries.\nThese annotations are available in 'contrib/berries'.\n\n\nDaniel Angelov (@23pointsNorth) provided a version for the annotations in COCO format. See 'coco\\_annotations' directory.", "### What data does each instance consist of?\n\n\nEach instance contains a 8-bits RGB image and a text file containing one\nbounding box description per line. These text files follows the \"YOLO\nformat\"\n\n\n\n```\nCLASS CX CY W H\n\n```\n\n*class* is an integer defining the object class – the dataset presents\nonly the grape class that is numbered 0, so every line starts with this\n“class zero” indicator. The center of the bounding box is the point\n*(c\\_x, c\\_y)*, represented as float values because this format normalizes\nthe coordinates by the image dimensions. To get the absolute position,\nuse *(2048 c\\_x, 1365 c\\_y)*. The bounding box dimensions are\ngiven by *W* and *H*, also normalized by the image size.\n\n\nThe instances presenting mask data for instance segmentation contain\nfiles presenting the '.npz' extension. These files are compressed\narchives for NumPy $n$-dimensional arrays. Each array is a\n*H X W X n\\_clusters* three-dimensional array where\n*n\\_clusters* is the number of grape clusters observed in the\nimage. After assigning the NumPy array to a variable 'M', the mask for\nthe *i*-th grape cluster can be found in 'M[:,:,i]'. The *i*-th mask\ncorresponds to the *i*-th line in the bounding boxes file.\n\n\nThe dataset also includes the original image files, presenting the full\noriginal resolution. The normalized annotation for bounding boxes allows\neasy identification of clusters in the original images, but the mask\ndata will need to be properly rescaled if users wish to work on the\noriginal full resolution.", "#### Contributions\n\n\n*For 'test\\_berries.txt' , 'train\\_berries.txt' and 'val\\_berries.txt'*:\n\n\nThe berries annotations are following a similar notation with the only\nexception being that each text file (train/val/test) includes also the\ninstance file name.\n\n\n\n```\n FILENAME CLASS CX CY\n\n```\n\nwhere *filename* stands for instance file name, *class* is an integer\ndefining the object class (0 for all instances) and the point *(c\\_x, c\\_y)*\nindicates the absolute position of each \"dot\" indicating a single berry in\na well defined cluster.\n\n\n*For 'contrib/berries'*:\n\n\nThe annotations provide the *(x, y)* point position for each berry center, in a tabular form:\n\n\n\n```\n X Y\n\n```\n\nThese point-based annotations can be easily loaded using, for example, 'numpy.loadtxt'. See 'URL'for examples.\n\n\nDaniel Angelov (@23pointsNorth) provided a version for the annotations in COCO format. See 'coco\\_annotations' directory. Also see COCO format for the JSON-based format.", "### Is everything included or does the data rely on external resources?\n\n\nEverything is included in the dataset.", "### Are there recommended data splits or evaluation measures?\n\n\nThe dataset comes with specified train/test splits. The splits are found\nin lists stored as text files. 
There are also lists referring only to\ninstances presenting binary masks.\n\n\n\n*Dataset recommended split.*\n\n\nStandard measures from the information retrieval and computer vision\nliterature should be employed: precision and recall, *F1-score* and\naverage precision as seen in COCO\nand Pascal VOC.", "### What experiments were initially run on this dataset?\n\n\nThe first experiments run on this dataset are described in *Grape detection, segmentation and tracking using deep neural networks and three-dimensional association* by Santos *et al.*. See also the following video demo:\n\n\n![Grape detection, segmentation and tracking](URL \"Grape detection, segmentation and tracking\")\n\n\nUPDATE: The JPG files corresponding to the video frames in the video demo are now available in the 'extras' directory.\n\n\nData Collection Process\n-----------------------", "### How was the data collected?\n\n\nImages were captured at the vineyards of Guaspari Winery, located at\nEspírito Santo do Pinhal, São Paulo, Brazil (Lat -22.181018, Lon\n-46.741618). The winery staff performs dual pruning: one for shaping\n(after previous year harvest) and one for production, resulting in\ncanopies of lower density. The image capturing was realized in April\n2017 for *Syrah* and in April 2018 for the other varieties.\n\n\nA Canon EOS REBEL T3i DSLR camera and a Motorola Z2 Play smartphone were\nused to capture the images. The cameras were located between the vines\nlines, facing the vines at distances around 1-2 meters. The EOS REBEL\nT3i camera captured 240 images, including all *Syrah* pictures. The Z2\nsmartphone grabbed 60 images covering all varieties except *Syrah* . The\nREBEL images were scaled to *2048 X 1365* pixels and the Z2 images\nto *2048 X 1536* pixels. More data about the capture process can be found\nin the Exif data found in the original image files, included in the dataset.", "### Who was involved in the data collection process?\n\n\nT. T. Santos, A. A. Santos and S. Avila captured the images in\nfield. T. T. Santos, L. L. de Souza and S. Avila performed the\nannotation for bounding boxes and masks.", "### How was the data associated with each instance acquired?\n\n\nThe rectangular bounding boxes identifying the grape clusters were\nannotated using the 'labelImg' tool.\nThe clusters can be under\nsevere occlusion by leaves, trunks or other clusters. Considering the\nabsence of 3-D data and on-site annotation, the clusters locations had\nto be defined using only a single-view image, so some clusters could be\nincorrectly delimited.\n\n\nA subset of the bounding boxes was selected for mask annotation, using a\nnovel tool developed by the authors and presented in this work. This\ninteractive tool lets the annotator mark grape and background pixels\nusing scribbles, and a graph matching algorithm developed by Noma *et al.*\nis employed to perform image segmentation to every pixel in the bounding\nbox, producing a binary mask representing grape/background\nclassification.", "#### Contributions\n\n\nA subset of the bounding boxes of well-defined (separated and non-occluded\nclusters) was used for \"dot\" (berry) annotations of each grape to\nserve for counting applications as described in Khoroshevsky *et\nal.*. The berries\nannotation was performed by F. Khoroshevsky and S. Khoroshevsky.\n\n\nGeng Deng (Deng *et al.*, 2020)\nprovided point-based annotations for berries in all 300 images, summing\n187,374 berries. 
These annotations are available in 'contrib/berries'.\nDeng *et al.* employed Huawei ModelArt,\nfor their annotation effort.\n\n\nData Preprocessing\n------------------", "### What preprocessing/cleaning was done?\n\n\nThe following steps were taken to process the data:\n\n\n1. Bounding boxes were annotated for each image using the 'labelImg'\ntool.\n2. Images were resized to *W = 2048* pixels. This resolution proved to\nbe practical to mask annotation, a convenient balance between grape\ndetail and time spent by the graph-based segmentation algorithm.\n3. A randomly selected subset of images were employed on mask\nannotation using the interactive tool based on graph matching.\n4. All binaries masks were inspected, in search of pixels attributed to\nmore than one grape cluster. The annotator assigned the disputed\npixels to the most likely cluster.\n5. The bounding boxes were fitted to the masks, which provided a fine\ntuning of grape clusters locations.", "### Was the “raw” data saved in addition to the preprocessed data?\n\n\nThe original resolution images, containing the Exif data provided by the\ncameras, is available in the dataset.\n\n\nDataset Distribution\n--------------------", "### How is the dataset distributed?\n\n\nThe dataset is available at GitHub.", "### When will the dataset be released/first distributed?\n\n\nThe dataset was released in July, 2019.", "### What license (if any) is it distributed under?\n\n\nThe data is released under Creative Commons BY-NC 4.0 (Attribution-NonCommercial 4.0 International license).\nThere is a request to cite the corresponding paper if the dataset is used. For\ncommercial use, contact Embrapa Agricultural Informatics business office.", "### Are there any fees or access/export restrictions?\n\n\nThere are no fees or restrictions. For commercial use, contact Embrapa\nAgricultural Informatics business office.\n\n\nDataset Maintenance\n-------------------", "### Who is supporting/hosting/maintaining the dataset?\n\n\nThe dataset is hosted at Embrapa Agricultural Informatics and all\ncomments or requests can be sent to Thiago T. Santos\n(maintainer).", "### Will the dataset be updated?\n\n\nThere is no scheduled updates.\n\n\n* In May, 2022, Daniel Angelov (@23pointsNorth) provided a version for the annotations in COCO format. See 'coco\\_annotations' directory.\n* In February, 2021, F. Khoroshevsky and S. Khoroshevsky provided the first extension: the berries (\"dot\")\nannotations.\n* In April, 2021, Geng Deng provided point annotations for berries. T. Santos converted Deng's XML files to\neasier-to-load text files now available in 'contrib/berries' directory.\n\n\nIn case of further updates, releases will be properly tagged at GitHub.", "### If others want to extend/augment/build on this dataset, is there a mechanism for them to do so?\n\n\nContributors should contact the maintainer by e-mail.", "### No warranty\n\n\nThe maintainers and their institutions are *exempt from any liability,\njudicial or extrajudicial, for any losses or damages arising from the\nuse of the data contained in the image database*." ]
0296ef2a4d400dbfa492c14b2b857c999fc3523a
11.5k Russian books in txt format, divided by genres. The dataset contains 11.5 thousand books of Russian literature and was built from the ancient "lib in poc" disc.
4eJIoBek/ru-libinpoc-11k
[ "task_categories:text-generation", "size_categories:10K<n<100K", "license:mit", "region:us" ]
2023-01-05T12:44:21+00:00
{"license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"]}
2023-01-09T22:45:47+00:00
[]
[]
TAGS #task_categories-text-generation #size_categories-10K<n<100K #license-mit #region-us
11,5k russian books in txt format, divided by genres 11,5 тыщ книг русской литературы. датасет сделан из древнющего диска "lib in poc"
[]
[ "TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #license-mit #region-us \n" ]
000dd81e685f6c10da8d2c37bd71c2fef0e92d59
# Dataset Card for "txt_to_gls_dts" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
fuyulinh04/txt_to_gls_dts
[ "region:us" ]
2023-01-05T13:45:16+00:00
{"dataset_info": {"features": [{"name": "gloss", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10780088.8, "num_examples": 70168}, {"name": "test", "num_bytes": 2695022.2, "num_examples": 17542}], "download_size": 8157820, "dataset_size": 13475111.0}}
2023-01-05T18:34:58+00:00
[]
[]
TAGS #region-us
# Dataset Card for "txt_to_gls_dts" More Information needed
[ "# Dataset Card for \"txt_to_gls_dts\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"txt_to_gls_dts\"\n\nMore Information needed" ]
116c215bb0817203a5255ba5a819b7b40c25a1fa
# Dataset Card for "diachronia-ocr-train" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Zombely/diachronia-ocr-train
[ "region:us" ]
2023-01-05T14:12:17+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 51407894.0, "num_examples": 50}, {"name": "validation", "num_bytes": 10945929.0, "num_examples": 9}], "download_size": 62342762, "dataset_size": 62353823.0}}
2023-01-05T14:12:57+00:00
[]
[]
TAGS #region-us
# Dataset Card for "diachronia-ocr-train" More Information needed
[ "# Dataset Card for \"diachronia-ocr-train\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"diachronia-ocr-train\"\n\nMore Information needed" ]
45e708d930ce42706461bd0f87d2b8dbbca42664
# Dataset Card for "News" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
vencortex/News
[ "region:us" ]
2023-01-05T14:13:58+00:00
{"dataset_info": {"features": [{"name": "symbol", "dtype": "string"}, {"name": "publishedDate", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "image", "dtype": "string"}, {"name": "site", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "type", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 834852911, "num_examples": 1495869}], "download_size": 170603751, "dataset_size": 834852911}}
2023-01-05T14:14:16+00:00
[]
[]
TAGS #region-us
# Dataset Card for "News" More Information needed
[ "# Dataset Card for \"News\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"News\"\n\nMore Information needed" ]
1f6b50c209aa111cf7e209f3840ddd408ae55afd
# Dataset Card for "test_6k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pyakymenko/test_6k
[ "region:us" ]
2023-01-05T14:33:32+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 475682224.444, "num_examples": 6661}], "download_size": 473720429, "dataset_size": 475682224.444}}
2023-01-05T14:50:56+00:00
[]
[]
TAGS #region-us
# Dataset Card for "test_6k" More Information needed
[ "# Dataset Card for \"test_6k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"test_6k\"\n\nMore Information needed" ]
dbe96e27d52c70eb0067de744fb5608199d31656
# Dataset Card for "6k_mp3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
arnepeine/6k_mp3
[ "region:us" ]
2023-01-05T14:54:25+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 475682224.444, "num_examples": 6661}], "download_size": 473720429, "dataset_size": 475682224.444}}
2023-01-05T15:02:12+00:00
[]
[]
TAGS #region-us
# Dataset Card for "6k_mp3" More Information needed
[ "# Dataset Card for \"6k_mp3\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"6k_mp3\"\n\nMore Information needed" ]
92d4a4aed4d949023ea948840a94b368447b7f9f
# Dataset Card for ScienceIE

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://scienceie.github.io/index.html](https://scienceie.github.io/index.html)
- **Repository:** [https://github.com/ScienceIE/scienceie.github.io](https://github.com/ScienceIE/scienceie.github.io)
- **Paper:** [SemEval 2017 Task 10: ScienceIE - Extracting Keyphrases and Relations from Scientific Publications](https://arxiv.org/abs/1704.02853)
- **Leaderboard:** [https://competitions.codalab.org/competitions/15898](https://competitions.codalab.org/competitions/15898)
- **Size of downloaded dataset files:** 13.7 MB
- **Size of generated dataset files:** 17.4 MB

### Dataset Summary

ScienceIE is a dataset for the SemEval task of extracting key phrases and relations between them from scientific documents. A corpus for the task was built from ScienceDirect open access publications and was available freely for participants, without the need to sign a copyright agreement. Each data instance consists of one paragraph of text, drawn from a scientific paper. Publications were provided in plain text, in addition to XML format, which included the full text of the publication as well as additional metadata. 500 paragraphs from journal articles, evenly distributed among the domains Computer Science, Material Sciences and Physics, were selected.

The corpus consists of 350 documents for training, 50 for development and 100 for testing. This is similar to the pilot task described in Section 5, for which 144 articles were used for training, 40 for development and 100 for testing.

There are three subtasks:

- Subtask (A): Identification of keyphrases
  - Given a scientific publication, the goal of this task is to identify all the keyphrases in the document.
- Subtask (B): Classification of identified keyphrases
  - In this task, each keyphrase needs to be labelled by one of three types: (i) PROCESS, (ii) TASK, and (iii) MATERIAL.
    - PROCESS: Keyphrases relating to some scientific model, algorithm or process should be labelled by PROCESS.
    - TASK: Keyphrases that denote the application, end goal, problem or task should be labelled by TASK.
    - MATERIAL: MATERIAL keyphrases identify the resources used in the paper.
- Subtask (C): Extraction of relationships between two identified keyphrases
  - Every pair of keyphrases needs to be labelled by one of three types: (i) HYPONYM-OF, (ii) SYNONYM-OF, and (iii) NONE.
    - HYPONYM-OF: The relationship between two keyphrases A and B is HYPONYM-OF if the semantic field of A is included within that of B. One example is Red HYPONYM-OF Color.
    - SYNONYM-OF: The relationship between two keyphrases A and B is SYNONYM-OF if they both denote the same semantic field, for example Machine Learning SYNONYM-OF ML.

Note: In this repository the documents were split into sentences using spaCy, resulting in a 2388/400/838 split. The `id` consists of the document id and the example index within the document, separated by an underscore, e.g. `S0375960115004120_1`. This should enable you to reconstruct the documents from the sentences.

### Supported Tasks and Leaderboards

- **Tasks:** Key phrase extraction and relation extraction in scientific documents
- **Leaderboards:** [https://competitions.codalab.org/competitions/15898](https://competitions.codalab.org/competitions/15898)

### Languages

The language in the dataset is English.

## Dataset Structure

### Data Instances

#### subtask_a

- **Size of downloaded dataset files:** 13.7 MB
- **Size of the generated dataset:** 17.4 MB

An example of 'train' looks as follows:
```json
{
  "id": "S0375960115004120_1",
  "tokens": ["Another", "remarkable", "feature", "of", "the", "quantum", "field", "treatment", "can", "be", "revealed", "from", "the", "investigation", "of", "the", "vacuum", "state", "."],
  "tags": [0, 0, 0, 0, 0, 1, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0]
}
```

#### subtask_b

- **Size of downloaded dataset files:** 13.7 MB
- **Size of the generated dataset:** 17.4 MB

An example of 'train' looks as follows:
```json
{
  "id": "S0375960115004120_2",
  "tokens": ["For", "a", "classical", "field", ",", "vacuum", "is", "realized", "by", "simply", "setting", "the", "potential", "to", "zero", "resulting", "in", "an", "unaltered", ",", "free", "evolution", "of", "the", "particle", "'s", "plane", "wave", "(", "|ψI〉=|ψIII〉=|k0", "〉", ")", "."],
  "tags": [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0]
}
```

#### subtask_c

- **Size of downloaded dataset files:** 13.7 MB
- **Size of the generated dataset:** 30.1 MB

An example of 'train' looks as follows:
```json
{
  "id": "S0375960115004120_3",
  "tokens": ["In", "the", "quantized", "treatment", ",", "vacuum", "is", "represented", "by", "an", "initial", "Fock", "state", "|n0=0", "〉", "which", "still", "interacts", "with", "the", "particle", "and", "yields", "as", "final", "state", "|ΨIII", "〉", "behind", "the", "field", "region(19)|ΨI〉=|k0〉⊗|0〉⇒|ΨIII〉=∑n=0∞t0n|k−n〉⊗|n", "〉", "with", "a", "photon", "exchange", "probability(20)P0,n=|t0n|2=1n!e−Λ2Λ2n", "The", "particle", "thus", "transfers", "energy", "to", "the", "vacuum", "field", "leading", "to", "a", "Poissonian", "distributed", "final", "photon", "number", "."],
  "tags": [[0, 0, ...], [0, 0, ...], ...]
}
```

Note: The tag sequence consists of one vector for each token, encoding what the relationship between that token and every other token in the sequence is for the first token in each key phrase.
#### ner

- **Size of downloaded dataset files:** 13.7 MB
- **Size of the generated dataset:** 17.4 MB

An example of 'train' looks as follows:
```json
{
  "id": "S0375960115004120_4",
  "tokens": ["Let", "'s", "consider", ",", "for", "example", ",", "a", "superconducting", "resonant", "circuit", "as", "source", "of", "the", "field", "."],
  "tags": [0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 0, 0, 0, 0, 0, 0]
}
```

#### re

- **Size of downloaded dataset files:** 13.7 MB
- **Size of the generated dataset:** 16.4 MB

An example of 'train' looks as follows:
```json
{
  "id": "S0375960115004120_5",
  "tokens": ["In", "the", "quantized", "treatment", ",", "vacuum", "is", "represented", "by", "an", "initial", "Fock", "state", "|n0=0", "〉", "which", "still", "interacts", "with", "the", "particle", "and", "yields", "as", "final", "state", "|ΨIII", "〉", "behind", "the", "field", "region(19)|ΨI〉=|k0〉⊗|0〉⇒|ΨIII〉=∑n=0∞t0n|k−n〉⊗|n", "〉", "with", "a", "photon", "exchange", "probability(20)P0,n=|t0n|2=1n!e−Λ2Λ2n", "The", "particle", "thus", "transfers", "energy", "to", "the", "vacuum", "field", "leading", "to", "a", "Poissonian", "distributed", "final", "photon", "number", "."],
  "arg1_start": 2,
  "arg1_end": 4,
  "arg1_type": "Task",
  "arg2_start": 5,
  "arg2_end": 6,
  "arg2_type": "Material",
  "relation": 0
}
```

### Data Fields

#### subtask_a

- `id`: the instance id of this sentence, a `string` feature.
- `tokens`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features.
- `tags`: the list of tags of this sentence marking a token as being outside, at the beginning, or inside a key phrase, a `list` of classification labels.

```python
{"O": 0, "B": 1, "I": 2}
```

#### subtask_b

- `id`: the instance id of this sentence, a `string` feature.
- `tokens`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features.
- `tags`: the list of tags of this sentence marking a token as being outside a key phrase, or being part of a material, process or task, a `list` of classification labels.

```python
{"O": 0, "M": 1, "P": 2, "T": 3}
```

#### subtask_c

- `id`: the instance id of this sentence, a `string` feature.
- `tokens`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features.
- `tags`: a vector for each token that encodes what the relationship between that token and every other token in the sequence is for the first token in each key phrase, a `list` of a `list` of a classification label.

```python
{"O": 0, "S": 1, "H": 2}
```

#### ner

- `id`: the instance id of this sentence, a `string` feature.
- `tokens`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features.
- `tags`: the list of ner tags of this sentence, a `list` of classification labels.

```python
{"O": 0, "B-Material": 1, "I-Material": 2, "B-Process": 3, "I-Process": 4, "B-Task": 5, "I-Task": 6}
```

#### re

- `id`: the instance id of this sentence, a `string` feature.
- `tokens`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features.
- `arg1_start`: the 0-based index of the start token of the relation arg1 mention, an `int` feature.
- `arg1_end`: the 0-based index of the end token of the relation arg1 mention, exclusive, an `int` feature.
- `arg1_type`: the key phrase type of the relation arg1 mention, a `string` feature.
- `arg2_start`: the 0-based index of the start token of the relation arg2 mention, an `int` feature.
- `arg2_end`: the 0-based index of the end token of the relation arg2 mention, exclusive, an `int` feature.
- `arg2_type`: the key phrase type of the relation arg2 mention, a `string` feature.
- `relation`: the relation label of this instance, a classification label.

```python
{"O": 0, "Synonym-of": 1, "Hyponym-of": 2}
```
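To make these integer-coded fields a bit more tangible, here is a small, illustrative sketch that loads the dataset with the Hugging Face `datasets` library, maps `ner` tag ids back to their labels and reconstructs the two argument mentions of an `re` instance. The configuration names are assumed to match the subsection names used above.

```python
from datasets import load_dataset

# Label mappings copied from the listings above.
NER_LABELS = {0: "O", 1: "B-Material", 2: "I-Material", 3: "B-Process",
              4: "I-Process", 5: "B-Task", 6: "I-Task"}
RELATION_LABELS = {0: "O", 1: "Synonym-of", 2: "Hyponym-of"}

# Configuration names assumed to be the subsection names (subtask_a, ..., ner, re).
ner_ds = load_dataset("DFKI-SLT/science_ie", "ner", split="train")
re_ds = load_dataset("DFKI-SLT/science_ie", "re", split="train")

# Pair each token of an `ner` instance with its human-readable tag.
example = ner_ds[0]
print([(tok, NER_LABELS[tag]) for tok, tag in zip(example["tokens"], example["tags"])])

# Recover the two argument mentions of an `re` instance (end indices are exclusive).
rel = re_ds[0]
arg1 = " ".join(rel["tokens"][rel["arg1_start"]:rel["arg1_end"]])
arg2 = " ".join(rel["tokens"][rel["arg2_start"]:rel["arg2_end"]])
print(arg1, rel["arg1_type"], "--", RELATION_LABELS[rel["relation"]], "->", arg2, rel["arg2_type"])
```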
- `arg2_end`: the 0-based index of the end token of the relation arg2 mention, exclusive, an `ìnt` feature. - `arg2_type`: the key phrase type of the relation arg2 mention, a `string` feature. - `relation`: the relation label of this instance, a classification label. ```python {"O": 0, "Synonym-of": 1, "Hyponym-of": 2} ``` ### Data Splits | | Train | Dev | Test | |-----------|-------|------|------| | subtask_a | 2388 | 400 | 838 | | subtask_b | 2388 | 400 | 838 | | subtask_c | 2388 | 400 | 838 | | ner | 2388 | 400 | 838 | | re | 24558 | 4838 | 6618 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @article{DBLP:journals/corr/AugensteinDRVM17, author = {Isabelle Augenstein and Mrinal Das and Sebastian Riedel and Lakshmi Vikraman and Andrew McCallum}, title = {SemEval 2017 Task 10: ScienceIE - Extracting Keyphrases and Relations from Scientific Publications}, journal = {CoRR}, volume = {abs/1704.02853}, year = {2017}, url = {http://arxiv.org/abs/1704.02853}, eprinttype = {arXiv}, eprint = {1704.02853}, timestamp = {Mon, 13 Aug 2018 16:46:36 +0200}, biburl = {https://dblp.org/rec/journals/corr/AugensteinDRVM17.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ### Contributions Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset.
DFKI-SLT/science_ie
[ "task_categories:token-classification", "task_categories:text-classification", "task_ids:named-entity-recognition", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "language:en", "license:other", "research papers", "scientific papers", "arxiv:1704.02853", "region:us" ]
2023-01-05T15:32:00+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["token-classification", "text-classification"], "task_ids": ["named-entity-recognition", "multi-class-classification"], "pretty_name": "ScienceIE is a dataset for the SemEval task of extracting key phrases and relations between them from scientific documents", "tags": ["research papers", "scientific papers"], "dataset_info": [{"config_name": "ner", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-Material", "2": "I-Material", "3": "B-Process", "4": "I-Process", "5": "B-Task", "6": "I-Task"}}}}], "splits": [{"name": "train", "num_bytes": 1185670, "num_examples": 2388}, {"name": "validation", "num_bytes": 204095, "num_examples": 400}, {"name": "test", "num_bytes": 399069, "num_examples": 838}], "download_size": 13704567, "dataset_size": 1788834}, {"config_name": "re", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "dtype": "string"}, {"name": "arg1_start", "dtype": "int32"}, {"name": "arg1_end", "dtype": "int32"}, {"name": "arg1_type", "dtype": "string"}, {"name": "arg2_start", "dtype": "int32"}, {"name": "arg2_end", "dtype": "int32"}, {"name": "arg2_type", "dtype": "string"}, {"name": "relation", "dtype": {"class_label": {"names": {"0": "O", "1": "Synonym-of", "2": "Hyponym-of"}}}}], "splits": [{"name": "train", "num_bytes": 11738520, "num_examples": 24558}, {"name": "validation", "num_bytes": 2347796, "num_examples": 4838}, {"name": "test", "num_bytes": 2835275, "num_examples": 6618}], "download_size": 13704567, "dataset_size": 16921591}, {"config_name": "subtask_a", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B", "2": "I"}}}}], "splits": [{"name": "train", "num_bytes": 1185670, "num_examples": 2388}, {"name": "validation", "num_bytes": 204095, "num_examples": 400}, {"name": "test", "num_bytes": 399069, "num_examples": 838}], "download_size": 13704567, "dataset_size": 1788834}, {"config_name": "subtask_b", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "tags", "sequence": {"class_label": {"names": {"0": "O", "1": "M", "2": "P", "3": "T"}}}}], "splits": [{"name": "train", "num_bytes": 1185670, "num_examples": 2388}, {"name": "validation", "num_bytes": 204095, "num_examples": 400}, {"name": "test", "num_bytes": 399069, "num_examples": 838}], "download_size": 13704567, "dataset_size": 1788834}, {"config_name": "subtask_c", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "tags", "sequence": {"sequence": {"class_label": {"names": {"0": "O", "1": "S", "2": "H"}}}}}], "splits": [{"name": "train", "num_bytes": 20103682, "num_examples": 2388}, {"name": "validation", "num_bytes": 3575511, "num_examples": 400}, {"name": "test", "num_bytes": 6431513, "num_examples": 838}], "download_size": 13704567, "dataset_size": 30110706}]}
2023-01-19T11:26:55+00:00
[ "1704.02853" ]
[ "en" ]
TAGS #task_categories-token-classification #task_categories-text-classification #task_ids-named-entity-recognition #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-other #research papers #scientific papers #arxiv-1704.02853 #region-us
Dataset Card for ScienceIE ========================== Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: SemEval 2017 Task 10: ScienceIE - Extracting Keyphrases and Relations from Scientific Publications * Leaderboard: URL * Size of downloaded dataset files: 13.7 MB * Size of generated dataset files: 17.4 MB ### Dataset Summary ScienceIE is a dataset for the SemEval task of extracting key phrases and relations between them from scientific documents. A corpus for the task was built from ScienceDirect open access publications and was available freely for participants, without the need to sign a copyright agreement. Each data instance consists of one paragraph of text, drawn from a scientific paper. Publications were provided in plain text, in addition to xml format, which included the full text of the publication as well as additional metadata. 500 paragraphs from journal articles evenly distributed among the domains Computer Science, Material Sciences and Physics were selected. The training data part of the corpus consists of 350 documents, 50 for development and 100 for testing. This is similar to the pilot task described in Section 5, for which 144 articles were used for training, 40 for development and for 100 testing. There are three subtasks: * Subtask (A): Identification of keyphrases + Given a scientific publication, the goal of this task is to identify all the keyphrases in the document. * Subtask (B): Classification of identified keyphrases + In this task, each keyphrase needs to be labelled by one of three types: (i) PROCESS, (ii) TASK, and (iii) MATERIAL. - PROCESS: Keyphrases relating to some scientific model, algorithm or process should be labelled by PROCESS. - TASK: Keyphrases those denote the application, end goal, problem, task should be labelled by TASK. - MATERIAL: MATERIAL keyphrases identify the resources used in the paper. * Subtask (C): Extraction of relationships between two identified keyphrases + Every pair of keyphrases need to be labelled by one of three types: (i) HYPONYM-OF, (ii) SYNONYM-OF, and (iii) NONE. - HYPONYM-OF: The relationship between two keyphrases A and B is HYPONYM-OF if semantic field of A is included within that of B. One example is Red HYPONYM-OF Color. - SYNONYM-OF: The relationship between two keyphrases A and B is SYNONYM-OF if they both denote the same semantic field, for example Machine Learning SYNONYM-OF ML. Note: In this repository the documents were split into sentences using spaCy, resulting in a 2388, 400, 838 split. The 'id' consists of the document id and the example index within the document separated by an underscore, e.g. 'S0375960115004120\_1'. This should enable you to reconstruct the documents from the sentences. ### Supported Tasks and Leaderboards * Tasks: Key phrase extraction and relation extraction in scientific documents * Leaderboards: URL ### Languages The language in the dataset is English. 
Dataset Structure ----------------- ### Data Instances #### subtask\_a * Size of downloaded dataset files: 13.7 MB * Size of the generated dataset: 17.4 MB An example of 'train' looks as follows: #### subtask\_b * Size of downloaded dataset files: 13.7 MB * Size of the generated dataset: 17.4 MB An example of 'train' looks as follows: #### subtask\_c * Size of downloaded dataset files: 13.7 MB * Size of the generated dataset: 30.1 MB An example of 'train' looks as follows: Note: The tag sequence consists of vectors for each token, that encode what the relationship between that token and every other token in the sequence is for the first token in each key phrase. #### ner * Size of downloaded dataset files: 13.7 MB * Size of the generated dataset: 17.4 MB An example of 'train' looks as follows: #### re * Size of downloaded dataset files: 13.7 MB * Size of the generated dataset: 16.4 MB An example of 'train' looks as follows: ### Data Fields #### subtask\_a * 'id': the instance id of this sentence, a 'string' feature. * 'tokens': the list of tokens of this sentence, obtained with spaCy, a 'list' of 'string' features. * 'tags': the list of tags of this sentence marking a token as being outside, at the beginning, or inside a key phrase, a 'list' of classification labels. #### subtask\_b * 'id': the instance id of this sentence, a 'string' feature. * 'tokens': the list of tokens of this sentence, obtained with spaCy, a 'list' of 'string' features. * 'tags': the list of tags of this sentence marking a token as being outside a key phrase, or being part of a material, process or task, a 'list' of classification labels. #### subtask\_c * 'id': the instance id of this sentence, a 'string' feature. * 'tokens': the list of tokens of this sentence, obtained with spaCy, a 'list' of 'string' features. * 'tags': a vector for each token, that encodes what the relationship between that token and every other token in the sequence is for the first token in each key phrase, a 'list' of a 'list' of a classification label. #### ner * 'id': the instance id of this sentence, a 'string' feature. * 'tokens': the list of tokens of this sentence, obtained with spaCy, a 'list' of 'string' features. * 'tags': the list of ner tags of this sentence, a 'list' of classification labels. #### re * 'id': the instance id of this sentence, a 'string' feature. * 'token': the list of tokens of this sentence, obtained with spaCy, a 'list' of 'string' features. * 'arg1\_start': the 0-based index of the start token of the relation arg1 mention, an 'ìnt' feature. * 'arg1\_end': the 0-based index of the end token of the relation arg1 mention, exclusive, an 'ìnt' feature. * 'arg1\_type': the key phrase type of the end token of the relation arg1 mention, a 'string' feature. * 'arg2\_start': the 0-based index of the start token of the relation arg2 mention, an 'ìnt' feature. * 'arg2\_end': the 0-based index of the end token of the relation arg2 mention, exclusive, an 'ìnt' feature. * 'arg2\_type': the key phrase type of the relation arg2 mention, a 'string' feature. * 'relation': the relation label of this instance, a classification label. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @phucdev for adding this dataset.
[ "### Dataset Summary\n\n\nScienceIE is a dataset for the SemEval task of extracting key phrases and relations between them from scientific documents.\nA corpus for the task was built from ScienceDirect open access publications and was available freely for participants, without the need to sign a copyright agreement. Each data instance consists of one paragraph of text, drawn from a scientific paper.\nPublications were provided in plain text, in addition to xml format, which included the full text of the publication as well as additional metadata. 500 paragraphs from journal articles evenly distributed among the domains Computer Science, Material Sciences and Physics were selected.\nThe training data part of the corpus consists of 350 documents, 50 for development and 100 for testing. This is similar to the pilot task described in Section 5, for which 144 articles were used for training, 40 for development and for 100 testing.\n\n\nThere are three subtasks:\n\n\n* Subtask (A): Identification of keyphrases\n\t+ Given a scientific publication, the goal of this task is to identify all the keyphrases in the document.\n* Subtask (B): Classification of identified keyphrases\n\t+ In this task, each keyphrase needs to be labelled by one of three types: (i) PROCESS, (ii) TASK, and (iii) MATERIAL.\n\t\t- PROCESS: Keyphrases relating to some scientific model, algorithm or process should be labelled by PROCESS.\n\t\t- TASK: Keyphrases those denote the application, end goal, problem, task should be labelled by TASK.\n\t\t- MATERIAL: MATERIAL keyphrases identify the resources used in the paper.\n* Subtask (C): Extraction of relationships between two identified keyphrases\n\t+ Every pair of keyphrases need to be labelled by one of three types: (i) HYPONYM-OF, (ii) SYNONYM-OF, and (iii) NONE.\n\t\t- HYPONYM-OF: The relationship between two keyphrases A and B is HYPONYM-OF if semantic field of A is included within that of B. One example is Red HYPONYM-OF Color.\n\t\t- SYNONYM-OF: The relationship between two keyphrases A and B is SYNONYM-OF if they both denote the same semantic field, for example Machine Learning SYNONYM-OF ML.\n\n\nNote: In this repository the documents were split into sentences using spaCy, resulting in a 2388, 400, 838 split. The 'id' consists of the document id and the example index within the document separated by an underscore, e.g. 'S0375960115004120\\_1'. 
This should enable you to reconstruct the documents from the sentences.", "### Supported Tasks and Leaderboards\n\n\n* Tasks: Key phrase extraction and relation extraction in scientific documents\n* Leaderboards: URL", "### Languages\n\n\nThe language in the dataset is English.\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### subtask\\_a\n\n\n* Size of downloaded dataset files: 13.7 MB\n* Size of the generated dataset: 17.4 MB\n\n\nAn example of 'train' looks as follows:", "#### subtask\\_b\n\n\n* Size of downloaded dataset files: 13.7 MB\n* Size of the generated dataset: 17.4 MB\n\n\nAn example of 'train' looks as follows:", "#### subtask\\_c\n\n\n* Size of downloaded dataset files: 13.7 MB\n* Size of the generated dataset: 30.1 MB\n\n\nAn example of 'train' looks as follows:\n\n\nNote: The tag sequence consists of vectors for each token, that encode what the relationship between that token\nand every other token in the sequence is for the first token in each key phrase.", "#### ner\n\n\n* Size of downloaded dataset files: 13.7 MB\n* Size of the generated dataset: 17.4 MB\n\n\nAn example of 'train' looks as follows:", "#### re\n\n\n* Size of downloaded dataset files: 13.7 MB\n* Size of the generated dataset: 16.4 MB\n\n\nAn example of 'train' looks as follows:", "### Data Fields", "#### subtask\\_a\n\n\n* 'id': the instance id of this sentence, a 'string' feature.\n* 'tokens': the list of tokens of this sentence, obtained with spaCy, a 'list' of 'string' features.\n* 'tags': the list of tags of this sentence marking a token as being outside, at the beginning, or inside a key phrase, a 'list' of classification labels.", "#### subtask\\_b\n\n\n* 'id': the instance id of this sentence, a 'string' feature.\n* 'tokens': the list of tokens of this sentence, obtained with spaCy, a 'list' of 'string' features.\n* 'tags': the list of tags of this sentence marking a token as being outside a key phrase, or being part of a material, process or task, a 'list' of classification labels.", "#### subtask\\_c\n\n\n* 'id': the instance id of this sentence, a 'string' feature.\n* 'tokens': the list of tokens of this sentence, obtained with spaCy, a 'list' of 'string' features.\n* 'tags': a vector for each token, that encodes what the relationship between that token and every other token in the sequence is for the first token in each key phrase, a 'list' of a 'list' of a classification label.", "#### ner\n\n\n* 'id': the instance id of this sentence, a 'string' feature.\n* 'tokens': the list of tokens of this sentence, obtained with spaCy, a 'list' of 'string' features.\n* 'tags': the list of ner tags of this sentence, a 'list' of classification labels.", "#### re\n\n\n* 'id': the instance id of this sentence, a 'string' feature.\n* 'token': the list of tokens of this sentence, obtained with spaCy, a 'list' of 'string' features.\n* 'arg1\\_start': the 0-based index of the start token of the relation arg1 mention, an 'ìnt' feature.\n* 'arg1\\_end': the 0-based index of the end token of the relation arg1 mention, exclusive, an 'ìnt' feature.\n* 'arg1\\_type': the key phrase type of the end token of the relation arg1 mention, a 'string' feature.\n* 'arg2\\_start': the 0-based index of the start token of the relation arg2 mention, an 'ìnt' feature.\n* 'arg2\\_end': the 0-based index of the end token of the relation arg2 mention, exclusive, an 'ìnt' feature.\n* 'arg2\\_type': the key phrase type of the relation arg2 mention, a 'string' feature.\n* 'relation': the relation label of this 
instance, a classification label.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @phucdev for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_categories-text-classification #task_ids-named-entity-recognition #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-other #research papers #scientific papers #arxiv-1704.02853 #region-us \n", "### Dataset Summary\n\n\nScienceIE is a dataset for the SemEval task of extracting key phrases and relations between them from scientific documents.\nA corpus for the task was built from ScienceDirect open access publications and was available freely for participants, without the need to sign a copyright agreement. Each data instance consists of one paragraph of text, drawn from a scientific paper.\nPublications were provided in plain text, in addition to xml format, which included the full text of the publication as well as additional metadata. 500 paragraphs from journal articles evenly distributed among the domains Computer Science, Material Sciences and Physics were selected.\nThe training data part of the corpus consists of 350 documents, 50 for development and 100 for testing. This is similar to the pilot task described in Section 5, for which 144 articles were used for training, 40 for development and for 100 testing.\n\n\nThere are three subtasks:\n\n\n* Subtask (A): Identification of keyphrases\n\t+ Given a scientific publication, the goal of this task is to identify all the keyphrases in the document.\n* Subtask (B): Classification of identified keyphrases\n\t+ In this task, each keyphrase needs to be labelled by one of three types: (i) PROCESS, (ii) TASK, and (iii) MATERIAL.\n\t\t- PROCESS: Keyphrases relating to some scientific model, algorithm or process should be labelled by PROCESS.\n\t\t- TASK: Keyphrases those denote the application, end goal, problem, task should be labelled by TASK.\n\t\t- MATERIAL: MATERIAL keyphrases identify the resources used in the paper.\n* Subtask (C): Extraction of relationships between two identified keyphrases\n\t+ Every pair of keyphrases need to be labelled by one of three types: (i) HYPONYM-OF, (ii) SYNONYM-OF, and (iii) NONE.\n\t\t- HYPONYM-OF: The relationship between two keyphrases A and B is HYPONYM-OF if semantic field of A is included within that of B. One example is Red HYPONYM-OF Color.\n\t\t- SYNONYM-OF: The relationship between two keyphrases A and B is SYNONYM-OF if they both denote the same semantic field, for example Machine Learning SYNONYM-OF ML.\n\n\nNote: In this repository the documents were split into sentences using spaCy, resulting in a 2388, 400, 838 split. The 'id' consists of the document id and the example index within the document separated by an underscore, e.g. 'S0375960115004120\\_1'. 
This should enable you to reconstruct the documents from the sentences.", "### Supported Tasks and Leaderboards\n\n\n* Tasks: Key phrase extraction and relation extraction in scientific documents\n* Leaderboards: URL", "### Languages\n\n\nThe language in the dataset is English.\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### subtask\\_a\n\n\n* Size of downloaded dataset files: 13.7 MB\n* Size of the generated dataset: 17.4 MB\n\n\nAn example of 'train' looks as follows:", "#### subtask\\_b\n\n\n* Size of downloaded dataset files: 13.7 MB\n* Size of the generated dataset: 17.4 MB\n\n\nAn example of 'train' looks as follows:", "#### subtask\\_c\n\n\n* Size of downloaded dataset files: 13.7 MB\n* Size of the generated dataset: 30.1 MB\n\n\nAn example of 'train' looks as follows:\n\n\nNote: The tag sequence consists of vectors for each token, that encode what the relationship between that token\nand every other token in the sequence is for the first token in each key phrase.", "#### ner\n\n\n* Size of downloaded dataset files: 13.7 MB\n* Size of the generated dataset: 17.4 MB\n\n\nAn example of 'train' looks as follows:", "#### re\n\n\n* Size of downloaded dataset files: 13.7 MB\n* Size of the generated dataset: 16.4 MB\n\n\nAn example of 'train' looks as follows:", "### Data Fields", "#### subtask\\_a\n\n\n* 'id': the instance id of this sentence, a 'string' feature.\n* 'tokens': the list of tokens of this sentence, obtained with spaCy, a 'list' of 'string' features.\n* 'tags': the list of tags of this sentence marking a token as being outside, at the beginning, or inside a key phrase, a 'list' of classification labels.", "#### subtask\\_b\n\n\n* 'id': the instance id of this sentence, a 'string' feature.\n* 'tokens': the list of tokens of this sentence, obtained with spaCy, a 'list' of 'string' features.\n* 'tags': the list of tags of this sentence marking a token as being outside a key phrase, or being part of a material, process or task, a 'list' of classification labels.", "#### subtask\\_c\n\n\n* 'id': the instance id of this sentence, a 'string' feature.\n* 'tokens': the list of tokens of this sentence, obtained with spaCy, a 'list' of 'string' features.\n* 'tags': a vector for each token, that encodes what the relationship between that token and every other token in the sequence is for the first token in each key phrase, a 'list' of a 'list' of a classification label.", "#### ner\n\n\n* 'id': the instance id of this sentence, a 'string' feature.\n* 'tokens': the list of tokens of this sentence, obtained with spaCy, a 'list' of 'string' features.\n* 'tags': the list of ner tags of this sentence, a 'list' of classification labels.", "#### re\n\n\n* 'id': the instance id of this sentence, a 'string' feature.\n* 'token': the list of tokens of this sentence, obtained with spaCy, a 'list' of 'string' features.\n* 'arg1\\_start': the 0-based index of the start token of the relation arg1 mention, an 'ìnt' feature.\n* 'arg1\\_end': the 0-based index of the end token of the relation arg1 mention, exclusive, an 'ìnt' feature.\n* 'arg1\\_type': the key phrase type of the end token of the relation arg1 mention, a 'string' feature.\n* 'arg2\\_start': the 0-based index of the start token of the relation arg2 mention, an 'ìnt' feature.\n* 'arg2\\_end': the 0-based index of the end token of the relation arg2 mention, exclusive, an 'ìnt' feature.\n* 'arg2\\_type': the key phrase type of the relation arg2 mention, a 'string' feature.\n* 'relation': the relation label of this 
instance, a classification label.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @phucdev for adding this dataset." ]
6336d79bd2837d8b278104c852dba600c6abcea6
# Oracle

These are scanned images of imaginative text, similar to Chinese oracles, created by the great artist Meiling Han. This dataset can be fed into a Generative Adversarial Network to produce similar characters for creating modern art.
KokeCacao/oracle
[ "region:us" ]
2023-01-05T15:36:40+00:00
{}
2023-01-05T16:02:50+00:00
[]
[]
TAGS #region-us
# Oracle

These are scanned images of imaginative text, similar to Chinese oracles, created by the great artist Meiling Han. This dataset can be fed into a Generative Adversarial Network to produce similar characters for creating modern art.
[ "# Oracle\n\nThese are scanned images of imaginative text, similar to Chinese oracles, created by the great artist Meiling Han. This dataset can be fed into Generative Adversarial Network to produce similar characters for creating modern art." ]
[ "TAGS\n#region-us \n", "# Oracle\n\nThese are scanned images of imaginative text, similar to Chinese oracles, created by the great artist Meiling Han. This dataset can be fed into Generative Adversarial Network to produce similar characters for creating modern art." ]
925662cc6c2e4577b23143658b507b6b8622512c
# Dataset Card for aeroBERT-NER ## Dataset Description - **Paper:** aeroBERT-NER: Named-Entity Recognition for Aerospace Requirements Engineering using BERT - **Point of Contact:** [email protected] ### Dataset Summary This dataset contains sentences from the aerospace requirements domain. The sentences are tagged for five NER categories (SYS, VAL, ORG, DATETIME, and RES) using the BIO tagging scheme. There are a total of 1432 sentences. The creation of this dataset is aimed at - <br> (1) Making available an **open-source** dataset for aerospace requirements which are often proprietary <br> (2) Fine-tuning language models for **token identification** (NER) specific to the aerospace domain <br> This dataset can be used for training or fine-tuning language models for the identification of mentioned Named-Entities in aerospace texts. ## Dataset Structure The dataset is of the format: ``Sentence-Number * WordPiece-Token * NER-tag`` <br> "*" is used as a delimiter to avoid confusion with commas (",") that occur in the text. The following example shows the dataset structure for Sentence #1431. <br> 1431\*the\*O <br> 1431\*airplane\*B-SYS <br> 1431\*takeoff\*O <br> 1431\*performance\*O <br> 1431\*must\*O <br> 1431\*be\*O <br> 1431\*determined\*O <br> 1431\*for\*O <br> 1431\*climb\*O <br> 1431\*gradients\*O <br> 1431\*.\*O <br> ## Dataset Creation ### Source Data Two types of aerospace texts are used to create the aerospace corpus for fine-tuning BERT: <br> (1) general aerospace texts such as publications by the National Academy of Space Studies Board, and <br> (2) certification requirements from Title 14 CFR. A total of 1432 sentences from the aerospace domain were included in the corpus. <br> ### Importing dataset into Python environment Use the following code chunk to import the dataset into Python environment as a DataFrame. ``` from datasets import load_dataset import pandas as pd dataset = load_dataset("archanatikayatray/aeroBERT-NER") #Converting the dataset into a pandas DataFrame dataset = pd.DataFrame(dataset["train"]["text"]) dataset = dataset[0].str.split('*', expand = True) #Getting the headers from the first row header = dataset.iloc[0] #Excluding the first row since it contains the headers dataset = dataset[1:] #Assigning the header to the DataFrame dataset.columns = header #Viewing the last 10 rows of the annotated dataset dataset.tail(10) ``` ### Annotations #### Annotation process A Subject Matter Expert (SME) was consulted for deciding on the annotation categories. The BIO Tagging scheme was used for annotating the dataset. 
**B** - Beginning of entity <br> **I** - Inside an entity <br> **O** - Outside an entity <br> | Category | NER Tags | Example | | :----: | :----: | :----: | | System | B-SYS, I-SYS | exhaust heat exchangers, powerplant, auxiliary power unit | | Value | B-VAL, I-VAL | 1.2 percent, 400 feet, 10 to 19 passengers | | Date time | B-DATETIME, I-DATETIME | 2013, 2019, May 11,1991 | | Organization | B-ORG, I-ORG | DOD, Ames Research Center, NOAA | | Resource | B-RES, I-RES | Section 25-341, Sections 25-173 through 25-177, Part 23 subpart B | The distribution of the various entities in the corpus is shown below - <br> |NER Tag|Description|Count| | :----: | :----: | :----: | O | Tokens that are not identified as any NE | 37686 | B-SYS | Beginning of a system NE | 1915 | I-SYS | Inside a system NE | 1104 | B-VAL | Beginning of a value NE | 659 | I-VAL | Inside a value NE | 507 | B-DATETIME| Beginning of a date time NE | 147 | I-DATETIME | Inside a date time NE | 63 | B-ORG | Beginning of an organization NE | 302 | I-ORG | Inside a organization NE | 227 | B-RES | Beginning of a resource NE |390 | I-RES | Inside a resource NE | 1033 | ### Limitations (1)The dataset is an imbalanced dataset, given that's how language is (not every word is a Named-Entity). Hence, using ``Accuracy`` as a metric for the model performance is NOT a good idea. The use of Precision, Recall, and F1 scores are suggested for model performance evaluation. (2)This dataset does not contain a test set. Hence, it is suggested that the user split the dataset into training/validation/testing after importing the data into a Python environment. Please refer to the Appendix of the paper for information on the test set. ### Citation Information ``` @Article{aeroBERT-NER, AUTHOR = {Tikayat Ray, Archana and Pinon Fischer, Olivia J. and Mavris, Dimitri N. and White, Ryan T. and Cole, Bjorn F.}, TITLE = {aeroBERT-NER: Named-Entity Recognition for Aerospace Requirements Engineering using BERT}, JOURNAL = {AIAA SCITECH 2023 Forum}, YEAR = {2023}, URL = {https://arc.aiaa.org/doi/10.2514/6.2023-2583}, DOI = {10.2514/6.2023-2583} } @phdthesis{tikayatray_thesis, author = {Tikayat Ray, Archana}, title = {Standardization of Engineering Requirements Using Large Language Models}, school = {Georgia Institute of Technology}, year = {2023}, doi = {10.13140/RG.2.2.17792.40961}, URL = {https://repository.gatech.edu/items/964c73e3-f0a8-487d-a3fa-a0988c840d04} } ```
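
Following the suggestion in the Limitations section, one possible sentence-level split is sketched below. It continues from the DataFrame `dataset` built by the import snippet above; the 80/10/10 proportions and the use of scikit-learn are illustrative assumptions rather than part of the original work.

```python
from sklearn.model_selection import train_test_split

# Sketch only: split at the sentence level so that all tokens of a sentence
# stay in the same partition. `dataset` is the DataFrame produced by the import
# snippet above; its first column holds the sentence number.
sentence_col = dataset.columns[0]
sentence_ids = dataset[sentence_col].unique()

# Assumed 80/10/10 proportions; the dataset ships no official test split.
train_ids, temp_ids = train_test_split(sentence_ids, test_size=0.2, random_state=42)
val_ids, test_ids = train_test_split(temp_ids, test_size=0.5, random_state=42)

train_df = dataset[dataset[sentence_col].isin(train_ids)]
val_df = dataset[dataset[sentence_col].isin(val_ids)]
test_df = dataset[dataset[sentence_col].isin(test_ids)]
print(len(train_df), len(val_df), len(test_df))
```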
archanatikayatray/aeroBERT-NER
[ "task_categories:token-classification", "size_categories:n<1K", "language:en", "license:apache-2.0", "NER", "Aerospace", "ORG", "SYS", "DATETIME", "RESOURCE", "VALUE", "doi:10.57967/hf/0470", "region:us" ]
2023-01-05T15:43:58+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["token-classification"], "pretty_name": "all_text_annotation_NER.txt", "tags": ["NER", "Aerospace", "ORG", "SYS", "DATETIME", "RESOURCE", "VALUE"]}
2023-05-20T21:40:58+00:00
[]
[ "en" ]
TAGS #task_categories-token-classification #size_categories-n<1K #language-English #license-apache-2.0 #NER #Aerospace #ORG #SYS #DATETIME #RESOURCE #VALUE #doi-10.57967/hf/0470 #region-us
Dataset Card for aeroBERT-NER ============================= Dataset Description ------------------- * Paper: aeroBERT-NER: Named-Entity Recognition for Aerospace Requirements Engineering using BERT * Point of Contact: archanatikayatray@URL ### Dataset Summary This dataset contains sentences from the aerospace requirements domain. The sentences are tagged for five NER categories (SYS, VAL, ORG, DATETIME, and RES) using the BIO tagging scheme. There are a total of 1432 sentences. The creation of this dataset is aimed at - (1) Making available an open-source dataset for aerospace requirements which are often proprietary (2) Fine-tuning language models for token identification (NER) specific to the aerospace domain This dataset can be used for training or fine-tuning language models for the identification of mentioned Named-Entities in aerospace texts. Dataset Structure ----------------- The dataset is of the format: ''Sentence-Number \* WordPiece-Token \* NER-tag'' "\*" is used as a delimiter to avoid confusion with commas (",") that occur in the text. The following example shows the dataset structure for Sentence #1431. 1431\*the\*O 1431\*airplane\*B-SYS 1431\*takeoff\*O 1431\*performance\*O 1431\*must\*O 1431\*be\*O 1431\*determined\*O 1431\*for\*O 1431\*climb\*O 1431\*gradients\*O 1431\*.\*O Dataset Creation ---------------- ### Source Data Two types of aerospace texts are used to create the aerospace corpus for fine-tuning BERT: (1) general aerospace texts such as publications by the National Academy of Space Studies Board, and (2) certification requirements from Title 14 CFR. A total of 1432 sentences from the aerospace domain were included in the corpus. ### Importing dataset into Python environment Use the following code chunk to import the dataset into Python environment as a DataFrame. ### Annotations #### Annotation process A Subject Matter Expert (SME) was consulted for deciding on the annotation categories. The BIO Tagging scheme was used for annotating the dataset. B - Beginning of entity I - Inside an entity O - Outside an entity The distribution of the various entities in the corpus is shown below - ### Limitations (1)The dataset is an imbalanced dataset, given that's how language is (not every word is a Named-Entity). Hence, using ''Accuracy'' as a metric for the model performance is NOT a good idea. The use of Precision, Recall, and F1 scores are suggested for model performance evaluation. (2)This dataset does not contain a test set. Hence, it is suggested that the user split the dataset into training/validation/testing after importing the data into a Python environment. Please refer to the Appendix of the paper for information on the test set.
[ "### Dataset Summary\n\n\nThis dataset contains sentences from the aerospace requirements domain. The sentences are tagged for five NER categories (SYS, VAL, ORG, DATETIME, and RES) using the BIO tagging scheme.\nThere are a total of 1432 sentences. The creation of this dataset is aimed at - \n\n(1) Making available an open-source dataset for aerospace requirements which are often proprietary \n\n(2) Fine-tuning language models for token identification (NER) specific to the aerospace domain \n\n\n\nThis dataset can be used for training or fine-tuning language models for the identification of mentioned Named-Entities in aerospace texts.\n\n\nDataset Structure\n-----------------\n\n\nThe dataset is of the format: ''Sentence-Number \\* WordPiece-Token \\* NER-tag'' \n\n\n\n\"\\*\" is used as a delimiter to avoid confusion with commas (\",\") that occur in the text. The following example shows the dataset structure for Sentence #1431. \n\n\n\n1431\\*the\\*O \n\n1431\\*airplane\\*B-SYS \n\n1431\\*takeoff\\*O \n\n1431\\*performance\\*O \n\n1431\\*must\\*O \n\n1431\\*be\\*O \n\n1431\\*determined\\*O \n\n1431\\*for\\*O \n\n1431\\*climb\\*O \n\n1431\\*gradients\\*O \n\n1431\\*.\\*O \n\n\n\nDataset Creation\n----------------", "### Source Data\n\n\nTwo types of aerospace texts are used to create the aerospace corpus for fine-tuning BERT: \n\n(1) general aerospace texts such as publications by the National Academy of Space Studies Board, and \n\n(2) certification requirements from Title 14 CFR. A total of 1432 sentences from the aerospace domain were included in the corpus.", "### Importing dataset into Python environment\n\n\nUse the following code chunk to import the dataset into Python environment as a DataFrame.", "### Annotations", "#### Annotation process\n\n\nA Subject Matter Expert (SME) was consulted for deciding on the annotation categories. The BIO Tagging scheme was used for annotating the dataset.\n\n\nB - Beginning of entity \n\nI - Inside an entity \n\nO - Outside an entity \n\n\n\n\nThe distribution of the various entities in the corpus is shown below -", "### Limitations\n\n\n(1)The dataset is an imbalanced dataset, given that's how language is (not every word is a Named-Entity). Hence, using ''Accuracy'' as a metric for the model performance is\nNOT a good idea. The use of Precision, Recall, and F1 scores are suggested for model performance evaluation.\n\n\n(2)This dataset does not contain a test set. Hence, it is suggested that the user split the dataset into training/validation/testing after importing the data into a Python environment.\nPlease refer to the Appendix of the paper for information on the test set." ]
[ "TAGS\n#task_categories-token-classification #size_categories-n<1K #language-English #license-apache-2.0 #NER #Aerospace #ORG #SYS #DATETIME #RESOURCE #VALUE #doi-10.57967/hf/0470 #region-us \n", "### Dataset Summary\n\n\nThis dataset contains sentences from the aerospace requirements domain. The sentences are tagged for five NER categories (SYS, VAL, ORG, DATETIME, and RES) using the BIO tagging scheme.\nThere are a total of 1432 sentences. The creation of this dataset is aimed at - \n\n(1) Making available an open-source dataset for aerospace requirements which are often proprietary \n\n(2) Fine-tuning language models for token identification (NER) specific to the aerospace domain \n\n\n\nThis dataset can be used for training or fine-tuning language models for the identification of mentioned Named-Entities in aerospace texts.\n\n\nDataset Structure\n-----------------\n\n\nThe dataset is of the format: ''Sentence-Number \\* WordPiece-Token \\* NER-tag'' \n\n\n\n\"\\*\" is used as a delimiter to avoid confusion with commas (\",\") that occur in the text. The following example shows the dataset structure for Sentence #1431. \n\n\n\n1431\\*the\\*O \n\n1431\\*airplane\\*B-SYS \n\n1431\\*takeoff\\*O \n\n1431\\*performance\\*O \n\n1431\\*must\\*O \n\n1431\\*be\\*O \n\n1431\\*determined\\*O \n\n1431\\*for\\*O \n\n1431\\*climb\\*O \n\n1431\\*gradients\\*O \n\n1431\\*.\\*O \n\n\n\nDataset Creation\n----------------", "### Source Data\n\n\nTwo types of aerospace texts are used to create the aerospace corpus for fine-tuning BERT: \n\n(1) general aerospace texts such as publications by the National Academy of Space Studies Board, and \n\n(2) certification requirements from Title 14 CFR. A total of 1432 sentences from the aerospace domain were included in the corpus.", "### Importing dataset into Python environment\n\n\nUse the following code chunk to import the dataset into Python environment as a DataFrame.", "### Annotations", "#### Annotation process\n\n\nA Subject Matter Expert (SME) was consulted for deciding on the annotation categories. The BIO Tagging scheme was used for annotating the dataset.\n\n\nB - Beginning of entity \n\nI - Inside an entity \n\nO - Outside an entity \n\n\n\n\nThe distribution of the various entities in the corpus is shown below -", "### Limitations\n\n\n(1)The dataset is an imbalanced dataset, given that's how language is (not every word is a Named-Entity). Hence, using ''Accuracy'' as a metric for the model performance is\nNOT a good idea. The use of Precision, Recall, and F1 scores are suggested for model performance evaluation.\n\n\n(2)This dataset does not contain a test set. Hence, it is suggested that the user split the dataset into training/validation/testing after importing the data into a Python environment.\nPlease refer to the Appendix of the paper for information on the test set." ]
b801fd035e0f2eeda6d356db4a485791fd3c64d7
# Dataset Card for UIBert ## Dataset Description - **Homepage:** https://github.com/google-research-datasets/uibert - **Repository:** https://github.com/google-research-datasets/uibert - **Paper:** https://arxiv.org/abs/2107.13731 - **Leaderboard:** - UIBert: https://arxiv.org/abs/2107.13731 - Pix2Struct: https://arxiv.org/pdf/2210.03347 ### Dataset Summary This is a Hugging Face formatted dataset derived from the [Google UIBert dataset](https://github.com/google-research-datasets/uibert), which is in turn derived from the [RICO dataset](https://interactionmining.org/rico). ### Supported Tasks and Leaderboards - UI Understanding - UI Referring Expressions - UI Action Automation ### Languages - English ## Dataset Structure - `screenshot`: blob of pixels. - `prompt`: Prompt referring to a UI component with an optional action verb. For example "click on search button next to menu drawer." - `target_bounding_box`: Bounding box of targeted UI components. `[xmin, ymin, xmax, ymax]` ### Data Splits - train: 15K samples - validation: 471 samples - test: 565 samples ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
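
For reference, the fields described above can be read back as follows; the repository id comes from this card, while the JSON-style parsing of `target_bounding_box` (stored as a string) is an assumption that may need adjusting to the actual serialization.

```python
import json
from datasets import load_dataset

# Load the validation split and inspect one referring-expression sample.
dataset = load_dataset("ivelin/ui_refexp", split="validation")

sample = dataset[0]
print(sample["prompt"])                           # e.g. "click on search button next to menu drawer."
bbox = json.loads(sample["target_bounding_box"])  # assumes a JSON-style "[xmin, ymin, xmax, ymax]" string
print(bbox)
sample["screenshot"].save("screenshot.png")       # the Image feature decodes to a PIL image
```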
ivelin/ui_refexp
[ "task_categories:image-to-text", "size_categories:10K<n<100K", "language:en", "license:cc-by-4.0", "ui-referring-expression", "ui-refexp", "arxiv:2107.13731", "arxiv:2210.03347", "region:us" ]
2023-01-05T16:32:50+00:00
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["image-to-text"], "pretty_name": "UI understanding", "tags": ["ui-referring-expression", "ui-refexp"], "dataset_info": {"features": [{"name": "screenshot", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "target_bounding_box", "dtype": "string"}], "config_name": "ui_refexp", "splits": [{"name": "train", "num_bytes": 562037265, "num_examples": 15624}, {"name": "validation", "num_bytes": 60399225, "num_examples": 471}, {"name": "test", "num_bytes": 69073969, "num_examples": 565}], "download_size": 6515012176, "dataset_size": 691510459}}
2023-01-08T03:33:10+00:00
[ "2107.13731", "2210.03347" ]
[ "en" ]
TAGS #task_categories-image-to-text #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #ui-referring-expression #ui-refexp #arxiv-2107.13731 #arxiv-2210.03347 #region-us
# Dataset Card for UIBert ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Leaderboard: - UIBert: URL - Pix2Struct: URL ### Dataset Summary This is a Hugging Face formatted dataset derived from the Google UIBert dataset, which is in turn derived from the RICO dataset. ### Supported Tasks and Leaderboards - UI Understanding - UI Referring Expressions - UI Action Automation ### Languages - English ## Dataset Structure - 'screenshot': blob of pixels. - 'prompt': Prompt referring to a UI component with an optional action verb. For example "click on search button next to menu drawer." - 'target_bounding_box': Bounding box of targeted UI components. '[xmin, ymin, xmax, ymax]' ### Data Splits - train: 15K samples - validation: 471 samples - test: 565 samples ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for UIBert", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n - UIBert: URL\n - Pix2Struct: URL", "### Dataset Summary\n\nThis is a Hugging Face formatted dataset derived from the Google UIBert dataset, which is in turn derived from the RICO dataset.", "### Supported Tasks and Leaderboards\n\n- UI Understanding\n- UI Referring Expressions\n- UI Action Automation", "### Languages\n\n- English", "## Dataset Structure\n\n- 'screenshot': blob of pixels.\n- 'prompt': Prompt referring to a UI component with an optional action verb. For example \"click on search button next to menu drawer.\"\n- 'target_bounding_box': Bounding box of targeted UI components. '[xmin, ymin, xmax, ymax]'", "### Data Splits\n\n- train: 15K samples\n- validation: 471 samples\n- test: 565 samples", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#task_categories-image-to-text #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #ui-referring-expression #ui-refexp #arxiv-2107.13731 #arxiv-2210.03347 #region-us \n", "# Dataset Card for UIBert", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n - UIBert: URL\n - Pix2Struct: URL", "### Dataset Summary\n\nThis is a Hugging Face formatted dataset derived from the Google UIBert dataset, which is in turn derived from the RICO dataset.", "### Supported Tasks and Leaderboards\n\n- UI Understanding\n- UI Referring Expressions\n- UI Action Automation", "### Languages\n\n- English", "## Dataset Structure\n\n- 'screenshot': blob of pixels.\n- 'prompt': Prompt referring to a UI component with an optional action verb. For example \"click on search button next to menu drawer.\"\n- 'target_bounding_box': Bounding box of targeted UI components. '[xmin, ymin, xmax, ymax]'", "### Data Splits\n\n- train: 15K samples\n- validation: 471 samples\n- test: 565 samples", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
6e6baf9444ff16c5a1131c73fb510adc73319b3a
# Dataset Card for "subj_multi" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bstrai/subj_multi
[ "region:us" ]
2023-01-05T16:35:39+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "objective", "1": "subjective"}}}}, {"name": "language", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 2914488, "num_examples": 16000}, {"name": "train", "num_bytes": 11518066, "num_examples": 64000}], "download_size": 8870704, "dataset_size": 14432554}}
2023-01-17T17:17:15+00:00
[]
[]
TAGS #region-us
# Dataset Card for "subj_multi" More Information needed
[ "# Dataset Card for \"subj_multi\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"subj_multi\"\n\nMore Information needed" ]
0f29b41e2ee582182ccfc9413342d7f3d411c67b
# AutoTrain Dataset for project: code-explainer ## Dataset Description This dataset has been automatically processed by AutoTrain for project code-explainer. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "def upload_to_s3(local_file, bucket, s3_file):\n ## This function is responsible for uploading the file into the S3 bucket using the specified credentials. \n s3 = boto3.client('s3', aws_access_key_id=ACCESS_KEY,\n aws_secret_access_key=SECRET_KEY)\n try:\n s3.upload_file(local_file, bucket, s3_file)\n print(\"Upload Successful\")\n return True\n except FileNotFoundError:\n print(\"The file was not found\")\n return False\n except NoCredentialsError:\n print(\"Credentials not available\")\n return False\n\n\nresult = upload_to_s3(LOCAL_FILE, BUCKET_NAME, S3_FILE_NAME)", "target": "Create a function upload_to_s3 the fumction is responsible for uploading the file into the s3 bucket to do so\n1. First creating a client object that will be used to interact with the S3 service using the boto3\n(Boto3 makes it easy to integrate your Python application, library, or script with AWS services including Amazon S3, Amazon EC2, Amazon DynamoDB, and more.)\n2. We make a use of try/catch block to upload the images in s3 bucket \n3. To upload the image we use the upload_file function of s3 client if the upload is successful will return the True with print statement.\n4. In case of exception first is FileNotFoundError will return the false.\n\n(Any message with the contents FileNotFoundError indicates that Python cannot find the file you are referencing. Python raises this error because your program cannot continue running without being able to access the file to which your program refers. )\n\n5. The next except block is NoCredentialsError will return the False along with print statement\n\n(The NoCredentialsError is an error encountered when using the Boto3 library to interface with Amazon Web Services (AWS). Specifically, this error is encountered when your AWS credentials are missing, invalid, or cannot be located by your Python script.)\n", "feat_language": "python", "feat_status": "annotated", "feat_user_created": "6888d00e-fda2-4061-9038-7a86b12c9d9b" }, { "text": "def main(username):\n banner()\n '''main function accept instagram username\n return an dictionary object containging profile deatils\n '''\n\n url = \"https://www.instagram.com/{}/?hl=en\".format(username)\n page = requests.get(url)\n tree = html.fromstring(page.content)\n data = tree.xpath('//meta[starts-with(@name,\"description\")]/@content')\n\n if data:\n data = tree.xpath('//meta[starts-with(@name,\"description\")]/@content')\n data = data[0].split(', ')\n followers = data[0][:-9].strip()\n following = data[1][:-9].strip()\n posts = re.findall(r'\\d+[,]*', data[2])[0]\n name = re.findall(r'name\":\"([^\"]+)\"', page.text)[0]\n aboutinfo = re.findall(r'\"description\":\"([^\"]+)\"', page.text)[0]\n instagram_profile = {\n 'success': True,\n 'profile': {\n 'name': name,\n 'profileurl': url,\n 'username': username,\n 'followers': followers,\n 'following': following,\n 'posts': posts,\n 'aboutinfo': aboutinfo\n }\n }\n else:\n instagram_profile = {\n 'success': False,\n 'profile': {}\n }\n return instagram_profile\n", "target": "Create a function main that accepts an Instagram username and returns a dictionary object containing profile details.\n1. 
The code first requests the URL of the user's profile from Instagram, then it parses out all of the information on that page into variables.\n2. Then xpath is used to find all tags within this HTML document starting with \"description\" and splitting them by commas until there are no more results found.\n3 we use the findall function of re module and find the post name info and store it in the dictionary and return the dictionary.\n4. Else will just return the dictionary with success is False.\n", "feat_language": "python", "feat_status": "annotated", "feat_user_created": "6888d00e-fda2-4061-9038-7a86b12c9d9b" } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "Value(dtype='string', id=None)", "feat_language": "Value(dtype='string', id=None)", "feat_status": "Value(dtype='string', id=None)", "feat_user_created": "Value(dtype='string', id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 92 | | valid | 23 |
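
Assuming the repository `sagard21/autotrain-data-code-explainer` loads directly with `datasets` (and noting that the validation split may be exposed as either `valid` or `validation`), a quick inspection sketch:

```python
from datasets import load_dataset

# Sketch: load the AutoTrain-formatted pairs and print one (code, explanation) example.
dataset = load_dataset("sagard21/autotrain-data-code-explainer")
print(dataset)  # shows the available splits and their sizes

example = dataset["train"][0]
print(example["text"][:300])    # the source code snippet to be explained
print(example["target"][:300])  # the human-written explanation
```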
sagard21/autotrain-data-code-explainer
[ "task_categories:summarization", "region:us" ]
2023-01-05T18:02:20+00:00
{"task_categories": ["summarization"]}
2023-01-05T18:03:02+00:00
[]
[]
TAGS #task_categories-summarization #region-us
AutoTrain Dataset for project: code-explainer ============================================= Dataset Description ------------------- This dataset has been automatically processed by AutoTrain for project code-explainer. ### Languages The BCP-47 code for the dataset's language is unk. Dataset Structure ----------------- ### Data Instances A sample from this dataset looks as follows: ### Dataset Fields The dataset has the following fields (also called "features"): ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow:
[ "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
[ "TAGS\n#task_categories-summarization #region-us \n", "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
f62a921ce2d9906c5d41db2631381fcc6a9e2c06
# Dataset Card for "hearthstone-cards-512" # Not affiliated in any way with Blizzard or Hearthstone # Please note that this entire dataset contains copyrighted material
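A minimal usage sketch follows; the split name and the `text`/`image` feature names are taken from the dataset metadata, and it assumes the repo loads directly with `datasets`.

```python
# Sketch: browse the 512x512 card images together with their OCR'd text field.
from datasets import load_dataset

ds = load_dataset("Norod78/hearthstone-cards-512", split="train")

sample = ds[0]
print(sample["text"])   # OCR text extracted from the card
sample["image"].show()  # PIL image, resized to 512x512
```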
Norod78/hearthstone-cards-512
[ "task_categories:text-to-image", "size_categories:n<10K", "blizzard", "hearthstone", "game cards", "region:us" ]
2023-01-05T18:41:08+00:00
{"size_categories": ["n<10K"], "task_categories": ["text-to-image"], "pretty_name": "Blizzard Hearthstone cards, resized to 512x512 with OCR text field", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 230518521.36, "num_examples": 2952}], "download_size": 230628184, "dataset_size": 230518521.36}, "tags": ["blizzard", "hearthstone", "game cards"]}
2023-01-05T18:48:19+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<10K #blizzard #hearthstone #game cards #region-us
# Dataset Card for "hearthstone-cards-512" # Not affiliated in any way with Blizzard or Hearthstone # Please note that this entire dataset contains copyrighted material
[ "# Dataset Card for \"hearthstone-cards-512\"", "# Not affiliated in any way with Blizzard or Hearthstone", "# Please note that this entire dataset contains copyrighted material" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<10K #blizzard #hearthstone #game cards #region-us \n", "# Dataset Card for \"hearthstone-cards-512\"", "# Not affiliated in any way with Blizzard or Hearthstone", "# Please note that this entire dataset contains copyrighted material" ]
81483a2eb455bc5c5afa925f8ab2dc9976b99ff6
Imppres, but it works https://github.com/facebookresearch/Imppres ``` @inproceedings{jeretic-etal-2020-natural, title = "Are Natural Language Inference Models {IMPPRESsive}? {L}earning {IMPlicature} and {PRESupposition}", author = "Jereti\v{c}, Paloma and Warstadt, Alex and Bhooshan, Suvrat and Williams, Adina", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.acl-main.768", doi = "10.18653/v1/2020.acl-main.768", pages = "8690--8705", abstract = "Natural language inference (NLI) is an increasingly important task for natural language understanding, which requires one to infer whether a sentence entails another. However, the ability of NLI models to make pragmatic inferences remains understudied. We create an IMPlicature and PRESupposition diagnostic dataset (IMPPRES), consisting of 32K semi-automatically generated sentence pairs illustrating well-studied pragmatic inference types. We use IMPPRES to evaluate whether BERT, InferSent, and BOW NLI models trained on MultiNLI (Williams et al., 2018) learn to make pragmatic inferences. Although MultiNLI appears to contain very few pairs illustrating these inference types, we find that BERT learns to draw pragmatic inferences. It reliably treats scalar implicatures triggered by {``}some{''} as entailments. For some presupposition triggers like {``}only{''}, BERT reliably recognizes the presupposition as an entailment, even when the trigger is embedded under an entailment canceling operator like negation. BOW and InferSent show weaker evidence of pragmatic reasoning. We conclude that NLI training encourages models to learn some, but not all, pragmatic inferences.", } ```
tasksource/imppres
[ "task_categories:text-classification", "task_ids:natural-language-inference", "language:en", "license:apache-2.0", "region:us" ]
2023-01-05T20:14:45+00:00
{"language": ["en"], "license": "apache-2.0", "task_categories": ["text-classification"], "task_ids": ["natural-language-inference"]}
2023-06-21T11:52:43+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-natural-language-inference #language-English #license-apache-2.0 #region-us
Imppres, but it works URL
[]
[ "TAGS\n#task_categories-text-classification #task_ids-natural-language-inference #language-English #license-apache-2.0 #region-us \n" ]
03f4167589d129223f29c61e324311c80df56b8e
# Dataset Card for "dataset_glstxt" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
fuyulinh04/dataset_glstxt
[ "region:us" ]
2023-01-05T23:20:43+00:00
{"dataset_info": {"features": [{"name": "gloss", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11227076.8, "num_examples": 73696}, {"name": "test", "num_bytes": 2806769.2, "num_examples": 18424}], "download_size": 8513566, "dataset_size": 14033846.0}}
2023-01-05T23:21:14+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dataset_glstxt" More Information needed
[ "# Dataset Card for \"dataset_glstxt\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dataset_glstxt\"\n\nMore Information needed" ]
abc389e1efad2be4e2d214b7af0d775217a3a188
# Character Embedding - Princess Tutu/Ahiru ![princess_tutu_showcase.png](https://s3.amazonaws.com/moonup/production/uploads/1672973706523-6366fabccbf2cf32918c2830.png) ## Usage To use an embedding, download the .pt file and place it in "\stable-diffusion-webui\embeddings". In your prompt, write ```"princess_tutu-6500"```. ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
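If you work with `diffusers` rather than the webui, a rough equivalent is sketched below. The base checkpoint and the `.pt` file name are assumptions (the prompt token suggests the file is named `princess_tutu-6500.pt`, but check the repository file listing), so treat this as a starting point rather than a verified recipe.

```python
# Hedged sketch: loading the webui-style textual-inversion embedding with diffusers.
import torch
from diffusers import StableDiffusionPipeline

# Assumed SD 1.x base model; swap in whichever checkpoint the embedding was trained against.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Assumed weight file name, inferred from the prompt token above.
pipe.load_textual_inversion(
    "kxly/princess_tutu",
    weight_name="princess_tutu-6500.pt",
    token="princess_tutu-6500",
)

image = pipe("princess_tutu-6500 dancing by a lake, anime style").images[0]
image.save("princess_tutu.png")
```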
kxly/princess_tutu
[ "language:en", "license:creativeml-openrail-m", "stable-diffusion", "text-to-image", "image-to-image", "region:us" ]
2023-01-06T02:00:33+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "pretty_name": "Princess Tutu", "thumbnail": "https://huggingface.co/datasets/kxly/princess_tutu/blob/main/princess_tutu_showcase.png", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false}
2023-01-06T02:55:47+00:00
[]
[ "en" ]
TAGS #language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #image-to-image #region-us
# Character Embedding - Princess Tutu/Ahiru !princess_tutu_showcase.png ## Usage To use an embedding, download the .pt file and place it in "\stable-diffusion-webui\embeddings". In your prompt, write . ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license here
[ "# Character Embedding - Princess Tutu/Ahiru\n\n!princess_tutu_showcase.png", "## Usage\n\nTo use an embedding, download the .pt file and place it in \"\\stable-diffusion-webui\\embeddings\".\n\nIn your prompt, write .", "## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies: \n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content \n2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)\nPlease read the full license here" ]
[ "TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #image-to-image #region-us \n", "# Character Embedding - Princess Tutu/Ahiru\n\n!princess_tutu_showcase.png", "## Usage\n\nTo use an embedding, download the .pt file and place it in \"\\stable-diffusion-webui\\embeddings\".\n\nIn your prompt, write .", "## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies: \n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content \n2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)\nPlease read the full license here" ]
38ff03d68347aaf694e598c50cb164191f50f61c
# Dataset Card for DrugProt ## Dataset Description - **Homepage:** https://biocreative.bioinformatics.udel.edu/tasks/biocreative-vii/track-1/ - **Pubmed:** True - **Public:** True - **Tasks:** NER,RE The DrugProt corpus consists of a) expert-labelled chemical and gene mentions, and (b) all binary relationships between them corresponding to a specific set of biologically relevant relation types. The corpus was introduced in context of the BioCreative VII Track 1 (Text mining drug and chemical-protein interactions). ## Citation Information ``` @inproceedings{miranda2021overview, title={Overview of DrugProt BioCreative VII track: quality evaluation and large scale text mining of \ drug-gene/protein relations}, author={Miranda, Antonio and Mehryary, Farrokh and Luoma, Jouni and Pyysalo, Sampo and Valencia, Alfonso \ and Krallinger, Martin}, booktitle={Proceedings of the seventh BioCreative challenge evaluation workshop}, year={2021} } ```
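A hedged loading example follows; the configuration name is a guess based on the usual BigBIO naming scheme (`<dataset>_bigbio_kb`), and the field names reflect the generic BigBIO knowledge-base schema rather than anything stated on this card.

```python
# Sketch: load DrugProt via the BigBIO loader and look at a few annotations.
# Config name "drugprot_bigbio_kb" is an assumption (standard BigBIO convention).
from datasets import load_dataset

drugprot = load_dataset("bigbio/drugprot", name="drugprot_bigbio_kb")

doc = drugprot["train"][0]
print(doc["entities"][:3])   # chemical / gene-protein mentions
print(doc["relations"][:3])  # typed drug-gene/protein relations
```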
bigbio/drugprot
[ "multilinguality:monolingual", "language:en", "license:cc-by-4.0", "region:us" ]
2023-01-06T03:27:49+00:00
{"language": ["en"], "license": "cc-by-4.0", "multilinguality": "monolingual", "pretty_name": "DrugProt", "bigbio_language": ["English"], "bigbio_license_shortname": "CC_BY_4p0", "homepage": "https://biocreative.bioinformatics.udel.edu/tasks/biocreative-vii/track-1/", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "RELATION_EXTRACTION"]}
2023-01-06T03:30:02+00:00
[]
[ "en" ]
TAGS #multilinguality-monolingual #language-English #license-cc-by-4.0 #region-us
# Dataset Card for DrugProt ## Dataset Description - Homepage: URL - Pubmed: True - Public: True - Tasks: NER,RE The DrugProt corpus consists of a) expert-labelled chemical and gene mentions, and (b) all binary relationships between them corresponding to a specific set of biologically relevant relation types. The corpus was introduced in context of the BioCreative VII Track 1 (Text mining drug and chemical-protein interactions).
[ "# Dataset Card for DrugProt", "## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,RE\n\n\nThe DrugProt corpus consists of a) expert-labelled chemical and gene mentions, and (b) all binary relationships\nbetween them corresponding to a specific set of biologically relevant relation types. The corpus was introduced\nin context of the BioCreative VII Track 1 (Text mining drug and chemical-protein interactions)." ]
[ "TAGS\n#multilinguality-monolingual #language-English #license-cc-by-4.0 #region-us \n", "# Dataset Card for DrugProt", "## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,RE\n\n\nThe DrugProt corpus consists of a) expert-labelled chemical and gene mentions, and (b) all binary relationships\nbetween them corresponding to a specific set of biologically relevant relation types. The corpus was introduced\nin context of the BioCreative VII Track 1 (Text mining drug and chemical-protein interactions)." ]
970237b9a7497de2e3a925113b8c20be87a3abf5
# Dataset Card for CPI ## Dataset Description - **Homepage:** https://github.com/KerstenDoering/CPI-Pipeline - **Pubmed:** True - **Public:** True - **Tasks:** NER,NED,RE The compound-protein relationship (CPI) dataset consists of 2,613 sentences from abstracts containing annotations of proteins, small molecules, and their relationships. ## Citation Information ``` @article{doring2020automated, title={Automated recognition of functional compound-protein relationships in literature}, author={D{\"o}ring, Kersten and Qaseem, Ammar and Becer, Michael and Li, Jianyu and Mishra, Pankaj and Gao, Mingjie and Kirchner, Pascal and Sauter, Florian and Telukunta, Kiran K and Moumbock, Aur{\'e}lien FA and others}, journal={Plos one}, volume={15}, number={3}, pages={e0220925}, year={2020}, publisher={Public Library of Science San Francisco, CA USA} } ```
bigbio/cpi
[ "multilinguality:monolingual", "language:en", "license:other", "region:us" ]
2023-01-06T03:44:03+00:00
{"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "CPI", "bigbio_language": ["English"], "bigbio_license_shortname": "ISC", "homepage": "https://github.com/KerstenDoering/CPI-Pipeline", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "NAMED_ENTITY_DISAMBIGUATION", "RELATION_EXTRACTION"]}
2023-01-06T03:46:05+00:00
[]
[ "en" ]
TAGS #multilinguality-monolingual #language-English #license-other #region-us
# Dataset Card for CPI ## Dataset Description - Homepage: URL - Pubmed: True - Public: True - Tasks: NER,NED,RE The compound-protein relationship (CPI) dataset consists of 2,613 sentences from abstracts containing annotations of proteins, small molecules, and their relationships.
[ "# Dataset Card for CPI", "## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED,RE\n\n\nThe compound-protein relationship (CPI) dataset consists of 2,613 sentences \nfrom abstracts containing annotations of proteins, small molecules, and their \nrelationships." ]
[ "TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n", "# Dataset Card for CPI", "## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED,RE\n\n\nThe compound-protein relationship (CPI) dataset consists of 2,613 sentences \nfrom abstracts containing annotations of proteins, small molecules, and their \nrelationships." ]
0294b3bd3bfc4586f9e0be72ff5218deb032f8e0
# Dataset Card for "arxiv_mini" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
xvjiarui/arxiv_mini
[ "region:us" ]
2023-01-06T03:55:50+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3966992.0, "num_examples": 11}, {"name": "validation", "num_bytes": 7430590.0, "num_examples": 21}], "download_size": 11396049, "dataset_size": 11397582.0}}
2023-01-06T03:56:02+00:00
[]
[]
TAGS #region-us
# Dataset Card for "arxiv_mini" More Information needed
[ "# Dataset Card for \"arxiv_mini\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"arxiv_mini\"\n\nMore Information needed" ]
7b26911122fc049fec6a89a3e8b8d59f41e9fafe
# Dataset Card for "dreambooth-hackathon-images-srkman-2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Xhaheen/dreambooth-hackathon-images-srkman-2
[ "region:us" ]
2023-01-06T04:14:08+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 4082680.0, "num_examples": 20}], "download_size": 4081453, "dataset_size": 4082680.0}}
2023-01-06T04:14:11+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dreambooth-hackathon-images-srkman-2" More Information needed
[ "# Dataset Card for \"dreambooth-hackathon-images-srkman-2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dreambooth-hackathon-images-srkman-2\"\n\nMore Information needed" ]
c06d338555dc45ca0beeaa1359170e3464a52c8d
# Dataset Card for "blah" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
deepaksingh/blah
[ "region:us" ]
2023-01-06T04:35:21+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "10k", "1": "2.5k", "2": "5k"}}}}], "splits": [{"name": "train", "num_bytes": 888512750.0, "num_examples": 348}], "download_size": 888503946, "dataset_size": 888512750.0}}
2023-01-06T04:39:43+00:00
[]
[]
TAGS #region-us
# Dataset Card for "blah" More Information needed
[ "# Dataset Card for \"blah\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"blah\"\n\nMore Information needed" ]
4ae21f679f3b2fdb2102f04d9ef104bbdc8714b9
# Green patents dataset - num_rows: 9145 - features: [title, label] - label: 0, 1 The dataset contains patent titles that are labeled as 1 (="green") and 0 (="not green"). "green" patent titles were gathered by searching for CPC class "Y02" with Google Patents (query: "status:APPLICATION type:PATENT (Y02) country:EP,US", 05/01/2023). "not green" patent titles are derived from the [HUPD dataset](https://huggingface.co/datasets/HUPD/hupd) (random choice of 5000 titles). We could not find any patents in HUPD assigned to any CPC class starting with "Y".
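A short, hedged example of pulling the titles into a classification experiment is given below; the card does not name a split, so a single "train" split is assumed.

```python
# Sketch: load the patent titles and check the label balance (1 = "green").
# A single "train" split is assumed; adjust if the repo defines explicit splits.
from collections import Counter
from datasets import load_dataset

ds = load_dataset("cwinkler/green_patents", split="train")

print(ds[0])                 # e.g. {"title": "...", "label": 0 or 1}
print(Counter(ds["label"]))  # rough class balance over the 9,145 rows
```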
cwinkler/green_patents
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:en", "region:us" ]
2023-01-06T06:12:33+00:00
{"language": ["en"], "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"]}
2023-01-08T09:16:25+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #size_categories-1K<n<10K #language-English #region-us
# Green patents dataset - num_rows: 9145 - features: [title, label] - label: 0, 1 The dataset contains patent titles that are labeled as 1 (="green") and 0 (="not green"). "green" patent titles were gathered by searching for CPC class "Y02" with Google Patents (query: "status:APPLICATION type:PATENT (Y02) country:EP,US", 05/01/2023). "not green" patent titles are derived from the HUPD dataset (random choice of 5000 titles). We could not find any patents in HUPD assigned to any CPC class starting with "Y".
[ "# Green patents dataset\n\n- num_rows: 9145\n- features: [title, label]\n- label: 0, 1\n\nThe dataset contains patent titles that are labeled as 1 (=\"green\") and 0 (=\"not green\").\n\n\"green\" patent titles were gathered by searching for CPC class \"Y02\" with Google Patents (query: \"status:APPLICATION type:PATENT (Y02) country:EP,US\", 05/01/2023).\n\n\"not green\" patent titles are derived from the HUPD dataset (random choice of 5000 titles). We could not find any patents in HUPD assigned to any CPC class starting with \"Y\"." ]
[ "TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-English #region-us \n", "# Green patents dataset\n\n- num_rows: 9145\n- features: [title, label]\n- label: 0, 1\n\nThe dataset contains patent titles that are labeled as 1 (=\"green\") and 0 (=\"not green\").\n\n\"green\" patent titles were gathered by searching for CPC class \"Y02\" with Google Patents (query: \"status:APPLICATION type:PATENT (Y02) country:EP,US\", 05/01/2023).\n\n\"not green\" patent titles are derived from the HUPD dataset (random choice of 5000 titles). We could not find any patents in HUPD assigned to any CPC class starting with \"Y\"." ]
d591b39ccd6886510e7b1957542c7855cc1b81c8
# Dataset Card for "dreambooth-hackathon-owczarek" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
misza222/dreambooth-hackathon-owczarek
[ "region:us" ]
2023-01-06T06:21:34+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3487329.0, "num_examples": 16}], "download_size": 3488676, "dataset_size": 3487329.0}}
2023-01-06T06:21:38+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dreambooth-hackathon-owczarek" More Information needed
[ "# Dataset Card for \"dreambooth-hackathon-owczarek\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dreambooth-hackathon-owczarek\"\n\nMore Information needed" ]
26fbfb87fab0216775081b969468814000ea1b70
# Dataset Card for "bookcorpus_compact_1024_shard0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
saibo/bookcorpus_compact_1024_shard0_of_10
[ "region:us" ]
2023-01-06T07:01:59+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "concept_with_offset", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 738086319, "num_examples": 61605}], "download_size": 371729131, "dataset_size": 738086319}}
2023-01-06T07:02:33+00:00
[]
[]
TAGS #region-us
# Dataset Card for "bookcorpus_compact_1024_shard0" More Information needed
[ "# Dataset Card for \"bookcorpus_compact_1024_shard0\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"bookcorpus_compact_1024_shard0\"\n\nMore Information needed" ]
1022bb8dace895db459f90f31ad27f486d80e13e
# Dataset Card for HunSum-1 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) ## Dataset Description ### Dataset Summary The HunSum-1 Dataset is a Hungarian-language dataset containing over 1.1M unique news articles with lead and other metadata. The dataset contains articles from 9 major Hungarian news websites. ### Supported Tasks and Leaderboards - 'summarization' - 'title generation' ## Dataset Structure ### Data Fields - `uuid`: a string containing the unique id - `article`: a string containing the body of the news article - `lead`: a string containing the lead of the article - `title`: a string containing the title of the article - `url`: a string containing the URL for the article - `domain`: a string containing the domain of the url - `date_of_creation`: a timestamp containing the date when the article was created - `tags`: a sequence containing the tags of the article ### Data Splits The HunSum-1 dataset has 3 splits: _train_, _validation_, and _test_. | Dataset Split | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 1,144,255 | | Validation | 1996 | | Test | 1996 | ## Citation If you use our dataset, please cite the following paper: ``` @inproceedings {HunSum-1, title = {{HunSum-1: an Abstractive Summarization Dataset for Hungarian}}, booktitle = {XIX. Magyar Számítógépes Nyelvészeti Konferencia (MSZNY 2023)}, year = {2023}, publisher = {Szegedi Tudományegyetem, Informatikai Intézet}, address = {Szeged, Magyarország}, author = {Barta, Botond and Lakatos, Dorina and Nagy, Attila and Nyist, Mil{\'{a}}n Konor and {\'{A}}cs, Judit}, pages = {231--243} } ```
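To make the structure concrete, here is a hedged loading sketch; the field names and split names come from the tables above, and streaming is used only because the train split holds over a million articles.

```python
# Sketch: stream HunSum-1 training articles and print a few (title, lead) pairs.
from datasets import load_dataset

hunsum = load_dataset("SZTAKI-HLT/HunSum-1", split="train", streaming=True)

for record in hunsum.take(3):
    print(record["title"])
    print(record["lead"][:120])
    # record["article"] holds the full body; record["tags"] the article's tag list
```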
SZTAKI-HLT/HunSum-1
[ "task_categories:summarization", "task_ids:news-articles-summarization", "multilinguality:monolingual", "language:hu", "license:cc-by-nc-sa-4.0", "region:us" ]
2023-01-06T07:42:26+00:00
{"language": ["hu"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "task_categories": ["summarization"], "task_ids": ["news-articles-summarization"], "pretty_name": "HunSum-1"}
2023-01-24T16:21:00+00:00
[]
[ "hu" ]
TAGS #task_categories-summarization #task_ids-news-articles-summarization #multilinguality-monolingual #language-Hungarian #license-cc-by-nc-sa-4.0 #region-us
Dataset Card for HunSum-1 ========================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards * Dataset Structure + Data Fields + Data Splits Dataset Description ------------------- ### Dataset Summary The HunSum-1 Dataset is a Hungarian-language dataset containing over 1.1M unique news articles with lead and other metadata. The dataset contains articles from 9 major Hungarian news websites. ### Supported Tasks and Leaderboards * 'summarization' * 'title generation' Dataset Structure ----------------- ### Data Fields * 'uuid': a string containing the unique id * 'article': a string containing the body of the news article * 'lead': a string containing the lead of the article * 'title': a string containing the title of the article * 'url': a string containing the URL for the article * 'domain': a string containing the domain of the url * 'date\_of\_creation': a timestamp containing the date when the article was created * 'tags': a sequence containing the tags of the article ### Data Splits The HunSum-1 dataset has 3 splits: *train*, *validation*, and *test*. If you use our dataset, please cite the following paper:
[ "### Dataset Summary\n\n\nThe HunSum-1 Dataset is a Hungarian-language dataset containing over 1.1M unique news articles with lead and other metadata. The dataset contains articles from 9 major Hungarian news websites.", "### Supported Tasks and Leaderboards\n\n\n* 'summarization'\n* 'title generation'\n\n\nDataset Structure\n-----------------", "### Data Fields\n\n\n* 'uuid': a string containing the unique id\n* 'article': a string containing the body of the news article\n* 'lead': a string containing the lead of the article\n* 'title': a string containing the title of the article\n* 'url': a string containing the URL for the article\n* 'domain': a string containing the domain of the url\n* 'date\\_of\\_creation': a timestamp containing the date when the article was created\n* 'tags': a sequence containing the tags of the article", "### Data Splits\n\n\nThe HunSum-1 dataset has 3 splits: *train*, *validation*, and *test*.\n\n\n\nIf you use our dataset, please cite the following paper:" ]
[ "TAGS\n#task_categories-summarization #task_ids-news-articles-summarization #multilinguality-monolingual #language-Hungarian #license-cc-by-nc-sa-4.0 #region-us \n", "### Dataset Summary\n\n\nThe HunSum-1 Dataset is a Hungarian-language dataset containing over 1.1M unique news articles with lead and other metadata. The dataset contains articles from 9 major Hungarian news websites.", "### Supported Tasks and Leaderboards\n\n\n* 'summarization'\n* 'title generation'\n\n\nDataset Structure\n-----------------", "### Data Fields\n\n\n* 'uuid': a string containing the unique id\n* 'article': a string containing the body of the news article\n* 'lead': a string containing the lead of the article\n* 'title': a string containing the title of the article\n* 'url': a string containing the URL for the article\n* 'domain': a string containing the domain of the url\n* 'date\\_of\\_creation': a timestamp containing the date when the article was created\n* 'tags': a sequence containing the tags of the article", "### Data Splits\n\n\nThe HunSum-1 dataset has 3 splits: *train*, *validation*, and *test*.\n\n\n\nIf you use our dataset, please cite the following paper:" ]
dd7cea1a69b543124ba399fb14981d530b0acc2a
# Dataset Card for "bookcorpus_compact_1024_shard1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
saibo/bookcorpus_compact_1024_shard1_of_10
[ "region:us" ]
2023-01-06T07:48:43+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "concept_with_offset", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 733627676, "num_examples": 61605}], "download_size": 367870833, "dataset_size": 733627676}}
2023-01-06T07:49:12+00:00
[]
[]
TAGS #region-us
# Dataset Card for "bookcorpus_compact_1024_shard1" More Information needed
[ "# Dataset Card for \"bookcorpus_compact_1024_shard1\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"bookcorpus_compact_1024_shard1\"\n\nMore Information needed" ]
1b6b830016de70123bff37b29dfc2525ace9a3cc
# Dataset Card for "bookcorpus_compact_1024_shard3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
saibo/bookcorpus_compact_1024_shard3_of_10
[ "region:us" ]
2023-01-06T08:25:12+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "concept_with_offset", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 764655737, "num_examples": 61605}], "download_size": 384654577, "dataset_size": 764655737}}
2023-01-06T08:27:21+00:00
[]
[]
TAGS #region-us
# Dataset Card for "bookcorpus_compact_1024_shard3" More Information needed
[ "# Dataset Card for \"bookcorpus_compact_1024_shard3\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"bookcorpus_compact_1024_shard3\"\n\nMore Information needed" ]
5262bf3395485bbc0fa6de9bf9edf373e9be7b21
# Dataset Card for "rico-screen2words" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pinkmooncake/rico-screen2words
[ "region:us" ]
2023-01-06T09:09:06+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 454423304.26, "num_examples": 4310}, {"name": "dev", "num_bytes": 246957743.116, "num_examples": 2364}, {"name": "train", "num_bytes": 1737030544.084, "num_examples": 15743}], "download_size": 1897987283, "dataset_size": 2438411591.46}}
2023-01-07T04:18:11+00:00
[]
[]
TAGS #region-us
# Dataset Card for "rico-screen2words" More Information needed
[ "# Dataset Card for \"rico-screen2words\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"rico-screen2words\"\n\nMore Information needed" ]
adeab69db72d52c045035039b6c64367ced4e007
# Dataset Card for "bookcorpus_compact_1024_shard2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
saibo/bookcorpus_compact_1024_shard2_of_10
[ "region:us" ]
2023-01-06T09:25:20+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "concept_with_offset", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 759243184, "num_examples": 61605}], "download_size": 382569803, "dataset_size": 759243184}}
2023-01-06T09:25:48+00:00
[]
[]
TAGS #region-us
# Dataset Card for "bookcorpus_compact_1024_shard2" More Information needed
[ "# Dataset Card for \"bookcorpus_compact_1024_shard2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"bookcorpus_compact_1024_shard2\"\n\nMore Information needed" ]
de29ae7533b3715ab0d1e3cb191316d01c8c3664
# Dataset Card for "flurocells" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
zlgao/flurocells
[ "region:us" ]
2023-01-06T09:29:48+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "mcf7", "1": "mda231"}}}}], "splits": [{"name": "train", "num_bytes": 165402692.0, "num_examples": 203}], "download_size": 165410090, "dataset_size": 165402692.0}}
2023-01-06T09:30:31+00:00
[]
[]
TAGS #region-us
# Dataset Card for "flurocells" More Information needed
[ "# Dataset Card for \"flurocells\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"flurocells\"\n\nMore Information needed" ]
bf375d06223db27a1d2481ff985cdb1163e696b1
# Dataset Card for "xquad_en" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Zaid/xquad_en
[ "region:us" ]
2023-01-06T10:05:48+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 903196.0815126051, "num_examples": 963}, {"name": "validation", "num_bytes": 111609.9, "num_examples": 119}, {"name": "test", "num_bytes": 101293.01848739496, "num_examples": 108}], "download_size": 323403, "dataset_size": 1116099.0}}
2023-01-06T10:06:02+00:00
[]
[]
TAGS #region-us
# Dataset Card for "xquad_en" More Information needed
[ "# Dataset Card for \"xquad_en\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"xquad_en\"\n\nMore Information needed" ]
ea10a33abb1dc44fff19d29c078181ca7ffa94df
# Dataset Card for "xquad_ru" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Zaid/xquad_ru
[ "region:us" ]
2023-01-06T10:06:47+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 1729326.2672268907, "num_examples": 963}, {"name": "validation", "num_bytes": 213696.6, "num_examples": 119}, {"name": "test", "num_bytes": 193943.13277310925, "num_examples": 108}], "download_size": 498595, "dataset_size": 2136966.0}}
2023-01-06T10:07:02+00:00
[]
[]
TAGS #region-us
# Dataset Card for "xquad_ru" More Information needed
[ "# Dataset Card for \"xquad_ru\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"xquad_ru\"\n\nMore Information needed" ]
474545964b7f14653e5de4d58cd465c5ec05e89d
# AutoTrain Dataset for project: real-vs-fake-news ## Dataset Description This dataset has been automatically processed by AutoTrain for project real-vs-fake-news. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "feat_title": "FBI Russia probe helped by Australian diplomat tip-off: NYT", "text": "WASHINGTON (Reuters) - Trump campaign adviser George Papadopoulos told an Australian diplomat in May 2016 that Russia had political dirt on Democratic presidential candidate Hillary Clinton, the New York Times reported on Saturday. The conversation between Papadopoulos and the diplomat, Alexander Downer, in London was a driving factor behind the FBI\u2019s decision to open a counter-intelligence investigation of Moscow\u2019s contacts with the Trump campaign, the Times reported. Two months after the meeting, Australian officials passed the information that came from Papadopoulos to their American counterparts when leaked Democratic emails began appearing online, according to the newspaper, which cited four current and former U.S. and foreign officials. Besides the information from the Australians, the probe by the Federal Bureau of Investigation was also propelled by intelligence from other friendly governments, including the British and Dutch, the Times said. Papadopoulos, a Chicago-based international energy lawyer, pleaded guilty on Oct. 30 to lying to FBI agents about contacts with people who claimed to have ties to top Russian officials. It was the first criminal charge alleging links between the Trump campaign and Russia. The White House has played down the former aide\u2019s campaign role, saying it was \u201cextremely limited\u201d and that any actions he took would have been on his own. The New York Times, however, reported that Papadopoulos helped set up a meeting between then-candidate Donald Trump and Egyptian President Abdel Fattah al-Sisi and edited the outline of Trump\u2019s first major foreign policy speech in April 2016. The federal investigation, which is now being led by Special Counsel Robert Mueller, has hung over Trump\u2019s White House since he took office almost a year ago. Some Trump allies have recently accused Mueller\u2019s team of being biased against the Republican president. Lawyers for Papadopoulos did not immediately respond to requests by Reuters for comment. Mueller\u2019s office declined to comment. Trump\u2019s White House attorney, Ty Cobb, declined to comment on the New York Times report. \u201cOut of respect for the special counsel and his process, we are not commenting on matters such as this,\u201d he said in a statement. Mueller has charged four Trump associates, including Papadopoulos, in his investigation. Russia has denied interfering in the U.S. election and Trump has said there was no collusion between his campaign and Moscow. ", "feat_subject": "politicsNews", "feat_date": "December 30, 2017 ", "target": 1 }, { "feat_title": "Democrats ride grassroots wave to major statehouse gains", "text": "(Reuters) - Democrats claimed historic gains in Virginia\u2019s statehouse and booted Republicans from state and local office across the United States on Tuesday, in the party\u2019s first big wave of victories since Republican Donald Trump\u2019s won the White House a year ago. Democrats must figure out how to turn that momentum to their advantage in November 2018 elections, when control of the U.S. 
Congress and scores of statehouses will be at stake. From coast to coast, Democratic victories showed grassroots resistance to Trump rallying the party\u2019s base, while independent and conservative voters appeared frustrated with the unpopular Republican leadership in Washington. Democrats won this year\u2019s races for governor in Virginia and New Jersey, but successes in legislative and local races nationwide may have revealed more about where the party stands a year into Trump\u2019s administration. Unexpectedly massive Democratic gains in Virginia\u2019s statehouse surprised even the most optimistic party loyalists in a state that has trended Democratic in recent years but remains a top target for both parties in national elections. \u201cThis is beyond our wildest expectations, to be honest,\u201d said Catherine Vaughan, co-founder of Flippable, one of several new startup progressive groups rebuilding the party at the grassroots level. With several races still too close to call, Democrats were close to flipping, or splitting, control of the Virginia House of Delegates, erasing overnight a two-to-one Republican majority. Democratic Lieutenant Governor Ralph Northam also defeated Republican Ed Gillespie by nearly nine percentage points in what had seemed a closer contest for Virginia\u2019s governor\u2019s mansion, a year after Democrat Hillary Clinton carried the state by five points in the presidential election. The losing candidate had employed Trump-style campaign tactics that highlighted divisive issues such as immigration, although the president did not join him on the campaign trail. In New Jersey, a Democratic presidential stronghold, voters replaced a two-term Republican governor with a Democrat and increased the party\u2019s majorities in the state legislature. Democrats notched additional wins in a Washington state Senate race that gave the party full control of the state government and in Republican-controlled Georgia, where Democrats picked up three seats in special state legislative elections. \u201cThis was the first chance that the voters got to send a message to Donald Trump and they took advantage of it,\u201d John Feehery, a Republican strategist in Washington, said by phone. The gains suggested to some election analysts that Democrats could retake the U.S. House of Representatives next year. Republicans control both the House and Senate along with the White House. Dave Wasserman, who analyzes U.S. House and statehouse races for the nonpartisan Cook Political Report, called the Virginia results a \u201ctidal wave.\u201d Even after Tuesday\u2019s gains, however, Democrats are completely locked out of power in 26 state governments. Republicans control two-thirds of U.S. legislative chambers. Desperate to rebuild, national Democrats this year showed newfound interest in legislative contests and races even farther down the ballot. The Democratic National Committee successfully invested in mayoral races from St. Petersburg, Florida, to Manchester, New Hampshire. \u201cIf there is a lesson to be taken from yesterday, it is that we need to make sure that we are competing everywhere, because Democrats can win,\u201d DNC Chairman Tom Perez said on a media call. Democratic Legislative Campaign Committee executive director Jessica Post said national party leaders must remain focused on local races, even in a congressional year. \u201cWe don\u2019t focus enough on the state level, and that is why we are in the place we are,\u201d she said. 
\u201cBut when we do, we win.\u201d ", "feat_subject": "politicsNews", "feat_date": "November 8, 2017 ", "target": 1 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "feat_title": "Value(dtype='string', id=None)", "text": "Value(dtype='string', id=None)", "feat_subject": "Value(dtype='string', id=None)", "feat_date": "Value(dtype='string', id=None)", "target": "ClassLabel(names=['Fake', 'True'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 1598 | | valid | 400 |
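A loading sketch is given below for orientation; the repository id is the one listed for this dataset, the split names follow the table above, and whether the Hub loader preserves the `ClassLabel` feature for `target` is an assumption.

```python
# Sketch: load the AutoTrain splits and map the target label back to Fake/True.
from datasets import load_dataset

ds = load_dataset("Eip/autotrain-data-real-vs-fake-news")

sample = ds["valid"][0]
print(sample["feat_title"])
print(sample["feat_subject"], sample["feat_date"])

target_feature = ds["train"].features["target"]
# If the ClassLabel is preserved this yields "Fake" or "True"; otherwise fall back to the raw value.
label = target_feature.int2str(sample["target"]) if hasattr(target_feature, "int2str") else sample["target"]
print(label)
```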
Eip/autotrain-data-real-vs-fake-news
[ "task_categories:text-classification", "region:us" ]
2023-01-06T10:10:38+00:00
{"task_categories": ["text-classification"]}
2023-01-06T12:20:57+00:00
[]
[]
TAGS #task_categories-text-classification #region-us
AutoTrain Dataset for project: real-vs-fake-news ================================================ Dataset Description ------------------- This dataset has been automatically processed by AutoTrain for project real-vs-fake-news. ### Languages The BCP-47 code for the dataset's language is unk. Dataset Structure ----------------- ### Data Instances A sample from this dataset looks as follows: ### Dataset Fields The dataset has the following fields (also called "features"): ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow:
[ "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
[ "TAGS\n#task_categories-text-classification #region-us \n", "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
abad918fd61f5712ff030733993a4023ace37193
# Dataset Card for "nature128_1k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mertcobanov/nature128_1k
[ "region:us" ]
2023-01-06T10:35:28+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "07968_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hardenbergia_violacea", "1": "07969_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hedysarum_alpinum", "2": "07970_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hedysarum_boreale", "3": "07971_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hoffmannseggia_glauca", "4": "07972_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hoffmannseggia_microphylla", "5": "07973_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hosackia_gracilis", "6": "07974_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hylodesmum_glutinosum", "7": "07975_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hylodesmum_nudiflorum", "8": "07976_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Indigofera_miniata", "9": "07977_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Kennedia_prostrata", "10": "07978_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Laburnum_anagyroides", "11": "07979_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lathyrus_hirsutus", "12": "07980_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lathyrus_japonicus", "13": "07986_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lathyrus_tuberosus", "14": "07987_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lathyrus_vernus", "15": "07988_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lathyrus_vestitus", "16": "07989_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lespedeza_capitata", "17": "07990_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lespedeza_cuneata", "18": "07991_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lespedeza_virginica", "19": "07992_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lessertia_frutescens", "20": "08013_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lupinus_texensis", "21": "08014_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lupinus_truncatus", "22": "08015_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Macroptilium_atropurpureum", "23": "08016_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Macroptilium_gibbosifolium", "24": "08017_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Macroptilium_lathyroides", "25": "08018_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Medicago_arabica", "26": "08019_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Medicago_falcata", "27": "08020_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Medicago_lupulina", "28": "08021_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Medicago_minima", "29": "08022_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Medicago_polymorpha", "30": "08023_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Medicago_sativa", "31": "08024_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Melilotus_albus", "32": "08025_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Melilotus_indicus", "33": "08026_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Melilotus_officinalis", "34": "08049_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Prosopis_laevigata", "35": "08050_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Prosopis_pubescens", "36": "08051_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Prosopis_velutina", "37": "08052_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Psorothamnus_emoryi", "38": "08053_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Psorothamnus_schottii", "39": 
"08054_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Psorothamnus_spinosus", "40": "08055_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Pueraria_montana", "41": "08056_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Robinia_neomexicana", "42": "08057_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Robinia_pseudoacacia", "43": "08058_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Rupertia_physodes", "44": "08059_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Securigera_varia", "45": "08060_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Senegalia_greggii", "46": "08061_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Senna_alata", "47": "08062_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Senna_armata", "48": "08063_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Senna_covesii", "49": "09930_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dennstaedtiaceae_Hypolepis_ambigua", "50": "09931_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dennstaedtiaceae_Paesia_scaberula", "51": "09932_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dennstaedtiaceae_Pteridium_aquilinum", "52": "09933_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dennstaedtiaceae_Pteridium_esculentum", "53": "09934_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dennstaedtiaceae_Pteridium_pinetorum", "54": "09935_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Diplaziopsidaceae_Homalosorus_pycnocarpos", "55": "09936_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Cyrtomium_falcatum", "56": "09937_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_arguta", "57": "09938_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_carthusiana", "58": "09939_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_cristata", "59": "09940_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_expansa", "60": "09941_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_filix-mas", "61": "09942_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_fragrans", "62": "09943_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_intermedia", "63": "09944_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_marginalis", "64": "09945_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Polystichum_acrostichoides", "65": "09946_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Polystichum_lonchitis", "66": "09947_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Polystichum_munitum", "67": "09948_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Polystichum_neozelandicum", "68": "09949_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Polystichum_vestitum", "69": "09950_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Rumohra_adiantiformis", "70": "09951_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Nephrolepidaceae_Nephrolepis_cordifolia", "71": "09952_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Onocleaceae_Matteuccia_struthiopteris", "72": "09953_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Onocleaceae_Onoclea_sensibilis", "73": "09954_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Microsorum_pustulatum", "74": "09955_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Microsorum_scandens", "75": 
"09956_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Notogrammitis_heterophylla", "76": "09957_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Phlebodium_aureum", "77": "09958_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Pleopeltis_michauxiana", "78": "09959_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Polypodium_californicum", "79": "09960_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Polypodium_glycyrrhiza", "80": "09961_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Polypodium_scouleri", "81": "09962_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Polypodium_virginianum", "82": "09963_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Polypodium_vulgare", "83": "09964_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Pyrrosia_eleagnifolia", "84": "09965_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Acrostichum_danaeifolium", "85": "09966_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Adiantum_aleuticum", "86": "09967_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Adiantum_capillus-veneris", "87": "09968_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Adiantum_cunninghamii", "88": "09969_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Adiantum_hispidulum", "89": "09970_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Adiantum_jordanii", "90": "09971_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Adiantum_pedatum", "91": "09972_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Aspidotis_densa", "92": "09973_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Astrolepis_sinuata", "93": "09974_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Cryptogramma_acrostichoides", "94": "09975_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Myriopteris_alabamensis", "95": "09976_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Myriopteris_aurea", "96": "09977_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Myriopteris_parryi", "97": "09978_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pellaea_andromedifolia", "98": "09979_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pellaea_atropurpurea", "99": "09980_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pellaea_glabella", "100": "09981_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pellaea_mucronata", "101": "09982_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pellaea_rotundifolia", "102": "09983_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pentagramma_triangularis", "103": "09984_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pteris_cretica", "104": "09985_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pteris_macilenta", "105": "09986_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pteris_tremula", "106": "09987_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pteris_vittata", "107": "09988_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Thelypteridaceae_Parathelypteris_noveboracensis", "108": "09989_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Thelypteridaceae_Phegopteris_connectilis", "109": "09990_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Thelypteridaceae_Phegopteris_hexagonoptera", "110": "09991_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Thelypteridaceae_Pneumatopteris_pennigera", "111": 
"09992_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Thelypteridaceae_Thelypteris_palustris", "112": "09993_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Woodsiaceae_Woodsia_ilvensis", "113": "09994_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Woodsiaceae_Woodsia_obtusa", "114": "09995_Plantae_Tracheophyta_Polypodiopsida_Psilotales_Psilotaceae_Psilotum_nudum", "115": "09996_Plantae_Tracheophyta_Polypodiopsida_Psilotales_Psilotaceae_Tmesipteris_elongata", "116": "09997_Plantae_Tracheophyta_Polypodiopsida_Salviniales_Salviniaceae_Azolla_filiculoides", "117": "09998_Plantae_Tracheophyta_Polypodiopsida_Salviniales_Salviniaceae_Salvinia_minima", "118": "09999_Plantae_Tracheophyta_Polypodiopsida_Schizaeales_Lygodiaceae_Lygodium_japonicum"}}}}], "splits": [{"name": "train", "num_bytes": 130554746.56, "num_examples": 1190}], "download_size": 132054218, "dataset_size": 130554746.56}}
2023-01-06T10:37:33+00:00
[]
[]
TAGS #region-us
# Dataset Card for "nature128_1k" More Information needed
[ "# Dataset Card for \"nature128_1k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"nature128_1k\"\n\nMore Information needed" ]
658b0a48276f029ac6907647ee9e1b76e896d1fc
# Dataset Card for "temp_repo" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pyakymenko/temp_repo
[ "region:us" ]
2023-01-06T11:41:52+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 226855.0, "num_examples": 4}], "download_size": 0, "dataset_size": 226855.0}}
2023-01-06T12:03:02+00:00
[]
[]
TAGS #region-us
# Dataset Card for "temp_repo" More Information needed
[ "# Dataset Card for \"temp_repo\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"temp_repo\"\n\nMore Information needed" ]
7276a5670ff72438e60ac95c54e5ed25672bae30
# Dataset Card for "gids" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Repository:** [RE-DS-Word-Attention-Models](https://github.com/SharmisthaJat/RE-DS-Word-Attention-Models/tree/master/Data/GIDS) - **Paper:** [Improving Distantly Supervised Relation Extraction using Word and Entity Based Attention](https://arxiv.org/abs/1804.06987) - **Size of downloaded dataset files:** 8.94 MB - **Size of the generated dataset:** 11.82 MB ### Dataset Summary The Google-IISc Distant Supervision (GIDS) is a new dataset for distantly-supervised relation extraction. GIDS is seeded from the human-judged Google relation extraction corpus. See the paper for full details: [Improving Distantly Supervised Relation Extraction using Word and Entity Based Attention](https://arxiv.org/abs/1804.06987) Note: - There is a formatted version that you can load with `datasets.load_dataset('gids', name='gids_formatted')`. This version is tokenized with spaCy, removes the underscores in the entities and provides entity offsets. ### Supported Tasks and Leaderboards - **Tasks:** Relation Classification - **Leaderboards:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages The language in the dataset is English. ## Dataset Structure ### Data Instances #### gids - **Size of downloaded dataset files:** 8.94 MB - **Size of the generated dataset:** 8.5 MB An example of 'train' looks as follows: ```json { "sentence": "War as appropriate. Private Alfred James_Smurthwaite Sample. 26614. 2nd Battalion Yorkshire Regiment. Son of Edward James Sample, of North_Ormesby , Yorks. Died 2 April 1917. Aged 29. Born Ormesby, Enlisted Middlesbrough. Buried BUCQUOY ROAD CEMETERY, FICHEUX. Not listed on the Middlesbrough War Memorial Private Frederick Scott. 46449. 4th Battalion Yorkshire Regiment. Son of William and Maria Scott, of 25, Aspinall St., Heywood, Lancs. Born at West Hartlepool. Died 27 May 1918. 
Aged 24.", "subj_id": "/m/02qt0sv", "obj_id": "/m/0fnhl9", "subj_text": "James_Smurthwaite", "obj_text": "North_Ormesby", "relation": 4 } ``` #### gids_formatted - **Size of downloaded dataset files:** 8.94 MB - **Size of the generated dataset:** 11.82 MB An example of 'train' looks as follows: ```json { "token": ["announced", "he", "had", "closed", "shop", ".", "Mary", "D.", "Crisp", "Coyle", "opened", "in", "1951", ".", "Stoffey", ",", "a", "Maricopa", "County", "/", "Phoenix", "city", "resident", "and", "longtime", "customer", ",", "bought", "the", "business", "in", "2011", ",", "when", "then", "owners", "were", "facing", "closure", ".", "He", "renovated", "the", "diner", "is", "interior", ",", "increased", "training", "for", "staff", "and", "expanded", "the", "menu", "."], "subj_start": 6, "subj_end": 9, "obj_start": 17, "obj_end": 22, "relation": 4 } ``` ### Data Fields The data fields are the same among all splits. #### gids - `sentence`: the sentence, a `string` feature. - `subj_id`: the id of the relation subject mention, a `string` feature. - `obj_id`: the id of the relation object mention, a `string` feature. - `subj_text`: the text of the relation subject mention, a `string` feature. - `obj_text`: the text of the relation object mention, a `string` feature. - `relation`: the relation label of this instance, an `int` classification label. ```python {"NA": 0, "/people/person/education./education/education/institution": 1, "/people/person/education./education/education/degree": 2, "/people/person/place_of_birth": 3, "/people/deceased_person/place_of_death": 4} ``` #### gids_formatted - `token`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features. - `subj_start`: the 0-based index of the start token of the relation subject mention, an `int` feature. - `subj_end`: the 0-based index of the end token of the relation subject mention, exclusive, an `int` feature. - `obj_start`: the 0-based index of the start token of the relation object mention, an `int` feature. - `obj_end`: the 0-based index of the end token of the relation object mention, exclusive, an `int` feature. - `relation`: the relation label of this instance, an `int` classification label. ```python {"NA": 0, "/people/person/education./education/education/institution": 1, "/people/person/education./education/education/degree": 2, "/people/person/place_of_birth": 3, "/people/deceased_person/place_of_death": 4} ``` ### Data Splits | | Train | Dev | Test | |------|-------|------|------| | GIDS | 11297 | 1864 | 5663 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @article{DBLP:journals/corr/abs-1804-06987, author = {Sharmistha Jat and Siddhesh Khandelwal and Partha P. Talukdar}, title = {Improving Distantly Supervised Relation Extraction using Word and Entity Based Attention}, journal = {CoRR}, volume = {abs/1804.06987}, year = {2018}, url = {http://arxiv.org/abs/1804.06987}, eprinttype = {arXiv}, eprint = {1804.06987}, timestamp = {Fri, 15 Nov 2019 17:16:02 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1804-06987.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ### Contributions Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset.
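### Usage Example

The offset-based fields of `gids_formatted` can be mapped back to mention strings without re-tokenizing. The following is a minimal sketch; it assumes the dataset is loaded from the `DFKI-SLT/gids` repository with the `datasets` library (adjust the path if you use a local copy of the loading script, as in the note above).

```python
# Minimal sketch; the loading path "DFKI-SLT/gids" is taken from the repository id
# and the field names come from the card above.
from datasets import load_dataset

ds = load_dataset("DFKI-SLT/gids", name="gids_formatted", split="train")

example = ds[0]
tokens = example["token"]
subject = " ".join(tokens[example["subj_start"]:example["subj_end"]])  # end offsets are exclusive
obj = " ".join(tokens[example["obj_start"]:example["obj_end"]])
relation = ds.features["relation"].int2str(example["relation"])  # map label id to relation name

print(subject, "|", relation, "|", obj)
```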
DFKI-SLT/gids
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:other", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100k", "source_datasets:extended|other", "language:en", "license:other", "relation extraction", "arxiv:1804.06987", "region:us" ]
2023-01-06T12:24:59+00:00
{"annotations_creators": ["other"], "language_creators": ["found"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100k"], "source_datasets": ["extended|other"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "Google-IISc Distant Supervision (GIDS) dataset for distantly-supervised relation extraction", "tags": ["relation extraction"], "dataset_info": [{"config_name": "gids", "features": [{"name": "sentence", "dtype": "string"}, {"name": "subj_id", "dtype": "string"}, {"name": "obj_id", "dtype": "string"}, {"name": "subj_text", "dtype": "string"}, {"name": "obj_text", "dtype": "string"}, {"name": "relation", "dtype": {"class_label": {"names": {"0": "NA", "1": "/people/person/education./education/education/institution", "2": "/people/person/education./education/education/degree", "3": "/people/person/place_of_birth", "4": "/people/deceased_person/place_of_death"}}}}], "splits": [{"name": "train", "num_bytes": 5088421, "num_examples": 11297}, {"name": "validation", "num_bytes": 844784, "num_examples": 1864}, {"name": "test", "num_bytes": 2568673, "num_examples": 5663}], "download_size": 8941490, "dataset_size": 8501878}, {"config_name": "gids_formatted", "features": [{"name": "token", "sequence": "string"}, {"name": "subj_start", "dtype": "int32"}, {"name": "subj_end", "dtype": "int32"}, {"name": "obj_start", "dtype": "int32"}, {"name": "obj_end", "dtype": "int32"}, {"name": "relation", "dtype": {"class_label": {"names": {"0": "NA", "1": "/people/person/education./education/education/institution", "2": "/people/person/education./education/education/degree", "3": "/people/person/place_of_birth", "4": "/people/deceased_person/place_of_death"}}}}], "splits": [{"name": "train", "num_bytes": 7075362, "num_examples": 11297}, {"name": "validation", "num_bytes": 1173957, "num_examples": 1864}, {"name": "test", "num_bytes": 3573706, "num_examples": 5663}], "download_size": 8941490, "dataset_size": 11823025}]}
2023-01-11T10:06:07+00:00
[ "1804.06987" ]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-other #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100k #source_datasets-extended|other #language-English #license-other #relation extraction #arxiv-1804.06987 #region-us
Dataset Card for "gids" ======================= Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: * Repository: RE-DS-Word-Attention-Models * Paper: Improving Distantly Supervised Relation Extraction using Word and Entity Based Attention * Size of downloaded dataset files: 8.94 MB * Size of the generated dataset: 11.82 MB ### Dataset Summary The Google-IISc Distant Supervision (GIDS) is a new dataset for distantly-supervised relation extraction. GIDS is seeded from the human-judged Google relation extraction corpus. See the paper for full details: Improving Distantly Supervised Relation Extraction using Word and Entity Based Attention Note: * There is a formatted version that you can load with 'datasets.load\_dataset('gids', name='gids\_formatted')'. This version is tokenized with spaCy, removes the underscores in the entities and provides entity offsets. ### Supported Tasks and Leaderboards * Tasks: Relation Classification * Leaderboards: ### Languages The language in the dataset is English. Dataset Structure ----------------- ### Data Instances #### gids * Size of downloaded dataset files: 8.94 MB * Size of the generated dataset: 8.5 MB An example of 'train' looks as follows: #### gids\_formatted * Size of downloaded dataset files: 8.94 MB * Size of the generated dataset: 11.82 MB An example of 'train' looks as follows: ### Data Fields The data fields are the same among all splits. #### gids * 'sentence': the sentence, a 'string' feature. * 'subj\_id': the id of the relation subject mention, a 'string' feature. * 'obj\_id': the id of the relation object mention, a 'string' feature. * 'subj\_text': the text of the relation subject mention, a 'string' feature. * 'obj\_text': the text of the relation object mention, a 'string' feature. * 'relation': the relation label of this instance, an 'int' classification label. #### gids\_formatted * 'token': the list of tokens of this sentence, obtained with spaCy, a 'list' of 'string' features. * 'subj\_start': the 0-based index of the start token of the relation subject mention, an 'ìnt' feature. * 'subj\_end': the 0-based index of the end token of the relation subject mention, exclusive, an 'ìnt' feature. * 'obj\_start': the 0-based index of the start token of the relation object mention, an 'ìnt' feature. * 'obj\_end': the 0-based index of the end token of the relation object mention, exclusive, an 'ìnt' feature. * 'relation': the relation label of this instance, an 'int' classification label. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @phucdev for adding this dataset.
[ "### Dataset Summary\n\n\nThe Google-IISc Distant Supervision (GIDS) is a new dataset for distantly-supervised relation extraction.\nGIDS is seeded from the human-judged Google relation extraction corpus.\nSee the paper for full details: Improving Distantly Supervised Relation Extraction using Word and Entity Based Attention\n\n\nNote:\n\n\n* There is a formatted version that you can load with 'datasets.load\\_dataset('gids', name='gids\\_formatted')'. This version is tokenized with spaCy, removes the underscores in the entities and provides entity offsets.", "### Supported Tasks and Leaderboards\n\n\n* Tasks: Relation Classification\n* Leaderboards:", "### Languages\n\n\nThe language in the dataset is English.\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### gids\n\n\n* Size of downloaded dataset files: 8.94 MB\n* Size of the generated dataset: 8.5 MB\nAn example of 'train' looks as follows:", "#### gids\\_formatted\n\n\n* Size of downloaded dataset files: 8.94 MB\n* Size of the generated dataset: 11.82 MB\nAn example of 'train' looks as follows:", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### gids\n\n\n* 'sentence': the sentence, a 'string' feature.\n* 'subj\\_id': the id of the relation subject mention, a 'string' feature.\n* 'obj\\_id': the id of the relation object mention, a 'string' feature.\n* 'subj\\_text': the text of the relation subject mention, a 'string' feature.\n* 'obj\\_text': the text of the relation object mention, a 'string' feature.\n* 'relation': the relation label of this instance, an 'int' classification label.", "#### gids\\_formatted\n\n\n* 'token': the list of tokens of this sentence, obtained with spaCy, a 'list' of 'string' features.\n* 'subj\\_start': the 0-based index of the start token of the relation subject mention, an 'ìnt' feature.\n* 'subj\\_end': the 0-based index of the end token of the relation subject mention, exclusive, an 'ìnt' feature.\n* 'obj\\_start': the 0-based index of the start token of the relation object mention, an 'ìnt' feature.\n* 'obj\\_end': the 0-based index of the end token of the relation object mention, exclusive, an 'ìnt' feature.\n* 'relation': the relation label of this instance, an 'int' classification label.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @phucdev for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-other #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100k #source_datasets-extended|other #language-English #license-other #relation extraction #arxiv-1804.06987 #region-us \n", "### Dataset Summary\n\n\nThe Google-IISc Distant Supervision (GIDS) is a new dataset for distantly-supervised relation extraction.\nGIDS is seeded from the human-judged Google relation extraction corpus.\nSee the paper for full details: Improving Distantly Supervised Relation Extraction using Word and Entity Based Attention\n\n\nNote:\n\n\n* There is a formatted version that you can load with 'datasets.load\\_dataset('gids', name='gids\\_formatted')'. This version is tokenized with spaCy, removes the underscores in the entities and provides entity offsets.", "### Supported Tasks and Leaderboards\n\n\n* Tasks: Relation Classification\n* Leaderboards:", "### Languages\n\n\nThe language in the dataset is English.\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### gids\n\n\n* Size of downloaded dataset files: 8.94 MB\n* Size of the generated dataset: 8.5 MB\nAn example of 'train' looks as follows:", "#### gids\\_formatted\n\n\n* Size of downloaded dataset files: 8.94 MB\n* Size of the generated dataset: 11.82 MB\nAn example of 'train' looks as follows:", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### gids\n\n\n* 'sentence': the sentence, a 'string' feature.\n* 'subj\\_id': the id of the relation subject mention, a 'string' feature.\n* 'obj\\_id': the id of the relation object mention, a 'string' feature.\n* 'subj\\_text': the text of the relation subject mention, a 'string' feature.\n* 'obj\\_text': the text of the relation object mention, a 'string' feature.\n* 'relation': the relation label of this instance, an 'int' classification label.", "#### gids\\_formatted\n\n\n* 'token': the list of tokens of this sentence, obtained with spaCy, a 'list' of 'string' features.\n* 'subj\\_start': the 0-based index of the start token of the relation subject mention, an 'ìnt' feature.\n* 'subj\\_end': the 0-based index of the end token of the relation subject mention, exclusive, an 'ìnt' feature.\n* 'obj\\_start': the 0-based index of the start token of the relation object mention, an 'ìnt' feature.\n* 'obj\\_end': the 0-based index of the end token of the relation object mention, exclusive, an 'ìnt' feature.\n* 'relation': the relation label of this instance, an 'int' classification label.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @phucdev for adding this dataset." ]
617b89fed951bf7702c2e688c8dadc6a1cd64787
# Dataset Card for "kbp37" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Repository:** [kbp37](https://github.com/zhangdongxu/kbp37) - **Paper:** [Relation Classification via Recurrent Neural Network](https://arxiv.org/abs/1508.01006) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 5.11 MB - **Size of the generated dataset:** 6.58 MB ### Dataset Summary KBP37 is a revision of the MIML-RE annotation dataset, provided by Gabor Angeli et al. (2014). They use both the 2010 and 2013 KBP official document collections, as well as a July 2013 dump of Wikipedia, as the text corpus for annotation. A total of 33,811 sentences have been annotated. Zhang and Wang made several refinements: 1. They add direction to the relation names, e.g. '`per:employee_of`' is split into '`per:employee_of(e1,e2)`' and '`per:employee_of(e2,e1)`'. They also replace '`org:parents`' with '`org:subsidiaries`' and replace '`org:member_of`' with '`org:members`' (by their reverse directions). 2. They discard low-frequency relations such that both directions of each relation occur more than 100 times in the dataset. KBP37 contains 18 directional relations and an additional '`no_relation`' relation, resulting in 37 relation classes. Note: - There is a formatted version that you can load with `datasets.load_dataset('kbp37', name='kbp37_formatted')`. This version is tokenized with `str.split()` and provides entities as offsets instead of enclosing them in XML tags. However, it discards some examples that are invalid in the original dataset and would lead to entity offset errors, e.g. example train/1276. 
### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages The language data in KBP37 is in English (BCP-47 `en`). ## Dataset Structure ### Data Instances #### kbp37 - **Size of downloaded dataset files:** 5.11 MB - **Size of the generated dataset:** 4.7 MB An example of 'train' looks as follows: ```json { "id": "0", "sentence": "<e1> Thom Yorke </e1> of <e2> Radiohead </e2> has included the + for many of his signature distortion sounds using a variety of guitars to achieve various tonal options .", "relation": 27 } ``` #### kbp37_formatted - **Size of downloaded dataset files:** 5.11 MB - **Size of the generated dataset:** 6.58 MB An example of 'train' looks as follows: ```json { "id": "1", "token": ["Leland", "High", "School", "is", "a", "public", "high", "school", "located", "in", "the", "Almaden", "Valley", "in", "San", "Jose", "California", "USA", "in", "the", "San", "Jose", "Unified", "School", "District", "."], "e1_start": 0, "e1_end": 3, "e2_start": 14, "e2_end": 16, "relation": 3 } ``` ### Data Fields #### kbp37 - `id`: the instance id of this sentence, a `string` feature. - `sentence`: the sentence, a `string` feature. - `relation`: the relation label of this instance, an `int` classification label. ```python {"no_relation": 0, "org:alternate_names(e1,e2)": 1, "org:alternate_names(e2,e1)": 2, "org:city_of_headquarters(e1,e2)": 3, "org:city_of_headquarters(e2,e1)": 4, "org:country_of_headquarters(e1,e2)": 5, "org:country_of_headquarters(e2,e1)": 6, "org:founded(e1,e2)": 7, "org:founded(e2,e1)": 8, "org:founded_by(e1,e2)": 9, "org:founded_by(e2,e1)": 10, "org:members(e1,e2)": 11, "org:members(e2,e1)": 12, "org:stateorprovince_of_headquarters(e1,e2)": 13, "org:stateorprovince_of_headquarters(e2,e1)": 14, "org:subsidiaries(e1,e2)": 15, "org:subsidiaries(e2,e1)": 16, "org:top_members/employees(e1,e2)": 17, "org:top_members/employees(e2,e1)": 18, "per:alternate_names(e1,e2)": 19, "per:alternate_names(e2,e1)": 20, "per:cities_of_residence(e1,e2)": 21, "per:cities_of_residence(e2,e1)": 22, "per:countries_of_residence(e1,e2)": 23, "per:countries_of_residence(e2,e1)": 24, "per:country_of_birth(e1,e2)": 25, "per:country_of_birth(e2,e1)": 26, "per:employee_of(e1,e2)": 27, "per:employee_of(e2,e1)": 28, "per:origin(e1,e2)": 29, "per:origin(e2,e1)": 30, "per:spouse(e1,e2)": 31, "per:spouse(e2,e1)": 32, "per:stateorprovinces_of_residence(e1,e2)": 33, "per:stateorprovinces_of_residence(e2,e1)": 34, "per:title(e1,e2)": 35, "per:title(e2,e1)": 36} ``` #### kbp37_formatted - `id`: the instance id of this sentence, a `string` feature. - `token`: the list of tokens of this sentence, using `str.split()`, a `list` of `string` features. - `e1_start`: the 0-based index of the start token of the first argument, an `int` feature. - `e1_end`: the 0-based index of the end token of the first argument, exclusive, an `int` feature. - `e2_start`: the 0-based index of the start token of the second argument, an `int` feature. - `e2_end`: the 0-based index of the end token of the second argument, exclusive, an `int` feature. - `relation`: the relation label of this instance, an `int` classification label (same as `kbp37`). 
### Data Splits | | Train | Dev | Test | |-------|-------|------|------| | kbp37 | 15917 | 1724 | 3405 | | kbp37_formatted | 15807 | 1714 | 3379 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @article{DBLP:journals/corr/ZhangW15a, author = {Dongxu Zhang and Dong Wang}, title = {Relation Classification via Recurrent Neural Network}, journal = {CoRR}, volume = {abs/1508.01006}, year = {2015}, url = {http://arxiv.org/abs/1508.01006}, eprinttype = {arXiv}, eprint = {1508.01006}, timestamp = {Fri, 04 Nov 2022 18:37:50 +0100}, biburl = {https://dblp.org/rec/journals/corr/ZhangW15a.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ### Contributions Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset.
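### Usage Example

The offset fields in `kbp37_formatted` can be joined back into argument strings and the label id mapped to its directional relation name. The following is a minimal sketch; it assumes the dataset is loaded from the `DFKI-SLT/kbp37` repository with the `datasets` library (the card's own note uses the path `kbp37` for a local loading script, so adjust as needed).

```python
# Minimal sketch; the loading path "DFKI-SLT/kbp37" is taken from the repository id
# and the field names follow the card above (end offsets are exclusive).
from datasets import load_dataset

ds = load_dataset("DFKI-SLT/kbp37", name="kbp37_formatted", split="train")

ex = ds[0]
e1 = " ".join(ex["token"][ex["e1_start"]:ex["e1_end"]])
e2 = " ".join(ex["token"][ex["e2_start"]:ex["e2_end"]])
label = ds.features["relation"].int2str(ex["relation"])  # e.g. "org:city_of_headquarters(e1,e2)"

print(f"{e1} -- {label} -- {e2}")
```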
DFKI-SLT/kbp37
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:other", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other", "language:en", "license:other", "relation extraction", "arxiv:1508.01006", "region:us" ]
2023-01-06T12:26:09+00:00
{"annotations_creators": ["other"], "language_creators": ["found"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "KBP37 is an English Relation Classification dataset", "tags": ["relation extraction"], "dataset_info": [{"config_name": "kbp37", "features": [{"name": "id", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "relation", "dtype": {"class_label": {"names": {"0": "no_relation", "1": "org:alternate_names(e1,e2)", "2": "org:alternate_names(e2,e1)", "3": "org:city_of_headquarters(e1,e2)", "4": "org:city_of_headquarters(e2,e1)", "5": "org:country_of_headquarters(e1,e2)", "6": "org:country_of_headquarters(e2,e1)", "7": "org:founded(e1,e2)", "8": "org:founded(e2,e1)", "9": "org:founded_by(e1,e2)", "10": "org:founded_by(e2,e1)", "11": "org:members(e1,e2)", "12": "org:members(e2,e1)", "13": "org:stateorprovince_of_headquarters(e1,e2)", "14": "org:stateorprovince_of_headquarters(e2,e1)", "15": "org:subsidiaries(e1,e2)", "16": "org:subsidiaries(e2,e1)", "17": "org:top_members/employees(e1,e2)", "18": "org:top_members/employees(e2,e1)", "19": "per:alternate_names(e1,e2)", "20": "per:alternate_names(e2,e1)", "21": "per:cities_of_residence(e1,e2)", "22": "per:cities_of_residence(e2,e1)", "23": "per:countries_of_residence(e1,e2)", "24": "per:countries_of_residence(e2,e1)", "25": "per:country_of_birth(e1,e2)", "26": "per:country_of_birth(e2,e1)", "27": "per:employee_of(e1,e2)", "28": "per:employee_of(e2,e1)", "29": "per:origin(e1,e2)", "30": "per:origin(e2,e1)", "31": "per:spouse(e1,e2)", "32": "per:spouse(e2,e1)", "33": "per:stateorprovinces_of_residence(e1,e2)", "34": "per:stateorprovinces_of_residence(e2,e1)", "35": "per:title(e1,e2)", "36": "per:title(e2,e1)"}}}}], "splits": [{"name": "train", "num_bytes": 3570626, "num_examples": 15917}, {"name": "validation", "num_bytes": 388935, "num_examples": 1724}, {"name": "test", "num_bytes": 762806, "num_examples": 3405}], "download_size": 5106673, "dataset_size": 4722367}, {"config_name": "kbp37_formatted", "features": [{"name": "id", "dtype": "string"}, {"name": "token", "sequence": "string"}, {"name": "e1_start", "dtype": "int32"}, {"name": "e1_end", "dtype": "int32"}, {"name": "e2_start", "dtype": "int32"}, {"name": "e2_end", "dtype": "int32"}, {"name": "relation", "dtype": {"class_label": {"names": {"0": "no_relation", "1": "org:alternate_names(e1,e2)", "2": "org:alternate_names(e2,e1)", "3": "org:city_of_headquarters(e1,e2)", "4": "org:city_of_headquarters(e2,e1)", "5": "org:country_of_headquarters(e1,e2)", "6": "org:country_of_headquarters(e2,e1)", "7": "org:founded(e1,e2)", "8": "org:founded(e2,e1)", "9": "org:founded_by(e1,e2)", "10": "org:founded_by(e2,e1)", "11": "org:members(e1,e2)", "12": "org:members(e2,e1)", "13": "org:stateorprovince_of_headquarters(e1,e2)", "14": "org:stateorprovince_of_headquarters(e2,e1)", "15": "org:subsidiaries(e1,e2)", "16": "org:subsidiaries(e2,e1)", "17": "org:top_members/employees(e1,e2)", "18": "org:top_members/employees(e2,e1)", "19": "per:alternate_names(e1,e2)", "20": "per:alternate_names(e2,e1)", "21": "per:cities_of_residence(e1,e2)", "22": "per:cities_of_residence(e2,e1)", "23": "per:countries_of_residence(e1,e2)", "24": "per:countries_of_residence(e2,e1)", "25": "per:country_of_birth(e1,e2)", "26": "per:country_of_birth(e2,e1)", "27": "per:employee_of(e1,e2)", "28": 
"per:employee_of(e2,e1)", "29": "per:origin(e1,e2)", "30": "per:origin(e2,e1)", "31": "per:spouse(e1,e2)", "32": "per:spouse(e2,e1)", "33": "per:stateorprovinces_of_residence(e1,e2)", "34": "per:stateorprovinces_of_residence(e2,e1)", "35": "per:title(e1,e2)", "36": "per:title(e2,e1)"}}}}], "splits": [{"name": "train", "num_bytes": 4943394, "num_examples": 15807}, {"name": "validation", "num_bytes": 539197, "num_examples": 1714}, {"name": "test", "num_bytes": 1055918, "num_examples": 3379}], "download_size": 5106673, "dataset_size": 6581345}]}
2023-04-27T12:04:14+00:00
[ "1508.01006" ]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-other #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other #language-English #license-other #relation extraction #arxiv-1508.01006 #region-us
Dataset Card for "kbp37" ======================== Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: * Repository: kbp37 * Paper: Relation Classification via Recurrent Neural Network * Point of Contact: * Size of downloaded dataset files: 5.11 MB * Size of the generated dataset: 6.58 MB ### Dataset Summary KBP37 is a revision of MIML-RE annotation dataset, provided by Gabor Angeli et al. (2014). They use both the 2010 and 2013 KBP official document collections, as well as a July 2013 dump of Wikipedia as the text corpus for annotation. There are 33811 sentences been annotated. Zhang and Wang made several refinements: 1. They add direction to the relation names, e.g. ''per:employee\_of'' is split into ''per:employee of(e1,e2)'' and ''per:employee of(e2,e1)''. They also replace ''org:parents'' with ''org:subsidiaries'' and replace ''org:member of’ with ''org:member'' (by their reverse directions). 2. They discard low frequency relations such that both directions of each relation occur more than 100 times in the dataset. KBP37 contains 18 directional relations and an additional ''no\_relation'' relation, resulting in 37 relation classes. Note: * There is a formatted version that you can load with 'datasets.load\_dataset('kbp37', name='kbp37\_formatted')'. This version is tokenized with 'URL()' and provides entities as offsets instead of being enclosed by xml tags. It discards some examples, however, that are invalid in the original dataset and lead to entity offset errors, e.g. example train/1276. ### Supported Tasks and Leaderboards ### Languages The language data in KBP37 is in English (BCP-47 en) Dataset Structure ----------------- ### Data Instances #### kbp37 * Size of downloaded dataset files: 5.11 MB * Size of the generated dataset: 4.7 MB An example of 'train' looks as follows: #### kbp37\_formatted * Size of downloaded dataset files: 5.11 MB * Size of the generated dataset: 6.58 MB An example of 'train' looks as follows: ### Data Fields #### kbp37 * 'id': the instance id of this sentence, a 'string' feature. * 'sentence': the sentence, a 'string' features. * 'relation': the relation label of this instance, an 'int' classification label. #### kbp37\_formatted * 'id': the instance id of this sentence, a 'string' feature. * 'token': the list of tokens of this sentence, using 'URL()', a 'list' of 'string' features. * 'e1\_start': the 0-based index of the start token of the first argument', an 'int' feature. * 'e1\_end': the 0-based index of the end token of the first argument, exclusive, an 'int' feature. * 'e2\_start': the 0-based index of the start token of the second argument, an 'int' feature. * 'e2\_end': the 0-based index of the end token of the second argument, exclusive, an 'int' feature. * 'relation': the relation label of this instance, an 'int' classification label (same as ''kbp37'''). 
### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @phucdev for adding this dataset.
[ "### Dataset Summary\n\n\nKBP37 is a revision of MIML-RE annotation dataset, provided by Gabor Angeli et al. (2014). They use both the 2010 and\n2013 KBP official document collections, as well as a July 2013 dump of Wikipedia as the text corpus for annotation.\nThere are 33811 sentences been annotated. Zhang and Wang made several refinements:\n\n\n1. They add direction to the relation names, e.g. ''per:employee\\_of'' is split into ''per:employee of(e1,e2)''\nand ''per:employee of(e2,e1)''. They also replace ''org:parents'' with ''org:subsidiaries'' and replace\n''org:member of’ with ''org:member'' (by their reverse directions).\n2. They discard low frequency relations such that both directions of each relation occur more than 100 times in the\ndataset.\n\n\nKBP37 contains 18 directional relations and an additional ''no\\_relation'' relation, resulting in 37 relation classes.\n\n\nNote:\n\n\n* There is a formatted version that you can load with 'datasets.load\\_dataset('kbp37', name='kbp37\\_formatted')'. This version is tokenized with 'URL()' and\nprovides entities as offsets instead of being enclosed by xml tags. It discards some examples, however, that are invalid in the original dataset and lead\nto entity offset errors, e.g. example train/1276.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe language data in KBP37 is in English (BCP-47 en)\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### kbp37\n\n\n* Size of downloaded dataset files: 5.11 MB\n* Size of the generated dataset: 4.7 MB\nAn example of 'train' looks as follows:", "#### kbp37\\_formatted\n\n\n* Size of downloaded dataset files: 5.11 MB\n* Size of the generated dataset: 6.58 MB\nAn example of 'train' looks as follows:", "### Data Fields", "#### kbp37\n\n\n* 'id': the instance id of this sentence, a 'string' feature.\n* 'sentence': the sentence, a 'string' features.\n* 'relation': the relation label of this instance, an 'int' classification label.", "#### kbp37\\_formatted\n\n\n* 'id': the instance id of this sentence, a 'string' feature.\n* 'token': the list of tokens of this sentence, using 'URL()', a 'list' of 'string' features.\n* 'e1\\_start': the 0-based index of the start token of the first argument', an 'int' feature.\n* 'e1\\_end': the 0-based index of the end token of the first argument, exclusive, an 'int' feature.\n* 'e2\\_start': the 0-based index of the start token of the second argument, an 'int' feature.\n* 'e2\\_end': the 0-based index of the end token of the second argument, exclusive, an 'int' feature.\n* 'relation': the relation label of this instance, an 'int' classification label (same as ''kbp37''').", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @phucdev for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-other #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other #language-English #license-other #relation extraction #arxiv-1508.01006 #region-us \n", "### Dataset Summary\n\n\nKBP37 is a revision of MIML-RE annotation dataset, provided by Gabor Angeli et al. (2014). They use both the 2010 and\n2013 KBP official document collections, as well as a July 2013 dump of Wikipedia as the text corpus for annotation.\nThere are 33811 sentences been annotated. Zhang and Wang made several refinements:\n\n\n1. They add direction to the relation names, e.g. ''per:employee\\_of'' is split into ''per:employee of(e1,e2)''\nand ''per:employee of(e2,e1)''. They also replace ''org:parents'' with ''org:subsidiaries'' and replace\n''org:member of’ with ''org:member'' (by their reverse directions).\n2. They discard low frequency relations such that both directions of each relation occur more than 100 times in the\ndataset.\n\n\nKBP37 contains 18 directional relations and an additional ''no\\_relation'' relation, resulting in 37 relation classes.\n\n\nNote:\n\n\n* There is a formatted version that you can load with 'datasets.load\\_dataset('kbp37', name='kbp37\\_formatted')'. This version is tokenized with 'URL()' and\nprovides entities as offsets instead of being enclosed by xml tags. It discards some examples, however, that are invalid in the original dataset and lead\nto entity offset errors, e.g. example train/1276.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe language data in KBP37 is in English (BCP-47 en)\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### kbp37\n\n\n* Size of downloaded dataset files: 5.11 MB\n* Size of the generated dataset: 4.7 MB\nAn example of 'train' looks as follows:", "#### kbp37\\_formatted\n\n\n* Size of downloaded dataset files: 5.11 MB\n* Size of the generated dataset: 6.58 MB\nAn example of 'train' looks as follows:", "### Data Fields", "#### kbp37\n\n\n* 'id': the instance id of this sentence, a 'string' feature.\n* 'sentence': the sentence, a 'string' features.\n* 'relation': the relation label of this instance, an 'int' classification label.", "#### kbp37\\_formatted\n\n\n* 'id': the instance id of this sentence, a 'string' feature.\n* 'token': the list of tokens of this sentence, using 'URL()', a 'list' of 'string' features.\n* 'e1\\_start': the 0-based index of the start token of the first argument', an 'int' feature.\n* 'e1\\_end': the 0-based index of the end token of the first argument, exclusive, an 'int' feature.\n* 'e2\\_start': the 0-based index of the start token of the second argument, an 'int' feature.\n* 'e2\\_end': the 0-based index of the end token of the second argument, exclusive, an 'int' feature.\n* 'relation': the relation label of this instance, an 'int' classification label (same as ''kbp37''').", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", 
"### Licensing Information", "### Contributions\n\n\nThanks to @phucdev for adding this dataset." ]
93a61f1639ee7e810abc309dc6ac345c0b8affa9
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/tglobal-large-booksum-WIP4-r1 * Dataset: kmfoda/booksum * Config: kmfoda--booksum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-9d5680-2758781772
[ "autotrain", "evaluation", "region:us" ]
2023-01-06T12:59:42+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/tglobal-large-booksum-WIP4-r1", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}}
2023-01-06T14:35:56+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/tglobal-large-booksum-WIP4-r1 * Dataset: kmfoda/booksum * Config: kmfoda--booksum * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @pszemraj for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/tglobal-large-booksum-WIP4-r1\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/tglobal-large-booksum-WIP4-r1\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
93858a3e4e331c5ac6da0d49fbc77268fab96f69
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/tglobal-large-booksum-WIP4-r1 * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-eval-samsum-samsum-08013b-2758881773
[ "autotrain", "evaluation", "region:us" ]
2023-01-06T12:59:46+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/tglobal-large-booksum-WIP4-r1", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2023-01-06T13:08:50+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/tglobal-large-booksum-WIP4-r1 * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @pszemraj for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/tglobal-large-booksum-WIP4-r1\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/tglobal-large-booksum-WIP4-r1\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
d050610418f468b8774c9f1f6ca812515e170d20
This repository holds embeddings for Stable Diffusion 2 768
Zabin/SD2_768_Embedding
[ "region:us" ]
2023-01-06T13:14:55+00:00
{}
2023-01-21T06:08:15+00:00
[]
[]
TAGS #region-us
This repository holds embeddings for Stable Diffusion 2 768
[]
[ "TAGS\n#region-us \n" ]
b6a4440982231c4bf33321bae7d26784504afc04
# Dataset Card for "test_repo_111" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
arnepeine/test_repo_111
[ "region:us" ]
2023-01-06T13:28:55+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 39116602.0, "num_examples": 502}], "download_size": 38127697, "dataset_size": 39116602.0}}
2023-01-07T09:39:25+00:00
[]
[]
TAGS #region-us
# Dataset Card for "test_repo_111" More Information needed
[ "# Dataset Card for \"test_repo_111\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"test_repo_111\"\n\nMore Information needed" ]
a677a23997beb9f0339567b4a7d1e567a9609765
# Dataset Card for "owczpodh-dog-results" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
misza222/owczpodh-dog-results
[ "region:us" ]
2023-01-06T13:44:44+00:00
{"dataset_info": {"features": [{"name": "images", "dtype": "image"}, {"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3813312.0, "num_examples": 8}], "download_size": 3814513, "dataset_size": 3813312.0}}
2023-01-06T13:49:00+00:00
[]
[]
TAGS #region-us
# Dataset Card for "owczpodh-dog-results" More Information needed
[ "# Dataset Card for \"owczpodh-dog-results\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"owczpodh-dog-results\"\n\nMore Information needed" ]
ac561acc3a27ad78d0f159393f048140d6308dab
# Dataset Card for "OwczarekPodhalanski-dog-lr1e-06-max_train_steps1200-results" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
misza222/OwczarekPodhalanski-dog-lr1e-06-max_train_steps800-results
[ "region:us" ]
2023-01-06T14:26:49+00:00
{"dataset_info": {"features": [{"name": "images", "dtype": "image"}, {"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5281596.0, "num_examples": 12}], "download_size": 5282716, "dataset_size": 5281596.0}}
2023-01-06T14:27:01+00:00
[]
[]
TAGS #region-us
# Dataset Card for "OwczarekPodhalanski-dog-lr1e-06-max_train_steps1200-results" More Information needed
[ "# Dataset Card for \"OwczarekPodhalanski-dog-lr1e-06-max_train_steps1200-results\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"OwczarekPodhalanski-dog-lr1e-06-max_train_steps1200-results\"\n\nMore Information needed" ]
c3741a66c486b1a23beefdf6c75b06dba288d4f9
__ODEX__ is an Open-Domain EXecution-based NL-to-Code generation data benchmark. It contains 945 samples with a total of 1,707 human-written test cases, covering intents in four different natural languages -- 439 in English, 90 in Spanish, 164 in Japanese, and 252 in Russian. You can load the dataset by specifying a subset from *en, es, ja, ru* (by default the english subset *en* is loaded): ```python from datasets import load_dataset ds = load_dataset("neulab/odex", "ja", split="test") ``` If you find our dataset useful, please cite the paper ``` @article{wang2022execution, title={Execution-Based Evaluation for Open-Domain Code Generation}, author={Zhiruo Wang, Shuyan Zhou, Daniel Fried, Graham Neubig}, journal={arXiv preprint arXiv:2212.10481}, year={2022} } ```
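To sanity-check a download, each language subset can be loaded and counted with the same `load_dataset` call shown above; the sizes should match the per-language figures listed earlier (439, 90, 164, and 252 samples). A short sketch:

```python
# Sketch: load every language subset of ODEX and report its size.
from datasets import load_dataset

for lang in ["en", "es", "ja", "ru"]:
    ds = load_dataset("neulab/odex", lang, split="test")
    print(lang, len(ds))
```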
neulab/odex
[ "task_categories:text2text-generation", "task_categories:text-generation", "size_categories:n<1K", "language:en", "language:es", "language:ja", "language:ru", "license:cc-by-sa-4.0", "region:us" ]
2023-01-06T14:30:00+00:00
{"language": ["en", "es", "ja", "ru"], "license": "cc-by-sa-4.0", "size_categories": ["n<1K"], "task_categories": ["text2text-generation", "text-generation"]}
2023-02-10T18:01:34+00:00
[]
[ "en", "es", "ja", "ru" ]
TAGS #task_categories-text2text-generation #task_categories-text-generation #size_categories-n<1K #language-English #language-Spanish #language-Japanese #language-Russian #license-cc-by-sa-4.0 #region-us
__ODEX__ is an Open-Domain EXecution-based NL-to-Code generation data benchmark. It contains 945 samples with a total of 1,707 human-written test cases, covering intents in four different natural languages -- 439 in English, 90 in Spanish, 164 in Japanese, and 252 in Russian. You can load the dataset by specifying a subset from *en, es, ja, ru* (by default the english subset *en* is loaded): If you find our dataset useful, please cite the paper
[]
[ "TAGS\n#task_categories-text2text-generation #task_categories-text-generation #size_categories-n<1K #language-English #language-Spanish #language-Japanese #language-Russian #license-cc-by-sa-4.0 #region-us \n" ]
649356656e0639acacea52ee9986c421c6196a6e
# Dataset Card for "OwczarekPodhalanski-dog-lr1e-06-max_train_steps1200-results" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
misza222/OwczarekPodhalanski-dog-lr1e-06-max_train_steps1200-results
[ "region:us" ]
2023-01-06T14:37:09+00:00
{"dataset_info": {"features": [{"name": "images", "dtype": "image"}, {"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2753767.0, "num_examples": 6}], "download_size": 2755049, "dataset_size": 2753767.0}}
2023-01-06T16:09:48+00:00
[]
[]
TAGS #region-us
# Dataset Card for "OwczarekPodhalanski-dog-lr1e-06-max_train_steps1200-results" More Information needed
[ "# Dataset Card for \"OwczarekPodhalanski-dog-lr1e-06-max_train_steps1200-results\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"OwczarekPodhalanski-dog-lr1e-06-max_train_steps1200-results\"\n\nMore Information needed" ]
a81c149b02cbb87a7d5f3fa37ff1edf01bebda76
# Dataset Card for "dreambooth-hackathon-Daphnia" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
misza222/dreambooth-hackathon-Daphnia
[ "region:us" ]
2023-01-06T15:02:34+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 2288884.0, "num_examples": 9}], "download_size": 2242120, "dataset_size": 2288884.0}}
2023-01-06T15:02:40+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dreambooth-hackathon-Daphnia" More Information needed
[ "# Dataset Card for \"dreambooth-hackathon-Daphnia\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dreambooth-hackathon-Daphnia\"\n\nMore Information needed" ]
f6b53caa62bc535e9e71ab39541de447a25e055a
# Dataset Card for "SBC_segmented" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
NathanRoll/SBC_segmented
[ "region:us" ]
2023-01-06T15:17:33+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4103960228.735, "num_examples": 8573}, {"name": "test", "num_bytes": 318277804.0, "num_examples": 728}], "download_size": 3703460386, "dataset_size": 4422238032.735001}}
2023-01-12T21:03:12+00:00
[]
[]
TAGS #region-us
# Dataset Card for "SBC_segmented" More Information needed
[ "# Dataset Card for \"SBC_segmented\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"SBC_segmented\"\n\nMore Information needed" ]
30a01d83ee8f222d39c37f261cc75ce5a89188b6
# Dataset Card for "dreambooth-hackathon-RobertMazurek" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
misza222/dreambooth-hackathon-RobertMazurek
[ "region:us" ]
2023-01-06T15:50:20+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1320903.0, "num_examples": 12}], "download_size": 1321819, "dataset_size": 1320903.0}}
2023-01-06T15:50:26+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dreambooth-hackathon-RobertMazurek" More Information needed
[ "# Dataset Card for \"dreambooth-hackathon-RobertMazurek\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dreambooth-hackathon-RobertMazurek\"\n\nMore Information needed" ]
c5c38c2398d4bde5fbb2a30f7036a9a2a9c1a829
## Testing
0x0x0/autotrain-data-giri
[ "region:us" ]
2023-01-06T15:56:23+00:00
{}
2023-02-05T21:01:05+00:00
[]
[]
TAGS #region-us
## Testing
[ "## Testing" ]
[ "TAGS\n#region-us \n", "## Testing" ]
3a92822cb07f4d7d054896232fd8869a13d15d81
# Dataset Card for Multilingual Grammar Error Correction ## Dataset Description - **Homepage:** https://juancavallotti.com - **Paper:** https://blog.juancavallotti.com/2023/01/06/training-a-multi-language-grammar-error-correction-system/ - **Point of Contact:** Juan Alberto López Cavallotti ### Dataset Summary This dataset can be used to train a transformer model (we used T5) to correct grammar errors in simple sentences written in English, Spanish, French, or German. This dataset was developed as a component for the [Squidigies](https://squidgies.app/) platform. ### Supported Tasks and Leaderboards * **Grammar Error Correction:** By appending the prefix *fix grammar:* to the prompt. * **Language Detection:** By appending the prefix *language:* to the prompt. ### Languages * English * Spanish * French * German ## Dataset Structure ### Data Instances The dataset contains the following instances for each language: * German 32282 sentences. * English 51393 sentences. * Spanish 67672 sentences. * French 67157 sentences. ### Data Fields * `lang`: The language of the sentence * `sentence`: The original sentence. * `modified`: The corrupted sentence. * `transformation`: The primary transformation used by the synthetic data generator. * `sec_transformation`: The secondary transformation (if any) used by the synthetic data generator. ### Data Splits * `train`: There isn't a specific split defined. I recommend using 1k sentences sampled randomly from each language, combined with the SacreBleu metric. ## Dataset Creation ### Curation Rationale This dataset was generated synthetically through code with the help of information about common grammar errors harvested from across the internet. ### Source Data #### Initial Data Collection and Normalization The source grammatical sentences come from various open-source datasets, such as Tatoeba. #### Who are the source language producers? * Juan Alberto López Cavallotti ### Annotations #### Annotation process The annotation is automatic and produced by the generation script. #### Who are the annotators? * Data generation script by Juan Alberto López Cavallotti ### Other Known Limitations The dataset doesn't cover all the possible grammar errors but serves as a starting point that generates fair results. ## Additional Information ### Dataset Curators * Juan Alberto López Cavallotti ### Licensing Information This dataset is distributed under the [Apache 2 License](https://www.apache.org/licenses/LICENSE-2.0) ### Citation Information Please mention this original dataset and the author **Juan Alberto López Cavallotti** ### Contributions * Juan Alberto López Cavallotti
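The two task prefixes described above map directly onto a seq2seq inference loop. The sketch below is illustrative only and assumes a hypothetical T5-style checkpoint fine-tuned on this data; the checkpoint name is a placeholder, not a model released with this card.

```python
# Illustrative sketch: "your-finetuned-t5-gec" is a placeholder checkpoint name.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "your-finetuned-t5-gec"  # hypothetical model fine-tuned on this dataset
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def infer(prompt: str) -> str:
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Grammar correction uses the "fix grammar:" prefix, language detection uses "language:".
print(infer("fix grammar: She don't likes apples."))
print(infer("language: Elle mange une pomme."))
```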
juancavallotti/multilingual-gec
[ "task_categories:translation", "size_categories:100K<n<1M", "language:en", "language:es", "language:fr", "language:de", "license:apache-2.0", "grammar", "gec", "multi language", "language detection", "region:us" ]
2023-01-06T16:07:20+00:00
{"language": ["en", "es", "fr", "de"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["translation"], "pretty_name": "Multi Lingual Grammar Error Correction Dataset", "author": "Juan Alberto L\u00f3pez Cavallotti", "date": "Jan 6, 2023", "tags": ["grammar", "gec", "multi language", "language detection"]}
2023-01-06T18:59:59+00:00
[]
[ "en", "es", "fr", "de" ]
TAGS #task_categories-translation #size_categories-100K<n<1M #language-English #language-Spanish #language-French #language-German #license-apache-2.0 #grammar #gec #multi language #language detection #region-us
# Dataset Card for Multilingual Grammar Error Correction ## Dataset Description - Homepage: URL - Paper: URL - Point of Contact: Juan Alberto López Cavallotti ### Dataset Summary This dataset can be used to train a transformer model (we used T5) to correct grammar errors in simple sentences written in English, Spanish, French, or German. This dataset was developed as a component for the Squidigies platform. ### Supported Tasks and Leaderboards * Grammar Error Correction: By appending the prefix *fix grammar:* to the prompt. * Language Detection: By appending the prefix *language:* to the prompt. ### Languages * English * Spanish * French * German ## Dataset Structure ### Data Instances The dataset contains the following instances for each language: * German 32282 sentences. * English 51393 sentences. * Spanish 67672 sentences. * French 67157 sentences. ### Data Fields * 'lang': The language of the sentence * 'sentence': The original sentence. * 'modified': The corrupted sentence. * 'transformation': The primary transformation used by the synthetic data generator. * 'sec_transformation': The secondary transformation (if any) used by the synthetic data generator. ### Data Splits * 'train': There isn't a specific split defined. I recommend using 1k sentences sampled randomly from each language, combined with the SacreBleu metric. ## Dataset Creation ### Curation Rationale This dataset was generated synthetically through code with the help of information about common grammar errors harvested from across the internet. ### Source Data #### Initial Data Collection and Normalization The source grammatical sentences come from various open-source datasets, such as Tatoeba. #### Who are the source language producers? * Juan Alberto López Cavallotti ### Annotations #### Annotation process The annotation is automatic and produced by the generation script. #### Who are the annotators? * Data generation script by Juan Alberto López Cavallotti ### Other Known Limitations The dataset doesn't cover all the possible grammar errors but serves as a starting point that generates fair results. ## Additional Information ### Dataset Curators * Juan Alberto López Cavallotti ### Licensing Information This dataset is distributed under the Apache 2 License Please mention this original dataset and the author Juan Alberto López Cavallotti ### Contributions * Juan Alberto López Cavallotti
[ "# Dataset Card for Multilingual Grammar Error Correction", "## Dataset Description\n\n- Homepage: URL\n- Paper: URL\n- Point of Contact: Juan Alberto López Cavallotti", "### Dataset Summary\n\nThis dataset can be used to train a transformer model (we used T5) to correct grammar errors in simple sentences written in English, Spanish, French, or German. \nThis dataset was developed as a component for the Squidigies platform.", "### Supported Tasks and Leaderboards\n\n* Grammar Error Correction: By appending the prefix *fix grammar:* to the prrompt.\n* Language Detection: By appending the prefix: *language:* to the prompt.", "### Languages\n\n* English\n* Spanish\n* French\n* German", "## Dataset Structure", "### Data Instances\n\nThe dataset contains the following instances for each language:\n* German 32282 sentences.\n* English 51393 sentences.\n* Spanish 67672 sentences.\n* French 67157 sentences.", "### Data Fields\n\n* 'lang': The language of the sentence\n* 'sentence': The original sentence. \n* 'modified': The corrupted sentence.\n* 'transformation': The primary transformation used by the synthetic data generator.\n* 'sec_transformation': The secondary transformation (if any) used by the synthetic data generator.", "### Data Splits\n\n* 'train': There isn't a specific split defined. I recommend using 1k sentences sampled randomly from each language, combined with the SacreBleu metric.", "## Dataset Creation", "### Curation Rationale\n\nThis dataset was generated synthetically through code with the help of information of common grammar errors harvested throughout the internet.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe source grammatical sentences come from various open-source datasets, such as Tatoeba.", "#### Who are the source language producers?\n\n* Juan Alberto López Cavallotti", "### Annotations", "#### Annotation process\n\nThe annotation is automatic and produced by the generation script.", "#### Who are the annotators?\n\n* Data generation script by Juan Alberto López Cavallotti", "### Other Known Limitations\n\nThe dataset doesn't cover all the possible grammar errors but serves as a starting point that generates fair results.", "## Additional Information", "### Dataset Curators\n\n* Juan Alberto López Cavallotti", "### Licensing Information\n\nThis dataset is distributed under the Apache 2 License\n\n\n\nPlease mention this original dataset and the author Juan Alberto López Cavallotti", "### Contributions\n\n* Juan Alberto López Cavallotti" ]
[ "TAGS\n#task_categories-translation #size_categories-100K<n<1M #language-English #language-Spanish #language-French #language-German #license-apache-2.0 #grammar #gec #multi language #language detection #region-us \n", "# Dataset Card for Multilingual Grammar Error Correction", "## Dataset Description\n\n- Homepage: URL\n- Paper: URL\n- Point of Contact: Juan Alberto López Cavallotti", "### Dataset Summary\n\nThis dataset can be used to train a transformer model (we used T5) to correct grammar errors in simple sentences written in English, Spanish, French, or German. \nThis dataset was developed as a component for the Squidigies platform.", "### Supported Tasks and Leaderboards\n\n* Grammar Error Correction: By appending the prefix *fix grammar:* to the prrompt.\n* Language Detection: By appending the prefix: *language:* to the prompt.", "### Languages\n\n* English\n* Spanish\n* French\n* German", "## Dataset Structure", "### Data Instances\n\nThe dataset contains the following instances for each language:\n* German 32282 sentences.\n* English 51393 sentences.\n* Spanish 67672 sentences.\n* French 67157 sentences.", "### Data Fields\n\n* 'lang': The language of the sentence\n* 'sentence': The original sentence. \n* 'modified': The corrupted sentence.\n* 'transformation': The primary transformation used by the synthetic data generator.\n* 'sec_transformation': The secondary transformation (if any) used by the synthetic data generator.", "### Data Splits\n\n* 'train': There isn't a specific split defined. I recommend using 1k sentences sampled randomly from each language, combined with the SacreBleu metric.", "## Dataset Creation", "### Curation Rationale\n\nThis dataset was generated synthetically through code with the help of information of common grammar errors harvested throughout the internet.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe source grammatical sentences come from various open-source datasets, such as Tatoeba.", "#### Who are the source language producers?\n\n* Juan Alberto López Cavallotti", "### Annotations", "#### Annotation process\n\nThe annotation is automatic and produced by the generation script.", "#### Who are the annotators?\n\n* Data generation script by Juan Alberto López Cavallotti", "### Other Known Limitations\n\nThe dataset doesn't cover all the possible grammar errors but serves as a starting point that generates fair results.", "## Additional Information", "### Dataset Curators\n\n* Juan Alberto López Cavallotti", "### Licensing Information\n\nThis dataset is distributed under the Apache 2 License\n\n\n\nPlease mention this original dataset and the author Juan Alberto López Cavallotti", "### Contributions\n\n* Juan Alberto López Cavallotti" ]
66c85606ecdd55bcf2c7d44145e966a3fdba0b28
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
Achitha/tamildata
[ "task_categories:automatic-speech-recognition", "language:ta", "region:us" ]
2023-01-06T17:10:31+00:00
{"language": ["ta"], "task_categories": ["automatic-speech-recognition"], "pretty_name": "tamildata"}
2023-01-08T15:35:38+00:00
[]
[ "ta" ]
TAGS #task_categories-automatic-speech-recognition #language-Tamil #region-us
# Dataset Card for Dataset Name ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Dataset Name", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#task_categories-automatic-speech-recognition #language-Tamil #region-us \n", "# Dataset Card for Dataset Name", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
9f62b44bacade997a5b23ec05fb37874013e4010
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/tglobal-large-booksum-WIP3-K-r4 * Dataset: kmfoda/booksum * Config: kmfoda--booksum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-ee4836-2761681799
[ "autotrain", "evaluation", "region:us" ]
2023-01-06T23:08:28+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/tglobal-large-booksum-WIP3-K-r4", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}}
2023-01-07T00:06:34+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/tglobal-large-booksum-WIP3-K-r4 * Dataset: kmfoda/booksum * Config: kmfoda--booksum * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @pszemraj for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/tglobal-large-booksum-WIP3-K-r4\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/tglobal-large-booksum-WIP3-K-r4\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
4fcb5b9a0332dda9b7a80d7a4ebc15fb337b9e0b
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/tglobal-large-booksum-WIP3-K-r4 * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-eval-samsum-samsum-b53b11-2761781800
[ "autotrain", "evaluation", "region:us" ]
2023-01-06T23:08:32+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/tglobal-large-booksum-WIP3-K-r4", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2023-01-06T23:15:44+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/tglobal-large-booksum-WIP3-K-r4 * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @pszemraj for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/tglobal-large-booksum-WIP3-K-r4\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/tglobal-large-booksum-WIP3-K-r4\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
e96e7541e24931c5a2b7d0018865b666ad5dca0f
# Dataset Card for "pubtator-central-bigbio-kb-2022-12-18" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
gabrielaltay/pubtator-central-bigbio-kb-2022-12-18
[ "region:us" ]
2023-01-07T05:19:49+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "document_id", "dtype": "string"}, {"name": "passages", "list": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "text", "sequence": "string"}, {"name": "offsets", "sequence": {"list": "int32"}}]}, {"name": "entities", "list": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "text", "sequence": "string"}, {"name": "offsets", "sequence": {"list": "int32"}}, {"name": "normalized", "list": [{"name": "db_name", "dtype": "string"}, {"name": "db_id", "dtype": "string"}]}]}, {"name": "events", "list": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "trigger", "struct": [{"name": "text", "sequence": "string"}, {"name": "offsets", "sequence": {"list": "int32"}}]}, {"name": "arguments", "list": [{"name": "role", "dtype": "string"}, {"name": "ref_id", "dtype": "string"}]}]}, {"name": "coreferences", "list": [{"name": "id", "dtype": "string"}, {"name": "entity_ids", "sequence": "string"}]}, {"name": "relations", "list": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "arg1_id", "dtype": "string"}, {"name": "arg2_id", "dtype": "string"}, {"name": "normalized", "list": [{"name": "db_name", "dtype": "string"}, {"name": "db_id", "dtype": "string"}]}]}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 101493304127, "num_examples": 33653973}, {"name": "validation", "num_bytes": 2115702473, "num_examples": 701124}, {"name": "test", "num_bytes": 2117460487, "num_examples": 701125}], "download_size": 49786905438, "dataset_size": 105726467087}}
2023-01-07T05:51:13+00:00
[]
[]
TAGS #region-us
# Dataset Card for "pubtator-central-bigbio-kb-2022-12-18" More Information needed
[ "# Dataset Card for \"pubtator-central-bigbio-kb-2022-12-18\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"pubtator-central-bigbio-kb-2022-12-18\"\n\nMore Information needed" ]
6fc10e1dafa2047633e1376b0324ae11c61ad30b
# Style Embedding - illl_liil ![illl_liil_showcase.png](https://s3.amazonaws.com/moonup/production/uploads/1673077352168-6366fabccbf2cf32918c2830.png) ## Usage To use an embedding, download the .pt file and place it in "\stable-diffusion-webui\embeddings". In your prompt, write ```"illl_liil_style-15000"```. ## Original Artist https://twitter.com/llii_ilil ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
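As a hedged alternative to the webui workflow described above, recent versions of the diffusers library can load A1111-style textual inversion embeddings directly. The base checkpoint and the local filename below are assumptions for illustration, not part of this release.

```python
# Sketch only: base model choice and embedding filename are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the downloaded .pt embedding and bind it to the trigger token from the card.
pipe.load_textual_inversion("./illl_liil_style-15000.pt", token="illl_liil_style-15000")

image = pipe("a landscape painting, illl_liil_style-15000").images[0]
image.save("illl_liil_sample.png")
```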
kxly/illl_liil_style
[ "language:en", "license:creativeml-openrail-m", "stable-diffusion", "text-to-image", "image-to-image", "region:us" ]
2023-01-07T07:39:31+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "pretty_name": "illl_liil Style", "thumbnail": "https://huggingface.co/datasets/kxly/illl_liil_style/blob/main/illl_liil_showcase.png", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false}
2023-01-07T07:47:55+00:00
[]
[ "en" ]
TAGS #language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #image-to-image #region-us
# Style Embedding - illl_liil !illl_liil_showcase.png ## Usage To use an embedding, download the .pt file and place it in "\stable-diffusion-webui\embeddings". In your prompt, write . ## Original Artist URL ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license here
[ "# Style Embedding - illl_liil\n\n!illl_liil_showcase.png", "## Usage\n\nTo use an embedding, download the .pt file and place it in \"\\stable-diffusion-webui\\embeddings\".\n\nIn your prompt, write .", "## Original Artist\n\nURL", "## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies: \n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content \n2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)\nPlease read the full license here" ]
[ "TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #image-to-image #region-us \n", "# Style Embedding - illl_liil\n\n!illl_liil_showcase.png", "## Usage\n\nTo use an embedding, download the .pt file and place it in \"\\stable-diffusion-webui\\embeddings\".\n\nIn your prompt, write .", "## Original Artist\n\nURL", "## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies: \n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content \n2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)\nPlease read the full license here" ]
4ba4d6bbe054c63542a9d455489f3e6372240167
# Dataset Card for "mel_spectogram_bird_audio" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
rachit8562/mel_spectogram_bird_audio
[ "region:us" ]
2023-01-07T08:02:49+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Chlorischloris", "1": "Columbapalumbus", "2": "Corvusfrugilegus", "3": "Delichonurbicum", "4": "Dendrocoposmajor", "5": "Passermontanus", "6": "Phoenicurusochruros", "7": "Sittaeuropaea", "8": "Turdusmerula", "9": "Turduspilaris"}}}}], "splits": [{"name": "train", "num_bytes": 1732741674.28153, "num_examples": 61376}, {"name": "test", "num_bytes": 311839995.5024702, "num_examples": 10832}], "download_size": 1955670248, "dataset_size": 2044581669.7840002}}
2023-01-07T08:18:21+00:00
[]
[]
TAGS #region-us
# Dataset Card for "mel_spectogram_bird_audio" More Information needed
[ "# Dataset Card for \"mel_spectogram_bird_audio\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"mel_spectogram_bird_audio\"\n\nMore Information needed" ]
f2708e1df214319cb925fe73d9b229dd9a236b15
# Dataset Card for "phone-recognition-generated" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Nithiwat/phone-recognition-generated
[ "region:us" ]
2023-01-07T09:22:39+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "ipa", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1355048763.96, "num_examples": 6860}], "download_size": 966944673, "dataset_size": 1355048763.96}}
2023-01-07T09:49:37+00:00
[]
[]
TAGS #region-us
# Dataset Card for "phone-recognition-generated" More Information needed
[ "# Dataset Card for \"phone-recognition-generated\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"phone-recognition-generated\"\n\nMore Information needed" ]
75bafc6a17b4bdbf5ae1ea5ef04e3b5e5fd5a01f
# Dataset Card for "new_test_repo" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pyakymenko/new_test_repo
[ "region:us" ]
2023-01-07T09:26:48+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 39116602.0, "num_examples": 502}], "download_size": 38127697, "dataset_size": 39116602.0}}
2023-01-07T09:30:15+00:00
[]
[]
TAGS #region-us
# Dataset Card for "new_test_repo" More Information needed
[ "# Dataset Card for \"new_test_repo\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"new_test_repo\"\n\nMore Information needed" ]
17a9b72bc74139cc23e543ca03e41e19d82008e4
This dataset contains 10 images of the Asterix and Obelix cartoon characters, taken from the internet.
nsanghi/axterix-obelix
[ "task_categories:image-to-image", "size_categories:n<1K", "language:en", "license:apache-2.0", "asterix", "diffusion", "dreambooth", "region:us" ]
2023-01-07T10:53:50+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["image-to-image"], "tags": ["asterix", "diffusion", "dreambooth"]}
2023-01-07T11:00:21+00:00
[]
[ "en" ]
TAGS #task_categories-image-to-image #size_categories-n<1K #language-English #license-apache-2.0 #asterix #diffusion #dreambooth #region-us
This dataset contains 10 images of the Asterix and Obelix cartoon characters, taken from the internet.
[]
[ "TAGS\n#task_categories-image-to-image #size_categories-n<1K #language-English #license-apache-2.0 #asterix #diffusion #dreambooth #region-us \n" ]
9b36b13820de339d287a94242dbcfe69a002bd11
hyper, LoRA
Toraong/Hypernetwork
[ "license:unknown", "region:us" ]
2023-01-07T11:09:42+00:00
{"license": "unknown"}
2023-03-04T03:18:22+00:00
[]
[]
TAGS #license-unknown #region-us
hyper, LoRA
[]
[ "TAGS\n#license-unknown #region-us \n" ]
ebac76a1859f28ce4c387f2a3fa84c3138baa9e8
# Dataset Card for jacob-soni ## Dataset Description The dataset contains images of my pet - Jacob, currently 7 years old. ### Dataset Curators The data was originally collected by Ashish Soni and his family. ### Licensing Information The jacob-soni dataset version 1.0.0 is released under the Apache-2.0 License.
Ashish08/jacob-soni
[ "size_categories:n<1K", "source_datasets:original", "language:en", "license:apache-2.0", "images ", "pet", "dog", "german-shepherd", "dreambooth-hackathon", "region:us" ]
2023-01-07T11:25:50+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "source_datasets": ["original"], "pretty_name": "My Dog - Jacob Soni", "tags": ["images ", "pet", "dog", "german-shepherd", "dreambooth-hackathon"]}
2023-01-07T15:05:28+00:00
[]
[ "en" ]
TAGS #size_categories-n<1K #source_datasets-original #language-English #license-apache-2.0 #images #pet #dog #german-shepherd #dreambooth-hackathon #region-us
# Dataset Card for jacob-soni ## Dataset Description The dataset contains images of my pet - Jacob, currently 7 years old. ### Dataset Curators The data was originally collected by Ashish Soni and his family. ### Licensing Information The jacob-soni dataset version 1.0.0 is released under the Apache-2.0 License.
[ "# Dataset Card for jacob-soni", "## Dataset Description\n\n The dataset contains images of my pet - Jacob, currently 7 years old.", "### Dataset Curators\n\nThe data was originally collected by Ashish Soni and his family.", "### Licensing Information\n\nThe jacob-soni dataset version 1.0.0 is released under the Apache-2.0 License." ]
[ "TAGS\n#size_categories-n<1K #source_datasets-original #language-English #license-apache-2.0 #images #pet #dog #german-shepherd #dreambooth-hackathon #region-us \n", "# Dataset Card for jacob-soni", "## Dataset Description\n\n The dataset contains images of my pet - Jacob, currently 7 years old.", "### Dataset Curators\n\nThe data was originally collected by Ashish Soni and his family.", "### Licensing Information\n\nThe jacob-soni dataset version 1.0.0 is released under the Apache-2.0 License." ]
3915caacef63079345383d2ce5ad96b842ee4bfb
# Dataset Card for "eclassTrainST" This NLI dataset can be used to fine-tune models for the task of sentence similarity. It consists of names and descriptions of pump properties from the ECLASS standard.
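As a hedged illustration of the fine-tuning use case mentioned above, the (text, entailment, contradiction) columns can be fed to a triplet-style loss with the sentence-transformers library; the base checkpoint and hyperparameters below are arbitrary choices for the sketch, not recommendations from the dataset authors.

```python
# Minimal sketch, assuming the sentence-transformers library; all values are illustrative.
from datasets import load_dataset
from sentence_transformers import InputExample, SentenceTransformer, losses
from torch.utils.data import DataLoader

train = load_dataset("gart-labor/eclassTrainST", split="train")
examples = [
    InputExample(texts=[row["text"], row["entailment"], row["contradiction"]])
    for row in train.select(range(10_000))  # subsample to keep the sketch small
]

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
loader = DataLoader(examples, shuffle=True, batch_size=32)
loss = losses.MultipleNegativesRankingLoss(model)  # anchor / positive / hard negative triplets
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=100)
```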
gart-labor/eclassTrainST
[ "task_categories:sentence-similarity", "size_categories:100K<n<1M", "language:en", "region:us" ]
2023-01-07T12:18:12+00:00
{"language": ["en"], "size_categories": ["100K<n<1M"], "task_categories": ["sentence-similarity"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "entailment", "dtype": "string"}, {"name": "contradiction", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 327174992, "num_examples": 698880}, {"name": "eval", "num_bytes": 219201779, "num_examples": 450912}], "download_size": 46751846, "dataset_size": 546376771}}
2023-01-07T12:19:59+00:00
[]
[ "en" ]
TAGS #task_categories-sentence-similarity #size_categories-100K<n<1M #language-English #region-us
# Dataset Card for "eclassTrainST" This NLI dataset can be used to fine-tune models for the task of sentence similarity. It consists of names and descriptions of pump properties from the ECLASS standard.
[ "# Dataset Card for \"eclassTrainST\"\n\nThis NLI dataset can be used to fine-tune models for the task of sentence similarity. It consists of names and descriptions of pump properties from the ECLASS standard." ]
[ "TAGS\n#task_categories-sentence-similarity #size_categories-100K<n<1M #language-English #region-us \n", "# Dataset Card for \"eclassTrainST\"\n\nThis NLI dataset can be used to fine-tune models for the task of sentence similarity. It consists of names and descriptions of pump properties from the ECLASS standard." ]
56030305503dec3b96cba39bc8f9844b5535be41
# Dataset Card for "eclassCorpus" This dataset consists of names and descriptions of ECLASS-standard pump properties. It can be used to evaluate models on the task of matching paraphrases to ECLASS-standard pump properties based on their semantics.
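A hedged sketch of such an evaluation: encode each paraphrase (`query`) and each property `name`, then check whether nearest-neighbour search recovers the property from the same row. The interpretation of the columns and the bi-encoder checkpoint are assumptions, not part of the dataset release.

```python
# Evaluation sketch; assumes each row pairs a paraphrase ("query") with its property ("name").
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, util

corpus = load_dataset("gart-labor/eclassCorpus", split="train")
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

name_emb = model.encode(list(corpus["name"]), convert_to_tensor=True)
query_emb = model.encode(list(corpus["query"]), convert_to_tensor=True)

# Top-1 retrieval; ground truth is taken to be the same-row property, an assumption.
hits = util.semantic_search(query_emb, name_emb, top_k=1)
top1 = sum(h[0]["corpus_id"] == i for i, h in enumerate(hits)) / len(hits)
print(f"top-1 matching accuracy: {top1:.3f}")
```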
gart-labor/eclassCorpus
[ "task_categories:sentence-similarity", "size_categories:n<1K", "language:en", "doi:10.57967/hf/0410", "region:us" ]
2023-01-07T12:38:01+00:00
{"language": ["en"], "size_categories": ["n<1K"], "task_categories": ["sentence-similarity"], "dataset_info": {"features": [{"name": "did", "dtype": "int64"}, {"name": "query", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "datatype", "dtype": "string"}, {"name": "unit", "dtype": "string"}, {"name": "IRDI", "dtype": "string"}, {"name": "metalabel", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 137123, "num_examples": 672}], "download_size": 0, "dataset_size": 137123}}
2023-01-07T12:42:19+00:00
[]
[ "en" ]
TAGS #task_categories-sentence-similarity #size_categories-n<1K #language-English #doi-10.57967/hf/0410 #region-us
# Dataset Card for "eclassCorpus" This dataset consists of names and descriptions of ECLASS-standard pump properties. It can be used to evaluate models on the task of matching paraphrases to ECLASS-standard pump properties based on their semantics.
[ "# Dataset Card for \"eclassCorpus\"\n\nThis dataset consists of names and descriptions of ECLASS-standard pump properties. It can be used to evaluate models on the task of matching paraphrases to ECLASS-standard pump properties based on their semantics." ]
[ "TAGS\n#task_categories-sentence-similarity #size_categories-n<1K #language-English #doi-10.57967/hf/0410 #region-us \n", "# Dataset Card for \"eclassCorpus\"\n\nThis dataset consists of names and descriptions of ECLASS-standard pump properties. It can be used to evaluate models on the task of matching paraphrases to ECLASS-standard pump properties based on their semantics." ]