| Column | Type | Min length | Max length |
|---|---|---|---|
| sha | string | 40 | 40 |
| text | string | 1 | 13.4M |
| id | string | 2 | 117 |
| tags | sequence | 1 | 7.91k |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 875k |
| last_modified | string | 25 | 25 |
| arxiv | sequence | 0 | 25 |
| languages | sequence | 0 | 7.91k |
| tags_str | string | 17 | 159k |
| text_str | string | 1 | 447k |
| text_lists | sequence | 0 | 352 |
| processed_texts | sequence | 1 | 353 |
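The schema above summarizes each column by its minimum and maximum string or sequence length. A minimal sketch of how such statistics can be computed, using toy rows rather than the actual dataset contents:

```python
# Sketch: compute per-column min/max lengths like the "stringlengths" /
# "sequencelengths" summary above. The rows here are toy data, not the
# actual dataset contents.
rows = [
    {"sha": "7b3565ba7321585678cbd4f057163c2a202ec4ee", "tags": ["autotrain", "evaluation"]},
    {"sha": "4fbd8efbb2c158e502928720f40d88d00c5fe315", "tags": ["region:us"]},
]

def length_stats(rows, column):
    """Return (min, max) length of a string or sequence column."""
    lengths = [len(row[column]) for row in rows]
    return min(lengths), max(lengths)

print(length_stats(rows, "sha"))   # SHA-1 hex digests are always 40 chars
print(length_stats(rows, "tags"))
```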
7b3565ba7321585678cbd4f057163c2a202ec4ee
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-arxiv * Dataset: scientific_papers To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Blaise_g](https://huggingface.co/Blaise_g) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-c967fc98-8385125
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T20:43:42+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["scientific_papers"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-arxiv", "metrics": ["bertscore", "meteor"], "dataset_name": "scientific_papers", "dataset_config": "pubmed", "dataset_split": "test", "col_mapping": {"text": "article", "target": "abstract"}}}
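The `eval_info` blob above maps the evaluator's generic column names (`text`, `target`) onto the dataset's native columns (`article`, `abstract`). A minimal sketch of applying such a `col_mapping`, using a toy example row (the metadata string is abridged from the one shown for this repository):

```python
import json

# Sketch: read an AutoTrain eval_info blob and apply its col_mapping,
# renaming dataset columns to the generic names the evaluator expects.
metadata = json.loads("""{"type": "predictions",
  "eval_info": {"task": "summarization",
                "model": "google/bigbird-pegasus-large-arxiv",
                "dataset_name": "scientific_papers",
                "dataset_config": "pubmed",
                "dataset_split": "test",
                "col_mapping": {"text": "article", "target": "abstract"}}}""")

col_mapping = metadata["eval_info"]["col_mapping"]

# A toy row using the dataset's native column names (not real data).
example = {"article": "We study transformers...", "abstract": "A study of transformers."}

# generic name -> value pulled from the native column
mapped = {generic: example[native] for generic, native in col_mapping.items()}
print(mapped["text"])    # article body
print(mapped["target"])  # reference abstract
```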
2022-06-29T00:09:37+00:00
[]
[]
4fbd8efbb2c158e502928720f40d88d00c5fe315
Trec6 with 10% noise
rungalileo/mltakehome
[ "region:us" ]
2022-06-28T21:58:13+00:00
{}
2022-06-28T21:58:48+00:00
[]
[]
a053764e08bbcd9d2af53f3c40738f797020e1f3
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: abhishek/convnext-tiny-finetuned-dogfood * Dataset: lewtun/dog_food To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@haesun](https://huggingface.co/haesun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-34433c04-8625146
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T23:38:57+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lewtun/dog_food"], "eval_info": {"task": "image_multi_class_classification", "model": "abhishek/convnext-tiny-finetuned-dogfood", "metrics": [], "dataset_name": "lewtun/dog_food", "dataset_config": "lewtun--dog_food", "dataset_split": "test", "col_mapping": {"image": "image", "target": "label"}}}
2022-06-28T23:39:30+00:00
[]
[]
f7b1cf0b7808c73459132d36db9bcb63c7293d87
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: eslamxm/mbart-finetune-en-cnn * Dataset: cnn_dailymail To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@](https://huggingface.co/) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-72edae24-8665151
[ "autotrain", "evaluation", "region:us" ]
2022-06-29T01:15:21+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "eslamxm/mbart-finetune-en-cnn", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "train", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-06-30T04:04:02+00:00
[]
[]
ede344115166115e131c24dd78e30feb71298b66
# Dataset Card for "test" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Preprocessing](#dataset-preprocessing) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [RedCaps homepage](https://redcaps.xyz/) - **Repository:** [RedCaps repository](https://github.com/redcaps-dataset/redcaps-downloader) - **Paper:** [RedCaps: web-curated image-text data created by the people, for the people](https://arxiv.org/abs/2111.11431) - **Leaderboard:** - **Point of Contact:** [Karan Desai](mailto:[email protected]) ### Dataset Summary ### Dataset Preprocessing
ThierryZhou/test
[ "task_categories:image-to-text", "task_ids:image-captioning", "annotations_creators:found", "language_creators:found", "source_datasets:original", "language:en", "arxiv:2111.11431", "region:us" ]
2022-06-29T01:31:45+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "source_datasets": ["original"], "task_categories": ["image-to-text"], "task_ids": ["image-captioning"], "pretty_name": "Test"}
2024-01-29T12:47:32+00:00
[ "2111.11431" ]
[ "en" ]
cc5900b7f586b2a17b586bd46f72b3fca7f89e0a
# Dataset Card for ERWT Heritage Made Digital Newspapers training data ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset contains text extracted at the page level from historic digitised newspapers from the [Heritage Made Digital](https://bl.iro.bl.uk/collections/9a6a4cdd-2bfe-47bb-8c14-c0a5d100501f?locale=en) newspaper digitisation program. The newspapers in the dataset were published between 1800 and 1870. The data was primarily created as a dataset for training 'time-aware' language models. The dataset contains text generated from Optical Character Recognition software on digitised newspaper pages. This dataset includes the plain text from the OCR alongside some minimal metadata associated with the newspaper from which the text is derived and OCR confidence score information generated from the OCR software. 
#### Breakdown of word counts over time Whilst the dataset covers a time period between 1800 and 1870, the number of words in the dataset is not distributed evenly across time in this dataset. The figures below give a sense of the breakdown over time in terms of the number of words which appear in the dataset. | year | total word_count | unique words | |-------:|-------------------:|---------------:| | 1800 | 282,554,255 | 15,506,515 | | 1810 | 328,817,174 | 18,295,974 | | 1820 | 328,817,174 | 18,295,974 | | 1830 | 194,958,624 | 10,816,938 | | 1840 | 305,545,086 | 17,018,560 | | 1850 | 376,194,785 | 20,942,876 | | 1860 | 305,545,086 | 17,018,560 | | 1870 | 51,241,037 | 2,284,803 | ![Total and unique word count over time](readme_figs/total_unique_word_count.png) ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases ![](https://huggingface.co/datasets/davanstrien/hmd-erwt-training/resolve/main/readme_figs/mean_ocr_wc_over_time.png) [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
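The breakdown table in the card above reports total and unique word counts per year. A minimal sketch of that aggregation, using toy stand-ins for OCR'd newspaper pages:

```python
from collections import Counter, defaultdict

# Sketch: aggregate total and unique word counts per year, mirroring the
# breakdown table in the card above. The pages below are toy stand-ins
# for OCR'd newspaper text, not real dataset contents.
pages = [
    (1800, "the quick brown fox"),
    (1800, "the lazy dog"),
    (1810, "steam engines and iron rails"),
]

totals = Counter()
vocab = defaultdict(set)
for year, text in pages:
    words = text.split()
    totals[year] += len(words)
    vocab[year].update(words)

for year in sorted(totals):
    print(year, totals[year], len(vocab[year]))
```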
Livingwithmachines/hmd-erwt-training
[ "task_categories:fill-mask", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:cc0-1.0", "library,lam,newspapers,1800-1900", "region:us" ]
2022-06-29T05:12:56+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["fill-mask"], "task_ids": ["masked-language-modeling"], "pretty_name": "Dataset Card for ERWT Heritage Made Digital Newspapers training data", "tags": ["library,lam,newspapers,1800-1900"]}
2022-11-18T14:10:28+00:00
[]
[ "en" ]
de6d2333628b3fa6893b658f77e0a4d72412be6c
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: huggingface-course/bert-finetuned-ner * Dataset: conll2003 To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@](https://huggingface.co/) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-b20351ec-8855170
[ "autotrain", "evaluation", "region:us" ]
2022-06-29T06:26:28+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["conll2003"], "eval_info": {"task": "entity_extraction", "model": "huggingface-course/bert-finetuned-ner", "metrics": [], "dataset_name": "conll2003", "dataset_config": "conll2003", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-06-29T06:27:43+00:00
[]
[]
2a5b793520882599e415e356621c97093eb7520c
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: lewtun/autotrain-acronym-identification-7324788 * Dataset: acronym_identification To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@bonbon](https://huggingface.co/bonbon) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-81757492-8865171
[ "autotrain", "evaluation", "region:us" ]
2022-06-29T06:35:04+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["acronym_identification"], "eval_info": {"task": "entity_extraction", "model": "lewtun/autotrain-acronym-identification-7324788", "metrics": [], "dataset_name": "acronym_identification", "dataset_config": "default", "dataset_split": "train", "col_mapping": {"tokens": "tokens", "tags": "labels"}}}
2022-06-29T06:38:09+00:00
[]
[]
2a6b79b3e3c939aebb149d9109d7cdb78a9c2d3b
# Dataset Card for SAMSum Corpus ## Dataset Description ### Links - **Homepage:** https://arxiv.org/abs/1911.12237v2 - **Repository:** https://arxiv.org/abs/1911.12237v2 - **Paper:** https://arxiv.org/abs/1911.12237v2 - **Point of Contact:** https://huggingface.co/knkarthick ### Dataset Summary The SAMSum dataset contains about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English. Linguists were asked to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. The style and register are diversified - conversations could be informal, semi-formal or formal, and they may contain slang words, emoticons and typos. Then, the conversations were annotated with summaries. It was assumed that summaries should be a concise brief of what people talked about in the conversation in third person. The SAMSum dataset was prepared by Samsung R&D Institute Poland and is distributed for research purposes (non-commercial licence: CC BY-NC-ND 4.0). ### Languages English ## Dataset Structure ### Data Instances The SAMSum dataset is made of 16369 conversations distributed uniformly into 4 groups based on the number of utterances in conversations: 3-6, 7-12, 13-18 and 19-30. Each utterance contains the name of the speaker. Most conversations consist of dialogues between two interlocutors (about 75% of all conversations); the rest are between three or more people. The first instance in the training set: {'id': '13818513', 'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.', 'dialogue': "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"} ### Data Fields - dialogue: text of dialogue. - summary: human written summary of the dialogue. - id: unique file id of an example. 
### Data Splits - train: 14732 - val: 818 - test: 819 ## Dataset Creation ### Curation Rationale In paper: In the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assistant and a client buying petrol. As a consequence, we decided to create a chat dialogue dataset by constructing such conversations that would epitomize the style of a messenger app. ### Who are the source language producers? linguists ### Who are the annotators? language experts ### Annotation process In paper: Each dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one reference summary. 
## Licensing Information non-commercial licence: CC BY-NC-ND 4.0 ## Citation Information ``` @inproceedings{gliwa-etal-2019-samsum, title = "{SAMS}um Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization", author = "Gliwa, Bogdan and Mochol, Iwona and Biesek, Maciej and Wawer, Aleksander", booktitle = "Proceedings of the 2nd Workshop on New Frontiers in Summarization", month = nov, year = "2019", address = "Hong Kong, China", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D19-5409", doi = "10.18653/v1/D19-5409", pages = "70--79" } ``` ## Contributions
knkarthick/samsum
[ "task_categories:summarization", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-nc-nd-4.0", "conversations-summarization", "arxiv:1911.12237", "region:us" ]
2022-06-29T07:24:34+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-nc-nd-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": [], "paperswithcode_id": "samsum-corpus", "pretty_name": "SAMSum Corpus", "tags": ["conversations-summarization"]}
2022-10-21T02:03:27+00:00
[ "1911.12237" ]
[ "en" ]
TAGS #task_categories-summarization #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-nc-nd-4.0 #conversations-summarization #arxiv-1911.12237 #region-us
# Dataset Card for SAMSum Corpus ## Dataset Description ### Links - Homepage: hhttps://URL - Repository: URL - Paper: URL - Point of Contact: URL ### Dataset Summary The SAMSum dataset contains about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English. Linguists were asked to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. The style and register are diversified - conversations could be informal, semi-formal or formal, they may contain slang words, emoticons and typos. Then, the conversations were annotated with summaries. It was assumed that summaries should be a concise brief of what people talked about in the conversation in third person. The SAMSum dataset was prepared by Samsung R&D Institute Poland and is distributed for research purposes (non-commercial licence: CC BY-NC-ND 4.0). ### Languages English ## Dataset Structure ### Data Instances SAMSum dataset is made of 16369 conversations distributed uniformly into 4 groups based on the number of utterances in con- versations: 3-6, 7-12, 13-18 and 19-30. Each utterance contains the name of the speaker. Most conversations consist of dialogues between two interlocutors (about 75% of all conversations), the rest is between three or more people The first instance in the training set: {'id': '13818513', 'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.', 'dialogue': "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"} ### Data Fields - dialogue: text of dialogue. - summary: human written summary of the dialogue. - id: unique file id of an example. 
### Data Splits - train: 14732 - val: 818 - test: 819 ## Dataset Creation ### Curation Rationale In paper: In the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assistant and a client buying petrol. As a consequence, we decided to create a chat dialogue dataset by constructing such conversations that would epitomize the style of a messenger app. ### Who are the source language producers? linguists ### Who are the annotators? language experts ### Annotation process In paper: Each dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one reference summary. ## Licensing Information non-commercial licence: CC BY-NC-ND 4.0 ## Contributions
[ "# Dataset Card for SAMSum Corpus", "## Dataset Description", "### Links\n- Homepage: hhttps://URL\n- Repository: URL\n- Paper: URL\n- Point of Contact: URL", "### Dataset Summary\nThe SAMSum dataset contains about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English. Linguists were asked to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. The style and register are diversified - conversations could be informal, semi-formal or formal, they may contain slang words, emoticons and typos. Then, the conversations were annotated with summaries. It was assumed that summaries should be a concise brief of what people talked about in the conversation in third person.\nThe SAMSum dataset was prepared by Samsung R&D Institute Poland and is distributed for research purposes (non-commercial licence: CC BY-NC-ND 4.0).", "### Languages\nEnglish", "## Dataset Structure", "### Data Instances\nSAMSum dataset is made of 16369 conversations distributed uniformly into 4 groups based on the number of utterances in con- versations: 3-6, 7-12, 13-18 and 19-30. Each utterance contains the name of the speaker. Most conversations consist of dialogues between two interlocutors (about 75% of all conversations), the rest is between three or more people\nThe first instance in the training set:\n{'id': '13818513', 'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.', 'dialogue': \"Amanda: I baked cookies. 
Do you want some?\\r\\nJerry: Sure!\\r\\nAmanda: I'll bring you tomorrow :-)\"}", "### Data Fields\n- dialogue: text of dialogue.\n- summary: human written summary of the dialogue.\n- id: unique file id of an example.", "### Data Splits\n- train: 14732\n- val: 818\n- test: 819", "## Dataset Creation", "### Curation Rationale\nIn paper:\nIn the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assistant and a client buying petrol.\nAs a consequence, we decided to create a chat dialogue dataset by constructing such conversations that would epitomize the style of a messenger app.", "### Who are the source language producers?\nlinguists", "### Who are the annotators?\nlanguage experts", "### Annotation process\nIn paper:\nEach dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one reference summary.", "## Licensing Information\nnon-commercial licence: CC BY-NC-ND 4.0", "## Contributions" ]
[ "TAGS\n#task_categories-summarization #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-nc-nd-4.0 #conversations-summarization #arxiv-1911.12237 #region-us \n", "# Dataset Card for SAMSum Corpus", "## Dataset Description", "### Links\n- Homepage: hhttps://URL\n- Repository: URL\n- Paper: URL\n- Point of Contact: URL", "### Dataset Summary\nThe SAMSum dataset contains about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English. Linguists were asked to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. The style and register are diversified - conversations could be informal, semi-formal or formal, they may contain slang words, emoticons and typos. Then, the conversations were annotated with summaries. It was assumed that summaries should be a concise brief of what people talked about in the conversation in third person.\nThe SAMSum dataset was prepared by Samsung R&D Institute Poland and is distributed for research purposes (non-commercial licence: CC BY-NC-ND 4.0).", "### Languages\nEnglish", "## Dataset Structure", "### Data Instances\nSAMSum dataset is made of 16369 conversations distributed uniformly into 4 groups based on the number of utterances in con- versations: 3-6, 7-12, 13-18 and 19-30. Each utterance contains the name of the speaker. Most conversations consist of dialogues between two interlocutors (about 75% of all conversations), the rest is between three or more people\nThe first instance in the training set:\n{'id': '13818513', 'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.', 'dialogue': \"Amanda: I baked cookies. 
Do you want some?\\r\\nJerry: Sure!\\r\\nAmanda: I'll bring you tomorrow :-)\"}", "### Data Fields\n- dialogue: text of dialogue.\n- summary: human written summary of the dialogue.\n- id: unique file id of an example.", "### Data Splits\n- train: 14732\n- val: 818\n- test: 819", "## Dataset Creation", "### Curation Rationale\nIn paper:\nIn the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assistant and a client buying petrol.\nAs a consequence, we decided to create a chat dialogue dataset by constructing such conversations that would epitomize the style of a messenger app.", "### Who are the source language producers?\nlinguists", "### Who are the annotators?\nlanguage experts", "### Annotation process\nIn paper:\nEach dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one reference summary.", "## Licensing Information\nnon-commercial licence: CC BY-NC-ND 4.0", "## Contributions" ]
8029d595220a39d09d132dade7baba6d9388ed17
# Dataset Card for XSum Corpus ## Dataset Description ### Links - **Homepage:** https://arxiv.org/abs/1808.08745 - **Repository:** https://arxiv.org/abs/1808.08745 - **Paper:** https://arxiv.org/abs/1808.08745 - **Point of Contact:** https://huggingface.co/knkarthick ### Dataset Summary This repository contains data and code for our EMNLP 2018 paper "[Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization](https://arxiv.org/abs/1808.08745)". ### Languages English ## Dataset Structure ### Data Instances The XSum dataset is made of 226711 conversations split into train, test and val. The first instance in the training set: {'dialogue': 'The full cost of damage in Newton Stewart, one of the areas worst affected, is still being assessed.\nRepair work is ongoing in Hawick and many roads in Peeblesshire remain badly affected by standing water.\nTrains on the west coast mainline face disruption due to damage at the Lamington Viaduct.\nMany businesses and householders were affected by flooding in Newton Stewart after the River Cree overflowed into the town.\nFirst Minister Nicola Sturgeon visited the area to inspect the damage.\nThe waters breached a retaining wall, flooding many commercial properties on Victoria Street - the main shopping thoroughfare.\nJeanette Tate, who owns the Cinnamon Cafe which was badly affected, said she could not fault the multi-agency response once the flood hit.\nHowever, she said more preventative work could have been carried out to ensure the retaining wall did not fail.\n"It is difficult but I do think there is so much publicity for Dumfries and the Nith - and I totally appreciate that - but it is almost like we\'re neglected or forgotten," she said.\n"That may not be true but it is perhaps my perspective over the last few days.\n"Why were you not ready to help us a bit more when the warning and the alarm alerts had gone out?"\nMeanwhile, a flood alert remains in place across 
the Borders because of the constant rain.\nPeebles was badly hit by problems, sparking calls to introduce more defences in the area.\nScottish Borders Council has put a list on its website of the roads worst affected and drivers have been urged not to ignore closure signs.\nThe Labour Party\'s deputy Scottish leader Alex Rowley was in Hawick on Monday to see the situation first hand.\nHe said it was important to get the flood protection plan right but backed calls to speed up the process.\n"I was quite taken aback by the amount of damage that has been done," he said.\n"Obviously it is heart-breaking for people who have been forced out of their homes and the impact on businesses."\nHe said it was important that "immediate steps" were taken to protect the areas most vulnerable and a clear timetable put in place for flood prevention plans.\nHave you been affected by flooding in Dumfries and Galloway or the Borders? Tell us about your experience of the situation and how it was handled. Email us on [email protected] or [email protected].', 'summary': 'Clean-up operations are continuing across the Scottish Borders and Dumfries and Galloway after flooding caused by Storm Frank.', 'id': '35232142'} ### Data Fields - dialogue: text of dialogue. - summary: one line human written summary of the dialogue. - id: unique file id of an example. ### Data Splits - train: 204045 - val: 11332 - test: 11334 ## Dataset Creation ### Curation Rationale ### Who are the source language producers? linguists ### Who are the annotators? language experts ### Annotation process ## Licensing Information non-commercial licence: MIT ## Citation Information ``` @InProceedings{xsum-emnlp, author = "Shashi Narayan and Shay B. Cohen and Mirella Lapata", title = "Don't Give Me the Details, Just the Summary! 
{T}opic-Aware Convolutional Neural Networks for Extreme Summarization", booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing ", year = "2018", address = "Brussels, Belgium", } ``` ## Contributions Thanks to [@Edinburgh NLP](https://github.com/EdinburghNLP) for adding this dataset.
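The XSum split sizes quoted in the card can be sanity-checked the same way: they sum exactly to the stated 226,711 examples, with the training set taking roughly a 90% share (counts copied from the card; illustrative check only):

```python
# XSum split sizes as stated in the dataset card.
splits = {"train": 204045, "val": 11332, "test": 11334}

total = sum(splits.values())
assert total == 226711  # matches the "226711 conversations" figure in the card

train_share = splits["train"] / total
print(f"train share: {train_share:.1%}")  # roughly 90%, i.e. a 90/5/5 split
```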
knkarthick/xsum
[ "task_categories:summarization", "task_categories:text2text-generation", "task_categories:text-generation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-nc-nd-4.0", "conversations-summarization", "arxiv:1808.08745", "region:us" ]
2022-06-29T07:43:29+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-nc-nd-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["summarization", "text2text-generation", "text-generation"], "task_ids": [], "paperswithcode_id": "samsum-corpus", "pretty_name": "XSum Corpus", "tags": ["conversations-summarization"]}
2022-12-07T08:30:19+00:00
[ "1808.08745" ]
[ "en" ]
TAGS #task_categories-summarization #task_categories-text2text-generation #task_categories-text-generation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-nc-nd-4.0 #conversations-summarization #arxiv-1808.08745 #region-us
# Dataset Card for SAMSum Corpus ## Dataset Description ### Links - Homepage: URL - Repository: URL - Paper: URL - Point of Contact: URL ### Dataset Summary This repository contains data and code for our EMNLP 2018 paper "Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization". ### Languages English ## Dataset Structure ### Data Instances XSum dataset is made of 226711 conversations split into train, test and val. The first instance in the training set: {'dialogue': 'The full cost of damage in Newton Stewart, one of the areas worst affected, is still being assessed.\nRepair work is ongoing in Hawick and many roads in Peeblesshire remain badly affected by standing water.\nTrains on the west coast mainline face disruption due to damage at the Lamington Viaduct.\nMany businesses and householders were affected by flooding in Newton Stewart after the River Cree overflowed into the town.\nFirst Minister Nicola Sturgeon visited the area to inspect the damage.\nThe waters breached a retaining wall, flooding many commercial properties on Victoria Street - the main shopping thoroughfare.\nJeanette Tate, who owns the Cinnamon Cafe which was badly affected, said she could not fault the multi-agency response once the flood hit.\nHowever, she said more preventative work could have been carried out to ensure the retaining wall did not fail.\n"It is difficult but I do think there is so much publicity for Dumfries and the Nith - and I totally appreciate that - but it is almost like we\'re neglected or forgotten," she said.\n"That may not be true but it is perhaps my perspective over the last few days.\n"Why were you not ready to help us a bit more when the warning and the alarm alerts had gone out?"\nMeanwhile, a flood alert remains in place across the Borders because of the constant rain.\nPeebles was badly hit by problems, sparking calls to introduce more defences in the area.\nScottish Borders Council has put a list on 
its website of the roads worst affected and drivers have been urged not to ignore closure signs.\nThe Labour Party\'s deputy Scottish leader Alex Rowley was in Hawick on Monday to see the situation first hand.\nHe said it was important to get the flood protection plan right but backed calls to speed up the process.\n"I was quite taken aback by the amount of damage that has been done," he said.\n"Obviously it is heart-breaking for people who have been forced out of their homes and the impact on businesses."\nHe said it was important that "immediate steps" were taken to protect the areas most vulnerable and a clear timetable put in place for flood prevention plans.\nHave you been affected by flooding in Dumfries and Galloway or the Borders? Tell us about your experience of the situation and how it was handled. Email us on URL@URL or dumfries@URL.', 'summary': 'Clean-up operations are continuing across the Scottish Borders and Dumfries and Galloway after flooding caused by Storm Frank.', 'id': '35232142'} ### Data Fields - dialogue: text of dialogue. - summary: one line human written summary of the dialogue. - id: unique file id of an example. ### Data Splits - train: 204045 - val: 11332 - test: 11334 ## Dataset Creation ### Curation Rationale ### Who are the source language producers? linguists ### Who are the annotators? language experts ### Annotation process ## Licensing Information non-commercial licence: MIT ## Contributions Thanks to @Edinburgh NLP for adding this dataset.
[ "# Dataset Card for SAMSum Corpus", "## Dataset Description", "### Links\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Point of Contact: URL", "### Dataset Summary\nThis repository contains data and code for our EMNLP 2018 paper \"Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization\".", "### Languages\nEnglish", "## Dataset Structure", "### Data Instances\nXSum dataset is made of 226711 conversations split into train, test and val.\nThe first instance in the training set:\n{'dialogue': 'The full cost of damage in Newton Stewart, one of the areas worst affected, is still being assessed.\\nRepair work is ongoing in Hawick and many roads in Peeblesshire remain badly affected by standing water.\\nTrains on the west coast mainline face disruption due to damage at the Lamington Viaduct.\\nMany businesses and householders were affected by flooding in Newton Stewart after the River Cree overflowed into the town.\\nFirst Minister Nicola Sturgeon visited the area to inspect the damage.\\nThe waters breached a retaining wall, flooding many commercial properties on Victoria Street - the main shopping thoroughfare.\\nJeanette Tate, who owns the Cinnamon Cafe which was badly affected, said she could not fault the multi-agency response once the flood hit.\\nHowever, she said more preventative work could have been carried out to ensure the retaining wall did not fail.\\n\"It is difficult but I do think there is so much publicity for Dumfries and the Nith - and I totally appreciate that - but it is almost like we\\'re neglected or forgotten,\" she said.\\n\"That may not be true but it is perhaps my perspective over the last few days.\\n\"Why were you not ready to help us a bit more when the warning and the alarm alerts had gone out?\"\\nMeanwhile, a flood alert remains in place across the Borders because of the constant rain.\\nPeebles was badly hit by problems, sparking calls to introduce more defences in the 
area.\\nScottish Borders Council has put a list on its website of the roads worst affected and drivers have been urged not to ignore closure signs.\\nThe Labour Party\\'s deputy Scottish leader Alex Rowley was in Hawick on Monday to see the situation first hand.\\nHe said it was important to get the flood protection plan right but backed calls to speed up the process.\\n\"I was quite taken aback by the amount of damage that has been done,\" he said.\\n\"Obviously it is heart-breaking for people who have been forced out of their homes and the impact on businesses.\"\\nHe said it was important that \"immediate steps\" were taken to protect the areas most vulnerable and a clear timetable put in place for flood prevention plans.\\nHave you been affected by flooding in Dumfries and Galloway or the Borders? Tell us about your experience of the situation and how it was handled. Email us on URL@URL or dumfries@URL.', 'summary': 'Clean-up operations are continuing across the Scottish Borders and Dumfries and Galloway after flooding caused by Storm Frank.', \n'id': '35232142'}", "### Data Fields\n- dialogue: text of dialogue.\n- summary: one line human written summary of the dialogue.\n- id: unique file id of an example.", "### Data Splits\n- train: 204045\n- val: 11332\n- test: 11334", "## Dataset Creation", "### Curation Rationale", "### Who are the source language producers?\nlinguists", "### Who are the annotators?\nlanguage experts", "### Annotation process", "## Licensing Information\nnon-commercial licence: MIT", "## Contributions\nThanks to @Edinburgh NLP for adding this dataset." ]
[ "TAGS\n#task_categories-summarization #task_categories-text2text-generation #task_categories-text-generation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-nc-nd-4.0 #conversations-summarization #arxiv-1808.08745 #region-us \n", "# Dataset Card for SAMSum Corpus", "## Dataset Description", "### Links\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Point of Contact: URL", "### Dataset Summary\nThis repository contains data and code for our EMNLP 2018 paper \"Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization\".", "### Languages\nEnglish", "## Dataset Structure", "### Data Instances\nXSum dataset is made of 226711 conversations split into train, test and val.\nThe first instance in the training set:\n{'dialogue': 'The full cost of damage in Newton Stewart, one of the areas worst affected, is still being assessed.\\nRepair work is ongoing in Hawick and many roads in Peeblesshire remain badly affected by standing water.\\nTrains on the west coast mainline face disruption due to damage at the Lamington Viaduct.\\nMany businesses and householders were affected by flooding in Newton Stewart after the River Cree overflowed into the town.\\nFirst Minister Nicola Sturgeon visited the area to inspect the damage.\\nThe waters breached a retaining wall, flooding many commercial properties on Victoria Street - the main shopping thoroughfare.\\nJeanette Tate, who owns the Cinnamon Cafe which was badly affected, said she could not fault the multi-agency response once the flood hit.\\nHowever, she said more preventative work could have been carried out to ensure the retaining wall did not fail.\\n\"It is difficult but I do think there is so much publicity for Dumfries and the Nith - and I totally appreciate that - but it is almost like we\\'re neglected or forgotten,\" she 
said.\\n\"That may not be true but it is perhaps my perspective over the last few days.\\n\"Why were you not ready to help us a bit more when the warning and the alarm alerts had gone out?\"\\nMeanwhile, a flood alert remains in place across the Borders because of the constant rain.\\nPeebles was badly hit by problems, sparking calls to introduce more defences in the area.\\nScottish Borders Council has put a list on its website of the roads worst affected and drivers have been urged not to ignore closure signs.\\nThe Labour Party\\'s deputy Scottish leader Alex Rowley was in Hawick on Monday to see the situation first hand.\\nHe said it was important to get the flood protection plan right but backed calls to speed up the process.\\n\"I was quite taken aback by the amount of damage that has been done,\" he said.\\n\"Obviously it is heart-breaking for people who have been forced out of their homes and the impact on businesses.\"\\nHe said it was important that \"immediate steps\" were taken to protect the areas most vulnerable and a clear timetable put in place for flood prevention plans.\\nHave you been affected by flooding in Dumfries and Galloway or the Borders? Tell us about your experience of the situation and how it was handled. Email us on URL@URL or dumfries@URL.', 'summary': 'Clean-up operations are continuing across the Scottish Borders and Dumfries and Galloway after flooding caused by Storm Frank.', \n'id': '35232142'}", "### Data Fields\n- dialogue: text of dialogue.\n- summary: one line human written summary of the dialogue.\n- id: unique file id of an example.", "### Data Splits\n- train: 204045\n- val: 11332\n- test: 11334", "## Dataset Creation", "### Curation Rationale", "### Who are the source language producers?\nlinguists", "### Who are the annotators?\nlanguage experts", "### Annotation process", "## Licensing Information\nnon-commercial licence: MIT", "## Contributions\nThanks to @Edinburgh NLP for adding this dataset." ]
4fe6b74529ba552ef552afb7bafc54a980f45628
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: uygarkurt/distilbert-base-uncased-finetuned-emotion * Dataset: emotion To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-b756be98-8935185
[ "autotrain", "evaluation", "region:us" ]
2022-06-29T08:29:53+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "uygarkurt/distilbert-base-uncased-finetuned-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-06-29T08:30:21+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: uygarkurt/distilbert-base-uncased-finetuned-emotion * Dataset: emotion To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: uygarkurt/distilbert-base-uncased-finetuned-emotion\n* Dataset: emotion\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: uygarkurt/distilbert-base-uncased-finetuned-emotion\n* Dataset: emotion\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
9a66b367250f32cea9b78aeac1ec2a719a8dd59f
# Dataset Card for HighlightSum Corpus [Single Dataset Comprising AMI, SamSUM & DialogSUM for Brief Summarization of Text] ## Dataset Description ### Links - **AMI:** https://huggingface.co/datasets/knkarthick/AMI - **DialogSUM:** https://github.com/cylnlp/dialogsum - **SamSUM:** https://huggingface.co/datasets/knkarthick/samsum - **Point of Contact:** https://huggingface.co/knkarthick ### Dataset Summary HighlightSUM is a large-scale dialogue summarization dataset drawn from AMI, SamSUM & DialogSUM, consisting of 31,108 dialogues with corresponding manually labeled summaries. ### Languages English ## Dataset Structure ### Data Instances HighlightSum is a large-scale dialogue summarization dataset collection, consisting of 31,108 dialogues split into train, test and validation. The first instance in the training set: {'id': 'train_0', 'summary': "Mr. Smith's getting a check-up, and Doctor Hawkins advises him to have one every year. Hawkins'll give some information about their classes and medications to help Mr. Smith quit smoking.", 'dialogue': "#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\n#Person2#: I found it would be a good idea to get a check-up.\n#Person1#: Yes, well, you haven't had one for 5 years. You should have one every year.\n#Person2#: I know. I figure as long as there is nothing wrong, why go see the doctor?\n#Person1#: Well, the best way to avoid serious illnesses is to find out about them early. So try to come at least once a year for your own good.\n#Person2#: Ok.\n#Person1#: Let me see here. Your eyes and ears look fine. Take a deep breath, please. Do you smoke, Mr. Smith?\n#Person2#: Yes.\n#Person1#: Smoking is the leading cause of lung cancer and heart disease, you know. You really should quit.\n#Person2#: I've tried hundreds of times, but I just can't seem to kick the habit.\n#Person1#: Well, we have classes and some medications that might help. 
I'll give you more information before you leave.\n#Person2#: Ok, thanks doctor."} ### Data Fields - dialogue: text of dialogue. - summary: human-written summary of the dialogue. - id: unique file id of an example. ### Data Splits - train: 27401 - val: 1360 - test: 2347 ## Dataset Creation ### Curation Rationale Collection of AMI, SamSUM & DialogSUM Datasets. ### Who are the source language producers? linguists ### Who are the annotators? language experts ## Licensing Information non-commercial licence: MIT ## Citation Information Refer to the above links for Credits & Citations.
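The record layout above can be sanity-checked with a few lines of plain Python. This is a minimal sketch, not part of the original card: the `dialogue` value below is a truncated excerpt of `train_0`, and the splitting logic simply mirrors the newline-separated, `#PersonN#:`-tagged turn format shown in the Data Instances section.

```python
# Minimal sketch of a HighlightSum record: a dict with "id", "dialogue",
# and "summary". The dialogue here is a truncated excerpt of the first
# training instance; the real record is longer.
example = {
    "id": "train_0",
    "dialogue": (
        "#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\n"
        "#Person2#: I found it would be a good idea to get a check-up.\n"
        "#Person1#: Yes, well, you haven't had one for 5 years."
    ),
    "summary": "Mr. Smith's getting a check-up, and Doctor Hawkins advises "
               "him to have one every year.",
}

# Turns are newline-separated; each turn is prefixed with a "#PersonN#:" tag.
turns = example["dialogue"].split("\n")
speakers = sorted({turn.split(":", 1)[0] for turn in turns})

print(len(turns))   # number of turns in this excerpt: 3
print(speakers)     # ['#Person1#', '#Person2#']
```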
knkarthick/highlightsum
[ "task_categories:summarization", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "region:us" ]
2022-06-29T10:25:09+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": [], "pretty_name": "HighlightSum Corpus"}
2022-10-24T08:17:00+00:00
[]
[ "en" ]
TAGS #task_categories-summarization #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #region-us
# Dataset Card for HighlightSum Corpus [Single Dataset Comprising of AMI, SamSUM & DialogSUM for Brief Summarization of Text] ## Dataset Description ### Links - AMI: URL - DialogSUM: URL - SamSUM: URL - Point of Contact: URL ### Dataset Summary HighlightSUM is collection of large-scale dialogue summarization dataset from AMI, SamSUM & DialogSUM, consisting of 31,108 dialogues with corresponding manually labeled summaries. ### Languages English ## Dataset Structure ### Data Instances HighlightSum is a large-scale dialogue summarization dataset collection, consisting of 31,108 dialogues split into train, test and validation. The first instance in the training set: {'id': 'train_0', 'summary': "Mr. Smith's getting a check-up, and Doctor Hawkins advises him to have one every year. Hawkins'll give some information about their classes and medications to help Mr. Smith quit smoking.", 'dialogue': "#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\n#Person2#: I found it would be a good idea to get a check-up.\n#Person1#: Yes, well, you haven't had one for 5 years. You should have one every year.\n#Person2#: I know. I figure as long as there is nothing wrong, why go see the doctor?\n#Person1#: Well, the best way to avoid serious illnesses is to find out about them early. So try to come at least once a year for your own good.\n#Person2#: Ok.\n#Person1#: Let me see here. Your eyes and ears look fine. Take a deep breath, please. Do you smoke, Mr. Smith?\n#Person2#: Yes.\n#Person1#: Smoking is the leading cause of lung cancer and heart disease, you know. You really should quit.\n#Person2#: I've tried hundreds of times, but I just can't seem to kick the habit.\n#Person1#: Well, we have classes and some medications that might help. I'll give you more information before you leave.\n#Person2#: Ok, thanks doctor."} ### Data Fields - dialogue: text of dialogue. - summary: human written summary of the dialogue. - id: unique file id of an example. 
### Data Splits - train: 27401 - val: 1360 - test: 2347 ## Dataset Creation ### Curation Rationale Collection of AMI, SamSUM & DialogSUM Datasets. ### Who are the source language producers? linguists ### Who are the annotators? language experts ## Licensing Information non-commercial licence: MIT Refer the above links for Credits & Citations.
[ "# Dataset Card for HighlightSum Corpus [Single Dataset Comprising of AMI, SamSUM & DialogSUM for Brief Summarization of Text]", "## Dataset Description", "### Links\n- AMI: URL\n- DialogSUM: URL\n- SamSUM: URL\n- Point of Contact: URL", "### Dataset Summary\nHighlightSUM is collection of large-scale dialogue summarization dataset from AMI, SamSUM & DialogSUM, consisting of 31,108 dialogues with corresponding manually labeled summaries.", "### Languages\nEnglish", "## Dataset Structure", "### Data Instances\nHighlightSum is a large-scale dialogue summarization dataset collection, consisting of 31,108 dialogues split into train, test and validation.\n\nThe first instance in the training set:\n{'id': 'train_0', \n'summary': \"Mr. Smith's getting a check-up, and Doctor Hawkins advises him to have one every year. Hawkins'll give some information about their classes and medications to help Mr. Smith quit smoking.\", \n'dialogue': \"#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\\n#Person2#: I found it would be a good idea to get a check-up.\\n#Person1#: Yes, well, you haven't had one for 5 years. You should have one every year.\\n#Person2#: I know. I figure as long as there is nothing wrong, why go see the doctor?\\n#Person1#: Well, the best way to avoid serious illnesses is to find out about them early. So try to come at least once a year for your own good.\\n#Person2#: Ok.\\n#Person1#: Let me see here. Your eyes and ears look fine. Take a deep breath, please. Do you smoke, Mr. Smith?\\n#Person2#: Yes.\\n#Person1#: Smoking is the leading cause of lung cancer and heart disease, you know. You really should quit.\\n#Person2#: I've tried hundreds of times, but I just can't seem to kick the habit.\\n#Person1#: Well, we have classes and some medications that might help. 
I'll give you more information before you leave.\\n#Person2#: Ok, thanks doctor.\"}", "### Data Fields\n- dialogue: text of dialogue.\n- summary: human written summary of the dialogue.\n- id: unique file id of an example.", "### Data Splits\n- train: 27401\n- val: 1360\n- test: 2347", "## Dataset Creation", "### Curation Rationale\nCollection of AMI, SamSUM & DialogSUM Datasets.", "### Who are the source language producers?\nlinguists", "### Who are the annotators?\nlanguage experts", "## Licensing Information\nnon-commercial licence: MIT\n\nRefer the above links for Credits & Citations." ]
[ "TAGS\n#task_categories-summarization #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #region-us \n", "# Dataset Card for HighlightSum Corpus [Single Dataset Comprising of AMI, SamSUM & DialogSUM for Brief Summarization of Text]", "## Dataset Description", "### Links\n- AMI: URL\n- DialogSUM: URL\n- SamSUM: URL\n- Point of Contact: URL", "### Dataset Summary\nHighlightSUM is collection of large-scale dialogue summarization dataset from AMI, SamSUM & DialogSUM, consisting of 31,108 dialogues with corresponding manually labeled summaries.", "### Languages\nEnglish", "## Dataset Structure", "### Data Instances\nHighlightSum is a large-scale dialogue summarization dataset collection, consisting of 31,108 dialogues split into train, test and validation.\n\nThe first instance in the training set:\n{'id': 'train_0', \n'summary': \"Mr. Smith's getting a check-up, and Doctor Hawkins advises him to have one every year. Hawkins'll give some information about their classes and medications to help Mr. Smith quit smoking.\", \n'dialogue': \"#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\\n#Person2#: I found it would be a good idea to get a check-up.\\n#Person1#: Yes, well, you haven't had one for 5 years. You should have one every year.\\n#Person2#: I know. I figure as long as there is nothing wrong, why go see the doctor?\\n#Person1#: Well, the best way to avoid serious illnesses is to find out about them early. So try to come at least once a year for your own good.\\n#Person2#: Ok.\\n#Person1#: Let me see here. Your eyes and ears look fine. Take a deep breath, please. Do you smoke, Mr. Smith?\\n#Person2#: Yes.\\n#Person1#: Smoking is the leading cause of lung cancer and heart disease, you know. 
You really should quit.\\n#Person2#: I've tried hundreds of times, but I just can't seem to kick the habit.\\n#Person1#: Well, we have classes and some medications that might help. I'll give you more information before you leave.\\n#Person2#: Ok, thanks doctor.\"}", "### Data Fields\n- dialogue: text of dialogue.\n- summary: human written summary of the dialogue.\n- id: unique file id of an example.", "### Data Splits\n- train: 27401\n- val: 1360\n- test: 2347", "## Dataset Creation", "### Curation Rationale\nCollection of AMI, SamSUM & DialogSUM Datasets.", "### Who are the source language producers?\nlinguists", "### Who are the annotators?\nlanguage experts", "## Licensing Information\nnon-commercial licence: MIT\n\nRefer the above links for Credits & Citations." ]
2c58ef072fb410733cf195c02ff771928f8f9f89
# Dataset Card for TopicSum Corpus [Single Dataset Comprising of XSUM & DialogSUM for One Liner Summarization/ Topic Generation of Text] ## Dataset Description ### Links - **DialogSUM:** https://github.com/cylnlp/dialogsum - **XSUM:** https://huggingface.co/datasets/knkarthick/xsum - **Point of Contact:** https://huggingface.co/knkarthick ### Dataset Summary TopicSUM is a collection of large-scale dialogue summarization datasets from XSUM & DialogSUM, consisting of 241,171 dialogues with corresponding manually labeled one-liner summaries/ topics. ### Languages English ## Dataset Structure ### Data Instances TopicSum is a large-scale dialogue summarization dataset collection [XSUM & DialogSUM], consisting of 241,171 dialogues split into train, test and validation. The first instance in the training set: {'dialogue': 'The full cost of damage in Newton Stewart, one of the areas worst affected, is still being assessed.\nRepair work is ongoing in Hawick and many roads in Peeblesshire remain badly affected by standing water.\nTrains on the west coast mainline face disruption due to damage at the Lamington Viaduct.\nMany businesses and householders were affected by flooding in Newton Stewart after the River Cree overflowed into the town.\nFirst Minister Nicola Sturgeon visited the area to inspect the damage.\nThe waters breached a retaining wall, flooding many commercial properties on Victoria Street - the main shopping thoroughfare.\nJeanette Tate, who owns the Cinnamon Cafe which was badly affected, said she could not fault the multi-agency response once the flood hit.\nHowever, she said more preventative work could have been carried out to ensure the retaining wall did not fail.\n"It is difficult but I do think there is so much publicity for Dumfries and the Nith - and I totally appreciate that - but it is almost like we\'re neglected or forgotten," she said.\n"That may not be true but it is perhaps my perspective over the last few days.\n"Why were you 
not ready to help us a bit more when the warning and the alarm alerts had gone out?"\nMeanwhile, a flood alert remains in place across the Borders because of the constant rain.\nPeebles was badly hit by problems, sparking calls to introduce more defences in the area.\nScottish Borders Council has put a list on its website of the roads worst affected and drivers have been urged not to ignore closure signs.\nThe Labour Party\'s deputy Scottish leader Alex Rowley was in Hawick on Monday to see the situation first hand.\nHe said it was important to get the flood protection plan right but backed calls to speed up the process.\n"I was quite taken aback by the amount of damage that has been done," he said.\n"Obviously it is heart-breaking for people who have been forced out of their homes and the impact on businesses."\nHe said it was important that "immediate steps" were taken to protect the areas most vulnerable and a clear timetable put in place for flood prevention plans.\nHave you been affected by flooding in Dumfries and Galloway or the Borders? Tell us about your experience of the situation and how it was handled. Email us on [email protected] or [email protected].', 'summary': 'Clean-up operations are continuing across the Scottish Borders and Dumfries and Galloway after flooding caused by Storm Frank.', 'id': '35232142'} ### Data Fields - dialogue: text of dialogue. - summary: human-written one-liner summary/ topic of the dialogue. - id: unique file id of an example. ### Data Splits - train: 216,505 - val: 11,832 - test: 12,834 ## Dataset Creation ### Curation Rationale Collection of XSUM & DialogSUM Datasets. ### Who are the source language producers? linguists ### Who are the annotators? language experts ## Licensing Information non-commercial licence: MIT ## Citation Information Refer to the above links for Credits & Citations.
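The document-versus-one-liner shape described in the Data Fields section can be illustrated with a short, self-contained sketch. This is not part of the original card: the `dialogue` value is a truncated excerpt of instance `35232142`, kept only to show the field layout.

```python
# Minimal sketch of a TopicSum record: a long source text paired with a
# one-line summary/topic. "dialogue" is a truncated excerpt of the first
# training instance, not the full article.
example = {
    "id": "35232142",
    "dialogue": (
        "The full cost of damage in Newton Stewart, one of the areas worst "
        "affected, is still being assessed.\n"
        "Repair work is ongoing in Hawick and many roads in Peeblesshire "
        "remain badly affected by standing water."
    ),
    "summary": (
        "Clean-up operations are continuing across the Scottish Borders and "
        "Dumfries and Galloway after flooding caused by Storm Frank."
    ),
}

sentences = example["dialogue"].split("\n")   # source sentences, one per line
summary_words = example["summary"].split()

print(len(sentences))      # sentences kept in this excerpt: 2
print(len(summary_words))  # the target is a single short sentence
```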
knkarthick/topicsum
[ "task_categories:summarization", "task_categories:text2text-generation", "task_categories:text-generation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:mit", "region:us" ]
2022-06-29T10:46:06+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["summarization", "text2text-generation", "text-generation"], "task_ids": [], "pretty_name": "TopicSum Corpus"}
2022-12-07T08:30:09+00:00
[]
[ "en" ]
TAGS #task_categories-summarization #task_categories-text2text-generation #task_categories-text-generation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-mit #region-us
# Dataset Card for TopicSum Corpus [Single Dataset Comprising of XSUM & DialogSUM for One Liner Summarization/ Topic Generation of Text] ## Dataset Description ### Links - DialogSUM: URL - XSUM: URL - Point of Contact: URL ### Dataset Summary TopicSUM is collection of large-scale dialogue summarization dataset from XSUM & DialogSUM, consisting of 241,171 dialogues with corresponding manually labeled one-liner summaries/ topics. ### Languages English ## Dataset Structure ### Data Instances TopicSum is a large-scale dialogue summarization dataset collection [XSUM & DialogDUM], consisting of 241,171 dialogues split into train, test and validation. The first instance in the training set: {'dialogue': 'The full cost of damage in Newton Stewart, one of the areas worst affected, is still being assessed.\nRepair work is ongoing in Hawick and many roads in Peeblesshire remain badly affected by standing water.\nTrains on the west coast mainline face disruption due to damage at the Lamington Viaduct.\nMany businesses and householders were affected by flooding in Newton Stewart after the River Cree overflowed into the town.\nFirst Minister Nicola Sturgeon visited the area to inspect the damage.\nThe waters breached a retaining wall, flooding many commercial properties on Victoria Street - the main shopping thoroughfare.\nJeanette Tate, who owns the Cinnamon Cafe which was badly affected, said she could not fault the multi-agency response once the flood hit.\nHowever, she said more preventative work could have been carried out to ensure the retaining wall did not fail.\n"It is difficult but I do think there is so much publicity for Dumfries and the Nith - and I totally appreciate that - but it is almost like we\'re neglected or forgotten," she said.\n"That may not be true but it is perhaps my perspective over the last few days.\n"Why were you not ready to help us a bit more when the warning and the alarm alerts had gone out?"\nMeanwhile, a flood alert remains in place across 
the Borders because of the constant rain.\nPeebles was badly hit by problems, sparking calls to introduce more defences in the area.\nScottish Borders Council has put a list on its website of the roads worst affected and drivers have been urged not to ignore closure signs.\nThe Labour Party\'s deputy Scottish leader Alex Rowley was in Hawick on Monday to see the situation first hand.\nHe said it was important to get the flood protection plan right but backed calls to speed up the process.\n"I was quite taken aback by the amount of damage that has been done," he said.\n"Obviously it is heart-breaking for people who have been forced out of their homes and the impact on businesses."\nHe said it was important that "immediate steps" were taken to protect the areas most vulnerable and a clear timetable put in place for flood prevention plans.\nHave you been affected by flooding in Dumfries and Galloway or the Borders? Tell us about your experience of the situation and how it was handled. Email us on URL@URL or dumfries@URL.', 'summary': 'Clean-up operations are continuing across the Scottish Borders and Dumfries and Galloway after flooding caused by Storm Frank.', 'id': '35232142'} ### Data Fields - dialogue: text of dialogue. - summary: human written one-liner summary/ topic of the dialogue. - id: unique file id of an example. ### Data Splits - train: 216,505 - val: 11,832 - test: 12,834 ## Dataset Creation ### Curation Rationale Collection of XSUM & DialogSUM Datasets. ### Who are the source language producers? linguists ### Who are the annotators? language experts ## Licensing Information non-commercial licence: MIT Refer the above links for Credits & Citations.
[ "# Dataset Card for TopicSum Corpus [Single Dataset Comprising of XSUM & DialogSUM for One Liner Summarization/ Topic Generation of Text]", "## Dataset Description", "### Links\n- DialogSUM: URL\n- XSUM: URL\n- Point of Contact: URL", "### Dataset Summary\nTopicSUM is collection of large-scale dialogue summarization dataset from XSUM & DialogSUM, consisting of 241,171 dialogues with corresponding manually labeled one-liner summaries/ topics.", "### Languages\nEnglish", "## Dataset Structure", "### Data Instances\nTopicSum is a large-scale dialogue summarization dataset collection [XSUM & DialogDUM], consisting of 241,171 dialogues split into train, test and validation.\n\nThe first instance in the training set:\n{'dialogue': 'The full cost of damage in Newton Stewart, one of the areas worst affected, is still being assessed.\\nRepair work is ongoing in Hawick and many roads in Peeblesshire remain badly affected by standing water.\\nTrains on the west coast mainline face disruption due to damage at the Lamington Viaduct.\\nMany businesses and householders were affected by flooding in Newton Stewart after the River Cree overflowed into the town.\\nFirst Minister Nicola Sturgeon visited the area to inspect the damage.\\nThe waters breached a retaining wall, flooding many commercial properties on Victoria Street - the main shopping thoroughfare.\\nJeanette Tate, who owns the Cinnamon Cafe which was badly affected, said she could not fault the multi-agency response once the flood hit.\\nHowever, she said more preventative work could have been carried out to ensure the retaining wall did not fail.\\n\"It is difficult but I do think there is so much publicity for Dumfries and the Nith - and I totally appreciate that - but it is almost like we\\'re neglected or forgotten,\" she said.\\n\"That may not be true but it is perhaps my perspective over the last few days.\\n\"Why were you not ready to help us a bit more when the warning and the alarm alerts had gone 
out?\"\\nMeanwhile, a flood alert remains in place across the Borders because of the constant rain.\\nPeebles was badly hit by problems, sparking calls to introduce more defences in the area.\\nScottish Borders Council has put a list on its website of the roads worst affected and drivers have been urged not to ignore closure signs.\\nThe Labour Party\\'s deputy Scottish leader Alex Rowley was in Hawick on Monday to see the situation first hand.\\nHe said it was important to get the flood protection plan right but backed calls to speed up the process.\\n\"I was quite taken aback by the amount of damage that has been done,\" he said.\\n\"Obviously it is heart-breaking for people who have been forced out of their homes and the impact on businesses.\"\\nHe said it was important that \"immediate steps\" were taken to protect the areas most vulnerable and a clear timetable put in place for flood prevention plans.\\nHave you been affected by flooding in Dumfries and Galloway or the Borders? Tell us about your experience of the situation and how it was handled. Email us on URL@URL or dumfries@URL.', 'summary': 'Clean-up operations are continuing across the Scottish Borders and Dumfries and Galloway after flooding caused by Storm Frank.', \n'id': '35232142'}", "### Data Fields\n- dialogue: text of dialogue.\n- summary: human written one-liner summary/ topic of the dialogue.\n- id: unique file id of an example.", "### Data Splits\n- train: 216,505\n- val: 11,832\n- test: 12,834", "## Dataset Creation", "### Curation Rationale\nCollection of XSUM & DialogSUM Datasets.", "### Who are the source language producers?\nlinguists", "### Who are the annotators?\nlanguage experts", "## Licensing Information\nnon-commercial licence: MIT\n\nRefer the above links for Credits & Citations." ]
[ "TAGS\n#task_categories-summarization #task_categories-text2text-generation #task_categories-text-generation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-mit #region-us \n", "# Dataset Card for TopicSum Corpus [Single Dataset Comprising of XSUM & DialogSUM for One Liner Summarization/ Topic Generation of Text]", "## Dataset Description", "### Links\n- DialogSUM: URL\n- XSUM: URL\n- Point of Contact: URL", "### Dataset Summary\nTopicSUM is collection of large-scale dialogue summarization dataset from XSUM & DialogSUM, consisting of 241,171 dialogues with corresponding manually labeled one-liner summaries/ topics.", "### Languages\nEnglish", "## Dataset Structure", "### Data Instances\nTopicSum is a large-scale dialogue summarization dataset collection [XSUM & DialogDUM], consisting of 241,171 dialogues split into train, test and validation.\n\nThe first instance in the training set:\n{'dialogue': 'The full cost of damage in Newton Stewart, one of the areas worst affected, is still being assessed.\\nRepair work is ongoing in Hawick and many roads in Peeblesshire remain badly affected by standing water.\\nTrains on the west coast mainline face disruption due to damage at the Lamington Viaduct.\\nMany businesses and householders were affected by flooding in Newton Stewart after the River Cree overflowed into the town.\\nFirst Minister Nicola Sturgeon visited the area to inspect the damage.\\nThe waters breached a retaining wall, flooding many commercial properties on Victoria Street - the main shopping thoroughfare.\\nJeanette Tate, who owns the Cinnamon Cafe which was badly affected, said she could not fault the multi-agency response once the flood hit.\\nHowever, she said more preventative work could have been carried out to ensure the retaining wall did not fail.\\n\"It is difficult but I do think there is so much publicity for 
Dumfries and the Nith - and I totally appreciate that - but it is almost like we\\'re neglected or forgotten,\" she said.\\n\"That may not be true but it is perhaps my perspective over the last few days.\\n\"Why were you not ready to help us a bit more when the warning and the alarm alerts had gone out?\"\\nMeanwhile, a flood alert remains in place across the Borders because of the constant rain.\\nPeebles was badly hit by problems, sparking calls to introduce more defences in the area.\\nScottish Borders Council has put a list on its website of the roads worst affected and drivers have been urged not to ignore closure signs.\\nThe Labour Party\\'s deputy Scottish leader Alex Rowley was in Hawick on Monday to see the situation first hand.\\nHe said it was important to get the flood protection plan right but backed calls to speed up the process.\\n\"I was quite taken aback by the amount of damage that has been done,\" he said.\\n\"Obviously it is heart-breaking for people who have been forced out of their homes and the impact on businesses.\"\\nHe said it was important that \"immediate steps\" were taken to protect the areas most vulnerable and a clear timetable put in place for flood prevention plans.\\nHave you been affected by flooding in Dumfries and Galloway or the Borders? Tell us about your experience of the situation and how it was handled. 
Email us on URL@URL or dumfries@URL.', 'summary': 'Clean-up operations are continuing across the Scottish Borders and Dumfries and Galloway after flooding caused by Storm Frank.', \n'id': '35232142'}", "### Data Fields\n- dialogue: text of dialogue.\n- summary: human written one-liner summary/ topic of the dialogue.\n- id: unique file id of an example.", "### Data Splits\n- train: 216,505\n- val: 11,832\n- test: 12,834", "## Dataset Creation", "### Curation Rationale\nCollection of XSUM & DialogSUM Datasets.", "### Who are the source language producers?\nlinguists", "### Who are the annotators?\nlanguage experts", "## Licensing Information\nnon-commercial licence: MIT\n\nRefer the above links for Credits & Citations." ]
0888b80fd0a0f678cce1c6520ddd13e928e49442
## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) # Dataset Card for CatalanQA ## Dataset Description - **Homepage:** https://github.com/projecte-aina - **Point of Contact:** [Carlos Rodríguez-Penagos](mailto:[email protected]) and [Carme Armentano-Oller](mailto:[email protected]) ### Dataset Summary This dataset can be used to build extractive-QA and Language Models. It is an aggregation and balancing of 2 previous datasets: [VilaQuAD](https://huggingface.co/datasets/projecte-aina/vilaquad) and [ViquiQuAD](https://huggingface.co/datasets/projecte-aina/viquiquad). Splits have been balanced by kind of question, and unlike other datasets like [SQuAD](http://arxiv.org/abs/1606.05250), it only contains, per record, one question and one answer for each context, although the contexts can repeat multiple times. 
This dataset was developed by [BSC TeMU](https://temu.bsc.es/) as part of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/), to enrich the [Catalan Language Understanding Benchmark (CLUB)](https://club.aina.bsc.es/). This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International License</a>. ### Supported Tasks and Leaderboards Extractive-QA, Language Model. ### Languages The dataset is in Catalan (`ca-ES`). ## Dataset Structure ### Data Instances ``` { "title": "Els 521 policies espanyols amb més mala nota a les oposicions seran enviats a Catalunya", "paragraphs": [ { "context": "El Ministeri d'Interior espanyol enviarà a Catalunya els 521 policies espanyols que han obtingut més mala nota a les oposicions. Segons que explica El País, hi havia mig miler de places vacants que s'havien de cobrir, però els agents amb més bones puntuacions han elegit destinacions diferents. En total van aprovar les oposicions 2.600 aspirants. D'aquests, en seran destinats al Principat 521 dels 560 amb més mala nota. Per l'altra banda, entre els 500 agents amb més bona nota, només 8 han triat Catalunya. Fonts de la policia espanyola que esmenta el diari ho atribueixen al procés d'independència, al Primer d'Octubre i a la 'situació social' que se'n deriva.", "qas": [ { "question": "Quants policies enviaran a Catalunya?", "id": "0.5961700408283691", "answers": [ { "text": "521", "answer_start": 57 } ] } ] } ] }, ``` ### Data Fields Follows [(Rajpurkar, Pranav et al., 2016)](http://arxiv.org/abs/1606.05250) for SQuAD v1 datasets: - `id` (str): Unique ID assigned to the question. - `title` (str): Title of the article. - `context` (str): Article text. - `question` (str): Question. - `answers` (list): Answer to the question, containing: - `text` (str): Span text answering to the question. - `answer_start` Starting offset of the span text answering to the question. 
### Data Splits - train.json: 17135 question/answer pairs - dev.json: 2157 question/answer pairs - test.json: 2135 question/answer pairs ## Dataset Creation ### Curation Rationale We created this corpus to contribute to the development of language models in Catalan, a low-resource language. ### Source Data - [VilaWeb](https://www.vilaweb.cat/) and [Catalan Wikipedia](https://ca.wikipedia.org). #### Initial Data Collection and Normalization This dataset is a balanced aggregation of the [ViquiQuAD](https://huggingface.co/datasets/projecte-aina/viquiquad) and [VilaQuAD](https://huggingface.co/datasets/projecte-aina/vilaquad) datasets. #### Who are the source language producers? Volunteers from [Catalan Wikipedia](https://ca.wikipedia.org) and professional journalists from [VilaWeb](https://www.vilaweb.cat/). ### Annotations #### Annotation process We aggregated and balanced the [ViquiQuAD](https://huggingface.co/datasets/projecte-aina/viquiquad) and [VilaQuAD](https://huggingface.co/datasets/projecte-aina/vilaquad) datasets. To annotate those datasets, we commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 [(Rajpurkar, Pranav et al., 2016)](http://arxiv.org/abs/1606.05250). For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible. #### Who are the annotators? Annotation was commissioned by a specialized company that hired a team of native language speakers. ### Personal and Sensitive Information No personal or sensitive information is included. ## Considerations for Using the Data ### Social Impact of Dataset We hope this corpus contributes to the development of language models in Catalan, a low-resource language. 
### Discussion of Biases [N/A] ### Other Known Limitations [N/A] ## Additional Information ### Dataset Curators Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]) This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina). ### Licensing Information This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International License</a>. ### Contributions [N/A]
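Since the card follows the SQuAD v1 layout, `answer_start` is a zero-based character offset into `context`. A minimal sketch, not part of the original card, showing how the answer span is recovered from the example record above (the context is truncated for brevity):

```python
# Sketch: recovering an answer span from a SQuAD-v1-style CatalanQA record.
# The context is a truncated excerpt of the example shown in the card.
record = {
    "context": (
        "El Ministeri d'Interior espanyol enviarà a Catalunya els 521 "
        "policies espanyols que han obtingut més mala nota a les oposicions."
    ),
    "question": "Quants policies enviaran a Catalunya?",
    "answers": [{"text": "521", "answer_start": 57}],
}

ans = record["answers"][0]
start = ans["answer_start"]
# answer_start indexes characters (Python code points) in the context.
span = record["context"][start : start + len(ans["text"])]

# The stored offset must point exactly at the answer text.
assert span == ans["text"]
print(span)  # prints 521
```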
projecte-aina/catalanqa
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:ca", "license:cc-by-sa-4.0", "arxiv:1606.05250", "region:us" ]
2022-06-29T13:22:10+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["ca"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "catalanqa"}
2023-11-25T04:47:38+00:00
[ "1606.05250" ]
[ "ca" ]
TAGS #task_categories-question-answering #task_ids-extractive-qa #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Catalan #license-cc-by-sa-4.0 #arxiv-1606.05250 #region-us
## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions # Dataset Card for CatalanQA ## Dataset Description - Homepage: URL - Point of Contact: Carlos Rodríguez-Penagos and Carme Armentano-Oller ### Dataset Summary This dataset can be used to build extractive-QA and Language Models. It is an aggregation and balancing of 2 previous datasets: VilaQuAD and ViquiQuAD. Splits have been balanced by kind of question, and unlike other datasets like SQuAD, it only contains, per record, one question and one answer for each context, although the contexts can repeat multiple times. This dataset was developed by BSC TeMU as part of Projecte AINA, to enrich the Catalan Language Understanding Benchmark (CLUB). This work is licensed under a <a rel="license" href="URL 4.0 International License</a>. ### Supported Tasks and Leaderboards Extractive-QA, Language Model. ### Languages The dataset is in Catalan ('ca-ES'). ## Dataset Structure ### Data Instances ### Data Fields Follows (Rajpurkar, Pranav et al., 2016) for SQuAD v1 datasets: - 'id' (str): Unique ID assigned to the question. - 'title' (str): Title of the article. - 'context' (str): Article text. - 'question' (str): Question. - 'answers' (list): Answer to the question, containing: - 'text' (str): Span text answering to the question. - 'answer_start' Starting offset of the span text answering to the question. 
### Data Splits - URL: 17135 question/answer pairs - URL: 2157 question/answer pairs - URL: 2135 question/answer pairs ## Dataset Creation ### Curation Rationale We created this corpus to contribute to the development of language models in Catalan, a low-resource language. ### Source Data - VilaWeb and Catalan Wikipedia. #### Initial Data Collection and Normalization This dataset is a balanced aggregation from ViquiQuAD and VilaQuAD datasets. #### Who are the source language producers? Volunteers from Catalan Wikipedia and professional journalists from VilaWeb. ### Annotations #### Annotation process We did an aggregation and balancing from ViquiQuAD and VilaQuAD datasets. To annotate those datasets, we commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 (Rajpurkar, Pranav et al., 2016). For compatibility with similar datasets in other languages, we followed as close as possible existing curation guidelines. #### Who are the annotators? Annotation was commissioned by a specialized company that hired a team of native language speakers. ### Personal and Sensitive Information No personal or sensitive information is included. ## Considerations for Using the Data ### Social Impact of Dataset We hope this corpus contributes to the development of language models in Catalan, a low-resource language. ### Discussion of Biases [N/A] ### Other Known Limitations [N/A] ## Additional Information ### Dataset Curators Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL) This work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA. ### Licensing Information This work is licensed under a <a rel="license" href="URL 4.0 International License</a>. ### Contributions [N/A]
94d6f34624770580411a759a1994bd437daf36d3
# Dataset Card for SPGISpeech

<img src="https://s3.amazonaws.com/moonup/production/uploads/1661776840270-62e049fe81d9ca6484eff137.png" alt="SPGISpeech Logo" width="200"/>

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
- [Terms of Usage](#terms-of-usage)

## Dataset Description

- **Homepage:** https://datasets.kensho.com/datasets/spgispeech
- **Repository:**
- **Paper:** https://arxiv.org/abs/2104.02014
- **Leaderboard:**
- **Point of Contact:** [[email protected]](mailto:[email protected])

### Dataset Summary

SPGISpeech (rhymes with “squeegee-speech”) is a large-scale transcription dataset, freely available for academic research. SPGISpeech is a corpus of 5,000 hours of professionally-transcribed financial audio. SPGISpeech contains a broad cross-section of L1 and L2 English accents, strongly varying audio quality, and both spontaneous and narrated speech.
The transcripts have each been cross-checked by multiple professional editors for high accuracy and are fully formatted, including capitalization, punctuation, and denormalization of non-standard words.

SPGISpeech consists of 5,000 hours of recorded company earnings calls and their respective transcriptions. The original calls were split into slices ranging from 5 to 15 seconds in length to allow easy training for speech recognition systems. Calls represent a broad cross-section of international business English; SPGISpeech contains approximately 50,000 speakers, one of the largest numbers of any speech corpus, and offers a variety of L1 and L2 English accents. The format of each WAV file is single channel, 16kHz, 16 bit audio.

### Example Usage

The training split has several configurations of various size: S, M, L. See the section [Data Splits](#data-splits) for more information.

To download the S configuration:

```python
from datasets import load_dataset

spgi = load_dataset("kensho/spgispeech", "S", use_auth_token=True)

# see structure
print(spgi)

# load audio sample on the fly
audio_input = spgi["train"][0]["audio"]         # first decoded audio sample
transcription = spgi["train"][0]["transcript"]  # first transcription
```

It is possible to download only the development or test data:

```python
spgi_dev = load_dataset("kensho/spgispeech", "dev", use_auth_token=True)
spgi_test = load_dataset("kensho/spgispeech", "test", use_auth_token=True)
```

### Supported Tasks and Leaderboards

- `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).

### Languages

SPGISpeech contains audio and transcription data in business English and offers a variety of L1 and L2 accents.
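Since word error rate (WER) is named above as the usual metric, here is a minimal, dependency-free sketch of WER as word-level edit distance (insertions, deletions, and substitutions normalized by the reference length); real evaluations would typically use a library such as `jiwer` instead:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level Levenshtein distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

# One deleted word out of nine reference words -> WER of 1/9.
print(round(word_error_rate("we are on track to exceed our targeted savings",
                            "we are on track to exceed our savings"), 3))  # 0.111
```

Note that because the SPGISpeech transcripts are fully formatted, WER here is computed on the formatted text, so casing and punctuation differences also count as errors.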
## Dataset Structure

### Data Instances

```python
{
    'wav_filename': '32bcf9c9dc707fb61a04290e296f31eb/99.wav',
    'audio': {
        'path': '/home/user/.cache/huggingface/datasets/downloads/extracted/c7082e2bd5b.../dev_part_2/32bcf9c9dc707fb61a04290e296f31eb/99.wav',
        'array': array([-0.00039673, -0.00057983, -0.00057983, ..., -0.0007019 , -0.00027466,  0.00021362], dtype=float32),
        'sampling_rate': 16000
    },
    'wav_filesize': 292844,
    'transcript': 'This is proving to be true, and through focused execution we are on track to exceed our targeted savings in 2017. As a reminder,'
}
```

### Data Fields

* wav_filename (string) - audio filename (includes parent directory).
* audio (Audio feature) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
* wav_filesize (int) - size of the file in bytes.
* transcript (string) - transcription of the file.

### Data Splits

The dataset has three splits: train, evaluation (dev) and test. The train split has three configurations of various sizes: S, M, L. Larger subsets are supersets of smaller subsets, e.g., the L subset contains all the data from the M subset.

#### Transcribed Subsets Size

| Subset | Size  |
|:------:|:-----:|
|   S    | 22Gb  |
|   M    | 107Gb |
|   L    | 530Gb |
|  dev   | 11Gb  |
|  test  | 11Gb  |

## Dataset Creation

### Curation Rationale

To augment the open-source speech-to-text datasets available for R&D.

### Source Data

The dataset contains S&P Global company earnings calls.

#### Initial Data Collection and Normalization

Public earnings calls spanning the time period from 2007-2020 were converted to 16kHz, 16-bit audio.

#### Who are the source language producers?
English speakers with a diverse selection of accents, including non-native ones (L2), producing both spontaneous and narrated speech.

### Annotations

#### Annotation process

Data is orthographically transcribed according to a professional style guide detailing conventions for capitalization, punctuation, denormalization of non-standard words and transcription of disfluencies in spontaneous speech. The transcripts have each been cross-checked by multiple professional editors for high accuracy and are fully formatted.

Full earnings calls last 30-60 minutes in length and are typically transcribed as whole units, without internal timestamps. In order to produce short audio slices suitable for STT training, the files were segmented with [Gentle](https://lowerquality.com/gentle/), a double-pass forced aligner, with the beginning and end of each slice of audio imputed by voice activity detection with [py-webrtc](https://github.com/wiseman/py-webrtcvad).

#### Who are the annotators?

Earnings calls are manually transcribed by S&P Global, Inc.

### Personal and Sensitive Information

Though earnings calls are public, we nevertheless identified full names with the spaCy en core web large model. We withheld samples containing names that appeared fewer than ten times (7% of total). Full names appearing ten times or more in the data were considered to be public figures and were retained. This necessarily incomplete approach to named entity recognition was complemented with randomized manual spot checks, which uncovered no false negatives missed by the automated approach.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

The largest issue inherent with the dataset is that the speaker distribution of SPGISpeech reflects the speaker distribution seen during earnings calls. One example issue that stems from this: during earnings calls, close to 90% of speakers are male.
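The name-frequency rule described under Personal and Sensitive Information above (withhold any sample containing a full name seen fewer than ten times across the corpus) can be sketched as follows. The tiny corpus and the pre-extracted name lists are invented stand-ins for real spaCy NER output:

```python
from collections import Counter

# Hypothetical samples paired with names already extracted by an NER model
# (in the real pipeline these came from spaCy's large English model).
samples = [
    {"text": "sample 1", "names": ["Jane Roe"]},
    {"text": "sample 2", "names": ["John Public"]},
    {"text": "sample 3", "names": []},
]

# Corpus-wide name frequencies; pretend "John Public" is a frequent speaker.
name_counts = Counter(name for s in samples for name in s["names"])
name_counts["John Public"] = 25

THRESHOLD = 10  # names seen fewer than this many times are withheld

kept = [
    s for s in samples
    if all(name_counts[name] >= THRESHOLD for name in s["names"])
]
print([s["text"] for s in kept])  # ['sample 2', 'sample 3'] - rare-name sample withheld
```

Samples with no detected names pass through unchanged, mirroring the idea that frequently-mentioned names belong to public figures while rare names are treated as potentially private.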
### Other Known Limitations

Due to the formal language seen during earnings calls, the dataset needs augmentation for training systems that transcribe informal speech.

## Additional Information

### Dataset Curators

Kensho Technologies

### Licensing Information

### Citation Information

Please cite this paper:

```bibtex
@ARTICLE{2021arXiv210402014O,
       author = {{O'Neill}, Patrick K. and {Lavrukhin}, Vitaly and {Majumdar}, Somshubra and {Noroozi}, Vahid and {Zhang}, Yuekai and {Kuchaiev}, Oleksii and {Balam}, Jagadeesh and {Dovzhenko}, Yuliya and {Freyberg}, Keenan and {Shulman}, Michael D. and {Ginsburg}, Boris and {Watanabe}, Shinji and {Kucsko}, Georg},
        title = "{SPGISpeech: 5,000 hours of transcribed financial audio for fully formatted end-to-end speech recognition}",
      journal = {arXiv e-prints},
     keywords = {Computer Science - Computation and Language, Electrical Engineering and Systems Science - Audio and Speech Processing},
         year = 2021,
        month = apr,
          eid = {arXiv:2104.02014},
        pages = {arXiv:2104.02014},
archivePrefix = {arXiv},
       eprint = {2104.02014},
 primaryClass = {cs.CL},
       adsurl = {https://ui.adsabs.harvard.edu/abs/2021arXiv210402014O},
      adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
```

### Contributions

Thanks to [@sanchit-gandhi](https://github.com/sanchit-gandhi), [@patrickvonplaten](https://github.com/patrickvonplaten), and [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.

## Terms of Usage

Your access to and use of the information in the Kensho Transcript Dataset (the “Content”), which is provided by Kensho Technologies, LLC, a subsidiary of S&P Global, Inc., (“Kensho”), shall be governed by the following terms and conditions of usage (“Terms of Usage”). The Content may be accessed only by persons who have been authorized to use this Content pursuant to their acceptance and acknowledgement of these Terms of Usage (in each case, an “Authorized User”).
By providing your electronic signature at the end of these Terms of Usage, you represent that you are an Authorized User and that you accept these Terms of Usage and agree to be bound by them. If you do not wish to be bound by these Terms of Usage, you must not use this Content. PLEASE READ THESE TERMS OF USAGE CAREFULLY BEFORE USING THIS CONTENT.

Section 1 – THE CONTENT

1.1 The Content is provided for academic research purposes and internal use only and must not be used to:

- assemble or create a database;
- construct or facilitate the construction of products which compete with the Content;
- identify or attempt to identify or contact any individual; or
- link to another dataset.

The Content, which is comprised of public earnings calls in audio and corresponding text format, and all accompanying derived products is proprietary to Kensho and its third-party content providers. You shall not modify the Content; create derivative works based on the Content, rewrite or reprocess the Content except as expressly provided herein. You must not publish, display, transfer or redistribute the Content or any portions or derivative versions thereof to anyone without prior written consent from Kensho. You agree not to contact Kensho or its affiliates concerning individuals whose information may be included in the Content.

1.2 Disclaimer. Content to which you are provided access, either directly or indirectly, from or on this Content will not have been reviewed or monitored by Kensho, and Kensho cannot and does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any such content. The Content is provided for your convenience only and is not a republication or reconfirmation of the opinion or information contained therein.
The provision of the Content is without any obligation on the part of Kensho or its third-party content providers to review such or any liability or responsibility arising out of your use thereof. Kensho does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any Content and shall not be liable for any errors, delays, or actions taken in reliance on information. In addition, the Content speaks only as of the date issued and is based on conference calls that may contain projections or other forward-looking statements. You should not rely on the Content as expressing Kensho’s opinion or as representing current information. None of Kensho or the third-party content providers has undertaken, and do not undertake any duty to update any Content or otherwise advise you of any changes in the Content.

1.3 Ownership of Third-Party Content. You acknowledge that all proprietary rights in the Content that are owned by Kensho or third party content providers shall remain the property of Kensho or such third party content providers, and you shall have no right or interest in such third party content except the rights to use such third party content in accordance with these Terms of Usage. Any additional rights not granted herein shall require a separate, direct agreement with Kensho. You acknowledge that the Content and third party content as compiled, prepared, selected and arranged by Kensho or its third party content providers constitutes an expenditure of substantial time, effort and money by Kensho and its third party content providers and constitutes valuable commercial property and/or trade secrets of Kensho and such third party content providers.
Kensho retains all rights and remedies afforded under the copyright, trademark, service mark, patent and other laws of the United States and the States thereof, including without limitation any laws designed to protect proprietary or confidential information. You agree that you will not remove or modify any copyright notice, disclosures, disclaimers or other notification or trade name or marks of Kensho or the third party content providers that may appear in the Content or third party content and that any permitted reproduction and/or distribution of the Content or third party content shall contain such notices and/or marks as they appear in the Content or third party content. You may not use Kensho’s or the third-party content providers’ name or trademarks without the prior written consent of Kensho or such third-party content providers. Apart from the rights granted hereunder, no conveyance of ownership, right, title or interest is intended herein. Any additional rights require a separate agreement with Kensho.

1.4 Posted Guidelines. In addition to these Terms of Usage, when using this Content, you shall be subject to and agree to follow any posted notice, guidelines or rules, which may be posted and amended from time to time. Nothing on this Content shall be considered a recommendation or solicitation to buy or an offer to sell a security to any person in any jurisdiction.

1.5 Registration Data. In consideration of your use of this Content, you and/or your employer agree to: (a) provide true, accurate, current and complete Registration Data (as defined below in Section 3.1) to Kensho as prompted by the registration form completed prior to accessing the Content and (b) maintain and promptly update the Registration Data and to keep the same true, accurate, current and complete.

1.6 Right to Terminate User Access.
Kensho reserves the right to limit, restrict and immediately terminate your access to and use of this Content at any time, in whole or in part, in its sole discretion and without notice.

Section 2 - DISCLAIMER OF WARRANTY AND LIMITATION OF LIABILITY

2.1 THE CONTENT IS PROVIDED “AS IS” AND “AS AVAILABLE” WITHOUT REPRESENTATION OR WARRANTY OF ANY KIND. USE OF THE CONTENT IS AT THE USER’S OWN RISK. IN NO EVENT SHALL KENSHO OR ITS THIRD-PARTY CONTENT PROVIDERS BE LIABLE FOR ANY DECISION MADE OR ACTION OR INACTION TAKEN IN RELIANCE ON ANY CONTENT, INCLUDING THIRD-PARTY CONTENT, INCLUDING YOUR HANDLING AND STORING OF THE CONTENT. KENSHO FURTHER EXPLICITLY DISCLAIMS ANY WARRANTY OF ANY KIND, WHETHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OF ORIGINALITY, ACCURACY, COMPLETENESS, TIMELINESS, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. KENSHO EXPRESSLY DISCLAIMS, AND YOU WAIVE, ANY LIABILITY THAT MAY ARISE FROM YOUR PUBLICATION OR PROVISION OF THE CONTENT TO A THIRD PARTY, OR ANY REPRESENTATION OR WARRANTY MADE BY YOU TO ANY THIRD PARTY, WHETHER OR NOT RELATED TO THE CONTENT. KENSHO, SUPPLIERS OF THIRD-PARTY CONTENT AND ANY OTHER THIRD PARTY WORKING WITH KENSHO SHALL NOT BE RESPONSIBLE OR LIABLE, DIRECTLY OR INDIRECTLY, FOR ANY DAMAGES OR LOSS (INCLUDING DIRECT, INDIRECT, INCIDENTAL, CONSEQUENTIAL AND ANY AND ALL OTHER FORMS OF DAMAGES OR LOSSES REGARDLESS OF THE FORM OF THE ACTION OR THE BASIS OF THE CLAIM) CAUSED OR ALLEGED TO BE CAUSED IN CONNECTION WITH YOUR USE OF THE CONTENT WHETHER OR NOT FORESEEABLE, EVEN IF KENSHO OR ANY OF THE SUPPLIERS OF THIRD-PARTY CONTENT OR OTHER THIRD PARTIES WORKING WITH KENSHO IN CONNECTION WITH THE CONTENT HAS BEEN ADVISED OF THE POSSIBILITY OR LIKELIHOOD OF SUCH DAMAGES.
2.2 THE CONTENT IS NOT INTENDED TO PROVIDE TAX, LEGAL, INSURANCE OR INVESTMENT ADVICE, AND NOTHING IN THE CONTENT SHOULD BE CONSTRUED AS AN OFFER TO SELL, A SOLICITATION OF AN OFFER TO BUY, OR A RECOMMENDATION FOR ANY SECURITY BY KENSHO OR ANY THIRD PARTY.

2.3 For third party demands, claims, actions, proceedings and liability for losses, damages, reasonable legal costs and other reasonable expenses of any nature, you agree to defend, indemnify and hold Kensho and its affiliates harmless, including its respective directors, officers, employees and agents from and against all claims to the extent arising from your access to and/or use of the Content, any failure by you to abide by the Terms of Usage, or breach of applicable law.

Section 3 - PRIVACY

3.1 Access and Collection. In order to access this Content, during the registration process, either you or your employer will be required to provide Kensho with certain information, including your name, employer or academic institution, and e-mail address (“Registration Data”). In addition, when you request or view Content, Kensho may obtain user identifiable information related to your request of, or access to, such Content (“Access Data”). For example, while you are accessing this Content, our Web servers may recognize your: (a) domain name; (b) ISP’s domain name; (c) IP address; (d) browser type; and (e) operating system. If you contact us with a technical question, we may collect certain information about your systems, including: (a) your browser type, version and settings (e.g., Java and cookie settings); (b) connectivity information (e.g., SSL/HTTPS compatibility, bandwidth capacity); and browser plug-in information (e.g., do you have Adobe, what is your media player, can you open Flash files, etc.).

3.2 Use of Your Information.
Registration Data and Access Data may be used by Kensho for research and development purposes and to communicate with users and to troubleshoot any technical issues pertaining to the Content. You acknowledge that in the event that a separate agreement is required, Kensho may share Registration Data with its Affiliates (as defined below).

3.3 Disclosure of Your Information. Except as otherwise noted herein, Kensho will not disclose, rent or sell personal information collected from or about you without your permission. For the purposes specified in the preceding paragraph, we may transfer or disclose Registration Data and Access Data to S&P Global Inc. and its affiliates (“Kensho Affiliates”) and third parties who are contracted to perform services on behalf of Kensho, such as those who assist Kensho in bringing you this Content and providing you with certain features and functionality included within or accessible via this Content. We may also disclose Registration Data and Access Data to Kensho Affiliates and third parties in connection with their providing you access to this Content. Disclosures to these third parties will be subject to confidentiality agreements and, where required, governed by contract. Kensho may also be required to disclose information to governmental, regulatory or self-regulatory entities or agencies in response to regulatory inquiries or to comply with applicable laws, rules, regulations, orders, subpoenas or other legal processes.

3.4 Consent.
By (a) agreeing to these Terms of Usage, or (b) by using this Content, and, in either case, providing any information that may be required, requested or otherwise collected by us as set forth above, you freely consent to Kensho processing your information in the United States and in other countries and territories for the purposes set out in these Terms of Usage, and you also consent to the transfer of your information for such purposes to any third party content provider wherever such entity may from time to time be located and to any third parties as described above and in accordance with applicable law and regulations. If you do not permit Kensho to collect any of your information or do not agree with any of the terms and conditions of these Terms of Usage, you should not use this Content and should exit this page and/or Content, as the case may be. If after registering with Kensho, you desire to withdraw the consent granted in this Section 3.4 for all future use of your information by Kensho, you must notify Kensho in writing at the address listed below in Section 3.8 and immediately cease use of this Content. 3.5 Inquiries. If you have any questions regarding these Terms of Usage or your information that is held by us, please contact Kensho in writing using the contact information provided below. If we receive a request regarding your personal information held by us, we will use reasonable means to provide you with such information that we can reasonably compile. You will be given the opportunity to rectify any inaccuracies in such information. 3.6 Encryption. Kensho may use encryption technology to protect certain transmissions of data to/from this Content, but e-mail and other communications, unless otherwise noted on this Content, are not encrypted to/from this Content. Therefore, you should not send any personal or identifying information, such as account numbers, credit card numbers, Social Security numbers, passwords, etc., to Kensho via e-mail. 
By utilizing e-mail or other electronic communication means you acknowledge that you have no expectation of privacy with respect to the information delivered thereby and that Kensho will not be responsible for any loss or damage that could result from interception by third parties of any information so sent. 3.7 Contact Information. In the event you have any questions regarding these Terms of Use, this Privacy Statement or to make any requests or queries regarding your information that is held by us you may contact us in writing at [email protected] or Kensho Technologies LLC, Attn: General Counsel, 55 Water Street, New York, NY 10041. Section 4 - MISCELLANEOUS 4.1 Entire Agreement. These Terms of Usage constitute the entire agreement of the parties hereto with respect to the subject matter hereof and supersede all prior agreements and undertakings, both written and oral, between the parties with respect to the subject matter hereof. 4.2 Severability. If any term or other provision of these Terms of Usage is invalid, illegal or incapable of being enforced by any law or public policy, all other terms and provisions of these Terms of Usage shall nevertheless remain in full force and effect so long as the economic or legal substance of the transactions contemplated hereby is not affected in any manner materially adverse to any party. 4.3 Governing Law; Forum. These Terms of Usage shall be governed in all respects by the laws of the State of New York, and any litigation arising out of or connected in any way with these Terms of Usage shall take place in a State or Federal court of competent jurisdiction in New York County, State of New York. 4.4 Waiver of Jury Trial. YOU WAIVE TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW ANY RIGHT YOU MAY HAVE TO A TRIAL BY JURY WITH RESPECT TO ANY ACTIONS OR PROCEEDINGS DIRECTLY OR INDIRECTLY ARISING OUT OF, UNDER OR IN CONNECTION WITH THESE TERMS OF USAGE. 4.5 Conflict. 
In the event of a conflict between these Terms of Use and any other agreement with Kensho that relates to Third-Party Content, the more restrictive terms shall prevail.
kensho/spgispeech
[ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:other", "arxiv:2104.02014", "region:us" ]
2022-06-29T15:09:04+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition"], "pretty_name": "SpgiSpeech", "extra_gated_prompt": "Your access to and use of the information in the Kensho Transcript Dataset (the \u201cContent\u201d), which is provided by Kensho Technologies, LLC, a subsidiary of S&P Global, Inc., (\u201cKensho\u201d), shall be governed by the following terms and conditions of usage (\u201cTerms of Usage\u201d). The Content may be accessed only by persons who have been authorized to use this Content pursuant to their acceptance and acknowledgement of these Terms of Usage (in each case, an \u201cAuthorized User\u201d). By providing your electronic signature at the end of these Terms of Usage, you represent that you are an Authorized User and that you accept these Terms of Usage and agree to be bound by them.\nIf you do not wish to be bound by these Terms of Usage, you must not use this Content. PLEASE READ THESE TERMS OF USAGE CAREFULLY BEFORE USING THIS CONTENT.\nSection 1 \u2013 THE CONTENT\n1.1 The Content is provided for academic research purposes and internal use only and must not be used to: assemble or create a database; construct or facilitate the construction of products which compete with the Content; identify or attempt to identify or contact any individual; or link to another dataset.\nThe Content, which is comprised of public earnings calls in audio and corresponding text format, and all accompanying derived products is proprietary to Kensho and its third-party content providers. You shall not modify the Content; create derivative works based on the Content, rewrite or reprocess the Content except as expressly provided herein. 
You must not publish, display, transfer or redistribute the Content or any portions or derivative versions thereof to anyone without prior written consent from Kensho. You agree not to contact Kensho or its affiliates concerning individuals whose information may be included in the Content.\n1.2 Disclaimer. Content to which you are provided access, either directly or indirectly, from or on this Content will not have been reviewed or monitored by Kensho, and Kensho cannot and does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any such content.\nThe Content is provided for your convenience only and is not a republication or reconfirmation of the opinion or information contained therein. The provision of the Content is without any obligation on the part of Kensho or its third-party content providers to review such or any liability or responsibility arising out of your use thereof. Kensho does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any Content and shall not be liable for any errors, delays, or actions taken in reliance on information. In addition, the Content speaks only as of the date issued and is based on conference calls that may contain projections of other forward-looking statements. You should not rely on the Content as expressing Kensho\u2019s opinion or as representing current information. None of Kensho or the third-party content providers has undertaken, and do not undertake any duty to update any Content or otherwise advise you of any changes in the Content.\n1.3 Ownership of Third-Party Content. 
You acknowledge that all proprietary rights in the Content that are owned by Kensho or third party content providers shall remain the property of Kensho or such third party content providers, and you shall have no right or interest in such third party content except the rights to use such third party content in accordance with these Terms of Usage. Any additional rights not granted herein shall require a separate, direct agreement with Kensho. You acknowledge that the Content and third party content as compiled, prepared, selected and arranged by Kensho or its third party content providers constitutes an expenditure of substantial time, effort and money by Kensho and its third party content providers and constitutes valuable commercial property and/or trade secrets of Kensho and such third party content providers. Kensho retains all rights and remedies afforded under the copyright, trademark, service mark, patent and other laws of the United States and the States thereof, including without limitation any laws designed to protect proprietary or confidential information. You agree that you will not remove or modify any copyright notice, disclosures, disclaimers or other notification or trade name or marks of Kensho or the third party content providers that may appear in the Content or third party content and that any permitted reproduction and/or distribution of the Content or third party content shall contain such notices and/or marks as they appear in the Content or third party content. You may not use Kensho\u2019s or the third-party content providers\u2019 name or trademarks without the prior written consent of Kensho or such third-party content providers. Apart from the rights granted hereunder, no conveyance of ownership, right, title or interest is intended herein. Any additional rights require a separate agreement with Kensho.\n1.4 Posted Guidelines. 
In addition to these Terms of Usage, when using this Content, you shall be subject to and agree to follow any posted notice, guidelines or rules, which may be posted and amended from time to time. Nothing on this Content shall be considered a recommendation or solicitation to buy or an offer to sell a security to any person in any jurisdiction.\n1.5 Registration Data. In consideration of your use of this Content, you and/or your employer agree to: (a) provide true, accurate, current and complete Registration Data (as defined below in Section 3.1) to Kensho as prompted by the registration form completed prior to accessing the Content and (b) maintain and promptly update the Registration Data and to keep the same true, accurate, current and complete.\n1.6 Right to Terminate User Access. Kensho reserves the right to limit, restrict and immediately terminate your access to and use of this Content at any time, in whole or in part, in its sole discretion and without notice.\nSection 2 - DISCLAIMER OF WARRANTY AND LIMITATION OF LIABILITY\n2.1 THE CONTENT IS PROVIDED \u201cAS IS\u201d AND \u201cAS AVAILABLE\u201d WITHOUT REPRESENTATION OR WARRANTY OF ANY KIND. USE OF THE CONTENT IS AT THE USER\u2019S OWN RISK. IN NO EVENT SHALL KENSHO OR ITS THIRD-PARTY CONTENT PROVIDERS BE LIABLE FOR ANY DECISION MADE OR ACTION OR INACTION TAKEN IN RELIANCE ON ANY CONTENT, INCLUDING THIRD-PARTY CONTENT, INCLUDING YOUR HANDLING AND STORING OF THE CONTENT. KENSHO FURTHER EXPLICITLY DISCLAIMS, ANY WARRANTY OF ANY KIND, WHETHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OF ORIGINALITY, ACCURACY, COMPLETENESS, TIMELINESS, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. KENSHO EXPRESSLY DISCLAIMS, AND YOU WAIVE, ANY LIABILITY THAT MAY ARISE FROM YOUR PUBLICATION OR PROVISION OF THE CONTENT TO A THIRD PARTY, OR ANY REPRESENTATION OR WARRANTY MADE BY YOU TO ANY THIRD PARTY, WHETHER OR NOT RELATED TO THE CONTENT. 
KENSHO, SUPPLIERS OF THIRD-PARTY CONTENT AND ANY OTHER THIRD PARTY WORKING WITH KENSHO SHALL NOT BE RESPONSIBLE OR LIABLE, DIRECTLY OR INDIRECTLY, FOR ANY DAMAGES OR LOSS (INCLUDING DIRECT, INDIRECT, INCIDENTAL, CONSEQUENTIAL AND ANY AND ALL OTHER FORMS OF DAMAGES OR LOSSES REGARDLESS OF THE FORM OF THE ACTION OR THE BASIS OF THE CLAIM) CAUSED OR ALLEGED TO BE CAUSED IN CONNECTION WITH YOUR USE OF THE CONTENT WHETHER OR NOT FORESEEABLE, EVEN IF KENSHO OR ANY OF THE SUPPLIERS OF THIRD-PARTY CONTENT OR OTHER THIRD PARTIES WORKING WITH KENSHO IN CONNECTION WITH THE CONTENT HAS BEEN ADVISED OF THE POSSIBILITY OR LIKELIHOOD OF SUCH DAMAGES.\n2.2 THE CONTENT IS NOT INTENDED TO PROVIDE TAX, LEGAL, INSURANCE OR INVESTMENT ADVICE, AND NOTHING IN THE CONTENT SHOULD BE CONSTRUED AS AN OFFER TO SELL, A SOLICITATION OF AN OFFER TO BUY, OR A RECOMMENDATION FOR ANY SECURITY BY KENSHO OR ANY THIRD PARTY.\n2.3 For third party demands, claims, actions, proceedings and liability for losses, damages, reasonable legal costs and other reasonable expenses of any nature, you agree to defend, indemnify and hold Kensho and its affiliates harmless, including its respective directors, officers, employees and agents from and against all claims to the extent arising from your access to and/or use of the Content, any failure by you to abide by the Terms of Usage, or breach of applicable law.\nSection 3 - PRIVACY\n3.1 Access and Collection. In order to access this Content, during the registration process, either you or your employer will be required to provide Kensho with certain information; including your name, employer or academic institution, and e-mail address (\u201cRegistration Data\u201d). In addition, when you request or view Content, Kensho may obtain user identifiable information related to your request of, or access to, such Content (\u201cAccess Data\u201d). 
For example, while you are accessing this Content, our Web servers may recognize your: (a) domain name; (b) ISP\u2019s domain name; (c) IP address; (d) browser type; and (e) operating system. If you contact us with a technical question, we may collect certain information about your systems, including: (a) your browser type, version and settings (e.g., Java and cookie settings); (b) connectivity information (e.g., SSL/HTTPS compatibility, bandwidth capacity); and browser plug-in information (e.g., do you have Adobe, what is your media player, can you open Flash files, etc.).\n3.2 Use of Your Information. Registration Data and Access Data may be used by Kensho for research and development purposes and to communicate with users and to troubleshoot any technical issues pertaining to the Content. You acknowledge that in the event that a separate agreement is required, Kensho may share Registration Data with its Affiliates (as defined below).\n3.3 Disclosure of Your Information. Except as otherwise noted herein, Kensho will not disclose, rent or sell personal information collected from or about you without your permission. For the purposes specified in the preceding paragraph, we may transfer or disclose Registration Data and Access Data to S&P Global Inc. and its affiliates (\u201cKensho Affiliates\u201d) and third parties who are contracted to perform services on behalf of Kensho, such as those who assist Kensho in bringing you this Content and providing you with certain features and functionality included within or accessible via this Content. We may also disclose Registration Data and Access Data to Kensho Affiliates and third parties in connection with their providing you access to this Content. Disclosures to these third parties will be subject to confidentiality agreements and, where required, governed by contract. 
Kensho may also be required to disclose information to governmental, regulatory or self-regulatory entities or agencies in response to regulatory inquiries or to comply with applicable laws, rules, regulations, orders, subpoenas or other legal processes.\n3.4 Consent. By (a) agreeing to these Terms of Usage, or (b) by using this Content, and, in either case, providing any information that may be required, requested or otherwise collected by us as set forth above, you freely consent to Kensho processing your information in the United States and in other countries and territories for the purposes set out in these Terms of Usage, and you also consent to the transfer of your information for such purposes to any third party content provider wherever such entity may from time to time be located and to any third parties as described above and in accordance with applicable law and regulations. If you do not permit Kensho to collect any of your information or do not agree with any of the terms and conditions of these Terms of Usage, you should not use this Content and should exit this page and/or Content, as the case may be. If after registering with Kensho, you desire to withdraw the consent granted in this Section 3.4 for all future use of your information by Kensho, you must notify Kensho in writing at the address listed below in Section 3.8 and immediately cease use of this Content.\n3.5 Inquiries. If you have any questions regarding these Terms of Usage or your information that is held by us, please contact Kensho in writing using the contact information provided below. If we receive a request regarding your personal information held by us, we will use reasonable means to provide you with such information that we can reasonably compile. You will be given the opportunity to rectify any inaccuracies in such information.\n3.6 Encryption. 
Kensho may use encryption technology to protect certain transmissions of data to/from this Content, but e-mail and other communications, unless otherwise noted on this Content, are not encrypted to/from this Content. Therefore, you should not send any personal or identifying information, such as account numbers, credit card numbers, Social Security numbers, passwords, etc., to Kensho via e-mail. By utilizing e-mail or other electronic communication means you acknowledge that you have no expectation of privacy with respect to the information delivered thereby and that Kensho will not be responsible for any loss or damage that could result from interception by third parties of any information so sent.\n3.7 Contact Information. In the event you have any questions regarding these Terms of Use, this Privacy Statement or to make any requests or queries regarding your information that is held by us you may contact us in writing at [email protected] or Kensho Technologies LLC, Attn: General Counsel, 55 Water Street, New York, NY 10041.\nSection 4 - MISCELLANEOUS\n4.1 Entire Agreement. These Terms of Usage constitute the entire agreement of the parties hereto with respect to the subject matter hereof and supersede all prior agreements and undertakings, both written and oral, between the parties with respect to the subject matter hereof.\n4.2 Severability. If any term or other provision of these Terms of Usage is invalid, illegal or incapable of being enforced by any law or public policy, all other terms and provisions of these Terms of Usage shall nevertheless remain in full force and effect so long as the economic or legal substance of the transactions contemplated hereby is not affected in any manner materially adverse to any party.\n4.3 Governing Law; Forum. 
These Terms of Usage shall be governed in all respects by the laws of the State of New York, and any litigation arising out of or connected in any way with these Terms of Usage shall take place in a State or Federal court of competent jurisdiction in New York County, State of New York.\n4.4 Waiver of Jury Trial. YOU WAIVE TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW ANY RIGHT YOU MAY HAVE TO A TRIAL BY JURY WITH RESPECT TO ANY ACTIONS OR PROCEEDINGS DIRECTLY OR INDIRECTLY ARISING OUT OF, UNDER OR IN CONNECTION WITH THESE TERMS OF USAGE.\n4.5 Conflict. In the event of a conflict between these Terms of Use and any other agreement with Kensho that relates to Third-Party Content, the more restrictive terms shall prevail.", "extra_gated_fields": {"Full name": "text", "Email": "text", "Institution": "text", "I accept the Terms of Usage": "checkbox"}}
2022-10-21T13:46:30+00:00
[ "2104.02014" ]
[ "en" ]
TAGS #task_categories-automatic-speech-recognition #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-other #arxiv-2104.02014 #region-us
Dataset Card for SPGISpeech
===========================

Table of Contents
-----------------

<img src="URL" alt="SPGISpeech Logo" width="200"/>

* Table of Contents
* Dataset Description
	+ Dataset Summary
	+ Supported Tasks and Leaderboards
	+ Languages
* Dataset Structure
	+ Data Instances
	+ Data Fields
	+ Data Splits
* Dataset Creation
	+ Curation Rationale
	+ Source Data
	+ Annotations
	+ Personal and Sensitive Information
* Considerations for Using the Data
	+ Social Impact of Dataset
	+ Discussion of Biases
	+ Other Known Limitations
* Additional Information
	+ Dataset Curators
	+ Licensing Information
	+ Citation Information
	+ Contributions
* Terms of Usage

Dataset Description
-------------------

* Homepage: URL
* Repository:
* Paper: URL
* Leaderboard:
* Point of Contact: data@URL

Dataset Description
-------------------

SPGISpeech (rhymes with “squeegee-speech”) is a large-scale transcription dataset, freely available for academic research. SPGISpeech is a corpus of 5,000 hours of professionally-transcribed financial audio. SPGISpeech contains a broad cross-section of L1 and L2 English accents, strongly varying audio quality, and both spontaneous and narrated speech. The transcripts have each been cross-checked by multiple professional editors for high accuracy and are fully formatted, including capitalization, punctuation, and denormalization of non-standard words.

SPGISpeech consists of 5,000 hours of recorded company earnings calls and their respective transcriptions. The original calls were split into slices ranging from 5 to 15 seconds in length to allow easy training for speech recognition systems. Calls represent a broad cross-section of international business English; SPGISpeech contains approximately 50,000 speakers, one of the largest numbers of any speech corpus, and offers a variety of L1 and L2 English accents. The format of each WAV file is single channel, 16kHz, 16-bit audio.
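For a sense of scale, the stated audio format pins down slice and corpus sizes exactly. A back-of-envelope sketch (the byte figures below are derived from the format, not quoted from the card):

```python
# Sizes implied by the stated format:
# single channel, 16 kHz sample rate, 16-bit (2-byte) PCM samples.
SAMPLE_RATE_HZ = 16_000
BYTES_PER_SAMPLE = 2  # 16-bit
CHANNELS = 1

bytes_per_second = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * CHANNELS

# Each training slice is 5-15 seconds of audio.
min_slice_bytes = 5 * bytes_per_second
max_slice_bytes = 15 * bytes_per_second

# The full corpus is roughly 5,000 hours of audio (raw samples only,
# ignoring WAV container overhead).
corpus_bytes = 5_000 * 3_600 * bytes_per_second

print(bytes_per_second)                  # 32000 bytes/s
print(min_slice_bytes, max_slice_bytes)  # 160000 480000
print(round(corpus_bytes / 1e9))         # ~576 GB of raw PCM
```

So individual slices are small (160–480 kB of samples), but the corpus as a whole is on the order of hundreds of gigabytes, which is why the smaller S/M training configurations and streaming mode described below exist.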
### Example Usage

The training split has several configurations of various sizes: S, M, L. See the section Data Splits for more information. To download the S configuration:

It is possible to download only the development or test data:

### Supported Tasks and Leaderboards

* 'automatic-speech-recognition': The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).

### Languages

SPGISpeech contains audio and transcription data in business English and offers a variety of L1 and L2 accents.

Dataset Structure
-----------------

### Data Instances

### Data Fields

* wav\_filename (string) - audio filename (includes parent directory).
* audio (Audio feature) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
* wav\_filesize (int) - size of the file in bytes.
* transcript (string) - transcription of the file.

### Data Splits

The dataset has three splits: train, evaluation (dev) and test. The train split has three configurations of various sizes: S, M, L. Larger subsets are supersets of smaller subsets, e.g., the L subset contains all the data from the M subset.

#### Transcribed Subsets Size

Dataset Creation
----------------

### Curation Rationale

To augment the open-source speech-to-text datasets available for R&D.

### Source Data

The dataset contains S&P Global company earnings calls.

#### Initial Data Collection and Normalization

Public earnings calls spanning the time period from 2007-2020 were converted to 16kHz, 16-bit audio.

#### Who are the source language producers?
English speakers with a diverse selection of accents, including non-native ones (L2), producing both spontaneous and narrated speech.

### Annotations

#### Annotation process

Data is orthographically transcribed according to a professional style guide detailing conventions for capitalization, punctuation, denormalization of non-standard words and transcription of disfluencies in spontaneous speech. The transcripts have each been cross-checked by multiple professional editors for high accuracy and are fully formatted. Full earnings calls last 30-60 minutes in length and are typically transcribed as whole units, without internal timestamps. In order to produce short audio slices suitable for STT training, the files were segmented with Gentle, a double-pass forced aligner, with the beginning and end of each slice of audio imputed by voice activity detection with py-webrtc.

#### Who are the annotators?

Earnings calls are manually transcribed by S&P Global, Inc.

### Personal and Sensitive Information

Though earnings calls are public, we nevertheless identified full names with the spaCy en core web large model. We withheld samples containing names that appeared fewer than ten times (7% of total). Full names appearing ten times or more in the data were considered to be public figures and were retained. This necessarily incomplete approach to named entity recognition was complemented with randomized manual spot checks which uncovered no false negatives missed by the automated approach.

Considerations for Using the Data
---------------------------------

### Social Impact of Dataset

### Discussion of Biases

The largest issue inherent with the dataset is that the speaker distribution of SPGISpeech reflects the speaker distribution seen during earnings calls. One example issue that stems from this: during earnings calls, close to 90% of speakers are male.
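The name-withholding step described under Personal and Sensitive Information above amounts to a corpus-wide frequency filter. The real pipeline used spaCy's en_core_web_lg PERSON entities; in this self-contained sketch, a toy capitalized-bigram heuristic stands in for the NER step, and the function names and threshold handling are illustrative, not Kensho's actual code:

```python
from collections import Counter


def extract_person_names(transcript):
    """Stand-in for the real NER step.

    The actual pipeline used spaCy's en_core_web_lg model (doc.ents with
    label "PERSON"); here, runs of two or more capitalized tokens are
    treated as full names, purely so the example runs without a model.
    """
    names, current = [], []
    for token in transcript.replace(".", " ").replace(",", " ").split():
        if token[:1].isupper() and token[1:].islower():
            current.append(token)
        else:
            if len(current) >= 2:
                names.append(" ".join(current))
            current = []
    if len(current) >= 2:
        names.append(" ".join(current))
    return names


def withhold_rare_names(transcripts, min_count=10):
    """Keep only samples whose full names all occur >= min_count times corpus-wide."""
    counts = Counter(
        name for t in transcripts for name in set(extract_person_names(t))
    )
    return [
        t for t in transcripts
        if all(counts[n] >= min_count for n in extract_person_names(t))
    ]


# Frequent names (treated as public figures) survive; rare names are withheld.
calls = ["as John Smith mentioned"] * 12 + ["thanks to Jane Doe today"] * 3
kept = withhold_rare_names(calls, min_count=10)
print(len(kept))  # 12 -- the three "Jane Doe" samples are dropped
```

The same two-pass shape (count entities over the whole corpus, then filter samples against the counts) applies regardless of how names are extracted.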
### Other Known Limitations

Due to formal language seen during earnings calls, the dataset needs augmentation for training systems that transcribe informal speech.

Additional Information
----------------------

### Dataset Curators

Kensho Technologies

### Licensing Information

Please cite this paper:

### Contributions

Thanks to @sanchit-gandhi, @patrickvonplaten, and @polinaeterna for adding this dataset.

Terms of Usage
--------------

Your access to and use of the information in the Kensho Transcript Dataset (the “Content”), which is provided by Kensho Technologies, LLC, a subsidiary of S&P Global, Inc., (“Kensho”), shall be governed by the following terms and conditions of usage (“Terms of Usage”). The Content may be accessed only by persons who have been authorized to use this Content pursuant to their acceptance and acknowledgement of these Terms of Usage (in each case, an “Authorized User”). By providing your electronic signature at the end of these Terms of Usage, you represent that you are an Authorized User and that you accept these Terms of Usage and agree to be bound by them.

If you do not wish to be bound by these Terms of Usage, you must not use this Content. PLEASE READ THESE TERMS OF USAGE CAREFULLY BEFORE USING THIS CONTENT.

Section 1 – THE CONTENT

1.1 The Content is provided for academic research purposes and internal use only and must not be used to:

* assemble or create a database;
* construct or facilitate the construction of products which compete with the Content;
* identify or attempt to identify or contact any individual; or link to another dataset.

The Content, which is comprised of public earnings calls in audio and corresponding text format, and all accompanying derived products is proprietary to Kensho and its third-party content providers. You shall not modify the Content; create derivative works based on the Content, rewrite or reprocess the Content except as expressly provided herein.
You must not publish, display, transfer or redistribute the Content or any portions or derivative versions thereof to anyone without prior written consent from Kensho. You agree not to contact Kensho or its affiliates concerning individuals whose information may be included in the Content.

1.2 Disclaimer. Content to which you are provided access, either directly or indirectly, from or on this Content will not have been reviewed or monitored by Kensho, and Kensho cannot and does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any such content.

The Content is provided for your convenience only and is not a republication or reconfirmation of the opinion or information contained therein. The provision of the Content is without any obligation on the part of Kensho or its third-party content providers to review such or any liability or responsibility arising out of your use thereof. Kensho does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any Content and shall not be liable for any errors, delays, or actions taken in reliance on information. In addition, the Content speaks only as of the date issued and is based on conference calls that may contain projections or other forward-looking statements. You should not rely on the Content as expressing Kensho’s opinion or as representing current information. None of Kensho or the third-party content providers has undertaken, and do not undertake any duty to update any Content or otherwise advise you of any changes in the Content.

1.3 Ownership of Third-Party Content.
You acknowledge that all proprietary rights in the Content that are owned by Kensho or third party content providers shall remain the property of Kensho or such third party content providers, and you shall have no right or interest in such third party content except the rights to use such third party content in accordance with these Terms of Usage. Any additional rights not granted herein shall require a separate, direct agreement with Kensho. You acknowledge that the Content and third party content as compiled, prepared, selected and arranged by Kensho or its third party content providers constitutes an expenditure of substantial time, effort and money by Kensho and its third party content providers and constitutes valuable commercial property and/or trade secrets of Kensho and such third party content providers. Kensho retains all rights and remedies afforded under the copyright, trademark, service mark, patent and other laws of the United States and the States thereof, including without limitation any laws designed to protect proprietary or confidential information. You agree that you will not remove or modify any copyright notice, disclosures, disclaimers or other notification or trade name or marks of Kensho or the third party content providers that may appear in the Content or third party content and that any permitted reproduction and/or distribution of the Content or third party content shall contain such notices and/or marks as they appear in the Content or third party content. You may not use Kensho’s or the third-party content providers’ name or trademarks without the prior written consent of Kensho or such third-party content providers. Apart from the rights granted hereunder, no conveyance of ownership, right, title or interest is intended herein. Any additional rights require a separate agreement with Kensho.

1.4 Posted Guidelines.
In addition to these Terms of Usage, when using this Content, you shall be subject to and agree to follow any posted notice, guidelines or rules, which may be posted and amended from time to time. Nothing on this Content shall be considered a recommendation or solicitation to buy or an offer to sell a security to any person in any jurisdiction.

1.5 Registration Data. In consideration of your use of this Content, you and/or your employer agree to: (a) provide true, accurate, current and complete Registration Data (as defined below in Section 3.1) to Kensho as prompted by the registration form completed prior to accessing the Content and (b) maintain and promptly update the Registration Data and to keep the same true, accurate, current and complete.

1.6 Right to Terminate User Access. Kensho reserves the right to limit, restrict and immediately terminate your access to and use of this Content at any time, in whole or in part, in its sole discretion and without notice.

Section 2 - DISCLAIMER OF WARRANTY AND LIMITATION OF LIABILITY

2.1 THE CONTENT IS PROVIDED “AS IS” AND “AS AVAILABLE” WITHOUT REPRESENTATION OR WARRANTY OF ANY KIND. USE OF THE CONTENT IS AT THE USER’S OWN RISK. IN NO EVENT SHALL KENSHO OR ITS THIRD-PARTY CONTENT PROVIDERS BE LIABLE FOR ANY DECISION MADE OR ACTION OR INACTION TAKEN IN RELIANCE ON ANY CONTENT, INCLUDING THIRD-PARTY CONTENT, INCLUDING YOUR HANDLING AND STORING OF THE CONTENT. KENSHO FURTHER EXPLICITLY DISCLAIMS ANY WARRANTY OF ANY KIND, WHETHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OF ORIGINALITY, ACCURACY, COMPLETENESS, TIMELINESS, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. KENSHO EXPRESSLY DISCLAIMS, AND YOU WAIVE, ANY LIABILITY THAT MAY ARISE FROM YOUR PUBLICATION OR PROVISION OF THE CONTENT TO A THIRD PARTY, OR ANY REPRESENTATION OR WARRANTY MADE BY YOU TO ANY THIRD PARTY, WHETHER OR NOT RELATED TO THE CONTENT.
KENSHO, SUPPLIERS OF THIRD-PARTY CONTENT AND ANY OTHER THIRD PARTY WORKING WITH KENSHO SHALL NOT BE RESPONSIBLE OR LIABLE, DIRECTLY OR INDIRECTLY, FOR ANY DAMAGES OR LOSS (INCLUDING DIRECT, INDIRECT, INCIDENTAL, CONSEQUENTIAL AND ANY AND ALL OTHER FORMS OF DAMAGES OR LOSSES REGARDLESS OF THE FORM OF THE ACTION OR THE BASIS OF THE CLAIM) CAUSED OR ALLEGED TO BE CAUSED IN CONNECTION WITH YOUR USE OF THE CONTENT WHETHER OR NOT FORESEEABLE, EVEN IF KENSHO OR ANY OF THE SUPPLIERS OF THIRD-PARTY CONTENT OR OTHER THIRD PARTIES WORKING WITH KENSHO IN CONNECTION WITH THE CONTENT HAS BEEN ADVISED OF THE POSSIBILITY OR LIKELIHOOD OF SUCH DAMAGES. 2.2 THE CONTENT IS NOT INTENDED TO PROVIDE TAX, LEGAL, INSURANCE OR INVESTMENT ADVICE, AND NOTHING IN THE CONTENT SHOULD BE CONSTRUED AS AN OFFER TO SELL, A SOLICITATION OF AN OFFER TO BUY, OR A RECOMMENDATION FOR ANY SECURITY BY KENSHO OR ANY THIRD PARTY. 2.3 For third party demands, claims, actions, proceedings and liability for losses, damages, reasonable legal costs and other reasonable expenses of any nature, you agree to defend, indemnify and hold Kensho and its affiliates harmless, including its respective directors, officers, employees and agents from and against all claims to the extent arising from your access to and/or use of the Content, any failure by you to abide by the Terms of Usage, or breach of applicable law. Section 3 - PRIVACY 3.1 Access and Collection. In order to access this Content, during the registration process, either you or your employer will be required to provide Kensho with certain information; including your name, employer or academic institution, and e-mail address (“Registration Data”). In addition, when you request or view Content, Kensho may obtain user identifiable information related to your request of, or access to, such Content (“Access Data”). 
For example, while you are accessing this Content, our Web servers may recognize your: (a) domain name; (b) ISP’s domain name; (c) IP address; (d) browser type; and (e) operating system. If you contact us with a technical question, we may collect certain information about your systems, including: (a) your browser type, version and settings (e.g., Java and cookie settings); (b) connectivity information (e.g., SSL/HTTPS compatibility, bandwidth capacity); and browser plug-in information (e.g., do you have Adobe, what is your media player, can you open Flash files, etc.). 3.2 Use of Your Information. Registration Data and Access Data may be used by Kensho for research and development purposes and to communicate with users and to troubleshoot any technical issues pertaining to the Content. You acknowledge that in the event that a separate agreement is required, Kensho may share Registration Data with its Affiliates (as defined below). 3.3 Disclosure of Your Information. Except as otherwise noted herein, Kensho will not disclose, rent or sell personal information collected from or about you without your permission. For the purposes specified in the preceding paragraph, we may transfer or disclose Registration Data and Access Data to S&P Global Inc. and its affiliates (“Kensho Affiliates”) and third parties who are contracted to perform services on behalf of Kensho, such as those who assist Kensho in bringing you this Content and providing you with certain features and functionality included within or accessible via this Content. We may also disclose Registration Data and Access Data to Kensho Affiliates and third parties in connection with their providing you access to this Content. Disclosures to these third parties will be subject to confidentiality agreements and, where required, governed by contract. 
Kensho may also be required to disclose information to governmental, regulatory or self-regulatory entities or agencies in response to regulatory inquiries or to comply with applicable laws, rules, regulations, orders, subpoenas or other legal processes. 3.4 Consent. By (a) agreeing to these Terms of Usage, or (b) by using this Content, and, in either case, providing any information that may be required, requested or otherwise collected by us as set forth above, you freely consent to Kensho processing your information in the United States and in other countries and territories for the purposes set out in these Terms of Usage, and you also consent to the transfer of your information for such purposes to any third party content provider wherever such entity may from time to time be located and to any third parties as described above and in accordance with applicable law and regulations. If you do not permit Kensho to collect any of your information or do not agree with any of the terms and conditions of these Terms of Usage, you should not use this Content and should exit this page and/or Content, as the case may be. If after registering with Kensho, you desire to withdraw the consent granted in this Section 3.4 for all future use of your information by Kensho, you must notify Kensho in writing at the address listed below in Section 3.8 and immediately cease use of this Content. 3.5 Inquiries. If you have any questions regarding these Terms of Usage or your information that is held by us, please contact Kensho in writing using the contact information provided below. If we receive a request regarding your personal information held by us, we will use reasonable means to provide you with such information that we can reasonably compile. You will be given the opportunity to rectify any inaccuracies in such information. 3.6 Encryption. 
Kensho may use encryption technology to protect certain transmissions of data to/from this Content, but e-mail and other communications, unless otherwise noted on this Content, are not encrypted to/from this Content. Therefore, you should not send any personal or identifying information, such as account numbers, credit card numbers, Social Security numbers, passwords, etc., to Kensho via e-mail. By utilizing e-mail or other electronic communication means you acknowledge that you have no expectation of privacy with respect to the information delivered thereby and that Kensho will not be responsible for any loss or damage that could result from interception by third parties of any information so sent. 3.7 Contact Information. In the event you have any questions regarding these Terms of Use, this Privacy Statement or to make any requests or queries regarding your information that is held by us you may contact us in writing at privacy@URL or Kensho Technologies LLC, Attn: General Counsel, 55 Water Street, New York, NY 10041. Section 4 - MISCELLANEOUS 4.1 Entire Agreement. These Terms of Usage constitute the entire agreement of the parties hereto with respect to the subject matter hereof and supersede all prior agreements and undertakings, both written and oral, between the parties with respect to the subject matter hereof. 4.2 Severability. If any term or other provision of these Terms of Usage is invalid, illegal or incapable of being enforced by any law or public policy, all other terms and provisions of these Terms of Usage shall nevertheless remain in full force and effect so long as the economic or legal substance of the transactions contemplated hereby is not affected in any manner materially adverse to any party. 4.3 Governing Law; Forum. 
These Terms of Usage shall be governed in all respects by the laws of the State of New York, and any litigation arising out of or connected in any way with these Terms of Usage shall take place in a State or Federal court of competent jurisdiction in New York County, State of New York. 4.4 Waiver of Jury Trial. YOU WAIVE TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW ANY RIGHT YOU MAY HAVE TO A TRIAL BY JURY WITH RESPECT TO ANY ACTIONS OR PROCEEDINGS DIRECTLY OR INDIRECTLY ARISING OUT OF, UNDER OR IN CONNECTION WITH THESE TERMS OF USAGE. 4.5 Conflict. In the event of a conflict between these Terms of Use and any other agreement with Kensho that relates to Third-Party Content, the more restrictive terms shall prevail.
### Example Usage

The training split has several configurations of various sizes: S, M, L. See the Section Data Splits for more information. To download the S configuration:

It is possible to download only the development or test data:

### Supported Tasks and Leaderboards

* 'automatic-speech-recognition': The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).

### Languages

SPGISpeech contains audio and transcription data in business English and offers a variety of L1 and L2 accents.

Dataset Structure
-----------------

### Data Instances

### Data Fields

* wav_filename (string) - audio filename (includes parent directory).
* audio (Audio feature) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
* wav_filesize (int) - size of the file in bytes.
* transcript (string) - transcription of the file.

### Data Splits

The dataset has three splits: train, evaluation (dev) and test. The train split has three configurations of various sizes: S, M, L. Larger subsets are supersets of smaller subsets, e.g., the L subset contains all the data from the M subset.

#### Transcribed Subsets Size

Dataset Creation
----------------

### Curation Rationale

To augment the open-source speech-to-text datasets available for R&D.

### Source Data

The dataset contains S&P Global company earnings calls.

#### Initial Data Collection and Normalization

Public earnings calls spanning the time period from 2007-2020 were converted to 16kHz, 16-bit audio.

#### Who are the source language producers?

English speakers with a diverse selection of accents, including non-native ones (L2), producing both spontaneous and narrated speech.

### Annotations

#### Annotation process

Data is orthographically transcribed according to a professional style guide detailing conventions for capitalization, punctuation, denormalization of non-standard words and transcription of disfluencies in spontaneous speech. The transcripts have each been cross-checked by multiple professional editors for high accuracy and are fully formatted.

Full earnings calls last 30-60 minutes in length and are typically transcribed as whole units, without internal timestamps. In order to produce short audio slices suitable for STT training, the files were segmented with Gentle, a double-pass forced aligner, with the beginning and end of each slice of audio imputed by voice activity detection with py-webrtc.

#### Who are the annotators?

Earnings calls are manually transcribed by S&P Global, Inc.

### Personal and Sensitive Information

Though earnings calls are public, we nevertheless identified full names with the spaCy en core web large model. We withheld samples containing names that appeared fewer than ten times (7% of total). Full names appearing ten times or more in the data were considered to be public figures and were retained. This necessarily incomplete approach to named entity recognition was complemented with randomized manual spot checks which uncovered no false negatives missed by the automated approach.

Considerations for Using the Data
---------------------------------

### Social Impact of Dataset

### Discussion of Biases

The largest issue inherent with the dataset is that the speaker distribution of SPGISpeech reflects the speaker distribution seen during earnings calls. One example issue that stems from this: during earnings calls, close to 90% of speakers are male.

### Other Known Limitations

Due to formal language seen during earnings calls, the dataset needs augmentation for training systems that transcribe informal speech.

Additional Information
----------------------

### Dataset Curators

Kensho Technologies

### Licensing Information

Please cite this paper:

### Contributions

Thanks to @sanchit-gandhi, @patrickvonplaten, and @polinaeterna for adding this dataset.

Terms of Usage
--------------

Your access to and use of the information in the Kensho Transcript Dataset (the “Content”), which is provided by Kensho Technologies, LLC, a subsidiary of S&P Global, Inc., (“Kensho”), shall be governed by the following terms and conditions of usage (“Terms of Usage”). The Content may be accessed only by persons who have been authorized to use this Content pursuant to their acceptance and acknowledgement of these Terms of Usage (in each case, an “Authorized User”). By providing your electronic signature at the end of these Terms of Usage, you represent that you are an Authorized User and that you accept these Terms of Usage and agree to be bound by them.

If you do not wish to be bound by these Terms of Usage, you must not use this Content. PLEASE READ THESE TERMS OF USAGE CAREFULLY BEFORE USING THIS CONTENT.

Section 1 – THE CONTENT

1.1 The Content is provided for academic research purposes and internal use only and must not be used to:

* assemble or create a database;
* construct or facilitate the construction of products which compete with the Content;
* identify or attempt to identify or contact any individual; or link to another dataset.

The Content, which is comprised of public earnings calls in audio and corresponding text format, and all accompanying derived products is proprietary to Kensho and its third-party content providers. You shall not modify the Content; create derivative works based on the Content, rewrite or reprocess the Content except as expressly provided herein. You must not publish, display, transfer or redistribute the Content or any portions or derivative versions thereof to anyone without prior written consent from Kensho. You agree not to contact Kensho or its affiliates concerning individuals whose information may be included in the Content.

1.2 Disclaimer. Content to which you are provided access, either directly or indirectly, from or on this Content will not have been reviewed or monitored by Kensho, and Kensho cannot and does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any such content.

The Content is provided for your convenience only and is not a republication or reconfirmation of the opinion or information contained therein. The provision of the Content is without any obligation on the part of Kensho or its third-party content providers to review such or any liability or responsibility arising out of your use thereof. Kensho does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any Content and shall not be liable for any errors, delays, or actions taken in reliance on information. In addition, the Content speaks only as of the date issued and is based on conference calls that may contain projections of other forward-looking statements. You should not rely on the Content as expressing Kensho’s opinion or as representing current information. None of Kensho or the third-party content providers has undertaken, and do not undertake any duty to update any Content or otherwise advise you of any changes in the Content.
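The code snippets for the Example Usage section did not survive extraction. As a rough, unofficial sketch (the Hub dataset id `kensho/spgispeech` and the exact configuration names are assumptions based on this card, and real downloads require network access plus accepting these Terms of Usage), a small helper around `datasets.load_dataset` could look like:

```python
# Hypothetical helper for loading one SPGISpeech configuration with the
# Hugging Face `datasets` library. The Hub id and configuration names are
# assumptions from the card, not verified values.

# Train configurations described in the card; dev and test are separate splits.
TRAIN_CONFIGS = ("S", "M", "L")

def load_args(config: str, split: str = "train") -> dict:
    """Build keyword arguments for datasets.load_dataset()."""
    if split == "train" and config not in TRAIN_CONFIGS:
        raise ValueError(
            f"unknown train config {config!r}; expected one of {TRAIN_CONFIGS}"
        )
    return {"path": "kensho/spgispeech", "name": config, "split": split}

# Typical use (requires network access and accepting the dataset terms):
#   from datasets import load_dataset
#   train_s = load_dataset(**load_args("S"))
#   sample = train_s[0]  # fields per the card: wav_filename, audio,
#                        # wav_filesize, transcript
```

The helper only validates and assembles arguments, so the network-dependent call stays a one-liner at the use site.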
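The Annotation process section describes slicing full calls with forced alignment plus voice activity detection via py-webrtc. The sketch below is a simplified stand-in for that VAD step, not the actual pipeline: a plain energy threshold marks speech frames, and contiguous runs of speech frames become candidate slice boundaries.

```python
# Simplified illustration of the slicing idea: mark frames as speech or
# non-speech, then emit contiguous speech runs as candidate slice boundaries.
# The real pipeline uses py-webrtc VAD and a double-pass forced aligner;
# this per-frame energy threshold is a hypothetical stand-in.
from typing import List, Tuple

def frame_energies(samples: List[float], frame_len: int) -> List[float]:
    """Mean squared energy per fixed-length, non-overlapping frame."""
    return [
        sum(s * s for s in samples[i:i + frame_len]) / frame_len
        for i in range(0, len(samples) - frame_len + 1, frame_len)
    ]

def speech_runs(energies: List[float], threshold: float) -> List[Tuple[int, int]]:
    """Return (start_frame, end_frame_exclusive) for contiguous speech runs."""
    runs, start = [], None
    for i, e in enumerate(energies):
        if e >= threshold and start is None:
            start = i                     # run begins on first loud frame
        elif e < threshold and start is not None:
            runs.append((start, i))       # run ends on first quiet frame
            start = None
    if start is not None:                 # audio ended while still in speech
        runs.append((start, len(energies)))
    return runs

# Silence, then a burst of "speech", then silence again.
signal = [0.0] * 8 + [0.5] * 8 + [0.0] * 8
runs = speech_runs(frame_energies(signal, frame_len=4), threshold=0.01)
```

Frame indices in `runs` would then be mapped back to sample offsets to cut the audio, with the aligner providing the matching transcript span.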
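The withholding rule in the Personal and Sensitive Information section (drop samples whose detected names occur fewer than ten times) amounts to a frequency filter over named entities. In the sketch below, name detection itself is stubbed out; per the card, the real pipeline used the spaCy en core web large model for that step.

```python
# Sketch of the withholding rule from the card: count how often each detected
# full name occurs across the corpus, then keep only samples whose names all
# appear at least `min_count` times. Name detection is stubbed here; the
# card's pipeline used spaCy's large English model for NER.
from collections import Counter
from typing import Dict, List

def filter_by_name_frequency(
    sample_names: Dict[str, List[str]], min_count: int = 10
) -> List[str]:
    """Return ids of samples whose every detected name is frequent enough."""
    counts = Counter(name for names in sample_names.values() for name in names)
    return [
        sid
        for sid, names in sample_names.items()
        if all(counts[n] >= min_count for n in names)
    ]

# Toy corpus: "Jane Roe" appears in 10 samples, "John Doe" in only one,
# so the sample mentioning "John Doe" is withheld.
corpus = {f"call_{i}": ["Jane Roe"] for i in range(10)}
corpus["call_x"] = ["John Doe"]
kept = filter_by_name_frequency(corpus, min_count=10)
```

Samples with no detected names pass the filter trivially, matching the card's rule of withholding only on rare names.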
TAGS
#task_categories-automatic-speech-recognition #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-other #arxiv-2104.02014 #region-us
You agree that you will not remove or modify any copyright notice, disclosures, disclaimers or other notification or trade name or marks of Kensho or the third party content providers that may appear in the Content or third party content and that any permitted reproduction and/or distribution of the Content or third party content shall contain such notices and/or marks as they appear in the Content or third party content. You may not use Kensho’s or the third-party content providers’ name or trademarks without the prior written consent of Kensho or such third-party content providers. Apart from the rights granted hereunder, no conveyance of ownership, right, title or interest is intended herein. Any additional rights require a separate agreement with Kensho.\n\n\n1.4 Posted Guidelines. In addition to these Terms of Usage, when using this Content, you shall be subject to and agree to follow any posted notice, guidelines or rules, which may be posted and amended from time to time. Nothing on this Content shall be considered a recommendation or solicitation to buy or an offer to sell a security to any person in any jurisdiction.\n\n\n1.5 Registration Data. In consideration of your use of this Content, you and/or your employer agree to: (a) provide true, accurate, current and complete Registration Data (as defined below in Section 3.1) to Kensho as prompted by the registration form completed prior to accessing the Content and (b) maintain and promptly update the Registration Data and to keep the same true, accurate, current and complete.\n\n\n1.6 Right to Terminate User Access. Kensho reserves the right to limit, restrict and immediately terminate your access to and use of this Content at any time, in whole or in part, in its sole discretion and without notice.\n\n\nSection 2 - DISCLAIMER OF WARRANTY AND LIMITATION OF LIABILITY\n\n\n2.1 THE CONTENT IS PROVIDED “AS IS” AND “AS AVAILABLE” WITHOUT REPRESENTATION OR WARRANTY OF ANY KIND. 
USE OF THE CONTENT IS AT THE USER’S OWN RISK. IN NO EVENT SHALL KENSHO OR ITS THIRD-PARTY CONTENT PROVIDERS BE LIABLE FOR ANY DECISION MADE OR ACTION OR INACTION TAKEN IN RELIANCE ON ANY CONTENT, INCLUDING THIRD-PARTY CONTENT, INCLUDING YOUR HANDLING AND STORING OF THE CONTENT. KENSHO FURTHER EXPLICITLY DISCLAIMS, ANY WARRANTY OF ANY KIND, WHETHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OF ORIGINALITY, ACCURACY, COMPLETENESS, TIMELINESS, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. KENSHO EXPRESSLY DISCLAIMS, AND YOU WAIVE, ANY LIABILITY THAT MAY ARISE FROM YOUR PUBLICATION OR PROVISION OF THE CONTENT TO A THIRD PARTY, OR ANY REPRESENTATION OR WARRANTY MADE BY YOU TO ANY THIRD PARTY, WHETHER OR NOT RELATED TO THE CONTENT. KENSHO, SUPPLIERS OF THIRD-PARTY CONTENT AND ANY OTHER THIRD PARTY WORKING WITH KENSHO SHALL NOT BE RESPONSIBLE OR LIABLE, DIRECTLY OR INDIRECTLY, FOR ANY DAMAGES OR LOSS (INCLUDING DIRECT, INDIRECT, INCIDENTAL, CONSEQUENTIAL AND ANY AND ALL OTHER FORMS OF DAMAGES OR LOSSES REGARDLESS OF THE FORM OF THE ACTION OR THE BASIS OF THE CLAIM) CAUSED OR ALLEGED TO BE CAUSED IN CONNECTION WITH YOUR USE OF THE CONTENT WHETHER OR NOT FORESEEABLE, EVEN IF KENSHO OR ANY OF THE SUPPLIERS OF THIRD-PARTY CONTENT OR OTHER THIRD PARTIES WORKING WITH KENSHO IN CONNECTION WITH THE CONTENT HAS BEEN ADVISED OF THE POSSIBILITY OR LIKELIHOOD OF SUCH DAMAGES.\n\n\n2.2 THE CONTENT IS NOT INTENDED TO PROVIDE TAX, LEGAL, INSURANCE OR INVESTMENT ADVICE, AND NOTHING IN THE CONTENT SHOULD BE CONSTRUED AS AN OFFER TO SELL, A SOLICITATION OF AN OFFER TO BUY, OR A RECOMMENDATION FOR ANY SECURITY BY KENSHO OR ANY THIRD PARTY.\n\n\n2.3 For third party demands, claims, actions, proceedings and liability for losses, damages, reasonable legal costs and other reasonable expenses of any nature, you agree to defend, indemnify and hold Kensho and its affiliates harmless, including its respective directors, officers, employees and agents from and against all 
claims to the extent arising from your access to and/or use of the Content, any failure by you to abide by the Terms of Usage, or breach of applicable law.\n\n\nSection 3 - PRIVACY\n\n\n3.1 Access and Collection. In order to access this Content, during the registration process, either you or your employer will be required to provide Kensho with certain information; including your name, employer or academic institution, and e-mail address (“Registration Data”). In addition, when you request or view Content, Kensho may obtain user identifiable information related to your request of, or access to, such Content (“Access Data”). For example, while you are accessing this Content, our Web servers may recognize your: (a) domain name; (b) ISP’s domain name; (c) IP address; (d) browser type; and (e) operating system. If you contact us with a technical question, we may collect certain information about your systems, including: (a) your browser type, version and settings (e.g., Java and cookie settings); (b) connectivity information (e.g., SSL/HTTPS compatibility, bandwidth capacity); and browser plug-in information (e.g., do you have Adobe, what is your media player, can you open Flash files, etc.).\n\n\n3.2 Use of Your Information. Registration Data and Access Data may be used by Kensho for research and development purposes and to communicate with users and to troubleshoot any technical issues pertaining to the Content. You acknowledge that in the event that a separate agreement is required, Kensho may share Registration Data with its Affiliates (as defined below).\n\n\n3.3 Disclosure of Your Information. Except as otherwise noted herein, Kensho will not disclose, rent or sell personal information collected from or about you without your permission. For the purposes specified in the preceding paragraph, we may transfer or disclose Registration Data and Access Data to S&P Global Inc. 
and its affiliates (“Kensho Affiliates”) and third parties who are contracted to perform services on behalf of Kensho, such as those who assist Kensho in bringing you this Content and providing you with certain features and functionality included within or accessible via this Content. We may also disclose Registration Data and Access Data to Kensho Affiliates and third parties in connection with their providing you access to this Content. Disclosures to these third parties will be subject to confidentiality agreements and, where required, governed by contract. Kensho may also be required to disclose information to governmental, regulatory or self-regulatory entities or agencies in response to regulatory inquiries or to comply with applicable laws, rules, regulations, orders, subpoenas or other legal processes.\n\n\n3.4 Consent. By (a) agreeing to these Terms of Usage, or (b) by using this Content, and, in either case, providing any information that may be required, requested or otherwise collected by us as set forth above, you freely consent to Kensho processing your information in the United States and in other countries and territories for the purposes set out in these Terms of Usage, and you also consent to the transfer of your information for such purposes to any third party content provider wherever such entity may from time to time be located and to any third parties as described above and in accordance with applicable law and regulations. If you do not permit Kensho to collect any of your information or do not agree with any of the terms and conditions of these Terms of Usage, you should not use this Content and should exit this page and/or Content, as the case may be. If after registering with Kensho, you desire to withdraw the consent granted in this Section 3.4 for all future use of your information by Kensho, you must notify Kensho in writing at the address listed below in Section 3.8 and immediately cease use of this Content.\n\n\n3.5 Inquiries. 
If you have any questions regarding these Terms of Usage or your information that is held by us, please contact Kensho in writing using the contact information provided below. If we receive a request regarding your personal information held by us, we will use reasonable means to provide you with such information that we can reasonably compile. You will be given the opportunity to rectify any inaccuracies in such information.\n\n\n3.6 Encryption. Kensho may use encryption technology to protect certain transmissions of data to/from this Content, but e-mail and other communications, unless otherwise noted on this Content, are not encrypted to/from this Content. Therefore, you should not send any personal or identifying information, such as account numbers, credit card numbers, Social Security numbers, passwords, etc., to Kensho via e-mail. By utilizing e-mail or other electronic communication means you acknowledge that you have no expectation of privacy with respect to the information delivered thereby and that Kensho will not be responsible for any loss or damage that could result from interception by third parties of any information so sent.\n\n\n3.7 Contact Information. In the event you have any questions regarding these Terms of Use, this Privacy Statement or to make any requests or queries regarding your information that is held by us you may contact us in writing at privacy@URL or Kensho Technologies LLC, Attn: General Counsel, 55 Water Street, New York, NY 10041.\n\n\nSection 4 - MISCELLANEOUS\n\n\n4.1 Entire Agreement. These Terms of Usage constitute the entire agreement of the parties hereto with respect to the subject matter hereof and supersede all prior agreements and undertakings, both written and oral, between the parties with respect to the subject matter hereof.\n\n\n4.2 Severability. 
If any term or other provision of these Terms of Usage is invalid, illegal or incapable of being enforced by any law or public policy, all other terms and provisions of these Terms of Usage shall nevertheless remain in full force and effect so long as the economic or legal substance of the transactions contemplated hereby is not affected in any manner materially adverse to any party.\n\n\n4.3 Governing Law; Forum. These Terms of Usage shall be governed in all respects by the laws of the State of New York, and any litigation arising out of or connected in any way with these Terms of Usage shall take place in a State or Federal court of competent jurisdiction in New York County, State of New York.\n\n\n4.4 Waiver of Jury Trial. YOU WAIVE TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW ANY RIGHT YOU MAY HAVE TO A TRIAL BY JURY WITH RESPECT TO ANY ACTIONS OR PROCEEDINGS DIRECTLY OR INDIRECTLY ARISING OUT OF, UNDER OR IN CONNECTION WITH THESE TERMS OF USAGE.\n\n\n4.5 Conflict. In the event of a conflict between these Terms of Use and any other agreement with Kensho that relates to Third-Party Content, the more restrictive terms shall prevail." ]
609880c2f80d9f7e1e64e8b2ae85ec474b772eb3
# Dataset Card for CORD (Consolidated Receipt Dataset) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository: https://github.com/clovaai/cord** - **Paper: https://openreview.net/pdf?id=SJl3z659UH** - **Leaderboard: https://paperswithcode.com/dataset/cord** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields ```python { "id": datasets.Value("string"), "words": datasets.Sequence(datasets.Value("string")), "bboxes": datasets.Sequence(datasets.Sequence(datasets.Value("int64"))), "labels": datasets.Sequence(datasets.features.ClassLabel(names=_LABELS)), "images": datasets.features.Image(), } ``` ### Data Splits - train (800 rows) - validation (100 rows) - test (100 rows) ## Dataset Creation ### Licensing Information [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/) ### Citation Information ``` @article{park2019cord, title={CORD: A Consolidated Receipt Dataset for Post-OCR Parsing}, author={Park, Seunghyun and Shin, Seung and Lee, Bado and Lee, Junyeop and Surh, Jaeheung and Seo, Minjoon and Lee, Hwalsuk}, booktitle={Document Intelligence Workshop at Neural Information Processing Systems}, year={2019} } ``` ### Contributions Thanks to [@clovaai](https://github.com/clovaai) for adding this dataset.
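The `bboxes` field stores word boxes in pixel coordinates. LayoutLM-style token classifiers typically expect boxes rescaled to a 0-1000 grid before training; a minimal sketch of that step follows (the helper name and the 0-1000 convention are assumptions for illustration, not part of this dataset):

```python
def normalize_bbox(bbox, width, height):
    """Scale a pixel-space [x0, y0, x1, y1] box to the 0-1000 range
    commonly used by LayoutLM-style models (assumed convention)."""
    x0, y0, x1, y1 = bbox
    return [
        int(1000 * x0 / width),
        int(1000 * y0 / height),
        int(1000 * x1 / width),
        int(1000 * y1 / height),
    ]

# Hypothetical example: a box on a 400x800 receipt image
print(normalize_bbox([40, 80, 200, 160], 400, 800))  # -> [100, 100, 500, 200]
```

With the loaded dataset, you would apply this to each entry of `bboxes`, taking the width and height from the matching `images` value.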
wkrl/cord
[ "task_categories:token-classification", "task_ids:parsing", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us" ]
2022-06-29T15:39:52+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["parsing"], "pretty_name": "CORD"}
2022-07-09T08:28:36+00:00
[]
[ "en" ]
TAGS #task_categories-token-classification #task_ids-parsing #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-4.0 #region-us
# Dataset Card for CORD (Consolidated Receipt Dataset) ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Dataset Structure - Data Instances - Data Fields - Data Splits - Additional Information - Licensing Information - Citation Information - Contributions ## Dataset Description - Repository: URL - Paper: URL - Leaderboard: URL ### Dataset Summary ### Supported Tasks and Leaderboards ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits - train (800 rows) - validation (100 rows) - test (100 rows) ## Dataset Creation ### Licensing Information Creative Commons Attribution 4.0 International License ### Contributions Thanks to @clovaai for adding this dataset.
[ "# Dataset Card for CORD (Consolidated Receipt Dataset)", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Additional Information\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Repository: URL\n- Paper: URL\n- Leaderboard: URL", "### Dataset Summary", "### Supported Tasks and Leaderboards", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits\n\n- train (800 rows)\n- validation (100 rows)\n- test (100 rows)", "## Dataset Creation", "### Licensing Information\n\nCreative Commons Attribution 4.0 International License", "### Contributions\n\nThanks to @clovaai for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-parsing #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n", "# Dataset Card for CORD (Consolidated Receipt Dataset)", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Additional Information\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Repository: URL\n- Paper: URL\n- Leaderboard: URL", "### Dataset Summary", "### Supported Tasks and Leaderboards", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits\n\n- train (800 rows)\n- validation (100 rows)\n- test (100 rows)", "## Dataset Creation", "### Licensing Information\n\nCreative Commons Attribution 4.0 International License", "### Contributions\n\nThanks to @clovaai for adding this dataset." ]
5ba3c4934f4b20a4d9cf13e1b877524267ef5f70
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: abhishek/convnext-tiny-finetuned-dogfood * Dataset: lewtun/dog_food To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@aciborowska](https://huggingface.co/aciborowska) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-f89b1257-9045192
[ "autotrain", "evaluation", "region:us" ]
2022-06-29T16:16:19+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lewtun/dog_food"], "eval_info": {"task": "image_multi_class_classification", "model": "abhishek/convnext-tiny-finetuned-dogfood", "metrics": [], "dataset_name": "lewtun/dog_food", "dataset_config": "lewtun--dog_food", "dataset_split": "train", "col_mapping": {"image": "image", "target": "label"}}}
2022-06-29T16:17:15+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: abhishek/convnext-tiny-finetuned-dogfood * Dataset: lewtun/dog_food To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @aciborowska for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: abhishek/convnext-tiny-finetuned-dogfood\n* Dataset: lewtun/dog_food\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @aciborowska for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: abhishek/convnext-tiny-finetuned-dogfood\n* Dataset: lewtun/dog_food\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @aciborowska for evaluating this model." ]
38139de09992c33d51f53531bbf3d575ca3e2e27
# CiteSum ## Description CiteSum: Citation Text-guided Scientific Extreme Summarization and Low-resource Domain Adaptation. CiteSum contains TLDR summaries for scientific papers from their citation texts without human annotation, making it around 30 times larger than the previous human-curated dataset SciTLDR. ## Homepage https://github.com/morningmoni/CiteSum ## Paper https://arxiv.org/abs/2205.06207 ## Authors ### Yuning Mao, Ming Zhong, Jiawei Han #### University of Illinois Urbana-Champaign {yuningm2, mingz5, hanj}@illinois.edu ## Dataset size Train: 83304 Validation: 4721 Test: 4921 ## Data details - src (string): source text. long description of paper - tgt (string): target text. tldr of paper - paper_id (string): unique id for the paper - title (string): title of the paper - discipline (dict): - venue (string): Where the paper was published (conference) - journal (string): Journal in which the paper was published - mag_field_of_study (list[str]): scientific fields that the paper falls under. Example: ``` { 'src': 'We describe a convolutional neural network that learns feature representations for short textual posts using hashtags as a supervised signal. The proposed approach is trained on up to 5.5 billion words predicting 100,000 possible hashtags. As well as strong performance on the hashtag prediction task itself, we show that its learned representation of text (ignoring the hashtag labels) is useful for other tasks as well. 
To that end, we present results on a document recommendation task, where it also outperforms a number of baselines.', 'tgt': 'A convolutional neural network model for predicting hashtags was proposed in REF .', 'paper_id': '14697143', 'title': '#TagSpace: Semantic Embeddings from Hashtags', 'discipline': { 'venue': 'EMNLP', 'journal': None, 'mag_field_of_study': ['Computer Science'] } } ``` ## Using the dataset ```python from datasets import load_dataset ds = load_dataset("yuningm/citesum") ``` ## Data location https://drive.google.com/file/d/1ndHCREXGSPnDUNllladh9qCtayqbXAfJ/view
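Because `mag_field_of_study` is a list (and `discipline` sub-fields may be `None`), a small helper makes domain filtering explicit. A sketch using the sample record from the card above; with the loaded dataset you would pass a predicate like this to `ds.filter` (the helper name is an assumption):

```python
# Sample record copied from the card above (the 'src' text is abbreviated).
example = {
    "src": "We describe a convolutional neural network that learns feature "
           "representations for short textual posts using hashtags ...",
    "tgt": "A convolutional neural network model for predicting hashtags "
           "was proposed in REF .",
    "paper_id": "14697143",
    "title": "#TagSpace: Semantic Embeddings from Hashtags",
    "discipline": {
        "venue": "EMNLP",
        "journal": None,
        "mag_field_of_study": ["Computer Science"],
    },
}

def in_field(record, field):
    """True if the record is tagged with the given MAG field of study."""
    fields = record["discipline"]["mag_field_of_study"] or []
    return field in fields

print(in_field(example, "Computer Science"))  # -> True
print(in_field(example, "Medicine"))          # -> False
```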
yuningm/citesum
[ "task_categories:summarization", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-nc-4.0", "arxiv:2205.06207", "region:us" ]
2022-06-29T17:55:38+00:00
{"language": ["en"], "license": "cc-by-nc-4.0", "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["summarization"], "paperswithcode_id": "citesum"}
2022-10-25T09:39:26+00:00
[ "2205.06207" ]
[ "en" ]
TAGS #task_categories-summarization #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-nc-4.0 #arxiv-2205.06207 #region-us
# CiteSum ## Description CiteSum: Citation Text-guided Scientific Extreme Summarization and Low-resource Domain Adaptation. CiteSum contains TLDR summaries for scientific papers from their citation texts without human annotation, making it around 30 times larger than the previous human-curated dataset SciTLDR. ## Homepage URL ## Paper URL ## Authors ### Yuning Mao, Ming Zhong, Jiawei Han #### University of Illinois Urbana-Champaign {yuningm2, mingz5, hanj}@URL ## Dataset size Train: 83304 Validation: 4721 Test: 4921 ## Data details - src (string): source text. long description of paper - tgt (string): target text. tldr of paper - paper_id (string): unique id for the paper - title (string): title of the paper - discipline (dict): - venue (string): Where the paper was published (conference) - journal (string): Journal in which the paper was published - mag_field_of_study (list[str]): scientific fields that the paper falls under. Example: ## Using the dataset ## Data location URL
[ "# CiteSum", "## Description\n\nCiteSum: Citation Text-guided Scientific Extreme Summarization and Low-resource Domain Adaptation. \n\nCiteSum contains TLDR summaries for scientific papers from their citation texts without human annotation, making it around 30 times larger than the previous human-curated dataset SciTLDR.", "## Homepage\nURL", "## Paper\nURL", "## Authors", "### Yuning Mao, Ming Zhong, Jiawei Han", "#### University of Illinois Urbana-Champaign \n{yuningm2, mingz5, hanj}@URL", "## Dataset size\n\nTrain: 83304 \nValidation: 4721 \nTest: 4921", "## Data details\n\n- src (string): source text. long description of paper\n- tgt (string): target text. tldr of paper\n- paper_id (string): unique id for the paper\n- title (string): title of the paper\n- discipline (dict): \n - venue (string): Where the paper was published (conference)\n - journal (string): Journal in which the paper was published\n - mag_field_of_study (list[str]): scientific fields that the paper falls under.\n\nExample:", "## Using the dataset", "## Data location\nURL" ]
[ "TAGS\n#task_categories-summarization #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-nc-4.0 #arxiv-2205.06207 #region-us \n", "# CiteSum", "## Description\n\nCiteSum: Citation Text-guided Scientific Extreme Summarization and Low-resource Domain Adaptation. \n\nCiteSum contains TLDR summaries for scientific papers from their citation texts without human annotation, making it around 30 times larger than the previous human-curated dataset SciTLDR.", "## Homepage\nURL", "## Paper\nURL", "## Authors", "### Yuning Mao, Ming Zhong, Jiawei Han", "#### University of Illinois Urbana-Champaign \n{yuningm2, mingz5, hanj}@URL", "## Dataset size\n\nTrain: 83304 \nValidation: 4721 \nTest: 4921", "## Data details\n\n- src (string): source text. long description of paper\n- tgt (string): target text. tldr of paper\n- paper_id (string): unique id for the paper\n- title (string): title of the paper\n- discipline (dict): \n - venue (string): Where the paper was published (conference)\n - journal (string): Journal in which the paper was published\n - mag_field_of_study (list[str]): scientific fields that the paper falls under.\n\nExample:", "## Using the dataset", "## Data location\nURL" ]
bf7670076120164edc138d6394f6ea6820907de4
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: dslim/bert-base-NER * Dataset: conll2003 To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@aseifert](https://huggingface.co/aseifert) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-183be059-9075194
[ "autotrain", "evaluation", "region:us" ]
2022-06-29T19:25:22+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["conll2003"], "eval_info": {"task": "entity_extraction", "model": "dslim/bert-base-NER", "metrics": [], "dataset_name": "conll2003", "dataset_config": "conll2003", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-06-29T19:26:38+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Token Classification * Model: dslim/bert-base-NER * Dataset: conll2003 To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @aseifert for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: dslim/bert-base-NER\n* Dataset: conll2003\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @aseifert for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: dslim/bert-base-NER\n* Dataset: conll2003\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @aseifert for evaluating this model." ]
da4e9f4db86e259f783e89a50fd8f811dfe3f257
# Dataset Card for Semantic Segmentation of Teeth in Panoramic X-ray Images ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/SerdarHelli/Segmentation-of-Teeth-in-Panoramic-X-ray-Image-Using-U-Net](https://github.com/SerdarHelli/Segmentation-of-Teeth-in-Panoramic-X-ray-Image-Using-U-Net) - **Repository:** [https://github.com/SerdarHelli/Segmentation-of-Teeth-in-Panoramic-X-ray-Image-Using-U-Net](https://github.com/SerdarHelli/Segmentation-of-Teeth-in-Panoramic-X-ray-Image-Using-U-Net) - **Paper:** [Tooth Instance Segmentation on Panoramic Dental Radiographs Using U-Nets and Morphological Processing](https://dergipark.org.tr/tr/pub/dubited/issue/68307/950568) - **Leaderboard:** - **Point of Contact:** S.Serdar Helli ### Dataset Summary # Semantic-Segmentation-of-Teeth-in-Panoramic-X-ray-Image The aim of this study is the automatic semantic segmentation and measurement of the total length of teeth in one-shot panoramic x-ray images, using a deep learning method with the U-Net model and binary image analysis, in order to provide diagnostic information for the management of dental disorders, diseases, and conditions. [***Github Link***](https://github.com/SerdarHelli/Segmentation-of-Teeth-in-Panoramic-X-ray-Image-Using-U-Net) ***Original Dataset For Only Images*** DATASET ref - H. Abdi, S. Kasaei, and M. Mehdizadeh, “Automatic segmentation of mandible in panoramic x-ray,” J. Med. Imaging, vol. 2, no. 4, p.
44003, 2015 [Link DATASET for only original images.](https://data.mendeley.com/datasets/hxt48yk462/1) ## Dataset Structure ### Data Instances An example of 'train' looks as follows. ``` { "image": X-ray Image (Image), "label": Binary Image Segmentation Map (Image) } ``` ## Dataset Creation ### Source Data ***Original Dataset For Only Images*** DATASET ref - H. Abdi, S. Kasaei, and M. Mehdizadeh, “Automatic segmentation of mandible in panoramic x-ray,” J. Med. Imaging, vol. 2, no. 4, p. 44003, 2015 [Link DATASET for only original images.](https://data.mendeley.com/datasets/hxt48yk462/1) ### Annotations #### Annotation process The annotation was made manually. #### Who are the annotators? S.Serdar Helli ### Other Known Limitations The X-Ray Images files associated with this dataset are licensed under a Creative Commons Attribution 4.0 International license. To Check Out For More Information: ***Original Dataset For Only Images*** DATASET ref - H. Abdi, S. Kasaei, and M. Mehdizadeh, “Automatic segmentation of mandible in panoramic x-ray,” J. Med. Imaging, vol. 2, no. 4, p. 44003, 2015 [Link DATASET for only original images.](https://data.mendeley.com/datasets/hxt48yk462/1) ## Additional Information ### Citation Information For Labelling ``` @article{helli10tooth, title={Tooth Instance Segmentation on Panoramic Dental Radiographs Using U-Nets and Morphological Processing}, author={HELL{\.I}, Serdar and HAMAMCI, Anda{\c{c}}}, journal={D{\"u}zce {\"U}niversitesi Bilim ve Teknoloji Dergisi}, volume={10}, number={1}, pages={39--50} } ``` For Original Images ``` @article{abdi2015automatic, title={Automatic segmentation of mandible in panoramic x-ray}, author={Abdi, Amir Hossein and Kasaei, Shohreh and Mehdizadeh, Mojdeh}, journal={Journal of Medical Imaging}, volume={2}, number={4}, pages={044003}, year={2015}, publisher={SPIE} } ``` ### Contributions Thanks to [@SerdarHelli](https://github.com/SerdarHelli) for adding this dataset.
SerdarHelli/SegmentationOfTeethPanoramicXRayImages
[ "task_categories:image-segmentation", "task_ids:semantic-segmentation", "size_categories:n<1K", "teeth-segmentation", "dental-imaging", "medical-imaging", "region:us" ]
2022-06-29T20:07:00+00:00
{"size_categories": ["n<1K"], "task_categories": ["image-segmentation"], "task_ids": ["semantic-segmentation"], "tags": ["teeth-segmentation", "dental-imaging", "medical-imaging"], "train-eval-index": [{"config": "plain_text", "task": "semantic_segmentation", "task_id": "semantic_segmentation", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"image": "image", "label": "image"}}]}
2022-10-29T19:05:26+00:00
[]
[]
TAGS #task_categories-image-segmentation #task_ids-semantic-segmentation #size_categories-n<1K #teeth-segmentation #dental-imaging #medical-imaging #region-us
# Dataset Card for [Dataset Name] ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Dataset Structure - Data Instances - Dataset Creation - Curation Rationale - Source Data - Annotations - Other Known Limitations - Additional Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: Tooth Instance Segmentation on Panoramic Dental Radiographs Using U-Nets and Morphological Processing - Leaderboard: - Point of Contact: S.Serdar Helli ### Dataset Summary # Semantic-Segmentation-of-Teeth-in-Panoramic-X-ray-Image The aim of this study is automatic semantic segmentation and measurement total length of teeth in one-shot panoramic x-ray image by using deep learning method with U-Net Model and binary image analysis in order to provide diagnostic information for the management of dental disorders, diseases, and conditions. *Github Link* *Original Dataset For Only Images* DATASET ref - H. Abdi, S. Kasaei, and M. Mehdizadeh, “Automatic segmentation of mandible in panoramic x-ray,” J. Med. Imaging, vol. 2, no. 4, p. 44003, 2015 Link DATASET for only original images. ## Dataset Structure ### Data Instances An example of 'train' looks as follows. ## Dataset Creation ### Source Data *Original Dataset For Only Images* DATASET ref - H. Abdi, S. Kasaei, and M. Mehdizadeh, “Automatic segmentation of mandible in panoramic x-ray,” J. Med. Imaging, vol. 2, no. 4, p. 44003, 2015 Link DATASET for only original images. ### Annotations #### Annotation process The annotation was made manually. #### Who are the annotators? S.Serdar Helli ### Other Known Limitations The X-Ray Images files associated with this dataset are licensed under a Creative Commons Attribution 4.0 International license. To Check Out For More Information: *Original Dataset For Only Images* DATASET ref - H. Abdi, S. Kasaei, and M. Mehdizadeh, “Automatic segmentation of mandible in panoramic x-ray,” J. Med. Imaging, vol. 2, no. 4, p. 
44003, 2015 Link DATASET for only original images. ## Additional Information For Labelling For Original Images ### Contributions Thanks to @SerdarHelli for adding this dataset.
[ "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n- Dataset Structure\n - Data Instances\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Other Known Limitations\n- Additional Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Tooth Instance Segmentation on Panoramic Dental Radiographs Using U-Nets and Morphological Processing \n- Leaderboard:\n- Point of Contact: S.Serdar Helli", "### Dataset Summary\n\n # Semantic-Segmentation-of-Teeth-in-Panoramic-X-ray-Image\n The aim of this study is automatic semantic segmentation and measurement total length of teeth in one-shot panoramic x-ray image by using deep learning method with U-Net Model and binary image analysis in order to provide diagnostic information for the management of dental disorders, diseases, and conditions. \n \n *Github Link*\n \n \n *Original Dataset For Only Images*\n DATASET ref - \tH. Abdi, S. Kasaei, and M. Mehdizadeh, “Automatic segmentation of mandible in panoramic x-ray,” J. Med. Imaging, vol. 2, no. 4, p. 44003, 2015\n Link DATASET for only original images.", "## Dataset Structure", "### Data Instances\n\nAn example of 'train' looks as follows.", "## Dataset Creation", "### Source Data\n *Original Dataset For Only Images*\n DATASET ref - \tH. Abdi, S. Kasaei, and M. Mehdizadeh, “Automatic segmentation of mandible in panoramic x-ray,” J. Med. Imaging, vol. 2, no. 4, p. 44003, 2015\n Link DATASET for only original images.", "### Annotations", "#### Annotation process\n\nThe annotation was made manually.", "#### Who are the annotators?\n\nS.Serdar Helli", "### Other Known Limitations\n The X-Ray Images files associated with this dataset are licensed under a Creative Commons Attribution 4.0 International license.\n\nTo Check Out For More Information: \n\n *Original Dataset For Only Images*\n DATASET ref - \tH. Abdi, S. Kasaei, and M. Mehdizadeh, “Automatic segmentation of mandible in panoramic x-ray,” J. Med. Imaging, vol. 2, no. 4, p. 44003, 2015\n Link DATASET for only original images.", "## Additional Information\n\n\n\nFor Labelling \n\n\nFor Original Images", "### Contributions\n\nThanks to @SerdarHelli for adding this dataset." ]
[ "TAGS\n#task_categories-image-segmentation #task_ids-semantic-segmentation #size_categories-n<1K #teeth-segmentation #dental-imaging #medical-imaging #region-us \n", "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n- Dataset Structure\n - Data Instances\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Other Known Limitations\n- Additional Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Tooth Instance Segmentation on Panoramic Dental Radiographs Using U-Nets and Morphological Processing \n- Leaderboard:\n- Point of Contact: S.Serdar Helli", "### Dataset Summary\n\n # Semantic-Segmentation-of-Teeth-in-Panoramic-X-ray-Image\n The aim of this study is automatic semantic segmentation and measurement total length of teeth in one-shot panoramic x-ray image by using deep learning method with U-Net Model and binary image analysis in order to provide diagnostic information for the management of dental disorders, diseases, and conditions. \n \n *Github Link*\n \n \n *Original Dataset For Only Images*\n DATASET ref - \tH. Abdi, S. Kasaei, and M. Mehdizadeh, “Automatic segmentation of mandible in panoramic x-ray,” J. Med. Imaging, vol. 2, no. 4, p. 44003, 2015\n Link DATASET for only original images.", "## Dataset Structure", "### Data Instances\n\nAn example of 'train' looks as follows.", "## Dataset Creation", "### Source Data\n *Original Dataset For Only Images*\n DATASET ref - \tH. Abdi, S. Kasaei, and M. Mehdizadeh, “Automatic segmentation of mandible in panoramic x-ray,” J. Med. Imaging, vol. 2, no. 4, p. 44003, 2015\n Link DATASET for only original images.", "### Annotations", "#### Annotation process\n\nThe annotation was made manually.", "#### Who are the annotators?\n\nS.Serdar Helli", "### Other Known Limitations\n The X-Ray Images files associated with this dataset are licensed under a Creative Commons Attribution 4.0 International license.\n\nTo Check Out For More Information: \n\n *Original Dataset For Only Images*\n DATASET ref - \tH. Abdi, S. Kasaei, and M. Mehdizadeh, “Automatic segmentation of mandible in panoramic x-ray,” J. Med. Imaging, vol. 2, no. 4, p. 44003, 2015\n Link DATASET for only original images.", "## Additional Information\n\n\n\nFor Labelling \n\n\nFor Original Images", "### Contributions\n\nThanks to @SerdarHelli for adding this dataset." ]
51da51ef377f004e18152d6e02ed1e31eb2466d9
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: dbounds/roberta-large-finetuned-clinc * Dataset: clinc_oos To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mxnno](https://huggingface.co/mxnno) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-87e7c3be-9085195
[ "autotrain", "evaluation", "region:us" ]
2022-06-29T20:09:50+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["clinc_oos"], "eval_info": {"task": "multi_class_classification", "model": "dbounds/roberta-large-finetuned-clinc", "metrics": [], "dataset_name": "clinc_oos", "dataset_config": "small", "dataset_split": "test", "col_mapping": {"text": "text", "target": "intent"}}}
2022-06-29T20:11:33+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: dbounds/roberta-large-finetuned-clinc * Dataset: clinc_oos To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @mxnno for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: dbounds/roberta-large-finetuned-clinc\n* Dataset: clinc_oos\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @mxnno for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: dbounds/roberta-large-finetuned-clinc\n* Dataset: clinc_oos\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @mxnno for evaluating this model." ]
74178ac21d8791035a616fd4f97bbd652b541c78
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: abhishek/autotrain_cifar10_vit_base * Dataset: cifar10 To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@davidberg](https://huggingface.co/davidberg) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-00ac2adb-9115197
[ "autotrain", "evaluation", "region:us" ]
2022-06-29T21:40:57+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cifar10"], "eval_info": {"task": "image_multi_class_classification", "model": "abhishek/autotrain_cifar10_vit_base", "metrics": [], "dataset_name": "cifar10", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"image": "img", "target": "label"}}}
2022-06-29T21:41:58+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: abhishek/autotrain_cifar10_vit_base * Dataset: cifar10 To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @davidberg for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: abhishek/autotrain_cifar10_vit_base\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @davidberg for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: abhishek/autotrain_cifar10_vit_base\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @davidberg for evaluating this model." ]
c535f479a30ffc48bc48663ca86c6f20272e9219
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: karthiksv/vit-base-patch16-224-cifar10 * Dataset: cifar10 To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@davidberg](https://huggingface.co/davidberg) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-00ac2adb-9115199
[ "autotrain", "evaluation", "region:us" ]
2022-06-29T21:41:07+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cifar10"], "eval_info": {"task": "image_multi_class_classification", "model": "karthiksv/vit-base-patch16-224-cifar10", "metrics": [], "dataset_name": "cifar10", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"image": "img", "target": "label"}}}
2022-06-29T21:42:09+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: karthiksv/vit-base-patch16-224-cifar10 * Dataset: cifar10 To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @davidberg for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: karthiksv/vit-base-patch16-224-cifar10\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @davidberg for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: karthiksv/vit-base-patch16-224-cifar10\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @davidberg for evaluating this model." ]
e0d72b4c4e3aa00dc38e60a88deac5c4b3c10312
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: jimypbr/cifar10_outputs * Dataset: cifar10 To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@davidberg](https://huggingface.co/davidberg) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-00ac2adb-9115200
[ "autotrain", "evaluation", "region:us" ]
2022-06-29T21:41:49+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cifar10"], "eval_info": {"task": "image_multi_class_classification", "model": "jimypbr/cifar10_outputs", "metrics": [], "dataset_name": "cifar10", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"image": "img", "target": "label"}}}
2022-06-29T21:42:47+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: jimypbr/cifar10_outputs * Dataset: cifar10 To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @davidberg for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: jimypbr/cifar10_outputs\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @davidberg for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: jimypbr/cifar10_outputs\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @davidberg for evaluating this model." ]
cc79d4ca52014f13aee22ec5c7872cebac96c9ed
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: tanlq/vit-base-patch16-224-in21k-finetuned-cifar10 * Dataset: cifar10 To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@davidberg](https://huggingface.co/davidberg) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-00ac2adb-9115202
[ "autotrain", "evaluation", "region:us" ]
2022-06-29T21:42:31+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cifar10"], "eval_info": {"task": "image_multi_class_classification", "model": "tanlq/vit-base-patch16-224-in21k-finetuned-cifar10", "metrics": [], "dataset_name": "cifar10", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"image": "img", "target": "label"}}}
2022-06-29T21:43:31+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: tanlq/vit-base-patch16-224-in21k-finetuned-cifar10 * Dataset: cifar10 To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @davidberg for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: tanlq/vit-base-patch16-224-in21k-finetuned-cifar10\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @davidberg for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: tanlq/vit-base-patch16-224-in21k-finetuned-cifar10\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @davidberg for evaluating this model." ]
c48d40f9e70f0196f8236901ee35807f7d6c44c0
This is a cleaner version of the [Github-code dataset](https://huggingface.co/datasets/codeparrot/github-code); we add the following filters: * Average line length < 100 * Alphanumeric character fraction > 0.25 * Remove auto-generated files (keyword search) 3.39M files are removed, making up 2.94% of the dataset.
codeparrot/github-code-clean
[ "license:apache-2.0", "region:us" ]
2022-06-29T22:08:17+00:00
{"license": "apache-2.0"}
2022-07-05T08:35:14+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
This is a cleaner version of the Github-code dataset; we add the following filters: * Average line length < 100 * Alphanumeric character fraction > 0.25 * Remove auto-generated files (keyword search) 3.39M files are removed, making up 2.94% of the dataset.
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
77859ef0ac63997f4e1a16f27cc4acbf8a06cc2f
# Dataset Card for RedditQG ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://shuyangcao.github.io/projects/ontology_open_ended_question/](https://shuyangcao.github.io/projects/ontology_open_ended_question/) - **Repository:** [https://github.com/ShuyangCao/open-ended_question_ontology](https://github.com/ShuyangCao/open-ended_question_ontology) - **Paper:** [https://aclanthology.org/2021.acl-long.502/](https://aclanthology.org/2021.acl-long.502/) - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary This dataset contains answer-question pairs from QA communities of Reddit. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English ## Dataset Structure ### Data Instances An example looks as follows. ``` { "id": "askscience/123", "qid": "2323", "answer": "A test answer.", "question": "A test question?", "score": 20 } ``` ### Data Fields - `id`: a `string` feature. - `qid`: a `string` feature. There could be multiple answers to the same question. - `answer`: a `string` feature. - `question`: a `string` feature. - `score`: an `int` feature which is the value of `upvotes - downvotes`. ### Data Splits - train: 647763 - valid: 36023 - test: 36202 ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? Reddit users. ### Personal and Sensitive Information Samples with abusive words are discarded, but there could be samples containing personal information. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information CC BY 4.0 ### Citation Information ``` @inproceedings{cao-wang-2021-controllable, title = "Controllable Open-ended Question Generation with A New Question Type Ontology", author = "Cao, Shuyang and Wang, Lu", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.502", doi = "10.18653/v1/2021.acl-long.502", pages = "6424--6439", abstract = "We investigate the less-explored task of generating open-ended questions that are typically answered by multiple sentences. We first define a new question type ontology which differentiates the nuanced nature of questions better than widely used question words. A new dataset with 4,959 questions is labeled based on the new ontology. We then propose a novel question type-aware question generation framework, augmented by a semantic graph representation, to jointly predict question focuses and produce the question. Based on this framework, we further use both exemplars and automatically generated templates to improve controllability and diversity. Experiments on two newly collected large-scale datasets show that our model improves question quality over competitive comparisons based on automatic metrics. Human judges also rate our model outputs highly in answerability, coverage of scope, and overall quality. Finally, our model variants with templates can produce questions with enhanced controllability and diversity.", } ```
launch/reddit_qg
[ "task_categories:text-classification", "annotations_creators:expert-generated", "multilinguality:monolingual", "language:en", "license:cc-by-4.0", "region:us" ]
2022-06-30T00:03:40+00:00
{"annotations_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "RedditQG"}
2022-11-09T01:58:05+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #annotations_creators-expert-generated #multilinguality-monolingual #language-English #license-cc-by-4.0 #region-us
# Dataset Card for RedditQG ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Leaderboard: - Point of Contact: ### Dataset Summary This dataset contains answer-question pairs from QA communities of Reddit. ### Supported Tasks and Leaderboards ### Languages English ## Dataset Structure ### Data Instances An example looks as follows. ### Data Fields - 'id': a 'string' feature. - 'qid': a 'string' feature. There could be multiple answers to the same question. - 'answer': a 'string' feature. - 'question': a 'string' feature. - 'score': an 'int' feature which is the value of 'upvotes - downvotes'. ### Data Splits - train: 647763 - valid: 36023 - test: 36202 ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? Reddit users. ### Personal and Sensitive Information Samples with abusive words are discarded, but there could be samples containing personal information. ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information CC BY 4.0
[ "# Dataset Card for RedditQG", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nThis dataset contains answer-question pairs from QA communities of Reddit.", "### Supported Tasks and Leaderboards", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nAn example looks as follows.", "### Data Fields\n\n- 'id': a 'string' feature.\n- 'qid': a 'string' feature. There could be multiple answers to the same question.\n- 'answer': a 'string' feature.\n- 'question': a 'string' feature.\n- 'score': an 'int' feature which is the value of 'upvotes - downvotes'.", "### Data Splits\n\n- train: 647763\n- valid: 36023\n- test: 36202", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\n\nReddit users.", "### Personal and Sensitive Information\n\nSamples with abusive words are discarded, but there could be samples containing personal information.", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCC BY 4.0" ]
[ "TAGS\n#task_categories-text-classification #annotations_creators-expert-generated #multilinguality-monolingual #language-English #license-cc-by-4.0 #region-us \n", "# Dataset Card for RedditQG", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nThis dataset contains answer-question pairs from QA communities of Reddit.", "### Supported Tasks and Leaderboards", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nAn example looks as follows.", "### Data Fields\n\n- 'id': a 'string' feature.\n- 'qid': a 'string' feature. There could be multiple answers to the same question.\n- 'answer': a 'string' feature.\n- 'question': a 'string' feature.\n- 'score': an 'int' feature which is the value of 'upvotes - downvotes'.", "### Data Splits\n\n- train: 647763\n- valid: 36023\n- test: 36202", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\n\nReddit users.", "### Personal and Sensitive Information\n\nSamples with abusive words are discarded, but there could be samples containing personal information.", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCC BY 4.0" ]
3b2e18d24afd6b82d6db4bb5552c64298e1ad8b2
sentencias-corte-cons-colombia-1992-2021. 23750 case-law decisions of Colombia's Corte Constitucional. Each row is the complete text of one decision. 23750 decisions from 1992-2021. Columns: ID; Texto: complete text of the sentence
Manuel/sentencias-corte-cons-colombia-1992-2021
[ "license:cc-by-4.0", "region:us" ]
2022-06-30T01:17:04+00:00
{"license": "cc-by-4.0"}
2022-06-30T01:49:09+00:00
[]
[]
TAGS #license-cc-by-4.0 #region-us
sentencias-corte-cons-colombia-1992-2021. 23750 case-law decisions of Colombia's Corte Constitucional. Each row is the complete text of one decision. 23750 decisions from 1992-2021. Columns: ID; Texto: complete text of the sentence
[]
[ "TAGS\n#license-cc-by-4.0 #region-us \n" ]
094794ba406881792473d6d32a26ab95e41c1dfc
# langame-seeker Self chat between two [Seeker Search-Augmented Language Model](https://parl.ai/projects/seeker/) instances using [Langame](https://langa.me/) conversation starters generated by Langame's proprietary language model. The 3000 conversation starters have been generated beforehand into an "offline" dataset and manually corrected and adjusted by psychologically and philosophically trained humans. The search engine source code is unfortunately still private; some work needs to be done to make it open source.
Langame/langame-seeker
[ "license:wtfpl", "region:us" ]
2022-06-30T05:38:27+00:00
{"license": "wtfpl"}
2022-06-30T05:42:22+00:00
[]
[]
TAGS #license-wtfpl #region-us
# langame-seeker Self chat between two Seeker Search-Augmented Language Model instances using Langame conversation starters generated by Langame's proprietary language model. The 3000 conversation starters have been generated beforehand into an "offline" dataset and manually corrected and adjusted by psychologically and philosophically trained humans. The search engine source code is unfortunately still private; some work needs to be done to make it open source.
[ "# langame-seeker\n\nSelf chat between two Seeker Search-Augmented Language Model instances using Langame conversation starters generated by Langame's proprietary language model. The 3000 conversation starters have been generated beforehand into an \"offline\" dataset and manually corrected and adjusted by psychologically and philosophically trained humans.\n\nThe search engine source code is unfortunately still private; some work needs to be done to make it open source." ]
[ "TAGS\n#license-wtfpl #region-us \n", "# langame-seeker\n\nSelf chat between two Seeker Search-Augmented Language Model instances using Langame conversation starters generated by Langame's proprietary language model. The 3000 conversation starters have been generated beforehand into an \"offline\" dataset and manually corrected and adjusted by psychologically and philosophically trained humans.\n\nThe search engine source code is unfortunately still private; some work needs to be done to make it open source." ]
d37d90c7a9165b031a34065e3036ac979b8160ea
Num questions: - train: 9,009 - val: 5,046 Num answers: - train: 90,090 - val: 50,460 Num images: - train: 8,998 - val: 5,033
HuggingFaceM4/OK-VQA
[ "region:us" ]
2022-06-30T08:59:10+00:00
{}
2022-06-30T12:35:02+00:00
[]
[]
TAGS #region-us
Num questions: - train: 9,009 - val: 5,046 Num answers: - train: 90,090 - val: 50,460 Num images: - train: 8,998 - val: 5,033
[]
[ "TAGS\n#region-us \n" ]
b34049cde0f0d716b965b826b6e3ddbaae7fee48
# AutoTrain Dataset for project: mt5_chinese_small_finetune ## Dataset Description This dataset has been automatically processed by AutoTrain for project mt5_chinese_small_finetune. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "\u8fd1\u671f\uff0c\u7f8e\u56fd\u56fd\u4f1a\u4f17\u9662\u901a\u8fc7\u6cd5\u6848\uff0c\u91cd\u7533\u7f8e\u56fd\u5bf9\u53f0\u6e7e\u7684\u627f\u8bfa\u3002\u5bf9\u6b64\uff0c\u4e2d\u56fd\u5916\u4ea4\u90e8\u53d1\u8a00\u4eba\u8868\u793a\uff0c\u6709\u5173\u6cd5\u6848\u4e25\u91cd\u8fdd\u53cd\u4e00\u4e2a\u4e2d\u56fd\u539f\u5219\u548c\u4e2d\u7f8e\u4e09\u4e2a\u8054\u5408\u516c\u62a5\u89c4\u5b9a\uff0c\u7c97\u66b4\u5e72\u6d89\u4e2d\u56fd\u5185\u653f\uff0c\u4e2d\u65b9\u5bf9\u6b64\u575a\u51b3\u53cd\u5bf9\u5e76\u5df2\u5411\u7f8e\u65b9\u63d0\u51fa\u4e25\u6b63\u4ea4\u6d89\u3002\n\u4e8b\u5b9e\u4e0a\uff0c\u4e2d[...]", "target": "\u671b\u6d77\u697c\u7f8e\u56fd\u6253\u201c\u53f0\u6e7e\u724c\u201d\u662f\u5371\u9669\u7684\u8d4c\u535a" }, { "text": "\u5728\u63a8\u8fdb\u201c\u53cc\u4e00\u6d41\u201d\u9ad8\u6821\u5efa\u8bbe\u8fdb\u7a0b\u4e2d\uff0c\u6211\u4eec\u8981\u7d27\u7d27\u56f4\u7ed5\u4e3a\u515a\u80b2\u4eba\u3001\u4e3a\u56fd\u80b2\u624d\uff0c\u627e\u51c6\u95ee\u9898\u3001\u7834\u89e3\u96be\u9898\uff0c\u4ee5\u4e00\u6d41\u610f\u8bc6\u548c\u62c5\u5f53\u7cbe\u795e\uff0c\u5927\u529b\u63a8\u8fdb\u9ad8\u6821\u7684\u6cbb\u7406\u80fd\u529b\u5efa\u8bbe\u3002\n\u589e\u5f3a\u653f\u6cbb\u5f15\u9886\u529b\u3002\u575a\u6301\u515a\u5bf9\u9ad8\u6821\u5de5\u4f5c\u7684\u5168\u9762\u9886\u5bfc\uff0c\u59cb\u7ec8\u628a\u653f\u6cbb\u5efa\u8bbe\u6446\u5728[...]", "target": "\u5927\u529b\u63a8\u8fdb\u9ad8\u6821\u6cbb\u7406\u80fd\u529b\u5efa\u8bbe" } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "Value(dtype='string', id=None)" } ``` ### Dataset Splits This dataset 
is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 5850 | | valid | 1679 |
dddb/autotrain-data-mt5_chinese_small_finetune
[ "region:us" ]
2022-06-30T10:33:24+00:00
{"task_categories": ["conditional-text-generation"]}
2022-06-30T11:59:06+00:00
[]
[]
TAGS #region-us
AutoTrain Dataset for project: mt5\_chinese\_small\_finetune ============================================================ Dataset Description ------------------- This dataset has been automatically processed by AutoTrain for project mt5\_chinese\_small\_finetune. ### Languages The BCP-47 code for the dataset's language is unk. Dataset Structure ----------------- ### Data Instances A sample from this dataset looks as follows: ### Dataset Fields The dataset has the following fields (also called "features"): ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows:
[ "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
[ "TAGS\n#region-us \n", "### Languages\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follows:" ]
2e5f8d3dc550028d9ae1dbbb94476a6ae282134b
# EVI ## Dataset Description - **Paper:** [EVI: Multilingual Spoken Dialogue Tasks and Dataset for Knowledge-Based Enrolment, Verification, and Identification](https://arxiv.org/abs/2204.13496) - **Repository:** [Github](https://github.com/PolyAI-LDN/evi-paper) EVI is a challenging spoken multilingual dataset with 5,506 dialogues in English, Polish, and French that can be used for benchmarking and developing knowledge-based enrolment, verification, and identification for spoken dialogue systems. ## Example EVI can be downloaded and used as follows: ```py from datasets import load_dataset evi = load_dataset("PolyAI/evi", "en-GB") # for British English # to download data from all locales use: # evi = load_dataset("PolyAI/evi", "all") # see structure print(evi) ``` ## Dataset Structure We show detailed information of the example for the `en-GB` configuration of the dataset. All other configurations have the same structure. ### Data Instances An example of a data instance of the config `en-GB` looks as follows: ``` { "language": 0, "dialogue_id": "CA0007220161df7be23f4554704c8720f5", "speaker_id": "e80e9bdd33eda593f16a1b6f2fb228ff", "turn_id": 0, "target_profile_id": "en.GB.608", "asr_transcription": "w20 a b", "asr_nbest": ["w20 a b", "w20 a bee", "w20 a baby"], "path": "audios/en/CA0007220161df7be23f4554704c8720f5/0.wav", "audio": { "path": "/home/georgios/.cache/huggingface/datasets/downloads/extracted/0335ebc25feace53243133b49ba17ba18e26f0f97cb083ffdf4e73dd7427b443/audios/en/CA0007220161df7be23f4554704c8720f5/0.wav", "array": array([ 0.00024414, 0.00024414, 0.00024414, ..., 0.00024414, -0.00024414, 0.00024414], dtype=float32), "sampling_rate": 8000, } } ``` ### Data Fields The data fields are the same among all splits. 
- **language** (int): ID of language - **dialogue_id** (str): the ID of the dialogue - **speaker_id** (str): the ID of the speaker - **turn_id** (int): the ID of the turn - **target_profile_id** (str): the ID of the target profile - **asr_transcription** (str): ASR transcription of the audio file - **asr_nbest** (list): n-best ASR transcriptions of the audio file - **path** (str): Path to the audio file - **audio** (dict): Audio object including loaded audio array, sampling rate and path of audio ### Data Splits Every config only has the `"test"` split containing *ca.* 1,800 dialogues. ## Dataset Creation [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information All datasets are licensed under the [Creative Commons license (CC-BY)](https://creativecommons.org/licenses/). ### Citation Information ``` @inproceedings{Spithourakis2022evi, author = {Georgios P. 
Spithourakis and Ivan Vuli\'{c} and Micha\l{} Lis and I\~{n}igo Casanueva and Pawe\l{} Budzianowski}, title = {{EVI}: Multilingual Spoken Dialogue Tasks and Dataset for Knowledge-Based Enrolment, Verification, and Identification}, year = {2022}, note = {Data available at https://github.com/PolyAI-LDN/evi-paper}, url = {https://arxiv.org/abs/2204.13496}, booktitle = {Findings of NAACL (publication pending)} } ``` ### Contributions Thanks to [@polinaeterna](https://github.com/polinaeterna) for helping with adding this dataset
PolyAI/evi
[ "annotations_creators:crowdsourced", "annotations_creators:machine-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:multilingual", "language:en", "language:fr", "language:pl", "license:cc-by-4.0", "arxiv:2204.13496", "region:us" ]
2022-06-30T10:42:45+00:00
{"annotations_creators": ["crowdsourced", "machine-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["en", "fr", "pl"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "paperswithcode_id": "evi-multilingual-spoken-dialogue-tasks-and-1", "language_bcp47": ["en", "en-GB", "fr", "fr-FR", "pl"]}
2022-10-25T09:39:33+00:00
[ "2204.13496" ]
[ "en", "fr", "pl" ]
TAGS #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-multilingual #language-English #language-French #language-Polish #license-cc-by-4.0 #arxiv-2204.13496 #region-us
# EVI ## Dataset Description - Paper: EVI: Multilingual Spoken Dialogue Tasks and Dataset for Knowledge-Based Enrolment, Verification, and Identification - Repository: Github EVI is a challenging spoken multilingual dataset with 5,506 dialogues in English, Polish, and French that can be used for benchmarking and developing knowledge-based enrolment, verification, and identification for spoken dialogue systems. ## Example EVI can be downloaded and used as follows: ## Dataset Structure We show detailed information of the example for the 'en-GB' configuration of the dataset. All other configurations have the same structure. ### Data Instances An example of a data instance of the config 'en-GB' looks as follows: ### Data Fields The data fields are the same among all splits. - language (int): ID of language - dialogue_id (str): the ID of the dialogue - speaker_id (str): the ID of the speaker - turn_id (int): the ID of the turn - target_profile_id (str): the ID of the target profile - asr_transcription (str): ASR transcription of the audio file - asr_nbest (list): n-best ASR transcriptions of the audio file - path (str): Path to the audio file - audio (dict): Audio object including loaded audio array, sampling rate and path of audio ### Data Splits Every config only has the '"test"' split containing *ca.* 1,800 dialogues. ## Dataset Creation ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information All datasets are licensed under the Creative Commons license (CC-BY). ### Contributions Thanks to @polinaeterna for helping with adding this dataset
[ "# EVI", "## Dataset Description\n- Paper: EVI: Multilingual Spoken Dialogue Tasks and Dataset for Knowledge-Based Enrolment, Verification, and Identification\n- Repository: Github\n\nEVI is a challenging spoken multilingual dataset\nwith 5,506 dialogues in English, Polish, and French\nthat can be used for benchmarking and developing\nknowledge-based enrolment, verification, and identification for spoken dialogue systems.", "## Example\nEVI can be downloaded and used as follows:", "## Dataset Structure\n\nWe show detailed information of the example for the 'en-GB' configuration of the dataset.\nAll other configurations have the same structure.", "### Data Instances\n\nAn example of a data instance of the config 'en-GB' looks as follows:", "### Data Fields\nThe data fields are the same among all splits.\n- language (int): ID of language\n- dialogue_id (str): the ID of the dialogue\n- speaker_id (str): the ID of the speaker\n- turn_id (int): the ID of the turn\n- target_profile_id (str): the ID of the target profile\n- asr_transcription (str): ASR transcription of the audio file\n- asr_nbest (list): n-best ASR transcriptions of the audio file\n- path (str): Path to the audio file\n- audio (dict): Audio object including loaded audio array, sampling rate and path of audio", "### Data Splits\nEvery config only has the '\"test\"' split containing *ca.* 1,800 dialogues.", "## Dataset Creation", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nAll datasets are licensed under the Creative Commons license (CC-BY).", "### Contributions\nThanks to @polinaeterna for helping with adding this dataset" ]
[ "TAGS\n#annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-multilingual #language-English #language-French #language-Polish #license-cc-by-4.0 #arxiv-2204.13496 #region-us \n", "# EVI", "## Dataset Description\n- Paper: EVI: Multilingual Spoken Dialogue Tasks and Dataset for Knowledge-Based Enrolment, Verification, and Identification\n- Repository: Github\n\nEVI is a challenging spoken multilingual dataset\nwith 5,506 dialogues in English, Polish, and French\nthat can be used for benchmarking and developing\nknowledge-based enrolment, verification, and identification for spoken dialogue systems.", "## Example\nEVI can be downloaded and used as follows:", "## Dataset Structure\n\nWe show detailed information of the example for the 'en-GB' configuration of the dataset.\nAll other configurations have the same structure.", "### Data Instances\n\nAn example of a data instance of the config 'en-GB' looks as follows:", "### Data Fields\nThe data fields are the same among all splits.\n- language (int): ID of language\n- dialogue_id (str): the ID of the dialogue\n- speaker_id (str): the ID of the speaker\n- turn_id (int): the ID of the turn\n- target_profile_id (str): the ID of the target profile\n- asr_transcription (str): ASR transcription of the audio file\n- asr_nbest (list): n-best ASR transcriptions of the audio file\n- path (str): Path to the audio file\n- audio (dict): Audio object including loaded audio array, sampling rate and path of audio", "### Data Splits\nEvery config only has the '\"test\"' split containing *ca.* 1,800 dialogues.", "## Dataset Creation", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nAll datasets are licensed under the Creative Commons license (CC-BY).", "### 
Contributions\nThanks to @polinaeterna for helping with adding this dataset" ]
cda65762ff140746a07e7e4cd2aedf17b555f5a8
Num questions: - val: 10,000 - test-dev: 36,807 Num answers: - val: 100,000 Num images: - val: 5,000 - test-dev: 36,807
HuggingFaceM4/AdVQA
[ "region:us" ]
2022-06-30T12:56:52+00:00
{}
2022-06-30T13:38:37+00:00
[]
[]
TAGS #region-us
Num questions: - val: 10,000 - test-dev: 36,807 Num answers: - val: 100,000 Num images: - val: 5,000 - test-dev: 36,807
[]
[ "TAGS\n#region-us \n" ]
d24215071fc64685ee5a089688a4622b11d86786
This is the downloaded and processed data from Meta's [MetaICL](https://github.com/facebookresearch/MetaICL). We follow their ["How to Download and Preprocess"](https://github.com/facebookresearch/MetaICL#how-to-download-and-preprocess) instructions to obtain their modified versions of [CrossFit](https://github.com/INK-USC/CrossFit) and [UnifiedQA](https://arxiv.org/abs/2005.00700). ## Citation information ``` @inproceedings{ min2022metaicl, title={ Meta{ICL}: Learning to Learn In Context }, author={ Min, Sewon and Lewis, Mike and Zettlemoyer, Luke and Hajishirzi, Hannaneh }, booktitle={ NAACL-HLT }, year={ 2022 } } @inproceedings{ ye2021crossfit, title={ {C}ross{F}it: A Few-shot Learning Challenge for Cross-task Generalization in NLP }, author={ Ye, Qinyuan and Lin, Bill Yuchen and Ren, Xiang }, booktitle={ EMNLP }, year={ 2021 } } @inproceedings{ khashabi2020unifiedqa, title={ {U}nified{QA}: Crossing Format Boundaries With a Single QA System }, author={ Khashabi, Daniel and Min, Sewon and Khot, Tushar and Sabharwal, Ashish and Tafjord, Oyvind and Clark, Peter and Hajishirzi, Hannaneh }, booktitle={ Findings of EMNLP }, year={ 2020 } } ```
allenai/metaicl-data
[ "license:cc-by-nc-4.0", "arxiv:2005.00700", "region:us" ]
2022-06-30T17:27:28+00:00
{"license": "cc-by-nc-4.0"}
2022-06-30T20:18:49+00:00
[ "2005.00700" ]
[]
TAGS #license-cc-by-nc-4.0 #arxiv-2005.00700 #region-us
This is the downloaded and processed data from Meta's MetaICL. We follow their "How to Download and Preprocess" instructions to obtain their modified versions of CrossFit and UnifiedQA.
[]
[ "TAGS\n#license-cc-by-nc-4.0 #arxiv-2005.00700 #region-us \n" ]
6d6b24c4204a6731263bcd5ec76564bbdbfbca58
# Dataset Card for `reviews_with_drift` ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description ### Dataset Summary This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are purely obtained from the Movie Review Dataset while the production set is mixed. Some other features have been added (`age`, `gender`, `context`) as well as a made-up timestamp `prediction_ts` of when the inference took place. ### Supported Tasks and Leaderboards `text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative). ### Languages Text is mainly written in English. 
## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset.
arize-ai/xtreme_en
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|xtreme", "language:en", "license:mit", "region:us" ]
2022-06-30T18:48:47+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|xtreme"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "named-entity-recognition-en-no-drift"}
2022-07-01T16:23:29+00:00
[]
[ "en" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|xtreme #language-English #license-mit #region-us
# Dataset Card for 'reviews_with_drift' ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description ### Dataset Summary This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are purely obtained from the Movie Review Dataset while the production set is mixed. Some other features have been added ('age', 'gender', 'context') as well as a made-up timestamp 'prediction_ts' of when the inference took place. ### Supported Tasks and Leaderboards 'text-classification', 'sentiment-classification': The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative). ### Languages Text is mainly written in English. ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @fjcasti1 for adding this dataset.
[ "# Dataset Card for 'reviews_with_drift'", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description", "### Dataset Summary\n\nThis dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are purely obtained from the Movie Review Dataset while the production set is mixed. Some other features have been added ('age', 'gender', 'context') as well as a made-up timestamp 'prediction_ts' of when the inference took place.", "### Supported Tasks and Leaderboards\n\n'text-classification', 'sentiment-classification': The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).", "### Languages\n\nText is mainly written in English.", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @fjcasti1 for adding 
this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|xtreme #language-English #license-mit #region-us \n", "# Dataset Card for 'reviews_with_drift'", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description", "### Dataset Summary\n\nThis dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are purely obtained from the Movie Review Dataset while the production set is mixed. 
Some other features have been added ('age', 'gender', 'context') as well as a made-up timestamp 'prediction_ts' of when the inference took place.", "### Supported Tasks and Leaderboards\n\n'text-classification', 'sentiment-classification': The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).", "### Languages\n\nText is mainly written in English.", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @fjcasti1 for adding this dataset." ]
dd67b07acb615f16950e239d3e5035ffd40b696a
# Dataset Card for `reviews_with_drift` ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description ### Dataset Summary This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are purely obtained from the Movie Review Dataset while the production set is mixed. Some other features have been added (`age`, `gender`, `context`) as well as a made-up timestamp `prediction_ts` of when the inference took place. ### Supported Tasks and Leaderboards `text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative). ### Languages Text is mainly written in English. 
## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset.
arize-ai/xtreme_en_language_drift_es
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|xtreme", "language:en", "license:mit", "region:us" ]
2022-06-30T20:07:38+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|xtreme"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "named-entity-recognition-en-no-drift"}
2022-07-01T16:25:51+00:00
[]
[ "en" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|xtreme #language-English #license-mit #region-us
# Dataset Card for 'reviews_with_drift' ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description ### Dataset Summary This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are purely obtained from the Movie Review Dataset while the production set is mixed. Some other features have been added ('age', 'gender', 'context') as well as a made-up timestamp 'prediction_ts' of when the inference took place. ### Supported Tasks and Leaderboards 'text-classification', 'sentiment-classification': The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative). ### Languages Text is mainly written in English. ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @fjcasti1 for adding this dataset.
[ "# Dataset Card for 'reviews_with_drift'", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description", "### Dataset Summary\n\nThis dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists on a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation set are purely obtained from the Movie Review Dataset while the production set is mixed. Some other features have been added ('age', 'gender', 'context') as well as a made up timestamp 'prediction_ts' of when the inference took place.", "### Supported Tasks and Leaderboards\n\n'text-classification', 'sentiment-classification': The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).", "### Languages\n\nText is mainly written in english.", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @fjcasti1 for adding 
this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|xtreme #language-English #license-mit #region-us \n", "# Dataset Card for 'reviews_with_drift'", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description", "### Dataset Summary\n\nThis dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists on a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation set are purely obtained from the Movie Review Dataset while the production set is mixed. 
Some other features have been added ('age', 'gender', 'context') as well as a made up timestamp 'prediction_ts' of when the inference took place.", "### Supported Tasks and Leaderboards\n\n'text-classification', 'sentiment-classification': The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).", "### Languages\n\nText is mainly written in english.", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @fjcasti1 for adding this dataset." ]
feb643d1f0a55643f91347b4c418d243343a94cd
# Dataset Card for `reviews_with_drift` ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description ### Dataset Summary This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are purely obtained from the Movie Review Dataset while the production set is mixed. Some other features have been added (`age`, `gender`, `context`) as well as a made-up timestamp `prediction_ts` of when the inference took place. ### Supported Tasks and Leaderboards `text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative). ### Languages Text is mainly written in English. 
## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset.
arize-ai/xtreme_en_token_drift
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|xtreme", "language:en", "license:mit", "region:us" ]
2022-06-30T20:08:01+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|xtreme"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "named-entity-recognition-en-no-drift"}
2022-07-01T16:25:34+00:00
[]
[ "en" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|xtreme #language-English #license-mit #region-us
# Dataset Card for 'reviews_with_drift' ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description ### Dataset Summary This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are purely obtained from the Movie Review Dataset while the production set is mixed. Some other features have been added ('age', 'gender', 'context') as well as a made-up timestamp 'prediction_ts' of when the inference took place. ### Supported Tasks and Leaderboards 'text-classification', 'sentiment-classification': The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative). ### Languages Text is mainly written in English. ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @fjcasti1 for adding this dataset.
[ "# Dataset Card for 'reviews_with_drift'", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description", "### Dataset Summary\n\nThis dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists on a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation set are purely obtained from the Movie Review Dataset while the production set is mixed. Some other features have been added ('age', 'gender', 'context') as well as a made up timestamp 'prediction_ts' of when the inference took place.", "### Supported Tasks and Leaderboards\n\n'text-classification', 'sentiment-classification': The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).", "### Languages\n\nText is mainly written in english.", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @fjcasti1 for adding 
this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|xtreme #language-English #license-mit #region-us \n", "# Dataset Card for 'reviews_with_drift'", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description", "### Dataset Summary\n\nThis dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists on a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation set are purely obtained from the Movie Review Dataset while the production set is mixed. 
Some other features have been added ('age', 'gender', 'context') as well as a made up timestamp 'prediction_ts' of when the inference took place.", "### Supported Tasks and Leaderboards\n\n'text-classification', 'sentiment-classification': The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).", "### Languages\n\nText is mainly written in english.", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @fjcasti1 for adding this dataset." ]
e81e6a9ac798674b1a72239936bc4f71c4fa2c4e
# Dataset Card for AMPERE ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Structure](#dataset-structure) - [Dataset Creation](#dataset-creation) ## Dataset Description This dataset is released together with our NAACL 2019 Paper "[`Argument Mining for Understanding Peer Reviews`](https://aclanthology.org/N19-1219/)". If you find our work useful, please cite: ``` @inproceedings{hua-etal-2019-argument, title = "Argument Mining for Understanding Peer Reviews", author = "Hua, Xinyu and Nikolov, Mitko and Badugu, Nikhil and Wang, Lu", booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)", month = jun, year = "2019", address = "Minneapolis, Minnesota", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N19-1219", doi = "10.18653/v1/N19-1219", pages = "2131--2137", } ``` This dataset includes 400 scientific peer reviews collected from ICLR 2018 hosted at the Openreview platform. Each review is segmented into multiple propositions. We include the original untokenized text for each proposition. Each proposition is labeled as one of the following types: - **evaluation**: a proposition that is not objectively verifiable and does not require any action to be performed, such as qualitative judgement and interpretation of the paper, e.g. "The paper shows nice results on a number of small tasks." - **request**: a proposition that is not objectively verifiable and suggests a course of action to be taken, such as recommendation and suggestion for new experiments, e.g. "I would really like to see how the method performs without this hack." - **fact**: a proposition that is verifiable with objective evidence, such as mathematical conclusion and common knowledge of the field, e.g. "This work proposes a dynamic weight update scheme." 
- **quote**: a quote from the paper or another source, e.g. "The author wrote 'where r is lower bound of feature norm'." - **reference**: a proposition that refers to objective evidence, such as a URL link or citation, e.g. "see MuseGAN (Dong et al), MidiNet (Yang et al), etc." - **non-arg**: a non-argumentative discourse unit that does not contribute to the overall agenda of the review, such as greetings, metadata, and clarification questions, e.g. "Aha, now I understand." ## Dataset Structure The dataset is partitioned into train/val/test sets. Each set is uploaded in jsonl format. Each line contains the following elements: - `doc_id` (str): a unique id for the review document - `text` (list[str]): a list of segmented propositions - `labels` (list[str]): a list of labels corresponding to the propositions An example looks as follows. ``` { "doc_id": "H1WORsdlG", "text": [ "This paper addresses the important problem of understanding mathematically how GANs work.", "The approach taken here is to look at GAN through the lense of the scattering transform.", "Unfortunately the manuscrit submitted is very poorly written.", "Introduction and flow of thoughts is really hard to follow.", "In method sections, the text jumps from one concept to the next without proper definitions.", "Sorry I stopped reading on page 3.", "I suggest to rewrite this work before sending it to review.", "Among many things: - For citations use citep and not citet to have () at the right places.", "- Why does it seems -> Why does it seem etc.", ], "labels": [ 'fact', 'fact', 'evaluation', 'evaluation', 'evaluation', 'evaluation', 'request', 'request', 'request', ] } ``` ## Dataset Creation Human annotators are asked to first read the above definitions and controversial cases carefully. The dataset to be annotated consists of 400 reviews partitioned into 20 batches. Each annotator follows these steps for annotation: - Step 1: Open a review file with a text editor. 
The unannotated review file contains only one line; separate it into multiple lines, with each line corresponding to a single proposition. Repeat the above actions on all 400 reviews. - Step 2: Based on the segmented units, label the type for each proposition. Start labeling at the end of each file with the marker "## Labels:". Indicate the line number of the proposition first, then annotate the type, e.g. "1. evaluation" for the first proposition. Repeat the above actions on all 400 reviews. A third annotator then resolves the disagreements between the two annotators on both segmentation and proposition type.
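The per-line record format described above can be consumed with only the standard library. The sketch below assumes each split is stored as a plain UTF-8 jsonl file with the `doc_id`/`text`/`labels` fields from the card; the file path is hypothetical, not part of the release:

```python
import json
from collections import Counter

def load_ampere(path):
    """Load one AMPERE review record per non-empty line of a jsonl file."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

def label_counts(records):
    """Tally proposition labels ('evaluation', 'request', 'fact', ...) across all reviews."""
    return Counter(label for rec in records for label in rec["labels"])
```

For the example record shown above, `label_counts` would report 2 `fact`, 4 `evaluation`, and 3 `request` propositions.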
launch/ampere
[ "task_categories:text-classification", "annotations_creators:expert-generated", "multilinguality:monolingual", "language:en", "license:cc-by-4.0", "region:us" ]
2022-07-01T01:29:23+00:00
{"annotations_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "AMPERE"}
2022-11-09T01:57:52+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #annotations_creators-expert-generated #multilinguality-monolingual #language-English #license-cc-by-4.0 #region-us
# Dataset Card for AMPERE ## Table of Contents - Table of Contents - Dataset Description - Dataset Structure - Dataset Creation ## Dataset Description This dataset is released together with our NAACL 2019 Paper "'Argument Mining for Understanding Peer Reviews'". If you find our work useful, please cite: This dataset includes 400 scientific peer reviews collected from ICLR 2018 hosted at the Openreview platform. Each review is segmented into multiple propositions. We include the original untokenized text for each proposition. Each proposition is labeled as one of the following types: - evaluation: a proposition that is not objectively verifiable and does not require any action to be performed, such as qualitative judgement and interpretation of the paper, e.g. "The paper shows nice results on a number of small tasks." - request: a proposition that is not objectively verifiable and suggests a course of action to be taken, such as recommendation and suggestion for new experiments, e.g. "I would really like to see how the method performs without this hack." - fact: a proposition that is verifiable with objective evidence, such as mathematical conclusion and common knowledge of the field, e.g. "This work proposes a dynamic weight update scheme." - quote: a quote from the paper or another source, e.g. "The author wrote 'where r is lower bound of feature norm'." - reference: a proposition that refers to objective evidence, such as a URL link or citation, e.g. "see MuseGAN (Dong et al), MidiNet (Yang et al), etc." - non-arg: a non-argumentative discourse unit that does not contribute to the overall agenda of the review, such as greetings, metadata, and clarification questions, e.g. "Aha, now I understand." ## Dataset Structure The dataset is partitioned into train/val/test sets. Each set is uploaded in jsonl format. 
Each line contains the following elements: - 'doc_id' (str): a unique id for the review document - 'text' (list[str]): a list of segmented propositions - 'labels' (list[str]): a list of labels corresponding to the propositions An example looks as follows. ## Dataset Creation Human annotators are asked to first read the above definitions and controversial cases carefully. The dataset to be annotated consists of 400 reviews partitioned into 20 batches. Each annotator follows these steps for annotation: - Step 1: Open a review file with a text editor. The unannotated review file contains only one line; separate it into multiple lines, with each line corresponding to a single proposition. Repeat the above actions on all 400 reviews. - Step 2: Based on the segmented units, label the type for each proposition. Start labeling at the end of each file with the marker "## Labels:". Indicate the line number of the proposition first, then annotate the type, e.g. "1. evaluation" for the first proposition. Repeat the above actions on all 400 reviews. A third annotator then resolves the disagreements between the two annotators on both segmentation and proposition type.
[ "# Dataset Card for AMPERE", "## Table of Contents\n- Table of Contents\n- Dataset Description\n- Dataset Structure\n- Dataset Creation", "## Dataset Description\n\nThis dataset is released together with our NAACL 2019 Paper \"'Argument Mining for Understanding Peer Reviews'\". If you find our work useful, please cite:\n\n\n\nThis dataset includes 400 scientific peer reviews collected from ICLR 2018 hosted at the Openreview platform. Each review is segmented into multiple propositions. We include the original untokenized text for each proposition. Each proposition is labeled as one of the following types:\n\n- evaluation: a proposition that is not objectively verifiable and does not require any action to be performed, such as qualitative judgement and interpretation of the paper, e.g. \"The paper shows nice results on a number of small tasks.\"\n- request: a proposition that is not objectively verifiable and suggests a course of action to be taken, such as recommendation and suggestion for new experiments, e.g. \"I would really like to see how the method performs without this hack.\"\n- fact: a proposition that is verifiable with objective evidence, such as mathematical conclusion and common knowledge of the field, e.g. \"This work proposes a dynamic weight update scheme.\"\n- quote: a quote from the paper or another source, e.g. \"The author wrote 'where r is lower bound of feature norm'.\"\n- reference: a proposition that refers to an objective evidence, such as URL link and citation, e.g. \"see MuseGAN (Dong et al), MidiNet (Yang et al), etc.\"\n- non-arg: a non-argumentative discourse unit that does not contribute to the overall agenda of the review, such as greetings, metadata, and clarification questions, e.g. \"Aha, now I understand.\"", "## Dataset Structure\n\nThe dataset is partitioned into train/val/test sets. Each set is uploaded as a jsonl format. 
Each line contains the following elements:\n\n- 'doc_id' (str): a unique id for review document\n- 'text' (list[str]): a list of segmented propositions\n- 'labels' (list[str]): a list of labels corresponding to the propositions\n\nAn example looks as follows.", "## Dataset Creation\n\nFor human annotators, they will be asked to first read the above definitions and controversial cases carefully. The dataset to be annotated consists of 400 reviews partitioned in 20 batches. Each annotator will follow the following steps for annotation:\n\n- Step 1: Open a review file with a text editor. The unannotated review file contains only one line, please separate it into multiple lines with each line corresponding to one single proposition. Repeat the above actions on all 400 reviews.\n- Step 2: Based on the segmented units, label the type for each proposition. Start labeling at the end of each file with the marker \"## Labels:\". Indicate the line number of the proposition first, then annotate the type, e.g. \"1. evaluation\" for the first proposition. Repeat the above actions on all 400 reviews.\n\n A third annotator then resolves the disagreements between the two annotators on both segmentation and proposition type." ]
[ "TAGS\n#task_categories-text-classification #annotations_creators-expert-generated #multilinguality-monolingual #language-English #license-cc-by-4.0 #region-us \n", "# Dataset Card for AMPERE", "## Table of Contents\n- Table of Contents\n- Dataset Description\n- Dataset Structure\n- Dataset Creation", "## Dataset Description\n\nThis dataset is released together with our NAACL 2019 Paper \"'Argument Mining for Understanding Peer Reviews'\". If you find our work useful, please cite:\n\n\n\nThis dataset includes 400 scientific peer reviews collected from ICLR 2018 hosted at the Openreview platform. Each review is segmented into multiple propositions. We include the original untokenized text for each proposition. Each proposition is labeled as one of the following types:\n\n- evaluation: a proposition that is not objectively verifiable and does not require any action to be performed, such as qualitative judgement and interpretation of the paper, e.g. \"The paper shows nice results on a number of small tasks.\"\n- request: a proposition that is not objectively verifiable and suggests a course of action to be taken, such as recommendation and suggestion for new experiments, e.g. \"I would really like to see how the method performs without this hack.\"\n- fact: a proposition that is verifiable with objective evidence, such as mathematical conclusion and common knowledge of the field, e.g. \"This work proposes a dynamic weight update scheme.\"\n- quote: a quote from the paper or another source, e.g. \"The author wrote 'where r is lower bound of feature norm'.\"\n- reference: a proposition that refers to an objective evidence, such as URL link and citation, e.g. \"see MuseGAN (Dong et al), MidiNet (Yang et al), etc.\"\n- non-arg: a non-argumentative discourse unit that does not contribute to the overall agenda of the review, such as greetings, metadata, and clarification questions, e.g. 
\"Aha, now I understand.\"", "## Dataset Structure\n\nThe dataset is partitioned into train/val/test sets. Each set is uploaded as a jsonl format. Each line contains the following elements:\n\n- 'doc_id' (str): a unique id for review document\n- 'text' (list[str]): a list of segmented propositions\n- 'labels' (list[str]): a list of labels corresponding to the propositions\n\nAn example looks as follows.", "## Dataset Creation\n\nFor human annotators, they will be asked to first read the above definitions and controversial cases carefully. The dataset to be annotated consists of 400 reviews partitioned in 20 batches. Each annotator will follow the following steps for annotation:\n\n- Step 1: Open a review file with a text editor. The unannotated review file contains only one line, please separate it into multiple lines with each line corresponding to one single proposition. Repeat the above actions on all 400 reviews.\n- Step 2: Based on the segmented units, label the type for each proposition. Start labeling at the end of each file with the marker \"## Labels:\". Indicate the line number of the proposition first, then annotate the type, e.g. \"1. evaluation\" for the first proposition. Repeat the above actions on all 400 reviews.\n\n A third annotator then resolves the disagreements between the two annotators on both segmentation and proposition type." ]
5a77092c28e51558c5586e9c5eb71a7e17a5e43f
# Dataset Card for tiny-imagenet

## Dataset Description

- **Homepage:** https://www.kaggle.com/c/tiny-imagenet
- **Repository:** [Needs More Information]
- **Paper:** http://cs231n.stanford.edu/reports/2017/pdfs/930.pdf
- **Leaderboard:** https://paperswithcode.com/sota/image-classification-on-tiny-imagenet-1

### Dataset Summary

Tiny ImageNet contains 100000 images of 200 classes (500 for each class) downsized to 64×64 colored images. Each class has 500 training images, 50 validation images, and 50 test images.

### Languages

The class labels in the dataset are in English.

## Dataset Structure

### Data Instances

```json
{
    'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=64x64 at 0x1A800E8E190>,
    'label': 15
}
```

### Data Fields

- image: A PIL.Image.Image object containing the image. Note that when accessing the image column (i.e. dataset[0]["image"]), the image file is automatically decoded. Decoding a large number of image files might take a significant amount of time, so it is important to first query the sample index before the "image" column: dataset[0]["image"] should always be preferred over dataset["image"][0].
- label: an int classification label. -1 for the test set, as its labels are missing. Check `classes.py` for the map of numbers & labels.

### Data Splits

|              | Train  | Valid |
| ------------ | ------ | ----- |
| # of samples | 100000 | 10000 |

## Usage

### Example

#### Load Dataset

```python
from datasets import load_dataset

def example_usage():
    tiny_imagenet = load_dataset('Maysee/tiny-imagenet', split='train')
    print(tiny_imagenet[0])

if __name__ == '__main__':
    example_usage()
```
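A small, hedged sketch of handling the label conventions described above: the `-1` placeholder used on the test split and the number-to-label map from `classes.py`. The mapping entries below are invented stand-ins; the real map ships in `classes.py` of the repository.

```python
# Toy stand-in for the classes.py map (entries hypothetical); the real file
# maps the 200 integer labels to their WordNet synset ids.
LABEL_TO_WNID = {0: "n01443537", 15: "n02002724", 199: "n12267677"}

def describe_label(label: int) -> str:
    # Test-split rows carry label == -1 because their labels are withheld.
    if label == -1:
        return "unlabeled (test split)"
    return LABEL_TO_WNID.get(label, "unknown label")
```

With a loaded split, something like `describe_label(tiny_imagenet[0]["label"])` would resolve the integer to a readable id.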
zh-plus/tiny-imagenet
[ "task_categories:image-classification", "task_ids:multi-class-image-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|imagenet-1k", "language:en", "region:us" ]
2022-07-01T02:33:16+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|imagenet-1k"], "task_categories": ["image-classification"], "task_ids": ["multi-class-image-classification"], "paperswithcode_id": "imagenet", "pretty_name": "Tiny-ImageNet", "extra_gated_prompt": "By clicking on \u201cAccess repository\u201d below, you also agree to ImageNet Terms of Access:\n[RESEARCHER_FULLNAME] (the \"Researcher\") has requested permission to use the ImageNet database (the \"Database\") at Princeton University and Stanford University. In exchange for such permission, Researcher hereby agrees to the following terms and conditions:\n1. Researcher shall use the Database only for non-commercial research and educational purposes.\n2. Princeton University, Stanford University and Hugging Face make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.\n3. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify the ImageNet team, Princeton University, Stanford University and Hugging Face, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted images that he or she may create from the Database.\n4. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.\n5. Princeton University, Stanford University and Hugging Face reserve the right to terminate Researcher's access to the Database at any time.\n6. 
If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.\n7. The law of the State of New Jersey shall apply to all disputes under this agreement."}
2022-07-12T08:04:30+00:00
[]
[ "en" ]
TAGS #task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|imagenet-1k #language-English #region-us
Dataset Card for tiny-imagenet
==============================

Dataset Description
-------------------

* Homepage: URL
* Repository:
* Paper: URL
* Leaderboard: URL

### Dataset Summary

Tiny ImageNet contains 100000 images of 200 classes (500 for each class) downsized to 64×64 colored images. Each class has 500 training images, 50 validation images, and 50 test images.

### Languages

The class labels in the dataset are in English.

Dataset Structure
-----------------

### Data Instances

### Data Fields

* image: A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0]["image"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. dataset[0]["image"] should always be preferred over dataset["image"][0].
* label: an int classification label. -1 for test set as the labels are missing. Check 'URL' for the map of numbers & labels.

### Data Splits

Train: 100000 samples, Valid: 10000 samples

Usage
-----

### Example

#### Load Dataset
[ "### Dataset Summary\n\n\nTiny ImageNet contains 100000 images of 200 classes (500 for each class) downsized to 64×64 colored images. Each class has 500 training images, 50 validation images, and 50 test images.", "### Languages\n\n\nThe class labels in the dataset are in English.\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\n* image: A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0][\"image\"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. dataset[0][\"image\"] should always be preferred over dataset[\"image\"][0].\n* label: an int classification label. -1 for test set as the labels are missing. Check 'URL' for the map of numbers & labels.", "### Data Splits\n\n\nTrain: 100000 samples, Valid: 10000 samples\n\n\nUsage\n-----", "### Example", "#### Load Dataset" ]
[ "TAGS\n#task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|imagenet-1k #language-English #region-us \n", "### Dataset Summary\n\n\nTiny ImageNet contains 100000 images of 200 classes (500 for each class) downsized to 64×64 colored images. Each class has 500 training images, 50 validation images, and 50 test images.", "### Languages\n\n\nThe class labels in the dataset are in English.\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\n* image: A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0][\"image\"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. dataset[0][\"image\"] should always be preferred over dataset[\"image\"][0].\n* label: an int classification label. -1 for test set as the labels are missing. Check 'URL' for the map of numbers & labels.", "### Data Splits\n\n\nTrain: 100000 samples, Valid: 10000 samples\n\n\nUsage\n-----", "### Example", "#### Load Dataset" ]
f7abfbaa550a0d2cd478151aa437b303badc4dc9
# AutoTrain Dataset for project: Rusynpannonianpure

## Dataset Description

This dataset has been automatically processed by AutoTrain for project Rusynpannonianpure.

### Languages

The BCP-47 code for the dataset's language is en2es.

## Dataset Structure

### Data Instances

A sample from this dataset looks as follows:

```json
[
  {
    "source": "\"I came to the region to meet with the leaders of the parties and discuss the progress in normalizin[...]",
    "target": "\"\u042f \u043f\u0440\u0438\u0448\u043e\u043b \u0434\u043e \u0440\u0435\u0491\u0438\u043e\u043d\u0443 \u043f\u0440\u0438\u0440\u0438\u0445\u0442\u0430\u0446 \u0448\u043b\u0457\u0434\u0443\u044e\u0446\u0438 \u0441\u0445\u043e\u0434 \u043b\u0438\u0434\u0435\u0440\u043e\u0445 \u0438 \u0431\u0435\u0448\u0435\u0434\u043e\u0432\u0430\u0446 \u043e \u043d\u0430\u043f\u0440\u0435\u0434\u043e\u0432\u0430\u043d\u044e \u0443 \u043d\u043e\u0440\u043c\u0430\u043b\u0438\u0437\u0430\u0446\u0438\u0457 \u043e\u0434\u043d\u043e\u0448\u0435[...]"
  },
  {
    "source": "\"We had a very good discussion yesterday evening about the situation and it is normal to look for a [...]",
    "target": "\"\u041c\u0430\u043b\u0438 \u0437\u043c\u0435 \u0454\u0434\u043d\u0443 \u043e\u0437\u0431\u0438\u043b\u044c\u043d\u0443 \u0440\u043e\u0437\u0433\u0432\u0430\u0440\u043a\u0443 \u0432\u0447\u0435\u0440\u0430 \u0432\u0435\u0447\u0430\u0440 \u043e \u0441\u0438\u0442\u0443\u0430\u0446\u0438\u0457 \u0438 \u043d\u043e\u0440\u043c\u0430\u043b\u043d\u043e \u0436\u0435 \u043f\u043e\u0442\u0440\u0435\u0431\u043d\u0435 \u0433\u043b\u0454\u0434\u0430\u0446 \u0440\u0438\u0448\u0435\u043d\u0454 \u043f\u0440\u0435\u0437 \u0434[...]"
  }
]
```

### Dataset Fields

The dataset has the following fields (also called "features"):

```json
{
  "source": "Value(dtype='string', id=None)",
  "target": "Value(dtype='string', id=None)"
}
```

### Dataset Splits

This dataset is split into a train and validation split.
The split sizes are as follows:

| Split name | Num samples |
| ---------- | ----------- |
| train      | 3           |
| valid      | 1           |
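The `\uXXXX` sequences in the sample instances above are simply JSON's ASCII-safe encoding of the Cyrillic target text; parsing a record restores the readable characters. A minimal sketch, with a shortened invented record standing in for a real row:

```python
import json

# Shortened, invented stand-in for one source/target record; the \uXXXX escapes
# are the same ASCII-safe encoding JSON uses for the Cyrillic targets shown above.
record = '{"source": "I came to the region", "target": "\\u042f \\u043f\\u0440\\u0438\\u0448\\u043e\\u043b"}'

pair = json.loads(record)
print(pair["target"])  # the escapes decode to readable Cyrillic text
```

So no special handling is needed when loading the files: any standard JSON parser yields the decoded Unicode strings.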
Tritkoman/autotrain-data-Rusynpannonianpure
[ "task_categories:translation", "language:en", "language:es", "region:us" ]
2022-07-01T04:20:02+00:00
{"language": ["en", "es"], "task_categories": ["translation"]}
2022-10-25T09:39:40+00:00
[]
[ "en", "es" ]
TAGS #task_categories-translation #language-English #language-Spanish #region-us
AutoTrain Dataset for project: Rusynpannonianpure
=================================================

Dataset Description
-------------------

This dataset has been automatically processed by AutoTrain for project Rusynpannonianpure.

### Languages

The BCP-47 code for the dataset's language is en2es.

Dataset Structure
-----------------

### Data Instances

A sample from this dataset looks as follows:

### Dataset Fields

The dataset has the following fields (also called "features"):

### Dataset Splits

This dataset is split into a train and validation split. The split sizes are as follows:
[ "### Languages\n\n\nThe BCP-47 code for the dataset's language is en2es.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follows:" ]
[ "TAGS\n#task_categories-translation #language-English #language-Spanish #region-us \n", "### Languages\n\n\nThe BCP-47 code for the dataset's language is en2es.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follows:" ]
09feea6476dba673a37248873f4e6e9998f1913d
This dataset contains 5 columns: context, question, answer_start, answer_text, source.

| Column | Description |
| :------------ |:---------------:|
| context | a general small paragraph in the Tamil language |
| question | a question framed from the context |
| answer_text | the text span extracted from the context |
| answer_start | the index of answer_text within the context |
| source | who framed this context-question-answer pair |

Sources:
- KBA => created manually by the team (Karthi, Balaji, Azeez)
- CHAII => a Kaggle competition
- XQA => a multilingual QA dataset
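Given the schema above, `answer_start` is the character index of `answer_text` inside `context`, so the answer span can be sliced straight back out. A small sketch with an invented row (the Tamil text below is made up for illustration; real rows come from the dataset files, where `answer_start` is precomputed):

```python
# Invented row following the 5-column schema described above.
row = {
    "context": "தமிழ் ஒரு செம்மொழி ஆகும்.",
    "question": "தமிழ் எப்படிப்பட்ட மொழி?",
    "answer_text": "செம்மொழி",
    "source": "KBA",
}
# In the real data answer_start ships precomputed; here we derive it the same way.
row["answer_start"] = row["context"].find(row["answer_text"])

def extract_span(r: dict) -> str:
    # Slice the answer back out of the context using answer_start.
    start = r["answer_start"]
    return r["context"][start : start + len(r["answer_text"])]
```

The slice must reproduce `answer_text` exactly, which makes this a handy consistency check when validating rows.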
AswiN037/tamil-question-answering-dataset
[ "license:afl-3.0", "region:us" ]
2022-07-01T06:22:29+00:00
{"license": "afl-3.0"}
2022-07-01T06:53:56+00:00
[]
[]
TAGS #license-afl-3.0 #region-us
This dataset contains 5 columns: context, question, answer\_start, answer\_text, source. Sources: KBA => created manually by the team (Karthi, Balaji, Azeez); CHAII => a Kaggle competition; XQA => a multilingual QA dataset
[]
[ "TAGS\n#license-afl-3.0 #region-us \n" ]
2ba908ef5001980a29cd652c16cebfe1a69035f8
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-bigpatent * Dataset: big_patent To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@kayvane](https://huggingface.co/kayvane) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-a25a94fd-9305221
[ "autotrain", "evaluation", "region:us" ]
2022-07-01T07:08:40+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["big_patent"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-bigpatent", "metrics": ["rouge"], "dataset_name": "big_patent", "dataset_config": "all", "dataset_split": "validation", "col_mapping": {"text": "description", "target": "abstract"}}}
2022-07-02T11:09:46+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-bigpatent * Dataset: big_patent To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @kayvane for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-bigpatent\n* Dataset: big_patent\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @kayvane for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-bigpatent\n* Dataset: big_patent\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @kayvane for evaluating this model." ]
13ad152ac29b49542dfbb3500c6b72a499731db8
# GEM Submission Submission name: This is a test submission 2
GEM-submissions/lewtun__this-is-a-test-submission-2__1656667730
[ "benchmark:gem", "evaluation", "benchmark", "region:us" ]
2022-07-01T08:28:52+00:00
{"benchmark": "gem", "type": "prediction", "submission_name": "This is a test submission 2", "tags": ["evaluation", "benchmark"]}
2022-07-01T08:28:55+00:00
[]
[]
TAGS #benchmark-gem #evaluation #benchmark #region-us
# GEM Submission Submission name: This is a test submission 2
[ "# GEM Submission\n\nSubmission name: This is a test submission 2" ]
[ "TAGS\n#benchmark-gem #evaluation #benchmark #region-us \n", "# GEM Submission\n\nSubmission name: This is a test submission 2" ]
bba0491bc1ba950369eafcceb1d522537b54ab2e
# Dataset Card for EXCEPTIUS Corpus

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://exceptius.com/
- **Repository:** https://github.com/tommasoc80/COVID19_emergency_event
- **Paper:** Tziafas, G., de Saint-Phalle, E., de Vries, W., Egger, C., & Caselli, T. (2021). A Multilingual Approach to Identify and Classify Exceptional Measures against {COVID}-19. Proceedings of the Natural Legal Language Processing Workshop 2021, 46–62. https://doi.org/10.18653/v1/2021.nllp-1.5
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:[email protected])

### Dataset Summary

This dataset presents a new corpus of legislative documents from 8 European countries (Belgium, France, Hungary, Italy, Netherlands, Norway, Poland, UK) in 7 languages (Dutch, English, French, Hungarian, Italian, Norwegian Bokmål, Polish) manually annotated for exceptional measures against COVID-19. The annotation was done on the sentence level.
### Supported Tasks and Leaderboards

The dataset can be used for multi-label text classification tasks.

### Languages

Dutch, English, French, Hungarian, Italian, Norwegian Bokmål, Polish

## Dataset Structure

### Data Instances

The file format is jsonl and three data splits are present (train, validation and test).

### Data Fields

The jsonl files have the following basic columns:

- `language`: The language of the sentence (set based on the country)
- `country`: The country of the sentence
- `text`: Sentence that has been annotated

The documents have been annotated with 8 labels, each label representing a specific measure against COVID-19. Each label is represented by one boolean field in the jsonl file. The labels, i.e. the specific measure classes, are:

- `event1`: State of Emergency
- `event2`: Restrictions of fundamental rights and civil liberties
- `event3`: Restrictions of daily liberties
- `event4`: Closures / lockdown
- `event5`: Suspension of international cooperation and commitments
- `event6`: Police mobilization
- `event7`: Army mobilization
- `event8`: Government oversight
- `all_events`: an aggregate column containing all applicable events combined

### Data Splits

All annotated sentences combined have the following split:

- train: 3312 (80%)
- dev: 418 (10%)
- test: 418 (10%)

The splits have been performed on each country and have later been merged. Therefore, each split contains sentences from each country.

The following label distribution shows the number of occurrences per label per split. `total occurrences` sums up the previous rows (total number of events per split). `split size` is the number of sentences per split.
| Event | train | validation | test | |:----------------------|----------:|-----------:|----------:| | event1 | 383 | 54 | 47 | | event2 | 253 | 39 | 42 | | event3 | 412 | 70 | 62 | | event4 | 617 | 75 | 93 | | event5 | 52 | 4 | 6 | | event6 | 15 | 2 | 1 | | event7 | 45 | 4 | 5 | | event8 | 146 | 21 | 19 | | **total occurrences** | **1923** | **269** | **275** | | **split size** | **3312** | **418** | **418** | ## Dataset Creation ### Curation Rationale *"Investigate the potential of multilingual pretrained language models in order to facilitate the analysis, exploration, and comparison of legal texts on COVID-19 exceptional measures"* (Tziafas et al., 2021) ### Source Data #### Initial Data Collection and Normalization *“The corpus collection process has been overseen by four political science experts working in partnership with national legal experts. All documents were retrieved from official governmental websites that publish legal acts. The identification of the relevant documents has been done by means of 4 keywords (i.e., “COVID”, “COVID-19”, “Coronavirus” and “Health emergency”). For each language, the corresponding language specific keywords were used. In this initial phase, we focus on a sample of 19 EEA countries on measures adopted at the national level. To do so, we identify publicly available links to relevant documents 2 plus UK and Switzerland. We could not find corresponding documents for two countries of the EEA (i.e., Bulgaria and Greece). All documents have been collected either by manually downloading them or by automatic scraping. For countries with more than one official language (e.g., Switzerland), legal acts were collected in all available languages.”*(Tziafas et al., 2021) #### Who are the source language producers? Politicians and legal experts have been involved in producing the language material. ### Annotations #### Annotation process *"A subset of 281 documents in eight languages has been selected for manual annotation. 
The annotation of the exceptional measures applies at sentence-level. The sample is based on the French, Polish, Dutch, English, Hungarian, Belgian, Italian, and Norwegian sub-corpora. Annotators were allowed to assign as many subclasses as they consider relevant to each sentence, but with a total of eight main classes of exceptional measures. Sentences can potentially entail multiple exceptional classes, making this a multi-label annotation task. The annotation process results in eight binary annotations per sentence, with 0 if the specific class is not identified within the sentence and 1 if it is. The annotation has been conducted by three experts in political science working under the supervision of the project’s Scientific Board. Since the annotators are not fluent in all languages and due to the impossibility of recruiting expert native speakers, some documents need to be translated into English to be manually annotated. No inter-annotator agreement study has been conducted in this initial phase. We intend to remedy this limitation in the project’s next development cycle. However, during the annotation phase, annotators met on a weekly basis to discuss ambiguous cases and the guidelines. Annotators are encouraged to propose new classes or subclasses. For a new (sub)class to be accepted, the measure should have been independently identified by the majority of the annotators. In this phase, no new classes were proposed."* (Tziafas et al., 2021) #### Who are the annotators? 
*"The annotation has been conducted by three experts in political science working under the supervision of the project’s Scientific Board."* (Tziafas et al., 2021)

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition, differences with regard to dataset statistics as given in the respective papers can be expected. The reader is advised to have a look at the conversion script ```convert_to_hf_dataset.py``` in order to retrace the steps for converting the original dataset into the present jsonl-format. For further information on the original dataset structure, we refer to the bibliographical references and the original Github repositories and/or web pages provided in this dataset card.

## Additional Information

### Dataset Curators

The names of the original dataset curators and creators can be found in references given below, in the section *Citation Information*. Additional changes were made by Joel Niklaus ([Email](mailto:[email protected]); [Github](https://github.com/joelniklaus)) and Veton Matoshi ([Email](mailto:[email protected]); [Github](https://github.com/kapllan)).
### Licensing Information Creative Commons Zero v1.0 Universal ### Citation Information ``` @inproceedings{tziafas-etal-2021-multilingual, title = "A Multilingual Approach to Identify and Classify Exceptional Measures against {COVID}-19", author = "Tziafas, Georgios and de Saint-Phalle, Eugenie and de Vries, Wietse and Egger, Clara and Caselli, Tommaso", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2021", month = nov, year = "2021", address = "Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.nllp-1.5", pages = "46--62", } ``` ### Contributions Thanks to [@JoelNiklaus](https://github.com/joelniklaus) and [@kapllan](https://github.com/kapllan) for adding this dataset.
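To make the multi-label setup concrete, here is a hedged sketch (the record below is invented; the field names follow the Data Fields section above) of turning one jsonl line into a text plus an 8-dimensional multi-hot target vector:

```python
import json

# event1 .. event8, the eight boolean measure classes documented above
EVENT_FIELDS = [f"event{i}" for i in range(1, 9)]

def to_multilabel_example(jsonl_line: str):
    rec = json.loads(jsonl_line)
    # One boolean column per measure class -> an 8-dim multi-hot vector
    labels = [int(bool(rec[name])) for name in EVENT_FIELDS]
    return rec["text"], labels

# Invented record that follows the documented schema
line = json.dumps({
    "language": "en", "country": "uk",
    "text": "A state of emergency is declared for three months.",
    "event1": True, "event2": False, "event3": False, "event4": False,
    "event5": False, "event6": False, "event7": False, "event8": False,
})
text, labels = to_multilabel_example(line)
```

Since sentences can carry several measure classes at once, the vector may contain multiple ones, which is exactly what a multi-label classifier consumes.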
joelniklaus/covid19_emergency_event
[ "task_categories:text-classification", "task_ids:multi-label-classification", "annotations_creators:found", "annotations_creators:other", "language_creators:found", "multilinguality:multilingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "language:fr", "language:hu", "language:it", "language:nb", "language:nl", "language:pl", "license:cc0-1.0", "region:us" ]
2022-07-01T10:26:15+00:00
{"annotations_creators": ["found", "other"], "language_creators": ["found"], "language": ["en", "fr", "hu", "it", "nb", "nl", "pl"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-label-classification"], "pretty_name": "EXCEPTIUS Corpus"}
2022-09-22T12:44:15+00:00
[]
[ "en", "fr", "hu", "it", "nb", "nl", "pl" ]
TAGS #task_categories-text-classification #task_ids-multi-label-classification #annotations_creators-found #annotations_creators-other #language_creators-found #multilinguality-multilingual #size_categories-1K<n<10K #source_datasets-original #language-English #language-French #language-Hungarian #language-Italian #language-Norwegian Bokmål #language-Dutch #language-Polish #license-cc0-1.0 #region-us
Dataset Card for EXCEPTIUS Corpus
=================================

Table of Contents
-----------------

* Table of Contents
* Dataset Description
	+ Dataset Summary
	+ Supported Tasks and Leaderboards
	+ Languages
* Dataset Structure
	+ Data Instances
	+ Data Fields
	+ Data Splits
* Dataset Creation
	+ Curation Rationale
	+ Source Data
	+ Annotations
	+ Personal and Sensitive Information
* Considerations for Using the Data
	+ Social Impact of Dataset
	+ Discussion of Biases
	+ Other Known Limitations
* Additional Information
	+ Dataset Curators
	+ Licensing Information
	+ Citation Information
	+ Contributions

Dataset Description
-------------------

* Homepage: URL
* Repository: URL
* Paper: Tziafas, G., de Saint-Phalle, E., de Vries, W., Egger, C., & Caselli, T. (2021). A Multilingual Approach to Identify and Classify Exceptional Measures against {COVID}-19. Proceedings of the Natural Legal Language Processing Workshop 2021, 46–62. URL
* Leaderboard:
* Point of Contact: Joel Niklaus

### Dataset Summary

This dataset presents a new corpus of legislative documents from 8 European countries (Belgium, France, Hungary, Italy, Netherlands, Norway, Poland, UK) in 7 languages (Dutch, English, French, Hungarian, Italian, Norwegian Bokmål, Polish) manually annotated for exceptional measures against COVID-19. The annotation was done on the sentence level.

### Supported Tasks and Leaderboards

The dataset can be used for multi-label text classification tasks.

### Languages

Dutch, English, French, Hungarian, Italian, Norwegian Bokmål, Polish

Dataset Structure
-----------------

### Data Instances

The file format is jsonl and three data splits are present (train, validation and test).
### Data Fields

The jsonl files have the following basic columns:

* `language`: The language of the sentence (set based on the country)
* `country`: The country of the sentence
* `text`: Sentence that has been annotated

The documents have been annotated with 8 labels, each label representing a specific measure against COVID-19. Each label is represented by one boolean field in the jsonl file. The labels, i.e. the specific measure classes, are:

* `event1`: State of Emergency
* `event2`: Restrictions of fundamental rights and civil liberties
* `event3`: Restrictions of daily liberties
* `event4`: Closures / lockdown
* `event5`: Suspension of international cooperation and commitments
* `event6`: Police mobilization
* `event7`: Army mobilization
* `event8`: Government oversight
* `all_events`: an aggregate column containing all applicable events combined

### Data Splits

All annotated sentences combined have the following split:

* train: 3312 (80%)
* dev: 418 (10%)
* test: 418 (10%)

The splits have been performed on each country and have later been merged. Therefore, each split contains sentences from each country.

The following label distribution shows the number of occurrences per label per split. `total occurrences` sums up the previous rows (total number of events per split). `split size` is the number of sentences per split.

Dataset Creation
----------------

### Curation Rationale

*"Investigate the potential of multilingual pretrained language models in order to facilitate the analysis, exploration, and comparison of legal texts on COVID-19 exceptional measures"* (Tziafas et al., 2021)

### Source Data

#### Initial Data Collection and Normalization

*“The corpus collection process has been overseen by four political science experts working in partnership with national legal experts. All documents were retrieved from official governmental websites that publish legal acts. The identification of the relevant documents has been done by means of 4 keywords (i.e., “COVID”, “COVID-19”, “Coronavirus” and “Health emergency”). For each language, the corresponding language specific keywords were used. In this initial phase, we focus on a sample of 19 EEA countries plus UK and Switzerland on measures adopted at the national level. To do so, we identify publicly available links to relevant documents. We could not find corresponding documents for two countries of the EEA (i.e., Bulgaria and Greece). All documents have been collected either by manually downloading them or by automatic scraping. For countries with more than one official language (e.g., Switzerland), legal acts were collected in all available languages.”* (Tziafas et al., 2021)

#### Who are the source language producers?

Politicians and legal experts have been involved in producing the language material.

### Annotations

#### Annotation process

*"A subset of 281 documents in eight languages has been selected for manual annotation. The annotation of the exceptional measures applies at sentence-level. The sample is based on the French, Polish, Dutch, English, Hungarian, Belgian, Italian, and Norwegian sub-corpora. Annotators were allowed to assign as many subclasses as they consider relevant to each sentence, but with a total of eight main classes of exceptional measures. Sentences can potentially entail multiple exceptional classes, making this a multi-label annotation task. The annotation process results in eight binary annotations per sentence, with 0 if the specific class is not identified within the sentence and 1 if it is. The annotation has been conducted by three experts in political science working under the supervision of the project’s Scientific Board. Since the annotators are not fluent in all languages and due to the impossibility of recruiting expert native speakers, some documents need to be translated into English to be manually annotated. No inter-annotator agreement study has been conducted in this initial phase. We intend to remedy this limitation in the project’s next development cycle. However, during the annotation phase, annotators met on a weekly basis to discuss ambiguous cases and the guidelines. Annotators are encouraged to propose new classes or subclasses. For a new (sub)class to be accepted, the measure should have been independently identified by the majority of the annotators. In this phase, no new classes were proposed."* (Tziafas et al., 2021)

#### Who are the annotators?

*"The annotation has been conducted by three experts in political science working under the supervision of the project’s Scientific Board."* (Tziafas et al., 2021)

### Personal and Sensitive Information

Considerations for Using the Data
---------------------------------

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition to that, differences with regard to dataset statistics as given in the respective papers can be expected. The reader is advised to have a look at the conversion script in order to retrace the steps for converting the original dataset into the present jsonl format. For further information on the original dataset structure, we refer to the bibliographical references and the original Github repositories and/or web pages provided in this dataset card.

Additional Information
----------------------

### Dataset Curators

The names of the original dataset curators and creators can be found in the references given below, in the section *Citation Information*. Additional changes were made by Joel Niklaus (Email; Github) and Veton Matoshi (Email; Github).

### Licensing Information

Creative Commons Zero v1.0 Universal

### Contributions

Thanks to @JoelNiklaus and @kapllan for adding this dataset.
576b52004ed78dd747c0f9858fa6dacc7e4196e2
# Dataset Card for Annotated German Legal Decision Corpus ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** https://zenodo.org/record/3936490#.X1ed7ovgomK - **Paper:** Urchs., S., Mitrović., J., & Granitzer., M. (2021). Design and Implementation of German Legal Decision Corpora. Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART, 515–521. https://doi.org/10.5220/0010187305150521 - **Leaderboard:** - **Point of Contact:** [Joel Niklaus](mailto:[email protected]) ### Dataset Summary This dataset consists of 200 randomly chosen judgments. In these judgments a legal expert annotated the components conclusion, definition and subsumption of the German legal writing style Urteilsstil. *"Overall 25,075 sentences are annotated. 
5% (1,202) of these sentences are marked as conclusion, 21% (5,328) as definition, 53% (13,322) are marked as subsumption and the remaining 21% (6,481) as other. The length of judgments in sentences ranges from 38 to 862 sentences. The median of judgments have 97 sentences, the length of most judgments is on the shorter side."* (Urchs. et al., 2021) *"Judgments from 22 of the 131 courts are selected for the corpus. Most judgments originate from the VG Augsburg (59 / 30%) followed by the VG Ansbach (39 / 20%) and LSG Munich (33 / 17%)."* (Urchs. et al., 2021) *"29% (58) of all selected judgments are issued in the year 2016, followed by 22% (44) from the year 2017 and 21% (41) issued in the year 2015. [...] The percentages of selected judgments and decisions issued in 2018 and 2019 are roughly the same. No judgments from 2020 are selected."* (Urchs. et al., 2021) ### Supported Tasks and Leaderboards The dataset can be used for multi-class text classification tasks, more specifically, for argument mining. ### Languages The language in the dataset is German as it is used in Bavarian courts in Germany. ## Dataset Structure ### Data Instances Each sentence is saved as a json object on a line in one of the three files `train.jsonl`, `validation.jsonl` or `test.jsonl`. The file `meta.jsonl` contains meta information for each court. The `file_number` is present in all files for identification. Each sentence of the court decision was categorized according to its function. 
### Data Fields The file `meta.jsonl` contains for each row the following fields: - `meta_title`: Title provided by the website, it is used for saving the decision - `court`: Issuing court - `decision_style`: Style of the decision; the corpus contains either *Urteil* (='judgment') or *Endurteil* ( ='end-judgment') - `date`: Date when the decision was issued by the court - `file_number`: Identification number used for this decision by the court - `title`: Title provided by the court - `norm_chains`: Norms related to the decision - `decision_guidelines`: Short summary of the decision - `keywords`: Keywords associated with the decision - `lower_court`: Court that decided on the decision before - `additional_information`: Additional Information - `decision_reference`: References to the location of the decision in beck-online - `tenor`: Designation of the legal consequence ordered by the court (list of paragraphs) - `legal_facts`: Facts that form the base for the decision (list of paragraphs) The files `train.jsonl`, `validation.jsonl` and `test.jsonl` contain the following fields: - `file_number`: Identification number for linkage with the file `meta.jsonl` - `input_sentence`: The sentence to be classified - `label`: In depth explanation of the court decision. Each sentence is assigned to one of the major components of German *Urteilsstil* (Urchs. et al., 2021) (list of paragraphs, each paragraph containing list of sentences, each sentence annotated with one of the following four labels): - `conclusion`: Overall result - `definition`: Abstract legal facts and consequences - `subsumption`: Determination sentence / Concrete facts - `other`: Anything else - `context_before`: Context in the same paragraph before the input_sentence - `context_after`: Context in the same paragraph after the input_sentence ### Data Splits No split provided in the original release. Splits created by Joel Niklaus. 
We randomly split the dataset into 80% (160 decisions, 19271 sentences) train, 10% validation (20 decisions, 2726 sentences) and 10% test (20 decisions, 3078 sentences). We made sure that a decision occurs in only one split and is not dispersed over multiple splits.

Label Distribution

| label | train | validation | test |
|:---------------|-----------:|-------------:|----------:|
| conclusion | 975 | 115 | 112 |
| definition | 4105 | 614 | 609 |
| subsumption | 10034 | 1486 | 1802 |
| other | 4157 | 511 | 555 |
| total | **19271** | **2726** | **3078** |

## Dataset Creation

### Curation Rationale

Creating a publicly available German legal text corpus consisting of judgments that have been annotated by a legal expert. The annotated components consist of *conclusion*, *definition* and *subsumption* of the German legal writing style *Urteilsstil*.

### Source Data

#### Initial Data Collection and Normalization

*“The decision corpus is a collection of the decisions published on the website www.gesetze-bayern.de. At the time of the crawling the website offered 32,748 decisions of 131 Bavarian courts, dating back to 2015. The decisions are provided from the Bavarian state after the courts agreed to a publication. All decisions are processed by the publisher C.H.BECK, commissioned by the Bavarian state. This processing includes anonymisation, key-wording, and adding of editorial guidelines to the decisions.”* (Urchs. et al., 2021)

#### Who are the source language producers?

German courts from Bavaria

### Annotations

#### Annotation process

*“As stated above, the judgment corpus consist of 200 randomly chosen judgments that are annotated by a legal expert, who holds a first legal state exam. Due to financial, staff and time reasons the presented iteration of the corpus was only annotated by a single expert. In a future version several other experts will annotate the corpus and the inter-annotator agreement will be calculated.”* (Urchs.
et al., 2021)

#### Who are the annotators?

A legal expert, who holds a first legal state exam.

### Personal and Sensitive Information

*"All decisions are processed by the publisher C.H.BECK, commissioned by the Bavarian state. This processing includes **anonymisation**, key-wording, and adding of editorial guidelines to the decisions.”* (Urchs. et al., 2021)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

The SoMaJo Sentence Splitter has been used. Upon manual inspection of the dataset, we could see that the sentence splitter had poor accuracy in some cases (see `analyze_dataset()` in `convert_to_hf_dataset.py`). When creating the splits, we thought about merging small sentences with their neighbors or removing them altogether. However, since we could not find a straightforward way to do this, we decided to leave the dataset content untouched.

Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition to that, differences with regard to dataset statistics as given in the respective papers can be expected. The reader is advised to have a look at the conversion script `convert_to_hf_dataset.py` in order to retrace the steps for converting the original dataset into the present jsonl format. For further information on the original dataset structure, we refer to the bibliographical references and the original Github repositories and/or web pages provided in this dataset card.
## Additional Information ### Dataset Curators The names of the original dataset curators and creators can be found in references given below, in the section *Citation Information*. Additional changes were made by Joel Niklaus ([Email](mailto:[email protected]) ; [Github](https://github.com/joelniklaus)) and Veton Matoshi ([Email](mailto:[email protected]) ; [Github](https://github.com/kapllan)). ### Licensing Information [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode) ### Citation Information ``` @dataset{urchs_stefanie_2020_3936490, author = {Urchs, Stefanie and Mitrović, Jelena}, title = {{German legal jugements annotated with judement style components}}, month = jul, year = 2020, publisher = {Zenodo}, doi = {10.5281/zenodo.3936490}, url = {https://doi.org/10.5281/zenodo.3936490} } ``` ``` @conference{icaart21, author = {Urchs., Stefanie and Mitrovi{\'{c}}., Jelena and Granitzer., Michael}, booktitle = {Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART,}, doi = {10.5220/0010187305150521}, isbn = {978-989-758-484-8}, issn = {2184-433X}, organization = {INSTICC}, pages = {515--521}, publisher = {SciTePress}, title = {{Design and Implementation of German Legal Decision Corpora}}, year = {2021} } ``` ### Contributions Thanks to [@kapllan](https://github.com/kapllan) and [@joelniklaus](https://github.com/joelniklaus) for adding this dataset.
joelniklaus/german_argument_mining
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:de", "license:cc-by-4.0", "region:us" ]
2022-07-01T10:30:58+00:00
{"annotations_creators": ["expert-generated", "found"], "language_creators": ["found"], "language": ["de"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "Annotated German Legal Decision Corpus"}
2022-09-22T12:44:35+00:00
[]
[ "de" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-German #license-cc-by-4.0 #region-us
Dataset Card for Annotated German Legal Decision Corpus ======================================================= Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: * Repository: URL * Paper: Urchs., S., Mitrović., J., & Granitzer., M. (2021). Design and Implementation of German Legal Decision Corpora. Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART, 515–521. URL * Leaderboard: * Point of Contact: Joel Niklaus ### Dataset Summary This dataset consists of 200 randomly chosen judgments. In these judgments a legal expert annotated the components conclusion, definition and subsumption of the German legal writing style Urteilsstil. *"Overall 25,075 sentences are annotated. 5% (1,202) of these sentences are marked as conclusion, 21% (5,328) as definition, 53% (13,322) are marked as subsumption and the remaining 21% (6,481) as other. The length of judgments in sentences ranges from 38 to 862 sentences. The median of judgments have 97 sentences, the length of most judgments is on the shorter side."* (Urchs. et al., 2021) *"Judgments from 22 of the 131 courts are selected for the corpus. Most judgments originate from the VG Augsburg (59 / 30%) followed by the VG Ansbach (39 / 20%) and LSG Munich (33 / 17%)."* (Urchs. et al., 2021) *"29% (58) of all selected judgments are issued in the year 2016, followed by 22% (44) from the year 2017 and 21% (41) issued in the year 2015. [...] 
The percentages of selected judgments and decisions issued in 2018 and 2019 are roughly the same. No judgments from 2020 are selected."* (Urchs. et al., 2021) ### Supported Tasks and Leaderboards The dataset can be used for multi-class text classification tasks, more specifically, for argument mining. ### Languages The language in the dataset is German as it is used in Bavarian courts in Germany. Dataset Structure ----------------- ### Data Instances Each sentence is saved as a json object on a line in one of the three files 'URL', 'URL' or 'URL'. The file 'URL' contains meta information for each court. The 'file\_number' is present in all files for identification. Each sentence of the court decision was categorized according to its function. ### Data Fields The file 'URL' contains for each row the following fields: * 'meta\_title': Title provided by the website, it is used for saving the decision * 'court': Issuing court * 'decision\_style': Style of the decision; the corpus contains either *Urteil* (='judgment') or *Endurteil* ( ='end-judgment') * 'date': Date when the decision was issued by the court * 'file\_number': Identification number used for this decision by the court * 'title': Title provided by the court * 'norm\_chains': Norms related to the decision * 'decision\_guidelines': Short summary of the decision * 'keywords': Keywords associated with the decision * 'lower\_court': Court that decided on the decision before * 'additional\_information': Additional Information * 'decision\_reference': References to the location of the decision in beck-online * 'tenor': Designation of the legal consequence ordered by the court (list of paragraphs) * 'legal\_facts': Facts that form the base for the decision (list of paragraphs) The files 'URL', 'URL' and 'URL' contain the following fields: * 'file\_number': Identification number for linkage with the file 'URL' * 'input\_sentence': The sentence to be classified * 'label': In depth explanation of the court decision. 
Each sentence is assigned to one of the major components of German *Urteilsstil* (Urchs et al., 2021) (list of paragraphs, each paragraph containing a list of sentences, each sentence annotated with one of the following four labels):
	+ 'conclusion': Overall result
	+ 'definition': Abstract legal facts and consequences
	+ 'subsumption': Determination sentence / Concrete facts
	+ 'other': Anything else
* 'context\_before': Context in the same paragraph before the input\_sentence
* 'context\_after': Context in the same paragraph after the input\_sentence

### Data Splits

No split provided in the original release.

Splits created by Joel Niklaus. We randomly split the dataset into 80% (160 decisions, 19271 sentences) train, 10% validation (20 decisions, 2726 sentences) and 10% test (20 decisions, 3078 sentences). We made sure that a decision only occurs in one split and is not dispersed over multiple splits.

Label Distribution

Dataset Creation
----------------

### Curation Rationale

Creating a publicly available German legal text corpus consisting of judgments that have been annotated by a legal expert. The annotated components consist of *conclusion*, *definition* and *subsumption* of the German legal writing style *Urteilsstil*.

### Source Data

#### Initial Data Collection and Normalization

*“The decision corpus is a collection of the decisions published on the website URL. At the time of the crawling the website offered 32,748 decisions of 131 Bavarian courts, dating back to 2015. The decisions are provided from the Bavarian state after the courts agreed to a publication. All decisions are processed by the publisher C.H.BECK, commissioned by the Bavarian state. This processing includes anonymisation, key-wording, and adding of editorial guidelines to the decisions.”* (Urchs et al., 2021)

#### Who are the source language producers?
German courts from Bavaria

### Annotations

#### Annotation process

*“As stated above, the judgment corpus consist of 200 randomly chosen judgments that are annotated by a legal expert, who holds a first legal state exam. Due to financial, staff and time reasons the presented iteration of the corpus was only annotated by a single expert. In a future version several other experts will annotate the corpus and the inter-annotator agreement will be calculated.”* (Urchs et al., 2021)

#### Who are the annotators?

A legal expert, who holds a first legal state exam.

### Personal and Sensitive Information

*“All decisions are processed by the publisher C.H.BECK, commissioned by the Bavarian state. This processing includes anonymisation, key-wording, and adding of editorial guidelines to the decisions.”* (Urchs et al., 2021)

Considerations for Using the Data
---------------------------------

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

The SoMaJo Sentence Splitter has been used. Upon manual inspection of the dataset, we could see that the sentence splitter had poor accuracy in some cases (see in ). When creating the splits, we thought about merging small sentences with their neighbors or removing them altogether. However, since we could not find a straightforward way to do this, we decided to leave the dataset content untouched.

Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition to that, differences with regard to dataset statistics as given in the respective papers can be expected.
The reader is advised to have a look at the conversion script in order to retrace the steps for converting the original dataset into the present jsonl-format. For further information on the original dataset structure, we refer to the bibliographical references and the original Github repositories and/or web pages provided in this dataset card. Additional Information ---------------------- ### Dataset Curators The names of the original dataset curators and creators can be found in references given below, in the section *Citation Information*. Additional changes were made by Joel Niklaus (Email ; Github) and Veton Matoshi (Email ; Github). ### Licensing Information Creative Commons Attribution 4.0 International ### Contributions Thanks to @kapllan and @joelniklaus for adding this dataset.
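For a quick sanity check, the per-sentence jsonl records described under *Data Fields* can be read with a few lines of plain Python. The sketch below uses invented stand-in records: the field names follow the card, but the file number and sentences are made up for illustration, and the real file names (listed above only as 'URL') would replace the inline sample.

```python
import json
from collections import Counter

# Invented stand-in records: field names follow the card's "Data Fields"
# section; the file number and sentences are made-up examples.
sample_jsonl = "\n".join([
    '{"file_number": "M 1 K 16.1", "input_sentence": "Die Klage ist unbegruendet.",'
    ' "label": "conclusion", "context_before": [], "context_after": []}',
    '{"file_number": "M 1 K 16.1", "input_sentence": "Nach der VwGO gilt Folgendes.",'
    ' "label": "definition", "context_before": [], "context_after": []}',
    '{"file_number": "M 1 K 16.1", "input_sentence": "Ein solcher Fall liegt hier vor.",'
    ' "label": "subsumption", "context_before": [], "context_after": []}',
])

def label_distribution(lines):
    """Count how often each Urteilsstil label occurs in a jsonl split."""
    counts = Counter()
    for line in lines:
        if line.strip():
            counts[json.loads(line)["label"]] += 1
    return counts

print(label_distribution(sample_jsonl.splitlines()))
# Counter({'conclusion': 1, 'definition': 1, 'subsumption': 1})
```

Applied to the real train, validation and test files, the same loop reproduces the label distribution reported above (e.g., subsumption dominating at roughly 53%).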
e2bc5fd22217fdaa0054bb0eadbad8401d94dd50
# Dataset Card for Greek Legal Named Entity Recognition

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** http://legislation.di.uoa.gr/publications?language=en
- **Repository:**
- **Paper:** Angelidis, I., Chalkidis, I., & Koubarakis, M. (2018). Named Entity Recognition, Linking and Generation for Greek Legislation. JURIX.
- **Leaderboard:**
- **Point of Contact:** [Ilias Chalkidis](mailto:[email protected]); [Joel Niklaus](mailto:[email protected])

### Dataset Summary

This dataset contains an annotated corpus for named entity recognition in Greek legislation. It is the first of its kind for the Greek language in such an extended form and one of the few that examines legal text with full-spectrum entity recognition.

### Supported Tasks and Leaderboards

The dataset supports the task of named entity recognition.

### Languages

The language in the dataset is Greek as it is used in the Greek Government Gazette.
## Dataset Structure

### Data Instances

The file format is jsonl and three data splits are present (train, validation and test).

### Data Fields

The files contain the following data fields

- `date`: The date when the document was published.
- `gazette`: The government gazette of the document. Either `A` or `D`
  - `A` is the general one, publishing standard legislation
  - `D` is meant for legislation on urban planning and such things
- `words`: The list of tokens obtained by applying the spacy (v 3.3.1) Greek tokenizer on the sentences. For more information see `convert_to_hf_dataset.py`.
- `ner`: The list of NER tags. The list of labels for the named entities that are covered by the dataset is the following:
  - `FACILITY`: Facilities, such as police stations, departments etc.
  - `GPE`: Geopolitical Entity; any reference to a geopolitical entity (e.g., country, city, Greek administrative unit, etc.)
  - `LEG-REFS`: Legislation Reference; any reference to Greek or European legislation (e.g., Presidential Decrees, Laws, Decisions, EU Regulations and Directives, etc.)
  - `LOCATION-NAT`: Well-defined natural location, such as rivers, mountains, lakes etc.
  - `LOCATION-UNK`: Poorly defined locations such as "End of road X" or other locations that are not "official".
  - `ORG`: Organization; any reference to a public or private organization, such as: international organizations (e.g., European Union, United Nations, etc.), Greek public organizations (e.g., Social Insurance Institution) or private ones (e.g., companies, NGOs, etc.).
  - `PERSON`: Any formal name of a person mentioned in the text (e.g., Greek government members, public administration officials, etc.).
  - `PUBLIC-DOCS`: Public Document Reference; any reference to documents or decisions that have been published by a public institution (organization) that are not considered a primary source of legislation (e.g., local decisions, announcements, memorandums, directives).
- `O`: No entity annotation present The final tagset (in IOB notation) is the following: `['O', 'B-ORG', 'I-ORG', 'B-GPE', 'I-GPE', 'B-LEG-REFS', 'I-LEG-REFS', 'B-PUBLIC-DOCS', 'I-PUBLIC-DOCS', 'B-PERSON', 'I-PERSON', 'B-FACILITY', 'I-FACILITY', 'B-LOCATION-UNK', 'I-LOCATION-UNK', 'B-LOCATION-NAT', 'I-LOCATION-NAT']` ### Data Splits The dataset has three splits: *train*, *validation* and *test*. Split across the documents: | split | number of documents | |:---------------|--------------------:| | train | 23723 | | validation | 5478 | | test | 5084 | Split across NER labels | NER label + split | number of instances | |:-----------------------------------------------|----------------------:| | ('FACILITY', 'test') | 142 | | ('FACILITY', 'train') | 1224 | | ('FACILITY', 'validation') | 60 | | ('GPE', 'test') | 1083 | | ('GPE', 'train') | 5400 | | ('GPE', 'validation') | 1214 | | ('LEG-REFS', 'test') | 1331 | | ('LEG-REFS', 'train') | 5159 | | ('LEG-REFS', 'validation') | 1382 | | ('LOCATION-NAT', 'test') | 26 | | ('LOCATION-NAT', 'train') | 145 | | ('LOCATION-NAT', 'validation') | 2 | | ('LOCATION-UNK', 'test') | 205 | | ('LOCATION-UNK', 'train') | 1316 | | ('LOCATION-UNK', 'validation') | 283 | | ('ORG', 'test') | 1354 | | ('ORG', 'train') | 5906 | | ('ORG', 'validation') | 1506 | | ('PERSON', 'test') | 491 | | ('PERSON', 'train') | 1921 | | ('PERSON', 'validation') | 475 | | ('PUBLIC-DOCS', 'test') | 452 | | ('PUBLIC-DOCS', 'train') | 2652 | | ('PUBLIC-DOCS', 'validation') | 556 | ## Dataset Creation ### Curation Rationale Creating a big dataset for Greek named entity recognition and entity linking. ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? Greek Government Gazette ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
According to Angelidis et al. (2018), the authors of the paper annotated the data: *"Our group annotated all of the above documents for the 6 entity types that we examine."*

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition to that, differences with regard to dataset statistics as given in the respective papers can be expected. The reader is advised to have a look at the conversion script `convert_to_hf_dataset.py` in order to retrace the steps for converting the original dataset into the present jsonl-format. For further information on the original dataset structure, we refer to the bibliographical references and the original Github repositories and/or web pages provided in this dataset card.

## Additional Information

### Dataset Curators

The names of the original dataset curators and creators can be found in references given below, in the section *Citation Information*. Additional changes were made by Joel Niklaus ([Email](mailto:[email protected]); [Github](https://github.com/joelniklaus)) and Veton Matoshi ([Email](mailto:[email protected]); [Github](https://github.com/kapllan)).
### Licensing Information [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/) ### Citation Information ``` @inproceedings{Angelidis2018NamedER, author = {Angelidis, Iosif and Chalkidis, Ilias and Koubarakis, Manolis}, booktitle = {JURIX}, keywords = {greek,legal nlp,named entity recognition}, title = {{Named Entity Recognition, Linking and Generation for Greek Legislation}}, year = {2018} } ``` ### Contributions Thanks to [@JoelNiklaus](https://github.com/joelniklaus) and [@kapllan](https://github.com/kapllan) for adding this dataset.
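The IOB tagset listed under *Data Fields* can be decoded back into entity spans with a short helper. The sketch below is a minimal illustration; the example sentence is an invented toy, not taken from the corpus.

```python
# The full IOB tagset as documented in the card.
TAGSET = ['O', 'B-ORG', 'I-ORG', 'B-GPE', 'I-GPE', 'B-LEG-REFS', 'I-LEG-REFS',
          'B-PUBLIC-DOCS', 'I-PUBLIC-DOCS', 'B-PERSON', 'I-PERSON',
          'B-FACILITY', 'I-FACILITY', 'B-LOCATION-UNK', 'I-LOCATION-UNK',
          'B-LOCATION-NAT', 'I-LOCATION-NAT']
label2id = {tag: i for i, tag in enumerate(TAGSET)}

def iob_spans(words, tags):
    """Collect (entity_type, surface_text) spans from an IOB-tagged sentence."""
    spans, current = [], None
    for word, tag in zip(words, tags):
        if tag.startswith("B-"):           # a new entity starts
            if current:
                spans.append(current)
            current = (tag[2:], [word])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(word)        # entity continues
        else:                              # 'O' or an inconsistent I- tag
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(label, " ".join(tokens)) for label, tokens in spans]

# Invented toy example (English placeholders instead of Greek text).
words = ["The", "European", "Union", "adopted", "it"]
tags  = ["O", "B-ORG", "I-ORG", "O", "O"]
print(iob_spans(words, tags))  # [('ORG', 'European Union')]
```

The `words` and `ner` fields of each record line up one-to-one, so the same helper applies directly to the dataset's rows.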
joelniklaus/greek_legal_ner
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:other", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:el", "license:cc-by-nc-sa-4.0", "legal", "region:us" ]
2022-07-01T10:34:33+00:00
{"annotations_creators": ["other"], "language_creators": ["found"], "language": ["el"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "Greek Legal Named Entity Recognition", "tags": ["legal"]}
2023-09-27T16:48:13+00:00
[]
[ "el" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-other #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Modern Greek (1453-) #license-cc-by-nc-sa-4.0 #legal #region-us
Dataset Card for Greek Legal Named Entity Recognition ===================================================== Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: Angelidis, I., Chalkidis, I., & Koubarakis, M. (2018). Named Entity Recognition, Linking and Generation for Greek Legislation. JURIX. * Leaderboard: * Point of Contact: Ilias Chalkidis; Joel Niklaus ### Dataset Summary This dataset contains an annotated corpus for named entity recognition in Greek legislations. It is the first of its kind for the Greek language in such an extended form and one of the few that examines legal text in a full spectrum entity recognition. ### Supported Tasks and Leaderboards The dataset supports the task of named entity recognition. ### Languages The language in the dataset is Greek as it used in the Greek Government Gazette. Dataset Structure ----------------- ### Data Instances The file format is jsonl and three data splits are present (train, validation and test). ### Data Fields The files contain the following data fields * 'date': The date when the document was published. * 'gazette': The government gazette of the document. Either 'A' or 'D' + 'A' is the general one, publishing standard legislation + 'D' is meant for legislation on urban planning and such things * 'words': The list of tokens obtained by applying the spacy (v 3.3.1) Greek tokenizer on the sentences. For more information see 'convert\_to\_hf\_dataset.py'. 
* 'ner': The list of ner tags. The list of labels for the named entities that are covered by the dataset are the following: + 'FACILITY': Facilities, such as police stations, departments etc. + 'GPE': Geopolitical Entity; any reference to a geopolitical entity (e.g., country, city, Greek administrative unit, etc.) + 'LEG-REFS': Legislation Reference; any reference to Greek or European legislation (e.g., Presidential Decrees, Laws, Decisions, EU Regulations and Directives, etc.) + 'LOCATION-NAT': Well defined natural location, such as rivers, mountains, lakes etc. + 'LOCATION-UNK': Poorly defined locations such "End of road X" or other locations that are not "official". + 'ORG': Organization; any reference to a public or private organization, such as: international organizations (e.g., European Union, United Nations, etc.), Greek public organizations (e.g., Social Insurance Institution) or private ones (e.g., companies, NGOs, etc.). + 'PERSON': Any formal name of a person mentioned in the text (e.g., Greek government members, public administration officials, etc.). + 'PUBLIC-DOCS': Public Document Reference; any reference to documents or decisions that have been published by a public institution (organization) that are not considered a primary source of legislation (e.g., local decisions, announcements, memorandums, directives). + 'O': No entity annotation present The final tagset (in IOB notation) is the following: '['O', 'B-ORG', 'I-ORG', 'B-GPE', 'I-GPE', 'B-LEG-REFS', 'I-LEG-REFS', 'B-PUBLIC-DOCS', 'I-PUBLIC-DOCS', 'B-PERSON', 'I-PERSON', 'B-FACILITY', 'I-FACILITY', 'B-LOCATION-UNK', 'I-LOCATION-UNK', 'B-LOCATION-NAT', 'I-LOCATION-NAT']' ### Data Splits The dataset has three splits: *train*, *validation* and *test*. Split across the documents: Split across NER labels Dataset Creation ---------------- ### Curation Rationale Creating a big dataset for Greek named entity recognition and entity linking. 
### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? Greek Government Gazette ### Annotations #### Annotation process #### Who are the annotators? According to Angelidis et al. (2018), the authors of the paper annotated the data: *"Our group annotated all of the above documents for the 6 entity types that we examine."* ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition to that, differences with regard to dataset statistics as given in the respective papers can be expected. The reader is advised to have a look at the conversion script in order to retrace the steps for converting the original dataset into the present jsonl format. For further information on the original dataset structure, we refer to the bibliographical references and the original Github repositories and/or web pages provided in this dataset card. Additional Information ---------------------- ### Dataset Curators The names of the original dataset curators and creators can be found in references given below, in the section *Citation Information*. Additional changes were made by Joel Niklaus (Email; Github) and Veton Matoshi (Email; Github). ### Licensing Information Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License ### Contributions Thanks to @JoelNiklaus and @kapllan for adding this dataset.
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-other #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Modern Greek (1453-) #license-cc-by-nc-sa-4.0 #legal #region-us \n" ]
a072e8508049097fc216fd26bdd89a89e47e1272
# Dataset Card for Romanian Named Entity Recognition in the Legal domain (LegalNERo) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** https://zenodo.org/record/4922385 - **Paper:** Pais, V., Mitrofan, M., Gasan, C. L., Coneschi, V., & Ianov, A. (2021). Named Entity Recognition in the Romanian Legal Domain. Proceedings of the Natural Legal Language Processing Workshop 2021, 9–18. https://doi.org/10.18653/v1/2021.nllp-1.2 - **Leaderboard:** - **Point of Contact:** [Joel Niklaus](mailto:[email protected]) ### Dataset Summary LegalNERo is a manually annotated corpus for named entity recognition in the Romanian legal domain. It provides gold annotations for organizations, locations, persons, time and legal resources mentioned in legal documents. Additionally, it offers GEONAMES codes for the named entities annotated as location (where a link could be established). 
### Supported Tasks and Leaderboards The dataset supports the task of named entity recognition. ### Languages Since legal documents for LegalNERo are extracted from the larger [MARCELL-RO corpus](https://elrc-share.eu/repository/browse/marcell-romanian-legislative-subcorpus-v2/2da548428b9d11eb9c1a00155d026706ce94a6b59ffc4b0e9fb5cd9cebe6889e/), the language in the dataset is Romanian as it is used in national legislation ranging from 1881 to 2021. ## Dataset Structure ### Data Instances The file format is jsonl and three data splits are present (train, validation and test). Named Entity annotations are non-overlapping. Rows containing only one word (mostly words such as `\t\t\t`, `\n` or `-----`) have been filtered out. ### Data Fields The files contain the following data fields - `file_name`: The file_name of the applicable annotation document - `words`: The list of tokens obtained by applying the spacy (v 3.3.1) Romanian tokenizer on the sentences. For more information see `convert_to_hf_dataset.py`. - `ner`: The list of ner tags. The list of labels for the named entities that are covered by the dataset is the following: - `LEGAL`: Legal reference/resources - `LOC`: Location - `ORG`: Organization - `PER`: Person - `TIME`: Time reference - `O`: No entity annotation present The final tagset (in IOB notation) is the following: `['O', 'B-TIME', 'I-TIME', 'B-LEGAL', 'I-LEGAL', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-PER', 'I-PER']` ### Data Splits Splits created by Joel Niklaus. | split | number of documents | number of sentences | |:---------------|--------------------:|--------------------:| | train | 296 (80%) | 7552 | | validation | 37 (10%) | 966 | | test | 37 (10%) | 907 | ## Dataset Creation ### Curation Rationale The dataset provides gold annotations for organizations, locations, persons, time and legal resources mentioned in Romanian legal documents. 
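As a concrete illustration of the jsonl layout and the one-word filter described under Data Instances, here is a stdlib-only sketch (the file name `demo_train.jsonl` is an illustrative stand-in, not a file shipped with the dataset):

```python
import json
from pathlib import Path

def read_split(path):
    """Yield records from a jsonl split, skipping rows with a single token.

    Mirrors the filter already applied during conversion, so released
    files should pass through unchanged.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if len(record["words"]) == 1:  # e.g. '\t\t\t', '\n' or '-----'
                continue
            yield record

# Synthetic two-row file standing in for one of the real splits.
demo = Path("demo_train.jsonl")
rows = [
    {"file_name": "doc1.txt", "words": ["-----"], "ner": ["O"]},
    {"file_name": "doc1.txt", "words": ["Legea", "nr.", "8"],
     "ner": ["B-LEGAL", "I-LEGAL", "I-LEGAL"]},
]
demo.write_text("\n".join(json.dumps(r, ensure_ascii=False) for r in rows),
                encoding="utf-8")

kept = list(read_split(demo))
print(len(kept))  # 1, only the multi-token row survives
```

The field names (`file_name`, `words`, `ner`) follow the Data Fields section above; only the file path is assumed.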
### Source Data #### Initial Data Collection and Normalization The LegalNERo corpus consists of 370 documents from the larger [MARCELL-RO corpus](https://elrc-share.eu/repository/browse/marcell-romanian-legislative-subcorpus-v2/2da548428b9d11eb9c1a00155d026706ce94a6b59ffc4b0e9fb5cd9cebe6889e/). In the following we give a short description of the crawling process for the MARCELL-RO corpus. *The MARCELL-RO corpus "contains 163,274 files, which represent the body of national legislation ranging from 1881 to 2021. This corpus includes mainly: governmental decisions, ministerial orders, decisions, decrees and laws. All the texts were obtained via crawling from the public Romanian legislative portal . We have not distinguished between in force and "out of force" laws because it is difficult to do this automatically and there is no external resource to use to distinguish between them. The texts were extracted from the original HTML format and converted into TXT files. Each file has multiple levels of annotation: firstly the texts were tokenized, lemmatized and morphologically annotated using the Tokenizing, Tagging and Lemmatizing (TTL) text processing platform developed at RACAI, then dependency parsed with NLP-Cube, named entities were identified using a NER tool developed at RACAI, nominal phrases were identified also with TTL, while IATE terms and EuroVoc descriptors were identified using an internal tool. All processing tools were integrated into an end-to-end pipeline available within the RELATE platform and as a dockerized version. The files were annotated with the latest version of the pipeline completed within Activity 4 of the MARCELL project."* [Link](https://elrc-share.eu/repository/browse/marcell-romanian-legislative-subcorpus-v2/2da548428b9d11eb9c1a00155d026706ce94a6b59ffc4b0e9fb5cd9cebe6889e/) #### Who are the source language producers? The source language producers are presumably politicians and lawyers. 
### Annotations #### Annotation process *“Annotation of the LegalNERo corpus was performed by 5 human annotators, supervised by two senior researchers at the Institute for Artificial Intelligence "Mihai Drăgănescu" of the Romanian Academy (RACAI). For annotation purposes we used the BRAT tool […]. Inside the legal reference class, we considered sub-entities of type *organization* and *time*. This allows for using the LegalNERo corpus in two scenarios: using all the 5 entity classes or using only the remaining general-purpose classes. The LegalNERo corpus contains a total of 370 documents from the larger MARCELL-RO corpus. These documents were split amongst the 5 annotators, with certain documents being annotated by multiple annotators. Each annotator manually annotated 100 documents. The annotators were unaware of the overlap, which allowed us to compute an inter-annotator agreement. We used the Cohen’s Kappa measure and obtained a value of 0.89, which we consider to be a good result.”* (Pais et al., 2021) #### Who are the annotators? *"[...] 5 human annotators, supervised by two senior researchers at the Institute for Artificial Intelligence "Mihai Drăgănescu" of the Romanian Academy (RACAI)."* ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the dataset at hand, including the folder structure, may differ considerably from the original dataset. 
In addition to that, differences with regard to dataset statistics as given in the respective papers can be expected. The reader is advised to have a look at the conversion script `convert_to_hf_dataset.py` in order to retrace the steps for converting the original dataset into the present jsonl format. For further information on the original dataset structure, we refer to the bibliographical references and the original Github repositories and/or web pages provided in this dataset card. ## Additional Information ### Dataset Curators The names of the original dataset curators and creators can be found in references given below, in the section *Citation Information*. Additional changes were made by Joel Niklaus ([Email](mailto:[email protected]); [Github](https://github.com/joelniklaus)) and Veton Matoshi ([Email](mailto:[email protected]); [Github](https://github.com/kapllan)). ### Licensing Information [Creative Commons Attribution Non Commercial No Derivatives 4.0 International](https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode) ### Citation Information ``` @dataset{pais_vasile_2021_4922385, author = {Păiș, Vasile and Mitrofan, Maria and Gasan, Carol Luca and Ianov, Alexandru and Ghiță, Corvin and Coneschi, Vlad Silviu and Onuț, Andrei}, title = {{Romanian Named Entity Recognition in the Legal domain (LegalNERo)}}, month = may, year = 2021, publisher = {Zenodo}, doi = {10.5281/zenodo.4922385}, url = {https://doi.org/10.5281/zenodo.4922385} } ``` ``` @inproceedings{pais-etal-2021-named, author = {Pais, Vasile and Mitrofan, Maria and Gasan, Carol Luca and Coneschi, Vlad and Ianov, Alexandru}, booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2021}, doi = {10.18653/v1/2021.nllp-1.2}, month = {nov}, pages = {9--18}, publisher = {Association for Computational Linguistics}, title = {{Named Entity Recognition in the {R}omanian Legal Domain}}, url = {https://aclanthology.org/2021.nllp-1.2}, year = {2021} } ``` ### Contributions Thanks 
to [@JoelNiklaus](https://github.com/joelniklaus) and [@kapllan](https://github.com/kapllan) for adding this dataset.
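As a footnote to the Annotation process section above: Cohen's Kappa, the agreement measure quoted there (0.89), corrects raw agreement for chance. A stdlib-only sketch of the computation on made-up label sequences (illustrative data only, not the LegalNERo annotations):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators' equal-length label sequences."""
    assert len(a) == len(b) and a, "need two non-empty sequences of equal length"
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    freq_a, freq_b = Counter(a), Counter(b)
    # Chance agreement: probability both pick the same label independently.
    expected = sum(freq_a[lab] * freq_b[lab] for lab in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

ann1 = ["O", "LEGAL", "LEGAL", "O", "ORG", "O", "O", "PER"]
ann2 = ["O", "LEGAL", "O",     "O", "ORG", "O", "O", "PER"]
print(round(cohens_kappa(ann1, ann2), 3))  # 0.8
```

With only a handful of labels the value is unstable; the 0.89 reported for LegalNERo was computed over the documents that were annotated by multiple annotators.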
joelniklaus/legalnero
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:other", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ro", "license:cc-by-nc-nd-4.0", "legal", "region:us" ]
2022-07-01T10:39:54+00:00
{"annotations_creators": ["other"], "language_creators": ["found"], "language": ["ro"], "license": ["cc-by-nc-nd-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "Romanian Named Entity Recognition in the Legal domain (LegalNERo)", "tags": ["legal"]}
2023-09-27T16:48:28+00:00
[]
[ "ro" ]
[ "### Dataset Summary\n\n\nLegalNERo is a manually annotated corpus for named entity recognition in the Romanian legal domain. It provides gold annotations for organizations, locations, persons, time and legal resources mentioned in legal documents. Additionally it offers GEONAMES codes for the named entities annotated as location (where a link could be established).", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports the task of named entity recognition.", "### Languages\n\n\nSince legal documents for LegalNERo are extracted from the larger MARCELL-RO corpus, the language in the dataset is Romanian as it used in national legislation ranging from 1881 to 2021.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThe file format is jsonl and three data splits are present (train, validation and test). Named Entity annotations are non-overlapping.\n\n\nRows only containing one word (mostly words such as '\\t\\t\\t', '\\n' or '-----') have been filtered out.", "### Data Fields\n\n\nThe files contain the following data fields\n\n\n* 'file\\_name': The file\\_name of the applicable annotation document\n* 'words': The list of tokens obtained by applying the spacy (v 3.3.1) Greek tokenizer on the sentences. For more information see 'convert\\_to\\_hf\\_dataset.py'.\n* 'ner': The list of ner tags. 
The list of labels for the named entities that are covered by the dataset are the following:\n\t+ 'LEGAL': Legal reference/resources\n\t+ 'LOC': Location\n\t+ 'ORG': Organization\n\t+ 'PER': Person\n\t+ 'TIME': Time reference\n\t+ 'O': No entity annotation present\n\n\nThe final tagset (in IOB notation) is the following: '['O', 'B-TIME', 'I-TIME', 'B-LEGAL', 'I-LEGAL', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-PER', 'I-PER']'", "### Data Splits\n\n\nSplits created by Joel Niklaus.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe dataset provides gold annotations for organizations, locations, persons, time and legal resources mentioned in Romanian legal documents.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe LegalNERo corpus consists of 370 documents from the larger MARCELL-RO corpus. In the following we give a short description of the crawling process for the MARCELL-RO corpus.\n\n\n*The MARCELL-RO corpus \"contains 163,274 files, which represent the body of national legislation ranging from 1881 to 2021. This corpus includes mainly: governmental decisions, ministerial orders, decisions, decrees and laws. All the texts were obtained via crawling from the public Romanian legislative portal . We have not distinguished between in force and \"out of force\" laws because it is difficult to do this automatically and there is no external resource to use to distinguish between them. The texts were extracted from the original HTML format and converted into TXT files. Each file has multiple levels of annotation: firstly the texts were tokenized, lemmatized and morphologically annotated using the Tokenizing, Tagging and Lemmatizing (TTL) text processing platform developed at RACAI, then dependency parsed with NLP-Cube, named entities were identified using a NER tool developed at RACAI, nominal phrases were identified also with TTL, while IATE terms and EuroVoc descriptors were identified using an internal tool. 
All processing tools were integrated into an end-to-end pipeline available within the RELATE platform and as a dockerized version. The files were annotated with the latest version of the pipeline completed within Activity 4 of the MARCELL project.\"* Link", "#### Who are the source language producers?\n\n\nThe source language producers are presumably politicians and lawyers.", "### Annotations", "#### Annotation process\n\n\n*“Annotation of the LegalNERo corpus was performed by 5 human annotators, supervised by two senior researchers at the Institute for Artificial Intelligence \"Mihai Drăgănescu\" of the Romanian Academy (RACAI). For annotation purposes we used the BRAT tool4 […].\nInside the legal reference class, we considered sub-entities of type *organization* and *time*. This allows for using the LegalNERo corpus in two scenarios: using all the 5 entity classes or using only the remaining general-purpose classes. The LegalNERo corpus contains a total of 370 documents from the larger MARCELL-RO corpus. These documents were split amongst the 5 annotators, with certain documents being annotated by multiple annotators. Each annotator manually annotated 100 documents. The annotators were unaware of the overlap, which allowed us to compute an inter-annotator agreement. We used the Cohen’s Kappa measure and obtained a value of 0.89, which we consider to be a good result.”* (Pais et al., 2021)", "#### Who are the annotators?\n\n\n*\"[...] 5 human annotators, supervised by two senior researchers at the Institute for Artificial Intelligence \"Mihai Drăgănescu\" of the Romanian Academy (RACAI).\"*", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nNote that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton Matoshi. 
The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition to that, differences with regard to dataset statistics as given in the respective papers can be expected. The reader is advised to have a look at the conversion script in order to retrace the steps for converting the original dataset into the present jsonl-format. For further information on the original dataset structure, we refer to the bibliographical references and the original Github repositories and/or web pages provided in this dataset card.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe names of the original dataset curators and creators can be found in references given below, in the section *Citation Information*.\nAdditional changes were made by Joel Niklaus (Email; Github) and Veton Matoshi (Email; Github).", "### Licensing Information\n\n\nCreative Commons Attribution Non Commercial No Derivatives 4.0 International", "### Contributions\n\n\nThanks to @JoelNiklaus and @kapllan for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-other #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Romanian #license-cc-by-nc-nd-4.0 #legal #region-us \n", "### Dataset Summary\n\n\nLegalNERo is a manually annotated corpus for named entity recognition in the Romanian legal domain. It provides gold annotations for organizations, locations, persons, time and legal resources mentioned in legal documents. Additionally it offers GEONAMES codes for the named entities annotated as location (where a link could be established).", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports the task of named entity recognition.", "### Languages\n\n\nSince legal documents for LegalNERo are extracted from the larger MARCELL-RO corpus, the language in the dataset is Romanian as it is used in national legislation ranging from 1881 to 2021.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThe file format is jsonl and three data splits are present (train, validation and test). Named Entity annotations are non-overlapping.\n\n\nRows only containing one word (mostly words such as '\\t\\t\\t', '\\n' or '-----') have been filtered out.", "### Data Fields\n\n\nThe files contain the following data fields\n\n\n* 'file\\_name': The file\\_name of the applicable annotation document\n* 'words': The list of tokens obtained by applying the spacy (v 3.3.1) Greek tokenizer on the sentences. For more information see 'convert\\_to\\_hf\\_dataset.py'.\n* 'ner': The list of ner tags. 
The list of labels for the named entities that are covered by the dataset are the following:\n\t+ 'LEGAL': Legal reference/resources\n\t+ 'LOC': Location\n\t+ 'ORG': Organization\n\t+ 'PER': Person\n\t+ 'TIME': Time reference\n\t+ 'O': No entity annotation present\n\n\nThe final tagset (in IOB notation) is the following: '['O', 'B-TIME', 'I-TIME', 'B-LEGAL', 'I-LEGAL', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-PER', 'I-PER']'", "### Data Splits\n\n\nSplits created by Joel Niklaus.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe dataset provides gold annotations for organizations, locations, persons, time and legal resources mentioned in Romanian legal documents.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe LegalNERo corpus consists of 370 documents from the larger MARCELL-RO corpus. In the following we give a short description of the crawling process for the MARCELL-RO corpus.\n\n\n*The MARCELL-RO corpus \"contains 163,274 files, which represent the body of national legislation ranging from 1881 to 2021. This corpus includes mainly: governmental decisions, ministerial orders, decisions, decrees and laws. All the texts were obtained via crawling from the public Romanian legislative portal . We have not distinguished between in force and \"out of force\" laws because it is difficult to do this automatically and there is no external resource to use to distinguish between them. The texts were extracted from the original HTML format and converted into TXT files. Each file has multiple levels of annotation: firstly the texts were tokenized, lemmatized and morphologically annotated using the Tokenizing, Tagging and Lemmatizing (TTL) text processing platform developed at RACAI, then dependency parsed with NLP-Cube, named entities were identified using a NER tool developed at RACAI, nominal phrases were identified also with TTL, while IATE terms and EuroVoc descriptors were identified using an internal tool. 
All processing tools were integrated into an end-to-end pipeline available within the RELATE platform and as a dockerized version. The files were annotated with the latest version of the pipeline completed within Activity 4 of the MARCELL project.\"* Link", "#### Who are the source language producers?\n\n\nThe source language producers are presumably politicians and lawyers.", "### Annotations", "#### Annotation process\n\n\n*“Annotation of the LegalNERo corpus was performed by 5 human annotators, supervised by two senior researchers at the Institute for Artificial Intelligence \"Mihai Drăgănescu\" of the Romanian Academy (RACAI). For annotation purposes we used the BRAT tool4 […].\nInside the legal reference class, we considered sub-entities of type *organization* and *time*. This allows for using the LegalNERo corpus in two scenarios: using all the 5 entity classes or using only the remaining general-purpose classes. The LegalNERo corpus contains a total of 370 documents from the larger MARCELL-RO corpus. These documents were split amongst the 5 annotators, with certain documents being annotated by multiple annotators. Each annotator manually annotated 100 documents. The annotators were unaware of the overlap, which allowed us to compute an inter-annotator agreement. We used the Cohen’s Kappa measure and obtained a value of 0.89, which we consider to be a good result.”* (Pais et al., 2021)", "#### Who are the annotators?\n\n\n*\"[...] 5 human annotators, supervised by two senior researchers at the Institute for Artificial Intelligence \"Mihai Drăgănescu\" of the Romanian Academy (RACAI).\"*", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nNote that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton Matoshi. 
The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition to that, differences with regard to dataset statistics as given in the respective papers can be expected. The reader is advised to have a look at the conversion script in order to retrace the steps for converting the original dataset into the present jsonl-format. For further information on the original dataset structure, we refer to the bibliographical references and the original Github repositories and/or web pages provided in this dataset card.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe names of the original dataset curators and creators can be found in references given below, in the section *Citation Information*.\nAdditional changes were made by Joel Niklaus (Email; Github) and Veton Matoshi (Email; Github).", "### Licensing Information\n\n\nCreative Commons Attribution Non Commercial No Derivatives 4.0 International", "### Contributions\n\n\nThanks to @JoelNiklaus and @kapllan for adding this dataset." ]
448c5caa985b8dafb275294f226120f41a7f8251
# Dataset Card for A Corpus for Multilingual Analysis of Online Terms of Service ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** http://claudette.eui.eu/corpus_multilingual_NLLP2021.zip - **Paper:** Drawzeski, K., Galassi, A., Jablonowska, A., Lagioia, F., Lippi, M., Micklitz, H. W., Sartor, G., Tagiuri, G., & Torroni, P. (2021). A Corpus for Multilingual Analysis of Online Terms of Service. Proceedings of the Natural Legal Language Processing Workshop 2021, 1–8. https://doi.org/10.18653/v1/2021.nllp-1.1 - **Leaderboard:** - **Point of Contact:** [Joel Niklaus](mailto:[email protected]) ### Dataset Summary *"We present the first annotated corpus for multilingual analysis of potentially unfair clauses in online Terms of Service [=ToS]. The data set comprises a total of 100 contracts, obtained from 25 documents annotated in four different languages: English, German, Italian, and Polish. 
For each contract, potentially unfair clauses for the consumer are annotated, for nine different unfairness categories."* (Drawzeski et al., 2021) ### Supported Tasks and Leaderboards The dataset can be used for multi-class multi-label text classification tasks, more specifically, for classifying unfair clauses in ToS. ### Languages English, German, Italian, and Polish. ## Dataset Structure ### Data Instances The file format is jsonl and three data splits are present (train, validation and test). ### Data Fields The dataset contains the following fields: - `language`: The language of the sentence/document. - `company`: The company of the document. - `line_number`: The line number of the sentence in the document. - `sentence`: The sentence to be classified. - `unfairness_level`: The unfairness level assigned to the sentence (if two clauses apply, the higher unfairness level is assigned here). The documents have been annotated using nine tags that represent different categories of clause unfairness. These boolean tags are: - `a` = Arbitration: *”This clause requires or allows the parties to resolve their disputes through an arbitration process, before the case could go to court. It is therefore considered a kind of forum selection clause. However, such a clause may or may not specify that arbitration should occur within a specific jurisdiction. Clauses stipulating that the arbitration should (1) take place in a state other than the state of consumer’s residence and/or (2) be based not on law but on arbiter’s discretion were marked as clearly unfair.”* (Lippi et al., 2019) - `ch` = Unilateral change: *"This clause specifies the conditions under which the service provider could amend and modify the terms of service and/or the service itself. Such clauses were always considered as potentially unfair. 
This is because the ECJ has not yet issued a judgment in this regard, though the Annex to the Directive contains several examples supporting such a qualification."* (Lippi et al., 2019)
- `cr` = Content removal : *"This gives the provider a right to modify/delete user’s content, including in-app purchases, and sometimes specifies the conditions under which the service provider may do so. As in the case of unilateral termination, clauses that indicate conditions for content removal were marked as potentially unfair, whereas clauses stipulating that the service provider may remove content in his full discretion, and/or at any time for any or no reasons and/or without notice nor possibility to retrieve the content were marked as clearly unfair."* (Lippi et al., 2019)
- `j` = Jurisdiction : *"This type of clause stipulates what courts will have the competence to adjudicate disputes under the contract. Jurisdiction clauses giving consumers a right to bring disputes in their place of residence were marked as clearly fair, whereas clauses stating that any judicial proceeding takes a residence away (i.e. in a different city, different country) were marked as clearly unfair. This assessment is grounded in ECJ’s case law, see for example Oceano case number C-240/98."* (Lippi et al., 2019)
- `law` = Choice of law: *"This clause specifies what law will govern the contract, meaning also what law will be applied in potential adjudication of a dispute arising under the contract. Clauses defining the applicable law as the law of the consumer’s country of residence were marked as clearly fair [...]"* (Lippi et al., 2019)
- `ltd` = Limitation of liability: *"This clause stipulates that the duty to pay damages is limited or excluded, for certain kinds of losses and under certain conditions. 
Clauses that explicitly affirm non-excludable providers’ liabilities were marked as clearly fair."* (Lippi et al., 2019) - `ter` = Unilateral termination: *"This clause gives provider the right to suspend and/or terminate the service and/or the contract, and sometimes details the circumstances under which the provider claims to have a right to do so. Unilateral termination clauses that specify reasons for termination were marked as potentially unfair. Whereas clauses stipulating that the service provider may suspend or terminate the service at any time for any or no reasons and/or without notice were marked as clearly unfair."* (Lippi et al., 2019) - `use` = Contract by using: *"This clause stipulates that the consumer is bound by the terms of use of a specific service, simply by using the service, without even being required to mark that he or she has read and accepted them. We always marked such clauses as potentially unfair. The reason for this choice is that a good argument can be offered for these clauses to be unfair, because they originate an imbalance in rights and duties of the parties, but this argument has no decisive authoritative backing yet, since the ECJ has never assessed a clause of this type."* (Lippi et al., 2019) - `pinc` = Privacy included: This tag identifies *"clauses stating that consumers consent to the privacy policy simply by using the service. Such clauses have been always considered potentially unfair"* (Drawzeski et al., 2021) - `all_topics` = an aggregate column containing all applicable topics combined *”We assumed that each type of clause could be classified as either clearly fair, or potentially unfair, or clearly unfair. In order to mark the different degrees of (un)fairness we appended a numeric value to each XML tag, with 1 meaning clearly fair, 2 potentially unfair, and 3 clearly unfair. Nested tags were used to annotate text segments relevant to more than one type of clause. 
With clauses covering multiple paragraphs, we chose to tag each paragraph separately, possibly with different degrees of (un)fairness.”* (Lippi et al., 2019) ### Data Splits No splits provided in the original paper. Joel Niklaus created the splits manually. The train split contains the 20 (80%) first companies in alphabetic order (*Booking, Dropbox, Electronic_Arts, Evernote, Facebook, Garmin, Google, Grindr, Linkedin, Mozilla, Pinterest, Quora, Ryanair, Skype, Skyscanner, Snap, Spotify, Terravision, Tinder, Tripadvisor*). The validation split contains the 2 (8%) companies *Tumblr* and *Uber*. The test split contains the 3 (12%) companies *Weebly*, *Yelp*, *Zynga*. There are two tasks possible for this dataset. #### Clause Topics By only considering the clause topic, we separated the clause topic from the fairness level classification. Thus, the label set could be reduced to just 9 classes. This dataset poses a multi-label multi-class sentence classification problem. The following label distribution shows the number of occurrences per label per split. `total occurrences` sums up the previous rows (number of clause topics per split). `split size` is the number of sentences per split. | clause topic | train | validation | test | |:----------------------|------------:|-----------------:|-----------:| | a | 117 | 6 | 21 | | ch | 308 | 45 | 53 | | cr | 155 | 4 | 44 | | j | 206 | 8 | 36 | | law | 178 | 8 | 26 | | ltd | 714 | 84 | 161 | | ter | 361 | 39 | 83 | | use | 185 | 14 | 32 | | pinc | 71 | 0 | 8 | | **total occurrences** | **2295** | **208** | **464** | | **split size** | **19942** | **1690** | **4297** | #### Unfairness Levels When predicting unfairness levels, all untagged sentences can be removed. This reduces the dataset size considerably. This dataset poses a single-label multi-class sentence classification problem. 
| unfairness_level | train | validation | test | |:---------------------------|------------:|-----------:|----------:| | untagged | 17868 | 1499 | 3880 | | potentially_unfair | 1560 | 142 | 291 | | clearly_unfair | 259 | 31 | 65 | | clearly_fair | 156 | 5 | 32 | | **total without untagged** | **1975** | **178** | **388** | | **total** | **19942** | **1690** | **4297** | ## Dataset Creation ### Curation Rationale The EU legislation is published in all official languages. This multilingualism comes with costs and challenges, such as limited cross-linguistical interpretability. The EU has refrained from regulating languages in which standard terms in consumer contracts should be drafted, allowing for differing approaches to emerge in various jurisdictions. Consumer protection authorities and non-governmental organizations in Europe tend to operate only in their respective languages. Therefore, consumer protection technologies are needed that are capable of dealing with multiple languages. The dataset at hand can be used for the automated detection of unfair clauses in ToS which, in most cases, are available in multiple languages. (Drawzeski et al., 2021) ### Source Data #### Initial Data Collection and Normalization *"The analysed ToS were retrieved from the [Claudette pre-existing corpus](http://claudette.eui.eu/ToS.zip), covering 100 English ToS (Lippi et al., 2019; Ruggeri et al., 2021). Such terms mainly concern popular digital services provided to consumers, including leading online platforms (such as search engines and social media). The predominant language of drafting of these ToS is English, with differing availability of corresponding ToS in other languages. 
To carry out the present study, the ultimate 25 ToS were selected on the basis of three main criteria: a) their availability in the four selected languages; b) the possibility of identifying a correspondence between the different versions, given their publication date; and c) the similarity of their structure (e.g. number of clauses, sections, etc.). To illustrate, while ToS in both German and Italian were identified for 63 out of the 100 ToS contained in the pre-existing Claudette training corpus, Polish versions were found for only 42 of these 63 ToS. Out of the 42 ToS available in the four languages, we selected those with the more closely corresponding versions based on criteria b) and c) above. Perfect correspondence across the 4 languages, however, could not be achieved for all 25 ToS."* (Drawzeski et al., 2021) #### Who are the source language producers? The source language producers are likely to be lawyers. ### Annotations #### Annotation process The dataset at hand is described by Drawzeski et al. (2021). The ToS of the dataset were retrieved from the pre-existing and mono-lingual (English) Claudette corpus which is described in (Lippi et al., 2019). Drawzeski et al. (2021) *“investigate methods for automatically transferring the annotations made on ToS in the context of the Claudette project onto the corresponding versions of the same documents in a target language, where such resources and expertise may be lacking.”* Therefore, in the following, we will present the annotation process for the Claudette corpus as described in (Lippi et al., 2019). *”The corpus consists of 50 relevant on-line consumer contracts, i.e., ToS of on-line platforms. Such contracts were selected among those offered by some of the major players in terms of number of users, global relevance, and time the service was established. 
Such contracts are usually quite detailed in content, are frequently updated to reflect changes both in the service and in the applicable law, and are often available in different versions for different jurisdictions. Given multiple versions of the same contract, we selected the most recent version available on-line to European customers. The mark-up was done in XML by three annotators, which jointly worked for the formulation of the annotation guidelines. The whole annotation process included several revisions, where some corrections were also suggested by an analysis of the false positives and false negatives retrieved by the initial machine learning prototypes. Due to the large interaction among the annotators during this process, in order to assess inter-annotation agreement, a further test set consisting of 10 additional contracts was tagged, following the final version of the guidelines. […] We produced an additional test set consisting of 10 more annotated contracts. Such documents were independently tagged by two distinct annotators who had carefully studied the guidelines. In order to quantitatively measure the inter-annotation agreement, for this test set we computed the standard Cohen’s 𝜅 metric […] which resulted to be 0.871 […].”* #### Who are the annotators? Not specified. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases It is very likely that some ToS in German, Italian and Polish are direct translations from English. Drawzeski et al. 
(2021) write: *“Although we could not assess this comprehensively in the present study, we infer from the wording of the ToS that at least in 9 out of 25 cases, German, Italian and Polish documents were indeed translations of the English originals.”*

### Other Known Limitations

Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition to that, differences with regard to dataset statistics as given in the respective papers can be expected. The reader is advised to have a look at the conversion script ```convert_to_hf_dataset.py``` in order to retrace the steps for converting the original dataset into the present jsonl-format. For further information on the original dataset structure, we refer to the bibliographical references and the original Github repositories and/or web pages provided in this dataset card.

## Additional Information

### Dataset Curators

The names of the original dataset curators and creators can be found in references given below, in the section *Citation Information*.
Additional changes were made by Joel Niklaus ([Email](mailto:[email protected]); [Github](https://github.com/joelniklaus)) and Veton Matoshi ([Email](mailto:[email protected]); [Github](https://github.com/kapllan)).
### Licensing Information cc-by-nc-2.5 ### Citation Information ``` @inproceedings{drawzeski-etal-2021-corpus, address = {Punta Cana, Dominican Republic}, author = {Drawzeski, Kasper and Galassi, Andrea and Jablonowska, Agnieszka and Lagioia, Francesca and Lippi, Marco and Micklitz, Hans Wolfgang and Sartor, Giovanni and Tagiuri, Giacomo and Torroni, Paolo}, booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2021}, doi = {10.18653/v1/2021.nllp-1.1}, month = {nov}, pages = {1--8}, publisher = {Association for Computational Linguistics}, title = {{A Corpus for Multilingual Analysis of Online Terms of Service}}, url = {https://aclanthology.org/2021.nllp-1.1}, year = {2021} } ``` ### Contributions Thanks to [@JoelNiklaus](https://github.com/joelniklaus) and [@kapllan](https://github.com/kapllan) for adding this dataset.
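As a brief usage sketch, the per-sentence jsonl layout described under *Data Fields* can be read with the Python standard library alone. The rows below are invented illustrations of the documented schema (`language`, `company`, `line_number`, `sentence`, `unfairness_level`), not real corpus entries; dropping `untagged` rows mirrors the preprocessing described for the unfairness-level task.

```python
import json

# Invented jsonl rows following the documented schema; not real corpus data.
raw = """\
{"language": "en", "company": "Booking", "line_number": 1, "sentence": "Welcome to our service.", "unfairness_level": "untagged"}
{"language": "en", "company": "Booking", "line_number": 2, "sentence": "We may terminate the service at any time.", "unfairness_level": "clearly_unfair"}
{"language": "de", "company": "Dropbox", "line_number": 5, "sentence": "Es gilt deutsches Recht.", "unfairness_level": "potentially_unfair"}
"""

records = [json.loads(line) for line in raw.splitlines()]

# For the unfairness-level task, untagged sentences are removed first.
tagged = [r for r in records if r["unfairness_level"] != "untagged"]

for r in tagged:
    print(r["language"], r["company"], r["line_number"], r["unfairness_level"])
# → en Booking 2 clearly_unfair
# → de Dropbox 5 potentially_unfair
```

The same filter applied to the real splits yields the "total without untagged" row counts given in the unfairness-level table.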
joelniklaus/online_terms_of_service
[ "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "annotations_creators:found", "annotations_creators:other", "language_creators:found", "multilinguality:multilingual", "size_categories:1K<n<10K", "source_datasets:original", "language:de", "language:en", "language:it", "language:pl", "license:other", "region:us" ]
2022-07-01T10:42:49+00:00
{"annotations_creators": ["found", "other"], "language_creators": ["found"], "language": ["de", "en", "it", "pl"], "license": ["other"], "multilinguality": ["multilingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "multi-label-classification"], "pretty_name": "A Corpus for Multilingual Analysis of Online Terms of Service"}
2022-09-22T12:45:42+00:00
[]
[ "de", "en", "it", "pl" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #task_ids-multi-label-classification #annotations_creators-found #annotations_creators-other #language_creators-found #multilinguality-multilingual #size_categories-1K<n<10K #source_datasets-original #language-German #language-English #language-Italian #language-Polish #license-other #region-us
Dataset Card for A Corpus for Multilingual Analysis of Online Terms of Service ============================================================================== Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: * Repository: URL * Paper: Drawzeski, K., Galassi, A., Jablonowska, A., Lagioia, F., Lippi, M., Micklitz, H. W., Sartor, G., Tagiuri, G., & Torroni, P. (2021). A Corpus for Multilingual Analysis of Online Terms of Service. Proceedings of the Natural Legal Language Processing Workshop 2021, 1–8. URL * Leaderboard: * Point of Contact: Joel Niklaus ### Dataset Summary *"We present the first annotated corpus for multilingual analysis of potentially unfair clauses in online Terms of Service [=ToS]. The data set comprises a total of 100 contracts, obtained from 25 documents annotated in four different languages: English, German, Italian, and Polish. For each contract, potentially unfair clauses for the consumer are annotated, for nine different unfairness categories."* (Drawzeski et al., 2021) ### Supported Tasks and Leaderboards The dataset can be used for multi-class multi-label text classification tasks, more specifically, for classifying unfair clauses in ToS. ### Languages English, German, Italian, and Polish. Dataset Structure ----------------- ### Data Instances The file format is jsonl and three data splits are present (train, validation and test). 
### Data Fields

The dataset contains the following fields:

* 'language': The language of the sentence/document.
* 'company': The company of the document.
* 'line\_number': The line number of the sentence in the document.
* 'sentence': The sentence to be classified.
* 'unfairness\_level': The unfairness level assigned to the sentence (if two clauses apply, the higher unfairness level is assigned here).

The documents have been annotated using nine tags that represent different categories of clause unfairness. These boolean tags are:

* 'a' = Arbitration: *”This clause requires or allows the parties to resolve their disputes through an arbitration process, before the case could go to court. It is therefore considered a kind of forum selection clause. However, such a clause may or may not specify that arbitration should occur within a specific jurisdiction. Clauses stipulating that the arbitration should (1) take place in a state other than the state of consumer’s residence and/or (2) be based not on law but on arbiter’s discretion were marked as clearly unfair.”* (Lippi et al., 2019)
* 'ch' = Unilateral change: *"This clause specifies the conditions under which the service provider could amend and modify the terms of service and/or the service itself. Such clauses were always considered as potentially unfair. This is because the ECJ has not yet issued a judgment in this regard, though the Annex to the Directive contains several examples supporting such a qualification."* (Lippi et al., 2019)
* 'cr' = Content removal: *"This gives the provider a right to modify/delete user’s content, including in-app purchases, and sometimes specifies the conditions under which the service provider may do so. As in the case of unilateral termination, clauses that indicate conditions for content removal were marked as potentially unfair, whereas clauses stipulating that the service provider may remove content in his full discretion, and/or at any time for any or no reasons and/or without notice nor possibility to retrieve the content were marked as clearly unfair."* (Lippi et al., 2019)
* 'j' = Jurisdiction: *"This type of clause stipulates what courts will have the competence to adjudicate disputes under the contract. Jurisdiction clauses giving consumers a right to bring disputes in their place of residence were marked as clearly fair, whereas clauses stating that any judicial proceeding takes a residence away (i.e. in a different city, different country) were marked as clearly unfair. This assessment is grounded in ECJ’s case law, see for example Oceano case number C-240/98."* (Lippi et al., 2019)
* 'law' = Choice of law: *"This clause specifies what law will govern the contract, meaning also what law will be applied in potential adjudication of a dispute arising under the contract. Clauses defining the applicable law as the law of the consumer’s country of residence were marked as clearly fair [...]"* (Lippi et al., 2019)
* 'ltd' = Limitation of liability: *"This clause stipulates that the duty to pay damages is limited or excluded, for certain kinds of losses and under certain conditions. Clauses that explicitly affirm non-excludable providers’ liabilities were marked as clearly fair."* (Lippi et al., 2019)
* 'ter' = Unilateral termination: *"This clause gives provider the right to suspend and/or terminate the service and/or the contract, and sometimes details the circumstances under which the provider claims to have a right to do so. Unilateral termination clauses that specify reasons for termination were marked as potentially unfair. Whereas clauses stipulating that the service provider may suspend or terminate the service at any time for any or no reasons and/or without notice were marked as clearly unfair."* (Lippi et al., 2019)
* 'use' = Contract by using: *"This clause stipulates that the consumer is bound by the terms of use of a specific service, simply by using the service, without even being required to mark that he or she has read and accepted them. We always marked such clauses as potentially unfair. The reason for this choice is that a good argument can be offered for these clauses to be unfair, because they originate an imbalance in rights and duties of the parties, but this argument has no decisive authoritative backing yet, since the ECJ has never assessed a clause of this type."* (Lippi et al., 2019)
* 'pinc' = Privacy included: This tag identifies *"clauses stating that consumers consent to the privacy policy simply by using the service. Such clauses have been always considered potentially unfair"* (Drawzeski et al., 2021)
* 'all\_topics' = an aggregate column containing all applicable topics combined

*”We assumed that each type of clause could be classified as either clearly fair, or potentially unfair, or clearly unfair. In order to mark the different degrees of (un)fairness we appended a numeric value to each XML tag, with 1 meaning clearly fair, 2 potentially unfair, and 3 clearly unfair. Nested tags were used to annotate text segments relevant to more than one type of clause. With clauses covering multiple paragraphs, we chose to tag each paragraph separately, possibly with different degrees of (un)fairness.”* (Lippi et al., 2019)

### Data Splits

No splits provided in the original paper.

Joel Niklaus created the splits manually.
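The tag-plus-degree scheme quoted above (a topic tag with an appended digit, where 1 = clearly fair, 2 = potentially unfair, 3 = clearly unfair) can be decoded mechanically. The sketch below assumes labels are serialized as topic code plus degree digit (e.g. `ter3`); that string format is an assumption made for illustration, not something the card specifies, so check it against the actual files:

```python
# Decode annotation labels of the (assumed) form "<topic><degree>", e.g. "ter3".
# Topic codes follow the nine tags listed above.

TOPICS = {
    "a": "arbitration",
    "ch": "unilateral change",
    "cr": "content removal",
    "j": "jurisdiction",
    "law": "choice of law",
    "ltd": "limitation of liability",
    "ter": "unilateral termination",
    "use": "contract by using",
    "pinc": "privacy included",
}
DEGREES = {1: "clearly fair", 2: "potentially unfair", 3: "clearly unfair"}

def decode_tag(tag: str) -> tuple[str, str]:
    """Split e.g. 'ter3' into ('unilateral termination', 'clearly unfair')."""
    topic, degree = tag.rstrip("123"), int(tag[-1])
    return TOPICS[topic], DEGREES[degree]
```

For the clause-topic task, only the first element of the decoded pair would be kept; for the unfairness-level task, only the second.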
The train split contains the 20 (80%) first companies in alphabetic order (*Booking, Dropbox, Electronic\_Arts, Evernote, Facebook, Garmin, Google, Grindr, Linkedin, Mozilla, Pinterest, Quora, Ryanair, Skype, Skyscanner, Snap, Spotify, Terravision, Tinder, Tripadvisor*). The validation split contains the 2 (8%) companies *Tumblr* and *Uber*. The test split contains the 3 (12%) companies *Weebly*, *Yelp*, *Zynga*.

There are two tasks possible for this dataset.

#### Clause Topics

By only considering the clause topic, we separated the clause topic from the fairness level classification. Thus, the label set could be reduced to just 9 classes. This dataset poses a multi-label multi-class sentence classification problem.

The following label distribution shows the number of occurrences per label per split. 'total occurrences' sums up the previous rows (number of clause topics per split). 'split size' is the number of sentences per split.

#### Unfairness Levels

When predicting unfairness levels, all untagged sentences can be removed. This reduces the dataset size considerably. This dataset poses a single-label multi-class sentence classification problem.

Dataset Creation
----------------

### Curation Rationale

The EU legislation is published in all official languages. This multilingualism comes with costs and challenges, such as limited cross-linguistical interpretability. The EU has refrained from regulating languages in which standard terms in consumer contracts should be drafted, allowing for differing approaches to emerge in various jurisdictions. Consumer protection authorities and non-governmental organizations in Europe tend to operate only in their respective languages. Therefore, consumer protection technologies are needed that are capable of dealing with multiple languages. The dataset at hand can be used for the automated detection of unfair clauses in ToS which, in most cases, are available in multiple languages.
(Drawzeski et al., 2021)

### Source Data

#### Initial Data Collection and Normalization

*"The analysed ToS were retrieved from the Claudette pre-existing corpus, covering 100 English ToS (Lippi et al., 2019; Ruggeri et al., 2021). Such terms mainly concern popular digital services provided to consumers, including leading online platforms (such as search engines and social media). The predominant language of drafting of these ToS is English, with differing availability of corresponding ToS in other languages. To carry out the present study, the ultimate 25 ToS were selected on the basis of three main criteria: a) their availability in the four selected languages; b) the possibility of identifying a correspondence between the different versions, given their publication date; and c) the similarity of their structure (e.g. number of clauses, sections, etc.). To illustrate, while ToS in both German and Italian were identified for 63 out of the 100 ToS contained in the pre-existing Claudette training corpus, Polish versions were found for only 42 of these 63 ToS. Out of the 42 ToS available in the four languages, we selected those with the more closely corresponding versions based on criteria b) and c) above. Perfect correspondence across the 4 languages, however, could not be achieved for all 25 ToS."* (Drawzeski et al., 2021)

#### Who are the source language producers?

The source language producers are likely to be lawyers.

### Annotations

#### Annotation process

The dataset at hand is described by Drawzeski et al. (2021). The ToS of the dataset were retrieved from the pre-existing and mono-lingual (English) Claudette corpus which is described in (Lippi et al., 2019). Drawzeski et al.
(2021) *“investigate methods for automatically transferring the annotations made on ToS in the context of the Claudette project onto the corresponding versions of the same documents in a target language, where such resources and expertise may be lacking.”*

Therefore, in the following, we will present the annotation process for the Claudette corpus as described in (Lippi et al., 2019).

*”The corpus consists of 50 relevant on-line consumer contracts, i.e., ToS of on-line platforms. Such contracts were selected among those offered by some of the major players in terms of number of users, global relevance, and time the service was established. Such contracts are usually quite detailed in content, are frequently updated to reflect changes both in the service and in the applicable law, and are often available in different versions for different jurisdictions. Given multiple versions of the same contract, we selected the most recent version available on-line to European customers. The mark-up was done in XML by three annotators, which jointly worked for the formulation of the annotation guidelines. The whole annotation process included several revisions, where some corrections were also suggested by an analysis of the false positives and false negatives retrieved by the initial machine learning prototypes. Due to the large interaction among the annotators during this process, in order to assess inter-annotation agreement, a further test set consisting of 10 additional contracts was tagged, following the final version of the guidelines. […] We produced an additional test set consisting of 10 more annotated contracts. Such documents were independently tagged by two distinct annotators who had carefully studied the guidelines. In order to quantitatively measure the inter-annotation agreement, for this test set we computed the standard Cohen’s 𝜅 metric […] which resulted to be 0.871 […].”*

#### Who are the annotators?

Not specified.
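The agreement figure quoted above is the standard Cohen's κ, which corrects raw percent agreement for agreement expected by chance. As a reference point, a minimal self-contained implementation for two annotators who labeled the same items:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two annotators over the same items.

    kappa = (p_observed - p_expected) / (1 - p_expected), where p_expected
    is the chance agreement implied by each annotator's label frequencies.
    Undefined (division by zero) in the degenerate case p_expected == 1.
    """
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

With scikit-learn installed, `sklearn.metrics.cohen_kappa_score` computes the same statistic.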
### Personal and Sensitive Information

Considerations for Using the Data
---------------------------------

### Social Impact of Dataset

### Discussion of Biases

It is very likely that some ToS in German, Italian and Polish are direct translations from English. Drawzeski et al. (2021) write: *“Although we could not assess this comprehensively in the present study, we infer from the wording of the ToS that at least in 9 out of 25 cases, German, Italian and Polish documents were indeed translations of the English originals.”*

### Other Known Limitations

Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition to that, differences with regard to dataset statistics as given in the respective papers can be expected. The reader is advised to have a look at the conversion script in order to retrace the steps for converting the original dataset into the present jsonl format. For further information on the original dataset structure, we refer to the bibliographical references and the original Github repositories and/or web pages provided in this dataset card.

Additional Information
----------------------

### Dataset Curators

The names of the original dataset curators and creators can be found in references given below, in the section *Citation Information*. Additional changes were made by Joel Niklaus (Email; Github) and Veton Matoshi (Email; Github).

### Licensing Information

cc-by-nc-2.5

### Contributions

Thanks to @JoelNiklaus and @kapllan for adding this dataset.
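Since the splits ship as jsonl with the fields listed in this card, loading and preparing the single-label unfairness-level task can be sketched in a few lines. The field names follow the card, while the sentinel value marking untagged sentences is an assumption to verify against the released files:

```python
import json

def load_split(path):
    """Read one jsonl split (one JSON object per line) into a list of dicts."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

def unfairness_level_examples(rows):
    """Keep only tagged sentences for the single-label unfairness-level task.

    NOTE: the sentinel used for untagged sentences here is an assumption;
    check it against the actual files before relying on this filter.
    """
    untagged = (None, "", "none")
    return [r for r in rows if r.get("unfairness_level") not in untagged]
```

For the clause-topic task, one would instead keep all sentences and treat the topic annotations as a multi-label target.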
[ "### Dataset Summary\n\n\n*\"We present the first annotated corpus for multilingual analysis of potentially unfair clauses in online Terms of\nService [=ToS]. The data set comprises a total of 100 contracts, obtained from 25 documents annotated in four different\nlanguages: English, German, Italian, and Polish. For each contract, potentially unfair clauses for the consumer are\nannotated, for nine different unfairness categories.\"* (Drawzeski et al., 2021)", "### Supported Tasks and Leaderboards\n\n\nThe dataset can be used for multi-class multi-label text classification tasks, more specifically, for classifying unfair clauses in\nToS.", "### Languages\n\n\nEnglish, German, Italian, and Polish.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThe file format is jsonl and three data splits are present (train, validation and test).", "### Data Fields\n\n\nThe dataset contains the following fields:\n\n\n* 'language': The language of the sentence/document.\n* 'company': The company of the document.\n* 'line\\_number': The line number of the sentence in the document.\n* 'sentence': The sentence to be classified.\n* 'unfairness\\_level': The unfairness level assigned to the sentence (if two clauses apply, the higher unfairness level is assigned here).\n\n\nThe documents have been annotated using nine tags that represent different categories of clause unfairness. These boolean tags are:\n\n\n* 'a' = Arbitration: *”This clause requires or allows the parties to resolve their disputes through an arbitration process, before the case could go to court. It is therefore considered a kind of forum selection clause. However, such a clause may or may not specify that arbitration should occur within a specific jurisdiction. 
Clauses stipulating that the arbitration should (1) take place in a state other than the state of consumer’s residence and/or (2) be based not on law but on arbiter’s discretion were marked as clearly unfair.”* (Lippi et al., 2019)\n* 'ch' = Unilateral change: *\"This clause specifies the conditions under which the service provider could amend and modify the terms of service and/or the service itself. Such clauses were always considered as potentially unfair. This is because the ECJ has not yet issued a judgment in this regard, though the Annex to the Direc- tive contains several examples supporting such a qualification.\"* (Lippi et al., 2019)\n* 'cr' = Content removal : *\"This gives the provider a right to modify/delete user’s content, including in-app purchases, and sometimes specifies the conditions under which the service provider may do so. As in the case of unilateral termination, clauses that indicate conditions for content removal were marked as potentially unfair, whereas clauses stipulating that the service provider may remove content in his full discretion, and/or at any time for any or no reasons and/or without notice nor possibility to retrieve the content were marked as clearly unfair.\"* (Lippi et al., 2019)\n* 'j' = Jurisdiction : *\"This type of clause stipulates what courts will have the competence to adjudicate disputes under the contract. Jurisdiction clauses giving consumers a right to bring disputes in their place of residence were marked as clearly fair, whereas clauses stating that any judicial proceeding takes a residence away (i.e. in a different city, different country) were marked as clearly unfair. This assessment is grounded in ECJ’s case law, see for example Oceano case number C-240/98.\"* (Lippi et al., 2019)\n* 'law' = Choice of law: *\"This clause specifies what law will govern the contract, meaning also what law will be applied in potential adjudication of a dispute arising under the contract. 
Clauses defining the applicable law as the law of the consumer’s country of residence were marked as clearly fair [...]\"* (Lippi et al., 2019)\n* 'ltd' = Limitation of liability: *\"This clause stipulates that the duty to pay damages is limited or excluded, for certain kinds of losses and under certain conditions. Clauses that explicitly affirm non-excludable providers’ liabilities were marked as clearly fair.\"* (Lippi et al., 2019)\n* 'ter' = Unilateral termination: *\"This clause gives provider the right to suspend and/or terminate the service and/or the contract, and sometimes details the circumstances under which the provider claims to have a right to do so. Unilateral termination clauses that specify reasons for termination were marked as potentially unfair. Whereas clauses stipulating that the service provider may suspend or terminate the service at any time for any or no reasons and/or without notice were marked as clearly unfair.\"* (Lippi et al., 2019)\n* 'use' = Contract by using: *\"This clause stipulates that the consumer is bound by the terms of use of a specific service, simply by using the service, without even being required to mark that he or she has read and accepted them. We always marked such clauses as potentially unfair. The reason for this choice is that a good argument can be offered for these clauses to be unfair, because they originate an imbalance in rights and duties of the parties, but this argument has no decisive authoritative backing yet, since the ECJ has never assessed a clause of this type.\"* (Lippi et al., 2019)\n* 'pinc' = Privacy included: This tag identifies *\"clauses stating that consumers consent to the privacy policy simply by using the service. 
Such clauses have been always considered potentially unfair\"* (Drawzeski et al., 2021)\n* 'all\\_topics' = an aggregate column containing all applicable topics combined\n\n\n*”We assumed that each type of clause could be classified as either clearly fair, or potentially unfair, or clearly unfair. In order to mark the different degrees of (un)fairness we appended a numeric value to each XML tag, with 1 meaning clearly fair, 2 potentially unfair, and 3 clearly unfair. Nested tags were used to annotate text segments relevant to more than one type of clause. With clauses covering multiple paragraphs, we chose to tag each paragraph separately, possibly with different degrees of (un)fairness.”* (Lippi et al., 2019)", "### Data Splits\n\n\nNo splits provided in the original paper.\n\n\nJoel Niklaus created the splits manually. The train split contains the 20 (80%) first companies in alphabetic order (*Booking, Dropbox, Electronic\\_Arts, Evernote, Facebook, Garmin, Google, Grindr, Linkedin, Mozilla,\nPinterest, Quora, Ryanair, Skype, Skyscanner, Snap, Spotify, Terravision, Tinder, Tripadvisor*). The\nvalidation split contains the 2 (8%) companies *Tumblr* and *Uber*. The test split contains the 3 (12%) companies *Weebly*,\n*Yelp*, *Zynga*.\n\n\nThere are two tasks possible for this dataset.", "#### Clause Topics\n\n\nBy only considering the clause topic, we separated the clause topic from the fairness level classification. Thus, the label set could be reduced to just 9 classes.\nThis dataset poses a multi-label multi-class sentence classification problem.\n\n\nThe following label distribution shows the number of occurrences per label per split. 'total occurrences' sums up the previous rows (number of clause topics per split). 'split size' is the number of sentences per split.", "#### Unfairness Levels\n\n\nWhen predicting unfairness levels, all untagged sentences can be removed. 
This reduces the dataset size considerably.\nThis dataset poses a single-label multi-class sentence classification problem.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe EU legislation is published in all official languages. This multilingualism comes with costs and challenges, such as limited cross-linguistical interpretability. The EU has refrained from regulating languages in which standard terms in consumer contracts should be drafted, allowing for differing approaches to emerge in various jurisdictions. Consumer protection authorities and non-governmental organizations in Europe tend to operate only in their respective languages. Therefore, consumer protection technologies are needed that are capable of dealing with multiple languages. The dataset at hand can be used for the automated detection of unfair clauses in ToS which, in most cases, are available in multiple languages. (Drawzeski et al., 2021)", "### Source Data", "#### Initial Data Collection and Normalization\n\n\n*\"The analysed ToS were retrieved from the Claudette pre-existing corpus, covering 100 English ToS (Lippi et al., 2019; Ruggeri et al., 2021). Such terms mainly concern popular digital services provided to consumers, including leading online platforms (such as search engines and social media). The predominant language of drafting of these ToS is English, with differing availability of corresponding ToS in other languages. To carry out the present study, the ultimate 25 ToS were selected on the basis of three main criteria: a) their availability in the four selected languages; b) the possibility of identifying a correspondence between the different versions, given their publication date; and c) the similarity of their structure (e.g. number of clauses, sections, etc.). 
To illustrate, while ToS in both German and Italian were identified for 63 out of the 100 ToS contained in the pre-existing Claudette training corpus, Polish versions were found for only 42 of these 63 ToS. Out of the 42 ToS available in the four languages, we selected those with the more closely corresponding versions based on criteria b) and c) above. Perfect correspondence across the 4 languages, however, could not be achieved for all 25 ToS.\"* (Drawzeski et al., 2021)", "#### Who are the source language producers?\n\n\nThe source language producers are likely to be lawyers.", "### Annotations", "#### Annotation process\n\n\nThe dataset at hand is described by Drawzeski et al. (2021). The ToS of the dataset were retrieved from the pre-existing\nand mono-lingual (English) Claudette corpus which is described in (Lippi et al., 2019). Drawzeski et al. (2021) *“investigate methods for automatically transferring the annotations made on ToS in the context of the Claudette project\nonto the corresponding versions of the same documents in a target language, where such resources and expertise may be\nlacking.”*\n\n\nTherefore, in the following, we will present the annotation process for the Claudette corpus as described in (Lippi et\nal., 2019).\n\n\n*”The corpus consists of 50 relevant on-line consumer contracts, i.e., ToS of on-line platforms. Such contracts were\nselected among those offered by some of the major players in terms of number of users, global relevance, and time the\nservice was established. Such contracts are usually quite detailed in content, are frequently updated to reflect changes\nboth in the service and in the applicable law, and are often available in different versions for different\njurisdictions. Given multiple versions of the same contract, we selected the most recent version available on-line to\nEuropean customers. The mark-up was done in XML by three annotators, which jointly worked for the formulation of the\nannotation guidelines. 
The whole annotation process included several revisions, where some corrections were also\nsuggested by an analysis of the false positives and false negatives retrieved by the initial machine learning\nprototypes. Due to the large interaction among the annotators during this process, in order to assess inter-annotation\nagreement, a further test set consisting of 10 additional contracts was tagged, following the final version of the\nguidelines. […] We produced an additional test set consisting of 10 more annotated contracts. Such documents were\nindependently tagged by two distinct annotators who had carefully studied the guidelines. In order to quantitatively\nmeasure the inter-annotation agreement, for this test set we computed the standard Cohen’s 𝜅 metric […] which resulted\nto be 0.871 […].”*", "#### Who are the annotators?\n\n\nNot specified.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases\n\n\nIt is very likely that some ToS in German, Italian and Polish are direct translations from English. Drawzeski et al. (2021) write: *“Although we could not assess this comprehensively in the present study, we infer from the wording of the ToS that at least in 9 out of 25 cases, German, Italian and Polish documents were indeed translations of the English originals.”*", "### Other Known Limitations\n\n\nNote that the information given in this dataset card refer to the dataset version as provided by Joel Niklaus and Veton Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the dataset at hand, including the folder structure, may differ considerably from the original dataset. 
In addition to that, differences with regard to dataset statistics as give in the respective papers can be expected. The reader is advised to have a look at the conversion script in order to retrace the steps for converting the original dataset into the present jsonl-format. For further information on the original dataset structure, we refer to the bibliographical references and the original Github repositories and/or web pages provided in this dataset card.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe names of the original dataset curators and creators can be found in references given below, in the section *Citation Information*.\nAdditional changes were made by Joel Niklaus (Email; Github) and Veton Matoshi (Email; Github).", "### Licensing Information\n\n\ncc-by-nc-2.5", "### Contributions\n\n\nThanks to @JoelNiklaus and @kapllan for adding this\ndataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #task_ids-multi-label-classification #annotations_creators-found #annotations_creators-other #language_creators-found #multilinguality-multilingual #size_categories-1K<n<10K #source_datasets-original #language-German #language-English #language-Italian #language-Polish #license-other #region-us \n", "### Dataset Summary\n\n\n*\"We present the first annotated corpus for multilingual analysis of potentially unfair clauses in online Terms of\nService [=ToS]. The data set comprises a total of 100 contracts, obtained from 25 documents annotated in four different\nlanguages: English, German, Italian, and Polish. For each contract, potentially unfair clauses for the consumer are\nannotated, for nine different unfairness categories.\"* (Drawzeski et al., 2021)", "### Supported Tasks and Leaderboards\n\n\nThe dataset can be used for multi-class multi-label text classification tasks, more specifically, for classifying unfair clauses in\nToS.", "### Languages\n\n\nEnglish, German, Italian, and Polish.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThe file format is jsonl and three data splits are present (train, validation and test).", "### Data Fields\n\n\nThe dataset contains the following fields:\n\n\n* 'language': The language of the sentence/document.\n* 'company': The company of the document.\n* 'line\\_number': The line number of the sentence in the document.\n* 'sentence': The sentence to be classified.\n* 'unfairness\\_level': The unfairness level assigned to the sentence (if two clauses apply, the higher unfairness level is assigned here).\n\n\nThe documents have been annotated using nine tags that represent different categories of clause unfairness. These boolean tags are:\n\n\n* 'a' = Arbitration: *”This clause requires or allows the parties to resolve their disputes through an arbitration process, before the case could go to court. 
It is therefore considered a kind of forum selection clause. However, such a clause may or may not specify that arbitration should occur within a specific jurisdiction. Clauses stipulating that the arbitration should (1) take place in a state other than the state of consumer’s residence and/or (2) be based not on law but on arbiter’s discretion were marked as clearly unfair.”* (Lippi et al., 2019)\n* 'ch' = Unilateral change: *\"This clause specifies the conditions under which the service provider could amend and modify the terms of service and/or the service itself. Such clauses were always considered as potentially unfair. This is because the ECJ has not yet issued a judgment in this regard, though the Annex to the Direc- tive contains several examples supporting such a qualification.\"* (Lippi et al., 2019)\n* 'cr' = Content removal : *\"This gives the provider a right to modify/delete user’s content, including in-app purchases, and sometimes specifies the conditions under which the service provider may do so. As in the case of unilateral termination, clauses that indicate conditions for content removal were marked as potentially unfair, whereas clauses stipulating that the service provider may remove content in his full discretion, and/or at any time for any or no reasons and/or without notice nor possibility to retrieve the content were marked as clearly unfair.\"* (Lippi et al., 2019)\n* 'j' = Jurisdiction : *\"This type of clause stipulates what courts will have the competence to adjudicate disputes under the contract. Jurisdiction clauses giving consumers a right to bring disputes in their place of residence were marked as clearly fair, whereas clauses stating that any judicial proceeding takes a residence away (i.e. in a different city, different country) were marked as clearly unfair. 
This assessment is grounded in ECJ’s case law, see for example Oceano case number C-240/98.\"* (Lippi et al., 2019)\n* 'law' = Choice of law: *\"This clause specifies what law will govern the contract, meaning also what law will be applied in potential adjudication of a dispute arising under the contract. Clauses defining the applicable law as the law of the consumer’s country of residence were marked as clearly fair [...]\"* (Lippi et al., 2019)\n* 'ltd' = Limitation of liability: *\"This clause stipulates that the duty to pay damages is limited or excluded, for certain kinds of losses and under certain conditions. Clauses that explicitly affirm non-excludable providers’ liabilities were marked as clearly fair.\"* (Lippi et al., 2019)\n* 'ter' = Unilateral termination: *\"This clause gives provider the right to suspend and/or terminate the service and/or the contract, and sometimes details the circumstances under which the provider claims to have a right to do so. Unilateral termination clauses that specify reasons for termination were marked as potentially unfair. Whereas clauses stipulating that the service provider may suspend or terminate the service at any time for any or no reasons and/or without notice were marked as clearly unfair.\"* (Lippi et al., 2019)\n* 'use' = Contract by using: *\"This clause stipulates that the consumer is bound by the terms of use of a specific service, simply by using the service, without even being required to mark that he or she has read and accepted them. We always marked such clauses as potentially unfair. 
The reason for this choice is that a good argument can be offered for these clauses to be unfair, because they originate an imbalance in rights and duties of the parties, but this argument has no decisive authoritative backing yet, since the ECJ has never assessed a clause of this type.\"* (Lippi et al., 2019)\n* 'pinc' = Privacy included: This tag identifies *\"clauses stating that consumers consent to the privacy policy simply by using the service. Such clauses have been always considered potentially unfair\"* (Drawzeski et al., 2021)\n* 'all\\_topics' = an aggregate column containing all applicable topics combined\n\n\n*”We assumed that each type of clause could be classified as either clearly fair, or potentially unfair, or clearly unfair. In order to mark the different degrees of (un)fairness we appended a numeric value to each XML tag, with 1 meaning clearly fair, 2 potentially unfair, and 3 clearly unfair. Nested tags were used to annotate text segments relevant to more than one type of clause. With clauses covering multiple paragraphs, we chose to tag each paragraph separately, possibly with different degrees of (un)fairness.”* (Lippi et al., 2019)", "### Data Splits\n\n\nNo splits were provided in the original paper.\n\n\nJoel Niklaus created the splits manually. The train split contains the first 20 companies (80%) in alphabetical order (*Booking, Dropbox, Electronic\\_Arts, Evernote, Facebook, Garmin, Google, Grindr, Linkedin, Mozilla,\nPinterest, Quora, Ryanair, Skype, Skyscanner, Snap, Spotify, Terravision, Tinder, Tripadvisor*). The\nvalidation split contains the 2 (8%) companies *Tumblr* and *Uber*. The test split contains the 3 (12%) companies *Weebly*,\n*Yelp*, *Zynga*.\n\n\nTwo tasks are possible for this dataset.", "#### Clause Topics\n\n\nBy only considering the clause topic, we separated the clause topic from the fairness level classification.
Thus, the label set could be reduced to just 9 classes.\nThis dataset poses a multi-label multi-class sentence classification problem.\n\n\nThe following label distribution shows the number of occurrences per label per split. 'total occurrences' sums up the previous rows (number of clause topics per split). 'split size' is the number of sentences per split.", "#### Unfairness Levels\n\n\nWhen predicting unfairness levels, all untagged sentences can be removed. This reduces the dataset size considerably.\nThis dataset poses a single-label multi-class sentence classification problem.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe EU legislation is published in all official languages. This multilingualism comes with costs and challenges, such as limited cross-linguistical interpretability. The EU has refrained from regulating languages in which standard terms in consumer contracts should be drafted, allowing for differing approaches to emerge in various jurisdictions. Consumer protection authorities and non-governmental organizations in Europe tend to operate only in their respective languages. Therefore, consumer protection technologies are needed that are capable of dealing with multiple languages. The dataset at hand can be used for the automated detection of unfair clauses in ToS which, in most cases, are available in multiple languages. (Drawzeski et al., 2021)", "### Source Data", "#### Initial Data Collection and Normalization\n\n\n*\"The analysed ToS were retrieved from the Claudette pre-existing corpus, covering 100 English ToS (Lippi et al., 2019; Ruggeri et al., 2021). Such terms mainly concern popular digital services provided to consumers, including leading online platforms (such as search engines and social media). The predominant language of drafting of these ToS is English, with differing availability of corresponding ToS in other languages. 
To carry out the present study, the ultimate 25 ToS were selected on the basis of three main criteria: a) their availability in the four selected languages; b) the possibility of identifying a correspondence between the different versions, given their publication date; and c) the similarity of their structure (e.g. number of clauses, sections, etc.). To illustrate, while ToS in both German and Italian were identified for 63 out of the 100 ToS contained in the pre-existing Claudette training corpus, Polish versions were found for only 42 of these 63 ToS. Out of the 42 ToS available in the four languages, we selected those with the more closely corresponding versions based on criteria b) and c) above. Perfect correspondence across the 4 languages, however, could not be achieved for all 25 ToS.\"* (Drawzeski et al., 2021)", "#### Who are the source language producers?\n\n\nThe source language producers are likely to be lawyers.", "### Annotations", "#### Annotation process\n\n\nThe dataset at hand is described by Drawzeski et al. (2021). The ToS of the dataset were retrieved from the pre-existing\nand mono-lingual (English) Claudette corpus which is described in (Lippi et al., 2019). Drawzeski et al. (2021) *“investigate methods for automatically transferring the annotations made on ToS in the context of the Claudette project\nonto the corresponding versions of the same documents in a target language, where such resources and expertise may be\nlacking.”*\n\n\nTherefore, in the following, we will present the annotation process for the Claudette corpus as described in (Lippi et\nal., 2019).\n\n\n*”The corpus consists of 50 relevant on-line consumer contracts, i.e., ToS of on-line platforms. Such contracts were\nselected among those offered by some of the major players in terms of number of users, global relevance, and time the\nservice was established. 
Such contracts are usually quite detailed in content, are frequently updated to reflect changes\nboth in the service and in the applicable law, and are often available in different versions for different\njurisdictions. Given multiple versions of the same contract, we selected the most recent version available on-line to\nEuropean customers. The mark-up was done in XML by three annotators, which jointly worked for the formulation of the\nannotation guidelines. The whole annotation process included several revisions, where some corrections were also\nsuggested by an analysis of the false positives and false negatives retrieved by the initial machine learning\nprototypes. Due to the large interaction among the annotators during this process, in order to assess inter-annotation\nagreement, a further test set consisting of 10 additional contracts was tagged, following the final version of the\nguidelines. […] We produced an additional test set consisting of 10 more annotated contracts. Such documents were\nindependently tagged by two distinct annotators who had carefully studied the guidelines. In order to quantitatively\nmeasure the inter-annotation agreement, for this test set we computed the standard Cohen’s 𝜅 metric […] which resulted\nto be 0.871 […].”*", "#### Who are the annotators?\n\n\nNot specified.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases\n\n\nIt is very likely that some ToS in German, Italian and Polish are direct translations from English. Drawzeski et al. 
(2021) write: *“Although we could not assess this comprehensively in the present study, we infer from the wording of the ToS that at least in 9 out of 25 cases, German, Italian and Polish documents were indeed translations of the English originals.”*", "### Other Known Limitations\n\n\nNote that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition to that, differences with regard to dataset statistics as given in the respective papers can be expected. The reader is advised to have a look at the conversion script in order to retrace the steps for converting the original dataset into the present jsonl-format. For further information on the original dataset structure, we refer to the bibliographical references and the original Github repositories and/or web pages provided in this dataset card.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe names of the original dataset curators and creators can be found in references given below, in the section *Citation Information*.\nAdditional changes were made by Joel Niklaus (Email; Github) and Veton Matoshi (Email; Github).", "### Licensing Information\n\n\ncc-by-nc-2.5", "### Contributions\n\n\nThanks to @JoelNiklaus and @kapllan for adding this\ndataset." ]
63eb40318a0cb1e20a4bbf816e095d9e28af8094
# Dataset Card for SPGISpeech

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
  - [Terms of Usage](#terms-of-usage)

## Dataset Description

- **Homepage:** https://datasets.kensho.com/datasets/spgispeech
- **Repository:**
- **Paper:** https://arxiv.org/abs/2104.02014
- **Leaderboard:**
- **Point of Contact:** [[email protected]](mailto:[email protected])

## Dataset Description

SPGISpeech (rhymes with “squeegee-speech”) is a large-scale transcription dataset, freely available for academic research. SPGISpeech is a corpus of 5,000 hours of professionally-transcribed financial audio. SPGISpeech contains a broad cross-section of L1 and L2 English accents, strongly varying audio quality, and both spontaneous and narrated speech. The transcripts have each been cross-checked by multiple professional editors for high accuracy and are fully formatted, including capitalization, punctuation, and denormalization of non-standard words.
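Because the transcripts are fully formatted, comparing them against unformatted ASR output usually involves normalizing both sides first. A minimal sketch of such a normalizer (an illustration only; not a convention used by Kensho or any particular benchmark):

```python
import string

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace.

    A toy normalizer for comparing formatted transcripts against
    unformatted ASR output; real evaluations may treat casing,
    punctuation, and number formatting differently.
    """
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())

# Using the opening words of the example transcript below:
print(normalize("This is proving to be true, and through focused execution"))
# -> "this is proving to be true and through focused execution"
```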
SPGISpeech consists of 5,000 hours of recorded company earnings calls and their respective transcriptions. The original calls were split into slices ranging from 5 to 15 seconds in length to allow easy training for speech recognition systems. Calls represent a broad cross-section of international business English; SPGISpeech contains approximately 50,000 speakers, one of the largest numbers of any speech corpus, and offers a variety of L1 and L2 English accents. The format of each WAV file is single channel, 16kHz, 16 bit audio.

### Example Usage

The training split has several configurations of various sizes: S, M, L. See the [Data Splits](#data-splits) section for more information. To download the S configuration:

```python
from datasets import load_dataset

spgi = load_dataset("kensho/spgispeech", "S", use_auth_token=True)

# see structure
print(spgi)

# load audio sample on the fly
audio_input = spgi["train"][0]["audio"]  # first decoded audio sample
transcription = spgi["train"][0]["transcript"]  # first transcription
```

It is possible to download only the development or test data:

```python
spgi_dev = load_dataset("kensho/spgispeech", "dev", use_auth_token=True)
spgi_test = load_dataset("kensho/spgispeech", "test", use_auth_token=True)
```

### Supported Tasks and Leaderboards

- `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).

### Languages

SPGISpeech contains audio and transcription data in business English and offers a variety of L1 and L2 accents.
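The word error rate mentioned above is the word-level edit distance between reference and hypothesis, divided by the reference length. A minimal, dependency-free sketch (in practice, packages such as `jiwer` or Hugging Face `evaluate` are typically used instead):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Levenshtein distance over words, normalized by the (non-empty) reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] holds the edit distance between the first i-1 reference
    # words and the first j hypothesis words
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[-1] / len(ref)

# One deleted word out of six reference words:
print(word_error_rate("this is proving to be true", "this is proving be true"))
# -> 0.16666666666666666
```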
## Dataset Structure

### Data Instances

```python
{
  'wav_filename': '32bcf9c9dc707fb61a04290e296f31eb/99.wav',
  'audio': {
    'path': '/home/user/.cache/huggingface/datasets/downloads/extracted/c7082e2bd5b.../dev_part_2/32bcf9c9dc707fb61a04290e296f31eb/99.wav',
    'array': array([-0.00039673, -0.00057983, -0.00057983, ..., -0.0007019, -0.00027466, 0.00021362], dtype=float32),
    'sampling_rate': 16000
  },
  'wav_filesize': 292844,
  'transcript': 'This is proving to be true, and through focused execution we are on track to exceed our targeted savings in 2017. As a reminder,'
}
```

### Data Fields

* wav_filename (string) - audio filename (includes parent directory).
* audio (Audio feature) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
* wav_filesize (int) - size of the file in bytes.
* transcript (string) - transcription of the file.

### Data Splits

The dataset has three splits: train, evaluation (dev) and test. The train split has three configurations of various sizes: S, M, L. Larger subsets are supersets of smaller subsets, e.g., the L subset contains all the data from the M subset.

#### Transcribed Subsets Size

| Subset | Size   |
|:------:|:------:|
| S      | 22 GB  |
| M      | 107 GB |
| L      | 530 GB |
| dev    | 11 GB  |
| test   | 11 GB  |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

The dataset contains S&P Global company earnings calls.

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

English speakers with a diverse selection of accents, including non-native ones (L2), producing both spontaneous and narrated speech.
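Since every file is single-channel, 16 kHz, 16-bit PCM, the `wav_filesize` field can be converted into an approximate clip duration. A sketch assuming a standard 44-byte RIFF/WAV header (an assumption; headers can be longer when extra chunks are present):

```python
SAMPLE_RATE = 16_000   # Hz, fixed for SPGISpeech
BYTES_PER_SAMPLE = 2   # 16-bit mono PCM
HEADER_BYTES = 44      # canonical RIFF/WAV header size (assumed)

def approx_duration_s(wav_filesize: int) -> float:
    """Approximate clip duration in seconds from the WAV file size in bytes."""
    payload = max(wav_filesize - HEADER_BYTES, 0)
    return payload / (SAMPLE_RATE * BYTES_PER_SAMPLE)

# The example instance above reports wav_filesize == 292844:
print(round(approx_duration_s(292_844), 2))  # -> 9.15 (seconds)
```

This lands inside the 5-15 second slice range described for the corpus, which is a quick sanity check on the field's meaning.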
### Annotations

#### Annotation process

Data is orthographically transcribed according to a professional style guide detailing conventions for capitalization, punctuation, denormalization of non-standard words and transcription of disfluencies in spontaneous speech. The transcripts have each been cross-checked by multiple professional editors for high accuracy and are fully formatted.

Full earnings calls last 30-60 minutes and are typically transcribed as whole units, without internal timestamps. In order to produce short audio slices suitable for STT training, the files were segmented with [Gentle](https://lowerquality.com/gentle/), a double-pass forced aligner, with the beginning and end of each slice of audio imputed by voice activity detection with [py-webrtc](https://github.com/wiseman/py-webrtcvad).

#### Who are the annotators?

Earnings calls are manually transcribed by S&P Global, Inc.

### Personal and Sensitive Information

Though earnings calls are public, we nevertheless identified full names with the spaCy en core web large model. We withheld samples containing names that appeared fewer than ten times (7% of total). Full names appearing ten times or more in the data were considered to be public figures and were retained. This necessarily incomplete approach to named entity recognition was complemented with randomized manual spot checks, which uncovered no false negatives missed by the automated approach.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

### Citation Information

Please cite this paper:

```bibtex
@ARTICLE{2021arXiv210402014O,
       author = {{O'Neill}, Patrick K. and {Lavrukhin}, Vitaly and {Majumdar}, Somshubra and {Noroozi}, Vahid and {Zhang}, Yuekai and {Kuchaiev}, Oleksii and {Balam}, Jagadeesh and {Dovzhenko}, Yuliya and {Freyberg}, Keenan and {Shulman}, Michael D. and {Ginsburg}, Boris and {Watanabe}, Shinji and {Kucsko}, Georg},
        title = "{SPGISpeech: 5,000 hours of transcribed financial audio for fully formatted end-to-end speech recognition}",
      journal = {arXiv e-prints},
     keywords = {Computer Science - Computation and Language, Electrical Engineering and Systems Science - Audio and Speech Processing},
         year = 2021,
        month = apr,
          eid = {arXiv:2104.02014},
        pages = {arXiv:2104.02014},
archivePrefix = {arXiv},
       eprint = {2104.02014},
 primaryClass = {cs.CL},
       adsurl = {https://ui.adsabs.harvard.edu/abs/2021arXiv210402014O},
      adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
```

### Contributions

Thanks to [@sanchit-gandhi](https://github.com/sanchit-gandhi), [@patrickvonplaten](https://github.com/patrickvonplaten), and [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.

## Terms of Usage

Your access to and use of the information in the Kensho Transcript Dataset (the “Content”), which is provided by Kensho Technologies, LLC, a subsidiary of S&P Global, Inc., (“Kensho”), shall be governed by the following terms and conditions of usage (“Terms of Usage”). The Content may be accessed only by persons who have been authorized to use this Content pursuant to their acceptance and acknowledgement of these Terms of Usage (in each case, an “Authorized User”). By providing your electronic signature at the end of these Terms of Usage, you represent that you are an Authorized User and that you accept these Terms of Usage and agree to be bound by them. If you do not wish to be bound by these Terms of Usage, you must not use this Content. PLEASE READ THESE TERMS OF USAGE CAREFULLY BEFORE USING THIS CONTENT.
Section 1 – THE CONTENT

1.1 The Content is provided for academic research purposes and internal use only and must not be used to:

- assemble or create a database;
- construct or facilitate the construction of products which compete with the Content;
- identify or attempt to identify or contact any individual; or
- link to another dataset.

The Content, which is comprised of public earnings calls in audio and corresponding text format, and all accompanying derived products is proprietary to Kensho and its third-party content providers. You shall not modify the Content; create derivative works based on the Content, rewrite or reprocess the Content except as expressly provided herein. You must not publish, display, transfer or redistribute the Content or any portions or derivative versions thereof to anyone without prior written consent from Kensho. You agree not to contact Kensho or its affiliates concerning individuals whose information may be included in the Content.

1.2 Disclaimer. Content to which you are provided access, either directly or indirectly, from or on this Content will not have been reviewed or monitored by Kensho, and Kensho cannot and does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any such content. The Content is provided for your convenience only and is not a republication or reconfirmation of the opinion or information contained therein. The provision of the Content is without any obligation on the part of Kensho or its third-party content providers to review such or any liability or responsibility arising out of your use thereof. Kensho does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any Content and shall not be liable for any errors, delays, or actions taken in reliance on information.
In addition, the Content speaks only as of the date issued and is based on conference calls that may contain projections of other forward-looking statements. You should not rely on the Content as expressing Kensho’s opinion or as representing current information. None of Kensho or the third-party content providers has undertaken, and do not undertake any duty to update any Content or otherwise advise you of any changes in the Content. 1.3 Ownership of Third-Party Content. You acknowledge that all proprietary rights in the Content that are owned by Kensho or third party content providers shall remain the property of Kensho or such third party content providers, and you shall have no right or interest in such third party content except the rights to use such third party content in accordance with these Terms of Usage. Any additional rights not granted herein shall require a separate, direct agreement with Kensho. You acknowledge that the Content and third party content as compiled, prepared, selected and arranged by Kensho or its third party content providers constitutes an expenditure of substantial time, effort and money by Kensho and its third party content providers and constitutes valuable commercial property and/or trade secrets of Kensho and such third party content providers. Kensho retains all rights and remedies afforded under the copyright, trademark, service mark, patent and other laws of the United States and the States thereof, including without limitation any laws designed to protect proprietary or confidential information. You agree that you will not remove or modify any copyright notice, disclosures, disclaimers or other notification or trade name or marks of Kensho or the third party content providers that may appear in the Content or third party content and that any permitted reproduction and/or distribution of the Content or third party content shall contain such notices and/or marks as they appear in the Content or third party content. 
You may not use Kensho’s or the third-party content providers’ name or trademarks without the prior written consent of Kensho or such third-party content providers. Apart from the rights granted hereunder, no conveyance of ownership, right, title or interest is intended herein. Any additional rights require a separate agreement with Kensho. 1.4 Posted Guidelines. In addition to these Terms of Usage, when using this Content, you shall be subject to and agree to follow any posted notice, guidelines or rules, which may be posted and amended from time to time. Nothing on this Content shall be considered a recommendation or solicitation to buy or an offer to sell a security to any person in any jurisdiction. 1.5 Registration Data. In consideration of your use of this Content, you and/or your employer agree to: (a) provide true, accurate, current and complete Registration Data (as defined below in Section 3.1) to Kensho as prompted by the registration form completed prior to accessing the Content and (b) maintain and promptly update the Registration Data and to keep the same true, accurate, current and complete. 1.6 Right to Terminate User Access. Kensho reserves the right to limit, restrict and immediately terminate your access to and use of this Content at any time, in whole or in part, in its sole discretion and without notice. Section 2 - DISCLAIMER OF WARRANTY AND LIMITATION OF LIABILITY 2.1 THE CONTENT IS PROVIDED “AS IS” AND “AS AVAILABLE” WITHOUT REPRESENTATION OR WARRANTY OF ANY KIND. USE OF THE CONTENT IS AT THE USER’S OWN RISK. IN NO EVENT SHALL KENSHO OR ITS THIRD-PARTY CONTENT PROVIDERS BE LIABLE FOR ANY DECISION MADE OR ACTION OR INACTION TAKEN IN RELIANCE ON ANY CONTENT, INCLUDING THIRD-PARTY CONTENT, INCLUDING YOUR HANDLING AND STORING OF THE CONTENT. 
KENSHO FURTHER EXPLICITLY DISCLAIMS, ANY WARRANTY OF ANY KIND, WHETHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OF ORIGINALITY, ACCURACY, COMPLETENESS, TIMELINESS, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. KENSHO EXPRESSLY DISCLAIMS, AND YOU WAIVE, ANY LIABILITY THAT MAY ARISE FROM YOUR PUBLICATION OR PROVISION OF THE CONTENT TO A THIRD PARTY, OR ANY REPRESENTATION OR WARRANTY MADE BY YOU TO ANY THIRD PARTY, WHETHER OR NOT RELATED TO THE CONTENT. KENSHO, SUPPLIERS OF THIRD-PARTY CONTENT AND ANY OTHER THIRD PARTY WORKING WITH KENSHO SHALL NOT BE RESPONSIBLE OR LIABLE, DIRECTLY OR INDIRECTLY, FOR ANY DAMAGES OR LOSS (INCLUDING DIRECT, INDIRECT, INCIDENTAL, CONSEQUENTIAL AND ANY AND ALL OTHER FORMS OF DAMAGES OR LOSSES REGARDLESS OF THE FORM OF THE ACTION OR THE BASIS OF THE CLAIM) CAUSED OR ALLEGED TO BE CAUSED IN CONNECTION WITH YOUR USE OF THE CONTENT WHETHER OR NOT FORESEEABLE, EVEN IF KENSHO OR ANY OF THE SUPPLIERS OF THIRD-PARTY CONTENT OR OTHER THIRD PARTIES WORKING WITH KENSHO IN CONNECTION WITH THE CONTENT HAS BEEN ADVISED OF THE POSSIBILITY OR LIKELIHOOD OF SUCH DAMAGES. 2.2 THE CONTENT IS NOT INTENDED TO PROVIDE TAX, LEGAL, INSURANCE OR INVESTMENT ADVICE, AND NOTHING IN THE CONTENT SHOULD BE CONSTRUED AS AN OFFER TO SELL, A SOLICITATION OF AN OFFER TO BUY, OR A RECOMMENDATION FOR ANY SECURITY BY KENSHO OR ANY THIRD PARTY. 2.3 For third party demands, claims, actions, proceedings and liability for losses, damages, reasonable legal costs and other reasonable expenses of any nature, you agree to defend, indemnify and hold Kensho and its affiliates harmless, including its respective directors, officers, employees and agents from and against all claims to the extent arising from your access to and/or use of the Content, any failure by you to abide by the Terms of Usage, or breach of applicable law. Section 3 - PRIVACY 3.1 Access and Collection. 
In order to access this Content, during the registration process, either you or your employer will be required to provide Kensho with certain information; including your name, employer or academic institution, and e-mail address (“Registration Data”). In addition, when you request or view Content, Kensho may obtain user identifiable information related to your request of, or access to, such Content (“Access Data”). For example, while you are accessing this Content, our Web servers may recognize your: (a) domain name; (b) ISP’s domain name; (c) IP address; (d) browser type; and (e) operating system. If you contact us with a technical question, we may collect certain information about your systems, including: (a) your browser type, version and settings (e.g., Java and cookie settings); (b) connectivity information (e.g., SSL/HTTPS compatibility, bandwidth capacity); and browser plug-in information (e.g., do you have Adobe, what is your media player, can you open Flash files, etc.). 3.2 Use of Your Information. Registration Data and Access Data may be used by Kensho for research and development purposes and to communicate with users and to troubleshoot any technical issues pertaining to the Content. You acknowledge that in the event that a separate agreement is required, Kensho may share Registration Data with its Affiliates (as defined below). 3.3 Disclosure of Your Information. Except as otherwise noted herein, Kensho will not disclose, rent or sell personal information collected from or about you without your permission. For the purposes specified in the preceding paragraph, we may transfer or disclose Registration Data and Access Data to S&P Global Inc. and its affiliates (“Kensho Affiliates”) and third parties who are contracted to perform services on behalf of Kensho, such as those who assist Kensho in bringing you this Content and providing you with certain features and functionality included within or accessible via this Content. 
We may also disclose Registration Data and Access Data to Kensho Affiliates and third parties in connection with their providing you access to this Content. Disclosures to these third parties will be subject to confidentiality agreements and, where required, governed by contract. Kensho may also be required to disclose information to governmental, regulatory or self-regulatory entities or agencies in response to regulatory inquiries or to comply with applicable laws, rules, regulations, orders, subpoenas or other legal processes. 3.4 Consent. By (a) agreeing to these Terms of Usage, or (b) by using this Content, and, in either case, providing any information that may be required, requested or otherwise collected by us as set forth above, you freely consent to Kensho processing your information in the United States and in other countries and territories for the purposes set out in these Terms of Usage, and you also consent to the transfer of your information for such purposes to any third party content provider wherever such entity may from time to time be located and to any third parties as described above and in accordance with applicable law and regulations. If you do not permit Kensho to collect any of your information or do not agree with any of the terms and conditions of these Terms of Usage, you should not use this Content and should exit this page and/or Content, as the case may be. If after registering with Kensho, you desire to withdraw the consent granted in this Section 3.4 for all future use of your information by Kensho, you must notify Kensho in writing at the address listed below in Section 3.8 and immediately cease use of this Content. 3.5 Inquiries. If you have any questions regarding these Terms of Usage or your information that is held by us, please contact Kensho in writing using the contact information provided below. 
If we receive a request regarding your personal information held by us, we will use reasonable means to provide you with such information that we can reasonably compile. You will be given the opportunity to rectify any inaccuracies in such information. 3.6 Encryption. Kensho may use encryption technology to protect certain transmissions of data to/from this Content, but e-mail and other communications, unless otherwise noted on this Content, are not encrypted to/from this Content. Therefore, you should not send any personal or identifying information, such as account numbers, credit card numbers, Social Security numbers, passwords, etc., to Kensho via e-mail. By utilizing e-mail or other electronic communication means you acknowledge that you have no expectation of privacy with respect to the information delivered thereby and that Kensho will not be responsible for any loss or damage that could result from interception by third parties of any information so sent. 3.7 Contact Information. In the event you have any questions regarding these Terms of Use, this Privacy Statement or to make any requests or queries regarding your information that is held by us you may contact us in writing at [email protected] or Kensho Technologies LLC, Attn: General Counsel, 55 Water Street, New York, NY 10041. Section 4 - MISCELLANEOUS 4.1 Entire Agreement. These Terms of Usage constitute the entire agreement of the parties hereto with respect to the subject matter hereof and supersede all prior agreements and undertakings, both written and oral, between the parties with respect to the subject matter hereof. 4.2 Severability. 
If any term or other provision of these Terms of Usage is invalid, illegal or incapable of being enforced by any law or public policy, all other terms and provisions of these Terms of Usage shall nevertheless remain in full force and effect so long as the economic or legal substance of the transactions contemplated hereby is not affected in any manner materially adverse to any party. 4.3 Governing Law; Forum. These Terms of Usage shall be governed in all respects by the laws of the State of New York, and any litigation arising out of or connected in any way with these Terms of Usage shall take place in a State or Federal court of competent jurisdiction in New York County, State of New York. 4.4 Waiver of Jury Trial. YOU WAIVE TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW ANY RIGHT YOU MAY HAVE TO A TRIAL BY JURY WITH RESPECT TO ANY ACTIONS OR PROCEEDINGS DIRECTLY OR INDIRECTLY ARISING OUT OF, UNDER OR IN CONNECTION WITH THESE TERMS OF USAGE. 4.5 Conflict. In the event of a conflict between these Terms of Use and any other agreement with Kensho that relates to Third-Party Content, the more restrictive terms shall prevail.
kensho/spgispeech_demo
[ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "license:other", "arxiv:2104.02014", "region:us" ]
2022-07-01T11:07:49+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "SpgiSpeech", "languages": ["en"], "extra_gated_prompt": "Your access to and use of the information in the Kensho Transcript Dataset (the \u201cContent\u201d), which is provided by Kensho Technologies, LLC, a subsidiary of S&P Global, Inc., (\u201cKensho\u201d), shall be governed by the following terms and conditions of usage (\u201cTerms of Usage\u201d). The Content may be accessed only by persons who have been authorized to use this Content pursuant to their acceptance and acknowledgement of these Terms of Usage (in each case, an \u201cAuthorized User\u201d). By providing your electronic signature at the end of these Terms of Usage, you represent that you are an Authorized User and that you accept these Terms of Usage and agree to be bound by them.\nIf you do not wish to be bound by these Terms of Usage, you must not use this Content. PLEASE READ THESE TERMS OF USAGE CAREFULLY BEFORE USING THIS CONTENT.\nSection 1 \u2013 THE CONTENT\n1.1 The Content is provided for academic research purposes and internal use only and must not be used to: assemble or create a database; construct or facilitate the construction of products which compete with the Content; identify or attempt to identify or contact any individual; or link to another dataset.\nThe Content, which is comprised of public earnings calls in audio and corresponding text format, and all accompanying derived products is proprietary to Kensho and its third-party content providers. You shall not modify the Content; create derivative works based on the Content, rewrite or reprocess the Content except as expressly provided herein. 
You must not publish, display, transfer or redistribute the Content or any portions or derivative versions thereof to anyone without prior written consent from Kensho. You agree not to contact Kensho or its affiliates concerning individuals whose information may be included in the Content.\n1.2 Disclaimer. Content to which you are provided access, either directly or indirectly, from or on this Content will not have been reviewed or monitored by Kensho, and Kensho cannot and does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any such content.\nThe Content is provided for your convenience only and is not a republication or reconfirmation of the opinion or information contained therein. The provision of the Content is without any obligation on the part of Kensho or its third-party content providers to review such or any liability or responsibility arising out of your use thereof. Kensho does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any Content and shall not be liable for any errors, delays, or actions taken in reliance on information. In addition, the Content speaks only as of the date issued and is based on conference calls that may contain projections of other forward-looking statements. You should not rely on the Content as expressing Kensho\u2019s opinion or as representing current information. None of Kensho or the third-party content providers has undertaken, and do not undertake any duty to update any Content or otherwise advise you of any changes in the Content.\n1.3 Ownership of Third-Party Content. 
You acknowledge that all proprietary rights in the Content that are owned by Kensho or third party content providers shall remain the property of Kensho or such third party content providers, and you shall have no right or interest in such third party content except the rights to use such third party content in accordance with these Terms of Usage. Any additional rights not granted herein shall require a separate, direct agreement with Kensho. You acknowledge that the Content and third party content as compiled, prepared, selected and arranged by Kensho or its third party content providers constitutes an expenditure of substantial time, effort and money by Kensho and its third party content providers and constitutes valuable commercial property and/or trade secrets of Kensho and such third party content providers. Kensho retains all rights and remedies afforded under the copyright, trademark, service mark, patent and other laws of the United States and the States thereof, including without limitation any laws designed to protect proprietary or confidential information. You agree that you will not remove or modify any copyright notice, disclosures, disclaimers or other notification or trade name or marks of Kensho or the third party content providers that may appear in the Content or third party content and that any permitted reproduction and/or distribution of the Content or third party content shall contain such notices and/or marks as they appear in the Content or third party content. You may not use Kensho\u2019s or the third-party content providers\u2019 name or trademarks without the prior written consent of Kensho or such third-party content providers. Apart from the rights granted hereunder, no conveyance of ownership, right, title or interest is intended herein. Any additional rights require a separate agreement with Kensho.\n1.4 Posted Guidelines. 
In addition to these Terms of Usage, when using this Content, you shall be subject to and agree to follow any posted notice, guidelines or rules, which may be posted and amended from time to time. Nothing on this Content shall be considered a recommendation or solicitation to buy or an offer to sell a security to any person in any jurisdiction.\n1.5 Registration Data. In consideration of your use of this Content, you and/or your employer agree to: (a) provide true, accurate, current and complete Registration Data (as defined below in Section 3.1) to Kensho as prompted by the registration form completed prior to accessing the Content and (b) maintain and promptly update the Registration Data and to keep the same true, accurate, current and complete.\n1.6 Right to Terminate User Access. Kensho reserves the right to limit, restrict and immediately terminate your access to and use of this Content at any time, in whole or in part, in its sole discretion and without notice.\nSection 2 - DISCLAIMER OF WARRANTY AND LIMITATION OF LIABILITY\n2.1 THE CONTENT IS PROVIDED \u201cAS IS\u201d AND \u201cAS AVAILABLE\u201d WITHOUT REPRESENTATION OR WARRANTY OF ANY KIND. USE OF THE CONTENT IS AT THE USER\u2019S OWN RISK. IN NO EVENT SHALL KENSHO OR ITS THIRD-PARTY CONTENT PROVIDERS BE LIABLE FOR ANY DECISION MADE OR ACTION OR INACTION TAKEN IN RELIANCE ON ANY CONTENT, INCLUDING THIRD-PARTY CONTENT, INCLUDING YOUR HANDLING AND STORING OF THE CONTENT. KENSHO FURTHER EXPLICITLY DISCLAIMS, ANY WARRANTY OF ANY KIND, WHETHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OF ORIGINALITY, ACCURACY, COMPLETENESS, TIMELINESS, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. KENSHO EXPRESSLY DISCLAIMS, AND YOU WAIVE, ANY LIABILITY THAT MAY ARISE FROM YOUR PUBLICATION OR PROVISION OF THE CONTENT TO A THIRD PARTY, OR ANY REPRESENTATION OR WARRANTY MADE BY YOU TO ANY THIRD PARTY, WHETHER OR NOT RELATED TO THE CONTENT. 
KENSHO, SUPPLIERS OF THIRD-PARTY CONTENT AND ANY OTHER THIRD PARTY WORKING WITH KENSHO SHALL NOT BE RESPONSIBLE OR LIABLE, DIRECTLY OR INDIRECTLY, FOR ANY DAMAGES OR LOSS (INCLUDING DIRECT, INDIRECT, INCIDENTAL, CONSEQUENTIAL AND ANY AND ALL OTHER FORMS OF DAMAGES OR LOSSES REGARDLESS OF THE FORM OF THE ACTION OR THE BASIS OF THE CLAIM) CAUSED OR ALLEGED TO BE CAUSED IN CONNECTION WITH YOUR USE OF THE CONTENT WHETHER OR NOT FORESEEABLE, EVEN IF KENSHO OR ANY OF THE SUPPLIERS OF THIRD-PARTY CONTENT OR OTHER THIRD PARTIES WORKING WITH KENSHO IN CONNECTION WITH THE CONTENT HAS BEEN ADVISED OF THE POSSIBILITY OR LIKELIHOOD OF SUCH DAMAGES.\n2.2 THE CONTENT IS NOT INTENDED TO PROVIDE TAX, LEGAL, INSURANCE OR INVESTMENT ADVICE, AND NOTHING IN THE CONTENT SHOULD BE CONSTRUED AS AN OFFER TO SELL, A SOLICITATION OF AN OFFER TO BUY, OR A RECOMMENDATION FOR ANY SECURITY BY KENSHO OR ANY THIRD PARTY.\n2.3 For third party demands, claims, actions, proceedings and liability for losses, damages, reasonable legal costs and other reasonable expenses of any nature, you agree to defend, indemnify and hold Kensho and its affiliates harmless, including its respective directors, officers, employees and agents from and against all claims to the extent arising from your access to and/or use of the Content, any failure by you to abide by the Terms of Usage, or breach of applicable law.\nSection 3 - PRIVACY\n3.1 Access and Collection. In order to access this Content, during the registration process, either you or your employer will be required to provide Kensho with certain information; including your name, employer or academic institution, and e-mail address (\u201cRegistration Data\u201d). In addition, when you request or view Content, Kensho may obtain user identifiable information related to your request of, or access to, such Content (\u201cAccess Data\u201d). 
For example, while you are accessing this Content, our Web servers may recognize your: (a) domain name; (b) ISP\u2019s domain name; (c) IP address; (d) browser type; and (e) operating system. If you contact us with a technical question, we may collect certain information about your systems, including: (a) your browser type, version and settings (e.g., Java and cookie settings); (b) connectivity information (e.g., SSL/HTTPS compatibility, bandwidth capacity); and browser plug-in information (e.g., do you have Adobe, what is your media player, can you open Flash files, etc.).\n3.2 Use of Your Information. Registration Data and Access Data may be used by Kensho for research and development purposes and to communicate with users and to troubleshoot any technical issues pertaining to the Content. You acknowledge that in the event that a separate agreement is required, Kensho may share Registration Data with its Affiliates (as defined below).\n3.3 Disclosure of Your Information. Except as otherwise noted herein, Kensho will not disclose, rent or sell personal information collected from or about you without your permission. For the purposes specified in the preceding paragraph, we may transfer or disclose Registration Data and Access Data to S&P Global Inc. and its affiliates (\u201cKensho Affiliates\u201d) and third parties who are contracted to perform services on behalf of Kensho, such as those who assist Kensho in bringing you this Content and providing you with certain features and functionality included within or accessible via this Content. We may also disclose Registration Data and Access Data to Kensho Affiliates and third parties in connection with their providing you access to this Content. Disclosures to these third parties will be subject to confidentiality agreements and, where required, governed by contract. 
Kensho may also be required to disclose information to governmental, regulatory or self-regulatory entities or agencies in response to regulatory inquiries or to comply with applicable laws, rules, regulations, orders, subpoenas or other legal processes.\n3.4 Consent. By (a) agreeing to these Terms of Usage, or (b) by using this Content, and, in either case, providing any information that may be required, requested or otherwise collected by us as set forth above, you freely consent to Kensho processing your information in the United States and in other countries and territories for the purposes set out in these Terms of Usage, and you also consent to the transfer of your information for such purposes to any third party content provider wherever such entity may from time to time be located and to any third parties as described above and in accordance with applicable law and regulations. If you do not permit Kensho to collect any of your information or do not agree with any of the terms and conditions of these Terms of Usage, you should not use this Content and should exit this page and/or Content, as the case may be. If after registering with Kensho, you desire to withdraw the consent granted in this Section 3.4 for all future use of your information by Kensho, you must notify Kensho in writing at the address listed below in Section 3.8 and immediately cease use of this Content.\n3.5 Inquiries. If you have any questions regarding these Terms of Usage or your information that is held by us, please contact Kensho in writing using the contact information provided below. If we receive a request regarding your personal information held by us, we will use reasonable means to provide you with such information that we can reasonably compile. You will be given the opportunity to rectify any inaccuracies in such information.\n3.6 Encryption. 
Kensho may use encryption technology to protect certain transmissions of data to/from this Content, but e-mail and other communications, unless otherwise noted on this Content, are not encrypted to/from this Content. Therefore, you should not send any personal or identifying information, such as account numbers, credit card numbers, Social Security numbers, passwords, etc., to Kensho via e-mail. By utilizing e-mail or other electronic communication means you acknowledge that you have no expectation of privacy with respect to the information delivered thereby and that Kensho will not be responsible for any loss or damage that could result from interception by third parties of any information so sent.\n3.7 Contact Information. In the event you have any questions regarding these Terms of Use, this Privacy Statement or to make any requests or queries regarding your information that is held by us you may contact us in writing at [email protected] or Kensho Technologies LLC, Attn: General Counsel, 55 Water Street, New York, NY 10041.\nSection 4 - MISCELLANEOUS\n4.1 Entire Agreement. These Terms of Usage constitute the entire agreement of the parties hereto with respect to the subject matter hereof and supersede all prior agreements and undertakings, both written and oral, between the parties with respect to the subject matter hereof.\n4.2 Severability. If any term or other provision of these Terms of Usage is invalid, illegal or incapable of being enforced by any law or public policy, all other terms and provisions of these Terms of Usage shall nevertheless remain in full force and effect so long as the economic or legal substance of the transactions contemplated hereby is not affected in any manner materially adverse to any party.\n4.3 Governing Law; Forum. 
These Terms of Usage shall be governed in all respects by the laws of the State of New York, and any litigation arising out of or connected in any way with these Terms of Usage shall take place in a State or Federal court of competent jurisdiction in New York County, State of New York.\n4.4 Waiver of Jury Trial. YOU WAIVE TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW ANY RIGHT YOU MAY HAVE TO A TRIAL BY JURY WITH RESPECT TO ANY ACTIONS OR PROCEEDINGS DIRECTLY OR INDIRECTLY ARISING OUT OF, UNDER OR IN CONNECTION WITH THESE TERMS OF USAGE.\n4.5 Conflict. In the event of a conflict between these Terms of Use and any other agreement with Kensho that relates to Third-Party Content, the more restrictive terms shall prevail.", "extra_gated_fields": {"Full name": "text", "Email": "text", "Institution": "text", "I accept the Terms of Usage": "checkbox"}}
2022-07-01T11:08:31+00:00
[ "2104.02014" ]
[]
TAGS #task_categories-automatic-speech-recognition #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #license-other #arxiv-2104.02014 #region-us
Dataset Card for SPGISpeech =========================== Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions * Terms of Usage Dataset Description ------------------- * Homepage: URL * Repository: * Paper: URL * Leaderboard: * Point of Contact: data@URL Dataset Description ------------------- SPGISpeech (rhymes with “squeegee-speech”) is a large-scale transcription dataset, freely available for academic research. SPGISpeech is a corpus of 5,000 hours of professionally-transcribed financial audio. SPGISpeech contains a broad cross-section of L1 and L2 English accents, strongly varying audio quality, and both spontaneous and narrated speech. The transcripts have each been cross-checked by multiple professional editors for high accuracy and are fully formatted, including capitalization, punctuation, and denormalization of non-standard words. SPGISpeech consists of 5,000 hours of recorded company earnings calls and their respective transcriptions. The original calls were split into slices ranging from 5 to 15 seconds in length to allow easy training for speech recognition systems. Calls represent a broad cross-section of international business English; SPGISpeech contains approximately 50,000 speakers, one of the largest numbers of any speech corpus, and offers a variety of L1 and L2 English accents. The format of each WAV file is single channel, 16kHz, 16 bit audio. ### Example Usage The training split has several configurations of various sizes: S, M, L. 
See the Section Data Splits for more information. To download the S configuration: It is possible to download only the development or test data: ### Supported Tasks and Leaderboards * 'automatic-speech-recognition': The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). ### Languages SPGISpeech contains audio and transcription data in business English and offers a variety of L1 and L2 accents. Dataset Structure ----------------- ### Data Instances ### Data Fields * wav\_filename (string) - audio filename (includes parent directory). * audio (Audio feature) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally). * wav\_filesize (int) - size of the file in bytes. * transcript (string) - transcription of the file. ### Data Splits The dataset has three splits: train, evaluation (dev) and test. The train split has three configurations of various sizes: S, M, L. Larger subsets are supersets of smaller subsets, e.g., the L subset contains all the data from the M subset. #### Transcribed Subsets Size Dataset Creation ---------------- ### Curation Rationale ### Source Data The dataset contains S&P Global company earnings calls. #### Initial Data Collection and Normalization #### Who are the source language producers? English speakers with a diverse selection of accents, including non-native ones (L2), producing both spontaneous and narrated speech. 
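The Supported Tasks section above names word error rate (WER) as the usual evaluation metric. As an illustrative sketch only (not part of the dataset card; real evaluations typically use a dedicated library such as jiwer), WER is the word-level edit distance between hypothesis and reference, divided by the number of reference words:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] holds the edit distance between ref[:i-1] and hyp[:j]
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, 1):
            cur[j] = min(prev[j] + 1,             # deletion
                         cur[j - 1] + 1,          # insertion
                         prev[j - 1] + (r != h))  # substitution (free if equal)
        prev = cur
    return prev[len(hyp)] / max(len(ref), 1)

print(wer("how are you today", "how are you"))  # one deletion over 4 words -> 0.25
```

Note that WER scores are sensitive to text normalization; since SPGISpeech transcripts are fully formatted (capitalization, punctuation, denormalization), hypothesis and reference must be formatted consistently before scoring.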
### Annotations #### Annotation process Data is orthographically transcribed according to a professional style guide detailing conventions for capitalization, punctuation, denormalization of non-standard words and transcription of disfluencies in spontaneous speech. The transcripts have each been cross-checked by multiple professional editors for high accuracy and are fully formatted. Full earnings calls last 30-60 minutes in length and are typically transcribed as whole units, without internal timestamps. In order to produce short audio slices suitable for STT training, the files were segmented with Gentle, a double-pass forced aligner, with the beginning and end of each slice of audio imputed by voice activity detection with py-webrtc. #### Who are the annotators? Earning calls are manually transcribed by S&P Global, Inc. ### Personal and Sensitive Information Though earnings calls are public, we nevertheless identified full names with the spaCy en core web large model. We withheld samples containing names that appeared fewer than ten times (7% of total). Full names appearing ten times or more in the data were considered to be public figures and were retained. This necessarily incomplete approach to named entity recognition was complemented with randomized manual spot checks which uncovered no false negatives missed by the automated approach. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information Please cite this paper: ### Contributions Thanks to @sanchit-gandhi, @patrickvonplaten, and @polinaeterna for adding this dataset. 
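The name-frequency rule described under Personal and Sensitive Information above (withhold samples whose detected full names appear fewer than ten times, on the assumption that frequent names belong to public figures) can be sketched as follows. This is a hypothetical reconstruction: the function name is illustrative, and the spaCy NER step is replaced here by pre-extracted name lists passed in as input.

```python
from collections import Counter

def filter_rare_names(samples, names_per_sample, min_count=10):
    """Keep only samples whose detected full names all occur at least
    `min_count` times corpus-wide; rarer names are assumed to identify
    private individuals and those samples are withheld."""
    counts = Counter(name for names in names_per_sample for name in names)
    return [sample
            for sample, names in zip(samples, names_per_sample)
            if all(counts[name] >= min_count for name in names)]

# Toy corpus with min_count=2: only the sample with no rare names survives.
kept = filter_rare_names([0, 1, 2], [["Jane Doe"], [], ["John Smith"]], min_count=2)
print(kept)  # [1]
```

The threshold-by-frequency heuristic is necessarily incomplete (as the card itself notes), which is why the curators complemented it with randomized manual spot checks.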
Terms of Usage -------------- Your access to and use of the information in the Kensho Transcript Dataset (the “Content”), which is provided by Kensho Technologies, LLC, a subsidiary of S&P Global, Inc., (“Kensho”), shall be governed by the following terms and conditions of usage (“Terms of Usage”). The Content may be accessed only by persons who have been authorized to use this Content pursuant to their acceptance and acknowledgement of these Terms of Usage (in each case, an “Authorized User”). By providing your electronic signature at the end of these Terms of Usage, you represent that you are an Authorized User and that you accept these Terms of Usage and agree to be bound by them. If you do not wish to be bound by these Terms of Usage, you must not use this Content. PLEASE READ THESE TERMS OF USAGE CAREFULLY BEFORE USING THIS CONTENT. Section 1 – THE CONTENT 1.1 The Content is provided for academic research purposes and internal use only and must not be used to: * assemble or create a database; * construct or facilitate the construction of products which compete with the Content; * identify or attempt to identify or contact any individual; or link to another dataset. The Content, which is comprised of public earnings calls in audio and corresponding text format, and all accompanying derived products is proprietary to Kensho and its third-party content providers. You shall not modify the Content; create derivative works based on the Content, rewrite or reprocess the Content except as expressly provided herein. You must not publish, display, transfer or redistribute the Content or any portions or derivative versions thereof to anyone without prior written consent from Kensho. You agree not to contact Kensho or its affiliates concerning individuals whose information may be included in the Content. 1.2 Disclaimer. 
Content to which you are provided access, either directly or indirectly, from or on this Content will not have been reviewed or monitored by Kensho, and Kensho cannot and does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any such content. The Content is provided for your convenience only and is not a republication or reconfirmation of the opinion or information contained therein. The provision of the Content is without any obligation on the part of Kensho or its third-party content providers to review such or any liability or responsibility arising out of your use thereof. Kensho does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any Content and shall not be liable for any errors, delays, or actions taken in reliance on information. In addition, the Content speaks only as of the date issued and is based on conference calls that may contain projections of other forward-looking statements. You should not rely on the Content as expressing Kensho’s opinion or as representing current information. None of Kensho or the third-party content providers has undertaken, and do not undertake any duty to update any Content or otherwise advise you of any changes in the Content. 1.3 Ownership of Third-Party Content. You acknowledge that all proprietary rights in the Content that are owned by Kensho or third party content providers shall remain the property of Kensho or such third party content providers, and you shall have no right or interest in such third party content except the rights to use such third party content in accordance with these Terms of Usage. Any additional rights not granted herein shall require a separate, direct agreement with Kensho. 
You acknowledge that the Content and third party content as compiled, prepared, selected and arranged by Kensho or its third party content providers constitutes an expenditure of substantial time, effort and money by Kensho and its third party content providers and constitutes valuable commercial property and/or trade secrets of Kensho and such third party content providers. Kensho retains all rights and remedies afforded under the copyright, trademark, service mark, patent and other laws of the United States and the States thereof, including without limitation any laws designed to protect proprietary or confidential information. You agree that you will not remove or modify any copyright notice, disclosures, disclaimers or other notification or trade name or marks of Kensho or the third party content providers that may appear in the Content or third party content and that any permitted reproduction and/or distribution of the Content or third party content shall contain such notices and/or marks as they appear in the Content or third party content. You may not use Kensho’s or the third-party content providers’ name or trademarks without the prior written consent of Kensho or such third-party content providers. Apart from the rights granted hereunder, no conveyance of ownership, right, title or interest is intended herein. Any additional rights require a separate agreement with Kensho. 1.4 Posted Guidelines. In addition to these Terms of Usage, when using this Content, you shall be subject to and agree to follow any posted notice, guidelines or rules, which may be posted and amended from time to time. Nothing on this Content shall be considered a recommendation or solicitation to buy or an offer to sell a security to any person in any jurisdiction. 1.5 Registration Data. 
In consideration of your use of this Content, you and/or your employer agree to: (a) provide true, accurate, current and complete Registration Data (as defined below in Section 3.1) to Kensho as prompted by the registration form completed prior to accessing the Content and (b) maintain and promptly update the Registration Data and to keep the same true, accurate, current and complete. 1.6 Right to Terminate User Access. Kensho reserves the right to limit, restrict and immediately terminate your access to and use of this Content at any time, in whole or in part, in its sole discretion and without notice. Section 2 - DISCLAIMER OF WARRANTY AND LIMITATION OF LIABILITY 2.1 THE CONTENT IS PROVIDED “AS IS” AND “AS AVAILABLE” WITHOUT REPRESENTATION OR WARRANTY OF ANY KIND. USE OF THE CONTENT IS AT THE USER’S OWN RISK. IN NO EVENT SHALL KENSHO OR ITS THIRD-PARTY CONTENT PROVIDERS BE LIABLE FOR ANY DECISION MADE OR ACTION OR INACTION TAKEN IN RELIANCE ON ANY CONTENT, INCLUDING THIRD-PARTY CONTENT, INCLUDING YOUR HANDLING AND STORING OF THE CONTENT. KENSHO FURTHER EXPLICITLY DISCLAIMS, ANY WARRANTY OF ANY KIND, WHETHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OF ORIGINALITY, ACCURACY, COMPLETENESS, TIMELINESS, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. KENSHO EXPRESSLY DISCLAIMS, AND YOU WAIVE, ANY LIABILITY THAT MAY ARISE FROM YOUR PUBLICATION OR PROVISION OF THE CONTENT TO A THIRD PARTY, OR ANY REPRESENTATION OR WARRANTY MADE BY YOU TO ANY THIRD PARTY, WHETHER OR NOT RELATED TO THE CONTENT. 
KENSHO, SUPPLIERS OF THIRD-PARTY CONTENT AND ANY OTHER THIRD PARTY WORKING WITH KENSHO SHALL NOT BE RESPONSIBLE OR LIABLE, DIRECTLY OR INDIRECTLY, FOR ANY DAMAGES OR LOSS (INCLUDING DIRECT, INDIRECT, INCIDENTAL, CONSEQUENTIAL AND ANY AND ALL OTHER FORMS OF DAMAGES OR LOSSES REGARDLESS OF THE FORM OF THE ACTION OR THE BASIS OF THE CLAIM) CAUSED OR ALLEGED TO BE CAUSED IN CONNECTION WITH YOUR USE OF THE CONTENT WHETHER OR NOT FORESEEABLE, EVEN IF KENSHO OR ANY OF THE SUPPLIERS OF THIRD-PARTY CONTENT OR OTHER THIRD PARTIES WORKING WITH KENSHO IN CONNECTION WITH THE CONTENT HAS BEEN ADVISED OF THE POSSIBILITY OR LIKELIHOOD OF SUCH DAMAGES. 2.2 THE CONTENT IS NOT INTENDED TO PROVIDE TAX, LEGAL, INSURANCE OR INVESTMENT ADVICE, AND NOTHING IN THE CONTENT SHOULD BE CONSTRUED AS AN OFFER TO SELL, A SOLICITATION OF AN OFFER TO BUY, OR A RECOMMENDATION FOR ANY SECURITY BY KENSHO OR ANY THIRD PARTY. 2.3 For third party demands, claims, actions, proceedings and liability for losses, damages, reasonable legal costs and other reasonable expenses of any nature, you agree to defend, indemnify and hold Kensho and its affiliates harmless, including its respective directors, officers, employees and agents from and against all claims to the extent arising from your access to and/or use of the Content, any failure by you to abide by the Terms of Usage, or breach of applicable law. Section 3 - PRIVACY 3.1 Access and Collection. In order to access this Content, during the registration process, either you or your employer will be required to provide Kensho with certain information; including your name, employer or academic institution, and e-mail address (“Registration Data”). In addition, when you request or view Content, Kensho may obtain user identifiable information related to your request of, or access to, such Content (“Access Data”). 
For example, while you are accessing this Content, our Web servers may recognize your: (a) domain name; (b) ISP’s domain name; (c) IP address; (d) browser type; and (e) operating system. If you contact us with a technical question, we may collect certain information about your systems, including: (a) your browser type, version and settings (e.g., Java and cookie settings); (b) connectivity information (e.g., SSL/HTTPS compatibility, bandwidth capacity); and browser plug-in information (e.g., do you have Adobe, what is your media player, can you open Flash files, etc.). 3.2 Use of Your Information. Registration Data and Access Data may be used by Kensho for research and development purposes and to communicate with users and to troubleshoot any technical issues pertaining to the Content. You acknowledge that in the event that a separate agreement is required, Kensho may share Registration Data with its Affiliates (as defined below). 3.3 Disclosure of Your Information. Except as otherwise noted herein, Kensho will not disclose, rent or sell personal information collected from or about you without your permission. For the purposes specified in the preceding paragraph, we may transfer or disclose Registration Data and Access Data to S&P Global Inc. and its affiliates (“Kensho Affiliates”) and third parties who are contracted to perform services on behalf of Kensho, such as those who assist Kensho in bringing you this Content and providing you with certain features and functionality included within or accessible via this Content. We may also disclose Registration Data and Access Data to Kensho Affiliates and third parties in connection with their providing you access to this Content. Disclosures to these third parties will be subject to confidentiality agreements and, where required, governed by contract. 
Kensho may also be required to disclose information to governmental, regulatory or self-regulatory entities or agencies in response to regulatory inquiries or to comply with applicable laws, rules, regulations, orders, subpoenas or other legal processes. 3.4 Consent. By (a) agreeing to these Terms of Usage, or (b) by using this Content, and, in either case, providing any information that may be required, requested or otherwise collected by us as set forth above, you freely consent to Kensho processing your information in the United States and in other countries and territories for the purposes set out in these Terms of Usage, and you also consent to the transfer of your information for such purposes to any third party content provider wherever such entity may from time to time be located and to any third parties as described above and in accordance with applicable law and regulations. If you do not permit Kensho to collect any of your information or do not agree with any of the terms and conditions of these Terms of Usage, you should not use this Content and should exit this page and/or Content, as the case may be. If after registering with Kensho, you desire to withdraw the consent granted in this Section 3.4 for all future use of your information by Kensho, you must notify Kensho in writing at the address listed below in Section 3.8 and immediately cease use of this Content. 3.5 Inquiries. If you have any questions regarding these Terms of Usage or your information that is held by us, please contact Kensho in writing using the contact information provided below. If we receive a request regarding your personal information held by us, we will use reasonable means to provide you with such information that we can reasonably compile. You will be given the opportunity to rectify any inaccuracies in such information. 3.6 Encryption. 
Kensho may use encryption technology to protect certain transmissions of data to/from this Content, but e-mail and other communications, unless otherwise noted on this Content, are not encrypted to/from this Content. Therefore, you should not send any personal or identifying information, such as account numbers, credit card numbers, Social Security numbers, passwords, etc., to Kensho via e-mail. By utilizing e-mail or other electronic communication means you acknowledge that you have no expectation of privacy with respect to the information delivered thereby and that Kensho will not be responsible for any loss or damage that could result from interception by third parties of any information so sent. 3.7 Contact Information. In the event you have any questions regarding these Terms of Use, this Privacy Statement or to make any requests or queries regarding your information that is held by us you may contact us in writing at privacy@URL or Kensho Technologies LLC, Attn: General Counsel, 55 Water Street, New York, NY 10041. Section 4 - MISCELLANEOUS 4.1 Entire Agreement. These Terms of Usage constitute the entire agreement of the parties hereto with respect to the subject matter hereof and supersede all prior agreements and undertakings, both written and oral, between the parties with respect to the subject matter hereof. 4.2 Severability. If any term or other provision of these Terms of Usage is invalid, illegal or incapable of being enforced by any law or public policy, all other terms and provisions of these Terms of Usage shall nevertheless remain in full force and effect so long as the economic or legal substance of the transactions contemplated hereby is not affected in any manner materially adverse to any party. 4.3 Governing Law; Forum. 
These Terms of Usage shall be governed in all respects by the laws of the State of New York, and any litigation arising out of or connected in any way with these Terms of Usage shall take place in a State or Federal court of competent jurisdiction in New York County, State of New York. 4.4 Waiver of Jury Trial. YOU WAIVE TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW ANY RIGHT YOU MAY HAVE TO A TRIAL BY JURY WITH RESPECT TO ANY ACTIONS OR PROCEEDINGS DIRECTLY OR INDIRECTLY ARISING OUT OF, UNDER OR IN CONNECTION WITH THESE TERMS OF USAGE. 4.5 Conflict. In the event of a conflict between these Terms of Use and any other agreement with Kensho that relates to Third-Party Content, the more restrictive terms shall prevail.
[ "### Example Usage\n\n\nThe training split has several configurations of various sizes: S, M, L. See the Data Splits section\nfor more information. To download the S configuration:\n\n\nIt is possible to download only the development or test data:", "### Supported Tasks and Leaderboards\n\n\n* 'automatic-speech-recognition': The dataset can be used to train a model for Automatic Speech Recognition (ASR).\nThe model is presented with an audio file and asked to transcribe the audio file to written text.\nThe most common evaluation metric is the word error rate (WER).", "### Languages\n\n\nSPGISpeech contains audio and transcription data in business English and offers a variety of L1 and L2 accents.\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\n* wav\\_filename (string) - audio filename (includes parent directory).\n* audio (Audio feature) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate.\nIn non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio\ninside its archive (as files are not downloaded and extracted locally).\n* wav\\_filesize (int) - size of the file in bytes.\n* transcript (string) - transcription of the file.", "### Data Splits\n\n\nThe dataset has three splits: train, evaluation (dev) and test. The train split has three configurations of various sizes:\nS, M, L. 
Larger subsets are supersets of smaller subsets, e.g., the L subset contains all the data from the M subset.", "#### Transcribed Subsets Size\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data\n\n\nThe dataset contains S&P Global company earnings calls.", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\n\n\nEnglish speakers with a diverse selection of accents, including non-native ones (L2), producing both\nspontaneous and narrated speech.", "### Annotations", "#### Annotation process\n\n\nData is orthographically transcribed according to a professional style guide detailing conventions for capitalization, punctuation,\ndenormalization of non-standard words and transcription of disfluencies in spontaneous speech.\nThe transcripts have each been cross-checked by multiple professional editors for high accuracy and are fully formatted.\n\n\nFull earnings calls last 30-60 minutes in length and are typically\ntranscribed as whole units, without internal timestamps. In order to produce short audio slices suitable for STT\ntraining, the files were segmented with Gentle, a double-pass forced aligner,\nwith the beginning and end of each slice of audio imputed by voice activity detection with\npy-webrtc.", "#### Who are the annotators?\n\n\nEarnings calls are manually transcribed by S&P Global, Inc.", "### Personal and Sensitive Information\n\n\nThough earnings calls are public, we nevertheless identified full names with the spaCy en core web large model.\nWe withheld samples containing names that appeared fewer than ten times (7% of total). 
Full\nnames appearing ten times or more in the data were considered to be public figures and were retained.\nThis necessarily incomplete approach to named entity recognition was complemented with randomized manual spot\nchecks which uncovered no false negatives missed by the automated approach.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nPlease cite this paper:", "### Contributions\n\n\nThanks to @sanchit-gandhi, @patrickvonplaten,\nand @polinaeterna for adding this dataset.\n\n\nTerms of Usage\n--------------\n\n\nYour access to and use of the information in the Kensho Transcript Dataset (the “Content”), which is provided by Kensho Technologies, LLC, a subsidiary of S&P Global, Inc., (“Kensho”), shall be governed by the following terms and conditions of usage (“Terms of Usage”). The Content may be accessed only by persons who have been authorized to use this Content pursuant to their acceptance and acknowledgement of these Terms of Usage (in each case, an “Authorized User”). By providing your electronic signature at the end of these Terms of Usage, you represent that you are an Authorized User and that you accept these Terms of Usage and agree to be bound by them.\n\n\nIf you do not wish to be bound by these Terms of Usage, you must not use this Content. 
PLEASE READ THESE TERMS OF USAGE CAREFULLY BEFORE USING THIS CONTENT.\n\n\nSection 1 – THE CONTENT\n\n\n1.1 The Content is provided for academic research purposes and internal use only and must not be used to:\n\n\n* assemble or create a database;\n* construct or facilitate the construction of products which compete with the Content;\n* identify or attempt to identify or contact any individual; or link to another dataset.\n\n\nThe Content, which is comprised of public earnings calls in audio and corresponding text format, and all accompanying derived products is proprietary to Kensho and its third-party content providers. You shall not modify the Content; create derivative works based on the Content, rewrite or reprocess the Content except as expressly provided herein. You must not publish, display, transfer or redistribute the Content or any portions or derivative versions thereof to anyone without prior written consent from Kensho. You agree not to contact Kensho or its affiliates concerning individuals whose information may be included in the Content.\n\n\n1.2 Disclaimer. Content to which you are provided access, either directly or indirectly, from or on this Content will not have been reviewed or monitored by Kensho, and Kensho cannot and does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any such content.\n\n\nThe Content is provided for your convenience only and is not a republication or reconfirmation of the opinion or information contained therein. The provision of the Content is without any obligation on the part of Kensho or its third-party content providers to review such or any liability or responsibility arising out of your use thereof. 
Kensho does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any Content and shall not be liable for any errors, delays, or actions taken in reliance on information. In addition, the Content speaks only as of the date issued and is based on conference calls that may contain projections or other forward-looking statements. You should not rely on the Content as expressing Kensho’s opinion or as representing current information. None of Kensho or the third-party content providers has undertaken, and do not undertake any duty to update any Content or otherwise advise you of any changes in the Content.\n\n\n1.3 Ownership of Third-Party Content. You acknowledge that all proprietary rights in the Content that are owned by Kensho or third party content providers shall remain the property of Kensho or such third party content providers, and you shall have no right or interest in such third party content except the rights to use such third party content in accordance with these Terms of Usage. Any additional rights not granted herein shall require a separate, direct agreement with Kensho. You acknowledge that the Content and third party content as compiled, prepared, selected and arranged by Kensho or its third party content providers constitutes an expenditure of substantial time, effort and money by Kensho and its third party content providers and constitutes valuable commercial property and/or trade secrets of Kensho and such third party content providers. Kensho retains all rights and remedies afforded under the copyright, trademark, service mark, patent and other laws of the United States and the States thereof, including without limitation any laws designed to protect proprietary or confidential information. 
You agree that you will not remove or modify any copyright notice, disclosures, disclaimers or other notification or trade name or marks of Kensho or the third party content providers that may appear in the Content or third party content and that any permitted reproduction and/or distribution of the Content or third party content shall contain such notices and/or marks as they appear in the Content or third party content. You may not use Kensho’s or the third-party content providers’ name or trademarks without the prior written consent of Kensho or such third-party content providers. Apart from the rights granted hereunder, no conveyance of ownership, right, title or interest is intended herein. Any additional rights require a separate agreement with Kensho.\n\n\n1.4 Posted Guidelines. In addition to these Terms of Usage, when using this Content, you shall be subject to and agree to follow any posted notice, guidelines or rules, which may be posted and amended from time to time. Nothing on this Content shall be considered a recommendation or solicitation to buy or an offer to sell a security to any person in any jurisdiction.\n\n\n1.5 Registration Data. In consideration of your use of this Content, you and/or your employer agree to: (a) provide true, accurate, current and complete Registration Data (as defined below in Section 3.1) to Kensho as prompted by the registration form completed prior to accessing the Content and (b) maintain and promptly update the Registration Data and to keep the same true, accurate, current and complete.\n\n\n1.6 Right to Terminate User Access. Kensho reserves the right to limit, restrict and immediately terminate your access to and use of this Content at any time, in whole or in part, in its sole discretion and without notice.\n\n\nSection 2 - DISCLAIMER OF WARRANTY AND LIMITATION OF LIABILITY\n\n\n2.1 THE CONTENT IS PROVIDED “AS IS” AND “AS AVAILABLE” WITHOUT REPRESENTATION OR WARRANTY OF ANY KIND. 
USE OF THE CONTENT IS AT THE USER’S OWN RISK. IN NO EVENT SHALL KENSHO OR ITS THIRD-PARTY CONTENT PROVIDERS BE LIABLE FOR ANY DECISION MADE OR ACTION OR INACTION TAKEN IN RELIANCE ON ANY CONTENT, INCLUDING THIRD-PARTY CONTENT, INCLUDING YOUR HANDLING AND STORING OF THE CONTENT. KENSHO FURTHER EXPLICITLY DISCLAIMS, ANY WARRANTY OF ANY KIND, WHETHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OF ORIGINALITY, ACCURACY, COMPLETENESS, TIMELINESS, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. KENSHO EXPRESSLY DISCLAIMS, AND YOU WAIVE, ANY LIABILITY THAT MAY ARISE FROM YOUR PUBLICATION OR PROVISION OF THE CONTENT TO A THIRD PARTY, OR ANY REPRESENTATION OR WARRANTY MADE BY YOU TO ANY THIRD PARTY, WHETHER OR NOT RELATED TO THE CONTENT. KENSHO, SUPPLIERS OF THIRD-PARTY CONTENT AND ANY OTHER THIRD PARTY WORKING WITH KENSHO SHALL NOT BE RESPONSIBLE OR LIABLE, DIRECTLY OR INDIRECTLY, FOR ANY DAMAGES OR LOSS (INCLUDING DIRECT, INDIRECT, INCIDENTAL, CONSEQUENTIAL AND ANY AND ALL OTHER FORMS OF DAMAGES OR LOSSES REGARDLESS OF THE FORM OF THE ACTION OR THE BASIS OF THE CLAIM) CAUSED OR ALLEGED TO BE CAUSED IN CONNECTION WITH YOUR USE OF THE CONTENT WHETHER OR NOT FORESEEABLE, EVEN IF KENSHO OR ANY OF THE SUPPLIERS OF THIRD-PARTY CONTENT OR OTHER THIRD PARTIES WORKING WITH KENSHO IN CONNECTION WITH THE CONTENT HAS BEEN ADVISED OF THE POSSIBILITY OR LIKELIHOOD OF SUCH DAMAGES.\n\n\n2.2 THE CONTENT IS NOT INTENDED TO PROVIDE TAX, LEGAL, INSURANCE OR INVESTMENT ADVICE, AND NOTHING IN THE CONTENT SHOULD BE CONSTRUED AS AN OFFER TO SELL, A SOLICITATION OF AN OFFER TO BUY, OR A RECOMMENDATION FOR ANY SECURITY BY KENSHO OR ANY THIRD PARTY.\n\n\n2.3 For third party demands, claims, actions, proceedings and liability for losses, damages, reasonable legal costs and other reasonable expenses of any nature, you agree to defend, indemnify and hold Kensho and its affiliates harmless, including its respective directors, officers, employees and agents from and against all 
claims to the extent arising from your access to and/or use of the Content, any failure by you to abide by the Terms of Usage, or breach of applicable law.\n\n\nSection 3 - PRIVACY\n\n\n3.1 Access and Collection. In order to access this Content, during the registration process, either you or your employer will be required to provide Kensho with certain information; including your name, employer or academic institution, and e-mail address (“Registration Data”). In addition, when you request or view Content, Kensho may obtain user identifiable information related to your request of, or access to, such Content (“Access Data”). For example, while you are accessing this Content, our Web servers may recognize your: (a) domain name; (b) ISP’s domain name; (c) IP address; (d) browser type; and (e) operating system. If you contact us with a technical question, we may collect certain information about your systems, including: (a) your browser type, version and settings (e.g., Java and cookie settings); (b) connectivity information (e.g., SSL/HTTPS compatibility, bandwidth capacity); and browser plug-in information (e.g., do you have Adobe, what is your media player, can you open Flash files, etc.).\n\n\n3.2 Use of Your Information. Registration Data and Access Data may be used by Kensho for research and development purposes and to communicate with users and to troubleshoot any technical issues pertaining to the Content. You acknowledge that in the event that a separate agreement is required, Kensho may share Registration Data with its Affiliates (as defined below).\n\n\n3.3 Disclosure of Your Information. Except as otherwise noted herein, Kensho will not disclose, rent or sell personal information collected from or about you without your permission. For the purposes specified in the preceding paragraph, we may transfer or disclose Registration Data and Access Data to S&P Global Inc. 
and its affiliates (“Kensho Affiliates”) and third parties who are contracted to perform services on behalf of Kensho, such as those who assist Kensho in bringing you this Content and providing you with certain features and functionality included within or accessible via this Content. We may also disclose Registration Data and Access Data to Kensho Affiliates and third parties in connection with their providing you access to this Content. Disclosures to these third parties will be subject to confidentiality agreements and, where required, governed by contract. Kensho may also be required to disclose information to governmental, regulatory or self-regulatory entities or agencies in response to regulatory inquiries or to comply with applicable laws, rules, regulations, orders, subpoenas or other legal processes.\n\n\n3.4 Consent. By (a) agreeing to these Terms of Usage, or (b) by using this Content, and, in either case, providing any information that may be required, requested or otherwise collected by us as set forth above, you freely consent to Kensho processing your information in the United States and in other countries and territories for the purposes set out in these Terms of Usage, and you also consent to the transfer of your information for such purposes to any third party content provider wherever such entity may from time to time be located and to any third parties as described above and in accordance with applicable law and regulations. If you do not permit Kensho to collect any of your information or do not agree with any of the terms and conditions of these Terms of Usage, you should not use this Content and should exit this page and/or Content, as the case may be. If after registering with Kensho, you desire to withdraw the consent granted in this Section 3.4 for all future use of your information by Kensho, you must notify Kensho in writing at the address listed below in Section 3.8 and immediately cease use of this Content.\n\n\n3.5 Inquiries. 
If you have any questions regarding these Terms of Usage or your information that is held by us, please contact Kensho in writing using the contact information provided below. If we receive a request regarding your personal information held by us, we will use reasonable means to provide you with such information that we can reasonably compile. You will be given the opportunity to rectify any inaccuracies in such information.\n\n\n3.6 Encryption. Kensho may use encryption technology to protect certain transmissions of data to/from this Content, but e-mail and other communications, unless otherwise noted on this Content, are not encrypted to/from this Content. Therefore, you should not send any personal or identifying information, such as account numbers, credit card numbers, Social Security numbers, passwords, etc., to Kensho via e-mail. By utilizing e-mail or other electronic communication means you acknowledge that you have no expectation of privacy with respect to the information delivered thereby and that Kensho will not be responsible for any loss or damage that could result from interception by third parties of any information so sent.\n\n\n3.7 Contact Information. In the event you have any questions regarding these Terms of Use, this Privacy Statement or to make any requests or queries regarding your information that is held by us you may contact us in writing at privacy@URL or Kensho Technologies LLC, Attn: General Counsel, 55 Water Street, New York, NY 10041.\n\n\nSection 4 - MISCELLANEOUS\n\n\n4.1 Entire Agreement. These Terms of Usage constitute the entire agreement of the parties hereto with respect to the subject matter hereof and supersede all prior agreements and undertakings, both written and oral, between the parties with respect to the subject matter hereof.\n\n\n4.2 Severability. 
If any term or other provision of these Terms of Usage is invalid, illegal or incapable of being enforced by any law or public policy, all other terms and provisions of these Terms of Usage shall nevertheless remain in full force and effect so long as the economic or legal substance of the transactions contemplated hereby is not affected in any manner materially adverse to any party.\n\n\n4.3 Governing Law; Forum. These Terms of Usage shall be governed in all respects by the laws of the State of New York, and any litigation arising out of or connected in any way with these Terms of Usage shall take place in a State or Federal court of competent jurisdiction in New York County, State of New York.\n\n\n4.4 Waiver of Jury Trial. YOU WAIVE TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW ANY RIGHT YOU MAY HAVE TO A TRIAL BY JURY WITH RESPECT TO ANY ACTIONS OR PROCEEDINGS DIRECTLY OR INDIRECTLY ARISING OUT OF, UNDER OR IN CONNECTION WITH THESE TERMS OF USAGE.\n\n\n4.5 Conflict. In the event of a conflict between these Terms of Use and any other agreement with Kensho that relates to Third-Party Content, the more restrictive terms shall prevail." ]
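The Supported Tasks section of the card above names the word error rate (WER) as the most common evaluation metric for ASR on this dataset. As an illustrative aside that is not part of the original card, WER is the word-level Levenshtein (edit) distance between a reference transcript and a model hypothesis, divided by the number of reference words; a minimal, self-contained sketch:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits turning the first i reference words
    # into the first j hypothesis words (classic Levenshtein table).
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)


# One substitution (sat -> sit) and one deletion (the) over 6 reference words.
print(wer("the cat sat on the mat", "the cat sit on mat"))
```

In practice one would rely on an established implementation such as the `jiwer` package or the `wer` metric in Hugging Face `evaluate` rather than re-implementing it, but the computation itself is just this table.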
[ "TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #license-other #arxiv-2104.02014 #region-us \n", "### Example Usage\n\n\nThe training split has several configurations of various sizes: S, M, L. See the Data Splits section\nfor more information. To download the S configuration:\n\n\nIt is possible to download only the development or test data:", "### Supported Tasks and Leaderboards\n\n\n* 'automatic-speech-recognition': The dataset can be used to train a model for Automatic Speech Recognition (ASR).\nThe model is presented with an audio file and asked to transcribe the audio file to written text.\nThe most common evaluation metric is the word error rate (WER).", "### Languages\n\n\nSPGISpeech contains audio and transcription data in business English and offers a variety of L1 and L2 accents.\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\n* wav\\_filename (string) - audio filename (includes parent directory).\n* audio (Audio feature) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate.\nIn non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio\ninside its archive (as files are not downloaded and extracted locally).\n* wav\\_filesize (int) - size of the file in bytes.\n* transcript (string) - transcription of the file.", "### Data Splits\n\n\nThe dataset has three splits: train, evaluation (dev) and test. The train split has three configurations of various sizes:\nS, M, L. 
Larger subsets are supersets of smaller subsets, e.g., the L subset contains all the data from the M subset.", "#### Transcribed Subsets Size\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data\n\n\nThe dataset contains S&P Global company earnings calls.", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\n\n\nEnglish speakers with a diverse selection of accents, including non-native ones (L2), producing both\nspontaneous and narrated speech.", "### Annotations", "#### Annotation process\n\n\nData is orthographically transcribed according to a professional style guide detailing conventions for capitalization, punctuation,\ndenormalization of non-standard words and transcription of disfluencies in spontaneous speech.\nThe transcripts have each been cross-checked by multiple professional editors for high accuracy and are fully formatted.\n\n\nFull earnings calls last 30-60 minutes in length and are typically\ntranscribed as whole units, without internal timestamps. In order to produce short audio slices suitable for STT\ntraining, the files were segmented with Gentle, a double-pass forced aligner,\nwith the beginning and end of each slice of audio imputed by voice activity detection with\npy-webrtc.", "#### Who are the annotators?\n\n\nEarnings calls are manually transcribed by S&P Global, Inc.", "### Personal and Sensitive Information\n\n\nThough earnings calls are public, we nevertheless identified full names with the spaCy en core web large model.\nWe withheld samples containing names that appeared fewer than ten times (7% of total). 
Full\nnames appearing ten times or more in the data were considered to be public figures and were retained.\nThis necessarily incomplete approach to named entity recognition was complemented with randomized manual spot\nchecks which uncovered no false negatives missed by the automated approach.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nPlease cite this paper:", "### Contributions\n\n\nThanks to @sanchit-gandhi, @patrickvonplaten,\nand @polinaeterna for adding this dataset.\n\n\nTerms of Usage\n--------------\n\n\nYour access to and use of the information in the Kensho Transcript Dataset (the “Content”), which is provided by Kensho Technologies, LLC, a subsidiary of S&P Global, Inc., (“Kensho”), shall be governed by the following terms and conditions of usage (“Terms of Usage”). The Content may be accessed only by persons who have been authorized to use this Content pursuant to their acceptance and acknowledgement of these Terms of Usage (in each case, an “Authorized User”). By providing your electronic signature at the end of these Terms of Usage, you represent that you are an Authorized User and that you accept these Terms of Usage and agree to be bound by them.\n\n\nIf you do not wish to be bound by these Terms of Usage, you must not use this Content. 
PLEASE READ THESE TERMS OF USAGE CAREFULLY BEFORE USING THIS CONTENT.\n\n\nSection 1 – THE CONTENT\n\n\n1.1 The Content is provided for academic research purposes and internal use only and must not be used to:\n\n\n* assemble or create a database;\n* construct or facilitate the construction of products which compete with the Content;\n* identify or attempt to identify or contact any individual; or link to another dataset.\n\n\nThe Content, which is comprised of public earnings calls in audio and corresponding text format, and all accompanying derived products is proprietary to Kensho and its third-party content providers. You shall not modify the Content; create derivative works based on the Content, rewrite or reprocess the Content except as expressly provided herein. You must not publish, display, transfer or redistribute the Content or any portions or derivative versions thereof to anyone without prior written consent from Kensho. You agree not to contact Kensho or its affiliates concerning individuals whose information may be included in the Content.\n\n\n1.2 Disclaimer. Content to which you are provided access, either directly or indirectly, from or on this Content will not have been reviewed or monitored by Kensho, and Kensho cannot and does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any such content.\n\n\nThe Content is provided for your convenience only and is not a republication or reconfirmation of the opinion or information contained therein. The provision of the Content is without any obligation on the part of Kensho or its third-party content providers to review such or any liability or responsibility arising out of your use thereof. 
Kensho does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any Content and shall not be liable for any errors, delays, or actions taken in reliance on information. In addition, the Content speaks only as of the date issued and is based on conference calls that may contain projections or other forward-looking statements. You should not rely on the Content as expressing Kensho’s opinion or as representing current information. None of Kensho or the third-party content providers has undertaken, and do not undertake any duty to update any Content or otherwise advise you of any changes in the Content.\n\n\n1.3 Ownership of Third-Party Content. You acknowledge that all proprietary rights in the Content that are owned by Kensho or third party content providers shall remain the property of Kensho or such third party content providers, and you shall have no right or interest in such third party content except the rights to use such third party content in accordance with these Terms of Usage. Any additional rights not granted herein shall require a separate, direct agreement with Kensho. You acknowledge that the Content and third party content as compiled, prepared, selected and arranged by Kensho or its third party content providers constitutes an expenditure of substantial time, effort and money by Kensho and its third party content providers and constitutes valuable commercial property and/or trade secrets of Kensho and such third party content providers. Kensho retains all rights and remedies afforded under the copyright, trademark, service mark, patent and other laws of the United States and the States thereof, including without limitation any laws designed to protect proprietary or confidential information. 
You agree that you will not remove or modify any copyright notice, disclosures, disclaimers or other notification or trade name or marks of Kensho or the third party content providers that may appear in the Content or third party content and that any permitted reproduction and/or distribution of the Content or third party content shall contain such notices and/or marks as they appear in the Content or third party content. You may not use Kensho’s or the third-party content providers’ name or trademarks without the prior written consent of Kensho or such third-party content providers. Apart from the rights granted hereunder, no conveyance of ownership, right, title or interest is intended herein. Any additional rights require a separate agreement with Kensho.\n\n\n1.4 Posted Guidelines. In addition to these Terms of Usage, when using this Content, you shall be subject to and agree to follow any posted notice, guidelines or rules, which may be posted and amended from time to time. Nothing on this Content shall be considered a recommendation or solicitation to buy or an offer to sell a security to any person in any jurisdiction.\n\n\n1.5 Registration Data. In consideration of your use of this Content, you and/or your employer agree to: (a) provide true, accurate, current and complete Registration Data (as defined below in Section 3.1) to Kensho as prompted by the registration form completed prior to accessing the Content and (b) maintain and promptly update the Registration Data and to keep the same true, accurate, current and complete.\n\n\n1.6 Right to Terminate User Access. Kensho reserves the right to limit, restrict and immediately terminate your access to and use of this Content at any time, in whole or in part, in its sole discretion and without notice.\n\n\nSection 2 - DISCLAIMER OF WARRANTY AND LIMITATION OF LIABILITY\n\n\n2.1 THE CONTENT IS PROVIDED “AS IS” AND “AS AVAILABLE” WITHOUT REPRESENTATION OR WARRANTY OF ANY KIND. 
USE OF THE CONTENT IS AT THE USER’S OWN RISK. IN NO EVENT SHALL KENSHO OR ITS THIRD-PARTY CONTENT PROVIDERS BE LIABLE FOR ANY DECISION MADE OR ACTION OR INACTION TAKEN IN RELIANCE ON ANY CONTENT, INCLUDING THIRD-PARTY CONTENT, INCLUDING YOUR HANDLING AND STORING OF THE CONTENT. KENSHO FURTHER EXPLICITLY DISCLAIMS, ANY WARRANTY OF ANY KIND, WHETHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OF ORIGINALITY, ACCURACY, COMPLETENESS, TIMELINESS, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. KENSHO EXPRESSLY DISCLAIMS, AND YOU WAIVE, ANY LIABILITY THAT MAY ARISE FROM YOUR PUBLICATION OR PROVISION OF THE CONTENT TO A THIRD PARTY, OR ANY REPRESENTATION OR WARRANTY MADE BY YOU TO ANY THIRD PARTY, WHETHER OR NOT RELATED TO THE CONTENT. KENSHO, SUPPLIERS OF THIRD-PARTY CONTENT AND ANY OTHER THIRD PARTY WORKING WITH KENSHO SHALL NOT BE RESPONSIBLE OR LIABLE, DIRECTLY OR INDIRECTLY, FOR ANY DAMAGES OR LOSS (INCLUDING DIRECT, INDIRECT, INCIDENTAL, CONSEQUENTIAL AND ANY AND ALL OTHER FORMS OF DAMAGES OR LOSSES REGARDLESS OF THE FORM OF THE ACTION OR THE BASIS OF THE CLAIM) CAUSED OR ALLEGED TO BE CAUSED IN CONNECTION WITH YOUR USE OF THE CONTENT WHETHER OR NOT FORESEEABLE, EVEN IF KENSHO OR ANY OF THE SUPPLIERS OF THIRD-PARTY CONTENT OR OTHER THIRD PARTIES WORKING WITH KENSHO IN CONNECTION WITH THE CONTENT HAS BEEN ADVISED OF THE POSSIBILITY OR LIKELIHOOD OF SUCH DAMAGES.\n\n\n2.2 THE CONTENT IS NOT INTENDED TO PROVIDE TAX, LEGAL, INSURANCE OR INVESTMENT ADVICE, AND NOTHING IN THE CONTENT SHOULD BE CONSTRUED AS AN OFFER TO SELL, A SOLICITATION OF AN OFFER TO BUY, OR A RECOMMENDATION FOR ANY SECURITY BY KENSHO OR ANY THIRD PARTY.\n\n\n2.3 For third party demands, claims, actions, proceedings and liability for losses, damages, reasonable legal costs and other reasonable expenses of any nature, you agree to defend, indemnify and hold Kensho and its affiliates harmless, including its respective directors, officers, employees and agents from and against all 
claims to the extent arising from your access to and/or use of the Content, any failure by you to abide by the Terms of Usage, or breach of applicable law.\n\n\nSection 3 - PRIVACY\n\n\n3.1 Access and Collection. In order to access this Content, during the registration process, either you or your employer will be required to provide Kensho with certain information; including your name, employer or academic institution, and e-mail address (“Registration Data”). In addition, when you request or view Content, Kensho may obtain user identifiable information related to your request of, or access to, such Content (“Access Data”). For example, while you are accessing this Content, our Web servers may recognize your: (a) domain name; (b) ISP’s domain name; (c) IP address; (d) browser type; and (e) operating system. If you contact us with a technical question, we may collect certain information about your systems, including: (a) your browser type, version and settings (e.g., Java and cookie settings); (b) connectivity information (e.g., SSL/HTTPS compatibility, bandwidth capacity); and browser plug-in information (e.g., do you have Adobe, what is your media player, can you open Flash files, etc.).\n\n\n3.2 Use of Your Information. Registration Data and Access Data may be used by Kensho for research and development purposes and to communicate with users and to troubleshoot any technical issues pertaining to the Content. You acknowledge that in the event that a separate agreement is required, Kensho may share Registration Data with its Affiliates (as defined below).\n\n\n3.3 Disclosure of Your Information. Except as otherwise noted herein, Kensho will not disclose, rent or sell personal information collected from or about you without your permission. For the purposes specified in the preceding paragraph, we may transfer or disclose Registration Data and Access Data to S&P Global Inc. 
and its affiliates (“Kensho Affiliates”) and third parties who are contracted to perform services on behalf of Kensho, such as those who assist Kensho in bringing you this Content and providing you with certain features and functionality included within or accessible via this Content. We may also disclose Registration Data and Access Data to Kensho Affiliates and third parties in connection with their providing you access to this Content. Disclosures to these third parties will be subject to confidentiality agreements and, where required, governed by contract. Kensho may also be required to disclose information to governmental, regulatory or self-regulatory entities or agencies in response to regulatory inquiries or to comply with applicable laws, rules, regulations, orders, subpoenas or other legal processes.\n\n\n3.4 Consent. By (a) agreeing to these Terms of Usage, or (b) by using this Content, and, in either case, providing any information that may be required, requested or otherwise collected by us as set forth above, you freely consent to Kensho processing your information in the United States and in other countries and territories for the purposes set out in these Terms of Usage, and you also consent to the transfer of your information for such purposes to any third party content provider wherever such entity may from time to time be located and to any third parties as described above and in accordance with applicable law and regulations. If you do not permit Kensho to collect any of your information or do not agree with any of the terms and conditions of these Terms of Usage, you should not use this Content and should exit this page and/or Content, as the case may be. If after registering with Kensho, you desire to withdraw the consent granted in this Section 3.4 for all future use of your information by Kensho, you must notify Kensho in writing at the address listed below in Section 3.8 and immediately cease use of this Content.\n\n\n3.5 Inquiries. 
If you have any questions regarding these Terms of Usage or your information that is held by us, please contact Kensho in writing using the contact information provided below. If we receive a request regarding your personal information held by us, we will use reasonable means to provide you with such information that we can reasonably compile. You will be given the opportunity to rectify any inaccuracies in such information.\n\n\n3.6 Encryption. Kensho may use encryption technology to protect certain transmissions of data to/from this Content, but e-mail and other communications, unless otherwise noted on this Content, are not encrypted to/from this Content. Therefore, you should not send any personal or identifying information, such as account numbers, credit card numbers, Social Security numbers, passwords, etc., to Kensho via e-mail. By utilizing e-mail or other electronic communication means you acknowledge that you have no expectation of privacy with respect to the information delivered thereby and that Kensho will not be responsible for any loss or damage that could result from interception by third parties of any information so sent.\n\n\n3.7 Contact Information. In the event you have any questions regarding these Terms of Use, this Privacy Statement or to make any requests or queries regarding your information that is held by us you may contact us in writing at privacy@URL or Kensho Technologies LLC, Attn: General Counsel, 55 Water Street, New York, NY 10041.\n\n\nSection 4 - MISCELLANEOUS\n\n\n4.1 Entire Agreement. These Terms of Usage constitute the entire agreement of the parties hereto with respect to the subject matter hereof and supersede all prior agreements and undertakings, both written and oral, between the parties with respect to the subject matter hereof.\n\n\n4.2 Severability. 
If any term or other provision of these Terms of Usage is invalid, illegal or incapable of being enforced by any law or public policy, all other terms and provisions of these Terms of Usage shall nevertheless remain in full force and effect so long as the economic or legal substance of the transactions contemplated hereby is not affected in any manner materially adverse to any party.\n\n\n4.3 Governing Law; Forum. These Terms of Usage shall be governed in all respects by the laws of the State of New York, and any litigation arising out of or connected in any way with these Terms of Usage shall take place in a State or Federal court of competent jurisdiction in New York County, State of New York.\n\n\n4.4 Waiver of Jury Trial. YOU WAIVE TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW ANY RIGHT YOU MAY HAVE TO A TRIAL BY JURY WITH RESPECT TO ANY ACTIONS OR PROCEEDINGS DIRECTLY OR INDIRECTLY ARISING OUT OF, UNDER OR IN CONNECTION WITH THESE TERMS OF USAGE.\n\n\n4.5 Conflict. In the event of a conflict between these Terms of Use and any other agreement with Kensho that relates to Third-Party Content, the more restrictive terms shall prevail." ]
2f9aa77e76373edaf9fd26f2b4b42a14d230c956
# Dataset Card for Images of Cervical Cells with AgNOR Stain Technique ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [CCAgT homepage](https://data.mendeley.com/datasets/wg4bpm33hj/) - **Repository:** [CCAgT-utils](https://github.com/johnnv1/CCAgT-utils) - **Paper:** [Semantic Segmentation for the Detection of Very Small Objects on Cervical Cell Samples Stained with the AgNOR Technique](https://dx.doi.org/10.2139/ssrn.4126881) - **Leaderboard:** [Needs More Information] - **Point of Contact:** [João G. A. Amorim](mailto:[email protected]) ### Dataset Summary The CCAgT (Images of Cervical Cells with AgNOR Stain Technique) dataset contains 9339 images (1600x1200 resolution where each pixel is 0.111µm × 0.111µm) from 15 different slides stained using the AgNOR technique. Each image has at least one label. In total, this dataset has more than 63K instances of annotated objects. 
The images are from the patients of the Gynecology and Colposcopy Outpatient Clinic of the [Polydoro Ernani de São Thiago University Hospital of the Universidade Federal de Santa Catarina (HU-UFSC)](https://unihospital.ufsc.br/). ### Supported Tasks and Leaderboards - `image-segmentation`: The dataset can be used to train a model for semantic segmentation or instance segmentation. Semantic segmentation consists of classifying each pixel of the image. Success on this task is typically measured by achieving high values of [mean iou](https://huggingface.co/spaces/evaluate-metric/mean_iou) or [f-score](https://huggingface.co/spaces/evaluate-metric/f1) for pixel results. Instance segmentation consists of doing object detection first and then using a semantic segmentation model inside detected objects. For instance results, this task is typically measured by achieving high values of [recall](https://huggingface.co/spaces/evaluate-metric/recall), [precision](https://huggingface.co/spaces/evaluate-metric/precision) and [f-score](https://huggingface.co/spaces/evaluate-metric/f1). - `object-detection`: The dataset can be used to train a model for object detection to detect the nuclei categories or the nucleolus organizer regions (NORs), which consists of locating instances of objects and then classifying each one. This task is typically measured by achieving high values of [recall](https://huggingface.co/spaces/evaluate-metric/recall), [precision](https://huggingface.co/spaces/evaluate-metric/precision) and [f-score](https://huggingface.co/spaces/evaluate-metric/f1). ### Languages The class labels in the dataset are in English. 
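As a small, self-contained illustration of the pixel-level metric mentioned above (this sketch is not part of the dataset or the CCAgT-utils tooling; the function name and the toy masks are made up), per-class IoU between two integer label masks, as produced for the semantic segmentation configuration, can be computed like this:

```python
import numpy as np

def per_class_iou(pred: np.ndarray, target: np.ndarray, num_classes: int = 8) -> list:
    """IoU per class id in [0, num_classes); NaN when a class is absent from both masks."""
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            ious.append(float("nan"))  # class not present in either mask
        else:
            ious.append(np.logical_and(pred_c, target_c).sum() / union)
    return ious

# Toy 2x2 masks using only BACKGROUND (0) and NUCLEUS (1)
pred = np.array([[0, 1], [0, 1]])
target = np.array([[0, 1], [1, 1]])
ious = per_class_iou(pred, target, num_classes=2)
# class 0: intersection 1 / union 2 = 0.5; class 1: intersection 2 / union 3 = 2/3
```

Averaging the non-NaN entries gives the mean IoU commonly reported for this task.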
## Dataset Structure ### Data Instances An example looks like the one below: #### `semantic segmentation` (default configuration) ``` { 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1200x1600 at 0x276021C5EB8>, 'annotation': <PIL.PngImagePlugin.PngImageFile image mode=L size=1200x1600 at 0x385021C5ED7> } ``` #### `object detection` ``` { 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1200x1600 at 0x276021C5EB8>, 'objects': { 'bbox': [ [36, 7, 13, 32], [50, 7, 12, 32] ], 'label': [1, 5] } } ``` #### `instance segmentation` ``` { 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1200x1600 at 0x276021C5EB8>, 'objects': { 'bbox': [ [13.3, 7.5, 47.6, 38.3], [10.2, 7.5, 50.7, 38.3] ], 'segment': [ [[36.2, 7.5, 13.3, 32.1, 52.1, 40.6, 60.9, 45.8, 50.1, 40, 40, 33.2, 35.2]], [[10.2, 7.5, 10.3, 32.1, 52.1, 40.6, 60.9, 45.8, 50.1, 40, 40, 33.2, 35.2]], ], 'label': [1, 5] } } ``` ### Data Fields The data annotations have the following fields: #### `semantic segmentation` (default configuration) - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`. - `annotation`: A `PIL.Image.Image` object containing the annotation mask. The mask has a single channel and the following pixel values are possible: `BACKGROUND` (0), `NUCLEUS` (1), `CLUSTER` (2), `SATELLITE` (3), `NUCLEUS_OUT_OF_FOCUS` (4), `OVERLAPPED_NUCLEI` (5), `NON_VIABLE_NUCLEUS` (6) and `LEUKOCYTE_NUCLEUS` (7). #### `object detection` - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. 
Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`. - `objects`: a dictionary containing bounding boxes and labels of the cell objects - `bbox`: a list of bounding boxes (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format) corresponding to the objects present on the image - `label`: a list of integers representing the category (7 categories to describe the objects in total; two to differentiate nucleolus organizer regions), with the possible values including `NUCLEUS` (0), `CLUSTER` (1), `SATELLITE` (2), `NUCLEUS_OUT_OF_FOCUS` (3), `OVERLAPPED_NUCLEI` (4), `NON_VIABLE_NUCLEUS` (5) and `LEUKOCYTE_NUCLEUS` (6). #### `instance segmentation` - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`. - `objects`: a dictionary containing bounding boxes and labels of the cell objects - `bbox`: a list of bounding boxes (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format) corresponding to the objects present on the image - `segment`: a list of segments in format of `[polygon_0, ..., polygon_n]`, where each polygon is `[x0, y0, ..., xn, yn]`. 
- `label`: a list of integers representing the category (7 categories to describe the objects in total; two to differentiate nucleolus organizer regions), with the possible values including `NUCLEUS` (0), `CLUSTER` (1), `SATELLITE` (2), `NUCLEUS_OUT_OF_FOCUS` (3), `OVERLAPPED_NUCLEI` (4), `NON_VIABLE_NUCLEUS` (5) and `LEUKOCYTE_NUCLEUS` (6). ### Data Splits The data is split randomly using the fixed seed into training, test and validation set. The training data contains 70% of the images and the testing and the validation data contain 15% of the images each. In total, the training set contains 6533 images and the testing and the validation set 1403 images each. <details> <summary> Click here to see additional statistics: </summary> | Slide id | Diagnostics | images | annotations | NUCLEUS | CLUSTER | SATELLITE | NUCLEUS_OUT_OF_FOCUS | OVERLAPPED_NUCLEI | NON_VIABLE_NUCLEUS | LEUKOCYTE_NUCLEUS | | :-------: | :---------: | :----: | :---------: | :-----: | :------: | :-------: | :------------------: | :---------------: | :---------------: | :-------: | | A | CIN 3 | 1311 | 3164 | 763 | 1038 | 922 | 381 | 46 | 14 | 0 | | B | SCC | 561 | 911 | 224 | 307 | 112 | 132 | 5 | 1 | 130 | | C | AC | 385 | 11420 | 2420 | 3584 | 1112 | 1692 | 228 | 477 | 1907 | | D | CIN 3 | 2125 | 1258 | 233 | 337 | 107 | 149 | 12 | 8 | 412 | | E | CIN 3 | 506 | 11131 | 2611 | 6249 | 1648 | 476 | 113 | 34 | 0 | | F | CIN 1 | 318 | 3365 | 954 | 1406 | 204 | 354 | 51 | 326 | 70 | | G | CIN 2 | 249 | 2759 | 691 | 1279 | 336 | 268 | 49 | 51 | 85 | | H | CIN 2 | 650 | 5216 | 993 | 983 | 425 | 2562 | 38 | 214 | 1 | | I | No lesion | 309 | 474 | 56 | 55 | 19 | 170 | 2 | 23 | 149 | | J | CIN 1 | 261 | 1786 | 355 | 304 | 174 | 743 | 18 | 33 | 159 | | K | No lesion | 1503 | 13102 | 2464 | 6669 | 638 | 620 | 670 | 138 | 1903 | | L | CIN 2 | 396 | 3289 | 842 | 796 | 387 | 1209 | 27 | 23 | 5 | | M | CIN 2 | 254 | 1500 | 357 | 752 | 99 | 245 | 16 | 12 | 19 | | N | CIN 3 | 248 | 911 | 258 | 402 | 67 | 136 | 
10 | 6 | 32 | | O | AC | 262 | 2904 | 792 | 1549 | 228 | 133 | 88 | 52 | 62 | | **Total** | - | 9339 | 63190 | 14013 | 25710 | 6478 | 9270 | 1373 | 1412 | 4934 | Lesion types: - Cervical intraepithelial neoplasia 1 - CIN 1 - Cervical intraepithelial neoplasia 2 - CIN 2 - Cervical intraepithelial neoplasia 3 - CIN 3 - Squamous cell carcinoma - SCC - Adenocarcinoma - AC - No lesion </details> ## Dataset Creation ### Curation Rationale CCAgT was built to provide a dataset for machines to learn how to identify nucleus and nucleolus organizer regions (NORs). ### Source Data #### Initial Data Collection and Normalization The images are collected as patches/tiles of whole slide images (WSIs) from cervical samples stained with AgNOR technique to allow the detection of nucleolus organizer regions (NORs). NORs are DNA loops containing genes responsible for the transcription of ribosomal RNA located in the cell nucleolus. They contain a set of argyrophilic proteins, selectively stained by silver nitrate, which can be identified as black dots located throughout the nucleoli area and called AgNORs. #### Who are the source language producers? The dataset was built using images from examinations (a gynecological exam, colposcopy and biopsy) of 15 women patients who were treated at the Gynecology and Colposcopy Outpatient Clinic of the [University Hospital Professor Polydoro Ernani de São Thiago of Federal University of Santa Catarina (HU-UFSC)](https://unihospital.ufsc.br/) and had 6 different diagnoses in their oncological exams. The samples were collected by the members of the Clinical Analyses Department: Ane Francyne Costa, Fabiana Botelho De Miranda Onofre, and Alexandre Sherlley Casimiro Onofre. ### Annotations #### Annotation process The instances were annotated using the [labelbox](https://labelbox.com/) tool. The satellite category was labeled as a single dot, and the other categories were labeled as polygons. After the annotation process, all annotations were reviewed. 
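Since the polygon annotations correspond to the `segment` field described earlier (flat `[x0, y0, ..., xn, yn]` lists) and the `bbox` field uses the COCO `[x, y, width, height]` convention, the two are related by a simple reduction. The helper below is an illustrative sketch under those assumptions, not part of the dataset tooling:

```python
def polygon_to_coco_bbox(polygon: list) -> list:
    """Flat [x0, y0, x1, y1, ...] polygon -> COCO [x, y, width, height] bbox."""
    xs = polygon[0::2]  # even indices are x coordinates
    ys = polygon[1::2]  # odd indices are y coordinates
    x_min, y_min = min(xs), min(ys)
    return [x_min, y_min, max(xs) - x_min, max(ys) - y_min]

poly = [10.0, 5.0, 30.0, 5.0, 30.0, 25.0, 10.0, 25.0]  # a 20x20 square
bbox = polygon_to_coco_bbox(poly)  # [10.0, 5.0, 20.0, 20.0]
```

Applying it to each polygon of a `segment` list yields boxes comparable to the `bbox` entries.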
#### Who are the annotators? Members of the Clinical Analyses Department and the Image Processing and Computer Graphics Lab. — LAPiX from [Universidade Federal de Santa Catarina (UFSC)](https://en.ufsc.br/). - Tainee Bottamedi - Vinícius Sanches - João H. Telles de Carvalho - Ricardo Thisted ### Personal and Sensitive Information This research was approved by the UFSC Research Ethics Committee (CEPSH), protocol number 57423616.3.0000.0121. All involved patients were informed about the study's objectives, and those who agreed to participate signed an informed consent form. ## Considerations for Using the Data ### Social Impact of Dataset This dataset's purpose is to help spread the use of the AgNOR technique as a support method for cancer diagnosis, since this method is not standardized among pathologists. ### Discussion of Biases [More Information Needed] ### Other Known Limitations Satellite annotation is not as accurate for pixel-level representation due to single-point annotations. ## Additional Information ### Dataset Curators Members of the Clinical Analyses Department from [Universidade Federal de Santa Catarina (UFSC)](https://en.ufsc.br/) collected the dataset samples: Ane Francyne Costa, Fabiana Botelho De Miranda Onofre, and Alexandre Sherlley Casimiro Onofre. ### Licensing Information The files associated with this dataset are licensed under an [Attribution-NonCommercial 3.0 Unported](https://creativecommons.org/licenses/by-nc/3.0/) license. Users are free to adapt, copy or redistribute the material as long as they attribute it appropriately and do not use it for commercial purposes. 
### Citation Information ```bibtex % Dataset official page @misc{CCAgTDataset, doi = {10.17632/WG4BPM33HJ.2}, url = {https://data.mendeley.com/datasets/wg4bpm33hj/2}, author = {Jo{\~{a}}o Gustavo Atkinson Amorim and Andr{\'{e}} Vict{\'{o}}ria Matias and Tainee Bottamedi and Vin{\'{i}}cius Sanches and Ane Francyne Costa and Fabiana Botelho De Miranda Onofre and Alexandre Sherlley Casimiro Onofre and Aldo von Wangenheim}, title = {CCAgT: Images of Cervical Cells with AgNOR Stain Technique}, publisher = {Mendeley}, year = {2022}, copyright = {Attribution-NonCommercial 3.0 Unported} } % Dataset second version % pre-print: @article{AtkinsonAmorim2022, doi = {10.2139/ssrn.4126881}, url = {https://doi.org/10.2139/ssrn.4126881}, year = {2022}, publisher = {Elsevier {BV}}, author = {Jo{\~{a}}o Gustavo Atkinson Amorim and Andr{\'{e}} Vict{\'{o}}ria Matias and Allan Cerentini and Fabiana Botelho de Miranda Onofre and Alexandre Sherlley Casimiro Onofre and Aldo von Wangenheim}, title = {Semantic Segmentation for the Detection of Very Small Objects on Cervical Cell Samples Stained with the {AgNOR} Technique}, journal = {{SSRN} Electronic Journal} } % Dataset first version % Link: https://arquivos.ufsc.br/d/373be2177a33426a9e6c/ % Paper: @inproceedings{AtkinsonSegmentationAgNORCBMS2020, author={Jo{\~{a}}o Gustavo Atkinson Amorim and Luiz Antonio Buschetto Macarini and Andr{\'{e}} Vict{\'{o}}ria Matias and Allan Cerentini and Fabiana Botelho De Miranda Onofre and Alexandre Sherlley Casimiro Onofre and Aldo von Wangenheim}, booktitle={2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS)}, title={A Novel Approach on Segmentation of AgNOR-Stained Cytology Images Using Deep Learning}, year={2020}, pages={552-557}, doi={10.1109/CBMS49503.2020.00110}, url={https://doi.org/10.1109/CBMS49503.2020.00110} } ``` ### Contributions Thanks to [@johnnv1](https://github.com/johnnv1) for adding this dataset.
lapix/CCAgT
[ "task_categories:image-segmentation", "task_categories:object-detection", "task_ids:semantic-segmentation", "task_ids:instance-segmentation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-nc-3.0", "region:us" ]
2022-07-01T11:27:09+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-nc-3.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["image-segmentation", "object-detection"], "task_ids": ["semantic-segmentation", "instance-segmentation"], "pretty_name": "Images of Cervical Cells with AgNOR Stain Technique"}
2022-07-27T20:11:52+00:00
[]
[ "en" ]
TAGS #task_categories-image-segmentation #task_categories-object-detection #task_ids-semantic-segmentation #task_ids-instance-segmentation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-nc-3.0 #region-us
Dataset Card for Images of Cervical Cells with AgNOR Stain Technique ==================================================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: CCAgT homepage * Repository: CCAgT-utils * Paper: Semantic Segmentation for the Detection of Very Small Objects on Cervical Cell Samples Stained with the AgNOR Technique * Leaderboard: * Point of Contact: João G. A. Amorim ### Dataset Summary The CCAgT (Images of Cervical Cells with AgNOR Stain Technique) dataset contains 9339 images (1600x1200 resolution where each pixel is 0.111µm × 0.111µm) from 15 different slides stained using the AgNOR technique. Each image has at least one label. In total, this dataset has more than 63K instances of annotated objects. The images are from the patients of the Gynecology and Colposcopy Outpatient Clinic of the Polydoro Ernani de São Thiago University Hospital of the Universidade Federal de Santa Catarina (HU-UFSC). ### Supported Tasks and Leaderboards * 'image-segmentation': The dataset can be used to train a model for semantic segmentation or instance segmentation. Semantic segmentation consists of classifying each pixel of the image. Success on this task is typically measured by achieving high values of mean iou or f-score for pixel results. Instance segmentation consists of doing object detection first and then using a semantic segmentation model inside detected objects. 
For instance results, this task is typically measured by achieving high values of recall, precision and f-score. * 'object-detection': The dataset can be used to train a model for object detection to detect the nuclei categories or the nucleolus organizer regions (NORs), which consists of locating instances of objects and then classifying each one. This task is typically measured by achieving high values of recall, precision and f-score. ### Languages The class labels in the dataset are in English. Dataset Structure ----------------- ### Data Instances An example looks like the one below: #### 'semantic segmentation' (default configuration) #### 'object detection' #### 'instance segmentation' ### Data Fields The data annotations have the following fields: #### 'semantic segmentation' (default configuration) * 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0]["image"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '"image"' column, *i.e.* 'dataset[0]["image"]' should always be preferred over 'dataset["image"][0]'. * 'annotation': A 'PIL.Image.Image' object containing the annotation mask. The mask has a single channel and the following pixel values are possible: 'BACKGROUND' (0), 'NUCLEUS' (1), 'CLUSTER' (2), 'SATELLITE' (3), 'NUCLEUS\_OUT\_OF\_FOCUS' (4), 'OVERLAPPED\_NUCLEI' (5), 'NON\_VIABLE\_NUCLEUS' (6) and 'LEUKOCYTE\_NUCLEUS' (7). #### 'object detection' * 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0]["image"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. 
Thus it is important to first query the sample index before the '"image"' column, *i.e.* 'dataset[0]["image"]' should always be preferred over 'dataset["image"][0]'. * 'objects': a dictionary containing bounding boxes and labels of the cell objects + 'bbox': a list of bounding boxes (in the coco format) corresponding to the objects present on the image + 'label': a list of integers representing the category (7 categories to describe the objects in total; two to differentiate nucleolus organizer regions), with the possible values including 'NUCLEUS' (0), 'CLUSTER' (1), 'SATELLITE' (2), 'NUCLEUS\_OUT\_OF\_FOCUS' (3), 'OVERLAPPED\_NUCLEI' (4), 'NON\_VIABLE\_NUCLEUS' (5) and 'LEUKOCYTE\_NUCLEUS' (6). #### 'instance segmentation' * 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0]["image"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '"image"' column, *i.e.* 'dataset[0]["image"]' should always be preferred over 'dataset["image"][0]'. * 'objects': a dictionary containing bounding boxes and labels of the cell objects + 'bbox': a list of bounding boxes (in the coco format) corresponding to the objects present on the image + 'segment': a list of segments in format of '[polygon\_0, ..., polygon\_n]', where each polygon is '[x0, y0, ..., xn, yn]'. + 'label': a list of integers representing the category (7 categories to describe the objects in total; two to differentiate nucleolus organizer regions), with the possible values including 'NUCLEUS' (0), 'CLUSTER' (1), 'SATELLITE' (2), 'NUCLEUS\_OUT\_OF\_FOCUS' (3), 'OVERLAPPED\_NUCLEI' (4), 'NON\_VIABLE\_NUCLEUS' (5) and 'LEUKOCYTE\_NUCLEUS' (6). ### Data Splits The data is split randomly using the fixed seed into training, test and validation set. 
The training data contains 70% of the images and the testing and the validation data contain 15% of the images each. In total, the training set contains 6533 images and the testing and the validation set 1403 images each. Click here to see additional statistics: Lesion types: * Cervical intraepithelial neoplasia 1 - CIN 1 * Cervical intraepithelial neoplasia 2 - CIN 2 * Cervical intraepithelial neoplasia 3 - CIN 3 * Squamous cell carcinoma - SCC * Adenocarcinoma - AC * No lesion Dataset Creation ---------------- ### Curation Rationale CCAgT was built to provide a dataset for machines to learn how to identify nucleus and nucleolus organizer regions (NORs). ### Source Data #### Initial Data Collection and Normalization The images are collected as patches/tiles of whole slide images (WSIs) from cervical samples stained with AgNOR technique to allow the detection of nucleolus organizer regions (NORs). NORs are DNA loops containing genes responsible for the transcription of ribosomal RNA located in the cell nucleolus. They contain a set of argyrophilic proteins, selectively stained by silver nitrate, which can be identified as black dots located throughout the nucleoli area and called AgNORs. #### Who are the source language producers? The dataset was built using images from examinations (a gynecological exam, colposcopy and biopsy) of 15 women patients who were treated at the Gynecology and Colposcopy Outpatient Clinic of the University Hospital Professor Polydoro Ernani de São Thiago of Federal University of Santa Catarina (HU-UFSC) and had 6 different diagnoses in their oncological exams. The samples were collected by the members of the Clinical Analyses Department: Ane Francyne Costa, Fabiana Botelho De Miranda Onofre, and Alexandre Sherlley Casimiro Onofre. ### Annotations #### Annotation process The instances were annotated using the labelbox tool. The satellite category was labeled as a single dot, and the other categories were labeled as polygons. 
After the annotation process, all annotations were reviewed. #### Who are the annotators? Members of the Clinical Analyses Department and the Image Processing and Computer Graphics Lab. — LAPiX from Universidade Federal de Santa Catarina (UFSC). * Tainee Bottamedi * Vinícius Sanches * João H. Telles de Carvalho * Ricardo Thisted ### Personal and Sensitive Information This research was approved by the UFSC Research Ethics Committee (CEPSH), protocol number 57423616.3.0000.0121. All involved patients were informed about the study's objectives, and those who agreed to participate signed an informed consent form. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset This dataset's purpose is to help spread the AgNOR as a support method for cancer diagnosis since this method is not standardized among pathologists. ### Discussion of Biases ### Other Known Limitations Satellite annotation is not as accurate for pixel-level representation due to single-point annotations. Additional Information ---------------------- ### Dataset Curators Members of the Clinical Analyses Department from Universidade Federal de Santa Catarina (UFSC) collected the dataset samples: Ane Francyne Costa, Fabiana Botelho De Miranda Onofre, and Alexandre Sherlley Casimiro Onofre. ### Licensing Information The files associated with this dataset are licensed under an Attribution-NonCommercial 3.0 Unported license. Users are free to adapt, copy or redistribute the material as long as they attribute it appropriately and do not use it for commercial purposes. ### Contributions Thanks to @johnnv1 for adding this dataset.
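As a small illustration of the semantic-segmentation label encoding listed under Data Fields above, the sketch below tallies per-class pixel counts from an annotation mask. The mask here is a toy nested list standing in for a real annotation image converted to an array; only the class-code table is taken from the card (plain Python, no third-party dependencies):

```python
from collections import Counter

# Pixel value -> class name, as listed under "Data Fields" above.
CCAGT_CLASSES = [
    "BACKGROUND",            # 0
    "NUCLEUS",               # 1
    "CLUSTER",               # 2
    "SATELLITE",             # 3
    "NUCLEUS_OUT_OF_FOCUS",  # 4
    "OVERLAPPED_NUCLEI",     # 5
    "NON_VIABLE_NUCLEUS",    # 6
    "LEUKOCYTE_NUCLEUS",     # 7
]

# Toy 2x3 mask standing in for a decoded annotation image.
mask = [[0, 1, 1],
        [2, 7, 0]]

# Per-class pixel counts -- a quick way to inspect class balance in a patch.
pixel_counts = Counter(CCAGT_CLASSES[value] for row in mask for value in row)
print(dict(pixel_counts))
# -> {'BACKGROUND': 2, 'NUCLEUS': 2, 'CLUSTER': 1, 'LEUKOCYTE_NUCLEUS': 1}
```

On real samples one would first convert the 'PIL.Image.Image' annotation to an array (e.g. with 'numpy.asarray') before counting.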
[ "### Dataset Summary\n\n\nThe CCAgT (Images of Cervical Cells with AgNOR Stain Technique) dataset contains 9339 images (1600x1200 resolution where each pixel is 0.111µmX0.111µm) from 15 different slides stained using the AgNOR technique. Each image has at least one label. In total, this dataset has more than 63K instances of annotated object. The images are from the patients of the Gynecology and Colonoscopy Outpatient Clinic of the Polydoro Ernani de São Thiago University Hospital of the Universidade Federal de Santa Catarina (HU-UFSC).", "### Supported Tasks and Leaderboards\n\n\n* 'image-segmentation': The dataset can be used to train a model for semantic segmentation or instance segmentation. Semantic segmentation consists in classifying each pixel of the image. Success on this task is typically measured by achieving high values of mean iou or f-score for pixels results. Instance segmentation consists of doing object detection first and then using a semantic segmentation model inside detected objects. For instances results, this task is typically measured by achieving high values of recall, precision and f-score.\n* 'object-detection': The dataset can be used to train a model for object detection to detect the nuclei categories or the nucleolus organizer regions (NORs), which consists of locating instances of objects and then classifying each one. This task is typically measured by achieving a high values of recall, precision and f-score.", "### Languages\n\n\nThe class labels in the dataset are in English.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example looks like the one below:", "#### 'semantic segmentation' (default configuration)", "#### 'object detection'", "#### 'instance segmentation'", "### Data Fields\n\n\nThe data annotations have the following fields:", "#### 'semantic segmentation' (default configuration)\n\n\n* 'image': A 'PIL.Image.Image' object containing the image. 
Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n* 'annotation': A 'PIL.Image.Image' object containing the annotation mask. The mask has a single channel and the following pixel values are possible: 'BACKGROUND' (0), 'NUCLEUS' (1), 'CLUSTER' (2), 'SATELLITE' (3), 'NUCLEUS\\_OUT\\_OF\\_FOCUS' (4), 'OVERLAPPED\\_NUCLEI' (5), 'NON\\_VIABLE\\_NUCLEUS' (6) and 'LEUKOCYTE\\_NUCLEUS' (7).", "#### 'object detection'\n\n\n* 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n* 'objects': a dictionary containing bounding boxes and labels of the cell objects\n\t+ 'bbox': a list of bounding boxes (in the coco format) corresponding to the objects present on the image\n\t+ 'label': a list of integers representing the category (7 categories to describe the objects in total; two to differentiate nucleolus organizer regions), with the possible values including 'NUCLEUS' (0), 'CLUSTER' (1), 'SATELLITE' (2), 'NUCLEUS\\_OUT\\_OF\\_FOCUS' (3), 'OVERLAPPED\\_NUCLEI' (4), 'NON\\_VIABLE\\_NUCLEUS' (5) and 'LEUKOCYTE\\_NUCLEUS' (6).", "#### 'instance segmentation'\n\n\n* 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. 
Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n* 'objects': a dictionary containing bounding boxes and labels of the cell objects\n\t+ 'bbox': a list of bounding boxes (in the coco format) corresponding to the objects present on the image\n\t+ 'segment': a list of segments in format of '[polygon\\_0, ..., polygon\\_n]', where each polygon is '[x0, y0, ..., xn, yn]'.\n\t+ 'label': a list of integers representing the category (7 categories to describe the objects in total; two to differentiate nucleolus organizer regions), with the possible values including 'NUCLEUS' (0), 'CLUSTER' (1), 'SATELLITE' (2), 'NUCLEUS\\_OUT\\_OF\\_FOCUS' (3), 'OVERLAPPED\\_NUCLEI' (4), 'NON\\_VIABLE\\_NUCLEUS' (5) and 'LEUKOCYTE\\_NUCLEUS' (6).", "### Data Splits\n\n\nThe data is split randomly using the fixed seed into training, test and validation set. The training data contains 70% of the images and the testing and the validation data contain 15% of the images each. In total, the training set contains 6533 images and the testing and the validation set 1403 images each.\n\n\n\n\n Click here to see additional statistics:\n \n\nLesion types:\n\n\n* Cervical intraepithelial neoplasia 1 - CIN 1\n* Cervical intraepithelial neoplasia 2 - CIN 2\n* Cervical intraepithelial neoplasia 3 - CIN 3\n* Squamous cell carcinoma - SCC\n* Adenocarcinoma - AC\n* No lesion\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nCCAgT was built to provide a dataset for machines to learn how to identify nucleus and nucleolus organizer regions (NORs).", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe images are collected as patches/tiles of whole slide images (WSIs) from cervical samples stained with AgNOR technique to allow the detection of nucleolus organizer regions (NORs). 
NORs are DNA loops containing genes responsible for the transcription of ribosomal RNA located in the cell nucleolus. They contain a set of argyrophilic proteins, selectively stained by silver nitrate, which can be identified as black dots located throughout the nucleoli area and called AgNORs.", "#### Who are the source language producers?\n\n\nThe dataset was built using images from examinations (a gynecological exam, colposcopy and biopsy) of 15 women patients who were treated at the Gynecology and Colposcopy Outpatient Clinic of the University Hospital Professor Polydoro Ernani de São Thiago of Federal University of Santa Catarina (HU-UFSC) and had 6 different diagnoses in their oncological exams. The samples were collected by the members of the Clinical Analyses Department: Ane Francyne Costa, Fabiana Botelho De Miranda Onofre, and Alexandre Sherlley Casimiro Onofre.", "### Annotations", "#### Annotation process\n\n\nThe instances were annotated using the labelbox tool. The satellite category was labeled as a single dot, and the other categories were labeled as polygons. After the annotation process, all annotations were reviewed.", "#### Who are the annotators?\n\n\nMembers of the Clinical Analyses Department and the Image Processing and Computer Graphics Lab. — LAPiX from Universidade Federal de Santa Catarina (UFSC).\n\n\n* Tainee Bottamedi\n* Vinícius Sanches\n* João H. Telles de Carvalho\n* Ricardo Thisted", "### Personal and Sensitive Information\n\n\nThis research was approved by the UFSC Research Ethics Committee (CEPSH), protocol number 57423616.3.0000.0121. 
All involved patients were informed about the study's objectives, and those who agreed to participate signed an informed consent form.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nThis dataset's purpose is to help spread the AgNOR as a support method for cancer diagnosis since this method is not standardized among pathologists.", "### Discussion of Biases", "### Other Known Limitations\n\n\nSatellite annotation is not as accurate for pixel-level representation due to single-point annotations.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nMembers of the Clinical Analyses Department from Universidade Federal de Santa Catarina (UFSC) collected the dataset samples: Ane Francyne Costa, Fabiana Botelho De Miranda Onofre, and Alexandre Sherlley Casimiro Onofre.", "### Licensing Information\n\n\nThe files associated with this dataset are licensed under an Attribution-NonCommercial 3.0 Unported license.\n\n\nUsers are free to adapt, copy or redistribute the material as long as they attribute it appropriately and do not use it for commercial purposes.", "### Contributions\n\n\nThanks to @johnnv1 for adding this dataset." ]
[ "TAGS\n#task_categories-image-segmentation #task_categories-object-detection #task_ids-semantic-segmentation #task_ids-instance-segmentation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-nc-3.0 #region-us \n", "### Dataset Summary\n\n\nThe CCAgT (Images of Cervical Cells with AgNOR Stain Technique) dataset contains 9339 images (1600x1200 resolution where each pixel is 0.111µmX0.111µm) from 15 different slides stained using the AgNOR technique. Each image has at least one label. In total, this dataset has more than 63K instances of annotated object. The images are from the patients of the Gynecology and Colonoscopy Outpatient Clinic of the Polydoro Ernani de São Thiago University Hospital of the Universidade Federal de Santa Catarina (HU-UFSC).", "### Supported Tasks and Leaderboards\n\n\n* 'image-segmentation': The dataset can be used to train a model for semantic segmentation or instance segmentation. Semantic segmentation consists in classifying each pixel of the image. Success on this task is typically measured by achieving high values of mean iou or f-score for pixels results. Instance segmentation consists of doing object detection first and then using a semantic segmentation model inside detected objects. For instances results, this task is typically measured by achieving high values of recall, precision and f-score.\n* 'object-detection': The dataset can be used to train a model for object detection to detect the nuclei categories or the nucleolus organizer regions (NORs), which consists of locating instances of objects and then classifying each one. 
This task is typically measured by achieving a high values of recall, precision and f-score.", "### Languages\n\n\nThe class labels in the dataset are in English.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example looks like the one below:", "#### 'semantic segmentation' (default configuration)", "#### 'object detection'", "#### 'instance segmentation'", "### Data Fields\n\n\nThe data annotations have the following fields:", "#### 'semantic segmentation' (default configuration)\n\n\n* 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n* 'annotation': A 'PIL.Image.Image' object containing the annotation mask. The mask has a single channel and the following pixel values are possible: 'BACKGROUND' (0), 'NUCLEUS' (1), 'CLUSTER' (2), 'SATELLITE' (3), 'NUCLEUS\\_OUT\\_OF\\_FOCUS' (4), 'OVERLAPPED\\_NUCLEI' (5), 'NON\\_VIABLE\\_NUCLEUS' (6) and 'LEUKOCYTE\\_NUCLEUS' (7).", "#### 'object detection'\n\n\n* 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. 
Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n* 'objects': a dictionary containing bounding boxes and labels of the cell objects\n\t+ 'bbox': a list of bounding boxes (in the coco format) corresponding to the objects present on the image\n\t+ 'label': a list of integers representing the category (7 categories to describe the objects in total; two to differentiate nucleolus organizer regions), with the possible values including 'NUCLEUS' (0), 'CLUSTER' (1), 'SATELLITE' (2), 'NUCLEUS\\_OUT\\_OF\\_FOCUS' (3), 'OVERLAPPED\\_NUCLEI' (4), 'NON\\_VIABLE\\_NUCLEUS' (5) and 'LEUKOCYTE\\_NUCLEUS' (6).", "#### 'instance segmentation'\n\n\n* 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n* 'objects': a dictionary containing bounding boxes and labels of the cell objects\n\t+ 'bbox': a list of bounding boxes (in the coco format) corresponding to the objects present on the image\n\t+ 'segment': a list of segments in format of '[polygon\\_0, ..., polygon\\_n]', where each polygon is '[x0, y0, ..., xn, yn]'.\n\t+ 'label': a list of integers representing the category (7 categories to describe the objects in total; two to differentiate nucleolus organizer regions), with the possible values including 'NUCLEUS' (0), 'CLUSTER' (1), 'SATELLITE' (2), 'NUCLEUS\\_OUT\\_OF\\_FOCUS' (3), 'OVERLAPPED\\_NUCLEI' (4), 'NON\\_VIABLE\\_NUCLEUS' (5) and 'LEUKOCYTE\\_NUCLEUS' (6).", "### Data Splits\n\n\nThe data is split randomly using the fixed seed into training, test and validation set. 
The training data contains 70% of the images and the testing and the validation data contain 15% of the images each. In total, the training set contains 6533 images and the testing and the validation set 1403 images each.\n\n\n\n\n Click here to see additional statistics:\n \n\nLesion types:\n\n\n* Cervical intraepithelial neoplasia 1 - CIN 1\n* Cervical intraepithelial neoplasia 2 - CIN 2\n* Cervical intraepithelial neoplasia 3 - CIN 3\n* Squamous cell carcinoma - SCC\n* Adenocarcinoma - AC\n* No lesion\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nCCAgT was built to provide a dataset for machines to learn how to identify nucleus and nucleolus organizer regions (NORs).", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe images are collected as patches/tiles of whole slide images (WSIs) from cervical samples stained with AgNOR technique to allow the detection of nucleolus organizer regions (NORs). NORs are DNA loops containing genes responsible for the transcription of ribosomal RNA located in the cell nucleolus. They contain a set of argyrophilic proteins, selectively stained by silver nitrate, which can be identified as black dots located throughout the nucleoli area and called AgNORs.", "#### Who are the source language producers?\n\n\nThe dataset was built using images from examinations (a gynecological exam, colposcopy and biopsy) of 15 women patients who were treated at the Gynecology and Colposcopy Outpatient Clinic of the University Hospital Professor Polydoro Ernani de São Thiago of Federal University of Santa Catarina (HU-UFSC) and had 6 different diagnoses in their oncological exams. The samples were collected by the members of the Clinical Analyses Department: Ane Francyne Costa, Fabiana Botelho De Miranda Onofre, and Alexandre Sherlley Casimiro Onofre.", "### Annotations", "#### Annotation process\n\n\nThe instances were annotated using the labelbox tool. 
The satellite category was labeled as a single dot, and the other categories were labeled as polygons. After the annotation process, all annotations were reviewed.", "#### Who are the annotators?\n\n\nMembers of the Clinical Analyses Department and the Image Processing and Computer Graphics Lab. — LAPiX from Universidade Federal de Santa Catarina (UFSC).\n\n\n* Tainee Bottamedi\n* Vinícius Sanches\n* João H. Telles de Carvalho\n* Ricardo Thisted", "### Personal and Sensitive Information\n\n\nThis research was approved by the UFSC Research Ethics Committee (CEPSH), protocol number 57423616.3.0000.0121. All involved patients were informed about the study's objectives, and those who agreed to participate signed an informed consent form.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nThis dataset's purpose is to help spread the AgNOR as a support method for cancer diagnosis since this method is not standardized among pathologists.", "### Discussion of Biases", "### Other Known Limitations\n\n\nSatellite annotation is not as accurate for pixel-level representation due to single-point annotations.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nMembers of the Clinical Analyses Department from Universidade Federal de Santa Catarina (UFSC) collected the dataset samples: Ane Francyne Costa, Fabiana Botelho De Miranda Onofre, and Alexandre Sherlley Casimiro Onofre.", "### Licensing Information\n\n\nThe files associated with this dataset are licensed under an Attribution-NonCommercial 3.0 Unported license.\n\n\nUsers are free to adapt, copy or redistribute the material as long as they attribute it appropriately and do not use it for commercial purposes.", "### Contributions\n\n\nThanks to @johnnv1 for adding this dataset." ]
64f93ccd51d919efda61d1cdde92dc31e52deadd
efdsv
gegham/tensor
[ "region:us" ]
2022-07-01T12:49:46+00:00
{}
2022-07-13T12:07:12+00:00
[]
[]
TAGS #region-us
efdsv
[]
[ "TAGS\n#region-us \n" ]
4eb44157012b9ec251767721d6bae55cf0c564e2
## Dataset Description - **Homepage:** [www.sen4agrinet.space.noa.gr](https://www.sen4agrinet.space.noa.gr/) - **Repository:** [github.com/Orion-AI-Lab/S4A](https://github.com/Orion-AI-Lab/S4A) - **Paper:** ["A Sentinel-2 multi-year, multi-country benchmark dataset for crop classification and segmentation with deep learning" (D. Sykas, M. Sdraka, D. Zografakis, I. Papoutsis)](https://arxiv.org/abs/2204.00951) ### Dataset Summary Sen4AgriNet is a Sentinel-2 based time series multi-country benchmark dataset, tailored for agricultural monitoring applications with Machine and Deep Learning. It is annotated from farmer declarations collected via the Land Parcel Identification System (LPIS) for harmonizing country-wide labels. These declarations have only recently been made available as open data, allowing for the first time the labelling of satellite imagery from ground truth data. We proceed to propose and standardise a new crop type taxonomy across Europe that addresses Common Agriculture Policy (CAP) needs, based on the Food and Agriculture Organization (FAO) Indicative Crop Classification scheme. Sen4AgriNet is the only multi-country, multi-year dataset that includes all spectral information. The current version covers the period 2019-2020 for Catalonia and France, while it can be extended to include additional countries. ### Languages All information in the dataset is in English (`en_GB`). 
## Dataset Structure ### Data Instances A typical sample in Sen4AgriNet consists of the following fields: ``` { 'patch_full_name': '2019_31TCF_patch_10_14', 'patch_year': '2019', 'patch_name': 'patch_10_14', 'patch_country_code': 'ES', 'patch_tile': '31TCF', 'B01': array([...]), 'B02': array([...]), 'B03': array([...]), 'B04': array([...]), 'B05': array([...]), 'B06': array([...]), 'B07': array([...]), 'B08': array([...]), 'B09': array([...]), 'B10': array([...]), 'B11': array([...]), 'B12': array([...]), 'B8A': array([...]), 'parcels': array([...]), 'labels': array([...]), 'timestamp': [...] } ``` ### Data Fields Below we provide a brief explanation of each field: - `patch_full_name`: The full name of the patch. - `patch_year`: The year of the observations included in the patch. - `patch_name`: The name of the patch. It is of the form: `patch_xx_yy` where `xx` and `yy` are the indices of the patch inside the tile. - `patch_country_code`: The country code of the observations included in the patch. Currently it is either `ES` for Catalonia or `FR` for France. - `B01`, ..., `B8A`: Each one is an array containing the observations of the corresponding Sentinel-2 band. The shape of each array is (T, H, W) where T is the number of observations, H the height of the image and W the width of the image. - `parcels`: A mask containing the parcels code number. - `labels`: A mask containing the class codes for each crop in the taxonomy. - `timestamp`: The timestamps of the observations. ### Data Splits In this version of the dataset there are no predefined train/val/test splits so that the users can define their own. ### Data configurations There are the following configurations in the current version of Sen4AgriNet: - `complete`: The complete Sen4AgriNet dataset. - `cat_2019`: Only Catalonia data for 2019. - `cat_2020`: Only Catalonia data for 2020. - `fr_2019`: Only France data for 2019. 
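Since the `parcels` and `labels` masks share the same grid, a per-parcel crop label can be recovered by a majority vote over the pixels of each parcel. The sketch below shows the idea on toy 3×3 masks; the values and parcel ids are invented for illustration, and plain Python lists stand in for the real arrays:

```python
from collections import Counter, defaultdict

# Toy stand-ins for sample["parcels"] and sample["labels"] (same H x W grid).
parcels = [[0, 7, 7],
           [0, 7, 7],
           [0, 0, 7]]
labels = [[0, 3, 3],
          [0, 3, 5],
          [0, 0, 3]]

# Tally label occurrences per parcel id.
votes = defaultdict(Counter)
for parcel_row, label_row in zip(parcels, labels):
    for parcel_id, label in zip(parcel_row, label_row):
        votes[parcel_id][label] += 1

# Majority label per parcel (id 0 plays the role of background here).
majority_label = {pid: counts.most_common(1)[0][0] for pid, counts in votes.items()}
print(majority_label)  # -> {0: 0, 7: 3}
```

This kind of object-level aggregation is what makes the parcel geometries usable as ground truth for classification, rather than only per-pixel segmentation.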
## Dataset Creation ### Curation Rationale One of the major problems faced by researchers in the fields of Remote Sensing and AI is the absence of country-wide labelled data that are harmonized along space and time. Specifically in the EU, the Common Agriculture Policy (CAP) has placed a stepping stone to overcome this issue by legally establishing Paying Agencies in each EU country which are responsible for distributing subsidies to farmers. In order to fulfill their objectives, Paying Agencies systematically collect the cultivated crop type and parcel geometries for every farmer and record it via the Land Parcel Identification System (LPIS) in a standardized way for each country. Unfortunately, public access to these farmer declarations has been restricted for several years, thus making it almost impossible to get country-wide ground truth data. However, since 2019 and for the first time these datasets are gradually becoming open (e.g. France, Catalonia, Estonia, Croatia, Slovenia, Slovakia and Luxembourg). This change offers a significant opportunity for the Earth Observation (EO) community to explore novel and innovative data-driven agricultural applications, by exploiting this abundance of new LPIS information. In principle, this fusion of the LPIS data sources has tremendous potential but there are still some barriers to overcome. First of all, the LPIS system of each country is custom-configured to utilize the local language of the crop types and the specific taxonomy structure of the crops that matches the local subsidies policy implementation. This non-standardization of the labels prohibits the spatial generalization of Deep Learning (DL) models and thus needs to be carefully handled to achieve a common representation consistent among countries. 
On top of these contextual/semantic barriers, parcels are mapped in the corresponding national cartographic projection which in all cases is different from the cartographic projection of the satellite images and pose an additional challenge on the preparation of a consistent, proper and at scale DL-ready dataset. Aiming to overcome the above limitations in this repository we offer Sen4AgriNet, a unique benchmark EO dataset for agricultural monitoring with the following key characteristics: - it is **pixel based** to capture spatial parcel variability - it is **multi-temporal** to capture the crop phenology phases - it is **multi-annual** to model the seasonal variability - it is **multi-country** to model the geographic spatial variability - it is **object-aggregated** to further incorporate ground truth data (parcel geometries) in the process - it is **modular** since it can be enlarged with parcels from more EU countries or expanded in a straightforward way to include additional sensor and non-EO data (e.g. meteorological data) ### Source Data 1) The LPIS data for the region of Catalonia for 2019–2020 provided by the "Agricultura, Ramaderia, Pesca i Alimentacio" with an Open Data Commons Attribution License. 2) France LPIS data for 2019 provided by the French Paying Agency with an Open Data Commons Attribution License. 3) All Sentinel-2 L1C images with less than 10% cloud coverage for the above tiles. #### Initial Data Collection and Normalization The Sentinel-2 L1C images were downloaded from Copernicus and each image was split into 900 non-overlapping patches. A single patch contains 366x366 images for the 10-meter bands, 183x183 for the 20-meter bands and 61x61 for the 60-meter bands. The size of the patches was chosen in order to have integer division of the size of the tile with all 3 different spatial resolutions of Sentinel-2. #### Annotation process The Indicative Crop Classification (ICC) scheme was developed by the United Nations FAO organization. 
It is an approach to produce a harmonized vocabulary and taxonomy for crops and plants that are used in food production. Sen4AgriNet adopts and customises an extended version of FAO ICC in order to create a universally applicable crop label nomenclature for the collected LPIS data with the following benefits: - A single language (English) is used for naming all classes across all participating countries. - Classes are normalized among different datasets. - Hierarchical class structure is adopted. Depending on the application, different levels of classes can be used. - Additional non-agricultural classes are used (e.g. "fallow land", "barren land", etc.) to model Remote Sensing spectral signatures since agricultural parcels co-exist with other unrelated classes in satellite images. The presented custom FAO/CLC classification scheme has a total of 9 groups, 168 classes and sub-classes. The 161 classes/sub-classes are crop related, 4 are some major CLC classes (as sub-classes in this hierarchy), 2 are the fallow and barren lands, and 1 is the no data sub-class. This crop taxonomy was used to create the `labels` mask. In addition, a second annotation mask is provided (`parcels`) where each parcel obtains a unique identifier, regardless of the crops cultivated in it. ### Personal and Sensitive Information None. ## Considerations for Using the Data ### Social Impact of Dataset We believe that Sen4AgriNet can be regarded as a labelled benchmark dataset, tailored for CAP and the use of Sentinel-2 imagery that comes at no cost, and can spur numerous DL-based applications for crop type classification, parcel extraction, parcel counting and semantic segmentation. More importantly, the dataset can be extended to include other input data sources, including Sentinel-1 Synthetic Aperture Radar data, and meteorological data, allowing a new family of applications on early warning risk assessment and agricultural insurance. 
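As a sanity check on the patch sizes given under Initial Data Collection above, the sketch below verifies that the three band resolutions tile a Sentinel-2 scene into the same 30×30 grid of 900 patches. The 10980-pixel tile width at 10 m is a property of standard Sentinel-2 L1C products and is assumed here; it is not stated in the card itself:

```python
# Tile width in pixels for each band resolution (standard Sentinel-2 L1C tile).
tile_px = {10: 10980, 20: 5490, 60: 1830}
# Patch width in pixels for each resolution, from the dataset description.
patch_px = {10: 366, 20: 183, 60: 61}

patches_per_side = {}
for resolution_m, tile_width in tile_px.items():
    # The patch sizes were chosen so this division is exact at every resolution.
    assert tile_width % patch_px[resolution_m] == 0
    patches_per_side[resolution_m] = tile_width // patch_px[resolution_m]

print(patches_per_side)           # -> {10: 30, 20: 30, 60: 30}
print(patches_per_side[10] ** 2)  # -> 900 non-overlapping patches per tile
```

The equal grid across resolutions is what lets a single patch index address the 10 m, 20 m and 60 m bands of the same ground footprint.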
## Additional Information ### Licensing Information MIT License. ### Citation Information ``` @ARTICLE{ 9749916, author={Sykas, Dimitrios and Sdraka, Maria and Zografakis, Dimitrios and Papoutsis, Ioannis}, journal={IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing}, title={A Sentinel-2 multi-year, multi-country benchmark dataset for crop classification and segmentation with deep learning}, year={2022}, doi={10.1109/JSTARS.2022.3164771} } ```
paren8esis/S4A
[ "arxiv:2204.00951", "region:us" ]
2022-07-01T15:26:54+00:00
{}
2023-10-24T07:15:34+00:00
[ "2204.00951" ]
[]
TAGS #arxiv-2204.00951 #region-us
## Dataset Description - Homepage: URL - Repository: URL - Paper: "A Sentinel-2 multi-year, multi-country benchmark dataset for crop classification and segmentation with deep learning" (D. Sykas, M. Sdraka, D. Zografakis, I. Papoutsis ### Dataset Summary Sen4AgriNet is a Sentinel-2 based time series multi-country benchmark dataset, tailored for agricultural monitoring applications with Machine and Deep Learning. It is annotated from farmer declarations collected via the Land Parcel Identification System (LPIS) for harmonizing country wide labels. These declarations have only recently been made available as open data, allowing for the first time the labelling of satellite imagery from ground truth data. We proceed to propose and standardise a new crop type taxonomy across Europe that address Common Agriculture Policy (CAP) needs, based on the Food and Agriculture Organization (FAO) Indicative Crop Classification scheme. Sen4AgriNet is the only multi-country, multi-year dataset that includes all spectral information. The current version covers the period 2019-2020 for Catalonia and France, while it can be extended to include additional countries. ### Languages All information in the dataset is in English ('en_GB'). ## Dataset Structure ### Data Instances A typical sample in Sen4AgriNet consists of the following fields: ### Data Fields Below we provide a brief explanation of each field: - 'patch_full_name': The full name of the patch. - 'patch_year': The year of the observations included in the patch. - 'patch_name': The name of the patch. It is of the form: 'patch_xx_yy' where 'xx' and 'yy' are the indices of the patch inside the tile. - 'patch_country_code': The country code of the observations included in the patch. Currently it is either 'ES' for Catalonia or 'FR' for France. - 'B01', ..., 'B8A': Each one is an array containing the observations of the corresponding Sentinel-2 band. 
The shape of each array is (T, H, W) where T is the number of observations, H the height of the image and W the width of the image. - 'parcels': A mask containing the parcel code numbers. - 'labels': A mask containing the class codes for each crop in the taxonomy. - 'timestamp': The timestamps of the observations. ### Data Splits In this version of the dataset there are no predefined train/val/test splits so that the users can define their own. ### Data configurations There are the following configurations in the current version of Sen4AgriNet: - 'complete': The complete Sen4AgriNet dataset. - 'cat_2019': Only Catalonia data for 2019. - 'cat_2020': Only Catalonia data for 2020. - 'fr_2019': Only France data for 2019. ## Dataset Creation ### Curation Rationale One of the major problems faced by researchers in the fields of Remote Sensing and AI is the absence of country-wide labelled data that are harmonized along space and time. Specifically in the EU, the Common Agriculture Policy (CAP) has placed a stepping stone to overcome this issue by legally establishing Paying Agencies in each EU country which are responsible for distributing subsidies to farmers. In order to fulfill their objectives, Paying Agencies systematically collect the cultivated crop type and parcel geometries for every farmer and record it via the Land Parcel Identification System (LPIS) in a standardized way for each country. Unfortunately, public access to these farmer declarations has been restricted for several years, thus making it almost impossible to get country-wide ground truth data. However, since 2019 and for the first time these datasets are gradually becoming open (e.g. France, Catalonia, Estonia, Croatia, Slovenia, Slovakia and Luxembourg). This change offers a significant opportunity for the Earth Observation (EO) community to explore novel and innovative data-driven agricultural applications, by exploiting this abundance of new LPIS information.
In principle, this fusion of the LPIS data sources has tremendous potential but there are still some barriers to overcome. First of all, the LPIS system of each country is custom-configured to utilize the local language of the crop types and the specific taxonomy structure of the crops that matches the local subsidies policy implementation. This non-standardization of the labels prohibits the spatial generalization of Deep Learning (DL) models and thus needs to be carefully handled to achieve a common representation consistent among countries. On top of these contextual/semantic barriers, parcels are mapped in the corresponding national cartographic projection which in all cases is different from the cartographic projection of the satellite images, posing an additional challenge for the preparation of a consistent, proper and at-scale DL-ready dataset. Aiming to overcome the above limitations, in this repository we offer Sen4AgriNet, a unique benchmark EO dataset for agricultural monitoring with the following key characteristics: - it is pixel based to capture spatial parcel variability - it is multi-temporal to capture the crop phenology phases - it is multi-annual to model the seasonal variability - it is multi-country to model the geographic spatial variability - it is object-aggregated to further incorporate ground truth data (parcel geometries) in the process - it is modular since it can be enlarged with parcels from more EU countries or expanded in a straightforward way to include additional sensor and non-EO data (e.g. meteorological data) ### Source Data 1) The LPIS data for the region of Catalonia for 2019–2020 provided by the "Agricultura, Ramaderia, Pesca i Alimentacio" with an Open Data Commons Attribution License. 2) France LPIS data for 2019 provided by the French Paying Agency with an Open Data Commons Attribution License. 3) All Sentinel-2 L1C images with less than 10% cloud coverage for the above tiles.
#### Initial Data Collection and Normalization The Sentinel-2 L1C images were downloaded from Copernicus and each image was split into 900 non-overlapping patches. A single patch contains 366x366 pixel images for the 10-meter bands, 183x183 for the 20-meter bands and 61x61 for the 60-meter bands. The size of the patches was chosen in order to have integer division of the size of the tile with all 3 different spatial resolutions of Sentinel-2. #### Annotation process The Indicative Crop Classification (ICC) scheme was developed by the United Nations FAO organization. It is an approach to produce a harmonized vocabulary and taxonomy for crops and plants that are used in food production. Sen4AgriNet adopts and customises an extended version of FAO ICC in order to create a universally applicable crop label nomenclature for the collected LPIS data with the following benefits: - A single language (English) is used for naming all classes across all participating countries. - Classes are normalized among different datasets. - Hierarchical class structure is adopted. Depending on the application different levels of classes can be used. - Additional non-agricultural classes are used (e.g. "fallow land", "barren land", etc.) to model Remote Sensing spectral signatures since agricultural parcels co-exist with other unrelated classes in satellite images. The presented custom FAO/CLC classification scheme has a total of 9 groups and 168 classes and sub-classes. Of these, 161 classes/sub-classes are crop related, 4 are some major CLC classes (as sub-classes in this hierarchy), 2 are the fallow and barren lands, and 1 is the no data sub-class. This crop taxonomy was used to create the 'labels' mask. In addition, a second annotation mask is provided ('parcels') where each parcel obtains a unique identifier, regardless of the crops cultivated in it. ### Personal and Sensitive Information None.
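The patch sizes quoted under Initial Data Collection can be sanity-checked against the standard Sentinel-2 tile dimensions (10980x10980 px at 10 m, 5490x5490 at 20 m, 1830x1830 at 60 m; these tile widths are standard Sentinel-2 product values, not stated in the card). A minimal sketch:

```python
# Sanity check: a Sentinel-2 tile splits into a 30x30 grid, i.e. 900
# non-overlapping patches, at every spatial resolution.
# Tile widths (in pixels) per resolution are standard Sentinel-2 product
# values, not taken from the dataset card itself.
TILE_PX = {10: 10980, 20: 5490, 60: 1830}   # resolution (m) -> tile width (px)
PATCH_PX = {10: 366, 20: 183, 60: 61}       # patch width per resolution (from the card)

def patches_per_tile(res_m: int) -> int:
    tile, patch = TILE_PX[res_m], PATCH_PX[res_m]
    # The card's design goal: integer division of the tile at all resolutions.
    assert tile % patch == 0, "patch size must divide the tile size"
    per_side = tile // patch
    return per_side * per_side

for res in (10, 20, 60):
    print(res, patches_per_tile(res))  # each resolution yields 900 patches
```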
## Considerations for Using the Data ### Social Impact of Dataset We believe that Sen4AgriNet can be regarded as a labelled benchmark dataset, tailored for CAP and the use of Sentinel-2 imagery that comes at no cost, and can spur numerous DL-based applications for crop type classification, parcel extraction, parcel counting and semantic segmentation. More importantly, the dataset can be extended to include other input data sources, including Sentinel-1 Synthetic Aperture Radar data, and meteorological data, allowing a new family of applications on early warning risk assessment and agricultural insurance. ## Additional Information ### Licensing Information MIT License.
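The 'patch_xx_yy' naming convention described under Data Fields can be mapped back to a pixel window inside the parent tile. The sketch below assumes 'xx' indexes rows and 'yy' columns (the card only says they are "the indices of the patch inside the tile", so the order is an illustrative assumption):

```python
import re

# Map a patch name like 'patch_12_07' to its pixel window in the parent
# tile for a given band resolution. The row/column interpretation of
# 'xx'/'yy' is an assumption for illustration, not stated in the card.
PATCH_PX = {10: 366, 20: 183, 60: 61}  # patch width per band resolution (from the card)

def patch_window(patch_name: str, res_m: int = 10):
    m = re.fullmatch(r"patch_(\d+)_(\d+)", patch_name)
    if m is None:
        raise ValueError(f"unexpected patch name: {patch_name!r}")
    xx, yy = int(m.group(1)), int(m.group(2))
    size = PATCH_PX[res_m]
    # Returns ((row_start, row_stop), (col_start, col_stop)) in tile pixels.
    return (xx * size, (xx + 1) * size), (yy * size, (yy + 1) * size)

print(patch_window("patch_12_07"))  # ((4392, 4758), (2562, 2928))
```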
[ "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: \"A Sentinel-2 multi-year, multi-country benchmark dataset for crop classification and segmentation with deep learning\" (D. Sykas, M. Sdraka, D. Zografakis, I. Papoutsis", "### Dataset Summary\n\nSen4AgriNet is a Sentinel-2 based time series multi-country benchmark dataset, tailored for agricultural monitoring applications with Machine and Deep Learning. It is annotated from farmer declarations collected via the Land Parcel Identification System (LPIS) for harmonizing country wide labels. These declarations have only recently been made available as open data, allowing for the first time the labelling of satellite imagery from ground truth data. We proceed to propose and standardise a new crop type taxonomy across Europe that address Common Agriculture Policy (CAP) needs, based on the Food and Agriculture Organization (FAO) Indicative Crop Classification scheme. Sen4AgriNet is the only multi-country, multi-year dataset that includes all spectral information. The current version covers the period 2019-2020 for Catalonia and France, while it can be extended to include additional countries.", "### Languages\n\nAll information in the dataset is in English ('en_GB').", "## Dataset Structure", "### Data Instances\n\nA typical sample in Sen4AgriNet consists of the following fields:", "### Data Fields\n\nBelow we provide a brief explanation of each field:\n - 'patch_full_name': The full name of the patch.\n - 'patch_year': The year of the observations included in the patch.\n - 'patch_name': The name of the patch. It is of the form: 'patch_xx_yy' where 'xx' and 'yy' are the indices of the patch inside the tile.\n - 'patch_country_code': The country code of the observations included in the patch. Currently it is either 'ES' for Catalonia or 'FR' for France.\n - 'B01', ..., 'B8A': Each one is an array containing the observations of the corresponding Sentinel-2 band. 
The shape of each array is (T, H, W) where T is the number of observations, H the height of the image and W the width of the image.\n - 'parcels': A mask containing the parcels code number.\n - 'labels': A mask containing the class codes for each crop in the taxonomy.\n - 'timestamp': The timestamps of the observations.", "### Data Splits\n\nIn this version of the dataset there are no predefined train/val/test splits so that the users can define their own.", "### Data configurations\n\nThere are the following configurations in the current version of Sen4AgriNet:\n - 'complete': The complete Sen4AgriNet dataset.\n - 'cat_2019': Only Catalonia data for 2019.\n - 'cat_2020': Only Catalonia data for 2020.\n - 'fr_2019': Only France data for 2019.", "## Dataset Creation", "### Curation Rationale\n\nOne of the major problems faced by researchers in the fields of Remote Sensing and AI is the absence of country-wide labelled data that are harmonized along space and time. Specifically in the EU, the Common Agriculture Policy (CAP) has placed a stepping stone to overcome this issue by legally establishing Paying Agencies in each EU country which are responsible for distributing subsidies to farmers. In order to fulfill their objectives, Paying Agencies systematically collect the cultivated crop type and parcel geometries for every farmer and record it via the Land Parcel Identification System (LPIS) in a standardized way for each country. Unfortunately, public access to these farmer declarations has been restricted for several years, thus making it almost impossible to get country-wide ground truth data. However, since 2019 and for the\nfirst time these datasets are gradually becoming open (e.g. France, Catalonia, Estonia, Croatia, Slovenia, Slovakia and Luxemburg). 
This change offers a significant opportunity for the Earth Observation (EO) community to explore novel and innovative data-driven agricultural applications, by exploiting this abundance of new LPIS information.\n\nIn principle, this fusion of the LPIS data sources has tremendous potential but there are still some barriers to overcome. First of all, the LPIS system of each country is customly configured to utilize the local language of the crop types and the specific taxonomy structure of the crops that matches the local subsidies policy implementation. This non-standardization of the labels prohibits the spatial generalization of Deep Learning (DL) models and thus needs to be carefully handled to achieve a common representation consistent among countries. On top of these contextual/semantic barriers, parcels are mapped in the corresponding national cartographic projection which in all cases is different from the cartographic projection of the satellite images and pose an additional challenge on the preparation of a consistent, proper and at scale DL-ready dataset.\n\nAiming to overcome the above limitations in this repository we offer Sen4AgriNet, a unique benchmark EO dataset for agricultural monitoring with the following key characteristics: \n - it is pixel based to capture spatial parcel variability\n - it is multi-temporal to capture the crop phenology phases\n - it is multi-annual to model the seasonal variability\n - it is multi-country to model the geographic spatial variability\n - it is object-aggregated to further incorporate ground truth data (parcel geometries) in the process\n - it is modular since it can be enlarged with parcels from more EU countries or expanded in a straightforward way to include additional sensor and non-EO data (e.g. 
meteorological data)", "### Source Data\n\n1) The LPIS data for the region of Catalonia for 2019–2020 provided by the \"Agricultura, Ramaderia, Pesca i Alimentacio\" with an Open Data Commons Attribution License.\n2) France LPIS data for 2019 provided by the French Paying Agency with an Open Data Commons Attribution License. \n3) All Sentinel-2 L1C images with less than 10% cloud coverage for the above tiles.", "#### Initial Data Collection and Normalization\n\nThe Sentinel-2 L1C images were downloaded from Copernicus and each image was split into 900 non-overlapping patches. A single patch contains 366x366 images for the 10-meter bands, 183x183 for the 20-meter bands and 61x61 for the 60-meter bands. The size of the patches was chosen in order to have integer division of the size of the tile with all 3 different spatial resolutions of Sentinel-2.", "#### Annotation process\n\nThe Indicative Crop Classification (ICC) scheme was developed by the United Nations FAO organization. It is an approach to produce a harmonized vocabulary and taxonomy for crops and plants that are used in food production. Sen4AgriNet adopts and customises an extended version of FAO ICC in order to create a universally applicable crop label nomenclature for the collected LPIS data with the following benefits:\n - Single language (English) is used and naming for all classes across all participating countries.\n - Classes are normalized among different datasets.\n - Hierarchical class structure is adopted. Depending on the application different levels of classes can be used.\n - Additional non-agricultural classes are used (e.g. \"fallow land\", \"barren land\", etc.) to model Remote Sensing spectral signatures since agricultural parcels co-exist with other unrelated classes in satellite images.\n\nThe presented custom FAO/CLC classification scheme has a total of 9 groups, 168 classes and sub-classes. 
The 161 classes/sub-classes are crop related, 4 are some major CLC classes (as sub-classes in this hierarchy), 2 are the fallow and barren lands, and 1 is the no data sub-class.\n\nThis crop taxonomy was used to create the 'labels' mask. In addition, a second annotation mask is provided ('parcels') where each parcel obtains a unique identifier, regardless of the crops cultivated in it.", "### Personal and Sensitive Information\n\nNone.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nWe believe that Sen4AgriNet can be regarded as a labelled benchmark dataset, tailored for CAP and the use of Sentinel-2 imagery that come at no cost, and can spur numerous DL-based applications for crop type classification, parcel extraction, parcel counting and semantic segmentation. More importantly, the dataset can be extended to include other input data sources, including Sentinel-1 Synthetic Aperture Radar data, and meteorological data, allowing a new family of applications on early warning risk assessment and agricultural insurance.", "## Additional Information", "### Licensing Information\n\nMIT License." ]
[ "TAGS\n#arxiv-2204.00951 #region-us \n", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: \"A Sentinel-2 multi-year, multi-country benchmark dataset for crop classification and segmentation with deep learning\" (D. Sykas, M. Sdraka, D. Zografakis, I. Papoutsis", "### Dataset Summary\n\nSen4AgriNet is a Sentinel-2 based time series multi-country benchmark dataset, tailored for agricultural monitoring applications with Machine and Deep Learning. It is annotated from farmer declarations collected via the Land Parcel Identification System (LPIS) for harmonizing country wide labels. These declarations have only recently been made available as open data, allowing for the first time the labelling of satellite imagery from ground truth data. We proceed to propose and standardise a new crop type taxonomy across Europe that address Common Agriculture Policy (CAP) needs, based on the Food and Agriculture Organization (FAO) Indicative Crop Classification scheme. Sen4AgriNet is the only multi-country, multi-year dataset that includes all spectral information. The current version covers the period 2019-2020 for Catalonia and France, while it can be extended to include additional countries.", "### Languages\n\nAll information in the dataset is in English ('en_GB').", "## Dataset Structure", "### Data Instances\n\nA typical sample in Sen4AgriNet consists of the following fields:", "### Data Fields\n\nBelow we provide a brief explanation of each field:\n - 'patch_full_name': The full name of the patch.\n - 'patch_year': The year of the observations included in the patch.\n - 'patch_name': The name of the patch. It is of the form: 'patch_xx_yy' where 'xx' and 'yy' are the indices of the patch inside the tile.\n - 'patch_country_code': The country code of the observations included in the patch. 
Currently it is either 'ES' for Catalonia or 'FR' for France.\n - 'B01', ..., 'B8A': Each one is an array containing the observations of the corresponding Sentinel-2 band. The shape of each array is (T, H, W) where T is the number of observations, H the height of the image and W the width of the image.\n - 'parcels': A mask containing the parcels code number.\n - 'labels': A mask containing the class codes for each crop in the taxonomy.\n - 'timestamp': The timestamps of the observations.", "### Data Splits\n\nIn this version of the dataset there are no predefined train/val/test splits so that the users can define their own.", "### Data configurations\n\nThere are the following configurations in the current version of Sen4AgriNet:\n - 'complete': The complete Sen4AgriNet dataset.\n - 'cat_2019': Only Catalonia data for 2019.\n - 'cat_2020': Only Catalonia data for 2020.\n - 'fr_2019': Only France data for 2019.", "## Dataset Creation", "### Curation Rationale\n\nOne of the major problems faced by researchers in the fields of Remote Sensing and AI is the absence of country-wide labelled data that are harmonized along space and time. Specifically in the EU, the Common Agriculture Policy (CAP) has placed a stepping stone to overcome this issue by legally establishing Paying Agencies in each EU country which are responsible for distributing subsidies to farmers. In order to fulfill their objectives, Paying Agencies systematically collect the cultivated crop type and parcel geometries for every farmer and record it via the Land Parcel Identification System (LPIS) in a standardized way for each country. Unfortunately, public access to these farmer declarations has been restricted for several years, thus making it almost impossible to get country-wide ground truth data. However, since 2019 and for the\nfirst time these datasets are gradually becoming open (e.g. France, Catalonia, Estonia, Croatia, Slovenia, Slovakia and Luxemburg). 
This change offers a significant opportunity for the Earth Observation (EO) community to explore novel and innovative data-driven agricultural applications, by exploiting this abundance of new LPIS information.\n\nIn principle, this fusion of the LPIS data sources has tremendous potential but there are still some barriers to overcome. First of all, the LPIS system of each country is customly configured to utilize the local language of the crop types and the specific taxonomy structure of the crops that matches the local subsidies policy implementation. This non-standardization of the labels prohibits the spatial generalization of Deep Learning (DL) models and thus needs to be carefully handled to achieve a common representation consistent among countries. On top of these contextual/semantic barriers, parcels are mapped in the corresponding national cartographic projection which in all cases is different from the cartographic projection of the satellite images and pose an additional challenge on the preparation of a consistent, proper and at scale DL-ready dataset.\n\nAiming to overcome the above limitations in this repository we offer Sen4AgriNet, a unique benchmark EO dataset for agricultural monitoring with the following key characteristics: \n - it is pixel based to capture spatial parcel variability\n - it is multi-temporal to capture the crop phenology phases\n - it is multi-annual to model the seasonal variability\n - it is multi-country to model the geographic spatial variability\n - it is object-aggregated to further incorporate ground truth data (parcel geometries) in the process\n - it is modular since it can be enlarged with parcels from more EU countries or expanded in a straightforward way to include additional sensor and non-EO data (e.g. 
meteorological data)", "### Source Data\n\n1) The LPIS data for the region of Catalonia for 2019–2020 provided by the \"Agricultura, Ramaderia, Pesca i Alimentacio\" with an Open Data Commons Attribution License.\n2) France LPIS data for 2019 provided by the French Paying Agency with an Open Data Commons Attribution License. \n3) All Sentinel-2 L1C images with less than 10% cloud coverage for the above tiles.", "#### Initial Data Collection and Normalization\n\nThe Sentinel-2 L1C images were downloaded from Copernicus and each image was split into 900 non-overlapping patches. A single patch contains 366x366 images for the 10-meter bands, 183x183 for the 20-meter bands and 61x61 for the 60-meter bands. The size of the patches was chosen in order to have integer division of the size of the tile with all 3 different spatial resolutions of Sentinel-2.", "#### Annotation process\n\nThe Indicative Crop Classification (ICC) scheme was developed by the United Nations FAO organization. It is an approach to produce a harmonized vocabulary and taxonomy for crops and plants that are used in food production. Sen4AgriNet adopts and customises an extended version of FAO ICC in order to create a universally applicable crop label nomenclature for the collected LPIS data with the following benefits:\n - Single language (English) is used and naming for all classes across all participating countries.\n - Classes are normalized among different datasets.\n - Hierarchical class structure is adopted. Depending on the application different levels of classes can be used.\n - Additional non-agricultural classes are used (e.g. \"fallow land\", \"barren land\", etc.) to model Remote Sensing spectral signatures since agricultural parcels co-exist with other unrelated classes in satellite images.\n\nThe presented custom FAO/CLC classification scheme has a total of 9 groups, 168 classes and sub-classes. 
The 161 classes/sub-classes are crop related, 4 are some major CLC classes (as sub-classes in this hierarchy), 2 are the fallow and barren lands, and 1 is the no data sub-class.\n\nThis crop taxonomy was used to create the 'labels' mask. In addition, a second annotation mask is provided ('parcels') where each parcel obtains a unique identifier, regardless of the crops cultivated in it.", "### Personal and Sensitive Information\n\nNone.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nWe believe that Sen4AgriNet can be regarded as a labelled benchmark dataset, tailored for CAP and the use of Sentinel-2 imagery that come at no cost, and can spur numerous DL-based applications for crop type classification, parcel extraction, parcel counting and semantic segmentation. More importantly, the dataset can be extended to include other input data sources, including Sentinel-1 Synthetic Aperture Radar data, and meteorological data, allowing a new family of applications on early warning risk assessment and agricultural insurance.", "## Additional Information", "### Licensing Information\n\nMIT License." ]
cecdd5845d29aa5a62fbcecf294d1b72d8fd860b
This dataset contains **5,242,391** samples of Ukrainian news headlines. Usage: ```python from datasets import load_dataset ds = load_dataset('Yehor/ukrainian-news-headlines', split='train') for row in ds: print(row['headline']) ``` Attribution to the dataset: - Chaplynskyi, D. et al. (2021) lang-uk Ukrainian Ubercorpus [Data set]. https://lang.org.ua/uk/corpora/#anchor4
Yehor/ukrainian-news-headlines
[ "language:uk", "license:cc-by-nc-sa-4.0", "uk", "region:us" ]
2022-07-01T17:12:19+00:00
{"language": ["uk"], "license": "cc-by-nc-sa-4.0", "tags": ["uk"]}
2022-07-30T16:39:30+00:00
[]
[ "uk" ]
TAGS #language-Ukrainian #license-cc-by-nc-sa-4.0 #uk #region-us
This dataset contains 5,242,391 samples of Ukrainian news headlines. Usage: Attribution to the dataset: - Chaplynskyi, D. et al. (2021) lang-uk Ukrainian Ubercorpus [Data set]. URL
[]
[ "TAGS\n#language-Ukrainian #license-cc-by-nc-sa-4.0 #uk #region-us \n" ]
4e75b1b7fabb453a60a571bc9ccc2b95b9789fe0
# Dataset Card for "UnpredicTable-full" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)

### Supported Tasks and Leaderboards

Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide: we have thousands of tasks, while each task has only a few examples. This contrasts with most current NLP datasets, which are very deep, i.e., tens of tasks with many examples each. Our dataset therefore covers a broad range of potential task types, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.

The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.

### Languages

English

## Dataset Structure

### Data Instances

Each task is represented as a JSON Lines (jsonl) file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by 'input', 'options', and 'output' fields. The 'input' field contains several column elements of the same table row, while the 'output' field is the target, i.e., an individual column of the same row. Each task contains several such examples, which can be concatenated to form a few-shot task.
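The record layout just described can be sketched in Python. All field values below are invented for illustration (they do not come from a real task file), and the prompt format is only one plausible way to concatenate examples into a few-shot task:

```python
import json

# One illustrative few-shot example record; every field value is invented.
record = json.loads("""
{
  "task": "demo-task_example-com_City",
  "input": "[Team] Alpha FC [Founded] 1905",
  "options": ["Springfield", "Shelbyville"],
  "output": "Springfield",
  "pageTitle": "Example league table",
  "outputColName": "City",
  "url": "http://example.com/table",
  "wdcFile": "example.json.gz"
}
""")

def build_fewshot_prompt(solved_examples, query_input):
    """Concatenate solved examples and a query into one few-shot prompt."""
    shots = [f"{ex['input']}\n{ex['output']}" for ex in solved_examples]
    shots.append(query_input)
    return "\n\n".join(shots)

prompt = build_fewshot_prompt([record], "[Team] Beta United [Founded] 1921")
```

Here several solved records from the same task file would normally be concatenated before the unsolved query; a single record is used above to keep the sketch short.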
In the case of multiple-choice classification, the 'options' field contains the possible classes that a model needs to choose from.

There are also additional metadata fields such as 'pageTitle', 'title', 'outputColName', 'url', and 'wdcFile'.

### Data Fields

'task': task identifier

'input': column elements of a specific row in the table

'options': for multiple-choice classification, the options to choose from

'output': target column element of the same row as the input

'pageTitle': the title of the page containing the table

'outputColName': output column name

'url': URL of the website containing the table

'wdcFile': WDC Web Table Corpus file

### Data Splits

The UnpredicTable datasets do not come with additional data splits.

## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The corpus contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks; please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?
The dataset is extracted from the [WDC Web Table Corpora](http://webdatacommons.org/webtables/).

### Annotations

#### Annotation process

Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from the [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history; etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts.
We have not run any analysis on the biases prevalent in our datasets, nor have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0

### Citation Information

```
@misc{chan2022few,
  author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
  title = {Few-shot Adaptation Works with UnpredicTable Data},
  publisher = {arXiv},
  year = {2022},
  url = {https://arxiv.org/abs/2208.01009}
}
```
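As an illustration of the tables-to-tasks idea described under Curation Rationale above, the core conversion can be sketched as follows: hold out one column of a table as the output and serialize the remaining cells as the input. This is a simplified sketch with an invented toy table, not the actual pipeline from the paper, which adds filtering and quality heuristics:

```python
def table_to_task(header, rows, output_col):
    """Turn one relational table into few-shot examples.

    Simplified sketch of the tables-to-tasks idea: each row becomes one
    example whose input is the remaining cells and whose output is the
    held-out cell of `output_col`.
    """
    out_idx = header.index(output_col)
    examples = []
    for row in rows:
        input_parts = [
            f"[{col}] {cell}"
            for col, cell in zip(header, row)
            if col != output_col
        ]
        examples.append(
            {
                "input": " ".join(input_parts),
                "output": row[out_idx],
                "outputColName": output_col,
            }
        )
    return examples

# Invented toy table for demonstration.
header = ["Team", "City", "Founded"]
rows = [
    ["Alpha FC", "Springfield", "1905"],
    ["Beta United", "Shelbyville", "1921"],
]
examples = table_to_task(header, rows, "City")
```

Repeating this over every usable column of every table is what yields a task per (table, output column) pair, which is why a single web corpus can produce hundreds of thousands of tasks.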
MicPie/unpredictable_full
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-02T19:22:21+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-full"}
2022-08-04T19:07:28+00:00
[ "2208.01009" ]
[ "en" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-full\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks 
procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * 
UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
7b34cb148522a73022791027f729cc9518da2a05
# Dataset Card for SciTail ## Dataset Description - **Homepage:** https://allenai.org/data/scitail - **Pubmed:** False - **Public:** True - **Tasks:** TE The SciTail dataset is an entailment dataset created from multiple-choice science exams and web sentences. Each question and the correct answer choice are converted into an assertive statement to form the hypothesis. We use information retrieval to obtain relevant text from a large text corpus of web sentences, and use these sentences as a premise P. We crowd source the annotation of such premise-hypothesis pair as supports (entails) or not (neutral), in order to create the SciTail dataset. The dataset contains 27,026 examples with 10,101 examples with entails label and 16,925 examples with neutral label. ## Citation Information ``` @inproceedings{scitail, author = {Tushar Khot and Ashish Sabharwal and Peter Clark}, booktitle = {AAAI}, title = {SciTail: A Textual Entailment Dataset from Science Question Answering}, year = {2018} } ```
bigbio/scitail
[ "multilinguality:monolingual", "language:en", "license:apache-2.0", "region:us" ]
2022-07-02T19:53:40+00:00
{"language": ["en"], "license": "apache-2.0", "multilinguality": "monolingual", "paperswithcode_id": "scitail", "pretty_name": "SciTail", "bigbio_language": ["English"], "bigbio_license_shortname": "APACHE_2p0", "homepage": "https://allenai.org/data/scitail", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["TEXTUAL_ENTAILMENT"]}
2023-03-31T01:11:26+00:00
[]
[ "en" ]
TAGS #multilinguality-monolingual #language-English #license-apache-2.0 #region-us
# Dataset Card for SciTail ## Dataset Description - Homepage: URL - Pubmed: False - Public: True - Tasks: TE The SciTail dataset is an entailment dataset created from multiple-choice science exams and web sentences. Each question and the correct answer choice are converted into an assertive statement to form the hypothesis. We use information retrieval to obtain relevant text from a large text corpus of web sentences, and use these sentences as a premise P. We crowd source the annotation of such premise-hypothesis pair as supports (entails) or not (neutral), in order to create the SciTail dataset. The dataset contains 27,026 examples with 10,101 examples with entails label and 16,925 examples with neutral label.
[ "# Dataset Card for SciTail", "## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: TE\n\n\nThe SciTail dataset is an entailment dataset created from multiple-choice science exams and web sentences. Each question and the correct answer choice are converted into an assertive statement to form the hypothesis. We use information retrieval to obtain relevant text from a large text corpus of web sentences, and use these sentences as a premise P. We crowd source the annotation of such premise-hypothesis pair as supports (entails) or not (neutral), in order to create the SciTail dataset. The dataset contains 27,026 examples with 10,101 examples with entails label and 16,925 examples with neutral label." ]
[ "TAGS\n#multilinguality-monolingual #language-English #license-apache-2.0 #region-us \n", "# Dataset Card for SciTail", "## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: TE\n\n\nThe SciTail dataset is an entailment dataset created from multiple-choice science exams and web sentences. Each question and the correct answer choice are converted into an assertive statement to form the hypothesis. We use information retrieval to obtain relevant text from a large text corpus of web sentences, and use these sentences as a premise P. We crowd source the annotation of such premise-hypothesis pair as supports (entails) or not (neutral), in order to create the SciTail dataset. The dataset contains 27,026 examples with 10,101 examples with entails label and 16,925 examples with neutral label." ]
dad2cadd8d501bf91facd78bbd7a598d98f32e7e
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: lewtun/sagemaker-distilbert-emotion * Dataset: emotion To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@gabrielaltay](https://huggingface.co/gabrielaltay) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-854c0218-9415245
[ "autotrain", "evaluation", "region:us" ]
2022-07-02T21:28:07+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "lewtun/sagemaker-distilbert-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-07-02T21:28:44+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: lewtun/sagemaker-distilbert-emotion * Dataset: emotion To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @gabrielaltay for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: lewtun/sagemaker-distilbert-emotion\n* Dataset: emotion\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @gabrielaltay for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: lewtun/sagemaker-distilbert-emotion\n* Dataset: emotion\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @gabrielaltay for evaluating this model." ]
f68d414189a214d5a52b5842006e55eb8b95a337
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Binary Text Classification * Model: gabrielaltay/autotrain-at-test-bb-tmp-scitail-1078438446 * Dataset: bigscience-biomedical/tmp-scitail To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@gabrielaltay](https://huggingface.co/gabrielaltay) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-562e1223-9425246
[ "autotrain", "evaluation", "region:us" ]
2022-07-02T22:00:47+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["bigscience-biomedical/tmp-scitail"], "eval_info": {"task": "binary_classification", "model": "gabrielaltay/autotrain-at-test-bb-tmp-scitail-1078438446", "metrics": [], "dataset_name": "bigscience-biomedical/tmp-scitail", "dataset_config": "scitail_bigbio_te", "dataset_split": "test", "col_mapping": {"text": "premise", "target": "label"}}}
2022-07-02T22:01:39+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Binary Text Classification * Model: gabrielaltay/autotrain-at-test-bb-tmp-scitail-1078438446 * Dataset: bigscience-biomedical/tmp-scitail To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @gabrielaltay for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: gabrielaltay/autotrain-at-test-bb-tmp-scitail-1078438446\n* Dataset: bigscience-biomedical/tmp-scitail\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @gabrielaltay for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: gabrielaltay/autotrain-at-test-bb-tmp-scitail-1078438446\n* Dataset: bigscience-biomedical/tmp-scitail\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @gabrielaltay for evaluating this model." ]
d40231fb47c493a4a6cbdc01e69ef4193b27bd2c
# AutoTrain Dataset for project: new_model ## Dataset Description This dataset has been automatically processed by AutoTrain for project new_model. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "\u8fd1\u671f\uff0c\u7f8e\u56fd\u56fd\u4f1a\u4f17\u9662\u901a\u8fc7\u6cd5\u6848\uff0c\u91cd\u7533\u7f8e\u56fd\u5bf9\u53f0\u6e7e\u7684\u627f\u8bfa\u3002\u5bf9\u6b64\uff0c\u4e2d\u56fd\u5916\u4ea4\u90e8\u53d1\u8a00\u4eba\u8868\u793a\uff0c\u6709\u5173\u6cd5\u6848\u4e25\u91cd\u8fdd\u53cd\u4e00\u4e2a\u4e2d\u56fd\u539f\u5219\u548c\u4e2d\u7f8e\u4e09\u4e2a\u8054\u5408\u516c\u62a5\u89c4\u5b9a\uff0c\u7c97\u66b4\u5e72\u6d89\u4e2d\u56fd\u5185\u653f\uff0c\u4e2d\u65b9\u5bf9\u6b64\u575a\u51b3\u53cd\u5bf9\u5e76\u5df2\u5411\u7f8e\u65b9\u63d0\u51fa\u4e25\u6b63\u4ea4\u6d89\u3002\n\u4e8b\u5b9e\u4e0a\uff0c\u4e2d[...]", "target": "\u671b\u6d77\u697c\u7f8e\u56fd\u6253\u201c\u53f0\u6e7e\u724c\u201d\u662f\u5371\u9669\u7684\u8d4c\u535a" }, { "text": "\u5728\u63a8\u8fdb\u201c\u53cc\u4e00\u6d41\u201d\u9ad8\u6821\u5efa\u8bbe\u8fdb\u7a0b\u4e2d\uff0c\u6211\u4eec\u8981\u7d27\u7d27\u56f4\u7ed5\u4e3a\u515a\u80b2\u4eba\u3001\u4e3a\u56fd\u80b2\u624d\uff0c\u627e\u51c6\u95ee\u9898\u3001\u7834\u89e3\u96be\u9898\uff0c\u4ee5\u4e00\u6d41\u610f\u8bc6\u548c\u62c5\u5f53\u7cbe\u795e\uff0c\u5927\u529b\u63a8\u8fdb\u9ad8\u6821\u7684\u6cbb\u7406\u80fd\u529b\u5efa\u8bbe\u3002\n\u589e\u5f3a\u653f\u6cbb\u5f15\u9886\u529b\u3002\u575a\u6301\u515a\u5bf9\u9ad8\u6821\u5de5\u4f5c\u7684\u5168\u9762\u9886\u5bfc\uff0c\u59cb\u7ec8\u628a\u653f\u6cbb\u5efa\u8bbe\u6446\u5728[...]", "target": "\u5927\u529b\u63a8\u8fdb\u9ad8\u6821\u6cbb\u7406\u80fd\u529b\u5efa\u8bbe" } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "Value(dtype='string', id=None)" } ``` ### Dataset Splits This dataset is split into a train and 
validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 5850 | | valid | 1679 |
dddb/autotrain-data-new_model
[ "region:us" ]
2022-07-03T03:14:30+00:00
{"task_categories": ["conditional-text-generation"]}
2022-07-03T03:34:26+00:00
[]
[]
TAGS #region-us
AutoTrain Dataset for project: new\_model ========================================= Dataset Description ------------------- This dataset has been automatically processed by AutoTrain for project new\_model. ### Languages The BCP-47 code for the dataset's language is unk. Dataset Structure ----------------- ### Data Instances A sample from this dataset looks as follows: ### Dataset Fields The dataset has the following fields (also called "features"): ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows:
[ "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
[ "TAGS\n#region-us \n", "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
fc9df95d425ad80e3a96ff6a7738b8fb93ee3c80
# AutoTrain Dataset for project: persina-paraphrase ## Dataset Description This dataset has been automatically processed by AutoTrain for project persina-paraphrase. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": " \u0686\u0631\u0627 \u0645\u06cc \u06af\u0648\u06cc\u06cc\u0645 \"\u06cc\u06a9 \u0634\u0644\u0648\u0627\u0631\" \u0627\u06af\u0631 \u0641\u0642\u0637 \u06cc\u06a9 \u0686\u06cc\u0632 \u0627\u0633\u062a\u061f", "target": " \u0686\u0631\u0627 \u0645\u06cc \u06af\u0648\u06cc\u06cc\u0645 \u0634\u0644\u0648\u0627\u0631\u061f" }, { "text": " \u0647\u0646\u062f \u0631\u0627 \u062f\u0631 \u06cc\u06a9 \u062e\u0637 \u0686\u06af\u0648\u0646\u0647 \u062a\u0639\u0631\u06cc\u0641 \u0645\u06cc \u06a9\u0646\u06cc\u062f\u061f", "target": " \u0686\u06af\u0648\u0646\u0647 \u0647\u0646\u062f \u0631\u0627 \u062f\u0631 \u06cc\u06a9 \u062c\u0645\u0644\u0647 \u062a\u0639\u0631\u06cc\u0641 \u06a9\u0646\u06cc\u0645\u061f" } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "Value(dtype='string', id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 119410 | | valid | 29853 |
mahdiAsefi/autotrain-data-persina-paraphrase
[ "region:us" ]
2022-07-03T05:49:08+00:00
{"task_categories": ["conditional-text-generation"]}
2022-07-03T05:53:16+00:00
[]
[]
TAGS #region-us
AutoTrain Dataset for project: persina-paraphrase ================================================= Dataset Description ------------------- This dataset has been automatically processed by AutoTrain for project persina-paraphrase. ### Languages The BCP-47 code for the dataset's language is unk. Dataset Structure ----------------- ### Data Instances A sample from this dataset looks as follows: ### Dataset Fields The dataset has the following fields (also called "features"): ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows:
[ "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
[ "TAGS\n#region-us \n", "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
85aaa66a7843304692990eea17bc3b89ef99aac5
# Dataset Card for "UnpredicTable-mmo-champion-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
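As a rough illustration of the tables-to-tasks idea described in the card above (the published pipeline applies additional filtering and heuristics not shown here), one column of a web table can be designated as the target and every row turned into an (input, output) example. The helper and the table contents below are invented for demonstration; only the field names ('task', 'input', 'options', 'output', 'outputColName') follow the card:

```python
# Illustrative sketch only: turn a small table into UnpredicTable-style
# few-shot examples by treating one column as the output and the remaining
# cells of each row as the input. The table data and helper are made up;
# the dict keys mirror the fields documented in the dataset card.

def table_to_task(rows, output_col, task_name):
    # Candidate classes for the multiple-choice case: the set of values
    # observed in the target column.
    options = sorted({str(r[output_col]) for r in rows})
    examples = []
    for row in rows:
        input_cells = [f"[{k}] {v}" for k, v in row.items() if k != output_col]
        examples.append({
            "task": task_name,
            "input": " ".join(input_cells),
            "options": options,
            "output": str(row[output_col]),
            "outputColName": output_col,
        })
    return examples

rows = [
    {"Boss": "Morchok", "Raid": "Dragon Soul", "Difficulty": "Normal"},
    {"Boss": "Ultraxion", "Raid": "Dragon Soul", "Difficulty": "Heroic"},
]
examples = table_to_task(rows, output_col="Difficulty", task_name="demo-table")
print(examples[0]["input"], "->", examples[0]["output"])
# → [Boss] Morchok [Raid] Dragon Soul -> Normal
```

Each resulting example corresponds to one table row, so a single table yields one task with several examples, matching the "few examples per task" shape described above.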
MicPie/unpredictable_mmo-champion-com
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-03T07:15:38+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-mmo-champion-com"}
2022-08-04T19:09:49+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-mmo-champion-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
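As a sketch of how the per-task examples described in the Data Instances section might be concatenated into a few-shot prompt: the field names follow the card, but the values and the exact prompt template below are invented for illustration, not the formatting used in the paper.

```python
# Sketch (assumed template, not the paper's exact formatting): concatenate
# several (input, output) demonstrations from one task, then append a query
# input for the model to complete. 'options' is included when present, as in
# the multiple-choice classification case described above.

def build_few_shot_prompt(examples, query_input):
    lines = []
    for ex in examples:
        if ex.get("options"):
            lines.append("Options: " + ", ".join(ex["options"]))
        lines.append(f"Input: {ex['input']}")
        lines.append(f"Output: {ex['output']}")
        lines.append("")  # blank line separates demonstrations
    lines.append(f"Input: {query_input}")
    lines.append("Output:")
    return "\n".join(lines)

demos = [
    {"task": "demo", "input": "Spell: Frostbolt",
     "options": ["Mage", "Priest"], "output": "Mage"},
    {"task": "demo", "input": "Spell: Penance",
     "options": ["Mage", "Priest"], "output": "Priest"},
]
print(build_few_shot_prompt(demos, "Spell: Pyroblast"))
```

Because all examples in one file share the same 'task' identifier, any subset of a task's examples can be concatenated this way to form an episode of the desired shot count.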
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
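For readers who want to experiment with this subset, it can be fetched with the Hugging Face `datasets` library. This is a minimal sketch: it assumes `datasets` is installed and that the subset exposes a single "train" split (the card notes there are no additional splits); network access is required, so the download is kept inside a function rather than run at import time.

```python
# Minimal loading sketch (assumes `pip install datasets` and network access).
# The repo id is the dataset identifier shown on this card.

REPO_ID = "MicPie/unpredictable_mmo-champion-com"

def load_subset(split="train"):
    """Fetch the subset from the Hugging Face Hub (downloads on first call)."""
    from datasets import load_dataset
    return load_dataset(REPO_ID, split=split)
```

Each record returned by `load_subset()` carries the fields described above: 'task', 'input', 'options', 'output', plus the 'pageTitle', 'outputColName', 'url', and 'wdcFile' metadata.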
[ "# Dataset Card for \"UnpredicTable-mmo-champion-com\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-mmo-champion-com\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
ccd340079cf7705fabed9a460fdff394abac01bd
# Dataset Card for "UnpredicTable-baseball-fantasysports-yahoo-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
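To make the record layout above concrete, here is a minimal sketch (standard-library Python only) of how such jsonline records could be parsed and concatenated into a few-shot prompt. The record values, the `[Player] …` input formatting, the task name, and the `build_few_shot_prompt` helper are illustrative assumptions for this subset, not the dataset's actual contents or tooling:

```python
import json

# Two hypothetical records mimicking the field layout described above
# (values are invented for illustration; real records in this subset
# come from Yahoo fantasy-baseball web tables).
jsonl_text = "\n".join([
    json.dumps({
        "task": "yahoo_pos_column",                 # hypothetical task identifier
        "input": "[Player] Ace Hitter [Team] AAA",  # row cells, flattened
        "options": [],                              # empty: not a multiple-choice task
        "output": "OF",                             # target column element
        "pageTitle": "Fantasy Baseball",
        "outputColName": "Pos",
        "url": "https://baseball.fantasysports.yahoo.com/",
        "wdcFile": "example.json.gz",
    }),
    json.dumps({
        "task": "yahoo_pos_column",
        "input": "[Player] Fast Runner [Team] BBB",
        "options": [],
        "output": "SS",
        "pageTitle": "Fantasy Baseball",
        "outputColName": "Pos",
        "url": "https://baseball.fantasysports.yahoo.com/",
        "wdcFile": "example.json.gz",
    }),
])

def build_few_shot_prompt(records, query_input):
    """Concatenate records as in-context input/output pairs, then append the query."""
    col = records[0]["outputColName"]
    blocks = [f"{r['input']}\n{col}: {r['output']}" for r in records]
    blocks.append(f"{query_input}\n{col}:")  # query row: model fills in the target
    return "\n\n".join(blocks)

records = [json.loads(line) for line in jsonl_text.splitlines()]
prompt = build_few_shot_prompt(records, "[Player] New Player [Team] CCC")
```

In actual use, the records would be read from a task's jsonline file rather than constructed inline, and each task's examples would be concatenated in this way for few-shot fine-tuning or evaluation.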
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
MicPie/unpredictable_baseball-fantasysports-yahoo-com
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-03T07:46:09+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-baseball-fantasysports-yahoo-com"}
2022-08-04T18:37:41+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-baseball-fantasysports-yahoo-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
[ "# Dataset Card for \"UnpredicTable-baseball-fantasysports-yahoo-com\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-baseball-fantasysports-yahoo-com\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we 
apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
f4f1dcb833270d8e0319a2a86cfa3805fb3e4081
# Dataset Card for "UnpredicTable-phonearena-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
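To make the per-task structure described in this card concrete, here is a small illustrative sketch (the records are invented, not taken from the dataset): it groups a stream of example dicts, each carrying the documented 'task', 'input', 'options', and 'output' fields, into few-shot tasks keyed by their 'task' identifier.

```python
from collections import defaultdict

def group_examples_by_task(records):
    """Group example dicts into few-shot tasks keyed by their 'task' identifier."""
    tasks = defaultdict(list)
    for rec in records:
        tasks[rec["task"]].append(rec)
    return dict(tasks)

# Invented records following the documented schema (not real dataset rows).
records = [
    {"task": "t1", "input": "row A", "options": [], "output": "x"},
    {"task": "t1", "input": "row B", "options": [], "output": "y"},
    {"task": "t2", "input": "row C", "options": ["p", "q"], "output": "p"},
]
tasks = group_examples_by_task(records)
print({name: len(examples) for name, examples in tasks.items()})  # → {'t1': 2, 't2': 1}
```

Each resulting group corresponds to one task, whose examples can then be concatenated as a few-shot task as described under Data Instances.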
MicPie/unpredictable_phonearena-com
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-03T07:59:46+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-phonearena-com"}
2022-08-04T19:11:00+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-phonearena-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
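As an illustrative sketch of the intended few-shot use (field values are invented; this is not part of the official pipeline), the snippet below parses a few jsonlines records with the documented 'task', 'input', 'options', and 'output' fields and concatenates them into a single few-shot prompt, leaving the last example's output to be predicted.

```python
import json

def build_few_shot_prompt(examples):
    """Join a task's examples into one prompt; the last 'output' is held out."""
    context = [f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples[:-1]]
    query = f"Input: {examples[-1]['input']}\nOutput:"
    return "\n\n".join(context + [query])

# Invented jsonlines records following the documented schema.
jsonl = (
    '{"task": "demo", "input": "[Brand] Acme [Display] 5.0 in", "options": [], "output": "Acme One"}\n'
    '{"task": "demo", "input": "[Brand] Beta [Display] 6.1 in", "options": [], "output": "Beta X"}'
)
examples = [json.loads(line) for line in jsonl.splitlines()]
print(build_few_shot_prompt(examples))
```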
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
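The tables-to-tasks conversion described under "Initial Data Collection and Normalization" above can be sketched roughly as follows: one table column becomes the prediction target, and the remaining columns of the same row form the input. This is a simplified illustration only — the real pipeline described in the publication applies additional filtering and quality heuristics, the "[Column] value" input formatting is an assumption, and the table values here are invented:

```python
# Simplified sketch of turning one web table into a few-shot task.
# Omits the filtering and quality heuristics of the real pipeline.

def table_to_task(header, rows, output_col):
    """Turn each table row into an (input, output) example, using one
    column as the target and the remaining columns as the input."""
    out_idx = header.index(output_col)
    examples = []
    for row in rows:
        input_parts = [
            f"[{col}] {val}"
            for col, val in zip(header, row)
            if col != output_col
        ]
        examples.append({
            "input": " ".join(input_parts),
            "output": row[out_idx],
            "outputColName": output_col,
        })
    return examples

header = ["Player", "Team", "Position"]
rows = [["Jeter", "Yankees", "SS"], ["Ortiz", "Red Sox", "DH"]]
task = table_to_task(header, rows, "Position")
print(task[0])  # → {'input': '[Player] Jeter [Team] Yankees', 'output': 'SS', 'outputColName': 'Position'}
```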
[ "# Dataset Card for \"UnpredicTable-phonearena-com\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-phonearena-com\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
944fe3f43c298ca526eeb51927210795ab4721a0
# (NER) ontonotes-v5-eng-v4 This dataset is a subset of the original [conll2012_ontonotesv5](https://huggingface.co/datasets/conll2012_ontonotesv5) dataset. - Language: English - Version: v4 | Dataset | Examples | | --- | --- | | Training | 75187 | | Testing | 9479 |
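Assuming the examples follow the usual token-classification layout of parallel token and BIO tag sequences (an assumption to verify against an actual record), entity spans can be recovered from the tags with a small helper like this. PERSON and GPE are genuine OntoNotes v5 label types; the sentence itself is invented and the decoder assumes well-formed BIO input:

```python
def decode_bio(tokens, tags):
    """Group BIO-tagged tokens into (entity_text, label) spans.
    Minimal sketch: assumes well-formed BIO (no I- after a
    mismatched label)."""
    entities, current, label = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                entities.append((" ".join(current), label))
            current, label = [tok], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(tok)
        else:  # "O" tag (or stray "I-") closes any open span
            if current:
                entities.append((" ".join(current), label))
            current, label = [], None
    if current:
        entities.append((" ".join(current), label))
    return entities

tokens = ["Barack", "Obama", "visited", "Paris", "."]
tags = ["B-PERSON", "I-PERSON", "O", "B-GPE", "O"]
print(decode_bio(tokens, tags))  # → [('Barack Obama', 'PERSON'), ('Paris', 'GPE')]
```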
djagatiya/ner-ontonotes-v5-eng-v4
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "source_datasets:subset", "language:eng", "region:us" ]
2022-07-03T08:04:18+00:00
{"language": ["eng"], "source_datasets": ["subset"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"]}
2022-07-03T10:36:33+00:00
[]
[ "eng" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #source_datasets-subset #language-English #region-us
(NER) ontonotes-v5-eng-v4 ========================= This dataset is a subset of the original conll2012\_ontonotesv5 dataset. * Language: English * Version: v4
[]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #source_datasets-subset #language-English #region-us \n" ]
0b76fc0ecb5ea9fe99a5d5be9812716664061013
# Dataset Card for "UnpredicTable-support-google-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
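The few-shot example format described in the card above (dictionaries with 'task', 'input', 'options', and 'output' fields, concatenated into a few-shot task) can be sketched in code. The field names follow the dataset card; the prompt template itself and the sample records are illustrative assumptions, not the paper's exact formatting:

```python
# Sketch: assembling a few-shot prompt from UnpredicTable-style examples.
# Field names ('task', 'input', 'options', 'output') follow the dataset card;
# the prompt template and sample records below are illustrative assumptions.

def build_few_shot_prompt(examples, query):
    """Concatenate (input, output) pairs into a single few-shot prompt string."""
    parts = []
    for ex in examples:
        block = f"Input: {ex['input']}\n"
        if ex.get("options"):
            # Multiple-choice tasks list the candidate classes to choose from.
            block += "Options: " + ", ".join(ex["options"]) + "\n"
        block += f"Output: {ex['output']}"
        parts.append(block)
    # The final block leaves the output open for the model to complete.
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

examples = [
    {"task": "demo-task", "input": "color: sky", "options": ["blue", "green"], "output": "blue"},
    {"task": "demo-task", "input": "color: grass", "options": ["blue", "green"], "output": "green"},
]
prompt = build_few_shot_prompt(examples, "color: sea")
print(prompt)
```

In practice the examples for one prompt would all share the same 'task' identifier, since each task corresponds to one table.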
MicPie/unpredictable_support-google-com
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-03T08:06:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-support-google-com"}
2022-08-04T19:15:33+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-support-google-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
[ "# Dataset Card for \"UnpredicTable-support-google-com\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-support-google-com\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
a74dbc1675b4a257fa3312c56efdc297cdc2361f
# Dataset Card for "UnpredicTable-dividend-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
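As a small usage illustration for the task format described in the Data Instances and Data Fields sections, the sketch below reads one task's JSON-lines examples and concatenates them into a single few-shot prompt. The 'task', 'input', 'options', and 'output' field names follow this card; the concrete example rows and the `Input:`/`Output:` prompt template are hypothetical choices for illustration, not part of the dataset.

```python
import json

def load_task(path):
    """Read one task file: each line is a JSON dict with
    'task', 'input', 'options', and 'output' fields."""
    with open(path) as f:
        return [json.loads(line) for line in f]

def build_prompt(examples, query_input):
    """Concatenate few-shot examples, then append the query row
    whose 'output' the model should complete."""
    parts = [f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples]
    parts.append(f"Input: {query_input}\nOutput:")
    return "\n\n".join(parts)

# Hypothetical rows mirroring the documented fields.
examples = [
    {"task": "dividend-com_demo", "input": "Symbol: KO", "options": [], "output": "Coca-Cola"},
    {"task": "dividend-com_demo", "input": "Symbol: PEP", "options": [], "output": "PepsiCo"},
]
print(build_prompt(examples, "Symbol: JNJ"))
```

Concatenating examples this way is just one simple prompting scheme; see the publication for the formatting used in the authors' experiments.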
MicPie/unpredictable_dividend-com
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-03T08:15:30+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-dividend-com"}
2022-08-04T19:04:10+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-dividend-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
[ "# Dataset Card for \"UnpredicTable-dividend-com\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-dividend-com\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
121fb00f1583e20e3457e130c80e05a68c3c7f39
# Dataset Card for "UnpredicTable-bulbapedia-bulbagarden-net" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).

### Annotations

#### Annotation process

Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
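The `task` / `input` / `options` / `output` record layout documented in the Data Fields section of the card above lends itself to a short sketch. The snippet below is purely illustrative and not part of the dataset: the field values are invented placeholders, and only the schema and the card's idea of concatenating same-task examples into a few-shot prompt are taken from the documentation.

```python
import json

# Hypothetical records following the documented schema ('task', 'input',
# 'options', 'output', plus metadata fields). The values are invented for
# illustration; real rows are extracted from web tables.
rows = [
    {"task": "demo-task", "input": "Name: Bulbasaur; Number: 001",
     "options": ["Grass", "Fire", "Water"], "output": "Grass",
     "pageTitle": "Demo page", "outputColName": "Type",
     "url": "https://example.com/table", "wdcFile": "demo.json.gz"},
    {"task": "demo-task", "input": "Name: Charmander; Number: 004",
     "options": ["Grass", "Fire", "Water"], "output": "Fire",
     "pageTitle": "Demo page", "outputColName": "Type",
     "url": "https://example.com/table", "wdcFile": "demo.json.gz"},
]

# Each task is stored as one JSON Lines file: one JSON object per line.
jsonl = "\n".join(json.dumps(r) for r in rows)

def few_shot_prompt(jsonl_text: str) -> str:
    """Concatenate all but the last example as demonstrations and leave
    the final example's output to be predicted by a model."""
    examples = [json.loads(line) for line in jsonl_text.splitlines()]
    demos = [f"{ex['input']} -> {ex['output']}" for ex in examples[:-1]]
    query = f"{examples[-1]['input']} ->"
    return "\n".join(demos + [query])

print(few_shot_prompt(jsonl))
```

Since only the documented fields are accessed, swapping in real rows from any UnpredicTable subset should work unchanged; the `options` field would additionally constrain decoding for multiple-choice tasks.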
MicPie/unpredictable_bulbapedia-bulbagarden-net
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-03T08:24:28+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-bulbapedia-bulbagarden-net"}
2022-08-04T18:40:16+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-bulbapedia-bulbagarden-net" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
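As a sketch of how the fields above come together: since each task is a set of examples sharing the same columns, a few-shot prompt can be built by concatenating several solved examples and ending with an unsolved query. The helper `build_fewshot_prompt`, the prompt template, and the toy rows below are illustrative assumptions, not the exact formatting used in the publication.

```python
# A minimal sketch (not code from the publication): turn a handful of
# examples from one task -- each a dict with 'task', 'input', 'options',
# and 'output' fields, as documented above -- into a single few-shot
# prompt. The template and the toy rows are illustrative assumptions.

def build_fewshot_prompt(examples, query):
    """Concatenate solved examples, then an unsolved query for the model."""
    blocks = []
    for ex in examples:
        block = f"Input: {ex['input']}\n"
        if ex.get("options"):  # present for multiple-choice tasks
            block += f"Options: {', '.join(ex['options'])}\n"
        block += f"Output: {ex['output']}"
        blocks.append(block)
    query_block = f"Input: {query['input']}\n"
    if query.get("options"):
        query_block += f"Options: {', '.join(query['options'])}\n"
    query_block += "Output:"  # left blank for the model to complete
    blocks.append(query_block)
    return "\n\n".join(blocks)

examples = [
    {"task": "demo", "input": "Name: Charmander | Stage: Basic",
     "options": ["Fire", "Water"], "output": "Fire"},
    {"task": "demo", "input": "Name: Squirtle | Stage: Basic",
     "options": ["Fire", "Water"], "output": "Water"},
]
query = {"task": "demo", "input": "Name: Vulpix | Stage: Basic",
         "options": ["Fire", "Water"]}

prompt = build_fewshot_prompt(examples, query)
print(prompt)
```

Since the real task files are jsonlines, each line can be parsed with `json.loads` before being passed to a function like this.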
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way.
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
# Dataset Card for "UnpredicTable-wkdu-org" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
MicPie/unpredictable_wkdu-org
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-03T08:30:13+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-wkdu-org"}
2022-08-04T19:18:48+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-wkdu-org" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
[ "# Dataset Card for \"UnpredicTable-wkdu-org\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-wkdu-org\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
36db98c5f36305fb63229fd88b9c1f50bca7b140
# Dataset Card for "UnpredicTable-dummies-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have thousands of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., tens of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a JSON Lines file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
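The per-example structure described in the card (a 'task' identifier plus 'input', 'options', 'output', and table metadata fields) can be sketched in a few lines of Python. The field names below follow the card's "Data Fields" section, but the two records and the prompt format are invented for illustration — the actual cell values, URLs, and few-shot formatting used in the paper may differ.

```python
import json

# Two invented JSON Lines records mirroring the schema from "Data Fields".
# Field names match the card; all values are illustrative only.
jsonl_lines = [
    '{"task": "demo_task", "input": "[Name] Alice [Age] 34", '
    '"options": ["yes", "no"], "output": "yes", '
    '"pageTitle": "Demo page", "outputColName": "Member", '
    '"url": "https://example.com/table", "wdcFile": "demo.json.gz"}',
    '{"task": "demo_task", "input": "[Name] Bob [Age] 51", '
    '"options": ["yes", "no"], "output": "no", '
    '"pageTitle": "Demo page", "outputColName": "Member", '
    '"url": "https://example.com/table", "wdcFile": "demo.json.gz"}',
]
examples = [json.loads(line) for line in jsonl_lines]

def few_shot_prompt(examples):
    """Concatenate all but the last example as solved demonstrations,
    leaving the final input unanswered for the model to complete."""
    shots = [f"{ex['input']}\n{ex['output']}" for ex in examples[:-1]]
    shots.append(examples[-1]["input"])
    return "\n\n".join(shots)

prompt = few_shot_prompt(examples)
print(prompt)
```

In practice one would load a subset through the datasets library (e.g. via the MicPie/unpredictable_dummies-com id linked above) rather than hand-written records; the sketch is kept self-contained so the concatenation step is visible.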
MicPie/unpredictable_dummies-com
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-03T08:42:45+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-dummies-com"}
2022-08-04T19:04:46+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-dummies-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
[ "# Dataset Card for \"UnpredicTable-dummies-com\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-dummies-com\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
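The cards above describe each task as a jsonlines file holding several few-shot examples with the fields 'task', 'input', 'options', and 'output'. A reader for one such task file can be sketched with only the standard library; the file contents below are hypothetical records in the documented shape, not drawn from the dataset itself:

```python
import json
import os
import tempfile

def read_task(path):
    """Read one task file: jsonlines, one few-shot example per line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Write a small hypothetical task file with the documented fields.
records = [
    {"task": "demo", "input": "[Term] mitosis", "options": [], "output": "cell division"},
    {"task": "demo", "input": "[Term] osmosis", "options": [], "output": "movement of water"},
]
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    f.write("\n".join(json.dumps(r) for r in records))

examples = read_task(f.name)
os.unlink(f.name)
```

Each returned dictionary then carries one row-to-example mapping as described in the Data Instances section.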
13f3febb67413609d9cb25545f3587fa2ca5604d
# Dataset Card for "AdapTable-mgoblog-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).

### Annotations

#### Annotation process

Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
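The Data Instances section above notes that a task's examples "can be concatenated as a few-shot task". One minimal way to assemble such a prompt from the documented 'input'/'options'/'output' fields is sketched below; the `build_prompt` helper and the sample records are illustrative assumptions, not part of the dataset or any official tooling:

```python
def build_prompt(examples, query):
    """Join a task's few-shot examples into a single prompt string.

    Each example follows the documented fields: 'input', 'options'
    (may be empty), and 'output'.
    """
    parts = []
    for ex in examples:
        block = f"Input: {ex['input']}\n"
        if ex.get("options"):
            block += "Options: " + " / ".join(ex["options"]) + "\n"
        block += f"Output: {ex['output']}"
        parts.append(block)
    # The final query is left without an output for the model to complete.
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)


# Hypothetical records shaped like the fields documented in this card.
demo = [
    {"task": "demo", "input": "[Player] Denard Robinson [Position] QB",
     "options": ["Offense", "Defense"], "output": "Offense"},
    {"task": "demo", "input": "[Player] Jordan Kovacs [Position] S",
     "options": ["Offense", "Defense"], "output": "Defense"},
]
prompt = build_prompt(demo, "[Player] Taylor Lewan [Position] OT")
```

The resulting string ends with an unanswered `Output:` line, which is the usual shape for few-shot evaluation of a language model.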
MicPie/unpredictable_mgoblog-com
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-03T08:56:07+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "AdapTable-mgoblog-com"}
2022-08-04T19:09:03+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "AdapTable-mgoblog-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
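The jsonline format and few-shot concatenation described above can be sketched as follows. The field names ('task', 'input', 'options', 'output') come from this card, but the task id, table rows, and prompt template below are invented for illustration:

```python
import json

# Hypothetical jsonline records in the format described above; the task id
# and row content are invented, only the field names follow the card.
jsonl_lines = [
    '{"task": "demo-task", "input": "[Name] Denard Robinson [Year] 2012", '
    '"options": ["QB", "RB", "WR"], "output": "QB"}',
    '{"task": "demo-task", "input": "[Name] Jake Ryan [Year] 2012", '
    '"options": ["QB", "RB", "LB"], "output": "LB"}',
]

def build_few_shot_prompt(lines, query_input):
    """Concatenate a task's examples into a single few-shot prompt."""
    examples = [json.loads(line) for line in lines]
    blocks = [f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples]
    blocks.append(f"Input: {query_input}\nOutput:")  # query left for the model
    return "\n\n".join(blocks)

prompt = build_few_shot_prompt(jsonl_lines, "[Name] Devin Gardner [Year] 2013")
print(prompt)
```

The prompt template (the "Input:"/"Output:" framing and the blank-line separator) is one possible choice, not the one used in the publication.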
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
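The tables-to-tasks conversion summarized in the Source Data section above (one table column becomes the 'output' target, the remaining columns of the same row form the 'input') can be sketched as follows. This is an illustrative assumption with an invented toy table, not the authors' pipeline; the actual procedure described in the publication applies additional filtering and formatting heuristics:

```python
# Minimal sketch of a tables-to-tasks conversion in the spirit of this card.
# NOTE: illustrative only; the real pipeline includes extra heuristics.
def table_to_task(header, rows, output_col, task_id):
    out_idx = header.index(output_col)
    options = sorted({row[out_idx] for row in rows})  # candidate classes
    examples = []
    for row in rows:
        # Remaining columns of the same row become the 'input' string.
        input_str = " ".join(
            f"[{col}] {val}"
            for col, val in zip(header, row)
            if col != output_col
        )
        examples.append({
            "task": task_id,
            "input": input_str,
            "options": options,
            "output": row[out_idx],
            "outputColName": output_col,
        })
    return examples

# Invented toy table:
header = ["Game", "Platform", "Year"]
rows = [["Chrono Trigger", "SNES", "1995"], ["Halo", "Xbox", "2001"]]
tasks = table_to_task(header, rows, "Platform", "demo-table-task")
```

Picking a different `output_col` from the same table yields a different task, which is how a single web table can produce several few-shot tasks.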
91c25524931b0f421ab607c20c1a7bc6199be922
# Dataset Card for "UnpredicTable-gamefaqs-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
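For concreteness, the per-example layout described under "Data Instances" and "Data Fields" above can be sketched as a small Python snippet. Only the field names follow the card; every value below is invented for illustration and is not drawn from the actual dataset.

```python
import json

# One illustrative few-shot example as it might appear on a single
# line of a task's jsonlines file (all values are made up).
line = json.dumps({
    "task": "example_task_00",
    "input": "[Name] Ivy [Number] 001",
    "options": ["Grass", "Fire", "Water"],   # present for multiple-choice tasks
    "output": "Grass",                        # target column element
    "pageTitle": "Example game table",
    "outputColName": "Type",
    "url": "https://example.com/table",
    "wdcFile": "example_corpus_file.json.gz",
})

example = json.loads(line)

def to_prompt(ex):
    """Render one example as prompt text; concatenating several such
    renderings that share the same 'task' value gives a few-shot task."""
    opts = " | ".join(ex.get("options", []))
    return f"Input: {ex['input']}\nOptions: {opts}\nOutput: {ex['output']}"

print(to_prompt(example))
```

This is only a sketch of the record shape; consult the dataset files themselves for the exact serialization.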
MicPie/unpredictable_gamefaqs-com
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-03T09:10:20+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-gamefaqs-com"}
2022-08-04T19:08:30+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-gamefaqs-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
[ "# Dataset Card for \"UnpredicTable-gamefaqs-com\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-gamefaqs-com\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
832c7304a9b4dbd1c3f7a436d5e644c78084962d
# Dataset Card for "UnpredicTable-studystack-com" - Dataset of Few-shot Tasks from Tables

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** [email protected], [email protected]

### Dataset Summary

The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.

There are several dataset versions available:

* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).

### Annotations

#### Annotation process

Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts.
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
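The few-shot format described under "Data Instances" above can be sketched in code. The field names (`task`, `input`, `options`, `output`) follow this card, but the prompt template and the sample rows below are illustrative assumptions, not the authors' exact concatenation scheme.

```python
# Sketch: concatenating UnpredicTable-style examples into a few-shot prompt.
# Field names follow the dataset card; the prompt template and sample rows
# are illustrative assumptions, not the authors' exact format.

def build_few_shot_prompt(examples, query_input):
    """Render (input, output) pairs followed by the unanswered query input."""
    parts = []
    for ex in examples:
        block = f"Input: {ex['input']}\n"
        if ex.get("options"):  # present for multiple-choice classification tasks
            block += "Options: " + ", ".join(ex["options"]) + "\n"
        block += f"Output: {ex['output']}"
        parts.append(block)
    parts.append(f"Input: {query_input}\nOutput:")  # query left for the model
    return "\n\n".join(parts)

examples = [
    {"task": "demo", "input": "capital of France", "options": [], "output": "Paris"},
    {"task": "demo", "input": "capital of Spain", "options": [], "output": "Madrid"},
]
print(build_few_shot_prompt(examples, "capital of Italy"))
```

In a real task file, several such examples from the same table would be concatenated this way to form one few-shot task.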
MicPie/unpredictable_studystack-com
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-03T09:23:52+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-studystack-com"}
2022-08-04T19:15:01+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-studystack-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?

The dataset is extracted from WDC Web Table Corpora.

### Annotations

#### Annotation process

Manual annotation was only carried out for the UnpredicTable-rated-low,
UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way.
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
[ "# Dataset Card for \"UnpredicTable-studystack-com\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-studystack-com\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
2af65195e39bd9839053773b0afb1a330165449c
# Dataset Card for "UnpredicTable-sittercity-com" - Dataset of Few-shot Tasks from Tables

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** [email protected], [email protected]

### Dataset Summary

The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.

There are several dataset versions available:

* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).

### Annotations

#### Annotation process

Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts.
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
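As described above, each task is a JSON Lines file of few-shot examples that can be concatenated into a prompt. The sketch below illustrates that concatenation step. It is not part of the original card: the helper function and all sample values are invented for illustration, and only the field names ('task', 'input', 'options', 'output') come from the card.

```python
# Hypothetical sketch: assembling UnpredicTable-style few-shot examples into a
# single prompt. Field names follow the card above; all values are invented.

def build_few_shot_prompt(examples, query_input):
    """Join (input, output) demonstration pairs, then append the query input."""
    parts = [f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples]
    parts.append(f"Input: {query_input}\nOutput:")
    return "\n\n".join(parts)

examples = [
    {"task": "demo_task", "input": "City: Paris", "options": [], "output": "France"},
    {"task": "demo_task", "input": "City: Tokyo", "options": [], "output": "Japan"},
]

prompt = build_few_shot_prompt(examples, "City: Rome")
print(prompt)
```

A model fine-tuned or evaluated on such concatenated examples would be asked to complete the final `Output:`.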
MicPie/unpredictable_sittercity-com
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-03T09:37:38+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-sittercity-com"}
2022-08-04T19:13:09+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-sittercity-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.

The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.

### Languages

English

## Dataset Structure

### Data Instances

Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.

There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.

### Data Fields

'task': task identifier

'input': column elements of a specific row in the table.

'options': for multiple choice classification, it provides the options to choose from.

'output': target column element of the same row as input.

'pageTitle': the title of the page containing the table.

'outputColName': output column name

'url': url to the website containing the table

'wdcFile': WDC Web Table Corpus file

### Data Splits

The UnpredicTable datasets do not come with additional data splits.
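To make the jsonline format above concrete, here is a minimal sketch of parsing such records and concatenating a task's examples into a few-shot prompt. The record values below are invented for illustration (they are not rows from the actual dataset), and the prompt template is one simple choice, not the format mandated by the dataset:

```python
import json

# Two illustrative jsonline records in the format described above
# (task / input / options / output plus meta-data); the field values
# here are hypothetical, not taken from the real data.
record_lines = """
{"task": "example_task", "input": "[Name] Alice [City] Paris", "options": [], "output": "France", "pageTitle": "Example page", "outputColName": "Country", "url": "http://example.com", "wdcFile": "file_00.json.gz"}
{"task": "example_task", "input": "[Name] Bob [City] Berlin", "options": [], "output": "Germany", "pageTitle": "Example page", "outputColName": "Country", "url": "http://example.com", "wdcFile": "file_00.json.gz"}
""".strip().splitlines()

examples = [json.loads(line) for line in record_lines]

def few_shot_prompt(examples):
    # All but the last example are shown with their targets;
    # the last example's target is left for the model to predict.
    shots = [f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples[:-1]]
    query = f"Input: {examples[-1]['input']}\nOutput:"
    return "\n\n".join(shots + [query])

print(few_shot_prompt(examples))
```

For multiple-choice tasks, the 'options' list would additionally be rendered into the prompt so the model can choose among the candidate classes.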
## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?

The dataset is extracted from WDC Web Table Corpora.

### Annotations

#### Annotation process

Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way.
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0
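The UnpredicTable-unique variant described in the summary above keeps at most one task per website. A minimal sketch of what that filtering step could look like, assuming tasks carry the 'url' field described in the card (the records and the helper below are hypothetical, not the authors' actual code):

```python
from urllib.parse import urlparse

# Hypothetical task records; only the 'url' field matters for the dedup.
tasks = [
    {"task": "t1", "url": "http://example.com/page1"},
    {"task": "t2", "url": "http://example.com/page2"},
    {"task": "t3", "url": "http://other.org/page"},
]

def one_task_per_website(tasks):
    # Keep the first task seen for each website, keyed on the URL's host,
    # mirroring the "maximum of one task per website" rule.
    seen, unique = set(), []
    for t in tasks:
        site = urlparse(t["url"]).netloc
        if site not in seen:
            seen.add(site)
            unique.append(t)
    return unique

print([t["task"] for t in one_task_per_website(tasks)])
```

Keying on the URL host is only one plausible notion of "website"; the publication should be consulted for the exact definition used.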
[ "# Dataset Card for \"UnpredicTable-sittercity-com\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-sittercity-com\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
107a673e9688ef4bc63e27884e16e2c741ee494d
# Dataset Card for "UnpredicTable-msdn-microsoft-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.
* [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset.
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
  * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low)
  * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium)
  * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high)
* UnpredicTable data subsets based on the website of origin:
  * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com)
  * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net)
  * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com)
  * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com)
  * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com)
  * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com)
  * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org)
  * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org)
  * [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
  * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
  * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
  * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
  * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
  * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
  * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
  * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
  * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
  * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
  * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
  * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)
* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
  * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
  * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
  * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
  * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
  * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
  * [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
  * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
  * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
  * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
  * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
  * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
  * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
  * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
  * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
  * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
  * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
  * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
  * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
  * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
  * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
  * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
  * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
  * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
  * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
  * [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
  * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
  * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
  * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
  * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
  * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
  * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)

### Supported Tasks and Leaderboards

Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.

The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.

### Languages

English

## Dataset Structure

### Data Instances

Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task.
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
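The per-example layout described in the card above (a 'task' identifier plus 'input', 'options', and 'output' fields, with several examples from one jsonlines file concatenated into a few-shot prompt) can be sketched as follows. All field values here are invented for illustration and are not actual rows from the corpus:

```python
# Sketch of the UnpredicTable per-task example format and of concatenating
# examples from one task into a few-shot prompt. Field values are invented.

examples = [
    {
        "task": "demo_task",
        "input": "[Method] Close [Return type] void",
        "options": ["Closes the current stream.", "Opens a new stream."],
        "output": "Closes the current stream.",
        "pageTitle": "Stream methods",
        "outputColName": "Description",
    },
    {
        "task": "demo_task",
        "input": "[Method] Flush [Return type] void",
        "options": ["Clears all buffers.", "Opens a new stream."],
        "output": "Clears all buffers.",
        "pageTitle": "Stream methods",
        "outputColName": "Description",
    },
]

def few_shot_prompt(examples, query_input):
    """Concatenate solved examples as demonstrations, then append the query."""
    demos = [f"{ex['input']}\n{ex['output']}" for ex in examples]
    return "\n\n".join(demos + [query_input])

prompt = few_shot_prompt(examples, "[Method] Dispose [Return type] void")
```

For multiple-choice tasks, a model would additionally be scored on picking the correct entry from the 'options' list rather than generating the output freely.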
MicPie/unpredictable_msdn-microsoft-com
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-03T09:50:56+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-msdn-microsoft-com"}
2022-08-04T19:10:19+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-msdn-microsoft-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
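The tables-to-tasks conversion summarized in the card can be illustrated with a deliberately simplified sketch: pick one column of a web table as the prediction target and turn each row into an example whose input is the remaining columns. This is only a toy approximation of the idea, not the authors' actual pipeline, and the table below is invented:

```python
# Toy illustration of converting a web table into a few-shot task by
# holding out one column as the prediction target. Invented data;
# heavily simplified relative to the real conversion pipeline.

def table_to_task(header, rows, output_col):
    out_idx = header.index(output_col)
    examples = []
    for row in rows:
        input_parts = [f"[{col}] {val}"
                       for col, val in zip(header, row) if col != output_col]
        examples.append({
            "input": " ".join(input_parts),
            "output": row[out_idx],
            "outputColName": output_col,
        })
    return examples

header = ["Player", "Team", "Position"]
rows = [["Ruth", "Yankees", "Outfielder"],
        ["Berra", "Yankees", "Catcher"]]

task = table_to_task(header, rows, "Position")
```

In this sketch every choice of output column yields a different task from the same table, which is one way a corpus of 50M tables can expand into hundreds of thousands of tasks.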
[ "# Dataset Card for \"UnpredicTable-msdn-microsoft-com\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-msdn-microsoft-com\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
eaf057b4650acaee32eefcc413131ab5e64ff2c4
# Dataset Card for "UnpredicTable-cappex.com" - Dataset of Few-shot Tasks from Tables

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** [email protected], [email protected]

### Dataset Summary

The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.

There are several dataset versions available:

* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.

* [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset.

* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
  * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low)
  * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium)
  * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high)

* UnpredicTable data subsets based on the website of origin:
  * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com)
  * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net)
  * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com)
  * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com)
  * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com)
  * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com)
  * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org)
  * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org)
  * [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
  * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
  * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
  * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
  * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
  * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
  * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
  * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
  * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
  * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
  * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
  * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)

* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
  * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
  * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
  * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
  * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
  * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
  * [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
  * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
  * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
  * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
  * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
  * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
  * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
  * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
  * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
  * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
  * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
  * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
  * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
  * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
  * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
  * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
  * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
  * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
  * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
  * [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
  * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
  * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
  * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
  * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
  * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
  * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)

### Supported Tasks and Leaderboards

Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have thousands of tasks, while each task has only a few examples, compared to most current NLP datasets, which are very deep, i.e., tens of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.

The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.

### Languages

English

## Dataset Structure

### Data Instances

Each task is represented as a JSON Lines file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task.
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.

There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.

### Data Fields

'task': task identifier

'input': column elements of a specific row in the table.

'options': for multiple choice classification, it provides the options to choose from.

'output': target column element of the same row as input.

'pageTitle': the title of the page containing the table.

'outputColName': output column name

'url': url to the website containing the table

'wdcFile': WDC Web Table Corpus file

### Data Splits

The UnpredicTable datasets do not come with additional data splits.

## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?

The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).

### Annotations

#### Annotation process

Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0

### Citation Information

```
@misc{chan2022few,
  author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
  title = {Few-shot Adaptation Works with UnpredicTable Data},
  publisher = {arXiv},
  year = {2022},
  url = {https://arxiv.org/abs/2208.01009}
}
```
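To make the 'input'/'options'/'output' example format described above concrete, the sketch below shows one plausible way to concatenate several examples from a single task into a few-shot prompt. Note that the helper name, the prompt template, and the example rows are all illustrative assumptions for this card, not the exact format used in the paper's pipeline:

```python
def build_fewshot_prompt(examples, query_input, query_options=None):
    """Concatenate (input, output) demonstrations and append a final query.

    Each element of `examples` is a dict with the 'task', 'input',
    'options', and 'output' fields described in the Data Fields section.
    """
    parts = []
    for ex in examples:
        parts.append(f"Input: {ex['input']}\nOutput: {ex['output']}")
    query = f"Input: {query_input}\nOutput:"
    if query_options:
        # Multiple-choice tasks list the candidate classes before the query.
        query = "Options: " + ", ".join(query_options) + "\n" + query
    return "\n\n".join(parts + [query])


# Invented demonstration rows (the task id and values are made up):
demos = [
    {"task": "example_task", "input": "SAT: 1200 | GPA: 3.5", "options": [], "output": "Match"},
    {"task": "example_task", "input": "SAT: 900 | GPA: 2.1", "options": [], "output": "Reach"},
]

prompt = build_fewshot_prompt(demos, "SAT: 1400 | GPA: 3.9")
print(prompt)
```

The resulting string ends with an open "Output:" slot, so a language model conditioned on it completes the target column for the query row.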
MicPie/unpredictable_cappex-com
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-03T10:04:27+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cappex.com"}
2022-08-04T18:41:09+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "URL" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
[ "# Dataset Card for \"URL\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"URL\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to 
produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * 
UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
848fe8a39fc1bb84dce9e3a26818376eb810e77d
# Dataset Card for "UnpredicTable-en-wikipedia-org" - Dataset of Few-shot Tasks from Tables

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** [email protected], [email protected]

### Dataset Summary

The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.

There are several dataset versions available:

* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).

### Annotations

#### Annotation process

Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts.
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
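As an illustration of the record format described in the Data Instances and Data Fields sections above, the following minimal sketch builds and parses one hypothetical jsonline record. The field names come from this card; all values (and the simple prompt/target layout) are invented for illustration and are not taken from the actual dataset.

```python
import json

# A hypothetical jsonline record following the schema described in this card.
# Field names come from the card; all values are invented for illustration.
line = json.dumps({
    "task": "en-wikipedia-org_example",
    "input": "[Country] France [Capital] Paris",
    "options": ["Europe", "Asia", "Africa"],
    "output": "Europe",
    "pageTitle": "List of countries",
    "outputColName": "Continent",
    "url": "https://en.wikipedia.org/wiki/Example",
    "wdcFile": "example.json.gz",
})

# Parse the record and assemble one example in a simple prompt/target form.
record = json.loads(line)
prompt = "{}\nOptions: {}\nAnswer:".format(
    record["input"], ", ".join(record["options"])
)
target = record["output"]
print(prompt)
print("Target:", target)
```

In practice, records like this would be read line by line from the dataset's jsonline files (or via the Hugging Face `datasets` library), with several such examples concatenated to form one few-shot task.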
MicPie/unpredictable_en-wikipedia-org
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-03T10:17:38+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-en-wikipedia-org"}
2022-08-04T19:05:44+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-en-wikipedia-org" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have thousands of tasks, while each task has only a few examples, compared to most current NLP datasets, which are very deep, i.e., tens of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.

The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.

### Languages

English

## Dataset Structure

### Data Instances

Each task is represented as a JSON Lines file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.

There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.

### Data Fields

'task': task identifier

'input': column elements of a specific row in the table.

'options': for multiple choice classification, it provides the options to choose from.

'output': target column element of the same row as input.

'pageTitle': the title of the page containing the table.

'outputColName': output column name

'url': url to the website containing the table

'wdcFile': WDC Web Table Corpus file

### Data Splits

The UnpredicTable datasets do not come with additional data splits.
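To make the record format concrete, the sketch below builds a single example with the fields described above and concatenates a task's examples into a few-shot prompt. The record values and the prompt template are illustrative assumptions, not actual rows from the dataset.

```python
import json

# A hypothetical record following the fields described above; the values
# are invented for illustration and are not taken from the dataset.
record = {
    "task": "example_task",
    "input": "[Country] France [Year] 1998",
    "options": ["Paris", "Lyon", "Marseille"],
    "output": "Paris",
    "pageTitle": "Example page",
    "outputColName": "Capital",
    "url": "https://example.com/table",
    "wdcFile": "example.json.gz",
}
jsonl_lines = [json.dumps(record)]  # one task file holds one example per line

def build_few_shot_prompt(lines, query_input):
    """Concatenate a task's examples into a few-shot prompt that ends
    with an unanswered query for the model to complete."""
    shots = []
    for line in lines:
        ex = json.loads(line)
        shots.append(f"Input: {ex['input']}\nOutput: {ex['output']}")
    shots.append(f"Input: {query_input}\nOutput:")
    return "\n\n".join(shots)

prompt = build_few_shot_prompt(jsonl_lines, "[Country] Germany [Year] 2001")
print(prompt)
```

In the multiple-choice case, the 'options' list could additionally be rendered into each shot as candidate answers; the template above omits that for brevity.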
## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?

The dataset is extracted from WDC Web Table Corpora.

### Annotations

#### Annotation process

Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way.
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0
# Dataset Card for "UnpredicTable-cram-com" - Dataset of Few-shot Tasks from Tables

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** [email protected], [email protected]

### Dataset Summary

The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.

There are several dataset versions available:

* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.

* [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset.

* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
  * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low)
  * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium)
  * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high)

* UnpredicTable data subsets based on the website of origin:
  * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com)
  * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net)
  * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com)
  * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com)
  * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com)
  * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com)
  * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org)
  * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org)
  * [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
  * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
  * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
  * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
  * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
  * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
  * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
  * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
  * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
  * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
  * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
  * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)

* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
  * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
  * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
  * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
  * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
  * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
  * [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
  * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
  * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
  * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
  * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
  * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
  * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
  * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
  * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
  * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
  * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
  * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
  * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
  * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
  * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
  * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
  * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
  * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
  * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
  * [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
  * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
  * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
  * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
  * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
  * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
  * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)

### Supported Tasks and Leaderboards

Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have thousands of tasks, while each task has only a few examples, compared to most current NLP datasets, which are very deep, i.e., tens of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.

The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.

### Languages

English

## Dataset Structure

### Data Instances

Each task is represented as a JSON Lines file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task.
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.

There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.

### Data Fields

'task': task identifier

'input': column elements of a specific row in the table.

'options': for multiple choice classification, it provides the options to choose from.

'output': target column element of the same row as input.

'pageTitle': the title of the page containing the table.

'outputColName': output column name

'url': url to the website containing the table

'wdcFile': WDC Web Table Corpus file

### Data Splits

The UnpredicTable datasets do not come with additional data splits.

## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).

### Annotations

#### Annotation process

Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts.
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0

### Citation Information

```
@misc{chan2022few,
  author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
  title = {Few-shot Adaptation Works with UnpredicTable Data},
  publisher = {arXiv},
  year = {2022},
  url = {https://arxiv.org/abs/2208.01009}
}
```
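To make the record layout described under "Data Instances" concrete, below is a minimal Python sketch that parses a few JSON Lines records in the 'task'/'input'/'options'/'output' schema and concatenates them into a few-shot prompt. The two records are invented stand-ins for illustration only, not actual rows from this dataset:

```python
import json

# Invented example records in the schema described above; real rows come
# from the dataset's jsonline files and also carry meta-data fields such
# as 'pageTitle', 'outputColName', 'url', and 'wdcFile'.
jsonl_text = "\n".join([
    json.dumps({"task": "demo", "input": "capital of France",
                "options": [], "output": "Paris"}),
    json.dumps({"task": "demo", "input": "capital of Italy",
                "options": [], "output": "Rome"}),
])

def build_fewshot_prompt(jsonl_records: str, query: str) -> str:
    """Concatenate a task's examples into a single few-shot prompt."""
    examples = [json.loads(line) for line in jsonl_records.splitlines()]
    shots = [f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples]
    # The final segment leaves 'Output:' open for the model to complete.
    return "\n\n".join(shots + [f"Input: {query}\nOutput:"])

prompt = build_fewshot_prompt(jsonl_text, "capital of Spain")
print(prompt)
```

For multiple choice tasks, the 'options' list could additionally be rendered into the prompt so the model chooses among the given classes.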
MicPie/unpredictable_cram-com
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-03T10:31:09+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cram-com"}
2022-08-04T19:03:25+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-cram-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
[ "# Dataset Card for \"UnpredicTable-cram-com\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-cram-com\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
13acf265ff28b4809f0e95a5b41a5b96e831cdb4
# Dataset Card for "UnpredicTable-w3-org" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).

### Annotations

#### Annotation process

Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts.
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
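For concreteness, here is a minimal Python sketch of how a task in the jsonl format described above can be turned into a single few-shot prompt. The example records and the simple `Input:`/`Output:` prompt template are illustrative assumptions, not the exact records or template used in the paper.

```python
def build_few_shot_prompt(examples, query):
    """Concatenate few-shot examples and a final query into one prompt string.

    Each example is a dict in the task format described above, i.e. it has
    'task', 'input', 'options', and 'output' fields (metadata fields such as
    'pageTitle' and 'url' are omitted here).
    """
    shots = [f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples]
    shots.append(f"Input: {query}\nOutput:")
    return "\n\n".join(shots)


# Hypothetical records illustrating the schema (field values are invented).
examples = [
    {"task": "w3-org_demo", "options": [],
     "input": "[Specification] CSS 2.1 [Status] Recommendation",
     "output": "Recommendation"},
    {"task": "w3-org_demo", "options": [],
     "input": "[Specification] XForms 1.1 [Status] Recommendation",
     "output": "Recommendation"},
]

prompt = build_few_shot_prompt(examples, "[Specification] SVG 1.1 [Status] ?")
print(prompt)
```

A model fine-tuned on many such concatenated tasks is then evaluated on its ability to complete the final `Output:` line.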
MicPie/unpredictable_w3-org
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-03T10:45:06+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-w3-org"}
2022-08-04T19:16:53+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-w3-org" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?

The dataset is extracted from WDC Web Table Corpora.

### Annotations

#### Annotation process

Manual annotation was only carried out for the UnpredicTable-rated-low,
UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way.
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
[ "# Dataset Card for \"UnpredicTable-w3-org\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-w3-org\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks 
procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * 
UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
7d11a3ddcd818ce988ae8d89e5e997e8eea2c0a1
# Dataset Card for "UnpredicTable-sporcle-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).

### Annotations

#### Annotation process

Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
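The few-shot format described under Data Instances above can be assembled into prompts directly. The following is a minimal sketch: the two records are illustrative stand-ins that follow this card's schema ('task', 'input', 'options', 'output'), not rows from the actual corpus.

```python
# Illustrative stand-in records following the card's schema; they are
# NOT rows from the actual UnpredicTable corpus.
examples = [
    {"task": "demo", "input": "[Country] France", "options": ["Paris", "Rome"], "output": "Paris"},
    {"task": "demo", "input": "[Country] Italy", "options": ["Paris", "Rome"], "output": "Rome"},
]

def build_fewshot_prompt(examples, query):
    """Concatenate the examples of one task into a single few-shot prompt."""
    parts = []
    for ex in examples:
        options = " | ".join(ex["options"])
        parts.append(f"Input: {ex['input']}\nOptions: {options}\nOutput: {ex['output']}")
    parts.append(f"Input: {query}\nOutput:")  # the query the model should complete
    return "\n\n".join(parts)

prompt = build_fewshot_prompt(examples, "[Country] Spain")
print(prompt)
```

The card notes that the examples of a task "can be concatenated as a few-shot task"; the separator and option formatting here are arbitrary choices for illustration, not a prescribed prompt template.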
MicPie/unpredictable_sporcle-com
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-03T10:58:21+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-sporcle-com"}
2022-08-04T19:13:59+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-sporcle-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have thousands of tasks, while each task has only a few examples, compared to most current NLP datasets, which are very deep, i.e., tens of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.

The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.

### Languages

English

## Dataset Structure

### Data Instances

Each task is represented as a jsonlines file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.

There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.

### Data Fields

'task': task identifier

'input': column elements of a specific row in the table.

'options': for multiple choice classification, it provides the options to choose from.

'output': target column element of the same row as input.

'pageTitle': the title of the page containing the table.

'outputColName': output column name

'url': url to the website containing the table

'wdcFile': WDC Web Table Corpus file

### Data Splits

The UnpredicTable datasets do not come with additional data splits.
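To make the schema described above concrete, here is a minimal sketch of one hypothetical record and of concatenating several records of a task into a few-shot prompt. All field values below are invented for illustration only and are not taken from the actual corpus.

```python
import json

# Hypothetical example record following the schema described above;
# every concrete value here is invented for illustration.
example = {
    "task": "answer_quiz_question",  # task identifier (hypothetical name)
    "input": "[Question] What is the capital of France? [Category] Geography",
    "options": ["Paris", "Lyon", "Marseille"],  # only present for multiple-choice tasks
    "output": "Paris",
    "pageTitle": "World Capitals Quiz",
    "outputColName": "Answer",
    "url": "https://www.sporcle.com/games/example-quiz",
    "wdcFile": "example.warc",
}

# Each task is stored as a jsonlines file: one such record per line.
line = json.dumps(example)
assert json.loads(line) == example

# A minimal sketch of turning several records of one task into a
# few-shot prompt by concatenating input/output pairs.
def build_few_shot_prompt(records, query):
    shots = [f"Input: {r['input']}\nOutput: {r['output']}" for r in records]
    shots.append(f"Input: {query}\nOutput:")
    return "\n\n".join(shots)

prompt = build_few_shot_prompt(
    [example],
    "[Question] What is the capital of Italy? [Category] Geography",
)
print(prompt)
```

The exact prompt template used for fine-tuning is a design choice; this sketch only illustrates how the 'input' and 'output' fields of several examples can be chained into one few-shot context.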
## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?

The dataset is extracted from WDC Web Table Corpora.

### Annotations

#### Annotation process

Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0
# Dataset Card for "UnpredicTable-wiki-openmoko-org" - Dataset of Few-shot Tasks from Tables

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** [email protected], [email protected]

### Dataset Summary

The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.

There are several dataset versions available:

* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.
* [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset.
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
  * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low)
  * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium)
  * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high)
* UnpredicTable data subsets based on the website of origin:
  * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com)
  * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net)
  * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com)
  * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com)
  * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com)
  * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com)
  * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org)
  * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org)
  * [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
  * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
  * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
  * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
  * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
  * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
  * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
  * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
  * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
  * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
  * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
  * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)
* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
  * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
  * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
  * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
  * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
  * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
  * [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
  * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
  * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
  * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
  * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
  * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
  * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
  * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
  * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
  * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
  * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
  * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
  * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
  * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
  * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
  * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
  * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
  * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
  * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
  * [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
  * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
  * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
  * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
  * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
  * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
  * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)

### Supported Tasks and Leaderboards

Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have thousands of tasks, while each task has only a few examples, compared to most current NLP datasets, which are very deep, i.e., tens of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.

The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.

### Languages

English

## Dataset Structure

### Data Instances

Each task is represented as a jsonlines file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task.
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
MicPie/unpredictable_wiki-openmoko-org
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-03T11:06:24+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-wiki-openmoko-org"}
2022-08-04T19:17:59+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-wiki-openmoko-org" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
[ "# Dataset Card for \"UnpredicTable-wiki-openmoko-org\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-wiki-openmoko-org\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
a9b29ebafb43b2a8e2f6f3c3253aa0df2e920688
# Dataset Card for "UnpredicTable-ensembl-org" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
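The "Data Instances" section of this card describes each task as a jsonlines file of example dictionaries with 'task', 'input', 'options', and 'output' fields that can be concatenated into a few-shot prompt. The sketch below illustrates that concatenation; the field names follow the card, but the concrete record values and the prompt template are illustrative assumptions, not actual rows from the dataset.

```python
import json

# Hypothetical records in the jsonlines format described under "Data Instances".
# Field names ('task', 'input', 'options', 'output') follow the card; the
# values below are made up for illustration and are not real dataset rows.
jsonl = """\
{"task": "example-task", "input": "[COL] Gene [VAL] BRCA2", "options": ["protein coding", "pseudogene"], "output": "protein coding"}
{"task": "example-task", "input": "[COL] Gene [VAL] XIST", "options": ["protein coding", "pseudogene"], "output": "pseudogene"}
"""

examples = [json.loads(line) for line in jsonl.splitlines()]

# Concatenate all but the last example as in-context demonstrations,
# leaving the final example's 'output' held out as the prediction target.
prompt_parts = []
for ex in examples[:-1]:
    prompt_parts.append(f"Input: {ex['input']}\nOutput: {ex['output']}")
prompt_parts.append(f"Input: {examples[-1]['input']}\nOutput:")
prompt = "\n\n".join(prompt_parts)

print(prompt)
```

The same concatenation can be applied to any task file in the dataset; for multiple-choice tasks the 'options' field supplies the candidate completions to score.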
MicPie/unpredictable_ensembl-org
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-03T11:19:43+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-ensembl-org"}
2022-08-04T19:06:23+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-ensembl-org" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
[ "# Dataset Card for \"UnpredicTable-ensembl-org\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-ensembl-org\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
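The jsonline task format described in the UnpredicTable card above (a 'task' identifier plus 'input', 'options', 'output', and table metadata fields) can be sketched as follows; every field value below is invented for illustration, and only the field names come from the card:

```python
import json

# Hypothetical few-shot instance in the layout the card describes.
# All values are invented; only the key names are documented above.
instance = {
    "task": "example_task_from_one_web_table",
    "input": "[Player] John Smith [Team] Example FC [Season] 2014",
    "options": ["Forward", "Goalkeeper"],   # candidate classes, if any
    "output": "Forward",                    # target column of the same row
    "pageTitle": "Example stats page",
    "title": "Example table title",
    "outputColName": "Position",
    "url": "https://example.com/table",
    "wdcFile": "example-file.json.gz",
}

# One line of a task's jsonline file is simply the serialized dictionary.
line = json.dumps(instance)
print(line)
```

Several such lines concatenated form one few-shot task, as the card explains.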
9d9df9f4f8531f0033aa1a9ec78925759ef84c0a
annotations_creators: - crowdsourced language: - uk language_creators: - crowdsourced license: - cc-by-sa-4.0 multilinguality: - monolingual paperswithcode_id: squad pretty_name: '' size_categories: - 100K<n<1M source_datasets: - extended|squad_v2 task_categories: - question-answering task_ids: - open-domain-qa - extractive-qa train-eval-index: - col_mapping: answers: answer_start: answer_start text: text context: context question: question config: squad_v2 metrics: - name: SQuAD v2 type: squad_v2 splits: eval_split: validation train_split: train task: question-answering task_id: extractive_question_answering # Dataset Card for ua-squad ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://github.com/fido-ai/ua-datasets - **Repository:** https://huggingface.co/datasets/FIdo-AI/ua-squad - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary Ukrainian translation of the Stanford Question Answering Dataset (SQuAD) 2.0 
### Supported Tasks and Leaderboards question-answering ### Languages Ukrainian ## Dataset Structure ### Data Instances [Needs More Information] ### Data Fields [Needs More Information] ### Data Splits [Needs More Information] ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information [Needs More Information]
FIdo-AI/ua-squad
[ "region:us" ]
2022-07-03T14:28:24+00:00
{}
2022-07-09T19:55:51+00:00
[]
[]
TAGS #region-us
annotations_creators: - crowdsourced language: - uk language_creators: - crowdsourced license: - cc-by-sa-4.0 multilinguality: - monolingual paperswithcode_id: squad pretty_name: '' size_categories: - 100K<n<1M source_datasets: - extended|squad_v2 task_categories: - question-answering task_ids: - open-domain-qa - extractive-qa train-eval-index: - col_mapping: answers: answer_start: answer_start text: text context: context question: question config: squad_v2 metrics: - name: SQuAD v2 type: squad_v2 splits: eval_split: validation train_split: train task: question-answering task_id: extractive_question_answering # Dataset Card for ua-squad ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary Ukrainian translation of the Stanford Question Answering Dataset (SQuAD) 2.0 ### Supported Tasks and Leaderboards question-answering ### Languages Ukrainian ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information
[ "# Dataset Card for ua-squad", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nUkrainian translation of the Stanford Question Answering Dataset (SQuAD) 2.0", "### Supported Tasks and Leaderboards\n\nquestion-answering", "### Languages\n\nUkrainian", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information" ]
[ "TAGS\n#region-us \n", "# Dataset Card for ua-squad", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nUkrainian translation of the Stanford Question Answering Dataset (SQuAD) 2.0", "### Supported Tasks and Leaderboards\n\nquestion-answering", "### Languages\n\nUkrainian", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information" ]
2109baf2b05a833eff82638f74fafe5162bf9c80
from datasets import load_dataset dataset = load_dataset("1989shack/1989shack.com")
1989shack/1989shack.com
[ "license:apache-2.0", "region:us" ]
2022-07-03T14:36:48+00:00
{"license": "apache-2.0"}
2022-12-04T00:45:42+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
from datasets import load_dataset dataset = load_dataset("1989shack/URL")
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
c2dd5ab3983839f23d24c455c115d39634fe2f2c
# Dataset Card for [corpusELE.csv] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description corpusELE is a dataset made of texts from students of Spanish as Foreign Language (**ELE**), all from a set of files of **CAES** (Corpus de Aprendices de Español como Lengua Extranjera) downloaded from the website of the **Instituto Cervantes**. The main objective of this dataset is the creation and subsequent training, by means of Deep Learning, of a classification model that, based on these data, allows to establish, given an expression in Spanish, the level of knowledge of Spanish and even the mother tongue of the speaker. In linguistics, a corpus is a more or less extensive set of texts in electronic format that have been assembled in a computer application, according to a certain design, to facilitate the study of the language or linguistic variety from which these texts have been extracted. 
Among the many types and subtypes of corpora that currently exist, the so-called 'learner corpora' contain texts produced by people who are learning a given language and who speak different initial, familiar or mother tongues and have different degrees of knowledge of the target language (levels) and CAES is one of those. * **File Name**: corpusELE.csv * **Content Description**: Set of texts from ELE students of different levels of proficiency and with different mother tongues. * **File Type**: CSV separated by COMMA * **Header Descriptions**: Included in the dataset (first row) * **Encoding type**: UTF-8 ### Dataset Summary * Number of Columns: 6 * Number of Rows: 46.787 ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Spanish ## Dataset Structure 1. **numero** (float): CAES phrase or text number. 2. **nivel** (string): Level of knowledge of Spanish of the ELE student who has provided the text. It will be one of those established in the Common European Framework of Reference in the learning of foreign languages. 3. **lenguaM** (string): Mother tongue of the ELE student to which the registered text belongs. 4. **pClave** (string): Key Word in the phrase or text. As indicated, it may be a punctuation mark, a mark or any other element or character in the sentence considered prominent or characteristic. 5. **frase** (string): Complete phrase or text provided by the ELE student. It is made up of the concatenation of the two parts into which it has been segmented in the source files and it also includes the word or key element. 6. **archivo** (string): It is additional information included in the dataset in order to be able to use it in the preprocessing of the data. It refers to the name of the file from which the corresponding text has been taken. Although this information is not necessary for the object of the work, it is of interest when debugging the data capture. Later it will be information that we can do without.
### Data Instances Each instance of the dataset consists of a single sentence or text ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data The Cervantes Institute, on its website (www.institutocervantes.es), makes freely available to users the so-called CAES or Corpus de Aprendices de Español, currently in its version 2.1, published in March 2022. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
FJC/corpusELE.csv
[ "region:us" ]
2022-07-03T17:26:08+00:00
{}
2022-07-06T21:06:44+00:00
[]
[]
TAGS #region-us
# Dataset Card for [URL] ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description corpusELE is a dataset made of texts from students of Spanish as a Foreign Language (ELE), all from a set of files of CAES (Corpus de Aprendices de Español como Lengua Extranjera) downloaded from the website of the Instituto Cervantes. The main objective of this dataset is the creation and subsequent training, by means of Deep Learning, of a classification model that, based on these data, allows one to establish, given an expression in Spanish, the level of knowledge of Spanish and even the mother tongue of the speaker. In linguistics, a corpus is a more or less extensive set of texts in electronic format that have been assembled in a computer application, according to a certain design, to facilitate the study of the language or linguistic variety from which these texts have been extracted. Among the many types and subtypes of corpora that currently exist, the so-called 'learner corpora' contain texts produced by people who are learning a given language and who speak different initial, familiar or mother tongues and have different degrees of knowledge of the target language (levels) and CAES is one of those. * File Name: URL * Content Description: Set of texts from ELE students of different levels of proficiency and with different mother tongues.
* File Type: CSV separated by COMMA * Header Descriptions: Included in the dataset (first row) * Encoding type: UTF-8 ### Dataset Summary * Number of Columns: 6 * Number of Rows: 46.787 ### Supported Tasks and Leaderboards ### Languages Spanish ## Dataset Structure 1. numero (float): CAES phrase or text number. 2. nivel (string): Level of knowledge of Spanish of the ELE student who has provided the text. It will be one of those established in the Common European Framework of Reference in the learning of foreign languages. 3. lenguaM (string): Mother tongue of the ELE student to which the registered text belongs. 4. pClave (string): Key Word in the phrase or text. As indicated, it may be a punctuation mark, a mark or any other element or character in the sentence considered prominent or characteristic. 5. frase (string): Complete phrase or text provided by the ELE student. It is made up of the concatenation of the two parts into which it has been segmented in the source files and it also includes the word or key element. 6. archivo (string): It is additional information included in the dataset in order to be able to use it in the preprocessing of the data. It refers to the name of the file from which the corresponding text has been taken. Although this information is not necessary for the object of the work, it is of interest when debugging the data capture. Later it will be information that we can do without. ### Data Instances Each instance of the dataset consists of a single sentence or text ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data The Cervantes Institute, on its website (URL), makes freely available to users the so-called CAES or Corpus de Aprendices de Español, currently in its version 2.1, published in March 2022. #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators?
### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @github-username for adding this dataset.
[ "# Dataset Card for [URL]", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\ncorpusELE is a dataset made of texts from students of Spanish as Foreign Language (ELE), all from a set of files of CAES (Corpus de Aprendices de Español como Lengua Extranjera) downloaded from the website of the Instituto Cervantes. The main objective of this dataset is the creation and subsequent training, by means of Deep Learning, of a classification model that, based on these data, allows to establish, given an expression in Spanish, the level of knowledge of Spanish and even the mother tongue of the speaker.\n\nIn linguistics, a corpus is a more or less extensive set of texts in electronic format that have been assembled in a computer application, according to a certain design, to facilitate the study of the language or linguistic variety from which these texts have been extracted. 
Among the many types and subtypes of corpora that currently exist, the so-called 'learner corpora' contain texts produced by people who are learning a given language and who speak different initial, familiar or mother tongues and different degrees of knowledge of the target language (levels) and CAES is one of those.\n\n* File Name: URL\n* Content Description: Set of text from ELE students of different levels of proficiency and with different mother tongues.\n* File Type: CSV separated by COMMA\n* Header Descriptions: Included in the dataset (first row)\n* Encoding type: UTF-8", "### Dataset Summary\n\n* Number of Columns: 6\n* Number of Rows: 46.787", "### Supported Tasks and Leaderboards", "### Languages\n\nSpanish", "## Dataset Structure\n\n1. numero (float): CAES phrase or text number.\n2. nivel (string): Level of knowledge of Spanish of the ELE student who has provided the text. It will be one of those established in the Common European Framework of Reference in the learning of foreign languages.\n3. lenguaM (string): Mother tongue of the ELE student to which the registered text belongs.\n4. pClave (string): Key Word in the phrase or text. As indicated, it may be a punctuation mark, a mark or any other element or character in the sentence considered prominent or characteristic.\n5. frase (string): Complete phrase or text provided by the ELE student. It is made up of the concatenation of the two parts into which it has been segmented in the source files and it also includes the word or key element.\n6. archivo (string): It is additional information included in the dataset in order to be able to use it in the preprocessing of the data. It refers to the name of the file from which the corresponding text has been taken. Although this information is not necessary for the object of the work, it is of interest when debugging the data capture. 
Later it will be information that we can do without.", "### Data Instances\n\nEach instance of the dataset consists of a single sentence or text", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data\n\nThe Cervantes Institute, on its website (URL), makes freely available to users the so-called CAES or Corpus de Aprendices de Español, currently in its version 2.1. published in March 2022.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @github-username for adding this dataset." ]
[ "TAGS\n#region-us \n", "# Dataset Card for [URL]", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\ncorpusELE is a dataset made of texts from students of Spanish as Foreign Language (ELE), all from a set of files of CAES (Corpus de Aprendices de Español como Lengua Extranjera) downloaded from the website of the Instituto Cervantes. The main objective of this dataset is the creation and subsequent training, by means of Deep Learning, of a classification model that, based on these data, allows to establish, given an expression in Spanish, the level of knowledge of Spanish and even the mother tongue of the speaker.\n\nIn linguistics, a corpus is a more or less extensive set of texts in electronic format that have been assembled in a computer application, according to a certain design, to facilitate the study of the language or linguistic variety from which these texts have been extracted. 
Among the many types and subtypes of corpora that currently exist, the so-called 'learner corpora' contain texts produced by people who are learning a given language and who speak different initial, familiar or mother tongues and different degrees of knowledge of the target language (levels) and CAES is one of those.\n\n* File Name: URL\n* Content Description: Set of text from ELE students of different levels of proficiency and with different mother tongues.\n* File Type: CSV separated by COMMA\n* Header Descriptions: Included in the dataset (first row)\n* Encoding type: UTF-8", "### Dataset Summary\n\n* Number of Columns: 6\n* Number of Rows: 46.787", "### Supported Tasks and Leaderboards", "### Languages\n\nSpanish", "## Dataset Structure\n\n1. numero (float): CAES phrase or text number.\n2. nivel (string): Level of knowledge of Spanish of the ELE student who has provided the text. It will be one of those established in the Common European Framework of Reference in the learning of foreign languages.\n3. lenguaM (string): Mother tongue of the ELE student to which the registered text belongs.\n4. pClave (string): Key Word in the phrase or text. As indicated, it may be a punctuation mark, a mark or any other element or character in the sentence considered prominent or characteristic.\n5. frase (string): Complete phrase or text provided by the ELE student. It is made up of the concatenation of the two parts into which it has been segmented in the source files and it also includes the word or key element.\n6. archivo (string): It is additional information included in the dataset in order to be able to use it in the preprocessing of the data. It refers to the name of the file from which the corresponding text has been taken. Although this information is not necessary for the object of the work, it is of interest when debugging the data capture. 
Later it will be information that we can do without.", "### Data Instances\n\nEach instance of the dataset consists of a single sentence or text", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data\n\nThe Cervantes Institute, on its website (URL), makes freely available to users the so-called CAES or Corpus de Aprendices de Español, currently in its version 2.1. published in March 2022.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @github-username for adding this dataset." ]
b76b66326552bd73cd041a6090c8b3eb5f7e3f55
# AutoTrain Dataset for project: sum-200-random ## Dataset Description This dataset has been automatically processed by AutoTrain for project sum-200-random. ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "aen: {Forest hermit to Professor, it's never too late to change. | Dr. Gregory P. Smith | TEDxByronB[...]", "target": "Fire, plenty of ferns to sleep on and an endless supply of alcohol. 65" }, { "text": "aen: {William Noel: Revealing the lost codex of Archimedes}{from 62% to 72%}{And combinatorics is a [...]", "target": "The really astonishing thing though about this manuscript is that we looked at the other manuscripts[...]" } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "Value(dtype='string', id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 451280 | | valid | 112821 |
mf99/autotrain-data-sum-200-random
[ "language:en", "region:us" ]
2022-07-03T19:38:28+00:00
{"language": ["en"], "task_categories": ["conditional-text-generation"]}
2022-10-23T05:22:05+00:00
[]
[ "en" ]
TAGS #language-English #region-us
AutoTrain Dataset for project: sum-200-random ============================================= Dataset Description ------------------- This dataset has been automatically processed by AutoTrain for project sum-200-random. ### Languages The BCP-47 code for the dataset's language is en. Dataset Structure ----------------- ### Data Instances A sample from this dataset looks as follows: ### Dataset Fields The dataset has the following fields (also called "features"): ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows:
[ "### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follows:" ]
[ "TAGS\n#language-English #region-us \n", "### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follows:" ]
7c23e66a959c1f7198c25fd82111d6c633e8d514
annotations_creators: - other language: - en language_creators: - found license: - gpl-3.0 multilinguality: - monolingual pretty_name: The World's Sentiment size_categories: - 1K<n<10K source_datasets: - original task_categories: - other task_ids: [] # Dataset Card for The World's Sentiment ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [TWS Homepage](the-worlds-sentiment.enzon3.repl.co) - **Repository:** [GitHub](https://github.com/EnZon3/TWS-dataset_gen) - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary The World's Sentiment is a dataset of news headlines' titles and their sentiment scores. This dataset was for a project of mine to see how positive or negative events in the world are. There are some use cases for this dataset, if you use only the headlines, you could train an AI to generate fake, but realistic headlines. 
But if you opt for what I did, which was to analyze the dataset, you'll find quite a bit of interesting stuff. ### Supported Tasks and Leaderboards [Needs More Information] ### Languages This dataset is in English, text is from news articles' titles, provided by [News API](https://newsapi.org) The associated BCP-47 code is `en`. ## Dataset Structure ### Data Instances A typical data point comprises a title, and the title's sentiment score. The dataset is a CSV file, so I cannot provide JSON data, but I can provide an example of what it would look like in Excel. | Headline | Sentiment score | | ----- | ---- | | Russia Ukraine: Russian missile strike hits crowded shopping mall in Kremenchuk - 9News | -0.181818182 | Here's what the example would look like in plain-text: Russia Ukraine: Russian missile strike hits crowded shopping mall in Kremenchuk - 9News,-0.181818182 ### Data Fields Headline: The title to a headline Sentiment: The sentiment score of the headline's title. ### Data Splits [N/A] ## Dataset Creation ### Curation Rationale I created the TWS dataset after a question popped up in my head on June 27th. It kind of went like this: 'How negative, or positive are news headlines?' ### Source Data #### Initial Data Collection and Normalization The data was collected by getting the top headlines in every English-speaking country that [News API](https://newsapi.org) supported and running through the responses and logging only the titles, while also simultaneously using Sentiment analysis to get the sentiment score (The sentiment dataset I used was afinn). The data was slightly modified in its final form to correct any syntax errors in the CSV file using [CSVlint](https://csvlint.io/) to find them. The dataset is not tokenized. #### Who are the source language producers? The data was made by humans, and the news sources I used are located [here](https://newsapi.org/docs/endpoints/sources), because there are so many that I can't put them here. 
### Annotations #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information Names might show up in the titles. ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information [Needs More Information]
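As a sketch of the scoring procedure described under Initial Data Collection and Normalization, the snippet below computes an average-valence score per headline. The three-word lexicon is invented for the example; the real pipeline used the full AFINN word list, and the card does not say exactly how scores were normalized, so the numbers here will not match the dataset's values.

```python
# Toy AFINN-style scorer: average word valence over the headline.
# The lexicon is an invented three-word subset, not the real AFINN list.
LEXICON = {"strike": -2, "hits": -1, "crowded": -1}

def sentiment(headline: str) -> float:
    words = headline.lower().split()
    return sum(LEXICON.get(w, 0) for w in words) / len(words)

score = sentiment("Russian missile strike hits crowded shopping mall")
print(round(score, 3))  # -0.571, negative like the example row above
```

Dividing the summed valence by the word count keeps long and short headlines comparable, which is one plausible way to arrive at fractional scores like the -0.181818182 in the example row.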
EnZon3/The-Worlds-Sentiment
[ "region:us" ]
2022-07-03T23:56:51+00:00
{}
2022-07-04T22:08:24+00:00
[]
[]
TAGS #region-us
annotations\_creators: * other language: * en language\_creators: * found license: * gpl-3.0 multilinguality: * monolingual pretty\_name: The World's Sentiment size\_categories: * 1K<n<10K source\_datasets: * original task\_categories: * other task\_ids: [] Dataset Card for The World's Sentiment ====================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information Dataset Description ------------------- * Homepage: TWS Homepage * Repository: GitHub * Paper: * Leaderboard: * Point of Contact: ### Dataset Summary The World's Sentiment is a dataset of news headlines' titles and their sentiment scores. This dataset was for a project of mine to see how positive or negative events in the world are. There are some use cases for this dataset, if you use only the headlines, you could train an AI to generate fake, but realistic headlines. But if you opt for what I did, which was to analyze the dataset, you'll find quite a bit of interesting stuff. ### Supported Tasks and Leaderboards ### Languages This dataset is in English, text is from news articles' titles, provided by News API The associated BCP-47 code is 'en'. Dataset Structure ----------------- ### Data Instances A typical data point comprises a title, and the title's sentiment score. The dataset is a CSV file, so I cannot provide JSON data, but I can provide an example of what it would look like in Excel. 
Here's what the example would look like in plain-text: Russia Ukraine: Russian missile strike hits crowded shopping mall in Kremenchuk - 9News,-0.181818182 ### Data Fields Headline: The title to a headline Sentiment: The sentiment score of the headline's title. ### Data Splits [N/A] Dataset Creation ---------------- ### Curation Rationale I created the TWS dataset after a question popped up in my head on June 27th. It kind of went like this: 'How negative, or positive are news headlines?' ### Source Data #### Initial Data Collection and Normalization The data was collected by getting the top headlines in every English-speaking country that News API supported and running through the responses and logging only the titles, while also simultaneously using Sentiment analysis to get the sentiment score (The sentiment dataset I used was afinn). The data was slightly modified in its final form to correct any syntax errors in the CSV file using CSVlint to find them. The dataset is not tokenized. #### Who are the source language producers? The data was made by humans, and the news sources I used are located here, because there are so many that I can't put them here. ### Annotations #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information Names might show up in the titles. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information
[ "### Dataset Summary\n\n\nThe World's Sentiment is a dataset of news headlines' titles and their sentiment scores. This dataset was for a project of mine to see how positive or negative events in the world are. There are some use cases for this dataset, if you use only the headlines, you could train an AI to generate fake, but realistic headlines. But if you opt for what I did, which was to analyze the dataset, you'll find quite a bit of interesting stuff.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThis dataset is in English, text is from news articles' titles, provided by News API\nThe associated BCP-47 code is 'en'.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point comprises a title, and the title's sentiment score. The dataset is a CSV file, so I cannot provide JSON data, but I can provide an example of what it would look like in Excel.\n\n\n\nHere's what the example would look like in plain-text:\n\n\nRussia Ukraine: Russian missile strike hits crowded shopping mall in Kremenchuk - 9News,-0.181818182", "### Data Fields\n\n\nHeadline:\n\n\nThe title to a headline\n\n\nSentiment:\n\n\nThe sentiment score of the headline's title.", "### Data Splits\n\n\n[N/A]\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nI created the TWS dataset after a question popped up in my head on June 27th. It kind of went like this: 'How negative, or positive are news headlines?'", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe data was collected by getting the top headlines in every English-speaking country that News API supported and running through the responses and logging only the titles, while also simultaneously using Sentiment analysis to get the sentiment score (The sentiment dataset I used was afinn). The data was slightly modified in its final form to correct any syntax errors in the CSV file using CSVlint to find them. 
The dataset is not tokenized.", "#### Who are the source language producers?\n\n\nThe data was made by humans, and the news sources I used are located here, because there are so many that I can't put them here.", "### Annotations", "#### Annotation process\n\n\n[N/A]", "#### Who are the annotators?\n\n\n[N/A]", "### Personal and Sensitive Information\n\n\nNames might show up in the titles.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information" ]
[ "TAGS\n#region-us \n", "### Dataset Summary\n\n\nThe World's Sentiment is a dataset of news headlines' titles and their sentiment scores. This dataset was for a project of mine to see how positive or negative events in the world are. There are some use cases for this dataset, if you use only the headlines, you could train an AI to generate fake, but realistic headlines. But if you opt for what I did, which was to analyze the dataset, you'll find quite a bit of interesting stuff.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThis dataset is in English, text is from news articles' titles, provided by News API\nThe associated BCP-47 code is 'en'.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point comprises a title, and the title's sentiment score. The dataset is a CSV file, so I cannot provide JSON data, but I can provide an example of what it would look like in Excel.\n\n\n\nHere's what the example would look like in plain-text:\n\n\nRussia Ukraine: Russian missile strike hits crowded shopping mall in Kremenchuk - 9News,-0.181818182", "### Data Fields\n\n\nHeadline:\n\n\nThe title to a headline\n\n\nSentiment:\n\n\nThe sentiment score of the headline's title.", "### Data Splits\n\n\n[N/A]\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nI created the TWS dataset after a question popped up in my head on June 27th. It kind of went like this: 'How negative, or positive are news headlines?'", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe data was collected by getting the top headlines in every English-speaking country that News API supported and running through the responses and logging only the titles, while also simultaneously using Sentiment analysis to get the sentiment score (The sentiment dataset I used was afinn). The data was slightly modified in its final form to correct any syntax errors in the CSV file using CSVlint to find them. 
The dataset is not tokenized.", "#### Who are the source language producers?\n\n\nThe data was made by humans, and the news sources I used are located here, because there are so many that I can't put them here.", "### Annotations", "#### Annotation process\n\n\n[N/A]", "#### Who are the annotators?\n\n\n[N/A]", "### Personal and Sensitive Information\n\n\nNames might show up in the titles.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information" ]
dd93e7ba97dd8c0776cb50249b0e1d53e4076b2c
# Dataset Card for Yincen/SalienceEvaluation ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/qyccc) for adding this dataset.
Yincen/SalienceEvaluation
[ "task_categories:text-classification", "task_ids:multi-input-text-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:zh", "license:gpl-3.0", "region:us" ]
2022-07-04T01:10:27+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["zh"], "license": ["gpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-input-text-classification"], "pretty_name": "Yincen/SalienceEvaluation"}
2022-07-04T01:36:58+00:00
[]
[ "zh" ]
TAGS #task_categories-text-classification #task_ids-multi-input-text-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Chinese #license-gpl-3.0 #region-us
# Dataset Card for Yincen/SalienceEvaluation ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @github-username for adding this dataset.
[ "# Dataset Card for Yincen/SalienceEvaluation", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @github-username for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-input-text-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Chinese #license-gpl-3.0 #region-us \n", "# Dataset Card for Yincen/SalienceEvaluation", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @github-username for adding this dataset." ]
b316ca521cc2a96d5b628af83cb7a5cfed424027
hi hi
Jeongyeon/SynthDoG
[ "region:us" ]
2022-07-04T05:13:54+00:00
{}
2022-07-04T07:59:17+00:00
[]
[]
TAGS #region-us
hi hi
[]
[ "TAGS\n#region-us \n" ]
eb915043fa53039237e47183108b7aaf19b7da9e
# IndoLVCSR TITML-IDN (Tokyo Institute of Technology Multilingual - Indonesian) is collected and proposed by the authors of "A Large Vocabulary Continuous Speech Recognition System for Indonesian Language". The text transcriptions are obtained from newspaper and magazine articles. The speech is recorded from 20 speakers (11 males and 9 females). # How to cite If you use this dataset, you have to cite this paper: ``` @inproceedings{lestari2006titmlidn, title={A large vocabulary continuous speech recognition system for Indonesian language}, author={Lestari, Dessi Puji and Iwano, Koji and Furui, Sadaoki}, booktitle={15th Indonesian Scientific Conference in Japan Proceedings}, pages={17--22}, year={2006} } ```
holylovenia/TITML-IDN
[ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:id", "license:other", "speech-recognition", "region:us" ]
2022-07-04T05:25:01+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["id"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "TITML-IDN: A large vocabulary continuous speech recognition system for Indonesian language", "tags": ["speech-recognition"]}
2022-10-25T05:23:17+00:00
[]
[ "id" ]
TAGS #task_categories-automatic-speech-recognition #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Indonesian #license-other #speech-recognition #region-us
# IndoLVCSR TITML-IDN (Tokyo Institute of Technology Multilingual - Indonesian) is collected and proposed by the authors of "A Large Vocabulary Continuous Speech Recognition System for Indonesian Language". The text transcriptions are obtained from newspaper and magazine articles. The speech is recorded from 20 speakers (11 males and 9 females). # How to cite If you use this dataset, you have to cite this paper:
[ "# IndoLVCSR\n\nTITML-IDN (Tokyo Institute of Technology Multilingual - Indonesian) is collected and proposed by the authors of \"A Large Vocabulary Continuous Speech Recognition System for Indonesian Language\". The text transcriptions are obtained from newspaper and magazine articles. The speech is recorded from 20 speakers (11 males and 9 females).", "# How to cite\nIf you use this dataset, you have to cite this paper:" ]
[ "TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Indonesian #license-other #speech-recognition #region-us \n", "# IndoLVCSR\n\nTITML-IDN (Tokyo Institute of Technology Multilingual - Indonesian) is collected and proposed by the authors of \"A Large Vocabulary Continuous Speech Recognition System for Indonesian Language\". The text transcriptions are obtained from newspaper and magazine articles. The speech is recorded from 20 speakers (11 males and 9 females).", "# How to cite\nIf you use this dataset, you have to cite this paper:" ]
ee6ef3917f0210c08e7337f318b99b48c4c4c4c0
This dataset contains embeddings of the abstracts of ArXiv Machine Learning papers. The embeddings are produced from sentence-transformers/paraphrase-MiniLM-L6-v2. The model can be accessed here: <a href = "https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L6-v2/discussions/2">HuggingFace Sentence Transformers </a> The original dataset before embedding can be accessed here: <a href = "https://huggingface.co/datasets/CShorten/ML-ArXiv-Papers">ML ArXiv Papers</a>
CShorten/ArXiv-ML-Abstract-Embeddings
[ "region:us" ]
2022-07-04T11:47:11+00:00
{}
2022-07-04T12:13:37+00:00
[]
[]
TAGS #region-us
This dataset contains embeddings of the abstracts of ArXiv Machine Learning papers. The embeddings are produced from sentence-transformers/paraphrase-MiniLM-L6-v2. The model can be accessed here: <a href = "URL Sentence Transformers </a> The original dataset before embedding can be accessed here: <a href = "URL ArXiv Papers</a>
[]
[ "TAGS\n#region-us \n" ]
7c2cd16a06fdbd304e68d85877485fde46e97312
This dataset contains embeddings of the titles of ArXiv Machine Learning papers. The embeddings are produced from sentence-transformers/paraphrase-MiniLM-L6-v2. The model can be accessed here: HuggingFace Sentence Transformers The original dataset before embedding can be accessed here: ML ArXiv Papers
CShorten/ArXiv-ML-Title-Embeddings
[ "region:us" ]
2022-07-04T12:14:10+00:00
{}
2022-07-04T12:44:15+00:00
[]
[]
TAGS #region-us
This dataset contains embeddings of the titles of ArXiv Machine Learning papers. The embeddings are produced from sentence-transformers/paraphrase-MiniLM-L6-v2. The model can be accessed here: HuggingFace Sentence Transformers The original dataset before embedding can be accessed here: ML ArXiv Papers
[]
[ "TAGS\n#region-us \n" ]
df94aa548fa4f93e16c6c269a99f0bd746a2ed1f
Do cite the below reference for using the dataset: @inproceedings{marreddy2021clickbait, title={Clickbait Detection in Telugu: Overcoming NLP Challenges in Resource-Poor Languages using Benchmarked Techniques}, author={Marreddy, Mounika and Oota, Subba Reddy and Vakada, Lakshmi Sireesha and Chinni, Venkata Charan and Mamidi, Radhika}, booktitle={2021 International Joint Conference on Neural Networks (IJCNN)}, pages={1--8}, year={2021}, organization={IEEE} }
mounikaiiith/Telugu_Clickbait
[ "license:cc-by-4.0", "region:us" ]
2022-07-04T13:45:04+00:00
{"license": "cc-by-4.0"}
2022-07-04T13:59:27+00:00
[]
[]
TAGS #license-cc-by-4.0 #region-us
Do cite the below reference for using the dataset: @inproceedings{marreddy2021clickbait, title={Clickbait Detection in Telugu: Overcoming NLP Challenges in Resource-Poor Languages using Benchmarked Techniques}, author={Marreddy, Mounika and Oota, Subba Reddy and Vakada, Lakshmi Sireesha and Chinni, Venkata Charan and Mamidi, Radhika}, booktitle={2021 International Joint Conference on Neural Networks (IJCNN)}, pages={1--8}, year={2021}, organization={IEEE} }
[]
[ "TAGS\n#license-cc-by-4.0 #region-us \n" ]
daa94bf882e19183a05424eef5af19a1f685a251
# Icelandic WinoGrande dataset This is the Icelandic WinoGrande dataset described in the IceBERT paper https://aclanthology.org/2022.lrec-1.464.pdf . ## Translation and localization The records were manually translated and localized (skipped if localization was not possible) from English. For the examples which were singlets instead of sentence pairs we added a corresponding sentence. The "translations per se" are not exact since accurately preserving the original semantics is unimportant. E.g., for some words, it was too difficult or impossible to match all constraints (gender, number, and case must not give the answer away for free, and changing gender means using a different lexical item); for others, the word choice simply didn't work. Due to the inflections each candidate word had to be selected with extreme precision so we could not find any use with machine translation, neither as a starting point nor as a reference. ## Citation If you make use of this dataset please cite ``` @inproceedings{snaebjarnarson-etal-2022-warm, title = "A Warm Start and a Clean Crawled Corpus - A Recipe for Good Language Models", author = "Sn{\ae}bjarnarson, V{\'e}steinn and S{\'\i}monarson, Haukur Barri and Ragnarsson, P{\'e}tur Orri and Ing{\'o}lfsd{\'o}ttir, Svanhv{\'\i}t Lilja and J{\'o}nsson, Haukur and Thorsteinsson, Vilhjalmur and Einarsson, Hafsteinn", editor = "Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios", booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference", month = jun, year = "2022", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2022.lrec-1.464", pages = "4356--4366", abstract = "We train several language models for Icelandic, 
including IceBERT, that achieve state-of-the-art performance in a variety of downstream tasks, including part-of-speech tagging, named entity recognition, grammatical error detection and constituency parsing. To train the models we introduce a new corpus of Icelandic text, the Icelandic Common Crawl Corpus (IC3), a collection of high quality texts found online by targeting the Icelandic top-level-domain .is. Several other public data sources are also collected for a total of 16GB of Icelandic text. To enhance the evaluation of model performance and to raise the bar in baselines for Icelandic, we manually translate and adapt the WinoGrande commonsense reasoning dataset. Through these efforts we demonstrate that a properly cleaned crawled corpus is sufficient to achieve state-of-the-art results in NLP applications for low to medium resource languages, by comparison with models trained on a curated corpus. We further show that initializing models using existing multilingual models can lead to state-of-the-art results for some downstream tasks.", } ```
mideind/icelandic-winogrande
[ "language:is", "license:cc-by-4.0", "region:us" ]
2022-07-04T14:41:06+00:00
{"language": ["is"], "license": "cc-by-4.0"}
2024-01-19T07:40:02+00:00
[]
[ "is" ]
TAGS #language-Icelandic #license-cc-by-4.0 #region-us
# Icelandic WinoGrande dataset This is the Icelandic WinoGrande dataset described in the IceBERT paper URL . ## Translation and localization The records were manually translated and localized (skipped if localization was not possible) from English. For the examples which were singlets instead of sentence pairs we added a corresponding sentence. The "translations per se" are not exact since accurately preserving the original semantics is unimportant. E.g., for some words, it was too difficult or impossible to match all constraints (gender, number, and case must not give the answer away for free, and changing gender means using a different lexical item); for others, the word choice simply didn't work. Due to the inflections each candidate word had to be selected with extreme precision so we could not find any use with machine translation, neither as a starting point nor as a reference. If you make use of this dataset please cite
[ "# Icelandic WinoGrande dataset\n\nThis is the Icelandic WinoGrande dataset described in the IceBERT paper URL .", "## Translation and localization\nThe records were manually translated and localized (skipped if localization was not possible) from English.\nFor the examples which were singlets instead of sentence pairs we added a corresponding sentence.\nThe \"translations per se\" are not exact since accurately preserving the original semantics is unimportant.\nE.g., for some words, it was too difficult or impossible to match all constraints (gender, number, and case must not give the answer away for free, and changing gender means using a different lexical item); for others, the word choice simply didn't work.\n\nDue to the inflections each candidate word had to be selected with extreme precision so we could not find any use with machine translation, neither as a starting point nor as a reference.\n\nIf you make use of this dataset please cite" ]
[ "TAGS\n#language-Icelandic #license-cc-by-4.0 #region-us \n", "# Icelandic WinoGrande dataset\n\nThis is the Icelandic WinoGrande dataset described in the IceBERT paper URL .", "## Translation and localization\nThe records were manually translated and localized (skipped if localization was not possible) from English.\nFor the examples which were singlets instead of sentence pairs we added a corresponding sentence.\nThe \"translations per se\" are not exact since accurately preserving the original semantics is unimportant.\nE.g., for some words, it was too difficult or impossible to match all constraints (gender, number, and case must not give the answer away for free, and changing gender means using a different lexical item); for others, the word choice simply didn't work.\n\nDue to the inflections each candidate word had to be selected with extreme precision so we could not find any use with machine translation, neither as a starting point nor as a reference.\n\nIf you make use of this dataset please cite" ]
8f65fde41b0e3362383eaf9e7f0dbfa53bf5e487
#samples=5007831 ``` dataset = load_dataset('lyakaap/laion2B-japanese-subset', split='train') dataset = dataset.remove_columns(['LANGUAGE', 'NSFW', 'LICENSE', 'SAMPLE_ID']) dataset = dataset.filter(lambda x: x['HEIGHT'] <= 384 and x['WIDTH'] <= 384) dataset = dataset.filter(lambda x: x['HEIGHT'] >= 128 and x['WIDTH'] >= 128) dataset = dataset.filter(lambda x: x['similarity'] >= 0.31) dataset.push_to_hub('lyakaap/laion-mini-ja', token='XXX') ```
lyakaap/laion-mini-ja
[ "region:us" ]
2022-07-04T22:18:55+00:00
{}
2022-07-05T01:30:45+00:00
[]
[]
TAGS #region-us
#samples=5007831
[]
[ "TAGS\n#region-us \n" ]
a7ea759535bb9fad6361cca151cf94a46e88edf3
# Dataset Card for Multilingual HateCheck ## Dataset Description Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish. For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate. This allows for targeted diagnostic insights into model performance. For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work! - **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917 - **Repository:** https://github.com/rewire-online/multilingual-hatecheck - **Point of Contact:** [email protected] ## Dataset Structure The csv format mostly matches the original HateCheck data, with some adjustments for specific languages. **mhc_case_id** The test case ID that is unique to each test case across languages (e.g., "mandarin-1305") **functionality** The shorthand for the functionality tested by the test case (e.g, "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations. **test_case** The test case text. **label_gold** The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label. **target_ident** Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages. **ref_case_id** For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. 
For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case. **ref_templ_id** The equivalent to ref_case_id, but for template IDs. **templ_id** The ID of the template from which the test case was generated. **case_templ** The template from which the test case was generated (where applicable). **gender_male** and **gender_female** For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ. **label_annotated** A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']"). **label_annotated_maj** The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts. **disagreement_in_case** True if label_annotated_maj does not match label_gold for the entry. **disagreement_in_template** True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC.
Paul/hatecheck-spanish
[ "task_categories:text-classification", "task_ids:hate-speech-detection", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:es", "license:cc-by-4.0", "arxiv:2206.09917", "region:us" ]
2022-07-05T09:06:37+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["expert-generated"], "language": ["es"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["hate-speech-detection"], "pretty_name": "Spanish HateCheck"}
2022-07-05T09:27:07+00:00
[ "2206.09917" ]
[ "es" ]
TAGS #task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Spanish #license-cc-by-4.0 #arxiv-2206.09917 #region-us
# Dataset Card for Multilingual HateCheck ## Dataset Description Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish. For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate. This allows for targeted diagnostic insights into model performance. For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work! - Paper: Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. URL - Repository: URL - Point of Contact: paul@URL ## Dataset Structure The csv format mostly matches the original HateCheck data, with some adjustments for specific languages. mhc_case_id The test case ID that is unique to each test case across languages (e.g., "mandarin-1305") functionality The shorthand for the functionality tested by the test case (e.g, "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations. test_case The test case text. label_gold The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label. target_ident Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages. ref_case_id For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case. ref_templ_id The equivalent to ref_case_id, but for template IDs. 
templ_id The ID of the template from which the test case was generated. case_templ The template from which the test case was generated (where applicable). gender_male and gender_female For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ. label_annotated A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']"). label_annotated_maj The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts. disagreement_in_case True if label_annotated_maj does not match label_gold for the entry. disagreement_in_template True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC.
[ "# Dataset Card for Multilingual HateCheck", "## Dataset Description\n\nMultilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.\nFor each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.\nThis allows for targeted diagnostic insights into model performance.\n\nFor more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!\n- Paper: Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. URL\n- Repository: URL\n- Point of Contact: paul@URL", "## Dataset Structure\n\nThe csv format mostly matches the original HateCheck data, with some adjustments for specific languages.\n\nmhc_case_id\nThe test case ID that is unique to each test case across languages (e.g., \"mandarin-1305\")\n\nfunctionality\nThe shorthand for the functionality tested by the test case (e.g, \"target_obj_nh\"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.\n\ntest_case\nThe test case text.\n\nlabel_gold\nThe gold standard label (\"hateful\" or \"non-hateful\") of the test case. All test cases within a given functionality have the same gold standard label.\n\ntarget_ident\nWhere applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.\n\nref_case_id\nFor hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. 
For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.\n\nref_templ_id\nThe equivalent to ref_case_id, but for template IDs.\n\ntempl_id\nThe ID of the template from which the test case was generated.\n\ncase_templ\nThe template from which the test case was generated (where applicable).\n\ngender_male and gender_female\nFor gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.\n\nlabel_annotated\nA list of labels given by the three annotators who reviewed the test case (e.g., \"['hateful', 'hateful', 'hateful']\").\n\nlabel_annotated_maj\nThe majority vote of the three annotators (e.g., \"hateful\"). In some cases this differs from the gold label given by our language experts.\n\ndisagreement_in_case\nTrue if label_annotated_maj does not match label_gold for the entry.\n\ndisagreement_in_template\nTrue if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC." ]
[ "TAGS\n#task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Spanish #license-cc-by-4.0 #arxiv-2206.09917 #region-us \n", "# Dataset Card for Multilingual HateCheck", "## Dataset Description\n\nMultilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.\nFor each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.\nThis allows for targeted diagnostic insights into model performance.\n\nFor more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!\n- Paper: Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. URL\n- Repository: URL\n- Point of Contact: paul@URL", "## Dataset Structure\n\nThe csv format mostly matches the original HateCheck data, with some adjustments for specific languages.\n\nmhc_case_id\nThe test case ID that is unique to each test case across languages (e.g., \"mandarin-1305\")\n\nfunctionality\nThe shorthand for the functionality tested by the test case (e.g, \"target_obj_nh\"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.\n\ntest_case\nThe test case text.\n\nlabel_gold\nThe gold standard label (\"hateful\" or \"non-hateful\") of the test case. All test cases within a given functionality have the same gold standard label.\n\ntarget_ident\nWhere applicable, the protected group that is targeted or referenced in the test case. 
All HateChecks cover seven target groups, but their composition varies across languages.\n\nref_case_id\nFor hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.\n\nref_templ_id\nThe equivalent to ref_case_id, but for template IDs.\n\ntempl_id\nThe ID of the template from which the test case was generated.\n\ncase_templ\nThe template from which the test case was generated (where applicable).\n\ngender_male and gender_female\nFor gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.\n\nlabel_annotated\nA list of labels given by the three annotators who reviewed the test case (e.g., \"['hateful', 'hateful', 'hateful']\").\n\nlabel_annotated_maj\nThe majority vote of the three annotators (e.g., \"hateful\"). In some cases this differs from the gold label given by our language experts.\n\ndisagreement_in_case\nTrue if label_annotated_maj does not match label_gold for the entry.\n\ndisagreement_in_template\nTrue if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC." ]
323bdf67e0fbd3d7f8086fad0971b5bd5a62524b
# Dataset Card for Multilingual HateCheck ## Dataset Description Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish. For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate. This allows for targeted diagnostic insights into model performance. For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work! - **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917 - **Repository:** https://github.com/rewire-online/multilingual-hatecheck - **Point of Contact:** [email protected] ## Dataset Structure The csv format mostly matches the original HateCheck data, with some adjustments for specific languages. **mhc_case_id** The test case ID that is unique to each test case across languages (e.g., "mandarin-1305") **functionality** The shorthand for the functionality tested by the test case (e.g, "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations. **test_case** The test case text. **label_gold** The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label. **target_ident** Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages. **ref_case_id** For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. 
For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case. **ref_templ_id** The equivalent to ref_case_id, but for template IDs. **templ_id** The ID of the template from which the test case was generated. **case_templ** The template from which the test case was generated (where applicable). **gender_male** and **gender_female** For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ. **label_annotated** A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']"). **label_annotated_maj** The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts. **disagreement_in_case** True if label_annotated_maj does not match label_gold for the entry. **disagreement_in_template** True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC.
Paul/hatecheck-portuguese
[ "task_categories:text-classification", "task_ids:hate-speech-detection", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:pt", "license:cc-by-4.0", "arxiv:2206.09917", "region:us" ]
2022-07-05T09:21:24+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["expert-generated"], "language": ["pt"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["hate-speech-detection"], "pretty_name": "Portuguese HateCheck"}
2022-07-05T09:27:47+00:00
[ "2206.09917" ]
[ "pt" ]
TAGS #task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Portuguese #license-cc-by-4.0 #arxiv-2206.09917 #region-us
# Dataset Card for Multilingual HateCheck

## Dataset Description

Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.
For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.
This allows for targeted diagnostic insights into model performance.

For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!
- Paper: Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. URL
- Repository: URL
- Point of Contact: paul@URL

## Dataset Structure

The csv format mostly matches the original HateCheck data, with some adjustments for specific languages.

mhc_case_id
The test case ID that is unique to each test case across languages (e.g., "mandarin-1305")

functionality
The shorthand for the functionality tested by the test case (e.g., "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.

test_case
The test case text.

label_gold
The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label.

target_ident
Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.

ref_case_id
For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.

ref_templ_id
The equivalent to ref_case_id, but for template IDs.

templ_id
The ID of the template from which the test case was generated.

case_templ
The template from which the test case was generated (where applicable).

gender_male and gender_female
For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.

label_annotated
A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']").

label_annotated_maj
The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts.

disagreement_in_case
True if label_annotated_maj does not match label_gold for the entry.

disagreement_in_template
True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC.
[ "# Dataset Card for Multilingual HateCheck", "## Dataset Description\n\nMultilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.\nFor each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.\nThis allows for targeted diagnostic insights into model performance.\n\nFor more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!\n- Paper: Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. URL\n- Repository: URL\n- Point of Contact: paul@URL", "## Dataset Structure\n\nThe csv format mostly matches the original HateCheck data, with some adjustments for specific languages.\n\nmhc_case_id\nThe test case ID that is unique to each test case across languages (e.g., \"mandarin-1305\")\n\nfunctionality\nThe shorthand for the functionality tested by the test case (e.g, \"target_obj_nh\"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.\n\ntest_case\nThe test case text.\n\nlabel_gold\nThe gold standard label (\"hateful\" or \"non-hateful\") of the test case. All test cases within a given functionality have the same gold standard label.\n\ntarget_ident\nWhere applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.\n\nref_case_id\nFor hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. 
For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.\n\nref_templ_id\nThe equivalent to ref_case_id, but for template IDs.\n\ntempl_id\nThe ID of the template from which the test case was generated.\n\ncase_templ\nThe template from which the test case was generated (where applicable).\n\ngender_male and gender_female\nFor gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.\n\nlabel_annotated\nA list of labels given by the three annotators who reviewed the test case (e.g., \"['hateful', 'hateful', 'hateful']\").\n\nlabel_annotated_maj\nThe majority vote of the three annotators (e.g., \"hateful\"). In some cases this differs from the gold label given by our language experts.\n\ndisagreement_in_case\nTrue if label_annotated_maj does not match label_gold for the entry.\n\ndisagreement_in_template\nTrue if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC." ]
[ "TAGS\n#task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Portuguese #license-cc-by-4.0 #arxiv-2206.09917 #region-us \n", "# Dataset Card for Multilingual HateCheck", "## Dataset Description\n\nMultilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.\nFor each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.\nThis allows for targeted diagnostic insights into model performance.\n\nFor more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!\n- Paper: Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. URL\n- Repository: URL\n- Point of Contact: paul@URL", "## Dataset Structure\n\nThe csv format mostly matches the original HateCheck data, with some adjustments for specific languages.\n\nmhc_case_id\nThe test case ID that is unique to each test case across languages (e.g., \"mandarin-1305\")\n\nfunctionality\nThe shorthand for the functionality tested by the test case (e.g, \"target_obj_nh\"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.\n\ntest_case\nThe test case text.\n\nlabel_gold\nThe gold standard label (\"hateful\" or \"non-hateful\") of the test case. All test cases within a given functionality have the same gold standard label.\n\ntarget_ident\nWhere applicable, the protected group that is targeted or referenced in the test case. 
All HateChecks cover seven target groups, but their composition varies across languages.\n\nref_case_id\nFor hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.\n\nref_templ_id\nThe equivalent to ref_case_id, but for template IDs.\n\ntempl_id\nThe ID of the template from which the test case was generated.\n\ncase_templ\nThe template from which the test case was generated (where applicable).\n\ngender_male and gender_female\nFor gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.\n\nlabel_annotated\nA list of labels given by the three annotators who reviewed the test case (e.g., \"['hateful', 'hateful', 'hateful']\").\n\nlabel_annotated_maj\nThe majority vote of the three annotators (e.g., \"hateful\"). In some cases this differs from the gold label given by our language experts.\n\ndisagreement_in_case\nTrue if label_annotated_maj does not match label_gold for the entry.\n\ndisagreement_in_template\nTrue if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC." ]
28d7098e2e5a211c4810d0a4d8deccc5889e55b6
# Dataset Card for Multilingual HateCheck

## Dataset Description

Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.
For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.
This allows for targeted diagnostic insights into model performance.

For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!
- **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917
- **Repository:** https://github.com/rewire-online/multilingual-hatecheck
- **Point of Contact:** [email protected]

## Dataset Structure

The csv format mostly matches the original HateCheck data, with some adjustments for specific languages.

**mhc_case_id**
The test case ID that is unique to each test case across languages (e.g., "mandarin-1305")

**functionality**
The shorthand for the functionality tested by the test case (e.g., "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.

**test_case**
The test case text.

**label_gold**
The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label.

**target_ident**
Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.

**ref_case_id**
For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.

**ref_templ_id**
The equivalent to ref_case_id, but for template IDs.

**templ_id**
The ID of the template from which the test case was generated.

**case_templ**
The template from which the test case was generated (where applicable).

**gender_male** and **gender_female**
For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.

**label_annotated**
A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']").

**label_annotated_maj**
The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts.

**disagreement_in_case**
True if label_annotated_maj does not match label_gold for the entry.

**disagreement_in_template**
True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC.
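As noted above, the disagreement_in_template flag can be used to exclude entire templates from MHC. A minimal sketch of that filtering step — the rows below are made-up examples with hypothetical case IDs, not real MHC data:

```python
# Hypothetical sample rows mirroring a few of the MHC csv columns described above.
rows = [
    {"mhc_case_id": "polish-1", "label_gold": "hateful", "disagreement_in_template": False},
    {"mhc_case_id": "polish-2", "label_gold": "non-hateful", "disagreement_in_template": True},
    {"mhc_case_id": "polish-3", "label_gold": "hateful", "disagreement_in_template": False},
]

# Keep only cases from templates where annotators and language experts agreed.
clean = [r for r in rows if not r["disagreement_in_template"]]

print([r["mhc_case_id"] for r in clean])  # → ['polish-1', 'polish-3']
```

The same predicate works unchanged whether the csv is read with the csv module, pandas, or the datasets library, since it only touches the boolean column.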
Paul/hatecheck-polish
[ "task_categories:text-classification", "task_ids:hate-speech-detection", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:pl", "license:cc-by-4.0", "arxiv:2206.09917", "region:us" ]
2022-07-05T09:24:24+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["expert-generated"], "language": ["pl"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["hate-speech-detection"], "pretty_name": "Polish HateCheck"}
2022-07-05T09:26:41+00:00
[ "2206.09917" ]
[ "pl" ]
TAGS #task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Polish #license-cc-by-4.0 #arxiv-2206.09917 #region-us
# Dataset Card for Multilingual HateCheck

## Dataset Description

Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.
For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.
This allows for targeted diagnostic insights into model performance.

For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!
- Paper: Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. URL
- Repository: URL
- Point of Contact: paul@URL

## Dataset Structure

The csv format mostly matches the original HateCheck data, with some adjustments for specific languages.

mhc_case_id
The test case ID that is unique to each test case across languages (e.g., "mandarin-1305")

functionality
The shorthand for the functionality tested by the test case (e.g., "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.

test_case
The test case text.

label_gold
The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label.

target_ident
Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.

ref_case_id
For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.

ref_templ_id
The equivalent to ref_case_id, but for template IDs.

templ_id
The ID of the template from which the test case was generated.

case_templ
The template from which the test case was generated (where applicable).

gender_male and gender_female
For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.

label_annotated
A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']").

label_annotated_maj
The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts.

disagreement_in_case
True if label_annotated_maj does not match label_gold for the entry.

disagreement_in_template
True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC.
[ "# Dataset Card for Multilingual HateCheck", "## Dataset Description\n\nMultilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.\nFor each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.\nThis allows for targeted diagnostic insights into model performance.\n\nFor more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!\n- Paper: Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. URL\n- Repository: URL\n- Point of Contact: paul@URL", "## Dataset Structure\n\nThe csv format mostly matches the original HateCheck data, with some adjustments for specific languages.\n\nmhc_case_id\nThe test case ID that is unique to each test case across languages (e.g., \"mandarin-1305\")\n\nfunctionality\nThe shorthand for the functionality tested by the test case (e.g, \"target_obj_nh\"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.\n\ntest_case\nThe test case text.\n\nlabel_gold\nThe gold standard label (\"hateful\" or \"non-hateful\") of the test case. All test cases within a given functionality have the same gold standard label.\n\ntarget_ident\nWhere applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.\n\nref_case_id\nFor hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. 
For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.\n\nref_templ_id\nThe equivalent to ref_case_id, but for template IDs.\n\ntempl_id\nThe ID of the template from which the test case was generated.\n\ncase_templ\nThe template from which the test case was generated (where applicable).\n\ngender_male and gender_female\nFor gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.\n\nlabel_annotated\nA list of labels given by the three annotators who reviewed the test case (e.g., \"['hateful', 'hateful', 'hateful']\").\n\nlabel_annotated_maj\nThe majority vote of the three annotators (e.g., \"hateful\"). In some cases this differs from the gold label given by our language experts.\n\ndisagreement_in_case\nTrue if label_annotated_maj does not match label_gold for the entry.\n\ndisagreement_in_template\nTrue if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC." ]
[ "TAGS\n#task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Polish #license-cc-by-4.0 #arxiv-2206.09917 #region-us \n", "# Dataset Card for Multilingual HateCheck", "## Dataset Description\n\nMultilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.\nFor each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.\nThis allows for targeted diagnostic insights into model performance.\n\nFor more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!\n- Paper: Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. URL\n- Repository: URL\n- Point of Contact: paul@URL", "## Dataset Structure\n\nThe csv format mostly matches the original HateCheck data, with some adjustments for specific languages.\n\nmhc_case_id\nThe test case ID that is unique to each test case across languages (e.g., \"mandarin-1305\")\n\nfunctionality\nThe shorthand for the functionality tested by the test case (e.g, \"target_obj_nh\"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.\n\ntest_case\nThe test case text.\n\nlabel_gold\nThe gold standard label (\"hateful\" or \"non-hateful\") of the test case. All test cases within a given functionality have the same gold standard label.\n\ntarget_ident\nWhere applicable, the protected group that is targeted or referenced in the test case. 
All HateChecks cover seven target groups, but their composition varies across languages.\n\nref_case_id\nFor hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.\n\nref_templ_id\nThe equivalent to ref_case_id, but for template IDs.\n\ntempl_id\nThe ID of the template from which the test case was generated.\n\ncase_templ\nThe template from which the test case was generated (where applicable).\n\ngender_male and gender_female\nFor gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.\n\nlabel_annotated\nA list of labels given by the three annotators who reviewed the test case (e.g., \"['hateful', 'hateful', 'hateful']\").\n\nlabel_annotated_maj\nThe majority vote of the three annotators (e.g., \"hateful\"). In some cases this differs from the gold label given by our language experts.\n\ndisagreement_in_case\nTrue if label_annotated_maj does not match label_gold for the entry.\n\ndisagreement_in_template\nTrue if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC." ]
617d3e9fccd186277297cc305f6588af7384b008
# Dataset Card for Multilingual HateCheck

## Dataset Description

Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.
For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.
This allows for targeted diagnostic insights into model performance.

For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!
- **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917
- **Repository:** https://github.com/rewire-online/multilingual-hatecheck
- **Point of Contact:** [email protected]

## Dataset Structure

The csv format mostly matches the original HateCheck data, with some adjustments for specific languages.

**mhc_case_id**
The test case ID that is unique to each test case across languages (e.g., "mandarin-1305")

**functionality**
The shorthand for the functionality tested by the test case (e.g., "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.

**test_case**
The test case text.

**label_gold**
The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label.

**target_ident**
Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.

**ref_case_id**
For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.

**ref_templ_id**
The equivalent to ref_case_id, but for template IDs.

**templ_id**
The ID of the template from which the test case was generated.

**case_templ**
The template from which the test case was generated (where applicable).

**gender_male** and **gender_female**
For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.

**label_annotated**
A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']").

**label_annotated_maj**
The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts.

**disagreement_in_case**
True if label_annotated_maj does not match label_gold for the entry.

**disagreement_in_template**
True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC.
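The per-functionality grouping described above is what enables targeted diagnostic insights: accuracy is reported separately for each functional test. A minimal sketch of that evaluation step — rows, functionality names, and predictions here are made-up illustrations, not real MHC data or the authors' evaluation code:

```python
from collections import defaultdict

# Hypothetical test cases and model predictions.
rows = [
    {"functionality": "target_obj_nh", "label_gold": "non-hateful"},
    {"functionality": "target_obj_nh", "label_gold": "non-hateful"},
    {"functionality": "slur_h", "label_gold": "hateful"},
]
preds = ["non-hateful", "hateful", "hateful"]

# Tally correct predictions per functionality.
correct, total = defaultdict(int), defaultdict(int)
for row, pred in zip(rows, preds):
    total[row["functionality"]] += 1
    correct[row["functionality"]] += pred == row["label_gold"]

accuracy = {f: correct[f] / total[f] for f in total}
print(accuracy)  # → {'target_obj_nh': 0.5, 'slur_h': 1.0}
```

Because every case in a functionality shares the same gold label, a low score on one functionality pinpoints a specific model weakness rather than an aggregate error rate.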
Paul/hatecheck-mandarin
[ "task_categories:text-classification", "task_ids:hate-speech-detection", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:zh", "license:cc-by-4.0", "arxiv:2206.09917", "region:us" ]
2022-07-05T09:31:28+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["expert-generated"], "language": ["zh"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["hate-speech-detection"], "pretty_name": "Mandarin HateCheck"}
2022-07-05T09:32:33+00:00
[ "2206.09917" ]
[ "zh" ]
TAGS #task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Chinese #license-cc-by-4.0 #arxiv-2206.09917 #region-us
# Dataset Card for Multilingual HateCheck

## Dataset Description

Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.
For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.
This allows for targeted diagnostic insights into model performance.

For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!
- Paper: Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. URL
- Repository: URL
- Point of Contact: paul@URL

## Dataset Structure

The csv format mostly matches the original HateCheck data, with some adjustments for specific languages.

mhc_case_id
The test case ID that is unique to each test case across languages (e.g., "mandarin-1305")

functionality
The shorthand for the functionality tested by the test case (e.g., "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.

test_case
The test case text.

label_gold
The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label.

target_ident
Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.

ref_case_id
For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.

ref_templ_id
The equivalent to ref_case_id, but for template IDs.

templ_id
The ID of the template from which the test case was generated.

case_templ
The template from which the test case was generated (where applicable).

gender_male and gender_female
For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.

label_annotated
A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']").

label_annotated_maj
The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts.

disagreement_in_case
True if label_annotated_maj does not match label_gold for the entry.

disagreement_in_template
True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC.
[ "# Dataset Card for Multilingual HateCheck", "## Dataset Description\n\nMultilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.\nFor each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.\nThis allows for targeted diagnostic insights into model performance.\n\nFor more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!\n- Paper: Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. URL\n- Repository: URL\n- Point of Contact: paul@URL", "## Dataset Structure\n\nThe csv format mostly matches the original HateCheck data, with some adjustments for specific languages.\n\nmhc_case_id\nThe test case ID that is unique to each test case across languages (e.g., \"mandarin-1305\")\n\nfunctionality\nThe shorthand for the functionality tested by the test case (e.g, \"target_obj_nh\"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.\n\ntest_case\nThe test case text.\n\nlabel_gold\nThe gold standard label (\"hateful\" or \"non-hateful\") of the test case. All test cases within a given functionality have the same gold standard label.\n\ntarget_ident\nWhere applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.\n\nref_case_id\nFor hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. 
For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.\n\nref_templ_id\nThe equivalent to ref_case_id, but for template IDs.\n\ntempl_id\nThe ID of the template from which the test case was generated.\n\ncase_templ\nThe template from which the test case was generated (where applicable).\n\ngender_male and gender_female\nFor gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.\n\nlabel_annotated\nA list of labels given by the three annotators who reviewed the test case (e.g., \"['hateful', 'hateful', 'hateful']\").\n\nlabel_annotated_maj\nThe majority vote of the three annotators (e.g., \"hateful\"). In some cases this differs from the gold label given by our language experts.\n\ndisagreement_in_case\nTrue if label_annotated_maj does not match label_gold for the entry.\n\ndisagreement_in_template\nTrue if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC." ]
[ "TAGS\n#task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Chinese #license-cc-by-4.0 #arxiv-2206.09917 #region-us \n", "# Dataset Card for Multilingual HateCheck", "## Dataset Description\n\nMultilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.\nFor each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.\nThis allows for targeted diagnostic insights into model performance.\n\nFor more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!\n- Paper: Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. URL\n- Repository: URL\n- Point of Contact: paul@URL", "## Dataset Structure\n\nThe csv format mostly matches the original HateCheck data, with some adjustments for specific languages.\n\nmhc_case_id\nThe test case ID that is unique to each test case across languages (e.g., \"mandarin-1305\")\n\nfunctionality\nThe shorthand for the functionality tested by the test case (e.g, \"target_obj_nh\"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.\n\ntest_case\nThe test case text.\n\nlabel_gold\nThe gold standard label (\"hateful\" or \"non-hateful\") of the test case. All test cases within a given functionality have the same gold standard label.\n\ntarget_ident\nWhere applicable, the protected group that is targeted or referenced in the test case. 
All HateChecks cover seven target groups, but their composition varies across languages.\n\nref_case_id\nFor hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.\n\nref_templ_id\nThe equivalent to ref_case_id, but for template IDs.\n\ntempl_id\nThe ID of the template from which the test case was generated.\n\ncase_templ\nThe template from which the test case was generated (where applicable).\n\ngender_male and gender_female\nFor gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.\n\nlabel_annotated\nA list of labels given by the three annotators who reviewed the test case (e.g., \"['hateful', 'hateful', 'hateful']\").\n\nlabel_annotated_maj\nThe majority vote of the three annotators (e.g., \"hateful\"). In some cases this differs from the gold label given by our language experts.\n\ndisagreement_in_case\nTrue if label_annotated_maj does not match label_gold for the entry.\n\ndisagreement_in_template\nTrue if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC." ]
21e3d5c827cb60619a89988b24979850a7af85a5
# Dataset Card for Multilingual HateCheck ## Dataset Description Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish. For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate. This allows for targeted diagnostic insights into model performance. For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work! - **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917 - **Repository:** https://github.com/rewire-online/multilingual-hatecheck - **Point of Contact:** [email protected] ## Dataset Structure The csv format mostly matches the original HateCheck data, with some adjustments for specific languages. **mhc_case_id** The test case ID that is unique to each test case across languages (e.g., "mandarin-1305") **functionality** The shorthand for the functionality tested by the test case (e.g., "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations. **test_case** The test case text. **label_gold** The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label. **target_ident** Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages. **ref_case_id** For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case.
For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case. **ref_templ_id** The equivalent to ref_case_id, but for template IDs. **templ_id** The ID of the template from which the test case was generated. **case_templ** The template from which the test case was generated (where applicable). **gender_male** and **gender_female** For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ. **label_annotated** A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']"). **label_annotated_maj** The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts. **disagreement_in_case** True if label_annotated_maj does not match label_gold for the entry. **disagreement_in_template** True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC.
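One intended use of the suite is computing accuracy per functionality rather than a single overall score, which is what gives the "targeted diagnostic insights" mentioned above. A minimal sketch of that aggregation follows; the case IDs, the model predictions, and the pairing of cases to functionalities are hypothetical, while the functionality shorthand "target_obj_nh" comes from the card.

```python
from collections import defaultdict

# Hypothetical gold-labelled test cases: (mhc_case_id, functionality, label_gold).
cases = [
    ("italian-1", "derog_neg_emote_h", "hateful"),
    ("italian-2", "derog_neg_emote_h", "hateful"),
    ("italian-3", "target_obj_nh", "non-hateful"),
    ("italian-4", "target_obj_nh", "non-hateful"),
]

# Hypothetical model predictions keyed by case ID.
predictions = {
    "italian-1": "hateful",
    "italian-2": "non-hateful",  # one miss on this functionality
    "italian-3": "non-hateful",
    "italian-4": "non-hateful",
}

# Tally correct predictions and totals per functionality.
correct = defaultdict(int)
total = defaultdict(int)
for case_id, functionality, gold in cases:
    total[functionality] += 1
    correct[functionality] += predictions[case_id] == gold

# Per-functionality accuracy, the diagnostic unit MHC reports on.
accuracy = {f: correct[f] / total[f] for f in total}
print(accuracy)  # {'derog_neg_emote_h': 0.5, 'target_obj_nh': 1.0}
```

Because every test case within a functionality shares the same gold label, a low score on one functionality pinpoints a specific model weakness.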
Paul/hatecheck-italian
[ "task_categories:text-classification", "task_ids:hate-speech-detection", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:it", "license:cc-by-4.0", "arxiv:2206.09917", "region:us" ]
2022-07-05T09:33:01+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["expert-generated"], "language": ["it"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["hate-speech-detection"], "pretty_name": "Italian HateCheck"}
2022-07-05T09:35:17+00:00
[ "2206.09917" ]
[ "it" ]
TAGS #task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Italian #license-cc-by-4.0 #arxiv-2206.09917 #region-us
# Dataset Card for Multilingual HateCheck ## Dataset Description Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish. For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate. This allows for targeted diagnostic insights into model performance. For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work! - Paper: Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. URL - Repository: URL - Point of Contact: paul@URL ## Dataset Structure The csv format mostly matches the original HateCheck data, with some adjustments for specific languages. mhc_case_id The test case ID that is unique to each test case across languages (e.g., "mandarin-1305") functionality The shorthand for the functionality tested by the test case (e.g, "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations. test_case The test case text. label_gold The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label. target_ident Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages. ref_case_id For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case. ref_templ_id The equivalent to ref_case_id, but for template IDs. 
templ_id The ID of the template from which the test case was generated. case_templ The template from which the test case was generated (where applicable). gender_male and gender_female For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ. label_annotated A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']"). label_annotated_maj The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts. disagreement_in_case True if label_annotated_maj does not match label_gold for the entry. disagreement_in_template True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC.
[ "# Dataset Card for Multilingual HateCheck", "## Dataset Description\n\nMultilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.\nFor each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.\nThis allows for targeted diagnostic insights into model performance.\n\nFor more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!\n- Paper: Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. URL\n- Repository: URL\n- Point of Contact: paul@URL", "## Dataset Structure\n\nThe csv format mostly matches the original HateCheck data, with some adjustments for specific languages.\n\nmhc_case_id\nThe test case ID that is unique to each test case across languages (e.g., \"mandarin-1305\")\n\nfunctionality\nThe shorthand for the functionality tested by the test case (e.g, \"target_obj_nh\"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.\n\ntest_case\nThe test case text.\n\nlabel_gold\nThe gold standard label (\"hateful\" or \"non-hateful\") of the test case. All test cases within a given functionality have the same gold standard label.\n\ntarget_ident\nWhere applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.\n\nref_case_id\nFor hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. 
For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.\n\nref_templ_id\nThe equivalent to ref_case_id, but for template IDs.\n\ntempl_id\nThe ID of the template from which the test case was generated.\n\ncase_templ\nThe template from which the test case was generated (where applicable).\n\ngender_male and gender_female\nFor gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.\n\nlabel_annotated\nA list of labels given by the three annotators who reviewed the test case (e.g., \"['hateful', 'hateful', 'hateful']\").\n\nlabel_annotated_maj\nThe majority vote of the three annotators (e.g., \"hateful\"). In some cases this differs from the gold label given by our language experts.\n\ndisagreement_in_case\nTrue if label_annotated_maj does not match label_gold for the entry.\n\ndisagreement_in_template\nTrue if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC." ]
[ "TAGS\n#task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Italian #license-cc-by-4.0 #arxiv-2206.09917 #region-us \n", "# Dataset Card for Multilingual HateCheck", "## Dataset Description\n\nMultilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.\nFor each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.\nThis allows for targeted diagnostic insights into model performance.\n\nFor more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!\n- Paper: Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. URL\n- Repository: URL\n- Point of Contact: paul@URL", "## Dataset Structure\n\nThe csv format mostly matches the original HateCheck data, with some adjustments for specific languages.\n\nmhc_case_id\nThe test case ID that is unique to each test case across languages (e.g., \"mandarin-1305\")\n\nfunctionality\nThe shorthand for the functionality tested by the test case (e.g, \"target_obj_nh\"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.\n\ntest_case\nThe test case text.\n\nlabel_gold\nThe gold standard label (\"hateful\" or \"non-hateful\") of the test case. All test cases within a given functionality have the same gold standard label.\n\ntarget_ident\nWhere applicable, the protected group that is targeted or referenced in the test case. 
All HateChecks cover seven target groups, but their composition varies across languages.\n\nref_case_id\nFor hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.\n\nref_templ_id\nThe equivalent to ref_case_id, but for template IDs.\n\ntempl_id\nThe ID of the template from which the test case was generated.\n\ncase_templ\nThe template from which the test case was generated (where applicable).\n\ngender_male and gender_female\nFor gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.\n\nlabel_annotated\nA list of labels given by the three annotators who reviewed the test case (e.g., \"['hateful', 'hateful', 'hateful']\").\n\nlabel_annotated_maj\nThe majority vote of the three annotators (e.g., \"hateful\"). In some cases this differs from the gold label given by our language experts.\n\ndisagreement_in_case\nTrue if label_annotated_maj does not match label_gold for the entry.\n\ndisagreement_in_template\nTrue if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC." ]
e9e68e1a4db04726b9278192377049d0f9693012
# Dataset Card for Multilingual HateCheck ## Dataset Description Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish. For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate. This allows for targeted diagnostic insights into model performance. For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work! - **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917 - **Repository:** https://github.com/rewire-online/multilingual-hatecheck - **Point of Contact:** [email protected] ## Dataset Structure The csv format mostly matches the original HateCheck data, with some adjustments for specific languages. **mhc_case_id** The test case ID that is unique to each test case across languages (e.g., "mandarin-1305") **functionality** The shorthand for the functionality tested by the test case (e.g., "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations. **test_case** The test case text. **label_gold** The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label. **target_ident** Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages. **ref_case_id** For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case.
For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case. **ref_templ_id** The equivalent to ref_case_id, but for template IDs. **templ_id** The ID of the template from which the test case was generated. **case_templ** The template from which the test case was generated (where applicable). **gender_male** and **gender_female** For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ. **label_annotated** A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']"). **label_annotated_maj** The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts. **disagreement_in_case** True if label_annotated_maj does not match label_gold for the entry. **disagreement_in_template** True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC.
Paul/hatecheck-hindi
[ "task_categories:text-classification", "task_ids:hate-speech-detection", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:hi", "license:cc-by-4.0", "arxiv:2206.09917", "region:us" ]
2022-07-05T09:35:40+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["expert-generated"], "language": ["hi"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["hate-speech-detection"], "pretty_name": "Hindi HateCheck"}
2022-07-05T09:36:37+00:00
[ "2206.09917" ]
[ "hi" ]
TAGS #task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Hindi #license-cc-by-4.0 #arxiv-2206.09917 #region-us
# Dataset Card for Multilingual HateCheck ## Dataset Description Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish. For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate. This allows for targeted diagnostic insights into model performance. For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work! - Paper: Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. URL - Repository: URL - Point of Contact: paul@URL ## Dataset Structure The csv format mostly matches the original HateCheck data, with some adjustments for specific languages. mhc_case_id The test case ID that is unique to each test case across languages (e.g., "mandarin-1305") functionality The shorthand for the functionality tested by the test case (e.g, "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations. test_case The test case text. label_gold The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label. target_ident Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages. ref_case_id For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case. ref_templ_id The equivalent to ref_case_id, but for template IDs. 
templ_id The ID of the template from which the test case was generated. case_templ The template from which the test case was generated (where applicable). gender_male and gender_female For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ. label_annotated A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']"). label_annotated_maj The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts. disagreement_in_case True if label_annotated_maj does not match label_gold for the entry. disagreement_in_template True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC.
[ "# Dataset Card for Multilingual HateCheck", "## Dataset Description\n\nMultilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.\nFor each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.\nThis allows for targeted diagnostic insights into model performance.\n\nFor more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!\n- Paper: Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. URL\n- Repository: URL\n- Point of Contact: paul@URL", "## Dataset Structure\n\nThe csv format mostly matches the original HateCheck data, with some adjustments for specific languages.\n\nmhc_case_id\nThe test case ID that is unique to each test case across languages (e.g., \"mandarin-1305\")\n\nfunctionality\nThe shorthand for the functionality tested by the test case (e.g, \"target_obj_nh\"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.\n\ntest_case\nThe test case text.\n\nlabel_gold\nThe gold standard label (\"hateful\" or \"non-hateful\") of the test case. All test cases within a given functionality have the same gold standard label.\n\ntarget_ident\nWhere applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.\n\nref_case_id\nFor hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. 
For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.\n\nref_templ_id\nThe equivalent to ref_case_id, but for template IDs.\n\ntempl_id\nThe ID of the template from which the test case was generated.\n\ncase_templ\nThe template from which the test case was generated (where applicable).\n\ngender_male and gender_female\nFor gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.\n\nlabel_annotated\nA list of labels given by the three annotators who reviewed the test case (e.g., \"['hateful', 'hateful', 'hateful']\").\n\nlabel_annotated_maj\nThe majority vote of the three annotators (e.g., \"hateful\"). In some cases this differs from the gold label given by our language experts.\n\ndisagreement_in_case\nTrue if label_annotated_maj does not match label_gold for the entry.\n\ndisagreement_in_template\nTrue if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC." ]
[ "TAGS\n#task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Hindi #license-cc-by-4.0 #arxiv-2206.09917 #region-us \n", "# Dataset Card for Multilingual HateCheck", "## Dataset Description\n\nMultilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.\nFor each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.\nThis allows for targeted diagnostic insights into model performance.\n\nFor more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!\n- Paper: Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. URL\n- Repository: URL\n- Point of Contact: paul@URL", "## Dataset Structure\n\nThe csv format mostly matches the original HateCheck data, with some adjustments for specific languages.\n\nmhc_case_id\nThe test case ID that is unique to each test case across languages (e.g., \"mandarin-1305\")\n\nfunctionality\nThe shorthand for the functionality tested by the test case (e.g, \"target_obj_nh\"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.\n\ntest_case\nThe test case text.\n\nlabel_gold\nThe gold standard label (\"hateful\" or \"non-hateful\") of the test case. All test cases within a given functionality have the same gold standard label.\n\ntarget_ident\nWhere applicable, the protected group that is targeted or referenced in the test case. 
All HateChecks cover seven target groups, but their composition varies across languages.\n\nref_case_id\nFor hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.\n\nref_templ_id\nThe equivalent to ref_case_id, but for template IDs.\n\ntempl_id\nThe ID of the template from which the test case was generated.\n\ncase_templ\nThe template from which the test case was generated (where applicable).\n\ngender_male and gender_female\nFor gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.\n\nlabel_annotated\nA list of labels given by the three annotators who reviewed the test case (e.g., \"['hateful', 'hateful', 'hateful']\").\n\nlabel_annotated_maj\nThe majority vote of the three annotators (e.g., \"hateful\"). In some cases this differs from the gold label given by our language experts.\n\ndisagreement_in_case\nTrue if label_annotated_maj does not match label_gold for the entry.\n\ndisagreement_in_template\nTrue if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC." ]
5229a5cc475f36c08d03ca52f0ccb005705e60d2
# Dataset Card for Multilingual HateCheck ## Dataset Description Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish. For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate. This allows for targeted diagnostic insights into model performance. For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work! - **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917 - **Repository:** https://github.com/rewire-online/multilingual-hatecheck - **Point of Contact:** [email protected] ## Dataset Structure The csv format mostly matches the original HateCheck data, with some adjustments for specific languages. **mhc_case_id** The test case ID that is unique to each test case across languages (e.g., "mandarin-1305") **functionality** The shorthand for the functionality tested by the test case (e.g., "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations. **test_case** The test case text. **label_gold** The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label. **target_ident** Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages. **ref_case_id** For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case.
For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case. **ref_templ_id** The equivalent to ref_case_id, but for template IDs. **templ_id** The ID of the template from which the test case was generated. **case_templ** The template from which the test case was generated (where applicable). **gender_male** and **gender_female** For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ. **label_annotated** A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']"). **label_annotated_maj** The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts. **disagreement_in_case** True if label_annotated_maj does not match label_gold for the entry. **disagreement_in_template** True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC.
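The disagreement columns described above fit together in a simple way, which the sketch below illustrates. The field names match the MHC columns; every row value and case ID is made up for illustration, not taken from the real dataset.

```python
# Hypothetical rows following the column descriptions in the card.
rows = [
    {"mhc_case_id": "german-1", "templ_id": "t1",
     "label_gold": "hateful", "label_annotated_maj": "hateful",
     "disagreement_in_case": False, "disagreement_in_template": False},
    {"mhc_case_id": "german-2", "templ_id": "t2",
     "label_gold": "hateful", "label_annotated_maj": "non-hateful",
     "disagreement_in_case": True, "disagreement_in_template": True},
    {"mhc_case_id": "german-3", "templ_id": "t2",
     "label_gold": "non-hateful", "label_annotated_maj": "non-hateful",
     "disagreement_in_case": False, "disagreement_in_template": True},
]

# disagreement_in_case is True exactly when the annotator majority
# vote differs from the expert gold label.
for row in rows:
    assert row["disagreement_in_case"] == (
        row["label_annotated_maj"] != row["label_gold"]
    )

# Drop every template containing at least one disagreeing case, as the
# card suggests, to obtain a stricter subset of the test suite.
kept = [row["mhc_case_id"] for row in rows
        if not row["disagreement_in_template"]]
print(kept)  # ['german-1']
```

Note that german-3 is excluded even though its own annotations agree, because it shares template t2 with a disagreeing case.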
Paul/hatecheck-german
[ "task_categories:text-classification", "task_ids:hate-speech-detection", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:de", "license:cc-by-4.0", "arxiv:2206.09917", "region:us" ]
2022-07-05T09:36:48+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["expert-generated"], "language": ["de"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["hate-speech-detection"], "pretty_name": "German HateCheck"}
2022-07-05T09:38:52+00:00
[ "2206.09917" ]
[ "de" ]